public inbox for bitcoindev@googlegroups.com
* [bitcoindev] Mining pools, stratumv2 and oblivious shares
@ 2024-07-23 15:02 Anthony Towns
  2024-07-23 18:44 ` Luke Dashjr
  2024-08-13 13:57 ` Matt Corallo
  0 siblings, 2 replies; 8+ messages in thread
From: Anthony Towns @ 2024-07-23 15:02 UTC (permalink / raw)
  To: bitcoindev

Hi *,

1. stratumv2 templates via client-push
======================================

The stratumv2 design document suggests:

> Use a separate communication channel for transaction selection so
> that it does not have a performance impact on the main mining/share
> communication, as well as can be run in three modes - disabled (i.e.pool
> does not yet support client work selection, to provide an easier
> transition from Stratum v1), client-push (to maximally utilize the
> client’s potential block-receive-latency differences from the pool),
> and client-negotiated (for pools worried about the potential of clients
> generating invalid block templates).

 -- https://stratumprotocol.org/specification/02-Design-Goals/

To me, the "client-push" approach (vs the client-negotiated approach)
seems somewhat unworkable (at least outside of solo mining).

In particular, if you're running a pool and have many clients generating
their own templates, and you aren't doing KYC on those clients, and aren't
fully validating every proposed share, then it becomes very easy to do a
"block withholding attack" [0] -- just locally create invalid blocks,
submit shares as normal, receive payouts as normal for partial shares
because the shares aren't being validated, and if you happen to find
an actual block, the pool loses out because none of your blocks were
actually valid. (You can trivially make your block invalid by simply
increasing the pool payout by 1 sat above the correct value)

Validating arbitrary attacker-submitted templates seems likely to be
expensive, as they can produce transactions which aren't already in your
mempool, are relatively expensive to validate, and potentially conflict
with transactions that other clients are selecting, causing you to have
to churn txs in and out of your mempool.

Particularly if an attacker could have an array of low hashpower workers
all submitting different templates to a server, it seems like it would
be relatively easy to overload any small array of template-validator
nodes, given a pure client-push model. In which case client-push would
only really make sense for pure solo-mining, where you're running your
own stratumv2 server and your own miners and have full control (and thus
trust) from end to end.

Does that make sense, or am I missing something?

I think a negotiated approach could look close to what Ocean's template
selection looks like today: that is the pool operator runs some nodes with
various relay policies that generate blocks, and perhaps also allows for
externally submitted templates that it validates. Then clients request
work according to their preferences, perhaps they specify that they
prefer the "no-ordinals template", or perhaps "whatever template has the
highest payout (after pool fees)". The pool tracks the various templates
it's offered in order to validate shares and to broadcast valid blocks
once one is found.

A negotiated approach also seems more amenable to including out-of-band
payments, such as mempool.space's accelerator, whether those payments
are distributed to miners or kept by the pool.

This could perhaps also be closer to the ethereum model of
proposer/builder separation [7], which may provide a modest boost
to MEV/MEVil resistance -- that is if there is MEV available via some
clever template construction, specialist template builders can construct
specialised templates paying slightly higher fees and submit those to
mining pools, perhaps splitting any excess profits between the pool and
template constructor. Providing a way for external parties to submit
high fee templates to pools (rate-limited by a KYC-free bond payment
over LN perhaps) seems like it would help limit the chances of that
knowledge being kept as a trade secret to one pool or mining farm, which
could then use its excess profits to become dominant, and then gain too
much ability to censor txs. Having pools publish the full templates for
auditing purposes would allow others to easily incrementally improve on
clever templates by adding any high-fee censored transactions back in.

2. block withholding and oblivious shares
=========================================

Anyway, as suggested by the subject-line and the reference to [0],
I'm still a bit concerned by block withholding attacks -- where an
attacker finds decentralised pools and throws hashrate at them to make
them unprofitable by only sending the 99.9% of shares that aren't valid
blocks, while withholding/discarding the 0.1% of shares that would be
a valid block. The result being that decentralised non-KYC pools are
less profitable and are replaced by centralised KYC pools that can
ban attackers, and we end up with most hashrate being in KYC pools,
where it's easier for the pools to censor txs, or miners to be found
and threatened or have their hardware confiscated. (See also [6])

If it were reasonable to mine blocks purely via the tx merkle root,
with only the pool knowing the coinbase or transaction set at the time
of mining, I think it could be plausible to change the PoW algorithm to
enable oblivious shares: where miners can't tell if a given valid share
corresponds to a valid block or not, but pools and nodes can still easily
validate work from just the block header.

In particular I think an approach like this could work (sketched in code
after the list):

  * take the top n-bits of the prev hash in the header, which are
    currently required to be all zeroes due to `powLimit` (n=0 for regtest,
    n<=8 in general)
  * stop requiring these bits to be zero, instead interpret them as
    `nBitsShareShift`, from 0 up to 255
  * when nBitsShareShift > 0, check that the share hash is less than or
    equal to (2**256 - 1) >> nBitsShareShift, where the share hash is
    calculated as
      sha256d( nVersion, hashPrevBlock, sha256d( hashMerkleRoot ),
               nTime, nBits, nNonce )
  * check that the normal block hash is not greater than
      FromCompact(nBits) << nBitsShareShift
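
To make those checks concrete, here's a rough Python sketch (my naming,
not a spec): `share_shift` would be read from the top bits of
hashPrevBlock and `block_target` is FromCompact(nBits), both passed in
directly here, with serialization endianness details glossed over.

  import hashlib

  def sha256d(b: bytes) -> bytes:
      return hashlib.sha256(hashlib.sha256(b).digest()).digest()

  def header(nVersion, hashPrevBlock, merkle_field, nTime, nBits, nNonce):
      # standard 80-byte header layout, with merkle_field occupying the
      # hashMerkleRoot slot
      return (nVersion.to_bytes(4, "little") + hashPrevBlock + merkle_field
              + nTime.to_bytes(4, "little") + nBits.to_bytes(4, "little")
              + nNonce.to_bytes(4, "little"))

  def uint256(h: bytes) -> int:
      # hashes compared as little-endian integers, per the usual convention
      return int.from_bytes(h, "little")

  def check_work(nVersion, hashPrevBlock, hashMerkleRoot, nTime, nBits,
                 nNonce, share_shift, block_target):
      # the share hash commits to sha256d(hashMerkleRoot) rather than
      # the merkle root itself; the block hash is calculated as usual
      share = sha256d(header(nVersion, hashPrevBlock,
                             sha256d(hashMerkleRoot),
                             nTime, nBits, nNonce))
      block = sha256d(header(nVersion, hashPrevBlock, hashMerkleRoot,
                             nTime, nBits, nNonce))
      share_ok = uint256(share) <= (2**256 - 1) >> share_shift
      block_ok = uint256(block) <= block_target << share_shift
      return share_ok, block_ok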

Note that this would be a light-client visible hard fork -- any blocks
that set nBitsShareShift to a non-zero value would be seen as invalid
by existing software that checks header chains.

It's also possible to take a light-client invisible approach by decreasing
the value of nBits, but providing an `nBitsBump` that new nodes validate
but existing nodes do not (see [1] eg). This has the drawback that it
would become easy to fool old nodes/light clients into following the
wrong chain, as they would not be fully validating proof-of-work, which
already breaks the usual expectations of a soft fork (see [2] eg). Because
of that, a hard fork approach seems safer here to me. YMMV, obviously.

The above approach requires two additional sha256d operations when
nodes or light clients are validating header work, but no additional
data, which seems reasonable, and should require no changes to machines
capable of header-only mining -- you just use sha256d(hashMerkleRoot)
instead of hashMerkleRoot directly.

In this scenario, pools are giving their clients sha256d(hashMerkleRoot)
rather than hashMerkleRoot or a transaction tree directly, and
`nBitsShareShift` is set based on the share difficulty. Pools then
check the share is valid, and additionally check whether the share has
sufficient work to be a valid block, which they are able to do because
unlike the miner, they can calculate the normal block hash.

The above assumes that the pool has full control over the coinbase
and transaction selection, and that the miner/client is not able to
reconstruct all that data from its mining job, so this would be another
reason why a pool would only support a client-negotiated approach for
templates, not a client-push approach. Note that miners/clients could
still *audit* the work they've been given if the pool makes the full
transaction set (including coinbase) for a template available after each
template expires.

Some simple numbers: if a miner with control of 10% hashrate decided
to attack a decentralised non-KYC pool with 30% hashrate, then they
could apply 3.3% hashrate towards a block withholding attack, reducing
the victim's income to 90% (30% hashrate finding blocks vs 33% hashrate
getting payouts) while only reducing their own income to 96.7% (6.7%
hashrate at 100% payout, 3.3% at 90%). If they decided to attack a pool
with 50% hashrate, they would need to provide 5.55% of total hashrate to
reduce the victim's income to 90%, which would reduce their own income
to 94.45% (4.45% at 100%, 5.55% at 90%).
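
Those figures can be sanity-checked with a few lines of Python (this
model deliberately ignores difficulty adjustment, matching the numbers
above; all inputs are fractions of global hashrate):

  def withholding_income(attacker_total, victim_pool, diverted):
      # the pool finds victim_pool worth of blocks but pays out over
      # victim_pool + diverted worth of shares
      payout_rate = victim_pool / (victim_pool + diverted)
      victim = payout_rate
      attacker = ((attacker_total - diverted)
                  + diverted * payout_rate) / attacker_total
      return victim, attacker

  print(withholding_income(0.10, 0.30, 0.033))   # ~(0.90, 0.967)
  print(withholding_income(0.10, 0.50, 0.0555))  # ~(0.90, 0.9445)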

I've seen suggestions that block withholding could be used as a way
to attack pools that gain >50% hashpower, but as far as I can see it's
only effective against decentralised, non-KYC pools, and more effective
against small pools than large ones, which seems more or less exactly
backwards to what we'd actually want...

Some previous discussion of block withholding and KYC is at [3] [4]
[5].

3. extra header nonce space
===========================

Given BIP320, you get 2**48 values to attempt per second of nTime,
which is about 281 TH/s, or enough workspace to satisfy a single S21 XP
at 270 TH/s. Maybe that's okay, but maybe it's not very much.
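
For reference, the arithmetic behind that figure:

  >>> 2 ** (32 + 16) / 1e12   # 32-bit nNonce + 16 BIP320 version bits
  281.474976710656            # header hashes per second of nTime, in TH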

Since the above would already be a (light-client visible) hard fork, it
could make sense to also provide extra nonce space that doesn't require
touching the coinbase transaction (since we're relying on miners not
having access to the contents of the tx merkle tree).

One approach would be to use the leading zeroes in hashPrevBlock as
extra nonce space (eg, [8]). That's particularly appealing since it
scales exactly with difficulty -- the higher the difficulty, the more
nonce space you need, but the more required zeroes you have. However
the idea above unfortunately reduces the available number of zero bits
in the previous block hash by that block's choice of nBitsShareShift,
which may take a relatively large value in order to reduce traffic with
the pool. So sadly I don't think that approach would really work.

Another approach that could work might be to add perhaps 20 bytes of
extra nonce to the header, and calculate the block hashes as follows
(with a sketch of the inner commitment after the two definitions):

  normal hash -> sha256d(
       nVersion, hashPrevBlock,
       sha256d( merkleRoot, TagHash_BIPxxx(extraNonce) ),
       nTime, nBits, nNonce
  )

and

  share hash -> sha256d(
       nVersion, hashPrevBlock,
       sha256d( sha256d(merkleRoot), TagHash_BIPxxx(extraNonce) ),
       nTime, nBits, nNonce
  )
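
As a rough sketch of that inner commitment (reusing sha256d() from the
sketch in section 2; the tag string is a placeholder, not a defined
BIP tag):

  def tagged_hash(tag: str, msg: bytes) -> bytes:
      # BIP340-style tagged hash
      t = hashlib.sha256(tag.encode()).digest()
      return hashlib.sha256(t + t + msg).digest()

  def merkle_field(merkle_root: bytes, extra_nonce: bytes,
                   for_share: bool) -> bytes:
      # the 32 bytes placed in the header's hashMerkleRoot slot; the
      # share hash uses sha256d(merkleRoot), the block hash the root itself
      root = sha256d(merkle_root) if for_share else merkle_root
      return sha256d(root + tagged_hash("BIPxxx/extranonce", extra_nonce))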

That should still be compatible with existing mining hardware. Though it
would mean nodes are calculating 5x sha256d and 1x taghash to validate
a block header's PoW, rather than just 1x sha256d (now) or 3x sha256d
(above).

This approach would also not require changes in how light clients verify
merkle proofs of transaction inclusion in blocks, I believe. (Using a
bip340-esque TagHash for the extraNonce instead of nothing or sha256d
hopefully prevents hiding fake transactions in that portion)

4. plausibility
===============

It's not clear to me how serious a problem block withholding is. It
seems like it would explain why we have few pools, why they're all
pretty centralised, and why major ones care about KYC, but there are
plenty of other explanations for both those things. So even if this
was an easy fix, it's not clear to me how much sense it makes to think
about. And beyond that, it's a consensus change (ouch), a hard fork
(ouch, ouch) and one that requires every light client to upgrade (ouch,
ouch, ouch!). However, all of that is still just code, and none of those
things are impossible to achieve, if they're worth the effort. I would
expect a multiyear deployment timeline even once the code was written
and it was widely accepted as a good idea, though.

If this is a serious problem for the privacy and decentralisation of
mining, and a potential threat to bitcoin's censorship resistance,
it seems to me like it would be worth the effort.

5. conclusions?
===============

Anyway, I wanted to write my thoughts on this down somewhere they could
be critiqued. Particularly the idea that everyone building their own
blocks for public pools running stratumv2 doesn't actually make that much
sense, as far as I can see.

I think the share approach in section 2 and the extranonce approach in
section 3 are slightly better than previous proposed approaches I've seen,
so are worth having written down somewhere.

Cheers,
aj

[0] https://bitcoil.co.il/pool_analysis.pdf

[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012051.html

[2] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012443.html

[3] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012060.html

[4] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012111.html

[5] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012069.html

[6] https://bitcoinops.org/en/topics/pooled-mining/#block-withholding-attacks

[7] https://ethereum.org/en/roadmap/pbs/

[8] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015386.html




* Re: [bitcoindev] Mining pools, stratumv2 and oblivious shares
  2024-07-23 15:02 [bitcoindev] Mining pools, stratumv2 and oblivious shares Anthony Towns
@ 2024-07-23 18:44 ` Luke Dashjr
  2024-07-31 18:00   ` Anthony Towns
  2024-08-13 13:57 ` Matt Corallo
  1 sibling, 1 reply; 8+ messages in thread
From: Luke Dashjr @ 2024-07-23 18:44 UTC (permalink / raw)
  To: bitcoindev, aj

Block withholding is already trivial, and annoying to detect. 
Decentralised mining / invalid blocks doesn't increase the risk at all - 
it only makes it easier to detect if done that way.

While it isn't made any worse, fixing it is still a good idea. 
Unfortunately, I think your proposal still requires checking every 
template, and only fixes block withholding for centralised mining. It 
would risk *creating* the very "centralised advantage over 
decentralised" problem you're trying to mitigate.

Given that block withholding is trivial regardless of approach, and 
there's no advantage to taking the invalid-block approach, the risk of 
invalid blocks would stem from buggy nodes or softfork 
disagreements/lagging upgrades. That risk can be largely addressed by 
spot-checking random templates.

Another possibility would be to use zero-knowledge proofs of block 
validity, but I don't know if we're there yet. At that point, I think 
the hardfork to solve the last remaining avenue would be a good idea. 
(Bonus points if someone can think up a way to do it without a 
centrally-held secret/server...)

Luke


On 7/23/24 11:02 AM, Anthony Towns wrote:
> [snip]




* Re: [bitcoindev] Mining pools, stratumv2 and oblivious shares
  2024-07-23 18:44 ` Luke Dashjr
@ 2024-07-31 18:00   ` Anthony Towns
  0 siblings, 0 replies; 8+ messages in thread
From: Anthony Towns @ 2024-07-31 18:00 UTC (permalink / raw)
  To: Luke Dashjr; +Cc: bitcoindev

I think "decentralised" isn't quite a specific enough term here;
this scheme requires that the pool has a "trusted coordinator", who
can be relied upon to (a) construct sensible templates for people to
mine against, (b) reveal the secret they included in the template when
a share has enough work to be a valid block, (c) not reveal the secret
prior to work being submitted, ie, don't collaborate with people attacking
the pool.

But pools generally require trusted coordinators anyway:

 a) to hold custody over funds until miner income reaches the on-chain
    payment threshold
 b) to hold the funds prior to paying shares rewards over lightning
 c) to act as a validating proxy so that each participant in the pool doesn't
    have to check that every other share submitted to the pool had valid work

If we want hashpower to be very widely distributed, those coordinators
seem pretty important/essential; it makes for generally low payouts (ie,
rarely reaching the onchain threshold), many payouts (ie, if you don't
want to spam the chain, better pay them via lightning etc), and a lot
of shares to check.

I can see it making sense to have a coordinator-free pool if you have
a high minimum hashrate requirement to be a member, perhaps 0.1%; at
that point some of the members might themselves be pools with a central
coordinator, etc. With at most ~1000 members, though, it's not clear to
me that a pool like that wouldn't be better off doing identity-based
membership and some form of federated governance and just banning
attackers, rather than complex crypto protocols and game theory things.

I think if you wanted to fix the problem for a pool-of-pools that
does have a trusted coordinator, you'd need to do three PoW tests
instead of two: work X from the miner demonstrates a valid pool share,
additional work Y when you add in the pool's secret demonstrates
a valid pool-of-pools share, and additional work Z when you add in
the pool-of-pool's secret demonstrates a valid block. But nodes also
need to calculate each of X, Y and Z and check X+Y+Z matches the chain
difficulty. If you skip out Y, the miners can withhold from the pool;
if you skip out Z, the pool can withhold from the pool-of-pools. Not
really convinced that would be worthwhile.
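
One plausible reading of that nested check, as a Python sketch (work is
measured in leading zero bits for simplicity; names are illustrative):

  import hashlib

  def sha256d(b: bytes) -> bytes:
      return hashlib.sha256(hashlib.sha256(b).digest()).digest()

  def zero_bits(h: bytes) -> int:
      # work measured as leading zero bits; real targets are finer-grained
      return 256 - int.from_bytes(h, "big").bit_length()

  def check_nested(share: bytes, pool_secret: bytes,
                   superpool_secret: bytes, X: int, Y: int, Z: int) -> bool:
      # the miner alone can only test X; the pool's secret gates Y; the
      # pool-of-pools' secret gates Z; consensus would additionally
      # require X + Y + Z to match the chain difficulty
      return (zero_bits(sha256d(share)) >= X
              and zero_bits(sha256d(share + pool_secret)) >= Y
              and zero_bits(sha256d(share + pool_secret
                                    + superpool_secret)) >= Z)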

So I think making things better for pools with coordinators is worthwhile,
anyway, even in an ideal world where there's a successful decentralised
pool. I think this is the case even if the pool does some light KYC: if
you're accepting shares from a mass of "lottery miners" it's probably
easy for an attacker to bypass your KYC via identity theft etc, and
still hurt you.

Conversely, even in the absence of a decentralised pool, I don't think
making it easier for pools that do have a coordinator to prevent block
withholding is a bad thing: we're already in a situation where most
hashpower goes to a few heavily-KYC pools that control the templates
being worked on.  Making it easier to setup a small non-KYC pools that
can compete with these seems a step forward; and even if one of those
pools sees huge success and becomes the new dominant pool, despite it
being easier to compete with that pool, and uses that weight to dictate
what transactions are included in the majority of block templates,
that's not even any worse than what we have now...



Anyway, I spent some time thinking about the problem for decentralised
pools, where you don't have a coordinator and can't/don't keep secrets
from miners/attackers. I did get a bit confused because [0] (expanding
on "Luke-Jr's two-stage target mechanism" from [1]) seems to describe
a very different proposal to the June 2012 description in [2].

[0] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012069.html
[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012046.html
[2] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2012-June/001506.html

Unfortunately, I don't think approaches like that described in [2]
actually do what we want. My understanding of that proposal is that it
would go something like this:

 * a block header is changed to include:
    - version, nbits, timestamp, prevhash, merkleroot, nonce (as currently)
    - next_version, next_timestamp, next_merkleroot, next_nonce (additional)
   which is the combination of a share of the current block and a share for
   the following block

 * share work is based on (version, nbits, timestamp, prevprevhash,
   merkleroot, nonce) ie, when you're working on a share, you're only
   committing to the grandparent block, not the parent block

 * given a block header, you can check the share work for the current block, and
   the "next" block. both must have sufficient work

 * **additionally**, the hash of the entire block header must have work X

 * the total difficulty of a block is then the share work plus X, which
   must satisfy nbits.

 * (the next block's transaction tree is not expanded as part of consensus
   unless it happens to actually be the next block, though the pool
   members might still verify the details)

In that scenario, miners who want to find a block at height N+1 probably
take three steps (sketched in code after the list):

  a) mine a bunch of shares at height N+1
  b) mine a bunch of shares at height N+2
  c) pair each N+1 share with each N+2 share until they find a pair that
     has work X, where the share work plus X matches the target difficulty
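
A sketch of step (c), treating "combine two shares" as simple
concatenation for illustration:

  import hashlib
  from itertools import product

  def sha256d(b: bytes) -> bytes:
      return hashlib.sha256(hashlib.sha256(b).digest()).digest()

  # K shares at each height give K*K candidate pairs to test for the
  # extra work X -- the source of the K^2 advantage discussed below
  def find_pair(shares_n1, shares_n2, target_x: int):
      for a, b in product(shares_n1, shares_n2):
          if int.from_bytes(sha256d(a + b), "big") <= target_x:
              return a, b
      return None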

For simplicity, imagine that every block with an odd height is empty
-- otherwise it gets complicated to ensure there aren't double spends
between some blocks at height N+1 and some blocks at height N+2.

Note that once you've found block N+1, you'll have already done most of the
work for step (a) at height N+2.

The problem is that for miners/pools that don't publish their shares
to the world prior to finding a block, larger miners get an advantage:
having found K shares in each of steps (a) and (b), they have K^2 chances
of having a pair that has work X in step (c), and thus a valid block. So
a miner with twice the hashrate has four times the chances of finding
a block in any given time period.

I'd thought of a similar approach, where instead of taking a share for
the current block and a share for the next block, you simply pair two
shares for the current block. That seems a bit simpler, in that you don't
have to worry about conflicting transaction sets, because once a block is
found you simply discard all the existing shares, and you also don't need
to worry about balancing the shares you have in step (a) and (b). But it
still has the same K^2 problem benefiting large miners.

I think in that scenario, the optimal pool size (in one sense, anyway)
is about 70% (sqrt(0.5)?) -- you have h^2/(h^2 + (1-h)^2) odds of getting
the next block, which amounts to about 85%, while the remaining 30%
of hashrate only gets the remaining 15% of blocks. If you both have the
same expenses, and they're running just break-even, then your expenses
are paid by the first 35% of blocks, and the remaining 50% of blocks
that you mine are pure profit. (If you instead accepted everyone into
your pool, everyone would be getting the same profits, encouraging more
people to buy mining hardware, driving up the difficulty; in this case,
anyone who bought hardware would have to join the competing pool, and
given they had the same expenses, would not end up making a profit,
so you're effectively making excess profits by keeping Bitcoin's total
hashrate lower than what it would be with today's system)

But the main issue is that if the dominant pool is "evil" (it's
centralised, censors txs, and charges high fees eg), a small competing
pool starts out with a huge disadvantage -- if the 30% of competing
hashrate was split into two pools of 15% instead, they'd get 4% of blocks
each, eg. That's a much worse situation than today, and I don't think
it's salvageable.
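
The share-weighting math behind those percentages, as a sketch: each
pool can only pair its own unpublished shares, so hashrate fraction h
competes with effective weight h^2.

  def block_odds(pools):
      sq = [h * h for h in pools]
      return [x / sum(sq) for x in sq]

  print(block_odds([0.5 ** 0.5, 1 - 0.5 ** 0.5]))  # ~[0.85, 0.15]
  print(block_odds([0.70, 0.15, 0.15]))            # ~[0.92, 0.04, 0.04]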

For a decentralised pool, you also have the problem that this only fixes
half the withholding attack -- in the decentralised case, the attacker
can be assumed to know all the shares that have been found so far,
so for all the possible blocks that would include a share found by the
attacker, if the later of the two shares in that block was found by the
attacker, they can simply withhold that share. Depending on how much of
the pool's hashrate belongs to the attacker, that's somewhere above 50%
of the blocks that would include a share from the attacker.

I'd guess you could extend the approach further, so that rather than
a block being made up of 2 shares, you have it be made up of n shares,
so that effectively the additional O(H**N) benefit of getting additional
hashrate outweighs the cost of the last share getting withheld. That's
even worse at hurting small pools, but I think at that point you're
effectively switching from Nakamoto consensus to more of a DAG-chain
design anyway, so perhaps it's fair to design for your "pool" being 100%
of miners.

In some sense I think this is in conflict with the "progress-free" [3]
nature of bitcoin mining. If you want the chance of finding a block to
be proportional to hashrate, you want it to be independent of how much
previous work has been done, but in that case you can't make one share's
chance at being a valid block depend on other shares...

[3] eg https://bitcoin.stackexchange.com/a/107401/30971

So I'm pretty disappointed to conclude that I'm not seeing anywhere here
where progress could usefully be made.



The issue with the "invalid blocks" approach is that if you're not
checking template validity but still rewarding payouts for submitted
shares, then there is a trivial approach to the block withholding attack:
just mine invalid blocks, and don't withhold any of them. At that point
it doesn't matter how clever a scheme you have to make it difficult
to tell which shares will be a valid block; they just submit all of
them, knowing none of them will be a valid block. If you're associating
hashrate with identities, you can still ban them when you find they've
sent an invalid block, but if that's good enough, you can already ban
identities that fall below some threshold of unluckiness.



As far as zero-knowledge proofs making template validation easy goes;
I think that makes sense, but we're a long way off. First, I think you'd
need to have utreexo hashes widely available to get a handle on the utxo
set at all; and second, I think the state of the art is zerosync which
is still in the "parity with `-assumevalid=TIP`" phase, and beyond that
generating proofs is fairly compute intensive, so not something that
you'd want to have as a blocking step in announcing a share to your pool.

Cheers,
aj

On Tue, Jul 23, 2024 at 02:44:46PM -0400, Luke Dashjr wrote:
> Block withholding is already trivial, and annoying to detect. Decentralised
> mining / invalid blocks doesn't increase the risk at all - it only makes it
> easier to detect if done that way.
>
> While it isn't made any worse, fixing it is still a good idea.
> Unfortunately, I think your proposal still requires checking every template,
> and only fixes block withholding for centralised mining. It would risk
> *creating* the very "centralised advantage over decentralised" problem
> you're trying to mitigate.
>
> Given that block withholding is trivial regardless of approach, and there's
> no advantage to taking the invalid-block approach, the risk of invalid
> blocks would stem from buggy nodes or softfork disagreements/lagging
> upgrades. That risk can be largely addressed by spot-checking random
> templates.
>
> Another possibility would be to use zero-knowledge proofs of block validity,
> but I don't know if we're there yet. At that point, I think the hardfork to
> solve the last remaining avenue would be a good idea. (Bonus points if
> someone can think up a way to do it without a centrally-held
> secret/server...)




* Re: [bitcoindev] Mining pools, stratumv2 and oblivious shares
  2024-07-23 15:02 [bitcoindev] Mining pools, stratumv2 and oblivious shares Anthony Towns
  2024-07-23 18:44 ` Luke Dashjr
@ 2024-08-13 13:57 ` Matt Corallo
  2024-08-16  2:10   ` Anthony Towns
  1 sibling, 1 reply; 8+ messages in thread
From: Matt Corallo @ 2024-08-13 13:57 UTC (permalink / raw)
  To: Anthony Towns, bitcoindev

Yes, block withholding attacks are easy to do. This has nothing to do with client work selection or
not, it's just... a fact of life with Bitcoin pooled mining.

Doing block withholding by creating invalid templates I'd really call "shitty block withholding"
cause at least the pool *can* detect it (with additional CPU power), whereas with traditional block
withholding attacks they can only use statistical analysis.

In fact, any statistical analysis you can do to detect traditional block withholding attacks also 
applies equally to any "shitty block withholding" attacks - the outcome (no valid blocks from a 
miner, but shares) is the same, so any relevant defenses apply equally.

Adding more explicit "negotiation" to Stratum V2 work selection would defeat the purpose - if the 
pool is able to tell a miner not to work on some work it wants to, you might as well just have the 
pool pick the work. The only way any kind of centralized pooling with custom work selection adds any 
value to Bitcoin's decentralization is if the clients insist on mining the work they want to - 
whether on the pool or solo mining if the pool doesn't want it.

Matt

On 7/23/24 11:02 AM, Anthony Towns wrote:
> [snip]


* Re: [bitcoindev] Mining pools, stratumv2 and oblivious shares
  2024-08-13 13:57 ` Matt Corallo
@ 2024-08-16  2:10   ` Anthony Towns
  2024-08-21 14:28     ` Matt Corallo
  0 siblings, 1 reply; 8+ messages in thread
From: Anthony Towns @ 2024-08-16  2:10 UTC (permalink / raw)
  To: Matt Corallo; +Cc: bitcoindev

On Tue, Aug 13, 2024 at 09:57:06AM -0400, Matt Corallo wrote:
> Yes, block withholding attacks are easy to do. This has nothing to do with
> client work selection or not, it's just...a fact of life with Bitcoin pooled
> mining.
>
> Doing block withholding by creating invalid templates I'd really call
> "shitty block withholding"

Hmm, I thought "BlueMatt" was referring to hair colour, not language
preference.

> cause at least the pool *can* detect it (with
> additional CPU power), vs traditional block withholding attacks they can
> only use statistical analysis.
>
> In fact, any statistical analysis you can do to detect traditional block
> withholding attacks also applies equally to any "shitty block withholding"
> attacks - the outcome (no valid blocks from a miner, but shares) is the
> same, so any relevant defenses apply equally.

The only way you can do statistical analyses is if miners (including
attackers) can be assigned a persistent identity with reasonable accuracy,
and you restrict your pool to accepting individual miners with a large
enough hashrate that they're expected to find a valid block relatively
frequently.

If individuals aren't required to have a large hashrate, an attacker can
just pretend to be multiple miners with small hashrate, all of whom are
unlucky, and your statistical result is just "my small hashrate miners
are in aggregate more unlucky than I expect", without providing a way to
distinguish the attacker's small hashrate identities from honest small
hashrate miners.

If "relatively frequently" is a year or less, that means only about 50k
companies/individuals globally can potentially be miners participating
in a pool, and far fewer if hashpower is not evenly distributed amongst
miners.

If you're relying on some combination of burdensome KYC, statistics and
restricting pool membership to large hashrate miners, then that's fine:
block withholding is fairly easily addressed. What I'm interested in is
a pool that doesn't do those things: for example, a world where 5% of
hashrate is from 60M BitAxe devices owned by 10M people, say. Solo-mining,
each person might expect to find a block once every 4000 years; with
pooled mining, they'd expect to be paid perhaps 80c per day. But in that
scenario, a pool with 30% hashrate running a block withholding attack
using 1% of hashrate increases their total reward (from 30% to 30.1%)
while cutting the honest bitaxe miners' rewards substantially (from 5%
to 4.2%, or 80c to 67c).
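
(Sanity-checking those numbers with a quick python sketch, assuming
difficulty fully adjusts and the pool pays out in strict proportion
to shares submitted:)

    pool, attack, victim = 0.30, 0.01, 0.05   # fractions of global hashrate
    effective = 1 - attack                    # withheld winners never get published
    victim_blocks = victim / effective        # victim pool finds ~5.05% of blocks
    honest_frac = victim / (victim + attack)  # honest share of the victim pool
    print(victim_blocks * honest_frac)        # ~0.042: honest reward, 5% -> 4.2%
    print((pool - attack) / effective +
          victim_blocks * (1 - honest_frac))  # ~0.301: attacker, 30% -> 30.1%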

And in that scenario, it seems relatively easy for the attacker to
pretend to be 2M different users, *unless* you're doing real KYC on
every bitaxe miner at signup, in which case whoever you're outsourcing
KYC to becomes your centralising factor ("whoops, your social credit
score is too low to be allowed to be a bitcoin miner").

If the pool is not doing KYC, but does have some soft/hard-forked solution
like oblivious shares, and allows users to choose their block contents,
that's the point at which the invalid block approach comes in: you can
still perform effectively the exact same attack, simply by having your 1%
hashrate mining invalid shares rather than withholding the shares that
would be valid blocks. If the pool isn't validating every share, then
you'll only be detected when your share did turn out to be valid PoW, but
once you're detected you simply kill that identity and spin up a new one,
and because the pool isn't doing KYC, spinning up a new identity is easy.

As far as I can see that means your pool is either:

 a) heavily KYCed
 b) limited to high-hashrate miners
 c) fully validating every share
 d) vulnerable to block-withholding attacks, and hence not viable in
    the long term in a competitive environment

Of those, "fully validating every share" seems the most appealing option
to me, but in practical terms, that seems incompatible with "any miner
can freely choose the txs they work on". In practice, of course, (a)
and (b) will presumably be the reality for the foreseeable future for
all but a fairly trivial amount of hashrate.

> Adding more explicit "negotiation" to Stratum V2 work selection would defeat
> the purpose - if the pool is able to tell a miner not to work on some work
> it wants to, ...

A pool is always able to do that -- they can simply mark the share as
invalid after the fact and refuse to pay out on it, and perhaps make
a blog post explaining their policy. The over-the-wire protocol isn't
what provides that ability.

That's also precisely what they have to do if they detect a block
withholding attack by statistical methods: start refusing to pay out
for otherwise valid shares, based on non-consensus rules ("your account
has been banned", "your ip came from a range that seems suspicious",
"you don't have enough hashrate to be a member of this pool", ...).

> ... you might as well just have the pool pick the work.

> The only
> way any kind of centralized pooling with custom work selection adds any
> value to Bitcoin's decentralization is if the clients insist on mining the
> work they want to - whether on the pool or solo mining if the pool doesn't
> want it.

If you're really expecting miners are going to be constantly telling
their pool "do exactly what I want or I solo mine", I think you're pretty
likely to be disappointed by whatever the future holds... By its nature,
solo mining is something that can only be done profitably by relatively
few players at any given time; it's only potentially decentralised if the
total market for Bitcoin ownership/usage is itself very small.

Heck, we can observe right now that even *pools* aren't willing to insist
on the right to choose their own work when they can get smoother payouts
by letting someone else choose the work.

The only realistic way I see to improve mining decentralisation is
to make it easier to setup a viable competing pool when none of the
existing ones are being reasonable. If setting up a pool requires you
to do statistical analysis and setup KYC to avoid attacks from existing
players, that seems like a pretty big road-block.

Cheers,
aj


* Re: [bitcoindev] Mining pools, stratumv2 and oblivious shares
  2024-08-16  2:10   ` Anthony Towns
@ 2024-08-21 14:28     ` Matt Corallo
  2024-08-27  9:07       ` Anthony Towns
  0 siblings, 1 reply; 8+ messages in thread
From: Matt Corallo @ 2024-08-21 14:28 UTC (permalink / raw)
  To: Anthony Towns; +Cc: bitcoindev



On 8/15/24 10:10 PM, Anthony Towns wrote:
> On Tue, Aug 13, 2024 at 09:57:06AM -0400, Matt Corallo wrote:
> The only way you can do statistical analyses is if miners (including
> attackers) can be assigned a persistent identity with reasonable accuracy,
> and you restrict your pool to accepting individual miners with a large
> enough hashrate that they're expected to find a valid block relatively
> frequently.

Yep.

-snip-

> As far as I can see that means your pool is either:
> 
>   a) heavily KYCed
>   b) limited to high-hashrate miners
>   c) fully validating every share
>   d) vulnerable to block-withholding attacks, and hence not viable in
>      the long term in a competitive environment
> 
> Of those, "fully validating every share" seems the most appealing option
> to me, but in practical terms, that seems incompatible with "any miner
> can freely choose the txs they work on". In practice, of course, (a)
> and (b) will presumably be the reality for the foreseeable future for
> all but a fairly trivial amount of hashrate.

Except "fully validating every share" doesn't change anything. You totally missed the point that 
both I and Luke raised - you can fully validate every share, or not, but either way block 
withholding requires some kind of statistical analysis to detect, subject to the limitations you raise.

>> Adding more explicit "negotiation" to Stratum V2 work selection would defeat
>> the purpose - if the pool is able to tell a miner not to work on some work
>> it wants to, ...
> 
> A pool is always able to do that -- they can simply mark the share as
> invalid after the fact and refuse to pay out on it, and perhaps make
> a blog post explaining their policy. The over-the-wire protocol isn't
> what provides that ability.

A pool can decline to pay out, yes, but the miner will still work on that block. The point of custom 
work selection is that the miner will *always* work on the block they want, no matter what. And if 
they mine it, they broadcast it directly themselves. Anything else would defeat the point.

A pool can send their users an email and ask them to change the rules of what they mine on, but it 
then requires an active action taken by the miner to change what they want to mine on.

>> The only
>> way any kind of centralized pooling with custom work selection adds any
>> value to Bitcoin's decentralization is if the clients insist on mining the
>> work they want to - whether on the pool or solo mining if the pool doesn't
>> want it.
> 
> If you're really expecting miners are going to be constantly telling
> their pool "do exactly what I want or I solo mine", I think you're pretty
> likely to be disappointed by whatever the future holds... By its nature,
> solo mining is something that can only be done profitably by relatively
> few players at any given time; it's only potentially decentralised if the
> total market for Bitcoin ownership/usage is itself very small.

You're totally missing the point that pools can just...pay out properly? Or if they don't people 
will create new pools that do? Not sure why you think that's a far-fetched outcome.

Matt


* Re: [bitcoindev] Mining pools, stratumv2 and oblivious shares
  2024-08-21 14:28     ` Matt Corallo
@ 2024-08-27  9:07       ` Anthony Towns
  2024-08-27 13:52         ` Matt Corallo
  0 siblings, 1 reply; 8+ messages in thread
From: Anthony Towns @ 2024-08-27  9:07 UTC (permalink / raw)
  To: Matt Corallo; +Cc: bitcoindev

On Wed, Aug 21, 2024 at 10:28:35AM -0400, Matt Corallo wrote:
> On 8/15/24 10:10 PM, Anthony Towns wrote:
> > On Tue, Aug 13, 2024 at 09:57:06AM -0400, Matt Corallo wrote:
> > The only way you can do statistical analyses is if miners (including
> > attackers) can be assigned a persistent identity with reasonable accuracy,
> > and you restrict your pool to accepting individual miners with a large
> > enough hashrate that they're expected to find a valid block relatively
> > frequently.
> Yep.

Right, but that just sets some threshold where low hashrate members of a
pool are indistinguishable from people attacking the pool. If there's no
attack going on, that's fine, of course. To put some numbers to this: Ocean
reportedly paid out ~10% more in reward for the same work compared to other
pools [0].

[0] https://x.com/ocean_mining/status/1825943407736533008

To reverse that, you could do something like:

 * Take Ocean's current (honest) hashrate of 2160 PH/s, ie about 0.33% of global
   hashrate, or 3.35 blocks/week
 * Find ~22% of that to attack with, ie 475 PH/s (0.74 blocks/week)
 * Run the attack:
    - Ocean's new hashrate is 2635 PH/s
    - Ocean still only achieves 3.35 blocks/week
    - You collect ~18% of their reward (0.6 blocks/week)
    - Honest Ocean miners collect the remainder (2.75 blocks/week)
    - If 3.35 blocks/week was making 1.1x the FPPS reward, 2.75 blocks/week
      is now only 0.9x the FPPS reward
    - Publish a google doc saying Ocean sucks, FPPS is much better
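
(Checking that arithmetic with a quick sketch, taking global hashrate
as the ~654,000 PH/s implied by the 0.33% figure:)

    WEEK = 1008                         # blocks per week
    global_phs = 2160 / 0.0033          # ~654,500 PH/s
    honest, attack = 2160.0, 475.0      # PH/s; the attacker withholds all blocks
    found = WEEK * honest / global_phs  # ~3.35 blocks/week, unchanged
    attacker_cut = found * attack / (honest + attack)  # ~0.60 blocks/week
    honest_cut = found - attacker_cut                  # ~2.75 blocks/week
    print(1.1 * honest_cut / found)     # ~0.90x FPPS, down from 1.1x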

If submitted through a single account, 475 PH/s would be the third
largest member of Ocean, and relatively easy to do statistical analysis
on. If split up across perhaps 500 accounts, with less than a PH/s each,
those accounts would not be in Ocean's top 80, and each individual account
wouldn't be expected to find a block more often than once a decade or
so. Submitting shares via tor, spending some time creating the accounts
and making them look normal (ie, actually submitting the 0.6 blocks/week
they're collectively expected to find), and receiving payouts over
lightning seems like it would close out many of the obvious ways of
telling that the accounts are all sybils.

Maybe it's okay if the answer is just "analyse as best you can to find
the real culprit, and if you can't, just drop all the low hashrate pool
members". With an approach like that, you could decide something like
"if there's an attack that lasts for two months, I'll boot out everyone
who didn't find a block over that period" [1]. For someone with 0.02%
of global hashrate (130 PH/s), there's about an 80% chance they'll find
a block over two months, whereas for someone with 0.0012% (7.8 PH/s),
there's a 90% chance they won't. So at that point, even if you're honest
and you've invested $4M in ASICs (482x S21XP, 130 PH/s), you've still got
a 20% chance of being booted out of the pool; and if your investment is
only $240,000 (29x S21 XP, 7.8 PH/s), you've got a 90% chance of being
booted out. For comparison, only the top three users on Ocean's dashboard
report more than 130 PH/s, and 7.8 PH/s would put you in the top 25.
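
(Those probabilities are just Poisson arithmetic:)

    from math import exp
    WINDOW = 61 * 144                    # ~two months of blocks
    p_none = lambda f: exp(-f * WINDOW)  # chance of finding zero blocks
    print(1 - p_none(0.0002))            # 130 PH/s: ~0.83 chance of a block
    print(p_none(0.000012))              # 7.8 PH/s: ~0.90 chance of none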

(For comparison, each miner having to have at least 0.01% of global
hashpower to be viable means there's at most 10k miners worldwide,
which is about the same as the number of members in the SWIFT
system. Alternatively, the low figure above was 29x S21's; the new ESMA
recommendation [2] seems to consider you a threat to the environment /
large scale miner with just 17x S21's...)

Or you could look at it the other way round: Ocean accepting anonymous
small miners seems like a nice thing now, but if it's also setting the
pool up for failure by providing a way for an attacker to hide its attack,
it might not actually be something that's good for the network.

[1] You'd want to do something smarter, so that the attacker doesn't just
    make an account with high hashrate that reports one block a week when
    their hashrate suggests they should be finding five, of course. But
    the details of that don't have much impact on the low hashrate
    honest miners that are also in your pool.

[2] https://x.com/DSBatten/status/1828182078107894108

Anyway, like I said,

> > What I'm interested in is
> > a pool that doesn't do those things: for example, a world where 5% of
> > hashrate is from 60M BitAxe devices owned by 10M people, say.

I'm interested in the scenario where there's a pool that supports large
numbers of low hashrate miners, even in the face of an attack, and I
just don't see a way in which statistical analysis can be good enough
in that scenario.

(I'm also not sure there's much difference between the thresholds for
"can be confirmed to not be an attacker via statistical means" and "can
viably solo mine". Pools being only for the miners that don't really
need them doesn't seem great)

> > As far as I can see that means your pool is either:
> >
> >   a) heavily KYCed
> >   b) limited to high-hashrate miners
> >   c) fully validating every share
> >   d) vulnerable to block-withholding attacks, and hence not viable in
> >      the long term in a competitive environment
> >
> > Of those, "fully validating every share" seems the most appealing option
> > to me, but in practical terms, that seems incompatible with "any miner
> > can freely choose the txs they work on". In practice, of course, (a)
> > and (b) will presumably be the reality for the foreseeable future for
> > all but a fairly trivial amount of hashrate.
>
> Except "fully validating every share" doesn't change anything. You totally
> missed the point that both I and Luke raised - you can fully validate every
> share, or not, but either way block withholding requires some kind of
> statistical analysis to detect, subject to the limitations you raise.

With a PoW algorithm supporting oblivious shares, that's not the case:
block withholding simply stops being an attack: if you withhold n shares,
your payout drops by the value of n shares, and the pool's revenue drops
by the expected value of n shares, the same as if you'd just turned
your miner off briefly. Presuming the pool doesn't collude with whoever
is attacking the pool, the attacker doesn't have a way of biasing the
shares they withhold to be more valuable than the shares they submit.

It's only after you have a functioning system with oblivious shares
that fully validating every share matters for this purpose, and that's
because there's a very similar attack available that deprives the pool
of the block rewards from your shares. That attack can be prevented by
fully validating shares. ("Fully" here means "against the full consensus
rules"; you could validate only 1-in-n shares, provided that the user can't
predict which shares will be validated, and that each detected invalid
share results in the loss of n share payments).
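
(The expected values balance: an invalid share earns one share payment,
is caught with probability 1/n, and forfeits n payments when caught, so
its expected return is 1 - (1/n)*n = 0, even before counting the cost
of producing it.)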

Fully validating shares is, obviously, trivial if the pool is assembling
the block and the user is only adding proof of work. If the user is
constructing their own templates, it's significantly more work, and as far
as I can see, that work scales according to how many users the pool has.

> > > Adding more explicit "negotiation" to Stratum V2 work selection would defeat
> > > the purpose - if the pool is able to tell a miner not to work on some work
> > > it wants to, ...
> > A pool is always able to do that -- they can simply mark the share as
> > invalid after the fact and refuse to pay out on it, and perhaps make
> > a blog post explaining their policy. The over-the-wire protocol isn't
> > what provides that ability.
> A pool can decline to pay out, yes, but the miner will still work on that
> block. The point of custom work selection is that the miner will *always*
> work on the block they want, no matter what. And if they mine it, they
> broadcast it directly themselves. Anything else would defeat the point.

If you mine a block, paying to a pool, but the pool is not paying you for
shares, all you're doing is making a large donation to the pool operator,
that at best might get passed on to other members of the pool, but won't
be passed on to you. That's even less profitable than solo mining.

You can certainly say "the miners will never tell the pool the contents
of their shares, and will never join a pool that expects that; pools
only find out about txs when a successful block is mined and broadcast",
and do a statistical analysis of shares vs blocks, but that again only
works for miners with large hashrates.

Without miners sharing the block templates for their shares with the
pool, I don't see how a pool could preference miners who are picking
"good" templates, vs mining empty blocks; if you're only getting the
same reward as someone who's mining an empty block, it seems locally
optimal to mine an empty block yourself, though that certainly doesn't
seem globally optimal. If you're not validating the templates, but
rewarding shares with templates that claim to give high fees, that also
seems exploitable in ways that are harmful for the pool.

> > > The only
> > > way any kind of centralized pooling with custom work selection adds any
> > > value to Bitcoin's decentralization is if the clients insist on mining the
> > > work they want to - whether on the pool or solo mining if the pool doesn't
> > > want it.
> > 
> > If you're really expecting miners are going to be constantly telling
> > their pool "do exactly what I want or I solo mine", I think you're pretty
> > likely to be disappointed by whatever the future holds... By its nature,
> > solo mining is something that can only be done profitably by relatively
> > few players at any given time; it's only potentially decentralised if the
> > total market for Bitcoin ownership/usage is itself very small.
> 
> You're totally missing the point that pools can just...pay out properly?

Giving "why can't we all just get along" vibes there... They certainly
could, but incentives for them to do otherwise don't go away just by
wishing they would.

> Or
> if they don't people will create new pools that do? Not sure why you think
> that's a far-fetched outcome.

Sure, making it easy to create new pools is ideal. I think "just do
statistical analysis to detect/prevent attacks" is already a pretty big
impediment to that, though: it's hard to do, not automated, and likely
requires some degree of secrecy to do well. Same as "oh, we just use
heuristics to detect credit card fraud" vs "sign with your private key,
and no one can reuse your signature or create a fake one": one requires
a bunch of specialist knowledge that's hard to duplicate and is often
ineffective, the other is something that can be done reliably by freely
downloadable open source code.

My chain of logic is:

 * I'd like to see mass-market mining be more viable (ie, lots of people each
   with small amounts of hashrate, vs a smaller number of large operations
   with large amounts of hashrate)

 * Small hashrate mining is only viable via altruism ("I'm supporting
   the network, and it's cheap"), irrationality ("I know it's bad odds
   for everyone else, but I'm lucky"), or pooled mining.

 * Altruism and irrationality don't work at scale, so I'd like to
   see pooled mining work well for small amounts of hashrate.

 * Pools that accept small miners will have a very hard problem
   preventing/addressing block withholding attacks, because statistical
   methods are not viable for people making a mining investment that's
   not in the six figure range.

 * We could make a very intrusive change to fix block withholding
   attacks entirely, by supporting oblivious shares, so that the miner
   can't tell which share will be a valid block.

 * Even if we did that, we'd still have to prevent rewarding miners
   for producing shares based on invalid templates, if templates aren't
   being provided by the pool.

In particular, in a world where, say, Ocean adopted stratumv2 or some
similar approach that allowed its users to construct any block they
like, didn't verify those blocks when accepting them as valid shares,
and continued to allow people to join the pool with little more than a
bitcoin address and a few TH/s, then even with oblivious shares supported
at consensus, it would still be possible to run roughly the same attack
described above, but rather than withholding shares, you'd be mining
shares against invalid blocks; still distributing your hashrate across
many accounts to avoid statistical analysis.

Cheers,
aj



* Re: [bitcoindev] Mining pools, stratumv2 and oblivious shares
  2024-08-27  9:07       ` Anthony Towns
@ 2024-08-27 13:52         ` Matt Corallo
  0 siblings, 0 replies; 8+ messages in thread
From: Matt Corallo @ 2024-08-27 13:52 UTC (permalink / raw)
  To: Anthony Towns; +Cc: bitcoindev



On 8/27/24 5:07 AM, Anthony Towns wrote:
> On Wed, Aug 21, 2024 at 10:28:35AM -0400, Matt Corallo wrote:
> Right, but that just sets some threshold where low hashrate members of a
> pool are indistinguishable from people attacking the pool. If there's no
> attack going on, that's fine, of course. To put some numbers to this: Ocean
> reportedly paid out ~10% more in reward for the same work compared to other
> pools [0].
> 
> [0] https://x.com/ocean_mining/status/1825943407736533008
> 
> To reverse that, you could do something like:
> 
>   * Take Ocean's current (honest) hashrate of 2160 PH/s, ie about 0.33% of global
>     hashrate, or 3.35 blocks/week
>   * Find ~22% of that to attack with, ie 475 PH/s (0.74 blocks/week)
>   * Run the attack:
>      - Ocean's new hashrate is 2635 PH/s
>      - Ocean still only achieves 3.35 blocks/week
>      - You collect ~18% of their reward (0.6 blocks/week)
>      - Honest Ocean miners collect the remainder (2.75 blocks/week)
>      - If 3.35 blocks/week was making 1.1x the FPPS reward, 2.75 blocks/week
>        is now only 0.9x the FPPS reward
>      - Publish a google doc saying Ocean sucks, FPPS is much better
> 
> If submitted through a single account, 475 PH/s would be the third
> largest member of Ocean, and relatively easy to do statistical analysis
> on. If split up across perhaps 500 accounts, with less than a PH/s each,
> those accounts would not be in Ocean's top 80, and each individual account
> wouldn't be expected to find a block more often than once a decade or
> so. Submitting shares via tor, spending some time creating the accounts
> and making them look normal (ie, actually submitting the 0.6 blocks/week
> they're collectively expected to find), and receiving payouts over
> lightning seems like it would close out many of the obvious ways of
> telling that the accounts are all sybils.
> 
> Maybe it's okay if the answer is just "analyse as best you can to find
> the real culprit, and if you can't, just drop all the low hashrate pool
> members". With an approach like that, you could decide something like
> "if there's an attack that lasts for two months, I'll boot out everyone
> who didn't find a block over that period" [1]. For someone with 0.02%
> of global hashrate (130 PH/s), there's about an 80% chance they'll find
> a block over two months, whereas for someone with 0.0012% (7.8 PH/s),
> there's a 90% chance they won't. So at that point, even if you're honest
> and you've invested $4M in ASICs (482x S21XP, 130 PH/s), you've still got
> a 20% chance of being booted out of the pool; and if your investment is
> only $240,000 (29x S21 XP, 7.8 PH/s), you've got a 90% chance of being
> booted out. For comparison, only the top three users on Ocean's dashboard
> report more than 130 PH/s, and 7.8 PH/s would put you in the top 25.
> 
> (For comparison, each miner having to have at least 0.01% of global
> hashpower to be viable means there's at most 10k miners worldwide,
> which is about the same as the number of members in the SWIFT
> system. Alternatively, the low figure above was 29x S21's; the new ESMA
> recommendation [2] seems to consider you a threat to the environment /
> large scale miner with just 17x S21's...)
> 
> Or you could look at it the other way round: Ocean accepting anonymous
> small miners seems like a nice thing now, but if it's also setting the
> pool up for failure by providing a way for an attacker to hide its attack,
> it might not actually be something that's good for the network.

Yep, I generally agree with all of this. It kinda is what it is, we can't do much to change it, and 
pools do run the risk of getting attacked. For a PPLNS pool like Ocean, it'd mean the miners would 
lose out. For a PPS pool, like nearly all others, it'd mean the pool goes out of business. Pick your 
poison, I guess.

The one other thing to point out is that you can pay out a bonus to the miner that finds a 
block. This is (IIRC) what p2pool did, and what any decentralized pool worth its salt will do. That 
at least adds some (hopefully non-trivial) cost to block withholding to disincentivize it, though of 
course it comes at the cost of stable rewards...which was kinda the whole point of a pool.
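
(To put a very rough number on what a bonus costs an attacker -- reusing the attack figures quoted 
above, and with the bonus fraction being entirely hypothetical:)

    subsidy = 3.125               # BTC per block, ignoring fees
    withheld = 0.74               # blocks/week the attacker finds but withholds
    for b in (0.01, 0.05, 0.10):  # finder's bonus as a fraction of the reward
        print(b, b * withheld * subsidy, "BTC/week forgone by the attacker")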

> 
>>> What I'm interested in is
>>> a pool that doesn't do those things: for example, a world where 5% of
>>> hashrate is from 60M BitAxe devices owned by 10M people, say.
> 
> I'm interested in the scenario where there's a pool that supports large
> numbers of low hashrate miners, even in the face of an attack, and I
> just don't see a way in which statistical analysis can be good enough
> in that scenario.
> 
> (I'm also not sure there's much difference between the thresholds for
> "can be confirmed to not be an attacker via statistical means" and "can
> viably solo mine". Pools being only for the miners that don't really
> need them doesn't seem great)

Another way to look at this is that pooled mining isn't useful for miners that will find a block 
approximately never :).

> With a PoW algorithm supporting oblivious shares, that's not the case:
> block withholding simply stops being an attack: if you withhold n shares,
> your payout drops by the value of n shares, and the pool's revenue drops
> by the expected value of n shares, the same as if you'd just turned
> your miner off briefly.

Sure, there are various schemes to prevent block withholding. Last I looked into it, though, they 
all require a major PoW change, which I'm not sure is worth doing just to fix block withholding, sadly.

>>>> Adding more explicit "negotiation" to Stratum V2 work selection would defeat
>>>> the purpose - if the pool is able to tell a miner not to work on some work
>>>> it wants to, ...
>>> A pool is always able to do that -- they can simply mark the share as
>>> invalid after the fact and refuse to pay out on it, and perhaps make
>>> a blog post explaining their policy. The over-the-wire protocol isn't
>>> what provides that ability.
>> A pool can decline to pay out, yes, but the miner will still work on that
>> block. The point of custom work selection is that the miner will *always*
>> work on the block they want, no matter what. And if they mine it, they
>> broadcast it directly themselves. Anything else would defeat the point.
> 
> If you mine a block, paying to a pool, but the pool is not paying you for
> shares, all you're doing is making a large donation to the pool operator,
> that at best might get passed on to other members of the pool, but won't
> be passed on to you. That's even less profitable than solo mining.

Sure, but this has nothing to do with custom work selection or StratumV2. You can always mine a 
block for a pool and the pool can always decide to not pay out for that work. You should probably 
use pools that aren't run by scammers.

> You can certainly say "the miners will never tell the pool the contents
> of their shares, and will never join a pool that expects that; pools
> only find out about txs when a successful block is mined and broadcast",
> and do a statistical analysis of shares vs blocks, but that again only
> works for miners with large hashrates.
> 
> Without miners sharing the block templates for their shares with the
> pool, I don't see how a pool could preference miners who are picking
> "good" templates, vs mining empty blocks; if you're only getting the
> same reward as someone who's mining an empty block, it seems locally
> optimal to mine an empty block yourself, though that certainly doesn't
> seem globally optimal. If you're not validating the templates, but
> rewarding shares with templates that claim to give high fees, that also
> seems exploitable in ways that are harmful for the pool.

I never said anything about refusing to tell a pool the contents of shares. Quite the opposite, in 
fact: in order to ensure StratumV2 custom work selection doesn't result in a degradation in pool 
block propagation performance, pools should be getting the contents of shares from miners and should 
be prepared to forward them. They should also probably spot-check for validity, but more to detect 
misconfiguration than active attacks.

>>>> The only
>>>> way any kind of centralized pooling with custom work selection adds any
>>>> value to Bitcoin's decentralization is if the clients insist on mining the
>>>> work they want to - whether on the pool or solo mining if the pool doesn't
>>>> want it.
>>>
>>> If you're really expecting miners are going to be constantly telling
>>> their pool "do exactly what I want or I solo mine", I think you're pretty
>>> likely to be disappointed by whatever the future holds... By its nature,
>>> solo mining is something that can only be done profitably by relatively
>>> few players at any given time; it's only potentially decentralised if the
>>> total market for Bitcoin ownership/usage is itself very small.
>>
>> You're totally missing the point that pools can just...pay out properly?
> 
> Giving "why can't we all just get along" vibes there... They certainly
> could, but incentives for them to do otherwise don't go away just by
> wishing they would.

I don't buy that it's in a pool's best interest to not pay out for the contract they have with their 
users? How is trashing your business and getting sued in their best interest?

>> Or
>> if they don't people will create new pools that do? Not sure why you think
>> that's a far-fetched outcome.
> 
> Sure, making it easy to create new pools is ideal. I think "just do
> statistical analysis to detect/prevent attacks" is already a pretty big
> impediment to that, though: it's hard to do, not automated, and likely
> requires some degree of secrecy to do well. Same as "oh, we just use
> heuristics to detect credit card fraud" vs "sign with your private key,
> and no one can reuse your signature or create a fake one": one requires
> a bunch of specialist knowledge that's hard to duplicate and is often
> ineffective, the other is something that can be done reliably by freely
> downloadable open source code.

Sure, but this again has nothing to do with StratumV2 or custom work selection...

> My chain of logic is:
> 
>   * I'd like to see mass-market mining be more viable (ie, lots of people each
>     with small amounts of hashrate, vs a smaller number of large operations
>     with large amounts of hashrate)

Yea, I would too, but before we rush to do a fork changing the PoW to support oblivious shares, I'd 
really like to be convinced that it's actually realistic that these kinds of miners can have more 
than a few % of total network hashrate. If we can only get them up to a few %....who cares?

>   * Small hashrate mining is only viable via altruism ("I'm supporting
>     the network, and it's cheap"), irrationality ("I know it's bad odds
>     for everyone else, but I'm lucky"), or pooled mining.
> 
>   * Altruism and irrationality don't work at scale, so I'd like to
>     see pooled mining work well for small amounts of hashrate.
> 
>   * Pools that accept small miners will have a very hard problem
>     preventing/addressing block withholding attacks, because statistical
>     methods are not viable for people making a mining investment that's
>     not in the six figure range.
> 
>   * We could make a very intrusive change to fix block withholding
>     attacks entirely, by supporting oblivious shares, so that the miner
>     can't tell which share will be a valid block.
> 
>   * Even if we did that, we'd still have to prevent rewarding miners
>     for producing shares based on invalid templates,

Yea, one step at a time, mostly. But if the market did change somehow and small miners were shooting 
for more than a few % of hashrate (I dunno, ASIC heaters become a thing?) and then we decide we 
should fork in oblivious mining....

then I think statistical checking is much, much, much easier here. StratumV2 pools are already 
expected to spot-check work and generally expected to always see block templates, so it becomes a 
problem of "throw more CPU at it to verify more clients and reject more shares" (which if you have a 
ton of shares could probably be done kinda efficiently through script validity and UTXO caching). 
You could very reasonably rate-limit new template generation without breaking things, and require 
some minimum threshold of miner hashrate (yay PoW for anti-DoS) and probably you'd be fine.
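
(As a sketch of what that rate-limiting might look like -- nothing here is specified anywhere, it's 
just one way of metering template churn against demonstrated work, token-bucket style:)

    class TemplateLimiter:
        """Hypothetical per-miner limiter: new templates are earned by
        submitting valid shares, so low-hashrate identities can't flood
        the pool's template validators."""
        def __init__(self, per_share=0.1, burst=3.0):
            self.per_share, self.burst = per_share, burst
            self.tokens = burst
        def on_valid_share(self):
            self.tokens = min(self.burst, self.tokens + self.per_share)
        def allow_new_template(self):
            if self.tokens < 1.0:
                return False      # keep mining against the previous template
            self.tokens -= 1.0
            return True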

> if templates aren't
>     being provided by the pool.

Sure, and if the templates are being provided by the pool then there's no value in having lots of 
small miners :). Doesn't change your argument, just worth pointing out, I think.

Matt


Thread overview: 8+ messages
2024-07-23 15:02 [bitcoindev] Mining pools, stratumv2 and oblivious shares Anthony Towns
2024-07-23 18:44 ` Luke Dashjr
2024-07-31 18:00   ` Anthony Towns
2024-08-13 13:57 ` Matt Corallo
2024-08-16  2:10   ` Anthony Towns
2024-08-21 14:28     ` Matt Corallo
2024-08-27  9:07       ` Anthony Towns
2024-08-27 13:52         ` Matt Corallo
