public inbox for bitcoindev@googlegroups.com
* [bitcoindev] Great Consensus Cleanup Revival
@ 2024-03-24 18:10 'Antoine Poinsot' via Bitcoin Development Mailing List
  2024-03-26 19:11 ` [bitcoindev] " Antoine Riard
  2024-06-17 22:15 ` Eric Voskuil
  0 siblings, 2 replies; 21+ messages in thread
From: 'Antoine Poinsot' via Bitcoin Development Mailing List @ 2024-03-24 18:10 UTC (permalink / raw)
  To: bitcoindev

Hey all,

I've recently posted about the Great Consensus Cleanup there: https://delvingbitcoin.org/t/great-consensus-cleanup-revival/710.

I'm starting a thread on the mailing list as well to get comments and opinions from people who are not on Delving.

TL;DR:
- I think worst-case block validation time is concerning. The mitigations proposed by Matt are effective, but I think we should also limit the maximum size of legacy transactions for an additional safety margin;
- I believe it's more important to fix the timewarp bug than people usually think;
- it would be nice to include a fix to make coinbase transactions unique once and for all, to avoid having to resort to BIP30 validation again after block 1,983,702;
- 64-byte transactions should definitely be made invalid, but I don't think there is a strong case for making transactions smaller than 64 bytes invalid.

Anything in there that people disagree with conceptually?
Anything in there that people think shouldn't (or don't need to) be fixed?
Anything in there which can be improved (a simpler or better fix)?
Anything NOT in there that people think should be fixed?


Antoine Poinsot

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/gnM89sIQ7MhDgI62JciQEGy63DassEv7YZAMhj0IEuIo0EdnafykF6RH4OqjTTHIHsIoZvC2MnTUzJI7EfET4o-UQoD-XAQRDcct994VarE%3D%40protonmail.com.


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-03-24 18:10 [bitcoindev] Great Consensus Cleanup Revival 'Antoine Poinsot' via Bitcoin Development Mailing List
@ 2024-03-26 19:11 ` Antoine Riard
  2024-03-27 10:35   ` 'Antoine Poinsot' via Bitcoin Development Mailing List
  2024-06-17 22:15 ` Eric Voskuil
  1 sibling, 1 reply; 21+ messages in thread
From: Antoine Riard @ 2024-03-26 19:11 UTC (permalink / raw)
  To: Bitcoin Development Mailing List



Hi Poinsot,

I think fixing the timewarp attack is a good idea, especially with regard to the safety
implications for long-term timelock usage.

The only beneficial case I can remember for the timewarp issue is
"forwarding blocks" by maaku for on-chain scaling:
http://freico.in/forward-blocks-scalingbitcoin-paper.pdf

Shall we as a community completely disregard this approach to on-chain
settlement throughput scaling?
Personally, I think you can still design extension-block / side-chain-like
protocols using other Bitcoin Script mechanisms available today and get
roughly (?) the same security / scalability trade-offs. So fixing timewarp
is fine by me.

Worst-case block validation time is concerning. I bet you can do worse than
your examples by playing with other vectors, like low-level ECC tricks and
the micro-architectural layout of modern processors.
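To put numbers on it (a toy model with illustrative size constants, not exact serializations): legacy SIGHASH_ALL re-hashes nearly the whole transaction for every input signature, so the bytes hashed grow quadratically with the input count.

```python
# Toy model of legacy sighash cost. Size constants are illustrative:
# a signed legacy input is roughly 148 bytes, an output roughly 34.
def tx_size(n_inputs: int, n_outputs: int = 1) -> int:
    return 10 + 148 * n_inputs + 34 * n_outputs

def sighash_bytes(n_inputs: int) -> int:
    # Each input's signature check hashes (approximately) the full tx,
    # hence the overall O(n^2) behaviour.
    return n_inputs * tx_size(n_inputs)

# Doubling the input count roughly quadruples the bytes hashed.
print(round(sighash_bytes(2000) / sighash_bytes(1000), 2))  # → 4.0
```

Which is why capping legacy transaction size helps: halving the maximum size cuts this worst case by roughly a factor of four.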

Consensus invalidation of old legacy scripts was quite controversial the
last time a consensus cleanup was proposed:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html

Only making scripts invalid after a given block height (say, the consensus
cleanup activation height) is obviously a way to address the concern, and
any remaining dormant DoSy unspent coins can be handled with newly crafted,
dedicated transaction-relay rules (e.g. at most 1000 DoSy coins can be
spent per block for a given IBT span).

I think any consensus boundary on minimal transaction size would need to be
introduced carefully, with all lightweight clients updating their
transaction acceptance logic to enforce the check, to avoid a years-long
transitory period of massive double-spend risk due to software
incoordination. I doubt `MIN_STANDARD_TX_NON_WITNESS_SIZE` is implemented
correctly by all transaction-relay backends, and it's a mess in this area.
What about a less-than-64-byte transaction whose only witness is enforced
to be a minimal 1 byte, because witness elements are only used for
higher-layer protocol semantics? It should get its own "only-after-height-X"
exemption, I think.

Making the coinbase unique by requiring the block height to be enforced in
nLocktime: it sounds more robust to take a monotonic counter in the past,
in case of accidental or provoked shallow reorgs. I can see that you would
have to re-compute a block template, losing a round-trip compared to your
mining competitors. Better if it doesn't introduce a new DoS vector in
mining job distribution and control.

Beyond that, I don't deny that the other mentioned issues (e.g. the UTXO
entry growth limit) could be sources of denial-of-service, but a) I think
it's hard to tell whether they're economically neutral for modern Bitcoin
use-cases and their plausible evolution, and b) it's already a lot of
careful consensus code to get right :)

Best,
Antoine

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-03-26 19:11 ` [bitcoindev] " Antoine Riard
@ 2024-03-27 10:35   ` 'Antoine Poinsot' via Bitcoin Development Mailing List
  2024-03-27 18:57     ` Antoine Riard
  2024-04-18  0:46     ` Mark F
  0 siblings, 2 replies; 21+ messages in thread
From: 'Antoine Poinsot' via Bitcoin Development Mailing List @ 2024-03-27 10:35 UTC (permalink / raw)
  To: Antoine Riard; +Cc: Bitcoin Development Mailing List


> Hi Poinsot,

Hi Riard,

> The only beneficial case I can remember about the timewarp issue is "forwarding blocks" by maaku for on-chain scaling:
> http://freico.in/forward-blocks-scalingbitcoin-paper.pdf

I would not describe this hack as "beneficial". Besides the centralization pressure of an increased block frequency, leveraging the timewarp to achieve it would put the network constantly on the brink of being seriously (fatally?) harmed. And this sets pernicious incentives too. Every individual user has a short-term incentive to get lower fees from the increased block space, at the expense of all users longer term. And every individual miner has an incentive to get more block reward at the expense of future miners. (And of course bigger miners benefit from an increased block frequency.)

> I think any consensus boundaries on the minimal transaction size would need to be done carefully and have all lightweight
> clients update their own transaction acceptance logic to enforce the check to avoid years-long transitory massive double-spend
> due to software incoordination.

Note that in my writeup I suggest we do not introduce a minimum transaction size, but instead only make 64-byte transactions invalid. See https://delvingbitcoin.org/t/great-consensus-cleanup-revival/710#can-we-come-up-with-a-better-fix-10:

> However the BIP proposes to also make less-than-64-bytes transactions invalid. Although they are of no (or little) use, such transactions are not harmful. I believe considering a type of transaction useless is not sufficient motivation for making them invalid through a soft fork.
>
> Making (exactly) 64 bytes long transactions invalid is also what AJ implemented in [his pull request to Bitcoin-inquisition](https://github.com/bitcoin-inquisition/bitcoin/pull/24).

> I doubt `MIN_STANDARD_TX_NON_WITNESS_SIZE` is implemented correctly by all transaction-relay backends and it's a mess in this area.

What type of backend are you referring to here? Bitcoin full node reimplementations? These transactions have been non-standard in Bitcoin Core for the past 6 years (commit 7485488e907e236133a016ba7064c89bf9ab6da3).
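For reference, the check in Core is a simple floor on the transaction's size serialized without witness. A sketch of the equivalent logic follows; the constant values (82 bytes when the check was introduced, later relaxed to 65) are from memory and worth double-checking against the source:

```python
# Sketch of Bitcoin Core's size-based standardness check. Constant per
# my recollection: 82 bytes originally, later relaxed to 65 since only
# exactly-64-byte transactions are the consensus-relevant hazard.
MIN_STANDARD_TX_NONWITNESS_SIZE = 65

def is_standard_size(tx_nonwitness_serialization: bytes) -> bool:
    return len(tx_nonwitness_serialization) >= MIN_STANDARD_TX_NONWITNESS_SIZE

print(is_standard_size(b"\x00" * 64))  # → False
print(is_standard_size(b"\x00" * 65))  # → True
```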

> Quid if we have < 64 bytes transaction where the only witness is enforced to be a minimal 1-byte
> as witness elements are only used for higher layers protocols semantics ?

This restriction is on the size of the transaction serialized without witness. So this particular instance would not be affected and whatever the witness is isn't relevant.
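To spell out why exactly 64 bytes is the dangerous size: an inner node of Bitcoin's merkle tree is the double-SHA256 of the 64-byte concatenation of its two children, which is byte-for-byte the same computation as hashing a 64-byte transaction, so a leaf and an inner node cannot be told apart from the hash alone. A minimal demonstration (placeholder hash values):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

left = b"\x11" * 32   # txid of the left child (placeholder value)
right = b"\x22" * 32  # txid of the right child (placeholder value)

inner_node = sha256d(left + right)  # how an inner merkle node is computed

# A 64-byte blob that also parses as a valid transaction hashes to the
# exact same value, enabling leaf/inner-node confusion attacks.
fake_64_byte_tx = left + right
assert sha256d(fake_64_byte_tx) == inner_node
```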

> Making coinbase unique by requesting the block height to be enforced in nLocktime, sounds more robust to take a monotonic counter
> in the past in case of accidental or provoked shallow reorgs. I can see of you would have to re-compute a block template, loss a round-trip
> compare to your mining competitors. Better if it doesn't introduce a new DoS vector at mining job distribution and control.

Could you clarify? Are you suggesting something else than to set the nLockTime in the coinbase transaction to the height of the block? If so, what exactly are you referring to by "monotonic counter in the past"?

At any rate, in my writeup I suggested making the coinbase commitment mandatory (even when empty) instead, for compatibility reasons.

That said, since we could make this rule kick in 25 years from now, we might want to just do the Obvious Thing and require the height in nLockTime.
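Sketched out, the Obvious Thing is a one-line rule (hypothetical names below, not Core code):

```python
# Hypothetical rule sketch: after activation, a coinbase's nLockTime
# must equal the height of the block containing it. The height then
# commits into the txid, so no two coinbases can share a txid.
def coinbase_nlocktime_rule_ok(coinbase_nlocktime: int,
                               block_height: int,
                               activation_height: int) -> bool:
    if block_height < activation_height:
        return True  # existing blocks are unaffected
    return coinbase_nlocktime == block_height
```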

> and b) it's already a lot of careful consensus
> code to get right :)

Definitely. I just want to make sure we are not missing anything important if a soft fork gets proposed along these lines in the future.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-03-27 10:35   ` 'Antoine Poinsot' via Bitcoin Development Mailing List
@ 2024-03-27 18:57     ` Antoine Riard
  2024-04-18  0:46     ` Mark F
  1 sibling, 0 replies; 21+ messages in thread
From: Antoine Riard @ 2024-03-27 18:57 UTC (permalink / raw)
  To: Antoine Poinsot; +Cc: Bitcoin Development Mailing List


Hi Darosior,

> I would not qualify this hack of "beneficial". Besides the centralization
pressure of an increased block frequency, leveraging the timewarp to
achieve it would put the network constantly on the Brink of being seriously
(fatally?) harmed. And this sets pernicious incentives too. Every
individual user has a short-term incentive to get lower fees by the
increased block space, at the expense of all users longer term. And every
individual miner has an incentive to get more block reward at the expense
of future miners. (And of course bigger miners benefit from an increased
block frequency.)

I'm not saying the hack is beneficial either. The "forward blocks" paper is
just useful for providing more context around timewarp.

> Note in my writeup i suggest we do not introduce a minimum transaction,
but we instead only make 64 bytes transactions invalid

I think it's easier for the sake of analysis.
See this mailing list post for a 60-byte example transaction use-case:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017883.html
It's the only one I'm aware of.

> What type of backend are you referring to here?

I can't find where `MIN_STANDARD_TX_NON_WITNESS_SIZE` is checked in btcd's
`maybeAcceptTransaction()`.

> This restriction is on the size of the transaction serialized without
witness.

Okay.

> Could you clarify? Are you suggesting something else than to set the
nLockTime in the coinbase transaction to the height of the block? If so,
what exactly are you referring to by "monotonic counter in the past"?

Thinking more, I believe it's okay to use the nLocktime in the coinbase
transaction, as the wtxid of the coinbase is assumed to be 0x00.
To be checked that it doesn't break anything w.r.t. Stratum V2 / mining job
distribution.

Best,
Antoine
^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-03-27 10:35   ` 'Antoine Poinsot' via Bitcoin Development Mailing List
  2024-03-27 18:57     ` Antoine Riard
@ 2024-04-18  0:46     ` Mark F
  2024-04-18 10:04       ` 'Antoine Poinsot' via Bitcoin Development Mailing List
  1 sibling, 1 reply; 21+ messages in thread
From: Mark F @ 2024-04-18  0:46 UTC (permalink / raw)
  To: Bitcoin Development Mailing List



On Wednesday, March 27, 2024 at 4:00:34 AM UTC-7 Antoine Poinsot wrote:

>> The only beneficial case I can remember about the timewarp issue is
>> "forwarding blocks" by maaku for on-chain scaling:
>> http://freico.in/forward-blocks-scalingbitcoin-paper.pdf
>
> I would not qualify this hack of "beneficial". Besides the centralization
> pressure of an increased block frequency, leveraging the timewarp to
> achieve it would put the network constantly on the brink of being seriously
> (fatally?) harmed. And this sets pernicious incentives too. Every
> individual user has a short-term incentive to get lower fees by the
> increased block space, at the expense of all users longer term. And every
> individual miner has an incentive to get more block reward at the expense
> of future miners. (And of course bigger miners benefit from an increased
> block frequency.)

Every single concern mentioned here is addressed prominently in the 
paper/presentation for Forward Blocks:

* Increased block frequency is only on the compatibility chain, where the 
content of blocks is deterministic anyway. There is no centralization 
pressure from the frequency of blocks on the compatibility chain, as the 
content of the blocks is not miner-editable in economically meaningful 
ways. Only the block frequency of the forward block chain matters, and here 
the block frequency is actually *reduced*, thereby decreasing 
centralization pressure.

* The elastic block size adjustment mechanism proposed in the paper is 
purposefully constructed so that users or miners wanting to increase the 
block size beyond what is currently provided for will have to pay 
significantly (multiple orders of magnitude) more than they could possibly 
acquire from larger blocks, and the block size would re-adjust downward 
shortly after the cessation of that artificial fee pressure.

* Increased block frequency of compatibility blocks has no effect on the 
total issuance, so miners are not rewarded by faster blocks.

You are free to criticize Forward Blocks, but please do so by actually 
addressing the content of the proposal. Let's please hold a standard of 
intellectual excellence on this mailing list in which ideas are debated 
based on content-level arguments rather than repeating inaccurate takes 
from Reddit/Twitter.

To the topic of the thread: disabling time-warp will close off an unlikely 
and difficult-to-pull-off subsidy-draining attack, one that would 
necessarily require weeks of forewarning to activate and could easily be 
countered in other ways, with the tradeoff of removing the only known 
mechanism for upgrading the bitcoin protocol to larger effective block 
sizes while staying 100% compatible with un-upgraded nodes (all nodes see 
all transactions).

I think we should keep our options open.

-Mark

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-04-18  0:46     ` Mark F
@ 2024-04-18 10:04       ` 'Antoine Poinsot' via Bitcoin Development Mailing List
  2024-04-25  6:08         ` Antoine Riard
  0 siblings, 1 reply; 21+ messages in thread
From: 'Antoine Poinsot' via Bitcoin Development Mailing List @ 2024-04-18 10:04 UTC (permalink / raw)
  To: Mark F; +Cc: Bitcoin Development Mailing List


> You are free to criticize Forward Blocks, but please do so by actually addressing the content of the proposal. Let's please hold a standard of intellectual excellence on this mailing list in which ideas are debated based on content-level arguments rather than repeating inaccurate takes from Reddit/Twitter.

You are the one being dishonest here. Look, I understand you came up with a fun hack exploiting bugs in Bitcoin and are biased against fixing them. Yet the cost of not fixing timewarp objectively far exceeds the cost of making "forward blocks" impossible.

As already addressed in the DelvingBitcoin post:

- The timewarp bug significantly changes the 51% attacker threat model. Without exploiting it a censoring miner needs to continuously keep more hashrate than the rest of the network combined for as long as he wants to prevent some people from using Bitcoin. By exploiting timewarp the attacker can prevent everybody from using Bitcoin within 40 days.
- The timewarp bug allows an attacking miner to force on full nodes more block data than they agreed to. This is actually the attack leveraged by your proposal. I believe this variant of the attack is more likely to happen, simply for the reason that all participants of the system have a short term incentive to exploit this (yay lower fees! yay more block subsidy!), at the expense of the long term health of the system. As the block subsidy exponentially decreases miners are likely to start playing more games and that's a particularly attractive one. Given the level of mining centralization we are witnessing [0] i believe this is particularly worrisome.
- I'm very skeptical of arguments about how "we" can stop an attack which requires "weeks of forewarning". Who's "we"? How do we proceed, do all Bitcoin users coordinate and arbitrarily decide on the validity of a block? A few weeks is very little time if this is at all achievable. If you add on top of that the political implications of the previous point, it gets particularly messy.
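To put a rough number on the timeline (a toy model, illustrative only): if timestamps are faked so that every retarget hits the maximum 4x downward clamp, difficulty shrinks geometrically and each successive 2016-block period completes about 4x faster in wall-clock time, so the total time is bounded by a geometric series. Estimates like the 40 days above additionally account for the attacker holding only part of the hashrate, which stretches the early periods.

```python
# Toy model: every 2016-block retarget is manipulated to trigger the
# maximum 4x downward difficulty adjustment; at 1/4 the difficulty the
# next period's blocks arrive 4x faster (constant hashrate assumed).
PERIOD_SECONDS = 2016 * 600  # one retarget period at the honest rate

def days_to_crash_difficulty(periods: int) -> float:
    total_seconds = sum(PERIOD_SECONDS / 4 ** k for k in range(periods))
    return total_seconds / 86_400

# The series is bounded by 14 days * 4/3 ≈ 18.7 days of honest-rate mining.
print(round(days_to_crash_difficulty(10), 1))  # → 18.7
```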

I've got better things to do than to play "you are being dishonest! -no it's you -no you" games. So unless you bring something new to the table this will be my last reply to your accusations.

Antoine

[0] https://x.com/0xB10C/status/1780611768081121700
^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-04-18 10:04       ` 'Antoine Poinsot' via Bitcoin Development Mailing List
@ 2024-04-25  6:08         ` Antoine Riard
  2024-04-30 22:20           ` Mark F
  0 siblings, 1 reply; 21+ messages in thread
From: Antoine Riard @ 2024-04-25  6:08 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 10647 bytes --]

Hi Maaku,

> Every single concern mentioned here is addressed prominently in the
> paper/presentation for Forward Blocks:
>
> * Increased block frequency is only on the compatibility chain, where
> the content of blocks is deterministic anyway. There is no
> centralization pressure from the frequency of blocks on the
> compatibility chain, as the content of the blocks is not miner-editable
> in economically meaningful ways. Only the block frequency of the
> forward block chain matters, and here the block frequency is actually
> *reduced*, thereby decreasing centralization pressure.
>
> * The elastic block size adjustment mechanism proposed in the paper is
> purposefully constructed so that users or miners wanting to increase
> the block size beyond what is currently provided for will have to pay
> significantly (multiple orders of magnitude) more than they could
> possibly acquire from larger blocks, and the block size would re-adjust
> downward shortly after the cessation of that artificial fee pressure.
>
> * Increased block frequency of compatibility blocks has no effect on
> the total issuance, so miners are not rewarded by faster blocks.
>
> You are free to criticize Forward Blocks, but please do so by actually
> addressing the content of the proposal. Let's please hold a standard of
> intellectual excellence on this mailing list in which ideas are debated
> based on content-level arguments rather than repeating inaccurate takes
> from Reddit/Twitter.
>
> To the topic of the thread, disabling time-warp will close off an
> unlikely and difficult to pull off subsidy draining attack that to
> activate would necessarily require weeks of forewarning and could be
> easily countered in other ways, with the tradeoff of removing the only
> known mechanism for upgrading the bitcoin protocol to larger effective
> block sizes while staying 100% compatible with un-upgraded nodes (all
> nodes see all transactions).
>
> I think we should keep our options open.

I share your concerns about preserving the long-term evolvability of
bitcoin w.r.t. scalability options, under the security model as (very
roughly) described in the paper. Yet, from my understanding of the
forward blocks proposal as described in your paper, I wonder if the
forward block chain could be re-pegged to the main bitcoin chain using
the BIP141 extensible commitment structure (assuming a future
hypothetical soft-fork).

From my understanding, it's like a doubly linked list in C: you just
need a pointer in the BIP141 extensible commitment structure
referencing back the forward chain headers. If one wishes to avoid a
logically authoritative cross-chain commitment, one could leverage some
dynamic-membership multi-party signature (DMMS). This DMMS could even
be backed by proof-of-work based schemes.

The forward block chain can have a higher block frequency, and the
block headers can be compressed in a merkle tree committed into the
BIP141 extensible commitment structure. The compression structure would
be defined by the forward chain consensus algorithm, allowing a more
efficient accumulator than a merkle tree to be used.

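To make the commitment idea concrete, here is a minimal Python sketch
of compressing a batch of forward-chain headers into a single merkle
root. This is only an illustration under stated assumptions: the
80-byte headers are hypothetical placeholders, and the exact hook into
a BIP141-style extensible commitment is assumed, not specified in any
BIP.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    """Bitcoin-style merkle root: odd levels duplicate their last hash."""
    level = [dsha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical: compress five forward-chain headers (placeholders here)
# into one 32-byte root that an extensible commitment on the
# compatibility chain could reference.
forward_headers = [bytes([i]) * 80 for i in range(5)]
commitment = merkle_root(forward_headers)
assert len(commitment) == 32
```

Whatever the accumulator chosen, the point is that only 32 bytes per
batch of forward-chain headers would need to land on the main chain.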
The forward block chain can have an elastic block size,
consensus-bounded by miner fees over long periods of time. Transaction
elements can be committed in the block headers themselves, so there is
no centralization pressure on the main chain. Increased block frequency
or block size on the forward block chain has no effect on the total
issuance (modulo the game-theory limits of the known empirical effects
of colored coins on miner incentives).

I think the time-warp issue opens the door to economically
non-negligible exploitation under some scenarios, over some considered
time periods. If one can think of other ways to mitigate the issue that
are minimal and non-invasive w.r.t. current Bitcoin consensus rules and
respectful of un-upgraded nodes' resource consumption, feel free to
share them.

I can only share your take on maintaining a standard of intellectual
excellence on the mailing list, and avoiding a descent into
Reddit/Twitter-style "madness of the crowd" conversations.

Best,
Antoine

Le vendredi 19 avril 2024 à 01:19:23 UTC+1, Antoine Poinsot a écrit :

> You are free to criticize Forward Blocks, but please do so by actually 
> addressing the content of the proposal. Let's please hold a standard of 
> intellectual excellence on this mailing list in which ideas are debated 
> based on content-level arguments rather than repeating inaccurate takes 
> from Reddit/Twitter.
>
>
> You are the one being dishonest here. Look, i understand you came up with 
> a fun hack exploiting bugs in Bitcoin and you are biased against fixing 
> them. Yet, the cost of not fixing timewarp objectively far exceeds the 
> cost of making "forward blocks" impossible.
>
> As already addressed in the DelvingBitcoin post:
>
>    1. The timewarp bug significantly changes the 51% attacker threat 
>    model. Without exploiting it a censoring miner needs to continuously keep 
>    more hashrate than the rest of the network combined for as long as he wants 
>    to prevent some people from using Bitcoin. By exploiting timewarp the 
>    attacker can prevent everybody from using Bitcoin within 40 days.
>    2. The timewarp bug allows an attacking miner to force on full nodes 
>    more block data than they agreed to. This is actually the attack leveraged 
>    by your proposal. I believe this variant of the attack is more likely to 
>    happen, simply for the reason that all participants of the system have a 
>    short term incentive to exploit this (yay lower fees! yay more block 
>    subsidy!), at the expense of the long term health of the system. As the 
>    block subsidy exponentially decreases miners are likely to start playing 
>    more games and that's a particularly attractive one. Given the level of 
>    mining centralization we are witnessing [0] i believe this is particularly 
>    worrisome.
>    3. I'm very skeptical of arguments about how "we" can stop an attack 
>    which requires "weeks of forewarning". Who's we? How do we proceed, all 
>    Bitcoin users coordinate and arbitrarily decide on the validity of a block? 
>    A few weeks is very little time if this is at all achievable. If you add on 
>    top of that the political implications of the previous point it gets 
>    particularly messy.
>
>
> I've got better things to do than to play "you are being dishonest! -no 
> it's you -no you" games. So unless you bring something new to the table 
> this will be my last reply to your accusations.
>
> Antoine
>
> [0] https://x.com/0xB10C/status/1780611768081121700
> On Thursday, April 18th, 2024 at 2:46 AM, Mark F <ma...@friedenbach•org> 
> wrote:
>
> On Wednesday, March 27, 2024 at 4:00:34 AM UTC-7 Antoine Poinsot wrote:
>
> The only beneficial case I can remember about the timewarp issue is 
> "forwarding blocks" by maaku for on-chain scaling:
> http://freico.in/forward-blocks-scalingbitcoin-paper.pdf
>
>
> I would not qualify this hack of "beneficial". Besides the centralization 
> pressure of an increased block frequency, leveraging the timewarp to 
> achieve it would put the network constantly on the brink of being seriously 
> (fatally?) harmed. And this sets pernicious incentives too. Every 
> individual user has a short-term incentive to get lower fees by the 
> increased block space, at the expense of all users longer term. And every 
> individual miner has an incentive to get more block reward at the expense 
> of future miners. (And of course bigger miners benefit from an increased 
> block frequency.)
>
> Every single concern mentioned here is addressed prominently in the 
> paper/presentation for Forward Blocks:
>
> * Increased block frequency is only on the compatibility chain, where the 
> content of blocks is deterministic anyway. There is no centralization 
> pressure from the frequency of blocks on the compatibility chain, as the 
> content of the blocks is not miner-editable in economically meaningful 
> ways. Only the block frequency of the forward block chain matters, and here 
> the block frequency is actually *reduced*, thereby decreasing 
> centralization pressure.
>
> * The elastic block size adjustment mechanism proposed in the paper is 
> purposefully constructed so that users or miners wanting to increase the 
> block size beyond what is currently provided for will have to pay 
> significantly (multiple orders of magnitude) more than they could possibly 
> acquire from larger blocks, and the block size would re-adjust downward 
> shortly after the cessation of that artificial fee pressure.
>
> * Increased block frequency of compatibility blocks has no effect on the 
> total issuance, so miners are not rewarded by faster blocks.
>
> You are free to criticize Forward Blocks, but please do so by actually 
> addressing the content of the proposal. Let's please hold a standard of 
> intellectual excellence on this mailing list in which ideas are debated 
> based on content-level arguments rather than repeating inaccurate takes 
> from Reddit/Twitter.
>
> To the topic of the thread, disabling time-warp will close off an unlikely 
> and difficult to pull off subsidy draining attack that to activate would 
> necessarily require weeks of forewarning and could be easily countered in 
> other ways, with the tradeoff of removing the only known mechanism for 
> upgrading the bitcoin protocol to larger effective block sizes while 
> staying 100% compatible with un-upgraded nodes (all nodes see all 
> transactions).
>
> I think we should keep our options open.
>
> -Mark
>

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/3e93b83e-f0ea-43b9-8f77-f7b044fb3187n%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 14107 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-04-25  6:08         ` Antoine Riard
@ 2024-04-30 22:20           ` Mark F
  2024-05-06  1:10             ` Antoine Riard
  0 siblings, 1 reply; 21+ messages in thread
From: Mark F @ 2024-04-30 22:20 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 13244 bytes --]

Hi Antoine,

That's a reasonable suggestion, and one which has been discussed in the 
past under various names. Concrete ideas for a pegged extension-block side 
chain go back to 2014 at the very least. However there is one concrete way 
in which these proposals differ from forward blocks: the replay of 
transactions to the compatibility block chain. With forward blocks, even 
ancient versions of bitcoind that have been running since 2013 (picked as a 
cutoff because of the probabilistic fork caused by v0.8) will see all 
blocks, and have a complete listing of all UTXOs, and the content of 
transactions as they appear.

Does this matter? In principle you can just upgrade all nodes to
understand the extension block, but in practice, for a system as
diverse as bitcoin, support of older node versions is often required in
critical infrastructure. Think of all the block explorer and mempool
websites out there, for example, and the various network monitoring and
charting tools, many of which are poorly maintained and probably
running on two- or three-year-old versions of Bitcoin Core.

The forward blocks proposal uses the timewarp bug to enable (1) a 
proof-of-work change, (2) sharding, (3) subsidy schedule smoothing, and (4) 
a flexible block size, all without forcing any non-mining nodes to *have* 
to upgrade in order to regain visibility into the network. Yes it's an 
everything-and-the-kitchen-sink straw man proposal, but that was on purpose 
to show that all these so-called “hard-fork” changes can in fact be done as 
a soft-fork on vanilla bitcoin, while supporting even the oldest 
still-running nodes.

That changes if we "fix" the timewarp bug though. At the very least,
the flexible block size and subsidy schedule smoothing can't be
accomplished without exploiting the timewarp bug, as far as anyone can
tell. Therefore fixing the timewarp bug will _permanently_ cut off the
bitcoin community from ever having the ability to scale on-chain in a
backwards-compatible way, now or decades or centuries into the future.

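For readers unfamiliar with the mechanics being argued over: the
difficulty-retarget arithmetic that the timewarp exploit manipulates
can be sketched as follows. This is a simplified model, not Bitcoin
Core's actual code; the compact-bits target encoding and the
2015-vs-2016 interval off-by-one are deliberately elided.

```python
TARGET_TIMESPAN = 14 * 24 * 60 * 60  # two weeks, in seconds

def retarget(old_target, first_ts, last_ts):
    """New target from the first/last timestamps of a 2016-block window.
    A larger target means lower difficulty. The adjustment is clamped
    to a factor of 4 in either direction, as in Bitcoin."""
    actual = last_ts - first_ts
    actual = max(TARGET_TIMESPAN // 4, min(actual, TARGET_TIMESPAN * 4))
    return old_target * actual // TARGET_TIMESPAN

# Because retarget windows don't overlap, a majority miner can stamp the
# last block of each window far in the future and start the next window
# back at real time: every window *appears* to have taken weeks, so the
# target keeps growing (difficulty keeps falling) even though blocks are
# actually produced as fast as hashrate allows.
assert retarget(1000, 0, TARGET_TIMESPAN) == 1000       # on schedule
assert retarget(1000, 0, 10 * TARGET_TIMESPAN) == 4000  # clamped, 4x easier
```

Forward blocks deliberately drives this loop to keep difficulty low and
compatibility-chain blocks frequent; the cleanup proposal would forbid
the timestamp pattern that makes the loop possible.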
Once thrown, this fuse switch can't be undone. We should be damn sure we 
will never, ever need that capability before giving it up.

Mark

On Thursday, April 25, 2024 at 3:46:40 AM UTC-7 Antoine Riard wrote:

> Hi Maaku,
>
> > Every single concern mentioned here is addressed prominently in the
> > paper/presentation for Forward Blocks:
> >
> > * Increased block frequency is only on the compatibility chain,
> > where the content of blocks is deterministic anyway. There is no
> > centralization pressure from the frequency of blocks on the
> > compatibility chain, as the content of the blocks is not
> > miner-editable in economically meaningful ways. Only the block
> > frequency of the forward block chain matters, and here the block
> > frequency is actually *reduced*, thereby decreasing centralization
> > pressure.
> >
> > * The elastic block size adjustment mechanism proposed in the paper
> > is purposefully constructed so that users or miners wanting to
> > increase the block size beyond what is currently provided for will
> > have to pay significantly (multiple orders of magnitude) more than
> > they could possibly acquire from larger blocks, and the block size
> > would re-adjust downward shortly after the cessation of that
> > artificial fee pressure.
> >
> > * Increased block frequency of compatibility blocks has no effect on
> > the total issuance, so miners are not rewarded by faster blocks.
> >
> > You are free to criticize Forward Blocks, but please do so by
> > actually addressing the content of the proposal. Let's please hold a
> > standard of intellectual excellence on this mailing list in which
> > ideas are debated based on content-level arguments rather than
> > repeating inaccurate takes from Reddit/Twitter.
> >
> > To the topic of the thread, disabling time-warp will close off an
> > unlikely and difficult to pull off subsidy draining attack that to
> > activate would necessarily require weeks of forewarning and could be
> > easily countered in other ways, with the tradeoff of removing the
> > only known mechanism for upgrading the bitcoin protocol to larger
> > effective block sizes while staying 100% compatible with un-upgraded
> > nodes (all nodes see all transactions).
> >
> > I think we should keep our options open.
>
> I share your concerns about preserving the long-term evolvability of
> bitcoin w.r.t. scalability options, under the security model as (very
> roughly) described in the paper. Yet, from my understanding of the
> forward blocks proposal as described in your paper, I wonder if the
> forward block chain could be re-pegged to the main bitcoin chain using
> the BIP141 extensible commitment structure (assuming a future
> hypothetical soft-fork).
>
> From my understanding, it's like a doubly linked list in C: you just
> need a pointer in the BIP141 extensible commitment structure
> referencing back the forward chain headers. If one wishes to avoid a
> logically authoritative cross-chain commitment, one could leverage
> some dynamic-membership multi-party signature (DMMS). This DMMS could
> even be backed by proof-of-work based schemes.
>
> The forward block chain can have a higher block frequency, and the
> block headers can be compressed in a merkle tree committed into the
> BIP141 extensible commitment structure. The compression structure
> would be defined by the forward chain consensus algorithm, allowing a
> more efficient accumulator than a merkle tree to be used.
>
> The forward block chain can have an elastic block size,
> consensus-bounded by miner fees over long periods of time. Transaction
> elements can be committed in the block headers themselves, so there is
> no centralization pressure on the main chain. Increased block
> frequency or block size on the forward block chain has no effect on
> the total issuance (modulo the game-theory limits of the known
> empirical effects of colored coins on miner incentives).
>
> I think the time-warp issue opens the door to economically
> non-negligible exploitation under some scenarios, over some considered
> time periods. If one can think of other ways to mitigate the issue
> that are minimal and non-invasive w.r.t. current Bitcoin consensus
> rules and respectful of un-upgraded nodes' resource consumption, feel
> free to share them.
>
> I can only share your take on maintaining a standard of intellectual
> excellence on the mailing list, and avoiding a descent into
> Reddit/Twitter-style "madness of the crowd" conversations.
>
> Best,
> Antoine
>
> Le vendredi 19 avril 2024 à 01:19:23 UTC+1, Antoine Poinsot a écrit :
>
>> You are free to criticize Forward Blocks, but please do so by actually 
>> addressing the content of the proposal. Let's please hold a standard of 
>> intellectual excellence on this mailing list in which ideas are debated 
>> based on content-level arguments rather than repeating inaccurate takes 
>> from Reddit/Twitter.
>>
>>
>> You are the one being dishonest here. Look, i understand you came up with 
>> a fun hack exploiting bugs in Bitcoin and you are biased against fixing 
>> them. Yet, the cost of not fixing timewarp objectively far exceeds the 
>> cost of making "forward blocks" impossible.
>>
>> As already addressed in the DelvingBitcoin post:
>>
>>    1. The timewarp bug significantly changes the 51% attacker threat 
>>    model. Without exploiting it a censoring miner needs to continuously keep 
>>    more hashrate than the rest of the network combined for as long as he wants 
>>    to prevent some people from using Bitcoin. By exploiting timewarp the 
>>    attacker can prevent everybody from using Bitcoin within 40 days.
>>    2. The timewarp bug allows an attacking miner to force on full nodes 
>>    more block data than they agreed to. This is actually the attack leveraged 
>>    by your proposal. I believe this variant of the attack is more likely to 
>>    happen, simply for the reason that all participants of the system have a 
>>    short term incentive to exploit this (yay lower fees! yay more block 
>>    subsidy!), at the expense of the long term health of the system. As the 
>>    block subsidy exponentially decreases miners are likely to start playing 
>>    more games and that's a particularly attractive one. Given the level of 
>>    mining centralization we are witnessing [0] i believe this is particularly 
>>    worrisome.
>>    3. I'm very skeptical of arguments about how "we" can stop an attack 
>>    which requires "weeks of forewarning". Who's we? How do we proceed, all 
>>    Bitcoin users coordinate and arbitrarily decide on the validity of a block? 
>>    A few weeks is very little time if this is at all achievable. If you add on 
>>    top of that the political implications of the previous point it gets 
>>    particularly messy.
>>
>>
>> I've got better things to do than to play "you are being dishonest! -no 
>> it's you -no you" games. So unless you bring something new to the table 
>> this will be my last reply to your accusations.
>>
>> Antoine
>>
>> [0] https://x.com/0xB10C/status/1780611768081121700
>> On Thursday, April 18th, 2024 at 2:46 AM, Mark F <ma...@friedenbach•org> 
>> wrote:
>>
>> On Wednesday, March 27, 2024 at 4:00:34 AM UTC-7 Antoine Poinsot wrote:
>>
>> The only beneficial case I can remember about the timewarp issue is 
>> "forwarding blocks" by maaku for on-chain scaling:
>> http://freico.in/forward-blocks-scalingbitcoin-paper.pdf
>>
>>
>> I would not qualify this hack of "beneficial". Besides the centralization 
>> pressure of an increased block frequency, leveraging the timewarp to 
>> achieve it would put the network constantly on the brink of being seriously 
>> (fatally?) harmed. And this sets pernicious incentives too. Every 
>> individual user has a short-term incentive to get lower fees by the 
>> increased block space, at the expense of all users longer term. And every 
>> individual miner has an incentive to get more block reward at the expense 
>> of future miners. (And of course bigger miners benefit from an increased 
>> block frequency.)
>>
>> Every single concern mentioned here is addressed prominently in the 
>> paper/presentation for Forward Blocks:
>>
>> * Increased block frequency is only on the compatibility chain, where the 
>> content of blocks is deterministic anyway. There is no centralization 
>> pressure from the frequency of blocks on the compatibility chain, as the 
>> content of the blocks is not miner-editable in economically meaningful 
>> ways. Only the block frequency of the forward block chain matters, and here 
>> the block frequency is actually *reduced*, thereby decreasing 
>> centralization pressure.
>>
>> * The elastic block size adjustment mechanism proposed in the paper is 
>> purposefully constructed so that users or miners wanting to increase the 
>> block size beyond what is currently provided for will have to pay 
>> significantly (multiple orders of magnitude) more than they could possibly 
>> acquire from larger blocks, and the block size would re-adjust downward 
>> shortly after the cessation of that artificial fee pressure.
>>
>> * Increased block frequency of compatibility blocks has no effect on the 
>> total issuance, so miners are not rewarded by faster blocks.
>>
>> You are free to criticize Forward Blocks, but please do so by actually 
>> addressing the content of the proposal. Let's please hold a standard of 
>> intellectual excellence on this mailing list in which ideas are debated 
>> based on content-level arguments rather than repeating inaccurate takes 
>> from Reddit/Twitter.
>>
>> To the topic of the thread, disabling time-warp will close off an 
>> unlikely and difficult to pull off subsidy draining attack that to activate 
>> would necessarily require weeks of forewarning and could be easily 
>> countered in other ways, with the tradeoff of removing the only known 
>> mechanism for upgrading the bitcoin protocol to larger effective block 
>> sizes while staying 100% compatible with un-upgraded nodes (all nodes see 
>> all transactions).
>>
>> I think we should keep our options open.
>>
>> -Mark
>>

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/67ec72f6-b89f-4f8d-8629-0ebc8bdb7acfn%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 16621 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-04-30 22:20           ` Mark F
@ 2024-05-06  1:10             ` Antoine Riard
  0 siblings, 0 replies; 21+ messages in thread
From: Antoine Riard @ 2024-05-06  1:10 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 16902 bytes --]

Hi Maaku,

From reading back the "forward blocks" paper: while it effectively
guarantees an on-chain settlement throughput increase without requiring
old clients to upgrade, one could argue that the proof-of-work change
on the forward chain (unless it's a no-op double-SHA256), coupled with
the subsidy schedule smoothing, constitutes a substantial change to the
security model of already-mined UTXOs. Many hash functions can be used
as a proof-of-work primitive, but that doesn't mean they all rest on
equally strong assumptions or the same level of cryptanalysis.

In the end, a poorly chosen hash function for the forward chain could
result in lowering the security of everyone's coins (the >100 TH/s of
today is securing years-old coins from blocks mined under <1 TH/s). I
hold the opinion that fundamental changes affecting the security of
everyone's coins should be opted into by a super-majority of the
economy, including non-mining nodes. On the contrary, the "forward
blocks" proposal seems to make the point that it's okay to update the
proof-of-work algorithm with only a combined set of mining nodes and
upgraded non-mining nodes, which could hypothetically lead to a
"security downgrade" due to a weaker proof-of-work algorithm used on
the forward chain.

While your papers introduce formalizations of both the full-node cost
of validation and censorship resistance, one could also add "hardness
to change" as a property of the Bitcoin network we all cherish. If
tomorrow 10% of the hashrate were able to enforce a proof-of-work
upgrade to the broken SHA-1, I think we would all consider it a
security downgrade.

Beyond that, it is correct that we have a diversity of old nodes used
in the ecosystem, probably for block explorers and mempool websites.
Yet in practice they are more likely vectors of weakness for their
end-users, as Bitcoin Core sadly has a limited security-fix backport
policy, which certainly doesn't go as far back as v0.8. Whether we
should deplore the lack of a half-decade LTS release policy for Bitcoin
Core, like the Linux kernel has, is a legitimate conversation to have
(and progress on libbitcoinkernel would indeed make it easier). I think
we should rather invite operators of the oldest still-running nodes to
upgrade to more recent versions, before asking them to go through the
analytical process of weighing all the security/scalability trade-offs
of a proposal like forward blocks.

Finally, on leaving options open to bump the block interval as a
soft-fork on the compatibility chain: I think one could still have a
multi-stage forward blocks deployment, where a) a new difficulty
adjustment algorithm is introduced that bumps the block interval for
upgraded mining nodes, e.g. to a block every 400 s on average, and b)
this block-interval capacity increase is re-used for the forward
chain's flexible block size. Now, why a miner would opt into such a
block-interval-constraining soft-fork is a good question, in a paradigm
where they still get the same block subsidy distribution.

This is just a thought experiment aiming to invalidate the "as far as
anyone can tell" statement on foreclosing forever any on-chain
settlement throughput increase, if we fix the timewarp bug.

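The capacity arithmetic behind that two-stage thought experiment can be
sketched as follows. The 400 s spacing is a purely hypothetical
parameter picked for illustration, not a proposal.

```python
# Stage (a): a hypothetical soft-forked difficulty rule targeting one
# block every 400 s instead of 600 s.
OLD_SPACING = 600  # seconds per block today
NEW_SPACING = 400  # hypothetical soft-forked spacing

blocks_per_day_old = 86_400 // OLD_SPACING  # 144 blocks/day
blocks_per_day_new = 86_400 // NEW_SPACING  # 216 blocks/day

# Stage (b): the 50% extra block space per day freed by the tighter
# spacing becomes available for the forward chain's flexible block size.
assert blocks_per_day_old == 144 and blocks_per_day_new == 216
assert blocks_per_day_new / blocks_per_day_old == 1.5
```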
Best,
Antoine
Le mercredi 1 mai 2024 à 09:58:48 UTC+1, Mark F a écrit :

> Hi Antoine,
>
> That's a reasonable suggestion, and one which has been discussed in the 
> past under various names. Concrete ideas for a pegged extension-block side 
> chain go back to 2014 at the very least. However there is one concrete way 
> in which these proposals differ from forward blocks: the replay of 
> transactions to the compatibility block chain. With forward blocks, even 
> ancient versions of bitcoind that have been running since 2013 (picked as a 
> cutoff because of the probabilistic fork caused by v0.8) will see all 
> blocks, and have a complete listing of all UTXOs, and the content of 
> transactions as they appear.
>
> Does this matter? In principle you can just upgrade all nodes to 
> understand the extension block, but in practice for a system as diverse as 
> bitcoin support of older node versions is often required in critical 
> infrastructure. Think of all the block explorer and mempool websites out 
> there, for example, and various network monitoring and charting tools. Many 
> of which are poorly maintained and probably running on two or three year 
> old versions of Bitcoin Core.
>
> The forward blocks proposal uses the timewarp bug to enable (1) a 
> proof-of-work change, (2) sharding, (3) subsidy schedule smoothing, and (4) 
> a flexible block size, all without forcing any non-mining nodes to *have* 
> to upgrade in order to regain visibility into the network. Yes it's an 
> everything-and-the-kitchen-sink straw man proposal, but that was on purpose 
> to show that all these so-called “hard-fork” changes can in fact be done as 
> a soft-fork on vanilla bitcoin, while supporting even the oldest 
> still-running nodes.
>
> That changes if we "fix" the timewarp bug though. At the very least, the 
> flexible block size and subsidy schedule smoothing can't be accomplished 
> without exploiting the timewarp bug, as far as anyone can tell. Therefore 
> fixing the timewarp bug will _permanently_ cutoff the bitcoin community 
> from ever having the ability to scale on-chain in a backwards-compatible 
> way, now or decades or centuries into the future.
>
> Once thrown, this fuse switch can't be undone. We should be damn sure we 
> will never, ever need that capability before giving it up.
>
> Mark
>
> On Thursday, April 25, 2024 at 3:46:40 AM UTC-7 Antoine Riard wrote:
>
>> Hi Maaku,
>>
>> > Every single concern mentioned here is addressed prominently in
>> > the paper/presentation for Forward Blocks:
>> >
>> > * Increased block frequency is only on the compatibility chain,
>> > where the content of blocks is deterministic anyway. There is no
>> > centralization pressure from the frequency of blocks on the
>> > compatibility chain, as the content of the blocks is not
>> > miner-editable in economically meaningful ways. Only the block
>> > frequency of the forward block chain matters, and here the block
>> > frequency is actually *reduced*, thereby decreasing centralization
>> > pressure.
>> >
>> > * The elastic block size adjustment mechanism proposed in the
>> > paper is purposefully constructed so that users or miners wanting
>> > to increase the block size beyond what is currently provided for
>> > will have to pay significantly (multiple orders of magnitude) more
>> > than they could possibly acquire from larger blocks, and the block
>> > size would re-adjust downward shortly after the cessation of that
>> > artificial fee pressure.
>> >
>> > * Increased block frequency of compatibility blocks has no effect
>> > on the total issuance, so miners are not rewarded by faster blocks.
>> >
>> > You are free to criticize Forward Blocks, but please do so by
>> > actually addressing the content of the proposal. Let's please hold
>> > a standard of intellectual excellence on this mailing list in which
>> > ideas are debated based on content-level arguments rather than
>> > repeating inaccurate takes from Reddit/Twitter.
>> >
>> > To the topic of the thread, disabling time-warp will close off an
>> > unlikely and difficult to pull off subsidy draining attack that to
>> > activate would necessarily require weeks of forewarning and could
>> > be easily countered in other ways, with the tradeoff of removing
>> > the only known mechanism for upgrading the bitcoin protocol to
>> > larger effective block sizes while staying 100% compatible with
>> > un-upgraded nodes (all nodes see all transactions).
>> >
>> > I think we should keep our options open.
>>
>> I share your concerns about preserving the long-term evolvability of 
>> bitcoin w.r.t. scalability options under the security model very roughly 
>> described in the paper. Yet, from my understanding of the forward block 
>> proposal as described in your paper, I wonder if the forward block chain 
>> could be re-pegged to the main bitcoin chain using the BIP141 extensible 
>> commitment structure (assuming a future hypothetical soft-fork).
>>
>> From my understanding, it's like a doubly linked list in C: you just need 
>> a pointer in the BIP141 extensible commitment structure referencing back 
>> the forward chain headers. If one wishes no logically authoritative 
>> cross-chain commitment, one could leverage some dynamic-membership 
>> multi-party signature (DMMS). This DMMS could even be backed by 
>> proof-of-work-based schemes.
>>
>> The forward block chain can have a higher block-rate frequency, with the 
>> block headers compressed in a merkle tree committed in the BIP141 
>> extensible commitment structure. The compression structure could be 
>> defined by the forward chain consensus algorithm, allowing accumulators 
>> more efficient than merkle trees to be used.
>>
>> The forward block chain can have an elastic block size, consensus-bounded 
>> by miner fees over long periods of time. Transaction elements can simply 
>> be committed in the block headers themselves, so there is no 
>> centralization pressure on the main chain. Increased block frequency or 
>> block size on the forward block chain has no effect on the total issuance 
>> (modulo the game-theory limits of the known empirical effects of colored 
>> coins on miner incentives).
>>
>> I think the time-warp issue opens the door to economically non-negligible 
>> exploitation under some scenarios over the considered time periods. If 
>> one can think of other ways to mitigate the issue in a minimal and 
>> non-invasive way w.r.t. current Bitcoin consensus rules, while respecting 
>> un-upgraded nodes' resource consumption, you're free to share them.
>>
>> I can only share your take on maintaining a standard of intellectual 
>> excellence on the mailing list, and on avoiding a descent into Reddit / 
>> Twitter-style "madness of the crowd" conversations.
>>
>> Best,
>> Antoine
>>
>> Le vendredi 19 avril 2024 à 01:19:23 UTC+1, Antoine Poinsot a écrit :
>>
>>> You are free to criticize Forward Blocks, but please do so by actually 
>>> addressing the content of the proposal. Let's please hold a standard of 
>>> intellectual excellence on this mailing list in which ideas are debated 
>>> based on content-level arguments rather than repeating inaccurate takes 
>>> from Reddit/Twitter.
>>>
>>>
>>> You are the one being dishonest here. Look, i understand you came up 
>>> with a fun hack exploiting bugs in Bitcoin and you are biased against 
>>> fixing them. Yet, the cost of not fixing timewarp objectively far 
>>> exceeds the cost of making "forward blocks" impossible.
>>>
>>> As already addressed in the DelvingBitcoin post:
>>>
>>>    1. The timewarp bug significantly changes the 51% attacker threat 
>>>    model. Without exploiting it a censoring miner needs to continuously keep 
>>>    more hashrate than the rest of the network combined for as long as he wants 
>>>    to prevent some people from using Bitcoin. By exploiting timewarp the 
>>>    attacker can prevent everybody from using Bitcoin within 40 days.
>>>    2. The timewarp bug allows an attacking miner to force on full nodes 
>>>    more block data than they agreed to. This is actually the attack leveraged 
>>>    by your proposal. I believe this variant of the attack is more likely to 
>>>    happen, simply for the reason that all participants of the system have a 
>>>    short term incentive to exploit this (yay lower fees! yay more block 
>>>    subsidy!), at the expense of the long term health of the system. As the 
>>>    block subsidy exponentially decreases miners are likely to start playing 
>>>    more games and that's a particularly attractive one. Given the level of 
>>>    mining centralization we are witnessing [0] i believe this is particularly 
>>>    worrisome.
>>>    3. I'm very skeptical of arguments about how "we" can stop an attack 
>>>    which requires "weeks of forewarning". Who's we? How do we proceed, all 
>>>    Bitcoin users coordinate and arbitrarily decide on the validity of a block? 
>>>    A few weeks is very little time if this is at all achievable. If you add on 
>>>    top of that the political implications of the previous point it gets 
>>>    particularly messy.
>>>
>>>
>>> I've got better things to do than to play "you are being dishonest! -no 
>>> it's you -no you" games. So unless you bring something new to the table 
>>> this will be my last reply to your accusations.
>>>
>>> Antoine
>>>
>>> [0] https://x.com/0xB10C/status/1780611768081121700
>>> On Thursday, April 18th, 2024 at 2:46 AM, Mark F <ma...@friedenbach•org> 
>>> wrote:
>>>
>>> On Wednesday, March 27, 2024 at 4:00:34 AM UTC-7 Antoine Poinsot wrote:
>>>
>>> The only beneficial case I can remember about the timewarp issue is 
>>> "forward blocks" by maaku for on-chain scaling:
>>> http://freico.in/forward-blocks-scalingbitcoin-paper.pdf
>>>
>>>
>>> I would not qualify this hack of "beneficial". Besides the 
>>> centralization pressure of an increased block frequency, leveraging the 
>>> timewarp to achieve it would put the network constantly on the brink of 
>>> being seriously (fatally?) harmed. And this sets pernicious incentives too. 
>>> Every individual user has a short-term incentive to get lower fees by the 
>>> increased block space, at the expense of all users longer term. And every 
>>> individual miner has an incentive to get more block reward at the expense 
>>> of future miners. (And of course bigger miners benefit from an increased 
>>> block frequency.)
>>>
>>> Every single concern mentioned here is addressed prominently in the 
>>> paper/presentation for Forward Blocks:
>>>
>>> * Increased block frequency is only on the compatibility chain, where 
>>> the content of blocks is deterministic anyway. There is no centralization 
>>> pressure from the frequency of blocks on the compatibility chain, as the 
>>> content of the blocks is not miner-editable in economically meaningful 
>>> ways. Only the block frequency of the forward block chain matters, and here 
>>> the block frequency is actually *reduced*, thereby decreasing 
>>> centralization pressure.
>>>
>>> * The elastic block size adjustment mechanism proposed in the paper is 
>>> purposefully constructed so that users or miners wanting to increase the 
>>> block size beyond what is currently provided for will have to pay 
>>> significantly (multiple orders of magnitude) more than they could possibly 
>>> acquire from larger blocks, and the block size would re-adjust downward 
>>> shortly after the cessation of that artificial fee pressure.
>>>
>>> * Increased block frequency of compatibility blocks has no effect on the 
>>> total issuance, so miners are not rewarded by faster blocks.
>>>
>>> You are free to criticize Forward Blocks, but please do so by actually 
>>> addressing the content of the proposal. Let's please hold a standard of 
>>> intellectual excellence on this mailing list in which ideas are debated 
>>> based on content-level arguments rather than repeating inaccurate takes 
>>> from Reddit/Twitter.
>>>
>>> To the topic of the thread, disabling time-warp will close off an 
>>> unlikely and difficult to pull off subsidy draining attack that to activate 
>>> would necessarily require weeks of forewarning and could be easily 
>>> countered in other ways, with the tradeoff of removing the only known 
>>> mechanism for upgrading the bitcoin protocol to larger effective block 
>>> sizes while staying 100% compatible with un-upgraded nodes (all nodes see 
>>> all transactions).
>>>
>>> I think we should keep our options open.
>>>
>>> -Mark
>>>
>>> -- 
>>>
>>> You received this message because you are subscribed to the Google 
>>> Groups "Bitcoin Development Mailing List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to bitcoindev+...@googlegroups•com.
>>>
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/bitcoindev/62640263-077c-4ac7-98a6-d9c17913fca0n%40googlegroups.com
>>> .
>>>
>>>
>>>


* [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-03-24 18:10 [bitcoindev] Great Consensus Cleanup Revival 'Antoine Poinsot' via Bitcoin Development Mailing List
  2024-03-26 19:11 ` [bitcoindev] " Antoine Riard
@ 2024-06-17 22:15 ` Eric Voskuil
  2024-06-18  8:13   ` 'Antoine Poinsot' via Bitcoin Development Mailing List
  1 sibling, 1 reply; 21+ messages in thread
From: Eric Voskuil @ 2024-06-17 22:15 UTC (permalink / raw)
  To: Bitcoin Development Mailing List



Hi Antoine,

Regarding potential malleability pertaining to blocks with only 64 byte 
transactions, why is a deserialization-phase check for the coinbase input 
being a null point not sufficient mitigation (computational infeasibility) 
for any implementation that desires to perform permanent invalidity 
marking?

Best,
Eric

ref: Weaknesses in Bitcoin’s Merkle Root Construction 
<https://lists.linuxfoundation.org/pipermail/bitcoin-dev/attachments/20190225/a27d8837/attachment-0001.pdf>


* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-06-17 22:15 ` Eric Voskuil
@ 2024-06-18  8:13   ` 'Antoine Poinsot' via Bitcoin Development Mailing List
  2024-06-18 13:02     ` Eric Voskuil
  0 siblings, 1 reply; 21+ messages in thread
From: 'Antoine Poinsot' via Bitcoin Development Mailing List @ 2024-06-18  8:13 UTC (permalink / raw)
  To: Eric Voskuil; +Cc: Bitcoin Development Mailing List


Hi Eric,

It is. This is what is implemented in Bitcoin Core, see [this snippet](https://github.com/bitcoin/bitcoin/blob/41544b8f96dbc9c6b8998acd6522200d67cdc16d/src/validation.cpp#L4547-L4552) and section 4.1 of the document you reference:

> Another check that was also being done in CheckBlock() relates to the coinbase transaction: if the first transaction in a block fails the required structure of a coinbase – one input, with previous output hash of all zeros and index of all ones – then the block will fail validation. The side effect of this test being in CheckBlock() was that even though the block malleability discussed in section 3.1 was unknown, we were effectively protected against it – as described above, it would take at least 224 bits of work to produce a malleated block that passed the coinbase check.

Best,
Antoine
On Tuesday, June 18th, 2024 at 12:15 AM, Eric Voskuil <eric@voskuil•org> wrote:

> Hi Antoine,
>
> Regarding potential malleability pertaining to blocks with only 64 byte transactions, why is a deserialization-phase check for the coinbase input being a null point not sufficient mitigation (computational infeasibility) for any implementation that desires to perform permanent invalidity marking?
>
> Best,
> Eric
>
> ref: [Weaknesses in Bitcoin’s Merkle Root Construction](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/attachments/20190225/a27d8837/attachment-0001.pdf)
>


* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-06-18  8:13   ` 'Antoine Poinsot' via Bitcoin Development Mailing List
@ 2024-06-18 13:02     ` Eric Voskuil
  2024-06-21 13:09       ` 'Antoine Poinsot' via Bitcoin Development Mailing List
  0 siblings, 1 reply; 21+ messages in thread
From: Eric Voskuil @ 2024-06-18 13:02 UTC (permalink / raw)
  To: Bitcoin Development Mailing List



Right, a fairly obvious resolution. My question is why is that not 
sufficient - especially given that a similar (context free) check is 
required for duplicated tx malleability? We'd just be swapping one trivial 
check (first input not null) for another (tx size not 64 bytes).

Best,
Eric
On Tuesday, June 18, 2024 at 7:46:28 AM UTC-4 Antoine Poinsot wrote:

> Hi Eric,
>
> It is. This is what is implemented in Bitcoin Core, see this snippet 
> <https://github.com/bitcoin/bitcoin/blob/41544b8f96dbc9c6b8998acd6522200d67cdc16d/src/validation.cpp#L4547-L4552> 
> and section 4.1 of the document you reference:
>
> Another check that was also being done in CheckBlock() relates to the 
> coinbase transaction: if the first transaction in a block fails the 
> required structure of a coinbase – one input, with previous output hash 
> of all zeros and index of all ones – then the block will fail validation. 
> The side effect of this test being in CheckBlock() was that even though 
> the block malleability discussed in section 3.1 was unknown, we were 
> effectively protected against it – as described above, it would take at 
> least 224 bits of work to produce a malleated block that passed the 
> coinbase check.
>
>
> Best,
> Antoine
> On Tuesday, June 18th, 2024 at 12:15 AM, Eric Voskuil <er...@voskuil•org> 
> wrote:
>
> Hi Antoine,
>
> Regarding potential malleability pertaining to blocks with only 64 byte 
> transactions, why is a deserialization-phase check for the coinbase input 
> being a null point not sufficient mitigation (computational 
> infeasibility) for any implementation that desires to perform permanent 
> invalidity marking?
>
> Best,
> Eric
>
> ref: Weaknesses in Bitcoin’s Merkle Root Construction 
> <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/attachments/20190225/a27d8837/attachment-0001.pdf>
>
>


* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-06-18 13:02     ` Eric Voskuil
@ 2024-06-21 13:09       ` 'Antoine Poinsot' via Bitcoin Development Mailing List
  2024-06-24  0:35         ` Eric Voskuil
  0 siblings, 1 reply; 21+ messages in thread
From: 'Antoine Poinsot' via Bitcoin Development Mailing List @ 2024-06-21 13:09 UTC (permalink / raw)
  To: Eric Voskuil; +Cc: Bitcoin Development Mailing List


Making 64-byte transactions invalid is indeed not the most pressing bug fix, but i believe it's still a very nice cleanup to include if such a soft fork ends up being seriously proposed.

As discussed here it would let node implementations cache block failures at an earlier stage of validation. Not a large gain, but still nice to have.

As discussed in the DelvingBitcoin post it would also be a small bandwidth gain for SPV verifiers, as they wouldn't have to query a merkle proof for the coinbase transaction in addition to the one for the transaction they're interested in. It would also avoid a large footgun for anyone implementing software that verifies SPV proofs without knowing the intricacies of the protocol which make such proofs not secure on their own today.

Finally, it would get rid of a large footgun in general. Certainly, unique block hashes would be a useful property for Bitcoin to have. It's not far-fetched to expect current or future Bitcoin-related software to rely on this.

Outlawing 64-byte transactions is also a very narrow and straightforward change, with negligible confiscatory effect, as any 64-byte transaction would either be unspendable or anyone-can-spend. Therefore i believe the benefits of making them invalid outweigh the costs.
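To make the footgun concrete, here is a minimal sketch (plain Python, illustrative only, not consensus code) of why a 64-byte transaction is indistinguishable from an inner merkle node:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Two arbitrary "transactions" (their contents are irrelevant here).
tx_a = b"\x01" * 100
tx_b = b"\x02" * 100

# Merkle root of a two-transaction block: hash the concatenated txids.
txid_a, txid_b = dsha256(tx_a), dsha256(tx_b)
two_tx_root = dsha256(txid_a + txid_b)

# Now present a single 64-byte "transaction" whose serialization is
# exactly the two child hashes. Its leaf hash IS the inner node above.
fake_tx = txid_a + txid_b
assert len(fake_tx) == 64
one_tx_root = dsha256(fake_tx)

# Same merkle root, two different transaction lists: the block hash does
# not commit unambiguously to the block's contents.
assert one_tx_root == two_tx_root
```

(Whether those 64 bytes also deserialize as a structurally plausible transaction is the question analyzed in the "Weaknesses in Bitcoin's Merkle Root Construction" reference linked earlier in this thread.)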

Best,
Antoine

On Thursday, June 20th, 2024 at 6:57 PM, Eric Voskuil <eric@voskuil•org> wrote:

> Right, a fairly obvious resolution. My question is why is that not sufficient - especially given that a similar (context free) check is required for duplicated tx malleability? We'd just be swapping one trivial check (first input not null) for another (tx size not 64 bytes).
>
> Best,
> Eric
> On Tuesday, June 18, 2024 at 7:46:28 AM UTC-4 Antoine Poinsot wrote:
>
>> Hi Eric,
>>
>> It is. This is what is implemented in Bitcoin Core, see [this snippet](https://github.com/bitcoin/bitcoin/blob/41544b8f96dbc9c6b8998acd6522200d67cdc16d/src/validation.cpp#L4547-L4552) and section 4.1 of the document you reference:
>>
>>> Another check that was also being done in CheckBlock() relates to the coinbase transaction: if the first transaction in a block fails the required structure of a coinbase – one input, with previous output hash of all zeros and index of all ones – then the block will fail validation. The side effect of this test being in CheckBlock() was that even though the block malleability discussed in section 3.1 was unknown, we were effectively protected against it – as described above, it would take at least 224 bits of work to produce a malleated block that passed the coinbase check.
>>
>> Best,
>> Antoine
>>
>> On Tuesday, June 18th, 2024 at 12:15 AM, Eric Voskuil <er...@voskuil•org> wrote:
>>
>>> Hi Antoine,
>>>
>>> Regarding potential malleability pertaining to blocks with only 64 byte transactions, why is not a deserialization phase check for the coinbase input as a null point not sufficient mitigation (computational infeasibility) for any implementation that desires to perform permanent invalidity marking?
>>>
>>> Best,
>>> Eric
>>>
>>> ref: [Weaknesses in Bitcoin’s Merkle Root Construction](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/attachments/20190225/a27d8837/attachment-0001.pdf)
>>
>


* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-06-21 13:09       ` 'Antoine Poinsot' via Bitcoin Development Mailing List
@ 2024-06-24  0:35         ` Eric Voskuil
  2024-06-27  9:35           ` 'Antoine Poinsot' via Bitcoin Development Mailing List
  0 siblings, 1 reply; 21+ messages in thread
From: Eric Voskuil @ 2024-06-24  0:35 UTC (permalink / raw)
  To: Bitcoin Development Mailing List



Thanks for the responses Antoine.

>  As discussed here it would let node implementations cache block failures 
at an earlier stage of validation. Not a large gain, but still nice to have.

It is not clear to me how determining the coinbase size can be done at an 
earlier stage of validation than detection of the non-null coinbase. The 
former requires parsing the coinbase to determine its size, the latter 
requires parsing it to know if the point is null. Both of these can be 
performed as early as immediately following the socket read.

size check

(1) requires new consensus rule: 64 byte transactions (or coinbases?) are 
invalid.
(2) creates a consensus "seam"  (complexity) in txs, where < 64 bytes and > 
64 bytes are potentially valid.
(3) can be limited to reading/skipping header (80 bytes) plus parsing 0 - 
65 coinbase bytes.

point check

(1) requires no change.
(2) creates no consensus seam.
(3) can be limited to reading/skipping header (80 bytes) plus parsing 6 - 
43 coinbase bytes.

Not only is this not a large (performance) gain, it's not one at all.
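For illustration, the "point check" above can be sketched as follows (hypothetical Python helper, not any implementation's actual code; it assumes a well-formed serialization, handles only the common segwit marker case, and omits the one-input requirement of a full coinbase structure check):

```python
def read_varint(data: bytes, pos: int):
    """Decode a Bitcoin CompactSize integer, return (value, new_pos)."""
    first = data[pos]
    if first < 0xfd:
        return first, pos + 1
    width = {0xfd: 2, 0xfe: 4, 0xff: 8}[first]
    return int.from_bytes(data[pos + 1:pos + 1 + width], "little"), pos + 1 + width

def coinbase_prevout_is_null(raw_block: bytes) -> bool:
    """Check that the first input of the first tx spends the null point
    (previous output hash of all zeros, index of all ones), parsing only
    a handful of bytes past the 80-byte header."""
    pos = 80                                  # skip the block header
    _, pos = read_varint(raw_block, pos)      # tx count
    pos += 4                                  # coinbase tx version
    if raw_block[pos] == 0x00:                # segwit marker + flag (a legacy
        pos += 2                              # input count can never be zero)
    _, pos = read_varint(raw_block, pos)      # input count
    prevout_hash = raw_block[pos:pos + 32]
    prevout_index = raw_block[pos + 32:pos + 36]
    return prevout_hash == bytes(32) and prevout_index == b"\xff\xff\xff\xff"
```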

> It would also avoid a large footgun for anyone implementing software 
that verifies SPV proofs without knowing the intricacies of the 
protocol...

It seems to me that introducing an arbitrary tx size validity rule may create 
more potential implementation bugs than it resolves. And certainly anyone 
implementing such a verifier must know many intricacies of the protocol. 
This does not remove one, it introduces another - as there is not only a 
bifurcation around tx size but one around the question of whether this rule 
is active.
 
> Finally, it would get rid of a large footgun in general. 

I do not see this. I see a very ugly perpetual seam which will likely 
result in unexpected complexities over time.

> Certainly, unique block hashes would be a useful property for Bitcoin to 
have. It's not far-fetched to expect current or future Bitcoin-related 
software to rely on this.

This does not produce unmalleable block hashes. Duplicate tx hash 
malleation remains in either case, to the same effect. Without a resolution 
to both issues this is an empty promise.

The only possible benefit that I can see here is the possible very small 
bandwidth savings pertaining to SPV proofs. I would have a very hard time 
justifying adding any consensus rule to achieve only that result.
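As an illustration of the SPV point under discussion, a standard merkle branch verifier can be sketched as follows (hypothetical Python, not any wallet's actual code); note that on its own it cannot tell a 64-byte inner-node preimage from a transaction, which is why verifiers today also need a proof for the coinbase:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_branch(leaf: bytes, branch: list, index: int, root: bytes) -> bool:
    """Fold a leaf hash up the tree using its sibling hashes."""
    h = leaf
    for sibling in branch:
        h = dsha256(sibling + h) if index & 1 else dsha256(h + sibling)
        index >>= 1
    return h == root

# A four-leaf tree for illustration.
leaves = [dsha256(bytes([i])) for i in range(4)]
n01 = dsha256(leaves[0] + leaves[1])
n23 = dsha256(leaves[2] + leaves[3])
root = dsha256(n01 + n23)

# Valid proof for leaf 2: siblings are leaf 3, then node(0,1).
assert verify_branch(leaves[2], [leaves[3], n01], 2, root)

# The same routine also "verifies" an inner node presented as a leaf:
# nothing distinguishes a 64-byte inner-node preimage from a transaction.
assert verify_branch(n23, [n01], 1, root)
```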

Best,
Eric


* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-06-24  0:35         ` Eric Voskuil
@ 2024-06-27  9:35           ` 'Antoine Poinsot' via Bitcoin Development Mailing List
  2024-06-28 17:14             ` Eric Voskuil
  0 siblings, 1 reply; 21+ messages in thread
From: 'Antoine Poinsot' via Bitcoin Development Mailing List @ 2024-06-27  9:35 UTC (permalink / raw)
  To: Eric Voskuil; +Cc: Bitcoin Development Mailing List


> It is not clear to me how determining the coinbase size can be done at an earlier stage of validation than detection of the non-null coinbase.

My point wasn't about checking the coinbase size, it was about being able to cache the hash of a (non-malleated) invalid block as permanently invalid to avoid re-downloading and re-validating it.

> It seems to me that introducing an arbitrary tx size validity may create more potential implementation bugs than it resolves.

The potential for implementation bugs is a fair point to raise, but in this case i don't think it's a big concern. Verifying no transaction in a block is 64 bytes is as simple a check as you can get.

> And certainly anyone implementing such a verifier must know many intricacies of the protocol.

They need to know some, but i don't think it's reasonable to expect them to realize the merkle tree construction is such that an inner node may be confused with a 64-byte transaction.

> I do not see this. I see a very ugly perpetual seam which will likely result in unexpected complexities over time.

What makes you think making 64-byte transactions invalid could result in unexpected complexities? And why do you think it's likely?

> This does not produce unmalleable block hashes. Duplicate tx hash malleation remains in either case, to the same effect. Without a resolution to both issues this is an empty promise.

Duplicate txids have been invalid since 2012 (CVE-2012-2459). If 64-byte transactions are also made invalid, this would make it impossible for two valid blocks to have the same hash.
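For context, the duplicate-txid malleation (CVE-2012-2459) falls out of the merkle construction duplicating the last hash of an odd-length level, as a short sketch shows (illustrative Python, not consensus code):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    """Bitcoin-style merkle root: an odd-length level duplicates its last hash."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

a, b, c = (dsha256(bytes([i])) for i in range(3))

# A 3-transaction list and the same list with its last txid duplicated
# produce identical merkle roots, hence identical block hashes.
assert merkle_root([a, b, c]) == merkle_root([a, b, c, c])
```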

Best,
Antoine
On Monday, June 24th, 2024 at 2:35 AM, Eric Voskuil <eric@voskuil•org> wrote:

> Thanks for the responses Antoine.
>
>> As discussed here it would let node implementations cache block failures at an earlier stage of validation. Not a large gain, but still nice to have.
>
> It is not clear to me how determining the coinbase size can be done at an earlier stage of validation than detection of the non-null coinbase. The former requires parsing the coinbase to determine its size, the latter requires parsing it to know if the point is null. Both of these can be performed as early as immediately following the socket read.
>
> size check
>
> (1) requires new consensus rule: 64 byte transactions (or coinbases?) are invalid.
> (2) creates a consensus "seam" (complexity) in txs, where < 64 bytes and > 64 bytes are potentially valid.
> (3) can be limited to reading/skipping header (80 bytes) plus parsing 0 - 65 coinbase bytes.
>
> point check
>
> (1) requires no change.
> (2) creates no consensus seam.
> (3) can be limited to reading/skipping header (80 bytes) plus parsing 6 - 43 coinbase bytes.
>
> Not only is this not a large (performance) gain, it's not one at all.
>
>> It would also avoid a large footgun for anyone implementing a software verifying an SPV proof verifier and not knowing the intricacies of the protocol...
>
> It seems to me that introducing an arbitrary tx size validity may create more potential implementation bugs than it resolves. And certainly anyone implementing such a verifier must know many intricacies of the protocol. This does not remove one, it introduces another - as there is not only a bifurcation around tx size but one around the question of whether this rule is active.
>
>> Finally, it would get rid of a large footgun in general.
>
> I do not see this. I see a very ugly perpetual seam which will likely result in unexpected complexities over time.
>
>> Certainly, unique block hashes would be a useful property for Bitcoin to have. It's not far-fetched to expect current or future Bitcoin-related software to rely on this.
>
> This does not produce unmalleable block hashes. Duplicate tx hash malleation remains in either case, to the same effect. Without a resolution to both issues this is an empty promise.
>
> The only possible benefit that I can see here is the possible very small bandwidth savings pertaining to SPV proofs. I would have a very hard time justifying adding any consensus rule to achieve only that result.
>
> Best,
> Eric
>


* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-06-27  9:35           ` 'Antoine Poinsot' via Bitcoin Development Mailing List
@ 2024-06-28 17:14             ` Eric Voskuil
  2024-06-29  1:06               ` Antoine Riard
  0 siblings, 1 reply; 21+ messages in thread
From: Eric Voskuil @ 2024-06-28 17:14 UTC (permalink / raw)
  To: Bitcoin Development Mailing List



>> It is not clear to me how determining the coinbase size can be done at
>> an earlier stage of validation than detection of the non-null coinbase.
> My point wasn't about checking the coinbase size, it was about being able
> to cache the hash of a (non-malleated) invalid block as permanently invalid
> to avoid re-downloading and re-validating it.

This I understood, but I think you misunderstood me. Your point was 
specifically that, "it would let node implementations cache block failures 
at an earlier stage of validation." Since you have not addressed that 
aspect I assume you agree with my assertion above that the proposed rule 
does not actually achieve this.

Regarding the question of checking coinbase size, the issue is one of 
detecting (or preventing) hashes malleated via the 64 byte tx technique. A 
rule against 64 byte txs would allow this determination by checking the 
coinbase alone. If the coinbase is 64 bytes the block is invalid; if it is 
not, the block hash cannot have been malleated this way (all txs must have 
been 64 bytes, see previous reference).

In that case if the block is invalid the invalidity can be cached. But 
block invalidity cannot actually be cached until the block is fully 
validated. A rule to prohibit *all* 64 byte txs is counterproductive as it 
only adds additional checks on typically thousands of txs per block, 
serving no purpose.

>> It seems to me that introducing an arbitrary tx size validity may create
>> more potential implementation bugs than it resolves.
> The potential for implementation bugs is a fair point to raise, but in
> this case i don't think it's a big concern. Verifying no transaction in a
> block is 64 bytes is as simple a check as you can get.

You appear to be making the assumption that the check is performed after 
the block is fully parsed (contrary to your "earlier" criterion above). The 
only way to determine the tx sizes is to parse each tx (witness marker, 
input count, output count, input script sizes, output script sizes, witness 
sizes), skipping over the header, several constants, and the associated 
buffers. Doing this "early" to detect malleation is an extraordinarily 
complex and costly process. On the other hand, as I pointed out, a rational 
implementation would only do this early check for the coinbase.
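
For illustration, a minimal Python sketch of what such a walk entails 
(illustrative only, not code from any implementation; `read_varint` and 
`tx_size` are hypothetical helper names):

```python
def read_varint(buf, pos):
    """Decode a Bitcoin CompactSize integer, returning (value, next_pos)."""
    first = buf[pos]
    if first < 0xfd:
        return first, pos + 1
    width = {0xfd: 2, 0xfe: 4, 0xff: 8}[first]
    return int.from_bytes(buf[pos + 1:pos + 1 + width], "little"), pos + 1 + width

def tx_size(buf, pos):
    """Walk one serialized tx starting at pos and return its wire size."""
    start = pos
    pos += 4                                  # version
    segwit = buf[pos:pos + 2] == b"\x00\x01"  # optional witness marker + flag
    if segwit:
        pos += 2
    n_in, pos = read_varint(buf, pos)
    for _ in range(n_in):
        pos += 36                             # previous output point
        script_len, pos = read_varint(buf, pos)
        pos += script_len + 4                 # input script + sequence
    n_out, pos = read_varint(buf, pos)
    for _ in range(n_out):
        pos += 8                              # output value
        script_len, pos = read_varint(buf, pos)
        pos += script_len                     # output script
    if segwit:
        for _ in range(n_in):                 # witness stack for each input
            n_items, pos = read_varint(buf, pos)
            for _ in range(n_items):
                item_len, pos = read_varint(buf, pos)
                pos += item_len
    pos += 4                                  # locktime
    return pos - start
```

(For the 64 byte question the relevant figure is the non-witness 
serialization, so a real implementation would additionally subtract the 
marker, flag, and witness bytes.)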

Yet even determining the size of the coinbase is significantly more complex 
and costly than checking its first input point against null. That check 
(which is already necessary for validation) resolves the malleation 
question, and can be performed on the raw unparsed block buffer by simply 
skipping the header, tx count, and version, reading the witness marker and 
input count as necessary, offsetting to the 36 byte point buffer, and 
performing a byte comparison against 
[0000000000000000000000000000000000000000000000000000000000000000ffffffff].

This is:

(1) earlier
(2) faster
(3) simpler
(4) already consensus
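
As a minimal sketch of that raw-buffer comparison (illustrative Python, not 
code from any implementation; helper names are mine):

```python
# Null outpoint: 32 zero bytes followed by the 0xffffffff index.
NULL_POINT = b"\x00" * 32 + b"\xff" * 4

def read_varint(buf, pos):
    """Decode a Bitcoin CompactSize integer, returning (value, next_pos)."""
    first = buf[pos]
    if first < 0xfd:
        return first, pos + 1
    width = {0xfd: 2, 0xfe: 4, 0xff: 8}[first]
    return int.from_bytes(buf[pos + 1:pos + 1 + width], "little"), pos + 1 + width

def coinbase_has_null_point(block):
    """Compare the first input point of the first tx against null,
    reading only a handful of bytes from the raw block buffer."""
    pos = 80                               # skip the 80 byte header
    _, pos = read_varint(block, pos)       # skip the tx count
    pos += 4                               # skip the tx version
    if block[pos:pos + 2] == b"\x00\x01":  # optional witness marker + flag
        pos += 2
    _, pos = read_varint(block, pos)       # input count
    return block[pos:pos + 36] == NULL_POINT
```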

>> And certainly anyone implementing such a verifier must know many
>> intricacies of the protocol.
> They need to know some, but i don't think it's reasonable to expect them
> to realize the merkle tree construction is such that an inner node may be
> confused with a 64 bytes transaction.

A protocol developer needs to understand that the hash of an invalid block 
cannot be cached unless at least the coinbase has been restricted in size 
(under the proposal) -or- that the coinbase is a null point (presently or 
under the proposal). In the latter case the check is already performed in 
validation, so there is no way a block would presently be cached as invalid 
without checking it. The proposal adds a redundant check, even if limited 
to just the coinbase. [He must also understand the second type of 
malleability, discussed below.]

If this proposed rule were to activate we would implement it in a late stage 
tx.check, after txs/blocks had been fully deserialized. We would not check 
it at all in the case where the block is under checkpoint or milestone 
("assume valid"). In this case we would retain the early null point 
malleation check (along with the hash duplication malleation check) that we 
presently have, would validate tx commitments, and commit the block. In 
other words, the proposal adds unnecessary late stage checks only. 
Implementing it otherwise would just add complexity and hurt performance.

>> I do not see this. I see a very ugly perpetual seam which will likely
>> result in unexpected complexities over time.
> What makes you think making 64 bytes transactions invalid could result in
> unexpected complexities? And why do you think it's likely?

As described above, it's later, slower, more complex, unnecessarily broad, 
and a consensus change. Beyond that it creates an arbitrary size limit - 
not a lower or upper bound, but a slice out of the domain. Discontinuities 
are inherent complexities in computing. The "unexpected" part speaks for 
itself.

>> This does not produce unmalleable block hashes. Duplicate tx hash
>> malleation remains in either case, to the same effect. Without a
>> resolution to both issues this is an empty promise.
> Duplicate txids have been invalid since 2012 (CVE-2012-2459).

I think again here you may have misunderstood me. I was not making a point 
pertaining to BIP30. I was referring to the other form of block hash 
malleability, which results from duplicating sets of trailing txs in a 
single block (see previous reference). This malleation vector remains, even 
with invalid 64 byte txs. As I pointed out, this has the "same effect" as 
the 64 byte tx issue. Merkle hashing the set of txs is insufficient to 
determine identity. In one case the coinbase must be checked (null point or 
size) and in the other case the set of tx hashes must be checked for 
trailing duplicated sets. [Core performs this second check within the 
Merkle hashing algorithm (with far more comparisons than necessary), though 
this can be performed earlier and independently to avoid any hashing in the 
malleation case.]
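
For illustration, a Python sketch of such an early, hash-free check for 
trailing duplicated sets (my own illustration, not Core's or libbitcoin's 
code; parent nodes are modeled as child tuples, which preserves equality 
absent hash collisions):

```python
def has_trailing_duplicate(txids):
    """Detect the trailing-duplication (CVE-2012-2459) malleation on a tx
    hash list without performing any hashing: at each tree level, an even
    node count ending in a duplicated pair flags a possible mutation."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2 == 0 and level[-1] == level[-2]:
            return True
        if len(level) % 2 == 1:
            level.append(level[-1])  # standard Bitcoin odd-node duplication
        # Model each parent as the tuple of its children instead of hashing.
        level = [(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return False
```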

I would also point out, in the interest of correctness, that Core reverted 
its BIP30 soft fork implementation as a consequence of the BIP90 hard fork, 
which followed and required the BIP34 soft fork that presumably (but did 
not actually) preclude it. So it is no longer the case that duplicate tx 
hashes are invalid in implementation. As you have proposed in this rollup, 
this requires fixing again.

> If 64 bytes transactions are also made invalid, this would make it
> impossible for two valid blocks to have the same hash.

Aside from the BIP30/34/90 issue addressed above, it is already 
"impossible" (cannot be stronger than computationally infeasible) for two 
*valid* blocks to have the same hash. The proposal does not enable that 
objective, it is already the case. No malleated block is a valid block.

The proposal aims only to make it earlier or easier or faster to check for 
block hash malleation. And as I've pointed out above, it doesn't achieve 
those objectives. Possibly the perception that this would be the case is a 
consequence of implementation details, but as I have shown above, it is not 
in fact the case.

Given either type of malleation, the malleated block can be determined to 
be invalid by a context free check. But this knowledge cannot ever be 
cached against the block hash, since the same hash may be valid. Invalidity 
can only be cached once a non-malleated block is validated and determined 
to be invalid. Block hash malleations are and will remain invalid blocks 
with or without the proposal, and it will continue to be necessary to avoid 
caching invalidity against the malleation. As you said:

> it was about being able to cache the hash of a (non-malleated) invalid
> block as permanently invalid to avoid re-downloading and re-validating it.

This is already the case, and requires validating the full non-malleated 
block. Adding a redundant invalidity check doesn't improve this in any way.

Best,
Eric

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/9a4c4151-36ed-425a-a535-aa2837919a04n%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 8800 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-06-28 17:14             ` Eric Voskuil
@ 2024-06-29  1:06               ` Antoine Riard
  2024-06-29  1:31                 ` Eric Voskuil
  0 siblings, 1 reply; 21+ messages in thread
From: Antoine Riard @ 2024-06-29  1:06 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 10702 bytes --]

Hi Eric,

> It is not clear to me how determining the coinbase size can be done at an
> earlier stage of validation than detection of the non-null coinbase. The
> former requires parsing the coinbase to determine its size, the latter
> requires parsing it to know if the point is null. Both of these can be
> performed as early as immediately following the socket read.

If you have code in pure C with variables on the stack and no malloc, doing 
a check of the coinbase size after the socket read can certainly be more 
robust than checking a non-null pointer. And note that the attack game 
we're solving is a peer passing a sequence of malleated blocks whose 
headers have already been verified, so there we can only make weaker 
assumptions about computational infeasibility.

Introducing a discontinuity, such as ensuring that leaf and non-leaf merkle 
tree nodes belong to different domains, can obviously be a source of 
additional software complexity. However, from a security perspective, when 
discontinuities constitute computational asymmetries to the advantage of 
validating nodes, I think they can be worthy of consideration for soft-fork 
extensions.

After looking at the proposed implementation in Bitcoin Inquisition, I 
think it is correct that the efficiency of the 64 byte transaction 
technique for checking full block malleability is very implementation 
dependent. Sadly, I cannot think of other directions to alleviate this 
dependence on the ordering of the block validation checks from socket read.

In my opinion, it would be more constructive to come up with a fully 
fleshed-out "fast block malleability validation" algorithm in the spirit of 
SipHash (and to see it implemented and benchmarked in Core) before further 
considering 64 byte transaction invalidity at the consensus level.

Best,
Antoine (the other one).

On Friday, June 28, 2024 at 19:49:39 UTC+1, Eric Voskuil wrote:

> >> It is not clear to me how determining the coinbase size can be done at 
> an earlier stage of validation than detection of the non-null coinbase.
> > My point wasn't about checking the coinbase size, it was about being 
> able to cache the hash of a (non-malleated) invalid block as permanently 
> invalid to avoid re-downloading and re-validating it.
>
> This I understood, but I think you misunderstood me. Your point was 
> specifically that, "it would let node implementations cache block failures 
> at an earlier stage of validation." Since you have not addressed that 
> aspect I assume you agree with my assertion above that the proposed rule 
> does not actually achieve this.
>
> Regarding the question of checking coinbase size, the issue is of 
> detecting (or preventing) hashes mallied via the 64 byte tx technique. A 
> rule against 64 byte txs would allow this determination by checking the 
> coinbase alone. If the coinbase is 64 bytes the block is invalid, if it is 
> not the block hash cannot have been mallied (all txs must have been 64 
> bytes, see previous reference).
>
> In that case if the block is invalid the invalidity can be cached. But 
> block invalidity cannot actually be cached until the block is fully 
> validated. A rule to prohibit *all* 64 byte txs is counterproductive as it 
> only adds additional checks on typically thousands of txs per block, 
> serving no purpose.
>
> >> It seems to me that introducing an arbitrary tx size validity may 
> create more potential implementation bugs than it resolves.
> > The potential for implementation bugs is a fair point to raise, but in 
> this case i don't think it's a big concern. Verifying no transaction in a 
> block is 64 bytes is as simple a check as you can get.
>
> You appear to be making the assumption that the check is performed after 
> the block is fully parsed (contrary to your "earlier" criterion above). The 
> only way to determine the tx sizes is to parse each tx for witness marker, 
> input count, output count, input script sizes, output script sizes, witness 
> sizes, and skipping over the header, several constants, and associated 
> buffers. Doing this "early" to detect malleation is an extraordinarily 
> complex and costly process. On the other hand, as I pointed out, a rational 
> implementation would only do this early check for the coinbase.
>
> Yet even determining the size of the coinbase is significantly more 
> complex and costly than checking its first input point against null. That 
> check (which is already necessary for validation) resolves the malleation 
> question, can be performed on the raw unparsed block buffer by simply 
> skipping header, version, reading input count and witness marker as 
> necessary, offsetting to the 36 byte point buffer, and performing a byte 
> comparison against 
> [0000000000000000000000000000000000000000000000000000000000000000ffffffff].
>
> This is:
>
> (1) earlier
> (2) faster
> (3) simpler
> (4) already consensus
>
> >> And certainly anyone implementing such a verifier must know many 
> intricacies of the protocol.
> > They need to know some, but i don't think it's reasonable to expect them 
> to realize the merkle tree construction is such that an inner node may be 
> confused with a 64 bytes transaction.
>
> A protocol developer needs to understand that the hash of an invalid block 
> cannot be cached unless at least the coinbase has been restricted in size 
> (under the proposal) -or- that the coinbase is a null point (presently or 
> under the proposal). In the latter case the check is already performed in 
> validation, so there is no way a block would presently be cached as invalid 
> without checking it. The proposal adds a redundant check, even if limited 
> to just the coinbase. [He must also understand the second type of 
> malleability, discussed below.]
>
> If this proposed rule was to activate we would implement it in a late 
> stage tx.check, after txs/blocks had been fully deserialized. We would not 
> check it an all in the case where the block is under checkpoint or 
> milestone ("assume valid"). In this case we would retain the early null 
> point malleation check (along with the hash duplication malleation check) 
> that we presently have, would validate tx commitments, and commit the 
> block. In other words, the proposal adds unnecessary late stage checks 
> only. Implementing it otherwise would just add complexity and hurt 
> performance.
>
> >> I do not see this. I see a very ugly perpetual seam which will likely 
> result in unexpected complexities over time.
> > What makes you think making 64 bytes transactions invalid could result 
> in unexpected complexities? And why do you think it's likely?
>
> As described above, it's later, slower, more complex, unnecessarily broad, 
> and a consensus change. Beyond that it creates an arbitrary size limit - 
> not a lower or upper bound, but a slice out of the domain. Discontinuities 
> are inherent complexities in computing. The "unexpected" part speaks for 
> itself.
>
> >> This does not produce unmalleable block hashes. Duplicate tx hash 
> malleation remains in either case, to the same effect. Without a resolution 
> to both issues this is an empty promise.
> > Duplicate txids have been invalid since 2012 (CVE-2012-2459).
>
> I think again here you may have misunderstood me. I was not making a point 
> pertaining to BIP30. I was referring to the other form of block hash 
> malleability, which results from duplicating sets of trailing txs in a 
> single block (see previous reference). This malleation vector remains, even 
> with invalid 64 byte txs. As I pointed out, this has the "same effect" as 
> the 64 byte tx issue. Merkle hashing the set of txs is insufficient to 
> determine identity. In one case the coinbase must be checked (null point or 
> size) and in the other case the set of tx hashes must be checked for 
> trailing duplicated sets. [Core performs this second check within the 
> Merkle hashing algorithm (with far more comparisons than necessary), though 
> this can be performed earlier and independently to avoid any hashing in the 
> malleation case.]
>
> I would also point out in the interest of correctness that Core reverted 
> its BIP30 soft fork implementation as a consequence of the BIP90 hard fork, 
> following and requiring the BIP34 soft fork that presumably precluded it 
> but didn't, so it is no longer the case that duplicate tx hashes are 
> invalid in implementation. As you have proposed in this rollup, this 
> requires fixing again.
>
> > If 64 bytes transactions are also made invalid, this would make it 
> impossible for two valid blocks to have the same hash.
>
> Aside from the BIP30/34/90 issue addressed above, it is already 
> "impossible" (cannot be stronger than computationally infeasible) for two 
> *valid* blocks to have the same hash. The proposal does not enable that 
> objective, it is already the case. No malleated block is a valid block.
>
> The proposal aims only to make it earlier or easier or faster to check for 
> block hash malleation. And as I've pointed out above, it doesn't achieve 
> those objectives. Possibly the perception that this would be the case is a 
> consequence of implementation details, but as I have shown above, it is not 
> in fact the case.
>
> Given either type of malleation, the malleated block can be determined to 
> be invalid by a context free check. But this knowledge cannot ever be 
> cached against the block hash, since the same hash may be valid. Invalidity 
> can only be cached once a non-mallied block is validated and determined to 
> be invalid. Block hash malleations are and will remain invalid blocks with 
> or without the proposal, and it will continue to be necessary to avoid 
> caching invalid against the malleation. As you said:
>
> > it was about being able to cache the hash of a (non-malleated) invalid 
> block as permanently invalid to avoid re-downloading and re-validating it.
>
> This is already the case, and requires validating the full non-malleated 
> block. Adding a redundant invalidity check doesn't improve this in any way.
>
> Best,
> Eric

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/3f0064f9-54bd-46a7-9d9a-c54b99aca7b2n%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 11117 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-06-29  1:06               ` Antoine Riard
@ 2024-06-29  1:31                 ` Eric Voskuil
  2024-06-29  1:53                   ` Antoine Riard
  0 siblings, 1 reply; 21+ messages in thread
From: Eric Voskuil @ 2024-06-29  1:31 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 831 bytes --]

Hello Antoine (other),

>  If you have code in pure C with variables on the stack no malloc, doing
>  a check of the coinbase size after the socket read can be certainly more
>  robust than checking a non-null pointer.

Can you please clarify this for me? When you say "non-null pointer" do you 
mean C pointer or transaction input "null point" (sequence of 32 repeating 
0x00 bytes and 4 0xff)? What do you mean by "more robust"?

Thanks,
Eric

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/26b7321b-cc64-44b9-bc95-a4d8feb701e5n%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 1145 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-06-29  1:31                 ` Eric Voskuil
@ 2024-06-29  1:53                   ` Antoine Riard
  2024-06-29 20:29                     ` Eric Voskuil
  0 siblings, 1 reply; 21+ messages in thread
From: Antoine Riard @ 2024-06-29  1:53 UTC (permalink / raw)
  To: Eric Voskuil; +Cc: Bitcoin Development Mailing List

[-- Attachment #1: Type: text/plain, Size: 2041 bytes --]

Hi Eric,

I meant a C pointer, and by "more robust" I meant robustness against any
kind of memory / CPU DoS arising from memory management (e.g. under a
hypothetical rule checking the 64 byte size for all block transactions).

In my understanding, the validation logic equivalent of core's CheckBlock
is libbitcoin's block::check():
https://github.com/libbitcoin/libbitcoin-system/blob/master/src/chain/block.cpp#L751

Best,
Antoine

On Sat, Jun 29, 2024 at 02:33, Eric Voskuil <eric@voskuil•org> wrote:

> Hello Antoine (other),
>
> >  If you have code in pure C with variables on the stack no malloc, doing
> a check of the coinbase size after the socket read can be certainly more
> robust than checking a non-null pointer.
>
> Can you please clarify this for me? When you say "non-null pointer" do you
> mean C pointer or transaction input "null point" (sequence of 32 repeating
> 0x00 bytes and 4 0xff)? What do you mean by "more robust"?
>
> Thanks,
> Eric
>
> --
> You received this message because you are subscribed to a topic in the
> Google Groups "Bitcoin Development Mailing List" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/bitcoindev/CAfm7D5ppjo/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> bitcoindev+unsubscribe@googlegroups•com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/bitcoindev/26b7321b-cc64-44b9-bc95-a4d8feb701e5n%40googlegroups.com
> <https://groups.google.com/d/msgid/bitcoindev/26b7321b-cc64-44b9-bc95-a4d8feb701e5n%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/CALZpt%2BEwVyaz1%3DA6hOOycqFGJs%2BzxyYYocZixTJgVmzZezUs9Q%40mail.gmail.com.

[-- Attachment #2: Type: text/html, Size: 3091 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-06-29  1:53                   ` Antoine Riard
@ 2024-06-29 20:29                     ` Eric Voskuil
  2024-06-29 20:40                       ` Eric Voskuil
  0 siblings, 1 reply; 21+ messages in thread
From: Eric Voskuil @ 2024-06-29 20:29 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 1578 bytes --]

> I meant C pointer and by "more robust" any kind of memory / CPU DoS
> arising due to memory management (e.g. hypothetical rule checking the 64
> bytes size for all block transactions).

Ok, thanks for clarifying. I'm still not making the connection to "checking 
a non-null [C] pointer" but that's prob on me.

> In my understanding, the validation logic equivalent of core's CheckBlock
> is libbitcoin's block::check():
> https://github.com/libbitcoin/libbitcoin-system/blob/master/src/chain/block.cpp#L751

Yes, a rough correlation but not necessarily equivalence. Note that 
block.check has context free and contextual overrides.

The 'bypass' parameter indicates a block under checkpoint or milestone 
("assume valid"). In this case we must check Merkle root, witness 
commitment, and both types of malleation - as the purpose is to establish 
identity. Absent 'bypass' the typical checks are performed, and therefore a 
malleation check is not required here. The "type64" malleation is subsumed 
by the is_first_non_coinbase check and the "type32" malleation is subsumed 
by the is_internal_double_spend check.

I have some other thoughts on this that I'll post separately.

Best,
Eric

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/607a2233-ac12-4a80-ae4a-08341b3549b3n%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 1906 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [bitcoindev] Re: Great Consensus Cleanup Revival
  2024-06-29 20:29                     ` Eric Voskuil
@ 2024-06-29 20:40                       ` Eric Voskuil
  0 siblings, 0 replies; 21+ messages in thread
From: Eric Voskuil @ 2024-06-29 20:40 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 5861 bytes --]

Caching identity in the case of invalidity is a more interesting question 
than it might seem.

Background: A fully-validated block has established identity in its block 
hash. However an invalid block message may include the same block header, 
producing the same hash, but with any kind of nonsense following the 
header. The purpose of the transaction and witness commitments is of course 
to establish this identity, so these two checks are therefore necessary 
even under checkpoint/milestone. And then of course the two Merkle tree 
issues complicate the tx commitment (the integrity of the witness 
commitment is assured by that of the tx commitment).

So what does it mean to speak of a block hash derived from:

(1) a block message with an unparseable header?
(2) a block message with parseable but invalid header?
(3) a block message with valid header but unparseable tx data?
(4) a block message with valid header but parseable invalid uncommitted tx 
data?
(5) a block message with valid header but parseable invalid malleated 
committed tx data?
(6) a block message with valid header but parseable invalid unmalleated 
committed tx data?
(7) a block message with valid header but uncommitted valid tx data?
(8) a block message with valid header but malleated committed valid tx data?
(9) a block message with valid header but unmalleated committed valid tx 
data?

Note that only the #9 p2p block message contains an actual Bitcoin block, 
the others are bogus messages. In all cases the message can be sha256 
hashed to establish the identity of the *message*. And if one's objective 
is to reject repeating bogus messages, this might be a useful strategy. 
It's already part of the p2p protocol, is orders of magnitude cheaper to 
produce than a Merkle root, and has no identity issues.

The concept of Bitcoin block hash as unique identifier for invalid p2p 
block messages is problematic. Apart from the malleation question, what is 
the Bitcoin block hash for a message with unparseable data (#1 and #3)? 
Such messages are trivial to produce and have no block hash. What is the 
useful identifier for a block with malleated commitments (#5 and #8) or 
invalid commitments (#4 and #7) - valid txs or otherwise?

The stated objective for a consensus rule to invalidate all 64 byte txs is:

> being able to cache the hash of a (non-malleated) invalid block as
> permanently invalid to avoid re-downloading and re-validating it.

This seems reasonable at first glance, but given the list of scenarios 
above, which does it apply to? Presumably the invalid header (#2) doesn't 
get this far because of headers-first. That leaves just invalid blocks with 
useful block hash identifiers (#6). In all other cases the message is 
simply discarded. In this case the attempt is to move category #5 into 
category #6 by prohibiting 64 byte txs.

The requirement to "avoid re-downloading and re-validating it" is about 
performance, presumably minimizing initial block download/catch-up time. 
There is a computational cost to producing 64 byte malleations and none for 
any of the other bogus block message categories above, including the other 
form of malleation. Furthermore, 64 byte malleation has almost zero cost to 
preclude. No hashing and not even true header or tx parsing are required. 
Only a handful of bytes must be read from the raw message before it can be 
discarded presently.

That's actually far cheaper than any of the other scenarios that again, 
have no cost to produce. The other type of malleation requires parsing all 
of the txs in the block and hashing and comparing some or all of them. In 
other words, if there is an attack scenario, that must be addressed before 
this can be meaningful. In fact all of the other bogus message scenarios 
(with tx data) will remain more expensive to discard than this one.

The problem arises from trying to optimize dismissal by storing an 
identifier. Just *producing* the identifier is orders of magnitude more 
costly than simply dismissing this bogus message. I can't imagine why any 
implementation would want to compute, store, retrieve, recompute, and 
compare hashes when the alternative is just dismissing the bogus messages 
with no hashing at all.

Bogus messages will arrive, they do not even have to be requested. The 
simplest are dealt with by parse failure. What defines a parse is entirely 
subjective. Generally it's "structural" but nothing precludes incorporating 
a requirement for a necessary leading pattern in the stream, sort of like 
how the witness pattern is identified. If we were going to prioritize early 
dismissal this is where we would put it.

However, there is a tradeoff in terms of early dismissal. Looking up 
invalid hashes is a costly tradeoff, which becomes multiplied by every 
block validated. For example, expending 1 millisecond in hash/lookup to 
save 1 second of validation time in the failure case seems like a 
reasonable tradeoff, until you multiply across the whole chain. 1 ms 
becomes 14 minutes across the chain, just to save a second for each mallied 
block encountered. That means you need to have encountered 840 such mallied 
blocks just to break even. Early dismissing the block for non-null coinbase 
point (without hashing anything) would be on the order of 1000x faster than 
that (breakeven at 1 encounter). So why the block hash cache requirement? 
It cannot be applied to many scenarios, and cannot be optimal in this one.
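
The arithmetic above, made explicit (the 1 ms lookup cost and 1 s saving 
are the assumed figures from the text, not measurements):

```python
blocks = 840_000        # approximate chain height used above
lookup_cost_ms = 1      # assumed per-block cost of hashing plus cache lookup
saving_per_hit_s = 1    # assumed validation time saved per cached malleated block

total_cost_s = blocks * lookup_cost_ms / 1000
breakeven_hits = total_cost_s / saving_per_hit_s

print(total_cost_s / 60)   # 14.0 minutes spent on lookups across the chain
print(breakeven_hits)      # 840.0 malleated blocks needed just to break even
```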

Eric

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/3dceca4d-03a8-44f3-be64-396702247fadn%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 6253 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2024-06-29 20:42 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-03-24 18:10 [bitcoindev] Great Consensus Cleanup Revival 'Antoine Poinsot' via Bitcoin Development Mailing List
2024-03-26 19:11 ` [bitcoindev] " Antoine Riard
2024-03-27 10:35   ` 'Antoine Poinsot' via Bitcoin Development Mailing List
2024-03-27 18:57     ` Antoine Riard
2024-04-18  0:46     ` Mark F
2024-04-18 10:04       ` 'Antoine Poinsot' via Bitcoin Development Mailing List
2024-04-25  6:08         ` Antoine Riard
2024-04-30 22:20           ` Mark F
2024-05-06  1:10             ` Antoine Riard
2024-06-17 22:15 ` Eric Voskuil
2024-06-18  8:13   ` 'Antoine Poinsot' via Bitcoin Development Mailing List
2024-06-18 13:02     ` Eric Voskuil
2024-06-21 13:09       ` 'Antoine Poinsot' via Bitcoin Development Mailing List
2024-06-24  0:35         ` Eric Voskuil
2024-06-27  9:35           ` 'Antoine Poinsot' via Bitcoin Development Mailing List
2024-06-28 17:14             ` Eric Voskuil
2024-06-29  1:06               ` Antoine Riard
2024-06-29  1:31                 ` Eric Voskuil
2024-06-29  1:53                   ` Antoine Riard
2024-06-29 20:29                     ` Eric Voskuil
2024-06-29 20:40                       ` Eric Voskuil

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox