public inbox for bitcoindev@googlegroups.com
 help / color / mirror / Atom feed
* [bitcoindev] Proposing a P2QRH BIP towards a quantum resistant soft fork
@ 2024-06-08 21:04 Hunter Beast
  2024-06-14 13:51 ` [bitcoindev] " Pierre-Luc Dallaire-Demers
  0 siblings, 1 reply; 10+ messages in thread
From: Hunter Beast @ 2024-06-08 21:04 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 1630 bytes --]

The motivation for this BIP is to provide a concrete proposal for adding 
quantum resistance to Bitcoin. We will need to pick a signature algorithm, 
implement it, and have it ready in the event of a quantum emergency. There 
will be time to adopt it. Importantly, this first step is a more substantive 
answer to those with concerns than simply, "quantum computers may pose a 
threat, but we likely don't have to worry about that for a long time". 
Bitcoin development and activation are slow, so it's important that those 
with low time preference start discussing this as a serious possibility 
sooner rather than later.

This is meant to be the first in a series of BIPs regarding a hypothetical 
"QuBit" soft fork. The BIP is intended to propose concrete solutions, even 
if they're early and incomplete, so that Bitcoin developers are aware of 
the existence of these solutions and their potential.

This is just a rough draft and not the finished BIP. I'd like to validate 
the approach and hear if I should continue working on it, whether serious 
changes are needed, or if this truly isn't a worthwhile endeavor right now.

The BIP can be found here:
https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki

Thank you for your time.

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/62fd28ab-e8b5-4cfc-b5ae-0d5a033af057n%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 1991 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [bitcoindev] Re: Proposing a P2QRH BIP towards a quantum resistant soft fork
  2024-06-08 21:04 [bitcoindev] Proposing a P2QRH BIP towards a quantum resistant soft fork Hunter Beast
@ 2024-06-14 13:51 ` Pierre-Luc Dallaire-Demers
  2024-06-14 14:28   ` Hunter Beast
  0 siblings, 1 reply; 10+ messages in thread
From: Pierre-Luc Dallaire-Demers @ 2024-06-14 13:51 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 2474 bytes --]

SQIsign is blockchain friendly but also very new, so I would recommend 
adding a hash-based backup key in case an attack on SQIsign is found in the 
future (recall that SIDH broke over the span of a 
weekend https://eprint.iacr.org/2022/975.pdf).
Backup keys can be added in the form of a Merkle tree where one branch 
would contain the SQIsign public key and the other the public key of the 
recovery hash-based scheme. For most transactions it would only add one bit 
to specify the SQIsign branch.
The hash-based method could be SPHINCS+, which is standardized by NIST but 
requires adding extra code, or Lamport, which is not standardized but can 
be verified on-chain with OP_CAT.
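
For illustration, here is a minimal Python sketch of the two-leaf Merkle 
commitment described above, assuming SHA-256 as the tree hash; the key 
values and domain-separation prefixes are placeholders, not part of any 
specification:

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    # Placeholder keys; real SQIsign / SPHINCS+ / Lamport keys would go here.
    sqisign_pubkey = b"\x01" * 64
    recovery_pubkey = b"\x02" * 32

    # One leaf per branch, domain-separated so the two cannot be confused.
    leaf_sqisign = h(b"\x00" + sqisign_pubkey)
    leaf_recovery = h(b"\x01" + recovery_pubkey)

    # The output would commit only to this root. Spending via the SQIsign
    # branch reveals sqisign_pubkey, the sibling hash leaf_recovery, and a
    # single branch-selection bit, as described above.
    merkle_root = h(leaf_sqisign + leaf_recovery)
    print(merkle_root.hex())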

On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4 Hunter Beast wrote:

> The motivation for this BIP is to provide a concrete proposal for adding 
> quantum resistance to Bitcoin. We will need to pick a signature algorithm, 
> implement it, and have it ready in event of quantum emergency. There will 
> be time to adopt it. Importantly, this first step is a more substantive 
> answer to those with concerns beyond, "quantum computers may pose a threat, 
> but we likely don't have to worry about that for a long time". Bitcoin 
> development and activation is slow, so it's important that those with low 
> time preference start discussing this as a serious possibility sooner 
> rather than later.
>
> This is meant to be the first in a series of BIPs regarding a hypothetical 
> "QuBit" soft fork. The BIP is intended to propose concrete solutions, even 
> if they're early and incomplete, so that Bitcoin developers are aware of 
> the existence of these solutions and their potential.
>
> This is just a rough draft and not the finished BIP. I'd like to validate 
> the approach and hear if I should continue working on it, whether serious 
> changes are needed, or if this truly isn't a worthwhile endeavor right now.
>
> The BIP can be found here:
> https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki
>
> Thank you for your time.
>
>

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/b3561407-483e-46cd-b5e9-d6d48f8dca93n%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 3324 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [bitcoindev] Re: Proposing a P2QRH BIP towards a quantum resistant soft fork
  2024-06-14 13:51 ` [bitcoindev] " Pierre-Luc Dallaire-Demers
@ 2024-06-14 14:28   ` Hunter Beast
  2024-06-17  1:07     ` Antoine Riard
  0 siblings, 1 reply; 10+ messages in thread
From: Hunter Beast @ 2024-06-14 14:28 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 3735 bytes --]

Good points. I like your suggestion of SPHINCS+, just due to how mature it 
is in comparison to SQIsign. It's already in its third round and has 
several standards-compliant implementations, and it has an actual 
specification rather than just a research paper. One thing to consider is 
that NIST-I round 3 signatures are 982 bytes in size, according to what I 
was able to find in the documents hosted on the SPHINCS+ website.
https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip

One way to handle this is to introduce it as a separate address type from 
SQIsign. That won't require OP_CAT, and I do want to keep this soft fork 
limited in scope. If SQIsign does become significantly broken, in this 
hopefully far future scenario, I might be supportive of an increase in the 
witness discount.

Also, I've made some additional changes based on your feedback on X. You 
can review them here if you so wish:
https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754

On Friday, June 14, 2024 at 8:15:29 AM UTC-6 Pierre-Luc Dallaire-Demers 
wrote:

> SQIsign is blockchain friendly but also very new, I would recommend adding 
> a hash-based backup key in case an attack on SQIsign is found in the future 
> (recall that SIDH broke over the span of a weekend 
> https://eprint.iacr.org/2022/975.pdf).
> Backup keys can be added in the form of a Merkle tree where one branch 
> would contain the SQIsign public key and the other the public key of the 
> recovery hash-based scheme. For most transactions it would only add one bit 
> to specify the SQIsign branch.
> The hash-based method could be Sphincs+, which is standardized by NIST but 
> requires adding extra code, or Lamport, which is not standardized but can 
> be verified on-chain with OP-CAT.
>
> On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4 Hunter Beast wrote:
>
>> The motivation for this BIP is to provide a concrete proposal for adding 
>> quantum resistance to Bitcoin. We will need to pick a signature algorithm, 
>> implement it, and have it ready in event of quantum emergency. There will 
>> be time to adopt it. Importantly, this first step is a more substantive 
>> answer to those with concerns beyond, "quantum computers may pose a threat, 
>> but we likely don't have to worry about that for a long time". Bitcoin 
>> development and activation is slow, so it's important that those with low 
>> time preference start discussing this as a serious possibility sooner 
>> rather than later.
>>
>> This is meant to be the first in a series of BIPs regarding a 
>> hypothetical "QuBit" soft fork. The BIP is intended to propose concrete 
>> solutions, even if they're early and incomplete, so that Bitcoin developers 
>> are aware of the existence of these solutions and their potential.
>>
>> This is just a rough draft and not the finished BIP. I'd like to validate 
>> the approach and hear if I should continue working on it, whether serious 
>> changes are needed, or if this truly isn't a worthwhile endeavor right now.
>>
>> The BIP can be found here:
>> https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki
>>
>> Thank you for your time.
>>
>>

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/d78f5dc4-a72d-4da4-8a24-105963155e4dn%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 5045 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [bitcoindev] Re: Proposing a P2QRH BIP towards a quantum resistant soft fork
  2024-06-14 14:28   ` Hunter Beast
@ 2024-06-17  1:07     ` Antoine Riard
  2024-06-17 20:27       ` hunter
  0 siblings, 1 reply; 10+ messages in thread
From: Antoine Riard @ 2024-06-17  1:07 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 6408 bytes --]



Hi Hunter Beast,

I think any post-quantum signature algorithm upgrade proposal would greatly 
benefit from having Shor's-based practical attacks far more clearly defined 
in the Bitcoin context. As soon as you start to talk about quantum 
computers, there is no such thing as a single "quantum computer", but 
rather a wide array of architectures based on a range of technologies to 
encode qubits on nanoscale physical properties.

It is not certain that any Shor's algorithm variant works smoothly 
independently of the quantum computer architecture considered (e.g. gate 
frequency, gate infidelity, cooling energy consumption), and I think it's 
an interesting open game-theory problem whether an attacker can concentrate 
a sufficient amount of energy before any coin owner moves their coins in 
response (e.g. seeing a quantum break in the mempool and reacting with a 
counter-spend).

In my opinion, one of the last times the subject was addressed on the 
mailing list, the description of the state of the quantum computing field 
was not realistic and got into risk-characterization hyperbole, talking 
about a "super-exponential rate" (when in fact there is no empirical 
evidence that distinct theoretical advances in quantum capabilities can be 
combined with each other) [1].

On your proposal, there is an immediate observation which comes to mind, 
namely why not use one of the algorithms (Dilithium, SPHINCS+, Falcon) that 
have been through the 3 rounds of NIST cryptanalysis. Apart from the 
signature size, which sounds smaller, in a network of full nodes any PQ 
signature algorithm should have reasonable verification performance.

Lastly, there is a practical defensive technique that can be implemented 
today by coin owners to protect against hypothetical quantum adversaries. 
Namely, setting spending scripts to require an artificially inflated 
witness stack, as the cost has to be borne by the spender. I think one can 
easily do that with OP_DUP and OP_GREATERTHAN and a bit of stack shuffling. 
While the efficiency of this technique is limited by the max consensus size 
of the script stack (`MAX_STACK_SIZE`) and the max consensus size of a 
stack element (`MAX_SCRIPT_ELEMENT_SIZE`), it adds an additional "scarce 
coins" prerequisite for the quantum adversary to succeed. Shor's algorithm 
is only defined under the classic resources of computational complexity, 
time and space.

Best,
Antoine

[1] https://freicoin.substack.com/p/why-im-against-taproot

On Friday, June 14, 2024 at 15:30:54 UTC+1, Hunter Beast wrote:

> Good points. I like your suggestion for a SPHINCS+, just due to how mature 
> it is in comparison to SQIsign. It's already in its third round and has 
> several standards-compliant implementations, and it has an actual 
> specification rather than just a research paper. One thing to consider is 
> that NIST-I round 3 signatures are 982 bytes in size, according to what I 
> was able to find in the documents hosted by the SPHINCS website.
>
> https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip
>
> One way to handle this is to introduce this as a separate address type 
> than SQIsign. That won't require OP_CAT, and I do want to keep this soft 
> fork limited in scope. If SQIsign does become significantly broken, in this 
> hopefully far future scenario, I might be supportive of an increase in the 
> witness discount.
>
> Also, I've made some additional changes based on your feedback on X. You 
> can review them here if you so wish:
>
> https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754
>
> On Friday, June 14, 2024 at 8:15:29 AM UTC-6 Pierre-Luc Dallaire-Demers 
> wrote:
>
>> SQIsign is blockchain friendly but also very new, I would recommend 
>> adding a hash-based backup key in case an attack on SQIsign is found in the 
>> future (recall that SIDH broke over the span of a weekend 
>> https://eprint.iacr.org/2022/975.pdf).
>> Backup keys can be added in the form of a Merkle tree where one branch 
>> would contain the SQIsign public key and the other the public key of the 
>> recovery hash-based scheme. For most transactions it would only add one bit 
>> to specify the SQIsign branch.
>> The hash-based method could be Sphincs+, which is standardized by NIST 
>> but requires adding extra code, or Lamport, which is not standardized but 
>> can be verified on-chain with OP-CAT.
>>
>> On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4 Hunter Beast wrote:
>>
>>> The motivation for this BIP is to provide a concrete proposal for adding 
>>> quantum resistance to Bitcoin. We will need to pick a signature algorithm, 
>>> implement it, and have it ready in event of quantum emergency. There will 
>>> be time to adopt it. Importantly, this first step is a more substantive 
>>> answer to those with concerns beyond, "quantum computers may pose a threat, 
>>> but we likely don't have to worry about that for a long time". Bitcoin 
>>> development and activation is slow, so it's important that those with low 
>>> time preference start discussing this as a serious possibility sooner 
>>> rather than later.
>>>
>>> This is meant to be the first in a series of BIPs regarding a 
>>> hypothetical "QuBit" soft fork. The BIP is intended to propose concrete 
>>> solutions, even if they're early and incomplete, so that Bitcoin developers 
>>> are aware of the existence of these solutions and their potential.
>>>
>>> This is just a rough draft and not the finished BIP. I'd like to 
>>> validate the approach and hear if I should continue working on it, whether 
>>> serious changes are needed, or if this truly isn't a worthwhile endeavor 
>>> right now.
>>>
>>> The BIP can be found here:
>>> https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki
>>>
>>> Thank you for your time.
>>>
>>>

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/87b4e402-39d8-46b0-8269-4f81fa501627n%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 14060 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [bitcoindev] Re: Proposing a P2QRH BIP towards a quantum resistant soft fork
  2024-06-17  1:07     ` Antoine Riard
@ 2024-06-17 20:27       ` hunter
  2024-07-13  1:34         ` Antoine Riard
  0 siblings, 1 reply; 10+ messages in thread
From: hunter @ 2024-06-17 20:27 UTC (permalink / raw)
  To: Antoine Riard; +Cc: Bitcoin Development Mailing List


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 2024-06-16 19:31, Antoine Riard <antoine.riard@gmail•com> wrote:

>
> Hi Hunter Beast, I think any post-quantum signature algorithm upgrade proposal would greatly benefit from having Shor's-based practical attacks far more clearly defined in the Bitcoin context. As soon as you start to talk about quantum computers, there is no such thing as a single "quantum computer", but rather a wide array of architectures based on a range of technologies to encode qubits on nanoscale physical properties.
>
Good point. I can write something in the BIP's Motivation or Security sections about how an attack might take place practically, and about the potential urgency of such an attack.
 
I was thinking of focusing on the IBM Quantum System Two, mentioning how it can be scaled, and that although it might be quite limited, if running Shor's variant for a sufficient amount of time, above a certain minimum threshold of qubits, it might be capable of decrypting the key to an address within one year. I base this on the estimate provided in a study by the Sussex Centre for Quantum Technologies, et al. [1]. They provide two figures: 317M qubits to decrypt in one hour, 13M qubits to decrypt in one day. It would seem it scales roughly linearly, and so extrapolating further, 36,000 qubits would be needed to decrypt an address within one year. However, the IBM Heron QPU turned out to have a gate time 100x less than was estimated in 2022, and so it might be possible to make do with even fewer qubits within that timeframe. With only 360 qubits, barring algorithmic overhead such as for circuit memory, it might be possible to decrypt a single address within a year. That might sound like a lot, but being able to accomplish that at all would be significant, almost like a Chicago Pile moment, proving something in practice that was previously only thought theoretically possible for the past 3 decades. And it's only downhill from there...
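
As a back-of-the-envelope check of that extrapolation, here is a short 
Python sketch, assuming the qubit requirement scales inversely with the 
allowed runtime (the 317M / 13M figures above); the 100x gate-time factor 
is the assumption stated above, not a measured value:

    HOURS_PER_DAY = 24
    HOURS_PER_YEAR = 24 * 365.25

    # The two published data points imply a roughly constant qubit-hours product.
    qubit_hours = 317e6 * 1
    assert abs(qubit_hours - 13e6 * HOURS_PER_DAY) / qubit_hours < 0.05

    qubits_one_year = qubit_hours / HOURS_PER_YEAR
    print(f"qubits for a one-year attack: ~{qubits_one_year:,.0f}")   # ~36,000

    # If gate times are ~100x faster than the 2022 assumptions, the same
    # runtime budget would need ~100x fewer qubits (ignoring overheads).
    print(f"with 100x faster gates: ~{qubits_one_year / 100:,.0f}")   # ~360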
>
> It is not certain that any Shor's algorithm variant works smoothly independently of the quantum computer architecture considered (e.g. gate frequency, gate infidelity, cooling energy consumption), and I think it's an interesting open game-theory problem whether an attacker can concentrate a sufficient amount of energy before any coin owner moves their coins in response (e.g. seeing a quantum break in the mempool and reacting with a counter-spend).
>
It should be noted that P2PK outputs still hold millions of bitcoin, and those outputs expose the entire public key for everyone to see for all time. Thus, early QC attacks won't need to consider the complexities of the mempool.
>
> In my opinion, one of the last times the subject was addressed on the mailing list, the description of the state of the quantum computing field was not realistic and got into risk-characterization hyperbole, talking about a "super-exponential rate" (when in fact there is no empirical evidence that distinct theoretical advances in quantum capabilities can be combined with each other) [1].
>
I think it's time to revisit these discussions given IBM's progress. They've published two videos in particular that are worth watching: their keynote from December of last year [2], and their roadmap update from just last month [3].
>
> On your proposal, there is an immediate observation which comes to mind, namely why not use one of the algorithms (Dilithium, SPHINCS+, Falcon) that have been through the 3 rounds of NIST cryptanalysis. Apart from the signature size, which sounds smaller, in a network of full nodes any PQ signature algorithm should have reasonable verification performance.
>
I'm supportive of this consideration. FALCON might be a good substitute, and maybe it can be upgraded to HAWK for even better performance, depending on how much time there is. According to the BIP, FALCON signatures are ~10x larger than Schnorr signatures, so this will of course make the transaction more expensive, but we must also remember that these signatures will be going into the witness, which already receives a 4x discount. Perhaps the discount could be increased further someday to fit more transactions into blocks, but this will also likely result in more inscriptions filling the unused space, which permanently increases the burden of running an archive node. Due to the controversy such a change could bring, I would rather any increases in the witness discount be excluded from future activation discussions, so as to be considered separately, even if it pertains to an increase in P2QRH transaction size.
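
To make the size impact concrete, a rough sketch of the virtual-size 
arithmetic under the existing BIP 141 discount, using the ~10x ratio quoted 
above (the byte counts are approximations, not values taken from the BIP):

    # Witness bytes count 1/4 toward virtual size under BIP 141.
    def witness_vbytes(nbytes: int) -> float:
        return nbytes / 4

    schnorr_sig = 64          # bytes
    falcon_sig = 64 * 10      # rough ~10x figure used above

    print("Schnorr signature:", witness_vbytes(schnorr_sig), "vbytes")   # 16.0
    print("FALCON signature: ", witness_vbytes(falcon_sig), "vbytes")    # 160.0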
 
Do you think it's worth reworking the BIP to use FALCON signatures? I've only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the readiness levels between those two are presently worlds apart.
 
Also, do you think it's of any concern to use HASH160 instead of HASH256 in the output script? I think it's fine for a cryptographic commitment since it's simply a hash of a hash (RIPEMD-160 of SHA-256).
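
For reference, the two constructions being compared, written out in Python 
(the pubkey bytes are a placeholder; ripemd160 may be unavailable in some 
OpenSSL builds):

    import hashlib

    def hash160(data: bytes) -> bytes:
        # RIPEMD-160 of SHA-256: 20-byte output, as used in P2PKH/P2WPKH.
        # May require OpenSSL legacy/ripemd support on some systems.
        return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

    def hash256(data: bytes) -> bytes:
        # Double SHA-256: 32-byte output.
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    pubkey = b"\x03" * 33   # placeholder public key bytes
    print(len(hash160(pubkey)), len(hash256(pubkey)))   # 20 32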
>
> Lastly, there is a practical defensive technique that can be implemented today by coin owners to protect against hypothetical quantum adversaries. Namely, setting spending scripts to require an artificially inflated witness stack, as the cost has to be borne by the spender. I think one can easily do that with OP_DUP and OP_GREATERTHAN and a bit of stack shuffling. While the efficiency of this technique is limited by the max consensus size of the script stack (`MAX_STACK_SIZE`) and the max consensus size of a stack element (`MAX_SCRIPT_ELEMENT_SIZE`), it adds an additional "scarce coins" prerequisite for the quantum adversary to succeed. Shor's algorithm is only defined under the classic resources of computational complexity, time and space.
>
I'm not sure I fully understand this, but even more practically, as mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally with a value of fewer than 50 coins per address, and whenever funds need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is proposed.
>
> Best, Antoine
> [1] https://freicoin.substack.com/p/why-im-against-taproot
>
 
I'm grateful you took the time to review the BIP and offer your detailed insights.
 
[1] “The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime,” 2022 - https://pubs.aip.org/avs/aqs/article/4/1/013801/2835275/The-impact-of-hardware-specifications-on-reaching
[2] https://www.youtube.com/watch?v=De2IlWji8Ck
[3] https://www.youtube.com/watch?v=d5aIx79OTps
 
>
>
> Le vendredi 14 juin 2024 à 15:30:54 UTC+1, Hunter Beast a écrit :
>
> > Good points. I like your suggestion for a SPHINCS+, just due to how mature it is in comparison to SQIsign. It's already in its third round and has several standards-compliant implementations, and it has an actual specification rather than just a research paper. One thing to consider is that NIST-I round 3 signatures are 982 bytes in size, according to what I was able to find in the documents hosted by the SPHINCS website.
> > https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip
> >  
> > One way to handle this is to introduce this as a separate address type than SQIsign. That won't require OP_CAT, and I do want to keep this soft fork limited in scope. If SQIsign does become significantly broken, in this hopefully far future scenario, I might be supportive of an increase in the witness discount.
> >  
> > Also, I've made some additional changes based on your feedback on X. You can review them here if you so wish:
> > https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754
> >
> >
> > On Friday, June 14, 2024 at 8:15:29 AM UTC-6 Pierre-Luc Dallaire-Demers wrote:
> > > SQIsign is blockchain friendly but also very new, I would recommend adding a hash-based backup key in case an attack on SQIsign is found in the future (recall that SIDH broke over the span of a weekend https://eprint.iacr.org/2022/975.pdf).
> > > Backup keys can be added in the form of a Merkle tree where one branch would contain the SQIsign public key and the other the public key of the recovery hash-based scheme. For most transactions it would only add one bit to specify the SQIsign branch.
> > > The hash-based method could be Sphincs+, which is standardized by NIST but requires adding extra code, or Lamport, which is not standardized but can be verified on-chain with OP-CAT.
> > >
> > > On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4 Hunter Beast wrote:
> > > > The motivation for this BIP is to provide a concrete proposal for adding quantum resistance to Bitcoin. We will need to pick a signature algorithm, implement it, and have it ready in event of quantum emergency. There will be time to adopt it. Importantly, this first step is a more substantive answer to those with concerns beyond, "quantum computers may pose a threat, but we likely don't have to worry about that for a long time". Bitcoin development and activation is slow, so it's important that those with low time preference start discussing this as a serious possibility sooner rather than later.  This is meant to be the first in a series of BIPs regarding a hypothetical "QuBit" soft fork. The BIP is intended to propose concrete solutions, even if they're early and incomplete, so that Bitcoin developers are aware of the existence of these solutions and their potential.  This is just a rough draft and not the finished BIP. I'd like to validate the approach and hear if I should continue working on it, whether serious changes are needed, or if this truly isn't a worthwhile endeavor right now.
> > > >  
> > > > The BIP can be found here:
> > > > https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki
> > > >  
> > > > Thank you for your time.
> > > >  
> > > >
> > >
> > >
> >
> >
>
>
> -- You received this message because you are subscribed to a topic in the Google Groups "Bitcoin Development Mailing List" group. To unsubscribe from this topic, visit https://groups.google.com/d/topic/bitcoindev/Aee8xKuIC2s/unsubscribe. To unsubscribe from this group and all its topics, send an email to bitcoindev+unsubscribe@googlegroups•com. To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/87b4e402-39d8-46b0-8269-4f81fa501627n%40googlegroups.com.

-----BEGIN PGP SIGNATURE-----
Version: OpenPGP.js v4.10.3
Comment: https://openpgpjs.org

wsBcBAEBCAAGBQJmcJwuAAoJEDEPCKe+At0hjhkIAIdM7QN9hAO0z+KO7Bwe
JT45XyusJmDG1gJbLZtb+SfuE1X5PFDHNTLSNliJWsOImxFCiBPnlXhYQ4B/
8gST3rqplUwkdYr52E5uMxTTq9YaXTako4PNb8d7XfraIwDKXAJF+5Skf4f9
bQUYMieBAFSEXCmluirQymB+hUoaze60Whd07hhpzbGSwK4DdSXltufkyCDE
tJUforNWm8X25ABTSNDh3+if5V/wJuix/u8GJyMHKucaEAO01ki2oyusq2rt
Xe6ysUieclusFFdQAs4PfYxhzXTf5XeAbFga/qxrVtbt7q2nUkYklqteT2pp
mH/DU20HMBeGVSrISrvsmLw=
=+wat
-----END PGP SIGNATURE-----

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/2cbd432f-ca19-4481-93c5-3b0f7cdea1cb%40DS3018xs.


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [bitcoindev] Re: Proposing a P2QRH BIP towards a quantum resistant soft fork
  2024-06-17 20:27       ` hunter
@ 2024-07-13  1:34         ` Antoine Riard
  2024-08-06 17:37           ` Hunter Beast
  0 siblings, 1 reply; 10+ messages in thread
From: Antoine Riard @ 2024-07-13  1:34 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 20489 bytes --]

Hi Hunter Beast,

Apologies for the delay in answering.

> I was thinking of focusing on the IBM Quantum System Two, mentioning how
> it can be scaled, and that although it might be quite limited, if running
> Shor's variant for a sufficient amount of time, above a certain minimum
> threshold of qubits, it might be capable of decrypting the key to an
> address within one year. I base this on the estimate provided in a study
> by the Sussex Centre for Quantum Technologies, et al. [1]. They provide
> two figures: 317M qubits to decrypt in one hour, 13M qubits to decrypt in
> one day. It would seem it scales roughly linearly, and so extrapolating
> further, 36,000 qubits would be needed to decrypt an address within one
> year. However, the IBM Heron QPU turned out to have a gate time 100x less
> than was estimated in 2022, and so it might be possible to make do with
> even fewer qubits within that timeframe. With only 360 qubits, barring
> algorithmic overhead such as for circuit memory, it might be possible to
> decrypt a single address within a year. That might sound like a lot, but
> being able to accomplish that at all would be significant, almost like a
> Chicago Pile moment, proving something in practice that was previously
> only thought theoretically possible for the past 3 decades. And it's only
> downhill from there...

Briefly surveying the paper "The impact of hardware specifications on 
reaching quantum advantage in the fault tolerant regime", I think it's a 
reasonable framework to evaluate the practical efficiency of quantum 
attacks on bitcoin; it is self-consistent and there is a critical approach 
referencing the usual literature on quantum attacks on bitcoin. Just note 
the caveat one can find in the usual quantum complexity literature, 
"particularly in regard to end-to-end physical resource estimation. There 
are many other error correction techniques available, and the best choice 
will likely depend on the underlying architecture's characteristics, such 
as the available physical qubit–qubit connectivity" (verbatim). Namely, 
evaluating quantum attacks is very dependent on the concrete physical 
architecture underpinning it.

All that said, I agree with you that if you see a quantum computer in the 
range of 1,000 physical qubits being able to break the DLP for ECC-based 
encryption like secp256k1, even if it takes a year, it will be a Chicago 
Pile moment, comparable to the experiments on nuclear chain reactions that 
were happening in the 30s / 40s.

> I think it's time to revisit these discussions given IBM's progress.
> They've published two videos in particular that are worth watching: their
> keynote from December of last year [2], and their roadmap update from
> just last month [3].

I have looked at the roadmap as it's available on the IBM blog post: 
https://www.ibm.com/quantum/blog/quantum-roadmap-2033#mark-roadmap-out-to-2033
They give only a target of 2,000 logical qubits to be reached in 2033... 
which is surprisingly not that strong... And one expects they might well 
hit solid-state issues in laying out the Heron processor architecture in 
hardware. As a point of comparison, it took something like 2 decades to 
advance the state of the art of lithography in traditional chip 
manufacturing.
 
So I think it's good to stay cool-minded, and I think my observation stands 
that talking of a "super-exponential rate", as used in maaku's old blog 
post, does not hold a lot of rigor in describing the advances in the field 
of quantum computing. Note also how IBM is a commercial entity that can 
have a lot of interest in "pumping" the state of "quantum computing" to 
gather funding (there is a historical anecdote among bitcoin OG circles 
about Vitalik trying to do an ICO to build a quantum computer some 10 years 
ago, just to remember).

> I'm supportive of this consideration. FALCON might be a good substitute,
> and maybe it can be upgraded to HAWK for even better performance,
> depending on how much time there is. According to the BIP, FALCON
> signatures are ~10x larger than Schnorr signatures, so this will of
> course make the transaction more expensive, but we must also remember
> that these signatures will be going into the witness, which already
> receives a 4x discount. Perhaps the discount could be increased further
> someday to fit more transactions into blocks, but this will also likely
> result in more inscriptions filling the unused space, which permanently
> increases the burden of running an archive node. Due to the controversy
> such a change could bring, I would rather any increases in the witness
> discount be excluded from future activation discussions, so as to be
> considered separately, even if it pertains to an increase in P2QRH
> transaction size.
 
> Do you think it's worth reworking the BIP to use FALCON signatures? I've
> only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge
> the readiness levels between those two are presently worlds apart.

I think FALCON is what has the smallest pubkey + sig size for hash-and-sign 
lattice-based schemes. So I think it's worth reworking the BIP to see what 
has the smallest generation / validation time and pubkey + sig size for the 
main post-quantum scheme, at least for Dilithium, Falcon, SPHINCS+ and 
SQIsign. For a hypothetical witness discount, a v2 P2QRH could always be 
moved into a templated annex tag / field.

> Also, do you think it's of any concern to use HASH160 instead of HASH256
> in the output script? I think it's fine for a cryptographic commitment
> since it's simply a hash of a hash (RIPEMD-160 of SHA-256).

See the literature on quantum attacks on bitcoin in the references of the 
paper you quote ("The impact of hardware specifications on reaching quantum 
advantage in the fault tolerant regime") for a discussion of Grover's 
search algorithm.
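
For a rough sense of scale, Grover's search offers roughly a quadratic 
speedup on preimage search, so an n-bit hash falls from ~2^n classical 
evaluations to ~2^(n/2) quantum ones. A sketch of the idealized query 
counts, ignoring the fault-tolerance and circuit-depth overheads the paper 
is precisely about:

    # Idealized Grover query counts; real costs are far higher once
    # error-correction overheads are included, as the referenced paper argues.
    for name, bits in (("HASH160", 160), ("HASH256", 256)):
        print(f"{name}: classical ~2^{bits}, Grover ~2^{bits // 2}")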

> I'm not sure I fully understand this, but even more practically, as
> mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally
> with a value of fewer than 50 coins per address, and whenever funds need
> to be spent, the transaction is signed and submitted out of band to a
> trusted mining pool, ideally one that does KYC, so it's known which
> individual miners get to see the public key before it's mined. It's not
> perfect, since this relies on exogenous security assumptions, which is
> why P2QRH is proposed.

Again, the paper you're referencing ("The impact of hardware specifications 
on reaching quantum advantage...") analyzes the performance of quantum 
advantage along 2 dimensions, namely space and time. My observation is that 
in Bitcoin we have an additional dimension, "coin scarcity", that can be 
leveraged to build a defense of address spends in the face of quantum 
attacks.

Namely, you can introduce an artificial "witness-stack size scale ladder" 
in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
I have not verified that it works well on bitcoin core, though this script 
should put the burden on the quantum attacker of having enough bitcoin 
available to burn in on-chain fees, in witness size, to break a P2WPKH.
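
As an illustration of the "scarce coins" burden this creates, a small 
Python sketch estimating the extra fee a spender (honest owner or attacker) 
must pay per inflated witness; the element size, element count and feerate 
are illustrative assumptions, and the script fragment in the comment is 
untested pseudo-script rather than verified consensus code:

    # Enforcing fragment (pseudo-script, repeated once per required element):
    #   OP_SIZE <520> OP_EQUALVERIFY OP_DROP ... <pubkey> OP_CHECKSIG
    pad_element = 520        # one max-size stack element (MAX_SCRIPT_ELEMENT_SIZE)
    n_elements = 80          # how many padding elements the script demands
    feerate = 30             # sat/vB, illustrative

    extra_witness_bytes = pad_element * n_elements       # 41,600 bytes
    extra_vbytes = extra_witness_bytes / 4                # witness 4x discount
    extra_fee_btc = extra_vbytes * feerate / 1e8
    print(f"extra witness: {extra_witness_bytes} bytes "
          f"-> ~{extra_vbytes:.0f} vbytes -> ~{extra_fee_btc:.5f} BTC in fees")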

> ideally with a value of fewer than 50 coins per address, and whenever
> funds need to be spent, the transaction is signed and submitted out of
> band to a trusted mining pool, ideally one that does KYC, so it's known
> which individual miners get to see the public key before it's mined. It's
> not perfect, since this relies on exogenous security assumptions, which
> is why P2QRH is proposed.

The technical issue is that if you implement KYC for a mining pool, you're 
increasing your DoS surface, and this could be exploited by competing 
miners. A more reasonable security model can be to have miner coinbase 
pubkeys used to commit to the "seen-in-mempool" spends, and from there 
build "hand-wavy" fraud proofs that a miner is quantum-attacking your P2WSH 
spends at pubkey-reveal time during transaction relay.
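
One hypothetical way to read that commitment idea (purely a sketch of my 
own, not a worked-out protocol): a miner could hash the outpoints it has 
seen spent in its mempool into a single digest and publish that digest, for 
example in a coinbase OP_RETURN, so that later pubkey-reveal misbehavior 
can at least be pointed at. The txid values below are placeholders.

    import hashlib

    def sha256(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    # Outpoints (txid, vout) the miner claims to have seen in its mempool.
    seen_outpoints = [
        (bytes.fromhex("aa" * 32), 0),   # placeholder txid
        (bytes.fromhex("bb" * 32), 1),   # placeholder txid
    ]

    # Toy order-independent commitment: hash of sorted serialized outpoints.
    serialized = sorted(txid + vout.to_bytes(4, "little")
                        for txid, vout in seen_outpoints)
    commitment = sha256(b"".join(serialized))
    print("seen-in-mempool commitment:", commitment.hex())
    # A miner could publish this digest in its coinbase; anyone holding the
    # full outpoint list could later check it against pubkey-reveal behavior.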

Best,
Antoine

ots hash: 1ad818955bbf0c5468847c00c2974ddb5cf609d630523622bfdb27f1f0dc0b30
On Monday, June 17, 2024 at 23:25:25 UTC+1, hunter wrote:

>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> On 2024-06-16 19:31, Antoine Riard <antoin...@gmail•com> wrote:
>
> >
> > Hi Hunter Beast,I think any post-quantum upgrade signature algorithm 
> upgrade proposal would grandly benefit to haveShor's based practical 
> attacks far more defined in the Bitcoin context. As soon you start to talk 
> aboutquantum computers there is no such thing as a "quantum computer" 
> though a wide array of architecturesbased on a range of technologies to 
> encode qubits on nanoscale physical properties.
> >
> Good point. I can write a section in the BIP Motivation or Security 
> section about how an attack might take place practically, and the potential 
> urgency of such an attack.
>  
> I was thinking of focusing on the IBM Quantum System Two, mention how it 
> can be scaled, and that although it might be quite limited, if running 
> Shor's variant for a sufficient amount of time, above a certain minimum 
> threshold of qubits, it might be capable of decrypting the key to an 
> address within one year. I base this on the estimate provided in a study by 
> the Sussex Centre for Quantum Technologies, et. al [1]. They provide two 
> figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one 
> day. It would seem it scales roughly linearly, and so extrapolating it 
> further, 36,000 qubits would be needed to decrypt an address within one 
> year. However, the IBM Heron QPU turned out to have a gate time 100x less 
> than was estimated in 2022, and so it might be possible to make do with 
> even fewer qubits still within that timeframe. With only 360 qubits, 
> barring algorithmic overhead such as for circuit memory, it might be 
> possible to decrypt a single address within a year. That might sound like a 
> lot, but being able to accomplish that at all would be significant, almost 
> like a Chicago Pile moment, proving something in practice that was 
> previously only thought theoretically possible for the past 3 decades. And 
> it's only downhill from there...
> >
> > This is not certain that any Shor's algorithm variant works smoothly 
> independently of the quantum computerarchitecture considered (e.g gate 
> frequency, gate infidelity, cooling energy consumption) and I think it'san 
> interesting open game-theory problem if you can concentrate a sufficiant 
> amount of energy before anycoin owner moves them in consequence (e.g seeing 
> a quantum break in the mempool and reacting with a counter-spend).
> >
> It should be noted that P2PK keys still hold millions of bitcoin, and 
> those encode the entire public key for everyone to see for all time. Thus, 
> early QC attacks won't need to consider the complexities of the mempool.
> >
> > In my opinion, one of the last time the subject was addressed on the 
> mailing list, the description of the state of the quantum computer field 
> was not realistic and get into risk characterization hyperbole talking 
> about "super-exponential rate" (when indeed there is no empirical 
> realization that distinct theoretical advance on quantum capabilities can 
> be combined with each other) [1].
> >
> I think it's time to revisit these discussions given IBM's progress. 
> They've published a two videos in particular that are worth watching; their 
> keynote from December of last year [2], and their roadmap update from just 
> last month [3].
> >
> > On your proposal, there is an immediate observation which comes to mind, 
> namely why not using one of the algorithm(dilthium, sphincs+, falcon) which 
> has been through the 3 rounds of NIST cryptanalysis. Apart of the signature 
> size,which sounds to be smaller, in a network of full-nodes any PQ 
> signature algorithm should have reasonable verificationperformances.
> >
> I'm supportive of this consideration. FALCON might be a good substitute, 
> and maybe it can be upgraded to HAWK for even better performance depending 
> on how much time there is. According to the BIP, FALCON signatures are ~10x 
> larger than Schnorr signatures, so this will of course make the transaction 
> more expensive, but we also must remember, these signatures will be going 
> into the witness, which already receives a 4x discount. Perhaps the 
> discount could be increased further someday to fit more transactions into 
> blocks, but this will also likely result in more inscriptions filling 
> unused space also, which permanently increases the burden of running an 
> archive node. Due to the controversy such a change could bring, I would 
> rather any increases in the witness discount be excluded from future 
> activation discussions, so as to be considered separately, even if it 
> pertains to an increase in P2QRH transaction size.
>  
> Do you think it's worth reworking the BIP to use FALCON signatures? I've 
> only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the 
> readiness levels between those two are presently worlds apart.
>  
> Also, do you think it's of any concern to use HASH160 instead of HASH256 
> in the output script? I think it's fine for a cryptographic commitment 
> since it's simply a hash of a hash (MD160 of SHA-256).
> >
> > Lastly, there is a practical defensive technique that can be implemented 
> today by coin owners to protect in face ofhyptothetical quantum 
> adversaries. Namely setting spending scripts to request an artificially 
> inflated witness stack,as the cost has to be burden by the spender. I think 
> one can easily do that with OP_DUP and OP_GREATERTHAN and a bitof stack 
> shuffling. While the efficiency of this technique is limited by the max 
> consensus size of the script stack(`MAX_STACK_SIZE`) and the max consensus 
> size of stack element (`MAX_SCRIPT_ELEMENT_SIZE`), this adds an 
> additional"scarce coins" pre-requirement on the quantum adversarise to 
> succeed. Shor's algorithm is only defined under theclassic ressources of 
> computational complexity, time and space.
> >
> I'm not sure I fully understand this, but even more practically, as 
> mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally 
> with a value of fewer than 50 coins per address, and when funds ever need 
> to be spent, the transaction is signed and submitted out of band to a 
> trusted mining pool, ideally one that does KYC, so it's known which 
> individual miners get to see the public key before it's mined. It's not 
> perfect, since this relies on exogenous security assumptions, which is why 
> P2QRH is proposed.
> >
> > Best,Antoine
> > [1] https://freicoin.substack.com/p/why-im-against-taproot
> >
>  
> I'm grateful you took the time to review the BIP and offer your detailed 
> insights.
>  
> [1] “The impact of hardware specifications on reaching quantum advantage 
> in the fault tolerant regime,” 2022 - 
> https://pubs.aip.org/avs/aqs/article/4/1/013801/2835275/The-impact-of-hardware-specifications-on-reaching
> [2] https://www.youtube.com/watch?v=De2IlWji8Ck
> [3] https://www.youtube.com/watch?v=d5aIx79OTps
>  
> >
> >
> > Le vendredi 14 juin 2024 à 15:30:54 UTC+1, Hunter Beast a écrit :
> >
> > > Good points. I like your suggestion for a SPHINCS+, just due to how 
> mature it is in comparison to SQIsign. It's already in its third round and 
> has several standards-compliant implementations, and it has an actual 
> specification rather than just a research paper. One thing to consider is 
> that NIST-I round 3 signatures are 982 bytes in size, according to what I 
> was able to find in the documents hosted by the SPHINCS website.
> > > 
> https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip
> > >  
> > > One way to handle this is to introduce this as a separate address type 
> than SQIsign. That won't require OP_CAT, and I do want to keep this soft 
> fork limited in scope. If SQIsign does become significantly broken, in this 
> hopefully far future scenario, I might be supportive of an increase in the 
> witness discount.
> > >  
> > > Also, I've made some additional changes based on your feedback on X. 
> You can review them here if you so wish:
> > > 
> https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754
> > >
> > >
> > > On Friday, June 14, 2024 at 8:15:29 AM UTC-6 Pierre-Luc 
> Dallaire-Demers wrote:
> > > > SQIsign is blockchain friendly but also very new, I would recommend 
> adding a hash-based backup key in case an attack on SQIsign is found in the 
> future (recall that SIDH broke over the span of a weekend 
> https://eprint.iacr.org/2022/975.pdf).
> > > > Backup keys can be added in the form of a Merkle tree where one 
> branch would contain the SQIsign public key and the other the public key of 
> the recovery hash-based scheme. For most transactions it would only add one 
> bit to specify the SQIsign branch.
> > > > The hash-based method could be Sphincs+, which is standardized by 
> NIST but requires adding extra code, or Lamport, which is not standardized 
> but can be verified on-chain with OP-CAT.
> > > >
> > > > On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4 Hunter Beast wrote:
> > > > > The motivation for this BIP is to provide a concrete proposal for 
> adding quantum resistance to Bitcoin. We will need to pick a signature 
> algorithm, implement it, and have it ready in event of quantum emergency. 
> There will be time to adopt it. Importantly, this first step is a more 
> substantive answer to those with concerns beyond, "quantum computers may 
> pose a threat, but we likely don't have to worry about that for a long 
> time". Bitcoin development and activation is slow, so it's important that 
> those with low time preference start discussing this as a serious 
> possibility sooner rather than later. This is meant to be the first in a 
> series of BIPs regarding a hypothetical "QuBit" soft fork. The BIP is 
> intended to propose concrete solutions, even if they're early and 
> incomplete, so that Bitcoin developers are aware of the existence of these 
> solutions and their potential. This is just a rough draft and not the 
> finished BIP. I'd like to validate the approach and hear if I should 
> continue working on it, whether serious changes are needed, or if this 
> truly isn't a worthwhile endeavor right now.
> > > > >  
> > > > > The BIP can be found here:
> > > > > https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki
> > > > >  
> > > > > Thank you for your time.
> > > > >  
> > > > >
> > > >
> > > >
> > >
> > >
> >
> >
> > -- You received this message because you are subscribed to a topic in 
> the Google Groups "Bitcoin Development Mailing List" group. To unsubscribe 
> from this topic, visit 
> https://groups.google.com/d/topic/bitcoindev/Aee8xKuIC2s/unsubscribe. To 
> unsubscribe from this group and all its topics, send an email to 
> bitcoindev+...@googlegroups•com. To view this discussion on the web visit 
> https://groups.google.com/d/msgid/bitcoindev/87b4e402-39d8-46b0-8269-4f81fa501627n%40googlegroups.com
> .
>
> -----BEGIN PGP SIGNATURE-----
> Version: OpenPGP.js v4.10.3
> Comment: https://openpgpjs.org
>
> wsBcBAEBCAAGBQJmcJwuAAoJEDEPCKe+At0hjhkIAIdM7QN9hAO0z+KO7Bwe
> JT45XyusJmDG1gJbLZtb+SfuE1X5PFDHNTLSNliJWsOImxFCiBPnlXhYQ4B/
> 8gST3rqplUwkdYr52E5uMxTTq9YaXTako4PNb8d7XfraIwDKXAJF+5Skf4f9
> bQUYMieBAFSEXCmluirQymB+hUoaze60Whd07hhpzbGSwK4DdSXltufkyCDE
> tJUforNWm8X25ABTSNDh3+if5V/wJuix/u8GJyMHKucaEAO01ki2oyusq2rt
> Xe6ysUieclusFFdQAs4PfYxhzXTf5XeAbFga/qxrVtbt7q2nUkYklqteT2pp
> mH/DU20HMBeGVSrISrvsmLw=
> =+wat
> -----END PGP SIGNATURE-----
>
>

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/cd6bda66-39d3-49ca-9f3c-f610258626b0n%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 25089 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [bitcoindev] Re: Proposing a P2QRH BIP towards a quantum resistant soft fork
  2024-07-13  1:34         ` Antoine Riard
@ 2024-08-06 17:37           ` Hunter Beast
  2024-08-15  5:05             ` Hunter Beast
  0 siblings, 1 reply; 10+ messages in thread
From: Hunter Beast @ 2024-08-06 17:37 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 25707 bytes --]

That's alright, Antoine, it's been a busy month for me too.

> So I think it's good to stay cool-minded, and I think my observation
> stands that talking of a "super-exponential rate", as used in maaku's old
> blog post, does not hold a lot of rigor in describing the advances in the
> field of quantum computing. Note also how IBM is a commercial entity that
> can have a lot of interest in "pumping" the state of "quantum computing"
> to gather funding (there is a historical anecdote among bitcoin OG
> circles about Vitalik trying to do an ICO to build a quantum computer
> some 10 years ago, just to remember).

Well, it's also important to remember that every qubit added doubles the 
power of the system. A 2,000 qubit cryptographically-relevant quantum 
computer (CRQC) is exponentially faster than a 1,000 qubit one. There's 
also the capability for cross-links so that multiple chips can communicate 
with each other, which IBM is also researching. The IBM Quantum System Two 
can be upgraded to support 16,000 qubits according to their marketing. Also 
consider that verification of the results from the CRQC can be done on a 
classical computer, so a high level of error correction might not be as 
necessary so long as the program is run enough times. It will take much 
longer, of course.
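
For what it's worth, the "doubling" here refers to the dimension of the 
state space: n qubits span 2^n amplitudes, so the gap between the two 
machines above is a factor of 2^1000 in state-space size (which does not 
translate directly into a 2^1000 algorithmic speedup). A one-liner to put 
the numbers side by side:

    for n in (1000, 2000):
        # log10(2) ~= 0.30103, so 2^n ~= 10^(0.30103 * n)
        print(f"{n} qubits -> state space of dimension 2^{n} (~10^{int(n * 0.30103)})")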

> I think FALCON is what has the smallest pubkey + sig size for
> hash-and-sign lattice-based schemes. So I think it's worth reworking the
> BIP to see what has the smallest generation / validation time and pubkey
> + sig size for the main post-quantum scheme, at least for Dilithium,
> Falcon, SPHINCS+ and SQIsign. For a hypothetical witness discount, a v2
> P2QRH could always be moved into a templated annex tag / field.

I've decided in one of my more recent updates to the BIP to default to the 
highest level of NIST security, NIST V, which provides 256 bits of 
security. You can see my rationale for that in this PR:
https://github.com/cryptoquick/bips/pull/7/files
Then, referencing this table:
https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki#security
As such, you'll see that FALCON signatures are roughly 4x larger than 
SQIsign signatures. Although supersingular elliptic curve quaternion 
isogeny-based algorithms are newer and more experimental than lattice-based 
cryptography, I think the benefits outweigh the risks, especially when 
transaction throughput is a principal concern.

It's crucial that the signature and public key both receive the witness 
discount. Can you go into more detail on how that might be accomplished?
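
To illustrate why placement matters: under BIP 141, bytes in the base 
transaction (including the output script) carry 4 weight units each, while 
witness bytes carry 1. A sketch with hypothetical post-quantum sizes (the 
128 / 4000 byte figures are placeholders, not values from the BIP):

    pubkey_len, sig_len = 128, 4000   # hypothetical post-quantum key and signature sizes

    # Weight units (WU): base bytes x4, witness bytes x1.
    weight_pubkey_in_output = 4 * pubkey_len + 1 * sig_len   # pubkey in scriptPubKey
    weight_both_in_witness = 1 * (pubkey_len + sig_len)      # pubkey revealed in witness

    print(weight_pubkey_in_output, "WU vs", weight_both_in_witness, "WU")
    # 4512 WU vs 4128 WU in this toy example; the gap grows with pubkey size.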

Although it's too early to talk about activation of a QuBit soft fork, I've 
put some thought into how we can maintain the existing Bitcoin throughput 
with a soft fork, and I think it might be prudent, when the time comes, to 
introduce an additional 4x QuBit witness discount, maybe call it the 
quitness, which is only available to valid P2QRH signatures. This would 
preclude its abuse for things like inscriptions because the signature data 
would need to correspond to the key, and even if this were possible, it's 
likely to result in only a burner address. This would increase chain state 
growth from roughly 100 GB/yr to possibly closer to 200-300 GB/yr, 
depending on adoption. As the state of the art of SSD technology advances, 
this should allow plebs to run their own node on a 4 TB disk for over a 
decade, even including the existing chain size of ~600 GB.
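
A quick sketch of what such an extra 4x discount would mean for virtual 
size, with P2QRH signature data effectively counting 1/16 per byte; the 
4,000-byte figure is a placeholder for a post-quantum signature plus public 
key, not a number from the BIP:

    pq_witness_bytes = 4000                    # hypothetical P2QRH sig + pubkey size

    vbytes_standard_discount = pq_witness_bytes / 4    # existing 4x witness discount
    vbytes_with_quitness = pq_witness_bytes / 16       # additional 4x "quitness" discount

    print(vbytes_standard_discount, "vbytes ->", vbytes_with_quitness, "vbytes")
    # 1000.0 vbytes -> 250.0 vbytes in this toy example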

If we were to use the same approach for FALCON signatures, a 16x discount 
would be needed, and I think that's far too much for the community to 
accept. As for pub key size and verification time, these are secondary 
considerations if the primary constraint is maintaining present transaction 
throughput. That's what makes SQIsign so promising.

> See the literature on quantum attacks on bitcoin in the references of the
> paper you quote ("The impact of hardware specifications on reaching
> quantum advantage in the fault tolerant regime") for a discussion of
> Grover's search algorithm.

The Impact paper seems to dismiss Grover's algorithm, but I think it's 
important to err on the side of caution and instead use a 32-byte double 
SHA-256 (HASH256) for additional security in the P2QRH output.

> Namely, you can introduce an artificial "witness-stack size scale ladder"
> in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP
> ...checksig...
> I have not verified that it works well on bitcoin core, though this
> script should put the burden on the quantum attacker of having enough
> bitcoin available to burn in on-chain fees, in witness size, to break a
> P2WPKH.

I'm not sure I understand what you mean by this...
Is your coin scarcity comment related to what I call "satoshi's shield" in 
the BIP?

> The technical issue is that if you implement KYC for a mining pool,
> you're increasing your DoS surface, and this could be exploited by
> competing miners. A more reasonable security model can be to have miner
> coinbase pubkeys used to commit to the "seen-in-mempool" spends, and from
> there build "hand-wavy" fraud proofs that a miner is quantum-attacking
> your P2WSH spends at pubkey-reveal time during transaction relay.

Yes, this makes more sense. I'm not sure anything can be done with the 
fraud proofs, but they could at least prove that a bad actor is present. 
Ideally both approaches are combined for maximum security and 
accountability.

Thanks for your time!

On Friday, July 12, 2024 at 7:44:27 PM UTC-6 Antoine Riard wrote:

Hi Hunter Beast,

Apologies for the delay in answer.

> I was thinking of focusing on the IBM Quantum System Two, mention how it 
can be scaled, and that although it might be quite limited, if running 
Shor's variant for a > sufficient amount of time, above a certain minimum 
threshold of qubits, it might be capable of decrypting the key to an 
address within one year. I base this on the estimate > provided in a study 
by the Sussex Centre for Quantum Technologies, et. al [1]. They provide two 
figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one > 
day. It would seem it scales roughly linearly, and so extrapolating it 
further, 36,000 qubits would be needed to decrypt an address within one 
year. However, the IBM Heron > QPU turned out to have a gate time 100x less 
than was estimated in 2022, and so it might be possible to make do with 
even fewer qubits still within that timeframe. With > only 360 qubits, 
barring algorithmic overhead such as for circuit memory, it might be 
possible to decrypt a single address within a year. That might sound like a 
lot, but > being able to accomplish that at all would be significant, 
almost like a Chicago Pile moment, proving something in practice that was 
previously only thought theoretically > possible for the past 3 decades. 
And it's only downhill from there...

Briefly surveying the paper "The impact of hardware specifications on 
reaching quantum advantage in the fault tolerant regime", I think it's a 
reasonble framework to evaluate
the practical efficiency of quantum attacks on bitcoin, it's self 
consistent and there is a critical approach referencing the usual 
litterature on quantum attacks on bitcoin. Just
note the caveat, one can find in usual quantum complexity litterature, 
"particularly in regard to end-to-end physical resource estimation. There 
are many other error correction
techniques available, and the best choice will likely depend on the 
underlying architecture's characteristics, such as the available physical 
qubit–qubit connectivity" (verbatim). Namely, evaluating quantum attacks is 
very dependent on the concrete physical architecture underpinning it.

All that said, I agree with you that if you see a quantum computer in the 
range of 1000 physical qubits able to break the DLP for ECC-based 
cryptography like secp256k1, even if it takes a year, it will be a Chicago 
Pile moment, or whatever the comparable experiments on nuclear chain 
reactions were in the 30s / 40s.

>  I think it's time to revisit these discussions given IBM's progress. 
They've published two videos in particular that are worth watching; their 
keynote from December of last year [2], and their roadmap update from just 
last month [3].

I have looked at the roadmap as it's available on the IBM blog post: 
https://www.ibm.com/quantum/blog/quantum-roadmap-2033#mark-roadmap-out-to-2033
They give only a target of 2,000 logical qubits to be reached in 2033...which 
is surprisingly not that strong... And one expects they might well hit solid-
state issues in laying out the Heron processor architecture in hardware. As 
a point of comparison, it took about two decades to advance the state of the 
art of lithography in traditional chip manufacturing.
 
So I think it's good to stay cool-minded, and I maintain my observation that 
talking of a "super-exponential rate", as used in maaku's old blog post, does 
not hold a lot of rigor in describing the advances in the field of quantum 
computing. Note also how IBM is a commercial entity that can have a lot of 
interest in "pumping" the state of "quantum computing" to gather funding 
(there is a historical anecdote among bitcoin OG circles about Vitalik trying 
to do an ICO to build a quantum computer some 10 years ago, just to remember).

> I'm supportive of this consideration. FALCON might be a good substitute, 
and maybe it can be upgraded to HAWK for even better performance depending 
on how much time there is. According to the BIP, FALCON signatures are 
~10x larger than Schnorr signatures, so this will of course make the 
transaction more expensive, but we also must remember, these signatures 
will be going into the witness, which already receives a 4x discount. 
Perhaps the discount could be increased further someday to fit more 
transactions into blocks, but this will also likely result in more 
inscriptions filling unused space also, which permanently increases the 
burden of running an archive node. Due to the controversy such a change 
could bring, I would rather any increases in the witness discount be 
excluded from future activation discussions, so as to be considered 
separately, even if it pertains to an increase in P2QRH transaction size.
 
> Do you think it's worth reworking the BIP to use FALCON signatures? I've 
only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the 
readiness levels between those two are presently worlds apart.

I think FALCON has the smallest pubkey + sig size among hash-and-sign 
lattice-based schemes. So I think it's worth reworking the BIP to see what 
has the smallest generation / validation time and pubkey + sig space for 
the main post-quantum scheme, at least for Dilithium, FALCON, SPHINCS+ and 
SQIsign. For a hypothetical witness discount, a v2 P2QRH could always be 
moved into a templated annex tag / field.

> Also, do you think it's of any concern to use HASH160 instead of HASH256 
in the output script? I think it's fine for a cryptographic commitment 
since it's simply a hash of a hash (RIPEMD-160 of SHA-256).

See the literature on quantum attacks on bitcoin in the references of the 
paper you quote ("The impact of hardware specifications on reaching quantum 
advantage in the fault tolerant regime") for a discussion of Grover's 
search algorithm.
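
For reference, the two commitments under discussion differ only in the outer 
hash and the digest length; a minimal sketch with Python's hashlib (assuming 
ripemd160 is available in the local OpenSSL build):

    import hashlib

    def hash256(data: bytes) -> bytes:
        # HASH256: SHA-256 applied twice, 32-byte digest.
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def hash160(data: bytes) -> bytes:
        # HASH160: RIPEMD-160 of SHA-256, 20-byte digest. Availability of
        # ripemd160 depends on the local OpenSSL build.
        return hashlib.new('ripemd160', hashlib.sha256(data).digest()).digest()

    pubkey = bytes(33)                  # placeholder serialized public key
    print(len(hash256(pubkey)))         # 32: larger margin against Grover-style
                                        # preimage search than a 20-byte digest
    try:
        print(len(hash160(pubkey)))     # 20
    except ValueError:
        print("ripemd160 not available in this OpenSSL build")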

> I'm not sure I fully understand this, but even more practically, as 
mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally 
with a value of fewer than 50 coins per address, and when funds ever need 
to be spent, the transaction is signed and submitted out of band to a 
trusted mining pool, ideally one that does KYC, so it's known which 
individual miners get to see the public key before it's mined. It's not 
perfect, since this relies on exogenous security assumptions, which is why 
P2QRH is proposed.

Again, the paper you're referencing ("The impact of hardware specifications 
on reaching quantum advantage...") is analyzing the performance of quantum 
advantage along two dimensions, namely space and time. My observation is 
that in Bitcoin we have an additional dimension, "coin scarcity", that can 
be leveraged to build a defense of address spends in the face of quantum 
attacks.

Namely you can introduce an artificial "witness-stack size scale ladder" in 
pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
I have not verified it works well on bitcoin core, though this script should 
put the burden on the quantum attacker of having enough bitcoin available to 
burn in on-chain fees, in witness size, to break a P2WPKH.
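
To make this concrete, here is a rough sketch of assembling such a ladder 
script (untested and illustrative only; the opcode values are the standard 
ones, but consensus limits such as MAX_SCRIPT_ELEMENT_SIZE may constrain how 
much padding a single element can carry, as discussed elsewhere in this 
thread):

    OP_SIZE, OP_EQUALVERIFY, OP_DROP, OP_CHECKSIG = 0x82, 0x88, 0x75, 0xac

    def push(data: bytes) -> bytes:
        # Minimal direct push (only valid for data shorter than OP_PUSHDATA1).
        assert len(data) < 0x4c
        return bytes([len(data)]) + data

    def size_ladder_script(pubkey: bytes, padding_len: int = 1000) -> bytes:
        # Expected witness: <signature> <padding element of exactly padding_len bytes>.
        # OP_SIZE pushes the padding element's length as a script number, which is
        # compared against the encoded padding_len (1000 -> e8 03), then the padding
        # is dropped and an ordinary checksig runs.
        target = padding_len.to_bytes(2, 'little')
        return (bytes([OP_SIZE]) + push(target) +
                bytes([OP_EQUALVERIFY, OP_DROP]) +
                push(pubkey) + bytes([OP_CHECKSIG]))

    print(size_ladder_script(bytes(33)).hex())   # placeholder 33-byte pubkey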


>  ideally with a value of fewer than 50 coins per address, and when funds 
ever need to be spent, the transaction is signed and submitted out of band 
to a trusted mining pool, ideally one that does KYC, so it's known which 
individual miners get to see the public key before it's mined. It's not 
perfect, since this relies on exogenous security assumptions, which is why 
P2QRH is proposed.

The technical issue is that if you implement KYC for a mining pool, you're 
increasing your DoS surface, and this could be exploited by competing 
miners. A more reasonable security model could be to have miner coinbase 
pubkeys used to commit to the "seen-in-mempool" spends, and from there 
build "hand-wavy" fraud proofs that a miner is quantum-attacking your 
P2WSH spends at pubkey reveal time during transaction relay.

Best,
Antoine

ots hash: 1ad818955bbf0c5468847c00c2974ddb5cf609d630523622bfdb27f1f0dc0b30
On Monday, June 17, 2024 at 23:25:25 UTC+1, hunter wrote:


-----BEGIN PGP SIGNED MESSAGE----- 
Hash: SHA256 

On 2024-06-16 19:31, Antoine Riard <antoin...@gmail•com> wrote: 

> 
> Hi Hunter Beast, I think any post-quantum signature algorithm upgrade 
proposal would greatly benefit from having Shor's-based practical attacks 
far more defined in the Bitcoin context. As soon as you start to talk 
about quantum computers, there is no such thing as a "quantum computer", 
but rather a wide array of architectures based on a range of technologies 
to encode qubits on nanoscale physical properties. 
> 
Good point. I can write a section in the BIP Motivation or Security section 
about how an attack might take place practically, and the potential urgency 
of such an attack. 
  
I was thinking of focusing on the IBM Quantum System Two, mention how it 
can be scaled, and that although it might be quite limited, if running 
Shor's variant for a sufficient amount of time, above a certain minimum 
threshold of qubits, it might be capable of decrypting the key to an 
address within one year. I base this on the estimate provided in a study by 
the Sussex Centre for Quantum Technologies, et al. [1]. They provide two 
figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one 
day. It would seem it scales roughly linearly, and so extrapolating it 
further, 36,000 qubits would be needed to decrypt an address within one 
year. However, the IBM Heron QPU turned out to have a gate time 100x less 
than was estimated in 2022, and so it might be possible to make do with 
even fewer qubits still within that timeframe. With only 360 qubits, 
barring algorithmic overhead such as for circuit memory, it might be 
possible to decrypt a single address within a year. That might sound like a 
lot, but being able to accomplish that at all would be significant, almost 
like a Chicago Pile moment, proving something in practice that was 
previously only thought theoretically possible for the past 3 decades. And 
it's only downhill from there... 
> 
> It is not certain that any Shor's algorithm variant works smoothly 
independently of the quantum computer architecture considered (e.g. gate 
frequency, gate infidelity, cooling energy consumption), and I think it's an 
interesting open game-theory problem whether you can concentrate a sufficient 
amount of energy before any coin owner moves their coins in consequence (e.g. 
seeing a quantum break in the mempool and reacting with a counter-spend). 
> 
It should be noted that P2PK keys still hold millions of bitcoin, and those 
encode the entire public key for everyone to see for all time. Thus, early 
QC attacks won't need to consider the complexities of the mempool. 
> 
> In my opinion, one of the last times the subject was addressed on the 
mailing list, the description of the state of the quantum computing field 
was not realistic and got into risk-characterization hyperbole talking 
about a "super-exponential rate" (when indeed there is no empirical 
realization that distinct theoretical advances on quantum capabilities can 
be combined with each other) [1]. 
> 
I think it's time to revisit these discussions given IBM's progress. 
They've published two videos in particular that are worth watching; their 
keynote from December of last year [2], and their roadmap update from just 
last month [3]. 
> 
> On your proposal, there is an immediate observation which comes to mind, 
namely why not use one of the algorithms (Dilithium, SPHINCS+, FALCON) which 
have been through the 3 rounds of NIST cryptanalysis. Apart from the 
signature size, which sounds to be smaller, in a network of full nodes any 
PQ signature algorithm should have reasonable verification performance. 
> 
I'm supportive of this consideration. FALCON might be a good substitute, 
and maybe it can be upgraded to HAWK for even better performance depending 
on how much time there is. According to the BIP, FALCON signatures are ~10x 
larger than Schnorr signatures, so this will of course make the transaction 
more expensive, but we also must remember, these signatures will be going 
into the witness, which already receives a 4x discount. Perhaps the 
discount could be increased further someday to fit more transactions into 
blocks, but this will also likely result in more inscriptions filling 
unused space also, which permanently increases the burden of running an 
archive node. Due to the controversy such a change could bring, I would 
rather any increases in the witness discount be excluded from future 
activation discussions, so as to be considered separately, even if it 
pertains to an increase in P2QRH transaction size. 
  
Do you think it's worth reworking the BIP to use FALCON signatures? I've 
only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the 
readiness levels between those two are presently worlds apart. 
  
Also, do you think it's of any concern to use HASH160 instead of HASH256 in 
the output script? I think it's fine for a cryptographic commitment since 
it's simply a hash of a hash (RIPEMD-160 of SHA-256). 
> 
> Lastly, there is a practical defensive technique that can be implemented 
today by coin owners to protect in the face of hypothetical quantum 
adversaries. Namely, setting spending scripts to request an artificially 
inflated witness stack, as the cost has to be borne by the spender. I think 
one can easily do that with OP_DUP and OP_GREATERTHAN and a bit of stack 
shuffling. While the efficiency of this technique is limited by the max 
consensus size of the script stack (`MAX_STACK_SIZE`) and the max consensus 
size of a stack element (`MAX_SCRIPT_ELEMENT_SIZE`), this adds an 
additional "scarce coins" pre-requirement on the quantum adversaries to 
succeed. Shor's algorithm is only defined under the classic resources of 
computational complexity, time and space. 
> 
I'm not sure I fully understand this, but even more practically, as 
mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally 
with a value of fewer than 50 coins per address, and when funds ever need 
to be spent, the transaction is signed and submitted out of band to a 
trusted mining pool, ideally one that does KYC, so it's known which 
individual miners get to see the public key before it's mined. It's not 
perfect, since this relies on exogenous security assumptions, which is why 
P2QRH is proposed. 
> 
> > Best, Antoine 
> [1] https://freicoin.substack.com/p/why-im-against-taproot 
> 
  
I'm grateful you took the time to review the BIP and offer your detailed 
insights. 
  
[1] “The impact of hardware specifications on reaching quantum advantage in 
the fault tolerant regime,” 2022 - 
https://pubs.aip.org/avs/aqs/article/4/1/013801/2835275/The-impact-of-hardware-specifications-on-reaching 
[2] https://www.youtube.com/watch?v=De2IlWji8Ck 
[3] https://www.youtube.com/watch?v=d5aIx79OTps 
  
> 
> 
> > On Friday, June 14, 2024 at 15:30:54 UTC+1, Hunter Beast wrote: 
> 
> > > Good points. I like your suggestion of SPHINCS+, just due to how 
mature it is in comparison to SQIsign. It's already in its third round and 
has several standards-compliant implementations, and it has an actual 
specification rather than just a research paper. One thing to consider is 
that NIST-I round 3 signatures are 982 bytes in size, according to what I 
was able to find in the documents hosted by the SPHINCS website. 
> > 
https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip 
> >   
> > One way to handle this is to introduce this as a separate address type 
from SQIsign. That won't require OP_CAT, and I do want to keep this soft 
fork limited in scope. If SQIsign does become significantly broken, in this 
hopefully far future scenario, I might be supportive of an increase in the 
witness discount. 
> >   
> > Also, I've made some additional changes based on your feedback on X. 
You can review them here if you so wish: 
> > 
https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754 
> > 
> > 
> > On Friday, June 14, 2024 at 8:15:29 AM UTC-6 Pierre-Luc Dallaire-Demers 
wrote: 
> > > SQIsign is blockchain friendly but also very new, I would recommend 
adding a hash-based backup key in case an attack on SQIsign is found in the 
future (recall that SIDH broke over the span of a weekend 
https://eprint.iacr.org/2022/975.pdf). 
> > > Backup keys can be added in the form of a Merkle tree where one 
branch would contain the SQIsign public key and the other the public key of 
the recovery hash-based scheme. For most transactions it would only add one 
bit to specify the SQIsign branch. 
> > > The hash-based method could be Sphincs+, which is standardized by 
NIST but requires adding extra code, or Lamport, which is not standardized 
but can be verified on-chain with OP-CAT. 
> > > 
> > > On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4 Hunter Beast wrote: 
> > > > The motivation for this BIP is to provide a concrete proposal for 
adding quantum resistance to Bitcoin. We will need to pick a signature 
algorithm, implement it, and have it ready in event of quantum emergency. 
There will be time to adopt it. Importantly, this first step is a more 
substantive answer to those with concerns beyond, "quantum computers may 
pose a threat, but we likely don't have to worry about that for a long 
time". Bitcoin development and activation is slow, so it's important that 
those with low time preference start discussing this as a serious 
possibility sooner rather than later. This is meant to be the first in a 
series of BIPs regarding a hypothetical "QuBit" soft fork. The BIP is 
intended to propose concrete solutions, even if they're early and 
incomplete, so that Bitcoin developers are aware of the existence of these 
solutions and their potential. This is just a rough draft and not the 
finished BIP. I'd like to validate the approach and hear if I should 
continue working on it, whether serious changes are needed, or if this 
truly isn't a worthwhile endeavor right now. 
> > > >   
> > > > The BIP can be found here: 
> > > > https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki 
> > > >   
> > > > Thank you for your time. 
> > > >   
> > > > 
> > > 
> > > 
> > 
> > 
> 
> 
> -- You received this message because you are subscribed to a topic in the 
Google Groups "Bitcoin Development Mailing List" group. To unsubscribe from 
this topic, visit 
https://groups.google.com/d/topic/bitcoindev/Aee8xKuIC2s/unsubscribe. To 
unsubscribe from this group and all its topics, send an email to 
bitcoindev+...@googlegroups•com. To view this discussion on the web visit 
https://groups.google.com/d/msgid/bitcoindev/87b4e402-39d8-46b0-8269-4f81fa501627n%40googlegroups.com. 


-----BEGIN PGP SIGNATURE----- 
Version: OpenPGP.js v4.10.3 
Comment: https://openpgpjs.org 

wsBcBAEBCAAGBQJmcJwuAAoJEDEPCKe+At0hjhkIAIdM7QN9hAO0z+KO7Bwe 
JT45XyusJmDG1gJbLZtb+SfuE1X5PFDHNTLSNliJWsOImxFCiBPnlXhYQ4B/ 
8gST3rqplUwkdYr52E5uMxTTq9YaXTako4PNb8d7XfraIwDKXAJF+5Skf4f9 
bQUYMieBAFSEXCmluirQymB+hUoaze60Whd07hhpzbGSwK4DdSXltufkyCDE 
tJUforNWm8X25ABTSNDh3+if5V/wJuix/u8GJyMHKucaEAO01ki2oyusq2rt 
Xe6ysUieclusFFdQAs4PfYxhzXTf5XeAbFga/qxrVtbt7q2nUkYklqteT2pp 
mH/DU20HMBeGVSrISrvsmLw= 
=+wat 
-----END PGP SIGNATURE----- 

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/1b86f467-95e5-4558-98bc-b921dd29e1afn%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 28853 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [bitcoindev] Re: Proposing a P2QRH BIP towards a quantum resistant soft fork
  2024-08-06 17:37           ` Hunter Beast
@ 2024-08-15  5:05             ` Hunter Beast
  2024-08-22  6:20               ` Antoine Riard
  0 siblings, 1 reply; 10+ messages in thread
From: Hunter Beast @ 2024-08-15  5:05 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 26941 bytes --]

I've taken Antoine's feedback to heart and added FALCON to the 
specification, including a section that addresses the increased maintenance 
burden of adding two distinct post-quantum cryptosystems.
Please review.
https://github.com/cryptoquick/bips/pull/9/files

On Tuesday, August 6, 2024 at 11:50:35 AM UTC-6 Hunter Beast wrote:

> That's alright, Antoine, it's been a busy month for me too.
>
> > So I think it's good to stay cool minded and I think my observation 
> about talking of "super-exponential rate" as used in maaku old blog post 
> does not
> > hold a lot of rigor to describe the advances in the field of quantum 
> computing. Note, also how IMB is a commercial entity that can have a lot of 
> interests
> > in "pumping" the state of "quantum computing" to gather fundings (there 
> is a historical anecdote among bitcoin OG circles about Vitalik trying to 
> do an
> > ICO to build a quantum computer like 10 years ago, just to remember).
>
> Well, it's also important to remember that for every qubit added, it 
> doubles the power of the system. A 2,000 qubit cryptographically-relevant 
> quantum computer (CRQC) is exponentially faster than a 1,000 qubit one. 
> There's also the capability for cross-links for multiple chips to 
> communicate with each other, which IBM is also researching. The IBM Quantum 
> System Two can be upgraded to support 16,000 qubits according to their 
> marketing. Also consider that the verification of the results from the CRQC 
> can be done via classical computer, so a high level of error correction 
> might not be as necessary so long as the program is run enough times. It 
> will take much longer, of course.
>
> > I think FALCON is what has the smallest pubkey + sig size for 
> hash-and-sign lattice-based schemes. So I think it's worth reworking the 
> BIP to see what has the smallest generation / validation time and pubkey + 
> size space for the main post-quantum scheme. At least for dilthium, falcon, 
> sphincs+ and SQISign. For an hypothetical witness discount, a v2 P2QRH 
> could be always be moved in a very template annex tag / field.
>
> I've decided in one of my more recent updates to the BIP to default to the 
> highest level of NIST security, NIST V, which provides 256 bits of 
> security. You can see my rationale for that in this PR:
> https://github.com/cryptoquick/bips/pull/7/files
> Then, referencing this table:
> https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki#security
> As such, you'll see FALCON is roughly 4x larger than SQIsign signatures. 
> Although supersingular elliptic curve quaternion isogeny-based algorithms 
> are newer and more experimental than lattice-based cryptography, I think 
> the benefits outweigh the risks, especially when transaction throughput is 
> a principal concern.
>
> It's crucial that the signature and public key both receive the witness 
> discount. Can you go into more detail in how that might be accomplished?
>
> Although it's too early to talk about activation of a QuBit soft fork, 
> I've put some thought into how we can maintain the existing Bitcoin 
> throughput with a soft fork, and I think it might be prudent to, when the 
> time comes, introduce a 4x additional QuBit witness discount, maybe we call 
> it the quitness, which is only available to valid P2QRH signatures. This 
> would preclude its abuse for things like inscriptions because the signature 
> data would need to correspond to the key, and even if this were possible, 
> it's likely to result in only a burner address. This would increase chain 
> state growth from roughly 100GB/yr to possibly closer to 2-300GB, depending 
> on adoption. As the state of the art of SSD technology advances, this 
> should allow plebs to run their own node on a 4TB disk for over a decade, 
> even including existing chain size of ~600GB.
>
> If we were to use the same approach for FALCON signatures, a 16x discount 
> would be needed, and I think that's far too much for the community to 
> accept. As for pub key size and verification time, these are secondary 
> considerations if the primary constraint is maintaining present transaction 
> throughput. That's what makes SQIsign so promising.
>
> > See literature on quantum attacks on bitcoin in the reference of the 
> paper you quote ("The impact of hardware specifications on reaching quantum 
> advantage in the fault tolerant regime") for a discussion on Grover's 
> search algorithm.
>
> The Impact paper seems to dismiss Grover's algorithm, but I think it's 
> important to err on the size of caution and instead use a 32-byte double 
> SHA-2 (HASH256) for additional security in the P2QRH output.
>
> > Namely you can introduce an artifical "witness-stack size scale ladder" 
> in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP 
> ...checksig...
> > I have not verified it works well on bitcoin core though this script 
> should put the burden on the quantum attacker to have enough bitcoin amount 
> available to burn in on-chain fees in witness size to break a P2WPKH.
>
> I'm not sure I understand what you mean by this...
> Is your coin scarcity comment related to what I call "satoshi's shield" in 
> the BIP?
>
> > The technical issue if you implement KYC for a mining pool you're 
> increasing your DoS surface and this could be exploited by competing 
> miners. A more reasonable security model can be to have miner coinbase 
> pubkeys being used to commit to the "seen-in-mempool" spends and from then 
> build "hand wawy" fraud proofs that a miner is quantum attacking you're 
> P2WSH spends at pubkey reveal time during transaction relay.
>
> Yes, this makes more sense. I'm not sure anything can be done with the 
> fraud proofs, but they could at least prove that a bad actor is present. 
> Ideally both approaches are combined for maximum security and 
> accountability.
>
> Thanks for your time!
>
> On Friday, July 12, 2024 at 7:44:27 PM UTC-6 Antoine Riard wrote:
>
> Hi Hunter Beast,
>
> Apologies for the delay in answer.
>
> > I was thinking of focusing on the IBM Quantum System Two, mention how it 
> can be scaled, and that although it might be quite limited, if running 
> Shor's variant for a > sufficient amount of time, above a certain minimum 
> threshold of qubits, it might be capable of decrypting the key to an 
> address within one year. I base this on the estimate > provided in a study 
> by the Sussex Centre for Quantum Technologies, et. al [1]. They provide two 
> figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one > 
> day. It would seem it scales roughly linearly, and so extrapolating it 
> further, 36,000 qubits would be needed to decrypt an address within one 
> year. However, the IBM Heron > QPU turned out to have a gate time 100x less 
> than was estimated in 2022, and so it might be possible to make do with 
> even fewer qubits still within that timeframe. With > only 360 qubits, 
> barring algorithmic overhead such as for circuit memory, it might be 
> possible to decrypt a single address within a year. That might sound like a 
> lot, but > being able to accomplish that at all would be significant, 
> almost like a Chicago Pile moment, proving something in practice that was 
> previously only thought theoretically > possible for the past 3 decades. 
> And it's only downhill from there...
>
> Briefly surveying the paper "The impact of hardware specifications on 
> reaching quantum advantage in the fault tolerant regime", I think it's a 
> reasonble framework to evaluate
> the practical efficiency of quantum attacks on bitcoin, it's self 
> consistent and there is a critical approach referencing the usual 
> litterature on quantum attacks on bitcoin. Just
> note the caveat, one can find in usual quantum complexity litterature, 
> "particularly in regard to end-to-end physical resource estimation. There 
> are many other error correction
> techniques available, and the best choice will likely depend on the 
> underlying architecture's characteristics, such as the available physical 
> qubit–qubit connectivity" (verbatim). Namely, evaluating quantum attacks is 
> very dependent on the concrete physical architecture underpinning it.
>
> All that said, I agree with you that if you see a quantum computer with 
> the range of 1000 physical qubits being able to break the DLP for ECC based 
> encryption like secp256k1, even if it takes a year it will be a Chicago 
> Pile moment, or whatever comparative experiments which were happening about 
> chain of nuclear reactions in 30s / 40s.
>
> >  I think it's time to revisit these discussions given IBM's progress. 
> They've published a two videos in particular that are worth watching; their 
> keynote from December of last > year [2], and their roadmap update from 
> just last month [3]
>
> I have looked on the roadmap as it's available on the IBM blog post: 
> https://www.ibm.com/quantum/blog/quantum-roadmap-2033#mark-roadmap-out-to-2033
> They give only a target of 2000 logical qubit to be reach in 2033...which 
> is surprisingly not that strong...And one expect they might hit likely solid
> state issues in laying out in hardware the Heron processor architecture. 
> As a point of thinking, it took like 2 decades to advance on the state of 
> art
> of litography in traditional chips manufacturing.
>  
> So I think it's good to stay cool minded and I think my observation about 
> talking of "super-exponential rate" as used in maaku old blog post does not
> hold a lot of rigor to describe the advances in the field of quantum 
> computing. Note, also how IMB is a commercial entity that can have a lot of 
> interests
> in "pumping" the state of "quantum computing" to gather fundings (there is 
> a historical anecdote among bitcoin OG circles about Vitalik trying to do an
> ICO to build a quantum computer like 10 years ago, just to remember).
>
> > I'm supportive of this consideration. FALCON might be a good substitute, 
> and maybe it can be upgraded to HAWK for even better performance depending 
> on how much > time there is. According to the BIP, FALCON signatures are 
> ~10x larger t> han Schnorr signatures, so this will of course make the 
> transaction more expensive, but we also > must remember, these signatures 
> will be going into the witness, which already receives a 4x discount. 
> Perhaps the discount could be incr> eased further someday to fit > more 
> transactions into blocks, but this will also likely result in more 
> inscriptions filling unused space also, which permanently increases the 
> burden of running an archive > node. Due to the controversy s> uch a change 
> could bring, I would rather any increases in the witness discount be 
> excluded from future activation discussions, so as to be > considered 
> separately, even if it pertains to an increase in P2QRH transaction size.
>  
> > Do you think it's worth reworking the BIP to use FALCON signatures? I've 
> only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the 
> readiness levels between those two are presently worlds apart.
>
> I think FALCON is what has the smallest pubkey + sig size for 
> hash-and-sign lattice-based schemes. So I think it's worth reworking the 
> BIP to see what has the smallest generation / validation time and pubkey + 
> size space for the main post-quantum scheme. At least for dilthium, falcon, 
> sphincs+ and SQISign. For an hypothetical witness discount, a v2 P2QRH 
> could be always be moved in a very template annex tag / field.
>
> > Also, do you think it's of any concern to use HASH160 instead of HASH256 
> in the output script? I think it's fine for a cryptographic commitment 
> since it's simply a hash of a hash (MD160 of SHA-256).
>
> See literature on quantum attacks on bitcoin in the reference of the paper 
> you quote ("The impact of hardware specifications on reaching quantum 
> advantage in the fault tolerant regime") for a discussion on Grover's 
> search algorithm.
>
> > I'm not sure I fully understand this, but even more practically, as 
> mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally 
> with a value of fewer than 50
> > coins per address, and when funds ever need to be spent, the> 
>  transaction is signed and submitted out of band to a trusted mining pool, 
> ideally one that does KYC, so it's
> > known which individual miners get to see the public key before it's 
> mined. It's not perfect, since this relies on exogenou> s security 
> assumptions, which is why P2QRH is
> > proposed.
>
> Again, the paper you're referencing ("The impact of hardware 
> specifications on reaching quantum advantage...") is analyzing the 
> performance of quantum advantage under
> 2 dimensions, namely space and time. My observation is in Bitcoin we have 
> an additional dimension, "coin scarcity" that can be leveraged to build 
> defense of address
> spends in face of quantum attacks.
>
> Namely you can introduce an artifical "witness-stack size scale ladder" in 
> pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
> I have not verified it works well on bitcoin core though this script 
> should put the burden on the quantum attacker to have enough bitcoin amount 
> available to burn in on-chain fees in witness size to break a P2WPKH.
>
>
> >  ideally with a value of fewer than 50 coins per address, and when funds 
> ever need to be spent, the transaction is signed and submitted out of band 
> to a trusted mining pool, ideally
> > one that does KYC, so it's known which individual > miners get to see 
> the public key before it's mined. It's not perfect, since this relies on 
> exogenous security assumptions, which is
> > why P2QRH is proposed.
>
> The technical issue if you implement KYC for a mining pool you're 
> increasing your DoS surface and this could be exploited by competing 
> miners. A more reasonable security model can be to have miner coinbase 
> pubkeys being used to commit to the "seen-in-mempool" spends and from then 
> build "hand wawy" fraud proofs that a miner is quantum attacking you're 
> P2WSH spends at pubkey reveal time during transaction relay.
>
> Best,
> Antoine
>
> ots hash: 1ad818955bbf0c5468847c00c2974ddb5cf609d630523622bfdb27f1f0dc0b30
> Le lundi 17 juin 2024 à 23:25:25 UTC+1, hunter a écrit :
>
>
> -----BEGIN PGP SIGNED MESSAGE----- 
> Hash: SHA256 
>
> On 2024-06-16 19:31, Antoine Riard <antoin...@gmail•com> wrote: 
>
> > 
> > Hi Hunter Beast,I think any post-quantum upgrade signature algorithm 
> upgrade proposal would grandly benefit to haveShor's based practical 
> attacks far more defined in the Bitcoin context. As soon you start to talk 
> aboutquantum computers there is no such thing as a "quantum computer" 
> though a wide array of architecturesbased on a range of technologies to 
> encode qubits on nanoscale physical properties. 
> > 
> Good point. I can write a section in the BIP Motivation or Security 
> section about how an attack might take place practically, and the potential 
> urgency of such an attack. 
>   
> I was thinking of focusing on the IBM Quantum System Two, mention how it 
> can be scaled, and that although it might be quite limited, if running 
> Shor's variant for a sufficient amount of time, above a certain minimum 
> threshold of qubits, it might be capable of decrypting the key to an 
> address within one year. I base this on the estimate provided in a study by 
> the Sussex Centre for Quantum Technologies, et. al [1]. They provide two 
> figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one 
> day. It would seem it scales roughly linearly, and so extrapolating it 
> further, 36,000 qubits would be needed to decrypt an address within one 
> year. However, the IBM Heron QPU turned out to have a gate time 100x less 
> than was estimated in 2022, and so it might be possible to make do with 
> even fewer qubits still within that timeframe. With only 360 qubits, 
> barring algorithmic overhead such as for circuit memory, it might be 
> possible to decrypt a single address within a year. That might sound like a 
> lot, but being able to accomplish that at all would be significant, almost 
> like a Chicago Pile moment, proving something in practice that was 
> previously only thought theoretically possible for the past 3 decades. And 
> it's only downhill from there... 
> > 
> > This is not certain that any Shor's algorithm variant works smoothly 
> independently of the quantum computerarchitecture considered (e.g gate 
> frequency, gate infidelity, cooling energy consumption) and I think it'san 
> interesting open game-theory problem if you can concentrate a sufficiant 
> amount of energy before anycoin owner moves them in consequence (e.g seeing 
> a quantum break in the mempool and reacting with a counter-spend). 
> > 
> It should be noted that P2PK keys still hold millions of bitcoin, and 
> those encode the entire public key for everyone to see for all time. Thus, 
> early QC attacks won't need to consider the complexities of the mempool. 
> > 
> > In my opinion, one of the last time the subject was addressed on the 
> mailing list, the description of the state of the quantum computer field 
> was not realistic and get into risk characterization hyperbole talking 
> about "super-exponential rate" (when indeed there is no empirical 
> realization that distinct theoretical advance on quantum capabilities can 
> be combined with each other) [1]. 
> > 
> I think it's time to revisit these discussions given IBM's progress. 
> They've published a two videos in particular that are worth watching; their 
> keynote from December of last year [2], and their roadmap update from just 
> last month [3]. 
> > 
> > On your proposal, there is an immediate observation which comes to mind, 
> namely why not using one of the algorithm(dilthium, sphincs+, falcon) which 
> has been through the 3 rounds of NIST cryptanalysis. Apart of the signature 
> size,which sounds to be smaller, in a network of full-nodes any PQ 
> signature algorithm should have reasonable verificationperformances. 
> > 
> I'm supportive of this consideration. FALCON might be a good substitute, 
> and maybe it can be upgraded to HAWK for even better performance depending 
> on how much time there is. According to the BIP, FALCON signatures are ~10x 
> larger than Schnorr signatures, so this will of course make the transaction 
> more expensive, but we also must remember, these signatures will be going 
> into the witness, which already receives a 4x discount. Perhaps the 
> discount could be increased further someday to fit more transactions into 
> blocks, but this will also likely result in more inscriptions filling 
> unused space also, which permanently increases the burden of running an 
> archive node. Due to the controversy such a change could bring, I would 
> rather any increases in the witness discount be excluded from future 
> activation discussions, so as to be considered separately, even if it 
> pertains to an increase in P2QRH transaction size. 
>   
> Do you think it's worth reworking the BIP to use FALCON signatures? I've 
> only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the 
> readiness levels between those two are presently worlds apart. 
>   
> Also, do you think it's of any concern to use HASH160 instead of HASH256 
> in the output script? I think it's fine for a cryptographic commitment 
> since it's simply a hash of a hash (MD160 of SHA-256). 
> > 
> > Lastly, there is a practical defensive technique that can be implemented 
> today by coin owners to protect in face ofhyptothetical quantum 
> adversaries. Namely setting spending scripts to request an artificially 
> inflated witness stack,as the cost has to be burden by the spender. I think 
> one can easily do that with OP_DUP and OP_GREATERTHAN and a bitof stack 
> shuffling. While the efficiency of this technique is limited by the max 
> consensus size of the script stack(`MAX_STACK_SIZE`) and the max consensus 
> size of stack element (`MAX_SCRIPT_ELEMENT_SIZE`), this adds an 
> additional"scarce coins" pre-requirement on the quantum adversarise to 
> succeed. Shor's algorithm is only defined under theclassic ressources of 
> computational complexity, time and space. 
> > 
> I'm not sure I fully understand this, but even more practically, as 
> mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally 
> with a value of fewer than 50 coins per address, and when funds ever need 
> to be spent, the transaction is signed and submitted out of band to a 
> trusted mining pool, ideally one that does KYC, so it's known which 
> individual miners get to see the public key before it's mined. It's not 
> perfect, since this relies on exogenous security assumptions, which is why 
> P2QRH is proposed. 
> > 
> > Best,Antoine 
> > [1] https://freicoin.substack.com/p/why-im-against-taproot 
> > 
>   
> I'm grateful you took the time to review the BIP and offer your detailed 
> insights. 
>   
> [1] “The impact of hardware specifications on reaching quantum advantage 
> in the fault tolerant regime,” 2022 - 
> https://pubs.aip.org/avs/aqs/article/4/1/013801/2835275/The-impact-of-hardware-specifications-on-reaching 
> [2] https://www.youtube.com/watch?v=De2IlWji8Ck 
> [3] https://www.youtube.com/watch?v=d5aIx79OTps 
>   
> > 
> > 
> > Le vendredi 14 juin 2024 à 15:30:54 UTC+1, Hunter Beast a écrit : 
> > 
> > > Good points. I like your suggestion for a SPHINCS+, just due to how 
> mature it is in comparison to SQIsign. It's already in its third round and 
> has several standards-compliant implementations, and it has an actual 
> specification rather than just a research paper. One thing to consider is 
> that NIST-I round 3 signatures are 982 bytes in size, according to what I 
> was able to find in the documents hosted by the SPHINCS website. 
> > > 
> https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip 
> > >   
> > > One way to handle this is to introduce this as a separate address type 
> than SQIsign. That won't require OP_CAT, and I do want to keep this soft 
> fork limited in scope. If SQIsign does become significantly broken, in this 
> hopefully far future scenario, I might be supportive of an increase in the 
> witness discount. 
> > >   
> > > Also, I've made some additional changes based on your feedback on X. 
> You can review them here if you so wish: 
> > > 
> https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754 
> > > 
> > > 
> > > On Friday, June 14, 2024 at 8:15:29 AM UTC-6 Pierre-Luc 
> Dallaire-Demers wrote: 
> > > > SQIsign is blockchain friendly but also very new, I would recommend 
> adding a hash-based backup key in case an attack on SQIsign is found in the 
> future (recall that SIDH broke over the span of a weekend 
> https://eprint.iacr.org/2022/975.pdf). 
> > > > Backup keys can be added in the form of a Merkle tree where one 
> branch would contain the SQIsign public key and the other the public key of 
> the recovery hash-based scheme. For most transactions it would only add one 
> bit to specify the SQIsign branch. 
> > > > The hash-based method could be Sphincs+, which is standardized by 
> NIST but requires adding extra code, or Lamport, which is not standardized 
> but can be verified on-chain with OP-CAT. 
> > > > 
> > > > On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4 Hunter Beast wrote: 
> > > > > The motivation for this BIP is to provide a concrete proposal for 
> adding quantum resistance to Bitcoin. We will need to pick a signature 
> algorithm, implement it, and have it ready in event of quantum emergency. 
> There will be time to adopt it. Importantly, this first step is a more 
> substantive answer to those with concerns beyond, "quantum computers may 
> pose a threat, but we likely don't have to worry about that for a long 
> time". Bitcoin development and activation is slow, so it's important that 
> those with low time preference start discussing this as a serious 
> possibility sooner rather than later. This is meant to be the first in a 
> series of BIPs regarding a hypothetical "QuBit" soft fork. The BIP is 
> intended to propose concrete solutions, even if they're early and 
> incomplete, so that Bitcoin developers are aware of the existence of these 
> solutions and their potential. This is just a rough draft and not the 
> finished BIP. I'd like to validate the approach and hear if I should 
> continue working on it, whether serious changes are needed, or if this 
> truly isn't a worthwhile endeavor right now. 
> > > > >   
> > > > > The BIP can be found here: 
> > > > > https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki 
> > > > >   
> > > > > Thank you for your time. 
> > > > >   
> > > > > 
> > > > 
> > > > 
> > > 
> > > 
> > 
> > 
> > -- You received this message because you are subscribed to a topic in 
> the Google Groups "Bitcoin Development Mailing List" group. To unsubscribe 
> from this topic, visit 
> https://groups.google.com/d/topic/bitcoindev/Aee8xKuIC2s/unsubscribe. To 
> unsubscribe from this group and all its topics, send an email to 
> bitcoindev+...@googlegroups•com. To view this discussion on the web visit 
> https://groups.google.com/d/msgid/bitcoindev/87b4e402-39d8-46b0-8269-4f81fa501627n%40googlegroups.com. 
>
>
> -----BEGIN PGP SIGNATURE----- 
> Version: OpenPGP.js v4.10.3 
> Comment: https://openpgpjs.org 
>
> wsBcBAEBCAAGBQJmcJwuAAoJEDEPCKe+At0hjhkIAIdM7QN9hAO0z+KO7Bwe 
> JT45XyusJmDG1gJbLZtb+SfuE1X5PFDHNTLSNliJWsOImxFCiBPnlXhYQ4B/ 
> 8gST3rqplUwkdYr52E5uMxTTq9YaXTako4PNb8d7XfraIwDKXAJF+5Skf4f9 
> bQUYMieBAFSEXCmluirQymB+hUoaze60Whd07hhpzbGSwK4DdSXltufkyCDE 
> tJUforNWm8X25ABTSNDh3+if5V/wJuix/u8GJyMHKucaEAO01ki2oyusq2rt 
> Xe6ysUieclusFFdQAs4PfYxhzXTf5XeAbFga/qxrVtbt7q2nUkYklqteT2pp 
> mH/DU20HMBeGVSrISrvsmLw= 
> =+wat 
> -----END PGP SIGNATURE----- 
>
>

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/ac28feaf-6649-4501-9b1a-1410e5baa05dn%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 32846 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [bitcoindev] Re: Proposing a P2QRH BIP towards a quantum resistant soft fork
  2024-08-15  5:05             ` Hunter Beast
@ 2024-08-22  6:20               ` Antoine Riard
  2024-09-25 12:04                 ` Hunter Beast
  0 siblings, 1 reply; 10+ messages in thread
From: Antoine Riard @ 2024-08-22  6:20 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 37419 bytes --]

Hello Hunter,

> Well, it's also important to remember that for every qubit added, it 
doubles the power of the system. A 2,000 qubit cryptographically-relevant 
quantum computer (CRQC) is exponentially faster than a 1,000 qubit one. 
There's also the capability for cross-links for multiple chips to 
communicate with each other, which IBM is also researching. The IBM Quantum 
System Two can be upgraded to support 16,000 qubits according to their 
marketing. Also consider that the verification of the results from the 
CRQC can be done via classical computer, so a high level of error 
correction might not be as necessary so long as the program is run enough 
times. It will take much longer, of course.

On performance, once again I think it all depends on the quantum computer 
architecture considered and whether we're talking about physical or logical 
qubits. As the paper "The impact of hardware specifications on reaching 
quantum advantage in the fault tolerant regime" linked in your BIP 
judiciously observes in its introduction, surface code (as used by IBM) is 
only one of the available error-correction techniques.

About cross-links for multiple chips, even if each chip parallelizes towards 
a single classical logical unit, ordering computational units is a 
notoriously hard issue in classical computers. I don't think there is any 
certainty in quantum computer development that the sets of qubits of 
isolated chips can be arithmetically added together without a coefficient 
loss on the resulting sum (...there is always a bit of apprehension in 
having to dissociate marketing claims from academic claims duly 
peer-reviewed...). And while indeed the results can be evaluated via a 
classical computer, it doesn't follow that this evaluation will be as 
efficient (in energy / computational cycles) as doing more error correction 
on the quantum computer side.

> I've decided in one of my more recent updates to the BIP to default to 
the highest level of NIST security, NIST V, which provides 256 bits of 
security. You can see my rationale for that in this PR:
> https://github.com/cryptoquick/bips/pull/7/files

Those are assumptions that there is a security increase from scaling up the 
size of the public key. In the Bitcoin world, we don't even make assumptions 
on the public key size for the ECDSA signature scheme, as both compressed 
and uncompressed public keys have been historically valid. Similarly, the 
public key size does not have to be bundled with the specification of the 
signature verification scheme itself (e.g. see the BIP340 discussion on 
x-only public keys).

> As such, you'll see FALCON is roughly 4x larger than SQIsign signatures. 
Although supersingular elliptic curve quaternion isogeny-based algorithms 
are newer and more experimental than lattice-based cryptography, I think 
the benefits outweigh the risks, especially when transaction throughput is 
a principal concern.
 
There are no public key sizes in the security table, so it's hard to compare 
the overall on-chain space cost for each post-quantum signature algorithm 
considered. Nor, actually, is there an estimation of the verification cost 
for an average 200-byte transaction; good old Hamilton quaternions rely on 
complex numbers, which can be hard to deal with for hobbyist CPUs, and that 
can be a concern.

> It's crucial that the signature and public key both receive the witness 
discount. Can you go into more detail in how that might be accomplished?

The BIP341 taproot annex could be used for that, see 
https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki#cite_note-5
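
As a purely hypothetical illustration of that direction (BIP341 only defines 
the annex as the last witness element whose first byte is 0x50 and leaves its 
content undefined, so the tag / length framing below is an assumption of 
mine, not an existing standard):

    ANNEX_PREFIX = 0x50        # BIP341: the annex is the last witness element
                               # and its first byte must be 0x50
    TAG_PQ_SIGNATURE = 0x01    # hypothetical type tag, not standardized anywhere

    def build_pq_annex(pq_signature: bytes) -> bytes:
        # Hypothetical TLV framing: prefix || tag || 2-byte length || payload.
        return (bytes([ANNEX_PREFIX, TAG_PQ_SIGNATURE]) +
                len(pq_signature).to_bytes(2, 'little') + pq_signature)

    def witness_with_annex(script_inputs, pq_signature: bytes):
        # The annex, if present, must come last on the witness stack.
        return list(script_inputs) + [build_pq_annex(pq_signature)]

    # Placeholder sizes: a 64-byte Schnorr-style signature plus a ~1.3 kB PQ signature.
    stack = witness_with_annex([bytes(64)], bytes(1280))
    print(len(stack), stack[-1][0] == ANNEX_PREFIX, len(stack[-1]))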

> Although it's too early to talk about activation of a QuBit soft fork, 
I've put some thought into how we can maintain the existing Bitcoin 
throughput with a soft fork, and I think it might be prudent to, when the 
time comes, introduce a 4x additional QuBit witness discount, maybe we call 
it the quitness, which is only available to valid P2QRH signatures. This 
would preclude its abuse for things like inscriptions because the signature 
data would need to correspond to the key, and even if this were possible, 
it's likely to result in only a burner address. This would increase chain 
state growth from roughly 100GB/yr to possibly closer to 2-300GB, depending 
on adoption. As the state of the art of SSD technology advances, this 
should allow plebs to run their own node on a 4TB disk for over a decade, 
even including existing chain size of ~600GB.

The annex could have typed fields for a further witness discount on the 
post-quantum signature and public key. However, I think it's a bit naive to 
assume that SSD technology advances will stay linear and that they will be 
economically accessible at the same pace to the tens of thousands of plebs 
actually running full nodes and constituting the skeleton of the base-relay 
network. One could play out, a posteriori, the predictions on bandwidth 
technology advances that were made in BIP103 to see how well they have held 
up over the last ~9 years.

(There is another caution in evaluating technological advances, namely that 
some hardware components could actually be massively consumed by other 
cryptocurrencies for their consensus algorithms...)
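
For what it's worth, the storage-headroom claim quoted above can be checked 
with simple arithmetic under its own (optimistic, linear-growth) assumptions:

    # Headroom on a 4 TB disk, starting from ~600 GB of existing chain data,
    # for the growth rates quoted above (GB per year), treated as assumptions.
    existing_chain_gb = 600
    disk_gb = 4_000
    for growth_gb_per_year in (100, 200, 300):
        years = (disk_gb - existing_chain_gb) / growth_gb_per_year
        print(growth_gb_per_year, round(years, 1))
    # -> 34.0, 17.0 and 11.3 years respectively; "over a decade" only holds at
    #    the upper growth rate if disk capacity per price point keeps pace.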

> If we were to use the same approach for FALCON signatures, a 16x discount 
would be needed, and I think that's far too much for the community to 
accept. As for pub key size and verification time, these are secondary 
considerations if the primary constraint is maintaining present transaction 
throughput. That's what makes SQIsign so promising.

Well, if there is something like the annex with typed fields, each type of 
post-quantum signature could get a wider discount, especially if there are 
verification asymmetries favoring some scheme over another, even if the 
security properties differ.
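
To make that trade-off concrete, here is a small sketch comparing the 
witness-weight contribution under the current rules and under a hypothetical 
extra discount; the post-quantum signature and key sizes are placeholder 
size classes for the sake of illustration, not authoritative figures:

    # Weight accounting: non-witness bytes cost 4 WU each, witness bytes 1 WU each.
    # "extra_discount" models a hypothetical further discount applied only to
    # annex-carried post-quantum material; 1 means the current rules.
    def witness_weight_units(sig_len: int, pubkey_len: int, extra_discount: int = 1) -> float:
        return (sig_len + pubkey_len) / extra_discount

    schemes = {
        # name: (approx. signature bytes, approx. pubkey bytes) -- placeholder
        # size classes for illustration only.
        "schnorr":       (64, 32),
        "pq_small_sig":  (200, 100),
        "pq_lattice":    (1300, 1800),
    }
    for name, (sig, pk) in schemes.items():
        print(name, witness_weight_units(sig, pk), witness_weight_units(sig, pk, 4))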

> The Impact paper seems to dismiss Grover's algorithm, but I think it's 
important to err on the side of caution and instead use a 32-byte double 
SHA-2 (HASH256) for additional security in the P2QRH output.

Performance-wise, using a double SHA-2 (HASH256) doesn't shock me, as it has 
been used for many domain-separation tagged hashes in taproot.
About Grover's algorithm, it's more the sample space and collision space 
that should be better defined for it to be relevant; you can always degrade 
the performance of Grover's algorithm by scaling up the sample space, though 
it's not clear that is practical for bitcoin transaction generation.

> I'm not sure I understand what you mean by this...
> Is your coin scarcity comment related to what I call "satoshi's shield" 
in the BIP?

Not at all the "satoshi's shield" as you're describing in the BIP.

This is just the observation that bitcoin coins are scarce in the sense 
that you need to burn raw energy to acquire the rewards according to the 
issuance schedule (or miners fees). Bitcoin script can be designed to 
request that a sufficient number of bitcoin coins, or satoshis, are burned 
before to unlock a coin locked under a quantum-frail scriptpubkey.

That means a quantum attacker, even with an efficient quantum computer, 
might not be able to break the redeem script itself, only the signatures 
involved in the redeem script's checksig operations.

Let's give a concrete example; say you have the following pseudo script:

        <<OP_DEPTH> <OP_PUSHDATA2> <998> <OP_EQUALVERIFY> <pubkey> <OP_CHECKSIG>>

Interpreted, this script requests that the spending party, whoever it is, 
provide a witness stack of 998 dummy elements.
Those dummy elements put the burden on the quantum computer attacker of 
burning fees at the current sat-per-vbyte rate to realize a quantum exploit.
(They could leverage SIGHASH_NONE to escape this "fee jail"... however that 
sounds like it would expose them to being overridden by a miner).

So assuming this defensive scheme in the face of quantum exploits is sound, 
I think it puts the burden on a quantum attacker to have hashrate 
capabilities at the current level of difficulty, not solely an efficient 
CRQC.
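
A rough estimate of the fee burden such a dummy-stack requirement imposes, 
assuming minimal 1-byte dummy elements and simplified witness serialization 
(the feerate used is just an example value):

    # Rough fee burden of supplying 998 dummy witness elements: each element is
    # serialized as <compact length><payload>, and witness bytes weigh 1 WU each
    # (i.e. 1/4 vbyte).
    def dummy_stack_vbytes(n_elements: int, element_len: int = 1) -> float:
        witness_bytes = n_elements * (1 + element_len)
        return witness_bytes / 4

    def extra_fee_sats(n_elements: int, feerate_sat_per_vb: float) -> float:
        return dummy_stack_vbytes(n_elements) * feerate_sat_per_vb

    print(dummy_stack_vbytes(998))      # ~499 extra vbytes
    print(extra_fee_sats(998, 30.0))    # ~15,000 sats extra at 30 sat/vB

Larger dummy elements (up to the consensus element-size limit) scale the 
burden up accordingly.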

> Yes, this makes more sense. I'm not sure anything can be done with the 
fraud proofs, but they could at least prove that a bad actor is present. 
Ideally both approaches are combined for maximum security and 
accountability.

No; KYC necessarily hurts mining pools, as there is no single KYC definition 
you can implement that does not open the door to some kind of DoS 
exploitation.

Building a practical fraud-proof system on seen transactions is not an 
issue; the open question is more whether the average bitcoin user would pay 
to download fraud proofs demonstrating that a given miner is not engaging 
in quantum exploits.

> I've taken Antoine's feedback to heart and added FALCON to the 
specification, including a section that addresses the increased maintenance 
burden of adding two distinct post-quantum cryptosystems.

Thank you for the addition. On the maintenance burden, there is always the 
counter-argument to be made that you can secure coins under multiple 
post-quantum signature schemes, especially if they're from different breeds 
of hardness assumptions. If one of the two schemes is secure, the coins are 
still locked by the other half.

I think it could be interesting to split the BIP into multiple ones: one for 
the general consensus mechanism introducing P2QRH with all the quantum-risk 
considerations, and an individual one for each signature algorithm that 
could be deployed under this generic P2QRH, kind of in the same way that 
BIP340 / BIP341 are split.

Best,
Antoine
ots hash: b57e9fe0b3de603ca66be29b7f1ba04fa5b8bc516c1277114ab42ac9f8572e12

On Thursday, August 15, 2024 at 06:25:01 UTC+1, Hunter Beast wrote:

> I've taken Antoine's feedback to heart and added FALCON to the 
> specification, including a section that addresses the increased maintenance 
> burden of adding two distinct post-quantum cryptosystems.
> Please review.
> https://github.com/cryptoquick/bips/pull/9/files
>
> On Tuesday, August 6, 2024 at 11:50:35 AM UTC-6 Hunter Beast wrote:
>
>> That's alright, Antoine, it's been a busy month for me too.
>>
>> > So I think it's good to stay cool minded and I think my observation 
>> about talking of "super-exponential rate" as used in maaku old blog post 
>> does not
>> > hold a lot of rigor to describe the advances in the field of quantum 
>> computing. Note, also how IMB is a commercial entity that can have a lot of 
>> interests
>> > in "pumping" the state of "quantum computing" to gather fundings (there 
>> is a historical anecdote among bitcoin OG circles about Vitalik trying to 
>> do an
>> > ICO to build a quantum computer like 10 years ago, just to remember).
>>
>> Well, it's also important to remember that for every qubit added, it 
>> doubles the power of the system. A 2,000 qubit cryptographically-relevant 
>> quantum computer (CRQC) is exponentially faster than a 1,000 qubit one. 
>> There's also the capability for cross-links for multiple chips to 
>> communicate with each other, which IBM is also researching. The IBM Quantum 
>> System Two can be upgraded to support 16,000 qubits according to their 
>> marketing. Also consider that the verification of the results from the CRQC 
>> can be done via classical computer, so a high level of error correction 
>> might not be as necessary so long as the program is run enough times. It 
>> will take much longer, of course.
>>
>> > I think FALCON is what has the smallest pubkey + sig size for 
>> hash-and-sign lattice-based schemes. So I think it's worth reworking the 
>> BIP to see what has the smallest generation / validation time and pubkey + 
>> size space for the main post-quantum scheme. At least for dilthium, falcon, 
>> sphincs+ and SQISign. For an hypothetical witness discount, a v2 P2QRH 
>> could be always be moved in a very template annex tag / field.
>>
>> I've decided in one of my more recent updates to the BIP to default to 
>> the highest level of NIST security, NIST V, which provides 256 bits of 
>> security. You can see my rationale for that in this PR:
>> https://github.com/cryptoquick/bips/pull/7/files
>> Then, referencing this table:
>>
>> https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki#security
>> As such, you'll see FALCON is roughly 4x larger than SQIsign signatures. 
>> Although supersingular elliptic curve quaternion isogeny-based algorithms 
>> are newer and more experimental than lattice-based cryptography, I think 
>> the benefits outweigh the risks, especially when transaction throughput is 
>> a principal concern.
>>
>> It's crucial that the signature and public key both receive the witness 
>> discount. Can you go into more detail in how that might be accomplished?
>>
>> Although it's too early to talk about activation of a QuBit soft fork, 
>> I've put some thought into how we can maintain the existing Bitcoin 
>> throughput with a soft fork, and I think it might be prudent to, when the 
>> time comes, introduce a 4x additional QuBit witness discount, maybe we call 
>> it the quitness, which is only available to valid P2QRH signatures. This 
>> would preclude its abuse for things like inscriptions because the signature 
>> data would need to correspond to the key, and even if this were possible, 
>> it's likely to result in only a burner address. This would increase chain 
>> state growth from roughly 100GB/yr to possibly closer to 2-300GB, depending 
>> on adoption. As the state of the art of SSD technology advances, this 
>> should allow plebs to run their own node on a 4TB disk for over a decade, 
>> even including existing chain size of ~600GB.
>>
>> If we were to use the same approach for FALCON signatures, a 16x discount 
>> would be needed, and I think that's far too much for the community to 
>> accept. As for pub key size and verification time, these are secondary 
>> considerations if the primary constraint is maintaining present transaction 
>> throughput. That's what makes SQIsign so promising.
>>
>> > See literature on quantum attacks on bitcoin in the reference of the 
>> paper you quote ("The impact of hardware specifications on reaching quantum 
>> advantage in the fault tolerant regime") for a discussion on Grover's 
>> search algorithm.
>>
>> The Impact paper seems to dismiss Grover's algorithm, but I think it's 
>> important to err on the side of caution and instead use a 32-byte double 
>> SHA-2 (HASH256) for additional security in the P2QRH output.
>>
>> > Namely you can introduce an artifical "witness-stack size scale ladder" 
>> in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP 
>> ...checksig...
>> > I have not verified it works well on bitcoin core though this script 
>> should put the burden on the quantum attacker to have enough bitcoin amount 
>> available to burn in on-chain fees in witness size to break a P2WPKH.
>>
>> I'm not sure I understand what you mean by this...
>> Is your coin scarcity comment related to what I call "satoshi's shield" 
>> in the BIP?
>>
>> > The technical issue if you implement KYC for a mining pool you're 
>> increasing your DoS surface and this could be exploited by competing 
>> miners. A more reasonable security model can be to have miner coinbase 
>> pubkeys being used to commit to the "seen-in-mempool" spends and from then 
>> build "hand wawy" fraud proofs that a miner is quantum attacking you're 
>> P2WSH spends at pubkey reveal time during transaction relay.
>>
>> Yes, this makes more sense. I'm not sure anything can be done with the 
>> fraud proofs, but they could at least prove that a bad actor is present. 
>> Ideally both approaches are combined for maximum security and 
>> accountability.
>>
>> Thanks for your time!
>>
>> On Friday, July 12, 2024 at 7:44:27 PM UTC-6 Antoine Riard wrote:
>>
>> Hi Hunter Beast,
>>
>> Apologies for the delay in answer.
>>
>> > I was thinking of focusing on the IBM Quantum System Two, mention how 
>> it can be scaled, and that although it might be quite limited, if running 
>> Shor's variant for a sufficient amount of time, above a certain minimum 
>> threshold of qubits, it might be capable of decrypting the key to an 
>> address within one year. I base this on the estimate provided in a study 
>> by the Sussex Centre for Quantum Technologies, et. al [1]. They provide two 
>> figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one 
>> day. It would seem it scales roughly linearly, and so extrapolating it 
>> further, 36,000 qubits would be needed to decrypt an address within one 
>> year. However, the IBM Heron QPU turned out to have a gate time 100x less 
>> than was estimated in 2022, and so it might be possible to make do with 
>> even fewer qubits still within that timeframe. With only 360 qubits, 
>> barring algorithmic overhead such as for circuit memory, it might be 
>> possible to decrypt a single address within a year. That might sound like 
>> a lot, but being able to accomplish that at all would be significant, 
>> almost like a Chicago Pile moment, proving something in practice that was 
>> previously only thought theoretically possible for the past 3 decades. 
>> And it's only downhill from there...
>>
>> Briefly surveying the paper "The impact of hardware specifications on 
>> reaching quantum advantage in the fault tolerant regime", I think it's a 
>> reasonble framework to evaluate
>> the practical efficiency of quantum attacks on bitcoin, it's self 
>> consistent and there is a critical approach referencing the usual 
>> litterature on quantum attacks on bitcoin. Just
>> note the caveat, one can find in usual quantum complexity litterature, 
>> "particularly in regard to end-to-end physical resource estimation. There 
>> are many other error correction
>> techniques available, and the best choice will likely depend on the 
>> underlying architecture's characteristics, such as the available physical 
>> qubit–qubit connectivity" (verbatim). Namely, evaluating quantum attacks is 
>> very dependent on the concrete physical architecture underpinning it.
>>
>> All that said, I agree with you that if you see a quantum computer with 
>> the range of 1000 physical qubits being able to break the DLP for ECC based 
>> encryption like secp256k1, even if it takes a year it will be a Chicago 
>> Pile moment, or whatever comparative experiments which were happening about 
>> chain of nuclear reactions in 30s / 40s.
>>
>> >  I think it's time to revisit these discussions given IBM's progress. 
>> They've published a two videos in particular that are worth watching; their 
>> keynote from December of last > year [2], and their roadmap update from 
>> just last month [3]
>>
>> I have looked on the roadmap as it's available on the IBM blog post: 
>> https://www.ibm.com/quantum/blog/quantum-roadmap-2033#mark-roadmap-out-to-2033
>> They give only a target of 2000 logical qubit to be reach in 2033...which 
>> is surprisingly not that strong...And one expect they might hit likely solid
>> state issues in laying out in hardware the Heron processor architecture. 
>> As a point of thinking, it took like 2 decades to advance on the state of 
>> art
>> of litography in traditional chips manufacturing.
>>  
>> So I think it's good to stay cool minded and I think my observation about 
>> talking of "super-exponential rate" as used in maaku old blog post does not
>> hold a lot of rigor to describe the advances in the field of quantum 
>> computing. Note, also how IMB is a commercial entity that can have a lot of 
>> interests
>> in "pumping" the state of "quantum computing" to gather fundings (there 
>> is a historical anecdote among bitcoin OG circles about Vitalik trying to 
>> do an
>> ICO to build a quantum computer like 10 years ago, just to remember).
>>
>> > I'm supportive of this consideration. FALCON might be a good 
>> substitute, and maybe it can be upgraded to HAWK for even better 
>> performance depending on how much time there is. According to the BIP, 
>> FALCON signatures are ~10x larger than Schnorr signatures, so this will 
>> of course make the transaction more expensive, but we also must remember, 
>> these signatures will be going into the witness, which already receives a 
>> 4x discount. Perhaps the discount could be increased further someday to 
>> fit more transactions into blocks, but this will also likely result in 
>> more inscriptions filling unused space also, which permanently increases 
>> the burden of running an archive node. Due to the controversy such a 
>> change could bring, I would rather any increases in the witness discount 
>> be excluded from future activation discussions, so as to be considered 
>> separately, even if it pertains to an increase in P2QRH transaction size.
>>  
>> > Do you think it's worth reworking the BIP to use FALCON signatures? 
>> I've only done a deep dive into SQIsign and SPHINCS+, and I will 
>> acknowledge the readiness levels between those two are presently worlds 
>> apart.
>>
>> I think FALCON is what has the smallest pubkey + sig size for 
>> hash-and-sign lattice-based schemes. So I think it's worth reworking the 
>> BIP to see what has the smallest generation / validation time and pubkey + 
>> size space for the main post-quantum scheme. At least for dilthium, falcon, 
>> sphincs+ and SQISign. For an hypothetical witness discount, a v2 P2QRH 
>> could be always be moved in a very template annex tag / field.
>>
>> > Also, do you think it's of any concern to use HASH160 instead of 
>> HASH256 in the output script? I think it's fine for a cryptographic 
>> commitment since it's simply a hash of a hash (MD160 of SHA-256).
>>
>> See literature on quantum attacks on bitcoin in the reference of the 
>> paper you quote ("The impact of hardware specifications on reaching quantum 
>> advantage in the fault tolerant regime") for a discussion on Grover's 
>> search algorithm.
>>
>> > I'm not sure I fully understand this, but even more practically, as 
>> mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally 
>> with a value of fewer than 50
>> > coins per address, and when funds ever need to be spent, the 
>> transaction is signed and submitted out of band to a trusted mining pool, 
>> ideally one that does KYC, so it's
>> > known which individual miners get to see the public key before it's 
>> mined. It's not perfect, since this relies on exogenous security 
>> assumptions, which is why P2QRH is
>> > proposed.
>>
>> Again, the paper you're referencing ("The impact of hardware 
>> specifications on reaching quantum advantage...") is analyzing the 
>> performance of quantum advantage under
>> 2 dimensions, namely space and time. My observation is in Bitcoin we have 
>> an additional dimension, "coin scarcity" that can be leveraged to build 
>> defense of address
>> spends in face of quantum attacks.
>>
>> Namely you can introduce an artifical "witness-stack size scale ladder" 
>> in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP 
>> ...checksig...
>> I have not verified it works well on bitcoin core though this script 
>> should put the burden on the quantum attacker to have enough bitcoin amount 
>> available to burn in on-chain fees in witness size to break a P2WPKH.
>>
>>
>> >  ideally with a value of fewer than 50 coins per address, and when 
>> funds ever need to be spent, the transaction is signed and submitted out 
>> of band to a trusted mining pool, ideally
>> > one that does KYC, so it's known which individual miners get to see 
>> the public key before it's mined. It's not perfect, since this relies on 
>> exogenous security assumptions, which is
>> > why P2QRH is proposed.
>>
>> The technical issue if you implement KYC for a mining pool you're 
>> increasing your DoS surface and this could be exploited by competing 
>> miners. A more reasonable security model can be to have miner coinbase 
>> pubkeys being used to commit to the "seen-in-mempool" spends and from then 
>> build "hand wawy" fraud proofs that a miner is quantum attacking you're 
>> P2WSH spends at pubkey reveal time during transaction relay.
>>
>> Best,
>> Antoine
>>
>> ots hash: 1ad818955bbf0c5468847c00c2974ddb5cf609d630523622bfdb27f1f0dc0b30
>> Le lundi 17 juin 2024 à 23:25:25 UTC+1, hunter a écrit :
>>
>>
>> -----BEGIN PGP SIGNED MESSAGE----- 
>> Hash: SHA256 
>>
>> On 2024-06-16 19:31, Antoine Riard <antoin...@gmail•com> wrote: 
>>
>> > 
>> > Hi Hunter Beast, I think any post-quantum upgrade signature algorithm 
>> upgrade proposal would grandly benefit to have Shor's based practical 
>> attacks far more defined in the Bitcoin context. As soon you start to talk 
>> about quantum computers there is no such thing as a "quantum computer" 
>> though a wide array of architectures based on a range of technologies to 
>> encode qubits on nanoscale physical properties. 
>> > 
>> Good point. I can write a section in the BIP Motivation or Security 
>> section about how an attack might take place practically, and the potential 
>> urgency of such an attack. 
>>   
>> I was thinking of focusing on the IBM Quantum System Two, mention how it 
>> can be scaled, and that although it might be quite limited, if running 
>> Shor's variant for a sufficient amount of time, above a certain minimum 
>> threshold of qubits, it might be capable of decrypting the key to an 
>> address within one year. I base this on the estimate provided in a study by 
>> the Sussex Centre for Quantum Technologies, et. al [1]. They provide two 
>> figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one 
>> day. It would seem it scales roughly linearly, and so extrapolating it 
>> further, 36,000 qubits would be needed to decrypt an address within one 
>> year. However, the IBM Heron QPU turned out to have a gate time 100x less 
>> than was estimated in 2022, and so it might be possible to make do with 
>> even fewer qubits still within that timeframe. With only 360 qubits, 
>> barring algorithmic overhead such as for circuit memory, it might be 
>> possible to decrypt a single address within a year. That might sound like a 
>> lot, but being able to accomplish that at all would be significant, almost 
>> like a Chicago Pile moment, proving something in practice that was 
>> previously only thought theoretically possible for the past 3 decades. And 
>> it's only downhill from there... 
>> > 
>> > This is not certain that any Shor's algorithm variant works smoothly 
>> independently of the quantum computer architecture considered (e.g gate 
>> frequency, gate infidelity, cooling energy consumption) and I think it's 
>> an interesting open game-theory problem if you can concentrate a 
>> sufficient amount of energy before any coin owner moves them in 
>> consequence (e.g seeing a quantum break in the mempool and reacting with 
>> a counter-spend). 
>> > 
>> It should be noted that P2PK keys still hold millions of bitcoin, and 
>> those encode the entire public key for everyone to see for all time. Thus, 
>> early QC attacks won't need to consider the complexities of the mempool. 
>> > 
>> > In my opinion, one of the last time the subject was addressed on the 
>> mailing list, the description of the state of the quantum computer field 
>> was not realistic and get into risk characterization hyperbole talking 
>> about "super-exponential rate" (when indeed there is no empirical 
>> realization that distinct theoretical advance on quantum capabilities can 
>> be combined with each other) [1]. 
>> > 
>> I think it's time to revisit these discussions given IBM's progress. 
>> They've published a two videos in particular that are worth watching; their 
>> keynote from December of last year [2], and their roadmap update from just 
>> last month [3]. 
>> > 
>> > On your proposal, there is an immediate observation which comes to 
>> mind, namely why not use one of the algorithms (dilithium, sphincs+, 
>> falcon) which have been through the 3 rounds of NIST cryptanalysis. Apart 
>> from the signature size, which sounds to be smaller, in a network of 
>> full-nodes any PQ signature algorithm should have reasonable verification 
>> performances. 
>> > 
>> I'm supportive of this consideration. FALCON might be a good substitute, 
>> and maybe it can be upgraded to HAWK for even better performance depending 
>> on how much time there is. According to the BIP, FALCON signatures are ~10x 
>> larger than Schnorr signatures, so this will of course make the transaction 
>> more expensive, but we also must remember, these signatures will be going 
>> into the witness, which already receives a 4x discount. Perhaps the 
>> discount could be increased further someday to fit more transactions into 
>> blocks, but this will also likely result in more inscriptions filling 
>> unused space also, which permanently increases the burden of running an 
>> archive node. Due to the controversy such a change could bring, I would 
>> rather any increases in the witness discount be excluded from future 
>> activation discussions, so as to be considered separately, even if it 
>> pertains to an increase in P2QRH transaction size. 
>>   
>> Do you think it's worth reworking the BIP to use FALCON signatures? I've 
>> only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the 
>> readiness levels between those two are presently worlds apart. 
>>   
>> Also, do you think it's of any concern to use HASH160 instead of HASH256 
>> in the output script? I think it's fine for a cryptographic commitment 
>> since it's simply a hash of a hash (MD160 of SHA-256). 
>> > 
>> > Lastly, there is a practical defensive technique that can be 
>> implemented today by coin owners to protect in face of hypothetical 
>> quantum adversaries. Namely setting spending scripts to request an 
>> artificially inflated witness stack, as the cost has to be borne by the 
>> spender. I think one can easily do that with OP_DUP and OP_GREATERTHAN 
>> and a bit of stack shuffling. While the efficiency of this technique is 
>> limited by the max consensus size of the script stack (`MAX_STACK_SIZE`) 
>> and the max consensus size of a stack element (`MAX_SCRIPT_ELEMENT_SIZE`), 
>> this adds an additional "scarce coins" pre-requirement on the quantum 
>> adversaries to succeed. Shor's algorithm is only defined under the 
>> classic resources of computational complexity, time and space. 
>> > 
>> I'm not sure I fully understand this, but even more practically, as 
>> mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally 
>> with a value of fewer than 50 coins per address, and when funds ever need 
>> to be spent, the transaction is signed and submitted out of band to a 
>> trusted mining pool, ideally one that does KYC, so it's known which 
>> individual miners get to see the public key before it's mined. It's not 
>> perfect, since this relies on exogenous security assumptions, which is why 
>> P2QRH is proposed. 
>> > 
>> > Best,Antoine 
>> > [1] https://freicoin.substack.com/p/why-im-against-taproot 
>> > 
>>   
>> I'm grateful you took the time to review the BIP and offer your detailed 
>> insights. 
>>   
>> [1] “The impact of hardware specifications on reaching quantum advantage 
>> in the fault tolerant regime,” 2022 - 
>> https://pubs.aip.org/avs/aqs/article/4/1/013801/2835275/The-impact-of-hardware-specifications-on-reaching 
>> [2] https://www.youtube.com/watch?v=De2IlWji8Ck 
>> [3] https://www.youtube.com/watch?v=d5aIx79OTps 
>>   
>> > 
>> > 
>> > Le vendredi 14 juin 2024 à 15:30:54 UTC+1, Hunter Beast a écrit : 
>> > 
>> > > Good points. I like your suggestion for a SPHINCS+, just due to how 
>> mature it is in comparison to SQIsign. It's already in its third round and 
>> has several standards-compliant implementations, and it has an actual 
>> specification rather than just a research paper. One thing to consider is 
>> that NIST-I round 3 signatures are 982 bytes in size, according to what I 
>> was able to find in the documents hosted by the SPHINCS website. 
>> > > 
>> https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip 
>> > >   
>> > > One way to handle this is to introduce this as a separate address 
>> type than SQIsign. That won't require OP_CAT, and I do want to keep this 
>> soft fork limited in scope. If SQIsign does become significantly broken, in 
>> this hopefully far future scenario, I might be supportive of an increase in 
>> the witness discount. 
>> > >   
>> > > Also, I've made some additional changes based on your feedback on X. 
>> You can review them here if you so wish: 
>> > > 
>> https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754 
>> > > 
>> > > 
>> > > On Friday, June 14, 2024 at 8:15:29 AM UTC-6 Pierre-Luc 
>> Dallaire-Demers wrote: 
>> > > > SQIsign is blockchain friendly but also very new, I would recommend 
>> adding a hash-based backup key in case an attack on SQIsign is found in the 
>> future (recall that SIDH broke over the span of a weekend 
>> https://eprint.iacr.org/2022/975.pdf). 
>> > > > Backup keys can be added in the form of a Merkle tree where one 
>> branch would contain the SQIsign public key and the other the public key of 
>> the recovery hash-based scheme. For most transactions it would only add one 
>> bit to specify the SQIsign branch. 
>> > > > The hash-based method could be Sphincs+, which is standardized by 
>> NIST but requires adding extra code, or Lamport, which is not standardized 
>> but can be verified on-chain with OP-CAT. 
>> > > > 
>> > > > On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4 Hunter Beast wrote: 
>> > > > > The motivation for this BIP is to provide a concrete proposal for 
>> adding quantum resistance to Bitcoin. We will need to pick a signature 
>> algorithm, implement it, and have it ready in event of quantum emergency. 
>> There will be time to adopt it. Importantly, this first step is a more 
>> substantive answer to those with concerns beyond, "quantum computers may 
>> pose a threat, but we likely don't have to worry about that for a long 
>> time". Bitcoin development and activation is slow, so it's important that 
>> those with low time preference start discussing this as a serious 
>> possibility sooner rather than later. This is meant to be the first in a 
>> series of BIPs regarding a hypothetical "QuBit" soft fork. The BIP is 
>> intended to propose concrete solutions, even if they're early and 
>> incomplete, so that Bitcoin developers are aware of the existence of these 
>> solutions and their potential. This is just a rough draft and not the 
>> finished BIP. I'd like to validate the approach and hear if I should 
>> continue working on it, whether serious changes are needed, or if this 
>> truly isn't a worthwhile endeavor right now. 
>> > > > >   
>> > > > > The BIP can be found here: 
>> > > > > 
>> https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki 
>> > > > >   
>> > > > > Thank you for your time. 
>> > > > >   
>> > > > > 
>> > > > 
>> > > > 
>> > > 
>> > > 
>> > 
>> > 
>> > -- You received this message because you are subscribed to a topic in 
>> the Google Groups "Bitcoin Development Mailing List" group. To unsubscribe 
>> from this topic, visit 
>> https://groups.google.com/d/topic/bitcoindev/Aee8xKuIC2s/unsubscribe. To 
>> unsubscribe from this group and all its topics, send an email to 
>> bitcoindev+...@googlegroups•com. To view this discussion on the web 
>> visit 
>> https://groups.google.com/d/msgid/bitcoindev/87b4e402-39d8-46b0-8269-4f81fa501627n%40googlegroups.com. 
>>
>>
>> -----BEGIN PGP SIGNATURE----- 
>> Version: OpenPGP.js v4.10.3 
>> Comment: https://openpgpjs.org 
>>
>> wsBcBAEBCAAGBQJmcJwuAAoJEDEPCKe+At0hjhkIAIdM7QN9hAO0z+KO7Bwe 
>> JT45XyusJmDG1gJbLZtb+SfuE1X5PFDHNTLSNliJWsOImxFCiBPnlXhYQ4B/ 
>> 8gST3rqplUwkdYr52E5uMxTTq9YaXTako4PNb8d7XfraIwDKXAJF+5Skf4f9 
>> bQUYMieBAFSEXCmluirQymB+hUoaze60Whd07hhpzbGSwK4DdSXltufkyCDE 
>> tJUforNWm8X25ABTSNDh3+if5V/wJuix/u8GJyMHKucaEAO01ki2oyusq2rt 
>> Xe6ysUieclusFFdQAs4PfYxhzXTf5XeAbFga/qxrVtbt7q2nUkYklqteT2pp 
>> mH/DU20HMBeGVSrISrvsmLw= 
>> =+wat 
>> -----END PGP SIGNATURE----- 
>>
>>

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/264e0340-ddfa-411c-a755-948399400b08n%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 43495 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [bitcoindev] Re: Proposing a P2QRH BIP towards a quantum resistant soft fork
  2024-08-22  6:20               ` Antoine Riard
@ 2024-09-25 12:04                 ` Hunter Beast
  0 siblings, 0 replies; 10+ messages in thread
From: Hunter Beast @ 2024-09-25 12:04 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


[-- Attachment #1.1: Type: text/plain, Size: 38782 bytes --]

Thanks for the response as always, Antoine, and I've made several 
substantial updates to the BIP in case you'd like to give it another 
once-over. I'm going to submit P2QRH to bips soon.

On Thursday, August 22, 2024 at 1:29:09 AM UTC-6 Antoine Riard wrote:

Hello Hunter,

> Well, it's also important to remember that for every qubit added, it 
doubles the power of the system. A 2,000 qubit cryptographically-relevant 
quantum computer (CRQC) is exponentially faster than a 1,000 qubit one. 
There's also the capability for cross-links for multiple chips to 
communicate with each other, which IBM is also researching. The IBM Quantum 
System Two can be upgraded to support 16,000 qubits according to their 
marketing. Also consider that the verification of the results from the CRQC 
can be done via classical computer, so a high level of error correction 
might not be as necessary so long as the program is run enough times. It 
will take much longer, of course.

On performance, once again I think it all depends on the quantum computer 
architecture considered and whether we're talking about physical or 
logical qubits. As the paper "The impact of hardware specifications on 
reaching quantum advantage in the fault tolerant regime" linked in your 
BIP judiciously observes in its introduction, surface code (as used by 
IBM) is only one of the error correction techniques.

About cross-links for multiple chips, even if each chip parallelizes 
towards a single classical logical unit, ordering computational units is a 
notoriously hard issue in classical computers. I don't think there is any 
certainty in quantum computer development that the sets of qubits of 
isolated chips can be arithmetically additioned without a coefficient loss 
on the resulting sum (...there is always a bit of apprehension in having 
to dissociate marketing claims from duly peer-reviewed academic 
claims...). And while, indeed, the results can be evaluated via a 
classical computer, this doesn't transitively mean that the evaluation 
will be more efficient (in energy / computational cycles) than doing more 
error correction on the quantum computer side.


After looking into it more, I believe you are correct. Qubit count 
determines a lot of things, but not necessarily the "power"; there are 
many, many factors that go into that, which you've outlined.
 

> I've decided in one of my more recent updates to the BIP to default to 
the highest level of NIST security, NIST V, which provides 256 bits of 
security. You can see my rationale for that in this PR:
> https://github.com/cryptoquick/bips/pull/7/files

Those are assumptions that there is a security increase from scaling up the 
size of the public key. In the Bitcoin world, we don't even make 
assumptions about the public key size for the ECDSA signature scheme, as 
both compressed and uncompressed public keys have been historically valid. 
Similarly, the public key size does not have to be bundled with the 
specification of the signature verification scheme itself (e.g. see the 
BIP340 discussion on x-only public keys).


According to the spec, I was hoping to distinguish between post-quantum 
algorithms by their key size. If there's a collision, a distinguishing byte 
could be added for the new algorithm. Then they're identified by their 
PUSHDATA opcode. That's the primary reason they're specified.
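
A minimal sketch of that idea (the FALCON lengths below are the standard 
ones; anything else would have to come from the BIP's table, so it's left 
as a comment):

    # Sketch of distinguishing post-quantum schemes by public key length at
    # parse time. FALCON lengths are the standard ones; other schemes' lengths
    # would be filled in from the BIP's security table.
    PQ_SCHEME_BY_PUBKEY_LEN = {
        897: "FALCON-512",
        1793: "FALCON-1024",
        # SQIsign / SPHINCS+ entries would go here, per the BIP's table.
    }

    def classify_pubkey(pubkey: bytes) -> str:
        return PQ_SCHEME_BY_PUBKEY_LEN.get(len(pubkey), "unknown")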
 

> As such, you'll see FALCON is roughly 4x larger than SQIsign signatures. 
Although supersingular elliptic curve quaternion isogeny-based algorithms 
are newer and
> more experimental than lattice-based cryptography, I think the benefits 
outweigh the risks, especially when transaction throughput is a principal 
concern.
 
There are no public key sizes in the security table, so it's hard to 
compare the overall on-chain space cost for each post-quantum signature 
algorithm considered. Nor, actually, is there an estimation of the 
verification cost for an average 200-byte transaction; good old Hamilton 
quaternions, relying on complex numbers, can be hard to deal with on 
hobbyist CPUs, which can be a concern.


I've updated the table to reflect the key size concern. For verification 
cost, I've found it's difficult to compare numbers provided by the 
different papers. Some provide cycles, some provide durations. I do want to 
include a benchmark in the test vectors once they're ready.
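
The normalization I have in mind is simple; a minimal sketch, assuming a 
3 GHz reference core (the clock speed is my assumption, not from any of 
the papers):

    # Convert between reported CPU cycles and wall-clock time under an
    # assumed reference clock, so different papers' numbers can be compared.
    REF_CLOCK_HZ = 3.0e9  # assumed 3 GHz reference core

    def cycles_to_ms(cycles: float) -> float:
        return cycles / REF_CLOCK_HZ * 1e3

    def ms_to_cycles(ms: float) -> float:
        return ms / 1e3 * REF_CLOCK_HZ

    print(cycles_to_ms(1_000_000))  # ~0.33 ms for a 1M-cycle verification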
 

> It's crucial that the signature and public key both receive the witness 
discount. Can you go into more detail in how that might be accomplished?

The BIP341 taproot annex could be used for that, see 
https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki#cite_note-5


I've adjusted the BIP for this to integrate with Taproot. The primary 
difference is that this will use a hash of the Taproot public keys in the 
v3 spend script.
 

> Although it's too early to talk about activation of a QuBit soft fork, 
I've put some thought into how we can maintain the existing Bitcoin 
throughput with a soft fork, and I think it might be prudent to, when the 
time comes, introduce a 4x additional QuBit witness discount, maybe we call 
it the quitness, which is only available to valid P2QRH signatures. This 
would preclude its abuse for things like inscriptions because the signature 
data would need to correspond to the key, and even if this were possible, 
it's likely to result in only a burner address. This would increase chain 
state growth from roughly 100GB/yr to possibly closer to 2-300GB, depending 
on adoption. As the state of the art of SSD technology advances, this 
should allow plebs to run their own node on a 4TB disk for over a decade, 
even including existing chain size of ~600GB.

The annex could have typed fields for a further witness discount on the 
post-quantum signature and public key. However, I think it's a bit naive 
to assume that SSD technology advances will stay linear and that it will 
be economically accessible at the same pace to the tens of thousands of 
plebs actually running full-nodes and constituting the skeleton of the 
base-relay network. One could play out a posteriori the predictions on 
bandwidth technological advances that were made in BIP103 to see how well 
they have held up over the last ~9 years.


According to the C program in BIP-101, it looks like the block size would 
have increased by nearly 4x over the past ~9 years. I've specified in the 
BIP a separate witness, which I call the quitness, that will solely receive 
the additional 4x discount. Schnorr signatures are still kept in the witness.
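
As a quick cross-check of both figures, assuming BIP103's proposed ~17.7% 
annual growth (the schedule you referenced) and the worst-case 300GB/yr 
estimate above:

    # Back-of-the-envelope only; inputs are BIP103's ~17.7%/yr growth and the
    # rough 300 GB/yr worst-case chain growth mentioned above.
    print(round(1.177 ** 9, 1))  # ~4.3x over ~9 years
    print(600 + 10 * 300)        # ~3600 GB after a decade, within a 4 TB disk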
 

(There is another caution with evaluating technological advances, namely 
that some hardware components could be actually massively consumed by other 
cryptocurrencies for their consensus algorithms...)

> If we were to use the same approach for FALCON signatures, a 16x discount 
would be needed, and I think that's far too much for the community to 
accept. As for pub key size and verification
> time, these are secondary considerations if the primary constraint is 
maintaining present transaction throughput. That's what makes SQIsign so 
promising.

Well, if there is something like the annex with typed fields, each type of 
post-quantum signature could get a wider discount, especially if there are 
verification asymmetries favoring some scheme over another, even if the 
security properties differ.


As you know, Bitcoin doesn't charge based on how long a script takes to 
run, so it would make sense to charge based only upon byte count. If 
runtime is a major concern, and it is desired by the community, 
runtime-based pricing can be proposed as a separate BIP, and potentially 
included in a QuBit soft fork.
 

> The Impact paper seems to dismiss Grover's algorithm, but I think it's 
important to err on the side of caution and instead use a 32-byte double 
SHA-2 (HASH256) for additional security in the P2QRH output.

Performance-wise, using a double SHA-2 (HASH256) doesn't shock me, as it 
has been added for many domain-separation tagged hashes in taproot.
About Grover's algorithm, it's more the sample space and collision space 
that should be better defined to be relevant; you can always downgrade the 
performance of Grover's algorithm by scaling up the sample space, however 
it's not sure that's practical for bitcoin transaction generation.


That's good. Additionally, because Grover's algorithm scales so poorly 
compared to Shor's, I think it's a safe security assumption that HASH256 
will be more secure for use in the v3 spend script.
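
The security accounting behind that choice is simple enough to spell out 
(illustrative arithmetic only):

    # Grover's search gives at best a quadratic speedup on preimage search, so
    # effective quantum preimage security is roughly n/2 bits for an n-bit hash.
    import hashlib

    def hash256(data: bytes) -> bytes:  # double SHA-256, as in the P2QRH output
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    n_bits = len(hash256(b"")) * 8
    print(n_bits // 2)  # ~128-bit effective security for a 256-bit commitment
    print(160 // 2)     # ~80-bit for a HASH160-style commitment, by comparison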
 

> I'm not sure I understand what you mean by this...
> Is your coin scarcity comment related to what I call "satoshi's shield" 
in the BIP?

Not at all the "satoshi's shield" as you're describing in the BIP.

This is just the observation that bitcoin coins are scarce in the sense 
that you need to burn raw energy to acquire the rewards according to the 
issuance schedule (or miner fees). Bitcoin script can be designed to 
request that a sufficient number of bitcoin coins, or satoshis, be burned 
before unlocking a coin locked under a quantum-frail scriptPubKey.

That means any quantum computer attacker, even if they have an efficient 
quantum computer, might not be able to break the redeem script itself, only 
the signatures composing the redeem script check sig operations.

Let's give a concrete example, let's say you have the following pseudo 
script:

        <<OP_DEPTH> <OP_PUSHDATA2> <998> <OP_EQUALVERIFY> <pubkey> 
<OP_CHECKSIG>>

Interpreted, this script should request the spending party, whoever it is, 
to provide a witness stack of length 998 bytes, all dummy elements. 
Those dummy elements put the burden on the quantum computer attacker to 
burn fees at the current sat-per-vbyte rate to realize a quantum exploit. 
(They could leverage SIGHASH_NONE to escape this "fee jail"... however 
that sounds like it would expose them to being overridden by a miner.)

So assuming this defensive scheme in the face of a quantum exploit is 
sound, I think this puts the burden on a quantum attacker to have hashrate 
capabilities at the current level of difficulty, not solely an efficient 
CRQC.


I'm not sure I understand the point you're making, but only valid public 
key / signature pairs in the quitness will be considered valid.
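
That said, for what it's worth, here is roughly what the 998 dummy bytes 
would cost a spender, as a minimal sketch (the feerates are just example 
scenarios):

    # Illustrative cost of the 998 dummy witness bytes above, assuming they are
    # counted at 1 weight unit per byte; feerates are example scenarios only.
    dummy_vbytes = 998 / 4  # ~250 vbytes of pure padding
    for feerate in (10, 100, 1000):  # sat/vB
        print(feerate, int(dummy_vbytes * feerate))  # 2495 / 24950 / 249500 sats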
 

> Yes, this makes more sense. I'm not sure anything can be done with the 
fraud proofs, but they could at least prove that a bad actor is present. 
Ideally both approaches are combined for maximum security and 
accountability.

No, KYC necessarily hurts mining pools, as there is no single KYC 
definition you can implement that does not open the door to some kind of 
DoS exploitation.

Building a practical fraud-proof system on seen transactions is not an 
issue; the open question is more whether the average bitcoin user would 
pay to download fraud proofs demonstrating that a given miner is not 
engaging in a quantum exploit.


Makes sense.
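
To make that concrete for myself, a sketch of what such a record might 
carry (every field name here is hypothetical; none of this is specified 
anywhere):

    # Hypothetical shape of a "seen-in-mempool" fraud proof; all field names
    # are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class QuantumFraudProof:
        miner_coinbase_pubkey: bytes  # key the miner committed with in its coinbase
        seen_spend_txid: bytes        # honest spend observed in the mempool
        mined_spend_txid: bytes       # conflicting spend that was mined instead
        reveal_height: int            # height at which the victim pubkey was revealed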
 

> I've taken Antoine's feedback to heart and added FALCON to the 
specification, including a section that addresses the increased maintenance 
burden of adding two distinct post-quantum cryptosystems.

Thank you for the addition. For the maintenance burden there is always the 
counter-argument to be made that you can secure coins under multiple 
post-quantum signature schemes, especially if they're from different breeds 
of hardness assumptions. If one of the two schemes is secure, the coins 
are still locked by the other half.


You'll see I've taken this feedback to heart and specified hybrid 
cryptography in the BIP.
 

I think it could be interesting to split the BIP into multiple ones: one for 
the general consensus mechanism introducing P2QRH with all quantum risk 
considerations, and an individual one for each signature algorithm that 
could be deployed under this generic P2QRH. Kind of in the same way that 
BIP340 / BIP341 are split.


You might be right about that. I'd still like to specify FALCON for the 
first one, but additional signature algorithms can get their own BIPs.
 

Best,
Antoine
ots hash: b57e9fe0b3de603ca66be29b7f1ba04fa5b8bc516c1277114ab42ac9f8572e12


Let me know if there are any additional changes you would like me to make. 
I'll be submitting the BIP upstream to the bips repo as a draft PR soon. Do 
you mind if I credit you in the Acknowledgements section? Thanks for all 
the great feedback so far.

Le jeudi 15 août 2024 à 06:25:01 UTC+1, Hunter Beast a écrit :

I've taken Antoine's feedback to heart and added FALCON to the 
specification, including a section that addresses the increased maintenance 
burden of adding two distinct post-quantum cryptosystems.
Please review.
https://github.com/cryptoquick/bips/pull/9/files

On Tuesday, August 6, 2024 at 11:50:35 AM UTC-6 Hunter Beast wrote:

That's alright, Antoine, it's been a busy month for me too.

> So I think it's good to stay cool minded and I think my observation about 
talking of "super-exponential rate" as used in maaku old blog post does not
> hold a lot of rigor to describe the advances in the field of quantum 
computing. Note, also how IMB is a commercial entity that can have a lot of 
interests
> in "pumping" the state of "quantum computing" to gather fundings (there 
is a historical anecdote among bitcoin OG circles about Vitalik trying to 
do an
> ICO to build a quantum computer like 10 years ago, just to remember).

Well, it's also important to remember that for every qubit added, it 
doubles the power of the system. A 2,000 qubit cryptographically-relevant 
quantum computer (CRQC) is exponentially faster than a 1,000 qubit one. 
There's also the capability for cross-links for multiple chips to 
communicate with each other, which IBM is also researching. The IBM Quantum 
System Two can be upgraded to support 16,000 qubits according to their 
marketing. Also consider that the verification of the results from the CRQC 
can be done via classical computer, so a high level of error correction 
might not be as necessary so long as the program is run enough times. It 
will take much longer, of course.

> I think FALCON is what has the smallest pubkey + sig size for 
hash-and-sign lattice-based schemes. So I think it's worth reworking the 
BIP to see what has the smallest generation / validation time and pubkey + 
size space for the main post-quantum scheme. At least for dilthium, falcon, 
sphincs+ and SQISign. For an hypothetical witness discount, a v2 P2QRH 
could be always be moved in a very template annex tag / field.

I've decided in one of my more recent updates to the BIP to default to the 
highest level of NIST security, NIST V, which provides 256 bits of 
security. You can see my rationale for that in this PR:
https://github.com/cryptoquick/bips/pull/7/files
Then, referencing this table:
https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki#security
As such, you'll see FALCON is roughly 4x larger than SQIsign signatures. 
Although supersingular elliptic curve quaternion isogeny-based algorithms 
are newer and more experimental than lattice-based cryptography, I think 
the benefits outweigh the risks, especially when transaction throughput is 
a principal concern.

It's crucial that the signature and public key both receive the witness 
discount. Can you go into more detail in how that might be accomplished?

Although it's too early to talk about activation of a QuBit soft fork, I've 
put some thought into how we can maintain the existing Bitcoin throughput 
with a soft fork, and I think it might be prudent to, when the time comes, 
introduce a 4x additional QuBit witness discount, maybe we call it the 
quitness, which is only available to valid P2QRH signatures. This would 
preclude its abuse for things like inscriptions because the signature data 
would need to correspond to the key, and even if this were possible, it's 
likely to result in only a burner address. This would increase chain state 
growth from roughly 100GB/yr to possibly closer to 2-300GB, depending on 
adoption. As the state of the art of SSD technology advances, this should 
allow plebs to run their own node on a 4TB disk for over a decade, even 
including existing chain size of ~600GB.

If we were to use the same approach for FALCON signatures, a 16x discount 
would be needed, and I think that's far too much for the community to 
accept. As for pub key size and verification time, these are secondary 
considerations if the primary constraint is maintaining present transaction 
throughput. That's what makes SQIsign so promising.

> See literature on quantum attacks on bitcoin in the reference of the 
paper you quote ("The impact of hardware specifications on reaching quantum 
advantage in the fault tolerant regime") for a discussion on Grover's 
search algorithm.

The Impact paper seems to dismiss Grover's algorithm, but I think it's 
important to err on the side of caution and instead use a 32-byte double 
SHA-2 (HASH256) for additional security in the P2QRH output.

> Namely you can introduce an artifical "witness-stack size scale ladder" 
in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP 
...checksig...
> I have not verified it works well on bitcoin core though this script 
should put the burden on the quantum attacker to have enough bitcoin amount 
available to burn in on-chain fees in witness size to break a P2WPKH.

I'm not sure I understand what you mean by this...
Is your coin scarcity comment related to what I call "satoshi's shield" in 
the BIP?

> The technical issue if you implement KYC for a mining pool you're 
increasing your DoS surface and this could be exploited by competing 
miners. A more reasonable security model can be to have miner coinbase 
pubkeys being used to commit to the "seen-in-mempool" spends and from then 
build "hand wawy" fraud proofs that a miner is quantum attacking you're 
P2WSH spends at pubkey reveal time during transaction relay.

Yes, this makes more sense. I'm not sure anything can be done with the 
fraud proofs, but they could at least prove that a bad actor is present. 
Ideally both approaches are combined for maximum security and 
accountability.

Thanks for your time!

On Friday, July 12, 2024 at 7:44:27 PM UTC-6 Antoine Riard wrote:

Hi Hunter Beast,

Apologies for the delay in answer.

> I was thinking of focusing on the IBM Quantum System Two, mention how it 
can be scaled, and that although it might be quite limited, if running 
Shor's variant for a sufficient amount of time, above a certain minimum 
threshold of qubits, it might be capable of decrypting the key to an 
address within one year. I base this on the estimate provided in a study 
by the Sussex Centre for Quantum Technologies, et. al [1]. They provide two 
figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one 
day. It would seem it scales roughly linearly, and so extrapolating it 
further, 36,000 qubits would be needed to decrypt an address within one 
year. However, the IBM Heron QPU turned out to have a gate time 100x less 
than was estimated in 2022, and so it might be possible to make do with 
even fewer qubits still within that timeframe. With only 360 qubits, 
barring algorithmic overhead such as for circuit memory, it might be 
possible to decrypt a single address within a year. That might sound like a 
lot, but being able to accomplish that at all would be significant, almost 
like a Chicago Pile moment, proving something in practice that was 
previously only thought theoretically possible for the past 3 decades. And 
it's only downhill from there...

Briefly surveying the paper "The impact of hardware specifications on 
reaching quantum advantage in the fault tolerant regime", I think it's a 
reasonble framework to evaluate
the practical efficiency of quantum attacks on bitcoin, it's self 
consistent and there is a critical approach referencing the usual 
litterature on quantum attacks on bitcoin. Just
note the caveat, one can find in usual quantum complexity litterature, 
"particularly in regard to end-to-end physical resource estimation. There 
are many other error correction
techniques available, and the best choice will likely depend on the 
underlying architecture's characteristics, such as the available physical 
qubit–qubit connectivity" (verbatim). Namely, evaluating quantum attacks is 
very dependent on the concrete physical architecture underpinning it.

All that said, I agree with you that if you see a quantum computer with the 
range of 1000 physical qubits being able to break the DLP for ECC based 
encryption like secp256k1, even if it takes a year it will be a Chicago 
Pile moment, or whatever comparative experiments which were happening about 
chain of nuclear reactions in 30s / 40s.

>  I think it's time to revisit these discussions given IBM's progress. 
They've published a two videos in particular that are worth watching; their 
keynote from December of last > year [2], and their roadmap update from 
just last month [3]

I have looked on the roadmap as it's available on the IBM blog post: 
https://www.ibm.com/quantum/blog/quantum-roadmap-2033#mark-roadmap-out-to-2033
They give only a target of 2000 logical qubit to be reach in 2033...which 
is surprisingly not that strong...And one expect they might hit likely solid
state issues in laying out in hardware the Heron processor architecture. As 
a point of thinking, it took like 2 decades to advance on the state of art
of litography in traditional chips manufacturing.
 
So I think it's good to stay cool minded and I think my observation about 
talking of "super-exponential rate" as used in maaku old blog post does not
hold a lot of rigor to describe the advances in the field of quantum 
computing. Note, also how IMB is a commercial entity that can have a lot of 
interests
in "pumping" the state of "quantum computing" to gather fundings (there is 
a historical anecdote among bitcoin OG circles about Vitalik trying to do an
ICO to build a quantum computer like 10 years ago, just to remember).

> I'm supportive of this consideration. FALCON might be a good substitute, 
and maybe it can be upgraded to HAWK for even better performance depending 
on how much time there is. According to the BIP, FALCON signatures are 
~10x larger than Schnorr signatures, so this will of course make the 
transaction more expensive, but we also must remember, these signatures 
will be going into the witness, which already receives a 4x discount. 
Perhaps the discount could be increased further someday to fit more 
transactions into blocks, but this will also likely result in more 
inscriptions filling unused space also, which permanently increases the 
burden of running an archive node. Due to the controversy such a change 
could bring, I would rather any increases in the witness discount be 
excluded from future activation discussions, so as to be considered 
separately, even if it pertains to an increase in P2QRH transaction size.
 
> Do you think it's worth reworking the BIP to use FALCON signatures? I've 
only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the 
readiness levels between those two are presently worlds apart.

I think FALCON is what has the smallest pubkey + sig size for hash-and-sign 
lattice-based schemes. So I think it's worth reworking the BIP to see what 
has the smallest generation / validation time and pubkey + size space for 
the main post-quantum scheme. At least for dilthium, falcon, sphincs+ and 
SQISign. For an hypothetical witness discount, a v2 P2QRH could be always 
be moved in a very template annex tag / field.

> Also, do you think it's of any concern to use HASH160 instead of HASH256 
in the output script? I think it's fine for a cryptographic commitment 
since it's simply a hash of a hash (MD160 of SHA-256).

See literature on quantum attacks on bitcoin in the reference of the paper 
you quote ("The impact of hardware specifications on reaching quantum 
advantage in the fault tolerant regime") for a discussion on Grover's 
search algorithm.

> I'm not sure I fully understand this, but even more practically, as 
mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally 
with a value of fewer than 50
> coins per address, and when funds ever need to be spent, the transaction 
is signed and submitted out of band to a trusted mining pool, ideally one 
that does KYC, so it's
> known which individual miners get to see the public key before it's 
mined. It's not perfect, since this relies on exogenous security 
assumptions, which is why P2QRH is
> proposed.

Again, the paper you're referencing ("The impact of hardware specifications 
on reaching quantum advantage...") is analyzing the performance of quantum 
advantage under
2 dimensions, namely space and time. My observation is in Bitcoin we have 
an additional dimension, "coin scarcity" that can be leveraged to build 
defense of address
spends in face of quantum attacks.

Namely you can introduce an artifical "witness-stack size scale ladder" in 
pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
I have not verified it works well on bitcoin core though this script should 
put the burden on the quantum attacker to have enough bitcoin amount 
available to burn in on-chain fees in witness size to break a P2WPKH.


>  ideally with a value of fewer than 50 coins per address, and when funds 
ever need to be spent, the transaction is signed and submitted out of band 
to a trusted mining pool, ideally
> one that does KYC, so it's known which individual miners get to see the 
public key before it's mined. It's not perfect, since this relies on 
exogenous security assumptions, which is
> why P2QRH is proposed.

The technical issue if you implement KYC for a mining pool you're 
increasing your DoS surface and this could be exploited by competing 
miners. A more reasonable security model can be to have miner coinbase 
pubkeys being used to commit to the "seen-in-mempool" spends and from then 
build "hand wawy" fraud proofs that a miner is quantum attacking you're 
P2WSH spends at pubkey reveal time during transaction relay.

Best,
Antoine

ots hash: 1ad818955bbf0c5468847c00c2974ddb5cf609d630523622bfdb27f1f0dc0b30
Le lundi 17 juin 2024 à 23:25:25 UTC+1, hunter a écrit :


-----BEGIN PGP SIGNED MESSAGE----- 
Hash: SHA256 

On 2024-06-16 19:31, Antoine Riard <antoin...@gmail•com> wrote: 

> 
> Hi Hunter Beast, I think any post-quantum upgrade signature algorithm 
upgrade proposal would grandly benefit to have Shor's based practical 
attacks far more defined in the Bitcoin context. As soon you start to talk 
about quantum computers there is no such thing as a "quantum computer" 
though a wide array of architectures based on a range of technologies to 
encode qubits on nanoscale physical properties. 
> 
Good point. I can write a section in the BIP Motivation or Security section 
about how an attack might take place practically, and the potential urgency 
of such an attack. 
  
I was thinking of focusing on the IBM Quantum System Two, mention how it 
can be scaled, and that although it might be quite limited, if running 
Shor's variant for a sufficient amount of time, above a certain minimum 
threshold of qubits, it might be capable of decrypting the key to an 
address within one year. I base this on the estimate provided in a study by 
the Sussex Centre for Quantum Technologies, et. al [1]. They provide two 
figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one 
day. It would seem it scales roughly linearly, and so extrapolating it 
further, 36,000 qubits would be needed to decrypt an address within one 
year. However, the IBM Heron QPU turned out to have a gate time 100x less 
than was estimated in 2022, and so it might be possible to make do with 
even fewer qubits still within that timeframe. With only 360 qubits, 
barring algorithmic overhead such as for circuit memory, it might be 
possible to decrypt a single address within a year. That might sound like a 
lot, but being able to accomplish that at all would be significant, almost 
like a Chicago Pile moment, proving something in practice that was 
previously only thought theoretically possible for the past 3 decades. And 
it's only downhill from there... 
> 
> This is not certain that any Shor's algorithm variant works smoothly 
independently of the quantum computer architecture considered (e.g gate 
frequency, gate infidelity, cooling energy consumption) and I think it's an 
interesting open game-theory problem if you can concentrate a sufficient 
amount of energy before any coin owner moves them in consequence (e.g 
seeing a quantum break in the mempool and reacting with a counter-spend). 
> 
It should be noted that P2PK keys still hold millions of bitcoin, and those 
encode the entire public key for everyone to see for all time. Thus, early 
QC attacks won't need to consider the complexities of the mempool. 
> 
> In my opinion, one of the last time the subject was addressed on the 
mailing list, the description of the state of the quantum computer field 
was not realistic and get into risk characterization hyperbole talking 
about "super-exponential rate" (when indeed there is no empirical 
realization that distinct theoretical advance on quantum capabilities can 
be combined with each other) [1]. 
> 
I think it's time to revisit these discussions given IBM's progress. 
They've published a two videos in particular that are worth watching; their 
keynote from December of last year [2], and their roadmap update from just 
last month [3]. 
> 
> On your proposal, there is an immediate observation which comes to mind, 
namely why not use one of the algorithms (dilithium, sphincs+, falcon) 
which have been through the 3 rounds of NIST cryptanalysis. Apart from the 
signature size, which sounds to be smaller, in a network of full-nodes any 
PQ signature algorithm should have reasonable verification performances. 
> 
I'm supportive of this consideration. FALCON might be a good substitute, 
and maybe it can be upgraded to HAWK for even better performance depending 
on how much time there is. According to the BIP, FALCON signatures are ~10x 
larger than Schnorr signatures, so this will of course make the transaction 
more expensive, but we also must remember, these signatures will be going 
into the witness, which already receives a 4x discount. Perhaps the 
discount could be increased further someday to fit more transactions into 
blocks, but this will also likely result in more inscriptions filling 
unused space also, which permanently increases the burden of running an 
archive node. Due to the controversy such a change could bring, I would 
rather any increases in the witness discount be excluded from future 
activation discussions, so as to be considered separately, even if it 
pertains to an increase in P2QRH transaction size. 
  
Do you think it's worth reworking the BIP to use FALCON signatures? I've 
only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the 
readiness levels between those two are presently worlds apart. 
  
Also, do you think it's of any concern to use HASH160 instead of HASH256 in 
the output script? I think it's fine for a cryptographic commitment since 
it's simply a hash of a hash (MD160 of SHA-256). 
> 
> Lastly, there is a practical defensive technique that can be implemented 
today by coin owners to protect against hypothetical quantum adversaries: 
setting spending scripts to require an artificially inflated witness stack, 
as the cost has to be borne by the spender. I think one can easily do that 
with OP_DUP and OP_GREATERTHAN and a bit of stack shuffling. While the 
efficiency of this technique is limited by the max consensus size of the 
script stack (`MAX_STACK_SIZE`) and the max consensus size of a stack 
element (`MAX_SCRIPT_ELEMENT_SIZE`), this adds an additional "scarce coins" 
prerequisite for the quantum adversary to succeed. Shor's algorithm is only 
defined over the classic resources of computational complexity, time and 
space. 
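
As a rough sanity check on how far those limits stretch, here is a 
back-of-the-envelope bound using Bitcoin Core's consensus constants; it 
ignores script-size and standardness limits, which would tighten it 
further: 

    # Upper bound on the witness data such a script could force a spender
    # to supply, using the consensus constants named above.
    MAX_STACK_SIZE = 1000          # max combined stack/altstack elements
    MAX_SCRIPT_ELEMENT_SIZE = 520  # max bytes per stack element

    max_witness_bytes = MAX_STACK_SIZE * MAX_SCRIPT_ELEMENT_SIZE
    max_witness_vbytes = max_witness_bytes / 4  # witness bytes get the 4x discount

    print(f"~{max_witness_bytes // 1000} kB of forced witness data, "
          f"~{max_witness_vbytes / 1000:.0f} kvB after the discount")
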
> 
I'm not sure I fully understand this, but even more practically, as 
mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally 
with fewer than 50 coins per address. Whenever funds need to be spent, the 
transaction is signed and submitted out of band to a trusted mining pool, 
ideally one that does KYC, so it's known which individual miners get to see 
the public key before it's mined. It's not perfect, since this relies on 
exogenous security assumptions, which is why P2QRH is proposed. 
> 
> Best, 
> Antoine 
> [1] https://freicoin.substack.com/p/why-im-against-taproot 
> 
  
I'm grateful you took the time to review the BIP and offer your detailed 
insights. 
  
[1] “The impact of hardware specifications on reaching quantum advantage in 
the fault tolerant regime,” 2022 - 
https://pubs.aip.org/avs/aqs/article/4/1/013801/2835275/The-impact-of-hardware-specifications-on-reaching 
[2] https://www.youtube.com/watch?v=De2IlWji8Ck 
[3] https://www.youtube.com/watch?v=d5aIx79OTps 
  
> 
> 
> On Friday, June 14, 2024 at 15:30:54 UTC+1, Hunter Beast wrote: 
> 
> > Good points. I like your suggestion for SPHINCS+, just due to how 
mature it is in comparison to SQIsign. It's already in its third round and 
has several standards-compliant implementations, and it has an actual 
specification rather than just a research paper. One thing to consider is 
that NIST-I round 3 signatures are 982 bytes in size, according to what I 
was able to find in the documents hosted by the SPHINCS website. 
> > 
https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip 
> >   
> > One way to handle this is to introduce it as a separate address type 
from SQIsign. That won't require OP_CAT, and I do want to keep this soft 
fork limited in scope. If SQIsign does become significantly broken, in this 
hopefully far-future scenario, I might be supportive of an increase in the 
witness discount. 
> >   
> > Also, I've made some additional changes based on your feedback on X. 
You can review them here if you so wish: 
> > 
https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754 
> > 
> > 
> > On Friday, June 14, 2024 at 8:15:29 AM UTC-6 Pierre-Luc Dallaire-Demers 
wrote: 
> > > SQIsign is blockchain friendly but also very new, I would recommend 
adding a hash-based backup key in case an attack on SQIsign is found in the 
future (recall that SIDH broke over the span of a weekend 
https://eprint.iacr.org/2022/975.pdf). 
> > > Backup keys can be added in the form of a Merkle tree where one 
branch would contain the SQIsign public key and the other the public key of 
the recovery hash-based scheme. For most transactions it would only add one 
bit to specify the SQIsign branch. 
> > > The hash-based method could be Sphincs+, which is standardized by 
NIST but requires adding extra code, or Lamport, which is not standardized 
but can be verified on-chain with OP-CAT. 
> > > 
> > > On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4 Hunter Beast wrote: 
> > > > The motivation for this BIP is to provide a concrete proposal for 
adding quantum resistance to Bitcoin. We will need to pick a signature 
algorithm, implement it, and have it ready in event of quantum emergency. 
There will be time to adopt it. Importantly, this first step is a more 
substantive answer to those with concerns beyond, "quantum computers may 
pose a threat, but we likely don't have to worry about that for a long 
time". Bitcoin development and activation is slow, so it's important that 
those with low time preference start discussing this as a serious 
possibility sooner rather than later. This is meant to be the first in a 
series of BIPs regarding a hypothetical "QuBit" soft fork. The BIP is 
intended to propose concrete solutions, even if they're early and 
incomplete, so that Bitcoin developers are aware of the existence of these 
solutions and their potential. This is just a rough draft and not the 
finished BIP. I'd like to validate the approach and hear if I should 
continue working on it, whether serious changes are needed, or if this 
truly isn't a worthwhile endeavor right now. 
> > > >   
> > > > The BIP can be found here: 
> > > > https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki 
> > > >   
> > > > Thank you for your time. 
> > > >   
> > > > 
> > > 
> > > 
> > 
> > 
> 
> 


-----BEGIN PGP SIGNATURE----- 
Version: OpenPGP.js v4.10.3 
Comment: https://openpgpjs.org 

wsBcBAEBCAAGBQJmcJwuAAoJEDEPCKe+At0hjhkIAIdM7QN9hAO0z+KO7Bwe 
JT45XyusJmDG1gJbLZtb+SfuE1X5PFDHNTLSNliJWsOImxFCiBPnlXhYQ4B/ 
8gST3rqplUwkdYr52E5uMxTTq9YaXTako4PNb8d7XfraIwDKXAJF+5Skf4f9 
bQUYMieBAFSEXCmluirQymB+hUoaze60Whd07hhpzbGSwK4DdSXltufkyCDE 
tJUforNWm8X25ABTSNDh3+if5V/wJuix/u8GJyMHKucaEAO01ki2oyusq2rt 
Xe6ysUieclusFFdQAs4PfYxhzXTf5XeAbFga/qxrVtbt7q2nUkYklqteT2pp 
mH/DU20HMBeGVSrISrvsmLw= 
=+wat 
-----END PGP SIGNATURE----- 

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups•com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/5d43fbd6-723d-4d3d-bc35-427c36a4a06an%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 44864 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2024-09-25 12:45 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-06-08 21:04 [bitcoindev] Proposing a P2QRH BIP towards a quantum resistant soft fork Hunter Beast
2024-06-14 13:51 ` [bitcoindev] " Pierre-Luc Dallaire-Demers
2024-06-14 14:28   ` Hunter Beast
2024-06-17  1:07     ` Antoine Riard
2024-06-17 20:27       ` hunter
2024-07-13  1:34         ` Antoine Riard
2024-08-06 17:37           ` Hunter Beast
2024-08-15  5:05             ` Hunter Beast
2024-08-22  6:20               ` Antoine Riard
2024-09-25 12:04                 ` Hunter Beast

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox