* [bitcoin-dev] Hash function requirements for Taproot
@ 2020-03-04  7:10 Lloyd Fournier
  2020-03-04 23:29 ` ZmnSCPxj
  2020-03-12 17:04 ` Tim Ruffing
  0 siblings, 2 replies; 5+ messages in thread
From: Lloyd Fournier @ 2020-03-04  7:10 UTC (permalink / raw)
  To: Bitcoin Protocol Discussion


Hi List,

I recently presented a poster at the Financial Cryptography conference
'2020 which you can find here:
https://github.com/LLFourn/taproot-ggm/blob/master/main.pdf.  It attempts
to show the security requirements for the tweak hash function in Taproot.
In this post I'll give a long description of it but first let me tl;dr:

Taproot requires no new assumptions about SHA256 over what are already made
by Schnorr signatures themselves, with one exception: when using a
non-interactive key generation protocol to produce a Taproot internal key
(e.g. MuSig). To prove security in this scenario we need to make an
additional assumption about SHA256: as well as being collision resistant
(i.e. it being hard to find two hashes with h_1 - h_2 = 0), it must satisfy
a more general kind of collision resistance where it is hard to find
h_1 - h_2 = d for *any d* when the adversary is challenged to find h_1 and
h_2 with random prefixes.
This is obviously a plausible assumption. Put informally, it says that zero
is not a special case where finding collisions is difficult but rather
solving the 2-sum problem is hard for all values of d (when challenged with
random prefixes).

Now the long version.

My motivation for creating this poster came from questions I had after
discussions in Taproot Study Group #18 (this study group initiative was a
great idea btw). The main question I had was "Why is Taproot binding?" i.e.
why is it true that I can only commit to one Merkle root. Isn't it possible
that a malicious party could produce a second covert Taproot spend that
none of the other parties to the output agreed to? I submitted a poster
proposal to FC to force myself to get to the bottom of it.

The premise of the poster is to use the Generic Group Model to try and
figure out how the hash function would have to fail for Taproot to be
insecure. Most of the poster is taken up by cartoon reductions I made to
remind myself why what I was saying might be true. They are
incomplete and difficult to parse on their own so hopefully this post is a
useful companion to them.

=== The Security of Taproot ===

There are three scenarios/games we must consider when asking whether
Taproot is secure in the context of Bitcoin:

1. Taproot Forge: Forging taproot spends must be hard. The adversary must
not be able to take a public key off the blockchain and produce a forged
Taproot spend from it.
2. Covert Taproot: When an adversary is executing a multi-party key
generation protocol (e.g. MuSig) it should be hard for them to produce a
covert malicious Taproot spend from the joint key, i.e. when honest parties
think there is no Taproot on a key there shouldn't be any Taproot on the
key. Note that (1) being hard does not guarantee this is hard.
3. Second Covert Taproot: Like 2, except that if honest parties agree to a
Taproot spend then the adversary shouldn't be able to generate a second
Taproot spend they are unaware of.

Properties (1) and (3) can be argued succinctly if we just prove that
Taproot is a secure commitment scheme. It should be clear that if a Taproot
external key T = X + H(X||m)*G is a secure commitment scheme (Hiding and
Binding) to any arbitrary message m, then it is a secure commitment scheme
to a Merkle root. If so, then properties (1) and (3) hold. (1) holds
because if you can create an opening to a commitment not generated by you,
you either broke hiding (if your opening is the same as the honest one) or
broke binding (if it's different). (3) holds because you must have broken
binding as there are now two openings to the same commitment.
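
For concreteness, here is a minimal sketch of this commit/verify view in
Python. The group is a toy additive group of integers modulo a prime rather
than secp256k1 (only the algebra matters here), the hash is plain SHA256,
and all names are illustrative rather than a real API:

import hashlib

n = 2**127 - 1                     # toy group order; "x*G" is just x mod n

def point(x: int) -> int:
    return x % n                   # toy scalar multiplication by G

def H(X: int, m: bytes) -> int:    # the tweak hash H(X || m)
    data = X.to_bytes(16, "big") + m
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def taproot_commit(X: int, m: bytes) -> int:
    # External key T = X + H(X || m)*G doubles as a commitment to m.
    return (X + point(H(X, m))) % n

def taproot_open(T: int, X: int, m: bytes) -> bool:
    # Opening the commitment = revealing the internal key X and message m.
    return T == taproot_commit(X, m)

X = point(0x1234)                  # toy internal key
m = b"merkle root of the script tree"
T = taproot_commit(X, m)
assert taproot_open(T, X, m)
# Binding: finding (X', m') != (X, m) that opens the same T should be
# infeasible; that is exactly what properties (1) and (3) rely on.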

Property (2) is more difficult to argue as it depends on the multi-party
key generation protocol. Case in point: Taproot is completely broken when
combined with a proof of knowledge key generation protocol where along with
their public keys each party provides a proof of knowledge of the secret
key. Where X_1 is the key of the honest party, the malicious party can
choose their key X_2 to be G*H(X_1 || m) where m is a malicious Merkle
root. Clearly the malicious party has a covert Taproot for X = X_1 + X_2
and can produce a proof of knowledge for X_2.
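
To make the attack concrete, here is a minimal sketch in Python, using the
same toy additive group convention as the sketch above (integers modulo a
prime standing in for secp256k1); the keys and the Merkle root are dummy
values for illustration only:

import hashlib

n = 2**127 - 1                     # toy group order; "x*G" is just x mod n

def point(x: int) -> int:
    return x % n

def H(X: int, m: bytes) -> int:    # the tweak hash H(X || m)
    data = X.to_bytes(16, "big") + m
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# Honest party publishes X_1 together with a proof of knowledge of x_1.
x_1 = 0xdeadbeef
X_1 = point(x_1)

# Malicious party picks a malicious Merkle root m and sets x_2 = H(X_1||m),
# so X_2 = H(X_1||m)*G. They know x_2, so they can produce the proof of
# knowledge that the protocol demands.
m = b"malicious merkle root"
x_2 = H(X_1, m)
X_2 = point(x_2)

# Joint key X = X_1 + X_2 = X_1 + H(X_1||m)*G: a valid Taproot external key
# with internal key X_1 and Merkle root m, i.e. a covert spend path the
# honest party never agreed to.
X = (X_1 + X_2) % n
assert X == (X_1 + point(H(X_1, m))) % n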

Given these definitions of security, we now move on to how we should model
the problem to prove they hold.

=== Generic Group Model vs Random Oracle Model ===

For a practical cryptographic scheme you often have to idealise one of its
components to prove it secure. The most popular candidate for idealisation
is the hash function in the Random Oracle Model (ROM), which idealises a
hash function as a "random oracle", a black box which spits out random
values for each input. For example, the original "forking lemma" proof by
Pointcheval and Stern [1] shows the Schnorr signature scheme is unforgeable
in this model if the discrete logarithm problem is hard. In other words,
idealising the hash function allows us to isolate what security assumptions
we are making about the group (e.g. the discrete logarithm problem being
hard in it).

But what if we want to know what assumptions we are making about the hash
function? Does the challenge hash in Schnorr signatures have to be
collision resistant or pre-image resistant or something else? To answer
this question Neven et al.[2] analysed Schnorr signatures by idealising the
group in the "Generic Group Model" (GGM). By idealising the group, they
were able to isolate the security requirements of the hash function away
from any assumptions being made about the group. In the GGM, the group
becomes a black box which, when given two group elements, spits out their
subtraction (for technical reasons it's subtraction rather than addition).
The adversary can only produce new group elements by querying the oracle.
Using the GGM they prove that the hash function needs to be Random-Prefix
Preimage (RPP) resistant (and Random-Prefix Second-Preimage resistant)
which are strictly weaker assumptions than collision resistance.

=== Taproot in the Random Oracle Model ===

Proving that Taproot is a binding commitment scheme in the ROM is
straightforward (hiding is too but I'm going to skip that). To produce two
openings for the same external key, the adversary must have two random
oracle queries H(X || m) that result in the same external key T = X +
H(X||m)*G. Since H(X||m)*G is an (almost) uniformly distributed group
element in the ROM, T is also uniformly distributed, thus breaking the
binding of Taproot is equivalent to solving a birthday problem of size
2^256 (the same as finding hash collisions in the ROM). Note that this
statement is true regardless of the discrete logarithm problem being hard
or not. This proves properties (1) and (3).

For property (2) let's consider MuSig as the key generation protocol. If we
model the MuSig key aggregation hash function as a random oracle as well,
then for every key X_2 the adversary has to query the MuSig hash oracle to
determine the joint key X = X_1*H(X_1||L) + X_2*H(X_2||L). As before, it is
clear to see that this makes X a uniform group element for every X_2 in the
ROM. Likewise, for every covert Taproot internal key and message pair (C, m),
the external key T = C + H(C||m)*G will be uniform as before in the ROM.
Thus, breaking property (2) is the same as finding T = X, where the adversary
can only sample T and X from uniform distributions, and so we have another
birthday problem. This completes the proof of all three properties.
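
For reference, here is a minimal sketch of the 2-party MuSig key
aggregation used in the argument above, again over a toy additive group
rather than secp256k1; the exact encoding of the key list L is illustrative
only:

import hashlib

n = 2**127 - 1                  # toy group order; "a*X" is just a*X mod n

def H_agg(X: int, L: bytes) -> int:   # the MuSig coefficient hash H(X || L)
    data = X.to_bytes(16, "big") + L
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def musig_aggregate(X_1: int, X_2: int) -> int:
    # L encodes the set of participant keys; each key is weighted by its own
    # coefficient H(X_i || L) before summing. In the ROM each coefficient is
    # a fresh uniform value, which is what makes X uniform for every X_2.
    L = X_1.to_bytes(16, "big") + X_2.to_bytes(16, "big")
    return (H_agg(X_1, L) * X_1 + H_agg(X_2, L) * X_2) % n

X = musig_aggregate(0x1111, 0x2222)   # toy keys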

Poelstra presented a proof in the ROM for the security of Taproot [3]. It
frames Taproot as a way of combining two signature schemes into one public
key (in our case Schnorr and Tapscript). He uses a similar line of
reasoning to what I have just presented in his proof (Lemma 1, step 3) but
this approach brings in many other considerations that I think can be
avoided by modelling it as a commitment scheme. Note that this proof only
shows that Taproot forgeries are hard i.e. property (1).

=== Taproot in the Generic Group Model ===

The ROM proof is an important first step -- if Taproot couldn't be proved
secure in the ROM then it would probably be completely broken. But Taproot,
unlike Schnorr, only relies on the security of its hash function when viewed
as a commitment scheme, so it would be prudent to figure out what those
properties are. By using the ROM we artificially hide those properties from
our analysis. As in the case of Schnorr, we can idealise the group in
the GGM to help isolate the hash function's properties.

To prove Taproot was a binding commitment scheme in the GGM I had to
introduce a new property I called "Chosen Offset Prefix-Collision" (COPC)
resistance. The precise security game is sketched in the poster, but I like
to describe it as a more general kind of collision resistance. Instead of
it being hard to find two preimages a and b where H(a) - H(b) = 0, it must
be hard to find H(P_1 || a) - H(P_2 || b) = d for any d (not just d  = 0)
with random prefixes P_1 and P_2 given by the challenger (d chosen by the
adversary). COPC is necessary and sufficient to prove Taproot is a secure
commitment scheme in the GGM (the proof for this isn't in the poster but is
very similar to the Second Covert Taproot proof).
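
To make the game concrete, here is a rough Python sketch. It assumes one
plausible ordering of moves -- the adversary is pinned down to an offset d
before the random prefixes are revealed, which is how the GGM reduction
fixes d via the group-oracle queries -- the precise game is in the poster,
and the hash and modulus here are toy stand-ins:

import hashlib, secrets

n = 2**127 - 1        # toy modulus standing in for the group order

def H(prefix: bytes, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(prefix + msg).digest(), "big") % n

def copc_game(adversary_commit, adversary_open) -> bool:
    d, state = adversary_commit()                 # offset d is fixed first
    P_1, P_2 = secrets.token_bytes(32), secrets.token_bytes(32)
    a, b = adversary_open(P_1, P_2, state)        # adversary's two messages
    # The adversary wins if H(P_1||a) - H(P_2||b) = d (mod n). Ordinary
    # collision resistance is roughly the special case d = 0; COPC asks that
    # *every* target offset be hard to hit.
    return (H(P_1, a) - H(P_2, b)) % n == d % n

def dummy_commit():
    return 42, None                # commit to an arbitrary offset

def dummy_open(P_1, P_2, state):
    return b"a", b"b"              # a blind guess; wins with probability ~1/n

print(copc_game(dummy_commit, dummy_open))   # almost certainly False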

This was not the ideal outcome, so I decided to analyse Taproot properties
(1) and (3) independently rather than just imply them from the commitment
scheme result. What ended up in the poster is three independent proofs, one
for each Taproot security property, with MuSig assumed to be the key
generation scheme for properties (2) and (3). Here's a summary of what I
concluded for each property.

1. Taproot Forge: In the GGM, an adversary who forges Taproot openings can
be used as a black box to mount a "Random Prefix-Preimage" (RPP) attack
against the hash function. This is a very good result as RPP is already
required by Schnorr. Essentially, this means anyone who can forge Taproot
spends can also forge Schnorr signatures.

2. Covert Taproot (MuSig): For this problem I had to split the adversary
into two types: those who query their MuSig public key X_2 from the group
oracle before their malicious internal key C, and those that query C first
or set X_2 = C. For the first case I was able to show another reduction
from RPP (which is shown in the poster). In the other case I could use the
adversary to break preimage resistance, as long as I modelled the MuSig hash
function as a random oracle (not shown in the poster and this is only from
memory). In both cases the reduction does not work for n-party MuSig (only
for 2 parties). Obviously, this is not totally satisfying. The problem with
n-party MuSig is that it becomes exponentially more unlikely (in n) for the
reduction to guess which keys the adversary will use for their MuSig keys.

3. Second Covert Taproot (MuSig): Once again, this is where honest parties
agree on a joint key and Taproot spend from it, but the adversary is
somehow able to create a second covert spend during the key generation
phase. This is where I found that COPC does actually need to be hard to
ensure this property. This is true regardless of the number of parties.
Thus this is the only scenario where you need the additional security
assumption to prove security.

== Concluding Remarks ==

The most important takeaway is that there is actually a small security cost
to using a group element as both a commitment and a public key. It would be
very surprising if we got this for free. By using the random oracle model we
merely hide this cost in the idealisation of the hash function; the generic
group model exposes it. The question is: is the cost worth it and who bears
it? Here's what I consider to be the most important points:

1. You only take on this COPC assumption if you use Tapscript. If you're
just putting your funds into a Taproot output with no script spend path,
either as a group or an individual, there is no extra security assumption
(with the caveat that my technique only really works for 2-party MuSig).
2. The COPC assumption seems to be very plausible.
3. Even if COPC is broken and an adversary can output two openings to the
same external key, both those openings must be valid taproot spends for
anyone to lose coins (i.e. Merkle roots with valid paths to leaves with
valid tapscript).
4. Even if COPC were broken that badly for SHA256, old Taproot outputs
would not be affected; the adversary has to break it during key generation,
before funds are put into the output.
5. You can completely circumvent this result by using coin-tossing rather
than MuSig for the key generation protocol. In most cases this doesn't even
add any extra rounds of communication since you are doing 3-round coin
tossing to choose the R values for the signatures that spend from the joint
output anyway. You can just toss your public keys in parallel.

In my opinion, the cost of Taproot is mostly borne by theoreticians. They
can no longer treat a public key ideally but have to consider the
implications of it also being a commitment. For the user and Bitcoin as a
whole it seems to offer an overwhelming benefit. In exchange for the
complexity it adds to making security claims in the GGM (if using Tapscript
and MuSig), it offers exciting new opportunities for non-interactivity and
fungibility beyond what Schnorr alone would provide.

I don't consider my work to be a final proof of anything. I would welcome
anyone who wants to take over this research direction and do a proper job
of it! I didn't have any personal motivation for doing this work other than
curiosity and that curiosity has been satisfied. Questions and thoughts
welcome :)

[1] https://www.di.ens.fr/david.pointcheval/Documents/Papers/2000_joc.pdf
[2] http://www.neven.org/papers/schnorr.pdf
[3] https://github.com/apoelstra/taproot/blob/master/main.pdf

Cheers,

LL


* Re: [bitcoin-dev] Hash function requirements for Taproot
  2020-03-04  7:10 [bitcoin-dev] Hash function requirements for Taproot Lloyd Fournier
@ 2020-03-04 23:29 ` ZmnSCPxj
  2020-03-05  9:56   ` Lloyd Fournier
  2020-03-12 17:04 ` Tim Ruffing
  1 sibling, 1 reply; 5+ messages in thread
From: ZmnSCPxj @ 2020-03-04 23:29 UTC (permalink / raw)
  To: Lloyd Fournier, Bitcoin Protocol Discussion

Good morning LL,

Thank you very much for this work, it seems quite interesting.

> 5. You can completely circumvent this result by using coin-tossing rather than MuSig for the key generation protocol. In most cases this doesn't even add any extra rounds of communication since you are doing 3-round coin tossing to choose the R values for the signatures that spend from the joint output anyway. You can just toss your public keys in parallel.

I am uncertain what you mean here by "coin-tossing".
From the comparison to MuSig, I imagine it is an interactive key generation protocol like this:

* Everybody generates fresh keypairs.
* Everybody sends the hash of their pubkey to everyone else.
* After receiving a hash of pubkey from everyone else, everybody sends their pubkey to everyone else.
* They add all their pubkeys to generate the aggregate key (and if using Taproot, use it as the internal key).

Is that correct?

In any case, the comparison to MuSig signing appears to imply interactive key generation.
The advantage of MuSig is that it requires no interactivity for key generation of n-of-n (I am told it requires interactivity to generate k-of-n).

However, it can generally be pointed out that, before you put anything into an n-of-n, you would damn well sure want to have *some* assurance that you can get it out later.
So in general you would need coordination and interaction anyway to arrange getting into an n-of-n in the first place.

On the other hand, it would be best to have at least some minimum of privacy by always interacting over Tor and having a Tor .onion address, which has absolutely horrid latency because human beings cry when peeling onions.
So in general, reducing latency by reducing communication rounds is better.
Counter to this, assuming you use an n-of-n in an offchain protocol of some sort, the number of communication rounds to generate the aggregate key may be dwarfed by the total number of communication rounds to create signatures to update the offchain protocol.
Counter counter to this is that one plan for reducing communications rounds for creating signatures during offchain operation is to (haha) use a Taproot with an n-of-n internal key and a tapscript that has n `OP_CHECKSIG` operations, so that for normal operation you just toss individual signatures at each other but at termination of the offchain protocol you can do the heavy MuSig-style signing with the n-of-n aggregate key.

Regards,
ZmnSCPxj



* Re: [bitcoin-dev] Hash function requirements for Taproot
  2020-03-04 23:29 ` ZmnSCPxj
@ 2020-03-05  9:56   ` Lloyd Fournier
  0 siblings, 0 replies; 5+ messages in thread
From: Lloyd Fournier @ 2020-03-05  9:56 UTC (permalink / raw)
  To: ZmnSCPxj; +Cc: Bitcoin Protocol Discussion


> I am uncertain what you mean here by "coin-tossing".
> From the comparison to MuSig, I imagine it is an interactive key
> generation protocol like this:
>
> * Everybody generates fresh keypairs.
> * Everybody sends the hash of their pubkey to everyone else.
> * After receiving a hash of pubkey from everyone else, everybody sends
> their pubkey to everyone else.
> * They add all their pubkeys to generate the aggregate key (and if using
> Taproot, use it as the internal key).
>
> Is that correct?

Yes exactly. The reason it's called coin tossing is that the resulting key
is guaranteed to be uniformly random (in the random oracle model at least),
so it's like tossing a fair 2^256 sided coin. This is not true in MuSig for
example, where the aggregate key is not guaranteed to be from a uniform
distribution against a malicious party (but still secure as an aggregate
key).
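
A minimal sketch of that commit-then-reveal key generation in Python (toy
additive group, both parties simulated locally; in reality each party runs
its own side and the hashes/keys cross the wire):

import hashlib, secrets

n = 2**127 - 1                     # toy group order; "x*G" is just x mod n

def commit(X: int) -> bytes:
    return hashlib.sha256(X.to_bytes(16, "big")).digest()

# Round 1: each party samples a fresh key and broadcasts a hash of its
# public key.
x_1, x_2 = secrets.randbelow(n), secrets.randbelow(n)
X_1, X_2 = x_1 % n, x_2 % n        # toy "x*G"
c_1, c_2 = commit(X_1), commit(X_2)

# Round 2: once all commitments are received, everyone reveals their public
# key, and each party checks the reveals against the commitments.
assert commit(X_1) == c_1 and commit(X_2) == c_2

# The aggregate (internal) key is just the sum. Because each key was fixed
# by its commitment before any key was revealed, neither party can bias the
# result: it is uniformly random as long as one party is honest (in the ROM).
X_agg = (X_1 + X_2) % n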

> However, it can generally be pointed out that, before you put anything
> into an n-of-n, you would damn well sure want to have *some* assurance that
> you can get it out later. So in general you would need coordination and
> interaction anyway to arrange getting into an n-of-n in the first place.

Right. Taking your example of a lightning channel, when you set it up I
don't *think* there is a way to use the non-interactivity of MuSig to
remove any rounds of communication to get to the starting state where there
is a channel funding on-chain and both parties have a tx that spends from
it which returns their funds. Doing coin tossing for the aggregate key as
well as the aggregate nonce shouldn't lead to any extra rounds of
communication. The downside of coin tossing is that it requires honest
parties to sample their keys non-deterministically (or at least have a
counter to avoid using the same key twice).

> On the other hand, it would be best to have at least some minimum of
> privacy by always interacting over Tor and having a Tor .onion address,
> which has absolutely horrid latency because human beings cry when peeling
> onions.
> So in general, reducing latency by reducing communication rounds is
> better.
> Counter to this, assuming you use an n-of-n in an offchain protocol of
> some sort, the number of communication rounds to generate the aggregate key
> may be dwarfed by the total number of communication rounds to create
> signatures to update the offchain protocol.
> Counter counter to this is that one plan for reducing communications
> rounds for creating signatures during offchain operation is to (haha) use a
> Taproot with an n-of-n internal key and a tapscript that has n
> `OP_CHECKSIG` operations, so that for normal operation you just toss
> individual signatures at each other but at termination of the offchain
> protocol you can do the heavy MuSig-style signing with the n-of-n aggregate
> key.

Counter³ to this is that, in the case of lightning, the aggregate key for a
PTLC does not need to be chosen at payment time. The channel members
could simply use the "master" aggregate key they generated by coin tossing
at the channel's inception and pseudorandomly randomise it every time they
need a new joint key (so the keys do not look related to everyone else on
the chain but you would effectively just be reusing the same public key).
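
For illustration, one plausible way to do that randomisation is an additive,
unhardened BIP32-style tweak; the post doesn't pin down a specific scheme,
so treat this purely as a sketch over a toy group:

import hashlib

n = 2**127 - 1                     # toy group order; "x*G" is just x mod n

def H(X: int, ctx: bytes) -> int:
    data = X.to_bytes(16, "big") + ctx
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def randomised_joint_key(X_master: int, payment_index: int) -> int:
    # Both channel parties derive the same per-payment key with no extra
    # communication: X_i = X_master + H(X_master || i)*G. On-chain the
    # derived keys look unrelated, but the parties are effectively reusing
    # the master key, as noted above.
    tweak = H(X_master, payment_index.to_bytes(8, "big"))
    return (X_master + tweak) % n

X_master = 0x1234 % n              # toy aggregate key from coin tossing
keys = [randomised_joint_key(X_master, i) for i in range(3)]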

Having said that, if there is some advantage to using MuSig in some
particular case I wouldn't hesitate to use it in combination with Taproot.
I don't think the new assumption that I think you have to make about the
hash function really weighs up against most design considerations. In
general, it is probably worth considering whether your protocol actually
benefits from the non-interactivity MuSig gives in the key generation
stage. If it doesn't (given that it doesn't make signing any more
non-interactive), then coin tossing might be the answer.

LL


* Re: [bitcoin-dev] Hash function requirements for Taproot
  2020-03-04  7:10 [bitcoin-dev] Hash function requirements for Taproot Lloyd Fournier
  2020-03-04 23:29 ` ZmnSCPxj
@ 2020-03-12 17:04 ` Tim Ruffing
  2020-03-16  7:31   ` Lloyd Fournier
  1 sibling, 1 reply; 5+ messages in thread
From: Tim Ruffing @ 2020-03-12 17:04 UTC (permalink / raw)
  To: Lloyd Fournier, Bitcoin Protocol Discussion

Hi Lloyd,

This is great research, thanks for this effort!

Here are some comments:

On Wed, 2020-03-04 at 18:10 +1100, Lloyd Fournier via bitcoin-dev
wrote:
> 
> Property (2) is more difficult to argue as it depends on the multi-
> party key generation protocol. Case in point: Taproot is completely
> broken when combined with a proof of knowledge key generation
> protocol where along with their public keys each party provides a
> proof of knowledge of the secret key. Where X_1 is the key of the
> honest party, the malicious party can choose their key X_2 to be
> G*H(X_1 || m) where m is a malicious Merkle root. Clearly the
> malicious party has a covert Taproot for X = X_1 + X_2 and can
> produce a proof of knowledge for X_2.

I mean, the good thing is that there's a general method to defend
against this, namely always adding a Merkle root on top. Maybe it's
useful to make the warning here a little bit more drastic:
https://github.com/sipa/bips/blob/bip-taproot/bip-0341.mediawiki#cite_ref-22-0
Maybe we could actually mention this in BIP340, too, when we talk about
key generation.

> 
> Poelstra presented a proof in the ROM for the security of Taproot
> [3]. It frames Taproot as a way of combining two signature schemes
> into one public key (in our case Schnorr and Tapscript). He uses a
> similar line of reasoning to what I have just presented in his proof
> (Lemma 1, step 3) but this approach brings in many other
> considerations that I think can be avoided by modelling it as a
> commitment scheme. Note that this proof only shows that Taproot
> forgeries are hard i.e. property (1).

I agree that modeling it as a commitment scheme is more natural. But I
think an optimal model would capture both worlds, and would give the
attacker signing oracles for the inner and the outer key, and a
commitment opening oracle. That is, it would capture that
 * the ability to obtain signatures for the inner key does not help you
   to forge for the outer key
 * the ability to obtain signatures for the outer key does not help you
   to open the commitment, and --- if already opened --- do not help
   you to forge for the inner key
 * the ability to obtain an opening does not help you to forge for
   either key... 
 * etc

I believe that all these properties hold, and I believe this even
without a formal proof. 

Still, it would be great to have one. The problem here is really that
things get complex so quickly. For example, how do you model key
generation in the game(s) that I sketched above? The traditional way or
with MuSig. The reality is that we want to have everything combined:
 * BIP32
 * MuSig (and variants of it)
 * Taproot (with scripts that refer to the inner key)
 * sign-to-contract stuff (e.g., to prevent covert channels with
   hardware wallets)
 * scriptless scripts
 * blind signatures
 * threshold signatures
 * whatever you can imagine on top of this

It's very cumbersome to come up with a formal model that includes all
of this. One common approach to protocols that are getting too complex
is to switch to simpler models, e.g., symbolic models/Dolev-Yao models
but that's hard here given that we don't have clear layering. Things
would be easier to analyze if Taproot was really  just a commitment to
a verification key. But it's more, it's something that's both a
verification and a commitment. Taproot interferes with Schnorr
signatures on an algebraic level (not at all black-box), and that's
actually the reason why it's so powerful and efficient. The same is
true for almost everything in the list above, and this puts Taproot
outside the scope of proof assistants for cryptographic protocols that
work on a symbolic level of abstraction. I really wonder how we can
handle this better. This would improve our understanding of the
interplay between various crypto components, and make it easier
to judge future proposals on all levels, from consensus changes to new
multi-signature protocols, etc.

> 
> In my opinion, the cost of Taproot is mostly borne by theoreticians.
> They can no longer treat a public key ideally but have to consider
> the implications of it also being a commitment. For the user and
> Bitcoin as a whole it seems to offer an overwhelming benefit. In
> exchange for the complexity it adds to making security claims in the
> GGM (if using Tapscript and MuSig), it offers exciting new
> opportunities for non-interactivity and fungibility beyond what Schnorr
> alone would provide.

I agree with this overall statement. I'm confident in Taproot, and I
guess what I say above really applies to the cost for theoreticians.
(Let's just make sure that we don't forget how theory is relevant to
security in practice.) 

Tim




* Re: [bitcoin-dev] Hash function requirements for Taproot
  2020-03-12 17:04 ` Tim Ruffing
@ 2020-03-16  7:31   ` Lloyd Fournier
  0 siblings, 0 replies; 5+ messages in thread
From: Lloyd Fournier @ 2020-03-16  7:31 UTC (permalink / raw)
  To: Tim Ruffing; +Cc: Bitcoin Protocol Discussion


On Fri, Mar 13, 2020 at 4:04 AM Tim Ruffing <crypto@timruffing•de> wrote:
>
> I mean, the good thing is that there's a general method to defend
> against this, namely always adding a Merkle root on top. Maybe it's
> useful to make the warning here a little bit more drastic:
> https://github.com/sipa/bips/blob/bip-taproot/bip-0341.mediawiki#cite_ref-22-0
> Maybe we could actually mention this in BIP340, too, when we talk about
> key generation.

I missed this note in the BIP. This trick means you get property 2 (covert
taproot) for free if you prove property 3 (second covert taproot). This is
a big improvement as property 2 was dependent on the particulars of the key
generation scheme whereas property 3 is just based on Taproot being a
secure commitment scheme. Nice!
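
If I'm reading that BIP341 note correctly, the "always tweak" rule looks
roughly like the sketch below. The tagged-hash construction is the
BIP340/341 one; the final point addition Q = P + t*G is elided since only
the tweak computation matters here, and the all-zero key is a dummy for
illustration:

import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

def taptweak_without_scripts(internal_pubkey_xonly: bytes) -> int:
    # When no script path is wanted, the output key should still commit to
    # an unspendable script path by tweaking with hash_TapTweak(P) alone,
    # i.e. Q = P + int(hash_TapTweak(bytes(P)))*G, rather than using P
    # untweaked.
    assert len(internal_pubkey_xonly) == 32
    return int.from_bytes(tagged_hash("TapTweak", internal_pubkey_xonly), "big")

t = taptweak_without_scripts(bytes(32))   # dummy x-only key
print(hex(t))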

> I agree that modeling it as a commitment scheme is more natural. But I
> think an optimal model would capture both worlds, and would give the
> attacker signing oracles for the inner and the outer key, and a
> commitment opening oracle. That is, it would capture that
>  * the ability to obtain signatures for the inner key does not help you
>    to forge for the outer key
>  * the ability to obtain signatures for the outer key does not help you
>    to open the commitment, and --- if already opened --- do not help
>    you to forge for the inner key
>  * the ability to obtain an opening does not help you to forge for
>    either key...
>  * etc
>
> I believe that all these properties hold, and I believe this even
> without a formal proof.
>
>
> Still, it would be great to have one. The problem here is really that
> things get complex so quickly. For example, how do you model key
> generation in the game(s) that I sketched above? The traditional way or
> with MuSig. The reality is that we want to have everything combined:
>  * BIP32
>  * MuSig (and variants of it)
>  * Taproot (with scripts that refer to the inner key)
>  * sign-to-contract stuff (e.g., to prevent covert channels with
>    hardware wallets)
>  * scriptless scripts
>  * blind signatures
>  * threshold signatures
>  * whatever you can imagine on top of this
>
> It's very cumbersome to come up with a formal model that includes all
> of this. One common approach to protocols that are getting too complex
> is to switch to simpler models, e.g., symbolic models/Dolev-Yao models
> but that's hard here given that we don't have clear layering. Things
> would be easier to analyze if Taproot was really  just a commitment to
> a verification key. But it's more, it's something that's both a
> verification and a commitment. Taproot interferes with Schnorr
> signatures on an algebraic level (not at all black-box), and that's
> actually the reason why it's so powerful and efficient. The same is
> true for almost everything in the list above, and this puts Taproot
> outside the scope of proof assistants for cryptographic protocols that
> work on a symbolic level of abstraction. I really wonder how we can
> handle this better. This would improve our understanding of the
> interplay between various crypto components, and make it easier
> to judge future proposals on all levels, from consensus changes to new
> multi-signature protocols, etc.
>

I hope we can prove these things in a more modular way without creating a
hybrid scheme with multiple oracles. My hope is that you can prove that any
secure key generation method remains secure once Taproot is applied to it,
provided Taproot is a secure commitment scheme. This was difficult before I
knew about the empty commitment trick! Although the Taprooted key and the
internal key are algebraically related, the security requirements on the two
primitives (the group and the hash function) are nicely separated.
Intuitively,
1. being able to break the Taproot hash function (e.g. find pre-images)
does not help you forge signatures on any external key; it can only help
you forge fake commitment openings (for the sake of this point assume that
Schnorr uses an unrelated hash function for the challenge).
2. being able to solve discrete logarithms doesn't help you break Taproot;
it just helps you forge signatures.

I believe we can formally prove these two points and therefore dismiss the
need for any signing or commitment opening oracles in any security notion
of Taproot:

1. We can dismiss the idea of an adversary that uses a commitment opening
oracle to forge a signature because the commitment opening is not even an
input into the signing algorithm. Therefore it is information theoretically
impossible to learn anything about forging a signature from a Taproot
opening.
2. I think we can dismiss the idea of an adversary that uses a signing
oracle to forge a fake Taproot opening. To see this note that the Taproot
Forge reduction to RPP in my poster actually still holds if the adversary
is given the secret key x (with a few other modifications). In the proof I
kept it hidden just because that seemed more realistic. If we give the
adversary the secret key we can dismiss the idea that a signing oracle will
help them because they can just simulate it. Furthermore, if honest parties
always require the empty commitment be applied to their key we can dismiss
the idea of an adversary that forges just based on the binding of the
commitment scheme even if they know the secret key and regardless of the
key generation algorithm.

This allows us to restrict our notion of Taproot's security to its
interaction with the key generation protocol only. It should be sufficient
to prove these three things:
1. The key generation scheme is secure. I don't believe we have a
definition for this yet but I guess it would be something like "if the
adversary can't output the secret key of the agg key then it is secure".
2. The Taproot transformation of any key generation scheme satisfying (1)
also satisfies (1).
3. The external key produced by any transformed protocol is a secure
commitment to the message (if one is desired; if not, the empty commitment
trick fixes this).

This gives us a modular and composable security model for Taproot. We can
just prove that MuSig, threshold keygen, and all the other things you
mentioned satisfy (1), and then by implication the Taprooted version of each
is also secure. Or something like that!

LL

