On Fri, Mar 13, 2020 at 4:04 AM Tim Ruffing <crypto@timruffing.de> wrote:
>
> I mean, the good thing is that there's a general method to defend
> against this, namely always adding a Merkle root on top. Maybe it's
> useful to make the warning here a little bit more drastic:
> https://github.com/sipa/bips/blob/bip-taproot/bip-0341.mediawiki#cite_ref-22-0
> Maybe we could actually mention this in BIP340, too, when we talk about
> key generation,

I missed this note in the BIP. This trick means you get property 2 (covert taproot) for free if you prove property 3 (second covert taproot). This is a big improvement, as property 2 depended on the particulars of the key generation scheme whereas property 3 only requires that Taproot is a secure commitment scheme. Nice!
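
To make the structure concrete, here's a minimal Python sketch (stdlib only) of the tweak viewed as a commitment, including the always-commit trick from that footnote. It uses a toy Schnorr group (the order-11 subgroup of Z_23^*, written multiplicatively, so pow(g, x, p) stands in for x*G on secp256k1) and a plain SHA256 hash instead of BIP341's tagged TapTweak hash; the function names and parameters are illustrative assumptions, not the BIP's exact construction, and the toy group obviously offers no security.

import hashlib

# Toy Schnorr group: g = 4 has prime order q = 11 in Z_23^*.  Written
# multiplicatively, so pow(g, x, p) plays the role of x*G on secp256k1.
p, q, g = 23, 11, 4

def H(*parts):
    # Hash to a scalar; stands in for BIP341's tagged TapTweak hash.
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big") % q

def taproot_commit(P, m):
    # External key Q = P * g^H(P, m): a commitment to (P, m) that is at the
    # same time a verification key for secret key x + H(P, m).
    return (P * pow(g, H(P, m), p)) % p

def taproot_open(Q, P, m):
    # Check an opening (P, m) of the commitment Q.  Binding rests on the
    # collision resistance of H (absent in this toy instantiation).
    return Q == taproot_commit(P, m)

x = 7                   # internal secret key (toy value)
P = pow(g, x, p)        # internal public key

# The footnote's advice: tweak even when no script path is wanted, so every
# external key is a commitment and property 2 follows from property 3.
Q_script = taproot_commit(P, b"merkle_root_of_scripts")
Q_keyonly = taproot_commit(P, b"")          # the "empty commitment"

assert taproot_open(Q_script, P, b"merkle_root_of_scripts")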

> I agree that modeling it as a commitment scheme is more natural. But I
> think an optimal model would capture both worlds, and would give the
> attacker signing oracles for the inner and the outer key, and a
> commitment opening oracle. That is, it would capture that
>  * the ability to obtain signatures for the inner key does not help you
>    to forge for the outer key
>  * the ability to obtain signatures for the outer key does not help you
>    to open the commitment, and --- if already opened --- does not help
>    you to forge for the inner key
>  * the ability to obtain an opening does not help you to forge for
>    either key...
>  * etc
>
> I believe that all these properties hold, and I believe this even
> without a formal proof.
>
>
> Still, it would be great to have one. The problem here is really that
> things get complex so quickly. For example, how do you model key
> generation in the game(s) that I sketched above? The traditional way or
> with MuSig? The reality is that we want to have everything combined:
>  * BIP32
>  * MuSig (and variants of it)
>  * Taproot (with scripts that refer to the inner key)
>  * sign-to-contract stuff (e.g., to prevent covert channels with
>    hardware wallets)
>  * scriptless scripts
>  * blind signatures
>  * threshold signatures
>  * whatever you can imagine on top of this
>
> It's very cumbersome to come up with a formal model that includes all
> of this. One common approach to protocols that are getting too complex
> is to switch to simpler models, e.g., symbolic models/Dolev-Yao models
> but that's hard here given that we don't have clear layering. Things
> would be easier to analyze if Taproot was really just a commitment to
> a verification key. But it's more: it's something that's both a
> verification key and a commitment. Taproot interferes with Schnorr
> signatures on an algebraic level (not at all black-box), and that's
> actually the reason why it's so powerful and efficient. The same is
> true for almost everything in the list above, and this puts Taproot
> outside the scope of proof assistants for cryptographic protocols that
> work on a symbolic level of abstraction. I really wonder how we can
> handle this better. This would improve our understanding of the
> interplay between various crypto components better, and make it easier
> to judge future proposals on all levels, from consensus changes to new
> multi-signature protocols, etc.
>

I hope we can prove these things in a more modular way without creating a hybrid scheme with multiple oracles. My hope is that we can prove that applying Taproot to any secure key generation method yields a secure scheme, as long as Taproot is a secure commitment scheme. This was difficult before I knew about the empty commitment trick! Although the Taprooted key and the internal key are algebraically related, the security requirements on the two primitives (the group and the hash function) are nicely separated. Intuitively,
1. being able to break the Taproot hash function (e.g. find pre-images) does not help you forge signatures on any external key; it can only help you forge fake commitment openings (for the sake of this point, assume that Schnorr uses an unrelated hash function for the challenge).
2. being able to solve discrete logarithms doesn't help you break Taproot as a commitment; it only helps you forge signatures.

I believe we can formally prove these two points and therefore dismiss the need for any signing or commitment opening oracles in any security notion of Taproot:

1. We can dismiss the idea of an adversary that uses a commitment opening oracle to forge a signature, because the commitment opening is not even an input to the signing algorithm. Therefore it is information-theoretically impossible to learn anything useful for forging a signature from a Taproot opening.
2. I think we can dismiss the idea of an adversary that uses a signing oracle to forge a fake Taproot opening. To see this, note that the Taproot Forge reduction to RPP in my poster actually still holds if the adversary is given the secret key x (with a few other modifications); in the proof I kept it hidden only because that seemed more realistic. If we give the adversary the secret key, we can dismiss the idea that a signing oracle will help them, because they can simply simulate it themselves (see the sketch below). Furthermore, if honest parties always require the empty commitment to be applied to their key, we can dismiss the idea of an adversary that forges an opening just based on the binding of the commitment scheme, even if they know the secret key and regardless of the key generation algorithm.
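
Here's the simulation step of (2) in the same toy setting: an adversary that has been handed the internal secret key x (as in the modified reduction) can answer its own signing queries for the external key, so a signing oracle adds nothing. As before, the group, hash and names are illustrative assumptions rather than a real implementation.

import hashlib, secrets

p, q, g = 23, 11, 4     # same toy Schnorr group as above

def H(*parts):
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big") % q

def schnorr_sign(sk, msg):
    # Ordinary Schnorr signature under verification key pow(g, sk, p).
    k = secrets.randbelow(q - 1) + 1        # nonce
    R = pow(g, k, p)
    e = H(R, pow(g, sk, p), msg)
    return R, (k + e * sk) % q

def schnorr_verify(X, msg, sig):
    R, s = sig
    e = H(R, X, msg)
    return pow(g, s, p) == (R * pow(X, e, p)) % p

# The adversary knows the internal secret key x and the committed message m,
# so it can compute the external secret key x + H(P, m) and simulate any
# signing oracle for the external key Q by itself.
x, m = 7, b"merkle_root"
P = pow(g, x, p)
t = H(P, m)                                  # the Taproot tweak
Q = pow(g, (x + t) % q, p)                   # external key
sig = schnorr_sign((x + t) % q, b"spend tx") # simulated oracle answer
assert schnorr_verify(Q, b"spend tx", sig)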

This allows us to restrict our notion of Taproot's security to its interaction with the key generation protocol only. It should be sufficient to prove these three things:
1. The key generation scheme is secure. I don't believe we have a definition for this yet, but I guess it would be something like "if the adversary can't output the secret key of the aggregate key then it is secure".
2. The Taproot transformation of any key generation scheme satisfying (1) also satisfies (1) (see the sketch after this list).
3. The external key produced by any transformed protocol is a secure commitment to the message (if one is desired; if not, the empty commitment trick covers this).
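
To illustrate what the transformation in (2) might look like, here's a sketch in the same toy setting: a wrapper around an arbitrary key generation procedure (plain_keygen is a hypothetical stand-in for MuSig, threshold keygen, BIP32 derivation, etc.), with honest parties always committing, even to the empty message. This is only one possible way the definition could be phrased.

import hashlib, secrets

p, q, g = 23, 11, 4     # same toy Schnorr group as above

def H(*parts):
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big") % q

def plain_keygen():
    # Stand-in for any key generation scheme: returns (secret key, public key).
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def taproot_transform(keygen, m=b""):
    # Wrap a key generation scheme so its output key also commits to m.
    # Honest parties with no script path still commit (to the empty message).
    def transformed_keygen():
        x, P = keygen()
        t = H(P, m)
        return (x + t) % q, pow(g, (x + t) % q, p), (P, m)   # sk', Q, opening
    return transformed_keygen

sk, Q, (P, m) = taproot_transform(plain_keygen, b"merkle_root")()
assert Q == (P * pow(g, H(P, m), p)) % p     # Q opens as a commitment to (P, m)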

This gives us a modular and composable security model for Taproot. We can just prove that MuSig, threshold keygen, and all the other things you mentioned satisfy (1), and then by implication the Taprooted version of each is also secure. Or something like that!

LL