One thing we recently stumbled over is that eltoo uses CLTV not as a timelock but as a way to compare two committed numbers, one from the spent and one from the spending transaction (the state-ordering requirement). We couldn't use a number on the scriptSig stack, since the signature doesn't commit to it, which is why we commandeered nLocktime values that are already in the past.

With the annex we could have a committed-to number that we can pull onto the stack, freeing nLocktime for other uses again. It'd also be less roundabout to explain in classes :-)

An added benefit would be that update transactions, being singlesig, can be combined into larger transactions by third parties or watchtowers to amortize some of the fixed cost of getting them confirmed, basically allowing on-path aggregation (each node can group and aggregate transactions as it forwards them). This is currently not possible, since all the transactions we'd like to batch would have to share the same nLocktime.

So I think it makes sense to partition the annex into a global annex shared by the entire transaction, plus one for each input. Not sure if one per output would also make sense, as it'd bloat the utxo set and could be emulated by using the input that spends it.

Cheers,
Christian

On Sat, 5 Mar 2022, 07:33 Anthony Towns via bitcoin-dev, <bitcoin-dev@lists.linuxfoundation.org> wrote:
On Fri, Mar 04, 2022 at 11:21:41PM +0000, Jeremy Rubin via bitcoin-dev wrote:
> I've seen some discussion of what the Annex can be used for in Bitcoin.

https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html

includes some discussion on that topic from the taproot review meetings.

The difference between information in the annex and information in
either a script (or the input data for the script that is the rest of
the witness) is (in theory) that the annex can be analysed immediately
and unconditionally, without necessarily even knowing anything about
the utxo being spent.

The idea is that we would define some simple way of encoding (multiple)
entries into the annex -- perhaps a tag/length/value scheme like
lightning uses; maybe if we add a lisp scripting language to consensus,
we just reuse the list encoding from that? -- at which point we might
use one tag to specify that a transaction uses advanced computation, and
needs to be treated as having a heavier weight than its serialized size
implies; but we could use another tag for per-input absolute locktimes;
or another tag to commit to a past block height having a particular hash.
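As a purely illustrative sketch of what such an encoding might look like (single-byte tags and lengths are my assumption for brevity; a real scheme would need varints and canonical-ordering rules, as lightning's TLV streams have):

```python
# Hypothetical annex encoding sketch: tag/length/value entries, loosely
# modelled on lightning's TLV streams. Nothing here is specified anywhere;
# single-byte tags/lengths are a simplifying assumption.

def encode_annex(entries):
    """entries: dict of int tag -> bytes value, emitted in ascending tag order."""
    out = b""
    for tag in sorted(entries):
        value = entries[tag]
        assert 0 <= tag <= 0xFF and len(value) <= 0xFF
        out += bytes([tag, len(value)]) + value
    return out

def decode_annex(data):
    """Inverse of encode_annex: parse tag/length/value entries into a dict."""
    entries = {}
    i = 0
    while i < len(data):
        tag, length = data[i], data[i + 1]
        entries[tag] = data[i + 2:i + 2 + length]
        i += 2 + length
    return entries
```

For example, a tag-1 entry with the 2-byte value 420 encodes to the 4 bytes 0x010201a4.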

It seems like a good place for optimising SIGHASH_GROUP (allowing a group
of inputs to claim a group of outputs for signing, but not allowing inputs
from different groups to ever claim the same output; so that each output
is hashed at most once for this purpose) -- since each input's validity
depends on the other inputs' state, it's better to be able to get at
that state as easily as possible rather than having to actually execute
other scripts before you can tell if your script is going to be valid.

> The BIP is tight lipped about it's purpose

BIP341 only reserves an area to put the annex; it doesn't define how
it's used or why it should be used.

> Essentially, I read this as saying: The annex is the ability to pad a
> transaction with an additional string of 0's

If you wanted to pad it directly, you can do that in script already
with a PUSH/DROP combo.

The point of doing it in the annex is you could have a short byte
string, perhaps something like "0x010201a4" saying "tag 1, data length 2
bytes, value 420" and have the consensus intepretation of that be "this
transaction should be treated as if it's 420 weight units more expensive
than its serialized size", while only increasing its witness size by
6 bytes (annex length, annex flag, and the four bytes above). Adding 6
bytes for a 426 weight unit increase seems much better than adding 426
witness bytes.

The example scenario is an opcode to verify a zero-knowledge proof;
e.g., I think bulletproof range proofs are something
like 10x longer than a signature, but require something like 400x the
validation time. Since checksig has a validation weight of 50 units,
a bulletproof verify might have a 400x greater validation weight, ie
20,000 units, while your witness data is only 650 bytes serialized. In
that case, we'd need to artificially bump the weight of your transaction
up by the missing 19,350 units; otherwise an attacker could fill a block
with perhaps 6,000 bulletproofs, costing 120M units of validation weight
(the equivalent of 2.4M signature operations) rather than the ~4M units
(80k sigops) we currently expect as the maximum in a block. Seems better
to just have "0x01024b96" stuck in the annex than 19kB of zeroes.
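The arithmetic above can be checked directly (the 400x verification factor and 650-byte serialized size are the assumptions from the text, not measured figures):

```python
# Back-of-envelope check of the bulletproof weight figures above.
SIG_VALIDATION_WEIGHT = 50          # units per signature check
BULLETPROOF_FACTOR = 400            # assumed: verify cost relative to a sig

bp_units = SIG_VALIDATION_WEIGHT * BULLETPROOF_FACTOR   # 20,000 units
bp_witness_bytes = 650                                  # assumed serialized size
missing = bp_units - bp_witness_bytes                   # 19,350 units to add

# The annex entry "0x01024b96": tag 1, 2-byte value 0x4b96 == 19,350.
annex_bump = int.from_bytes(bytes.fromhex("4b96"), "big")

# A block stuffed with ~6,000 such proofs, if the weight were NOT bumped:
block_units = 6000 * bp_units                           # 120M units
sig_equivalent = block_units // SIG_VALIDATION_WEIGHT   # 2.4M sig checks
```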

> Introducing OP_ANNEX: Suppose there were some sort of annex pushing opcode,
> OP_ANNEX which puts the annex on the stack

I think you'd want to have a way of accessing individual entries from
the annex, rather than the annex as a single unit.

> Now suppose that I have a computation that I am running in a script as
> follows:
>
> OP_ANNEX
> OP_IF
>     `some operation that requires annex to be <1>`
> OP_ELSE
>     OP_SIZE
>     `some operation that requires annex to be len(annex) + 1 or does a
> checksig`
> OP_ENDIF
>
> Now every time you run this,

You only run a script from a transaction once, at which point its
annex is known (a different annex gives a different wtxid and breaks
any signatures), and can't reference previous or future transactions'
annexes...

> Because the Annex is signed, and must be the same, this can also be
> inconvenient:

The annex is committed to by signatures in the same way nVersion,
nLockTime and nSequence are committed to by signatures; I think it helps
to think about it in a similar way.

> Suppose that you have a Miniscript that is something like: and(or(PK(A),
> PK(A')), X, or(PK(B), PK(B'))).
>
> A or A' should sign with B or B'. X is some sort of fragment that might
> require a value that is unknown (and maybe recursively defined?) so
> therefore if we send the PSBT to A first, which commits to the annex, and
> then X reads the annex and say it must be something else, A must sign
> again. So you might say, run X first, and then sign with A and C or B.
> However, what if the script somehow detects the bitstring WHICH_A WHICH_B
> and has a different Annex per selection (e.g., interpret the bitstring as a
> int and annex must == that int). Now, given and(or(K1, K1'),... or(Kn,
> Kn')) we end up with needing to pre-sign 2**n annex values somehow... this
> seems problematic theoretically.

Note that you need to know what the annex will contain before you sign,
since the annex is committed to via the signature. If "X" will need
entries in the annex that aren't able to be calculated by the other
parties, then they need to be the first to contribute to the PSBT, not A.

I think the analogy to locktimes would be "I need the locktime to be at
least block 900k, should I just sign that now, or check that nobody else
is going to want it to be block 950k or something? Or should I just sign
with nLockTime at 900k, 910k, 920k, 930k, etc and let someone else pick
the right one?" The obvious solution is just to work out what the
nLockTime should be first, then run signing rounds. Likewise, work out
what the annex should be first, then run the signing rounds.

CLTV also has the problem that if you have one script fragment with
CLTV by time, and another with CLTV by height, you can't come up with
an nLockTime that will ever satisfy both. If you somehow have script
fragments that require incompatible interpretations of the annex, you're
likewise going to be out of luck.

Having a way of specifying locktimes in the annex can solve that
particular problem with CLTV (different inputs can sign different
locktimes, and you could have different tags for by-time/by-height so
that even the same input can have different clauses requiring both),
but the general problem still exists.

(eg, you might have per-input by-height absolute locktimes as annex
entry 3, and per-input by-time absolute locktimes as annex entry 4,
so you might convert:

 "900e3 CLTV DROP" -> "900e3 3 PUSH_ANNEX_ENTRY GREATERTHANOREQUAL VERIFY"

 "500e6 CLTV DROP" -> "500e6 4 PUSH_ANNEX_ENTRY GREATERTHANOREQUAL VERIFY"

for height/time locktime checks respectively)
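A sketch of what those two fragments would enforce (PUSH_ANNEX_ENTRY is hypothetical, and tags 3/4 plus a big-endian integer entry encoding are assumptions carried over from the example):

```python
def push_annex_entry_check(annex_entries, tag, minimum):
    """Emulate "<minimum> <tag> PUSH_ANNEX_ENTRY GREATERTHANOREQUAL VERIFY":
    the annex entry must exist and commit to at least the script's minimum.
    annex_entries: dict of int tag -> bytes value (big-endian integers here)."""
    entry = annex_entries.get(tag)
    if entry is None:
        return False  # a VERIFY against a missing entry would fail the script
    return int.from_bytes(entry, "big") >= minimum

# "900e3 3 PUSH_ANNEX_ENTRY GREATERTHANOREQUAL VERIFY"  (by height, tag 3)
# "500e6 4 PUSH_ANNEX_ENTRY GREATERTHANOREQUAL VERIFY"  (by time, tag 4)
```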

> Of course this wouldn't be miniscript then. Because miniscript is just for
> the well behaved subset of script, and this seems ill behaved. So maybe
> we're OK?

The CLTV issue hit miniscript:

https://medium.com/blockstream/dont-mix-your-timelocks-d9939b665094

> But I think the issue still arises where suppose I have a simple thing
> like: and(COLD_LOGIC, HOT_LOGIC) where both contains a signature, if
> COLD_LOGIC and HOT_LOGIC can both have different costs, I need to decide
> what logic each satisfier for the branch is going to use in advance, or
> sign all possible sums of both our annex costs? This could come up if
> cold/hot e.g. use different numbers of signatures / use checksigCISAadd
> which maybe requires an annex argument.

Signatures pay for themselves -- every signature is 64 or 65 bytes,
but only has 50 units of validation weight. (That is, a signature check
is about 50x the cost of hashing 520 bytes of data, the next highest
cost operation we have, which is treated as costing 1 unit and is
immediately paid for by the 1 byte that writing OP_HASH256 takes up.)

That's why the "add cost" use of the annex is only talked about in
hypotheticals, not specified -- for reasonable scripts with today's
opcodes, it's not needed.

If you're doing cross-input signature aggregation, everybody needs to
agree on the message they're signing in the first place, so you definitely
can't delay figuring out some bits of some annex until after signing.

> It seems like one good option is if we just go on and banish the OP_ANNEX.
> Maybe that solves some of this? I sort of think so. It definitely seems
> like we're not supposed to access it via script, given the quote from above:

How the annex works isn't defined, so it doesn't make any sense to
access it from script. When how it works is defined, I expect it might
well make sense to access it from script -- in a similar way that the
CLTV and CSV opcodes allow accessing nLockTime and nSequence from script.

To expand on that: the logic that prevents a transaction confirming too
early works by looking at nLockTime and nSequence, but script can
ensure that an attempt to use "bad" values for those can never be a
valid transaction; likewise, consensus may look at the annex to enforce
new conditions as to when a transaction might be valid (and can do so
without needing to evaluate any scripts), but the individual scripts can
make sure that the annex has been set to what the utxo owner considered
to be reasonable values.
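That division of labour might be sketched as follows (a minimal sketch with assumed semantics: consensus sees only the annex-committed value and chain state, never the scripts, while the script merely constrains what the annex may say):

```python
def consensus_check(annex_height, chain_height):
    # Consensus layer: the tx cannot confirm before the height committed to
    # in its annex; evaluable without looking at any utxo or script.
    return chain_height >= annex_height

def script_check(annex_height, owner_minimum):
    # Script layer: the utxo owner's script rejects any annex committing to
    # a height below what the owner considered a reasonable value.
    return annex_height >= owner_minimum

def spend_valid(annex_height, chain_height, owner_minimum):
    # A spend is only valid when both layers accept the same committed value.
    return (consensus_check(annex_height, chain_height)
            and script_check(annex_height, owner_minimum))
```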

> One solution would be to... just soft-fork it out. Always must be 0. When
> we come up with a use case for something like an annex, we can find a way
> to add it back.

The point of reserving the annex the way it has been is exactly this --
it should not be used now, but when we agree on how it should be used,
we have an area that's immediately ready to be used.

(For the cases where you don't need script to enforce reasonable values,
reserving it now means those new consensus rules can be used immediately
with utxos that predate the new consensus rules -- so you could update
offchain contracts from per-tx to per-input locktimes immediately without
having to update the utxo on-chain first)

Cheers,
aj
