(this might be a double post as it ran into the size limit)

Hi Ruben,

Thanks! I don't really consider things final until we have a solid set of test
vectors in place, after which we'd start to transition the documents beyond
the draft state.

> Seeing as there's a large amount of overlap with RGB, a protocol which I have
> examined quite extensively, I believe some of the issues I uncovered in that
> project also apply here.

I'm happy to hear that someone was actually able to extract enough details
from the RGB devs/docs to analyze it properly! In the past I tried to ask
their developers questions about how things like transfers worked[1][2], but
it seemed that either people didn't know, or they hadn't finished the core
design (large TBD sections) as they were working on adding other components to
create a "new new Internet".

> Furthermore, the Taro script is not enforced by Bitcoin, meaning those who
> control the Bitcoin script can always choose to ignore the Taro script and
> destroy the Taro assets as a result.

This is correct. As a result, in most contexts an incentive exists for the
holder of an asset to observe the Taro validation rules, as otherwise their
assets are burnt in the process from the PoV of asset verifiers. In the
single-party case things are pretty straightforward, but more care needs to be
taken in cases where one attempts to express partial application and permits
anyone to spend the UTXO in question.

By strongly binding all assets to Bitcoin UTXOs, we resolve issues related to
double spending or duplicate assets, but we need to mind the fact that assets
can be burnt if a user doesn't supply a valid witness. There are likely ways
to get around this by lessening the binding to Bitcoin UTXOs, but then the
system would need to be able to collect, retain, and order the set of all
possible spends, essentially requiring a parallel network. The core of the
system as it stands today is pretty simple (which was an explicit design goal
to avoid getting forever distracted by the large design space), with a minimal
implementation being relatively compact given all the Bitcoin context/design
re-use.

Also, one cool trait of the way the commitments are designed is that the Taro
commitment impacts the final derived taproot output key. As a result, potential
Script extensions like TAPLEAF_UPDATE_VERIFY can actually be used to further
_bind_ Taro transitions at the Bitcoin level, without Bitcoin explicitly
needing to be aware of the Taro rules. In short, covenants can allow Bitcoin
Script to bind Taro state transitions, without any of the logic bleeding over,
as the covenant just checks for a certain output key, which is a function of
the Taro commitment being present.
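
To make that binding concrete, here's a rough stdlib-only Go sketch (the leaf
contents, key material, and single-leaf placement are placeholders rather than
the actual Taro encoding) of how the Taro root flows into the BIP-341 tweak
that produces the output key a covenant would pin:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // taggedHash implements the BIP-341 tagged hash:
    // sha256(sha256(tag) || sha256(tag) || msg).
    func taggedHash(tag string, msg ...[]byte) [32]byte {
        tagHash := sha256.Sum256([]byte(tag))
        h := sha256.New()
        h.Write(tagHash[:])
        h.Write(tagHash[:])
        for _, m := range msg {
            h.Write(m)
        }
        var out [32]byte
        copy(out[:], h.Sum(nil))
        return out
    }

    func main() {
        // Placeholder Taro commitment root (in reality the root of the
        // merkle-sum sparse merkle tree, wrapped in the actual Taro leaf
        // encoding).
        taroRoot := sha256.Sum256([]byte("taro asset tree root"))

        // The Taro commitment rides in a tapscript leaf; with a single
        // leaf, the leaf hash is the tapscript merkle root itself.
        taroLeaf := taggedHash("TapLeaf",
            []byte{0xc0, byte(len(taroRoot))}, taroRoot[:])

        // That merkle root feeds the TapTweak t, and the output key is
        // Q = P + t*G, so any change to the Taro root changes Q. A
        // covenant that pins Q therefore pins the Taro commitment too.
        internalKeyX := sha256.Sum256([]byte("placeholder x-only key"))
        tweak := taggedHash("TapTweak", internalKeyX[:], taroLeaf[:])

        fmt.Printf("taptweak bound to taro root: %x\n", tweak)
    }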

> There are two possible designs here: a.) The token history remains separate –
> Dave receives Alice's 2 tokens, Bob's tokens are split and he receives 2 (or
> 3 from Bob 1 from Alice).  b.) The token history gets merged – Dave receives
> 4 tokens (linking the new output with both Alice and Bob's history).

Mechanically, with respect to the way the change/UTXOs work in the system, both
are expressible: Dave can choose to merge them into a single UTXO (with the
appropriate witnesses included for each of them), or Dave can keep them
distinct in the asset tree. You're correct in that asset issuers may opt to
issue assets in denominations vs allowing them to be fully divisible.
Ultimately, the compatibility with the LN layer will be the primary way to keep
asset histories compressed, without relying on another trust model, or relying
on the incentive of an asset issuer to do a "re-genesis" which would
effectively re-create assets in a supply-preserving manner (burn N units, then
produce a new genesis outpoint for N units). Alternatively, implementations can
also choose to utilize a checkpointing system similar to what some Bitcoin full
node clients do today.
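
Whichever layout Dave picks, the invariant verifiers check is the same one
that makes a re-genesis supply-preserving. A tiny Go sketch, with assetLeaf as
a made-up stand-in for the real asset TLV:

    package main

    import "fmt"

    // assetLeaf is a made-up stand-in for an asset entry; only the unit
    // amount matters for this check.
    type assetLeaf struct {
        amt uint64
    }

    // supplyPreserved is the invariant a MIMO merge/split has to maintain,
    // and also what a "re-genesis" must show across the burn + re-issue
    // pair: no units created or destroyed.
    func supplyPreserved(inputs, outputs []assetLeaf) bool {
        var inSum, outSum uint64
        for _, in := range inputs {
            inSum += in.amt
        }
        for _, out := range outputs {
            outSum += out.amt
        }
        return inSum == outSum
    }

    func main() {
        // Dave merges Alice's 2 units and Bob's 2 unit split into a
        // single 4 unit output.
        inputs := []assetLeaf{{amt: 2}, {amt: 2}}
        outputs := []assetLeaf{{amt: 4}}
        fmt.Println("supply preserved:", supplyPreserved(inputs, outputs))
    }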

>  is that you end up with a linked transaction graph, just like in Bitcoin

This is correct: the protocol doesn't claim to achieve better privacy
guarantees than the base chain. However, inheriting this transaction graph
model imo makes it easier for existing Bitcoin developers to interact with the
system, and all the data structures are very familiar tooling-wise. In
addition, any privacy-enhancing protocol used for top-level on-chain Bitcoin
UTXOs can also be applied to Taro, so people can use things like coinswap and
coinjoin, along with LN, to shed prior coin lineages.

> This implies the location of the Taro tree inside the taproot tree is not
> fixed. What needs to be prevented here is that a taproot tree contains more
> than one Taro tree, as that would enable the owner of the commitment to show
> different histories to different people.

Great observation, I patched a similar issue much earlier in the design process
by strongly binding all signatures to a prevOut super-set (so the outpoint
along with the unique key path down into the tree), which prevents duplicating
the asset across outputs, as signature verification would fail.
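
To illustrate the idea (this isn't the actual TLV/sighash encoding, just a
sketch with made-up field names), the digest each signature commits to covers
the previous outpoint plus the asset's unique key path, so a leaf copied under
a different outpoint or tree location can't reuse an old witness:

    package main

    import (
        "crypto/sha256"
        "encoding/binary"
        "fmt"
    )

    // prevID sketches the "prevOut super-set" a signature is bound to: the
    // Bitcoin outpoint being spent plus the unique key path of the asset
    // leaf inside the Taro tree. The field layout is illustrative only.
    type prevID struct {
        outpointTxid  [32]byte
        outpointIndex uint32
        assetID       [32]byte
        scriptKey     [32]byte
    }

    // sigDigest binds the signed message to the full prevID. Moving or
    // duplicating the asset under another outpoint/tree location yields a
    // different digest, so the old signature no longer verifies.
    func sigDigest(id prevID, stateTransition []byte) [32]byte {
        h := sha256.New()
        h.Write(id.outpointTxid[:])
        var idx [4]byte
        binary.BigEndian.PutUint32(idx[:], id.outpointIndex)
        h.Write(idx[:])
        h.Write(id.assetID[:])
        h.Write(id.scriptKey[:])
        h.Write(stateTransition)
        var out [32]byte
        copy(out[:], h.Sum(nil))
        return out
    }

    func main() {
        id := prevID{outpointIndex: 1}
        fmt.Printf("digest: %x\n", sigDigest(id, []byte("transition tlv")))
    }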

In terms of achieving this level of binding within the Taro tree itself, I can
think of three options:

  1. Require the Taro commitment to be in the first/last position within the
  (fully sorted?) Tapscript tree, and also require its sibling to be the hash
  of some set string (all zeroes or w/e). We'd need to require the sibling to
  be this fixed value since the tapscript hashes are sorted before hashing, so
  you otherwise lose that final ordering information.

  2. Include the position of the Taro commitment within the tapscript tree
  within the sighash digest (basically the way the single input in the virtual
  transaction is created from the TLV structure).

  3. Include the position of the Taro commitment within the tapscript tree as
  part of the message that's hashed to derive asset IDs.

AFAICT, #1 resolves the issue entirely, #2 renders transfers outside of the
canonical history invalid, and #3 binds the asset ID to the initial position,
meaning you can track a canonical lineage from the very start. A rough sketch
of what the #1 check might look like is included below.
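
For a flavor of what the #1 check could look like in the simplest case, where
the Taro leaf and its fixed all-zero sibling form the top-level branch (the
hash helpers follow BIP-341, everything else is just a sketch):

    package main

    import (
        "bytes"
        "crypto/sha256"
        "fmt"
    )

    // taggedHash is the BIP-341 tagged hash sha256(sha256(tag)*2 || msg).
    func taggedHash(tag string, msg ...[]byte) [32]byte {
        t := sha256.Sum256([]byte(tag))
        h := sha256.New()
        h.Write(t[:])
        h.Write(t[:])
        for _, m := range msg {
            h.Write(m)
        }
        var out [32]byte
        copy(out[:], h.Sum(nil))
        return out
    }

    // tapBranch hashes the two child hashes in sorted order, as BIP-341
    // does, which is exactly why the raw position info is lost.
    func tapBranch(a, b [32]byte) [32]byte {
        if bytes.Compare(a[:], b[:]) > 0 {
            a, b = b, a
        }
        return taggedHash("TapBranch", a[:], b[:])
    }

    // verifyTaroPlacement sketches option #1: the Taro commitment leaf must
    // sit next to a fixed all-zero sibling at the top of the tree, so
    // there's only one place (and one way) it can appear.
    func verifyTaroPlacement(taroLeaf, sibling, tapRoot [32]byte) bool {
        var zero [32]byte
        if sibling != zero {
            return false
        }
        return tapBranch(taroLeaf, sibling) == tapRoot
    }

    func main() {
        var zero [32]byte
        taroLeaf := sha256.Sum256([]byte("taro commitment leaf"))
        root := tapBranch(taroLeaf, zero)
        fmt.Println("placement ok:", verifyTaroPlacement(taroLeaf, zero, root))
    }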

> Finally, let me conclude with two questions. Could you clarify the purpose of
> the sparse merkle tree in your design?

Sure, it does a few things:

  * Non-inclusion proofs, so I can do things like prove to you that I'm no
    longer committing to my 1-of-1 holographic beefzard card when we swap.

  * The key/tree structure means that the tree is history independent, meaning
    that if you and I insert the same things into the tree in a different
    order, we'll get the same root hash. This is useful for things like
    tracking all the issuance events for a given asset, or allowing two
    entities to sync their knowledge/history of a single asset, or a set of
    assets.

  * Each asset/script mapping to a unique location within the tree means it's
    easy to ensure uniqueness of certain items/commitments (not possible to
    commit to the same asset ID twice in the tree as an example).

  * The merkle-sum trait means that validation is made simpler, as you just
    check that the input and output commitments sum to the same value, and I
    can also verify that if we're swapping, then you aren't committing to more
    units than exist (so I make sure I don't get an invalid split). See the
    sketch after this list.
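
Here's the sketch referenced above: a minimal merkle-sum node where each
branch also commits to the sum of its children, so comparing root sums is
enough to catch inflation on a split/swap (names are illustrative, not the
actual MS-SMT code):

    package main

    import (
        "crypto/sha256"
        "encoding/binary"
        "fmt"
    )

    // msNode is a simplified merkle-sum node: a hash plus the sum of all
    // asset units beneath it.
    type msNode struct {
        hash [32]byte
        sum  uint64
    }

    // branch commits to both children *and* their combined sum, so the
    // root's sum is the total number of units in the tree.
    func branch(l, r msNode) msNode {
        h := sha256.New()
        h.Write(l.hash[:])
        h.Write(r.hash[:])
        var buf [8]byte
        binary.BigEndian.PutUint64(buf[:], l.sum+r.sum)
        h.Write(buf[:])
        n := msNode{sum: l.sum + r.sum}
        copy(n.hash[:], h.Sum(nil))
        return n
    }

    func leaf(data []byte, amt uint64) msNode {
        return msNode{hash: sha256.Sum256(data), sum: amt}
    }

    func main() {
        // A split of 5 units into 2 + 3: the output root must carry the
        // same sum as the input root, otherwise units were minted out of
        // thin air.
        inRoot := leaf([]byte("input: 5 units"), 5)
        outRoot := branch(leaf([]byte("to you"), 2), leaf([]byte("change"), 3))
        fmt.Println("no inflation:", inRoot.sum == outRoot.sum)
    }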

> And the second question – when transferring Taro token ownership from one
> Bitcoin UTXO to another, do you generate a new UTXO for the recipient or do
> you support the ability to "teleport" the tokens to an existing UTXO like how
> RGB does it? If the latter, have you given consideration to timing issues
> that might occur when someone sends tokens to an existing UTXO that
> simultaneously happens to get spent by the owner?

So for interactive transfers, the UTXOs generated are just the ones that are
part of the MIMO transaction. When sending via the address format, a new
non-dust output is created which holds the new commitment, and uses an
internal key provided by the receiver, so only they can move the UTXO.
Admittedly, I'm not familiar with how the RGB "teleport" technique works. I
checked out some slide decks a while back, but they were mostly about all the
new components they were creating and their milestone of 1 million lines of
code. Can you point me to a coherent explanation of the technique? I'd love to
compare/contrast so we can analyze the different tradeoffs being made here.

Thanks for the initial round of feedback/analysis! I'll be updating the draft
over the next few days to better spell things out, particularly that
commitment/sighash uniqueness trait.

-- Laolu

[1]: https://twitter.com/roasbeef/status/1330654936074371073?s=20&t=feV0kWAjJ6MTQlFm06tSxA
[2]: https://twitter.com/roasbeef/status/1330692571736117249?s=20&t=feV0kWAjJ6MTQlFm06tSxA