Isn't there some way to "rebase" a relative lock-time to some anchor even further in the past while cancelling out the intermediate transactions?

best regards,
Martin

On Thu, Sep 19, 2019 at 9:52 AM ZmnSCPxj via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
Good morning list,

I was reading transcript of recent talk: https://diyhpl.us/wiki/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/blockchain-design-patterns/

And in section "Taproot: main idea":

> Q: Can you do timelocks with adaptor signatures?
>
> ...
>
> A: This is one way it's being proposed by mimblewimble; but this requires the ability to aggregate signatures across transactions.
>
> Q: No, there's two transactions already existing. Before locktime, you can spend with the adaptor signature one like atomic swaps. After locktime, the other one becomes valid and you can spend with that. They just double spend each other.
>
> A: You'd have to diagram that out for me. There's a few ways to do this, some that I know, but yours isn't one of them.

I believe what is being referred to here is to simply have an `nLockTime` transaction that is signed by all participants first, and serves as the "timelock" path.
Then, another transaction is created, for which adaptor signatures are exchanged before the signing ritual is completed; this serves as the "hashlock" path.
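
For concreteness, here is a rough sketch of that two-transaction construct; the names and fields are purely illustrative, and all of the actual cryptography (the 2-of-2 signing session and the adaptor signature itself) is elided:

```python
# Illustrative sketch only; field names are hypothetical and all actual
# cryptography (MuSig-style signing, adaptor signatures) is elided.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tx:
    spends: str            # outpoint of the shared 2-of-2 funding output
    pays_to: str
    n_lock_time: int = 0   # 0 = no absolute timelock

# "Timelock" path: fully signed by all participants first, but not valid
# in any block before the stated height (600_000 is an arbitrary example).
timeout_tx = Tx(spends="funding:0", pays_to="refund to Alice", n_lock_time=600_000)

# "Hashlock" path: only an adaptor signature is handed over; completing and
# broadcasting it necessarily reveals the secret scalar x, which is what
# makes the swap atomic.  Both paths double-spend the same funding output,
# so at most one of them can ever confirm.
swap_tx = Tx(spends="funding:0", pays_to="pay to Bob", n_lock_time=0)
adaptor_sig_for_swap_tx: Optional[bytes] = None  # completed only once x is learned
```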

I find it surprising that this is not well-known.
I describe it here tangentially, for instance: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-April/016888.html
The section "Payjoin2swap Swap Protocol" refers to "pre-swap transaction" and "pre-swap backout transaction", which are `nLockTime`d transactions.
Later transactions then use a Scriptless Script-like construction to transfer information about a secret scalar x.

My understanding of MimbleWimble is that:

* There must exist a proof-of-knowledge of the sum of blinding factors used.
  This can be trivially had by using a signature of this sum, signing an empty message or "kernel".
* I believe I have seen at least one proposal (I cannot find it again now) where the "kernel" is replaced with an `nLockTime`-equivalent.
  Basically, the `nLockTime` would have to be explicitly published, and the kernel would be rejected for a block if the `nLockTime` was greater than that block's height (a sketch of this check follows the list).
  * There may or may not exist some kind of proof where the message being signed is an integer known to be no greater than a particular value, and where multiple signatures over lower values can somehow be aggregated up to a higher value; this would serve the same purpose while remaining compressible.
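
As a purely illustrative sketch of the kernel-with-locktime idea above (assuming a hypothetical `Kernel` record that publishes its lock height explicitly; this is not any concrete MimbleWimble implementation's API):

```python
# Hypothetical sketch; `Kernel` and its fields are illustrative names only.
# Signature verification is elided.
from dataclasses import dataclass

@dataclass
class Kernel:
    excess: bytes       # commitment to the sum of blinding factors
    signature: bytes    # signs (at least) lock_height as its message
    lock_height: int    # published explicitly alongside the kernel

def kernel_allowed_at(kernel: Kernel, block_height: int) -> bool:
    # The kernel is rejected for any block whose height is still below the
    # committed lock height; since kernels are kept forever, a pruned node
    # can re-check this against the current chain height.
    return kernel.lock_height <= block_height
```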

My understanding is thus that the above `nLockTime` technique is what is indeed intended for MimbleWimble cross-system atomic swaps.

--------

However, I believe that Lightning and similar offchain protocols are **not possible** on MimbleWimble, at least if we want to retain its "magical shrinking blockchain" property.

All practical channel constructions with indefinite lifetime require the use of *relative* locktimes.
Of note is that `nLockTime` is an *absolute* locktime, and thus imposes a fixed lifetime.

The only practical channel constructions I know of that do not require *relative* locktime (mostly various variants of Spilman channels) have a fixed lifetime, i.e. the channel will have to be closed before the lifetime arrives.
This is impractical for a scaling network.

It seems to me that some kind of "timeout" is always necessary, similar to the timeout used in SPV-proof sidechains, in order to allow an existing claimed-latest-state to be proven as not-actually-latest.

* In Poon-Dryja, knowledge of the revocation key by the other side proves the published claimed-latest-state is not-actually-latest and awards the entire amount to the other party.
  * This key can only be presented during the timeout, a security parameter.
* In Decker-Wattenhofer decrementing-`nSequence` channels, a kickoff starts this timeout, and only the smallest-timeout state gets onchain, due to it having a time advantage over all other versions.
* In indefinite-lifetime Spilman channels (also described in the Decker-Wattenhofer paper), the absolute-timelock initial backout transaction is replaced with a kickoff + relative-locktime transaction.
* In Decker-Russell-Osuntokun, each settlement transaction has an imposed `nSequence` that delays it relative to the update transaction it is paired with.

It seems that in all practical offchain updateable cryptocurrency systems, some kind of "timeout" is needed, during which participants have an opportunity to present an alternative version of some previous claim of correct state.

This timeout could be implemented as either a relative or an absolute locktime, but obviously an absolute locktime would impose a limit on the lifetime of the channel.
Thus, if we want an indefinite-lifetime channel, we must use relative locktimes, with the timeout starting only when the unilateral close is initiated by one participant.
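
To make the distinction concrete, a minimal sketch (hypothetical helper names; timing measured in blocks):

```python
# Minimal sketch with hypothetical names; heights are block heights.

def absolute_timeout_expired(tip_height: int, lock_height: int) -> bool:
    # Fixed at channel creation: the channel must be closed before
    # lock_height, which caps its lifetime.
    return tip_height >= lock_height

def relative_timeout_expired(tip_height: int,
                             close_confirm_height: int,
                             delay_blocks: int) -> bool:
    # Starts counting only when the unilateral close confirms, so the
    # channel itself can stay open indefinitely.
    return tip_height >= close_confirm_height + delay_blocks
```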

Now, let us turn back to MimbleWimble.
As it happens, we do *not* actually need SCRIPT to implement these offchain updateable cryptocurrency systems.
2-of-2 is often enough (and with Schnorr and other homomorphic signatures, this is possible without explicit script, only pubkeys and signatures, which MimbleWimble supports).

* Poon-Dryja revocation can be rewritten as an HTLC-like construct (indeed this was the original formulation).
  * Since we have shown that an HTLC-like construct can be implemented on MimbleWimble by using two alternative transactions, one timelocked and the other hashlocked, that is enough.
* Relative locktimes in Decker-Wattenhofer are imposed by simple `nSequence`, not by `OP_CSV`.
  HTLCs hosted inside such constructions can again use the two-transactions construct in MimbleWimble.
* Ditto with indefinite-lifetime Spilman.
* Ditto with Decker-Russell-Osuntokun (see the sketch after this list).
  * The paper shows the use of `OP_CSV`, but aj notes it is redundant, and I agree: https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-March/001933.html
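
Here is an illustrative (not implementation-accurate) picture of a Decker-Russell-Osuntokun-style pair, where the relative delay lives entirely in the settlement transaction's input sequence field, enforced by consensus alone, so no script opcode appears anywhere:

```python
# Illustrative only; field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Spend:
    spends: str
    n_sequence_delay: int   # relative delay in blocks imposed on this spend; 0 = none

# The update transaction output can itself be re-spent immediately by a newer
# update, while its paired settlement transaction must wait out the relative
# delay, leaving a window in which a newer state can still be published.
update_tx     = Spend(spends="funding:0", n_sequence_delay=0)
settlement_tx = Spend(spends="update:0",  n_sequence_delay=144)  # ~1 day, arbitrary example
```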

Thus, it is not the "nonexistence of SCRIPT" that prevents Lightning from being deployed on MimbleWimble.

Instead, it is the "nonexistence of **relative** locktime" that prevents Lightning over MimbleWimble.

Why would **relative** locktimes not be possible?
In order to **validate** a relative locktime, we need to know the blockheight that the output we are spending was confirmed in.

But the entire point of the "magical shrinking blockchain" is that already-spent outputs can be removed completely and all that needs to be validated by a new node is:

* The coin-creation events.
* The current UTXO set (plus attached rangeproofs).
* The blinding keys.
* Signatures made with the blinding keys, and the kernels they sign (if we use the "kernels encode `nLockTime`" technique in some way, the encoded locktimes should not exceed the current claimed blockheight; see the sketch after this list).
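
A rough sketch of what this pruned state lets a new node check (names are hypothetical; the rangeproof, balance, and signature checks are elided):

```python
# Illustrative sketch only; names are hypothetical and the heavy checks
# (rangeproofs, excess-sum balancing, signature verification) are elided.
from dataclasses import dataclass
from typing import List

@dataclass
class KernelSummary:
    lock_height: int   # as in the kernel sketch earlier; signature elided

@dataclass
class PrunedState:
    coin_creation_events: List[bytes]
    utxos: List[bytes]                  # current outputs plus rangeproofs
    kernels: List[KernelSummary]        # kept forever, unlike spent outputs

def validate_pruned_state(state: PrunedState, tip_height: int) -> bool:
    # 1. verify every rangeproof                            (elided)
    # 2. coin creation + kernel excesses balance the UTXOs  (elided)
    # 3. verify every kernel signature                      (elided)
    # 4. absolute locktimes: every kernel's lock height has been reached
    return all(k.lock_height <= tip_height for k in state.kernels)
```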

The problem is that an output in the UTXO set might be invalid if the transaction that created it spent a previous output "too soon", i.e. before the minimum delay imposed by that spend's `nSequence` had elapsed.
That is, the above does not allow validation of **relative** locktimes, only **absolute** locktimes.
(At least as far as I understand: there may be special cryptographic constructs that allow signatures to reliably commit to some relative locktime).

This means that relative locktimes need to be implemented by showing the transactions that spend previous UTXOs and create the current UTXOs, and so on backwards to coin-creation events.
This forces us back to the old "validate all transactions" model of starting a new node (and seriously damages the entire point of using MimbleWimble anyway).
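
Continuing the pruned-state sketch above: checking that an existing UTXO never violated a relative locktime requires the transaction that created it, the outputs that transaction spent, their confirmation heights, and so on recursively, which is exactly the history that cut-through discards. A simplified illustration (one delay per transaction rather than per input; names are hypothetical):

```python
# Simplified illustration only: one relative delay per transaction rather
# than per input, and hypothetical names throughout.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HistoricalTx:
    confirm_height: int
    n_sequence_delay: int                         # relative delay on its inputs
    parents: List["HistoricalTx"] = field(default_factory=list)

def relative_locks_respected(tx: HistoricalTx) -> bool:
    # Walking parents all the way back to coin-creation events is the
    # "validate all transactions" model the pruned node was meant to avoid;
    # none of this data survives cut-through.
    return all(
        tx.confirm_height >= parent.confirm_height + tx.n_sequence_delay
        and relative_locks_respected(parent)
        for parent in tx.parents
    )
```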

I do not believe it is the lack of SCRIPT that prevents Lightning-over-MimbleWimble, but rather the lack of relative locktime, which seems difficult to validate without knowing the individual transactions and when they were confirmed.

Regards,
ZmnSCPxj
