Hi all,

A lot is being discussed, but I just wanted to react to a few points.

# CSFS

Lloyd, good point about CSFS not providing the same privacy benefits, and that OP_CAT would be required in addition. And thanks Philipp for the link to your post; it was an interesting read!

Jeremy
>CSFS might have independent benefits, but in this case CTV is not being used in the Oracle part of the DLC, it's being used in the user generated mapping of Oracle result to Transaction Outcome.

My point was that CSFS could be used both in the oracle part and in the transaction restriction part (as in the post by Philipp), but as Lloyd pointed out, it does not really provide the same model as DLC.

# Performance

Regarding how much performance benefit this CTV approach would provide: setting aside the savings from not having to transmit and store a large number of adaptor signatures, and without any further optimization of the anticipation point computation, I tried to get a rough estimate through some benchmarking. If I'm not mistaken, with CTV we would only need to compute the oracle anticipation points, with no signing or verification required. I've thus benchmarked the current approach (signing + verification) against computing only the anticipation points, for a single oracle with 17 digits and 10000 varying payouts (between 45000 and 55000). The results are below.

```
Without parallelization:
baseline:                   [7.8658 s  8.1122 s  8.3419 s]
no signing/no verification: [321.52 ms 334.18 ms 343.65 ms]

With parallelization:
baseline:                   [3.0030 s  3.1811 s  3.3851 s]
no signing/no verification: [321.52 ms 334.18 ms 343.65 ms]
```

So it seems like the performance improvement is roughly 24x for the serial case and 10x for the parallel case.
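For context, the anticipation point computation is the part that remains under CTV. Here is a minimal sketch of the idea, using plain scalar arithmetic modulo a toy modulus as a stand-in for real secp256k1 point operations (all names and the modulus are illustrative, not rust-dlc's actual API):

```python
import hashlib

# Toy stand-in: instead of secp256k1 points we work directly with scalars
# modulo a (hypothetical) group order N, i.e. "in the exponent", so the
# point S = s*G is represented by the scalar s itself.
N = 2**255 - 19  # illustrative modulus only, NOT the secp256k1 order

def challenge(nonce, pubkey, msg):
    """Schnorr challenge hash H(R, P, m), reduced mod N."""
    data = b"%d|%d|%s" % (nonce, pubkey, msg.encode())
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def anticipation(nonce_pt, pubkey_pt, msg):
    """Anticipation 'point' S = R + H(R, P, m)*P for a possible outcome."""
    return (nonce_pt + challenge(nonce_pt, pubkey_pt, msg) * pubkey_pt) % N

def attest(nonce_sec, seckey, nonce_pt, pubkey_pt, msg):
    """Oracle attestation s = k + H(R, P, m)*x for the realized outcome."""
    return (nonce_sec + challenge(nonce_pt, pubkey_pt, msg) * seckey) % N

# In this scalar stand-in a public key equals its secret key, so the
# verification S == s*G collapses to S == s.
k, x = 1234567, 7654321            # oracle nonce and secret key
R, P = k, x                        # their "public" counterparts
for digit in ("0", "1"):           # 2 points per binary digit position
    assert anticipation(R, P, digit) == attest(k, x, R, P, digit)
```

The attestation for an outcome is exactly the discrete log of its anticipation point, which is what lets the per-CET adaptor signing and verification in the baseline disappear.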

The two benchmarks are available (how to run them is detailed in the README in the same folder):

Let me know if you think that's a fair simulation or not. One thing I'd also like to see is the impact of a very large taproot tree on the size of the witness data when spending script paths that sit low in the tree, and how that would affect the transaction fee. I might try to experiment with that at some point.
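BIP 341 makes that witness overhead easy to bound for a balanced tree: the control block is 33 bytes plus 32 bytes per merkle path element, and witness bytes weigh 1 WU (i.e. 1/4 vbyte) each. A quick back-of-the-envelope sketch (assuming a balanced tree; a probability-weighted Huffman tree would place likely CETs higher):

```python
def control_block_size(num_leaves):
    """BIP 341 control block size in bytes for a balanced taptree:
    33 bytes (leaf version/parity byte + internal key) plus 32 bytes
    per merkle path element, with depth ceil(log2(num_leaves))."""
    depth = (num_leaves - 1).bit_length()  # ceil(log2(n)) for n >= 1
    return 33 + 32 * depth

for leaves in (2, 100, 10_000):
    size = control_block_size(leaves)
    # Witness data counts 1 weight unit per byte, i.e. 1/4 vbyte.
    print(f"{leaves:>6} leaves: {size}-byte control block (~{size / 4:.0f} vbytes)")
```

So even with 10000 CET leaves the merkle path is 14 elements, i.e. 448 bytes (~112 vbytes) of witness data on top of a depth-0 spend, regardless of where in the tree the spent leaf sits.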

Cheers,

Thibaut 

On Mon, Feb 7, 2022 at 2:56 AM Jeremy Rubin via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
I'm not sure what is meant concretely by (5), but I think overall performance is OK here. You will always have 10 minutes or so to confirm the DLC, so you can't be too fussy about performance!

I mean that if you think of the CIT points as being the X axis (or independent axes if multivariate) of a contract, the Y axis is the dependent variable represented by the CTV hashes. 


For a DLC living inside a lightning channel, which might be updated between the parties e.g. every second, this means you only have to recompute the cheaper part of the DLC when the payoff curve (y axis) is updated, and even then only for the points whose y value changes.

For on chain DLCs this point is less relevant since the latency of block space is larger. 
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev