--- Log opened Mon Mar 18 00:00:31 2024
00:08 < setavenger> Josie: Saw a not-so-fun fact about the taproot UTXO set over the weekend and could verify it with the data I collected. Apparently 80% of taproot UTXOs have a value of <= 546 sats, and 85% are below 1,000 sats. In other words, of our 38.8M taproot UTXOs, 33.6M are below 1,000 sats.
00:12 < setavenger> Seems like one could avoid *a lot* of unnecessary scanning and tweaking if a sort of dust limit were implemented. Probably just something done on an implementation basis, not in the protocol. Is that something that would make sense?
00:15 < setavenger> Let's say my light client implementation will not see UTXOs below 1,000 sats. That would reduce scanning by 85% and greatly improve UX for the user. Also, no one should want 1,000-sat UTXOs, as they are basically unspendable (the fees to spend a p2tr utxo exceed 1,000 sats at ~15 sats/vByte).
00:48 < josie> setavenger: this is something i suspected but never verified. these are likely inscription reveal txs. regarding a scanning dust limit, seems worthwhile! this also has a nice privacy benefit in that it protects silent payment clients from dusting attacks
00:51 < josie> im not sure how hard it would be to implement, but it seems best to me if the client can set this limit, i.e. they set a dust threshold when registering with the server, or even better, set a dust threshold when requesting data from the server
01:38 < josie> this means the server would still need to compute the tweak data for every UTXO, but it could save clients a lot on bandwidth
02:17 < setavenger> Josie: yeah, I was thinking it's best to let the client set the limit on request. I already have an idea how that could be done on the index-server side (the client part should be very simple). I guess one would just have to make sure that requests with individual dust limits don't stress the server too much.
02:26 < setavenger> Computing the entire index is not really a problem.
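The 00:15 fee claim can be sanity-checked with rough size arithmetic. A sketch (editor's illustration; the constants are the usual vbyte estimates for a 1-input/1-output key-path p2tr spend, not figures from the log):

```python
# Rough check (editor's illustration): why sub-1,000-sat p2tr UTXOs are
# effectively unspendable at ~15 sat/vB. Sizes are the standard
# weight-based estimates for a 1-input/1-output key-path p2tr spend.

TX_OVERHEAD_VB = 10.5   # version, locktime, io counts, segwit marker/flag
P2TR_INPUT_VB = 57.5    # outpoint + sequence + 64-byte schnorr sig witness
P2TR_OUTPUT_VB = 43.0   # amount + scriptPubKey (OP_1 <32-byte key>)

def min_spend_fee(feerate_sat_vb: float) -> float:
    """Fee to move one p2tr UTXO into one p2tr output at the given feerate."""
    vsize = TX_OVERHEAD_VB + P2TR_INPUT_VB + P2TR_OUTPUT_VB
    return vsize * feerate_sat_vb

print(min_spend_fee(15))  # 1665.0 sats, more than the UTXO itself is worth
```

At ~15 sat/vB the fee alone exceeds the value of an 85th-percentile taproot UTXO, which is the basis for skipping them when scanning.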
02:26 < setavenger> Indexing, including all the cut-through stuff, takes about 7-8 hours, and the storage requirements are fairly moderate (<8 GB of disk space). This includes a full tweak index, a cut-through index with metadata (otherwise one couldn't prune efficiently), and a taproot UTXO set for easy serving.
02:28 < josie> setavenger: yeah, as long as the server is not doing compute *per* client, it's okay to have it do more work upfront. those numbers you just shared sound very reasonable
02:52 < setavenger> Josie: Currently thinking about filtering when collecting the tweaks from the DB. Will see about the performance, but I think it should be fine. I'm pretty happy with the numbers on the server side of things.
02:58 < setavenger> BTW, I managed to create a react-native library using libsecp. Just replacing the multiplication cuts the time from 2m down to 25s (~900 tweaks). It's better, but still not where it needs to be in my opinion. It's becoming more and more feasible with every iteration, though. Dust limits should also improve the situation by simply leaving fewer tweaks to compute.
03:35 < josie> setavenger: very nice! 25s is still surprisingly high, but definitely moving in the right direction
04:29 < setavenger> josie: currently one ECC mul operation takes 10ms with libsecp. Gotta do two of those to get the pubkey. Then some peripherals, and suddenly you need 32ms per tweak. Will do some further benchmarking to figure out what the real blocker is
04:41 < josie> setavenger: where is the second ECC mult coming from when scanning? it should only be one?
04:43 < setavenger> Josie: 1. tweak*b_scan and 2. Pk = Bspend + tk·G
04:45 < josie> ah gotcha. i was under the impression tk*G should be faster than a regular ECC mult. does it take the same amount of time if you convert tk to a private key and then use something like `.get_pubkey()` to get tk*G?
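The two per-tweak EC operations setavenger lists at 04:43 can be sketched as follows. This is an editor's illustration with a deliberately slow, minimal secp256k1 implementation and a simplified hash step (real BIP352 uses a tagged hash of the serialized ECDH share and an output index); a real client would call libsecp256k1 instead:

```python
# Editor's sketch of the two ECC multiplications per tweak when scanning.
# Minimal, slow secp256k1 arithmetic for illustration only; the hash step
# is simplified relative to BIP352's tagged hash.

import hashlib

P = 2**256 - 2**32 - 977  # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(p, q):
    """Affine point addition (None is the point at infinity)."""
    if p is None: return q
    if q is None: return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0: return None
    if p == q:
        lam = 3 * p[0] * p[0] * pow(2 * p[1], -1, P) % P
    else:
        lam = (q[1] - p[1]) * pow(q[0] - p[0], -1, P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def mul(k, pt):
    """Double-and-add scalar multiplication."""
    r = None
    while k:
        if k & 1: r = add(r, pt)
        pt = add(pt, pt)
        k >>= 1
    return r

def scan_output(b_scan, B_spend, A_tweak):
    # 1st ECC mult: ECDH share = b_scan * A_tweak (the "tweak*b_scan" step;
    # the server has already summed the input pubkeys into A_tweak)
    ecdh = mul(b_scan, A_tweak)
    t_k = int.from_bytes(hashlib.sha256(ecdh[0].to_bytes(32, 'big')).digest(),
                         'big') % N
    # 2nd ECC mult: candidate output key Pk = Bspend + tk*G
    return add(B_spend, mul(t_k, G))
```

The second multiplication is by the fixed generator G, which is why josie expects it to be cheaper than a general point multiplication in an optimized library.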
04:45 < josie> (i could be wrong)
04:48 < setavenger> Josie: Did some extra benchmarking. I hadn't considered this before, but the conversion of bytes into a public key in JS takes up 9.x ms of the 10ms. So libsecp is doing its job, and it's doing it pretty well. Knowing this, I can probably speed things up by a lot.
04:49 < setavenger> Moreover, I also think that having SP computations in libsecp could help a lot. You mentioned that this is a project of yours, right?
04:50 < josie> yep! theStack and I are currently working on the API for a libsecp silentpayments module. the goal is to minimize the work a caller needs to do and avoid needing to serialize/deserialize bytes into pubkeys as much as possible
04:52 < josie> ideally, this means a receiver would have a single function call to libsecp where they provide the input pubkeys, their private scan key/spend pubkey, and the x-only outputs from the tx, and the function would return any outputs that are a match
04:54 < josie> not sure how hard it is for you to update your react bindings, but the WIP PR is here: https://github.com/bitcoin-core/secp256k1/pull/1471
04:54 < josie> my hope is that if you were to use this, you would see a pretty significant speedup
04:56 < setavenger> I think it would make sense to expose different parts as different functions as well. Bandwidth-wise, it helps to first compare the computed pubkeys against a taproot-only filter before fetching all the outputs.
04:58 < setavenger> Updating the package shouldn't be a big issue. I only wrote wrappers for the relevant functions. It's my first time using Obj-C and C, so it takes some getting used to.
05:00 < josie> thats great feedback regarding multiple functions. in the case of using filters, you would want a function for producing a single silent payment output, rather than a function that does the whole scan for you given a list of outputs
05:00 < setavenger> In general, the more I can outsource to C and reduce JS code, the better.
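The "taproot-only filter" mentioned at 04:56 is the BIP158 compact-block-filter approach. A simplified Golomb-coded-set sketch (editor's illustration: sha256 stands in for BIP158's SipHash, but the real parameters P=19, M=784931 are used):

```python
# Editor's sketch of BIP158-style Golomb-coded set matching: the client
# hashes its candidate silent payment pubkeys and checks them against a
# compact per-block filter before downloading any output data.
# Simplification: sha256 replaces BIP158's keyed SipHash.

import hashlib

GOLOMB_P = 19    # BIP158 Golomb-Rice parameter
M = 784931       # BIP158 false-positive modulus

def _hash_to_range(item: bytes, f: int) -> int:
    h = int.from_bytes(hashlib.sha256(item).digest()[:8], 'big')
    return (h * f) >> 64  # uniformly map a 64-bit hash into [0, f)

def build_filter(items):
    """Encode the sorted hashed items as Golomb-Rice-coded deltas."""
    f = len(items) * M
    values = sorted(_hash_to_range(i, f) for i in items)
    bits, prev = [], 0
    for v in values:
        delta = v - prev
        q, r = delta >> GOLOMB_P, delta & ((1 << GOLOMB_P) - 1)
        # unary quotient, 0 terminator, then fixed-width remainder
        bits += [1] * q + [0] + [(r >> (GOLOMB_P - 1 - j)) & 1
                                 for j in range(GOLOMB_P)]
        prev = v
    return bits, f

def match_any(bits, f, queries):
    """Decode deltas and report whether any query hash appears."""
    targets = {_hash_to_range(q, f) for q in queries}
    v, i = 0, 0
    while i < len(bits):
        q = 0
        while bits[i] == 1:
            q += 1; i += 1
        i += 1  # skip the 0 terminator
        r = 0
        for _ in range(GOLOMB_P):
            r = (r << 1) | bits[i]; i += 1
        v += (q << GOLOMB_P) | r
        if v in targets:
            return True
    return False
```

A match only says "download this block's outputs and run the full check"; false positives occur at roughly 1/M per query, which is why the client still verifies candidates against the actual outputs.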
05:00 < setavenger> I'll have a look at the PR as well.
05:01 < josie> awesome, it would be great to get feedback on the high-level API, especially for a mobile client
05:02 < setavenger> yeah, exactly. I'll write down my process in the repo for light client implementations (that's also where I publish some of the stats regarding disk size, etc.). Then we can discuss which approach is best and where one could save on de-/serialisation.
05:04 < josie> awesome, thanks for helping out with this!
06:12 < setavenger> Glad to help, I've been wanting to get involved with Bitcoin development for a while now
06:22 < setavenger> Josie: Got pubkey computations down to ~1.7s! I cut out all the conversions to JS KeyPair types.
06:22 < setavenger> So each pubkey for a check against the filter is down to about ~2ms.
06:24 < setavenger> libsecp is definitely the way to go. I think any other implementation will have terrible UX.
06:32 < josie> great news! this is what i suspected, which is why we decided to go the libsecp module route, but its great to have numbers to confirm it
06:37 < RubenSomsen> setavenger: Nice work
06:58 < setavenger> Thanks!
07:06 < setavenger> Josie: Made a basic overview for receiving with a light client https://github.com/setavenger/BIP0352-light-client-specification#workflow-receiving
07:15 < RubenSomsen> Note that while optimizations like skipping suspected ordinal outputs are nice in the short term, in the long term these things won't matter, and we should assume blocks are going to be filled with normal taproot outputs.
07:16 < setavenger> My initial idea for the libsecp module would be to have these functions separate:
07:16 < setavenger> 1. compute_pub_keys_from_tweaks - inputs: [tweaks, b_scan, B_spend] -> output: [all pubkeys with n=0]; those can then be matched against a filter
07:16 < setavenger> 2. scan - inputs: [b_scan, B_spend, pubkeys from compute_pub_keys_from_tweaks, labels, outputs] -> output: [matching scripts (possibly with an indication of which label was matched)]
07:16 < setavenger> Hope I didn't miss anything.
07:17 < RubenSomsen> And don't forget that you still need a filter that also works for inputs if you want a fully functional light client, else you can't detect whether transactions you've made have been confirmed (particularly for the case where there is no change output).
07:21 < RubenSomsen> I had a few ideas for how to optimize that, but I'd have to sit down and think it through again.
07:25 < setavenger> RubenSomsen: yes, you are right, it's a short-term "solution". Hopefully we can get some optimisations in that are more sustainable. I can probably cut down the computation time for scanning a bit more (even in the short term). I'm not sure what the baseline for this kind of computation is or should be.
07:25 < setavenger> Curious what your ideas are.
07:30 < josie> RubenSomsen, setavenger: agree re: short-term solutions, but I think having a light client skip scanning a tx if the only unspent outputs are "dust" is a generally useful thing. it just so happens that helps us skip a certain number of ordinals txs today
07:39 < RubenSomsen> setavenger: all the ideas we've had on the front of scanning optimization are in the BIP (cut-through, notifications, adding input keys, checking multiple labels); afaict there is no other low-hanging fruit.
07:39 < RubenSomsen> I'm not at all opposed to short-term optimizations, especially while we're still bootstrapping a new address format, as long as we keep their impermanence in mind.
07:42 < RubenSomsen> To be a bit more explicit, I'd disagree with things like "it's not fast enough unless we skip low-value outputs", since that's not a sustainable solution.
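The two-function split proposed at 07:16 could look roughly like this. Editor's sketch only: the EC derivation is replaced by a stand-in hash so the control flow is runnable, and label handling is omitted; the real functions would call into the libsecp256k1 module under discussion:

```python
# Editor's sketch of setavenger's proposed two-function client API.
# _derive is a stand-in for the real EC math (Pk = Bspend + tk*G);
# labels are accepted but not processed in this simplified version.

import hashlib
from typing import List

def _derive(tweak: bytes, b_scan: bytes, B_spend: bytes, k: int) -> bytes:
    # Hypothetical stand-in for the BIP352 output-key derivation.
    return hashlib.sha256(tweak + b_scan + B_spend + bytes([k])).digest()

def compute_pub_keys_from_tweaks(tweaks: List[bytes], b_scan: bytes,
                                 B_spend: bytes) -> List[bytes]:
    # Step 1: derive the n=0 candidate key for every tweak. These can be
    # matched against a taproot-only filter before fetching any outputs.
    return [_derive(t, b_scan, B_spend, 0) for t in tweaks]

def scan(b_scan: bytes, B_spend: bytes, candidates: List[bytes],
         labels: List[bytes], outputs: List[bytes]) -> List[bytes]:
    # Step 2: run only on blocks where the filter matched. A full
    # implementation would also derive n=1, 2, ... on each hit and
    # check label variants, per BIP352.
    out_set = set(outputs)
    return [c for c in candidates if c in out_set]
```

Splitting the API this way means the expensive step 1 happens once per block of tweak data, while step 2 touches output data only for blocks that pass the filter, which is exactly the bandwidth saving setavenger describes.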
07:42 < RubenSomsen> And even more broadly speaking, I don't think we ought to aim for a certain "speed", but just optimize it as much as we can and accept the outcome.
08:07 < josie> RubenSomsen: agree, the success criterion for a light client is "can a user privately scan for silent payment transactions without needing to run a full node," so anything that uses less bandwidth than a full node and can run on a smaller device (like a phone) is a success imo. anything that further speeds things up or helps us optimize is a UX improvement
08:51 < setavenger> Also agree. I do think, though, that we will be pushed to optimise further if we see that a certain speed is not feasible for a decent UX. At least that's what drives me to improve the process further. For example, with current numbers it would take about an hour to scan a month's worth of tweaks. This is not terrible considering it's on a phone, but it motivates me to push further and try new approaches.
14:31 < setavenger> Josie: I read through the PR regarding SP in libsecp256k1. It wasn't clear to me whether it was already usable (even if it's just the basics).
--- Log closed Tue Mar 19 00:00:31 2024