--- Log opened Tue Feb 18 00:00:44 2020
08:01 -!- bsm1175321 [~mcelrath@2601:196:4902:25b0:5902:4835:3642:ba57] has joined ##ctv-bip-review
08:01 -!- bsm1175321 [~mcelrath@2601:196:4902:25b0:5902:4835:3642:ba57] has quit [Client Quit]
14:08 < jeremyrubin> https://github.com/JeremyRubin/CTVSims/tree/master/batch-splitting
14:08 < jeremyrubin> I added some writeupy text
14:17 < harding> jeremyrubin: I only skimmed your text, but it seems like it doesn't deal with the case where, if you're going to send with a low priority anyway, you can wait longer to send (non-CTV) in the first place and so create a larger batch. E.g., if your target is 36 blocks, you can collect unsent payments for 6 blocks and then use the 30-block target rate; if there's not much difference between the 36-block and the 30-block target rate,
14:17 < harding> then boost from larger batches ends up saving you more on fees (and, relevant to your simulation, reduces block space use).
14:18 < harding> s/then boost/the boost/
14:19 < harding> Also, I have to confess I've never thought of a case where a spender would have more than two priority buckets, "fast and expensive" and "slow but cheap".
14:37 < jeremyrubin> harding: I explicitly have that in the limitations section
14:38 < jeremyrubin> "A better simulation would allow re-bundling the offered-but-not-taken fees into a new RBF batch, which would benefit Priority Batching but not CTV batching."
14:38 < jeremyrubin> The issue with this is a bit more nuanced
14:39 < jeremyrubin> So you always want to emit a txn to the mempool to be opportunistic for low-fee stuff, and RBF if appropriate
14:39 < jeremyrubin> RBF... kinda sucks
14:39 < jeremyrubin> There are so many issues with any of these things idk where to start... but let's do a quick dive
14:40 < jeremyrubin> 1) chaining -- you need to ensure that multiple batches you have retain a common input somehow so that you don't have issues where both batches get issued.
14:40 < jeremyrubin> 2) RBF locks up your change funds while it's in the pool
14:40 < jeremyrubin> 3) Miners can mine an earlier batch, in which case you need to roll back your batching and reissue
14:41 < jeremyrubin> But then during a reorg you could re-play the second batch with the RBF that wasn't taken
14:41 < jeremyrubin> So those need to serialize
14:41 < jeremyrubin> e.g., spend an output from any confirmed batch in the next one
14:41 < jeremyrubin> Which means they really can't safely sit in the mempool
14:42 < jeremyrubin> 4) Users also want to know a TXID generally... RBF messes with that a bit.
14:42 < jeremyrubin> So I'm not against simulating that, I just don't actually know what's implementable for that strategy
14:44 < jeremyrubin> I'm relatively comfortable with saying that the engineering complexity to pull off something robust there is harder than payment into non-interactive channel opens into "ball lightning"/channel factories for withdrawals, which would also cut through a lot of overhead.
14:44 < jeremyrubin> And those are CTV-only innovations
14:46 < jeremyrubin> I'd be interested if you have a good suggestion on how best to model the re-binning RBF-ing strategy though, for comparison
14:47 < jeremyrubin> The other issue with that is that the method is fundamentally limited as soon as you end up with buckets of size 23k users.
14:48 < jeremyrubin> E.g., one low-priority output a second for 6.5 hours
14:48 < jeremyrubin> Then your bin is larger than an entire block
14:49 < jeremyrubin> Even though it's silly, the point is that assuming a large enough time window and an infinite backlog, with normal batching you eventually *have* to be worse than CTV binning (but the margin is tiny tiny tiny)
15:10 < harding> jeremyrubin: right, agree on all the RBF stuff. That's why we decided not to recommend RBF in Optech's chapter about payment batching (there are also other issues related to package size limits that affect RBF, e.g.
15:10 < harding> https://github.com/bitcoinops/scaling-book/blob/master/x.payment_batching/payment_batching.md#possible-inability-to-fee-bump ).
15:12 < jeremyrubin> harding: right, yeah it gets messy pretty fast... but I do think it's worth exploring! I have an idea of how to model it indirectly maybe? In theory, some sort of priority distribution with some memory would do similarly well to the optimal of that?
15:12 < jeremyrubin> E.g., if the last thing I drew was a 5, the next is more likely to be a 5
15:14 < harding> Although paying now and then RBF'ing to include more outputs later is certainly the ideal strategy, I think there's also a case to be made for delaying now to create a larger batch and then sending later at a confirmation target that's less than you would've used had you sent immediately. Assuming your fee estimator is accurate, it should be reasonably equivalent, and given current values returned by estimatesmartfee, there's a
15:14 < harding> steep fee drop-off after the first few blocks, so I think there's practical savings there (I'm currently exploring quantifying that myself for an article, so I might have stats at some later point).
15:14 < jeremyrubin> Ah
15:14 < jeremyrubin> Could change the model to only issue a batch every N blocks
15:15 < jeremyrubin> I think also there's another important point
15:15 < jeremyrubin> You can't use historical block space
15:15 < jeremyrubin> So if you're more efficient overall, but you *could* have been in an earlier block at your priority
15:15 < jeremyrubin> that's somehow "inefficient"
15:16 < jeremyrubin> This sort of delaying issuing the payment actually leads to increased congestion kinda
15:16 < harding> I find that hard to think through in a supposedly memoryless process of block finding.
15:17 < harding> E.g.
15:17 < harding> if there was a run of three blocks produced in quick succession five minutes ago, your chance of three blocks being produced in quick succession five minutes from now should be the same (assuming constant hashrate / difficulty).
15:19 < harding> As I understand it, for every n minutes you delay sending your payment, you delay confirmation by n minutes (assuming constancy). I think that means if you were going to originally use a 36-block fee target but you wait 6 blocks and use a 30-block fee target, your transaction should still confirm within 36 blocks from your original time (assuming good fee estimation).
15:21 < jeremyrubin> Hmm I think your intuition is off a little bit... but yeah memoryless is confusing.
15:22 < jeremyrubin> There's a difference between your "expectation" at any given time, and your optimal strategy given a known trace
15:22 < harding> If the feerate for 30 blocks is the same or only a small amount larger than the 36-block target, which is often the case today, then you can save money from having a larger batch (more efficiency per payment) while still delivering within your original time window. (However, I think you would on average deliver later because, as you say, you can't use historical block space, so you lose the advantage of any quickly successive
15:22 < harding> blocks).
15:22 < harding> any quickly successive blocks that happened while you were waiting to grow your batch.*
15:24 < jeremyrubin> bbl
15:24 < harding> I'll be back tomorrow.
15:34 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has quit [Ping timeout: 240 seconds]
17:48 -!- bsm1175321 [~mcelrath@2601:196:4902:25b0:3dcd:ab74:4db:7a78] has joined ##ctv-bip-review
17:48 -!- bsm1175321 [~mcelrath@2601:196:4902:25b0:3dcd:ab74:4db:7a78] has quit [Client Quit]
--- Log closed Wed Feb 19 00:00:45 2020
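A quick sanity check of the bin-size arithmetic from 14:47-14:48 (one low-priority output per second for 6.5 hours making a bin larger than a block). The ~43 vbyte per-output figure (a P2WSH-sized output) and the 1,000,000 vbyte block limit are illustrative assumptions, not anything stated in the log:

```python
# One queued payment per second for 6.5 hours, batched into one transaction.
SECONDS = int(6.5 * 60 * 60)      # 23,400 payments
OUTPUT_VBYTES = 43                # assumed: 8 value + 1 script-len + 34 script (P2WSH)
BLOCK_LIMIT_VBYTES = 1_000_000    # 4M weight units / 4

batch_vbytes = SECONDS * OUTPUT_VBYTES  # outputs alone, ignoring inputs/overhead
print(SECONDS, batch_vbytes, batch_vbytes > BLOCK_LIMIT_VBYTES)
# 23400 outputs at ~43 vB each already exceed one block's worth of space
```

With smaller (e.g. P2WPKH) outputs the single-bin batch would fall just under the limit, but the qualitative point stands: a large enough bucket cannot fit in one transaction.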
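One way to read jeremyrubin's 15:12 idea of "some sort of priority distribution with some memory" ("if the last thing I drew was a 5, the next is more likely to be a 5") is a sticky draw: repeat the previous priority with some probability, otherwise draw fresh. The bucket values and the `STICKINESS` parameter here are hypothetical:

```python
import random

random.seed(7)
PRIORITIES = [1, 2, 3, 4, 5]  # hypothetical priority buckets
STICKINESS = 0.6              # assumed probability of repeating the last draw

def draw_priorities(n):
    """Priority draws 'with some memory': repeat the previous draw with
    probability STICKINESS, otherwise draw uniformly at random."""
    out = [random.choice(PRIORITIES)]
    for _ in range(n - 1):
        if random.random() < STICKINESS:
            out.append(out[-1])
        else:
            out.append(random.choice(PRIORITIES))
    return out

trace = draw_priorities(100_000)
repeats = sum(a == b for a, b in zip(trace, trace[1:])) / (len(trace) - 1)
print(round(repeats, 2))  # well above the 0.2 repeat rate of i.i.d. uniform draws
```

Feeding such a trace into the simulation would let a re-binning RBF strategy profit from runs of same-priority payments, which is the behavior being compared against.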
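Harding's 15:14/15:22 argument (delay a few blocks, grow the batch, pay a slightly higher 30-block rate instead of the 36-block rate, and still come out ahead per payment) can be sketched numerically. Every number below is made up for illustration; real feerates would come from something like estimatesmartfee:

```python
# Per-payment cost of a batch: fixed tx overhead amortizes over more outputs.
TX_OVERHEAD_VBYTES = 11 + 68   # assumed: ~11 vB framing + one P2WPKH input
OUTPUT_VBYTES = 31             # assumed: one P2WPKH output
def fee_per_payment(n_outputs, feerate_sat_vb):
    vbytes = TX_OVERHEAD_VBYTES + n_outputs * OUTPUT_VBYTES
    return vbytes * feerate_sat_vb / n_outputs

# Send immediately: 5 queued payments at a hypothetical 36-block rate of 2.0 sat/vB.
now = fee_per_payment(5, 2.0)
# Wait 6 blocks to accumulate 35 payments, pay a slightly higher 30-block rate.
later = fee_per_payment(35, 2.1)
print(round(now, 1), round(later, 1), later < now)
```

Even with the higher feerate, the larger batch wins per payment here, because the shared overhead is split 7x more ways; the savings shrink as batches get large and per-payment cost approaches `OUTPUT_VBYTES * feerate`.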
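Harding's 15:16-15:19 memorylessness point can be checked against the standard constant-hashrate model, where block interarrival times are exponential with a 10-minute mean: having already waited 30 minutes does not change the distribution of the remaining wait. This is a Monte Carlo sketch of that model, not of real chain data:

```python
import random

random.seed(42)
MEAN = 10.0  # minutes per block under constant hashrate (Poisson model)

def remaining_wait(already_waited):
    """Sample the remaining time to the next block given we've already
    waited `already_waited` minutes, by rejection sampling the exponential."""
    while True:
        t = random.expovariate(1 / MEAN)
        if t > already_waited:
            return t - already_waited

fresh = [random.expovariate(1 / MEAN) for _ in range(200_000)]
waited = [remaining_wait(30.0) for _ in range(200_000)]

# Memorylessness: both sample means should be ~10 minutes.
print(sum(fresh) / len(fresh), sum(waited) / len(waited))
```

This is why, in expectation, delaying a send by n minutes delays confirmation by n minutes; jeremyrubin's 15:22 caveat is that the optimal strategy over a *known* trace of blocks can still differ from this expectation.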