--- Log opened Wed Dec 29 00:00:15 2021
02:18 -!- dr-orlovsky [~dr-orlovs@31.14.40.18] has quit [Ping timeout: 268 seconds]
02:24 -!- enick_849 [~afilini-m@2001:bc8:1828:245::2] has quit [Ping timeout: 240 seconds]
02:34 -!- enick_849 [~afilini-m@2001:bc8:1828:245::2] has joined #bitcoin-rust
03:07 -!- dr-orlovsky [~dr-orlovs@31.14.40.18] has joined #bitcoin-rust
03:20 -!- dr-orlovsky [~dr-orlovs@31.14.40.18] has quit [Read error: Connection reset by peer]
03:21 -!- dr-orlovsky [~dr-orlovs@31.14.40.18] has joined #bitcoin-rust
11:45 < ariard> BlueMatt[m]: yep, let's say you have 2 strategies when we observe a feerate increase: a) we close *now* because waiting is making the congestion worse
11:45 < ariard> b) we delay closing because we hope feerates will decrease to something sane
11:46 < ariard> though it's really hard to pick without more than one feerate point and historical mempool data
11:47 < ariard> maybe b) is an issue as we might delay closing, thus offering our counterparty the opportunity to timeout the HTLC
11:48 < ariard> w.r.t. pinning issues, even with package relay, if we don't move to replace-by-feerate it's still easy for an attacker to block replacement by attaching a high-fee child
11:49 < ariard> and you can do that across multiple, unrelated channels from the victim's viewpoint, thus invalidating strategies like "be-ready-to-burn-in-fees-HTLC-at-stake"
11:50 < BlueMatt[m]> right, I think we're on the same page about the risks here, I guess I just also anticipate users complaining when force-closes keep happening :/
11:51 < BlueMatt[m]> re: pinning: my understanding is gloria's design had some replace-by-feerate stuff in it, but maybe I was thinking of that as the "next step" to do "replace-by-feerate-if-not-in-next-block"
11:54 -!- trev [~trev@user/trev] has quit [Quit: trev]
12:02 < ariard> yeah, hence my thinking to deactivate this feature by default; in the absence of consistent mempool modelling our force-closes are going to be quite arbitrary...
12:03 < ariard> yes, there is still an open discussion whether it's compatible with miner incentives in edge cases of empty mempools; a switching limit could work (replace-by-fee at the top of the mempool / replace-by-feerate under it) i think
12:03 < BlueMatt[m]> right.....I feel like deactivation by default kinda neuters the point, though? I mean I guess we can neuter it by just setting the increase ratio to like 10x, but...
12:04 < BlueMatt[m]> ariard: if the mempool is empty you wouldn't be allowed to replace-by-feerate - you would only be allowed to replace-by-feerate if the thing(s) being replaced are *not* going to be included in the next block.
12:04 < ariard> it's just a pity we have to choose between unsafe on-chain HTLC claims and user complaints about arbitrary force-closes?
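A minimal sketch of the replacement policy BlueMatt[m] describes above, under assumed types: the `Candidate`/`ConflictSet` structs and `replacement_allowed` function are illustrative, not an existing Bitcoin Core or rust-lightning API. The idea is fee-based replacement when the conflicting transactions would make it into the next block (including the empty-mempool edge case), and feerate-based replacement otherwise.

```rust
// Feerate in satoshis per virtual byte.
type SatPerVbyte = u64;

// The replacement transaction being considered.
struct Candidate {
    feerate: SatPerVbyte,
    total_fee: u64,
}

// Aggregate view of the in-mempool transactions the candidate would evict.
struct ConflictSet {
    feerate: SatPerVbyte,
    total_fee: u64,
    // Whether these transactions are expected in the next block.
    in_next_block: bool,
}

fn replacement_allowed(new: &Candidate, old: &ConflictSet) -> bool {
    if old.in_next_block {
        // Top of the mempool (or an empty mempool, where everything is
        // next-block material): stick to fee-based replacement, which keeps
        // miner income monotonically increasing.
        new.total_fee > old.total_fee && new.feerate >= old.feerate
    } else {
        // Not next-block material: a higher feerate is enough, so a pinning
        // child with a large absolute fee but low feerate no longer blocks us.
        new.feerate > old.feerate
    }
}
```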
12:04 < BlueMatt[m]> but, yea, sounds like we're on the same page
12:04 < BlueMatt[m]> ariard: right, well the "real" solution to this is anchor :p
12:05 < ariard> BlueMatt[m]: yes, not sure if I was clear, but disallowing replace-by-feerate if the mempool is empty was my point
12:05 < ariard> BlueMatt[m]: no, package relay :D
12:05 < BlueMatt[m]> right, okay, no, we're on the same page, then :)
12:05 < BlueMatt[m]> well, anchor is sufficient if you assume your counterparty isn't pinning
12:05 < BlueMatt[m]> and it's not like lightning today works if your counterparty is pinning anyway
12:06 < ariard> hmmmm you still have mempool min fee > commitment tx fee, which is not an adversarial case
12:06 < ariard> can't be addressed by anchor alone iirc
12:06 < BlueMatt[m]> true, good point. still, hopefully a much, much, much more conservative requirement gets us there, tho
12:07 < BlueMatt[m]> like, then we only need to care about the "background" fee, or maybe "normal / 2" or something.
12:08 < ariard> right, post-anchor we could even hardcode a commit tx fee from last year's worst-observed mempool min fee
12:08 < ariard> someone has to go through the mempool data :p
12:08 < ariard> back to the auto-close, okay we can pick a far higher default feerate increase, like 50%/70%
12:09 < BlueMatt[m]> sure, either way it lets us get a much more conservative limit here.
12:09 < BlueMatt[m]> my concern there is the same as the channel buffer - a 100% change from 1 sat/vbyte to 2 is normal, from 100 to 200 is crazy :p
12:09 < ariard> let's settle on 50% and add a doc note that node operators with "online"-trustworthy peers can disable this feature
12:09 < BlueMatt[m]> maybe X% + 10 or something?
12:10 < ariard> heh right, right
12:10 < ariard> let's do 50% beyond 10 sat per vbyte
12:10 < ariard> or 25 sats
12:11 < BlueMatt[m]> sure, I'm okay with that.
12:11 < BlueMatt[m]> do we want to codify this as a common setting/constant that is also used in the channel buffer stuff?
12:11 < BlueMatt[m]> it feels related? "the maximum expected increase in fee that may just be transitory or may happen often"
12:12 < BlueMatt[m]> i guess if we want it to be a setting we maybe don't need to bother
12:12 < BlueMatt[m]> just call it 10 sat/vbyte + configurable %?
12:14 < ariard> yeah, not really related; the dust HTLC buffer is more of a subjective economic-loss thing
12:14 < ariard> right, at 10 sat/vbyte we should be good if we assume mempool congestion like the last 6 months
12:14 < ariard> but i would say too sensitive if we account for last year :/
12:15 < ariard> meh, we can still adjust across releases if we start to see permanent mempool congestion
12:29 < BlueMatt[m]> Yea, I guess start conservative. Hopefully when we see substantial fee increases later we can ship anchor asap…. Then remove it
22:28 -!- trev [~trev@user/trev] has joined #bitcoin-rust
23:54 -!- trev_ [~trev@46-138-89-118.dynamic.spd-mgts.ru] has joined #bitcoin-rust
23:56 -!- trev [~trev@user/trev] has quit [Ping timeout: 268 seconds]
--- Log closed Thu Dec 30 00:00:16 2021
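One possible reading of the "X% + 10" / "50% beyond 10 sat/vbyte" rule agreed on above, as a minimal sketch: the allowed transitory increase is 50% of the previously observed feerate plus a 10 sat/vbyte cushion, and anything beyond that justifies closing now rather than waiting. The constant and function names are hypothetical, not an actual rust-lightning setting.

```rust
// Illustrative defaults matching the numbers discussed in the log.
const MAX_TRANSITORY_INCREASE_PERCENT: u64 = 50;
const TRANSITORY_CUSHION_SAT_PER_VBYTE: u64 = 10;

// Decide whether a feerate jump is large enough to justify closing now
// (strategy a) above) rather than waiting for feerates to come back down.
fn should_force_close_on_feerate(previous: u64, current: u64) -> bool {
    // Allowed transitory increase: 50% of the old feerate plus a 10 sat/vbyte
    // absolute cushion.
    let allowed = previous
        .saturating_add(previous * MAX_TRANSITORY_INCREASE_PERCENT / 100)
        .saturating_add(TRANSITORY_CUSHION_SAT_PER_VBYTE);
    current > allowed
}
```

Under this reading, a move from 1 to 2 sat/vbyte stays well inside the allowed window, while 100 to 200 sat/vbyte triggers a close, matching BlueMatt[m]'s examples.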