--- Log opened Mon Feb 15 00:00:30 2021
11:01 < t-bast> Hey everyone!
11:01 < cdecker> Hi t-bast ^^
11:03 < jkczyz> hi
11:03 < carlakc> hi all!
11:04 < cdecker> A wild Rusty appears :-)
11:05 < t-bast> hey, long time no see rusty! ;)
11:05 < rusty> Sorry I'm late, hi everyone! niftynei won't make it, since apparently Houston is blacked out due to weather and she's doing the camping-at-home thing.
11:05 < t-bast> That looks intense, good luck to her!
11:06 < cdecker> And there I was thinking that 2020 was weird, but this year Texas gets snow...
11:06 < t-bast> Does someone want to chair? carlakc, are you interested in trying it out? No obligation, I can do it otherwise.
11:07 < carlakc> maybe not for my first meeting :')
11:07 < carlakc> but thanks for offering!
11:07 < t-bast> no worries, I'll chair then. I set up an agenda here: https://github.com/lightningnetwork/lightning-rfc/issues/840
11:07 < t-bast> cdecker do you know if the bot is back? :)
11:08 < cdecker> Nope, haven't done my duty, sorry :-(
11:08 < t-bast> No worries, we'll use the commands anyway
11:08 < t-bast> #startmeeting
11:08 < lndbot> hi evbody
11:09 < t-bast> #topic Additional tx test vectors
11:09 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/539
11:09 < t-bast> Hey johanth!
11:09 < rusty> Oops, I will do this today, thanks for the reminder (and sorry!)
11:09 < t-bast> Cool, this is a quick one then ;)
11:10 < t-bast> Thanks, let's report back on github then. jkczyz, you integrated this in RL without issues, right?
11:10 < jkczyz> t-bast: yeah, already merged :)
11:10 < t-bast> Cool, then it's just pending another validation and we can merge, let's follow up on the PR
11:11 < t-bast> #topic sync complete field
11:11 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/826
11:11 < t-bast> We've merged the change for that in eclair, lnd already does it, what about other implementations?
11:11 < BlueMatt> we merged the change
11:11 < t-bast> :+1:
11:13 < rusty> Will implement! (As an aside, we've switched to using warnings, not errors, when we get upset with gossip now anyway, so incompatibilities are less of an issue!)
11:13 < rusty> But it will be in our next release, scheduled ~1 month from now.
11:14 < t-bast> Great, shall we merge the spec PR or shall we wait?
11:14 < cdecker> Should be good to merge imho, it's been discussed and implemented extensively
11:15 < rusty> Yes, please merge!
11:15 < t-bast> #action merge #826
11:15 < t-bast> #topic opt_shutdown_anysegwit
11:15 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/672
11:15 < bitconner> hi all!
11:15 < t-bast> hey bitconner
11:15 < cdecker> Heya bitconner ^^
11:15 < BlueMatt> several questions came up in review while we were reviewing a pr on our end.
11:16 < t-bast> this one is still a bit low on our priority list for eclair, don't know when we'll implement, but it's an ACK for us if RL tests interop with c-lightning
11:16 < bitconner> lnd has always accepted any segwit addr iirc
11:16 < BlueMatt> bitconner: uhmmmm, that was at some point a serious dos vulnerability.
11:17 < BlueMatt> I commented and people generally agreed, and ariard noted that you could end up in a screwy state where you negotiate the feature, open a channel and lock yourself into any segwit address, then reconnect *without* the feature and, uhhhmmmm
11:17 < bitconner> BlueMatt: since upfront shutdown script was added?
11:17 < BlueMatt> bitconner: since you used to not be able to broadcast such transactions
11:18 < t-bast> BlueMatt: you mean you have an issue if you use it in combination with upfront_shutdown_script and then want to downgrade?
11:18 < BlueMatt> t-bast: correct.
11:18 < ariard> hello
11:18 < BlueMatt> hi
11:19 < BlueMatt> its a useless edge-case, it just should be documented in the spec change that if you negotiate it on open, it still applies
11:19 < t-bast> I think that in this case, it was validated when the channel was set up and upfront_shutdown_script set, so you should not re-validate and reject it afterwards. Is that an issue?
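For readers following along, the shutdown scriptpubkey forms under discussion are the ones BOLT 2 enumerates (P2PKH, P2SH, P2WPKH, P2WSH, plus any OP_1..OP_16 witness program once `option_shutdown_anysegwit` is negotiated). Below is a minimal validation sketch in Python; the function name and the `anysegwit_negotiated` flag are illustrative rather than any implementation's actual API, and per the discussion above the flag should reflect what was negotiated when the channel was opened, not the current connection.

    def is_acceptable_shutdown_script(script: bytes, anysegwit_negotiated: bool) -> bool:
        # P2PKH: OP_DUP OP_HASH160 <20 bytes> OP_EQUALVERIFY OP_CHECKSIG
        if len(script) == 25 and script[:3] == b'\x76\xa9\x14' and script[23:] == b'\x88\xac':
            return True
        # P2SH: OP_HASH160 <20 bytes> OP_EQUAL
        if len(script) == 23 and script[:2] == b'\xa9\x14' and script[22] == 0x87:
            return True
        # P2WPKH / P2WSH: OP_0 followed by a single 20- or 32-byte push
        if len(script) in (22, 34) and script[0] == 0x00 and script[1] == len(script) - 2:
            return True
        # Any future segwit version: OP_1..OP_16 followed by a 2-40 byte push
        if anysegwit_negotiated and 4 <= len(script) <= 42 \
                and 0x51 <= script[0] <= 0x60 and script[1] == len(script) - 2:
            return True
        return False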
11:19 < t-bast> gotcha
11:19 < BlueMatt> hmm, the way i read the spec you would be supposed to
11:19 < BlueMatt> but i think the correct fix is to just note it and say "dont reject later"
11:20 < rusty> Hmm, I don't think it actually matters, TBH.
11:21 < rusty> I'm quite happy to leave it undefined. And if you manage to do this, meh, unilaterally close and go thwack yourself for being an idiot?
11:21 < BlueMatt> I dunno, its a +/- 2 word diff, and it clarifies it :)
11:21 < rusty> No, it's implementation complexity, too, in a subtle way.
11:21 < ariard> to me it was that kind of edge case, if implementations don't agree you might have a costly unilateral close which could have been avoided
11:22 < cdecker> How can one downgrade the upfront shutdown script if you can only specify it at channel creation?
11:22 < BlueMatt> cdecker: it would be downgrading the "anysegwit" part
11:22 < BlueMatt> rusty: yea, ok, I see your point. I dont feel incredibly strongly, I guess, but should probably clarify the other note that was on the pr just for clarity
11:23 < ariard> so when you check again at closing, the committed upfront script isn't accepted anymore
11:23 < rusty> ariard: in which case it's fine to be undefined. Maybe they'll take it, maybe they won't.
11:23 < cdecker> I see, thanks BlueMatt, I didn't consider the full scope 👍
11:23 < rusty> Spec implies they won't, but if they want to, cool.
11:24 < ariard> BlueMatt: maybe we should commit the feature set in our persistence module to minimize the risk of users loading a node with an old config and thus triggering that behavior
11:25 < rusty> Note that we check these two things in completely separate places, so it's actual real work for us to allow this
11:25 < BlueMatt> ariard: we already have version numbers for that to enforce it.
11:25 < rusty> ariard: seriously, they have to (1) negotiate the new feature, (2) use a future segwit address, and (3) downgrade. But by the time the future sw address is used, everyone will have upgraded, so this is silly.
11:26 < BlueMatt> rusty: is it no problem to allow later opt-in to anysegwit if you didnt have an upfront script set?
11:26 < BlueMatt> for y'all
11:26 < ariard> rusty: yeah let's move on
11:26 < rusty> BlueMatt: absolutely.
11:26 < t-bast> BlueMatt: I believe yes
11:26 < BlueMatt> cool, alright, sounds like merge? are there two implementations?
11:27 < t-bast> if there's RL and c-lightning and you tested interop, we can merge
11:27 * BlueMatt would prefer the above was clarified in the text, but it doesnt matter much
11:27 < BlueMatt> we havent tested interop, no. i thought maybe y'all had.
11:27 < BlueMatt> alright, then blocked on interop, lets move on.
11:27 < rusty> BlueMatt: I will implement, test with lnd; it's pretty trivial (on testnet, ofc).
11:27 < t-bast> #action do interop test and merge if everything goes well
11:27 < rusty> I'll also add a note that technically turning the feature off when you've used it for upfront-shutdown lets you break yourself.
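One possible shape of ariard's persistence suggestion above, sketched in Python. Every name here is hypothetical and this is not how any of the implementations actually stores channel state; the point is only that the feature set is frozen at channel open and consulted later, rather than re-derived from the current connection or config.

    class ChannelRecord:
        def __init__(self, channel_id: bytes, negotiated_features: set):
            self.channel_id = channel_id
            # Frozen at channel open; never re-derived from the current connection.
            self.negotiated_features = frozenset(negotiated_features)

    def shutdown_uses_anysegwit(channel: ChannelRecord) -> bool:
        # "option_shutdown_anysegwit" is a stand-in label for the negotiated flag.
        return "option_shutdown_anysegwit" in channel.negotiated_features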
11:27 < t-bast> #topic 0-fee anchor output htlc txs
11:28 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/824
11:28 < t-bast> Has anyone completed the 0-fee htlc feature (apart from lnd)?
11:28 < t-bast> (otherwise we'll keep it as pending interop)
11:29 < ariard> no changes on our side since last time
11:30 < cdecker> Nope, we still fire-and-forget atm
11:30 < t-bast> ok, let's keep it as pending interop then
11:30 < t-bast> #topic Compression algorithms in `init`
11:30 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/825
11:31 < t-bast> As agreed during the previous meeting, I slightly updated the PR to make the uncompressed case explicit
11:31 < rusty> Will implement for interop, but not enable (I'm looking at rewriting our tx handling to enable such things, but writing a bitcoin wallet is still Hard and I'm procrastinating!)
11:31 < t-bast> rusty: I feel for you, I'm in that place right now for eclair :D
11:32 < rusty> t-bast: we should compare notes! I'm wondering how ambitious I should be...
11:32 < BlueMatt> 10x easier if you ignore unconfirmeds :p
11:32 < t-bast> rusty: sure, let's sync offline on the scope we're each targeting, I think it will be a good learning experience
11:33 < rusty> Anyway, to catch up. I really think zlib is trivial enough to JFDI. But I know others want this...
11:33 < t-bast> BlueMatt: clearly a lot fewer headaches, but a lot more utxos to reserve in case hell breaks loose!
11:34 < rusty> Technically, this is a bitmap of *de*compression the node supports.
11:34 < t-bast> true
11:34 < t-bast> do you often support decompression without compression though? :D
11:35 < rusty> And why is there a feature bit for this? Surely being an odd TLV is enough?
11:36 < t-bast> There was a reason, let me find it again
11:37 < rusty> Might have been left over from previous proposals?
11:37 < cdecker> Yeah, lost that battle twice already re zlib :-)
11:38 < t-bast> The rationale is that a feature bit lets you find nodes that support this through node_announcement, to ensure you don't do the `init` dance just to see if they'll advertise the compression algorithms
11:38 < t-bast> But I'm not sure it's really worth a feature bit, we could drop it. BlueMatt and ariard, what are your thoughts on that?
11:39 < BlueMatt> cdecker: zlib has had a mixed security history, like most compression libs. leaving it out is not unreasonable in a money-holding context....
11:39 < BlueMatt> t-bast: I don't feel strongly, honestly.
11:39 < BlueMatt> I kinda figure why not, cause it doesnt cost anything, really.
11:39 < BlueMatt> especially if its already there in init.
11:39 < rusty> BlueMatt: meh, it's pretty rock solid now, but understood.
11:39 < cdecker> BlueMatt: I meant re-adding zlib as an explicit opt-in after it has been implicitly enabled all the time, not whether zlib should be there at all
11:40 < BlueMatt> rusty: probably is, though I wouldnt say the same for its memory dos resistance in particularly-constrained environments.
11:40 < cdecker> Oh and uncompressed imho is no encoding, and MUST be supported by everyone.
11:40 < t-bast> I don't feel strongly either, maybe just to avoid "wasting" a feature bit slot
11:40 < BlueMatt> I meannnn...
11:40 < BlueMatt> we have unlimited, lets not be stingy
11:40 < t-bast> rusty: why do you want to remove the feature bit?
11:41 < rusty> t-bast: bit bloat! They actually start expanding gossip msgs after a while.
11:41 < rusty> (You know we're going to have a Rollup Bit one day which implies old bits so we can reuse them. I plan to retire on that day, FWIW)
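For context on what #825 negotiates: gossip queries carry an `encoded_short_ids` blob whose first byte names the encoding (0 for uncompressed, 1 for zlib in BOLT 7). A rough decoding sketch follows, with an output-size cap as a nod to BlueMatt's memory-DoS point above; the 1 MB limit is an arbitrary illustration, not a spec value, and the function is not taken from any implementation.

    import zlib

    MAX_DECOMPRESSED_BYTES = 1_000_000  # illustrative cap, not from the spec

    def decode_short_channel_ids(encoded: bytes) -> list:
        encoding, payload = encoded[0], encoded[1:]
        if encoding == 0x01:  # zlib-compressed array of short_channel_ids
            d = zlib.decompressobj()
            payload = d.decompress(payload, MAX_DECOMPRESSED_BYTES)
            if d.unconsumed_tail:
                raise ValueError("decompressed data exceeds size limit")
        elif encoding != 0x00:  # an encoding we never advertised support for
            raise ValueError("unsupported encoding %d" % encoding)
        if len(payload) % 8 != 0:
            raise ValueError("short_channel_id array not a multiple of 8 bytes")
        # Each short_channel_id is an 8-byte big-endian integer.
        return [int.from_bytes(payload[i:i + 8], "big") for i in range(0, len(payload), 8)]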
11:41 < bitconner> i'd be in favor of keeping the feature bit, making things explicit, etc
11:41 < BlueMatt> yea, its fair, but if we want to use them to figure out who to connect to, this is somewhat important.
11:42 < rusty> Well, I think it'll roll out so fast it's not important. It's not like it's hard to implement...
11:42 < BlueMatt> which, iiuc, is kinda the point of the node features.
11:42 < BlueMatt> thats true.
11:42 < BlueMatt> anyway, I dont feel strongly, I'm happy if rusty does :p
11:42 < rusty> I'd be convinced if I felt this was a significant feature which future things may reasonably opt out of. But I don't want to block it, OTOH.
11:43 < rusty> (BTW, I was right! This bit creates a new byte in the msg! Boo! Boo!) :)
11:44 < t-bast> haha
11:44 < bitconner> if feature bit bloat really becomes an issue we can recycle them once they're more or less required by all implementations
11:44 < rusty> bitconner: indeed, see rollup bit above...
11:44 < bitconner> but even before that we could look into more optimal encodings that aren't O(highest bit set)
11:44 < BlueMatt> bitconner: like...zlib
11:45 < t-bast> I'm leaning towards keeping that feature bit, any strong opinion against that? If not, we'll move that PR to the pending interop phase
11:45 < rusty> BlueMatt: lol
11:45 < bitconner> 👀
11:45 < rusty> OK, I concede. Ack.
11:45 < t-bast> thanks :)
11:45 < t-bast> #action implement and test interop
11:45 < t-bast> #topic Warning messages
11:45 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/834
11:46 < rusty> (Somehow I ended up with 4 things on my "implement today" list.... my fault for being absent so much recently)
11:46 < rusty> OK, so I have actually implemented this, since it really is kinda harmless. Was good to audit our code and try to see "what is actually fatal, and what could theoretically be recovered".
11:46 < t-bast> rusty: if one of these 4 is "implement a bitcoin wallet from scratch for anchor outputs", good luck, we won't see you in a while xD
11:47 < bitconner> utACK
11:47 < BlueMatt> concept ack. I'd need to go reread the full spec to make sure the usage of "close connection" vs "fail channel" is consistent
11:47 < rusty> e.g. if they update_fee out of bounds, we now send a warning and close the connection. They can't really do anything (since they're committed to that state), but in theory our own side might come into alignment with fees later and allow it.
11:48 < bitconner> lol, link copying went to PR 83, not 834. ignore
11:48 < carlakc> I think I'm missing a bit of real life context for this one, why do we need a new message rather than using a zero channel ID for a warning type message?
11:48 < rusty> Main issue is that older nodes won't log the warning msg, so if there's a problem they'll just see disconnects.
11:48 < carlakc> difficult to figure out the difference between what the spec says, and what everybody is doing
11:48 < rusty> carlakc: that technically means "CLOSE ALL CHANNELS".
11:48 < BlueMatt> rusty: I'm ok with that.
11:49 < cdecker> carlakc: zero channel_id basically means "kill all channels".
11:49 < t-bast> the big red button
11:49 < carlakc> ah :| did anybody actually implement that?
11:49 < BlueMatt> yes.
11:49 < rusty> carlakc: context for this is that the original text was very much "if something goes wrong, kill channel". That's cute for alpha testing, but increasingly ignored IRL.
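To make the all-zeros channel_id convention concrete, here is a tiny classification sketch. The handler shape is made up, and the note about treating received errors leniently reflects what rusty says c-lightning currently does, not a spec requirement.

    ALL_CHANNELS = bytes(32)  # all-zero channel_id: the message applies to every channel

    def classify_peer_message(msg_type: str, channel_id: bytes) -> tuple:
        """Return (scope, action) for an incoming error or warning message."""
        scope = "all_channels" if channel_id == ALL_CHANNELS else "single_channel"
        if msg_type == "warning":
            # Behaviour under discussion: log it, maybe disconnect, never force-close.
            return scope, "log_and_maybe_disconnect"
        # "error": a strict reading of the existing spec says fail the channel(s);
        # c-lightning reportedly downgrades received errors to warnings instead.
        return scope, "fail_channels_per_spec"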
11:49 < cdecker> Yep, c-lightning got a lot of flak for it....
11:49 < rusty> carlakc: yeah, and *worse*, we would send such msgs if we didn't like your gossip!
11:49 < t-bast> yep eclair implements this as well
11:50 < cdecker> Since some implementations were using "soft-errors" that we'd interpret as emergencies
11:50 < rusty> So, c-lightning currently pretends those are warnings when it receives them.
11:50 < rusty> Doesn't help much if the other side actually kills our channels, ofc.
11:50 < carlakc> rusty: right, but it hasn't always done so, so it would still be an issue for older nodes. got it.
11:51 < carlakc> definitely going to be helpful to get a message before disconnect
11:51 < carlakc> debugging an EOF on disconnect is tricky
11:52 < t-bast> I think these warning messages make sense, definitely worth having IMO, so concept ACK until we implement
11:52 < BlueMatt> anyway, sounds like general agreement on strong concept ack, but I assume everyone else also wants to read all the cases?
11:52 < BlueMatt> and implement
11:52 < t-bast> BlueMatt: ACK
11:52 < rusty> Great, I did a sweep of the spec but may have missed some non-conformant wording.
11:52 < bitconner> sgtm
11:53 < t-bast> #action everyone sweeps the spec for inconsistencies and starts implementing
11:53 < BlueMatt> cool
11:53 < t-bast> #topic Remove obsolete requirement on addresses
11:53 < cdecker> In case others are wondering which former errors are now warnings in c-lightning: https://github.com/ElementsProject/lightning/pull/4364
11:53 < rusty> (Note that this kind of implies you should have some timeout-based shutdown if channels are soft-failing for long enough. c-lightning does not have that currently)
11:53 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/842
11:54 < t-bast> thanks cdecker, it's going to be helpful
11:55 < rusty> OK, no, this is broken BlueMatt!
11:55 < t-bast> You're going to mention re-encoding and signing?
11:55 < rusty> We can absolutely allow duplicates, but we don't have explicit lengths, so you have to stop parsing when you see a number we don't understand.
11:56 < rusty> If you put your new i2p addresses first, I won't parse the IPv4 address at the end!
11:56 < BlueMatt> I mean the current gossip has violations of this live
11:56 < bitconner> OP_ADDR_SUCCESS
11:56 < BlueMatt> so its not being enforced and you have to support that already today, rusty.
11:56 < BlueMatt> assuming you want to fetch the full routing graph.
11:56 < rusty> BlueMatt: you literally cannot implement that.
11:56 < BlueMatt> if you dont support it, that is correct.
11:57 < rusty> BlueMatt: if we add a new address format, you cannot parse it.
11:57 < BlueMatt> if you do support all the encodings used, then you can read them fine
11:57 < t-bast> that's a good point, I hadn't thought of that
11:57 < rusty> which implementation is out of order? It needs to be fixed.
11:57 < BlueMatt> we currently enforce this on reading gossip messages and get very angry very fast.
11:57 < BlueMatt> I didnt check into where they came from.
11:57 < rusty> I know some do duplicates.
11:57 < BlueMatt> right, we dont enforce that. only the order limits
11:58 < rusty> Well, c-lightning explicitly sorts them. We even disallow duplicates, but that can be relaxed.
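rusty's parsing argument, concretely: the `addresses` field in `node_announcement` is a plain concatenation of typed entries with no per-entry length, so a reader has to stop at the first type byte it does not recognise, which is why known (low-numbered) types need to come first. A sketch using the descriptor lengths from BOLT 7; error handling and the address classes themselves are omitted, and the function is illustrative only.

    # type -> payload length in bytes (address + 2-byte port)
    KNOWN_ADDR_LENGTHS = {1: 6,    # IPv4
                          2: 18,   # IPv6
                          3: 12,   # Tor v2
                          4: 37}   # Tor v3

    def parse_addresses(addresses: bytes) -> list:
        parsed, offset = [], 0
        while offset < len(addresses):
            addr_type = addresses[offset]
            length = KNOWN_ADDR_LENGTHS.get(addr_type)
            if length is None:
                # Unknown descriptor: its length is unknown, so everything after
                # it is unreadable. An IPv4 entry placed *after* it would be lost.
                break
            parsed.append((addr_type, addresses[offset + 1:offset + 1 + length]))
            offset += 1 + length
        return parsed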
11:58 < BlueMatt> I get the sense the only way to ever have a limit like this is if people enforce it on the gossip-receive end
11:58 < rusty> BlueMatt: well, when we add a new format ppl will fix their code pretty fast. We just haven't done that yet.
11:59 < BlueMatt> I dunno, good chance people just get yelled at to upgrade
11:59 < t-bast> Hum, I'm not sure eclair sorts them correctly, but I can fix that
11:59 < cdecker> Consensus by shouting match, as it were
11:59 < BlueMatt> this is why we enforce it at the message end, but....we still need to be able to connect to people
12:00 < BlueMatt> so, anyway, we're gonna stop enforcing this. sadly its somewhat difficult for us to serialize when sending but not when receiving/sending from our own graph.
12:00 < rusty> BlueMatt: absolutely, and thanks for bringing it to our attention! If we'd had TLV ofc, this wouldn't be an issue.
12:00 < bitconner> i'm not sure lnd sorts either, i can check
12:00 < BlueMatt> so we're gonna be out of spec on this.
12:01 < rusty> (You already have that problem with multiples, since their order is undefined)
12:01 < rusty> (We could force a lex sort now, if you wanted BlueMatt?)
12:01 < BlueMatt> rusty: hmm? I dont think we can force any sort unless its also done on decode.
12:01 < rusty> Actually, that won't help. You can't unmarshal unknown fields, so you really need to preserve this as a blob.
12:01 < BlueMatt> I mean *can*, but its a good chunk more code.
12:02 < BlueMatt> on a related note - since we now do not enforce single address anymore, should we have an analogous "if you have > 10 addresses, dont relay" to the "if there is unknown unparsable data, dont relay" requirement
12:02 < t-bast> #action implementations should check their code and ensure addresses are ordered (duplicates are ok though)
12:03 < BlueMatt> I dont think that makes sense
12:03 < BlueMatt> it already exists, in practice, live
12:03 < BlueMatt> you cant just start enforcing on the sending end and consider it fixed
12:03 < BlueMatt> we need to allow any order for currently-defined addresses, and enforce order for not-yet-defined ones.
12:03 < BlueMatt> well, i mean i guess we already do enforce such an order, by accident
12:04 < t-bast> But it's a good first step to fix the sending side
12:04 < t-bast> That means in X months/years it can be considered fixed on the whole network
12:04 < rusty> BlueMatt: I took t-bast to mean "... on the sending side"?
12:04 < t-bast> yes that's what I meant, sorry if that was unclear
12:04 < BlueMatt> right, note that I dont think we're realistically going to enforce it on the sending side if we cannot enforce it on the receiving end
12:04 < BlueMatt> and we cannot enforce it on the receiving end.
12:05 < t-bast> true, but a first step is to fix the sending side at least
12:05 < BlueMatt> sure, maybe doesnt matter much, but I think we should just give up for current address formats
12:05 < rusty> BlueMatt: we should probably have always had something about "MAY not relay if size is excessive"?
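The relay-side limit BlueMatt and rusty are weighing could look roughly like this; both the address count and the byte cap below are placeholder numbers taken from the discussion, not agreed spec values, and the function names are invented for illustration.

    MAX_RELAYED_ADDRESSES = 10   # BlueMatt's suggested count limit (placeholder)
    MAX_ADDRESSES_BYTES = 500    # a byte-limit alternative in rusty's direction (placeholder)

    def should_relay_node_announcement(addresses_field: bytes, parsed_addresses: list) -> bool:
        # Relay is a local policy decision: an oversized announcement can still be
        # accepted for our own graph while not being forwarded to peers.
        if len(addresses_field) > MAX_ADDRESSES_BYTES:
            return False
        if len(parsed_addresses) > MAX_RELAYED_ADDRESSES:
            return False
        return True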
12:05 < BlueMatt> we've already lost
12:05 < rusty> BlueMatt: heh, next CVE everyone upgrades and we win again :)
12:05 < BlueMatt> rusty: well originally it was "only one address per type", which solves that :p
12:05 < BlueMatt> rusty: lol, k, #action someone find a big cross-implementation CVE
12:05 < rusty> BlueMatt: not well, we could have LOTS of types.
12:06 < BlueMatt> sure, but for now we have 4, so it originally limited it pretty tightly. now there is no limit for the size of relayed gossip messages
12:06 < BlueMatt> of course most folks didnt *implement* the limit, but the spec had it
12:08 < t-bast> We're already past 1 hour, shall we do a last PR, a free/open discussion, or stop there?
12:08 < rusty> TBH I'm not sure what a "reasonable limit" for gossip msgs is, so it's hard to spec. Hmm... OK, anyway, let's move on?
12:08 < BlueMatt> discuss on the pr.
12:08 < BlueMatt> and I'll open an issue limiting it to 10 for now
12:08 < BlueMatt> cause, whatever, we can change it later.
12:09 < rusty> BlueMatt: but you can't count them (unknown addr formats). Suggest a byte limit instead?
12:09 < BlueMatt> rusty: you are already not supposed to relay if there are any unknown, iirc
12:09 < BlueMatt> or, at least, we dont.
12:10 < rusty> BlueMatt: I think that was removed from the spec a while ago. We used to not relay unknown msgs, but it splits the network and is a Bad Idea.
12:10 < BlueMatt> right
12:11 < rusty> commit 6502e30e8f1052371dc3c6791328d218f4c1cde3
12:11 < rusty> Author: Rusty Russell
12:11 < rusty> Date: Tue Sep 17 14:55:10 2019 +0930
12:11 < rusty> BOLT 7: always propagate announcements with unknown features.
12:11 < cdecker> That's a great way to create a network partition BlueMatt
12:11 < BlueMatt> not just features
12:11 < BlueMatt> extra bytes
12:11 < rusty> Yeah, I think that was previously implied, but no longer is.
12:11 < BlueMatt> right, anyway, lets move on
12:11 < ariard> cdecker: isn't not relaying different from closing connections?
12:13 < t-bast> Let's do a last PR quickly
12:13 < t-bast> #topic funding expiry
12:13 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/839
12:14 < ariard> t-bast: can you sum up the benefits quickly?
12:15 < t-bast> Routing nodes have an incentive to use low fees when opening channels to ensure their activity is economically viable.
12:15 < t-bast> However, when a funding transaction takes too long to confirm, the fundee may have forgotten the channel. In that case the funder is forced to broadcast the first commit tx to get his funds back and then open a new channel, which is costly.
12:15 < t-bast> We can avoid this issue by having an explicit commitment to a block before which the funding tx will be confirmed. The fundee will keep the channel around for that duration, and the funder needs to ensure the funding tx confirms before that deadline (using CPFP on a change output, for example). This gives clearer guarantees for both the funder and fundee.
12:15 < t-bast> This is just making explicit a deadline that is currently implicit and different in each implementation
12:17 < t-bast> I'd also like to add an additional thing while we're at it: I think it would also make sense for the funder to commit to the feerate he'll use in a TLV field in the `open_channel` message. This way if you're DoS-ed by `open_channel` messages, you can immediately reject those that use a low feerate without allocating any resource.
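A sketch of the fundee-side bookkeeping t-bast describes. The 2016-block figure is the fixed value the discussion converges on below; the class and function names are purely illustrative and not from the PR.

    FUNDING_TIMEOUT_BLOCKS = 2016  # roughly two weeks of blocks

    class PendingChannel:
        def __init__(self, temporary_channel_id: bytes, open_height: int):
            self.temporary_channel_id = temporary_channel_id
            self.forget_after_height = open_height + FUNDING_TIMEOUT_BLOCKS

    def prune_unconfirmed_channels(pending: dict, confirmed_ids: set, current_height: int) -> None:
        """Drop pending channels whose funding tx has not confirmed by the deadline."""
        for chan_id in list(pending):
            if chan_id in confirmed_ids:
                continue  # funding tx confirmed in time; keep the channel
            if current_height >= pending[chan_id].forget_after_height:
                del pending[chan_id]  # the fundee may now safely forget this channel

The funder's side of the same deal is to CPFP a change output if the funding tx is at risk of missing the deadline, as t-bast notes above.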
12:17 < rusty> t-bast: my general preference here is to always ask "can we just pick a universal number instead"? I don't know what to put here...
12:17 < t-bast> Even though you'll need to wait and see the funding tx to verify that the feerate is indeed what the funder told you
12:17 < rusty> t-bast: hmm, yeah, not sure that actually works if it's a real DoS (vs. you just being popular again!)
12:17 < t-bast> rusty: that's a possibility, we could all decide that 2 weeks is a healthy limit
12:18 < BlueMatt> t-bast: adding a pre-commitment to tx fee kinda sucks
12:18 < rusty> t-bast: 2016 blocks *is* a magic number.
12:18 < BlueMatt> you may not know that in advance, and definitely not exactly.
12:18 < cdecker> Would be great if the fundee could just remember one mutual close signature indefinitely, so we can do a mutual close instead
12:18 < ariard> this is not a commitment to tx fee
12:18 < BlueMatt> piping it through various places isn't fun for us.
12:18 < ariard> just a clear cut-off time to clean up your state machine
12:18 < BlueMatt> ariard: t-bast indicated he'd like to also add that.
12:19 < t-bast> BlueMatt: yes that's something else I'd like to add, but maybe I need to think about it further and articulate it correctly
12:19 < t-bast> Let's focus at first on what's in the PR, that would be a good start, and I'll come back with a proposal for additional stuff later if that makes sense
12:19 < ariard> t-bast: if you wanna DoS someone just use a high feerate
12:20 < t-bast> So, as rusty mentions, we could just say the fundee can forget channels after 2016 blocks
12:20 < ariard> like if the DoS is about sending a lot of open messages, the content itself doesn't matter, it can be junk
12:20 < t-bast> And funders would ensure they CPFP accordingly if they don't want to have to force-close. Or we can do something cooler like cdecker says and provide an initial mutual close (but I'd need to dig more into it to see how feasible that is)
12:21 < t-bast> ariard: it's not really about the messages, but rather the state you need to keep
12:21 < rusty> OK, so it's my youngest son's birthday today and I FORGOT TO WRAP HIS PRESENT. I, um, need to go.... thanks all!
12:21 < t-bast> ariard: if you can just drop the message and not have to keep any state, it's an improvement over what we have today
12:21 < cdecker> The overhead for the canned mutual sig is 64 bytes per failed channel, and it'd speed up funds recovery a lot and save fees as well
12:21 < BlueMatt> lol, have fun rusty
12:21 < t-bast> haha see you rusty!
12:21 * BlueMatt is ok with just fixing it to a static 2016 blocks
12:21 < cdecker> Bye Rusty, good luck and say hi from us :-)
12:22 < ariard> t-bast: if the intention is really to be malicious, use content which will make the victim allocate state anyway
12:22 < bitconner> adios rusty!!
12:23 < t-bast> Are there concerns if we simply use a static 2016-block value instead of this TLV?
12:24 < t-bast> We can do cdecker's proposal for a mutual close sig as a follow-up
12:24 < cdecker> Agreed, let's get a fixed timeout going first, then the follow-ups can make that negotiable, and stash a mutual sig aside
12:24 < ariard> t-bast: hmmmm how do you invalidate the mutual close sig once exchanged?
12:24 < bitconner> have a mutual close signature up front?
12:24 < ariard> but 2016 blocks as a static value is good with me
12:25 < bitconner> that would invalidate the channel?
12:25 < t-bast> don't know, cdecker just mentioned it but I haven't dug into it yet
12:25 < bitconner> i'm fine with the fixed 2016-block proposal
12:25 < t-bast> great, I'll update the PR to a much simpler version then
12:25 < bitconner> if there is a mutual close signature i can make htlc updates and then broadcast the mutual close and get my money back
12:25 < t-bast> #action t-bast to update the PR to use a fixed 2016-block timeout
12:26 < bitconner> same reason why we can't resume operation of a channel after sending 1 coop close proposal to the remote peer
12:26 < cdecker> We can just set a fixed timeout of another 2016 to clean up the signature as well
12:26 < ariard> the mutual close sig needs to commit to a balance state
12:26 < t-bast> IIUC the mutual close sig is only provided by the fundee once it decides to forget about the channel because it didn't confirm in time
12:26 < cdecker> Right, it can just be the funding amount - any reasonable fee (handwave)
12:26 < ariard> you can't invalidate the signature without renewing the utxo
12:27 < bitconner> but the sig remains valid if the channel does confirm
12:27 < t-bast> It's being nice to the funder to avoid having him waste too much money by force-closing and re-opening
12:27 < bitconner> unless i'm misunderstanding
12:27 < ariard> so it's more a goodbye signature
12:27 < t-bast> exactly
12:27 < BlueMatt> bitconner: presumably you'd only provide it if the channel is going to be ignored
12:27 < BlueMatt> "sorry, deleting all other state, heres a sig to get your funds back if you care, but I dont remember it as a channel now"
12:28 < cdecker> If the fundee wants to forget, it marks the channel as closing, does not accept any future updates, and just returns the canned sig to reestablish, as an extension to "don't know that channel"
12:28 < ariard> the sig can be SIGHASH_ANYONECANPAY to allow fee flexibility
12:28 < t-bast> It would be much nicer than now, where the fundee just forgets about the channel and the funder gets a generic "unknown channel" error and has to force-close / re-open
12:28 < ariard> or even SINGLE, likely the fundee doesn't have a balance output yet
12:28 < cdecker> Right, good idea ariard
12:28 < bitconner> gotcha, as long as it's clear that the channel MUST NOT be used if it does end up confirming, that's okay
12:29 < cdecker> Yep, that should definitely be noted in the proposal ^^
12:30 < t-bast> The explicit 2016 timeout will already be a very nice improvement, let's start with that cause it's going to be trivial to implement, and then we can do another PR for this courtesy signature
12:30 < t-bast> I gotta go, shall we end the meeting?
12:30 < ariard> cdecker: in fact the best fit is likely the unsafe SIGHASH_NONE
12:30 < ariard> t-bast: yeah thanks for chairing
12:30 < cdecker> Yeah, that'd allow the fundee to bump it as well, removing the need to have a good fee estimation
12:31 < cdecker> Thanks for chairing t-bast
12:31 < cdecker> I better be off as well ^^
12:31 < cdecker> Thanks everyone for a productive meeting :+1:
12:31 < t-bast> #endmeeting
12:31 < t-bast> Thank you everyone!
--- Log closed Tue Feb 16 00:00:30 2021