--- Log opened Mon Jul 19 00:00:10 2021
00:11 -!- AaronvanW [~AaronvanW@178.239.173.218] has joined #lightning-dev
00:16 -!- AaronvanW [~AaronvanW@178.239.173.218] has quit [Ping timeout: 268 seconds]
00:29 -!- AaronvanW [~AaronvanW@178.239.173.217] has joined #lightning-dev
00:34 -!- AaronvanW [~AaronvanW@178.239.173.217] has quit [Ping timeout: 255 seconds]
00:45 -!- AaronvanW [~AaronvanW@178.239.173.218] has joined #lightning-dev
00:47 -!- jarthur [~jarthur@2603-8080-1540-002d-7828-89c1-5bc4-4b6a.res6.spectrum.com] has quit [Quit: jarthur]
00:50 -!- AaronvanW [~AaronvanW@178.239.173.218] has quit [Ping timeout: 268 seconds]
01:00 -!- riclas [~riclas@77.7.37.188.rev.vodafone.pt] has joined #lightning-dev
01:06 -!- jarthur [~jarthur@cpe-70-114-198-37.austin.res.rr.com] has joined #lightning-dev
01:52 -!- AaronvanW [~AaronvanW@178.239.173.218] has joined #lightning-dev
01:55 -!- jarthur [~jarthur@cpe-70-114-198-37.austin.res.rr.com] has quit [Quit: jarthur]
01:56 -!- AaronvanW [~AaronvanW@178.239.173.218] has quit [Ping timeout: 252 seconds]
02:10 -!- AaronvanW [~AaronvanW@178.239.173.217] has joined #lightning-dev
02:15 -!- AaronvanW [~AaronvanW@178.239.173.217] has quit [Ping timeout: 255 seconds]
02:28 -!- AaronvanW [~AaronvanW@178.239.173.217] has joined #lightning-dev
02:32 -!- AaronvanW [~AaronvanW@178.239.173.217] has quit [Ping timeout: 255 seconds]
02:45 -!- AaronvanW [~AaronvanW@178.239.167.157] has joined #lightning-dev
02:50 -!- AaronvanW [~AaronvanW@178.239.167.157] has quit [Ping timeout: 252 seconds]
03:02 -!- AaronvanW [~AaronvanW@178.239.173.216] has joined #lightning-dev
03:07 -!- AaronvanW [~AaronvanW@178.239.173.216] has quit [Ping timeout: 256 seconds]
03:21 -!- AaronvanW [~AaronvanW@178.239.167.157] has joined #lightning-dev
03:25 -!- AaronvanW [~AaronvanW@178.239.167.157] has quit [Ping timeout: 252 seconds]
03:28 -!- lnd-bot [~lnd-bot@165.227.7.29] has joined #lightning-dev
03:28 < lnd-bot> [lightning-rfc] OrfeasLitos opened pull request #889: Replace "than" with "as" (trampoline-routing-no-gossip...typo) https://github.com/lightningnetwork/lightning-rfc/pull/889
03:28 -!- lnd-bot [~lnd-bot@165.227.7.29] has left #lightning-dev []
03:39 -!- AaronvanW [~AaronvanW@178.239.167.158] has joined #lightning-dev
03:43 -!- AaronvanW [~AaronvanW@178.239.167.158] has quit [Ping timeout: 252 seconds]
03:44 -!- dr-orlovsky [~dr-orlovs@31.14.40.19] has joined #lightning-dev
03:50 -!- dr-orlovsky [~dr-orlovs@31.14.40.19] has quit [Quit: ZNC 1.8.0 - https://znc.in]
03:57 -!- AaronvanW [~AaronvanW@178.239.173.217] has joined #lightning-dev
04:08 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has quit [Quit: = ""]
04:51 -!- robertspigler [~robertspi@2001:470:69fc:105::2d53] has quit [Quit: Bridge terminating on SIGTERM]
04:51 -!- ksedgwic [~ksedgwicm@2001:470:69fc:105::ce1] has quit [Quit: Bridge terminating on SIGTERM]
04:51 -!- merkle_noob[m] [~merklenoo@2001:470:69fc:105::bad0] has quit [Quit: Bridge terminating on SIGTERM]
04:51 -!- mrjumper[m] [~mr-jumper@2001:470:69fc:105::7f1] has quit [Quit: Bridge terminating on SIGTERM]
04:51 -!- prusnak[m] [~stickmatr@2001:470:69fc:105::98c] has quit [Quit: Bridge terminating on SIGTERM]
04:51 -!- Enki[m] [~enkimatri@2001:470:69fc:105::382c] has quit [Quit: Bridge terminating on SIGTERM]
04:51 -!- cdecker[m] [~cdeckerma@2001:470:69fc:105::2e8e] has quit [Quit: Bridge terminating on SIGTERM]
04:51 -!- vincenzopalazzo [~vincenzop@2001:470:69fc:105::a67] has quit [Quit: Bridge terminating on SIGTERM]
04:51 -!- devrandom [~devrandom@2001:470:69fc:105::d4d] has quit [Quit: Bridge terminating on SIGTERM]
04:54 -!- devrandom [~devrandom@2001:470:69fc:105::d4d] has joined #lightning-dev
05:03 -!- vincenzopalazzo [~vincenzop@2001:470:69fc:105::a67] has joined #lightning-dev
05:03 -!- merkle_noob[m] [~merklenoo@2001:470:69fc:105::bad0] has joined #lightning-dev
05:03 -!- mrjumper[m] [~mr-jumper@2001:470:69fc:105::7f1] has joined #lightning-dev
05:03 -!- prusnak[m] [~stickmatr@2001:470:69fc:105::98c] has joined #lightning-dev
05:03 -!- robertspigler [~robertspi@2001:470:69fc:105::2d53] has joined #lightning-dev
05:03 -!- ksedgwic [~ksedgwicm@2001:470:69fc:105::ce1] has joined #lightning-dev
05:03 -!- cdecker[m] [~cdeckerma@2001:470:69fc:105::2e8e] has joined #lightning-dev
05:03 -!- Enki[m] [~enkimatri@2001:470:69fc:105::382c] has joined #lightning-dev
05:03 -!- AaronvanW [~AaronvanW@178.239.173.217] has quit [Ping timeout: 252 seconds]
05:16 -!- AaronvanW [~AaronvanW@178.239.173.218] has joined #lightning-dev
05:21 -!- AaronvanW [~AaronvanW@178.239.173.218] has quit [Ping timeout: 268 seconds]
05:26 -!- lnd-bot [~lnd-bot@165.227.7.29] has joined #lightning-dev
05:26 < lnd-bot> [lightning-rfc] rustyrussell pushed 3 commits to guilt/pr-887: https://github.com/lightningnetwork/lightning-rfc/compare/f20e4071a8e5^...bb05f2d0e493
05:26 < lnd-bot> lightning-rfc/guilt/pr-887 f20e407 Rusty Russell: BOLT 11: remove now-obsolete missing-s comment on test vectors.
05:26 < lnd-bot> lightning-rfc/guilt/pr-887 d3b729e Rusty Russell: Minor tweak to reorder fields in one test vector.
05:26 < lnd-bot> lightning-rfc/guilt/pr-887 bb05f2d Rusty Russell: BOLT 11: update the signature and checksum analyses now the strings have c...
05:26 -!- lnd-bot [~lnd-bot@165.227.7.29] has left #lightning-dev []
05:32 -!- lnd-bot [~lnd-bot@165.227.7.29] has joined #lightning-dev
05:32 < lnd-bot> [lightning-rfc] t-bast pushed 1 commit to bolt11-test-vectors-payment-secret: https://github.com/lightningnetwork/lightning-rfc/compare/aee6e1b7a3bb...fe4a664b30b7
05:32 < lnd-bot> lightning-rfc/bolt11-test-vectors-payment-secret fe4a664 t-bast: Modify checksum and signature details
05:32 -!- lnd-bot [~lnd-bot@165.227.7.29] has left #lightning-dev []
05:32 -!- AaronvanW [~AaronvanW@178.239.167.157] has joined #lightning-dev
05:32 -!- lnd-bot [~lnd-bot@165.227.7.29] has joined #lightning-dev
05:32 < lnd-bot> [lightning-rfc] t-bast force pushed 1 commit to bolt11-test-vectors-payment-secret: https://github.com/lightningnetwork/lightning-rfc/compare/fe4a664b30b7...02b8026c7657
05:32 < lnd-bot> lightning-rfc/bolt11-test-vectors-payment-secret 02b8026 t-bast: Add payment secret to Bolt 11 test vectors
05:32 -!- lnd-bot [~lnd-bot@165.227.7.29] has left #lightning-dev []
05:36 -!- AaronvanW [~AaronvanW@178.239.167.157] has quit [Ping timeout: 246 seconds]
05:40 -!- lnd-bot [~lnd-bot@165.227.7.29] has joined #lightning-dev
05:40 < lnd-bot> [lightning-rfc] t-bast pushed 1 commit to bolt11-test-vectors-payment-secret: https://github.com/lightningnetwork/lightning-rfc/compare/02b8026c7657...424898a9f291
05:40 < lnd-bot> lightning-rfc/bolt11-test-vectors-payment-secret 424898a t-bast: Reorder fields
05:40 -!- lnd-bot [~lnd-bot@165.227.7.29] has left #lightning-dev []
05:41 -!- lnd-bot [~lnd-bot@165.227.7.29] has joined #lightning-dev
05:41 < lnd-bot> [lightning-rfc] t-bast force pushed 1 commit to bolt11-test-vectors-payment-secret: https://github.com/lightningnetwork/lightning-rfc/compare/424898a9f291...42bd71d49c68
05:41 < lnd-bot> lightning-rfc/bolt11-test-vectors-payment-secret 42bd71d t-bast: Add payment secret to Bolt 11 test vectors
05:41 -!- lnd-bot [~lnd-bot@165.227.7.29] has left #lightning-dev []
05:49 -!- AaronvanW [~AaronvanW@178.239.173.217] has joined #lightning-dev
05:53 -!- AaronvanW [~AaronvanW@178.239.173.217] has quit [Ping timeout: 268 seconds]
06:07 -!- AaronvanW [~AaronvanW@178.239.173.216] has joined #lightning-dev
06:11 -!- AaronvanW [~AaronvanW@178.239.173.216] has quit [Ping timeout: 255 seconds]
06:24 -!- AaronvanW [~AaronvanW@178.239.173.218] has joined #lightning-dev
06:29 -!- AaronvanW [~AaronvanW@178.239.173.218] has quit [Ping timeout: 255 seconds]
06:40 -!- AaronvanW [~AaronvanW@178.239.167.157] has joined #lightning-dev
06:45 -!- AaronvanW [~AaronvanW@178.239.167.157] has quit [Ping timeout: 258 seconds]
06:56 -!- AaronvanW [~AaronvanW@178.239.173.218] has joined #lightning-dev
07:02 -!- AaronvanW [~AaronvanW@178.239.173.218] has quit [Ping timeout: 268 seconds]
07:14 -!- AaronvanW [~AaronvanW@178.239.173.216] has joined #lightning-dev
07:19 -!- AaronvanW [~AaronvanW@178.239.173.216] has quit [Ping timeout: 246 seconds]
07:23 -!- AaronvanW [~AaronvanW@178.239.173.216] has joined #lightning-dev
08:27 -!- AaronvanW [~AaronvanW@178.239.173.216] has quit [Ping timeout: 252 seconds]
08:29 -!- AaronvanW [~AaronvanW@209.235.170.242] has joined #lightning-dev
08:48 -!- AaronvanW [~AaronvanW@209.235.170.242] has quit [Remote host closed the connection]
08:56 -!- jarthur [~jarthur@2603-8080-1540-002d-acff-de01-3c77-7564.res6.spectrum.com] has joined #lightning-dev
08:58 -!- AaronvanW [~AaronvanW@50-207-231-44-static.hfc.comcastbusiness.net] has joined #lightning-dev
10:16 -!- gioyik [~gioyik@gateway/tor-sasl/gioyik] has joined #lightning-dev
10:34 -!- AaronvanW [~AaronvanW@50-207-231-44-static.hfc.comcastbusiness.net] has quit [Remote host closed the connection]
10:55 -!- ryanthegentry [~ryanthege@2605:a601:ab05:f800:856b:8eef:ea98:b63c] has joined #lightning-dev
11:05 -!- AaronvanW [~AaronvanW@50-207-231-44-static.hfc.comcastbusiness.net] has joined #lightning-dev
11:10 -!- AaronvanW [~AaronvanW@50-207-231-44-static.hfc.comcastbusiness.net] has quit [Ping timeout: 268 seconds]
11:18 -!- bitromortac_ [~admin@gateway/tor-sasl/bitromortac] has joined #lightning-dev
11:19 -!- bitromortac [~admin@gateway/tor-sasl/bitromortac] has quit [Ping timeout: 244 seconds]
11:24 -!- yonson [~yonson@2600:8801:d900:7bb::d7c] has quit [Remote host closed the connection]
11:24 -!- yonson [~yonson@2600:8801:d900:7bb::d7c] has joined #lightning-dev
11:40 -!- AaronvanW [~AaronvanW@50-207-231-44-static.hfc.comcastbusiness.net] has joined #lightning-dev
11:45 -!- AaronvanW [~AaronvanW@50-207-231-44-static.hfc.comcastbusiness.net] has quit [Ping timeout: 255 seconds]
12:02 -!- gioyik [~gioyik@gateway/tor-sasl/gioyik] has quit [Remote host closed the connection]
12:03 -!- gioyik [~gioyik@gateway/tor-sasl/gioyik] has joined #lightning-dev
12:33 -!- bitromortac [~admin@gateway/tor-sasl/bitromortac] has joined #lightning-dev
12:35 -!- bitromortac_ [~admin@gateway/tor-sasl/bitromortac] has quit [Ping timeout: 244 seconds]
12:37 -!- AaronvanW [~AaronvanW@50-207-231-44-static.hfc.comcastbusiness.net] has joined #lightning-dev
12:37 -!- niftynei_ [~niftynei@4.53.92.114] has joined #lightning-dev
12:44 -!- AaronvanW [~AaronvanW@50-207-231-44-static.hfc.comcastbusiness.net] has quit [Remote host closed the connection]
12:44 -!- AaronvanW [~AaronvanW@50-207-231-44-static.hfc.comcastbusiness.net] has joined #lightning-dev
12:53 -!- t-bast [~t-bast@user/t-bast] has joined #lightning-dev
12:55 < niftynei_> looks like we've got an agenda for the meeting posted https://github.com/lightningnetwork/lightning-rfc/issues/888
12:55 < niftynei_> thank you t-bast !
12:56 < t-bast> hey niftynei!
12:56 < cdecker[m]> Good evening everybody ^^
12:56 < vincenzopalazzo> Hello guys
12:56 < t-bast> Good evening all
12:57 -!- rusty [~rusty@103.93.169.121] has joined #lightning-dev
12:57 -!- lndev-bot [~docker-me@243.86.254.84.ftth.as8758.net] has joined #lightning-dev
12:57 < cdecker> Almost forgot the meeting bot
12:57 < t-bast> cdecker, master of the meeting bots
12:58 < niftynei_> should we start drawing lots for chair?
12:58 < t-bast> good idea, are you a candidate niftynei?
12:58 < niftynei_> i nominate BlueMatt
12:59 -!- renepickhardt [~renepickh@2a02:a18:9169:9101:e5ac:16e5:6da:2140] has joined #lightning-dev
12:59 < t-bast> I second that
12:59 < niftynei_> (he's standing next to me irl)
12:59 < ariard> hi all :)
13:00 < niftynei_> hello!
13:00 < niftynei_> do we have any chair volunteers?
13:01 < t-bast> You're doing real-life meetups??? So lucky...
13:01 < ryanthegentry> I thought we were supposed to sell all our chairs
13:01 < BlueMatt> hi all
13:01 < ariard> hi our meeting chair
13:01 < BlueMatt> sorry, I can chair, I guess, dunno what that even means tbh, but...
13:01 < BlueMatt> #startmeeting
13:01 < lndev-bot> BlueMatt: Error: A meeting name is required, e.g., '#startmeeting Marketing Committee'
13:02 < BlueMatt> #startmeeting Lightning Meeting Name
13:02 < lndev-bot> Meeting started Mon Jul 19 20:02:02 2021 UTC and is due to finish in 60 minutes. The chair is BlueMatt. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:02 < lndev-bot> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:02 < lndev-bot> The meeting name has been set to 'lightning_meeting_name'
13:02 < BlueMatt> ok, first up https://github.com/lightningnetwork/lightning-rfc/pull/847
13:02 < BlueMatt> #topic https://github.com/lightningnetwork/lightning-rfc/pull/847
13:02 < BlueMatt> mostly, probably, the discussion ending here: https://github.com/lightningnetwork/lightning-rfc/pull/847#discussion_r671873899
13:03 < BlueMatt> where I argued, and likely rusty will disagree, that we should simply drop the requirement for the suggested fee being below the channel feerate_per_kw for *both* anchor and non-anchor
13:03 < t-bast> BlueMatt IIUC you've started implementing this on the RL side?
13:04 < BlueMatt> yes
13:04 < t-bast> neat
13:04 < rusty> BlueMatt: yeah, I could buy that. Let me check if we enforce that on recv though....
13:04 < BlueMatt> the proposal is technically an incompatibility with existing nodes, but at worst it causes force-closure when we intended to close the channel anyway
13:05 < BlueMatt> as an alternative, we could suggest dropping the receive-side check, and then say something generic like "on jan 1 2022 you can stop caring when you send it"
13:05 < rusty> BlueMatt: it's not an incompatibility if we make it so for the new-style quick close though.
13:05 < t-bast> and if you breach that by sending a fee higher than the commit feerate, it's probably okay-ish to force-close (barring the csv delays)
13:05 < BlueMatt> you dont know if its new-style or not if you're the channel funder and speak first
13:05 < t-bast> (in terms of fees paid)
13:06 < rusty> BlueMatt: true.
13:07 < rusty> I know we didn't want to burn a feature bit here, but it would have been easier in transition. Oh well.
13:07 < BlueMatt> yea, we could....yuck tho
13:08 < t-bast> agreed, yuck, I don't think we need a feature bit for that
13:08 < t-bast> I
13:08 < BlueMatt> at least personally I'm basically fine with some accidental force-closes during shutdown while nodes upgrade
13:08 < BlueMatt> like, you were already gonna shut down....eh
13:09 < niftynei_> but at what cost?
13:09 < t-bast> in that case the fees would be the same (force-closer would even be cheaper)
13:09 < t-bast> it's just an additional csv delay
13:09 < BlueMatt> right, you'd *save* on fees, but pay the csv delay.
13:09 < BlueMatt> of course given you wanted to use a higher fee, you're probably pretty sad about the csv delay
13:10 < BlueMatt> cause you probably wanted to pay a higher fee *because* you didnt want to wait
13:10 < t-bast> but you're not the one waiting for the delay though
13:10 < BlueMatt> but, still, that's a risk the sender takes, the spec doesnt need to care about that
13:10 < t-bast> because it's your peer that will force-close on you
13:10 < t-bast> so no delay on your side
13:10 < BlueMatt> sure, but you still dont get to pay the higher fee that you wanted to pay, presumably to get into the Next Block
13:10 < t-bast> unless they just send an error and wait
13:10 < t-bast> true
13:10 < BlueMatt> (which nodes do do...)
13:11 < BlueMatt> but, in any case, votes in favor/against just saying the incompatibility is ok?
13:11 < t-bast> I won't be very helpful, I'm fine with both xD
13:11 < rusty> Hmm, seems like our enforcement is lax here, but the logic is a bit gnarly and I'll have to actually test.
13:12 < rusty> BlueMatt: I'm happy to remove the requirement; I expect it won't happen very often in practice.
13:12 < BlueMatt> are you staunchly against spec change even if you enforce it, rusty?
13:12 < BlueMatt> right, that's my other thinking
13:12 < BlueMatt> not many nodes are going to by default send a higher fee than the channel fee, cause otherwise they would have increased the channel fee
13:13 < t-bast> yes, in the case of eclair, we would have done an update_fee beforehand, so we shouldn't be in this case except for anchor outputs where we keep the commit feerate low
13:13 < rusty> OK, let's simply remove that requirement?
13:13 < BlueMatt> alright, sounds like maybe rough agreement. lets do it.
13:13 < BlueMatt> next topic...
13:13 < BlueMatt> #https://github.com/lightningnetwork/lightning-rfc/pull/880
13:13 < BlueMatt> #topic https://github.com/lightningnetwork/lightning-rfc/pull/880
13:14 < BlueMatt> rusty: has the floor
13:14 < rusty> #action t-bast to remove cap-at-unilateral from 847 for all channel types
13:14 < rusty> (Needed for minutes)
13:14 < t-bast> ack
13:15 < rusty> OK, this is implemented in a PR.
13:15 < roasbeef> this is dropping the requirement that co-op close fee is below the commit fee rate?
13:15 < BlueMatt> roasbeef: yes.
13:15 < rusty> roasbeef: yeah, for any type.
13:16 < BlueMatt> so whats the status of channel types agreement/disagreement? rusty?
13:16 < rusty> Node MUST play audio of Hey Big Spender when close fee proposal is above commitment fee rate.
13:16 < rusty> BlueMatt: so it's now a simple "take it or leave it" proposal by opener.
13:16 < BlueMatt> echo "Hey Big Spender" > /dev/audio
13:17 < jkczyz> > /dev/null
13:17 < roasbeef> rusty: for all commits, or just anchors?
13:17 < t-bast> roasbeef: for all commits
13:17 < roasbeef> but would only apply for this new feature bit?
13:17 < ariard> rusty: what's the purpose of sending back the `channel_type` in accept_channel, if you don't like the channel_type just stay silent?
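The "take it or leave it" `channel_type` flow rusty describes can be sketched roughly as follows. This is an illustrative sketch only — the message dicts and helper names (`accept_open`, `opener_check`, `SUPPORTED_CHANNEL_TYPES`) are hypothetical, not any implementation's actual API: the opener proposes an explicit type in `open_channel`, and the acceptor either echoes it back in `accept_channel` or fails the open.

```python
# Hypothetical sketch of the take-it-or-leave-it channel_type negotiation
# discussed in the meeting. All names here are illustrative.

SUPPORTED_CHANNEL_TYPES = {
    "static_remotekey",
    "anchors_zero_fee_htlc_tx",
}

def accept_open(open_channel_msg):
    """Acceptor side: echo the proposed type, or reject the open."""
    proposed = open_channel_msg["channel_type"]
    if proposed not in SUPPORTED_CHANNEL_TYPES:
        # No counter-offer: the opener may retry with a different
        # open_channel after seeing the error.
        return {"error": f"unsupported channel_type: {proposed}"}
    # Echoing the type back proves the acceptor understood it, so a
    # non-upgraded peer can't silently ack a type it doesn't know.
    return {"accept_channel": {"channel_type": proposed}}

def opener_check(accept_channel_msg, proposed):
    """Opener side: the echoed type must match exactly."""
    return accept_channel_msg.get("channel_type") == proposed
```

The echo is the point ariard and rusty settle on below: without it, the opener cannot distinguish an acceptor that agreed to the type from one that never understood the field.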
13:17 < t-bast> I have the latest version of channel_type implemented in a PR in eclair as well, I can test cross-compat if you want with the c-lightning version rusty
13:17 < roasbeef> ariard: just to echo I guess?
13:18 < BlueMatt> rusty: cool. whats the status of error codes to indicate "no, try another channel type"?
13:18 < t-bast> ariard: it's important
13:18 < rusty> ariard: yeah, I thought about that, but it's also nice that it's caught immediately, not later when they try to add an HTLC
13:18 < BlueMatt> roasbeef: no new feature bit.
13:18 < t-bast> it shows you selected it, if you don't mention it and it's different from the implicit one, the opener cannot know what you expect
13:18 < rusty> BlueMatt: that discussion is ongoing, let me check ml
13:18 < ariard> rusty: well they won't try to add an HTLC if the receiver never sends back sigs for the initial commitment
13:19 < ariard> echo sounds good, just a small bandwidth waste, and could be caught with error messages if we had ones
13:19 < t-bast> oh I misunderstood, you mean in the case where you disagree and don't want that channel?
13:19 < roasbeef> BlueMatt: that breaks compat...
13:19 < BlueMatt> roasbeef: yes, that was the above discussion....
13:19 < BlueMatt> roasbeef: it was discussed in the scrollback :)
13:19 < ariard> t-bast: it's take it or leave it, so the opener has to figure out by itself what you expect?
13:20 < rusty> ariard: ah, yes, you need echo to know they understood.
13:20 < BlueMatt> t-bast/rusty: wait, then how do your current implementations decide when to suggest the next channel type in a new open_channel message?
13:20 -!- renepick [~renepick@2a02:a18:9169:9101:a469:128:7bef:b9e3] has joined #lightning-dev
13:20 < BlueMatt> or is it just "on any error with the same channel id"?
13:20 < rusty> BlueMatt: mine just sets the default (except there's an lnprototest which tries everything possible from the feature bits)
13:20 < rusty> BlueMatt: today, it's "meh, some error happened, me try again!"
13:21 < BlueMatt> ah, ok.
13:21 < BlueMatt> yes, makes sense.
13:21 < t-bast> the flow in eclair is that the node operator explicitly chooses what channel_type to try, and either the flow completes or they receive an `error`, and are free to analyze it and decide whether to try a different channel type or not
13:21 < ariard> rusty: ah okay, in case of talking with non-upgraded `channel_type` nodes, silently acking a channel type they don't understand at all
13:21 < rusty> ariard: yeah.
13:22 < BlueMatt> alright, I mean sounds cool. glad its getting cross-node impl. will implement it when we get there at least from our end, but may not be immediately.
13:22 < BlueMatt> any further discussion that should happen live in it?
13:22 < rusty> But a side comment: this explicit use of channel types is a kind of latent concept which made our code nicer when we actually called it out (kudos, roasbeef); the spec could use a similar sweep to refer to channel types rather than "if option_static_remotekey applies to the channel..." lang.
13:23 < rusty> I think if t-bast and I interop, we're good to apply? Should we approve that now, or wait for another meeting?
13:23 < ariard> rusty: you mean should we pin the channel types board in bolt9 or somewhere else and reuse it across the spec?
13:23 < BlueMatt> I think that's fine, at least in concept. I'll read it over but yea go for it.
13:23 < t-bast> rusty: ACK on my side, I can test interop this week
13:24 < rusty> t-bast: great, thanks!
13:24 < t-bast> #action t-bast test channel_type interop with c-lightning
13:24 -!- hex17or [~hex17or@gateway/tor-sasl/hex17or] has quit [Ping timeout: 244 seconds]
13:24 < t-bast> roasbeef, are you fine with that version of the proposal?
13:25 < rusty> ariard: more that we can now refer to "channel type" everywhere and know what we mean ("if channel type includes option_static_remotekey" for example). But I'll have to see what it looks like when I actually sweep the spec.
13:25 < BlueMatt> rusty: nice!
13:25 < rusty> #action rusty to start spec cleanup to refer to channel type throughout.
13:25 < BlueMatt> ok, no news from roasbeef is good news :) next topic.
13:25 < BlueMatt> #topic https://github.com/lightningnetwork/lightning-rfc/pull/834
13:26 < BlueMatt> warning messages
13:26 < BlueMatt> another rusty special. anything you want to get feedback on live, rusty?
13:26 -!- hex17or [~hex17or@gateway/tor-sasl/hex17or] has joined #lightning-dev
13:26 < BlueMatt> looks like the pr itself needs rebase, but t-bast ack'd
13:26 < rusty> BlueMatt: it Just Works.... though it'd be nice for debugging if other impls printed it out rather than unknown msg.
13:27 < BlueMatt> yea, we can do that pretty easy
13:27 < rusty> Weakening the error semantics is just recognizing reality, it's long overdue.
13:27 < t-bast> yes I've found warnings very useful, and I've got a concrete use-case related to #847 that I think is worth sharing
13:27 < BlueMatt> yep, cool!
13:27 < t-bast> in some cases our only current choice is "disconnect", but it's actually putting us in a deadlock in some situations
13:28 < rusty> #action rusty to rebase 834
13:28 < t-bast> in the closing fee_range negotiation, if your peer sends a fee_range you completely disagree with, disconnecting isn't helpful because at reconnection they must re-send the closing_signed
13:28 < t-bast> and you will still disagree
13:28 -!- renepickhardt [~renepickh@2a02:a18:9169:9101:e5ac:16e5:6da:2140] has quit [Ping timeout: 246 seconds]
13:28 < t-bast> sending a warning and then staying silent is much better
13:28 < BlueMatt> wait, shouldnt you force-close if the fee rate suggested for close is insane?
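The deadlock t-bast describes can be sketched as follows — a rough illustration only, with hypothetical names (`handle_closing_signed`, the range tuples), not any implementation's code: on an unacceptable `fee_range` in `closing_signed`, send a `warning` and go idle rather than disconnecting, since reconnection just replays the same negotiation.

```python
# Illustrative sketch of the closing-fee deadlock discussed above and the
# "warn and stay silent" escape hatch. Names are hypothetical.

def handle_closing_signed(proposed_range, acceptable_range):
    """React to a peer's closing_signed fee_range proposal.

    proposed_range / acceptable_range: (min_sat, max_sat) tuples.
    """
    lo = max(proposed_range[0], acceptable_range[0])
    hi = min(proposed_range[1], acceptable_range[1])
    if lo <= hi:
        # Ranges overlap: pick a fee inside the overlap and sign.
        return ("sign", lo)
    # Disconnecting would loop: on reconnection the peer must re-send
    # closing_signed, and we would disagree again. A warning lets the
    # node operator step in (retry with a new range, or force-close).
    return ("warn", f"fee_range {proposed_range} outside {acceptable_range}")
```

This is the "nudge" role of warnings t-bast raises later: the channel stays open while a human decides, instead of the protocol forcing a disconnect-reconnect loop.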
13:28 < t-bast> the node operator can pick that up and send a different closing_signed with a fee_range you'd like, or force-close
13:29 < t-bast> you could, but there's no real reason to if you don't need the funds immediately
13:29 < t-bast> you can send a warning first
13:29 < BlueMatt> sure there is - the channel is useless, and most node operators dont sit there and read the logs carefully
13:29 < rusty> (Note there's a proposal on the ml to add some concrete semantics to errors, which could apply to warnings too, but Carla hasn't responded)
13:29 < t-bast> when they see that the node they sent shutdown to is stuck, they will likely look at their logs though
13:30 < t-bast> and then they can decide to either force-close or try a different fee_range
13:30 < BlueMatt> sure, but the non-shutdown-initiator is unlikely to read the logs, so that node should just force-close.
13:30 < rusty> BlueMatt, t-bast: this is true in general, that there should be some "we haven't made progress in X days, let's force close".
13:31 < BlueMatt> anyway, this is a node policy issue, which doesn't seem super relevant, we all agree we want warnings anyway :)
13:31 < BlueMatt> next topic!
13:31 < BlueMatt> #topic https://github.com/lightningnetwork/lightning-rfc/issues/745
13:31 < t-bast> ACK!
13:31 < rusty> I think we all agreed on this one, want me to draft a clarification?
13:32 < BlueMatt> sounds good. I admit I didnt read it, but ariard thinks we do the thing everyone else does, so I'm happy ;)
13:32 -!- lnd-bot [~lnd-bot@165.227.7.29] has joined #lightning-dev
13:32 < lnd-bot> [lightning-rfc] t-bast pushed 2 commits to relax-closing-fee-requirement: https://github.com/lightningnetwork/lightning-rfc/compare/f02916485c49...c99002013e6a
13:32 < lnd-bot> lightning-rfc/relax-closing-fee-requirement 8683525 t-bast: Use warning instead of disconnecting
13:32 < lnd-bot> lightning-rfc/relax-closing-fee-requirement c990020 t-bast: Remove fee below commit fee requirement
13:32 -!- lnd-bot [~lnd-bot@165.227.7.29] has left #lightning-dev []
13:32 < BlueMatt> hmm, can we not use lightning-rfc branches for PRs? that seems a bit weird imo
13:33 < rusty> BlueMatt: yeah, it's weird (I usually use my personal copy) but I don't really mind.
13:35 < t-bast> I agree with the way rusty reframed the requirement at the end of the discussion: it's clear and concise
13:35 < BlueMatt> yes, agreed
13:36 < BlueMatt> note that implementing it in not-this-way would actually be really quite annoying for us.
13:36 < rusty> Yeah, just added another comment.
13:37 < BlueMatt> correct, I agree with you rusty (and that's the way our code works, if I'm reading it correctly)
13:37 < BlueMatt> we would reject that add
13:38 < BlueMatt> any further discussion?
13:38 < ariard> yeah what i'm trying to understand is what lnd is doing on this behavior, crypt-iq's comment is a bit unclear
13:38 < t-bast> I'd need to write that as a test to be 100% sure whether eclair would reject it or not... it's a simple test to write though, I'll try that
13:38 < BlueMatt> It seems we're all in agreement, maybe t-bast wants to comment on the latest issue from the reporter
13:39 < BlueMatt> ariard: which comment was unclear?
13:39 < BlueMatt> I think I understood it
13:39 < t-bast> I'll need to write that test, I think eclair is quite conservative here and wouldn't "risk" sending that last add, but I'll need to verify
13:39 < BlueMatt> for the sake of time, lets move on and leave further discussion on the issue. in the mean time, rusty graciously offered to write up a spec clarification :)
13:39 < BlueMatt> #action rusty to clarify spec to resolve #745
13:40 < BlueMatt> #topic https://github.com/lightningnetwork/lightning-rfc/issues/873
13:40 < BlueMatt> rusty had proposed some wording in the issue
13:40 < BlueMatt> there was some concern over DoS in the previous meeting on may 24
13:42 < BlueMatt> roasbeef: had suggested there that he'd have crypt-iq write up a patch to test for cpu dos
13:42 < BlueMatt> did that happen?
13:42 < t-bast> Probably worth exchanging some kind of `max_accepted_dust_htlc` to limit that?
13:42 < ariard> i think we should introduce a new limit for dust htlc count and not let them be unbounded
13:42 < BlueMatt> t-bast: you already have a max total htlc in-flight limit
13:42 < BlueMatt> ariard: y tho
13:43 < t-bast> but that's what we want to override...?
13:43 -!- crypt-iq [~crypt-iq@71.69.230.255] has joined #lightning-dev
13:43 < t-bast> we don't want these dust htlcs to be included in that limit, right?
13:43 < BlueMatt> t-bast: no, not the in-flight limit, but the htlc-count limit
13:43 < niftynei_> the comments from last time i suggested a "max_fee_from_dust" limit
13:43 < t-bast> right, but it's only an msat value, so it's probably a huge amount of dust htlcs, isn't it?
13:44 < BlueMatt> t-bast: right, the objection last time was that this could turn into a DoS issue
13:44 < crypt-iq> You don't need a max_fee_from_dust parameter, I found out
13:44 < niftynei_> so putting a sats limit on the amount of extra fee you'd allow for "htlc escrow that's too small for its own htlc output"
13:44 < rusty> Yeah, I really don't want you to add 1M dust htlcs, though at 1msat that's only 1000sats in fees.
13:44 < BlueMatt> but you dont send signatures for them, so, really, I dont see why
13:44 < cdecker[m]> Still needs storage and memory though
13:44 < crypt-iq> With the network today, you can limit your exposure to dust htlcs by refusing to forward them if your inbound channel is dusted or if your outbound channel will be dusted
13:45 < BlueMatt> I mean, its a computer, if you dont send a signature, the cpu cost of like 100M little HTLCs should be pretty akin to 1 normal htlc
13:45 < BlueMatt> crypt-iq: define "dusted"
13:45 < crypt-iq> lnd will probably only handle ~10k htlc's
13:45 < t-bast> BlueMatt: good point, it's true that without the sig it's quite inexpensive, it's worth testing
13:45 < crypt-iq> As a safe limit in this case
13:45 < BlueMatt> crypt-iq: why?
13:45 < crypt-iq> We have a uint16 for forwarding htlc's and we don't want an overflow
13:45 < BlueMatt> crypt-iq: which cost are you optimizing for limiting?
13:46 < BlueMatt> crypt-iq: so swap it for a u32?
13:46 < BlueMatt> or a uint64, cause thats the same speed on most x86_64 processors :)
13:46 < crypt-iq> Yeah but it's a database upgrade, which we'd want to avoid. Could be something revisited though
13:46 < niftynei_> is there a network related reason for the 10k limit?
13:46 < crypt-iq> So when receiving an incoming HTLC, if either you or your counterparty's commitment has too much dust on it (defined by your dust threshold) you can just fail back
13:47 < crypt-iq> There's no network related reason, no
13:47 < BlueMatt> crypt-iq: isn't that just....the htlc in-flight total value limit?
13:47 < crypt-iq> Well this dusted amount is stealable, htlc-in-flight applies to non-dust as well
13:47 < roasbeef> re compat of the co-op close fee thing: that'll end up borking a lot of channels in the wild, if you send one above the range, lnd won't like it and you'll have to force close the channels
13:48 < BlueMatt> crypt-iq: you mean it burns to fee? thats been part of the lightning security model forever :)
13:48 < crypt-iq> Bumping antoine's ML post: https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002714.html
13:48 < roasbeef> why not tie it to a feature bit, if the negotiation logic is already gonna change?
13:49 < BlueMatt> roasbeef: yes, we're aware it was a concern, did you read the scrollback from the beginning of the meeting for the rationale proposed?
13:49 < niftynei_> crypt-iq: i think by 'stealable' you mean "gets paid to the miner"?
13:49 < crypt-iq> Right, it's burned to fee, but you can limit this amount per-channel without having to negotiate any parameters. So it allows you to not have the max_dust_htlc_fee option in the option_dusty_htlcs proposal
13:49 < roasbeef> rusty: +1 re the chan types making certain sections of code nicer, this is what I was getting at w/ the "mega switch statement for feature bits" thing
13:49 < niftynei_> but isn't that the "max fees from dust" limit?
13:49 < BlueMatt> crypt-iq: no you cant, someone can add a ton and then create a new commitment transaction and then broadcast that. just because you fail the htlc back in the next commitment doesnt change this.
13:49 < roasbeef> t-bast: version w/ the echo of the type? yeah we have a PR we need to clean up, but we can all prob start doing interop on testnet pretty soon
13:49 < t-bast> roasbeef: I really think a feature bit would be wasted here, in practice no-one sends higher fees (we would all send an update_fee before that) so it shouldn't happen, and won't be a concern for anchor outputs channels where we have to remove that requirement anyway
13:49 < ariard> niftynei: if you're a miner with any chance to mine a block during the HTLC timelock it's quite a high-success attack
13:50 < crypt-iq> BlueMatt: you're not vulnerable to this because it's subtracted from the incoming's balance, you're vulnerable when you forward. I can expand on the cases in the issue itself
13:50 < t-bast> roasbeef: cool for the channel_type!
13:50 < BlueMatt> crypt-iq: ah, I see your point, ok.
13:50 < crypt-iq> niftynei: it is the same as max fees from dust but it doesn't have to be negotiated and can be deployed selectively by impls right now
13:51 < rusty> I think our roasbeef is laggy; maybe we should reboot him? :)
13:51 < t-bast> crypt-iq: good point as well, it's worth highlighting that in the issue
13:51 < roasbeef> re the error msg stuff, don't see why to introduce a new message vs just re-using our existing one, since that'll bridge compat; it also makes things simpler in that all nodes have one error pathway, and older nodes just ignore what they don't understand
13:51 < t-bast> crypt-iq: (the fact that as long as you don't forward, it doesn't open attack vectors)
13:52 < crypt-iq> t-bast: right, there are 4 cases to tackle and probably better to lay them out in the issue and not on irc
13:52 < roasbeef> I also think enumerating the initially defined set of error codes/pathways as Carla did in her proposal is important, otherwise it's just another blob that hasn't really been that useful in practice outside of trying to diagnose force close scenarios
13:52 < cdecker[m]> Sounds good to me
13:52 < niftynei_> crypt-iq: right, i just wanted to point out that you're talking about the thing that i mentioned last time, that they're the same thing in terms of how to handle the issue lol
13:53 < roasbeef> could be used for the "i'm shutting down now" message we've talked about in the past; I've been trying to debug some p2p connectivity issues w/ pure tor nodes in the wild lately, but at times context is lacking beyond "EOF"
13:53 < crypt-iq> niftynei: gotcha
13:53 < BlueMatt> roasbeef: if you're gonna respond to something a half hour ago maybe just put it on the issue?
13:53 < t-bast> roasbeef: I find that warnings are more useful than errors, they act as a "nudge" that doesn't get your channels closed
13:53 < BlueMatt> we can't rehash the whole meeting a half hour late.
13:53 < BlueMatt> that would be a waste of everyone's time
13:54 < rusty> BTW, I'd like to discuss Turbo channels; it's not on the agenda since there's no PR yet, but that can be fixed...
13:54 -!- ghost43_ [~ghost43@gateway/tor-sasl/ghost43] has joined #lightning-dev
13:55 < BlueMatt> whats the concrete next-steps for dusty htlcs uncounted?
13:55 < t-bast> Sure, we can discuss turbo even without a PR
13:55 -!- ghost43 [~ghost43@gateway/tor-sasl/ghost43] has quit [Ping timeout: 244 seconds]
13:55 < t-bast> What about a proposal PR for dusty htlcs uncounted? crypt-iq, if you think you've experimented and gathered enough feedback?
13:56 -!- jarthur_ [~jarthur@2603-8080-1540-002d-c49d-5e9c-d3a7-cf1f.res6.spectrum.com] has joined #lightning-dev
13:56 < rusty> BlueMatt: next steps: on issue, debate if we actually need a limit, and what it looks like if so. crypt-iq to lead?
13:56 < rusty> (Or, if not, why not)
13:56 < BlueMatt> it sounds like maybe it can go ahead as-is, but with callouts of the ability to burn your own balance to fees and that nodes should limit their relaying of dust htlcs to limit their own total exposure
13:56 < BlueMatt> rusty: i believe I agree with crypt-iq that no in-spec limit is required/relevant
13:56 < BlueMatt> but instead at-forwarding-time limits apply
13:56 < niftynei_> ack
13:57 < crypt-iq> what limits are we talking about here? sum of dust limits?
13:57 < crypt-iq> I think a max_dust_htlcs that you can offer outstanding is necessary, no?
13:57 < t-bast> it would be up to each node's policy I guess
13:57 < BlueMatt> sum of outbound dust htlcs
13:57 < BlueMatt> but its not spec
13:57 < BlueMatt> crypt-iq: why?
13:57 < crypt-iq> where max_dust_htlcs is the literal number
13:57 < cdecker[m]> Both number and sum I'd guess
13:57 < BlueMatt> crypt-iq: didnt you just argue we *dont* need an in-spec limit?
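The at-forwarding-time policy being discussed — cap both the total value and the count of outbound dust HTLCs as node-local policy, with nothing new in the spec beyond rationale — could look roughly like this. A minimal sketch; all names and limit values here are hypothetical, not from any implementation:

```python
# Hypothetical node-local dust exposure policy (illustrative values):
MAX_DUST_EXPOSURE_MSAT = 5_000_000  # max value we are willing to see burned to fees
MAX_DUST_HTLC_COUNT = 32_768        # count cap, e.g. to stay under a u16 counter

def can_forward_htlc(amount_msat, dust_limit_msat, outbound_dust):
    """outbound_dust: amounts (msat) of dust HTLCs we have already offered."""
    if amount_msat >= dust_limit_msat:
        return True  # non-dust: covered by the usual in-flight limits
    if len(outbound_dust) + 1 > MAX_DUST_HTLC_COUNT:
        return False  # too many outstanding dust HTLCs
    if sum(outbound_dust) + amount_msat > MAX_DUST_EXPOSURE_MSAT:
        return False  # would exceed what we accept burning to fees
    return True
```

If this returns False the node simply fails the incoming HTLC back; since it is the *sender* of the dust HTLC that takes the risk, nothing needs to be negotiated with the peer.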
13:57 < crypt-iq> I argued that we don't need a sum of dust limits
13:58 < cdecker[m]> Yeah, just fail forwarding it if it's above your internal limit I'd guess
13:58 < BlueMatt> because we can do it at forward-time (its the node *sending* the htlc that takes the risk)
13:58 < BlueMatt> and if its at forward-time it doesnt appear in the spec outside of rationale sections
13:58 < crypt-iq> Well the problem here is that it will be locked into a commitment transaction before we can fail back
13:58 < niftynei_> i mean it sounds like there's two things you could decide to limit it on, and that's up to your node/impl to decide which to apply
13:58 < crypt-iq> And lnd has a uint16
13:58 -!- jarthur [~jarthur@2603-8080-1540-002d-acff-de01-3c77-7564.res6.spectrum.com] has quit [Ping timeout: 240 seconds]
13:59 < crypt-iq> So we want to communicate our max so that limit isn't reached
13:59 < BlueMatt> crypt-iq: no? because its outbound
13:59 < BlueMatt> you just...dont send it?
13:59 < rusty> BlueMatt: but you're still committed to it until you fail it.
13:59 < cdecker[m]> That's ok, the sender is out of pocket and lnd can apply <u16
13:59 < BlueMatt> oh, you mean that lnd will be unable to upgrade to *accept* new htlcs due to a coding issue?
13:59 < rusty> (This is the reason we *have* incoming limits).
13:59 < BlueMatt> that just seems like an issue where y'all can not set the feature bit until you add the relevant code?
14:00 < BlueMatt> or are you *also* making a security argument?
14:00 < crypt-iq> Well another problem is that we have in-memory buffers so we want to limit that exposure as well and these HTLCs all get stored with their blobs
14:00 < cdecker[m]> Wouldn't that require there to be more than `u16max-buffer` in a single commitment?
14:00 < crypt-iq> cdecker: right, which I don't see as likely except in an attack scenario
14:01 < BlueMatt> yes, do you have a specific limit in mind for that? that seems like something that could be a general spec-enforced limit
14:01 < BlueMatt> cause, like, the number can just be almost arbitrarily high
14:01 < BlueMatt> we use 64K for lots of things, lets just say that? :)
14:01 < ariard> you can limit the dusted balance implementation-wise, though at risk of silent force-close if it overrides negotiated `dust_limit_satoshis`
14:01 < crypt-iq> I was thinking 10000, but if we upgrade our database I don't see a reason to not have 64k
14:01 < BlueMatt> ariard: that's a largely-separate issue, though, thats the channel negotiation at creation?
14:02 < rusty> 32k! And bike shed BLUE dammit BLUE
14:02 < crypt-iq> ariard: how can it lead to a silent force close error?
14:02 < ariard> crypt-iq: we don't have an upper bound on `dust_limit_satoshis` for now, at least not negotiated in the spec
14:02 < BlueMatt> rusty: ok! 32k it is :)
14:03 < cdecker[m]> It just has to be low enough so that a single commitment round doesn't risk going above the limit. Then we can reject the ones over the limit on the next commitment
14:03 < BlueMatt> cdecker: what limit?
14:03 < ariard> if you start to enforce one on an already-existent channel, your counterparty might have a stale view of what's a dust HTLC for you
14:03 < BlueMatt> cdecker[m]: cause the only relevant security concern, as I understand it, is how much *outbound* dust HTLC you have sent
14:03 < cdecker[m]> Whatever your node chooses to enforce
14:03 < BlueMatt> which you can just...limit?
14:03 < BlueMatt> there's no in-channel force-close risk relevant here whatsoever
14:04 < cdecker[m]> Yep, can't see the force close risk here either
14:04 < crypt-iq> ariard: dust_limit_satoshis is negotiated and static, not sure I follow
14:04 < niftynei_> i think we have some next steps outlined here, i'd love to spend a few minutes chatting about turbos
14:04 < BlueMatt> sgtm
14:04 < ariard> crypt-iq: let's defer discussion to the issue? we might lack context here
14:04 < niftynei_> i know we're already over time a bit
14:04 < BlueMatt> last call for questions here.
14:04 < BlueMatt> alright
14:05 < BlueMatt> #topic turbossssssssssssssssssss
14:05 < cdecker[m]> Sgtm
14:05 < BlueMatt> rusty has the floor
14:05 < BlueMatt> and about 10 minutes until we call time
14:05 < t-bast> Do we need to include turbo in channel_type?
14:06 < rusty> t-bast: hmm, good q!
14:06 < t-bast> Or do we just react to `min_depth` being `0` in `accept_channel`?
14:06 < t-bast> Because what's a bit weird here is that open_channel doesn't have a way to specify min_depth
14:06 < t-bast> Unless we add it as a tlv?
14:07 < BlueMatt> if we're adding stuff anyway should it include a "will accept payments at 0conf" field on both ends so you know whether you can *also* send or just receive?
14:07 < rusty> t-bast: min_depth is totally advisory: you can always delay sending as long as you want.
14:08 < t-bast> rusty: true, but you wouldn't understand why the opener delays, it would be better to be explicit?
14:08 < roasbeef> re #873, missing the context here related to CPU DoS? in that if you have more dust HTLCs things are harder to process for certain implementations?
14:08 < t-bast> BlueMatt: that could be reasonable, yes
14:08 < rusty> BlueMatt: channel type would cover that?
14:08 < BlueMatt> presumably could, yes.
14:08 -!- jarthur_ [~jarthur@2603-8080-1540-002d-c49d-5e9c-d3a7-cf1f.res6.spectrum.com] has quit [Ping timeout: 255 seconds]
14:08 < roasbeef> seems you really need to clamp both the count, and total value
14:08 < roasbeef> for dust
14:09 < rusty> The difference is not send vs recv, it's "I will route for you even though your open is unconfirmed".
14:09 < roasbeef> BlueMatt: yeah I read it, I don't agree w/ the rationale, it's gonna cause a lot of force closes in the wild
14:09 < rusty> Which, I'm tempted to say "try it and see"?
14:09 < cdecker[m]> Well but that's just local policy again isn't it?
14:10 < roasbeef> t-bast: disagree that we need to worry about wasting feature bits, people are using feature bits in the 1000s ranges already today, there's a lot of room from 20 or so where we are rn to there
14:10 < cdecker[m]> Accept the HTLC, if it's forwarded and you don't want to just fail it immediately again
14:10 < roasbeef> and we'd already have a feature bit for the new fee stuff right?
14:10 < BlueMatt> roasbeef: again, its pretty rude to dig up a topic from an hour ago. wait till after the meeting.
14:11 < rusty> Yeah, roasbeef, comment on issue please.
14:11 < BlueMatt> rusty: hmmm, so I guess you'd just try to route, see if fail, and retry payment over another route if possible
14:11 < rusty> BlueMatt: I think so.
14:11 < BlueMatt> I guess that works, as long as you know your peer will accept the htlc to begin with
14:11 < t-bast> It's quite harmless, but why not be explicit to avoid a failed round-trip?
14:11 < roasbeef> BlueMatt: had something come up in meat space, was going thru the scrollback to reply where tagged
14:11 < rusty> So really, you only need to know "are you gonna get upset at me trying?" which is a feature negotiation. I don't even think it needs to be a channel?
14:11 < rusty> ... type
14:12 < cdecker[m]> Right
14:12 < t-bast> Yeah I'm not sure either it needs to be a channel_type
14:12 < BlueMatt> t-bast: I mean you shouldnt hit a stuck payment in this case, hopefully, and its not like you have to wait for several hops of commitment_signed dances.
14:12 < rusty> IOW, all channels are turbo channels. I think our analysis shows that (as long as you refuse to fwd) it's "why not, your funeral"?
14:12 < roasbeef> t-bast: fwiw we never close when we receive errors, always seemed like an unnecessary way to make users angry (by auto force closing), but it's possible to re-use the error message as is, using the all zero connection ID flag, then using a TLV field to pinpoint a channel and/or action
14:12 < t-bast> BlueMatt: that's true, but it's still something we could easily avoid by being explicit, can't we?
14:13 < cdecker[m]> Well even if you're the final recipient you should maybe not act on it until it's confirmed (ship the paid goods)
14:13 < BlueMatt> t-bast: yea, I guess its a question of protocol complexity
14:13 < BlueMatt> cdecker[m]: only if you're the direct counterparty, otherwise you get paid either way :)
14:13 < t-bast> roasbeef: yeah, maybe we should do the same, I'm not sure though, I like having the two distinct mechanisms (and it's really a tiny amount of work to support)
14:13 < BlueMatt> cdecker[m]: but, yea, thats basically a node-level api issue, no?
14:14 < cdecker[m]> Yep, but we may need to bubble that up to the user so they can decide
14:14 < t-bast> BlueMatt: of course, if it's tedious to do, I can definitely live with the try-and-see approach, but if it's really just including a tiny informative tlv it could be worth it
14:14 < rusty> Yeah, easier to add a warning msg than to extend error msg, tbh.
14:16 < rusty> t-bast: I was kinda assuming we would have some command for user to say "I trust this nodeid!". That might happen after channel open though?
14:16 -!- jarthur [~jarthur@2603-8080-1540-002d-8911-e5b6-6ab4-72d3.res6.spectrum.com] has joined #lightning-dev
14:17 < rusty> t-bast: hmm, we could use funding_locked to indicate "I'm ready to fwd"?
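The 0-conf flow being debated here — `min_depth` stays advisory, and the acceptor sends `funding_locked` early only for peers it trusts — could be sketched as follows. A toy illustration only; the function names, the trust set, and the default depth are all hypothetical:

```python
# Hypothetical 0-conf ("turbo") acceptance policy.

def min_depth_for_open(opener_node_id, trusted_node_ids, default_depth=3):
    """Choose the min_depth we advertise in accept_channel:
    0 for a trusted opener (turbo), otherwise a normal confirmation depth."""
    return 0 if opener_node_id in trusted_node_ids else default_depth

def should_send_funding_locked(confirmations, advertised_min_depth):
    """funding_locked may go out as soon as our own advertised depth is met;
    with min_depth 0 that is immediately after the funding tx is broadcast."""
    return confirmations >= advertised_min_depth
```

Since `min_depth` is only advisory, nothing stops a node from waiting longer than it advertised — which is exactly why the meeting debates whether the "I won't forward until confirmed" policy needs explicit signalling at all.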
14:17 < t-bast> rusty: we have per-node configuration overrides, it has been quite handy for that kind of thing - you can declare in `eclair.conf` that you override some specific limits or features for specific node_ids
14:17 < roasbeef> BlueMatt: ppl are free to read messages or not, this is async chat
14:17 < BlueMatt> t-bast: eh, I'll implement it either way. I kinda like not having it, but I dont feel *that* strongly
14:17 < niftynei_> failure case on turbos seems kinda complex, no? like value has been exchanged but the accounting for it fell apart?
14:17 < rusty> ... ah, no you need that to send any HTLCs, ignore.
14:18 < rusty> niftynei_: yeah, you got money, but oh no not really.
14:18 < niftynei_> isn't turbo basically "send funding_locked at successful broadcast"?
14:18 < t-bast> when the channel completely disappears from under your feet, it's nasty
14:19 < BlueMatt> I mean some folks will want to do this, I dont think we should say no?
14:19 -!- crypt-iq [~crypt-iq@71.69.230.255] has quit [Ping timeout: 258 seconds]
14:19 * niftynei_ does not want to be on that accounting team
14:19 < t-bast> niftynei: that's the way we've currently implemented it for Phoenix, yes
14:19 < rusty> BlueMatt: oh yeah, we should totally do it. I think it's just *how*.
14:19 < BlueMatt> like, you trust your counterparty, great, software still has to handle the accounting, but the user shot themselves in the face
14:19 < BlueMatt> yea, fair
14:20 < niftynei_> i definitely see the use case for channels btw mobile units and their 'service provider' so to speak
14:20 < rusty> t-bast: hmm, ok, so where does the "I am prepared to fwd for you" msg go?
14:20 < roasbeef> rusty: is it though? we have a blob that has no structure atm, can either repurpose it or add the other field
14:21 < t-bast> rusty: we assume it by default, but that's also because in the Phoenix case we're always funders so we know we're not going to double-spend ourselves
14:21 < rusty> roasbeef: yes, yes it is. It's backwards compatible.
14:21 < t-bast> rusty: so we have a simpler case than the general turbo channels mechanism
14:21 < roasbeef> re turbos, breez has a protocol they're using in the wild, and have revived a PR of it for lnd, it doesn't make a distinction w.r.t. being able to route HTLCs or not, for them the whole point is they can route HTLCs to let users insta recv
14:21 < rusty> t-bast: hmm, so we could have a "... but be warned I'm not gonna fwd" tlv?
14:21 < BlueMatt> t-bast: but in the phoenix case the "just set the 'i will accept payment pre-lock-in' bit on both sides" just works as you expect
14:22 < BlueMatt> t-bast: cause, presumably, phoenix router will forward pre-lock-in?
14:22 < t-bast> rusty: yes we could, I guess
14:22 < BlueMatt> cause you wont double spend, but users wont forward at all, cause its private channels.
14:22 < rusty> t-bast: don't know if it's worth the complexity.
14:23 < t-bast> BlueMatt: yes exactly, we only open turbo channels to end nodes that won't forward, and we accept forwarding for them because we know we won't double-spend ourselves
14:23 < t-bast> BlueMatt: the trust is on the wallet user side
14:23 < t-bast> But for the general case, we probably need more configuration hooks?
14:23 < cdecker[m]> If someone receives an HTLC on a 0conf channel they're the recipient, since nobody else knows about that channel (6 conf broadcast limit)
14:24 < roasbeef> cdecker[m]: hop hints?
14:24 < t-bast> TBH I haven't thought it through yet, since our use-case is a simpler case
14:24 < BlueMatt> I guess I'm trying to understand the concrete use-case for "I'll accept payment, but not forward". because the ux is gonna need to display the same "payment pending" status to the user until lock-in either way, I dont see a ton of value in it
14:24 < cdecker[m]> Unless you do weird stuff with routehints
14:24 < rusty> BlueMatt: but you can send it out again instantly, via same channel?
14:24 < t-bast> cdecker[m]: it could be in routing hints though
14:24 < roasbeef> cdecker[m]: yeah afaik, ppl like breez always use hop hints, and have a scheme to generate a scid that works in the onion and the invoice
14:24 < roasbeef> BlueMatt: I think you're right here, ppl that do this in the wild always care about the forward aspect, since that's what improves UX
14:25 < BlueMatt> we need random scids/pubkeys in hop hints *anyway*, but that seems unrelated
14:25 < BlueMatt> or, can be unrelated
14:25 < cdecker[m]> If anything i think we need to say that a 0conf channel won't hold the pending HTLC until the forward depth is reached, otherwise I don't see how failing it can cause trouble
14:25 < BlueMatt> rusty: hmm, I dont quite get that?
14:25 < roasbeef> it's related since you need to identify a channel still, tho there's also the pubkey routing thing -- so put the pubkey in the onion instead of the scid
14:25 < roasbeef> since the mapping only needs to be known by the last two hops in the route
14:26 < t-bast> Maybe the simplest scheme is indeed "if we go turbo, let's go turbo all the way and just forward each other's htlc"?
14:26 < roasbeef> t-bast: turbo or bust
14:26 < t-bast> and note that this is the turbo trade-off?
14:26 < roasbeef> since that's what all the ppl in the wild that use it already do
14:26 < BlueMatt> roasbeef: I dont see how its related aside from "its kinda required for accepting 0conf payments"
14:26 < BlueMatt> t-bast: yea.
14:27 < roasbeef> BlueMatt: yeh that's it, ppl want to recv and send insta
14:27 < rusty> BlueMatt: if you open an unconf channel with me and send me some sats, I can send them through you out to anyone.
14:27 < BlueMatt> roasbeef: yea, ok
14:27 < rusty> There's no *routing* here, importantly.
14:27 < rusty> s/routing/forwarding/
14:27 < BlueMatt> rusty: but, like you wont *accept* the original payment
14:27 < BlueMatt> you may accept the htlc
14:27 < roasbeef> so you either need a way to craft a custom short channel ID, or you use pubkey based routing in the onion (since that's already in the invoice)
14:27 < BlueMatt> but from an api/ux perspective, you'll market it as "pending"
14:27 < roasbeef> iirc rn breez uses a scid mapping of heights below the segwit activation height
14:27 < BlueMatt> *unless* you're trusting the counterparty, at which point you'll also route
14:28 < BlueMatt> roasbeef: I think lets just create a way to create a custom scid, cause we want to do that anyway imo :)
14:28 < rusty> BlueMatt: that's naive UX though. You can still use the funds, just not out any other channel.
14:28 < BlueMatt> rusty: I guess I dont get why you'd display to the user "received a payment on 0conf channel" instead of just "payment pending waiting for sender to send"
14:28 < BlueMatt> like, that seems like a vaguely useless ux distinction
14:28 < BlueMatt> but, ok, if you want to do that, go for it :)
14:28 < roasbeef> sure, I mention this since ppl already have their own schemes in the wild, and will likely continue to use those still, but maybe they'll write them up in a bLIP or something if people want to interop (usually it's their software interacting w/ their software, so interop matters less in the wild)
14:29 < BlueMatt> in either case, it seems like not gonna be the most common use-case :)
14:29 < rusty> BlueMatt: AFAICT that's *exactly* what Phoenix does today?
14:29 < BlueMatt> roasbeef: that seems like something that could just be a bolt, no?
14:29 < BlueMatt> rusty: no, I believe they forward happily?
14:29 < BlueMatt> according to what t-bast seemed to say above? or am I wrong?
14:29 < rusty> BlueMatt: in theory, in practice most users have a single channel.
14:30 < BlueMatt> huh?
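The custom-scid idea in this exchange — hand the payer an alias scid in the invoice route hint and translate it back to the real channel at forward time, a mapping only the last two hops need to know — could be sketched as a simple alias table. Illustrative only; this is neither the Breez scheme nor any spec'd encoding:

```python
# Toy sketch of a scid alias table (hypothetical, not a spec'd mechanism).
import secrets

class ScidAliasMap:
    def __init__(self):
        self._alias_to_real = {}

    def new_alias(self, real_scid):
        """Issue a random scid-shaped alias for use in invoice route hints."""
        alias = secrets.randbits(64)
        self._alias_to_real[alias] = real_scid
        return alias

    def resolve(self, scid):
        """At forward time, accept either an alias we issued or a real scid."""
        return self._alias_to_real.get(scid, scid)
```

A real design would also have to answer roasbeef's question above: once the channel confirms, does the node keep honoring the alias (useful for privacy of private channels) or switch to the real scid?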
14:30 < t-bast> rusty: do you expect that people will want the safe-ish turbo? Instead of just doing full turbo-yolo when they do turbo? I'm not sure trying to half-protect the funds in case the channel is double-spent is really worth it
14:30 < ariard> to me, the problem seems to be "how do i signal to my counterparty forward-only-after-conf?"
14:30 < BlueMatt> ariard: I dont think we need to?
14:30 < t-bast> BlueMatt: yes, we forward happily and the phoenix user trusts that we won't double-spend the channel
14:30 < ariard> BlueMatt: that might be the forwarding policy you wish, like being both a merchant and routing node
14:30 < BlueMatt> alright, lets discuss more on an issue, it seems like the big question is "do we need to tell our counterparty that we wont forward, or do we just reject the htlcs"
14:31 < roasbeef> one other thing w/ the custom ID, is: once the channel is confirmed, do you use the actual scid or keep using the custom one?
14:31 < BlueMatt> rusty: you wanna open an issue?
14:31 < rusty> t-bast: the only issue I can see is that invoices will get marked paid, even though they're not really. That's hard!
14:31 < roasbeef> iirc rn, breez switches over to the real one after things are confirmed
14:31 < BlueMatt> roasbeef: i was thinking custom scid for privacy of private nodes, so you'd want it later.
14:31 < rusty> #action rusty to open an issue to discuss further.
14:31 < BlueMatt> roasbeef: *plus* custom fake pubkeys
14:31 < BlueMatt> #endmeeting
14:31 < lndev-bot> Meeting ended Mon Jul 19 21:31:53 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:31 < lndev-bot> Minutes: https://lightningd.github.io/meetings/lightning_meeting_name/2021/lightning_meeting_name.2021-07-19-20.02.html
14:31 < lndev-bot> Minutes (text): https://lightningd.github.io/meetings/lightning_meeting_name/2021/lightning_meeting_name.2021-07-19-20.02.txt
14:31 < lndev-bot> Log: https://lightningd.github.io/meetings/lightning_meeting_name/2021/lightning_meeting_name.2021-07-19-20.02.log.html
14:31 < cdecker[m]> Yeah, as long as you know what the alias means you're good
14:32 < t-bast> rusty: yes, but if you trust your peer it will stay paid! If they cheat you though as niftynei said, the accounting is nasty
14:32 < roasbeef> fake pubkeys can work today w/ the pubkey routing thing, then you add a feature bit in the invoice that says pubkey only
14:32 < rusty> t-bast: indeed! I think we'd have to only allow payments to specially-marked invoices.
14:32 < ryanthegentry> dang, unfortunate we didn't get to bLIPs today again. Wanted to flag some examples of LN-related descriptive docs standardization attempting to happen in the wild outside the BOLTs process in case y'all haven't seen them
14:32 < ryanthegentry> Podcasting 2.0 TLV registry: https://github.com/satoshisstream/satoshis.stream/blob/main/TLV_registry.md
14:33 < ryanthegentry> Moneysocket docs: https://github.com/moneysocket/moneysocket-rfc
14:33 < ryanthegentry> LNURL docs (ofc): https://github.com/fiatjaf/lnurl-rfc
14:33 < ryanthegentry> (old) hosted channels docs: https://github.com/btcontract/hosted-channels-rfc
14:33 < rusty> ryanthegentry: nice!
14:34 < ryanthegentry> (old, not nearly the same as others) Thor/turbo channel API docs: https://github.com/bitrefill/thor-api-doc
14:34 < t-bast> ryanthegentry: the first link is interesting, it touches on the issue of tlv being a shared resource (as bolt 4 error codes and feature bits), this is IMO the part we need to figure out how to coordinate centrally while giving flexibility to proposal authors
14:35 < t-bast> ryanthegentry: we could have a single table in the BOLTs that reserves those scarce values and points to where they're defined (depending on where bLIPs end up living)
14:35 < roasbeef> t-bast: what coordination is needed other than making sure things don't collide? given for a given application, only the sender and receiver need to know which TLV fields are being used in the onion
14:36 < roasbeef> t-bast: yeh I'd imagine the sub bLIPs reference some larger table so ppl can take a glance to make sure things aren't colliding
14:36 < vincenzopalazzo> It was a nice meeting guys. Have a nice day/night 🙂 see you in the next meeting
14:36 < t-bast> roasbeef: that's it, how do you ensure they don't collide if every feature author has their own separate table to "reserve" bits?
14:36 < roasbeef> fwiw I thought things would've been colliding more in the wild than they are rn, so maybe shows not much orchestration is really needed there
14:36 < valwal> ryanthegentry I'm working on a keysend bLIP currently rebased on your PR!
14:37 < t-bast> roasbeef: I don't think it's a lot of "hard" coordination, it's just having a central place to store that big table and having a few people agree before reserving bits, but it needs to be codified
14:37 < ryanthegentry> t-bast: thanks, maybe that's a good place to start then
14:37 < ryanthegentry> valwal: w00t w00t!!
14:37 < roasbeef> t-bast: yeh agreed, just pointing out that the current namespace is pretty large, which prob explains why things haven't collided as much in the wild, like there're literally thousands of slots
14:38 < roasbeef> and applications may pick a slot just to mess around w/, but not really commit to seriously using it in the wild
14:38 < t-bast> roasbeef: but if we want it to flourish, we need to expect that there will be more and more demand for those so more risk for it to collide ;)
14:38 -!- rusty [~rusty@103.93.169.121] has quit [Ping timeout: 268 seconds]
14:39 < roasbeef> yeh agreed, the e2e nature of everything means a collision only really happens if there's an endpoint that understands both sides of the collision, which as you say may become more common as things start to trickle down from the app layer into wallets (tighter integration)
14:39 < t-bast> so it's not a hard task, but we should have a codified process to reserve bits and keep this in a single place that's easy to look up for everyone (I think the rfc repo makes sense for that)
14:40 < roasbeef> yeh could just continue to extend doc 9, and also move the TLV field definitions for messages away from where the message is defined
14:40 < roasbeef> since rn there isn't a single place you can go to check out all the defined TLV fields for each message
14:40 < cdecker[m]> An index would be helpful
14:41 < t-bast> I remember that we'd discussed that in the early days of tlv
14:42 < t-bast> And I still think it's helpful, just need to find the right format for it
14:42 < cdecker[m]> Don't remember why we decided against it
14:43 < t-bast> Just because of apathy I guess, no one found it painful enough to fix it :D
14:43 < cdecker[m]> Sounds like it yeah
14:43 < t-bast> It's never a good time though, there's always a lot to do ;)
14:43 < t-bast> Alright I gotta go guys, thanks for the discussions and see you soon!
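The central reservation table t-bast and roasbeef are converging on — one place that records reserved TLV types or feature bits and lets authors check for collisions at a glance — can be modeled in a few lines. A toy sketch only; no such table exists in the spec, and all names here are made up:

```python
# Toy model of a central TLV/feature-bit reservation table (hypothetical).

class Registry:
    def __init__(self):
        self._table = {}  # type number -> (owner, link to defining doc)

    def reserve(self, type_num, owner, doc):
        """Reserve a type number; refuse a collision with an existing entry."""
        if type_num in self._table:
            raise ValueError(
                f"type {type_num} already reserved by {self._table[type_num][0]}"
            )
        self._table[type_num] = (owner, doc)
```

As noted in the discussion, collisions only bite when one endpoint understands both colliding uses, but a single lookup table makes them cheap to avoid before that happens.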
14:44 -!- t-bast [~t-bast@user/t-bast] has quit [Quit: Leaving]
14:44 < cdecker[m]> Good night, I'll be off soon too
14:44 < cdecker[m]> See you all next time around 🙂
14:58 -!- lndev-bot [~docker-me@243.86.254.84.ftth.as8758.net] has quit [Ping timeout: 265 seconds]
15:04 -!- renepick [~renepick@2a02:a18:9169:9101:a469:128:7bef:b9e3] has quit [Quit: Client closed]
15:06 -!- niftynei_ [~niftynei@4.53.92.114] has quit [Quit: Leaving]
15:31 -!- AaronvanW [~AaronvanW@50-207-231-44-static.hfc.comcastbusiness.net] has quit [Remote host closed the connection]
15:39 -!- AaronvanW [~AaronvanW@209.235.170.242] has joined #lightning-dev
15:48 -!- emcy [~emcy@user/emcy] has quit [Quit: Leaving]
15:57 -!- hex17or [~hex17or@gateway/tor-sasl/hex17or] has quit [Ping timeout: 244 seconds]
16:08 -!- emcy [~emcy@user/emcy] has joined #lightning-dev
16:18 -!- lukedashjr [~luke-jr@user/luke-jr] has joined #lightning-dev
16:19 -!- luke-jr [~luke-jr@user/luke-jr] has quit [Ping timeout: 258 seconds]
16:20 -!- lukedashjr is now known as luke-jr
16:29 -!- Aaronvan_ [~AaronvanW@159.48.55.175] has joined #lightning-dev
16:32 -!- rusty [~rusty@103.93.169.121] has joined #lightning-dev
16:33 -!- AaronvanW [~AaronvanW@209.235.170.242] has quit [Ping timeout: 255 seconds]
16:56 -!- belcher_ [~belcher@user/belcher] has joined #lightning-dev
16:59 -!- belcher [~belcher@user/belcher] has quit [Ping timeout: 265 seconds]
17:46 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has joined #lightning-dev
17:47 -!- AaronvanW [~AaronvanW@209.235.170.242] has joined #lightning-dev
17:51 -!- Aaronvan_ [~AaronvanW@159.48.55.175] has quit [Ping timeout: 252 seconds]
17:51 -!- hex17or [~hex17or@gateway/tor-sasl/hex17or] has joined #lightning-dev
18:03 -!- riclas [~riclas@77.7.37.188.rev.vodafone.pt] has quit [Ping timeout: 258 seconds]
19:00 -!- AaronvanW [~AaronvanW@209.235.170.242] has quit [Remote host closed the connection]
19:03 -!- AaronvanW [~AaronvanW@209.235.170.242] has joined #lightning-dev
19:08 -!- AaronvanW [~AaronvanW@209.235.170.242] has quit [Ping timeout: 265 seconds]
20:00 -!- rusty [~rusty@103.93.169.121] has quit [Quit: Leaving.]
20:17 -!- AaronvanW [~AaronvanW@209.235.170.242] has joined #lightning-dev
20:19 -!- AaronvanW [~AaronvanW@209.235.170.242] has quit [Client Quit]
20:38 -!- rusty [~rusty@103.93.169.121] has joined #lightning-dev
23:19 -!- smartin [~Icedove@88.135.18.171] has joined #lightning-dev
23:51 -!- belcher_ is now known as belcher
23:57 -!- yonson [~yonson@2600:8801:d900:7bb::d7c] has quit [Remote host closed the connection]
23:57 -!- yonson [~yonson@2600:8801:d900:7bb::d7c] has joined #lightning-dev
--- Log closed Tue Jul 20 00:00:11 2021