--- Log opened Mon Nov 08 00:00:26 2021
08:49 < denis2342> the output of msats from the paystatus command is doubled (amount_msat). possibly undetected because the pretty-printing of jq hides this
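If the doubling denis2342 describes takes the form of the amount_msat key simply appearing twice in the raw JSON, jq's pretty-printing would indeed hide it, since duplicate keys are collapsed on output. A small Python check of that effect, with hypothetical field values:

```python
import json

# Hypothetical raw paystatus-style output where a key is repeated; jq (and
# json.loads) keep only one occurrence when printing, hiding the duplication.
raw = '{"status": "complete", "amount_msat": "1000msat", "amount_msat": "1000msat"}'
print(json.loads(raw))  # {'status': 'complete', 'amount_msat': '1000msat'}

# Inspect the raw key/value pairs instead to surface duplicates.
def duplicate_keys(pairs):
    seen, dupes = set(), []
    for key, _ in pairs:
        if key in seen:
            dupes.append(key)
        seen.add(key)
    return dupes

print(json.loads(raw, object_pairs_hook=duplicate_keys))  # ['amount_msat']
```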
10:46 < BlueMatt> roasbeef: can y'all respond at https://github.com/lightning/bolts/pull/918#issuecomment-952085446 ?
10:46 < BlueMatt> so we dont eat 30 minutes in the meeting having the same discussion that goes nowhere?
10:55 < niftynei> my calendar tells me we're having a meeting in 6 minutes
10:55 < t-bast> hey niftynei! I believe we do, yes ;)
10:55 < niftynei> t-bast posted an agenda here https://github.com/lightning/bolts/issues/933
10:55 < niftynei> well hello hello!
10:56 < t-bast> I've watched a few videos from tabconf, it looks like it was a great crowd there
10:56 < niftynei> there were a lot of great talks! the organizers said they got something like 70 speakers
10:57 < niftynei> denis2342: i think you want #c-lightning for the paystatus bug
10:58  * BlueMatt is here, but a bit under the weather so doubly grouchy today
10:59 < niftynei> who wants to chair today?
10:59 < rusty> So, was there a decision to switch to some kind of modern technology from IRC? Lightning Video?
11:01 < BlueMatt> crypt-iq: can y'all respond at https://github.com/lightning/bolts/pull/918#issuecomment-952085446 ?
11:01 < BlueMatt> y'all said you would last meeting but never did
11:01 < crypt-iq> i dont know timelines
11:01 < t-bast> rusty: what we discussed with cdecker in Zurich was to alternate between IRC and video, depending on what topics we want to cover
11:01 < t-bast> Let's do IRC this time, and maybe next time we add a video option?
11:02 < niftynei> sgtm
11:02 < rusty> Ack!
11:02 < BlueMatt> video seems reasonable when there's higher-bandwidth things, but decisions on irc seem reasonable... we already have such a problem with "wait, what did we say last time"?
11:02 < niftynei> it's past time to start + we still don't have a chair
11:02 < t-bast> I can chair ;)
11:02 < niftynei> dope, thanks t-bast!
11:02 < t-bast> #startmeeting Lightning Spec Meeting
11:02 < lndev-bot> Meeting started Mon Nov 8 19:02:57 2021 UTC and is due to finish in 60 minutes. The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:02 < lndev-bot> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
11:02 < lndev-bot> The meeting name has been set to 'lightning_spec_meeting'
11:03 < t-bast> #topic Warning messages
11:03 < t-bast> #link https://github.com/lightning/bolts/pull/834
11:03 < t-bast> There are only a couple of comments remaining on this PR, and it's already in 3 implementations, so I guess it's high time we merged it? :)
11:04 < BlueMatt> https://github.com/lightning/bolts/pull/834#discussion_r719977416 was the last unresolved thing, iirc
11:04 < t-bast> There's the question of all-zeroes and a few clean-up comments (should be easy to fix)
11:05 < t-bast> I agree with BlueMatt to keep the all-zeroes option for now, it was already there, it's less friction to keep it
11:05 < niftynei> all-zeroes seems useful for peers with multiple channels :P
11:05 < cdecker[m]> Agreed
11:05 < rusty> I think the all-zeros thing is unnecessary, since if you have closed all my channels, you'll error each one as I reestablish.
11:05 < BlueMatt> note the debate is about all-zero *errors* (ie close-channels)
11:06 < t-bast> niftynei: right, I even forgot, c-lightning doesn't care, all-zeros or not is the same for you xD
11:06 < BlueMatt> not all-zero *warnings*, which is presumably the default for most messages
11:06 < t-bast> yep
11:06 < BlueMatt> rusty: I really dont get why we need to rip something out just because you can emulate it with a reconnect loop
11:07 < t-bast> rusty: but if you have some weird logic to not initiate the reestablish yourself (because mobile wallet that waits for an encrypted backup for example)
11:07 < rusty> BlueMatt: because it's an unnecessary complication, AFAICT? Like, tell me the channel you have a problem with, precisely!
11:07 < BlueMatt> rusty: if you have an issue with a peer, you dont know the channel precisely
11:07 < rusty> t-bast: hmm, ok, fair enough.
11:08 < BlueMatt> that's the point, if the peer is doing *handwave* then you've presumably closed your channels and may not even be tracking them
11:08 < BlueMatt> like, our main message responder doesn't know about channels that are closed
11:08 < rusty> BlueMatt: yeah, so I was thinking you'd respond (as you naturally would) to any unknown channel with an error for that channel.
11:08 < BlueMatt> sure, we can respond to messages with close errors, but, like, we can't list the need-to-be-closed channels, cause they're off in onchain-enforcement land
11:09 < rusty> You definitely don't want to close all channels, if they mention one you don't know.
11:09 < rusty> You only want a general error if you have blacklisted them or something AFAICT.
11:09 < BlueMatt> yes, of course we do that, but my point is still more generally about the peer
11:09 < BlueMatt> yes
11:09 < BlueMatt> exactly
11:10 < BlueMatt> I think we agree, you just think the error is entirely useless to the point we should remove the code, I think its marginally useful and we might as well keep it
11:10 < rusty> OK. Seemed like a chance to simplify but I concede :) I'll restore that part. Anything else?
11:10 < BlueMatt> IIRC I was ack modulo deciding that
11:11 < BlueMatt> I would have to re-review it, there may have been a few nits in my last review
11:11 < t-bast> Great, I think we're good on that front then. Once that's restored, we can do a last round of review for the nits, and then merge?
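For readers following the all-zeroes debate above, this is roughly the receive-side distinction being discussed: a warning is informational and never closes anything, while an error with an all-zero channel_id applies to every channel with that peer. A minimal Python sketch, assuming hypothetical handler and storage names (the message semantics follow BOLT 1, the rest is illustrative):

```python
ALL_ZERO_CHANNEL_ID = bytes(32)

def handle_warning_or_error(peer, msg):
    """Illustrative dispatch for BOLT 1 warning/error messages."""
    if msg.type == "warning":
        # Warnings never trigger a close: just surface them to the operator.
        peer.log(f"warning on {msg.channel_id.hex()}: {msg.data!r}")
        return

    # Error: an all-zero channel_id refers to all channels with this peer.
    if msg.channel_id == ALL_ZERO_CHANNEL_ID:
        channels = peer.all_channels()
    else:
        channels = [peer.find_channel(msg.channel_id)]

    for channel in filter(None, channels):
        channel.fail()  # fail the channel, broadcasting the latest commitment
```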
11:11 < BlueMatt> sgtm
11:11 < t-bast> #topic Clarify channel reestablish behavior when remote is late
11:12 < t-bast> #link https://github.com/lightning/bolts/pull/932
11:12 < t-bast> This one is interesting, I'd like other implementers' feedback on that
11:12 < t-bast> We started creating safeguards for big nodes, where when you detect that you're late when restarting you give a chance to the node operator to check whether they messed up the DB
11:13 < cdecker[m]> Doesn't this break SCBs?
11:13 < t-bast> But when testing it against other implementations, we realized it doesn't work because most implementations close instantly when they receive a channel_reestablish that indicates their peer is late
11:13 < t-bast> They should wait for the late peer to send an error before closing, shouldn't they?
11:13 < t-bast> cdecker[m]: why? can you detail?
11:13 < cdecker[m]> afaik the SCB restore uses the outdated -> close semantic to recover funds, don't they?
11:14 < t-bast> cdecker[m
11:14 < roasbeef> outdated?
11:14 < t-bast> but stopping the node instead of going forward with the close shouldn't impact that?
11:14 < roasbeef> you mean the non-static key route, where you need to obtain a point?
11:14 < cdecker[m]> Well, how does an SCB restore cause the remote side to unilaterally close the channel?
11:14 < niftynei> right, "MUST NOT broadcast" and "fail the channel" are conflicting, per our current definition of "fail the channel"
11:14 < roasbeef> we never delete the data needed to send the chan reest to like ppl SCB restore btw
11:14 < BlueMatt> I'm not really a fan of separating "send an `error`" and "fail channel" - they're the same thing right now, afaiu, and separating it into "counterparty should force-close" vs "we force-close" sucks, especially given lnd has always ignored errors.
11:14 < t-bast> you just give the node operator a chance to detect that they pointed to an old DB, and restart with the right one? Or if they really lost data, then move forward with the close
11:15 < cdecker[m]> Ok, must've misinterpreted how SCBs cause the channel closure, my bad
11:15 < t-bast> BlueMatt: it's not separating them?
11:15 < t-bast> BlueMatt: you mean it's misphrased in the PR
11:15 < roasbeef> so change here is just to send an error instead of force closing? like an attempt to make sure ppl don't breach themselves?
11:16 < niftynei> does receipt of an error invoke a channel close from the peer in all cases?
11:16 < t-bast> But conceptually, do you agree that you should only close when sending/receiving the error, not on detecting a late channel_reestablish?
11:16 < roasbeef> lnd pretty much never force closes when it gets an error, only invalid sig is the main offense
11:16 < niftynei> in my reading it's not a change to the behavior, just a spec wording change
11:16 < BlueMatt> t-bast: maybe I misread this? I understand the current pr text to say "send an error message, hoping the other party closes the channel, but keep the channel in active mode locally and let them restart and maybe reestablish/actually use the channel again in the future"
11:16 < roasbeef> t-bast: ok so that's the change? wait until the error instead of closing once you get a bad chan reest?
11:17 < t-bast> The main issue that's not obvious is that currently, implementations aren't really following the spec: they're trigger-happy and force-close when they receive an outdated channel_reestablish, instead of waiting for the error message
11:17 < t-bast> BlueMatt: if you're late, you cannot force-close yourself, your commitment is outdated
11:17 < roasbeef> isn't that what the spec says to do rn? force close if you get a bad chan reest
11:17 < cdecker[m]> I see, that makes sense
11:17 < crypt-iq> why would you send a chan reestablish if you aren't ready
11:17 < t-bast> roasbeef: yes exactly! But the implementations don't do that xD
11:17 < BlueMatt> t-bast: sorry, I dont mean "broadcast state" I mean "set the channel to unusable for offchain state updates"
11:18 < t-bast> BlueMatt: oh yeah, I agree this shouldn't change
11:18 < roasbeef> t-bast: and instead you observe they close earlier w/ some unknown trigger?
11:18 < roasbeef> did we lose the video call link this time?
11:18 < niftynei> it feels like the behavior t-bast describes is what the original spec was intending but it's much clearer with the proposed update
11:18 < t-bast> exactly what niftynei says
11:18 < BlueMatt> its not clear to me what changes are made by this text?
11:19 < t-bast> Ok let me try to summarize it better
11:19 < niftynei> nothing changes to intent, it just makes some current (wrong) behavior explicitly incorrect? iiuc
11:19 < BlueMatt> "fail the channel" without "broadcast its commitment transaction" sounds to me like "send an error message and forget the channel, maybe tell the user to think hard about broadcasting"
11:19 < roasbeef> this would be a larger divergence tho? like all the existing nodes would keep closing on chan reest recv
11:19 < rusty> Hmm, generally we close when we send an error, not when we receive an error. You're supposed to do *both* ofc, but history...
11:19 < t-bast> Alice has an outdated commitment and reconnects to Bob. Alice sends `channel_reestablish`. Bob detects Alice is late. Most implementations right now have Bob force-close at that point.
11:19 < t-bast> Instead Bob should wait for Alice to send an error, then force-close.
11:20 < roasbeef> why?
11:20 < t-bast> I believe the spec meant that, but since implementations did it differently, it's worth clarifying the spec a bit?
11:20 < BlueMatt> ok, let me rephrase, its unclear to me how the pr changes that text :p
11:20 < t-bast> Alice cannot "fail the channel", she cannot broadcast her commitment, she can only send an error to Bob and mark the channel as "waiting for Bob to publish latest commitment"
11:21 < roasbeef> I don't see what that change achieves tho, you just want them to wait to send that error?
11:21 < rusty> BlueMatt: we don't define "fail the channel", but logically it's "return errors from now on and broadcast the latest commitment tx". If you can't do the latter, then the definition still works.
11:21 < BlueMatt> t-bast: sure, yes, that's how i read it previously? I guess my confusion is - the pr diff here seems to change nothing about behavior, but your description here includes a proposed behavior change.
11:21 < t-bast> roasbeef: yes, I want Bob to publish his commitment only once he receives an error, not on receiving a channel_reestablish: that gives Alice time to potentially fix her DB and avoid that force-close
11:21 < roasbeef> t-bast: you seem to be arguing that from the sending node? but the receiving node is the one that can actually force close
11:22 < BlueMatt> rusty: right, the text diff seems fine, i guess, my point is more that it doesnt, to my eye, indicate behavior change.
11:22 < roasbeef> mid convo (like any other message here), IRC link injection: https://github.com/lightning/bolts/issues/933#issuecomment-963494419
11:22 < t-bast> Agreed, it doesn't indicate behavior change, but most implementations' behavior does *not* match the spec currently, and it would probably be worth fixing the implementations?
11:23 < t-bast> So I thought it was worth bringing to attention
11:23 < t-bast> We haven't tried rust-lightning though, maybe you implement it correctly :)
11:23 < BlueMatt> t-bast: ok, my point, I think, is that that feels like an entirely separate conversation to the pr itself
11:23 < BlueMatt> the pr seems fine, I think
11:23 < roasbeef> t-bast: do you know which impls deviate rn?
11:23 < t-bast> Agreed, but I don't know how else to have that discussion (maybe open an issue on each implementation to highlight that its behavior diverges from the spec?)
11:24 < niftynei> wait t-bast does this mean that some channels drop old state to chain?
11:24 < roasbeef> I don't see how it's wrong still, if you lost state, you send me the chan reest, I force close
11:24 < t-bast> IIRC we tested lnd and c-lightning, but I'd need to double check with pm47
11:24 < roasbeef> this seems to be saying that I should instead send an error?
11:24 < roasbeef> niftynei: that's what I'm trying to get at
11:24 < BlueMatt> issues on implementations or an issue on the spec repo seems like a reasonable way to have that discussion
11:24 < t-bast> roasbeef: no, you should wait for my error instead of closing
11:24 < roasbeef> t-bast: ....why?
11:24 < t-bast> roasbeef: if you close instantly, you didn't give me a chance to notice I'm late, and potentially fix my DB if I messed it up
11:24 < rusty> If you're not going to close the channel because peer is behind, you really should be sending a warning I guess?
11:25 < t-bast> roasbeef: if I can fix my DB and restart, we can avoid the force-close and the channel can keep operating normally
11:25 < roasbeef> t-bast: assuming I haven't sent it yet, or? seems like a concurrency thing?
11:25 < roasbeef> if you've lost state, can you really fix it?
11:25 < t-bast> BlueMatt: noted, that's fair, I'll open issues on implementations then
11:26 < t-bast> roasbeef: if you've lost state no, but if you've just messed up your restart you could fix it, so it's really dumb not to give you this opportunity
11:26 < BlueMatt> t-bast: an issue on the spec seems reasonable too, and thanks for flagging, just very confusing to do it on an unrelated pr :p
11:26 < niftynei> this change in the spec is in the case where you're explicitly not supposed to be dropping your commitment tx to chain; does fail the channel mean something else?
11:26 < t-bast> roasbeef: it's not a concurrency thing, Bob has no reason to send an error, only Alice does, so if she doesn't send anything Bob shouldn't force-close
11:27 < niftynei> who's Bob in this case? is Bob behind?
11:27 < t-bast> I don't want to hijack the meeting too much on that issue though, I'll create an issue on the spec and on implementations with a detailed step-by-step scenario
11:27 < t-bast> In the example I put above, Alice is late
11:28 < niftynei> right but this spec PR change only deals with Alice's behavior? i think?
11:28 < t-bast> #action t-bast to detail the scenario in a spec issue and tag faulty implementations
11:28 < niftynei> will wait for more info tho, ok what's next
11:29 < t-bast> niftynei: you're totally right, that's why BlueMatt is right that what I'm discussing isn't completely related to this PR, which is a bit confusing xD
11:29 < t-bast> #topic Drop ping sending limitations
11:29 < t-bast> #link https://github.com/lightning/bolts/pull/918
11:30 < niftynei> ah that is confusing! ok thanks
11:30 < BlueMatt> roasbeef: finally responded 15 minutes ago, so dunno if there's more to be said aside from "lnd folks need to decide - they're still waffling"
11:30 < roasbeef> I think this is the issue w/ IRC, when this was brought up, I said we didn't feel strongly about it, and ppl could do w/e, we didn't commit to rate limiting
11:31 < roasbeef> then we spent like 30 mins on the topic, only to eventually move on w/ nothing really moving forward
11:31 < t-bast> But in a way this change prevents you from rate limiting in the future, right? Unless you choose to become non spec compliant?
11:31 < BlueMatt> roasbeef: feel free to respond on https://github.com/lightning/bolts/pull/918#issuecomment-963501921
11:31 < BlueMatt> what t-bast said
11:31 < BlueMatt> anyway, we dont need to spend more meeting time on this.
11:32 < BlueMatt> it seems like roasbeef has NAK'd it and it should die instead. on the LDK end we'll probably just keep violating the spec cause whatever.
11:32 < roasbeef> t-bast: that is indeed the case, but it's about optionality, we're not committing to rate limiting rn, but hold open the possibility of doing it in the future, this is where IRC really falls short communication wise
11:32 < rusty> Yeah, we don't rate-limit. But in my head there's this idea that we should keep a "useful traffic vs useless waffle" counter for incoming traffic and send a warning and disconnect if it gets over some threshold (or start ratelimiting our responses).
11:32 < BlueMatt> roasbeef: this isn't an irc issue, methinks
11:32 < roasbeef> my comments were interpreted as me trying to block the proposal, but I didn't care and was just providing commentary
11:32 < roasbeef> idk go back and look at those logs and see if those 30 mins were productively used
11:32 < t-bast> Ok, fair enough, let's not spend too much time on this and move on ;)
11:33 < roasbeef> maybe I'm spoiled now after our recent-ish meat space time
11:33 < t-bast> Let's go for one we haven't discussed in a while... wait for it...
11:33 < t-bast> #topic Route Blinding
11:33 < BlueMatt> roasbeef: the issue appears to be that you think "well, we'll just violate the spec later cause we dont care about the spec" is a totally fine way to not-nack a pr
11:33 < t-bast> #link https://github.com/lightning/bolts/pull/765
11:33 < BlueMatt> but its really nack'ing it in everyone else's mind.
11:33 < t-bast> Yay route blinding!
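Circling back to the #932 discussion above, here is a minimal sketch of the ordering t-bast is arguing for: Bob, the peer that is not behind, holds off broadcasting until Alice actually sends an error, which gives her a chance to restart from the correct DB. The helper and field names are hypothetical, not spec text:

```python
def peer_is_behind(channel, reestablish):
    # Hypothetical check: the peer claims fewer commitments than we have seen.
    return reestablish.next_commitment_number < channel.expected_next_commitment_number

def on_channel_reestablish(channel, reestablish):
    """Bob's side: handle a channel_reestablish from a peer that is behind."""
    if peer_is_behind(channel, reestablish):
        # Don't force-close yet: the peer may simply have restarted from an
        # old DB by mistake and can still come back with the right one.
        channel.pause_offchain_updates()
        channel.send_warning("your channel_reestablish looks outdated")
        return
    channel.resume(reestablish)

def on_error(channel, error):
    # Only an explicit error from the late peer triggers the unilateral close.
    channel.broadcast_latest_commitment()
```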
11:33 < BlueMatt> that's not an irc issue
11:34 < roasbeef> BlueMatt: you're putting words in my mouth, I didn't commit to anything, just that it's possible for someone to want to eventually rate limit pings
11:34 < t-bast> We've been making progress on onion messages in eclair (we'll merge support for relaying this week) and it has route blinding as a pre-requisite
11:34 < t-bast> So it would be interesting to get your feedback!
11:34 < t-bast> I've updated the PR, so it has both a `proposals` version that's higher level and the spec requirements
11:34 < BlueMatt> roasbeef: its a requirements issue - you seem to have an entirely different view of what the spec is for from others. anyway, feel free to comment on the pr.
11:34 < t-bast> I just need to add one more test vector for blinding override and it should be ready
11:34 < roasbeef> route blinding is on my list of things to take a deeper look at along the way to checking out the latest flavor of trampoline
11:35 < niftynei> this is exciting t-bast!
11:35 < rusty> t-bast: yes, we need to double-check test vectors.
11:35 < t-bast> roasbeef: yay! I'm curious to get your feedback on the crypto part
11:35 < BlueMatt> t-bast: nice! is there a direction on cleaning up the onion messages pr?
11:35 < rusty> #action rusty check https://github.com/lightning/bolts/pull/765 test vectors
11:35 < t-bast> rusty: I've updated the tlv values we discussed yesterday, they should be final and the test vectors reflect that
11:35 < rusty> t-bast: :heart:
11:35 < ariard> t-bast: i'll try to review route blinding again soon, already done a few rounds in the past
11:35 < t-bast> The onion messages PR will then be rebased on top of route blinding, it should clarify it
11:36 < BlueMatt> t-bast: ah, that's... confusing, but okay.
11:36 < t-bast> to be honest, it's really much easier to review while implementing it: without code, it's hard to figure out the important details
11:36 < t-bast> Well actually onion messages doesn't even need to rebase on route blinding
11:36 < t-bast> It can simply link to it
11:36 < BlueMatt> t-bast: eh, I implemented onion messages too, and still found it impossibly confusing, too many unrelated things everywhere
11:37 < t-bast> What can help is looking at how we implemented it in eclair
11:37 < t-bast> We first made one commit that adds the low-level route blinding utilities + tests that showcase how it could be used for payments
11:37 < t-bast> Then we implemented onion messages using these utilities
11:38 < BlueMatt> t-bast: the spec should stand on its own, but I'll look at route blinding, maybe it's a cleaner spec than onion messages
11:38 < t-bast> To be honest the most confusing part is not mixing the different tlv namespaces and correctly defining your types, then it kinda feels natural
11:38 < niftynei> off topic: is route-blinding a first-step to trampoline payments??
11:38 < BlueMatt> iirc onion messages just links to route blinding for parts, which was part of my "ugh, wtf"
11:39 < t-bast> niftynei: it's completely orthogonal to trampoline - it can be used by trampoline for better privacy
11:39 < niftynei> sounds like it's definitely one to onion messages lol
11:40 < t-bast> If you're interested, this PR implements the crypto utilities (the code part is really small, it's mostly tests): https://github.com/ACINQ/eclair/pull/1962
11:40 < t-bast> Then this PR uses it for onion messages: https://github.com/ACINQ/eclair/pull/1962
11:41 < rusty> BlueMatt: yeah, will rebase, should be nicer.
11:41 < BlueMatt> cool! thanks rusty
11:42 < t-bast> Shall we move to another topic? I wanted to give you the latest updates on route blinding, but it's probably better to review it on your own when you have time. Don't hesitate to drop questions though!
11:44 < t-bast> roasbeef: if you have a few minutes to space this week, can you quickly address the comments in #903 and #906 so that we get these clean-ups merged?
11:44 < rusty> Yep!
11:44 < t-bast> *to spare
11:44 < t-bast> #topic dust limit requirements
11:44 < t-bast> #link https://github.com/lightning/bolts/pull/919
11:44 < t-bast> Rusty, you had a counter-proposal there, I think it would be interesting to discuss it?
11:44 < t-bast> Since ariard is here as well
11:45 < ariard> yep
11:45 < rusty> So, this PR was unclear to me. It's basically "have your own internal limit, fail channel if it passes that".
11:46 < rusty> But that's a recipe for exploitable channel breaks, really.
11:46 < ariard> well i think you have 2 recommendations: a) if a dust HTLC goes above your dust_limit_exposure, cancel this HTLC _without_ forwarding it
11:46 < BlueMatt> its not clear how you do better without 0-htlc-fees-on-anchor
11:47 < rusty> In practice, dust is limited by (1) number of HTLCs (until we get that infinite-dusty-htlcs feature) and (2) feerate.
11:47 < ariard> and b) if increasing update_fee, either fail the channel OR accept the balance burning risk
11:47 < t-bast> rusty: there's only a fail-channel option if you're not using anchor_outputs_zero_fee_htlc_tx, I think it's an important point to note
11:48 < ariard> the fail-channel option has always been deferred to the implementors, and iirc LDK and eclair don't have the same behavior here
11:48 < rusty> t-bast: why? I don't see how that changes the problem?
11:48 < ariard> like I fully agree there is risk for channel breaks in case of fee spikes, it's to be balanced with the risk of losing money
11:48 < t-bast> I think that since all implementations have added that fail-safe, and new ones will directly target anchor_outputs_zero_fee_htlc_tx, we don't need to bikeshed it too much, it will soon be obsolete
11:48 < ariard> ideally better to have a knob and defer the choice to node operators
11:48 < t-bast> rusty: when using anchor_outputs_zero_fee_htlc_tx you're simply not at risk when receiving update_fee
11:49 < t-bast> because it doesn't change your dust threshold, so it doesn't impact your dust exposure
11:49 < t-bast> so when using anchor_outputs_zero_fee_htlc_tx there's no case where this PR introduces a force-close
11:49 < ariard> with zero_fee_htlc_tx, 2nd stage HTLCs are committed with 0-fees
11:49 < niftynei> there's definitely an indeterminate point at which htlc failures begin to occur, but that's not much different from balance exhaustion
11:49 < rusty> t-bast: right.
11:49 < niftynei> it's just *not* balance exhaustion
11:50 < crypt-iq> perhaps the fix is to upgrade to zero-fee-anchors
11:50 < ariard> well it's just that your channel becomes useless to route to for a class of low-value HTLCs
11:50 < niftynei> bigger htlcs will still succeed; this will have implications for route algos that use historic payment success data to choose routes (cdecker[m])
11:51 < rusty> t-bast: but if I add more dust than you want, what happens? We didn't fix the problem, you still could be stuck with too much dust?
11:51 < ariard> crypt-iq: though maybe we still need a dust_limit_exposure with the infinite-dusty-htlcs feature ?
11:51 < rusty> ariard: definitely. A total dust option is required for that.
11:51 < t-bast> rusty: you just fail it instead of relaying it (would be nicer with an "un-add", but it's not dangerous so never leads to a force-close)
11:51 < cdecker[m]> Correct
11:51 < niftynei> im pretty sure it's required... what rusty said
11:51 < crypt-iq> ariard: can just fail back ?
11:52 < niftynei> the real gotcha here is feerate spikes
11:52 < rusty> t-bast: there's an exposure window though :( We tried to avoid that.
11:52 < t-bast> rusty: no there's not, that's why it's interesting!
11:52 < niftynei> at least with an ahead-of-time known limit, you know when you'll be feerate spiking into bad territory
11:52 < t-bast> rusty: because since you haven't relayed it, right now it's only taken from your peer's balance
11:52 < ariard> crypt-iq: yes, that's what we're already doing with 919, or maybe we can introduce an error for dusty HTLCs, but state machine asynchronous issues
11:52 < niftynei> whereas right now you kinda just yolo there and then .. maybe the channel closes?
11:52 < t-bast> rusty: so it's purely money they lose, not you
11:53 < rusty> t-bast: ah! good point! I had missed that!!
11:53 < ariard> inbound HTLCs are subtracted from your peer's balance
11:53 < t-bast> That's the important point, you can safely receive many HTLCs that make you go over your dust threshold, it only becomes dangerous if you relay them
11:53 < t-bast> So if you simply fail them instead, you're always safe
11:53 < crypt-iq> ariard: there are async issues but only for the exit-hop case, rather than forwarding
11:53 < rusty> OK, so now the trap is that your own sent htlcs get dusted by an update_fee, which is fixed by zerofee-anchor. Right.
11:53 < ariard> t-bast: or accept them as a final payee
11:54 < t-bast> (if you use anchor_outputs_zero_fee_htlc_tx or ignore the update_fee case)
11:54 < t-bast> yes exactly
11:54 < rusty> OK, I withdraw my objection. I think the spec change should be far shorter though, textually.
11:54 < niftynei> AOZFHT ftw lol
11:55 < ariard> rusty: yeah if you have suggestions to shorten/improve it i'll take them, there was a discussion with niftynei on where to put the changes as it's forwarding recommendations
11:55 < ariard> and not purely evaluation of `update_add_htlc`
11:56 < niftynei> one thing that'd fix the feerate spike problems is changing the update_fee requirements
11:56 < niftynei> and setting the max increase the same as the dust_limit checks for
11:57 < t-bast> niftynei: but you can't really do that though
11:57 < crypt-iq> what if it actually increases that high
11:57 < ariard> niftynei: wdym? a new requirement on the sender?
11:57 < niftynei> you can send multiple update_fees, but each can only increase the total feerate by the same factor as the pre-checked amount for the feerate bucket
11:57 < t-bast> niftynei: if your peer has been offline for a while, and the feerate really rose a lot, your update_fee needs to match the reality of on-chain feerates, right?
11:57 < niftynei> you just send a bunch of them
11:58 < crypt-iq> how does that change fee spikes? you fail back earlier ?
11:58 < t-bast> well ok, why not... to be honest I simply wouldn't bother and would focus on finalizing implementation of anchor_zero_fee :)
11:58 < niftynei> ah wait you're right, it doesnt matter if it's sudden or not, as soon as the dust goes over the bucket limit we kill the channel
11:58 < ariard> niftynei: i think you're introducing a global as you need to have the same factor across sender/receiver
11:58 < niftynei> so it doesnt matter if it's all at once or incremental, the real problem is that you've exhausted your budget
11:58 < BlueMatt> it appears we're about to run out of time.
11:58 < niftynei> timing of budget exhaustion is irrelevant
11:59 < ariard> BlueMatt: better to run out of time than running out of feerate :p
11:59 < niftynei> feerate rises are basically a channel bomb now tho
12:00 < niftynei> i mean, maybe they've always been?
12:00 < ariard> Package Relay Solves This (tm)?
12:00 < t-bast> ariard: :D
12:00 < niftynei> i guess knowing you're going to close the channel isn't any better than not knowing you're going to close the channel b/c of a feerate rise
12:00 < roasbeef> dust begone
12:00 < BlueMatt> fwiw, rusty, apologies we've been bad about spec stuff lately - I'm a bit under the weather but will stick my head back above water on spec stuff this week hopefully. would love to move our onion messages impl forward and also https://github.com/lightning/bolts/pull/910 but it seems like you still wanted to change that to use a channel type? Any updates on that? I've gotta run, but would https://github.com/lightning/bolts/pull/918#issuecomment-963519924 solve your issues, roasbeef? Then we can merge that cursed pr and move on with our lives. anyway, I've gotta run.
12:00 < ariard> though not really because we don't have infinite fee-bumping reserve
12:01 < t-bast> See ya BlueMatt!
12:01 < niftynei> there's something kinda ironic here about how lightning is supposed to be the Thing to Use when feerates onchain rise, but also uhh have you seen how lightning runs into problems when feerates onchain rise?
12:02 < limping> very interested in #910
12:02 < niftynei> i guess the real problem is velocity of change here
12:03 < t-bast> niftynei: and scale / number of channels
12:03 < t-bast> If you have only a few channels, you're probably ok even with a large feerate update
12:04 < roasbeef> is it really that diff in the LN case tho? similar scenario of leaking the value of an output if chain fees are high
12:04 < rusty> BlueMatt: NP, thanks!
12:04 < niftynei> t-bast, this is for zero-fee anchor outs yeah? yeah... fewer channels definitely a real win there
12:04 < t-bast> niftynei: yes, for that case, ideally with package relay as well (maybe I'm glimpsing too much into the future though!)
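To make the #919 behaviour discussed above concrete, here is a rough sketch of the relay-side check: count an HTLC as dust if it would be uneconomical to claim on-chain, and fail it back (rather than force-closing) when the configured exposure budget would be exceeded. The weight constant and field names are illustrative, not taken from the spec:

```python
HTLC_TX_WEIGHT = 700  # illustrative; real second-stage weights come from BOLT 3

def dust_threshold_msat(channel):
    """Amount below which an HTLC is not worth claiming on-chain."""
    threshold_sat = channel.dust_limit_sat
    if not channel.option_anchors_zero_fee_htlc_tx:
        # Without zero-fee anchors the HTLC-tx fee (driven by update_fee) also
        # eats into the HTLC, so a feerate spike grows the dust bucket.
        threshold_sat += channel.feerate_per_kw * HTLC_TX_WEIGHT // 1000
    return threshold_sat * 1000

def should_relay(channel, htlc, max_dust_exposure_msat):
    threshold = dust_threshold_msat(channel)
    if htlc.amount_msat >= threshold:
        return True  # not dust, normal forwarding rules apply
    exposure = sum(h.amount_msat for h in channel.pending_htlcs()
                   if h.amount_msat < threshold)
    # Failing the HTLC back instead of relaying it is always safe: the incoming
    # amount is taken from the peer's balance, so refusing never force-closes.
    return exposure + htlc.amount_msat <= max_dust_exposure_msat
```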
12:05 < niftynei> i mean the reality of feerates rising is that a swath of utxos become uneconomical; higher feerates means that some subset of bitcoin is 'economically unspendable'
12:05 < crypt-iq> then there are fewer txns for feerate maybe?
12:05 < niftynei> lightning failures (htlcs onchain) are like having a front row seat to this particular reality
12:06 < rusty> BlueMatt: I will revisit that PR. The channel_type is simply a type which says "don't ever fwd by real scid", which is simple.
12:06 < niftynei> this is definitely not helpful or on topic but interesting nonetheless lol
12:06 < t-bast> niftynei: but if the feerate ever comes back down, you'll be able to claim these utxos then, but no guarantee...
12:06 < niftynei> which is fine for a wallet of utxos, but htlcs have time constraints iiuc
12:06 < ariard> niftynei: yes it's described in the LN paper, ideally we could stop the "time" for LN time-sensitive closes in case of fee spikes
12:07 < t-bast> regarding #910, pm47 on our side spent time experimenting with it, he'll be able to provide more feedback as well
12:07 < niftynei> i think you end up circling back around to the observation a friend of mine who works at stripe made about how "payment processors are actually insurance companies"
12:07 < niftynei> which is to say there is some risk involved in routing payments!
12:07 < niftynei> and you should expect a return that justifies the risk involved ;)
12:08 < crypt-iq> It would be nice to have a channel_type for zero-conf option_scid_alias and nix the min_conf 0 setting. The spec wording as-is is basically the acceptor *hoping* that it's a zero-conf channel w/o knowing the intent of the initiator
12:08 < t-bast> yes that's true, we should probably get that explained better for routing node operators and get them to raise their routing fees a bit ;)
12:08 < t-bast> cdecker[m]: are you still around? I've got a quick q
12:10 < niftynei> t-bast, hehe sounds like good blogpost / blip(spark?) material ;)
12:10 < niftynei> i'm headed out, thanks for chairing t-bast!
12:10 < t-bast> Let's stop now, meeting time is over and we already covered a lot
12:10 < t-bast> #endmeeting
12:10 < lndev-bot> Meeting ended Mon Nov 8 20:10:18 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
12:10 < lndev-bot> Minutes: https://lightningd.github.io/meetings/lightning_spec_meeting/2021/lightning_spec_meeting.2021-11-08-19.02.html
12:10 < lndev-bot> Minutes (text): https://lightningd.github.io/meetings/lightning_spec_meeting/2021/lightning_spec_meeting.2021-11-08-19.02.txt
12:10 < lndev-bot> Log: https://lightningd.github.io/meetings/lightning_spec_meeting/2021/lightning_spec_meeting.2021-11-08-19.02.log.html
12:10 < t-bast> niftynei: true!
12:10 < cdecker[m]> Sure
12:10 < t-bast> Great, I was wondering if you had the answer to that comment: https://github.com/lightning/bolts/pull/759#discussion_r742720955
12:11 < t-bast> Basically do you remember why hmac-sha256 was chosen for the sphinx mac instead of Poly1305?
12:11 < t-bast> Just curious about that
12:11 < t-bast> And to everyone else, thanks for your time today!
12:11 < rusty> t-bast: I think it was from the Sphinx paper, IIRC
12:12 < cdecker[m]> Hm, not sure tbh, don't think it makes a huge difference
12:12 < cdecker[m]> Yeah, I definitely remember it being in the paper
12:12 < t-bast> the sphinx paper just says "pick a mac"
12:12 < t-bast> unless I missed it...
12:13 < t-bast> there is this sentence: "In a realistic implementation, we would use a MAC based on a hash function, such as SHA256-HMAC-128"
12:14 < t-bast> but it doesn't mandate it at all though, any mac would do
12:14 < cdecker[m]> Hm, can't find it being mentioned either
12:14 < t-bast> but maybe since it's the only one mentioned in the paper, that influenced the decision?
12:14 < cdecker[m]> Could be that roasbeef's initial code was using it?
12:14 < cdecker[m]> I based all my modifications and the C implementation on that scaffolding
12:15 < t-bast> roasbeef, you still around?
12:15 < roasbeef> t-bast: yeah, so q is why sha256 there instead of poly? iirc it was that most APIs didn't let you use poly in isolation
12:16 < roasbeef> it was always part of the chacha/poly combo
12:16 < t-bast> right, that's a good point
12:16 < t-bast> nacl probably doesn't let you do poly1305 alone (need to check)
12:16 < roasbeef> also instead of chacha, basically anything could've been used there, since it just wanted something to generate a long string of random bytes, so a stream cipher was an easy choice there, could've also easily been aes in ctr mode
12:17 < cdecker[m]> Hm, sounds like a good explanation ^^
12:17 < roasbeef> t-bast: yeah exactly, even looking at the Go package rn, it says the package is deprecated and you should use the full combo
12:18 < t-bast> Great, that's a very reasonable answer, thanks :)
12:18 < t-bast> Gotta go now, have a good day/evening all, see you next time!
12:19 < t-bast> regarding next time
12:19 < t-bast> I'll be in El Salvador up until the next spec meeting, won't bring my laptop so I won't be able to prepare an agenda or anything
12:20 < t-bast> Or maybe a very late one a few hours before the meeting
12:20 < t-bast> But we'll manage ;)
12:25 < ariard> roasbeef: btw, thanks for the notes https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-November/003336.html :)
12:27 < roasbeef> ariard: np! missed one or two sessions that were earlier in the morning, but tried to fill in some basic info there
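On the HMAC-SHA256 question above: BOLT 4 derives each per-hop Sphinx key with HMAC-SHA256 keyed by an ASCII label ("rho", "mu", ...), and the packet MAC is an HMAC over the routing info plus the associated data, so as noted any MAC construction would have worked. A small Python sketch of that derivation, with placeholder inputs:

```python
import hashlib
import hmac

def derive_key(key_type: bytes, shared_secret: bytes) -> bytes:
    # BOLT 4 key generation: HMAC-SHA256 keyed by the ASCII key type.
    return hmac.new(key_type, shared_secret, hashlib.sha256).digest()

def packet_hmac(shared_secret: bytes, hop_payloads: bytes, associated_data: bytes) -> bytes:
    mu_key = derive_key(b"mu", shared_secret)
    return hmac.new(mu_key, hop_payloads + associated_data, hashlib.sha256).digest()

# Placeholder inputs, just to show the call shape.
shared_secret = bytes(32)
print(packet_hmac(shared_secret, b"\x00" * 1300, b"payment-hash-goes-here").hex())
```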
--- Log closed Tue Nov 09 00:00:27 2021