--- Log opened Mon Mar 02 00:00:12 2020
10:54 < cdecker> @here: specification meeting in <10 minutes, agenda https://github.com/lightningnetwork/lightning-rfc/issues/744 :-)
10:57 < t-bast> Hi Christian!
10:59 < rusty> Hello everyone!
11:00 < lndbot> howdy!
11:00 < niftynei> hi hi hi
11:00 < jkczyz> hey
11:00 < ariard> hi
11:01 < t-bast> hey everyone
11:01 < cdecker> Hi @everybody :-)
11:02 < cdecker> Do we have a quorum? ariard are you representing rust-lightning or will Matt join?
11:03 < ariard> I can but he may join later
11:04 < cdecker> Great
11:05 < cdecker> t-bast: since you created the agenda for today, would you like to take the lead?
11:05 < t-bast> cdecker: alright, let's do this ;)
11:05 < t-bast> #startmeeting
11:05 < lightningbot> Meeting started Mon Mar 2 19:05:29 2020 UTC. The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:05 < lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
11:05 < t-bast> Let's start with a simple PR to warm up
11:05 < t-bast> #topic https://github.com/lightningnetwork/lightning-rfc/pull/736
11:05 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/736
11:06 < t-bast> This is a very simple PR to add more test vectors to Bolt 11 and clarify pico amounts
11:06 < t-bast> It's mostly adding negative test vectors (which we lacked before)
11:07 < cdecker> I guess this also closes #699 (Add bolt11 test vector with amount in `p` units)
11:07 < t-bast> There is one pending comment on whether to be strict about pico amounts ending with 0 or let people round them
11:08 < t-bast> cdecker: I think #699 adds another test vector, we should probably merge the two PRs
11:08 < bitconner> have these been implemented by more than one impl?
11:08 < rusty> t-bast: I think we add "SHOULD reject" to the reader side of the spec. It's straightforward.
11:09 < rusty> bitconner: c-lightning and eclair, IIUC.
11:09 < t-bast> rusty: sounds good to me to add a "SHOULD reject"
11:09 < cdecker> bitconner: I think no implementation was failing this test, just some random JS library, which caused sword_smith to file the issue
11:10 < t-bast> yes those test vectors have been implemented in eclair and CL (found a few places in eclair where we weren't spec compliant)
11:10 < t-bast> it's a good exercise to add them to your implementation, you may have a few surprises :D
11:10 < bitconner> rusty: nice, i'm all for negative tests. i can make a pr to lnd adding these today
11:10 < rusty> t-bast: I take it back, it's already there.
11:11 < t-bast> rusty: woops true, then it's settled
11:11 < cdecker> I guess we can decide on the correctness independently of the implementations since this is just a place where we were underspecified
11:12 * cdecker ACKs on behalf of c-lightning :-)
11:13 < t-bast> ACK for me too, ready to merge
11:13 < rusty> OK, I say we apply 736. Oh, and this superseded 699, FWIW.
11:13 < ariard> no blocker for RL (still a bit behind on invoices stuff...)
11:13 < t-bast> rusty: you sure 699 didn't add another test vector that we lack?
11:14 < cdecker> Ok, so closing #699 since this covers the cases, correct?
11:14 < cdecker> #699 adds a working one, not a counterexample
11:14 < t-bast> But 699 adds a test vector for the pico amount
11:14 < t-bast> which we didn't have before
11:15 < cdecker> But it also doesn't cover new corner cases, does it?
11:16 * rusty checks... yes, t-bast is right, there's no positive pico test. Ack 699 too.
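As an aside for readers, the rule under discussion is that a BOLT 11 amount using the `p` (pico-bitcoin) multiplier must end in 0, because 1 millisatoshi equals 10 pico-bitcoin, so anything finer cannot be represented. A minimal illustrative sketch (not code from #736 or #699; the function name is made up):

```python
# Convert a BOLT 11 human-readable amount (digits + optional m/u/n/p
# multiplier) to millisatoshi, rejecting sub-millisatoshi precision.
# 1 BTC = 10^11 msat, so 1 msat = 10 pico-BTC.
def bolt11_amount_to_msat(amount: str) -> int:
    if not amount:
        raise ValueError("empty amount")
    mult = amount[-1] if amount[-1] in "munp" else None
    digits = amount[:-1] if mult else amount
    if not digits.isdigit():
        raise ValueError("amount must be digits, optionally followed by m/u/n/p")
    value = int(digits)
    if mult == "p":
        if value % 10 != 0:           # the "SHOULD reject" case discussed above
            raise ValueError("sub-millisatoshi precision is not representable")
        return value // 10            # 10 pico-BTC = 1 msat
    if mult == "n":
        return value * 100            # 1 nano-BTC  = 100 msat
    if mult == "u":
        return value * 100_000        # 1 micro-BTC = 100,000 msat
    if mult == "m":
        return value * 100_000_000    # 1 milli-BTC = 10^8 msat
    return value * 100_000_000_000    # whole BTC   = 10^11 msat

# e.g. bolt11_amount_to_msat("2500u") == 250_000_000
#      bolt11_amount_to_msat("1p") raises ValueError
```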
11:16 < t-bast> Yeah I believe we should apply both (positive and negative tests)
11:16 < cdecker> ok
11:17 < t-bast> #action merge #699 and #736
11:17 < t-bast> if you disagree with the action items, please say it during the meeting so we can clear up miscommunication
11:18 < t-bast> #topic Clarify gossip messages
11:18 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/737
11:18 < t-bast> There has been a lot of back and forth between implementations over channel range messages, the spec was giving a bit too much leeway there
11:18 < t-bast> This is an attempt at clarifying the situation and making the spec stricter
11:19 < t-bast> conner did you have time to look at this? I don't remember if it was you or wpaulino who worked on that recently for lnd?
11:19 <+roasbeef> stuff on the PR itself should supersede irc (approvals), as it's a much terser medium of comms
11:19 <+roasbeef> it was wpaulino on our end
11:19 <+roasbeef> we also impl the non-overlapping req as is rn
11:20 <+roasbeef> we'll accept a few diff versions (to support older lnd nodes), but now send things in a complete (entire range) and non-overlapping manner
11:21 < rusty> t-bast: your example in there is invalid; we assume all channels in a single block are in the same response
11:21 < t-bast> roasbeef: do you know if lnd would accept reading the version defined in this PR?
11:21 < rusty> t-bast: we do the PR behaviour currently, so I think it has to be ok?
11:22 < t-bast> rusty: I think we need to make it explicit then, eclair splits between multiple responses. It's technically possible that a single response *cannot* hold all the channels that were created in a single block (I need to dig up the calculations for that)
11:23 < t-bast> IIRC a single bitcoin block may be able to fit more than 3000 channel opens, which overflows the 65kB lightning message limit
11:23 < t-bast> Do we care or do we consider this an extremely unlikely event and ignore it?
11:23 < bitconner> why would the responder set `first_block_num` to something less than what's in the request?
11:23 < rusty> t-bast: 64k / 8 == 8k? And these compress really really well...
11:24 < t-bast> rusty: but you need to spec also for the uncompressed case?
11:24 < sstone> bitconner: if they want to reuse pre-computed responses for example
11:24 < rusty> bitconner: an implementation is allowed to keep canned responses, so you might keep a set of scids for blocks 1000-1999, etc.
11:24 < t-bast> rusty: and we'd like the same splitting behavior regardless of compressed/uncompressed, right?
11:24 <+roasbeef> t-bast: good point re max chans for uncompressed
11:25 < rusty> t-bast: sure, but we can't fit 8k txs in a block anyway?
11:25 <+roasbeef> i don't think that's disallowed as is now though? (repeated first_blocknum over multiple messages)
11:25 < t-bast> rusty: but you can also ask for timestamps, so your calculation is too optimistic I believe
11:25 < t-bast> and checksums I mean
11:26 < rusty> t-bast: ah right! Yes, 4k.
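For reference, the arithmetic behind the numbers thrown around here, as a rough illustrative sketch (assumptions: the 65535-byte lightning message limit, 8-byte uncompressed short_channel_ids, and 4-byte timestamps and checksums per direction; the reply's small fixed header and TLV overhead are ignored):

```python
# How many channels fit in one uncompressed reply_channel_range, roughly.
MAX_MSG_PAYLOAD = 65535  # bytes

def max_scids_per_reply(with_timestamps: bool, with_checksums: bool) -> int:
    per_channel = 8                  # uncompressed short_channel_id
    if with_timestamps:
        per_channel += 2 * 4         # one channel_update timestamp per direction
    if with_checksums:
        per_channel += 2 * 4         # one channel_update checksum per direction
    return MAX_MSG_PAYLOAD // per_channel

print(max_scids_per_reply(False, False))  # 8191  ("64k / 8 == 8k")
print(max_scids_per_reply(True, False))   # 4095  ("Yes, 4k")
print(max_scids_per_reply(True, True))    # 2730  (timestamps + checksums)
```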
11:26 < t-bast> Point is, I think that if you ask for the most information, in the extreme case where a Bitcoin block is full of channel opens it would not fit in a single lightning message
11:26 < sstone> roasbeef: it's not explicit but then you'd send the next query because the current one has been fully processed
11:26 < sstone> because -> before
11:27 <+roasbeef> sstone: which is part of why we used the old "complete" as termination, makes the ending _explicit_
11:27 < rusty> This has always been a feature of the protocol, though; there's no way to tell.
11:27 <+roasbeef> "send w/e, just let us know when you're done, we'll handle reconciliation afterwards"
11:27 < t-bast> I must admit an explicit termination field clears up a lot of this complexity :)
11:28 < rusty> Sure, but I wasn't worried about 8k chans per block, and I'm not worried about 4k now, TBH.
11:28 < t-bast> Heh true, I just wanted to raise the point but we can choose to ignore this unlikely event
11:29 < cdecker> Technically it's not the number of txs in a block, it's the number of outputs created in a block, same thing though, not worth the extra effort
11:29 < t-bast> It may be an issue if in the future we want to add more information than just checksum + timestamp (but I don't know if we ever will)
11:29 < t-bast> We'll probably have replaced the gossip mechanism at that point.
11:30 <+roasbeef> possible as is also to continue protocol messages over a series of transport messages
11:30 < t-bast> Are we ok with saying that we don't care about the edge case where a block is full of channel openings? And we restrict the gossip mechanism to not split a block's scids over two responses?
11:30 < rusty> Our code will log a "broken" message if this ever happens, and get upset. But we should consider this for gossip NG or whatever follows.
11:32 < bitconner> in theory doesn't that mean someone could prevent your channel from being discovered if they pack the beginning of the block?
11:32 < rusty> bitconner: unless you do random elimination for this case? But they could def. affect propagation.
11:33 < bitconner> i suppose you could randomize it if you have more than what would otherwise fit, but that kinda defeats the purpose of canned responses
11:33 < rusty> (In *practice*, these scids compress really well, so the actual limit is higher).
11:34 < cdecker> Too many channels sounds like a good problem to have, and until then let's just take the easy way out
11:34 < t-bast> rusty: why do you need the "minus 1" line 791 if we always include a whole block of scids in each response?
11:34 < cdecker> No need to cover every unlikely corner case imho
11:34 < m-schmoock> hey
11:35 < t-bast> cdecker: I agree, this seems unlikely enough for the lifetime of that gossip mechanism
11:35 < cdecker> We'll rework the gossip protocol before it ever becomes an issue
11:35 < rusty> t-bast: oops, the minus one is completely wrong. Remove the "minus one" from that sentence :(
11:35 <+roasbeef> unlikely? it's deterministic-ish and there's just a set cost to crowd out the channels
11:36 < t-bast> roasbeef: if you use the compressed version it's impossible that it happens unless we reduce the block size :)
11:37 < t-bast> roasbeef: we'd need to double-check the math, but it only happens when using uncompressed and asking for both timestamps and checksums, and you'd really need to fill the whole block
11:37 < t-bast> rusty: cool, in that case it makes sense to me. I'd like to re-review our code before we merge it in the spec, but I don't have other comments for now.
11:37 < bitconner> checksums compress well?
11:38 < t-bast> bitconner: mmmh not very likely, but I haven't tested it
11:38 < bitconner> i think these assumptions need some validation
11:38 < cdecker> roasbeef: what would you do that for? It's expensive (you'd need to pay all the fees in the block basically) for very little effect (you prevented your arch-nemesis from announcing a channel in the backlog sync, but they can still broadcast updates that'll push your announcement through...)
11:39 < bitconner> if we get rid of backlog sync all of these issues go away, no?
11:39 < rusty> bitconner: checksums don't compress, and timestamps can be made harder to compress, but the scids for the same block obv. compress well.
11:40 <+roasbeef> cdecker: idk if the "why" is all that pertinent, it's clear that it's possible given a certain cost, but correct that you can still just attempt to broadcast your stuff
11:40 < bitconner> or maybe not
11:41 <+roasbeef> but on our end, we'll check our recv logic to see if it's compat w/ #737 as is (I suspect it is)
11:41 < rusty> If someone opens 4k channels at once, I promise I'll fix this :) FWIW, we can theoretically open ~24k channels in one block. That ofc would not fit.
11:43 < t-bast> alright so let's all check our implementations against the latest status of this PR and report on github?
11:43 < bitconner> sgtm
11:44 < t-bast> #action all implementations should verify their compliance with the latest version of #737 and report on Github
11:44 < t-bast> #topic Stuck channels are back - and they're still stuck
11:45 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/740
11:45 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/750
11:45 < t-bast> I opened two small PRs that offer two different alternatives on how to fix this.
11:45 < t-bast> #740 adds a new reserve on top of the reserve (meta)
11:46 <+roasbeef> the game of whack-a-mole continues :p
11:46 < t-bast> #750 allows the funder to dip once into the reserve to pay for the temporary fee increase of the commit tx for an incoming HTLC
11:46 < t-bast> :P
11:46 < cdecker> +1 for keeping extra reserves, dipping into the channel reserves should be avoided
11:47 < t-bast> I think #750 makes more sense to be included in the spec: that way HTLC senders will send such an "unblocking" HTLC. 740 can be implemented regardless of whether it's in the spec, as an additional safety if implementers feel it's needed (for small channels maybe).
11:47 < rusty> 750 is not implementable :(
11:47 <+roasbeef> how isn't it the case that the extra reserve just delays things?
11:47 < t-bast> rusty: why?
11:47 <+roasbeef> concurrent send case prob
11:47 < rusty> To be clear, the current requirement only applies to the fee payer. There *is* no requirement on the other side.
11:48 < t-bast> rusty: yes but right now the other side still avoids sending that HTLC
11:48 < m-schmoock> t-bast: additional should be in spec or not at all, not optional, otherwise we can't know how much a channel can send or receive, because remote might have different option
11:48 < m-schmoock> *opinion
11:48 < t-bast> at least in eclair, lnd and c-lightning
11:48 < t-bast> if we think 750 isn't the way to go, then yes 740 should go in the spec IMO :)
11:49 <+roasbeef> we recently added some additional sanity checks in this area, to avoid things like sending an htlc that can reside in one commitment but not the other
11:49 < rusty> t-bast: sorry, it is implementable (and, in fact, we implemented this), but it's a bit misleading because it can still happen due to concurrency.
11:49 < t-bast> It's true that it can happen because of concurrency
11:50 < t-bast> If you all think the additional reserve (#740) is a better solution, I'm totally fine with that. I just wanted to explore both before we decide.
11:50 < rusty> We tried not to force the payer into that state because it's a bit anti-social, and risks creating a lower fee tx than we're comfortable with.
11:50 < rusty> t-bast> I prefer 750, too, because reserve is already a PITA, and as it gets harder to calculate, it gets worse.
11:51 < rusty> (We implemented both, and have applied 740, FWIW)
11:51 < m-schmoock> at least a margin of 2 times the fees eliminates the risk of someone trying to force a remote channel into that state
11:51 < rusty> My main reason for not applying my 750 implementation instead was that I wasn't sure in practice how impls would handle being forced to dip into reserves.
11:51 < m-schmoock> though technically it can still lock
11:52 < t-bast> rusty: yeah I tried that too and current implementations didn't like it :)
11:52 < rusty> t-bast: OK, then I think we go for 740 in that case.
11:52 < niftynei> i have to hop off, see you all next time :wave:
11:53 < t-bast> m-schmoock: true that the additional reserve "just works" and feels safer, but it felt like the reserve was there for this...
11:53 * t-bast waves at niftynei
11:53 < t-bast> Alright, is everything more in favor of 740 (additional reserve)?
11:53 < t-bast> Not everything, everyone :)
11:53 < m-schmoock> in absence of a better solution, yes
11:54 < m-schmoock> rusty: we need to adapt the factor from 1.5 to 2
11:54 < rusty> m-schmoock: agreed, but that's easy to fix.
11:54 < m-schmoock> also, how do we know when/if a node supports this?
11:55 < t-bast> we can bikeshed the factor on the PR, I'm not locked on the decision of using 2. I think matt may want to suggest a value there too, ariard what's your take on this?
11:55 < m-schmoock> because if we can't know (yet) due to outdated software, we can't calculate receivable/spendable on a channel correctly (yet)
11:55 <+roasbeef> factor of 2 just again seems to be deferring things w/o actually fundamentally fixing anything
11:56 < t-bast> roasbeef: why? it only means that the fee increase you can handle is bigger and not unbounded, that's true, but it does prevent the issue
11:56 < t-bast> roasbeef: it's true that it's not a fundamental fix, but we don't see an easy one that fixes that in the short term...
11:57 < m-schmoock> t-bast: how do we signal the remote peer that #740 is in effect or not?
11:57 < t-bast> roasbeef: this is something users run into, and it's a pain to get out of without closing the channel
11:57 < t-bast> m-schmoock: we could add a feature bit but I think it would really be wasting one...
11:58 < m-schmoock> or do we just migrate over time and run into rejections until everyone supports this
11:58 < t-bast> I'll think about signaling more
11:58 < bitconner> iiuc, the non-initiator can still send multiple htlcs and end up in the same situation as today?
11:58 < t-bast> m-schmoock: that was my idea at first, yes. But I can investigate some signaling.
11:59 < t-bast> bitconner: no because at least one of the HTLCs will go through
11:59 < m-schmoock> t-bast: we should at least think about this
11:59 < t-bast> unless I'm missing something
11:59 < t-bast> and once 1 HTLC has gone through, you're unblocked for the others
11:59 < t-bast> roasbeef can you clarify why you think it doesn't fix this?
11:59 < t-bast> m-schmoock: will do.
12:00 < m-schmoock> hmmm, am I mistaken, or is it still possible to spam a channel with a lot of trimmed (no fees) HTLCs until the remote is forced into lockup?
12:01 < m-schmoock> :D just saying
12:01 < t-bast> m-schmoock: I don't think so, can you detail?
12:01 < t-bast> it only happens on the funder side
12:02 < m-schmoock> im not too deep into the LN protocol yet, but afaik small trimmed HTLCs have no onchain fee 'requirement' because they are considered dust
12:02 < t-bast> so if the funder keeps his extra reserve, when you send him trimmed HTLCs you still increase his balance slowly, which allows him to pay the fee
12:02 < t-bast> they are still added to the balance once fulfilled
12:02 < m-schmoock> so, in order to force a remote (even 3rd party) channel into the locked-up state, a mean person would have to drain a remote by repeating dust HTLCs until it's locked
12:03 < t-bast> no, it's the other way around
12:03 < t-bast> if you want the remote to be stuck, you have to be the fundee. And the remote has to send all his balance to you (which you don't control).
12:03 < m-schmoock> which I did by circular payments
12:03 < t-bast> If that remote keeps the extra reserve, he'll be safe, at some point he'll stop sending you HTLCs
12:04 < t-bast> and he'll have the extra reserve allowing him to receive HTLCs
12:04 < m-schmoock> you can always route through a 'victim' by using dust HTLCs
12:04 < t-bast> yes but that victim will not relay once he reaches his additional reserve
12:05 < t-bast> and that additional reserve makes sure his channel isn't stuck
12:05 < m-schmoock> but the PR says 2*onchain fees (which is 0 for trimmed, right?)
12:05 < ariard> t-bast: sorry, if I understand the logic of the #740 fix correctly, you require that the funder keeps an additional reserve, so that when the fundee tries to send an HTLC to rebalance the channel it works?
12:05 < t-bast> ariard: exactly
12:06 < m-schmoock> maybe I'm mistaken
12:06 < ariard> okay so no risk of the funder hitting the bottom of the additional reserve by sending an HTLC from its side (or that would be a spec violation)
12:06 < m-schmoock> t-bast: aahh, sorry I misread the PR. maybe we should clarify
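To make the mechanics concrete, here is a minimal sketch of the #740 idea as described above (hypothetical helper names, satoshi amounts; it assumes the BOLT 3 commitment weights of 724 base plus 172 per untrimmed HTLC, not the PR's exact wording): the funder keeps enough balance above the channel reserve to pay the commitment fee at twice the current feerate with one extra untrimmed HTLC, so an incoming HTLC can still be accepted. Trimmed (dust) HTLCs add no weight, which matches the point above that they don't eat into the buffer.

```python
COMMIT_WEIGHT = 724   # BOLT 3 commitment tx base weight
HTLC_WEIGHT = 172     # BOLT 3 weight added per untrimmed HTLC output

def commit_fee(feerate_per_kw: int, num_untrimmed_htlcs: int) -> int:
    # Commitment transaction fee paid by the funder, in satoshis.
    weight = COMMIT_WEIGHT + HTLC_WEIGHT * num_untrimmed_htlcs
    return feerate_per_kw * weight // 1000

def funder_keeps_fee_buffer(funder_balance: int, channel_reserve: int,
                            feerate_per_kw: int, num_untrimmed_htlcs: int) -> bool:
    # Fee spike buffer: the current commitment plus one more untrimmed HTLC,
    # priced at 2*feerate_per_kw, must still be payable above the reserve.
    buffer = commit_fee(2 * feerate_per_kw, num_untrimmed_htlcs + 1)
    return funder_balance - channel_reserve >= buffer
```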
12:06 < t-bast> ariard: exactly, maybe there's some clarification needed for dust HTLCs
12:07 < m-schmoock> * pay the fee for a future additional untrimmed HTLC at `2*feerate_per_kw` while
12:07 < m-schmoock> (untrimmed)
12:07 < t-bast> #action t-bast close 750 in favor of 740
12:07 < t-bast> #action clarify the case of dust HTLCs
12:07 < t-bast> #action everyone to continue the discussion on github
12:07 < ariard> btw you should switch to a better name, that's not a penalty_reserve but a fee_uncertainty_reserve ?
12:08 < t-bast> good idea, let's find a good name for that reserve :)
12:08 < ariard> I mean both reserves have different purposes, one is for security, the other for paying fees
12:08 < t-bast> let's move on to be able to touch on one long-term feature
12:08 < t-bast> let's continue the discussion on github
12:08 < ariard> sure, let's move on
12:09 < t-bast> @everyone we have time for one long-term topic: should we do rendezvous, trampoline or protocol testing framework?
12:09 < t-bast> the loudest win
12:10 < ariard> IMO rendez-vous, decoy, option_scid: the different proposals trade off wrt the exact problem they are trying to cover?
12:10 <+roasbeef> down to talk about rendezvous, namely how any pure sphinx-based approach leaves a ton to be desired as far as UX, so not sure it's worth implementing
12:10 < cdecker> Now he tells me, after I implemented it like 3 times xD
12:10 <+roasbeef> (namely single path, no full error propagation, not compatible with mpp, etc)
12:10 <+roasbeef> heh
12:10 < t-bast> okay sounds like we'll do rendezvous :)
12:10 < t-bast> #topic Rendezvous with Sphinx
12:10 < cdecker> But I totally agree, it's rather inflexible, but it can provide some more tools to work with
12:11 <+roasbeef> sure, but would it see any actual usage? given the drastic degrade in UX
12:11 < t-bast> I think that even though it's not very flexible, the current proposal is really quite cheap to implement so it may be worth it.
12:11 <+roasbeef> rv node is offline, now what? the channel included in the rv route was force closed or something, now what?
12:11 < t-bast> I know we'd add it to eclair-mobile and phoenix, so there would be end-user usage
12:12 <+roasbeef> the invoice is old, and another payment came in, so now that rv channel can't recv the payment, now what?
12:12 < rusty> t-bast: with compression onions, it makes sense. We can have multiple, in fact.
12:12 < cdecker> It can be a powerful tool to hide the route inside a company setup for example
12:12 < bitconner> rusty: compression onion?
12:13 < cdecker> It can be useful as a routing hint on steroids
12:13 < t-bast> I think it's a powerful tool to hide a wallet's node_id and channel, replacing decoys
12:13 < t-bast> because a wallet directly connected to a node knows its unannounced channel will not be closed for no reason (or he'd re-open one soon and non-strict forwarding will do the trick)
12:14 * cdecker likes it for its nice construction ;-)
12:14 <+roasbeef> it's like 10x larger than a routing hint though, and at least routing hints let you do things like update chan policies on that route if you have stale data
12:14 < t-bast> roasbeef: it's not in cdecker's latest proposal
12:14 <+roasbeef> feels like we're conflating rv w/ some sort of blinded hop hint
12:15 < cdecker> With the compressible onion stuff we might actually end up with smaller onions than routing hints
12:15 < t-bast> the thing is blinded hop hints may in fact be more costly to implement because there would be more new mechanisms
12:15 < t-bast> while rendezvous has all the bricks already there in any sphinx implementation, so even a one-hop rendezvous would be quite an efficient blinding
12:15 < rusty> roasbeef: yeah, we really want blinded hints. RV with compressed onions is one way of getting us there.
12:16 < t-bast> I've looked again at hornet and taranet, and it's really a big amount of work. While it's very desirable in the long term, I think a shorter-term solution would be quite useful too.
12:17 < cdecker> The question really isn't whether we want to add it to the spec (LL can obviously opt out if you don't see the value). The real question is whether the construction makes sense
12:17 <+roasbeef> t-bast: eh, we're basically halfway there w/ hornet, we only need to implement the data phase, we've already implemented the set up phase
12:18 < cdecker> Otherwise we can talk about sensibility all day long and not make progress...
12:18 < t-bast> roasbeef: but I think we'd want the taranet version that has data payload integrity, otherwise we're getting worse than simple sphinx in terms of tagging attacks
12:18 <+roasbeef> t-bast: would mean we'd just include a data rv in the invoice, then use that to establish initial comms, get new htlc rv routes, send other signalling information, etc
12:18 <+roasbeef> can just use an arbitrary input size block cipher, and tagging attacks don't exist
12:19 <+roasbeef> cdecker: so the primary goal is extended blinded unadvertised channels primarily?
12:19 < t-bast> If all implementations can commit on dedicating resources right now to work on Hornet/Taranet, I'd be fine with it. But can we really commit on that?
12:19 < rusty> roasbeef: well, that's basically what you get with offers. You need rv to get the real invoice, which contains up-to-date information.
12:20 < t-bast> roasbeef: can you send me more details offline about that suggestion for the block cipher? I'd like to look at it closely, I don't think it's so simple
12:20 < rusty> (I already (re-)implemented and spec'ed the e2e payload in Sphinx for messaging, was hoping to post this week)
12:20 < cdecker> roasbeef: I meant to expand the use-cases section a bit, but it can serve as 1) extended route hints, 2) recipient anonymity, 3) forcing a payment through a witness (game server adjudicating who wins), ...
12:21 <+roasbeef> yeh all that stuff to me just seems to be implementing hand-rolled partial solutions for these issues when we already have a fully spec'd protocol that addresses all these use cases and more (offers, sphinx messaging, etc, etc)
12:21 < cdecker> So how is recipient anonymity implementable without RV?
12:23 * cdecker will drop off in a couple of minutes
12:23 < cdecker> Maybe next time we start with the big picture stuff?
12:23 < t-bast> Good idea, let's say that next time we start with rendezvous? And everyone can catch up on the latest state of cdecker's proposal before then?
12:23 <+roasbeef> not saying it is w/o it, i'm a fan of RV, but variants that can be used wholesale to nearly entirely replace payments as they exist now with similar or better UX
12:24 < cdecker> Cool, and I'd love to see how t-bast's proposal for trampoline + rendezvous could work out
12:25 < cdecker> Maybe that could simply be applying the compressible onion to the trampoline onion instead of the outer one?
12:25 < t-bast> cdecker: yeah in fact it's really simple, I'll share that :)
12:25 < cdecker> That'd free us from requiring the specific channels in the RV onion and instead just need the trampoline nodes to remain somehow reachable
12:25 < t-bast> roasbeef: could you (or conner, or someone else) send a quick summary of the features we're working on that hornet could offer for free?
12:26 < cdecker> Anyway, need to run off, see you next time ^^
12:26 < t-bast> roasbeef: it would be nice to clear up what we want to offload to hornet work and what's required even if we do hornet in the future
12:26 < t-bast> Bye cdecker, see you next time!
12:27 < t-bast> I need to re-read the part where they discuss receiver anonymity
12:27 < ariard> also, using hornet, we have a protocol where at least the privacy analysis has been done seriously; redoing this work again for all partial impls...
12:27 < t-bast> ariard: can you clarify? I didn't get that
12:27 < bitconner> t-bast: sure i can work on a summary
12:28 < t-bast> great thanks bitconner!
12:28 < ariard> t-bast: cf your point on the decoy proposal and bleichenbacher-style attacks
12:29 < t-bast> oh yeah, I think rendezvous feels safer for that - a one-hop rendezvous achieves the blinding I'm interested in for wallet users
12:30 < t-bast> avoids them leaking their unannounced channel and node_id (as long as we stop signing the invoice directly with the node_id)
12:31 < t-bast> Alright, sounds like we're reaching the end, please all have a look at the PRs we didn't have time to address during the meeting.
12:31 < ariard> t-bast: aren't unannounced channels a leak by themselves for the previous hop?
12:31 < t-bast> #action bitconner to draft a summary of the goodies of hornet related to currently discussed new features
12:32 < t-bast> #action t-bast to share rendezvous + trampoline
12:32 < t-bast> ariard: yes but this is one you can't avoid: if you're using a wallet on a phone and can't run your whole node, there's no point in trying to hide from the node you're connected to
12:32 < t-bast> they'll always know you are the sender/recipient and never an intermediate node
12:33 < t-bast> if you want to avoid that, you'll have to run your own node, otherwise you need to be fine with that :)
12:33 < t-bast> #endmeeting
12:33 < lightningbot> Meeting ended Mon Mar 2 20:33:24 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
12:33 < lightningbot> Minutes: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-03-02-19.05.html
12:33 < lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-03-02-19.05.txt
12:33 < lightningbot> Log: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-03-02-19.05.log.html
12:33 < ariard> t-bast: okay that's only a limitation for the trampoline+gateway approach for mobile
12:33 < t-bast> Thank you all, I'm looking forward to the discussion on rendezvous, hornet, taranet, trampoline we'll have next time!
12:34 < ariard> I was thinking that for privacy-preserving, non-mobile-phone nodes, you may want to announce channels
12:34 < t-bast> ariard: no that's also a limitation with non-trampoline: if you're on a phone, the node you're connected to will know that and thus knows you're not an intermediate node
12:34 < t-bast> that's true though for non-mobile nodes
12:35 < t-bast> for non-mobile nodes privacy is much simpler :D
12:35 < t-bast> because you'll always benefit from the fact that you may be simply relaying
12:36 < ariard> t-bast: how can the connected node learn that you're a mobile one if you don't use a private channel? (the fact that you only have 1 public channel?)
12:39 < t-bast> ariard: because you can't be offline 24/7 :D
12:39 < t-bast> simply using pings he'll see that you're mostly offline, and always connecting from different IP addresses
12:40 < t-bast> that's a very good heuristic that you're on a phone, so clearly not relaying
12:40 < t-bast> sry I meant *online*
12:40 < t-bast> (in the first sentence)
12:41 < t-bast> edit: you can't be online 24/7
12:41 < m-schmoock> alright, good night everyone.. except rusty, he lives at the end of the world :D
12:41 < t-bast> haha see you m-schmoock!
12:45 < t-bast> I'm heading off too, good night/day everyone
--- Log closed Tue Mar 03 00:00:12 2020