--- Log opened Mon Mar 30 00:00:37 2020
10:39 < ariard> meeting today?
11:47 < cdecker> I think so ariard
11:47 < cdecker> At least we do have an agenda with today's date on it :-)
12:00 < t-bast> Hi everyone!
12:01 < ariard> hello
12:01 < sstone> Hi!
12:01 < cdecker> Hello ^^
12:03 < ariard> what's up on the agenda today?
12:03 < t-bast> https://github.com/lightningnetwork/lightning-rfc/issues/757
12:03 < bitconner> hello!
12:04 < t-bast> We're mainly wrapping up on the long-lasting PRs, and we'll be able to spend more time on long-term features :)
12:04 < t-bast> hey bitconner!
12:04 < rusty> Hi all!
12:05 < t-bast> Looks like we're enough people, shall we start? Does someone want to chair or shall I?
12:05 < rusty> t-bast: you need to hang out on IRC more when there's no meeting. Wanted to chat about blinded path progress (TL;DR: there is some!)
12:05 < jkczyz> hi
12:05 < rusty> t-bast for president!
12:05 < t-bast> rusty: oh yeah, just ping me by Signal or mail and I'll connect ;)
12:06 < t-bast> #startmeeting
12:06 < lightningbot> Meeting started Mon Mar 30 19:06:01 2020 UTC. The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:06 < lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
12:06 < t-bast> First of all, the agenda for today
12:06 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/issues/757
12:06 < t-bast> #topic Channel range clarification
12:06 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/737
12:07 < t-bast> The only remaining issue is whether to allow a block to spill into multiple messages
12:07 < t-bast> rationale being that theoretically, without compression, a block full of channel opens couldn't fit in a single message
12:08 < t-bast> From previous discussions, CL and eclair thought it was too much of an edge case to worry about, LL wanted to investigate a bit more
12:08 < t-bast> wpaulino had a look at the PR but didn't ACK; the way I understood it he's ok with a block fully contained in a single message, but I'd like an ACK from him or someone else on the LL team?
12:09 < rusty> In theory you could return a random sampling in that case and mark your response `full_information` = false. (c-lightning just fails to respond in this case tho)
12:10 <+roasbeef> feels like either a txindex in the response, or an explicit "ok i'm done" would resolve this nicely
12:10 < t-bast> true, but it means that when full_information = false we can never know when you're done sending replies
12:11 <+roasbeef> i don't think the 'complete' field as is, is even used properly, nor is there any support for signalling a node has only part of some of the graph
12:11 < t-bast> I agree that an explicit indication that everything is sent would be much easier -> why not add a TLV?
12:11 < rusty> t-bast: no, you're done still. You just didn't send it at all.
12:11 <+roasbeef> t-bast: yeh in the past, we erroneously used 'complete' as that
12:11 <+roasbeef> but it made everything a lot simpler on our end
12:11 < t-bast> rusty: ah ok, so senders aren't allowed to send one block in multiple replies - with that assumption it works
12:12 < rusty> t-bast: ack. We could allow it except for the final block, but that seems like we're not really helping.
12:12 < t-bast> it seems to me that the issue of a block full of channel opens can safely be ignored; are you really not using compression?
12:13 < bitconner> did we figure out a rough number for how much compression helps when sending timestamps and checksums?
12:13 < t-bast> I thought you were checking that :D
12:13 < bitconner> no?
12:13 < bitconner> i don't recall that :P
12:13 < t-bast> I'd shared rough numbers a month ago, I'll try to dig them up
12:14 < t-bast> IIRC rusty or christian had shared some calculations too
12:15 < cdecker> I have historical gossip data, no info on compressibility though
12:16 < bitconner> ideally the numbers would be about worst-case compression too
12:16 < t-bast> we'd need more than 3.5k channel opens in a single block and to request both timestamps and checksums without compression to be in trouble
12:17 < rusty> Even assuming the checksums and timestamps don't compress, the short_channel_ids are an ordered sequence (with the same block num). That's basically 3 of 8 bytes for free, so it's 5 bytes per reply, 64k == 13,000 channels. We can theoretically open 24k per block though.
12:17 < cdecker> Nice, so not fitting into a message is basically impossible for Bitcoin
12:17 * rusty repeats his promise to fix this as soon as it becomes a problem :)
12:18 < t-bast> I'm honestly in favor of not worrying about this now
12:18 <+roasbeef> ok, we'll leave it up to our future selves then
12:18 < t-bast> alright so we're good to merge 737?
12:18 < bitconner> isn't it 5 + 4*4 bytes per reply?
12:20 < bitconner> which produces figures more like t-bast's 3.5k
12:20 <+roasbeef> i think rusty isn't including the optional checksum and timestamp?
12:21 < rusty> bitconner: you're right, I forgot there are two csums and timestamps per entry. So 24 bytes uncompressed. So we're closer indeed to 3.5k
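[Editor's note: a quick back-of-the-envelope check of the figures above, as a minimal Python sketch. The 65535-byte message limit and the 8-byte short_channel_id / 4-byte timestamp / 4-byte checksum sizes come from the gossip queries spec; treating the shared block height as "free" follows rusty's reasoning, and fixed header overhead is ignored.]

```python
# Capacity of one uncompressed reply_channel_range message.
MAX_MSG = 65535            # maximum Lightning message payload in bytes

SCID = 8                   # short_channel_id: block height (3) + tx index (3) + output index (2)
TIMESTAMPS = 2 * 4         # one 4-byte timestamp per direction
CHECKSUMS = 2 * 4          # one 4-byte checksum per direction

# Full query (timestamps + checksums), no compression: 24 bytes per channel.
print(MAX_MSG // (SCID + TIMESTAMPS + CHECKSUMS))   # ~2730, t-bast's "around ~3k"

# short_channel_ids only, with the shared 3-byte block height compressing away:
print(MAX_MSG // (SCID - 3))                        # ~13107, rusty's "64k == 13,000"
```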
12:22 < t-bast> yeah my calculation was for the full thing, which yields around ~3k channels per message
12:22 < t-bast> timestamps are likely to compress quite well, so probably more than that
12:22 * rusty really needs to get back to set reconciliation which doesn't have this problem (I promise all new problems!)
12:22 < t-bast> hehe
12:22 < bitconner> the timestamps aren't ordered tho, so not necessarily
12:22 < rusty> Yeah, timestamps have to be in a 2 week range though.
12:23 * cdecker is reminded of the 12 standards XKCD when it comes to gossip protocols xD
12:23 < bitconner> but there def will be similar prefixes
12:24 < t-bast> exactly, so we'll be around at least 3 or 3.5k channels per message; far from something that should worry us IMHO
12:24 < bitconner> i agree the chances of it being an issue are pretty rare. in general we should in the longer term revisit the gossip with invs, better node ann querying, etc
12:25 < t-bast> I think if we start seeing a rate of 3k channel opens every 10 minutes, we'll have a lot more problems than the gossip :D
12:25 < t-bast> So shall we move on and merge this?
12:25 < cdecker> Well gossip might still be the issue, but the message size limitation isn't likely to be the biggest issue :-)
12:25 < cdecker> ACK
12:26 <+roasbeef> sgtm
12:26 < bitconner> sgtm as well
12:26 < t-bast> #action merge #737
12:26 < t-bast> #topic Stuck channels
12:26 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/740
12:27 < t-bast> We're still going with the option of the additional reserve, for lack of a better short-term solution
12:27 < t-bast> The main issue with this PR honestly is just that wording this is a nightmare
12:28 < rusty> Yes, we've implemented this for now.
12:28 < t-bast> It spun off a lot of interesting discussions, but I think there's overall agreement on the mechanism: reserve some buffer for a future extra HTLC, with a feerate increase of x2 (which is just a recommendation)
12:30 < ariard> yeah it seems to blur 2 problems: preventing the initiator from getting the channel stuck by offering an HTLC, and safeguarding against fee spikes in some time N to keep incoming capacity
12:31 < t-bast> the safeguard against fee spikes is really for that specific problem, to avoid getting stuck
12:31 < t-bast> because that's how you currently get into that situation
12:32 < rusty> t-bast: my only complaint is that since the non-fee-payer MUST NOT send an HTLC which causes the fee-payer to dig into reserves, the alternate solution (letting just one more HTLC through in this case) is now technically banned.
12:33 < t-bast> rusty: that's true. We previously didn't have any requirement on the non-fee-payer, do you want to keep it at that?
12:34 < t-bast> Or turn the third paragraph into a SHOULD NOT?
12:34 <+roasbeef> seems like things are pretty active on the PR in the spec repo, anything critical that needs to be discussed here?
12:35 < rusty> t-bast: SHOULD NOT. Everyone implemented a limit despite not being required to, which caused this problem.
12:35 <+roasbeef> on the implementation side, we'll need to make sure all our interpretations of the spec are actually compatible
12:35 <+roasbeef> I can see one impl having slightly diff behavior which causes a de-sync or force close
12:36 < t-bast> roasbeef: really? I don't think there's a risk there, you're just implementing behavior to restrict yourself (when funder) from sending too much
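[Editor's note: a minimal sketch of the "fee spike buffer" mechanism under discussion, assuming the BOLT 3 pre-anchor commitment weights; the function names and the exact shape of the check are illustrative, not the #740 spec text. The idea: before offering an HTLC, the funder verifies it could still pay the commitment fee if the feerate doubled and the peer added one more HTLC.]

```python
COMMIT_WEIGHT = 724        # BOLT 3 base commitment transaction weight
HTLC_OUTPUT_WEIGHT = 172   # additional weight per pending HTLC output

def commit_fee(feerate_per_kw: int, num_htlcs: int) -> int:
    """Fee in satoshis paid by the funder for a commitment tx with num_htlcs HTLCs."""
    return (COMMIT_WEIGHT + num_htlcs * HTLC_OUTPUT_WEIGHT) * feerate_per_kw // 1000

def funder_can_offer(balance_sat: int, reserve_sat: int, feerate_per_kw: int,
                     num_htlcs: int, amount_sat: int) -> bool:
    # Keep enough slack for a 2x feerate increase plus one extra (incoming) HTLC,
    # on top of the new HTLC being offered and the channel reserve.
    buffer_fee = commit_fee(2 * feerate_per_kw, num_htlcs + 2)
    return balance_sat - amount_sat >= reserve_sat + buffer_fee
```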
12:36 < t-bast> this PR doesn't impose any requirement on your peer, so I think it should be safe
12:36 < joostjgr> i also don't think there is a risk of de-sync
12:36 < rusty> roasbeef: we haven't added any receive-side restrictions, just what you should send. Still possible of course, but not a new problem.
12:37 < t-bast> rusty: alright, I'll make that a SHOULD NOT
12:37 * rusty reads https://lsat.tech/macaroons... nice job LL!
12:38 < t-bast> I agree that we're able to make progress on github (thanks to the reviewers!) so let's skip to the next topic?
12:38 <+roasbeef> gotcha, only sender stuff makes sense, still seems like whack-a-mole in the end, but if it helps to resolve cases in the wild today then it may be worthwhile (dunno what impl of it looks like yet)
12:38 <+roasbeef> t-bast: sgtm
12:39 < t-bast> #action keep iterating on wording, fundee requirement SHOULD instead of MUST
12:39 < t-bast> #topic TLV streams everywhere
12:39 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/754
12:40 < t-bast> This one just got an ACK from roasbeef, so I feel it's going to be quick :)
12:40 <+roasbeef> approved it, we need to shuffle some bytes on disk to make it work in our impl, but it's sound
12:40 < t-bast> I just had one thought about that recently though for which I'd like your feedback
12:40 < rusty> So AFAICT the obvious step is to make option_upfront_shutdown_script a compulsory option.
12:41 < t-bast> For the specific case of update_add_htlc -> allowing a TLV extension means that there won't be a single fixed size for the packets going through the network
12:41 < t-bast> That looks like a privacy issue
12:41 < t-bast> It will be easier for network-layer attackers to correlate traffic if update_add_htlc messages differ in size depending on the features they use...
12:42 < rusty> t-bast: a minor one though; in practice update_add_htlc is already distinctively sized from other messages, and there aren't enough at once to confuse.
12:42 < t-bast> yes but at least all update_add_htlcs look the same to a network observer...
12:42 < cdecker> Right, TLV extensions are also only defined on a peer level, not across multiple hops
12:43 < cdecker> So a distinctively sized update_add_htlc doesn't get propagated (unless we explicitly add such a feature later)
12:43 < cdecker> Probably best to keep that one in mind, but not worry about it for now
12:43 < rusty> cdecker: good point, so this critique only applies when we use it, which is always true.
12:43 <+roasbeef> mhmm, and if you're modifying update_add_htlc, you need further changes in the layer above (routing feature bits, how to handle it, onion payload implications, etc)
12:43 < t-bast> for path blinding we are adding an ephemeral point in the TLV extension of update_add_htlc once you're inside the blinded path :)
12:44 <+roasbeef> t-bast: not up to date on this stuff, but why not in the onion instead?
12:44 < cdecker> Yep, in that case it'd apply
12:44 < bitconner> it could be the case tho that multiple hops need to add some tlv extension in order for the payment to work
12:44 < rusty> roasbeef: you need the ss to decrypt the onion itself.
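[Editor's note: a small sketch of the size concern t-bast raises. A TLV extension appends type/length/value records after the fixed update_add_htlc fields, so the wire size becomes data-dependent; the 1450-byte baseline and the type number below are rough illustrations, and the toy BigSize encoder only handles single-byte values.]

```python
def tlv_record(t: int, value: bytes) -> bytes:
    """Serialize one TLV record with toy single-byte BigSize type/length."""
    assert t < 0xfd and len(value) < 0xfd
    return bytes([t]) + bytes([len(value)]) + value

BASE_UPDATE_ADD_HTLC = 1450     # fixed fields incl. the 1366-byte onion, roughly
ephemeral_point = bytes(33)     # placeholder for a compressed secp256k1 point

extension = tlv_record(1, ephemeral_point)
print(BASE_UPDATE_ADD_HTLC, BASE_UPDATE_ADD_HTLC + len(extension))  # 1450 vs 1485
# A passive observer can distinguish the two sizes, and so tell which
# update_add_htlc messages carry the extension.
```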
12:44 < bitconner> if there's any variable-length data it would be pretty easy to track
12:44 < rusty> (tells you what the key tweak is)
12:45 < t-bast> roasbeef: because it's kind of a reverse onion, I don't think we can put it in the onion...
12:45 < cdecker> bitconner: that's true, but not specific to this proposal
12:45 < t-bast> yes it's mostly an issue for later, when we actually start adding TLVs at the end of update_add_htlc
12:45 < cdecker> _any_ extension mechanism would result in distinctive message sizes, wouldn't it?
12:45 < rusty> We can theoretically mask this stuff with pings, but I've shied away from that because it's easy to get the sizes wrong and not actually obscure anything.
12:46 < bitconner> cdecker, yes it does. particularly for add_* messages
12:46 < bitconner> sorry, update_* messages
12:46 < bitconner> agreed we can address later tho
12:46 < cdecker> If they are getting propagated along with the HTLC itself, yes, otherwise I don't see a problem
12:47 < t-bast> sgtm, I wanted to mention it so we keep it in mind when building on top of these extensions for update_add_htlc, but we can probably move on for now
12:47 < cdecker> Let's keep it in mind for when we have proposals that'd extend the update_* messages along (parts of) the path
12:47 < t-bast> Looks like I got two approvals, so we'll get that one merged
12:47 < t-bast> #action merge #754
12:48 < t-bast> #topic Wumbo advisory
12:48 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/746
12:48 < t-bast> Very quickly, just want your opinion on
12:48 < t-bast> * whether this advisory should be in the spec or not
12:48 < t-bast> * if yes, is the current wording ok
12:48 < t-bast> I think it's small enough to be useful for people
12:48 < t-bast> Without adding too much bloat
12:49 < cdecker> Yes, but if we add every small tidbit to the spec it'll grow indefinitely. A separate best-practices document would be better suited for these non-essential things
12:49 <+roasbeef> i think it doesn't hurt, many aren't even aware of the link between csv value and security
12:49 < ariard> we may need some annex for anchor_output on how to implement a bump-coins pool and effective aggregating algos, why not a security annex?
12:50 <+roasbeef> idk if scaling csv and confs is non-essential, would you accept a 100 BTC channel with 1 conf and a csv value of 2 blocks?
12:50 <+roasbeef> we can keep it minimal, but ppl should be aware of the considerations
12:50 < cdecker> No, I would apply my usual timeouts which are good for that
12:50 <+roasbeef> i think lnd is the only implementation that scales both conf and csv value according to the size of the channel as is (I could be wrong though)
12:51 < t-bast> I agree with roasbeef; I find these two lines useful
12:51 <+roasbeef> yeh two lines minimally, I don't think this _hurts_ anyone, just more context for security considerations
12:51 < rusty> t-bast: commented. Advice is bad.
12:51 < cdecker> Ok, my objection is by no means a very strong opinion, but I worry about the readability and size of the spec (both are abysmal as it is...)
12:51 < t-bast> we only scale confirmations in eclair
12:52 < rusty> conf is good, scaling csv is not logical.
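[Editor's note: a sketch of the kind of scaling being debated, assuming a simple linear policy with hypothetical floor/cap parameters; no implementation's actual formula is claimed here. eclair scales confirmations only, and lnd reportedly scales both confirmations and the CSV delay.]

```python
def scale_with_value(funding_sat: int, floor: int, cap: int, full_at_sat: int) -> int:
    """Linearly scale a security parameter from floor to cap as channel value grows."""
    return floor + (cap - floor) * min(funding_sat, full_at_sat) // full_at_sat

funding = 100_000_000  # a 1 BTC channel
min_depth = scale_with_value(funding, floor=3, cap=6, full_at_sat=1_000_000_000)
csv_delay = scale_with_value(funding, floor=144, cap=2016, full_at_sat=1_000_000_000)
print(min_depth, csv_delay)   # 3 confirmations, 331-block to_self_delay
```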
12:52 <+roasbeef> how is scaling csv not logical?
12:52 < rusty> (I commented on the PR just now)
12:52 < rusty> https://github.com/lightningnetwork/lightning-rfc/pull/746#issuecomment-606209902
12:52 <+roasbeef> imo if you have more funds in the channel, you want a longer period to act just in case stuff breaks down
12:52 <+roasbeef> especially if you don't have a tower...
12:53 < rusty> roasbeef: and it's more painful to be without funds for that time. In theory these two things cancel out.
12:53 <+roasbeef> err I don't think so, loss of funds >>>> time-value loss
12:53 < bitconner> it's more painful to be without the funds permanently imo
12:53 <+roasbeef> if it doesn't matter, then why don't we just all set csv to 1 everywhere?
12:53 < rusty> roasbeef, bitconner: sure. But does it scale differently with the amount?
12:53 <+roasbeef> 1 block, better act quickly!
12:54 < t-bast> maybe then update the wording to say that implementations "MAY" provide means of scaling either of these two values (if they see fit)?
12:54 < bitconner> i'd be okay with that, both have cases where you may consider adjusting
12:55 < rusty> The conf thing is definitely worth noting. The other one is not rational (but that doesn't mean it's not worth considering).
12:55 <+roasbeef> err you haven't really provided a convincing argument that it isn't "rational"
12:56 <+roasbeef> but let's finish hashing it out on the spec, seems we're down to just wording
12:56 <+roasbeef> minimally, it should have something like: "you should consider increasing these values as they're security parameters"
12:56 <+roasbeef> scaling/increasing with chan size
12:56 < rusty> roasbeef: we've had this debate before, though, seems weird to rehash.
12:57 < ariard> you definitely want to scale up csv with channel amount, an attacker may spam the mempool or delay your justice transaction propagation
12:57 <+roasbeef> ariard: yep, it's a *security* parameter
12:57 <+roasbeef> t-bast: what's up next? we can continue this in the spec
12:58 < t-bast> agreed, let's continue that on the PR
12:58 < rusty> ariard: wah? But you can spend more on the penalty tx if it's larger.
12:58 < cdecker> You should select parameters that you feel comfortable with independently from the amount, either you can react or you can't, the potential loss only comes into play once the risk of loss is high enough
12:58 < ariard> rusty: a high-value channel may motivate an infrastructure attacker to just eclipse your full node to drop your justice tx
12:58 <+roasbeef> amount totally matters...
12:58 < t-bast> #action tweak the wording to give more leeway to implementers in choosing what they want to make configurable
12:59 < ariard> a higher csv lets you do manual broadcast or any emergency broadcast stuff
12:59 <+roasbeef> ariard: yep, or give your tower more time to work, or w/e other backup measures you may have
12:59 < t-bast> Let's move on to anchor outputs?
13:00 <+roasbeef> so I sent an email out before the meeting
13:00 < t-bast> #topic Anchor outputs
13:00 < cdecker> I need to drop off in a couple of minutes
13:00 <+roasbeef> discussing stuff on IRC is hard, so i think we'll have more progress with that ML thread
13:00 < t-bast> Ok so let's keep that for the mailing list then
13:00 < rusty> ariard: that's plausible.
13:00 <+roasbeef> in the email i talk about our impl and plans, and then go thru some of the current concerns w/ two anchors
13:00 < t-bast> #topic Rendezvous? cdecker? :)
13:00 < BlueMatt> I added another comment on gh about 10 minutes ago pointing out that two anchors isn't just a waste of fees, it's actually insecure
13:01 < cdecker> We can talk about rendezvous, sure
13:01 < joostjgr> replied to that. to_remote is encumbered by 1 csv. so i was wondering, how are there two spend paths for remote?
13:01 < joostjgr> ok, continue on the pr
13:01 < t-bast> thanks, let's continue anchors on the PR and mailing list
13:02 < t-bast> Let's have an update on rendezvous from cdecker
13:02 < ariard> joostjgr: currently reviewing the PR, but have you considered an upgrading mechanism for already-deployed channels?
13:02 * cdecker digs up the branch for the proposal
13:02 <+roasbeef> ariard: yep, I mention that in the email
13:02 <+roasbeef> should have a draft of the scheme we've come up with posted to the ML in the next week or two
13:03 < ariard> roasbeef: cool, given there are no new parameters to negotiate, that shouldn't be that hard
13:03 <+roasbeef> it's pretty important imo, given that most chans still aren't using static to_remote keys and ppl hate closing channels lol
13:03 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/blob/rendez-vous/proposals/0001-rendez-vous.md
13:03 < ariard> I agree, let's avoid a useless chain write
13:03 < cdecker> So we have a working construction for rendez-vous onions in that proposal
13:03 <+roasbeef> ariard: yeh it's basically some tlv extensions to make the commitment type specific, which lets you create new commits with a diff type than the latter
13:04 < cdecker> It works by creating the onion in such a way that we can cut out the middle part and regenerate that at the RV node
13:04 < ariard> roasbeef: so a generic mechanism to cover commit tx formats in the future, like the taproot one, or when we drop the CSV delay after mempool improvements?
13:04 < cdecker> While it is a working proposal it has a number of downsides
13:05 < cdecker> First and foremost it is not reusable, since the onion would always look the same in the latter part
13:05 < cdecker> The path encoded is fixed in the onion
13:05 < cdecker> And the size of the compressed onion is still considerable
13:05 <+roasbeef> ariard: mhmm, also with this, all we need to do for taproot/schnorr is update the multi-sig script (just the single key), and then we can defer modifying the commitment until later as there's a pretty large design space for that
13:06 < cdecker> So for these reasons we have mostly stashed that proposal for the time being
13:06 <+roasbeef> cdecker: this is distinct from the "blinded hop hint" stuff right?
13:06 < t-bast> Interesting, I thought you were still making progress
13:07 < cdecker> roasbeef: Yep, this is the classical rendez-vous construction for sphinx with a couple of extra twists due to the increased per-hop payloads
13:07 < t-bast> roasbeef: yes, the blinded paths are a different attempt at better privacy
13:07 < cdecker> rusty has mostly been concentrating on the blinded paths idea for the offers
13:07 < t-bast> and are you investigating something else?
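[Editor's note: a toy illustration of the "cut out the middle part and regenerate it at the RV node" idea cdecker describes, not the actual Sphinx construction from the proposal. The trick is to make the middle of the onion equal a pseudorandom stream both sides can derive from a shared secret, so the sender can omit it and the RV node can splice it back in; sizes and the secret below are placeholders.]

```python
import hashlib

def prg(secret: bytes, n: int) -> bytes:
    """Deterministic byte stream derivable by both the sender and the RV node."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

ONION, CUT_START, CUT_LEN = 1366, 500, 400
rv_secret = b"placeholder shared secret with the RV node"

# Sender: build the onion so its middle equals the regenerable stream,
# then transmit only the prefix and suffix (966 of 1366 bytes).
full_onion = bytes(CUT_START) + prg(rv_secret, CUT_LEN) + bytes(ONION - CUT_START - CUT_LEN)
compressed = full_onion[:CUT_START] + full_onion[CUT_START + CUT_LEN:]

# RV node: regenerate the middle and splice it back in.
restored = compressed[:CUT_START] + prg(rv_secret, CUT_LEN) + compressed[CUT_START:]
assert restored == full_onion
```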
13:07 < cdecker> I think we could end up using my rendez-vous proposal when combined with trampoline though
13:08 < t-bast> yes I believe with trampoline, your rendezvous proposal isn't very expensive (it's clearly acceptable)
13:08 < cdecker> We could have an onion that contains just a sequence of (non-consecutive) trampoline nodes, that'd then run the RV protocol
13:09 < cdecker> That way we can get recipient anonymity even in trampoline
13:09 < t-bast> what do you mean by "run the RV protocol"?
13:10 < cdecker> Don't get me wrong, the rendez-vous construction is a cool tool to have in our toolbelt, it's just not that well-suited for what we intended
13:11 < cdecker> I mean one of the trampolines would get the partial onion as payload (when he's the point where the two parts were stitched together) and have to recover the original partial onion
13:11 < cdecker> Otherwise it'd run the normal trampoline protocol
13:11 < t-bast> ok that sounds like what I had in mind
13:11 < cdecker> Great ^^
13:12 < t-bast> do you want me to prepare something for the next meeting to discuss trampoline then? and integrate that rendezvous option into it?
13:12 < cdecker> So yeah, I think the TL;DR for rendez-vous is: "We did stuff, it's cool, it works, it doesn't fit our intended application" :-)
13:12 < cdecker> That'd be great, I'd love to get trampoline moving again
13:13 < BlueMatt> :100:
13:13 < t-bast> Something I'd like to ask the whole room: would you like me to move the trampoline proposal to a format closer to what cdecker did for rendezvous (https://github.com/lightningnetwork/lightning-rfc/blob/rendez-vous/proposals/0001-rendez-vous.md)?
13:13 < t-bast> I think that may be more helpful to review the high-level design before the nitpicks of the message fields
13:14 < t-bast> And it would probably be useful to keep to help newcomers ramp up more easily
13:15 < cdecker> It'd certainly help me get up to speed before the meetings, and for people that just want a quick overview of a specific feature / change
13:16 < rusty> I just can't get used to the hyphen in rendez-vous :)
13:16 < cdecker> Honestly I'm not sure which one is correct either xD
13:16 < t-bast> rendez-vous is the valid French word :)
13:17 < ariard> t-bast: +1 for a high-level doc, I fear people bringing up again and again the same point on privacy tradeoffs in the PR
13:17 <+roasbeef> rownde-voo?
13:17 < rusty> t-bast: and maybe we should do the same for blinded paths...
13:17 < bitconner> the spec is written in english tho ;)
13:17 < t-bast> rownde-voo sounds like a cool new thing
13:17 <+roasbeef> lol, that's how I pronounce it ;)
13:17 < bitconner> yee haw
13:17 < cdecker> Ok, the namer in chief has spoken, rowndee-voo it is ^^
13:17 <+roasbeef> kek
13:17 < t-bast> rusty: blinded paths is already in proposal format :)
13:17 < rusty> This is the dreaded Eltoo disease, I think!
13:17 <+roasbeef> lolol
13:17 < ariard> sounds like some voodoo crypto
13:18 < cdecker> Right, rusty, just for you we can rename it el-too :-)
13:18 < t-bast> el-too sounds Spanish now
13:18 < bitconner> naming is def my favorite part of this process :)
13:18 < cdecker> Anyway, gotta drop off, see you around everyone ^^
13:18 <+roasbeef> "uhh, lemmie get uhh #1, but rownde-voo, thank you I know y'all are closed n stuff now due to The Rona"
13:18 < t-bast> great, I'll move trampoline to a format closer to the "proposals" then before the next meeting, thanks all!
13:19 < bitconner> i'll also work on the hornet summary before the next meeting
13:19 < t-bast> bitconner: neat, I'd love to read that
13:19 < t-bast> #action bitconner to work on a hornet summary
13:19 < bitconner> was a little busy doing lockdown things lol
13:19 < t-bast> #action t-bast to move trampoline to a proposals format
13:19 < rusty> ariard: BTW, I conceded your point about eclipse attacks on the issue, for posterity.
13:19 < bitconner> i'll shoot to send it out to the ML
13:20 < t-bast> bitconner: yeah so many things to do around the house it's crazy... did you know rice bags may contain up to 31 fewer grains for the same weight?
13:20 <+roasbeef> t-bast: lol compared to like a rice _box_?
13:21 < rusty> t-bast: we should interop test at some point soon for blinded paths. My implementation is pretty hacky, but should be sufficient to test.
13:21 < BlueMatt> roasbeef: (it's a joke about being so bored one goes and counts rice grains :p)
13:21 < t-bast> boxes are bad for the planet, think about the penguins man
13:21 <+roasbeef> lolol
13:21 < ariard> rusty: thanks, currently putting down a paper with gleb around eclipse+LN, to shed a bit more light on this
13:21 < ariard> and motivate some groundwork in core
13:21 < t-bast> rusty: then I should get started on actually implementing something xD
13:23 < rusty> t-bast: I still don't have a bolt11 format, so it's manually crafted paths for testing. And my infra really doesn't like not having a short_channel_id in every hop, so there's a dummy one at the moment (which gets overridden by enctlv). And also I don't handle the new error code correctly.
13:23 < cdecker> Sounds great ariard, make sure to send a copy to the ML once it's done :-)
13:23 < rusty> ariard: nice!
13:23 < t-bast> rusty: heh, sounds like I shouldn't be in too much of a rush then, perfect ;)
13:24 < t-bast> alright, time to go back to counting lentils, thanks everyone for the discussions and see you around!
13:24 < rusty> t-bast: no, but I was happy to get the pieces basically working.
13:24 < t-bast> #endmeeting
13:24 < lightningbot> Meeting ended Mon Mar 30 20:24:21 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
13:24 < lightningbot> Minutes: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-03-30-19.06.html
13:24 < lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-03-30-19.06.txt
13:24 < lightningbot> Log: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-03-30-19.06.log.html
13:24 < ariard> cdecker: sure, just 2 weeks
13:24 < rusty> Stay sane everyone; I don't know about you all but I find the world being on fire fairly distracting from Real Work :(
13:24 < t-bast> rusty: agreed, we need at least a PoC with basic interop, that will help motivate more work towards that
13:25 < t-bast> Thanks Rusty! Good luck on the other side of the world!
13:25 < rusty> t-bast: WRT errors, I am now thinking of a new BADONION, which (unlike normal malformed ones, where the N-1 hop turns it into a normal error) actually gets propagated back to the start of the blinded path.
13:26 <+roasbeef> rusty: yeah, something new every day... no idea what to expect for April lol
13:26 < t-bast> rusty: I'll have to re-think about it but I think there was an issue with that... can't remember OTOH though
13:26 < rusty> t-bast> that means we still return errors from the final destination like normal, which I think is important. But any errors along the blinded path look identical.
13:26 <+roasbeef> at least we'll have 4/20/2020
13:28 < rusty> t-bast: my main problem is implementation, gotta make sure I catch all the cases where we return diff errors.
--- Log closed Tue Mar 31 00:00:38 2020