--- Log opened Mon Apr 01 00:00:32 2019 00:21 -!- queip [~queip@unaffiliated/rezurus] has quit [Ping timeout: 245 seconds] 00:27 -!- queip [~queip@unaffiliated/rezurus] has joined #lightning-dev 01:12 -!- araspitzu [~araspitzu@ec2-35-180-216-238.eu-west-3.compute.amazonaws.com] has joined #lightning-dev 01:37 -!- melvster [~melvin@ip-86-49-18-190.net.upcbroadband.cz] has joined #lightning-dev 01:38 -!- __gotcha [~Thunderbi@plone/gotcha] has joined #lightning-dev 01:50 -!- bitonic-cjp [~bitonic-c@92-111-70-106.static.v4.ziggozakelijk.nl] has joined #lightning-dev 02:07 -!- e4xit [~e4xit@cpc123762-trow7-2-0-cust7.18-1.cable.virginm.net] has quit [Remote host closed the connection] 02:07 -!- e4xit [~e4xit@cpc123762-trow7-2-0-cust7.18-1.cable.virginm.net] has joined #lightning-dev 02:16 -!- __gotcha [~Thunderbi@plone/gotcha] has quit [Quit: __gotcha] 02:16 -!- __gotcha [~Thunderbi@plone/gotcha] has joined #lightning-dev 03:26 < shesek> lightning.network is the website of Lightning Labs, a company founded by the folks who came up with the state update mechanism used in the Lightning Network and wrote the Lightning white paper. But Lightning itself is an open protocol and, like Bitcoin itself, has no "official website". 04:26 -!- khs9ne [~xxwa@unaffiliated/mn3monic] has quit [Excess Flood] 04:26 -!- khs9ne [~xxwa@host22-236-dynamic.104-80-r.retail.telecomitalia.it] has joined #lightning-dev 04:44 -!- khs9ne [~xxwa@host22-236-dynamic.104-80-r.retail.telecomitalia.it] has quit [Changing host] 04:44 -!- khs9ne [~xxwa@unaffiliated/mn3monic] has joined #lightning-dev 05:16 -!- shesek [~shesek@unaffiliated/shesek] has quit [Ping timeout: 255 seconds] 06:21 -!- riclas [~riclas@148.63.37.111] has joined #lightning-dev 06:51 -!- shesek [~shesek@5.102.229.163] has joined #lightning-dev 06:51 -!- shesek [~shesek@5.102.229.163] has quit [Changing host] 06:51 -!- shesek [~shesek@unaffiliated/shesek] has joined #lightning-dev 07:15 -!- __gotcha [~Thunderbi@plone/gotcha] has quit [Ping timeout: 246 seconds] 07:24 -!- araspitzu [~araspitzu@ec2-35-180-216-238.eu-west-3.compute.amazonaws.com] has quit [Read error: Connection reset by peer] 07:25 -!- ott0disk [~araspitzu@ec2-35-180-216-238.eu-west-3.compute.amazonaws.com] has joined #lightning-dev 07:42 -!- ott0disk [~araspitzu@ec2-35-180-216-238.eu-west-3.compute.amazonaws.com] has quit [Ping timeout: 244 seconds] 07:45 -!- araspitzu [~araspitzu@ec2-35-180-216-238.eu-west-3.compute.amazonaws.com] has joined #lightning-dev 07:55 -!- araspitzu [~araspitzu@ec2-35-180-216-238.eu-west-3.compute.amazonaws.com] has quit [Ping timeout: 245 seconds] 07:58 -!- araspitzu [~araspitzu@ec2-35-180-216-238.eu-west-3.compute.amazonaws.com] has joined #lightning-dev 09:05 -!- bitonic-cjp [~bitonic-c@92-111-70-106.static.v4.ziggozakelijk.nl] has quit [Quit: Leaving] 09:31 -!- shesek [~shesek@unaffiliated/shesek] has quit [Ping timeout: 250 seconds] 09:44 -!- araspitzu [~araspitzu@ec2-35-180-216-238.eu-west-3.compute.amazonaws.com] has quit [Quit: Leaving] 11:02 -!- Chris_Stewart_5 [~chris@unaffiliated/chris-stewart-5/x-3612383] has joined #lightning-dev 11:15 -!- shesek [~shesek@185.3.144.77] has joined #lightning-dev 11:15 -!- shesek [~shesek@185.3.144.77] has quit [Changing host] 11:15 -!- shesek [~shesek@unaffiliated/shesek] has joined #lightning-dev 11:20 <+roasbeef> shesek: that's not our website 11:45 -!- niftynei [~niftynei@104.131.77.55] has quit [Quit: ZNC - http://znc.in] 11:46 < shesek> Oh, sorry for the confusion. I thought it was for some reason. 
Whose is it? 11:48 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #lightning-dev 11:49 -!- niftynei [~niftynei@104.131.77.55] has joined #lightning-dev 11:50 -!- niftynei [~niftynei@104.131.77.55] has quit [Client Quit] 11:51 -!- niftynei [~niftynei@104.131.77.55] has joined #lightning-dev 11:55 < cdecker> Meeting here in 5 minutes :-) 11:55 < rusty> Meeting in 5.... 11:55 < rusty> cdecker: beat me to it by a second! 11:56 < cdecker> Hehe ^^ 11:56 -!- sstone [~sstone_@185.186.24.109.rev.sfr.net] has joined #lightning-dev 11:56 < rusty> cdecker: are you happy to chair the meetings from now on? I struggle at 5:30am... 11:57 < rusty> Might be more coherent and productive! 11:57 < cdecker> Ok, I can if I'm online (got some travel coming up, so that may not always be the case) 11:57 -!- hiroki [d295ffdc@gateway/web/freenode/ip.210.149.255.220] has joined #lightning-dev 11:59 < cdecker> afaik the agenda consists of 3 PRs (https://github.com/lightningnetwork/lightning-rfc/labels/Meeting%20Discussion) 11:59 < rusty> Yep... 11:59 < cdecker> Does that sound correct? 11:59 -!- niftynei [~niftynei@104.131.77.55] has quit [Quit: ZNC - http://znc.in] 11:59 < cdecker> OK 12:00 -!- YSqTU2XbB [~smuxi@134.28.68.51.rdns.lunanode.com] has joined #lightning-dev 12:00 -!- niftynei [~niftynei@104.131.77.55] has joined #lightning-dev 12:00 -!- YSqTU2XbB is now known as araspitzu 12:00 < sstone> and maybe if we have time we can discuss "trampoline" routing ? 12:00 < cdecker> #startmeeting 12:00 < lightningbot> Meeting started Mon Apr 1 19:00:59 2019 UTC. The chair is cdecker. Information about MeetBot at http://wiki.debian.org/MeetBot. 12:00 < lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic. 12:01 < araspitzu> hi 12:01 < cdecker> Absolutely sstone, loving the proposal 12:01 < cdecker> Ping AmikoPay_CJP bitconner BlueMatt cdecker Chris_Stewart_5 kanzure niftynei 12:01 < cdecker> ott0disk roasbeef rusty sstone bitconner BlueMatt cdecker johanth kanzure 12:01 < cdecker> roasbeef rusty sstone 12:01 < niftynei> ;wave 12:02 < Chris_Stewart_5> hi 12:02 < bitconner> hola 12:02 < cdecker> Heia, seems we have a quorum 12:02 < cdecker> Let's give people 2 more minutes to trickle in :-) 12:02 -!- __gotcha [~Thunderbi@plone/gotcha] has joined #lightning-dev 12:02 < BlueMatt> I'm around 12:02 < BlueMatt> kinda 12:03 < cdecker> Today's agenda: PR #557, PR #590, PR #593 and open floor discussion (including Trampoline hopefully) 12:04 < kanzure> hi 12:04 < cdecker> Ok, let's get started with #557 12:05 < cdecker> #action BOLT7: extend channel range queries with optional fields (#557) 12:05 < rusty> OK, I am right now checking the test vectors. 12:05 <+roasbeef> i still think this is optimizing the wrong thing, there's a ton that can be done on the implementation level to make gossip more efficient than it is already 12:05 < cdecker> rusty, sstone can you give a short summary of what changed, and what is holding the PR up? 12:05 <+roasbeef> for example, not syncing from all your peers, using a smaller backlog for gossip timestamp, etc 12:06 < rusty> roasbeef: I somewhat agree, but OTOH it's fairly trivial to implement. 12:06 <+roasbeef> this has the potential to waste even more bandwidth as well due to spammy channels 12:06 <+roasbeef> sure, but why make protocol level changes for things that can be optimized at the implementation level? 
12:07 <+roasbeef> most nodes really don't need every up to date channel update, instead clients know from past history which routes they're likely to traverse again 12:07 < rusty> roasbeef: well, it leads to INV-based gossip quite directly, too. 12:08 < sstone> it's optional, and has no impact if you don't want to use it. and I think you're somewhat missing the point: it's not about getting everything all the time 12:08 <+roasbeef> inv yeh, trying to always query for up-to-date channel update policies on connection...no 12:08 < rusty> There are two parts to this PR: one just adds fine-grained queries so you don't need to get everything about every short_channel_id you want. The other adds a summary of updates so you can avoid the query. 12:08 <+roasbeef> the impact is the responder having to send all that extra data from queries 12:09 < sstone> only if you want to support this option 12:10 < rusty> roasbeef: it's hard to argue against the flags array on query_short_channel_ids. 12:10 <+roasbeef> also as this is a new optional field, perhaps it should be tlv based instead? since if we add something after this, you'd have to understand what the prior ones are in order to parse the new fields 12:11 < niftynei> +1 for tlv 12:11 < rusty> We can argue about the timestamp & checksums on reply_channel_range. 12:11 < cdecker> Do we have a TLV spec? 12:11 <+roasbeef> i think there's some code lingering around... 12:11 < niftynei> tlv spec is in https://github.com/lightningnetwork/lightning-rfc/pull/572 12:12 * rusty looks at niftynei who has implemented TLV parsing for the spec. 12:12 < cdecker> Not talking about code, this needs to be specced first 12:12 < cdecker> Thanks Lisa :+1: 12:12 <+roasbeef> well code is always a first step imo 12:12 <+roasbeef> as you typically find things that sound like they would've worked, but may have weird edge cases 12:12 < cdecker> As long as you don't go ahead and deploy it, ok :-) 12:13 < bitconner> how will the checksum work with fields added in the future? 12:13 <+roasbeef> this seems like the onion tlv? i'm speaking of wire tlv 12:13 < niftynei> (my implementation roughly adheres to this, minus the varint aspect for the length field) 12:13 < cdecker> Anyhow, what shall we do with #557? I see two options: postpone yet again, or take a vote now 12:13 < bitconner> in the past we discussed adding schnorr sigs, would those be included? 12:13 < sstone> I thought about using TLV and decided against it. I don't think it's essential 12:13 < niftynei> (which is a todo) 12:14 < rusty> sstone: I'm happy to take a shot at making it TLV. It should be easy, though we need varint length. 12:14 < cdecker> Guys, can we stick to this proposal, and defer a TLV variant? 12:14 < bitconner> proposal uses a 0 value checksum to indicate that there is no update, what if the checksum value is actually 0? 12:14 < rusty> bitconner: timestamp can't be 0. 12:14 < cdecker> We should first of all get a concept ACK / NACK and then we can worry whether we need a serialization amendment 12:14 < sstone> yes please :) the point I'm trying to make is that current channel queries are almost unusable 12:15 < rusty> cdecker: agreed. 12:15 < sstone> it was focused on channel announcements but the real target should have been channel updates 12:15 < rusty> I think adding the flags to specify which of announce/updates you want is fairly much a no-brainer (and was trivial to implement). 
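To make rusty's "flags array on query_short_channel_ids" concrete: the idea is a small per-channel bitfield saying which pieces of gossip the querier actually wants back, instead of always receiving the announcement plus both updates. A minimal sketch in Go, with bit assignments invented for illustration only (the PR had not fixed them at this point):

```go
package main

import "fmt"

// Hypothetical per-short_channel_id query flags for the extended
// query_short_channel_ids message discussed in PR #557. The bit positions
// are illustrative, not taken from the PR.
const (
	wantChannelAnnouncement byte = 1 << 0
	wantChannelUpdate1      byte = 1 << 1 // channel_update from node_1
	wantChannelUpdate2      byte = 1 << 2 // channel_update from node_2
	wantNodeAnnouncement1   byte = 1 << 3
	wantNodeAnnouncement2   byte = 1 << 4
)

func main() {
	// Ask only for the two channel_updates of a channel we already know,
	// skipping the channel_announcement we already have.
	flags := wantChannelUpdate1 | wantChannelUpdate2
	fmt.Printf("query flags for this short_channel_id: %05b\n", flags)
}
```

Querying just the updates of already-known channels is exactly the "offline for a few days" sync case sstone brings up next.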
12:16 < bitconner> i think sending timestamps makes a lot of sense, not totally convinced on checksum tho 12:16 < rusty> We can argue over checksum and timestamp (was tempted to try to squeeze some bits out, but I'm not sure it's important). 12:17 < bitconner> is there any data pointing to how many queries are actually avoided by including the checksum? 12:17 < rusty> bitconner: checksums may prove unnecessary if more nodes start throttling spam. sstone had some stats IIRC. 12:18 < sstone> yes, in December it was saving more than 80% traffic 12:18 < niftynei> !! 12:18 < gribble> Error: "!" is not a valid command. 12:18 <+roasbeef> throttling spam seems like a reasonable thing to do, most updates I see these days are suuuuper spammy, i've seen some node operators start to complain about bandwidth usage 12:18 < sstone> and in one of our use cases (mobile nodes that are often online) it is still a huge win 12:18 < araspitzu> bitconner: the checksum lets you figure out if two updates with the same timestamp actually carry the same information, you may not want to download it again except to detect stale channels 12:18 <+roasbeef> 80% traffic when trying to sync all channel updates? 12:19 < sstone> no, when you've been offline for a few days and sync 12:19 < araspitzu> *with different timestamps 12:19 < sstone> for the "initial sync" there's not much that can be done 12:19 < bitconner> i've been under the impression that the majority of the spam is not actually real, and that it is a bug in the enable/disable state machines 12:20 < cdecker> bitconner: can you elaborate? 12:20 <+roasbeef> iirc we recently saw a 95% bandwidth reduction just from not syncing updates from all peers, and instead picking 3 or so and rotating periodically 12:20 <+roasbeef> so like a destructive mode where enable/disable isn't stable 12:20 <+roasbeef> i think most of the updates rn are just enable/disable bit flipping 12:20 < niftynei> if updates are b/c of a disable/enable flag, wouldn't that change the checksum? 12:20 < bitconner> earlier this year i refactored our state machine for enable/disable https://github.com/lightningnetwork/lnd/pull/2411 12:21 < niftynei> i.e. that wouldn't explain the 80% reduction 12:21 < rusty> niftynei: yes, but if they disable and reenable, sum will be the same. 12:21 < bitconner> i'm curious if that spam still persists after that's widely deployed 12:21 < sstone> on mobile nodes it does not help since you have just a few peers to begin with 12:21 < sstone> the real point is: how do you actually use channel queries if all you have to filter with are channel ids ? 12:21 <+roasbeef> mobile nodes can trade off a bit of latency when getting the error back with the latest channel update for bandwidth (trying to sync em all) 12:21 < cdecker> I think c-lightning currently withholds updates that disable until someone actually wants to use the channel IIRC 12:21 <+roasbeef> sstone: can you elaborate? 12:22 <+roasbeef> cdecker: so it'll cancel back with the disable, the broadcast? 12:22 < cdecker> Yep 12:22 <+roasbeef> nice 12:22 < sstone> with the current queries, all you can do is get your peer's channel ids. what do you do then ? 
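The checksum araspitzu and rusty discuss above is only useful if it ignores the fields that change on every re-broadcast. A sketch of that idea, assuming a CRC32C over the channel_update with its signature and timestamp stripped; the exact hash and byte ranges were still PR-level details at this point, so treat them as assumptions:

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// updateChecksum sketches the checksum idea from PR #557: hash the
// channel_update with its signature and timestamp removed, so an update that
// only bumps the timestamp (or disables and re-enables, ending where it
// started) yields the same value and need not be downloaded again.
func updateChecksum(update []byte) uint32 {
	// channel_update body layout (without the 2-byte message type):
	//   [64 signature][32 chain_hash][8 short_channel_id][4 timestamp][rest...]
	const sigLen, chainLen, scidLen, tsLen = 64, 32, 8, 4
	if len(update) < sigLen+chainLen+scidLen+tsLen {
		return 0
	}
	stripped := append([]byte{}, update[sigLen:sigLen+chainLen+scidLen]...) // chain_hash || short_channel_id
	stripped = append(stripped, update[sigLen+chainLen+scidLen+tsLen:]...)  // everything after the timestamp
	return crc32.Checksum(stripped, crc32.MakeTable(crc32.Castagnoli))
}

func main() {
	fmt.Printf("%08x\n", updateChecksum(make([]byte, 130)))
}
```

This is also why rusty notes that a disable/re-enable flip-flop produces the same sum: nothing outside the timestamp changed.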
12:22 < cdecker> Active enable, latent disable 12:22 -!- lndbot1 [~lndbot@138.197.213.35] has quit [Remote host closed the connection] 12:23 -!- lndbot [~lndbot@138.197.213.35] has joined #lightning-dev 12:23 < cdecker> Ok, I think we've reached the time for this PR, otherwise we'll talk about this forever 12:23 < cdecker> Shall we defer to the ML / Issue tracker or would people want to vote on it now? 12:23 <+roasbeef> sstone: what's the goal? you can intersect that against what you know and query for those that you don't know of 12:23 < cdecker> roasbeef: he's more concerned about updates than catching all channels 12:24 < sstone> yes ! 12:24 <+roasbeef> yeh you can get those only when you need 12:24 <+roasbeef> it's a waste of bandwidth to do it optimistically 12:25 <+roasbeef> which is more precious for mobile phones: bandwidth or latency? 12:25 < cdecker> I mean, there's no point in catching a pikachu when you don't know whether it can do Thunderbolt :-) 12:25 < sstone> both. and this PR optimizes both 12:26 < sstone> if your strategy is "don't ask if you already know the channel id" then you get almost no updates 12:28 < cdecker> Any objections to moving this to the back of the meeting, or off to the ML? There seems to be quite a bit of discussion needed and we could get some things off the table before that 12:28 <+roasbeef> updates can be obtained when you actually need them, of the 40k or so channels, how many of those do you actually frequently route through? 12:28 <+roasbeef> sgtm 12:28 < bitconner> cdecker, if we have time at the end maybe we can circle back? 12:28 < sstone> it's terrible for latency 12:29 < cdecker> sstone: shall we address the other two topics first and come back to #557 in a few minutes? 12:29 < sstone> yes :) 12:29 < cdecker> Thanks 12:29 < cdecker> #topic BOLT 9: add features wumbo and wumborama #590 12:30 < cdecker> Ready, set, fight about naming :-) 12:30 < araspitzu> lol 12:30 <+roasbeef> idk, it's a feature that no users will really see 12:30 < araspitzu> shall we leave that for last? perhaps there is some feedback 12:30 <+roasbeef> one day, it'll just be the nrom 12:30 <+roasbeef> norm 12:30 < cdecker> I'll abstain from this vote :-) 12:31 <+roasbeef> seems related to the feature bit modification proposal as well 12:31 < cdecker> Can't always be the naming stickler ^^ 12:31 < cdecker> You mean #571? 12:31 < rusty> cdecker: OK, so there's some weirdness here. wumbo HTLCs vs wumbo channels. 12:32 < araspitzu> roasbeef: yes, in fact we might want to discuss if we put option_wumborama in channel_announcement.features 12:32 < cdecker> Good catch rusty, I'd have missed that 12:32 < rusty> You need a way to say "you can make a wumbo channel with me", and another to say "this channel can pass a wumbo HTLC". 12:33 < bitconner> is this fb also sent on connections? 12:33 < rusty> So I think we need a local (aka peer) bit to say "let's wumbo!" and a global (aka channel) bit to say "this channel can wumbo". 12:33 < cdecker> bitconner: that was kind of the point of Rusty's #571 :-) 12:34 < bitconner> rusty, cdecker: awesome just double checking :) 12:34 < bitconner> at one point i recall some discussion of only making it global, but i may be mistaken 12:34 < araspitzu> rusty: could the channel_update.htlc_maximum_msat be interpreted for "wumbo HTLCs"? 12:34 < cdecker> araspitzu: I think so, yes 12:34 <+roasbeef> yeh 12:35 < bitconner> why does wumbo HTLC matter? 12:35 < rusty> bitconner: vital if you want to send one, you need to know what channels can take it. 
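rusty's split between "let's wumbo!" (negotiated with a peer) and "this node/channel can wumbo" (advertised to the network) boils down to checking two different feature bits in two different feature vectors. A sketch with made-up bit positions, since none had been assigned at the time of this meeting:

```go
package main

import "fmt"

// Sketch of the two signalling roles being debated for wumbo (#590): a peer
// feature negotiated in the init message and a node-level advertisement in
// node_announcement/channel_announcement. Bit positions are hypothetical.
const (
	bitWumboPeer = 18 // hypothetical: offered in init ("let's wumbo!")
	bitWumboNode = 19 // hypothetical: advertised to the network ("wumborama")
)

// hasFeature reads a BOLT 9 style feature vector, where bit 0 is the least
// significant bit of the last byte.
func hasFeature(features []byte, bit int) bool {
	idx := len(features) - 1 - bit/8
	if idx < 0 {
		return false
	}
	return features[idx]&(1<<uint(bit%8)) != 0
}

func main() {
	features := []byte{0x0c, 0x00, 0x00} // bits 18 and 19 set
	fmt.Println(hasFeature(features, bitWumboPeer), hasFeature(features, bitWumboNode))
}
```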
12:35 < bitconner> certain impls already put max htlc values much larger than those limits :) 12:35 <+roasbeef> max uint64 12:36 < rusty> (FYI: feature bits get sent on init connect (local and global), advertised in channel_announcement and in node_announcement.) 12:36 < bitconner> if we just start honoring those more directly, is that limit not lifted? 12:36 < araspitzu> cdecker: if that is the case we should be good with using just 2 options as in the PR 12:36 <+roasbeef> but that'd need to change by the time this is widespread 12:37 < rusty> So, I think we need a channel feature (this channel wumbos!), and a peer feature (let's wumbo!) 12:37 < cdecker> Meh, min(htlc_maximum_msat, funding_msat) will do the trick 12:37 < araspitzu> also PR #590 assumes issue #504 12:37 <+roasbeef> seems only a peer feature is needed, if someone tries and you don't want to, you send a funding error back 12:37 <+roasbeef> max htlc (if set properly) does the rest 12:38 < cdecker> araspitzu: #504 is probably a subset of #571 12:38 < rusty> araspitzu: sure, but we can pull single features at a time in separate PRs. 12:38 < araspitzu> sgtm 12:38 < rusty> roasbeef: sure, we could do that. Should we start by slapping those setting max_htlc to infinity? 12:39 < bitconner> hehe 12:39 < bitconner> should we enforce that max htlc should not be larger than channel capacity? 12:39 < rusty> roasbeef: And meanwhile say "max_htlc == UINT64_MAX implies it's capped at min(2^32, funding_capacity)". 12:39 < cdecker> Just to get an overview, what are the open issues with #590? Naming and how to signal? 12:39 < bitconner> when we initially implemented it, we added this check but that assumption was quickly broken :) 12:39 < rusty> bitconner: yeah, we should add that. 12:39 < araspitzu> bitconner: agreed 12:39 <+roasbeef> yeh we ended up doing that, but for example, I don't think eclair knows the funding size 12:40 <+roasbeef> (clamping to funding size if set to max uint64) 12:40 < rusty> roasbeef: then clamp to u32max, assuming it's pre-wumbo? 12:41 < araspitzu> is there agreement on the features signalling? 12:41 < cdecker> #action cdecker to change the htlc_maximum_msat to match funding size 12:41 <+roasbeef> araspitzu: i think only a single bit is needed 12:41 < cdecker> Ok, that's c-lightning's "let's wing it" strategy taken care of 12:42 < rusty> OK, so I propose we: 1) add language that you should not set htlc_max > channel capacity (duh!), or 2^32 if no wumbo negotiated. 2) if you see an htlc_max set to u64max, cap it to one of those. 3) assign the i_wumbo_you_wumbo bit. 12:42 < rusty> (peer bit, that is). 12:42 < araspitzu> having a global wumborama lets you connect to nodes that you know will support wumbo 12:43 <+roasbeef> araspitzu: yeh only global 12:43 < bitconner> imo we probably don't need to have any special casing for uint64, we should just defer merging that validation until most of the network has upgraded to set it properly 12:43 < rusty> araspitzu: that's where we start advertising peer bits in node_announcement. 12:43 < cdecker> Shall we split the PR into two parts (peer and global) then we can discuss them separately 12:43 < rusty> bitconner: agreed, it's more an implementation note. 12:43 < cdecker> I think everybody is happy with the peer bit 12:43 < bitconner> hmm, actually i think we do need to set the connection feature bit 12:43 < bitconner> o/w private nodes can't wumbo 12:44 < rusty> bitconner: private nodes don't send connection features at all though? 
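Writing out the clamping rule rusty proposes at 12:39/12:42 makes the interaction of the three limits clearer: treat an advertised htlc_maximum_msat of u64max as "unlimited", cap it at the channel capacity, and at the pre-wumbo ceiling of 2^32 msat when wumbo was not negotiated. A sketch under those assumptions (the exact spec wording was still being hashed out):

```go
package main

import "fmt"

// Pre-wumbo ceiling on a single HTLC, in millisatoshi.
const preWumboCapMsat uint64 = 1 << 32

// effectiveHTLCMax sketches rusty's proposed interpretation of advertised
// htlc_maximum_msat values: u64max is read as an "unlimited" sentinel, then
// the result is clamped to the channel capacity, and to 2^32 msat if wumbo
// was not negotiated.
func effectiveHTLCMax(advertisedMsat, capacityMsat uint64, wumbo bool) uint64 {
	max := advertisedMsat
	if max == ^uint64(0) { // u64max used as "no real limit"
		max = capacityMsat
	}
	if max > capacityMsat {
		max = capacityMsat
	}
	if !wumbo && max > preWumboCapMsat {
		max = preWumboCapMsat
	}
	return max
}

func main() {
	// A pre-wumbo peer advertising u64max on a 0.16 BTC channel.
	fmt.Println(effectiveHTLCMax(^uint64(0), 16_000_000_000, false))
}
```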
12:44 < rusty> We may need to eventually do something about feature bits in routehints, but I don't think we need it for this. 12:45 < bitconner> i think my mental model of the new terminology is off :P 12:45 < niftynei> (is a connection feature the same as a 'channelfeature' from #571) 12:45 < cdecker> Hehe, I know that feeling bitconner :-) 12:45 < niftynei> ? 12:46 < araspitzu> roasbeef: what would it look like only with the global? 12:46 < rusty> Oh, I read bitconner's sentence as channelfeature. If it's a peer feature, sure, but you know if you wumbod. 12:46 <+roasbeef> araspitzu: single global feature bit, you send that in the init message and use that in your node_announcement, based on that ppl know you'll make wumbo channels, you set max_htlc above the old limit to signal you'll forward larger HTLCs 12:47 <+roasbeef> yeh still behind on the new feature bit split terms myself... 12:47 < araspitzu> it sounds quite good 12:47 < araspitzu> we can still crosscheck the globals in the init message with those from the node_announcement 12:48 < bitconner> sounds to me like we are all in rough agreement, and we can hash out the terminology (and our understandings thereof) on the pr? :) 12:48 < niftynei> wait so the reason for the channelfeature is so other peers can find you to create wumbo channels? 12:48 < rusty> BTW we haven't got a final decision on feature unification. It's kinda tempting to only have one kind of feature, and drop the distinction between "features which prevent you routing" and "features which prevent you making a direct channel". 12:48 < niftynei> instead of discovering on connect? 12:48 < cdecker> Yeah, kind of missed #571 in the agenda, my bad, sorry :-( 12:48 < niftynei> idky but that seems like it'll get deprecated pretty quickly 12:49 < niftynei> i guess you could say the same for all of wumbo tho nvmd lol 12:49 < cdecker> So like bitconner said, we seem to agree to the concept, shall we hammer out the details in the PR? 12:49 < niftynei> ack 12:49 < rusty> niftynei: yeah, even local/peer features, people want to avoid scanning for them, so advertising makes sense. Initial plan was just to advertise stuff you needed for routing via a channel/node. 12:49 < rusty> cdecker: ack. 12:49 < bitconner> ack 12:50 < araspitzu> should this wait for the feature unification? I hoped it was low-hanging fruit :) 12:50 < rusty> I agree with roasbeef that only one feature bit is needed, though. 12:50 < cdecker> #agreed The concept of wumbo channels was accepted, but more details about signalling need to be discussed on the PR 12:50 < cdecker> #topic bolt04: Multi-frame sphinx onion routing proposal #593 12:50 < cdecker> Yay, I made a PR ;-) 12:51 < cdecker> Basically this is the multi-frame proposal written up (along with some minor cleanups that I couldn't resist) 12:51 <+roasbeef> yayy json! 12:51 < cdecker> It has two working implementations in C and Golang, and it currently blocks our progress on rendezvous routing, multi-part payments, spontaneous payments and trampoline routing :-) 12:52 <+roasbeef> i'm behind on this but iirc i didn't see a clear way of being able to only use part of a frame by having an outer length 12:52 < cdecker> I should also mention that this is a competing proposal to roasbeef's implementation 12:52 < cdecker> Hm? How do you mean? 
12:52 < cdecker> The payload always gets padded to a full number of frames 12:53 < cdecker> (though that is not strictly necessary, it made the proposal so much easier) 12:53 <+roasbeef> i mean to extract only 8 bytes (for example) from the larger frame 12:53 <+roasbeef> i also still think we need both type + tlv (with tlv being its own type) 12:53 < cdecker> Oh, that's the parsing of the payload, that is basically deferred to the TLV spec 12:54 <+roasbeef> i also feel like when we talk about TLV, we seem to conflate the wire vs onion versions (which imo are distinct) 12:54 < cdecker> Right, a two frame payload with TLV would be realm 0x21 12:54 < cdecker> Sorry that's a three frame TLV payload 12:55 < cdecker> We can talk about TLV in a separate discussion, I feel we should stick to the pure multi-frame proposal here 12:55 <+roasbeef> overloading the realm also clobbers over any prior uses for it, we can still have type+realm by leaving the realm as is, and using space in the padding for the type 12:56 < cdecker> Oh god, please no, I'm uncomfortable with the layering violation that the realm byte currently is 12:56 <+roasbeef> how's it a layering violation? 12:56 <+roasbeef> i mean it's unused right now, but the intention was to let you route to foochain or w/e 12:56 < cdecker> We only have realm 0x00 defined so far, if somebody was using something else it's their fault 12:57 <+roasbeef> well they wouldn't need to ask anyone to use a diff byte, they just could as long as the intermediate node knew what it was meant to be 12:57 < rusty> roasbeef: we decided at the summit that we'd use an explicit TLV field for "route to chain foo" to avoid the problems of realm byte assignment. 12:57 < cdecker> During the meeting in Adelaide we decided that the realm byte is renamed a type byte, i.e., does not carry any other meaning than to signal how the payload is to be interpreted 12:57 <+roasbeef> but now you conflate chain routing and tlv types 12:57 < cdecker> The chain can be signalled in the payload 12:58 <+roasbeef> yeh i objected to that, as it combines namespaces when they can be distinct 12:58 < cdecker> No, the type byte just says "this is a hop_data payload" or "this is a TLV payload" 12:58 <+roasbeef> yeh the type byte says that, realm byte if for which chain the target link should be over 12:58 <+roasbeef> is* 12:59 < cdecker> Let's please not go back on what was agreed upon, otherwise we'll be here forever 12:59 <+roasbeef> but stepping back a level, the main diff between this and the other proposal is that one is "white box" and the other is "black box" 13:00 < cdecker> The type byte tells us how to interpret the payload, that's it 13:00 <+roasbeef> uh...I had this same opinion then, there's no change in what i expressed then and now 13:00 < cdecker> Right, your concerns were heard, we decided for this change, let's move on :-) 13:01 < cdecker> How do you mean white box vs. black box? 13:01 <+roasbeef> the Q is who decided? I think a few others had the same opinion as well 13:01 < rusty> changing framing seems much cleaner than squeezing into the existing framing, to me at least. 13:01 <+roasbeef> this one has awareness of frame size when parsing, the other one just unpacked multiple frames 13:01 < rusty> roasbeef: true, but it's a fairly trivial calculation. 13:02 <+roasbeef> yeah, just pointing out the differences 13:02 < cdecker> roasbeef: your proposal wastes loads of space, and burdens processing nodes with useless crypto operations... 
13:02 <+roasbeef> i think that's the most fundamental diff (others follow from that) 13:02 <+roasbeef> well the hmac can be restored to be ignored just as this one does 13:02 <+roasbeef> with that the space is more or less the same, and the q is white box vs black box 13:02 < cdecker> At which point your construction looks awfully close to mine, doesn't it? 13:02 <+roasbeef> yeh the blackbox requires additional decryptions 13:03 <+roasbeef> yeh they're very similar other than the distinction between the packet type and tlv 13:03 -!- Ax_ [b07a592c@gateway/web/freenode/ip.176.122.89.44] has joined #lightning-dev 13:03 < cdecker> And you need to somehow signal that there is more to come in the blackbox 13:03 <+roasbeef> i think you need to do the same in the whitebox too (my comment above about using a partial frame) 13:03 <+roasbeef> if i just want to have 30 bytes, no tlv, just based on the type 13:04 < cdecker> Well, using the former realm byte to communicate number of frames and type of payload is a pretty clean solution 13:04 < cdecker> It doesn't even change the semantics of having a single frame with legacy hop_data 13:05 < cdecker> Anyhow, seems to me that not everybody is convinced yet, shall we defer to the issue? 13:06 < cdecker> Or would the others like to proceed and unblock all the things that are blocked on this? 13:06 < rusty> I'm reluctant to keep deferring it, but I'm not sure how to deal with the impasse, either. 13:06 <+roasbeef> my main comment is that tlv should be optional in the payloads 13:07 <+roasbeef> as there're cases where you'd need to use an additional hop due to the signalling data 13:07 < cdecker> Right, as it is (you can use whatever payload serialization you want) 13:07 <+roasbeef> i only see reg and tlv right now 13:07 < cdecker> You can use the legacy hop_data format which is fully supported and append data you want to that 13:08 <+roasbeef> so you'd need to have a length there 13:08 < rusty> roasbeef: it's smaller if it's used to replace existing payloads, but I'm not totally averse to it staying the same. 13:08 < cdecker> Well, those are the only ones that are "currently" defined 13:08 < cdecker> For hop_data the padding is defined as MUST be 0x00s 13:08 < cdecker> So you can pack anything in there to signal the presence of extra data 13:09 < cdecker> The proposal literally says "the following `realm`s are currently defined", so if we come up with the next great thing we can just add that to the list :-) 13:10 < cdecker> I just think that having the data all laid out consecutively without the need to do additional rounds of decryption is pretty neat and avoids a bunch of copies to reassemble the data 13:10 <+roasbeef> using the realm though, which conflates realm and the actual packet type 13:11 < rusty> roasbeef: realm is dead. It's now just a signalling byte for the contents, not an attempt to squeeze every possible chain into 8 bits. 13:11 <+roasbeef> also maybe we should bump the onion type and commit to the version while we're at it? 13:11 < cdecker> It just makes good use of the space we have available: 4 bits to signal additional frames to be used, and 4 bits type information 13:11 < cdecker> That's another proposal altogether 13:11 <+roasbeef> y'all seem to be the only ones that concluded the realm byte doesn't exist anymore.... 13:11 < rusty> roasbeef, cdecker: agreed, but it's not a bad idea to have that on the table. 13:11 < rusty> roasbeef: see minutes. 
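The encoding cdecker defends at 13:11 (and the source of the 0x21 example earlier) reuses the old realm byte: the high nibble counts the additional frames this hop's payload occupies, the low nibble is the payload type. A sketch, treating the concrete type values as assumptions rather than the final proposal:

```go
package main

import "fmt"

// Sketch of the reused realm byte from the multi-frame proposal (#593):
// high nibble = number of *additional* frames, low nibble = payload type.
// Type 0 = legacy hop_data; type 1 = TLV is assumed here to match the
// 0x21 example above.
func packFrameByte(extraFrames, payloadType byte) byte {
	return (extraFrames&0x0f)<<4 | payloadType&0x0f
}

func unpackFrameByte(b byte) (extraFrames, payloadType byte) {
	return b >> 4, b & 0x0f
}

func main() {
	b := packFrameByte(2, 1) // 0x21: two extra frames (three total), TLV payload
	frames, typ := unpackFrameByte(b)
	fmt.Printf("byte=0x%02x extra_frames=%d payload_type=%d\n", b, frames, typ)
}
```

Note that 0x00 keeps its old meaning: a single frame carrying legacy hop_data, which is why the proposal does not change existing semantics.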
13:12 < cdecker> Let's stick with this proposal, which I think is minimal to get the followup things unblocked 13:13 <+roasbeef> i'll take a deeper look at this one, a bit behind with volume (some of it just tangential really?) on the ML these days 13:13 < cdecker> The realm byte was always conflating chain and payload format 13:13 <+roasbeef> how so? 13:13 <+roasbeef> it just said go to this chain/link identified by this chan id 13:13 < cdecker> Just a quick reminder what the current spec says: "The realm byte determines the format of the per_hop field; currently, only realm 0 is defined" 13:15 < rusty> OK, we're 15 over, is there anything else before we close? 13:15 < cdecker> I'm happy to defer, since I only opened the PR 3 days ago, I can understand that it hasn't had the desired scrutiny, but at some point we need to pull the trigger 13:15 < cdecker> #agreed Decision of PR #593 was deferred until the next meeting, discussion on the ML and the issue 13:16 <+roasbeef> imo the realm can also determine exactly what types even make sense as well vs combining them under a single namespace 13:16 < rusty> sstone: test vectors seem mainly OK, but I'll try a TLV-variant anyway. I'm wanting to get back to my more ambitious JSON test framework, for which this would be perfect to integrate (specifically, if we know chain state we can test real gossip queries/answers). 13:16 < bitconner> cdecker, sgtm 13:16 < cdecker> Should we take another stab at #557? 13:17 < cdecker> Which I feel I have a responsibility to bring back up since it was me who suggested postponing it till the end of the meeting 13:18 < bitconner> sure 13:18 < rusty> cdecker: I think if we make it TLV, which seems the new hotness, sstone and I can thrash it out. It's clear that the checksums save significant startup latency in current network conditions. 13:18 < rusty> And TLV makes it easier to drop later, too. 13:18 < cdecker> #topic BOLT7: extend channel range queries with optional fields #557 (Round 2) 13:18 < sstone> sgtm 13:19 < cdecker> rusty: so you'd like to take another stab at it using the TLV format and re-propose? 13:19 < rusty> cdecker: yeah. sstone and I were simply bikeshedding on the formatting/naming at this point anyway AFAICT. 13:19 <+roasbeef> wire tlv right, not onion tlv? 13:19 <+roasbeef> (main diff imo is the size of the type and length) 13:20 < rusty> roasbeef: we can use varints for both, TBH? 13:20 < cdecker> roasbeef: I still don't see why making two versions matters... 13:20 < cdecker> Varints sound like they'd solve this pretty easily 13:20 < bitconner> for me, the bigger distinction is that wire doesn't need even/odd, while onion does 13:20 <+roasbeef> well one can have a type that's KBs in length, the other is like 1.5kb max 13:20 < cdecker> Anyway, let's defer #557 then 13:20 < rusty> bitconner: true... 13:21 < rusty> bitconner: though we still have the even/odd rule between peers, in case someone screws up. 13:21 < cdecker> #action rusty and sstone to change format to be TLV based, so it becomes easier to change in the future 13:21 * rusty drags niftynei in for TLV support... 13:21 < cdecker> roasbeef: 1300 - 1 - 32 bytes exactly for the onion :-) 13:22 <+roasbeef> cdecker: well depends on how many hops the data is distributed over tho right? 13:22 <+roasbeef> trampoline tyme? 13:22 < bitconner> rusty, yes though should that not be an enforcement of feature bits, rather than reimplementing it again at the parsing level? 
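For the TLV direction rusty and sstone agree to take #557: a sketch of what a type/length/value stream with varint lengths could look like. The varint shown is a big-endian CompactSize-style encoding; whether the type is also a varint, and the final wire format, were still open questions at this point:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// appendVarint writes a big-endian CompactSize-style varint. This specific
// encoding is an assumption for illustration, not the agreed format.
func appendVarint(buf []byte, v uint64) []byte {
	switch {
	case v < 0xfd:
		return append(buf, byte(v))
	case v <= 0xffff:
		return binary.BigEndian.AppendUint16(append(buf, 0xfd), uint16(v))
	case v <= 0xffffffff:
		return binary.BigEndian.AppendUint32(append(buf, 0xfe), uint32(v))
	default:
		return binary.BigEndian.AppendUint64(append(buf, 0xff), v)
	}
}

// appendTLV appends one type/length/value record to the stream.
func appendTLV(stream []byte, typ uint64, value []byte) []byte {
	stream = appendVarint(stream, typ)
	stream = appendVarint(stream, uint64(len(value)))
	return append(stream, value...)
}

func main() {
	var stream []byte
	stream = appendTLV(stream, 1, []byte{0xde, 0xad})                      // record type 1 is hypothetical
	stream = appendTLV(stream, 3, []byte("optional timestamps/checksums")) // so is type 3
	fmt.Printf("%x\n", stream)
}
```

Because unknown records can simply be skipped by length, dropping an optional field later (as rusty notes) does not break older parsers.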
13:22 < cdecker> Ok, I think that concludes the official part (with all decisions deferred once again) 13:22 <+roasbeef> (due to signalling overhead) 13:22 < cdecker> roasbeef: that's the maximum possible payload, yeah, you can split differently 13:23 < rusty> bitconner: Yeah, parsing pretty much has to be specific to which TLV you're pulling apart, AFAICT. 13:23 < niftynei> yes 13:23 < cdecker> Shall we conclude the meeting and free people from their screens? I'm happy to stick around for a bit of trampoline fun and some more bikeshedding ;-) 13:23 < rusty> Well, none of us are likely to be bored between now and next mtg :) 13:23 < rusty> cdecker: ack 13:23 < cdecker> Yeah, sounds like it :-) 13:24 < bitconner> rusty: i think it'd be possible to agree on a common format for the T, the L and the V, and then have the parsing restrictions added on top (or not in the case of wire) 13:24 < sstone> ack 13:24 < cdecker> roasbeef, bitconner you also happy calling it a meeting here? 13:24 < bitconner> cdecker, sure thing, I know it's getting late across the pond 13:24 < cdecker> #endmeeting 13:24 < lightningbot> Meeting ended Mon Apr 1 20:24:52 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 13:24 < lightningbot> Minutes: http://www.erisian.com.au/meetbot/lightning-dev/2019/lightning-dev.2019-04-01-19.00.html 13:24 < lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/lightning-dev/2019/lightning-dev.2019-04-01-19.00.txt 13:24 < lightningbot> Log: http://www.erisian.com.au/meetbot/lightning-dev/2019/lightning-dev.2019-04-01-19.00.log.html 13:25 < rusty> So... why are trampolines better than routehints? 13:25 < cdecker> No problem, it's just that I know some people have some child-wrangling and other duties 13:25 < cdecker> rusty: trampoline defers the route finding to an external node 13:25 < rusty> cdecker: wouldn't a route hint do the same? 13:25 <+roasbeef> it's a hybrid thing 13:26 <+roasbeef> it lets you say "i don't care how you get from A to B" 13:26 < cdecker> So I as a sender only need to know my 2-3 hop neighborhood, and I'll create a route to that trampoline node. The trampoline node actually takes care of looking up a route from itself to the destination 13:26 <+roasbeef> the A -> B can use anything as long as it's completed, and within the fee+timelock budget 13:26 <+roasbeef> so it opens up things a bit 13:27 -!- khs9ne [~xxwa@unaffiliated/mn3monic] has quit [Excess Flood] 13:27 < cdecker> It'd also let us do what I always wanted: have a base routing layer that connects any two points, and then have an onion routing layer on top that doesn't need hops to be "physically" connected 13:27 < cdecker> (in this case both the base layer and the top layer actually use onion routing, but that's ok) 13:28 < rusty> I'm obv missing something. Will re-read ML. 13:28 < cdecker> Routeboost would work the same way if the sender were to tell the recipient exactly what its nodeid is, so the recipient would calculate the route for the sender 13:29 -!- Ax_ [b07a592c@gateway/web/freenode/ip.176.122.89.44] has quit [Quit: Page closed] 13:29 < cdecker> With trampolines, the node computing the route knows neither sender nor recipient, and we can even extend the route to >20 hops (though that might also be a downside...) 13:29 -!- khs9ne [~xxwa@host22-236-dynamic.104-80-r.retail.telecomitalia.it] has joined #lightning-dev 13:29 < rusty> sender == invoice sender or payer? 
13:29 < cdecker> Personally I find it incredibly appealing 13:29 < sstone> sender == payer 13:29 < cdecker> Sender = payment sender, recipient = invoice creator 13:30 <+roasbeef> rusty: so the more extreme version has an inner + outer onion 13:30 <+roasbeef> inner changes each hop of the outer onion, it can be redone completely, the sender makes the outer onion 13:30 < cdecker> well the sender creates both, but each inner hop is a full new outer onion altogether 13:30 < niftynei> (in case anyone else wants the ML link for the orig trampoline thread https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-March/001939.html) 13:32 < cdecker> As for the nomenclature that roasbeef and I have been using, the terms are defined here https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-April/001956.html 13:32 < cdecker> I also explains both the simple and the multi-trampoline construction 13:32 < cdecker> s/I/It/ 13:35 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Ping timeout: 246 seconds] 13:35 -!- Joel_ [808a4171@gateway/web/freenode/ip.128.138.65.113] has joined #lightning-dev 13:36 < sstone> our main concerns were what we could lose wrt privacy/security but it seems to be ok 13:36 < cdecker> It depends a bit on you knowing at least a path to a trampoline node, but that should be possible 13:37 < cdecker> If we can somehow filter to just receive trampoline nodes and their channels up to a certain distance we should be ok 13:38 < sstone> there's also an impact on fee estimation that is a bit hard to formalize 13:38 <+roasbeef> yeah i think the biggest thing is that it really opens things up and allows for rapid experimentation w.r.t. routing protocols, while still maintaining some advantages of onion routing 13:38 <+roasbeef> yeh with any packet switching proposal brought up, the Q is always how do you manage fees and timelock 13:38 <+roasbeef> things need to be a bit more slack than they are rn, but then that might be a step backwards since rn you always know exactly how much to pay (ignoring overpaying) 13:39 < cdecker> Yep, I was rationalizing it as incentivizing trampolines to do the routing work 13:40 < cdecker> Also, being able to just fire and forget a payment, with whatever fee I'd be happy to pay, and not attempt to optimize for those breadcrumbs, seems like an ok tradeoff for me 13:40 -!- nirved is now known as Guest74904 13:40 -!- Guest74904 [~nirved@2a02:8071:b58a:3c00:15c:5ca6:84fc:1af7] has quit [Killed (adams.freenode.net (Nickname regained by services))] 13:40 -!- nirved [~nirved@2a02:8071:b58a:3c00:3ca6:9fb9:2e23:4e12] has joined #lightning-dev 13:41 < cdecker> Not so sure which to prefer between the full onion inception and the simple one-off trampoline tbh 13:41 < bitconner> cdecker, what is one-off trampoline? 13:41 < bitconner> i quite like the idea of onion inception 13:41 < bitconner> (and the name) :) 13:42 < cdecker> Basically instead of packing a new inner onion into the outer onion, you just have a TLV field that tells the trampoline which node is the final recipient 13:42 < bitconner> gotcha 13:42 < cdecker> The full onion inception could open us up to 90 hops for a single route, which feels dangerous to me 13:43 < cdecker> Given that each hop is an HTLC that is allocated the transferred amount... 13:43 < bitconner> perhaps there's an argument that w/o onion inception, one can't do things that have destination-specific things, e.g. 
secret shared AMP 13:44 < cdecker> Could always say that the inner onion is limited to 4 hops and the outer onion MUST waste 10 hops, then we're back to 20 hops max 13:44 < bitconner> destination-specific data* 13:44 < sstone> cdecker: but if the sender can choose a max fee/timelock, having long routes is less of a pb 13:45 < cdecker> sstone: that only works if the sender and recipient aren't malicious. Griefing is what I'm worried about here 13:45 <+roasbeef> how's it diff that the worst case today? 13:45 <+roasbeef> than* 13:46 <+roasbeef> (w/o knowing the destination) 13:46 < cdecker> Oh, and as bitconner points out we might need to go inception anyway, if we want to communicate something with the recipient 13:46 < cdecker> roasbeef: today I can hold up 20x my own funds, with onion inception trampolines we'd have 90x in the worst case 13:47 <+roasbeef> ahh yeh multiplies 13:47 < cdecker> (unless we make sure that we can't bounce more than x and each leg can only ever be y) 13:47 <+roasbeef> well even today you can do more 13:47 <+roasbeef> yeh not quite sure how to add those constraints yet 13:47 < cdecker> How so? 13:48 < cdecker> roasbeef: well, making people waste some space in the onion to prove they don't route more than 20 hops in total seems like a legit use 13:48 <+roasbeef> add more? you just use multiple nodes 13:48 <+roasbeef> cdecker: def 13:48 < cdecker> Right, but in that case I need to participate multiple times in the route, and I'd still be limited to 20x impact 13:49 <+roasbeef> ok I see what you're getting at 13:49 < cdecker> With onion inception trampolines I can use one sender and one recipient to have 90x my funds locked up along a gigantic path 13:49 < cdecker> :-) 13:49 < cdecker> But, pst, don't tell the others, they might think LN is vulnerable to attacks :-) 13:50 <+roasbeef> kek 13:51 < cdecker> Ok, I went ahead and marked some of the other PRs for the next meeting, please add yours so we can start well prepared next time :-) 13:54 < sstone> We'll try and start implementing some of the onion PR and give you more feedback (I planned to but was too sort on time sorry :( 13:54 < sstone> short 13:56 < cdecker> No problem :-) 14:01 < cdecker> Let me know if I can be of assistance (I would have implemented it in scala, if only I knew scala) 14:03 < sstone> ok. scala rulz! (honestly I know it's a lost cause in the crypto world but it's so much better than what people think :)) 14:10 < cdecker> I'm sure it is, just never had the opportunity to use it until now. Maybe some time in the future :-) 14:12 <+roasbeef> things have changed a _ton_ from when I used it last 14:12 <+roasbeef> I took Odersky's coursera course back in the day 14:13 < sstone> so there's a chance :) ? we can convert you too ? 14:18 -!- araspitzu [~smuxi@134.28.68.51.rdns.lunanode.com] has quit [Read error: Connection reset by peer] 14:21 < cdecker> Hehe, not today :-) 14:27 -!- hiroki [d295ffdc@gateway/web/freenode/ip.210.149.255.220] has quit [Ping timeout: 256 seconds] 14:28 < sstone> ok bye all ! 
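As an aside on the "one-off trampoline" cdecker describes at 13:42: the hop payload delivered to the trampoline node would carry little more than the final recipient and a fee/timelock budget, and the trampoline computes the rest of the route itself. A purely illustrative sketch; the field names and layout are not from any spec or ML post:

```go
package main

import "fmt"

// trampolinePayload is a hypothetical record a sender could place in the
// trampoline node's hop payload instead of a full inner onion: who to reach,
// how much must arrive, and how much fee/timelock slack may be spent.
type trampolinePayload struct {
	FinalNodeID   [33]byte // recipient the trampoline should route to
	AmountMsat    uint64   // amount that must arrive at the recipient
	FeeBudgetMsat uint64   // fees the sender is willing to spend beyond the trampoline
	CLTVBudget    uint32   // timelock slack the trampoline may consume
}

func main() {
	p := trampolinePayload{AmountMsat: 50_000_000, FeeBudgetMsat: 100_000, CLTVBudget: 288}
	fmt.Printf("%+v\n", p)
}
```

The fee and timelock budgets are what sstone and roasbeef point at above: once route-finding is delegated, the sender can only bound the cost, not know it exactly in advance.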
14:28 -!- sstone [~sstone_@185.186.24.109.rev.sfr.net] has quit [Quit: Leaving] 14:30 -!- Chris_Stewart_5 [~chris@unaffiliated/chris-stewart-5/x-3612383] has quit [Ping timeout: 244 seconds] 14:31 -!- droark [~droark@c-24-21-203-195.hsd1.or.comcast.net] has joined #lightning-dev 14:36 -!- Joel_ [808a4171@gateway/web/freenode/ip.128.138.65.113] has quit [Ping timeout: 256 seconds] 14:43 -!- Awb89 [c2a66d6c@gateway/web/freenode/ip.194.166.109.108] has joined #lightning-dev 14:49 -!- Awb89 [c2a66d6c@gateway/web/freenode/ip.194.166.109.108] has quit [Ping timeout: 256 seconds] 14:50 -!- __gotcha [~Thunderbi@plone/gotcha] has quit [Remote host closed the connection] 14:51 -!- __gotcha [~Thunderbi@plone/gotcha] has joined #lightning-dev 15:07 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #lightning-dev 15:12 < rusty> niftynei: I guess it's time for me to review your TLV PRs! 15:15 -!- __gotcha [~Thunderbi@plone/gotcha] has quit [Ping timeout: 245 seconds] 15:23 -!- StopAndDecrypt [~StopAndDe@173.254.222.154] has joined #lightning-dev 15:23 -!- StopAndDecrypt [~StopAndDe@173.254.222.154] has quit [Changing host] 15:23 -!- StopAndDecrypt [~StopAndDe@unaffiliated/stopanddecrypt] has joined #lightning-dev 15:34 < rusty> niftynei: done. 16:35 -!- MrPaz [~MrPaz@c-71-57-73-68.hsd1.il.comcast.net] has joined #lightning-dev 16:46 -!- DeanGuss [~dean@gateway/tor-sasl/deanguss] has joined #lightning-dev 16:50 -!- Emcy [~Emcy@unaffiliated/emcy] has joined #lightning-dev 16:52 -!- melvster [~melvin@ip-86-49-18-190.net.upcbroadband.cz] has quit [Remote host closed the connection] 16:53 -!- melvster [~melvin@ip-86-49-18-190.net.upcbroadband.cz] has joined #lightning-dev 16:56 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has joined #lightning-dev 17:01 -!- StopAndDecrypt_ [~StopAndDe@96.44.189.226] has joined #lightning-dev 17:02 -!- StopAndDecrypt [~StopAndDe@unaffiliated/stopanddecrypt] has quit [Ping timeout: 246 seconds] 17:39 -!- MrPaz [~MrPaz@c-71-57-73-68.hsd1.il.comcast.net] has quit [Ping timeout: 255 seconds] 18:02 -!- jcsc [ac6e5c5e@gateway/web/freenode/ip.172.110.92.94] has joined #lightning-dev 18:03 -!- jcsc [ac6e5c5e@gateway/web/freenode/ip.172.110.92.94] has quit [Client Quit] 18:04 -!- LukeJCSC [ac6e5c5e@gateway/web/freenode/ip.172.110.92.94] has joined #lightning-dev 18:07 -!- LukeJCSC [ac6e5c5e@gateway/web/freenode/ip.172.110.92.94] has left #lightning-dev [] 18:08 -!- MrPaz [~MrPaz@c-71-57-73-68.hsd1.il.comcast.net] has joined #lightning-dev 18:15 -!- unixb0y [~unixb0y@p5B029BA1.dip0.t-ipconnect.de] has quit [Ping timeout: 245 seconds] 18:23 -!- unixb0y [~unixb0y@p5B029CD0.dip0.t-ipconnect.de] has joined #lightning-dev 18:26 -!- melvster [~melvin@ip-86-49-18-190.net.upcbroadband.cz] has quit [Ping timeout: 268 seconds] 18:46 -!- riclas [~riclas@148.63.37.111] has quit [Ping timeout: 268 seconds] 18:55 -!- unixb0y [~unixb0y@p5B029CD0.dip0.t-ipconnect.de] has left #lightning-dev ["part"] 19:01 -!- MrPaz [~MrPaz@c-71-57-73-68.hsd1.il.comcast.net] has quit [Quit: Leaving] 19:25 -!- buZz [~buzz@unaffiliated/buzz] has quit [Ping timeout: 244 seconds] 19:35 -!- khs9ne [~xxwa@host22-236-dynamic.104-80-r.retail.telecomitalia.it] has quit [Ping timeout: 246 seconds] 19:39 -!- khs9ne [~xxwa@host22-236-dynamic.104-80-r.retail.telecomitalia.it] has joined #lightning-dev 19:45 -!- buZz [~buzz@192.161.48.59] has joined #lightning-dev 19:46 -!- buZz [~buzz@192.161.48.59] has quit [Changing host] 19:46 -!- buZz [~buzz@unaffiliated/buzz] has joined #lightning-dev 20:13 -!- nirved is now known as 
Guest81765 20:13 -!- nirved [~nirved@2a02:8071:b58a:3c00:3ca6:9fb9:2e23:4e12] has joined #lightning-dev 20:16 -!- Guest81765 [~nirved@2a02:8071:b58a:3c00:3ca6:9fb9:2e23:4e12] has quit [Ping timeout: 258 seconds] 20:20 -!- ccdle12 [~ccdle12@223.197.182.203] has joined #lightning-dev 20:53 -!- ccdle12 [~ccdle12@223.197.182.203] has quit [Read error: Connection reset by peer] 20:54 -!- ccdle12 [~ccdle12@223.197.182.203] has joined #lightning-dev 22:26 -!- ccdle12 [~ccdle12@223.197.182.203] has quit [Read error: Connection reset by peer] 22:27 -!- ccdle12 [~ccdle12@223.197.182.203] has joined #lightning-dev 23:11 -!- CubicEarth [~CubicEart@c-67-168-1-172.hsd1.wa.comcast.net] has joined #lightning-dev 23:15 -!- ccdle12 [~ccdle12@223.197.182.203] has quit [Remote host closed the connection] 23:26 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Quit: Leaving.] 23:37 -!- droark [~droark@c-24-21-203-195.hsd1.or.comcast.net] has quit [Read error: Connection reset by peer] --- Log closed Tue Apr 02 00:00:33 2019