--- Log opened Mon Jan 20 00:00:14 2020
10:53 < cdecker> Spec meeting in ~8 minutes :-) Meeting agenda is here http://bit.ly/2Ra9D0Y
10:56 < t-bast> Yay! Hi Christian, thanks for organizing, great initiative ;)
10:59 < rusty> Despite jet lag, I am mostly awake ... Hi all!
10:59 < cdecker> Hi guys ^^
10:59 < t-bast> Hey Rusty, welcome back!
10:59 < cdecker> Welcome back Rusty ;-)
11:00 < rusty> Thanks! Haven't yet caught up with spec stuff, so I'll probably just add random comments and try to sound intelligent...
11:00 < niftynei> hello everyone. welcome back rusty!
11:00 < t-bast> Not much has changed, we figured that if you were on holidays we could all stop working
11:00 < cdecker> No problem, I kept the agenda short so we get to talk about more long-term things as well, and not end up with the usual ACK/NACK slog
11:01 < sstone_> hi everyone!
11:01 < cdecker> Hi sstone_ ^^
11:01 < ariard_> hi everyone
11:01 < cdecker> Seems ACINQ and Blockstream are well represented, how about Lightning Labs and rust-lightning?
11:02 < lndbot> hi
11:02 < lndbot> I'm gonna try to follow, but might have to tun suddenly
11:02 < cdecker> Hi Johanth :-)
11:02 < lndbot> *run
11:02 < cdecker> Let's wait a couple more minutes so people can show up
11:02 < ariard_> it's MLK day in the us
11:03 < cdecker> Oh I see, so we might not get many from the US joining, that's too bad
11:03 < niftynei> yes today's a holiday here
11:03 < cdecker> Hm, still hoping roasbeef and BlueMatt join, so we can somewhat make progress
11:04 < t-bast> BlueMatt is quite active on the mailing list rn, hopefully he can join
11:04 < t-bast> johanth do you know if conner or roasbeef will be able to join?
11:04 < cdecker> In the meantime: is everybody happy with me chairing today? Or did someone else want to give it a shot?
11:05 * t-bast is happy to let cdecker chair
11:05 < rusty> cdecker: :chair:
11:05 * cdecker ducks under the :chair: rusty just threw xD
11:06 < cdecker> Ok, let's start then, in the hopes that roasbeef and BlueMatt join as well
11:06 < cdecker> #startmeeting
11:06 < lightningbot> Meeting started Mon Jan 20 19:06:11 2020 UTC. The chair is cdecker. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:06 < lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
11:06 < lndbot> t-bast: I haven't heard that they will
11:07 < cdecker> So I wrote up a short agenda (~5 PRs to ack/nack) followed by a bit more long-term outlooks to discuss, so we can make these meetings a bit more constructive
11:07 < cdecker> #link https://paper.dropbox.com/doc/Lightning-Spec-Meeting-20200120--Asyn1LqwS5eymCtfthi~KR01Ag-SA160p27VepiVZcnbKcZE
11:08 <+roasbeef> cdecker: so saw the paper thing, what're the advantages of that over using the existing labeling system? given that everyone is already on github, or we could make an issue to aggregate the labels perhaps that's used each week
11:08 < cdecker> Today we have 6 PRs, 1 issue that t-bast has asked to discuss, and finally two outlooks for the long term
11:08 < t-bast> hey roasbeef!
11:09 < cdecker> roasbeef: the main upside is that we don't accidentally carry over things from the last meeting and end up clumping everything together
11:09 < cdecker> But we can certainly judge at the end whether it was the improvement I hoped for, or not
11:09 < cdecker> The major upside is that we can also add things that are not PRs or issues and discuss things longer term
11:10 < cdecker> Since we have a quorum with two people from LL, shall we get started with the PRs?
11:10 < t-bast> cdecker: SGTM
11:10 < cdecker> #topic Restrict data_loss_protect to init context #726
11:10 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/726
11:11 < cdecker> It seems we discovered a number of issues with the "just advertise everything everywhere" idea
11:11 < cdecker> For this specific issue we are only interested in `data_loss_protect` being limited to `init` or not
11:12 < t-bast> I think that what underpins this is: what should be the rationale for including a feature in `init`, `node_ann`, or elsewhere
11:12 <+roasbeef> cdecker: yeh if we don't wanna carry things over, can have a weekly master tracking issue perhaps
11:12 < cdecker> roasbeef brings up a good point that with the static channel backup it makes sense to prefer nodes that support `data_loss_protect`, and announcing it in the `node_announcement` would allow this preferential treatment
11:13 < rusty> cdecker: Agreed; I think this fix is wrong. We really do want to broadcast this. But perhaps it's channel features that should gate whether you can route, not node features.
11:13 < cdecker> roasbeef: sure, we can replace the paper thing with issues, no problem, though then only the OP can edit
11:13 <+roasbeef> i don't think we need very strict rules for certain classes, as there're still exceptions even though the unification made certain classes more explicit
11:13 <+roasbeef> (re feature bit locations)
11:13 < t-bast> I really think of node_announcement features as: "hey this is everything you need to know about me", so mostly all features should go in there
11:14 < rusty> t-bast: agreed.
11:14 <+roasbeef> t-bast: yeh not like we're short on space either lol
11:14 < t-bast> :D
11:14 < ariard_> the issue is that if we come up with new node features, non-upgraded nodes won't be able to distinguish between packet-processing ones and peer-connection ones
11:14 < ariard_> and so not use these peers to route
11:14 <+roasbeef> seems like RL wants to implement some very strict rules for feature bit placement, while generally things can be more lax
11:14 < cdecker> Yep, though some things might be weird, i.e., mandatory flags in the node_announcement that we don't understand
11:15 < t-bast> ariard: in my opinion that's because we shouldn't be looking at node_announcement features (vertices in the graph) but rather channel_announcement/update features (edges)
11:15 < rusty> ariard_: indeed, which is why we'd advertise routing features per-channel. Of course, we have the problem that channel_announcements can't currently be replaced, so you can't "upgrade" features on a channel.
11:15 <+roasbeef> yeh for anything routing related, the channels are the go-to typically; for connection-level, encryption or channel creation aspects, look in the node ann
11:16 < ariard_> rusty: hmm so the fix would be to scope packet-processing features to channel features and allow replacement of them?
11:16 < rusty> ariard_: well, we don't have any yet, so not sure when we'll need replacement. It may be that the first ones are completely static?
11:17 < cdecker> I think the issue at hand is mostly that we end up in strange situations if we announce even features (i.e., mandatory if you want to talk to the node), where really odd ones would suffice
11:17 < t-bast> I agree with Rusty there, I don't yet see a case where we really need to upgrade a channel's features
11:17 <+roasbeef> upgrade as in?
11:17 < cdecker> we might have different thresholds for various placements (required in init, optional in node_announcement)
11:18 < t-bast> cdecker: but if you only look at channel feature bits when you're choosing your route, nodes are free to do whatever they want in their node_announcement, right?
11:18 < rusty> roasbeef: some new channel magic causes us to want to upgrade channel features on the fly...
11:18 < ariard_> cdecker: yeah if we keep new features optional in node_announcement, that would work to let you find the peers you want in the network
11:18 < cdecker> t-bast: if for example I no longer process legacy onion payloads, that'd be an even bit in the node_announcement since otherwise we can't talk
11:18 < cdecker> That'd be an example of "if you don't understand this, don't talk to me"
11:19 < t-bast> cdecker: but then wouldn't you also update all your channels with that feature bit change (which would require feature bits in channel_update)?
11:20 < rusty> t-bast: worse, renewing would require negotiating with your peer, who can't know your feature preferences. Yuck...
11:20 <+roasbeef> t-bast: imo the node ann is sufficient in that case, during path finding we look at the node ann to see if a node can support w/e new features that may modify our payload, or if we can traverse that node
11:20 < cdecker> That's a possibility, but weird: "I don't understand X, so if you talk to me through A, B, C, or D, you can't"
11:20 < cdecker> Seems semantically weird
11:20 < t-bast> yeah this ends up being very messy
11:21 < rusty> OK, I think we need to add N- and N+ to the table (i.e. advertise in node_announcement, but always do so as odd).
11:21 < rusty> In init you would use the even bit.
11:21 < rusty> (If you didn't want to support backwards compat)
11:21 < cdecker> So we add two variants: N- and N+ for optional/mandatory possible in the node_announcement?
11:22 < rusty> Yes. I foresaw this for channel announcements, not node announcements, sorry.
11:22 < t-bast> We can also embrace the fact that setting an even feature bit in your node_announcement may exclude you from legacy nodes' path-finding
11:22 < cdecker> Anyway, we can probably bike-shed this to death, let's look at the very restricted case at hand (data_loss_protect in init only)
11:22 < t-bast> And let you decide
11:23 < t-bast> For the current PR, I'd then say to keep the feature as `IN`
11:23 < cdecker> For me it's a NACK as long as data_loss_protect is flagged as optional
11:23 < rusty> (To some extent this is less critical than channel_announcement feature bits, which *have* to match on both sides so signatures are valid).
11:23 < t-bast> `IN-`
11:23 <+roasbeef> cdecker: agreed, should be in node_ann so nodes can seek out other nodes that have a better ability to let them recover their funds
11:23 < rusty> I will respond on the PR, but I think this is an N-.
11:24 < ariard_> I'm fine with IN-, it solves the issue of seeking out preferred peers
11:24 < cdecker> Great, so everybody happy with nacking this one, and postponing the discussion on semantics for N-/N+ to GH?
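For readers following along, here is a minimal, non-normative sketch of the feature-bit handling being debated above: the BOLT 9 "it's OK to be odd" rule plus per-context placement (init vs node_announcement). The bit numbers are the real data_loss_protect/gossip_queries assignments, but the context table and the check itself are illustrative assumptions, not the spec's table.

```python
# Illustrative sketch only: gate feature bits by message context and apply
# the "it's OK to be odd" rule discussed above. Contexts are assumptions.
INIT, NODE_ANN = "init", "node_announcement"

# feature name -> (even bit, contexts in which it may appear)
FEATURE_CONTEXTS = {
    "option_data_loss_protect": (0, {INIT, NODE_ANN}),  # the "IN" placement
    "gossip_queries":           (6, {INIT, NODE_ANN}),
}

def check_features(features: set[int], context: str) -> None:
    """Reject even (mandatory) bits we don't support in this context; tolerate odd bits."""
    supported = {bit + parity
                 for _, (bit, ctxs) in FEATURE_CONTEXTS.items()
                 for parity in (0, 1) if context in ctxs}
    for bit in features:
        if bit in supported:
            continue
        if bit % 2 == 1:
            continue  # unknown but odd: optional, safe to ignore
        raise ValueError(f"unknown even (mandatory) feature bit {bit} in {context}")
```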
11:24 < cdecker> Great, thanks ariard_
11:24 < t-bast> SGTM
11:25 < t-bast> thanks for the discussion this triggered ariard ;)
11:25 < cdecker> #agreed cdecker to close #726 with a brief explanation
11:25 < cdecker> Next one ^^
11:25 < cdecker> #topic 09+11: require transitive feature dependencies #719
11:25 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/719
11:26 < cdecker> It seems we have ACKs from ACINQ and LL, with agreement from last time from c-lightning
11:27 < cdecker> We just had a minor rebase issue where lines got mixed up last time, so this should be ready now
11:27 < rusty> Like the idea, assume others have bikeshedded the details already :)
11:27 < cdecker> Are there any last-minute comments regarding the requirement for transitive features?
11:28 < t-bast> nope, lgtm
11:28 < cdecker> rusty: yeah, it just makes a couple of dependencies among features more evident, no real changes to the behavior if I'm not mistaken
11:28 < ariard_> nope
11:28 <+roasbeef> nope, we've implemented it already
11:28 < cdecker> Great ^^
11:28 < t-bast> already on our master too :D
11:28 < cdecker> #agreed cdecker to merge #719
11:28 < cdecker> #topic Specify that resolution of amount is msat #700
11:28 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/700
11:29 < cdecker> Just a minor clarification resulting from us having 11 decimal places, but SI modifiers being in increments of 3 orders of magnitude
11:30 < t-bast> Since the OP mentions libraries that didn't do this correctly, this clarification feels needed
11:30 < cdecker> We have acks from ACINQ and c-lightning, any objections to this clarification?
11:30 < rusty> Yep. Wording is a bit weird: I would suggest "last digit must be zero".
11:30 < cdecker> Yes, libraries getting this wrong is kind of nasty
11:30 < rusty> (Matching the line above)
11:31 < cdecker> Ok, rusty would you like to rephrase that or shall I?
11:31 < rusty> Happy for you to... but agreed it should be specified.
11:32 < t-bast> I believe the OP has moved on to other projects, so maybe we'll need to take over the PR
11:32 < cdecker> I just spoke to sword_smith last week when Elizabeth was in town :-)
11:33 < cdecker> #agreed cdecker to rephrase #700 to match the previous line and merge after acks on Github
11:33 < cdecker> #topic Add bolt11 test vector with amount in `p` units #699
11:33 < t-bast> cdecker: oh great, perfect then
11:33 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/699
11:34 < cdecker> Now, this one feels a bit strange, since it adds an example of how it works, but doesn't showcase the non-working case, which is the interesting thing here
11:34 < cdecker> imho this additional test vector doesn't add anything new. More interesting would have been to show a 1 pico-bitcoin invoice failing
11:34 < t-bast> good point, we could have a test vector for an invalid invoice
11:35 < rusty> cdecker: Agreed. But this should be applied, I should add some invalid invoice tests.
11:35 < rusty> (We had one library which didn't check signatures!)
11:35 < rusty> Actually, I might as well fix this example too.
11:35 <+roasbeef> there's also that weird bech32 quirk w/ the q's
11:35 < rusty> (Wrong key was used).
11:36 < t-bast> roasbeef: what do you mean? the extension attack?
11:36 < cdecker> rusty: would you like to adopt the PR and add some failing examples?
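As context for #700 and the `p` test vector in #699: BOLT 11 amounts are in bitcoin with an optional multiplier, and since 1 BTC is 10^11 msat, a pico-bitcoin (`p`) amount is 0.1 msat; that is why the last digit of a `p` amount must be zero. A small sketch of the resulting check (the function and its name are illustrative, not from the spec):

```python
# Sketch: why #700 requires the last digit of a `p` amount to be zero.
# 1 BTC = 100_000_000_000 msat; multipliers per BOLT 11.
MULTIPLIER_MSAT = {        # msat value of one unit, as (numerator, denominator)
    "m": (100_000_000, 1),  # 1 mBTC = 10^8 msat
    "u": (100_000, 1),      # 1 uBTC = 10^5 msat
    "n": (100, 1),          # 1 nBTC = 100 msat
    "p": (1, 10),           # 1 pBTC = 0.1 msat
}

def amount_to_msat(amount: int, multiplier: str) -> int:
    num, den = MULTIPLIER_MSAT[multiplier]
    total = amount * num
    if total % den != 0:
        # e.g. "1p" (0.1 msat) is not representable: the invoice is invalid
        raise ValueError("sub-millisatoshi amount: last digit of a 'p' amount must be 0")
    return total // den

assert amount_to_msat(10, "p") == 1   # 10 pBTC = 1 msat: the valid test vector
# amount_to_msat(1, "p") raises      # the failing case cdecker would like a vector for
```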
11:36 * niftynei will brb
11:36 < rusty> #action rusty to clean up test vector (using correct privkey) and also add some negative tests in another PR.
11:36 < cdecker> Great, thanks rusty
11:37 < cdecker> So it seems this PR needs some additional work, and we can move on to the next topic.
11:37 < cdecker> Unless there are some more comments related to this one?
11:37 < t-bast> Nope, that sounds good!
11:37 < cdecker> #topic BOLT 1: Define custom message type range #705
11:38 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/705
11:38 < cdecker> Given we already have a couple of experimental features being implemented using the TLVs, it seems only logical to add a range that can be self-assigned by experimenters
11:39 <+roasbeef> this is about messages and not the onion right?
11:39 < t-bast> roasbeef: yes
11:39 <+roasbeef> iirc we already have a cut-out range in the onion
11:39 < t-bast> We already have the same mechanism for onion tlvs
11:39 < cdecker> Custom types are already used by WhatSat and Noise for example, and keysend
11:39 < t-bast> I really mimicked it for messages
11:39 <+roasbeef> cool, should prob start a running wiki entry just so we can track all the experiments going on, and for ppl to glance at to see if they can avoid an early collision
11:39 < cdecker> Oh, I must have missed the onion one, sorry about that
11:39 < ariard_> t-bast: on bech32 https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-November/017443.html
11:40 <+roasbeef> this sgtm though
11:40 < cdecker> roasbeef: that'd be really useful, I read through joost's code to figure out the types he was using
11:40 < cdecker> Ok, so I'm counting ACKs from t-bast, roasbeef, and myself
11:40 < t-bast> ariard: ok thx, are there concerns for our use of Bech32 for invoices? Since we have a signature on top do we need to do anything?
11:41 < cdecker> Sounds like we have quorum, unless there are more comments?
11:42 < cdecker> #action cdecker to merge #705
11:42 < cdecker> And finally the last PR for today: #697
11:42 < cdecker> #topic BOLT-04: modify Sphinx packet construction to use starting random bytes #697
11:42 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/697
11:42 < t-bast> And a good one for the finish line!
11:43 < cdecker> I think we've all implemented some variant of this already
11:43 <+roasbeef> so this'll ship in lnd 0.9 (dropping soon), confirmed that eclair+lnd match up
11:43 <+roasbeef> mhmm, in the end we can all do w/e we want as long as they're random, but nice to have the spec direct the impl with a single safe route and updated test vectors
11:43 * niftynei has returned
11:43 < t-bast> the updated test vector worked for both eclair and lnd, did someone on the c-lightning side have time to validate them?
11:43 < cdecker> Yeah, I have a variant implemented, not yet with the pad-key, but should be no problem
11:44 < t-bast> great
11:44 < ariard_> t-bast: don't think so, but need to verify, the signature doesn't protect you if the invoice decodes to wrong data
11:44 < cdecker> I can implement and verify this week, would that work?
11:44 < t-bast> cdecker: sgtm
11:45 < cdecker> #action cdecker to implement updated padding and merge #697 if test-vectors pass
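The idea behind #697, roughly: instead of zero-initializing the 1300-byte routing-info field of the Sphinx packet, fill it with bytes drawn from a stream keyed off the session key, so that unused space is indistinguishable from encrypted payload. A minimal sketch follows; the "pad" key label and the zero-nonce ChaCha20 stream mirror the HMAC/ChaCha20 constructions BOLT 4 already uses, but the exact labels and lengths are assumptions and should be checked against the PR's updated test vectors.

```python
# Sketch (assumptions flagged above): derive a padding key from the session
# key and use its ChaCha20 keystream as the initial 1300 packet bytes.
import hashlib
import hmac

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

ROUTING_INFO_LEN = 1300

def generate_key(key_type: bytes, secret: bytes) -> bytes:
    # BOLT 4 style key derivation: HMAC-SHA256 keyed with the ASCII key type
    return hmac.new(key_type, secret, hashlib.sha256).digest()

def initial_packet_filler(session_key: bytes) -> bytes:
    pad_key = generate_key(b"pad", session_key)
    # ChaCha20 with an all-zero counter/nonce; encrypting zeros yields the raw keystream
    cipher = Cipher(algorithms.ChaCha20(pad_key, b"\x00" * 16), mode=None)
    return cipher.encryptor().update(b"\x00" * ROUTING_INFO_LEN)
```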
11:45 < cdecker> Excellent. That concludes the round of PRs, let's move on to discussing the open issue #728
11:45 < cdecker> #topic Stuck channels because of small fee increase #728
11:45 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/issues/728
11:46 < t-bast> I'll let you read it, it's probably something you've already thought about
11:46 < cdecker> t-bast would you like to give a short introduction to the issue you have uncovered?
11:46 < t-bast> I think reading the issue should be the most accurate way of introducing it :)
11:46 < t-bast> I tried to make it not too long
11:47 < cdecker> It's very concise, and I think the issue is interesting. Thanks for including the fee-level changes scenario, otherwise I would have dismissed it as not being possible with us since we don't support push_msat
11:48 <+roasbeef> iirc we do some upfront validation to avoid creating channels like this, but it is still the case that a channel can enter this particular state post creation
11:49 < rusty> The original proposal was that fees get gouged out of Bob in this corner case, that being better than stuck channels. But it was decided to make it illegal instead, resulting in this corner case.
11:49 < cdecker> Especially with feerates changing
11:49 < t-bast> yeah that's why it's painful, I've seen this happen to real-world channels
11:49 <+roasbeef> there're also some related edge cases w.r.t the handling of the reserve and fees
11:49 < t-bast> I'm wondering if you think that it's really unsafe for Alice to allow a single HTLC through in that case?
11:50 < cdecker> Wouldn't it be possible to move out of this situation by doing sub-dust, i.e., pruned HTLCs?
11:50 < t-bast> cdecker: yes that's one solution
11:50 <+roasbeef> btw the periods instead of commas in the issue kinda threw me off ;)
11:51 < cdecker> They'd be tiny transactions that go nowhere, but you can unilaterally solve this by dropping tiny payments at Alice
11:51 < t-bast> roasbeef: xD
11:51 < rusty> t-bast: not sure what that code would look like: "I'm going to let non-funder add 1 new htlc iff the resulting feerate is not insanely low"?
11:51 <+roasbeef> to double check...it's a 150 sat channel not 150k sat? :p
11:51 < t-bast> roasbeef: 150k sat in the example
11:52 < t-bast> rusty: that would rather be: in the check where we currently reject, if unpruned htlcs.count <= 1 then I let this one go through
11:52 < niftynei> (cdecker: we do support push_msat as of https://github.com/ElementsProject/lightning/pull/3369)
11:52 < cdecker> 150 sat would mean the entire channel is dust xD
11:53 <+roasbeef> lol yeh just making sure....
11:53 < cdecker> Oh no, why???
11:53 < t-bast> do I really need to check the feerate? Since the estimation is supposed to be very conservative, the feerate we chose should absorb the 20% weight increase without hitting the timeout before on-chain confirmation, shouldn't it?
11:53 < rusty> t-bast: ah, so *always* allow a single HTLC, even if the fee cannot be paid?
11:54 < t-bast> rusty: exactly, because it's only a 20% weight increase (worst case)
11:54 * BlueMatt *waves* why are y'all here on a holiday, doesn't the world run on 'murican dates? :p
11:54 < t-bast> cdecker: I did a push_msat to c-lightning master to test the scenario, and it worked :)
11:54 * rusty is trying to remember if we let fees eat into reserve...
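For reference, a worked sketch of the affordability check behind the #728 scenario, with made-up numbers (the 724/172 weights are the pre-anchor BOLT 3 values; the balances, reserve, and feerate are assumptions chosen to reproduce the stuck state, not figures from the issue):

```python
# Worked example of the "stuck channel" case: Alice is the funder and must
# pay the commitment fee from her balance while keeping her channel reserve.
def commit_fee(feerate_per_kw: int, num_htlcs: int) -> int:
    weight = 724 + 172 * num_htlcs       # pre-anchor BOLT 3 commitment weights
    return feerate_per_kw * weight // 1000

channel_reserve = 1_500                   # sat (1% of a 150k sat channel)
alice_balance   = 3_000                   # sat, barely above reserve + fee
feerate_per_kw  = 2_000                   # sat/kw after a feerate increase

def alice_can_afford(num_htlcs: int) -> bool:
    return commit_fee(feerate_per_kw, num_htlcs) <= alice_balance - channel_reserve

print(alice_can_afford(0))   # True:  1_448 <= 1_500, the current commitment is fine
print(alice_can_afford(1))   # False: 1_792 >  1_500, so even an *incoming* HTLC is refused
# Bob cannot send Alice an HTLC (which would eventually give her funds),
# because Alice, as funder, must pay the larger fee: the channel is stuck.
# t-bast's proposal above is to carve out an exception for a single untrimmed HTLC.
```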
11:54 < niftynei> cdecker, so basically pushing alice enough msats to get the channel moving again
11:55 * BlueMatt notes that we can just remove update_fee and move on instead of trying to fix it :p
11:55 < cdecker> Yep, I don't see why Bob would do that, but it is a unilateral change
11:55 < t-bast> rusty: I don't think we currently do, but maybe we could there...or maybe it's a riskier change
11:55 < cdecker> BlueMatt: this is less about fees than it looks like, we get the same issue if we're close to the reserve limit
11:56 < ariard_> assuming the conservative feerate would absorb the 20% weight increase just makes confirmation riskier in case of rapid mempool min feerate increases
11:56 < BlueMatt> cdecker: ah, maybe I'm missing the point. I can't say I dug into it enough...but we certainly don't have any cases where non-update_fee can break a channel, or my fuzzers would have found it
11:56 <+roasbeef> it's less about update_fee and more about weirdness and edge cases w.r.t the initiator paying and the interaction w/ the reserve
11:56 < BlueMatt> (but they don't cover update_fee cause I *know* that can break it)
11:57 < cdecker> Ok, seems to me like there's some more brainstorming and checking necessary before we get a good solution. Shall we let it cook for a while and revisit next week?
11:57 < t-bast> cdecker: SGTM, let's have people look at this when they have time and we can share thoughts next time or on github
11:58 < rusty> t-bast: So, the modification would be to BOLT 2, to modify "receiving an `amount_msat` that the sending node cannot afford at the current `feerate_per_kw` (while maintaining its channel reserve):
11:58 < rusty> - SHOULD fail the channel." to carve out the single-HTLC case. Same on sender requirements?
11:58 < rusty> t-bast: seems to need a feature bit :(
11:58 < cdecker> Great, as always: comments and discussions on GH are welcome and we can hash out the details in the next meeting
11:58 < cdecker> Sounds good?
11:59 < t-bast> rusty: in that case it's not really the sending node that can't afford it, it's the opposite (when the sending node is the fundee)
11:59 < t-bast> I'm not really sure this is well covered in Bolt 2, but I'll dig into it and share on the issue
11:59 < rusty> t-bast: there's a requirement that you don't send such a thing, and a requirement that you don't accept it. Both need modification.
11:59 < t-bast> rusty: are you sure they're not only for the case where the sender is the funder?
12:00 < lndbot> With MPP you can send multiple dust HTLCs to work around it :p
12:00 < rusty> t-bast: ah, though you're right, there's no requirement specified on the non-funder. There should be :(
12:00 < t-bast> rusty: that's what got me confused, I'll try to draft something
12:01 < ariard_> but using MPP as a workaround means you still have stuck capacity you can't use anymore
12:01 < t-bast> johanth: thinking outside the box ;)
12:01 < cdecker> #action everybody to brainstorm the issue on Github and come up with potential solutions for the next meeting
12:01 < t-bast> ACK
12:01 < rusty> Well, an implementation could refuse to allow any non-dust HTLCs and really simplify their code :)
12:02 < cdecker> ariard_: Alice can't send anything, so there's little point in unsticking her, but Bob can use MPP to unstick the channel and route through Alice, which works
12:02 < cdecker> #topic Current status of the dual funding proposal
12:02 < niftynei> ok so
12:03 < cdecker> So this is the part which I was really hoping to get to, a bit more long-term planning and status updates from people that have been working on parts of the spec that are really complicated
12:03 < cdecker> niftynei was so kind to volunteer to give us a quick update on her work on dual-funding and all the corner cases that it brings with it
12:04 < cdecker> Sorry for interrupting you niftynei :-)
12:04 < ariard_> cdecker: ah gotcha, you can route multiple MPP shards through the same channel, though you may hit the max_htlc limit
12:04 < niftynei> np cdecker!
12:04 < niftynei> here's my update on the dual funding. there's an active PR of an implementation of the dual-funding proposal up for c-lightning. one thing it's missing currently is the flow for RBF
12:05 < niftynei> which is to say that there might be changes to the proposed RFC draft for that section still
12:05 < niftynei> if you're curious to look at the PR, it's here https://github.com/ElementsProject/lightning/pull/3418
12:06 < niftynei> one thing that i've been trying to design for, but haven't actually implemented yet, is the ability for nodes to batch opens together
12:06 <+roasbeef> occurred to me the other week that if channel closing is improved to support multi-channel close (more than one input), then a distinct dual funding flow is less critical, assuming the impls support multiple channels to a node, but then even w/o that the only downside is more than 1 txn to close them
12:06 < niftynei> this helps increase the anonymity set for an open channel transaction, and reduces the certainty that any shared inputs to a funding tx belong to the node that provides them
12:07 < cdecker> niftynei: is that shared open coordinated by one node that is common to all channels, or would that be some other group of nodes collaborating somehow differently?
12:07 < t-bast> +1 on batching
12:08 < t-bast> that may end up being very similar to coinjoin mechanisms, wouldn't it?
12:08 < niftynei> roasbeef, do you have a link to the close channel proposal? i'm not really familiar with channel close considerations
12:08 < niftynei> t-bast, yes
12:08 <+roasbeef> isn't batching already also possible as is today? given that you just send over an outpoint
12:08 < niftynei> to an extent
12:08 < cdecker> roasbeef: could we generalize it to have a BYOI (bring your own inputs) so we remove that limitation?
12:08 <+roasbeef> niftynei: doesn't exist yet, just a half-baked idea lol
12:08 < niftynei> right, gotcha
12:09 <+roasbeef> cdecker: i think so, so then it would be just two nodes jointly crafting a transaction, which is independently useful for many other things
12:09 < cdecker> Yep, and it could work for funding and close the same way imho
12:09 < cdecker> Although it sounds like it'd get us into meta-protocol territory xD
12:10 < niftynei> right, so ideally either party in a dual funding transaction could include another party
12:10 < ariard_> and we could reuse the same fee logic for splicing
12:10 < niftynei> i.e. either the opener or the 'accepter' (after the accept_channel message they respond with)
12:11 < niftynei> *named after
12:11 < niftynei> one big difference between a channel open and a coin-join is that coin-joins to some extent (afaict) have a central coordination server
12:11 < cdecker> Sounds interesting, though we probably want to limit the number of rounds in which you can propose more stuff to bolt onto a transaction :-)
12:12 < niftynei> the current proposal uses a limit on the number of inputs as a proxy for that
12:12 <+roasbeef> idk seems difficult to coordinate all that without some elevated party, also stepping back seems like we're trying to do everything in this current dual funding proposal, which is making it progressively more complex
12:12 < ariard_> cdecker: we can also have an active/passive party, where only the active one can keep adding inputs/outputs from the other peer, to avoid dependencies
12:13 < niftynei> whereas a dual-funded tx doesn't have a central coordinator. this makes it harder/impossible to do the shielded inputs that recent coin-join protocols have proposed
12:13 < cdecker> That's a good point. niftynei is there a good way to make these changes incremental, i.e., is there a minimal set we can implement and test now, and then make more changes as needed later?
12:14 < niftynei> currently we avoid dependencies by failing/rejecting the open if a peer sends an input you've already seen
12:14 < niftynei> have you taken a look at the current proposal?
12:15 < cdecker> Not in detail, no :blushes:
12:15 < niftynei> i'm open to suggestions for simplification, i like to think we've hit the most simplified version possible while still allowing for batched opens
12:16 < niftynei> having batched opens is very nice because it mitigates (minorly) the sharing of your current utxos
12:16 < cdecker> Ok, so I misunderstood, sorry. What you're saying is that the PR is the minimal possible change that still gives us the desired flexibility, gotcha
12:17 < t-bast> I agree, and having batched opens could pave the way for batched close as well, couldn't it?
12:17 < niftynei> yes, i took roasbeef's comment to be a desire to cut functionality, which would mean removing the batchedness
12:17 < niftynei> that'd definitely be the easiest way to simplify the proposal
12:17 < ariard_> I think so, you can reuse the same set of transaction construction messages like funding_add_input/output
12:17 < cdecker> I'll read up on the commits and be ready next time :-)
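A rough sketch of the collaborative construction being described here: each party contributes inputs and outputs, duplicate inputs cause the open to fail (as niftynei notes above), and an input cap bounds the negotiation. The message names follow the draft ariard_ mentions (funding_add_input/funding_add_output); the fields, the limit, and the flow are illustrative assumptions, not the actual proposal.

```python
# Illustrative only: a minimal session for jointly building a funding tx.
from dataclasses import dataclass, field

MAX_INPUTS = 64   # assumed cap, used as a proxy to bound negotiation rounds

@dataclass
class FundingSession:
    inputs:  dict = field(default_factory=dict)   # (txid, vout) -> contributing peer
    outputs: list = field(default_factory=list)   # (script, amount_sat, contributing peer)

    def funding_add_input(self, peer: str, txid: str, vout: int) -> None:
        if (txid, vout) in self.inputs:
            # as discussed: reject the open if a peer sends an input we've already seen
            raise ValueError("duplicate input, failing the open")
        if len(self.inputs) >= MAX_INPUTS:
            raise ValueError("too many inputs, failing the open")
        self.inputs[(txid, vout)] = peer

    def funding_add_output(self, peer: str, script: bytes, amount_sat: int) -> None:
        self.outputs.append((script, amount_sat, peer))
```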
12:18 < t-bast> Do you think cutting batchedness for a first version still makes it easy to add later? Or is it much simpler if we do it from the start?
12:18 < niftynei> i don't think it's possible to add it later
12:18 < t-bast> ok
12:18 < t-bast> so even if it makes the proposal more complex, it's probably worth having
12:18 <+roasbeef> well it was made from an outside perspective, I haven't been following it super closely but just from my PoV i've seen the scope expand over time
12:18 < ariard_> though if you do batchedness, we should also consider CPFP to avoid one party not re-signing and getting everything stuck
12:18 <+roasbeef> could be wrong tho
12:19 < niftynei> anyway, one thing i could use some help with is dealing with fees in the batched open case
12:19 < niftynei> i wrote about it in this channel earlier, but it'll probably be easier to digest via a mailing list post
12:19 < cdecker> Hm, the batch funding seems a bit distinct though, would it be possible to treat that as a separate dependency for dual-funding?
12:20 < cdecker> I.e., merge batch funding first and then build dual funding on top?
12:20 < niftynei> part of the work i'm doing this week is updating the spec protocol tests framework rusty's been working on to include dual-funding tests
12:21 < niftynei> you can do batch funding already with c-lightning
12:21 < cdecker> Great, I think we all need some more time with your PR to get more familiar with the changes.
12:22 < niftynei> the biggest change for dual-funding (and i'd assume a batched close) is socializing inputs/outputs for transaction construction
12:22 < niftynei> ok great. i think i'm way over time.
12:22 < cdecker> You mean how willing nodes are to share their inputs with peers?
12:23 < t-bast> Because there's a risk a node is faking a channel open to discover your inputs?
12:23 < cdecker> No problem niftynei, let's move the discussion to GH then ^^
12:23 < niftynei> yes exactly. though, i think i can make a case that your open utxo set isn't as precious for a lightning node as it is for a bitcoin wallet
12:23 < t-bast> That's true, as long as you want to use them to get into lightning channels, they'll end up mixed once inside LN
12:24 < t-bast> but that means wallet developers need to take this into account, and segregate them as much as possible from UTXOs used for direct bitcoin txs
12:24 <+roasbeef> the other consideration for light clients is that they can't easily verify the inputs
12:24 < niftynei> because they're not the sum of your total wallet -- there are also invisible funds already in channels
12:24 <+roasbeef> nodes with a full-node backend can just check their utxo set
12:24 < niftynei> 'invisible' of course depending on whether it's private or not
12:25 < cdecker> #action everyone to take a look at "dual-funding, the monster" ElementsProject/lightning#3418 :-)
12:25 < niftynei> roasbeef, yes. we need the sig changes that are proposed in taproot (i believe)
12:25 < cdecker> t-bast would it be ok if we move the trampoline discussion to the next meeting?
12:25 < t-bast> cdecker: sure, no problem, we've already done a lot!
12:25 < cdecker> (it's getting kinda late, and I'd like to give it enough room to be discussed in full)
12:26 < cdecker> Great, thanks
12:26 <+roasbeef> niftynei: signing all input values? i don't think that's sufficient, the values may be right but an input may already be spent, making the entire transaction invalid
12:26 <+roasbeef> lol "the monster"
12:26 < niftynei> to make it light-client compatible (ie including the output script's format as a member of the signed hash)
12:26 < t-bast> roasbeef: that sounds like a nightmare-ish FSM to build
12:27 <+roasbeef> t-bast: yeh...
12:27 < cdecker> seems to me like we should take a hard look at the work being done in coinjoin land to get our coordinated tx building skills up to par
12:28 < niftynei> oh you're talking about a different case
12:30 < cdecker> I definitely need a diagram of all the pieces in flight :-)
12:31 < cdecker> Seems like we have reached a bit of a quiet moment, shall we finish up and continue the discussion another time?
12:31 < niftynei> thanks for chairing cdecker
12:31 < t-bast> yes sounds good, thanks for chairing cdecker
12:31 < cdecker> Thank you niftynei for the update on dual-funding :-)
12:32 < t-bast> I really think having a clear agenda that we can agree on during the week before the meeting makes it more efficient
12:32 < cdecker> (sorry I was a bit unprepared, but I didn't make it through the monster in time)
12:32 < cdecker> #endmeeting
12:32 < lightningbot> Meeting ended Mon Jan 20 20:32:20 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
12:32 < lightningbot> Minutes: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-01-20-19.06.html
12:32 < lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-01-20-19.06.txt
12:32 < lightningbot> Log: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-01-20-19.06.log.html
12:32 < darosior> cdecker: "we should take a hard look at the work being done in coinjoin land to get our coordinated tx building skills up to par" => about that point, joinmarket is definitely worth a look :-)
12:32 < niftynei> lol well i don't think the proposal's been updated in weeks so :P
12:33 < niftynei> the PR is new, but the spec proposal hasn't really changed much recently lol
12:33 < cdecker> Right, thanks t-bast, I hope it went ok for everybody, and I'm open to meta feedback on the format
12:33 < t-bast> Thanks everyone, gotta go!
12:33 < ariard_> cdecker: will you handle closing #726 and opening the IN- follow-up?
12:34 < cdecker> I know roasbeef mentioned he'd prefer having the agenda on GH, so I'll open an issue for the next meeting there
12:34 < cdecker> ariard_: yes, I think I self-assigned it
12:35 < cdecker> Oh, turns out meetbot handles agreed and action items differently, need to be more consistent in future then :-)
12:37 < cdecker> Thanks for the nice meeting everyone, see you on the interwebs ^^
--- Log closed Tue Jan 21 00:00:15 2020