--- Log opened Mon Apr 12 00:00:23 2021
12:58 < niftynei> hello
12:59 < niftynei> meeting agenda has been graciously posted by t-bast at https://github.com/lightningnetwork/lightning-rfc/issues/860
13:00 < t-bast> hey hey!
13:00 < cdecker> Hi everybody :-)
13:00 < vincenzopalazzo> hello guys :)
13:00 < t-bast> not a lot of new PRs recently (which is good, we have time to focus) so the agenda is quite small :)
13:00 < ariard> yooo :)
13:00 < niftynei> small fyi that rusty is out on vacation this week
13:01 < cdecker> I'm afraid I didn't make it in time to draft up a keyolo counterproposal yet
13:01 < niftynei> ten demerits from house c-lightning
13:02 < t-bast> no worries cdecker, there's no rush there! Another subject that I didn't put on the agenda that may be interesting would be to discuss the perf benchmarks joost put together recently, and what each implementation discovered in their own code thanks to that
13:02 < t-bast> sharing the low-hanging fruit we each discovered in our code may be useful for others
13:02 < jkczyz> hey... partly lurking
13:02 < cdecker> But... the house cup... :-(
13:03 < ariard> t-bast: do you have the link?
13:03 < bitconner> hey hey 👋
13:03 < niftynei> should we get started?
13:03 < niftynei> i can chair, that's worth some points back to c-lightning right? ;)
13:03 < t-bast> https://twitter.com/joostjgr/status/1376925022292443141?s=20
13:04 < ariard> danke schön
13:04 < t-bast> A lot of the responses to that tweet contain interesting takes from many people
13:04 < ariard> BlueMatt ^^
13:04 < t-bast> 10 points to niftynei for chairing :)
13:04 < niftynei> #startmeeting
13:04 < lndev-bot> niftynei: Error: A meeting name is required, e.g., '#startmeeting Marketing Committee'
13:05 < niftynei> #startmeeting April 12th lightning-dev meeting
13:05 < lndev-bot> Meeting started Mon Apr 12 20:05:03 2021 UTC and is due to finish in 60 minutes. The chair is niftynei. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:05 < lndev-bot> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:05 < lndev-bot> The meeting name has been set to 'april_12th_lightning_dev_meeting'
13:05 < BlueMatt> ariard: yep, I saw it. I whipped up a quick benchmark showing we were doing great on the fsync count, but doing a comparable actual wall-clock comparison requires more than a quick hacked-up benchmarker
13:05 < niftynei> let's start with this benchmarking topic, and then move into pull requests?
13:05 < BlueMatt> ariard: we're at a solid 10 fsyncs/send/two nodes, which is almost the lowest we can be; theoretically I think you could shave two off, but it's probably not worth it
13:05 < t-bast> sgtm
13:06 < niftynei> #topic benchmarking study by joostjgr
13:06 < cdecker> My first question is: do we care about individual node performance?
13:06 < niftynei> #link https://twitter.com/joostjgr/status/1376925022292443141?s=20
13:06 < t-bast> Yep, what was interesting in this benchmark is the surprises when you really go E2E
13:06 < ariard> BlueMatt: tested from val sample? digging into the article
13:06 < t-bast> cdecker: it depends, we still need reasonably good node perf in all cases
13:06 < BlueMatt> ariard: no, I was just trying to calculate the fsync count, given that seems to be ~everyone's bottleneck, and we're doing great. I didn't try to do a comparable throughput test, which would include batching.
13:07 < cdecker> I mean it's a nice metric to have, but we scale by increasing channels, and having pressure to avoid overloaded (central) nodes is somewhat nice if you ask me
13:07 < t-bast> cdecker: from what I understood, the main reason for this benchmark was to evaluate whether big wallet "hubs" could offer streaming services to many clients
13:07 < t-bast> that's why the tests contained the invoice generation and recipient receive parts (and not simply routing)
13:07 < cdecker> If we care, I think we can do quite some tweaks to optimize performance from the benchmark
13:08 < BlueMatt> cdecker: honestly, the numbers were pretty low. fsyncs matter, but he was testing throughput with batching, which means you *should* get pretty close to just crypto as the bottleneck, which my benchmark clocks in at around 2-3ms per payment
13:08 < t-bast> Definitely, for example we noticed that our main bottleneck was invoice generation because we were very inefficiently doing pubkey recovery
13:08 < BlueMatt> cdecker: it may be that the bottleneck for most nodes is on the sending/receiving end, which, whatever, but if it's also similar when routing, that should probably be improved
13:08 < niftynei> that context around the idealized usecase of fast payments is good to know (streaming payments)
13:09 < cdecker> One thing that I was looking into with my own (local) benchmarks was the pipeline depth when adding HTLCs (i.e., number of ops before committing)
13:09 < cdecker> Then we could do things like defer storing HTLCs until the commit, and work around higher latency
13:09 < niftynei> (from a use-case perspective, streaming every single payment and not having a localized 'balance' seems like the naive way to do it, but ultimately pricier than doing "1 minute/5 minute" credit-style txs)
13:10 < t-bast> cdecker: IIUC c-lightning by default sends commit_sig only in batches? Do you have a regular ticker for that? How is it configured? Eclair simply schedules a `sign` after each HTLC operation
13:10 < BlueMatt> cdecker: it should basically never be more than one full commitment-signed dance, no? irrespective of how many HTLCs are in the outbound buffer
13:10 < BlueMatt> t-bast: does that mean eclair does no batching at all?
13:10 < cdecker> We start a timer of 10ms after performing the first op, and then trigger a sign once the timer expires
13:11 < ariard> were those micro-payments dust ones? you might not have to generate sigs for them
13:11 < cdecker> 10ms is way too short really, but testing locally (no latency) it was ok
13:11 < t-bast> BlueMatt: we do "pessimistic" batching: whenever a commit_sig could make sense, we queue it up - if other HTLCs were concurrently added, we end up signing once for many htlcs, but if none were added, we sign instantly
13:11 < devrandom> BTW for hubs that connect to a lot of leaf nodes (e.g. consumers), batching across channels would be needed (i.e. fsync multiple commits at once), which could get interesting
13:11 < ariard> bump your dust_limit_satoshis to decrease latency?
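[Editor's note: the timer-based commit_sig batching cdecker describes (start a timer on the first HTLC op, sign once when it fires) can be sketched as below. This is an illustrative model with a fake clock, not actual c-lightning or eclair code; `TimerBatcher` and all names are hypothetical.]

```python
# Minimal sketch of timer-based commit_sig batching, assuming a fake
# millisecond clock so the behaviour is deterministic. BATCH_WINDOW_MS
# mirrors the 10ms timer mentioned in the discussion.

BATCH_WINDOW_MS = 10

class TimerBatcher:
    """Start a timer on the first pending op; one signature covers the batch."""
    def __init__(self):
        self.pending = []      # HTLC updates waiting for a signature
        self.deadline = None   # fake-clock time at which we sign
        self.signatures = []   # batch sizes, one entry per commit_sig sent

    def add_update(self, now_ms, htlc):
        if not self.pending:
            # First op since the last signature: arm the timer.
            self.deadline = now_ms + BATCH_WINDOW_MS
        self.pending.append(htlc)

    def tick(self, now_ms):
        if self.pending and now_ms >= self.deadline:
            # Timer expired: sign once, covering every queued update.
            self.signatures.append(len(self.pending))
            self.pending, self.deadline = [], None

batcher = TimerBatcher()
batcher.add_update(0, "htlc-1")   # timer armed, deadline = 10
batcher.add_update(4, "htlc-2")   # lands in the same window
batcher.tick(5)                   # window not yet expired: no signature
batcher.tick(10)                  # fires: one commit_sig covering both HTLCs
print(batcher.signatures)         # -> [2]
```

Eclair's "pessimistic" batching described by t-bast differs only in the trigger: it signs as soon as a signature could make sense, so concurrent adds batch opportunistically rather than waiting out a fixed window.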
13:11 < BlueMatt> t-bast: hmm, ok
13:12 < t-bast> But we've long thought that we needed to experiment with forced batching at regular intervals
13:12 <+roasbeef> also important to remember that the current protocol hamstrings x-put in many cases
13:12 < BlueMatt> devrandom: right, IIRC the benchmarks joost ran were like 10 channels, so that should have been possible, though it may not be turned on in some cases cause it's 10 channels between the same nodes, not across different nodes
13:12 < ariard> devrandom: if big hubs rely on replicated channel monitors/watchtowers they might have severe latency hits
13:12 < t-bast> It's worth having some kind of realistic benchmark to test whether it's useful in practice or not
13:12 <+roasbeef> since there's a static amt that any sig can cover re new additions, and you always need to wait for the full dance before proposing new updates
13:12 <+roasbeef> his benchmark was just direct nodes, but that shines a lot more when you go to the multi-hop, high-latency link setting, as you can't really pipeline properly
13:13 < BlueMatt> roasbeef: right, but with batching *throughput* should be realllyyy high
13:13 <+roasbeef> it depends really, also i'm pretty sure basically none of us have tried concentrated optimization
13:13 < BlueMatt> yea, more work is definitely needed to do benchmarks that focus on routing nodes
13:13 < BlueMatt> not just sending/receiving
13:13 <+roasbeef> as there're other things to work on in the critical path
13:13 < BlueMatt> since I'm not sure how much it's a critical feature to do X000 sends/sec to the same next-hop node.
13:13 * cdecker would like to point out that eltoo allows not storing updates that only subtract, so if multiple HTLCs go in the same direction (streaming, ...) only one side needs to persist an update
13:13 < BlueMatt> routing X000 sends/sec should be doable
13:14 < BlueMatt> cdecker: Thanks Eltoo-Man!
13:14 <+roasbeef> cdecker: yeh even aside from that, there's a lot of other book-keeping a node needs to do, like payment, invoice, circuit info, etc
13:14 <+roasbeef> and there're a lot of questions there re what the actual medium access protocol would be
13:14 < cdecker> Yeah, sorry, had to bring it up :-p
13:14 < t-bast> BlueMatt: I fully agree, I had a hard time understanding why the benchmark wasn't focusing on relaying only
13:14 <+roasbeef> going back to my other point, rn you always need to wait after sending a single sig; we can go back to how it was before, where when you start you send N commitment points
13:15 <+roasbeef> then that lets you send N new states before needing to wait; since things are asymmetric it's fully de-sync'd, so you never really need to wait for the other side, basically moving to a TCP sliding-window type thing
13:15 < BlueMatt> roasbeef: right, but that's just latency, not throughput. and the latency is actually nice from a privacy perspective
13:15 < cdecker> Uff, that'd mean we need to disambiguate and discard many commitments, doesn't it roasbeef?
13:15 < BlueMatt> it's *good* that we have to wait some time and batch, from a privacy perspective
13:15 < BlueMatt> t-bast: right, well version 1, hopefully more work done there.
13:15 <+roasbeef> no that helps w/ x-put as well; if you look at the nodes most of the time, they're waiting for the revocation so they can continue to commit more, and batching won't be 100% so there's a lot of underutilized time
13:16 < cdecker> I think the simplicity that one-commit-in-flight affords us is very nice, we already have a very complex protocol
13:16 < BlueMatt> roasbeef: right, but that sounds like something that can be solved with software
13:16 < BlueMatt> not just changing the protocol to add more complexity
13:16 <+roasbeef> cdecker: the gap wasn't too large, lnd did that before we switched over
13:16 < cdecker> roasbeef: that can be solved by queuing new HTLCs and committing them in the next commit
13:16 <+roasbeef> you just have a commitment point queue basically and things are mostly the same
13:17 <+roasbeef> that's not the same really tho, since you can't start the forwarding process yet
13:17 < BlueMatt> cdecker: I....assume everyone does that?
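[Editor's note: roasbeef's "send N commitment points up front" proposal amounts to a TCP-style sliding window over unrevoked commitments. The sketch below is a hypothetical model of that idea, not lnd's old implementation; as the discussion notes, today's flow behaves roughly like a window of one, since you wait for the revocation before signing the next state.]

```python
# Sliding-window sketch: a node may sign up to `window_size` new remote
# commitments before it must wait for a revoke_and_ack. All names are
# illustrative.

class CommitmentWindow:
    def __init__(self, window_size):
        self.window_size = window_size   # N pre-shared commitment points
        self.next_state = 0              # next state number we may sign
        self.last_revoked = -1           # highest state the peer has revoked

    def can_sign(self):
        # Count states signed but not yet revoked by the peer.
        in_flight = self.next_state - (self.last_revoked + 1)
        return in_flight < self.window_size

    def sign_next(self):
        assert self.can_sign(), "window full: must wait for a revoke_and_ack"
        self.next_state += 1

    def on_revoke(self, state):
        # A revocation slides the window forward, freeing a slot.
        self.last_revoked = max(self.last_revoked, state)

w = CommitmentWindow(window_size=3)
signed = 0
while w.can_sign():
    w.sign_next()
    signed += 1
print(signed)        # -> 3: three states in flight before we must wait
w.on_revoke(0)
print(w.can_sign())  # -> True: the revocation freed a slot
```

cdecker's objection maps onto the extra state this needs: with several unrevoked commitments outstanding, both sides must track (and on-chain, disambiguate) more than one valid remote transaction at a time.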
13:17 <+roasbeef> it was pretty night and day when we switched over fwiw, and I did tests w/ actual multi-hop nodes then as well
13:17 < BlueMatt> I mean ultimately you're limited in practice by the max htlc count anyway
13:17 <+roasbeef> in terms of the x-put reduction
13:17 <+roasbeef> yeh that's the other thing I mentioned, it's a value that is pretty arbitrary, and you can make it nearly unbounded as long as you mind chain costs in the worst case
13:18 < t-bast> In practice you're mostly limited by the volume of payments on mainnet xD, there isn't enough demand to make it really worth it to aggressively batch
13:18 < cdecker> BlueMatt: yes, but the important thing is to then have the window to add them all before committing again
13:18 <+roasbeef> but idk, this isn't something that really worries me, since none of us have really tried to focus on impl-level optimizations
13:18 < BlueMatt> cdecker: right, at the end of the day, though, we *should* be waiting 10 or 100ms before forwarding anyway, so I really don't get this discussion
13:18 < cdecker> Yes, we probably have quite some slack to optimize before requiring proto-level changes
13:18 <+roasbeef> t-bast: yeh that's the other thing, we're likely focused on issues re UX and reliability; if nodes start falling over as they can't keep up w/ demand, that's a great problem lol
13:18 < BlueMatt> it's pretty important for privacy that batching happens there
13:19 < t-bast> roasbeef: totally agree, it's interesting to have these benchmarks early, but they show mostly that nobody really worked on perf yet because it's simply not needed - premature optimization and all that
13:19 < BlueMatt> so we should focus on improving throughput given batching and latency, not try to rip it out
13:19 <+roasbeef> in the multi-hop setting, it's likely the case that just the jitter due to node processing gives you that de-synced batching over time
13:19 < cdecker> BlueMatt: just pointing out that having to wait for a commit is unlikely to reduce throughput in the end
13:19 < BlueMatt> cdecker: right, that's my point...
13:19 < cdecker> Ok ^^
13:19 <+roasbeef> t-bast: yeh no one really worked on perf, and it wasn't too bad imo, as it's just a starting point, and as I showed on twitter it really depends on what you're running the benchmark on
13:20 <+roasbeef> like my m1 laptop got like 3x what was posted on the blog post
13:20 < BlueMatt> roasbeef: not to mention, if anyone actually got an industrial m.2 instead of shit consumer ones, fsync would be 10x faster, and that's, like, *all* the time....
13:20 < BlueMatt> maybe 50x
13:20 < t-bast> it's impressive that the M1 chip gets such good results compared to joost's
13:20 <+roasbeef> yeh def, the latest SSD stuff is basically memory at this point
13:20 < BlueMatt> t-bast: it's all the ssd, not the chip
13:21 < BlueMatt> t-bast: joost was on google cloud, which is probably ssd-over-san
13:21 < BlueMatt> so latency is gonna be a chunk higher
13:21 < t-bast> right, it's mostly fsync that's limiting us right now
13:21 < cdecker> We have some benchmarks (with no latency) that show 400-500 tx/second, so 10x Joost's numbers
13:22 < BlueMatt> cdecker: right, my naive benchmarks showed 400/sec with zero batching, so could even push it higher.
13:22 < BlueMatt> (if you cut the I/O to zero)
13:22 < cdecker> Locally we should feel the impact of fsyncs the most; with remote endpoints we'll probably feel latency the most
13:22 <+roasbeef> but yeh idk, cool that there's an easy-to-run benchmark, but I wouldn't say optimizing like mad is in the top 5 list of prios re the network/impl in my mind
13:22 < cdecker> I mean my node routes 50 tx / ... week, so that's not the bottleneck xD
13:23 < BlueMatt> lol, right
13:23 < niftynei> it sounds like hardware is the easiest win at this point for a speed up
13:23 < BlueMatt> anyway, yea, good to have benchmarks, people can figure out what they want to do with it on their own, doesn't imply protocol changes, probably
13:23 < niftynei> should we move on to PRs?
13:23 < t-bast> we're starting to experiment more thoroughly with remote postgres, to see how much latency it's adding
13:23 < BlueMatt> niftynei: yea, industrial ssds are just dram with a battery and a backing ssd, so they're ~instant
13:23 <+roasbeef> I do around 500 a week ;)
13:23 < cdecker> I think we have more impactful things to look into first, then optimize params for latency, and only then should we look into protocol changes
13:24 < BlueMatt> right, we'll also be limited by 400 max htlcs plus per-hop delays of 50 or 100ms just to batch enough htlcs to have non-zero privacy, at least for larger routing nodes in the future
13:24 < BlueMatt> sooooo
13:25 < niftynei> #topic per_commitment_secret must be a valid secret key
13:25 < niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/859
13:25 < BlueMatt> looks like it has acks
13:25 < BlueMatt> can we just merge?
13:26 < cdecker> ack
13:26 < bitconner> ack
13:26 < t-bast> ack
13:26 < BlueMatt> merged
13:26 < niftynei> ok great, that's resolved.
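[Editor's note: the fsync-bound throughput discussed above is easy to sanity-check. The arithmetic below uses BlueMatt's figure of ~10 fsyncs per payment across two nodes; the fsync latencies are rough assumptions for illustration, not measurements from joost's benchmark.]

```python
# Back-of-the-envelope payments/sec when fsync is the bottleneck.
# Assumed fsync latencies: ~1ms for a consumer SSD, ~0.02ms (20µs) for a
# power-loss-protected "industrial" NVMe drive.

FSYNCS_PER_PAYMENT = 10  # BlueMatt's count for one send across two nodes

def payments_per_sec(fsync_latency_ms, batch_size=1):
    # With batching, one round of fsyncs amortizes over `batch_size` payments.
    per_payment_ms = FSYNCS_PER_PAYMENT * fsync_latency_ms / batch_size
    return 1000 / per_payment_ms

print(round(payments_per_sec(1.0)))                 # -> 100 (consumer SSD, no batching)
print(round(payments_per_sec(1.0, batch_size=10)))  # -> 1000 (same disk, batches of 10)
print(round(payments_per_sec(0.02)))                # -> 5000 (fast NVMe, no batching)
```

This is consistent with the numbers in the discussion: on commodity disks fsync dominates, batching buys an order of magnitude, and with a fast enough drive the ~2-3ms of crypto per payment becomes the bottleneck instead.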
13:26 < vincenzopalazzo> ack :) 13:26 < t-bast> the PR part of this meeting was very fast xD 13:26 < niftynei> yes, that does conclude the PR portion of this meeting 13:27 < niftynei> next up is issues 13:27 < t-bast> unless someone has a PR they want to be looked at? 13:28 < niftynei> ok if someone thinks of something post it and we'll come back 13:28 < cdecker> I'll prep the keyolo PR for next time, so we can continue discussion 13:28 < niftynei> #topic Discuss lightning network architecture diagram 13:28 < niftynei> there's no link for this, other than the ML post 13:29 < niftynei> #link https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-April/002990.html 13:29 < cdecker> #link https://upload.wikimedia.org/wikipedia/commons/f/f9/Lightning_Network_Protocol_Suite.png 13:29 < niftynei> i'm not sure what the issue is with this, does anyone have some insight? 13:29 <+roasbeef> hmm, I realize we lost that IRC bot in here as well 13:29 < t-bast> Rene was asking for some feedback on this diagram 13:29 < cdecker> I think mostly collect feedback for the diagram 13:29 < ariard> i think it's more call for reviewers 13:30 < niftynei> ah i see. 
thanks ya'll 13:30 < t-bast> I find the "Unreliable routing layer" confusing, not sure why it's named "unreliable" 13:30 < ariard> hmmm not sure about the reliable payment layer, your payment attempt might not succeed before expiration of the invoices 13:31 < t-bast> Not sure either why the top one is "reliable" payment layer 13:31 < cdecker> A bit shouty with SPHINX which is a name, not an acronym iirc 13:31 < cdecker> I think his intention was to separate the layers similar to IP (unreliable, packet based) vs TCP (reliable datatransfer, stream) 13:31 -!- lnd-bot [~lnd-bot@165.227.7.29] has joined #lightning-dev 13:31 < lnd-bot> [lightning-rfc] TheBlueMatt pushed 2 commits to master: https://github.com/lightningnetwork/lightning-rfc/compare/83980de78600...a9db80e49d17 13:31 < lnd-bot> lightning-rfc/master 55ee3f4 Lloyd Fournier: per_commitment_secret must be a valid secret key 13:31 < lnd-bot> lightning-rfc/master a9db80e Matt Corallo: Merge pull request #859 from LLFourn/patch-1 13:31 -!- lnd-bot [~lnd-bot@165.227.7.29] has left #lightning-dev [] 13:32 < cdecker> Thanks lnd-bot xD 13:32 < ariard> doesn't mention bolt 5 the non-interactive part of the protocol only played against the chain 13:32 <+roasbeef> the main idea was that one has retries, while the other is set and forget essentially 13:32 < t-bast> but that doesn't make it more "reliable", does it? 13:32 <+roasbeef> the on chain stuff is prob meant to be in that layer below, since it's more of a hop by hop thing from the PoV of the nodes 13:32 < t-bast> I think that naming is confusing, because the distinction here isn't the same as IP vs TCP 13:32 <+roasbeef> yeh wording can prob be imrpvoed, bu I look it as forwarding vs routing layer 13:32 < ariard> but you learn from your failures or in theory you could reuse already-computed paths 13:33 <+roasbeef> yeh the idea is you add a payment loop to the normal SendHTLC API to actually complete a payment 13:33 < cdecker> roasbeef, so more end-to-end vs. 
routing nodes 13:33 <+roasbeef> yeh 13:33 < ariard> yeah onchain is more peer 2 peer layer 13:33 < cdecker> gotcha 13:33 < t-bast> sounds like "unreliable routing layer" is just "payment routing layer" 13:33 < t-bast> and "reliable payment layer" is more an "application layer" of sorts 13:33 < cdecker> I wonder if on-chain shouldn't be a column on its own, given it's cross-cutting nature 13:34 <+roasbeef> hmm i'd say there's one more above, then the application layer is there, so stuff like using TLV's etc for new use cases 13:34 < t-bast> true, application layer could be even above, but the top layer is something where implementation may diverge a lot, most of it isn't protocol-defined (path-finding, choice of error handling, etc) 13:35 < cdecker> Yeah, it's unlikely there's a perfect (planar) representation, but the surrounding text in the lnbook can certainly provide a perspective that makes sense 13:35 <+roasbeef> yeh I had another one that was a lot uglier: https://user-images.githubusercontent.com/998190/112879240-bfe99480-907d-11eb-96f3-0dfad1fe8a3d.png 13:36 < cdecker> There are a couple of things we could add facets (see TLV) where they end up in different parts, so we should likely concentrate on the core things to make ln work in this 13:36 < t-bast> I would drop the "reliable" / "unreliable" part though, even with surrounding text to explain it's a bad parallel to standard layer stacks that doesn't translate very well IMHO 13:36 < cdecker> Right, "Payment layer" and "routing layer" seem pretty apropriate imho 13:37 < ariard> t-bast: even more confusing a ln node might emit ln messages or base layer messages both of them on top of tcp 13:38 <+roasbeef> cdecker: see my ugly version re TLV 13:38 < t-bast> Layering this kind of thing is really hard...makes me think again and again about the fact that OSI is criticized a lot because while it's a great way to explain networking in a classroom to make it look like it makes a lot of sense, in practice it just 
doesn't work that way
13:38 <+roasbeef> but you're right in that it's hard to like express all the dependencies/links w/ it all
13:38 <+roasbeef> we tried to show the relations w/ the boxes that span multiple rows/layers
13:38 < ariard> roasbeef: shouldn't the wallet layer be near the link layer, or be renamed "node-control/peer selection/management"?
13:38 <+roasbeef> in my other version, I just duplicated stuff where it overlapped
13:39 < t-bast> Why do you even try to have a completely layered diagram? It feels like some of the bottom parts make sense to be layered, but most of the rest is better explained as separate "components" that use the core components for various things
13:39 < t-bast> I'm not sure layers really make sense all the way
13:40 <+roasbeef> yeh it's hard to communicate the deps n stuff, nothing is perfect
13:40 < cdecker> I like it, as a sort of index of where things fit into the protocol
13:40 <+roasbeef> but the idea is that this is meant to be like a "mind map" of sorts
13:40 <+roasbeef> then over time the reader starts to understand how things are linked, they zoom out to it, etc, etc
13:40 < niftynei> heh. my feedback would be something along the lines of "i don't find OSI diagrams very helpful at all" but that's not a very productive contribution. the "map of the territory" idea is pretty good though
13:41 < niftynei> i used to make prezzis about the android architecture to help orient myself -- you can zoom in and out on them
13:41 < ariard> t-bast: an OG critique of the OSI model: https://www.rfc-editor.org/rfc/rfc874.html
13:41 <+roasbeef> yeh we're going for more of a map of the territory, since things are non-linear anyway
13:41 <+roasbeef> I wouldn't say this tries to "copy" the OSI diagram either, insofar as it's just a stack diagram lol
13:41 <+roasbeef> like the one I posted has 10 layers or something
13:41 < t-bast> ariard: :+1:
13:41 <+roasbeef> (the black and white crude one)
13:41 < ariard> imo best is to draw your own mind map if you want to learn seriously :)
13:42 < t-bast> I think the mind map would make a lot of sense, just drop the layers from it :)
13:42 < cdecker> Well, we do have some layers: transport, update, multi-hop. I think that distinction makes sense
13:42 < t-bast> roasbeef: I had to scroll too much to see that whole diagram so I gave up xD
13:42 < niftynei> a lot of the protocol is "phase"-oriented rather than stack-oriented, imo
13:43 < cdecker> We can swap transport without affecting the layers above it. We can swap update with minimal impact to multi-hop. And we will change multi-hop with PTLCs without impacting the update layer
13:43 <+roasbeef> t-bast: there are no layers... *waves hands*
13:43 < niftynei> "L2, the layerless layer"
13:43 <+roasbeef> cdecker: +1
13:44 < t-bast> yes, some parts make sense to be layered, but the rest would probably work better as a scattered mind map (but I agree it's hard to draw the line and come up with something that looks good and makes sense)
13:44 < cdecker> Yeah, that moniker was a mistake (been calling it off-chain for exactly that reason)
13:44 <+roasbeef> like path finding just cares that it gets back an error, we don't *have* to use the current onion-encrypted thing to send it back
13:44 <+roasbeef> also remember this is for total noobs, and y'all are the experts lol
13:44 < niftynei> prezzi, prezzi, prezzi
13:44 < niftynei> (you can't print a prezzi tho)
13:44 * roasbeef feels motion sick
13:44 < cdecker> What's this about Italian prices?
13:44 < niftynei> oh it's prezi
13:44 < niftynei> https://prezi.com/
13:44 < niftynei> lol
13:44 < t-bast> It has to fit on a t-shirt
13:45 < t-bast> That's the number 1 requirement
13:45 < niftynei> tough crowd
13:45 < niftynei> lol
13:45 < t-bast> heh
13:45 < cdecker> Ok, so we have our first schism in LN, the layerists vs the non-layerists
13:46 < niftynei> we've got 14 minutes left.
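[Editor's note] cdecker's swap argument above (transport, update, and multi-hop as replaceable layers) is essentially dependency inversion: each layer depends only on an interface below it, never a concrete peer connection. A toy Python sketch of that idea — every class and method name here is invented for illustration and comes from no real Lightning implementation:

```python
# Toy sketch of cdecker's point: the update layer talks to an abstract
# Transport, so the concrete transport can be swapped without touching it.
# All names here are hypothetical; this is not code from any LN node.
from abc import ABC, abstractmethod


class Transport(ABC):
    """Encrypted wire connection between two peers."""

    @abstractmethod
    def send(self, msg: bytes) -> None: ...


class NoiseTransport(Transport):
    """Stand-in for today's Noise-based transport."""

    def __init__(self) -> None:
        self.sent: list[bytes] = []

    def send(self, msg: bytes) -> None:
        self.sent.append(b"noise:" + msg)


class QuicTransport(Transport):
    """Hypothetical replacement transport; same interface, different wire."""

    def __init__(self) -> None:
        self.sent: list[bytes] = []

    def send(self, msg: bytes) -> None:
        self.sent.append(b"quic:" + msg)


class UpdateLayer:
    """Commitment/HTLC updates; depends only on the Transport interface."""

    def __init__(self, transport: Transport) -> None:
        self.transport = transport

    def add_htlc(self, payment_hash: bytes) -> None:
        self.transport.send(b"update_add_htlc:" + payment_hash)


# The update layer works unchanged with either transport:
for t in (NoiseTransport(), QuicTransport()):
    UpdateLayer(t).add_htlc(b"\x01" * 32)
```

The same shape applies one level up: multi-hop would depend only on an abstract update layer, which is why swapping HTLCs for PTLCs need not disturb the code above or below it.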
13:46 < niftynei> let's move on to a long-term update
13:46 <+roasbeef> but i'll copy/paste some of the feedback here to andreas+rene as well, just the first draft of it, leaning to more of a mind map type thing, thx
13:46 < niftynei> i'm gonna pick a fun one, upfront payments / DoS protection
13:47 < niftynei> #topic upfront payments / DoS protection
13:47 < niftynei> though this is risky as there's no obvious person to put on the podium
13:47 < niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/843
13:47 < niftynei> joostjgr isn't in the chat
13:47 < niftynei> does anyone else want to fill in on this one?
13:48 < BlueMatt> #proposed topic: show of hands on the earliest people would feel personally comfortable travelling to the US (or a similar country) for a lightning spec meeting (assuming restrictions are relaxed by then). feel free to dm me responses.
13:48 < cdecker> Hm, too bad rusty isn't here, he was working on something re upfront fees / anti-griefing
13:48 < ariard> fyi, sergei and gleb have published this paper on how channel jamming might be used for probing: https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-March/002988.html
13:49 <+roasbeef> I have this pet idea I need to flesh out a bit more, goes in a diff direction, more so ensuring that if stuff happens you can degrade and keep forwarding normal traffic
13:49 < t-bast> BlueMatt: I would be down in September or afterwards
13:49 <+roasbeef> BlueMatt: i'd be down, the shot is pretty easy to come by in SF
13:49 < cdecker> BlueMatt: if the Swiss let me leave (and return), why not?
13:50 < cdecker> Thanks ariard, need to catch up on papers I haven't read yet :-)
13:50 < ariard> cdecker: you can travel through the EU for "professional reasons"
13:50 < BlueMatt> cdecker: I just want to be respectful of anyone who may have trouble getting a vax or doesn't want one and wants to wait until case load is down more.
13:51 < niftynei> rusty might be a complicating factor here, given the current two-week quarantine mandate in Australia
13:51 < ariard> yeah we can be patient, some folks might be around for miami?
13:51 < t-bast> unless we all go to Australia and plan a month there to work through the quarantine?
13:51 < BlueMatt> niftynei: we have one vote for sept, hopefully by then things will have relaxed some, but that's why I specified "assuming it's relaxed by then"
13:52 < cdecker> Doesn't matter to me in which airport I quarantine xD
13:52 < BlueMatt> t-bast: we can just put rusty in a bubble ball and have him roll around while the rest of us spread our foreign germs
13:52 < ariard> sept sgtm as a rough timeline
13:52 < niftynei> sept sgtm!
13:52 < t-bast> BlueMatt: I don't think I'll ever be able to get rid of this image in my mind xD
13:53 < ariard> t-bast: we might do it in paris :) ?
13:53 < cdecker> Anyhow, I think we should defer on upfront fees until Joost and Rusty are back, wdyt niftynei?
13:53 < niftynei> sgtm
13:53 < t-bast> ariard: would love to, let's see what's easiest for most people ;)
13:54 < t-bast> ACK on deferring upfront fees
13:54 < niftynei> BlueMatt, did you get enough responses to your question?
13:54 < BlueMatt> niftynei: yep! unless someone else speaks up (or pms me) with a date post-sept, I'll look at sept!
13:55 < BlueMatt> though actually my schedule for sept may already be booked lol
13:55 < cdecker> Hehe, was just about to say, don't make it the first week
13:55 < niftynei> sweet. ok so the other things on the Long Term Updates list are dual-funding, offers, blinded paths, and trampoline routing
13:55 < cdecker> We can set up a doodle for the dates I think?
13:55 < niftynei> i gave a dual-funding update two weeks ago, there's nothing to update/report as far as that's concerned
13:56 < BlueMatt> cdecker: yea, can do. will do later today or soon.
13:56 < niftynei> does anyone else have business they want to discuss in the last 4 minutes?
13:56 < cdecker> Well, except that your code is now published as part of the latest release and being used, niftynei ;-)
13:56 < t-bast> congrats niftynei!
13:56 < niftynei> right. nothing notable ;)
13:57 < niftynei> well, it'll get a lot more use once i get the accepter-side plugin shipped xD
13:57 < t-bast> If someone wants to implement https://github.com/lightningnetwork/lightning-rfc/pull/847 I should have something in eclair soon
13:58 < niftynei> speaking of notable things, c-lightning added the lnprototests to our CI last week
13:58 < ariard> gg :)
13:59 < t-bast> that's nice! Are there tutorials on how to implement a driver for lnprototest, or not yet? It would be really useful to have that to help us write a driver for eclair
13:59 < niftynei> t-bast, ah this is good!
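[Editor's note] t-bast's question here — what hooks an implementation must provide to plug into lnprototest — is discussed in the thread (HACKING.md is acknowledged as lacking). Purely as an illustration of the general shape such a driver tends to take, here is a sketch; the interface and method names below are invented for this example and are NOT lnprototest's actual Runner API:

```python
# Illustrative only: the rough shape of hooks a protocol-test driver needs
# to expose so a harness can drive a node under test.  These names are made
# up; consult lnprototest's HACKING.md and the c-lightning runner for the
# real interface.
from abc import ABC, abstractmethod


class NodeDriver(ABC):
    @abstractmethod
    def start(self) -> None:
        """Launch the node under test with deterministic keys/config."""

    @abstractmethod
    def stop(self) -> None:
        """Shut the node down and wipe its state directory."""

    @abstractmethod
    def send_msg(self, msg: bytes) -> None:
        """Inject a raw wire message from the harness's side."""

    @abstractmethod
    def recv_msg(self) -> bytes:
        """Return the next wire message the node sent to the harness."""


# A trivial in-memory fake, just to show the interface being exercised:
class EchoDriver(NodeDriver):
    def __init__(self) -> None:
        self.running = False
        self.outbox: list[bytes] = []

    def start(self) -> None:
        self.running = True

    def stop(self) -> None:
        self.running = False

    def send_msg(self, msg: bytes) -> None:
        # A real node would parse the message and reply per the BOLTs;
        # this fake simply echoes it back.
        self.outbox.append(msg)

    def recv_msg(self) -> bytes:
        return self.outbox.pop(0)


driver = EchoDriver()
driver.start()
driver.send_msg(b"\x00\x10init")
assert driver.recv_msg() == b"\x00\x10init"
driver.stop()
```

The test scripts themselves then stay implementation-agnostic: they only ever call the driver interface, which is what makes a single test suite reusable across c-lightning, eclair, and others.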
13:59 < niftynei> i will take a look at adding it to c-lightning this week
14:00 < niftynei> i don't think there are tutorials, and i'm fairly certain that a lot of tests will need tweaking to be more generically applicable (a few things are hard-coded to c-lightning defaults)
14:00 < niftynei> i could throw together a how-to for the runner implementation pretty easily though, i'll put it on my todo list
14:00 < t-bast> yeah https://github.com/rustyrussell/lnprototest/blob/master/HACKING.md is a bit lacking, it would be great to have documentation explaining what hooks an implementation needs to provide to plug into lnprototest
14:01 < niftynei> ok cool, i'll see about adding some good docs for it
14:01 < t-bast> thanks
14:02 <+roasbeef> oh re AMP stuff, we have things working pretty well end-to-end, realized we can use some new stuff to improve MPP and just sending payments in general, and now mainly need to catch up the draft spec and finish up the examples there
14:03 < t-bast> cool stuff! that will be something interesting to look at soon as well
14:03 < cdecker> Looking forward to it ^^
14:03 <+roasbeef> and I should have some more concrete stuff to share re dynamic commitments by the next meeting
14:05 < niftynei> dope! looking forward to it
14:05 < ariard> sounds exciting! see you next time :)
14:05 < cdecker> I'll drop off soon btw, but really enjoyed today's meeting ^^
14:06 < niftynei> i'm also dropping off. see you all next time!
14:06 < niftynei> #endmeeting
14:06 < lndev-bot> Meeting ended Mon Apr 12 21:06:29 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:06 < lndev-bot> Minutes: https://lightningd.github.io/meetings/april_12th_lightning_dev_meeting/2021/april_12th_lightning_dev_meeting.2021-04-12-20.05.html
14:06 < lndev-bot> Minutes (text): https://lightningd.github.io/meetings/april_12th_lightning_dev_meeting/2021/april_12th_lightning_dev_meeting.2021-04-12-20.05.txt
14:06 < lndev-bot> Log: https://lightningd.github.io/meetings/april_12th_lightning_dev_meeting/2021/april_12th_lightning_dev_meeting.2021-04-12-20.05.log.html
14:06 < vincenzopalazzo> See you next time guys :)
14:06 < niftynei> *ahem*
14:06 < t-bast> Thanks guys, and thanks niftynei for chairing!
14:06 < niftynei> :P
14:07 < cdecker> Thanks everyone, and thanks for chairing niftynei
14:07 < niftynei> tchau!
--- Log closed Tue Apr 13 00:00:12 2021