--- Log opened Mon Apr 27 00:00:06 2020
12:57 < cdecker> Hello everybody ^^
12:58 < t-bast> Hey cdecker!
12:58 < sstone> Hi everyone!
12:59 < t-bast> Good that you're here a bit early christian, I have a small question regarding your postgres support :)
12:59 < t-bast> Did you build a tool to migrate from sqlite to postgres for nodes that were previously on sqlite?
13:00 * rusty wakes up...
13:01 < t-bast> hey rusty :)
13:02 < cdecker> t-bast: I didn't, no. fiatjaf built a tool that can do the conversion (https://github.com/fiatjaf/mcldsp)
13:02 <+roasbeef> lol quite a scary warning in that README
13:02 < cdecker> Haven't tried it, but I've heard it works.
13:02 < t-bast> cdecker: thanks, I'll have a look at that
13:03 < cdecker> roasbeef: as it should xD
13:03 < ariard> hi all
13:03 < cdecker> Didn't want to encourage users to switch half-way through and then bump against untested difficulties (sqlite3 is rather permissive when it comes to typing, postgres really isn't)
13:04 < t-bast> hey ariard
13:04 < cdecker> Hi ariard
13:04 < t-bast> cdecker: what's your answer then for nodes that have been on the network for a long time and want to switch to postgres?
13:04 < niftynei> hello :D
13:04 < t-bast> hey niftynei
13:04 < cdecker> t-bast: I don't really have one ^^
13:05 < t-bast> cdecker: haha ok, trying to figure out what to say myself
13:05 < t-bast> shall we start the meeting? do we have a volunteer to chair?
13:05 < jkczyz> hi
13:05 < rusty> t-bast: TBH there are some configurations which are a bit boutique to support.
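[Editor's aside: cdecker's remark that "sqlite3 is rather permissive when it comes to typing, postgres really isn't" is easy to demonstrate with Python's standard sqlite3 module. The table and column names below are made up for illustration; they are not c-lightning's schema.]

```python
import sqlite3

# SQLite's type affinity is permissive: a column declared INTEGER will
# happily store a TEXT value that cannot be coerced to a number.
# PostgreSQL would reject the same INSERT with a type error, which is why
# a naive sqlite -> postgres migration can trip over data that "worked".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE htlcs (id INTEGER PRIMARY KEY, msatoshi INTEGER)")
conn.execute("INSERT INTO htlcs VALUES (1, 'not-a-number')")  # accepted!
value = conn.execute("SELECT msatoshi FROM htlcs WHERE id = 1").fetchone()[0]
print(repr(value))  # -> 'not-a-number': the TEXT survived the INTEGER column
```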
13:05 < t-bast> hey jkczyz
13:06 < rusty> niftynei: you haven't chaired for a while?
13:07 < niftynei> this is ... true
13:07 < niftynei> is there an agenda?
13:07 <+roasbeef> not much on the agenda so lots of time for other stuff I guess
13:07 <+roasbeef> niftynei: https://github.com/lightningnetwork/lightning-rfc/issues/768
13:07 < t-bast> I put together a tentative agenda (https://github.com/lightningnetwork/lightning-rfc/issues/768), it's quite light on PRs so we can easily deviate and discuss more high-level stuff like tx pinning issues
13:08 < niftynei> #startmeeting
13:08 < lightningbot> Meeting started Mon Apr 27 20:08:05 2020 UTC. The chair is niftynei. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:08 < lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
13:08 < BlueMatt> hey yall
13:08 * t-bast waves at BlueMatt
13:08 < niftynei> hi everyone, welcome to the lightning-dev 27 Apr 2020 spec meeting
13:08 < cdecker> #startmeeting
13:08 < lightningbot> cdecker: Error: Can't start another meeting, one is in progress.
13:08 < cdecker> Oh, missed that one
13:09 < niftynei> #link https://github.com/lightningnetwork/lightning-rfc/issues/768
13:09 < t-bast> cdecker tried to hijack the chair!!!
13:09 < niftynei> wow this is starting off exciting.
13:09 < cdecker> Man, we need a visit to IKEA to get more chairs... end the lockdown
13:09 < niftynei> first item on the agenda is stuck channels
13:10 < niftynei> #topic stuck channels #740
13:10 < niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/740
13:10 < t-bast> After many iterations on the wording, I think we're reaching something somewhat clear on stuck channels
13:10 < rusty> FWIW, this is implemented in our coming release, too.
13:11 < t-bast> There's even one ACK from Joost (thanks!) so maybe another impl to chime in and we should be good to go?
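[Editor's aside: the "stuck channel" problem in PR #740 is a funder spending itself down so far that it can no longer afford the commitment fee for even one more HTLC. The sketch below illustrates the fee-spike-buffer idea: the 724 and 172 weights are the BOLT 3 commitment/HTLC-output estimates, while the 2x feerate multiplier, the one-extra-HTLC slack, and the helper names are illustrative, not normative spec text.]

```python
# Illustrative sketch of the fee spike buffer behind PR #740: the funder
# only adds an HTLC if it could still pay the commitment fee were the
# feerate to double AND one more HTLC slot to be used.
COMMIT_WEIGHT = 724        # BOLT 3 estimate for the base commitment tx
HTLC_OUTPUT_WEIGHT = 172   # BOLT 3 estimate per HTLC output

def commit_fee_msat(feerate_per_kw: int, num_htlcs: int) -> int:
    weight = COMMIT_WEIGHT + HTLC_OUTPUT_WEIGHT * num_htlcs
    # sat/kiloweight * weight yields msat directly (/1000 and *1000 cancel)
    return feerate_per_kw * weight

def funder_can_add_htlc(funder_msat: int, feerate_per_kw: int,
                        num_htlcs: int, amount_msat: int) -> bool:
    # Buffer: double the feerate, count the new HTLC plus one extra slot.
    buffered_fee = commit_fee_msat(2 * feerate_per_kw, num_htlcs + 2)
    return funder_msat - amount_msat >= buffered_fee

print(funder_can_add_htlc(1_000_000_000, 2530, 1, 10_000_000))  # -> True
print(funder_can_add_htlc(6_000_000, 2530, 1, 1_000_000))       # -> False
```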
13:11 < niftynei> it looks like the PR has two requisite acks
13:11 < niftynei> c-lightning has an implementation, has anyone else implemented this yet?
13:11 < t-bast> yep it's in eclair too
13:11 < t-bast> done slightly differently though
13:12 < rusty> (Whoever commits this please squash it in rather than merge or rebase, since it's 9 commits for one change!)
13:13 < t-bast> yep definitely, squash it's going to be
13:13 < t-bast> I can commit this if everyone feels it's ready
13:13 < niftynei> ok. so it seems we're ready to merge/squash this one?
13:13 < rusty> ack!
13:13 < t-bast> ack!
13:14 < t-bast> roasbeef do you know if lnd has implemented that?
13:14 < t-bast> I know johan and joost were mostly looking at this, so since joost ack-ed it's probably underway or done?
13:15 < t-bast> I've also seen that val has linked this for RL
13:15 <+roasbeef> no implementation yet
13:16 < niftynei> (i believe val is valwal_ on irc)
13:16 < BlueMatt> yea, we have a pr with that and a few other things that valwal_'s doing
13:16 < BlueMatt> the change looked good to me last I looked, so happy if it gets merged
13:16 < t-bast> great
13:16 < niftynei> #action t-bast to squash/merge PR #740 for stuck channels
13:16 < niftynei> ok, moving on
13:17 <+roasbeef> we have some other heuristics we use, they may be included in this tho, end of the day it's all whack-a-mole really, but we'll prob add this eventually
13:17 < niftynei> #topic PR #767 prune channels with old channel_updates
13:17 < niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/767
13:17 < t-bast> roasbeef: :+1:
13:18 < niftynei> looks like there's still discussion happening on the PR
13:19 < cdecker> I think this one has its logic backward: a channel that is inactive should not receive updates from either endpoint
13:19 < t-bast> Who has tried applying bitconner's suggestion to their own node?
13:20 < rusty> Hmm, it'd be good to mark which channels fall under this then probe to see if they're *really* dead...
13:20 <+roasbeef> cdecker: I think he means that one side is dead, but the other keeps updating, but it actually isn't usable at all
13:20 < t-bast> At first glance it looked useful to me, but when I applied it the results were bad; it seems to highlight that there are gossip issues in the network, some nodes behaving strangely with their channel_updates
13:20 <+roasbeef> t-bast: results being?
13:20 < cdecker> Yes, so why would the other side still be sending updates? Keepalive is not necessary since we'll just reannounce
13:21 < rusty> But I do like the idea that you shouldn't update if the other side doesn't. Though how to define this exactly: you would send out one at (say) 13 days, but then you'd suppress the next one I guess.
13:21 <+roasbeef> cdecker: the other side does as it's still active, and will prob just follow the guidelines to make sure it sends one before the 2 week period
13:21 < rusty> cdecker: yeah, we would reannounce, even if just to say it's disabled.
13:21 < t-bast> roasbeef: see my comment on the PR: surprisingly some updates either weren't propagated properly all the way to our node, or weren't emitted in time by the corresponding node (for a channel that's clearly alive)
13:21 <+roasbeef> lnd will just disable that end tho, once the other party is offline for a period
13:22 <+roasbeef> t-bast: interesting, i know CL has some suppression limits on nodes, maybe you're running into that?
13:22 < rusty> roasbeef: yeah, we would too: if it's offline at the time we need to refresh, it'll get a disabled update.
13:22 < sstone> roasbeef: can you clarify the "y'alls continues to send fresh channel_updates" bit?
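[Editor's aside: the rule being debated can be sketched as a toy model. The baseline is one reading of BOLT 7's two-week staleness horizon; the `either_side` flag models the stricter behaviour proposed in PR #767, where a channel is pruned as soon as one direction stops refreshing even though the other keeps updating. This is not any implementation's actual logic.]

```python
TWO_WEEKS = 14 * 24 * 3600  # BOLT 7's staleness horizon for channel_update

def prune_stale(channels, now, either_side=False):
    """channels maps scid -> (ts_a, ts_b): the latest channel_update
    timestamp seen in each direction. Baseline: forget the channel once
    BOTH directions are stale. The PR #767 variant prunes as soon as
    EITHER direction goes stale."""
    pruned = []
    for scid, (ts_a, ts_b) in channels.items():
        freshest = max(ts_a, ts_b)
        stalest = min(ts_a, ts_b)
        cutoff_ts = stalest if either_side else freshest
        if now - cutoff_ts > TWO_WEEKS:
            pruned.append(scid)
    return pruned
```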
13:23 < cdecker> My point is that no node should be sending updates for channels that are not routable; rather they should let them be pruned, and then be reannounced if they become usable again, rather than having a constant background radiation of nodes that aren't usable but want to be kept alive
13:23 <+roasbeef> sstone: didn't write it, I think like updating fees n stuff?
13:23 < t-bast> cdecker: agreed, but it seems that all three implementations are supposed to do that (at least we all think our impl does that)
13:23 < rusty> Yes, I wonder if c-lightning is suppressing spam too aggressively and causing propagation issues? Though we're too small a minority to do that in general I think. However, my impression was that we're not seeing as much gossip spam in Modern Times (though let me check...)
13:24 < t-bast> rusty: that may explain what I've seen, I failed to receive channel_updates in time from a node that seems to be running lnd (BlueWallet), but I cannot know if that node never emitted the update or if it wasn't propagated all the way to me
13:25 < cdecker> Easy, connect directly
13:25 < t-bast> And I'm unsure what impl the other end of the channel is running
13:25 <+roasbeef> t-bast: from stuff gathered on our end, the bluewallet node seems to be funky, I think they're running some custom stuff
13:25 < t-bast> roasbeef: ahah that may be another explanation
13:25 < cdecker> Interesting
13:26 <+roasbeef> I think maybe this issue was partially inspired by some weirdness conner saw when examining their node
13:26 <+roasbeef> I guess let's just wait for him to comment on the issue, may be able to provide some more insight
13:27 < t-bast> alright, sgtm
13:27 < cdecker> ack ^^
13:28 < rusty> 2782437 suppressions in ~2 weeks, but you'd have to spam (4/day get ignored) then stop spamming at all for 13 days to get pruned.
13:29 < t-bast> rusty: you mean that there are 2782437 channel_updates that you received but didn't relay?
13:30 < rusty> t-bast: yeah, but sorting by channel/nodeid now. Suspect it's a handful.
13:30 < t-bast> that's a lot, that means ~5 per node per day (assuming ~40k nodes in the network)
13:31 < t-bast> ah no it's ~2 since there are 2 ends of a channel (doh)
13:32 < rusty> Anyway, let's keep moving while I wait for my machine to churn the results...
13:32 < t-bast> ack
13:33 < niftynei> ok so it sounds like we should continue discussion on the PR
13:33 < cdecker> sgtm
13:33 < niftynei> #action rusty to figure out c-lightning pruning reality
13:33 < niftynei> #action discussion for PR 767 to continue on github
13:33 < niftynei> ok let's see what's next
13:34 < niftynei> #topic issue 761, which is a discussion branched off of 740
13:34 < niftynei> #link https://github.com/lightningnetwork/lightning-rfc/issues/761
13:34 < t-bast> I added that to the agenda because it's digging up an old topic (asynchronous protocol and its pitfalls) but I couldn't find previous discussions around that... can one of the protocol OGs just quickly skim through that issue and provide some links to previous discussions?
13:35 < t-bast> I don't know if it's worth spending time during the meeting, but if someone can take an action item to do a quick pass on this it would be great
13:35 <+roasbeef> can check it out, but it is the case that there're some fundamental concurrency issues that can happen w/ the state machine as is, without the existence of some unadd type thing
13:36 < t-bast> roasbeef: great, I know that had been discussed in the past, but searching for it in the ML archives I couldn't find anything...
13:36 < rusty> Yeah, it's the nature of the beast. You can only prevent it by slowing the entire thing down, basically. Though it's less of a problem in practice.
13:38 < niftynei> this is a cool discussion to highlight t-bast
13:38 < niftynei> seems like we should keep moving along
13:39 < t-bast> ack!
13:39 < niftynei> if i've understood correctly, roasbeef is going to give it a look-see
13:39 < niftynei> #action roasbeef to check thread out
13:40 < niftynei> #info "it's the nature of the beast" - rusty
13:40 < niftynei> moving on
13:40 < t-bast> I think we could print a t-shirt for that "it's the nature of the beast"
13:40 < cdecker> Loving the quote, that should become our official motto
13:40 < t-bast> xD
13:40 < niftynei> we've now reached the crowd favorite "long term updates" segment
13:40 <+roasbeef> kek
13:41 <+roasbeef> blinded paths maybe?
13:41 < niftynei> ariard, and THEBLUEMATT would you be ok to start with tx pinning attack?
13:41 < cdecker> So say we all "natura bestiorum" xD
13:41 <+roasbeef> or that
13:41 < rusty> Yeah, blinded paths are cooler!
13:41 <+roasbeef> ;)
13:42 < niftynei> the r's have spoken, we'll start blind and then roll into pinning then
13:42 <+roasbeef> cdecker: sounds kinda occult lol
13:42 <+roasbeef> would prob make for a bad ass shirt/hoodie
13:42 < niftynei> #topic long term updates: blinded paths cc t-bast
13:42 < BlueMatt> yo
13:42 < niftynei> *BlueMatt sry
13:42 < BlueMatt> niftynei: sure, though why am I in CAPS
13:42 < cdecker> roasbeef: it's likely also monstrously wrong, my latin is a bit rusty :-)
13:42 < niftynei> my shift key finger got lazy :{
13:43 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/765
13:43 < niftynei> t-bast do you want to give us an update?
13:43 < t-bast> So the gist of route blinding is that it's doing a Sphinx round from the recipient to an introduction point
13:44 < t-bast> And using a key derived from the shared_secrets to blind the scids
13:45 < t-bast> It requires nodes *after* the introduction point to receive a curve point *outside* of the onion, to be able to derive a tweak to their node_id that lets them decrypt the onion
13:45 < t-bast> First of all it needs more eyes on the crypto itself :)
13:45 < t-bast> Then there are two things that need to be challenged
13:45 < t-bast> The fact that it adds data to the tlv extension of update_add_htlc, making them distinguishable at the link layer (different packet size)
13:46 < t-bast> And the fact that an attacker is able to uncover one of the channels by probing the fees/cltv -> maybe they could do worse?
13:47 < rusty> t-bast: it also needs a bolt11 extension, though I've been thinking about that.
13:47 < t-bast> The added flexibility compared to rendezvous is what gives attackers some probing potential - we need to evaluate how much and how we can mitigate it without building something too complex
13:48 < t-bast> rusty: true, it needs changes to Bolt11 as well (but those could be for the best as nobody likes the current routing_hints xD)
13:48 < cdecker> I guess we couldn't normalize the TLV addition along the entire path, i.e., sending the adjunct point always, even if not needed?
13:48 < t-bast> cdecker: yes we definitely could
13:48 < t-bast> cdecker: that feels hackish, but it could be the way to go
13:49 < cdecker> But we can't blind it normally, since then we'd have the usual 'meet-in-the-middle' issue we had with RV
13:49 < rusty> cdecker: you can send an empty ping, too. (Though technically there are rules against sending pings when you're still waiting for a response)
13:49 < cdecker> So it'd be a decoy that is sent along
13:50 < t-bast> yes, a dummy decoy would be enough since it's only to thwart a link-layer attacker who would see encrypted bytes, it's just to make the lengths match
13:50 < niftynei> i have a dumb question. how does the route composer ensure that the capacity for that route is ok?
13:50 < cdecker> I was just thinking we could make it seamless by always going through the motions, to have constant-time and constant-size
13:50 < niftynei> or is the path not meant to be sized/usable?
13:51 < t-bast> niftynei: you run the path-finding like you'd do for a normal payment; ideally you'd select a path with the biggest capacity and additional channels to leverage non-strict forwarding
13:51 < t-bast> niftynei: but it's true that you can't guarantee that it will always succeed
13:52 < niftynei> so every payment failure will need a round-trip convo with the path initiator?
13:52 < niftynei> "hey this one didn't work, what else you got"
13:52 < cdecker> Yep, that's what I think it means
13:52 < t-bast> niftynei: that, or you'd provide multiple paths in the invoice, but both of these potentially also give more data to an attacker to unblind :/
13:53 <+roasbeef> t-bast: this is an existing probing vector re fees/cltv tho?
13:53 < ariard> or do you send directly a set of blinded paths?
13:53 < t-bast> roasbeef: exactly, that's what I'm worried about
13:53 < cdecker> But it adds more flexibility to what my RV construction can offer (payment_hash and amounts are not committed to, so you could retry with different fees at least)
13:53 < niftynei> ack ok, this fits my understanding of the proposal then :)
13:54 < rusty> Note: there is a way to combat cltv/fee probes, and that is to put advice inside the enctlv telling the node what minima to use (presumably, >= its existing minima). Then you can force the path to have uniform values.
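[Editor's aside: rusty's suggestion above can be sketched as a toy model: the recipient computes one uniform fee/cltv policy over the whole blinded path (at least every hop's real policy) and puts it as advice in each hop's enctlv, so a prober only ever sees the path-wide maximum. Field names are illustrative.]

```python
# Toy model of forcing uniform fee/cltv values along a blinded path.
def uniform_policy(hops):
    return {
        "fee_base_msat": max(h["fee_base_msat"] for h in hops),
        "fee_prop_millionths": max(h["fee_prop_millionths"] for h in hops),
        "cltv_expiry_delta": max(h["cltv_expiry_delta"] for h in hops),
    }

path = [
    {"fee_base_msat": 1000, "fee_prop_millionths": 100, "cltv_expiry_delta": 40},
    {"fee_base_msat": 500, "fee_prop_millionths": 300, "cltv_expiry_delta": 144},
]
# Every hop is told to enforce at least this, so probing fees/cltv reveals
# nothing hop-specific:
print(uniform_policy(path))
```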
13:54 < t-bast> I think it's showing that when you give some flexibility around fees and cltv, you make this more usable, but you give away a bit of privacy by adding probing, so we need to find the sweet spot
13:54 <+roasbeef> yeh as cdecker said, we can start to pad out update_add_htlc at all times, would need to look into the specifics of why it can't be attached in the onion
13:55 <+roasbeef> how's that failure condition communicated? we assume a new sync connection between the sender/receiver?
13:55 < t-bast> roasbeef: you need it to know the key to decrypt the onion, that's why (at least in the current proposal), because the blinding works by tweaking node_ids
13:55 < cdecker> That being said, I think rusty is planning to use these for the offers, i.e., route communication to the real endpoint, not caring about fees, cltvs and amounts, and then we can have iterative approaches to fetch alternative paths in the background
13:56 < t-bast> roasbeef: in my mind it should be a single-shot, if we provide interaction between payer/recipient to get other paths I'm afraid the probing would be quite efficient
13:56 < ariard> t-bast: but how do you dissociate a real failure from a simulated one for probing?
13:56 <+roasbeef> t-bast: doesn't sound realistic in practice tho? like even stuff like a channel update being out of date that you need to fetch or something (I still need to read it fully for broader context too)
13:57 < t-bast> rusty: I like the proposal to raise the lower bound for all channels in the path, everyone would benefit from this and it could help with probing - I need to think about this more
13:57 < rusty> roasbeef: yeah, they're no more robust than route hints today.
13:57 < cdecker> Well, for a communication substrate this works quite nicely, doesn't it?
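[Editor's aside: the "dummy decoy" idea for hiding which HTLCs belong to a blinded path can be sketched as below: every update_add_htlc carries a field of the same length, the real blinding point for hops inside a blinded path and random bytes otherwise, so a link-layer observer sees identical sizes. This is a sketch only; the real message would wrap this in a proper TLV record, and the function name is made up.]

```python
import os

POINT_LEN = 33  # a compressed secp256k1 point, as the blinding point would be

def blinding_tlv_payload(blinding_point=None):
    # Real blinding point for blinded-path hops, same-length random decoy
    # bytes for everyone else: constant message size either way.
    return blinding_point if blinding_point is not None else os.urandom(POINT_LEN)

assert len(blinding_tlv_payload()) == len(blinding_tlv_payload(b"\x02" * 33))
```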
13:58 < t-bast> roasbeef: I don't know, if you do what rusty proposes (raise fee and cltv compared to current channel_update) it may work quite nicely in practice 13:58 < t-bast> And for a communication substrate it's fine indeed 13:58 < rusty> (Ofc you need to do this for every node on the path, even if it's redundant, since otherwie you're flagging which ones need to be told!) 13:58 < t-bast> ariard: what do you mean? who distinguishes that failure? 13:59 < t-bast> #action t-bast to investigate making fee/cltv uniform in the blinded path, describe that in the PR 13:59 < niftynei> FYI we are about at 1hr of meeting time for today 14:00 < ariard> t-bast: like you receive a blinded route in invoice, you do a first probing? you ask for a 2nd invoice with another blinded route...? 14:00 < t-bast> already?? :'( 14:01 < cdecker> True, I wanted to mention that the path encoded in the blinded path needs to be contiguous and each hop needs to support the extra transport, so this becomes useful only rather late when most nodes have updated 14:01 -!- __gotcha [~Thunderbi@plone/gotcha] has quit [Remote host closed the connection] 14:01 < niftynei> (rusty sneaking in that fee increase he promised at LN-conf in berlin last year) 14:01 < t-bast> ariard: the way I see it you shouldn't give two blinded routes to the same payer. But maybe in practice it's simply impossible to avoid, it's a good point I need to think about that a bit more E2E 14:01 -!- __gotcha [~Thunderbi@plone/gotcha] has joined #lightning-dev 14:02 < t-bast> cdecker: yes it would take time to be adopted, it needs enough nodes to support it 14:02 < rusty> t-bast: I also need to implement dummy path extension, so we can put advice in the spec. 14:02 < t-bast> cdecker: once it's implemented, the road to E2E adoption will be quite long 14:02 -!- proofofkeags [~proofofke@174-29-9-247.hlrn.qwest.net] has quit [Remote host closed the connection] 14:03 < t-bast> rusty: yes I think that part is quite useful too! 
14:03 < t-bast> in combination with uniform fees/cltv it may also fix ariard's comment
14:03 < ariard> t-bast: but you need sender authentication and we want both-side anonymity?
14:04 < niftynei> should we spend 10min on pinning?
14:04 < t-bast> ariard: yeah this is why I think preventing this is hard and potentially not doable, but the uniform fees combined with dummy path extension may allow you to give many blinded paths without risk. I need to spend more time on this
14:04 < t-bast> niftynei: ack, I think it would be good to discuss that a bit while we're all (?) here
14:04 < ariard> t-bast: yeah let's continue on the PR, replying to your comments
14:05 < niftynei> #action discussion to continue on PR
14:05 < t-bast> thanks everyone for the feedback, it already gave me a lot to work on! Don't hesitate to put comments on the PR
14:05 < cdecker> Need to drop off in 10, but let's quickly fix RBF pinning ;-)
14:05 < niftynei> #topic long term updates: mempool tx pinning attack cc ariard & BlueMatt
14:06 < niftynei> ariard or BlueMatt do you have a link you could pop into the chat for this?
14:06 < ariard> yes optech has a summary
14:06 <+roasbeef> well there's that recent ML thread
14:06 < niftynei> iiuc this was mostly a ML thread?
14:06 < ariard> https://github.com/bitcoinops/bitcoinops.github.io/pull/394/files
14:06 < BlueMatt> mostly the ml thread i think is a good resource
14:07 < BlueMatt> optech has a good summary as well that describes also the responses on the ml
14:07 < rusty> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017757.html ...
14:07 < BlueMatt> not just the high-level issue
14:07 <+roasbeef> something I pointed out in the thread is that the proposed HTLC changes would require a pretty fundamental state machine change, which is still an open design question; what's in the current draft PR gets us pretty far along imo and ppl can layer the mempool stuff on top of that till we figure out the state machine changes
14:07 < niftynei> #link https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002639.html
14:08 <+roasbeef> favoring incremental updates to fix existing issues and make progress
14:08 < niftynei> #link https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017757.html
14:08 < BlueMatt> roasbeef: I disagree that it's "pretty fundamental"
14:08 <+roasbeef> BlueMatt: it would need a new step before commit_sig, seems big, no?
14:08 <+roasbeef> idk how that would even look like atm
14:08 < BlueMatt> but it does require an additional step between update_add and commitment_signed
14:09 < BlueMatt> ready_to_sign_commitment -> ok_go_ahead -> commitment_signed
14:09 < niftynei> #link https://github.com/bitcoinops/bitcoinops.github.io/pull/394/files
14:09 < BlueMatt> :)
14:09 < ariard> like provide_remote_sig_on_local
14:09 <+roasbeef> ok now ensure that works in the concurrent async setting
14:09 <+roasbeef> just saying it isn't fully figured out yet
14:09 < BlueMatt> no problem, ok_go_ahead ignores any things going the other way
14:09 < BlueMatt> and is "a part of the commitment_signed"
14:09 < BlueMatt> i don't think that's complicated
14:09 <+roasbeef> the devil is always in the details ;)
14:09 < ariard> monitoring the mempool isn't really a fix; it's easy to pour a local conflict into your mempool while announcing something else to the rest of the network
14:10 <+roasbeef> but just pointing out how it sprawls, and ppl can deploy a solution that fixes a lot of issues today as we have
14:10 < BlueMatt> but, anyway, I think if we want to move forward, then we can do anchor without any htlc transaction changes, that seems reasonable and leaves this issue as-is
14:10 < ariard> and identifying mempools of big ln nodes shouldn't be that hard due to tx-relay leaks
14:10 <+roasbeef> ariard: yeh and the other direction is mempool fixes, as jeremy rubin mentioned in the thread
14:10 < ariard> roasbeef: on the long term we agree, but it may take months/years :(
14:10 < BlueMatt> note that rubin mentioned specifically that he does *not* have any mempool changes to fix this queued up
14:10 <+roasbeef> BlueMatt: yeah i'm on board w/ that, then we can continue to grind on this issue in the background
14:11 < BlueMatt> only that he's slowly working on slowly making discussing such changes an option :/
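[Editor's note: BlueMatt's proposed extra round-trip can be modeled as a tiny state machine. A minimal sketch, using BlueMatt's placeholder message names (ready_to_sign_commitment, ok_go_ahead); these are not spec messages, and the real concurrency concerns roasbeef raises are not modeled here.]

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    AWAITING_ACK = auto()  # sent ready_to_sign_commitment, waiting for ack
    ACKED = auto()         # peer agreed; commitment_signed may now be sent

class CommitFlow:
    """Toy model of the extra step between update_add_htlc and
    commitment_signed discussed above."""

    def __init__(self):
        self.state = State.IDLE

    def send_ready_to_sign(self):
        assert self.state == State.IDLE
        self.state = State.AWAITING_ACK

    def recv_ok_go_ahead(self):
        # per the discussion, ok_go_ahead ignores updates flowing the
        # other way, so concurrent adds don't block the ack
        assert self.state == State.AWAITING_ACK
        self.state = State.ACKED

    def send_commitment_signed(self):
        # signing is gated on the peer's ack, which is the whole point
        # of the extra step
        if self.state != State.ACKED:
            raise RuntimeError("must receive ok_go_ahead first")
        self.state = State.IDLE
        return "commitment_signed"
```

The gate makes the "a part of the commitment_signed" framing concrete: the signature simply cannot be produced until the ack arrives.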
14:11 < niftynei> can someone summarize the main discussion point rn?
14:11 < BlueMatt> niftynei: essentially you don't learn preimages from txn in the mempool, this may result in you not getting the preimage.
14:11 < ariard> niftynei: what fixes can we come up with which are reasonable to implement on a short timeline?
14:11 <+roasbeef> niftynei: move forward w/ anchors as-is rn, which has this issue with htlcs, or try to finish up this other working thing that requires some bigger changes to the state machine which aren't fully figured out yet
14:11 < BlueMatt> there isn't really any fix that can be done to the mempool policy that fixes it any time soon, and maybe ever.
14:11 < niftynei> ty!
14:12 <+roasbeef> down with the mempool!
14:12 < cdecker> Maybe this is a dumb question: but why doesn't the RBF rule ask for a pro-rata increase in feerate, rather than a linear diff? Pro-rata (say each replacement needs a 10% higher feerate) we'd have a far smaller issue with spamming, and we could lose the absolute fee diff requirement
14:12 < ariard> package relay should make us safe
14:12 < BlueMatt> we can probably do something with anchor that, like, has different htlc formats based on the relative values of the htlc to the feerate
14:12 * rusty starts reading thread, confused about why we're talking about changing the wire protocol...
14:12 < BlueMatt> but we'll need to figure that out
14:12 <+roasbeef> cdecker: I think it does, but one issue is the absolute fee increase requirement; if it was just feerate this particular pinning issue wouldn't exist
14:13 < ariard> locking down htlcs and requiring CPFP for them will make them costlier, which means more dust htlcs
14:13 < cdecker> That's my point: an exponential increase in feerate could replace the absolute fee increase requirement, couldn't it?
14:13 < niftynei> so to summarize: there's a mempool-related info propagation problem that impacts htlc txs?
14:14 < ariard> cdecker: it asks for a pro-rata, bip125 rule 4?
14:14 * BlueMatt notes that the whole rbf fee-based policy thing is *not* the only issue with rbf-pinning, so it's somewhat missing the point to focus on it.
14:14 <+roasbeef> niftynei: yeh
14:14 < niftynei> kk
14:15 < cdecker> BlueMatt: that's true, but we've been burned by it a couple of times, so I was just wondering
14:15 < ariard> cdecker: yeah but for performance reasons it's not implemented right now, you just sum up absolute fees of the conflicted tx + descendants
14:16 <+roasbeef> niftynei: the anchors PR on the rfc solves some but not all of the issues, but a big thing is that you can finally actually modify the commitment fee post broadcast, and also bump htlc fees. this issue can degrade into a bidding war for htlc inclusion basically
14:16 < niftynei> cdecker russell o'connor had a bitcoin ML post re: adjusting the feerate floor for RBFs a while ago
14:16 < cdecker> I mean if every replacement just required a 10% increase in feerate (and dropped the absolute fee increase requirement) wouldn't we end up in confirmation territory very quickly, and a node could actually decide locally whether the bump will be accepted or not
14:16 < BlueMatt> there's also the 'likely to be rbf'd' flag suggestion if we want to get into mempool policy changes
14:17 < cdecker> Oh, I'll go and read that, thanks niftynei ^^
14:17 < ariard> roasbeef: sadly there are clever ways to pin by exploiting the ancestor size limit to obstruct replacement
14:18 < niftynei> cdecker https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015717.html
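[Editor's note: the absolute-fee vs pro-rata-feerate distinction at the heart of this exchange can be shown with a simplified sketch of BIP125 rules 3/4 next to cdecker's 10%-feerate alternative. The numbers in the test scenario are illustrative, and both checks are heavily simplified (no descendant sets, no signaling).]

```python
def bip125_acceptable(new_fee, new_size, old_fee, min_relay_feerate=1.0):
    """BIP125 rules 3+4, simplified: the replacement must pay at least
    the evicted transactions' absolute fees, plus the incremental relay
    fee for its own size (fees in sats, sizes in vbytes)."""
    return new_fee >= old_fee + min_relay_feerate * new_size

def prorata_acceptable(new_fee, new_size, old_fee, old_size, bump=1.10):
    """cdecker's suggestion: require only a ~10% higher *feerate*,
    dropping the absolute-fee requirement entirely."""
    return new_fee / new_size >= bump * (old_fee / old_size)
```

A pinning scenario: an attacker attaches a huge low-feerate descendant (say 50,000 vB at 2 sat/vB, so 100,000 sats of fees). A small honest replacement at 100 sat/vB (200 vB, 20,000 sats) fails the absolute-fee rule even though its feerate is 50x higher, but would pass the pro-rata rule. This is why roasbeef notes the pinning issue "wouldn't exist" if replacement were feerate-only.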
14:18 < cdecker> BlueMatt: while not directly actionable I do get the feeling that a mempool RBF logic simplification is the only durable fix for these issues, but I'm happy to be corrected on this one :-)
14:18 < BlueMatt> anyway, so it sounds like the next steps are: a) amend anchors to make no changes to htlc signatures, and resolve the other issues (needs test cases, debate about 1-vs-2 etc) on the ml or issue, then b) work on redoing things to use anchors (maybe based on feerates) in htlc txn
14:18 <+roasbeef> ariard: yeh there's prob a ton of other pinning vectors, but at least we can make a step forward to actually allow fee updates; maybe just a summary of all the ways pinning can happen would be helpful. I don't think those limits had stuff like this in mind when they were added (as stuff like this didn't really exist then)
14:18 <+roasbeef> BlueMatt: +1
14:19 < cdecker> BlueMatt: sgtm
14:19 < niftynei> #action amend anchors to make no changes to htlc signatures, and resolve the other issues (needs test cases, debate about 1-vs-2 etc) on the ml or issue
14:19 < niftynei> #action work on redoing things to use anchors (maybe based on feerates) in htlc txn
14:19 < BlueMatt> cdecker: I tend to agree, but I'm not sure what it *is* - there's a very good reason why it is what it is around DoS issues, and while I agree it's not ideal even in its own right, the solution isn't clear.
14:19 < BlueMatt> I have lots of ideas, but the resources needed to implement them are... nowhere near there
14:20 < t-bast> ariard: :+1:
14:20 <+roasbeef> yeh would need to be like a big initiative
14:20 < ariard> roasbeef: true, all mempool rules should be documented as in bip125
14:20 < niftynei> i'm sorry to say that we are out of time
14:20 < ariard> to make it easier for people doing offchain stuff to reason about the mempool
14:20 < niftynei> thanks for the great discussion today everyone and t-bast for the agenda
14:21 < BlueMatt> +1
14:21 < niftynei> and cdecker for the co-chair assist ;)
14:21 < t-bast> thanks niftynei!
14:21 < niftynei> i'm going to close the meeting but feel free to continue discussion
14:21 < niftynei> #endmeeting
14:21 < lightningbot> Meeting ended Mon Apr 27 21:21:36 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:21 < lightningbot> Minutes: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-04-27-20.08.html
14:21 < lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-04-27-20.08.txt
14:21 < lightningbot> Log: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-04-27-20.08.log.html
14:21 < t-bast> and thanks everyone for the discussions
14:22 < t-bast> I'd love to see such a tx-pinning summary
14:22 < rusty> niftynei: thanks for chairing!
14:23 < t-bast> roasbeef you mentioned a mechanism to upgrade existing channels using anchor to potential changes in the commit tx in the future
14:23 < t-bast> is that something that we could use to do what BlueMatt suggests in multiple steps?
14:23 <+roasbeef> t-bast: yeh need to send out my draft
14:24 <+roasbeef> yeh exactly, deploy anchors 1.0, then 2.0 later
14:24 <+roasbeef> even just catching up channels in the wild to stuff like static_remote_key would be a big win
14:24 < t-bast> and we'd be able to upgrade existing anchor 1.0 channels to 2.0 without on-chain txs?
14:24 < ariard> roasbeef: could this be used to have floating dust outputs?
14:24 <+roasbeef> t-bast: that's the idea, needs to stand up to scrutiny
14:24 <+roasbeef> floating dust outputs?
14:24 < ariard> like what is not dust now maybe is in the future with a fee increase
14:25 < t-bast> roasbeef: great!
14:25 <+roasbeef> t-bast: i think the ppl messing around w/ dlc stuff would also be into it, so then you could update your chan on the fly if needed, or like add "sub-commitments" n stuff
14:25 < lndbot> hmm this is the second time i've been mentioned by you @bastien.teinturier no worries probably a typo but maybe you think i'm somebody else?
14:25 <+roasbeef> ariard: ahh yeh
14:25 < ariard> because right now `dust_limit_satoshis` is a fixed parameter at channel creation
14:25 <+roasbeef> mhmm, one thing in that direction also is being able to dynamically update our "flow control" params
14:25 < ariard> and your old channels may be configured with values which don't make sense anymore
14:25 < niftynei> cdecker: i wrote up some notes on o'connor's RBF proposal a while ago on my blog https://basicbitch.software/posts/2018-12-27-Explaining-Replace-By-Fee.html
14:26 < t-bast> christian: woops sorry, I was meaning to say something to cdecker :D
14:26 <+roasbeef> stuff like the max # of htlcs allowed, you may want to have a new peer only be able to propose N htlcs, then increase/decrease that based on failure/success rate, may be another mitigation against loop/jamming attacks
14:26 < t-bast> christian: I didn't realize that lndbot was forwarding pings, sorry!
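[Editor's note: roasbeef's adaptive max-HTLCs idea could be sketched as below. The update rule (grow by one on good behavior, halve on bad) is an assumption for illustration, not anything specified; 483 is the protocol's hard cap on accepted HTLCs per direction, and the floor value is arbitrary.]

```python
def next_htlc_limit(current_limit, successes, failures,
                    floor=5, ceiling=483):
    """Adjust how many in-flight HTLCs a peer may propose, based on
    observed settle/fail behaviour over some window.

    Sketch of a jamming mitigation: a new peer starts near `floor`
    and earns a larger allowance over time; misbehaviour halves it.
    """
    if failures > successes:
        # mostly failing: shrink the allowance, but never below floor
        return max(floor, current_limit // 2)
    # mostly settling: grow slowly toward the protocol ceiling
    return min(ceiling, current_limit + 1)
```

Dynamically renegotiating such flow-control parameters is exactly the kind of thing the channel-upgrade mechanism discussed above would enable without an on-chain transaction.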
14:27 < ariard> roasbeef: yeah based on good behavior you may want to relax policy
14:27 <+roasbeef> mhmm, lnd also has something called a "channel acceptance policy" which is a predicate to accept/deny inbound requests, and can also be used to not allow some node to quickly make a bunch of connections to do funny biz
14:28 <+roasbeef> so at least they'd need to build up good will over a period of time (however that's computed), then decide to go rogue or w/e
14:28 < ariard> do you have something like `max_dust_value_in_flight`? It's likely exploitable
14:28 < ariard> like don't have too much unenforceable onchain value with loosely-trusted peers
14:28 <+roasbeef> ariard: I think that's kinda a combo of max # in flight and the max amt, agree there should maybe be another setting there
14:28 <+roasbeef> mhmm, we don't monitor it atm, but that could be added just as policy on top of channels I think
14:29 < ariard> yeah but you need to advertise it to your counterparty
14:29 <+roasbeef> ah true, so they don't go over
14:29 <+roasbeef> fallback ofc would be just trying to cancel back, but at that point it's already on your commitment
14:29 < ariard> onchain outgoing/incoming policies may also be advertised, to avoid some block race condition triggering unexpected channel closure
14:30 < ariard> yeah you may limit the number of updates in a single flow, makes sense to me
14:30 <+roasbeef> yeh there're things that exist on the link level which aren't on the network level rn, you're meant to set them both with respect to each other, but bad configs are possible
14:30 < lndbot> no worries ;) we do have the same name after all, i'm guessing you typed @ christian
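[Editor's note: ariard's hypothetical `max_dust_value_in_flight` check could look like the sketch below. HTLCs below the dust limit get no output on the commitment transaction, so their value is unenforceable on-chain; the idea is to cap the total such value exposed to a loosely-trusted peer. The name comes from the discussion above and is not a spec parameter.]

```python
def dust_exposure_ok(htlc_amounts_sat, dust_limit_sat,
                     max_dust_in_flight_sat):
    """Return True if adding all these HTLCs keeps unenforceable
    (dust) value within the configured cap.

    HTLCs under dust_limit_sat are trimmed from the commitment tx and
    cannot be claimed on-chain, so only the peer's cooperation backs
    them; the cap bounds how much can be lost to a rogue peer.
    """
    dust_total = sum(a for a in htlc_amounts_sat if a < dust_limit_sat)
    return dust_total <= max_dust_in_flight_sat
```

As noted in the exchange, such a cap would need to be advertised to the counterparty so they can avoid exceeding it; otherwise the only fallback is failing HTLCs back after they are already on the commitment.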
19:30 < rusty> OK, so https://ln.bigsun.xyz/node/033878501f9a4ce97dba9a6bba4e540eca46cb129a322eb98ea1749ed18ab67735 seems to be spamming channel updates bigtime (over 1000 per day).
--- Log closed Tue Apr 28 00:00:05 2020