--- Log opened Mon Oct 12 00:00:44 2020
11:57 < niftynei> hi hi hi
11:58 < t-bast> good morning everyone!
11:58 < niftynei> t-bast, thanks for pulling a meeting agenda together https://github.com/lightningnetwork/lightning-rfc/issues/802
11:59 < rusty> Hello all!
11:59 < t-bast> my pleasure, it's helpful to continue discussing outside of meetings IMO
11:59 < t-bast> hey rusty
11:59 < jkczyz> howdy! 🤠
11:59 < t-bast> hey!
12:01 < niftynei> hmm the agenda says 8PM UTC, but the calendar is showing the meeting for now (afaict the time is currently 7PM UTC)
12:01 < niftynei> hello daylight savings time!
12:01 < t-bast> damn, we're indeed in the daylight savings time madness, I didn't realize!
12:02 < t-bast> I'll update the issue (but unfortunately too late for some :/)
12:02 < rusty> Yeah, sorry, I meant to remind everyone last week when it moved :(
12:03 < niftynei> thanks t-bast 👍
12:03 < t-bast> that's why our meeting time should be a block number instead, it knows no daylight savings
12:04 < niftynei> hehe that's one way to drive the market demand for blockclocks ;P
12:04 < t-bast> :D
12:05 < t-bast> Shall we start the meeting?
12:05 < ja> t-bast: UTC does not have DST, so i don't understand?
12:05 <+roasbeef> yeh so it's rn
12:05 <+roasbeef> it changes like 3 times a year or w/e
12:05 < niftynei> ja: iiuc the meeting is pegged to adelaide time, which means the UTC time moves
12:06 < ja> niftynei: aah ok, got it
12:06 < t-bast> #startmeeting
12:06 < lightningbot> Meeting started Mon Oct 12 19:06:45 2020 UTC. The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:06 < lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
12:06 < rusty> Yes, it's my fault. I refuse to get up for a 4:30am call, but in DST the 5:30am is more civilized for everyone else.
12:06 < niftynei> let's goooo :)
12:06 < t-bast> Let's start recording :)
12:07 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/issues/802
12:07 < t-bast> I set up a tentative agenda, with feedback from cdecker and ariard
12:07 < t-bast> We have a couple of simple PRs to warm up
12:07 < t-bast> #topic Bolt 4 clarification
12:07 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/801
12:08 < t-bast> A simple clarification around tlv_streams, I think we should let the PR author answer the comments
12:09 < rusty> t-bast: yes. Your point about it breaking tooling was correct, though it's in the lightning-rfc tree, not a c-lightning specific thing.
12:09 <+roasbeef> yeh idk if we should worry about that tool breaking when making spec changes
12:09 < t-bast> rusty: true, but I think you're the only ones to use it xD, worth maintaining though
12:10 < rusty> t-bast: lnprototest also uses it, FWIW. I guess that's kinda me too :)
12:10 <+roasbeef> I think this PR is a toss-up, no strong feelings w.r.t. whether this actually clarifies things, kinda in typo territory
12:10 < t-bast> I don't think the variable name needs to change TBH
12:10 < t-bast> Let's wait for the author to comment and move to the next PR?
12:10 < rusty> roasbeef: it does stop people breaking the spec parsing though, or at least makes them fix up the tool if they do.
12:10 <+roasbeef> their point about the bolt 4 wording is more worthy of a change imo
12:10 < rusty> t-bast: ack
12:10 < rusty> roasbeef: ack
12:10 < t-bast> agreed
12:11 < t-bast> #topic Claiming revoked anchor commit txs in a single tx
12:11 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/803
12:11 < t-bast> Worth having a look and bikeshedding the wording on this PR
12:11 < t-bast> It tells implementers to be careful when batch-claiming revoked outputs
12:12 <+roasbeef> if we're advising w.r.t. best practices when sweeping revoked outputs, we may also want to add that depending on the latest state and the broadcasted state, the "defender" can use the funds of the attacker to siphon off to chain fees to speed up the confirmation of the justice transaction
12:13 < t-bast> maybe a whole section of recommended best practices for sweeping could be considered?
12:13 <+roasbeef> but also even if they pin the second level, that needs to confirm, then they still need to wait for the csv, and there's another revocation output there
12:13 < t-bast> I'm sure ariard would love writing a section like that
12:13 < t-bast> where we'd quickly explain the subtleties and attacks
12:13 <+roasbeef> it's one of those "maybe does something, but can delay things at times" vectors from my pov, in that either they pay a lot of fees to jam things up, and that either confirms or it doesn't
12:13 <+roasbeef> if it does, then they paid a buncha fees, if it doesn't, then it's likely the case that the "honest" one would
12:14 < rusty> roasbeef: yeah, ZmnSCPxj recently implemented scorched earth for c-lightning. But you've always had this problem where a penalty could be a single giant tx but then you need to quickly divide and conquer as they can play with adding HTLC txs...
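[Editor's note: the "single giant tx, then divide and conquer" approach rusty mentions can be sketched as a small re-batching helper. This is a toy sketch with invented names, not code from c-lightning, lnd, or eclair: when the attacker gets an HTLC tx confirmed that spends one of the outputs we were batch-sweeping, that input leaves our justice tx and we chase the resulting second-level output instead.]

```python
def rebatch_justice(claimable: set, spent_by_cheater: set) -> tuple:
    """Split the breach sweep after the cheater confirms HTLC txs.

    Returns (inputs still claimable in one batched justice tx,
    second-level outputs that now need their own penalty txs once
    they appear on-chain). Names and labelling are illustrative only.
    """
    still_batched = claimable - spent_by_cheater
    # Each conflicted input was replaced on-chain by an HTLC tx whose
    # output also carries a revocation path we can still claim.
    chase_second_level = {"second-level:" + o
                          for o in claimable & spent_by_cheater}
    return still_batched, chase_second_level
```

[For example, if we were sweeping outputs `{a, b, c}` and the cheater confirms an HTLC tx spending `b`, we re-sign the batch over `{a, c}` and separately watch for `b`'s second-level output.]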
12:14 <+roasbeef> mhmm, another edge case here is if they spend the htlc transactions within distinct transactions
12:14 <+roasbeef> if you're sweeping them all in a single tx, you need to watch for spends of each in order to update your transaction and breach remedy strategy
12:14 < rusty> (So c-lightning just makes indep txs, because it's simple and nobody really seems to care).
12:15 < t-bast> yes, TBH eclair also makes indep txs, if you're claiming a revoked tx you're going to win money anyway, you don't need to bother optimizing the fees
12:15 <+roasbeef> mhmm, according to jonny1000's "breach monitor" there was a "suspected breach" for the first time this year (?) a few weeks ago
12:15 < t-bast> roasbeef: didn't hear about that, can you share a link?
12:16 <+roasbeef> t-bast: so not always (that you'd win money), if things are super lopsided then they may just be going back to a better state for them, and one that you're actually in a worse position for
12:16 <+roasbeef> sure, sec....
12:16 <+roasbeef> https://twitter.com/BitMEXResearch/status/1305794706702569472?s=20
12:16 < t-bast> roasbeef: true, if everything was on your side they may force you to spend on fees, that's right
12:17 <+roasbeef> so it was a detected justice transaction, ofc you can't detect failed defense attempts
12:17 <+roasbeef> https://forkmonitor.info/lightning
12:17 < t-bast> 55$... interesting :)
12:17 <+roasbeef> ok yeh was wrong about "first time this year", but they're relatively uncommon and usually for "small amounts", which could mean just ppl testing to see "if this thing actually works"
12:18 <+roasbeef> no wumbo defenses yet ;)
12:18 < t-bast> going back to the PR, do you think it's worth creating a section dedicated to sweeping subtleties? Or just bikeshed the current wording?
12:18 < t-bast> roasbeef: if it's never used, it doesn't work xD
12:18 <+roasbeef> hehe
12:18 <+roasbeef> I think this prob stands on its own, wouldn't want to slow it down to add more stuff to it that can be done in a diff more focused PR (on the best practices section)
12:19 <+roasbeef> gonna comment on it to mention that even if they pin, you can play at the second level if that ever confirms
12:19 < t-bast> great, too bad ariard isn't there today, we'll continue the discussion on github
12:19 <+roasbeef> fsho
12:19 < t-bast> #action discuss wording on the PR
12:20 < t-bast> #topic clarify message ordering post-reconnect
12:20 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/issues/794
12:20 < t-bast> This issue of clarifying the order of replaying "lost" messages on reconnection has probably been discussed at length before
12:21 < t-bast> But I couldn't find past issues or ML threads to reference
12:21 < lndbot> Hi, I made that PR
12:21 <+roasbeef> yeh we discovered this when tracking down some spurious force closes and de-syncs in lnd, that would even happen lnd <-> lnd
12:22 <+roasbeef> imo this is one of the areas of the spec that's the most murky, but also super critical to "get right" to ensure you don't have a ton of unhappy users
12:22 <+roasbeef> (this area == how to properly resume a channel that may have had in-flight updates/signatures/dangling-commitments)
12:23 < t-bast> I agree, this is important to get right as this leads to channel closures, so it's worth ensuring we all agree on the expected behavior!
12:24 <+roasbeef> mhmm, almost feels like we need to step waaaaaay back and go full on PlusCal/TLA+ here, as idk about y'all but even years later we've found some new edge cases, but could just be us lol, also ofc something like that can be pretty laborious
12:24 < rusty> roasbeef: indeed. If we were doing it again I'd go back to being half duplex which removes all this shit.
12:24 <+roasbeef> I think a stop gap there would just be stating exactly what you need to commit to disk each time you recv a message
12:24 <+roasbeef> but what I'm referring to rn (not writing all the state you need to) is distinct from what eugene found, which is ambiguous retransmission _order_
12:25 <+roasbeef> and the order of sig/revoke here is make or break
12:25 <+roasbeef> eugene have anything you want to add?
12:25 < t-bast> it feels to me that your outgoing messages are a queue, that you need to be able to persist/replay on reconnections
12:25 < lndbot> Yes that's the crux of the issue
12:25 <+roasbeef> t-bast: is that how ya'll implement it?
12:25 < t-bast> we should keep that queue on disk until the commit_sig/revoke_and_ack lets us know we can forget about those
12:26 < t-bast> roasbeef: I don't think we have that explicitly, but I'm thinking that we could :)
12:26 < t-bast> I may be missing some subtleties/edge cases though
12:26 <+roasbeef> also iirc rust-lightning has something in their impl to force a particular ordering based on a start-up flag?
12:26 < t-bast> This is really the kind of issue where I only feel confident once I enumerate all possible cases to verify it works...
12:26 < rusty> But I think this is wrong (though this is my first read of this issue). Either ordering should work, but I will test.
12:26 < lndbot> Yes you really do need to enumerate all cases, so that's what we did
12:27 <+roasbeef> t-bast: hehe yeh, hence going waaaay up to concurrent protocol model checking...
12:27 <+roasbeef> eugene did it more or less by hand in this case tho
12:27 <+roasbeef> rusty: so I think we have a code level repro on our end, right eugene?
12:27 < lndbot> There are a limited number of cases really, so it should be possible to enumerate
12:27 <+roasbeef> I think the examples in that PR also spell out a clear scenario as well
12:27 < t-bast> rusty: either ordering works? I'm surprised, I'm not sure eclair will handle both orders (but haven't tested E2E)
12:27 < lndbot> We can trigger the current state de-sync yeah
12:28 <+roasbeef> check out this comment for the unrolled de-sync scenario: https://github.com/lightningnetwork/lightning-rfc/issues/794#issuecomment-687337732
12:28 < rusty> Well, OK. We assume atomicity between receiving sig and sending rev. Thus, if Alice says she has received the sig (via the reconnect numbers) the rev must go first.
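[Editor's note: rusty's atomicity rule can be written down as a tiny decision function. This is an illustrative sketch, not eclair/lnd/c-lightning code; the function and flag names are invented. Only the "peer already received our sig, so the revoke goes first" branch comes from the discussion; the sig-first fallback for the opposite case is an assumption.]

```python
def retransmit_order(owe_sig: bool, owe_rev: bool,
                     peer_received_last_sig: bool) -> list:
    """Order of messages retransmitted on reconnect.

    Receiving a commitment_signed and sending the matching
    revoke_and_ack are treated as atomic, so if the peer's
    channel_reestablish numbers show it already received our last
    sig, our revoke_and_ack must be replayed first.
    """
    msgs = []
    if owe_rev and peer_received_last_sig:
        msgs.append("revoke_and_ack")
    if owe_sig:
        msgs.append("commitment_signed")
    if owe_rev and not peer_received_last_sig:
        # Assumed fallback: the peer never saw our sig, so its revoke
        # cannot be a response to it; replay sig before revoke.
        msgs.append("revoke_and_ack")
    return msgs
```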
12:29 <+roasbeef> but there're scenarios where you need to retransmit _both_ those messages
12:29 <+roasbeef> typically it occurs when there're in-flight concurrent updates
12:29 < t-bast> rusty: interesting, I don't think that's how eclair works, I'll check in detail
12:29 <+roasbeef> (so both sides send an add, both may also send a sig afterwards, with one of those sigs being lost)
12:30 < rusty> roasbeef: the reconnect numbers should indicate what happened there, though. At least, that's the intent.
12:30 < t-bast> rusty: I read it wrong, I agree with what you said
12:31 <+roasbeef> ok if y'all are able to provide a "counter-example" w.r.t. why eugene's example doesn't result in a de-sync then I think that'd be an actionable step that would let us either move forward to correct this, or determine it isn't actually an issue in practice
12:31 < rusty> Sigh, there is a simpler scheme which I discarded under pressure to optimize this. It would also simplify edge cases with fees.
12:31 <+roasbeef> there very well could just be something wrong w/ lnd's assumptions here as well
12:31 <+roasbeef> but I think we did a CL <-> lnd repro too eugene?
12:31 < rusty> roasbeef: no, I've changed my mind. If you ack the sig, you need to immediately revoke.
12:31 <+roasbeef> fsho
12:32 < lndbot> We did a CL <-> lnd repro but that didn't show this, that showed a different CL-specific issue with HTLCs being out of order.
12:32 <+roasbeef> ah yeh that's another thing... htlc retransmission ordering
12:32 <+roasbeef> seems some impls send them in increasing htlc id order, while others do something else
12:32 < rusty> eugene: yeah, I've got one report of that, I suspect that we're relying on db ordering which sometimes doesn't hold.
12:33 <+roasbeef> but the way lnd works, we'll re-add those to the state machine, which enforces that ids aren't skipped
12:33 <+roasbeef> ok cool that y'all are aware/looking into it
12:33 < t-bast> I'll dive into eclair's behavior in detail and will report back on the issue
12:33 < t-bast> Let's check eclair and c-lightning and post directly on github, sounds good?
12:33 <+roasbeef> sgtm
12:33 < t-bast> And ping ariard/BlueMatt to have them check RL
12:34 <+roasbeef> on our end we fixed a ton of "we didn't write the proper state so we failed the reconnection" issues in 0.11
12:34 <+roasbeef> a ton being like 2/3 lol
12:34 < rusty> roasbeef: that's still quite a few, given how long this has been around :( Maybe I'll hack up the simplified scheme, see what it looks like in practice.
12:34 <+roasbeef> we plan on doing a spec sweep to see how explicit some of these requirements were
12:34 <+roasbeef> fwiw the section on all this stuff is like a paragraph lol
12:35 < t-bast> it should fit on a t-shirt, otherwise it's too complex
12:35 <+roasbeef> rusty: yeh... either we try to do something simpler, or actually try to formalize how we "think" the current version works w/ something like pluscal/tla+
12:35 <+roasbeef> t-bast: mhmm, I think it can be terse, just that possibly what's there rn may be insufficient
12:35 < BlueMatt> oops, today's a us holiday so I forgot there was a meeting. I'd have to dig, but we have fuzzers explicitly to test these kinds of things so I'd be very surprised to find we have any desync issues left for just message ordering.
12:35 <+roasbeef> I also wonder how new impls like Electrum handle this stuff
12:35 < t-bast> roasbeef: yeah, I was teasing the extreme, I think this needs to be expanded to help understanding
12:36 <+roasbeef> BlueMatt: is it correct that RL has a flag in the code to force a particular ordering of sig/revoke on transmission?
12:36 < BlueMatt> roasbeef: yes. we initially did not, but after fighting with the fuzzers for a few weeks we ended up having to, to get them to be happy (note that it is only partially for disconnect/reconnect - it's also because we have a mechanism to pause a channel while waiting on I/O to persist new state)
12:37 < BlueMatt> (and I'd highly recommend writing state machine fuzzers for those who have not - they beat the crap out of our state machine, especially around the pausing feature, and forced a much more robust implementation)
12:38 < lndbot> What exactly do you test for?
12:39 < t-bast-official> damn I got dc-ed
12:39 <+roasbeef> t-bast and t-bast-official: will the real t-bast plz stand up?
12:39 < t-bast-official> it's an official account, you're safe
12:39 < BlueMatt> create three nodes, make three channels, interpret fuzz input as a list of commands to send payments, disconnect, reconnect, deliver messages asynchronously, pause channels, unpause channels, etc.
12:39 * rusty looks at the c-lightning reconnect code and relives the nightmare. We indeed do a dance to get the order right :(
12:39 < lndbot> And make sure the channel doesn't de-sync?
12:40 < BlueMatt> right, if any peer decides to force-close that is interpreted as a failure case and we abort();
12:40 < BlueMatt> the relevant command list is here, for the curious: https://github.com/rust-bitcoin/rust-lightning/blob/main/fuzz/src/chanmon_consistency.rs#L663
12:42 < t-bast-official> Shall we move on? What about everyone checks their implementation this week and we converge on the issue?
12:43 <+roasbeef> sgtm, a good place to start is to check out that example of a de-sync scenario then double check against actual behavior
12:43 < rusty> t-bast-official: ack, I've already commented on-issue. TIA to whoever does a decent write up of this in the spec though...
12:43 <+roasbeef> we should prob also note the htlc ordering thing more explicitly somewhere too
12:44 < t-bast-official> #action check implementations' behavior in the scenario described in the issue
12:44 < ja> btw according to https://wiki.debian.org/MeetBot only the chair can do endmeeting, topic, agreed
12:44 <+roasbeef> and the-real-t-bast is no more?
12:44 < t-bast-official> #action figure out a good writeup to clarify the spec
12:44 < t-bast-official> Daaamn, I can manually reconnect, be back
12:44 <+roasbeef> lol
12:44 <+roasbeef> "the lost meeting"
12:45 < rusty> roasbeef: the meeting that never ended...
12:45 < t-bast> chair is back, back, back, back again
12:45 < t-bast> #topic Evaluate historical min_relay_fee to improve update_fee in anchors
12:46 < t-bast> rusty / BlueMatt, did you have time to dig up the historical numbers on this topic?
12:46 < rusty> t-bast: I got sidetracked trying to optimize bitcoin-iterate, sorry. Will re-visit today.
12:46 < t-bast> no worries, we can go to the next topic then
12:46 < t-bast> blinded paths or upfront payments/DoS?
12:47 < rusty> t-bast: hmm, blinded paths merely needs review IIRC?
12:47 < BlueMatt> t-bast: lol i thought the meeting was tomorrow and had a free day today :/ sorry.
12:48 < BlueMatt> stupid 'muricans
12:48 < t-bast> Yes, I've asked around to get people to review, hopefully we should see some action soon
12:48 < t-bast> BlueMatt: no worries!
12:48 <+roasbeef> BlueMatt: now we even have *two* holidays in one day!
12:48 < t-bast> Let's do upfront payments/DoS mechanisms then?
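[Editor's note: the harness pattern BlueMatt describes (interpret fuzz input as a command list, treat any force-close as failure) looks roughly like this skeleton. The command set mirrors his list, but the node model is a stub invented here, nothing from chanmon_consistency.rs.]

```python
# Miniature fuzz-harness skeleton: each input byte selects a command
# driving a toy channel; the invariant is checked after every step.
COMMANDS = ("send_payment", "disconnect", "reconnect",
            "deliver_one_msg", "pause", "unpause")

class ToyChannel:
    """Stub standing in for a real channel state machine."""
    def __init__(self):
        self.connected = True
        self.paused = False   # models waiting on I/O to persist state
        self.in_flight = 0

    def force_closed(self) -> bool:
        # A real harness asks each peer's actual state machine; the toy
        # model never force-closes, so a correct run raises nothing.
        return False

def run_case(fuzz_input: bytes) -> None:
    chan = ToyChannel()
    for b in fuzz_input:
        cmd = COMMANDS[b % len(COMMANDS)]
        if cmd == "disconnect":
            chan.connected = False
        elif cmd == "reconnect":
            chan.connected = True   # real harness replays channel_reestablish
        elif cmd == "pause":
            chan.paused = True
        elif cmd == "unpause":
            chan.paused = False
        elif cmd == "send_payment" and chan.connected and not chan.paused:
            chan.in_flight += 1     # would start the add/sig/revoke dance
        elif cmd == "deliver_one_msg" and chan.connected and chan.in_flight:
            chan.in_flight -= 1     # would deliver one queued message
        # Any force-close is a fuzzer-found de-sync:
        assert not chan.force_closed(), "de-sync found by fuzzer"
```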
12:48 < t-bast> #topic Upfront payments / DoS protection
12:48 < rusty> DoS is an infinite well, but I have been thinking about it. AFAICT we definitely need two parts: something for fast probes, something for slow probes.
12:49 < t-bast> Just as we set the topic, a wild joost appears
12:49 <+roasbeef> yeh that's the thing, DoS is also mainly griefing, and some form of it exists in just about every protocol
12:49 < t-bast> he *knew*
12:49 <+roasbeef> vs like something that results in _direct_ attacker gain
12:49 < joostjgr> Haha, I was off by one hour because of timezone and just catching up
12:49 < niftynei> two for one holiday deal, sounds very american haha
12:49 <+roasbeef> then endless defence and ascend
12:50 <+roasbeef> niftynei: lolol
12:50 <+roasbeef> "now with a limited time offer!"
12:50 < rusty> For slow probes I still can't beat a penalty. There are some refinements, but they basically mean someone closes a channel in return for tying up funds. You can still grief, but it's not free any more.
12:50 <+roasbeef> well ofc there's the button we've all been looking at for a while: pay for htlc attempts
12:51 <+roasbeef> but that has a whole heap of tradeoffs, but maybe also makes an added incentive for better path finding in general
12:51 <+roasbeef> "spray and pray" starts to have an actual cost
12:51 < rusty> For fast probes, up-front payment. I've been toying with the idea of node-provided tokens which you can redeem for future attempts.
12:51 <+roasbeef> but then again, that could be a centralization vector: ppl just outsource to some mega mind db that knows everything
12:51 < rusty> (which lets you fuzz the amounts much more: either add some to buy some tokens, or spend tokens to reduce cost)
12:51 <+roasbeef> rusty: yeah I've thought of that too... but that's also kinda dangerous imo...
12:51 < rusty> roasbeef: oh yeah, for sure!
12:52 < rusty> roasbeef: ideally you go for some tradable chaumian deal, where you can automatically trade some with neighbors for extra obfuscation
12:52 <+roasbeef> imo the lowest hanging, low-risk fruit here is just dynamic limits as t-bast brought up again on the ML recently
12:52 <+roasbeef> rusty: yeh... but I'm more worried about like "forwarding passes"...
12:52 < t-bast> it would be really helpful to have a way to somewhat quickly check a proposal against known abuses, as every complex mechanism we introduce may be abused and end up worse
12:53 <+roasbeef> hmm interesting, may need some fleshing out there to enumerate what properties you think that can give us rusty
12:53 <+roasbeef> t-bast: yeh if there was an ez no-brainer solution, we'd have done it by now
12:53 <+roasbeef> in the end, maybe there's just a series of heterogeneous policies ppl deploy, as all of them are "not so great"
12:54 <+roasbeef> but again it's also "just" griefing/DoS, restricted operating modes in various flavors are ofc possible (with their own can of worms that worry me even more than the griefing tbh)
12:54 < rusty> roasbeef: yeah, OK, let's focus on up-front payment. Main issue is that you now provide a distance-to-failure/termination oracle, so my refinement attempts have been to mitigate that.
12:54 < t-bast> at least having a living document somewhere with all the approaches we tried that were broken may be helpful - I can draft that if you find it useful
12:55 <+roasbeef> t-bast: sure, part of the issue is that there've been like 20+ proposals split up across years of ML traffic
12:55 <+roasbeef> rusty: I was proposing going from the other direction: dynamic limits to give impls the ability to try out their own policies, with some of them eventually becoming The Word
12:56 < rusty> roasbeef: I think we need both, TBH.
12:56 <+roasbeef> if we had upfront payments magically solved, how many of us would deploy them by default today on impls given all the tradeoffs?
12:56 <+roasbeef> yeh :/
12:56 <+roasbeef> it could just be part of some reduced operating model, where you then signal ok you need to pay to pass
12:56 <+roasbeef> but then again that can be gamed by making things more expensive for everyone else
12:56 < joostjgr> on the "just griefing/DoS": The ratio of attacker effort vs grief is quite different compared to for example trying to overload a website. And the fact that the attack can be routed across the network doesn't make it better
12:56 < rusty> joostjgr: indeed.
12:56 <+roasbeef> joostjgr: it depends, overload a platform/infrastructure provider instead, and you get the same leverage
12:57 <+roasbeef> and depending on who that is, actually do tangible damage, but ofc the big bois these days have some nice force fields
12:57 <+roasbeef> > if we had upfront payments magically solved, how many of us would deploy it by default today on impls given all the tradeoffs?
12:58 <+roasbeef> ?
12:58 <+roasbeef> taking into account all the new gaming vectors it introduces
12:58 < t-bast> depends on the trade-offs :D
12:58 <+roasbeef> lolol ofc ;)
12:58 < rusty> I would. But then, I don't really *like* my users :)
12:58 <+roasbeef> also you'd need to formulate it into a model of a "well funded" attacker as well, that just wants to mix things up
12:58 <+roasbeef> lololol
12:59 < joostjgr> if the upfront payment pays for exactly the cost the htlc represents, I'd say it is fair and would be deployed.
12:59 <+roasbeef> as in: if you're a whale, only a very high value would actually be a tangible "cost" to you
12:59 <+roasbeef> joostjgr: deployment is one thing, _efficacy_ is another
12:59 <+roasbeef> whales don't feel small waves...
13:00 < cdecker> Wait, did I fall into the DST trap... again.... ?
13:00 < joostjgr> Wasn't the question about deployment? Just saying I would deploy it :)
13:01 < t-bast> Good evening cdecker xD
13:01 <+roasbeef> cdecker: welcome to the future ;)
13:01 < cdecker> Hi everyone :-)
13:01 <+roasbeef> joostjgr: yeh but digging deeper, how do you determine if something is effective after deployment? should "ineffective" mitigations be deployed?
13:01 < cdecker> That's what I get for writing my own calendar...
13:01 < cdecker> Sorry for being late
13:02 <+roasbeef> "efficacy" also depends on the profile of a theoretical attacker and their total budget, etc, etc
13:02 < joostjgr> If it makes the attacker pay for the attack, I think it can be effective
13:02 < t-bast> No worries cdecker, we're right in the middle of upfront payment/DoS, if you want to jump in
13:02 < rusty> Anyone have ideas about cost? Would we add YA parameter in gossip, or make it a flat fee, or make it a percentage of the successful payment?
13:02 <+roasbeef> just like a really "well funded" attacker can clog up bitcoin blocks for weeks with just their own transactions
13:02 <+roasbeef> rusty: should be set and dynamic for the forwarding node imo
13:02 < joostjgr> Yes, there are always attackers with bigger pockets of course. But may not be a reason to let anyone execute these for free
13:02 < t-bast> rusty: I kinda liked your proposal where it wasn't only sats but also some grinding
13:03 < rusty> t-bast: I have a better one now. You can buy/use credits. Ofc it's trust, but you're only really using it for amount obfuscation.
13:03 <+roasbeef> joostjgr: yeh, just getting at that stuff like this is never really "solved" just mitigated based on some attacker profile/budget, like just look into all the shenanigans that miners can get into against other miners in an adversarial env
13:03 <+roasbeef> rusty: there's a danger zone there....
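[Editor's note: rusty's question about cost — "a flat fee, or a percentage of the successful payment?" — can be illustrated by combining both, mirroring the base-plus-proportional shape of today's BOLT 7 routing fees. Applying that shape to upfront payments is purely hypothetical here; the parameter names and numbers are made up.]

```python
# Hypothetical: an upfront fee advertised like BOLT 7 routing fees
# (base + proportional-per-millionth), but charged per HTLC *attempt*
# rather than only on a successful forward.
def upfront_fee_msat(amount_msat, base_msat=100, proportional_millionths=10):
    return base_msat + amount_msat * proportional_millionths // 1_000_000

# A 1,000,000 msat (1000 sat) attempt would prepay 110 msat with these numbers.
assert upfront_fee_msat(1_000_000) == 110
```

The flat `base_msat` part makes even zero-value probes cost something; the proportional part makes large-liquidity jamming scale in cost — which is roughly why the discussion keeps circling both knobs.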
13:04 <+roasbeef> centralization pressures also need to be weighed
13:04 < niftynei> roasbeef, is centralization in this case based on the 'decay function of service providerism' or something inherent to the proposal?
13:05 < rusty> roasbeef: yep, at least one. Hence I really wanted them to be tradable, but that's more complex.
13:05 <+roasbeef> not sure what you mean by decay there (there're less ppl?), niftynei, more like "ma'am, do you have the proper papers for this transit?"
13:05 < t-bast> niftynei: can you explain what you mean by "decay function of service providerism"?
13:05 <+roasbeef> "ahh, I see, then no can do"
13:06 < niftynei> number of people providing the service is smaller than total set of people using the service
13:06 < t-bast> you mean big barrier to entry for newcomers/small nodes?
13:06 < niftynei> e.g. everyone needs liquidity, only so many people run 'liquidity providing' services
13:07 < ariard> hello
13:07 < niftynei> and over time those tend to fall offline/consolidate etc because 'running a service' is Work
13:07 <+roasbeef> prob also introduces some other bootstrapping problems as well, but imo fragmenting the network is an even bigger risk....if y'all get where I'm going with that above example....
13:07 < t-bast> hey ariard, the daylight savings got you too (we started an hour ago)
13:07 < ariard> ah damn
13:08 < ariard> -> reading the logs
13:08 <+roasbeef> lotta logs lol
13:08 < cdecker> Nah, 250 lines, halfway through already ^^
13:08 < t-bast> roasbeef: yeah I agree, but it feels like all the potential solutions for this problem have to rely on some "trust" that is slowly built between peers (dynamically adjusting cost based on the "relationship" you had with your peer)
13:09 < t-bast> roasbeef: which definitely centralizes (why spend time building trust with people you don't know?)
13:09 <+roasbeef> t-bast: I think what I'm talking about above is an entirely distinct, more degenerate class of "failure" mode
13:09 < t-bast> yeah because when taken to the extreme, you split and others start their own network
13:10 <+roasbeef> even then, if you consider a "dynamic adversary" that can "corrupt" nodes after the fact, then that starts to fall short as well
13:10 <+roasbeef> (playing devil's advocate a bit here, but we'd need to really settle on a threat model)
13:10 < t-bast> I'll have my own LN, with blackjack and hookers
13:10 <+roasbeef> lololol
13:10 <+roasbeef> yeh that's always an option xD
13:10 <+roasbeef> the freedom to assemble!
13:11 < t-bast> Well at least I'll start centralizing all the ideas that have been proposed in that space in a repo which we can update as new ideas come, I think it will save time in the long run
13:11 <+roasbeef> gotta be going soon myself....great meeting though, also nice that with the time change I no longer have an actual meeting-meeting during this time as well
13:11 <+roasbeef> t-bast: +1
13:12 < t-bast> #action t-bast summarize existing proposals / attacks / threat models
13:12 < joostjgr> I think that threat model should at least include the trivial channel jamming that is possible today.
13:12 < t-bast> #action rusty can you detail your token idea?
13:12 < ariard> also issues with watchtowers
13:12 < rusty> t-bast: yeah, I'll post to ML.
13:12 < niftynei> ooh excited to see a summary/list of threat models to contemplate +1
13:12 < ariard> you should talk with Sergi, he spotted a few ones
13:13 < t-bast> joostjgr: yes definitely, there need to be different threat models for different kinds of attackers
13:13 < t-bast> ariard: yeah I saw some of your discussions, where a watchtower may be spammed as well, I'll ping him to contribute to the doc
13:14 < t-bast> Shall we discuss one last topic or shall we end now (sorry for the unlucky ones who missed the DST change...)?
13:14 < joostjgr> t-bast: looking forward to that overview too :+1:
13:14 <+roasbeef> idk if that's new? that's why all tower today are basically "white listed"
13:14 <+roasbeef> towers
13:14 < ariard> roasbeef: for a future with public towers
13:14 < t-bast> roasbeef: I agree, I think the watchtower model today is very easily spammable
13:14 <+roasbeef> at least it's like that in lnd, since we hadn't yet implemented rewards other than upon justice
13:15 < rusty> Yeah, the intention was always that you'd pay the watchtower.
13:15 < ariard> so yeah not the model of today but the model where you pay per update
13:15 < t-bast> roasbeef: but we can list the requirements/threat model to build the watchtowers of the future
13:15 <+roasbeef> yeh, lol just like you don't open up your network attached filesystem to the entire world
13:15 < ariard> and thus someone can force you to pay the watchtower for nothing until you exhaust your "credit"
13:15 < ariard> the entire world includes your channel counterparty
13:15 <+roasbeef> yeh depends entirely on how a credit system would work
13:16 <+roasbeef> yeh ofc, basic isolation is a requirement
13:16 <+roasbeef> but also one needs to assume the existence of a private tower at all times as well
13:16 <+roasbeef> depends on the situation tho really, very large design space
13:16 < ariard> yeah but for mobile ? but agree the design space is so large
13:17 < ariard> the latency cost of the watchtower might be a hint of the presence of one
13:17 <+roasbeef> depends really, also would assume mobile chan sizes <<<<<< actual routing node chan sizes
13:17 < ariard> especially if we have few of them
13:17 <+roasbeef> and it's mobile also, so don't store your mattress savings in there
13:17 <+roasbeef> but lots of things to consider, which is why we just implemented white listed towers to begin with
13:17 < ariard> t-bast: what was this story with people storing huge amounts on mobile lol?
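[Editor's note: ariard's pay-per-update concern above can be made concrete with a toy accounting sketch. The class, price, and API are invented for illustration; the point is only that with prepaid credit, a channel counterparty can force throwaway state updates until real updates are no longer covered.]

```python
# Toy sketch of a pay-per-update watchtower credit balance (all names
# and prices hypothetical). A counterparty who can trigger cheap state
# updates drains the client's credit "for nothing".
class TowerSession:
    PRICE_PER_UPDATE = 10  # hypothetical sats per encrypted state backup

    def __init__(self, credit_sats):
        self.credit_sats = credit_sats

    def backup_update(self, encrypted_blob):
        if self.credit_sats < self.PRICE_PER_UPDATE:
            raise RuntimeError("credit exhausted: update not protected")
        self.credit_sats -= self.PRICE_PER_UPDATE
        # ...store blob; decryptable only if a breach tx appears on-chain...

session = TowerSession(credit_sats=50)
for _ in range(5):              # counterparty forces 5 useless updates
    session.backup_update(b"junk")
# the next, genuinely needed update is now unprotected:
# session.backup_update(b"real") would raise RuntimeError
```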
13:18 < ariard> sure easier to start with private towers
13:18 <+roasbeef> oh yeh we know from exp as well ppl just really like to sling around large amts of sats to "test" :p
13:18 < t-bast> some people are crazy with their bitcoin, they probably have too much of those
13:18 <+roasbeef> g2g
13:19 < t-bast> see you roasbeef
13:19 < cdecker> cu roasbeef :-)
13:19 < t-bast> let's end for today, already a lot to do for the next meeting ;)
13:19 < t-bast> #endmeeting
13:19 < lightningbot> Meeting ended Mon Oct 12 20:19:25 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
13:19 < lightningbot> Minutes: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-10-12-19.06.html
13:19 < lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-10-12-19.06.txt
13:19 < lightningbot> Log: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-10-12-19.06.log.html
13:19 < rusty> OK, so I have my homework to do today. endmeeting?
13:19 < ariard> bye
13:19 < niftynei> tchau!
13:19 < t-bast> Thanks everyone!
13:20 < cdecker> Ciao everybody, sorry to have joined so late, was a good meeting ^^
13:20 * rusty thinks that the simplified channel update protocol would also make dynamic limits trivial.
13:20 < rusty> hmm....
13:20 < t-bast> cdecker if you have free time instead of a meeting, there's route blinding and trampoline waiting for more reviews xD
13:21 < t-bast> rusty: that would be nice...
13:22 < rusty> t-bast: BTW you might have noticed some work on offers... I'm really trying to get a rough prototype together this week.
13:22 < cdecker> Yep, I re-read both, seem like pretty polished proposals (but I said that a couple of months ago already ^^)
13:22 < t-bast> yes I've seen that it's shaping up, that's why I'm trying to get more reviews on route blinding to pave the way!
13:23 < t-bast> cdecker: but that's good to hear! :)
13:23 < t-bast> I've got to go too, see you next time!
13:24 < cdecker> I can't add much on the crypto review side (not my forte) but afaiu the only primitive that isn't already used in sphinx is the pubkey tweaking
13:24 < cdecker> See you around t-bast ^^
13:24 < t-bast> cdecker: yes, I don't think I'm doing anything too fancy or dangerous, but maybe I'm missing a trick that could simplify the whole thing...that's why I'd like more eyes on it!
13:24 < t-bast> See you!
16:37 < rusty> Send simplified update proposal to ml.
16:37 < rusty> s/send/sent/
19:13 < rusty> roasbeef / eugene: thanks for prod on the HTLC rexmit order issue. I've found it; indeed, we didn't check incoming order, so our tests didn't detect it. But we will re-xmit in random order.
20:21 <+roasbeef> rusty: dope!
20:21 * roasbeef does a cool internet high five
20:21 < rusty> roasbeef: yeah, we extract htlcs from the hash, and sure enough, probability that it's in order drops fast...
20:40 < rusty> As a quick hack, what do ppl think of limiting new channels to # simultaneous HTLCs based on the number of successful HTLCs they've done (plus one). Simply immediately fail any above this value. That means you have to actually use a channel a little before using it for a DoS.
20:40 < rusty> Sucks for MPP of course, hmm....
20:48 < aj> rusty: trivial to use micropayments to bump the limit prior to an attack?
20:49 < rusty> aj: yep, but at least they have some cost (not really enough).
22:09 <+roasbeef> rusty: I sketched out something similar to that in t-bast's latest ML thread on the topic
22:10 <+roasbeef> open questions are when to increase (and by how much), and when to decrease (how drastically) the HTLC bandwidth window
22:10 <+roasbeef> two things to consider ofc, both the # of slots and the total amount utilized
22:11 <+roasbeef> # of slots can really just go away since it's arbitrary using indirect htlc/commitment blocks (another off-chain multi-sig covenant that defers HTLC manifestation with a layer of indirection), can be chained to basically an unbounded size like file system indirect blocks
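[Editor's note: rusty's quick hack — cap a peer's simultaneous HTLCs at (successful HTLCs + 1), immediately failing anything above the cap — can be sketched as below. The growth rule is straight from the log; the class shape is an illustrative assumption, and roasbeef's open questions (when/how much to grow or shrink the window) are left as the fixed `successes + 1` rule.]

```python
# Sketch of the success-gated concurrent-HTLC limit discussed above:
# a fresh channel gets 1 slot; each successfully resolved HTLC earns one more.
class PeerHtlcLimiter:
    def __init__(self):
        self.successes = 0
        self.in_flight = 0

    @property
    def cap(self):
        return self.successes + 1   # "number of successful HTLCs, plus one"

    def offer_htlc(self):
        """True if the HTLC may be added; False means fail it immediately."""
        if self.in_flight >= self.cap:
            return False
        self.in_flight += 1
        return True

    def resolve(self, success):
        self.in_flight -= 1
        if success:
            self.successes += 1

peer = PeerHtlcLimiter()
assert peer.offer_htlc()         # first HTLC on a new channel: allowed
assert not peer.offer_htlc()     # second concurrent one: failed, cap == 1
peer.resolve(success=True)
assert peer.cap == 2             # the channel has earned a second slot
```

Note how rusty's "sucks for MPP" caveat falls out directly: a multi-part payment needs several concurrent slots, which a fresh channel hasn't earned yet.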
--- Log closed Tue Oct 13 00:00:45 2020