--- Log opened Mon Oct 26 00:00:58 2020
12:01 < cdecker> Good morning ^^
12:01 < rusty> Generic UTC Greeting everyone!
12:01 < t-bast> Good evening guys! Hope everyone makes it with the DST changes across the world ;)
12:02 < sstone> hi everyone !
12:02 < bitconne1> howdy!
12:02 < jkczyz> hey!
12:02 < lndbot> Good evening :D
12:03 -!- bitconne1 is now known as bitconner
12:03 < lndbot> Hello!
12:03  * cdecker is pretty happy that these meetings are now 2 hours earlier :-)
12:04 < t-bast> I guess rusty may be less happy about that xD
12:04 < rusty> Yes, we're in the civilized part of the cycle :)
12:05 < t-bast> oh right, on the contrary it's a better time in Australia indeed!
12:05 < t-bast> shall we start? I may have to drop off a bit early so I can't chair this time :)
12:06 < rusty> t-bast: well, it's the same, but this is the only project I've ever agreed on such a ridiculous time for, so there's that. OK, cdecker to chair?
12:06 < rusty> Or bitconner ?
12:06 < cdecker> I'm happy to, unless bitconner wants to do it
12:07 < bitconner> cdecker: probably better if you chair, i might need to drop out in the middle
12:07 < cdecker> Sounds good
12:07 < cdecker> #startmeeting
12:07 < cdecker> Damn, forgot the command syntax
12:07 < cdecker> Uhm, nope that was correct, is there still a meeting running?
12:07 < rusty> https://wiki.debian.org/MeetBot if anyone is curious
12:08 < t-bast> hum, strange, it should work
12:08 < t-bast> or did it break because last time I dc-ed in the middle of the meeting?
12:08 < cdecker> Ok, I'll try to check during the meeting
12:08 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/issues/805
12:09 < bitconner> should we try an endmeeting?
12:09 < t-bast> #endmeeting
12:09 < cdecker> bitconner: needs to be done by the previous chair
12:09 < bitconner> ah
12:09 < t-bast> no, looks like the last meeting was ended properly (looking at the logs)
12:09 <+roasbeef> did the old meeting actually ever even end? can't recall
12:09 < cdecker> Ok, let's try again then
12:09 < cdecker> #startmeeting
12:09 < cdecker> Nope, bot broken, ah well
12:09 < rusty> Yeah, I remember reading logs. What is the name of the bot, can anyone remember?
12:10 < rusty> I will be the bot and turn logging on, and ping aj to fix after.
12:10 < t-bast> thanks!
12:10 < cdecker> It was called lightningbot
12:11 < rusty> Beep. MEETING STARTED.
12:11 <+roasbeef> the bot died?
12:11 <+roasbeef> lol
12:11 < cdecker> Anyway, back to the topic
12:11 < bitconner> lol
12:11 < cdecker> #topic Clarify bolt 4 #801
12:11 < cdecker> #topic BOLT 4: link to BOLT 1 for tlv_payload format #801
12:11 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/801
12:12 < t-bast> I think this PR needs feedback from one of the pytools experts, clearly out of my league xD
12:12 < cdecker> It's changing a single name in the wire format and adds a clarification
12:12 <+roasbeef> yeh typo more or less, sgtm
12:12 < bitconner> lgtm
12:12 < cdecker> The clarification is ok, but the rename is likely to break some things on our end
12:12 < cdecker> Then again we pull in the spec in bulk and fix up in the same commit
12:12 <+roasbeef> the cost of auto generation ;)
12:13 < cdecker> So I think we're good too
12:13 < rusty> OK, so he's a bit confused. Each tlv stream has its own name; they're in a global namespace. Sure, type x tends to have x_tlvs, but increasingly we're dropping the _tlvs suffix (e.g. in offers I just removed it)
12:14 < rusty> But I should comment on the issue.
12:14 <+roasbeef> I think he's confused re tlv vs tlv_stream
12:14 <+roasbeef> so a tlv tuple vs a series of them
12:14 <+roasbeef> fwiw I don't think this really materially affects things, just more about consistent nomenclature
12:14 < cdecker> Exactly
12:15 < cdecker> Ok, should be easy to ack or nack
12:15 < cdecker> I don't really care enough either way :-)
12:15 < t-bast> same for me
12:16 < cdecker> Shall we just ack it then?
12:16 <+roasbeef> lol same
12:16 <+roasbeef> just did on the pr
12:16 < bitconner> merge!
12:16 < rusty> I'm happy to replace tlvs -> tlv_stream ftw. But you can't reverse them.
12:16 < t-bast> totally fine with merging as-is and fixing nomenclature later in the overall spec
12:16 < cdecker> Hm? How do you mean rusty?
12:17 < cdecker> Ah just saw your comment on the PR
12:17 < rusty> cdecker: his final comment is to change [`accept_channel_tlvs`:`tlvs`] to [`tlv_stream`: `accept_channel_tlvs`]
12:17 < rusty> But tlv_stream is *not* a specific type.
12:17 < rusty> (remember, the format is TYPENAME: FIELDNAME)
12:17 < cdecker> Yep, I remember the namespacing I opposed in the first place :-)
12:18 <+roasbeef> /shrug
12:18 < rusty> It has the advantage of being precise? Anyway, we can imply the existence of foo_tlv_stream at the end of foo now, IIRC.
12:19 < rusty> How about I update the tooling to accept `tlv_stream` and do a global sweep?
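For readers following along: the tlv vs tlv_stream distinction roasbeef describes above is that a tlv_stream (per BOLT 1) is a series of records, each record being a (type, length, value) tuple with BigSize-encoded type and length. A minimal illustrative decoder, not taken from any implementation:

```python
def read_bigsize(buf: bytes, i: int):
    """Decode a BOLT 1 BigSize integer at offset i; return (value, new offset)."""
    b = buf[i]
    if b < 0xfd:
        return b, i + 1                                   # single-byte value
    if b == 0xfd:
        return int.from_bytes(buf[i + 1:i + 3], "big"), i + 3   # 2-byte BE
    if b == 0xfe:
        return int.from_bytes(buf[i + 1:i + 5], "big"), i + 5   # 4-byte BE
    return int.from_bytes(buf[i + 1:i + 9], "big"), i + 9       # 8-byte BE

def decode_tlv_stream(buf: bytes):
    """Split a tlv_stream into a list of (type, value) records.

    Real implementations also enforce strictly increasing types and
    reject unknown even types; omitted here for brevity.
    """
    records, i = [], 0
    while i < len(buf):
        t, i = read_bigsize(buf, i)
        length, i = read_bigsize(buf, i)
        records.append((t, buf[i:i + length]))
        i += length
    return records
```

For example, `decode_tlv_stream(bytes([0x01, 0x02, 0xab, 0xcd]))` yields a single record of type 1 with a two-byte value.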
12:19 < cdecker> Sounds good
12:19 < t-bast> SGTM
12:20 < cdecker> #action rusty to try out the change and report success/failure
12:20 < cdecker> Moving on then
12:20 < cdecker> #topic Require to claim revoked local output in its own penalty tx post-anchor #803
12:20 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/803
12:21 < cdecker> This is yet another mempool pinning attack found by our resident expert ariard :-)
12:21 < t-bast> concept ACK, just nits from a merge issue I think
12:22 < rusty> Yeah, isn't this a subset of the known divide-and-conquer problem if you create a single penalty tx?
12:23 <+roasbeef> it's a bit different rusty, in this case they use the new flexibility in the 2nd level to possibly delay a sweep/resolution
12:24 < lndbot> Concept ACK also
12:24 <+roasbeef> but even if that 2nd level confirms, you still have the csv at that level and the revocation clause
12:24 <+roasbeef> so imo it's the other party just delaying the inevitable in a sense
12:24 <+roasbeef> so they block your mega spend for long enough, then maybe are able to sweep their output in isolation (tho that would require them to get past their own pinning?)
12:24 < cdecker> by the other party you mean the cheating party?
12:25 <+roasbeef> so delay long enough for their csv to expire, which _could_ be a long time, so it's better for you to just sweep that output first then do the rest
12:25 <+roasbeef> yeh cheating party
12:25 < cdecker> If I understood correctly this is about delaying and then replacing the penalty tx itself by weighing down the penalty by stuffing the HTLC-success/failure
12:25 <+roasbeef> as w/ most pinning stuff, imo there's a lot of caveats, but might as well be more defensive impl wise
12:26 < cdecker> So this is likely to have a large impact on punishments
12:26 < rusty> roasbeef: right. There's an assumption that the "main output" (bad nomenclature, that's not used elsewhere in the spec) is the most important, may not be true...
12:27 <+roasbeef> yeh so it's: you try to sweep the entire thing at once, but somehow (ordering, fees, mempool, etc, etc) they have this other large transaction tree, which "blocks" your actual penalty transaction
12:27 <+roasbeef> you can also react to it in real time kinda too, like if your penalty is taking a while, split it up and just sweep the main output then the rest
12:28 < rusty> So, you have to be ready to split, and you have to do it based on delay, not based on onchain activity (before we worried about HTLC txs getting through).
12:28 <+roasbeef> mhmm, or just always start out w/ it being split
12:28 <+roasbeef> as before, you still need to handle partial 2nd level htlc confirmation and adjust accordingly
12:28 < cdecker> Yeah, that's what this PR is proposing
12:28 < rusty> roasbeef: yeah, so make lots of tiny txs, because nobody cares...
12:29 < cdecker> Seems sensible enough to use either a splitting logic or not aggregate at all in the first place
12:29 <+roasbeef> yeh I mentioned that in the PR as well rusty, then there's also the option to just move over most of their funds to miner's fees if you're still made whole after that
12:29 < t-bast> that's the simplest solution
12:29 < rusty> roasbeef: s/so make/we make/ sorry...
12:29 < t-bast> but it's worth mentioning these subtleties for impls that start optimizing away from the simple choice of one tx per output sweep
12:29 < cdecker> Agreed
12:30 < cdecker> So, verdict is "LGTM, needs addressing the nits"? Sound good?
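The "one tx per output sweep" strategy t-bast calls the simplest solution above can be sketched as follows. This is a hypothetical illustration (the output descriptors and function name are invented, not from any implementation):

```python
def plan_penalty_claims(revoked_outputs):
    """Sketch: claim each revoked output in its own penalty transaction.

    Claiming the revoked to_local ("main") output in a dedicated tx first
    means an attacker pinning the HTLC outputs with 2nd-level spends cannot
    block the main sweep; every other output is swept independently so no
    single pin delays the rest.
    """
    main = [o for o in revoked_outputs if o["kind"] == "to_local"]
    rest = [o for o in revoked_outputs if o["kind"] != "to_local"]
    # One claim (list of outputs) per transaction, main output first.
    return [[o] for o in main + rest]
```

The trade-off discussed above is fee cost: many tiny transactions are more expensive than one aggregated sweep, so implementations may instead start aggregated and split reactively when the penalty tx fails to confirm.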
12:30 < t-bast> SGTM
12:30 < bitconner> sgtm
12:31 <+roasbeef> yeh base is good imo, needs some word smithing, room for expansion in this area later
12:31 < cdecker> #agreed looks good, needs addressing the nits
12:31 < cdecker> Ok, now on to the CVEs :-)
12:31 < rusty> ack
12:31 < bitconner> weee
12:31 < cdecker> Shall we go through them individually?
12:32 < bitconner> sure
12:32 < cdecker> #topic Fail channel in case of high-S remote signature reception #807
12:32 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/807
12:32 < cdecker> So high-S non-normalized signatures
12:32 <+roasbeef> set of changes in this PR lgtm, going further tho impls should start to attempt to possibly check mempool acceptance before accepting transactions
12:32 < t-bast> I'm wondering whether we should repeat this sentence everywhere (what's done in the PR) or have in Bolt 1 a section about signature validity that mentions this
12:33 < lndbot> Instead of that, mempool acceptance like roasbeef said
12:33 <+roasbeef> tho that can be kinda difficult, since policy rules can be added whenever and ppl can be on older versions of bitcoind that don't have those policy rules, or on whatever other node
12:33 <+roasbeef> it's harder for light clients too tho, assuming they haven't re-impl'd the "entire mempool"
12:33 <+roasbeef> also fwiw, I think the current bitcoind calls for mempool acceptance aren't package aware, so you can just test one transaction in isolation
12:34 < cdecker> I was tempted to just reference BIP62 but that only mentions low-s as a native source of malleability but doesn't make a prescription
12:34 < bitconner> t-bast: agreed, i think specifying it in one place is sufficient. also means we can start enforcing low-s on gossip msgs
12:34 <+roasbeef> that's a broader point tho, in this case it was the high-s
12:34 <+roasbeef> also fwiw, the way our fix was impl'd, if someone did happen to have a high-s sig, it was converted automatically in the background before broadcast
12:35 < cdecker> c-lightning should never accept high-s signatures for anything because we use libsecp256k1 internally
12:35 < t-bast> same for eclair, we use libsecp256k1
12:35 <+roasbeef> mhmm in our case it was a re-impl for an optimization
12:35 <+roasbeef> between our LN sig format and the bitcoind version
12:36 < t-bast> ok, was curious why indeed
12:36 <+roasbeef> so a version that operated on bytes, instead of using btcec which used big ints
12:36 < cdecker> Interesting
12:36 < cdecker> cgo + libsecp256k1 would not have been an option?
12:36 <+roasbeef> also in our case, we did full VM validation for co-op close, but only did normal ecdsa sig check for sigs/htlcs
12:36  * rusty fears anything reimplementing secp256k1...
12:36 < bitconner> cgo breaks cross-compilation
12:36 <+roasbeef> what I'm trying to say is that lack of libsecp wasn't the issue
12:36 < cdecker> Ouch, ok makes sense
12:37 <+roasbeef> it was manual conversion between 64-byte sigs and the bitcoin format
12:37 < t-bast> interesting, thanks for detailing that!
12:37 < bitconner> it could be an option, at the expense of portability
12:37 <+roasbeef> as the sigs themselves _are_ valid, and would have been mined
12:37 <+roasbeef> assuming relay
12:38 <+roasbeef> also this type of malleability kinda matters less post segwit, but can still be nice to have for certain mempool tracking stuff
12:38 < cdecker> Ok, everyone ok with the way the PR is formulated? Or shall we ask for a general validity section that is referred to from the appropriate sections?
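For context on the high-S/low-S discussion above: for any valid ECDSA signature (r, s), (r, n - s) is also valid for the same key and message, which is the malleability BIP 62 describes; "low-S" pins the encoding to the smaller of the two. A sketch of the normalization (n is the well-known secp256k1 group order):

```python
# Order of the secp256k1 group.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def normalize_s(s: int) -> int:
    """Map an ECDSA s value into the low half of the range.

    (r, s) and (r, N - s) are both valid signatures for the same message,
    hence the malleability; Bitcoin Core's standardness policy only relays
    the low-S form.
    """
    return N - s if s > N // 2 else s

def is_low_s(s: int) -> bool:
    """True if s is already in the canonical low-S form."""
    return 1 <= s <= N // 2
```

This is the conversion roasbeef mentions lnd performing automatically before broadcast; libsecp256k1-based implementations get it by default.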
12:38 < t-bast> Both options sound good to me, I'm fine with either one
12:38 < bitconner> cdecker: i think it's more forward looking to specify it once, then say all sigs in ln must be low-s
12:39 < cdecker> Yep, that'd also be my preference
12:39 < cdecker> Hehe, and rusty seems to like the explicit nature of the repeated statement
12:40 < rusty> Yeah, people tend to read the part of the spec they're implementing.
12:40 <+roasbeef> link to the bip instead of the bitcoind PR tho?
12:40 < cdecker> Let's merge it this way, and then pull it back into a dedicated section at a later point once we have sufficiently many rules for them to be cluttered
12:40 < bitconner> sgtm
12:40 < cdecker> roasbeef: BIP62 talks about low-s, but doesn't prescribe
12:41 < cdecker> So a link to it is unlikely to clarify
12:41 <+roasbeef> gotcha yeh, and that also lists a buncha other malleability vectors as well
12:41 < cdecker> #agreed cdecker to merge the PR and defer grouping the prescriptions to a later point in time
12:42 < cdecker> #topic Prevent preimage reveal collision while claiming onchain incoming HTLC #808
12:42 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/808
12:42 < cdecker> roasbeef could you walk us through the attack vector? I have something like 3 different interpretations that don't overlap 100%
12:44 <+roasbeef> ok
12:44 < bitconner> cdecker: if you tried to route a payment through a node that had an invoice matching the forwarded htlc, lnd would claim the incoming channel if it went to chain
12:44 <+roasbeef> so for lnd we have two pre-image dbs: the ones we discovered and our own
12:44 <+roasbeef> you could make us go to chain, and check our normal invoice db for the pre-image and settle it on chain
12:45 < bitconner> this is bad because the outgoing htlc is still live, so you're leap frogging the settlement
12:45 <+roasbeef> yep, so someone gets a pre-image early, and depending on the situation may be able to gain from that
12:45 <+roasbeef> the invoice that goes on chain actually needs to be the same amount as well, so you'd need to put up funds 1:1
12:46 <+roasbeef> it's solved by enforcing always outgoing before incoming, or making sure we check only the "general pre-image db" (discovered from routing forwarding) if we went on chain and were not the final hop
12:46 < bitconner> it just needs to be greater than the invoice amount
12:46 <+roasbeef> yeh
12:46 <+roasbeef> the fix we impl'd was the ordering check there, but also we plan on going back to also add the dependency check as well as it's a more general solution
12:46 < t-bast> it's really hard to properly word this requirement without just explaining the attack
12:46 < cdecker> So the worst case scenario would be the attacked node out of pocket for a premature settlement of a forwarded payment?
12:47 <+roasbeef> lol yeh, it was also pretty subtle and took a few back n forths to actually narrow it down and attempt to categorize it
12:47 < bitconner> i think this could be simplified to: "MUST NOT reveal the preimage if there are active downstream HTLCs"
12:47 < cdecker> But shouldn't always claiming all HTLCs that match known preimages make you whole again?
12:47 <+roasbeef> cdecker: so if this doesn't trigger the invoice being shown as settled, the attacker has bought a pre-image for the price of an on-chain transaction and routing fees, which may not actually be useful for anything depending on the application
12:47 < bitconner> cdecker: you pull on chain, mark the invoice settled, but since the downstream htlc is still active, the attacker settles downstream and takes the funds back
12:48 <+roasbeef> yeh if it is settled then that happens ^
12:48 < t-bast> bitconner: it's not enough, it could be settled downstream but you still don't want to leak the preimage
12:48 <+roasbeef> it also requires the attacker to intercept an actual payment as well
12:48 <+roasbeef> making the payment_addr _mandatory_ eliminates the interception attack vector as they can't guess what it actually is to make us reveal the pre-image
12:49 < bitconner> imo it's easiest to imagine the attacker making a circular route, but the first hop is settled before the remaining n-1
12:49 <+roasbeef> which afaik, some nodes in the wild already require themselves with a small patch
12:49 < bitconner> once the first hop is settled, the node thinks the invoice is paid onchain, but the attacker then walks out the back door with all the moolah
12:50 < rusty> bitconner: when upstream takes the funds, won't that trigger you to resolve the downstream payment?
12:50 <+roasbeef> yeh ideally they also need a direct chan to the victim as well, and some merchants these days hide their node behind another using a private channel to link the two, meaning it may have not been possible
12:50 <+roasbeef> rusty: so reverse the ordering: you settle the incoming before the outgoing
12:50 <+roasbeef> (assuming we're using the same terms here lol)
12:51 < cdecker> Ah, wait so the start scenario is the victim node V between attackers A and B. V issues an invoice, then A routes through A -> V -> B, but V considers the route-through as the payment?
12:51 < bitconner> cdecker: yes
12:51 <+roasbeef> cdecker: yes, route-through here meaning pulling on-chain
12:51 <+roasbeef> if V's invoice had the payment_addr then A+B couldn't guess that to provide a valid looking onion
12:52 <+roasbeef> also A+B may discover this by intercepting a normal payment to V, then somehow guessing it's for V to launch this other route to extract the pre-image
12:52 < bitconner> but, they can omit the payment secret entirely since no one requires it
12:52 < cdecker> Ah, I see. So there is just one payment that gets re-interpreted by the invoice DB when going on-chain, it's not really two HTLCs that collide at V (which is what I thought was the case)
12:52 <+roasbeef> ah yeh the "collision" nomenclature by antoine can be kinda confusing there
12:52 <+roasbeef> mhmm so it would need to be required as bitconner mentions, which is something we planned on doing anyway, and imo we've had enough time since we did the whole mpp/tlv revolution
12:52 < cdecker> Now it makes sense, thanks for the explanation!
12:53 < bitconner> yeah, the collision here refers to the malicious htlc's payment hash and the invoice payment hash
12:53 < lndbot> cccccclvtubhtjhvrkklrltkvjrbrdfdjbinlhtfvicc
12:53 < bitconner> noice
12:53 < rusty> roasbeef: yeah, we're doing logging RN on the blockstream store to see how many payments include the secret field.
12:54 < bitconner> we're thinking of making payment secrets required by default in 0.12
12:54 <+roasbeef> rusty: cool, I would assume most "modern" wallets do at this point
12:55 < bitconner> would people welcome a pr that changes that to the recommended BOLT11 feature set?
12:55 < rusty> roasbeef: yeah, you gotta think so....
12:55 < bitconner> or, adds it, rather
12:55 <+roasbeef> end user implication here is that if they have a really old wallet (like 1 yr old at this point)? their payments may fail to certain destinations
12:55 < lndbot> cccccclvtubhtgujbggjcengcruiruunbektufnrvckt
12:55 <+roasbeef> but the invoice itself would also convey the requirement, so ideally they can get a nicer method
12:55 < cdecker> Well, there is something to be said for hitting users with the occasional incompatibility to keep them updating xD
12:55 <+roasbeef> the yubikey seems to agree here
12:55 -!- t-bast-official [~t-bast@78.194.192.38] has joined #lightning-dev
12:55 <+roasbeef> heh true
12:56 < lndbot> subtle ack
12:56 < rusty> bitconner: if the logs indicate ~everyone is upgraded, I'd like to do a c-lightning release with that to be sure to flush out more users. Then we can upgrade the spec?
12:56 <+roasbeef> next lnd release is scheduled to drop mid-ish nov
12:57 < cdecker> Ok, going back to the PR in question, I think the wording could be improved slightly ("thou shalt not conflate payments destined for you and those that are just passing through")
12:57 <+roasbeef> the 0.12 that bitconner referenced
12:57 < t-bast-official> cdecker: ACK, that's exactly what should be meant
12:57 <+roasbeef> cdecker: but doesn't the dependency ordering requirement supersede that?
12:57 <+roasbeef> so "never pull the incoming htlc before the outgoing if the outgoing hasn't expired yet"
12:57 < bitconner> rusty: sure, either one, i'm also okay with the spec leading the impl here
12:57 < rusty> "Your software shall have no bugs, especially not ones which can lose money". Done.
12:58 < bitconner> 😂
12:58 < t-bast-official> roasbeef: I think that still leaves a hole, where you could reveal your preimage for less than the invoice amount
12:58 < cdecker> No, I think it really should be that payments destined for you should be treated separately from those that are forwarded, no more, no less
12:58 -!- t-bast [~t-bast@2a01:e34:ec2c:260:70c2:2d88:7229:ff47] has quit [Ping timeout: 272 seconds]
12:58 < bitconner> cdecker, yes that indeed was the fix on our end. we were falling back to the invoice database if we couldn't find the preimage in our "learned-forwarded-preimage db"
12:58 < cdecker> Well, just re-reading your point roasbeef, yours implies mine, so either way is fine
12:59 < cdecker> Ok, so shall we bikeshed on the PR itself? I think it needs to be worded a bit more explicitly
12:59 < bitconner> separating the two indeed prevents this. main issue was that lnd thought it was the exit, so it wasn't considering that a downstream htlc existed. if they're properly isolated it goes away
13:00 < t-bast-official> I think it's not enough: A -> V -> B, A drops onchain, B fails the payment. The V -> B payment is settled, but V should still not reveal the preimage on-chain on the A -> B link
13:00 < t-bast-official> because the payment may be for less than the invoice amount
13:00 <+roasbeef> isn't that already specified re amount checking in the payload?
13:00 < cdecker> Yes, so separating the two is exactly what it should do: in your scenario it was forwarded so don't look in your invoice DB
13:00 <+roasbeef> I guess there's also kinda a layering thing here as well
13:01 < bitconner> cdecker: yep, indeed
13:01 < cdecker> Anyway, the easiest fix to all of this is "if you have the preimage, claim the HTLC independently of it being destined for you or not", that ought to teach them some respect
13:01 < t-bast-official> Agreed, clean separation of your own invoices and forwarded payments
13:02 < cdecker> (and don't forward it, be greedy)
13:03 < bitconner> sounds like this one needs some more work on the pr itself?
13:03 < cdecker> #agreed everyone to help make the formulation a bit more explicit
13:03 < bitconner> sgtm
13:03 < t-bast-official> sgtm
13:04 < bitconner> what happened to t-bast-unofficial? lol
13:04 < cdecker> #topic Make invoice's `s` flag mandatory ? #809
13:04 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/issues/809
13:04 <+roasbeef> the Real T-Bast took over
13:04 < t-bast-official> bitconner: I beat that loser
13:04 < bitconner> is -official the irc version of the blue checkmark?
13:04 < t-bast-official> xD
13:04 -!- cdecker is now known as realCdecker
13:05 -!- bitconner is now known as realBitconner
13:05 < realCdecker> Ok, last issue for today?
13:05 < realCdecker> Nah, we can probably make it through the list
13:05 < realCdecker> Just 2 more
13:05 < realCdecker> So making `s` mandatory?
13:05 < realBitconner> ah, perfect
13:05 < t-bast-official> And this one we already kinda discussed
13:06 < realCdecker> Yep, so the verdict is that we are gauging support in the network, and reconvene as soon as we have conclusive results?
13:06 < t-bast-official> ACK
13:06 < ariard> damn I lose the UTC game again :(
13:06 < rusty> t-bast-official: yeah, want to report how many nodes indicate support and paste on the issue? realCdecker can you grep store logs to find out how many have `s` in payments?
13:06 < realBitconner> haha XD
13:06 < realCdecker> #agreed More measurements needed to gauge the impact of making the flag mandatory
13:07 < lndbot> Am I the only non-real here? I'm just a bot
13:07 < t-bast-official> ariard: haha, too late so we rejected all your PRs
13:07 < realCdecker> rusty: I have no access to the store logs (thankfully)
13:07 < t-bast-official> rusty: yep, sign me on to measure this
13:07 < realCdecker> But I'll ask ops to pull the stats
13:07 < rusty> realCdecker: thx!
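For reference on the `s` flag being measured above: BOLT 11 invoices advertise payment_secret support through the BOLT 9 feature bits 14 (compulsory) and 15 (optional), following the usual "it's OK to be odd" rule. A small sketch of checking a feature vector (treated here as an integer bit field for simplicity):

```python
# BOLT 9 feature bit positions for payment_secret (the BOLT 11 `s` flag):
# even bit = compulsory, odd bit = optional.
PAYMENT_SECRET_REQUIRED = 14
PAYMENT_SECRET_OPTIONAL = 15

def supports_payment_secret(features: int) -> bool:
    """True if the feature vector advertises payment_secret at all."""
    return bool(features & (1 << PAYMENT_SECRET_REQUIRED)
                or features & (1 << PAYMENT_SECRET_OPTIONAL))

def requires_payment_secret(features: int) -> bool:
    """True if payment_secret is advertised as compulsory."""
    return bool(features & (1 << PAYMENT_SECRET_REQUIRED))
```

Counting how many observed payments set one of these bits is essentially the measurement rusty and t-bast sign up for in the discussion above.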
13:07 < ariard> nevermind, I was mostly interested by channel spamming
13:08 < realBitconner> tbh no matter what the numbers say i think we should do it
13:08 < realCdecker> #topic Add a SECURITY.md #772
13:08 < realCdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/772
13:08 < ariard> ah here's the context for this one
13:08 < ariard> I'm mostly interested in getting some SECURITY.md out
13:09 < realBitconner> it's a safety/ux tradeoff, but realistically no one should be running software that old
13:09 < realCdecker> realBitconner, not sure we should. Hitting a couple of laggards on the head with incompatibility is one thing, if it's 50%+ it's a major issue
13:09 < ariard> because while finding one of the CVEs I did send a report to all of them even if it was an lnd issue (the high-S one)
13:10 < ariard> as we don't have much of a gentleman's coordination policy to avoid someone patching stuff quickly and thus revealing the weakness in other implementations
13:10 < realBitconner> on the bright side, most economically-relevant lnd nodes have likely been upgraded to 0.11
13:11 < t-bast-official> we should have more CVEs, they force people to update!
13:11 < realCdecker> ariard, I like the proposal, and some more process around these things. However, I think we need to be a bit more specific
13:11 < rusty> I like the idea of SECURITY.md, we should include some contact GPG keys directly in the repo too. I'll go.
13:11 < t-bast-official> I agree with ariard we should get a basic security.md out
13:12 < t-bast-official> And we can improve it as we go
13:12 < realCdecker> For example point 3: informing upstream maintainers at the same time as the LN developers is dangerous
13:12 < ariard> what I'm really worried about is the kind of situation like the inflation CVE on core, which was found by an external party and communicated to multiple core codebase maintainers
13:12 < realCdecker> They may not coordinate with us, and inadvertently leak info about the vulnerability
13:12 < ariard> in that kind of timeline you want to act fast and also coordinate with others
13:13 < realCdecker> The same goes for point 5: the reporter should be discouraged from attempting to fix it and open a PR directly, despite it maybe being in their power
13:13 < realCdecker> Opening a PR may put other implementations and the installed base at risk
13:14 < realCdecker> So we definitely need to pin this down on two axes: who to inform, and what to do unilaterally
13:14 < rusty> Keep it simple, and respectful. Get three contacts (one each from LL, ACINQ and Blockstream), and we will handle contacting other implementations; don't make the reporter do all the work.
13:15 < realCdecker> Yep, we used to have the bitcoin-security list for these kinds of things, maybe that's a model as well
13:15 < t-bast-official> I agree with rusty and cdecker, make it as simple as possible for the reporter
13:15 < realBitconner> +1
13:15 < t-bast-official> But do list things that the reporter must avoid doing at all costs
13:16 < rusty> Remember the reporter is doing us a huge service, let's be as helpful as we can.
13:16 < ariard> rusty: +1, will update the PR accordingly
13:16 < realBitconner> #1 don't post on twitter
13:16 < ariard> it would be great if each implementation could comment its security contact on the PR
13:16 < realCdecker> Absolutely, having them run around gathering addresses is not good, let's have a single address and GPG key
13:16 < rusty> realBitconner: lol
13:16 < realCdecker> The address forwards to a lead on each team and we triage and coordinate from there
13:17 < realBitconner> ariard: will do
13:17 < rusty> realCdecker: people have vacations, so you want at least two. And sharing a GPG key is hard...
13:17 < ariard> I think sending a report to 3 different GPG keys isn't that high a barrier
13:18 < realBitconner> i think having one gpg key per team. submit to the impl you know it works against.
13:18 < ariard> and IIRC bitcoin-security has more than 2 people behind it
13:18 < realCdecker> Ok, but let's list the addresses and GPG fingerprints in the doc
13:18 < realCdecker> ariard bitcoin-security was unencrypted for the most part
13:18 < realBitconner> realCdecker: yes definitely
13:19 < realCdecker> Ok, then the verdict is "simplify and precisify", sounds good?
13:19 -!- realCdecker is now known as cdecker
13:19 < t-bast-official> ACK
13:20 < ariard> sounds good
13:20 < realBitconner> precisify 🤓
13:20 -!- t-bast-official is now known as t-bast
13:20 -!- realBitconner is now known as bitconner
13:20 < ariard> realCdecker: ah... but it's more for a threshold on receiver availability due to vacations/travels/...
13:21 < cdecker> #agreed Documenting the reporting process was accepted. The document needs to add contact details (e-mail and gpg keys) for the implementations and add details about what _not_ to do (report CVEs via the Twitter tag #StealMyMoney)
13:22 < cdecker> I think sharing an e-mail address and a GPG key in a team should be good, shouldn't it?
13:22 < bitconner> also should say not to use #HaveFunStayingPoor or Udi will RT
13:23 < rusty> FWIW: 15EE 8D6C AB0E 7F0C F999 BFCB D920 0E6C D1AD B8F1
13:23 < t-bast> bitconner: love it
13:23 < cdecker> ACK, that fingerprint matches
13:24 < cdecker> Great, I think that concludes the official part of the meeting, now for wine and snacks
13:24 < bitconner> 91FE 464C D751 01DA 6B6B AB60 555C 6465 E5BC B3AF
13:24 < ariard> btw, going through the logs, I'm working on getting a libstandardness on the core side to reduce future tx-standardness malleability vectors
13:24 < niftynei> snacks?
13:24 < bitconner> lnd pgp key^^
13:24 < niftynei> o.O
13:25 < rusty> #endmeeting!
13:25 < cdecker> #endmeeting
13:25 * bitconner waves at niftynei!
13:25 <+roasbeef> ariard: you see the point about node versions above as well?
13:25 < t-bast> If you're interested in spam/DoS stuff and don't want to read a ton of mail threads, I've started a summary
13:25 * niftynei waves at bitconner
13:25 < t-bast> https://github.com/t-bast/lightning-docs/blob/master/spam-prevention.md
13:25 < rusty> Beep. Meeting has ended. Transcript will be uploaded to YOUR URL HERE
13:25 < ariard> roasbeef: still reading, but I want something which is stable across versions/implementations
13:25 < bitconner> t-bast: awesome thank you!
13:25 < cdecker> Awesome, thanks t-bast-most-official-eva
13:26 -!- t-bast is now known as real-slim-t-bast
13:26 < bitconner> lolll
13:26 < rusty> 2021 is going to suck without Lightning conferences :(
13:26 < real-slim-t-bast> and the latest mail thread has a proposal I'd like you to break :D
13:26 < ariard> t-bast: still around? I spent more time brooding on bidirectional
13:26 < real-slim-t-bast> what I called bidirectional upfront payments
13:26 < cdecker> Yeah, missing the conferences a lot. Never thought I'd miss interacting with other human beings
13:27 < real-slim-t-bast> ariard: I'll really need to go in 5 minutes...
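[Editor's note: the fingerprint check cdecker performs above ("that fingerprint matches") amounts to comparing a pasted fingerprint against the one a key actually shows (e.g. from `gpg --fingerprint`). A small sketch: v4 fingerprints are 40 hex digits, and the spacing/case in which they are displayed is cosmetic, so normalize both sides before comparing. The fingerprints below are the ones pasted in this log; the helper names are illustrative.]

```python
# Normalize and compare GPG v4 fingerprints, ignoring display
# grouping and letter case.

def normalize_fpr(fpr: str) -> str:
    """Strip whitespace and uppercase a fingerprint; reject anything
    that is not 40 hex digits."""
    cleaned = "".join(fpr.split()).upper()
    if len(cleaned) != 40 or any(c not in "0123456789ABCDEF" for c in cleaned):
        raise ValueError(f"not a v4 fingerprint: {fpr!r}")
    return cleaned

def fingerprints_match(a: str, b: str) -> bool:
    return normalize_fpr(a) == normalize_fpr(b)

# rusty's fingerprint as pasted in chat, versus an unspaced
# lowercase copy as a key listing might print it:
pasted = "15EE 8D6C AB0E 7F0C F999 BFCB D920 0E6C D1AD B8F1"
from_key = "15ee8d6cab0e7f0cf999bfcbd9200e6cd1adb8f1"
print(fingerprints_match(pasted, from_key))  # True
```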
13:27 < ariard> I think that's _secure_ if you define the routing requirement more precisely
13:27 < rusty> Yeah, I am out too...
13:27 < real-slim-t-bast> I can't wait for conferences to start again...
13:27 <+roasbeef> ariard: I mean more so that new policy can be added w/ each release of w/e nodes, meaning ppl will also need to ensure this lib or w/e RPC call is also updated as well, and relay is weird too
13:27 < cdecker> Good seeing everyone ^^ Good meeting! :-)
13:27 < ariard> real-slim-t-bast: put shortly, you want the hodl_fee in to not be greater than the hodl_fee out
13:28 < bitconner> thanks for chairing cdecker!
13:28 < cdecker> It was fun ^^
13:28 < ariard> but if routing nodes account for the time value of HTLCs, that's likely not to happen
13:28 < bitconner> and to rusty for botting XD
13:28 < real-slim-t-bast> thanks all!
13:28 < cdecker> Rusty will you ping aj about the bot? I think he was operating it
13:28 < real-slim-t-bast> ariard: I saw your last message, I'll think about this and we can chat about it next week ;)
13:29 < ariard> real-slim-t-bast: yeah sure, let's discuss on lightning-docs
13:30 < ariard> roasbeef: yes, you don't want a tightening of tx-relay rules silently breaking second layers, IMO it should be updated with the same care as an API change
13:31 < ariard> like warn with at least a 6-month period before any update to let the higher ecosystem adapt
13:35 < lndbot> What is the current proposal for channel jamming? Upfront payments?
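[Editor's note: a very rough sketch of the per-hop hold-fee accounting ariard and t-bast are discussing for the "bidirectional upfront payments" idea (see t-bast's spam-prevention.md write-up linked above). The fee formula, function names, and parameters below are all assumptions for illustration; the comparison encodes ariard's condition exactly as stated in the log, not a fixed rule from the proposal.]

```python
# Illustrative hold-fee pricing and relay check for an
# upfront-payment scheme. All names and the formula are hypothetical.

def hold_fee_msat(amount_msat: int, rate_ppm_per_block: int, cltv_delta: int) -> int:
    """Price the time value of locking `amount_msat` for up to
    `cltv_delta` blocks, at a per-block rate in parts per million."""
    return amount_msat * rate_ppm_per_block * cltv_delta // 1_000_000

def should_relay(hold_fee_in_msat: int, hold_fee_out_msat: int) -> bool:
    """ariard's condition as stated above: the inbound hold fee
    should not exceed the outbound one."""
    return hold_fee_in_msat <= hold_fee_out_msat

# A 1,000,000 msat HTLC at 10 ppm per block with a 40-block delta:
print(hold_fee_msat(1_000_000, 10, 40))  # 400
```

As ariard notes, if routing nodes price the time value of held HTLCs, the outbound hold fee shrinks hop by hop and this condition tends not to hold, which is what he wants t-bast to address in the routing requirements.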
13:38 < bitconner> eugene: check out t-bast's summary linked above
13:39 < bitconner> https://github.com/t-bast/lightning-docs/blob/master/spam-prevention.md
15:38 < lndbot> Thank you, I'll read it through and give some comments
--- Log closed Tue Oct 27 00:00:58 2020