--- Log opened Mon May 11 00:00:17 2020
04:47 < fiatjaf> what happened with the proposal to have unannounced channels have random shortchannelids, not related to the funding transaction?
04:48 < rusty> fiatjaf: subsumed by the proposal to use blinded paths, which does even better.
04:50 < fiatjaf> right, I agree that one is very good, but it should still be possible to have channels with random ids, just in case some people want to try to come up with a different way of announcing the channels that isn't through gossip
04:50 < fiatjaf> does that make sense?
04:50 < rusty> fiatjaf: I had a proposal to do this, but it was complex. Basically, you need your peer to set the tempid so they don't clash, and tell it to you so you remember it. That means you can only have one.
04:51 < fiatjaf> what if it was the hash of the funding tx?
04:51 < rusty> fiatjaf: guessable?
04:52 < fiatjaf> hmm
04:56 < fiatjaf> what about the onions for routing nodes using the next peer id instead of the channel id?
04:57 < fiatjaf> so one can know there is a channel between A and B without knowing the channel id and still route a payment with that info
04:57 < cdecker> We already use the short_channel_id as a shorthand for the peer that the channel connects to, if we were to use node_ids we'd spend an extra 25 bytes per hop
05:00 < fiatjaf> what about just the initial x bytes of the node id?
06:14 < cdecker> The problem with truncating a pubkey is that it breaks a lot of crypto-tricks, but might be worth checking into
06:21 < fiatjaf> well, the node just has to search for the full key in its database if it has a channel with the destination (instead of searching with the short_channel_id). what kind of crypto tricks?
06:21 < fiatjaf> anyway, this may be silly, but seems simple enough. my only goal is to enable people to have a channel without ever disclosing their funding transaction
06:21 < foxp2> anyone know any resources for finding out performance details / benchmark of LND and C-Lightning?
06:22 < fiatjaf> is the only reason for channel ids being a pointer to the funding transaction that of anti-spam measures?
06:22 < fiatjaf> or is there another reason?
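(For context on the exchange above: BOLT 7's short_channel_id is a packed triple of funding-output coordinates, 3 bytes of block height, 3 bytes of transaction index and 2 bytes of output index, which is why knowing the id is the same as knowing the funding transaction, and why a 33-byte node_id would cost roughly 25 extra bytes per hop compared to the 8-byte id. A rough sketch of the encoding; the helper names are illustrative, not from any implementation.)

```python
def encode_scid(block_height: int, tx_index: int, output_index: int) -> int:
    """Pack a BOLT 7 short_channel_id: 3 bytes block height,
    3 bytes transaction index, 2 bytes output index."""
    assert block_height < 2**24 and tx_index < 2**24 and output_index < 2**16
    return (block_height << 40) | (tx_index << 16) | output_index

def decode_scid(scid: int) -> tuple:
    """Recover the funding-output coordinates from a short_channel_id."""
    return (scid >> 40) & 0xFFFFFF, (scid >> 16) & 0xFFFFFF, scid & 0xFFFF

# "629999x1x0" in the usual human-readable notation: output 0 of the second
# transaction in block 629999, i.e. the id points straight at the funding outpoint.
scid = encode_scid(629999, 1, 0)
assert decode_scid(scid) == (629999, 1, 0)
```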
06:25 < fiatjaf> for payee nodes everything is solved by blinded paths, I think
06:25 < fiatjaf> but I wanted to solve that for public routing nodes too
06:25 < fiatjaf> bypassing the gossip altogether and sharing channels through other means
06:29 < fiatjaf> (ok, feel free to ignore if this is too silly)
08:26 < cdecker> fiatjaf: isn't that sort of the use-case for route-hints?
08:28 < cdecker> Regarding your question about outpoints as an anti-spam measure, yes, that's the big use-case, but we also extract the overall capacity from the outpoint since that is not included in the gossip (I think of it as a nudge to go and check that the channel exists)
08:40 < cdecker> foxp2: I'm not sure the implementations are directly comparable, since the benchmarks usually are pretty scenario-specific. For example with c-lightning, depending on the concurrency I set in the benchmark, I see anywhere between 100 tx/s and 600 tx/s
08:41 < cdecker> You can then start tweaking the node ad nauseam (commitment timeout, maximum payments in flight, ...) and you'll get a wide range of throughput, so be careful what conclusions you take from this type of comparison ;-)
08:46 < foxp2> Awesome
08:46 < foxp2> This makes perfect sense. I was inquiring more about where to find such resources
08:46 < foxp2> I've just found the benchmark files for c-lightning
08:46 < foxp2> will try and find similar tests for LND
08:47 < foxp2> thanks for the input cdecker
08:51 < cdecker> foxp2: if you just want to play some ping pong between two nodes you might want to look at an old project of mine that can spin up nodes from the 3 main implementations and drive them: https://github.com/cdecker/lightning-integration
08:52 < cdecker> foxp2: it's a bit old by now, and might have some breakage in some places, but hopefully it can still start and stop arbitrary topologies, and shim away the API differences
08:52 < foxp2> great, this is very close to what I was searching for
08:52 < foxp2> thanks a lot cdecker
08:53 < cdecker> foxp2: it's based on the same library that the c-lightning tests use
08:53 < cdecker> np
08:53 < foxp2> yeah been messing w c-lightning lately
08:53 < foxp2> works really well. wondering how far it can go :P
09:51 < cdecker> If you bump against a limitation we can probably find a solution :-)
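(To make the kind of "ping pong between two nodes" benchmark cdecker mentions concrete, here is a minimal sketch using the pyln-client RPC wrapper that ships with c-lightning. It assumes two locally running nodes with a funded channel between them and that the two socket paths below exist; as noted above, the resulting rate varies a lot with concurrency and node settings.)

```python
import time
from pyln.client import LightningRpc  # c-lightning's Python RPC wrapper

# Assumed socket paths for two local, already-channelled regtest nodes.
l1 = LightningRpc("/tmp/l1/regtest/lightning-rpc")
l2 = LightningRpc("/tmp/l2/regtest/lightning-rpc")

N = 100
start = time.time()
for i in range(N):
    # l2 issues a 1000 msat invoice, l1 pays it over their direct channel.
    inv = l2.invoice(1000, f"bench-{start}-{i}", "throughput test")
    l1.pay(inv["bolt11"])
elapsed = time.time() - start
print(f"{N} payments in {elapsed:.1f}s -> {N / elapsed:.1f} tx/s")
```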
11:06 < fiatjaf> cdecker: no, route hints are only for the last hops and they're decided by the payee. what I'm envisaging is a system in which public channels would be floating and the payer would be able to pick them and use them when calculating the payment route.
11:09 < cdecker> Hm, there is technically no limit on the number of channels in the route hints, so you could expose arbitrary channels with them, as long as the succession of hints ends up at a destination (something I had forgotten), but you're right, they're probably not the right medium to add just any floating channels to the routing table
12:51 < t-bast> Happy halving everyone! We'll start the spec meeting soon, see agenda here: https://github.com/lightningnetwork/lightning-rfc/issues/774
12:59 < cdecker> Happy halving to you too, that was way quicker than expected ^^
13:00 < rusty> Yeah, I was still asleep! It was no less exciting than I expected.
13:01 < t-bast> xD
13:01 < rusty> Still, I expect lots of tweets for @SayBitcoiners to work with :)
13:02 < t-bast> Haha can't wait to discover what this new twitter handle has in store
13:02 < cdecker> Love the coinbase input for block 629'999 though xD
13:03 < bitconner> hola!
13:03 < niftynei> hello!
13:03 < cdecker> Helas, fellas
13:03 < t-bast> hey! long time no digitally-see bitconner!
13:03 < sstone> hi everyone!
13:03 < bitconner> 🥂
13:03 < jkczyz> hey all
13:04 < rusty> cdecker: LOL
13:04 < bitconner> haha i know, missed the last one xD
13:04 < bitconner> the months creep by in quarantine haha
13:05 < rusty> Hi all! Hope everyone is feeling optimistic and all lightning.
13:06 < t-bast> bitconner: yeah it's crazy, that sure has to do with relativity or something
13:06 < t-bast> Shall we start the meeting? Do we have a chair candidate?
13:08 < BlueMatt> NYTimes 09/Apr/2020 With $2.3T Injection, Fed's Plan Far Exceeds 2008 Rescue Mined
13:08 < cdecker> I might have to leave a bit early, but I can chair until then
13:08 < rusty> cdecker: short meeting == good meeting!
13:08 < cdecker> Ok
13:08 < cdecker> #startmeeting
13:08 < lightningbot> Meeting started Mon May 11 20:08:30 2020 UTC. The chair is cdecker. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:08 < lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
13:08 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/issues/774
13:08 <+roasbeef> heh, should've tried to open a chan in that block or the halving one
13:08 < cdecker> The agenda for today
13:08 < cdecker> roasbeef: that would have been epic ^^
13:09 < cdecker> #topic Adding a responsible disclosure document for the spec and pointers to the various implementations (#772)
13:09 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/issues/772
13:10 < rusty> "faultive" is not a word. But damn, it should be.
13:10 < cdecker> Hehe, agreed
13:10 < t-bast> It's our project, we can make our own words
13:10 < cdecker> #action Add "faultive" to the merriam-webster dictionary
13:10 < t-bast> is ariard around?
13:11 < cdecker> ariard took the initiative and proposed having a document describing (a possible) way to contact the spec maintainers in case of a spec issue, and pointing to implementations in case it only impacts a specific implementation
13:11 < cdecker> (my interpretation)
13:11 < BlueMatt> ariard is out
13:12 < t-bast> I think this makes sense, but what would be the process for such a spec vulnerability?
13:12 < BlueMatt> but i think we can discuss it more on the pr
13:12 < cdecker> Sounds good :+1:
13:12 <+roasbeef> one critical addition would be contact info for the various impls, but still a WIP so prob need to hash it out more on the PR itself
13:12 < BlueMatt> imo "here's a list of each prominent implementation, please email all of them"
13:12 < rusty> Yeah, approve in principle: I think we should all nominate a contact both for cross-impl issues and for direct ones.
13:12 < BlueMatt> if we *really* want, we can set up a lightning-security@ private mailing list, but, ehhh
13:12 < t-bast> rusty: agreed
13:12 < cdecker> Well, we could also have a single address on the @ln.dev domain that fans out to a representative of each team
13:13 < cdecker> BlueMatt: ML would be one example (they never really work though in my experience) having an alias that fans out to the teams that can then triage seems reasonable though
13:14 < BlueMatt> right, thats what i meant
13:14 < BlueMatt> (like security@bitcoincore.org does)
13:14 < cdecker> Ok, gotcha
13:14 < rusty> security@ln.dev could work, be a huge spam magnet of course, but what isn't?
13:15 < BlueMatt> security@bitcoincore.org is...surprisingly quiet
13:15 < cdecker> So does this seem like something we should pursue? If yes we'd defer to the PR to hammer out details, ok?
13:15 < t-bast> BlueMatt: does security@bitcoincore.org have a shared key to encrypt the disclosure and have multiple people be able to read it? or is it unencrypted emails only?
13:15 < BlueMatt> the occasional "I got scammed, can you go look up who this address is? kthx" but thats about it
13:15 < BlueMatt> its, sadly, mostly unencrypted, but the website *does* list a set of pgp keys to encrypt to
13:15 < BlueMatt> anyway, I'm not sure its worth it
13:15 < BlueMatt> hopefully spec-level bugs keep going down over time
13:15 < BlueMatt> and if its antoine reporting all of them I think he can handle it lol
13:16 < rusty> cdecker: ack, let's hammer out on PR. Move on?
13:16 <+roasbeef> if it just sends out to a pre-set list of addrs it's manageable imo
13:16 < t-bast> BlueMatt: sounds like a plan, let's get ariard full time on finding those bugs :D
13:16 < cdecker> #agreed everyone to discuss the details on the pull request
13:16 < cdecker> #topic Channel announcement features (#769, #770, #773)
13:17 < BlueMatt> t-bast: well are y'all gonna fund him to do it from paris? :p
13:17 < cdecker> #769 describes the problem, and #770 + #773 are two potential solutions
13:17 < t-bast> BlueMatt: would love to, if he wants to do some Scala on eclair on the side it would be perfect!
13:17 * BlueMatt votes for 773
13:18 < cdecker> Issue being that there was a bit of a disagreement on 1) whether wumbo should be in the channel_announcement and 2) whether it should be optional or mandatory
13:18 < rusty> Yeah, facts on the ground and all that. 773 is the only real option.
13:18 < BlueMatt> note that I disagree strongly with cdecker's comment, though
13:18 <+roasbeef> really just needs to be in the node ann
13:18 < t-bast> ACK 773, no code changes to do, perfect laziness
13:18 < BlueMatt> relying on on-chain data to figure out the channel size is bad
13:18 <+roasbeef> max_htlcs governs everything route-ability wise
13:18 < rusty> The spec was wrong (compulsory was dumb), but putting it in the channel_announce was just showing off.
13:18 <+roasbeef> well not all node types verify that, max_htlc is the main thing you should look at
13:18 < cdecker> BlueMatt: we can add it to the announcement, but having a binary flag doesn't solve it either :-)
13:18 < rusty> (Though, as first feature there, it made us write some code).
13:18 < BlueMatt> but using htlc_maximum_msat is great
13:19 < cdecker> roasbeef, bitconner is #773 ok with you?
13:19 < bitconner> yeah doesn't make much sense for the channel ann to advertise it after its already been created
13:19 < bitconner> 773 lgtm
13:19 < cdecker> Ok
13:19 < rusty> And also, even 770 is surprising: if two wumbo-supporting nodes create a channel which *isn't* wumbo, it still gets the feature bit.
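(The argument being made here is that a router never needs a dedicated wumbo bit: each direction's channel_update already carries htlc_maximum_msat, so candidate channels can simply be pre-filtered by the amount being routed. A sketch of that pre-filtering; the data structure is illustrative, not any implementation's.)

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ChannelUpdate:
    short_channel_id: int
    htlc_minimum_msat: int
    htlc_maximum_msat: int
    disabled: bool

def usable_for(amount_msat: int, updates: List[ChannelUpdate]) -> List[ChannelUpdate]:
    """Keep only channel directions that could plausibly carry this HTLC:
    htlc_maximum_msat already bounds the payment, no wumbo feature bit needed."""
    return [u for u in updates
            if not u.disabled
            and u.htlc_minimum_msat <= amount_msat <= u.htlc_maximum_msat]
```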
13:19 < BlueMatt> cdecker: we already did! (htlc_maximum_msat)
13:19 < cdecker> #action cdecker to merge #773, and close #770
13:20 < bitconner> unless you set max_htlc to max_uint_64 :)
13:20 < cdecker> BlueMatt: that doesn't really have to be the channel capacity, that is just an upper limit on the HTLC size, which can be smaller
13:20 < BlueMatt> cdecker: true, but its also all you ever need to know for routing
13:20 < t-bast> exactly
13:21 < cdecker> Well, if you want to start being clever, you can bias against smaller channels since they have a higher chance of being depleted
13:21 < cdecker> Or if you want to do MPP the maximum HTLC doesn't tell you whether you can send another or not
13:21 <+roasbeef> yeh can pre-filter if one wishes
13:21 < t-bast> But you can be clever-er and figure out clever people will do that and those small channels will thus be full
13:21 <+roasbeef> cdecker: that's in the chan ann too iirc
13:21 < cdecker> Anywho, let's continue on the merry tour through the spec ^^
13:21 < BlueMatt> heh, using mpp to get around maximum_htlc seems superrrrr antisocial
13:22 <+roasbeef> lol that's a given tho
13:22 <+roasbeef> the link itself has the max outstanding value
13:22 < cdecker> Well, but it isn't prohibited, and you can't really do much about it anyway (especially once we get PTLCs)
13:22 < BlueMatt> yea, totally. not disagreeing, just noting
13:22 < cdecker> #topic Clarify Bolt 7 intro #760
13:23 < cdecker> Sounds to me like a good clarification
13:24 < cdecker> Did anyone see a blocker in this one?
13:24 < rusty> Sure. The order was because you need to announce a channel before you can announce a node, but I can see how that's confusing.
13:24 < rusty> Ack from me
13:24 < t-bast> ACK #760
13:25 < cdecker> BlueMatt, roasbeef, bitconner any objections to applying this?
13:25 < t-bast> Agreed the channel / node ordering was a bit confusing, we can consider the first channel ever to be the exception, not the rule
13:25 <+roasbeef> would file under like the typo/formatting rule
13:25 <+roasbeef> hmm well node discovery can't happen w/o channel discovery
13:25 <+roasbeef> since you don't keep a node unless it has channels
13:26 < cdecker> Yep, was tempted to apply it as a spelling thing, but it's a bit bigger than a typo, so wanted to make sure ^^
13:26 < BlueMatt> eh, i think its fine.
13:26 < BlueMatt> (doesn't need discussion, i dont think, but in between topics, figured I'd note it'd be nice to get some reviews on 758)
13:26 < cdecker> I think keeping the ordering this way is ok, we're describing what happens, not how it happens in all its minutiae
13:27 < cdecker> #action cdecker to apply #760
13:27 < bitconner> 760 lgtm
13:27 <+roasbeef> BlueMatt: so joost is planning on moving things over to json for those test vectors, to make em easier to consume/generate
13:27 < t-bast> Agreed that we can cover #758 as well, eclair can't comment much since we still don't support static_remotekey (because we already had a deterministic derivation that's equivalent), but we'll have it soon
13:27 <+roasbeef> there's also a q of if all the old ones should be kept around as well for commitments
13:28 < cdecker> Oh, I see #758 needs some cross-verification, can we put it on next time's agenda?
13:28 < rusty> #action Rusty to test #758
13:28 < cdecker> +1 on moving these to JSON btw
13:29 < cdecker> #action all implementations to verify the new test vectors in #758
13:29 < cdecker> Does that sound ok BlueMatt?
13:31 < cdecker> Any last comments for #760 / #758 before we proceed?
13:31 < cdecker> #topic Bolt 11 test cases #736
13:31 < BlueMatt> roasbeef: moving to json would be great! but letting them go stale while "they're about to be rewritten" is also not ok
13:31 < joostjgr> yes indeed, i am planning to write down the anchor test vectors in json
13:31 < BlueMatt> cdecker: yep, thats all I wanted
13:31 < joostjgr> still looking for the original remote private key to produce signatures...
13:31 < cdecker> Great, so #736 is more test vectors
13:32 < t-bast> I think this one only needs a rebase, conner and I validated them for eclair/lnd IIRC
13:32 < cdecker> Sounds good, so we're missing an OK from rust-lightning and that's it I think
13:33 < cdecker> rusty: would you do the honors of rebasing or shall I?
13:33 < joostjgr> rusty: do you still have that private key that you used to generate test vectors? i can also do a new one
13:33 < rusty> cdecker: will do.
13:34 < cdecker> #action rusty to rebase and merge #736
13:34 < rusty> joostjgr: huh... which ones?
13:34 < cdecker> Making good progress everyone ^^
13:34 < joostjgr> the remote private key for the commit tx test vectors in bolt 03
13:34 < joostjgr> the local priv key is there, but not the remote one. i want to generate new remote sigs for anchors
13:35 < rusty> joostjgr: hmm, that's a bad omission. Let me check the usual candidates...
13:35 < joostjgr> also useful to test sig generation in our test suite
13:35 < cdecker> #topic Network pruning #767
13:35 < bitconner> the privkey isn't H(rusty)?
13:35 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/767
13:36 < joostjgr> haha, okay, so we are supposed to leave hidden messages :)
13:36 < cdecker> bitconner: it's H(H(rusty | 1)) :-)
13:36 < joostjgr> ok thanks
13:36 < sstone> joostjgr: I think they used to be in comments that you see when you display the "raw" .md file
13:37 < cdecker> So #767 is the biggest airplane in Boeing's lineup... aehm, I mean it wants to clarify the pruning semantics due to lack of updates
13:37 < bitconner> cdecker: a solid KDF indeed lol
13:38 < bitconner> ah yes, so since the original pr, i've run some more numbers and threw them in a gist
13:38 < cdecker> So I think "latest update" was definitely confusing, but it shouldn't be the oldest one either
13:38 < bitconner> see the link in the PR body
13:38 < joostjgr> sstone: interesting, see them now INTERNAL: "remote_privkey: 8deba327a7cc6d638ab0eb025770400a6184afcba6713c210d8d10e199ff2fda01"
13:38 < rusty> joostjgr: looks like they got lost. See the original commit fb5e8667bbd02f086dc9fb16038a9f3d4434d241 ?
13:38 < bitconner> i mean at any time you only have two updates for the channel, the proposal is to use the older of the two
13:38 < bitconner> as niftynei said, it could be more clear as the older of the most recent channel updates for each node
13:39 < cdecker> But that'd mean that I can force a channel I have with you to become pruned despite you wanting to forward payments, wouldn't it?
13:39 < bitconner> the main thing that sticks out is that 26.24% of the channels have at least one end older than 2 weeks
13:39 < t-bast> bitconner: did you check the channels that should have been pruned by that?
13:39 < cdecker> I still think that as long as one side sends updates, and the channel isn't closed on-chain, we should not prune it
13:39 < t-bast> It ended up pruning some very active channels from BlueWallet in my case
13:40 < bitconner> t-bast: yes to some extent, but i don't think it matters much
13:40 < t-bast> Either a bug on the BlueWallet node (in which case I need to re-run the simulation and check others) or another issue
13:40 < bitconner> if those nodes are falling behind that points to some other bug that shouldn't be happening
13:41 < bitconner> but those bugs can be fixed
13:41 < t-bast> but then we should investigate those bugs :)
13:41 < cdecker> For example c-lightning will not send updates if there is not enough capacity to forward anything, which might cause us to get pruned...
13:41 < t-bast> I think it's an opportunity to find and fix some gossip bugs
13:41 < bitconner> either way, the current heuristic allows buggy/malicious nodes to pollute your routing table
13:41 < niftynei> this seems like a dumb question, but what's the trigger for a node to send out an updated channel_update msg?
13:41 < cdecker> Being silent is not a sign of misbehavior imho, but rather being respectful of everyone's bandwidth
13:41 < rusty> cdecker: really? I don't think this is true?
13:42 < bitconner> the proposal is meant to eliminate that by using a local heuristic, rather than relying on the good will/correctness of other peers
13:42 < rusty> (re: c-lightning)
13:42 <+roasbeef> cdecker: anything being zero here?
13:42 < cdecker> Well, we'll defer the first channel update if we are the fundee for example
13:42 * niftynei goes to read the spec
13:42 < rusty> cdecker: I didn't think we implemented that because it was too much of a giveaway.
13:42 <+roasbeef> niftynei: yeh I guess "outdated" should never happen, in that the timestamp should be increasing
13:42 < bitconner> cdecker: why wouldn't it? also one update every two weeks is essentially silent wrt bandwidth
13:42 < cdecker> Oh, I thought we had
13:42 <+roasbeef> (if i'm following)
13:43 < bitconner> the recency of your channel update proves liveness of the channel peers, and rn we are keeping channels where only one side proves liveness
13:43 < t-bast> right now I've collected metrics on channel_update frequencies, let me dig up the numbers
13:43 < rusty> niftynei: implementation dependent. We have a 13-day timer which refreshes everything if necessary. We also update lazily: if a channel is disabled (i.e. peer disconnected) and we try to route through it, we'll produce a channel update.
13:43 < bitconner> the channels don't necessarily have to be deleted, in lnd they would move into the zombie state and could be resurrected if a new update comes in later
13:44 < cdecker> So let's quickly recap: if your channel peer is dead, are you sending updates for those channels? (hint: you shouldn't)
13:44 < BlueMatt> we should at least add a note that nodes SHOULD renegotiate the channel_update every two weeks
13:44 < bitconner> then why do you have the channel?
13:44 < BlueMatt> which I think is missing
13:44 < cdecker> And if your channel is active are you sending keepalive updates?
13:44 < t-bast> here are the numbers: 50th percentile is to refresh channel_update every 24 hours
13:44 < rusty> I do worry we're seeing weird gossip propagation issues. Some may be production bugs, but some may be actual prop issues. (Cue rusty to dig up the minisketch proposal again)
13:44 < t-bast> 90th percentile is a bit more than 24 hours
13:45 <+roasbeef> cdecker: does disable count as an update? ;)
13:45 <+roasbeef> also depends what period of inactivity counts as "dead"
13:45 < t-bast> and only the 99th percentile is 3 days, so clearly all nodes are aggressively sending channel_updates (probably too aggressively)
13:45 < cdecker> roasbeef: I wouldn't count it as a keepalive update, so it should be allowed
13:46 < cdecker> What we do IIRC is disable slowly (if a payment tries to go through a channel with a dead peer), but enable quickly (as soon as the peer reconnects we enable)
13:46 < bitconner> okay, so as far as next steps (i can run some more numbers if people think of better heuristics), what would people like to see?
13:46 < BlueMatt> also, this needs updating in channel_update processing either way
13:46 < t-bast> note that my numbers are only for channel_updates that have a timestamp updated, so given these numbers I'm guessing that lnd by default re-emits a channel_update every 24h, doesn't it?
13:46 < BlueMatt> it says "if the fields below timestamp are equal" SHOULD ignore this message
13:46 < BlueMatt> obviously if we expect keepalives we MUST NOT ignore those messages
13:46 < BlueMatt> otherwise we have propagation issues
13:46 < rusty> cdecker: yes. But we also have a 13-day refresh timer which refreshes everything (even if disabled)
13:47 < BlueMatt> it seems strange to me to change the spec to introduce propagation issues for nodes that faithfully implement today's protocol
13:47 < BlueMatt> we'd at least need some kind of waiting period
13:47 < bitconner> wdym by introduce propagation issues?
13:47 < BlueMatt> the spec *today* says you SHOULD ignore keepalive updates
13:47 < BlueMatt> and not relay them, and not accept them
13:48 <+roasbeef> ....where exactly?
13:48 < BlueMatt> but pruning old entries means that you'd prune
13:48 < BlueMatt> err, sorry
13:48 < BlueMatt> - if the fields below `timestamp` are equal:
13:48 < BlueMatt> - SHOULD ignore this message
13:48 <+roasbeef> keep alive would have a newer timestamp
13:48 <+roasbeef> or you just +1 the prior one
13:48 < BlueMatt> you SHOULD ignore any messages where the fields other than timestamp are the same
13:48 < BlueMatt> according to the spec
13:48 < cdecker> When did we add that? Seems weird tbh
13:49 < bitconner> iirc that wording was changed recently, i'd confirm that's what the original wording said
13:49 < BlueMatt> thats been in there for a long time
13:49 < BlueMatt> but, yea, someone should check
13:49 < rusty> BlueMatt: true, we actually allow redundant updates every 7 days.
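(To pin down the two readings being argued about in #767: the "latest channel_update older than two weeks" rule is ambiguous because every channel has two directions, each with its own most recent update. cdecker's reading keeps the channel as long as either side is still updating; the proposal keys off the older of the two most recent updates, so both endpoints have to prove liveness. A sketch of both, illustrative only.)

```python
TWO_WEEKS = 14 * 24 * 3600

def should_prune_lenient(now: int, last_update_a: int, last_update_b: int) -> bool:
    """One live side is enough: prune only when even the newer of the
    two most recent channel_updates is more than two weeks old."""
    return max(last_update_a, last_update_b) < now - TWO_WEEKS

def should_prune_strict(now: int, last_update_a: int, last_update_b: int) -> bool:
    """Proposed reading: both sides must prove liveness, so prune when the
    older of the two most recent channel_updates is more than two weeks old."""
    return min(last_update_a, last_update_b) < now - TWO_WEEKS
```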
13:49 <+roasbeef> above that, it says on ly apply that if the timestamps are equal
13:49 <+roasbeef> the keep alive would have a new timestamp
13:50 <+roasbeef> only*
13:50 <+roasbeef> so spec is fine here
13:50 < cdecker> Seems that requirement was added in https://github.com/lightningnetwork/lightning-rfc/pull/621
13:50 < BlueMatt> oh, oops, indeed, i misread
13:50 < BlueMatt> sorry about that
13:50 < BlueMatt> thanks roasbeef
13:50 < BlueMatt> back to my original note, we should at least add a requirement in the sender section there that says you have to generate keepalives
13:50 < bitconner> so keepalive is legal, and the spec recommends keeping not-alive channels
13:51 < cdecker> Anyhow, seems we can go on discussing the pros and cons of using the earlier rather than the later of the two latest updates, shall we move that to the PR?
13:51 < BlueMatt> bitconner: where does it do that? I only see reference in the non-normative section
13:51 < BlueMatt> the normative section needs to be updated here
13:52 < bitconner> by using the latest of the two updates, you only require one side to be alive
13:52 < t-bast> cdecker: ACK
13:52 < bitconner> but yes, we can continue on the PR
13:52 < cdecker> bitconner: yes, that's my point, as long as one node cares for a channel we shouldn't throw it away
13:52 < cdecker> But to be honest it's not a firm opinion on this
13:53 < bitconner> also have some time before this would see any action until the propagation issues are investigated/resolved
13:53 < niftynei> pruning it means you stop propagating it; if you keep it in your gossip store you could have a different internal heuristic that throws them away for routing table operations, no?
13:53 < t-bast> It would be good to clarify if you observe the same statistics as I do from my node; if we agree on these numbers we can very easily divide the bandwidth used by a factor of at least 10 with less aggressive channel_update keep-alives
13:53 < cdecker> Ok, let's move the discussion to the PR (I'll need to look at Conner's numbers a bit more I think)
13:53 < cdecker> #action everyone to discuss the pros and cons of using one update over the other on the PR
13:54 < niftynei> ack
13:54 < cdecker> That's the PR-based topics
13:54 < bitconner> niftynei: yes that is also true, that's closer to what lnd does w/ its zombie index. my assumption is that what's in the spec is what's recommended for your routing table tho
13:54 < cdecker> On to the discussion topics ^^
13:55 < cdecker> BlueMatt, joostjgr or t-bast, do we have any volunteers to present their discussion topic first?
13:55 < rusty> Oh, I actually have an update on the protocol tests. Kinda.
13:55 < cdecker> Great, let's do that then ^^
13:55 < t-bast> Great, let's do protocol tests for a change :)
13:55 < cdecker> #topic Protocol testing framework
13:55 < t-bast> Trampoline and Route Blinding are simply waiting for some love on the PR itself
13:56 < t-bast> I've done a lot of work to add diagrams and stuff to make it easy to understand
13:56 < rusty> OK, so the current tests use this DSL to define them. That's proven painful, and keeps needing new features (e.g. when we started doing optimized signatures)
13:56 < cdecker> ack t-bast, I'll give trampoline a look this week
13:56 * roasbeef whispers json into rusty's ear
13:56 < rusty> roasbeef: "no".
13:56 <+roasbeef> kek
13:56 * t-bast thinks roasbeef is becoming a javascript dev or something
13:56 <+roasbeef> lolol
13:56 < niftynei> hehehe
13:57 < rusty> So I've stepped back, and the new tests are straight python, with various libraries to support making messages, txs etc.
13:57 < rusty> So you can say things like "make a commitment tx with keys foo and bar". then "add htlc x". "sig should be valid for key K on the tx".
13:58 < cdecker> If you need additional inspiration btw I think skylark is a rather nice python DSL (it's used for the bazel build tool)
13:58 < rusty> Which means I have a lot more stuff to (re)implement in python, but the result is we use standard pytest which does tests in parallel etc and is easy to cut & paste.
13:58 < cdecker> Sorry, I meant starlark
13:58 < niftynei> that sounds nice rusty
13:58 < niftynei> speaking as perhaps your sole guinea pig on the old format :P
13:59 < t-bast> that sounds nice, what's the interface each implementation needs to add to plug into the python DSL?
13:59 < cdecker> It should eventually be talking over the P2P proto with the implementation, right?
14:00 < niftynei> and a lot faster to get things done in -- i ended up writing a python generator for the tests that i wanted for some of the early dual-funded protocol tests
14:00 < t-bast> It probably needs some hooks into the node under test to get into specific situations, doesn't it?
14:00 < niftynei> ('generator' here being 'script that prints tests', not an actual python generator thing)
14:00 < rusty> t-bast: currently it requires you to implement several methods, like "generate a block" and "open a channel" in a separate python file, and run that. I'm going to change it, but the idea of one python module for each impl remains.
14:00 <+roasbeef> rusty: is this much diff than decker's old tests?
14:00 < cdecker> Probably, but a small shim around the RPC and a way to mock the bitcoin backend should suffice
14:00 < rusty> cdecker: yes, it currently uses a C shim, but am reimplementing in native python.
14:00 <+roasbeef> or this is like wire level message matching?
14:01 < niftynei> it's wire level message matching
14:01 < rusty> roasbeef: very. It is wire-level (with more smarts to match sigs etc).
14:01 < cdecker> roasbeef: yes, with my integration tests we were running full implementations against each other, giving us many many combinations, while this just tests a single node for its behaviour
14:01 < rusty> You create a DAG and it runs through all the paths
14:01 <+roasbeef> how w/o inserting decryption keys?
14:01 <+roasbeef> nodes need to be modified to run against it?
14:01 <+roasbeef> ah ok single node
14:02 < rusty> roasbeef: current implementation requires nasty invasive impl hacks for keys. Hoping to loosen that so you provide the test runner with your priv keys, rather than vice versa.
14:02 < niftynei> no, c-lightning is not modified. there's a runner that wraps RPC commands to tie into the required DSL actions
14:02 < rusty> niftynei: no, c-lightning has a heap of dev-only hooks to override keys for this!
14:02 < niftynei> oh. right.
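(Since the runner works at the wire level, here is a tiny taste of what "message generation direct from the spec" boils down to: BOLT 1's ping is message type 18 followed by num_pong_bytes (u16), byteslen (u16) and byteslen ignored bytes, all big-endian, so a test can build and check such messages with nothing more than struct. Stand-alone sketch, not the pyln.proto API.)

```python
import struct

def encode_ping(num_pong_bytes: int, ignored_len: int) -> bytes:
    """BOLT 1 ping: type 18, num_pong_bytes (u16), byteslen (u16),
    then `byteslen` ignored bytes (zeros here). Integers are big-endian."""
    return struct.pack(">HHH", 18, num_pong_bytes, ignored_len) + b"\x00" * ignored_len

def decode_pong(msg: bytes) -> int:
    """BOLT 1 pong: type 19, byteslen (u16), then ignored bytes.
    Returns the length of the ignored payload."""
    msg_type, byteslen = struct.unpack_from(">HH", msg)
    assert msg_type == 19 and len(msg) == 4 + byteslen
    return byteslen

assert encode_ping(num_pong_bytes=4, ignored_len=2).hex() == "0012" "0004" "0002" "0000"
```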
14:03 < rusty> '--dev-force-tmp-channel-id=0000000000000000000000000000000000000000000000000000000000000000',
14:03 < rusty> '--dev-force-privkey=0000000000000000000000000000000000000000000000000000000000000001',
14:03 < rusty> '--dev-force-bip32-seed=0000000000000000000000000000000000000000000000000000000000000001',
14:03 < rusty> '--dev-force-channel-secrets=0000000000000000000000000000000000000000000000000000000000000010/0000000000000000000000000000000000000000000000000000000000000011/0000000000000000000000000000000000000000000000000000000000000012/0000000000000000000000000000000000000000000000000000000000000013/0000000000000000000000000000000000000000000000000000000000000014/FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF',
14:03 < rusty> There'll still be some of that I imagine, but ideally much less.
14:04 < niftynei> right so the python impl could be more dynamic wrt keys, e.g. injectable
14:04 < cdecker> Depends on how much of the logic we want to reimplement in python I guess
14:04 < rusty> So, status: the current tests are really ugly to write (you have to manually figure out what txs look like, etc), and the DSL is... um... a bit special. So I'm considering that the "one I throw away".
14:04 < niftynei> i hear electrum has a python implementation ;)
14:05 < BlueMatt> we have python bindings
14:05 < BlueMatt> (or, will, Soon (tm) )
14:05 < cdecker> Right, both good options ^^
14:05 < rusty> I'm currently reworking the entire Python stack, currently doing the message generation (direct from the spec) in a nicer, generic way.
14:05 < BlueMatt> (but, we *do* have C bindings)
14:06 < cdecker> However if we can get away with some mock data rather than reimplementing it'd probably be easier to debug as well
14:06 < rusty> It's going under cdecker's pyln python namespace. In particular, much is under pyln.proto right now (e.g. pyln.proto.bolts).
14:07 < cdecker> Speaking of which I wanted to add a couple of things to that myself
14:07 < rusty> Once I have converted the existing tests, I'll let you know we've reached MVP.
14:07 < cdecker> Great, thanks rusty for the update
14:07 < joostjgr> Wouldn't it be nice to have multiple threads to discuss things in parallel? Just start all agenda items at once
14:07 < rusty> But lots of recent interop problems would be avoided by this existing, so hence my effort.
14:07 < rusty> EOF :)
14:07 < t-bast> great, that definitely sounds useful
14:08 < joostjgr> There is one thing I want to bring up about anchors. The outstanding discussion on whether to encumber all spend paths of the htlc outputs with CSV 1 rather than all paths except the revocation path. Isn't it an advantage if the other party or a watchtower can act on an unconfirmed breach? Anchor it down using the revocation path.
14:08 <+roasbeef> def
14:08 < cdecker> joostjgr: I'm having loads of difficulties following as it is, multiple discussions at once seems counterproductive to me
14:08 < jkczyz> yeah, having seen the old grammar, looking forward to how it turns out
14:08 < joostjgr> at once, but separated in channels
14:08 < cdecker> #topic Anchor outputs #688 (@joostjager)
14:09 < cdecker> Last topic for today, I need to get some sleep soon ^^
14:09 < BlueMatt> joostjgr: last I looked at it there were strange attack cases without it
14:09 < joostjgr> I rebased the pr and addressed some of the smaller comments. Now working on test vectors
14:09 < BlueMatt> I described one that makes implementation triply tricky
14:09 <+roasbeef> joostjgr: yeh they can insta act which is nice, and also start to siphon off funds from the cheating party into miner's fees to bump up the conf
14:09 < cdecker> Ah that makes more sense joostjgr
14:09 <+roasbeef> also past revocation land, all bets are off more or less
14:09 < BlueMatt> given we're gonna change the htlc txn format later for anchor-but-secure-htlcs-this-time, it also isnt critical to make it as flexible as possible, but getting it secure I'd prefer
14:10 < BlueMatt> given we've seen that mempool relay policy has a tendency to bite us in the ass, lets not risk it
14:10 <+roasbeef> the scenario described there doesn't seem like much of an attack, the party that's able to take the revoked funds has an incentive to do it as quickly as possible
14:11 <+roasbeef> swift justice
14:11 < joostjgr> bluematt: yes i saw your scenario where the breaching party is punished extra by forcing them to rbf the fee upward
14:11 < bitconner> if the revocation path has CSV 1 then a tower can't bump the fee for a client?
14:11 <+roasbeef> bitconner: that too
14:11 < BlueMatt> roasbeef: as we've learned over the last few weeks...."incentive to do it as quickly as possible" isn't always as simple as it seems :p
14:12 <+roasbeef> well this is a specific context, you're playing for all the chips here
14:12 < bitconner> then the commitment txn needs to carry full fees to be tower-able
14:12 < BlueMatt> wait, why cant the tower use your anchor?
14:12 <+roasbeef> joostjgr: the true punishment is them just losing all their funds in the end tho
14:12 < BlueMatt> its there...well, to allow CPFP
14:12 < BlueMatt> if the tower cant use the anchor, that seems like a bug in the anchor construction
14:13 <+roasbeef> BlueMatt: it needs the multi-sig keys for that, before the block period or w/e it is
14:13 < bitconner> the tower can spend using presigned txns, but what are the inputs? now nodes need to keep a reserve of UTXOs just for the tower
14:14 <+roasbeef> yeh could be presigned when sent over
14:14 < BlueMatt> right, so you're saying the tower should be able to burn your htlc value for you to create cpfp
14:14 <+roasbeef> but if this is really "the breaching party is punished extra by forcing them to rbf the fee upward", that's insignificant as the breaching party is about to lose all their money, seems minor if Alice can toy w/ them for w/e reason
14:15 < BlueMatt> roasbeef: thats the Obvious Issue I see, lets please not take "lack of obvious issue" as a reason to risk it
14:15 <+roasbeef> risk what?
14:15 <+roasbeef> as long as the revocation fine is ok, not sure what the issue is
14:15 < joostjgr> there is the obvious or not so obvious issue on the one hand and watchtower implications on the other hand
14:16 < BlueMatt> so, if you're suggesting the watchtower just burn your in-flight htlc value to create cpfp txn, i dont really see how that solves it either, though
14:16 < BlueMatt> cause you'd need to always maintain an htlc balance
14:17 < bitconner> not sure where htlc balances are coming in here
14:17 < BlueMatt> in the hopefully-common-case you wouldn't have an htlc and you're back to square one
14:17 < BlueMatt> bitconner: maybe I'm misunderstanding, but let me restate the way I've understood this:
14:17 < bitconner> ideally the tower can bump with arbitrary utxos
14:17 < BlueMatt> the desire to have htlcs not have a csv one is so that the watchtower can bump the commitment transaction by burning your htlc value.
14:17 < BlueMatt> what did i misunderstand?
14:19 < bitconner> i think we're only referring to the revocation path, not htlcs as a whole
14:19 < BlueMatt> right
14:20 < BlueMatt> so if I didnt misunderstand anything, can someone restate the motivation for revocation path htlcs being non-csv'd?
14:21 < cdecker> Would it be ok if I ended the meeting notes? Feel free to continue discussing, but I really need to get some sleep ;-)
14:21 <+roasbeef> the tower can revoke in the same block as the commitment is mined, they can also start to shift funds from the party breaching to miner's fees
14:21 < joostjgr> yes sure, the discussion has been started and can continue on the pr. thanks
14:21 < cdecker> #endmeeting
14:21 < lightningbot> Meeting ended Mon May 11 21:21:59 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:22 < lightningbot> Minutes: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-05-11-20.08.html
14:22 < lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-05-11-20.08.txt
14:22 < lightningbot> Log: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-05-11-20.08.log.html
14:22 < BlueMatt> right, so *in the case that you have pending htlcs* the tower can burn some of your htlc funds. I really dont see why we're solving for that edge case.
14:22 < rusty> Thanks cdecker
14:22 < BlueMatt> thanks cdecker!
14:22 < cdecker> Thanks everyone for the meeting, it's been very productive :-)
14:22 < BlueMatt> enjoy sleep!
14:22 <+roasbeef> not sure i'd call that an edge case, revocation in general can be viewed as an edge case
14:22 < t-bast> Thanks cdecker!
14:22 < BlueMatt> like, that doesn't really solve the issue of "tower can't bump the commitment" because hopefully you don't always have large htlc value in flight
14:22 <+roasbeef> if we're going to change that too, then we need sufficient motivation and a concrete scenario
14:22 < cdecker> I'll catch up with the anchor proposal and this discussion tomorrow ^^
14:23 <+roasbeef> any state in the past had pending htlcs due to how the commitment update protocol works
14:23 <+roasbeef> since it takes 2 to add/remove one
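(Much of the thread above is about who gets to CPFP-bump an unconfirmed commitment through the anchor. The arithmetic involved is standard CPFP package math, nothing specific to the anchors PR: the anchor-spending child has to pay enough fee that the parent+child package reaches the target feerate. The numbers below are made up for illustration.)

```python
def cpfp_child_fee(target_sat_per_vbyte: float, parent_fee_sat: int,
                   parent_vsize: int, child_vsize: int) -> int:
    """Fee the anchor-spending child must contribute so the whole
    (commitment parent + child) package hits the target feerate."""
    needed = target_sat_per_vbyte * (parent_vsize + child_vsize) - parent_fee_sat
    return max(0, round(needed))

# e.g. a 700 vbyte commitment paying only 253 sat of fee, bumped to 20 sat/vbyte
# by a ~110 vbyte child, needs the child to pay about 15947 sat.
print(cpfp_child_fee(20, 253, 700, 110))
```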
14:23 < bitconner> BlueMatt: you may be right, i'd want to go back and work through it. planning to look into the watchtower changes for anchors this quarter so now might be the time to flesh this out and see if anything comes up
14:23 <+roasbeef> they also don't need to be large, just something to feed off into fees
14:24 < cdecker> btw the agenda for next time is up https://github.com/lightningnetwork/lightning-rfc/issues/775. Please add any points you'd like to discuss :-)
14:24 < BlueMatt> roasbeef: hmm? no, the specific state that is broadcasted
14:24 < BlueMatt> why "any state in the past that had htlcs"
14:24 <+roasbeef> could have ad*
14:25 <+roasbeef> any balance change results in states that have pending htlcs
14:25 < BlueMatt> (for the record I totally agree my wishy-washy "ehhhh, that feels really bad, and it seems kinda sketch" argument here holds no water against a real utility change, but I'm trying to puzzle through the utility here and dont fully see it yet)
14:25 < BlueMatt> some states, sure, but what matters is only the state that was broadcast
14:25 < BlueMatt> not other states
14:26 < bitconner> it's also in general very difficult to sweep HTLCs via the tower on the current commitment format, so take that with a grain of salt
14:26 < BlueMatt> right, I mean the other thing is we have to redo htlcs for anchor v2 *anyway*
14:26 < bitconner> part of the reason we only cover commitment outputs, but it would be nice to have that option if possible
14:28 < BlueMatt> (in other unrelated news, if anyone has bitcoind debug.logs with -debug=net on for the past several months, I'd appreciate a chat)
15:27 < ariard> heyyyy sorry for missing meeting, was actually out celebrating halving :p
15:27 < ariard> will flesh out security.md more and replied to y'all comments
15:27 < ariard> *reply
--- Log closed Tue May 12 00:00:18 2020