--- Log opened Mon Jun 22 00:00:06 2020
--- Day changed Mon Jun 22 2020 00:00
13:00 < t-bast>
Good morning everyone!
13:01 < cdecker> Evenin' ;-)
13:01 < jkczyz> afternoon
13:01 < ariard> Hello
13:01 < rusty> (Too early to be) good morning all!
13:02 < t-bast> What have you all been working on recently?
13:03 < rusty> t-bast: hey, c-lightning finally updated to latest spec, was nice to get rid of the old option parsing in open/accept/reestablish! (And refreshing to *reduce* code for once!)
13:03 < niftynei> hello!
13:03 < ariard> anchor outputs implem
13:03 < t-bast> rusty: nice!
13:03 * t-bast waves at niftynei
13:04 < t-bast> ariard: how far along are you?
13:05 < t-bast> ariard: would be interesting to share useful thoughts/gotchas everyone ran into while implementing it (e.g. RBF/CPFP engine tricks)
13:05 <+roasbeef> depends on how your wallet is set up internally really
13:05 < ariard> t-bast: likely a few more days of work, it's mostly getting the utxo pool right for feeding CPFP, and how not to phagocytize the wallet for rust-lightning
13:06 < ariard> yes I would be glad to add an addendum on anchor outputs on CPFP, especially the fact that you want automatic bumping: after X blocks, increase feerate by Y%
13:06 <+roasbeef> yep you can finally actually adjust fees...such, flexibility
13:06 <+roasbeef> we're looking to enable it by default now for 0.11
13:07 < t-bast> much security
13:07 <+roasbeef> just need to add the channel type for the watchtower
13:07 < ariard> roasbeef: IIRC for your 0.10 bumping was manual? what heuristics are you looking at for automatic bumping? like being more aggressive when timelocks are near to expire?
13:07 < t-bast> roasbeef: you mentioned a protocol to update commit txs on-the-fly for existing channels, did you draft something?
13:08 < t-bast> shall we start the meeting this time to record the discussions? :)
13:08 < rusty> t-bast: yes!
13:08 < t-bast> #startmeeting
13:08 < lightningbot> Meeting started Mon Jun 22 20:08:37 2020 UTC. The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:08 < lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
13:08 < ariard> I don't think anchor outputs add security without package relay support, but it's a step forward for congestion management
13:09 < t-bast> #topic The everlasting anchor outputs debate
13:09 < rusty> t-bast, roasbeef: yeah, I've been looking at making static_remotekey compulsory, it would be really nice to have an upgrade to get rid of the older code entirely.
13:09 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/688
13:09 <+roasbeef> ariard: ofc it does..you can actually adjust fees
13:09 < t-bast> rusty: we're not fully ready for that yet, on the mobile side we still need to do some work for bech32...
13:10 <+roasbeef> ariard: yeah 0.10 is manual, we're looking to do an exponential increase as we get closer to the deadline
13:10 <+roasbeef> t-bast: stuff keeps coming up, it's on the way... ;)
13:11 <+roasbeef> rusty: same! eclair added it, I think it's about time
13:11 <+roasbeef> static remote by default that is
13:11 < ariard> roasbeef: wrt blockspace competition and racing to be confirmed before timelock expiration? yes, but I think a really adversarial counterparty can pin the commitment
13:11 <+roasbeef> want to prio enabling these safety features by default and make them compulsory
13:11 < ariard> to avoid even yours getting into the mempool
13:11 <+roasbeef> ariard: yes pinning is still there, but even w/o pinning, today, you can't adjust
13:11 <+roasbeef> we always conflate these issues
13:12 <+roasbeef> as in: w/o any adversarial activity, today you can't adjust your fees, so you can end up being screwed
13:12 < t-bast> roasbeef: there's something that's been bothering me, how can you decide on the amount you attach to your anchor output without looking at the mempool? Because the carve-out rule is a one-shot attempt, if you don't bump enough (because the other anchor has a big list of child txs) you won't be able to RBF your anchor, right?
13:12 < rusty> roasbeef: agreed, sorry I've been remiss on implementing anchor outputs. However, I have finally gotten the new protocol tests (ann coming today I really hope) to the point where I can implement it so testing is easier.
13:12 <+roasbeef> t-bast: you can use it as a lower bound, that's your floor
13:12 < ariard> I see your point, it's a step forward even with regards to security, but I wouldn't call you secure even with compulsory anchor output support
13:13 <+roasbeef> yeah the step needs to be a certain size
13:13 <+roasbeef> ariard: security isn't binary!
13:13 <+roasbeef> let's address what we can, and continue to iterate on the other issues
13:13 < ariard> roasbeef: right, we should evaluate each scenario on its own, but solving the hardest ones is likely to spare us having to overspec
13:14 <+roasbeef> we also don't need super strong global synchrony on this
13:14 < ariard> like I don't think that a remote anchor output on the local commitment should be pursued, it's a mapping oracle
13:14 < harding> t-bast: you should be able to RBF your anchor.
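The automatic bumping heuristic sketched in the discussion above ("after X blocks increase by Y%", and roasbeef's "exponential increase as we get closer to the deadline") can be illustrated with a toy helper. This is purely hypothetical; the function name, the bumping window, and the growth factor are illustrative and not taken from lnd, eclair, or rust-lightning:

```python
def next_feerate_per_kw(base_feerate: int, blocks_until_deadline: int,
                        start_bumping_at: int = 10,
                        growth: float = 1.5) -> int:
    """Return the feerate (sat per kiloweight) for the next fee bump.

    Far from the HTLC timelock deadline we keep the base feerate; once
    inside the bumping window, the feerate grows exponentially with each
    block that passes without confirmation.
    """
    if blocks_until_deadline >= start_bumping_at:
        return base_feerate
    steps = start_bumping_at - blocks_until_deadline
    return int(base_feerate * growth ** steps)
```

With these illustrative parameters, a commitment 12 blocks from its deadline keeps the base feerate, while each block closer than 10 multiplies it by 1.5.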
13:14 <+roasbeef> since it's a link level feature and can be negotiated between peers
13:14 <+roasbeef> ariard: implementations are free to deploy it if they wish, or wait even longer (it's been like 9+ months at this point)
13:15 <+roasbeef> dunno what you mean by mapping oracle
13:15 < ariard> roasbeef: see t-bast's gist https://gist.github.com/t-bast/22320336e0816ca5578fdca4ad824d12
13:15 < ariard> a way to map your full-node to your LN-node
13:15 < joostjgr> short update on the anchor pr itself: i am still working on providing you with the test vectors
13:15 < harding> t-bast: for RBF'ing your anchor, see https://github.com/bitcoin/bitcoin/pull/16421
13:15 < ariard> roasbeef: I think you're free to deploy whatever you want, but whether we should spec out security-damaging stuff, I'm not sure
13:16 <+roasbeef> ariard: seems distinct...and there are other ways that can be mitigated
13:16 * bitconner waves, irc client back in operation
13:16 < t-bast> harding: but at a potentially huge cost, right? If the attacker has a big list of child txs on his anchor (that maximizes the package), you already need quite a big amount to get your anchor spend to match the carve-out rule. If you didn't allocate enough fee there, when you bump it it will be very costly, won't it (because of the adversarial package)?
13:16 < ariard> roasbeef: well so what's your mitigation? Just reacting to any in-mempool stuff is insecure by design
13:17 <+roasbeef> ariard: mitigation to what? i feel like we're trying to fix everything in a single swoop, and more stuff is discovered, but the central thing being enabled: bumping fees is extremely valuable, well overdue, and implementations that don't deploy some sort of solution in the near term put their users at risk
13:17 < harding> t-bast: no, I don't think so. You're only RBFing your ~200 vbyte anchor transaction, so you only need to fee bump those 200 vbytes.
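harding's point above — that replacing your own anchor spend only re-pays for its ~200 vbytes, not for the adversary's child package — follows from BIP 125's replacement rules: the replacement must pay at least the absolute fee of the transaction it evicts, plus the incremental relay fee for its own size. A sketch (the helper name and default values are illustrative, not from Bitcoin Core's API):

```python
def min_replacement_fee(old_fee_sat: int, tx_vbytes: int = 200,
                        incremental_relay_feerate: int = 1) -> int:
    """Lower bound on the fee of an RBF replacement per BIP 125.

    old_fee_sat: absolute fee of our previous anchor spend being replaced.
    tx_vbytes: virtual size of the replacement (an anchor spend is ~200 vB).
    incremental_relay_feerate: sat/vB the replacement must add to pay for
    its own relay bandwidth (Bitcoin Core default is 1 sat/vB).
    The attacker's package of child transactions never enters this bound,
    since we are not evicting it.
    """
    return old_fee_sat + tx_vbytes * incremental_relay_feerate
```

So bumping a 2000-sat anchor spend needs only 2200 sat at the default incremental relay feerate, regardless of what hangs off the counterparty's anchor.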
13:17 < ariard> joostjgr: have you verified the spending path for the anyone-can-spend? I'm not sure it's compliant with MINIMALIF, haven't tested yet though
13:18 < t-bast> harding: oh that's great, I'd misunderstood that then!
13:18 < t-bast> harding: thanks for clarifying this
13:18 < ariard> roasbeef: I agree with you we should move forward with a subset of anchor outputs, namely adding a _local_ anchor output on local commitment transactions
13:18 < ariard> and even getting the bumping logic right should be done step by step
13:19 < t-bast> joostjgr: could you also add to the PR that local signatures should still use SIGHASH_ALL even when the remote uses SIGHASH_SINGLE? Worth mentioning IMHO
13:19 < ariard> (well I agree we should move forward, but I think we disagree on the scope of what we should move forward with)
13:19 <+roasbeef> minimal if is just about what you supply in the witness ariard
13:19 < joostjgr> t-bast: i believe i've added that in https://github.com/lightningnetwork/lightning-rfc/pull/688/commits/964b03f51cdcf668eb4d494a738679482e20c5ed
13:20 < joostjgr> ariard: i will look that up. it has been a while now since we worked on it
13:20 <+roasbeef> ariard: fee bumping is purely client-side policy
13:20 < t-bast> joostjgr: thanks, don't know how I missed that :/
13:21 < joostjgr> still need to start that weekend project to make the anyone-can-pay anchor sweeper and get rich
13:21 <+roasbeef> ariard: again as I mentioned above, we don't need global synchrony on this, it's a link level upgrade, that's the beauty of LN as well, we can roll out things more quickly as long as there's negotiation
13:21 < t-bast> joostjgr: xD
13:21 < ariard> minimal if may hit you there because OP_CHECKSIG will push a 0 in case of sig failure?
13:21 < rusty> joostjgr: bwahahah!
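On ariard's MINIMALIF question: the anchor script in the proposal is `<funding_pubkey> OP_CHECKSIG OP_IFDUP OP_NOTIF OP_16 OP_CHECKSEQUENCEVERIFY OP_ENDIF`. A toy stack model (not real script evaluation — just the two values OP_NOTIF can ever see) suggests why MINIMALIF should be satisfied: with the NULLFAIL rule, a failed OP_CHECKSIG requires an empty signature and pushes the empty vector, and a successful one pushes 0x01, both of which are minimal booleans:

```python
def takes_csv_branch(sig_valid: bool) -> bool:
    """Toy model of the anchor script's two spend paths.

    Returns True when evaluation enters the OP_NOTIF branch, i.e. the
    anyone-can-spend path gated by the 16-block OP_CHECKSEQUENCEVERIFY.
    """
    # OP_CHECKSIG result: 0x01 on success; empty vector on (NULLFAIL-
    # compliant) failure. These are exactly the two inputs MINIMALIF
    # accepts for OP_NOTIF.
    stack = [b"\x01" if sig_valid else b""]
    if stack[-1] != b"":          # OP_IFDUP duplicates only a true value
        stack.append(stack[-1])
    branch_arg = stack.pop()      # consumed by OP_NOTIF
    return branch_arg == b""      # empty -> enter the CSV branch
```

Under this model the funder's signed spend skips the CSV branch, while the anyone-can-spend sweep (empty signature in the witness) takes it — with an OP_NOTIF argument that is always minimally encoded.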
13:22 < ariard> roasbeef: okay, if you want to have a remote anchor output on the local commitment feel free to do so, I would just discourage deploying it
13:23 < ariard> I may be wrong on this but with all these pinning games, mapping your counterparty's node is a big chunk of it
13:23 <+roasbeef> ariard: rationale being? otherwise one party can block the confirmation entirely
13:23 <+roasbeef> we keep combining all these scenarios, which isn't productive imo
13:23 < ariard> mapping oracle namely, also making the transaction size bigger
13:23 < ariard> but my point is you can already block confirmation entirely
13:24 <+roasbeef> but in the case w/o any outside interaction at all, w/o this you can fail to get into the chain in time, which can mean loss of funds
13:25 < ariard> Ah so the scenario you're thinking of is concurrent broadcast of both commitment transactions, with your counterparty's getting into the mempool
13:26 < ariard> and your counterparty's fee-bumping policy being too lazy to get it confirmed?
13:26 < ariard> in a timely fashion
13:26 <+roasbeef> yeah that's one of many
13:26 <+roasbeef> or you don't trust their bumping algo or w/e
13:27 <+roasbeef> you should have the ability to do it yourself, since their commitment needs to be confirmed in order for you to make sure you can resolve all your contracts properly
13:27 < t-bast> Is it a fair summary to say that roasbeef you want to move forward with the current proposal because while not fixing all attack vectors (pinning is still possible), it greatly improves on the current tx format; ariard your main concern is that you'd like to move forward with something that better fixes those attack vectors?
13:27 < ariard> yeah it being lazy or buggy is the same for you, still it's making the assumption that you see the remote commitment in your mempool
13:27 <+roasbeef> idk how saying "users need to be able to bump their fees" isn't a resounding agreement
13:27 <+roasbeef> t-bast: yes, yes, yes
13:27 < ariard> p2p rules don't make the assumption right now that every peer will see every transaction announced
13:28 <+roasbeef> t-bast: we keep expanding the threat model as well continually vs pinning one down and operating within that
13:28 < ariard> t-bast: yes exactly, I'm working on some package relay for core and I think that would avoid us a lot of overspeccing there
13:28 < rusty> I agree with roasbeef FWIW.
13:29 < t-bast> We need to remember that attacks only get better, so we may get stuck never shipping anything if we're always aiming to fix everything at once
13:29 <+roasbeef> t-bast: boom
13:29 < ariard> because as soon as we have package relay, a lot of the complexity around OP_CSV, remote anchors and tx-propagation assumptions can disappear
13:29 < ariard> modulo a "soon" on the core side can take a long time
13:30 < t-bast> ariard: I completely agree, but that doesn't prevent us from doing something that's not perfect now, and migrating to something better once we have better layer 1 support?
13:30 < ariard> still you will fix some attack/congestion scenarios by making some other attacks easier, that's my concern
13:30 < t-bast> as long as what we'd like to do now isn't horribly costly and fixes issues we're seeing today?
13:31 < rusty> ariard: package relay doesn't fix the bitcoin core mempool behavior, which is a larger concern anyway.
13:31 < t-bast> Then I think what would help move forward is a clearer view of how much simpler we make those attacks, to weigh whether what we gain is better than what we lose
13:31 <+roasbeef> ariard: how long is package relay gonna take? no one knows....
13:31 < ariard> so I'm okay to move forward if we can dynamically negotiate the scope of anchors, for those being uncomfortable with adding remote ones
13:32 < bitconner> for me, increasing safety of 1000s of nodes deployed today >>> needing to make a v2 anchor proposal
13:32 < t-bast> After spending a few days on the current anchor outputs proposal, I've grown to like it :). The format change isn't too drastic, and it's a first step to get everyone to start implementing RBF/CPFP engines which will always be useful.
13:33 < ariard> roasbeef: I'm actively working on it and hope to post some proposal before next meeting, but likely an 18-month timeline to get it merged and deployed
13:33 < t-bast> ariard: you mean a flag that would let us have a format with a single anchor (local) instead of 2?
13:33 < ariard> a minimal package relay just fixing the current state of LN, not the full-fledged thing
13:33 <+roasbeef> ariard: 18 months...such time, wow
13:34 < ariard> bitconner: yes but I'm questioning this "safety", as t-bast said, are we confident it's a blanket increase?
13:34 < t-bast> roasbeef: we're already 9 months in for the anchor outputs proposal, and it was discussed almost 2 years ago for the first time xD
13:34 < t-bast> roasbeef: 18 months doesn't shock me anymore
13:34 < rusty> Thanks, I think we're going in circles. Can we move on?
13:34 <+roasbeef> rusty: sgtm
13:34 <+roasbeef> t-bast: think of the events that can happen in those 18 months that'll make you wish you had deployed _something_ in that time frame
13:35 < bitconner> rusty: yes
13:35 < t-bast> #action ariard to summarize the security loss the two anchors create, so that we can evaluate it next time
13:35 < t-bast> roasbeef: yeah, I'm teasing really :)
13:35 <+roasbeef> also 18 months is just conjecture, and it'll be even longer for "all" nodes to update
13:35 < ariard> roasbeef: think of the events that can happen in those 18 months due to an anchor output introducing some easier way to attack
13:36 <+roasbeef> ariard: ofc we can't know that there won't be any new things discovered, that doesn't mean we should do _nothing_
13:36 < ariard> roasbeef: in the meanwhile, without package relay, if your feerate doesn't get into the mempool the anchor won't help you
13:36 <+roasbeef> i think it's also diff for you given that rust-lightning isn't fully "deployed"
13:36 <+roasbeef> we have users in the wild we want to protect _now_
13:36 < t-bast> Shall we do some small PRs, or does Rusty want to introduce to the world protocol tests v2?
13:36 <+roasbeef> t-bast: sure let's move on
13:36 < rusty> t-bast: PRs first?
13:37 <+roasbeef> will let y'all know if anything changes w.r.t. our plans re deployment
13:37 < joostjgr> test vectors pr, related :)
13:37 < ariard> okay let's ask a last question and then move on: if we do see sophisticated attackers in the coming months, don't we think they will chase the easiest scenarios to execute?
13:38 < t-bast> ariard: it's a good point, but I think for the sake of this meeting it's worth preparing something for next time to showcase how much worse it would be with 2 anchors, don't you agree?
13:38 < ariard> roasbeef: I think that's an orthogonal point, about being deployed, you can't tell your users they're going to be safe even with anchor outputs
13:38 <+roasbeef> ariard: there're easier things they can do than some elaborate commitment pinning, did you see that "flood & steal" paper (or w/e it's called)
13:38 <+roasbeef> ariard: you can't claim 100% safety with _anything_, security isn't binary bruv
13:38 < t-bast> Then we'll have something we can comment on and debate, I feel it's a bit too hand-wavy right now
13:39 < rusty> Yes, ariard, this is not the hole the water is coming through right now...
13:39 < ariard> roasbeef: yes, but this is already mentioned in the LN paper, and I think you can just dumbly spam the mempool, no need to open channels
13:39 <+roasbeef> yeah even easier...and guess what...no one would be able to update their fees to try and thwart it!
13:39 < bitconner> rusty: lol never heard that b4
13:39 < t-bast> rusty: me neither xD
13:40 < ariard> as again, I'm fine moving forward with a negotiated version of anchor outputs, I would personally not deploy remote anchor ones
13:40 < rusty> yeah, best I could do at 6:10am.
13:40 < BlueMatt> i mean remote anchor is super useful if you assume some kind of ability to monitor the global mempool, no ariard?
13:40 < BlueMatt> which, like, sure, you can't, but you can do something with that
13:40 < ariard> "some kind of ability to monitor the global mempool"
13:40 < t-bast> ariard are you ok with preparing a small gist/issue to summarize the cons of the double anchors?
13:41 < BlueMatt> and I think it's pretty clear by now that we can't "fix" the problem without, like, eltoo or something.
13:41 < t-bast> BlueMatt: you just triggered cdecker, that was the forbidden word
13:41 < ariard> t-bast: yes I'm already gathering all the issues around the fees, will publish on the ml or elsewhere once I've got package relay
13:41 <+roasbeef> t-bast: lmaooo
13:42 < t-bast> ariard: great, then let's continue discussing that off-meeting and resume this discussion next time?
13:42 * cdecker rears his head :-)
13:42 <+roasbeef> ok...small PRs? ;)
13:42 < t-bast> #topic Static remotekey test vectors
13:42 < ariard> t-bast: sure :)
13:42 <+roasbeef> he's arisen!
13:42 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/758
13:42 * BlueMatt notes that these discussions almost certainly merit a presentation and video/voice call, not just a text chat.
13:43 < t-bast> #action t-bast to try to gather people for a voice call before next time to discuss these anchor issues more efficiently
13:43 < t-bast> on our side, araspitzu validated the test vectors, they're on eclair master so the PR looks good
13:44 < t-bast> joostjgr put some interesting comments, BlueMatt did you have time to review them?
13:44 < BlueMatt> t-bast: is that with the previously-agreed-upon changes (ie dropping HTLC-tx changes?)
13:45 < joostjgr> comment about test vectors in general: while working on generating the anchor vectors, i switched to using the 'raw' test vector data (that is currently hidden as comments in the markdown).
13:45 < t-bast> BlueMatt: what is? the meeting to schedule? or the PR?
13:45 < BlueMatt> t-bast: the pr.
13:45 < t-bast> BlueMatt: this is static_remotekey, not anchors
13:46 < BlueMatt> oh, oops, sorry, didn't realize what we were talking about.
13:46 < t-bast> No worries, this is much simpler ;)
13:46 < rusty> Erk, I didn't see this PR. I tried to replicate the PRs recently, and ran into the "missing secret key" problem (one of the remote secrets). But I'm happy to redo those later, I agree with the idea of making these static_remotekey since nobody should be without it these days.
13:46 < BlueMatt> ah, regarding joostjgr's questions, I dunno, the way we check these vectors is lower-level than actual enforcement of htlc limits and such.
13:47 < joostjgr> what i do now is set up a channel between two nodes and let them go through the message exchanges to get the channel into the 'test point' state
13:47 < joostjgr> so no use for vectors that describe an impossible state
13:47 < joostjgr> i deleted them
13:48 < BlueMatt> right, we don't bother doing that for the test vectors, since they're just to make sure we generate txn correctly, not really anything to do with protocol enforcement
13:48 < t-bast> same for us, we test that at the transaction level so we don't mind the 0 fee / reserve
13:48 < BlueMatt> I think just y'all removing vectors if you can't write tests for them is fine, but no need to remove the vectors
13:48 < joostjgr> but why describe a commitment tx that is really undefined?
13:48 < joostjgr> maybe also a bit impl. specific
13:48 < BlueMatt> I don't think it's undefined? it's just a way to check that you can generate txn correctly
13:49 < rusty> joostjgr: it's a fair point. Ideally these would be generated with all-known secrets and reasonable fee levels.
13:49 < BlueMatt> can add a comment that notes that black-box testing is unlikely to get into such a state.
13:49 < t-bast> it doesn't feel unreasonable to me to have the test vectors reflect some real situation
13:49 < t-bast> but on the other hand, it creates dependencies between bolts that aren't strictly necessary for these tests
13:50 < joostjgr> i don't think they add anything if those commitments can never happen
13:50 < bitconner> i think it begs the question, what logic are we really testing then?
13:51 < bitconner> that couldn't also be described by a valid commitment 13:51 < BlueMatt> i found these unit tests useful when the channel create_commitment_tx() function was written and almost nothing else 13:51 < BlueMatt> having it with zero fee I dont really care about, we can remove it, or not, happy to flip a coin, but I dont see how these tests test anything more then simply the commitment tx creation function(s) 13:52 < BlueMatt> like, even if you black-box to get your state machine there, its not like you've meaningfully tested your state machine. 13:52 < rusty> I think if we use a sane (253 perkw) feerate, the first test vector is impossible, since HTLC 0 may be trimmed. It's only 1000 sat. But I'd need to double-check 13:53 < joostjgr> same for 'commitment tx with fee greater than funder amount' 13:54 < joostjgr> btw, for anchor test vectors we need to come up with new 'interesting' configurations and fee rate tipping points 13:54 < rusty> joostjgr: yeah, I remember grinding out those fee values by brute force for the test vectors...4 13:54 < joostjgr> i thought so... was thinking about doing the same for anchors :) 13:56 < rusty> joostjgr: we used to be able to get into that corner case (fee greater than funder can afford), but I think with new requirements on push_msat leaving reserve, it's not triue. 13:57 < rusty> (FWIW, these test vectors were *not* useful for the protocol test python implementation last week, since those assume we know everyone's secrets) 13:58 < t-bast> One thing that comes to mind is that changes elsewhere in the protocol (for example adding an extra reserve or something) shouldn't force us to re-generate Bolt 3 test vectors because they're now invalid commitments, that would be really wasteful 13:58 < BlueMatt> right, they're really only useful to sanity check that you've gotten the commitment tx generation right, not any kind of exhaustive check on...anything 14:00 < rusty> Yes, but that's a *lot*. 
Including HTLC trimming, OCN generation, key tweaking... 14:00 < BlueMatt> right, it def took me a few rounds to get it all right when I first wrote a commitment tx generator. 14:00 < rusty> ... output ordering, feerate calculation... 14:01 < BlueMatt> anyway, I also dont think this is, like, the most critical thing to harp on. if people care strongly, I can drop it. I can also add a comment noting that its only useful for some test suites, or I can literally flip a coin. 14:01 < t-bast> I don't care much either TBH 14:01 < t-bast> If someone has a strong opinion, please say it 14:02 < rusty> No, if the new vectors are correct let's update them. And if joostjgr generates new ones for anchors, he can fix these issues :) 14:02 < t-bast> joostjgr: does that sound ok for now? 14:02 < joostjgr> i wasn't planning to fix static remote key vectors in the context of anchors 14:03 < joostjgr> i would just remove them from the pr now 14:04 < joostjgr> unless the anchor vectors are going to replace this completely 14:04 < joostjgr> question again about how to structure the spec... 14:04 <+roasbeef> i'm all about distinct extension documents these days 14:04 < rusty> joostjgr: we need them in place while it's still an option, but eventually I expect anchors will become compulsory and these can be deleted. 14:04 <+roasbeef> much more scalable and easier to read/analyze 14:04 <+roasbeef> vs "if statements" in the spec lol 14:05 < BlueMatt> personally I strongly favor dropping old sections...they're in git if you need them 14:05 < joostjgr> no fan of 'if' statements either. 14:05 < t-bast> roasbeef: meaning you'd have anchor not change the current bolt 3 but rather be a different section/document with some duplication? 
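[Editor's note] The commitment-tx generation steps rusty lists above (HTLC trimming, output ordering, feerate calculation) are exactly where implementations tend to diverge. As an editorial illustration of the trimming rule mentioned at 13:52, here is a sketch using the pre-anchor BOLT 3 second-stage weights; the helper names and example dust limit are our assumptions, not taken from any implementation:

```python
# Editorial sketch (not from any implementation) of the BOLT 3 trimming rule:
# an HTLC output is omitted from the commitment tx when its amount, minus the
# fee of its second-stage HTLC transaction, would fall below the dust limit.
# Pre-anchor second-stage weights: 663 (HTLC-timeout) / 703 (HTLC-success).

def htlc_fee_sat(feerate_per_kw: int, weight: int) -> int:
    # BOLT 3 fees are rounded down to a whole satoshi.
    return (feerate_per_kw * weight) // 1000

def is_trimmed(offered: bool, amount_sat: int,
               feerate_per_kw: int, dust_limit_sat: int) -> bool:
    # Offered HTLCs are claimed via HTLC-timeout, received ones via HTLC-success.
    weight = 663 if offered else 703
    return amount_sat < dust_limit_sat + htlc_fee_sat(feerate_per_kw, weight)
```

At rusty's 253 sat/kw floor, a 1000 sat received HTLC clears an assumed 546 sat dust limit (546 + 253·703/1000 = 723 sat), so whether the vector's HTLC 0 is actually trimmed depends on the dust limit in play, which is presumably what he wanted to double-check.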
14:05 < rusty> roasbeef: sure, but I keep hoping those ugly if statements will focus us on dropping old stuff :) 14:05 <+roasbeef> hehe 14:05 <+roasbeef> t-bast: yep 14:05 < bitconner> +1 for standalone documents 14:06 < t-bast> roasbeef: that was my feeling as well while reading the PR 14:06 <+roasbeef> t-bast: then we'd start to "freeze" the spec, and do everything in extension documents 14:06 <+roasbeef> this way someone can read a single doc and impl a new feature 14:06 <+roasbeef> vs needing to navigate conditionals in the spec to make sure they implemented the correct thing 14:06 < t-bast> and later we can revisit to replace an old document by a new one, and people can look in git for the old version 14:06 < t-bast> I agree that these "ifs" are hard to read and a bit error-prone 14:06 <+roasbeef> yeh then those could be stitched together to make a contiguous spec w/ some auto-gen 14:07 < rusty> +1 to using git as intended. But a spec which lies because there's this other thing you need to read which replaced it is a horrible thing, too. 14:07 < joostjgr> sounds like a project. after the anchor pr merge please 14:07 <+roasbeef> also gets across the feature that w/ all the feature bits n stuff we have, ppl can choose what they want to implement other than like big new payment types 14:07 < t-bast> joostjgr: I feel your pain xD 14:07 <+roasbeef> joostjgr: ;) 14:07 <+roasbeef> rusty: main body could link to other stuff also 14:08 <+roasbeef> there're also some really big changes like taproot or scriptless scripts stuff that would pretty much be a re-write of certain bolts 14:08 < joostjgr> this meeting needs to be kept under control ;) 14:08 < t-bast> Sounds like something we'd have to do in a 3-day spec meeting IRL if we really want to make progress on it, let's defer for now? 14:09 < rusty> roasbeef: yeah, I actually like the idea of the top-level simply being "see for this feature". 14:09 < rusty> t-bast: yeah, it's a Big Project. 
14:09 < t-bast> Since we're encouraging new implementations to directly use static_remotekey, I'm in favor of merging #758 14:09 <+roasbeef> yeh, this is where i'd like things to head, but it's a big-ish change that we'd need to do over time 14:10 < rusty> t-bast: ack, but I haven't tested the vectors myself. 14:10 < t-bast> #action c-lightning to validate the test vectors 14:10 < t-bast> Now you'll have to :D 14:10 < rusty> LOL 14:11 < bitconner> i think joost validated them? only saw that one comment that one pubkey might be tweaked? 14:11 < t-bast> on the LL side, do you strongly oppose this change? Or is it ok? 14:11 < joostjgr> no didn't validate them 14:11 < joostjgr> i can comment without validation 14:11 < joostjgr> because of some overlap with the anchor test vector generation 14:12 < t-bast> sgtm 14:12 < bitconner> oh gotcha, okay yeah i'm in favor assuming we match up. as matt said we can always drop/extend as we please for state-machine level tests as well 14:12 < t-bast> #action finalize comments on the PR and merge once verified by enough implementations 14:12 < t-bast> Let's do a last small PR (we're already slightly over 1h) 14:13 < t-bast> But it's mine and I'm chair so... 14:13 < t-bast> #topic cltv_expiry_delta recommendations 14:13 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/785 14:14 < t-bast> There are way too many channels using very low cltv_expiry_delta on mainnet (6!!!) 14:14 < t-bast> And it's probably because that section is way too optimistic, so I suggest stressing a bit more that caution is required 14:15 < rusty> Yeah, 6 is charity. 
With anchor outputs, it's closer to possible, but TBH there's not much end-user difference between 6 and 60. 14:15 < ariard> if we increase delay of cltv_expiry_delta, we should also increase commitment broadcast delta 14:16 < BlueMatt> t-bast: maybe recommend something higher than 12, given all we've learned since the 12 recommendation was written? 14:16 < ariard> in general being more conservative with all these timelocks makes all attacks harder 14:16 < t-bast> ariard: but right now I haven't seen very wrong usage of to_self_delay, so it doesn't feel as important to fix 14:16 < t-bast> ariard: is there a paragraph in particular you want me to make more cautious? 14:16 < t-bast> BlueMatt: I'm all in favor of recommending even higher values 14:16 < ariard> t-bast: not to_self_delay, I was mentioning going-on-onchain-to-claim-incoming delay 14:17 < ariard> does this one have a name in the spec? 14:17 <+roasbeef> it should be dynamic really 14:17 < rusty> BlueMatt: I'd really like to recommend a number, but we still don't know. I think increasing the recommended minimum is Best Practice RN though. 14:17 < t-bast> ariard: oh right, and this one isn't even properly named in the spec 14:17 < ariard> roasbeef: yes we should scale them on mempool congestion 14:17 <+roasbeef> your cltv delta, changed based on what's going on in the chain, and also your past attempts to get any sort of txn confirmed 14:17 < ariard> and channel_update increase or decrease 14:17 < t-bast> yes it should be dynamic, but short term it's important to at least recommend a bigger lower bound than we currently do :) 14:17 < rusty> ariard: the spec calls that the deadline. 14:18 < t-bast> I can update the PR to expand a bit on that though 14:18 < BlueMatt> roasbeef: I dont think past on-chain closes are a good indicator of the future. 
14:18 <+roasbeef> i mean your attempts to get any transaction in the chain 14:18 <+roasbeef> could be unrelated to LN 14:18 < BlueMatt> t-bast: maybe at least for now recommend 24 or 36 blocks? 14:18 < t-bast> I don't think anything is a good indicator of the future 14:18 <+roasbeef> also the time and other factors as well since stuff is pretty cyclic 14:19 < ariard> rusty: htlc_onchain_deadline seems a good name? We should just pick a strict one, I've seen different names in every LN paper 14:19 < rusty> Actually, it calls it G in this section. The grace period " a grace-period `G` blocks after HTLC timeout before giving up on an unresponsive peer and dropping to chain" 14:19 < BlueMatt> roasbeef: right, but we know that, like, its doubly hard to get things confirmed at exactly 9am every weekday, which software likely wont be able to learn without a lot of work :p 14:19 < BlueMatt> (or whatever time it is that bitmex tries to screw everyone daily) 14:19 < t-bast> BlueMatt: perfect, I was afraid people would be reluctant, but I'll increase the values in my PR! 14:19 < bitconner> probably safer to recommend higher and only go lower if you understand the risks 14:19 <+roasbeef> lol oh yeah the bitmex txn bomb 14:20 < t-bast> #action t-bast recommend even higher value (yay!) 14:20 < ariard> at the end of the day even if you do dynamic, you may have a second-order game where people actually try to game default "autopilot" configurations 14:20 < t-bast> #action t-bast explain why a dynamic value makes sense 14:20 < bitconner> (so more towards 36) 14:21 < BlueMatt> 36 sgtm 14:21 < t-bast> #action t-bast properly name the "deadline" and recommend a higher value than 7 14:21 < t-bast> alright thanks for the feedback guys, I'll update the PR somewhat heavily 14:21 < rusty> t-bast: it has a name, "grace period" in the spec, but happy to rename. 
14:22 < bitconner> iirc lnd uses 40 atm so we are in that realm 14:22 < t-bast> rusty: a decent PR name needs underscores and backticks 14:22 < t-bast> not PR, spec name 14:23 < t-bast> isn't the grace period still for the downstream node (instead of the upstream one)? 14:23 < t-bast> I probably need to re-read it carefully 14:24 < rusty> t-bast: G is how long you wait once peer should have failed HTLC before going onchain. 14:24 < rusty> R == worst-case reorg depth. 14:24 < rusty> S = delay before txn is mined. 14:24 < t-bast> rusty: right, so I think that what ariard and I are talking about is yet another parameter 14:25 < t-bast> rusty: what we're talking about is how long you would wait for an upstream peer to acknowledge and remove a fulfilled HTLC before going on-chain (for the upstream channel, not the downstream one) 14:25 < rusty> ... we use G for both. 14:25 < t-bast> oh gotcha 14:25 < t-bast> then 1 or 2 blocks is clearly not what I'd recommend! 14:25 < rusty> "B now needs to fulfill the incoming A->B HTLC, but A is unresponsive: B waits `G` more 14:25 < rusty> blocks before giving up waiting for A. A or B commits to the blockchain." 14:25 < t-bast> I set it to 24 by default on eclair 14:26 < ariard> the deadline for received HTLC this node has fulfilled? 14:26 < rusty> *24* blocks.... wow, that's a long time for your peer to be offline! 
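[Editor's note] To make rusty's G/R/S parameters concrete, BOLT 2's "`cltv_expiry_delta` Selection" section composes them into a worst-case bound. The sketch below is an editorial reconstruction of that reasoning (formula recalled from the spec; verify against the current text before relying on it):

```python
# Editorial sketch: composing rusty's parameters into the worst-case
# cltv_expiry_delta per the reasoning in BOLT 2's
# "`cltv_expiry_delta` Selection" section (formula recalled from the spec).

def min_cltv_expiry_delta(R: int, G: int, S: int) -> int:
    """R: worst-case reorg depth, G: grace period before dropping to chain,
    S: blocks until a broadcast transaction is mined."""
    assert R >= 1  # the derivation assumes at least one block of reorg safety
    return 3 * R + 2 * G + 2 * S
```

Even modest illustrative values (R=2, G=2, S=12) yield 34 blocks, far above the 6-block deltas t-bast observed on mainnet.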
14:26 < t-bast> it's not only about being offline 14:27 < t-bast> it's the time you're confident your HTLC-success will be confirmed 14:27 < t-bast> otherwise you'll enter a race with the upstream's HTLC-timeout tx 14:27 < t-bast> (note: it's not your HTLC-success in that case, it's your claim-preimage tx) 14:27 < rusty> t-bast: that's a derived value, though. 14:28 < ariard> and you can RBF this one like you want 14:28 < t-bast> yes that's true, but we currently don't have the logic to automatically RBF 14:28 < t-bast> we'll have it soon, but for now 24 makes me feel safe-ish 14:28 < rusty> t-bast: I think the "`cltv_expiry_delta` Selection" section would benefit from a close re-reading. 14:29 < t-bast> rusty: great, I'll spend some time on it this week and update my PR 14:29 < ariard> yes but the bumping logic aka "when next block I'm going to bump this" can be the same between RBF/CPFP 14:29 < rusty> But I agree the numbers are too low. 14:29 < rusty> OK, 1 minute to hard stop for me. 14:29 < renepick1> Hey since it is getting late and the meeting is almost over I would like to ask if you could have a look and give me feedback for https://github.com/lightningnetwork/lightning-rfc/pull/780 soonish? It is not ready, and I made an early PR on purpose so that I can continue the work based upon the feedback. I have been here for the last 3 meetings and 14:29 < renepick1> it has not been discussed yet (neither here nor on github). I would like to continue working / improving it soon. I won't be here for the next spec meeting though.. thank you 14:29 < t-bast> thank you all for the feedback! let's end now, we've done a great meeting ;) 14:30 * roasbeef g2g 14:30 < t-bast> renepick1: honestly I'd love to have time to review it, but I don't think I'll be able to in the short term... 
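[Editor's note] ariard's point that the "when do I bump" schedule can be shared between RBF and CPFP can be sketched as a policy that only decides what feerate to aim for next, leaving the bump mechanism as a separate concern. All names below are hypothetical; `estimate_feerate_for` stands in for any fee estimator:

```python
# Sketch (hypothetical names) of a mechanism-agnostic bumping policy:
# given how close the confirmation deadline is, pick the feerate for the
# next RBF replacement or CPFP child.

def next_bump_feerate(deadline_height: int, current_height: int,
                      current_feerate_per_kw: int, estimate_feerate_for) -> int:
    blocks_left = max(1, deadline_height - current_height)
    target = max(1, blocks_left // 2)      # aim to confirm well before the deadline
    wanted = estimate_feerate_for(target)  # e.g. a wrapper around a fee estimator
    # An RBF replacement must pay a strictly higher feerate; bump by at
    # least 25% per round so we make progress even if the estimate is flat.
    return max(wanted, current_feerate_per_kw * 5 // 4 + 1)
```

Calling this once per block with a shrinking `blocks_left` produces an escalating feerate schedule regardless of whether the bump is applied via RBF or CPFP.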
14:30 < t-bast> #endmeeting 14:30 < lightningbot> Meeting ended Mon Jun 22 21:30:58 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 14:30 < lightningbot> Minutes: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-06-22-20.08.html 14:30 < lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-06-22-20.08.txt 14:30 < lightningbot> Log: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-06-22-20.08.log.html 14:31 < t-bast> renepick1: I'll try doing an early review though, might not be very in-depth but it will be a start 14:31 < renepick1> @t-bast take your time. there are more people that can do it 14:32 < renepick1> I also understand it is rather complex / orthogonal to the stuff that people have been working on currently 14:32 < t-bast> Thanks everyone for your time, please have a quick look at the PRs we didn't cover during the meeting (they're small and just need a small comment here and there) 14:32 < renepick1> but I thought instead of specing everything out that early feedback might be useful 14:33 < renepick1> and I am somewhat blocked right now because there is no feedback whatsoever. maybe I should do a writeup on the mailing list? 14:33 < t-bast> renepick1: I agree, that's the right mindset ;), unfortunately I think it's a bit tough for all teams right now, too many interesting things to do, too little time! 14:34 < t-bast> renepick1: yes a ML post would likely be useful, there are more people answering on the ML than on the spec's github 14:35 < renepick1> yeah but please remember for most people it is actually a job as they are paid by companies to do that stuff. I am doing that voluntarily and it feels a little bit strange (like not being welcome) after long research to come up with something that is just ignored. 
I understand everyone has a lot of stuff to do and that the topic is not in your core 14:35 < renepick1> interest / focus that is why I was just sending a kind reminder (: 14:36 < renepick1> it is also hard for me to catch up with the core team as you guys work on this stuff on a daily basis so I feel this is where the disconnect might come from. anyway I wish everyone a great time (: 14:39 < t-bast> yeah I totally get that 14:39 < t-bast> I think that unfortunately it's just that things will take much more time than you're hoping :) 14:40 < renepick1> it's actually ok! I would rather have a thorough review than something quick and hasty 14:40 < renepick1> I hoped my feedback was well placed. as I said I understand that the topic that I am suggesting is somewhat orthogonal 14:40 < t-bast> but I agree that it's quite hard when you don't have the luxury to be full time on the project 14:41 < renepick1> it's ok. it also gives me freedom (: 14:41 < renepick1> anyway have to go! see you and thanks for taking my concern seriously 14:41 < t-bast> sure, see you! 
22:43 < norisg> Hi, I am reading the lnbook and was wondering what happens if the channel has to be closed (force-closed because the channel partner is offline) and the fee of the commitment transaction is so low that it never gets into a block? 22:43 < norisg> worst case scenario if blocks are full for a while 22:45 < norisg> it's said in the book that commitment transaction fees are up to 5 times higher, but what happens if this is not enough? 22:48 < ja> norisg: with anchor commitments, you can increase the fee 22:50 < norisg> ja: is it already considered in BOLT 1 22:50 < norisg> ? 22:51 < ja> norisg: it has not been merged yet, there is a PR. judging from the meeting today, i think it may get merged soon 22:52 < ja> the whole motivation for anchor commitments, as i understand it, is to address that issue you were worried about 22:52 < ja> norisg: but why do you ask about BOLT 1? don't you mean BOLT 3? 22:53 < norisg> ja: ok I thought BOLT 1 is implemented and BOLTs 2/3 are in a conception state, but I am not up to date 22:54 < norisg> ja: can I tell which BOLTs are implemented from the release number of the lightning software? 22:54 < ja> norisg: what do you mean by "release number" ? 22:55 < ja> the version numbers of the lightning implementations do not say anything about which version of the spec that they implement 22:56 < norisg> ja: like the github tag for c-lightning for instance v.08.1 22:56 < ja> the markdown documents you see in the root of the lightning-rfc repo are what people refer to when they say e.g. "BOLT 3" or "BOLT 2" 22:58 < ja> norisg: i can't find any tag with that name. but i see a tag named v0.8.2.1. 
that version number is unrelated to the numbers of the lightning-rfc standard documents (BOLTs) 22:59 < norisg> ja: ok what I don't really understand is why we are writing on two different BOLTs 2/3, or can I consider BOLT 2 final and now new requirements are introduced in BOLT 3? 23:00 < ja> norisg: the standard is 'living', any part of it could be amended. the protocol has "feature flags" so that an implementation can know whether the counterparty supports a certain 'new' feature 23:01 < ja> norisg: see this document : https://github.com/lightningnetwork/lightning-rfc/blob/master/09-features.md 23:01 < ja> that list of features lets you see which BOLT was amended to support a newly added feature 23:01 < norisg> ja: ok thanks for the help, have to do the reading now :) --- Log closed Tue Jun 23 00:00:59 2020
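[Editor's note] The feature-flag mechanism ja points norisg to (BOLT 9) advertises support as bits in a big-endian byte vector: an even bit means the feature is compulsory, the adjacent odd bit means it is optional. A minimal editorial sketch of testing such a bit (function name is ours):

```python
# Editorial illustration of BOLT 9 feature bits: features are carried as a
# big-endian byte vector; even bit = compulsory, odd bit = optional.

def has_feature(features: bytes, bit: int) -> bool:
    """True if `bit` is set in a big-endian feature vector."""
    byte_index = len(features) - 1 - bit // 8
    if byte_index < 0:
        return False  # vector too short to carry this bit
    return bool(features[byte_index] & (1 << (bit % 8)))
```

For example, bit 13 is the optional `option_static_remotekey` flag, so a features field of 0x2000 advertises optional static_remotekey support.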