--- Log opened Mon Feb 03 00:00:28 2020
11:00 < BlueMatt> mtg
11:00 < cdecker> Good evening everyone
11:00 < niftynei> hello hello
11:00 < t-bast> Hey guys
11:00 < araspitzu> Hi everyone!
11:01 < cdecker> Everyone here for the spec meeting?
11:01 < BlueMatt> that's what I'm here for, dunno what you came by for?
11:02 < t-bast> Just hanging around
11:02 < BlueMatt> is this chan set to need auth to send?
11:02 < BlueMatt> jkczyz is complaining he can't send
11:02 < t-bast> No, anyone can send
11:02 < t-bast> What IRC client?
11:02 < cdecker> I moved the discussion about the status of anchor outputs to next meeting since nobody volunteered to give the update
11:02 < t-bast> Oh no, right, I think you do need auth
11:03 < cdecker> Any chance we have a volunteer for a quick status update here?
11:03 < BlueMatt> can roasbeef fix that? since he can?
11:03 < niftynei> i think you have to be auth'd with nickserv to send messages
11:03 < joostjgr> hi all, yes we can give an update
11:03 < cdecker> I think roasbeef is only voiced, not oped
11:03 < t-bast> niftynei is right
11:03 < joostjgr> on the anchor outputs
11:03 < BlueMatt> can we disable that?
11:03 < BlueMatt> who is actually auth'ed?
11:03 < cdecker> Great, thanks joostjgr :+1:
11:03 < BlueMatt> errr, op'd
11:03 < t-bast> Hi Joost!
11:03 * cdecker moves that agenda item back
11:04 < niftynei> my IRC client tells me that there are zero OPs in the channel
11:04 < cdecker> Yep
11:04 < cdecker> Someone had set it to +n due to spamming
11:05 < BlueMatt> right, I presume someone can get ChanServ to give them ops, though
11:05 < niftynei> really excited about this agenda cdecker!
11:05 < BlueMatt> anyway, we should start
11:06 < cdecker> Yep, let's
11:06 < cdecker> Anybody volunteering to chair? otherwise I'm happy to do it
11:06 < t-bast> Happy to let you do it!
11:06 < cdecker> #startmeeting
11:07 < lightningbot> Meeting started Mon Feb 3 19:06:59 2020 UTC. The chair is cdecker. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:07 < lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
11:07 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/issues/731
11:07 < cdecker> Any follow-ups from last week? I think we executed on all actions we decided last time, or am I missing something?
11:08 < t-bast> There was some discussion on https://github.com/lightningnetwork/lightning-rfc/issues/728
11:08 < t-bast> I don't know if we reached something satisfactory, but we do have mitigations
11:08 < t-bast> thanks to BlueMatt and halseth
11:09 < cdecker> I think BlueMatt wanted to have a wider discussion of #726 (which features to announce where), and we decided to make a new issue for that discussion
11:09 < cdecker> Great, I love seeing discussions on GH ^^
11:09 < BlueMatt> t-bast: yea, we should probably run with the idea there and make some PR to specify clients implement it, but I don't think that needs discussion here.
11:10 < BlueMatt> maybe you can write up what it appears y'all implemented in eclair?
11:10 < t-bast> BlueMatt: agreed
11:10 < t-bast> BlueMatt: we haven't implemented anything yet, but we will, and I'll do a write-up to append to the issue and maybe to the spec too
11:10 < cdecker> Ok, so a brief explanation of what Eclair does on issue #728?
11:10 < rusty> Sorry I'm late, needed coffee.
11:11 < cdecker> rusty: same here ^^
11:11 < t-bast> #action t-bast: implement proposed mitigation for #728 and document that
11:11 * t-bast waves at Rusty
11:11 < BlueMatt> cdecker: yea, I think last meeting's discussion on #726 (which I was not here for, apologies) captured some of the issues, but by no means all of them - there is a ton of ambiguity in how flat features (which wasn't really introduced by flat features, but it brought it to the forefront) should work moving forward
11:11 < cdecker> Ok, that seems to conclude the pending items from last time, shall we get started with today's agenda?
11:12 < cdecker> No problem BlueMatt, better to address the fundamental problem without the distraction of a concrete case that is easily settled ^^
11:12 < t-bast> cdecker: sgtm
11:12 < cdecker> #topic Single-option large channel proposal #596
11:12 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/596
11:12 < t-bast> WUMBO
11:13 < cdecker> This has been pending for quite a while, I don't think there's anything major, just needs acks
11:13 < araspitzu> i just committed a fix for the last nit by t-bast
11:13 <+roasbeef> t-bast: how do y'all handle scaling confirmations? particularly if there's a push amt?
11:14 < cdecker> roasbeef: what do you mean by scaling confirmations? I might be missing some context
11:14 < t-bast> roasbeef: currently we don't, we keep it at 6
11:14 < cdecker> Ah, I see, sorry for the dumb question
11:14 < t-bast> you mean waiting for more confirmations if the channel is huge?
11:14 <+roasbeef> yeh, so like a 100 BTC channel
11:15 < t-bast> that's not #reckless at all
11:15 * cdecker hopes LNBig isn't listening xD
11:15 < t-bast> I'd do it with 6 confs
11:15 < BlueMatt> 6 confs is....wayyyy too low for 100 BTC
11:15 < t-bast> right now we're sticking to 6 confs, but the wumbo channels we're seeing are more around 1 BTC
11:15 < t-bast> for larger amounts it's true that we should adapt the confirmation window
11:15 < BlueMatt> anyway, that should be up to the implementation?
11:16 < t-bast> BlueMatt: yeah, I was trolling on that one ;)
11:16 < cdecker> Does it really matter for now? Wumbo is quite orthogonal to negotiating the number of confirmations, isn't it?
11:16 < BlueMatt> yes
11:16 < t-bast> Yes this is true, feels implementation-specific
11:16 < cdecker> I can just unilaterally defer `funding_locked` and don't need to communicate that at all, do I?
11:16 < t-bast> cdecker: you could, but it's not very nice :)
11:17 <+roasbeef> yeh it's implementation-specific, was wondering what they've done since iirc they use them in production
11:17 < BlueMatt> you can also specify your delay?
11:17 <+roasbeef> but even then, we may want to have an advisory section
11:17 < BlueMatt> as you should
11:17 <+roasbeef> particularly as ppl are accepting zero-conf channels these days
11:17 < t-bast> roasbeef: agreed, an advisory section would be useful
11:17 < rusty> Worthy consideration, though the amount you can steal by opening and reorging a channel is limited to the total outgoing capacity of the node. But yeah, there's a logic which says "wait until the block rewards exceed the amount of the payment"...
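A minimal sketch, in Python, of the block-reward heuristic rusty alludes to; the subsidy constant reflects early 2020, and the bounds and function name are illustrative, not from any spec:

    import math

    def funding_confirmations(funding_sat: int,
                              block_subsidy_sat: int = 1_250_000_000,  # 12.5 BTC
                              min_confs: int = 6,
                              max_confs: int = 100) -> int:
        # Wait until the subsidy a miner would forfeit by reorging the
        # funding tx exceeds what the channel is worth stealing, clamped
        # to sane bounds.
        by_value = math.ceil(funding_sat / block_subsidy_sat)
        return max(min_confs, min(max_confs, by_value))

    funding_confirmations(100 * 100_000_000)  # 100 BTC channel -> 8 confirmations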
11:17 < cdecker> Ok, so a new PR with an advisory section, and merge #596?
11:18 < t-bast> cdecker: SGTM
11:18 < t-bast> BlueMatt: what kind of calculation would you recommend to scale the waiting period?
11:18 < araspitzu> i think an advisory section would be useful/nice, especially if turbo channels make it to the spec
11:18 < cdecker> Any objections? Any volunteers to write the advisory (possibly someone with an idea for a sensible algo)?
11:19 < rusty> cdecker: feature numbers are weird? They should be the next one, right? So 18/19?
11:19 < t-bast> rusty's right, we should merge with the latest available feature bits
11:19 < BlueMatt> t-bast: I don't think we can reasonably capture a Good recommendation in an advisory section, but having one that notes that you should do *something* is a good idea :)
11:19 < t-bast> BlueMatt: that's a start :D
11:19 < BlueMatt> as for how to scale, i think nodes need to be considering their total value across all channels, not just the one.
11:19 < cdecker> Well, it's just 3 more bytes in the features, but we can compress
11:19 < rusty> Yeah, mention the problem, imply there's a clever solution. Makes us look smart without actually making us responsible if it goes wrong :)
11:20 < t-bast> a-la Fermat
11:20 < araspitzu> yes, those are from the waiting room; action point for me to change the PR to use 18/19?
11:20 < cdecker> #action cdecker to change the feature bits to 18/19 and merge #596
11:20 < cdecker> Anyone taking the lead for the advisory? We can hash out details on the PR I guess
11:21 < t-bast> If it's just mentioning the issue without providing the exact algo, araspitzu or I can do it
11:21 < rusty> cdecker: "Note: an implementation may want to vary the number of confirmations required for a large channel."
11:21 < niftynei> "a-la Fermat" ahaha :thinking_face:
11:21 < cdecker> #action t-bast and araspitzu to brainstorm an advisory section in a new PR explaining how confirmations should scale with the funding amount
11:22 < t-bast> niftynei: I don't always make maths jokes, but when I do...
11:22 < araspitzu> cdecker +1
11:22 < rusty> (I will personally tweet-slap any person who places a margin note in one of the BOLTs!)
11:22 < cdecker> rusty: They should be inversely proportional xD
11:22 < cdecker> Awesome, any last words for the wumbo PR?
11:22 < cdecker> Let's make it official ^^
11:22 < t-bast> ACK!
11:23 < rusty> Ack!
11:23 <+roasbeef> I feel like we're missing the ability of nodes to advertise "how large"
11:23 < cdecker> Well, that goes more into liquidity providers I guess
11:23 <+roasbeef> as rn they'd just try and fail and not really get any feedback
11:23 < cdecker> As a node opening a channel I am in control, as a fundee I can fail HTLCs that exceed my risk tolerance
11:24 <+roasbeef> aliases aren't really used for anything atm, so we could possibly use that for signalling
11:24 <+roasbeef> t-bast: also, do y'all just allow infinite size, or is there a sort of config soft cap?
11:24 < cdecker> Ouch, aliases
11:24 < niftynei> have we added TLVs to node ann's yet?
11:24 < t-bast> roasbeef: I believe there's a cap, I don't remember what it is yet
11:25 < cdecker> roasbeef: would you like to create a proposal for this sort of signaling?
11:25 < t-bast> niftynei: that's for one of the next topics: TLV everywhere to simplify, then we can add one there for capacity?
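For context on the 18/19 numbering: BOLT 9 feature vectors are big-endian byte strings where bit 0 is the least significant bit of the last byte, so bit 19 needs a third byte (cdecker's "3 more bytes"). A sketch, with the helper name being ours:

    def set_feature_bit(features: bytearray, bit: int) -> bytearray:
        # Grow the vector from the front if the bit doesn't fit yet.
        needed = bit // 8 + 1
        if len(features) < needed:
            features = bytearray(needed - len(features)) + features
        features[len(features) - 1 - bit // 8] |= 1 << (bit % 8)
        return features

    # option_support_large_channel, odd (optional) bit 19:
    wumbo = set_feature_bit(bytearray(), 19)
    assert wumbo.hex() == "080000"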
11:25 < cdecker> Yep, that's #714, down 2 items in the agenda ^^
11:26 < rusty> It's common to have a minimum accepted channel size, but not sure if it's worth advertising a range...
11:26 < cdecker> So let's move on down the list and we can brainstorm a bit at the end, sounds good?
11:26 <+roasbeef> was thinking just a max
11:26 <+roasbeef> sure, seems either a new tlv field for the node ann or overloading the alias are possible candidates
11:26 < t-bast> roasbeef: I think for max that could be in your reply to a first open that's too big (in a tlv)
11:27 <+roasbeef> reply?
11:27 < t-bast> if we can avoid putting that in node_anns, which are gossiped and consume a lot of bandwidth, that'd be nice
11:27 < cdecker> Hm, signaling up front is a way nicer experience
11:27 <+roasbeef> cdecker: agreed
11:28 < cdecker> Let's brainstorm later / on GH
11:28 < cdecker> #topic BOLT 7: be more aggressive about sending our own gossip. #684
11:28 < rusty> But we have actual experience with minima IRL. Has it proved a problem to have an error msg which says what the min is?
11:28 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/684
11:28 < rusty> cdecker: Wow, is that still open? OK, Ack.
11:29 < cdecker> It is, and requires a rebase :-)
11:29 < cdecker> Happy to rebase once we have all the acks
11:29 < cdecker> I think we need a sign-off from rust-lightning and lnd
11:29 < t-bast> I believe lnd is already doing that, isn't it?
11:30 <+roasbeef> we do a weaker version of this rn, for 0.10 we aim to implement a stronger version of it
11:30 < cdecker> I think so too, mainly looking for formulation acks in the spec at this point
11:30 <+roasbeef> but yeh, on board w/ the general idea
11:30 < rusty> Yeah, it's a subtle trap :(
11:30 < BlueMatt> RL doesn't implement any of the gossip filtering stuff as of yet...gotta love optional features. so I don't have a horse in this race
11:31 < BlueMatt> though note that RL also doesn't use timestamp to mean timestamp
11:31 < BlueMatt> the original spec did not require it be timestamps (only monotonically increasing), and we don't make syscalls, so we don't have a time source
11:31 < cdecker> Ok, so no objections I think
11:31 < t-bast> cdecker: sgtm
11:31 < cdecker> BlueMatt: yes, that is my fault, I wanted logical timestamps but then also came up with the dumb auto-pruning...
11:31 < BlueMatt> (and assuming all lightning nodes know the current time in any more precision than the block header sucks)
11:32 < cdecker> #action cdecker to rebase and merge #684
11:32 < BlueMatt> eventually we need to move to header-based timestamps, but I don't think we do yet
11:32 < BlueMatt> in any case, please avoid breaking low-precision (+/- 2 hours) timestamps!
11:32 <+roasbeef> for actions I think we should still give ppl a chance after the meeting to glance at the PR and hit the approve button on gh
11:32 < rusty> BlueMatt: with gossipv2 I want to use block numbers, for exactly this reason.
11:32 < cdecker> Coincidence has it that more advanced gossip sync mechanisms are our research topic today
11:32 < BlueMatt> right.
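A sketch of the tension BlueMatt and cdecker describe: accepting a replacement update only needs per-channel monotonicity, but BOLT 7's two-week pruning rule drags wall-clock time back in (names and structure ours):

    TWO_WEEKS = 14 * 24 * 3600

    def should_replace(old_ts: int, new_ts: int, now: int) -> bool:
        if new_ts <= old_ts:
            return False  # not strictly newer: stale or duplicate update
        if now - new_ts > TWO_WEEKS:
            return False  # a pure logical clock fails here: peers would
                          # prune this update as stale immediately
        return True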
11:32 < cdecker> :-)
11:33 <+roasbeef> don't think any of us rely on time in a strong fashion at all, beyond how bitcoin does
11:33 < cdecker> #topic Bolt 1: Specify that extensions to existing messages must use TLV #714 (@t-bast)
11:33 < BlueMatt> better-gossip is the reason we never bothered to implement the current gossip filter stuff
11:33 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/714
11:33 < BlueMatt> roasbeef: gossip filters definitely do.
11:33 < cdecker> t-bast: would you do the honors of walking us through this one?
11:33 <+roasbeef> depends on how you interpret "strong" ;)
11:33 < t-bast> cdecker: my pleasure
11:34 < t-bast> So the idea is to make it explicit that all messages can be extended via a TLV stream
11:34 <+roasbeef> perhaps let's do anchor outputs before this since it's more immediate and concrete (spec PR, working impl)
11:34 < t-bast> However there are a few optional fields in some messages that make this not-so-obvious
11:34 < rusty> t-bast: "- If it doesn't include `extension` fields: - MUST omit the `extension` TLV stream entirely." this doesn't make sense any more (now there's no length prefix), since an empty TLV is identical to a missing one?
11:35 < t-bast> rusty: yes, I can probably improve that one
11:35 * BlueMatt doesn't get why we need this - is anyone planning on implementing logic to read any extra message bytes as TLV right now? otherwise it seems like future guidance that just ties us to something unnecessarily
11:35 < cdecker> So my main concern is that it limits us to only use TLV in future, which might be ok
11:35 < t-bast> BlueMatt: we are adding TLV streams in many messages already, and it made us realize that some existing extensions (upfront_shutdown_script) make it hard
11:35 < BlueMatt> right, but, unless there's a reason to, why do so?
11:36 < t-bast> So I think it's good to make that explicit in the spec to avoid future work from making this switch painful
11:36 < t-bast> The ability to add such TLV streams allows us (and others) to experiment with new features easily without forking from the network
11:36 < rusty> t-bast: I like the *idea* of making those fields compulsory so TLV is easier, but this will break older Eclair if done today, right?
11:36 < BlueMatt> i mean we can add a non-normative "future extensions should heavily consider TLV", but my question is why make it normative, and why worry about such messages until we have to?
11:37 < t-bast> rusty: as of 0.3.3, released Friday, we're good to go ;)
11:37 < rusty> BlueMatt: and such advice is meta, thus belongs in CONTRIBUTING.md.
11:37 < BlueMatt> also, for those messages, we can just say "they are required if you want to go past them"
11:37 < BlueMatt> rusty: agreed
11:37 < rusty> t-bast: it's premature :)
11:37 < t-bast> rusty: and it won't break older eclair, only potentially Phoenix, which won't see those messages anyway
11:37 < BlueMatt> like, if you want to add new fields, then those fields become mandatory, but no need to break it?
11:37 < rusty> t-bast: um, the ACINQ node got this wrong until 3? days ago?
11:38 < t-bast> rusty: yes, because we were waiting to deploy the updated version while this PR wasn't making progress on the spec, but now the ACINQ node runs 0.3.3 which fixes it
11:38 < rusty> BlueMatt: technically there's an issue about parsing TLV fields (and breaking if there's an even one) vs ignoring them. But since even-fields-I-don't-expect is a Shouldn't Happen, it's hard to care.
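For reference, a minimal sketch of the BOLT 1 TLV stream format being discussed: records are (BigSize type, BigSize length, value) with strictly increasing types; unknown even types are a hard failure, unknown odd types may be ignored ("it's OK to be odd"). Helper names are ours:

    def read_bigsize(buf: memoryview, i: int) -> tuple[int, int]:
        # BOLT 1 BigSize: 1-, 3-, 5-, or 9-byte big-endian varint.
        tag = buf[i]
        if tag < 0xfd:
            return tag, i + 1
        width = {0xfd: 2, 0xfe: 4, 0xff: 8}[tag]
        return int.from_bytes(buf[i + 1:i + 1 + width], "big"), i + 1 + width

    def parse_tlv_stream(payload: bytes) -> dict[int, bytes]:
        buf, i, last_type, records = memoryview(payload), 0, -1, {}
        while i < len(buf):
            t, i = read_bigsize(buf, i)
            length, i = read_bigsize(buf, i)
            if t <= last_type or i + length > len(buf):
                raise ValueError("malformed TLV stream")
            records[t] = bytes(buf[i:i + length])
            i, last_type = i + length, t
        return records  # caller rejects unknown even types, skips odd ones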
11:38 < t-bast> rusty: other eclair nodes don't run the custom extensions that broke this
11:38 < rusty> t-bast: OK.
11:39 <+roasbeef> (ignore my last msg lol, thought it was a diff subject)
11:39 < t-bast> BlueMatt: this is coupled with the test message range, to allow people to experiment with features before proposing them in the spec
11:39 < rusty> So, the only spec change here is changing "ignore additional data" to "parse additional data as TLV".
11:39 <+roasbeef> yeh, basically we need to make all the existing not-really-optional fields explicitly mandatory to build any new extensions on top
11:40 < t-bast> rusty: mostly, with the only exception being how we clarify the case of upfront_shutdown_script
11:40 < t-bast> I've got a suggestion to retro-fit upfront_shutdown_script as a TLV field with type 0 in the last comments on the PR
11:40 < niftynei> the proposal for open_channelv2 moves upfront_shutdown_script to a TLV ;)
11:40 * BlueMatt doesn't really want to implement "parse as TLV to validate, and then ignore everything you just parsed and store it as-is so that you can re-serialize as-is for signature verification"
11:40 < t-bast> niftynei: yaay
11:41 < BlueMatt> at least for the announcement messages
11:41 <+roasbeef> t-bast: what about just making it be all zeroes if you don't want it?
11:41 < t-bast> roasbeef: that's exactly how it ends up being, but I think it makes the spec less messy if that can also be understood as a TLV field
11:41 <+roasbeef> ahh, i see the encoding thing now
11:42 <+roasbeef> hmmm, but then this isn't really backwards-compatible, and now we have 2 ways to parse the same field?
11:42 < rusty> We always send an upfront_shutdown_script (may be a 0-byte one). Don't mess with it until open_channel2. Happy to make it compulsory.
11:42 < t-bast> no, it is fully backwards-compatible because the encoding ends up being the same
11:42 <+roasbeef> mandatory feels like an easier update path, less cleverness
11:43 < t-bast> (by sheer luck)
11:43 <+roasbeef> we also have other "not really optional" fields we need to handle as well
11:43 < t-bast> Ok, I can make this field mandatory if you prefer, without transforming it into a TLV
11:44 < t-bast> Yes, but the others are more easily handled, all implementations already include them all the time I believe
11:44 < rusty> BlueMatt: that requirement to preserve unknown msg tails already exists though...
11:44 < t-bast> I listed that in the last commit message
11:44 < t-bast> roasbeef: the only other optional field is the commitment point in channel_reestablish
11:45 < BlueMatt> rusty: yes, my point is that the diff here is that you'd parse that data as TLV, then ignore all the parse results unless it fails
11:45 < t-bast> this one can easily be made mandatory IMO
11:45 < BlueMatt> which, ugh
11:45 < rusty> t-bast: but if I start parsing open_channel insisting that there be a (maybe 1 zero byte) upfront_shutdown_script, I think old eclair will no longer open channels with me?
11:45 <+roasbeef> t-bast: iirc max_htlc as well in chan upd
11:45 < rusty> BlueMatt: well, that now applies for every msg you receive, so it's not so bad?
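t-bast's "by sheer luck" remark can be made concrete: for any script shorter than 253 bytes, the legacy u16-length-prefixed field and a TLV record of type 0 serialize to the same bytes, because the high byte of the u16 length doubles as the one-byte BigSize type 0. A sketch (helper names ours):

    def legacy_field(script: bytes) -> bytes:
        # Legacy upfront_shutdown_script: u16 big-endian length + script.
        return len(script).to_bytes(2, "big") + script

    def tlv_type0(script: bytes) -> bytes:
        # TLV record: type 0 (one-byte BigSize) + one-byte BigSize length + value.
        assert len(script) < 0xfd, "single-byte BigSize lengths only"
        return bytes([0x00, len(script)]) + script

    script = bytes.fromhex("0014") + bytes(20)  # e.g. a P2WPKH scriptPubKey
    assert legacy_field(script) == tlv_type0(script)
    assert legacy_field(b"") == tlv_type0(b"") == b"\x00\x00"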
11:46 < t-bast> rusty: you wouldn't, because we don't set the upfront_shutdown_script feature bit, so you're not allowed to send a non-empty script :)
11:46 < BlueMatt> rusty: no, i mean you'll still have different parsing for the announce messages, but, whatever
11:46 < cdecker> Ok, seems that there is still quite a lot to discuss about this one, do we want to hash it out now (and forgo the other topics) or do we want to defer?
11:46 < BlueMatt> I don't feel *super* strongly here (sorry if it's coming across as such), just noting that it seems like a bit of jumping the gun
11:46 < t-bast> roasbeef: this one is gated on another byte in the message, so the logic can stay as it is today (channel_flags)
11:46 <+roasbeef> t-bast: ah yeh...that was kinda weird in retrospect
11:47 < t-bast> roasbeef: heh, yeah it was
11:47 < rusty> t-bast: exactly, if you open a channel with me, your msg will lack that field and be malformed (once I remove the old logic which didn't read that field if you didn't set the option)
11:47 <+roasbeef> i'm on board w/ just making it mandatory though, we have a PR for lnd that is going in this direction
11:47 < t-bast> cdecker: agreed, thanks for the feedback, if you're ok we can keep discussing that on the PR? I'll make the upfront_script mandatory instead of my TLV hack
11:48 < t-bast> rusty: I don't think so, right now the spec allows me to send an empty upfront_script regardless of the features advertised
11:48 < cdecker> #action everyone to continue the discussion of #714 on GH
11:48 < rusty> t-bast: and does old eclair do that?
11:48 * cdecker is sorry for being so pushy, but these meetings are too short to hash out every detail
11:48 < cdecker> #topic BOLT7: reply_channel_range parameter #560
11:49 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/560
11:49 < cdecker> Now this should be an easy one
11:49 < t-bast> rusty: let's take this to the PR, thanks for the feedback!
11:49 < cdecker> It's a clarification on how the parameters should be interpreted when building and receiving range queries
11:50 < cdecker> rusty: would this require a clarification of what complete=1 means?
11:50 < t-bast> "MUST set with `chain_hash` [...]" -> should that "with" really be there? I don't understand this sentence
11:51 < cdecker> Good catch, I think that "with" needs dropping
11:52 < rusty> cdecker: ack
11:52 < cdecker> Is that the only issue people have with this PR? If that's the case I'll fix it up and merge
11:53 < t-bast> I must admit the next two conditions aren't very clear to me
11:53 < t-bast> the "checking" is confusing me a bit
11:53 < cdecker> It's a bit redundant, yes
11:53 <+roasbeef> heh yeh, they had the same interpretation of complete as we did in the past
11:54 < rusty> OK, so high level: you can send multiple reply_channel_range in response to a query_channel_range (due to the 64k limit, mainly).
11:54 < rusty> You could choose to implement these responses as precanned 1024-block chunks, for example.
11:54 < rusty> But your answers have to cover the range they asked for.
11:55 < t-bast> rusty: that sounds good
11:55 < sstone> yes, but cover exactly, or can you overlap?
11:55 < cdecker> So we're allowed to send a channel for height b even though they asked for [b+3, b+1000]?
11:55 < rusty> e.g. if they ask for first_blocknum = 500,000 and number_of_blocks = 10,000. You can theoretically give 10,000 responses, one per block. (DO NOT DO THIS)
11:56 <+roasbeef> sstone: we allow an overlap of one block iirc
11:56 < rusty> sstone: that should have been specified. Wasn't. We recently loosened the requirements so that each one must give *some* new information, but may overlap IIRC.
11:56 < t-bast> I think it would be useful to have a quick look at https://github.com/lightningnetwork/lightning-rfc/pull/730 in the same review process, this is tightly linked
11:56 < cdecker> Shouldn't it be "MUST overlap"?
11:57 < sstone> the current spec implies you can send "more" as long as you cover what the request asks for; #560 seems to imply that replies should cover the exact range that was queried, which is imho wrong
11:57 <+roasbeef> wpaulino: ^
11:57 < sstone> there is also a difference between lnd and cl/eclair
11:57 < rusty> cdecker: MUST overlap the area asked, yes. I was implying that we should have said MUST NOT overlap other responses.
11:57 < cdecker> Ah ok, I see
11:57 <+roasbeef> sstone: as of 0.9? iirc we're now all in sync (other than that issue we reported to eclair)
11:58 < cdecker> So since #730 is indeed very closely coupled with this one, let's put that on the table as well
11:58 < sstone> cl/eclair allow gaps between successive channel_range_replies, lnd does not (?)
11:58 < rusty> Yes, the intent was that you can over-send. c-lightning keeps a bitmap; it would have been nicer to assert replies must be in ascending order :(
11:59 < sstone> basically the question is: do we allow gaps between replies?
12:00 <+roasbeef> we changed to always send the complete range, so no gaps
12:00 <+roasbeef> as before we weren't compatible w/ CL
12:00 < rusty> sstone: hmm, c-lightning will not. We keep a bitmap of blocks, and clear as we receive responses. We need to clear all bits to consider it complete.
12:00 < rusty> If everyone sends in block-ascending order, that would simplify the spec?
12:00 < t-bast> But on the receiver side (not sender-side) we've seen that discrepancy in what's accepted
12:01 <+roasbeef> lnd has special cases to accept the old way we did things, but we'll send things "properly" now and do a little more validation on what we recv (if we don't detect it's the legacy case)
12:02 < t-bast> if the query is first_blocknum = 500,000 and number_of_blocks = 10,000, but there's no result in blocks 510,000 to 520,000: our first range result is blocks between 500,000 and 510,000, then the second range result is for blocks 520,000 and onwards. Should the second message's `first_block_num` be 510,000 or 520,000?
12:02 <+roasbeef> 520
12:02 < t-bast> (you can ignore the `number_of_blocks`, irrelevant to the example and misleading)
12:03 < sstone> I think it makes sense not to have gaps, and it's not explicit, hence #730
12:03 <+roasbeef> the two changes we implemented recently: https://github.com/lightningnetwork/lnd/pull/3785 https://github.com/lightningnetwork/lnd/pull/3836
12:03 <+roasbeef> which was triggered by a bug report by CL: https://github.com/lightningnetwork/lnd/issues/3728
12:03 <+roasbeef> perhaps we should continue this in the issue? or the PR ref'd above
12:04 < rusty> Ouch, yes, c-lightning gets upset if you overlap your responses. i.e. every block you cover must not have been covered by a previous response. That's sane, but too strict for the current spec.
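A receiver-side sketch of the rule rusty floats above (over-sending allowed, replies in block-ascending order, no gaps inside the queried range); class and field names are ours, not from the spec:

    class RangeQueryTracker:
        def __init__(self, first_blocknum: int, number_of_blocks: int):
            self.end = first_blocknum + number_of_blocks
            self.covered_to = first_blocknum  # next block still needing coverage

        def on_reply(self, first_blocknum: int, number_of_blocks: int) -> bool:
            reply_end = first_blocknum + number_of_blocks
            if first_blocknum > self.covered_to:
                raise ValueError("gap in reply coverage")
            if reply_end > self.covered_to:  # overlap below covered_to is fine
                self.covered_to = reply_end
            return self.covered_to >= self.end  # True once the query is satisfied

    q = RangeQueryTracker(500_000, 10_000)
    assert q.on_reply(500_000, 10_000)  # a reply starting past 510_000 would raise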
12:04 < t-bast> I believe it's worth clarifying in the spec since implementations diverged (can be done on Github instead of IRC)
12:04 < rusty> t-bast: agreed.
12:04 < niftynei> this seems like something the protocol test framework rusty's been working on would help with
12:04 < cdecker> So defer #560 and #730 to GH?
12:04 < t-bast> agreed with niftynei and cdecker ;)
12:05 < rusty> My preference would be to allow over-response, but require ascending order. Then detecting the last response is simpler than keeping a bitmap.
12:05 < niftynei> (in addition to better clarification in the spec)
12:05 < cdecker> Oh right, we need to talk about rusty's proto tests, rusty would you volunteer to present that next time?
12:05 < rusty> (Assuming that everyone today *does* ascending order)
12:05 < rusty> cdecker: ack!
12:05 < cdecker> #action everyone to discuss the minutiae of #560 and #730 on GH
12:05 < sstone> rusty: yes, everyone does :)
12:06 < rusty> sstone: great, then we can cheat!! :)
12:06 < cdecker> Ok, that brings us to the Long Term updates. t-bast and joostjgr have volunteered to give a brief update
12:06 < cdecker> #topic Current status of the trampoline routing proposal (@t-bast)
12:06 < rusty> #action rusty to rework 560 and 730 to assume ascending order.
12:06 < t-bast> Maybe we can start with anchor outputs, which should be quicker?
12:06 * cdecker passes the mic to t-bast
12:07 < cdecker> Sure, let's do anchor first
12:07 < cdecker> #topic Current state of the anchor output proposal (@joostjager)
12:07 < joostjgr> ok, the update:
12:07 < joostjgr> two main points open atm:
12:07 < joostjgr> one is the channel open parameters. whether to hard-code and how to negotiate. we haven't come to an agreement on that.
12:08 < joostjgr> we remain of the opinion that we shouldn't lock ourselves in a corner.
12:08 * cdecker nods furiously
12:08 < joostjgr> the fixed anchor size of 294 sats is already obsolete because we needed 330 sats with the p2wsh outputs on the commitment tx :)
12:09 < joostjgr> point two is about the elimination of gaming with the symmetric csv delay
12:09 < BlueMatt> wut? I thought we had a whole call about this issue and agreed we should just fix it to a reasonable amount?
12:09 < joostjgr> we realized that there still is a game present in the htlc resolution. there the symmetry isn't present. so you could use that to circumvent a to_remote csv delay
12:09 < BlueMatt> (whatever happened to the audio of this, connor?)
12:10 < joostjgr> you'd still be tied to the htlc cltv value, but that can be much lower than the to_self delay
12:10 <+roasbeef> so the solution doesn't actually address all griefing vectors
12:11 <+roasbeef> at this point, do we want to try and patch this (even more script changes), or just leave the to_remote non-delayed as things are rn
12:11 < t-bast> that's a good catch
12:11 <+roasbeef> imo it comes from trying to introduce symmetry into what is fundamentally an asymmetric commitment scheme
12:11 < t-bast> I think we don't want to leave the to_remote non-delayed though
12:11 < joostjgr> we can't leave it completely non-delayed, because of the carve-out rules. it would at least need a csv delay of 1
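joostjgr's point, concretely: under the anchor-outputs proposal the to_remote output stops being a bare P2WPKH and gains a 1-block CSV so the CPFP carve-out applies. A sketch of the witness-script shape this implies (the proposal was still in flux, so treat the exact script as an assumption; opcode bytes are standard Bitcoin Script):

    def to_remote_script(remote_pubkey: bytes) -> bytes:
        # <remote_pubkey> OP_CHECKSIGVERIFY 1 OP_CHECKSEQUENCEVERIFY
        # 0x21 pushes the 33-byte key; 0xad/0x51/0xb2 are the opcodes.
        assert len(remote_pubkey) == 33
        return bytes([0x21]) + remote_pubkey + bytes([0xad, 0x51, 0xb2])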
12:12 <+roasbeef> sure yeh, otherwise it would just be a csv of one
12:12 < cdecker> Good question, at some point we'll get too many changes and apparent security against everything, but only because we ourselves don't understand all the implications anymore
12:12 < cdecker> Is there a point in making incremental changes towards a good solution?
12:12 <+roasbeef> but if we try to patch this, then we go back into the territory where cltv+csv are dependent on each other, which we worked to remove w/ the 2-stage htlc scheme
12:12 < joostjgr> that is indeed the question. take it further, possibly by also requiring the remote to use 2nd-level txes, or stay close to what we have now and don't address the gaming
12:13 <+roasbeef> the incremental would be to leave the csv at 1 (for carve-out) for now, to later revisit once we think we've addressed all the gaming vectors (but imo, as I said above, we're trying to shoehorn symmetric redemption into an asymmetric commitment scheme)
12:13 < t-bast> Is there a summary of that somewhere on the PR? I didn't see it
12:13 < cdecker> joostjgr: could you lay out the avenues for gaming we have in the current proposal?
12:13 < joostjgr> we don't need the symmetric csv for anchors. it was more in the category of 'now that we are fixing the format anyway', iirc
12:13 <+roasbeef> t-bast: iirc they discovered this earlier this evening, but yeh, we should add it to the PR
12:13 < joostjgr> not on the pr, it came up this afternoon
12:14 <+roasbeef> mhmm, they're independent: having better fee control vs trying to "fix" possible griefing vectors
12:14 < rusty> joostjgr: indeed. And if that doesn't achieve what we want, clearly the minimal incremental change is the Right Thing.
12:14 < t-bast> gotcha, could you summarize this on the PR so that we can think about it more?
12:14 < BlueMatt> joostjgr: seems like an action to go write it up, then?
12:14 < joostjgr> ok, will do
12:15 < cdecker> Ok, sounds like we're at a crossroads: is it worth deferring this just to "quickly clean up" an avenue for gaming, or do we make a minimal change with a well-defined scope, but one that requires a later patch for the gaming?
12:16 < cdecker> It sounds to me like we should go the incremental route, since we're still finding new avenues for gaming
12:16 < joostjgr> it is also a question what the priority of addressing the gaming is compared to other outstanding issues with the LN
12:16 <+roasbeef> as it'll already be csv-based, it won't be too big of a change to modify it from having a value of 1
12:16 <+roasbeef> implementation-wise, we're just about finished on our end
12:16 < rusty> cdecker: we'll break this all with eltoo anyway, so I doubt we'll address the gaming separately short of that.
12:17 < joostjgr> ok, sounds like we'll just up the to_remote csv from 0 to 1 then
12:17 < joostjgr> and leave everything else unchanged.
12:17 <+roasbeef> well, eltoo runs into other issues that are slightly related to this, since the cltv has a relationship with the csv once again
12:17 < cdecker> Who knows when we'll get eltoo, so I prefer having a fix, rather than hoping for future magic :-(
12:17 < BlueMatt> right, iterative seems like a sensible approach...moving forward on anchor outputs is pretty critical for a post-0-fee world
12:17 < joostjgr> then the only remaining hot topic is ... variable anchor size
12:18 < cdecker> Let's not pull eltoo into the discussion just yet, but a recent ML post addressed just that issue roasbeef
12:18 <+roasbeef> orly? guess i'm behind lol, there's kind of a lot of cross chatter on the ML these days
12:18 < joostjgr> bluematt: yes, we've had a meeting and exchanged opinions, but no firm acks were given
12:18 * rusty is months behind on lightning-dev too....
12:18 <+roasbeef> rusty: lol ok cool, I'm not the only one :p
12:18 < cdecker> roasbeef: I know, I just get really excited when people find new uses for noinput / anyprevout :-)
12:19 < cdecker> joostjgr: you mean the amount of satoshis we should give the anchor?
12:19 < rusty> joostjgr: you implied above that 330 was the new minimum?
12:19 <+roasbeef> cdecker: yeh
12:19 < joostjgr> yes
12:19 < joostjgr> because our anchor output is p2wsh, 330 is the dust limit
12:19 < BlueMatt> joostjgr: wut? no, iirc the meeting was pretty firm.
12:19 < joostjgr> we assumed p2wkh before
12:19 < BlueMatt> sure, if the scripts change, it'll have to change appropriately, but we were pretty firm on "minimal-ish value"
12:19 <+roasbeef> i think ultimately we don't want to put ourselves in a corner, so we may just make it a tlv field to allow lnd nodes to update the value later in the future w/o having to roll out a new version, if we don't end up making it a param in the spec
12:20 <+roasbeef> it's just a matter of managing uncertainty in the future imo
12:20 < BlueMatt> roasbeef: if you want to negotiate it, you can, but that seems pretty nonsensical, and the discussion on the call seemed that everyone largely agreed there.
12:20 < cdecker> Ok, let's prescribe a constant for now, and add flexibility later, when needed/useful
12:21 < cdecker> (aka rusty's mantra)
12:21 < BlueMatt> I don't really want to reopen this discussion, but there's no sense "negotiating" this kind of value, if you want more flexibility, both sides have to upgrade *either way*
12:21 <+roasbeef> i wasn't on the call BlueMatt, so I can't speak for others, but people seem to not be strongly tied to their past statements
12:21 < BlueMatt> we've beaten this horse to death 5 times.
12:21 <+roasbeef> well, seems it has re-opened after a surprise came up during implementation
12:22 < niftynei> does output size affect the dust limit because you're considering the size of the script in the spending tx?
12:22 < BlueMatt> we've had this discussion ~5 times, but the basic premise that everyone on the call agreed on was "if we need to change it, negotiation doesn't help because both sides need to upgrade anyway, so writing negotiation code is just complexity when you'll only accept a fixed value, so negotiation is useless"
12:22 <+roasbeef> niftynei: yeah, p2wsh spends are bigger than p2wkh spends
12:22 <+roasbeef> BlueMatt: you don't need to upgrade if it's just a command line or even a param to the open call
12:22 < niftynei> so the dust limit is being applied to the spending tx, not the anchor output tx
12:22 < BlueMatt> so adding spec complexity is just shooting ourselves in the foot
12:22 < rusty> To vary it you need to have a new msg to vary existing channels, too, which is where it really gets nasty...
12:22 <+roasbeef> you can have things be fixed now, but allow negotiation in the future
12:23 <+roasbeef> rusty: vary existing channels?
12:23 < BlueMatt> you still need both sides to accept a different value?
12:23 < BlueMatt> so might as well just fix it and let both sides assume it
12:23 < cdecker> I think this is proving to be a rather good discussion piece :-) roasbeef, it's important we learn about these findings so we can follow along with the development. BlueMatt, a decision taken in a vacuum is unlikely to hold up when confronted with real implementations. I think we made good progress on this one ^^
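The 294-to-330 jump joostjgr and niftynei discuss follows from Bitcoin Core's dust threshold, which prices in the cost of spending the output at the default 3 sat/vB dust relay feerate, assuming a segwit spend. A sketch mirroring that calculation:

    DUST_RELAY_FEERATE = 3  # sat/vbyte, Bitcoin Core's default

    def dust_threshold(script_pubkey_len: int) -> int:
        # Serialized output: 8-byte value + compact-size len + scriptPubKey.
        output_vbytes = 8 + 1 + script_pubkey_len
        # Core assumes a segwit spend: 41 non-witness input bytes plus 107
        # witness bytes discounted by the witness scale factor (4).
        input_vbytes = 32 + 4 + 1 + 4 + 107 // 4  # = 67
        return DUST_RELAY_FEERATE * (output_vbytes + input_vbytes)

    assert dust_threshold(22) == 294  # P2WPKH (OP_0 + 20-byte hash)
    assert dust_threshold(34) == 330  # P2WSH  (OP_0 + 32-byte hash)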
12:23 < rusty> roasbeef: if this is insufficient after some Crisis Event, surely we need a way to fix established channels too?
12:23 < BlueMatt> cdecker: agreed that decisions in a vacuum are useless, but roasbeef didn't bring up a new issue we weren't aware of here, afaict?
12:24 < cdecker> Agreed, I also don't quite understand the need to make it configurable, hence my call to explain :-)
12:24 < BlueMatt> if you need both sides to agree to a different value, just assume it and move on, you don't have to negotiate anything if both sides have to opt to switch the value to X?
12:26 < cdecker> t-bast: you ok with bumping the update on trampoline to next meeting?
12:26 < t-bast> cdecker: of course, no hurry, this is good discussion on anchor outputs
12:27 < cdecker> And the research topic (improved gossip sync) will also have to move :-(
12:27 < t-bast> Sure, no problem
12:27 * cdecker will need to leave soon
12:27 * BlueMatt is somewhat starving...
12:28 < BlueMatt> let's call it?
12:28 < cdecker> sgtm
12:28 < t-bast> joost/roasbeef: have you also looked a bit into the issue of reserving UTXOs to protect against attacks?
12:28 < BlueMatt> we can schedule another call (this time with roasbeef, I guess, unless connor can just share his thoughts from the meeting with him so we don't have to repeat it)?
12:28 < t-bast> ok nevermind, I can get that update offline
12:28 < cdecker> #action everyone to keep an eye on the anchor outputs proposal and help wherever possible :-)
12:29 < cdecker> Yeah, we can set up a call to discuss anchor output sizes more in detail
12:29 * BlueMatt is still missing the audio connor theoretically recorded on the last one
12:29 <+roasbeef> rusty: sure, i'm also working on a way to update commitment types in-flight, so ppl don't need to close out channels to get all the new fancy features
12:29 < niftynei> i need to head out, thanks for a great meeting everyone
12:29 < cdecker> BlueMatt: would you like to schedule it?
12:29 < t-bast> SGTM, joost/roasbeef can you propose a few times for a call?
12:30 <+roasbeef> t-bast: our current impl doesn't make the fees minimal, so this is used mostly as a fee boost rather than providing all the fees
12:30 < t-bast> roasbeef: even with that, you'll potentially need to reserve UTXOs to avoid being caught off-guard when channels close one after another, right?
12:31 < t-bast> roasbeef: I'm thinking about big-node cases, with thousands of channels
12:32 <+roasbeef> t-bast: sure, but i mean w/o UTXO reservation, things fall back to how they are now (we still have fees on the commitment w/ update_fee); we also have some utxo management fan-out code for a system we run that we can apply here
12:32 < cdecker> Ok, let's call it an end, but everybody is welcome to stick around and continue discussing (I will while cooking dinner)
12:32 < cdecker> #endmeeting
12:32 < lightningbot> Meeting ended Mon Feb 3 20:32:39 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot. (v 0.1.4)
12:32 < lightningbot> Minutes: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-02-03-19.06.html
12:32 < lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-02-03-19.06.txt
12:32 < lightningbot> Log: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-02-03-19.06.log.html
12:32 <+roasbeef> as you can still craft the utxos at close time, instead of having to allocate a utxo for _each_ channel
12:33 <+roasbeef> re voice in here: we had some ppl spam the channel aggressively sometime last year, so we added the registration requirement
12:33 < t-bast> roasbeef: yes for now, but if we completely drop update_fee and fees then we really need a good algorithm for the wallet UTXO reservation, right?
12:34 < jkczyz> testing... 1... 2...
12:34 < BlueMatt> yea, all of freenode did, I haven't seen it in some time so may be useful to remove
12:34 < jkczyz> didn't want to interrupt :)
12:34 < t-bast> welcome jkczyz
12:34 <+roasbeef> t-bast: either reservation or creation near close time, yeh
12:34 < t-bast> roasbeef: I'm a bit afraid of that logic to be honest, maybe I'm over-thinking it but it feels like a nasty attack vector
12:34 < t-bast> maybe it's simpler than I think :)
12:35 <+roasbeef> t-bast: yeh idk, doesn't seem that daunting to me, but we have to handle utxo management like this for other services we already run
12:35 < t-bast> Thanks everyone for the discussions, gotta go too, good breakfast/lunch/dinner and let's keep discussing that on Github
12:35 < t-bast> roasbeef: share your tricks ;)
12:37 < cdecker> Ok, meeting notes posted in the spec meeting issue. I'll close the issue once we have ticked off all the action items in there :-)
12:38 < cdecker> Agenda for the next meeting is already up at https://github.com/lightningnetwork/lightning-rfc/issues/735, though I'll likely reduce the discussion topics to 1-2
12:39 < cdecker> I quite enjoyed today's meeting, thanks everyone ^^
15:49 < rusty> https://github.com/lightningnetwork/lightning-rfc/pull/737 as requested. It's a bit stricter than proposed, but we all meet it anyway and it's simple.
17:23 < sanket1729> Hi All, are there any heuristics/solutions for detecting a node which "simply holds an HTLC and does not pass it"?
--- Log closed Tue Feb 04 00:00:04 2020