--- Log opened Mon Jul 22 00:00:17 2019
11:02 < Chris_Stewart_5> Did meeting times change?
11:33 < BlueMatt> apparently?
13:00 < cdecker> Good morning everybody :-)
13:01 < rusty> Hi everyone!
13:01 < t-bast> Hi everybody
13:01 < sstone> Good evening everyone :)
13:02 < niftynei> hello!!
13:02 < Chris_Stewart_5> hi
13:02 < rusty> I believe we suckered niftynei into chairing today...
13:03 < t-bast> hehe that's what I remember from last time too
13:03 < niftynei> right. i brought a chair and everything.
13:03 < t-bast> :D
13:03 < cdecker> ^^
13:03 < niftynei> does someone have a link to the meeting commands?
13:04 < t-bast> https://wiki.debian.org/MeetBot
13:04 < niftynei> ty
13:04 < niftynei> #startmeeting
13:04 < lightningbot> Meeting started Mon Jul 22 20:04:28 2019 UTC. The chair is niftynei. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:04 < lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
13:05 < niftynei> ok.
13:05 < niftynei> i thought we'd start with the rest of the outstanding TLV PR's
13:05 < cdecker> Yep, they are needed for the var-hop-payload PR anyway
13:06 < rusty> A lot to get through today, indeed! Let's do it.
13:06 < cdecker> (sort of)
13:06 < niftynei> which are ... conveniently not in the meeting agenda. let me add them.
13:06 < t-bast> agreed, let's start with TLV and then onion ;)
13:06 < niftynei> #640
13:06 < niftynei> #topic #640 swap CompactSize for BigSize in TLV format
13:06 < cdecker> #topic BOLT01: swap CompactSize for BigSize in TLV format
13:06 < niftynei> :D
13:06 < cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/640
13:07 < cdecker> Hehe, sorry about that
13:07 -!- Renepickhardt [58823426@mue-88-130-52-038.dsl.tropolys.de] has joined #lightning-dev
13:07 < rusty> niftynei knocks cdecker off the chair... :)
13:07 < t-bast> :)
13:07 -!- nkohen [~nkohen@199.188.64.16] has joined #lightning-dev
13:07 < t-bast> Is there some outstanding discussion on using big-endian instead of little-endian?
13:08 < t-bast> I think CL, LL and Eclair have all made the code changes, right?
13:08 <+roasbeef> don't think so, this also includes pseudo code of the encoding as well
13:08 < cdecker> Can we externalize the test-vectors like we did for the var-hop-payload?
13:08 <+roasbeef> cdecker: as in make a new file in the repo?
13:08 < rusty> I think consensus was clear. Prev agreement on CompactSize didn't take this into account; everyone's a bit relieved.
13:09 < cdecker> Yep, just move them into a separate JSON file instead of having them in the text itself (minor cleanup for after the merge)
13:09 < rusty> cdecker: I think we should do a pass and get all the test vectors moved. Can we push that to a subcommittee?
13:09 < t-bast> I agree with rusty on that
13:09 < rusty> (As a spelling/formatting change)
13:09 < cdecker> Absolutely, just something that jumped me in the eye
13:09 <+roasbeef> yeh can be done afterwards
13:09 < cdecker> +1
13:09 < niftynei> ok great
13:10 < niftynei> #agreed merging #640
13:10 < t-bast> Would be good to merge TLV changes and onion to be able to make progress on things that build on top of it, and do some clean-up afterwards if needed
13:10 < rusty> #action cdecker to lead bikeshedding on externalizing and neatening test vectors.
13:10 < rusty> t-bast: +1
13:10 * cdecker loves bikeshedding :-)
13:10 < cdecker> All my sheds turn out to be some maroon color because I keep changing my mind :-)
13:10 < niftynei> in the same vein, the next order of business is the TLV testcases
13:11 < t-bast> yep, I think bitconner, rusty and I all have updated our test suite to match #631
13:11 < rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/631
13:11 < niftynei> #topic TLV testcases #631
13:11 < niftynei> lol
13:11 < t-bast> do we all have a green test suite?
13:11 < t-bast> cdecker got kicked out, now it's rusty's time...
13:11 < t-bast> fighting for the chair at Blockstream
13:11 < t-bast> just reptilian things?
13:12 < niftynei> we're working on our chair sharing algorithm
13:12 < t-bast> haha
13:12 < cdecker> The dining blockstreamers problem all over again :-)
13:12 < rusty> niftynei: sorry. Links are nice though, since they're clickable in the minutes.
13:12 < niftynei> noted. thanks rusty!
13:13 < t-bast> I think for #631 we have a pretty good test suite now, if we decide that moving these to a JSON file is for another effort we should be good, aren't we?
13:13 < t-bast> is bitconner here tonight?
13:13 < rusty> OK, bikeshedding over BigSize can be put on hold. I think we're good for 631. Ack.
13:13 < t-bast> 631 is an ACK from me too, I think bitconner was ok too but wouldn't want to speak for him.
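[For reference: the BigSize format adopted in #640 uses the same length thresholds as Bitcoin's CompactSize, but multi-byte values are big-endian. A minimal Python sketch of those rules, with hypothetical helper names:]

```python
def bigsize_encode(n: int) -> bytes:
    # BigSize: CompactSize thresholds, big-endian multi-byte values.
    if n < 0xFD:
        return n.to_bytes(1, "big")
    elif n <= 0xFFFF:
        return b"\xfd" + n.to_bytes(2, "big")
    elif n <= 0xFFFFFFFF:
        return b"\xfe" + n.to_bytes(4, "big")
    else:
        return b"\xff" + n.to_bytes(8, "big")


def bigsize_decode(data: bytes) -> tuple:
    # Returns (value, bytes consumed); raises on truncated input.
    # (A spec-complete decoder would also reject non-minimal encodings.)
    prefix = data[0]
    if prefix < 0xFD:
        return prefix, 1
    size = {0xFD: 2, 0xFE: 4, 0xFF: 8}[prefix]
    if len(data) < 1 + size:
        raise ValueError("truncated BigSize")
    return int.from_bytes(data[1:1 + size], "big"), 1 + size
```

[The big-endian choice is what distinguishes it from CompactSize; an implementation that passed the old little-endian vectors only needs to flip the byte order of the multi-byte cases.]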
13:14 < rusty> t-bast: he acked on-issue.
13:14 < rusty> https://github.com/lightningnetwork/lightning-rfc/pull/631#pullrequestreview-260999156
13:14 < t-bast> sgtm then
13:15 < t-bast> shall we have an action item to merge that asap too then?
13:15 < niftynei> #action merge #631
13:15 < rusty> Yep... it was also suggested to use `bigsize` rather than `varint` throughout the spec, but that's under the #spelling rule really.
13:16 < t-bast> agreed, we can do spelling fixes in later PRs to replace it everywhere
13:16 -!- harrigan [~harrigan@ptr-93-89-242-235.ip.airwire.ie] has quit [Quit: harrigan]
13:16 < t-bast> otherwise it's distracting from the more important feature discussions
13:16 < niftynei> that wraps up the outstanding TLV PRs
13:16 < t-bast> That's great!
13:17 < t-bast> Let's do the same with onion then :)
13:17 < t-bast> IIRC cdecker is thirsty and has a prosecco bottle nearby
13:17 < niftynei> #topic variable sized onion payloads https://github.com/lightningnetwork/lightning-rfc/pull/619
13:17 < niftynei> :D
13:17 < cdecker> Yep, ready to go :-)
13:17 < rusty> OK, only issue here was signalling. zero-HMAC vs 0-TLV.
13:18 < t-bast> roasbeef were you able to update your onion implementation to bigsize and verify the latest test vector?
13:18 <+roasbeef> not yet
13:18 < cdecker> I think keeping both is ok: the zero-HMAC is needed for backward compatibility and the 0-TLV allows us to reclaim the last 32 bytes of the onion
13:18 < t-bast> I agree with cdecker
13:18 <+roasbeef> wait both?
13:18 <+roasbeef> you only need 1
13:19 <+roasbeef> it's also weird to have the tlv layer reach down and inform the onion if it's the terminal node or not
13:19 < cdecker> But if desired we can permit zero-HMAC only for legacy hop_data, and only 0-TLV for the newer TLV based payload :-)
13:19 <+roasbeef> one is a sphinx level indicator, the other is an app level indicator
13:19 < rusty> roasbeef: but in practice, it's "if first byte is 0, there's no HMAC".
13:19 <+roasbeef> layers crossing into other layers
13:19 < t-bast> I think that termination should have been handled one layer above the onion layer. It probably wasn't do-able for legacy so legacy uses an empty hmac but for TLV it makes more sense in my opinion to handle termination when interpreting the payload.
13:19 < t-bast> Why does it make sense for the onion layer to know about termination?
13:19 <+roasbeef> why above? termination is a packet level thing, the structure is even diff when constructing everything
13:20 < t-bast> The onion layer only peels one layer of the onion and passes that to the app layer right?
13:20 < cdecker> So shall we just have a followup PR that allows 0-TLV for TLV based payloads?
13:20 <+roasbeef> it can signal to the app if it's done or not
13:20 <+roasbeef> it also makes termination independent of w/e app level framing
13:20 < t-bast> In my opinion termination isn't a packet-level thing for onion. For trampoline for example I'm using onions in a slightly different way and it doesn't really make sense to have "termination" at the onion layer but rather at the application level.
13:21 < rusty> I dislike crossing the two lines, where the TLV stream length is only known once you start parsing the TLV stream. However, in this case it's trivial enough that I can't raise strong objection. You literally look at the first byte of the tlv stream: if it's zero, there's no HMAC.
13:21 < cdecker> Yeah, agreed with t-bast: remember it's the HMAC for the _next_ hop not ourselves :-)
13:21 < cdecker> So it should be considered part of the payload, not the onion
13:21 < cdecker> Using the 0-TLV and zero-HMAC is perfectly identical in that sense
13:21 <+roasbeef> yeah it's for the next hop, the original sphinx used the next addr, but at that point the routing info is more closely embedded into the packet itself (as was before)
13:22 <+roasbeef> dunno how trampoline relates to this, that's something else all together that isn't even fully baked
13:22 < rusty> I disagree that all-zero-hmac should work for non-legacy though. That's just weird.
13:22 <+roasbeef> if we remove hmac, just more code that needs to change (awareness wise) between the legacy and the new
13:22 < t-bast> it's just about using the onion's flexibility: it will be useful for other scenarios as well that require playing with what's inside the onion
13:23 < cdecker> But it gives us back 32 bytes of payload
13:23 <+roasbeef> and we save a ton along the way w/ all the other additions ;)
13:24 < cdecker> Let's quickly enumerate the variants that we could agree on: a) keep zero-HMAC as sole signal, b) use zero-HMAC for legacy and 0-TLV for TLV payloads, or c) allow zero-HMACs also in TLV based payloads
13:24 <+roasbeef> i think this can be resolved later since you can add it, but then keep the hmac as the primary
13:24 < t-bast> I honestly don't feel very strongly about this. I think cdecker's proposal is totally fine, but if it's a no-go for some we can stick to the HMAC
13:24 < cdecker> Which ones are people most comfortable with?
13:24 <+roasbeef> i'm ok w/ b, but then it's just redundant information
13:24 <+roasbeef> err c
13:24 <+roasbeef> c or a
13:25 < t-bast> I implemented c for now
13:25 < t-bast> Waiting for a final decision :)
13:25 < rusty> Nack C. Have one signalling mechanism please!
13:25 <+roasbeef> but your primary is hmac t-bast ?
13:25 < cdecker> So, (c) is what the current spec says
13:25 <+roasbeef> one FTW
13:25 < cdecker> Ok, so everybody ok with (a)?
13:25 < t-bast> No my primary right now is -> if legacy check mac, if TLV check first byte and additionally check HMAC
13:26 < t-bast> We can do a) for this proposal and re-visit later, right?
13:26 < cdecker> If that's the only issue we have I can just drop the final signal in the PR and we can merge :-)
13:26 < t-bast> a) is the least amount of changes compared to legacy, and we can change that later when we feel we really need those 32 bytes
13:26 < rusty> t-bast: hmm, yes, 0 is even :)
13:26 <+roasbeef> don't think all the latest test vectors have been cross ref'd
13:27 < t-bast> rusty: :)
13:27 <+roasbeef> also there was confusion if the old ones worked w/ pre-emptive use of BigSize?
13:27 * cdecker puts the prosecco back in the fridge...
13:27 < t-bast> were you able to check them roasbeef? the latest test vectors on the PR work for eclair and CL
13:27 < t-bast> they use the big-endian bigsize encoding
13:27 <+roasbeef> haven't yet
13:27 < cdecker> roasbeef: the latest version of the test-vector now all use bigsize as per spec
13:28 < t-bast> I think if you got them to work with your previous little-endian encoding, the easy code change to move to big-endian should make it work
13:28 < t-bast> I don't foresee anything complex there :)
13:28 < rusty> OK, so plan is to eliminate the 0 TLV record for now, and roasbeef to ack test vectors?
13:28 <+roasbeef> sgtm
13:28 < t-bast> sgtm
13:28 < cdecker> In the meantime I'll drop the 0-TLV type so we use zero-HMAC for termination only
13:28 < cdecker> SGTM
13:28 < t-bast> once ack-ed can we agree to merge?
13:29 < t-bast> That makes it easier to build on top of it (AMP, message extensions, etc)
13:29 < niftynei> #agreed eliminate 0 TLV record for now
13:29 < niftynei> #action roasbeef to ack test vectors
13:29 < rusty> t-bast: +1 from me!
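[For reference: the outcome above is option (a), i.e. an all-zero HMAC for the *next* hop remains the sole termination signal, and the first byte of the per-hop payload distinguishes legacy from TLV framing. A minimal sketch of that rule, with hypothetical helper names:]

```python
HMAC_LEN = 32

def is_final_hop(next_hmac: bytes) -> bool:
    # Per the agreement above, the 0-TLV record was dropped: an
    # all-zero HMAC for the next hop is the only termination signal.
    return next_hmac == bytes(HMAC_LEN)

def payload_is_tlv(first_byte: int) -> bool:
    # Legacy hop_data starts with realm byte 0; any non-zero first
    # byte is read as the BigSize length of a TLV payload.
    return first_byte != 0
```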
13:29 < niftynei> ok so if roasbeef acks the test vectors, we consider this PR good to merge?
13:29 < t-bast> roasbeef are you ok with that? Once ack-ed that your implementation verifies the updated test vectors, we can merge without waiting for next meeting?
13:30 <+roasbeef> don't think there's ever been a req to wait for this meeting to merge stuff
13:30 < t-bast> great :)
13:30 < niftynei> ok great. moving on
13:30 < rusty> Yeah, unless unforeseen issues (as always!)
13:30 < t-bast> related to onion and tlv
13:30 < t-bast> niftynei I have a small PR to look at
13:31 < t-bast> https://github.com/lightningnetwork/lightning-rfc/pull/627
13:31 < t-bast> I'm wondering how other implementations deal with that error code right now
13:31 < t-bast> It feels to me that a new error code would be useful
13:31 <+roasbeef> how would you error out? since atm it's just a fixed number of bytes
13:31 <+roasbeef> so failing on integer parsing or something?
13:31 <+roasbeef> or the packet is the wrong size?
13:32 <+roasbeef> matters more in a post TLV world I guess
13:32 < t-bast> there are many things that can be wrong in the TLV: invalid order, unknown even type, etc
13:32 < t-bast> yes exactly, I detail that in the PR intro
13:32 < niftynei> #topic BOLT 04: Add failure code for invalid payload
13:32 < niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/627
13:33 < niftynei> ok so my two cents on this PR is that it looks like we still need LL and CL to review it
13:33 < t-bast> agreed, just wanted to bring it to their attention while they're finalizing the onion implementation
13:33 < cdecker> Well, it just formalizes what we'd be doing anyway
13:33 <+roasbeef> no strong objection at a glance, we'll def need a way to signal back to the sender that they messed up somewhere
13:33 < cdecker> So from me it looks reasonable
13:33 < t-bast> great, so let's discuss the exact bit used on the PR in the following days, no rush here
13:34 <+roasbeef> have a history of accidentally adding new probing vectors with precise errors, so should have that in mind, but don't see anything at a glance
13:34 < niftynei> it's a short proposal to add a new error type for onions
13:34 < rusty> t-bast: errors for things which Shouldn't Happen aren't high on my priority list, but I'll comment on-issue.
13:34 < cdecker> I'm happy with using BADONION|PERM since it seems to be the most generic failure we can hit
13:35 < t-bast> rusty: sgtm
13:35 < t-bast> cdecker: that was indeed my first choice
13:35 < rusty> It should only be BADONION if we think we can't generate a valid reply. I don't think that's the case.
13:35 < rusty> Would prefer to include offset of error within TLV, too, as a hint.
13:36 <+roasbeef> yeah sounds useful for debugging
13:36 < rusty> Anyway, will make these points on the issue to avoid derailing
13:36 < niftynei> #action rusty to leave feedback on PR
13:36 < t-bast> perfect, let's discuss this on github after giving it some thought and seeing how that would fit in our implementations
13:37 < niftynei> ok i'd like to get one easy-ack PR through so we can merge it
13:37 < t-bast> good idea
13:37 < niftynei> #topic BOLT7: (announcement_signatures) Fail channel if `short_channel_id` not correct. #635
13:38 < niftynei> https://github.com/lightningnetwork/lightning-rfc/pull/635
13:38 < niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/635
13:38 <+roasbeef> easy lgtm
13:38 < rusty> +1
13:38 < niftynei> great. moving on
13:38 < cdecker> ack
13:38 <+roasbeef> I guess SHOULD instead of MAY?
13:38 < t-bast> simple enough
13:38 < rusty> roasbeef: yeah, let's upgrade to SHOULD.
13:38 <+roasbeef> commented on PR
13:39 < niftynei> #action upgrade from MAY to SHOULD
13:39 < t-bast> then maybe update the following lines to SHOULD too?
13:39 < rusty> Probably should upgrade the other ones too, but that's a separate PR.
13:39 < niftynei> yes we MAY update them
13:39 < rusty> niftynei: I think we SHOULD. Nay, MUST!
13:39 <+roasbeef> kek
13:40 < niftynei> #agreed MAY be merged after SHOULD updated
13:40 < niftynei> moving on, i'd like to get these two DLP wording PR's out
13:41 < niftynei> #topic BOLT 2: remove local/remote from reestablish field names.
13:41 < niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/634
13:41 < t-bast> ACK
13:41 < rusty> I was testing these, and the names made it worse. They're actively misleading, so it's almost a spelling fix.
13:42 < t-bast> they really confused me when I first read the spec
13:42 < niftynei> same
13:42 < rusty> Sorry :( Let
13:42 < niftynei> i found the DLP section incredibly hard to parse.
13:42 < niftynei> i think this is a definite win
13:42 < t-bast> I hesitated to send a PR but was too shy, early times
13:42 < rusty> 's kill them.
13:42 < niftynei> ok, if there's no dissent let's move on
13:43 < niftynei> #agreed merge #634
13:43 < niftynei> this next one's similar
13:43 < niftynei> #topic option_data_loss_protect: concretely define `my_current_per_commitment_point`
13:43 < rusty> W00t! Where's that prosecco...
13:44 < rusty> link?
13:44 < niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/550
13:44 <+roasbeef> well don't think removing the context makes it more clear re chan reest field names
13:44 <+roasbeef> we use slightly diff names in our code: https://github.com/lightningnetwork/lnd/blob/master/lnwire/channel_reestablish.go#L20
13:45 <+roasbeef> referring to "chain height"
13:45 <+roasbeef> commit height*
13:45 < rusty> roasbeef: yeah, there's definitely a strong argument that this entire section should be rewritten.
13:46 < rusty> But adding the words in #550 def makes it better, not worse.
13:46 < rusty> (Since there's currently no description of that field!)
13:46 < ianthius> the docs for LND say that my bitcoind must be compiled with zmq, do folks know if the binaries from the ubuntu repository bitcoin/bitcoin include this?
13:46 < niftynei> right, it adds a description for a currently undefined field
13:46 < t-bast> I agree that this is a useful addition
13:46 -!- jtimon [~quassel@109.164.158.146.dynamic.jazztel.es] has joined #lightning-dev
13:46 < t-bast> ianthius: I think there are instructions in bitcoin-core's readme about that
13:47 < t-bast> ianthius: I would suggest not to use bitcoin from ubuntu's PPA though
13:47 < t-bast> ianthius: you should instead use one of the reproducible builds as explained on bitcoin's github repo
13:47 < niftynei> roasbeef, if your comment about 'removing the context' regarding #550 or #635?
13:48 < niftynei> :s/if/is
13:48 <+roasbeef> #634
13:49 <+roasbeef> had forgotten about #550
13:49 <+roasbeef> i had a comment that looks to have been addressed many months ago? will check it out again and lgtm it if things look ok
13:49 < rusty> ianthius: yes, pretty sure the ppa does, FWIW.
13:49 < niftynei> ah ha. ok. well since there's now two outstanding PR's to clarify the section, that seems like a strong indication that we should update it
13:49 < cdecker> I found the DLP stuff to be utterly confusing tbh
13:49 <+roasbeef> has worked pretty well in the field for us FWIW
13:50 < ianthius> t-bast: this is not for production, any other reason you suggest not using the binaries besides security?
13:50 < cdecker> So any more clarification would be great in my mind
13:50 < t-bast> ianthius: it's just security, for experimenting you should be fine then
13:50 < ianthius> thx all
13:50 < cdecker> ianthius: could this wait 30 minutes? We're in the middle of a spec meeting
13:50 < niftynei> #action roasbeef to check outstanding comment
13:50 < rusty> roasbeef: we keep getting sync errors from lnd, so not sure it's actually working in practice :(
13:50 < ianthius> oh whoops sorry folks. i am done
13:50 < lndbot> looking at https://github.com/lightningnetwork/lightning-rfc/pull/651 at a glance it looks like it will be extremely useful, at least to me. But I don't think my review will be very useful since I will basically just believe anything it tells me. can I review beg for other person's PR?
13:51 < cdecker> Yep, we're still trying to hunt down the "sync error" issue that's been haunting us, and it might just be an imprecision in the DLP wording
13:51 < t-bast> agreed with jtimon, this one should be useful to get in
13:52 <+roasbeef> rusty: sync error can just be a timing thing
13:52 <+roasbeef> and doesn't always mean that we can't continue forward, or y'all sent the wrong info
13:52 < rusty> I'd like someone from LL to read through #651 please.
13:53 < niftynei> i'd like to hear more about sync error root causes, but i'd also like to move on.
13:53 <+roasbeef> concept looks sound, didn't know of it till this meeting just now
13:53 < niftynei> #topic CONTRIBUTING.md: first draft of how to write and change spec. #651
13:53 < rusty> roasbeef: OK, we're going to have to ignore it for 0.7.2, which means DLP will fail to do its job, since we won't force close if you're *actually* out of sync.
13:53 < niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/651
13:54 < niftynei> i think this one still needs some review time.
13:54 < rusty> Perhaps we get an ack to apply it, but give a week for more feedback? If nothing emerges in that time, apply?
13:54 < niftynei> sgtm
13:54 <+roasbeef> rusty: would discourage against that in favor of trying to locate the root cause, otherwise just ignoring can mean loss of funds, if y'all have a repro can prob track it down easily
13:54 < t-bast> sgtm
13:54 < rusty> (Nothing major, that is, obc clarifications and typos)
13:54 < cdecker> sgtm
13:55 < cdecker> roasbeef: the problem is we can't reproduce it in a lab setting
13:55 < niftynei> roasbeef we've been having trouble nailing down a repro
13:55 < niftynei> :D
13:55 < rusty> roasbeef: I'll try ignoring it on my node, see if it self-heals or just gets stuck in a sync error loop?
13:55 < cdecker> We'll have to ignore it since otherwise we end up closing a lot of channels, since lnd has forced our hand here
13:56 < rusty> cdecker: I lost three channels during a restart the other day :(
13:56 <+roasbeef> depends on what's being ignored, and the action
13:56 <+roasbeef> if you get the chan reset message, then you can act accordingly
13:56 < cdecker> Ok, definitely need to run my nodes with IO logging so I can at least reconstruct some of the state
13:57 <+roasbeef> if we don't get urs, but you get ours and we're behind, then you can act
13:57 < niftynei> regarding #651, if there's no objections i'm going to take rusty's suggestion as the action
13:57 < cdecker> niftynei: sgtm
13:57 < t-bast> niftynei: ACK
13:57 < niftynei> #agreed one week for feedback, once resolved ok for approval
13:57 < t-bast> There was a new wave of feedback on gossip stuff, should we spend some time on range_queries/inventory gossip?
13:58 < niftynei> #topic BOLT7: extend channel range queries with optional fields #557
13:58 < niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/557
13:58 < t-bast> thanks :)
13:58 < rusty> t-bast: well, niftynei literally finished our TLV autogen code on the weekend for c-lightning, so now I get to update our implementation; should be ready before next mtg.
13:59 < t-bast> rusty: that would be great!
13:59 < t-bast> so we're still at a concept ACK on 557 and will update once we have a second implementation?
14:00 < t-bast> and will discuss inventory gossip once we've made progress on 557's implementation?
14:00 < sstone> rusty: do you think that 557 needs a new feature bit now or that it can be added later ?
14:00 < rusty> t-bast: I will also have a test in my protocol test framework, too.
14:00 < t-bast> rusty: fancy
14:00 < rusty> sstone: I think 2 feature bits, yes.
14:01 < niftynei> ok, so it sounds like CL is going to fixup their implementation and we'll revisit in the future
14:01 < niftynei> :s/in the future/at the next meeting
14:02 < rusty> Since 557 is really two features. The query_short_channel_id flags, and the query_channel_range flags.
14:02 < sstone> I see them as one since you can't really use extended query_short_channel_id without extended query_channel_range ?
14:02 < rusty> Using feature negotiation lets us turn them off individually if we find out we deployed too soon :)
14:03 < sstone> Yes that's right
14:03 < rusty> sstone: oh, they're closely related. But they're separate.
14:04 -!- Chris_Stewart_5 [~chris@unaffiliated/chris-stewart-5/x-3612383] has quit [Ping timeout: 248 seconds]
14:04 < rusty> You can def use the query_short_channel ID if you know about the channel but just want the latest update, I guess?
14:04 < sstone> How would you know you need it ?
14:05 < sstone> without having received timestamps first I mean
14:05 < rusty> sstone: handwave! Maybe you want to check you have the latest you're going to use in a route?
14:05 < rusty> Or somehow otherwise prioritize?
14:06 < rusty> But yeah, it's probably not going to happen in any implementation.
14:06 < rusty> So perhaps we
14:06 < sstone> that what worries me: impl supporting one without the other
14:07 < sstone> that's
14:07 < rusty> 'd disable both anyway, in case of problems. OK, maybe one feature bit for both?
14:07 < rusty> They're pretty easy to implement.
14:08 < jtimon> sorry if the meeting is not the appropriate time to ask for this, I can wait, but roasbeef could you give a hint to fix https://github.com/cdecker/lightning-integration/issues/59 ? not familiar with go at all...
14:08 < cdecker> jtimon: roasbeef gave me a few hints, and I'll try them asap :-)
14:08 < jtimon> cdecker: oh, awesome
14:08 < sstone> rusty: sgtm, we can continue on the PR discussion section
14:09 < niftynei> ok we're over time.
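[For reference: the feature negotiation rusty invokes relies on BOLT 9's paired bits, where the even bit means compulsory and the odd bit one above it means optional; that pairing is what lets a feature be switched off independently if it shipped too soon. A small sketch of the lookup, with a hypothetical helper name:]

```python
def feature_state(features: int, bit: int) -> str:
    # `bit` is the even (compulsory) member of a feature-bit pair;
    # bit+1 is the odd (optional) member per BOLT 9's "it's OK to be
    # odd" rule.
    assert bit % 2 == 0, "pass the even member of the pair"
    if features & (1 << bit):
        return "compulsory"
    if features & (1 << (bit + 1)):
        return "optional"
    return "unset"
```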
14:09 < cdecker> I'll add the transcript to that issue jtimon so it isn't blocking on me
14:09 < jtimon> double awesome
14:09 < niftynei> #action to continue discussion on PR, CL to re-do implementation
14:09 < cdecker> Awesome job everybody, looks like we made good progress today :-)
14:09 < niftynei> #topic next meeting chair?
14:10 < t-bast> Good job guys, thanks a lot! Sorry for your prosecco cdecker, it will be even better next time ;)
14:10 < sstone> an open question for next time: how do you feel about using short channel id as an alias for node id ?
14:10 < sstone> context: inv gossip
14:10 < niftynei> any volunteers for next mtg? i found it really helpful to have time to organize the agenda beforehand
14:10 -!- Renepickhardt [58823426@mue-88-130-52-038.dsl.tropolys.de] has left #lightning-dev []
14:10 < t-bast> I agree with niftynei, it's great to know you're chairing to be able to prepare
14:10 < t-bast> I can volunteer if no-one wants to
14:11 < rusty> niftynei: I think you should do it again, now you've had practice :)
14:11 < rusty> sstone: if it saves space, I'm all for it...
14:11 < niftynei> i'd be happy to!
14:12 < niftynei> i think LL hasn't had a chair spot in a few meetings tho
14:12 < t-bast> I agree, maybe someone from LL to rotate frequently?
14:12 < niftynei> also happy to trade off with t-bast :)
14:13 < niftynei> also, i'd like to go ahead and unmark everything that is "Meeting Discussion". anything that needs to be discussed next week can re-add it
14:13 < t-bast> niftynei: agreed
14:13 < niftynei> otherwise it's too hard to keep track of what's Actually on the docket for the meeting
14:14 < t-bast> @roasbeef no-one on your side wants to chair?
14:14 < t-bast> time to get joost and johan in there ;)
14:14 < rusty> t-bast: bitconner has chaired before, and he's not here so we can volunteer him, right?
:) 14:14 < niftynei> #action niftynei to untag every Meeting Discussion label (so they can be re-added for next meeting) 14:15 < t-bast> rusty: that sounds evil, I love it 14:15 < niftynei> i second a nomination for bitconner 14:15 < niftynei> in the case that he's unavailable in two weeks, t-bast can chair 14:15 < t-bast> that's settled then, democracy is a beautiful thing 14:16 < niftynei> #agreed bitconner nominated to chair next meeting, t-bast will serve as backup chair 14:16 < niftynei> great. thanks everyone! 14:16 < niftynei> tag your issues for next meeting ! 14:16 < niftynei> #endmeeting 14:16 < lightningbot> Meeting ended Mon Jul 22 21:16:17 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 14:16 < lightningbot> Minutes: http://www.erisian.com.au/meetbot/lightning-dev/2019/lightning-dev.2019-07-22-20.04.html 14:16 < lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/lightning-dev/2019/lightning-dev.2019-07-22-20.04.txt 14:16 < lightningbot> Log: http://www.erisian.com.au/meetbot/lightning-dev/2019/lightning-dev.2019-07-22-20.04.log.html 14:16 < t-bast> thanks everyone, good morning/day/night! 14:17 < rusty> Thanks! 14:17 -!- ugam [6a331637@106.51.22.55] has quit [Remote host closed the connection] 14:18 < sstone> thanks, good bye everyone! 
14:20 -!- hiroki_ [d28ab1fb@251.177.138.210.rev.vmobile.jp] has quit [Remote host closed the connection]
14:23 -!- mol [~molly@unaffiliated/molly] has joined #lightning-dev
14:25 -!- scoop [~scoop@205.178.77.52] has joined #lightning-dev
14:26 -!- scoop [~scoop@205.178.77.52] has quit [Read error: Connection reset by peer]
14:26 -!- scoop_ [~scoop@205.178.77.52] has joined #lightning-dev
14:26 -!- t-bast [~t-bast@2a01:e34:ec2c:260:d4c3:f193:837a:7a73] has quit [Quit: Leaving]
14:28 -!- pav [~pavle_@37.120.130.59] has joined #lightning-dev
14:30 -!- sstone [~sstone@li1491-181.members.linode.com] has quit [Quit: Leaving]
14:48 < niftynei> ok all the `Meeting Discussion` labels have been removed. please re-add them as you see fit
15:03 -!- pav [~pavle_@37.120.130.59] has quit []
15:11 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Quit: Leaving.]
15:26  * bitconner accepts nomination to sit on the iron throne
15:26 < bitconner> :D
15:34 < cdecker> jtimon: seems I found a solution to the docker integration tests being broken: https://github.com/cdecker/lightning-integration/pull/61
15:34 < cdecker> Can you confirm?
15:35 -!- Chris_Stewart_5 [~chris@unaffiliated/chris-stewart-5/x-3612383] has joined #lightning-dev
15:35 < jtimon> cdecker: sure, I'll do it in a bit
15:38 -!- molly [~molly@unaffiliated/molly] has joined #lightning-dev
15:39 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has quit [Remote host closed the connection]
15:40 -!- mol [~molly@unaffiliated/molly] has quit [Ping timeout: 268 seconds]
15:50 < jtimon> cdecker: it seems to be working, but it's downloading a bunch of things, just waiting for stuff to finish to give the ack, but it's looking good
15:53 < jtimon> thanks
15:57 <+roasbeef> cdecker: looks good to me at a glance
16:03 -!- Chris_Stewart_5 [~chris@unaffiliated/chris-stewart-5/x-3612383] has quit [Ping timeout: 272 seconds]
16:09 -!- nkohen [~nkohen@199.188.64.16] has quit [Ping timeout: 245 seconds]
16:09 < jtimon> https://github.com/cdecker/lightning-integration/pull/61#issuecomment-513988593
16:09 -!- Amperture [~amp@24.136.5.183] has joined #lightning-dev
16:10 -!- peleion [~jeff@76.73.148.34] has joined #lightning-dev
16:16 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #lightning-dev
16:19 -!- Chris_Stewart_5 [~chris@unaffiliated/chris-stewart-5/x-3612383] has joined #lightning-dev
16:31 -!- booyah [~bb@193.25.1.157] has joined #lightning-dev
16:41 < jtimon> 39%, I didn't know this was so slow, I have many cores not being used, sorry:
16:41 < jtimon> test.py::test_direct_payment[ptarmigan_lightning] FAILED [ 41%]
16:41 < jtimon> test.py::test_direct_payment[ptarmigan_lnd] FAILED [ 42%]
16:41 < jtimon> test.py::test_direct_payment[ptarmigan_ptarmigan] FAILED [ 42%]
16:41 < jtimon> test.py::test_forwarded_payment[eclair_eclair_eclair] FAILED [ 43%]
16:41 < jtimon> test.py::test_forwarded_payment[eclair_eclair_lightning] ^C^X^CFAILED [ 44%]
16:41 < jtimon> test.py::test_forwarded_payment[eclair_eclair_lnd] Killed
16:56 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Quit: Leaving.]
16:57 -!- Chris_Stewart_5 [~chris@unaffiliated/chris-stewart-5/x-3612383] has quit [Ping timeout: 245 seconds]
17:00 -!- lukedashjr [~luke-jr@unaffiliated/luke-jr] has joined #lightning-dev
17:03 -!- luke-jr [~luke-jr@unaffiliated/luke-jr] has quit [Ping timeout: 258 seconds]
17:04 -!- lukedashjr is now known as luke-jr
17:17 -!- luke-jr [~luke-jr@unaffiliated/luke-jr] has quit [Excess Flood]
17:17 -!- luke-jr [~luke-jr@unaffiliated/luke-jr] has joined #lightning-dev
17:26 -!- rusty [~rusty@120.20.107.177] has joined #lightning-dev
17:26 -!- rusty [~rusty@120.20.107.177] has quit [Changing host]
17:26 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #lightning-dev
17:38 <+roasbeef> jtimon: yeh don't think they've been optimized at all, likely a lot of sleep()s in there
17:40 < jtimon> I wasn't really worried about the sleeps in particular tests but mostly about many independent tests being run and my many cores not being used
17:43 < jtimon> -j everything, to be fair, then go build seems to be using all my cores without me telling it anything explicitly, and maven seems to be slow only because it's downloading even its own bitcoin core every time, I think we can cache some of the downloads without impeding CMD from updating to the latest versions
17:44 < jtimon> the docker image will be bigger though, not sure if that's a big deal or not
17:45 < jtimon> and then just get test.py to take numjobs as an argument, does that make sense as a plan to make this faster?
17:47 < jtimon> I'll try to PR something with separate potential improvements that people can ack/nack separately before I go to bed
17:49 < bitconner> my suspicion is that it may be difficult to parallelize some of them due to interactions with the miner
17:57 -!- laptop500 [~laptop@host81-147-158-108.range81-147.btcentralplus.com] has quit [Ping timeout: 268 seconds]
17:58 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Quit: Leaving.]
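[editor's note: jtimon's "get test.py to take numjobs as an argument" idea could look roughly like this. A hypothetical sketch, not the actual test.py: it assumes the pytest-xdist plugin is installed, which provides pytest's `-n` option for running tests across worker processes.]

```python
# Hypothetical sketch of the "numjobs" plan: accept -j/--numjobs and
# forward it to pytest-xdist's "-n" option. Not the real test.py.
import argparse

def build_pytest_args(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("-j", "--numjobs", type=int, default=1,
                        help="number of parallel pytest-xdist workers")
    args = parser.parse_args(argv)
    pytest_args = ["test.py"]
    if args.numjobs > 1:
        # Requires the pytest-xdist plugin; "-n N" distributes tests
        # across N worker processes, using otherwise-idle cores.
        pytest_args += ["-n", str(args.numjobs)]
    return pytest_args
```

For example, `build_pytest_args(["-j", "8"])` yields `["test.py", "-n", "8"]`. Bitconner's caveat above would still apply: tests that share a single miner/bitcoind would need per-worker isolation before they can safely run in parallel.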
18:17 -!- davterra [~none@195.242.213.120] has joined #lightning-dev
18:18 -!- DeanWeen [~dean@gateway/tor-sasl/deanguss] has joined #lightning-dev
18:19 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #lightning-dev
18:20 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has joined #lightning-dev
18:22 -!- davterra [~none@195.242.213.120] has quit [Remote host closed the connection]
18:29 < jtimon> bitconner: my instinct is to just try and see how it breaks or goes slower or faster, this seems faster, but I didn't touch the python yet:
18:29 < jtimon> https://github.com/cdecker/lightning-integration/pull/62
18:34 -!- riclas [riclas@148.63.37.111] has quit [Ping timeout: 248 seconds]
19:00 -!- davterra [~none@209.58.184.112] has joined #lightning-dev
19:32 -!- DeanWeen [~dean@gateway/tor-sasl/deanguss] has quit [Remote host closed the connection]
19:32 -!- DeanWeen [~dean@gateway/tor-sasl/deanguss] has joined #lightning-dev
19:59 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has quit [Ping timeout: 260 seconds]
20:15 -!- scoop_ [~scoop@205.178.77.52] has quit [Remote host closed the connection]
20:39 -!- davterra [~none@209.58.184.112] has quit [Ping timeout: 248 seconds]
20:41 -!- molly [~molly@unaffiliated/molly] has quit [Ping timeout: 248 seconds]
20:45 -!- aaa22 [836b9301@131.107.147.1] has joined #lightning-dev
20:46 -!- aaa22 [836b9301@131.107.147.1] has quit [Remote host closed the connection]
21:00 -!- nijak [nijak@gateway/vpn/mullvad/nijak] has quit [Ping timeout: 244 seconds]
21:01 -!- nijak [nijak@gateway/vpn/mullvad/nijak] has joined #lightning-dev
21:08 -!- scoop [~scoop@205.178.77.52] has joined #lightning-dev
21:14 -!- scoop [~scoop@205.178.77.52] has quit [Remote host closed the connection]
21:14 -!- jtimon [~quassel@109.164.158.146.dynamic.jazztel.es] has quit [Quit: gone]
21:14 -!- scoop [~scoop@205.178.77.52] has joined #lightning-dev
21:16 -!- Honethe_ [~Honthe@s91904422.blix.com] has joined #lightning-dev
21:20 -!- Honthe [~Honthe@s91904422.blix.com] has quit [Ping timeout: 272 seconds]
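[editor's note: jtimon's caching idea (keep heavy downloads in an early layer so Docker's build cache reuses them, while later steps still fetch the latest clients) might look like the fragment below. This is an illustration only, not taken from PR #62; the base image, version, and URL are assumptions.]

```dockerfile
# Illustrative fragment (not from PR #62): pin the bitcoind download in an
# early layer so Docker's build cache reuses it across rebuilds.
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y wget

ARG BITCOIN_VERSION=0.18.0
RUN wget -q https://bitcoincore.org/bin/bitcoin-core-${BITCOIN_VERSION}/bitcoin-${BITCOIN_VERSION}-x86_64-linux-gnu.tar.gz \
 && tar -xzf bitcoin-${BITCOIN_VERSION}-x86_64-linux-gnu.tar.gz -C /opt

# Steps that change often (cloning and building the implementations under
# test) go below this point, so the cached layers above survive rebuilds.
```

The trade-off is the one jtimon notes: the image gets bigger, since the cached artifacts live in a layer instead of being fetched and discarded at test time.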
21:22 -!- DeanWeen [~dean@gateway/tor-sasl/deanguss] has quit [Ping timeout: 260 seconds]
21:32 -!- scoop [~scoop@205.178.77.52] has quit [Remote host closed the connection]
21:32 -!- scoop [~scoop@205.178.77.52] has joined #lightning-dev
22:09 -!- ugam [6a331637@106.51.22.55] has joined #lightning-dev
22:40 -!- scoop [~scoop@205.178.77.52] has quit [Remote host closed the connection]
22:41 -!- scoop [~scoop@205.178.77.52] has joined #lightning-dev
22:45 -!- scoop [~scoop@205.178.77.52] has quit [Ping timeout: 245 seconds]
23:11 -!- scoop [~scoop@205.178.77.52] has joined #lightning-dev
23:16 -!- scoop [~scoop@205.178.77.52] has quit [Ping timeout: 248 seconds]
23:49 -!- Kostenko [~Kostenko@195.12.50.233] has quit [Ping timeout: 268 seconds]
--- Log closed Tue Jul 23 00:00:18 2019