--- Log opened Mon Feb 04 00:00:40 2019
02:48 < rusty> I've created an issue to serve as tomorrow's agenda: https://github.com/lightningnetwork/lightning-rfc/issues/566
02:49 < rusty> Please tag issues you want to discuss, in case people want to prepare ;)
05:38 < riclas> hey guys, a general question: if I lock funds into LN, when I unlock them, are my entry outputs used to pay my exit, or some other outputs?
05:40 < ysangkok> riclas: we replied to your question, didn't you see it?
05:50 < riclas> ahhh missed it, since it was hours later eheh
05:52 < riclas> so if i understood correctly, my inputs will be used as my outputs, unless i should receive a larger balance, which will include other outputs
05:54 < riclas> my concern is if I use LN, won't I be using a mixer of some kind? ... I wouldn't want to taint my coins
05:54 < molz> there's no mixer on ln
05:56 < riclas> sure. but if i receive outputs that weren't mine to begin with, it's the same result
05:57 < ysangkok> you can't close the channel with more balance than the channel was opened with
05:58 < ysangkok> if you opened the channel, your closing tx will spend funds that trace back to, exclusively, the funding tx that you made yourself
05:58 < molz> yup
05:59 < riclas> ok, that's a beautiful explanation, thanks. still, if i am receiving payments from someone in LN, how do I "add them" to my bitcoin stash?
05:59 < ysangkok> well, you will be receiving balance on an existing channel that you already have
06:00 < ysangkok> because you can only receive on channels where the counterparty has some balance, it is kinda a problem if you have no channels and you want to start receiving
06:01 < riclas> can we put an example? Say i open a channel with 1 BTC. i start receiving payments from other peers and should have 2 btc. how do i get the whole balance?
06:01 < ysangkok> if you open a channel, by default, you can't receive
06:02 < riclas> or is that not possible
06:02 < ysangkok> if you want to receive payments, it is best if you get somebody to open channels to your node
06:03 < riclas> but if the person has to route a payment, and doesn't have a direct channel to me? say i have my node
06:06 < riclas> do you mean i can have channels connected to me with only people I want to?
06:06 < riclas> and there will be no unknown peers on route to me?
06:08 < molz> then you and Bob need to have a channel with each other, then if Bob has a channel with that someone, that someone might be able to send payment to you via Bob
06:12 < riclas> sure. but i want to trade with Alice, who has a channel with Charles, who has a channel with Bob, then me. the coins I receive are always from Bob's channel payment?
06:13 < riclas> and i don't know charles.
06:14 < riclas> A -> C -> B -> me, don't know C
06:37 < ysangkok> any payment you receive must come on a channel that you have with an immediate peer
06:37 < ysangkok> if your peer doesn't have balance in a channel he has with you, he cannot send you anything
06:38 < ysangkok> every arrow ("->") you typed is a channel
06:39 < ysangkok> so you are receiving payment over a path, but the path includes channels, and you are also receiving payment on a channel
06:40 < ysangkok> when you receive money, your node will not even know how the money came from A to B
06:40 < ysangkok> all your node cares about is that B can send you money. you can determine if he can by looking at his channel balance
06:42 < ysangkok> so yes, the coins you receive will always be from B, if you only have a channel with B. if A paid the invoice, B will take money from C, and C will take money from A. but you are still getting paid from B
06:57 < riclas> thanks ysangkok. though that is the reason of my initial question... it kind of creates a web of trust regarding the coins that are sent
06:58 < riclas> i can monitor the funds from B and then dc his channel
06:58 < riclas> if he receives coins that i don't want
07:05 < ysangkok> you cannot see the balance of B's channels, and you cannot know when a payment is made on his channels. you could speculate that he might not be the payer, if you receive an htlc from him
07:06 < riclas> i meant: when i receive a payment from him that i don't "like" i can dc his channel
07:07 < ysangkok> you don't even have to disconnect, you can just fail the htlc
07:07 < molz> or you can send to us
07:07 < molz> :D
07:17 < riclas> yeah, but i want to receive my payment, i just don't want him in the future :)
07:18 < riclas> so on my ln node i can see from whom i received x payment. that's cool.
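
ysangkok's "every arrow is a channel" explanation above can be made concrete with a toy model (names and numbers are illustrative, not real protocol code: no HTLCs, no fees). Each channel holds two balances, and a routed payment just shifts value hop by hop; the receiver's funds only ever arrive on its own channel with B:

```python
# Toy model of a routed Lightning payment along A -> C -> B -> me.
# Each channel holds two balances; routing shifts value hop by hop.

class Channel:
    def __init__(self, a, b, balance_a, balance_b):
        self.balances = {a: balance_a, b: balance_b}

    def pay(self, sender, receiver, amount):
        # The counterparty must have balance on this channel to forward.
        if self.balances[sender] < amount:
            raise ValueError(f"{sender} lacks balance to send {amount}")
        self.balances[sender] -= amount
        self.balances[receiver] += amount

# Every arrow ("->") in A -> C -> B -> me is a channel.
channels = [
    Channel("A", "C", 5, 0),
    Channel("C", "B", 5, 0),
    Channel("B", "me", 5, 0),
]
hops = [("A", "C"), ("C", "B"), ("B", "me")]

for chan, (src, dst) in zip(channels, hops):
    chan.pay(src, dst, 1)  # each hop shifts 1 unit toward the receiver

# "me" got paid on the B<->me channel only; the on-chain outputs backing
# that channel never moved, and "me" never sees the A<->C or C<->B hops.
print(channels[-1].balances)  # {'B': 4, 'me': 1}
```
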
07:35 < molz> riclas, i might not know who would use ln for their tainted coins but if they do, tough luck, because there's still a cap for a very small amt to open a channel, if someone wants to mix their coins there're much better services elsewhere
07:42 < riclas> molz you mean channel capacity is small atm. but i suspect it will increase over time... also, it should be pretty simple to open many small channels to different peers
07:45 < molz> riclas, sure, but they just open a bunch of channels, big or small, but they never spend, what can they accomplish?
08:19 < riclas> if they open also a receiving node and send to it, is the channel tx linked to it on bitcoin?
08:20 < riclas> but what you say makes sense, they can only "launder" between themselves, or accomplices
08:20 < riclas> OR they can spend to otc desks
10:54 < rusty> Hi all! Meeting in 5. Agenda here: https://github.com/lightningnetwork/lightning-rfc/issues/566
10:55 < rusty> Issues by label here: https://github.com/lightningnetwork/lightning-rfc/labels/2019-02-04
10:59 < rusty> https://wiki.debian.org/MeetBot for instructions on how to annotate things
10:59 < cdecker> Present :-)
11:00 < cdecker> PING AmikoPay_CJP bitconner BlueMatt cdecker Chris_Stewart_5 kanzure lightningbot niftynei ott0disk roasbeef rusty sstone
11:00 < rusty> #startmeeting
11:00 < lightningbot> Meeting started Mon Feb 4 19:00:21 2019 UTC. The chair is rusty. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:00 < lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
11:00 < cdecker> PING bitconner BlueMatt cdecker johanth kanzure lightningbot lndbot roasbeef rusty sstone
11:00 < rusty> #info Agenda here: https://github.com/lightningnetwork/lightning-rfc/issues/566
11:00 * BlueMatt hasn't slept in 24 hours and has to leave soon
11:00 < sstone> Hi everyone!
11:01 < cdecker> Just added the multi-onion thingy in the agenda, feel free to push down the list if we don't have time :-)
11:01 < rusty> #info label to apply to issues/PRs for discussion: https://github.com/lightningnetwork/lightning-rfc/labels/2019-02-04
11:01 < cdecker> Heia sstone
11:02 < rusty> Hi all; would like to finally tag 1.0 if we can. Should we apply the few pending clarification fixes first?
11:02 < kanzure> hi.
11:02 < niftynei> hello
11:02 < rusty> https://github.com/lightningnetwork/lightning-rfc/labels/spelling if you want to see the pending ones...
11:02 < lndbot> hi :slightly_smiling_face:
11:03 < rusty> And of course: https://github.com/lightningnetwork/lightning-rfc/pull/550 which is approved but isn't technically a typo fix.
11:03 < rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/550
11:04 <+roasbeef> added what's IMO a clarification re that it's the current unrevoked point owned by that party
11:05 < rusty> Well, I guess the entire point is "this is the point they need for the commit tx you would broadcast now to unilateral close".
11:05 <+roasbeef> yeh which is the one they haven't revoked yet
11:05 < rusty> Yes.
11:08 < rusty> OK, so now it's down to wording. Perhaps we just append "(as it sent in its last revoke_and_ack message)" ?
11:08 < niftynei> it's not necessarily the unrevoked point right, it's whatever your commitment point is for the signed commit txn you have from your peer, right?
11:09 < niftynei> hmm actually i guess that is the same thing, because you don't revoke a commitment until you received the next one
11:09 < niftynei> in theory you could receive a new commitment and not get the chance to revoke it before your peer goes offline
11:10 < rusty> niftynei: well, it's whatever one you would broadcast.
11:10 < lndbot> should be unrevoked from the sending node's POV, no assumptions about the peer
11:10 <+roasbeef> last unrevoked to me is "the lowest unrevoked", fwiw lnd nodes don't store that pending commitment and rely on the other party to broadcast it
11:10 < niftynei> my issue with tying it so concretely to the revoke action is that there's no ack that the other side ever gets your revocation... it's really based off what the last valid signed commit you received, right?
11:11 <+roasbeef> doesn't matter if they ack it, you've revoked so you shouldn't broadcast that commitment
11:11 < niftynei> like you're telling them your commitment point for the most valid commitment txn you have
11:11 < niftynei> 'most valid' aka 'non-revoked'
11:13 < niftynei> the important thing seems to be focusing on the fact that it's the commitment txn your node will publish if the re-establish messages show a state chain loss for the other party
11:13 < rusty> BTW, I thought revoke_and_ack contained the N+1th point, and we want the Nth point?
11:13 <+roasbeef> at broadcast time, lnd also stores the last unrevoked to handle the case of the unrevoked+pending commitment case
11:13 <+roasbeef> so it'll be w/e we broadcasted
11:13 < niftynei> rusty i believe that's correct
11:13 <+roasbeef> ah yeh that's right (re n+1)
11:14 < rusty> Yes, so it's *not* the one you sent in the last revoke_and_ack. Hmm, not sure we can do better than the actual concrete requirement: the one corresponding to what you'll put onchain.
11:14 < rusty> ie. append "(the commitment_point the sender would use to create the current commitment transaction for a unilateral close)"
11:15 < niftynei> right which is exactly the spirit of the clarification i proposed
11:16 < rusty> I was trying to capture roasbeef's summary too.
11:17 <+roasbeef> yeh would, or "did" works there
11:17 <+roasbeef> since it's possible for them to broadcast that "top" commitment as well
11:17 < rusty> It is, logically, the one you use for the last signed commitment from the peer. But in theory you could pause before sending revoke_and_ack, and have two valid commitment txs. If your implementation were to broadcast the old one (we won't, it's atomic to us to send revoke_and_ack), *that* is what you must send.
11:18 < rusty> roasbeef: yes.
11:19 < niftynei> ah i see. so it's not necessarily the last received but the 'current closing candidate'?
11:20 < rusty> niftynei: exactly
11:20 < rusty> Whatever goes on-chain is what they need
11:21 < rusty> OK, I think we're going to timeout on this issue. I've put a suggestion up, let's move on
11:21 < niftynei> :ok_hand:
11:21 < cdecker> Ok
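
The rule the discussion converged on ("whatever goes on-chain is what they need") can be sketched in a few lines. This is a hypothetical state-tracking model, not any implementation's actual data structures: the point sent in channel_reestablish corresponds to whichever signed commitment the node might still broadcast, i.e. the lowest unrevoked one.

```python
# Sketch: pick my_current_per_commitment_point for channel_reestablish.
# Hypothetical bookkeeping, not c-lightning's or lnd's real structures.

from dataclasses import dataclass

@dataclass
class SignedCommitment:
    number: int                  # commitment number N
    per_commitment_point: bytes  # point the peer needs for this tx
    revoked: bool                # True once we sent revoke_and_ack for it

def reestablish_point(commitments):
    """Return the point for the commitment tx we would broadcast.

    If we paused between receiving commitment N+1 and revoking N, both
    are valid; whichever we might put on-chain is the one the peer
    needs, so we pick the lowest unrevoked.
    """
    unrevoked = [c for c in commitments if not c.revoked]
    return min(unrevoked, key=lambda c: c.number).per_commitment_point

# Example: 7 revoked, 8 pending revocation, 9 just received.
state = [
    SignedCommitment(7, b"point7", revoked=True),
    SignedCommitment(8, b"point8", revoked=False),
    SignedCommitment(9, b"point9", revoked=False),
]
assert reestablish_point(state) == b"point8"
```
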
11:21 < rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/558
11:22 < rusty> Trivial change to bring scripts into same form.
11:22 < rusty> Anyone object?
11:22 < cdecker> Sounds simple enough
11:22 < rusty> (We're also doing 558 562 563 if you want to read ahead :)
11:22 < lndbot> lgtm
11:22 < rusty> #action apply https://github.com/lightningnetwork/lightning-rfc/pull/558
11:23 < rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/559
11:23 < cdecker> sgtm
11:23 < rusty> Another trivial clarification: the receiver sets max values, the sender must not violate.
11:24 < rusty> #action apply https://github.com/lightningnetwork/lightning-rfc/pull/559
11:24 < rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/562
11:24 < cdecker> LGTM
11:24 <+roasbeef> 558 should specify the precise data push
11:25 < rusty> roasbeef: it's kind of implied by the length, I guess.
11:25 <+roasbeef> commented on the PR
11:25 < rusty> roasbeef: but yeah, this matches what we use elsewhere in those calculations, but more clarity would be nice.
11:25 < rusty> #action apply https://github.com/lightningnetwork/lightning-rfc/pull/562
11:26 < rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/563
11:26 < rusty> roasbeef: I think that would be a separate sweep, though.
11:26 < rusty> roasbeef: but if you want I can try to unaction it :)
11:27 < rusty> #action apply https://github.com/lightningnetwork/lightning-rfc/pull/563
11:27 < rusty> Now, that was faster :)
11:27 < rusty> #topic Finally tagging 1.0
11:27 <+roasbeef> just saying "data" doesn't clarify any more than it is, since as you say it's arguably implicit, so if we're going to specify we should eliminate all ambiguity
11:28 < cdecker> btw rusty if you follow #action with one of the participants' names they'll be assigned the action in the meeting notes (I just assigned myself if noone else was jumping in)
11:29 < rusty> roasbeef: there's no convenient wording for those pushes, though. Elsewhere it's annotated like (OP_DATA: 1 byte (pub_key_alice length))
11:30 <+roasbeef> i mean like the exact op code
11:30 < sstone> roasbeef: and readers can check all the details in the test vectors
11:30 < cdecker> Well, OP_PUSH1 would be my personal preference
11:30 <+roasbeef> sure, they can, but if we're modifying it, should make it as explicit as possible
11:30 < cdecker> Gives the reader both the op-code as well as the content
11:30 <+roasbeef> which may mean a larger sweep as rusty said
11:30 < rusty> roasbeef: to be clear, they're just doing the minimal to bring it into line with the others.
11:32 < rusty> roasbeef: and the point of this part of the spec is to merely establish the length/weight of the txs.
11:33 < cdecker> Right, we can always be more verbose and add details at a later point in time
11:33 < rusty> #action roasbeef to come up with better notation than OP_DATA for BOLT#3 weight calculations
11:34 < cdecker> SGTM
11:34 < rusty> So, back to topic. After those 4 spelling fixes, can we please tag 1.0?
11:34 < rusty> Like, git tag v1.0
11:34 < rusty> Then we can officially open the gates to The New Shiny.
11:35 < rusty> It's like I want to open my Christmas presents and you're all being all grinchy...
11:35 < cdecker> Happy to tag it, we've been mostly working on v1.1 features anyway
11:35 < rusty> (I realize the text is far from perfect, but time is finite).
11:35 < lndbot> is the plan to modify the current docs for 1.1, meaning that you no longer can implement 1.0 without checking out the tag?
11:36 < cdecker> Yep
11:36 < lndbot> or will it be clear from it what is towards 1.0, 1.1 etc
11:36 < lndbot> ok
11:36 < rusty> lndbot / johanth: yes. Though we're not planning any compat breaks, so it's a bit semantic.
11:37 < rusty> roasbeef: you happy for a v1.0 tag?
11:37 < rusty> sstone: and you?
11:37 < sstone> yes
11:38 <+roasbeef> sgtm, will take a peek at pending open prs to see if there's anything that maybe should land
11:38 < rusty> #action rusty Tag v1.0 before next meeting, after approved fixes here.
11:38 < rusty> OK, looking at https://github.com/lightningnetwork/lightning-rfc/labels/2019-02-04
11:38 < sstone> just remembered that there's an error in the onion test vectors
11:39 < rusty> sstone: :( Any chance of a fix soon?
11:39 < sstone> yes it should be trivial
11:40 < rusty> #action sstone To fix onion test vectors, rusty to verify.
11:40 < rusty> Thanks!
11:40 < rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/557
11:40 < rusty> I remain to be convinced that we can't just suppress flapping nodes, BTW.
11:41 < rusty> (ie. remember that we have a dup with a later timestamp, and re-broadcast the old one).
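
rusty's suppression idea can be sketched as a small cache (a minimal sketch, assuming a channel_update is already parsed into a dict; field names are illustrative, not the BOLT 7 wire format, and this drops redundant updates rather than re-broadcasting the old one):

```python
# Minimal duplicate-suppression sketch for channel_update gossip:
# drop an incoming update that differs from the stored one only in
# its signature and timestamp.

last_update = {}  # (short_channel_id, direction) -> last accepted update

def is_redundant(prev, cur):
    """True if cur changes nothing except signature/timestamp."""
    ignored = {"signature", "timestamp"}
    keys = (prev.keys() | cur.keys()) - ignored
    return all(prev.get(k) == cur.get(k) for k in keys)

def accept_update(update):
    key = (update["short_channel_id"], update["direction"])
    prev = last_update.get(key)
    if prev is not None and update["timestamp"] <= prev["timestamp"]:
        return False  # stale, or duplicate timestamp
    if prev is not None and is_redundant(prev, update):
        return False  # newer timestamp, but nothing actually changed
    last_update[key] = update
    return True

u1 = {"short_channel_id": 1, "direction": 0, "timestamp": 100,
      "signature": "sig1", "disabled": False, "fee_base_msat": 1000}
u2 = dict(u1, timestamp=220, signature="sig2")  # refresh-only spam
assert accept_update(u1) is True
assert accept_update(u2) is False  # suppressed as redundant
```

Note that, as the discussion below points out, this only catches updates that are literally redundant at the time; a disable-then-reenable flap produces two non-identical updates that are only redundant in retrospect.
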
11:41 <+roasbeef> rusty: same, nodes can just spam to hell, and IMO it's futile to try to sync all the channel updates
11:41 < cdecker> #557 is looking much better, thanks sstone ^^
11:42 <+roasbeef> instead clients can prioritize updates for channels they've used in the past, and eat the latency on an error due to a failed update. routing nodes don't really need them at all
11:42 < cdecker> roasbeef: absolutely, having a consistent view of the network topology is a red herring
11:42 < cdecker> But I understand sstone's wish to keep traffic low
11:42 < rusty> roasbeef: yeah, I think we'll need to suppress spammy ones *anyway*.
11:42 < rusty> cdecker: I think if nodes start suppressing dups he'll get his wish?
11:43 < cdecker> We might impose exponential backoff for chatty nodes
11:43 < rusty> (Though it implies you should not send more than 2 updates every 120 seconds to be safe)
11:43 < sstone> to be clear: we're trying to sync A and B, not A and the rest of the world (i.e. mobile nodes are connected to very few peers)
11:43 < rusty> sstone: but if B is suppressing redundant updates for you, I think you get your wish.
11:44 < rusty> (BTW, should I prioritize writing code so c-lightning doesn't ask for ALL the gossip on every connect? I've been slack...)
11:44 < sstone> yes, possibly. but the new PR has a minimal impact on the spec (no new messages, just optional data)
11:45 <+roasbeef> sstone: sync A and B?
11:45 < cdecker> Nah, c-lightning can deal with the extra data :-)
11:45 < rusty> sstone: but if we suppress redundant updates, I think your extension becomes pointless?
11:45 < rusty> cdecker: what's 20MB between friends?
11:45 < sstone> sync a mobile node that's offline most of the time and the peer we're connected to
11:46 < rusty> #action rusty c-lightning to do something smarter than retrieve all gossip from peer before 0.7 release.
11:46 < sstone> if it really becomes pointless then people will stop using it, but I doubt it will happen that soon
11:47 < rusty> sstone, roasbeef: Well, anyone object to me implementing dup suppression to see how it goes? Then I'll look at implementing the extension if that doesn't work?
11:47 <+roasbeef> sstone: don't see how that distinction changes anything, you'll still end up trying to get all the updates since you were down, or all chans that updated since then
11:47 <+roasbeef> (unless it has preferential querying)
11:48 < rusty> roasbeef: there was a good on-list analysis of how much gossip is redundant; it's a lot :(
11:48 < sstone> we're trying to minimize traffic and startup time. we've implemented it and the gains are massive right now
11:48 < cdecker> Well, actually if you're offline and reconnect you should only get the channel_announcement and the latest channel_update, not all prior ones
11:48 < cdecker> The duplicate channel_update stuff can only happen while online or while trying to sync from multiple nodes simultaneously
11:49 <+roasbeef> only massive as y'all try to always sync all updates rn?
11:49 < cdecker> (and duplicates for nodes that you know of, that have had an update, which I now realize is exactly what you're trying to suppress, my bad)
11:49 < sstone> you come back online, your peer has 1000 updates with newer timestamps, only 10 of them carry actual change
11:50 <+roasbeef> ahh gotcha, assuming you want all the updates
11:50 < ott0disk> the trick of the PR is to not retrieve updates that did not change; this is done via the checksum.
11:50 < sstone> I'm almost not making the numbers up :)
11:50 <+roasbeef> what if peers just reject updates that don't change at all between the last
11:51 < cdecker> What kind of savings are we talking about here sstone? Just curious
11:51 < ott0disk> although you still want to have one every 2 weeks to detect the stale channels
11:51 < rusty> roasbeef: that's what I suggested, but it doesn't help if they actually are doing "disable, reenable".
11:51 < sstone> a good 80% less traffic on the current network
11:51 <+roasbeef> same thing w/ the checksum tho right? they'd compute and refetch since that bit flipped
11:51 < rusty> ott0disk: yes, as I suggested in my post onlist, you need to allow for weekly updates.
11:52 < rusty> roasbeef: not if it happened since you went offline. ie. two updates, you're back where you started.
11:52 < cdecker> roasbeef: we've also had a few variants on the ML where a flapping channel would get overridden by the second to last update, resulting in the wrong endstate
11:52 < rusty> ie. they're not *strictly* redundant at the time, just in retrospect they are.
11:53 < sstone> also I believe that the checksum trick could be useful for INV-based gossip, and potentially for set-based reconciliation (still think it's a bad fit though, must be missing smthg)
11:53 < rusty> sstone: adler was a nice touch, I was having flashbacks to the zlib source code :)
11:54 < sstone> it was pm's idea (we used it a long time ago on another project)
11:55 < cdecker> Ok, so from my side this is a far better proposal than the last one. It leaves a lot of flexibility, is fully backward compatible and doesn't need signaling nor new messages
11:55 < rusty> OK, so redundant suppression helps if they're literally redundant, but the csum proposal helps if they're just transient but not literally redundant. I'll have to record the exact gossip to determine which it is, unless sstone has that data already?
11:55 < rusty> cdecker: agreed. If we need something like this, this is nice and minimal.
11:56 < sstone> yes we have "6-hourly" dumps of our routing table (need to check we're still doing that)
11:57 < cdecker> I have quarter-hourly stats on the channel state if that helps
11:57 < rusty> sstone: that won't tell me what I need; if consecutive updates are identical. That's OK, I can figure it out.
11:57 < rusty> roasbeef: I guess the question is, do you have any major objections to the extension? Seems pretty clean to me...
11:58 <+roasbeef> haven't looked at specifics of the latest
11:59 < rusty> roasbeef: OK, cool, let's shelve. But no objections from me.
11:59 < rusty> Next topic?
11:59 <+roasbeef> diff looks much smaller than the last ;)
11:59 < rusty> Yeah :)
11:59 < cdecker> Hehe
11:59 < rusty> cdecker: you wanted to discuss onion fmt?
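
The checksum trick discussed above, roughly: checksum each channel_update over everything except the signature and timestamp, so a node coming back online can skip refetching updates whose contents didn't actually change. A sketch using zlib's Adler-32 (which rusty's "flashbacks to the zlib source" refers to); the field layout here is illustrative, and #557 defines the exact bytes:

```python
import struct
import zlib

# Sketch of the #557 checksum idea: two updates that only refresh the
# timestamp/signature yield the same checksum, so there's no refetch.
# Field selection/encoding is illustrative, not the PR's exact scheme.

def update_checksum(update):
    payload = struct.pack(
        ">QHHQII",
        update["short_channel_id"],
        update["flags"],
        update["cltv_expiry_delta"],
        update["htlc_minimum_msat"],
        update["fee_base_msat"],
        update["fee_proportional_millionths"],
    )
    return zlib.adler32(payload)

old = {"short_channel_id": 0x1234, "flags": 0, "cltv_expiry_delta": 144,
       "htlc_minimum_msat": 1000, "fee_base_msat": 1000,
       "fee_proportional_millionths": 10}
new = dict(old)  # same content, only a fresher timestamp on the wire
assert update_checksum(old) == update_checksum(new)  # skip the refetch
```

As roasbeef and rusty note above, a disable/reenable flap that completes while you were offline leaves the checksum unchanged too, which is exactly why it saves the refetch where plain dup suppression doesn't.
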
11:59 <+roasbeef> prob also a meta q on if it should be TLV or not (as new optional field)
12:00 < cdecker> Yep, roasbeef has opened a PR here https://github.com/lightningnetwork/lightning-onion/pull/31 about the multi-hop onion
12:00 < rusty> #action Continue discussing https://github.com/lightningnetwork/lightning-rfc/pull/557 on the PR
12:00 < cdecker> And I added a (rather lengthy) comment below after reviewing it. I'd like to invite everybody to take a look and see what works better for them
12:00 < rusty> #topic https://github.com/lightningnetwork/lightning-onion/pull/31 multi-hop onion
12:01 < cdecker> I have listed all the pros to my approach there, so I won't repeat them here
12:01 < rusty> cdecker: I like getting more room in the onion for roasbeef's movie streaming service.
12:02 <+roasbeef> we can get those 64 or w/e bytes just by eliding the mac check between the pivot and final hop, i favor the multi unwrap as it keeps the processing+creation code nearly identical
12:02 < cdecker> But I was wondering, since I'm implementing this and I keep getting confused with hops (as in single payload size) and hop-payload (as in actual payload being transferred), if everybody is ok if I use the term "frame" to refer to a single 65-byte slice
12:02 < rusty> cdecker: have you implemented it yet? I worry that hmac position will be awkward.
12:02 <+roasbeef> otherwise we may end up with 3 diff ways: rendezvous, regular, multi onion
12:02 < cdecker> I'm implementing it now
12:03 < cdecker> roasbeef: no, we end up with 2 formats: legacy, and TLV (which includes rendez-vous and spontaneous payments and all the rest)
12:03 <+roasbeef> processing
12:03 < cdecker> TLV being the multi-onion/single-onion case
12:03 < rusty> cdecker: indicated by realm byte 0 vs other, right?
12:03 <+roasbeef> also i think TLV should be optional, and there's a big hit for signalling those values
12:04 <+roasbeef> and many times makes what should fit into a single hop fit into 3
12:04 < cdecker> Yep, realm 0 (i.e., MSB unset) keeps exactly the same semantics and processing as the old one
12:04 <+roasbeef> also with type+TLV we get more "solution" space
12:04 < cdecker> We only differentiate in how we interpret the payload
12:04 <+roasbeef> so up to 65k depending on size of type
12:04 < cdecker> On the contrary I think TLV is eventually the only format we should support
12:04 < rusty> roasbeef: well, I consider framing and format indep topics. Whether we end up implying new format from new framing, I don't mind.
12:05 <+roasbeef> we have a limited amount of space, unlike the wire message where it's 65kb
12:05 < cdecker> Actually it's 32 + 32 + 18*65 bytes maximum payload
12:05 <+roasbeef> yeh as is, it doesn't care about framing it just passes it up
12:05 <+roasbeef> (of the payload)
12:06 <+roasbeef> cdecker: so i mean that you have options if there's a distinct type, then optional tlv within that type, vs having to agree globally on what all the types are
12:06 <+roasbeef> (all the tlv types)
12:06 < cdecker> So with my proposal we just read the realm, the 4 MSB tell us how many additional frames to read, and then we just pass the contiguous memory up to the parser which can then differentiate between legacy realm 0 and TLV based payload
12:07 < rusty> cdecker: BTW do you go reading the realm byte to determine the hmac position before you've checked the hmac?
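
cdecker's framing rule, as described here and spelled out further down ("shift by 65 ... shift by 130"), reduces to reading a frame count out of the realm byte. A minimal sketch of that calculation; the exact bit split was still being bike-shedded, so treat the nibble layout as an assumption:

```python
FRAME_SIZE = 65  # one "frame": a single 65-byte slice of the hops data

def parse_realm(realm_byte):
    """Split the realm byte per the proposal discussed here.

    Assumed layout: the 4 most significant bits say how many
    *additional* frames this hop's payload occupies; the low bits keep
    the realm (0 = legacy, so a legacy packet parses unchanged).
    """
    extra_frames = realm_byte >> 4
    realm = realm_byte & 0x0F
    shift = (1 + extra_frames) * FRAME_SIZE  # bytes to shift when unwrapping
    return realm, extra_frames, shift

assert parse_realm(0x00) == (0, 0, 65)   # legacy: 4 MSB are 0, shift by 65
assert parse_realm(0x10) == (0, 1, 130)  # 4 MSB are 0b0001, shift by 130
```
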
12:07 <+roasbeef> i brought up the value of invoiceless payments in australia, ppl asked what ppl would actually use it for (many were skeptical), once it dropped many app devs were super excited w.r.t the possibilities
12:07 < rusty> roasbeef: yeah, blog post coming RSN on why it's a terrible idea :)
12:07 <+roasbeef> why what is?
12:07 < rusty> roasbeef: not giving a receipt :)
12:08 < cdecker> Well, if we make TLV correspond to realm 1 we still have 254 realms to be defined if we want to drop TLV at some point
12:08 < rusty> roasbeef: but to be fair, doing it "right" means we need bolt11 offers, which is a bunch of work.
12:08 < cdecker> rusty: the HMAC is just the last 32 bytes of the payload, which can be computed from the realm and then we just check that
12:08 <+roasbeef> yeh, so packet level type that tells you how to interpret the bytes to the higher level
12:08 <+roasbeef> rusty: depends entirely on the use case, in many cases you can obtain the same with a bit more interaction at a diff level
12:09 <+roasbeef> there aren't even any standards for receipts yet afaik
12:09 <+roasbeef> and this is prob the most requested feature i see
12:09 < cdecker> Did I just lose the multi-frame discussion to a side-note? xD
12:09 < rusty> roasbeef: I agree. Anyway, let's stick to topic.
12:10 <+roasbeef> if you have the optional tlv, then you need a length as well
12:10 < rusty> cdecker: I'd like to see the implementation. I think that moving framing is cleaner than roasbeef's redundant HMACs.
12:11 < cdecker> Anyway, if we stick to unwrapping the onion incrementally, and just skip HMAC like roasbeef suggests we do a whole lot of crypto operations extra, we don't get contiguous payloads and have to copy to reassemble, and we don't get any advantage from it imho
12:11 <+roasbeef> as in how many bytes in a frame to consume (as what impl does), otherwise you end up passing extra zeros if you don't have a padding scheme
12:11 <+roasbeef> no advantage? you get the extra space
12:11 < cdecker> No, I mean over my proposal
12:11 <+roasbeef> it's simpler imo, identical creation+processing
12:11 < rusty> roasbeef: 0 tlv is terminal, usually.
12:12 < rusty> roasbeef: that's why I want to see an implementation of cdecker's proposal.
12:12 < cdecker> And I'm saying mine is even simpler since all we do is changing the shift size
12:12 < cdecker> :-)
12:12 <+roasbeef> but processing changes completely, no?
12:12 <+roasbeef> this way you do the same processing as rn in a loop
12:12 < cdecker> I'm implementing it now and I'll publish on the ML
12:13 < cdecker> roasbeef: no, processing just gets variable shifts as indicated by the realm
12:13 < rusty> OK, we're 10 minutes over. Any final topics worth a mention?
12:13 <+roasbeef> yeh, it changes ;)
12:13 < cdecker> if the 4 MSB of the realm are 0, shift by 65, if they are 0b0001 then shift by 130 and so on
12:13 < cdecker> Nope, we'll bike-shed on the mailing list :-)
12:13 <+roasbeef> creation too
12:14 < rusty> I would like to point people at https://github.com/rustyrussell/lightning-rfc/tree/guilt/tests which is my very alpha code and proposal for JSON-formatted test cases in the spec.
12:14 < cdecker> Yeah, but again that's specular, and way simpler than splitting the payload into 8 bytes + 64 bytes + 32 bytes later, and encrypting them a bunch of times
12:14 < rusty> I would also like a decision on feature bits, but we might have to defer to next mtg.
12:15 < rusty> (ie. combine "routing" vs "peer" bits, or expand node_announcement to have two bitmaps?)
combine "routing" vs "peer" bits, or expand node_announcement to have two bitmaps?) 12:15 <+roasbeef> loops are pretty simple, and creatino is the same once you get the "unrolled" route 12:16 <+roasbeef> didn't see what was gained by splitting em 12:17 < rusty> ETIMEDOUT I think. Happy to stick around for a bit, but OK if we end meeting? 12:17 < cdecker> Sure 12:17 < rusty> #endmeeting 12:17 < lightningbot> Meeting ended Mon Feb 4 20:17:59 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 12:17 < lightningbot> Minutes: http://www.erisian.com.au/meetbot/lightning-dev/2019/lightning-dev.2019-02-04-19.00.html 12:17 < lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/lightning-dev/2019/lightning-dev.2019-02-04-19.00.txt 12:17 < lightningbot> Log: http://www.erisian.com.au/meetbot/lightning-dev/2019/lightning-dev.2019-02-04-19.00.log.html 12:18 < rusty> Thanks everyone.... 12:18 < rusty> sstone: did you have q. over the feature bits? 12:18 < cdecker> roasbeef: your proposal does a number of encryption and decryption rounds which are pointless, and the data needs to be copied to be parsed. 12:19 < cdecker> Anyway, gotta go, let's continue this on the ML/issue tracker 12:19 < sstone> yes I think I understand your last message 12:19 < rusty> The original idea was to broadcast (in node_announcement) only the stuff you need if you want to route through the node. Local stuff you'd find out on connect. But turns out people wanted to find nodes with certain local features... 12:20 < sstone> yes for example dataloss_protect 12:20 < rusty> sstone: exactly! 12:20 < sstone> also I was confused by the wording in " Put both in `features` in node announcements, but never use even bits for peer features" 12:21 < rusty> sstone: so I think we need them in node_announcement. We could add a second bitmap, or simply combine them. 12:21 < rusty> sstone: right, I meant "never use even peer features when putting it into node_announcement" 12:21 < rusty> sstone: which is a bit hacky, but it works. 12:22 < sstone> I understood the opposite at first (don't use even bits in per-connection features) 12:22 < rusty> IN theory, if we added a new field to node_announcement (peer_features?) we could halve the number of bits. You don't usually care if a peer feature is optional or compulsory, just if it's supported at all But that's probably premature optimization... 12:23 < rusty> sstone: yes, so I understand your confusion! 12:24 < sstone> rusty: but with your convention of not using mandatory peer features in node announcement we should have something that is really close to the current spec 12:25 < rusty> sstone: yes. 12:25 < sstone> and I was hoping we could try it out with the new wumbo feature 12:25 < rusty> We just need to make sure we assign feature bits from the same pool: ie. no local and global features can be equal. 12:26 < rusty> sstone: OK, I'll write up an actual spec proposal using that system then, since we seem to be in agreement. That will clear the way to actual feature bit assignments! 12:26 < sstone> how will you rename global features ? :) 12:27 < rusty> sstone: routing_features? and local->peer features? Got a better name? 12:28 < sstone> I like node_features better 12:29 < rusty> OK. As long as it's clear that "you can't route through if you don't understand an even feature". 12:29 < sstone> yes. 
12:35 < sstone> rusty: got to go now. Thanks!
12:38 <+roasbeef> cdecker: parsing only there as was an attempt to do things in a black box manner, can easily be removed, but looking forward to checking out your code
--- Log closed Tue Feb 05 00:00:41 2019