--- Log opened Mon Aug 26 00:00:52 2019
13:02 < niftynei> i spy a minisketch
13:03 < niftynei> O.o
13:48 < bitconner> sip, don't gulp 😂
15:03 <+roasbeef> i wonder how minisketch will translate though, in bitcoin land there's a more regular access pattern due to ongoing transaction broadcast, so you can use that to make a sort of estimate of the diff size, while in lightning ideally long term we just stop optimistically propagating these fee updates and they're more rare, at steady state also things can be more bursty (say a big node does all their updates), etc
15:04 <+roasbeef> i think just moving to inv in the short term will give us a lot, then at that point, we'd need to see empirically how much savings we get, and if that's worth the additional protocol complexity (it adds a good bit more round trips, which may be less attractive on mobile, and afaik minisketch was designed more with full nodes in mind, which are always active so have a good way to estimate the size diff)
15:12 < gleb> Have you seen my comment on Rusty's GitHub issue?
15:17 <+roasbeef> gleb: yeh
15:18 <+roasbeef> it'll also become the most complex part of the protocol in a sense as well if we went all the way w/ it fwiw, when we can have fairly large gains by just moving to invs
15:18 <+roasbeef> it's also a very diff data model, there's two things we want to sync: new channels and updates
15:18 <+roasbeef> new channels are a linear series in time, as you can ask for channels in a particular block range
15:18 <+roasbeef> while updates can be generated more or less for free as is now, compared to bitcoin where you need to attach a relay fee and there's only a certain allotment for "penny" transactions, etc
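The two record classes roasbeef distinguishes map onto very different data shapes. A rough C sketch, assuming the usual BOLT 7 short channel id layout (block height, tx index, output index) and with simplified, illustrative field names: announcements sort naturally by block height, which is what makes "channels in a particular block range" a cheap query, while updates are per-direction policies where the newest timestamp wins.

    #include <stdint.h>
    #include <stdbool.h>

    /* The short channel id pins a channel to its funding output, so
     * announcements form a linear series in block height (BOLT 7 packs
     * this as 3 bytes height, 3 bytes tx index, 2 bytes output index). */
    struct short_channel_id {
        uint32_t block_height;   /* 24 bits used */
        uint32_t tx_index;       /* 24 bits used */
        uint16_t output_index;
    };

    /* Static half of the gossip: lives until the channel is closed or forgotten. */
    struct channel_announcement {
        struct short_channel_id scid;
        uint8_t node_id_1[33], node_id_2[33];   /* compressed pubkeys */
    };

    /* Dynamic half: a per-direction routing policy, newest timestamp wins;
     * these can be regenerated more or less for free. Fields simplified. */
    struct channel_update {
        struct short_channel_id scid;
        bool direction;
        uint32_t timestamp;
        uint32_t fee_base_msat;
        uint32_t fee_proportional_millionths;
        uint16_t cltv_expiry_delta;
    };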
15:32 < gleb> Regarding round-trips, I think based on the deployment minisketch might either be the same number of round-trips as INVs, or +1 extra round-trip.
15:36 <+roasbeef> gotcha
15:36 <+roasbeef> but yeh the domain is also diff, as having better tx propagation in bitcoin means blocks can propagate more quickly via compact blocks or w/e, and may also aid with miner profitability
15:37 <+roasbeef> while in LN, if you don't have a channel update, it isn't critical, as you can try that path then receive the authenticated update back in a failure message
15:37 <+roasbeef> so there're only first-order benefits in terms of bandwidth savings (maybe, depending on one's role)
15:38 <+roasbeef> in bitcoin, it also isn't the case that light clients come up and want to sync the mempool (ignoring BIP 37 for a second), while "light" LN clients _may_ want the past updates in order to reduce payment latency due to out-of-date updates
15:38 <+roasbeef> a useful piece of data (which I think pierre has somewhat), would be how many of these channel updates are actually "useless", meaning they just update the timestamp and nothing more
15:40 < gleb> Carla was collecting some data like that over the summer, but we were also busy with other things so it's not finished now.
15:40 < gleb> And it's not clear how it would change with Lightning.
15:40 <+roasbeef> yeh, i think many have diff views on this, but imo all channel updates are basically useful and superfluous
15:41 <+roasbeef> as a routing node, you don't really need them, as you're just forwarding payments, and can use node/chan ann information to make decisions as to where to open channels
15:41 <+roasbeef> clients need them, but it isn't critical, as they can obtain them on demand
15:42 <+roasbeef> also of the 35k or so channels, the avg client likely uses less than a hundred of them on a daily basis, just due to how most of the popular services are interconnected and also the lack of proper channel management (atm)
15:42 < gleb> Comparing to INVs, the most bandwidth savings you can *theoretically* get is (connectivity - 1) / connectivity (theory people would argue a bit but that's close enough). In Bitcoin with 8 conns it's 87.5% savings on average across the network. Minisketch allows you to achieve that bound, and Erlay does something like 80% due to a little overhead.
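Gleb's bound is just arithmetic: with flooding, a node receives each announcement roughly once per connection, so at best all but one copy can be avoided. A minimal C sketch of the numbers quoted above:

    #include <stdio.h>

    /* Best-case bandwidth saved by set reconciliation versus naive flooding:
     * of `conns` copies of an announcement a node receives, only one is useful. */
    static double max_savings(double conns) { return (conns - 1.0) / conns; }

    int main(void) {
        printf("8 connections (bitcoin default): %.1f%% theoretical max\n",
               100.0 * max_savings(8));   /* 87.5% */
        printf("4 connections:                   %.1f%% theoretical max\n",
               100.0 * max_savings(4));   /* 75.0% */
        /* Erlay measures roughly 80% in practice due to protocol overhead. */
        return 0;
    }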
15:42 <+roasbeef> we could remove channel update rumouring as is today, and things wouldn't be affected that much
15:42 <+roasbeef> yeh that's the other thing, connections on bitcoin are a bit more structured, given the addrmgr heuristics and the default settings (8 conns, 125 conns, etc)
15:43 <+roasbeef> while larger routing nodes in LN have upwards of 1k connections (inbound+outbound) at a given moment
15:43 < gleb> The question is (a) whether that is a big deal in the grand scheme of things — do you actually spend bandwidth there, and (b) how to deploy it so that you can get that optimal bandwidth gain, is it even possible to get close
15:43 <+roasbeef> as these connections are critical to operation, compared to the more ephemeral connections in bitcoin (just gimmie the blocks, etc)
15:44 <+roasbeef> gleb: if the bandwidth is a big deal in the grand scheme?
15:44 < gleb> Well, in Bitcoin announcing transactions is 10GB/month, while the total bandwidth a node consumes is 20GB/month
15:45 < gleb> So it's noticeable. But in lightning if, as you say, there will be no channel updates — perhaps it's not worth adding the complexity
15:46 < gleb> There is no point in optimizing 40MB/month, that's what i'm saying. And I don't have a sense of which direction the lightning network is moving.
15:50 < gleb> Regarding different domains: latency is not minisketch's business, it is a high-level protocol thing (Erlay). Having no requirements for latency might help to make difference estimation better, so that's cool, but nothing would change fundamentally, it's really just a formula.
15:51 <+roasbeef> oh yeh by latency I mean that if the client doesn't have all the state that it needs, then it obtains it after a failed payment, so the tradeoff there is a bit more latency from the client's PoV
15:51 <+roasbeef> which imo is ok, as many times ppl open their LN app or w/e, they may just be checking on the state of their wallet or channels, and not actually send a payment
15:51 <+roasbeef> so it isn't worth it to attempt to sync all that state on demand, as it's likely just a waste
16:01 < gleb> What's the regular mode of operation you're thinking of? A mobile wallet (represents a node) comes online, and tells its 4 lightning peer nodes that it needs updates from block X?
16:01 <+roasbeef> so first there's a new bootstrapping node: it comes online, connects to peers, then asks (in order of block height) for channels, it can control how far back this range goes, this is to get enough information to either be able to route (client) or to be able to make decisions about where to open channels (routing node)
16:02 <+roasbeef> then for a mobile client: it comes online every now n then, and can either say "give me all the updates in the past N seconds" (the way it is right now), and then also say "give me all the new channels beyond block XXX"
16:03 <+roasbeef> a routing node that restarts may also want to hear of all the updates since it was down (or not, imo they don't need to)
16:03 <+roasbeef> "updates" here are routing policies on channels that may impact path finding (new features, fees, lock time, etc)
16:03 <+roasbeef> but as a client, if I try w/ your old fees, and you have a new fee, then I'll get back the new one in the error packet
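The two backlog queries roasbeef just described ("updates in the past N seconds", "new channels beyond block X") correspond roughly to BOLT 7's gossip_timestamp_filter and query_channel_range. A simplified C rendering; the real messages also carry a chain_hash and optional extensions, and the helper below is only an illustration:

    #include <stdint.h>
    #include <time.h>

    /* "give me all the updates in the past N seconds":
     * roughly BOLT 7's gossip_timestamp_filter. */
    struct gossip_timestamp_filter {
        uint32_t first_timestamp;
        uint32_t timestamp_range;
    };

    /* "give me all the new channels beyond block X":
     * roughly BOLT 7's query_channel_range. */
    struct query_channel_range {
        uint32_t first_blocknum;
        uint32_t number_of_blocks;
    };

    /* Example: a mobile client that was last online `down_secs` ago asks
     * for everything gossiped since then. */
    static struct gossip_timestamp_filter catch_up_filter(uint32_t down_secs) {
        struct gossip_timestamp_filter f;
        f.first_timestamp = (uint32_t)time(NULL) - down_secs;
        f.timestamp_range = UINT32_MAX;   /* no upper bound */
        return f;
    }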
16:03 < gleb> Does a mobile client have p2p functionality? Are there expectations for relaying updates to other peers? Or is it just a crypto library connected to a real node.
16:04 <+roasbeef> yeh it has all the p2p functionality, it isn't required to relay updates to other peers
16:04 <+roasbeef> it'll send out information of _new_ channels, but not _updates_, since it isn't really routing
16:04 <+roasbeef> the channels are static data, they'll be there until the channel is closed or forgotten, while the updates are more real-time data
16:04 <+roasbeef> also the first time it hears about a channel, it gets that update
16:05 <+roasbeef> many impls will also send out "heartbeat updates", so say like every 2 weeks or so, just to remind impls that the channel is still active, as some will prune after a period of inactivity
16:06 <+roasbeef> one thing which we don't care about too much atm, but will in the near future, are also node updates, which (atm) give connection/addr info and also what features they support which may affect routing (like say new channel types or HTLC types)
16:06 <+roasbeef> however some of this feature signalling info may be copied over into channel anns and updates themselves as well
16:07 <+roasbeef> on a higher level, it isn't yet clear how _often_ these updates should happen (do you really need to adjust your fees multiple times an hour? or can you update, then when ppl don't have the latest, just send it to them)
16:07 <+roasbeef> in that example, also notice that I can save the network more bandwidth as a whole, by just sending the updates to those that need them
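The heartbeat and pruning behaviour roasbeef mentions is easy to state in code. A small sketch, assuming a two-week prune window on the receiving side; the exact windows and refresh margin are implementation choices, not spec:

    #include <stdint.h>
    #include <stdbool.h>

    #define TWO_WEEKS (14u * 24 * 3600)

    /* Peer side: prune ("zombie") channels whose newest update is older
     * than the inactivity window. */
    static bool should_prune(uint32_t newest_update_ts, uint32_t now) {
        return now - newest_update_ts > TWO_WEEKS;
    }

    /* Owner side: re-send an otherwise unchanged update (a "heartbeat")
     * shortly before peers would start pruning. */
    static bool should_heartbeat(uint32_t our_last_update_ts, uint32_t now) {
        return now - our_last_update_ts > TWO_WEEKS - 24 * 3600;
    }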
16:08 < gleb> I probably should go to a Lightning conf of some sort sometime to brainstorm this, although I'm not sure you're at the point where you actually wanna invest time here guys.
16:08 <+roasbeef> you should come to the conf either way, it's gonna be dope :)
16:13 < gleb> I'll see! Also if you have similar research problems in mind, let me know, there will be an MIT thing in October to discuss blockchain research and help academics find topics
16:13 <+roasbeef> cool, def
17:08 < rusty> bitconner: apparently he was using 10G a day on 4G!
17:08 <+roasbeef> :o
17:09 <+roasbeef> there goes their music streaming
17:11 < rusty> I suspect I'm being nerd sniped into over-optimizing this, but it's *fun* dammit...
17:13 <+roasbeef> hehe yeh we had a bit of a saga ourselves in that area in our past release, things are much better there, and we also now have some monitoring in place to watch bandwidth levels etc
17:13 <+roasbeef> so we can even see when we connect to an older node that asks for a large backlog (massive spike in outbound bandwidth) or even when we restart our nodes
17:17 < rusty> roasbeef: yeah, our next release will only ask for all gossip if it's just starting. Otherwise, it'll ask for scids, and work back if more things seem missing. One problem is that it's hard to know if you're missing node_announcements.
17:18 <+roasbeef> yeh node anns are harder, since we can't selectively query for them atm
17:18 <+roasbeef> i think the latest inv stuff patches that up
17:18 <+roasbeef> they'll become more important as we start to roll out the onion tlv stuff too
17:23 < rusty> I'm hoping to leapfrog invs with minisketch, but we'll see how it works out. I'm going to try to bandaid something together for testing.
17:24 <+roasbeef> well it'll need invs anyway as a fallback
17:24 <+roasbeef> and it would be a pretty big leapfrog lol (code wise)
17:27 < rusty> roasbeef: naah, we have the selective query flags now, we don't need invs?
17:28 <+roasbeef> there's an inv PR that builds upon it, invs are still helpful for filtering out updates at tip (and also minisketch falls back to some sort of advertise/query functionality anyway)
17:29 <+roasbeef> that query stuff lets you know in broad strokes if you're missing an older channel update
17:29 <+roasbeef> invs solve the issue of everyone still blasting you with the same update/ann once you're fully synced to the graph state, each time a channel is mined or updated
17:30 <+roasbeef> 1k connections, each of them sends you the same 5 messages (chan ann, chan upd x2, node ann x2) each time a block is mined atm
17:33 < gleb> If estimation was wrong, you can still use minisketch and save bandwidth by using bisection. This applies recursively. In Erlay we choose to fall back to flooding if the initial reconcil fails and the FIRST bisection fails. You can also do up to log(differences) bisections if you want to avoid falling back to flooding :)
17:36 < gleb> Also because we can't assume low response latency in Bitcoin, but want fast tx relay, we ended up having INVs even after reconciliation anyways (but only 1 inv per node per tx, not 8 as in gossip) to dedup if 2 reconcils happen at the same time with different salt. This might not be necessary for you, because there's no latency requirement.
17:38 < gleb> rusty: do you want access to bip-erlay? I can add you, also if roasbeef is interested. It's almost final, but I keep saying "next week" for several months now.
17:38 < gleb> It's really very low-level peer-to-peer messages described for the most part.
17:42 < rusty> roasbeef: minisketch solves that problem much more nicely. We count how much has changed since last time, and send a sketch of that capacity (with min 128 == 1kb, because why not). If we hit capacity before 60 seconds, send early. Gotta run some real tests to get numbers, however.
17:42 < rusty> gleb: minisketch kind of speaks for itself I think, but I might get some inspiration from bip-erlay, thanks!
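For reference, the reconciliation step rusty is prototyping looks roughly like this with the minisketch C API from sipa/minisketch. The 64-bit elements are dummy values standing in for packed scid-plus-timestamp IDs, and capacity 16 is chosen only to keep the example small; rusty's minimum of 128 with 64-bit elements is what gives the 1kb sketch he mentions (128 * 8 bytes):

    /* cc example.c -lminisketch */
    #include <stdio.h>
    #include <stdint.h>
    #include <minisketch.h>

    int main(void) {
        /* 64-bit field elements, default implementation, capacity 16:
         * decodes as long as the two sets differ by at most 16 elements. */
        minisketch *ours = minisketch_create(64, 0, 16);
        minisketch *theirs = minisketch_create(64, 0, 16);

        /* Both sides add everything they know (dummy IDs here). */
        for (uint64_t id = 1000; id < 1100; id++) minisketch_add_uint64(ours, id);
        for (uint64_t id = 1004; id < 1104; id++) minisketch_add_uint64(theirs, id);

        /* A serialized sketch is bits * capacity / 8 bytes, regardless of set size. */
        printf("sketch size: %zu bytes\n", minisketch_serialized_size(theirs));

        /* Merging the peer's sketch into ours leaves a sketch of the
         * symmetric difference, which we then decode. */
        minisketch_merge(ours, theirs);
        uint64_t diff[16];
        ssize_t n = minisketch_decode(ours, 16, diff);
        if (n < 0) {
            /* Over capacity: this is where you fall back to raw INVs
             * (or bisect, as gleb describes for Erlay). */
            printf("sketch overflowed, fall back\n");
        } else {
            for (ssize_t i = 0; i < n; i++)
                printf("missing/extra id: %llu\n", (unsigned long long)diff[i]);
        }
        minisketch_destroy(ours);
        minisketch_destroy(theirs);
        return 0;
    }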
17:43 <+roasbeef> invs are a shovel, minisketch is an excavator (think of the code diff assuming you need to re-impl from scratch and also the added complexity to the protocol, not saying it isn't worth it, but we'll need to weigh the tradeoffs)
17:44 <+roasbeef> also we'll need to factor in how mobile clients fit in as well
17:44 <+roasbeef> since they may not be online enough to catch an epoch, but still may want to be able to quickly get a snapshot of what's changing at tip
17:45 < gleb> I think mobile clients might have something very different from regular nodes.
17:49 <+roasbeef> gleb: yeh, compared to bitcoin mobile clients which only care about blocks and not the mempool
18:08 < rusty> roasbeef: if you're only talking to one node, you want neither invs nor minisketch. You want a stream, like we have now. And if you're trying to catch up you also need something else to query. But minisketch is pretty optimal for steady state
18:09 <+roasbeef> what would you put in the sketch, all channels you know of, or only things that you've seen updated since the last epoch (i guess chanID || timeStamp ?)
18:11 < rusty> roasbeef: everything.
18:11 < rusty> Let me pastebin my current drafty...
18:11 <+roasbeef> could be pretty big then? if it's (numChans*2 + 1)
18:12 <+roasbeef> err numChans*2 + numChans
18:12 <+roasbeef> so that's like 1.4 MB
18:12 < rusty> https://0bin.net/paste/n6cuK0VO1ax36pj7#2UMVYkBmvViTeoMV0bOfZ+v1QjMShPL3YKF1TbiaKL+
18:12 <+roasbeef> with the current number of chans, leaving out node anns
18:13 <+roasbeef> (60k*16 + 60k*8) bytes
18:13 < rusty> roasbeef: naah, it's about 64k in my current thinking, but you'd usually send less.
18:13 <+roasbeef> or that last one is 30k, but then still 1.2mb
18:13 < rusty> I mean, 8k sorry
18:13 <+roasbeef> 8kb? how? 8 bytes for a chan id, 8 bytes for a timestamp
18:14 < rusty> No, we squeeze chanids + four timestamps into 64 bits. There's a bit of air in there.
18:15 < rusty> Helps to require implementations to alternate the bottom bit of the timestamp on updates.
18:15 <+roasbeef> this is also ever-growing, compared to the scenario of syncing the diff in our two mempools
18:16 < rusty> roasbeef: ? no, it's an accumulator of fixed size.
18:16 <+roasbeef> how can it be fixed size, yet include everything?
18:16 < rusty> roasbeef: *magic*
18:16 <+roasbeef> everything being all channels and updates
18:16 <+roasbeef> kek
18:16 < rusty> No, it tells you that you need to fetch one.
18:17 < rusty> Oh, there's a new update for that scid...
18:17 < rusty> (But it's all deeply handwavy until I've implemented and benched IRL)
18:17 <+roasbeef> heh, but I think we should center in on the exact problem you're trying to solve here, and for which class of node
18:18 <+roasbeef> seems you're targeting the routing node who has either been online for a while, or just restarted?
18:18 < rusty> It's for steady state, when you've got more than one peer.
18:18 < rusty> So you've already synced, you're just trying to stay in sync.
18:18 <+roasbeef> (as fwiw routing nodes today can opt out of the flooding altogether and still be able to do their job)
18:19 <+roasbeef> same goes for if you're mostly receiving
18:20 < rusty> For sure. This is basically an optimized INV; it tells you the scid and (lower-bits-of) timestamps only.
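The exact bit layout lives in rusty's paste above; the sketch below is one hypothetical split, just to show that the 64-bit budget works out and why the bottom-bit rule matters: if the low bit of the timestamp must alternate on every update, a new update is guaranteed to produce a different element even when the truncated timestamps would otherwise collide.

    #include <stdint.h>

    /* A *hypothetical* packing of a channel plus the low bits of its four
     * gossip timestamps (two channel_updates, two node_announcements) into
     * one 64-bit sketch element. Not rusty's actual encoding; the budget is
     * 24 bits block height + 16 bits tx index + 4 bits output index = 44,
     * leaving 20 bits, i.e. 5 low bits for each of the four timestamps. */
    static uint64_t pack_element(uint32_t block, uint32_t txidx, uint32_t outidx,
                                 const uint32_t ts[4]) {
        uint64_t e = 0;
        e |= (uint64_t)(block  & 0xFFFFFF) << 40;
        e |= (uint64_t)(txidx  & 0xFFFF)   << 24;
        e |= (uint64_t)(outidx & 0xF)      << 20;
        for (int i = 0; i < 4; i++)
            e |= (uint64_t)(ts[i] & 0x1F) << (5 * i);
        return e;
    }

Because the sketch is a fixed-size accumulator over these elements, its wire size depends only on its capacity (how many differences it can decode), not on the roughly 60k-element set it summarizes; sending the set raw is what roasbeef's 1.2-1.4 MB figure describes.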
18:23 < rusty> In the short term, reducing spam on the network is more important and easier. I have a three-step plan. 1. Stop spamming the network. 2. Get upset at other nodes which spam. 3. Profit!
18:30 < gleb> rusty: why does the same message contain both the sketch and channel updates (raw)?
18:31 < gleb> or rather update INVs I guess.
18:31 < rusty> gleb: two possibilities. One is we risk oversaturating the sketch, so we send raw. The other is that we have channels we literally can't pack in either encoding.
18:32 < rusty> gleb: it's a batch of INVs really.
18:33 < rusty> (It can also be used if something has changed but the timestamp, while different, shares enough lower bits that it looks the same).
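The combined message rusty describes might look something like the following, purely as an illustration (the real draft is in the paste above): a fixed-capacity sketch plus an escape hatch of raw scid/timestamp entries for anything that risks oversaturating the sketch, cannot be packed, or collides in the truncated timestamp bits.

    #include <stdint.h>

    /* A raw "inv" entry: enough for the peer to decide whether to fetch
     * the full gossip message. */
    struct gossip_inv {
        uint64_t short_channel_id;
        uint32_t timestamp;
    };

    /* Hypothetical periodic sync message, not an actual BOLT message:
     * a minisketch over packed 64-bit elements, plus raw invs for the
     * leftovers that the sketch cannot carry. */
    struct gossip_sync {
        uint16_t sketch_capacity;        /* number of 64-bit syndromes */
        const uint8_t *sketch;           /* sketch_capacity * 8 bytes */
        uint16_t num_raw;
        const struct gossip_inv *raw;
    };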
18:33 < gleb> Interesting. So you're making a fundamental choice of a saturation-based reconciliation schedule rather than a timing-based schedule?
18:33 < rusty> gleb: that's the proposal, pending testing.
18:33 < rusty> gleb: well, we'll also have a 60 second timer like now.
18:33 < gleb> Is this influenced by aj? He was suggesting doing the same for bitcoin about a year ago :)
18:35 < rusty> gleb: I guess it seemed obvious? But if you hit the 60 second timer with only 5 updates, you can truncate the sketch down to capacity 10, which is kinda nice (each change == add and delete).
18:35 < gleb> Timing-based works better in Bitcoin because the tx rate is sort of stable. And I chose to do it this way because simultaneous reconciliations over several links at once are a pain
18:35 < rusty> gleb: but bitcoin needs a steady stream of transactions. We don't really need a steady stream of gossip, so we do need a timer to bound the propagation delay.
18:36 < gleb> In lightning it's very random anyway, so timing won't help I guess.
18:36 < rusty> gleb: one advantage we have is that we don't need a seed value, if we can actually pack IDs into 64 bits. So everyone just maintains the same sketch (though capacities are impl-dependent)
18:39  * gleb thinking of censoring channel updates
18:40 < gleb> Not having salt makes the protocol easier, that's good.
18:40 < rusty> gleb: yeah, much easier!
18:40 < gleb> Okay, so, according to your idea. Imagine A connected to B. Then A has 10 updates and sends a sketch to B. Of what capacity?
18:41 < rusty> Well, probably 128, since sending less than 1k is a bit of a waste. But ignoring a minimum, 10 works, 20 probably better.
18:42 < rusty> (The assumption is that B has probably seen a significant subset, or at least they have more common updates than separate ones)
18:44 < rusty> gleb: I haven't decided on pull vs push. I mean, B could simply send all updates to A which A doesn't have, or B could request from A the things B doesn't have.
18:51 < gleb> Well, let me ask another thing. Let's say the threshold until sending is N. And A hits N diffs and sends a sketch to B. What would be the capacity?
18:53 < rusty> gleb: that's a bit backwards. A node will maintain a sketch of capacity C; it might change C, but generally I'm assuming it would pick something at startup. It would therefore set N to C * 0.8.
18:54 < rusty> gleb: if it hits the timeout (60 seconds) and it only has D diffs, it would only send a sketch of capacity min(C, D * 2) say.
18:55 < rusty> gleb: Hmm, but if the peer chooses a smaller C, it risks desynchronization. So they probably need to know each other's max capacity?
18:57 < gleb> This might help. Also worth communicating it in advance, because if C_a > C_b, there's no point in sending extra syndromes (the high-order part of the sketch).
18:58 < rusty> gleb: more importantly, it's useful to set N for each peer so you can flush before *they* reach saturation.
19:01 < gleb> So imagine connected pairs A-B, A-C, B-C. Say A-B just reconciled and are in sync. Now both of them send a sketch to C, but C needs only one sketch!
19:01 < gleb> Because those sketches are almost identical.
19:07 < rusty> gleb: yes, but I'm not sure what to do about that, or whether it's even worthwhile trying?
19:08 < rusty> In general, a node will send you more data than you need. I'm not sure if further optimization is required?
19:10 < gleb> But in this case, it's (2x + C) as much as you need. It approaches the INV gossiping bandwidth, if connectivity of 4 is assumed :) It's alright if this is rare, but if this happens 50% of the time that's no good
19:11 < gleb> Instead of sending the sketch right away, A, B could send a notification "I've collected N diffs for ya please reconcile", and then C does something smart.
19:12 < gleb> Like, reconciles with A, and then sends a sketch with capacity 5 just to sync with B, if what it learned from A is equal to the diff announced by B.
19:12 < gleb> It would be cool to try out these different ways, ideally simulate at a large scale
19:13 < rusty> gleb: Yeah. I suspect you could get a similar effect by simply trying to stagger peers.
19:15 < rusty> Sending a sketch is effectively sending an "I have N diffs". In this case, C would be saying "I have ~0 diffs".
19:17 < gleb> Right. And hopefully in the random world of the lightning network, it is actually ~0 and not some spike
19:22 < rusty> I've been thinking previously that gossip flush wants to move towards alternation, ie. 30 seconds A->B, 30 seconds later B->A. This can be done by playing with +/- 1 seconds (and randomness) on timeouts to make it happen. But in addition, A wants to spread out the timeouts for flushes across peers. These goals kind of conflict, since two peers might fight, but perhaps some heuristic is possible. It also implies that if you have many peers, you want to stretch out the flush timeout to > 60 seconds.
19:33 < gleb> Alternations seem random to me, I don't see how they directly help efficiency
19:34 < gleb> Maybe attempt to make everything so random that it's evenly distributed? hah
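Putting rusty's numbers together, the per-peer flush policy sketches out roughly as below: keep a capacity-C sketch per peer, flush early once local changes reach N = 0.8 * C, and on the 60-second timer send a sketch truncated to the observed difference, remembering that each changed element can contribute two entries (old value removed, new value added) to the set difference. A sketch under those assumptions, not c-lightning code:

    #include <stdint.h>
    #include <stdbool.h>

    #define SKETCH_CAPACITY     128           /* C: min 128 => 1kb on the wire */
    #define FLUSH_THRESHOLD     ((SKETCH_CAPACITY * 8) / 10)   /* N = 0.8 * C */
    #define FLUSH_TIMEOUT_SECS  60

    struct peer_sync {
        uint32_t last_flush;     /* unix time of last sketch we sent */
        uint32_t pending_diffs;  /* D: updates seen since the last flush */
    };

    /* Capacity to use if we flush now: each changed element may appear
     * twice in the symmetric difference (old removed, new added). */
    static uint16_t flush_capacity(const struct peer_sync *p) {
        uint32_t want = 2 * p->pending_diffs;
        return (uint16_t)(want < SKETCH_CAPACITY ? want : SKETCH_CAPACITY);
    }

    static bool should_flush(const struct peer_sync *p, uint32_t now) {
        return p->pending_diffs >= FLUSH_THRESHOLD ||   /* flush before the peer saturates */
               (now - p->last_flush >= FLUSH_TIMEOUT_SECS && p->pending_diffs > 0);
    }

Exchanging each peer's C up front, as gleb suggests, is what lets both sides size the threshold and the truncated sketch without risking desynchronization.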
22:18 < rusty> gleb: sorry, got distracted. In your A & B vs C example, it's the worst case if A and B try to sync with C (or vice versa) at the same time.
--- Log closed Tue Aug 27 00:00:53 2019