--- Log opened Mon Feb 17 00:00:41 2020
09:52 < fiatjaf1> in a PTLC-based lightning payment the payee would give the payer a public key and when the payment is completed it would get a signature?
09:52 < fiatjaf1> in that case, what would be the message signed?
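[Editor's note: fiatjaf1's question goes unanswered in the log. As a rough sketch of the usual PTLC idea: the payee's "public key" is a point P = s·G for a secret scalar s, and the payment completes by revealing s (typically via an adaptor signature over the ordinary channel transactions, rather than a signature over a separate message). A toy model in Python, using a multiplicative group mod a prime as an illustrative stand-in for secp256k1:]

```python
# Toy stand-in for the PTLC point-lock: revealing a discrete log completes the
# payment. A multiplicative group mod a Mersenne prime replaces secp256k1.
import secrets

P_MOD = 2**127 - 1            # prime modulus for the toy group
G = 3                         # toy generator

payee_secret = secrets.randbelow(P_MOD - 2) + 1
payment_point = pow(G, payee_secret, P_MOD)   # this goes into the invoice

# The payer locks funds to `payment_point`. On the real network, settling the
# adaptor signature leaks `payee_secret`; the payer checks it opens the lock:
assert pow(G, payee_secret, P_MOD) == payment_point
print("payment completed, payer holds proof-of-payment:", hex(payee_secret))
```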
10:56 < cdecker> Meeting here in 5 minutes :-)
10:56 < t-bast> hey cdecker!
10:57 < BlueMatt> yoyoyo
10:57 < t-bast> hey matt!
10:57 < cdecker> Hi t-bast, hi matt :-)
10:57 < BlueMatt> man, this meeting loves to be on 'murican holidays, doesn't it
10:57 < cdecker> Seems we have a quorum, shall we decide everything right now?
10:57 < t-bast> really? it's another holiday? how many of those you got?
10:57 < BlueMatt> cdecker: ack
10:58 < t-bast> cdecker: ship it
10:58 < BlueMatt> t-bast: apparently lots, but only every two weeks on mondays
10:58 < t-bast> BlueMatt: that sounds like a conspiracy
10:58 < BlueMatt> indeed.
10:58 < cdecker> Sounds like the deep state doesn't want us working on privacy enhancing tech...
10:59 < t-bast> Damn it I knew it. And no-one would believe me!
10:59 < BlueMatt> probably. did everyone check for nearby birds? make sure you're in a safe location!
10:59 < cdecker> I'm in Switzerland, all birds here are neutral
11:00 < BlueMatt> they're probably nsa birds. birds are a conspiracy
11:00 < cdecker> Nah, they left with Airforce One after Davos
11:01 < niftynei> the real truth is america loves efficient holidays, so we tend to organize it such that every important event is celebrated on a monday
11:01 < cdecker> Agenda for today if people want to get a head start: https://github.com/lightningnetwork/lightning-rfc/issues/735
11:01 < cdecker> Thanks for the help putting it together t-bast ^^
11:02 < niftynei> so the odds of a monday being an american holiday are extraordinarily high (compared to other days of the week)
11:02 < BlueMatt> time?
11:02 < t-bast> niftynei: this is clearly more efficient than france, where our holidays sometimes end up being on week-ends (oh the loss)
11:03 < t-bast> cdecker: thank you for putting it together!
11:03 < rusty> Wow, #738. I think I'm in love.
11:03 < cdecker> See, in catholic countries you'd always get a Tuesday or a Thursday, meaning everybody just also takes the Monday or Friday off as well, that's what I call efficient ^^
11:03 < niftynei> hehehe
11:04 < t-bast> great we got lightning labs too, let's start?
11:04 < t-bast> Hi Joost
11:04 < cdecker> Yep, let's
11:04 < cdecker> Any volunteers for chairing?
11:04 < joostjgr> Hi all. I am here, but I see that I am not an expert on most of today's topics
11:05 < cdecker> joostjgr: that's ok, neither are we xD
11:05 < joostjgr> I mean I can't give the ack if we need it :)
11:05 < rusty> At 5:30am, I am an expert on nothing. It's OK, nobody has noticed yet :)
11:05 < t-bast> I can chair if you want cdecker and no-one else does
11:05 < cdecker> Happy for you to do it t-bast :-)
11:06 < t-bast> cool let's do this then
11:06 < t-bast> #startmeeting
11:06 < lightningbot> Meeting started Mon Feb 17 19:06:09 2020 UTC. The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:06 < lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
11:06 < t-bast> #topic Wumbo
11:06 < BlueMatt> rusty: you just need to install the caffeine drip before you go to bed so that it can start dripping into your veins before your alarm goes off :p
11:07 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/596
11:07 < t-bast> Anything holding up merging wumbo?
11:07 < t-bast> We agreed to work on an advisory on confirmation scaling, but we'll do that in a follow-up PR
11:08 < cdecker> Bit compaction has been done, so I think this is good to go
11:08 < rusty> t-bast: ack!
11:08 < ariard_> hi
11:08 < cdecker> Hi Antoine
11:09 < t-bast> Great, Joost do you know if on LL side there's anything holding up the wumbo spec PR?
11:09 < joostjgr> No, not up to date on that one
11:10 < t-bast> We did agree during last meeting to merge this once the bits were updated, and roasbeef was there so I think we're good to go right?
11:11 < joostjgr> If that was the noted action
11:12 < t-bast> #action t-bast merge 596 and work with araspitzu on advisory PR for scaling confs
11:12 < joostjgr> As I said, I don't have much to ack here. Coming for stuck channels and anchors
11:12 < t-bast> #topic TLV extensions
11:12 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/714
11:13 < t-bast> Alright, the points that are still to be discussed on this PR are the optional fields that need to be made mandatory
11:13 < rusty> Since this will go away in openchannel2, I believe making it mandatory is correct.
11:14 < t-bast> I'm proposing to interpret shutdown_script as a TLV field, which is more flexible
11:14 <+roasbeef> didn't we cover this last time? to just make it zeroes?
11:14 < t-bast> I made a quite lengthy comment about why on the PR, which I'll let you digest :)
11:14 <+roasbeef> also I don't really see "openchannel2" replacing what we have rn given the main benefit is dual funding, which itself can already be emulated using multiple single-funder channels
11:14 < t-bast> hey roasbeef, yes we did, but it needs a look at how the PR does that
11:15 <+roasbeef> how does making it tlv help us remove the sender requirement? assume there's a node out there that never updated to tlv stuff but is opening channels, how can we remove that requirement in a backwards-compatible manner?
11:16 < t-bast> I don't think this is an issue
11:16 < rusty> t-bast: clever.
11:16 < t-bast> If our node offers the option_shutdown_script feature, it offers it as a TLV, which the old node interprets as just the shutdown_script field
11:17 < t-bast> If our node doesn't, the old node isn't expecting a shutdown_script field and will ignore the whole tlv stream
11:17 < t-bast> (if there's anything written to that tlv stream of course)
11:17 < t-bast> rusty: yay it's not so hacky after all :)
11:18 < cdecker> As clever as the "interpret the bytes differently" solution is, it's a game we can only play once
11:18 < rusty> (I think we're going to have to shift to either a second key, or a bip 32 seed & range, in future, to allow splice out to be sensible).
11:18 < t-bast> cdecker: yes, but once we've moved to a tlv stream we shouldn't need that kind of trick unless we want something completely different from tlvs, right?
11:18 < t-bast> rusty: what do you mean?
11:19 <+roasbeef> re wumbo: UX for it will be pretty poor w/o a way for the nodes to signal their max accepted channel
11:19 < niftynei> i believe it's to allow key-rotation for splicing out to different addresses; otherwise you'd have to re-use the address to comply with the upfront_shutdown script for multiple out splices
11:19 < cdecker> No, I'm saying we can only use this interpretation freedom on a single optional field, which is ok if upfront_shutdown_script is the only occurrence of this issue, but if there are any other optional fields appended we can't just give them TLV-type 0x00 again
11:19 < rusty> t-bast: option_upfront_shutdown_script assumes the only way to get funds out is shut down. In future we have splice out.
11:19 <+roasbeef> i think he means being able to obtain the "next series" w/o going thru the interaction to reduce round trips
11:20 <+roasbeef> splice out and upfront shutdown aren't really compatible tho...
11:20 < t-bast> cdecker: got it, totally
11:20 < rusty> roasbeef: indeed.
11:20 < t-bast> rusty/niftynei: ah gotcha, but that shouldn't be an issue with non-upgraded nodes?
11:20 <+roasbeef> cdecker: which I think is ok, as the optional fields aren't really optional, so if possible we can do this to all the existing ones to move to the "new world"
11:21 < rusty> t-bast: all other TLVs must be > the longest possible shutdown script ofc.
11:21 < niftynei> i thought t-bast did an audit of existing fields this pertains to... upfront_shutdown and one other??
11:22 <+roasbeef> there's max htlc too iirc
11:22 < t-bast> yes only your_last_per_commitment_secret/point and shutdown_script, max_htlc doesn't need to change as its presence is gated on the preceding byte (channel_flags IIRC)
11:22 < rusty> option_data_loss_protect / option_static_remotekey, which are effectively compulsory on the network today.
11:23 < t-bast> rusty: since I added a requirement that if you use a TLV stream, you have to include a shutdown_script, this shouldn't be an issue, should it?
11:23 < t-bast> rusty: you'd include either a valid one or a 0x0000 one if you don't care about shutdown_scripts
11:24 < rusty> t-bast: that would break compatibility with legacy once you do, if you use a type less than 0x22?
11:24 < t-bast> rusty: I don't think so, because the shutdown_script length is currently 2 bytes
11:25 < rusty> t-bast: oops, yes, type is 0, no lesser type possible. OK, more coffee for me.
11:25 < t-bast> rusty: xD this clearly isn't the simplest PR for breakfast
11:26 <+roasbeef> all follows for me, and as decker said above we can only really do this once, so might as well do it early so we can enjoy the new bountiful world of tlv (actually optional fields)
11:27 < cdecker> Yes, but don't we have a flat type-structure across all TLVs?
11:27 < rusty> It's an ack from me, too.
11:27 <+roasbeef> cdecker: flat type? how's that related?
11:28 < cdecker> So what I meant before was that if we use the re-interpret trick on upfront_shutdown_script, we can't use it for option_static_remotekey, even though they might be in different messages
11:28 <+roasbeef> but static remote key doesn't involve a new field
11:28 < cdecker> There is only one legacy optional field that can be type 0x00
11:29 < niftynei> i see what you're saying cdecker, i thought we agreed that TLV types are unique per message?
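[Editor's note: a minimal sketch of the byte-level trick under discussion, with hypothetical helper names. Because the legacy shutdown_script field carries a 2-byte big-endian length and real scripts are far shorter than 256 bytes, the first length byte is always 0x00, which a TLV parser reads as type 0, and the second length byte doubles as the TLV length:]

```python
# Why legacy `upfront_shutdown_script` bytes also parse as a TLV record of type 0.
def encode_legacy(script: bytes) -> bytes:
    assert len(script) < 256                    # holds for standard scripts
    return len(script).to_bytes(2, "big") + script

def parse_tlv_record(data: bytes):
    # Both type and length fit in single-byte BigSize values here (< 253).
    record_type, length = data[0], data[1]
    return record_type, data[2:2 + length]

p2wpkh = bytes.fromhex("0014") + bytes(20)      # OP_0 PUSH20 <20-byte key hash>
record_type, value = parse_tlv_record(encode_legacy(p2wpkh))
assert record_type == 0 and value == p2wpkh     # both readings agree on the bytes
```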
11:29 < t-bast> cdecker: why not? that's only if we want messages to share TLV records, right?
11:29 < t-bast> Yeah I thought we said each message is its own namespace for the TLV records it contains
11:29 <+roasbeef> yeh unique per message
11:29 < cdecker> Don't we? I find it really confusing to have TLV-types be context dependent
11:29 < niftynei> you're right in that if they're globally assigned this won't work again :)
11:30 < niftynei> i was also under the 'unique per message' assumption umbrella
11:30 < cdecker> Ok, I might be misremembering the discussion about this, but it's ugly af
11:31 < t-bast> xD
11:31 < cdecker> Anyway, I don't want to be the only one holding this up, so I'll shut up if the rest is happy with having TLV-types context dependent xD
11:31 <+roasbeef> idk, a namespace per message means we don't need to worry about global collisions, pretty nice imo
11:31 < t-bast> great, so sounds like we have an ACK on this PR and can move to the next topic, thanks for the review
11:32 < t-bast> #action t-bast give a few days for other people to potentially chime in on Github, then merge 714
11:32 < t-bast> #topic networks in init message
11:32 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/682
11:32 < t-bast> This is an old one, I believe c-lightning and eclair already support it
11:33 < t-bast> I don't think anything is holding up the merge on that one, right?
11:34 <+roasbeef> iirc there was some interop thing
11:34 <+roasbeef> which seems to be settled now?
11:34 < t-bast> yep I tested interop
11:34 < t-bast> with c-lightning and lnd, correctly ignored by lnd, correctly handled by c-lightning
11:35 <+roasbeef> cool, yeh dunno when we'll impl it as we don't really have a strong need, since we don't support liquid or anything like that
11:35 <+roasbeef> did we resolve the partial intersection thing?
11:35 < t-bast> no rush IMO
11:35 <+roasbeef> so if some chains are shared but not all
11:36 < cdecker> Yes, c-lightning now checks against all the provided chains (https://github.com/ElementsProject/lightning/pull/3371)
11:36 < rusty> roasbeef: yeah, that was a coding bug in c-lightning, long fixed. IMO, this is mainly to stop testnet connecting to mainnet etc.
11:36 < t-bast> this is ok I believe, the last requirement in Bolt7 says you'd only forward gossip for the supported chains
11:37 <+roasbeef> lgtm'd on pr
11:37 < t-bast> good, let's move on
11:37 < t-bast> #action rebase and merge 682
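[Editor's note: a sketch of the interop check discussed for #682, with assumed field names and assumed semantics for an absent TLV. The `networks` TLV lists the chain hashes a peer is interested in; a node keeps the connection only if there is some overlap (or if the peer stays silent):]

```python
# Chain-hash intersection check for the init `networks` TLV (names illustrative).
BITCOIN_MAINNET = bytes.fromhex(
    "6fe28c0ab6f1b372c1a6a246ae63f74f931e8365e15a089c68d6190000000000"
)  # chain_hash byte order as used in the BOLTs
LOCAL_CHAINS = {BITCOIN_MAINNET}

def should_keep_connection(remote_networks):
    if remote_networks is None:       # TLV absent: peer expressed no preference
        return True
    return any(chain in LOCAL_CHAINS for chain in remote_networks)

assert should_keep_connection([BITCOIN_MAINNET])
assert not should_keep_connection([bytes(32)])  # e.g. a testnet-only peer
```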
11:37 < t-bast> #topic a home for Bolts
11:38 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/738
11:38 < t-bast> This is a proposal to use the ln.dev DNS to host a gitbook for the spec
11:38 < cdecker> Seems we get more and more pseudonymous contributors :-)
11:38 <+roasbeef> but then who manages the domain? also $900/yr? lol
11:39 < cdecker> Yeah, it'd be good if we can at least guarantee it doesn't morph into an altcoin site as soon as we have linked it extensively
11:39 < t-bast> at least accepting the gitbook part should be a no-brainer IMO (what the PR does)
11:39 < cdecker> We'd need to have control over the domain before we can make this kind of commitment
11:39 < cdecker> Oh yes, that part is good
11:39 < rusty> roasbeef: Yeah, ouch, but cdecker's point too.
11:40 < BlueMatt> how does one generate the resulting website? if it's something simple I can also host it on a different domain
11:40 < BlueMatt> (eg ln.bitcoin.ninja or so)
11:40 <+roasbeef> seems the PR is really just a table of contents?
11:40 <+roasbeef> and the person is free to host it on gitbook if they want
11:40 < t-bast> I think generating the gitbook is really simple (probably just a JS library or something)
11:40 <+roasbeef> so we should discuss the table of contents rather than gitbook itself
11:41 < BlueMatt> right, I can buy another server and sync it over to my webhost....
11:41 < rusty> Yes, I think we accept the book part, and I can take over the domain. My BTC tipjar has enough for a few years, maybe more if Number Go Up.
11:41 < BlueMatt> that sounds good.
11:41 < t-bast> rusty: that's a great idea
11:41 < cdecker> We should contact the contributor soon though, since he mentioned 10 days ago that renewal is due in 14 days :-)
11:41 < t-bast> so on the table of contents and links themselves, is there something we want to change from what's in the PR?
11:42 * BlueMatt notes that domain may be a waste of money.....
11:42 < cdecker> LGTM
11:42 < BlueMatt> pr itself LGTM
11:42 <+roasbeef> the hosting/domain isn't really the important part, idk if you can generate it manually anymore (the gitbook), iirc you gotta do it all thru their interface now, could be wrong tho
11:43 < niftynei> anyone can host this, right?
11:43 <+roasbeef> yeh
11:43 <+roasbeef> doesn't this conflict w/ bolt #0?
11:43 <+roasbeef> which already has a rough table of contents, or does the tool need an explicit summary.md?
11:43 < t-bast> I think the tool needs it to be in summary.md, let's check
11:44 < niftynei> based on the website structure, i'm going to guess it needs it for creating the sidebar
11:44 < t-bast> https://github.com/GitbookIO/gitbook/blob/master/docs/structure.md
11:44 < t-bast> yep it needs to be in SUMMARY.md
11:44 < rusty> I guess the q here is: are people broadly happy with me offering to take over the domain, with my commitment to keep it for at least 3 years? (I'll ask him nicely if he'll sponsor it, too).
11:44 < t-bast> rusty: ACK
11:44 < cdecker> ACK
11:44 <+roasbeef> t-bast: is that repo still being used? last commit 2 years ago, deprecation warning in the readme
11:45 < t-bast> arf, read too fast then, let's try to find the official doc
11:45 < cdecker> Well, it doesn't have to be gitbook either, there are plenty of static site generators
11:45 < cdecker> We can work with that, but I think it might be really nice to have a canonical home for the spec that isn't the repo
11:45 <+roasbeef> yeh, sub gitbook w/ any other static generator, don't think we really need "one to rule them all"
11:46 < t-bast> https://docs.gitbook.com/integrations/github/content-configuration#summary
11:46 < niftynei> i really like this but i don't understand why this particular user doesn't just use a fork of the repo?
11:46 < cdecker> And I quite like the domain, even though it's waaaaaay too expensive...
11:46 <+roasbeef> perhaps let's move on? not super critical stuff here, just static site generation and hosting
11:47 < niftynei> it seems like they're asking for committee / community approval for a (admittedly very great) project where it doesn't exactly require it
11:47 < t-bast> Alright let's move on, I'll summarize
11:47 <+roasbeef> niftynei: yeh, they could very well just fork, add the summary and host it as they're doing rn
11:48 < t-bast> #action discuss with the OP the possibility of doing this in a fork of the repo
11:48 < t-bast> #action rusty to get in touch with OP about DNS hand-over
11:48 < t-bast> #topic Stuck channels
11:48 < t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/740
11:49 < rusty> So we had m-schmoock demonstrate this with c-lightning, causing me to rush in a workaround for our (pending, cdecker niftynei ping) release.
11:49 <+roasbeef> seems related to https://github.com/lightningnetwork/lightning-rfc/issues/745 ?
11:49 <+roasbeef> this was always one of the most underspecified areas of the spec imo, just tons of edge cases waiting to be discovered
11:50 < rusty> There are two solutions I came up with: the one I used was "if I'm the funder, keep an additional reserve to allow paying for another HTLC at current fees * 1.5".
11:50 < rusty> This is what c-lightning 0.8.1 does.
11:50 <+roasbeef> just seeing this now, but why would the node do a fee increase that results in it not being able to pay for the set of present htlcs?
11:51 < joostjgr> Isn't the only MUST we need "don't offer anything that would cause the chan to force close"?
11:51 < BlueMatt> roasbeef: note that this is *unrelated* to update_fee - you can hit it without any fee changes at all
11:51 < joostjgr> keeping more reserve looks more like non-mandatory to me
11:51 < t-bast> roasbeef: it's not the node, it's the network :D
11:51 < rusty> roasbeef: yeah, funder can be forced to dip into reserves, even eliminate their output altogether if fees spike.
11:52 < joostjgr> nodes can keep safety margins as they like when sending out htlcs
11:52 < rusty> joostjgr: exactly.
11:52 < t-bast> joostjgr: true, this could be a SHOULD for that particular case (did I understand your comment correctly?)
11:53 < joostjgr> yes, could be something like SHOULD keep headroom for two htlcs. But there is a MUST directive in case the add would be guaranteed to kill the channel
11:53 < BlueMatt> let's just say MUST
11:53 < rusty> Anyway, the other mitigation would be to override the fundee-side logic, which tries not to add an HTLC if the funder would not be able to afford the fee. This would add a single-htlc carve-out for that case (i.e. fundee adds a single HTLC even if funder would be unable to afford the fee).
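[Editor's note: a sketch of the first mitigation rusty describes (what c-lightning 0.8.1 ships), using the pre-anchor commitment weights from BOLT #3. The helper and its arguments are illustrative, and the exact margin (one extra HTLC at 1.5× the current feerate, two-HTLC headroom, etc.) is precisely what the meeting is debating:]

```python
# Funder-side headroom check before offering an HTLC (illustrative policy).
COMMIT_WEIGHT = 724           # base commitment tx weight (BOLT #3)
HTLC_OUTPUT_WEIGHT = 172      # per untrimmed HTLC output (BOLT #3)

def commit_fee_sat(feerate_per_kw: int, num_htlcs: int) -> int:
    return feerate_per_kw * (COMMIT_WEIGHT + HTLC_OUTPUT_WEIGHT * num_htlcs) // 1000

def funder_can_offer_htlc(spendable_sat: int, feerate_per_kw: int,
                          current_htlcs: int) -> bool:
    # Keep enough to pay for the new HTLC *plus one more*, at fees * 1.5.
    spiked = feerate_per_kw * 3 // 2
    return spendable_sat >= commit_fee_sat(spiked, current_htlcs + 2)

print(funder_can_offer_htlc(spendable_sat=20_000, feerate_per_kw=2_500,
                            current_htlcs=3))
```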
11:53 < BlueMatt> i mean it kinda is - you MUST have available space for two more HTLCs, seems perfectly reasonable
11:53 < t-bast> but if we go in the direction of that extra reserve, and all impls behave the same way, what's nice is that we can know exactly how much we can send to the remote party; if we don't agree on the behavior, we can't know that, we have to try, fail and adapt
11:53 < BlueMatt> with a note on the receive end that old nodes don't do this
11:54 < joostjgr> it is a protocol change if we make it MUST 2*, while otherwise we just define behavior that avoids the force close that is currently happening
11:54 <+roasbeef> t-bast: even w/ this, stuff is still murky, giving yourself (and the other party) more "room" at all times kind of acknowledges this all isn't super well defined, so we'll all just try our best
11:54 <+roasbeef> joostjgr: this pr itself is a protocol change technically
11:54 < BlueMatt> joostjgr: yes, it *should* be a protocol change
11:55 < joostjgr> i don't think it needs to be
11:55 < t-bast> roasbeef: it's not about being well-defined, it's because there's a condition where you could get stuck if the network fees rise, and that lets you avoid it
11:55 < joostjgr> if we just specify that nodes shouldn't offer updates that result in force close, i think that is enough
11:55 < BlueMatt> joostjgr: this doesn't result in force close
11:55 < rusty> Note that this is an unsolvable problem, because both sides can add htlcs simultaneously and always exceed the funder's affordable limits. This is why they're heuristics.
11:55 < BlueMatt> it results in a *stuck* channel
11:56 < joostjgr> if you add an htlc for which you can't pay the fee, it is force close, no?
11:56 < BlueMatt> right, to actually *solve* it, feerate would have to be charged to the side that added the htlc
11:56 < rusty> BlueMatt: indeed.
11:56 < BlueMatt> (which we should do eventually maybe, but that is a super separate discussion)
11:56 < t-bast> exactly, this is because of the asymmetry between funder/fundee
11:57 < t-bast> but a short-term solution would be nice :)
11:57 < rusty> joostjgr: not in the spec, no. That's true if the funder adds something it knows it can't afford, but the fundee could add at the same time as the funder (or, as the funder changes fees, etc)
11:57 < BlueMatt> joostjgr: no. read the issue, functional bob will decide not to add any htlcs because it's stuck
11:59 <+roasbeef> t-bast: I mean that we're not really confident this fixes "everything", since this area generally isn't well defined
11:59 < rusty> I prefer the "fundee lets itself always have one HTLC even if funder can't afford it" because it doesn't limit channel capacity.
11:59 < rusty> roasbeef: it definitely doesn't fix everything.
11:59 < rusty> roasbeef: and it's perfectly well defined. It's just not nice :)
12:00 < joostjgr> bluematt: i mean if the fundee adds something that the funder can't pay for
12:00 < BlueMatt> joostjgr: right, but nodes won't do that
12:00 < rusty> BlueMatt: how can they avoid it?
12:00 < rusty> BlueMatt: simultaneous HTLCs in both directions can happen.
12:00 < t-bast> roasbeef: it's true that there are issues with the protocol itself because fee handling can't be perfect at the moment... but that additional restriction seems to quite easily fix an issue we're seeing in the wild. You could argue that implementations can do that mitigation without it being in the spec, is that what you mean?
12:01 < BlueMatt> right, in theory you could hit it (though joostjgr's proposed change doesn't resolve this), but you can also hit this bug without that
12:01 < t-bast> rusty: but in that solution the funder can lose money in extreme cases
12:01 < rusty> BlueMatt: Yes, we seek to resolve the simple case, since it's actually happening.
12:01 < BlueMatt> right.
12:02 < BlueMatt> anyway, so it seems like most folks are on the same page - we should review the pr and merge it?
12:02 * BlueMatt has been slacking on it, sorry
12:02 < joostjgr> yes, those race conditions are unsolved. my comment was about MUST 2*. If it is MUST, shouldn't there also be a corresponding MUST on the receiver side specified?
12:02 < joostjgr> otherwise I think it should be SHOULD on the sender
12:02 < BlueMatt> joostjgr: no, there can't, cause old nodes don't implement it
12:02 < rusty> t-bast: I don't see how the funder could lose money?
12:02 < t-bast> for the reason matt mentions, we can't put it on the receiver at the moment
12:02 < BlueMatt> we can totally add a sender-side "MUST, going forward" without a recipient-side check
12:03 < joostjgr> Also, hard-coding the 2x doesn't sound good to me. Implementations can make their own decision here. I find MUST too strong
12:03 < BlueMatt> this is definitely an important enough thing for MUST
12:03 * cdecker is very much looking forward to externalizing fees altogether :-)
12:03 < BlueMatt> cdecker: note that that does not fully solve this problem.
12:04 < BlueMatt> 2x is a trivial headroom, I see no reason not to MUST it - if nodes want to do *more* they can
12:04 < BlueMatt> that's totally local policy
12:04 < t-bast> rusty: because that situation happens when the funder doesn't have funds on his side: if the fundee does send an HTLC that leaves the funder unable to pay for the fee, once the htlc fulfills the fundee can simply not sign the new commitment, broadcast the one with the HTLC on-chain, and race for the timeout
12:04 < t-bast> rusty: in case this is really because of a big fee raise, that on-chain timeout is possible
12:05 < joostjgr> btw, if the fundee sends dust htlcs, it is ok. don't know if we want to specify that too. it is sort of implicit in 'cannot pay for in the commitment', but could be good to make explicit
12:05 < t-bast> joostjgr> what's nice with all implementations sharing the same factor is that it's easy to know how your remote will behave
12:05 < rusty> t-bast: this is always true, even without this? Either the fees are ludicrous, or the reserves were too low.
12:05 < t-bast> rusty: but you can't use the reserve to pay for the fee, can you?
12:05 < rusty> t-bast: yes.
12:06 < rusty> t-bast: it usually only happens when there's a race where both sides add though.
12:06 < t-bast> rusty: I need to refresh my memory there then, I'll have a look at it tomorrow
12:07 < rusty> (Originally we had language which would pull fees from the *fundee* side if they still weren't met after the funder's funds were exhausted, but this was seen as too ugly).
12:07 < rusty> t-bast: the only reason I didn't implement that mitigation is that I wasn't sure how impls did this in practice, since it's rare.
12:08 < t-bast> I think I need to dig more into it, shall we keep discussing this on Github and move on to the next topics?
12:08 <+roasbeef> fees are hard mayne
12:08 <+roasbeef> i think it's something we all underestimated initially, just making it one-sided (funder pays) seemed to have slashed off enough funny biz at the time
12:09 < BlueMatt> fees suck. let's get rid of them :)
12:09 <+roasbeef> just gimmie a grant, i'll take care of the rest
12:09 * BlueMatt -> lunch.
12:10 < t-bast> I think we should discuss a bit of long-term updates
12:10 < t-bast> Rusty do you want to talk about your event tests, or should I do decoy/rendezvous?
12:11 < rusty> t-bast: decoy! Event tests need some attention from me and niftynei....
12:11 < t-bast> #action keep exploring solutions to stuck channels, discuss on Github
12:11 < t-bast> ACK, I'll do the talking then :D
12:11 < t-bast> #topic Decoy node_ids and poor man's rendezvous
12:11 < t-bast> #link https://gist.github.com/t-bast/9972bfe9523bb18395bdedb8dc691faf
12:11 * BlueMatt will read scrollback later, but notes that he prefers to see more work on rendezvous before trampoline. cc cdecker
12:12 < t-bast> The idea started from wanting wallets to be able to hide their node_id and scid when generating invoices
12:12 < t-bast> And do that without the need for a stateful protocol, just with crypto tricks
12:13 < t-bast> I wanted to explore that as far as I could, and it turns out (unless I'm missing something) that it can be used for a cheap rendezvous scheme
12:13 <+roasbeef> first time looking at it, but at a glance it looks pretty involved, would think there's something simpler that achieves the same end result (ofc first time i'm seeing this)
12:13 <+roasbeef> does it support actually giving errors back to the sender if they originate in the latter part of the route?
12:13 < t-bast> roasbeef: if you have something simpler, then it would be nice, do share!
12:13 < t-bast> roasbeef: define "latter part"?
12:13 < cdecker> roasbeef: yes, you just follow HTLCs back upstream
12:13 <+roasbeef> the part not known to the sender
12:13 < cdecker> Like we do today
12:14 < t-bast> yes you could have errors there
12:14 <+roasbeef> not following, so the same shared secret is used throughout the entire route?
12:14 < t-bast> but that's where the current attack I have lies: if nodes on the rendezvous path start giving accurate errors, mallory can probe the route via channel fees
12:14 < t-bast> roasbeef: yes, same shared secret here
12:14 < t-bast> the sender constructs a fully normal onion
12:15 < cdecker> t-bast: that's for the poor-man's rv only though
12:15 < t-bast> it's just using many routing hints
12:15 < t-bast> cdecker: yes, only for this specific proposal
12:15 < cdecker> Fully fledged RV with the sphinx onion construction is opaque if the error happens after the RV node
12:15 < cdecker> Wouldn't it be more accurate to call this pseudonymous routing rather than rendezvous?
12:16 < t-bast> cdecker: yes, fully fledged RV also avoids any kind of probing, but may be very costly in terms of space in invoices
12:16 <+roasbeef> it's basically like blinded hop hints?
12:16 < t-bast> totally, this is really pseudonymous and not rendezvous
12:16 < t-bast> roasbeef: exactly, you simply blind nodes and scids after the "RV node"
12:17 < t-bast> the issue is that it means the sender can try that route with many different fees
12:17 < t-bast> and that may help him uncover the real channels and nodes
12:17 < t-bast> I'm not sure yet how best to address that
12:17 < cdecker> Agreed on the space savings in the invoices, full-fledged RV is doing very poorly there :-(
12:18 < t-bast> I was starting from the assumption that full-fledged RDV was not possible because of the space cost; but maybe that assumption was flawed
12:19 < rusty> t-bast: you can tell a tmp scid is being used, but we'd have to define exactly what that effects.
12:19 < rusty> s/effects/affects/
12:19 < cdecker> It might be quite expensive, but we can do with a couple of hops (~65 bytes per hop after the RV node). We should be able to fill the back of the onion in using a PRNG (chacha20)
12:20 < cdecker> Sphinx RV is what I mean here
12:20 < t-bast> rusty: yes, that definitely deserves more thinking, it's still an early proposal
12:20 < t-bast> cdecker: that means you'd be able to put less than 1300 bytes in the invoice?
12:20 < cdecker> Yes, that's what I'm thinking
12:20 < t-bast> cdecker: by somehow sharing the "mid-state" of the onion encryption?
12:21 < t-bast> Do you have drafts on how we could do that?
12:21 < cdecker> Not yet, but I'm working on it
12:21 < cdecker> The trailer generation just needs to change slightly so it can be generated at the sender
12:22 < t-bast> Nice, I think it's worth putting some effort in that direction
12:22 < rusty> I really want a system which can be used both for simple private channels and fully private routes.
12:22 <+roasbeef> even sphinx RV leaves a lot to be desired
12:22 <+roasbeef> as it only packs a single route, and how many payments work with just one single golden route? also what about payment splitting
12:22 < cdecker> Right, that's indeed an issue
12:23 < t-bast> rusty: yes, if we had that sphinx rv we could use it from a mobile with only one node, and that would hide your node_id and scid, that would be great
12:23 < t-bast> roasbeef: we'd need to provide more than one of those
12:23 <+roasbeef> how many....? ;)
12:23 < t-bast> simply because of MPP if you want to do a big payment
12:23 < t-bast> so it really needs to be somewhat space efficient
12:23 <+roasbeef> idk, feels like a dead end given new considerations
12:23 < t-bast> or we should work on a different encoding for invoices, more compact :D
12:24 < t-bast> what does feel like a dead end?
12:24 < t-bast> my decoys or full sphinx rdv?
12:25 <+roasbeef> sphinx-based rdv, haven't read this decoy thing yet but if it also relies on a set of pre-planned routes the same may apply /shrug
12:25 < t-bast> I don't know, it depends on how much space we can still use in invoices
12:26 < cdecker> The decoy IDs retain their usefulness for the last few hops, I just wouldn't extend it beyond that. Let's keep em as a sort of blinded routehint
12:26 <+roasbeef> in the OG hornet setting or even tor, the problem is simpler as you just want a path and you don't have as many constraints as we do in the network, so you can just create a handful of them (circuits) and they'll work for just about anything
12:26 < cdecker> That should make discussion more focused
12:27 < t-bast> let's say we're using these decoys for only a few hops indeed, not as a full rendezvous
12:27 < t-bast> the issue is that we need support from both the sender and the last hops' nodes, so it needs to bring enough features for implementations to all agree to implement and ship
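[Editor's note: a sketch of the trick cdecker alludes to at 12:19 — filling the back of the 1300-byte onion from a ChaCha20 keystream, so an invoice only has to carry the meaningful prefix and the receiver regenerates the rest deterministically. Uses the Python `cryptography` package; the key/nonce handling and the 200-byte prefix are illustrative assumptions, not the actual proposal:]

```python
# Deterministic onion filler from a ChaCha20 keystream (illustrative).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

ONION_LEN = 1300

def prng_stream(key: bytes, length: int) -> bytes:
    # ChaCha20 keystream = encryption of zeros (this API takes a 16-byte nonce).
    cipher = Cipher(algorithms.ChaCha20(key, b"\x00" * 16), mode=None)
    return cipher.encryptor().update(b"\x00" * length)

shared_key = os.urandom(32)    # stand-in for a shared secret both sides derive
prefix = os.urandom(200)       # the part the sender actually has to transmit
onion = prefix + prng_stream(shared_key, ONION_LEN - len(prefix))
assert len(onion) == ONION_LEN # receiver can regenerate the tail from the key
```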
12:31 < rusty> t-bast: come for the blinding, stay for the fact that we add feature flags to routehints!
12:31 < t-bast> that can be a first sell ;)
12:32 < t-bast> I think it's somewhat simple to implement, but we need to have enough people review the crypto and implementations ACK to make sure it can ship in wallets
12:33 < t-bast> I'll have to leave in a few minutes, shall we wrap it up and keep discussing that on the mailing list and github?
12:34 < t-bast> We had to skip over two PRs, feel free to review them on github so we can merge them next time ;)
12:35 < cdecker> Same here, it's getting late :-)
12:35 < joostjgr> One final comment from me on anchors: last meeting we decided to drop the symmetric csv because it wasn't a complete fix for the gaming issue. That left us with two options to proceed, from which we made a preliminary decision. Please check out the PR to comment. We are planning to merge the new format with an experimental feature bit in the next release.
12:36 < t-bast> joostjgr: sounds good, let's discuss that on Github
12:36 < t-bast> #action look at latest developments on anchor outputs PR
12:37 < t-bast> #action cdecker to investigate full sphinx rendezvous possibilities
12:37 < t-bast> #action tear decoys proposal apart on github until we're satisfied
12:37 < t-bast> #endmeeting
12:37 < lightningbot> Meeting ended Mon Feb 17 20:37:57 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
12:37 < lightningbot> Minutes: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-02-17-19.06.html
12:37 < lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-02-17-19.06.txt
12:37 < lightningbot> Log: http://www.erisian.com.au/meetbot/lightning-dev/2020/lightning-dev.2020-02-17-19.06.log.html
12:38 < t-bast> Thanks all, I'll update the issues tomorrow and apply the action items.
12:38 < t-bast> See you next time!
12:40 < cdecker> Thanks everyone :-)
12:40 < joostjgr> Bye everyone
12:40 < dome> Thanks guys! This is my first time just listening in as a fly on the wall... Looking forward to eventually joining the conversation. Was great to see the workflow
12:44 < niftynei> "Thanks guys!" *ahem*
12:46 < niftynei> don't leave me out dome! :P
12:47 < lndbot> Thanks, guize! x)
12:47 < niftynei> -.-
12:49 < dome> Thanks to all! :D
12:49 < dome> Really need to start getting used to using 'folks' :p
12:50 < lndbot> "Thanks, guys and gals" :P
12:50 < dome> But really, thank you all for all you do! The LN community is inspiring.
12:52 < niftynei> glad to have you on board dome! :)
12:53 < niftynei> " "Thanks, guys and gals" :P" <3
12:54 < dome> <3
13:29 <+roasbeef> y'all > anything
13:30 <+roasbeef> y'all is the final evolution of plural pronouns in the english language, even works in cases outside of that (plural possessive, second person pronoun)
13:31 < dome> need to bring back my southern roots!
13:31 < dome> y'all went away when i moved from VA to NY
13:32 <+roasbeef> y'all stays with one always, like a steadfast confidant
13:32 < m-schmoock> whoa, missed it, was on the road :/
13:32 < m-schmoock> anyway, you discussed the channel lockup, I'm just going through the wall of text
13:37 < cdecker> Yes m-schmoock, we discussed it a bit, but it seems people are not happy with the countermeasures yet, might need a couple more rounds of discussion
13:38 < m-schmoock> well obviously there's no clean and easy solution
13:40 < dome> @roasbeef the NY vernacular stuck with me so much even after moving away. It's only now starting to leave me, going to re-introduce y'all back in there ;)
14:35 < aj> roasbeef: "all y'all" is surely a further evolution
16:40 <+roasbeef> aj: perhaps... but the "all" there is just an intensifier right?
16:40 <+roasbeef> "I need y'all to leave" vs "I need ALL y'all to leave"
16:40 <+roasbeef> the latter elicits a jump to attention to gtfo
16:43 < aj> y'all is a group, all y'all is the entire crowd, as i understand it
16:44 <+roasbeef> ?
16:44 < aj> https://en.wiktionary.org/wiki/all_y%27all
16:45 <+roasbeef> i think i'd usually substitute the "all" with, like, a circle twirl with my finger in the air when needing to address the crowd
16:45 < aj> don't know i've ever said y'all rather than written it
16:45 <+roasbeef> kek
18:41 < rusty> I thought about being clever and using a timestamp since Jan 1 2020 instead of 1970, but it turns out that takes 4 bytes within 6 months anyway, so no point.
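[Editor's note: a quick check of rusty's arithmetic — three bytes of seconds only cover about six months, so shifting the epoch from 1970 to 2020 saves nothing for long:]

```python
# 2^24 seconds is the most a 3-byte timestamp can hold.
print(2**24 / 86_400)   # ≈ 194.2 days, i.e. a bit over six months
```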
--- Log closed Tue Feb 18 00:00:44 2020