--- Log opened Fri Feb 07 00:00:32 2020
00:45 -!- PaulTroon [~paultroon@h-5-150-248-150.NA.cust.bahnhof.se] has joined #c-lightning
01:16 -!- Victorsueca [~Victorsue@unaffiliated/victorsueca] has quit [Read error: Connection reset by peer]
01:16 -!- Victor_sueca [~Victorsue@unaffiliated/victorsueca] has joined #c-lightning
01:29 < m-schmoock> git rebase is usually a lot easier in c-lightning, as the commit history tends to be very clean :D
01:30 < zmnscpxj> git rebase is a lot easier if you always use git rebase
01:30 < zmnscpxj> if you let your history have a lot of merge commits, git rebase is an eldritch horror
01:30 < m-schmoock> yup
01:31 < zmnscpxj> Since C-lightning uses rebase-then-fast-forward to merge instead of actual git merge, we follow this implicitly
01:31 < m-schmoock> hey, is there a way to determine the maximum amount one can technically receive on a channel?
01:31 < m-schmoock> we have spendable_msat but we don't have receivable_msat
01:32 < m-schmoock> can I do some good guesstimations or is this remote-implementation dependent?
01:32 < zmnscpxj> .....not yet
01:32 < m-schmoock> hm
01:32 < zmnscpxj> you could do guesstimations
01:32 < zmnscpxj> or write a plugin that computes it for you............................
01:32 < zmnscpxj> plugins are cool
01:32 < zmnscpxj> plugins are awesome
01:32 < zmnscpxj> PLUGINS
01:33 < m-schmoock> I'm working on the drain plugin, which runs into trouble when doing a 'fill 100%' operation, especially when multiple chunks are involved
01:33 < m-schmoock> because 'fill' can't know for sure the amount the remote is able to send
01:33 < zmnscpxj> can remote admit to you how much money it has?
01:33 < m-schmoock> I'm thinking about re-implementing all the trimmed/untrimmed fee and reserve calculations inside my plugin
01:34 < zmnscpxj> because if not there is nothing you can really do
01:34 < m-schmoock> my plugin would just assume remote has 0 HTLCs in flight
01:34 < m-schmoock> and do the math with that and hope the remote implementation does the same
01:35 < m-schmoock> only other way would be to not allow 'fill 100%' but cap it at like 'fill 99%'
01:35 < m-schmoock> which is not cool
01:35 < zmnscpxj> remote is not your direct peer?
01:35 < m-schmoock> it is
01:36 < m-schmoock> direct peer
01:37 < zmnscpxj> Then you just deduct the channel reserve from the entire capacity, subtract max(your_owned, your_channel_reserve) from that difference
01:37 < zmnscpxj> how do you convince the direct peer to give you money?
01:38 < m-schmoock> this is the fill testcase that fails: https://github.com/lightningd/plugins/pull/71/files#diff-88f5d9948c5e37be5dc85a7dfbfc4503R163
01:38 < m-schmoock> zmnscpxj: circular payment
01:38 < zmnscpxj> ah.
01:38 < m-schmoock> I'm just getting trouble when doing 100% drain/fill
01:38 < m-schmoock> stuff like 99% usually works
01:38 < zmnscpxj> That depends on the state of intermediate channels that you might not be able to see
01:39 < m-schmoock> exactly
01:39 < zmnscpxj> you consider the channel reserve?
01:39 < m-schmoock> yes
01:39 < zmnscpxj> well.... *shrug* I suppose ...? Nothing better? Best-effort?
01:39 < m-schmoock> https://github.com/lightningd/plugins/blob/master/drain/drain.py#L100
01:39 < m-schmoock> receivable = their - their_reserve
01:40 < m-schmoock> but that does not account for HTLC trimmed stuff
01:40 < zmnscpxj> haha no
01:40 < m-schmoock> which raises the issue I describe
01:40 < zmnscpxj> making multiple payments is no good?
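[Editor's note] The `receivable = their - their_reserve` estimate discussed above can be sketched as a small helper. This is illustrative only, assuming zero in-flight HTLCs and ignoring commitment-fee and trimming effects (exactly the gap the conversation is about); the function name and parameters are hypothetical, not part of the drain plugin or the c-lightning API:

```python
def estimate_receivable_msat(channel_total_msat: int,
                             our_msat: int,
                             their_reserve_msat: int) -> int:
    """Naive upper bound on what we can receive over a direct channel.

    their_msat = total - our_msat; the peer cannot dip below its
    channel reserve, so everything above that reserve is (optimistically)
    receivable. Ignores in-flight HTLCs and onchain-fee effects.
    """
    their_msat = channel_total_msat - our_msat
    return max(0, their_msat - their_reserve_msat)
```

E.g. with a 1,000,000 msat channel where we own 400,000 msat and their reserve is 10,000 msat, this yields 590,000 msat — an overestimate whenever HTLC fees come into play, which is why 'fill 100%' fails where 'fill 99%' works.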
01:40 < m-schmoock> making multiple payments makes the thing worse
01:40 < zmnscpxj> ha yes
01:40 < m-schmoock> which is how I found out about this
01:40 < zmnscpxj> each HTLC also has onchain fees
01:40 < m-schmoock> check the first link with the test setup
01:40 < m-schmoock> so, any useful ideas? :D
01:41 < zmnscpxj> if the peer is the opener of the channel it pays for the HTLC onchain fees while they are still present
01:41 < zmnscpxj> then later when the HTLC is claimed the peer gets the onchain fees back because the HTLC has disappeared and no longer needs to be paid for onchain
01:42 < m-schmoock> ah good to know :D I thought this was direction-independent
01:42 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has quit [Ping timeout: 252 seconds]
01:42 < m-schmoock> zmnscpxj: which is why I can do a 'drain' two times, and the second time it's just a tiny amount
01:42 < zmnscpxj> that's what I remember, "initiator pays"
01:42 < zmnscpxj> haha
01:43 < zmnscpxj> lemme double-check
01:43 < m-schmoock> fun fact: doing this brings a channel to an invalid state
01:43 < m-schmoock> but at least it does not crash anymore
01:43 < zmnscpxj> ???????????
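[Editor's note] The "initiator pays" rule above can be illustrated with the commitment-fee arithmetic from BOLT 3: the channel funder's balance pays the commitment-transaction fee, and each untrimmed HTLC adds one output's worth of weight, so offering an HTLC temporarily costs the funder extra fee until the HTLC resolves. A sketch using the pre-anchor BOLT 3 weights; the helper function itself is illustrative:

```python
# Pre-anchor BOLT 3 weight constants.
COMMITMENT_WEIGHT_BASE = 724   # commitment tx with no HTLC outputs
HTLC_OUTPUT_WEIGHT = 172       # added per untrimmed HTLC output

def commit_fee_msat(feerate_per_kw: int, num_untrimmed_htlcs: int) -> int:
    """Commitment-tx fee the funder must cover, in millisatoshi.

    fee_sat = feerate_per_kw * weight / 1000, so fee_msat is simply
    feerate_per_kw * weight.
    """
    weight = COMMITMENT_WEIGHT_BASE + HTLC_OUTPUT_WEIGHT * num_untrimmed_htlcs
    return feerate_per_kw * weight
```

At the 253 sat/kw floor, each untrimmed HTLC adds 253 * 172 = 43,516 msat of fee charged to the funder while in flight — which is why draining a second time moves only a tiny amount, and why multiple chunks make the 100% case worse.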
01:43 < zmnscpxj> whut
01:43 < m-schmoock> there was a PR a couple of months ago
01:43 < zmnscpxj> invalid
01:43 < m-schmoock> kinda
01:43 < m-schmoock> one moment, I'll give you some details
01:45 < m-schmoock> this was the crash fix https://github.com/ElementsProject/lightning/pull/2688
01:45 < m-schmoock> but it still brings a channel into a state which needs a 'very small transaction first' to unlock it
01:45 < m-schmoock> but no crash at least
01:46 < m-schmoock> Maybe I'll take the time to write a test to demonstrate this for once
01:47 < m-schmoock> at least it was like that mid last year if I remember correctly
01:47 < m-schmoock> I'm only working occasionally on this stuff, my three kids take most of my time :D
01:47 < zmnscpxj> haha ok
01:48 < m-schmoock> this line and following: https://github.com/ElementsProject/lightning/pull/2688/files#diff-e6fd9e4a383e2ab7819a6292c1edf70bR2160
01:49 < m-schmoock> (merged) it demonstrates that if a channel is drained twice by 100% it must be unlocked by a trimmed HTLC
01:50 < m-schmoock> <- mr edge case ;)
01:50 < zmnscpxj> hehe yes
01:50 < zmnscpxj> but edge cases are where most bugs are (especially crash bugs in low-level languages) so this is still appreciated
01:51 < m-schmoock> I know. So as I understand it's not an 'invalid state' but rather an 'unfortunate state'
01:51 < m-schmoock> it's caused by a protocol issue
01:52 < zmnscpxj> yes
01:52 < zmnscpxj> does not seem to be feasible to do 100%, because onchain fees suck
01:52 < zmnscpxj> but you can get close in practice I think
01:52 < m-schmoock> what would be your guesstimation to do a fill 100% ?
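[Editor's note] Why only a trimmed HTLC can "unlock" the drained channel: per BOLT 3, an offered HTLC too small to pay for its own output plus the HTLC-timeout fee is trimmed to fees — it gets no output on the commitment transaction, so it adds no weight the funder must pay for, and can therefore still be sent when the funder cannot afford another untrimmed HTLC. A sketch of the threshold check using the pre-anchor HTLC-timeout weight; the function name and parameters are illustrative:

```python
# Pre-anchor BOLT 3 weight of the HTLC-timeout transaction.
HTLC_TIMEOUT_WEIGHT = 663

def is_offered_htlc_trimmed(amount_msat: int,
                            dust_limit_sat: int,
                            feerate_per_kw: int) -> bool:
    """True if an offered HTLC would be trimmed (no commitment output).

    BOLT 3: offered HTLCs below dust_limit + HTLC-timeout fee are
    trimmed to fees instead of materializing as an output.
    """
    htlc_timeout_fee_sat = feerate_per_kw * HTLC_TIMEOUT_WEIGHT // 1000
    return amount_msat < (dust_limit_sat + htlc_timeout_fee_sat) * 1000
```

E.g. with a 546 sat dust limit and 253 sat/kw feerate, the trim threshold is (546 + 167) * 1000 = 713,000 msat, so any payment below that slips through without a new commitment output — the 'very small transaction first' mentioned above.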
01:52 < m-schmoock> draining already works after this and another PR
01:53 < zmnscpxj> silently convert it to fill 99%
01:53 < zmnscpxj> LOL
01:53 < m-schmoock> :D
01:53 < m-schmoock> maybe we should make a PR that disallows bringing a channel into this 'unfortunate state' where only an untrimmed low-value HTLC unlocks it
01:53 < m-schmoock> because this is clearly not a useful state
01:53 < m-schmoock> and harms network health
01:54 < zmnscpxj> yes, we could add softer margins than the hard 1% reserve limits
01:54 < m-schmoock> in theory I can do a circular payment to bring a remote into that state
01:55 < m-schmoock> *in practice
01:55 < m-schmoock> ;)
01:55 < zmnscpxj> if you know who funded it and how much each side owns
01:55 < zmnscpxj> which is not that hard to grok with proper chain analysis and payment probing
01:55 < m-schmoock> that's totally doable by trial and error
01:55 < m-schmoock> I can do it with a plugin right now
01:55 < zmnscpxj> yep
01:55 < zmnscpxj> file an issue?
01:55 < m-schmoock> .... let's just lock all the channels ;)
01:56 < m-schmoock> I did back then, but it didn't really get much concern
01:56 < zmnscpxj> you forgot to do the evil laugh
01:56 < m-schmoock> (aside from the crash thing)
01:56 < m-schmoock> lol
01:56 < m-schmoock> hm, maybe you're right
01:56 < m-schmoock> I can write a test to demonstrate how to 'trim/lock' remote channels that are not even directly connected
01:57 < zmnscpxj> that would be good, yes
01:57 < zmnscpxj> as a motivating example
01:57 < m-schmoock> then we need to add a higher margin to the reserves/fee
01:57 < zmnscpxj> something like that, a little softer but yes
01:57 < m-schmoock> we just need to prevent running into the trimmed state
01:57 < m-schmoock> that's it
01:58 < m-schmoock> give a temporary capacity error
01:58 < m-schmoock> or, I could totally screw the network MUHAHAHHAWWA
01:58 < zmnscpxj> :thumbsup:
01:58 < m-schmoock> we tested on LND I think back then the situation was the same, but no crash on their side
01:59 < m-schmoock> alright, I will stick with it, but as usual for me don't expect fast progress ;)
01:59 < m-schmoock> I need to get rid of this 9/17 job
02:07 -!- ghost43 [~daer@gateway/tor-sasl/daer] has quit [Remote host closed the connection]
02:08 -!- ghost43 [~daer@gateway/tor-sasl/daer] has joined #c-lightning
02:16 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zkjjaoqsjfattawp] has left #c-lightning []
02:16 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zkjjaoqsjfattawp] has joined #c-lightning
02:50 -!- ghost43 [~daer@gateway/tor-sasl/daer] has quit [Ping timeout: 240 seconds]
02:50 -!- ghost43_ [~daer@gateway/tor-sasl/daer] has joined #c-lightning
03:14 -!- Kostenko [~Kostenko@2001:8a0:7293:1200:68f5:2150:4d47:f16f] has quit [Remote host closed the connection]
03:44 -!- Kostenko [~Kostenko@2001:8a0:7293:1200:68f5:2150:4d47:f16f] has joined #c-lightning
03:48 < darosior> I cannot restart Travis jobs anymore :-(
05:16 -!- jonatack [~jon@213.152.161.244] has joined #c-lightning
05:19 -!- jonatack [~jon@213.152.161.244] has quit [Client Quit]
05:19 -!- jonatack [~jon@213.152.161.244] has joined #c-lightning
05:23 -!- Amperture [~amp@65.79.129.113] has quit [Remote host closed the connection]
05:26 -!- PaulTroon [~paultroon@h-5-150-248-150.NA.cust.bahnhof.se] has quit [Remote host closed the connection]
05:27 < darosior> cdecker: Have I been struck off Travis for abusing job restarts?
05:28 < darosior> Or for trying to push gorgeous ASCII art into C-lightning logs through a plugin :p
05:29 < zmnscpxj> Possibly? Gorgeous ASCII art causes Travis to mark the test case as passing no matter what.
06:05 < darosior> ^^
06:35 -!- rafalcpp [~racalcppp@ip-178-211.ists.pl] has joined #c-lightning
06:55 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has joined #c-lightning
07:03 -!- Amperture [~amp@65.79.129.113] has joined #c-lightning
07:18 -!- ghost43_ is now known as ghost43
07:30 -!- jonatack [~jon@213.152.161.244] has quit [Ping timeout: 260 seconds]
07:56 -!- mdunnio [~mdunnio@38.126.31.226] has joined #c-lightning
08:16 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has joined #c-lightning
08:47 -!- kristapsk_ is now known as kristapsk
09:15 -!- rafalcpp [~racalcppp@ip-178-211.ists.pl] has quit [Read error: Connection reset by peer]
09:16 -!- queip [~queip@unaffiliated/rezurus] has quit [Quit: bye, freenode]
09:28 -!- zmnscpxj [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has quit [Ping timeout: 240 seconds]
10:03 -!- queip [~queip@unaffiliated/rezurus] has joined #c-lightning
10:20 -!- PaulTroon [~paultroon@h-5-150-248-150.NA.cust.bahnhof.se] has joined #c-lightning
10:32 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zkjjaoqsjfattawp] has left #c-lightning []
10:32 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zkjjaoqsjfattawp] has joined #c-lightning
13:10 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
13:17 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Quit: Leaving.]
14:52 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Ping timeout: 240 seconds]
14:55 -!- mdunnio [~mdunnio@38.126.31.226] has quit [Remote host closed the connection]
15:06 -!- jb55 [~jb55@gateway/tor-sasl/jb55] has quit [Ping timeout: 240 seconds]
15:09 -!- jb55 [~jb55@gateway/tor-sasl/jb55] has joined #c-lightning
16:21 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
16:32 -!- Amperture [~amp@65.79.129.113] has quit [Remote host closed the connection]
17:49 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Ping timeout: 240 seconds]
18:27 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
18:58 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Quit: Leaving.]
19:06 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zkjjaoqsjfattawp] has left #c-lightning []
19:06 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zkjjaoqsjfattawp] has joined #c-lightning
19:19 -!- nixbox [499e347f@c-73-158-52-127.hsd1.ca.comcast.net] has joined #c-lightning
20:17 -!- kristapsk [~KK@gateway/tor-sasl/kristapsk] has quit [Remote host closed the connection]
20:33 -!- belcher [~belcher@unaffiliated/belcher] has quit [Ping timeout: 260 seconds]
21:48 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
21:49 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Client Quit]
22:00 -!- queip [~queip@unaffiliated/rezurus] has quit [Ping timeout: 272 seconds]
22:02 -!- queip [~queip@unaffiliated/rezurus] has joined #c-lightning
22:37 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has joined #c-lightning
--- Log closed Sat Feb 08 00:00:33 2020