--- Log opened Thu May 23 00:00:21 2019
00:26 -!- adam3us [~adam3us@unaffiliated/adam3us] has quit [Ping timeout: 250 seconds]
00:26 -!- adam3us [~adam3us@unaffiliated/adam3us] has joined #c-lightning
01:23 -!- darosior [6dbe8dc1@gateway/web/freenode/ip.109.190.141.193] has joined #c-lightning
01:35 < m-schmoock> I noted that my node on the chaintools explorer looks like it has fewer and fewer channels (only 3 left now). My node has ~20 channels though. Are the channel updates not sent out at a minimum frequency? Or are we doing no updates if nothing changes? Can someone please check their `listnodes 02a2c53bc475cb92e4ab2f38a5bca56df695034ce90ad78c2f47c05911e3f79e41` and check how many channels I have
01:35 < m-schmoock> announced via gossip?
01:39 < m-schmoock> ah, this should do it: lightning-cli listchannels | grep "02a2c53bc475cb92e4ab2f38a5bca56df695034ce90ad78c2f47c05911e3f79e41" | wc
02:16 < m-schmoock> What does this mean: "Adding HTLC too slow: killing channel"? (afterwards my channel got uni-closed)
02:36 -!- Letze [uid63391@gateway/web/irccloud.com/x-zgwpszrethbumcvu] has joined #c-lightning
02:40 < t0mix> m-schmoock .."| wc " shows "0 0 0"
02:42 < t0mix> but it is the same for my node. it doesn't seem right.
02:46 < m-schmoock> t0mix: can you remote the "| wc" and tell me the output ?
02:47 < t0mix> ah, remove.. sure.. no output at all
02:47 < m-schmoock> :D
02:55 < m-schmoock> t0mix: anything?
03:00 < t0mix> no, nothing. no output for your node, no output for my node.
03:00 < t0mix> I do get some output for some other nodes. but not all channels.
03:01 < m-schmoock> strange
03:02 < m-schmoock> not even JSON {} or any error ?
03:02 < t0mix> no error, exit code 0
03:02 < m-schmoock> cdecker: can you please try lightning-cli listchannels | grep 02a2c53bc475cb92e4ab2f38a5bca56df695034ce90ad78c2f47c05911e3f79e41 and pastebin the output if it's not empty?
03:03 < m-schmoock> t0mix: we could use "setchannelfees" to enforce gossip channel updates, if that's the case
03:06 < m-schmoock> t0mix: I can only see one channel of yours
03:06 < m-schmoock> no, mom, my fault
03:06 < m-schmoock> I see 14
03:07 < m-schmoock> [btc@schmoock ~]$ lightning-cli listchannels | grep 0214ec84c84827dd4911de56d2ecb77d367c6f24c658b8acfe4826b01968e45594 | wc
03:07 < m-schmoock> > 28
03:07 < m-schmoock> (so 14 for each direction)
03:07 < t0mix> should be 15, but +1 was opened a few moments ago
03:09 < t0mix> I checked on my other server. I see your channels there.
03:10 < t0mix> wc shows "36 72 3222"
03:10 < m-schmoock> kk. thx
03:10 < t0mix> but why don't I see it on my main server =|
03:11 < m-schmoock> fishy
03:12 < m-schmoock> you can stop, remove the gossip store and restart. but it will hurt your RasPi
03:12 < t0mix> I really don't want to restart the client x,D
03:12 < m-schmoock> :D
03:13 < m-schmoock> Preshipped, signed and checksummed blockchain downloads are a thing of the past. Today we are also building on preshipped, signed and checksummed gossip_store downloads
03:13 < m-schmoock> ^^
03:14 < m-schmoock> Luke Dashjr is right. 300Kb are enough
03:14 < t0mix> it's just too late now
03:15 < t0mix> so gossip_store is safe to delete, yup?
03:15 < t0mix> I'll wait with the restart. hoping for a miracle.
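[A minimal sketch of the channel-counting check above, using jq instead of grep | wc so only actual channel entries are counted; jq and the filter are illustrative tooling, not part of c-lightning. Each public channel appears once per direction in listchannels, hence the division by two:]

    NODE=02a2c53bc475cb92e4ab2f38a5bca56df695034ce90ad78c2f47c05911e3f79e41
    # count entries where the node is either endpoint, then halve to get channels
    lightning-cli listchannels \
      | jq --arg id "$NODE" '[.channels[] | select(.source == $id or .destination == $id)] | length / 2'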
03:16 < m-schmoock> yes, but it will seriously hurt your RasPi CPU
03:16 < m-schmoock> it will check each channel it discovers against the UTXO set of bitcoind
03:16 < m-schmoock> eventually it will have built a new view of the network
03:30 < m-schmoock> cdecker: can you ack this minor plugin fix?: https://github.com/lightningd/plugins/pull/30 I'm working on a larger cleanup/refinement, this one dropped out as someone raised an issue on github...
03:32 <@cdecker> Added a small comment about using `//` for integer divisions :-)
03:32 <@cdecker> ... and merged
03:32 < m-schmoock> thx
03:34 <@cdecker> This IRC thing will never last, I can't add a thumbsup emoji as a reaction to a message /s ^^
03:35 < m-schmoock> I'm sure something will take over this Bitcoin thing because it's better, so no invest
03:35 < m-schmoock> I'm sure this internet thing will be obsolete one day
03:35 < m-schmoock> :D
03:43 < m-schmoock> cdecker: I really believe we have an issue regarding the current handling of the HTLC commitment fee calculation. I can bring down channels to such a low value that it is impossible to use the channel unless a small payment is made first.
03:43 < m-schmoock> this can be done to remote channels
03:44 < m-schmoock> I tested with LND and clightning as remotes
03:45 < m-schmoock> Will try the other direction ('fill' command) soon when the plugin supports that
03:48 -!- rafalcpp [~racalcppp@84-10-11-234.static.chello.pl] has joined #c-lightning
03:55 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
04:02 -!- bitonic-cjp [~bitonic-c@92-111-70-106.static.v4.ziggozakelijk.nl] has joined #c-lightning
04:03 < t0mix> how about copying gossip_store from my other server and replacing it on the raspi? will that work fine?
04:05 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
04:31 <@cdecker> t0mix: that should work fine, assuming both versions are the same, and the nodes are turned off during the transfer
04:34 < t0mix> v0.7.0-1-g0631490 vs v0.7.0-53-g02ddeed
04:35 < t0mix> but I can upgrade before the transfer
04:35 <@cdecker> Hm, that might be problematic, Rusty had a change that changes the format of the gossip_store
04:35 < t0mix> still, I'll have to do the restart :( I see I'm going to lose some channels again
04:35 < m-schmoock> the good part is that these changes are intended to perform better on the RasPi
04:41 < t0mix> I'm happy you care :)
04:42 < t0mix> I'd better do the restart now while the mempool is close to empty. is it time to upgrade to the master version, too?
04:46 -!- Letze [uid63391@gateway/web/irccloud.com/x-zgwpszrethbumcvu] has quit [Quit: Connection closed for inactivity]
04:57 < m-schmoock> I would give it a try
04:57 < t0mix> already restarting. not @ master version so far.
04:58 < m-schmoock> I only noticed we have some gossip startup crash bugs when starting/stopping the daemon quickly. Other than that it currently feels stable
04:59 < t0mix> define quickly
04:59 < m-schmoock> define CPU
05:00 < m-schmoock> I'm developing plugins. sometimes I need to restart, because the plugin reloader has some bugs left. I notice that in about 3 out of 20 restarts it crashes more or less directly on startup with some gossip stacktrace
05:04 < t0mix> armv7, quadcore 900MHz
05:04 < t0mix> Raspberry Pi 2 Model B
05:06 < t0mix> so far it's starting, but gossip_store is growing sloowly 140kB >.<
05:10 < t0mix> sigh.. 7 channels lost :'(
05:11 < t0mix> yay! only 6! =D
05:14 < t0mix> 155462 satoshi gone.. jeez that hurts
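[Rebuilding the gossip_store is slow for the reason m-schmoock gives at 03:16: every discovered channel's funding output gets verified against bitcoind's UTXO set. A rough sketch of that lookup for a single channel; the short_channel_id value is a made-up example, decoded as the block x txindex x output format that listchannels prints:]

    SCID="565379x1234x0"                      # hypothetical short_channel_id
    IFS='x' read -r BLOCK TXIDX VOUT <<< "$SCID"
    BLOCKHASH=$(bitcoin-cli getblockhash "$BLOCK")
    TXID=$(bitcoin-cli getblock "$BLOCKHASH" | jq -r ".tx[$TXIDX]")
    bitcoin-cli gettxout "$TXID" "$VOUT"      # prints the output if unspent, nothing if spent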
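[For the transfer cdecker okays at 04:31, a minimal sketch under his two stated conditions, same version on both ends and both daemons stopped during the copy; the hostname and default ~/.lightning paths are placeholders:]

    ssh other-server 'lightning-cli stop'     # stop the source node
    lightning-cli stop                        # stop the destination node
    # both lightningd binaries should report the same version before copying
    scp other-server:.lightning/gossip_store ~/.lightning/gossip_store
    lightningd --daemon                       # then restart the source node too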
05:15 <@cdecker> Gone how?
05:15 < t0mix> UNILATERAL closure
05:16 < t0mix> Received error from peer: channel 4b41a19ec8505630847ba9298ca6e1e5dd39273ca48f943dd10a565e9345dc5f: sync error --> that kind of errors
05:16 <@cdecker> Goddamn, that
05:16 <@cdecker> that's lnd getting upset with their own logic again
05:16 < t0mix> at least I know the channels that survive will stand the next reboots, too
05:16 < molz> cdecker, wut?
05:17 < molz> cdecker, what do you mean?
05:17 <@cdecker> Both eclair and c-lightning get this very opaque error message when talking to lnd from time to time
05:18 < t0mix> if I could identify which node runs which SW I would avoid lnd. although it's just a workaround to minimize the loss.
05:19 <@cdecker> molz: for reference this is the error https://github.com/lightningnetwork/lnd/blob/c1c4b84757dd5b1e1fcb285b4a1fa6a56b35432c/htlcswitch/linkfailure.go#L23-L25
05:20 < molz> ok.. reporting
05:20 <@cdecker> It seems we disagree about some of the parameters (i.e., last commitment point) during re-establish: https://github.com/lightningnetwork/lnd/blob/27ae22fa6c2a69c0bda197d77dac99a8e756b7d6/htlcswitch/link.go#L873-L885
05:21 <@cdecker> Sorry for the closures t0mix, but the funds should be back eventually if that's any consolation
05:22 < t0mix> yeah cdecker, I know the funds will be back.. this is not the first time losing channels that way (shiner smile)
05:23 < t0mix> at least the gossip_store file is growing faster now. already @ 5MB.
05:31 < darosior> cdecker: I've tried to reduce repetitions as you suggested, but I struggle to see how "an array of arrays of commands" would be better ? I would still have to iterate through each array of commands
05:31 < t0mix> 155462 satoshi were fees, not funds =D
05:32 <@cdecker> Ouch
05:32 <@cdecker> darosior: I can give it a try later :-)
05:34 < darosior> Ok thanks
05:35 < darosior> :)
05:56 < molz> t0mix, 155462 sat went to miners?
05:58 < t0mix> exactly that much
05:59 < t0mix> I usually open channels with the "slow" option, which puts the transaction cost @ 500 sat --> that is very good. I have no problem waiting weeks.
06:00 < t0mix> if I forget to use slow, I get to 15000 sat immediately =D which is not cool. also I believe that unilateral closure was way overpaid. but well.. can something be done about it at all?
06:10 <@cdecker> Not really, unilateral closes rely on commitment transactions which the two parties have to agree on in advance (including fees), and they are way overpaid simply because they are the emergency mechanism
06:38 <@cdecker> Rusty has been working on a way to remove the commitment to fees, by providing a hook output that can be used for RBF, but the long-term solution is likely eltoo
06:39 < darosior> hmm, maybe I can instead sort commands by category on the lightningd side to avoid the 7 iterations in lightning-cli
06:43 <@cdecker> We shouldn't really do cosmetic stuff server-side, that's a purely representational issue so it's ok in lightning-cli
06:54 -!- designwish [~designwis@51.ip-51-68-136.eu] has quit [Quit: ZNC - http://znc.in]
06:59 -!- designwish [~designwis@51.ip-51-68-136.eu] has joined #c-lightning
06:59 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Quit: WeeChat 2.4]
06:59 -!- ulrichard [~richi@dzcpe6300borminfo01-e0.static-hfc.datazug.ch] has quit [Remote host closed the connection]
07:31 < t0mix> yes, implementing RBF could help with expensive closures. can't wait for eltoo, SIGHASH_NOINPUT++
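[Back-of-the-envelope arithmetic for the fee numbers above, using the commitment-transaction weights from BOLT 3; the feerate and HTLC count here are made-up examples. The key point is cdecker's: the feerate is fixed when the commitment is signed, not when the channel actually closes:]

    FEERATE_PER_KW=15000   # sat per 1000 weight units, example value
    BASE_WEIGHT=724        # BOLT 3 weight of a commitment tx with no HTLC outputs
    HTLC_WEIGHT=172        # BOLT 3 weight added per untrimmed HTLC output
    NUM_HTLCS=3
    echo $(( FEERATE_PER_KW * (BASE_WEIGHT + HTLC_WEIGHT * NUM_HTLCS) / 1000 ))
    # -> 18600 sat paid on a unilateral close at this agreed feerate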
07:59 < t0mix> any hint how to identify the LN software on a remote node? telnet to 9735 was my best shot.
08:30 < molz> so.. wen eltoo?
08:49 -!- bitonic-cjp [~bitonic-c@92-111-70-106.static.v4.ziggozakelijk.nl] has quit [Quit: Leaving]
08:55 <@cdecker> wen noinput ^^
09:05 -!- rh0nj [~rh0nj@88.99.167.175] has quit [Remote host closed the connection]
09:06 -!- rh0nj [~rh0nj@88.99.167.175] has joined #c-lightning
09:12 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has joined #c-lightning
09:30 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has quit [Remote host closed the connection]
09:32 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has joined #c-lightning
09:33 < molz> :D
09:35 -!- darosior [6dbe8dc1@gateway/web/freenode/ip.109.190.141.193] has quit [Quit: Page closed]
10:41 < t0mix> m-schmoock, I still don't see the channels. not even after a restart :(
12:17 < m-schmoock> that's strange
12:18 < m-schmoock> t0mix: when you do "lightning-cli listchannels" and the output is empty and the return code is 0 as you say, is there any error appearing in the logfile??
12:19 < m-schmoock> because that's definitely not correct behaviour
12:37 < t0mix> i do see some channels, but not all. listchannels|wc -l returns 20420
12:39 <@cdecker> t0mix: that usually means that you either aren't fully synchronized yet, or the bitcoind you are using to verify the channels is not returning correctly for some channels (pruned?)
12:40 < t0mix> on the other server (amd, much stronger, but housed at my friend's) it returns 1187716. much much more
12:40 < t0mix> not pruned
12:40 < t0mix> bitcoind is fully synced
12:45 < t0mix> even indexed
12:56 < t0mix> I'll check tomorrow. maybe the number will grow =} what I don't get is how my node is finding routes. or is that info not crucial?
13:11 -!- jtimon [~quassel@181.61.134.37.dynamic.jazztel.es] has joined #c-lightning
14:04 -!- darosior [52ff9820@gateway/web/freenode/ip.82.255.152.32] has joined #c-lightning
14:05 <@cdecker> A partial view of the network is sufficient in many cases; in particular if you talk to well-connected nodes it doesn't matter that you're missing some
14:05 < t0mix> so it is normal?
14:06 <@cdecker> Well you seem to be missing about 80% judging from the numbers you posted above, so I wouldn't call it optimal, but yes it'll work to some extent :_)
14:07 < t0mix> funny is that I don't see my own node in there
14:07 <@cdecker> Oh, wait, this commit may predate a rather stupid overflow bug we had
14:08 <@cdecker> We were cutting the result of listchannels every 65k/2 channels
14:08 <@cdecker> So updating will get you the full result again
14:09 <@cdecker> This is it: https://github.com/ElementsProject/lightning/pull/2505
14:09 < t0mix> ok, time to update to master definitely
14:10 <@cdecker> Yeah, going to release a new version in a couple of days
14:22 < grubles> \o/
14:29 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has joined #c-lightning
14:29 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Client Quit]
14:31 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has joined #c-lightning
14:42 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
14:49 -!- spaced0ut [~spaced0ut@unaffiliated/spaced0ut] has quit [Quit: Leaving]
14:51 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
15:00 < t0mix> running v0.7.0-375-g09761cd (master I hope =)
15:05 < t0mix> another 3 channels lost.
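[The wc -l figures traded above count pretty-printed JSON lines, which vary with field layout; counting the entries themselves is sturdier. A small sketch with jq, which is illustrative tooling rather than part of c-lightning:]

    # total directed channel entries in the local gossip view
    lightning-cli listchannels | jq '.channels | length'
    # each public channel appears once per direction, so halve this
    # for the number of distinct channels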
15:09 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has quit [Remote host closed the connection]
15:13 < roasbeef> wasn't something like this fixed on the c-lightning side iirc? that it would at times not send the proper point, which is what caused DLP to be disabled, but then re-enabled when it was fixed?
15:14 < roasbeef> sync error means either the chan reest wasn't sent, or an invalid commitment point was sent
15:14 < roasbeef> which is why we need proper error codes also; we stopped giving out more info since at times it was sensitive, and most error strings are just arbitrary
15:14 < roasbeef> those channels aren't lost either, we'll continue to send the chan reest even after we've swept our own funds
15:15 < roasbeef> but it's also the case that, in that situation, we'll just force close on chain, and if you got the chan reest, then it contains our commitment point, which allows it to sweep the coins
15:17 < t0mix> I'm trying to read with understanding
15:19 < t0mix> I can provide you with data, but I can hardly comment.
15:27 < t0mix> cdecker, listchannels|wc -l shows 110039! it works! thank you!
15:41 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
15:49 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
15:51 < roasbeef> t0mix: i mean your channels aren't lost, your node just needs to use the data it's been given to sweep them
15:52 < t0mix> maybe in theory, but the fact is that the channels are closed.
15:52 <@cdecker> It's sweeping correctly, it's the close that is annoying...
15:55 < t0mix> yup, unilateral closure of channels with lnd after a c-lightning restart is the problem. funds are safu, fees for closures are the pain. (not mentioning how hard I tried to balance them)
15:55 < roasbeef> the close would happen if CL sent an invalid commit point; this issue popped up in the past, which led to DLP being disabled in CL for a period of time before it was re-enabled
16:15 -!- jtimon [~quassel@181.61.134.37.dynamic.jazztel.es] has quit [Quit: gone]
16:28 -!- darosior [52ff9820@gateway/web/freenode/ip.82.255.152.32] has quit [Quit: Page closed]
16:28 < fiatjaf> can a plugin cause itself to reload?
16:28 < fiatjaf> like maybe just exit and lightningd will restart it
16:39 < fiatjaf> is it better to add a lot of jsonrpc methods or to add a single method with "subcommands"?
17:01 < molz> fiatjaf, can a plugin cause what to reload?
17:02 < fiatjaf> itself
17:02 < fiatjaf> so it can change its manifest
17:03 < molz> oh .. no idea :)
17:04 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Ping timeout: 246 seconds]
17:14 <@niftynei> cdecker, do you have any hot tips for how to get the python tests to run faster?
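[On niftynei's question, one generic speed-up is running test files in parallel; this is plain pytest-xdist usage rather than a c-lightning-specific recipe, and tests/ is assumed to be the repository's test directory:]

    pip install pytest-xdist
    # -n comes from pytest-xdist: one worker per CPU core
    pytest tests/ -n "$(nproc)"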
18:30 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has joined #c-lightning
19:29 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has quit [Remote host closed the connection]
19:29 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has joined #c-lightning
19:35 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has quit [Remote host closed the connection]
19:36 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has joined #c-lightning
20:05 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has quit [Remote host closed the connection]
21:07 -!- rh0nj [~rh0nj@88.99.167.175] has quit [Remote host closed the connection]
21:08 -!- rh0nj [~rh0nj@88.99.167.175] has joined #c-lightning
21:56 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
22:01 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has joined #c-lightning
22:12 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Ping timeout: 246 seconds]
22:24 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
22:33 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has joined #c-lightning
--- Log closed Fri May 24 00:00:22 2019