--- Log opened Fri Feb 08 00:00:43 2019
00:24 -!- ghost43 [~daer@gateway/tor-sasl/daer] has quit [Remote host closed the connection]
00:25 -!- ghost43 [~daer@gateway/tor-sasl/daer] has joined #c-lightning
01:23 < t0mix> so I try closing that channel
01:45 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
02:36 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
02:45 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
02:49 -!- rh0nj [~rh0nj@88.99.167.175] has quit [Remote host closed the connection]
02:49 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
02:50 -!- rh0nj [~rh0nj@88.99.167.175] has joined #c-lightning
02:54 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
03:24 <@cdecker> t0mix: that usually means that the funder created an invalid funding tx (double-spent outputs, low fees, or similar)
03:24 <@cdecker> t0mix: the channel will not confirm and your node will forget it after a grace-period of 1 day I think
03:25 <@cdecker> Correction: it's 2 weeks
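A rough way to check whether a stuck funding transaction ever confirmed, assuming a local bitcoind (with -txindex for arbitrary lookups) and the field names of this era of c-lightning; the peer id and txid below are placeholders:

  # funding txid of the channel with a given peer
  lightning-cli listpeers <peer_id> | jq -r '.peers[].channels[].funding_txid'
  # look it up on-chain: a missing "confirmations" field (or an error) means
  # it never confirmed as far as your node can tell
  bitcoin-cli getrawtransaction <funding_txid> true | jq '.confirmations'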
03:38 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Ping timeout: 250 seconds]
04:13 < t0mix> I've triggered 'lightning-cli "035d937399f0a5d1f2abfa379ab06d7d81e7e8395e80824cc0a24ca779fdecc5f0" close'.
04:13 < t0mix> status changed to '"CHANNELD_SHUTTING_DOWN:Funding needs more confirmations. They've sent shutdown, waiting for ours"'
04:13 < t0mix> oki, I'll just wait
04:56 -!- booyah [~bb@193.25.1.157] has quit [Read error: Connection reset by peer]
04:57 -!- booyah [~bb@193.25.1.157] has joined #c-lightning
05:18 -!- lio17 [~lio17@80.ip-145-239-89.eu] has quit [Ping timeout: 246 seconds]
05:19 -!- lio17 [~lio17@80.ip-145-239-89.eu] has joined #c-lightning
06:33 -!- spaced0ut [~spaced0ut@unaffiliated/spaced0ut] has joined #c-lightning
08:15 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has joined #c-lightning
08:20 -!- m-schmoock [~will@schmoock.net] has joined #c-lightning
08:23 < m-schmoock> Hello everyone, I'm new here. Trying to start with issue #2316. My name is Michael, please be gentle :D
09:21 -!- Eagle[TM] [~EagleTM@unaffiliated/eagletm] has joined #c-lightning
09:22 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has quit [Ping timeout: 250 seconds]
09:41 -!- Eagle[TM] [~EagleTM@unaffiliated/eagletm] has quit [Ping timeout: 268 seconds]
09:47 -!- justan0theruser is now known as justanotheruser
10:03 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has joined #c-lightning
10:08 <@cdecker> Hi m-schmoock, great to have you here :-)
10:08 <@cdecker> Great catch on #2321 btw
10:19 -!- thorie [~thorie@thorie1.xen.prgmr.com] has joined #c-lightning
10:22 < thorie> hello, can someone help me with c-lightning?
10:22 < thorie>   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                            
10:22 < thorie> 26081 thorie    20   0 8650972 8.200g    560 R 100.0 83.9 388:43.26 lightning_gossi  
10:23 < thorie> the process lightning_gossipd is at 100% cpu and rpc commands are unresponsive
10:23 < thorie> i think this is fixed, but i'm not sure how to safely upgrade my node
10:24 < thorie> is there a way to upgrade without losing my incoming channels?
10:25 < m-schmoock> cdecker: thx, i just can't take this Blockchain not Bitcoin BS anymore and need to focus my brain on real stuff like LN :D
10:27 < thorie> there is still no way to backup my channels, right?
10:30 < m-schmoock> you think you lose the states when you do a `pkill lightningd`; <update stuff>; lightningd --daemon ?
10:30 < m-schmoock> I'm not an expert yet, but if you make a backup of your ~/.lightning directory before the upgrade you should be fine
10:31 < m-schmoock> at least i have not yet lost mainnet coins this way
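For what it's worth, a minimal sketch of that stop/backup/upgrade sequence, assuming the default ~/.lightning directory and a source checkout (adjust paths to your setup):

  lightning-cli stop                                 # cleanly shut down lightningd
  cp -a ~/.lightning ~/.lightning.bak-$(date +%F)    # cold copy of the DB and hsm_secret
  cd lightning && git pull && make                   # build the new version
  lightningd/lightningd --daemon                     # restart on the upgraded binary
  # the copy is for disaster recovery only: never start lightningd against a stale
  # copy of the DB, since broadcasting an old channel state can get you penalized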
10:32 < thorie> last time i upgraded that way, i lost all my mainnet coins
10:33 < thorie> i'm wondering if there are more safeguards now against incompatible database upgrades
10:34 < thorie> apparently backups are not safe: when i did it, somehow old states got broadcast, i got penalized, and i lost all my coins
10:34 < thorie> it was after a crash, i tried to upgrade and re-run c-lightning on my old db
10:34 < thorie> but i jumped about 1 year of missed upgrades
10:35 < thorie> i guess i have no choice but to close all my channels, but i will lose all my incoming capacity
10:36 < thorie> my node keeps crashing every day, so i have to upgrade
10:36 < thorie> at this point i think i need to switch to lnd, but that means rewriting all my code
10:36 < thorie> since my code depends on communicating via the c-lightning rpc
10:37 < thorie> but i'm not sure if lnd is any better, i might end up with the exact same issues
10:38 < thorie> maybe it's still too early to expect not to lose all my coins on every upgrade
10:38 <@cdecker> thorie: that's a known issue addressed in the latest `master` but not released yet
10:38 <@cdecker> (the CPU being pegged at 100% that is)
10:39 < thorie> is there any plan on having some safe upgrade checking?
10:39 < thorie> i'm happy to use master, as long as i don't lose my coins again
10:39 < thorie> last time i only lost about $20 i had in channels, but this time i have a lot more
10:40 < thorie> if not, i'll close my channels and move all the funds out of c-lightning before upgrading
10:42 < thorie> you guys are probably too busy focusing on other issues to worry about upgrades
10:42 < thorie> i'll just close everything
10:46 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has quit [Ping timeout: 246 seconds]
10:48 < thorie> damn, closing channels failed
10:48 < thorie> { "code" : -1, "message" : "Channel close negotiation not finished before timeout" }
10:49 < thorie> looks like i'm going to lose my funds again
10:49 <@cdecker> It should always be safe to upgrade to master
10:49 < thorie> only in theory though
10:50 <@cdecker> Could you share some more information about the failed upgrade?
10:50 < thorie> last time i upgraded to master, i lost all my coins
10:50 < thorie> yeah we discussed it already in the github issue ticket, but in the end, there was no solution
10:51 <@cdecker> Oh, that was the botched downgrade, right?
10:51 <@cdecker> Yeah, upgrading is safe, downgrading is not
10:51 <@cdecker> (and c-lightning will prevent you from starting if downgrading the DB)
10:52 <@cdecker> I'm happy to help you recover those coins if you have a copy of the DB and the hsm_secret file, btw
10:52 < thorie> no, it wasn't a downgrade
10:53 < thorie> it was an upgrade, but skipping many many releases
10:54 < thorie> i don't know how db changes are handled over time, or whether only incremental upgrades are supported
10:55 < thorie> i did a huge jump from an early version of c-lightning to the latest master (at the time) and something went horribly wrong
10:55 < thorie> i don't know if you have a test suite that is constantly testing old versions with channels and doing an upgrade to some other newer arbitrary version
10:56 < thorie> but i'd imagine maybe that's not worth it until c-lightning 1.0 is released
10:58 < thorie> it is a huge hassle to close all my channels
10:58 < thorie> i wish there was a `lightning-cli closeall`
10:59 < thorie> having to close hundreds of channels manually :(
10:59 < thorie> looks like i'm losing money whenever i close channels too, due to some kind of closing fee
10:59 < thorie> which means upgrades are going to cost money
11:01 < thorie> i might have to write up a script to close all of these channels
11:03 < thorie> damn it's not working
11:03 < thorie> i think i need to force close all my channels
11:04 <@cdecker> It literally is a list of DB SQL statements that get applied incrementally
11:04 <@cdecker> The DB version is the number of migrations that were applied
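Concretely, assuming the default sqlite3 database and that the schema keeps its version in a `version` table, something like this shows where you are before and after an upgrade:

  # number of migrations applied so far, i.e. the current DB version
  sqlite3 ~/.lightning/lightningd.sqlite3 'SELECT version FROM version;'
  # on first start after an upgrade, lightningd logs the migration, e.g.:
  #   Updating database from version 88 to 89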
11:05 < thorie> but i think i had a bunch of channels opened with nodes that no longer exist
11:05 < thorie> i was running a very old version of c-lightning, then i backed up the .lightning folder, and stopped my node for about a year
11:05 < thorie> then i restarted my node 1 year later with the latest master
11:06 < thorie> and nothing worked, the old channels were in some kind of error state
11:06 < thorie> what happens to the funds in my channel if my node is down for 1 year?
11:08 < thorie> i'm getting a status: ONCHAIN:All outputs resolved: waiting 99 more blocks before forgetting channel
11:08 -!- spaced0ut [~spaced0ut@unaffiliated/spaced0ut] has quit [Quit: Leaving]
11:08 < thorie> do I need to wait 99 blocks before upgrading to master?
11:09 < thorie> alright i've closed about 80/200 channels now
11:09 < thorie> really tough to lose all this capacity
11:10 < thorie> a lot of the close commands failed, i think due to some error on negotiating the closing fee
11:11 <@cdecker> Sorry for the slow responses, cooking dinner on the side
11:12 < thorie> it's fine i'm making slow progress
11:13 <@cdecker> Yes, onchaind waiting means that the channel will be forgotten after 99 blocks, you should have your funds in `listfunds` :-)
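A rough sanity check for that, assuming `listfunds` outputs still report their `value` in satoshi (as they did at the time):

  # total on-chain funds lightningd knows about, in satoshi
  lightning-cli listfunds | jq '[.outputs[].value] | add'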
11:13 < thorie> what about the close commands that timed out? are those funds lost forever?
11:13 <@cdecker> Waiting for the channels to disappear should not be necessary
11:14 <@cdecker> How do those timed-out channels look in `listpeers`?
11:15 < thorie>  "state": "CLOSINGD_SIGEXCHANGE",
11:15 < thorie>           "status": [
11:15 < thorie>             "CHANNELD_NORMAL:Reconnected, and reestablished.",
11:15 < thorie>             "CLOSINGD_SIGEXCHANGE:Negotiating closing fee between 183 and 3005 satoshi (ideal 2594)",
11:15 < thorie>             "CLOSINGD_SIGEXCHANGE:Waiting for another closing fee offer: ours was 2535 satoshi, theirs was 2532 satoshi,"
11:15 < thorie>           ],
11:15 < thorie> not sure what to do here
11:16 <@cdecker> You should be able to force close them using `lightning-cli close [node_id] true`
11:16 <@cdecker> If you have a lot of channels you can do that for all channels in a for-loop
11:17 < thorie> yeah almost done closing them all
11:17 < thorie> 30-40 more left
11:18 <@cdecker> for i in $(lightning-cli listpeers | jq -r '.peers[] | select(.channels[].state == "CLOSINGD_SIGEXCHANGE") | .id'); do lightning-cli close $i true; done
11:19 <@cdecker> That's a bit of a mouthful (worse than I had anticipated), but by replacing the select condition you can iterate over peers with channels in any state you'd like
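And for the `closeall` thorie wished for, an untested variant of the same loop that force-closes every channel not already settling on-chain (the state names are the ones `listpeers` reports, e.g. ONCHAIN and AWAITING_UNILATERAL):

  for i in $(lightning-cli listpeers | \
      jq -r '.peers[] | select(.channels[].state | . != "ONCHAIN" and . != "AWAITING_UNILATERAL") | .id'); do
    lightning-cli close $i true    # true = allow a unilateral close if negotiation times out
  done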
11:20 <@cdecker> Which github issue was yours by the way?
11:21 < thorie> this one https://github.com/ElementsProject/lightning/issues/2003
11:21 <@cdecker> Thanks
11:22 <@cdecker> I mixed up the issues and thought you had reported a different one; that's where the downgrade confusion came from
11:22 < thorie> No problem
11:23 < thorie> i can't seem to close channels via short channel id
11:23 < thorie> { "code" : -32602, "message" : "Short channel ID not found: '548394:1599:0'" }
11:23 <@cdecker> We got better at handling failed funding transactions recently, so that should have improved a bit
11:24 < thorie> $ lc listfunds | grep 548394 -a3
11:24 < thorie>   "channels": [
11:24 < thorie>     {
11:24 < thorie>       "peer_id": "0217890e3aad8d35bc054f43acc00084b25229ecff0ab68debd82883ad65ee8266",
11:24 < thorie>       "short_channel_id": "548394:1599:0",
11:24 < thorie>       "channel_sat": 78000,
11:24 <@cdecker> Oh, the short_channel_id format changed in the spec recently (I know that's annoying)
11:24 < thorie>       "channel_total_sat": 78000,
11:24 < thorie>       "funding_txid": "5b02e8400b3e89d239cc438f6bb094060abecdde4ef3e3641117aced1e430de9"
11:24 < thorie>     },
11:24 < thorie> how do i close this channel? is there a way to get the channel id?
11:24 <@cdecker> You can use the `peer_id` to close
11:24 <@cdecker> `lightning-cli close 0217890e3aad8d35bc054f43acc00084b25229ecff0ab68debd82883ad65ee8266` should work
11:25 <@cdecker> (maybe with force=true, if you're not connected)
11:25 < thorie> thanks
11:25 < thorie> peer_id worked
11:25 < thorie> hmm
11:25 <@cdecker> Eggcellent :-)
11:25 < thorie> actually it didn't work, the peer_id still shows the channel in listfunds
11:26 < thorie> is it because it's waiting 99 blocks?
11:26 < thorie>  "ONCHAIN:All outputs resolved: waiting 95 more blocks before forgetting channel"
11:26 <@cdecker> Does the channel still show up as `CHANNELD_NORMAL` in listpeers?
11:26 < thorie> shows as "ONCHAIN" state
11:26 <@cdecker> Oh yeah, the channels stick around until they are forgotten
11:26 < thorie> hmm, how do i know if i'm done closing all my channels?
11:27 <@cdecker> It's annoying since we count funds twice (both in outputs and in channels); we need to do something more clever
11:27 <@cdecker> You can check `listpeers`: anything that shows up as not `ONCHAIND` probably still needs closing
11:27 <@cdecker> (ONCHAIND or AWAITING_UNILATERAL that is)
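A quick way to see what's left, as a sketch (note that `listpeers` reports the settled state as ONCHAIN):

  # one line per channel: peer id and state; anything not ONCHAIN or
  # AWAITING_UNILATERAL probably still needs a (force) close
  lightning-cli listpeers | jq -r '.peers[] | "\(.id) \(.channels[].state)"'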
11:28 < thorie> i have one that is state: CHANNELD_NORMAL:Attempting to reconnect, and it's connected: false
11:28 < thorie> would a force close be safe in this situation? and would i have to wait a long time?
11:29 <@cdecker> Yep, force close is always safe (but it might cost you a bit more in fees)
11:29 < thorie> what's the difference between "CLOSINGD_COMPLETE" and "ONCHAIN"?
11:29 <@cdecker> iirc force close will first attempt a collaborative close for a few minutes, so it might take a while before it goes unilateral
11:30 <@cdecker> CLOSINGD_COMPLETE means that the closing transaction was broadcast, but not yet seen on-chain
11:30 < thorie> ah ok, that makes sense
11:30 < thorie> just need to wait for those i guess
11:31 < thorie> do you think i should run one independent c-lightning node per channel?
11:31 < thorie> that way, if upgrading fails, worst case i will only lose the funds in one channel 
11:32 <@cdecker> Well, if you do that you're not going to route payments and you're going to be a leaf node in the network
11:32 < thorie> hmm i see
11:32 <@cdecker> Also you'd be splitting the amount you can send yourself
11:32 < thorie> i have no idea how to handle upgrades
11:33 < thorie> if there was a way to make backups, that would be nice
11:33 <@cdecker> Yeah, working on those ;-)
11:33 <@cdecker> Gotta go, but I'll read up when I come back
11:34 <@cdecker> Keep us posted how things go along ^^
11:34 < thorie> no problem, i think after this closing i will be ready to upgrade
11:38 < thorie> i hit ctrl-c during a close command, and now rpc is stuck/unresponsive - can't even do `help` command anymore
11:38 < thorie> kinda weird 
11:39 < thorie> and lightningd is now at 100% cpu
11:39 < thorie> i guess i have to be careful with ctrl-c :P
12:30 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has joined #c-lightning
12:46 < thorie> 2019-02-08T20:46:16.447Z lightningd(26590): Updating database from version 88 to 89
13:03 -!- Amperture [~amp@24.136.5.183] has joined #c-lightning
13:04 -!- Amperture [~amp@24.136.5.183] has quit [Remote host closed the connection]
13:04 -!- Amperture [~amp@24.136.5.183] has joined #c-lightning
13:12 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Ping timeout: 244 seconds]
13:19 -!- spinza [~spin@155.93.246.187] has quit [Ping timeout: 250 seconds]
13:25 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has joined #c-lightning
13:36 < thorie> "version": "v0.6.3rc1-151-gd413fc7",
13:39 < thorie> trying to create a channel, what does this mean? { "code" : -1, "message" : "They sent error channel 324ef3e3d43a4586ac411c5c22015dbaaf48946fc36d0d7b85e95a931d403080: maxValueInFlight too large: 18446744073709551615 mSAT, max is 300000000 mSAT" }
13:53 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
14:43 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
14:59 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
15:13 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
15:31 < EagleTM> thorie: there was a bug in lnd in one of the release candidates for 0.5.2 which is related to maxValueInFlight. It's fixed now but there might be some installations still running the buggy one.
15:36 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Ping timeout: 250 seconds]
15:38 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has joined #c-lightning
15:51 < molz> EagleTM, it's not a bug: LND has a stricter rule that maxValueInFlight can't be larger than the channel capacity, while clightning allows it to be larger than the channel capacity, so there was a conflict, but LND compromised and changed this
15:51 < molz> actually LND follows the protocol rule regarding this issue but clightning does not
15:55 < EagleTM> molz: i just didn't comprehend why it happened on opening a channel, regardless of the size of the channel being opened. that looked like a bug to me
15:57 < molz> because LND put in a check for this; the PR was merged a few weeks ago, so if you updated to a master that had that PR you'd hit this, but that PR has since been reverted
16:07 < EagleTM> yes, it's fine for me now
16:51 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
16:51 -!- ghost43 [~daer@gateway/tor-sasl/daer] has quit [Remote host closed the connection]
16:51 -!- ghost43 [~daer@gateway/tor-sasl/daer] has joined #c-lightning
17:04 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Ping timeout: 272 seconds]
17:16 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
17:46 -!- rh0nj [~rh0nj@88.99.167.175] has quit [Remote host closed the connection]
17:47 -!- rh0nj [~rh0nj@88.99.167.175] has joined #c-lightning
18:27 -!- StopAndDecrypt [~StopAndDe@unaffiliated/stopanddecrypt] has quit [Remote host closed the connection]
18:28 -!- StopAndDecrypt [~StopAndDe@unaffiliated/stopanddecrypt] has joined #c-lightning
18:54 -!- StopAndDecrypt [~StopAndDe@unaffiliated/stopanddecrypt] has quit [Ping timeout: 268 seconds]
19:22 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
19:30 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Quit: Leaving.]
19:31 -!- m-schmoock [~will@schmoock.net] has quit [Remote host closed the connection]
20:03 -!- Eagle[TM] [~EagleTM@unaffiliated/eagletm] has joined #c-lightning
20:05 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has quit [Ping timeout: 268 seconds]
21:05 -!- achow101 [~achow101@unaffiliated/achow101] has quit [Ping timeout: 245 seconds]
21:08 -!- achow101 [~achow101@unaffiliated/achow101] has joined #c-lightning
21:12 -!- Kostenko [~Kostenko@185.183.106.228] has quit [Ping timeout: 245 seconds]
21:27 -!- ghost43 [~daer@gateway/tor-sasl/daer] has quit [Remote host closed the connection]
21:27 -!- ghost43 [~daer@gateway/tor-sasl/daer] has joined #c-lightning
21:27 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-evztbxqdurfwpjrm] has left #c-lightning []
21:28 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-evztbxqdurfwpjrm] has joined #c-lightning
23:56 -!- deusexbeer [~deusexbee@093-092-176-118-dynamic-pool-adsl.wbt.ru] has quit [Quit: Konversation terminated!]
--- Log closed Sat Feb 09 00:00:45 2019