--- Log opened Mon Feb 24 00:00:03 2020
01:02 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has joined #c-lightning
01:11 -!- lxer [~lx@ip5f5bf497.dynamic.kabel-deutschland.de] has joined #c-lightning
03:02 < vasild_> blockstream_bot: hello
03:03 -!- vasild_ is now known as vasild
03:04 < vasild> Some interesting questions above from "< blockstream_bot> [Tim Ho, Blockstream]", how to reply?
03:36 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has quit [Ping timeout: 240 seconds]
03:39 < darosior> "Are there any issues if my node changes its IP address? What happens to the channels?" => No, you just need to change the announced addresses. No issue with the channels.
03:40 < darosior> "Also, how does the network respond if I have two hosts with the same pubkey and different IPs?" => It'd mean you duplicated your node and two instances run separately. *Don't do this*: your databases will diverge and you'll lose data.
03:41 < darosior> "Also, what is usually the bottleneck for creating invoices? It used to be very fast, but recently making a request to create a new invoice is taking 10-15 seconds." => I don't experience that, anyone else?
03:42 < darosior> "Does it matter that the node has generated millions of invoices?" => In this case you really want to use autoclean ^^
03:44 < vasild> So just typing a reply in this channel suffices for "Tim Ho" to receive it?
03:50 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has joined #c-lightning
04:08 < fiatjaf1> about the invoice slowness, I've experienced that when I had a very slow/unresponsive bitcoind backing lightningd
04:08 < fiatjaf1> vasild: it should
04:49 -!- rafalcpp [~racalcppp@ip-178-211.ists.pl] has quit [Ping timeout: 260 seconds]
04:58 -!- rafalcpp [~racalcppp@ip-178-211.ists.pl] has joined #c-lightning
05:07 -!- Amperture [~amp@65.79.129.113] has joined #c-lightning
05:17 < blockstream_bot> [Tim Ho, Blockstream] Thanks for the answers :)
05:25 -!- Amperture [~amp@65.79.129.113] has quit [Remote host closed the connection]
05:26 -!- Amperture [~amp@65.79.129.113] has joined #c-lightning
05:34 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zkjjaoqsjfattawp] has left #c-lightning []
05:34 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zkjjaoqsjfattawp] has joined #c-lightning
06:44 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has quit [Quit: = ""]
07:49 -!- mdunnio [~mdunnio@38.126.31.226] has joined #c-lightning
09:09 < blockstream_bot> [Tim Ho, Blockstream] I'm on 0.7.2.1; is it safe to directly upgrade to v0.8.0 without closing my channels?
10:54 < blockstream_bot> [Tim Ho, Blockstream] Hmm, upgrading is causing a lot of issues
10:54 < blockstream_bot> [Tim Ho, Blockstream] ```2020-02-24T18:53:43.510Z **BROKEN** database: Accessing a null column 12 in query SELECT id, channel_htlc_id, msatoshi, cltv_expiry, hstate, payment_hash, payment_key, routing_onion, failuremsg, malformed_onion, origin_htlc, shared_secret, received_time FROM channel_htlcs WHERE direction= ? AND channel_id= ? AND hstate != ?```
11:00 <@cdecker> Tim, yes, there is a quick fix in `master` but we didn't catch it in time, I think
11:01 <@cdecker> The error should be OK to ignore, however
11:01 <@cdecker> It's only 6 weeks to the next release anyway :-)
11:02 < blockstream_bot> [Tim Ho, Blockstream] I also get this error:
11:02 < blockstream_bot> ```2020-02-24T18:45:00.158Z **BROKEN** connectd: STATUS_FAIL_INTERNAL_ERROR: Failed to bind on 2 socket: Address already in use```
11:03 < blockstream_bot> [Tim Ho, Blockstream] I'm starting lightningd with `--addr=0.0.0.0:9736` (yes, not the default port) and there's no other app/service listening on 9736 at this time.
11:03 < blockstream_bot> [Tim Ho, Blockstream] So I'm not sure what "address already in use" it is complaining about.
11:05 < blockstream_bot> [Tim Ho, Blockstream] Also seeing:
11:05 < blockstream_bot> ```2020-02-24T18:59:31.045Z DEBUG gossipd: seeker: state = STARTING_UP New seeker
2020-02-24T18:59:31.053Z UNUSUAL plugin-bcli: Could not connect to '/home/thorie/.lightning/lightning-rpc': Connection refused```
11:07 < blockstream_bot> [Tim Ho, Blockstream] Also seeing that it's not completely ignoring 9736:
11:07 < blockstream_bot> ```2020-02-24T19:05:49.127Z DEBUG lightningd: testing /home/thorie/work/lightning/lightningd/lightning_openingd
2020-02-24T19:05:49.567Z DEBUG hsmd: pid 12998, msgfd 22
2020-02-24T19:05:49.597Z DEBUG connectd: pid 12999, msgfd 25
2020-02-24T19:05:49.597Z DEBUG hsmd: Client: Received message 11 from client
2020-02-24T19:05:49.597Z DEBUG hsmd: Client: Received message 9 from client
2020-02-24T19:05:49.597Z DEBUG hsmd: new_client: 0
2020-02-24T19:05:49.642Z DEBUG connectd: Broken DNS resolver detected, will check for dummy replies
2020-02-24T19:05:49.642Z DEBUG connectd: Created IPv4 listener on port 9736
2020-02-24T19:05:49.642Z DEBUG connectd: Created IPv4 listener on port 9736
2020-02-24T19:05:49.642Z DEBUG connectd: REPLY WIRE_CONNECTCTL_INIT_REPLY with 0 fds
2020-02-24T19:05:49.644Z DEBUG gossipd: pid 13000, msgfd 24```
11:07 < blockstream_bot> [Tim Ho, Blockstream] And also, a strange broken error about gossip_store_compact: bad version, right there.
11:10 < blockstream_bot> [Tim Ho, Blockstream] Is there any chance it's using my old IP address/port in the channel data? Because 9735 is in use (not by c-lightning, another app), that's why I'm starting c-lightning with 0.0.0.0:9736 instead. But the release of c-lightning I'm upgrading is from a server that was running it on 9735.
11:11 < blockstream_bot> [Tim Ho, Blockstream] I stopped my node for now. I'm not sure what to do.
11:12 < blockstream_bot> [Tim Ho, Blockstream] Is it safe to roll back to 0.7.2.1 on the original server (running on port 9735)? Because 0.8.0 on the new server (port 9736) has too many errors and crash dumps.
11:12 <@cdecker> No, it's not safe to roll back since DB changes were applied
11:13 < blockstream_bot> [Tim Ho, Blockstream] The original server has the old DB.
11:13 <@cdecker> Don't recover an old DB, that might cause you to lose funds
11:13 < blockstream_bot> [Tim Ho, Blockstream] I basically copied the .lightning dir from the old server (that has 0.7.2.1) to the new server (that has 0.8.0).
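[Context note: the danger cdecker describes is the channel state in lightningd.sqlite3, not the binaries; once the 0.8.0 migrations have run, the pre-upgrade copy is stale, and *running* a node from stale state can look like cheating and trigger a penalty close. A minimal pre-upgrade backup sketch, assuming the default ~/.lightning layout and a hypothetical ~/backups directory; this illustrates what would have helped here, it is not an official procedure:]
```
# Stop the node cleanly so the DB is quiescent before copying anything.
lightning-cli stop
# Keep a cold copy of the channel DB and the seed. Restore these only for
# inspection, never to run from, once the upgraded node has been online.
mkdir -p ~/backups
cp ~/.lightning/lightningd.sqlite3 ~/backups/lightningd.sqlite3.pre-upgrade
cp ~/.lightning/hsm_secret ~/backups/hsm_secret
```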
11:14 <@cdecker> Yeah, whatever you do, don't start the 0.7.2.1 one, otherwise you might inadvertently cheat and lose funds
11:14 < blockstream_bot> [Tim Ho, Blockstream] Ok
11:16 < blockstream_bot> [Tim Ho, Blockstream] I should have closed all my channels before upgrading :(
11:21 < blockstream_bot> [Tim Ho, Blockstream] Should I try stopping the other service on 9735, then starting this one on 9735 to fix that address issue?
11:35 < blockstream_bot> [Tim Ho, Blockstream] Also my `gossip_store` file has disappeared, probably when I tried to start the node
11:36 < blockstream_bot> [Tim Ho, Blockstream] Is that normal?
11:37 < blockstream_bot> [Tim Ho, Blockstream] oh wait, that's weird... it seems to be in a folder called ~/.lightning/bitcoin now, along with some other files
11:39 < blockstream_bot> [Tim Ho, Blockstream] so now I have ~/.lightning/lightning-rpc and ~/.lightning/bitcoin/lightning-rpc? what is happening
11:39 < blockstream_bot> [Tim Ho, Blockstream] and for some reason, ~/.lightning/bitcoin/gossip_store is 2 bytes
11:40 -!- vasild_ [~vd@gateway/tor-sasl/vasild] has joined #c-lightning
11:40 < blockstream_bot> [Tim Ho, Blockstream] but before starting the node, it was much larger
11:40 < blockstream_bot> [Tim Ho, Blockstream] ```-rw------- 1 thorie thorie 672940136 Feb 24 10:17 gossip_store```
11:40 < blockstream_bot> [Tim Ho, Blockstream] 642 MiB
11:42 -!- rafalcpp [~racalcppp@ip-178-211.ists.pl] has quit [Ping timeout: 255 seconds]
11:43 -!- vasild [~vd@gateway/tor-sasl/vasild] has quit [Ping timeout: 240 seconds]
11:45 < blockstream_bot> [Tim Ho, Blockstream] should --lightning-dir be ~/.lightning or ~/.lightning/bitcoin?
11:55 < blockstream_bot> [Tim Ho, Blockstream] Any ideas? My site is still down until I can fix this :(
11:59 <@cdecker> Right, the home directory for lightningd was moved into a subdirectory that is network-specific
11:59 <@cdecker> You should always talk to `.lightning/bitcoin/lightning-rpc` instead of the old parent directory
12:00 <@cdecker> You can also identify what process is running on port 9735 using `sudo netstat -anp | grep 9735`
12:06 <@cdecker> FWIW gossip_store is just a cache of information and will rebuild after a while, so you can just delete it
12:06 <@cdecker> (and it will get truncated to 1 byte if we have any error reading it, btw)
12:09 <@cdecker> For more information about the default directory change see the changelog: https://github.com/ElementsProject/lightning/blob/master/CHANGELOG.md#changed-1 :-)
12:11 < blockstream_bot> [Tim Ho, Blockstream] I have another app running on 9735. That is why I am running lightning on --addr=0.0.0.0:9736 <---
12:11 < blockstream_bot> [Tim Ho, Blockstream] There is nothing on 9736.
12:13 < blockstream_bot> [Tim Ho, Blockstream] I mean, otherwise wouldn't it not show this in my logs?
12:13 < blockstream_bot> ```2020-02-24T19:46:19.353Z DEBUG connectd: Created IPv4 listener on port 9736```
12:22 < blockstream_bot> [Tim Ho, Blockstream] Any ideas?
12:30 < blockstream_bot> [Tim Ho, Blockstream] Is there a way to determine the listening port from a file descriptor?
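[On that last question: on Linux you can map a process's file descriptor to its socket, and from there to its port, via /proc. A rough sketch; the PID and FD values are hypothetical examples taken from the connectd log lines above, and `ss` may need sudo to show other users' processes:]
```
PID=12999   # e.g. connectd's pid from the DEBUG log
FD=1        # the fd number you are probing
# The fd symlink reads like "socket:[123456]"; strip it down to the inode.
INODE=$(readlink "/proc/$PID/fd/$FD" | tr -dc '0-9')
# ss -e prints each socket's inode, so matching on it recovers the
# local address:port that fd is bound to.
ss -tnlpe | grep "ino:$INODE"
```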
12:37 < blockstream_bot> [Tim Ho, Blockstream] I added some debugging lines in the code with my very limited knowledge of C:
12:37 < blockstream_bot> ```2020-02-24T20:36:18.600Z DEBUG connectd: Created IPv4 listener on port 9736
2020-02-24T20:36:18.601Z DEBUG connectd: Now listen_fds has 0
2020-02-24T20:36:18.601Z DEBUG connectd: Created IPv4 listener on port 9736
2020-02-24T20:36:18.601Z DEBUG connectd: Now listen_fds has 0
2020-02-24T20:36:18.601Z DEBUG connectd: Now listen_fds has 1
2020-02-24T20:36:18.602Z DEBUG connectd: Created IPv4 listener on port 9736
2020-02-24T20:36:18.602Z DEBUG connectd: Now listen_fds has 0
2020-02-24T20:36:18.602Z DEBUG connectd: Now listen_fds has 1
2020-02-24T20:36:18.603Z DEBUG connectd: Now listen_fds has 2
2020-02-24T20:36:18.603Z DEBUG connectd: Created IPv4 listener on port 9736
2020-02-24T20:36:18.603Z DEBUG connectd: Now listen_fds has 0
2020-02-24T20:36:18.603Z DEBUG connectd: Now listen_fds has 1
2020-02-24T20:36:18.604Z DEBUG connectd: Now listen_fds has 2```
12:37 < blockstream_bot> [Tim Ho, Blockstream] So after the 4th listener, listen_fds has 4 elements.
12:37 < blockstream_bot> [Tim Ho, Blockstream] ```2020-02-24T20:36:20.566Z DEBUG connectd: Failure is on fd index = 1
2020-02-24T20:36:20.566Z **BROKEN** connectd: Failed to listen on socket: Address already in use (version v0.8.1-modded)```
12:38 < blockstream_bot> [Tim Ho, Blockstream] Then when it crashes, the failure is on listen_fds[1]... interestingly, not on 0. Which, actually, is not surprising, since 9736 is *already* taken by 0.
12:38 < blockstream_bot> [Tim Ho, Blockstream] So then, why does it create 4 listeners, all with the same port, 9736?
12:44 < willcl_ark> have you tried another port, to see if it _is_ some issue with the port being in use (somehow)?
12:45 < blockstream_bot> [Tim Ho, Blockstream] let me try 9737
12:47 < blockstream_bot> [Tim Ho, Blockstream] Same error with 9737
12:47 < blockstream_bot> [Tim Ho, Blockstream] I don't understand why it tries to add 4 listeners though.
12:48 < willcl_ark> hmmm. are you able to paste a full (with any sensitive info redacted) log from startup to e.g. https://bpaste.net/+text ?
12:48 < blockstream_bot> [Tim Ho, Blockstream] I pasted it here: https://github.com/ElementsProject/lightning/issues/3545
12:48 <@cdecker> How many `--addr` or `--bind-addr` config options do you have on the command line or the config file?
12:48 <@cdecker> If it is specified multiple times we might actually try multiple times
12:49 < willcl_ark> I also wonder, reading the above: I have seen "rpc connection refused" when I try to use a user from the wrong usergroup to access it; I wonder if the upgrade/migration has left incorrect user file permissions or anything
12:49 < blockstream_bot> [Tim Ho, Blockstream] I have 4, but they all have the same value.
12:49 < blockstream_bot> ```addr=0.0.0.0:9736
bind-addr=0.0.0.0:9736```
12:49 < willcl_ark> Why don't you have 1 (in _either_ config or launch command) and not 4?
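[The exchange that follows confirms the cause: each repeated addr/bind-addr line creates another listener on the same port. For reference, a minimal sketch of the working config, using the non-default port from this discussion:]
```
# ~/.lightning/config -- one addr line is enough: --addr both binds
# and announces. Repeating addr/bind-addr with the same value makes
# connectd open duplicate listeners, and the second bind then fails
# with "Address already in use".
addr=0.0.0.0:9736
```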
12:50 <@cdecker> Yes, just reproduced it here, don't specify them multiple times :-)
12:50 < willcl_ark> --addr does both listen and announce, so you don't need addr and bind-addr AFAIK
12:50 < blockstream_bot> [Tim Ho, Blockstream] ```/* Add addresses we've explicitly been told to *first*: implicit
 * addresses will be discarded then if we have multiple. */```
12:51 < blockstream_bot> [Tim Ho, Blockstream] I thought connectd/connectd.c would discard any multiples anyway?
12:51 < willcl_ark> you seem to have found a bug :)
12:51 < blockstream_bot> [Tim Ho, Blockstream] I mean, I didn't think it would hurt.
12:52 < blockstream_bot> [Tim Ho, Blockstream] Okay, I'm not going crazy then, let me change it to ONE specification
12:52 <@cdecker> Well, that's the neat thing about programs: they do exactly what you tell them to do. Then there's the downside that they do EXACTLY what you tell them to do xD
12:53 <@cdecker> Ran into similar issues myself quite a number of times :-)
12:53 < blockstream_bot> [Tim Ho, Blockstream] Now it's running
12:54 < blockstream_bot> [Tim Ho, Blockstream] But I'm seeing a ton of failures like this:
12:54 < blockstream_bot> ```2020-02-24T20:53:36.089Z DEBUG 02e7c42ae2952d7a71398e23535b53ffc60deb269acbc7c10307e6b797b91b1e79-connectd: Failed connected out: 93.123.80.47:9735: Cryptographic handshake: peer closed connection (wrong key?).```
12:54 < blockstream_bot> [Tim Ho, Blockstream] Is this normal?
12:56 < willcl_ark> Do they reconnect OK after that? Retry should happen a few seconds later, I think
12:57 < blockstream_bot> [Tim Ho, Blockstream] Nope, it's permanently failing. The connection retry interval is getting longer and longer.
12:58 < blockstream_bot> [Tim Ho, Blockstream] 2020-02-24T20:57:29.300Z DEBUG 03a503d8e30f2ff407096d235b5db63b4fcf3f89a653acb6f43d3fc492a7674019-chan#30171: Will try reconnect in 256 seconds
12:58 < blockstream_bot> [Tim Ho, Blockstream] Let me make sure I can connect to the host by other means.
12:58 < willcl_ark> perhaps that peer is offline, although you said there were loads of them...
12:58 <@cdecker> That usually means that the remote node changed its key, and the handshake fails because we were expecting the old node ID
12:59 <@cdecker> How many are we talking here?
12:59 < blockstream_bot> [Tim Ho, Blockstream] Not sure, it's difficult to count. But it seems like all of them? I have about 50 channels.
13:00 < lxer> It would be nice to have a list of error messages and what they actually mean. They usually don't make any sense to me, tbh.
13:00 < blockstream_bot> [Tim Ho, Blockstream] I can ping the hosts and get a response. But connections are all timing out according to lightningd logs.
13:01 < blockstream_bot> [Tim Ho, Blockstream] ```2020-02-24T21:00:44.096Z DEBUG 03bc9337c7a28bb784d67742ebedd30a93bacdf7e4ca16436ef3798000242b2251-chan#29920: Will try reconnect in 300 seconds
2020-02-24T21:00:44.096Z DEBUG 03bc9337c7a28bb784d67742ebedd30a93bacdf7e4ca16436ef3798000242b2251-connectd: Failed connected out: 46.229.165.141:41832: Connection establishment: Connection refused.
2020-02-24T21:00:44.449Z DEBUG lightningd: Adding block 617273: 00000000000000000006018fb5172e64adb23ffd8fb3e274ba85835c2a975284
^C
thorie@debian:~/.lightning$ ping 46.229.165.141
PING 46.229.165.141 (46.229.165.141) 56(84) bytes of data.
64 bytes from 46.229.165.141: icmp_seq=1 ttl=50 time=72.4 ms
^C
--- 46.229.165.141 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 72.491/72.491/72.491/0.000 ms```
13:02 < blockstream_bot> [Tim Ho, Blockstream] I don't know if `41832` is the port. If so, it's bizarre. Shouldn't it be 9735 or something, usually? And I can't connect to that port.
13:02 < blockstream_bot> [Tim Ho, Blockstream] ```thorie@debian:~/.lightning$ nc -vz 46.229.165.141 41832
46.229.165.141: inverse host lookup failed: Unknown host
(UNKNOWN) [46.229.165.141] 41832 (?) : Connection refused```
13:02 < blockstream_bot> [Tim Ho, Blockstream] Are these really the remote port numbers? They look like internal port numbers.
13:03 < willcl_ark> yeah, weird! you could try a manual connection to one? e.g. for that pubkey in your log above: 03bc9337c7a28bb784d67742ebedd30a93bacdf7e4ca16436ef3798000242b2251@46.229.165.141:9735
13:04 < willcl_ark> but CL should really not be trying to reconnect to the ephemeral outbound port they used to connect to you originally... I don't think
13:06 < willcl_ark> for me, `telnet 46.229.165.141 9735` opens a connection, so that node is def online and listening on 9735
13:06 < blockstream_bot> [Tim Ho, Blockstream] ```13:05 $ ./lightning-cli connect 03bc9337c7a28bb784d67742ebedd30a93bacdf7e4ca16436ef3798000242b2251@46.229.165.141:9735
{
  "id": "03bc9337c7a28bb784d67742ebedd30a93bacdf7e4ca16436ef3798000242b2251"
}```
13:06 < blockstream_bot> [Tim Ho, Blockstream] works there
13:06 < willcl_ark> sooooo, the issue seems to be that it's not trying to reconnect to their listening port, perhaps because your gossip store got corrupted?
13:07 < blockstream_bot> [Tim Ho, Blockstream] I upgraded my gossip store from 0.7.2.1 to 0.8.1
13:07 < blockstream_bot> [Tim Ho, Blockstream] Well, that "upgrade" wiped it out to zero.
13:07 < willcl_ark> I wonder: if you manually reconnect to a few, and let your gossip store get repopulated, then restart CL, it might be able to reconnect to the rest itself?
13:07 < blockstream_bot> [Tim Ho, Blockstream] But I see that these weird port numbers are stored in the sqlite3 file.
13:07 < willcl_ark> (btw, I'm just hazarding a guess here)
13:08 < willcl_ark> I suppose that's "last known connection" details
13:08 < blockstream_bot> [Tim Ho, Blockstream] Yeah, those details are not useful, since I not only upgraded but migrated to a new server.
13:09 < blockstream_bot> [Tim Ho, Blockstream] Probably not even useful if you just restarted the daemon either, since wouldn't all the connections be lost?
13:10 < blockstream_bot> [Tim Ho, Blockstream] ```{
  "id": "03bc9337c7a28bb784d67742ebedd30a93bacdf7e4ca16436ef3798000242b2251",
  "connected": true,
  "netaddr": [
    "46.229.165.141:9735"
  ],```
13:11 < blockstream_bot> [Tim Ho, Blockstream] So it was connected false; now, after the *manual* connect, it says connected true and funding looks good again.
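[Tim mentions below that he scripted the reconnects. A rough sketch of how that might look, assuming `jq` is installed and that gossip has re-learned the peers' announced addresses: `connect` with a bare node ID makes lightningd look the address up from gossip, so peers that never announced an address will still fail:]
```
# Try to reconnect every currently-disconnected peer by node ID,
# ignoring individual failures so the loop keeps going.
for id in $(lightning-cli listpeers | jq -r '.peers[] | select(.connected == false) | .id'); do
    lightning-cli connect "$id" || true
done
```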
13:12 < blockstream_bot> [Tim Ho, Blockstream] ```✔ ~/work/lightning/cli [v0.8.1|✚ 1…2]
13:12 $ ./lightning-cli listpeers | grep 'connected": false' | wc
     38      76    1102
✔ ~/work/lightning/cli [v0.8.1|✚ 1…2]
13:12 $ ./lightning-cli listpeers | grep 'connected": true' | wc
      9      18     252```
13:12 < willcl_ark> well, I am at the limit of my understanding, but as cdecker said above, don't be tempted to touch your old node now, or you risk loss of funds via an LN penalty close. If it were me, I would look up the pubkey@ip:port of a handful on 1ml or similar, re-connect, and wait for gossip to be rebuilt. Then restart CL and see if luck is on your side :)
13:12 < blockstream_bot> [Tim Ho, Blockstream] that's a majority of connected false
13:13 < willcl_ark> the `node_announcement` gossip message should contain each node's listening address (if advertised). so, when you get those for the errant nodes, reconnect might be able to take place after a restart
13:14 < willcl_ark> https://github.com/lightningnetwork/lightning-rfc/blob/master/07-routing-gossip.md#the-node_announcement-message
13:14 < blockstream_bot> [Tim Ho, Blockstream] I restarted a couple times, no luck
13:14 < willcl_ark> yeah, but before that you might need to wait for the full gossip store to be rebuilt
13:15 < blockstream_bot> [Tim Ho, Blockstream] Ok
13:15 < willcl_ark> above you said it was ~600 MB, so perhaps wait until it is around that size again, and restart after
13:15 < willcl_ark> or you can manually reconnect to a few, if you have some high-priority ones
13:16 < blockstream_bot> [Tim Ho, Blockstream] I'm manually connecting to all of them right now, with a script
13:16 < willcl_ark> that seems like a fine solution also
13:17 < blockstream_bot> [Tim Ho, Blockstream] Thanks for your help guys, I'll let you know how it goes.
13:18 < willcl_ark> np. It's not usual to have issues like this, in my experience, FWIW
13:19 < blockstream_bot> [Tim Ho, Blockstream] I'm glad to have found bugs, but maybe next time I'll try upgrading a testnet copy first :p
13:21 < willcl_ark> what OS are you on, out of interest?
13:24 < blockstream_bot> [Tim Ho, Blockstream] Linux debian 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 GNU/Linux
13:24 < blockstream_bot> [Tim Ho, Blockstream] stretch
13:24 < blockstream_bot> [Tim Ho, Blockstream] I need to upgrade to buster at some point.
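[On willcl_ark's suggestion to wait for the gossip store to regrow before restarting: a trivial way to keep an eye on it, assuming the post-move network subdirectory path discussed above:]
```
# Re-check the store's size every minute; it was ~640 MiB before the upgrade.
watch -n 60 ls -lh ~/.lightning/bitcoin/gossip_store
```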
14:05 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zkjjaoqsjfattawp] has left #c-lightning []
14:05 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zkjjaoqsjfattawp] has joined #c-lightning
14:09 -!- fiatjaf1 is now known as fiatjaf
14:51 -!- zmnscpxj [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has joined #c-lightning
15:33 -!- lxer [~lx@ip5f5bf497.dynamic.kabel-deutschland.de] has quit [Ping timeout: 258 seconds]
15:41 -!- zmnscpxj [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has quit [Ping timeout: 240 seconds]
15:58 -!- mdunnio [~mdunnio@38.126.31.226] has quit [Remote host closed the connection]
16:00 -!- MasterdonX [~masterdon@42.0.30.151] has quit [Ping timeout: 260 seconds]
16:00 -!- MasterdonX [~masterdon@165.231.253.164] has joined #c-lightning
16:19 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Ping timeout: 252 seconds]
16:22 -!- zmnscpxj [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has joined #c-lightning
17:21 -!- jb55 [~jb55@gateway/tor-sasl/jb55] has quit [Remote host closed the connection]
17:29 -!- jb55 [~jb55@gateway/tor-sasl/jb55] has joined #c-lightning
18:06 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has joined #c-lightning
18:09 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has joined #c-lightning
19:49 -!- Victorsueca [~Victorsue@unaffiliated/victorsueca] has joined #c-lightning
19:52 -!- CubicEarth [~CubicEart@c-67-168-1-172.hsd1.wa.comcast.net] has quit [Ping timeout: 260 seconds]
19:52 -!- Victor_sueca [~Victorsue@unaffiliated/victorsueca] has quit [Ping timeout: 260 seconds]
19:52 -!- rotarydialer_ [~rotarydia@unaffiliated/rotarydialer] has quit [Ping timeout: 260 seconds]
19:52 -!- RonNa [~quassel@60-251-129-61.HINET-IP.hinet.net] has quit [Ping timeout: 260 seconds]
19:52 -!- RonNa [~quassel@60-251-129-61.HINET-IP.hinet.net] has joined #c-lightning
19:53 -!- CubicEarth [~CubicEart@c-67-168-1-172.hsd1.wa.comcast.net] has joined #c-lightning
19:54 -!- rotarydialer [~rotarydia@unaffiliated/rotarydialer] has joined #c-lightning
22:18 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zkjjaoqsjfattawp] has left #c-lightning []
22:19 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zkjjaoqsjfattawp] has joined #c-lightning
23:40 -!- vasild [~vd@gateway/tor-sasl/vasild] has joined #c-lightning
23:43 -!- vasild_ [~vd@gateway/tor-sasl/vasild] has quit [Ping timeout: 240 seconds]
23:57 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has quit [Ping timeout: 240 seconds]
--- Log closed Tue Feb 25 00:00:04 2020