--- Log opened Fri Nov 13 00:00:15 2020
00:07 -!- harrigan [~harrigan@ptr-93-89-242-235.ip.airwire.ie] has joined #c-lightning
00:07 < m-schmoock> Is there a way to set/affect the onchain fees for the settlement transaction that is eventually done when we force-close a channel? (I mean the second TX that comes several hundred blocks after the closing transaction)
00:11 -!- harrigan [~harrigan@ptr-93-89-242-235.ip.airwire.ie] has quit [Read error: Connection reset by peer]
00:17 -!- harrigan [~harrigan@ptr-93-89-242-235.ip.airwire.ie] has joined #c-lightning
01:08 -!- jasan [~j@n.bublina.eu.org] has joined #c-lightning
01:08 -!- dr_orlovsky [~dr-orlovs@31.14.40.19] has quit [Ping timeout: 256 seconds]
01:29 < m-schmoock> niftynei: updated #4198
01:29 < zmnscpxj__> IIRC it is pre-signed at the same feerate as the commitment tx
01:29 < zmnscpxj__> hmmm
01:29 < m-schmoock> zmnscpxj__: must it be the presigned closing fee?
01:30 < zmnscpxj__> hmmmmmm wait
01:30 < m-schmoock> :)
01:30 < zmnscpxj__> do you mean the one that claims after the `to_self_delay`?
01:30 < m-schmoock> yep
01:30 < zmnscpxj__> it should use a "current" rate, lemme check onchaind/onchaind.c
01:31 < zmnscpxj__> It spends using the `delayed_to_us_feerate`
01:31 < zmnscpxj__> which should be seen in the `feerates` command
01:31 < m-schmoock> in this case we might want a feature to preset the feerate for the to_us_delayed tx when we force-close a channel, via RPC arg
01:32 < m-schmoock> I think we currently can't influence this fee, right?
01:32 < zmnscpxj__> currently no.
01:33 < m-schmoock> how about: command=close id [unilateraltimeout] [destination] [fee_negotiation_step] [to_self_feerate] ?
01:34 < zmnscpxj__> close already has a feerate setting, why not use that?
01:34 < zmnscpxj__> hmmm
01:34 < zmnscpxj__> does not yet haha
01:34 < m-schmoock> it just has "fee_negotiation_step"
01:34 < m-schmoock> :D
01:35 < m-schmoock> that's what I was missing
01:35 < zmnscpxj__> possibly
01:35 < m-schmoock> zmnscpxj__: should we open an issue/feature request?
01:35 < zmnscpxj__> okay
01:39 < m-schmoock> k, will check later. no hurry since we have two months' time I guess :D
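[Editor's note: for readers following along, a minimal sketch (not from the chat) of inspecting the relevant feerates and issuing a close over the RPC with pyln-client. The socket path and peer id are placeholders, and the `to_self_feerate` argument m-schmoock proposes above does not exist in the current `close` command; it appears only as a comment.]
```python
# Sketch: look at lightningd's current feerates and close a channel via RPC.
from pyln.client import LightningRpc

rpc = LightningRpc("/home/user/.lightning/bitcoin/lightning-rpc")  # placeholder path

# `feerates` shows the rates lightningd currently uses, including the one
# onchaind applies when sweeping the delayed-to-us output (delayed_to_us_feerate).
print(rpc.call("feerates", {"style": "perkw"}))

peer_id = "03...peer-node-id..."  # placeholder

# Existing close arguments: id, unilateraltimeout, destination, fee_negotiation_step.
# The `to_self_feerate` discussed above would be an additional, hypothetical argument
# here -- it is NOT implemented.
result = rpc.call("close", {
    "id": peer_id,
    "unilateraltimeout": 30,        # seconds before falling back to a unilateral close
    "fee_negotiation_step": "50%",  # existing fee negotiation knob
})
print(result)
```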
01:39 < zmnscpxj__> but in principle, any output that has gotten past the `to_self_delay` does not need to be claimed immediately and then moved to an onchain address in our onchain derivation path
01:39 < zmnscpxj__> we could just keep the funds in that UTXO until later
01:39 < zmnscpxj__> when we want to actually spend it
01:39 < m-schmoock> exactly
01:39 < zmnscpxj__> could prioritize such UTXOs in the UTXO selection
01:39 < zmnscpxj__> drawback is that it is not on our onchain derivation path and thus not recoverable from onchain key recovery
01:39 < zmnscpxj__> but presumably you are using C-Lightning because you want funds in channels and not onchain
01:39 < zmnscpxj__> more complexity in our wallet stuff though, >.<
01:39 < m-schmoock> as a start we could just set feerate urgent/normal/slow by hand and re-broadcast occasionally until it gets through eventually
01:40 < m-schmoock> there's really no point in overpaying the delayed_to_us tx as we can always bump its fee
01:42 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
02:05 -!- kexkey [~kexkey@static-198-54-132-137.cust.tzulo.com] has quit [Ping timeout: 264 seconds]
02:08 -!- vasild_ [~vd@gateway/tor-sasl/vasild] has joined #c-lightning
02:11 -!- vasild [~vd@gateway/tor-sasl/vasild] has quit [Ping timeout: 240 seconds]
02:11 -!- vasild_ is now known as vasild
02:19 -!- ksedgwic [~ksedgwic@157-131-253-103.fiber.dynamic.sonic.net] has quit [Ping timeout: 264 seconds]
02:21 < m-schmoock> cdecker: I retried your routing patches and they work A LOT better
02:22 < m-schmoock> I captured stdout / info / debug in case you want to re-check if all changes are required in that way
02:22 < blockstream_bot> [Tim Ho, Blockstream] Anyone know what this means?
02:22 < blockstream_bot> ```2020-11-13T10:21:31.507Z **BROKEN** database: Accessing a null column 12 in query SELECT id, channel_htlc_id, msatoshi, cltv_expiry, hstate, payment_hash, payment_key, routing_onion, $ailuremsg, malformed_onion, origin_htlc, shared_secret, received_time FROM channel_htlcs WHERE direction= ? AND channel_id= ? AND hstate != ?```
02:22 < m-schmoock> we should discuss if we want to include it in 0.9.2, as the current routing code is really bad for payments > $20 on mainnet
02:23 < blockstream_bot> [Tim Ho, Blockstream] And also:
02:23 < blockstream_bot> ```2020-11-13T10:21:53.461Z **BROKEN** 0385a8b705f137db6944408a06e6e9ef76a29f0cecb25bf537ca9044b66fd29fd5-chan#38497: Peer internal error AWAITING_UNILATERAL: Bad tx b49f3d3bb8c0edb3c38dfd
02:23 < blockstream_bot> 41341bf351646cf0c0ec8ba0a5ceb82a968f7e9818: 01000000000101018c766a4bc047511cb1d2fde2f144016ad8e2117a8a70059a5d680f903815d30000000000ffffffff02acb83a0000000000160014b68b37231420023b3e724
02:23 < blockstream_bot> d246f2bac9f30800d7c60ea00000000000022002068cb1d20fa56af1a83b3ddcfa34d18f5caa78cae9fe581e3a55dfb2d76f7bed802483045022100ef1f1db602da50e5a8677b0e2e7d60bcdeb7e519c75780104b2b2f20f56b11f002
02:23 < blockstream_bot> 2006a0ea6e4e78bb0ebeadd2171a32659a4f24cca05fc3a8b41ac19419c5f0388401210388fdc342563d185b5747d15975d319413c9269561cc3a6fed63fe8f6ed66775800000000
02:23 < blockstream_bot> 2020-11-13T10:21:53.461Z UNUSUAL 0385a8b705f137db6944408a06e6e9ef76a29f0cecb25bf537ca9044b66fd29fd5-chan#38497: Peer permanent failure in AWAITING_UNILATERAL: Internal error```
02:24 < m-schmoock> cdecker: the payment took 5 seconds on the first try for a $150 payment. without the changes I struggle to get even $10 through
02:24 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Quit: Leaving.]
02:26 < m-schmoock> '$ailuremsg' maybe this is related to the recent https://github.com/ElementsProject/lightning/pull/4187 ...
02:28 < jasan> m-schmoock: congratulations!
02:28 < m-schmoock> jasan: for the routing probs? this patch was made by cdecker ;)
02:29 < m-schmoock> anyway, it was time this got addressed :D
02:29 < jasan> all the best
02:29 < jasan> blockstream_bot: what version of lightningd do you run?
02:29 <@cdecker> Oh nice, thanks for checking m-schmoock, this is just the start of the optimizations
02:29 <@cdecker> Need to open a discussion thread on GH
02:30 -!- belcher_ is now known as belcher
02:30 * jasan also remembers trying to pay a bigger invoice which did not work, then settling for $4
02:31 < m-schmoock> cdecker: thx. we need more field testing. this has been a pain for a while (at least for me)
02:31 < jasan> (that's why I have only one sticker from the Blockstream store)
02:31 < m-schmoock> lol
02:32 < m-schmoock> cdecker: I'm not sure what the tradeoff is in your changes and if we need to be that strict
02:32 < m-schmoock> but it's definitely better on mainnet for bigger payments
02:32 < m-schmoock> gotta go. ping me if you need logs ...
02:32 < jasan> Have a nice weekend!
02:33 -!- mrostecki [~mrostecki@gateway/tor-sasl/mrostecki] has joined #c-lightning
02:33 * jasan /metoo
02:34 -!- jasan [~j@n.bublina.eu.org] has quit [Quit: Have a nice weekend!]
02:35 <@cdecker> m-schmoock: I was considering having a fake MPP-receiver plugin. It'd hold on to incoming HTLCs for the 60-second MPP timeout, and fail with different error codes depending on whether all parts were received. This way users could send (provably) unclaimable payments that get refunded, but we learn whether a payment would have succeeded
02:36 <@cdecker> A couple of volunteers running these would mean we can test loads of different parameters for the `pay` plugin, optimizing them for the network state
03:03 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
03:10 < m-schmoock> sure, as long as the code doesn't drain my wallet to yours ;)
03:11 < m-schmoock> cdecker: any chance we can get a basic optimization into 0.9.2? I fear there are quite a few users affected...
03:25 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Quit: Leaving.]
04:01 -!- Teoti [~teoti@66.245.254.7] has joined #c-lightning
04:19 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has left #c-lightning []
04:19 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has joined #c-lightning
04:20 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has quit [Quit: = ""]
04:34 -!- kristapsk [~KK@gateway/tor-sasl/kristapsk] has quit [Remote host closed the connection]
04:37 -!- kristapsk [~KK@gateway/tor-sasl/kristapsk] has joined #c-lightning
05:06 -!- ksedgwic [~ksedgwic@192-184-134-31.static.sonic.net] has joined #c-lightning
05:17 -!- shesek [~shesek@unaffiliated/shesek] has quit [Remote host closed the connection]
05:46 -!- vasild [~vd@gateway/tor-sasl/vasild] has quit [Remote host closed the connection]
05:47 -!- vasild [~vd@gateway/tor-sasl/vasild] has joined #c-lightning
06:08 -!- ark [~ark@89.32.80.84] has joined #c-lightning
06:10 < ark> Hello
06:10 < ark> For the topic of high availability
06:10 < ark> How tested is the PostgreSQL driver?
06:12 < ark> Can you have 2 instances of the same node pointing to the same database?
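[Editor's note: a rough sketch of the "fake MPP-receiver" plugin cdecker describes above (02:35): hold every incoming HTLC for the MPP timeout, then fail all parts so the sender is refunded, while logging whether the full payment arrived. It assumes pyln-client's `async_hook` and the `htlc_accepted` hook payload as of that era; the failure code is illustrative and should be checked against BOLT 4 before real use.]
```python
#!/usr/bin/env python3
# Sketch of a probe-only MPP receiver: never claims, always refunds after the
# timeout, but records whether all parts of a payment arrived in time.
from threading import Timer
from pyln.client import Plugin

plugin = Plugin()

MPP_TIMEOUT = 60   # seconds, the MPP timeout window discussed above
held = {}          # payment_hash -> list of (request, amount_msat)

def resolve(payment_hash):
    """Fail all held parts for this payment_hash and log what arrived."""
    parts = held.pop(payment_hash, [])
    total = sum(amt for _, amt in parts)
    plugin.log("probe %s: %d parts, %d msat arrived before timeout"
               % (payment_hash, len(parts), total))
    for request, _ in parts:
        # 0x0017 = mpp_timeout; a different code could signal "all parts seen".
        request.set_result({"result": "fail", "failure_message": "0017"})

@plugin.async_hook("htlc_accepted")
def on_htlc_accepted(onion, htlc, request, plugin, **kwargs):
    payment_hash = htlc["payment_hash"]
    amount_msat = int(str(htlc["amount"]).replace("msat", ""))
    if payment_hash not in held:
        held[payment_hash] = []
        Timer(MPP_TIMEOUT, resolve, args=[payment_hash]).start()
    # Do not answer the hook yet: the HTLC stays pending until resolve() runs.
    held[payment_hash].append((request, amount_msat))

plugin.run()
```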
06:21 -!- k3tan [~pi@gateway/tor-sasl/k3tan] has quit [Ping timeout: 240 seconds]
06:24 -!- k3tan [~pi@gateway/tor-sasl/k3tan] has joined #c-lightning
06:35 -!- queip [~queip@unaffiliated/rezurus] has quit [Excess Flood]
06:37 -!- queip [~queip@unaffiliated/rezurus] has joined #c-lightning
07:14 -!- k3tan [~pi@gateway/tor-sasl/k3tan] has quit [Ping timeout: 240 seconds]
07:15 -!- sr_gi [~sr_gi@static-120-201-229-77.ipcom.comunitel.net] has quit [Read error: Connection reset by peer]
07:15 -!- sr_gi [~sr_gi@static-120-201-229-77.ipcom.comunitel.net] has joined #c-lightning
07:31 -!- k3tan [~pi@gateway/tor-sasl/k3tan] has joined #c-lightning
08:26 < m-schmoock> we really need to update rebalance and drain to somehow use MPP
08:27 -!- liberliver1 [~Thunderbi@dynamic-077-011-048-215.77.11.pool.telefonica.de] has joined #c-lightning
08:27 -!- liberliver [~Thunderbi@144.49.211.130.bc.googleusercontent.com] has quit [Read error: Connection reset by peer]
08:30 -!- liberliver [~Thunderbi@144.49.211.130.bc.googleusercontent.com] has joined #c-lightning
08:32 -!- liberliver1 [~Thunderbi@dynamic-077-011-048-215.77.11.pool.telefonica.de] has quit [Ping timeout: 256 seconds]
09:01 -!- kexkey [~kexkey@static-198-54-132-171.cust.tzulo.com] has joined #c-lightning
09:05 -!- polydin [~george@136.49.254.169] has joined #c-lightning
10:03 -!- liberliver [~Thunderbi@144.49.211.130.bc.googleusercontent.com] has quit [Ping timeout: 260 seconds]
10:05 -!- mrostecki [~mrostecki@gateway/tor-sasl/mrostecki] has quit [Ping timeout: 240 seconds]
10:52 -!- kexkey [~kexkey@static-198-54-132-171.cust.tzulo.com] has quit [Quit: Do, don't don't]
11:12 -!- kexkey [~kexkey@static-198-54-132-91.cust.tzulo.com] has joined #c-lightning
11:19 -!- belcher [~belcher@unaffiliated/belcher] has quit [Quit: Leaving]
11:58 -!- az0re [~az0re@gateway/tor-sasl/az0re] has quit [Ping timeout: 240 seconds]
12:35 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has left #c-lightning []
12:35 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has joined #c-lightning
12:40 -!- dr-orlovsky [~dr-orlovs@31.14.40.19] has joined #c-lightning
12:55 -!- jonatack [~jon@213.152.162.69] has quit [Quit: jonatack]
13:06 -!- shesek [~shesek@unaffiliated/shesek] has joined #c-lightning
13:15 -!- billygarrison [86291f92@134.41.31.146] has joined #c-lightning
13:17 < billygarrison> Is there a recommended way to rotate logs in c-lightning (the one created from the `log-file` option)?
13:23 -!- jonatack [~jon@88.124.242.136] has joined #c-lightning
13:28 -!- jonatack [~jon@88.124.242.136] has quit [Ping timeout: 264 seconds]
13:29 -!- jonatack [~jon@109.202.103.170] has joined #c-lightning
13:51 -!- billygarrison [86291f92@134.41.31.146] has quit [Remote host closed the connection]
13:53 < darosior> Sending a SIGHUP to lightningd
14:06 < ark> Hello
14:06 < ark> For the topic of high availability
14:06 < ark> How tested is the PostgreSQL driver?
14:06 < ark> Can you have 2 instances of the same node pointing to the same database?
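[Editor's note: darosior's answer above (13:53) refers to lightningd reopening the file named by `log-file` when it receives SIGHUP. A small sketch of a rotation step follows; the pid-file path is only an assumption, adjust to however you track the daemon's PID.]
```python
# Sketch: rotate the c-lightning log, then SIGHUP lightningd so it reopens the file.
import os
import signal
import shutil
import time

LOG_FILE = "/home/user/.lightning/lightningd.log"   # value of the log-file option
PID_FILE = "/home/user/.lightning/lightningd.pid"   # placeholder; not a default path

# 1. Move the current log aside (lightningd keeps writing to the old inode for now).
shutil.move(LOG_FILE, LOG_FILE + "." + time.strftime("%Y%m%d"))

# 2. Ask lightningd to reopen its log file, which recreates LOG_FILE fresh.
with open(PID_FILE) as f:
    pid = int(f.read().strip())
os.kill(pid, signal.SIGHUP)
```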
14:08 -!- vasild_ [~vd@gateway/tor-sasl/vasild] has joined #c-lightning
14:12 -!- vasild [~vd@gateway/tor-sasl/vasild] has quit [Ping timeout: 240 seconds]
14:12 -!- vasild_ is now known as vasild
14:43 -!- Teoti [~teoti@66.245.254.7] has quit [Quit: Leaving]
14:43 -!- az0re [~az0re@gateway/tor-sasl/az0re] has joined #c-lightning
15:30 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
15:49 -!- cryptosoap [~cryptosoa@gateway/tor-sasl/cryptosoap] has quit [Ping timeout: 240 seconds]
15:50 -!- cryptosoap [~cryptosoa@gateway/tor-sasl/cryptosoap] has joined #c-lightning
16:17 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has left #c-lightning []
16:17 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has joined #c-lightning
16:31 < ark> rusty:
16:31 < ark> For the topic of high availability
16:31 < ark> How tested is the PostgreSQL driver?
16:32 < ark> Can you have 2 instances of the same node pointing to the same database?
16:33 < rusty> ark: hmm, have to ask cdecker TBH, I've not used it other than basic testing.
16:34 < rusty> ark: but you should not run two instances of the same node anyway, too much can go wrong!
16:35 < zmnscpxj__> ark: idea: run two pieces of hardware, one runs the node, one has a copy of the `hsm_secret` and points to the same database but does not run normally
16:35 < zmnscpxj__> ark: then have a process that monitors the health of the first hardware and starts up the backup instance if the first hardware fails
16:36 < zmnscpxj__> though you now have to stop the first hardware from running if it comes back up
16:48 < ark> The thing is that I wanted to run the node on a docker swarm
16:48 < ark> I can make it keep a single instance running
16:48 < ark> But if it is due to loss of connection, you could end up with 2 instances running at the same time. And since I was thinking of using a remote database, it is harder for the database to get corrupted
16:49 < zmnscpxj__> sorry, no real idea then
16:50 < ark> Is there a way to put the LN server on pause for forwarding?
16:51 < zmnscpxj__> how do other nodes connect to you?
16:52 < ark> that way it could be paused if there is more than one instance running
16:53 < zmnscpxj__> if there are two instances, what happens to the TCP/IP connections on the "paused" instance?
16:54 < ark> good question
16:54 < ark> With the docker swarm the instances act as if they were one to the remote side
16:56 < zmnscpxj__> so they could conflict as well with the TCP/IP communications with other LN nodes?
16:58 < ark> you're right
16:58 < zmnscpxj__> so yes, this is a lot more troublesome
17:00 < ark> So for now the only more or less reliable solution would be to make sure that there is only one instance running at the same time, right?
17:00 < ark> And if one falls over, bring it up somewhere else
17:00 < zmnscpxj__> yes
17:00 < zmnscpxj__> there are no decent load balancers on Lightning
17:02 < ark> And on the subject of the HTLCs, the safest option is an external database, versus using sqlite on a distributed file system, right?
17:03 < zmnscpxj__> hmmm?
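[Editor's note: zmnscpxj__'s failover idea above (16:35) could look roughly like the sketch below: a monitor that only starts the backup lightningd (with its own copy of `hsm_secret` and the same PostgreSQL wallet) after the primary is confirmed dead. Hostnames, the DSN, and the health check are placeholders; this is an untested illustration, and per rusty's warning two instances must never run against one database at the same time.]
```python
# Sketch of a single-active failover monitor for a c-lightning node.
import socket
import subprocess
import time

PRIMARY_HOST = "primary.example.internal"   # placeholder
PRIMARY_PORT = 9735                          # lightning p2p port on the primary
WALLET_DSN = "postgres://cl:secret@db.example.internal:5432/clightning"  # placeholder

def primary_alive(timeout=5):
    """Very naive health check: can we open a TCP connection to the primary?"""
    try:
        with socket.create_connection((PRIMARY_HOST, PRIMARY_PORT), timeout=timeout):
            return True
    except OSError:
        return False

failures = 0
while True:
    failures = 0 if primary_alive() else failures + 1
    if failures >= 3:
        # Primary considered dead. IMPORTANT: it must be fenced off (powered down,
        # firewalled) before this point, or two nodes will share one database.
        subprocess.run([
            "lightningd",
            "--network=bitcoin",
            "--wallet=" + WALLET_DSN,
        ])
        break
    time.sleep(10)
```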
17:03 < zmnscpxj__> well, might be better to ask cdecker
17:03 < zmnscpxj__> I believe PostgreSQL would be faster (and maybe safer?) in such a setup
17:03 < ark> ok
17:04 < ark> thanks
17:04 < zmnscpxj__> np np
17:05 < ark> Yes, I was thinking about that configuration
17:05 < ark> I think it is more secure because of the issue of database locks
17:06 < zmnscpxj__> if only one application (C-Lightning instance) is using the database, there are no database locks to worry about
17:06 < zmnscpxj__> C-Lightning only takes locks one at a time, it never takes multiple locks in parallel
17:07 < ark> Yeah, but the lock assures me that the information has been transferred out of the LN node
17:08 < zmnscpxj__> well, the unlocking procedure should do that synchronization
17:08 < zmnscpxj__> and C-Lightning uses atomic transactions to ensure consistency of its database
17:08 < ark> because all database transactions need a commit to take effect
17:09 < zmnscpxj__> I mean SQLITE does have working database transactions
17:09 < zmnscpxj__> though I guess that is assuming that the underlying filesystem is correctly POSIX-compliant
17:09 < zmnscpxj__> which a random distributed file system might not be....
17:11 < ark> Right
17:11 < ark> the fact that it has been saved in the sqlite database does not mean that the operating system has written it to disk, or even transmitted it to the other nodes in a distributed system
17:12 < zmnscpxj__> well, SQLITE3 does do an `fsync` and updates its write logs etc. to ensure consistency, so either the commit pushes through, or the commit does not
17:12 < zmnscpxj__> and the OS trusts the filesystem to do the sync correctly
17:13 < zmnscpxj__> it is the filesystem you should probably ask if it is doing the job correctly
17:13 < zmnscpxj__> FWIW stuff like ZFS has internal architectures more similar to databases nowadays
17:14 < zmnscpxj__> and ensures atomicity of writes as well, even across multiple devices
17:17 < ark> Would it be safe to keep the current state of the channels, say in an AWS database?
17:17 < ark> Separated from hsm_secret
17:18 < zmnscpxj__> AWS would know your activity and how much you own, but yes, it would not be able to *steal* it
17:19 < zmnscpxj__> in theory if a curious AWS technician wanted to snoop they could probably get your data and how much you own and invoices and etc. etc
17:19 < zmnscpxj__> but would not be able to outright steal without the `hsm_secret`
17:22 < ark> It's good to know that
17:22 < ark> So the only issue there is privacy. It might be feasible to host the database there without compromising the integrity of the balances
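[Editor's note: a small, generic illustration of the point zmnscpxj__ makes above about commits being all-or-nothing. This is plain Python/sqlite3, not c-lightning code; the table and values are made up.]
```python
# With SQLite, a transaction either commits fully or not at all.
import sqlite3

db = sqlite3.connect("example.db", isolation_level=None)  # autocommit off via explicit BEGIN
db.execute("CREATE TABLE IF NOT EXISTS htlcs (id INTEGER PRIMARY KEY, state TEXT)")

try:
    db.execute("BEGIN")
    db.execute("INSERT INTO htlcs (state) VALUES ('SENT_ADD_COMMIT')")
    # If anything fails before COMMIT (error, crash), SQLite's journal/WAL rolls the
    # INSERT back on the next open -- the database never shows a half-applied update,
    # assuming the underlying filesystem honours fsync.
    db.execute("COMMIT")
except sqlite3.Error:
    db.execute("ROLLBACK")
    raise
```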
17:22 < zmnscpxj__> yes, people sometimes send C-Lightning databases to devs for debugging, if they trust that the dev will not leak their privacy
17:25 < ark> For the paranoid, the database fields could even be encrypted in the future
17:25 < zmnscpxj__> we do not have a facility to do that in the C-Lightning codebase yet, and it would be difficult to do
17:26 < zmnscpxj__> since `hsm_secret` is only opened in a separate process
17:35 < ark> Good, but the database wouldn't need that level of security
17:35 < ark> It could be encrypted with a symmetric key that the main process can see
17:35 < ark> since there is no danger of theft if that leaks
17:37 -!- zmnscpxj_ [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has joined #c-lightning
17:38 -!- zmnscpxj__ [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has quit [Ping timeout: 240 seconds]
18:51 -!- zmnscpxj_ [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has quit [Remote host closed the connection]
18:52 -!- zmnscpxj_ [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has joined #c-lightning
18:53 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has left #c-lightning []
18:56 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has joined #c-lightning
19:39 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Quit: Leaving.]
21:26 -!- rh0nj [~rh0nj@88.99.167.175] has quit [Remote host closed the connection]
21:45 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
21:49 -!- DeanGuss [~dean@gateway/tor-sasl/deanguss] has joined #c-lightning
21:51 -!- DeanWeen [~dean@gateway/tor-sasl/deanguss] has quit [Ping timeout: 240 seconds]
22:15 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Quit: Leaving.]
23:42 -!- sr_gi [~sr_gi@static-120-201-229-77.ipcom.comunitel.net] has quit [Read error: Connection reset by peer]
23:43 -!- sr_gi [~sr_gi@static-120-201-229-77.ipcom.comunitel.net] has joined #c-lightning
--- Log closed Sat Nov 14 00:00:16 2020