--- Log opened Fri May 17 00:00:15 2019
00:31 -!- keymone [~keymone@ip1f10c1a7.dynamic.kabel-deutschland.de] has quit [Ping timeout: 246 seconds]
00:35 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has joined #c-lightning
00:52 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
00:56 -!- keymone [~keymone@ip1f10c1a7.dynamic.kabel-deutschland.de] has joined #c-lightning
01:09 -!- bitonic-cjp [~bitonic-c@92-111-70-106.static.v4.ziggozakelijk.nl] has joined #c-lightning
01:43 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
02:19 -!- Victorsueca [~Victorsue@unaffiliated/victorsueca] has quit [Ping timeout: 244 seconds]
02:54 -!- Kostenko [~Kostenko@dsl-154-81.bl26.telepac.pt] has joined #c-lightning
02:56 -!- darosior [6dbe8dc1@gateway/web/freenode/ip.109.190.141.193] has joined #c-lightning
03:17 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has quit [Quit: = ""]
03:29 -!- ghost43 [~daer@gateway/tor-sasl/daer] has quit [Remote host closed the connection]
03:29 -!- ghost43 [~daer@gateway/tor-sasl/daer] has joined #c-lightning
03:37 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
04:03 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
05:26 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
05:33 -!- Victorsueca [~Victorsue@unaffiliated/victorsueca] has joined #c-lightning
05:52 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
06:44 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Ping timeout: 246 seconds]
06:59 -!- StopAndDecrypt_ [~StopAndDe@129.232.161.114] has quit [Ping timeout: 252 seconds]
06:59 -!- StopAndDecrypt [~StopAndDe@unaffiliated/stopanddecrypt] has joined #c-lightning
07:04 -!- spaced0ut [~spaced0ut@unaffiliated/spaced0ut] has joined #c-lightning
07:33 -!- darosior [6dbe8dc1@gateway/web/freenode/ip.109.190.141.193] has quit [Ping timeout: 256 seconds]
08:01 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has joined #c-lightning
08:15 -!- ulrichard [~richi@dzcpe6300borminfo01-e0.static-hfc.datazug.ch] has quit [Remote host closed the connection]
08:35 < m-schmoock> cdecker: I'm having a hard time draining a channel 100.0%. I always get strange "capacity exceeded" errors that don't make sense (to me). I'm trying to send out: ours_msat - our_reserves - calculated routing fees. The circular payment fails at the first channel with an unreasonably high expected fee: Cannot afford fee 226sat: would make balance 774230msat below reserve 1000sat (226sat !?!)
08:35 < m-schmoock> is there anything to consider when draining a channel to exactly 0msat?
08:38 < m-schmoock> seems like the C code `channeld/full_channel.c` (line 489) takes on-chain fees into account for that (`amount_sat`)
08:40 <@cdecker> Are we considering on-chain fees twice? Or would the fee dip into the reserve maybe?
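[Editor's note: the arithmetic in the quoted error can be reproduced directly from the figures it reports; a minimal sketch (all values taken from the error message above):]

```python
# Figures from the "Cannot afford fee" error above, all in millisatoshi.
reserve_msat = 1_000 * 1_000        # 1000sat channel reserve
fee_msat = 226 * 1_000              # commitment-tx fee the daemon insists on
balance_after_msat = 774_230        # balance the daemon says would remain

# Working backwards: the sender's balance before the fee is deducted.
balance_before_msat = balance_after_msat + fee_msat
print(balance_before_msat)          # 1000230 -> reserve + 230msat

# The attempted payment would leave (almost) exactly the reserve behind,
# so the 226sat fee necessarily dips below it.
shortfall_msat = reserve_msat - balance_after_msat
print(shortfall_msat)               # 225770 -> 226sat minus the 230msat remainder
```

In other words, the drain amount was computed so that exactly the reserve (plus a 230msat rounding remainder) stays behind, and the daemon then refuses because the 226sat commitment fee must also fit above the reserve.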
08:40 <@cdecker> Also notice that you might be hitting against the fuzz code that tries to slightly randomize the amounts transferred, to hide endpoints and round amounts
08:41 < m-schmoock> i can try turning that off
08:41 < m-schmoock> but i doubt that causes 200sat
08:41 < m-schmoock> this is my code im working with: https://github.com/lightningd/plugins/pull/22/files
08:41 -!- bitonic-cjp [~bitonic-c@92-111-70-106.static.v4.ziggozakelijk.nl] has quit [Quit: Leaving]
08:42 < m-schmoock> what do you mean by "Are we considering on-chain fees twice?"
08:42 < m-schmoock> I subtracted the fee from the to-be-drained amount after route calculation, and use an invoice without a defined amount
08:42 < m-schmoock> and the amount is calculated by respecting the own reserves
08:43 < m-schmoock> also, all these cannot add up to 226sat
08:44 < m-schmoock> The error is raised at `full_channel.c`, which complains about the fee: `fee = commit_tx_base_fee(feerate, untrimmed);`
08:44 < m-schmoock> and these are the 226sat that are missing
08:45 < m-schmoock> isn't that supposed to be covered by the reserves?
08:47 < m-schmoock> NOTE: reserves 1000sat, commit_tx_fee: 226sat, resulting balance after (impossible) HTLC: 774230msat -> this is exactly 226sat below reserves
08:48 < m-schmoock> *'exactly' (230msat missing, but I already know where those came from)
08:54 < m-schmoock> btw, have you seen the Qt GUI plugin? quite nice work from darosior
09:03 < m-schmoock> cdecker: why are concrete on-chain fees considered anyway, isn't that what the reserves are meant for?
09:19 < m-schmoock> I subtracted the mystery 226sat by hardcoding; that worked, but resulted in a 'drained' channel that now has "to_us_msat": "1226228msat" (Note: reserves + 226sat)!
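[Editor's note: the `commit_tx_base_fee(feerate, untrimmed)` call quoted above computes the commitment-transaction fee from the BOLT #3 weight formula (724 weight for the base commitment transaction plus 172 per untrimmed HTLC output). A sketch of that calculation; the feerate of 253sat/kw is an assumption, chosen because it is c-lightning's usual feerate floor and reproduces the 226sat figure exactly:]

```python
def commit_tx_base_fee(feerate_per_kw: int, num_untrimmed_htlcs: int) -> int:
    """Commitment-tx fee in satoshi, per the BOLT #3 weight formula:
    weight = 724 + 172 * num-untrimmed-htlc-outputs,
    fee = feerate_per_kw * weight / 1000 (rounded down)."""
    weight = 724 + 172 * num_untrimmed_htlcs
    return feerate_per_kw * weight // 1000

# One extra untrimmed HTLC at an assumed feerate of 253sat/kw yields
# exactly the 226sat from the error message:
print(commit_tx_base_fee(253, 1))  # 226
```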
09:20 < m-schmoock> I think there's something wrong with the `full_channel.c` commit tx fee calculation
10:31 <@cdecker> No, the HTLC is a separate transaction, which may need to be sent on-chain if we get a unilateral close while the HTLC is active on the channel
10:31 <@cdecker> That's why it needs to be considered for the HTLC creation
10:34 < m-schmoock> yes, but does it make sense that the minimal effective ours_msat is reserves + 226sat (HTLC fee)?
10:34 < m-schmoock> the daemon prevents me from draining a channel below reserves + 226sat (not just reserves)
10:35 <@cdecker> Hm, would need to check the specs whether it's allowed to dip into the reserve for the HTLC fees
10:35 < m-schmoock> if it does, how do I know via RPC how big the HTLC fee is gonna be, so I can know exactly how much I can send over a channel?
10:35 <@cdecker> Actually no, it shouldn't be allowed, otherwise you could play a game of scorched earth with your peer
10:36 < m-schmoock> aha
10:36 < m-schmoock> okay, so then how do I get the exact value the daemon will account for the HTLC fee?
10:36 < m-schmoock> or is it always 226sat?
10:37 < m-schmoock> :D I can try/catch the error, it tells me indirectly
10:37 < m-schmoock> lol
10:40 < m-schmoock> are you really sure that the effective to_us_msat amount is always reserves + HTLC commitment fees?
10:40 < m-schmoock> it sounds rather misleading
10:41 <@cdecker> Nope, that would be true for a single HTLC, but since we may have up to 483 HTLCs on the channel at any point in time, the usable amount can be lower
10:42 <@cdecker> I'm sorry about this, as usual the LN protocol is more complicated than it appears on first inspection
10:42 < m-schmoock> not your fault i think :D
10:42 <@cdecker> FWIW with eltoo we can remove a lot of this stuff, since fees can be sideloaded and reserve values become pointless :-)
10:43 < m-schmoock> okay, then I will implement catching/parsing the error to determine the fees the daemon actually accounts for
10:43 < m-schmoock> this feels so wrong
10:43 <@cdecker> Hehe I know
10:44 < m-schmoock> in this regard, the whole point of 'draining' a channel becomes useless. but having the 'fill' command may be useful anyway, to fill up a channel before closing to reduce liquidity
10:45 < m-schmoock> a 'drained' channel will thus always result in an on-chain UTXO, even though it's just reserves and commitment fee?
10:46 < m-schmoock> or is there a way to drain a channel that will result in no UTXO upon close?
10:46 < m-schmoock> (dust accumulation)
10:52 < m-schmoock> cdecker: also, should there be a way to determine the next HTLC on-chain fees via RPC before running into an exception?
11:03 < m-schmoock> bah, I can't parse the exception as the details aren't passed through without hacking into the C code °_°
11:04 < m-schmoock> it's just CHANNEL_ERR_CHANNEL_CAPACITY_EXCEEDED without details
11:05 < m-schmoock> which is strange: it tells me the reserves would be used, but if I add the 226sat, the result is having reserves + 226sat left as 'ours'
11:07 < m-schmoock> cdecker: TL;DR, my questions: 1. how do I determine the HTLC commitment fees before sending the payment? 2. is it possible to drain a channel in a way that results in no (own) UTXO when closing (dust accumulation use case)? 3. is it really correct that the daemon tells me the reserves would be used, but if I account for the magic 226sat, the result is reserves + 226sat?
11:08 < m-schmoock> sorry for the many questions, it's hard to get attention from the right guys ^^
11:13 < m-schmoock> *3 ... result is 'reserves + 226sat' as 'ours'
11:14 < m-schmoock> 4. is there a technical way to get a channel below reserves + magic HTLC fees, ideally just reserves?
11:33 -!- spaced0ut [~spaced0ut@unaffiliated/spaced0ut] has quit [Quit: Leaving]
12:02 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has quit [Ping timeout: 246 seconds]
12:21 -!- jb55 [~jb55@S010660e327dca171.vc.shawcable.net] has joined #c-lightning
12:32 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has joined #c-lightning
13:02 < m-schmoock> okay, I answered 3. for myself. It's correct, as the specific HTLC may fail and the fees have to be accounted for in that exact case. That the result is 'reserve + HTLC fee' as 'ours' on a successful payment is also expected, as the HTLC was resolved and the fees become available again
13:02 < m-schmoock> LN is fun!
13:04 < m-schmoock> that also answers 4. with 'no'
13:05 < m-schmoock> 2. is 'probably no'
13:06 < m-schmoock> remaining is just 1., and if that's a 'no', whether I can change the raised error to contain the accounted HTLC fee (not just "Capacity exceeded")
13:33 < fiatjaf> instead of sending coins to the c-lightning wallet and then funding a channel, why can't we have c-lightning generate the multisig lightning address and then we send coins to it from another place?
14:17 -!- darosior [52ff9820@gateway/web/freenode/ip.82.255.152.32] has joined #c-lightning
14:22 < t0mix> +1 like @fiatjaf
15:23 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
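[Editor's note: question 1 amounts to estimating the commitment-tx fee the funder must still afford with one more HTLC in flight. A sketch of such an estimate under stated assumptions: the helper name is made up, the feerate of 253sat/kw is assumed, only one additional HTLC is accounted for, and, as cdecker notes above, with up to 483 concurrent HTLCs the real usable amount can be lower:]

```python
def estimate_spendable_msat(ours_msat: int, reserve_sat: int,
                            feerate_per_kw: int, num_untrimmed_htlcs: int) -> int:
    """Rough upper bound on what the funder can send over a channel:
    own balance, minus the channel reserve, minus the commitment-tx fee
    with one more untrimmed HTLC added (BOLT #3 weights: 724 base plus
    172 per untrimmed HTLC output)."""
    weight = 724 + 172 * (num_untrimmed_htlcs + 1)
    commit_fee_msat = (feerate_per_kw * weight // 1000) * 1000
    return max(0, ours_msat - reserve_sat * 1000 - commit_fee_msat)

# With the numbers from the discussion -- 1226228msat left as 'ours',
# 1000sat reserve, assumed 253sat/kw feerate, no pending HTLCs -- almost
# nothing remains spendable, which is why the channel cannot be drained
# further than reserve + 226sat:
print(estimate_spendable_msat(1_226_228, 1000, 253, 0))  # 228
```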
15:28 -!- michaelsdunn1 [~michaelsd@unaffiliated/michaelsdunn1] has quit [Remote host closed the connection]
15:29 < darosior> I think it is useful to have it all in the same place in the error case (on-chain resolution), so if the channel gets unilaterally closed your funds go back to the lightning wallet
15:49 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
15:51 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has joined #c-lightning
16:12 < fiatjaf> I'm not saying we should get rid of the c-lightning wallet
16:13 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
16:16 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Ping timeout: 252 seconds]
16:32 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
16:33 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has joined #c-lightning
16:34 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Excess Flood]
16:38 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has joined #c-lightning
16:40 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Excess Flood]
16:43 -!- darosior [52ff9820@gateway/web/freenode/ip.82.255.152.32] has quit [Quit: Page closed]
19:33 -!- m-schmoock [~will@schmoock.net] has quit [Remote host closed the connection]
19:38 -!- m-schmoock [~will@schmoock.net] has joined #c-lightning
20:04 -!- Eagle[TM] [~EagleTM@unaffiliated/eagletm] has joined #c-lightning
20:05 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has quit [Ping timeout: 258 seconds]
21:48 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-awleibpncfpfpwzs] has left #c-lightning []
21:48 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-awleibpncfpfpwzs] has joined #c-lightning
--- Log closed Sat May 18 00:00:17 2019