--- Log opened Tue Sep 29 00:00:32 2020
00:21 -!- zmnscpxj [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has joined #c-lightning
00:23 -!- kristapsk [~KK@gateway/tor-sasl/kristapsk] has joined #c-lightning
00:35 -!- jonatack [~jon@37.167.109.160] has joined #c-lightning
00:36 -!- kristapsk [~KK@gateway/tor-sasl/kristapsk] has quit [Remote host closed the connection]
00:49 -!- kristapsk [~KK@gateway/tor-sasl/kristapsk] has joined #c-lightning
01:10 -!- kristapsk [~KK@gateway/tor-sasl/kristapsk] has quit [Remote host closed the connection]
01:10 -!- kristapsk [~KK@gateway/tor-sasl/kristapsk] has joined #c-lightning
01:27 -!- skme9_ [~skme9@2402:3a80:691:8f38:28cb:2224:2052:d7ad] has joined #c-lightning
01:29 -!- skme9_ [~skme9@2402:3a80:691:8f38:28cb:2224:2052:d7ad] has quit [Max SendQ exceeded]
01:29 -!- skme9_ [~skme9@2402:3a80:691:8f38:28cb:2224:2052:d7ad] has joined #c-lightning
01:31 -!- skme9 [~skme9@2402:3a80:691:8f38:28cb:2224:2052:d7ad] has quit [Ping timeout: 240 seconds]
01:31 -!- skme9_ [~skme9@2402:3a80:691:8f38:28cb:2224:2052:d7ad] has quit [Max SendQ exceeded]
01:32 -!- sr_gi [~sr_gi@static-80-160-230-77.ipcom.comunitel.net] has quit [Read error: Connection reset by peer]
01:33 -!- sr_gi [~sr_gi@static-80-160-230-77.ipcom.comunitel.net] has joined #c-lightning
01:47 -!- mrostecki [~mrostecki@gateway/tor-sasl/mrostecki] has joined #c-lightning
02:04 -!- jasan [~jasan@tunnel509499-pt.tunnel.tserv27.prg1.ipv6.he.net] has joined #c-lightning
02:06 -!- mrostecki [~mrostecki@gateway/tor-sasl/mrostecki] has quit [Quit: Leaving]
02:08 -!- mrostecki [~mrostecki@gateway/tor-sasl/mrostecki] has joined #c-lightning
02:12 -!- kristapsk [~KK@gateway/tor-sasl/kristapsk] has quit [Remote host closed the connection]
02:29 -!- ghost43 [~daer@gateway/tor-sasl/daer] has quit [Ping timeout: 240 seconds]
02:30 -!- jonasschnelli [~jonasschn@unaffiliated/jonasschnelli] has quit [Ping timeout: 260 seconds]
02:31 -!- vasild [~vd@gateway/tor-sasl/vasild] has quit [Ping timeout: 240 seconds]
02:31 -!- vasild [~vd@gateway/tor-sasl/vasild] has joined #c-lightning
02:32 -!- ghost43 [~daer@gateway/tor-sasl/daer] has joined #c-lightning
02:32 < darosior> "Do you think one should care about bad peers in terms of HTLC forwarding success-rate stats?" => I do, as a node operator i don't want to commit part of my funds in advance to a notoriously bad forwarder
02:33 < darosior> I think the opener metric makes sense but i'm not sure how to translate it to fees yet
02:33 < zmnscpxj> what is the opener metric?
02:34 < darosior> In https://github.com/lightningd/plugins/pull/147 i suggested conditioning the fee increase on whether we are the opener of the channel
02:35 < darosior> (Pinged you there btw as we talked about it last time)
02:36 < darosior> The reason comes from my own experience: by increasing the fees for everyone using the channel i opened, i can on average --and assuming no one is just pure evil-- compensate my channel close cost.
02:37 < darosior> If you didn't open the channel, you can be more liberal and just "try and see"
02:38 < zmnscpxj> right, but in a future where there are anchor commitments, the fees change
02:38 < darosior> Hmm
02:38 < zmnscpxj> the opener does not necessarily pay so much
02:39 < darosior> Ok, right
02:39 < zmnscpxj> on the other hand, anchor commitments are "Coming Soon (TM)"
02:39 < darosior> :)
02:40 < darosior> Looking forward to having the UX conversation about the utxo pool for bring-your-own-fees in C-lightning :)
02:41 < zmnscpxj> Just leave a small amount of funds onchain?
02:41 < zmnscpxj> dunno
02:41 < darosior> Fractional reserve ?
02:41 < zmnscpxj> "small" is relative :)
02:41 < zmnscpxj> haha
02:42 < zmnscpxj> Do we still need the channel reserves in an anchor commitments world?
02:42 < darosior> In this case you completely assume that the user cannot survive a massive channel close :/
02:42 < darosior> channelS
02:42 < darosior> Hmm how would it differ ?
02:43 < zmnscpxj> Re mass channel close: Why? Just claim the biggest-on-my-side channel first, now you have funds for the rest?
02:44 < zmnscpxj> Re channel reserves: they are there to protect you from theft by making a theft attempt non-costless. but in an anchor commitments world, the one publishing old state has to pay fees anyway, so that is now the cost they pay.
02:44 < zmnscpxj> Re channel reserves: unless we added on some more purposes for channel reserves, in which case never mind
02:55 < m-schmoock> darosior: the thing is, do we even have a clue about the reason a node is a 'notoriously bad' forwarder? It could just be that he is badly connected, imbalanced, has liquidity problems, technical reasons
02:56 < m-schmoock> we may want to close a channel if we have a good reason the node has technical problems
02:56 < zmnscpxj> does the exact reason matter? (unless we want to avoid *ourselves* being a notoriously bad forwarder)
02:56 < m-schmoock> *good reason to believe
02:57 < m-schmoock> well, if the reason is that he is badly connected, we make the situation worse by closing his channel
02:57 < zmnscpxj> centralization ftw
02:57 < m-schmoock> :D
02:58 < zmnscpxj> rather than disconnect their channel, maybe open channels to its direct peers?
02:58 < darosior> zmnscpxj: "Why? Just claim the biggest-on-my-side channel first, now you have funds for the rest?" => This does not work for watchtowers
02:58 -!- jonatack [~jon@37.167.109.160] has quit [Ping timeout: 256 seconds]
02:58 < zmnscpxj> darosior: right, watchtowers
02:59 < zmnscpxj> but presumably watchtowers would then have a larger reserve
02:59 < zmnscpxj> not "normal" nodes that can leave like less than half a millibitcoin for the occasional close
03:00 < m-schmoock> see, if a node fails more often than he succeeds in forwarding because he is not well connected, that is not something we should fix by closing a channel; the one using a suboptimal routing algo should fix it
03:00 < m-schmoock> right?
03:00 < darosior> ... Or maybe not, but faith is what matters :p
03:00 < darosior> Yeah it's def a good point for nodes
03:00 < m-schmoock> but if we have indication that a node is bad for technical reasons, we may very well close it (or contact the guy)
03:00 < darosior> m-schmoock: i do grafana-based channel-closing x)
03:01 < zmnscpxj> hmm. rather than close, try to first move your funds offchain to another channel?
03:01 < zmnscpxj> though if it is a "notoriously bad" forwarder.... bleah
03:02 < darosior> Yeah that's the case, but also if this connection does not bring me a lot of success (so absolute + rate based)
03:02 < darosior> I do use m-schmoock's drain plugin when i have some spare time
03:02 < zmnscpxj> how are you measuring forwarding success rate? `forward_event` notification?
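[Editor's sketch: the `forward_event` notification zmnscpxj mentions is the hook c-lightning exposes for this kind of bookkeeping. Below is a minimal pyln-client plugin that tallies per-incoming-channel success rates; the `forwardstats` RPC name and the aggregation scheme are illustrative assumptions, not a shipped plugin. The payload fields (`in_channel`, `status`) match the forward record ark pastes later in this log.]

```python
#!/usr/bin/env python3
"""Sketch: tally forward successes/failures per incoming channel."""
from collections import defaultdict

from pyln.client import Plugin

plugin = Plugin()
# scid -> {"settled": n, "failed": n}; kept in memory only (assumption).
stats = defaultdict(lambda: {"settled": 0, "failed": 0})


@plugin.subscribe("forward_event")
def on_forward_event(plugin, forward_event, **kwargs):
    # Fields as in the forward record seen later in this log:
    # in_channel, out_channel, fee_msat, status, ...
    status = forward_event["status"]
    if status in ("settled", "failed"):
        stats[forward_event["in_channel"]][status] += 1


@plugin.method("forwardstats")
def forwardstats(plugin):
    """Return per-incoming-channel forward counts and success rate."""
    return {
        scid: dict(counts,
                   success_rate=counts["settled"]
                   / max(1, counts["settled"] + counts["failed"]))
        for scid, counts in stats.items()
    }


plugin.run()
```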
03:03 -!- ghost43 [~daer@gateway/tor-sasl/daer] has quit [Ping timeout: 240 seconds]
03:03 < darosior> But i often want to try out new nodes that appeared since the last massive opening
03:03 < darosior> Thanks for multichannel btw <3
03:03 < zmnscpxj> np np np
03:03 < zmnscpxj> that took several months lol
03:03 < darosior> I used a hacky python script that did not deserve to exist ^^
03:03 < zmnscpxj> what does it use?
03:04 < zmnscpxj> hacky = working :P
03:04 < darosior> "how are you measuring forwarding success rate? `forward_event` notification?" => Prometheus metrics and a custom Grafana dashboard
03:04 < darosior> Then guesstimation
03:04 < zmnscpxj> okay, now wtf is Prometheus and Grafana?
03:05 < darosior> https://grafana.com/grafana/dashboards/12397
03:05 < zmnscpxj> okthx
03:05 < darosior> Prometheus is the information source, and Grafana the nice display
03:05 < darosior> https://github.com/lightningd/plugins/tree/master/prometheus
03:06 < zmnscpxj> much interested in heuristics for selecting peers and deselecting them (i.e. closing channels)
03:07 < darosior> Do you know about moneni ?
03:07 < zmnscpxj> no
03:07 -!- ghost43 [~daer@gateway/tor-sasl/daer] has joined #c-lightning
03:07 < darosior> I want to do basically this, but in a more reliable and more heuristic manner, with https://github.com/lightningd/plugins/pull/82
03:08 < darosior> Their thing is https://www.moneni.com/mcb/nodematch
03:08 < zmnscpxj> ah, yes, saw that before
03:08 < zmnscpxj> but I cannot find details on the algo... hmmm
03:08 < zmnscpxj> need the algo so I can run it on my brain
03:08 < zmnscpxj> (just to be clear, I am not some form of AI)
03:09 < darosior> Yeah, that's not the only one
03:10 < darosior> Actually the script in #87 led me to find some nodes used by people for heuristics that were totally unknown to me (like not 1ML superstars, but def better peers to fund a channel with)
03:10 -!- vasild [~vd@gateway/tor-sasl/vasild] has quit [Ping timeout: 240 seconds]
03:10 < zmnscpxj> ?
03:10 < darosior> And they had some browser app to find connections
03:10  * darosior tries to find it
03:11 < zmnscpxj> ah
03:11 < zmnscpxj> ...is there a good description of the algo?
03:12 -!- vasild [~vd@gateway/tor-sasl/vasild] has joined #c-lightning
03:12 < darosior> I don't remember but it seemed way more descriptive
03:13 < az0re> much interested in heuristics for selecting peers and deselecting them (i.e. closing channels)
03:13 < az0re> +1
03:13 < az0re> I have some hacky python scripts
03:13 < az0re> But still not clear on what metrics I should be optimizing
03:13 < zmnscpxj> hacky = working :P
03:14 < az0re> To some extent, at least :)
03:14 < az0re> But they've been hit-or-miss in terms of getting me more payments traffic
03:14 < zmnscpxj> what heuristics are you using?
03:15 < az0re> That nodematch thing says: "It does not take capacities or balances into account, but focuses in maximizing the number of nodes reached in a minimal number of hops."
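[Editor's sketch: moneni does not publish its algorithm, but a hop-focused reach score of the kind nodematch describes can be approximated with a plain BFS over the gossiped channel graph, much as zmnscpxj suggests next. Everything below (the adjacency-dict graph of node_id -> set of peers, the hop budget, the scoring) is an illustrative guess, not the real nodematch algorithm.]

```python
# Toy nodematch-style score: how many extra nodes become reachable within
# `max_hops` if we open a channel to `candidate`? Assumes `graph` is a
# dict mapping node_id -> set of peer node_ids (e.g. built from the
# output of `lightning-cli listchannels`).
from collections import deque


def reach_within(graph, source, max_hops):
    """Count nodes reachable from `source` in at most `max_hops` hops."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if dist[node] == max_hops:
            continue
        for peer in graph.get(node, ()):
            if peer not in dist:
                dist[peer] = dist[node] + 1
                queue.append(peer)
    return len(dist)


def score_candidate(graph, me, candidate, max_hops=3):
    """Extra nodes reached within the hop budget if we peer with candidate."""
    before = reach_within(graph, me, max_hops)
    # Temporarily add the hypothetical channel, measure, then undo it.
    graph.setdefault(me, set()).add(candidate)
    graph.setdefault(candidate, set()).add(me)
    after = reach_within(graph, me, max_hops)
    graph[me].discard(candidate)
    graph[candidate].discard(me)
    return after - before
```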
03:15 < az0re> Well, mine does take capacities and fees into account
03:16 < az0re> Low-fee reachability: Number of nodes that I can reach below a given fee threshold
03:16 < zmnscpxj> I imagine for nodematch, you can run a Dijkstra starting from your node, complete the entire graph, then find the leaves in the shortest-path tree
03:16 < zmnscpxj> and sort them
03:16 < az0re> Also incoming: Number of nodes that can reach me below a given fee threshold
03:16 < zmnscpxj> ah, good idea as well
03:18 < az0re> Then another that creates some BS metric combining the number of additional low-fee-reachable nodes given by peering with a given node, as well as how much the average low-fee-reachable distance (i.e. number of node hops) is reduced by peering with that node
03:19 < az0re> Under the theory that fewer hops -> more likely to succeed in routing
03:19 < zmnscpxj> thanks
03:19 < az0re> But I am just pulling these metrics out of my ass and haven't really given it serious thought
03:20 < zmnscpxj> well, heuristics have to start *somewhere*
03:20 < az0re> I'd like to make one that takes my channel balances into account and suggests nodes that make rebalancing my unbalanced channels possible/cheaper/shorter-path
03:22 < zmnscpxj> look at which peer has the most incoming to *you*, then select a peer of that peer, preferring those that have low feerates going to that node
03:22 < az0re> Something like finding low-weight (and low-hop) cycles for each unbalanced channel, then aggregating to suggest the nodes overall most useful for rebalancing
03:22 < az0re> Need to think about that
03:24 < az0re> Yeah, that's probably correct, but the aggregation I think is the key: finding the best nodes that make rebalancing multiple unbalanced channels possible
03:25 < az0re> Anyway...
03:25  * az0re goes AFK
03:25 < zmnscpxj> thanks!
03:45 < zmnscpxj> Also, a thing I do not see much discussion about (everyone is discussing peers) --- how about feerates?
03:45 < zmnscpxj> in price theory, there is always some optimum price
03:45 < zmnscpxj> if your price is too low, sure the customer base increases, but your earnings per unit are lower due to the lower price
03:46 < zmnscpxj> if the price is too high, the customer base contracts and you might earn more per unit but sell fewer units
03:46 < zmnscpxj> so finding the optimal feerates is needed as well
04:01 < jasan> What is this (from the lightningd debug log)? DEBUG: Ignoring spammy update for 604620x1054x0/1 (last 1601366054, now 1601376483)
04:06 < jasan> Another thing: I also see a strange channel on my mainnet node. It says funding_txid: 074f9fbd8b818a96c55ef3dc03642fb98656a7738b2fbf69fb03468f3fb1545b but there is no such txid on mainnet.
04:07 < zmnscpxj> is it in AWAITING_LOCKIN?
04:16 < jasan> zmnscpxj: yes
04:16 < zmnscpxj> do you have funds in it?
04:16 < jasan> zmnscpxj: no, it seems like someone was opening a channel to me
04:17 < zmnscpxj> yes, that is a possibility
04:17 < zmnscpxj> I think it will eventually be forgotten by our node if the transaction never confirms
04:17 < zmnscpxj> so "not your problem", just wait it out
04:17 < jasan> OK
04:17 < jasan> thank you
04:17 < zmnscpxj> I think :)
04:17 < zmnscpxj> hahaha
04:18 < zmnscpxj> but if you have no funds in it, it should be relatively risk-free for you
04:18 < jasan> sure
04:18 < zmnscpxj> more annoying than harmful
04:21 < jasan> not annoying at all, I was just wondering and wanted to talk it over
04:22 < zmnscpxj> Yeah but it takes up space in your `listpeers`.
04:22 < zmnscpxj> also prevents you from making a proper channel with that peer
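[Editor's sketch: zmnscpxj's price-theory point above (income = feerate * volume, with volume falling as the feerate rises) can be made concrete with a toy demand curve. The linear demand function below is a made-up assumption; it only illustrates that the optimum is interior, neither zero nor maximal, and that real demand would have to be estimated from forward stats.]

```python
# Toy illustration: expected routing income is feerate * volume(feerate),
# and volume drops as the feerate rises, so there is an interior optimum.
# The linear demand curve (base, slope) is entirely made up.


def daily_volume_msat(fee_ppm, base=50_000_000, slope=40_000):
    """Hypothetical demand: msat routed per day falls as the fee rises."""
    return max(0, base - slope * fee_ppm)


def daily_income_msat(fee_ppm):
    # Proportional fee only, expressed in parts-per-million of the volume.
    return fee_ppm / 1_000_000 * daily_volume_msat(fee_ppm)


best = max(range(0, 1500), key=daily_income_msat)
print(best, daily_income_msat(best))
# With these made-up numbers the optimum lands at 625 ppm: too cheap earns
# little per sat routed, too expensive routes nothing at all.
```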
04:38 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has left #c-lightning []
04:38 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has joined #c-lightning
05:03 -!- zmnscpxj [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has quit [Ping timeout: 240 seconds]
05:17 -!- dr-orlovsky [~dr-orlovs@31.14.40.19] has quit [Ping timeout: 264 seconds]
05:27 -!- dr-orlovsky [~dr-orlovs@31.14.40.19] has joined #c-lightning
05:47 < jasan> I get the following test error:
05:47 < jasan> ERROR tests/test_closing.py::test_penalty_htlc_tx_timeout - ValueError:
06:13 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has joined #c-lightning
07:12 -!- kexkey [~kexkey@37.120.205.214] has joined #c-lightning
07:31 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has quit [Quit: jonatack]
07:32 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has joined #c-lightning
07:53 <@cdecker> jasan: any changes that might have caused this? We do have some flaky tests (tests that fail randomly) and that's one of them IIRC
07:54 <@cdecker> I'm tracking the tests that fail on Travis with a simple script here: http://46.101.246.115:5001/
07:54 < jasan> cdecker: no, I am testing master and it seems I was getting timeouts on Travis (I have set up my clone of lightning to go on my Travis, which is a bit limited compared to the paid one)
07:54 <@cdecker> That test has a 4.3% failure rate on master
07:56 < jasan> I see. Is that failure rate caused only by timeouts?
07:57 <@cdecker> Can have a couple of causes, timeouts are often the symptom, and the cause can be many things (usually wrong assumptions encoded in the test, like log-line order, or the ordering of events that we observe)
07:58 <@cdecker> We do try to address the flaky tests and fix them up on a regular basis, but they are annoying to see when you're working on other things
07:58 < m-schmoock> cdecker: why are travis tests flaky for clang, gcc, liquid, postgres ?
07:59 <@cdecker> Hence that tool to track the worst offenders :-)
07:59 < jasan> m-schmoock: I think that is caused by Travis not seeing any output and killing the process after 10 minutes of no output.
08:00 < m-schmoock> I had a weird assertion that happened for the clang job in tests/test_pay.py::test_setchannelfee_zero, it was clearly calculating the fees incorrectly
08:00 < m-schmoock> i tried locally, fine. i restarted, now the job has another error
08:00 <@cdecker> Because they are slightly different in their behavior: liquid has a fee output so output ordering assumptions may be wrong, postgres has different timing and result ordering whereas sqlite3 maintains insertion order, gcc and clang may result in different binaries being generated (though these are really really rare to be the cause)
08:00 < m-schmoock> as if it calculates wrong
08:01 < m-schmoock> anyway, jobs on #4096 are red (not because of the PR), and restarting doesn't help
08:02 <@cdecker> It can be caused by things that you might not be considering (a block coming in changing the feerate returned by estimatesmartfees, the mempool seeming different, pre-/post-applying an HTLC which changes the balances, etc)
08:02 < jasan> cdecker: but tests use regtest, no?
08:03 < jasan> how can a "block coming in" cause that?
08:03 <@cdecker> Yes, and we mock away the estimatesmartfee calls using a reverse proxy; that was a bad example, but you see there are a lot of places that might influence the behavior
08:04 < m-schmoock> cdecker: it was failing on tests/test_pay.py::test_setchannelfee_zero (line 2098) where the testcase explicitly sets the routing fees to zero. still the clang binary had sent not 4999999msat but 5000053-something
08:04 < m-schmoock> I found that fishy
08:04 < m-schmoock> but after restarting the job it failed elsewhere
08:13 <@cdecker> Might have been the pay plugin misbehaving, adding randomness when it shouldn't
08:15 <@cdecker> Usually the way to reproduce these is by running pytest with just that test dozens of times: `pytest tests -k name_of_the_flaky_test --count=50 -x`
08:15 <@cdecker> That requires pytest-repeat for the `--count` argument
08:15 <@cdecker> `-x` will exit at the first failure
08:17 <@cdecker> You can also throw in a `-n 10` if you have `pytest-xdist` installed, that will run 10 instances in parallel
08:17 <@cdecker> (at the expense of way more noise in the logs when a test fails, since it kills all workers with `-x` and each will print a backtrace)
08:25 < jasan> m-schmoock: "FAILED tests/test_pay.py::test_setchannelfee_zero - assert 5000049 == 4999999
08:25 < jasan> 4142
08:26 < jasan> m-schmoock: https://travis-ci.org/github/jsarenik/lightning/jobs/731302011
08:29 <@cdecker> Interesting, I wonder if it's trying to use the routehint included in the invoice, which hasn't seen the channel fee change yet
08:30 <@cdecker> Lines 3782 to 3792
08:32 < jasan> cdecker: does it mean that in reality there is also some randomness to pay plugins? I have to be missing something, because where I come from computers are deterministic and programs can always be run the same way to produce the same result.
08:33 < jasan> By the way, is Travis reliable? https://travis-ci.org/github/jsarenik/lightning/jobs/731302015
08:33 <@cdecker> Yep, my feeling exactly, we randomize a lot in order to better protect a user's privacy, exactly because being predictable lets you infer loads of things
08:33 < jasan> The command "git checkout -qf 197ca77161db5bbe079fd3a358b03631066fee3c" failed and exited with 128 during .
08:33 <@cdecker> That happens when a branch gets force-pushed before the travis job runs
08:34 < jasan> ah, OK, thanks
08:34 <@cdecker> Usually it just means that you weren't interested in the results anyway, hence we ignore them :-)
08:34 < jasan> yes, that's what I did on that branch
08:34 < jasan> Right.
08:35 < jasan> cdecker: So the answer is "yes, Travis is generally reliable"?
08:35 <@cdecker> Regarding randomness, I feel your pain, but it's a necessary evil in this case. Besides, any complex enough system is bound to have some (apparent) randomness, hence unit tests usually test tiny bits of functionality
08:36 <@cdecker> Travis is mostly stable, but does have its quirks from time to time (and it's slow)
08:38 < jasan> "necessary evil" - sounds like security through obscurity
08:38 < jasan> (which does not really work)
08:39 <@cdecker> How so? Randomized algorithms are a very useful tool in many different situations (they can get pretty good approximations to NP-hard problems, for example)
08:40 <@cdecker> The key of our use of randomness here is to not be predictable by an attacker attempting to work back from his observations
08:41 < jasan> I see, and I have read the paper Lisa recommended at the first meeting.
08:41 <@cdecker> Which paper was that?
08:42 < jasan> Towards Measuring Anonymity
08:42 < jasan> kuleuven.ac.be
08:42 <@cdecker> Ah, Claudia Diaz' paper
08:42 < jasan> Yes.
08:43 <@cdecker> It's still on my to-read list :-(
08:44 < jasan> I can read it to you aloud :)
08:45 <@cdecker> Hehe, papers as a podcast, I think that's a good niche (I'd definitely subscribe btw)
08:50 -!- jb55 [~jb55@gateway/tor-sasl/jb55] has joined #c-lightning
09:33 < m-schmoock> cdecker: thx. it does not like the --count but the -n 10 works
09:33 < m-schmoock> unrecognized arguments: --count=50
09:33 < m-schmoock> pytest --help does not show count for me
09:35 < m-schmoock> maybe --force-flaky and --max-runs ?
09:37 < m-schmoock> --force-flaky --min-passes=40 --max-runs=50 -n 10 does it
09:48 < jasan> cdecker: Yes, but you correctly noted that the randomness in a complex system is just apparent.
09:49 < jasan> cdecker: What can I do to help make the testing more deterministic and less disturbing (I mean fewer false positives)?
09:50 < m-schmoock> aahh, this is pytest-repeat if you need --count
09:50 < m-schmoock> or pytest-count
09:51 < m-schmoock> nice
09:54 <@cdecker> Yep, pytest-repeat is the one you need
09:55 < m-schmoock> nice, now that I know that I can hunt those pesky flakes myself better
09:55 < m-schmoock> thx
09:55 <@cdecker> jasan, if you want to help with flaky tests the trick is to run the tests in a loop until they fail, add instrumentation and narrow down what is causing the occasional failure
09:56 <@cdecker> But be warned, it's very tedious work, can be frustrating, and takes quite some knowledge about internals to guess what is going wrong, so definitely don
09:57 <@cdecker> *'t feel obliged to look into it. I'd rather you work on something more rewarding :-)
10:06 < jasan> cdecker: Yes please. More rewarding and simple. So that I can learn something and build upon it. Then later when I grow old I may be able to fix the flaky tests :)
10:07 <@cdecker> We've started adding the "good first issue" tag to issues on GH a while ago, those are well suited to start digging into the code
10:08 <@cdecker> The best thing of course is to fix something that is annoying to you as a user (scratch your own itch), I'm sure you have something that could be improved from your point of view
10:11 < jasan> I worked as a QA Engineer at Red Hat before. So I am concerned about tests :)
10:11 < jasan> Have a look at https://github.com/ElementsProject/lightning/pull/4102
10:11 < jasan> No need to merge. Just showing what I played with.
10:11 < jasan> Let's call it a day here. Bye!
10:13 -!- jasan [~jasan@tunnel509499-pt.tunnel.tserv27.prg1.ipv6.he.net] has quit [Quit: Bye!]
10:21 -!- az0re [~az0re@gateway/tor-sasl/az0re] has quit [Remote host closed the connection]
10:35 -!- jb55 [~jb55@gateway/tor-sasl/jb55] has quit [Remote host closed the connection]
10:37 -!- felixweis [sid154231@gateway/web/irccloud.com/x-vqevwrayegyhoxun] has quit [Ping timeout: 246 seconds]
10:37 -!- Galvas [sid459296@gateway/web/irccloud.com/x-xctslzasbdoehkez] has quit [Ping timeout: 260 seconds]
10:37 -!- RubenSomsen [sid301948@gateway/web/irccloud.com/x-lrokbsurnqtoyeht] has quit [Ping timeout: 260 seconds]
10:38 -!- fjahr [sid374480@gateway/web/irccloud.com/x-xxhgfjymcdjzupvi] has quit [Ping timeout: 260 seconds]
10:44 -!- kristapsk [~KK@gateway/tor-sasl/kristapsk] has joined #c-lightning
10:50 -!- fjahr [sid374480@gateway/web/irccloud.com/x-mhucahffcjrazsug] has joined #c-lightning
10:50 -!- Galvas [sid459296@gateway/web/irccloud.com/x-vuifxfwtmszywtqe] has joined #c-lightning
10:58 -!- felixweis [sid154231@gateway/web/irccloud.com/x-imvmwyczsyyifppw] has joined #c-lightning
11:03 -!- ark [59205054@89.32.80.84] has joined #c-lightning
11:03 < ark> hello
11:03 < ark> { "in_channel": "618002x1273x0", "out_channel": "618542x692x0", "in_msatoshi": 123005935, "in_msat": "123005935msat", "out_msatoshi": 123003705, "out_msat": "123003705msat", "fee": 2230, "fee_msat": "2230msat", "status": "failed", "received_time": 1582888180.035,
11:03 < ark> "resolved_time": 1582888187.652 }
11:05 < ark> Shouldn't the fee field be in satoshi? I say this because there is already a specific field for msat
11:10 < ark> I have looked at the documentation and such. But it doesn't seem to make much sense to have 2 fields referring to the same thing
11:10 -!- RubenSomsen [sid301948@gateway/web/irccloud.com/x-ezdlvbmqqcwlpxjv] has joined #c-lightning
11:14 -!- kristapsk [~KK@gateway/tor-sasl/kristapsk] has quit [Remote host closed the connection]
12:18 -!- kristapsk [~KK@gateway/tor-sasl/kristapsk] has joined #c-lightning
12:20 <@cdecker> ark: the unit-less fields are deprecated and will be removed in the future, hence the duplicate information
12:34 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has quit [Ping timeout: 260 seconds]
12:37 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has joined #c-lightning
12:49 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has left #c-lightning []
12:49 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has joined #c-lightning
13:15 -!- az0re [~az0re@gateway/tor-sasl/az0re] has joined #c-lightning
13:52 -!- ghost43 [~daer@gateway/tor-sasl/daer] has quit [Ping timeout: 240 seconds]
13:53 -!- ghost43 [~daer@gateway/tor-sasl/daer] has joined #c-lightning
14:45 -!- fiatjaf [~fiatjaf@2804:7f2:2a85:2fc:ea40:f2ff:fe85:d2dc] has quit [Ping timeout: 272 seconds]
14:53 -!- felixweis [sid154231@gateway/web/irccloud.com/x-imvmwyczsyyifppw] has quit [Ping timeout: 244 seconds]
14:53 -!- RubenSomsen [sid301948@gateway/web/irccloud.com/x-ezdlvbmqqcwlpxjv] has quit [Ping timeout: 260 seconds]
14:54 -!- Galvas [sid459296@gateway/web/irccloud.com/x-vuifxfwtmszywtqe] has quit [Ping timeout: 260 seconds]
14:54 -!- fjahr [sid374480@gateway/web/irccloud.com/x-mhucahffcjrazsug] has quit [Ping timeout: 260 seconds]
14:55 -!- fjahr [sid374480@gateway/web/irccloud.com/x-dmlcmuddvuflzgsf] has joined #c-lightning
14:56 -!- RubenSomsen [sid301948@gateway/web/irccloud.com/x-rhglvnghwhhrcoaz] has joined #c-lightning
14:56 -!- Galvas [sid459296@gateway/web/irccloud.com/x-ymztybpytyaaaheh] has joined #c-lightning
14:57 -!- felixweis [sid154231@gateway/web/irccloud.com/x-hxyatmrqpswznyge] has joined #c-lightning
15:10 -!- vasild [~vd@gateway/tor-sasl/vasild] has quit [Ping timeout: 240 seconds]
15:12 -!- vasild [~vd@gateway/tor-sasl/vasild] has joined #c-lightning
16:18 -!- ark [59205054@89.32.80.84] has quit [Remote host closed the connection]
16:52 -!- zmnscpxj [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has joined #c-lightning
17:14 -!- az0re [~az0re@gateway/tor-sasl/az0re] has quit [Quit: Leaving]
17:31 -!- mrostecki [~mrostecki@gateway/tor-sasl/mrostecki] has quit [Ping timeout: 240 seconds]
17:45 -!- az0re [~az0re@gateway/tor-sasl/az0re] has joined #c-lightning
18:04 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Ping timeout: 240 seconds]
18:25 -!- az0re [~az0re@gateway/tor-sasl/az0re] has quit [Ping timeout: 240 seconds]
18:26 -!- az0re [~az0re@gateway/tor-sasl/az0re] has joined #c-lightning
19:15 -!- Netsplit *.net <-> *.split quits: queip, fjahr, felixweis, jonatack
19:15 -!- fjahr [sid374480@gateway/web/irccloud.com/x-rbbwbtbigjvdlmkh] has joined #c-lightning
19:15 -!- felixweis [sid154231@gateway/web/irccloud.com/x-krefugjevbfmhthz] has joined #c-lightning
19:15 -!- Netsplit over, joins: queip
19:15 -!- Netsplit over, joins: jonatack
19:47 -!- zmnscpxj_ [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has joined #c-lightning
19:48 -!- zmnscpxj [~zmnscpxj@gateway/tor-sasl/zmnscpxj] has quit [Remote host closed the connection]
19:49 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has joined #c-lightning
20:10 -!- DeanGuss [~dean@gateway/tor-sasl/deanguss] has joined #c-lightning
21:01 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has left #c-lightning []
21:01 -!- blockstream_bot [blockstrea@gateway/shell/sameroom/x-zzgbwiizjmpgirfq] has joined #c-lightning
21:02 -!- az0re [~az0re@gateway/tor-sasl/az0re] has quit [Remote host closed the connection]
21:49 -!- az0re [~az0re@gateway/tor-sasl/az0re] has joined #c-lightning
22:51 -!- jb55 [~jb55@gateway/tor-sasl/jb55] has joined #c-lightning
23:04 -!- justanotheruser [~justanoth@unaffiliated/justanotheruser] has quit [Ping timeout: 244 seconds]
23:13 -!- jasan [~jasan@tunnel509499-pt.tunnel.tserv27.prg1.ipv6.he.net] has joined #c-lightning
23:14 < jasan> zmnscpxj_: Hi!
23:14 < zmnscpxj_> hello
23:15 < jasan> zmnscpxj_: In Tor, do the nodes know about all other nodes? Do they use something like lightning.network's gossip? (I am wondering and trying to find some working example of gossip.)
23:16 < zmnscpxj_> What do you mean?
23:16 < zmnscpxj_> all nodes are gossiped
23:16 < jasan> In lightning.
23:16 < zmnscpxj_> we do not really particularly care what addresses they expose
23:16 < zmnscpxj_> they can expose an IP and an .onion, and it would be propagated over gossip as well
23:16 < zmnscpxj_> or are you referring to DNS seeds?
23:17 < jasan> Take Tor without lightning: I was wondering about the underlying communication and comparing it to the one used in lightning. Sorry for the confusion, I am not talking about the situation when lightning uses Tor.
23:18 < zmnscpxj_> ah, okay
23:18 < az0re> All routing nodes are public
23:18 < jasan> az0re: in Tor?
23:18 < az0re> Yes
23:18 < zmnscpxj_> sorry, not sure
23:18 < az0re> However, there are bridges
23:18 < jasan> az0re: and all know about all the others?
23:18 < az0re> And Tor clients are not routers
23:19 < zmnscpxj_> Tor is a bit complex due to legal implications
23:19 < az0re> Other than that, all Tor nodes are public
23:19 < zmnscpxj_> there are exit nodes, routers, and bridges
23:19 < az0re> They are included in the consensus, from which Tor clients select nodes with which to build circuits
23:19 < zmnscpxj_> exit nodes take on legal risk since they could access websites that are illegal in the jurisdiction they operate in
23:20 < zmnscpxj_> Routers just talk to other Tor nodes
23:20 < jasan> How does it scale?
23:20 < zmnscpxj_> bridges let Tor clients in restricted countries (*cough* C *cough*) connect to them
23:20 < zmnscpxj_> it scales because there are no persistent channels like in Lightning
23:21 < jasan> I read something about one million lightning (Rusty's experiment).
23:21 < jasan> Ah, I see.
23:21 < zmnscpxj_> any router can connect to any other router and route messages
23:21 < az0re> Each Tor routing node is implicitly connected to all others
23:21 < zmnscpxj_> yes
23:21 < az0re> Exactly
23:21 < zmnscpxj_> though they might not be "actually" connected until some onion tells them to
23:21 < jasan> And what if routing on the Lightning network was free for all? There would be channels, but no one would care which they use.
23:22 < zmnscpxj_> well, we could do something like have two levels of routing
23:22 < az0re> TCP SYN = nearly free
23:22 < az0re> Bitcoin onchain TX fee = not free
23:23 < zmnscpxj_> on one level we have channels and publicly-visible "I need to get this payment to node X"
23:23 < jasan> Yes, but I think the only reason we keep knowledge of all the channels and try to find a way through them is fees, no?
23:23 < zmnscpxj_> on another level we have onions which can be between any arbitrary nodes
23:23 < zmnscpxj_> but that will tend to lead to longer routes and higher payment fees
23:24 < jasan> Think no fees.
23:24 < jasan> For this thought experiment at least.
23:24 < az0re> If there were no fees, Lightning Network would not be as useful
23:24 < az0re> Just use onchain TXes
23:24 < jasan> az0re: I do not agree.
23:24 < az0re> Explain :)
23:25 < jasan> az0re: If there are no fees then everybody wants to use lightning. So people start to buy BTC and set up channels.
23:25 < jasan> Whatever you have in your channel now will be worth 1000x1000x... more
23:25 < jasan> That will be the "fee"
23:26 < jasan> The fee will be that you run your machine. The same as with bitcoin full nodes. If it is not economical to run the full node, you just do not.
23:26 < az0re> Oh, I thought you meant "no onchain TX fees", not "no LN fees"
23:26 < zmnscpxj_> well, then it should be possible
23:27 < jasan> Thank you!
23:27 < zmnscpxj_> we would just imitate IP that way
23:27 < jasan> It helps to talk with colleagues.
23:27 < zmnscpxj_> i.e. at one level we just send payments with open-coded payment destinations
23:27 < zmnscpxj_> but at the payment destination, it is actually an onion that the payment destination has to unwrap
23:27 < zmnscpxj_> and send on as the next payment.
23:28 < jasan> Yes.
23:28 < jasan> Is anyone familiar with Yggdrasil-Network?
23:29 < jasan> in a way similar to Tor, even more similar to cjdns
23:31 < jasan> There are also "channels" because the nodes have peers. Particular peers they connect to.
23:32 < jasan> It is an experiment in spanning-tree routing as far as I understood.
23:32 < zmnscpxj_> well, the internet is that way too, it is not as if you are *physically* connected to a node other than your ISP
23:33 < jasan> Right. But my ISP has routing tables and not a general all-equal spanning-tree algorithm to find the next hop.
23:33 < zmnscpxj_> routing tables, spanning trees; potato, sweet potato
23:33 < zmnscpxj_> haha
23:33 < jasan> :)
23:34 < jasan> But sweet potatoes are better! :D
23:36 < jasan> In terms of democracy and a low threshold to enter there is a big difference between routing tables and spanning trees.
23:36 < zmnscpxj_> well, that would be good
23:37 < jasan> Yes, it's an experiment, but it works for me and I have been using it for >2 years already.
23:38 < jasan> Yes, <3 years also ;-)
23:39 < jasan> Have a good morning! I may be back later! And I really appreciate talking with you zmnscpxj_ and az0re !
23:39 < az0re> lol
23:39 < az0re> Ciao
23:40 -!- jasan [~jasan@tunnel509499-pt.tunnel.tserv27.prg1.ipv6.he.net] has quit [Quit: WeeChat 2.9]
--- Log closed Wed Sep 30 00:00:33 2020