--- Day changed Thu Sep 19 2019
00:44 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has joined #joinmarket
01:46 -!- takamatsu_ [~takamatsu@unaffiliated/takamatsu] has joined #joinmarket
01:46 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has quit [Read error: Connection reset by peer]
02:07 < waxwing> kristapsk, re: hanging, it's by default 20*maker_timeout_sec which is i think by default 60 or something so it waits for 20 minutes. sendpayment uses the same architecture as tumbler in that regard, it waits and retries if "stall".
02:07 < waxwing> see https://github.com/JoinMarket-Org/joinmarket-clientserver/blob/master/jmclient/jmclient/client_protocol.py#L358
02:08 < waxwing> the two scripts sendpayment and tumbler are just variants on the same thing, a Taker runs a schedule; a sendpayment is by default a single entry schedule, but you can also run it with -S with your own multi-join schedule.
02:08 < waxwing> so the two scripts sendpayment and tumbler can and should be folded together.
02:09 < waxwing> (that's just for background, ofc in real time scenarios you're not going to wait 20 minutes, so i get it :) )
02:19 -!- takamatsu_ [~takamatsu@unaffiliated/takamatsu] has quit [Ping timeout: 245 seconds]
02:36 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has joined #joinmarket
02:51 < belcher> i tracked down the cause of my problem to a commit which causes the wallet to import lots of addresses when a new one is requested
02:51 < belcher> iv made issue 401, i cant think of a good solution yet but at least we know the problem
03:00 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has quit [Read error: Connection reset by peer]
03:06 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has joined #joinmarket
03:14 < waxwing> belcher, did you see my question about that?
03:14 < waxwing> why does import take time? it should be immediate on a local machine
03:15 < waxwing> that commit made no difference at all to my tests
03:23 < waxwing> left a comment on the issue in case you're not around
03:23 < waxwing> if you're prepared to spend some time, i'm sure we could figure it out from the delta between our two situations. it's really weird to me, but def important if there are some situations where imports are slow; i've never seen it i think.
03:32 < fiatjaf> in my first 4 days running yield generator I was participating in one coinjoin per day on average. now I have more liquidity and lower fees and keep my yg running 24h per day, but there has not been a single coinjoin in the last 7 days. what could be wrong?
03:34 < waxwing> don't know, but i believe things have been functioning fine the last week. as i think i explained before, it is entirely possible to do a coinjoin with yourself to test (use -P to pick your bot) and you can check your bot is in the channel also.
03:38 < fiatjaf> isn't my wallet file locked?
03:38 < fiatjaf> if I try to sendpayment while yg is running will that work?
03:38 < waxwing> if it's important to get joins quicker, you should use tumbler not yieldgenerator. tumbler also gives better privacy properties (nobody else learns your linkages)
03:38 < waxwing> yes that's true, you would need a second wallet.
03:47 -!- StopAndDecrypt_ [~StopAndDe@107.181.189.37] has joined #joinmarket
03:48 -!- StopAndDecrypt [~StopAndDe@unaffiliated/stopanddecrypt] has quit [Ping timeout: 245 seconds]
03:50 -!- reallll [~belcher@unaffiliated/belcher] has joined #joinmarket
03:54 -!- belcher [~belcher@unaffiliated/belcher] has quit [Ping timeout: 268 seconds]
04:00 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has quit [Read error: Connection reset by peer]
04:06 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has joined #joinmarket
04:07 -!- rdymac [uid31665@gateway/web/irccloud.com/x-avwxlvnlzkfizduy] has joined #joinmarket
04:22 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has quit [Read error: Connection reset by peer]
04:28 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has joined #joinmarket
05:07 < Sentineo> the lack of coinjoins is on my part as well, probably some market forces. Do not think it is a config issue, as I did not change a thing.
05:21 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has quit [Read error: Connection reset by peer]
05:27 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has joined #joinmarket
05:34 -!- deafboy [quasselcor@cicolina.org] has quit [Ping timeout: 246 seconds]
05:38 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has quit [Read error: Connection reset by peer]
05:47 -!- deafboy [quasselcor@cicolina.org] has joined #joinmarket
06:11 -!- Zenton [~user@unaffiliated/vicenteh] has quit [Read error: Connection reset by peer]
06:14 -!- Zenton [~user@unaffiliated/vicenteh] has joined #joinmarket
06:16 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has joined #joinmarket
06:27 -!- rdymac [uid31665@gateway/web/irccloud.com/x-avwxlvnlzkfizduy] has quit [Quit: Connection closed for inactivity]
06:28 -!- MaxSan [~four@109.202.107.5] has joined #joinmarket
06:31 -!- reallll is now known as belcher
06:32 -!- MaxSan [~four@109.202.107.5] has quit [Client Quit]
06:33 < belcher> back
06:36 < belcher> waxwing iv replied to your comment, tl;dr i dont know why the imports take time
06:37 < belcher> im happy to spend time on this
07:16 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has quit [Ping timeout: 276 seconds]
07:26 < waxwing> ok i'm back.
07:27 < waxwing> belcher, so i'll get a log of an example my side.
07:32 < waxwing> maker side, seeing no delay here: https://0bin.net/paste/j9sRssnA28xkiKeb#vZq5TOOJzt6ydX6l04opV7+bNBi3taWhQVzAoIA9Ea1
07:32 < waxwing> will look at taker side now
07:34 < waxwing> taker side: https://0bin.net/paste/cSLKuXay2CbUG-1O#Sq-pJj3S1xgqMZEIagYjzpSFTtdBIz3U6OMeGNpPpYw
07:35 < waxwing> looks like maybe 0.5 seconds? it might depend on stuff happening in JM code; can put a timing call around the rpc itself. but anyway i've never seen an appreciable delay before.
07:35 < belcher> i wonder if its something like hdd vs ssd, my laptop uses hdd
07:36 < waxwing> hmm. i guess we have to narrow it down at some point to something. but ... 30 seconds? seems excessive?
07:36 < waxwing> especially because it's regtest.
07:37 < waxwing> to be clear in case it isn't obvious, i've been doing these tests on regtest with 3 and more makers over a very large length of time on different machines, never seen this.
07:37 < waxwing> but yes ssd. but, really, would Core struggle with a regtest instance?!
07:37 < belcher> from my point of view it doesnt seem that unusual, thinking back rpc calls were always a little bit slow... not massively slow but noticeable
07:38 < belcher> im thinking about EPS here too, that makes a lot of RPC calls at the start and it does take 5-10 seconds, much more on a pi or other low powered machine
07:38 < waxwing> half a second slow sure; 30 seconds slow?
07:38 < waxwing> right. so you're thinking it could be CPU bound maybe?
07:38 < belcher> idk
07:38 < waxwing> maybe you could test by running importmulti from the command line and somehow giving it a very large list like 60 or more?
07:39 < belcher> good idea, sec
07:40 < waxwing> i guess there's no avoiding delving into what import actually does. do you remember gmaxwell saying importing a million addresses shouldn't even cause a problem? :)
07:41 < belcher> i think he was talking about having core watch a million addresses at once
07:41 < belcher> in my experience for example with EPS, importing all the addresses the first time you run it always took a while
07:42 < belcher> ofc that uses importaddress not importmulti so it can speed up but its still non-instant
07:46 < belcher> hold on my core has crashed, im in the process of coming up with an importmulti command line to run
07:49 < waxwing> yeah i'm doing it too for comparison.
07:49 < belcher> where will you get the bitcoin addresses from?
07:52 < belcher> ill write a short script to do it, as importmulti expects a json object
07:54 -!- arubi [~ese168@gateway/tor-sasl/ese168] has quit [Remote host closed the connection]
07:55 -!- arubi [~ese168@gateway/tor-sasl/ese168] has joined #joinmarket
07:58 < waxwing> i grabbed a list from printing out the imports happening during a regtest test run
07:58 < waxwing> i did a batch of 64 just now (yeah python need), seemed immediate. i'll try to do it again with a timing.
07:59 < waxwing> but i guess my experiment is of little interest.
08:00 -!- nsh [~lol@wikipedia/nsh] has joined #joinmarket
08:03 < waxwing> "Error parsing JSON" oh what joy doing rpc on command line :)
08:04 < belcher> i think ill do it via this python script to avoid that, python can also measure times
08:07 < waxwing> hmm true. that's "pure enough" of a test.
08:07 < belcher> ok done, its 25 seconds
08:07 < belcher> for 60 addresses
08:08 < nsh> what you trying to do?
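[The timing script belcher mentions is not pasted in the log. A minimal sketch of what such a script might look like, stdlib-only, assuming Core's JSON-RPC on regtest's default port 18443 with caller-supplied credentials; the helper names are hypothetical, but the importmulti request shape ({"scriptPubKey": {"address": ...}, "timestamp": "now"}) is Core's documented format:]

```python
import json
import time
from urllib.request import Request, urlopen

def importmulti_params(addresses):
    """Build the importmulti request list: watch-only, timestamp "now"
    so Core skips any rescan of historical blocks."""
    return [{"scriptPubKey": {"address": a},
             "timestamp": "now",
             "watchonly": True} for a in addresses]

def time_importmulti(addresses, url="http://127.0.0.1:18443",
                     auth_header=""):
    """POST one importmulti call to Core's JSON-RPC and return elapsed
    wall-clock seconds. url and auth_header are assumptions the caller
    must fill in (e.g. basic auth from rpcuser/rpcpassword)."""
    payload = json.dumps({"jsonrpc": "1.0", "id": "timer",
                          "method": "importmulti",
                          "params": [importmulti_params(addresses)]}).encode()
    req = Request(url, data=payload,
                  headers={"Authorization": auth_header,
                           "Content-Type": "application/json"})
    start = time.time()
    urlopen(req).read()
    return time.time() - start
```

[Running time_importmulti over a list of 60 addresses is the experiment both were doing: ~0.5s on waxwing's SSD machine, 25s on belcher's HDD laptop.]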
08:09 < belcher> it was assumed that importing addresses is costless but on my machine (and my other low powered machine which also runs JM and EPS) importing addresses is not instant, that gives rise to a bug
08:09 < belcher> so right now we're measuring how long it takes to import addresses
08:09 < belcher> also see https://github.com/JoinMarket-Org/joinmarket-clientserver/issues/401
08:11 < nsh> (ah cool
08:11 < belcher> iv never experienced myself that importing is really instant, it always seems like it takes some time (not very much admittedly) for as long as i can remember using bitcoin this way
08:11 < nsh> i wonder if there's an easy way to instrument which data structures are updated when you import an address
08:11 < belcher> admittedly iv only ever used bitcoin on maybe 3-4 different computers
08:11 < nsh> and time these granularly
08:11 < nsh> probably a chore. i know nothing about cpp
08:12 < belcher> sometimes decoding and encoding the json can be a bottleneck too for RPC
08:13 < belcher> theres also an issue iv run into with other things, that bitcoin core uses a lock for certain RPC calls so often its slow because of waiting and not necessarily because of CPU
08:16 < belcher> i wonder if theres another way to achieve what you wanted waxwing in the 4b4f8c9 commit, im reading the commit now
08:18 < waxwing> it's a can of worms.
08:19 < waxwing> but i really find it difficult to believe that there's any logic in it taking 30 seconds to import. i've done endless testing on multiple machines, on regtest i've never experienced an issue with imports taking a meaningful length of time.
08:20 < belcher> ill ask in #bitcoin, maybe someone knows
08:22 < belcher> so the problem solved by 4b4f8c9 is that when the user runs a joinmarket script it may have to import empty addresses up to the gap limit, because in usual operation joinmarket only imports 1 or 2 addresses at a time and not 6 (or whatever the gap limit is)
08:23 < waxwing> sure but i mean, we do 1 RPC call for import anyway.
08:25 < belcher> reducing the gap limit reduces the number of addresses imported which does speed up the call, so the bottleneck must be related to the number of imported addresses
08:25 < belcher> thats why i thought the hdd vs ssd thing
08:26 < waxwing> right, but i'm still not convinced your scenario isn't pathological. i mean, it's a clean regtest instance.
08:26 < waxwing> right.
08:26 < belcher> oh hold on heres an idea: put the wallet file and maybe the whole regtest dir in a ram disk thingy
08:26 < waxwing> makes sense.
08:26 < belcher> /dev/shm on linux
08:26 < belcher> ill give it a go
08:27 < waxwing> i wonder if perhaps scenarios where disk is near full could cause write access problems? just vaguely guessing at this point.
08:28 < waxwing> ram is a nice idea.
08:28 < belcher> the disk that the regtest dir is on has about 40gb free, 85% of it is in use
08:33 < belcher> ok yes that was it
08:33 < belcher> now the call takes 0.043 seconds
08:33 < belcher> and the bitcoin-qt GUI is noticeably much much faster, especially to startup
08:33 < belcher> i suppose hdds are just slow
08:34 < belcher> if anyone wants to give it a go, $ bitcoin-qt -datadir=/dev/shm/dotbitcoin -regtest
08:34 < belcher> before that do mkdir /dev/shm/dotbitcoin
08:34 < waxwing> oh nice belcher :)
08:35 < waxwing> well nice in a way
08:35 < waxwing> but yes would make some sense to shift all regtest testing into RAM, i suppose there might be RAM constrained situations .. still as long as locations are configurable i guess it's fine either way.
08:35 < belcher> even so, lots of people use hdds, especially in the always-on setups that run full nodes and joinmarket which often use old hardware that isnt useful for anything else
08:35 < belcher> wouldnt this issue show up on mainnet too?
08:36 < waxwing> yes it's reasonable to wonder, i just can't understand why we're not talking about 1-2 seconds. 30 seconds just seems bizarre.
08:36 < waxwing> but that presumably would involve understanding more detail about the import function.
08:37 < waxwing> anyway if we have other solutions for the problem being solved (namely that the scripts would fail to sync first time, but had to be synced at least twice after every transaction) that'll be fine with me.
08:39 < waxwing> see many other comments like in #359 and in particular this comment: https://github.com/JoinMarket-Org/joinmarket-clientserver/pull/359#issuecomment-518805195 we really should be promoting the fast sync as the default, but the nasty edge cases remain.
08:48 < belcher> i remember reading that comment
08:48 < belcher> i had some thoughts on that, perhaps another way to sync wallets that have just been restored from backup: what if we use the RPC call scantxoutset
08:49 < belcher> which scans the UTXO set and doesnt need a rescan... so users could use that mode to see if their wallet has money and the right amount of money in it, and if it does then use the existing sync (whether --fast or not)
08:53 < waxwing> sounds at least interesting (btw i did my run, no surprise it's ~0.5 secs for 64 addresses. i guess i can try 32 and see if it's linear)
08:55 < waxwing> yeah, about linear: 0.29s
08:56 < waxwing> i mean what the frick is it doing, since it's not rescanning it has no idea of the history of that address; isn't it just adding it to a list in a database?
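[belcher's scantxoutset idea could be sketched roughly as below. The wpkh() descriptor form and the helper name are assumptions (JoinMarket segwit wallets; a legacy wallet would use pkh); the actual call would be an RPC of the form scantxoutset("start", objs), whose result includes a "total_amount" field — enough to check whether a restored seed holds funds, though, as noted in the discussion, it yields no transaction history:]

```python
def scan_objects_for_xpubs(xpubs, scan_range=60):
    """Build scantxoutset scan objects covering the external (0) and
    internal/change (1) branch of each account xpub, up to scan_range
    derivation indices per branch."""
    objs = []
    for xpub in xpubs:
        for branch in (0, 1):
            objs.append({"desc": f"wpkh({xpub}/{branch}/*)",
                         "range": scan_range})
    return objs
```

[The appeal is that scantxoutset walks Core's UTXO set directly, so no addresses need importing and no rescan is triggered; the limitation belcher lands on below follows from it seeing only unspent outputs.]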
08:57 < belcher> your comment talks about detailed/nonfast sync being only useful for recovering a wallet, but the scantxoutset method of syncing seems to be an even better way of recovering a wallet (except you dont get a history, but joinmarket barely ever uses transaction history for anything)
08:58 < waxwing> it needs to query history to know what's been used, so as not to use it again, right?
08:58 < belcher> hmm yes thats right
08:59 < belcher> ok, so scantxoutset is only appropriate for seeing whether a seed phrase has money inside, not for receiving more money on it
09:03 < waxwing> maybe you're right that the encoding/decoding json and rpc part is actually the cost, however crazy that seems, because `time` tells me:
09:03 < waxwing> real 0m0.575s
09:03 < waxwing> user 0m0.010s
09:03 < waxwing> sys 0m0.006s
09:04 < waxwing> doesn't that mean it's not CPU?
09:04 < belcher> but then why would switching to ram disk make the json encoding/decoding faster
09:04 < waxwing> i guess not though. damn computers can be opaque.
09:04 < waxwing> yeah.
09:04 < belcher> regarding the wallet how about this: so the problem with default sync is when a user has been running a maker for a while and then runs wallet-tool to get more addresses and sends a transaction to one of those addresses.. but those addresses havent been imported so then the user's deposit isnt seen... as a solution what if we made wallet-tool import the addresses before displaying them
09:04 < waxwing> oh right ofc. if disk access is slow, it would not show up in `user`, `sys` so that makes sense. IO bound.
09:05 < belcher> the user cant send to any addresses without first obtaining them from wallet-tool
09:05 < belcher> and joinmarket-qt... oh wait i see why you did it this way
09:06 < belcher> so then the solution is import addresses up to the gap limit when wallet-tool is run and when joinmarket-qt displays addresses in the address tab
09:06 < belcher> ??
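[The `time` reading above — real (wall clock) far exceeding user+sys (CPU) — is the classic signature of a call blocked on IO or a lock rather than computing. The same check can be made from inside Python; the function name is illustrative:]

```python
import time

def classify_elapsed(fn):
    """Run fn() and return (wall, cpu) seconds. wall >> cpu suggests the
    call spent its time waiting (disk, network, a lock held by another
    thread) rather than burning CPU."""
    wall0, cpu0 = time.perf_counter(), time.process_time()
    fn()
    return time.perf_counter() - wall0, time.process_time() - cpu0

# a sleep stands in for a blocked RPC call: wall is ~0.2s, cpu near zero
wall, cpu = classify_elapsed(lambda: time.sleep(0.2))
```

[Applied to the importmulti call, a large wall/cpu gap would point at the wallet's database writes, consistent with the HDD-vs-ramdisk result.]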
09:06 -!- takamatsu [~takamatsu@unaffiliated/takamatsu] has joined #joinmarket
09:07 < waxwing> but also funds are received into txs generated during running
09:07 < belcher> can you rephrase?
09:08 < waxwing> the problem was that the gap limit bumps forwards at that point but before that commit that 6+1 address, if you see what i mean, was not imported, instead the +1 address was imported (but that was already imported)
09:08 < waxwing> funds are received into addresses that get generated during running (for makers, actually also for takers - change)
09:09 < waxwing> i was just trying to say you can't guarantee avoiding the problem of having funds in addresses that aren't imported, by insisting that all displayed addresses are pre-imported.
09:09 < belcher> but makers and takers get their addresses from the get_new_address() function which imports the new address..? so addresses generated in runtime must have been imported
09:12 < belcher> i mustve missed something?
09:28 < belcher> either way, right now being a joinmarket maker doesnt work if you use a hdd, so i think the code in that commit has to be reverted
09:49 < waxwing> but you are assuming that anyone that uses an HDD has a 30 second delay if they import 60 addresses. i think if that's true, you're right, but it still seems unlikely to me.
09:55 < belcher> what if it was only half of HDD users?
09:55 < belcher> jokes aside, it still doesnt work under certain conditions
09:58 < belcher> i didnt understand the problem with importing the addresses before they're displayed to users, i think that should fix the bug where users' deposits dont show up?
10:10 < waxwing> oh sure, i misunderstood i guess, i was thinking only of the other problem, that wallet sync needs to be done twice after every tx
10:11 < belcher> ah ok, iv forgotten about that
10:11 < belcher> do you know of a github comment that talks about the wallet-sync-needing-to-be-done-twice
10:11 < waxwing> that's the reason for that commit.
10:12 < waxwing> it's in that 395 thread but i can dig up the original Issue
10:12 < waxwing> it wasn't really one person's bug, everyone experienced it
10:12 < waxwing> https://github.com/JoinMarket-Org/joinmarket-clientserver/issues/328 it was that one
10:13 < belcher> yes iv had that come up
10:13 < waxwing> but it's kinda academic because as explained in 395 the pre-existing logic of sync makes it basically inevitable that that will happen. the import that happened dynamically didn't import up to the gap.
10:13 < belcher> my ad hoc rule was "keep running it until you get what you want and it starts normally"
10:14 < waxwing> one reason i considered it a big issue was that, unlike me, some people had sync (detailed) taking a long time. so this was a real pain.
10:14 < waxwing> they should almost always be using --fast anyway though. but most people don't.
10:34 -!- belcher [~belcher@unaffiliated/belcher] has quit [Ping timeout: 268 seconds]
10:41 -!- belcher [~belcher@unaffiliated/belcher] has joined #joinmarket
10:57 < belcher> i just ssh'd into my other machine which runs joinmarket to check the timestamps, and over there indeed it takes about a second to import addresses
10:57 < belcher> sometimes even 3 seconds
11:18 < belcher> waxwing i opened a PR which reverts commit 4b4f8c9, if we want to go down the route of just removing the code
11:19 < belcher> i ran into the bug while testing something else, but for now ill test by running bitcoin on /dev/shm, so its no rush in that respect
11:19 < belcher> but i think many people's makers would stop working if they ran master branch
12:15 -!- undeath [~undeath@hashcat/team/undeath] has joined #joinmarket
12:34 < fiatjaf> coinjoinXT
12:35 < CgRelayBot> [cgan/AlexCato] checked my logs: importing 60 addresses takes around 1.5 seconds for me on current master (SSD)
12:35 < CgRelayBot> [cgan/AlexCato] how much is too much? Isnt the maker timeout 60 seconds by default, or is there some other timeout which is the problem?
12:39 < belcher> AlexCato 1.5 seconds is most likely fine
12:39 < CgRelayBot> [cgan/AlexCato] yeah, but i wonder why 30s is a problem
12:39 < belcher> too much is when the taker perceives your maker as taking too long and times out
12:40 < belcher> idk exactly how long that is, 30s might be ok sometimes i guess
12:40 < CgRelayBot> [cgan/AlexCato] but i think i know, because it seems like theres often two imports running right after each other, probably one for the current mixdepth (change) and one for the next mixdepth (cjout)
12:40 < CgRelayBot> [cgan/AlexCato] in that case, the 60s would fail, if thats the relevant timeout
12:40 < belcher> ah yes
12:42 < undeath> i think that was the problem when i wanted to test something with ygrunner recently, my taker would also timeout all the makers every time
12:42 < CgRelayBot> [cgan/AlexCato] so... there's 4 ways to deal with this: 1. revert commit 2. increase taker timeout to something safe 3. reduce the number of addresses to be imported in each run 4. find a way to speed up the import
12:44 < CgRelayBot> [cgan/AlexCato] undeath... all makers shouldnt be affected, that would imply they all run on HDDs, which is very unlikely. Maybe you were unlucky or a big percentage of them does. But you're right that JM shouldnt depend on specific hardware to run properly
12:44 < undeath> it was during testing with ygrunner, so all makers were running off my hdd
12:45 < CgRelayBot> [cgan/AlexCato] ah, makes sense!
12:51 < belcher> alexcato theres also 5. redesign and do it another way, perhaps import the addresses in another place or do it without importing addresses
12:51 < belcher> (not that 5. is likely or possible, i just didnt want us to close off our thinking)
12:51 < CgRelayBot> [cgan/AlexCato] will post all this, thanks
12:52 < belcher> undeath interesting that it happened to you too
12:52 < belcher> i just ran a tumbler run with regtest running on /dev/shm which did the trick
12:53 < belcher> i think the number of imported addresses right now is 60 because the default gap limit is 10 and theres 6 mixdepths
12:54 < CgRelayBot> [cgan/AlexCato] wait, isnt 5 mixdepths the default?
12:55 < belcher> yes you're right
12:55 < waxwing> 6 for gap limit x 5 x 2
12:55 < undeath> internal+external
12:56 < waxwing> i did it that way because it was the least thinking, i assumed it was essentially free. now that it's clear i was wrong, it could be done smarter but still preserve the property of bumping the imports forward the right place.
12:56 < waxwing> it might be possible to do it with 2 imports like before.
12:56 < waxwing> but for now i'd certainly agree with belcher's decision to revert, assuming, as seems reasonable from evidence, if surprising, that HDDs generally will take on the order of 10-100 seconds not 1-10 seconds
12:57 < waxwing> i thought that really unlikely due to it being like 100 times more than what i see, but i guess that's just how it works. i'm still mystified why Core import behaves like this, but, there it is.
12:59 < waxwing> it feels like long waits would be due to needing for some reason to read or interact with like, the utxo set or some other very large data set, but i guess that's just the kind of complexity you'd only get on top of if you delved into Core code details.
13:00 < belcher> another solution might be importing them asynchronously, for example if the wallet sees it has less than 2*gap_limit addresses remaining then it will import gap_limit*5 addresses at some time in the future... which means imports wont slow down the maker's responsiveness
13:01 < belcher> you know how twisted can run callback functions after a delay
13:01 < waxwing> oh! i missed that
13:01 < waxwing> .. because i thought it was free :)
13:01 < waxwing> only Q, do we need the imports for the unconf/conf callbacks
13:02 < waxwing> we won't care about that on mainnet, it's fine, but maybe it could mess up a test conceivably.
13:02 < waxwing> anyway that sounds like by far the most sensible direction to look in.
13:03 < waxwing> still if we can't trust it to not take 30-60 seconds it could still conceivably cause issues
13:03 < belcher> just to make it explicit, the point of the 2*gap_limit and 5*gap_limit is so that the wallet never gets close to using addresses less than gap_limit away, so a user wont accidentally deposit money to an address that hasnt yet been imported
13:04 < waxwing> 2*gap_limit and 5*gap_limit? no, the x2 and 5 is the int/ext and the mixdepths, the 6 is the gap limit
13:04 < waxwing> so it's importing forwards by the gap limit on every sub-branch in the wallet
13:04 < belcher> no i think thats something else, maybe i shouldve said 20*gap_limit and 50*gap_limit
13:05 < waxwing> oh sorry you mean the thing you just suggested above, i see
13:05 < belcher> this code would be in get_new_address(), that function has to return an address but because the import will happen later the address being returned must always have been imported beforehand
13:05 < belcher> right you got it
13:05 < waxwing> see now we know there can be a perf issue then a different strategy can make sense, where we cache and notice when we get close to the edge. so yeah that kind of thing you're saying.
13:06 < waxwing> it's probably like, similarly complex to just going forward every time there's a new request, but avoids perf issues better.
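[The numbers in this exchange pin down as follows: the 60-address import is gap limit (6) x mixdepths (5) x branches (2), and belcher's deferred-import thresholds (2*gap_limit to trigger, 5*gap_limit to import) could be sketched as below; the function and variable names are hypothetical, not JoinMarket's actual code:]

```python
GAP_LIMIT = 6   # addresses imported ahead on each branch (JM default)
MIXDEPTHS = 5   # default number of mixdepths
BRANCHES = 2    # external + internal (change) branch per mixdepth

# the 60 addresses imported per bump, as worked out in the log:
TOTAL_IMPORT = GAP_LIMIT * MIXDEPTHS * BRANCHES  # = 60

def deferred_import_size(imported_headroom, gap_limit=GAP_LIMIT):
    """belcher's sketch: if fewer than 2*gap_limit already-imported
    addresses remain ahead of the wallet's next-to-use index, schedule a
    5*gap_limit import for later, off the maker's latency-critical path.
    Returns the number of addresses to import, or 0 if there is still
    enough headroom."""
    return 5 * gap_limit if imported_headroom < 2 * gap_limit else 0
```

[The 2x/5x margins mean get_new_address() can always return an already-imported address immediately, since the background import fires long before the headroom shrinks to the gap limit.]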
13:08 < belcher> yep, so the importing would be moved to a different time where the bot doesnt have to be as responsive
13:09 < belcher> in the extreme limit we could make the json rpc connection use twisted's async structure
13:09 < belcher> the jsonrpc connection could also be slow, right? twisted deals with io that might block
13:09 < waxwing> i don't think it's needed, just twisted at the app layer is fine
13:10 < belcher> agreed, i imagine the whole app would need a big redesign for that
13:10 < waxwing> oh; no, not if we just restricted ourselves to making that work.
13:10 < belcher> since every little rpc call would result in a new twisted callback
13:10 < waxwing> but, it's certainly some work, i'm right now thinking to just fold this into that speculative wallet improvement PR
13:10 < belcher> ok
13:11 < waxwing> basically there's already a minor change there, i wanted specifically to get the property that the wallet automatically updates itself as new utxos arrive
13:12 < waxwing> for example in the Qt, when you do a tx, the wallet balance tab doesn't automatically update itself, which is obviously just not good enough.
13:12 < waxwing> but that's something that should be fixed at the wallet level, then the Qt will naturally work like that.
13:12 < waxwing> similarly outside Qt if you deposit coins while your bot is running it doesn't notice.
13:13 < waxwing> so just overall fix it so the wallet is more of a 'service' that's running and can be queried. we can also (this is more speculative) push the blockchaininterface "under" the wallet so the Taker/Maker only speaks via the wallet, so changes to bci don't affect app layer.
13:13 < waxwing> but that last point is the more speculative part, it's not 100% clear, and it's certainly not *needed*.
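[The "move the blocking import off the responsive path" idea would, in JoinMarket, use twisted (e.g. reactor.callLater or thread deferral as discussed above). A self-contained sketch of the same shape using stdlib asyncio instead; maker_loop and the stand-in import function are hypothetical:]

```python
import asyncio

async def maker_loop(rpc_import, addresses):
    """Hand a blocking wallet-import call to a worker thread so the event
    loop (the maker answering orderbook requests) stays responsive while
    the import runs."""
    loop = asyncio.get_running_loop()
    fut = loop.run_in_executor(None, rpc_import, addresses)
    # ... the maker keeps servicing requests here while the import runs ...
    return await fut  # import finished; the addresses are now watched

# usage with a stand-in for the real importmulti call:
def fake_import(addrs):
    return len(addrs)  # pretend: number of addresses imported

imported = asyncio.run(maker_loop(fake_import, ["addr1", "addr2"]))
```

[Even so, as waxwing notes, a 30-60 second import running in the background could still race the unconf/conf callbacks, so deferral reduces but does not eliminate the risk.]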
14:18 -!- raedah [~x@192.30.89.51] has joined #joinmarket
15:34 -!- Zenton [~user@unaffiliated/vicenteh] has quit [Ping timeout: 246 seconds]
15:40 -!- undeath [~undeath@hashcat/team/undeath] has quit [Quit: WeeChat 2.5]
17:45 -!- AgoraRelay [~jmrelayfn@p5DE4A14D.dip0.t-ipconnect.de] has quit [Ping timeout: 245 seconds]
17:47 -!- CgRelayBot [~CgRelayBo@p5DE4A14D.dip0.t-ipconnect.de] has quit [Ping timeout: 276 seconds]
17:59 -!- AgoraRelay [~jmrelayfn@p5DE4A887.dip0.t-ipconnect.de] has joined #joinmarket
18:03 -!- CgRelayBot [~CgRelayBo@p5DE4A887.dip0.t-ipconnect.de] has joined #joinmarket