--- Day changed Wed Oct 28 2015
00:08 -!- deepcore [~deepcore@2a01:79d:469e:ed94:8e70:5aff:fe5c:ae78] has joined #bitcoin-core-dev
00:11 -!- fkhan [weechat@gateway/vpn/mullvad/x-icewhpcrqzxqibbq] has joined #bitcoin-core-dev
00:25 -!- crescendo [~mozart@unaffiliated/crescendo] has quit [Remote host closed the connection]
00:29 -!- deepcore [~deepcore@2a01:79d:469e:ed94:8e70:5aff:fe5c:ae78] has quit [Ping timeout: 252 seconds]
00:40 -!- Ylbam [uid99779@gateway/web/irccloud.com/x-xpskpfushuyiirxv] has joined #bitcoin-core-dev
01:13 < wumpus> gmaxwell: that will likely contribute to it, although most of the time corruption seems to happen on crashes / power failures, when there is no time to flush at all
01:27 < wumpus> but if that is the case too then the flush+sync on windows is essentially not working at all
01:40 < gmaxwell> someone was commenting that we were writing via mmap on windows and that the sync we were using there didn't work on maps; which sounds like the mac problem. I didn't verify these claims at all.
01:58 -!- rubensayshi [~ruben@91.206.81.13] has joined #bitcoin-core-dev
02:01 -!- BashCo [~BashCo@unaffiliated/bashco] has quit [Remote host closed the connection]
02:11 -!- aaaaok [d4044622@gateway/web/freenode/ip.212.4.70.34] has joined #bitcoin-core-dev
02:11 < wumpus> didn't check that either
02:21 -!- BashCo [~BashCo@unaffiliated/bashco] has joined #bitcoin-core-dev
02:26 -!- BashCo [~BashCo@unaffiliated/bashco] has quit [Client Quit]
02:53 -!- jtimon [~quassel@74.29.134.37.dynamic.jazztel.es] has joined #bitcoin-core-dev
03:03 < dcousens> wumpus: aye, 1 OOM and my chain was broke
03:22 -!- aaaaok [d4044622@gateway/web/freenode/ip.212.4.70.34] has quit [Ping timeout: 246 seconds]
03:47 -!- evoskuil [~evoskuil@c-73-225-134-208.hsd1.wa.comcast.net] has quit [Ping timeout: 260 seconds]
04:57 -!- fanquake [~Adium@unaffiliated/fanquake] has joined #bitcoin-core-dev
05:05 -!- fanquake [~Adium@unaffiliated/fanquake] has quit [Quit: Leaving.]
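A minimal sketch of the Windows flush issue gmaxwell mentions at 01:40. For data written through a memory-mapped view on Windows, FlushViewOfFile() only queues the dirty pages for writing; FlushFileBuffers() on the underlying file handle is what actually forces them to the device. The helper name below is illustrative, and whether LevelDB's Windows environment did both at the time is exactly the unverified claim under discussion.

    #include <windows.h>

    // Illustrative helper (not Bitcoin Core or LevelDB code): durably syncing
    // writes made through a memory-mapped view takes both calls.
    bool SyncMappedRegion(HANDLE hFile, void* base, size_t length)
    {
        // Queue the dirty pages of the mapped view for writing to the file.
        if (!FlushViewOfFile(base, length)) return false;
        // Force the file's buffered data (and metadata) out to the device.
        if (!FlushFileBuffers(hFile)) return false;
        return true;
    }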
05:05 -!- fanquake [~Adium@unaffiliated/fanquake] has joined #bitcoin-core-dev
05:20 -!- fanquake [~Adium@unaffiliated/fanquake] has left #bitcoin-core-dev []
05:48 -!- paveljanik [~paveljani@79-98-72-216.sys-data.com] has joined #bitcoin-core-dev
05:48 -!- paveljanik [~paveljani@79-98-72-216.sys-data.com] has quit [Changing host]
05:48 -!- paveljanik [~paveljani@unaffiliated/paveljanik] has joined #bitcoin-core-dev
05:57 -!- treehug88 [~textual@static-108-30-103-59.nycmny.fios.verizon.net] has joined #bitcoin-core-dev
06:02 -!- randy-waterhouse [~kiwigb@opentransactions/dev/randy-waterhouse] has joined #bitcoin-core-dev
06:03 -!- randy-waterhouse [~kiwigb@opentransactions/dev/randy-waterhouse] has quit [Client Quit]
06:09 -!- molly [~molly@unaffiliated/molly] has joined #bitcoin-core-dev
06:12 -!- moli [~molly@unaffiliated/molly] has quit [Ping timeout: 240 seconds]
06:14 -!- dcousens [~anon@c110-22-219-15.sunsh4.vic.optusnet.com.au] has quit [Quit: Lost terminal]
06:56 -!- zooko [~user@2601:281:8001:26aa:a052:7c51:a1e8:65a8] has quit [Remote host closed the connection]
06:56 -!- gavinandresen [~gavin@unaffiliated/gavinandresen] has joined #bitcoin-core-dev
06:57 -!- zooko [~user@2601:281:8001:26aa:a052:7c51:a1e8:65a8] has joined #bitcoin-core-dev
07:02 -!- davec [~davec@cpe-24-243-251-52.hot.res.rr.com] has quit [Ping timeout: 250 seconds]
07:19 -!- davec [~davec@cpe-24-243-251-52.hot.res.rr.com] has joined #bitcoin-core-dev
07:22 -!- moli [~molly@unaffiliated/molly] has joined #bitcoin-core-dev
07:25 -!- molly [~molly@unaffiliated/molly] has quit [Ping timeout: 240 seconds]
07:55 < jcorgan> cfields: did you ever make progress on #6681?
07:58 < jcorgan> i guess that would be #6819 now
08:10 -!- MarcoFalke [8af602ef@gateway/web/cgi-irc/kiwiirc.com/ip.138.246.2.239] has joined #bitcoin-core-dev
08:16 < cfields> jcorgan: no, i stopped there. I just wanted to get it building so that someone who knows zmq could make it actually work
08:22 < jcorgan> got it
08:32 -!- ParadoxSpiral [~ParadoxSp@p508B98B2.dip0.t-ipconnect.de] has joined #bitcoin-core-dev
08:38 -!- bsm1175321 [~bsm117532@38.121.165.30] has quit [Remote host closed the connection]
08:47 -!- MarcoFalke [8af602ef@gateway/web/cgi-irc/kiwiirc.com/ip.138.246.2.239] has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
09:00 -!- MarcoFalke [c3523fc5@gateway/web/cgi-irc/kiwiirc.com/ip.195.82.63.197] has joined #bitcoin-core-dev
09:02 -!- bsm1175321 [~bsm117532@38.121.165.30] has joined #bitcoin-core-dev
09:03 -!- jl2012 [~jl2012@unaffiliated/jl2012] has quit [Ping timeout: 252 seconds]
09:03 -!- jl2012 [~jl2012@unaffiliated/jl2012] has joined #bitcoin-core-dev
09:46 -!- jl2012_ [~jl2012@119246245241.ctinets.com] has joined #bitcoin-core-dev
09:47 -!- jl2012 [~jl2012@unaffiliated/jl2012] has quit [Ping timeout: 255 seconds]
09:57 -!- deepcore [~deepcore@2a01:79d:469e:ed94:8e70:5aff:fe5c:ae78] has joined #bitcoin-core-dev
10:07 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has joined #bitcoin-core-dev
10:28 -!- molly [~molly@unaffiliated/molly] has joined #bitcoin-core-dev
10:31 -!- moli [~molly@unaffiliated/molly] has quit [Ping timeout: 240 seconds]
10:47 < morcos> gmaxwell: For this fast template generation on new block code. Are you envisioning you switch to a new empty template after a new most work header? Even if you haven't validated the block you're connecting yet? And then set some sort of timeout with which you'll switch back to not being willing to build off a headers only chain?
10:48 < morcos> Once you've connected a new block, I'd say there is no reason not to wait the extra few ms to generate a block template with txs in it. Which can then be validated after it's already been served to you.
10:49 < gmaxwell> No I was not. This is less safe than many people think it is, and I think not needed if the other details are handled correctly.
10:49 < morcos> Well thats where all the delay is right? Receiving and connecting the new best block.
10:49 < sipa> how about building a new template, switching to working on it immediately, and then starting a validation for it
10:50 < morcos> sipa: yes thats what i'm suggesting. but thats after you've connected the best block. if we're still requiring to wait for that, don't we think people will choose to short circuit it less safely on their own
10:50 < gmaxwell> morcos: No, CNB latency is tens of times slower than validating normally.
10:51 < morcos> gmaxwell: ok, i'll give you multiples, but probably less than 10x unless your mempool is really really big. and thats just compared to validating, what about waiting to receive?
10:52 < morcos> i haven't looked at the time delay from receiving most work header to finishing connecting the block, any idea what that is typically?
10:52 < morcos> i bet its long
10:52 < gmaxwell> When the relay network is working normally about 80% of blocks are transmitted in a single packet.
10:52 < morcos> ah yes, forgot about relay network
10:52 < morcos> thats why i ask questions
10:53 < sipa> it still makes sense to have numbers for the time between receive inv and CNB building a template on top
10:53 < gmaxwell> Most of the delays in mining right now appear to be from outside of bitcoin core, actually.
10:54 < morcos> sipa in a relay node connected case or regular node or both
10:54 < sipa> morcos: in "reality"
10:54 < sipa> :)
10:55 < morcos> ok, well i'm almost ready to push a WIP branch. it still doesn't do it in a thread, but the gain is really rather limited at this point, and i'll save that i think for a second pull
10:55 < morcos> but the question i want to resolve is what to do when TestBlockValidity fails
10:55 < morcos> right now it throws an error
10:55 < gmaxwell> hm. we've done something recently that slowed down connectblock (or the network behavior changes have)
10:55 < gmaxwell> oh dear
10:55 < morcos> connectblock has always been slow since i've measured it
10:56 < gmaxwell> debug.log:2015-10-28 16:11:38 - Connect block: 7256.55ms [7475.14s]
10:56 < sipa> how much is fetching inputs vs verifying inputs?
10:56 < gmaxwell> wtf.
10:57 < gmaxwell> since my node here's last update to master, connect block's time has increased monotonically for every block.
10:58 < sipa> gmaxwell: coincache being thrown out by mempool bloat?
10:59 < morcos> gmaxwell, what block hash was that?
11:01 < morcos> gmaxwell: are you still running mempool.check? it runs inside that timer
11:01 < gmaxwell> morcos: before this latest round of attacks this node was taking <100ms to connect block.
11:01 < gmaxwell> morcos: must be the mempool checks then.
11:02 < gmaxwell> morcos: it's all of them.
11:02 < sipa> gmaxwell: turning on mempool checks is a certain way to blow away your cache every block
11:02 < morcos> the blocks around that time for me took 500ms and then 1ms (only coinbase)
11:04 < gmaxwell> in any case, debug logs ran this thing out of space a couple hours ago, so I've restarted it. I'll run without mempool checks to get some good timings.
11:05 < gmaxwell> Numbers I posted from shortly before the MTL event were about 80ms.
11:06 < morcos> 80ms?? hmm...
11:07 < sipa> so with a 200 MB mempool i seem to need a 700-1200 MB coincache
11:07 < sipa> with matt's latest patch
11:09 < sipa> i wonder if we should (as a short term hack) treat some factor of the size-of-pulled-in-memory of a tx as its txsize
11:11 < morcos> gmaxwell: i looked at some old numbers of mine (couple of months ago) and they were like 500ms on average (during fairly busy time, mid July)
11:11 < gmaxwell> morcos: Interesting!
11:12 < morcos> sipa: so what do you mean "need"?
11:13 < morcos> a 200MB mempool isn't really the right measure right
11:13 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
11:13 < sipa> morcos: how do you mean?
11:13 < morcos> because as you go from 0 - 200MB you'll pull in a certain amount of txin's. but as you keep running, your mempool stays at 200MB, but you'll i think tend to pull in more txin's? or does matt's patch actually remove txin's no longer requested when a tx gets mined?
11:14 < sipa> when a tx gets mined it goes into the cache with dirty bit on
11:15 < sipa> so it can't be removed from the cache anymore
11:15 < sipa> until a flush
11:15 < sipa> but just a 700 MB coincache sounds pretty painful already
11:16 < morcos> yes i think 700MB is pretty painful, but we need to think a bit about how to be smarter about it
11:17 < sipa> per-txout cache...
11:18 < morcos> yeah if it was per-txout, then you'd solve that problem
11:18 < morcos> every so often you don't flush your cache, you just write out the dirty entries
11:19 < morcos> we could still do that now, except we don't know if that tx should still be in cache b/c of other mempool txs
11:19 < sipa> well an lru or random eviction of the utxo set could work too
11:19 < sipa> the cache would just become less effective
11:23 -!- rubensayshi [~ruben@91.206.81.13] has quit [Ping timeout: 240 seconds]
11:25 < morcos> so when we flush the cache, we have to write everything correct? so its consistent. i think if we could just get smarter about what we wiped from the cache at that point, then maybe we could just flush a lot more often, or do you think that would be bad.
11:26 < sipa> yes, a flush shouldn't wipe everything
11:26 < morcos> tell me if this would be too cumbersome. i think it might be pretty fast, how long does a flush take?
11:27 < morcos> after you write everything, you quickly scan the top 10MB of txs in the mempool, and insert all of their txin.prevout.hash's into a set, and then you iterate through the cachemap erasing anything thats not one of those
11:30 -!- MarcoFalke [c3523fc5@gateway/web/cgi-irc/kiwiirc.com/ip.195.82.63.197] has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
11:33 < morcos> so actually flushing takes over a second right? i think you might be able to do something like i suggested on the order of 10's of ms, but not sure
11:33 -!- MarcoFalke [c3523fc5@gateway/web/cgi-irc/kiwiirc.com/ip.195.82.63.197] has joined #bitcoin-core-dev
11:35 -!- MarcoFalke [c3523fc5@gateway/web/cgi-irc/kiwiirc.com/ip.195.82.63.197] has quit [Client Quit]
11:36 < morcos> i don't know if it's bad (or maybe its even good) to flush more regularly. but if we did something like that, we wouldn't even need to worry about matt's patch. we could just "flush" every time the cache was getting too big.
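A self-contained sketch of the trimming pass morcos describes at 11:27; the types below are simplified stand-ins, not actual Bitcoin Core classes. The idea: collect the prevout txids spent by roughly the top 10MB of mempool transactions by fee rate, then erase every clean cache entry that none of them references, so a "flush" no longer has to wipe the whole coin cache.

    #include <algorithm>
    #include <cstdint>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    struct MempoolTx {
        std::vector<std::string> prevoutTxids;  // txids this tx spends
        size_t sizeBytes;
        double feeRate;                         // satoshis per byte
    };

    struct CachedCoin {
        bool dirty;                             // dirty entries must survive until written
    };

    void TrimCoinCache(std::map<std::string, CachedCoin>& coinCache,
                       std::vector<MempoolTx> mempool,         // copied so we can sort
                       size_t budgetBytes = 10 * 1000 * 1000)  // ~10 MB of mempool txs
    {
        // Highest fee rate first: these are the txs likely to confirm soon.
        std::sort(mempool.begin(), mempool.end(),
                  [](const MempoolTx& a, const MempoolTx& b) { return a.feeRate > b.feeRate; });

        std::set<std::string> hot;
        size_t used = 0;
        for (const MempoolTx& tx : mempool) {
            if (used > budgetBytes) break;
            used += tx.sizeBytes;
            hot.insert(tx.prevoutTxids.begin(), tx.prevoutTxids.end());
        }

        // Keep dirty entries (not yet written out) and anything the hot set references.
        for (auto it = coinCache.begin(); it != coinCache.end();) {
            if (!it->second.dirty && !hot.count(it->first))
                it = coinCache.erase(it);
            else
                ++it;
        }
    }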
11:37 -!- MarcoFalke [c3523fc5@gateway/web/cgi-irc/kiwiirc.com/ip.195.82.63.197] has joined #bitcoin-core-dev
11:38 -!- MarcoFalke [c3523fc5@gateway/web/cgi-irc/kiwiirc.com/ip.195.82.63.197] has quit [Client Quit]
11:38 -!- MarcoFalke [c3523fc5@gateway/web/cgi-irc/kiwiirc.com/ip.195.82.63.197] has joined #bitcoin-core-dev
11:39 -!- treehug88 [~textual@static-108-30-103-59.nycmny.fios.verizon.net] has quit [Quit: Textual IRC Client: www.textualapp.com]
11:41 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has joined #bitcoin-core-dev
11:43 -!- PaulCapestany [~PaulCapes@204.28.124.82] has quit [Quit: .]
11:45 -!- PaulCapestany [~PaulCapes@204.28.124.82] has joined #bitcoin-core-dev
11:51 < morcos> re: TestBlockValidity failing. I think I'm going to log an error and return NULL. Seems better than throwing an error. I'd like to reuse FormatStateMessage (in main.cpp), should I move it to a different file, or just declare it in main.h?
11:54 -!- MarcoFalke [c3523fc5@gateway/web/cgi-irc/kiwiirc.com/ip.195.82.63.197] has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
12:34 -!- hustler [~g@194.44.84.246] has joined #bitcoin-core-dev
12:38 -!- hustler [~g@194.44.84.246] has left #bitcoin-core-dev ["Bye."]
12:56 -!- zxzzt [~prod@static-100-38-11-146.nycmny.fios.verizon.net] has quit [Ping timeout: 260 seconds]
12:56 -!- sdaftuar [~sdaftuar@static-100-38-11-146.nycmny.fios.verizon.net] has quit [Ping timeout: 272 seconds]
12:57 -!- morcos [~morcos@static-100-38-11-146.nycmny.fios.verizon.net] has quit [Ping timeout: 250 seconds]
12:58 -!- zxzzt [~prod@static-100-38-11-146.nycmny.fios.verizon.net] has joined #bitcoin-core-dev
12:58 -!- sdaftuar [~sdaftuar@static-100-38-11-146.nycmny.fios.verizon.net] has joined #bitcoin-core-dev
12:58 -!- morcos [~morcos@static-100-38-11-146.nycmny.fios.verizon.net] has joined #bitcoin-core-dev
13:09 -!- zooko [~user@2601:281:8001:26aa:a052:7c51:a1e8:65a8] has quit [Remote host closed the connection]
13:11 -!- zooko [~user@50.141.117.48] has joined #bitcoin-core-dev
13:11 -!- belcher [~user@unaffiliated/belcher] has joined #bitcoin-core-dev
13:45 -!- CodeShark [CodeShark@cpe-76-167-237-202.san.res.rr.com] has joined #bitcoin-core-dev
13:47 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
13:54 < Luke-Jr> morcos: ?? if the block is invalid you do NOT want to use the template ever
13:55 < Luke-Jr> morcos: pretty sure you would entirely break proposals too
13:55 < morcos> Luke-Jr: I know you don't want to use it, the question is how to handle that case.
13:56 < morcos> It means code is broken somewhere, so human intervention is going to be required at some point
13:56 < morcos> The existing code would have thrown an error. I chose to return NULL and log the error (which will cause getblocktemplate to throw a now misnamed JSONRPC error)
13:57 < morcos> Another option would be to try to return a template with no tx's instead (since the likely bug is mempool consistency was broken)
13:57 < morcos> I'm about to PR as a WIP..., would be great if you want to take a look
13:58 < gmaxwell> morcos: I vaguely recall something that if createnewblock can fail there is a crash elsewhere.
13:58 < GitHub181> [bitcoin] morcos opened pull request #6898: [WIP] Rewrite CreateNewBlock (master...fasterCNB) https://github.com/bitcoin/bitcoin/pull/6898
13:58 < gmaxwell> or maybe I also fixed that in anticipation of people making it possible to fail again.
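A rough sketch of the behaviour morcos describes at 11:51 and 13:56, with simplified signatures; this is not necessarily the exact code in #6898, and the RPC error code shown is a placeholder (morcos notes the error the existing RPC would end up throwing is misnamed). CreateNewBlock logs the TestBlockValidity failure via FormatStateMessage and returns NULL instead of throwing, and getblocktemplate turns the NULL into an RPC error rather than crashing.

    // In CreateNewBlock(), after assembling the candidate block (sketch):
    CValidationState state;
    if (!TestBlockValidity(state, *pblock, pindexPrev, false, false)) {
        LogPrintf("CreateNewBlock(): TestBlockValidity failed: %s\n",
                  FormatStateMessage(state));
        return NULL;   // instead of throwing; callers must check for NULL
    }
    return pblocktemplate.release();

    // In the getblocktemplate RPC (sketch):
    CBlockTemplate* pblocktemplate = CreateNewBlock(scriptDummy);
    if (!pblocktemplate)
        throw JSONRPCError(RPC_INTERNAL_ERROR, "CreateNewBlock() failed");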
13:59 < morcos> It took a long time to make it produce the exact same blocks as the old code, but that helped work out a couple of bugs.
13:59 < Luke-Jr> IMO human intervention is preferable to silently mining empty blocks
14:00 < Luke-Jr> and debug.log is not non-silent
14:00 < gmaxwell> Most miners will never see anything in debug.log.
14:00 < morcos> I really hate the ugliness around hacking in the ability to still do priority space in the blocks.
14:00 < Luke-Jr> morcos: if that requires ugliness, the old code was better?
14:00 < Luke-Jr> :\
14:01 < morcos> well, I'd call the old code ugly as well.
14:01 < gmaxwell> Luke-Jr: the old code is very ugly.
14:01 < Luke-Jr> slow != ugly
14:01 < morcos> Also I didn't clean up the whole thing, I just am putting this out there for proof of concept.
14:02 < Luke-Jr> but I can't see the new code to compare yet..
14:03 < morcos> Luke-Jr: the big problem is priority is very difficult to calculate
14:04 < Luke-Jr> ? no
14:04 < morcos> If there is a consensus that its an important metric to keep, then I think we should #6357 which would make it much faster to calculate the correct priority in the mining code
14:04 < morcos> however it'll still be impossible to keep a sort based on it (i think)
14:05 < morcos> it changes!
14:05 < Luke-Jr> you need to lookup the inputs anyway
14:05 < morcos> not anymore
14:06 < morcos> but even if you do, the biggest problem i see with the old code, is that you have to look up the inputs for ALL the txs in your mempool
14:06 < morcos> not just the ones you're putting in a block
14:06 < Luke-Jr> hmm
14:06 < morcos> and if you're going to do a priority portion of the block, you have to keep doing that
14:07 < Luke-Jr> resorting once per block seems reasonable imo?
14:07 < morcos> although maybe the dynamic priority calculation would fix that, i haven't looked at it in a while.. and maybe it could be made even easier now with the concept of mempool children
14:08 < morcos> Luke-Jr: you're right, that would be the way to improve this code if priority isn't going away
14:09 < Luke-Jr> morcos: I still plan to redo all this btw :p
14:09 < morcos> keep an index (either part of the multi-index or separate) of all the priorities sorted, and only update it once per block
14:09 < morcos> yeah me too! :)
14:09 < Luke-Jr> so the mempool is a list of block templates
14:10 < morcos> but i was hoping we might be able to get something simple'ish done for 0.12, which would make GBT run a lot faster.
14:10 < gmaxwell> morcos: the input lookup problem exists for fees too. :(
14:10 < morcos> gmaxwell: they are already stored in CTxMemPoolEntries
14:10 < morcos> and now i added sigops to that too
14:10 < gmaxwell> Oh I see right fees can be cached but priority cannot.
14:10 < Luke-Jr> right
14:11 < Luke-Jr> priority is our best metric right now I think, so I wouldn't want to lose it even temporarily
14:12 < morcos> best metric for what? why do you think its better than fees?
14:12 < gmaxwell> Luke-Jr: I don't really agree there. Priority works fine for you and I, I don't think it serves most users all that well.
14:13 < Luke-Jr> morcos: spammers are happy to pay fees
14:13 < morcos> Luke-Jr: yeah i actually agree its a pretty good anti-spam mechanism
14:13 < morcos> but thats not how we use it now!
14:14 < Luke-Jr> we do both now
14:14 < Luke-Jr> gmaxwell: imo thats why it isn't *exclusively* priority
14:14 < gmaxwell> Luke-Jr: point was spam goes through currently.
14:14 < morcos> Luke-Jr: there is also the problem of incentives
14:15 < Luke-Jr> gmaxwell: not via priority…?
14:15 < morcos> how do you make miners prefer priority vs fee
14:15 < Luke-Jr> morcos: you don't
14:16 < phantomcircuit> wumpus, i haven't forgotten about getting you a copy of a corrupted datadir btw
14:16 < Luke-Jr> long term, fees are the only realistic metric
14:16 < morcos> Luke-Jr: so you view priority as like an HOV lane. at least some txs will sneak past even if the spam is causing congestion on most of the block
14:16 < Luke-Jr> but for now we want to try to keep fees low
14:16 < gmaxwell> morcos: and it does have that effect.
14:17 < Luke-Jr> morcos: basically
14:17 < Luke-Jr> also a nice fallback
14:17 -!- paveljanik [~paveljani@unaffiliated/paveljanik] has quit [Quit: Leaving]
14:17 < morcos> gmaxwell: if we care about preserving that, why don't we just redefine priority, to be your priority at the time the tx was accepted
14:17 < morcos> then it can be cached and its easy to reason about and who cares if different nodes/miners calculate it differently
14:17 < Luke-Jr> as long as we have priority, every tx can get confirmed *eventually*
14:17 < gmaxwell> morcos: we could expect then it'll turn into dead weight in the mempool.
14:18 < morcos> oh sorry, thats not what i mean
14:18 < morcos> i meant the priority only depends on your inputs that were confirmed at the time you were accepted
14:18 < morcos> so its still a bit complicated, but way less than currently
14:19 < phantomcircuit> a better question is, why would miners use priority?
14:19 < gmaxwell> I think that would be fine. Then it only needs to update by some ratio of the size, I guess.
14:19 < Luke-Jr> that may work well enough as a temporary thing
14:19 < Luke-Jr> phantomcircuit: already had that mini-discussion, scroll up
14:21 < morcos> gmaxwell: but yeah, i mean even easier, lets just make it not age once its in your mempool. if you guessed wrong and it doesn't get confirmed soonish, then you resubmit after it expires (currently 72 hours)
14:22 < Luke-Jr> morcos: maybe have a post-block re-sort for new confirmed inputs, optionally enabled in the conf file
14:22 < Luke-Jr> just in case it turns out bad
14:23 < gmaxwell> morcos: thats a little obnoxious, in that if it doesn't make it in the first block then immediately anything new added has an advantage. Maybe it's okay?
14:25 < morcos> gmaxwell: yeah i think its a tradeoff. the annoying thing however is i'm not sure you can combine it into one score very easily and still have it serve the purpose you want it to serve
14:25 < Luke-Jr> afk
14:26 < gmaxwell> I suppose if we cared more we could have a background task that just goes and recosts them from time to time.
14:26 < gmaxwell> presumably we're doing some kind of linear scan for the expiration? (I haven't kept up with the latest changes)
14:33 < morcos> gmaxwell: expiration? oh no, there is an entry time index as well.
14:45 -!- evoskuil [~evoskuil@c-73-225-134-208.hsd1.wa.comcast.net] has joined #bitcoin-core-dev
14:48 -!- zooko` [~user@2601:281:8001:26aa:40fb:d309:c1ba:51a2] has joined #bitcoin-core-dev
14:49 -!- zooko [~user@50.141.117.48] has quit [Ping timeout: 246 seconds]
14:51 -!- zooko` is now known as zooko
14:55 < phantomcircuit> gmaxwell, i'm seeing ~200ms on average for Connect total on my server for the last two months
15:00 < gmaxwell> morcos: so... perhaps another way to handle priority is to maintain a separate very small mempool for it.
15:00 < gmaxwell> so then the cost of having to update and resort the 'whole mempool' is not very large.
15:04 -!- ParadoxSpiral [~ParadoxSp@p508B98B2.dip0.t-ipconnect.de] has quit [Remote host closed the connection]
15:08 -!- zooko [~user@2601:281:8001:26aa:40fb:d309:c1ba:51a2] has quit [Ping timeout: 240 seconds]
15:12 -!- belcher [~user@unaffiliated/belcher] has quit [Read error: Connection reset by peer]
15:14 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has joined #bitcoin-core-dev
15:16 < phantomcircuit> gmaxwell, why not just mark transactions in the mempool as dirty when there's a new block and update the priority in a background thread?
15:17 -!- belcher [~user@unaffiliated/belcher] has joined #bitcoin-core-dev
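A self-contained sketch, with simplified stand-in types, of the idea morcos floats at 14:17-14:21: compute a transaction's priority once, when it enters the mempool, using only the inputs that were already confirmed at that moment, and cache the result in the mempool entry instead of re-deriving it from the UTXO set inside CreateNewBlock. The formula is the standard priority definition, sum(input value * input age in blocks) / tx size.

    #include <cstdint>
    #include <vector>

    struct ConfirmedInput {
        int64_t valueSatoshis;
        int confirmations;    // depth at the time the spending tx was accepted
    };

    struct MempoolEntry {
        size_t txSize;
        int64_t fee;              // already cached per entry today
        unsigned int sigOps;      // morcos mentions caching this as well
        double startingPriority;  // new: priority frozen at acceptance time
    };

    // priority = sum(input value * input age in blocks) / tx size
    double PriorityAtAcceptance(const std::vector<ConfirmedInput>& inputs, size_t txSize)
    {
        double sum = 0.0;
        for (const ConfirmedInput& in : inputs)
            sum += static_cast<double>(in.valueSatoshis) * in.confirmations;
        return txSize ? sum / txSize : 0.0;
    }

The simplest variant discussed (14:21) never ages the cached value at all; the dynamic variant (14:17) would add value*height-delta for the already-confirmed inputs each block, which only needs the cached entry itself, not a UTXO lookup.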
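A minimal layout sketch of the compact vector sipa describes at 15:42-16:01; this is an illustration only, not his actual class, and it omits the full std::vector API, copying, growth, and error handling. The object carries a 4-byte size plus either in-object storage for small payloads or an (allocated size, pointer) pair for large ones, with one bit of the size field recording which mode is active, since a shrinking resize is not allowed to move elements back in-object.

    #include <cstdint>
    #include <cstdlib>
    #include <cstring>

    class CompactByteVector {
        static const uint32_t DIRECT_CAPACITY = 28;
        static const uint32_t INDIRECT_BIT = 0x80000000u;

        uint32_t size_;   // low 31 bits: element count; top bit: storage mode
        union {
            uint8_t direct[DIRECT_CAPACITY];   // small payloads live in-object
            struct {
                uint32_t capacity;             // allocated size of the heap buffer
                uint8_t* data;                 // heap buffer for large payloads
            } indirect;
        } u;

        bool is_direct() const { return (size_ & INDIRECT_BIT) == 0; }

    public:
        CompactByteVector() : size_(0) {}
        ~CompactByteVector() { if (!is_direct()) std::free(u.indirect.data); }

        uint32_t size() const { return size_ & ~INDIRECT_BIT; }
        const uint8_t* data() const { return is_direct() ? u.direct : u.indirect.data; }

        void assign(const uint8_t* src, uint32_t n) {
            if (!is_direct()) std::free(u.indirect.data);
            if (n <= DIRECT_CAPACITY) {
                std::memcpy(u.direct, src, n);
                size_ = n;
            } else {
                u.indirect.data = static_cast<uint8_t*>(std::malloc(n));
                u.indirect.capacity = n;
                std::memcpy(u.indirect.data, src, n);
                size_ = n | INDIRECT_BIT;
            }
        }
        // A real implementation must not move elements back in-object when
        // shrinking below the threshold (pointers must stay valid, 15:59-16:01),
        // hence the explicit mode bit instead of inferring the mode from size().
    };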
16:49 < GitHub107> [bitcoin] mcelrath opened pull request #6899: Warnings clean with C++11 (master...cpp11) https://github.com/bitcoin/bitcoin/pull/6899
17:04 < aj> what's the status of BIP62 (malleability)? is there something documenting what's stopping it from being ready to be deployed, at least for third-party malleability?
17:07 < gmaxwell> aj: the fix for nuisance third party malleability is already deployed, in 0.11.1 and 0.10.3, but most hashpower isn't running it yet.
17:08 < gmaxwell> BIP62 which also protects against miners creating malleability for a subset of transactions likely has issues still, and needs to be rewritten... fortunately the parts of it that don't have issues are flowing in.
17:12 < gmaxwell> aj: the downside is that some wallets with absentee maintenance are going to get their transactions blocked; but a couple years of nagging is all we could do and with active attacks going on it didn't make sense to wait any longer and let everyone continue to get disrupted just to avoid disrupting wallets responsible for only a couple percent of transactions.
17:15 < aj> gmaxwell: makes sense... is there an existing PR for bip 62 or something i can read?
17:17 < aj> gmaxwell: (the consensus change side, more than the already-deployed isStandard changes i mean)
17:27 < gmaxwell> aj: well they're one and the same in Bitcoin Core; these changes are accomplished via validation flags. To make them non-standard we add the new restrictions to the set of flags used to verify transactions going into the mempool.
17:27 < gmaxwell> To add them to consensus, the softfork negotiates turning them on for block validation.
17:29 < aj> gmaxwell: hmm, i guess i'm confused as to what are the parts that have issues and need rewriting before it can softfork and protect from miners too then?
17:33 -!- deepcore [~deepcore@2a01:79d:469e:ed94:8e70:5aff:fe5c:ae78] has quit [Ping timeout: 264 seconds]
17:37 < gmaxwell> aj: there are more sources of malleability for transactions generally (but not ordinary p2pkh and multisig) than this addresses; at the same time, fancy contract usage -- like swaps and refunds need #9 (and perhaps #8) to be addressed, and CLTV addresses their needs better.
17:38 < gmaxwell> aj: as far as softforking it, it really needs to be comprehensively non-standard first before we can do that; and softforking to prevent nuisance malleability from miners is probably of pretty low value since miners have no reason to create it, so it's a lower priority especially considering how complex and invasive the changes are.
17:39 < gmaxwell> (and because every time we've picked up BIP62 again we've found more cases that weren't covered. :( )
17:39 < gmaxwell> (again, related to less common usage)
17:40 < aj> gmaxwell: oh, hmm. that sounds more complicated than i was hoping :)
17:41 -!- dcousens [~anon@c110-22-219-15.sunsh4.vic.optusnet.com.au] has joined #bitcoin-core-dev
17:42 < gmaxwell> aj: rethinking this resulted in coming up with the segregated witness approach thats in elements alpha, and which may be possible to soft-fork into bitcoin.
17:42 < gmaxwell> So personally thats the route I'd like to go down.
17:42 < aj> gmaxwell: i was assuming that was a lot further off than bip62 though?
17:42 < sipa> bip62 can be simplified a lot now, if we want just that
17:43 -!- zooko [~user@2601:281:8001:26aa:40fb:d309:c1ba:51a2] has joined #bitcoin-core-dev
17:43 < sipa> all of bip62's rules are already nonstandard in 0.10.3 and 0.11.1
17:43 < gmaxwell> BIP62 is infinitely far off right now, no one is working on it. And I don't think the approach is likely to be very successful; except for blocking malleability in a few narrow cases (which we've already been breaking out)
17:43 < sipa> so if the network accepts those rules, we don't even need v2 transactions in bip62... just unconditionally make violations of its rules fatal
17:44 < gmaxwell> For the nuisance things, non-standardness is sufficient. For contracts BIP62 is insufficient, but CLTV covers a lot of them.
17:44 < gmaxwell> (CLTV and CSV)
17:46 -!- dcousens [~anon@c110-22-219-15.sunsh4.vic.optusnet.com.au] has quit [Ping timeout: 240 seconds]
17:48 < aj> gmaxwell: so how far off is segregated witness? it doesn't have a bip, and needs a block size increase, i think?
17:51 < sipa> what does the block size have to do with it...?
17:51 < sipa> the complication is mostly that it needs a change in the p2p relay protocol for blocks and transactions
17:52 -!- jgarzik [~jgarzik@unaffiliated/jgarzik] has joined #bitcoin-core-dev
17:53 < aj> sipa: i thought i just read that segregated witness increased tx size by a bunch
17:53 < sipa> i think you're confusing it with confidential transactions
17:53 < aj> sipa: aha; "In fact, this witnessing occupies 2/3rd of the blockchain." https://bitcointalk.org/index.php?topic=1210235.0
17:53 < aj> sipa: could be
17:54 < sipa> segregated witness just moves scriptSig out of transactions
17:54 < sipa> in alpha, the segregation is scriptSig AND the range proofs
17:57 -!- PaulCapestany [~PaulCapes@204.28.124.82] has quit [Quit: .]
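An illustration of gmaxwell's point at 17:27 and sipa's at 17:43. The flag names below are real script-verification flags in Bitcoin Core, but the sets shown are simplified, not the exact 0.11.x definitions: anti-malleability rules sit behind script verification flags that are first applied only at mempool admission (policy), and deploying a softfork amounts to also applying the same flags when validating blocks rather than rewriting the validation code.

    // Flags every block must already satisfy (consensus).
    static const unsigned int MANDATORY_SCRIPT_VERIFY_FLAGS = SCRIPT_VERIFY_P2SH;

    // Extra rules applied only when transactions enter the mempool (policy).
    static const unsigned int STANDARD_SCRIPT_VERIFY_FLAGS =
        MANDATORY_SCRIPT_VERIFY_FLAGS |
        SCRIPT_VERIFY_DERSIG |      // strict DER signature encoding
        SCRIPT_VERIFY_LOW_S |       // low-S signatures (a BIP62 rule)
        SCRIPT_VERIFY_STRICTENC;    // ...and so on for the other BIP62 rules

    // A softfork then moves a flag from the standard set into the set used when
    // connecting blocks, gated on the supermajority / deployment condition.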
17:57 < morcos> sipa: that idea i mentioned about scanning the feerate sort for "hot hashes" (txins likely to be redeemed in the next few blocks) and then not deleting those from the cache on a flush
17:58 < morcos> it has some promise, i just coded up a rough version, takes about 30ms to generate the set of txins from the top 10MB worth of txs
17:58 < morcos> its hard for me to calculate exactly how well its working b/c it totally screwed up the cache size accounting unless i get a bit smarter
17:58 -!- PaulCapestany [~PaulCapes@204.28.124.82] has joined #bitcoin-core-dev
18:00 -!- dcousens [~anon@c110-22-219-15.sunsh4.vic.optusnet.com.au] has joined #bitcoin-core-dev
18:00 < morcos> that 30ms is out of a flush time thats usually in the 300ms range, but then causes validation times to get a bit faster. anyway, i'll play around with it some more.
18:01 < gmaxwell> aj: ... uh, ... thats saying that _currently_ it's 2/3rd of the blockchain, which is all bandwidth that could be _saved_ by a synchronizing node that isn't checking historic signatures.
18:01 < gmaxwell> aj: so its the opposite, and one of the reasons that that approach is so much more attractive.
18:02 < gmaxwell> (though until a week or so ago I didn't believe it was soft-forkable, but luke appears to have more or less solved that design question)
18:03 < sipa> gmaxwell: i haven't heard luke's idea, but I think it's simple enough? use " OP_NOP7" as scriptPubKey, "" as scriptSig, and define an auxiliary data structure with " " in it
18:11 < gmaxwell> sipa: yes sir. And require the scriptSig in the original datastructure to be empty for OP_SEGWITNESS scripts.
18:12 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
18:12 < sipa> as i said :)
18:13 < gmaxwell> lol "" didn't take up enough space on the screen.
18:34 -!- zooko [~user@2601:281:8001:26aa:40fb:d309:c1ba:51a2] has quit [Ping timeout: 246 seconds]
18:38 -!- zooko [~user@2601:281:8001:26aa:40fb:d309:c1ba:51a2] has joined #bitcoin-core-dev
18:42 -!- daniel____ [~quassel@106.120.101.38] has joined #bitcoin-core-dev
18:44 -!- Ylbam [uid99779@gateway/web/irccloud.com/x-xpskpfushuyiirxv] has quit [Quit: Connection closed for inactivity]
18:49 -!- Arnavion [arnavion@unaffiliated/arnavion] has quit [Ping timeout: 272 seconds]
18:55 -!- bsm117532 [~bsm117532@static-108-21-236-13.nycmny.fios.verizon.net] has joined #bitcoin-core-dev
18:55 < bsm117532> Why is leveldb in the core in the first place?
18:57 < sipa> because we want to have control over its changes
18:57 < bsm117532> By not changing it...
18:57 < sipa> and we have local modifications to it (win env, disable compression, and we've had other ones before)
18:57 < bsm117532> Ref https://github.com/bitcoin/bitcoin/pull/6899
18:57 < sipa> if a bug were to be found in leveldb, fixing it may cause a consensus failure
18:58 < sipa> and bugs like that have happened
18:58 < sipa> (before we were using leveldb, to be clear)
19:01 < gmaxwell> bsm117532: leveldb has (prior to our use of it) fixed 'bugs' that would have broken network consensus.
19:02 < gmaxwell> oh jinx, sorry.
19:02 < bsm117532> That's yucky on so many levels.
19:02 < sipa> how do you mean?
19:02 < bsm117532> FWIW I'd have put it in a separate repo under bitcoin, rather than import the code directly.
19:03 < sipa> bsm117532: https://github.com/bitcoin/leveldb
19:03 < bsm117532> Yeah, like that ;-)
19:03 < sipa> not like that; that
19:03 < gmaxwell> It is.
19:03 < bsm117532> The github repo should have a reference to it, rather than have the code imported.
19:03 < sipa> that is how git subtree works
19:03 < sipa> subtrees are ugly
19:04 < bsm117532> checking...
19:04 < sipa> but the only alternative is submodules, which are even more ugly
19:04 < bsm117532> Yeah that's what I mean, a submodule.
19:05 -!- belcher [~user@unaffiliated/belcher] has quit [Quit: Leaving]
19:05 < sipa> well we're using subtrees
19:05 < sipa> i don't feel like reiterating the advantages of one over the other in either direction; you can find enough discussion about that on the internet :)
19:06 < bsm117532> AFAICT, when I clone bitcoin, I don't get the bitcoin/leveldb repo. I get bitcoin/src/leveldb which is entirely separate. Am I wrong about that?
19:06 < sipa> correct, though a script is included to verify their correspondence
19:07 < bsm117532> I see. Egad that's ugly. Well then you'll just have to deal with spurious patches to leveldb. :-P
19:08 < sipa> if we need to change something in our leveldb tree, the correct way is to submit it as a PR to the bitcoin/leveldb repo, and then bitcoin core can update to a new version
19:08 < bsm117532> That makes sense. It's the other direction that is generating a problem here. (modifying src/leveldb...)
19:09 < sipa> we don't ever accept changes directly to that directory
19:11 < bsm117532> Well we're going to replace leveldb with sqlite, right?!?! ;-)
19:12 < sipa> maybe.
19:12 < CodeShark> is sqlite performant enough?
19:12 < bsm117532> Also replace Berkeley db with /dev/null...
19:12 < sipa> bsm117532: already possible, use --disable-wallet at compile time :)
19:12 < sipa> CodeShark: doubtful
19:13 < bsm117532> sipa: I'm well aware, always do.
19:14 < CodeShark> we're pretty set on ditching leveldb, though, right?
19:14 < bsm117532> CodeShark: don't know. I re-ran some benchmarks from an old article: https://gist.github.com/mcelrath/6952eab246a7c705a0fb
19:14 < sipa> CodeShark: only if a suitable replacement is found
19:15 < bsm117532> I think we need a more targeted leveldb vs. sqlite (or something else) comparison.
19:15 < sipa> bsm117532: jeff has a branch with bitcoin core running on sqlite3
19:15 < sipa> but there are problems with background checkpointing etc
19:16 < CodeShark> the main issues with leveldb are that it's no longer being maintained and that it doesn't guarantee consistency, right?
19:16 < bsm117532> Well he did that quickly...
19:16 < sipa> CodeShark: it does guarantee consistency; it just seems to fail on windows pretty often
19:16 < CodeShark> doesn't it rely on the OS queueing up writes in the correct order or something?
19:16 < sipa> no
19:16 < sipa> or at least, it doesn't intend to
19:17 < sipa> it relies on the OS behaving properly when asked to do a synchronous write/flush, though
19:18 < bsm117532> Are we worried about db corruption on power failure? Or something else generating inconsistencies?
19:18 < sipa> yes, that's what seems to happen
19:19 < CodeShark> I haven't looked at the actual study, but https://en.wikipedia.org/wiki/LevelDB#Bugs_and_Reliability
19:19 < bsm117532> That's a very hard problem that I don't think bitcoin can solve by choice of db.
19:19 < bsm117532> This is why Oracle charges big bucks for dedicated hardware.
19:19 < sipa> sqlite is well known for being very stable
19:20 < CodeShark> sqlite has worked very well for me in certain applications that don't require extremely frequent insertions
19:20 < dcousens> sipa: aye, the mere simplicity of sqlite is very attractive
19:20 < sipa> CodeShark: we have extremely infrequent insertions
19:21 < sipa> but they're very big batches
19:21 < bsm117532> I've had good luck with sqlite too. But these are anecdotes.
19:22 < tripleslash> bsm117532, its not even power failure. Windows gives very little time for apps to cleanly shutdown these days.
19:22 -!- zooko [~user@2601:281:8001:26aa:40fb:d309:c1ba:51a2] has quit [Ping timeout: 240 seconds]
19:23 < sipa> lmdb seems to be an interesting candidate
19:23 < sipa> but it's mmap-based, so not usable on 32-bit systems for such large databases as we have
19:24 < dcousens> sipa: leveldb is just used for the txdb atm right?
19:24 < tripleslash> Pretty much any new pc today is shipping x64. At some point it will be time to just cut the cord on the x86 systems.
19:25 < sipa> dcousens: block index and chainstate
19:25 < sipa> tripleslash: yet ARM is becoming more and more relevant
19:26 < CodeShark> can't we create multiple databases all restricted to 4 gigs? :)
19:26 < tripleslash> sipa: true that
19:26 < sipa> CodeShark: we need atomic changes across databases then
19:27 < bsm117532> I vote that a 64 bit system as a requirement is a reasonable thing. Bending over backwards to shoe-horn into 32 bits is not worth anyone's time.
19:27 < dcousens> sipa: right, the CBlockTree is defined in txdb.h haha
19:29 -!- zooko [~user@50.141.117.96] has joined #bitcoin-core-dev
19:29 < CodeShark> we can always support mmap on 64-bit systems and fall back on leveldb for 32-bit systems
19:29 < CodeShark> although dunno what that might entail regarding consensus
19:30 < CodeShark> it only takes one "undocumented feature" to screw everything up
19:30 < sipa> having multiple databases is technically not hard; the database interface is pretty neatly abstracted
19:31 < sipa> but it's very unattractive to risk divergence between them, especially when testing mostly happens on one, and production mostly on another :)
19:31 < dcousens> sipa: production on 32-bit?
19:32 < bsm117532> Quick git question... I did a push -f to overwrite my previous commit, because I like having clean histories. I could have also made a second commit on this PR. Does anyone care? Would github have squashed them anyway? (Re: https://github.com/bitcoin/bitcoin/pull/6899 )
19:32 < sipa> bsm117532: github won't squash for you
19:32 < sipa> reviewers may ask to squash things
19:32 * bsm117532 has spent too long being the only committer to his repo.
19:32 < bsm117532> Ok so I did the right thing.
19:33 < sipa> only reason to not squash is if it's a complicated commit that required lots of review (for example, big code movement commits)
19:33 < dcousens> sipa: is the squash op not deterministic?
19:33 < sipa> dcousens: the resulting tree of a squashed commit is identical to the resulting tree of the series of commits it derived from
19:33 < bsm117532> There is no squash op... rebase -i is very manual and not deterministic at all.
19:34 < dcousens> I know the rebase is, and hence verifiable
19:34 < sipa> dcousens: you're confusing commits with trees
19:34 < CodeShark> rebase is only not deterministic when you either change commit order or rebase off a different branch
19:34 < sipa> or if there were conflicts
19:35 < sipa> or merges within the commits you're rebasing
19:35 < sipa> rebase applies a merge resolution algorithm, and you can change that algorithm
19:35 < CodeShark> commit followed by squash is effectively the same as commit --amend
19:35 < sipa> which is also not deterministic :)
19:35 < sipa> (the resulting tree is, but the commit itself isn't)
19:36 < dcousens> sipa: interesting
19:36 < sipa> the reason it isn't is because a commit has a timestamp
19:36 < CodeShark> but that's a trivial source of nondeterminism ;)
19:37 < sipa> you can avoid it by specifying the timestamp on the command-line, though
19:37 < sipa> yeah
19:37 < sipa> but rebases are generally not verifiable
19:37 < sipa> and afaik nobody does that
19:43 < bsm117532> LMDB does look really interesting...
19:45 < bsm117532> Looks like it uses COW instead of sqlite's *.journal which I think is write-ahead logging. So should be much faster than sqlite, probably a factor of 2.
19:45 -!- zooko [~user@50.141.117.96] has quit [Ping timeout: 260 seconds]
19:45 < sipa> indeed
19:45 < bsm117532> It looks like an in-memory btrfs ;-)
19:46 < sipa> except it's not in-memory
19:46 < bsm117532> Yeah but mmapped, so it looks like it's in-memory.
19:50 < bsm117532> I've got one more tedious bug I want to fix, but maybe then I'll look at making an LMDB branch for comparison. If it was so easy for jeff to rip out leveldb for sqlite...
19:53 -!- arowser [~arowser@106.120.101.38] has left #bitcoin-core-dev []
20:02 -!- zooko [~user@2601:281:8001:26aa:40fb:d309:c1ba:51a2] has joined #bitcoin-core-dev
20:10 < jgarzik> COW is COW, somewhat like write-ahead logging. :)
20:10 < jgarzik> If you order your writes properly, it's write-once
20:10 < sipa> ... i do wish we could just use LMDB
20:11 < jgarzik> It's quite easy to switch out. I could do an LMDB patch. Also working on a COW dbm myself.
20:12 < sipa> i'm sure it wouldn't be hard; but LMDB is a total no go on 32-bit systems
20:12 < jgarzik> the best performance should be COW + Kyoto Cabinet scheme
20:12 < jgarzik> (KC is worth a look too)
20:13 < jgarzik> COW practically guarantees no corruption (assuming write order is proper)
20:14 < sipa> but i would be interested in LMDB's performance...
20:14 < jgarzik> no idea if LMDB performs better than KC. Worth benching, definitely.
20:15 < jgarzik> Google benching leveldb, sqlite, kyoto cabinet: http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
20:15 < phantomcircuit> jgarzik, COW without any delete? :P
20:16 < jgarzik> phantomcircuit, typical strategy is reclaim after X generations of the superblock
20:16 < jgarzik> assuming no stored snapshots
20:16 < sipa> http://symas.com/mdb/microbench/
20:18 -!- daniel____ [~quassel@106.120.101.38] has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
20:18 -!- daniel____ [~quassel@106.120.101.38] has joined #bitcoin-core-dev
20:18 -!- daniel____ [~quassel@106.120.101.38] has quit [Remote host closed the connection]
20:18 < jgarzik> Useful link. They should have benched the KC hash db and not btree. ;p
20:19 < sipa> not a fair comparison, as lmdb/leveldb only do ordered maps
20:20 -!- daniel____ [~quassel@106.120.101.38] has joined #bitcoin-core-dev
20:20 < gmaxwell> has anyone managed to complete a sync with jeff's sqlite attempt?
20:20 < gmaxwell> I think jcorgan said it was going on day 2 for him.
20:20 < tripleslash> if you build a win64 binary for me, I'll give it a go.
20:21 < jgarzik> bench'd BDB in b-tree mode too :(
20:21 -!- daniel____ [~quassel@106.120.101.38] has quit [Remote host closed the connection]
20:21 < jgarzik> if we didn't need ordering, things could go really fast
20:21 < sipa> we don't really need ordering
20:22 < sipa> though it may end up being useful if we go from per-tx caching to per-txout caching
20:22 < sipa> and for normative utxo hashes
20:23 -!- daniel____ [~quassel@106.120.101.38] has joined #bitcoin-core-dev
20:24 < phantomcircuit> jgarzik, more importantly we can significantly over-provision the initial db to avoid resizing constantly
20:29 -!- Arnavion [arnavion@unaffiliated/arnavion] has joined #bitcoin-core-dev
20:34 < jgarzik> LMDB could use a windowed mmap approach and support 32-bit systems, >2GB databases
20:34 < jgarzik> or we could just drop 32-bit support
20:35 < jgarzik> "full node = big database = big iron = no rPi"
20:41 < Luke-Jr> jgarzik: …
20:41 < Luke-Jr> I use 32-bit.
20:41 < tripleslash> Luke-Jr, you also use dialup. At some point, people have to upgrade. ;-)
20:41 < Luke-Jr> except for Valgrind not supporting it, it's a more logical choice than 64-bit. especially x32, once more stuff works.
20:42 < Luke-Jr> tripleslash: I do not use dialup.
20:42 < tripleslash> my apologies.
20:42 < jgarzik> for large databases the address space helps a -lot-. x32 useful but not in this case.
20:42 < Luke-Jr> tripleslash: also, I upgraded to 64-bit within a week of AMD releasing the first 64-bit capable CPU. I decided a year or two ago that CPUs were fast enough that 32-bit was the better option now.
20:43 < Luke-Jr> jgarzik: x32 is only useful if basically everything is x32; not so much if you have both x32 and amd64 programs
20:43 < jgarzik> not true (but off-topic so I'll stop)
21:01 -!- zooko [~user@2601:281:8001:26aa:40fb:d309:c1ba:51a2] has quit [Remote host closed the connection]
21:06 -!- zxzzt [~prod@static-100-38-11-146.nycmny.fios.verizon.net] has quit [Ping timeout: 272 seconds]
21:07 -!- sdaftuar [~sdaftuar@static-100-38-11-146.nycmny.fios.verizon.net] has quit [Ping timeout: 272 seconds]
21:07 -!- zxzzt [~prod@static-100-38-11-146.nycmny.fios.verizon.net] has joined #bitcoin-core-dev
21:07 -!- sdaftuar [~sdaftuar@static-100-38-11-146.nycmny.fios.verizon.net] has joined #bitcoin-core-dev
21:32 < sipa> note: LMDB's on-disk format is platform dependent
21:32 < sipa> (byte order and word size)
22:35 -!- molly [~molly@unaffiliated/molly] has quit [Ping timeout: 240 seconds]
22:36 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has joined #bitcoin-core-dev
22:38 < phantomcircuit> jgarzik, unfortunately lmdb really isn't an option because of the trade offs they made
22:38 < sipa> phantomcircuit: any specifics, apart from intentionally no support for 32 bit systems?
22:44 < dcousens> Luke-Jr: "CPUs were fast enough that 32-bit was the better option
22:44 < gmaxwell> sipa: well no integrity checks iirc, and non-portable data (latter less of an issue); I think there was some other thing wumpus raised.
22:45 < dcousens> If its not relevant, could you PM what you mean by that, ooi
22:45 < Luke-Jr> dcousens: #bitcoin maybe?
22:45 < dcousens> Luke-Jr: sure
22:46 < gmaxwell> sipa: or we could just admit that we'll need a custom data structure to make any kind of commitment space/time efficient. :-/
23:08 -!- Guest72716 [~pigeons@94.242.209.214] has quit [Ping timeout: 260 seconds]
23:20 -!- pigeons [~pigeons@94.242.209.214] has joined #bitcoin-core-dev
23:20 -!- pigeons is now known as Guest75176
23:27 -!- deepcore [~deepcore@2a01:79d:469e:ed94:8e70:5aff:fe5c:ae78] has joined #bitcoin-core-dev
23:45 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]