--- Day changed Mon Oct 26 2015 00:06 < GitHub133> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/46f74379b86b...450893769f7a 00:06 < GitHub133> bitcoin/master ceb2a9c Wladimir J. van der Laan: doc: mention BIP65 softfork in bips.md 00:06 < GitHub133> bitcoin/master 4508937 Wladimir J. van der Laan: Merge pull request #6879... 00:06 < GitHub137> [bitcoin] laanwj closed pull request #6879: doc: mention BIP65 softfork in bips.md (master...2015_10_bip65) https://github.com/bitcoin/bitcoin/pull/6879 00:21 < GitHub163> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/450893769f7a...867d6c90b850 00:21 < GitHub163> bitcoin/master dca7bd3 Wladimir J. van der Laan: doc: Add developer notes about gitignore... 00:21 < GitHub163> bitcoin/master 867d6c9 Wladimir J. van der Laan: Merge pull request #6878... 00:21 < GitHub17> [bitcoin] laanwj closed pull request #6878: doc: Add developer notes about gitignore (master...2015_10_ignore_files) https://github.com/bitcoin/bitcoin/pull/6878 00:35 < jonasschnelli> gmaxwell: re: -maxuploadtarget. Will try to complete it today. Need to adapt sdaftuar tests and rebase/squash. 
00:45 -!- paveljanik [~paveljani@unaffiliated/paveljanik] has quit [Read error: Connection reset by peer] 00:45 -!- paveljanik [~paveljani@unaffiliated/paveljanik] has joined #bitcoin-core-dev 00:45 -!- paveljanik [~paveljani@unaffiliated/paveljanik] has quit [Client Quit] 01:06 -!- Arnavion [arnavion@unaffiliated/arnavion] has quit [Remote host closed the connection] 01:07 -!- Arnavion [arnavion@unaffiliated/arnavion] has joined #bitcoin-core-dev 01:07 -!- Ylbam [uid99779@gateway/web/irccloud.com/x-hvdxzhpjpaqlezfa] has joined #bitcoin-core-dev 01:10 < GitHub137> [bitcoin] laanwj pushed 7 new commits to master: https://github.com/bitcoin/bitcoin/compare/867d6c90b850...5242bb32c723 01:10 < GitHub137> bitcoin/master 4d2a926 dexX7: Ignore coverage data related and temporary test files 01:10 < GitHub137> bitcoin/master d425877 dexX7: Remove coverage and test related files, when cleaning up... 01:10 < GitHub137> bitcoin/master 8e3a27b dexX7: Require Python for RPC tests, when using lcov... 01:10 < GitHub173> [bitcoin] laanwj closed pull request #6813: Support gathering code coverage data for RPC tests with lcov (master...btc-test-lcov-rpc) https://github.com/bitcoin/bitcoin/pull/6813 01:34 < jonasschnelli> anyone getting the same build errors (osx) for current master: boost: mutex lock failed in pthread_mutex_lock: Invalid argument? 01:35 * jonasschnelli is doing a fresh checkout / build 01:43 < jonasschnelli> hmm.. same issue when doing a fresh checkout / build... 01:44 < wumpus> is that a *build* error? 01:44 < jonasschnelli> libc++abi.dylib: terminating with uncaught exception of type boost::exception_detail::clone_impl >: boost: mutex lock failed in pthread_mutex_lock: Invalid argument 01:44 < wumpus> looks like a runtime error to me, something abusing boost? 01:45 < jonasschnelli> i'll try now to upgrade from 1.57 to 1.58... 01:45 < wumpus> let's first try to debug it 01:45 < wumpus> when does it happen? can you get a traceback?
01:45 < jonasschnelli> stacktrace stops when program tries to call the first LOCK 01:46 < wumpus> ok 01:46 < jonasschnelli> EnterCritical() / push_lock() 01:46 < wumpus> when was the last time it worked? maybe try a git bisect 01:46 < jonasschnelli> that is strange: *pthread_mutex_lock: Invalid argument* 01:47 < jonasschnelli> wumpus: okay... i'll try to find out if it's related to a source change or something on my local machine. 01:47 < wumpus> can be a use-after-free of a lock 01:48 < wumpus> if it happens at startup it may be an unexpected error path that doesn't wind down properly 02:01 -!- BashCo [~BashCo@unaffiliated/bashco] has quit [Remote host closed the connection] 02:02 -!- BashCo [~BashCo@unaffiliated/bashco] has joined #bitcoin-core-dev 02:06 -!- BashCo [~BashCo@unaffiliated/bashco] has quit [Ping timeout: 250 seconds] 02:07 < GitHub136> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/5242bb32c723...26f5b34e8838 02:07 < GitHub136> bitcoin/master 10e2eae Wladimir J. van der Laan: rpc: Add maxmempool and effective min fee to getmempoolinfo 02:07 < GitHub136> bitcoin/master 26f5b34 Wladimir J. van der Laan: Merge pull request #6877...
02:07 < GitHub170> [bitcoin] laanwj closed pull request #6877: rpc: Add maxmempool and effective min fee to getmempoolinfo (master...2015_10_mempool_effective_fee) https://github.com/bitcoin/bitcoin/pull/6877 02:14 -!- dcousens [~anon@c110-22-219-15.sunsh4.vic.optusnet.com.au] has joined #bitcoin-core-dev 02:19 -!- BashCo [~BashCo@unaffiliated/bashco] has joined #bitcoin-core-dev 02:24 -!- AtashiCon [arnavion@unaffiliated/arnavion] has quit [Quit: AtashiCon] 02:27 -!- rubensayshi [~ruben@91.206.81.13] has joined #bitcoin-core-dev 02:28 -!- AtashiCon [arnavion@unaffiliated/arnavion] has joined #bitcoin-core-dev 02:33 < dcousens> gmaxwell: with the address index comment, you mention it might drive away users, but, this is an opt-in patch 02:34 < dcousens> I'd much rather use this than my own self-maintained version of the patch, hell, several block explorers I know have written their own implementations eating up much more than 50GB of space etc 02:38 < btcdrak> dcousens: FYI, my .bitcoin folder is 67GB with addrindex turned on. What is it without? 02:39 < dcousens> btcdrak: moment, just checking 02:40 < dcousens> 56G 02:40 < dcousens> with -txindex 02:40 < btcdrak> dcousens: but as a maintainer of an addrindex fork, I think this particular implementation is absolutely not the right way to go about it. It's ugly. I'd much prefer to see something like #5048 which maintains a separate index entirely. 02:40 < dcousens> Oh no doubt 02:40 < dcousens> I meant in concept only 02:40 < dcousens> Hence, only my concept ACK 02:41 < dcousens> There might be a way to do this as gmaxwell pointed out, via fast re-scan or something 02:42 < dcousens> It would just have to be super fast in the case of being feasible for block explorers etc, but, I guess you could just throw some caches/LBs in front of it 02:43 < btcdrak> dcousens: yeah, I'm not sure how addrindex would scale in the long term. 
02:45 < dcousens> yeah, but I mean, we have that same concern for the blockchain itself haha 02:45 < dcousens> I just figured, in terms of worrying about users losing resources, in the end, they're opting in, so they'd be aware no? 02:46 < btcdrak> dcousens: I'm no longer as passionate because I provide a fork for those that want it. It's easy enough to maintain. 02:46 < wumpus> indexing the whole block chain is generally the wrong thing to do 02:47 < wumpus> and I understand there are forensic or research reasons to do so, but if you end up at that it's generally an indication you're doing something wrong in your design 02:48 < dcousens> wumpus: that really depends on your trust model though? 02:48 < wumpus> no, it depends on your scaling model 02:49 < wumpus> it's best to regard the block chain as transitory data used to update the utxo state 02:50 < wumpus> (as well as other things, such as wallets) 02:51 < dcousens> agreed, but the UTXO may not be all you care about. But yes you're right, in that sense, it's your scaling model 02:52 < wumpus> that's why I added (as well as other things) - it's data you want to use to apply changes to your running state, then throw away 02:52 < dcousens> wumpus: but that lacks the information in the case of a user wanting to see the 'history' of a certain address 02:52 < wumpus> bitcoind's wallet, as one example, does this correctly. It only requires the whole beast for rescans. It would be nice to have an external example though that isn't embedded into bitcoind.
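[editor's note: wumpus's model above — blocks as transitory data that update a running index, rather than a permanently indexed chain — can be sketched roughly as below. All names (AddressIndex, connect_block) are hypothetical illustration, not bitcoind or any existing indexer's API.]

```python
# Sketch of an external address indexer in wumpus's "running state" model:
# each connected block updates an address -> history map, after which the
# raw block data is no longer needed by the indexer.
class AddressIndex:
    def __init__(self):
        self.history = {}  # address -> list of (txid, amount) entries

    def connect_block(self, block):
        # Apply the block's effects to the running state, then forget the block.
        for tx in block["txs"]:
            for addr, amount in tx["outputs"]:
                self.history.setdefault(addr, []).append((tx["txid"], amount))

    def address_history(self, addr):
        return self.history.get(addr, [])

index = AddressIndex()
index.connect_block({"txs": [{"txid": "aa", "outputs": [("addr1", 50)]},
                             {"txid": "bb", "outputs": [("addr1", 10), ("addr2", 5)]}]})
```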
02:52 < dcousens> the only way to generate that history would be to view all the transitory data 02:53 < wumpus> the way to generate that history would be to store it as it comes along, as metadata 02:53 < dcousens> that assumes you know the address from the start 02:53 < wumpus> the same way bitcoind's wallet does - store the transactions, or whatever you need to list it later 02:54 < dcousens> for example for a wallet I maintain, we very often have users that come along with BIP39 mnemonics with 100's of used addresses, forcing a rescan for each time they introduce a new mnemonic/wallet would be a huge rescan burden for us 02:54 < wumpus> history (or the part of history that you want to store for accounting purposes) is part of your running state 02:54 < dcousens> Which again, relates to our scaling model, but, that too relates directly to this patch (as a concept) 02:57 < wumpus> yes, if you essentially offer rescanning-as-a-service it makes sense to keep an index. I still think it should be something external though, not part of bitcoind. There are tons of different kinds of indexes one could want, for specific purposes 02:57 < wumpus> ideally some external indexing tool would source block data from bitcoind then keep an index, as running state 02:59 < wumpus> so my points are: do not encourage people to keep that index, and do not clutter the codebase with all kinds of extra indexes not necessary for node functionality. It is not "never create an index". sure there may be reasons to do so but it's very custom 03:00 < dcousens> wumpus: hmmm, but, maintaining this indexing is not a simple task 03:00 < btcdrak> wumpus: I think I finally see your point 03:00 < wumpus> I haven't used the word 'simple' 03:00 < dcousens> It requires a lot of complex logic in regards to re-orgs etc 03:01 < dcousens> wumpus: I know, just putting it out there as to why this is probably a persistent feature request 03:01 < wumpus> yes, it requires logic with regard to re-orgs.
To be precise, you need to be able to roll back updates from a block. 03:01 < wumpus> bitcoind does this by keeping undo files, but your database may have features to do so as well 03:02 < wumpus> and once this code is written people can just use it, it's no more complex than using your patch 03:02 < dcousens> wumpus: in my mind, it seems more like it's just a way to offload the maintenance burden haha 03:03 < btcdrak> why don't we use zmq to notify external apps? 03:03 < wumpus> there's --enable-zmq? 03:04 < dcousens> wumpus: is there a re-org notification? 03:04 < dcousens> That would make the roll backs super simple 03:04 < btcdrak> it could store whatever it wants, bitcoind would notify about reorgs etc. that would allow to build a sort of indexd. The indexd could always query bitcoind for things like blocks as a passthrough 03:05 < wumpus> not sure - there is a notification when there is a new block, you can use that to determine if there has been a reorg. But a more direct notification of what blocks have to be un-done/applied might be useful too. Though be careful, zmq notifications can be lossy, so you shouldn't rely on their content. 03:06 < wumpus> btcdrak: right 03:06 < wumpus> indexd would pull the data it requires from bitcoind 03:06 < btcdrak> wumpus: if we're missing notifications I'm sure they are easy to add. 03:06 < dcousens> wumpus: really? Are they consistently lossy? 03:07 < wumpus> dcousens: yes, if the queue is full new messages are discarded 03:07 < dcousens> wumpus: right, so it's just a matter of consumption 03:08 < wumpus> not sure if there are any guarantees for delivery - that's why I have always been kind of careful there, it's easy to use it in a wrong way. Notifications are "hey, wake up, there is new information". If those get lost there's no big issue, it will pick them up next time. 03:10 < dcousens> wumpus: mmm, I guess, in the end, it's as btcdrak suggested.
The underlying problem is that the maintenance of the address index becomes difficult due to re-org detection. If we can have bitcoind provide us with that information in a clear way, most of this becomes trivial 03:11 < wumpus> I'm not sure reorg detection is so difficult? keep the list of hashes up to the current block, then if a new best block comes in, look at its parents, where it intersects with the list of your blocks is the fork/reorg point 03:12 < dcousens> wumpus: requires maintaining another list, another source of data 03:12 < wumpus> yes it does 03:12 < wumpus> that's life, you need to maintain the data structures that you need to do your work. only by keeping this state yourself you can be sure your state is consistent. 03:13 < btcdrak> indexd could even use bitcoind for SPV lookups 03:14 < wumpus> would be great to have some library or existing software to handle this for you, I'm just talking concepts here not implementation 03:14 < dcousens> btcdrak: I was thinking about that 03:14 < dcousens> you don't even need to verify since you'd already trust your own node 03:15 < wumpus> right - you outsource trust to your node, verification, that's what it's for 03:15 < dcousens> just as wumpus said, probably still need to maintain the hash-chain if there's no notification for re-orgs 03:15 < wumpus> well you need to know what hash-chain your state is based on 03:16 < dcousens> well, not really, all you actually care about is what blocks got reverted, such that you can roll back the relevant transactions in the index 03:17 < wumpus> otherwise the couling is dangerously tight, say, your indexd was disconnected from bitcoind for a while and you reconnect and have to catch up 03:17 < wumpus> coupling* 03:17 < wumpus> yes, but you need to detect that yourself. You can't rely on bitcoind to keep state for you 03:17 < dcousens> wumpus: it would know what blocks get reverted in a re-org, no?
03:18 < wumpus> but only in realtime - ideally you don't want that kind of tight coupling. What if indexd wasn't connected at the time the reorg happens. Do you want to restart all the way from the beginning? 03:18 < dcousens> true 03:19 < wumpus> you have a) your own hash chain b) bitcoind's hash chain, you always want to go from a to b 03:19 < wumpus> (given that you trust bitcoind to give you the best chain) 03:20 < jonasschnelli> wumpus: i can successfully build up to https://github.com/bitcoin/bitcoin/commit/579b863cd7586b98974484ad55e19be2a54d241d 03:20 < jonasschnelli> wumpus: https://github.com/bitcoin/bitcoin/commit/a09297010e171af28a7a3fcad65a4e0aefea53ba seems to break things.. how can this even be possible... 03:20 < jonasschnelli> no code changes at all 03:21 < dcousens> btcdrak: happy to work with you on indexd concept 03:21 < wumpus> would be awesome if someone implemented that 03:21 < wumpus> jonasschnelli: huh that just changes my silly dev tools :) 03:22 < jonasschnelli> yeah. I know... let me investigate more 03:28 < wumpus> so 0fbfc51 works? 03:30 < wumpus> (that's the last commit before merging #6854) 03:33 < GitHub168> [bitcoin] laanwj pushed 3 new commits to master: https://github.com/bitcoin/bitcoin/compare/26f5b34e8838...ff057f41aa14 03:33 < GitHub168> bitcoin/master 9d55050 Mark Friedenbach: Add rules--presently disabled--for using GetMedianTimePast as endpoint for lock-time calculations... 03:33 < GitHub168> bitcoin/master dea8d21 Mark Friedenbach: Enable policy enforcing GetMedianTimePast as the end point of lock-time constraints... 03:33 < GitHub168> bitcoin/master ff057f4 Wladimir J. van der Laan: Merge pull request #6566... 03:33 < GitHub161> [bitcoin] laanwj closed pull request #6566: BIP-113: Mempool-only median time-past as endpoint for lock-time calculations (master...medianpasttimelock) https://github.com/bitcoin/bitcoin/pull/6566 03:36 < btcdrak> dcousens: sure.
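[editor's note: wumpus's reorg-detection recipe earlier in this exchange — keep your own list of block hashes, then when a new best tip arrives walk its ancestors until you intersect your list; everything after the intersection must be undone — is simple enough to sketch. Function and variable names here are hypothetical, and `get_parent` stands in for whatever block-header lookup the indexer uses against its node.]

```python
def find_fork_point(my_chain, new_tip, get_parent):
    """Return (fork_hash, blocks_to_disconnect) given our own hash chain and
    a new best tip. get_parent maps a block hash to its parent's hash."""
    known = {h: i for i, h in enumerate(my_chain)}  # hash -> height in our chain
    h = new_tip
    while h not in known:       # walk the new tip's ancestry...
        h = get_parent(h)       # ...until it intersects our chain
    fork_height = known[h]
    # Blocks after the fork point are no longer on the best chain; the index
    # must roll back their effects before applying the new branch.
    return h, my_chain[fork_height + 1:]

# Toy example: our chain is g-a-b-c; new tip d builds directly on a,
# so b and c were reorged out.
parents = {"a": "g", "b": "a", "c": "b", "d": "a"}
fork, undo = find_fork_point(["g", "a", "b", "c"], "d", parents.get)
```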
03:38 < dcousens> wumpus: would it be better to maintain indexd in repo or as a separate application? 03:39 < wumpus> well - better is always to have it completely detached, but I suppose developing it inside the repository is easier for now if you need serialization and certain data structures 03:40 < wumpus> we don't have a way to export those as a library yet 03:40 < jonasschnelli> wumpus: found the problem: https://github.com/bitcoin/bitcoin/pull/6722/files#diff-ca74c4b28865382863b8fe7633a85cd6R312 03:40 < dcousens> wumpus: aye 03:40 < jonasschnelli> clear() calls LOCK() in cxx_global_var_init() 03:41 < jonasschnelli> i think this should be changed. 03:42 -!- BashCo_ [~BashCo@unaffiliated/bashco] has joined #bitcoin-core-dev 03:43 -!- BashCo [~BashCo@unaffiliated/bashco] has quit [Ping timeout: 240 seconds] 03:43 -!- BashCo [BashCo@gateway/vpn/mullvad/x-idxdawvvdmhnqcwa] has joined #bitcoin-core-dev 03:44 < wumpus> common, straightforward way to do this is to separate it out into a private function, say clear_(), and a public function clear() that gets the lock and calls clear_() 03:44 < wumpus> this way, functions that already have the lock, or don't need it otherwise, can call clear_ instead 03:45 < wumpus> in this case it is the constructor so obviously it doesn't need the lock 03:45 < jonasschnelli> okay... can try a patch. 03:47 -!- BashCo_ [~BashCo@unaffiliated/bashco] has quit [Ping timeout: 268 seconds] 03:49 < wumpus> not that it should normally hurt to lock a lock that is part of your object in the constructor...
but this looks like some global obscure initialization order thing because the mempool object is global :( 03:51 < wumpus> (probably it would be better to use an explicit lifetime for objects such as these, but fixing that is more than you bargained for) 03:54 -!- dcousens [~anon@c110-22-219-15.sunsh4.vic.optusnet.com.au] has quit [Ping timeout: 240 seconds] 04:03 -!- dcousens [~anon@c110-22-219-15.sunsh4.vic.optusnet.com.au] has joined #bitcoin-core-dev 04:20 -!- jtimon [~quassel@74.29.134.37.dynamic.jazztel.es] has quit [Read error: Connection reset by peer] 04:32 < Luke-Jr> huh, the wallet rescan % shown in the splash screen is much higher than the debug.log Progress - block count vs tx count? :/ 04:36 < wumpus> GUI: std::max(1, std::min(99, (int)((Checkpoints::GuessVerificationProgress(chainParams.Checkpoints(), pindex, false) - dProgressStart) / (dProgressTip - dProgressStart) * 100))) 04:36 < wumpus> log: Checkpoints::GuessVerificationProgress(chainParams.Checkpoints(), pindex) 04:37 < wumpus> the GUI takes into account the starting point, which you'd expect would result in it showing a *lower* progress % when it makes a difference 04:38 < wumpus> but the GUI also has fSigchecks=false for GuessVerificationprogress, maybe that's it 05:42 < wumpus> this is curious, anyone else having problems with address parsing on windows 10?
https://github.com/bitcoin/bitcoin/issues/6886 05:44 < wumpus> looks like the IP parsing (using getaddrinfo) always returns 0:0:0:0:0:0:0:0/128, IPv4 parsing as well as IPv6 parsing fails 05:49 -!- jtimon [~quassel@74.29.134.37.dynamic.jazztel.es] has joined #bitcoin-core-dev 05:51 -!- ParadoxSpiral [~ParadoxSp@p508B86EF.dip0.t-ipconnect.de] has joined #bitcoin-core-dev 06:57 < GitHub175> [bitcoin] jonasschnelli opened pull request #6889: fix locking issue with new mempool limiting (master...2015/10/fix_mempool_lock) https://github.com/bitcoin/bitcoin/pull/6889 07:07 -!- Guest25608 [~pigeons@94.242.209.214] has quit [Ping timeout: 240 seconds] 07:07 -!- treehug88 [~textual@static-108-30-103-59.nycmny.fios.verizon.net] has joined #bitcoin-core-dev 07:14 -!- pigeons [~pigeons@94.242.209.214] has joined #bitcoin-core-dev 07:14 -!- pigeons is now known as Guest20816 07:18 -!- danielsocials [~quassel@45.32.248.113] has joined #bitcoin-core-dev 07:23 -!- danielsocials [~quassel@45.32.248.113] has quit [Remote host closed the connection] 07:26 -!- danielsocials [~quassel@45.32.248.113] has joined #bitcoin-core-dev 07:46 < morcos> Does anybody have any thoughts on storing the number of sigOps in a tx in the CTxMempoolEntry? 07:47 < morcos> We calculate this on ATMP, and we need it in CreateNewBlock, but it is on the expensive'ish side to calculate. 07:48 < sipa> sounds good 07:48 < morcos> Wow, I was prepared to have to fight for that... :) 07:50 < morcos> I was naively hoping you could just assume every scriptSig was satisfying a P2SH and you wouldn't over count by too much, but turns out that is NOT the case.. 07:55 < dgenr8> morcos: any plans to cache ancestor pkg sums the way descendants are? 07:56 < morcos> dgenr8: sdaftuar has written something which he might publish soon. but i don't think there is any point in merging it until we have mining code that can take advantage of it. 
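[editor's note: the lock-splitting pattern wumpus suggested for jonasschnelli's #6889 fix — a private clear_() that assumes the caller holds the lock, and a public clear() that takes the lock and delegates — looks roughly like this. This is a generic Python sketch of the pattern, not the actual C++ patch; in the real bug the hazard was the global mempool's constructor calling LOCK() during static initialization.]

```python
import threading

class Mempool:
    """Sketch of the clear_()/clear() split: a private method that assumes
    the lock is held, and a public wrapper that acquires it."""
    def __init__(self):
        self.lock = threading.Lock()
        self._clear()           # constructor: no other thread can see us yet,
                                # so no lock needed (the case wumpus describes)

    def _clear(self):
        # Precondition: caller holds self.lock (except during __init__).
        self.entries = {}

    def clear(self):
        with self.lock:         # public entry point: take the lock, then work
            self._clear()

pool = Mempool()
pool.entries["tx1"] = object()
pool.clear()
```

Callers that already hold the lock call `_clear()` directly, avoiding a self-deadlock with a non-recursive lock; everyone else goes through `clear()`.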
07:57 < morcos> i'm starting by just trying to make the existing mining code much faster 07:59 < gmaxwell> 02:58 < gmaxwell> Is there a reason that for fee purposes we are not using max(size, sigops*BLOCK_MAX_BYTES/BLOCK_MAX_SIGOPS) as the size of a transaction? 07:59 < gmaxwell> as in "the size" that we use for mempool/etc. purposes. 08:02 < gmaxwell> dcousens: Opt-in isn't sufficient to prevent it from driving users out, I do explain this... 08:02 < morcos> gmaxwell: i saw that. on my list somewhere is to research algorithms for satisfying multiple constraints. presumably if we know that actual size is usually the limiting factor, then it doesn't seem right to sort something lower b/c it has a lot of sigops even though you're not going to hit that limit 08:02 < morcos> i was thinking about it in the terms of some future consensus requirement on utxo deltas as well 08:03 -!- bsm1175321 [~bsm117532@38.121.165.30] has quit [Remote host closed the connection] 08:05 < morcos> the discussion we were having the other day about a separate thread for template generation seems very difficult to me. it's very hard to extract code from needing to lock at least the mempool. 08:05 < morcos> i think to do that, we might need 2 mempools, or to just allow the mempool to queue tx's while the mining code is running 08:06 < morcos> but it's an easy win to make existing createnewblock significantly faster just by adding an index on individual tx feerate, so i figured i'd start with that. 08:12 -!- bsm1175321 [~bsm117532@38.121.165.30] has joined #bitcoin-core-dev 08:13 -!- BashCo_ [~BashCo@unaffiliated/bashco] has joined #bitcoin-core-dev 08:17 -!- BashCo [BashCo@gateway/vpn/mullvad/x-idxdawvvdmhnqcwa] has quit [Ping timeout: 264 seconds] 08:19 < gmaxwell> morcos: I've generally assumed the multivariable optimization would be much harder and not really worth it-- the sigops limit is high enough that it doesn't normally have an effect.
What I suggested at least prevents the case where some silly dos attack transaction with a high per byte fee kills a feerate greedy selection. 08:20 < gmaxwell> morcos: hm. So I didn't expect the separate template thread would avoid locking the mempool. I assumed it would. But that having it would allow the createnewblock to always run without taking any long-held locks. 08:21 < gmaxwell> E.g. block creation thread would lock the mempool and build a block. Then take a blocktemplate lock for just an instant to replace the last template. 08:21 < morcos> gmaxwell: ah, interesting, so trick the mining code into producing small blocks unnecessarily.. maybe that is a good idea. 08:23 < morcos> gmaxwell: i see. i guess i need to understand how getblocktemplate is used in practice. i was assuming the threading was so we could be optimizing all the time. i didn't realize it was because you wanted to return a previously generated template to someone calling gbt. i assumed they had already gotten the previous template 08:24 < gmaxwell> and if the cached template is out of date with the chain when gbt runs, you just generate an empty template, which you can do without touching the mempool. 08:25 < morcos> yeah, so i did write a threaded version that does what you suggested, but it was bad, because it was basically just busy grabbing cs_main and mempool, and i thought well ok we'll have to make it run only every so often, but then i didn't see why that was better than the existing usage 08:26 < morcos> but you want to optimize for the case where the caller currently has nothing valid to work on? 08:26 < gmaxwell> When a new block is accepted by the node the miner process learns this (either via p2p inspection or via longpolling) and calls GBT. You want the lowest latency response possible; because until it returns the mining gear is working on a fork. 08:26 < gmaxwell> Then the mining clients will periodically poll to update the transaction list.
In this polling you don't care if the transaction data returned is seconds stale. 08:27 < morcos> yes, understood that the new block case needs to be optimized, that was my next step, and that seems pretty easy 08:27 < gmaxwell> since all that does is deny you the little incremental fee rate from new transactions that came in. 08:28 < gmaxwell> We already do createnewblock caching. but it only helps with the second and later requests. 08:28 < morcos> so what i was assuming is that when you periodically poll, which is the common use case, you already have valid work, so the existing longpoll functionality is fine, and there is no need for background process 08:28 < morcos> yes right 08:29 -!- jl2012 [~jl2012@unaffiliated/jl2012] has quit [Ping timeout: 240 seconds] 08:29 -!- jl2012 [~jl2012@unaffiliated/jl2012] has joined #bitcoin-core-dev 08:30 < morcos> so i think what you're saying is that for instance a new block could come in while you happen to not be longpolling, so then it won't be until you call getblocktemplate that it even tries to do anything, and it would be better if it started immediately? 08:30 < gmaxwell> Ultimately it would be good to avoid the case where if you poll just before a new block is accepted, you don't delay getting work on the new block. Though, indeed, then I think you run into locking issues with the mempool. 08:31 < gmaxwell> morcos: yes, when a new block comes in and you have been mining it should immediately compute a new block template, and until thats done, return empty ones so you'll at least be working on extending the longest valid chain. 08:32 < gmaxwell> (a failure to make this optimization is part of what contributes to miners mining on other miners work without validating-- they go and 'optimize' at a layer higher than Bitcoin, and as a result complex details like validation go out the window.
:) ) 08:32 < morcos> it seems like you'd want to be notified of 2 things, when there is a new block template (empty) and when there is the first template with txs based on the new block. how do you get that notification if you're not longpolling constantly? 08:36 < morcos> i was just going to change it so the longpoll returns immediately with an empty block if a new block comes in, via a flag to create new block or something. and then your next long polling call to getblocktemplate also calls CNB immediately which will return with txs. 08:36 < gmaxwell> We could trigger the GBT LP's every N seconds or when the fees change by X, I don't actually know what (if anything) triggers the GBT LP now except a new block. I also don't know how widely GBT LP is used at the moment: RPC concurrency has been kinda broken for a long time (fixed in git master), and GBT LP was a late addition, so I think most things decide when to poll GBT based on monitoring P2P and then otherwise run on a timer. 08:37 < morcos> ok... well doing it the threaded way shouldn't be hard either... 08:38 < gmaxwell> morcos: we do want to get people mining transactions quickly too-- as otherwise you're missing out on fees (and the public tends to get really angry when they're waiting for transactions and an empty block shows up. :) ) 08:38 < morcos> gmaxwell: yes, will be much much faster, so as long as you poll again immediately, shouldn't be more than 100ms 08:39 < morcos> i'll probably have a lot more questions when our mining hardware shows up.. :) 08:43 < gmaxwell> There is some ... potential for "can't win" here, so there is some hardware that really dislikes being longpolled often. E.g. for some dumb reason (usually a complete lack of understanding about all the reasons you might LP) and they flush the pipeline completely.
Now, GBT LPing doesn't necessarily mean the hardware gets disrupted, as some higher layer thing can ignore an early long poll and pick up the non-empty template on a later timed run. 08:44 < gmaxwell> So it might be the case that we expose breakage if we start LPing 100ms after the last time we LPed, just because we now have transactions. But if so, we can probably add a flag to delay that second LP event. 08:44 < gmaxwell> (that same hardware is the stuff that cannot be used with P2Pool) 08:52 -!- danielsocials [~quassel@45.32.248.113] has quit [Remote host closed the connection] 09:25 < GitHub52> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/ff057f41aa14...c8322ff7f754 09:25 < GitHub52> bitcoin/master 143d173 Eric Lombrozo: Use BOOST_CHECK_MESSAGE() rather than BOOST_CHECK() in alerts_tests.cpp and initialize strMiscWarning before calling PartitionCheck()." 09:25 < GitHub52> bitcoin/master c8322ff Wladimir J. van der Laan: Merge pull request #6888... 09:25 < GitHub96> [bitcoin] laanwj closed pull request #6888: Clear strMiscWarning before running PartitionAlert (master...alert_tests) https://github.com/bitcoin/bitcoin/pull/6888 09:34 -!- challisto [~challisto@unaffiliated/challisto] has quit [Quit: Leaving] 09:42 < GitHub16> [bitcoin] laanwj pushed 3 new commits to master: https://github.com/bitcoin/bitcoin/compare/c8322ff7f754...dbc5ee821ecd 09:42 < GitHub16> bitcoin/master d4aa54c Kevin Cooper: added org.bitcoin.bitcoind.plist for launchd (OS X) 09:42 < GitHub16> bitcoin/master e04b0b6 Kevin Cooper: added OS X documentation to doc/init.md 09:42 < GitHub16> bitcoin/master dbc5ee8 Wladimir J. van der Laan: Merge pull request #6621...
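[editor's note: gmaxwell's earlier fee-accounting suggestion — treat a transaction's size as max(size, sigops*BLOCK_MAX_BYTES/BLOCK_MAX_SIGOPS) — charges each sigop the block space it can crowd out. With the consensus limits of the time (1,000,000-byte blocks, 20,000 sigops) each sigop counts as 50 bytes. A sketch, with the function name invented for illustration:]

```python
MAX_BLOCK_SIZE = 1_000_000   # consensus limits at the time of this discussion
MAX_BLOCK_SIGOPS = 20_000    # so one sigop "occupies" 50 bytes of block space

def size_for_fee_purposes(size_bytes, sigops):
    # Charge the transaction for whichever block resource it consumes more of,
    # so a sigop-heavy tx can't buy priority cheaply on a per-byte feerate.
    return max(size_bytes, sigops * MAX_BLOCK_SIZE // MAX_BLOCK_SIGOPS)

normal = size_for_fee_purposes(250, 2)    # 2 sigops -> 100 "bytes"; real size wins
dos_tx = size_for_fee_purposes(250, 100)  # 100 sigops -> charged as 5000 bytes
```

This blunts the DoS case gmaxwell mentions, where a small, high-feerate, sigop-stuffed transaction would otherwise dominate greedy feerate selection while eating the block's sigop budget.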
09:42 < GitHub118> [bitcoin] laanwj closed pull request #6621: added org.bitcoin.bitcoind.plist for launchd (OS X) (master...master) https://github.com/bitcoin/bitcoin/pull/6621 09:48 -!- CodeShark [~CodeShark@cpe-76-167-237-202.san.res.rr.com] has joined #bitcoin-core-dev 09:54 < GitHub11> [bitcoin] laanwj pushed 3 new commits to master: https://github.com/bitcoin/bitcoin/compare/dbc5ee821ecd...7939164d8985 09:54 < GitHub69> [bitcoin] laanwj closed pull request #6622: Introduce -maxuploadtarget (master...2015/09/maxuploadtarget) https://github.com/bitcoin/bitcoin/pull/6622 09:54 < GitHub11> bitcoin/master 872fee3 Jonas Schnelli: Introduce -maxuploadtarget... 09:54 < GitHub11> bitcoin/master 17a073a Suhas Daftuar: Add RPC test for -maxuploadtarget 09:54 < GitHub11> bitcoin/master 7939164 Wladimir J. van der Laan: Merge pull request #6622... 09:54 -!- MarcoFalke [8af60235@gateway/web/cgi-irc/kiwiirc.com/ip.138.246.2.53] has joined #bitcoin-core-dev 10:06 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has joined #bitcoin-core-dev 10:13 -!- BashCo_ [~BashCo@unaffiliated/bashco] has quit [Remote host closed the connection] 10:14 -!- instagibbs_ is now known as instagibbs 10:21 -!- PaulCape_ [~PaulCapes@204.28.124.82] has quit [Quit: .] 10:22 -!- PaulCapestany [~PaulCapes@204.28.124.82] has joined #bitcoin-core-dev 10:32 -!- BashCo [~BashCo@unaffiliated/bashco] has joined #bitcoin-core-dev 10:47 -!- rubensayshi [~ruben@91.206.81.13] has quit [Remote host closed the connection] 11:33 < sipa> morcos, sdaftuar, BlueMatt: perhaps we need to talk a bit about what the expectation of the coincache + mempool size limiting is 11:34 < sipa> do we want a "all of the mempool's dependencies are guaranteed to always fit in memory" ? 11:36 < morcos> sipa: hmm, i don't think i was trying to argue for that. in fact i don't think it would be at all reasonable to do that. 
11:37 < morcos> i think we should set the default sizes so that is true in general though 11:37 < dgenr8> Was reversing the feerate sort just an expressiveness change, or did it fix something? 78b82f4 11:37 < sdaftuar> dgenr8: expressiveness mostly, avoids an issue with using project 11:38 < sipa> morcos: one way to accomplish that is to just have a combined coincache+mempool memory limit size, and depending on whether the dependencies are large or small, effectively have a smaller resp. larger mempool limit as a result 11:38 < sipa> morcos: but if we don't have that, we _must_ enforce the coincache limit 11:38 < dgenr8> sdaftuar: thx! 11:38 < sdaftuar> i think that might be a little dangerous though because that has an effect on relay policy 11:39 < sipa> morcos: if the coincache exceeds its size, it will be wiped anyway when the next block comes 11:39 < sipa> calling flush in the tx code just makes it happen a bit earlier to avoid an OOM 11:39 < sdaftuar> sipa: i think we should revisit that behavior. ideally the coinscache would have in it the set of things likely to be needed when the next block arrives 11:39 < morcos> sipa: you realize that we can't always enforce the coincache size? when we're connecting a block it could temporarily exceed for example 11:40 < sipa> morcos: yeah... 
11:40 < morcos> i agree that we should _eventually_ put in something that enforces limitation of the coincache size between blocks 11:40 < morcos> but that might as well wait for something smarter than just flushing 11:40 < sipa> i think the only real solution is to switch to a per-txout cache instead of a per-tx cache 11:40 < sipa> so the ratio is more or less fixed 11:40 < sdaftuar> ah i was wondering how feasible that would be 11:40 < morcos> and in the meantime we set the ratio of the defaults to be reasonable so that we don't usually blow through the cache limit 11:41 < morcos> given that you have to pay for relay to increase the cache size there is some cost to the attack after all 11:41 < morcos> and how big are we worried the cache size could get? 11:41 < sipa> define usually 11:42 < morcos> i guess i'm thinking that if we set mempool to 300M then it'd be rare that the coinsview cache was over 600M or something 11:43 < morcos> thats just a lot bigger than the current default 11:45 < morcos> i just think flushing after txs should be rare, if we want to set some higher limit to protect against OOM and flush after txs there, then thats fine, lets just set all these defaults so that what happens usually is the cache is big enough to contain a good chunk of the mempool. it doesn't really have to contain the whole thing, because 11:45 < morcos> you won't get to 100% mempool size in between blocks 12:08 -!- jl2012_ [~jl2012@119246245241.ctinets.com] has joined #bitcoin-core-dev 12:08 -!- jl2012 [~jl2012@unaffiliated/jl2012] has quit [Ping timeout: 272 seconds] 12:33 < phantomcircuit> indeed why are we keeping the already accepted mempool dependencies in the cache? they're already validated isn't that going to reduce the cache hit % ?
12:34 < sipa> phantomcircuit: they're validated again when a block is received 12:34 < sipa> which we want to make fast 12:35 < phantomcircuit> sipa, ah that's right 12:35 < sipa> an approach to avoid that is caching the success, but if you store that cache value in the mempool, you're making the mempool code consensus critical 12:35 < phantomcircuit> yeah lets not do that :) 12:35 < sipa> eh, and that doesn't even help 12:35 < sipa> signature validation is already cached anyway 12:35 < phantomcircuit> oh that reminds me 12:35 < sipa> and you need to be able to update the spends in the chainstate anyway after the block is validated 12:36 < phantomcircuit> the size of the sig cache isn't in the help menu 12:36 < sipa> good point 12:36 < phantomcircuit> it significantly reduces worst case AcceptNewBlock() latency to increase that 12:36 < phantomcircuit> miners should basically all have that be 10-100x the default 12:36 < gmaxwell> what? 12:37 < gmaxwell> the entries are _tiny_, we should be able to have a "big enough for ~100% hitrate" cache for everyone. 12:37 < sipa> i actually have no idea what size the sigcache is 12:39 < phantomcircuit> huh it is in the help 12:39 < gmaxwell> phantomcircuit: for block acceptance speed it may make sense to separately cache success and failure. 12:39 < phantomcircuit> 50000 entries 12:40 < phantomcircuit> gmaxwell, yes map > 12:41 < phantomcircuit> should work nicely under non-adversarial conditions 12:41 < sipa> that means that an entry "3000 ago" (which is around 1000 tx) only has a 96% hitrate 12:41 < sipa> eh, 94% 12:41 < gmaxwell> phantomcircuit: what, no. 12:41 < sipa> it uses random replacement 12:41 < phantomcircuit> sipa, yup 10-100x makes a big difference 12:41 < gmaxwell> phantomcircuit: transactions can spend unconfirmed outputs my friend.
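sipa's hitrate figure can be checked: with random replacement in a full cache of N entries, an entry survives n subsequent insertions with probability (1 - 1/N)^n ≈ e^(-n/N); for N = 50000 and n = 3000 that is roughly 94%, matching his corrected number.

```python
import math

def survival_probability(cache_size: int, later_inserts: int) -> float:
    """Probability a cache entry is still present after `later_inserts`
    insertions into a full cache that evicts uniformly at random."""
    return (1.0 - 1.0 / cache_size) ** later_inserts

p = survival_probability(50_000, 3_000)
print(round(p, 4))  # ~0.9418, i.e. roughly a 94% hitrate for such entries
assert abs(p - math.exp(-3_000 / 50_000)) < 1e-3
```

This is also why "10-100x the default" helps: scaling N to 500k pushes the survival probability for the same n above 99%.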
12:42 < phantomcircuit> gmaxwell, yeah you still have to check that the dependencies 12:43 < phantomcircuit> that *does* make the mempool view consensus critical though 12:43 < gmaxwell> phantomcircuit: if you only mean to check that the ECDSA passes, that is enough. 12:43 < sipa> the sigcache is per signature, not per transaction 12:43 < sipa> per transaction is pretty complicated, as you need to take active softforks into account etc 12:44 < phantomcircuit> sipa, hadn't considered that 12:44 < sipa> there have been patches before that cached full tx validation 12:44 < sipa> but actually just script validation caching is cheaper 12:45 < sipa> s/cheaper/easier 12:45 < phantomcircuit> hmm 12:46 < sipa> i can implement that easily 12:46 < phantomcircuit> have we considered that upload limiting disconnecting peers will have an impact on the network topology? 12:47 < gmaxwell> phantomcircuit: it only disconnects peers that fetch historic blocks currently; largely avoiding the concern. 12:47 < phantomcircuit> that seems safe 12:48 < gmaxwell> well safer at least. it doesn't completely escape topology impact. 12:48 < gmaxwell> Because a new node will, of course, not remain connected to anything over its limit. 12:49 < gmaxwell> so even once its synced up it will end up with a different position in the topology than it would otherwise. 12:49 < phantomcircuit> export MALLOC_ARENA_MAX=1 12:49 < phantomcircuit> that completely fixes the problem 12:49 < phantomcircuit> neat 12:50 < sipa> IBD's stall detection already causes significant movement in the topology 12:50 < sipa> phantomcircuit: ?? 12:50 < sipa> to reduce fragmentation? 12:50 < phantomcircuit> gmaxwell, right which means new nodes will tend to end up connected to nodes that have no upload limit 12:50 < phantomcircuit> sipa, yeah, calling getblocktemplate in a loop and memory usage <1GB 12:50 < gmaxwell> phantomcircuit: at least during their first runtime, after restart their topo will be more uniform.
12:51 < gmaxwell> phantomcircuit: we really need outbound rotation to flatten topo hotspots. 12:51 < phantomcircuit> gmaxwell, for that we need "outbound slots" basically 12:52 < phantomcircuit> the easiest way to do that is to just assign the second half of the outbound connections (based on vNodes position) as rotating 12:52 < phantomcircuit> definitely dont want to rotate all the outbounds :) 12:52 < gmaxwell> right, it's a good way to wander into a sybil attack. 12:53 < CodeShark> are you guys submitting proposals for hong kong? 12:53 < CodeShark> sorry if off-topic 12:54 < phantomcircuit> hmm this is actually kind of annoying to test 12:55 < phantomcircuit> have to get the mempool full again 12:55 < phantomcircuit> sending the "mempool" command to peers when they connect results in comical performance issues 12:55 < sipa> phantomcircuit: i run with that for benchmarking mempool behaviour 12:55 -!- molly [~molly@unaffiliated/molly] has quit [Read error: Connection reset by peer] 12:56 < phantomcircuit> sipa, have you noticed that you just sit there spinning at 100% cpu processing something (i haven't bothered to figure out what yet) 12:56 < sipa> phantomcircuit: turn on debug=mempool and debug=mempoolrej 12:57 < phantomcircuit> i run this node with debug= 13:04 < GitHub138> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/7939164d8985...2b625510d374 13:04 < GitHub138> bitcoin/master 7bbc7c3 Suhas Daftuar: Add option for microsecond precision in debug.log 13:04 < GitHub138> bitcoin/master 2b62551 Wladimir J. van der Laan: Merge pull request #6881...
13:05 < GitHub108> [bitcoin] laanwj closed pull request #6881: Debug: Add option for microsecond precision in debug.log (master...add-microsecond-timestamps) https://github.com/bitcoin/bitcoin/pull/6881 13:08 < phantomcircuit> BlueMatt, plz2rebase seed 13:14 < phantomcircuit> sipa, non-scientific gdb interrupt+ bt says it's spinning on ECDSA_verify 13:14 < phantomcircuit> which i guess is to be expected :) 13:16 -!- Thireus [~Thireus@icy.thireus.fr] has quit [Ping timeout: 250 seconds] 13:16 < wumpus> is it catching up or in initial sync? if so, that's expected 13:16 < wumpus> if in steady state it's not normal 13:17 < sipa> wumpus: this is with sending out a BIP35 "mempool" command, so at startup it receives zillions of transactions 13:18 < wumpus> ok... 13:18 < wumpus> yes, then it makes sense too 13:18 < sipa> so i do expect you'd be accepting mempool txn at the speed you can handle 13:26 < phantomcircuit> wumpus, http://0bin.net/paste/8349G1adfXBgLthd#l6rCtj99ayGFnNdx6-C+bbP1fYFF0EJIvg4qvV4JnD9 13:32 < wumpus> you do have commit 5ce43da03dff3ee655949cd53d952dace555e268? 13:33 < phantomcircuit> wumpus, yes 13:33 < wumpus> if so, this shouldn't happen - unless gdb happens to reenable the signal 13:34 < phantomcircuit> it shouldn't 13:34 < wumpus> yeah it does, it reports all signals by default. Try: "handle SIGPIPE nostop noprint pass" 13:34 < phantomcircuit> but who knows... gmaxwell do you know :) 13:35 < phantomcircuit> wumpus, sigh 13:35 < phantomcircuit> ok yes that's good 13:36 < phantomcircuit> bleh why does it do that? 13:57 < phantomcircuit> wumpus, can verify that MALLOC_ARENA_MAX=1 works btw 13:57 < sipa> phantomcircuit: what about 2 instead of 1? 13:58 < phantomcircuit> sipa, not sure it takes ages to load enough transactions into the mempool for the result to be valid 13:58 < wumpus> phantomcircuit: awesome 13:59 < sipa> too bad we can't set MALLOC_ARENA_MAX from the program itself, i suppose? 13:59 < phantomcircuit> sipa, well we can but... 
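The in-process route discussed just below is glibc's mallopt(M_ARENA_MAX, n), which has to run before threads start allocating (arenas are created lazily per thread). A hedged sketch via ctypes — M_ARENA_MAX's parameter number (-8) comes from glibc's <malloc.h>, and the guard is there because mallopt is glibc-specific:

```python
import ctypes

M_ARENA_MAX = -8  # mallopt() parameter number, from glibc's <malloc.h>

def limit_malloc_arenas(n: int):
    """Cap glibc malloc's arena count in-process. Returns mallopt's result
    (1 = success, 0 = failure) or None where mallopt isn't available."""
    try:
        libc = ctypes.CDLL(None)  # symbols of the running process
        mallopt = libc.mallopt
    except (OSError, AttributeError, TypeError):
        return None               # non-glibc platform
    mallopt.argtypes = (ctypes.c_int, ctypes.c_int)
    mallopt.restype = ctypes.c_int
    return mallopt(M_ARENA_MAX, n)

# In C/C++ the equivalent is a one-liner, mallopt(M_ARENA_MAX, 1);,
# called early in main() before any extra arenas get created.
assert limit_malloc_arenas(1) in (None, 0, 1)
```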
14:00 < sipa> how so? 14:00 < wumpus> I think you can 14:00 < sipa> we can change the env variable, but i expect that the arenas are created at library load time 14:00 < sipa> before we can change it 14:00 < phantomcircuit> sipa, the easiest would be to set the env and execve 14:01 < phantomcircuit> ie h4x 14:01 < wumpus> let me see glibc source... 14:03 < wumpus> it indeed makes no sense to set the env var, as it parses it at startup as expected, however you should be able to do mallopt(M_ARENA_MAX, 1) 14:05 < wumpus> /* mallopt options that actually do something */ in /usr/include/malloc.h :-) 14:06 < sipa> man mallopt does not list M_ARENA_MAX here 14:07 < wumpus> it's an undocumented option 14:08 < wumpus> probably should call it before any call to malloc to be effective 14:09 < wumpus> or at least before calling it in a thread 14:09 < sipa> the execve approach works... 14:09 < sipa> it's ugly because it needs to know the name of the binary you just started 14:09 < wumpus> that's really ugly, and makes debugging harder 14:10 < sipa> agree 14:10 < phantomcircuit> ha it's not even in the libc manual 14:10 < wumpus> and that (though you should always have that in argv[0] on unix) 14:10 < phantomcircuit> it looks like we could set M_MMAP_THRESHOLD which would be slower but seems to guarantee memory is actually free'd 14:11 -!- ParadoxSpiral [~ParadoxSp@p508B86EF.dip0.t-ipconnect.de] has quit [Remote host closed the connection] 14:12 < wumpus> but gdb will show something like "process 28686 is executing new program: /bin/dash" when a program execve's, and it loses all its state 14:13 < phantomcircuit> wumpus, yeah i know thus "h4x" 14:13 < phantomcircuit> it would probably also break anybody using start-stop-daemon 14:13 < phantomcircuit> gtg 14:14 < wumpus> but the mallopt should work too (can't check it right now though) 14:14 < wumpus> ok later 14:16 -!- randy-waterhouse [~kiwigb@opentransactions/dev/randy-waterhouse] has joined #bitcoin-core-dev 14:28 -!- 
treehug88 [~textual@static-108-30-103-59.nycmny.fios.verizon.net] has quit [Remote host closed the connection] 14:48 -!- deepcore [~deepcore@2a01:79d:469e:ed94:8e70:5aff:fe5c:ae78] has joined #bitcoin-core-dev 15:01 -!- molly [~molly@unaffiliated/molly] has joined #bitcoin-core-dev 15:12 -!- MarcoFalke [8af60235@gateway/web/cgi-irc/kiwiirc.com/ip.138.246.2.53] has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client] 15:20 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has quit [Quit: My MacBook has gone to sleep. ZZZzzz…] 15:21 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has joined #bitcoin-core-dev 15:31 -!- belcher [~user@unaffiliated/belcher] has joined #bitcoin-core-dev 16:06 < phantomcircuit> 41k transactions in the mempool and counting 16:07 < sipa> yeah, 50k sigcache entries won't be good in that case... 16:07 < phantomcircuit> yeah i changed mine to maxsigcachesize=1000000 16:13 < gmaxwell> sipa: sigcache is currently uint256,vector,cpubkey.... bloat bloat bloat. 16:14 < gmaxwell> Could be compressed with H(the above). 16:14 < gmaxwell> H() need not even be very collision resistant if there is a per node secret nonce. 16:21 < phantomcircuit> gmaxwell, yeah that's what i was thinking 16:21 < sipa> it's around 220 bytes per entry now 16:23 < phantomcircuit> sipa, depends on the size of the pubkey script right? 16:23 < phantomcircuit> it would be nice if it didn't 16:23 < phantomcircuit> up to 44.5k now 16:24 < sipa> phantomcircuit: it's a pubkey; not a script 16:24 < sipa> pubkeys are 65 bytes 16:24 < sipa> in memory 16:24 < sipa> + alignment 16:24 < phantomcircuit> oh 16:26 < gmaxwell> sipa: plus whatever overhead there is from the std::set. 16:26 < sipa> gmaxwell: around 40 bytes, i included that 16:38 -!- randy-waterhouse [~kiwigb@opentransactions/dev/randy-waterhouse] has quit [Quit: Leaving.]
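gmaxwell's compression idea sketched: instead of storing the full ~220-byte entry, store a short digest of it keyed with a per-node secret nonce, so an attacker cannot precompute colliding entries and H() need not be strongly collision resistant. The digest width, field encoding, and use of SHA256 here are illustrative assumptions, not Bitcoin Core's actual layout:

```python
import hashlib
import os

SALT = os.urandom(16)  # per-node secret nonce, generated at startup

def entry_digest(sighash: bytes, sig: bytes, pubkey: bytes) -> bytes:
    """Compress one sigcache entry to an 8-byte salted digest."""
    h = hashlib.sha256(SALT)
    for part in (sighash, sig, pubkey):
        h.update(len(part).to_bytes(2, "big"))  # length-prefix each field
        h.update(part)
    return h.digest()[:8]

# The cache then holds 8-byte digests instead of full tuples.
cache = {entry_digest(b"\x11" * 32, b"dummy-sig", b"dummy-pubkey")}
assert entry_digest(b"\x11" * 32, b"dummy-sig", b"dummy-pubkey") in cache
assert entry_digest(b"\x22" * 32, b"dummy-sig", b"dummy-pubkey") not in cache
```

At 8 bytes plus container overhead per entry, the same memory budget holds several times as many entries as the tuple form sipa sizes at ~220 bytes.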
16:39 < gmaxwell> sipa: would it be really ugly if accepting the block carried a flag to the checker that told it to remove the entry? with that alone the hitrate would no longer be low absent attacks because the cache would not be full all the time. 16:52 < morcos> sipa: phantomcircuit: i don't think we need to worry about M_ARENA_MAX. we can drastically reduce the size of the problem. first we rewrite CreateNewBlock to only pull into the cache 1 block's worth of txs. second we make it run in only 1 thread. 16:53 < morcos> but yeah that maxsigcachesize will bite you. i couldn't figure out why my new code was slower and it was that its sigcache was getting blown away 17:05 < phantomcircuit> morcos, my huge sigcache is mostly fine 17:05 < morcos> phantomcircuit: i'm running with 1M now and its still noticeably slower (than infinite) 17:05 < phantomcircuit> changing the max arena 100% fixed the problem without rewriting a bunch of stuff too :) 17:06 < phantomcircuit> hmm it really shouldn't be 17:06 < morcos> well i'm changing the other stuff anyway :) 17:06 < morcos> well noticeably slower i mean just that... say instead of taking 30us to verify a txin on average it takes 40us 17:07 < morcos> but without a cache at all of course its 10x higher 17:09 < morcos> its the process of filling up your 300M mempool for the first time that makes it slow... i guess it should improve once its full and the mempool gets a minfee 17:10 < morcos> scarily fast filling up to 300M from starting from scratch. like 45mins 17:11 < phantomcircuit> morcos, without the limiting 17:11 < phantomcircuit> im at about 2GB now 17:11 < phantomcircuit> 3.168GB 17:12 < morcos> i'm still disturbed by how this same backlog of txs is so readily rerelayed 17:13 < gmaxwell> esp since the network behavior is supposed to not relay transactions once they've already been relayed.
:( 17:13 < gmaxwell> unfortunately there seem to be some abusive nodes out there that behave strangely, constantly resending transactions and such. 17:14 < morcos> yeah i noticed that, i was wondering if they were nodes who considered these wallet tx's or if it was specifically designed to retransmit them 17:15 < morcos> ok modulo the final TestBlockValidity check, i brought the time of CreateNewBlock from a bit over 1 second to under 10 ms. 17:15 < gmaxwell> the ones I've caught doing it with an instrumented node appeared to regurgitate their whole mempool every block. 17:16 < morcos> unfortunately i wouldn't feel good about dumping that check. 17:16 < morcos> that also doesn't include adding any priority transactions. i can add that back in, but i think it'll be slow 17:16 < morcos> gmaxwell: oh really?? thats bad 17:18 < midnightmagic> fwiw, gmaxwell recall you wanted me to check and see whether any of those nodes were resending me tx dupes: across the entire time I was monitoring, none of them did that I could detect while parsing the debug.log. 17:18 < midnightmagic> I don't think I actually got back to you about that. 17:18 < midnightmagic> gmaxwell: What's the periodicity of the resends? 17:19 < gmaxwell> midnightmagic: at least some were doing this after every block. 17:20 < gmaxwell> e.g. a bogus assumption that any transaction known would make it in a block, and if it didn't you must have just 'lost' it. 17:20 < gmaxwell> or the like. 17:20 < midnightmagic> I wonder if some services are attempting to "help" the network by propagating tx in order to guarantee reach for nodes that have rebooted or something. 17:29 < CodeShark> sending as in sending the actual transaction? or just an inv? 17:32 < phantomcircuit> CodeShark, same thing for this purpose 17:32 < phantomcircuit> zombie transactions 17:33 < CodeShark> for what purpose?
I would think sending the mempool (as in the actual txs, not just the inv) is MUCH worse for bandwidth 17:34 < morcos> would anybody else find useful a feature which split the minimum fee for acceptance to your mempool and the min fee for relay? 17:34 < morcos> often during this testing, i either hack up my node to not relay, or i feel bad about the fact that i'm helping propagate these low fee txs all over again 17:34 < CodeShark> I would find really useful a feature that would allow you to get paid for running a validator/relay node :p 17:35 < morcos> you get paid in food for your soul 17:35 < CodeShark> lol 17:35 < sipa> CodeShark: i think that screws incentives completely 17:35 < CodeShark> sipa: how so? 17:35 < sipa> CodeShark: if you can get paid to run a node, you can equally get paid to run a compromised node 17:36 < CodeShark> presumably you'd have to actually perform the function sufficiently well 17:36 < sipa> you can't prove that you are 17:37 < CodeShark> well, relay might be more easily amenable to micropayments - validation is a little trickier 17:37 -!- deepcore [~deepcore@2a01:79d:469e:ed94:8e70:5aff:fe5c:ae78] has quit [Ping timeout: 256 seconds] 17:38 < CodeShark> but I think ultimately doable...not sure how, but my gut tells me at the very least some efficient probabilistic checks are possible 17:39 < CodeShark> without having to go to SNARKs 17:39 < sipa> their validation is unobservable... it's a feature they provide to their users, not to you 17:42 < CodeShark> it's a feature they provide to the network 17:43 < sipa> no 17:43 < sipa> relay is what they provide to the network 17:43 < sipa> and you can't observe that they validate before relay 17:43 < sipa> 1) they may just be connected to honest nodes themselves, so they'll only relay valid blocks through that 17:44 < sipa> 2) past performance is not an indicator for future success.
One awesome way to sybil attack the network is to spin up many totally honest nodes, and then suddenly turn evil once you notice a sufficient portion of the network depends on you. 17:45 < sipa> relay/ibd is a dumb service they offer to the network - which hopefully verifies what you tell them enough to not need to trust you 17:46 < sipa> the power of validation is only through knowing that the person running it has economic activity depending on it 17:48 < CodeShark> so I guess if the network is to subsidize it it would have to either require miners to be doing validation somehow...or to incentivize invalidation 17:50 < CodeShark> in any case, if we could just take care of incentives for relay for now I'd be happy :) 17:50 < CodeShark> that would go a long ways towards solving the mempool issues 17:57 < CodeShark> providing SPV proofs could be incentivized...and perhaps nodes that can provide SPV proofs would also have to perform full validation 18:00 < CodeShark> or it would be in their interest to, I suppose 18:01 -!- fkhan [weechat@gateway/vpn/mullvad/x-yjxxwavjvubkbjnz] has quit [Ping timeout: 250 seconds] 18:15 -!- fkhan [weechat@gateway/vpn/mullvad/x-fpuzdzgceqhznkgv] has joined #bitcoin-core-dev 18:15 < phantomcircuit> morcos, i always return early in relaytransaction when messing with mempool stuff 18:22 < phantomcircuit> ha i can detect which peers are master by the order of their responses to the mempool command 18:23 < phantomcircuit> 2015-10-27 01:23:33 CSignatureCache.setValid.size() 230249 18:23 < phantomcircuit> 2015-10-27 01:23:33 AcceptToMemoryPool: peer=15: accepted 3f5cfddb91105914ddbe9dc8dd3f32542bd9b1c5fe6b73873b4b6a5c87b62f51 (poolsz 3360) 18:23 < phantomcircuit> interesting 18:25 < gmaxwell> whats interesting? 18:26 < morcos> phantomcircuit: is this a new node? not the 3GB one? its not surprising. 
most of these spam txs have 100 txins, so you'd assume about 100 to 1 ratio of txs received to setValid size 18:27 < phantomcircuit> gmaxwell, sigcache vs mempool size 18:27 < phantomcircuit> morcos, it's currently talking up to being full 18:27 < phantomcircuit> takes ages 18:34 -!- Ylbam [uid99779@gateway/web/irccloud.com/x-hvdxzhpjpaqlezfa] has quit [Quit: Connection closed for inactivity] 18:53 < phantomcircuit> ok so 1,000,000 sigcache ~= 300MiB mempool 18:54 < phantomcircuit> we should definitely try to switch to a single parameter "-memlimit" 18:54 < phantomcircuit> and then split it out as percentages 19:30 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has quit [Quit: My MacBook has gone to sleep. ZZZzzz…] 19:43 -!- belcher [~user@unaffiliated/belcher] has quit [Quit: Leaving] 19:46 -!- jl2012_ [~jl2012@119246245241.ctinets.com] has quit [Read error: Connection reset by peer] 19:46 -!- jl2012 [~jl2012@unaffiliated/jl2012] has joined #bitcoin-core-dev 21:03 -!- dcousens [~anon@c110-22-219-15.sunsh4.vic.optusnet.com.au] has quit [Quit: Lost terminal] 21:28 -!- CodeShark [~CodeShark@cpe-76-167-237-202.san.res.rr.com] has quit [Ping timeout: 246 seconds] 21:41 -!- dcousens [~anon@c110-22-219-15.sunsh4.vic.optusnet.com.au] has joined #bitcoin-core-dev 21:41 < dcousens> hmph. CXX libbitcoin_server_a-init.o 21:41 < dcousens> g++: internal compiler error: Killed (program cc1plus) 21:41 < dcousens> Awesome :S 21:42 -!- fkhan [weechat@gateway/vpn/mullvad/x-fpuzdzgceqhznkgv] has quit [Ping timeout: 250 seconds] 21:43 < jcorgan> in my experience that is almost certainly an out-of-memory problem 21:43 < dcousens> jcorgan: interesting, I am trying to compile it on a device with only 1GB 21:45 < jcorgan> look in dmesg for linux OOM killer action 21:45 < dcousens> yeah I did, you were right 21:46 < gmaxwell> add swap. There are some options you can give g++ which will drastically reduce memory usage... but I have no idea what they are.
:) 21:46 < dcousens> gmaxwell: easier to just put in another stick atm 21:52 -!- blur3d [~blur3d@pa49-197-5-125.pa.qld.optusnet.com.au] has joined #bitcoin-core-dev 21:56 -!- fkhan [weechat@gateway/vpn/mullvad/x-slnmaxmiuwdufown] has joined #bitcoin-core-dev 21:59 -!- blur3d [~blur3d@pa49-197-5-125.pa.qld.optusnet.com.au] has quit [Ping timeout: 260 seconds] 23:08 -!- Guest20816 [~pigeons@94.242.209.214] has quit [Ping timeout: 265 seconds] 23:08 -!- pigeons [~pigeons@94.242.209.214] has joined #bitcoin-core-dev 23:09 -!- pigeons is now known as Guest72716 23:12 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has joined #bitcoin-core-dev 23:18 -!- Thireus [~Thireus@icy.thireus.fr] has joined #bitcoin-core-dev 23:20 -!- d_t [~textual@c-50-136-139-144.hsd1.ca.comcast.net] has quit [Quit: My MacBook has gone to sleep. ZZZzzz…] 23:23 -!- ParadoxSpiral [~ParadoxSp@p508B86EF.dip0.t-ipconnect.de] has joined #bitcoin-core-dev 23:31 < dcousens> poop 23:31 < dcousens> 2GB still dies lol 23:31 < dcousens> time to look up those g++ options... 23:31 < dcousens> or swap... fk it I'll add swap 23:32 < dcousens> hmmm, dmesg not showing OOM this time 23:32 < dcousens> but same error 23:45 -!- deepcore [~deepcore@2a01:79d:469e:ed94:8e70:5aff:fe5c:ae78] has joined #bitcoin-core-dev 23:46 -!- ParadoxSpiral [~ParadoxSp@p508B86EF.dip0.t-ipconnect.de] has quit [Remote host closed the connection] 23:56 < wumpus> 2GB should certainly be enough. See e.g. https://github.com/bitcoin/bitcoin/issues/6658 , worst contender main.cpp uses 1.2MB while compiling (on gcc 4.8). 
Adding swap may help though, linux' memory management works better if swap is enabled even if you don't need the extra memory 23:56 < wumpus> if you want to build bitcoind for a smaller device I'd recommend cross compiling though, it's easy with the depends system 23:56 < dcousens> I ended up `git clean -xdf`, re-configuring/autoconf and it got past that second hurdle 23:56 < dcousens> Guessing it configured something different when I only had 1GB? 23:57 < wumpus> it doesn't 23:57 < gmaxwell> wumpus: some people are managing to run with no overcommit _and_ no swap; ... dunno how anything at all works for them. 23:57 < dcousens> Then I have no idea, except that it worked this time, but again, no OOM from dmesg so not sure 23:57 < wumpus> but it may be possible for gcc, when it crashes, to generate a poop file that will crash it later 23:59 < dcousens> gmaxwell: eh, on my own home nodes I always ran without swap 23:59 < dcousens> but, they always had a bit more hardware to play with 23:59 < gmaxwell> dcousens: note that I said both no swap and no overcommit.