On Sun, Jun 14, 2015 at 05:53:05PM -0700, Eric Lombrozo wrote:
> I think the whole complexity talk is missing the bigger issue.
>
> Sure, per block validation scales linearly (or quasilinearly…there’s an O(log n) term in there somewhere but it’s probably dominated by linear factors at current levels…asymptotic limits don’t always apply very well to finite systems). And there’s an O(n^2) bandwidth issue.
>
> The real issue, though, is validation cost. The n in O(n) here does not represent block size - it represents the size of the entire block chain for every new validator that must be synchronized! It means we have no way to construct short proofs (or at least arguments that are computationally *hard* to forge) without requiring the validator to maintain the complete system state. And currently, there is no mechanism for directly compensating validators.

...and can there be? The goal of validation, after all, is finding out whether a mistake has been made, and current production cryptography doesn't have any way to prove you have done that honestly. You need "moon math" like recursive SNARKs to do that, and it's unknown when they'll be available for production usage.

When we say "compensating validators", if we're being honest with ourselves what we really mean is the much more boring task of compensating servers that are giving us blockchain data. That has nothing to do with validation.

A useful task would be to make an SPV archival node implementation that did no validation at all, while distributing the blockchain data linked to the longest chain. Such an implementation can and should serve SPV clients, as this is what their actual security model usually is, given the lack of authentication of the identity of the server they're connecting to (a rough sketch of that header-only check is below). Actually implementing this would be a simple matter of patching Bitcoin Core to turn off block validation.

> A full validator that goes offline even for a short period of time takes a while to fully catch up to the rest of the network - and starting up a new validator from scratch will continue to be painful…even for those of us who’ve turned this into routine by now, let alone new nontechnical users.

Concretely, 20MB blocks lead to 20GB/week of blocks. On my 1MB/second down internet, turning on my node after a week away would take five hours; starting up a new node after two years of 20MB blocks would take 23 days - likely longer in practice.

There are serious unsolved and undiscussed devops and development issues with this. For instance, after changes to the validation code it's routine to resync/reindex Bitcoin Core to ensure that starting up a new node actually works. Even now we haven't really come to grips with what consistent 1MB blocks look like from this point of view after a few years of usage, let alone another order of magnitude longer sync times.

> Satoshi’s SPV is not a real solution - it’s a mere suggestion that wasn’t fully thought out at the time of the Bitcoin white paper. Besides lacking a good validation security model, practical implementations of it weaken privacy and complicate client implementations substantially…and the worst part, it still doesn’t scale all that well. The validator still has to query every single block (even if filtered) back to the first transaction (which cannot be determined without doing a blockchain scan anyway).

Note how with 20MB blocks it would take up to 1TB of IO per year-synced for a bloom-filter-using wallet to sync the blockchain.
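For reference, the back-of-the-envelope arithmetic behind the figures above (20GB/week, ~five hours, ~23 days, ~1TB/year) is nothing more than the following - assuming every block is a consistently full 20MB, one block every ten minutes, and a flat 1MB/s downlink with no other overhead; the day count wobbles by a day or so depending on whether you count GB or GiB:

    # Back-of-the-envelope only: assumes every block is a full 20MB, one
    # block every 10 minutes, and a 1MB/s downlink with no other overhead.
    BLOCK_MB       = 20
    BLOCKS_PER_DAY = 6 * 24            # ~144 blocks/day at 10 min/block
    LINK_MB_PER_S  = 1.0

    week_mb = BLOCK_MB * BLOCKS_PER_DAY * 7
    year_mb = BLOCK_MB * BLOCKS_PER_DAY * 365

    print("blocks per week:        %.0f GB" % (week_mb / 1000.0))              # ~20 GB
    print("catch up after a week:  %.1f hours" % (week_mb / LINK_MB_PER_S / 3600))
    print("sync 2 years of blocks: %.0f days" % (2 * year_mb / LINK_MB_PER_S / 86400))
    print("bloom scan per year:    %.1f TB of IO" % (year_mb / 1e6))           # ~1 TB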
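As for the header-only security model that the archival-SPV idea above serves: it really is tiny. The sketch below (Python, standard library only; the function names are mine, it deliberately ignores the retargeting rules and most-work chain selection, and it's an illustration rather than an implementation) checks nothing beyond "each 80-byte header commits to the previous one and meets its own claimed proof-of-work target":

    import hashlib

    def dsha256(data):
        # Bitcoin's double-SHA256
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def bits_to_target(bits):
        # Expand the compact nBits field into the full 256-bit target.
        exponent = bits >> 24
        mantissa = bits & 0x007fffff
        if exponent <= 3:
            return mantissa >> (8 * (3 - exponent))
        return mantissa << (8 * (exponent - 3))

    def check_header_chain(raw_headers):
        # raw_headers: iterable of 80-byte serialized block headers.
        # Checks only that each header extends the previous one and that its
        # hash is below the target it claims; no transactions are touched,
        # and difficulty retargeting is deliberately NOT checked here.
        prev_hash = None
        for raw in raw_headers:
            assert len(raw) == 80
            h = dsha256(raw)
            if prev_hash is not None:
                assert raw[4:36] == prev_hash        # hashPrevBlock field
            bits = int.from_bytes(raw[72:76], 'little')
            assert int.from_bytes(h, 'little') <= bits_to_target(bits)
            prev_hash = h
        return prev_hash                             # hash of the chain tip

Everything a bloom-filter wallet does beyond that is about fetching data, not validating it - which is rather the point.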
We already have a bloom IO DoS attack issue - what are the consequences of making that issue 20x worse? Nobody has analysed it yet.

-- 
'peter'[:-1]@petertodd.org
0000000000000000127ab1d576dc851f374424f1269c4700ccaba2c42d97e778