On Fri, Apr 11, 2014 at 5:54 PM, Gregory Maxwell <gmaxwell@gmail.com> wrote:
For the non-error-coded case I believe nodes
with random spans of blocks work out asymptotically to the same
failure rates as random selection.

If each "block" is really 512 blocks in sequence, then each "slot" is more likely to be hit.  It effectively reduces the number of blocks by the minimum run lengths.

ECC seemed cooler though.
 
(The conversation Peter Todd was referring to was one where I was
pointing out that with suitable error coding you also get an
anti-censorship effect where it's very difficult to provide part of the
data without potentially providing all of it)

Interesting too.
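
For anyone who hasn't seen the trick, here's a toy sketch of the idea as I understand it (my own simplification, not gmaxwell's actual coding scheme): a k-of-n code over a prime field where any k coded chunks reconstruct all of the original data, so a node can't usefully serve "just part" of it.

P = 2**31 - 1  # prime field modulus (toy choice)

def lagrange_eval(points, x):
    # Evaluate the unique degree-(k-1) polynomial through `points` at x (mod P).
    total = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (x - xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    # data: k field elements placed at x = 0..k-1; emit n coded chunks at x >= k.
    pts = list(enumerate(data))
    return [(x, lagrange_eval(pts, x)) for x in range(len(data), len(data) + n)]

def decode(chunks, k):
    # Any k coded chunks recover the full original data.
    return [lagrange_eval(chunks[:k], x) for x in range(k)]

data = [42, 7, 1337, 99]                 # k = 4 words
chunks = encode(data, 8)                 # 8 coded chunks
assert decode(chunks[2:6], 4) == data    # any 4 of them rebuild everything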

I think in the network we have today and for the foreseeable future we
can reasonably count on there being a reasonable number of nodes that
store all the blocks... quite likely not enough to satisfy the
historical block demand from the network alone, but easily enough to
supply blocks that have otherwise gone missing.

That's true.  Scaling up the transactions per second increases the chance of data being lost.
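
As a back-of-envelope illustration (numbers are mine and purely hypothetical): if each of N partial nodes keeps a random fraction f of the chain, a given block is held by no node with probability roughly (1 - f)^N, and growing the chain with fixed per-node disk shrinks f:

N = 1_000            # partial archival nodes (assumed)
disk_gb = 50         # storage each node donates (assumed)
for chain_gb in (100, 1_000, 10_000):
    f = min(1.0, disk_gb / chain_gb)
    p_lost = (1 - f) ** N
    print(f"chain {chain_gb:>6} GB  f={f:.3f}  P(no node has a given block)={p_lost:.2e}")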

With side/tree chains, the odds of data loss in the less important chains increase (though those are, by definition, lower-value chains).