I'd imagined that nodes would be able to pick their own ranges to keep rather than having fixed, pre-chosen intervals. "Everything or two weeks" is rather restrictive - presumably node operators are constrained by physical disk space, which means the number of blocks they want to keep will vary with block sizes, storage costs, and so on.

Adding new fields to the addr message and relaying them to newer nodes would let every node advertise the height at which it pruned. I know that means a longer wait before the data is available everywhere compared with service bits, but it seems like most nodes won't be pruning right away anyway, so there's plenty of time for upgrades. If an old node connected to a new node and sent getdata for blocks that had been pruned, immediate disconnection should make the old node go find a different peer. An old node that hasn't run for a long time might take a while before it finds a node that has what it wants, but that doesn't seem like a big deal.
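For concreteness, here is a minimal C++ sketch of the idea above; the struct layout and the field name nPruneHeight are hypothetical illustrations, not an actual protocol change:

    #include <cstdint>

    // An addr-style entry extended with a pruned-height field. Only the
    // general shape of the existing addr format is mirrored here.
    struct CExtendedAddress {
        uint32_t nTime;        // last-seen timestamp (existing addr field)
        uint64_t nServices;    // service bits (existing addr field)
        // ... IP and port as in the existing addr format ...
        int32_t  nPruneHeight; // proposed: blocks below this height were discarded
    };

    // A pruned node receiving getdata for a discarded block disconnects
    // immediately, prompting the requester to go try another peer.
    bool ShouldDisconnectForGetdata(int32_t requestedHeight, int32_t pruneHeight) {
        return requestedHeight < pruneHeight;
    }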

What is the use case for NODE_VALIDATE? Nodes that throw away blocks almost immediately? Why would a node do that?


On Sun, Apr 28, 2013 at 5:51 PM, Pieter Wuille <pieter.wuille@gmail.com> wrote:
Hello all,

I think it is time to move forward with pruning nodes, i.e. nodes that fully validate and relay blocks and transactions, but which do not keep (all) historic blocks around and thus cannot be queried for them.

The biggest roadblock is making sure new and old nodes that start up are able to find nodes to synchronize from. To help them find peers, I would like to propose adding two extra service bits to the P2P protocol (a code sketch follows the list):
* NODE_VALIDATE: relays and validates blocks and transactions, but is only guaranteed to answer getdata requests for (recently) relayed blocks and transactions and for mempool transactions.
* NODE_BLOCKS_2016: can be queried for the last 2016 blocks, but with no guarantee of relaying/validating new blocks and transactions.
* NODE_NETWORK (which existed before) will imply NODE_VALIDATE and guarantee availability of all historic blocks.
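A rough sketch of how these bits might be laid out and checked; NODE_NETWORK's value (bit 0) is the existing assignment, while the other two values are placeholders rather than final assignments:

    #include <cstdint>

    static const uint64_t NODE_NETWORK     = (1 << 0); // existing: full historic archive
    static const uint64_t NODE_VALIDATE    = (1 << 1); // placeholder value
    static const uint64_t NODE_BLOCKS_2016 = (1 << 2); // placeholder value

    // NODE_NETWORK implies both weaker guarantees, per the proposal.
    bool RelaysAndValidates(uint64_t services) {
        return (services & (NODE_NETWORK | NODE_VALIDATE)) != 0;
    }

    bool ServesLast2016Blocks(uint64_t services) {
        return (services & (NODE_NETWORK | NODE_BLOCKS_2016)) != 0;
    }

With this layout, a peer looking for a source to sync recent blocks from would require ServesLast2016Blocks(), while a peer looking for relay partners would require RelaysAndValidates().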

The idea is to separate the different responsibilities of network nodes into separate bits, so that they can - at some point - be implemented independently. Perhaps we want more than just one depth (2016 blocks); maybe also 144 or 210000, but those can be added later if necessary. I monitored the frequency of block depths requested from my public node and got this frequency distribution: http://bitcoin.sipa.be/depth-small.png - so it seems 2016 nicely matches the set of frequently-requested blocks (indicating that few nodes are offline for more than two weeks consecutively).
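(Aside on the two-week figure: 2016 blocks is the difficulty-retarget interval, and at one block per ten minutes, 2016 × 10 = 20160 minutes = exactly 14 days.)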

I'll write a BIP to formalize this, but wanted to get an idea of how much support there is for a change like this.

Cheers,

-- 
Pieter
 


