This is probably just noise, but what if nodes could compress and store earlier transaction sets ("archive sets") and serve them up conditionally? So if there were, say, 100 archive sets of 10,000 blocks each, you might have 5 open at any time while you're an active archive node, with the others sitting on your disk compressed and unavailable to the network. This would let nodes keep all full transactions while conserving disk space and network activity, since they would never have to respond about every possible transaction.
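
To make that concrete, here is a rough Python sketch of what I'm imagining; the set size, the cap of five open sets, and every class and function name are made up purely for illustration, not an existing interface:

import zlib

SET_SIZE = 10_000        # blocks per archive set (the 10,000 figure above)
MAX_OPEN_SETS = 5        # sets served to the network at any one time


def archive_set_index(height):
    """Map a block height to the archive set that contains it."""
    return height // SET_SIZE


class ArchiveNode:
    def __init__(self):
        self.compressed = {}   # set index -> compressed serialized blocks
        self.open_sets = {}    # set index -> decompressed, servable blocks

    def store_set(self, index, raw_blocks):
        """Keep every set on disk, but compressed and not served."""
        self.compressed[index] = zlib.compress(raw_blocks)

    def open_set(self, index):
        """Decompress one set and start serving it to peers."""
        if len(self.open_sets) >= MAX_OPEN_SETS:
            # Drop the decompressed copy of the earliest-opened set;
            # the compressed copy stays on disk.
            self.open_sets.pop(next(iter(self.open_sets)))
        self.open_sets[index] = zlib.decompress(self.compressed[index])

    def serves_block(self, height):
        """A node only answers for blocks in its currently open sets."""
        return archive_set_index(height) in self.open_sets


node = ArchiveNode()
for i in range(100):                           # the "100 archive sets" above
    node.store_set(i, b"block data %d" % i)    # stand-in for real block data
for i in range(5):
    node.open_set(i)
print(node.serves_block(25_000))    # True:  set 2 is open
print(node.serves_block(990_000))   # False: set 99 is compressed, not served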

This could be based on a rotating request period, on request count, or done periodically. Once they're considered active, they would be expected to uncompress a set and make it available to the network. Clients would have to piece together archive sets from different nodes, but if there weren't enough archive nodes to cover the chain, the number of required open archive sets could be ratcheted up while your node was active.
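
The rotation policy could then look something like this (again only a toy sketch; the three-copies target, the base of five open sets, and all names are my own invention):

import math
from collections import Counter


def required_open_sets(total_sets, archive_nodes, base=5, copies_wanted=3):
    """Ratchet up the per-node requirement when there are too few archive
    nodes to cover every set the desired number of times."""
    if archive_nodes == 0:
        return total_sets
    needed = math.ceil(total_sets * copies_wanted / archive_nodes)
    return max(base, min(needed, total_sets))


def choose_sets_to_open(request_counts, how_many):
    """Open the most-requested archive sets for this rotation period."""
    return [index for index, _ in request_counts.most_common(how_many)]


# Example: 100 archive sets, 40 archive nodes, aiming for 3 open copies of
# each set somewhere on the network.
requests = Counter({12: 40, 87: 25, 3: 9, 55: 2})
n = required_open_sets(total_sets=100, archive_nodes=40)
print(n)                                  # 8
print(choose_sets_to_open(requests, n))   # [12, 87, 3, 55]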

I fully expect to have my idea trashed, but I'm dipping my toes into the waters of contribution.




On Thu, Apr 10, 2014 at 10:19 AM, Wladimir <laanwj@gmail.com> wrote:

On Thu, Apr 10, 2014 at 2:10 PM, Gregory Maxwell <gmaxwell@gmail.com> wrote:
But sure, I could see a fixed range as also being a useful contribution,
though I'm struggling to figure out what set of constraints would
leave a node without following the consensus. Obviously it has
bandwidth if you're expecting it to contribute much in serving those
historic blocks... and verifying is reasonably CPU-cheap with fast
ECDSA code. Maybe it has a lot of read-only storage?

The use case is that you could burn the node implementation + block data + a live operating system on a read-only medium. This could be set in stone for a long time.

There would be no consensus code to keep up to date with protocol developments, because it doesn't take active part in it.

I don't think it would be terribly useful right now, but it could be useful when nodes that host all history become rare. It'd allow distributing 'pieces of history' in a self-contained form.
 
I think it should be possible to express and use such a thing in the
protocol even if I'm currently unsure as to why you wouldn't do
100000-200000 _plus_ the most recent 144 that you were already keeping
around for reorgs.

Yes, it would be nice to at least be able to express it, if it doesn't make the protocol too finicky.

In terms of peer selection, if the blocks you need aren't covered by
the nodes you're currently connected to, I think you'd prefer to seek
out nodes which have the least rareness in the ranges they offer.
E.g. if you're looking for a block 50 from the tip, you should
probably not prefer to fetch it from someone with blocks 100000-150000
if it's one of only 100 nodes that has that range.

That makes sense.

In general, if you want a block 50 from the tip, it would be best to request it from a node that only serves the last N (N>~50) blocks, and not a history node that could use the same bandwidth to serve earlier, rarer blocks to others.
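
For illustration only, the combined preference (least-rare range first, then the narrowest range that still covers the block) might look roughly like this; none of it is real protocol code, and all names and numbers are invented:

from collections import Counter


def pick_peer(height, peers, rarity):
    """peers maps a peer id to the range of heights it serves; rarity
    counts how many known nodes offer each of those ranges."""
    candidates = [p for p, r in peers.items() if height in r]
    # Prefer the peer whose offered range is carried by the most nodes,
    # so the few holders of a rare historic range keep their bandwidth
    # for blocks only they can serve; break ties toward the narrowest
    # range, e.g. a recent-only node for a block near the tip.
    return min(candidates, key=lambda p: (-rarity[peers[p]], len(peers[p])))


tip = 300_000
peers = {
    "recent-node": range(tip - 144, tip + 1),   # only the last ~day of blocks
    "history-node": range(100_000, 150_001),    # a rarely held historic range
    "full-node": range(0, tip + 1),             # everything
}
rarity = Counter({peers["recent-node"]: 5000,
                  peers["history-node"]: 100,
                  peers["full-node"]: 2000})

print(pick_peer(tip - 50, peers, rarity))   # recent-node
print(pick_peer(120_000, peers, rarity))    # full-node, sparing history-node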

Wladimir

