How many nodes are necessary to ensure sufficient network reliability? Ten, a hundred, a thousand? At what point do we hit diminishing returns, where each extra node has a negligible impact on the overall reliability of the system?
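
For what it's worth, here is the toy model I keep reaching for (entirely my own back-of-envelope, with made-up numbers): treat every node as independently reachable with probability p, and call the network "reliable" if at least one honest node answers. Reliability is then 1 - (1 - p)^n, so each additional node buys geometrically less:

    p = 0.9  # assumed probability that a given node is up and reachable (made up)

    for n in (1, 2, 5, 10, 100, 1000):
        reliability = 1 - (1 - p) ** n   # chance at least one of n nodes answers
        marginal = p * (1 - p) ** n      # reliability gained by adding node n+1
        print(f"n={n:4d}  reliability={reliability:.12f}  next node adds {marginal:.2e}")

Under those (gross) simplifications the knee of the curve comes very early; the interesting questions are what p really is and how correlated node failures are in practice.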

On Thu, Aug 6, 2015 at 10:26 AM, Pieter Wuille via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
On Thu, Aug 6, 2015 at 5:06 PM, Gavin Andresen <gavinandresen@gmail.com> wrote:
On Thu, Aug 6, 2015 at 10:53 AM, Pieter Wuille <pieter.wuille@gmail.com> wrote:
So if we had 8 MB blocks, and there is a sudden influx of users (or settlement systems, which serve many more users) who want to pay high fees (let's say 20 transactions per second), making the block chain inaccessible for low-fee transactions and unreliable for medium-fee transactions (for any value of low, medium, and high), would you be ok with that?
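
[For scale, a quick back-of-envelope on how 20 tx/s compares to an 8 MB cap; the average transaction size below is a rough guess on my part, not a measurement:

    block_bytes = 8_000_000   # hypothetical 8 MB block size limit
    block_interval = 600      # target seconds between blocks
    avg_tx_bytes = 500        # rough guess at the average transaction size

    capacity_tps = block_bytes / avg_tx_bytes / block_interval
    influx_tps = 20           # the hypothetical high-fee influx above
    print(f"~{capacity_tps:.0f} tx/s capacity; the influx alone fills {influx_tps / capacity_tps:.0%}")

So under those assumptions the hypothetical influx consumes about three quarters of an 8 MB chain's capacity by itself.]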

Yes, that's fine. If the network cannot handle the transaction volume that people want to pay for, then the marginal transactions are priced out. That is true today (otherwise ChangeTip would be operating on-blockchain), and will be true forever.

The network can "handle" any size. I believe that if a majority of miners forms SPV mining agreements, then they are no longer affected by the block size, and they benefit from making their blocks slow for others to validate (as long as fees remain negligible compared to the subsidy). I'll try to find the time to implement that in my simulator. Some hardware for full nodes will always be able to validate and index the chain, so nobody needs to run a pesky full node anymore; they can just use a web API to validate payments.
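
[The incentive described here is visible even in a crude Monte Carlo. This is my own toy sketch, not Pieter's simulator, and the validation delays are invented numbers:

    import random

    def stale_fraction(validation_delay, block_interval=600.0, blocks=100_000):
        """Fraction of a fully validating miner's hashrate spent on a stale tip."""
        stale = total = 0.0
        for _ in range(blocks):
            gap = random.expovariate(1 / block_interval)  # time until the next block
            stale += min(validation_delay, gap)           # spent mining the old tip
            total += gap
        return stale / total

    for delay in (1, 10, 60):  # seconds to fetch and validate one block (made up)
        print(f"validation delay {delay:3d}s -> ~{stale_fraction(delay):.2%} of work is stale")

An SPV miner who extends headers immediately pays none of that cost, so a 60-second validation time hands roughly a 10% efficiency edge to whoever stops validating.]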

Being able to "handle" a particular rate is not a boolean question. It's a question of how much security, centralization, and risk of systemic error we're willing to tolerate. These are not things you can simply observe, so let's keep talking about the risks, and find a solution that we can agree on.

If so, why is 8 MB good but 1 MB not? To me, they differ only by a small constant factor that does not fundamentally improve the scale of the system.

"better is better" -- I applaud efforts to fundamentally improve the scalability of the system, but I am an old, cranky, pragmatic engineer who has seen that successful companies tackle problems that arise and are willing to deploy not-so-perfect solutions if they help whatever short-term problem they're facing.

I don't believe there is a short-term problem. If there is one now, there will still be one at 8 MB blocks (or whatever size blocks actually get produced).
 
I dislike the outlook of being "forever locked at the same scale" while technology evolves, so my proposal tries to address that part. It intentionally does not try to buy a small constant factor, because I don't think that is valuable.

I think consensus is against you on that point.

Maybe. But I believe it is essential not to take unnecessary risks, and to find a non-controversial solution.

--
Pieter


_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev