I find it an admirable goal to keep node operation costs low and accessible to the average user. On the other hand, if we keep the resource requirements of nodes at the level of, say, whatever the latest Raspberry Pi model on a residential Internet connection can handle, I'm not sure how helpful that will be if demand for inclusion in blocks results in transaction fees that price out more users. Stated differently, if the cost or contention of using the network rises to the point of excluding the average user from making transactions, then they probably aren't going to care that they can run a node at trivial cost.

If we're approaching the block size from a resource usage standpoint, it seems to me that someone is going to be excluded one way or another. Not raising the block size will exclude some users from sending transactions, while raising it will exclude some users from running nodes. The latter seems preferable to me because more users will grow the ecosystem, which should increase the value of the ecosystem, which should in turn increase the resources that entities are willing to expend to run nodes.

I see two primary points of view / objectives clashing in this debate:

1) Preserving decentralization and stability, even if that retards growth of the ecosystem
2) Pushing the system's load as far as we are comfortable with, in order to accommodate the growth it is experiencing

It's clear to me that Core developers have a responsibility to maintain a stable platform for the ecosystem. I think it's less clear that they have a responsibility to grow it or ask node operators to expend more resources in order to support more users. As an operator of several nodes, I can anecdotally state that I find their resource usage to be trivial and I welcome more load.

- Jameson

On Thu, Jul 30, 2015 at 11:12 AM, Jorge Timón <bitcoin-dev@lists.linuxfoundation.org> wrote:
1) Unlike previous blocksize hardfork proposals, this one uses median
time instead of block.nTime for activation. I like that better, but
my preference is still to use height for everything. That discussion
isn't specific to this proposal, though, so it's better if we discuss
it for all of them here:
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009731.html
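To illustrate the difference, here is a rough sketch of the two
activation styles (only an illustration; the constants, names, and
struct below are placeholders, not code from this or any other
proposal):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    static const int64_t ACTIVATION_TIME   = 1451606400; // placeholder
    static const int     ACTIVATION_HEIGHT = 400000;     // placeholder

    struct BlockIndex {
        int height;
        int64_t nTime;           // block header timestamp
        const BlockIndex* prev;
    };

    // Median of the previous 11 block timestamps ("median time past").
    int64_t GetMedianTimePast(const BlockIndex* pindex)
    {
        std::vector<int64_t> times;
        for (int i = 0; i < 11 && pindex != nullptr; ++i, pindex = pindex->prev)
            times.push_back(pindex->nTime);
        if (times.empty()) return 0; // genesis edge case in this sketch
        std::sort(times.begin(), times.end());
        return times[times.size() / 2];
    }

    // Time-based activation: triggers once the median time past of the
    // previous block reaches the threshold. Median time past is
    // monotonic along the chain, unlike raw block.nTime.
    bool IsActiveByTime(const BlockIndex* pindexPrev)
    {
        return GetMedianTimePast(pindexPrev) >= ACTIVATION_TIME;
    }

    // Height-based activation: fully deterministic given the chain.
    bool IsActiveByHeight(const BlockIndex* pindexPrev)
    {
        return pindexPrev->height + 1 >= ACTIVATION_HEIGHT;
    }

Median time past avoids the corner cases of raw block.nTime values,
while height-based activation removes wall-clock time from the
equation entirely, which is why I prefer it.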

2) I think uncontroversial hardforks should also take miner
confirmation into account, just like uncontroversial softforks do. We
cannot make sure other users have upgraded before activating the new
chain, but we can know whether miners have upgraded or not. Having
that tool available, why not use it? Of course, other hardforks may
not care about miners' upgrade state, for example "anti-miner"
hardforks; see
https://github.com/jtimon/bips/blob/bip-forks/bip-forks.org#asic-reset-hardfork
But again, this is common to all uncontroversial hardforks, so it
would probably be better to discuss it in
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-June/008936.html
(gmaxwell assigned bip99 to my bip draft).
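For softforks, miner confirmation has been measured with a rolling
supermajority count over recent block versions, and a hardfork could
reuse the same tool. A minimal sketch modeled on Bitcoin Core's
IsSuperMajority (the struct and thresholds here are illustrative):

    struct BlockIndex {
        int nVersion;             // block version signaled by the miner
        const BlockIndex* prev;
    };

    // Returns true if at least nRequired of the last nWindow blocks
    // signal a version >= minVersion, as past softfork deployments did
    // (e.g. 750/1000 to start producing, 950/1000 to enforce).
    bool IsSuperMajority(int minVersion, const BlockIndex* pstart,
                         unsigned int nRequired, unsigned int nWindow)
    {
        unsigned int nFound = 0;
        for (unsigned int i = 0;
             i < nWindow && nFound < nRequired && pstart != nullptr;
             ++i, pstart = pstart->prev) {
            if (pstart->nVersion >= minVersion)
                ++nFound;
        }
        return nFound >= nRequired;
    }

For example, IsSuperMajority(4, pindexPrev, 950, 1000) would ask
whether 95% of the last 1000 blocks signal version 4 or higher.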

3) As I commented to you privately, I don't like to make any
assumptions about technological advancements (much less about
economic growth). I don't expect many people to agree with me here (I
guess I've seen too many "peak oil" [or, more generally, peak energy
production] predictions, plus I've read Nietzsche's "On the utility
and liability of history for life" [1]; so considering morals,
technology, or economics as "monotonic functions" in history is
simply a ridiculous notion to me), but it's undeniable that internet
connections have improved overall around the world in the last 6
years. I think we should wait for the technological improvements to
happen and then adapt the blocksize accordingly. I know that's not a
"definitive solution": we will need to change it from time to time,
and that is somewhat ugly.
But even if I'm the only one who considers "technological de-growth"
possible, I don't think it is wise to rely on pseudo-laws like
Moore's or Nielsen's so-called "laws".
Stealing a quote from another thread:

"Prediction is difficult, especially about the future." - Niels Bohr

So I would prefer a more limited solution like bip102, though I would
rather have some simulations leading to a concrete value (even if
it's bigger) than use the arbitrary number of 2MB.

Those are my 3 cents.

[1] https://philohist.files.wordpress.com/2008/01/nietzsche-uses-history.pdf

On Thu, Jul 30, 2015 at 4:25 PM, Pieter Wuille via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> Hello all,
>
> here is a proposal for long-term scalability I've been working on:
> https://gist.github.com/sipa/c65665fc360ca7a176a6
>
> Some things are not included yet, such as a testnet whose size runs ahead of
> the main chain, and the inclusion of Gavin's more accurate sigop checking
> after the hard fork.
>
> Comments?
>
> --
> Pieter
>
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev