On Tue, Aug 4, 2015 at 4:59 AM, Jorge Timón <bitcoin-dev@lists.linuxfoundation.org> wrote:
> Also I don't think "hitting the limit" must be necessarily harmful and
> if it is, I don't understand why hitting it at 1MB will be more
> harmful than hitting it at 2MB, 8MB or 8GB.

I don't think merely hitting the limit is bad. The level of tx fees in equilibrium gives some clue as to the level of harm being done: if fees are at $5/tx at 1MB, it's about as bad as if fees are at $5/tx at 4MB.

> There is NO criterion based on mining centralization to decide between
> 2 sizes in favor of the small one.
> It seems like the rationale it's always "the bigger the better" and
> the only limitation is what a few people concerned with mining
> centralization (while they still have time to discuss this) are
> willing to accept. If that's the case, then there won't be effectively
> any limit in the long term and Bitcoin will probably fail in its
> decentralization goals.
> I think its the proponents of a blocksize change who should propose
> such a criterion and now they have the tools to simulate different
> block sizes.

In the absence of harder data, it might be interesting to use these simulations to graph centralization pressure as a function of bandwidth cost over time (or of other historical variables that affect centralization). For instance, look at how high centralization pressure was in 2009 according to the simulations, given how cheap and available bandwidth was then, compared to 2010, 2011, etc. Then we could figure out: given today's bandwidth situation, what block size right now would give us the same centralization pressure that we had in 2011, 2012, 2013, etc.?

Of course this doesn't mean we should blindly assume that the level of centralization pressure in 2012 was acceptable, and therefore any block size increase that results in the same amount of pressure now should be acceptable. But it might lead to a more productive discussion.
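As a very rough illustration of the kind of exercise I mean, here is a toy Python sketch. The bandwidth figures, the "centralization pressure" model (just relay time for a full block), and today's bandwidth number are all placeholders I made up; a real version would plug in the actual propagation/orphan-rate simulations people have been running.

    # Toy sketch: for each historical year, estimate what block size today
    # would produce the same "centralization pressure" that 1MB blocks did then.
    # All numbers and the pressure model are placeholders, not real data.

    HISTORICAL_BANDWIDTH_MBPS = {   # assumed typical hobbyist bandwidth by year
        2009: 2, 2010: 3, 2011: 5, 2012: 8, 2013: 12, 2014: 20, 2015: 30,
    }

    def centralization_pressure(block_size_mb, bandwidth_mbps):
        # Placeholder model: pressure is just the seconds needed to relay a
        # full block.  A real metric would come from propagation simulations.
        return (block_size_mb * 8) / bandwidth_mbps

    def equivalent_block_size_today(year, historical_size_mb=1.0, today_mbps=30):
        target = centralization_pressure(historical_size_mb,
                                         HISTORICAL_BANDWIDTH_MBPS[year])
        # Invert the toy model: what size gives the same pressure today?
        return target * today_mbps / 8

    for year in sorted(HISTORICAL_BANDWIDTH_MBPS):
        print(year, round(equivalent_block_size_today(year), 1), "MB")

The output of something like this (with real inputs) is what I'd want on the table: a table of "same pressure as year X" block sizes to anchor the discussion.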
 
> I want us to simulate many blocksizes before rushing into a decision
> (specially because I disagree that taking a decision there is urgent
> in the first place).

IMO it is not urgent, provided the core devs are committed to reacting to a huge spike in tx fees with a modest block size increase in a relatively short time frame, assuming they judge the centralization risks of that increase to be small. Greg Maxwell posted on reddit a while back something to the effect of "the big block advocates are overstating the urgency of the block size increase, because if there was actually a situation that required us to increase block size, we could make the increase when it was actually needed." I found that somewhat persuasive, but I'm concerned that I haven't seen any discussion of what the "let's wait for now" camp would consider a valid reason to increase the block size in the short term, or of how they'd weigh that against tx fees or whatever else was driving the need for an increase.
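To make that concrete, the kind of commitment I'm imagining could be written down about as simply as the following. The thresholds and the "median fee over a window of blocks" metric are numbers I'm inventing purely for illustration; the point is only that the trigger would be stated in advance rather than argued about after the fact.

    # Hypothetical, human-level decision rule -- not consensus code.
    # Thresholds below are invented for illustration only.
    FEE_THRESHOLD_USD = 1.00   # sustained median fee that counts as a "huge spike"
    SUSTAINED_BLOCKS = 2016    # roughly two weeks of consistently full blocks

    def should_consider_increase(median_fee_usd, full_blocks_in_window,
                                 centralization_risk_acceptable):
        """True if the pre-agreed conditions for a modest increase are met."""
        return (median_fee_usd >= FEE_THRESHOLD_USD
                and full_blocks_in_window >= SUSTAINED_BLOCKS
                and centralization_risk_acceptable)

Whether the right numbers are $1 or $5, 2016 blocks or 20160, is exactly the discussion I'd like the "wait for now" camp to have in the open.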

Jorge, if a fee equilibrium of $5/tx developed at 1MB, and you somehow knew with certainty that increasing to 4MB would result in a 20 cent/tx equilibrium that would last for a year (otherwise fees would stay around $5 for that year), would you be in favor of an increase to 4MB?
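Just for scale, here is the back-of-the-envelope arithmetic behind that hypothetical, assuming roughly 2,000 transactions fit in 1MB (a made-up round number):

    # Rough arithmetic for the hypothetical above; tx counts are assumptions.
    TX_PER_MB = 2000

    for size_mb, fee_usd in [(1, 5.00), (4, 0.20)]:
        txs = TX_PER_MB * size_mb
        print("%dMB blocks: ~%d tx/block at $%.2f/tx -> ~$%d in fees per block"
              % (size_mb, txs, fee_usd, txs * fee_usd))

In this made-up scenario, roughly $10,000 in fees per block at 1MB versus roughly $1,600 at 4MB: four times as many users served while the aggregate fee burden drops. That's the trade-off I'd like to see weighed explicitly.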

For those familiar with the distinction between near/far mode thinking popularized by Robin Hanson: focusing on concrete examples like this encourages problem solving and consensus, while focusing on abstract principles (like decentralization vs. usability in general) leads people to use argument to signal their alliances and reduce the status of their opponents. I think it'd be very helpful if more 1MB advocates described what exactly would make them say "OK, in this situation a block size increase is needed, we should do one quickly!", and also if Gavin/Mike/Jeff described what hypothetical scenarios and/or test results would make them want to stick with 1MB blocks for now.