Nielsen's Law of Internet Bandwidth is simply not true. If you look at data points like http://www.netindex.com/upload/ they suggest we are at least on the right track, but that data is still flawed.
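
For reference, the law claims a high-end user's connection speed grows roughly 50% per year; here is a quick compounding sketch of what that predicts (the 10 Mbps starting point is an arbitrary placeholder):

# What Nielsen's Law (~50%/yr growth for a high-end connection) compounds to.
start_mbps = 10.0
growth = 1.50
for years in (5, 10, 20):
    print(f"{years:>2} yrs: {start_mbps * growth ** years:,.0f} Mbps")
# -> roughly 76 Mbps, 577 Mbps, and 33,253 Mbps respectively

Whether real connections, and uplinks in particular, actually track that curve is exactly what I'm disputing.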

The fact of the matter is that these speed tests are run from a local origin to a local destination within the same city, let alone the same country (see the note on the site about how tests are only run against servers within 300 miles of the client), and our current internet infrastructure is built around CDNs for web and media consumption.
These data points cannot and should not be used to model the connectivity of a peer-to-peer network.

Uplink bandwidth in China is scarce and expensive, averaging about $37 per 1 Mbps beyond the first 5, but this is not the real problem. The true issue lies in the transparent proxies the ISPs run. This problem is not isolated to China; Thailand and various other countries in Asia, like Lebanon, face it as well. I have also heard in various IRC channels that southern France has a similar issue due to poor routing configurations that prevent connections to certain parts of the world; I'm unsure whether this is a mistake or a geopolitical by-product.

As for your question earlier, Gavin: from Dallas, TX to a VPS in Shanghai on 上海电信 (Shanghai Telecom), which is capped at 5 Mbps, the results match my concerns. I've measured as low as 83 Kbit/s and as high as 9.24 Mbit/s, with everything in between; none of it is consistent. Average ping is about 250 ms.
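
If anyone wants to reproduce that kind of measurement, here is a minimal raw-TCP throughput probe in Python (a plain socket transfer, not necessarily the exact tool I used; the port and payload size are placeholders):

# Run "python probe.py server" on the VPS and "python probe.py client <vps-ip>"
# locally; it reports the upload rate of a single 32 MB TCP transfer.
import socket, sys, time

PORT, CHUNK, TOTAL = 5201, 64 * 1024, 32 * 1024 * 1024

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):   # drain everything the client sends
                pass

def client(host):
    buf, sent = b"\x00" * CHUNK, 0
    with socket.create_connection((host, PORT)) as s:
        start = time.time()
        while sent < TOTAL:
            s.sendall(buf)
            sent += len(buf)
        secs = time.time() - start
    print(f"{sent * 8 / secs / 1e6:.2f} Mbit/s over {secs:.1f} s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])

Running it a few times at different hours is enough to see the kind of spread I describe above.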

The temporary solution I recommend, again from earlier, is MPTCP: http://www.multipath-tcp.org/. It lets you use multiple interfaces/networks for a single TCP connection. It was mainly developed for the mobile 3G/wifi transition, but I've found it useful for improving bandwidth and connection stability on the go by combining a local wifi/ethernet connection with my 3G phone tether. This lets you set up a middlebox somewhere, put a shadowsocks server on it, and route all TCP traffic on your local machine through the shadowsocks client; MPTCP will automatically pick the best path for upload and download.
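
To make that last step concrete, here is a minimal sketch of pushing a single application's TCP connection through the local shadowsocks client; it assumes ss-local is running as a SOCKS5 proxy on its usual 127.0.0.1:1080 and uses the PySocks package (system-wide redirection would instead be done with iptables or a redirector, but the idea is the same):

# Tunnel one TCP connection through the local shadowsocks client (ss-local),
# assumed to be listening as a SOCKS5 proxy on 127.0.0.1:1080.
# Requires PySocks: pip install pysocks
import socks

s = socks.socksocket()                        # drop-in replacement for socket.socket
s.set_proxy(socks.SOCKS5, "127.0.0.1", 1080)
s.connect(("example.com", 80))                # placeholder destination
s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
print(s.recv(256))
s.close()

With the multipath-tcp.org kernel installed on both the local machine and the middlebox, the ss-local to ss-server leg becomes an MPTCP connection, so the wifi/ethernet link and the 3G tether are used as subflows and the best path wins for upload and download.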



On Mon, Jun 1, 2015 at 9:59 AM, Gavin Andresen <gavinandresen@gmail.com> wrote:
On Mon, Jun 1, 2015 at 7:20 AM, Chun Wang <1240902@gmail.com> wrote:
I cannot understand why Gavin (who seems to have difficulty spelling my
name correctly) insists on his 20MB proposal regardless of the
community. BIP66 has been introduced for a long time and no one knows
when the 95% goal can be met. This change to the block max size must
take a year or more to be adopted. We should increase the limit and
increase it now. 20MB is simply too big and too risky; sometimes we
need to compromise and push things forward. I agree with any solution
lower than 10MB in its first two years.


Thanks, that's useful!

What do other people think?  Would starting at a max of 8 or 4 get consensus?  Scaling up a little less than Nielsen's Law of Internet Bandwidth predicts for the next 20 years?  (I think predictability is REALLY important).

I chose 20 because all of my testing shows it to be safe, and all of my back-of-the-envelope calculations indicate the costs are reasonable.

If consensus is "8 because more than order-of-magnitude increases are scary" -- ok.

--
Gavin Andresen





--
Yifu Guo
"Life is an everlasting self-improvement."