public inbox for bitcoindev@googlegroups.com
* Re: [Bitcoin-development] No Bitcoin For You
@ 2015-05-26  2:30 Thy Shizzle
  2015-05-26  2:41 ` gabe appleton
  2015-05-26  2:53 ` Jim Phillips
  0 siblings, 2 replies; 13+ messages in thread
From: Thy Shizzle @ 2015-05-26  2:30 UTC (permalink / raw)
  To: Jim Phillips, Mike Hearn; +Cc: Bitcoin Dev



Nah, don't make blocks 20MB; then you are slowing down block propagation and blowing out confirmation times as a result. Just decrease the time it takes to make a 1MB block; then you still see the same propagation times as today and just increase the transaction throughput.
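
A rough sketch of that trade-off in Python (figures assumed for illustration, not a spec): both routes buy roughly 20x today's throughput, but only the big-block route increases the amount of data that has to propagate per block.

    configs = {
        "today (1MB/10min)": (1.0, 600),   # (block size in MB, interval in seconds)
        "20MB / 10 min":     (20.0, 600),
        "1MB / 30 sec":      (1.0, 30),
    }

    for name, (size_mb, interval_s) in configs.items():
        throughput = size_mb / (interval_s / 60.0)  # MB of transactions per minute
        print(f"{name:>18}: {throughput:5.2f} MB/min, {size_mb:4.1f} MB to relay per block")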
________________________________
From: Jim Phillips<mailto:jim@ergophobia•org>
Sent: ‎26/‎05/‎2015 12:27 PM
To: Mike Hearn<mailto:mike@plan99•net>
Cc: Bitcoin Dev<mailto:bitcoin-development@lists•sourceforge.net>
Subject: Re: [Bitcoin-development] No Bitcoin For You

On Mon, May 25, 2015 at 1:36 PM, Mike Hearn <mike@plan99•net> wrote:

> This meme about datacenter-sized nodes has to die. The Bitcoin wiki is down
> right now, but I showed years ago that you could keep up with VISA on a
> single well specced server with today's technology. Only people living in a
> dreamworld think that Bitcoin might actually have to match that level of
> transaction demand with today's hardware. As noted previously, "too many
> users" is simply not a problem Bitcoin has .... and may never have!
>
>
... And will certainly NEVER have if we can't solve the capacity problem
SOON.

In a former life, I was a capacity planner for Bank of America's mid-range
server group. We had one hard and fast rule. When you are typically
exceeding 75% of capacity on a given metric, it's time to expand capacity.
Period. You don't do silly things like adjusting the business model to
disincentivize use. Unless there's some flaw in the system and it's leaking
resources, if usage has increased to the point where you are at or near the
limits of capacity, you expand capacity. It's as simple as that, and I've
found that same rule fits quite well in a number of systems.

In Bitcoin, we're not leaking resources. There's no flaw. The system is
performing as intended. Usage is increasing because it works so well, and
there is huge potential for future growth as we identify more uses and
attract more users. There might be a few technical things we can do to
reduce consumption, but the metric we're concerned with right now is how
many transactions we can fit in a block. We've broken through the 75%
marker and are regularly bumping up against the 100% limit.

It is time to stop debating this and take action to expand capacity. The
only questions that should remain are how much capacity do we add, and how
soon can we do it. Given that most existing computer systems and networks
can easily handle 20MB blocks every 10 minutes, and given that that will
increase capacity 20-fold, I can't think of a single reason why we can't go
to 20MB as soon as humanly possible. And in a few years, when the average
block size is over 15MB, we bump it up again to as high as we can go then
without pushing typical computers or networks beyond their capacity. We can
worry about ways to slow down growth without affecting the usefulness of
Bitcoin as we get closer to the hard technical limits on our capacity.

And you know what else? If miners need higher fees to accommodate the costs
of bigger blocks, they can configure their nodes to only mine transactions
with higher fees. Let the miners decide how to charge enough to pay for
their costs. We don't need to cripple the network just for them.

--
*James G. Phillips IV*
<https://plus.google.com/u/0/113107039501292625391/posts>

*"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*


* Re: [Bitcoin-development] No Bitcoin For You
  2015-05-26  2:30 [Bitcoin-development] No Bitcoin For You Thy Shizzle
@ 2015-05-26  2:41 ` gabe appleton
  2015-05-26  2:53 ` Jim Phillips
  1 sibling, 0 replies; 13+ messages in thread
From: gabe appleton @ 2015-05-26  2:41 UTC (permalink / raw)
  To: Thy Shizzle; +Cc: Bitcoin Dev


But don't you see the same trade-off in the end there? You're still
propagating the same amount of data over the same amount of time, so unless
I misunderstand, the costs of such a move should be approximately the same,
just in different areas. The risks as I understand them are as follows:

20MB:

   1. Longer per-block propagation (eventually)
   2. Longer processing time (eventually)
   3. Longer sync time

1 Minute:

   1. Weaker individual confirmations (approx. equal per confirmation*time)
   2. Higher orphan rate (immediately)
   3. Longer sync time

That risk-set makes me want a middle-ground approach. Something where the
immediate consequences aren't all that strong, and where we have some idea
of what to do in the future. Is there any chance we can get decent network
simulations at various configurations (5MB/4min, etc.)? Perhaps
re-appropriate the testnet?
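
Short of a real simulation, here's a minimal stale-rate model in Python (all figures assumed): if a block takes tau seconds to reach the network and blocks arrive roughly as a Poisson process with mean interval T, a competing block is found during propagation with probability about 1 - exp(-tau/T).

    import math

    LATENCY_S = 2.0   # assumed fixed relay/validation overhead per block
    NET_MBPS = 8.0    # assumed effective network-wide propagation rate

    def stale_rate(block_mb, interval_s):
        tau = LATENCY_S + block_mb * 8 / NET_MBPS  # seconds to propagate one block
        return 1 - math.exp(-tau / interval_s)

    for name, size_mb, interval_s in [("1MB/10min", 1, 600), ("20MB/10min", 20, 600),
                                      ("1MB/1min", 1, 60), ("5MB/4min", 5, 240)]:
        print(f"{name:>10}: ~{stale_rate(size_mb, interval_s):.1%} stale blocks")

Under these assumed numbers both extremes push the stale rate up by a similar factor, and a 5MB/4min middle ground lands in between, which is exactly the kind of comparison a proper testnet simulation could pin down.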

On Mon, May 25, 2015 at 10:30 PM, Thy Shizzle <thyshizzle@outlook•com>
wrote:

>  [quoted text trimmed]


* Re: [Bitcoin-development] No Bitcoin For You
  2015-05-26  2:30 [Bitcoin-development] No Bitcoin For You Thy Shizzle
  2015-05-26  2:41 ` gabe appleton
@ 2015-05-26  2:53 ` Jim Phillips
  1 sibling, 0 replies; 13+ messages in thread
From: Jim Phillips @ 2015-05-26  2:53 UTC (permalink / raw)
  To: Thy Shizzle; +Cc: Bitcoin Dev


Frankly I'm good with either way. I'm definitely in favor of faster
confirmation times.

The important thing is that we need to increase the number of transactions
that get into blocks over a given time frame to a point that is in line
with what current technology can handle. We can handle WAY more than we are
doing right now. The Bitcoin network is not currently disk, CPU, or RAM
bound. Not even close. The metric we're closest to being restricted by is
network bandwidth. I live in a developing country. 2Mbps is a typical
broadband speed here (although 5Mbps and 10Mbps connections are
affordable). That equates to about 15MB per minute, or 150x more capacity
than what I need to receive a full copy of the blockchain if I only talk to
one peer. If I relay to, say, 10 peers, I can still handle 15x larger block
sizes on a slow 2Mbps connection.

Also, even if we reduce the difficulty so that we're doing 1MB blocks every
minute, that's still only 10MB every 10 minutes. Eventually we're going to
have to increase that, and we can only reduce the confirmation period so
much. I think someone once said 30 seconds or so is about the shortest
period you can practically achieve.
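
A back-of-the-envelope check of those numbers in Python (assuming the 2Mbps line is fully available to Bitcoin):

    line_mbps = 2.0
    download_mb_per_min = line_mbps / 8 * 60           # 2 Mbps = 15 MB per minute
    chain_mb_per_min = 1.0 / 10                        # 1MB blocks every 10 minutes
    headroom = download_mb_per_min / chain_mb_per_min  # ~150x
    relaying_to_10 = headroom / 10                     # ~15x if uploading to 10 peers

    print(f"{download_mb_per_min:.0f} MB/min line rate, {headroom:.0f}x headroom, "
          f"{relaying_to_10:.0f}x when relaying to 10 peers")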

--
*James G. Phillips IV*
<https://plus.google.com/u/0/113107039501292625391/posts>
<http://www.linkedin.com/in/ergophobe>

*"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Mon, May 25, 2015 at 9:30 PM, Thy Shizzle <thyshizzle@outlook•com> wrote:

>  [quoted text trimmed]


* Re: [Bitcoin-development] No Bitcoin For You
  2015-05-26  5:43     ` gabe appleton
@ 2015-05-26  8:29       ` Jim Phillips
  0 siblings, 0 replies; 13+ messages in thread
From: Jim Phillips @ 2015-05-26  8:29 UTC (permalink / raw)
  To: gabe appleton; +Cc: Bitcoin Dev


I think all the suggestions recommending cutting the block time down also
propose reducing the block reward to compensate.

--
*James G. Phillips IV*
<https://plus.google.com/u/0/113107039501292625391/posts>
<http://www.linkedin.com/in/ergophobe>

*"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Tue, May 26, 2015 at 12:43 AM, gabe appleton <gappleto97@gmail•com>
wrote:

> [quoted text trimmed]


* Re: [Bitcoin-development] No Bitcoin For You
  2015-05-26  3:49   ` Jim Phillips
@ 2015-05-26  5:43     ` gabe appleton
  2015-05-26  8:29       ` Jim Phillips
  0 siblings, 1 reply; 13+ messages in thread
From: gabe appleton @ 2015-05-26  5:43 UTC (permalink / raw)
  To: Jim Phillips; +Cc: Bitcoin Dev


Sync time wouldn't be longer compared to 20MB; it would (eventually) be
longer under either setup.

Also, and this is probably a silly concern, but wouldn't changing the block
time change the supply curve? If we cut the rate in half or by a power of
two, that affects nothing, but if we want to keep round numbers, we need to
do it by 10, 5, or 2. I feel like most people would opt for 10 or 5, both
of which change the supply curve due to truncation.

Again, it's a trivial concern, but probably one that should be addressed.
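
To make the truncation concrete, here's a quick Python sketch. The 1-minute schedule below is hypothetical: subsidy divided by 10, halving every 2,100,000 blocks instead of 210,000.

    COIN = 100_000_000  # satoshis per bitcoin

    def total_issuance(initial_subsidy_sats, blocks_per_halving):
        # Sum the subsidy over halving epochs until it truncates to zero,
        # mirroring Bitcoin's integer halving of the block reward.
        total, subsidy = 0, initial_subsidy_sats
        while subsidy > 0:
            total += subsidy * blocks_per_halving
            subsidy //= 2
        return total

    base = total_issuance(50 * COIN, 210_000)          # today's 10-minute schedule
    fast = total_issuance(50 * COIN // 10, 2_100_000)  # hypothetical 1-minute schedule

    print(base / COIN)           # ~20999999.9769 BTC
    print(fast / COIN)           # ~20999999.7270 BTC
    print((base - fast) / COIN)  # ~0.2499 BTC never issued, lost to truncation

So dividing by 10 shaves roughly a quarter of a coin off the eventual supply: real, but trivial, as noted above.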
On May 25, 2015 11:52 PM, "Jim Phillips" <jim@ergophobia•org> wrote:

> [quoted text trimmed]


* Re: [Bitcoin-development] No Bitcoin For You
  2015-05-26  3:23 ` Jim Phillips
@ 2015-05-26  3:49   ` Jim Phillips
  2015-05-26  5:43     ` gabe appleton
  0 siblings, 1 reply; 13+ messages in thread
From: Jim Phillips @ 2015-05-26  3:49 UTC (permalink / raw)
  To: Thy Shizzle; +Cc: Bitcoin Dev


Incidentally, even once we have the "Internet of Things" brought on by 21,
Inc. or whoever beats them to it, I would expect the average home to have
only a single full node "hub" receiving the blockchain and broadcasting
transactions created by all the minor SPV-connected devices running within
the house. The in-home full node would be peered with high-bandwidth
full-node relays running at the ISP or in the cloud. There are more than
enough ISPs and cloud compute providers in the world that there should be
no concern at all about centralization of relays. Full nodes could some day
become as ubiquitous on the Internet as authoritative DNS servers. And just
like DNS servers, if you don't trust the nodes your ISP runs, or they're
too slow, or they censor transactions, there's nothing preventing you from
peering with nodes hosted by the Googles or OpenDNSs out there, or running
your own if you're really paranoid and have a few extra bucks for a VPS.

--
*James G. Phillips IV*
<https://plus.google.com/u/0/113107039501292625391/posts>
<http://www.linkedin.com/in/ergophobe>

*"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Mon, May 25, 2015 at 10:23 PM, Jim Phillips <jim@ergophobia•org> wrote:

> [quoted text trimmed]


* Re: [Bitcoin-development] No Bitcoin For You
  2015-05-26  3:02 Thy Shizzle
@ 2015-05-26  3:23 ` Jim Phillips
  2015-05-26  3:49   ` Jim Phillips
  0 siblings, 1 reply; 13+ messages in thread
From: Jim Phillips @ 2015-05-26  3:23 UTC (permalink / raw)
  To: Thy Shizzle; +Cc: Bitcoin Dev


I don't see how the fact that my 2Mbps connection causes me to not be a
very good relay has any bearing on whether or not the network as a whole
would be negatively impacted by a 20MB block. My inability to rapidly
propagate blocks doesn't really harm the network. It's only if MOST relays
are as slow as mine that it creates an issue. I'm one node in thousands
(potentially tens or hundreds of thousands if/when Bitcoin goes
mainstream). And I'm an individual. There's no reason at all for me to run
a full node from my home, except to have my own trusted and validated copy
of the blockchain on a computer I control directly. I don't need to act as
a relay for that and as long as I can download blocks faster than they are
created I'm fine. Also, I can easily afford a VPS or several to run
full nodes as relays if I am feeling altruistic. It's actually cheaper for
me to lease a VPS than to keep my own home PC on 24/7, which is why I have
2 of them.

And as a business, the cost of a server and bandwidth to run a full node is
a drop in the bucket. I'm involved in several projects where we have full
nodes running on leased servers with multiple 1Gbps connections. It's an
almost zero cost. Those nodes could handle 20MB blocks today without
thinking about it, and I'm sure our nodes are just a few amongst thousands
just like them. I'm not at all concerned about the network being too
centralized.

What concerns me is the fact that we are using edge cases like my home PC
as a lame excuse to debate expanding the capacity of the network.

--
*James G. Phillips IV*
<https://plus.google.com/u/0/113107039501292625391/posts>
<http://www.linkedin.com/in/ergophobe>

*"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Mon, May 25, 2015 at 10:02 PM, Thy Shizzle <thyshizzle@outlook•com>
wrote:

>  Indeed Jim, your internet connection makes a good reason why I don't
> like 20mb blocks (right now). It would take you well over a minute to
> download the block before you could even relay it on, so much slow down in
> propagation! Yes I do see how decreasing the time to create blocks is a bit
> of a band-aid fix, and to use tge term I've seen mentioned here "kicking
> the can down the road" I agree that this is doing this, however as you say
> bandwidth is our biggest enemy right now and so hopefully by the time we
> exceed the capacity gained by the decrease in block time, we can then look
> to bump up block size because hopefully 20mbps connections will be baseline
> by then etc.
>  ------------------------------
> From: Jim Phillips <jim@ergophobia•org>
> Sent: ‎26/‎05/‎2015 12:53 PM
> To: Thy Shizzle <thyshizzle@outlook•com>
> Cc: Mike Hearn <mike@plan99•net>; Bitcoin Dev
> <bitcoin-development@lists•sourceforge.net>
>
> Subject: Re: [Bitcoin-development] No Bitcoin For You
>
>  Frankly I'm good with either way. I'm definitely in favor of faster
> confirmation times.
>
>  The important thing is that we need to increase the amount of
> transactions that get into blocks over a given time frame to a point that
> is in line with what current technology can handle. We can handle WAY more
> than we are doing right now. The Bitcoin network is not currently Disk,
> CPU, or RAM bound.. Not even close. The metric we're closest to being
> restricted by would be Network bandwidth. I live in a developing country.
> 2Mbps is a typical broadband speed here (although 5Mbps and 10Mbps
> connections are affordable). That equates to about 17MB per minute, or 170x
> more capacity than what I need to receive a full copy of the blockchain if
> I only talk to one peer. If I relay to say 10 peers, I can still handle 17x
> larger block sizes on a slow 2Mbps connection.
>
>  Also, even if we reduce the difficulty so that we're doing 1MB blocks
> every minute, that's still only 10MB every 10 minutes. Eventually we're
> going to have to increase that, and we can only reduce the confirmation
> period so much. I think someone once said 30 seconds or so is about the
> shortest period you can practically achieve.
>
>  --
> *James G. Phillips IV*
> <https://plus.google.com/u/0/113107039501292625391/posts>
> <http://www.linkedin.com/in/ergophobe>
>
> *"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
> -- David Ogilvy *
>
>   *This message was created with 100% recycled electrons. Please think
> twice before printing.*
>
> On Mon, May 25, 2015 at 9:30 PM, Thy Shizzle <thyshizzle@outlook•com>
> wrote:
>
>  Nah don't make blocks 20mb, then you are slowing down block propagation
> and blowing out conf tikes as a result. Just decrease the time it takes to
> make a 1mb block, then you still see the same propagation times today and
> just increase the transaction throughput.
>  ------------------------------
> From: Jim Phillips <jim@ergophobia•org>
> Sent: ‎26/‎05/‎2015 12:27 PM
> To: Mike Hearn <mike@plan99•net>
> Cc: Bitcoin Dev <bitcoin-development@lists•sourceforge.net>
> Subject: Re: [Bitcoin-development] No Bitcoin For You
>
>
> On Mon, May 25, 2015 at 1:36 PM, Mike Hearn <mike@plan99•net> wrote:
>
>   This meme about datacenter-sized nodes has to die. The Bitcoin wiki is
> down right now, but I showed years ago that you could keep up with VISA on
> a single well specced server with today's technology. Only people living in
> a dreamworld think that Bitcoin might actually have to match that level of
> transaction demand with today's hardware. As noted previously, "too many
> users" is simply not a problem Bitcoin has .... and may never have!
>
>
>  ... And will certainly NEVER have if we can't solve the capacity problem
> SOON.
>
>  In a former life, I was a capacity planner for Bank of America's
> mid-range server group. We had one hard and fast rule. When you are
> typically exceeding 75% of capacity on a given metric, it's time to expand
> capacity. Period. You don't do silly things like adjusting the business
> model to disincentivize use. Unless there's some flaw in the system and
> it's leaking resources, if usage has increased to the point where you are
> at or near the limits of capacity, you expand capacity. It's as simple as
> that, and I've found that same rule fits quite well in a number of systems.
>
>  In Bitcoin, we're not leaking resources. There's no flaw. The system is
> performing as intended. Usage is increasing because it works so well, and
> there is huge potential for future growth as we identify more uses and
> attract more users. There might be a few technical things we can do to
> reduce consumption, but the metric we're concerned with right now is how
> many transactions we can fit in a block. We've broken through the 75%
> marker and are regularly bumping up against the 100% limit.
>
>  It is time to stop debating this and take action to expand capacity. The
> only questions that should remain are how much capacity do we add, and how
> soon can we do it. Given that most existing computer systems and networks
> can easily handle 20MB blocks every 10 minutes, and given that that will
> increase capacity 20-fold, I can't think of a single reason why we can't go
> to 20MB as soon as humanly possible. And in a few years, when the average
> block size is over 15MB, we bump it up again to as high as we can go then
> without pushing typical computers or networks beyond their capacity. We can
> worry about ways to slow down growth without affecting the usefulness of
> Bitcoin as we get closer to the hard technical limits on our capacity.
>
>  And you know what else? If miners need higher fees to accommodate the
> costs of bigger blocks, they can configure their nodes to only mine
> transactions with higher fees. Let the miners decide how to charge enough
> to pay for their costs. We don't need to cripple the network just for them.
>
>  --
> *James G. Phillips IV*
> <https://plus.google.com/u/0/113107039501292625391/posts>
>
> *"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
> -- David Ogilvy *
>
>   *This message was created with 100% recycled electrons. Please think
> twice before printing.*
>
>
>

[-- Attachment #2: Type: text/html, Size: 13403 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Bitcoin-development] No Bitcoin For You
@ 2015-05-26  3:02 Thy Shizzle
  2015-05-26  3:23 ` Jim Phillips
  0 siblings, 1 reply; 13+ messages in thread
From: Thy Shizzle @ 2015-05-26  3:02 UTC (permalink / raw)
  To: Jim Phillips; +Cc: Bitcoin Dev

[-- Attachment #1: Type: text/plain, Size: 6301 bytes --]

Indeed Jim, your internet connection makes a good case for why I don't like 20mb blocks (right now). It would take you well over a minute to download the block before you could even relay it on, so propagation slows down a lot! Yes, I do see how decreasing the time to create blocks is a bit of a band-aid fix, and to use the term I've seen mentioned here, it is "kicking the can down the road"; I agree that it does that. However, as you say, bandwidth is our biggest enemy right now, so hopefully by the time we exceed the capacity gained by the decrease in block time we can then look to bump up the block size, because hopefully 20mbps connections will be baseline by then etc.
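
To put rough numbers on that, here is a back-of-envelope sketch in Python,
assuming an ideal 2Mbps link with no protocol or validation overhead:

    # Time to fully download one block before it can be relayed onward.
    def download_seconds(block_mb, link_mbps):
        return block_mb * 8 / link_mbps

    for block_mb in (1, 20):
        print(f"{block_mb}MB block at 2Mbps: "
              f"{download_seconds(block_mb, 2):.0f}s per hop")
    # 1MB block at 2Mbps: 4s per hop
    # 20MB block at 2Mbps: 80s per hop

An 80 second store-and-forward delay per hop compounds across the network,
which is the propagation blowout I mean.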
________________________________
From: Jim Phillips<mailto:jim@ergophobia•org>
Sent: ‎26/‎05/‎2015 12:53 PM
To: Thy Shizzle<mailto:thyshizzle@outlook•com>
Cc: Mike Hearn<mailto:mike@plan99•net>; Bitcoin Dev<mailto:bitcoin-development@lists•sourceforge.net>
Subject: Re: [Bitcoin-development] No Bitcoin For You

Frankly, I'm good either way. I'm definitely in favor of faster
confirmation times.

The important thing is that we need to increase the number of transactions
that get into blocks over a given time frame to a point that is in line
with what current technology can handle. We can handle WAY more than we are
doing right now. The Bitcoin network is not currently Disk, CPU, or RAM
bound. Not even close. The metric we're closest to being restricted by
would be Network bandwidth. I live in a developing country. 2Mbps is a
typical broadband speed here (although 5Mbps and 10Mbps connections are
affordable). That equates to about 15MB per minute, or 150x more capacity
than I need to keep up with the blockchain at 1MB every 10 minutes if I
only talk to one peer. If I relay to, say, 10 peers, I can still handle
15x larger blocks on a slow 2Mbps connection.
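
A quick sketch to check that arithmetic, under the same idealized
assumptions (full line rate, no protocol overhead):

    link_mbps = 2.0
    mb_per_min = link_mbps / 8 * 60   # 15.0 MB per minute
    needed = 1 / 10                   # 1MB every 10 minutes = 0.1 MB/min
    print(mb_per_min / needed)        # 150.0 -> headroom talking to one peer
    print(mb_per_min / needed / 10)   # 15.0  -> headroom relaying to 10 peers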

Also, even if we reduce the difficulty so that we're doing 1MB blocks every
minute, that's still only 10MB every 10 minutes. Eventually we're going to
have to increase that, and we can only reduce the confirmation period so
much. I think someone once said 30 seconds or so is about the shortest
period you can practically achieve.

--
*James G. Phillips IV*
<https://plus.google.com/u/0/113107039501292625391/posts>
<http://www.linkedin.com/in/ergophobe>

*"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Mon, May 25, 2015 at 9:30 PM, Thy Shizzle <thyshizzle@outlook•com> wrote:

>  Nah, don't make blocks 20mb; then you are slowing down block propagation
> and blowing out conf times as a result. Just decrease the time it takes to
> make a 1mb block, then you still see the same propagation times today and
> just increase the transaction throughput.
>  ------------------------------
> From: Jim Phillips <jim@ergophobia•org>
> Sent: ‎26/‎05/‎2015 12:27 PM
> To: Mike Hearn <mike@plan99•net>
> Cc: Bitcoin Dev <bitcoin-development@lists•sourceforge.net>
> Subject: Re: [Bitcoin-development] No Bitcoin For You
>
>
> On Mon, May 25, 2015 at 1:36 PM, Mike Hearn <mike@plan99•net> wrote:
>
>   This meme about datacenter-sized nodes has to die. The Bitcoin wiki is
> down right now, but I showed years ago that you could keep up with VISA on
> a single well specced server with today's technology. Only people living in
> a dreamworld think that Bitcoin might actually have to match that level of
> transaction demand with today's hardware. As noted previously, "too many
> users" is simply not a problem Bitcoin has .... and may never have!
>
>
>  ... And will certainly NEVER have if we can't solve the capacity problem
> SOON.
>
>  In a former life, I was a capacity planner for Bank of America's
> mid-range server group. We had one hard and fast rule. When you are
> typically exceeding 75% of capacity on a given metric, it's time to expand
> capacity. Period. You don't do silly things like adjusting the business
> model to disincentivize use. Unless there's some flaw in the system and
> it's leaking resources, if usage has increased to the point where you are
> at or near the limits of capacity, you expand capacity. It's as simple as
> that, and I've found that same rule fits quite well in a number of systems.
>
>  In Bitcoin, we're not leaking resources. There's no flaw. The system is
> performing as intended. Usage is increasing because it works so well, and
> there is huge potential for future growth as we identify more uses and
> attract more users. There might be a few technical things we can do to
> reduce consumption, but the metric we're concerned with right now is how
> many transactions we can fit in a block. We've broken through the 75%
> marker and are regularly bumping up against the 100% limit.
>
>  It is time to stop debating this and take action to expand capacity. The
> only questions that should remain are how much capacity do we add, and how
> soon can we do it. Given that most existing computer systems and networks
> can easily handle 20MB blocks every 10 minutes, and given that that will
> increase capacity 20-fold, I can't think of a single reason why we can't go
> to 20MB as soon as humanly possible. And in a few years, when the average
> block size is over 15MB, we bump it up again to as high as we can go then
> without pushing typical computers or networks beyond their capacity. We can
> worry about ways to slow down growth without affecting the usefulness of
> Bitcoin as we get closer to the hard technical limits on our capacity.
>
>  And you know what else? If miners need higher fees to accommodate the
> costs of bigger blocks, they can configure their nodes to only mine
> transactions with higher fees. Let the miners decide how to charge enough
> to pay for their costs. We don't need to cripple the network just for them.
>
>  --
> *James G. Phillips IV*
> <https://plus.google.com/u/0/113107039501292625391/posts>
>
> *"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
> -- David Ogilvy *
>
>   *This message was created with 100% recycled electrons. Please think
> twice before printing.*
>
>

[-- Attachment #2: Type: text/html, Size: 10514 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Bitcoin-development] No Bitcoin For You
@ 2015-05-26  2:51 Thy Shizzle
  0 siblings, 0 replies; 13+ messages in thread
From: Thy Shizzle @ 2015-05-26  2:51 UTC (permalink / raw)
  To: gabe appleton; +Cc: Bitcoin Dev

[-- Attachment #1: Type: text/plain, Size: 6511 bytes --]

I wouldn't say it's the same trade-off, because you need the whole 20mb block before you can start to use it, whereas a 1mb block can be used sooner, so transactions are found in the block quicker etc. As for the higher rate of orphans, I think this would be compensated by a faster correction rate. If you're pumping out blocks at a rate of 1 per 10 minutes and we get a fork, and the next block comes in 10 minutes and is the decider, it took 10 minutes to determine which block is the orphan. But at a rate of 1 block per minute it only takes 1 minute to resolve the orphan (obviously this is very simplified), so I'm not so sure that orphan rate is a big issue here. Indeed you would need to draw upon more confirmations when blocks are easier to create, but surely that is not an issue?
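
For what it's worth, here is a crude way to put numbers on the orphan
question. It is an illustrative model only (stale rate roughly
1 - exp(-t_prop / t_block)), ignoring topology and relay optimizations:

    import math

    def stale_rate(t_prop_s, t_block_s):
        # Probability a competing block appears while one is propagating.
        return 1 - math.exp(-t_prop_s / t_block_s)

    print(f"10-min blocks, 10s propagation: {stale_rate(10, 600):.1%}")  # ~1.7%
    print(f"1-min blocks,  10s propagation: {stale_rate(10, 60):.1%}")   # ~15.4%

So the rate is roughly 10x higher at 1-minute blocks, but as I say above,
each fork also resolves roughly 10x sooner.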

Why would sync time be any longer than with 20mb blocks?
________________________________
From: gabe appleton<mailto:gappleto97@gmail•com>
Sent: ‎26/‎05/‎2015 12:41 PM
To: Thy Shizzle<mailto:thyshizzle@outlook•com>
Cc: Jim Phillips<mailto:jim@ergophobia•org>; Mike Hearn<mailto:mike@plan99•net>; Bitcoin Dev<mailto:bitcoin-development@lists•sourceforge.net>
Subject: Re: [Bitcoin-development] No Bitcoin For You

But don't you see the same trade-off in the end there? You're still
propagating the same amount of data over the same amount of time, so unless
I misunderstand, the costs of such a move should be approximately the same,
just in different areas. The risks as I understand are as follows:

20MB:


   1. Longer per-block propagation (eventually)
   2. Longer processing time (eventually)
   3. Longer sync time

1 Minute:

   1. Weaker individual confirmations (approx. equal per confirmation*time)
   2. Higher orphan rate (immediately)
   3. Longer sync time

That risk-set makes me want a middle-ground approach. Something where the
immediate consequences aren't all that strong, and where we have some idea
of what to do in the future. Is there any chance we can get decent network
simulations at various configurations (5MB/4min, etc)? Perhaps
re-appropriate the testnet?
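
As a starting point before any simulation, one can at least enumerate
configurations with equal raw throughput (a quick sketch; these pairs are
examples, not proposals):

    # (block size in MB, interval in minutes), each ~2 MB/min,
    # i.e. 20x the current 1MB per 10 minutes.
    configs = [(20, 10), (10, 5), (5, 2.5), (2, 1), (1, 0.5)]
    for size_mb, interval_min in configs:
        print(f"{size_mb}MB every {interval_min}min "
              f"= {size_mb / interval_min:.1f} MB/min")

The simulations would then tell us how orphan rate and propagation differ
between configurations that look identical on paper.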

On Mon, May 25, 2015 at 10:30 PM, Thy Shizzle <thyshizzle@outlook•com>
wrote:

>  Nah, don't make blocks 20mb; then you are slowing down block propagation
> and blowing out conf times as a result. Just decrease the time it takes to
> make a 1mb block, then you still see the same propagation times today and
> just increase the transaction throughput.
>  ------------------------------
> From: Jim Phillips <jim@ergophobia•org>
> Sent: ‎26/‎05/‎2015 12:27 PM
> To: Mike Hearn <mike@plan99•net>
> Cc: Bitcoin Dev <bitcoin-development@lists•sourceforge.net>
> Subject: Re: [Bitcoin-development] No Bitcoin For You
>
>
> On Mon, May 25, 2015 at 1:36 PM, Mike Hearn <mike@plan99•net> wrote:
>
>   This meme about datacenter-sized nodes has to die. The Bitcoin wiki is
> down right now, but I showed years ago that you could keep up with VISA on
> a single well specced server with today's technology. Only people living in
> a dreamworld think that Bitcoin might actually have to match that level of
> transaction demand with today's hardware. As noted previously, "too many
> users" is simply not a problem Bitcoin has .... and may never have!
>
>
>  ... And will certainly NEVER have if we can't solve the capacity problem
> SOON.
>
>  In a former life, I was a capacity planner for Bank of America's
> mid-range server group. We had one hard and fast rule. When you are
> typically exceeding 75% of capacity on a given metric, it's time to expand
> capacity. Period. You don't do silly things like adjusting the business
> model to disincentivize use. Unless there's some flaw in the system and
> it's leaking resources, if usage has increased to the point where you are
> at or near the limits of capacity, you expand capacity. It's as simple as
> that, and I've found that same rule fits quite well in a number of systems.
>
>  In Bitcoin, we're not leaking resources. There's no flaw. The system is
> performing as intended. Usage is increasing because it works so well, and
> there is huge potential for future growth as we identify more uses and
> attract more users. There might be a few technical things we can do to
> reduce consumption, but the metric we're concerned with right now is how
> many transactions we can fit in a block. We've broken through the 75%
> marker and are regularly bumping up against the 100% limit.
>
>  It is time to stop debating this and take action to expand capacity. The
> only questions that should remain are how much capacity do we add, and how
> soon can we do it. Given that most existing computer systems and networks
> can easily handle 20MB blocks every 10 minutes, and given that that will
> increase capacity 20-fold, I can't think of a single reason why we can't go
> to 20MB as soon as humanly possible. And in a few years, when the average
> block size is over 15MB, we bump it up again to as high as we can go then
> without pushing typical computers or networks beyond their capacity. We can
> worry about ways to slow down growth without affecting the usefulness of
> Bitcoin as we get closer to the hard technical limits on our capacity.
>
>  And you know what else? If miners need higher fees to accommodate the
> costs of bigger blocks, they can configure their nodes to only mine
> transactions with higher fees. Let the miners decide how to charge enough
> to pay for their costs. We don't need to cripple the network just for them.
>
>  --
> *James G. Phillips IV*
> <https://plus.google.com/u/0/113107039501292625391/posts>
>
> *"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
> -- David Ogilvy *
>
>   *This message was created with 100% recycled electrons. Please think
> twice before printing.*
>

[-- Attachment #2: Type: text/html, Size: 10418 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Bitcoin-development] No Bitcoin For You
  2015-05-25 18:36 ` Mike Hearn
@ 2015-05-26  2:26   ` Jim Phillips
  0 siblings, 0 replies; 13+ messages in thread
From: Jim Phillips @ 2015-05-26  2:26 UTC (permalink / raw)
  To: Mike Hearn; +Cc: Bitcoin Dev

[-- Attachment #1: Type: text/plain, Size: 2973 bytes --]

On Mon, May 25, 2015 at 1:36 PM, Mike Hearn <mike@plan99•net> wrote:

This meme about datacenter-sized nodes has to die. The Bitcoin wiki is down
> right now, but I showed years ago that you could keep up with VISA on a
> single well specced server with today's technology. Only people living in a
> dreamworld think that Bitcoin might actually have to match that level of
> transaction demand with today's hardware. As noted previously, "too many
> users" is simply not a problem Bitcoin has .... and may never have!
>
>
... And will certainly NEVER have if we can't solve the capacity problem
SOON.

In a former life, I was a capacity planner for Bank of America's mid-range
server group. We had one hard and fast rule. When you are typically
exceeding 75% of capacity on a given metric, it's time to expand capacity.
Period. You don't do silly things like adjusting the business model to
disincentivize use. Unless there's some flaw in the system and it's leaking
resources, if usage has increased to the point where you are at or near the
limits of capacity, you expand capacity. It's as simple as that, and I've
found that same rule fits quite well in a number of systems.

In Bitcoin, we're not leaking resources. There's no flaw. The system is
performing as intended. Usage is increasing because it works so well, and
there is huge potential for future growth as we identify more uses and
attract more users. There might be a few technical things we can do to
reduce consumption, but the metric we're concerned with right now is how
many transactions we can fit in a block. We've broken through the 75%
marker and are regularly bumping up against the 100% limit.

It is time to stop debating this and take action to expand capacity. The
only questions that should remain are how much capacity do we add, and how
soon can we do it. Given that most existing computer systems and networks
can easily handle 20MB blocks every 10 minutes, and given that that will
increase capacity 20-fold, I can't think of a single reason why we can't go
to 20MB as soon as humanly possible. And in a few years, when the average
block size is over 15MB, we bump it up again to as high as we can go then
without pushing typical computers or networks beyond their capacity. We can
worry about ways to slow down growth without affecting the usefulness of
Bitcoin as we get closer to the hard technical limits on our capacity.

And you know what else? If miners need higher fees to accommodate the costs
of bigger blocks, they can configure their nodes to only mine transactions
with higher fees. Let the miners decide how to charge enough to pay for
their costs. We don't need to cripple the network just for them.

--
*James G. Phillips IV*
<https://plus.google.com/u/0/113107039501292625391/posts>

*"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

[-- Attachment #2: Type: text/html, Size: 4450 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Bitcoin-development] No Bitcoin For You
  2015-05-14 15:22 Tom Harding
  2015-05-17  2:31 ` Ryan X. Charles
@ 2015-05-25 18:36 ` Mike Hearn
  2015-05-26  2:26   ` Jim Phillips
  1 sibling, 1 reply; 13+ messages in thread
From: Mike Hearn @ 2015-05-25 18:36 UTC (permalink / raw)
  To: Tom Harding; +Cc: Bitcoin Dev

[-- Attachment #1: Type: text/plain, Size: 1447 bytes --]

>
> If capacity grows, fewer individuals would be able to run full nodes.
>

Hardly. Nobody is exhausting the CPU capacity of even a normal computer
today, and even if we saw a 20x increase in load overnight, that still
wouldn't warm up most machines that are good enough to be always on.

The reasons full nodes are unpopular to run seem to be:

1. Uncontrollable bandwidth usage from sending people the chain
2. People don't run them all the time, and then don't want to wait for them
to catch up

The first can be fixed with better code (you can already easily opt out of
uploading the chain, it's just not as fine-grained as desirable), and the
second is fundamental to what full nodes do and how people work. For
merchants, who are the most important demographic we want to be using full
nodes, they can just keep it running all the time. No biggie.
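
A blunt version of that opt-out already exists via long-standing options;
a sketch of the coarse approach (this is not the fine-grained control I
mean):

    # bitcoin.conf: crude ways to avoid serving the chain to strangers
    listen=0          # accept no inbound connections
    maxconnections=8  # keep only your own outbound peers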


> Therefore miners and other full nodes would depend on
> it, which is rather critical as those nodes grow closer to data-center
> proportions.
>

This meme about datacenter-sized nodes has to die. The Bitcoin wiki is down
right now, but I showed years ago that you could keep up with VISA on a
single well specced server with today's technology. Only people living in a
dreamworld think that Bitcoin might actually have to match that level of
transaction demand with today's hardware. As noted previously, "too many
users" is simply not a problem Bitcoin has .... and may never have!

[-- Attachment #2: Type: text/html, Size: 1954 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Bitcoin-development] No Bitcoin For You
  2015-05-14 15:22 Tom Harding
@ 2015-05-17  2:31 ` Ryan X. Charles
  2015-05-25 18:36 ` Mike Hearn
  1 sibling, 0 replies; 13+ messages in thread
From: Ryan X. Charles @ 2015-05-17  2:31 UTC (permalink / raw)
  To: Tom Harding, bitcoin-development

I agree with this analysis. I'm not sure whether we will increase the
1 MB block size or not, but with a block size that small, it is all but
impossible for most people on the planet to ever own even a single utxo.

At 7tps, how long would it take to give 1 utxo to all of the 7 billion
people currently alive? It would take 1 billion seconds, or about 32
years.[1]  So for all practical purposes, at the 1 MB block size, far less
than 1% of people will ever be able to own even a single satoshi.
Unless those people are willing to wait around 30 years for their
lightning network to settle, they will either not use bitcoin, or they
will use a substitute (such as a parallel decentralized network, or a
centralized service) that lacks the full trust-minimized security
guarantees of the main bitcoin blockchain.

I can't speak for most people, but for me personally, the thing I care
most about as an individual (besides being able to send bitcoin to and
from anyone on the planet) is being able to validate the blockchain.
With a pruning node, this means I need to download the blockchain one
time (not store it), and maintain the utxo set. The utxo set is,
roughly speaking, 30 bytes per utxo, and therefore, at one utxo per
person, about 7*30 billion bytes, or 210 GB. That's very achievable on
the hardware of today. Of course, some individuals or companies will
have far more than one utxo. Estimating an average of ten utxos per
person, that will be 2.1 TB. Also very achievable on the hardware of
today.

I don't think every transaction in the world should be on the
blockchain, but I think it should be able to handle (long-term) enough
transactions that everyone can have their transactions settled on a
timescale suitable for human life. 30 years is unsuitable, but 1 day
would be pretty good. It would be great if I could send trillions of
transactions per day on networks built on top of bitcoin, and have my
transactions settle on the actual blockchain once per day. This means
we would need to support about 1 utxo per person per day, or 7 billion
transactions per day. That translates to about 81 thousand
transactions per second [2], or approximately 10,000 times the current
rate. That would be 10 GB per ten minutes, which is achievable on
current hardware (albeit not yet inexpensively).
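
To make those back-of-envelope figures reproducible (the 250 bytes per
transaction below is my assumed average transaction size; the rest are the
numbers used above):

    people = 7e9
    utxo_set_gb = people * 30 / 1e9   # 30 B/utxo -> ~210 GB
    tps = people / 86400              # 1 utxo/person/day -> ~81,000 tps
    block_gb = tps * 600 * 250 / 1e9  # ~12 GB per 10-minute block
    print(round(utxo_set_gb), int(tps), round(block_gb, 1))  # 210 81018 12.2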

Using SPV security rather than pruning security makes the cost even
lower. A person relying on SPV would not have to download every 10 GB
block, but only their transactions (or a small superset of them),
which is already being done - scaling to 7 billion people would not
require that SPV nodes perform any more computation than they already
do. Nonetheless, I think pruning should be considered the default
minimum, since that is what is required to get the full
trust-minimized security guarantees of the blockchain. And that
requires 10 GB blocks (or thereabouts).

The number of people on the planet will also grow, perhaps to 14
billion people in the next few decades. However, the estimates here
would still be roughly correct. 10 GB blocks, or approximately so,
allows everyone in the world to have their transactions settled on the
blockchain in a timely manner, whereas 1 MB blocks do not. And this is
already achievable on current hardware. The most significant cost is
bandwidth, but that will probably become substantially less expensive
in the coming years, making it possible for everyone to inexpensively
and securely send and receive bitcoin to anyone else, without having
to use a parallel network with reduced security or rely on trusted
third parties.

[1] 10^9 / 60 / 60 / 24 / 365 ~= 32.

[2] 7*10^9 / 24 / 60 / 60 ~= 81018

On 05/14/2015 08:22 AM, Tom Harding wrote:
> A recent post, which I cannot find after much effort, made an 
> excellent point.
> 
> If capacity grows, fewer individuals would be able to run full
> nodes. Those individuals, like many already, would have to give up
> running a full-node wallet :(
> 
> That sounds bad, until you consider that the alternative is running
> a full node on the bitcoin 'settlement network', while massive
> numbers of people *give up any hope of directly owning bitcoin at
> all*.
> 
> If today's global payments are 100Ktps, and move to the Lightning 
> Network, they will have to be consolidated by a factor of 25000:1
> to fit into bitcoin's current 4tps capacity as a settlement
> network. You executing a personal transaction on that network will
> be about as likely as you personally conducting a $100 SWIFT
> transfer to yourself today. For current holders, just selling or
> spending will get very expensive!
> 
> Forcing block capacity to stay small, so that individuals can run 
> full nodes, is precisely what will force bitcoin to become a
> backbone that is too expensive for individuals to use.  I can't
> avoid the conclusion that Bitcoin has to scale, and we might as
> well be thinking about how.
> 
> There may be an escape window.  As current trends continue toward
> a landscape of billions of SPV wallets, it may still be possible
> for individuals collectively to make up the majority of the
> network, if more parts of the network itself rely on SPV-level
> security.
> 
> With SPV-level security, it might be possible to implement a
> scalable DHT-type network of nodes that collectively store and
> index the exhaustive and fast-growing corpus of transaction
> history, up to and including currently unconfirmed transactions.
> Each individual node could host a slice of the transaction set with
> a configurable size, let's say down to a few GB today.
> 
> Such a network would have the desirable property of being run by
> the community.  Most transactions would be submitted to it, and
> like today's network, it would disseminate blocks (which would be
> rapidly torn apart and digested).  Therefore miners and other full
> nodes would depend on it, which is rather critical as those nodes
> grow closer to data-center proportions.
> 
>

-- 
Ryan X. Charles
Software Engineer @BitGo

twitter.com/ryanxcharles
github.com/ryanxcharles
keybase.io/ryanxcharles
onename.com/ryanxcharles



^ permalink raw reply	[flat|nested] 13+ messages in thread

* [Bitcoin-development] No Bitcoin For You
@ 2015-05-14 15:22 Tom Harding
  2015-05-17  2:31 ` Ryan X. Charles
  2015-05-25 18:36 ` Mike Hearn
  0 siblings, 2 replies; 13+ messages in thread
From: Tom Harding @ 2015-05-14 15:22 UTC (permalink / raw)
  To: Bitcoin Dev

A recent post, which I cannot find after much effort, made an excellent
point.

If capacity grows, fewer individuals would be able to run full nodes. 
Those individuals, like many already, would have to give up running a
full-node wallet :(

That sounds bad, until you consider that the alternative is running a
full node on the bitcoin 'settlement network', while massive numbers of
people *give up any hope of directly owning bitcoin at all*.

If today's global payments are 100Ktps, and move to the Lightning
Network, they will have to be consolidated by a factor of 25000:1 to fit
into bitcoin's current 4tps capacity as a settlement network.  You
executing a personal transaction on that network will be about as likely
as you personally conducting a $100 SWIFT transfer to yourself today. 
For current holders, just selling or spending will get very expensive!
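
To spell out that factor (a trivial check; 100Ktps and 4tps are the
figures assumed above):

    global_tps, settlement_tps = 100_000, 4
    print(global_tps / settlement_tps)  # 25000.0 payments per settlement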

Forcing block capacity to stay small, so that individuals can run full
nodes, is precisely what will force bitcoin to become a backbone that is
too expensive for individuals to use.  I can't avoid the conclusion that
Bitcoin has to scale, and we might as well be thinking about how.

There may be an escape window.  As current trends continue toward a
landscape of billions of SPV wallets, it may still be possible for
individuals collectively to make up the majority of the network, if more
parts of the network itself rely on SPV-level security.

With SPV-level security, it might be possible to implement a scalable
DHT-type network of nodes that collectively store and index the
exhaustive and fast-growing corpus of transaction history, up to and
including currently unconfirmed transactions.  Each individual node
could host a slice of the transaction set with a configurable size,
let's say down to a few GB today.
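
As a strawman for how slice assignment might look, here is a purely
hypothetical sketch; the shard count and txid-prefix scheme are
illustrative, not a worked-out protocol:

    import hashlib

    NUM_SLICES = 1024  # hypothetical shard count; a node hosts a few slices

    def slice_for(txid_hex: str) -> int:
        # Map a transaction id to a slice by its leading bytes.
        return int(txid_hex[:8], 16) % NUM_SLICES

    txid = hashlib.sha256(b"some raw transaction bytes").hexdigest()
    print(slice_for(txid))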

Such a network would have the desirable property of being run by the
community.  Most transactions would be submitted to it, and like today's
network, it would disseminate blocks (which would be rapidly torn apart
and digested).  Therefore miners and other full nodes would depend on
it, which is rather critical as those nodes grow closer to data-center
proportions.





^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2015-05-26  8:30 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-26  2:30 [Bitcoin-development] No Bitcoin For You Thy Shizzle
2015-05-26  2:41 ` gabe appleton
2015-05-26  2:53 ` Jim Phillips
  -- strict thread matches above, loose matches on Subject: below --
2015-05-26  3:02 Thy Shizzle
2015-05-26  3:23 ` Jim Phillips
2015-05-26  3:49   ` Jim Phillips
2015-05-26  5:43     ` gabe appleton
2015-05-26  8:29       ` Jim Phillips
2015-05-26  2:51 Thy Shizzle
2015-05-14 15:22 Tom Harding
2015-05-17  2:31 ` Ryan X. Charles
2015-05-25 18:36 ` Mike Hearn
2015-05-26  2:26   ` Jim Phillips

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox