Subject: [Bitcoin-development] Why do we need a MAX_BLOCK_SIZE at all?
From: Jim Phillips @ 2015-06-01 18:32 UTC
  To: Bitcoin Dev


OK, I understand at least some of the reasons that blocks have to be kept
to a certain size. I get that blocks which are too big will be hard for
relays to propagate. Miners will have more trouble uploading large blocks
to the network once they've found a hash. We need block size constraints
to create a fee economy for the miners.

But these all sound to me like issues that affect some participants but
not others. So it seems to me like it ought to be a configurable setting.
We've already witnessed with last week's stress test that most miners
aren't even creating 1MB blocks but are still using the software default
of 750k. If there are configurable limits, why does there have to be a
hard limit? Can't miners just use the configurable limit to decide what
size blocks they can afford, and are thus willing, to create? They could
just as easily use that to create a fee economy. If the miners with the
most hashpower are not willing to mine blocks larger than 1 or 2 megs,
then they are able to slow down confirmations of transactions. It may
take several blocks before a miner willing to include a particular
transaction finds a block. This would actually force miners to compete
with each other and find a block size naturally instead of having it
forced on them by the protocol. Relays would be able to participate in
that process by restricting the miners' ability to propagate large
blocks. You know, like what happens in a FREE MARKET economy, without
burdensome regulation which can be manipulated through politics? Isn't
that what's really happening right now? Different political factions with
different agendas are fighting over how best to regulate the Bitcoin
protocol.
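
To make concrete what I mean, here is a rough sketch (the names and
structure are made up for illustration; this is not actual Bitcoin Core
code) of a miner assembling a block template against nothing but its own
operator-chosen cap, with no protocol-wide MAX_BLOCK_SIZE in sight:

    #include <cstdint>
    #include <vector>

    struct Tx { uint64_t size_bytes; uint64_t fee; };

    // Operator-chosen cap, e.g. read from a -blockmaxsize style option.
    static const uint64_t kConfiguredMaxBlockSize = 750000;

    // Greedily fill a block template, respecting only this miner's own
    // limit. Note there is no separate consensus-level size check here.
    std::vector<Tx> BuildBlockTemplate(const std::vector<Tx>& mempool)
    {
        std::vector<Tx> selected;
        uint64_t total = 0;
        for (const Tx& tx : mempool) {
            if (total + tx.size_bytes > kConfiguredMaxBlockSize)
                continue; // over this miner's limit, skip it
            selected.push_back(tx);
            total += tx.size_bytes;
        }
        return selected;
    }

Each miner or pool would pick its own number based on what it can afford
to upload and what fees it wants to collect, and the market settles the
rest.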

I know the limit was originally put in place to prevent spamming. But
that was when we were mining with CPUs and just beginning to see the
occasional GPU that could take control of the network and maliciously
spam large blocks. But with ASIC mining now catching up to Moore's Law,
that's not really an issue anymore. No single malicious entity can really
just take over the network now without spending more money than it's
worth -- and that's only going to become more true with time as hashpower
continues to grow. And it's not as if the hard limit really does anything
anymore to prevent spamming. If a spammer wants to create thousands or
millions of transactions, a hard limit on the block size isn't going to
stop him. He'll just fill up the mempool or UTXO database instead of
someone's block database. And block storage media is generally the
cheapest storage; blocks could be written to tape and be just as valid as
if they were stored in DRAM. Combine that with pruning, and block storage
costs are almost a non-issue for anyone who isn't running an archival
node.
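
For what it's worth, pruning already makes the storage side nearly free
for a non-archival node. As a hedged example, in Bitcoin Core builds that
support block pruning, something like this in bitcoin.conf keeps only the
most recent few hundred megabytes of raw block files on disk:

    # bitcoin.conf: retain roughly the latest 550 MB of block files,
    # discarding older raw blocks once they have been validated.
    prune=550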

And can't relay nodes just configure a limit on the size of blocks they
will relay? Sure, they'd still need to download a big block occasionally,
but that's not really that big a deal, and they're under no obligation to
propagate it. Even if it's a 2GB block, it'll get downloaded eventually.
It's only if it gets to the point where the average home connection is
too slow to keep up with the transaction and block flow that there's any
real issue, and that would happen regardless of how big the blocks are. I
personally would much prefer to see hardware limits act as the bottleneck
rather than introduce an artificial bottleneck into the protocol that has
to be adjusted regularly. The software and protocol are TECHNICALLY
capable of scaling to handle the world's entire transaction set. The real
issue with scaling to that size is the limitations of hardware, which are
governed by Moore's Law. Why do we need arbitrary soft limits? Why can't
we allow Bitcoin to grow naturally within the ever-increasing limits of
our hardware? Is it because nobody will ever need more than 640k of RAM?
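
A relay-side limit could be just as simple. As a sketch (again, purely
illustrative names, not an existing option), a node might validate and
store whatever blocks it receives but only announce to its peers the ones
under its own configured threshold:

    #include <cstdint>

    // Operator-chosen relay threshold in bytes (hypothetical setting).
    static const uint64_t kRelayMaxBlockSize = 8 * 1000 * 1000;

    // The node still downloads and validates an oversized block; it
    // simply declines to announce it to peers.
    bool ShouldAnnounceBlock(uint64_t block_size_bytes)
    {
        return block_size_bytes <= kRelayMaxBlockSize;
    }

Miners who insist on huge blocks would then see them propagate slowly,
which is exactly the kind of natural pressure I'm talking about.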

Am I missing something here? Is there some big reason I'm overlooking for
why there has to be a hard-coded limit on the block size, one that affects
the entire network and creates ongoing issues in the future?

--

*James G. Phillips IV*
<https://plus.google.com/u/0/113107039501292625391/posts>

*"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

