public inbox for bitcoindev@googlegroups.com
 help / color / mirror / Atom feed
* Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size
@ 2015-05-11  8:50 insecurity
  0 siblings, 0 replies; 10+ messages in thread
From: insecurity @ 2015-05-11  8:50 UTC (permalink / raw)
  To: sergiolerner; +Cc: bitcoin-development

> So if the server pushes new block
> header candidates to clients, then the problem boils down to increasing
> bandwidth of the servers to achieve a tenfold increase in work
> distribution.

Most Stratum pools already push multiple updates of the header every block
period; bandwidth is really inconsequential, it's the latency that kills. At
present you are looking at up to 15 seconds between the first and last pools
pushing headers to their clients for the latest block. That is mostly
inconsequential with a 10 minute block time, but it cuts into a 1 minute one
very heavily.

Some pools already don't do their own validation of blocks, but simply mirror
other pools; pushing them to be even more latency-focused will just make this
an epidemic of invalidity rather than a solution.


> There are several proof-of-work cryptocurrencies in existence
> that have lower than 1 minute block intervals and they work just fine.
> First there was Bitcoin with a 10 minute interval, then LiteCoin
> using a 2.5 minute interval, then DogeCoin with 1 minute, and then
> QuarkCoin with just 30 seconds.

You can't really use these as examples of things going just fine. None of
these networks sees anything approaching the Bitcoin transaction volume, and
none has even remotely the same network size. Some Bitcoin forks use floats
in consensus-critical code and work "just fine", for the moment. We can't
justify poor decisions with "but the altcoins are doing it".

Is there even a single study of the stale rates within these networks?



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size
       [not found] ` <5551021E.8010706@LeoWandersleb.de>
@ 2015-05-12 18:55   ` Sergio Lerner
  0 siblings, 0 replies; 10+ messages in thread
From: Sergio Lerner @ 2015-05-12 18:55 UTC (permalink / raw)
  To: Leo Wandersleb, bitcoin-development

[-- Attachment #1: Type: text/plain, Size: 2254 bytes --]



On 11/05/2015 04:25 p.m., Leo Wandersleb wrote:
> I assume that a 1 minute block target will not get any substantial support, but
> just in case only a few people speaking up might be taken as careful support of
> the idea, here's my two cents:
>
> In mining, stale shares depend on the delay between pool/network and the
> miner. This varies substantially globally and, as Peter Todd/Luke-Jr mentioned,
> the speed of light will always keep those at a disadvantage that are 100
> light-milliseconds away from the creation of the last block. If anything, this
> warrants increasing the block target, not reducing it. (The increase might wait
> until we have miners on Mars though ;) )

An additional delay of 200 milliseconds means losing approximately 0.3%
of the revenue.
Do you really think this is going to be the key factor that prevents a
mining pool from being used?
There are lots of other factors, such as DoS protections, security,
privacy, variance, trust, and the algorithm used to distribute shares, that
are much more important than that.
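
As a back-of-the-envelope check of that figure (a rough sketch assuming
Poisson block arrivals and that work done during the extra delay is simply
wasted; illustrative numbers, not a simulation):

    # Expected share of revenue lost to an extra fixed delay d,
    # with exponential inter-block times of mean T.
    import math

    def revenue_loss(delay_s, block_interval_s):
        # Probability that some block is found during the extra delay,
        # i.e. the fraction of time spent mining on stale work.
        return 1 - math.exp(-delay_s / block_interval_s)

    print(revenue_loss(0.2, 60))   # ~0.0033 -> ~0.33% with 1-minute blocks
    print(revenue_loss(0.2, 600))  # ~0.0003 -> ~0.03% with 10-minute blocks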

And having a 1-minute block actually reduces the payout variance 10x, so
miners will be happy about that. And many pool miners may opt for solo
mining, and create new full nodes.
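
A minimal sketch of where the 10x figure comes from, assuming Poisson block
finding, a fixed hashrate share, and the per-block reward scaled down by the
same factor the block rate is scaled up (the reward values are illustrative):

    # Payout variance for a solo miner over a fixed period.
    def payout_variance(share, blocks_per_day, reward_per_block):
        expected_blocks = share * blocks_per_day       # Poisson mean
        # For Poisson N, Var(reward * N) = reward^2 * E[N]
        return reward_per_block ** 2 * expected_blocks

    v10 = payout_variance(0.001, 144, 25.0)    # 10-minute blocks
    v1  = payout_variance(0.001, 1440, 2.5)    # 1-minute blocks, 1/10 reward
    print(v10 / v1)                            # -> 10.0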

>
>
> If SPV also becomes 10 times more traffic intensive, I can only urge you to
> travel to anything but central Europe or the USA.
The SPV traffic is minuscule. Bloom filters are an ugly solution that
increases bandwidth and does not provide a real privacy solution.
Small improvements in the wire protocol can reduce the traffic two-fold.

>
>
> I want bitcoin to be the currency for the other x billion and thus I oppose
> any change that moves the balance towards the economically upper billion.
Because with a 10-minute block rate Bitcoin is a good Internet money. With a
1-minute rate, it can also be a retail payment method, a virtual-game trading
payment method, a gambling payment method, an XXX-video rental payment method
(hey, it takes less than 10 minutes to watch one of those :), and much more.

You can reach more billions by having near-instant payments.
Don't tell me about the morning coffee; I would like everyone to be buying
their coffee with Bitcoin, and millions of users, before we figure out how to
do that off-chain.

Best regards,
 Sergio.



[-- Attachment #2: Type: text/html, Size: 3062 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size
  2015-05-11  7:03 Sergio Lerner
  2015-05-11 10:34 ` Peter Todd
@ 2015-05-11 16:47 ` Luke Dashjr
       [not found] ` <5551021E.8010706@LeoWandersleb.de>
  2 siblings, 0 replies; 10+ messages in thread
From: Luke Dashjr @ 2015-05-11 16:47 UTC (permalink / raw)
  To: bitcoin-development

On Monday, May 11, 2015 7:03:29 AM Sergio Lerner wrote:
> 1. It will encourage centralization, because participants of mining
> pools will lose more money because of excessive initial block template
> latency, which leads to higher stale shares
> 
> When a new block is solved, that information needs to propagate
> throughout the Bitcoin network up to the mining pool operator nodes,
> then a new block header candidate is created, and this header must be
> propagated to all the mining pool users, either by a push or a pull
> model. Generally the mining server pushes new work units to the
> individual miners. If done the other way around, the server would need to
> handle a high load of continuous work requests that would be difficult
> to distinguish from a DDoS attack. So if the server pushes new block
> header candidates to clients, then the problem boils down to increasing
> the bandwidth of the servers to achieve a tenfold increase in work
> distribution, or distributing the servers geographically to achieve a
> lower latency. Propagating blocks does not require additional CPU
> resources, so mining pool administrators would need to moderately
> increase their investment in server infrastructure to achieve
> lower latency and higher bandwidth, but I guess the investment would be
> low.

1. Latency is what matters here, not bandwidth so much. And latency reduction 
is either expensive or impossible.
2. Mining pools are mostly run at a loss (with the exception of only the most 
centralised pools), and have nothing to invest in increasing infrastructure.

> 3. It will reduce the security of the network
> 
> The security of the network is based on the following facts:
> A- The miners are incentivized to extend the best chain
> B- The probability of a reversal based on a long block competition
> decreases as more confirmation blocks are appended.
> C- Renting or buying hardware to perform a 51% attack is costly.
> 
> A still holds. B holds for the same number of confirmation blocks, so 6
> confirmation blocks in a 10-minute block-chain is approximately
> equivalent to 6 confirmation blocks in a 1-minute block-chain.
> Only C changes, as renting the hashing power for 6 minutes is ten times
> less expensive than renting it for 1 hour. However, there is no shop where
> one can find 51% of the hashing power to rent right now, nor probably
> will there ever be if Bitcoin succeeds. Last, you can still have a 1 hour
> confirmation (60 1-minute blocks) if you wish for high-valued payments,
> so the security decreases only if participants wish to decrease it.

You're overlooking at least:
1. The real network has to suffer wasted work as a result of the stale blocks, 
while an attacker does not. If 20% of blocks are stale, the attacker only 
needs 40% of the legitimate hashrate to achieve 50%-in-practice.
2. Since blocks are individually weaker, it becomes cheaper to DoS nodes with 
invalid blocks. (not sure if this is a real concern, but it ought to be 
considered and addressed)

> 4. Reducing the block propagation time in the average case is good, but
> what happens in the worst case?
> 
> Most methods proposed to reduce the block propagation delay do it only
> in the average case. Any kind of block compression relies on both
> parties sharing some previous information. In the worst case it's true
> that a miner can create and try to broadcast a block that takes too much
> time to verify or bandwidth to transmit. This is currently true on the
> Bitcoin network. Nevertheless there is no such incentive for miners,
> since they would be shooting themselves in the foot. Peter Todd has argued
> that the best strategy for miners is actually to reach 51% of the
> network, but not more. In other words, to exclude the slowest 49
> percent. But this strategy of creating bloated blocks is too risky in
> practice, and surely doomed to fail, as network conditions dynamically
> change. Also it would be perceived as an attack on the network, and the
> miner (if it is a public mining pool) would probably be blacklisted.

One can probably overcome changing network conditions merely by trying to 
reach 75% and exclude the slowest 25%. Also, there is no way to identify or 
blacklist miners.

> 5. Thousands of SPV wallets running in mobile devices would need to be
> upgraded (thanks Mike).
> 
> That depends on the current upgrade rate for SPV wallets like Bitcoin
> Wallet and BreadWallet. Suppose that the upgrade rate is 80%/year: we
> develop the source code for the change now and apply the change in Q2
> 2016; then most of the nodes will already be upgraded by the time the
> hardfork takes place. Also a public notice telling people to upgrade, in
> web pages, bitcointalk, SPV wallet warnings, and coindesk, one year in
> advance will give plenty of time for SPV wallet users to upgrade.

I agree this shouldn't be a real concern. SPV wallets are also more likely to 
be auto-updated, and auto-updating them is (globally) less risky.

> 6. If there are 10x more blocks, then there are 10x more block headers,
> and that increases the amount of bandwidth SPV wallets need to catch up
> with the chain
> 
> A standard smartphone with average cellular downstream speed downloads
> 2.6 headers per second (1600 kbits/sec) [3], so if synchronization were
> to be done only at night when the phone is connected to the power line,
> then it would take 9 minutes to synchronize with 1440 headers/day. If a
> person should accept a payment, and the smartphone is 1 day
> out-of-sync, then it takes less time to download all the missing
> headers than to wait for a 10-minute one-block confirmation. Obviously
> all smartphones with 3G have a much higher downstream bandwidth,
> averaging 1 Mbps. So the whole synchronization will be done in less than a
> 1-minute block confirmation.

Uh, I think you need to be using at least median speeds. As an example, I can 
only sustain (over 3G) about 40 kbps, with a peak of around 400 kbps. 3G has 
worse range/coverage than 2G. No doubt the *average* is skewed so high because 
of densely populated areas like San Francisco having 400+ Mbps cellular data. 
It's not reasonable to assume sync only at night: most payments will be during 
the day, on battery - so increased power use must also be considered.

> According to CISCO, mobile bandwidth connection speed increases 20% every
> year.

Only in small densely populated areas of first-world countries.

Luke



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size
  2015-05-11 11:49     ` Dave Hudson
@ 2015-05-11 12:34       ` Christian Decker
  0 siblings, 0 replies; 10+ messages in thread
From: Christian Decker @ 2015-05-11 12:34 UTC (permalink / raw)
  To: Dave Hudson, insecurity; +Cc: bitcoin-development

[-- Attachment #1: Type: text/plain, Size: 2887 bytes --]

The propagation speed gain from having smaller blocks is linear in the size
reduction, down to a small size, after which the delay of the first byte
prevails [1]; however, the blockchain fork rate increases superlinearly,
giving an overall worse tradeoff. A high blockchain fork rate is a symptom
of inefficient use of the network's mining resources and may give an
advantage to an attacker that is more efficient at communicating internally.
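
As a rough illustration of why the tradeoff worsens (a sketch using the common
exponential approximation with an assumed effective propagation delay, not the
measured data from [1]):

    # Approximate stale/fork rate when the block interval approaches the
    # effective network propagation delay.
    import math

    def fork_rate(propagation_delay_s, block_interval_s):
        # Probability that a competing block is found while the last
        # block is still propagating.
        return 1 - math.exp(-propagation_delay_s / block_interval_s)

    for interval in (600, 60, 30, 10):
        print(interval, round(fork_rate(10, interval), 4))
    # With an assumed 10 s delay: 600 s -> ~1.7%, 60 s -> ~15.4%,
    # 30 s -> ~28.3%, 10 s -> ~63.2%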

I'd strongly advise against increasing the block generation rate in Bitcoin;
it'd be a very controversial proposal and would not solve anything.

[1]
http://www.tik.ee.ethz.ch/file/49318d3f56c1d525aabf7fda78b23fc0/P2P2013_041.pdf

On Mon, May 11, 2015 at 1:51 PM Dave Hudson <dave@hashingit•com> wrote:

>
> > On 11 May 2015, at 12:10, insecurity@national•shitposting.agency wrote:
> >
> > On 2015-05-11 10:34, Peter Todd wrote:
> >> How do you see that blacklisting actually being done?
> >
> > Same way ghash.io was banned from the network when it used Finney attacks
> > against BetCoin Dice.
> >
> > As Andreas Antonopoulos says, if any of the miners do anything bad, we
> > just ban them from mining. Any sort of attack like this only lasts 10
> > minutes as a result. Stop worrying so much.
>
> This doesn't work because a large-scale miner can trivially make
> themselves look like a very large number of much smaller scale miners.
> Their ability to minimize variance comes from the cumulative totals they
> control so 10 pools of 1% of the network cumulatively have the same
> variance as 1 pool with 10% of the network. It's also very easy for miners
> to relay blocks via different addresses and the cost is minimal. The
> biggest cost would be in DDoS prevention and a miner that actually split
> their pool into lots of small fragments would actually give themselves the
> ability to do quite a lot of DDoS mitigation anyway. If no-one is doing
> this right now it's simply because they've not had the right incentives to
> make it worthwhile; if the incentives make it worthwhile then this is
> pretty trivial to do.
>
> This is one area where anonymity on behalf of transaction validators and
> block makers essentially makes it pretty-much impossible to maintain any
> sort of sanctions against antisocial behaviour.
>

[-- Attachment #2: Type: text/html, Size: 3703 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size
  2015-05-11 11:10   ` insecurity
@ 2015-05-11 11:49     ` Dave Hudson
  2015-05-11 12:34       ` Christian Decker
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Hudson @ 2015-05-11 11:49 UTC (permalink / raw)
  To: insecurity; +Cc: bitcoin-development


> On 11 May 2015, at 12:10, insecurity@national•shitposting.agency wrote:
> 
> On 2015-05-11 10:34, Peter Todd wrote:
>> How do you see that blacklisting actually being done?
> 
> Same way ghash.io was banned from the network when it used Finney attacks
> against BetCoin Dice.
> 
> As Andreas Antonopoulos says, if any of the miners do anything bad, we
> just ban them from mining. Any sort of attack like this only lasts 10
> minutes as a result. Stop worrying so much.

This doesn't work because a large-scale miner can trivially make themselves look like a very large number of much smaller scale miners. Their ability to minimize variance comes from the cumulative totals they control so 10 pools of 1% of the network cumulatively have the same variance as 1 pool with 10% of the network. It's also very easy for miners to relay blocks via different addresses and the cost is minimal. The biggest cost would be in DDoS prevention and a miner that actually split their pool into lots of small fragments would actually give themselves the ability to do quite a lot of DDoS mitigation anyway. If no-one is doing this right now it's simply because they've not had the right incentives to make it worthwhile; if the incentives make it worthwhile then this is pretty trivial to do.
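
A quick numerical illustration of the variance point (a sketch assuming independent Poisson block finding per pool; the parameter values are made up):

    # Variance of total blocks found per day: one 10% pool vs ten 1% pools.
    BLOCKS_PER_DAY = 144

    var_single = 0.10 * BLOCKS_PER_DAY                  # Var of a Poisson = its mean
    var_split = sum(0.01 * BLOCKS_PER_DAY for _ in range(10))

    print(var_single, var_split)   # 14.4 and 14.4 -> identical cumulative variance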

This is one area where anonymity on behalf of transaction validators and block makers essentially makes it pretty-much impossible to maintain any sort of sanctions against antisocial behaviour.


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size
  2015-05-11 10:34 ` Peter Todd
@ 2015-05-11 11:10   ` insecurity
  2015-05-11 11:49     ` Dave Hudson
  0 siblings, 1 reply; 10+ messages in thread
From: insecurity @ 2015-05-11 11:10 UTC (permalink / raw)
  To: pete; +Cc: bitcoin-development

On 2015-05-11 10:34, Peter Todd wrote:
> How do you see that blacklisting actually being done?

Same way ghash.io was banned from the network when it used Finney attacks
against BetCoin Dice.

As Andreas Antonopoulos says, if any of the miners do anything bad, we
just ban them from mining. Any sort of attack like this only lasts 10
minutes as a result. Stop worrying so much.

https://youtu.be/ncPyMUfNyVM?t=20s





^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size
  2015-05-11  7:03 Sergio Lerner
@ 2015-05-11 10:34 ` Peter Todd
  2015-05-11 11:10   ` insecurity
  2015-05-11 16:47 ` Luke Dashjr
       [not found] ` <5551021E.8010706@LeoWandersleb.de>
  2 siblings, 1 reply; 10+ messages in thread
From: Peter Todd @ 2015-05-11 10:34 UTC (permalink / raw)
  To: Sergio Lerner; +Cc: bitcoin-development

[-- Attachment #1: Type: text/plain, Size: 5829 bytes --]

On Mon, May 11, 2015 at 04:03:29AM -0300, Sergio Lerner wrote:
> Arguments against reducing the block interval
> 
> 1. It will encourage centralization, because participants of mining
> pools will lose more money because of excessive initial block template
> latency, which leads to higher stale shares
> 
> When a new block is solved, that information needs to propagate
> throughout the Bitcoin network up to the mining pool operator nodes,
> then a new block header candidate is created, and this header must be
> propagated to all the mining pool users, either by a push or a pull
> model. Generally the mining server pushes new work units to the
> individual miners. If done the other way around, the server would need to
> handle a high load of continuous work requests that would be difficult
> to distinguish from a DDoS attack. So if the server pushes new block
> header candidates to clients, then the problem boils down to increasing
> the bandwidth of the servers to achieve a tenfold increase in work
> distribution, or distributing the servers geographically to achieve a
> lower latency. Propagating blocks does not require additional CPU
> resources, so mining pool administrators would need to moderately
> increase their investment in server infrastructure to achieve
> lower latency and higher bandwidth, but I guess the investment would be low.

It's *way* easier to buy more bandwidth than it is to get lower latency.

After all, getting to the other side of the planet via fiber takes at
*minimum* 100ms simply due to the speed of light; routing overheads
approximately double or triple that for all but highly specialized and
very, very expensive, networking services. Bandwidth simply can't fix
the speed of light.
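
The 100ms floor is simple arithmetic (a sketch with rounded constants; real
fiber routes are longer than the great-circle path):

    # One-way latency to the antipode over ideal fiber.
    EARTH_HALF_CIRCUMFERENCE_KM = 20_000
    LIGHT_SPEED_IN_FIBER_KM_S = 200_000   # ~2/3 of c due to the refractive index

    print(EARTH_HALF_CIRCUMFERENCE_KM / LIGHT_SPEED_IN_FIBER_KM_S)  # 0.1 s = 100 ms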

It's also not at all realistic or desirable to assume connectivity in a
single hop, so you can again multiply that base latency by 2-5 times.

And on top of *that* you have to take into account latency from hasher
to mining pool - time that the hashing power isn't working on the new
block because their work unit hasn't been updated matters just as much
as the time to get that block to the pool in the first place. Being
forced to reduce that latency is very damaging to the ecosystem as
you're making it more profitable to keep hashing power centralized.

In any case, even with 10 minute blocks pools already pay a lot of
attention to latency... Why make that problem 10x worse?

> 2. It will increase the probability of a block-chain split
> 
> The convergence of the network relies on the diminishing probability of
> two honest miners creating simultaneous competing block chains. To
> increase chain competition, competing blocks must be generated
> almost simultaneously (in the same time window, approximately bounded by
> the network's average block propagation delay). The probability of a block
> competition decreases exponentially with the number of blocks. In fact,
> the probability of a sustained competition on ten 1-minute blocks is one
> million times lower than the probability of a competition on one
> 10-minute block. So even if the competition probability of six 1-minute
> blocks is higher than that of six ten-minute blocks, this does not imply that
> reducing the block rate increases this chance, but on the contrary,
> reduces it.

Can you explain your reasoning here in detail?

> 4. Reducing the block propagation time in the average case is good, but
> what happens in the worst case?
> 
> Most methods proposed to reduce the block propagation delay do it only
> in the average case. Any kind of block compression relies on both
> parties sharing some previous information. In the worst case it's true
> that a miner can create and try to broadcast a block that takes too much
> time to verify or bandwidth to transmit. This is currently true on the
> Bitcoin network. Nevertheless there is no such incentive for miners,
> since they would be shooting themselves in the foot. Peter Todd has argued
> that the best strategy for miners is actually to reach 51% of the
> network, but not more. In other words, to exclude the slowest 49

Actually the correct figure is less than ~30%:

http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg03200.html

> percent. But this strategy of creating bloated blocks is too risky in
> practice, and surely doomed to fail, as network conditions dynamically 
> change.

They dynamically change? Source?

Remember that the strategy still gives you a benefit if you simply
target, say, 75% rather than the minimum threshold.

> Also it would be perceived as an attack to the network, and the
> miner (if it is a public mining pool) would be probably blacklisted.

How do you see that blacklisting actually being done?

Equally, it's easy to portray such mining as being "for the good of
Bitcoin" - "we're just making transactions cheap! tough luck if your
shitty pool can't keep up". This is quite unlike selfish mining.

> 7. There has been insufficient testing and/or insufficient research into
> the technical/economic implications of reducing the block rate
>  
> This is partially true. In the GHOST paper, this has been analyzed, and
> the problem was shown to be solvable for block intervals of just a few
> seconds.

GHOST works radically differently than a linear blockchain, and it's not
clear that it actually has the correct economic incentives.

> These networking optimizations (O(1) propagation using headers-first or
> IBLTs) can be added later.

Keep in mind that miners already use optimized propagation techniques,
like p2pool's implementation or Matt Corallo's block relaying network.

-- 
'peter'[:-1]@petertodd.org
00000000000000000c2b75113c6d2539f436ee9ac90abf620d9d3a3a4a19d3e8

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 650 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size
  2015-05-11  7:30 Thy Shizzle
@ 2015-05-11  8:16 ` Dave Hudson
  0 siblings, 0 replies; 10+ messages in thread
From: Dave Hudson @ 2015-05-11  8:16 UTC (permalink / raw)
  To: Thy Shizzle; +Cc: bitcoin-development

[-- Attachment #1: Type: text/plain, Size: 14169 bytes --]

I proposed the same thing last year (there's a video of the presentation I gave floating around somewhere). My intuition was that this would require slowly reducing the inter-block time, probably by step reductions at particular block heights.

Having had almost a year to think about it some more there are a few subtleties:

1) I think it could discourage decentralisation if the nominal 2 week period per difficulty retarget is retained. If we reached 4032 blocks per retarget and a 5 minute block time then there would be 2x as many blocks at any given difficulty, which increases the odds of a smaller pool finding a block and thus getting a reward. Block rewards would have to drop in proportion to the reduced interval to keep the total schedule of 21M coins on track though, but the reduction in variance is a win for smaller miners.

2) There are limits to the block time. The speed of light is an ultimately limiting factor here, but we would want to avoid excessive orphan rates.

3) There would be some amount of confusion about numbers of confirmations. I actually think that confirmation numbers are a really misleading idea anyway and it would be safer to think in terms of "minutes of security". A zero-conf transaction has "zero minutes", while right now 1, 2, 3 and 6 confirmations would be "ten minutes", "twenty minutes", "thirty minutes" and "sixty minutes" respectively. If our block time were 5 minutes then 8 confirmations would be "forty minutes" of security; if the block time was 2.5 minutes then 8 confirmations would be "twenty minutes" of security. The "minutes of security" measure indicates the mean number of minutes of the entire network's hash rate that would be required to undo a transaction (see the sketch after this list).

4) Reducing the inter-block time reduces the variance in reaching that "sixty minutes" of security level. The variance around finding 6 blocks with a ten minute interval is much wider than the variance for finding 12 blocks with a 5 minute interval.
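
A minimal sketch of the "minutes of security" conversion from point 3 (the block intervals below are just examples):

    # Confirmations expressed as minutes of the whole network's hash rate.
    def minutes_of_security(confirmations, block_interval_minutes):
        return confirmations * block_interval_minutes

    print(minutes_of_security(6, 10))    # 60 minutes today
    print(minutes_of_security(8, 5))     # 40 minutes with 5-minute blocks
    print(minutes_of_security(8, 2.5))   # 20 minutes with 2.5-minute blocks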



> On 11 May 2015, at 08:30, Thy Shizzle <thyshizzle@outlook•com> wrote:
> 
> Yes This!
> 
> So many people seem hung up on growing the block size! If gaining a higher tps throughput is the main aim, I think that this proposition to speed up block creation has merit!
> 
> Yes it will lead to an increase in the block chain still, due to 1mb ~1 minute instead of ~10 minutes, but the change to the protocol is minor: you are only adding in a different difficulty rate starting from height blah, no new features or anything are being added, so there seems to me much less of a security risk! Also the impact of a hard fork should be minimal because there is nothing but absolute incentive for miners to mine at the new easier difficulty!
> 
> I feel this deserves a great deal of consideration as opposed to blowing out the block through miners voting etc!!!!
> From: Sergio Lerner <sergiolerner@certimix•com>
> Sent: 11/05/2015 5:05 PM
> To: bitcoin-development@lists•sourceforge.net
> Subject: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size
> 
> In this e-mail I'll do my best to argue that if you accept that
> increasing the transactions/second is a good direction to go, then
> increasing the maximum block size is not the best way to do it. I argue
> that the right direction to go is to decrease the block rate to 1
> minute, while keeping the block size limit to 1 Megabyte (or increasing
> it from a lower value such as 100 Kbyte and then have a step function).
> I'm backing up my claims with many hours of research simulating the
> Bitcoin network under different conditions [1].  I'll try to convince
> you by responding to each of the arguments I've heard against it.
> 
> Arguments against reducing the block interval
> 
> 1. It will encourage centralization, because participants of mining
> pools will lose more money because of excessive initial block template
> latency, which leads to higher stale shares
> 
> When a new block is solved, that information needs to propagate
> throughout the Bitcoin network up to the mining pool operator nodes,
> then a new block header candidate is created, and this header must be
> propagated to all the mining pool users, either by a push or a pull
> model. Generally the mining server pushes new work units to the
> individual miners. If done the other way around, the server would need to
> handle a high load of continuous work requests that would be difficult
> to distinguish from a DDoS attack. So if the server pushes new block
> header candidates to clients, then the problem boils down to increasing
> the bandwidth of the servers to achieve a tenfold increase in work
> distribution, or distributing the servers geographically to achieve a
> lower latency. Propagating blocks does not require additional CPU
> resources, so mining pool administrators would need to moderately
> increase their investment in server infrastructure to achieve
> lower latency and higher bandwidth, but I guess the investment would be low.
> 
> 2. It will increase the probability of a block-chain split
> 
> The convergence of the network relies on the diminishing probability of
> two honest miners creating simultaneous competing block chains. To
> increase chain competition, competing blocks must be generated
> almost simultaneously (in the same time window, approximately bounded by
> the network's average block propagation delay). The probability of a block
> competition decreases exponentially with the number of blocks. In fact,
> the probability of a sustained competition on ten 1-minute blocks is one
> million times lower than the probability of a competition on one
> 10-minute block. So even if the competition probability of six 1-minute
> blocks is higher than that of six ten-minute blocks, this does not imply that
> reducing the block rate increases this chance, but on the contrary,
> reduces it.
> 
> 3. It will reduce the security of the network
> 
> The security of the network is based on the following facts:
> A- The miners are incentivized to extend the best chain
> B- The probability of a reversal based on a long block competition
> decreases as more confirmation blocks are appended.
> C- Renting or buying hardware to perform a 51% attack is costly.
> 
> A still holds. B holds for the same number of confirmation blocks, so 6
> confirmation blocks in a 10-minute block-chain is approximately
> equivalent to 6 confirmation blocks in a 1-minute block-chain.
> Only C changes, as renting the hashing power for 6 minutes is ten times
> less expensive than renting it for 1 hour. However, there is no shop where
> one can find 51% of the hashing power to rent right now, nor probably
> will there ever be if Bitcoin succeeds. Last, you can still have a 1 hour
> confirmation (60 1-minute blocks) if you wish for high-valued payments,
> so the security decreases only if participants wish to decrease it.
> 
> 4. Reducing the block propagation time in the average case is good, but
> what happens in the worst case?
> 
> Most methods proposed to reduce the block propagation delay do it only
> in the average case. Any kind of block compression relies on both
> parties sharing some previous information. In the worst case it's true
> that a miner can create and try to broadcast a block that takes too much
> time to verify or bandwidth to transmit. This is currently true on the
> Bitcoin network. Nevertheless there is no such incentive for miners,
> since they would be shooting themselves in the foot. Peter Todd has argued
> that the best strategy for miners is actually to reach 51% of the
> network, but not more. In other words, to exclude the slowest 49
> percent. But this strategy of creating bloated blocks is too risky in
> practice, and surely doomed to fail, as network conditions dynamically
> change. Also it would be perceived as an attack on the network, and the
> miner (if it is a public mining pool) would probably be blacklisted.
> 
> 5. Thousands of SPV wallets running in mobile devices would need to be
> upgraded (thanks Mike).
> 
> That depends on the current upgrade rate for SPV wallets like Bitcoin
> Wallet and BreadWallet. Suppose that the upgrade rate is 80%/year: we
> develop the source code for the change now and apply the change in Q2
> 2016; then most of the nodes will already be upgraded by the time the
> hardfork takes place. Also a public notice telling people to upgrade, in
> web pages, bitcointalk, SPV wallet warnings, and coindesk, one year in
> advance will give plenty of time for SPV wallet users to upgrade.
> 
> 6. If there are 10x more blocks, then there are 10x more block headers,
> and that increases the amount of bandwidth SPV wallets need to catch up
> with the chain
>  
> A standard smartphone with average cellular downstream speed downloads
> 2.6 headers per second (1600 kbits/sec) [3], so if synchronization were
> to be done only at night when the phone is connected to the power line,
> then it would take 9 minutes to synchronize with 1440 headers/day. If a
> person should accept a payment, and the smartphone is 1 day
> out-of-sync, then it takes less time to download all the missing
> headers than to wait for a 10-minute one-block confirmation. Obviously
> all smartphones with 3G have a much higher downstream bandwidth,
> averaging 1 Mbps. So the whole synchronization will be done in less than a
> 1-minute block confirmation.
>  
> According to CISCO, mobile bandwidth connection speed increases 20% every
> year. In four years, it will have doubled, so mobile phones with a lower
> than average data connection will soon be able to catch up.
> Also there are low-hanging-fruit optimizations to the protocol that have
> not been implemented: each header is 80 bytes in length. When a set of
> chained headers is transferred, the headers could be compressed,
> stripping the 32 bytes of each header that are derived from the previous
> header's hash digest. So a 40% compression is already possible by slightly
> modifying the wire protocol.
>  
> 7. There has been insufficient testing and/or insufficient research into
> the technical/economic implications of reducing the block rate
>  
> This is partially true. In the GHOST paper, this has been analyzed, and
> the problem was shown to be solvable for block intervals of just a few
> seconds. There are several proof-of-work cryptocurrencies in existence
> that have lower than 1 minute block intervals and they work just fine.
> First there was Bitcoin with a 10 minute interval, then LiteCoin
> using a 2.5 minute interval, then DogeCoin with 1 minute, and then
> QuarkCoin with just 30 seconds. Every new cryptocurrency lowers it a
> little bit. Some time ago I decided to research the block rate to
> understand how the block interval impacts the stability and capability
> of the cryptocurrency network, and I came up with the idea of the DECOR+
> protocol [4] (which requires changes in the consensus code). In my
> research I also showed how the stale rate can be easily reduced only
> with changes in the networking code, and not in the consensus code.
> These networking optimizations (O(1) propagation using headers-first or
> IBLTs) can be added later.
>  
> Modifying Bitcoin to accommodate the change to lower the block rate
> requires at least:
> 
> - Changing the 21 BTC reward per block to 2.1 BTC
> - Changing the nPowTargetTimespan constant
> - Writing code to hard-fork automatically when the majority of miners
> have upgraded.
> - Allowing transaction version 3, and interpreting the nLockTimes of
> version 2 transactions as being multiplied by 10.
> 
> All changes comprise no more than 15 lines of code. This is much less
> than the number of lines modified by Gavin's 20Mb patch.
>  
> In conclusion, I haven't yet heard a good argument against lowering
> the block rate.
> 
> Best regards,
>  Sergio.
>  
> [0] https://medium.com/@octskyward/the-capacity-cliff-586d1bf7715e
> [1] https://bitslog.wordpress.com/2014/02/17/5-sec-block-interval/
> [2] http://gavinandresen.ninja/time-to-roll-out-bigger-blocks
> [3]
> http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white_paper_c11-520862.html
> [4] https://bitslog.wordpress.com/2014/05/02/decor/
> 


[-- Attachment #2: Type: text/html, Size: 18512 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size
@ 2015-05-11  7:30 Thy Shizzle
  2015-05-11  8:16 ` Dave Hudson
  0 siblings, 1 reply; 10+ messages in thread
From: Thy Shizzle @ 2015-05-11  7:30 UTC (permalink / raw)
  To: Sergio Lerner, bitcoin-development

[-- Attachment #1: Type: text/plain, Size: 10665 bytes --]

Yes This!

So many people seem hung up on growing the block size! If gaining a higher tps throughput is the main aim, I think that this proposition to speed up block creation has merit!

Yes it will lead to an increase in the block chain still, due to 1mb ~1 minute instead of ~10 minutes, but the change to the protocol is minor: you are only adding in a different difficulty rate starting from height blah, no new features or anything are being added, so there seems to me much less of a security risk! Also the impact of a hard fork should be minimal because there is nothing but absolute incentive for miners to mine at the new easier difficulty!

I feel this deserves a great deal of consideration as opposed to blowing out the block through miners voting etc!!!!
________________________________
From: Sergio Lerner <sergiolerner@certimix•com>
Sent: 11/05/2015 5:05 PM
To: bitcoin-development@lists•sourceforge.net
Subject: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

In this e-mail I'll do my best to argue that if you accept that
increasing the transactions/second is a good direction to go, then
increasing the maximum block size is not the best way to do it. I argue
that the right direction to go is to decrease the block rate to 1
minute, while keeping the block size limit to 1 Megabyte (or increasing
it from a lower value such as 100 Kbyte and then have a step function).
I'm backing up my claims with many hours of research simulating the
Bitcoin network under different conditions [1].  I'll try to convince
you by responding to each of the arguments I've heard against it.

Arguments against reducing the block interval

1. It will encourage centralization, because participants of mining
pools will lose more money because of excessive initial block template
latency, which leads to higher stale shares

When a new block is solved, that information needs to propagate
throughout the Bitcoin network up to the mining pool operator nodes,
then a new block header candidate is created, and this header must be
propagated to all the mining pool users, either by a push or a pull
model. Generally the mining server pushes new work units to the
individual miners. If done the other way around, the server would need to
handle a high load of continuous work requests that would be difficult
to distinguish from a DDoS attack. So if the server pushes new block
header candidates to clients, then the problem boils down to increasing
the bandwidth of the servers to achieve a tenfold increase in work
distribution, or distributing the servers geographically to achieve a
lower latency. Propagating blocks does not require additional CPU
resources, so mining pool administrators would need to moderately
increase their investment in server infrastructure to achieve
lower latency and higher bandwidth, but I guess the investment would be low.

2. It will increase the probability of a block-chain split

The convergence of the network relies on the diminishing probability of
two honest miners creating simultaneous competing block chains. To
increase chain competition, competing blocks must be generated
almost simultaneously (in the same time window, approximately bounded by
the network's average block propagation delay). The probability of a block
competition decreases exponentially with the number of blocks. In fact,
the probability of a sustained competition on ten 1-minute blocks is one
million times lower than the probability of a competition on one
10-minute block. So even if the competition probability of six 1-minute
blocks is higher than that of six ten-minute blocks, this does not imply that
reducing the block rate increases this chance, but on the contrary,
reduces it.

3. It will reduce the security of the network

The security of the network is based on the following facts:
A- The miners are incentivized to extend the best chain
B- The probability of a reversal based on a long block competition
decreases as more confirmation blocks are appended.
C- Renting or buying hardware to perform a 51% attack is costly.

A still holds. B holds for the same number of confirmation blocks, so 6
confirmation blocks in a 10-minute block-chain is approximately
equivalent to 6 confirmation blocks in a 1-minute block-chain.
Only C changes, as renting the hashing power for 6 minutes is ten times
less expensive than renting it for 1 hour. However, there is no shop where
one can find 51% of the hashing power to rent right now, nor probably
will there ever be if Bitcoin succeeds. Last, you can still have a 1 hour
confirmation (60 1-minute blocks) if you wish for high-valued payments,
so the security decreases only if participants wish to decrease it.

4. Reducing the block propagation time in the average case is good, but
what happens in the worst case?

Most methods proposed to reduce the block propagation delay do it only
in the average case. Any kind of block compression relies on both
parties sharing some previous information. In the worst case it's true
that a miner can create and try to broadcast a block that takes too much
time to verify or bandwidth to transmit. This is currently true on the
Bitcoin network. Nevertheless there is no such incentive for miners,
since they would be shooting themselves in the foot. Peter Todd has argued
that the best strategy for miners is actually to reach 51% of the
network, but not more. In other words, to exclude the slowest 49
percent. But this strategy of creating bloated blocks is too risky in
practice, and surely doomed to fail, as network conditions dynamically
change. Also it would be perceived as an attack on the network, and the
miner (if it is a public mining pool) would probably be blacklisted.

5. Thousands of SPV wallets running in mobile devices would need to be
upgraded (thanks Mike).

That depends on the current upgrade rate for SPV wallets like Bitcoin
Wallet and BreadWallet. Suppose that the upgrade rate is 80%/year: we
develop the source code for the change now and apply the change in Q2
2016; then most of the nodes will already be upgraded by the time the
hardfork takes place. Also a public notice telling people to upgrade, in
web pages, bitcointalk, SPV wallet warnings, and coindesk, one year in
advance will give plenty of time for SPV wallet users to upgrade.

6. If there are 10x more blocks, then there are 10x more block headers,
and that increases the amount of bandwidth SPV wallets need to catch up
with the chain

A standard smartphone with average cellular downstream speed downloads
2.6 headers per second (1600 kbits/sec) [3], so if synchronization were
to be done only at night when the phone is connected to the power line,
then it would take 9 minutes to synchronize with 1440 headers/day. If a
person should accept a payment, and the smartphone is 1 day
out-of-sync, then it takes less time to download all the missing
headers than to wait for a 10-minute one-block confirmation. Obviously
all smartphones with 3G have a much higher downstream bandwidth,
averaging 1 Mbps. So the whole synchronization will be done in less than a
1-minute block confirmation.

According to CISCO, mobile bandwidth connection speed increases 20% every
year. In four years, it will have doubled, so mobile phones with a lower
than average data connection will soon be able to catch up.
Also there are low-hanging-fruit optimizations to the protocol that have
not been implemented: each header is 80 bytes in length. When a set of
chained headers is transferred, the headers could be compressed,
stripping the 32 bytes of each header that are derived from the previous
header's hash digest. So a 40% compression is already possible by slightly
modifying the wire protocol.

7. There has been insufficient testing and/or insufficient research into
the technical/economic implications of reducing the block rate

This is partially true. In the GHOST paper, this has been analyzed, and
the problem was shown to be solvable for block intervals of just a few
seconds. There are several proof-of-work cryptocurrencies in existence
that have lower than 1 minute block intervals and they work just fine.
First there was Bitcoin with a 10 minute interval, then LiteCoin
using a 2.5 minute interval, then DogeCoin with 1 minute, and then
QuarkCoin with just 30 seconds. Every new cryptocurrency lowers it a
little bit. Some time ago I decided to research the block rate to
understand how the block interval impacts the stability and capability
of the cryptocurrency network, and I came up with the idea of the DECOR+
protocol [4] (which requires changes in the consensus code). In my
research I also showed how the stale rate can be easily reduced only
with changes in the networking code, and not in the consensus code.
These networking optimizations (O(1) propagation using headers-first or
IBLTs) can be added later.

Modifying Bitcoin to accommodate the change to lower the block rate
requires at least:

- Changing the 21 BTC reward per block to 2.1 BTC
- Changing the nPowTargetTimespan constant
- Writing code to hard-fork automatically when the majority of miners
have upgraded.
- Allowing transaction version 3, and interpreting the nLockTimes of
version 2 transactions as being multiplied by 10.

All changes comprise no more than 15 lines of code. This is much less
than the number of lines modified by Gavin's 20Mb patch.

In conclusion, I haven't yet heard a good argument against lowering
the block rate.

Best regards,
 Sergio.

[0] https://medium.com/@octskyward/the-capacity-cliff-586d1bf7715e
[1] https://bitslog.wordpress.com/2014/02/17/5-sec-block-interval/
[2] http://gavinandresen.ninja/time-to-roll-out-bigger-blocks
[3]
http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white_paper_c11-520862.html
[4] https://bitslog.wordpress.com/2014/05/02/decor/


[-- Attachment #2: Type: text/html, Size: 12998 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size
@ 2015-05-11  7:03 Sergio Lerner
  2015-05-11 10:34 ` Peter Todd
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Sergio Lerner @ 2015-05-11  7:03 UTC (permalink / raw)
  To: bitcoin-development

In this e-mail I'll do my best to argue that if you accept that
increasing the transactions/second is a good direction to go, then
increasing the maximum block size is not the best way to do it. I argue
that the right direction to go is to decrease the block rate to 1
minute, while keeping the block size limit to 1 Megabyte (or increasing
it from a lower value such as 100 Kbyte and then have a step function).
I'm backing up my claims with many hours of research simulating the
Bitcoin network under different conditions [1].  I'll try to convince
you by responding to each of the arguments I've heard against it.

Arguments against reducing the block interval

1. It will encourage centralization, because participants of mining
pools will lose more money because of excessive initial block template
latency, which leads to higher stale shares

When a new block is solved, that information needs to propagate
throughout the Bitcoin network up to the mining pool operator nodes,
then a new block header candidate is created, and this header must be
propagated to all the mining pool users, either by a push or a pull
model. Generally the mining server pushes new work units to the
individual miners. If done the other way around, the server would need to
handle a high load of continuous work requests that would be difficult
to distinguish from a DDoS attack. So if the server pushes new block
header candidates to clients, then the problem boils down to increasing
the bandwidth of the servers to achieve a tenfold increase in work
distribution, or distributing the servers geographically to achieve a
lower latency. Propagating blocks does not require additional CPU
resources, so mining pool administrators would need to moderately
increase their investment in server infrastructure to achieve
lower latency and higher bandwidth, but I guess the investment would be low.

2. It will increase the probability of a block-chain split

The convergence of the network relies on the diminishing probability of
two honest miners creating simultaneous competing block chains. To
increase chain competition, competing blocks must be generated
almost simultaneously (in the same time window, approximately bounded by
the network's average block propagation delay). The probability of a block
competition decreases exponentially with the number of blocks. In fact,
the probability of a sustained competition on ten 1-minute blocks is one
million times lower than the probability of a competition on one
10-minute block. So even if the competition probability of six 1-minute
blocks is higher than that of six ten-minute blocks, this does not imply that
reducing the block rate increases this chance, but on the contrary,
reduces it.

3. It will reduce the security of the network

The security of the network is based on the following facts:
A- The miners are incentivized to extend the best chain
B- The probability of a reversal based on a long block competition
decreases as more confirmation blocks are appended.
C- Renting or buying hardware to perform a 51% attack is costly.

A still holds. B holds for the same number of confirmation blocks, so 6
confirmation blocks in a 10-minute block-chain is approximately
equivalent to 6 confirmation blocks in a 1-minute block-chain.
Only C changes, as renting the hashing power for 6 minutes is ten times
less expensive than renting it for 1 hour. However, there is no shop where
one can find 51% of the hashing power to rent right now, nor probably
will there ever be if Bitcoin succeeds. Last, you can still have a 1 hour
confirmation (60 1-minute blocks) if you wish for high-valued payments,
so the security decreases only if participants wish to decrease it.

4. Reducing the block propagation time in the average case is good, but
what happens in the worst case?

Most methods proposed to reduce the block propagation delay do it only
in the average case. Any kind of block compression relies on both
parties sharing some previous information. In the worst case it's true
that a miner can create and try to broadcast a block that takes too much
time to verify or bandwidth to transmit. This is currently true on the
Bitcoin network. Nevertheless there is no such incentive for miners,
since they would be shooting themselves in the foot. Peter Todd has argued
that the best strategy for miners is actually to reach 51% of the
network, but not more. In other words, to exclude the slowest 49
percent. But this strategy of creating bloated blocks is too risky in
practice, and surely doomed to fail, as network conditions dynamically
change. Also it would be perceived as an attack on the network, and the
miner (if it is a public mining pool) would probably be blacklisted.

5. Thousands of SPV wallets running in mobile devices would need to be
upgraded (thanks Mike).

That depends on the current upgrade rate for SPV wallets like Bitcoin
Wallet and BreadWallet. Suppose that the upgrade rate is 80%/year: we
develop the source code for the change now and apply the change in Q2
2016; then most of the nodes will already be upgraded by the time the
hardfork takes place. Also a public notice telling people to upgrade, in
web pages, bitcointalk, SPV wallet warnings, and coindesk, one year in
advance will give plenty of time for SPV wallet users to upgrade.

6. If there are 10x more blocks, then there are 10x more block headers,
and that increases the amount of bandwidth SPV wallets need to catch up
with the chain
 
A standard smartphone with average cellular downstream speed downloads
2.6 headers per second (1600 kbits/sec) [3], so if synchronization were
to be done only at night when the phone is connected to the power line,
then it would take 9 minutes to synchronize with 1440 headers/day. If a
person should accept a payment, and the smartphone is 1 day
out-of-sync, then it takes less time to download all the missing
headers than to wait for a 10-minute one-block confirmation. Obviously
all smartphones with 3G have a much higher downstream bandwidth,
averaging 1 Mbps. So the whole synchronization will be done in less than a
1-minute block confirmation.
 
According to CISCO, mobile bandwidth connection speed increases 20% every
year. In four years, it will have doubled, so mobile phones with a lower
than average data connection will soon be able to catch up.
Also there are low-hanging-fruit optimizations to the protocol that have
not been implemented: each header is 80 bytes in length. When a set of
chained headers is transferred, the headers could be compressed,
stripping the 32 bytes of each header that are derived from the previous
header's hash digest. So a 40% compression is already possible by slightly
modifying the wire protocol.
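
For reference, the raw header bandwidth involved is small either way; a
back-of-the-envelope sketch using only the 80-byte header size and the
32-byte saving described above (it ignores message framing overhead):

    # Daily SPV header download at 1-minute blocks, with and without
    # stripping the 32-byte previous-block hash.
    HEADERS_PER_DAY = 1440
    FULL_HEADER_BYTES = 80
    COMPRESSED_HEADER_BYTES = 80 - 32   # previous hash is implied by the chain

    full_kb = HEADERS_PER_DAY * FULL_HEADER_BYTES / 1024
    compressed_kb = HEADERS_PER_DAY * COMPRESSED_HEADER_BYTES / 1024
    print(round(full_kb), round(compressed_kb))   # ~113 KB vs ~68 KB per day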
 
7. There has been insufficient testing and/or insufficient research into
the technical/economic implications of reducing the block rate

This is partially true. In the GHOST paper, this has been analyzed, and
the problem was shown to be solvable for block intervals of just a few
seconds. There are several proof-of-work cryptocurrencies in existence
that have lower than 1 minute block intervals and they work just fine.
First there was Bitcoin with a 10 minute interval, then LiteCoin
using a 2.5 minute interval, then DogeCoin with 1 minute, and then
QuarkCoin with just 30 seconds. Every new cryptocurrency lowers it a
little bit. Some time ago I decided to research the block rate to
understand how the block interval impacts the stability and capability
of the cryptocurrency network, and I came up with the idea of the DECOR+
protocol [4] (which requires changes in the consensus code). In my
research I also showed how the stale rate can be easily reduced only
with changes in the networking code, and not in the consensus code.
These networking optimizations (O(1) propagation using headers-first or
IBLTs) can be added later.
 
Modifying Bitcoin to accommodate the change to lower the block rate
requires at least:

- Changing the 21 BTC reward per block to 2.1 BTC
- Changing the nPowTargetTimespan constant
- Writing code to hard-fork automatically when the majority of miners
have upgraded.
- Allowing transaction version 3, and interpreting the nLockTimes of
version 2 transactions as being multiplied by 10 (see the sketch below).

All changes comprise no more than 15 lines of code. This is much less
than the number of lines modified by Gavin's 20Mb patch.
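
A minimal sketch of one possible reading of the nLockTime bullet above
(illustrative pseudocode with hypothetical helper names, not a patch against
the actual consensus code; the exact remapping rule is left open here):

    # Interpret height-based nLockTimes of pre-fork (version <= 2)
    # transactions once blocks arrive ten times as often.
    LOCKTIME_THRESHOLD = 500_000_000   # below this, nLockTime encodes a block height

    def rescaled_locktime(tx_version, n_lock_time, fork_height, factor=10):
        is_height = n_lock_time < LOCKTIME_THRESHOLD
        if tx_version <= 2 and is_height and n_lock_time > fork_height:
            # Pre-fork height locks mature at roughly the same wall-clock
            # time after the fork.
            return fork_height + (n_lock_time - fork_height) * factor
        return n_lock_time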
 
In conclusion, I haven't yet heard a good argument against lowering
the block rate.

Best regards,
 Sergio.
 
[0] https://medium.com/@octskyward/the-capacity-cliff-586d1bf7715e
[1] https://bitslog.wordpress.com/2014/02/17/5-sec-block-interval/
[2] http://gavinandresen.ninja/time-to-roll-out-bigger-blocks
[3]
http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white_paper_c11-520862.html
[4] https://bitslog.wordpress.com/2014/05/02/decor/



^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2015-05-12 18:55 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-11  8:50 [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size insecurity
  -- strict thread matches above, loose matches on Subject: below --
2015-05-11  7:30 Thy Shizzle
2015-05-11  8:16 ` Dave Hudson
2015-05-11  7:03 Sergio Lerner
2015-05-11 10:34 ` Peter Todd
2015-05-11 11:10   ` insecurity
2015-05-11 11:49     ` Dave Hudson
2015-05-11 12:34       ` Christian Decker
2015-05-11 16:47 ` Luke Dashjr
     [not found] ` <5551021E.8010706@LeoWandersleb.de>
2015-05-12 18:55   ` Sergio Lerner

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox