public inbox for bitcoindev@googlegroups.com
 help / color / mirror / Atom feed
* Re: [bitcoin-dev] Pseudocode for robust tail emission
@ 2023-02-01 22:04 jk_14
  0 siblings, 0 replies; 16+ messages in thread
From: jk_14 @ 2023-02-01 22:04 UTC (permalink / raw)
  To: Bitcoin Protocol Discussion


The 'only' in this sentence - "only two orders of magnitude higher" -
is just like the 'only' in this one:

"We're raising $100,000 for the Tesla S and we're not short of $99,900, we're only short of $99,000..."




On 2023-01-22 16:13:42, John Tromp via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
> > Right now the total reward per transaction is $63, three orders of magnitude
> > higher than typical fees.

No need to exaggerate; this is only two orders of magnitude higher
than current fees, which are typically over $0.50
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists•linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev




^ permalink raw reply	[flat|nested] 16+ messages in thread
* Re: [bitcoin-dev] Pseudocode for robust tail emission
@ 2023-01-21 10:20 jk_14
  0 siblings, 0 replies; 16+ messages in thread
From: jk_14 @ 2023-01-21 10:20 UTC (permalink / raw)
  To: bitcoin-dev


This is the phrase that should be recalled very often:

"the total reward per transaction is Three Orders of Magnitude
higher than typical fees. Sufficient fee increases to bring back hashing power
in a scenario like that would cause Enormous Disruption to many things,
including Lightning channels"

> Your proposal does not address that problem as it can only measure difficulty prior to the halving point

Yes, my proposal for fixing the inevitable (though spread out over a long time) failure is, surprisingly, quite conservative.

Simplifying it to the edge case:
if over a four-year period there is no +100% increase in the average price to properly compensate for the last halving,
but instead there is a -50% hashrate drop - the other possible and "proper" (!) compensation -

then absolutely do not worsen the situation by executing the next halving;
accept such a drop, because there is nothing you can do about it,
and postpone halvings until the hashrate recovers. For as long as it takes.
Maybe even 20 years if necessary (fortunately we are at a mature phase of ASIC technology right now).
And iterate.
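
The arithmetic behind that edge case, as an illustrative sketch (price figures invented, fees ignored):

def revenue_usd_per_block(subsidy_btc, price_usd):
    # fiat security budget per block, ignoring fees
    return subsidy_btc * price_usd

before        = revenue_usd_per_block(6.25, 20000.0)    # pre-halving, illustrative price
price_doubles = revenue_usd_per_block(3.125, 40000.0)   # halving fully compensated by price
price_flat    = revenue_usd_per_block(3.125, 20000.0)   # no price compensation at all

print(before == price_doubles)   # True: the security budget is unchanged
print(price_flat / before)       # 0.5: the equilibrium hashrate roughly halves instead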

This way we land at the lowest possible annual inflation, set by a free market.

As I said, this is quite a conservative approach. It would suit Bitcoin.
Too bad it wasn't foreseen at the beginning...




On 2023-01-18 21:58:15, Peter Todd <pete@petertodd•org> wrote:
> On Sun, Jan 01, 2023 at 11:42:50PM +1100, Alfie John wrote:
> On 31 Dec 2022, at 10:28 am, Peter Todd via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
> > 
> >> This way:
> >> 
> >> 1. system cannot be played
> >> 2. only in case of destructive halving: system waits for the recovery of network security
> > 
> > The immediate danger we have with halvings is that in a competitive market,
> > profit margins tend towards marginal costs - the cost to produce an additional
> > unit of production - rather than total costs - the cost necessary to recover
> > prior and future expenses. Since the halving is a sudden shock to the system,
> > under the right conditions we could have a significant amount of hashing power
> > just barely able to afford to hash prior to the halving, resulting in all that
> > hashing power immediately having to shut down and fees increasing dramatically,
> > and likely, chaotically.  Your proposal does not address that problem as it can
> > only measure difficulty prior to the halving point.
> 
> 
> > ... Since the halving is a sudden shock to the system
> 
> Is it though? Since everyone knows of the possible outcomes, wouldn't a possible halving be priced in? 

Re-read what I said. That explains why, despite the halving being a foreseeable
event, there's no mechanism to "price it in" when it comes to hashing power.

> > resulting in all that hashing power immediately having to shut down and fees increasing dramatically
> 
> Which should cause that hashing power to come back because of these fee increases.

Right now the total reward per transaction is $63, three orders of magnitude
higher than typical fees. Sufficient fee increases to bring back hashing power
in a scenario like that would cause enormous disruption to many things,
including Lightning channels.
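
(Rough arithmetic behind that $63 figure - the price and per-block transaction count here are loose assumptions for early 2023, not exact data:)

subsidy_btc = 6.25       # block subsidy at the time
btc_price_usd = 21000    # assumed approximate BTC price
txs_per_block = 2100     # assumed typical transaction count per block

reward_per_tx = subsidy_btc * btc_price_usd / txs_per_block
print(f"{reward_per_tx:.0f}")   # ~62-63 USD of total reward per transaction, vs typical fees around 0.50 USD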

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




^ permalink raw reply	[flat|nested] 16+ messages in thread
* Re: [bitcoin-dev] Pseudocode for robust tail emission
@ 2023-01-07 18:52 jk_14
  2023-01-07 23:22 ` Eric
  0 siblings, 1 reply; 16+ messages in thread
From: jk_14 @ 2023-01-07 18:52 UTC (permalink / raw)
  To: Billy Tetrud; +Cc: Bitcoin Protocol Discussion


> Anyways if it turns out that fees alone don't look like they're supporting enough security, we have a good amount of time to come to that conclusion and do something about it. 

The worst-case scenario is that the first global hashrate regression may take place in 2028.
Instead of the average price increasing at least 2x every halving - the global hashrate may gradually decrease from that point. Again, that would be the worst-case scenario.

In my proposal you don't need to think about any calculations - just the simple logic we have right now. No hardcoded values, and the free market at its finest - self-regulating the level of taxation of the parties involved, who have opposing interests. And the mechanism would try to fix a global hashrate regression if one appeared.
In other words: let's be optimistic regarding fees, but with an emergency mechanism built in just in case.
The only drawback here is that the system is already running.

In my personal opinion, avoiding a long-term global hashrate regression is more important for the store-of-value feature than the 21M Schelling point (or trap...)




On 2023-01-04 17:03:33, Billy Tetrud <billy.tetrud@gmail•com> wrote:
> In Bitcoin "the show must go on" and someone must pay for it. Active [and/or] passive users 


I certainly agree. 


> or more precisely: tiny inflation


👍


> Right now security comes almost fully from ~1.8% inflation.


Best I could find, fees make up about 13% of miner revenue. So yes, the vast majority of security comes from coinbase rewards. I assume you're implying that ~13% of today's security is not enough? I would love to see any quantitative thoughts you have on how one might determine that. 


Have there been any thoughts put out in the community as to what size of threat is unlikely enough to arise that we don't need to worry about it? Maybe 1% of the yearly government budgets of the world would be an upper bound on how much anyone would expect could realistically be brought to bear? Today that would be maybe around $350 billion. 


Or perhaps a better way to estimate would be calculating the size of the motivation of an attacker. For example, this paper seems to conclude that the US government was extracting a maximum of ~$20 billion/year in 1982 dollars (so maybe $60 billion/year in 2022 dollars if you go by CPI). If we scale this up to the entire world of governments, this seems like it would place an upper bound of $180 billion/year of seigniorage extraction that would be at risk if bitcoin might put the currencies they gain seigniorage from out of business. Over 10 years (about as far as we can expect any government to think), that's almost $2 trillion. 


Whereas it would currently cost probably less than $7 billion to purchase a 50% share of bitcoin miners. To eventually reach a level of $350 billion, bitcoin's price would need to reach about $800,000 / bitcoin. That seems within the realm of possibility. To reach a level of $2 trillion, you'd need a price of $4.3 million/bitcoin. That's still probably within the realm of possibility, but certainly not as likely.  If you then assume we won't have significant coinbase rewards by that point, and only 13% of the equivalent revenue (from fees) would be earned, then a price of ~$6 million would be needed to support $350 billion of security and ~$34 million to support $2 trillion. I think that second one is getting up towards the realm of impossibility, so if we think that much security is necessary, we might have to rethink things. It's also quite possible, as the network of people who accept and use bitcoin as payment grows, that the fee market will grow superlinearly in comparison to market cap, which would make these kinds of high levels of security more realistic. 
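
Roughly, that arithmetic as a sketch - the baseline price, the ~$7 billion figure, and linear scaling of attack cost with price are all loose assumptions:

# loose assumptions matching the rough figures above
price_today = 15500.0        # assumed baseline BTC price, USD
attack_cost_today = 7e9      # assumed cost of a 50% share of miners today, USD
fee_share = 0.13             # fees as a fraction of current miner revenue

def price_needed(target_security_usd, fees_only=False):
    # assume the cost of attack scales linearly with price; with the subsidy
    # gone, only the fee share of today's revenue level remains
    price = price_today * target_security_usd / attack_cost_today
    return price / fee_share if fees_only else price

print(f"{price_needed(350e9):,.0f}")                  # ~800,000 USD/BTC
print(f"{price_needed(2e12):,.0f}")                   # ~4.4 million USD/BTC
print(f"{price_needed(350e9, fees_only=True):,.0f}")  # ~6 million USD/BTC
print(f"{price_needed(2e12, fees_only=True):,.0f}")   # ~34 million USD/BTC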


Anyways if it turns out that fees alone don't look like they're supporting enough security, we have a good amount of time to come to that conclusion and do something about it. 


> Deflation in Bitcoin is not 1:1 matter like in gold, for example...  Deflation in Bitcoin is more complex issue


It's helpful to keep our language precise here. Price inflation and deflation act identically in bitcoin and gold and anything else. What you seem to be talking about at this point is monetary inflation (specifically, a reduction in it) which of course operates differently on the machinery of bitcoin than it does in the machinery of gold or other things. Whereas my comment about you mentioning Gresham's law was specifically talking about price inflation, not the effects of the coin emission machinery in bitcoin. 




_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists•linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev






^ permalink raw reply	[flat|nested] 16+ messages in thread
* Re: [bitcoin-dev] Pseudocode for robust tail emission
@ 2023-01-02 23:02 jk_14
  2023-01-04 16:03 ` Billy Tetrud
  0 siblings, 1 reply; 16+ messages in thread
From: jk_14 @ 2023-01-02 23:02 UTC (permalink / raw)
  To: Billy Tetrud; +Cc: Bitcoin Protocol Discussion



Right now security comes almost fully from ~1.8% inflation.
In November the mempool was inflated to ~150MB, and people were mostly just waiting for cheap transactions to come back.
Instead of being happy that, for a while, the system was closer to its default working mode.

Deflation in Bitcoin is not a 1:1 matter like it is in gold, for example.
If all the gold readily available to mine were exhausted - gold mines, as unprofitable enterprises, would immediately be closed.
And that doesn't affect the security of the gold already in circulation.
In Bitcoin "the show must go on" and someone must pay for it:
active and passive users together (balanced by market play), or only active users (in the current scenario, long-term).

Deflation (or more precisely: tiny inflation) in Bitcoin is a more complex issue, with more repercussions than in gold.
In case of a drop in network security - the tax will be paid anyway, in the Bitcoin price.
So there is a self-regulating mechanism here. A harsh one, but still.



On 2023-01-02 05:53:57, Billy Tetrud <billy.tetrud@gmail•com> wrote:
> is surely better than not delaying it.
 
I might agree, but I don't think it really solves the problem well enough to be worth it. Any solution that would solve the problem better would make delaying halvings unnecessary. 
 
> there is non-zero risk that people will hoard it more and more, according to old Gresham's law


Gresham's law doesn't apply here. Gresham's law is about the interaction between two currencies with a fixed, usually government-enforced exchange rate. You seem to be saying that Bitcoin will be hoarded because Bitcoin inflation reduces every halving. But even with 0 inflation, it certainly won't cause all Bitcoin to be hoarded. Also, "hoarding" is also known as "saving", and there's nothing wrong with saving. The spectre of deflation comes from a misunderstanding of deflation and why it happens during bad economic times. It is an effect, not a cause.


On Sun, Jan 1, 2023, 15:23 <jk_14@op•pl> wrote:

Yes, the idea is:
if mining activity is growing - let's execute consecutive halvings
but if a miner exodus has happened - let's delay the next halving until mining activity has recovered to previous levels

If it gets to the point where a sudden drop in mining difficulty happens - delaying the next halving may not be sufficient to correct it, but it is surely better than not delaying it.

While Bitcoin becomes better and better money with every halving in comparison to other types of money - there is a non-zero risk that people will hoard it more and more, in line with old Gresham's law ("HODL"), and in this way decrease liquidity / transaction volume. That positive feedback loop is my real concern here.

Regarding the relationship between difficulty and security - I fully agree.
But ASIC technology has already matured, and any technology breakthrough is a short event within a 4-year period.
So growth in difficulty could come from a technology breakthrough, but any sudden drop in difficulty would always be an issue - there is no such thing as ASIC technology regression.

Obviously, an uncomplicated solution would be better than a complicated one.




On 2022-12-30 19:21:10, Billy Tetrud <billy.tetrud@gmail•com> wrote:
If the idea is to ensure that a catastrophic miner exodus doesn't happen, the "difference" you're calculating should only care about downward differences. Upward differences indicate more mining activity and so shouldn't cause a halving skip.


But I don't think any scheme like this that only acts on the basis of difficulty will be sufficient. If it gets to the point where a sudden drop in mining difficulty happens, it is very likely that simply delaying the next halving or even ending halvings altogether will not be sufficient to correct for whatever is causing hashrate to tank. There is also the danger of simple difficulty stagnation, which this mechanism wouldn't detect. 


The relationship between difficulty and security becomes less and less predictable the longer you want to look ahead. There's no long-term relation between difficulty and any reasonable security target. A security target might be something like "no colluding group with less than $1 trillion at their disposal could successfully 51% attack the network (with a probability of blah blah)". There is no way today to program in any code that detects, based on difficulty alone, when that criterion is violated. You would have to program in assumptions about the cost of hashrate projected into the future.


I can't think of any robust automatic way to do this. I think to a certain degree, it will have to be a change that happens in a fork of some kind (soft or hard) periodically (every 10 years? 30 years?). The basic relation needed is really the cost in Bitcoin of the security target (i.e. the minimum number of Bitcoin it should take to 51% attack the system) and the cost in Bitcoin of acquiring a unit of hashrate. This could be simply input into the code, or could use some complicated oracle system. But with that relation, the system could be programmed to calculate the difficulty necessary to keep the system secure.


Once that is in place, the system could automatically adjust the subsidy up or down to attract more or less miners, or it could adjust the block size up or down to change the fee market such that more or less total fees are collected each block to attract more or less miners. 


On Tue, Dec 27, 2022, 09:41 Jaroslaw via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:

It seems like a more elegant solution could be to use a chainwork parameter instead,
i.e. a comparison just before the halving: whether the last 210,000-block interval has a higher chainwork difference between the beginning and the end of the interval
than any other such inter-halving interval before.

A little digression yet:
A system in which all users participate in ensuring its security looks better than one in which only some of them (i.e. the active ones) participate (and passive stakeholders are de facto free riders).
In my opinion the concept above is only the complement of a currently missing mechanism: achieving equilibrium regarding the costs of security between two parties with opposing interests.
It's easy to understand and - most important - it has no hardcoded value of tail emission, which is clear proof that it is based on a free market.
And last but not least, if someone is 100% sure that income from transactions will take over security support from the block subsidy - accepting such a proposal is like putting your money where your mouth is: this safety measure will never be triggered, then (no risk of a fork).


Best Regards
Jaroslaw



On 2022-12-23 20:29:20, Jaroslaw via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
>
Necessary or not - it doesn't hurt to plan the robust model, just in case. The proposal is:

Let the code, every 210,000 blocks, calculate the average difficulty of the last 100 retargets (100 fits well within 210,000 / 2016 = 104.166)
and compare it with the maximum of all such values calculated at every previous 210,000-block boundary:


if average_diff_of_last_100_retargets > maximum_of_all_previous_average_diffs
        do halving
else
        do nothing


This way:

1. system cannot be played
2. only in case of destructive halving: system waits for the recovery of network security


Best Regards
Jaroslaw
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists•linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev



_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists•linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev




^ permalink raw reply	[flat|nested] 16+ messages in thread
* Re: [bitcoin-dev] Pseudocode for robust tail emission
@ 2023-01-01 22:27 jk_14
  0 siblings, 0 replies; 16+ messages in thread
From: jk_14 @ 2023-01-01 22:27 UTC (permalink / raw)
  To: Peter Todd; +Cc: bitcoin-dev


Would a storage fee averaged out over many future blocks be not a hardcoded value, but one regulated by a free market?


The problem I see with demurrage is that the fee is taken when you spend: there is no additional income for miners if people keep hoarding.
With tail emission, even if people keep hoarding - the fee is taken immediately and distributed to miners.

We can hope that global adoption is still ahead (most countries are like El Salvador). It may increase the price and market cap of Bitcoin by an order of magnitude.
And that's why hoarding may still exist under demurrage: the long-term risk/reward is extremely appealing (i.e. a relatively small, delayed tax versus a huge possible profit).
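
For scale, a toy calculation only - the 1%/year rate is entirely invented, and the actual rate and collection mechanism of such a storage fee are not specified here:

BLOCKS_PER_YEAR = 52560    # ~144 blocks/day * 365

def accrued_storage_fee(value_btc, blocks_held, annual_rate=0.01):
    # fraction of an output's value accrued as a storage fee after holding it
    # untouched for blocks_held blocks, at an assumed annual demurrage rate
    years = blocks_held / BLOCKS_PER_YEAR
    return value_btc * (1 - (1 - annual_rate) ** years)

# 1 BTC held untouched through a full four-year halving cycle:
print(f"{accrued_storage_fee(1.0, 4 * BLOCKS_PER_YEAR):.4f} BTC")   # ~0.039 BTC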




On 2022-12-31 00:29:08, Peter Todd <pete@petertodd•org> wrote:
> On Fri, Dec 23, 2022 at 07:43:36PM +0100, jk_14@op•pl wrote:
> 
> Necessary or not - it doesn't hurt to plan the robust model, just in case. The proposal is:
> 
> Let the code, every 210,000 blocks, calculate the average difficulty of the last 100 retargets (100 fits well within 210,000 / 2016 = 104.166)
> and compare it with the maximum of all such values calculated at every previous 210,000-block boundary:
> 
> 
> if average_diff_of_last_100_retargets > maximum_of_all_previous_average_diffs
> 	do halving
> else
> 	do nothing
> 
> 
> This way:
> 
> 1. system cannot be played
> 2. only in case of destructive halving: system waits for the recovery of network security

First of all - while I suspect you already understand this issue - I should
point out the following:

The immediate danger we have with halvings is that in a competitive market,
profit margins tend towards marginal costs - the cost to produce an additional
unit of production - rather than total costs - the cost necessary to recover
prior and future expenses. Since the halving is a sudden shock to the system,
under the right conditions we could have a significant amount of hashing power
just barely able to afford to hash prior to the halving, resulting in all that
hashing power immediately having to shut down and fees increasing dramatically,
and likely, chaotically.  Your proposal does not address that problem as it can
only measure difficulty prior to the halving point.


Other than that problem, I agree that this proposal would, at least in theory,
be a positive improvement on the status quo. But it is a hard fork and I don't
think there is much hope for such hard forks to be implemented. I believe that
a demurrage soft-fork, implemented via a storage fee averaged out over many
future blocks, has a much more plausible route towards implementation.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




^ permalink raw reply	[flat|nested] 16+ messages in thread
* Re: [bitcoin-dev] Pseudocode for robust tail emission
@ 2023-01-01 21:23 jk_14
  2023-01-02  4:53 ` Billy Tetrud
  0 siblings, 1 reply; 16+ messages in thread
From: jk_14 @ 2023-01-01 21:23 UTC (permalink / raw)
  To: Billy Tetrud, Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 5772 bytes --]

Yes, the idea is:
if mining activity is growing - let's execute consecutive halvings
but if a miner exodus has happened - let's delay the next halving until mining activity has recovered to previous levels
If it gets to the point where a sudden drop in mining difficulty happens - delaying the next halving may not be sufficient to correct it, but it is surely better than not delaying it.
While Bitcoin becomes better and better money with every halving in comparison to other types of money - there is a non-zero risk that people will hoard it more and more, in line with old Gresham's law ("HODL"), and in this way decrease liquidity / transaction volume. That positive feedback loop is my real concern here.
Regarding the relationship between difficulty and security - I fully agree.
But ASIC technology has already matured, and any technology breakthrough is a short event within a 4-year period.
So growth in difficulty could come from a technology breakthrough, but any sudden drop in difficulty would always be an issue - there is no such thing as ASIC technology regression.
Obviously, an uncomplicated solution would be better than a complicated one.
 
 
On 2022-12-30 19:21:10, Billy Tetrud <billy.tetrud@gmail•com> wrote:
If the idea is to ensure that a catastrophic miner exodus doesn't happen, the "difference" you're calculating should only care about downward differences. Upward differences indicate more mining activity and so shouldn't cause a halving skip.  
But I don't think any scheme like this that only acts on the basis of difficulty will be sufficient. If it gets to the point where a sudden drop in mining difficulty happens, it is very likely that simply delaying the next halving or even ending halvings altogether will not be sufficient to correct for whatever is causing hashrate to tank. There is also the danger of simple difficulty stagnation, which this mechanism wouldn't detect. 
 
The relationship between difficulty and security becomes less and less predictable the longer you want to look ahead. There's no long-term relation between difficulty and any reasonable security target. A security target might be something like "no colluding group with less than $1 trillion at their disposal could successfully 51% attack the network (with a probability of blah blah)". There is no way today to program in any code that detects, based on difficulty alone, when that criterion is violated. You would have to program in assumptions about the cost of hashrate projected into the future.
 
I can't think of any robust automatic way to do this. I think to a certain degree, it will have to be a change that happens in a fork of some kind (soft or hard) periodically (every 10 years? 30 years?). The basic relation needed is really the cost in Bitcoin of the security target (i.e. the minimum number of Bitcoin it should take to 51% attack the system) and the cost in Bitcoin of acquiring a unit of hashrate. This could be simply input into the code, or could use some complicated oracle system. But with that relation, the system could be programmed to calculate the difficulty necessary to keep the system secure.
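
A rough sketch of that relation - all example numbers are invented, and the per-TH/s cost is exactly the kind of input that would have to be hardcoded or come from an oracle:

BLOCK_INTERVAL = 600        # target seconds per block
HASHES_PER_DIFF1 = 2**32    # expected hashes per block at difficulty 1

def required_difficulty(security_target_btc, btc_cost_per_ths):
    # a fresh 51% attack needs roughly as much hashrate as the honest network
    # already has, so the honest network must represent at least this much
    # hardware (in TH/s) at the given cost per TH/s
    honest_ths = security_target_btc / btc_cost_per_ths
    honest_hashes_per_sec = honest_ths * 1e12
    return honest_hashes_per_sec * BLOCK_INTERVAL / HASHES_PER_DIFF1

# e.g. a 1,000,000 BTC security target at an assumed 0.002 BTC per TH/s:
print(f"{required_difficulty(1_000_000, 0.002):.2e}")   # ~7e13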
 
Once that is in place, the system could automatically adjust the subsidy up or down to attract more or less miners, or it could adjust the block size up or down to change the fee market such that more or less total fees are collected each block to attract more or less miners. 
On Tue, Dec 27, 2022, 09:41 Jaroslaw via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
It seems like a more elegant solution could be to use a chainwork parameter instead,
i.e. a comparison just before the halving: whether the last 210,000-block interval has a higher chainwork difference between the beginning and the end of the interval
than any other such inter-halving interval before.
A little digression yet:
A system in which all users participate in ensuring its security looks better than one in which only some of them (i.e. the active ones) participate (and passive stakeholders are de facto free riders).
In my opinion the concept above is only the complement of a currently missing mechanism: achieving equilibrium regarding the costs of security between two parties with opposing interests.
It's easy to understand and - most important - it has no hardcoded value of tail emission, which is clear proof that it is based on a free market.
And last but not least, if someone is 100% sure that income from transactions will take over security support from the block subsidy - accepting such a proposal is like putting your money where your mouth is: this safety measure will never be triggered, then (no risk of a fork).
Best Regards
Jaroslaw
On 2022-12-23 20:29:20, Jaroslaw via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
>
Necessary or not - it doesn't hurt to plan the robust model, just in case. The proposal is:
Let the code, every 210,000 blocks, calculate the average difficulty of the last 100 retargets (100 fits well within 210,000 / 2016 = 104.166)
and compare it with the maximum of all such values calculated at every previous 210,000-block boundary:
if average_diff_of_last_100_retargets > maximum_of_all_previous_average_diffs
        do halving
else
        do nothing
This way:
1. system cannot be played
2. only in case of destructive halving: system waits for the recovery of network security
Best Regards
Jaroslaw
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists•linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists•linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

[-- Attachment #2: Type: text/html, Size: 7438 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread
* Re: [bitcoin-dev] Pseudocode for robust tail emission
@ 2022-12-27 15:34 jk_14
  2022-12-30 18:20 ` Billy Tetrud
  0 siblings, 1 reply; 16+ messages in thread
From: jk_14 @ 2022-12-27 15:34 UTC (permalink / raw)
  To: bitcoin-dev


It seems like a more elegant solution could be to use a chainwork parameter instead,
i.e. a comparison just before the halving: whether the last 210,000-block interval has a higher chainwork difference between the beginning and the end of the interval
than any other such inter-halving interval before.
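
For illustration, a rough Python sketch of that chainwork comparison (get_chainwork is a hypothetical accessor returning the cumulative chainwork up to a given height, similar to Core's nChainWork; not tested code):

HALVING_INTERVAL = 210000

def halving_allowed_by_chainwork(get_chainwork, height, previous_deltas):
    # work accumulated over the 210,000-block interval ending at `height`
    delta = get_chainwork(height) - get_chainwork(height - HALVING_INTERVAL)
    # halve only if this interval did more work than every earlier one;
    # otherwise postpone and wait for the hashrate to recover
    allowed = not previous_deltas or delta > max(previous_deltas)
    previous_deltas.append(delta)
    return allowed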

A little digression yet:
A system in which all users participate in ensuring its security looks better than one in which only some of them (i.e. the active ones) participate (and passive stakeholders are de facto free riders).
In my opinion the concept above is only the complement of a currently missing mechanism: achieving equilibrium regarding the costs of security between two parties with opposing interests.
It's easy to understand and - most important - it has no hardcoded value of tail emission, which is clear proof that it is based on a free market.
And last but not least, if someone is 100% sure that income from transactions will take over security support from the block subsidy - accepting such a proposal is like putting your money where your mouth is: this safety measure will never be triggered, then (no risk of a fork).


Best Regards
Jaroslaw



On 2022-12-23 20:29:20, Jaroslaw via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
> 
Necessary or not - it doesn't hurt to plan the robust model, just in case. The proposal is:

Let the code, every 210,000 blocks, calculate the average difficulty of the last 100 retargets (100 fits well within 210,000 / 2016 = 104.166)
and compare it with the maximum of all such values calculated at every previous 210,000-block boundary:


if average_diff_of_last_100_retargets > maximum_of_all_previous_average_diffs
	do halving
else
	do nothing


This way:

1. system cannot be played
2. only in case of destructive halving: system waits for the recovery of network security


Best Regards
Jaroslaw
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists•linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev





^ permalink raw reply	[flat|nested] 16+ messages in thread
* [bitcoin-dev] Pseudocode for robust tail emission
@ 2022-12-23 18:43 jk_14
  2022-12-30 23:28 ` Peter Todd
  0 siblings, 1 reply; 16+ messages in thread
From: jk_14 @ 2022-12-23 18:43 UTC (permalink / raw)
  To: bitcoin-dev


Necessary or not - it doesn't hurt to plan the robust model, just in case. The proposal is:

Let the code, every 210,000 blocks, calculate the average difficulty of the last 100 retargets (100 fits well within 210,000 / 2016 = 104.166)
and compare it with the maximum of all such values calculated at every previous 210,000-block boundary:


if average_diff_of_last_100_retargets > maximum_of_all_previous_average_diffs
	do halving
else
	do nothing
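
For illustration, the same rule as a rough Python sketch (get_difficulty is a hypothetical accessor for the difficulty in force at a given height; names and structure are only illustrative, not tested against any real node API):

HALVING_INTERVAL = 210000    # blocks between potential halvings
RETARGET_INTERVAL = 2016     # blocks per difficulty retarget
WINDOW_RETARGETS = 100       # average over the last 100 retargets (~201,600 blocks)

def average_diff_of_last_retargets(get_difficulty, height):
    # sample the difficulty at the start of each of the last 100 retarget
    # periods before `height` and average them
    start = height - WINDOW_RETARGETS * RETARGET_INTERVAL
    samples = [get_difficulty(h) for h in range(start, height, RETARGET_INTERVAL)]
    return sum(samples) / len(samples)

def halving_allowed(get_difficulty, height, previous_averages):
    # previous_averages: the values computed at earlier 210,000-block boundaries;
    # halve only on a new all-time high, otherwise "do nothing" and wait for
    # network security to recover
    current = average_diff_of_last_retargets(get_difficulty, height)
    allowed = not previous_averages or current > max(previous_averages)
    previous_averages.append(current)
    return allowed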


This way:

1. system cannot be played
2. only in case of destructive halving: system waits for the recovery of network security


Best Regards
Jaroslaw


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2023-02-01 22:04 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <mailman.9.1674388803.14535.bitcoin-dev@lists.linuxfoundation.org>
2023-01-22 14:13 ` [bitcoin-dev] Pseudocode for robust tail emission John Tromp
2023-02-01 22:04 jk_14
  -- strict thread matches above, loose matches on Subject: below --
2023-01-21 10:20 jk_14
2023-01-07 18:52 jk_14
2023-01-07 23:22 ` Eric
2023-01-02 23:02 jk_14
2023-01-04 16:03 ` Billy Tetrud
2023-01-01 22:27 jk_14
2023-01-01 21:23 jk_14
2023-01-02  4:53 ` Billy Tetrud
2022-12-27 15:34 jk_14
2022-12-30 18:20 ` Billy Tetrud
2022-12-23 18:43 jk_14
2022-12-30 23:28 ` Peter Todd
2023-01-01 12:42   ` Alfie John
2023-01-18 20:58     ` Peter Todd

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox