On Fri, Dec 23, 2022 at 07:43:36PM +0100, jk_14@op.pl wrote:
> Necessary or not - it doesn't hurt to plan the robust model, just in case.
> The proposal is:
>
> Let the code, every 210,000 blocks, calculate the average difficulty of the
> last 100 retargets (100 fits well within 210,000 / 2016 = 104.166) and
> compare it with the maximum of all such values calculated before, every
> 210,000 blocks:
>
> if average_diff_of_last_100_retargets > maximum_of_all_previous_average_diffs
>     do halving
> else
>     do nothing
>
> This way:
>
> 1. the system cannot be gamed
> 2. only in case of a destructive halving: the system waits for the recovery
>    of network security

First of all - while I suspect you already understand this issue - I should
point out the following:

The immediate danger we have with halvings is that in a competitive market,
profit margins tend towards marginal costs - the cost to produce an additional
unit of production - rather than total costs - the cost necessary to recover
prior and future expenses. Since the halving is a sudden shock to the system,
under the right conditions we could have a significant amount of hashing power
just barely able to afford to hash prior to the halving, resulting in all that
hashing power immediately having to shut down and fees increasing dramatically
and, likely, chaotically.

Your proposal does not address that problem, as it can only measure difficulty
prior to the halving point.

Other than that problem, I agree that this proposal would, at least in theory,
be a positive improvement on the status quo. But it is a hard fork, and I don't
think there is much hope for such hard forks to be implemented. I believe that
a demurrage soft-fork, implemented via a storage fee averaged out over many
future blocks, has a much more plausible route towards implementation.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org
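
For concreteness, a minimal Python sketch of the quoted gating rule. The
helper get_retarget_difficulty and the epoch-average bookkeeping are
hypothetical stand-ins, not part of any real implementation; this only
illustrates the comparison the proposal describes.

    # Sketch of the proposed halving gate (illustrative only).
    HALVING_INTERVAL = 210_000    # blocks between scheduled halvings
    RETARGET_INTERVAL = 2_016     # blocks between difficulty adjustments
    WINDOW = 100                  # retargets averaged per halving epoch

    def average_recent_difficulty(height, get_retarget_difficulty):
        """Average difficulty of the last WINDOW retargets before `height`.

        get_retarget_difficulty(i) is a hypothetical accessor returning the
        difficulty set at the i-th retarget.
        """
        last_retarget = height // RETARGET_INTERVAL
        diffs = [get_retarget_difficulty(i)
                 for i in range(last_retarget - WINDOW, last_retarget)]
        return sum(diffs) / WINDOW

    def should_halve(height, get_retarget_difficulty, previous_epoch_averages):
        """Apply the proposed rule at a scheduled halving height.

        Halve only if the recent average difficulty exceeds the maximum of
        the averages recorded at previous halving heights; otherwise do
        nothing. Every epoch's average is recorded either way.
        """
        assert height % HALVING_INTERVAL == 0
        current_avg = average_recent_difficulty(height, get_retarget_difficulty)
        halve = (not previous_epoch_averages
                 or current_avg > max(previous_epoch_averages))
        previous_epoch_averages.append(current_avg)
        return halve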