I wouldn't fully discount general-purpose hardware, or hardware outside the realm of ASICs. BOINC (https://cds.cern.ch/record/800111/files/p1099.pdf) implements a decent distributed computing protocol (granted, it isn't a cryptocurrency), and it computes data at a far cheaper cost than the competition, with decent levels of fault tolerance. I myself am running an extremely large-scale open distributed computing pipeline, and can tell you for certain that what is out there is insane.

In regards to the argument about generic HDDs and CPUs, the algorithmic implementation I am proposing would likely make them more adaptable. More than likely, specialized HDDs would emerge, similar to BurstCoin miners, along with 128-core CPUs and so on. That may be inevitable, but the main point is providing access to other forms of computation alongside ASICs. At the very least, the generic hardware owners can participate, and other infrastructures can have some form of compatibility.

In regards to ASICBOOST, I am already well aware of it, as well as mining firmware, autotuning, multi-threaded processing setups, overclocking, and the different research firms involved. I think it is feasible to provide multiple forms of computation without disenfranchising one over the other. I'm also well aware of the history of BTC mining: from mining just by downloading the whitepaper, to USB block erupters, to generic CPUs, to a few ASICs, to entire mining farms. I have also seen experimental projects such as Cuckoo Cycle, so I know the arguments regarding compute-boundness vs. memory-boundness and whether or not they can be one and the same. The answer is yes, but it needs to be designed correctly.

As for the level of improvement, this is just one of the improvements in my BIPs aimed at making PoW more adaptable. I also have cryptography improvements I'm looking into as well. Nonetheless, I believe the implementation I want to do would at the very least be quite interesting.

Best regards, Andrew

On Wed, Mar 17, 2021 at 1:05 AM ZmnSCPxj <ZmnSCPxj@protonmail.com> wrote:
Good morning Andrew,

Looking over the text...

> # I am looking towards integrating memory hard compatibility w/ the mining algorithm. Memory hard computation allows for time and space complexity for data storage functionality, and there is a way this can likely be implemented without disenfranchising current miners or their hardware if done right.

I believe this represents a tradeoff between time and space --- either you use one spatial unit and take a lot of time, or you use many spatial units and take much less time.

But such time/space tradeoffs are already possible with the existing mechanism --- if you cannot run your existing SHA256d miner faster (time), you just buy more miners (space).
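As an aside, that existing tradeoff can be sketched in a few lines (the difficulty and hash-rate figures below are illustrative, not real network figures):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as used in Bitcoin's proof-of-work."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def expected_seconds(difficulty_hashes: float,
                     miners: int,
                     hashes_per_sec: float) -> float:
    """Expected time to find a valid hash: buying more miners (space)
    divides the expected search time linearly -- the time/space tradeoff."""
    return difficulty_hashes / (miners * hashes_per_sec)

# One miner vs. ten miners at the same per-unit hash rate:
one = expected_seconds(1e12, miners=1, hashes_per_sec=1e9)
ten = expected_seconds(1e12, miners=10, hashes_per_sec=1e9)
assert one == 10 * ten  # 10x the space, 1/10th the time
```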

Memory hardness *prevents* this tradeoff: you cannot create a smaller miner that simply takes longer to mine, because the memory requirement forbids trading off space.
Thus, I think the requirement for memory hardness is a red herring in the design of proof-of-work algorithms.

It is also helpful to remember that spinning rust consumes electricity as well, and that any operation that requires changes in data being stored requires a lot of energy.
Indeed, in purely computational algorithms (e.g. CPU processing pipelines) a significant amount of energy is spent on *changing* voltage levels, with very little energy (negligible compared to the energy spent in changing voltage levels in modern CMOS hardware) in *maintaining* the voltage levels.
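For illustration, the standard first-order model of CMOS switching energy, P_dyn = alpha * C * V^2 * f, makes this concrete (the component values below are made up for the sake of the arithmetic):

```python
def dynamic_power(activity: float, capacitance_f: float,
                  voltage_v: float, freq_hz: float) -> float:
    """First-order CMOS dynamic power: P = alpha * C * V^2 * f.
    This is the energy spent *changing* voltage levels; static
    (leakage) power merely maintaining levels is typically far smaller."""
    return activity * capacitance_f * voltage_v ** 2 * freq_hz

# Illustrative (made-up) figures for a small switching block:
# 20% activity factor, 1 nF switched capacitance, 0.9 V, 2 GHz.
p_switch = dynamic_power(0.2, 1e-9, 0.9, 2e9)  # ~0.324 W
```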

> I don't see a reason why somebody with $2m of regular hardware can't mine the same amount of BTC as somebody with $2m worth of ASICs.

I assume here that "regular hardware" means "general-purpose computing device".

The Futamura projections are a good reason I see: http://blog.sigfpe.com/2009/05/three-projections-of-doctor-futamura.html

Basically, any interpreter + fixed program can be converted, via Futamura projection, to an optimized program that cannot interpret any other program but runs faster and takes less resources.
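A toy sketch of that first projection, done by hand rather than by an automatic specializer (the instruction set here is invented purely for illustration):

```python
# A tiny interpreter for a made-up two-instruction language.
def interpret(program: list, x: int) -> int:
    for op in program:       # dispatch overhead on every step
        if op == "inc":
            x = x + 1
        elif op == "square":
            x = x * x
    return x

FIXED_PROGRAM = ["inc", "square"]

# First Futamura projection, performed manually: specialize the
# interpreter to FIXED_PROGRAM.  All dispatch is gone; the result
# can run *only* this program, but runs it with no interpretive cost.
def specialized(x: int) -> int:
    return (x + 1) * (x + 1)

assert interpret(FIXED_PROGRAM, 3) == specialized(3) == 16
```

The same collapse of interpreter + fixed program into a fixed circuit is what an ASIC does in hardware.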

In short, any hardware interpreter (i.e. general-purpose computing device) + a fixed proof-of-whatever program, can be converted to an optimized hardware that can only perform that proof-of-whatever program, but consuming less energy and space and will (eventually) be cheaper per unit as well, so that $2M of such specific hardware will outperform $2M of general-purpose computing hardware.

Thus, all application-specificity (i.e. any fixed program) will always take less resources to run than a generic hardware interpreter that can run any program.

Thus, if you ever nail down the specifics of your algorithm, and if a thousand-Bitcoin industry ever grows around that program, you will find that ASICs ***will*** arise that run that algorithm faster and less energy-consuming than general-purpose hardware that has to interpret a binary.
**For one, memory/disk bus operations are limited only to actual data, without requiring additional bus operations to fetch code.**
Data can be routed directly from the output of one computational sub-unit to the input of another, without requiring (as in the general-purpose hardware case) that intermediate outputs be placed in a general-purpose storage register (which, as noted, takes energy to *change* its contents, and which, being general-purpose, will also be used to hold *other* intermediate outputs).
Specialized HDDs can arise as well which are optimized for whatever access pattern your scheme requires, and that would also outperform general-purpose HDDs as well.

Further optimizations may also exist in an ASIC context that are not readily visible --- the more complicated your program design, the more likely it is that there are hidden optimizations achievable by ASICs that you will not foresee (xref ASICBOOST).

In short, even with memory-hardness, an ASIC will arise which might need to be connected to an array of (possibly specialized) HDDs but which will still outperform your general-purpose hardware connected to an array of general-purpose storage.

Indeed, various storage solutions already have different specializations: SMR HDDs replace tape drives, PMR HDDs serve as caches of SMR HDDs, SSDs serve as caches of PMR HDDs.
An optimized technology stack like that can outperform a generic HDD.

You cannot fight the inevitability of ASICs and other specialized hardware, just as you cannot fight specialization.

You puny humans must specialize in order to achieve the heights of your civilization --- I can bet you 547 satoshis that you yourself cannot farm your own food, you specialize in software engineering of some kind and just pay a farmer to harvest your food for you.
Indeed, you probably do not pay a farmer directly, but pay an intermediary that specializes in packing food for transport from the farm to your domicile, which itself probably delegates the actual transporting to another specialist.
Similarly, ASICs will arise and focus on particularly high-value fixed computations, inevitably.



Regards,
ZmnSCPxj