Setting aside my thoughts that something like Simplicity would make a better platform than Bitcoin Script (due to expressions operating on a narrower interface than the entire stack; I'm looking at you, OP_DEPTH), there is an issue with namespace management.

If I understand correctly, your implication was that once opcodes are redefined by an OP_RETURN transaction, subsequent uses of that opcode refer to the new microcode.  But then we have a race condition: people submit transactions expecting the outputs to refer to the old code, only to have the opcode redefined by the time those transactions are confirmed (or worse, to have them reorged).

I've partially addressed this issue in my Simplicity design, where the commitment of a Simplicity program in a scriptPubKey covers the hash of the specification of the jets used, so the commitment unambiguously fixes the semantics (rightly or wrongly).  But the issue resurfaces at redemption time, where I (currently) have a consensus-critical map of codes to jets that is used to decode the witness data into a Simplicity program.  If one were to allow this map of codes to jets to be replaced (rather than just extended), it would cause redemption to fail, because the hash of the new jets would no longer match the hash of the jets appearing in the input's scriptPubKey commitment.  While this is still not good and I don't recommend it, it is probably better than letting the semantics of your programs be changed out from under you.
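
A minimal sketch of that redemption-time check, in the style of the Haskell below, with Code, JetSpec, Hash256, and hashJetSpec all hypothetical names:

```Haskell
import qualified Data.Map as Map

-- Redemption succeeds only if the jet that a code currently decodes
-- to still hashes to the jet hash committed in the input's
-- scriptPubKey.  Replacing (rather than extending) the map makes
-- this check fail for old outputs.
checkJet :: Map.Map Code JetSpec -> Code -> Hash256 -> Bool
checkJet table code committed =
  case Map.lookup code table of
    Just spec -> hashJetSpec spec == committed
    Nothing   -> False  -- the code was removed: redemption fails
```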

This comment is not meant as an endorsement of this idea, which is a little bit out there, at least as far as Bitcoin is concerned. :)

My long-term plan is to move this consensus-critical map of codes out of the consensus layer and into the p2p layer, where peers can negotiate their own encodings with each other.  But that plan is also a little bit out there, and it still doesn't solve the issue of how to weight reused jets, given that weight is still consensus-critical.

On Tue, Mar 22, 2022 at 1:37 AM ZmnSCPxj via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
Good morning list,

It is entirely possible that I have gotten into the deep end and am now drowning in insanity, but here goes....

Subject: Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

Introduction
============

Recent (Early 2022) discussions on the bitcoin-dev mailing
list have largely focused on new constructs that enable new
functionality.

One general idea can be summarized this way:

* We should provide a very general language.
  * Then later, once we have learned how to use this language,
    we can softfork in new opcodes that compress sections of
    programs written in this general language.

There are two arguments against this style:

1.  One of the most powerful arguments of the "general" side of
    the "general v specific" debate is that softforks are
    painful because people are going to keep reiterating the
    activation parameters debate in a memoryless process, so
    we want to keep the number of softforks low.
    * So, we should just provide a very general language and
      never softfork in any other change ever again.
2.  One of the most powerful arguments of the "specific" side of
    the "general v specific" debate is the same: softforks are
    painful because people are going to keep reiterating the
    activation parameters debate in a memoryless process, so
    we want to keep the number of softforks low.
    * So, we should just skip over the initial very general
      language and individually activate small, specific
      constructs, reducing the needed softforks by one.

By taking a page from microprocessor design, it seems to me
that we can use the same above general idea (a general base
language where we later "bless" some sequence of operations)
while avoiding some of the arguments against it.

Digression: Microcodes In CISC Microprocessors
----------------------------------------------

In the 1980s and 1990s, two competing microprocessor design
paradigms arose:

* Complex Instruction Set Computing (CISC)
  - Few registers, many addressing/indexing modes, variable
    instruction length, many obscure instructions.
* Reduced Instruction Set Computing (RISC)
  - Many registers, usually only immediate and indexed
    addressing modes, fixed instruction length, few
    instructions.

In CISC, the microprocessor provides very application-specific
instructions, often with a small number of registers with
specific uses.
The instruction set was complicated, and often required
multiple specific circuits for each application-specific
instruction.
Instructions had varying sizes and took varying numbers of cycles.

In RISC, the microprocessor provides fewer instructions, and
programmers (or compilers) are supposed to generate the code
for all application-specific needs.
The processor provided large register banks which could be
used very generically and interchangeably.
Instructions had the same size and every instruction took a
fixed number of cycles.

In CISC you usually had shorter code which could be written
by human programmers in assembly language or machine language.
In RISC, you generally had longer code, often difficult for
human programmers to write, and you *needed* a compiler to
generate it (unless you were very careful, or insane enough
that you could scroll over multiple pages of instructions
without becoming more insane), or else you might forget about
stuff like branch delay slots.

For the most part, RISC lost, since most modern processors
today are x86 or x86-64, an instruction set with varying
instruction sizes, varying number of cycles per instruction,
and complex instructions with application-specific uses.

Or at least, it *looks like* RISC lost.
In the 90s, Intel was struggling since their big beefy CISC
designs were becoming too complicated.
Bugs got past testing and into mass-produced silicon.
RISC processors were beating the pants off 386s in terms of
raw number of computations per second.

RISC processors had the major advantage that they were
inherently simpler, due to having fewer specific circuits
and filling up their silicon with general-purpose registers
(which are large but very simple circuits) to compensate.
This meant that processor designers could fit more of the
design in their merely human meat brains, and were less
likely to make mistakes.
The fixed number of cycles per instruction made it trivial
to create a fixed-length pipeline for instruction processing,
and practical RISC processors could deliver one instruction
per clock cycle.
Worse (for Intel), the simplicity of RISC meant that smaller
and less experienced teams could produce viable competitors
to the Intel x86s.

So what Intel did was to use a RISC processor, and add a
special Instruction Decoder unit.
The Instruction Decoder would take the CISC instruction
stream accepted by classic Intel x86 processors, and emit
RISC instructions for the internal RISC processor.
CISC instructions might have variable lengths and variable
cycle counts, but the emitted RISC instructions were
individually fixed in length and cycle count.
A CISC instruction might be equivalent to a single RISC
instruction, or several.

With this technique, Intel could deliver performance
approaching their RISC-only competition, while retaining
back-compatibility with existing software written for their
classic CISC processors.

At its core, the Instruction Decoder was a table-driven
parser.
This lookup table could be stored in on-chip flash memory,
which could be updated to fix bugs in the implementations of
CISC instructions.
The contents of this on-chip flash memory were termed
"microcode".

Important advantages of this "microcode" technique were:

* Back-compatibility with existing instruction sets.
* Easier and more scalable underlying design due to ability
  to use RISC techniques while still supporting CISC instruction
  sets.
* Possible to fix bugs in implementations of complex CISC
  instructions by uploading new microcode.

(Obviously I have elided a bunch of stuff, but the above
rough sketch should be sufficient as introduction.)

Bitcoin Consensus Layer As Hardware
-----------------------------------

While Bitcoin fullnode implementations are software, because
of the need for consensus, this software is not actually very
"soft".
One can consider that, just as it would take a long time for
new hardware to be designed with a changed instruction set,
it is similarly taking a long time to change Bitcoin to
support changed feature sets.

Thus, we should really consider the Bitcoin consensus layer,
and its SCRIPT, as hardware that other Bitcoin software and
layers run on top of.

This opens up the thought of using techniques that were
useful in hardware design, such as microcode: a translation
layer from an "old" instruction set to a "new" instruction
set, with the ability to modify this mapping.

Microcode For Bitcoin SCRIPT
============================

I propose:

* Define a generic, low-level language (the "RISC language").
* Define a mapping from a specific, high-level language to
  the above language (the microcode).
* Allow users to sacrifice Bitcoins to define a new microcode.
* Have users indicate the microcode they wish to use to
  interpret their Tapscripts.

As a concrete example, let us consider the current Bitcoin
SCRIPT as the "CISC" language.

We can then support a "RISC" language that is composed of
general instructions, such as arithmetic, SECP256K1 scalar
and point math, bytevector concatenation, sha256 midstates,
bytevector bit manipulation, transaction introspection, and
so on.
This "RISC" language would also be stack-based.
As the "RISC" language would have more possible opcodes,
we may need to use 2-byte opcodes for the "RISC" language
instead of 1-byte opcodes.
Let us call this "RISC" language the micro-opcode language.
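
For concreteness, the micro-opcode set might look like the
following sketch (all `UOP_` names here are hypothetical,
mirroring the categories above):

```Haskell
-- Hypothetical micro-opcode set; with more than 256
-- micro-opcodes, each would be encoded as 2 bytes.
data UOpcode
  = UOP_ADD | UOP_SUB | UOP_CAT | UOP_SUBSTR     -- arithmetic, bytevectors
  | UOP_SCALARADD | UOP_POINTADD | UOP_POINTMUL  -- SECP256K1 math
  | UOP_SHA256INIT | UOP_SHA256BLOCK             -- sha256 midstates
  | UOP_INSPECTINPUT | UOP_INSPECTOUTPUT         -- introspection
  | UOP_FAIL                                     -- unconditional failure
  deriving (Eq, Ord, Show)
```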

Then, the "microcode" simply maps the existing Bitcoin
SCRIPT `OP_` codes to one or more `UOP_` micro-opcodes.

An interesting fact is that stack-based languages have
automatic referential transparency; that is, if I define
some new word in a stack-based language and use that word,
I can replace any use of it verbatim with its definition
without issue.
Compare this to a language like C, where macro authors
have to be very careful about inadvertent variable
capture, wrapping `do { ... } while(0)` to avoid problems
with `if` and multiple statements, multiple execution, and
so on.

Thus, a sequence of `OP_` opcodes can be mapped to a
sequence of equivalent `UOP_` micro-opcodes without
changing the interpretation of the source language, an
important property when considering such a "compiled"
language.

We start with a default microcode which is equivalent
to the current Bitcoin language.
When users want to define a new microcode to implement
new `OP_` codes or change existing `OP_` codes, they
can refer to a "base" microcode, and only have to
provide the new mappings.

A microcode is fundamentally just a mapping from an
`OP_` code to a variable-length sequence of `UOP_`
micro-opcodes.

```Haskell
import Data.Word (Word8)
import qualified Data.Map as Map

-- UOpcode as sketched above.
newtype Opcode = Opcode Word8 deriving (Eq, Ord, Show)
newtype Microcode = Microcode (Map.Map Opcode [UOpcode])
```
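
Basing a new microcode on an existing one is then just a
left-biased map union, where the new microcode carries only
the changed entries (a sketch):

```Haskell
-- The delta's entries override the base's; Map.union is
-- left-biased.
extendMicrocode :: Microcode -> Map.Map Opcode [UOpcode] -> Microcode
extendMicrocode (Microcode base) delta =
  Microcode (Map.union delta base)
```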

Semantically, the SCRIPT interpreter processes `UOP_`
micro-opcodes.

```Haskell
-- instance Monad Interpreter -- can `fail`.
interpreter :: Transaction -> TxInput -> [UOpcode] -> Interpreter ()
```

Example
-------

Suppose a user wants to re-enable `OP_CAT`, and nothing
else.

That user creates a microcode, referring to the current
default Bitcoin SCRIPT microcode as the "base".
The base microcode defines `OP_CAT` as the single
micro-opcode `UOP_FAIL`, i.e. a micro-opcode that always
fails.
The new microcode instead redefines `OP_CAT` as the
micro-opcode sequence `UOP_CAT`.
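
In terms of the sketches above (0x7e is `OP_CAT`'s actual
opcode byte), this new microcode is a one-entry delta over
the base:

```Haskell
-- Hypothetical OP_CAT-enabling microcode.
opCatMicrocode :: Microcode -> Microcode
opCatMicrocode base =
  extendMicrocode base (Map.singleton (Opcode 0x7e) [UOP_CAT])
```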

Microcodes then have a standard way of being represented
as a byte sequence.
The user serializes their new microcode as a byte
sequence.

Then, the user creates a new transaction where one of
the outputs contains, say, 1.0 Bitcoins (exact required
value TBD), and has the `scriptPubKey` of
`OP_TRUE OP_RETURN <serialized_microcode>`.
This output is a "microcode introduction output", which
is provably unspendable, thus burning the Bitcoins.

(It need not be a single user; multiple users can
coordinate by signing a single transaction that commits
their funds to the microcode introduction.)
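
A hypothetical recognizer for such outputs (0x51 and 0x6a
are the actual byte values of `OP_TRUE` and `OP_RETURN`;
`deserializeMicrocode` and the exact burn value are
assumptions):

```Haskell
import Control.Monad (guard)
import Data.Int (Int64)
import qualified Data.ByteString as BS

-- Recognize a microcode introduction output from its value
-- (in satoshis) and scriptPubKey.
introducedMicrocode :: Int64 -> BS.ByteString -> Maybe Microcode
introducedMicrocode satoshis spk = do
  guard (satoshis >= 100000000)     -- 1.0 BTC burned (value TBD)
  (b0, rest) <- BS.uncons spk
  (b1, body) <- BS.uncons rest
  guard (b0 == 0x51 && b1 == 0x6a)  -- OP_TRUE, OP_RETURN
  deserializeMicrocode body         -- assumed parser (push framing elided)
```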

Once the above transaction has been deeply confirmed, the
user takes the hash of the microcode serialization.
The user can then use a SCRIPT with `OP_CAT` enabled by
using a Tapscript with, say, version `0xce`, whose SCRIPT
has the microcode hash as its first bytes, followed by the
`OP_` codes.

Fullnodes will then process recognized microcode
introduction outputs and store mappings from their
hashes to the microcodes in a new microcodes index.
Fullnodes can then process version-`0xce` Tapscripts
by checking if the microcodes index has the indicated
microcode hash.

Semantically, fullnodes take the SCRIPT, expand each `OP_`
code in it to its sequence of `UOP_` micro-opcodes, and
concatenate the resulting sequences.
The SCRIPT interpreter then operates over this sequence
of `UOP_` micro-opcodes.
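
A sketch of this expansion step, using the definitions
above (`Hash256` and the index type are assumptions):

```Haskell
-- The microcodes index, built from deeply-confirmed
-- microcode introduction outputs.
type MicrocodeIndex = Map.Map Hash256 Microcode

-- Expansion fails if any opcode has no mapping.
expand :: Microcode -> [Opcode] -> Maybe [UOpcode]
expand (Microcode m) ops = concat <$> traverse (`Map.lookup` m) ops

-- A version-0xce SCRIPT is invalid if its microcode hash is
-- unknown or expansion fails.
runScript :: MicrocodeIndex -> Hash256 -> [Opcode]
          -> Transaction -> TxInput -> Maybe (Interpreter ())
runScript index h ops tx txin = do
  mc   <- Map.lookup h index
  uops <- expand mc ops
  pure (interpreter tx txin uops)
```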

Optimizing Microcodes
---------------------

Suppose there is some new microcode that users have
published onchain.

We want to be able to execute the defined microcode
faster than expanding an `OP_`-code SCRIPT to a
`UOP_`-code SCRIPT and having an interpreter loop
over the `UOP_`-code SCRIPT.

We can use LLVM.

WARNING: LLVM might not be appropriate for
network-facing security-sensitive applications.
In particular, LLVM bugs, especially nondeterminism
bugs, can lead to consensus divergence and disastrous
chainsplits!
On the other hand, LLVM bugs are compiler bugs, and
the same bugs can hit the static compiler `cc`, too,
since the same LLVM code runs in both JIT and static
compilation, so this risk already exists for Bitcoin.
(i.e. we already rely on LLVM not being buggy enough
to trigger Bitcoin consensus divergence, else we would
have written the Bitcoin Core SCRIPT interpreter in
assembly.)

Each `UOP_` code has an equivalent tree of LLVM code.
For each `Opcode` in the microcode, we take its
sequence of `UOpcode`s and expand each to its
equivalent tree, concatenating the trees in sequence.
Then we ask LLVM to JIT-compile this code to a new
function, running LLVM-provided optimizers.
Then we put a pointer to this compiled function into a
256-entry array of functions, where the array index is
the `OP_` code.

The SCRIPT interpreter then simply iterates over the
`OP_` code SCRIPT and calls each of the JIT-compiled
functions.
This reduces much of the overhead of the `UOP_` layer
and makes it approach the current performance of the
existing `OP_` interpreter.
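
In terms of the types above, the result of compilation can
be pictured as a dispatch table, with `Handler` standing in
for a pointer to a JIT-compiled function (a sketch):

```Haskell
import qualified Data.Vector as V

type Handler = Transaction -> TxInput -> Interpreter ()

-- A compiled microcode is a 256-entry dispatch table.
newtype CompiledMicrocode = CompiledMicrocode (V.Vector Handler)

-- The interpreter loop: look up and call the compiled
-- function for each OP_ code in turn.
runCompiled :: CompiledMicrocode -> Transaction -> TxInput
            -> [Opcode] -> Interpreter ()
runCompiled (CompiledMicrocode table) tx txin =
  mapM_ (\(Opcode b) -> (table V.! fromIntegral b) tx txin)
```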

For the default Bitcoin SCRIPT, the opcodes array
contains pointers to statically-compiled functions.
A microcode that is based on the default Bitcoin
SCRIPT copies this opcodes array, then overwrites the
entries it redefines.

Future versions of Bitcoin Core can "bless"
particular microcodes by providing statically-compiled
functions for those microcodes.
This leads to even better performance (there is
no need to recompile ancient onchain microcodes each
time Bitcoin Core starts) without any consensus
divergence.
It is a pure optimization and does not imply a
tightening of rules, and is thus not a softfork.
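
A sketch of that startup decision, with `jitCompile` an
assumed LLVM-backed compiler as described above:

```Haskell
-- Prefer a statically-compiled ("blessed") handler table
-- when one is built in for this microcode hash; otherwise
-- JIT-compile.  Either way the semantics are identical.
compileMicrocode :: Map.Map Hash256 CompiledMicrocode
                 -> Hash256 -> Microcode -> IO CompiledMicrocode
compileMicrocode blessed h mc =
  case Map.lookup h blessed of
    Just cm -> pure cm        -- blessed: no recompilation
    Nothing -> jitCompile mc  -- assumed LLVM JIT
```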

(To reduce the chance of network-facing bugs being
exploited to poke into `W|X` memory (since `W|X` memory
is needed in order to actually JIT-compile), we can
isolate the SCRIPT interpreter into its own process,
separate from the network-facing code.
This does imply additional overhead in serializing
transactions we want to ask the SCRIPT interpreter
to validate.)

Comparison To Jets
------------------

This technique allows users to define "jets", i.e.
sequences of low-level general operations that users
have determined are common enough they should just
be implemented as faster code that is executed
directly by the underlying hardware processor rather
than via a software interpreter.
Basically, each redefined `OP_` code is a jet of a
sequence of `UOP_` micro-opcodes.

We implement this by dynamically JIT-compiling the
proposed jets, as described above.
SCRIPTs using jetted code remain smaller, as the
jet definition is done in a previous transaction and
does not require copy-pasta (Do Not Repeat Yourself!).
At the same time, jettification is not tied to
developers, thus removing the need to keep softforking
new features --- we need only define a sufficiently
general language, and then we can implement pretty much
anything worth implementing (and a bunch of other things
that should not be implemented, but hey, users gonna
use...).

Bugs in existing microcodes can be fixed by basing a
new microcode from the existing microcode, and
redefining the buggy implementation.
Existing Tapscripts need to be re-spent to point to
the new bugfixed microcode, but if you used the
point-spend branch as an N-of-N of all participants
you have an upgrade mechanism for free.

In order to ensure that the JIT-compilation of new
microcodes is not triggered trivially, we require
that users petitioning for the jettification of some
operations (i.e. introducing a new microcode) must
sacrifice Bitcoins.

Burning Bitcoins is better than increasing the weight
of microcode introduction outputs; all fullnodes are
affected by the need to JIT-compile the new microcode,
and all of them benefit from the reduction in supply,
which compensates them for that work.
Other mechanisms for making microcode introduction
outputs expensive are also possible.

Nothing really requires that we use a stack-based
language for this; any sufficiently functional language
should provide the needed referential transparency.