Brainstorming Script VM Architecture and Upgrade Strategy

The '22 and '23 upgrades got us a great deal of functionality; however, they also made builders aware of many other limitations, such as script size limits and the lack of some basic operations like bit shifts. Which brings me to raise a general question:

Why was the VM left incomplete? There are certain operations, common to almost all programming languages, that are considered pretty standard. Why didn’t we just complete the VM with ALL of them in one sweep?

Also, working with numbers is sometimes frustrating because there’s no uint type. I think this is solvable: we could define one NOP opcode as a VM int/uint mode toggle, which would modify the behavior of all arithmetic opcodes.
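To make the toggle idea concrete, here’s a minimal sketch in Python (OP_TOGGLEUINT is a hypothetical name; the real VM’s range handling is more involved):

```python
# Toy model of a repurposed-NOP mode toggle (hypothetical opcode name
# OP_TOGGLEUINT; Python stands in for the Script VM).

INT64_MIN, INT64_MAX = -(2**63), 2**63 - 1
UINT64_MAX = 2**64 - 1

class ToyVM:
    def __init__(self):
        self.stack = []
        self.unsigned_mode = False  # default: signed, as today

    def op_toggleuint(self):
        # the repurposed NOP: flip how arithmetic opcodes interpret operands
        self.unsigned_mode = not self.unsigned_mode

    def op_add(self):
        b, a = self.stack.pop(), self.stack.pop()
        result = a + b
        lo, hi = (0, UINT64_MAX) if self.unsigned_mode else (INT64_MIN, INT64_MAX)
        if not lo <= result <= hi:
            raise ValueError("arithmetic out of range: script fails")
        self.stack.append(result)

vm = ToyVM()
vm.stack = [INT64_MAX, 1]
vm.op_toggleuint()  # uint mode: INT64_MAX + 1 is representable
vm.op_add()
print(vm.stack)     # [9223372036854775808]
```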

Also, I’ve seen that some builders would like some aggregate operations. There are TX aggregates that validating nodes have to compute anyway. Exposing these values through some new opcodes would add negligible cost IMO, so why not do it? Remove as many bottlenecks as we can so downstream builders can build faster and more easily.
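As a sketch of why the marginal cost could be small (Python for illustration; the aggregate opcode name is hypothetical): the node already sums input and output amounts to check the fee, so such an opcode would mostly push a number the node has on hand.

```python
from types import SimpleNamespace

def validate_amounts(tx):
    total_in = sum(i.value for i in tx.inputs)    # nodes compute this anyway
    total_out = sum(o.value for o in tx.outputs)  # and this
    if total_out > total_in:
        raise ValueError("outputs exceed inputs")
    tx.cached_input_sum = total_in  # retain for script evaluation
    return total_in - total_out     # the fee

def op_inputvaluesum(stack, tx):
    # hypothetical aggregate opcode: O(1) at script time, since the
    # sum was already produced during ordinary validation
    stack.append(tx.cached_input_sum)

tx = SimpleNamespace(
    inputs=[SimpleNamespace(value=v) for v in (5000, 3000)],
    outputs=[SimpleNamespace(value=7500)],
)
print(validate_amounts(tx))  # 500 (the fee)
stack = []
op_inputvaluesum(stack, tx)
print(stack)                 # [8000]
```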

Then, what’s the lowest common denominator of int64-architecture CPUs & GPUs? Couldn’t we work towards better mapping to native instructions? I’ve seen that some other networks are working towards a RISC-V VM for their needs.

I’m not saying “hey, let’s YOLO and make a big and drastic change now!”. I’m just interested in the overall strategy; then we can work towards slowly adding the pieces.

3 Likes

Indeed, it would be good to re-assess where we are currently, what we are missing, and what we will predictably need. We’re discussing proposals for 2025 (which is still way out), so I hope we can be ambitious in our innovations.

I think that for the smart contract side, we should strive for three items in 2025 which complement each other in very important ways:

Targeted VM limits will allow for emulating mathematical opcodes such as muldiv, mulmod, adddiv & addmod, which would enable int125 addition and multiplication.

We see that these types of mathematical operations are also emulated in Solidity by math libraries. The math libraries allow contract developers to work with fractions, and they emulate functionality such as pow, log & root. This emulation utilizes bounded loops & bitwise shift operations. int125 division is also enabled by these three proposals together (but not by just any two of them).
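To make the emulation pattern concrete, here is a minimal sketch of integer pow built from just a bounded loop and bit shifts (square-and-multiply), with Python standing in for Script; overflow checks are omitted for brevity:

```python
def ipow(base, exp, max_bits=63):
    # square-and-multiply: loops at most max_bits times, mirroring a
    # bounded Script loop over the bits of the exponent
    result = 1
    for _ in range(max_bits):
        if exp & 1:       # lowest exponent bit set?
            result *= base
        exp >>= 1         # bitwise right shift
        if exp == 0:
            break
        base *= base      # square for the next bit
    return result

print(ipow(3, 7))   # 2187
print(ipow(2, 10))  # 1024
```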

Enabling math libraries is crucially important for compound-interest DeFi applications and the more complex AMM curves, as well as for simple proportions and percentages.

By enabling bounded loops as a primitive, contract authors would be able to access the introspection aggregates without any other specific changes.
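For example (Python standing in for Script; OP_UTXOVALUE is the existing per-input introspection opcode), a contract could sum input values with a loop bounded by the TX’s input count:

```python
def total_input_value(input_count, utxo_value):
    # builds an aggregate from existing per-input introspection:
    # each iteration models `<i> OP_UTXOVALUE OP_ADD`
    total = 0
    for i in range(input_count):  # bounded by the TX's input count
        total += utxo_value(i)
    return total

values = [5000, 3000, 2500]
print(total_input_value(len(values), lambda i: values[i]))  # 10500
```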

For me, as a builder interested in building advanced, complex applications like those we see on Ethereum, the low (and badly designed) VM limits and the limited set of mathematical tools are the most severe bottlenecks.

4 Likes

I need to preface this with the point that Bitcoin Script is not a “programming language”. I think the term “Virtual Machine” is confusing in this context and was applied much later.

Satoshi used the term “predicate”, and he made it more generally accessible by calling it “script”. I’ve written a dozen scripting languages (and interpreters), and they are by default specific to the target use case. Most scripting engines do not actually have math or for loops. A CMake script is meant to complete a build. A Bitcoin script is meant to construct a predicate.

Which basically means that I disagree with your premise. There is no intention at all to make Bitcoin Script a programming language, and there is no benefit to mapping individual opcodes to CPU instructions.

None of this means I have any problem with making the scripting engine more useful for specific use cases. Allowing users to create more economic activity on-chain with extra opcodes is interesting to reflect on and will likely gather wider support, depending on how big the use case is.

But at the same time, I do object to re-interpreting the predicate system in a way that allows “devs to dev” and just adds a load of opcodes because it feels good. Instead, the original point of CHIPs stands: provide an actual end-user-visible use case of what you want to do, and maybe we can add opcodes to make that work.

4 Likes

Thanks for pointing that out, that’s an important distinction. Programming languages have the power to do things, like affect the state of some system, whereas our predicate system’s task is only to verify whether a proposed change satisfies the constraints encoded by Script.

BTW I found only one mention of “predicate” in Satoshi’s archives:

If someone wants the possibility of chargeback, they can use an escrow transaction, which isn’t implemented yet but will be one of the next things. For instance, a transaction can be written to designate a third party to decide whether it is returned if the payer does not release it, with auto-release after a number of days. I’ll implement a more basic form of escrow first, but the network infrastructure includes a predicate language that can express any number of options.

What is a predicate? I found a nice definition here:

A predicate asks a question where the answer is true or false or, said another way, yes or no.
In computer science and in mathematics, this question comes in the form of a function. The result of the function is true or false (yes or no). The data type of the answer, again both in mathematics and in computer science is called a boolean.

and, in the context of our Script VM, the predicate’s question is always: can this input be spent in this TX?
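In code terms, a predicate is just a boolean-valued function. A toy spend predicate might look like this (illustrative Python; the arguments are stand-ins for whatever the script actually checks):

```python
def can_spend(signature_ok: bool, locktime_reached: bool) -> bool:
    # answers yes/no: may this input be spent in this TX?
    return signature_ok and locktime_reached

print(can_spend(True, True))   # True
print(can_spend(True, False))  # False
```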

I think the fex.cash whitepaper is helpful here, as it describes how this works in simpler terms (a checklist):

Bitcoin Cash covenants specify what the new UTXOs should be like based on the given input UTXOs, so their code is like a “checklist”. The new UTXOs’ assets, codes and data can be arbitrarily chosen if they are not checked in the checklist.

Agreed. Then I should reword: we should strive to make our predicate system expressive enough, flexible enough, and compact enough.

I imagined the benefit would be performance in evaluating the predicates; however, it doesn’t seem like our VM is the validation bottleneck, so I guess it’d be premature optimization.

2 Likes

Please finish the sentence. Enough for what?

Maybe we agree; see my earlier writing:

1 Like

Why didn’t we just complete the VM with ALL of them in one sweep?

You are clearly aware of this, given your NO YOLO comment at the end, but for the audience: the simple answer is that any such large-scale change is too risky due to the combinatorial explosion of both complexity and consequences, especially potentially negative ones. Even things that look simple or independent can have complex interactions and consequences (technical, DOS, exploits, mining, social consensus, medium-term impact, long-term impact, scalability, …). So there is not really such a thing as a simple change when we are talking about one-way streets that everyone’s money needs to drive down.

For example, one really uncomfortable thing that these efforts will run into is the interaction with the standard mining fee rate.

There are TX aggregates that validating nodes have to compute anyway. Exposing these values through some new opcodes would add negligible cost IMO, so why not do it?

One answer is that things that look logical from the outside may not be so in practice. It really depends on the software architecture of nodes. In other words, the initial cost might be really large. Maybe not! Just one potential answer to the question.

I’m just interested in the overall strategy; then we can work towards slowly adding the pieces

I would absolutely love to read some articles from someone who has the experience and confidence to lay out a long term VM plan that paves the way for global demand. I don’t have that myself.

For my part, I haven’t personally felt a need / demand for more upgrades, but they could very well be there. I haven’t been looking either. Do you happen to know if any of these networks implementing CPU/GPU-esque instruction sets have good documentation about the benefits / risks?

Some additional notes that I think are relevant:

  • For introspection (and maybe 64 bit too? I forget), GP wrote an article that showed a side-by-side estimate of the difference the upgrade would make to the AnyHedge contract. It helps to make the benefit concrete. If something is not possible, then showing how it would work with an upgrade would still be helpful in terms of technicals and value creation (utility / demand).
  • Someone might consider making a new CHIP that consolidates changes which must happen together into a single package. Nobody has done exactly that before, but it might be meaningful here vs. separate CHIPs with separate owners that in reality would need to be strongly coupled. It might also be a bad idea, depending on the contents. Just food for thought.
  • Just to say that there are many options for how to proceed: if a CHIP owner is not in a position to keep pushing a CHIP forward, or no longer wants to, a new owner can certainly clone and re-propose the CHIP (new date/name etc.). It might also be possible to agree on a symbolic transfer of ownership of an existing CHIP, to keep things simple :man_shrugging: .
3 Likes

On process, I think the current CHIP process has worked well. Each idea has been worked out and sold to the community atomically.

I do not like the idea of batching or combining CHIPs for different topics, if they’re indeed separate topics or domains, unless the topics are collapsed with a more elegant solution.

However, as a survey of the unused “real estate” in the opcode byte table:

  • There’s 0xBD-0xBF (3 slots at end of crypto functions block)

  • There are about a half dozen disabled or disused codes peppered throughout the current codes that may someday be repurposed somehow for one-off operations.

  • The Bounded looping CHIP proposes reusing OP_VERIF and OP_VERNOTIF (0x65-0x66).

  • Shift operations already have codes (0x98-0x99).

  • 0xD4-0xFF (44 slots) is the last contiguous block of unused codes.

On big-picture planning, it might be swell to leave the 0xBD-0xBF space at the end of the crypto section empty for quantum-resistant functions coming down the pipeline. Perhaps that was the intent.

Introspection/CT ends at 0xD3, so it might be swell to leave a couple of slots to extend introspection and CT with aggregation if that’s a priority, say 0xD4-0xD7? [It appears the Unlimited team used a single opcode plus a one-byte case switch to expose 8 transaction state slots beyond introspection.]

If introspection took another 4 codes, that would leave about 40 codes remaining (0xD8-0xFF).

It would be nice to always have some codes reserved for emergencies, for addressing larger opcode spaces, or perhaps for generalized layer-2 solutions (if ever needed). Say we loosely aim to save the F block (0xF0-0xFF, 16 codes) for unforeseeable “what if” situations down the road.

That would leave a budget of 24 codes (0xD8-0xEF) to complete the VM without getting too creative or weird.

Rather than spending the entire 24-code budget at once, it might be nice to treat the real estate as more expensive as scarcity increases.

If bit shifts and loops don’t need new codes, it might be nice to budget, at most, about half of the “free” codes to “complete” the VM.

Can the VM be “completed” in a single sprint for 2025 using about 16 new codes? It seems like aggregated introspection (4-8) and muls (4) might require 8-12 slots. That would leave 16 codes permanently reserved and 12 still available to “finish” the VM in a later iteration.
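A quick sanity check of the slot arithmetic above (Python; ranges inclusive):

```python
def slots(lo, hi):
    return hi - lo + 1  # inclusive range size

print(slots(0xD4, 0xFF))  # 44: last contiguous unused block
print(slots(0xD8, 0xFF))  # 40: after granting introspection 0xD4-0xD7
print(slots(0xF0, 0xFF))  # 16: the reserved "F block"
print(slots(0xD8, 0xEF))  # 24: the remaining working budget
```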

2 Likes

PSA: @mainnet_pat has been working on some VM improvements on our cousin chain; maybe we could make use of them too:

1 Like