Wider discussion: an OP_ENV for VM upgrades

During the design and brainstorming phase of the milli-sats CHIP we stumbled upon the need for existing opcodes from the native-introspection set to slightly change behavior. This leads to the question of how to do this in a backwards-compatible way.

And that led to a wider discussion of backwards compatibility in the Bitcoin Script environment.

Up until now this has been solved on an individual basis. Those solutions range from Schnorr signatures being detected by the size of the signature, which frankly is a hack, to authors of various other technologies basically stating that an upgrade path is not possible because it would not be backwards compatible.

So, during our talks came the idea of a new opcode, OP_ENV, which is short for environment version.

The basic concept is that a script can push a version (OP_2 OP_ENV) and we tie protocol-upgrade-specific behaviors to that version.

In other words; the behavior of specific opcodes may be altered based on which VM-Environment version is set.
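To make the idea concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the function name, the default version, and the idea that version 2 switches value introspection to milli-satoshis; nothing like this is specified anywhere yet.

```python
# Hypothetical sketch: opcode semantics keyed on the VM-environment version a
# script declares via OP_ENV. Names and the version->behavior mapping are
# invented for illustration.
def op_utxovalue(utxo_value_sats: int, env_version: int) -> int:
    """Value that OP_UTXOVALUE would push under a given environment version."""
    if env_version >= 2:
        # Assumed version-2 environment: introspection reports milli-satoshis.
        return utxo_value_sats * 1000
    # Version 1 (default) environment: legacy behavior, plain satoshis.
    return utxo_value_sats
```

The point is that the opcode byte stays the same; only the declared environment selects which semantics the interpreter applies.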

Now, to be clear, this IS NOT REQUIRED for the milli-satoshis CHIP. We can make do with a piecemeal solution (op_milli-sats-enabled), but I think it makes sense to take a step back and ask whether we should make our VM, and Bitcoin Script with it, more future-proof.

Here are further examples: Re-visit the opcode - bitcoincash/CHIP-MilliSatoshi - Codeberg.org


This is an interesting thought. How do you foresee this being used? Would it be required to be the first opcode executed in a script?

You could also potentially tie the script version to the tx version, now that tx versions are locked, assuming this convention for interpreting versions is followed:

but this is probably messier than your op_env solution

This is how I defined it:

This opcode, when needed, shall be present once in the script initialization. Script initialization is the start of the script; it can contain pushes and it can contain this new ENV opcode. The script initialization ends when any other opcode is encountered by the interpreter.

Combined with the rule that inputs can only contain pushes, that means it is indeed effectively the first opcode of an output script. But technically some pushes could come before it. I don’t see a reason to make it too strict; the rule seems generic enough to avoid issues.
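The initialization rule described above could be scanned like this. This is only a sketch under stated assumptions: pushes are modeled as plain ints, the default version when OP_ENV is absent is assumed to be 1, and OP_ENV is assumed to consume the most recent push as its version argument.

```python
def parse_env_version(ops):
    """Return the environment version declared in script initialization.

    Script initialization = the leading run of pushes plus at most one
    OP_ENV; it ends at the first other opcode. Pushes are modeled as ints.
    """
    version = 1          # assumed default when no OP_ENV is present
    seen_env = False
    last_push = None
    for op in ops:
        if isinstance(op, int):      # a push of a number
            last_push = op
        elif op == "OP_ENV":
            if seen_env or last_push is None:
                raise ValueError("OP_ENV duplicated or missing a version push")
            version = last_push      # OP_ENV consumes the preceding push
            seen_env = True
            last_push = None
        else:
            break                    # initialization ends at any other opcode
    return version
```

For example, `parse_env_version([2, "OP_ENV", "OP_DUP"])` yields 2, while a script that never mentions OP_ENV falls back to the default.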

I thought that too, but BCA correctly pointed out that this gets tricky with CashTokens-style covenants where the script is to be copied exactly to the next transaction. Using the tx-version breaks layers, and in that case it could create bad side-effects.

Which is how we got to the idea of an op_env.


To give a rough example for others (because this wasn’t obvious to me either and I also thought Tx version would suffice):

  1. Imagine you have a script on a V2 transaction that uses OP_UTXOVALUE (to get the satoshi value of the first UTXO used such that, to unlock, 1st Output Satoshis MUST EQUAL 1st Input Satoshis).
  2. Then imagine someone unlocks this in a V3 transaction (that uses millisats).
  3. Technically, the Unlock Script provided in the V3 transaction would match the Lockscript of the V2 Transaction, making it valid, but…
  4. The person using the V3 transaction would short-change the intended value in the V2 transaction by 1000x (because OP_UTXOVALUE would be receiving Sats value as opposed to milliSats). Thus, OP_UTXOVALUE (V2 - which is in Sats) === OP_OUTPUTVALUE (V3 - which is in mSats).
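The steps above can be put into worked numbers. This is illustrative only, and it assumes the scenario exactly as described: the raw UTXO amount stays stored in satoshis while the V3 interpreter reads output amounts in milli-satoshis.

```python
# Worked numbers for the short-change scenario (all values illustrative).
# The V2 covenant's intent: output[0] carries as many satoshis as input[0].
input_value = 100_000              # what OP_UTXOVALUE pushes: sats, per V2

output_sats = 100                  # the attacker pays only 100 sats...
output_value = output_sats * 1000  # ...which OP_OUTPUTVALUE reports as
                                   # 100_000 milli-sats, per V3

# The covenant's equality check passes even though the output carries
# 1000x fewer satoshis than the covenant author intended:
assert input_value == output_value
```

Same comparison, same numbers on the stack, but the two sides are in different units, which is exactly the 1000x short-change.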

Hope I’ve understood that correctly. The proposal for OP_ENV is to cover other use-cases that we may want in future that would have a similar problem. Having ${someVersion} OP_ENV would change the lockscript hash, ensuring that scripts could only be run with the VM version that they were specifically created with.

As a feature, it would allow us to change the behaviour (and therefore version) of existing opcodes (e.g. if OP_ENV == 2, OP_UTXOVALUE and OP_OUTPUTVALUE are in mSats).

Not saying this is a good solution but, just so that we’ve explored other possibilities: is on-the-fly conversion based on current Tx Version + UTXO version no good (messier/riskier/incompatible with what we’d want in future)?

For example:

  1. For each UTXO provided in a transaction, we also pass the UTXO’s Tx Version.
  2. If the transaction it is being included in is (e.g.) V3…
  3. Then we cast the UTXO values to their V3 equivalent on-the-fly:
if (thisOpCode == OP_UTXOVALUE) {
  // If the current transaction is V3 AND the UTXO we're processing is < V3...
  if (thisTx.version == 3 && thisUtxo.version < 3) {
    // Multiply the V2 UTXO value by 1000 to get mSats.
    return thisUtxo.value * 1000;
  }

  // Otherwise, just return the UTXO value as it's already in mSats.
  return thisUtxo.value;
}

The only benefit I can see in an approach like the above is that we’d shave two bytes (${version} OP_ENV) off any transaction unlocking script that wanted to use future VM versions. Feels a bit spaghetti’ish and maybe there’d be values that just cannot be cast like that too?

EDIT: Replaced “transaction” with “unlocking script” - there might be several unlocks using OP_ENV per tx. I’m still very much leaning towards OP_ENV being the best solution.

It might even fail in a series of transactions: first one v2, then more than one v10.

Yeah, I think it’s probably a bad approach. It becomes convoluted and you end up with a dirty matrix of what casts would be necessary between particular Op Codes/Tx Versions. And we might end up with some casts that simply won’t be compatible/possible in future.

OP_ENV feels like a far cleaner solution to me.


Environment seems like the wrong term here. Perhaps “VM version” might be a better term. Executables have versions whereas environments have names. The VM is inherently versioned so it could be an interesting idea to formalise that and expose the different versions in Script.

The simplest solution to the problem as stated would be to define new opcodes with the newly desired behaviours. Are you concerned about running out of opcodes? How many would need to be changed?

I think OP_TXVERSION could be used to mitigate this by ensuring that transactions interacting with the contract conform to the desired version with the desired opcode behaviour.
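The mitigation described here could look something like the following in a contract. This is only a sketch: OP_TXVERSION and OP_NUMEQUALVERIFY exist in the BCH introspection/opcode set, but the exact idiom and encoding details are glossed over.

```
OP_TXVERSION OP_3 OP_NUMEQUALVERIFY   # abort unless spent in a v3 transaction
```

Any transaction with a different version would fail the verify, so the contract would only ever execute under the opcode semantics it was written for.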

Update: I’ve now read the subsat CHIP pre-release and understand the problem, and that it’s 2 opcodes (OP_UTXOVALUE / OP_OUTPUTVALUE). I’ll leave further comments on the thread for that CHIP.

The term ‘environment’ is something we stole from operating-systems technology. Any task (or application) has its own environment, and environment variables adjust its behavior. It seemed an apt concept to adopt. The initial proposal would most likely simply say that the only valid combinations are OP_1 OP_ENV and OP_2 OP_ENV (see also the op_x doc).

That is indeed the basic trade-off we are looking at, and the basic premise of this thread: which of the two main approaches makes the most sense?

Naturally it is possible to add the two opcodes that deal with satoshi values and make them use milli-satoshis. It is simple, but is it best?

There are various trade-offs here. If you have both some VALUE and some MICRO_VALUE opcodes, then that implies you can use them in the same script. Is that powerful, or is that just asking for trouble?

Another trade-off is that we’ll most likely end up upgrading the math capabilities to 128-bit math. It is inevitable that this will become cheap to do on CPUs, and that will be reflected in Script. There are a bunch of opcodes this applies to. We just upgraded from 32-bit to 64-bit and we haven’t heard anyone scream. But isn’t it much better to avoid POSSIBLE incompatibilities? That would be a lot of new opcodes, or one new ‘version’ definition as an argument to OP_ENV.

Most of the people I talked to are leaning in the OP_ENV direction, so if you know of anything that would end up being an issue, it would be great to share it here.
I just want to be sure no problems are created or overlooked.
