Invalidation of existing nonstandard-redemption UTXOs and transactions: Perspective from context of the proposed 2025 VM-Limit CHIP

Summary of problem

Bitcoin (Cash) has generally upheld the principle that existing and currently-creatable UTXOs will remain redeemable, as spending of UTXOs is a core premise of financial sovereignty. Does this rule have limits? Where is the line drawn in the face of consensus changes? While there were numerous previous cases, as we will see below, they were less obvious; CHIP-2021-05-vm-limits brings this question into sharp focus.

The context

Most of the existing limits modified or replaced by the VM-Limits CHIP are consensus rules, but one subset stands out: signature operations as defined in the 2020-05-15 SigChecks upgrade. Limits on signature operations are significantly more relaxed for nonstandard transactions, which are subject only to crude per-transaction and per-block sigchecks limits, versus standard relayed transactions, which are additionally subject to a per-input density limit.
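To make the two regimes concrete, here is a minimal sketch of the checks described above. The constants reflect my reading of the 2020-05-15 SigChecks specification (per-transaction cap of 3000, block cap of blocksize/141, per-input density of (scriptSig length + 60)/43); verify them against the spec before relying on them.

```python
MAX_TX_SIGCHECKS = 3000            # crude per-transaction consensus limit
BLOCK_BYTES_PER_SIGCHECK = 141     # block consensus limit: blocksize // 141

def tx_sigchecks_ok(tx_sigchecks: int) -> bool:
    """Consensus rule: applies to all transactions, standard or not."""
    return tx_sigchecks <= MAX_TX_SIGCHECKS

def block_sigchecks_ok(block_size_bytes: int, block_sigchecks: int) -> bool:
    """Consensus rule: total sigchecks in a block scale with block size."""
    return block_sigchecks <= block_size_bytes // BLOCK_BYTES_PER_SIGCHECK

def input_density_ok(scriptsig_len: int, input_sigchecks: int) -> bool:
    """Standardness (relay) rule only: per-input sigcheck density limit."""
    return input_sigchecks <= (scriptsig_len + 60) // 43
```

Note the asymmetry: a transaction mined directly (nonstandard path) only faces the first two crude limits, while a relayed transaction additionally faces the per-input density check.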

The VM-limits proposal would continue to permit all currently allowed standard transactions - with some extreme theoretical edge cases relegated to nonstandardness. Discussion of that relegation is not the focus of this writeup, but we do touch on it at the end.

Due to its focus on allowing currently-standard transactions, some nonstandard transactions currently accepted in blocks (but not relayed) may no longer be accepted post-upgrade.

It is unknown how widespread such cases are; they tend to be difficult to track down because they remain hidden behind P2SH until redeemed. Furthermore, they are unlikely to be part of any popular usecases, as popular usecases typically require transactions to be relayable with reasonable reliability, which rules out nonstandard transactions.

Can we reasonably call it “confiscating money”? General lines of reasonableness

A particularly provocative version of the objection to applying the VM-Limits upgrade as it stands goes as follows:

  1. There exists a class of nonstandard transactions that would become invalid post-upgrade.

  2. Due to the nature of P2SH, it’s impossible to know for sure if any UTXOs that exist today are only redeemable with this class of transactions. New ones may even be created today, or existing ones revealed in public channels.

  3. An upgrade that makes these UTXOs irredeemable is effectively “confiscation”, violating a fundamental principle of Bitcoin in general and Bitcoin Cash in particular, whose split was partly premised on the notion that fees high enough to make UTXOs practically irredeemable are unjustifiable.

This line of logic would be sound if Bitcoin (Cash) had never done transaction softforks in its 15-year history; in other words, had never invalidated any previously valid transactions. Note that by the strictest definition this doesn’t just apply to potential onchain UTXOs becoming irredeemable, but also to transactions that could be remade and re-signed to claim UTXOs. This is because users could potentially be holding onto pre-signed but unbroadcast transactions as a way to conceal their ownership of coins; if they do not have access to re-signing facilities, invalidating the transactions they hold makes the coins irredeemable in practice as well.

To expand further, such withheld transactions could themselves output to UTXOs that become irredeemable post-upgrade. Fun!

However, as we can easily show, this line of logic does not hold: it has been violated several times throughout Bitcoin (Cash) history. If such a strict principle ever existed, it hasn’t been upheld with any rigor whatsoever.

Consensus transaction softforks: Some notable instances

We exclude any emergency-bugfix instances and include only “upgrades” whose purpose was to improve bitcoin’s functionality.

2010 (UTXOs, withheld transactions): “Just in case” panic disabling of math opcodes. While it could be argued that this was part of an urgent bugfix, there was no real justification for not constraining the disabling to a smaller set.

2012 (UTXOs, withheld transactions): BIP16 P2SH. New rules for a particular shape of output, which must be redeemed through an evaluated redeemScript in standard transactions. It is possible, though with no known usecases then or now, that some UTXOs or UTXOs-of-withheld-transactions became irredeemable (albeit previously nonstandard) because their corresponding script cannot evaluate to true.

2015 (withheld transactions): BIP66 DER signatures. While it did not make any UTXOs strictly irredeemable, it is possible that some withheld transactions needed to be re-signed, possibly constituting “confiscation” by some standards.

2015, 2016 (UTXOs, withheld transactions): BIP65/68/112 CLTV and CSV. Some OP_NOPs were consumed as new opcodes. Previously valid but nonstandard transactions were made invalid, while the scope of standard transactions was expanded.

—This is post BCH split—

2017 (withheld transactions): UAHF SIGHASH_FORKID. BTC transactions are not valid on BCH; any withheld standard transactions could not claim funds on the new chain.

2017 (UTXOs, withheld transactions): BIP62 LOW_S and NULLFAIL. Previously valid but nonstandard transactions were made invalid.

2018 (UTXOs, withheld transactions): BIP62 PUSHONLY and CLEANSTACK. Previously valid but nonstandard transactions were made invalid.

2019 (UTXOs, withheld transactions): BIP62 MINIMALDATA. Previously valid but nonstandard transactions were made invalid.

2023 (UTXOs, withheld transactions): CHIP-2021-01 Restrict Transaction Version. Previously valid but nonstandard transactions were made invalid.

2023 (UTXOs, withheld transactions): CHIP-2022-05 Pay-to-Script-Hash-32 (P2SH32). New rules for a particular shape of output, which must be redeemed through an evaluated redeemScript in standard transactions. Pre-upgrade, such outputs were theoretically spendable via nonstandard transactions without evaluating the redeemScript.

2023 (UTXOs, withheld transactions): CHIP-2022-02 CashTokens. New rules for a particular shape of output (PATFOs) which, if they existed before the upgrade, became strictly unspendable post-upgrade. Pre-upgrade, PATFOs could theoretically be spent via nonstandard transactions.
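The “particular shape of output” that triggers the P2SH and P2SH32 redemption rules mentioned above can be sketched as simple byte-pattern checks. The BIP16 pattern is well documented; the P2SH32 pattern reflects my reading of the CHIP (OP_HASH256 with a 32-byte hash) and should be verified against it.

```python
# Opcode bytes involved in the two pattern checks.
OP_HASH160 = 0xA9
OP_HASH256 = 0xAA
OP_EQUAL = 0x87

def is_p2sh(script: bytes) -> bool:
    """BIP16: OP_HASH160 <20-byte hash> OP_EQUAL (23 bytes total)."""
    return (len(script) == 23 and script[0] == OP_HASH160
            and script[1] == 0x14 and script[22] == OP_EQUAL)

def is_p2sh32(script: bytes) -> bool:
    """P2SH32 CHIP: OP_HASH256 <32-byte hash> OP_EQUAL (35 bytes total)."""
    return (len(script) == 35 and script[0] == OP_HASH256
            and script[1] == 0x20 and script[34] == OP_EQUAL)
```

Any output matching one of these exact patterns must, post-upgrade, be spent by revealing and evaluating the committed redeemScript; this is why such upgrades could, in principle, strand a pre-existing output whose committed script cannot evaluate to true.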

A simple test: Prior Standardness

“Keep user funds redeemable both in theory and in practice! Your keys, your coins!” is an obviously noble and, some would say, even sacred principle of Bitcoin (Cash). So how does one reconcile this principle with the numerous violations above?

With some exceptions - one in 2010, one involving explicitly reserved opcodes, and one at the BCH split - all of the above historical cases invalidated only nonstandard transactions. An argument could be made that we have a long-running historical precedent of upgrades consuming parts of the nonstandard space to facilitate new functions. If there was ever any desire to preserve the sanctity of nonstandard transactions or UTXOs, that ship has long sailed.

There’s an underlying logic to this: Nonstandard transactions are generally difficult to use as they’re not relayed, so they’re not expected to be part of any popular or even reasonably common niche usecases. Redeemability expectations are therefore also correspondingly lower.

It would therefore seem reasonable to state that transactions and UTXOs nonstandard prior to a given upgrade are generally fair game for constraining in the upgrade.

The case of segwit recovery

This long-running history includes one notable case where a problem arose from invalidating nonstandard transactions: the segwit recovery invalidation caused by the BIP62 CLEANSTACK upgrade in 2018. Segwit recovery is a special case where some nonstandard transactions, existing as an artifact of the BCH split, did find a major miner-enabled usecase. In hindsight, not accounting for this usecase was a major oversight of the CLEANSTACK upgrade.

No comparable nonstandard-transaction usecases exist on BCH today other than segwit recovery itself, though - so it is unlikely that VM-Limits would create complications of a similar sort.

Making previously standard transactions nonstandard

If we go by this longstanding historical precedent, though, one particular complication may occur: moving previously standard transactions into nonstandardness lowers their expectation of protection, making them more likely to be invalidated in future upgrades. This has not been a notable problem thus far, but CHIP-2021-05-vm-limits may be the first upgrade since SIGHASH_FORKID to bring it into serious consideration.

The range of standard transactions affected has no known usecases, so this is unlikely to pose a problem, but it did require some clarification, as intentionally relegating standard transactions to nonstandardness has no non-crisis precedent. CHIP-2021-05-vm-limits did so by establishing a genuinely new precedent:

“By extension, it should be noted that any abusive behavior made nonstandard by this proposal is a candidate for full invalidation in future upgrades.”

This resolves the problem by effectively stating that “for standard transactions with no known usecases, relegation to nonstandardness is a valid path to future deprecation”. In my opinion this is a sensible and satisfying resolution.

10 Likes

Wow, this is profoundly insightful and elegantly argued. You have convinced me. I agree with this 100%.

6 Likes

I scanned for those during my work on the P2SH32 CHIP and found that the total amount locked in such outputs was 0.044 BTC.

Imagine a case where you generate timelocked signatures and throw away the keys.
I think our smart contract upgrades will actually reduce future risks, because covenants allow people to explicitly write contracts that move coins to address X if conditions are met, which is far more forward-compatible than relying on pre-signed transactions.

We could entirely remove the risk if we added a versioning opcode.

There were only these, adding up to 0.00740026 BCH.

Note that during discussions about the implications of the density-based budget, one idea surfaced that could resolve some hypothetical issues: simply have an optional input field (added in a non-breaking way, just like we added token fields to UTXOs) where one can declare additional filler bytes. These bytes would not be encoded (no wasted bandwidth) but would count toward the budget and against the size limits as if they actually existed: imaginary bytes :slight_smile:

They would not break the contract (unlike adding an input data push of filler 0-bytes like <0x00..00>, which the contract doesn’t expect and has no OP_DROP for) and could be a way to give the contract more budget so that it may pass, all while letting the VM limits system do its thing to protect nodes from excessive CPU density.
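A minimal sketch of how such an "imaginary bytes" field might interact with a density-based budget. Everything here is hypothetical: the field, the size cap, and the budget-per-byte constant are invented for illustration and are not part of any CHIP.

```python
# Hypothetical illustrative constants - NOT from any actual specification.
MAX_EFFECTIVE_INPUT_SIZE = 1650   # assumed standardness cap on effective size
BUDGET_PER_BYTE = 800             # assumed VM budget granted per effective byte

def input_budget(encoded_unlocking_bytes: int, declared_filler: int) -> int:
    """Compute an input's VM budget, counting declared (non-encoded) filler
    bytes as if they existed. Bandwidth cost stays at encoded_unlocking_bytes;
    budget and size limits see the larger effective size."""
    effective = encoded_unlocking_bytes + declared_filler
    if effective > MAX_EFFECTIVE_INPUT_SIZE:
        raise ValueError("effective input size exceeds limit")
    return effective * BUDGET_PER_BYTE
```

The key property is that the filler raises the budget (and counts against size limits) without appearing on the wire or on the contract's stack, so existing contracts need no extra OP_DROPs.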

3 Likes

The VM limits CHIP has as its goal to keep the maximum CPU usage roughly identical to what was allowed before, which was acceptable already 15 years ago.
It is not a stretch to imagine that after a year or two in production (and more testing and modeling), a proposal emerges where the allowed CPU budget is multiplied by some factor every halving, simply because 70 years of computer design have shown that processing power keeps growing. This would keep the actual wall-clock maximum from growing even as the allowed budget does.
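The growth schedule suggested above can be sketched in a few lines. The base budget and growth factor are illustrative assumptions, not proposed values; the point is only that a fixed multiplier per halving yields a predictable geometric schedule.

```python
# Hypothetical illustrative values - not from any actual proposal.
BASE_BUDGET = 1_000_000   # assumed budget units at activation
GROWTH_FACTOR = 2         # assumed multiplier applied at each halving

def budget_at(halvings_since_activation: int) -> int:
    """Allowed CPU budget after a given number of halvings: geometric growth.
    If hardware speed grows by roughly the same factor over the same period,
    the wall-clock validation time of a maximal script stays roughly flat."""
    return BASE_BUDGET * GROWTH_FACTOR ** halvings_since_activation
```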


A separate point here is the idea regarding OP_ENV, the opcode described here: Wider discussion, an OP_ENV for the VM upgrades

The thinking there is to allow a script to pick, from a provided set of standardized environments, the one in which it wants to be validated.

As OP points out, there may be times when VM changes end up creating an environment that is hostile to old scripts already committed to by coins on the chain. At this point the actual cost is likely just zero, but it makes sense to understand that what we consider bugs may be features to others.

What future upgrades may do in case of uncertainty is create a new environment version. The direct usecase is milli-satoshis, but you may imagine others where old code would have side effects or issues in the new environment.
So the idea is to have a script explicitly pick which environment it will run in, via a special opcode pushed at the beginning of the output script (or embedded P2SH redeemScript).

It would seem that the concerns the OP raises are a good reason to continue research into OP_ENV.

2 Likes