Jason has done a lot of research into “OP_DEFINE / OP_INVOKE” vs OP_EVAL and I trust his conclusion. “OP_DEFINE / OP_INVOKE” does provide functions and recursion which are the key features we need. There are some trade-offs that have been mentioned in the discussion.
With this comment I just want to point out two things: 1) OP_EVAL allows for closure-like constructs, which “OP_DEFINE / OP_INVOKE” does not (assuming an assign-once function table); 2) if we later want to have this feature, then implementing OP_EVAL via the assign-once function table does not really work. We would need to add pure OP_EVAL or move to a mutable function table (a more likely outcome if we start out with “OP_DEFINE / OP_INVOKE”).
Here is an example of a “closure” that can be tested in bitauth IDE (2026). It toggles its internal state every time it is invoked:
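(The original IDE example isn’t reproduced here; below is a minimal sketch of the shape such a construct takes, assuming an OP_EVAL that executes the top stack item as bytecode. The push encodings are illustrative, not exact.)

// A “closure”: captured state concatenated with a body, executed as one unit.
<OP_0>     // captured state, encoded as the bytecode that pushes 0
<OP_NOT>   // body: toggle the captured bit (0 -> 1, 1 -> 0)
OP_CAT     // closure = state push + body
OP_EVAL    // invoking it leaves the toggled state on the stack

Re-capturing the toggled result into a fresh state push yields the next closure; that re-capture step is what an assign-once function table cannot express.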
This may not seem very useful, but it can be a component in functional-style programming, which perhaps is not as byte efficient, but promotes composing programs out of small reusable building blocks. If this kind of construct were used with iteration/recursion and assign-once function slots, we would quickly run out of slots.
Thanks! I’m excited that you’re exploring FP-style contract composition
I agree – a future upgrade could certainly make the case for also adding OP_EVAL, the passable-lambda above (“OP_DEFINE-LAMBDA”), or any number of other overhead-reducing improvements for various contract development styles and use cases.
Just a nit RE “does not allow”: the Bitcoin Cash VM is already computationally universal within a single transaction – computations can both inspect themselves and be continued across multiple inputs (loops and/or functions just make the contracts far smaller and safer/easier to audit). The only kinds of computations that the VM doesn’t “allow” are those made excessively impractical by VM limits. Some of those limits are still very conservative of course, but they’re already plenty high enough for financial calculations, programming-paradigm machinery, business logic, etc.; generally only “heavy cryptographic math” use cases come close.
In this case, there are likely many ways to set up the functions-defining-functions machinery to work however you like, especially if contract bytecode-length optimization isn’t the top priority. (Though with trampolining, it might be quite byte-efficient, too? Libauth’s VM is implemented in an FP style to make opcode changes and debugging features easier to hack in, but it requires trampolining to avoid call stack issues in all JS environments.)
Thank you for continuing the research on this topic!
I’m enthusiastic about the new technical approach to enable the same re-usable functions feature for the Bitcoin Cash VM!
For smart contract tooling and high-level language development this will make things significantly easier, because 1) functions don’t have to be juggled around as stack items, and 2) it removes the need for compilers to perform optimizations that sprinkle OP_EVAL all over the place to save a few bytes; that would very significantly increase the cost of implementation and greatly decrease the auditability of the raw compiled script.
These reasons are also well-outlined in the CHIP itself
So both as a contract developer and as a developer working on CashScript I think this CHIP very much gets to the goals of the prior proposal in a way that’s much nicer for the broader ecosystem
I have not seen any update yet, and some months have passed.
So I’ll repeat the problem I have with this approach, hopefully there is a solution in the work that just hasn’t been published yet
The basic design of Bitcoin and Bitcoin Cash is that the actual opcodes that run are locked in at the time the transaction that locks the coin is signed.
This specifically means that it is impossible to replace the code that unlocks it with other code, keeping the UTXO safe even though the locking script is public and would otherwise be vulnerable to brute-forcing attacks. We recently upgraded to P2SH32 to strengthen that concept.
Or, the short version: it is IMPOSSIBLE today to create the code that unlocks the money stored in a signed UTXO after that money was locked up. This is for safety. Being able to write that code later means the money can be brute-forced on-chain.
P2SH and P2SH32 use a hash of the unlocking code for this purpose; MAST is a proposal from the BTC side that does this too, with more features.
The latest CHIP doesn’t have any concept of validating that the pushed code is the code known at time of signing. I am aware that some suggest we can emulate this with other opcodes, but that is an unacceptable proposal, as security is simply not something that should be optional.
Introducing a way to unlock a UTXO for spending with code that may be written afterwards is a massive massive change in behavior of Bitcoin Cash and one I don’t think is healthy for the coin.
I suggested a solution in a number of previous posts; I won’t repeat it, as the owner of this CHIP is aware of it already. I just wanted to ask for an update towards solving this in the proposal. It has already made a lot of progress in the right direction, so I have hope it will get solved.
This is a good observation we can take to heart.
Yes, adding protection may mean “good engineers” feel like they are being treated like children. But they have to admit that there will be a lot of other people that will write code too and that will actually need said protection to not lose their money. Which will reflect on all of Bitcoin Cash.
BCH cannot optimise for the lowest common denominator - because then it will simply lose. We do not have the status quo advantage or network effect to be the conservative option & still gain ground.
Also, in a system like BCH, upside is far far far more important than downside. People doing stupid stuff go broke (which both stops them doing stupid stuff, and is a strong incentive to be cautious in the first place).
Same way that a bunch of scams may have been funded by Flipstarter, but the fact that it hit a couple of home runs with BCHN & others paid off the unfortunate cost of the failed scams and more - many times over.
You correctly identified the lowest end: the case of bad engineers doing bad things. That is the lower boundary of the gray zone we want to stay in.
The top end of the gray zone is what the CHIP does today: it gives 100% of the responsibility for avoiding code injection to the script author. Which is fine for the good engineers who don’t see the problem with it; they know how to do it properly.
But the big gray zone in the middle is relevant. We want neither the top nor the low extreme.
The ONLY arguments against my suggestion (again, see earlier description) are absolutely about this specific point. Good engineers feeling it is not needed to provide protection for any other programmers.
All suggested usecases are still going to be possible just fine with a verification built in. The upsides are all going to be there regardless of validation being built in or not.
There are even extra upsides with it built in, in that MAST becomes much easier to do in a consistent way.
So I think we agree on all the high-level things; as I wrote a couple of months ago, this CHIP has already moved a lot in a nice direction. Let’s hope we can keep moving that forward.
We had a nice discussion on telegram between BCA, Jonas and me about options.
Here is one option to solve the problem of code injection that may be simple and useful for all.
As per the chip we have OP_DEFINE.
Suggested to add is OP_DEFINE_VERIFY, which is effectively a shorthand for OP_DUP OP_HASH256 <hash> OP_EQUALVERIFY <index> OP_DEFINE.
In my post: “code injection” means that the locking script that is visible for all can be brute forced with any unlocking script, just find one that unlocks it. Which means you lose your money. That is code injection.
Simple example: someone provides a transaction with code on output one, which is used in output two via introspection.
The user mistakenly assumes that the two outputs will be spent in the same transaction. That is not an assumption that makes sense, yet you need to understand the UTXO model quite deeply to realize this.
The user can safely do this by simply using the VERIFY version of DEFINE and providing the hash of the code.
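Spelled out, the pattern that the VERIFY variant makes implicit looks like this (<expected_hash> and <fn_index> are author-chosen placeholders; the candidate code may arrive via the unlocking script or introspection):

// Candidate function body is on top of the stack.
OP_DUP OP_HASH256        // hash the candidate code
<expected_hash>          // hash committed at signing time
OP_EQUALVERIFY           // abort unless it is exactly the expected code
<fn_index> OP_DEFINE     // only now install it as a function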
This might be silly (and I also might be a bit behind the ball in the sense that this has already been discussed and is part of the rationale) but there’s been a bit of contention as to whether defining functions from introspected data (e.g. Token NFTs) should be allowed.
If BCH ends up with read-only inputs, there might actually be a very good use-case for this in the sense that we might be able to do something like the following:
Create an NFT that contains a program as commitment - and make this provably impossible to Unlock.
We can then use this NFT (containing the program) as a read-only input.
These NFTs end up behaving somewhat like custom OP codes/programs built into the blockchain that can be used by contracts/contract-systems.
Obviously, this is only really beneficial if the size of the program is substantially larger than the input itself (and the OP codes required to verify it). And the NFT commitment size is currently capped to 40 bytes, so this would only be useful if this was increased substantially.
I’m not familiar enough with ZKPs to wrap my head around this yet, but I think this might be what Jason is proposing here: CHIP-2025-01 TXv5: Transaction Version 5?
Input 0 (read-only): The ZKP is provided in this unlocking bytecode. The locking bytecode verifies that the transaction is correctly structured and that the ZKP justifies the state transition between the previous application state ( Input 1 ) and the next application state ( output 0 ).
So, for those very storage-expensive programs (ZKPs), we might eventually end up with the capability to build those storage-expensive programs into the blockchain itself as (provably) unspendable UTXOs that can be used as read-only inputs by any contract that needs them.
Contract Devs could then leverage them as people have described above.
// Get the ZKP program at (read-only) input 0's commitment.
<0> OP_UTXOTOKENCOMMITMENT
// Verify it matches the expected program.
OP_DUP OP_HASH256 <expectedProgramHash> OP_EQUALVERIFY
// Define it as a function at id zero.
<0> OP_DEFINE
// TODO: Use the on-chain program
// ...
Obviously, with 40 bytes Token Commitments, the above probably isn’t sensible. But it might serve as good rationale for leaving that door open in future.
(Apologies if this has already been discussed a lot. The potential usefulness of something like this only just clicked for me earlier today.)
As far as I can tell, this is the CHIP most in contention still for the 2026 upgrade, what kind of timeline (given that it seems to change every year, as standards to be ready ahead of time rise because we’re more and more organised) are we expecting that to need to be resolved to lock in?
All stakeholders should be in agreement before the lock-in date in mid-November. That November date has been unchanged for several years now.
It likely helps to get everyone on board earlier; obviously nobody would disagree with that. Please don’t wait until the end if disappointment is to be avoided.
The only hard rule in Bitcoin Cash upgrades is that there can not be an upgrade unless all the stakeholders agree the upgrade is positive and to be done. Everything else is peopleware.
I think we already have most of what we want from functions. I’d prefer to avoid code injection and mutation, and I see compression as the most valuable benefit of functions. Ideally, if a function can be used across multiple contracts (i.e., inputs), it would save a lot of bytes.
Based on the discussions I’ve followed, allowing arbitrary code execution is dangerous but I think it can be contained and enforced. So if we go ahead with this CHIP, I think there are ways where the contract authors can enforce how and what gets executed.
That said, let me share the approach where I think we do not need functions at all.
Let me call this concept “Contract as a Function”: a way to achieve function-like behaviour using existing opcodes.
Contract as a Function
Contract as a Function: Write to OP_RETURN or nftCommitment
Let’s say ContractA has a function that adds two numbers. It enforces that whatever two parameters are passed to it in the unlocking bytecode are summed, and then it ensures an OP_RETURN output is created to act as the function’s return data.
Any other contract relying on this contract can then read the returned value from the OP_RETURN or from an NFT commitment, thanks to introspection.
Input 0 (ContractA) -> Output 0: back to the same contract so it can be reused
Input 1 (ContractB) -> Output 1: does whatever it needs to do
x -> Output 2: OP_RETURN
When Input 1 is evaluated, it uses introspection to read the value from the OP_RETURN. The value is guaranteed to be correct because ContractA enforces that the calculation result must be written into Output 2 as an OP_RETURN.
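A sketch of ContractB’s read side, assuming the layout above and that the OP_RETURN output carries the sum as a single push (<contractA_bytecode> and the split offset are placeholders, not part of the original proposal):

// Require input 0 to be ContractA, which enforces the calculation.
<0> OP_UTXOBYTECODE
<contractA_bytecode> OP_EQUALVERIFY
// Read the result from the OP_RETURN output (index 2).
<2> OP_OUTPUTBYTECODE
<2> OP_SPLIT OP_NIP      // drop the OP_RETURN and push prefixes (illustrative offset)
// The enforced sum is now on the stack for ContractB to use.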
Contract as a function: Process
This type of function does not return any value but processes something. For example, a loop that goes through all the outputs of the transaction and ensures that there is no tokenAmount burn.
Input 0 (ContractA) -> Output 0: back to the same contract so it can be reused
Input 1 (ContractB) -> Output 1: does whatever it needs to do
Here, ContractA has the code to validate the transaction outputs, and ContractB can simply add a check to expect the 0th input to be from ContractA, i.e., it enforces that the locking bytecode of input 0 is ContractA’s.
Contract as a function: Nested functions and Closures
ContractA can be a function contract that internally relies on other function contracts, either providing some information to the parent function or using its value to make some logical decisions.
Example:
FunctionContractA: Generates a random number (VRF) and updates its own value in its nftCommitment
FunctionContractB: Reads the output nftCommitment of FunctionContractA, performs a calculation, and updates its own nftCommitment (maybe updating the global state of a staking contract)
CallerContractA: Reads output of FunctionContractB and does whatever
Input 0 (FunctionContractA) -> Output 0: back to itself with updated commitment
Input 1 (FunctionContractB) -> Output 1: back to itself with updated commitment
Input 2 (CallerContractA) -> Output 2: does whatever it needs to do
Contract as a Function: Single Input Multiple Uses
The unlocking bytecode of a function contract can follow a predefined structure of byte sequences. For example: <input_index_x><input_index_x_param><input_index_y><input_index_y_param>. Using loops, these segments can be split and processed, with the results later stored in an NFT commitment or OP_RETURN. Relevant contracts can then read this processed information.
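One possible shape for that splitting step, assuming native loops (OP_BEGIN/OP_UNTIL as proposed in the Loops CHIP), one-byte input indices, and fixed four-byte params (all widths illustrative):

// stack: <segments>, a concatenation of <1-byte index><4-byte param> records
OP_BEGIN
  <1> OP_SPLIT           // index | rest
  <4> OP_SPLIT           // index | param | rest
  OP_ROT OP_ROT          // rest | index | param
  // ... application-specific processing of <index> and <param> ...
  OP_2DROP               // (placeholder: processing consumes both items)
  OP_SIZE <0> OP_EQUAL   // done when no segments remain
OP_UNTIL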
Threads
The UTXOs in the function contracts can be dust-sized, since we’re only using them for the ‘code’ they require to be unlocked. Sending the UTXO back to the same script ensures that the function can be executed again. This is not a single-threaded operation, as multiple dust UTXOs can be sent to the function contracts to enable parallel execution.
Libraries
This approach also allows us to have known public contract libraries (e.g., Math, VRF) that can be used by multiple independent contract systems. These contracts simply expect an input from one of these libraries and perform actions accordingly.
In order to address the concerns about code that writes code, the following two proposals (which are mutually exclusive) amend this proposal with additional restrictions.
I see several misunderstandings from the past few weeks about the Functions CHIP as it relates to “code mutability”, “code injection”, and what is already possible today on BCH.
In fact, the phrases “code mutability” and “code injection” appear to have taken on a variety of meanings to different stakeholders.
The purpose of this post is to exhaustively review each known interpretation. I’ll reference this in a new CHIP section, “Rationale: Non-Impact on Code Mutability or Code Injection”.
Code mutation is explicitly disallowed by the Functions CHIP
If you interpret “mutation” and/or “injection” to refer to some ability to mutate the code of an already-defined function, thereby tricking the program into executing an attacker’s code:
The Functions CHIP has never allowed mutation of function bodies. For good reason too – see Rationale: Immutability of Function Bodies (this has been in the CHIP since it was republished as “the Functions CHIP” in May):
This proposal enforces the immutability of function bodies after definition – attempting to redefine a function identifier during evaluation triggers immediate validation failure. Immutable function bodies simplify contract security analysis and tooling, eliminating some potential contract bugs and malicious code path obfuscation techniques. […]
Given the safety advantages of immutable function bodies, coupled with the impracticality and inconsequentiality of potential optimizations made possible by mutability, this proposal deems immutable function bodies to be most prudent.
Native functions are trivial to use safely
If you interpret “mutation” and/or “injection” as ways in which contract authors may accidentally misuse functions, introducing vulnerabilities into their contracts: the Functions CHIP is trivial to use safely.
While it’s impossible to prevent contract authors from making mistakes, the Functions CHIP is far easier to integrate safely in compilers and tooling vs. OP_EVAL, easier to use in hand-written contracts, and both safer and easier to audit than today’s macro-expansion or naive copy/paste-ing.
We already have delegation, it’s simple and useful
If you interpret “mutation” and/or “injection” as a new delegation-like capability: note that delegation is an intentional feature in use by BCH contracts today.
No, the term for this is delegation, and it is easy, common, and very useful today on mainnet. Here’s all it takes: OP_UTXOTOKENCATEGORY <category> OP_EQUAL. And for example, Quantumroot – Receive Address: Token Spend Scripts is an in-context example with detailed explanation (that particular script doesn’t use any 2026 CHIPs).
And of course, the CashTokens CHIP is full of explanations and further rationale on this topic.
In summary: BCH contract authors have already been using delegation for years, and it is already very byte efficient.
If anything, the Functions CHIP’s native, immutable functions make delegation safer by simplifying some contract system dependency graphs – places where bugs and/or malicious code can be hidden.
We already have “code that writes code”
If you interpret “mutation” and/or “injection” as some aspect of Turing completeness which BCH does not yet possess: you are mistaken, BCH is Turing complete.
This section is both the most theoretical and the least relevant to practical usage or evaluation of the Functions CHIP.
Once again, the Functions CHIP has no impact on “static analysis” – see:
On an even more theoretical level: BCH was arguably Turing complete for computations evolving across multiple transactions in 2018 (a paper). As described above, a core motivation of the CashTokens upgrade was to enable such messages to be passed across contracts (i.e. transaction inputs), allowing covenants to efficiently build on each other and interact over time. As transactions can have multiple inputs, this necessarily implies that the same Turing completeness is also available within atomic BCH transactions, and any computation can already be encoded to execute atomically, within a single BCH transaction, provided it fits within the VM limits.
So, yes, it can run Doom (subject to VM limits). See also: @albaDsl’s TurtleVm Proof of Concept, which demos an implementation of a CashVM interpreter executing on CashVM.
Zooming out a bit: it’s relevant to note that native loops would enable an unlimited set of further constructions with this same “code that writes code” aesthetic – meaning that distaste for “code that writes code” also doesn’t logically square with support for the Loops CHIP.
I’ve written even more about this over in the new “Code that Writes Code” topic. If this sub-topic interests you, please kindly review that post and post any responses over there.
Request for additional concerns and/or interpretations
If you know of any other concerns related to “mutation”, “injection”, or another euphemism for “risky” that someone ascribes to the Functions CHIP, please respond by Friday so I can incorporate it into this additional rationale section (DMs ok too). Please write or link to at least a few sentences describing the concern in sufficient technical detail for review. Note that a sufficient description includes the terms “unlocking bytecode” and “locking bytecode” at least once.
Above I’ve provided very specific, good-faith technical responses to every misunderstanding raised, with some stakeholders for the 3rd or 4th time now (in this forum and out of band) – if you believe you haven’t been heard, please give us the courtesy of a falsifiable description.
From a more practical perspective:
Offering contract authors clear, native, immutable functions (the Functions CHIP) would optimize contracts, simplify audits, and even reduce the surface area for “underhanded code” to obfuscate malicious behavior, because honest contracts could use native functions rather than today’s clearly-harder-to-understand workarounds.
Reminder: unsurprising functions are safer functions
Attempting to be “clever” by adding restrictions on function definition locations, esoteric failure cases, demanding various kinds of pre-commitments, or otherwise tinkering with function definition cannot prevent contract authors from making “code mutability” or “code injection” related mistakes in contracts that work on BCH today.
Instead, unusual limitations on native functions are very likely to increase development costs and cause new, practical contract vulnerabilities by making CashVM diverge from the native capabilities of other languages in commensurately unusual/surprising ways.
It would be a shame if one of these so-produced edge cases were to set back the port of a decentralized application from another blockchain ecosystem, delay implementation of BCH as a target in a zkVM compiler, or create a denial-of-service vector in a ported contract due to faulty implementation of some unnecessary workaround.
Unsurprising functions are safer functions, and the Functions CHIP’s native, immutable functions are as boring and unsurprising as functions can be.
Summary
The Functions CHIP reduces the risk of unexpected “code mutation” or “code injection” in practical contracts that work today on Bitcoin Cash.
The Functions CHIP would give contract authors the option to use simple, native, immutable functions. Resolving this basic shortcoming of CashVM would shorten and simplify contracts, leading to smaller transactions and safer, more auditable applications.
What’s Next?
Thank you to the stakeholders who have already reviewed the CHIPs and sent in statements.
I’ll give this a week or so, then I’ll reference this discussion in the rationale and begin pulling in public statements for lock-in.
You misunderstood the problem people are trying to solve. I honestly don’t remember anyone suggesting a problem with code mutation.
The problem is about mixing data and code. Turning data stack items into code stack items.
Both are not just allowed by your CHIP, but are specifically part of your design requirements.
You can, for instance, do these:
copy another output’s script, cut it up and paste stuff in there, then turn it into callable code.
have the user push data in the unlocking script and, without any check that it is “correct”, just execute it.
I don’t mind you having your own terminology for things; the functionality is the point.
The functionality:
At the time the output is signed and broadcast, the code that is going to run at unlocking is likewise set and unchangeable.
You can meet that requirement in more than one way; the P2SH solution is to hash the code and store the hash on the blockchain. I think that works quite well.
If script authors can use OP_DEFINE together with introspection they can also use this where applicable […] OP_DUP OP_HASH256 <hardcoded hash> OP_EQUALVERIFY […]
I think Jason has a point here! And I like the original version of Functions. If we can somehow address General protocol concerns and move on with this, it would be a wonderful upgrade for our ecosystem.
I believe Jason understands the problem, as clearly stated in the GP article above. We need a productive solution please. Time is limited, and we are near the finishing line to ship this safely and demonstrate our trust in the VM limit upgrade as well.