CHIP-2025-01 TXv5: Transaction Version 5

Exciting stuff! Looking forward to seeing this conversation continue!


Seems like some thorough review is needed, as well as some example use cases or more concrete details of the potential payoffs, plus clearer elaboration of the conditions under which a rollout would make sense (e.g. top 10 on CMC). But this is proposed way ahead of schedule, and I think it’s great to get the community discussion and consideration flowing already.


Copying some posts below:

https://x.com/bitjson/status/1884833296401453289

Also, new transaction versions are apparently unbelievably big deals. Any thoughts on whether a later version could avoid that (as with PMv3 and CashTokens, I believe)? Or, if a new transaction version is absolutely necessary, could you elaborate a little on why?

Yes – a new transaction format requires many more wallets, exchanges, and other systems to “deal with” the upgrade, creating a serious bottleneck: some would upgrade, some would drop BCH support.

In reality, we probably need to be in the top 10 cryptocurrencies by market cap to upgrade the transaction format without damaging the BCH ecosystem. (People may complain about ETH upgrades, but they’re not dropping ETH over them.)

Note that I’m not pushing this for 2026, and markets can change quickly – if BCH sees sudden growth (in time for a 2027 lock-in), it’d be nice to have this ready.

For now, this CHIP is probably most useful as a technical vision for how far we can push BCH contracts: they can truly subsume every “layer 1” network out there, matching and often outperforming them on transaction sizes, scalability, and overall user experience.

RE later iterations:

Yes, read-only inputs are actually backwards compatible in the TXv5 CHIP. Those can cleanly be activated before v5 transactions. Likewise with unifying the byte length limits.

On the other hand, detached signatures (and comprehensive malleability protection + re-enabling non-push unlocking bytecode), fractional sats/CashTokens, and trimming ancient TX encoding waste should probably just wait for a transaction format upgrade. (It’s always possible to hack things in SegWit-style, but I’d be against defacing the protocol over upgrade fears/logistics. Better to wait and do it right.)

https://x.com/bitjson/status/1884834849871085823

“Should we limit signatures in v5 to be Schnorr only going forward? It might make sense to slowly phase out old tech.”

IMO, no – deprecation can’t really simplify the protocol, only remove functionality. It’s very hard to justify removing even “silly” accidental behaviors like multisig stack clearing or modern codeseparator usage. See Increased Usability of Multisig Stack Clearing and Ongoing Value of OP_CODESEPARATOR Operation.

https://x.com/bitjson/status/1884838231146922293

Why are you proposing this upgrade for chipnet lock-in in November 2026 and mainnet deployment in May 2027? What keeps you from aiming for mainnet deployment in May 2026?

  1. The other 2026 CHIPs are much higher priority, as they’re more foundational and unlock new use cases: x.com
  2. Transaction format upgrades are really hard: x.com

https://x.com/bitjson/status/1884995047851950338

Covenants already enable any “layer 1” technology: zkVMs, Monero (inc. Full-Chain Membership Proofs), Zcash, Mimblewimble, etc.

The 2026 proposals make these truly practical: from messy, multi-MB transaction chains to cheap, atomic transactions sent from any wallet.

Endgame: CHIP TXv5 demonstrates that Bitcoin Cash can consistently match or outperform “privacy coins” and other use-case-specific networks in the long term – on both transaction sizes and overall user experience.

https://x.com/bitjson/status/1885022284538130873

What’s the point if zkVMs, covenants, and other layer 1 technology that you mentioned aren’t even unlocked as new functionality? Shrinking MB size is nice, but without Bitcoin Cash covenants, it’s just optimizing transaction size, not expanding what’s possible.

Transaction fees and wallet support. There’s a huge difference between:

  1. Wallets chaining together dozens or hundreds of maximum-sized transactions to do something that should be simple (e.g. send someone a ZK-shielded transaction) vs.
  2. The code being efficient enough to use one small transaction (created by the wallet filling in blanks of common templates).

In practice – even if all wallet developers had the unlimited resources to implement complex, multi-step chaining protocols – real medium-of-exchange users don’t want to pay $20+ per interaction just because the VM requires copying and pasting the same code hundreds of times. (The underlying issue, literally.)

Instead, fixing the copy/paste requirement would make implementation easier for wallets (often “drop-in” using common templates) + the resulting protocols are inexpensive to use (less than $0.01 per interaction).
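To make the fee comparison concrete, here’s the rough arithmetic (a sketch with assumed numbers: a 1 sat/byte relay fee and an illustrative BCH price of $400 – neither figure is from the CHIP):

```typescript
// Rough fee arithmetic for the two approaches above (all numbers are
// illustrative assumptions, not measurements from the CHIP).
const satPerByte = 1; // typical BCH relay fee rate
const usdPerBch = 400; // assumed price, for illustration only
const feeUsd = (bytes: number) => (bytes * satPerByte * usdPerBch) / 1e8;

// Approach 1: chaining sixty maximum-sized (100KB) transactions.
console.log(feeUsd(60 * 100_000).toFixed(2)); // "24.00" (the "$20+" case)

// Approach 2: one small transaction from a common template.
console.log(feeUsd(2_000).toFixed(4)); // "0.0080" (under $0.01)
```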
[…]
BCH already has very complete covenant functionality. We already have the “compute” capabilities, and popular protocols are already being ported to BCH.

With these duplication issues resolved, BCH would be far ahead of ETH and other networks in base protocol capabilities.

https://x.com/bitjson/status/1885125090179711215

Would it be possible to implement a zkVM on chipnet today, for release on mainnet in May 2025?

Theoretically yes, but very impractical – the VM bytecode evaluated to verify a proof would be absurdly filled with duplication. Many segments would be copied hundreds or thousands of times, likely accounting for more than 99% of overall transaction bandwidth (even considering potentially-large proof sizes), and in practice, each “proof verification” might have to be split across tens or hundreds of max-sized (100KB) transactions.

In a bit more detail:

It’s possible (since 2023) to break apart any computation, performing it across multiple transaction inputs: each input looks for its operands in some introspect-able location, performs part of the computation, then verifies that its intermediate result appears in another introspect-able location for the next input, etc.
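A minimal sketch of that per-input relay pattern (in TypeScript rather than VM bytecode; the names and the toy computation are illustrative only):

```typescript
// Each "input" reads its operand from an introspectable location,
// performs its share of the work, and verifies that its intermediate
// result was committed for the next input to consume.
type Intermediate = bigint;

const verifyStep = (
  committed: Intermediate[], // stand-in for introspectable commitments
  index: number,
  step: (x: Intermediate) => Intermediate,
) => {
  const result = step(committed[index]); // this input's share of the work
  if (result !== committed[index + 1]) {
    throw new Error(`input ${index}: intermediate result mismatch`);
  }
};

// Toy computation: 3^8 via repeated squaring, one squaring per "input".
const commitments: Intermediate[] = [3n, 9n, 81n, 6561n];
for (let i = 0; i < commitments.length - 1; i++) {
  verifyStep(commitments, i, (x) => x * x);
}
// All steps pass: the overall computation is verified piecewise.
```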

This pattern can also be extended across multiple transactions to perform any imaginable computation, given sufficient mining fees paid. ZKP verification is just math, so by definition, it’s possible.

In practice though, many kinds of computations are absurdly inefficient to express with our current VM bytecode, which lacks function definition and loops. An easy example is hashing, which can be expressed in hundreds of bytes given function definition and a loop, but otherwise might require hundreds of KBs or even MBs of bytecode when everything is “inlined”.
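To put rough numbers on that blow-up (the byte counts below are assumptions for illustration, not measurements of any particular hash function):

```typescript
// Illustrative inlining arithmetic: a round function of ~200 bytes of
// bytecode, 64 rounds per message block, hashing a 100-block message.
const roundBytes = 200; // assumed size of one round, inlined
const rounds = 64;
const blocks = 100;

// With function definition + a loop: the round body appears once.
const withLoops = roundBytes + 50; // ~250 bytes (50 = assumed plumbing)

// Fully inlined: every application is a literal copy of the body.
const inlined = roundBytes * rounds * blocks; // 1,280,000 bytes (~1.3 MB)

console.log(inlined / withLoops); // ~5,000x more bytecode when inlined
```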

This is even more of a problem for various ZKP constructions, which require repeated applications of even more verbose (in bytecode) primitive functions: efficient modular arithmetic (batched ops), EC group ops, bilinear pairings, Fourier transforms, etc. Individually, these might be reasonably succinct in VM bytecode, but inlining each in different locations across an algorithm – often in one or more tight loops – causes exponential or factorial growth in the length of the required bytecode.

This is a rather silly exercise though, as (after the 2025 upgrade) computation limits are no longer implicitly specified by a delicate set of magic constants, program length, and message encodings – the density-based limits comprehensively prevent abuse regardless of program structure (loops, OP_EVAL, any other flow control, 100KB redeem bytecode, 100KB stack item lengths, etc).

Even if someone were willing to commit the resources to porting a zkVM’s on-chain verification to the 2025 BCH VM (for which most of the work is in the workarounds) – the ported system likely wouldn’t be very production-useful: every protocol interaction might cost ~10,000x the typical BCH transaction fees, with extremely limited wallet support. (And even if building from scratch, the differences in constraints would encourage selection of a sub-optimal ZK construction vs. one chosen with less concern for program length – so even the “non-workaround” work might be a technical dead end.)

I wrote more here on how the 2026 CHIPs make it more practical: above

https://x.com/bitjson/status/1885337780730892770

Would it make sense to write a privacy app using the 2026 CHIPs to make sure we aren’t missing anything that might be needed? Usually when actually trying to implement something is when you really work all the bugs out.

Yes! The 2026 CHIPs are fully supported in Bitauth IDE (open source), and a handful of BCH contract devs have already been experimenting with them for a while. Here’s a video of the new loop debugging UI: https://x.com/bitjson/status/1867472399542710439…

Note, while I’ve focused on highlighting privacy apps over the past few days to make the impacts more tangible (and get people thinking about bigger contracts), it’s not just privacy apps that benefit from making our VM more efficient at expressing algorithms (with P2S, function definition, loops).

Prediction markets, loan protocols, decentralized exchanges, and a wide variety of other use cases benefit from deduplication by avoiding excessive “inlining” of loops and function applications.

Excessive duplication is also holding back many of these other use cases, e.g. multi-market scoring rules and decision matrices for prediction markets (where various functions need to be repeatedly applied to mathematical inputs from a long list of sub-market transaction inputs).

https://x.com/bitjson/status/1885510754033344949

Why 19?

The 19 isn’t a design choice, just the decimal result of squeezing every bit of precision out of our existing 64-bit CompactUint format.
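One way to check that arithmetic: 19 is the largest number of full decimal digits that a 64-bit unsigned integer can always represent.

```typescript
// 2^64 - 1 has 20 decimal digits, but not every 20-digit value fits,
// so 19 digits is the most precision the format can always deliver.
const max = 2n ** 64n - 1n; // 18446744073709551615
console.log(max.toString().length); // 20
console.log(10n ** 19n - 1n <= max); // true: every 19-digit value fits
console.log(10n ** 20n - 1n <= max); // false: not every 20-digit value fits
```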

What’s the limit for non-CashToken, i.e. native?

The CHIP proposes the same encoding for both fractional satoshis and fractional CashTokens.

Are tx fees still limited by sat denomination and/or pre-existing dust limits?

Yes, no proposed change to fees or dust limits.

I don’t think it should be called zero-overhead when it’s not clear (from your wording at least) that it’s really zero.

It’s really zero. Covenants can implement the exact same behavior as e.g. Monero, with final transaction sizes that match or outperform those of Monero.

User-deployed covenants can truly have zero overhead compared to deploying the very same functionality as a consensus upgrade.


Podcast talking at length about this CHIP and the 2026 proposals: Fiendish & Friends #11 - Jason Dreyzehner talks 2026 CHIPS OP_EVAL, P2S, and Loops

On YouTube:

Includes some discussion about how two components could reasonably be extracted from TXv5 (if BCH doesn’t have sufficient market dominance by 2026 to lock in a transaction format upgrade), making this into 3 CHIPs:

  1. Read-only inputs – the implementation in the CHIP is already backwards compatible, the TXv5 part just adds a more efficient encoding
  2. Unified bytecode length limits
  3. The v5 encoding itself (everything else in this CHIP)

If an improved encoding format still looks too disruptive in early 2026, I’ll just propose the smaller, backwards-compatible items (1 and 2) for 2027, and re-propose the v5 encoding for 2028.

However, as I tried to get across in the podcast, I consider the 2026 proposals many times more important than any part of TXv5.

My estimated ranking:

  1. OP_EVAL – 1,000 - 10,000x improvements in algorithmically complex use cases, required for practical zero-knowledge proofs, and even a modest improvement for nearly all other BCH contracts
  2. P2S – unlocks new use cases and wallet patterns (e.g. covenants can operate without off-chain tracking of redeem bytecode, some interactions can be atomic rather than requiring a chain of transactions, eliminates need for sidecar inputs in many cases, etc.)
  3. Loops – less impactful than OP_EVAL, but in the cases where it can be used, improves compression by 20-30% vs. OP_EVAL (and makes compiled contracts easier to audit vs. recursion-based iteration)

On the other hand, the TXv5 improvements are long-term important, but not urgent:

  1. Detached signatures, non-push unlocking bytecode – up to ~60% savings in the best cases (esp. CashFusion)
  2. Read-only inputs – seems like a 10-50% reduction in the most complex covenant transaction sizes (on the higher end without the 2026 proposals, on the lower end with them)
  3. Other TX format trimming – less than 10% reduction (but applies to nearly every v5 transaction, so the savings are substantial over years/decades)
  4. Unified bytecode length limits – saves ~40-75 bytes per 10,000 bytes (less than 1%, but reduces needless contract complexity)

Is the idea here to run some introspection opcodes to just copy stuff from some other input, thus avoiding the need to replicate common data across all inputs? Why did people of the past ever introduce the push-only restriction when they could have just prohibited OP_RETURN from appearing in unlocking bytecode?

Consider adding this to the roadmap: a way for spenders to “buy” more bytes for the purpose of getting a bigger operation budget.

Same method could be used to add a byte to declare an input read-only. Speaking of which, read-only inputs would need a new set of introspection opcodes, right? Else they could be used to fool old contracts.

Also, I’d love it if we could reliably access mature headers (at greater security than SPV): something like <block_hash> OP_BLOCKHASHVERIFY that would pass only if the referenced hash is 100 or more blocks deep, and otherwise fail the TX.


Yes, or any other computation that more efficiently produces a particular stack item than pushing the raw bytes (e.g. providing a number via squaring rather than directly pushing the product).

Initially, IIRC just uncertainty about whether similar vulnerabilities existed. And they were right to be careful: non-push unlocking operations always allow third-party malleability without some way to sign unlocking bytecode (like TXv5 detached signatures).

Yes, thank you! That’s definitely something that should at least be reviewed in the rationale (even if left to a separate upgrade). It could also be set as sequence number bit 24, input bitfield bit 4, or an additional CompactUint input field (like age lock, but the value wouldn’t be inspectable via sequence number, so it would still need an introspection operation; if either 6 or 8 bits would be sufficient, encoding directly in the sequence number enables full backwards compatibility and saves the codepoint).
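For illustration, a sketch of the sequence-number-bit variant (the bit position is taken from the post above and is not a finalized encoding):

```typescript
// Hypothetical check for a "read-only" flag at sequence number bit 24,
// in the style of BIP68's use of otherwise-unused sequence bits.
const READ_ONLY_FLAG = 1 << 24; // assumed bit position, per the discussion
const isReadOnly = (sequence: number) => (sequence & READ_ONLY_FLAG) !== 0;

console.log(isReadOnly(0x01000000)); // true: bit 24 set
console.log(isReadOnly(0x00000001)); // false: ordinary input
```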

I hope we’ll also get a better understanding of any use cases and requirements here over the next 1-2 years.

I don’t think so:

  • Existing UTXOs could only be referenced by a valid spend, so by definition, if you can reference a UTXO, you could have spent it in the same way.
  • Existing UTXOs can’t be negatively affected by unintended references, as references are not even observable by simultaneous and later transactions.
  • Any existing UTXOs inspecting for transfers of assets will simply fail if a read-only input fails to transfer an asset, just as it would fail if the transaction implicitly burned that asset (as mining fees for BCH, or simply burned for CashTokens).
  • So concern falls only to intended-sibling UTXOs that check for spends of known UTXOs or token identities in sibling inputs: here again, if you’re able to successfully reference a pre-upgrade UTXO, you could have simply spent it. Additionally, if you’re not checking the actual outputs for an asset transfer, you’re already susceptible to malicious burning.

I’m interested in others’ reviews of this question, but so far I think it’s hard to contrive a scenario in which read-only inputs impact an existing use case. The proposed implementation is even consistent with the precedent from BIP68 by defining unused sequence number bits.

I think this can safely be a separate CHIP and forum topic, I don’t think there’s much interaction with transaction encoding.


If we relax input bytecode rules, then any spender could add some <0x00...00> OP_DROP somewhere in unlocking bytecode to get more budget, even if not anticipated by the locking bytecode, so that could work – but it would spend bandwidth. However, some compression at the transport layer could address that just as well.

Sure, just thought to add it here, considering you laid out a general roadmap in the post I was replying to.


Probably the better idea is to get CashFusion used more. It’s already available today.
On top of that, if this massive upgrade can’t take place until BCH is a top-10 coin, then any problems people may have with fusions would also be solved by having a massively bigger anonymity pool and more people wanting to fuse with you.

A contract today where someone burns an NFT to redeem some coins could be exploited.
If the covenant checks that an input contains a specific NFT and that NFT is not in any of its outputs, it’s implicitly burned. If read-only spends are possible, that NFT can be spent endless times and drain the covenant. Or am I missing something?
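In code form, the assumption being exploited looks something like this (a simplified sketch, not actual covenant bytecode):

```typescript
// The covenant treats "NFT present in an input, absent from all
// outputs" as proof that the NFT was burned. Read-only inputs would let
// the same NFT UTXO satisfy this check repeatedly without being consumed.
type Tx = { inputs: { nft?: string }[]; outputs: { nft?: string }[] };

const nftImplicitlyBurned = (tx: Tx, nftId: string) =>
  tx.inputs.some((input) => input.nft === nftId) &&
  !tx.outputs.some((output) => output.nft === nftId);
```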


Yes, you’re totally right. Thanks! Covenants can currently depend on implicit burning of assets, so any inspection of a read-only input can violate that assumption. :100:

Alright, so before any introspection opcode can inspect a read-only input, we’ll also need some means for the locking bytecode to have opted-in (e.g. a prefix).


Or, allow any UTXO to be referenced as read-only: the spender decides whether it’s to be consumed or to be referenced. In that case, you need a new set of introspection opcodes, or some VM state toggle to switch mode of existing introspection opcodes.

We need to answer some key design decision questions:

  1. Should UTXOs be declared read-onlyable at their creation? Why?
  2. Should read-only inputs’ scripts be evaluated or not? Why?

These will then drive detail design.

If 2. is No, that means contracts could require things like “prove that this address still holds a balance” – the spender would simply reference the address’s UTXO(s) to show them to the contract (bringing them into TX evaluation context without affecting UTXO state), even if the spender doesn’t have the authority to spend them (e.g. can’t produce a signature for some P2PKH UTXO).
It would also mean you could create UTXOs that are unspendable on their own, whose purpose is just to be referenced by some TX as read-only and have their locking bytecode introspected and executed in another input’s script (using OP_EVAL).


No. We don’t force the separate configuration of any other spending condition at UTXO creation; it would be very arbitrary to draw a new distinction here.

UTXO creators can always define such requirements in their contract: OP_INPUTINDEX OP_INPUTSEQUENCENUMBER test_read_only [OP_NOT] OP_VERIFY (where test_read_only depends on the ultimate encoding, just like the current behavior of age lock inspection). Though by definition, read-only references can’t impact later transactions via e.g. spend races or modifying the UTXO, so it only matters in situations where the UTXO creator wants to e.g. extract a rent by disallowing direct read-only references on-chain (like a pay-per-use on-chain oracle).

It’s critical that they can be evaluated, as those inputs are the ideal location for transactions to include signatures or proofs that are validated by the read-only input’s locking bytecode.

Given density-based VM limits, the proof material should ideally extend the limits for the actual contract performing the verification. Even assuming eventual OP_EVAL support, introspecting and evaluating the contract from a separate “consumed sidecar” input wastes at least 40 bytes (and might also complicate wallet implementations, as they’re forced to track the movement of the consumed sidecar input).
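For reference, the ~40-byte figure is just the minimum encoding cost of one extra input under the current transaction format:

```typescript
// Minimum bytes to add one "consumed sidecar" input (current format):
const outpointTxid = 32; // transaction hash of the sidecar UTXO
const outpointIndex = 4; // output index within that transaction
const scriptLength = 1; // CompactUint length prefix (even if script is empty)
const sequence = 4; // sequence number
console.log(outpointTxid + outpointIndex + scriptLength + sequence); // 41
// ...plus whatever unlocking bytecode the sidecar's spend requires.
```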

I can imagine it also being valuable for read-only inputs to optionally not be evaluated (i.e. the referencer couldn’t otherwise spend the UTXO, they’re just introspecting it as part of another contract). That is a much larger scope than proposed in the TXv5 CHIP (and could reasonably be a separate upgrade), but I’d be interested in more research on the topic! If we can prove it doesn’t break other existing contract assumptions, it could certainly give contract authors more flexibility.

Nit: if you’re just using the “script” to store a large push of data for introspection from another input (and the data isn’t just a representation of zero), empty unlocking bytecode already produces a successful evaluation, no need for the evaluation skipping behavior. (Also RE OP_EVAL: note again the above discussion of consumed sidecar inputs wasting 40 bytes if read-only evaluation isn’t available.)


Just finished my first proper read of this CHIP.

Read-only inputs are pretty radical! Almost spat out my coffee and it took me a few seconds to convince myself we’d still have a DAG! :rofl: It’s very novel and would be very powerful, but will definitely break some assumptions and mental models along the way. :exploding_head:

Something I don’t understand is how read-only inputs can “deduplicate bytecode across transactions”. My understanding is they can only deduplicate unlocking bytecode, so do we need P2S for dedup of large scripts or does this work with P2SH by somehow deduping redeem bytecode?


Thanks for the review @rnbrady!

Read-only inputs have actually been deployed on various UTXO networks for years (and really, Ethereum too in the form of read-only state access): Corda “reference states” (2016), Hyperledger Fabric “read sets” (2017), Ergo “data inputs” (2019), and Cardano “reference inputs” (2022).

They’re a very natural evolution of UTXO systems though (“why do we have to delete and recreate it every time?”), so I’m sure lots of researchers independently stumble on it. I don’t know of any sources that previously identified it as a solution to the covenant UTXO recycling issue (which – unsolved – leaves more and more unspendable dust in the UTXO set forever), but other chains also have different dust/fee policies and behaviors, so that issue might be less applicable outside of Bitcoin (Cash).

Right, P2SH doesn’t make much sense for most public covenants; you save bytes in both the creation and spend TXs by skipping the P2SH wrapper with P2S. (It doesn’t waste storage/bandwidth/fees, and it’s simpler for wallets: no need to track separate P2SH hashes in addition to contract params.)

Off the top of my head, I’m not coming up with a plausible case for deduplication with P2SH; you can always save ~40+ bytes by omitting a read-only input and pushing all the redeem bytecode in just one input, so I think it’s probably only relevant for deduplicating locking bytecode. (Though detached signatures and non-push unlocking bytecode can definitely deduplicate a lot of redeem bytecode! E.g. 62% of this CashFusion TX.)


A topic I’d like to think more about: if we were to expand the scope this way – allow unevaluated reading of inputs (whereas TXv5 currently requires you to be able to spend the input) – without an explicit method for inputs to “opt in”, there seem to be additional strategies for interfering with incentives of particular contracts.

E.g. is it easier to create trustless “pay-to-censor” systems? Like a UTXO that pays once per block via anyone/miner-can-spend if the referenced UTXO remains unspent. If you point such a contract at some critical UTXO in a DEX system, it’d be a little like tossing a wheel clamp on the DEX.

While it’s always possible to pay miners to censor (out-of-band or likely even via some on-chain ZKP approaches), it’s logistically much easier and more likely to be effective if honest network nodes assist with the discovery and relaying (in the same way that widespread RBF eliminated practical zero-conf security on BTC) and particularly if miners can run software to automatically find and claim relevant bounties without adding new security assumptions.

Anyways, if unevaluated read-only inputs were allowed, I think it would probably be wise to require UTXOs to opt-in to the behavior, and read-only inputs would require evaluation by default (i.e. the referencer would otherwise be authorized to actually spend the input).


I think this is a conversation worth revisiting now that we have successfully locked in the May 2026 upgrade and discussion/brainstorming for the next upgrade cycle is slowly starting.

To me, the “Unified bytecode length limits” item seems straightforward if we want to enable on-chain ZKP verification.

There was also the idea, during the discussions on the Loops CHIP, of potentially lowering the base opcost (from 100 to 10).


Just to drop a few thoughts re: read-only inputs. What problem are we aiming to solve?

  1. Give contracts indiscriminate read-only access to UTXO state (with the open question of whether there should be a flag to toggle direct evaluation, or whether to just reference the UTXO for another input’s use through introspection)
  2. Global function tables

If we have 1) we can emulate 2), but if we want 2), why not just go for 2)? Because 1) is a kind of hacky way to get 2). Having both would be useful: 1) for read-only access to general UTXO state, 2) for having an efficient global function table.

Global Function Tables

Extend the UTXO model with special global function definition UTXOs. Imagine if you could create an output with locking bytecode: PFX_DEFINE <bytes> <lifetime>. This output skips the UTXO database and is treated as unspendable (similar to OP_RETURNs).

Once mined, the output is added to another database, a global function table: a simple key-value store mapping hash(bytes) to (bytes, sum(lifetime)). Each re-definition (people creating outputs with the exact same <bytes>) just extends the lifetime. If the lifetime expires, we could allow garbage collection with a special input unlocking bytecode, PFX_UNDEFINE <hash(bytes)>, that adds to the TX fee. This would give the network the option to price adding these definitions (min. dust requirement based on lifetime & size), and the expired balance would be paid to miners. An entry could be re-defined later, and it would again get the same hash, so contracts relying on it can’t be broken by the dependency being purged – you can just create a new TX to re-define the functions your contract needs.

Usage: <hash(bytes)> OP_INVOKE from any script. If the entry exists and hasn’t expired, the function is retrieved and evaluated. If the entry has expired or doesn’t exist, the OP_INVOKE call fails.
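A sketch of those table semantics (everything here is this post’s strawman design; PFX_DEFINE, OP_INVOKE, and the lifetime accounting are all hypothetical):

```typescript
import { createHash } from 'node:crypto';

// Global function table: hash(bytes) -> (bytes, accumulated lifetime).
type Entry = { bytes: Uint8Array; expiresAt: number };
const table = new Map<string, Entry>();

const hashOf = (bytes: Uint8Array) =>
  createHash('sha256').update(bytes).digest('hex');

// PFX_DEFINE <bytes> <lifetime>: re-definitions extend the lifetime;
// an expired entry can be re-defined and gets the same hash again.
const define = (bytes: Uint8Array, lifetime: number, height: number) => {
  const key = hashOf(bytes);
  const existing = table.get(key);
  const base =
    existing !== undefined && existing.expiresAt >= height
      ? existing.expiresAt
      : height;
  table.set(key, { bytes, expiresAt: base + lifetime });
};

// <hash(bytes)> OP_INVOKE: fails if the entry is missing or expired.
const invoke = (key: string, height: number): Uint8Array => {
  const entry = table.get(key);
  if (entry === undefined || entry.expiresAt < height) {
    throw new Error('OP_INVOKE: undefined or expired function');
  }
  return entry.bytes; // the VM would then evaluate these bytes
};

// Usage: define once, then invoke from any script while unexpired.
define(new Uint8Array([0x51]), 1_000, 800_000);
```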

Alternative to read-only inputs: global UTXO introspection opcodes

This avoids having to carve out exceptions so that old contracts introspecting other inputs can’t be fooled by read-only inputs. Just allow direct read opcodes, without having to add input references to the TX at all:

  • <txid:vout> OP_EXTERNALUTXOVALUE
  • <txid:vout> OP_EXTERNALUTXOBYTECODE
  • <txid:vout> OP_EXTERNALUTXOTOKENCATEGORY
  • <txid:vout> OP_EXTERNALUTXOTOKENCOMMITMENT
  • <txid:vout> OP_EXTERNALUTXOTOKENAMOUNT

or just overload existing opcodes:

  • <txid:vout> OP_UTXOVALUE
  • <txid:vout> OP_UTXOBYTECODE
  • <txid:vout> OP_UTXOTOKENCATEGORY
  • <txid:vout> OP_UTXOTOKENCOMMITMENT
  • <txid:vout> OP_UTXOTOKENAMOUNT

which would be forward-compatible with any future fields/introspection we may add to UTXOs.

Implementations could cache these external UTXO retrievals, loading them into context once on first call, and with the VM limits framework in place, we could budget these calls by the number of unique UTXOs loaded.
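A sketch of that caching/budgeting idea (the budget constant and function names are placeholders, not proposed values):

```typescript
// Per-evaluation cache for external UTXO reads: repeat reads of the
// same outpoint are free; only unique outpoints consume budget.
const MAX_UNIQUE_EXTERNAL_UTXOS = 8; // placeholder budget, not a proposal

const makeExternalReader = (fetchUtxo: (outpoint: string) => Uint8Array) => {
  const cache = new Map<string, Uint8Array>();
  return (outpoint: string): Uint8Array => {
    const cached = cache.get(outpoint);
    if (cached !== undefined) return cached; // cached: no budget consumed
    if (cache.size >= MAX_UNIQUE_EXTERNAL_UTXOS) {
      throw new Error('external UTXO read budget exceeded');
    }
    const utxo = fetchUtxo(outpoint); // e.g. backing OP_EXTERNALUTXOBYTECODE
    cache.set(outpoint, utxo);
    return utxo;
  };
};
```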

With this we lose garbage collection. If used for global function definitions, users would have an incentive to create indestructible UTXOs for their function definitions (to avoid their contracts getting broken), so there’d be a monotonically growing number of these indestructible zombie UTXOs. It probably won’t ever become a problem, though: how many of these could people possibly create?


I’m excited about read-only inputs but need to research them further. For now, I’m for just going with Jason’s suggestion for read-only inputs for the following reasons:

  1. The implementation is simple and consists of, as far as I know, just small tweaks to how things are done today. This simplifies validation of the design, makes the implementation low-risk, and leaves the node implementation no more complex than today. This makes it easier to get consensus for including read-only inputs already for May 2027!

  2. Read-only inputs on their own still deliver a lot of value. They are like functions at the Tx level. It looks like Jason believes having read-only inputs is enough to allow for ‘Zero-Knowledge Proof Covenants’ with cheap transactions. This is a big win in itself.

    https://github.com/bitjson/bch-txv5?tab=readme-ov-file#zero-knowledge-proof-covenants

    From what I understand, this would then also open up for privacy covenants with low fees.

  3. If there are cases where lock-scripts really need to have access to “global functions” via OP_INVOKE then, as you mention, it can be emulated by using introspection to bring in code from read-only inputs into the local function table. If the read-only input is bespoke for the lock-script in question (or if the functions in the read-only input don’t reference each other), then this can be done without relocation of function-slot references in the library, which makes the procedure straightforward. As opcode cost is reduced, this procedure becomes less of a big deal. And if we were to later add OP_EVAL, it is possible to write read-only input code that is broken down into functions but does not require relocation.

We are still limited by the 201-byte ‘Locking Bytecode Length’ limit for read-only inputs though, right? Not sure how keen people are on raising that. The same question would apply to global function tables too.


The Fiendish & Friends episode is dead on Twitter but still available here on RSS for anyone like me who wants to review it.


Regarding relocation of function references: the fact that the Functions CHIP moved away from having function references be integers (0-999) to instead allowing arbitrary byte strings (0 to 7 bytes in size) makes relocation much less likely to be needed. The VM only allows for one thousand functions, but the set of valid function identifiers is massive. This allows for namespacing. So a ZK verifier library (brought in via read-only inputs) could have all its functions prefixed with “zk” (reflected in its internal self-references) and that way avoid conflicts with function names in contracts using it.
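A quick illustration of the namespacing idea (the identifier names are examples; the 0-7 byte limit is per the Functions CHIP as described above):

```typescript
// A library's functions share a short prefix, so its internal
// self-references can't collide with a calling contract's own names.
const prefix = 'zk';
const libraryIds = ['add', 'mul', 'pair'].map((name) => `${prefix}_${name}`);
console.log(libraryIds); // [ 'zk_add', 'zk_mul', 'zk_pair' ]
console.log(libraryIds.every((id) => id.length <= 7)); // true: all valid
```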
