CHIP-2025-08 Functions (Takes 2 & 3)

Maybe I need to see a fuller description, but from what is described here, that would not prevent code-that-writes-code (because nothing is stopping a script from just hashing some arbitrary blob and passing it to OP_DEFINE_VERIFY).

Perhaps in the specification for this scheme we would also need to enforce some two-phase operation mode in the VM (one phase where all you can do is define, another where you execute normally but cannot define)…

How is it responsible to get less of the benefits when we are perfectly capable of getting more without having to sacrifice anything? Option 3 is just a little more work for Calin :slight_smile:

As Calin pointed out, Script can compute hashes on the fly, so it can still be made to accept unknown user code.

Also, if you use define on a blob pushed by the locking script (which you would do just to structure or optimize your script), then the hash verification would be a waste of bytes: the code is already immutable because it is wholly defined in the locking script.

Why don’t we need a hash in old “bare” pay-to-public-key scripts? Because nobody can change the key; it is defined in the locking script itself.

With P2PKH we have to verify the key against the hash because the key is provided later, by the spender.
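
To make the comparison concrete, here are the two classic patterns in script form (the unlocking data is shown in the comments):

    // Bare pay-to-public-key (P2PK): the key is fixed in the locking script,
    // so no hash check is needed.  Unlocking script: <sig>
    <pubkey> OP_CHECKSIG

    // Pay-to-public-key-hash (P2PKH): the key arrives later, from the spender,
    // so the locking script commits to a hash of it and verifies it on spend.
    // Unlocking script: <sig> <pubkey>
    OP_DUP OP_HASH160 <pubkey_hash> OP_EQUALVERIFY OP_CHECKSIG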

Yes, the defines would indeed happen in the input, which today is push-only.
Ideally it would not “take” the code from the stack; instead it would behave like a push itself, saving script bytes.
So the ‘push only’ rule would be expanded to “push or define only”, roughly as sketched below.
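
A purely hypothetical sketch of what such an input might look like (the inline-operand form of OP_DEFINE shown here is an assumption for illustration, not part of any published take):

    // Unlocking script under a "push or define only" rule (hypothetical):
    <sig> <pubkey>                        // ordinary pushes, allowed as today
    OP_DEFINE <index> <function_body>     // carries its code inline, like a push,
                                          // instead of consuming it from the stack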

In that case, from my (responsible) point of view, “less” means “more”.

I believe that a more restrictive environment initially means fewer things that can go wrong.

Why add more work right now if we are likely to remove the restrictions completely in the future (unless it turns out it IS dangerous, in which case adding more restrictions now IS “safer”)?

I mean, think about your own words.

That’s my point exactly, that’s why it is responsible.

We get “less” now, in case the “more” turns out dangerous.

It also means fewer things can go right. :slight_smile:

2 Likes

Naturally, you can do anything you want in the case where you put it all in the locking script. Nobody disagrees, AFAICT. That’s not what this topic was about, right? Nobody cares if you do fractal code expansion of your own pushed code from your own locking script. Foot, meet shotgun. Go ahead. At least it will never be someone else’s shotgun. That’s important. You can shoot your own foot; I can’t shoot yours.

The point is about getting the runnable code from elsewhere, because that code is untrusted.
The P2SH example is the main known one: the code is supplied only at the time of unlocking, and to know it is the exact byte-for-byte code we meant, we use a hash.
An op-define-verify duplicates that P2SH behavior, thereby solving the entire point of this article: that code should not be mixed with data.
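
For illustration, that P2SH-like check spelled out with explicit opcodes could look roughly like this (the OP_DEFINE/OP_INVOKE operand order and the choice of OP_HASH256 are assumptions made for the sketch):

    // Unlocking data supplies <code_blob>; the locking script then does:
    OP_DUP OP_HASH256                  // hash the supplied code
    <expected_hash> OP_EQUALVERIFY     // fail unless it is byte-for-byte the code we committed to
    OP_0 OP_DEFINE                     // only then register it as function #0
    OP_0 OP_INVOKE                     // and run it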

Sure, if you think this (3) is the way to go, just go ahead.

I am not that good with opcodes anyway; I really am in no position to oppose.

Right now I am just trying to save you from doing too much work that might be discarded later.

I honestly think (1), the original, is the way to go. sigh

1 Like

Well, nobody has produced use cases/benchmarks/tests that would break (1).

If someone did, that would surely be useful.

I’d love to see ANY usage of any version of the idea as published scripts, to see what people are actually doing with this.
All this talking is very academic without any actual usage in real code…

For those that didn’t follow along at the beginning of the year, here is the result of some of my research: an alternative we could talk about (not many did, however) that has some design requirements:


This is money we are making programmable, which means there is a high incentive to steal it, and there are long lists of such problems on other chains. Draining a Decentralized Autonomous Organization is an experience we should try to avoid.

  1. Only “trusted” code can be run.
    The definition of trusted here is simply that the code was known at the time the money was locked in. Which means that at the time the transaction was built and signed, we knew the code that is meant to unlock it. To make this clear: with P2SH the code is ‘locked in’ using the provided hash, ensuring that only a specific redeem script can run.
  2. Separation of data and code.
    Subroutines are code; code can use data in many ways: multiply it, cut it and join it. Code can’t do any of those things to other code.

One clear and easy-to-understand example of the dangers here is where a script author assumes that the creation of 2 outputs will result in those two being spent together in one specific transaction. Given the way the UTXO model works, this isn’t a given at all, but it is one of the more common misunderstandings of how things work.
As such, a script may try to read data from another input and use it as code, believing that to be safe.

If that code ends up on chain, anyone can brute-force a transaction that supplies just the right data to convince the script (which is now on-chain and immutable) that it can be spent, and take the money that was locked in that transaction.

This is easy to write and to exploit: just use introspection to get a specific locking script from a numbered input, cut it, then op-define it and run it.
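
A rough sketch of that dangerous pattern (the cut offsets and the OP_DEFINE/OP_INVOKE operand order are assumptions; the point is the shape of it, not the exact bytes):

    // DANGEROUS: treats another input's locking bytecode as trusted code.
    OP_1 OP_UTXOBYTECODE      // fetch the locking bytecode spent by input #1 (spender-chosen)
    <2> OP_SPLIT OP_NIP       // cut off a prefix to reach the "code" portion
    OP_0 OP_DEFINE            // define it as function #0, with no hash check at all
    OP_0 OP_INVOKE            // run whatever the spender arranged to be there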

This is the main “thing” that people have worried about with regard to mixing introspection, “the stack”, and functions, and it is what I understand the ‘executable bit’ is meant to solve. But, again, I don’t think it solves it very nicely; it just hides the real problem.

See: Quantumroot: Quantum-Secure Vaults for Bitcoin Cash

He makes heavy use of the proposal in (1) and it’s not clear it would work as elegantly with proposal (2) or (3)…

1 Like

My idea is that this is exactly why we can/will just remove the limitations after a trial period. And then we can have cool stuff like Quantumroot.

1 Like

Really, we should just have everything working now so we never have to worry about this again… and so BCH is clearly lightyears ahead of the competition. :slight_smile:

1 Like

:man_shrugging:

Well, I lack the technical competence to make the right decision, and the evidence from more competent people that I could use does not exist, so I will just drop this topic.

Maybe you can simply make the decision yourself?

Later, guys.

1 Like

Thanks heaps for writing these up, Calin. I’m still leaning towards (1) also. In some ways, I actually think losing the static analyzability (for lack of a better term) might be a perk, because it makes it difficult for miners to omit evaluating transactions they think might be computationally expensive, which I think might make the VM Limits CHIP more reliable as an indicator of inclusion (but it’s very possible I’ve overlooked something here).

Just a question on Take II:

This flag (and thus the restriction on how OP_DEFINE can be used) may be removed in a future Bitcoin Cash network upgrade in order to explicitly allow “code that writes code”.

If we do remove this restriction, can you think of any situations where it might make an existing contract insecure? E.g. I write a contract with the assumption that fCanDefineFunctions exists and, once it is removed, the contract system I’ve developed becomes insecure/vulnerable? This would probably be my biggest concern, but I haven’t thought through whether it’s actually valid. If it is, we might want to look at doing something like this in tandem with something like an OP_ENV opcode? ( Wider discussion, an OP_ENV for the VM upgrades - #9 by tom )

1 Like

Here is a high-level example of what we could do with the original OP_DEFINE/OP_INVOKE (but not Takes 2 & 3).

I could ask people to pay to a P2SH(32) address so I have a bunch of UTXOs that could only be unlocked with a redeem script of the following form (a rough sketch follows below):

  • From the 0th output, push the locking bytecode to the stack (OP_0 OP_OUTPUTBYTECODE)
  • Split so that the leading bytes are removed, as well as everything after byte 200
  • Duplicate the data on the stack, hash it, and compare it to a hardcoded hash
  • Do OP_DEFINE on the top stack item
  • Run the code with OP_INVOKE

Output #0, which contains the code, could be of the form:
OP_RETURN <200 bytes of bytecode> [...]

This means that I could spend all of the UTXOs in one transaction but only need to include the 200 bytes of logic once.

Of course, the code can also be moved to a specific input and be even larger.
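
Putting the steps above together, a rough ASM sketch of the redeem script (the exact split offsets, the choice of OP_HASH256, and the OP_DEFINE/OP_INVOKE operand order are assumptions for illustration):

    // Redeem script, identical for every one of the P2SH(32) UTXOs:
    OP_0 OP_OUTPUTBYTECODE             // push the locking bytecode of output #0
    OP_3 OP_SPLIT OP_NIP               // drop the OP_RETURN + push-prefix bytes (assumed to be 3)
    <200> OP_SPLIT OP_DROP             // keep only the 200 bytes of shared code
    OP_DUP OP_HASH256                  // hash it...
    <expected_hash> OP_EQUALVERIFY     // ...and require exactly the code we committed to
    OP_0 OP_DEFINE                     // register it as function #0
    OP_0 OP_INVOKE                     // run the shared logic

    // Output #0 of the spending transaction carries the code once:
    // OP_RETURN <200 bytes of bytecode> [...]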

5 Likes

Man, this is so powerful. Combined with P2S, where you don’t even need the redeem script pushed, it can make for some really compact txns that spend from contracts very efficiently indeed.

Thanks for the example.

Yeah, to me the original Take 1 proposal basically stimulates the imagination towards new possibilities for clever ways to do smart contracts. It really unleashes the Script VM to be a very flexible computational and logic system… what you just outlined is just one example of how flexible it becomes…

5 Likes

Indeed. It definitely could be a perk, although I guess if you do a poor man’s taproot type setup you can hide some spend paths anyway, even without the dynamism of code that composes code. But yes… obfuscation can be a perk…

It’s possible that some strange contract, kind of artificial and contrived, would rely on the failure mode existing; then, when it goes away, yes, bad things may happen.

But we have sort of made it a policy of BCH already, since at least 2023, not to rely on such failure modes as part of contract logic, because we basically like to say that current limits on things can always be relaxed.

Doubly so when we announce that’s the case ahead of time.

In this case I don’t think any real world contract would care…

But consider that you could already have relied on overflowing 64-bit math before 2025, and now, after May, your contract no longer fails in that failure mode… since we have 80k-bit math now!

OP_ENV might be a good idea, tbh… some day, maybe. Especially if we do very radical changes… at some point.

That’s one avenue to liberate us from previous constraints on already-locked funds that may or may not have made assumptions we are breaking. For sure, that’s a valid way to solve that problem…

3 Likes

Yeah, I can’t think of a real-world case where it matters, and so long as we have this policy:

But we have sort of made it a policy of BCH already, since at least 2023, not to rely on such failure modes as part of contract logic, because we basically like to say that current limits on things can always be relaxed.

… I think it’s fine: contract devs should assume this might be relaxed in the future.

I’m still leaning strongly towards the original approach, but would compromise on Take II. So long as we keep the door open to allowing it in the future (as I do think it’s a good capability and there will be some good uses for it, similar to Jonas’s examples), I’d be okay with it.

Thanks again for writing those up, really appreciate it. :pray: