CHIP 2024-12 OP_EVAL: Function Evaluation

Hi all,

I wrote a long evaluation comparing OP_EVAL to an optimized version of OP_EXEC here:

https://x.com/bitjson/status/1877821412897120387

Can anyone identify any use cases for the stack-isolating behavior of Nexa’s OP_EXEC?

Fully-formed product ideas not needed – even contrived examples to demonstrate an advantage would be great.

Even if we steelman OP_EXEC to be less wasteful per-invocation vs. OP_EVAL (e.g. by accepting the “function signature” in some encoding that is pre-concatenated with the evaluated bytecode, wasting those bytes only once per function definition), I’m having trouble devising any scenarios where the function signature adds value:

  1. The resulting transactions require more bytes than OP_EVAL.
  2. Defining a per-contract, bytecode-based “OP_EXEC API” is complex, error-prone, and incompatible with existing contract systems. E.g. for an existing multisig wallet, each multisig signer has to be taught to understand and sign for the new OP_EXEC-based input type rather than simply signing their known input type in new transactions.
  3. Stack isolation adds no security value. When you compare an end-to-end example OP_EXEC-based delegation scheme vs. constructions that we already have today (sibling inputs, introspection, CashTokens, etc.), OP_EXEC systems have more potential security pitfalls with no bandwidth savings.

Some ideas I’ve reviewed:

Contract compression via reusable functions

Transaction compilers should be capable of minimizing any contract by identifying and factoring sequences of bytecode into functions, pushing them to the stack at optimal times and OP_PICK OP_EVAL-ing as needed.
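
For example (a minimal sketch with hypothetical names – <data_1> and <data_2> stand for arbitrary arguments), a sequence needed twice can be pushed once as a function and copied to the top of the stack for each call:

// Push the reused sequence once (here, a hypothetical hash160 helper)
<OP_SHA256 OP_RIPEMD160>
// First call: copy the function to the top of the stack and evaluate it
<data_1> <1> OP_PICK OP_EVAL
// Second call: the function now sits one item deeper
<data_2> <2> OP_PICK OP_EVAL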

This is the “common case”. Nearly all complex contracts can save at least a few bytes by factoring reused sequences of bytecode into functions. For contracts implementing finite field arithmetic, zero-knowledge proofs, post-quantum cryptography, and other non-trivial computations, these savings are significant enough to enable many currently-impractical applications – cutting KBs or MBs from total transaction sizes.

Reviewing OP_EXEC:

Stack isolation here adds no security and significantly increases compiler and contract complexity. Existing OP_EXEC-like proposals would waste several bytes per function invocation, but the steelmanned OP_EXEC above could reduce that to only a few bytes per function definition. Depending on contract complexity, this may still result in hundreds or thousands of wasted bytes per transaction.

MAST-like constructions (“Merklized Alternative Script Tree”)

The UTXO commits to the hash of one or more spending paths which get revealed and authenticated at spending time. This has already been possible on Bitcoin Cash since 2023, but OP_EVAL or OP_EXEC would save bytes by avoiding the need for a sibling input.

For less-interactive setups, hashes can be either directly referenced or committed within a merkle tree.
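
A minimal sketch of the directly-referenced case (placeholder names; the same pattern appears in the full example later in this thread):

// Unlocking script: reveal the committed spending path
<spending_path_bytecode>

// Locking script: authenticate the revealed path, then evaluate it
OP_DUP OP_SHA256
// The 32-byte commitment from the UTXO
<path_hash>
OP_EQUALVERIFY
OP_EVAL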

For interactive setups, participants can prepare a data signature over each allowed spending path using an aggregated Schnorr key, eliminating the need for unpacking a particular hash from a merkle tree. Spending then requires only one of the pre-signed scripts and any unlocking material (variable length) + aggregated signature (65 bytes) + aggregated key (33 bytes). Collaborative spending from the address is also “included”, costing no further bytes (like with BTC’s Taproot).
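
A minimal sketch of a pre-signed spend (placeholder names, assuming the usual OP_CHECKDATASIG operand order of signature, message, key):

// Unlocking script: <aggregated_signature> <spending_path_bytecode>

// Locking script: verify the data signature over the revealed path,
// then evaluate the path.
// Keep a copy of the path below the signature:
OP_TUCK
// The 33-byte aggregated Schnorr key
<aggregated_key>
OP_CHECKDATASIGVERIFY
OP_EVAL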

Reviewing OP_EXEC:

Stack isolation offers no additional security for these cases and wastes a few bytes per definition/invocation depending on precise semantics.

Post-funding assigned instructions

In these cases, OP_EVAL or OP_EXEC could be used to avoid committing to all spending paths prior to the UTXO being funded; the contract instead commits to some other method for authenticating later-assigned instructions. For example, a spending path in the contract could accept instructions authorized by some key, a large deposit, a held CashToken, etc.
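
As a rough sketch of the held-CashToken variant (hypothetical and incomplete – the token-category check and other context are omitted, and input 1 is a placeholder for wherever the authorizing NFT sits):

// Unlocking script: reveal the later-assigned instructions
<instructions>

// Locking script: the instructions must hash to the commitment
// carried by the authorizing NFT on input 1
OP_DUP OP_HASH256
<1> OP_UTXOTOKENCOMMITMENT
OP_EQUALVERIFY
OP_EVAL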

Note the overlap between these cases and “MAST-like constructions” above. However, rather than aiming to reduce the on-chain footprint of contracts, these cases aim to delegate control over the spending path to whatever mechanism authorizes the new instructions.

Reviewing OP_EXEC:

Again, stack isolation operates at the wrong level to be useful here. If the instruction authorizer is malicious, they can likely lock up funds by committing the contract to unspendable instructions (e.g. OP_RETURN); if the instruction authorizer wants to take the funds, they can always commit the contract to instructions allowing spends from a key they hold. In fact, I’m skeptical that there are any use cases in this category which aren’t either 1) security theater (the authorizer is just a custodian), or 2) more efficient to do with just CashTokens (in cases of delegation to decentralized contract systems).

User-provided pure functions

This is my attempted steelman of an OP_EXEC use case: a decentralized application author wants to allow some user(s) to provide a pure function (in VM bytecode) which accepts some raw input from the contract and computes some result(s) which are used by the contract. To make this plausible, we need a reason to accept a function rather than precomputed results, i.e. the contract needs to 1) save and run the function later using yet-unknown inputs or 2) prove that the same pure function was faithfully performed against multiple sets of inputs.

Contrived example: a decentralized exchange protocol allows users to submit new market making algorithms, and anyone can deposit assets or trade with any market maker.

Ignoring all the data, timing, and incentive questions – can we design a scenario in which OP_EXEC at least saves some bytes vs. OP_EVAL by taking advantage of the “built-in” isolation?

Presumably this is the kind of situation made “safer” by isolating an evaluation: some users are risking funds in contracts which execute arbitrary code provided by different users. If the market maker function has some secret exploit, the function author can exploit it to drain the market maker of other people’s money.

Reviewing OP_EXEC:

Once again, stack isolation is protecting the wrong thing – policing stack usage neither prevents the function from including an underhanded exploit nor prevents it from surprise-bricking the contract (<funds> <100_BCH> OP_GREATERTHAN OP_IF OP_RETURN OP_ENDIF).

Fundamentally, the stack usage of an evaluation is simply not relevant to contract security. If the function uses too many items or produces too few, the rest of the contract will fail and the attempted transaction will be invalid (and if the contract can be frozen by a failing function, we’re doomed with or without an isolated stack). If the stack contains something we don’t want the pure function to modify, we need only rearrange our contract to push it later (or validate it later, if the data comes from unlocking bytecode).
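
Concretely (a contrived sketch in the style of the earlier examples): instead of trying to protect an item across an evaluation, just push it afterward:

// Fragile ordering: <important> sits on the stack during the evaluation
<important> <fn> OP_EVAL {code using important}

// Zero-byte “isolation”: push <important> only after the evaluation
<fn> OP_EVAL <important> {code using important}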

So: stack isolation remains useless while wasting several bytes per function definition or invocation. If it mattered, contracts could easily prevent segments of bytecode from manipulating the stack in unexpected ways, but again, the validation that really matters in this scenario would have to look at the actual contents of the pure function. In reality, the end user('s wallet) is ultimately responsible for verifying the safety and security of the contract they’re using.

In general, I’m very skeptical that user-provided pure functions are the optimal construction for any use case. If a contract system requires on-chain configurability, it’s almost certainly more efficient to “build up” state by expressing the configuration as one or more parameters for fixed contract instructions.

And at a higher level, it’s almost certainly even more efficient to simply use smaller, purpose-built contracts rather than configure (on-chain) a one-size-fits-all contract. E.g. because Jedex (Joint-Execution Decentralized Exchange) is carefully broken into efficient, single-purpose contracts, almost all Jedex inputs are smaller than standard single-signature inputs (P2PKH).


To summarize:

As of now, I’m skeptical that OP_EXEC has any plausible advantages vs. OP_EVAL, and OP_EXEC has serious disadvantages in protocol complexity, contract complexity, and overall transaction sizes.

Please let me know of any other use cases I should review, and please leave a comment if I can answer any questions.

5 Likes

Copying snippet from Brainstorming OP_EVAL - #10 by bitjson :

OP_EVAL vs. word definition

Yes! I definitely need to include a section comparing the OP_EVAL CHIP with “proper” Forth-like word definition (also called OP_DEFINE/OP_UNDEFINE/OP_INVOKE in old Libauth branches).

As you pointed out, a full “word definition” upgrade proposal is quite a bit more involved: how and where we track definitions, any necessary limits for those new data structures, whether or not a word can be undefined (we only have OP_0 to OP_16 + maybe OP_1NEGATE!), what makes a valid identifier (only numbers? any single byte? multi-byte?), whether to include Forth OP_FETCH/OP_STORE corollaries for data (some discussion of OP_EVAL vs. a TX-wide “data annex” here), and probably many more details.

Fortunately, we have a great argument for avoiding this bikeshed altogether: we can easily prove that OP_EVAL is the optimal construction for single-use evaluations (as you mentioned). Even if a “word definition” upgrade were hammered out and activated in the future, OP_EVAL would still be the most efficient option for many constructions. (This coincidentally was the same argument that made P2SH32 a strong proposal vs. OP_EVAL – even with OP_EVAL, P2SH32 remains the more byte-efficient construction for those use cases.)

As BCH VM bytecode is a concatenative language, a perfectly-optimizing compiler is quite likely to produce single-use evaluations from common pieces of different words/functions, even if the contract author didn’t deliberately design something requiring OP_EVAL (e.g. MAST-like constructions).

So:

  • OP_EVAL is feature-equivalent to word definition (each enables all the same use cases)
  • OP_EVAL typically requires 1 extra byte per invocation, but sometimes saves 1 byte vs. OP_INVOKE.
    • OP_EVAL is 3 bytes (<index> OP_PICK OP_EVAL) for many calls, but some will be optimized to only 1 byte (just OP_EVAL)
    • OP_INVOKE is always 2 bytes (<identifier> OP_INVOKE).
  • OP_EVAL always saves 1 byte per definition by avoiding the OP_DEFINE.
  • OP_EVAL remains optimal for some uses even if a future upgrade added word definition (as a 1-byte optimization for some function calls).
4 Likes

Jason, I don’t disagree with going OP_EVAL and ignoring OP_EXEC but I am having trouble agreeing with this claim:

I don’t think this is true. One can imagine a scenario where it adds security – if you exec “untrusted” code, the stack isolation, etc., provides perfect security. The untrusted code cannot muck with any state you are tracking at all. All it can do is receive parameters and return 1 or more result(s).

This adds security.

Thus, I think the statement “OP_EXEC stack isolation adds no security” is a false statement. It does add security, demonstrably.

Whether or not that security is of any value to imagined use-cases is another matter. But security it does add, even if the added security is value-less, inconsequential, useless, etc (TBD).

It’s like saying: “Passing out and sleeping inside a locked, bulletproof tank versus on a park bench adds no security… because I don’t anticipate ever being robbed.”

No, the tank is more secure than you sprawled on a park bench. Whether or not that security makes any difference to you is another matter… but demonstrably one is more secure than the other, even if the added security leads to outcomes identical to the less secure situation in practice…

3 Likes

Here’s an example:

{some trusted code} OP_2DUP <untrusted_code> <2> <1> OP_EXEC OP_VERIFY {some trusted code}

If the sequence between the 2 blocks of trusted code passes, then from the point of view of the outer code it will just be a NOP which can’t affect the stack state around it. If it fails, the whole script fails.

With OP_EVAL you can’t allow the untrusted code in just any place, because it could mess up the stack and break the code around it – so it could only be run as the last operation in the whole script, and some VERIFY opcode must precede it: {some trusted code ending with OP_VERIFY} <untrusted_code> OP_EVAL

2 Likes

Yep, decent example.

@bitjson Just to be clear – I’m all in favor of simple OP_EVAL! I am not advocating for OP_EXEC! I am perfectly happy with OP_EVAL. I am not a contract author and I have no idea what authors need. To me, the use case of a compiler optimizing common code into OP_EVAL bits makes tons of sense… even for that reason alone it’s fine by me.

3 Likes

This whole discussion has gotten weird.

People are comparing two ideas, neither of which is perfect. Notice that here on bchr we don’t actually have the proposals in detail, so the “it uses fewer bytes” argument is a bit hard to follow and likely closes the discussion to a lot of readers.

But there is no point in comparing just these two; we can make 20 new proposals based on what it is that we actually want. There is no reason to limit ourselves to just those two.

And, as I pointed out in another thread: Brainstorming OP_EVAL - #14 by tom, the comparison is mostly based on false premises. There are not just TWO ways of handling things like stack protection. Stack protection is an ingredient that can be applied to either proposal under discussion today.

The real question is not about picking between two options, the real question is to find a good way that uses the various ingredients we actually want.

As far as I can tell, the ingredients you can mix and match are:

  • Stack-protection
  • Requiring a method declaration and then calling it based on a method-index (two opcodes).
  • Being able to put methods in another output on the same transaction.
  • Being able to get the method-code from stack, which implies being able to execute code that is untrusted.
  • Being able to write self-modifying code.

Personally, I’d add stack protection as that is virtually free. It stops script writers from doing really nasty things that break good programming assumptions.

Having the idea of a method declaration is neat as it saves bytes when calling, since your indexes are always 1 byte. Additionally it provides compiler safety, because your compiler can ensure that the method you call has the right number of parameters (see linked post).

Being able to put methods in another output fits very nicely in the rest of our scripting design and allows neat things like being able to do verification of the methods-holding-output by simply hashing it and comparing the hash, all in script.

Being able to call code that at any point is stored on the stack, however, sounds to me like something you really, really want to avoid. I mean, we introduced p2sh-32, so maybe it is not sane to introduce a no-checking way to execute a script that can be pushed in the unlocking script. Now, I know you can also check the hash of that, but if you’re supposed to always include that check, then that warrants its own opcode. An op-mast, if you will.
The problem with executing code that was not known at the time of mining the locking script is that the money can be unlocked by anyone who comes up with a script that unlocks it. That may be neat, but it doesn’t make for good money.

Using the stack to store the code you eval means you can write self-modifying code. This likewise has no place in modern programming strategies. Absolutely double that when it comes to money.

A last reason why storing your code on-stack makes no sense is that stack operations are defined to have a cost (this will be activated in May). As such, either the usage of eval will cost double (once for pushing, once for executing), or it needs some sort of exception – which makes the design no longer coherent as a whole.

Further OP_EVAL criticisms:

  • the CHIP as it is on GitHub right now writes: “the evaluation may modify the stack and alternate stack without limitation”.
    This means you could do dirty things like writing a method that does 5 pops.
    This idea goes against 60 years of computer-science concepts and learning, and I don’t like it. It gives the contract writer not just a gun to shoot their own foot; it will likely leave a crater.

  • " ii. The OP_CODESEPARATOR operation records the index of the current instruction pointer (A.K.A. pc ) within the OP_EVAL -ed bytecode."
    I read that as a way to skip all code from the jumped-to-position to the first codeseparator and start evaluating there.
    This will be useful if you want to write self-modifying code. Which is not something that belongs in money.

I don’t like op-eval. It makes hacky code-writing the norm; it provides lots of ways to do extremely dirty things which will make The Fourth Annual Obfuscated Perl Contest Results (The Perl Journal, Fall 1999) look like easy-to-understand code.

OP_EXEC misunderstands contract security and usage

I’ll have to mention OP_EXEC in the CHIP’s evaluation of alternatives, and I don’t want there to be any doubt that the rationale treats it fairly.

I’ve tried my best to steelman OP_EXEC, but as of now, my analysis is:

  • OP_EXEC misunderstands contract security and usage. It only seems to make sense if you haven’t tried to use it.
  • OP_EXEC has zero advantages and significant disadvantages vs. OP_EVAL.

If anyone disagrees, it should be easy to provide a counterexample: a contract with an exploit that OP_EXEC could prevent.

The reasoning trap

WARNING: experienced programmers will almost certainly be misled about OP_EVAL and OP_EXEC by their intuitions from other languages/environments.

This is downstream of 1) our unusual environment – the whole transaction is the potentially-abusive input, not just the hypothetical “code” being consumed by OP_EVAL/OP_EXEC (and in another sense, the VM is itself a “sandbox”: malicious code can’t consume excessive resources or take control of the node), and 2) the concatenative programming paradigm. (It’s a deep rabbit hole, see Thinking Forth and maybe 1% the code.)

(@cculianu and @bitcoincashautist please forgive me for nit-picking parts of your posts to explain myself now, I appreciate you guys commenting here. :pray:)

This is the reasoning trap: we’re used to the concept of “untrusted code” elsewhere, so we assume that VM bytecode has a direct analogy. Skip some steps, and stack isolation looks like a solution to some plausible contract security problem – no need for further review.

I’m arguing that 100% of these imagined scenarios actually rely on internal contradictions which can be revealed by fleshing out the example.

  1. Stack “isolation” is superfluous – any contract where it seems to prevent an exploit has other equivalent exploits not prevented by the isolation.
  2. You can already get that “stack isolation” behavior – for zero bytes – today, simply by fixing your code (or, more likely, using a good compiler). If you think you need OP_EXEC to get that effect, your contract is wasting bytes. In fact, a linter could automatically optimize contracts by removing instances of OP_EXEC (replacing them with OP_EVAL and/or compiling them away altogether). A sketch of that rewrite follows this list.
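
A sketch of the linter rewrite (using the <code> <arg_count> <result_count> OP_EXEC ordering from the example earlier in this thread; <f> and the arguments are hypothetical):

// OP_EXEC call: <f> consumes two arguments and produces one result
<arg_a> <arg_b> <f> <2> <1> OP_EXEC

// Equivalent OP_EVAL call, valid whenever <f> itself already consumes
// exactly two items and leaves exactly one
<arg_a> <arg_b> <f> OP_EVAL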

I hope to convince everyone to either:

  1. Try it yourself and prove me wrong by offering a single counterexample, or
  2. Acknowledge that stack isolation is an anti-feature, not some imagined security-for-efficiency tradeoff.

“Untrusted” means nothing without context

(Again, thanks for your comments here @bitcoincashautist. I get that you already prefer OP_EVAL and you’re just trying to objectively review OP_EXEC; please forgive me for nit-picking your comment. :pray:)

This code snippet includes OP_EXEC, but it’s missing all the context we need to review. In fact, you even mention that the whole OP_EXEC portion is unnecessary in your description: “from point-of-view of outer code it will just be a NOP”.

Please correct me if you meant to say something else – doesn’t this sentence imply that the computation’s result can simply be pushed as data instead of using OP_EXEC?

My claim about OP_EXEC

“Stack isolation” is a reasoning trap. If you think it might add security value, your mental model is omitting critical context.

If stack isolation offers any security value at all, it should be easy for someone to refute this with a single example. No hand-waving away the context though: that’s where the logical errors are hiding.

For anyone who still cares to defend “stack isolation” as an idea:

  • Can you fill in your “trusted code” and “untrusted code” blanks with a concrete example? Who is using this contract?
  • What potential exploits can be performed by the untrusted code – who loses money and how?

TL;DR

OP_EXEC is like requiring motorcycle helmets in a swimming pool. Given the context, “safer” is not a word that comes to mind.

2 Likes

Word/function signature verification

I see some discussion about function signatures being generally useful in programming (I agree), and that being somehow an argument for OP_EXEC and/or “stack isolation”.

Please note that Libauth and Bitauth IDE have supported compile-time verification of “function signatures” and behavior since 2020 – I’ve found it invaluable for optimizing various constructions (e.g. P2SH assurance contract - #15 by bitjson predates CashTokens lock-in).

Adding – via a consensus upgrade – some half-baked runtime checks to approximate function signatures wouldn’t simplify or improve our current capabilities.

2 Likes

If I understand correctly, @andrewstone designed Nexa’s OP_EXEC.

@andrewstone, do you disagree with my review of OP_EXEC? I also responded to you here:

https://x.com/bitjson/status/1880227699198714348

GAndrewStone: TBH, tl;dr. It allows the template to execute untrusted holder constraints.

OP_EXEC is like requiring motorcycle helmets in a swimming pool.

It simply misunderstands contract development.

Even with the correction I described (@NexaMoney’s version is even more nonsensical) – OP_EXEC adds no security and harms protocol complexity, contract complexity, and overall transaction sizes when compared to OP_EVAL.

If you disagree, it should be easy to provide a counterexample that doesn’t hand-wave about context (e.g. leaving a blank for the “untrusted code”). Given any particular contract, what exploit is prevented by OP_EXEC’s stack isolation? Please be sure to include threat model info, then I can help you optimize it by switching to OP_EVAL.


Again, unless someone can produce a credible counterexample (with enough context to review), I don’t plan to spend more time on OP_EXEC.

2 Likes

Thanks Jason for a very enlightening series of posts.

You asked a bunch of times to show a “counterexample”, which was your choice of words indicating an example that did something nasty. Inside of the VM. Knowing full well that a VM is already a sandbox.
Repeating that so often while ignoring all the actually quotable objections I made is really quite enlightening.

You’re arguing that stack isolation is “not good” and that “experienced programmers will be misled” by “intuition”. Yeah, because who would have the audacity to ask people with experience! Where is the fun in that?

Ok, lets leave the schoolyard and sum up the actual facts;

  • op-exec from nexa is super expensive in every usage. Nobody likes that one.
  • op-eval avoids a function signature and stack safety in order to be less expensive.
  • a simple op-exec1 (let’s use my design for discussion’s sake) is not expensive. It is actually cheaper than op-eval for repeated calls. And Jason stated he expects hundreds or more calls in some cases. On cost, op-eval loses.
  • a simple op-exec1 design that pushes a single int for the argument count allows stack safety.

An argument count as part of a function description has 2 main advantages:

  1. a toolset like an IDE can do compile-time checking. Notice that “compile time” is by definition at the time of transaction construction. I mean, no full node compiles, so this one should have been obvious. But we got someone misunderstanding this, so let’s be clear. A transaction with a single output that has a bunch of pre-defined subroutines can be used by a contract you’re building. You’ll compile the contract using the binary library and voilà, you have compile-time safety.
    Or, in short, this 1 byte that indicates the number of arguments is the simplest form of API docs for your IDE to use. Pretty cheap, if you ask me!

  2. a much more important usage of an argument count is the stack protection.
    Now, this is not about being able to escape your VM. That suggestion is more like a “when did you stop hitting your wife” question.
    Instead, this is about avoiding all the hallmarks of unsafe code.

Arguing “it is not worth anything” is a great way of saying you don’t actually have an argument against it; you just don’t like it. Fine, don’t like it. It still is valuable to others and practically free.

Unsafe code in this context is about unpredictable code. If I use this library method in my contract, will it do funny stuff? Will my money be stolen because I didn’t manage to understand the code I used from another dev?

Being able to chop up a script on-stack, partially execute it using code separators, and indeed alter or read the stack outside your subroutine – those are all really nasty things that make code very hard to understand and nearly impossible to test as “correct” when used in not-yet-written scripts.

But the bottom line here is that this is money. This is not some toy virtual machine to do cool new things with. Well, not new, nothing here is novel. Jason may think he came up with this all on his own, but sorry to say that he’s about 60 years late to that party. One way or another any and all options we pick have been tried before.
Maybe that knowledge helps let the egos deflate so we can get back to picking something that actually is good for Bitcoin Cash.

I started the discussion some time ago here:

My opinions on each item above in the original post.

And, to repeat, op-eval doesn’t just allow spaghetti code that can steal your funds, it ALSO has a pretty serious security issue by default:
code can come from any place without being known at the time of building the transaction. That allows injection of untrusted code and that means bugs can steal your money.

You can claim you can write code to verify your inputs, but, after DECADES, this is still the number one issue in software: unverified inputs (xkcd).
And being forced to write your own input verification throws the entire argument that it would somehow be cheaper byte-wise out the window.

None of the arguments hold water, which is why Jason isn’t replying to my posts, which is why he uses very angry and dismissive language (tit for tat) because he knows he’s in the wrong and his lovely op-eval is rotten under the surface.

Which others? How many smart contracts have you designed? Let’s hear from these others.

It’s not even his (it came from Gavin, remember?) and it is certainly not rotten, it is the preferred solution.

1 Like

@tom again, unless someone can produce a credible counterexample (with enough context to review), I don’t plan to spend more time reviewing stack isolation.

And I’d go back further than that! There are many decades of prior art here – if anything, this CHIP is taken more from Chuck Moore than Gavin or any of the other 2012 proposals. OP_EVAL gets BCH VM bytecode to a “fully capable” Forth dialect.

On the other hand, the various 2012 proposals did nonsensical things like clearing stacks and preventing “nesting” – i.e. functions calling functions – and generally misunderstood the control stack and/or how non-trivial Forth programs are factored. (Understandable for the time: VM limits were a huge, untenable problem that overshadowed clear thinking about a lot of topics, and most “smart contract” ideas were very hypothetical. We’re spoiled now to have more certainty on both.)

3 Likes

It will be a NOP only if it passes; otherwise it will fail the TX. That’s just like some
<0> OP_UTXOBYTECODE <0> OP_OUTPUTBYTECODE OP_EQUALVERIFY sequence. The individual locktime/sequence opcodes work the same: if it passes, it’s like a NOP from the PoV of the surrounding code; otherwise it fails the TX because the predicate is not satisfied.
You can’t replace that with data because the result depends on TX context, and the purpose is to force the spender to set the TX context right.

OP_EXEC makes it possible for untrusted parties to insert their predicate checks into the placeholder inside the main contract, without the possibility of breaking “outer” predicate checks created by the designer.

Consider running OP_EXEC with arguments 0 and 0 (which means the untrusted_code can’t affect the main stack): it can only be some kind of user-specified -VERIFY sequence.

{some trusted code} <untrusted_code> <0> <0> OP_EXEC {some trusted code}

Now imagine the preceding trusted code does some calculation, and the succeeding code is supposed to continue it. The succeeding code can trust the stack state that the preceding code produced, because OP_EXEC guarantees that the untrusted code couldn’t have changed it.

But if you have {some trusted code} <untrusted_code> OP_EVAL {some trusted code}, then the untrusted_code could’ve changed the result of the preceding block, and the succeeding block would have to treat the stack state as untrusted data – just as if it were being executed after the input’s data pushes – since the spender could’ve provided anything to be run as untrusted_code.

EVAL could also be used to create a slot in the contract for a user-set predicate check, but then it must be called either first or last in the contract (and if last, the main code must execute a final VERIFY to “lock in” whatever checks it did).

Here’s a full example (link to open it in BitauthIDE).

Unlocking script:

// Next user-set constraint commitment, decided by the spender of this TX
<0x4ae81572f06e1b88fd5ced7a1a000945432e83e1551e6f721ee9c00b8cc33260>
// Reveal the user-committed constraint for this spend
// (the one committed by the previous TX)
<0x51>

Locking script:

// Example of some fixed covenant code
OP_INPUTINDEX OP_UTXOVALUE OP_2 OP_DIV
OP_INPUTINDEX OP_OUTPUTVALUE OP_EQUALVERIFY

// Force inheriting the fixed covenant part, while allowing the user
// to change only the commitment for the user-set constraint
OP_ACTIVEBYTECODE <33> OP_SPLIT OP_DROP
OP_ROT OP_CAT
<0x8862> OP_CAT
OP_HASH256 <0x87> OP_CAT <0xaa20> OP_SWAP OP_CAT
OP_INPUTINDEX OP_OUTPUTBYTECODE
OP_EQUALVERIFY

// Verify additional constraints committed by the previous TX
OP_DUP
OP_SHA256
OP_PUSHBYTES_32
// Current user-set constraint commitment
0x4ae81572f06e1b88fd5ced7a1a000945432e83e1551e6f721ee9c00b8cc33260
OP_EQUALVERIFY
OP_EVAL
2 Likes

And ignoring all the actual relevant points against your chip that have been made.

But, really, you’re barking up the wrong tree. You are the one making a suggestion that goes against decades of actual software engineering practices.
You are violently against a practically free way of avoiding said problems.
And said barking is without any arguments. The wider bitcoin cash ecosystem doesn’t need you to “review” anything. This specific feature is not hard to do. If you refuse to even address actual criticism of your proposal, then please go away. We don’t need that. We need someone that actually can work with others. More people working together get better results.

At this point, NOT accepting that there is a one-byte-cost approach which avoids well-documented and known problems is just plain malicious.

Again, you have given NO arguments why you don’t want this one feature in the list of possible features for the common idea of subroutines – while not even mentioning, let alone discussing, any other features that would be pretty cool to have.

You’d know, considering you’re doing all the barking with no arguments.

1 Like

I would be very interested in hearing if contract coders see value in features like:

  • having a subroutine that is able to alter the stack in a way that goes against the basic “function”-based design of CashScript – or any programming language using function design.
    Specifically, a subroutine would be able to remove more from the stack than is “expected” by the function signature. Which means you can’t expect it to behave the same even if you call it with the exact same arguments.

  • the ability to have the unlocking script include a push which is your unlocking code. Not like p2sh where the hash has to match, but without hash verification: a transaction mined on-chain that you can write code to unlock.

  • the ability to cut / join and otherwise alter a subroutine’s code.

Anyone interested in any of those features? Is the risk of making mistakes worth it to you?

Script is not CashScript. Script is more akin to assembly, which is why EVAL is fine.

If you wrote a function in CashScript that takes 2 args and returns 1, the compiler would create well-behaved Script bytecode for that function – bytecode which doesn’t even try to break out by consuming more than 2 stack items. So why would we need low-level guarantees when we’re already controlling the bytecode to be eval’d, and can ourselves guarantee that it adheres to the calling convention or have the compiler produce compliant bytecode?
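
For example (a hypothetical compiled body, not actual CashScript compiler output): a two-argument “max” function that, by construction, consumes exactly its two stack items and leaves exactly one result:

// Hypothetical body for max2(a, b): consumes [a, b], leaves [max]
OP_2DUP OP_GREATERTHAN
OP_IF OP_DROP OP_ELSE OP_NIP OP_ENDIF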

A function written in CashScript can’t surprise the compiler, because the compiler is the one creating the bytecode – the compiler decides the maximum number of stack items it will pop during execution.

If the contract author wants to make his program modular – to occasionally “load” some of his code to be executed from the input’s data or from another input/output – he always needs to authenticate the code against a commitment (dup, hash, equalverify), even when using OP_EXEC. This is why Jason makes a good argument that OP_EXEC doesn’t really add security; it only adds some flexibility for one class of use cases that would be akin to 3rd-party plugins, and we don’t see much use for those.

2 Likes

Heavy debate on merits and details is very important. Attacks on character, ad hominem, passive aggression, caricaturing others, etc. are not warranted or welcome here. Fair warning.

2 Likes

I pasted the whole HTML of this page to Claude and asked him to find examples of unconstructive behavior.

Here are the receipts, for posterity:

Looking through the thread, here are some examples of unconstructive behavior:

From Post #23 (Tom):

  • Accuses Jason of deliberately ignoring points: “Repeating that so often while ignoring all the actually quotable objections I made is really quite enlightening.”
  • Sarcastic/dismissive: “Yeah, because who would have the audacity to ask people with experience! Where is the fun in that?”
  • Personal attack suggesting malicious intent: “he knows he’s in the wrong and his lovely op-eval is rotten under the surface”
  • Questions motives: “Jason may think he came up with this all on his own, but sorry to say that he’s about 60 years late to that party”

From Post #27 (Tom):

  • Hostile/dismissive: “If you refuse to even address actual criticism of your proposal, then please go away”
  • Accuses of malicious intent: “is just plain malicious”

From Post #28 (bitcoincashautist to Tom):

  • Retaliatory snark: “You’d know, considering you’re doing all the barking with no arguments.”

In Post #31, a moderator (emergent_reasons) had to step in and warn: “Heavy debate on merits and details is very important. Attacks on character, ad hominem, passive aggression, caricaturing others, etc. are not warranted or welcome here. Fair warning.”

The exchange appears to have devolved from technical discussion into personal attacks, particularly from Post #23 onward.

3 Likes

You can just paste the whole HTML. We are living in the future :smiley:

Ok back to OP_EVAL.

2 Likes