CHIP-2025-05 Functions: Function Definition and Invocation Operations

You only need the Merkle root: one hash for all the functions you import. If you search for “MAST Bitcoin” you’ll find some documentation on this design. There is a BIP, but that one is probably too technical for most readers.

The tooling people haven’t really shown an interest in this concept; the last message from Mathieu here on this topic was a “maybe”, and that was months ago.
It would be nice to get more people working across layers involved in solving problems, I do agree with you there.

Most of the past year’s discussions have been with jonas and bca vehemently disagreeing with the statement that authors could lose money by not verifying their inputs.
And if we accept that this is indeed possible for a group of usages, we can move forward and try to solve it.

Maybe the best idea is to have an OP_DEFINE_MAST opcode for 2027; that may be the best bang for the buck. It depends a bit on people actually being interested in working on solving problems, rather than dismissing problems that the experts won’t have.

That’s it. I’m out. Unclear if I ever return here to BCR.

Don’t leave, you are very much valued by the community!


There have been a few MAST proposals, so it would be helpful if you linked to the one you are referring to that avoids Merkle proofs.

If we look at, for example, BIP 341 (Taproot), we can see that to spend a Taproot output one needs to provide a control block which contains the Merkle proof. It can hold up to 128 hashes (32 bytes each), depending on the size of the Merkle tree. This is what proves the inclusion of the script in the Merkle tree.
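To make the control-block idea concrete, here is a minimal Python sketch of verifying a Merkle inclusion proof. It is deliberately simplified: plain SHA-256 with lexicographic child ordering, whereas BIP 341 actually uses tagged hashes (“TapBranch”), so the function names and hashing details here are illustrative only.

```python
from hashlib import sha256

def branch(a: bytes, b: bytes) -> bytes:
    # Simplified pairing: BIP 341 also orders the two children
    # lexicographically, but hashes them with a tagged hash,
    # not plain SHA-256 as done here.
    lo, hi = sorted((a, b))
    return sha256(lo + hi).digest()

def verify_inclusion(leaf: bytes, path: list, root: bytes) -> bool:
    # Walk from the leaf to the root; `path` is the list of sibling
    # hashes (the control-block content), one per tree level.
    node = leaf
    for sibling in path:
        node = branch(node, sibling)
    return node == root
```

A tree of depth 128 therefore needs at most 128 sibling hashes in the proof, which matches the control-block limit mentioned above.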

Probably the easiest way to understand it is to look at a block header. It has one Merkle root, and the number of transactions in a block is variable.

The Merkle tree is built by hashing the actual [data] (here a series of transactions) in a specified way. If you provide all the transactions you need not provide any hashes; the Merkle root is there to verify that all the data is proper and byte-for-byte as expected.
Look up the size of the block header: it is a standard, unchangeable 80 bytes. That is because it holds only the Merkle root and no other hashes.
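As an illustration of that point, here is a small Python sketch in the style of Bitcoin’s block Merkle tree: double SHA-256, with the last node duplicated on odd-sized levels. Txid byte order and transaction serialization details are glossed over; the point is that however many transactions go in, a single 32-byte root comes out.

```python
from hashlib import sha256

def dsha256(data: bytes) -> bytes:
    # Bitcoin's double SHA-256
    return sha256(sha256(data).digest()).digest()

def block_merkle_root(transactions: list) -> bytes:
    level = [dsha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:               # odd level: duplicate the last node
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]                      # always exactly 32 bytes
```

Whether the block holds one transaction or thousands, the header still commits to all of them with this one hash.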

MAST as a way of verifying the content of scripts would work identically. Regardless of how many scripts you supply in the unlocking script, the MAST operation hashes all of them into a tree, which results in a single hash that is then compared to the one stored on the blockchain in the output being unlocked.

What may be confusing is that Merkle trees have a second feature that is used in MAST (see the SPV chapter in the Bitcoin whitepaper): you can omit one piece of [data] at unlocking time and instead provide its hash, which may be useful in some cases.
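A small Python sketch of that second feature (the names and the plain-SHA-256 tree are my own illustration, not from any CHIP): the verifier is given three of four scripts plus only the hash of the omitted one, and still arrives at the same root as someone holding all four.

```python
from hashlib import sha256

def h(data: bytes) -> bytes:
    return sha256(data).digest()

def root_from_leaf_hashes(leaf_hashes: list) -> bytes:
    # Build the tree bottom-up from already-hashed leaves.
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

scripts = [b'script 1', b'script 2', b'script 3', b'script 4']
full_root = root_from_leaf_hashes([h(s) for s in scripts])

# Omit script 3 at unlocking time: ship only its 32-byte hash,
# known in advance, while the script itself stays unrevealed.
omitted_hash = h(scripts[2])
shipped = [h(scripts[0]), h(scripts[1]), omitted_hash, h(scripts[3])]
```

The verifier hashing `shipped` reaches `full_root` without ever seeing script 3.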

Edit:

So, in short, MAST uses Merkle trees, but in the normal case they exist only in memory. They are not shipped; you don’t need to store them on chain.

Yes, a Merkle tree can be represented by a single root hash. Inclusion proofs (Merkle proofs), however, consist of a Merkle path from the leaf node to the root, as illustrated by the Taproot BIP.

Thanks for engaging in conversation. I have what I need regarding OP_DEFINE_VERIFY. My conclusion is, as before, that a separate opcode is not justified but that tooling can define it as a macro.


The whole Merkle-root and MAST discussion was a bit off-topic here, yes.


Yes, better to collect these ideas into a separate CHIP.

I think that’s a bit of a trivialization of the whole debate… input verification is obviously programming-101 level stuff… I don’t have a stake there but let’s try to stay focused on the big picture?

I think it’s been established that having a quick and easy way to verify some bytecode before defining/invoking it is probably a good idea. The disagreement seems to come from whether or not it should be baked into the protocol as its own opcode.

I personally think no, it’s the responsibility of a higher-level tool. (Don’t forget to null-terminate your C strings, btw!)

Probably because those people are pragmatic and will build for things that exist today. If Functions makes it to mainnet, I’m sure someone will create such tooling if the need arises… maybe even you could?

…what?

This kind of quipping is really unproductive imo. Yes, I know it’s not just you. General statement.


My personal goal is always to avoid any and all personal attacks, and just focus on the tech. In the last year or two, specifically with a small number of people here, the efforts were very much one-sided and productivity went negative.

I’ve always trusted that a moderator or otherwise independent 3rd party could step in and stop personal attacks, correct (intentional) misinterpretations, and such. But this has not happened; the most useful moderator on Telegram, who did try, has even left. As such I’m thinking we need to call such problems what they are.
Maybe a new moderator steps up and we can get back to being civil. That would be my dream.

I think a define opcode that validates the pushes as being what was expected is useful.
Imagine the usecase where I have an output:
[ripe-160 hash] [id-start] [script-count] OP_DEFINE_MAST

Then an unlocking script has a bunch of pushes for your to-eval scripts. Say, 10 pushes of scripts.
This is a neat replacement for P2SH, giving you much more power and flexibility for the complex type of things,
while validating your inputs with little to no overhead.
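A hedged Python sketch of how such a hypothetical OP_DEFINE_MAST might validate its pushes. Everything here is my assumption, not a spec: the function name, the exact semantics, and the use of a plain SHA-256 root instead of the RIPE-160 commitment shown in the example output above.

```python
from hashlib import sha256

def merkle_root(leaf_hashes: list) -> bytes:
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def op_define_mast(committed_root: bytes, id_start: int,
                   script_count: int, pushed_scripts: list):
    # Hypothetical semantics: the opcode fails unless the pushed
    # scripts hash up to the committed root; on success the scripts
    # become callable functions numbered id_start, id_start+1, ...
    if len(pushed_scripts) != script_count:
        return None
    root = merkle_root([sha256(s).digest() for s in pushed_scripts])
    if root != committed_root:
        return None  # a push was tampered with: validation fails
    return {id_start + i: s for i, s in enumerate(pushed_scripts)}
```

Note that the check only costs hashing the pushes themselves; no per-script Merkle proofs are shipped, which is the “little to no overhead” point above.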

I’d probably use it, but I don’t know if others would. Maybe we can turn the whole topic around and get people to come together on a good solution, not for this upgrade but for the next one.

I think one of the most beautiful things about getting Functions is that you could actually define these things as functions on the script level and publish them on-chain for anyone else to use… imagine using introspection to compose functions stored on UTXOs as some kind of on-chain library… such a library would be widely auditable, tamper-proof, and verifiable by hash… what do you think of that as a solution to the problems you’re describing?


Bruh.

This platform is pretty unmoderated, otherwise an independent 3rd party would step in and temp-ban you long ago.

What you are doing is repeatedly pretending that the arguments of the other side never happened and just keep peddling your point of view.

It’s extremely frustrating, because essentially you are ignoring the existence of other people and their opinions completely.

You have been doing it here and you have been doing it in AMM transaction-related discussions for over a year.

This is not how it looks from here. Actually the opposite.


If you look at my CHIP from last January, it has quite a lot of work in that direction.
There are various really interesting things possible, and cheaply if you do it correctly. But they have not been explored, mentioned, or discussed. (Well, except for some throwing of shade.)

We are missing the opportunity to do this without copy-and-paste of byte arrays by ignoring a much nicer way that I described here: BitcoinCash/CHIP-subroutines: Declare and call subroutines with some new opcodes. - CHIP-subroutines - BitcoinCash Code

OP_RUNSUB2 is identical to OP_RUNSUB except that it fetches the subroutine list from earlier processed inputs on this same transaction.

To put that in the perspective of Jason’s CHIP: imagine an ‘op_define’ done in one of those UTXOs you’re talking about, using that UTXO as one of your inputs, and then being able to use that script with basically no overhead (no introspection code, no split/cat, etc.).

Maybe we can do that in a future opcode, but my idea has been ignored so far. Maybe you can think about it; since it is not a perfect fit, it will need work. But it would be nice to see it taken up if people see a core of usefulness.

I think that is orthogonal to what I’ve been talking about. Different use cases and different security requirements.
I mean, not everyone will use those provided scripts. There will be people using their own, for instance in a ‘beta’ release. And those people will need to verify their inputs to avoid their funds being lost.

Thank you to all contributors and reviewers so far – I’ve frozen the Functions CHIP for lock-in at v2.0.2 (26e22566), and stakeholder statements will be periodically updated through November 1. Final approval requests will go out in early October.

Please feel free to open issues in the repo for further comments, clarifications, or feedback, and please continue to publish and/or send pull requests with stakeholder statements.

This CHIP is integrated and live on the Sept. 15 public test network (tempnet) – please see that topic for details on how to set up a tempnet node and experiment with the upgrade :rocket:
