CashTokens & PMv3: fixed-size inductive proofs, transaction integer compatibility

Today the VM numbers and the transaction-level numbers are not the same. Making them different in yet another way in the future wouldn’t affect the VM at all; they are still completely separate systems.

Why would you want to make those encodings the same in the first place?

@tom and I talked a bit more offline, but I also wanted to answer here for anyone else reading.

I just finished writing a full overview of Hashed Witnesses here:

Hashed Witnesses: Build Decentralized Applications on Bitcoin Cash

That post tries to describe the problem and this solution in detail, and also how CashTokens enable a lot of other applications: contract-based & miner-validated tokens, tradable synthetic assets, decentralized exchanges, and prediction markets.

For those reading this thread later, these answers will be much more intuitive after reading the post above.


Can you explain the parent/child relationship here? It reads like the VM would need access not just to the locking script, but maybe also to the locking script of a parent transaction (of a parent transaction, etc.).

Yes, that’s correct, and in some cases, the grandparent locking script.

The contract first confirms that it’s been given its true parent TX (by comparing the hash with its own outpoint transaction hash), then it inspects the parent to confirm the parent’s 0th output is the expected token covenant. (It also does the same validation for the grandparent transaction.) Here’s the explanation from the Token Transfer (...tx3) script in the CashTokens template:

/**
 * This script is the key to the inductive proof. It's 
 * designed to be unspendable unless:
 * 1. Its parent was the mint transaction funded by the transaction
 *    hash claimed (requires full parent transaction), or
 * 2. It was previously spent successfully – the parent's 0th input's
 *    UTXO used this covenant, i.e. the parent's 0th-input-parent's
 *    0th output uses this covenant. (Validated by checking the full
 *    parent and grandparent transactions.)
 * 
 * With these limitations, if a user can move the CashToken,
 * we know the CashToken has the lineage it claims.
 */
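For intuition, here’s a minimal Python sketch of the check the covenant performs in VM bytecode. The names (`double_sha256`, `extract_output0_locking`) and the flat byte-string transaction are illustrative assumptions, not the template’s actual code:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # OP_HASH256: SHA-256 applied twice (a txid is the double-SHA256
    # of the serialized transaction).
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def parent_is_valid(parent_tx: bytes, outpoint_tx_hash: bytes,
                    covenant_locking_bytecode: bytes,
                    extract_output0_locking) -> bool:
    # 1. Authenticate: the pushed bytes must hash to the outpoint's txid,
    #    proving this really is the parent transaction's serialization.
    if double_sha256(parent_tx) != outpoint_tx_hash:
        return False
    # 2. Inspect: the parent's 0th output must pay to the expected covenant.
    #    (`extract_output0_locking` stands in for parsing the serialization;
    #    it is a hypothetical helper, not part of the template.)
    return extract_output0_locking(parent_tx) == covenant_locking_bytecode
```

The grandparent check in case 2 is the same pattern applied one level deeper.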

Would you agree that if NativeIntrospection is implemented that the majority of the cases for scripts to understand VarInts disappears?

Yes, most VarInt parsing use cases are better solved by native introspection. But for any cases where the introspection is not on the current transaction, introspection opcodes probably won’t help (e.g. in CashTokens, we’re actually inspecting the parent and grandparent token TXs for validity).

I wrote here about Serialized Transaction Introspection, but I think it’s probably wise to first wait and see what other kinds of applications use ancestor transaction introspection; only if it becomes common enough might we eventually want to accelerate it with specialized opcodes.

‘witness’ is a weird name

I agree, sorry… I settled for “Hashed Witnesses” initially because it’s best compared against Segregated Witness. SegWit fully excludes the unlocking bytecode from the transaction, creating a new section of the block for it (and some other changes). “Hashed Witnesses” simply hashes the unlocking bytecode, compressing it into a fixed size within the transaction.

I’m definitely open to better names though. Maybe I should have just gone with Hashed Unlocking Bytecode. :sweat_smile:

Thank you for taking the time to review this! And sorry I hadn’t finished writing that Hashed Witnesses overview before today! Hopefully that helps to address the rationale in wanting all transaction integer formats to be consistent.

We could definitely implement OP_NUM2VARINT and OP_VARINT2NUM operations instead, but I think internal consistency between the VM and transaction format will pay serious dividends in the long run. And saving ~5% in transaction size and paving the way for fractional satoshis are both nice bonuses.
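For reference, the transaction format’s VarInt (CompactSize) encoding that such opcodes would convert to and from can be sketched in a few lines of Python (the function names mirror the hypothetical opcodes):

```python
def num2varint(n: int) -> bytes:
    """Encode a non-negative integer as a Bitcoin CompactSize ("VarInt")."""
    if n < 0xfd:
        return n.to_bytes(1, "little")
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")

def varint2num(data: bytes) -> tuple:
    """Decode a CompactSize; returns (value, bytes consumed)."""
    prefix = data[0]
    if prefix < 0xfd:
        return prefix, 1
    size = {0xfd: 2, 0xfe: 4, 0xff: 8}[prefix]
    return int.from_bytes(data[1:1 + size], "little"), 1 + size
```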

May I suggest a ‘cost / complexity’ section to your proposal? This is a biggy. Pruning logic would change very considerably, validation would become much more expensive when counted in IO-reads (up to 3x, assuming it never goes beyond the grand-parent).

Questions that are still open:

Can you put the witness inside the txid-calculation? Why not?

How does Bitcoin-Script get access to the parent transaction-data?

It’s not segregated, it’s not a witness, I’m still unsure what it is. I’ll read more, and maybe when I finally understand the thing you built I can come up with some more descriptive names. For now I’d suggest “hashed-output-script”.

Thanks again for all the thoughtful questions, these have been really helpful in clarifying things for an eventual FAQ.

Cost/Complexity

May I suggest a ‘cost / complexity’ section to your proposal? This is a biggy. Pruning logic would change very considerably, validation would become much more expensive when counted in IO-reads (up to 3x, assuming it never goes beyond the grand-parent).

That’s a good idea. :+1:

To clarify though, I don’t think pruning logic should change at all. Nodes aren’t providing any new data to the VM – only the same UTXO information for each transaction input. (Nodes are not providing access to raw parent or grandparent transaction data, that data is actually being pushed in the unlocking bytecode with the signatures.) So there is up to 3x as much data in these transactions vs. normal transactions (~600 bytes rather than ~200 bytes), but I don’t think that has a meaningful effect on node IO.

Introspecting Parent Transactions

Can you put the witness inside the txid-calculation? Why not?
How does Bitcoin-Script get access to the parent transaction-data?

Not without having transactions grow each time the token is moved.

The root issue is: the child transaction needs to first verify that it’s been given the real contents of its parent. So it hashes the data it’s given to get a txid, then it compares that txid to the 0th outpoint transaction hash (which it knows is the correct parent txid). If they are the same, it can safely inspect the parent transaction for validity.

So really, inductive proofs are already easy to do in the current VM, there’s just this tiny implementation detail that breaks everything: we have to hash the whole parent transaction (including its proofs) to get the same hash which is already present in the outpoint transaction hash. Without being able to authenticate that the data we’re given is “the real parent transaction data”, we can’t validate it.

So to find other ways to accomplish this (other than allowing the unlocking bytecode/witness to be hashed inside the txid serialization), what we’re really looking for is ways to introspect elements of the parent transaction. But right now, nodes have only two relevant pieces of data available:

  • the UTXO (the locking bytecode and the value in satoshis)
  • the txid

So unless we provide data other than the txid as a way of introspecting the parent transaction, we need to modify the txid so it can be used in proofs which don’t grow from transaction to transaction.

Aside, this suggests a potential future optimization: if token transactions begin taking up more than half of network throughput, it might be worth providing some new VM state which allows for introspection of the parent transaction without having to provide the whole transaction. E.g. some sort of merkle proof which allows access to contents of parent and grandparent transactions. (Though we probably should not worry about it until these tokens have serious traction on the network and we’ve validated that sort of optimization would be worth it.)

Naming

For now I’d suggest "hashed-output-script’.

It’s the input script that’s being hashed, so we could maybe go with “hashed-input-script”.

I’m personally not a fan of “script” because it’s really vague: scripts are typically human readable, and when you compile a script, it’s technically “bytecode” inside the actual transaction. (E.g. a JS library method asks for a “script”: do you give it the ASM string? UTF8 encoded? Pre-compiled to bytecode? etc.)

“Input” can also be vague because you could be talking about the bytecode in the input or the bytecode “provided by” the input (the input’s locking bytecode). If you try walking someone through a complex contract, “input” and “output” quickly become meaningless words :sweat_smile:. So I also try to be maximally descriptive by referring to them specifically as “locking” or “unlocking”.

I think the most “correct” name would be Unlocking Bytecode Hash, which may optionally be provided in place of the Unlocking Bytecode, then the Hashed Unlocking Bytecode is appended at the end of the TX. (Quite the mouthful, but maybe better than throwing the word “witness” into the mix.)

I also think we should call “Script” (the language) “BCH VMB” for BCH Virtual Machine Bytecode, but I’ll stick with “Script” until a critical mass of developers are tired of that ambiguity. :grinning_face_with_smiling_eyes:

The following began as a response to @andrewstone’s review on Reddit, but it probably belongs here (and it’s also too long for me to post there :sweat_smile:).


Comparisons with the Group Tokenization Proposal (Previously OP_GROUP)

Hey @andrewstone, thanks for reviewing! Sorry I missed your review earlier.

Aside: is there some other public forum where OP_GROUP has been discussed in technical detail before? (I see a few reddit threads, but they’re pretty thin on substance.) Also is this Google Doc the primary spec for Group Tokenization right now?

Before getting into details, I just wanted to clarify: I believe hashed witnesses are the smallest-possible-tweak to enable tokens using only VM bytecode. My goal is to have something we can confidently deploy in 2022. I don’t intend for this to stop research and development on other large-scale token upgrades.

If anything, I’d like to let developers start building contract-based tokens, then after we’ve seen some usage, we’ll find ways to optimize the most important use cases using upgrades like Group Tokenization.

Some responses and questions:

Group Tokenization increases a transaction’s size by 32 bytes per output. So pretty much the same in terms of size scalability. (And the latest revision includes a separate token quantity field, rather than using the BCH amount. This is a design decision that I assume CashTokens will also need).

Hashed witnesses increase the size by 32 bytes per input rather than per output, and token transactions will typically need only one. Do you have any test vectors or more implementation-focused specs where I can dig into the expected size of token mint, redeem, and transfer transactions with Group Tokenization?

For CashTokens, a 32-level merkle tree proof (>4 billion token holders) requires 192 bytes of redeem bytecode and 1024 bytes of proof. That overhead would apply only to mint and redeem transactions. For transfers, the inductive proof requires only the serialization of the parent and grandparent transaction, so at ~330 bytes each, the full unlocking bytecode is probably ~1,000 bytes. (If you’re curious, you can review the code in the CashTokens Demo.)
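The size arithmetic above is easy to check directly (a sketch; the per-level cost is one 32-byte OP_HASH256 digest, and the ~330-byte parent serializations are the estimates from the post):

```python
MERKLE_LEVELS = 32            # tree depth
HASH_BYTES = 32               # size of an OP_HASH256 digest
LEAVES = 2 ** MERKLE_LEVELS   # possible token holders

# One sibling hash per level is pushed as the merkle proof
# (mint and redeem transactions only):
proof_bytes = MERKLE_LEVELS * HASH_BYTES  # 1024 bytes

# A transfer instead pushes the parent + grandparent serializations:
transfer_proof_bytes = 2 * 330  # ~660 of the ~1,000-byte unlocking data
```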

On quantity/amount: I didn’t propose a new quantity field since it’s pretty easy to use satoshis to represent different token quantities for CashTokens which need to support it. And it’s easy for covenants to prevent users from incorrectly modifying the value.

But on the CPU side, there’s a need to verify the inductive proof, so CashTokens would use more processing power.

Verifying the proof costs less than verifying a signature – two OP_HASH256s and some OP_EQUALs. There’s no fancy math: we just check the hash of the parent transactions, then check that they had the expected locking bytecode. (Check out the demo, it’s quite easy to review.)

Both require a hard fork. Since CashTokens is creating a new transaction format it is likely a larger change.

This seems… unlikely :sweat_smile:

The Group Tokenization spec is ~24 pages (~43 pages with appendices and examples), and that doesn’t include a lot of implementation detail. The PMv3 proposal is just a few pages, and includes a full binary format breakdown/comparison and test vectors.

Also, doesn’t the latest Group Tokenization spec also include a new transaction format? “We define a new transaction format that is used for transactions with groups (or with BCH).”

The inductive proof logic is also extremely critical code, even though it seems to not be part of miner consensus. Any bug in it (in any client) would allow transactions to be committed to the blockchain that appear valid on that client (but no others). In the context of SLP, I call this problem a “silent fork” (clients have forked their UTXO set – that is, their tracking of who owns what coins – but the blockchain has not forked). In many ways a silent fork is worse than an accidental blockchain fork because the problem can remain latent and exploited for an indeterminate amount of time. In contrast, a bug that causes an accidental blockchain fork is immediately obvious.

The inductive proof actually happens entirely in VM bytecode (“Script”) – if different implementations have bugs which cause the VM to work differently between full nodes, that is already an emergency, unrelated to CashTokens.

If wallets have bugs which cause them to parse/sign transactions incorrectly, their transactions will be rejected, since they won’t satisfy the covenant(s). (Or they’ll lose access to the tokens, just as buggy wallets currently lose access to money sent to wrong addresses).

On the other hand, the “silent fork” you described is possible with Group Tokenization. Group Tokenization is a much larger protocol change, where lots of different clients will need to faithfully implement a variety of specific rules around group creation, group flags, group operations, group “ancillary information”, “group authority UTXOs”, authority “capabilities”, “subgroup” delegation, the new group transaction format, “script templates”, and “script template encumbered groups”. It’s plausible that someone will get something wrong.

Again, my intention is not to have the Group Tokenization proposal abandoned – a large-scale “rethink” might be exactly what we need. I just want a smaller, incremental change which lets developers get started on tokens without risking much technical debt (if, e.g. we made some bad choices in rushing out a version of Group Tokenization).

One scalability difference is that in the Group tokenization proposal, the extra 32 bytes are the group identifier. So they repeat in every group transaction, and in general there are a lot more transactions than groups. They will therefore be highly compressible in disk or over the network, just by substituting small numbers for the currently most popular groups (e.g. LZW compression).

However, each inductive proof hash in CashTokens is unique and pseudo-random so therefore not compressible.

Yes, and moreover, the largest part of CashToken transactions will be their actual unlocking bytecode, since the network doesn’t keep track of any state for them. That’s why they are so low-risk: Hashed Witnesses/CashTokens specifically avoid making validation more stateful.

Without a doubt, a solution which makes transaction validation more stateful can probably save some storage space, but even comparing to current network activity, CashTokens would be among today’s smaller transactions. CashFusion transactions are regularly 10x-100x larger than CashToken transactions, and we’re not urgently concerned about their size. And CashToken fees will be as negligible as expected on BCH: users will pay fractions of a cent in 2020 USD per transaction.

There’s some huge misunderstanding that Group Tokens impacts scalability. It doesn’t, except in the sense that the blockchain is now carrying transactions for both BCH and USDT (for example). No token protocol will be able to have on-chain tokens without having token transactions! I’ve come to believe that people who take this scalability argument are being disingenuous – they should just say they don’t want tokens – rather than concern trolling.

Described that way – that BCH should be for BCH only – I can respect but entirely disagree with that argument. And I think that the market agrees with me based on the performance of BCH over the last few years.

To be fair, the original OP_GROUP proposal had a far larger impact on scaling, and people might not know there’s a new spec. Also, the latest spec reduces the amount of new state introduced into transaction validation, but it’s still not “new state”-free. There are real tradeoffs being made, even if you and I agree that tokens might be worth it.

A smaller, easily-reviewed change like PMv3 would give us a great opportunity to showcase the value of contract-integrated tokens on BCH. If we see a lot of on-chain applications, larger upgrade ideas like Group Tokenization will probably get more interest.

And with an active token-contract developer community, we’d have a lot more visibility into the token features BCH needs to support!


Hashed witnesses increase the size by 32 bytes per input rather than per output, and token transactions will typically need only one. Do you have any test vectors or more implementation-focused specs where I can dig into the expected size of token mint, redeem, and transfer transactions with Group Tokenization?

I understand. But there’s ultimately a 1-1 correspondence between inputs and outputs. Although, yes the outputs become part of the UTXO so there’s more database use for data in the outputs.

For CashTokens, a 32-level merkle tree proof (>4 billion token holders) requires 192 bytes of redeem bytecode and 1024 bytes of proof. That overhead would apply only to mint and redeem transactions. For transfers, the inductive proof requires only the serialization of the parent and grandparent transaction, so at ~330 bytes each, the full unlocking bytecode is probably ~1,000 bytes. (If you’re curious, you can review the code in the CashTokens Demo.)

So that’s a lot more data per transaction. For group tokens there’s no additional unlocking bytecode – it’s the normal P2PKH- or P2SH-style locking.

On quantity/amount: I didn’t propose a new quantity field since it’s pretty easy to use satoshis to represent different token quantities for CashTokens which need to support it. And it’s easy for covenants to prevent users from incorrectly modifying the value.

The original OP_GROUP tokenization did the same. I got a lot of push back so I added a token quantity field.

(in that same gist you mention bigints. I’ve implemented something at www.nextchain.cash if you are interested)

Do you have any test vectors or more implementation-focused specs where I can dig into the expected size of token mint, redeem, and transfer transactions with Group Tokenization?

This seems… unlikely :sweat_smile:

Really. At its heart all you need to implement group tokens is:

  1. a for loop that adds up the tokenized inputs
  2. a for loop that adds up the tokenized outputs
  3. compare them to make sure input quantity == output quantity.
  4. a few if statements to ignore 3 in the case of mint and melt.
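A rough Python sketch of that balancing rule (the field layout and the authority flags are illustrative, not taken from the Group Tokenization spec):

```python
from collections import defaultdict

def group_balances_ok(inputs, outputs,
                      mint_authority=False, melt_authority=False):
    """Check that token quantities balance per group.
    Each input/output is a (group_id, quantity) pair;
    group_id None means a plain BCH output (ignored here)."""
    in_totals, out_totals = defaultdict(int), defaultdict(int)
    for group, qty in inputs:            # 1. sum the tokenized inputs
        if group is not None:
            in_totals[group] += qty
    for group, qty in outputs:           # 2. sum the tokenized outputs
        if group is not None:
            out_totals[group] += qty
    for group in set(in_totals) | set(out_totals):
        i, o = in_totals[group], out_totals[group]
        if o > i and not mint_authority:  # creating tokens requires mint
            return False
        if o < i and not melt_authority:  # destroying tokens requires melt
            return False
    return True
```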

Calling it 1 page of consensus code is very generous – I implemented it in a very clean, sparse manner. Now, back in the day, some people threw a bunch of crap problems at it to make it a monster change, probably so it would never be implemented. Like insisting on a new transaction format because “data shouldn’t be put in the output script”.

The Group Tokenization spec is ~24 pages (~43 pages with appendices and examples), and that doesn’t include a lot of implementation detail.

The original specification was just a few pages. There was a lot of push back that the spec and group tokens were not comprehensive enough. ABC claimed to need ERC-721-like functionality or nothing. So this subsequent document lays out an entire vision, including a few years of additional features, including proposing features like transaction introspection, covenants, subgroups (NFTs), etc.

On the other hand, the “silent fork” you described is possible with Group Tokenization. Group Tokenization is a much larger protocol change, where lots of different clients will need to faithfully implement a variety of specific rules around group creation, group flags, group operations, group “ancillary information”, “group authority UTXOs”, authority “capabilities”, “subgroup” delegation, the new group transaction format, “script templates”, and “script template encumbered groups”. It’s plausible that someone will get something wrong.

A “silent” fork is not possible. A mis-implementation will cause a blockchain fork. But like I’ve already said, regardless of the above concepts, it’s actually a short consensus implementation – the document just lays out an entire roadmap. Stuff like implementing the new group transaction format doesn’t make technical sense (and wasn’t in OP_GROUP). It was ABC yanking me around with their obsessive desire to refactor and perfectionize every piece of code.

I hadn’t realized that the script enforces the inductive proof – I thought that you were proposing that that proof be part of the merkle proof of tx inclusion (and the client verifies it). In that case, you are right, your proposal also won’t suffer from “silent forks”.

Anyway, I’ve implemented it in the nextchain experimental testnet (minus the make-work “requirements” like a new tx format). You can head over to www.nextchain.cash to see details.

To be fair, the original OP_GROUP proposal had a far a larger impact on scaling, and people might not know there’s a new spec.

It didn’t have any different impact on scaling.

The “new state” is a group identifier per token output, so it’s similar to your proposal, which puts one per input – not a significant scaling issue. Also, it seems like you require much larger scripts…

A smaller, easily-reviewed change like PMv3

The original OP_GROUP was 1 page of isolated consensus code that implemented 1 new function: “do group inputs and outputs balance?”. The revised Group Tokenization is a bit bigger – maybe 2-3 pages – because it implements features like mint/melt authorities. These features basically say “sometimes inputs and outputs shouldn’t balance…”, so they are conceptually simple code.


Thanks for the clarifications!

Are there any other public forums with good technical discussion on Group Tokens? I’d love to have more background if it’s available anywhere.

Anyway, I’ve implemented it in the nextchain experimental testnet […]

Ah, nice. (Code here.)


So in summary, maybe the best comparison is:

Group Tokenization (prev. OP_GROUP) vs. CashTokens

Group Tokenization aims to be an all-inclusive framework for BCH token support – it’s much like Ethereum’s ERC20, except Group Tokens would be built directly into the BCH consensus protocol, including “groups”, “authorities”, “capabilities”, “subgroups”, etc.

CashTokens are “userland” protocols: they are completely implemented using the VM bytecode language. Like any contract, new ideas (for e.g. minting, redeeming, and token management) can be designed and deployed by anyone, whenever they want. The only prerequisite for people to get started building CashToken-style covenants is the ability to succinctly introspect parent transactions (e.g. a solution like hashed witnesses).

(discussion from CashToken Devs)


And again, I don’t think these are mutually exclusive upgrades: inductive proofs are needed for several covenant applications which aren’t likely to be covered by a built-in token protocol (e.g. weighted, blinded voting for two-way-bridged prediction markets). There are also definitely applications which could be optimized beyond inductive proofs – a future upgrade which lets us “group” and save token balances in with the UTXO set could save a lot of bytes per transaction.


“it’s much like Ethereum’s ERC20”

I wouldn’t put it that way. As I’m sure you know (but just setting the stage), Ethereum is basically a replicated computer with shared state. Things like tokens are built on top of that computer. Something like CashTokens uses that same architecture, since the logic that creates tokens is written in the BCH scripting language.

This is a very fundamental architectural decision. It’s much more fundamental, IMHO, than saying that Group Tokenization is like ERC20 because they both support feature X. In fact, that’s not even accurate. ERC20 just supports transfer – it’s up to each contract to layer on all the other features that Ethereum has taught us we need.

The ramifications of this architectural decision influence EVERYTHING downstream. There are advantages and disadvantages.

Disadvantages of implementing tokens in the blockchain script:

  1. Safety: every token can be implemented differently, so really, before holding a token, every person should review that code (which may be closed source).
  2. Scalability (CPU): no matter how much more efficient the blockchain scripting language gets, it’s unlikely to exceed the efficiency of native code. Additionally, the native code can leverage miner consensus and therefore “do” a lot less – in this case just a simple sum and compare of token inputs and outputs.
  3. Scalability (Space): the scripts that implement and enforce token constraints will take up space in transactions. Admittedly this space is very compressible using the macro scripting architecture that I laid out a few years ago, but still, it cannot be smaller than no script…

Advantages of implementing tokens in the blockchain script:

  1. Flexibility: There’s just the one, but it’s a big one!

So let’s talk product philosophy, because excepting BTC with its 1st mover advantages, blockchains are effectively products competing for users. Here’s a key product design idea:

If you can’t be better in all respects than an established competitor, then at least be different.

Why? Because your differences will mean that you are better at some things (while being worse at others). Ideally, of course, you want to be better at MORE things than you are worse at. But even if you are worse at more things, your different product can find applications that specifically need what you are better at.

This is the philosophy I think we should use with Bitcoin Cash. Versus Bitcoin, we’ve made it worse on low-end machines/internet connections to make it better at handling large volumes of transactions. Versus Ethereum, we shouldn’t make it a (worse) general-purpose computer (and no, we are never going to slowly build BCH Script into something that can outperform the ETH VM, which is, IIRC, going to leverage WebAssembly). Instead, let’s make it really good at handling crypto-financial computing. The way to do this is to have powerful primitives for that task. This is why I want super-efficient native tokens, rather than implementing them at the scripting layer. This is why I want OP_PUSH_TX_DATA (transaction introspection) rather than using the inefficient CDS “trick”. This is why nextchain.cash implements variable-sized bignums rather than picking 256 bits (say). Ethereum defines uint256; my bignum implementation will perhaps be very slightly less efficient at 256-bit numbers (it’s using libgmp, where very clever people are focused on performance and writing asm implementations for common architectures) but more efficient at every other size.


I’m still behind on reviewing the inductive proofs and other details from Jason, but I do like the discussion Andrew started.

What I specifically appreciate in Bitcoin is its scaling proposition. Validating a transaction and/or script is very easy to shard over multiple processors; all parts are easy to parallelize. The only thing that we can’t parallelize is the UTXO. We need some locking in order to make sure we can detect a double spend within a single block.

Based on the fact that the UTXO is the scaling bottleneck, I’m indeed happy with the general design in Jason’s stuff: it doesn’t add anything to the UTXO.

The OP_GROUP and maybe the GROUP one too (I mirror Jason’s request for some forum or documentation on it) have a rather severe problem: they add a second 32-byte identifier to every UTXO entry. As the current lookup is just a single 32-byte entry (and maybe an output index), this is a rather large difference that will have an instant and negative effect on scaling.


Related discussion about how inductive proof strategies can allow covenants to offload functionality to child covenants, allowing public covenants to be parallelized. This reduces competition over who gets to make the next covenant transaction (avoiding competing chains of unconfirmed covenant transactions):

It’s possible that BCH could ultimately support both parent transaction introspection (for complex covenant applications) and a built-in “group” token system (for stablecoins and other common token applications). There doesn’t seem to be a good thread on this forum about native token support, so I responded to @andrewstone’s comment here:


The OP_GROUP and maybe the GROUP one too (I mirror Jason’s request for some forum or documentation on it) have a rather severe problem: they add a second 32-byte identifier to every UTXO entry. As the current lookup is just a single 32-byte entry (and maybe an output index), this is a rather large difference that will have an instant and negative effect on scaling.

Neither OP_GROUP nor GROUP modified a single line of the UTXO code and did not add an additional lookup. There is never a reason to look up a UTXO via group id. A UTXO that happens to contain tokens is the same 32 byte tx id + output index lookup.
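In other words, the lookup key stays (txid, output index) whether or not the output carries tokens; any group id is just extra payload inside the stored output. A toy sketch (the data layout here is illustrative, not any client’s actual schema):

```python
# UTXO set keyed only by (txid, output index) – exactly as today.
utxo_set = {}

def add_utxo(txid, index, locking_bytecode, satoshis, group_id=None):
    # The group id (if any) lives inside the stored output value,
    # not in the lookup key – no new index is required.
    utxo_set[(txid, index)] = {
        "locking_bytecode": locking_bytecode,
        "satoshis": satoshis,
        "group_id": group_id,
    }

def lookup_utxo(txid, index):
    # Token-bearing and plain outputs use the identical key shape.
    return utxo_set[(txid, index)]
```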

It has been well documented, implemented, and even ported to and deployed on the ION cryptocurrency. Technical discussion with the BCH community happened years ago, so I don’t have ancient links handy, and was, frankly, not actually technical due to ABC’s terrible attitude towards cooperative-yet-also-competing clients.


Responded to @andrewstone here: Native Group Tokenization - #3 by bitjson


Another summary of this proposal derived from telegram discussions:

Bitcoin cash covenants already support introspection/validation of outputs, enforcing rules for future transactions.

PMv3 allows us to do validation looking backwards, inspecting elements of a covenant’s parent transaction and enforcing rules for parent transactions.

Support for both forward and backward validation unlocks a huge number of contract use cases: a lot of complex applications happening on “global state” chains like Ethereum can be implemented more scalably on BCH, with no other changes to the VM. (Details: Hashed Witnesses: Build Decentralized Applications on Bitcoin Cash)

Personally I prefer small incremental steps. Much easier to roll out and debug. As such, it is important to notice that they indeed strengthen each other, but they don’t depend on each other. They can be in two individual upgrades if that is the way it turns out.

Personally I prefer small incremental steps.

Strongly agree. Minimal changes, one at a time. :+1: :+1:

They can be in two individual upgrades

Sorry, I wasn’t clear – there’s only one upgrade we still need. The forward validation part is already working on mainnet. (E.g. CashChannels)

This PMv3 proposal focuses only on the minimum upgrade to support backward validation: “compression” of past proofs + script integer compatibility. With these two elements, we can succinctly introspect parent inputs in the same way we can currently introspect future outputs.

The other “features” of PMv3 (reduced TX size and an upgrade path to fractional satoshis) are just helpful byproducts of these two minimum changes.

So I strongly agree. There’s a chance people will want to pack in other changes too, so I tried to describe here why I think we should keep PMv3 minimal: CashTokens & PMv3: fixed-size inductive proofs, transaction integer compatibility - #2 by bitjson

After some more discussion on Telegram, I think it might be helpful for me to copy/paste some of the “most interesting” parts of the CashToken Demo here for start-to-end viewing (and mobile readers).

If you’re on a modern desktop browser, you can also jump right into the CashTokens template on Bitauth IDE:

CashTokens template screenshot

CashTokens Template Summary

This template provides scripts for each stage of a simplified "depository"
CashToken Corporation, where users may deposit 10,000 satoshis in exchange 
for a CashToken. This implementation supports exactly 4 homogeneous tokens. 
Tokens are tracked in a 2 level Merkle tree:
     rt
    /  \
   b0   b1
  / \   / \
 a0 a1 a2 a3

The covenant begins with no depositors (all leaves start as the hash of 0) and a
dust balance. A user deposits 10,000 satoshis, receiving a CashToken (recorded
at a0) secured by a single key. Then the CashToken is "notarized" to prove it is
a descendant of the mint transaction. Once notarized, the owner transfers the
CashToken to a 2-of-3 multisig for stronger security. From there, the owner
transfers the token to a different owner (now single sig). Finally, the new owner
redeems the CashToken with the corporation, withdrawing their 10,000 satoshi
deposit, and replacing a0 in the tree with the hash of 0 (making the leaf available
to other depositors).
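The 2-level tree above is easy to model. Here's an illustrative Python sketch (not part of the template): `hash160` is RIPEMD-160 of SHA-256 as computed by `OP_HASH160`, and every leaf starts as the hash of an empty push, matching the `<0> OP_HASH160` constant used later in the covenant. (The SHA-256 fallback is only a 20-byte stand-in for environments where OpenSSL lacks RIPEMD-160.)

```python
import hashlib

def hash160(data: bytes) -> bytes:
    """RIPEMD-160 of SHA-256, as computed by OP_HASH160."""
    sha = hashlib.sha256(data).digest()
    try:
        return hashlib.new("ripemd160", sha).digest()
    except ValueError:
        # ripemd160 is unavailable in some OpenSSL builds; use a
        # 20-byte stand-in so the tree structure is still illustrated.
        return hashlib.sha256(sha).digest()[:20]

def merkle_root(leaves: list[bytes]) -> bytes:
    """rt = H(b0 | b1), where b0 = H(a0 | a1) and b1 = H(a2 | a3)."""
    a0, a1, a2, a3 = leaves
    b0 = hash160(a0 + a1)
    b1 = hash160(a2 + a3)
    return hash160(b0 + b1)

# All leaves begin as the hash of an empty push (<0> in Script).
EMPTY_LEAF = hash160(b"")
empty_root = merkle_root([EMPTY_LEAF] * 4)

# Minting replaces a0 with the hash of the covenant's outpoint TX hash.
token_id = bytes(32)  # placeholder outpoint TX hash
root_after_mint = merkle_root(
    [hash160(token_id), EMPTY_LEAF, EMPTY_LEAF, EMPTY_LEAF]
)
```

Redeeming reverses the mint step: the leaf is set back to `EMPTY_LEAF`, returning that slot to the depositor pool.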

There are 5 transaction types demonstrated:
- tx0: Incorporation TX - the "corporation" is established with a dust balance
- tx1: Token Mint TX - a CashToken is created
- tx2: Notarization TX - the CashToken is proven to be valid
- tx3: Token Transfer TX - the CashToken is transferred independently
- tx4: Token Redeem TX - the CashToken is redeemed with the "corporation"

Transaction Graph/Timeline:

tx0 --- tx1 -------------------------------------- tx4
           \--- tx2 --- tx3 --- tx3 --- ...tx3 ---/

Transaction Contents:

tx0: Incorporation TX - 
  Input 0 - any funding output (dust amount + tx fee)
  ---
  Output 0 - Corporation Covenant (dust amount)
  Output 1 - change output

tx1: Token Mint TX -
  Input 0 - tx0 output 0
  Input 1 - any funding output (10,000 satoshi deposit + tx fee)
  ---
  Output 0 - Corporation Covenant
  Output 1 - CashToken Covenant (Token ID is tx0 hash)
  Output 2 - change output

[Token circulates separately from the corporation covenant]

  tx2: Notarization TX
    Input 0 - tx1 output 1
    Input 1 - any funding output (tx fee)
    ---
    Output 0 - CashToken covenant
    Output 1 - change output

  tx3: Token Transfer TX
    Input 0 - tx2 output 0
    Input 1 - any funding output (tx fee)
    ---
    Output 0 - CashToken covenant
    Output 1 - a change output

[Token is redeemed with the corporation covenant]

tx4: Token Redeem TX
  Input 0 - tx1 output 0
  Input 1 - tx3 output 0
  Input 2 - any funding output (for fees)
  ---
  Output 0 - Corporation covenant
  Output 1 - a change output (+10,000 satoshi withdrawal)

---

Additional Notes

Indestructible Dust – Once created, this CashToken corporation
implementation cannot be destroyed. The covenant could be modified
to allow the final dust balance to be collected from the covenant if all
shares are unallocated, but the additional bytecode required over the
life of the covenant is likely to be more costly than the dust amount "saved"
when the covenant is abandoned. It's worth noting that any bitcoin user
can create an unlimited number of dust outputs at any time, so dust from
covenants is irrelevant from the perspective of the network.

Token Mint (Unlocks the Corporation Covenant)

/**
 * To mint a token we must prove:
 * - TX output 0 is the next covenant state where:
 *   - the new tree replaces an empty leaf (<0>) with the covenant outpoint TX hash
 *     - requires: new tree root, all opposite nodes, proof path
 *   - the new covenant UTXO has a value of 10,000 satoshis greater than before
 * - TX output 1 is the expected token pattern
 *   - requires: m-of-n and public keys to use for the token
 * 
 *     rt
 *    /  \
 *   b0   b1
 *  / \   / \
 * a0 a1 a2 a3
 * 
 * This example verifies that a0 is `0`, then replaces
 * it with the outpoint TX hash.
 */

<root_hash_after_mint> // rt
<sibling_tier_2_hash> // b1
<sibling_leaf_hash> // a1

<1> // tier 2 is left side (requires swap)
<1> // leaf is left side (requires swap)

<1> // enter branch: Mint Token

Corporation Covenant

<empty_root_hash> OP_TOALTSTACK 

OP_IF
/**
 * branch: Mint Token
 */

/**
 * we need 2 copies of the "validation path": 
 * 1. confirm the leaf was empty
 * 2. confirm the new root only changes that leaf
 */
OP_2DUP
OP_FROMALTSTACK OP_ROT OP_ROT // current_root_hash
OP_TOALTSTACK OP_TOALTSTACK

// First we confirm the leaf being updated was empty:
<4> OP_PICK // sibling_tier_2_hash
<4> OP_PICK // sibling_leaf_hash
<0> OP_HASH160 // constant: hash160 of <0>

OP_FROMALTSTACK // check if leaf requires swap
OP_IF OP_SWAP OP_ENDIF
OP_CAT OP_HASH160

OP_FROMALTSTACK // check if tier 2 requires swap
OP_IF OP_SWAP OP_ENDIF
OP_CAT OP_HASH160 // should be current root

OP_EQUALVERIFY // verified that replaced leaf was <0>

// Now we confirm that the new leaf uses outpoint TX hash

OP_TOALTSTACK OP_TOALTSTACK

OP_OUTPOINTTXHASH
OP_HASH160

OP_FROMALTSTACK // check if leaf requires swap
OP_IF OP_SWAP OP_ENDIF
OP_CAT OP_HASH160

OP_FROMALTSTACK // check if tier 2 requires swap
OP_IF OP_SWAP OP_ENDIF
OP_CAT OP_HASH160 // verified new root

OP_DUP OP_TOALTSTACK // save for later use
OP_EQUALVERIFY // new root is correct


// require that output 0 is 10_000 satoshis larger than last value
OP_UTXOVALUE
<10_000> OP_ADD
<0> OP_OUTPUTVALUE
OP_EQUALVERIFY // covenant has been paid properly

// require that the correct covenant script is used
<OP_PUSHBYTES_20>
OP_FROMALTSTACK // root_hash_after_mint
OP_UTXOBYTECODE // get this covenant's bytecode
/**
 * OP_SPLIT is often unsafe for user input, but this input
 * comes from the VM/previous contract.
 */
<21> OP_SPLIT OP_NIP // trim off previous root hash push
OP_CAT OP_CAT
OP_HASH160
<OP_EQUAL> <OP_HASH160> <OP_PUSHBYTES_20> OP_2SWAP
OP_CAT OP_CAT OP_CAT
// require that output 0 is the new covenant bytecode
<0> OP_OUTPUTBYTECODE
OP_EQUALVERIFY // TX pays to the updated covenant

// require that output 1 is a cashtoken UTXO
<1> OP_OUTPUTBYTECODE
<0xaabbccddee> // TODO: push expected cashtoken bytecode
/**
 * We get the Token ID from the last covenant TX ID. This: 
 * 1) prevents tokens from being created to impersonate existing tokens
 * 2) saves bytecode space (vs. pushing an ID)
 * 3) avoids bugs in wallet implementations (validating at contract-level)
 */
// TODO: use <0> OP_OUTPOINTTXHASH to get the Token ID, concat into CashToken covenant
OP_EQUAL
OP_ELSE
/**
 * branch: Redeem Token
 * TODO: leaving out of first draft, see notes in unlocking script
 */
OP_ENDIF

Notarization (Unlocks Token Covenant)

/**
 * This is the initial "Notarization" spend. It can only 
 * be successful if the parent (mint) transaction includes 
 * the outpoint TX hash referenced by the CashToken
 * covenant (in its 0th output). 
 * 
 * To prove that this notarization step has been
 * completed successfully, future spends need only 
 * prove that the covenant has previously been
 * successfully spent.
 * 
 * This implementation does not offer a way to transfer
 * the token to a new set of holders during the notarization
 * step. Rather, the notarization transaction must lock funds
 * in a covenant with the same signing requirements as those
 * specified in the mint transaction. This is a feature:
 * notarization can be safely performed by anyone, making
 * CashTokens slightly easier to implement in some types of
 * wallets (and reducing the size of the covenant by saving
 * on validation bytecode).
 */

<example_tx1> // TODO: switch to `<parent_mint_tx>`, fix scenario generation error (using `example_tx1` for `bytecode.parent_mint_tx`)

<0> <0>
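The core of this check (confirming the pushed serialization really is the parent transaction) amounts to comparing its double-SHA-256 against this input's outpoint TX hash. A minimal Python model of that comparison (names are illustrative):

```python
import hashlib

def hash256(data: bytes) -> bytes:
    """Double SHA-256, as computed by OP_HASH256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def is_true_parent(claimed_parent_tx: bytes, outpoint_tx_hash: bytes) -> bool:
    """Mirror of the covenant's OP_HASH256 / OP_OUTPOINTTXHASH comparison."""
    return hash256(claimed_parent_tx) == outpoint_tx_hash

# A contract given the real parent serialization can verify it against the
# outpoint TX hash the VM provides; any forged serialization fails.
parent_tx = b"\x02\x00\x00\x00" + b"\x02" + bytes(40)  # stand-in bytes
```

Once the parent is authenticated this way, the contract can safely parse fields out of it, which is exactly what the notarization and transfer branches below do.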

Token Transfer (Also Unlocks Token Covenant)

/**
 * This script is used to transfer a notarized CashToken between
 * wallets. This implementation supports multisig wallets of
 * up to 3 keys.
 * 
 * Counterintuitively, the token transfer branch of this contract
 * does not place any limitations on the outputs to which a token
 * is spent. Instead, only its parent and grandparent transactions
 * are validated to ensure the token's lineage. However, because
 * this covenant is impossible to spend without the proper
 * "proof-of-lineage", if any transfer does not properly continue
 * the covenant, the token is burned.
 * 
 * This means it is always possible to "burn" the token in one final
 * transaction – in this authentication template, the token should
 * be burned/redeemed back into the parent corporation covenant
 * (which itself then validates that the token's lineage was
 * unbroken). With this strategy, the parent covenant can validate
 * the authenticity of tokens.
 */

[...]

Token Covenant

/**
 * This script is the key to the inductive proof. It's 
 * designed to be unspendable unless:
 * 1. Its parent was the mint transaction funded by the transaction
 *    hash claimed (requires full parent transaction), or
 * 2. It was previously spent successfully – the parent's 0th input's
 *    UTXO used this covenant, i.e. the parent's 0th-input-parent's
 *    0th output uses this covenant. (Validated by checking the full
 *    parent and grandparent transactions.)
 * 
 * With these limitations, if a user can move the CashToken,
 * we know the CashToken has the lineage it claims.
 */

/**
 * By "baking-in" the code branch selection, we prevent the
 * CashToken from being unnecessarily notarized multiple times.
 */
<is_notarized>

// mint parent hash: the outpoint TX hash in the 0th output of the mint transaction
<tx0_hash> 

OP_TOALTSTACK // tx0_hash
OP_TOALTSTACK // is_notarized

<1> // threshold (m)

// push n public keys (up to 3)
// TODO: convert to variable
<first_holder_key.public_key>

<1> // count (n)

/**
 * Though this is only checked by the transfer branch,
 * it drops all ownership-related items from the stack
 * more efficiently than can be done with manual dropping
 * operations.
 * 
 * Note, notarizations could be restricted to token owners by
 * replacing this with OP_CHECKMULTISIGVERIFY.
 */
OP_CHECKMULTISIG

// in both branches, verify the full parent transaction is provided
OP_DUP
OP_HASH256
OP_OUTPOINTTXHASH

OP_DROP OP_DROP // debug: switch comment to skip check
// OP_EQUALVERIFY // verified provided transaction is parent

OP_FROMALTSTACK // is_notarized
OP_NOTIF
    /**
     * Notarization branch:
     * Prove that this transaction's parent is the claimed token mint transaction,
     * i.e. its 0th input spends from the claimed token ID (outpoint TX hash).
     * 
     * Note, this branch can be executed by any interested observer (doesn't 
     * require access to any private keys), so it must be carefully validated
     * to avoid griefing.
     */

    OP_DROP // drop failed multisig check

    <4> OP_SPLIT OP_NIP // remove and discard tx version
    <1> OP_SPLIT OP_SWAP // get first byte of tx input count
    /**
     * Between 1 and 127, Script Numbers and VarInts are
     * byte-for-byte compatible.
     * 
     * 127 (`0x7f`) is the largest integer which can be represented
     * by the Script Number format in a single byte (0x80 through
     * 0xff encode negative zero through -127).
     */
    <2> OP_EQUALVERIFY // require exactly 2 inputs (covenant + fee funding)

    <32> OP_SPLIT OP_DROP // get 0th outpoint tx hash, drop everything else
    OP_FROMALTSTACK // tx0_hash
    OP_EQUALVERIFY // Token ID verified: parent transaction spends from claimed outpoint tx hash

    /**
     * Verify the transaction's 0th output is re-assigned
     * the updated covenant.
     */
    OP_UTXOVALUE
    <0> OP_OUTPUTVALUE
    OP_EQUALVERIFY // require output value to be the same
    <OP_HASH160 OP_PUSHBYTES_20>
    <OP_1> OP_UTXOBYTECODE
    <1> OP_SPLIT OP_NIP OP_CAT // remove the OP_0 is_notarized, replace with OP_1
    OP_HASH160 <OP_EQUAL>
    OP_CAT OP_CAT // expected P2SH bytecode
    <0> OP_OUTPUTBYTECODE
    OP_EQUAL // 0th output bytecode is correct
    /**
     * TODO: further optimization: after notarization, can we prune the
     * notarization branch?
     * Requires other changes to validation in both covenants.
     */

OP_ELSE
    /**
     * Transfer branch:
     * Prove that the outpoint tx spends from this same covenant. Then
     * prove that this transaction is signed by the required private key(s).
     * 
     * Note: to save space, this branch doesn't validate the locking bytecode
     * to which the transaction pays, making it possible for wallets to burn
     * the CashToken (intentionally or due to a bug). This is unlikely to
     * be a problem in practice because CashTokens can only be moved by
     * wallets which support the CashToken covenant template (no
     * backwards-compatibility with wallets which might unintentionally
     * burn tokens).
     * 
     * By not forward-validating outputs, we eliminate the need for each
     * CashToken covenant to be capable of correctly "identifying" its 
     * parent covenant during redeem transactions. (Instead, we verify the
     * lineage of the token by checking its parent and grandparent.)
     */

    OP_VERIFY // Signature validation must have been successful

    // Owner has authorized transfer, now prove lineage:

    // get parent tx's outpoint tx hash, then verify we have the grandparent
    <4> OP_SPLIT OP_NIP // remove and discard tx version (no validation)
    <1> OP_SPLIT OP_SWAP // get first byte of tx input count
    <0x02> OP_EQUALVERIFY // require exactly 2 inputs
    <32> OP_SPLIT // get parent outpoint tx hash
    <4> OP_SPLIT OP_DROP // get parent outpoint index, drop everything else
    OP_BIN2NUM
    <0> OP_EQUALVERIFY // must be grandparent's 0th output (may be output 1 when the grandparent is the mint)
    OP_TOALTSTACK // parent outpoint tx hash

    // validate and concat grandparent back together, confirming it used this covenant
    <
      0x02000000 // always require version 2
      0x02 // always require exactly 2 inputs
    >

    OP_SWAP // top is grandparent input 0 outpoint (hash + index)
    OP_SIZE <36> OP_EQUALVERIFY // require grandparent input 0 outpoint to be 36 bytes
    OP_CAT

    OP_SWAP // top is grandparent input 0 bytecode
    OP_SIZE 
    OP_DUP
    <4> OP_ROLL OP_EQUALVERIFY // provided bytecode is expected size
    OP_NUM2VARINT // serialize length
    OP_SWAP
    OP_CAT
    OP_CAT // grandparent TX up to input 1

    
    OP_SWAP // top is grandparent input 1 outpoint (hash + index)
    OP_SIZE <36> OP_EQUALVERIFY // require grandparent input 1 outpoint to be 36 bytes
    OP_CAT

    OP_SWAP // top is grandparent input 1 bytecode
    OP_SIZE
    OP_DUP
    <4> OP_ROLL OP_EQUALVERIFY // provided bytecode is expected size
    OP_NUM2VARINT // serialize length
    OP_SWAP
    OP_CAT
    OP_CAT // grandparent TX through inputs
    
    // start building grandparent outputs

    OP_SWAP // top is grandparent is_notarized
    <0> OP_EQUAL
    OP_IF
      /**
       * Grandparent is not notarized (mint transaction); output 1 is the covenant.
       */
      <3> OP_CAT // require exactly 3 outputs in mint transaction
      OP_SWAP // top is 0th output satoshi value
      OP_SIZE <8> OP_EQUALVERIFY // satoshi value must be 8 bytes
      OP_CAT
      <$(<23> OP_NUM2VARINT)> // length of corporate covenant bytecode (P2SH)
      OP_CAT
      OP_SWAP
      OP_SIZE <23> OP_EQUALVERIFY // corporate covenant bytecode must be P2SH length
      OP_CAT
      // end of 0th output
    OP_ELSE
      /**
       * Grandparent is notarized, the 0th output is the covenant.
       */
      <0x02> OP_CAT // require exactly 2 outputs in transfer transactions
    OP_ENDIF

    OP_SWAP // top is grandparent's cashtoken covenant value
    OP_DUP
    /**
     * To support heterogeneous tokens, this covenant prevents the
     * satoshi value of the token from changing during transfers.
     * 
     * However, we can't simply verify the "next" satoshi value is the
     * same as the "current" value - eventually, the token will be
     * redeemed with the corporation covenant, and the covenant's 
     * expected balance at that time can't necessarily be predicted.
     * 
     * Instead, we only verify backwards by comparing the token output
     * values of the grandparent and current transaction. With this
     * limitation, we can know the token's value can't be modified
     * until the redeem transaction, where the parent covenant can
     * read the value before it is destroyed.
     */
    OP_UTXOVALUE <8> OP_NUM2BIN
    OP_EQUALVERIFY // disallow token value from being modified
    OP_CAT

    < <23> OP_HASH160 > OP_CAT // length of cashtoken covenant bytecode (P2SH)

    // begin transforming parent bytecode into grandparent bytecode for validation

    <OP_0> // is_notarized for mint transactions
    OP_UTXOBYTECODE // get parent bytecode
    <1> OP_SPLIT OP_NIP OP_CAT // replace is_notarized with OP_0
    
    <36> OP_SPLIT // preserve token ID (OP_0 + OP_PUSHBYTES_32 + 32 bytes + OP_TOALTSTACK + OP_TOALTSTACK)
    
    <1> OP_SPLIT OP_NIP // remove parent threshold pushing opcode (<<m>>)
    <3> OP_ROLL // add grandparent_threshold pushing opcode
    OP_SWAP // top is parent bytecode after removed m push

    <4> OP_ROLL // get parent public key count opcode

    // TODO: PRs welcome – are there more efficient ways to implement this "case" statement?
    // check for parent n of 1, 2, or 3 (saving the opcode)
    OP_TUCK
    <<1>> OP_EQUAL OP_IF
      <34> OP_SPLIT
    OP_ELSE
      OP_OVER <<2>> OP_EQUAL OP_IF
        <$(<34> <34> OP_ADD)> OP_SPLIT
      OP_ELSE
        OP_OVER <<3>> OP_EQUAL OP_IF
          <$(<34> <34> <34> OP_ADD OP_ADD)> OP_SPLIT
        OP_ELSE
          OP_RETURN // fail, parent must match a planned n
        OP_ENDIF
      OP_ENDIF
    OP_ENDIF
    /**
     * TODO: SECURITY: do we need to validate these bytes? (E.g. in the
     * corporation covenant, they must be validated.) Can a malicious
     * grandparent transaction use different opcodes in these bytes to
     * defraud future token holders?
     */
    OP_NIP // drop the removed parent public key pushes
    OP_NIP // drop the parent public key count opcode
    <1> OP_SPLIT OP_NIP // drop parent n opcode from bytecode

    <4> OP_ROLL // pick grandparent public key count opcode
    // TODO: PRs welcome – there are definitely more efficient ways to implement this one
    OP_DUP
    <<1>> OP_EQUAL OP_IF
      <OP_PUSHBYTES_33> <6> OP_ROLL OP_SIZE <33> OP_EQUALVERIFY
      OP_CAT
    OP_ELSE
      OP_DUP <<2>> OP_EQUAL OP_IF
        <OP_PUSHBYTES_33> <6> OP_ROLL OP_SIZE <33> OP_EQUALVERIFY
        OP_CAT
        <OP_PUSHBYTES_33> <7> OP_ROLL OP_SIZE <33> OP_EQUALVERIFY
        OP_CAT OP_CAT
      OP_ELSE
        OP_DUP <<3>> OP_EQUAL OP_IF
          <OP_PUSHBYTES_33> <6> OP_ROLL OP_SIZE <33> OP_EQUALVERIFY
          OP_CAT
          <OP_PUSHBYTES_33> <7> OP_ROLL OP_SIZE <33> OP_EQUALVERIFY
          OP_CAT OP_CAT
          <OP_PUSHBYTES_33> <7> OP_ROLL OP_SIZE <33> OP_EQUALVERIFY
          OP_CAT OP_CAT
        OP_ELSE
          OP_RETURN // fail, grandparent must match a planned n
        OP_ENDIF
      OP_ENDIF
    OP_ENDIF
    OP_SWAP OP_CAT // concat grandparent n opcode after public key pushes

    OP_SWAP
    OP_CAT OP_CAT OP_CAT // reconstructed grandparent redeem bytecode
    OP_HASH160 // get redeem script hash
    OP_CAT
    <OP_EQUAL> OP_CAT

    OP_SWAP // top is remaining TX serialization

    // no need to verify remaining grandparent outputs

    OP_CAT // full grandparent transaction
    OP_HASH256 // grandparent transaction hash
    OP_FROMALTSTACK // outpoint tx hash from parent

    OP_DROP OP_DROP <1> // debug: switch comment to skip check
    // OP_EQUAL // verify grandparent is parent outpoint tx

    // (don't bother dropping the mint parent hash left on altstack)

OP_ENDIF
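The single-byte compatibility the covenant leans on (e.g. comparing `<2>` against a byte split out of the serialized parent) can be checked directly. A Python sketch of both encodings (illustrative, non-negative values only; note that zero itself differs, since Script Numbers encode 0 as an empty stack item while VarInt encodes it as `0x00`):

```python
def encode_script_number(n: int) -> bytes:
    """Minimally-encoded, non-negative Script Number."""
    if n == 0:
        return b""  # zero is the empty stack item
    out = bytearray()
    while n:
        out.append(n & 0xFF)
        n >>= 8
    if out[-1] & 0x80:
        out.append(0x00)  # keep the sign bit clear (avoid negative reading)
    return bytes(out)

def encode_varint(n: int) -> bytes:
    """Bitcoin VarInt/CompactSize, single-byte range only."""
    assert 0 <= n <= 0xFC
    return bytes([n])

# For 1 through 127 the encodings are byte-for-byte identical, so a byte
# split out of a serialized transaction compares equal to a Script Number
# push like <2>. At 128 they diverge: the Script Number grows a sign byte.
for n in range(1, 128):
    assert encode_script_number(n) == encode_varint(n)
```

This is why the covenant can require "exactly 2 inputs" with a plain `OP_EQUALVERIFY` against the first VarInt byte, and it's the boundary that PMv3's script integer compatibility change extends past.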

Again, these are just the most important parts.

As you can see, I’ve made comments documenting many design decisions and noting various security and usability concerns. You can read a more thorough intro on the CashToken Demo Gist.

If you’re on a supported browser, you can also jump right into the CashTokens template on Bitauth IDE to experiment with the full evaluation traces.


PMv3’s “hashed witnesses” allow scripts like the above Token Covenant to “compress” their parent transaction’s witness data (by hashing it), avoiding the uncontrolled growth in transaction sizes which currently prevents covenants from introspecting parent transactions. The second change proposed by PMv3 allows scripts to parse the integer format used in these parent transactions by making them compatible with the existing Script Number format. (Which also happens to save ~5% in transaction sizes.)
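To make the "compression" concrete, here's a hypothetical sketch of a v3-style input serialization in which the unlocking bytecode is committed to by its hash (the exact PMv3 field layout is defined in the proposal; this just illustrates the size argument):

```python
import hashlib

def hash256(data: bytes) -> bytes:
    """Double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def serialize_input_v3(outpoint: bytes, unlocking_bytecode: bytes,
                       sequence: bytes) -> bytes:
    """Hypothetical hashed-witness input: witness data appears as 32 bytes."""
    assert len(outpoint) == 36 and len(sequence) == 4
    return outpoint + hash256(unlocking_bytecode) + sequence

# A 10,000-byte inductive proof in a parent input contributes only 32
# bytes to the serialization a child contract must introspect, so proof
# size stays fixed no matter how deep the token's history grows.
large_proof = b"\x51" * 10_000
serialized = serialize_input_v3(bytes(36), large_proof, bytes(4))
```

Because the hash (not the raw witness) is what the TXID commits to, a child contract reconstructing its parent never has to carry the grandparent's proof forward.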


Hashed Witnesses vs. MalFix, SIGHASH_NOINPUT, SegWit, etc.

The hashed witnesses half of this proposal could also be solved by any transaction malleability solution which removes witness data from the parent transaction’s outpoint transaction hash.

If there were any demand for a full transaction malleability solution, we might want to consider implementing that instead of hashed witnesses.

I personally don’t see the need for an additional “malleability fix”, so I prefer the hashed witnesses solution:

  • hashed witnesses avoid creating two different TXID formats (e.g. TXID and WTXID) – this keeps the system as simple as possible, ensuring the competing ID formats never confuse user experiences, and ensuring consistency between e.g. the TXID in SPV proofs and the TXID referenced in contracts.
  • hashed witnesses avoid the need for an additional TXID index – nodes don’t need to maintain an index to match the two TXID formats (with and without witness data) even at the P2P layer.
  • hashed witnesses don’t affect block merkle root construction or any system components other than v3 transactions (depending on the solution – malleability fixes can be more disruptive)

Unless someone can point out advantages in implementing a malleability solution instead, I prefer that this problem be solved with the hashed witnesses solution. I started a thread to collect discussion if anyone is interested:

Hashed Witnesses vs. TXID Merkle Trees

Just to document another possible alternative to hashed witnesses: a new transaction version could also use a new merkle tree construction for its TXID.

Instead of hashing the full, serialized transaction, each major component of the transaction could be hashed, allowing child transactions to only introspect e.g. a particular output during a proof. (So in a sense, this would be hashed witnesses + hashed everything else, in a tree.)

While this may save a few bytes for contracts which introspect a parent transaction, it would require significantly more hashing per transaction. And even for a relatively small type of tree, e.g. hash256([version][input merkle tree][output merkle tree][locktime]), the small additional savings would probably not be worth it. (Proofs would still require 32 bytes * node count; a minimal 330 byte proof could maybe be reduced to ~128 bytes, but at the cost of requiring possibly 10x as much hashing when computing TXIDs.)
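For comparison, one way such a TXID tree could be structured (hypothetical layout; node shapes and names are illustrative, not a specification):

```python
import hashlib

def hash256(data: bytes) -> bytes:
    """Double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(items: list[bytes]) -> bytes:
    """Duplicate-last-node Merkle root over hashed items."""
    nodes = [hash256(i) for i in items]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [hash256(nodes[i] + nodes[i + 1])
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

def tree_txid(version: bytes, inputs: list[bytes],
              outputs: list[bytes], locktime: bytes) -> bytes:
    """hash256([version][input merkle root][output merkle root][locktime])"""
    return hash256(version + merkle_root(inputs)
                   + merkle_root(outputs) + locktime)

# Even a 2-input, 2-output transaction already costs 7 hash256 calls here
# (4 leaves + 2 pair nodes + the final hash), vs. a single hash256 of the
# flat serialization -- the extra hashing the text describes.
txid = tree_txid(b"\x03\x00\x00\x00",
                 [b"input-0", b"input-1"],
                 [b"output-0", b"output-1"],
                 bytes(4))
```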

I think hashed witnesses are a much better choice, only costing one additional hash for any inputs which use them (and users pay for the extra bytes, discouraging unnecessary use).