CHIP 2022-02 CashTokens: Token Primitives for Bitcoin Cash

This proposal enables two new primitives on Bitcoin Cash: fungible tokens and non-fungible tokens.

Bitcoin Cash contracts lack primitives for trustlessly issuing transferable, contract-verifiable messages, limiting coordination strategies available to multi-party contracts and covenants.

By enabling token primitives on Bitcoin Cash, this proposal offers several benefits.

Cross-Contract Interfaces

Using non-fungible tokens (NFTs), contracts can trustlessly commit to messages which can be consumed by other contracts. These messages are impersonation-proof: other contracts can safely read and act on the commitment, certain that it was produced by the claimed contract. This primitive enables covenants to expose public interfaces – paths of operation intended for other, not-yet-developed contracts.
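For example, a dependent contract might authenticate and read such a commitment along these lines (a minimal sketch – the commitment-inspection opcode name and the input index are illustrative, and an immutable token is assumed so no capability byte is appended to the category):

<1> OP_UTXOTOKENCATEGORY // token category of the UTXO spent by input 1
<expected_category> OP_EQUALVERIFY // only the issuing covenant can produce this category
<1> OP_UTXOTOKENCOMMITMENT // read the message committed by the issuing covenant
// ... act on the committed message (e.g. a reported price or state hash)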

Decentralized Applications

Beyond enabling covenants to interoperate with other covenants, these token primitives allow for byte-efficient representations of complex internal state – supporting advanced, decentralized applications on Bitcoin Cash.

Fungible tokens are critical for covenants to efficiently represent on-chain assets – e.g. voting shares, utility tokens, collateralized loans, prediction market options, etc. – and to efficiently implement complex coordination tasks – e.g. liquidity-pooling, auctions, voting, sidechain withdrawals, spin-offs, mergers, and more.

Non-fungible tokens are critical for coordinating activity trustlessly between multiple covenants, enabling covenant-tracking tokens, depository child covenants, multithreaded covenants, and other constructions in which a particular covenant instance must be authenticated.

Universal Token Primitives

By exposing basic, consensus-validated token primitives, this proposal supports the development of higher-level, interoperable token standards (e.g. SLP). Token primitives can be held by any contract, wallets can easily verify the authenticity of a token or group of tokens, and tokens cannot be inadvertently destroyed by non-token-aware wallet software.


The full specification is tracked in a repository on GitHub:

Deployment of this specification is proposed for the May 2023 upgrade, and requires upgrades only to full nodes.

I’ve also written a short blog post to summarize the main ideas:

Comments, feedback, and reviews are appreciated, either here or on GitHub issues. Thanks!

5 Likes

Collection of thoughts after a quick read:

  1. There doesn’t seem to be a way to mint new fungible tokens. Shouldn’t the “mint” NFTs also mint fungibles (instead of just other NFTs)? Was that overlooked, or was that dropped in favor of a very simple way to verify supply?

  2. The argument about disallowing implicit destruction – “This would allow transaction fees to be paid (or partially-paid) using tokens deemed valuable by some set of miners. Initially forbidding implicit forfeiture ensures that such an upgrade could allow users to opt-in, avoiding economic disruption.” – actually argues strongly for allowing implicit destruction. For wallet safety we can require a prefix in every input spending a token output as well; anything that may make it easier for miners to take fees in non-BCH terms in the future receives a strong NACK.

  3. Is detached proof still expected somewhere else, or is the commit facility (which can be used to store state) expected to fully replace it? I’m not sure that’s the case; it warrants a deeper look.

  4. The entire commit facility looks like it can be replaced by a script-enforced OP_RETURN, especially since it’s only present on the NFT class (rough sketch after this list). There’s some space benefit to putting it in a new commit field, but I’m not terribly sure how much – note that BCH allows for many OP_RETURNs as long as they don’t collectively exceed 220 bytes in length.

  5. Is the has_nonfungible field even necessary, especially if we already deal with commit and mutable some other way (see above)? If all we want is a way to identify mints and “true” NFTs, would it not be better to use 1 or 2 bits from the amount field?
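Rough sketch of the script-enforced OP_RETURN idea from point 4, using the 2022 introspection opcode OP_OUTPUTBYTECODE (the output index and the commitment value are placeholders):

<1> OP_OUTPUTBYTECODE // locking bytecode of output 1
<0x6a20> <expected_commitment> OP_CAT // OP_RETURN (0x6a) + push-32 (0x20) + the 32-byte commitment
OP_EQUALVERIFY // output 1 must be exactly this data-carrier output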

3 Likes

I’ll add mine too, comparing it with Group V6.0:

  1. CT2.0 is really introducing a nerfed “native tokens” super contract, because you have to mint the entire supply at genesis + some other super contracts that you think are needed + extending the format in pretty much the same way as subgroups did. Over the course of the last few months I’ve been talking with imaginary_username, and I dropped all of those “super contracts” I thought were cool because I now see them as optimizations of functionality that can be implemented using Script on top of the one “native token” super contract.

  2. Subgroups/commitment are a special case where I’m unsure, but there’s a possible upgrade, OP_COMMIT, that might let us compress it inside the groupID instead of having it replicated on each UTXO, and it could use the same byte 0xEE.

  3. You have a different take on genesis. I looked in that direction (special prevout) when pondering alternatives and felt it was convoluted and didn’t solve the problem I wanted to solve: flexibility of preimage construction. I feel we finally cracked it with the input genesis declaration PFX, and I will argue all day that it is much superior for the flexibility and upgradeability it provides :slight_smile: Being able to jump to the genesis TX is a nice-to-have convenience, but then we have to sacrifice using the ID as a more flexible commitment primitive, which I think gives a lot of power to contracts. An example of this interaction is your own “depository corporation” example implemented relying on the Group V6.0 spec. Edit: It hit me just now – PFX_GROUP_GENESIS is compatible with coinbase inputs, which means a token can prove it was created in a coinbase TX and at a specific height! And because we require the height to be encoded, there could be a way to enable miners from some epoch to have a vote.

  4. Group’s vanity-gen hack is not such an inconvenience now that we have a nice nonce field for it, used only as part of the genesis input annotation so it isn’t replicated with outputs. A Group genesis setup can still be verified by other contracts if the spender provides the genesis parameters. You can have a P2SH covenant validate a genesis setup; it doesn’t need to know the groupID because it can reconstruct it and verify the bits that are of interest (like genesisAmount==1 for NFT groups). It’s just 1 byte – I need a few minutes to roll it by hand using a script in Bitauth IDE :slight_smile: and it takes nanoseconds if done by code, but we save a byte on each UTXO, of which there could be a lot. Having said this, I’m not really convinced either way: the nonce makes it problematic to sign for the ID from within P2SH because it could be malleated if you only verify the parameters without signing at least one output, so… I wouldn’t mind respeccing groupType to a dedicated field for v6.1 :smiley:

  5. Introspection opcodes galore, 6 of them! I was actually pondering reducing it to 3: 2 for obtaining the whole annotation, and 1 for the genesis preimage. It’s simple enough to use OP_SPLIT, and if your limits CHIP moves forward – which I hope it will – then we can afford slightly less efficient bytecode, since functionally it all works the same and we preserve opcodes for the future.

2 Likes

Hah, our talks about such tokens (feels like ages ago) have been fun, and it looks like you ran with it.

Will have to take some time to read it, but just wanted to do a shout-out :slight_smile:

2 Likes

Thank you all for the quick responses! When I started writing this, no one else had posted yet :joy: I’m just going to post this as-is, and I’ll reply to your posts separately.


I think a useful way to kick off discussion in this thread is to describe the core insight for this CHIP and lay out premises for people to possibly refute. My hope is that we’ll be able to align on the core idea and then any discussion of implementation details will be more focused:

Summary

I claim that these primitives are emergent from the design of the Bitcoin Cash virtual machine: I did not design them to accomplish a set of “features”, I just discovered they are missing.

More specifically, these primitives are inherent to the strategy of coordinating multi-party financial protocols with covenants (contracts that can enforce any kind of constraints on their spending transaction).

Bitcoin Cash already supports covenants – via OP_CHECKDATASIG(VERIFY) (since 2018) and the transaction introspection opcodes (2022). Bitcoin Cash has also embraced on-chain scaling, a prerequisite for advanced covenants. (Advanced covenants are not possible on chains aiming for full blocks. In my opinion, this is a huge competitive advantage for BCH over BTC.)

A growing body of research offers strategies for creating trustless and low-trust sidechain bridges (A.K.A. “decentralized oracles”) which enable BCH to be used as the native currency of other, decentralized networks. Unfortunately, practically all of these strategies require the CashToken primitives – two basic building blocks for covenant applications.

For example, SmartBCH, an EVM sidechain of BCH, currently uses a trust-based bridge. It might not be possible to make SmartBCH resistant to exit scams without CashTokens.

If BCH is going to be money for the world, it needs to be both functional and scalable. If BCH can’t even be safely used on other decentralized networks, we’re going to be outcompeted by currencies that can.

By making these primitives available to covenants, we make Bitcoin Cash vastly more useful as a currency, while retaining Bitcoin Cash’s global scalability.

Two Primitives

The Covenant “Model of Computing” (in the Bitcoin Cash context) requires one critical data type: identity commitments – a method for a past transaction to “commit” to some information, communicating that information to a transaction which happens in the future.

Identity Commitments

Identity commitments are the core idea behind PMv3/detached proofs + inductive proofs.

With some form of identity commitment (and sufficiently large contracts), I think any kind of decentralized application can be built on Bitcoin Cash’s extremely-parallel, UTXO-based, stateless transaction model. (Meaning we can have both massive scale and fancy applications; the applications just have to be designed in this ultra-fast, covenant model of computing.)

Numeric Identity Commitments

However, we’re not just concerned with what is technically possible: we want Bitcoin Cash transactions to be as small and efficient as possible.

So there’s one specialization of identity commitments which is critical for practically every covenant application: scalar/fractional/numeric (identity) commitments. I.e. identity commitments which are actually numbers, and can be divided and merged without coordinating with the issuing identity.

With numeric commitments, covenants can represent fractional parts of abstract concepts – shares, pegged assets, bonds, loans, options, tickets, loyalty points, voting outcomes, etc.

For example, a “corporation covenant” could issue millions of “shares” as identity commitments, each with a unique identifier tracked in a merkle tree. With a fancy enough contract, this covenant can allow any shareholder to split or merge shares (by rearranging the merkle tree). When managing a vote of the shareholders, it can keep track of which shares have already voted (copy the shareholder tree, then modify it vote by vote), and it can even support sealed voting.

You can technically do all of this with only identity commitments, no numeric primitive is required. However, numbers are hugely useful in computing, and a numeric identity commitment primitive allows covenants to “collapse” huge amounts of manual management into simple, efficient contracts.

Rather than 1) managing that huge merkle tree, 2) constantly requiring merkle proofs within every transaction, and 3) requiring merge/split transactions to check in with the covenant (requiring more multithreaded covenants), the Bitcoin Cash platform provides a numeric type, allowing users to split and merge their “fractional identity commitments” at will. They don’t even have to talk to the covenant about it: the covenant trusts the platform to perform merge/divide math correctly.
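As a concrete contrast, a covenant using the numeric primitive might only verify that its own balance is preserved, while users split and merge their token outputs freely and consensus enforces the arithmetic (a minimal sketch – the amount-inspection opcode names and the output index are illustrative):

<0> OP_OUTPUTTOKENAMOUNT // fungible amount on the covenant's recreated output (index 0 here)
OP_INPUTINDEX OP_UTXOTOKENAMOUNT // fungible amount held by this covenant input
OP_GREATERTHANOREQUAL OP_VERIFY // the covenant's balance may not decrease
// ... no merkle proofs, no per-user check-ins with the covenant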

Scalar identity commitments simplify massive covenants into tiny, easy-to-audit covenants; tiny covenants make small, efficient transactions.

Applying these ideas in a CHIP

To apply this theory, we need a good way to offer these primitives to contracts. The underlying ideas map well to “tokens” as commonly understood:

  • non-fungible tokens – (raw) identity commitments

  • fungible tokens – numeric identity commitments

This CHIP uses the same strategy for extending the transaction format as past colored coin proposals (OP_CHECKCOLORVERIFY, Confidential Assets, OP_GROUP, Unforgeable Groups), but instead of trying to settle on a fixed set of token features and locking them into the base protocol, this CHIP implements VM primitives which enable any kind of token design using the BCH VM.

The CHIP is backwards-compatible with v1/v2 transactions because we place the optional token data into the scriptPubKey part of the transaction, where we have ~70 available codepoints which are guaranteed to never occur in valid Bitcoin Cash transactions. (As with the latest Group CHIP!)

Using one of these codepoints, we define the most byte-efficient structure for encoding both primitives. (Please check my work!)

I also had to make a few design choices, which I tried to document thoroughly in the Rationale section. (Please let me know if I missed something!)

Hopefully this high-level theory helps to evaluate the specifics of the CHIP. (I might try to integrate it into a “Theory” section or something). I’d love to answer questions and hear your thoughts on both the theory and the CHIP’s execution. Thanks!

2 Likes

Intuitively I think that your statement makes sense, but you have a “strong” not-ok there, which makes me believe you have some further worked-out thinking here.
Can I ask you to please share your thinking?

ps. you and Jason are in agreement on this point, as far as my reading skills go.

No, I’m not agreeing with him; he described the ban on implicit destruction as a conservative choice, because it preserves the possibility of paying miners in tokens (instead of BCH) in the future. I’m arguing that this possibility threatens the long-term health of BCH, and we’re much better off sealing it permanently (by allowing implicit destruction), hence the strong NACK.

thanks for replying to my ps. I still stand by what I wrote.

Would you be able to answer my question?

Edit: how do you come to the conclusion that this “threatens the long term health of BCH”, please?
Again, I probably agree. It feels you have a point. But if you have arguments spelled out that would be awesome to read.

The concept of abstraction, a.k.a. the network succeeding while the coin fades into obscurity, is very real. Paying fees in BCH is an anchor of value that preserves a utility even if all else fails; threatening that prospect hence threatens the long-term health of BCH the coin, which is what I’m primarily aligned with. I’ll strongly and unambiguously fight against anyone who proposes a BCH replacement as fee payment above all else.

I’ll recommend searching for “abstraction” in cryptocurrency context to read more on this.

Right, that makes sense. BCH is the tree trunk, and we can add lots of fun little things as extra branches – up until the point where they become bigger than the trunk and the whole thing collapses.

So, to your reading of this topic: I saw this line:

This strategy is also conservative at the protocol level

or, to write out the reference:

Disallowing Implicit Destruction of Immutable Tokens is also conservative at the protocol level

as a line that doesn’t actually imply it is a good idea to do the reverse and allow implicit destruction. Hence you two agree. Implicit destruction is a bad idea and specifically forbidden in the proposal.

The rest of the paragraph just ponders different strategies in different contexts. Reading Jason’s Reddit post, it seems he was thinking about a sidechain, and in that sidechain paying with a token may make sense. Naturally that would not change how the mainchain behaves.

Thanks for your awesome reply!

3 Likes

Talking with @im_uname over the last few months, we were all on the same track, trying to find a minimal set of primitives that can later enable features/products that we can’t even imagine right now. We had no idea you were doing a sort of “clean room design” and it’s brilliant!

When this landed, at first I was like, oh damn, 2 CHIPs now, this will spread us thin – but on second thought it’s awesome, because your CHIP is really good!! It’ll help more people grasp the idea and make a stronger case for activating the tech. I don’t really care what we call it, CashTokens sounds good to me :slight_smile: Thinking about how to avoid spreading ourselves thin, I think there is a way: if we can converge on the same spec, we’ll have 2 CHIPs making a bullet-proof case for the same thing! 2 CHIPs, one tech!

As a first step in that direction, I edited mine to move groupType into its own field: Remove vanity gen for the groupType, expand groupID to 33 bytes (bf37f2db) · Commits · ac-0353f40e / Unforgeable Groups for Bitcoin Cash · GitLab – and as discussion progresses I hope to make some more edits.

The other day, thanks to your heads-up and @tom’s idea from a long time ago, I had a breakthrough discovery with the genesis setup. That’s now the main difference between the 2 CHIPs, and I hope you’ll carefully evaluate the v6.0 genesis setup and see what I’m seeing there – it’s such a powerful primitive!

1 Like

To avoid threads getting lost, I’ll try to respond to everything so far in one post.

Thanks! I’ve been trying to work out various designs for covenants which implement sidechain bridges/decentralized oracles, but I only convinced myself in the past couple weeks that the CashToken primitives truly belong in the BCH VM. (When I realized it would impact your work, I messaged you guys and dropped everything to get the draft done.)

Just to summarize, I think two different categories of work need to happen next:

Implementation & Tooling

This draft specification needs to be implemented in branches for nodes, Libauth, Bitauth IDE, Chaingraph, and any other high-use software. Since having these commitment primitives would even simplify the mental model required for covenant design, we have some thinking to do about development tooling and authentication template design, both of which are required for real-world wallets to interact with these covenants. Even if the CashTokens CHIP were deployed today, there’s a considerable amount of development work to be done before end users will be making predictions or trading on DEXs in their preferred wallet app. (But we have a clear path!)

Token Standards

The CashTokens CHIP describes the minimum network upgrade required to support any token standard. But that’s not enough: token issuers need complete, application-specific standards, guidance on best practices, and application-specific tooling.

Some of the low-level technical standards we’ll likely need:

  • Migration strategies from SLP(v1?)
  • Standards for issuing “collectable” NFTs
  • Metadata standards for tokens (name, icon, symbol, description, etc.):
    • Standards for immutable metadata (e.g. one-time issued tokens)
    • Standards for mutable metadata (e.g. issuers of trust-based assets need to be able to update the icon using e.g. Bitauth, users’ wallets need to notify the user and ask for confirmation to change how the asset is represented in the app)
  • Standards for multi-category, fungible tokens (e.g. for issuers who somehow need to issue more fungible tokens which they consider “fungible” with an existing batch of tokens. This may happen for some covenants which have shareholder votes, and probably will happen with lots of centralized issuers who don’t plan far enough ahead. Might be related to SLPv1 migration standards.)

We also need industry-specific standards, for example:

  • Standards for issuing equity in companies (companies who want to issue shares/options on Bitcoin Cash and then pay dividends directly to shareholders) – consistent standards here allow the whole Bitcoin Cash ecosystem to innovate on services and user experience, while preventing users from being locked-in to centralized holding institutions.
  • Standards for issuing event tickets – allowing tickets to be safely sold and transferred without being the event coordinator’s problem (a lot of event coordinators are not happy with centralized solutions, the fees they charge, and the poor user experiences driven by lack of competition)
    • E.g. standards for transferring the ticket to a P2PKH key which can be displayed in an offline QR code (to be printed or displayed on screen for the ticket “collector” to scan/redeem)
  • Standards for loyalty points – while existing issuers of loyalty points often prefer that they can’t be transferred, customers want to transfer their points, and innovative issuers can differentiate themselves by making their points transferable.
  • Standards for gift cards and store credit – as with loyalty points, some issuers won’t be interested in making issued credit more transferable, but customers want better ways to hold/transfer gift cards – if Bitcoin Cash offers a truly open standard, innovative companies will move the industry forward.

These are standards that need champions, wide coordination between stakeholders, and hopefully, innovative Bitcoin Cash startups to drive standardization and adoption.

In the same way that Bitcoin Cash already disintermediates money and payment networks – where existing companies/governments/central banks extract rent from a widely-used network – these token standards would allow Bitcoin Cash to disintermediate other industries where users “own” assets they cannot practically control – stocks, event tickets, loyalty points, store credit, etc.

By sharing a ledger, companies on Bitcoin Cash token standards can both compete with each other and benefit from a shared network effect. Institutions currently extracting rent from controlled networks are not competing with any one Bitcoin Cash company, they’re competing with all of them at the same time. Bitcoin Cash users get to choose which services or user interfaces they prefer for various assets/use cases, and all competition contributes to the value of the Bitcoin Cash network. (Note, this observation implies that these use cases could be lucrative for Bitcoin Cash startups.)

I should also note, this isn’t a call to move the internal database of every company on earth onto the Bitcoin Cash blockchain (that would be absurdly inefficient). Our goal should be to encode the minimum bytes needed to record asset ownership, so individuals can have meaningful control of their assets. Bitcoin Cash is a ledger for globally coordinating economic activity with the informed consent of every user; the network has no administrators to rent-seek or violate property rights. My current heuristic: if putting something on-chain improves individual sovereignty, do it.


Responses

On to specific questions – some of these were also answered above in the “theory” summary or in discussion on CashToken Devs, but just going to reply to each here for completeness:

Exactly – dropped. If we want logical consistency in the VM, token amounts should never exceed the maximum VM number. This is important for contracts because it eliminates a class of overflow bugs. Note however, it’s not a limitation in practice: Other token standards built on top of CashTokens can allow for an unlimited supply. (E.g. multi-category tokens, see: Limitation of Fungible Token Supply.)
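For example, because a category’s total supply can never exceed the maximum VM number, a contract can sum same-category amounts directly, without overflow checks (a minimal sketch – the amount-inspection opcode name is illustrative, and both inputs are assumed to hold the same category):

OP_INPUTINDEX OP_UTXOTOKENAMOUNT // fungible amount on this input
<1> OP_UTXOTOKENAMOUNT // fungible amount on input 1 (same category assumed)
OP_ADD // cannot overflow: the category's total supply fits in a VM number
<expected_total> OP_NUMEQUALVERIFY // e.g. enforce an exact total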

Thanks for bringing that up – I’m now thinking that paragraph in the CHIP’s rationale is just incorrect: it’s almost certainly more efficient (in terms of real-world transaction costs) to have liquidity pools for various tokens which allow you to hand the pool some tokens and receive some BCH. If you wanted to pay for a transaction in another asset, you would simply swap it and not claim all of the released BCH (paying some BCH as a fee). This is far simpler to coordinate, and avoids the “coincidence of wants” issue where the miner of the next block might not care about the particular token you’re hoping to use for fees. So token holders can already pay fees in BCH using atomic swaps, and that paragraph can be deleted. (Done :+1:, thanks!)

It fully replaces the inductive proof use case behind PMv3. PMv3 let us create transactions which themselves plan for a future transaction to be able to prove something about the contents of that transaction, i.e. an inefficient CashToken NFT.

You’re right that the “collectable NFT” use case can be partially replaced by a covenant which forces the user to re-add the same bytes in some part of the locking/redeem bytecode (in fact, it doesn’t even need OP_RETURN to do so). However, things fall apart with more interesting use cases: every such “token” needs to know (in advance) the structure of every contract which might “hold” it. (CashTokens v0 actually demonstrates the kind of hack you need for this with its optional multisig path.) In practice, this means covenants are forced to be closed systems, i.e. limited cross-contract interfaces, no decentralized ecosystem. (And as mentioned above, my goal is not to support collectable NFTs, they just happen to be made efficient by CashTokens.)

See above, but to expand on the second question – this actually highlights one of the simplifications we’re able to make when we acknowledge the incompatibility between fungible and non-fungible tokens:

Non-fungible tokens must be able to support a minting capability, a token which enables one-to-many token creation. (The mutable capability offers the corresponding one-to-one behavior, and every token has the default “burnable” behavior of one-to-none.) Because non-fungible is the more general of the two types (it’s the “raw” identity commitment), its “minting tokens” can also be used to demonstrate category-ownership for fungible tokens (but not the reverse). So even though it seems like we might need some equivalent “minting” capability for fungible tokens, that would add unnecessary complexity to these basic types. Any kind of fungible token standard can be implemented with just these basic types, and because all tokens use PREFIX_TOKEN, we even get a definition of “circulating supply” for free.
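For example, a contract can require the spender to demonstrate category ownership by spending that category’s minting token in the same transaction (a minimal sketch – the input index and the capability byte’s exact value are placeholders):

<1> OP_UTXOTOKENCATEGORY <32> OP_SPLIT // category and capability byte of the token on input 1
<minting_capability_byte> OP_EQUALVERIFY // the token must have the minting capability...
<expected_category> OP_EQUALVERIFY // ...and the expected category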

Both of these questions relate to storing data in token category preimages: this can achieve one behavior of CashToken NFTs – passing an authenticated message from a past transaction to a future transaction, but it’s less flexible and extremely inefficient. Pulling data from those preimages requires reconstructing the preimages, which doesn’t actually matter to the transaction in the future – it only needs the authenticated data. Having an explicit “identity commitment” primitive accomplishes the same task with smaller transaction sizes.

As for the mining use case – that’s not too difficult with just a covenant. Covenants can hand out tokens to any miners who prove they have the 0th output of a coinbase transaction (the covenant can inspect the preimage, which also includes the height since BIP34). Different people can deploy the same covenant, so a “miner vote” is available to any covenant system.

There are some significant disadvantages to this: Avoiding Proof-of-Work for Token Data Compression. And also relevant, any hashing strategy precludes the use of transaction IDs as token category IDs.

Yes, we’re fundamentally adding new data fields to the transaction format (in a way which is backwards compatible with v1/v2 transactions). The VM supports introspection of every property of transaction and evaluation state for operation set completeness, so every bit of new static state should have its own, single-byte opcode.

There is a design decision here I should share, though:

Including Capabilities in Token Category Inspection Operations

The token category inspection operations (OP_*TOKENCATEGORY) in this proposal push the concatenation of both category and capability (if present). While token capabilities could instead be inspected with individual OP_*TOKENCAPABILITY operations, the behavior specified in this proposal is valuable for contract efficiency and security.

First, the combined OP_*TOKENCATEGORY behavior reduces contract size in the most common case: every covenant which handles tokens must regularly compare both the category and capability of tokens. With the separated OP_*TOKENCATEGORY behavior, this common case would require at least 3 additional bytes for each occurrence – <index> OP_*TOKENCAPABILITY OP_CAT, and commonly, 6 or more bytes: <index> OP_UTXOTOKENCAPABILITY OP_CAT and <index> OP_OUTPUTTOKENCAPABILITY OP_CAT (<index> may require multiple bytes).

There are generally two other cases to consider:

  • covenants which hold mutable tokens (somewhat common) – these covenants are also optimized by the combined OP_*TOKENCATEGORY behavior. Because their mutable token can only create a single new mutable token, they need only verify that the user’s transaction doesn’t steal that mutable token: OP_INPUTINDEX OP_UTXOTOKENCATEGORY <index> OP_OUTPUTTOKENCATEGORY OP_EQUALVERIFY (saving at least 4 bytes when compared to the separated approach: OP_INPUTINDEX OP_UTXOTOKENCATEGORY OP_INPUTINDEX OP_UTXOTOKENCAPABILITY OP_CAT <index> OP_OUTPUTTOKENCATEGORY <index> OP_OUTPUTTOKENCAPABILITY OP_CAT OP_EQUALVERIFY).
  • covenants which hold minting tokens (rare) – because minting tokens allow for new tokens to be minted, these covenants must exhaustively verify all outputs to ensure the user has not unexpectedly minted new tokens. (For this reason, minting tokens are likely to be held in isolated, minting-token child covenants, allowing the parent covenant to use the safer mutable capability.) For most outputs (verifying the output contains no tokens), both behaviors require the same bytes – <index> OP_OUTPUTTOKENCATEGORY OP_0 OP_EQUALVERIFY. For expected token outputs, the combined behavior requires a number of bytes similar to the separated behavior, e.g.: <index> OP_OUTPUTTOKENCATEGORY <32> OP_SPLIT <0> OP_EQUALVERIFY <depth> OP_PICK OP_EQUALVERIFY (combined, where depth holds the 32-byte category set via OP_INPUTINDEX OP_UTXOTOKENCATEGORY <32> OP_SPLIT OP_DROP) vs. OP_INPUTINDEX OP_UTXOTOKENCATEGORY <index> OP_OUTPUTTOKENCATEGORY OP_EQUALVERIFY OP_INPUTINDEX OP_UTXOTOKENCAPABILITY <index> OP_OUTPUTTOKENCAPABILITY (separated).

Beyond efficiency, this combined behavior is also critical for the general security of the covenant ecosystem: it makes the most secure validation (verifying both category and capability) the “default” and cheaper in terms of bytes than more lenient validation (allowing for other token capabilities, e.g. during minting).

For example, assume this proposal specified the separated behavior: if an upgradable (e.g. by shareholder vote) covenant with a tracking token is created without any other token behavior, dependent contracts may be written to check that the user has also somehow interacted with the upgradable covenant (i.e. <index> OP_UTXOTOKENCATEGORY <expected> OP_EQUALVERIFY). If the upgradable covenant later begins to issue tokens for any reason, a vulnerability in the dependent contract is exposed: users issued a token by the upgradable covenant can now mislead the dependent contract into believing it is being spent in a transaction with the upgradable contract (by spending their issued token with the dependent contract). Because the dependent contract did not include a defensive <index> OP_UTXOTOKENCAPABILITY <0xfe> OP_EQUALVERIFY (either by omission or to reduce contract size), it became vulnerable after a “public interface change”. If OP_UTXOTOKENCATEGORY instead uses the combined behavior (as specified by this proposal), this class of vulnerabilities is eliminated.

Finally, this proposal’s combined behavior preserves two additional, unused codepoints in the Bitcoin Cash VM instruction set.

(Didn’t have that polished before I published the draft, but it’s now in the CHIP :+1:)


I hope that answers all the open questions in this thread, please keep them coming!

3 Likes

I want to merge the 2 proposals so we can all use our energy better.
As you said, there’s much work to do post-activation, tooling, standards, etc.
The sooner we can “lock-in” a spec for activation, the sooner we can move to doing all those other exciting things!

Analyzing the “diff” between this CHIP and Group’s v6, there are “merge conflicts”, and I intend to resolve those by implementing your spec into some v7 which can be just for reference if we bless your CHIP for activation – it will also help me, and anyone who’s followed Group’s progress, better reason about it in familiar terms and better understand your design decisions.

Your rationale for dropping the “singularity/baton” amount overload is convincing, and as you say – multi-category tokens make infinite supply viable while avoiding creating a problem for the Script VM.
I gave up vanity-gen already for the same reason - it complicates things for Script.

So really, I think the only merge conflict will be the genesis setup, and there I hope it will be CT2.0 to resolve it by implementing the v6 genesis, I’ll make some arguments below as to why…

Note that I’m no longer suggesting replacing your “commitment” field by packing it into “category” – I will be suggesting having both primitives available. My only suggestion, then, is to replace your category genesis definition with an adaptation of Group’s (v6) and add a 7th introspection opcode, which can actually use the same codepoint (1 codepoint, 3 contexts: input prefix, opcode, output prefix), which is fitting because it all relates to how the category came into existence.

What if the “authenticated data” needed is the information on the genesis setup?
What if you want to pick up any random UTXO and store its information for later?
You’d have to use another output as a carrier, prepare it in advance (create it as vout 0), then spend it alongside the UTXO you want to store and copy its TXID/index into the commitment field. Then the NFT has to replicate this data on every transfer, and when you need data from the preserved TX you still have to unpack the TXID by providing the preimage. If you wanted to store more than the output’s txid/index, you’d have to hash it all anyway so it can fit into the commitment field of the other output.
Sometimes having the data moved in “plaintext” is beneficial – when you want to scan the blockchain for particular data – and sometimes it is not.

The v6 genesis setup lets you pick and pack any existing UTXO (even another token) into the category (or commitment) of a new NFT created by consuming it. It may or may not be fully compressed in transit (depending on what you want), and sure, at the end destination it may or may not require some more proof to unpack, depending on the genesis script/parameters you chose.
If the v6 genesis setup is merged with your “commitment” extension, it’ll enable more cool stuff: you can pack any UTXO and add a message to it, all in one output.

With your genesis setup, how do you resolve the problem of txid/0 already having a category? Is it prohibited or does it get burned? You can’t have both because then the input category inspection opcode would be ambiguous. This introduces a sort of discrimination against existing UTXOs… feels gnarly.

With a variant of the v6 genesis, you can pick any existing UTXO for genesis, the input can carry both, and the Script placed on the prevout chooses what happens (new category genesis prohibited, old category burned, or the prevout’s token simply ignored). Prevout inspection works the same (it obtains prevout data), while a 7th opcode would be used to obtain the genesis preimage (which would contain the non-deterministic parameters so you can validate the genesis setup – NFT type or genesis amount).

<expected parameters> // expected parameters hardcoded in redeem script
OP_INPUTINDEX OP_INPUTCATEGORYGENESIS // push the genesis parameters
OP_EQUALVERIFY // verify

OP_INPUTCATEGORYGENESIS returns either <0> if not a genesis input, or the genesis parameters:

- genesis version (1 byte),
- genesis prefix `has_nonfungible` parameter,
- genesis prefix `commitment_length` parameter,
- genesis prefix `commitment` parameter,
- genesis prefix `amount` parameter.

The categoryID preimage is then:

  1. prevout ref and genesis input index (txid + vout index + genesis input index)
  2. genesis parameters
  3. optional - whole prevout

Item 1 MUST be there regardless of the version; it’s our security guarantee and ensures a unique, unforgeable, and unmalleable (no changing of the input slot) ID. Item 2 can be used to prove to some other contract how the token was created. Item 3 can be signalled by the genesis version byte and used only when needed.

To prevent malleation you either “sign” the ID by hardcoding expected parameters (P2SH) as above, or you sign 1 output of the category being created (P2PKH).
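Rough sketch of that P2SH pattern (hypothetical – the hash function and exact field layout of the preimage are illustrative, and any appended capability byte is ignored for brevity):

<txid_and_vout> <genesis_input_index> OP_CAT // 1. prevout ref + genesis input index
<expected_genesis_parameters> OP_CAT // 2. the hardcoded genesis parameters
OP_HASH256 // hash the preimage into the expected categoryID
<0> OP_OUTPUTTOKENCATEGORY OP_EQUALVERIFY // the category created on output 0 must match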

Note: if item 3 is a commitment-carrying NFT, then its message gets compressed into the new token, too!

I claim that these primitives are emergent from the design of the Bitcoin Cash virtual machine: I did not design them to accomplish a set of “features”, I just discovered they are missing.

I will argue the same – going over many possible ways to do genesis (committing to just the first input as Andrew proposed, adding the prevout to that, using the TXID as Tom proposed, using a genesis prototype prevout with ID=0x00, using the whole genesis TX for the preimage…) and thinking about how it could later be used in contracts, I discovered what’s missing – the input genesis declaration. :slight_smile:

2 Likes

Aaah, now I see it… the output carries BOTH the NFT and the left-to-mint FT supply, i.e. the mint pool (in the amount), of the same category… the minting NFT is then like a permission to take from the supply, right? And this way lets you enforce MAX_SUPPLY globally, to prevent funny situations in Script.

And you can achieve infinite-supply tokens by using the commitment as a sort of “family identifier”, so even tokens of distinct categories could be fungible!

Right you are, then – I’m pretty much on board with CT as it is; it’s just the genesis setup described above that I have strong feelings about, because I really think it’s a breakthrough :slight_smile:

For other details, I’ll know better when I try to resolve the “merge conflicts” by matching it with the Group CHIP and see if I discover anything that I’m not seeing yet.

2 Likes

If I understand your question, the goal is to encode some information in the category ID by making the category ID a hash of a specific, genesis data structure – a “genesis commitment”. But the category ID is a transaction ID, the hash of a specific data structure (the “pre-genesis” transaction) – a “pre-genesis commitment”. Is there some application of genesis commitments which cannot be supported by pre-genesis commitments? (The token issuer can easily create a “pre-genesis” transaction with whatever OP_RETURN outputs are needed by downstream applications or protocols.) If not, using transaction IDs as category IDs enables token-related applications to be powered by existing indexes.
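For reference, a contract that needs data from the pre-genesis transaction can “unwrap” the category ID by hashing a spender-supplied serialization of that transaction (a minimal sketch – assumes an immutable token so no capability byte is appended, with the serialization supplied via unlocking data):

<pre-genesis tx> OP_DUP OP_HASH256 // hash the supplied pre-genesis transaction serialization
OP_INPUTINDEX OP_UTXOTOKENCATEGORY // the category ID of the token held by this input
OP_EQUALVERIFY // proves the serialization really is the pre-genesis transaction
// ... OP_SPLIT the remaining serialization to read any committed OP_RETURN data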

It helps to focus only on UTXOs – every transaction has exactly one output at index 0, so there can only ever be one UTXO there. The network already prevents UTXOs from being spent twice, so category IDs are guaranteed to be unique by double-spend prevention.

Yes, that will be a common behavior for covenants. :+1:

I should clarify though, there is no fundamental minting designation specifically for the fungible tokens in an output – like “minting buckets”. All fungible CashTokens must be created in the category’s genesis transaction (even if some amount is still in the issuer’s control, i.e. reserved supply). But this is just how the underlying primitives work – standards built on top of CashTokens can easily standardize a strategy for issuing later batches of fungible tokens. (And there are many ways to design trustless migrations for category IDs using covenants.)

Note that even if we had some concept of “minting buckets”, off-chain token issuers could still fail to “properly label” their reserved supply (such that circulating supply is easy to calculate by all indexing applications). So the best we can do is make sure issuers know that “labelling” their reserved supply is possible (and token standards built on top of CashTokens should encourage compatibility too).

One cool side effect of simplifying away “minting buckets” is that it makes reserved/circulating supply easy to calculate for covenant-issued tokens by default – a covenant would have to be intentionally designed to make calculating its reserved/circulating supply harder for e.g. block explorers. So we get a more consistent/block-explorable token ecosystem for free.

3 Likes

It’s not just about the data, although I arrived there thinking in that direction, trying to achieve functionality similar to the “detached proofs” of PMv3. That led me to a version (4.3, I think) where the whole genesis TX (except the genesis IDs being generated) was the preimage (input scripts individually hashed), but that was too complex and still rigid, so I dropped it and went back to a simple hash of the full prevout ref + the prevout itself, thinking the same: you can move one step back and unwrap the TXID to prove parts of the “pre-genesis”.

Then, when you pinged me, I was thinking “what could it possibly be?” and thought it must be that you’d cracked the genesis setup! That led me to revisit some old ideas (ID=00…00 for pre-genesis, with the next TX generating the real ID), but then it hit me: genesis naturally belongs on inputs! Little did I know that you’d cracked something else that was giving me a headache… Since subgroups got dropped I never looked back at adding more data, but I had tried to achieve something similar to your “commitment” by overloading the amount field and implementing a groupType = NFT | hasAmount, which would let you have 8 “free” bytes on an NFT.

Anyway, that was just for some background – it’s about all existing UTXOs having equal “genesis potential”, and creators of UTXOs being able to make a covenant that can prohibit, ignore, or require a token genesis on top of itself, with parameters specified by the covenant creator, without having to make sure they put it in the right slot. Also, if genesis is implicit, then you have to do it the roundabout way: verify your genesis output’s introspected ID against the input’s introspected prevout TXID; if it matches, it’s a genesis. What if you want to verify the supply being created at genesis? Then you have to tally the genesis TX outputs from within Script, which is something we’re trying to avoid, no?

using transaction IDs as category IDs enables token-related applications to be powered by existing indexes

I’m well aware of this benefit and the trade-off; it’s just that I’m not convinced it’s a good one now that I see this genesis-on-input approach.

You’re sacrificing a genesis approach that fits in perfectly with the Script VM and other BCH blockchain primitives in order to make some secondary jobs easier – jobs that weren’t really hard to begin with.

From talking with some others I got the impression that this easy indexing is a solution looking for a problem. The whole ecosystem is used to indexing token IDs, and if you get a random token and don’t want to use an index, then finding the genesis TX is a simple matter of tracing back a chain between your UTXO and the genesis TXO.

Is there some application of genesis commitments which cannot be supported by pre-genesis commitments?

Yeah, pre-genesis covenants, like these 2 examples:

  • Require that 10,000 sats is paid into some P2SH covenant and an NFT created. The target covenant later easily verifies the NFT’s genesis and that it’s getting burned, and releases the 10,000 sats. A cool aspect of this: you can take the 10,000 sats from any such covenant UTXO, even ones not created by you, because the covenant only verifies the NFT template. Bitauth IDE
  • Require a token genesis to be created. Just that lets us do something interesting: define a standard where tokens are created from a public covenant like this (or authenticated by verifying a “detached owner” at txid/n+1). This way, services can subscribe to a single address to be notified whenever a new standard token genesis is created. Bitauth IDE
2 Likes

This is clear to me; it was not about this – I misunderstood your spec. The question was about what OP_UTXOTOKENCATEGORY would return when executed on an input whose prevout is a txid/0 and the prevout already has a token. But from the spec it’s clear – it would return the “old” token, and Script running on the input would have to look at the outputs to know whether it’s creating a new category and of what kind.

1 Like

Sorry for the late reply – there are some excellent points addressing my concerns. Some more thoughts for @bitjson:

Exactly – dropped. If we want logical consistency in the VM, token amounts should never exceed the maximum VM number. This is important for contracts because it eliminates a class of overflow bugs. Note however, it’s not a limitation in practice: Other token standards built on top of CashTokens can allow for an unlimited supply.

Note that even if we had some concept of “minting buckets”, off-chain token issuers could still fail to “properly label” their reserved supply (such that circulating supply is easy to calculate by all indexing applications). So the best we can do is make sure issuers know that “labelling” their reserved supply is possible (and token standards built on top of CashTokens should encourage compatibility too).

Note taken. I suppose the “right way” to do mintable fungibles, then, is to establish a very large bucket that is locked away and accessible only to a baton NFT. Standardizing a specific way to do this as “mintable” will help application interfaces deal with it, and separate the intention of a “very high supply token” from that of a “mintable token”. One way to be very clear about it is perhaps to standardize “mintable” tokens as tokens that are simply minted with the maximum VM supply, then further define how the reserve can be held.
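A rough sketch of such a reserve covenant (illustrative only – the index choices are placeholders, and the <32> OP_SPLIT just drops the baton’s capability byte, as in the CHIP’s own examples):

OP_INPUTINDEX OP_UTXOTOKENCATEGORY // category of the fungible reserve held by this covenant
<1> OP_UTXOTOKENCATEGORY <32> OP_SPLIT OP_DROP // category of the baton NFT on input 1, capability byte dropped
OP_EQUALVERIFY // only the matching baton may release reserve tokens
// ... plus checks that output 0 recreates this covenant with the remaining reserve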

On this point: (also thanks @bitcoincashautist for clarifying some of that above!)

The reserved supply of a fungible token category can be computed by retrieving all UTXOs which contain token prefixes matching the category ID, removing provably-destroyed outputs (spent to OP_RETURN), and summing the amounts held in prefixes which have either the minting or mutable capability.

Is there any advantage to limiting reserves to mutable/minting tokens? It seems to me that they don’t confer any additional advantage compared to vanilla NFTs; in fact, using them may complicate identification of reserve pots down the line.

While this does not affect consensus, when specifying standards for mintable tokens I’d actually do the opposite: reserves should go to a vanilla NFT so they remain in one easily identifiable chain, and we can further specify they need to be at output x of genesis for easy identification, etc. Other configurations are possible but will take the form of additional standards.

Thanks for bringing that up – I’m now thinking that paragraph in the CHIP’s rationale is just incorrect: it’s almost certainly more efficient (in terms of real-world transaction costs) to have liquidity pools for various tokens which allow you to hand the pool some tokens and receive some BCH. If you wanted to pay for a transaction in another asset, you would simply swap it and not claim all of the released BCH (paying some BCH as a fee). This is far simpler to coordinate, and doesn’t require the “coincidence of wants” issue where the miner of the next block might not care about the particular token you’re hoping to use for fees. So token holders can already pay fees in BCH using atomic swaps, and that paragraph can be deleted. (Done :+1:, thanks!)

I don’t think it’s sufficient to delete the miner fee rationale - with that gone, the only remaining rationale is safety, and there are ways to get around it (for example, adding a requirement for, say, a SIGHASH_TOKEN bit in inputs spending tokens) without losing the freedom to destroy. I feel strongly that wallet owners should be able to destroy their tokens and recover the sats - I can attempt a PR to the specs if necessary.

4 Likes

They could, with OP_RETURN, but it would be cheaper with implicit destruction so there’s an argument there for allowing implicit destruction.

If you had 10 token UTXOs with distinct category IDs, you’d need 10 inputs and 10 OP_RETURN outputs to “eat” each token + 1 pure BCH change output.

With implicit destruction, you’d need just the 10 inputs and 1 BCH change output.

Wallet safety is one argument for keeping it explicit, although there may be alternatives as you suggested.

I wonder whether there are some Script VM implications but I can’t think of any right now. It’s just one more way to burn: 1bitcoineater, 0-amount OP_RETURN, omission (needs >= instead of == balancing rule).

3 Likes

Posting a review mixed with suggestions here.

2 Likes