CHIP 2022-02 CashTokens: Token Primitives for Bitcoin Cash

@bitcoincashautist and I worked through the token genesis question offline. TL;DR: if any relevant use cases are discovered in the future, there are at least two good alternatives to a built-in category preimage data structure.

First observation: we haven’t yet identified any use case that requires a contract to “accept” tokens of an unknown category where some vetting process isn’t already required, e.g. a vote of shareholders, some staking process, submission with a liquidity pool of BCH, etc. Since these processes already imply that economic actors are somehow vetting the unknown token category (e.g. reviewing the details of the issuing covenant, verifying the tokens’ utility in an external system, checking that a pegged asset has audited reserves, etc.), (re)verifying a few details of the unknown token category’s genesis transaction in the on-chain contract bytecode doesn’t offer any additional value (it just wastes transaction bytes/fees).

However, if someone did discover a use case in the future, there are at least two good options available without modifying the current specification:

  1. Using the pre-genesis transaction as a commitment structure – e.g. committing data to OP_RETURN outputs or specifying a covenant which enforces constraints around token genesis, and/or
  2. Covenants-as-standards - well-known public covenants which oversee the creation of many new tokens according to strict rules, allowing those covenants to:
    1. Enforce uniqueness across a large, managed category of NFTs, and/or
    2. Freely issue “certification tokens” – tokens which attest to the newly created token category’s compliance, e.g. fixed supply below some limit, no minting tokens, minting tokens assigned to a strict covenant, etc. Such certification tokens could also be issued to public covenants which prevent them from being moved or destroyed, allowing any transaction to temporarily borrow them for a proof (and then return them to the same covenant).

Interestingly enough, the covenants-as-standards strategy is far more efficient than either commitment structure option. It cuts down the transaction size cost of verifying category genesis properties to 36 bytes (<index> OP_UTXOTOKENCATEGORY <covenant_managed_category_id> OP_EQUAL) rather than the hundreds of bytes required for contracts to reconstruct, verify, and inspect a category ID’s preimage (whether that preimage is a transaction or a new data structure).
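For concreteness, the 36-byte figure breaks down as: a one-byte input-index push, the one-byte OP_UTXOTOKENCATEGORY opcode, a 33-byte push of the 32-byte category ID, and OP_EQUAL. A quick sketch (the opcode byte values here are illustrative placeholders, not normative):

```python
# Opcode byte values below are illustrative, not normative.
OP_1 = 0x51                  # pushes the inspected input's index (here: 1)
OP_UTXOTOKENCATEGORY = 0xce  # assumed byte value; check the final spec
OP_EQUAL = 0x87

category_id = bytes(32)  # any 32-byte category (i.e. transaction) ID
push_category = bytes([len(category_id)]) + category_id  # 1-byte push opcode + 32 bytes

script = bytes([OP_1, OP_UTXOTOKENCATEGORY]) + push_category + bytes([OP_EQUAL])
assert len(script) == 36  # 1 + 1 + (1 + 32) + 1
```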

So with that, we concluded it makes sense to leave that part of the CHIP as-is (using transaction IDs as category IDs).

3 Likes

Thanks for the comments @im_uname, @bitcoincashautist, and @emergent_reasons! I’ll try to respond here to everything in this thread, then respond on GitHub to activity there.

Yes, exactly. I think that would be a good start for standardizing “mintable, fungible tokens” in some “SLPv2” specification. I think it’s also a good idea for standards to attempt to conform to how covenants will issue such tokens by recommending that trusted token issuers hold their “reserved” tokens in outputs with either minting or mutable capabilities. (More in below response.)

If I’m understanding the question – yes. If the goal were only to standardize a method for identifying circulating/reserved tokens, I would argue that definition belongs in complete token standards rather than in this CHIP.

The rationale behind including these particular supply definitions in this CHIP is that there is an important, “emergent standard” inherent to how most covenants must issue tokens (if they use the minimum possible contract/transaction sizes): most covenants hold easily identifiable reserves of their own, unissued tokens. This supply of unissued covenant-held tokens will almost always be held in a top-level or depository child covenant, many of which will use a tracking token with the mutable capability. Simpler covenants may also directly hold the minting capability, rather than isolating it to a privileged child covenant (somewhat like a linux superuser account). In both cases, the “reserved” supply is easy to calculate by outside observers (and without adding complexity to the covenant to accommodate some standard).

Given this reality, it makes sense for higher level token standards to attempt to standardize around compatible definitions, ensuring good application-layer compatibility between on-chain and off-chain token uses.

On higher level standards: I agree – it’s a great idea to standardize around minting and holding all reserves in the 0th transaction output. (Bonus: Bitauth-supporting infrastructure like Chaingraph already supports recursive lookups of the 0th output.) So this CHIP’s only contribution in that respect would be to clarify that each of those 0th outputs must have either the minting or mutable capability, ensuring that supply calculation for standards-compliant tokens issued by centralized entities are compatible with supply calculation for covenant-issued tokens.

This is a great point, thanks for noticing @im_uname! As @bitcoincashautist mentioned, requiring OP_RETURN for token destruction makes cleaning up “token dust” much more expensive than some sort of SIGHASH_TOKEN. A signing serialization type flag also simplifies away the awkward destruction policy difference between fungible or immutable tokens and minting or mutable tokens (where the former currently cannot be implicitly destroyed, but the latter can).

In fact, to prevent offline signers from being misled by signing requests which omit token information, we also need to add the token information directly to the signing serialization. Maybe we include the full contents of the token output prefix (same encoding) after value in the algorithm for SIGHASH_TOKEN signatures?
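To make that concrete, here’s a rough sketch of what such a preimage could look like, assuming the BIP143 field order and a hypothetical SIGHASH_TOKEN flag value; neither the flag value nor the exact placement is specified anywhere yet:

```python
import hashlib
import struct

SIGHASH_TOKEN = 0x10  # illustrative flag value only; nothing is assigned yet

def dsha256(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def signing_serialization(version, hash_prevouts, hash_sequence, outpoint,
                          script_code, value, token_prefix, sequence,
                          hash_outputs, locktime, sighash_type):
    """BIP143-style signing serialization with the full token output prefix
    committed directly after `value` when SIGHASH_TOKEN is set. Field order
    follows BIP143; the token placement is the suggestion above, not a spec."""
    ser = struct.pack('<I', version)
    ser += hash_prevouts + hash_sequence
    ser += outpoint                                 # 32-byte txid + 4-byte LE index
    ser += bytes([len(script_code)]) + script_code  # simplified: scripts under 0x4c bytes
    ser += struct.pack('<q', value)                 # satoshi value, 8 bytes LE
    if sighash_type & SIGHASH_TOKEN:
        ser += token_prefix                         # token prefix, same encoding as in outputs
    ser += struct.pack('<I', sequence)
    ser += hash_outputs
    ser += struct.pack('<I', locktime)
    ser += struct.pack('<I', sighash_type)
    return dsha256(ser)
```

An offline signer could then display the token category and amount straight from `token_prefix`, exactly as it already does for `value`, without fetching parent transactions.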

If you’d be interested in sending a PR, I’d definitely appreciate it! (I’m also happy to write the update or any relevant rationale, just let me know what you’re interested in doing.)

Right, it will still be possible for tokens to be explicitly burned to OP_RETURN outputs (e.g. for protocols which require proof of burn), but the SIGHASH_TOKEN strategy is both more efficient and closes some possible vulnerabilities in offline signers.

Aside: funds/tokens sent to 1bitcoineater... are actually not safe for any token standard to consider “burned” – those funds can be unlocked in the future if someone manages to acquire that private key (e.g. via a break in the cryptography, even decades from now). Even more plausible, a very expensive collision search could eventually create a new P2PKH “burn address” with a known private key. That’s one reason we can be pretty confident in the CHIP’s token supply definitions after only removing OP_RETURN outputs: OP_RETURN covers all provably unspendable outputs which can be generated by standard transactions. (To provably burn tokens with a different locking bytecode – e.g. OP_0 – each user would have to manually submit/mine a non-standard transaction.)

4 Likes

Thanks, I’ll try to submit a PR this week. :slight_smile:

3 Likes

That captures and summarizes it - for now :slight_smile:
Let’s keep it on the back-burner while we focus on other things, and maybe we’ll later hit some “Aha!” moment when we start working out examples and standards.

Just to add some closing thoughts: I wouldn’t mind if genesis was left as-is, although I’m not fully convinced, because I see workarounds for what I had in mind with using the TXID. If you need workarounds to achieve something, then maybe the whole thing could be reworked so you don’t need those workarounds.

I feel like our genesis setup could become an important “commitment primitive” as it can preserve and carry any TXID and with that preserve a proof that something happened in the past. With that in mind I’d like to future-proof it and would want it to be as generic and flexible as possible, so we don’t wake up one day thinking like we did with TXID: Damn, it would be real nice if TXID hashed locking scripts individually while constructing the preimage.
We’re looking at it from different angles/philosophies I guess. I’m thinking: if we’re introducing blockchain-native primitives, then they should allow for maximum expressiveness to blockchain-native agents (contract entities).

1 Like

Yes. I thought it would be “automatically” included as if it was part of hashOutputs scriptPubKey but better to spell it out.

This comment made me realize something: the prevout’s token payload is special because signing serialization sometimes uses the locking script and sometimes the redeem script so we should add the prevout’s PFX inside the scriptCode, prepended to the actual script.

2 Likes

(I think you mean hashPrevouts?) Right, hashPrevouts commits to the full contents of all the transactions with outputs being spent by the current transaction. The main reason value is included separately by BIP143 (and then by BCH’s signing serialization algorithm) is to make verification easy for offline signers. In practice, many offline signers weren’t actually verifying the values from the source transactions because to do so could require transferring, decoding, and inspecting MBs of raw transactions. Committing to the value directly in the signing serialization allows for equivalent security using much simpler offline signing implementations. (Even if all transaction data must be transferred e.g. by keyboard or a QR code.)
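As a concrete sketch of what hashPrevouts actually serializes – only the 36-byte outpoints, which commit to the source values and scripts indirectly via the txid:

```python
import hashlib

def dsha256(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def hash_prevouts(outpoints):
    """BIP143 hashPrevouts: double-SHA256 over each input's outpoint
    (32-byte txid + 4-byte little-endian output index), concatenated.
    Values and locking scripts are committed only indirectly, via the txid."""
    ser = b''.join(txid + index.to_bytes(4, 'little') for txid, index in outpoints)
    return dsha256(ser)
```

This is why committing `value` (and, per this discussion, the token prefix) separately helps offline signers: verifying it through hashPrevouts alone would require fetching and hashing every parent transaction.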

I’m not sure if I understand the logic here, but my initial reaction is that scriptCode is already a pretty complicated idea – I think we’re best off leaving it as-is and just committing to the full token prefix directly after value. (And only if SIGHASH_TOKEN is present.)

2 Likes

I thought that it commits only to prevout refs (which by themselves commit to the prevout satoshi amount and locking script through TXID). So, I meant to include the token prefix same way as value, so that you can see what you’re signing without having to obtain parent TXes. We’re thinking the same here, but I had a different place for it in mind.

This sounds good, and cleaner :slight_smile:

No, there I meant newly created outputs, although it was not really applicable to your comment about offline signing. Newly created outputs’ token payloads need to make it into the signature preimage(s), too. We should spell out how both prevouts and new outputs are to be handled where signing is concerned.

1 Like

Finally read through all this as part of my review. Amount of constructive debate and collaboration going on is amazing.

2 Likes

Just had time to finish digesting the proposal and I finally get it. My initial impression a couple of months ago was uncertain to negative, as it seemed to be a shortcut that added application logic to layer 1. However the reasoning provided around the elemental nature of the two types of tokens is sound and the CHIP is really impressive and detailed. I hope it succeeds and I’ve gotten all excited about the possibilities for things to build on top.

4 Likes

Great to know! Can’t wait till Jason wraps up his other projects so we can push this forward together!

PS @rnbrady can we add that as a quote for the CHIP?

1 Like

Yes, sure thing and let me know how else I can help.

2 Likes

Making a comment here as placeholder that we need to address activation strategy in the CHIP.

While nonstandard, it is possible that transactions generated before activation will contain “valid-looking” token outputs, and since the ruleset doesn’t exist before activation, they can be “wrong” in a wide variety of ways - including but not limited to duplicate category IDs, invalid-when-summed amounts, nonsense NFT capabilities, nonexistent genesis, and so on.

Declaring these pre-activation outputs as invalid might be simple as a thought experiment, but incurs technical debt in practice - we’ll need a separate pass checking UTXO height to determine the validity of all token transactions. Not ideal…

… but these should not be a big deal in practice even if we adopt the ruleset as is! These “fake” UTXOs can simply be declared “valid if they exist at activation”. These may lead to shenanigans involving any categories that use txids that exist pre-activation, but there is a clean way around it: implementers (of wallets, services, smart contract providers, etc.) mostly just need to know to do a de novo two-step genesis for any tokens they generate; shenanigans only affect users who venture into directly using pre-activation UTXOs for genesis.

Does this mean we don’t actually need to do much? Yes, but we do also need to address this point in the specs, lest people get confused about what the best practices are.
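The “valid if it exists at activation” rule reduces to a single height comparison when spending a token-bearing prevout; a minimal sketch (ACTIVATION_HEIGHT is a placeholder, not a real value):

```python
# Sketch of the "valid if it exists at activation" approach: only outputs
# created at or after the activation height must satisfy the full token
# ruleset; older ones are grandfathered in as-is.
ACTIVATION_HEIGHT = 800_000  # placeholder height

def must_satisfy_token_rules(prevout_height):
    """True when the prevout was created under post-activation consensus,
    and so must already be a well-formed token output."""
    return prevout_height >= ACTIVATION_HEIGHT
```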

1 Like

Good points @im_uname.

Just to elaborate: there are a bunch of corner cases here, all somewhat related to the way this works.

What do you do if you saved a scriptPubKey to your UTXO db as a node some time ago, and it has the prefix byte? Now some new tx, post-activation, wishes to spend that UTXO. So you deserialize the coin and, lo and behold, it looks like it has the PREFIX_BYTE.

  • What do you do if the SPK passes muster and deserializes correctly as a [tokenData + SPK] byte blob? (Has ok capabilities, positive amount, short-enough commitment, etc). Now there is a “fake” token that can be spent… which is what @im_uname is discussing above…
    • This has implications for token amounts. It’s possible for a category-id’s summed input amount to exceed INT64_MAX if someone makes a bogus token from pre-activation that has the same category-id as a real token from post-activation… Now your inputs can sum up to >INT64_MAX. This is a caveat for node implementors to worry about…
  • The other case is what happens if the TXO fails to deserialize because, while it used to just be an opaque byte blob that contains SPK bytes, now there are new rules about SPKs with the prefix having to follow the new token binary format (commitment length, etc). So maybe the TXO doesn’t deserialize correctly as a token… but you thought it was a token!
    • Do you deserialize it anyway and just throw all the bytes (including the PREFIX) into scriptPubKey (as it was when it was created, really)… ? (Note this would be an unspendable TXO, but still the behavior needs to be specified).
    • I say this only as a corner case because one can imagine some node software assuming “illegal” PREFIX_BYTE containing SPK’s are impossible if they come from the internal node DB, and if that assumption doesn’t hold one can imagine software crashing when it hits the impossible condition it thought was impossible…
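The INT64_MAX caveat above can be handled with an explicit guard when summing input amounts per category; a sketch (the data shapes and names here are illustrative, not from any implementation):

```python
# Sketch: summing fungible token input amounts per category with an explicit
# int64 guard, since pre-activation "fake" token outputs could push a
# category's input sum past the limit post-activation rules assume.
INT64_MAX = 2**63 - 1

def sum_token_inputs(inputs):
    """inputs: iterable of (category_id, amount) pairs. Returns a dict of
    {category_id: total} or raises if any running sum overflows int64."""
    totals = {}
    for category, amount in inputs:
        total = totals.get(category, 0) + amount
        if total > INT64_MAX:
            raise OverflowError(f"token amount overflow for category {category.hex()}")
        totals[category] = total
    return totals
```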

There are also other caveats with respect to activation… that are subtle … which I can go into later.

1 Like

Great points, thanks for bringing it up @im_uname and @cculianu! That definitely needs to be addressed in the spec.

One useful observation: any locking bytecode with a PREFIX_TOKEN (0xd0) is currently provably unspendable. That’s not true for all occurrences of 0xd0 in any locking bytecode, but because 0xd0 is an unknown opcode, and an OP_IF/OP_ENDIF block can’t cross from unlocking to locking bytecode (also – push-only enforcement since Nov 2019), we know that 0xd0 cannot be the first byte in any successful locking bytecode evaluation (and this has been the case since Satoshi).

So until the upgrade, all outputs with a 0xd0 prefix are practically equivalent to OP_RETURN outputs, and can reasonably be pruned from the UTXO set at upgrade time. (In fact, implementations could reasonably prune lots of other provably unspendable UTXOs from their databases, but in most other cases that probably wouldn’t be worth the effort, makes UTXO commitments harder to standardize, etc.)

After that, there shouldn’t be any need to keep track of “fake token” outputs. While it still requires some activation logic, at least node implementations don’t have to pay a cost after the upgrade block.

One caveat with this strategy (and, if we go with it, this should be in the spec): any token transactions prepared in advance of the upgrade should use locktime to ensure the transaction isn’t included in a pre-upgrade block. (Even if the transaction is broadcast after the new rules are expected to be in effect, it’s still possible for a backdated re-org to burn those funds.) Of course, creating tokens doesn’t require significant funds, so many users might not care if their token-creating dust gets burned by a malicious re-org (especially people creating tokens in the first few blocks, many will just be upgrade lock-in transactions). But worth mentioning for completeness.

Related, I think this is also the right strategy for handling “improperly encoded token prefixes” after activation. If PREFIX_TOKEN is followed by an invalid token encoding, we can’t really assume the locking bytecode was intending to encode tokens (and, e.g., attempt to somehow slice off the invalid token prefix to allow the BCH dust to be spent). Instead, it’s sending dust to a provably unspendable output, and can just be treated like any OP_RETURN output. (The transaction would be non-standard anyway due to the unrecognized contract type, so in practice this would only happen if a miner deliberately mined the nonstandard transaction.)
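Putting the two rules together, a node’s classification logic might look like this sketch (PREFIX_TOKEN uses this thread’s draft value of 0xd0, and the token-encoding parser is abstracted into a boolean parameter rather than implemented):

```python
# Sketch of the classification described above; names and shapes are
# illustrative, not from any node implementation.
PREFIX_TOKEN = 0xd0  # draft value used in this thread

def is_provably_unspendable(locking_bytecode, token_rules_active, parses_as_token):
    """Pre-activation: any 0xd0-prefixed locking bytecode is unspendable
    (unknown opcode as the first byte, with push-only unlocking enforced).
    Post-activation: a 0xd0 prefix that fails token parsing is treated
    like an OP_RETURN output."""
    if not locking_bytecode or locking_bytecode[0] != PREFIX_TOKEN:
        return False  # no token prefix: ordinary locking bytecode
    if not token_rules_active:
        return True   # pre-activation: 0xd0-prefixed outputs can never be spent
    return not parses_as_token  # post-activation: invalid encodings stay unspendable
```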

1 Like

You can’t “just prune” things from UTXO following a certain rule that’s later slated to be spendable again - the “pruning because unspendable” needs an activation in itself, else you get a consensus failure.

2 Likes

Would be real nice if we had version locked already. I peeked at Andrew’s old Group code, he has this in the loop that accumulates input amounts:

// no prior coins can be grouped.
if (coin->nHeight < miningEnforceOpGroup.Value())
    continue;

Because they wouldn’t count for the aggregate sum, the TX would later fail for having outputs without inputs to balance against.

@bitjson Thanks for replying. Yes, they are unspendable, and yes we already do nuke other unspendables (at least in BCHN) from the DB – namely the following two rules exist at least in the BCHN codebase:

  • anything beginning with OP_RETURN is just pruned and ignored completely as if it doesn’t exist.
  • any script that happened to end up in the DB and be over 10kb is treated similarly

Since they are unspendable now, and while I do concur in principle with @im_uname that pruning TXOs is a bad look, I could be convinced to prune them.

Still, this means the node now has to keep track of the activation height for this upgrade, so as to disallow TXOs with the PREFIX_BYTE created before the upgrade from wreaking havoc. Which sounds easy, right? But… nodes (at least BCHN) don’t normally operate upgrades based on height (with some rare exceptions from the Satoshi days). They actually use “MTP” … and using MTP they can only answer the question: “Is this upgrade active now?”. They cannot (easily) answer the question: “Was the upgrade active at height X?”. They just don’t think of the blockchain in those terms… So it would require some coding to get right in BCHN at least. Meh.
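For illustration, answering “was the upgrade active at height X?” under MTP activation means computing the median time-past of the 11 blocks ending at X’s parent and comparing it to the activation timestamp; a sketch with a placeholder activation time and hypothetical data shapes:

```python
# Sketch only: ACTIVATION_MTP and the block_timestamps shape are hypothetical.
ACTIVATION_MTP = 1_652_616_000  # placeholder activation timestamp

def median_time_past(timestamps):
    """MTP: median of the previous 11 block header timestamps."""
    window = sorted(timestamps[-11:])
    return window[len(window) // 2]

def upgrade_active_at_height(block_timestamps, height):
    """block_timestamps[h] is the header timestamp of the block at height h.
    MTP activation judges a block by the MTP of its parent's last 11 blocks."""
    return median_time_past(block_timestamps[:height]) >= ACTIVATION_MTP
```

This works retroactively, but it requires header timestamps for arbitrary historical heights – exactly the lookup a “is the upgrade active now?” codebase never needed before.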

At any rate – I think we can all agree on these points:

  • Creation of new TXOs that have the PREFIX_BYTE but that don’t deserialize correctly as token data should be disallowed going forward post-activation. I think the V2 spec says this subtly in 1 small paragraph but perhaps this should be emphasized? We should make this a consensus rule post-activation. In fact, in my implementation I already coded it as such. This just saves us some headaches to have that as a rule… IMHO. (Of course we could not have that as a rule but it’s just cleaner to have it if you ask me… and anyway the spec already declares this, so…)
  • We still have to specify how to handle legacy TXOs (if any??) that happen to have PFX-BYTE (whether they deserialize as valid tokens or not). My preference is to “allow” them at the present time… but I don’t have a strong preference here. We can also just disallow them and do a height check when spending them (we already store TXO confirm height in the DB anyway because reasons). But… like I said earlier… asking the question “Was upgrade X active at height H?” is non-trivial to answer in BCHN codebase at least so it would be a headache…!
    • In the case of TXOs with PFX-BYTE but that don’t really encode a token, all of this is an implementation detail at the end of the day really… I guess…
    • In the case of a hypothetical pre-upgrade-generated TXO with PFX-BYTE but that does “look” like a token – we definitely need to decide now what the spec should be for that! (whether it be YOLO allow and not care or harder-to-implement forbid… either work).

I have a bunch of other notes and caveats and landmines I noticed while implementing that I will summarize later for other implementations to take heed… so as to avoid subtle bugs, etc… but none of them are super critical just “stuff to look out for”…

3 Likes

The problem here is that it doesn’t matter whether you or BCHN or all node teams can be “convinced to prune them” – you need a coordinated activation in order for that to be clean, and said activation, unlike consensus rules, also isn’t enforceable; one would have to devise some clever strategy to enforce it.

Without such an activation, when consensus rules make them spendable again you’ll have unknown proportions of the network pruning the TXOs and some parts that haven’t, risking network fracturing and consensus failure. It’s not just a “bad look”.

To prevent such a disaster from happening one will still need to have consensus rules preventing those TXOs from getting spent (you mentioned it :slight_smile: ) rendering the pruning redundant except for saving a tiny bit of space.

2 Likes

As we discussed elsewhere in more detail, my current take is YOLO-allowing doesn’t actually do harm as long as the spec is clear about how people should handle their tokens downstream.

Without such an activation, when consensus rules make them spendable again you’ll have unknown proportions of the network pruning the TXOs and some parts that haven’t, risking network fracturing and consensus failure. It’s not just a “bad look”

Ok, no prune. True. Duh. Lol.

1 Like