CHIP 2022-02 CashTokens: Token Primitives for Bitcoin Cash

(I think you mean hashPrevouts?) Right, hashPrevouts commits to the full contents of all the transactions with outputs being spent by the current transaction. The main reason value is included separately by BIP143 (and then by BCH’s signing serialization algorithm) is to make verification easy for offline signers. In practice, many offline signers weren’t actually verifying the values from the source transactions because to do so could require transferring, decoding, and inspecting MBs of raw transactions. Committing to the value directly in the signing serialization allows for equivalent security using much simpler offline signing implementations. (Even if all transaction data must be transferred e.g. by keyboard or a QR code.)

I’m not sure if I understand the logic here, but my initial reaction is that scriptCode is already a pretty complicated idea – I think we’re best off leaving it as-is and just committing to the full token prefix directly after value. (And only if SIGHASH_TOKEN is present.)
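To make that placement concrete, here is a minimal sketch: scriptCode is left untouched, and the spent output's token prefix is committed directly after value, gated on a hypothetical SIGHASH_TOKEN flag. Field names and layout here are assumptions for illustration, not final spec.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using Bytes = std::vector<uint8_t>;

static void append(Bytes &out, const Bytes &in) {
    out.insert(out.end(), in.begin(), in.end());
}

static void appendLE64(Bytes &out, uint64_t v) {
    for (int i = 0; i < 8; ++i) out.push_back(uint8_t(v >> (8 * i)));
}

// Middle of the BIP143-style preimage: scriptCode stays as-is, and the
// spent output's token prefix is committed right after the 8-byte value,
// only when the (hypothetical) SIGHASH_TOKEN flag is set.
Bytes preimageMiddle(const Bytes &scriptCode, uint64_t value,
                     const Bytes &tokenPrefix, bool sighashToken) {
    Bytes out;
    append(out, scriptCode);      // scriptCode of the input (unchanged)
    appendLE64(out, value);       // value of the output spent by this input
    if (sighashToken) {
        append(out, tokenPrefix); // full token prefix of the spent output
    }
    return out;
}
```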

2 Likes

I thought that it commits only to prevout refs (which by themselves commit to the prevout satoshi amount and locking script through TXID). So, I meant to include the token prefix same way as value, so that you can see what you’re signing without having to obtain parent TXes. We’re thinking the same here, but I had a different place for it in mind.

This sounds good, and cleaner :slight_smile:

No, there I meant newly created outputs, although it was not really applicable to your comment about offline signing. The token payload of newly created outputs needs to make it into the signature preimage(s), too. We should spell out how both prevouts and new outputs are to be handled where signing is concerned.

1 Like

Finally read through all this as part of my review. The amount of constructive debate and collaboration going on is amazing.

2 Likes

Just had time to finish digesting the proposal and I finally get it. My initial impression a couple of months ago was uncertain to negative, as it seemed to be a shortcut that added application logic to layer 1. However the reasoning provided around the elemental nature of the two types of tokens is sound and the CHIP is really impressive and detailed. I hope it succeeds and I’ve gotten all excited about the possibilities for things to build on top.

4 Likes

Great to know! Can’t wait till Jason wraps up his other projects so we can push this forward together!

PS @rnbrady can we add that as a quote for the CHIP?

1 Like

Yes, sure thing and let me know how else I can help.

2 Likes

Making a comment here as placeholder that we need to address activation strategy in the CHIP.

While nonstandard, it is possible that transactions generated before activation will contain “valid-looking” token outputs, and since the ruleset doesn’t exist before activation, they can be “wrong” in a wide variety of ways - including but not limited to duplicate Category IDs, invalid-when-summed amounts, nonsense NFT capabilities, nonexistent genesis, and so on.

Declaring these pre-activation outputs as invalid might be simple as a thought experiment, but incurs technical debt in practice - we’ll need a separate pass checking UTXO height to determine the validity of all token transactions. Not ideal…

… but these should not be a big deal in practice even if we adopt the ruleset as is! These “fake” UTXOs can simply be declared “valid if they exist at activation”. They may lead to shenanigans involving any categories that use txids that exist pre-activation, but there is a clean way around it: implementers (of wallets, services, smart contract providers, etc.) mostly need to be aware to do a de novo two-step genesis for any tokens they generate; the shenanigans only apply to actual users if they venture into directly using pre-activation UTXOs for genesis.

Does this mean we don’t actually need to do much? Yes, but we do also need to address this point in the specs, lest people get confused about what the best practices are.

1 Like

Good points @im_uname .

Just to elaborate: there are a bunch of corner cases here, all somewhat related to the way this works.

What do you do if you saved a scriptPubKey to your UTXO db as a node some time ago, and it has the prefix byte? Now some new tx, post-activation, wishes to spend that UTXO. So you deserialize the coin and, lo and behold, it looks like it has the PREFIX_BYTE.

  • What do you do if the SPK passes muster and deserializes correctly as a [tokenData + SPK] byte blob? (Has ok capabilities, positive amount, short-enough commitment, etc). Now there is a “fake” token that can be spent… which is what @im_uname is discussing above…
    • This has implications for token amounts. It’s possible for the total amount of a category-id to exceed INT64_MAX if someone makes a bogus token pre-activation that has the same category-id as a real token from post-activation… Now your inputs can sum up to >INT64_MAX. This is a caveat for node implementors to worry about…
  • The other case is what happens if the TXO fails to deserialize, because while it used to just be an opaque byte blob that contains SPK bytes, now there are new rules about SPKs with the prefix having to follow the new token binary format (commitment length, etc). So maybe the TXO doesn’t deserialize correctly as a token… but you thought it was a token!
    • Do you deserialize it anyway and just throw all the bytes (including the PREFIX) into scriptPubKey (as it was when it was created, really)… ? (Note this would be an unspendable TXO, but still the behavior needs to be specified).
    • I say this only as a corner case because one can imagine some node software assuming “illegal” PREFIX_BYTE containing SPK’s are impossible if they come from the internal node DB, and if that assumption doesn’t hold one can imagine software crashing when it hits the impossible condition it thought was impossible…
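The two branches above (parses as a [tokenData + SPK] blob vs. fails to deserialize) can be sketched roughly as follows. The byte layout here (prefix byte, 32-byte category, one-byte commitment length) is a simplified stand-in for illustration, not the actual token encoding:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

using Bytes = std::vector<uint8_t>;
constexpr uint8_t PREFIX_TOKEN = 0xd0; // draft prefix byte used in this thread

struct ParsedTxo {
    Bytes tokenPrefix;     // empty if the output carries no token data
    Bytes lockingBytecode; // remaining scriptPubKey bytes
};

// Simplified illustrative layout: [0xd0][32-byte category][1-byte
// commitment length][commitment...] then locking bytecode. Returns
// nullopt for the "looked like a token but fails to deserialize" case.
std::optional<ParsedTxo> parseTxo(const Bytes &spk) {
    if (spk.empty() || spk[0] != PREFIX_TOKEN) {
        return ParsedTxo{{}, spk};            // ordinary output, no token data
    }
    if (spk.size() < 34) return std::nullopt; // too short to be a token
    size_t commitmentLen = spk[33];
    if (commitmentLen > 40) return std::nullopt; // over the commitment limit
    size_t prefixLen = 34 + commitmentLen;
    if (spk.size() < prefixLen) return std::nullopt;
    return ParsedTxo{Bytes(spk.begin(), spk.begin() + prefixLen),
                     Bytes(spk.begin() + prefixLen, spk.end())};
}
```

A node has to pick a defined behavior for the nullopt branch (e.g. treat the whole blob as an unspendable scriptPubKey) rather than assume it is impossible.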

There are also other caveats with respect to activation… that are subtle … which I can go into later.

1 Like

Great points, thanks for bringing it up @im_uname and @cculianu! That definitely needs to be addressed in the spec.

One useful observation: any locking bytecode with a PREFIX_TOKEN (0xd0) is currently provably unspendable. That’s not true for all occurrences of 0xd0 in any locking bytecode, but because 0xd0 is an unknown opcode, and an OP_IF/OP_ENDIF block can’t cross from unlocking to locking bytecode (also – push-only enforcement since Nov 2019), we know that 0xd0 can not be the first byte in any successful locking bytecode evaluation (and this has been the case since Satoshi).

So until the upgrade, all outputs with a 0xd0 prefix are practically equivalent to OP_RETURN outputs, and can reasonably be pruned from the UTXO set at upgrade time. (In fact, implementations could reasonably prune lots of other provably unspendable UTXOs from their databases, but in most other cases that probably wouldn’t be worth the effort, makes UTXO commitments harder to standardize, etc.)

After that, there shouldn’t be any need to keep track of “fake token” outputs. While it still requires some activation logic, at least node implementations don’t have to pay a cost after the upgrade block.

One caveat with this strategy (and if we go with it, should be in the spec) any token transactions prepared in advance of the upgrade should use locktime to ensure the transaction isn’t included in a pre-upgrade block. (Even if the transaction is broadcasted after the new rules are expected to be in effect, it’s still possible for a backdated re-org to burn those funds.) Of course, creating tokens doesn’t require significant funds, so many users might not care if their token-creating dust gets burned by a malicious re-org (especially people creating tokens in the first few blocks, many will just be upgrade lock-in transactions). But worth mentioning for completeness.
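The locktime precaution reduces to a one-line finality check. This sketch uses the BIP113 rule (time-based locktimes are compared against the block's median-time-past) and ignores the nLockTime == 0 and all-sequences-final special cases:

```cpp
#include <cassert>
#include <cstdint>

// Simplified finality check per BIP113: a time-based nLockTime set at or
// after the upgrade's activation time keeps the transaction out of any
// block whose median-time-past hasn't passed it - i.e. out of any
// pre-upgrade block, even across a re-org.
bool lockTimeSatisfied(uint32_t nLockTime, int64_t blockMTP) {
    return int64_t(nLockTime) < blockMTP; // final only in later blocks
}
```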

Related, I think this is also the right strategy for handling “improperly encoded token prefixes” after activation. If PREFIX_TOKEN is followed by an invalid token encoding, we can’t really assume the locking bytecode was intending to encode tokens (and, e.g., attempt to somehow slice off the invalid token prefix to allow the BCH dust to be spent). Instead, it’s sending dust to a provably unspendable output, and can just be treated like any OP_RETURN output. (The transaction would be non-standard anyway due to the unrecognized contract type, so in practice this would only happen if a miner deliberately mined the nonstandard transaction.)

1 Like

You can’t “just prune” things from UTXO following a certain rule that’s later slated to be spendable again - the “pruning because unspendable” needs an activation in itself, else you get a consensus failure.

2 Likes

Would be real nice if we had version locked already. I peeked at Andrew’s old Group code, he has this in the loop that accumulates input amounts:

// no prior coins can be grouped.
if (coin->nHeight < miningEnforceOpGroup.Value())
    continue;

Because they wouldn’t count for the aggregate sum, the TX would later fail for having outputs without inputs to balance against.
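Expanding that excerpt into a self-contained sketch of the mechanism (names are illustrative, not Andrew's actual code): coins created before the enforcement height never contribute to the input-side token sum, so a transaction whose token outputs are backed only by pre-activation coins fails the balance check.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct Coin {
    int32_t nHeight;     // height at which the coin was created
    int64_t tokenAmount; // token amount the coin claims to carry
};

// Coins created before the enforcement height are skipped, so "fake"
// pre-activation tokens can never balance against token outputs.
int64_t sumTokenInputs(const std::vector<Coin> &coins, int32_t enforceHeight) {
    int64_t total = 0;
    for (const Coin &coin : coins) {
        if (coin.nHeight < enforceHeight) continue; // no prior coins count
        total += coin.tokenAmount;
    }
    return total;
}
```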

@bitjson Thanks for replying. Yes, they are unspendable, and yes we already do nuke other unspendables (at least in BCHN) from the DB – namely the following two rules exist at least in the BCHN codebase:

  • anything beginning with OP_RETURN is just pruned and ignored completely as if it doesn’t exist.
  • any script that happened to end up in the DB and be over 10kb is treated similarly

Since they are unspendable now, and while I do concur in principle with @im_uname that pruning TXOs is a bad look, I could be convinced to prune them.

Still, this means the node now has to keep track of the activation height for this upgrade, so as to disallow TXOs with the PREFIX_BYTE created before the upgrade from wreaking havoc. Which sounds easy, right? But… nodes (at least BCHN) don’t normally operate upgrades based on height (with some rare exceptions from the Satoshi days). BCHN actually uses “MTP” … and using MTP it can only answer the question: “Is this upgrade active now?”. It cannot (easily) answer the question: “Was the upgrade active at height X?”. It just doesn’t think of the blockchain in those terms… So it would require some coding to get right, in BCHN at least. Meh.
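For what it's worth, answering “was the upgrade active at height X?” under MTP activation means recomputing the median-time-past of the 11 blocks ending just before X. A hedged sketch, with illustrative names rather than BCHN's actual interfaces:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Median-time-past of the up-to-11 blocks ending at `height`.
int64_t medianTimePast(const std::vector<int64_t> &blockTimes, size_t height) {
    size_t n = std::min<size_t>(11, height + 1);
    std::vector<int64_t> window(blockTimes.begin() + (height + 1 - n),
                                blockTimes.begin() + height + 1);
    std::sort(window.begin(), window.end());
    return window[window.size() / 2];
}

// MTP activation governs a block once its *parent's* MTP has reached the
// activation time, so the retroactive question needs the MTP at height - 1.
bool wasUpgradeActiveAtHeight(const std::vector<int64_t> &blockTimes,
                              size_t height, int64_t activationTime) {
    if (height == 0) return false;
    return medianTimePast(blockTimes, height - 1) >= activationTime;
}
```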

At any rate – I think we can all agree on these points:

  • Creation of new TXOs that have the PREFIX_BYTE but that don’t deserialize correctly as token data should be disallowed going forward post-activation. I think the V2 spec says this subtly in 1 small paragraph but perhaps this should be emphasized? We should make this a consensus rule post-activation. In fact, in my implementation I already coded it as such. This just saves us some headaches to have that as a rule… IMHO. (Of course we could not have that as a rule but it’s just cleaner to have it if you ask me… and anyway the spec already declares this, so…)
  • We still have to specify how to handle legacy TXOs (if any??) that happen to have PFX-BYTE (whether they deserialize as valid tokens or not). My preference is to “allow” them at the present time… but I don’t have a strong preference here. We can also just disallow them and do a height check when spending them (we already store TXO confirm height in the DB anyway because reasons). But… like I said earlier… asking the question “Was upgrade X active at height H?” is non-trivial to answer in BCHN codebase at least so it would be a headache…!
    • In the case of TXOs with PFX-BYTE but that don’t really encode a token, all of this is an implementation detail at the end of the day really… I guess…
    • In the case of a hypothetical pre-upgrade-generated TXO with PFX-BYTE but that does “look” like a token – we definitely need to decide now what the spec should be for that! (whether it be YOLO allow and not care or harder-to-implement forbid… either work).

I have a bunch of other notes and caveats and landmines I noticed while implementing that I will summarize later for other implementations to take heed… so as to avoid subtle bugs, etc… but none of them are super critical just “stuff to look out for”…

3 Likes

the problem here is it doesn’t matter if you or BCHN or all node teams can be “convinced to prune them”, you need a coordinated activation in order for that to be clean - and said activation, unlike consensus rules, also isn’t enforceable - one will have to devise some clever strategy to enforce them.

Without such an activation, when consensus rules make them spendable again you’ll have unknown proportions of the network pruning the TXOs and some parts that haven’t, risking network fracturing and consensus failure. It’s not just a “bad look”.

To prevent such a disaster from happening one will still need to have consensus rules preventing those TXOs from getting spent (you mentioned it :slight_smile: ) rendering the pruning redundant except for saving a tiny bit of space.

2 Likes

As we discussed elsewhere in more detail, my current take is YOLO-allowing doesn’t actually do harm as long as the spec is clear about how people should handle their tokens downstream.

Without such an activation, when consensus rules make them spendable again you’ll have unknown proportions of the network pruning the TXOs and some parts that haven’t, risking network fracturing and consensus failure. It’s not just a “bad look”

Ok, no prune. True. Duh. Lol.

1 Like

As we discussed elsewhere in more detail, my current take is YOLO-allowing doesn’t actually do harm as long as the spec is clear about how people should handle their tokens downstream.

Yeah, I think so long as we are “strict” about it in that we only allow TXOs that parse correctly and are “legal” in some abstract sense (correct capability byte, <= 40 byte commitment, amount > 0 if pure fungible, etc), then I think that’s “safe” in a way. Just have to be careful when summing up amounts per category ID to catch overflow (but that’s an implementation detail for nodes)… sure. Wouldn’t be catastrophic.
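The overflow caveat amounts to a checked add when accumulating each category's amounts; a minimal sketch:

```cpp
#include <cassert>
#include <cstdint>

// Checked accumulation of token amounts for one category ID: refuse to
// pass INT64_MAX instead of overflowing (which would be undefined behavior
// for signed integers).
bool addTokenAmount(int64_t &total, int64_t amount) {
    if (amount < 0 || total < 0) return false;    // amounts are non-negative
    if (total > INT64_MAX - amount) return false; // sum would overflow
    total += amount;
    return true;
}
```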

And I predict the following: not a single TXO will be intentionally mined in this way. I would be surprised if more than a handful… or even more than 0 will appear between now and activation time. So long as we are very clear in the spec how to handle this, there is little incentive for “griefers” to do this to us.

If we don’t anticipate it happening, and we have bugs in our code… then yes, griefers may do this to us. But just having a plan for this I think is enough to avoid potential attacks from BCH haters… :slight_smile:

3 Likes

@im_uname @bitjson @bitcoincashautist

Another thing we should probably add to the spec: A special rule for coinbase txns.

I propose the following consensus rule post-activation:

  • A coinbase txn should not be allowed to generate any vouts with PREFIX_TOKEN.

This would avoid the situation where miners can endlessly mint for CategoryID 0x0000000000000000000000000000000000000000000000000000000000000000 which would be both fairly useless and also annoying.

Unless we want such a “feature”?

3 Likes

concept ACK on explicitly forbidding coinbase genesis. Using coinbase as input for genesis, though, is harmless/useful and should still be allowed.

1 Like

It is implicitly disallowed because coinbase inputs have prevout index 0xFFFFFFFF, but we only consider inputs with prevout index 0 as genesis candidates. Jason has this line:

Note: coinbase transactions have only one input with an outpoint index of 4294967295, so they must never include a token prefix in any output.

Agreed, it should be explicitly stated it’s disallowed.
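For clarity, the implicit rule is just this check (struct name is illustrative): a genesis candidate must spend outpoint index 0, and a coinbase input's outpoint index is 0xFFFFFFFF, so it can never qualify.

```cpp
#include <cassert>
#include <cstdint>

constexpr uint32_t COINBASE_OUTPOINT_INDEX = 0xFFFFFFFF;

struct Outpoint {
    uint32_t index; // output index within the transaction being spent
};

// Only an input spending outpoint index 0 can create a new token category;
// a coinbase input's index is 0xFFFFFFFF, so a coinbase is never a genesis.
bool isGenesisCandidate(const Outpoint &prevout) {
    return prevout.index == 0;
}
```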

2 Likes

Oh yeah duh… coinbase has prevoutN == 0xffffffff and so it can never be genesis.

So yeah – it can’t be genesis. But I definitely think it should be explicitly stated that a coinbase txn is blanket-forbidden from having the token PFX_BYTE at byte position zero of any of its outputs.

2 Likes