Native Group Tokenization

@Tom I’m sorry to be a little short. If you are truly interested in Group tokens and pursuing first class tokens for BCH I’m super happy to help.

It’s just that in my imagination of how this might work, we’d first “all” (or many of us) agree that first class tokens are a priority. Then the token authors would go off and prepare something. Then they’d present, and we’d pick one.

Instead what’s happening here is you are repeatedly making incorrect assertions that I need to run off and correct. This is inherently antagonistic, and I have to wonder if it’s worth the time, because I don’t know your commitment, or Bitcoin Cash’s as a whole, to adding first class tokens. Is this just idle chatter for you, or do you believe that first class tokens are important for BCH? And in essence, wouldn’t it be better if you asked a question rather than made a claim?


I see some problems with that approach, the main one being that one team preparing something in isolation isn’t the best way to get the best solution, because it lets bad assumptions go on for far too long. Take the approach you chose of making the code easy to read, separated into one file: you now see that I end up worrying about what may simply be implementation details of the BU UTXO implementation.

Instead what we are aiming for (and this is the basic concept behind the network discussions) is a shorter lead time. You have an idea, you talk about it with other devs, you have some dirty proof-of-concept, you present it. Small iterations and repeated feedback from your peers. Then when it gets big enough, you start to include the customers too in this feedback cycle.

This has several benefits, the most obvious being that we challenge the assumptions one always makes about these things early on. Different viewpoints are good, and the earlier in the design process the better. It also has the rubber duck benefit (Rubber duck debugging - Wikipedia). And naturally you get the benefit of talking to people smarter than oneself.

These short loops are, incidentally, the basis of most software development approaches, from agile to XP. Their literature can explain it better if you want to dive in.

This is equally frustrating for me: I want to understand the high level first, and you give me the implementation details, which are unique to BU’s codebase! To extract the actual high-level approach from that is… difficult.

Notice that people are reading along; this thread got linked on reddit just yesterday. My personal opinion is indeed that native tokens (in general) are far superior to SLP, but the important part here is that our conversation is seen by all, and my worry about scaling is something many more people share. While native tokens are important, the money use case is still the root of our chain. Because if bch-is-cash fails, the entire chain fails, and that leads us down the road of extremely high fees like BTC and ETH.

And in the end all your work is for naught if we can’t convince the wider ecosystem that this design should be activated on the network. And that goes back to my earlier suggestion of how to approach this with smaller steps and more devs talking between themselves. When you explain to me (and those reading along) how it actually works and I understand it, it becomes easier to support your work. And soon you have a growing wave of support with many people willing to help. See my writeup of The story of how BCHs 2020 DAA came to be. on how this helps immensely towards actually activating something like group.

The fact is that Jason started doing this, and in mere weeks has more support than you. Because devs won’t support a proposal they don’t understand.


Hi all, I’d like to help here.

I will do my best to understand Group Tokens, and then to present them in a way that Tom requires. Any pointers to where I can find the relevant documentation would be helpful.

The way I see it, Tom cares about the process a lot and refuses to dig deeply into content if it’s not presented according to the process. That’s fine and understandable. We’re doing peer review, but we need to fit it into a form that peers will want to review. So there are 2 ways to go about this: nag Tom to look into it even though it’s not presented the way he requires, or help Andrew present it according to spec. I’d like to help by helping Andrew present it. I started some discussions on Reddit, and that’s all good for bringing attention, but not for packaging it into the form the process needs and moving things forward here.

I will likely need to bother people to explain to me things that need explaining, and I’ll find channels to do that where it doesn’t increase the noise here.

Here’s what I have so far.

Somebody has to do the math for tokens to function. I believe it would be better for the whole ecosystem if miners did it. Do we want such tokens, or not? Because if we don’t want miners to do some arithmetic in a scalable way, we’re stuck with SLP, and that isn’t really taking off, because it lacks competitive advantage and has other issues to do with the hacky way it was implemented, which couldn’t be avoided given the historical context. Now we are here and we could move forward. SLP already competes with other blockchains, so what would be the problem if it also competed with a solution on the same blockchain, one better than other blockchains? The ecosystem would benefit, and maybe we’d attract more users, adoption and talent instead of having them build elsewhere.

This is not a technical argument, but still an important one. Do we want it? Why do we want it? Who will have to “pay” for it? Is the price acceptable for what we’d be getting? What do the miners think about paying that CPU price? What about opportunity cost of not implementing it? Anyway…

Below is how I addressed some concerns on Reddit. I think that’s a start, and I hope I got it right and readable for the layman, but if not, please correct me.

Processing a BCH transaction today can use CPU time that is bounded by a linear function of transaction size.

Linear scaling would continue to hold even if Group Tokens were introduced. Right now I take Andrew’s claim(s) at face value, but it would be nice if others verified it at this stage, and verification would be a must were it to be included in the HF.

A peer-review process. But we have to get the peers to review. Seems like Andrew could use some help motivating his peers to review, or presenting his work in a reviewable way. That’s what prompted me to get the ball rolling.

This includes database operations, since each of these can be performed in linear time and the number of these is bounded linearly by the size of the transaction.

Isn’t it bounded by the number of outputs, though? Size can mean different things. SLP tokens add to the size of transactions, do they not? Anything in the OP_RETURN adds to the size, and it’s not been a problem even though anyone’s allowed to put whatever they want in to increase the size of a transaction, even now.

Thing is, with OP_RETURN, miners don’t have to do anything with that data, so it only increases the kilobytes of data passed around; it doesn’t add cycles. More transaction outputs, linearly more time. Bigger outputs because of more data in each, less than linear. Why? Because miners have to doSomething() with every output, whether it contains a token transaction or not. That doSomething() is quite big: they have to verify the signatures, perform crypto math etc. Every output increases the number of times a miner has to call doSomething() by 1. Adding Group Tokens doesn’t increase that number. It adds a little basic bookkeeping math inside the doSomething() function. It piggy-backs on something miners have to do anyway, and then when all the doSomething() calls are finished it checks the signatures + this group token running total and says whether the TX is valid or not.
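To make the cost argument concrete, here is a toy Python sketch of that idea. Everything here (the `Output` shape, `do_something()` standing in for the doSomething() above) is a hypothetical illustration, not code from any real node:

```python
# Toy cost model: the expensive per-output work runs once per output anyway;
# group bookkeeping only adds a constant-time dict update on top of it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Output:
    value_sats: int
    group_id: Optional[bytes] = None   # None means a plain BCH output
    group_qty: int = 0

def verify_script_and_signatures(out: Output) -> None:
    """Stand-in for the expensive work miners already do for every output."""
    pass

def do_something(out: Output, running_sums: dict) -> None:
    verify_script_and_signatures(out)      # the big, pre-existing part
    if out.group_id is not None:           # the small, added part
        running_sums[out.group_id] = (
            running_sums.get(out.group_id, 0) + out.group_qty
        )
```

The point of the sketch is only that the number of `do_something()` calls is unchanged; tokens change the constant inside each call, not the linear shape.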

Every Group Token TX is also a BCH TX because it has to pay the fee. This is no different than some TX which amounts to 0 BCH and has something in the OP_RETURN: it spends the entirety of its inputs on fees and gets that piece of data written on the blockchain. The only difference is that the OP_RETURN TX doesn’t ask the miner to add a few numbers to check whether its data makes sense. Right now, SLP users/nodes do that math.

Somebody has to do the math for tokens to function. The argument is - it would be better for the whole ecosystem if miners did it.

Processing time would continue to be bounded by a linear function, with slightly altered slope. That’s it. That’d be the cost of processing, if my understanding is correct.

Moreover any changes should not significantly increase the size of the UTXO database.

Why shouldn’t it increase just a little, though? And how do we define what is little and what is significant? We aren’t the stakeholders here. It’s not our CPU and RAM that will have to process this, so we should really be asking nodes & miners that.

Would you work just a little extra to have the best simple token operations on the market? That’s what we have to ask nodes & miners. There are always trade-offs. Will refusing to take a little sacrifice now prevent BCH from achieving its potential? There’s this opportunity cost involved, which grows every day we’re not taking action. Users will enjoy the benefit for “free”, we’re not the stakeholders. All we have to do is pay the fee if we will want to use it. Users will also enjoy the benefit of adoption by both other users and new developers who may come to use our first class tokens. And having proper tokens should help there. They will all pay for the services provided by our blockchain through BCH fees.

There are two limiting possible UTXO implementations: a “fast” one that keeps database entries in RAM hashed by identifier; and a “cheap” one that stores database entries on random access storage.

We’re far behind the hardware. Changing the slope of linear scaling won’t make us get ahead of the hardware just like that, if ever. Maybe this could be argued better but I’m not equipped with arguments right now.

Any need for locking adds huge difficulty to the designer of node software.

Agreed. Thing is, it looks like Group Tokens don’t require locking, at least that’s what I see from recent discussions. When you process each output you have to doSomethingWithOutput(), and this function can be executed for each output in parallel. Then you have to tally the outputs of the transaction, so again we add a little math to doSomethingWithTXes(). This one has to wait for all the output processors to finish anyway, because it has to tally the BCH balance of the TX. So no extra locks there, either.

Where are the locks?
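As a hedged illustration of that no-locks claim, the two phases could look like this in Python. The dict shapes are made up for the sketch, not a real transaction format:

```python
# Phase 1: independent per-output checks in parallel (pure functions, no
# shared state, so no locks). Phase 2: a serial tally that must wait for
# phase 1 anyway, just like the existing BCH balance tally.
from concurrent.futures import ThreadPoolExecutor

def check_output(out):
    """doSomethingWithOutput(): returns (group_id, qty) or None, touching nothing shared."""
    gid = out.get("group")
    return (gid, out.get("qty", 0)) if gid else None

def check_tx(outputs):
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(check_output, outputs))
    sums = {}                      # doSomethingWithTXes(): single-threaded tally
    for r in results:
        if r:
            gid, qty = r
            sums[gid] = sums.get(gid, 0) + qty
    return sums
```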


So I’m working on something higher level, and in the spirit of this guideline I invite anyone interested to get involved!
Here’s the working doc


@andrewstone if you’re willing to dig it up, I’d love to see what I missed. I want to prepare some kind of compendium about group tokens… here’s a start


2017-10-16 Intro

2017-11-19 BUIP

2017-11-22 Chris Pacia’s explanation

2018-05-21 Group Tokenization document

2021-01-27 Jonathan Toomim comments

2021-02-03 Jason’s thread

2021-02-12 BU Poll

2021-02-13 George and Andrew Interview

2021-02-13 We want first class tokens on BCH! Part 1/N, motivation

2021-02-15 We want first class tokens on BCH! Part 2/N, scalability, stakeholders

Looks like I missed quite a bit of discussion over Valentine’s Day weekend! Going to try to compress responses into one post:

Group Tokenization Needs a Spec

@tom and others have been extremely generous with their time, deeply reviewing this topic and several other proposals in this forum.

I don’t think this is about process: Group Tokenization simply does not have a specification. It has a 43 page, mostly-prose Google Doc with no public edit history and very little provided rationale for specific technical decisions. (Edit: In fact, I think it was edited as I wrote this comment? It is now 44 pages.)

Huge segments of the Group Tokenization proposal don’t seem to be formally specified at all: “group authority UTXOs”, authority “capabilities”, “subgroup” delegation, the new group transaction format, “script templates”, and “script template encumbered groups” – all of these are described in prose, but the reader is left to guess about important details. In my review, I tried to assume the best in each case, but I can see why others find that frustrating.

After this discussion, I was reasonably convinced that “native” tokens could be implemented without negatively impacting scaling. I’m not yet sure whether the latest Group Tokenization proposal does so successfully, but we’ll see once a draft specification exists. (I think earlier versions of the proposal did impact scaling, but I can’t verify without a history of spec changes.) Regardless, a complete specification would be the best way to put all these fears to rest.

@andrewstone: would you consider developing an “implementation-ready” specification like PMv3? We need some concise, details-only document of the precise changes and any relevant test vectors. It’s valuable to include rationale, but please provide it in a truncatable “Rationale” section at the end. I think it’s also very important that the specification be source-controlled in a Git repo.

Unanswered Concerns

We still don’t have answers for some of the concerns I mentioned at the beginning of this thread:

So, @andrewstone:

  1. How are you confident that the current Group Tokenization proposal is “complete” and doesn’t need to first be tested in a “userland” system like CashTokens? Or do you expect future upgrades can correct any deficiencies we discover after this Group Tokenization proposal is rolled out?
  2. Can you provide any full example of “group tokens” interacting with covenants?

On (2), I posed a small challenge a couple of weeks ago:

I’m asking this again because in reading your recent comparison of Group Tokens and CashTokens on Reddit, you still seem to think “CashTokens” is an alternative to Group Tokens. I want to suggest: the most interesting use cases allowed by parent transaction introspection (“CashTokens”) are not possible to achieve with Group Tokens. Group Tokens need parent transaction introspection too. I think if you try to implement this covenant (as might be used by a side-chained prediction market) you will find that you need a solution like hashed witnesses.

Unless someone can demonstrate how Group Tokens might avoid the need for hashed witnesses, my current preference is that we first implement a smaller change like PMv3 in May 2022. This would give end-user-developers complete flexibility in implementing creative new token designs at no risk to the network. (And PMv3/hashed witnesses are important for use cases other than tokens, so they will remain valuable regardless of whatever “token solution” ultimately sees the most adoption.)

After some miner-validated token standards emerge on the market, we could eventually choose one like this Group Tokenization proposal to “bless” as the standard (e.g. in May 2023). With good real-world examples, we’d be able to more effectively evaluate features and tradeoffs. It seems premature to optimize specific token use cases – using new, permanent consensus changes and data structures – before we can quantify the value of those optimizations.


That “spec” doc is more than a spec. It is a roadmap with multiple options considered and full of use examples etc.

Step 1 of the roadmap is simple: it involves a new opcode OP_GROUP with simple verification requirements. Yes, it notes that an older spec required looking at other transactions, but the current one does not, so I think it scales well and avoids the problems toomim was talking about.

Actual manipulation of the tokens is achieved by cleverly using the spec above; it’s not part of the consensus-touching spec, but it will be part of a userland spec, a user manual. How to do X? How to mint, how to melt, how to change authority, etc… that all happens in userland. OP_GROUP simply enables such solutions.

The most basic implementation is so simple it’s beautiful and I think that’s confusing everyone, because we don’t know what we’re discussing, step 1 or something down the line which may or may not come after step 1. That’s where a spec for only the step 1 will come in handy, which I volunteer to write when I gather enough knowledge.

From previous correspondence and reading how group tokens are supposed to work: you just attach a special message to each output. It’s so simple it’s beautiful, because every group TX is also a BCH TX, so it uses BCH for double-spend checks, and when it’s time to check the TX balance you piggy-back on the reads, writes and deletes you have to do anyway for BCH. Once you’ve read a group UTXO you add the group amount to some cache, i.e. running sum(s); the outputs you create are tallied against it; and at the end you check the balance of both BCH and whatever group tokens you encountered and give a yes or no for the TX. It’s as simple as that. There are no extra loops or looking at other TXOs. All the magic happens inside existing loops we use to check the BCH balance. I’d be happy if Andrew could confirm my understanding.
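A small sketch of how I understand that piggy-back check, in Python. The UTXO shapes and the simple in-minus-out rule (no mint/melt authorities involved) are my assumptions, pending Andrew’s confirmation:

```python
# Sums are updated during the UTXO reads/writes the node performs anyway;
# the verdict is checked right where the BCH fee/balance check already runs.
def tx_verdict(input_utxos, output_utxos):
    sums = {}
    for u in input_utxos:            # reads (and later deletes) happen anyway
        if u.get("group"):
            sums[u["group"]] = sums.get(u["group"], 0) + u["qty"]
    for u in output_utxos:           # writes happen anyway
        if u.get("group"):
            sums[u["group"]] = sums.get(u["group"], 0) - u["qty"]
    # Without mint/melt authority, every group must net to exactly zero.
    return all(s == 0 for s in sums.values())
```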

Now, if you want to be a group token block explorer, only then do you have to index the tokens, but that can be done separately; that’s userspace, not a node/mining job. You can’t have tokens without somebody doing the work, so the question is: who does which part of the job? With SLP it’s all userland. With group tokens, miners will do a tiny but important part: the basic accounting equation.


Is BCH going to compete with Bitcoin, Ethereum, neither, or both? Personally I think the answer is “Bitcoin – by doing everything it does, and p2p cash as well”. Other projects are already competing with Ethereum, and one of them might succeed.


BCH already competes with BTC for the original use case, which is P2P Cash. BTC can’t even compete there anymore, so it’s no competition. The narrative around BTC changed to “store of value”, “anything hedge”, “new asset class”, “digital gold” etc. That narrative is too strong, and big tech and big finance are getting in, so I think that’s the way it will be. Any competition with BTC is competition with the narrative, not the technology. We can’t compete with that narrative, and we shouldn’t. BCH should become a store of value organically, as a side-effect of being used as cash because it works well.

We see many other cryptos saying they enable instant cheap transactions that could be seen as competition, but where are their transactions? BCH seems to have an advantage there because it has boots on the ground; it has volunteers working hard every day to bring it out there, and it’s taking root. Of all the cryptos whose tech enables cash, BCH is the only one that’s actually doing it, because it’s bringing in users. I love to look at this map and see places where BCH acceptance is concentrated. We need concentration, because that way we reduce churn. We need to continue to build the narrative about cash, so that when someone thinks crypto/digital cash they think Bitcoin Cash!

Ok, the above is more for Reddit, but what does it have to do with Group Tokens? Everything. GTs are a way to expand the scope of “cash”. GTs are 1st class tokens, meaning they enjoy the same efficiencies that BCH has: miner enforcement, SPV wallets, 0-conf, etc. Cash is not just 1 currency. Having 1st class tokens would enable multiple currencies on the BCH blockchain, all carried around by small amounts of BCH and paying their fees in BCH. GTs are NOT smart contracts. But future smart contracts could use them alongside BCH.


Here’s an interesting argument about the UTXO set. The fact is that every NGT UTXO is also a BCH UTXO. At the beginning I saw this as an interesting side-effect, and now I see it as an important feature, because it provides an economic incentive to consolidate NGT UTXOs if the token value disappears. If the NGTs have value, there’s even more reason to consolidate: you’re consolidating both NGT and BCH amounts in the same TX. Say you have 10 NGT UTXOs: you can consolidate them all into 1 NGT UTXO and choose what to do with the excess BCH you freed, either keep it colored or send it elsewhere as 1 more pure BCH UTXO. This way you claim “free” BCH for consolidating NGT UTXOs! If you want to split an NGT UTXO, you have to lock up more BCH.

Hey all! We now have a high-level doc outlining the requirements for enabling group token semantics. It’s one and a half A4 pages, should be easy to read, and enough to let us assess the impact on scaling etc., i.e. start the talk at a high level; as everyone’s understanding catches up, we can work our way down to design choices and implementation details, and in a way we will together reinvent group tokens.

I’m gonna embed the doc here below, and later edit as required. Meanwhile, Andrew is working on a full spec which you can find here. He already had some comments on the doc below, but the essence is good already. I need to read up more to implement those comments for this doc and it can be part of this reinvention process.

Native Group Tokens High Level Requirements Overview

Script Requirements

OP_GROUP enables storage of data in a place preceding normal script. That’s all
that Script has to check. That data is given meaning in the consensus checking
part. This data could be stored in a new transaction field all the same.

Grouped transaction output scripts MUST follow the following format (<>
brackets indicate data pushes):

<Group ID> <quantity> OP_GROUP [normal script constraints],

Consensus Requirements

When verifying an individual transaction, perform the below in addition to
normal processing.

For every input seen:

  • If it has the OP_GROUP opcode then:
    • Check whether fields are of valid size. Fail the transaction if not.
    • If it’s a group token then add group balances to running sums.
    • If it’s a group authority then OR its authority flags into the running flags.
  • Else do nothing.

For every output seen:

  • If it has the OP_GROUP opcode then:
    • Check whether fields are of valid size. Fail the transaction if not.
    • If it’s a group token then add group balances to running sums.
    • If it’s a group authority then OR its authority flags into the running flags.
  • Else do nothing.

Finally, check the validity of the transaction as a whole:

  • If an authority appears from nothing AND group genesis requirement is NOT met
    then fail the transaction.
  • Else check whether the authority “balances” out, i.e. only lower or equal authority
    can be found in the output flags. Fail the transaction if not met.
  • Check token balance taking into consideration authority found in the input and
    fail the transaction if criteria not met.
    • No authority: input group token balance == output group token balance
    • Mint authority: input group token balance <= output group token balance
    • Melt authority: input group token balance >= output group token balance

That’s it. Whatever Native Group Token semantic details will be designed, they will
be implemented in one of these 3 places above. This high-level description
should be sufficient to evaluate the impact on scaling.
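The three phases above can be sketched in Python like this. The dict shapes, field names, and flag values are assumptions made for illustration; the real encoding is whatever the spec ends up defining:

```python
# Hedged sketch of the consensus check: scan inputs, scan outputs, then
# verify authority balance and token balance for the transaction as a whole.
MINT, MELT = 0x01, 0x02            # hypothetical authority flag bits

def valid_field_sizes(u):
    """Stand-in for the field-size checks; always passes in this sketch."""
    return True

def validate_group_tx(inputs, outputs, genesis_ok=False):
    in_sum, out_sum, in_auth, out_auth = {}, {}, {}, {}

    def scan(utxos, sums, auths):
        for u in utxos:
            gid = u.get("group")
            if gid is None:
                continue                                    # plain BCH: do nothing
            if not valid_field_sizes(u):
                return False                                # fail the transaction
            if u.get("authority"):
                auths[gid] = auths.get(gid, 0) | u["flags"]   # OR into running flags
            else:
                sums[gid] = sums.get(gid, 0) + u["qty"]       # add to running sums
        return True

    if not (scan(inputs, in_sum, in_auth) and scan(outputs, out_sum, out_auth)):
        return False
    # Authority must "balance out": outputs may only carry flags the inputs had.
    for gid, flags in out_auth.items():
        have = in_auth.get(gid, 0)
        if have == 0:
            if genesis_ok:
                continue               # genesis may create fresh authority
            return False               # authority appearing from nothing
        if flags & ~have:
            return False               # only lower-or-equal authority allowed
    # Token balances, relaxed by any mint/melt authority found in the inputs.
    for gid in set(in_sum) | set(out_sum):
        i, o = in_sum.get(gid, 0), out_sum.get(gid, 0)
        a = in_auth.get(gid, 0)
        if a & MINT and a & MELT:
            pass                       # both authorities: any balance
        elif a & MINT:
            if i > o:
                return False           # mint: in <= out
        elif a & MELT:
            if i < o:
                return False           # melt: in >= out
        elif i != o:
            return False               # no authority: in == out
    return True
```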

User Requirements

Users can decide whether to support the Native Group Token semantics or not. If
it will not be supported, then NGT UTXOs will be unspendable by the user.

Non-supporting wallets should at least check for the presence of OP_GROUP and
deal with such outputs in a way that makes sense; otherwise, regular transactions
created by the wallet could fail by accidentally including an OP_GROUP UTXO as an input.

Assuming the majority of wallets already match UTXO scripts against known patterns, this
shouldn’t be an issue, as they will simply ignore those UTXOs.

It is recommended that non-supporting wallets at least show them as non-spendable BCH
balance because showing 0 would hide the fact that there’s still something in there.

User Manual

Should the user choose to support NGTs, then they should support, as a minimum:

  • Preservation of authority UTXOs i.e. if you use one as input, you must create
    one as change output.
  • Tracking wallet NGT balances which is as simple as summing UTXOs that belong to the wallet
  • Building ordinary NGT transactions, which are transactions containing OP_GROUP and respecting the consensus rules shown above
  • Melting NGT UTXOs

While advanced features won’t be supported this way, at least the end user will
be protected from accidentally giving up authority or unknowingly passing it
on to other parties. If end users don’t want the tokens, they can burn the tokens
and claim the locked BCH.


This document is placed in the public domain.
Attribution neither required nor desired.


Andrew Stone


Thanks for this succinct and clear overview.

One thing that is unclear to me however is how authorities work - it seems these are tied to UTXOs, but it is unclear how one can see whether UTXOs have mint/melt permissions.

I don’t think melt authority should be a thing. Anyone should be able to melt and access their own BCH: token issuers should not be able to hold the underlying BCH from other people hostage. Bitcoin Cash also allows people to burn their own BCH, so it would be strange if you wouldn’t be allowed to burn your own tokens.


Yes, there are 2 kinds of group UTXOs: authority and token. An authority UTXO reuses the amount field to store authority bits. If a particular authority bit exists in any of the input UTXOs, then the creation of other authority UTXOs in the outputs may be allowed, and circumventing the sum_in = sum_out rule may be allowed. This also means that if you don’t want to give up authority, you need to always create a “change” UTXO to preserve it. I think this was not the 1st idea, but he worked his way to this, and I love it for its elegance.
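To illustrate the amount-field reuse, here is a sketch of one possible encoding. The specific bit positions are my assumptions, not the spec’s:

```python
# Hypothetical bit layout: an authority UTXO reuses the quantity field to
# store capability flags instead of a token amount.
AUTHORITY = 1 << 63   # assumed marker bit distinguishing authority from quantity
MINT      = 1 << 0
MELT      = 1 << 1
BATON     = 1 << 2    # right to pass authority on to new authority UTXOs

def is_authority(amount_field):
    return bool(amount_field & AUTHORITY)

def may_create(output_flags, input_flags):
    """A "change" authority output may only carry flags the inputs already had."""
    return (output_flags & ~input_flags) == 0
```

This is where the “lower or equal authority” rule from the high-level doc shows up: `may_create` is just a bitwise subset check.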

I don’t think melt authority should be a thing.

My view is that it should be a thing, but anyone should be allowed to create melt authority from “nothing”. If you simply allowed the sum of token outputs to be less than the inputs, then people could accidentally burn their tokens. With BCH you have to go out of your way to burn. That should continue to be the case with tokens. Example:

TX     Inputs                           Outputs
 1     0.01 BCH UTXO                    NGT melt authority UTXO (0.01 BCH wrapped)
 2     NGT melt (0.01 BCH)              0.02 BCH UTXO
       100 NGT token UTXO (0.01 BCH)

Anyway, I’m asking Andrew about that to clarify. He thought of a “fencing” feature which would enable people to hold it hostage as a feature, I think. Like, you make group tokens that carry a significant amount of BCH, so you could color a bunch of BCH which only folks with authority could take out, or you could only move the BCH inside the fence. I’ll repeat the question here and add the answer when I get it:

Maybe we should reconsider and let people melt and give up the fencing feature so BCH can’t get stuck with worthless tokens. I imagine you already thought about it, but would be good to share the rationale.


That min BCH feature is called the “dust limit”, BTW. WRT melting, there are future smart contract reasons to require a melt constraint script to be met. They are basically along the lines of “if you have X you can melt it to get Y”. The group creator can give melt permission to everyone by creating a melt authority that has no constraints… they then just toss a bunch of these onto the blockchain and anyone can use one. Also, note that the BCH in the UTXO isn’t constrained by the group (unless a flag is set). So you could extract all the BCH down to the dust limit without melting. One reason for this is fees. Group UTXOs can be “charged up” with enough BCH to pay a few transfers’ worth of fees (if desired). Doing this keeps txs smaller because you don’t have to pull in a BCH-only UTXO and pay yourself change just to pay fees, so long as you can use some from the token UTXO.


I’m talking about extracting the dust, too. Imagine you have 1000 token UTXOs and the token becomes worthless. If you could give yourself melt authority regardless of the token creator’s wishes, you could melt them all into 1 normal BCH UTXO. On second thought, without melt authority you can simply consolidate them down to 1 token UTXO and 1 BCH UTXO, which would extract 999 dust amounts. If the fence flag were enabled, you could consolidate them to 1 token UTXO but couldn’t recover the BCH, in which case there’s no incentive to do it if the token is abandoned, and the UTXO set is left with 1000 zombie UTXOs.


Sounds about right.

WRT being unable to extract the “fenced” BCH – today you can send BCH to unspendable addresses where it will be stuck forever… there’s no real difference.


Difference is in incentives, I guess. There’s no utility in burning BCH to unspendable addresses. There is utility in creating lots of tokens for your project, so that’s OK; we want utility. And if the project fails and the tokens become worthless, all those UTXOs become zombies without any incentive to consolidate, and that could be a problem because we lose the ability to clean up. Removing BCH from the circulating supply could be seen as “paying” for their tombstone in the UTXO set. 1M zombies would remove 5.46 BCH from the supply.

If you want to prevent accidental burning by wallets unaware of OP_GROUP then I suggest to simply require a new transaction version (i.e. make any use of OP_GROUP invalid in the existing transaction versions 1 and 2). Actually, on second thought, accidental burning is impossible anyway, because wallets that don’t understand OP_GROUP will not recognise the public key script and hence will not believe the coins are part of the wallet. (Unlike SLP, where the tokens look like normal wallet coins.)

Then you get a much simpler system: no more dealing with melt authorities, and the only remaining authority is mint authority which can be encoded simply as zero/null quantity.


Andrew told me the same just now:

Actually nonsupporting wallets won’t even recognize a group UTXO as one of “theirs” (because this happens by matching the UTXO script against known patterns) so no explicit checking is needed.

So I will update that in the doc above. I imagine a wallet could still mess it up with an awkward implementation, but from Andrew’s answer I figure that wasn’t the main motivation for requiring melt authority anyway. Check out my updated post above.

the only remaining authority

He imagined a few more authorities, like the authority to give sub-authorities, which would enable a lot of interesting functionality. I’m thinking about producing something like a feature tree: this authority enables this functionality down the line, and then we discuss what we want and what the trade-offs are.

Is BATON authority the same as CCHILD referred to in ALL_PERM_BITS definition?

Should be; it looks like an error, maybe he was changing the names of things and forgot that one. I’m not sure whether he considers the doc in progress or complete. I did say he’s “working on a full spec” :)

Yes. I am currently updating the specifications in some ways. One is to move to more modern terminology. SLP borrowed the Group system and renamed it “batons”; I named it “authorities”. It’s funny because those two names capture the 2 essential properties of the system. I like their name “baton” and it’s widely known. So I am moving the specs to use the term “Authority Baton”, and “baton” in particular when that action (passing the authority from an input to an output UTXO) happens. So the CCHILD bit becomes the BATON bit.

Please understand that the Group Tokenization document likely lays out the content of a few hard forks. In particular there’s a natural separation between basic tokens and enhancing BCH script to handle robust smart contracts.

Unless the community really wants to make all the changes at once, I’ll be proposing first just the Group changes (so none of the OP_TEMPLATE, OP_PUSH_TX_STATE, or OP_EXEC changes). But if you read and understand those sections, you can get a good understanding of where we need to go to have robust smart contracts, irrespective of whether Group is deployed.


So to clarify the melt rationale: it is not the same as burning. From talking with Andrew, it was never about error prevention; it was just that I saw it could be used like that, among other things. You don’t need any authority to burn. You can send both the token+BCH from a UTXO to 1bitcoineater and burn them forever together. Or use OP_RETURN to burn them.

Melting is another animal: it burns only the token and frees the BCH. But since BCH is free to cross group borders, you can free your BCH without melting by combining multiple token UTXOs into a single token UTXO and recovering the excess BCH. Like, if you had 1000 token UTXOs, you consolidate them into 1 token UTXO locked with only the 546 sats, and with this operation you extract 999×546 sats into a pure BCH UTXO. This also means that you can charge token UTXOs with more than the minimum, so they can pay the fee themselves without having to include a pure BCH input alongside.
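The consolidation arithmetic above, worked out (fees ignored for simplicity; 546 sats is the current dust limit):

```python
# Consolidate n dust-level token UTXOs into one token UTXO kept at the dust
# limit; everything above that is freed as a pure BCH output.
DUST = 546

def freed_sats(n_token_utxos, sats_each=DUST):
    return n_token_utxos * sats_each - DUST

# 1000 dust-level token UTXOs free 999 dust amounts: 999 * 546 = 545,454 sats
```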

Fencing locks the border but also disables token amounts. This creates colored BCH, where only authorities can add or remove coloring.

So, these authorities enable a lot of versatility in how we can use group tokens. There’s more about that in the functional description doc.


You are right. Burning is sending a coin or token to an output that is either provably or convincingly something that can never be spent. But that UTXO lives on-chain forever, and it may actually take human input to decide whether the token is burnt (a machine likely won’t understand that 1tokeneater… is a probable burn, unless it’s been specifically coded for that address).

Melting removes the UTXO. So it’s easy for machines to identify, and it reduces the UTXO database size.