Pre-release of a new CHIP-SubSatoshi

This is a pre-release of a new CHIP; please share your opinions.

After several months of considering alternatives, I believe this is by far the cleanest and easiest way to approach this feature. It is clean because it requires only small changes in most apps, and even for full nodes the changes will be minor.

Who shares my optimism to get this enabled in May 2025?


This is a similar proposal to Mankind needs fractional satoshis · Joannes Vermorel's blog, but with a simple decimal adjustment, which might be a better fit for financial software with a lot of downstream apps than Vermorel's powers-of-two ("naks") suggestion.

Looking forward to the discussions around this!


It’s really nice that the 8 bytes can get us 1000x: we could maybe denote amounts as millisats (mSats)?

JS implementations may have to be careful about using Number to store amounts, but I think most of the ecosystem has already started trending towards BigInt anyway.
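To illustrate the Number caveat, here is a small sketch (the constants are just the 21M BCH supply cap, nothing from the CHIP itself):

```javascript
// The full coin supply in sats still fits a JS Number exactly,
// but the same amount in millisats exceeds Number.MAX_SAFE_INTEGER (2^53 - 1).
const capSats = 2_100_000_000_000_000;        // 21e14 sats (21M BCH)
const capMsats = capSats * 1000;              // 2.1e18, past 2^53 - 1
console.log(Number.isSafeInteger(capSats));   // true
console.log(Number.isSafeInteger(capMsats));  // false

// BigInt keeps the arithmetic exact at any magnitude.
const capMsatsBig = BigInt(capSats) * 1000n;
console.log(capMsatsBig === 2_100_000_000_000_000_000n); // true
```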


Pretty much identical to what I had in mind. So yeah, I think this is a good route to go.

Some thoughts:

  • I don’t think the Vermorel proposal for power-of-two shifts is smart. I do think 1000x etc (using decimal not binary shifts) as is proposed here is smart.
    • The power-of-two thing is classic eCash/ABC wizardry to make obvious stuff more confusing in order to keep up the illusion that they are supergeniuses and normal people just don’t understand their genius. Decimal shifts are way better for many reasons, not least that we write numbers down in decimal. Our whole world is structured around decimal. Nobody cares about 1/16384th of a Satoshi; it’s a useless quantity for humans to think about. Also, PoT shifts may lead to situations where you can’t exactly express some decimal numbers and are forced to do some weird rounding. So yeah, huge thumbs-up on the decimal shifting.
    • If PoT shifts were smart, Satoshi himself would have chosen them when designing the original Bitcoin software. He did not. Probably because he understood they offer no advantages and only can create problems.
      • Continuing with Satoshi’s design choices is the most natural thing to do here.
  • We need to be sure no version >= 3 txns currently exist on mainnet, testnet, etc.
    • I believe this is the case but I would have to check.
    • If they do exist, then these on-chain txns need a special rule specifying that they be interpreted the old way; otherwise nodes will fail to IBD.
  • I think May 2025 is a bit optimistic.
    • I don’t think we need this change anytime soon. I’d be very happy if we needed it for May 2025. But realistically I would not be surprised if we found ourselves not needing it until May 2035.
    • Still, it’s good for us to have a strategy ready to deploy it. But we should monitor market developments and how BCH is used in practice.
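The rounding point can be made concrete with a tiny sketch (hypothetical helper, nothing from the CHIP): 0.1 sat is a whole number of millisats, but not a whole number of 1/16384-sat units.

```javascript
// Return the integer number of scaled units for the fraction num/den sats,
// or null when the amount is not exactly representable at that scale.
function unitsFor(num, den, scale) {
  const scaled = num * scale;
  return scaled % den === 0 ? scaled / den : null;
}

console.log(unitsFor(1, 10, 1000));  // 100  -> 0.1 sat is exactly 100 msat
console.log(unitsFor(1, 10, 16384)); // null -> 0.1 sat is inexact in 2^-14 sat units
```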

One more thought:

One could also not even peg this to the txn version. One could just always use the high-order bits in the 8-byte “nValue” word to indicate the fractional part. Un-upgraded nodes would reject such txns outright (as they likely would reject nVersion=3 txns with a different serialization, as is proposed herein).

But the advantage of making the rule simply “use the high-order bits for fractions” is this: you don’t need to do much to upgrade the coins db for Core derivatives like e.g. BCHN.

With the proposal here, you would need to modify the coinsdb format (maybe, maybe not).

Just some random thoughts.

Of course, nothing is stopping the coinsdb from doing “tricks” to store the 64-bit word in a special way so as to instantly “know” if it contains a fraction or not, without having to modify the data layout… regardless of which proposal one uses.
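For concreteness, here is a sketch of what the high-order-bit trick could look like. The encoding is entirely hypothetical; the flag bit and the 1000x scale are illustrative assumptions, not part of any proposal:

```javascript
// If the top bit of the 64-bit amount is set, the low 63 bits are millisats;
// otherwise the value is a plain sat amount, exactly as today.
const FRACTION_FLAG = 1n << 63n;

function encodeAmount(msats) {
  // whole-sat amounts keep the legacy encoding; only true fractions get flagged
  return msats % 1000n === 0n ? msats / 1000n : (msats | FRACTION_FLAG);
}

function decodeToMsats(nValue) {
  return (nValue & FRACTION_FLAG) ? (nValue & ~FRACTION_FLAG) : nValue * 1000n;
}

console.log(decodeToMsats(encodeAmount(1500n))); // 1500n (1.5 sats, flagged)
console.log(decodeToMsats(encodeAmount(2000n))); // 2000n (stored as 2 sats)
```

A coins db could apply the same one-bit test to “instantly know” whether an entry carries a fraction, which is the kind of trick alluded to above.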


IIRC it exists, but we could just skip 3 and use the first unused number, like v4 :slight_smile:

There were a bunch of other version numbers used, though, so we can’t do a simple >= check on the version and would have to evaluate against a (small) list instead.
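In code, the membership test is trivial either way (a sketch; the version set is a placeholder, though the thread later settles on v10):

```javascript
// Versions that opt in to sub-satoshi semantics. This must be an explicit set,
// because stray v3/v4 txns already exist on-chain with legacy meaning.
const SUB_SAT_VERSIONS = new Set([10]); // placeholder number

function usesSubSats(txVersion) {
  return SUB_SAT_VERSIONS.has(txVersion);
}

console.log(usesSubSats(3));  // false: historical v3 txns keep old semantics
console.log(usesSubSats(10)); // true
```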

One thing to add to the CHIP is: how should the VM deal with these? Should OP_*VALUE continue to return sats (if there are millisats, just truncate them), and would we need another two opcodes to get the decimals?

  • Good point regarding the >= thing not being ideal – yes, we would need to check against a small set.
  • Good point also. We would need to discuss how fractional sats will be represented in the VM… when e.g. doing things like native introspection op-codes to retrieve input and/or output amounts, etc.
    • Gut instinct leads me to prefer an alternate set of op-codes that retrieve the full number (with fraction) and all existing opcodes just truncate… however that would need to be evaluated and discussed.
    • An alternative: One could declare a new op-code e.g. “OP_SET_FRACTIONAL_SAT_MODE” or something which toggles whether numbers get scaled by 1000x or not when issuing the op-codes that retrieve txn amounts… this may save on op-code space but will introduce more “state” in the execution (note we already have state in the execution such as whether you are inside an “if” statement or not, so this is akin to that).
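Both options can be sketched in a few lines. These are hypothetical helpers only; in the real VM the values would be script numbers, and the opcode names are illustrative:

```javascript
// Option A: legacy OP_*VALUE truncates to whole sats, while a new opcode
// returns the full millisat amount.
function opValueSats(msats)  { return msats / 1000n; } // BigInt division truncates
function opValueMsats(msats) { return msats; }

// Option B: one opcode whose output depends on a per-script "fractional mode"
// flag, toggled by something like the hypothetical OP_SET_FRACTIONAL_SAT_MODE.
function opValue(msats, fractionalMode) {
  return fractionalMode ? msats : msats / 1000n;
}

console.log(opValueSats(1500n));    // 1n, the half sat is truncated away
console.log(opValue(1500n, true));  // 1500n
```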

I love the idea of VM toggles that would change behavior of other opcodes, not just for sats! We could get uint ops the same way :slight_smile:


I had the same idea in the previous iteration (see git log).

The thing that changed my mind on that is that currently the economic value of sub-sat is negligible. And that is a benefit for the upgrade path.

It means that an incoming v3 transaction (with maybe some sub-sats) can cheaply be converted back to a v2 transaction (because that vendor hasn’t upgraded something yet?). And the only loss is the rounding of the sub-sats to sats. It can go to the mining fee.
Which is to say, this upgrade is going to get more expensive as the price rises.
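The downgrade path is cheap precisely because the rounding loss is bounded by one sat per output. A sketch with a hypothetical helper:

```javascript
// Round each output down to whole sats; the shaved millisats implicitly
// become part of the mining fee.
function downgradeOutputs(outputsMsats) {
  const outputsSats = outputsMsats.map(m => m / 1000n); // BigInt truncation
  const feeMsats = outputsMsats.reduce((acc, m) => acc + (m % 1000n), 0n);
  return { outputsSats, feeMsats };
}

const { outputsSats, feeMsats } = downgradeOutputs([1500n, 2999n]);
console.log(outputsSats); // [ 1n, 2n ]
console.log(feeMsats);    // 1499n, left for the miner
```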

The earliest we can plan this for is May 2025 anyway, which is 18 months away, and with the market today we could be looking at much higher prices by then. Or the same as today; but since we need to look far ahead with such a change, I’d rather err on the side of caution and avoid losing the window of opportunity.

This too I was thinking, but in reality this is almost never actually an issue. Realize that a standard var in JS can store all sats ever created (21e14). That capacity will obviously never be needed in 99% of code paths, because all those sats are never going to sit on a single output or input, or even in a single transaction!
So with 1000x you limit yourself to (quick dumb math) about 21000 BCH fitting in that ‘double’ (or JS var), which is enough for practically all normal use cases. Sure, anyone writing libraries and specialized software like wallets needs to take care and focus on using BigInt. But I’m merely pointing out that the pain is a lot less than expected.
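For what it’s worth, the precise ceiling is a bit higher than the quick estimate, since the real constraint is Number.MAX_SAFE_INTEGER (2^53 - 1):

```javascript
// How many whole BCH fit in a JS Number when amounts are kept in millisats.
const maxSafeMsats = Number.MAX_SAFE_INTEGER;  // 9007199254740991
const maxSafeBch = maxSafeMsats / 1000 / 1e8;  // msat -> sat -> BCH
console.log(Math.floor(maxSafeBch)); // 90071, so the rough 21000 figure is conservative
```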

I recall that testnet3 had plenty of random version numbers; no clue about testnet4.
The mainnet tests we did back when the check was made showed a very low number of them. Will have to re-run that.

Good question.
My initial response is that, to avoid tech debt, we ideally drop the concept of ‘sub-sats’ as soon as software has moved over to the new setup. We then simply have more satoshis, and that will make all software much easier to manage for the next 100 years.

With that in mind, my ideal is that the transaction version is a flag for how the script unlocking that money is interpreted.
If the VM is verifying a predicate unlocking money held in a v3+ UTXO, then OP_*VALUE gives the ‘subsats’ amount; otherwise it gives the ‘sats’ amount.
This has the advantage that no truncation ever happens, as that sounds dirty to my ears.

This gives people time to verify that their scripts work on sub-sats before deploying them in v3 transaction predicates.

For the people more at home with scripting, how does that sound?
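As a sketch, the version-as-flag idea amounts to this (hypothetical helper; the next reply points out why it is unsafe for recursive covenants):

```javascript
// The amount pushed by OP_*VALUE depends on the version of the transaction
// that created the UTXO being spent.
// '>= 3' is only for brevity here; a real rule would check an explicit version set.
function opValueForUtxo(msats, utxoTxVersion) {
  return utxoTxVersion >= 3 ? msats : msats / 1000n; // v3+: millisats, else sats
}

console.log(opValueForUtxo(1500n, 2)); // 1n: a legacy predicate sees whole sats
console.log(opValueForUtxo(1500n, 3)); // 1500n: an upgraded predicate sees msats
```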


This would break recursive covenants (predicates that require the same predicate be passed on to a new UTXO, but with a new amount calculated by the previous predicate).
Then the spender could spend the v2 predicate, upgrade it to v3, then spend from v3 and fool the predicate into thinking it has more sats than it actually has.


Right, good catch!

Yeah, then I’m definitely leaning towards an opcode that switches ‘mode’.


Tx Version now 5

At the advice of people here, I investigated the actual usage of versions on-chain. The result is this CSV file: CHIP-SubSatoshi/tx-versions.csv at master - bitcoincash/CHIP-SubSatoshi

In short, there is one version 3 transaction waaay back in 2015 and one v4 in 2016. More recently, someone added 859 version 4 transactions.

Geeking-out detail: I wanted to get the data out in a fast way, so in true software-developer fashion I took the opportunity to learn cpp2 (an experimental compile-to-C++ language) and wrote the tool in it. It took me nearly 2 days to write this simple app.
It was massively faster than calling bitcoin-cli repeatedly, and it was fun to play with new tech…



No problem. Just do what Linus did with the kernel: increase / skip the numbers by an order of magnitude.

Instead of Version v4 we get Version v40.

Sure, the lack of continuity is not aesthetically pleasing, but it fixes any backward-compatibility problems and removes 99% of the confusion.


That actually makes a lot of sense, and is quite attractive. Maybe jump to ver 10…


Linus literally said he went full alpha male on it and took the shot.

So just take the shot.


Oh, so someone added a bunch, care to update this list?


There is a CSV in my git repo. (see OP for link)


I just had a couple of minutes so checked testnet4.

I’m surprised too, but there are ONLY version 2 transactions there.


Ver X, per Apple :grinning_face_with_smiling_eyes:
Or go retro Apple, and let’s start naming versions after different cats!


I did go for version 10. :wink:

In the meantime, some more insight around the opcode has made me revisit that topic. The main one was the observation that you really don’t want to allow a script to change behavior mid-calculation between 8 and 11 digits after the decimal point.

Some more stuff was added too; please look at the merge request and check whether it makes sense.
