Pre-release of a new CHIP-SubSatoshi

This is a pre-release of a new CHIP; please share your opinions.

After several months of considering alternatives, I believe this is by far the cleanest and easiest way to approach this functionality. It is clean because it requires only small changes in most apps, and even for the full node the changes will be minor.

Who shares my optimism to get this enabled in May 2025?

7 Likes

Similar proposal to Mankind needs fractional satoshis · Joannes Vermorel's blog, but with a simple decimal adjustment, which might be better for financial software with a lot of downstream apps than Vermorel's powers-of-two / naks suggestion.

Looking forward to the discussions around this!

1 Like

It's really nice that the 8 bytes can get us the x1000: we could maybe denote amounts as millisats (msats)?

JS implementations may have to be careful about using Number to store amounts, but I think most of the ecosystem has already been trending towards BigInt anyway.
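
For a rough sense of the headroom, here is a quick back-of-the-envelope check (my own sketch, using BigInt so the arithmetic stays exact):

```ts
// 8 bytes (int64) comfortably hold the whole supply at millisat precision.
const totalSupplySats = 2_100_000_000_000_000n; // 21M BCH in sats (~2.1e15)
const totalSupplyMsat = totalSupplySats * 1000n; // ~2.1e18 millisats
const int64Max = (1n << 63n) - 1n;               // ~9.22e18
console.log(totalSupplyMsat < int64Max);         // true -- plenty of room
```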

3 Likes

Pretty much identical to what I had in mind. So yeah, I think this is a good route to go.

Some thoughts:

  • I don't think the Vermorel proposal for power-of-two shifts is smart. I do think 1000x (using decimal rather than binary shifts), as proposed here, is smart.
    • The power-of-two thing is classic eCash/ABC wizardry to make obvious stuff more confusing in order to keep up the illusion that they are supergeniuses and normal people just don't understand their genius. Decimal shifts are way better for many reasons, chiefly that we write numbers down in decimal; our whole world is structured around decimal. Nobody cares about 1/16384th of a satoshi; it's a useless quantity for humans to think about. Also, PoT shifts may lead to situations where you can't exactly express some decimal numbers and are forced to do weird rounding (see the sketch after this list). So yeah, huge thumbs-up on the decimal shifting.
    • If PoT shifts were smart, Satoshi himself would have chosen them when designing the original Bitcoin software. He did not, probably because he understood they offer no advantages and can only create problems.
      • Continuing with Satoshi's design choices is the most natural thing to do here.
  • We need to be sure no version >= 3 txns currently exist on mainnet, testnet, etc.
    • I believe this is the case, but I would have to check.
    • If they do exist, then these on-chain txns need a special rule specifying that they be interpreted the old way; otherwise nodes will fail IBD or similar.
  • I think May 2025 is a bit optimistic.
    • I don't think we need this change anytime soon. I'd be very happy if we needed it by May 2025, but realistically I would not be surprised if we found ourselves not needing it until May 2035.
    • Still, it's good for us to have a strategy ready to deploy. But we should monitor market developments and how BCH is used in practice.
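
To make the rounding point from the first bullet concrete, here is a small illustration (my own example, using sat/16384 purely as a stand-in power-of-two denominator):

```ts
// One millisatoshi (0.001 sat = 1/1000 sat) under the two schemes.

// Decimal shift (this CHIP): 0.001 sat is exactly 1 unit of sat/1000.
// Power-of-two shift (sat/16384 as a stand-in): 16384 / 1000 = 16.384,
// so 0.001 sat falls between 16/16384 and 17/16384 and must be rounded.
const lower = 16 / 16384;          // 0.0009765625 sat
const upper = 17 / 16384;          // ~0.00103759765625 sat
const forcedError = lower - 0.001; // about -2.3e-5 sat, unavoidable
console.log(lower, upper, forcedError);
```
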
4 Likes

One more thought:

One could also not even peg this to the txn version: one could just always use the high-order bits of the 8-byte "nValue" word to indicate the fractional part. Un-upgraded nodes would reject such txns outright (as they likely would nVersion=3 txns with a different serialization, as proposed herein).

But the advantage of making the rule simply "use the high-order bits for fractions" is this: you don't need to do much to upgrade the coins DB for Core derivatives like e.g. BCHN.

With the proposal here, you would need to modify the coinsdb format (maybe, maybe not).

Just some random thoughts.

Of course, nothing is stopping the coinsdb from doing "tricks" to store the 64-bit word in a special way so as to instantly "know" whether it contains a fraction or not, without having to modify the data layout… regardless of which proposal one uses.
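
For illustration, a minimal sketch of that kind of encoding (my own strawman, not the CHIP's serialization): reserve the top bit of the 64-bit value as a "fractional" flag and keep millisats in the lower bits.

```ts
const FRACTION_FLAG = 1n << 63n;

// Hypothetical encoding: top bit set => lower 63 bits are millisatoshis.
function encodeMillisats(millisats: bigint): bigint {
  if (millisats < 0n || millisats >= FRACTION_FLAG) throw new Error("out of range");
  return millisats | FRACTION_FLAG;
}

// Decoding normalizes everything to millisats, legacy values included.
function toMillisats(nValue: bigint): bigint {
  return (nValue & FRACTION_FLAG)
    ? nValue & (FRACTION_FLAG - 1n) // new-style: strip the flag
    : nValue * 1000n;               // legacy: whole satoshis, scale up
}
```

Whether such a flag lives only in the coinsdb layout or in the wire format itself is exactly the "tricks" question above.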

2 Likes

IIRC it exists, but we could just skip 3 and use the first unused number, like v4 :slight_smile:

There were a bunch of other numbers used, though, so we can't do a simple >= check on the version and would have to evaluate against a (small) list instead.
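
In code, that would look something like the following (a minimal sketch; the version number shown is a placeholder for whatever the CHIP settles on):

```ts
// Explicit allow-list instead of a `version >= N` comparison, since a few
// stray version numbers already exist on-chain.
const SUBSAT_TX_VERSIONS = new Set<number>([3]); // placeholder value

function usesFractionalSats(txVersion: number): boolean {
  return SUBSAT_TX_VERSIONS.has(txVersion);
}
```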

One thing to add to the CHIP: how should the VM deal with these? Should OP_*VALUE continue to return sats (if there are millisats, just truncate them), with another two opcodes added to get the fractional part?

3 Likes
  • Good point regarding the >= check not being ideal: yes, we would need to check against a small set.
  • Good point also. We would need to discuss how fractional sats will be represented in the VM, e.g. when doing things like native introspection op-codes to retrieve input and/or output amounts.
    • Gut instinct leads me to prefer an alternate set of op-codes that retrieve the full number (with fraction) while all existing op-codes just truncate (see the sketch after this list); however, that would need to be evaluated and discussed.
    • An alternative: one could declare a new op-code, e.g. "OP_SET_FRACTIONAL_SAT_MODE", which toggles whether numbers get scaled by 1000x when issuing the op-codes that retrieve txn amounts. This may save on op-code space but introduces more "state" into the execution (note we already have execution state, such as whether you are inside an "if" statement, so this is akin to that).
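A rough sketch of the two retrieval styles from the first option above (opcode names are hypothetical, not actual CHIP opcodes; amounts assumed to be stored as millisats internally):

```ts
// Existing-style introspection: keep returning whole satoshis, truncating.
function opOutputValueSats(outputMillisats: bigint): bigint {
  return outputMillisats / 1000n; // BigInt division truncates toward zero
}

// Hypothetical companion opcode: return the full amount, fraction included.
function opOutputValueMillisats(outputMillisats: bigint): bigint {
  return outputMillisats;
}
```
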
2 Likes

I love the idea of VM toggles that would change behavior of other opcodes, not just for sats! We could get uint ops the same way :slight_smile:

3 Likes

I had the same idea in the previous iteration (see git log).

The thing that changed my mind is that currently the economic value of a sub-sat is negligible, and that is a benefit for the upgrade path.

It means that an incoming v3 transaction (maybe with some sub-sats) can cheaply be converted back to a v2 transaction (because some vendor hasn't upgraded yet). The only loss is rounding the sub-sats down to whole sats; the remainder can go to the mining fee.
Which is to say, this upgrade is going to get more expensive as the price rises.
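
A minimal sketch of that downgrade path (my own illustration): round each output down to whole sats and let the remainder fall into the fee.

```ts
// Convert a v3-style output amount (millisats) back to a v2-style amount (sats).
function downgradeOutput(millisats: bigint): { sats: bigint; lostMillisats: bigint } {
  const sats = millisats / 1000n;                    // truncating division
  return { sats, lostMillisats: millisats % 1000n }; // remainder goes to the fee
}

downgradeOutput(1_234_567n); // { sats: 1234n, lostMillisats: 567n }
```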

The earliest we can plan this for is May 2025 anyway, which is 18 months away, and with today's market we could be looking at much higher prices by then. Or the same as today; but since we need to look far ahead with such a change, I'd rather err on the side of caution and avoid losing the window of opportunity.

I was thinking this too, but in reality it is almost never an issue. Realize that a standard var in JS can store all sats ever created (21e14), which obviously will never be needed in 99% of code paths, because all those sats are never going to be on a single output or input, or even a single transaction!
So with 1000x you limit yourself to (quick rough math) about 21,000 BCH fitting in that 'double' (or JS var), which is enough for practically all normal use cases. Sure, anyone making libraries and specialist software like wallets needs to take care and use BigInt. But I'm merely pointing out that the pain is a lot less than expected.
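
For the exact bound (my own arithmetic): the 21,000 BCH figure above reuses the total-supply headroom as the capacity, while the hard limit for exact integers in a Number is 2^53 - 1, which works out a bit higher.

```ts
// Largest amount that fits exactly in a plain Number at millisat precision.
const maxSafeBch = Number.MAX_SAFE_INTEGER / 1000 / 1e8;
console.log(maxSafeBch); // ~90071.99 BCH -- still far below the total supply
```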

I recall that testnet3 had plenty of random version numbers; no clue about testnet4.
The mainnet scan we did back when this was last checked showed a very low number of them. I will have to re-run that.

Good question.
My initial response is that, to avoid tech debt, we ideally drop the concept of 'sub-sats' as soon as software is over to the new setup. We then simply have more satoshis, and that will make all software much easier to manage for the next 100 years.

With that in mind, my ideal is that the transaction version acts as a flag for how the script unlocking that money is interpreted.
If the VM is verifying a predicate unlocking money held in a v3+ UTXO, then OP_*VALUE gives the 'sub-sats' amount; otherwise it gives the 'sats' amount.
This has the advantage that no truncation happens, which sounds dirty to my ears.

This gives people time to verify that scripts work with sub-sats before deploying them as v3 transaction predicates.

For the people more at home with scripting, how does that sound?

4 Likes

This would break recursive covenants (predicates that require the same predicate be passed on to a new UTXO, with some new amount calculated by the previous predicate).
The spender could spend the v2 predicate, upgrade it to v3, then spend from v3 and fool the predicate into thinking it has more sats than it actually has (see the sketch below).
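
A toy illustration of the unit mismatch (a simplified pseudo-covenant in TypeScript, not real VM code; the covenant was written when the value opcode reported satoshis):

```ts
// The covenant enforces "keep at least 100_000 sats" by comparing whatever
// the value opcode reports against a sat-denominated bound.
function covenantAllows(reportedValue: bigint): boolean {
  return reportedValue >= 100_000n;
}

// v2-created UTXO holding 100_000 sats: the opcode reports 100_000 -> intended.
covenantAllows(100_000n); // true

// After being re-wrapped as a v3-created UTXO, the opcode reports millisats,
// so a mere 100 sats still shows up as 100_000 units and passes the check,
// letting the spender drain the covenant to 0.1% of what it should hold.
covenantAllows(100_000n); // true -- but this time it is only 100 sats
```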

4 Likes

Right, good catch!

Yeah, then I'm definitely leaning towards an opcode that switches 'mode'.

2 Likes

Tx Version now 5

On the advice of people here, I investigated the actual usage of versions on-chain. The result is this CSV file: CHIP-SubSatoshi/tx-versions.csv at master - bitcoincash/CHIP-SubSatoshi - Codeberg.org

In short, there is one version 3 transaction way back in 2015 and one v4 in 2016. More recently, someone added 859 version 4 transactions.

Geeky detail: I wanted to get the data out quickly, so in true software-developer fashion I took the opportunity to learn cpp2 (an experimental compile-to-C++ language) and wrote it in that. It took me nearly 2 days to write this simple app.
It was massively faster than calling bitcoin-cli repeatedly, and it was fun to play with new tech…
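
For anyone who wants to reproduce the tally without writing a block parser, here is a rough sketch over JSON-RPC (assumes a local node with RPC credentials; far slower than reading the block files directly, as noted above):

```ts
// Tally transaction versions for a block range via the node's JSON-RPC.
async function rpc(method: string, params: unknown[]): Promise<any> {
  const res = await fetch("http://127.0.0.1:8332/", {
    method: "POST",
    headers: { Authorization: "Basic " + btoa("rpcuser:rpcpass") },
    body: JSON.stringify({ jsonrpc: "1.0", id: "tally", method, params }),
  });
  return (await res.json()).result;
}

async function tallyVersions(fromHeight: number, toHeight: number) {
  const counts = new Map<number, number>();
  for (let h = fromHeight; h <= toHeight; h++) {
    const hash = await rpc("getblockhash", [h]);
    const block = await rpc("getblock", [hash, 2]); // verbosity 2: full tx objects
    for (const tx of block.tx) {
      counts.set(tx.version, (counts.get(tx.version) ?? 0) + 1);
    }
  }
  return counts; // e.g. Map { 1 => ..., 2 => ..., 3 => 1, 4 => ... }
}
```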

Cheers!

3 Likes

No problem. Just do what Linus did in the kernel. Increase / skip the numbers by an order of magnitude.

Instead of Version v4 we get Version v40.

Sure, the lack of continuity is not aesthetically pleasing, but it fixes any backward-compatibility problems and removes 99% of the confusion.

2 Likes

That actually makes a lot of sense, and is quite attractive. Maybe jump to ver 10…

2 Likes

Linus literally said he went full alpha male on it and took the shot.

So just take the shot.

1 Like

Oh, so someone added a bunch, care to update this list?

1 Like

There is a CSV in my git repo (see the OP for the link).

3 Likes

I just had a couple of minutes, so I checked testnet4.

I'm surprised too, but there are ONLY version 2 transactions there.

3 Likes

Ver X, per Apple :grinning_face_with_smiling_eyes:
Or go retro Apple, and let's start naming versions after different cats!

1 Like

I did go for version 10. :wink:

In the meantime, some more insight around the opcode has made me revisit that topic. The main one was the observation that you really don't want to allow a script to change behavior mid-calculation between 8 and 11 digits behind the dot.

Some more stuff was added too; please look at the merge request and check whether it makes sense.

1 Like