2026 protocol upgrade ideas

Making Nested PoW happen for 2026 will be very hard work for me. I will have to create a testing network (testnet-in-a-box or a dedicated testnet) with multiple nodes to make sure it all works together nicely, then run it for 6+ months to prove to people and companies that it is reliable.

Unless I get some external help, 2026 will be difficult.

But I am OK with moving it to 2027 if I cannot make it. No rush.

About milli-sats.

A little discussion on Telegram gave some good ideas and opinions. One of the main ones is that the cost is high for a currently low return on investment.
The cost is high because every party parsing raw transactions would have to adjust their software or risk losing money.
The return on investment is low because at our current price there is no practical benefit yet.

One suggestion was to simply avoid assumptions about the schedule. Don’t assume it will fit in the November/May cadence we’ve been using.
Instead, say we want to activate this in 4 years. Give everyone a clear indication and notification, have a testnet up soon, and so on.

Or, maybe even simpler: start the process now and make the actual lock-in date and activation date part of this specific CHIP. We likely won’t be using November 2025 and May 2026 for those, but maybe November 2025 for lock-in and May 2029 for activation.
All dates are draft and open for discussion.

With this much longer timeline it may be possible to not break the transaction format at all: just change the meaning of the value field in a new tx-version.
Or, if people STILL find that too risky, the longer timeline means we might fit various transaction-format cleanups in at the same time.

Having those dates be part of this CHIP gives us lots of options; hopefully one will please enough people to make it happen :slight_smile:

1 Like

There’s also the “evaluation of alternatives”: it needs to be demonstrated that bumping the TX version and breaking the TX format to shift the value by x1000 has better trade-offs than inserting a new field in a non-breaking way, using the same prefix approach CashTokens used.

The non-breaking alternative adds an additional field to the output format:

Field                                      Length     Format                    Description
value                                      8 bytes    unsigned integer (LE)     The number of satoshis to be transferred.
subsat prefix and locking script length    variable   variable length integer   The combined size of the full subsat prefix and the locking script, in bytes.
[PREFIX_SUBSATS]                           1 byte     constant                  Magic byte at codepoint 0xff (255); indicates the presence of a subsat prefix.
[subsat amount]                            variable   variable length integer   The amount of subsatoshis.
locking script                             variable   bytes (BE)                The contents of the locking script.

Note: if the output encodes both subsatoshis and tokens, then the subsatoshi prefix MUST come after the token prefix.
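
For concreteness, here is a minimal Python sketch of parsing one output under this proposed format. It follows the table above; the helper names (read_varint, parse_output) are illustrative, not from any real codebase. Note how a parser unaware of the prefix still finds the correct output boundary, which is what makes the change non-breaking:

```python
import struct

PREFIX_SUBSATS = 0xff  # proposed magic byte from the table above

def read_varint(buf: bytes, pos: int):
    """Read a Bitcoin-style variable-length integer; return (value, new_pos)."""
    first = buf[pos]
    if first < 0xfd:
        return first, pos + 1
    if first == 0xfd:
        return struct.unpack_from("<H", buf, pos + 1)[0], pos + 3
    if first == 0xfe:
        return struct.unpack_from("<I", buf, pos + 1)[0], pos + 5
    return struct.unpack_from("<Q", buf, pos + 1)[0], pos + 9

def parse_output(buf: bytes, pos: int):
    """Parse one output under the proposed non-breaking subsat format."""
    value = struct.unpack_from("<Q", buf, pos)[0]   # 8-byte LE satoshi amount
    pos += 8
    combined_len, pos = read_varint(buf, pos)       # prefix + script, in bytes
    end = pos + combined_len                        # old parsers stop here too
    subsat_amount = 0
    if combined_len > 0 and buf[pos] == PREFIX_SUBSATS:
        pos += 1                                    # consume the magic byte
        subsat_amount, pos = read_varint(buf, pos)  # subsatoshi amount
    locking_script = buf[pos:end]                   # the remainder is the script
    return value, subsat_amount, locking_script, end
```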

Applications could choose how to present the high-precision composite:

  • As a floating point: float_value = value + (double) subsat_amount / (1<<64)
  • As high-precision atomic units: atomic_value = (uint128) value * (1<<64) + subsat_amount
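
A sketch of both presentations, assuming the 2^64 scaling implied by the formulas above (Python’s arbitrary-precision integers stand in for uint128):

```python
def as_float(value: int, subsat_amount: int) -> float:
    """Lossy floating-point view; precision degrades above ~2^53 atomic units."""
    return value + subsat_amount / (1 << 64)

def as_atomic(value: int, subsat_amount: int) -> int:
    """Exact view in atomic units; the result fits in a uint128."""
    return (value << 64) + subsat_amount
```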

The Script VM could be upgraded using some OP_ENV so that introspection opcodes return the high-precision atomic_value, or we could just add 2 new introspection opcodes.

1 Like

The ‘non-breaking way’ adds tech-debt: unneeded complexity, and complexity that can never be removed again, because once those transactions are in production they stay forever.
We picked hard forks over soft forks, and we relished the fact that we can do things more cleanly and with much less tech-debt as a result. This is what keeps the chain viable long-term.
I want Bitcoin Cash (the protocol) to still run mostly unchanged 20 years from now. Maybe even longer, but I dare not look that far into the future.

So, sure, it is possible to push this data into a script field and make all script parsers adjust, so that transaction parsers can wait a couple more years to do so.
Yes, that may be a positive thing in the short term.

But, people, if you think short term, why are you worried about enabling milli-sats?

The trade-off of doing this inside the value field is that it is by far the least invasive change possible. It is very simple to write the code in practically all parsers: if txversion < 10, multiply ‘amount’ by 1000. That’s it. A one-line spec.
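
For illustration, here is the whole compatibility rule as a hedged Python sketch (the version threshold 10 is taken straight from the paragraph above; the constant and function names are made up):

```python
MILLISAT_TX_VERSION = 10  # hypothetical first tx-version with native milli-sat amounts

def output_amount_in_millisats(tx_version: int, raw_amount: int) -> int:
    """Normalize a raw output amount to milli-satoshis across the format change."""
    if tx_version < MILLISAT_TX_VERSION:
        return raw_amount * 1000  # legacy amounts are whole satoshis
    return raw_amount             # newer versions encode milli-sats directly
```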

The risk of some old software messing up is one that diminishes over time. Software honestly has a horrible half-life in our crypto world. This is a super manageable problem, just by giving it time.

First post here. First of all, I am not a dev, so I am not super technical. However, it is quite obvious to me that the biggest problem with BCH currently is still usage. We do not even consistently produce 1 MB blocks. So in my mind things like UTXO commitments are still not as important as allowing more apps to be built on BCH. Speed is a factor in this, but with DSProofs and ZCEs perhaps that is all that is needed. Things like TailStorm or a shorter blocktime will most likely not be doable this go-around anyway.

So in my mind it would probably be best to do another round of VM upgrades to allow even more future capabilities for builders. Everyone’s deep in the VM right now anyway, so maybe this would also help while it’s still fresh in everyone’s minds. The ones that seem most pertinent after talking to @MathieuG are:

  • Loops
  • re-enabling LSHIFT / RSHIFT
  • perhaps adding new math opcodes for more complex market makers

Those are my thoughts at this time, thank you.

8 Likes

Welcome to Bitcoin Cash Research!

It’s true that driving adoption is a huge focus we need. In my opinion, VM Limits will already have opened the playground far enough (in combination with CashTokens, BigInts, etc.) that protocol dev is not going to be a limiting factor on that building. At that point it’s much more about developer tooling for the higher layers, the quality of wallet integrations (to drive interest and provide a starting point), effective media promotion, price action (up always drives interest), etc.

That’s just my take, which is why I’m more interested in protocol development moving into more “esoteric” features. The 2023 & 2025 upgrades have been big for enabling building, and I think to the extent we need more adoption (which we do), those problems now basically need to be addressed through other parts of the ecosystem.

But I agree the Loops idea does sound cool & might be quite significant on that front.

2 Likes

Which “esoteric” features are you referencing? I agree tooling has not caught up to our VM upgrades and that the VM isn’t the main thing holding us back anymore. I still feel we might as well continue focusing on it for the next upgrade cycle, as I don’t see any other big changes that are close to reaching consensus. It’s also something still on people’s minds, and we might as well stay a couple of steps ahead of tooling if there are a few add-ons that will provide even more future use cases. The main one at this time does seem to be Loops, followed by the Shift ops.

Things like UTXO Commitments or sub-satoshis. Stuff that isn’t IMMEDIATELY needed, maybe not even for 2, 3, or 4 years, but if we get it in the pipeline and/or shipped over the next couple of years, then it’ll be ready for when it is.

With regards to things needed in the VM, I am interested in the idea of Bounded Loops, but in general I’m hoping that ecosystem focus moves up a couple of layers for the time being. Anything really meaty at the protocol layer might distract focus from that, which would be unfortunate.

2 Likes

here’s my list with regards to VM: op_env, pfx_budget, bounded loops, op_eval, op_merkle, negative op_roll & pick, alt stack swap/rot/roll/pick, op_matureblockheader, more math ops, shift ops, op_invert, maybe more hash ops, maybe stream hashing, maybe raw tx parsing opcode(s), detached signatures (pfx_detachedsig)

As for the below, I doubt there’s the will or need to do it, but I’m still putting it out there to ponder:

If we ever break the TX format, then we could use the opportunity to switch prevout refs to a txidem (something like =H(H(input prevout refs) + H(outputs))), and change the txid to something like =H(txidem + H(input scripts, locktime) + tx_ver + tx_locktime). Furthermore, the hashes covering ins & outs could be little merkle trees. This way, it would be possible to have compact proofs of individual UTXOs getting spent & created, and SPV proofs that drill down further to particular outputs.
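
As a rough sketch of those identifiers (H is assumed to be double-SHA256 here, and the groupings follow the formulas above literally; both are open design choices):

```python
import hashlib

def H(data: bytes) -> bytes:
    """Assumed hash: double-SHA256, matching current transaction hashing."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def compute_txidem(prevout_refs: bytes, outputs: bytes) -> bytes:
    # Commits only to what is spent and what is created.
    return H(H(prevout_refs) + H(outputs))

def compute_txid(txidem: bytes, input_scripts: bytes,
                 tx_ver: bytes, tx_locktime: bytes) -> bytes:
    # Layers the remaining fields on top of the idem; locktime appears both
    # inside the inner hash and in the outer concatenation, per the formula above.
    return H(txidem + H(input_scripts + tx_locktime) + tx_ver + tx_locktime)
```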

Also, if we break the TX format, it needs to be upgradeable to 384-bit hashes, for when/if quantum computers get powerful enough that 256-bit collision search becomes a risk.

Also, we could consider switching to other hash functions like BLAKE (for everything except PoW) to increase overall performance while maintaining or increasing cryptographic security.

Echoing the “if we ever” part (i.e. not next year, or the one after), I’d want to put forward the idea of a tagged-values system for transaction encoding: a basic design where you effectively have ‘key=value’ style fields. This results in much cleaner parsers and allows plenty of opportunities for improvements that are practically impossible today.
Because if you have a ‘key=value’ style encoding, you can choose to skip a key if it is irrelevant. For instance, the ‘sequence’ is almost never used.
And you could define specialized keys instead of ‘abusing’ existing ones. This is relevant for the sequence again, but also the coinbase input could be a special short tag instead of an ‘empty’ prev-txid.

To give an example of features that are impossible today: you could add a key that specifically states “reuse the signature from input ‘n’” in order to drastically shrink transactions that combine loads of UTXOs from one address.
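
A sketch of what such a tagged decoder might look like. All tag numbers and the record layout are hypothetical, purely to show how absent keys fall back to defaults and how new keys slot in without breaking old parsers:

```python
# Hypothetical tag numbers, for illustration only.
TAG_PREVOUT   = 0x01  # input prevout reference
TAG_SEQUENCE  = 0x02  # present only when a non-default sequence is actually used
TAG_OUTPUT    = 0x03  # output value + locking script
TAG_COINBASE  = 0x04  # short marker replacing the 'empty' prev-txid convention
TAG_SIG_REUSE = 0x05  # "reuse the signature from input n"

DEFAULT_SEQUENCE = 0xFFFFFFFF

def decode_tagged_tx(fields):
    """Decode an ordered list of (tag, value) pairs into a transaction skeleton.

    Assumes per-input tags follow the TAG_PREVOUT they modify.
    """
    tx = {"coinbase": False, "inputs": [], "outputs": []}
    for tag, value in fields:
        if tag == TAG_PREVOUT:
            # No TAG_SEQUENCE seen yet, so the default applies.
            tx["inputs"].append({"prevout": value, "sequence": DEFAULT_SEQUENCE})
        elif tag == TAG_SEQUENCE:
            tx["inputs"][-1]["sequence"] = value
        elif tag == TAG_SIG_REUSE:
            tx["inputs"][-1]["sig"] = ("same-as-input", value)  # share one signature
        elif tag == TAG_OUTPUT:
            tx["outputs"].append(value)
        elif tag == TAG_COINBASE:
            tx["coinbase"] = True  # no fake all-zero prev-txid needed
        # Unknown tags could be skipped or rejected, by policy.
    return tx
```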

I did an, admittedly naive, design of this nearly a decade ago (man, time flies!) that may still contain good ideas: BIP 134. That is, if we ever actually want to break the transaction format.

1 Like

Here’s an idea for how we could shrink the amount of data SPV clients need.

Imagine if we added a merkle tree over all block hashes, kept updating it, and committed the updated root in the coinbase TX.

With that, an SPV client wouldn’t need the entire header chain anymore; they’d just need (given N is the current tip):

  • The coinbase TX at some height M (where M < N). From this they’d extract the root of all headers 0 to M-1, which can be used to make an inclusion proof for any historical header or TX belonging to that range.
  • The header chain for the segment [M, N], which proves that the [0, M-1] commitment is buried under (N-M+1) blocks’ worth of PoW, and can prove any TX in the [M, N] range.

It would only take tree_height hash ops to update it with each new header, because the header chain is a sorted list that only grows at the end. There are no random insertions; you only ever update the right-most side of the tree.
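
A sketch of why appends are that cheap: keep one cached peak per perfect subtree (merkle-mountain-range style; the class below is illustrative, not from any implementation), and appending a header merges carries like binary addition, touching at most tree_height nodes:

```python
import hashlib

def H(data: bytes) -> bytes:
    """Assumed node hash: double-SHA256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

class HeaderAccumulator:
    """Append-only merkle accumulator over block header hashes.

    peaks[i] holds the root of a perfect subtree of 2^i leaves, or None.
    """
    def __init__(self):
        self.peaks = []

    def append(self, header_hash: bytes) -> None:
        carry = header_hash
        for i in range(len(self.peaks)):
            if self.peaks[i] is None:
                self.peaks[i] = carry
                return
            carry = H(self.peaks[i] + carry)  # merge two equal-sized subtrees
            self.peaks[i] = None
        self.peaks.append(carry)

    def root(self) -> bytes:
        """Fold the peaks into the single root to commit in the next coinbase."""
        acc = None
        for peak in self.peaks:
            if peak is not None:
                acc = peak if acc is None else H(peak + acc)
        return acc
```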

1 Like

Something like this? merkleblock checkpoints. (lovingly named: super-merkleroot) · Issue #187 · cculianu/Fulcrum · GitHub

1 Like

I do not fully understand this yet; it needs much deeper thought, but I already love the mere concept.

Can you make it into a nice graph with blocks and arrows to illustrate it more?

That would be great, thanks.

1 Like

Yes!

It’d just be an additional merkle tree over the chain of hashes.

Right now, you have a chain like:

0 -- 1 -- 2 -- 3 -- 4 -- ... -- N

We could compute and maintain a merkle tree over all headers and record the root in the block after.

       R3
   /        \
/    \    /   \
0 -- 1 -- 2 -- 3 -- (4 & R3)

It’s easy to keep it updated, because if you cache intermediate hashes you only need to recompute the side that gets changed when a new block is added.

               R4
       /             \
   /        \        /\
/    \    /   \     /\
0 -- 1 -- 2 -- 3 -- 4 -- (5 & R4)

                R5
       /                 \
   /        \          /   \
/    \    /   \     /    \
0 -- 1 -- 2 -- 3 -- 4 -- 5 -- (6 & R5)

Then having just the (6 & R5) lets you prove any historical header and TX by providing the merkle path.
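
Client-side verification would then be the standard merkle-path walk; a minimal sketch, again assuming double-SHA256 and a left/right flag per step:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_path(leaf: bytes, path, root: bytes) -> bool:
    """Check a header (or TX) hash against a committed root.

    `path` is a list of (sibling_hash, sibling_is_left) steps from leaf to root.
    """
    acc = leaf
    for sibling, sibling_is_left in path:
        acc = H(sibling + acc) if sibling_is_left else H(acc + sibling)
    return acc == root
```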

3 Likes

TailStorm or shorter blocks would be great for improving UX. Many people, previously including myself, would choose to just use LTC if Monero was not an option, because when sending payment for services outside the BCH ecosystem they all want at least one confirmation, and usually three. This can be really painful when we don’t get a block for 30 minutes; it brings back the bad memories of BTC.

We really need BCH to just be the best choice in all scenarios.

3 Likes

I agree. But don’t expect it done very fast; I am a rather slow programmer, and my complicated life is not helping at all.

2026 is not realistic without external help, I can say this upfront.

3 Likes

2026 is not realistic without external help, I can say this upfront.

I would assume something like this would take at least 2 years. If it is a funding issue, maybe a Flipstarter would help.

Funding will not magically conjure extra time into my life. There are only 24 hours in a day, and I need 7-8 hours of sleep.

Also, I am not poor; I do not really need it. Unless maybe we’re talking about hiring somebody.

1 Like

TailStorm would be amazing. I would love to see it as a 2027 upgrade, potentially.

2 Likes

My thoughts for logical upgrades to the BCH VM were already included in the thread above :grinning_face_with_smiling_eyes:

But I mainly wanted to re-post @bitjson’s thoughts on the 2026 upgrade, which he shared on X:

For 2026, I’m hoping to see (at least) loops, exponentiation, and relaxed output standardness (“Pay-2-Script”). Just those would close most remaining gaps + extend BCH’s lead in areas where it already excels.

Similar to the VM limits, the bounded loops proposal (CHIP 2021-05 Bounded Looping Operations) was already created back in 2021, so it has been in research discussions for quite a while.

1 Like