2026 protocol upgrade ideas

Things like UTXO Commitments or sub-satoshis. Stuff that isn't IMMEDIATELY needed, maybe not even for 2, 3, or 4 years, but if we get it into the pipeline and/or shipped over the next couple of years, then it'll be ready for when it is.

With regard to things needed in the VM, I am interested in the idea of Bounded Loops, but in general I'm hoping that ecosystem focus moves up a couple of layers for the time being. Anything really meaty at the protocol layer might distract focus from that, which would be unfortunate.

2 Likes

Here's my list with regard to the VM: op_env, pfx_budget, bounded loops, op_eval, op_merkle, negative op_roll & pick, alt stack swap/rot/roll/pick, op_matureblockheader, more math ops, shift ops, op_invert, maybe more hash ops, maybe stream hashing, maybe raw tx parsing opcode(s), and detached signatures (pfx_detachedsig).

As for the below, I doubt there's the will or need to do it, but I'm still putting it out there to ponder:

If we ever break the tx format, then use the opportunity to switch prevout refs to use a txidem (something like H(H(input prevout refs) + H(outputs))), and change the txid to be something like H(txidem + H(input scripts, locktime) + tx_ver + tx_locktime). Furthermore, the hashes covering the ins & outs could be little merkle trees. This way it would be possible to have compact proofs for individual UTXOs getting spent & created, and SPV proofs that drill down further to particular outputs.
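To make the shape of this concrete, here is a minimal sketch in Python. It assumes double SHA-256 and plain concatenation; the exact field grouping (notably hashing only the input scripts inside the txid) and all function names are illustrative assumptions, not a spec, and the two inner hashes could just as well be the little merkle roots mentioned above.

```python
# Sketch only: a txidem that commits to what the tx spends and creates,
# and a txid layered on top that also commits to the unlocking scripts.
import hashlib

def dsha256(data: bytes) -> bytes:
    """Double SHA-256, as used for BCH transaction hashing."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def txidem(prevout_refs: list[bytes], outputs: list[bytes]) -> bytes:
    # Unchanged by malleation of unlocking scripts, so it can serve as a
    # stable prevout reference.
    return dsha256(dsha256(b"".join(prevout_refs)) + dsha256(b"".join(outputs)))

def txid(txidem_: bytes, input_scripts: list[bytes],
         tx_version: int, tx_locktime: int) -> bytes:
    # Uniquely identifies the full transaction, including the unlocking scripts.
    return dsha256(
        txidem_
        + dsha256(b"".join(input_scripts))
        + tx_version.to_bytes(4, "little")
        + tx_locktime.to_bytes(4, "little")
    )
```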

Also, if we break the tx format, it needs to be upgradeable to 384-bit hashes, for when/if quantum computers get powerful enough that 256-bit collision search becomes a risk.

Also, we could consider switching to other hash functions like BLAKE (for everything except PoW) to increase overall performance while maintaining or improving cryptographic security.

Echoing the "if we ever" part (i.e. not next year, or the one after), I'd like to put forward the idea of a tagged-values system for transaction encoding: a basic scheme where you effectively have 'key=value' style fields. This results in much cleaner parsers and opens up plenty of opportunities for improvements that are practically impossible today.
Because with a 'key=value' style encoding, you can choose to skip a key when it is irrelevant. For instance, the 'sequence' field is almost never used.
And you could define specialized keys instead of 'abusing' existing ones. That is relevant for sequence again, but also the coinbase input could be a special short tag instead of an 'empty' prev-txid.

To give an example of features that are impossible today, you could add a key that specifically states "reuse the signature from input 'n'" in order to drastically shrink transactions that combine loads of UTXOs from one address.
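Here is a minimal sketch of what a tagged encoding could look like, loosely in the spirit of BIP 134's compact message format. The tag numbers, the varint, and the TAG_REUSE_SIG field are hypothetical, chosen only to illustrate the "skip an irrelevant key" and "reuse the signature from input n" points above; they are not a proposed wire format.

```python
import io

# Hypothetical tag numbers, for illustration only.
TAG_PREV_TXID  = 1
TAG_PREV_INDEX = 2
TAG_SEQUENCE   = 3   # only emitted when it differs from the default
TAG_COINBASE   = 4   # short tag replacing an "empty" prev-txid
TAG_REUSE_SIG  = 5   # "reuse the signature from input n"

def write_varint(out: io.BytesIO, n: int) -> None:
    # Simple 7-bit varint, purely for illustration.
    while True:
        byte = n & 0x7F
        n >>= 7
        out.write(bytes([byte | (0x80 if n else 0)]))
        if not n:
            return

def write_tag(out: io.BytesIO, tag: int, value: bytes) -> None:
    # Each field is (tag, length, value); a parser can skip any tag it does
    # not care about just by reading the length prefix.
    write_varint(out, tag)
    write_varint(out, len(value))
    out.write(value)

# Example: an input that omits 'sequence' entirely and reuses the
# signature of input 0 instead of carrying its own.
buf = io.BytesIO()
write_tag(buf, TAG_PREV_TXID, bytes(32))    # placeholder prevout txid
write_tag(buf, TAG_PREV_INDEX, bytes([1]))  # spend output index 1
write_tag(buf, TAG_REUSE_SIG, bytes([0]))   # reuse signature from input 0
encoded = buf.getvalue()
```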

I did an, admittedly naive, design of this nearly a decade ago (man, time flies!) that may still have good ideas (BIP 134), if we ever want to actually break the transaction format.

1 Like

Here's an idea for how we could shrink the amount of data needed by SPV clients.

Imagine we added a merkle tree over all block hashes, kept updating it, and committed the updated root in the coinbase TX.

With that, an SPV client wouldn't need the entire header chain anymore; they'd just need (given N is the current tip):

  • The coinbase TX at some height M (where M < N). From this they'd extract the root over headers 0 to M-1, which can be used to make an inclusion proof for any historical header or TX belonging to that range.
  • The header chain for the segment [M, N], which proves the [0, M-1] commitment is buried under (N-M+1) blocks' worth of PoW, and can be used to prove any TX in the [M, N] range.

It would take only tree_height hash ops to update it with a new header after each block, because the header chain is an append-only list with no random insertions; you only ever update the right-most side of the tree.
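One way to get that behaviour (a sketch under assumptions, not a spec) is an MMR-style "peaks" cache over the header hashes: appending a new header merges at most tree_height cached subtree roots, and the bagged-together root is what a miner would commit in the next block's coinbase. The class name and double SHA-256 are illustrative choices.

```python
import hashlib
from typing import Optional

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

class HeaderTree:
    """Append-only merkle commitment over all block header hashes."""

    def __init__(self) -> None:
        # peaks[i] holds the root of a perfect subtree of 2**i headers,
        # or None; together the peaks cover every header appended so far.
        self.peaks: list[Optional[bytes]] = []

    def append(self, header_hash: bytes) -> None:
        # Like binary addition with carries: at most tree_height hash ops,
        # and only the right-most side of the tree is ever touched.
        node = header_hash
        i = 0
        while True:
            if i == len(self.peaks):
                self.peaks.append(node)
                return
            if self.peaks[i] is None:
                self.peaks[i] = node
                return
            node = dsha256(self.peaks[i] + node)  # merge equal-sized subtrees
            self.peaks[i] = None
            i += 1

    def root(self) -> bytes:
        # Bag the peaks, largest (oldest headers) first, into one commitment
        # that could be placed in the next block's coinbase TX.
        present = [p for p in reversed(self.peaks) if p is not None]
        acc = present[0]
        for peak in present[1:]:
            acc = dsha256(acc + peak)
        return acc
```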

1 Like

Something like this? merkleblock checkpoints. (lovingly named: super-merkleroot) · Issue #187 · cculianu/Fulcrum · GitHub

1 Like

I do not fully understand this yet and it needs much deeper thought, but I already love the mere concept.

Can you make it into a nice graph with blocks and arrows to illustrate it more?

That would be great, thanks.

1 Like

Yes!

It’d just be an additional merkle tree over the chain of hashes.

Right now, you have a chain like:

0 -- 1 -- 2 -- 3 -- 4 ... - N

We could compute and maintain a merkle tree over all headers and record the root in the block after.

       R3
   /        \
/    \    /   \
0 -- 1 -- 2 -- 3 -- (4 & R3)

It’s easy to keep it updated, because if you cache intermediate hashes you only need to recompute the side that gets changed when a new block is added.

               R4
       /             \
   /        \        /\
/    \    /   \     /\
0 -- 1 -- 2 -- 3 -- 4 -- (5 & R4)

               R5
       /                 \
   /        \          /   \
/    \    /   \     /    \
0 -- 1 -- 2 -- 3 -- 4 -- 5 -- (6 & R5)

Then having just the (6 & R5) lets you prove any historical header and TX by providing the merkle path.
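And for completeness, here's a sketch of the verification side: a client holding only the committed root (e.g. R5 taken from block 6's coinbase) checks a historical header by walking its merkle path. The proof format (sibling hash plus a left/right flag) is an assumption for illustration.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_header(header_hash: bytes,
                  proof: list[tuple[bytes, bool]],
                  committed_root: bytes) -> bool:
    # Each proof step is (sibling_hash, sibling_is_on_the_right);
    # fold the path upward and compare against the committed root.
    node = header_hash
    for sibling, sibling_is_right in proof:
        node = dsha256(node + sibling) if sibling_is_right else dsha256(sibling + node)
    return node == committed_root
```

A TX proof would then just chain two paths: TX up to its block's merkle root (exactly today's SPV proof), and that block's header up to the committed root.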

3 Likes

TailStorm or shorter blocks would be great to improve UX. Many people, including myself previously, would just choose to use LTC if Monero was not an option, because when sending payment for services outside of the BCH ecosystem they all want at least one confirmation, and usually three. This can be really painful when we don't get a block for 30 minutes; it brings back bad memories of BTC.

We really need BCH to just be the best choice in all scenarios.

3 Likes

I agree. But don't expect it to be done very fast; I am a rather slow programmer, and my complicated life is not helping at all.

2026 is not realistic without external help, I can say this upfront.

3 Likes

2026 is not realistic without external help, I can say this upfront.

I would assume something like this would take at least 2 years. If it is a funding issue, maybe a Flipstarter would help.

Funding will not conjure time magically in my life. There are only 24 hours in a day and I need 7-8 hours of sleep.

Also, I am not poor; I do not really need it. Unless maybe we're talking about hiring somebody.

1 Like

TailStorm would be amazing. I would love to see it as a 2027 upgrade, potentially.

2 Likes

My thoughts for logical upgrades to the BCH VM were already included in the thread above :grinning_face_with_smiling_eyes:

But I mainly wanted to re-post @bitjson's thoughts on the 2026 upgrade, which he shared on X:

For 2026, I’m hoping to see (at least) loops, exponentiation, and relaxed output standardness (“Pay-2-Script”). Just those would close most remaining gaps + extend BCH’s lead in areas where it already excels.

Similar to the VM limits, the bounded loops proposal (CHIP 2021-05 Bounded Looping Operations) was already created in 2021, so it has been in the research discussions for quite a while.