2026 protocol upgrade ideas

Hopefully the upgrades to scripting capabilities drive adoption to the point where UTXO Commitments become “necessary.” It’s something I’d love to see. To me, UTXO Commitments are one of the last remaining foundational components to make the system indefinitely sustainable.

2 Likes

Lots I’d love to see, but Bounded Loops are the big one that comes to mind.

Also, though I’m not itching for it, will add this as another item:

2 Likes

Oh, right! Output standardness is technically only a relay/isStandard rule, but it would definitely benefit from being changed for all participants at the same time.

So that one would fit this list too!

I could still edit my initial post, so I added this one there.

2 Likes
  • Avalanche
  • 2 minute blocktime

:heavy_plus_sign::one: For bounded loops

Bounded loops are basically normal loops like in any programming language; however, they are limited in the amount of processing power/resources they can “steal” from Bitcoin nodes (without this security measure you could essentially break the network).

This means that Bitcoin’s language is getting even closer to any normal programming language. The potential for amazing things that can be built here is basically unlimited.
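To make the resource-limiting idea concrete, here’s a minimal Python sketch of how a VM might charge loop iterations against a fixed work budget and abort when it runs out. The names and the budget number are made up for illustration; they don’t come from any actual CHIP.

```python
# Illustrative only: how a VM might keep bounded loops from consuming
# unbounded resources. Names and limits are hypothetical, not from any CHIP.
OPERATION_BUDGET = 10_000  # hypothetical per-script work budget

class BudgetExceeded(Exception):
    """Raised when a script tries to use more work than its budget allows."""

def run_bounded_loop(body_cost, max_iterations, budget=OPERATION_BUDGET):
    """Run a loop at most `max_iterations` times, charging `body_cost`
    work units per pass; abort the script if the budget runs out."""
    for _ in range(max_iterations):
        budget -= body_cost
        if budget < 0:
            raise BudgetExceeded("loop exceeded the script's work budget")
    return budget  # leftover budget for the rest of the script

print(run_bounded_loop(body_cost=3, max_iterations=100))  # 9700 left over
run_bounded_loop(body_cost=3, max_iterations=10_000)      # raises BudgetExceeded
```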

So yes, very HYPE :fire::fire:

3 Likes

This came up during our discussions about VM limits, and I’d like to elaborate more on it, so I added a post here:

3 Likes

Would love in 2026:

  • Millisats
  • OP_PFX

Could love in 2026 if well-researched:

  • OP_ENV
  • OP_EVAL
  • OP_LOOP

Open to being convinced, maybe in 2026:

  • tx-ack
  • relax output standardness rules

Open to being convinced, probably not for 2026:

  • Changes to blocktime
  • Nested PoW / Infra Blocks

Definitely don’t want (no thank you):

  • Avalanche

Special mention:

  • UTXO Commitments, although this doesn’t need to be at the protocol level right now

6 Likes

Can it run Doom though?

Seriously though, running DOOM on things easily makes it into the biggest news outlets, even the Wall Street Journal and the like.

So if you can make it run DOOM, BCH will hit big news outlets.

2 Likes

Making Nested PoW happen for 2026 will be very hard work for me. I would have to create a testing network (testnet-in-a-box or a dedicated testnet) with multiple nodes to make sure it all works together nicely, then run it for 6+ months to prove to people and companies that it is reliable.

Unless I get some external help maybe, 2026 is difficult.

But I am OK with moving it to 2027 if I cannot make it. No rush.

About milli-sats.

A little discussion on Telegram produced some good ideas and opinions. One of the main ones is that the cost is high for a currently low return on investment.
The cost is high because all parties parsing raw transactions would have to adjust their software or risk losing money.
The return on investment is low because at our current price there is no benefit yet.

One suggestion was to simply avoid assumptions about the schedule. Don’t assume it will fit in the November/May cadence we’ve been using.
Instead, say we want to activate this in 4 years: give everyone a clear indication and notification, have a testnet soon, and so on.

Or, maybe even simpler: start the process now and make the actual lock-in date and activation date part of this specific CHIP. We likely won’t be using November 2025 and May 2026 for those, but maybe November 2025 for lock-in and May 2029 for activation.
All dates are draft and open for discussion.

With this much longer timeline it may be possible to not break the transaction format at all: just change the interpretation of the value field in a new tx-version.
Or, if people STILL find that too risky, the longer timeline means we might fit various transaction-format cleanups in at the same time.

Having those dates be part of this CHIP gives us lots of options; hopefully one will please enough people to make it happen :slight_smile:

1 Like

There’s also the “evaluation of alternatives”: it needs to be demonstrated that bumping the TX version and breaking the TX format to shift the value by ×1000 has better trade-offs than inserting a new field in a non-breaking way, using the same prefix approach CashTokens used.

The non-breaking alternative would add an additional field to the output format:

| Field | Length | Format | Description |
| --- | --- | --- | --- |
| value | 8 bytes | unsigned integer (LE) | The number of satoshis to be transferred. |
| subsat prefix and locking script length | variable | variable length integer | The combined size of the full subsat prefix and the locking script, in bytes. |
| [PREFIX_SUBSATS] | 1 byte | constant | Magic byte at codepoint 0xff (255); indicates the presence of a subsat prefix. |
| [subsat amount] | variable | variable length integer | The amount of subsatoshis. |
| locking script | variable | bytes (BE) | The contents of the locking script. |

Note: if the output encodes both subsatoshis and tokens, then the subsatoshi prefix MUST come after the token prefix.

Applications could choose how to present the high-precision composite (see the sketch after this list):

  • As a floating point: float_value = value + (double) subsat_amount / (1<<64)
  • As high-precision atomic units: atomic_value = (uint128) value * (1<<64) + subsat_amount
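To illustrate, here’s a rough Python sketch of parsing such an output and computing the composite value. It assumes Bitcoin-style CompactSize varints and ignores the token-prefix case for brevity; none of this is a finished spec.

```python
# Rough sketch of the proposed output encoding (assumptions: Bitcoin
# CompactSize varints; token-prefix handling omitted for brevity).
import io
import struct

PREFIX_SUBSATS = 0xFF  # magic byte per the table above

def read_varint(stream):
    """Read a Bitcoin CompactSize variable length integer."""
    first = stream.read(1)[0]
    if first < 0xFD:
        return first
    width = {0xFD: 2, 0xFE: 4, 0xFF: 8}[first]
    return int.from_bytes(stream.read(width), "little")

def parse_output(raw: bytes):
    stream = io.BytesIO(raw)
    value = struct.unpack("<Q", stream.read(8))[0]  # 8-byte LE satoshi value
    blob = stream.read(read_varint(stream))         # subsat prefix + locking script
    subsat_amount = 0
    if blob and blob[0] == PREFIX_SUBSATS:          # optional subsat prefix
        inner = io.BytesIO(blob[1:])
        subsat_amount = read_varint(inner)
        locking_script = inner.read()
    else:
        locking_script = blob
    # High-precision atomic units, per the formula above:
    atomic_value = value * (1 << 64) + subsat_amount
    return value, subsat_amount, atomic_value, locking_script

# 1 satoshi + 500 subsatoshis, empty locking script:
raw = struct.pack("<Q", 1) + bytes([4, PREFIX_SUBSATS]) + b"\xfd" + (500).to_bytes(2, "little")
print(parse_output(raw))  # (1, 500, 18446744073709552116, b'')
```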

The Script VM could be upgraded using some OP_ENV so that introspection opcodes return the high-precision atomic_value, or we could just add 2 new introspection opcodes.

1 Like

The ‘non-breaking way’ adds tech debt: unneeded complexity, complexity that can never be removed again, because once those transactions are in production they stay forever.
We picked hard forks over soft forks, and we relished the fact that we can do things more cleanly, with much less tech debt as a result. This is useful for keeping the chain long-term viable.
I want Bitcoin Cash (the protocol) to still run mostly unchanged 20 years from now. Maybe even longer, but I dare not look that far into the future.

So, sure, it is possible to push this data into a script field and make all script parsers adjust, so that transaction parsers can wait a couple more years to do the same.
Yes, that may be a positive thing in the short term.

But, people, if you think short term, why are you worried about enabling milli-sats at all?

The trade-off of doing this inside the value field is that it is by far the least invasive change possible. It is very simple to write the code in practically all parsers: if txversion < 10, multiply ‘amount’ by 1000. That’s it. A one-line spec.
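In code, that one-line rule could look something like this (Python; the version threshold of 10 is just the example from above, not a settled number):

```python
# The whole compatibility rule, as sketched above; "10" is illustrative.
def value_in_millisats(tx_version: int, amount: int) -> int:
    """Old-format amounts are satoshis; scale them to milli-satoshis."""
    return amount * 1000 if tx_version < 10 else amount
```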

The risk of some old software messing up diminishes over time. Software honestly has a short half-life in our crypto world. This is a very manageable problem, if we just give it time.

First post here. First of all, I am not a dev, so I am not super technical. However, it is quite obvious to me that the biggest problem with BCH currently is still usage: we do not even consistently produce 1 MB blocks. So in my mind, things like UTXO commitments are still not as important as allowing more apps to be built on BCH. Speed is a factor in this, but with DSProofs and ZCEs perhaps that is all that is needed. Things like Tail Storm or a shorter blocktime will most likely not be doable this go-around anyway.

So in my mind it would probably be best to do another round of VM upgrades to allow even more future capabilities for builders. Everyone’s deep in the VM right now anyway, so maybe this would also help while it’s still fresh in everyone’s minds. The ones that seem to be the most pertinent after talking to @MathieuG are:

  • Loops
  • re-enabling LSHIFT / RSHIFT
  • perhaps adding new math opcodes for more complex market makers

Those are my thoughts at this time, thank you.

7 Likes

Welcome to Bitcoin Cash Research!

It’s true that driving adoption needs to be a huge focus. In my opinion, VM Limits will already have opened the playground far enough (in combination with CashTokens, BigInts, etc.) that protocol dev is not going to be a limiting factor on that building. At that point it’s much more about developer tooling for the higher layers, the quality of wallet integrations (to drive interest and provide a starting point), effective media promotion, price action (up always drives interest), and so on.

That’s just my take, which is why I’m more interested in protocol development moving into more “esoteric” features. The 2023 & 2025 upgrades have been big for enabling building, and I think to the extent we need more adoption (which we do), those problems now basically need to be addressed through other parts of the ecosystem.

But I agree the Loops idea does sound cool & might be quite significant on that front.

2 Likes

Which “esoteric” features are you referencing? I agree tooling has not caught up to our VM upgrades and that the VM isn’t the main thing holding us back anymore. I still feel we might as well continue to focus on it for the next upgrade cycle, as I don’t see any other big changes that are close to reaching consensus. It’s also something still on people’s minds, and we might as well stay a couple of steps ahead of tooling if there are a couple of add-ons that will provide even more future use cases. The main ones at this time do seem to be Loops, followed by the Shift ops.

Things like UTXO Commitments or sub-satoshis: stuff that isn’t IMMEDIATELY needed, maybe not for another 2, 3, or 4 years, but if we get it in the pipeline and/or shipped over the next couple of years, then it’ll be ready when it is needed.

With regards to things needed in the VM, I am interested in the idea of Bounded Loops, but in general I’m hoping that ecosystem focus moves up a couple of layers for the time being. Anything really meaty at the protocol layer might distract focus from that, which would be unfortunate.

2 Likes

Here’s my list with regards to the VM:

  • op_env
  • pfx_budget
  • bounded loops
  • op_eval
  • op_merkle
  • negative op_roll & op_pick
  • alt stack swap/rot/roll/pick
  • op_matureblockheader
  • more math ops
  • shift ops
  • op_invert
  • maybe more hash ops
  • maybe stream hashing
  • maybe raw tx parsing opcode(s)
  • detached signatures (pfx_detachedsig)

The items below I doubt there’s the will/need for, but I’m still putting them out there to ponder:

If we ever break the TX format, we could use the opportunity to switch prevout refs to use a txidem (something like txidem = H(H(input prevout refs) + H(outputs))), and change the txid to something like txid = H(txidem + H(input scripts, locktime) + tx_ver + tx_locktime). Furthermore, the hashes covering the ins & outs could be little merkle trees. This way, it would be possible to have compact proofs of individual UTXOs getting spent & created, and SPV proofs that drill down further to particular outputs.
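As a rough Python sketch of that txidem/txid split (the double-SHA256 and the ad-hoc serialization here are purely illustrative):

```python
# Rough sketch of the txidem/txid split described above. The serialization
# is ad-hoc and illustrative, not a specification.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def txidem(prevout_refs: bytes, outputs: bytes) -> bytes:
    # Covers only prevouts and outputs, so it is immune to input-script malleation.
    return H(H(prevout_refs) + H(outputs))

def txid(idem: bytes, input_scripts: bytes, tx_ver: int, tx_locktime: int) -> bytes:
    # Commits to the idem plus the remaining (malleable) fields.
    return H(idem
             + H(input_scripts)
             + tx_ver.to_bytes(4, "little")
             + tx_locktime.to_bytes(4, "little"))
```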

Also, if we break the TX format, it needs to be upgradeable to 384-bit hashes, for when/if quantum computers get powerful enough that 256-bit collision search becomes a risk.

Also, we could consider switching to other hash functions like BLAKE (for everything except PoW) to increase overall performance while maintaining or increasing cryptographic security.

Echoing the “if we ever” part (i.e. not next year, or the one after), I’d want to put forward the idea of a tagged-values system for transaction encoding: a basic idea where you effectively have ‘key=value’ style fields. This results in much cleaner parsers and allows plenty of opportunities for improvements that are practically impossible today.
Because with a ‘key=value’ style encoding, you can choose to skip a key if it is irrelevant; for instance, the ‘sequence’ is mostly never used.
And you could define specialized keys instead of ‘abusing’ existing ones. That is relevant for the sequence again, but also the coinbase input could be a special short tag instead of an ‘empty’ prev-txid.

To give an example of features that are impossible today, you could add a key that specifically states “reuse the signature from input ‘n’” in order to drastically shrink transactions that combine loads of UTXOs from one address.
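A toy Python sketch of what such a tagged encoding could look like; the tag numbers and varint format are invented here purely for illustration (BIP 134 has a real design along these lines):

```python
# Toy tagged (key=value) transaction-field encoding. Tags and wire format
# are invented for illustration; absent keys simply take no bytes at all.
TAG_PREV_TXID, TAG_SEQUENCE, TAG_COINBASE, TAG_REUSE_SIG = 1, 2, 3, 4  # hypothetical

def write_varint(n: int) -> bytes:
    """Minimal LEB128-style varint, for illustration."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_fields(fields: dict) -> bytes:
    """Serialize only the tags that are actually present."""
    out = bytearray()
    for tag, value in sorted(fields.items()):
        out += write_varint(tag) + write_varint(len(value)) + value
    return bytes(out)

# An input that omits `sequence` entirely and reuses input 0's signature:
blob = encode_fields({TAG_PREV_TXID: b"\x11" * 32, TAG_REUSE_SIG: bytes([0])})
print(blob.hex())
```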

I did an admittedly naive design of this nearly a decade ago (man, time flies!) that may still have good ideas (BIP 134), if we ever actually want to break the transaction format.

1 Like

Here’s an idea for how we could compact the amount of data needed by SPV clients.

Imagine if we added a merkle tree over all block hashes, kept updating it, and committed the updated root in the coinbase TX.

With that, an SPV client wouldn’t need the entire header chain anymore; they’d just need (given N is the current tip):

  • The coinbase TX at some height M (where M < N); from this they’d extract the root of all headers 0 to M-1, which can be used to make an inclusion proof for any historical header and TX in that range.
  • The header chain for the segment [M, N], which proves the [0, M-1] commitment is buried under (N-M+1) headers’ worth of PoW, and can prove any TX in the [M, N] range.

It would only take tree_height hash ops to update it with a new header after each block, because the header chain is an ordered list that only grows at the end: there are no random insertions, so you only ever update the right-most side of the tree. (See the sketch below.)
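For illustration, a small Python sketch of such an append-only accumulator (essentially the merkle-mountain-range trick; the exact hashing and peak-folding here are made up for the example):

```python
# Append-only merkle accumulator over block hashes. Adding one header only
# touches the right edge of the tree: O(log n) hash ops per block.
# Hashing details here are illustrative, not a specification.
import hashlib

def H(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

class HeaderAccumulator:
    def __init__(self):
        self.peaks = []  # peaks[i]: root of a perfect subtree of 2**i leaves, or None

    def append(self, header_hash: bytes) -> None:
        """Add one block hash; carries like binary addition, so O(log n)."""
        carry = header_hash
        for i in range(len(self.peaks)):
            if self.peaks[i] is None:
                self.peaks[i] = carry
                return
            carry = H(self.peaks[i], carry)
            self.peaks[i] = None
        self.peaks.append(carry)

    def root(self) -> bytes:
        """Fold the peaks into one commitment to put in the coinbase TX."""
        acc = b""
        for peak in self.peaks:
            if peak is not None:
                acc = peak if not acc else H(peak, acc)
        return hashlib.sha256(acc).digest()

acc = HeaderAccumulator()
for height in range(5):
    acc.append(hashlib.sha256(str(height).encode()).digest())
print(acc.root().hex())  # the commitment an SPV client would check against
```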

1 Like