CHIP-2025-03 Faster Blocks for Bitcoin Cash

Why? Do you think the variance is due to us being a minority of the hashrate? I’ve seen people say things like “oh, the variance is due to switch mining causing oscillations” and wondered where they get that from. It sounds plausible, but is it true? This is a good opportunity to address the question.

What I found is that it used to be true (with the EDAA and the CW-144 DAA), but it has not been true since the introduction of the ASERT DAA in 2020. Still, the common knowledge persisted - because it used to be true, even though it isn’t anymore.

Now it takes a big hashrate event (a big price move on BCH or BTC, a halving, or a big miner misconfiguration or error) to have a noticeable impact on block times. Absent that, real block times are well aligned with the theoretical distribution for a random process targeting 10 minutes.

Let’s examine a few samples, each a day’s worth of block times.

Extreme Swings With EDAA

We had the EDAA for 106 days (2017-08-01 to 2017-11-13), and during that period:

  • The average block time was 5.86 minutes (and variance was extreme due to huge hashrate volatility as miners were gaming the algorithm)
  • BCH created 10,577 more blocks compared to the ideal 10-minute schedule
  • The emission schedule was shifted by 73 days, minting 132k BCH ahead of schedule

The figure below (2017-10-12) illustrates the problems well:

Oscillations With CW-144 DAA

We had the CW-144 DAA from 2017 to 2020. It maintained the average, but it oscillated due to the nature of a simple moving average: all blocks in the moving-average window have equal impact, so when a slow block “enters” the sampling window it drops the difficulty, and when it “exits” the window 144 blocks later it brings the difficulty back up - leading to oscillations and volatility in hashrate.
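The enter/exit effect can be illustrated with a toy simple-moving-average adjustment (a sketch of the mechanism only, not the exact CW-144 rules, which work on chainwork and timestamps):

```python
WINDOW = 144   # blocks in the moving-average window
IDEAL = 600.0  # target seconds per block

def sma_difficulty(solve_times):
    # Toy simple-moving-average DAA: difficulty scales with the inverse
    # of the mean solve time over the last WINDOW blocks.
    window = solve_times[-WINDOW:]
    return IDEAL / (sum(window) / len(window))

# 144 on-target blocks, then a single slow (6000 s) block
times = [IDEAL] * WINDOW + [6000.0]
drop = sma_difficulty(times)       # slow block "enters": difficulty drops ~6%
times += [IDEAL] * WINDOW          # 144 more on-target blocks
recovered = sma_difficulty(times)  # slow block "exits": difficulty snaps back
```

A single slow block depresses the difficulty for exactly 144 blocks and then the difficulty jumps back up the moment it leaves the window; with price-sensitive switch miners reacting to each step, this feedback produces the oscillations visible in the figure below.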

The figure below (2019-10-12) illustrates this effect well:

image

Packs of short blocks followed by packs of long blocks are pronounced. We can compare the day’s percentiles with the calculated theoretical probabilities:

likelihood of waiting less than
(case 600s average)
0.7%* 00:04
10% 01:03
20% 02:14
30% 03:34
40% 05:06
50% 06:56
60% 09:10
70% 12:02
80% 16:06
90% 23:02
99.3%* 49:37

* 1 block per day (1/144 = 0.7%)
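These percentiles follow from block arrivals being (ideally) a Poisson process, so inter-block waits are exponentially distributed. The table above can be reproduced from the exponential quantile function:

```python
import math

def wait_quantile(p, avg=600.0):
    # Exponential quantile: with probability p the wait is shorter than
    # this many seconds, given `avg` seconds per block on average.
    return -avg * math.log(1.0 - p)

for p in (0.007, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 0.993):
    s = round(wait_quantile(p))
    print(f"{p:.1%}  {s // 60:02d}:{s % 60:02d}")
```

This reproduces the table exactly, e.g. 06:56 at the median (600 × ln 2 ≈ 416 s) and 49:37 at the 99.3rd percentile.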

Notice that the whole distribution of wait times is skewed far from the theoretical random process: the median 1-conf wait was 12 minutes, while in a truly random process it is expected to be about 7 minutes.
So yes, it definitely used to be true that DAA oscillations and miner switching were causing additional variance. Jonathan Toomim’s analysis goes into more depth on this.

Current Situation (ASERT)

Is that still the case? Are we still experiencing additional variance, or are block times now well aligned with the expected theoretical distribution?

Steady State (ASERT)

Days with sideways price movement (e.g. 2024-09-26) show the variance expected of a random process:

image

There’s still some discrepancy in the percentiles when compared to the theoretical values above, because one day’s worth of blocks (144) is still a small sampling window - not enough to fully smooth out the impact of luck - and some switch mining can still affect our block times. However, that effect is now much less pronounced than the normal variance of a random process.

To confirm the impact of luck, we can generate random data and observe that the interval distribution looks similar, and that the percentile table is affected by the particular sampling of ~144 blocks:

image
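Such a luck experiment can be sketched in a few lines: draw one “day” of 144 exponential intervals and compare the sample statistics against the theoretical values (any fixed seed gives one particular “day”; different seeds give different percentile tables, which is the point):

```python
import random
import statistics

random.seed(7)  # arbitrary seed, so the run is reproducible

# one "day" of blocks: 144 draws from an exponential with a 600 s mean
day = [random.expovariate(1 / 600) for _ in range(144)]

median = statistics.median(day)
print(f"sample median: {median / 60:.1f} min (theoretical ~6.9 min)")
print(f"sample mean:   {statistics.mean(day) / 60:.1f} min (theoretical 10 min)")
```

With only 144 samples the mean and percentiles visibly wander around the theoretical values, which matches the day-to-day discrepancies seen on-chain.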

Upward Price Move (ASERT)

If the price rises enough, the average block time is expected to be proportionally faster until the DAA catches up - but individual block times are still expected to be randomly distributed around the stretched average. See the sample from 2025-05-22: it had a +9% price move, and the result was a 9:10 average (-8%) for the day.

image

We still had an outlier of 58 minutes, and if you pick a random point on the timeline for that day you get a distribution with a 20% chance of waiting >16 minutes and a 10% chance of waiting >24 minutes. This is close to the theoretical values:

likelihood of waiting less than
(case 600s average) (case 550s average)
0.7%* 00:04 00:04
10% 01:03 00:58
20% 02:14 02:03
30% 03:34 03:16
40% 05:06 04:41
50% 06:56 06:21
60% 09:10 08:24
70% 12:02 11:02
80% 16:06 14:45
90% 23:02 21:06
99.3%* 49:37 45:29

* 1 block per day (1/144 = 0.7%)

Downward Price Move (ASERT)

We can observe the same in a downward price move (2025-02-03). The price moved -15% over 2 days, and block time average for the 2nd day was +22% off target.

image

The distribution of individual block times was still well aligned with the theoretical distribution for the matching average.

likelihood of waiting less than
(case 600s average) (case 732s average)
0.7%* 00:04 00:05
10% 01:03 01:17
20% 02:14 02:43
30% 03:34 04:21
40% 05:06 06:14
50% 06:56 08:27
60% 09:10 11:11
70% 12:02 14:41
80% 16:06 19:38
90% 23:02 28:05
99.3%* 49:37 60:32

* 1 block per day (1/144 = 0.7%)

Block Reward Halving (ASERT)

What about the halving? It should have an impact equivalent to a 50% price drop.

image

It looks like, on the 1st day, the effect was more pronounced than anticipated, indicating miners played it safe and removed (or moved to BTC) more than 50% of the hashpower. The distribution of individual times was still well aligned with the theoretical distribution for the matching average, despite the small sample size (47 blocks for that day).

likelihood of waiting less than
(case 600s average) (case 1830s average)
0.7%* 00:04 00:13
10% 01:03 03:13
20% 02:14 06:48
30% 03:34 10:53
40% 05:06 15:35
50% 06:56 21:08
60% 09:10 27:57
70% 12:02 36:43
80% 16:06 49:05
90% 23:02 70:14
99.3%* 49:37 151:20

* 1 block per day (1/144 = 0.7%)

I think this is the only case where our share of total sha256d hashpower would matter. Thankfully, such events only happen once every 4 years. :slight_smile:
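For intuition on how long the catch-up takes, here is a luck-free sketch of an ASERT-style responder - the idealized exponential form of the aserti3-2d rule with its 2-day halflife, ignoring integer arithmetic and timestamp noise (a model, not the consensus code):

```python
IDEAL = 600.0
HALFLIFE = 2 * 24 * 3600.0  # aserti3-2d uses a 2-day halflife

def simulate(hash_fraction, n_blocks):
    # Luck-free model: each block takes IDEAL * difficulty / hash_fraction
    # seconds, and the relative difficulty follows 2^(-drift / HALFLIFE),
    # where drift is how far the chain is behind the ideal schedule.
    t, difficulty, times = 0.0, 1.0, []
    for height in range(1, n_blocks + 1):
        block_time = IDEAL * difficulty / hash_fraction
        t += block_time
        times.append(block_time)
        drift = t - IDEAL * height          # seconds behind schedule
        difficulty = 2 ** (-drift / HALFLIFE)
    return times

times = simulate(0.5, 3000)  # hashrate instantly halves
print(f"first block after the drop: {times[0] / 60:.0f} min")
print(f"eventually: {times[-1] / 60:.1f} min")
```

In this model block times start at 20 minutes and decay back to 10 as the relative difficulty settles at half its old value, with most of the catch-up happening within a few halflives of wall time. Plugging in `simulate(1/3, ...)` gives a 30-minute first block, close to the 1830 s day-one average in the table above.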

What Does This Mean for The CHIP?

The nice thing is that the extremes scale with the target block time: what is now a 0.5 to 30 minute range (90% of waits) could become 0.05 to 3 minutes by reducing the target/average time to 1 minute, and the impact of the next halving would become 0.15 to 9 minute waits for a day or two (assuming hashpower drops to 1/3, like last time).

The impact of price/hashrate volatility would be barely noticeable, since a 20% move would shift the average by just 12 seconds (as opposed to 2 minutes now).
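Both scaling claims are easy to check: exponential quantiles scale linearly with the target, so every percentile shrinks 10x when the target drops from 10 minutes to 1 minute, and a 20% hashrate move shifts the average by 20% of the target:

```python
import math

def middle_90(avg):
    # 5th and 95th percentile of the exponential wait, in seconds
    return -avg * math.log(0.95), -avg * math.log(0.05)

lo, hi = middle_90(600)  # 10-minute target
print(f"10-min target: {lo:.0f} s to {hi / 60:.0f} min")
lo, hi = middle_90(60)   # 1-minute target
print(f"1-min target:  {lo:.1f} s to {hi / 60:.1f} min")

# a 20% hashrate move shifts the average by 20% of the target:
print(f"impact: {600 * 0.2:.0f} s now vs {60 * 0.2:.0f} s at 1-min blocks")
```

The 10-minute case gives roughly 31 s to 30 min for the middle 90% of waits, matching the "0.5 to 30 min" range quoted above.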

3 Likes

It’s not just AMMs - it’s any public-use contract where there could be a significant number of concurrent users. Such contract UTXOs have a chance of accidental double-spends, and only one transaction chain will get through.
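The cost of that contention is easy to quantify with a toy model (my own simplification: one winning chain per block, losers immediately rebuild on the new state): with n users racing on one contract UTXO, the i-th to get through has waited i blocks, so the average wait grows linearly with both the number of racers and the block time.

```python
def avg_contention_wait(n_users, block_time_min):
    # One racer wins per block; the i-th winner has waited i blocks,
    # so the average wait is (n + 1) / 2 block intervals.
    return (n_users + 1) / 2 * block_time_min

print(avg_contention_wait(4, 10))  # four racers, 10-min blocks -> 25.0 min
print(avg_contention_wait(4, 1))   # same race at 1-min blocks  -> 2.5 min
```

Shorter blocks don’t remove the contention, but they shrink the retry penalty by the same 10x factor.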

Yup, 0-conf is a must for brick-and-mortar, but for it to be safe the user must be spending a confirmed UTXO rather than have unsafe 0-conf dependencies (like P2SH). Unless you were trading on a DEX while waiting in the cashier line, you should have some confirmed UTXOs in your wallet, ready for safe 0-conf use :slight_smile:

4 Likes

Well, I define any “anyonecanspend-type contract” as an “AMM contract”; maybe I should be clearer about it.

Otherwise I agree with your arguments.

Ah, I almost forgot:

Well, we will have to fix these “unsafe 0-confs” that come as children of AMM transactions sooner or later.

It’s an obvious security risk to instant transactions.

It’s detectable by looking up the parent, so it should be doable. Maybe DSPs (double-spend proofs) could send a notification or something (just an idea)?

Either way we will have to fix it, preferably soon.

Avalanche Pre-Consensus will fix this.

In what way, specifically?

Be mindful this is a “research” website. I expect actual research and technological explanation, no AI slop.

4 Likes

You can see the detailed breakdown here: https://avalanche.cash

You did not answer the question.

I meant specifically, in what way will Avalanche fix notifying 0-confirmation transaction users that their transaction is at risk because the parent is an AMM transaction.

With some details, please.

2 Likes

Avalanche is a voting network on top that puts limits on miners. That’s why it can be done as a softfork. Miners will be orphaned if they include a double-spend of a transaction voted valid by Avalanche. Transactions will be voted on as soon as they appear on the network. The economic majority participating in Avalanche will most likely not want to undermine the value of their coins, but if they turn malicious, the miners and economic nodes could just agree to ignore it.

That did not answer the question, so I will ask again:

Specifically, in what way will Avalanche fix notifying 0-confirmation transaction users that their transaction is at risk because the parent is an AMM transaction?

I assume you obviously understand the question. Right?

image

2 Likes

Users will not be notified, since their transaction is not at risk.

This is why we don’t want it. Why even have miners anymore? Why even have PoW? Just let the “voting network” do everything. And if you want a “voting network”, there are thousands of networks to pick from, or you can even start your own. The only thing you’re achieving with this annoying Avalanche shilling, trying to plug it into every topic, is repeatedly pissing everyone off. Just stop it.

5 Likes

That, and maybe because on the competing chain it’s been in development for 7 years, by a paid team, and it still doesn’t actually have its main feature of instant finality, according to the website he just linked.

SegWit seems to be simple and easy in comparison…

3 Likes

This is not a fix IMO, as this would work for cash-like transactions, not for DeFi, where most things are automated via contracts and AMMs. But the idea itself is very good in general - I like it.

Btw, since you brought this to the table: I propose that a 1-minute block time would be a more optimal choice by the time this CHIP gets implemented, rather than a 2-minute block time. It presents better benefits vs. trade-offs.

1 Like

Erm, it’s not supposed to work on AMMs and DeFi.

There is no such thing as a “double spend” in an AnyoneCanSpend AMM-style transaction, so detecting double spending is meaningless.

DSPs could work for normal P2P transactions, non-AMM contracts and shopping.

2 Likes

I think it may be an underrated benefit of faster blocks that both ABLA and the DAA become more responsive (even with the formulas adjusted relative to block time, the increased frequency of data points will make the adaptation track the “ideal” value more closely).

This seems like it could be a big benefit in a BTC flippening scenario: where BTC is less responsive, stuck on its slow 10-minute blocks and 2016-block DAA, BCH would siphon off hashrate with 1-minute blocks and a per-block DAA. This would give BCH maximum stability in such a situation (which would involve extremely volatile price swings), while further dooming BTC due to its low responsiveness and adaptability.

5 Likes

I discussed 1 minute blocks with @joemar_taganna on the Podcast recently.

You can listen to hear his opinion in his own words, but I found it interesting that he didn’t see a strong immediate need for it despite running a large payments/merchant network. This is a pattern I’ve seen in several other people as well.

I’ve noticed that reactions to the idea of 1-minute blocks tend to fall into two camps:

  • People who see the pain point that 1 minute blocks addresses, and think it’s a really good idea. Maybe people frequently interacting with exchanges, or payment processors that use 1 conf or some amount of confs, or who are heavy in DeFi and see how that plays into things, or who are sick of arguing on social media with people who are shilling alternatives like LTC with lower block times.
  • People who don’t see the pain point, and are more cautious. They don’t think the idea is bad, so much as they can’t see why it’s necessary to change at all or they have confidence in the status quo. It’s interesting that the reservations are around the impacts on mining (which can be addressed with good enough research & consensus building) or the lack of need for this change, rather than any strong proactive reason or aversion to the general concept of changing the block time.

Something for 1-minute block advocates to reflect on. Maybe there needs to be a section about this kind of “hesitancy” (not sure if that’s the best word for it).

2 Likes

This is a very good point :+1: and you are getting at the main problem here. 1-minute blocks are not needed within BCH wallets - the exception being DeFi and AMMs on DEXes - however, for exchanges, payment processors, and even some services like Mullvad etc… it’s very much needed right now.

2 Likes

Hi all,

Great and thoughtful discussion here - it’s good to see the community digging into the merits of faster blocks for improving DeFi UX on BCH, especially around issues like UTXO contention in AMMs, MEV reordering, and variance in recovery times for failed chains. The points on how a deterministic tiebreaker (e.g., from the paper linked) or research like Tailstorm could enhance fairness and convergence are spot-on, and I agree they deserve exploration as simpler alternatives to full block time changes.

High-level, if uncle/orphan rates under Tailstorm for ~10-second blocks end up comparable to current orphan rates (potentially with minimal differences due to its merging mechanics), it could achieve similar benefits to faster blocks with less protocol risk—definitely worth modeling in the context of this CHIP.

To contribute productively, I prototyped a CLI simulator for a covenant-based AMM using “covenant recursion for DeFi without trust.” This demonstrates what’s conceptually achievable today with BCH’s existing VM features (e.g., 2018-re-enabled OP_CAT for concatenation and basic introspection for merkleized state) to enable atomic sequencing without signers or faster blocks, while previewing how proposed 2026 enhancements (e.g., loops via OP_BEGIN/OP_UNTIL and OP_EVAL for modular functions) would make it scalable for high-volume use.

It addresses the core DeFi limitations raised—e.g., concurrent users creating accidental double-spends on public contracts, where only one chain confirms after a block, forcing retries with 10+ minute waits.

The prototype simulates a constant-product AMM where users commit hashed actions (queuing without revealing to prevent front-running), then reveal to process swaps atomically. The covenant merkleizes commits into a fair sequence (appending hashes like the suggested “tagging input” but fully on-chain), then recurses state trustlessly on reveal.

No external server/signing (avoiding the “serial provider” centralization risk, as pointed out), and it keeps AMMs simple with “AnyoneCanSpend”-like openness, per the push: “People want simple AMM… without forcing everybody else to comply.”

Compared to today (where 2018-enabled OP_CAT allows basic versions but requires manual unrolling for sequences, bloating scripts and limiting to ~5-10 commits before size/VM limits), 2026 upgrades make it performant: Loops iterate dynamically (e.g., append 100+ hashes without code repetition, reducing size 5-10x), OP_EVAL modularizes extraction/math (faster execution, no bloat), and deeper introspection strengthens anti-MEV binding. This scales high-volume DeFi without protocol risks—atomic batches resolve contention instantly, complementing 0-conf for “snappier” UX.

Demo run (multi-user with 2 concurrent swaps racing on the same pool; hashes may vary, but logic is reproducible):

> status
Pool: BCH=1000, Tokens=1000, K=1000000
Merkle Root: 0000000000000000000000000000000000000000000000000000000000000000
Pending Commitments: 0

> multi 2
User-1 Committed: Hash ca9f145926c8e58d68cdb92f90d5b8a45da118b81925392f81b599e9f7374aab. New Merkle Root: ea423618fa4c248dcc2eb6c571a1f3841b7c80c9db4e1458c770f7598923e66b
User-2 Committed: Hash 82fac6c59221b850d5dab3832f3f84a075fa8b61bf7bf3bf91f3b09c6a070ab3. New Merkle Root: 64eec1e55a0fe91a539054926c93a110af092df1e246b6cafcc9bf122f5b29ef
User-1 Swap executed: Output 48. New Pool: BCH=1050, Tokens=952
User-1 Merkle Root Updated: fefcc03c125fcbc5f4af5aae371eb37790b4b082d1f724acb1d209036ed65f57. Simulated Wait: 26.73 min
User-2 Swap executed: Output 100. New Pool: BCH=950, Tokens=1052
User-2 Merkle Root Updated: dc82e08db4b7f3f8915621ac690d8ee1a97a84ac97a43187fdb0d72ef6006e65. Simulated Wait: 10.58 min

> status
Pool: BCH=950, Tokens=1052, K=1000000
Merkle Root: dc82e08db4b7f3f8915621ac690d8ee1a97a84ac97a43187fdb0d72ef6006e65
Pending Commitments: 0

In this run, User-1 and User-2 race commits (simulated delays for network realism). The merkle root sequences them fairly—reveals process atomically on updated state, resolving contention without failures or long waits. A mismatched reveal (e.g., tampering) fails trustlessly: “Error: Invalid commit hash.”
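The constant-product arithmetic in that run can be checked independently (a few lines mirroring the output above; `swap` is an illustrative helper, not the actual contract code):

```python
def swap(pool_in, pool_out, amount_in):
    # Constant-product rule: pool_in * pool_out = K stays invariant.
    # Integer division matches the whole-number amounts in the demo.
    k = pool_in * pool_out
    new_in = pool_in + amount_in
    new_out = k // new_in
    return new_in, new_out, pool_out - new_out

# User-1: 50 BCH in against the 1000/1000 pool
bch, tok, out1 = swap(1000, 1000, 50)
print(bch, tok, out1)    # → 1050 952 48

# User-2: 100 tokens in against the updated pool
tok2, bch2, out2 = swap(tok, bch, 100)
print(bch2, tok2, out2)  # → 950 1052 100
```

These reproduce the pool states in the demo run: each swap executes on the state left by the previous commit in the merkle sequence, which is exactly the atomic ordering the covenant is meant to enforce.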

Full Python script (CLI REPL with threading for races) and CashScript contract: [https://github.com/bastiancarmy/bitcoin-cash-trustless-defi-recursion]. Basics work today with 2018 OP_CAT, but 2026 upgrades scale it efficiently for real-world volume.

This could complement or serve as an alternative to the CHIP: The 2026 VM upgrades fix DeFi internals at the script level without the consensus risks of broader protocol changes—e.g., avoiding orphan spikes while preserving BCH’s foundational ‘soul’ and enabling ‘instant’ atomic batches for smoother UX.

3 Likes