CHIP-2025-03 Faster Blocks for Bitcoin Cash

Also need a section dealing with this. People are going to be very confused about whether 0 conf is affected or not.

Need a simple, bullet point breakdown of impact on:

  • 0 conf ecosystem (none/synergistic)
  • People who need 1-3 confs (good, if they don’t care about accumulated PoW)
  • People who adjust the 1x10min conf to 10x1min confs (no difference in wait time, though the PoW now accumulates gradually across the 10 blocks)
  • DeFi apps, some of which can accept 0 conf and some of which can't
  • etc.

https://x.com/TheBCHPodcast/status/1907861271686361516

Plus, note that the whitepaper doesn't specify a block time, because a lot of people are going to be confused by / hung up on that.

2 Likes

Just to reinforce the point about "specify": the white paper does mention "assuming" a 10 minute block time. Assuming is not the same as specifying.

1 Like

Sovereign Naan made a three-video series about 1 minute blocks on BCH as a proposal back in September 2021 - discussing 1-conf times, faster DAA response, the unchanged inflation schedule, and concerns about the impact on mining - and even got support from Roger Ver & Ryan Giffin. Incredibly prescient.

4 Likes

Haven't caught up yet, but just throwing in my 2 sats for now:

If the purpose of lowering the block time is to improve UX, I'd imagine reducing the variance itself would be just as important. High variance would still be common with just a shorter block time, which would still make for a bad UX in the end.

I liked the high-level idea of Bobtail when I first heard of it, which was about making block times much more consistent around the intended rate. I think something that addresses block-time variance as well would make sense to include in any potential block-time change, since the goal is the same: make blocks more predictable and improve the user experience. There is definitely a nice UX when using other chains that generate blocks consistently at around the same interval.

Going back, it seems like BCA likened Tailstorm to a 'Bobtail 2.0', so I'm pretty behind on this stuff now :sweat_smile: But if the end result of a shorter block time is a much more consistent block time as well, then that's pretty positive imo.

Just for reference:
Bobtail video: https://www.youtube.com/watch?v=GhJqNtYFcBw&list=PLfUWWM-POgQsz0H0uwSVQK5_ISMiW0m98&index=3
Bobtail paper link: Bobtail: Improved Blockchain Security with Low-Variance Mining - NDSS Symposium

My high-level understanding of Bobtail was that it changed blocks to require finding multiple valid hashes: the block uses some number k of the best proofs that miners find, splitting the block reward among those miners. Almost like miners being part of a global pool rather than all-or-nothing, which would give smaller miners more consistent payouts.
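For intuition, here is a toy Monte Carlo of that idea (a rough sketch of the mechanism as I described it above, not the paper's exact scheme; the target value, k values and trial count are arbitrary): a "block" is solved once the average of the k lowest proof values seen so far drops below a target, and higher k gives a much smaller relative spread of solve times.

```python
# Toy sketch: solve time = attempts until the average of the k lowest
# proof values seen so far drops below the target. Not Bobtail's exact spec.
import random, statistics

def solve_attempts(k, target, rng):
    best = []                      # k lowest proof values seen so far
    attempts = 0
    while True:
        attempts += 1
        v = rng.random()           # proof value, uniform in [0, 1)
        if len(best) < k or v < best[-1]:
            best.append(v)
            best.sort()
            best = best[:k]        # keep only the k lowest
        if len(best) == k and sum(best) / k < target:
            return attempts

rng = random.Random(1)
for k in (1, 10):
    samples = [solve_attempts(k, target=1e-3, rng=rng) for _ in range(300)]
    mean = statistics.mean(samples)
    cv = statistics.stdev(samples) / mean   # relative spread of solve times
    print(f"k={k:2d}: mean attempts ~{mean:7.0f}, coefficient of variation ~{cv:.2f}")
```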

Will need to read up on Tailstorm, etc.

1 Like

A shorter block time doesn't reduce the variance proportionally (relative to the target), but it does reduce it in absolute terms - which in this case goes a long way toward solving the problem.

I.e. the difference between 10 minutes and an outlier of 1 hour (a 50-minute gap) is extreme in absolute terms, but the difference between 1 minute and an outlier of 6 minutes (a 5-minute gap) is still well below the ~13-minute threshold where user frustration kicks in. And with 10x more frequent sampling, the runs of "bad luck" that string together multiple large gaps also become far less punishing - not proportionally, but in how they feel in absolute terms.
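To put rough numbers on that, a quick sketch under the standard exponential-interval model (the 13-minute frustration threshold is just the figure from this post, not something from the CHIP):

```python
# How many confirmation gaps exceed a 13-minute "frustration threshold"
# over one simulated week, at a 10-minute vs a 1-minute target?
import random

def long_gaps_per_week(target_min, threshold_min=13.0, seed=7):
    rng = random.Random(seed)
    elapsed, count = 0.0, 0
    while elapsed < 7 * 24 * 60:                  # one week, in minutes
        gap = rng.expovariate(1.0 / target_min)   # exponential block interval
        if gap > threshold_min:
            count += 1
        elapsed += gap
    return count

for target in (10.0, 1.0):
    print(f"{target:>4} min target: {long_gaps_per_week(target)} gaps over 13 min in a week")
```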

Edit: It’s a good observation to be thinking about how to increase block consistency as well.

1 Like

Variance would still exist, but it wouldn't be bad UX, because variance scales with the target time just the same. E.g. right now there's a 13.5% chance of waiting 20+ minutes and a 5% chance of waiting 30+ minutes.

These waits drop to 2 and 3 minutes, which is still fast. Outliers of 4+ minutes would be rare, under 2% of the time, and 5+ minutes under 1%.
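For reference, these figures follow from the memoryless (exponential) interval model, P(wait > t) = exp(-t / target):

```python
# Tail probabilities of the exponential block-interval model.
from math import exp

for target, waits in ((10, (20, 30)), (1, (2, 3, 4))):
    for t in waits:
        print(f"{target}-min target: P(wait > {t} min) = {exp(-t / target):.1%}")
```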

I added a game version where you can set target time and test your confirmation luck against generated durations: Block Interval Game (Simulated)

I did, and I liked it. However, it's really just hiding faster blocks (with the same variance) under the hood, plus adding an uncle-branch merging system to alleviate the impact of orphans at the 12 s sub-block time.

Basically, it pretends a "pack" of sub-blocks is one block, so the one block (of 10 minutes) seemingly has reduced variance.
By simply reducing the target time to 1 minute, anyone requiring 10 confirmations (a 10-minute target wait) will also see reduced variance.
We could also soft-fork faster blocks, so non-upgraded software would still see 10-minute blocks, but with less variance.

I now think we don't need it; it would add complexity, and we can get a good enough improvement without it.

You can't remove variance from a random process; it always exists in the smallest unit of work that gets recognized. You can only bundle these smaller units and pretend the bundle is one unit.

2 Likes

More than that, though: it allows for lower orphan rates with faster blocks, and it also limits the benefits of selfish mining.
Imo there is still reason to explore Tailstorm, but faster blocks and abstracting the block time are the first step.

2 Likes

No, orphans (even though they get merged later) still happen at the same rate. The trick is in reducing and socializing the reward losses, so each miner's share of revenue still matches their share of hashrate.

This potentially achieves the same, in a much simpler way: A Deterministic Tiebreaker for Bitcoin Block Selection: Enhancing Fairness and Convergence

Orphan rate calcs for tailstorm (corrected) below. Am I missing something?

That table makes it hard to compare things.
I take it the 6.67% (for T=75) refers to the orphan rate of summary blocks, but with k=15 the inner blocks are at 5 s and should see a huge uncle rate.
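As a rough sanity check, a common first-order model puts the race/uncle rate at about 1 - exp(-delay/interval), where delay is the network propagation time. Assuming a 5-second propagation delay purely for illustration (not a measured figure):

```python
# First-order race-rate model: a competing block appears within the propagation delay.
from math import exp

delay = 5.0                            # assumed propagation time, seconds (illustrative)
for interval in (600.0, 75.0, 5.0):    # 10-min blocks, 75 s summaries, 5 s inner blocks
    race_rate = 1 - exp(-delay / interval)
    print(f"{interval:>5.0f} s interval: ~{race_rate:.1%} chance of a competing block")
```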

With Tailstorm:

  1. Each merge redistributes payout from miners who’d have won the orphan races to miners who’d have lost them.
  2. Each merge contributes to PoW accumulation.

The simpler alternative also redistributes payout from would-be-winner to would-be-loser, but doesn’t include the losing block, so one block’s work gets scrapped whenever there’s an orphan race.

What’s the objective? To make mining fair (by fair I mean that %hashrate matches %revenue) or to reduce scrap work? It can be fair while still producing scrap.

I believe this is for sub-blocks, not summary blocks:
“The orphan rate, that is subblocks orphaned per subblocks included, is bounded by [above formula]”

One objective is reducing selfish mining, yes.
https://arxiv.org/pdf/2306.12206

1 Like

Ah yes, sub-blocks are what's getting orphaned (especially in the implementation where the summary is a 0-PoW deterministic block rather than one that adds 1/K of the work too), but orphan races all happen at the summary-block interval. I believe the reduction in orphan rate is just an artifact of the full-download assumption: with Tailstorm, a node will have already downloaded K-1 sub-blocks, so if two sub-blocks are racing to be picked as the last one that completes the summary, the node only has to download 1/K of the data. So with compact block relay (or high enough bandwidth) this difference in orphan rates should be negligible.

The difference is in the cost of these orphans: with Tailstorm an orphan costs only 1/K of a reward, which makes orphans less impactful to mining fairness because they reduce a miner's revenue by only 0.44% (relative to K blocks' worth of rewards) rather than 6.67%.
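Restating those numbers (the 6.67% race rate and k = 15 are the figures used above), the expected revenue hit shrinks by a factor of k:

```python
# Back-of-the-envelope comparison of expected reward lost to orphan races.
race_rate = 0.0667               # share of (sub-)blocks that end up in a race, from above
k = 15                           # Tailstorm sub-blocks per summary block

plain_loss = race_rate           # a lost race scraps a whole block reward
tailstorm_loss = race_rate / k   # a lost race only discounts a 1/k sub-block reward

print(f"plain blocks:      ~{plain_loss:.2%} of revenue lost to races")
print(f"Tailstorm (k={k}): ~{tailstorm_loss:.2%} of revenue lost to races")
```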

This is all ignoring uncle rate.
(Uncle+orphan) rate for 10s subblocks should be the same as orphan rate for 10s blocks.

@sickpig @pkel Curious as to y’all’s thoughts

@Griffith (forgot to tag you)

Can you give me a TL;DR of what is going on here? How can I best contribute to the discussion? Is it productive to discuss the merits of Tailstorm here? I think we have done that excessively in another thread.

@bitcoincashautist should consider adding a note on the benefit for node operators, since each block will require less memory and fewer CPU cycles.

1 Like

Absolutely. Just high level!
In terms of the couple of comments above (two brief points below):

  1. Could the deterministic tiebreaker be a way to achieve most of what Tailstorm offers in a simpler manner?
  2. Would the uncle+orphan rate on Tailstorm for x-second sub-blocks be roughly the same as the orphan rate for x-second blocks, and, at a very high level, what would the actual end difference be, if any?

Sorry, yeah, I know this has largely been discussed already, but in this context I'm just curious to hear your perspective!

1 Like

Deterministic Tie Breaker. My understanding is that whenever honest miners see two competing chains at the same height, they will deterministically select one of the chains as the preferred chain. This is in contrast to the current behaviour, where miners prefer the chain they received first. This change ensures that all honest miners agree on the tip of the chain in cases where it is ambiguous.
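A minimal sketch of that rule, assuming the deterministic choice is simply "smallest block hash wins" among tips of equal height and chainwork (the linked proposal may define a different hash-derived ordering):

```python
def preferred_tip(tip_a: bytes, tip_b: bytes) -> bytes:
    """Pick between two competing tips of equal height and chainwork.

    Every node applies the same comparison instead of "first seen wins",
    so all honest nodes converge on the same tip.
    """
    # Lexicographic order of fixed-width big-endian hashes == numeric order.
    return min(tip_a, tip_b)
```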

In a benign network, where everybody acts honestly, this would reduce unfairness. But as a pool operator I would certainly modify the software to prefer my own block instead of discarding it. If all pool operators think like me, we’re back at square one regarding fairness.

Regarding selfish mining: I assume you're familiar with the attack models of Sapirshtein/Sompolinsky/Zohar or Bar-Zur/Eyal/Tamar. They give the attacker some means to reorder messages: if a block of height n shows up, they can send their own block of height n (if they have one) with the MATCH action. A fraction γ of the defenders will then continue mining on the attacker's block. If you now add your hash-based tiebreaker, this reordering will work in 50% of the cases and should be at least as bad as assuming γ = 0.5 in the other model. I say at least as bad because the attacker knows in advance whether the MATCH will work and can probably adjust his strategy based on that.

Uncle+Orphan rate. I'm not sure what you mean by uncles, as Tailstorm does not have any. It has trees of k sub-blocks between each pair of summary blocks and discounts rewards based on the height of this tree. You probably mean: if the sub-block tree has height k - i, there are i uncles in this tree? Then the uncle+orphan rate in Tailstorm with a sub-block interval of n seconds should be about the same as the orphan rate in BCH with an n-second block interval, assuming the propagation times of Tailstorm sub-blocks and BCH blocks are the same.

1 Like

It assumes you'd do this, so the pool that is dominant hashrate-wise still keeps its advantages. However, it would use the tiebreaker on blocks received from others, so pools with higher ping would still have a chance of winning races.

Wouldn't the expected ROI be negative? E.g. you had a valid block with no competition, and by withholding it you allowed someone else to mine a block that will flip yours with a 50:50 chance.

What if we don't give attackers the time? Independent researcher zawy suggested enforcing monotonic block timestamps (each block must advance the timestamp by at least 1 second) AND having nodes enforce a very strict future time limit (e.g. max. 6 seconds ahead of their local clock).
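A minimal sketch of those two checks as described (the 1-second step and 6-second future limit are the figures floated above, not settled consensus parameters):

```python
MIN_STEP_SECONDS = 1       # each block must advance the timestamp by at least this much
MAX_FUTURE_SECONDS = 6     # strict limit on how far ahead of local time a block may be

def timestamp_ok(block_time: int, parent_time: int, local_time: int) -> bool:
    monotonic = block_time >= parent_time + MIN_STEP_SECONDS
    not_too_far_ahead = block_time <= local_time + MAX_FUTURE_SECONDS
    return monotonic and not_too_far_ahead
```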