I did some analysis: global pings are below 300 ms, more than 80% of VPS providers offer more than 436 Mbit/s uplink, and it’s all only going to improve from here.
Even with 3 hops from a mining node to major pools and the compact block relay worst case (1.5 * RTT + full block transmission time), we could have 1-minute blocks stay under a 1.99% orphan rate.
More realistic scenario with compact block relay: orphan rates between 0.23% (best case, 0.5 * RTT when the receiver already has all the TXs) and 0.63% (1.5 * RTT + time to transmit the missing 1% of the block’s TXs).
With a 1-minute target, even placing miners on the Moon would still be viable (with a 2.43%-2.84% orphan rate).
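For reference, here’s a minimal sketch of the kind of calculation behind these figures, assuming Poisson block discovery; the propagation delays are illustrative values consistent with the scenarios above, not exact inputs:

```python
import math

def orphan_rate(prop_delay_s, target_s):
    # Chance that a competing block is found somewhere while ours is still
    # propagating, assuming Poisson block discovery with mean interval target_s.
    return 1.0 - math.exp(-prop_delay_s / target_s)

TARGET = 60.0  # seconds, 1-minute blocks
# Illustrative end-to-end propagation delays (seconds), roughly matching the
# best-case CBR, realistic CBR, and worst-case full-transmission scenarios above:
for prop in (0.14, 0.38, 1.2):
    print(f"propagation {prop}s -> orphan rate {orphan_rate(prop, TARGET):.2%}")
```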
So, I’m changing the proposal from 2 minutes to 1 minute.
Other technical and social challenges are bigger obstacles than orphan rate.
Unfortunately, your analysis did not include real-life orphan rates; it is therefore void.
You cannot automatically assume the orphan rates will be within a certain range after just studying pings. You need real-world data, from working blockchains.
Maybe our fears (“What will miners do?” “What will exchanges do?”) of making this change are overblown?
Since Compact Block Relay (CBR, BIP-152), orphan rates have been (expectedly) so low that nobody bothers to report them. You can only find some pre-CBR info, which heavily depended on bandwidth, so you’ll sometimes find LTC and DOGE folks complaining in old forum posts.
Since then, both LTC and DOGE have implemented CBR and you don’t see the complaints anymore.
Maybe because LTC has a 0.02% orphan rate and ZEC has 0.03% (even with a 75s block time), as per ViaBTC.
“You did not throw an apple yourself, therefore your calculation of the apple’s trajectory is void.”
“Satoshi didn’t analyze a real-life blockchain, therefore his analysis in chapter 11 of the whitepaper is void.”
You can’t just hand-wave the results of a calculation like that. The same calculation (“Expected Propagation”) predicts Bitcoin propagation times between 0.13s and 0.43s (with orphan rates between 0.02% and 0.07%), which is close to the propagation times reported by bitnodes (0.2-0.25s average) and reported orphan rates (ViaBTC: 0.3% for BTC).
Why can’t you? Propagation is a function of latency, bandwidth, number of hops (network topology), and mempool overlap. That’s it. There’s no elusive secret cause that isn’t captured by the calculation.
For 10s blocks, my calc predicts 3.69% for 99% mempool overlap. ViaBTC reports 3.4% for the Nervos chain (which has 10s blocks). I suspect they could do some optimization to more often hit 100% overlap and get closer to 1.24%. (Edit: apparently they use something similar to CBR which they call NC-Max, but they also dynamically adjust block times based on orphan rates, tuned to a 2.5% target orphan rate.)
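Same formula as a sanity check against those two chains; the propagation delays below are assumed inputs, not measurements:

```python
import math

def orphan_rate(prop_delay_s, target_s):
    # P(competing block found during propagation), Poisson block discovery.
    return 1.0 - math.exp(-prop_delay_s / target_s)

# BTC, 600s target, assumed 0.13-0.43s propagation:
print(f"{orphan_rate(0.13, 600):.3%} to {orphan_rate(0.43, 600):.3%}")   # ~0.022% to 0.072%
# 10s blocks, assumed ~0.125s (full mempool overlap) to ~0.376s (99% overlap):
print(f"{orphan_rate(0.125, 10):.2%} to {orphan_rate(0.376, 10):.2%}")   # ~1.24% to 3.69%
```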
Well, you can; it’s basic math, really.
Real-world orphan rates are below 0.05% today (as specified in the CHIP - real data).
For limits, look at just a few variables: ping, bandwidth, block size, hops… These are the only things that can affect orphan rate, aside from potential variables like nefarious miners, but any tools they would have can already affect the chain today. No different. This isn’t akin to a change like Tailstorm or similar, where more real-world data could be useful.
Is the 3.4% actual or a limit? Granted, not a 1:1 comparison, I imagine, since node distribution/software/etc. differ (but also sort of similar since it’s also SHA-256, iirc).
PR submitted to the CHIP with several improvements:
Introduction section
Inclusion of demo game
Clarity of adjustment vs algorithm
Evaluation of industry competitors
Inspiration of other proposals
Edit: I don’t have time now, but it would be good to add an image of the blocktime game to make the point about the larger timeline gaps a bit clearer visually.
Thanks! I’ll have to make some edits, especially regarding this: “A 10-minute blocktime actually produces a median 17-minute wait for users (larger gaps dominate the timeline), which is uncompetitive in the modern electronic payments context.”
After making the game, I realized Rucknium’s introduction of the Erlang distribution had led me to a wrong conclusion. Yes, the Erlang distribution says there’s a 50:50 chance you “land” in an interval longer than 17 minutes, but when a user makes a TX they could “land” anywhere inside that interval, and their wait until the end of the interval will still be 10 minutes on average (with a ~37% chance of exceeding 10 min, and ~14% chance of exceeding 20 min).
The Erlang effect is about the user’s perception of blockchain slowness, e.g. you pull up an explorer, see it’s been 7 minutes since the last block, and then end up waiting 10 more minutes: 17 min total for that block, but you only had to wait 10 minutes of that 17.
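A quick simulation of that inspection-paradox effect under the pure memoryless model (one run gives roughly a 17-minute median landed-in interval, a 10-minute mean wait, and ~37% / ~13.5% tails):

```python
import random

random.seed(1)
TARGET, N = 10.0, 200_000  # target minutes, number of simulated arrivals

intervals, waits = [], []
for _ in range(N):
    # The interval a random arrival "lands" in is length-biased: for
    # exponential gaps that's Erlang-2 (the sum of two exponentials)...
    interval = random.expovariate(1 / TARGET) + random.expovariate(1 / TARGET)
    # ...and the arrival sits at a uniform point inside it, so the remaining
    # wait until the next block is uniform over that interval.
    wait = random.uniform(0, interval)
    intervals.append(interval)
    waits.append(wait)

intervals.sort()
print("median landed-in interval:", round(intervals[N // 2], 1))     # ~16.8 min
print("mean wait to next block  :", round(sum(waits) / N, 1))        # ~10 min
print("P(wait > 10 min)         :", sum(w > 10 for w in waits) / N)  # ~0.37
print("P(wait > 20 min)         :", sum(w > 20 for w in waits) / N)  # ~0.135
```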
So, I need to rework Motivation & Benefits sections.
Sovereign Naan made a 3-video series about 1-minute blocks on BCH as a proposal in September 2021, discussing 1-conf times, faster DAA response, the same inflation schedule, and concerns about impact on mining, including getting support from Roger Ver & Ryan Giffin. Incredibly prescient.
Haven’t caught up yet, but just throwing in 2 sats for now:
If the purpose of lowering the blocktime is to improve the UX, I’d imagine reducing the variance itself would be just as important. Variance would still be significant with just a shorter blocktime, which would still be a bad UX in the end.
I liked the high-level idea of Bobtail when I first heard of it, which was about making blocktimes much more consistent at the intended rate. I think something that addresses blocktime variance as well would make sense to include in any potential blocktime changes, since the goal is the same: make blocks more predictable and improve the user experience. There is definitely a nice UX when using other chains that generate blocks consistently around the same time.
Going back, it seems like BCA likened Tailstorm to a ‘Bobtail 2.0’, so I’m pretty behind on this stuff now. But if the end result of a shorter blocktime is a much more consistent blocktime as well, then that’s pretty positive imo.
My high-level understanding of Bobtail was that it switched blocks to requiring multiple valid hashes, using some number of the best ones that miners find for the block and splitting the block reward among those miners. Almost like miners are part of a global pool rather than all-or-nothing, which would give smaller miners more consistent payouts.
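To make that concrete, here’s a toy sketch of the idea as described above (not the actual Bobtail spec; K and the hashrate shares are made up for illustration):

```python
import random

random.seed(0)
K = 5                                      # winning samples needed per block (made up)
HASHRATE = {"A": 0.6, "B": 0.3, "C": 0.1}  # hypothetical hashrate shares
ATTEMPTS = 100_000                         # total hash attempts this round

# Miners attempt in proportion to hashrate; each attempt yields a pseudo-random
# hash value. The K lowest values jointly complete the block, and the reward
# is split among the miners who found them.
samples = [(random.random(), miner)
           for miner, share in HASHRATE.items()
           for _ in range(int(share * ATTEMPTS))]
winners = sorted(samples)[:K]
split = {m: sum(1 for _, w in winners if w == m) / K for m in HASHRATE}
print(split)  # on average proportional to hashrate -- smoother than winner-takes-all
```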
A shorter blocktime doesn’t change the variation proportionally, but it does change it absolutely, which in this case goes a long way toward solving the problem.
I.e., the difference between 10 minutes and an outlier of 1 hour (50 minutes) is extreme in absolute terms, but the difference between 1 minute and an outlier of 6 minutes (5 minutes) is still well below the 13-minute threshold for user frustration. And with 10x more frequent sampling, the run of “bad luck” that can string together multiple large-gap slots is also significantly reduced (not proportionally, but in terms of how it feels in absolute terms).
Edit: It’s a good observation to be thinking about how to increase block consistency as well.
Variance would still exist but wouldn’t be bad UX, because variance scales with the target time just the same. E.g. right now there’s a 13.5% chance of waiting 20+ minutes and a 5% chance of waiting 30+ minutes.
These will drop to 2 minutes and 3 minutes, which is still fast. Outliers of 4+ minutes would be rare, under 2% of the time.
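Those tails follow directly from the memoryless residual wait, P(wait > t) = e^(-t/target); quick check:

```python
import math

def p_wait_exceeds(t, target):
    # The residual wait for the next block is exponential (memoryless).
    return math.exp(-t / target)

for target, cutoffs in [(10, (20, 30)), (1, (2, 3, 4))]:  # minutes
    for t in cutoffs:
        print(f"target {target} min: P(wait > {t} min) = {p_wait_exceeds(t, target):.1%}")
```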
I added a game version where you can set target time and test your confirmation luck against generated durations: Block Interval Game (Simulated)
I did, and I liked it; however, it’s really just hiding faster blocks (with the same variance) under the hood, plus an uncle-branch merging system to alleviate the impact of orphans with a 12s block time.
Basically it pretends a “pack” of sub-blocks is one block, so the one block (of 10 minutes) seemingly has reduced variance.
By simply reducing to a 1-min target time, anyone requiring 10-conf (10 minutes target wait) will also see reduced variance.
We could also soft-fork faster blocks, so non-upgraded software would still see 10-min blocks, but with less variance.
I now think we don’t need it, it would add complexity and we can get good enough improvement without it.
You can’t remove variance from a random process; it always exists in the smallest unit of work that gets recognized. You can only bundle these smaller units of work and pretend the bundle is one unit.
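A quick numeric illustration of the bundling point, assuming exponential sub-block intervals (so a bundle of k is Erlang-k):

```python
import math

T = 600.0  # seconds of expected work per bundle (a 10-minute "block")
# A bundle of k exponential sub-intervals keeps the same mean T, but its
# standard deviation shrinks to T / sqrt(k): the per-unit variance is still
# there, it's just hidden inside the bundle.
for k in (1, 10, 60):
    print(f"k={k:>2}: mean {T:.0f}s, std {T / math.sqrt(k):.0f}s")
```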
More than that, though. It allows for lower orphan rates with faster blocks. It also limits the benefits of selfish mining.
Imo there is still reason to explore Tailstorm, but faster blocks and abstracting block time are the first step.
No, orphans (even though they get merged later) still happen at the same rate. The trick is in reducing and socializing reward losses, so miners’ share of revenue still matches their share of hashrate.
That table makes it hard to compare things.
I take it the 6.67% (for T=75) refers to the orphan rate of summary blocks, but with k=15 the inner blocks are at 5s and should see a huge uncle rate.
With Tailstorm:
Each merge redistributes payout from miners who’d have won the orphan races to miners who’d have lost them.
Each merge contributes to PoW accumulation.
The simpler alternative also redistributes payout from the would-be winner to the would-be loser, but doesn’t include the losing block, so one block’s work gets scrapped whenever there’s an orphan race.
What’s the objective? To make mining fair (by fair I mean that %hashrate matches %revenue) or to reduce scrap work? It can be fair while still producing scrap.
I believe this is for sub-blocks, not summary blocks:
“The orphan rate, that is subblocks orphaned per subblocks included, is bounded by [above formula]”
Ah yes, sub-blocks are what’s getting orphaned (especially in the implementation where the summary is a 0-PoW deterministic block rather than one adding 1/K too), but orphan races all happen at the summary-block time interval. I believe the reduction in orphan rate is just because of the full-download assumption. With Tailstorm, a node will have already downloaded K-1 sub-blocks, and if 2 subs are racing to get picked as the last sub to complete the summary, a node will only have to download 1/K of the data. So, with compact block relay (or high enough bandwidth), this difference in orphan rates should be negligible.
The difference is in the cost of these orphans: with Tailstorm an orphan will cost only 1/K of the reward, which makes them less impactful to mining fairness because they’ll reduce miners’ revenue by only 0.44% (relative to K blocks’ worth of rewards) rather than 6.67%.
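Back-of-envelope check, using the 6.67% figure and K=15 from above:

```python
ORPHAN_RATE = 0.0667  # orphan race probability discussed above (T=75)
K = 15                # sub-blocks per summary block

# Plain faster blocks: losing an orphan race scraps a full block reward.
plain_loss = ORPHAN_RATE
# Tailstorm: the losing sub-block costs only 1/K of the pack reward.
tailstorm_loss = ORPHAN_RATE / K
print(f"expected revenue loss: plain {plain_loss:.2%}, Tailstorm {tailstorm_loss:.2%}")
```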
This is all ignoring uncle rate.
The (uncle+orphan) rate for 10s sub-blocks should be the same as the orphan rate for 10s blocks.