Variance would still exist, but it wouldn’t be bad UX, because the spread scales down with the target time just the same. For example, right now there’s a 13.5% chance of waiting 20+ minutes for a block and a 5% chance of waiting 30+ minutes.
With a 1-minute target those thresholds drop to 2 minutes and 3 minutes, which is still fast. Outliers of 4+ minutes would be extremely rare, 0.5% of the time.
I added a game version where you can set the target time and test your confirmation luck against generated block durations: Block Interval Game (Simulated)
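If you’d rather reproduce those tail figures without the game, here is a minimal Monte Carlo sketch. It assumes block arrivals are memoryless (exponentially distributed intervals), the usual constant-hashrate model; the function name and trial count are just illustrative.

```python
import random

def tail_prob(target_min, threshold_min, trials=200_000):
    """Estimate P(wait > threshold) when block intervals are
    exponentially distributed with the given target (mean) time."""
    hits = sum(random.expovariate(1 / target_min) > threshold_min
               for _ in range(trials))
    return hits / trials

# 10-minute target: roughly 13.5% of waits exceed 20 min, 5% exceed 30 min
print(tail_prob(10, 20), tail_prob(10, 30))
# 1-minute target: the same shape, just scaled down to 2 and 3 minutes
print(tail_prob(1, 2), tail_prob(1, 3))
```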
I did, and I liked it. However, it’s really just hiding faster blocks (with the same variance) under the hood, plus adding an uncle-branch merging system to soften the impact of orphans at a 12s block time.
Basically it pretends a “pack” of sub-blocks is one block, so that one block (of 10 minutes) seemingly has reduced variance.
By simply reducing the target time to 1 minute, anyone requiring 10 confirmations (a 10-minute target wait) also sees reduced variance, as the sketch below illustrates.
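To put a rough number on that, here is a sketch under the same exponential-interval assumption (names are illustrative) comparing 1 confirmation at a 10-minute target with 10 confirmations at a 1-minute target. Both average 10 minutes, but the 10-confirmation wait is a sum of ten independent intervals, so its spread is much tighter.

```python
import random
import statistics

def waits(n_conf, target_min, trials=100_000):
    """Sample the total wait for n_conf confirmations,
    with each block interval drawn from an exponential."""
    return [sum(random.expovariate(1 / target_min) for _ in range(n_conf))
            for _ in range(trials)]

one_conf = waits(1, 10)   # 1 confirmation at a 10-minute target
ten_conf = waits(10, 1)   # 10 confirmations at a 1-minute target

for label, w in (("1-conf @ 10 min", one_conf), ("10-conf @ 1 min", ten_conf)):
    over_20 = sum(x > 20 for x in w) / len(w)
    print(f"{label}: mean {statistics.mean(w):.1f} min, "
          f"stdev {statistics.stdev(w):.1f} min, P(>20 min) {over_20:.1%}")
```

The averages match, but the 10-confirmation wait ends up with a standard deviation roughly √10 times smaller, so 20+ minute waits become a rarity instead of a routine annoyance.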
We could also soft-fork the faster blocks in, so non-upgraded software would still see 10-minute blocks, just with less variance.
I now think we don’t need it: it would add complexity, and we can get a good enough improvement without it.
You can’t remove variance from a random process; it always exists at the level of the smallest amount of work that gets recognized. You can only bundle these smaller pieces of work and pretend the bundle is one unit.
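To spell out the bundling arithmetic (again assuming memoryless intervals): n sub-blocks with a mean interval of T/n each give a bundle with mean n · (T/n) = T and standard deviation √n · (T/n) = T/√n, so the bundle’s relative spread shrinks by a factor of √n while the variance of each individual sub-block is untouched.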