Sure, happy to answer! It’s seen a few iterations, but the underlying function has stayed the same throughout: the EWMA. I didn’t even realize the first versions were EWMA-based, haha, I kind of re-discovered it. The older version had a fixed %fullness target, and it’s thanks to @emergent_reasons that I found a way to have the elastic multiplier: he was concerned that we may not want too much headroom once we get to bigger sizes, so the current version applies an elastic multiplier to the baseline EWMA, which lets the headroom shrink if bigger sizes turn out to have less variance in some future steady state. That’s how we got to the current version.
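To make that concrete, here’s a minimal sketch of the general shape, purely for illustration; the names and constants below are made up and are not the CHIP’s actual formula or values:

```python
# Illustrative sketch only: the constants and update rules here are
# hypothetical, NOT the CHIP's actual formula or parameters. The shape of
# the idea: an EWMA baseline slowly tracks actual block sizes, and an
# elastic multiplier on top of it grows the headroom when blocks run
# persistently full and relaxes it when usage has less variance.

ALPHA = 0.0001                    # EWMA smoothing factor per block (hypothetical)
BETA = 0.00005                    # elasticity response per block (hypothetical)
MULT_MIN, MULT_MAX = 2.0, 10.0    # bounds on the headroom multiplier (hypothetical)
NEUTRAL_UTILIZATION = 0.5         # utilization at which the multiplier holds steady

def next_limit(ewma: float, mult: float, block_size: float):
    """Advance the baseline EWMA and the elastic multiplier by one block."""
    # Baseline follows actual usage, slowly.
    ewma = ewma + ALPHA * (block_size - ewma)
    # Multiplier stretches when blocks use a large share of the current limit,
    # and relaxes toward MULT_MIN otherwise (less variance -> less headroom).
    utilization = block_size / (ewma * mult)
    mult = mult + BETA * (utilization - NEUTRAL_UTILIZATION)
    mult = max(MULT_MIN, min(MULT_MAX, mult))
    return ewma, mult, ewma * mult   # limit = baseline EWMA * elastic multiplier
```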
How about we let it run and then re-evaluate every year or so? How many times have we re-evaluated the DAA? Which is the bigger risk long-term:
a) refusing to increase the limit despite good technical reasons to do so
b) refusing to add a fixed boundary to the algo’s limit (or to merely update the algo’s params to slow it down while leaving it boundless), despite hypothetical future good technical reasons to do so
The algo’s parameters could be re-evaluated without needing a hard boundary. Just because we set it up a certain way for '24 doesn’t mean it has to stay that way forever; we can re-evaluate as we go, just the same as we could re-evaluate whatever hard limit we’ve had so far. The difference is between re-evaluating the limit vs. re-evaluating the limit’s max rate of change.
Before arriving at 1TB we’ll have arrived at 100GB, before arriving at 100GB we’ll have arrived at 10GB, before arriving at 10GB we’ll have arrived at 1GB, before arriving at 1GB we’ll have arrived at 256MB, and before arriving at 256MB we’ll have arrived at 32MB.
It can’t surprise us, because the algo is rate-limited and conditional on actual utilization: the rate of adjustment is proportional to how much of the current space actually gets used. So going from 32MB to 256MB would likely take more than 2 years even in the most optimistic rate-of-adoption scenario, and going from 256MB to 1GB is another 4x that would likely take longer still, since it’s harder to fill 256MB than it is to fill 32MB - someone has to be making all those TX-es to maintain the baseload. (There’s a back-of-envelope sketch of the timeframes just below.)
We can evaluate yearly whether the next 4x from wherever we are now is feasible or not, and plan the next scheduled upgrade accordingly.
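Here’s that back-of-envelope check; the 2x-per-year cap under sustained full blocks is an assumed figure for illustration, not the CHIP’s exact bound, and real growth would be slower because blocks won’t stay full the whole time:

```python
# Rough floor on how long the growth path takes at an assumed maximum
# sustained rate. The 2x/year cap is hypothetical, not the CHIP's exact
# bound; actual growth would be slower since blocks won't stay full.
import math

def years_to_grow(start_mb: float, target_mb: float, max_yearly_factor: float = 2.0) -> float:
    """Minimum years needed at the maximum sustained growth rate."""
    return math.log(target_mb / start_mb) / math.log(max_yearly_factor)

print(years_to_grow(32, 256))    # 32MB -> 256MB: 3.0 years at the assumed cap
print(years_to_grow(256, 1024))  # 256MB -> 1GB:  2.0 years at the assumed cap
```

So even at an assumed maximum sustained rate, the 32MB to 256MB step takes years, and each subsequent 4x gets harder to actually fill and sustain.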
We don’t, but the relay policy determines the minimum fee, and then users can either load the network with their TX-es or not; miners respond by building the blocks, and the algo responds by slowly making room for more. If nobody makes the TX-es, the algo doesn’t move (or it even moves down).
So the algo is fundamentally driven by the negotiation between users, nodes, and miners, and it is totally agnostic of fees / TX volume: the limit will be whatever is negotiated by the network. Leave it to the market and the min. fee relay policy.
Instead of debating the block size limit, which is consensus-sensitive and comes with a bigger “meta cost” to change, we could be debating the min. fee relay policy if we want to increase or decrease TX load, since that has a smaller “meta cost”.
Jessquit noticed another phenomenon that could alleviate the “what if we grow too much” concern: the price hype cycle is a natural adoption rate-limiter. If we got to 10 MB, where would the price be, and how would it affect adoption?
The fixed emission rate is also an adoption rate limiter.
As demand for new coins outpaces the fixed emission rate, the price begins to rise exponentially. The rising price forces us back down the demand curve.
Demand is thus always kept in check by the exigencies of the exchange market. There simply cannot be a sudden planetwide onchain rush, because the very nature of the fixed emission rate means demand is always throttled. The exponential nature of the price curve is very effective at throttling demand; we’ve seen it repeatedly throughout Bitcoin’s history.
So: given BCH’s proven ability to absorb the world’s onchain demand through the mid-future, and given the built-in onboarding rate limiter, I’d say it’s arguable that onchain scaling for P2P cash transactions is already solved.
I’ve been regularly sharing progress on the main BCH Telegram group and on the BCR forum here, and I’ve shared it on the BCH Podcast Telegram group and on my Twitter, too. It’s been picked up by Xolos’s podcast as well, and it sparked a discussion on Reddit. I haven’t been as active on r/btc since all the buzz is on Telegram and Twitter, but I guess some, like yourself, still use r/btc as a main source, and even there I’ve had some good talks about the proposal. The CHIP references some good discussions:
- Bitcoin Cash Research (BCR) Forum: This CHIP Discussion
- Bitcoin Cash Research (BCR) Forum: Asymmetric Moving Maxblocksize Based On Median
- Bitcoin Cash Research (BCR) Forum: CHIP 2021-07 UTXO Fastsync (discussion about committing historical block sizes in order to support an algorithm in fast-sync/pruned mode)
- Telegram, Bitcoin Verde: discussion about committing historical block sizes in order to support an algorithm in fast-sync/pruned mode
- Twitter: Hash-rate Direct Voting VS This Proposal
- Reddit: Median VS an Older EWMA-based Proposal
- Reddit: Manual Adjustment by Individual Nodes VS an Older EWMA-based Proposal
- Reddit: Making the Case for an Older EWMA-based Proposal
and I’m going to add this one too:
I think it’s had good reach so far and a lot of non-devs are already aware, but sure, I could make a few more posts on /r/btc and /r/bitcoincash.