General Protocols: Opinions and considerations on a maxblocksize adjustment scheme

Sure, happy to answer! It’s seen a few iterations, but the underlying function has been the same throughout - the EWMA. I didn’t even realize the first versions were actually EWMA-based, haha, I kinda re-discovered it. The older version had a fixed %fullness target, and it’s thanks to @emergent_reasons that I found a way to add the elastic multiplier: he was concerned that we may not want too much headroom once we get to bigger sizes, so the current version applies an elastic multiplier to the baseline EWMA, letting headroom shrink if bigger sizes end up having less variance in some future steady state.
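To make that concrete, here’s a rough Python sketch of the general shape of such an update rule - a baseline EWMA of actual block sizes with an elastic multiplier on top for headroom. Every constant and name below is just for illustration, not the CHIP’s actual parameters or arithmetic:

```python
# Illustrative sketch only - constants and names are hypothetical,
# not the CHIP's actual parameters or math.

ALPHA = 0.0001                  # per-block EWMA smoothing factor (hypothetical)
MIN_LIMIT = 32_000_000          # floor: the limit never drops below 32 MB
MULT_MIN, MULT_MAX = 2.0, 10.0  # bounds for the elastic multiplier (hypothetical)
BETA = 0.0001                   # elastic multiplier response rate (hypothetical)

def step(baseline, multiplier, block_size):
    """Advance the baseline EWMA and elastic multiplier by one mined block."""
    # Baseline: long-run EWMA of how much block space actually gets used.
    baseline = baseline + ALPHA * (block_size - baseline)

    # Elastic multiplier: drifts up while blocks spike well above the
    # baseline (bursty demand wants headroom), and drifts back down toward
    # its floor in a low-variance steady state, reducing headroom.
    burst = block_size / baseline if baseline > 0 else 0.0
    target = MULT_MAX if burst > multiplier else MULT_MIN
    multiplier = min(MULT_MAX, max(MULT_MIN, multiplier + BETA * (target - multiplier)))

    limit = max(MIN_LIMIT, baseline * multiplier)
    return baseline, multiplier, limit
```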

How about we let it run and then re-evaluate every year or so? How many times did we re-evaluate the DAA? :smiley: What’s a bigger risk long-term:

a) refusing to increase the limit despite good technical reasons to do so
b) refusing to add a fixed boundary to the algo’s limit (or to just update the algo’s params to slow it down more, while leaving it boundless) despite hypothetical future good technical reasons to do so

The algo’s parameters could be re-evaluated, without the need for a hard boundary. Just because we set it up a certain way for '24 doesn’t mean it has to stay that way forever; we can re-evaluate as we go, just the same as we could re-evaluate whatever hard limit we’ve had so far. The difference is between re-evaluating the limit vs. re-evaluating the limit’s max. rate of change.

Before arriving at 1TB we’ll have arrived at 100GB, before arriving at 100GB we’ll have arrived at 10GB, before arriving at 10GB we’ll have arrived at 1GB, before arriving at 1GB we’ll have arrived at 256MB, and before arriving at 256MB we’ll have arrived at 32MB.

It can’t surprise us because the algo is rate-limited and conditional on actual utilization: the rate of adjustment is proportional to how much of the current space actually gets used, so going from 32MB to 256MB would likely take more than 2 yrs even in the most optimistic rate-of-adoption scenario. Going from 256MB to 1GB would be another 4x and would likely take longer still, since it’s harder to fill 256MB than it is to fill 32MB - someone has to be making all those TX-es to maintain the baseload.
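To put rough numbers on that: if we assume, purely for illustration, that the algo caps growth at 2x per year and that only with every block completely full (the real cap and response curve are whatever the CHIP’s parameters define), the minimum time to each milestone is easy to bound:

```python
import math

# Back-of-the-envelope bound, not the CHIP's actual math: assume a
# hypothetical worst case where the limit can at most double each year,
# and only if every block is completely full the whole time.
MAX_YEARLY_GROWTH = 2.0  # hypothetical cap on the yearly multiplier

def min_years(start_mb, target_mb, yearly_growth=MAX_YEARLY_GROWTH):
    """Minimum years to go from start_mb to target_mb at the capped rate."""
    return math.log(target_mb / start_mb) / math.log(yearly_growth)

print(min_years(32, 256))    # 8x -> 3.0 years, even with constantly full blocks
print(min_years(256, 1024))  # 4x -> 2.0 more years on top of that
```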

We can evaluate yearly whether the next 4x from wherever we are now is feasible or not - and plan the next scheduled upgrade accordingly.

We don’t, but the relay policy determines the minimum fee, and then users can either load the network with their TX-es or not; miners would respond by building the blocks, and the algo would respond by slowly making room for more. If nobody makes the TX-es, the algo doesn’t move (or it even moves down).

So, the algo is fundamentally driven by the negotiation between users, nodes, and miners, and it is totally agnostic about fees / TX load - the limit will be whatever is negotiated by the network - leave it to the market & the min. fee relay policy.
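To illustrate the “if nobody makes the TX-es, the algo doesn’t move” point with the same hypothetical `step` sketch from above: feed it a year of near-empty blocks and the limit just sits at its floor.

```python
# Continuing the illustrative sketch above: a year of ~200 kB blocks.
baseline, multiplier = 1_000_000.0, 2.0   # hypothetical starting state
for _ in range(52_560):                   # roughly one year of 10-minute blocks
    baseline, multiplier, limit = step(baseline, multiplier, block_size=200_000)
print(limit)  # still clamped at the 32 MB floor - demand, not time, is what moves it
```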

Instead of debating the block size limit, which is consensus-sensitive and comes with a bigger “meta cost” to change, we could be debating the min. fee relay policy, which has a smaller “meta cost”, whenever we want to increase / decrease TX load.

Jessquit noticed another phenomenon that could alleviate the “what if we grow too much” concern - the price hype cycle is a natural adoption rate-limiter. If we got to 10 MB, where would the price be, and how would it affect adoption?

The fixed emission rate is also an adoption rate limiter.

As demand for new coins outpaces the fixed emission rate, the price begins to rise exponentially. The rising price forces us down the demand curve.

Demand is thus always kept in check by the exigencies of the exchange market. There simply cannot be a sudden planetwide onchain rush because the very nature of the fixed emission rate means the demand is always throttled. The exponential nature of the price curve is very effective at throttling demand. We’ve seen it repeatedly throughout Bitcoin’s history.

So: given BCH’s proven ability to absorb the world’s onchain demand through the mid-future, and given the built-in onboarding rate limiter, I’d say it’s arguable that onchain scaling for P2P cash transactions is already solved.

I’ve been regularly sharing progress on the main BCH Telegram group and on the BCR forum here, and I’ve shared progress on the BCH Podcast Telegram group and on my Twitter as well. It’s been picked up by Xolos’s podcast too, and it sparked a discussion on Reddit. I haven’t been as active on r/btc since all the buzz is on Tg and Twitter, but I guess some like yourself still use r/btc as a main source, and even there I had some good talks about the proposal. The CHIP references some good discussions:

and I’m going to add this one too:

I think it has had good reach so far and a lot of non-devs are already aware, but sure, I could make a few more posts on /r/btc and /r/bitcoincash

3 Likes

Great work finding all these previous discussions!

As I said, this topic is 3 years old and has been discussed extensively.

2 Likes

I would eagerly donate to a flipstarter for implementation of an algorithmically adjusted maxblocksize (based on usage) as the default in a full node, with or before the 2024 network upgrade.

3 Likes

I would assume we need to implement this many years before insufficient blocksize becomes a problem, in order not to stifle adoption on the BCH network.

Since the code is simple and the idea has been discussed to death, 2024 sounds reasonable, doesn’t it?

1 Like

I like the latest version because it is aimed at the right part of the system. It is close to being an information tool, with the idea of “hey, if you stay in this range, it’s safe”.

The fact is that it responds to the market, including miners’ choices; it does not dictate a choice. Miners continue to decide what block they want to mine, even if that decision is to just follow a default.

I see his proposal as filling in one of the parts needed to have better communication in the future. It can’t fulfill everything, as communication is inherently a human endeavor. For instance, the deployment time-delay issue is nearly impossible to cover without actively picking up the phone. Miners will still need to communicate with big stakeholders about their limits. That cannot be automated. Assumptions may prove costly.

To me this is worth a try, as it helps avoid some stupid problems. It is OK to try and fail, since it won’t take a protocol upgrade to decide that something else may work better. We won’t lock ourselves into a prison just to avoid another.

In the end it brings a smile to my face that the proposal I made last winter is practically identical in ideology and approach. The coordination I glossed over is partly filled in by BCA’s proposal, and miners in his proposal are forced to take responsibility for assessing the technical capabilities of their setup, whereas my proposal puts that on the software maintainers. But there is room for both, and for more ways to try and solve this social issue.

2 Likes

I don’t know the exact dates, but I have been seeding automatic blocksize increase mentions into the podcast for months now. It was discussed more extensively in Episode 88 (coming out today on YouTube; it’s been available on Twitch already this week), but it was already discussed in Episode 83 on the 3rd of June, and I am positive I made references to the possibility of an upcoming algorithm and discussion many times before that on previous episodes.

For instance it was also discussed on the May 15th 2023 Upgrade livestream.

All this is to say it has definitely been raised and highlighted in the discussion on The Bitcoin Cash Podcast as well as many other forums.

2 Likes