General Protocols: Opinions and considerations on a maxblocksize adjustment scheme

Related:

Article on read.cash: read.cash

Background

The primary motivation for BCH’s forking event in 2017 was an impasse over increasing the blocksize maximum, so the relevance of further blocksize increases to accommodate transaction volume needs no introduction to the BCH community, a community focused on getting to global usage on the L1 blockchain. For the rest of this writeup, maximum blocksize is defined as the maximum size of a mined block, beyond which it will be rejected by the majority of the network, both by hashrate and by economy. Note that there is an independent “soft limit” variable, self-imposed by miners and strictly below maxblocksize, which is not relevant to this discussion.

We have had two one-time increases to this number in the past:

  1. From 1MB to 8MB at the initial fork in 2017, and
  2. From 8MB to 32MB in 2018.

The 32MB limit has not been moved since 2018, and demand has not been high due to slow growth in usage. While short “stress tests” conducted explicitly to challenge the limit have taken place from time to time, average long-term blocksize has remained well below 500kB. It is important to note that in 2021 the default non-consensus “soft limit” shipped with BCHN was increased from 2MB to 8MB, which has proven useful in accommodating some burst scenarios.

Problem statement

While average usage today is more than two orders of magnitude away from challenging the current 32MB maxblocksize limit, two factors make it desirable to address the issue today:

  1. One-time increases in maxblocksize are an ongoing and unpredictable effort. While the CHIP process offers some stability and transparency to the effort, it nevertheless subjects the network to regular episodes of uncertainty regarding what some would consider its raison d’être. Putting a predictable, sane plan into action reduces that uncertainty and increases confidence for all parties - users, businesses, infrastructure providers and developers.
  2. In the event of rapid adoption, the social makeup of BCH’s community can inflate and diversify rapidly, destabilizing efforts to address the problem, possibly resulting in a chaotic split as witnessed with BTC in the past. A plan adopted right now will carry with it the inertia necessary to combat such destabilizing tendencies.

Considerations

Some crypto enthusiasts, citing a Satoshi quote, correctly note the mechanical ease of changing the maxblocksize in the code while missing important impacts beyond changing a single number:

  1. On the low side, a small maxblocksize, even when blocks are not congested, may deter commercial usage and development activity. This is because business and development investments are long-commitment activities that often span months or even years. If entrepreneurs and developers cannot be offered confidence that the capacity will be there when they need it, they are less likely to make the investment of their precious time and money.
  2. On the high side, a maxblocksize that is too large for current activity invites adverse, unpredictable conditions that typically consist of short bursts of noncommercial traffic pushing the limits. The network impact of these activities is more subtle: they generate additional, volatile costs for infrastructure and service providers that may be difficult to justify. It is important to note that, contrary to intuition, most of the cost to operators comes from human operation and development complications, followed by processing power that scales with the sizes of individual blocks, with storage and bandwidth costs coming a distant last. We have observed this phenomenon in certain other cryptocurrencies, where very high throughput that did not come from commercial activity ultimately resulted in businesses ceasing to operate on their chains, reducing the network’s overall value. It is important to note that we do not view all existing operators’ continued existence as sacred; rather, we take the reasoned view that increased investment in infrastructure should be justified by corresponding commercial, value-generating activities.
  3. Historically, changing the maxblocksize has come with a heavy social cost each time it happens, with the risk of community and network fracture. Satoshi’s quote made sense back when he made the decisions by himself; it applies less today, when the majority of the network needs to come to consensus. A longer-lasting plan up front that minimizes each of these potentially centralizing decision points can make the network more robust.

In short, a good maxblocksize adjustment scheme should aim to offer the maximum amount of *predictability* to all parties: users who want steady fees, developers who want stable experiences, entrepreneurs who want to reduce uncertainty in growth, and service providers who want to minimize cost while accommodating usage.

Alternatives

With the criteria stated above, let’s examine some alternatives:

  1. Outright removal of the consensus blocksize limit : The general purist argument is that miners would resolve any disagreements on their own without a software-imposed limit. In reality, without an effective way to coordinate an agreement, each node can have vastly different capabilities and opinions on the sizes that are tolerable. The result is therefore either network destabilization and a split without coordination, or opaque, centralizing coordination outside the protocol. Neither scenario is likely to offer confidence or stability.
  2. One-time increases to maxblocksize : While extremely simple to execute in the BCH context, as described above this subjects the network to regular episodes of uncertainty and social cost, and is thus less than ideal for long-term growth. At every manual increase, the concerns of all parties have to be reconsidered, sometimes under adverse social conditions without the benefit of inertia.
  3. Fixed schedule : Have the maxblocksize increase on a rigid schedule, such as BIP101 or BIP103. Also simple in execution, these schemes additionally offer a possible scenario where if demand roughly stays in line with the schedule, no manual adjustment is needed. It is impossible to perfectly predict the future though, and such schemes will inevitably diverge from real world usage and cost, requiring frequent revisits to their parameters. Each revision can incur larger social costs than even one-time increases due to the complexity of schedules as opposed to just sizes.
  4. Algorithmic adjustment based on miner voting : Adopted by Ethereum, this scheme has miners (and pools, by proxy) vote on the maximum block capacity at fixed intervals, with the result tallied by a fixed algorithm that then adjusts maxblocksize up or down for the next period. While this scheme can work well with a well-informed and proactive population of pools, our current observation is that no such population exists for BCH - miners and pools typically only intervene when a crisis happens, which may not be ideal for user confidence. BCH is additionally a minority chain within its mining algorithm, which may complicate incentives when it comes time to adjust maxblocksize.
  5. Algorithmic adjustment based on usage : Multiple attempts exist, including an older dual-median approach and a newer, more sophisticated WTEMA-based algorithm. These schemes generally aim to algorithmically adjust maxblocksize based on a fixed interpretation of past usage in terms of block content (a rough sketch of the idea follows below). While far from perfect, we see these schemes as our best path forward to achieve reasonable stability, responsiveness, and minimization of social cost for future adjustments.
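To make the usage-based idea concrete, here is a minimal, hypothetical sketch of such a controller. This is not the dual-median scheme nor the actual WTEMA-based proposal; the function shape and every constant below are illustrative assumptions only. The point is the general structure: a floor, a headroom target, and a hard per-block rate limit in both directions.

```python
# Hypothetical sketch of a usage-responsive blocksize limit.
# NOT an actual proposal; all constants here are illustrative only.

FLOOR = 32_000_000        # never adjust below the current 32 MB limit
TARGET = 0.5              # aim to keep ~2x headroom above typical block sizes
MAX_STEP = 1.32e-5        # per-block cap: roughly 2x per year at 144 blocks/day
GAIN = MAX_STEP / TARGET  # a 100%-full block moves the limit by +MAX_STEP

def next_limit(current_limit: int, last_block_size: int) -> int:
    """Limit to enforce after observing the latest mined block."""
    utilization = last_block_size / current_limit
    step = (utilization - TARGET) * GAIN
    # Rate-limit both growth and decay so no short burst can shock operators.
    step = max(-MAX_STEP, min(MAX_STEP, step))
    return max(FLOOR, int(current_limit * (1 + step)))
```

A real proposal would respond to a weighted average over many past blocks rather than to a single block, but the properties argued for in this writeup are visible even in this toy version: the limit cannot fall below the floor, it only climbs while blocks are actually fuller than the headroom target, and even sustained full blocks need on the order of a year to double it.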

Criteria of a good algorithm

In our opinion, a good maxblocksize adjustment algorithm must address the following concerns:

  1. For predictability and stability for service operators, any increases must happen over a long window. We have observed some adjustment algorithms where it is possible to double maxblocksize over a matter of hours or days - the volatility they allow reduces the utility of an algorithmic approach (a compounding example follows this list).
  2. The algorithm should aim to accommodate commercial bursts such as holidays, conventions, and token sales, such that user experience is not impacted by fee increases the vast majority of the time. Note that while a rapid-increase algorithm can satisfy this for a user, it conflicts with #1 above in that it does not offer a predictable, stable course for operators - it is therefore likely preferable to simply keep a healthy maxblocksize with a large buffer well above average usage.
  3. The algorithm should aim to reduce costs for operators in times of commercial downturn. It is inevitable that over BCH’s many more years and decades of operation it will see ups and downs, and it is important that the higher operating costs justified during boom times do not unreasonably burden services during the bust years. During a long downturn, a reasonable limit that defends well against unpredictably high bursts of costs (see “Considerations” above) can mean the difference between keeping or losing services. Such adjustments can happen slowly, but should not be removed altogether.
  4. The algorithm should be well-tested against edge cases that may cause undesirable volatility. This is especially important considering the history of BCH’s difficulty adjustment algorithm, which was plagued by instability for years, both in the Emergency Difficulty Adjustment era of 2017-2018 and the fixed-window era of 2018-2020. Blocksize algorithms must learn from this experience and aim to minimize potential vectors of trouble.
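To see why the per-step rate limit in criterion #1 matters, here is a quick compounding calculation; the growth rates are purely illustrative and not taken from any proposal:

```python
import math

BLOCKS_PER_DAY = 144  # nominal block rate

def days_to_double(per_block_growth: float) -> float:
    """Days of consistently full blocks needed to double the limit."""
    return math.log(2) / math.log(1 + per_block_growth) / BLOCKS_PER_DAY

# A limit allowed to grow 0.5% per block doubles in about a day,
# while a 0.001%-per-block cap needs roughly sixteen months of sustained
# full blocks - a horizon operators can actually plan around.
print(days_to_double(0.005))    # ~0.97 days
print(days_to_double(0.00001))  # ~481 days
```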

Additional notes on miner control

Some may say that usage-based algorithms take control out of the hands of miners; in our opinion this is not true. Miners today have an additional control vector in the form of a “soft cap” that allows them to easily specify a maximum size *for the blocks they themselves mine* that is below the network-wide maxblocksize. Adjusting this cap gives them an input into any usage-based algorithm, since such algorithms depend on the sizes of blocks actually mined.
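To illustrate, reusing the hypothetical next_limit() sketch from the “Alternatives” section above (again, made-up numbers, not the actual proposal): because a usage-based controller only ever observes blocks as actually mined, miners’ self-imposed soft caps bound its input, and a majority keeping conservative self-limits keeps the limit pinned at its floor.

```python
# Hypothetical illustration: miner soft caps bound the controller's input.
limit = 32_000_000
soft_cap = 8_000_000         # typical miner self-limit today
pending_demand = 20_000_000  # bytes of fee-paying transactions waiting

for _ in range(144 * 30):    # one month of blocks
    mined_size = min(pending_demand, soft_cap, limit)
    limit = next_limit(limit, mined_size)  # from the earlier sketch

print(limit)  # stays at the 32 MB floor: 8 MB blocks sit below the headroom target
```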

It is also important to stress that while the quality of any algorithm adopted must be very high, it does not need to be perfect. A large part of the algorithm’s value is that it relieves social costs going forward. In the case where an algorithm is found to need adjustment, or is even determined to be inadequate, it is certainly possible for the ecosystem to change it - through a CHIP or other possible systems - just like any other consensus rule.

7 Likes

Very reasonable position & very important discussion.

The Bitcoin Cash Podcast will be available as a venue for discussion, debate & knowledge sharing as this topic rises in the community’s thinking. A resolution on this for the 2024 upgrade would be amazing, and if not hopefully by 2025.

Personally I am still not entirely decided; I do want to see more contrary opinions (if there are any) argued & countered, but on the whole the idea of an automatic adjustment to relieve social burden & add predictability makes a lot of sense to me. I think the final point is also crucial: that it’s not a “one & done”. We should come to a carefully considered solution, but if large issues arise it can always be reverted or modified as necessary.

7 Likes

Let me help you there then.

The most important function of an automated blocksize adjustment system is not actually to make the network technically better (of course it does make it better, but that’s not my point). The actual problem to solve is a social problem, not a technical one.

The problem I am talking about is the behaviour of miners [and of people who use crypto in general]. As we have already established over multiple years of trial and error, miners are most certainly the most herd-following part of the BCH/BTC ecosystem.

Let me be frank: the only thing miners do is seek out and download the default software of a network (in the case of BCH, that is Bitcoin Cash Node) and then use it to mine like there is no tomorrow.

And in the miners’ choice of nodes and node settings, there is little

  • Rational decision process
  • Logical thinking
  • Reasoning of any kind involved.

No, miners just follow the leaders (main developers). They download the default software and mine. They are not interested in internal project politics, they do not want to decide themselves whether they have downloaded the “correct” software with the “correct” consensus. They don’t want to decide.

And yes, while it is true that they do adjust some parameters like the soft blocksize limit, the most important decisions, like “what should the hard blocksize limit be” or “should the CashTokens system be implemented on the BCH chain”, remain untouched. Developers & community take 100% care of that. Miners don’t want to touch any hard decisions at all [which is also the main reason why projects like BMP or Tom Zander’s last blocksize-related CHIP cannot and will not possibly work].

Miners instead download the default software (usually meaning most popular).

So what we are actually doing here by setting up automated blocksize adjustment is creating a herd-movement direction for miners to follow. We are showing the way - and for the next years or decades, at that.

If miners don’t want to and cannot make the hard decisions, then the community and the developers have to decide for them. The universe hates void.

Under the circumstances, this is the most rational direction we can possibly take.

2 Likes


My mental model, explained simply in more real-world terms, goes like this:

Individuals run services (nodes, explorers, whatever) that have a physical, real-world limit on what kind of blocksize they can process.
This limit grows over time as they get new hardware, new software, or indeed better Internet service.

Individuals make up the total of the ecosystem. You need to somehow find the relevant ones, though. Nobody cares about a raspberry pi not keeping up anymore. But if Binance can’t keep up, there is a problem.

So, you need some way to do signaling. And the same goes for any system, algorithmic or not. Kraken won’t be mining but it surely should have a say in maximum block size. (not that we have to listen, but they should have a voice).

At the end of the day, given all those people’s limits, miners choose a blocksize that maximizes their profits. They obviously don’t want to get orphaned, but they don’t want to miss out on fee-paying transactions either.
Their job is to find the blocksize that is best for their business and doesn’t hurt the coin, and thus their income.


The algorithmic increase of the blocksize limit sounds fine, provided humans can overrule it without a hard fork. BCA’s suggested algorithm is capable of that, so no issues.

You know, this is indeed the state of things and I don’t disagree that miners are… to pick a word: followers.

The problem here is that never in the history of the coin has a miner mined too big a block for it to become an issue. Maybe one or two mined a big one and got orphaned due to a race, but that is quite rare, and more importantly those miners were already mining bigger blocks than the main full node was configured for.

So maybe I am wrong to call it a problem that the default settings have always been conservative enough to make sure nobody lost money. That’s a feature, right? Not a problem! Sure, it’s a feature which has the side effect that miners don’t make decisions. Soft times and all that.

Should BCH get to 200 MB blocks, I think it’s very hard to predict what will happen or what the ecosystem will look like then. I expect that the mining software infrastructure will be very different. I expect that mining priorities will be quite different, aimed at maximizing profits from fees and coin value. And I expect that there will be big, profitable companies that may just be pushing back against the growth speed.

We’ll see :slight_smile:

Whatever happens, should miners become super competitive and start to listen to big players on blocksizes in the (far) future, I want to make sure that there is no need for a hard fork for them to be able to bypass any algorithm.
Like I stated in my previous post, I have not seen any indication that this is the intention of the algorithm; it could be shipped default-on in the next BCHN release and nothing bad would happen.

I am not saying that your entire way of reasoning is wrong.

In fact, in a perfect world where miners are active participants in the ecosystem, taking part in the discussions here, on GitLab and in the other places developers work, your way of thinking would totally work. (Also, there would be no BCH, because miners would have just HARD-dropped Core in 2017 and used Bitcoin Classic / Bitcoin Unlimited instead; we would have a 256MB-blocksized BTC right now and live happily ever after.)

However, the world is not a perfect place; we are still not developed enough as a civilization, and humans are unfortunately just more complex animals that follow either the herd or the alpha (or both). And miners are the most animalistic of us BCHers, for whatever reason.

In such an imperfect place, your reasoning cannot possibly work, but it is not your fault. It’s evolution’s fault; it’s just too slow.

We developed our tech way too fast and we are not ready for it, clearly. So we have to do some workarounds.

From BU website:

Shortly after the release of Bitcoin Unlimited 1.0.0.0 Bitcoin.com’s mining pool mined a block with a size bigger than 1MB, which was immediately orphaned.

Incident report from pool:

3 Likes

It will be hard to predict what it will look like even at 10 MB baseload :slight_smile: Consider that Ethereum, with all its size, barely reached 9 MB every 10 minutes.

The algorithm can be bypassed or have its parameters updated using the same logic we can now use to update the flat EB. I now understand why you don’t see it as a hard fork and why we were talking past each other before: the updated limit doesn’t get tested on each block, since it’s a <= rule, and changing its parameter does not cause a hard fork unless actually tested by hash-rate. From a Reddit discussion:

The nuance is in the difference between == rules (like for validating execution of opcodes, correctness of signatures etc.) and <= rules (like for validating the blocksize limit, max stack item limit, etc.).

If now we have a <= EB_1 rule, then later when we move the limit to some <= EB_2 it will be as if the EB_1 rule never existed, as if it had been the EB_2 rule all the time - nodes don’t need to introduce some if-then-else check at height N that switches from EB_1 to EB_2 in order to validate the chain from scratch - they just use EB_2 as if it had been set at genesis.

The MTP activation code is only temporary, to ensure everyone moves from EB_1 to EB_2 at the same time, and can later be removed and we pretend it was EB_2 right from the start.

Algorithm’s activation (or later updating or deactivation) would work the same.
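A small sketch of the distinction described above; the EB values and helper are hypothetical, and the point is only that a relaxed <= bound keeps every old block valid:

```python
# Hypothetical illustration of why relaxing a "<=" consensus rule differs
# from changing an "==" rule: every block valid under the old bound stays
# valid under the new one, so a node syncing from genesis can apply the
# newer, larger EB to the whole chain with no height-based switch.

EB_1 = 32_000_000   # old excessive-block limit
EB_2 = 64_000_000   # hypothetical later limit

def block_valid(block_size: int, eb: int) -> bool:
    return block_size <= eb

historical_sizes = [250_000, 8_000_000, 31_999_999]  # blocks mined under EB_1

assert all(block_valid(s, EB_1) for s in historical_sizes)
assert all(block_valid(s, EB_2) for s in historical_sizes)  # still valid under EB_2
```

A divergence would only be observable if a block larger than EB_1 but within EB_2 were actually mined and built upon, which is the “unless actually tested by hash-rate” caveat above.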

1 Like

Freetrader,

I remember that. BU software forgot to count some bytes and made a block that was too big. That was the last time their client was ever used for mining, probably forever cementing the concept of a “reference node” into the public consciousness.

It’s a great exception that proves the rule, I agree. A straight-up bug, and a silly Roger who believed more in a democratic software process than in good developers. :man_shrugging:

There is no real risk in activating the algorithm without a coordination event either. You can release it in the next BCHN update and miners can update their nodes at their leisure, because the limits imposed are not in conflict with the max blocksize being mined.

Just like our releasing the 2MB Bitcoin Classic on mainnet was not a hard-fork event, contrary to what some Core devs claimed (they also claimed it for XT).

1 Like

OK guys, so it is 100% clear to me that nobody is in any way against implementing @bitcoincashautist’s algorithm.

Is it already slated for 2024’s upgrade or are we waiting for something?

What do we need to get it done? Does somebody need to get paid to do it? Should I make a flipstarter or something?

3 Likes

My take on this (as a “medium blocker” who has been following BCH from the beginning in 2017):

Miners should have nothing to do with the maximum block size. Large miners’ interest is to confirm as many transactions as possible. As we have seen with Ethereum, they will raise the blocksize limit until they can’t do it anymore. The max block size parameter comes from users, or more precisely merchants, who define what consensus rules are.

The idea of an algorithmic adjustment based on usage to handle rapid surges in tx throughput is a good one. Such an algorithm should also be able to lower the variable max block size, in case of a censorship attack for instance. But IMO it still needs an absolute max block size. As a node operator, I need to know what the “worst case scenario” might be, so I can plan and estimate the costs. More generally, I don’t want to see BCH follow the path of BSV, which was delisted from Blockchair due to its reckless decision to remove the blocksize limit entirely.

Besides, 2024 is way too soon: this major upgrade needs a lot of discussion, and blocks are still very small today.

4 Likes

The idea of an algorithmic adjustment based on usage to handle rapid surges in tx throughput is a good idea

Note that in practice, “adjustment to handle rapid surges in throughput” doesn’t exist; an adjustment algorithm that increases the blocksize limit rapidly wrecks operators the same way a really big cap or no cap does, because the limiting factor is CPU/RAM/software - all things that respond very poorly to a rapid surge - as opposed to disk, which most people think of and which has a flat response to short-term surges. The best one can do for rapid surges is to leave some reasonable headroom, then concede that really big surges will just have to be left capped. (The article touches on this briefly: no hours/days doublings.)

As an operator you are going to be served all right by a slow-moving algorithm when planning out your investments. I was mesmerized by the idea of a rapid-moving algorithm for years too, until I tried to square it with real-world operations and realized it simply does not work for the important stuff.

If the algorithm is slow enough, then a hard cap becomes obsolete: it’ll either be small enough that the algorithm ceases to be useful, or large enough that the algorithm will take months or years to get there even in the worst case, which leaves plenty of room for an operator to see it coming.

Again, please think in slow-moving terms, not fast-moving terms. The considerations are quite different. :slight_smile:

5 Likes

Looking at the consecutive iterations of @bitcoincashautist’s algorithm, I would risk the thesis that the “worst case scenario” is that the algorithm will require minor tweaks along the way, but it will not cause any kind of disaster.

The math (actually the simulations) looks rock solid to me and I see no reason why we should not implement it, right now.

That said, I am open to being wrong.

1 Like

Even if activated in '24, it wouldn’t necessarily mean the limit would actually get moved beyond 32 MB any time soon; it would be more like a commitment to automatically respond to actual growth later - when it actually comes, not before.
Consider these features of the proposed algorithm:

  • It wouldn’t move at all if all miners kept their self-limit under 10.67 MB (most have it at 8 MB now).
  • If some 20% of hash-rate defected and started pumping at max, they could only push the limit to 58 MB, and it would take years to get there if the other 80% kept their self-limit at 10.67 MB or below.
  • A temporary burst, like 80% of hash-rate mining full blocks, could make the limit jump from 50 to 60 MB in 1 month, but as soon as the TX load goes down, the limit decays back to equilibrium.
  • Notice that the green line doesn’t move at all except in the 80% burst case. It actually takes more than 50% of hash-rate to continuously move the baseline (as in scenario 12, where the majority bumps it up).

Keep in mind the time scale: each grid-line is half a year.

How much discussion would count as “a lot”? In recent BCH history the discussions have been going on since 2020, and in the meantime BU actually deployed @im_uname’s dual-median proposal for their new cryptocurrency, which can serve as validation of the general approach. My proposal is an improvement over the median-based approach in that it achieves the same 50:50 stability (it takes more than 50% of hash-rate to keep pushing it up indefinitely), but it’s smoother, is based on a robust “control function” used in many other industries, has less lag, and is better behaved in corner cases; please see here how it compares against the median-based approach.

Yes, if it’s user-made transactions paying enough fees to counter reorg risk, in which case it’s evidence of healthy network growth, right? The transactions have to come from somewhere; someone has to actually make them, right? If it’s real users making those TX-es and paying at least 1 sat/byte, then do we agree we should find a way to move the limit to avoid killing growth like in '15? The algorithm will do the part of moving the limit for us, even though it cannot magically do all the lifting work required to bring actual adoption and prepare the network infrastructure for sustainable throughput at increased transaction numbers.

If it’s some miner trying to mess with us and stuff his own TX-es, then he will pay a cost in block propagation, he’d have limited impact if he has <50% hash-rate, and he’d have to maintain that regime of mining for as long as he wants to be “stretching” our limit - and when he loses steam, the algorithm would fall back down to the baseload of actual users’ transactions.

I thought their block size was limited by other factors, like validation time & gas prices, specific to the EVM architecture? Anyway, here’s how my proposed algo would’ve responded to actual Ethereum “chunks” (a chunk = the sum of 50 blocks’ bytes, so I can directly compare with our 10-min blocks).

[chart: simulated algorithm response to Ethereum chunk sizes]

The algorithm would always provide some headroom, about 2x above what’s actually getting mined, and it has some reserve speed to give more breathing space if the limit starts getting hit too often. But it’s all rate-limited: extreme load won’t magically move the limit by 10x in a week and shock everyone. Users would have to suck it up a little - have the TX load spread across multiple blocks and give the algorithm time to adjust to the new reality. For example, imagine a burst of 1 sat/byte TXes: those would have to wait a few blocks, while maybe just 1.1 sat/byte would be enough to get into the next block for normal users, and the mempool would clear once the limit expands to accommodate the rate of incoming TX-es, or once the TX load drops.
Back-testing against BTC shows this: the sudden jump due to ordinals would cause them to temporarily hit the limit (1 or 2 out of 10 blocks would hit it) until a new equilibrium is found (at about 3.4 MB for BTC).

[chart: back-test of the algorithm against BTC block sizes]

Here I have to disagree. Having an absolute limit would not achieve the objective of the move to an algorithm: closing the social attack vector. What happens when we get to that limit? What if we get stuck there forever? The main motivation for the change is not technical - technically we could just bump it up “manually” every year - but that is easier said than done, because it carries a “meta cost” in coordinating and agreeing on the next bump, and it leaves us open to adversaries poisoning the well and causing a deadlock, like what happened in '15. This proposal would change the “meta game” so that “doing nothing” means the network can still continue to grow in response to utilization, while “doing something” would be required to prevent the network from growing. The “meta cost” would have to be paid to hamper growth, instead of being paid to allow growth to continue, making the network more resistant to social capture.

You could plan 1-2 years ahead based on current network conditions, since the mathematical properties of the algorithm make it easy to model whatever “what if?” scenario. For example, we did some tests for 256 MB on scalenet already, but with the algo initialized at the 32 MB minimum it would likely take a few years to bring the limit to 256 MB. By the time it actually gets there, people would likely already be doing 512 MB tests to stay ahead, right? Remember BIP-101 - well, 75% of blocks would have to be consistently full in order to move the limit at BIP-101 rates (x2 every 2 yrs).
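For a sense of scale, “BIP-101 rates” works out to a tiny per-block growth factor. A back-of-the-envelope equivalent, assuming the nominal 144 blocks per day:

```python
# Per-block growth factor equivalent to doubling every two years (BIP-101 rate).
blocks_per_two_years = 144 * 365 * 2            # 105,120 blocks
per_block_factor = 2 ** (1 / blocks_per_two_years)
print(per_block_factor)                          # ~1.0000066, i.e. ~0.00066% per block
```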
Here’s the full set of simulations.

Please let me know if you want me to run some particular simulation to see how the algorithm would respond.

None of us do. Which is also why we don’t want to just bump it to 256 MB “to be ready for VISA-scale” or whatever - because putting the cart before the horse and having too much underutilized space would leave us open to abuse in the meantime, and could actually hamper organic adoption by increasing the cost of participation before economic activity justifies it - see here for the full argument.

4 Likes

Just as a note - if anyone sees contrary positions / reasonable alternatives that are not in the CHIP, they should be in it, so please send a link to BCA. I expect this CHIP to be one of the largest ever written, not for technical reasons but to ensure that it has really been iterated through and deeply considered against all alternatives.

6 Likes

@im_uname @bitcoincashautist All right, I didn’t realize how slow the adjustment was! This sounds better. Thanks for the (detailed) answers!

I would still prefer to keep a hard blocksize cap that could be raised or lowered, like every 10 years or so. I hear about the risk of a Schelling-point “social attack”, i.e. refusing to increase the limit despite good technical reasons to do so, but that’s how Bitcoin works. The idea is to ensure that the network is decentralized enough to secure low-value (“cash”) transactions; we don’t know what the good blocksize is for this. It could be 256MB, 1GB, 10GB, 100GB or 1TB, idk.

I hadn’t heard about this before yesterday, except for the im_uname proposal. Not on Reddit, not on Twitter, not on the BCH Podcast YouTube channel. I bet most BCH users haven’t heard about this either. It needs to reach non-developers before it gets into the protocol; otherwise people are trusting you blindly, and that’s not a good thing. Look at what happened with the Taproot upgrade and Ordinals on BTC: a lot of people got upset because they didn’t know this kind of thing could be done with Taproot.

3 Likes

This topic has been extensively discussed. If I remember correctly, the first iteration of @bitcoincashautist’s algorithm is 3 years old?

Also this matter has been discussed in one form or another since at least 2016, even back in BTC days. It’s not new and it did not appear out-of-nowhere.

And I am pretty sure it has been discussed here on bitcoincashresearch for at least a year and a half.

I do admit we do have a communication problem though.

There is basically no in-the-middle person who takes part in daily developer discussions and then translates the issues to the public.

That said, I am currently personally up to date with the opinions of the community and with the considerations about multiple topics, and I am certain that there is no significant opposition to this idea that would make any sense (excluding known trolls, of course).

1 Like

Well, no offense but this is your answer.

As a (serious, I assume) Node Operator, you’re expected to know what is currently going on, which means residing here and on Telegram too.

This way you would have learned about this idea years ago.

FYI, things that touch consensus like this one are marked with a big, red “CONSENSUS” tag/label.