Related: article on read.cash
Background
The primary motivation for BCH’s forking event in 2017 was an impasse over increasing the blocksize maximum, so the relevance of further blocksize increases to accommodate transaction volume needs no introduction to the BCH community, a community focused on reaching global usage on the L1 blockchain. For the rest of this writeup, maximum blocksize (maxblocksize) is defined as the maximum size of a mined block beyond which it will be rejected by the majority of the network, both by hashrate and by economy. Note that miners also self-impose an independent “soft limit” strictly below maxblocksize; that limit is distinct from the maxblocksize discussed here.
We have had two one-time increases to this number in the past:
- From 1MB to 8MB at the initial fork in 2017, and
- From 8MB to 32MB in 2018.
The 32MB limit has not moved since 2018, and demand has not been high due to slow growth in usage. While short “stress test” bursts conducted explicitly to challenge the limit have occurred from time to time, average long-term blocksize has stayed well below 500kB. It is important to note that in 2021 the default non-consensus “soft limit” shipped with BCHN was increased from 2MB to 8MB, which has proven useful in accommodating some burst scenarios.
Problem statement
While average usage today is well over two orders of magnitude away from challenging the current 32MB maxblocksize limit, two factors make it desirable to address the limit now:
- One-time increases in maxblocksize are an ongoing and unpredictable effort. While the CHIP process offers some stability and transparency, it nevertheless subjects the network to regular episodes of uncertainty regarding what some would consider its raison d’être. Putting a predictable, sane plan into action reduces that uncertainty and increases confidence for all parties - users, businesses, infrastructure providers and developers.
- In the event of rapid adoption, the social makeup of BCH’s community can expand and diversify rapidly, destabilizing efforts to address the problem and possibly resulting in a chaotic split as witnessed with BTC in the past. A plan adopted now will carry with it the inertia necessary to resist such destabilizing tendencies.
Considerations
Some crypto enthusiasts, citing a Satoshi quote, correctly note the mechanistic ease of changing maxblocksize in the code, while missing important impacts beyond changing a single number:
- On the low side, a small maxblocksize, even when blocks are not congested, may deter commercial usage and development activity. Business and development investments are long-commitment activities that often span months or even years; if entrepreneurs and developers cannot be confident that the capacity will be there when they need it, they are less likely to invest their precious time and money.
- On the high side, a maxblocksize that is too large for current activity invites adverse, unpredictable conditions, typically short bursts of noncommercial traffic that push the limits. The network impact of these activities is more subtle: they generate additional, volatile costs for infrastructure and service providers that may be difficult to justify. Contrary to intuition, most of the cost to operators comes from human operation and development complications, followed by processing power that scales with the size of individual blocks, with storage and bandwidth costs a distant last. We have observed this phenomenon in certain other cryptocurrencies, where very high throughput that did not come from commercial activity ultimately resulted in businesses ceasing to operate on their chains, reducing those networks’ overall value. To be clear, we do not view every existing operator’s continued existence as sacred; rather, we take the reasoned view that increased investment in infrastructure should be justified by corresponding commercial, value-generating activities.
- Historically, changing maxblocksize has come with a heavy social cost each time it happens, with the risk of community and network fracture. Satoshi’s quote made sense back when he made the decisions by himself; it applies less today, when the majority of the network needs to come to consensus. A longer-lasting plan set up front, minimizing each of these potentially centralizing decision points, can make the network more robust.
In short, a good maxblocksize adjustment scheme should offer the maximum amount of *predictability* to all parties: users who want steady fees, developers who want stable experiences, entrepreneurs who want to reduce uncertainty in growth, and service providers who want to minimize cost while accommodating usage.
Alternatives
With the criteria stated above, let’s examine some alternatives:
- Outright removal of the consensus blocksize limit: The general purist argument is that miners would resolve any disagreements on their own without a software-imposed limit. In reality, without an effective way to coordinate an agreement, each node can have vastly different capabilities and opinions on what sizes are tolerable. The result is therefore either network destabilization and a split without coordination, or opaque, centralizing coordination outside the protocol. Neither scenario is likely to offer confidence or stability.
- One-time increases to maxblocksize: While extremely simple to execute in the BCH context, as described above this subjects the network to regular episodes of uncertainty and social cost, and is thus less than ideal for long-term growth. At every manual increase, the concerns of all parties have to be reconsidered, sometimes under adverse social conditions and without the benefit of inertia.
- Fixed schedule: Have maxblocksize increase on a rigid schedule, such as BIP101 or BIP103 (a rough sketch of such a schedule follows this list). Also simple to execute, these schemes additionally offer a possible scenario where, if demand stays roughly in line with the schedule, no manual adjustment is ever needed. It is impossible to perfectly predict the future, though, and such schemes will inevitably diverge from real-world usage and cost, requiring frequent revisits to their parameters. Each revision can incur larger social costs than even a one-time increase, because a schedule is more complex to negotiate than a single size.
- Algorithmic adjustment based on miner voting: Adopted by Ethereum, this scheme has miners (and pools, by proxy) vote on the maximum block capacity at fixed intervals, with the result tallied by a fixed algorithm that then adjusts maxblocksize up or down for the next period. While this scheme can work well with a well-informed and proactive population of pools, our current observation is that no such population exists for BCH - miners and pools typically only intervene when a crisis happens, which may not be ideal for user confidence. BCH is additionally a minority chain within its hashing algorithm, which may complicate incentives when it comes time to adjust maxblocksize.
- Algorithmic adjustment based on usage: Multiple attempts exist, including an older dual-median approach and a newer, more sophisticated WTEMA-based algorithm. These schemes generally aim to algorithmically adjust maxblocksize based on a fixed interpretation of past usage in terms of block content (a sketch of this family of controllers also follows this list). While far from perfect, we see these schemes as our best path forward to achieve reasonable stability, responsiveness, and minimization of the social cost of future adjustments.
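To make the fixed-schedule idea referenced above concrete, here is a minimal sketch, in Python, of a BIP101-shaped rule: the limit doubles every two years from an activation point, interpolating linearly between doubling points. All constants are illustrative assumptions for this writeup, not the parameters of any actual BIP.

```python
# Illustrative only: a fixed schedule in the spirit of BIP101, with made-up constants.
TWO_YEARS = 2 * 365 * 24 * 60 * 60   # seconds, ignoring leap days
ACTIVATION_TIME = 1_500_000_000      # hypothetical activation timestamp
BASE_LIMIT = 8_000_000               # hypothetical starting limit, in bytes
FINAL_LIMIT = 8_000_000_000          # hypothetical ceiling where growth stops

def scheduled_limit(block_timestamp: int) -> int:
    """Return the scheduled maxblocksize for a block mined at block_timestamp."""
    if block_timestamp <= ACTIVATION_TIME:
        return BASE_LIMIT
    doublings, remainder = divmod(block_timestamp - ACTIVATION_TIME, TWO_YEARS)
    limit = BASE_LIMIT * 2 ** doublings
    limit += limit * remainder // TWO_YEARS   # linear interpolation to the next doubling
    return min(limit, FINAL_LIMIT)
```

The appeal is total predictability; the drawback, as noted above, is that the curve is fixed at design time and will eventually drift away from real demand.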
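And here is a minimal sketch of the general shape of a usage-based controller in the WTEMA family: an exponentially weighted moving average of mined block sizes, with the limit held at a fixed headroom multiple of that average and never below the current 32MB. This is a simplification for illustration only - the actual dual-median and WTEMA-based proposals differ in constants, use asymmetric response rates, and operate on integers - and the HEADROOM and ALPHA values here are assumptions of this sketch, not proposal parameters.

```python
# Illustrative only: a heavily simplified usage-based controller in the WTEMA family.
FLOOR = 32_000_000          # never adjust below the current 32MB limit (bytes)
HEADROOM = 4.0              # keep the limit at this multiple of typical usage (assumption)
ALPHA = 2 / (144 * 30 + 1)  # smoothing factor spanning roughly 30 days of blocks (assumption)

def next_limit(ewma_size: float, new_block_size: float):
    """Return (new_limit, new_ewma) after observing one mined block.

    The EWMA tracks the sizes of blocks actually mined; the limit follows the
    EWMA times a fixed headroom multiplier. Because each block moves the EWMA
    only a tiny step, the limit can only change gradually in either direction.
    """
    new_ewma = ewma_size + ALPHA * (new_block_size - ewma_size)
    new_limit = max(FLOOR, HEADROOM * new_ewma)
    return new_limit, new_ewma
```

Sustained commercial growth lifts the average and, with it, the limit; a long downturn lets both drift back toward the floor, which speaks to the operator-cost concern raised earlier.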
Criteria of a good algorithm
In our opinion, a good maxblocksize adjustment algorithm must address the following concerns:
- To give service operators predictability and stability, any increases must happen over a long window. We have observed adjustment algorithms where it is possible to double maxblocksize over a matter of hours or days - the volatility they allow reduces the utility of an algorithmic approach.
- The algorithm should aim to accommodate commercial bursts such as holidays, conventions, and token sales, such that user experience is not impacted by fee increases the vast majority of the time. Note that while a rapid-increase algorithm can satisfy this for users, it conflicts with the first point above in that it does not offer a predictable, stable course for operators - it is therefore likely preferable to simply keep a healthy maxblocksize with a large buffer well above average usage.
- The algorithm should aim to reduce costs for operators in times of commercial downturn. Over BCH’s many more years and decades of operation it will inevitably see ups and downs, and it is important that the higher operating costs justified during boom times do not unreasonably burden services during the bust years. During a long downturn, a reasonable limit that defends well against unpredictably high bursts of cost (see “Considerations” above) can mean the difference between keeping or losing services. Such downward adjustments can happen slowly, but they should not be omitted altogether.
- The algorithm should be well-tested against edge cases that may cause undesirable volatility. This is especially important considering the history of BCH’s difficulty adjustment algorithm, which was plagued by instability for years, both in the Emergency Difficulty Adjustment era of 2017 and in the fixed-window era that followed until 2020. Blocksize algorithms must learn from this experience and aim to minimize potential vectors of trouble.
Additional notes on miner control
Some may say that usage-based algorithms take control out of the hands of miners; in our opinion this is not true. Miners today have an additional control vector in the form of a “soft cap” that lets them easily specify a maximum size *for the blocks they themselves mine* below the network-wide maxblocksize. Adjusting this cap gives them an input into any usage-based algorithm, since such algorithms depend on the sizes of past blocks actually mined.
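As a rough illustration of that feedback, the toy simulation below (reusing the hypothetical controller sketched in the Alternatives section, with the same assumed constants) shows that a miner-chosen soft cap bounds what a usage-based algorithm can observe, and therefore bounds where the limit can settle, no matter how high demand gets.

```python
# Toy simulation: the miners' soft cap bounds the inputs to a usage-based limit.
# Controller and constants are the same illustrative assumptions as the earlier sketch.
FLOOR = 32_000_000
HEADROOM = 4.0
ALPHA = 2 / (144 * 30 + 1)

def settled_limit(demand: float, soft_cap: float, blocks: int = 144 * 365) -> float:
    """Run `blocks` blocks of constant demand with miners enforcing soft_cap."""
    limit, ewma = FLOOR, 0.0
    for _ in range(blocks):
        mined = min(demand, soft_cap, limit)   # miners never exceed their own cap
        ewma += ALPHA * (mined - ewma)
        limit = max(FLOOR, HEADROOM * ewma)
    return limit

print(settled_limit(demand=1e9, soft_cap=8_000_000))   # stays at the 32MB floor
print(settled_limit(demand=1e9, soft_cap=16_000_000))  # settles near 64MB, not near demand
```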
It is also important to stress that while the quality of any adopted algorithm must be very high, it does not need to be perfect. A large part of the algorithm’s value is that it reduces social costs going forward. In the case where an algorithm is found to need adjustment, or is even determined to be inadequate, it is certainly possible for the ecosystem to change it - through a CHIP or other possible systems - just like any other consensus rule.