Asymmetric Moving Maxblocksize Based On Median

This is mostly a direct copy-paste from its previous location for preservation, but feel free to discuss below.

  BIP: ?
  BUIP: ?
  Title: Asymmetric Moving Maximum Block Size Consensus Rule Based On Median Block Size
  Author: @im_uname, adapted from https://github.com/bitpay/bips/blob/master/bip-adaptiveblocksize.mediawiki
  Status: WIP
  Type: Standards Track
  Created: 2018-10-21
  Latest edit: 2019-01-05

==Abstract==

We propose a dynamic limit to the block size based on the '''largest''' number out of the following:

  1. Median block size over the last 12,959 blocks (target 90 days at average 10 minute blocks) multiplied by 10 and calculated when a block is connected to the blockchain.

  2. Median block size over the last 52,559 blocks (target 365 days at average 10 minute blocks) multiplied by 10 and calculated when a block is connected to the blockchain.

  3. Pre-activation consensus maximum blocksize limit, currently (2018-12-25) at 32,000,000 bytes (32MB).

==Motivation==

This is a chain-based signaling mechanism that will allow miners to have certainty that the blocks they build will be accepted by the rest of the network. If they wish, a majority of miners will be able to move the limit either upwards or downwards, at a cost, over a window of time. With this BUIP in place, future hard forks related to block size should be unnecessary.

The mechanism is expected to accommodate increases in usage over time in a manner predictable to all network participants, while ensuring that spikes in usage rarely exceed the upper cap at any given time. Network robustness is enhanced by a predictable and slow-moving cap that can be planned against, reducing the risk of unexpected outages and splits.

==Specification==

** Extract the median blocksize of the last 12,959 blocks and of the last 52,559 blocks (90 and 365 days of blocks respectively) after activation (by either method). Any blocksize that is either from before activation time (hence not available to some pruned nodes) or less than 3,200,000 bytes (3.2MB) is counted as 3,200,000 bytes for the median calculation.
** The larger of the two medians multiplied by 10 (10 is the growth multiplier) is the new maximum block size consensus limit for the next block; a worked example follows.
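For illustration (hypothetical numbers): if the 90-day median works out to 4,500,000 bytes and the 365-day median to 3,300,000 bytes, the limit for the next block is max(4,500,000 * 10, 3,300,000 * 10) = 45,000,000 bytes. Because every block counts as at least 3,200,000 bytes in the median calculation, both medians are always at least 3,200,000 bytes, so the limit never falls below 32,000,000 bytes.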

===Activation===
Activation: By flag date or BIP135 voting.

Fully validating older clients are not compatible with this change. The first block exceeding the old limits on block size will partition older clients off the new network.

SPV (simplified payment verification) wallets are compatible with this change.

==Rationale==

By having a maximum block size that adjusts based on the median block size of past blocks, the degree to which a single miner can influence the decision over what the maximum block size is becomes directly proportional to their own mining hash rate on the network. The only way a single miner could make a unilateral decision on the block size limit would be if they had greater than 50% of the mining power sustained over the span of three months, or one year in case they decide to shrink the maximum block size; in that case, Bitcoin Cash has existential problems beyond the scope of this BIP. Using the median block size multiplied by 10 achieves predictable growth in the maximum block size over time, allowing seasonal and circumstantial spikes in real usage of up to 10x, while protecting the network against excessively large blocks causing instability, both known and unknown.

Choosing the median block size rather than an arithmetic mean is an important design decision. Using a moving average over the look-back period to calculate the maximum block size consensus rule would allow individual miners to influence this consensus rule in a way that is not proportional to their historical hash rate on the network. In other words, a single miner could have a disproportionately large influence over the block size limit by building very large or very small blocks.
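As a toy illustration (hypothetical window size and block sizes, not part of the specification), compare how a single outlier block moves a mean-based limit versus a median-based one:

from statistics import mean, median

# 99 blocks of ~1 MB from the rest of the network, plus one 32 MB block
# stuffed into the window by a single miner.
honest = [1_000_000] * 99
padded = honest + [32_000_000]

print(mean(honest), median(honest))   # both are 1,000,000 bytes
print(mean(padded), median(padded))   # mean jumps to 1,310,000 (+31%); median stays at 1,000,000

A limit derived from the mean would move by roughly a third because of one outlier block, while the median-derived limit does not move at all.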

===Miners’ Collective Manual Intervention at a Cost===

We expect miners to optimize their actual mined blocksizes by weighing transaction fees from broadcast transactions against the orphan risk of larger blocks (see [[https://www.bitcoinunlimited.info/resources/feemarket.pdf | analysis]]). In cases where the allowances of this BIP prove to be inadequate and miners collectively wish to increase or decrease the blocksize limit beyond normal usage (for example, to grow in anticipation of a large incoming usecase, or to stave off a newly discovered rare attack at the edge of the blocksize limit), they have the following options (a rough sketch of the timeframes involved follows the list):

  • If >51% of miners wish to increase the blocksize limit, they can fill their blocks with transactions to themselves to above 1/10th of their desired limit, and sustain this over a period of 3 months. This incurs a cost in orphaning risk versus peers who do not share that goal. Such a cost does not apply if a single miner with over 51% of the hashrate decides to abuse the mechanism, but such a miner brings other, even more problematic risks to the network; and the long adjustment window allows other network participants to observe any potential manipulation and reach their own decisions in time.

  • If >51% of miners wish to decrease the blocksize limit, they can restrict their mined blocksizes to below 1/10th of the desired limit, and sustain this over a period of one year. This incurs a cost in forgone transaction fees.
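As a rough, non-normative sketch of the timeframes involved (assuming a coalition mining a fraction hash_share of all blocks, every one of them at or above the target size, with blocks arriving at the average ten-minute pace; names and numbers are illustrative only):

import math

SHORT_MEDIAN_WINDOW = 12959   # 90-day window, relevant for increases
LONG_MEDIAN_WINDOW = 52559    # 365-day window, relevant for decreases

def blocks_to_shift_median(window, hash_share):
    """Expected number of blocks until more than half of the window consists
    of the coalition's blocks, i.e. until the median reflects their chosen
    sizes; returns None if the coalition is too small to manage it alone."""
    needed = window // 2 + 1
    if hash_share * window < needed:
        return None
    return math.ceil(needed / hash_share)

# e.g. a 60% coalition pushing up the 90-day median:
# ceil(6480 / 0.6) = 10,800 blocks, roughly 75 days at ten-minute blocks.
print(blocks_to_shift_median(SHORT_MEDIAN_WINDOW, 0.60))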

These adjustments, tempered by their associated costs and the rolling adjustment window, are not considered "attacks"; should they be employed, they should be regarded as miners rationally choosing to intervene on behalf of their investments.

If a severe and immediate vulnerability is discovered for larger blocks and cannot be adequately addressed by the decrease mechanism, miners shall implement a manual soft fork with either a flag date or a BIP9/BIP135 mechanism.

===Asymmetric Increase/Decrease Mechanism===

Under this BIP, the limit is by design harder to decrease than to increase, with decreases happening over a longer time period than increases. The Bitcoin Cash community has been known to focus on relentless growth, including aggressively preparing for future capacity bounded only by network robustness concerns, and this mechanism reflects that. Network capacities that have been reached and deemed safe in the past are unlikely to become unsafe in the future, partly due to the ever-falling price per unit of capacity of computer hardware, and partly because participants who have invested in higher capacity are likely to inertially retain it in anticipation of returning volume, even if there is a short-term or seasonal fall in usage.

===Choice of parameters===

===Fastest Growth Scenarios===
The maximum rate of growth is capped by the growth multiplier, which is 10. The fastest the limit can increase is when every block is mined to capacity continuously over time. In that case, the absolute fastest growth rate is a 10x increase of the maximum block size every 6,480 blocks (45 days); 6,480 blocks is just over half of the 12,959-block window, which is how long it takes for blocks mined at the cap to dominate the 90-day median. While this is a very aggressive pace of growth, it only happens when 100% of miners participate, and it slows down as fewer miners participate.
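A rough sketch of this worst case (assuming every block from activation onward is mined exactly at the prevailing cap and blocks arrive every ten minutes; the names and figures are illustrative, not normative):

PRE_ACTIVATION_CAP = 32_000_000   # bytes
MULTIPLIER = 10
BLOCKS_PER_RAISE = 6480           # just over half of the 12,959-block window

def cap_after(n_blocks):
    """Consensus cap after n_blocks in which every block is mined at the
    prevailing cap; the cap multiplies by 10 each time full blocks come to
    dominate the 90-day median."""
    return PRE_ACTIVATION_CAP * MULTIPLIER ** (n_blocks // BLOCKS_PER_RAISE)

print(cap_after(144 * 45))   # 45 days of full blocks -> 320,000,000 bytes
print(cap_after(144 * 90))   # 90 days of full blocks -> 3,200,000,000 bytes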

In the case of extreme growth in usage that exceeds this pace, there is a tradeoff in terms of the multiplier: a higher multiplier can accommodate even faster growth and further improve user experience at the edge, but comes at the cost of less predictability for both network participants' investments and security.

===Mining Ramifications===
We don’t expect this consensus rule change to impact miners who choose to mine from block headers before the full block is transferred, validated and connected to the active chain. The same behavior will be observed before and after this change. Since this consensus rule will be calculated as a precondition to adding a new block at the tip, the maximum block size rule for the next block will not be known until after the last block is connected to the main chain.

Since the limit moves slowly and is kept at a high multiplier above median usage, miners who wish not to accidentally exceed the current limit have ample room to safely add a large number of transactions to their block: simply prepare a block targeting the limit as if the next incoming block will be empty.
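A minimal sketch of that approach (reusing the allowed_blocksize function from the pseudocode section below; conservative_template_limit is a hypothetical helper, not part of the specification):

def conservative_template_limit(past_realblocksizes):
    """Lower bound on the limit for the block after the next one, computed
    before the next block is known, by pretending that block is empty. The
    empty block is floored to 3,200,000 bytes in the median calculation, so
    this can only under-estimate, never over-estimate, the real limit."""
    return allowed_blocksize(past_realblocksizes + [0])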

===Choice of Long Adjustment timeframes and Larger Multiplier===
There have been comments about whether to significantly shorten the increase timeframe and reduce the multiplier, so that extraordinary spikes in usage can be better accommodated.

One of the desirable properties of an adaptive system is predictability: node operators, both mining and economic, can anticipate increased maximum throughput of the network and plan their hardware deployment as well as mining strategies accordingly, ahead of time. It is not essential that the network accommodate operators that cannot keep up when more network capacity is needed; however, it is very beneficial for all operators, even those who can keep pace, to know ahead of time what to deploy and how much to invest. With a 90-day median cycle and a maximum 10x increase every 45 days, we allow them a comfortable window of predictability.

A much shorter timeframe removes this predictability, resulting in uncertainty both in investment and in network health. Even at a very conservative multiplier of 2x, which would result in frequent full blocks and a less predictable user experience where confirmations are needed, a short timeframe - say, a 48-block window - can result in a worst-case 64x increase in the blocksize limit over a single day. This would expose node operators and businesses to many types of denial-of-service problems without clear indicators of how much hardware and software they should invest in, making attack scenarios that require softforks more difficult to handle, while still offering an inferior experience to users in non-attack scenarios with normal fluctuation.
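To sketch where the 64x figure comes from (assuming a 48-block median window, a 2x multiplier, and every block mined at the prevailing cap): full blocks come to dominate such a short median after only about 24 blocks, so the cap can double roughly every 24 blocks; at 144 blocks per day that is about six doublings, for a worst case of roughly 2^6 = 64x in a single day.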

Within a reasonable range, the author recommends using a long adjustment period and a large multiplier to accommodate spikes in usage, rather than a short adjustment period paired with a small multiplier.

==Example Pseudocode==

SHORT_MEDIAN_WINDOW = 12959
LONG_MEDIAN_WINDOW = 52559
MULTIPLIER = 10
MIN_BLOCK_SIZE = 3200000

def median(values):
    """Calculate the median of an odd number of integer values."""
    # only allow an odd number of values!
    assert len(values) % 2 == 1
    return sorted(values)[(len(values) - 1) // 2]

def raise_floor(past_realblocksizes):
    """Take real blocksizes and fill in the holes and any size below 3.2MB
    with 3.2MB."""
    # Assume previous code fills non-available blocksizes with 0
    return [MIN_BLOCK_SIZE if x < MIN_BLOCK_SIZE else x for x in past_realblocksizes]

def allowed_blocksize(past_realblocksizes):
    """Calculate the allowed size for the incoming block at height h, given
    past_realblocksizes, an array of integers holding the past block sizes with
    past_realblocksizes[-1] being the block size of the current chain tip, that
    is the block at height (h-1)."""
    past_blocksizes = raise_floor(past_realblocksizes)
    median_90d = median(past_blocksizes[-SHORT_MEDIAN_WINDOW:])
    median_365d = median(past_blocksizes[-LONG_MEDIAN_WINDOW:])

    return max(median_90d * MULTIPLIER,
               median_365d * MULTIPLIER)

===FAQ===

Q: Is it possible for the maximum block size consensus rule to drop below the current rule of 32,000,000 bytes under this proposal?
A: No. Under this proposal the maximum block size consensus rule never drops below 32,000,000 bytes: miners can build blocks of any size they want below the limit, but the limit itself will not fall below 32,000,000 bytes.

Q: Why should the maximum block size consensus rule be computed every time a block is connected to the active chain? Why not every 144 blocks or 2016 blocks, etc.?
A: Quite simply, the decision to calculate the maximum block size after each block was the least arbitrary of all the options available to us. In other words, we considered calculating the maximum block size at other intervals to be more arbitrary than doing it every block, for no gain.

Q: How would this proposal affect miners’ behavior? More specifically, how does this affect miners that begin hashing using the last reported successfully mined block header only?
A: We don't think that SPV miners will be affected by this proposal. This proposal posits that block size increases will happen in response to the need to process transactions on the Bitcoin payment network, moderated by the limitations of current technology. For example, miner A builds a large block and propagates it. Miner B has very limited hardware and/or network capabilities. Miner B chooses to mine from miner A's block header only and actually finds the next block (this happens at the present time). Miner B does not have the ability to add any transactions from their memory pool for fear they appeared in the last block. So, miner B mines a block with only the coinbase transaction. This very small block will then be a factor in lowering the maximum block size consensus rule in the immediate future.

Q: Will this proposal lead to a change in incentives for miners? If so, how?
A: We don't think miner incentives will change at all. The chances of a miner (or pool) building the next valid block are directly proportional to their percentage of hashing power on the network. Similarly, the degree to which a miner (or pool) can influence the maximum block size consensus rule is also directly proportional to their hashing power on the network, while incurring a cost if they deviate from the cost optimum.

Q: What about “weak blocks”; could the development of this technology invalidate the assumption that larger blocks have a higher orphan risk?
A: While validation and propagation times might be significantly improved by weak blocks, adding transactions to a block has a cost and will always have a cost. As propagation and validation technologies improve, that cost falls and miners will naturally have incentives to build larger blocks with lower per-transaction fees; this is expected and encouraged in Bitcoin Cash. Storage costs are not expected to be a problem, as network participants target their investments against the comfortable upper multiplier limits, whose increase is moderated by the rolling window.

Q: What about consideration of systems used in other cryptocurrencies like Meni Rosenfeld’s flexcap system in Monero?
A: The simplicity of using a multiple of the median block size over a look-back range was very attractive to us. Some of those flexcap proposals allow the block size to momentarily surge in response to high demand. We believe this is a mistake. The network cannot magically conjure up additional capacity in response to surges in demand. We believe the proper mechanism to address surges in demand is to prepare for them using higher multiples that allow network participants to invest around them well ahead of time.

Q: What about other consensus rules that are currently directly derived from maximum block size such as maximum signature operations per block and maximum standard transaction signature operations? How does this proposal affect those consensus rules with respect to current rules?
A: It doesn’t. This proposal does not address other block limitations because we believe they should have independent limits. It can be debated whether they should have an adaptive limit mechanism as well.

Q: What if an extraordinarily pressing need to increase the blocksize arose that could not be adequately addressed by simply mining full blocks that raise the maxblocksize every 45 days?
A: Just like under the status quo, an extraordinary situation can be addressed by a hard fork. Given the relative ease of adjusting via the adaptive blocksize versus a hardfork that can raise controversy, though, we expect almost all needs will not require a hardfork.

Q: Is this proposal compatible with various proposed UTXO commitment / fast sync schemes that aim to enable syncing fully verifying nodes without downloading all historical blocks, such as [[https://github.com/tomasvdw/bips/blob/master/BIP-UtxoCommitBucket.mediawiki|this]]?
A: This proposal adds an additional requirement for any fast-sync scheme: in addition to the hash of the UTXO set, blocksizes need to be committed and verified as well. Fast-syncing clients should download blocksize commitments and use them to determine the consensus maximum blocksize limit for new blocks.

==Prior Implementation for Reference==

==CC0 Waiver==
To the extent possible under law, im_uname has waived all copyright and related or neighboring rights to Asymmetric Moving Maximum Block Size Consensus Rule Based On Median Block Size.


Looks like a reasonable proposal to me. I like that it is slow-moving and predictable, and I would love to see sites like cash.coin.dance show graphs of the max cap over time. I also think that the 32MB limit as a floor is something that should probably be talked about more - it's arbitrary right now, and when/if blocks get to be really large, it might make sense to hardfork to raise it further.

I’m a bit concerned about the UTXO commitment added costs, but I will leave it to people working more closely on UTXO commitments to make their voices heard with regard to this.


Another prior implementation reference of Bitpay’s old algo:

I actually remember having to fix a bug in one of their calculations, but the general concept was quite sound imo.


I believe that the block size limit should be based on supply, not demand. That is, the limit should be based on what the software and hardware can handle, not based on how much of the block is being used. If the actual hard/software capacity is 100 MB, and usage/demand is only 1 MB, then the limit should be 100 MB. If the hard/software capacity is 100 MB, and there’s usage/demand for 300 MB, then the limit should be 100 MB.

I understand that there’s the desire to make this no longer a constant in the code in order to prevent 2015 from happening again, but I think there are better (and simpler) ways to do that. BIP101, for example, seems like a pretty good default trajectory: it should be enough to handle exponential growth in demand, while also reflecting exponential growth in hardware capacity.


I'm with Toomim here: while this proposal may well produce correct outcomes most of the time, I don't really see it solving the problem.
Miners will likely end up with a -blocksizeacceptlimit= (Flowee name, others have similar) based on what their setup is comfortable with, and based on what the mining community wants.

In the first BCH hardfork we had this wording:

If UAHF is not disabled (see REQ-DISABLE), the client shall enforce that the “fork EB” is configured to at least 8,000,000 (bytes) by raising an error during startup requesting the user to ensure adequate configuration.

In this hard fork nobody talked about a consensus-level block-size limit. The implied effect is that in the consensus rules we don’t actually have a block size limit on BCH.

So, this is the status quo. We don't have a consensus rule for block-size limits, and as far as I know miners are very much capable of picking a size that protects themselves from big bad blocks. Add the fact that miners can still signal their EB (the historical name for blocksizeacceptlimit), and I'd conclude we actually have to raise the burden of proof: not just whether this is a good idea, but whether the max block size is a parameter that belongs in the consensus rules in the first place.

My personal opinion is that the max block size is a node-local policy that is "set" by software developers only as a secondary effect: the software being capable of bigger blocks.


Note that in this proposal, miners can (and likely will) affect the ongoing cap by softcapping the blocks they mine themselves. Simply putting a softcap of 1/10th the current limit in mining software by default, for example, will result in the max cap never moving unless/until 51% of miners manually raise that softcap, and even then only after a while.

So with a proper implementation, this is basically a “miner decide” proposal, except they decide with the actual sizes of blocks they mine instead of voting up or down. If they really want to they can even stuff their blocks with garbage to “vote” without external transactions, albeit at a cost to their orphan rate over time.


Absolutely, I agree that they will cap themselves. And this is why I said that the outcome will not hurt (will be correct), as you can see this as miners working around this consensus-level limit.

While it's great that a change doesn't hurt miners, I'm not convinced this is a change that anyone actually needs.

Well, there currently is no voting.

Yeah, sounds like you expect miners to work around this.

If you expect miners to game the system, and we currently have no system, what, then, is the reason for introducing one?

I support a proposal such as this. I don't accept the argument "what the software can handle". Which software on what machines? Who decides what software to use as a benchmark? It makes more sense to just raise the blocksize as demand increases and make sure there is room for spikes.


Have you thought of a case where an adjustable or increased minimum-max-blocksize would be meaningful?

I don’t have a particular reason to think it is important, but just to ask - did recent DAA discussion and issues around boundary conditions of a rolling window have any impact on your thoughts about the design here?

This is your friendly regular reminder that the blocksize limit was removed from the consensus rules in August 2017 with the following statement in the original BCH spec:

the client shall enforce that the “fork EB” is configured to at least 8,000,000 (bytes)

Notice the word configured here; this is a user-configured size, very different from a consensus rule. The term EB backs that up, because this is the BU-invented abbreviation for "Excessive Block": the user-configurable size above which blocks are deemed too big (excessive).

Notice the same "EB" being listed in the 'bitcoin cash node versions' table on Coin Dance | Bitcoin Cash Nodes Summary, where according to a later upgrade that 8MB was changed to a 32MB default setting.


People sometimes still talk about us needing a protocol upgrade for block-size-limits in one form or another. Please realize that any rule we add to manage the block-size-limit makes Bitcoin Cash less free. It makes the block size limit again open to consensus discussions and such proposals can lead to chain forks based on such discussions.

Bitcoin Cash has a block-size limit that is completely open to the free market with size limits being imposed by technical capabilities and actual usage only.

I vastly prefer the open market deciding the limits, which is what we have today. I hope this is enough for others too.


Do we want small businesses running their own nodes to maintain their payment medium? Are they going to accept the overhead of having a node technician on site adjusting block size at the software level, or consultant fees on that?

I reckon most will want to simply -press a button and go-. For decentralization barrier to entry the issue is the same.

Those running a node should have the option to set their limits. The chain itself should also ensure its own integrity. If you leave something up to human failure, it’s bound to fail proportionally to how much it hasn’t been tested. BCH can’t test market based responsibility and preparation by nodes encountering larger scales in a live environment. BCH can test the ability for the network to mitigate itself against human error.

What’s the downside? You burn out hardware that can’t handle the load of the increased block size?

I agree, and maybe this proposal could be adjusted to establish a sort of slow-moving bottom, so once miners “win” a certain limit by demonstrating they can handle it, the network keeps that limit forever, until the next “proof of capacity” is produced by miners.

The problem with "basing it on what software can handle" is that it's impossible to get an automatically adjusted metric out of that. We'll be in the "do a CHIP and debate extensively every time we want to lift it" regime forever - whether that is desirable is subjective.

Many in BCH are deathly afraid of this regime given history, and rightly so; this proposal attempts to address such skepticism.

This proposal is basically BCH’s “we shalt never let blocks get consistently full” mission engraved into the most conservative possible automation regime, and assumes “if we move slow enough, we can make software catch up”. It further makes the assumption that “if we cannot do that, BCH has failed, so assuming software will catch up to a slow enough expansion is a reasonable assumption”. Whether that is actually reasonable is also subjective.


My suggestion is to use the median as proof of "what software can handle". The whole proposal would be the same, but with one more candidate for establishing the maximum (a sketch follows the item below):

add: 4. Highest 365 day median seen so far, times 2 (or some other multiplier)
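A minimal sketch of that extra candidate (not part of the proposal above; the high-water-mark variable and HIGH_WATER_MULTIPLIER are hypothetical names), layered on top of the allowed_blocksize pseudocode:

HIGH_WATER_MULTIPLIER = 2        # the "times 2 (or some other multiplier)" above

highest_365d_median_seen = 0     # persisted and updated as each block is connected

def allowed_blocksize_with_high_water(past_realblocksizes):
    global highest_365d_median_seen
    past_blocksizes = raise_floor(past_realblocksizes)
    median_365d = median(past_blocksizes[-LONG_MEDIAN_WINDOW:])
    highest_365d_median_seen = max(highest_365d_median_seen, median_365d)
    return max(allowed_blocksize(past_realblocksizes),
               highest_365d_median_seen * HIGH_WATER_MULTIPLIER)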

Do we all agree with @tom 's statement?
Is the maximum block size currently a consensus rule, or is it not?

@im_uname @freetrader @Jonathan_Silverblood @emergent_reasons


It’s absolutely a consensus rule, which for all practical purposes is “what the network accepts and what miners will build on right now”. People who don’t think it’s consensus should make a 33MB block and find out.


What does “highest 365 day median” mean - take 365 medians from each day, and take the max of those?

Yup, exactly that. Misread at first - I meant take a median of the last 365 days every day, and update the max accordingly, so that any past success sticks around and there's no risk that some period of lower activity, such as a bear market or an economic depression or whatever, would reduce the median and later require a whole year to adjust back to where we already know it could be.

The problem with taking the median from a period as short as one day is that it becomes easier to manipulate, while the upside is that it will respond faster (at the cost of software not being able to catch up to a rapid ramp). To demonstrate, consistently fill blocks to the max and see how fast the limit ramps with or without the new rule.

For the limit to have a sticky bottom, note that rule 2 is already intended to address that - except instead of the max of 365 daily medians (which in practice will likely push ceilings instead of providing floors), rule two is a simple median over the entire past year. That should ensure the "floor" falls really slowly; we can make it even slower by prolonging that duration.