Asymmetric Moving Maxblocksize Based On Median

@im_uname One more thing to ponder. Medians are good when you want to filter out extremes that would break averages. But here the extremes would be bounded by 10x the median, so maybe a simple moving average would be better: it would be smoother, while the “pull” of extremes would have less impact. Medians can still produce a shock… imagine mining 1,1,1,1,10,10,10,10,10: the max stays at 10 until it suddenly jumps to 100 once the 10s outnumber the 1s, and then you start mining 100s to make another 10x jump. An average would’ve lifted it to 90, and it would’ve arrived there smoothly.
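The shock can be seen numerically. A toy sketch (the 9-block window and the 10x-of-median cap here are illustrative assumptions, not part of the proposal):

```python
from statistics import mean, median

# Toy 9-block window at the moment the 10s start to outnumber the 1s
window = [1, 1, 1, 1, 10, 10, 10, 10, 10]

median_cap = median(window) * 10  # median flips from 1 to 10, so the cap jumps to 100
mean_cap = mean(window) * 10      # a mean-based cap sits at 60 and got there gradually
```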

Another point is that averages can be accumulated while medians require a bigger state to track and more operations to calculate. To update the average you can do something like new_average = (old_average * (CONSTANT - 1) + new_value) / CONSTANT, and it can be recorded in coinbase script so you don’t need old blocks to calculate the next average, and so on… so really I think we should look into past work on DAA to find the ideal function, and here we can start talking about what we want from it, like:

  • It should rate-limit the max. increase of the cap over the course of a year, 10x / year? This would allow node operators to see increases from a mile away and prepare for it (buy more storage, CPUs, etc…)
  • It should remember old maximums, so we don’t have a situation where we fall down from 10 to 1, and then have to work for a year just to bring it back up to 10 when we already know that the network gave the signal that it can handle 10s.
  • ??
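The accumulator-style average mentioned above can be sketched like this (a minimal illustration; `CONSTANT` is a hypothetical smoothing window, not a proposed value):

```python
CONSTANT = 144  # hypothetical smoothing window, e.g. one day of blocks

def update_average(old_average, new_value):
    # Exponential moving average: only the previous average needs to be
    # carried forward (e.g. recorded in the coinbase), not the full history.
    return (old_average * (CONSTANT - 1) + new_value) / CONSTANT

avg = 1_000_000.0        # start from a 1 MB average
for _ in range(144):     # a day of 8 MB blocks pulls the average up smoothly
    avg = update_average(avg, 8_000_000)
```

Note how the state is a single number, versus a median which needs the full window of past sizes plus a sort.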

Here’s a shot at finding the function

Algorithm Description

Every mined block will record the blocksize_limit state, which will limit the maximum block size of the next block.
The first post-activation block’s blocksize_limit will be initialized to the pre-activation limit of 32MB.
Every subsequent block must update the blocksize_limit to either:

  1. previous_blocksize_limit or
  2. previous_blocksize_limit * GROWTH_FACTOR,

where the choice is decided by the actual size of the mined block.
If the blocksize is above some threshold, then the limit MUST be increased.
The threshold is defined as threshold_blocksize = previous_blocksize_limit / HEADROOM_FACTOR.

Proposed Constants

HEADROOM_FACTOR = 4
GROWTH_FACTOR = 1.00001317885378

The growth factor is chosen to rate-limit maximum growth to 2x/year and, to avoid floating-point math, it is rounded to a maximum-precision ratio of two 32-bit integers (4294967295 / 4294910693). The proposed GROWTH_FACTOR to the power of (365.25 * 144) is 2.00000649029968.
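These constants can be checked directly (a quick sanity check, not consensus code):

```python
GROWTH_FACTOR = 1.00001317885378
BLOCKS_PER_YEAR = 365.25 * 144   # 52596 blocks at a 10-minute target

# Yearly multiplier if every block triggers an increase: ~2.0000065, i.e. 2x/year
yearly = GROWTH_FACTOR ** BLOCKS_PER_YEAR

# The 32-bit integer ratio is an extremely close rational approximation
fraction = 4294967295 / 4294910693
```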

Pseudo-code

if (this_block.height <= ACTIVATION_HEIGHT) {
    if (this_block.size > 32MB)
        fail();
    if (this_block.height == ACTIVATION_HEIGHT)
        if (this_block.blocksize_limit != 32MB) // verifies initialization of the new coinbase field
            fail();
}
else {
    if (this_block.size > previous_block.blocksize_limit)
        fail();
    threshold_block_size = previous_block.blocksize_limit / HEADROOM_FACTOR;
    if (this_block.size > threshold_block_size)
        this_block.blocksize_limit = previous_block.blocksize_limit * GROWTH_FACTOR;
    else
        this_block.blocksize_limit = previous_block.blocksize_limit;
}

Effect

The above means that mining a block of 8.0001MB is enough to permanently increase the limit from 32 to 32.0004; next, mining a block of 8.0002MB is enough to permanently increase it to 32.0008, and so on. If a block of 7.9999MB or 0MB is mined, the limit stays the same, so miners can soft-cap block sizes below the threshold to prevent the limit from rising. On the other hand, if every block is mined at some size above the threshold and below the limit, then the limit increases at the maximum rate of 2x/year.
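A runnable sketch of the rule above (function and variable names are illustrative, not from any implementation):

```python
HEADROOM_FACTOR = 4
GROWTH_FACTOR = 1.00001317885378
MB = 1_000_000

def next_limit(prev_limit, block_size):
    """blocksize_limit recorded by a block of block_size, given the
    previous block's recorded limit."""
    if block_size > prev_limit:
        raise ValueError("block exceeds the current limit")
    threshold = prev_limit / HEADROOM_FACTOR
    # The limit grows permanently only when the block exceeds the threshold
    return prev_limit * GROWTH_FACTOR if block_size > threshold else prev_limit

limit = next_limit(32 * MB, 8.0001 * MB)  # just above the 8 MB threshold
# limit is now ~32.0004 MB; a 7.9999 MB block would have left it at 32 MB
```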

The blocksize limit will then be a function of how many blocks were mined above threshold since activation:

32MB * power(GROWTH_FACTOR, number_above_threshold).

YOY increase is then given by:

power(GROWTH_FACTOR, proportion_above_threshold * (365.25 * 144)).
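This formula reproduces the example table below; for instance, 10% of blocks above threshold compounds to roughly a 7.18% yearly increase:

```python
GROWTH_FACTOR = 1.00001317885378
BLOCKS_PER_YEAR = 365.25 * 144

def yoy(proportion_above_threshold):
    # Year-over-year multiplier on the blocksize limit
    return GROWTH_FACTOR ** (proportion_above_threshold * BLOCKS_PER_YEAR)
```

Since the factor is tuned to 2x/year at 100%, yoy(p) is approximately 2 to the power p.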

Example scenario

| Year | Open Threshold (MB) | Open Limit (MB) | % Blocks Above Threshold | Close Threshold (MB) | Close Limit (MB) | YOY Increase |
|------|---------------------|-----------------|--------------------------|----------------------|------------------|--------------|
| 2024 | 8.00 | 32.00 | 0.00% | 8.00 | 32.00 | 0.00% |
| 2025 | 8.00 | 32.00 | 10.00% | 8.57 | 34.30 | 7.18% |
| 2026 | 8.57 | 34.30 | 20.00% | 9.85 | 39.40 | 14.87% |
| 2027 | 9.85 | 39.40 | 30.00% | 12.13 | 48.50 | 23.11% |
| 2028 | 12.13 | 48.50 | 40.00% | 16.00 | 64.00 | 31.95% |
| 2029 | 16.00 | 64.00 | 50.00% | 22.63 | 90.51 | 41.42% |
| 2030 | 22.63 | 90.51 | 60.00% | 34.30 | 137.19 | 51.57% |
| 2031 | 34.30 | 137.19 | 70.00% | 55.72 | 222.86 | 62.45% |
| 2032 | 55.72 | 222.86 | 80.00% | 97.01 | 388.03 | 74.11% |
| 2033 | 97.01 | 388.03 | 90.00% | 181.02 | 724.09 | 86.61% |
| 2034 | 181.02 | 724.09 | 100.00% | 362.05 | 1,448.18 | 100.00% |
| 2035 | 362.05 | 1,448.18 | 100.00% | 724.09 | 2,896.37 | 100.00% |
| 2036 | 724.09 | 2,896.37 | 0.00% | 724.09 | 2,896.37 | 0.00% |
| 2037 | 724.09 | 2,896.37 | 0.00% | 724.09 | 2,896.37 | 0.00% |
| 2038 | 724.09 | 2,896.37 | 100.00% | 1,448.19 | 5,792.76 | 100.00% |
| 2039 | 1,448.19 | 5,792.76 | 0.00% | 1,448.19 | 5,792.76 | 0.00% |
| 2040 | 1,448.19 | 5,792.76 | 0.00% | 1,448.19 | 5,792.76 | 0.00% |
| 2041 | 1,448.19 | 5,792.76 | 25.00% | 1,722.20 | 6,888.80 | 18.92% |
| 2042 | 1,722.20 | 6,888.80 | 0.00% | 1,722.20 | 6,888.80 | 0.00% |

“This proposal is basically BCH’s “we shalt never let blocks get consistently full” mission engraved into the most conservative possible automation regime, and assumes “if we move slow enough, we can make software catch up”. It further makes the assumption that “if we cannot do that, BCH has failed”, so assuming software will catch up to a slow enough expansion is a reasonable assumption. Whether that is actually reasonable is also subjective.”

I think it’s perfectly reasonable and would create a minimal impact to the network, building confidence in Bitcoin Cash as a serious currency for the world stage.

This is your friendly regular reminder that the blocksize limit was removed from the consensus rules in August 2017, with the following statement in the original BCH spec:

the client shall enforce that the “fork EB” is configured to at least 8,000,000 (bytes)

Notice the word configured here: this is a user-configured size, very different from a consensus rule. The term EB backs that up, because this is the BU-invented abbreviation for “Excessive Block”: the user-configurable size above which blocks are deemed too big (excessive).

(Notice the same “EB” you can find in any network crawler; in a later upgrade that 8MB default was changed to a 32MB default setting.)


People sometimes still talk about us needing a protocol upgrade for block-size limits in one form or another. Please realize that any rule we add to manage the block-size limit makes Bitcoin Cash less free. It makes the block size limit open to consensus discussions again, and such proposals can lead to chain forks based on those discussions.

Bitcoin Cash has a block-size limit that is completely open to the free market with size limits being imposed by technical capabilities and actual usage only.

I vastly prefer the open market deciding the limits, which is what we have today. I hope this is enough for others too.

I’m going to paraphrase what you said to clarify my understanding of what you wrote. Especially, since you took the time to write a detailed opinion and I’ve always valued your input.

Are you saying that block producers (of any kind) should coordinate with one another on configurations, and/or make uncoordinated adjustments to their blocksize configuration?

It’s very common knowledge that I’m a miner and this influences a lot of my opinions. One of which is that I do not want some “miner council” or some summit or whatever to come up with configurations. Generally, it is good to get mining decentralized and not have any cartels or whatever. This is sort of an external factor outside the protocol that guarantees security.

I’m not totally against what you are saying (as I understand it, feel free to correct me), but I’m not exactly convinced. However, I do share the sentiment to not touch the protocol. It does seem like there is always a change to the protocol; this is one of those changes I could actually go for as of now.

Best Regards

There are many ways to ensure a steady block size increase. We should distinguish between two ways of coordinating that, though. One is any method that requires asking the software developers in order to make changes. The other is any method that does not require consensus from the software developers on such decisions at all.

The initial 1-MB rule was a rule that the software developers needed consensus on and a great demonstration of lock-in. I think we were lucky that this was a simple hard rule and we either hit the boundary, or we don’t. We were lucky because it showed the special interest and leverage created for developers instantly and clearly.
Rules that give the miners mostly what they want (but at a cost) are making it harder to see the special interests created for developers, but in such cases it still is there and it makes the coin less free.

I think there is a challenge available: get a good blocksize limit set by the market while keeping the cost low. The original idea from BU was to just bluntly create blocks (multiple in a row) and hope they would not get orphaned, and the cost of this was seen as too high by the market. So while this was a method that did not require software developers to create consensus, it was not acceptable either.

The problem should not be too hard to solve, but it is unlikely that miners would be putting a lot of time and effort into it while their hardware lifetime is probably longer than the lifetime of the current block-size-limit.

Here are some quick ideas that avoid the lock-in problem by not requiring developers to agree;

  • The idea of this topic (Asymmetric moving maxblocksize based on median) can be implemented in an advisory manner only, where the miners adjust their max-block-size (EB) on such advice, and a simple check of the block headers can show that it is safe to increase the created blocksize.

  • Miners can use the block headers to ‘vote’ for wanting a bigger size, or for limiting the blocksize with the ‘EB’ parameter, giving a relatively pure communication channel.

  • Software devs could increase the default EB in their software as it gets better and miners could just ignore this setting and realize at one point that they can safely increase the blocksize.

  • Miners could coordinate and schedule a block-size-increase by picking up the phone and announcing the changes on major channels. Much like the software devs plan the yearly upgrade.

I’m sure some way can be found that allows the blocksize to be safely increased on a regular basis, possibly via ideas I have not thought of.

The important part is that we have 2 variables, accepted blocksize and created blocksize. All that needs to happen is for the accepted size to be increased by all miners well ahead of a single miner increasing its created size in order for the cost of upgrade to be basically zero.

Okay, Tom, let me get back to you.

Yeah, about that.

Miners have repeatedly shown, over and over and over and over again, that they will run just whatever software the leading implementation produces: meaning Bitcoin Cash Node at this moment.

Miners will not “vote” for anything. Miners voting for changes in the protocol is a pipe dream we have hoped would become reality since 2015, but it never did.

Miners are people. People follow the herd (community consensus) or the alpha (BCHN), just like other animals in nature [for more details see my human herd theory, also shorter version]. Believing and thinking anything else is just highly unrealistic. And miners have indeed shown that they follow and don’t care very strongly.

So getting an automated default blocksize increase into the leading implementation(s) is a great idea. It will basically cement the consensus to steadily increase the blocksize with demand for decades, because miners will just run it and not care about details, like they always did (and probably always will, from the looks of it).

Fun fact: the BIP9 setup of miners voting for upgrades was never marketed as “voting”; it was nothing but an indication that miners had upgraded to compatible software.
So, yeah, the only real vote we ever saw was the Segwit2X one, and I think we know that that was also really not a vote.

But you realize nobody was making the argument, right?
Maybe that wasn’t clear, but the list which you quoted from was a list of options for the mining community to get inspiration from. If you think they won’t pick that, then sure. Maybe you want to help miners with a better solution.

As long as such a solution doesn’t involve a consensus rule. Because that just sets us up for a new lock-in where developer consensus is needed and the only way out is a hard fork. Never again.

What miners have done quite well over the last almost 15 years is set the block size. We are quite happy with this, the mining community has been very responsible in this regard.

I don’t think anyone is proposing a hard coded block size algorithm, only a soft coded algorithm where it defaults to it, but can be selected off and adjusted.

Nah. Everybody was making all kinds of arguments.

In the end, the outcome was not dictated by arguments, but by psychomanipulation, propaganda, and, ultimately: following.

People (especially miners) follow the herd or the alpha, rational arguments have nothing to do with anything. Or at least it happened in the BTC/BCH split.

Luckily in the next 2 splits (BSV/BCH, ABC/BCH) the enemy did not manage to corrupt/take over enough alphas (which is partially thanks to me, because I was very vocal about the manipulation and then became a moderator).

Dude, you do realize that Bitcoin Cash is “hard-forking” every year, right?

The proper term is “split”.

The network splits only when there is a major disagreement in the community, miners have nothing to do with it, basically.

The split never actually happens because of the miners. The split happens because of

  • Propaganda
  • Politics
  • Brainwashing of Community
  • Exchange policies

I have lived through this and I took part in it 3 times so believe me, I know.

Huh? Are we living on the same planet?

The miners have never set the blocksize. Ever. This never happened.

In any of the splits that happened, the blocksize was ultimately decided by developers and community leaders (meaning very often Twitter trolls) who rallied against any increase or they rallied to increase blocksize in a dumb way like they did in BSV.

Dude, BTC/BCH miners do not follow reason, logic, arguments or anything like that.

What they follow in their sheer mass is the “leading implementation”, the twitter herd and the alphas who dictate the price.

Whatever code the leading development project produces, miners will run. That is all.

I mean it’s not like I do not have a lot of proof. I have fuckton of proof. Me being right is absolute and undeniable at this point.

Miners will not vote to decide the parameters of the network, ever (with very high probability). You gotta deal with it.

PS.

And because miners will never vote to choose a leading implementation, the leading implementation has to decide for them.

And we - developers, politicians, influencers, moderators and all other people who have any kind of power - it is our role to make sure that the leading implementation(s) runs good, working code. Everybody else will just follow.

This is unfortunately how humanity works right now.

So the decision to make an automated algorithm that decides blocksize in the leading implementation(s) is a good decision.

There are places where you can post a bunch of nonsense and expect to be corrected ad nauseam. Called the Bitcoin Up, Gold Down thread. Check it out. You’d love it.


Offtopic ramblings, ignored as usual.

Try to stay on topic, this is not reddit. We actually solve problems here.

It helps if you learn your history. The current miners are not mining 32MB blocks, right? That is because they set a max blocksize. The same has happened all the way back to the 250KB blocksize in the early years.

Miners have always been the ones to set the blocksize. Until they couldn’t, due to the maxblocksize limit (1MB).

PS. the initial 8MB limit that BCH split with was also at the request of miners.

I just noticed I am talking to Tom Zander.

Yeah, Tom - we both know the history. You were there too.

Do you remember a large miner ever saying

  • “We will not run Bitcoin Core because it has 1MB blocksize only, instead we will run Bitcoin Unlimited/Classic/XT” or
  • “We will not run BSV because having unlimited blocksize without performance improvements is dumb” or
  • “We will not run Bitcoin ABC, because giving out 8% of block reward to random person seems like tax” ?

No, you don’t because it never happened. Miners never made the decision.

The miners never set that. They were running on default settings most or all of the time.

What miners have done is run the reference implementation on default settings. Always. They never run anything else, actually.

You have never heard a large miner say that they will not run something because of blocksize issues and you (most probably) never will.

I don’t remember this request.

What miners requested this and when? Can you link me the source?


This is the second time, in a different thread, that you have just declared yourself correct. This is why I stopped engaging with you before and can’t take you seriously here. It looks like a complete waste of time to correct you or talk to you in any manner. I imagine you have held a lot of incorrect beliefs for a long time (making them even more difficult to unlearn) because you don’t accept feedback, which is crucial to learning.
“Me being right is absolute and undeniable at this point”.
You solve problems in your own fantasy land.

Yet another irrelevant offtopic ad-personam attack.

Ignored.

Thank you for your participation, have a good day.

The idea of this topic (Asymmetric moving maxblocksize based on median) can be implemented in an advisory manner only, where the miners adjust their max-block-size (EB) on such advice, and a simple check of the block headers can show that it is safe to increase the created blocksize.

Having a blocksize cap formula doesn’t preclude miners from hard-forking an additional increase (or soft-capping it lower), it just makes the cap adjust slowly upwards in the meantime, potentially making blocksize hard-forks unnecessary (but still possible).

The idea I outlined records the state of the blocksize cap in every block and uses the state of the last block to determine the allowed size of the next one, and so on. This would make it possible for miners to get together and record an arbitrary number into the blocksize cap state, and then the formula would resume its work from there (take whatever value was recorded in the last block, compare it with the threshold, and either keep it the same or adjust it slightly upwards). The difference is in the effort required to increase: a hard-fork override (which is what we’re currently doing anyway, except the value remains fixed until the next one) requires a consensus upgrade, while the formula lets the limit automatically creep upwards according to user demand (miners could still soft-cap it lower).
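For illustration, a coordinated override would simply replace the recorded state, and the same update rule would continue from the new value (a sketch with hypothetical names, reusing the proposal's constants):

```python
HEADROOM_FACTOR = 4
GROWTH_FACTOR = 1.00001317885378
MB = 1_000_000

def update(prev_limit, block_size):
    # The proposal's rule: grow only when the block is above the threshold
    threshold = prev_limit / HEADROOM_FACTOR
    return prev_limit * GROWTH_FACTOR if block_size > threshold else prev_limit

limit = 32 * MB
limit = 100 * MB                 # hypothetical hard-fork override of the recorded cap state
limit = update(limit, 30 * MB)   # the formula just resumes from the overridden value
```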

Having a formula in place reduces the risk of lock-in from being unable to gather consensus for a hard-fork upgrade, while still allowing for the possibility of such an upgrade.


The current situation is that miners can change the blocksize to anything. No software developers need to have any consensus on that. This is the extreme opposite of lock-in. From today’s baseline of zero, the suggestion that we can reduce it further (needing less than zero software-developer consensus) is not possible.