Asymmetric Moving Maxblocksize Based On Median

I support a proposal such as this. I don’t accept the “what the software can handle” argument. Which software, on what machines? Who decides which software to use as a benchmark? It makes more sense to just raise the blocksize as demand increases and make sure there is room for spikes.

3 Likes

Have you thought of a case where an adjustable or increased minimum-max-blocksize would be meaningful?

I don’t have a particular reason to think it is important, but just to ask - did recent DAA discussion and issues around boundary conditions of a rolling window have any impact on your thoughts about the design here?

This is your friendly regular reminder that the blocksize limit was removed from the consensus rules in August 2017 with the following statement in the original BCH spec:

the client shall enforce that the “fork EB” is configured to at least 8,000,000 (bytes)

Notice the word configured here: this is a user-configured size, very different from a consensus rule. The term EB backs that up, since it is the BU-invented abbreviation for “Excessive Block”. This is the user-configurable size above which blocks are deemed too big (excessive).

Notice the same “EB” listed in the ‘bitcoin cash node versions’ table on Coin Dance | Bitcoin Cash Nodes Summary, where, following a later upgrade, that 8MB default was changed to a 32MB default setting.


People sometimes still talk about us needing a protocol upgrade for block-size limits in one form or another. Please realize that any rule we add to manage the block-size limit makes Bitcoin Cash less free. It makes the block-size limit again subject to consensus discussions, and such proposals can lead to chain forks based on those discussions.

Bitcoin Cash has a block-size limit that is completely open to the free market, with size limits imposed only by technical capabilities and actual usage.

I vastly prefer the open market deciding the limits, which is what we have today. I hope this is enough for others too.

4 Likes

Do we want small businesses running their own nodes to maintain their payment medium? Are they going to accept the overhead of having a node technician on site adjusting the block size at the software level, or paying consultant fees for that?

I reckon most will want to simply “press a button and go”. For decentralization, the barrier-to-entry issue is the same.

Those running a node should have the option to set their limits. The chain itself should also ensure its own integrity. If you leave something up to humans, it’s bound to fail in proportion to how little it has been tested. BCH can’t test market-based responsibility and nodes’ preparation for larger scales in a live environment. BCH can test the network’s ability to protect itself against human error.

What’s the downside? You burn out hardware that can’t handle the load of the increased block size?

I agree, and maybe this proposal could be adjusted to establish a sort of slow-moving bottom, so once miners “win” a certain limit by demonstrating they can handle it, the network keeps that limit forever, until the next “proof of capacity” is produced by miners.

The problem with “basing it on what software can handle” is that it’s impossible to get an automatically adjusted metric out of that. We’ll be in the “do a CHIP and debate extensively every time we want to lift it” regime forever - whether that is desirable is subjective.

Many in BCH are deathly afraid of this regime given history, and rightly so; this proposal attempts to address such skepticism.

This proposal is basically BCH’s “we shalt never let blocks get consistently full” mission engraved into the most conservative possible automation regime, and assumes “if we move slow enough, we can make software catch up”. It further makes the assumption that “if we cannot do that, BCH has failed, so assuming software will catch up to a slow enough expansion is a reasonable assumption”. Whether that is actually reasonable is also subjective.

5 Likes

My suggestion is to use the median as proof of “what software can handle”. The whole proposal would be the same, but with one more candidate for establishing the max:

add: 4. Highest 365 day median seen so far, times 2 (or some other multiplier)

Do we all agree with @tom’s statement?
Is the maximum block size currently a consensus rule, or is it not?

@im_uname @freetrader @Jonathan_Silverblood @emergent_reasons

1 Like

It’s absolutely a consensus rule, which for all practical purposes is “what the network accepts and what miners will build on right now”. People who don’t think it’s consensus should make a 33MB block and find out.

2 Likes

What does “highest 365 day median” mean - take 365 medians from each day, and take the max of those?

Yup, exactly that - or rather, I misread at first; what I meant was: take a median of the last 365 days, every day, and update the max accordingly. That way any past success sticks around, and there’s no risk that some period of lower activity, such as a bear market or an economic depression, would reduce the median and later require a whole year to adjust back to where we already know it could be.

The problem with taking the median from a period as short as one day is that it becomes easier to manipulate, while the upside is that it will respond faster (at the cost of software not being able to catch up to a rapid ramp). To demonstrate, consistently fill blocks to max and see how fast the limit ramps with or without the new rule.

For the limit to have a sticky bottom, note that rule 2 is already intended to address that - except that instead of a max of 365 medians (which in practice would likely push ceilings rather than provide floors), rule 2 is a simple median over the entire past year. That should ensure the “floor” falls really slowly; we can make it even slower by prolonging that duration.

To clarify - this is not what I meant (edited above). I wanted to say: a moving median with a 365-day window, where every day you update the floor with something like if (today_365median > floor_cap) floor_cap = today_365median;
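
For concreteness, here is a minimal runnable sketch of that ratcheting floor, assuming one block-size observation per day. The names follow the pseudocode above and are illustrative only - they are not taken from any node implementation or CHIP text, and whether the median should be over per-block sizes or daily figures is not specified in this thread.

```python
from collections import deque
from statistics import median

WINDOW_DAYS = 365                    # moving-median window from the post
window = deque(maxlen=WINDOW_DAYS)   # trailing daily block-size observations
floor_cap = 0                        # ratcheting floor: only ever moves up

def update_floor(daily_block_size):
    """Feed one day's block-size observation; return the current floor.

    Sketch of: if (today_365median > floor_cap) floor_cap = today_365median;
    """
    global floor_cap
    window.append(daily_block_size)
    today_365median = median(window)
    if today_365median > floor_cap:
        floor_cap = today_365median
    return floor_cap
```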

1 Like

I see, won’t that make it effectively the same as rule 2?

Yes, except that it can’t adjust back down. Once it moves the bar up, the bar stays there. We could have a whole year of low activity where the median would drop but the bar would stay in place until activity ramps up again enough to move it up more. Rationale being: if miners could have set the bar to X at some past period of 365 days, then that is proof that X can be sustained, so why would we drop the cap below X?

1 Like

I agree with this as long as the multiplier used is low enough. With a high enough multiplier you can end up with a floor based on a false claim of sustainability.

1 Like

hmm, if “the limit should basically never go down” is a desired trait, we can simply remove rule 2 and hard-specify “the limit never goes down”, or otherwise modify rule 2 to be a higher percentile (say, 90th percentile instead of median), no? Both of these have much less complexity than your proposed rule, and should have similar effects.
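
To make the second option concrete, here is a minimal sketch of rule 2 computed as a nearest-rank 90th percentile over a trailing 365-day window instead of a median. It assumes one block-size observation per day; the names and window handling are illustrative, not taken from the actual proposal text.

```python
import math
from collections import deque

WINDOW_DAYS = 365
PERCENTILE = 90            # suggested alternative to the median (50th)

window = deque(maxlen=WINDOW_DAYS)

def rule2_higher_percentile(daily_block_size):
    """Rule-2 variant: 90th percentile of the trailing window, so the
    value tracks the bigger recent days and falls much more slowly
    than a median when activity drops."""
    window.append(daily_block_size)
    ordered = sorted(window)
    # nearest-rank percentile (1-indexed rank)
    rank = math.ceil(PERCENTILE / 100 * len(ordered))
    return ordered[rank - 1]
```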

That is a very interesting take, one which results in taking power back from the miners and into the hands of developers. It is also a false comparison. A slightly selfish one, I might add.

The proper way of looking at this is to ask which group of people can change the blocksize. Because the idea of anyone being able to do this alone is, like Imaginary wrote, well, a silly one. Not being able to move the blocksize alone is not an indication of anything.

The proper question is: which part of our network can change the blocksize today?

  1. Is it the miners - can enough hashpower change the blocksize without needing to ask permission?
  2. Is it the people who mostly use this forum to decide things for the network, who would need to design some new rules before the blocksize can be raised?

If the answer is 1 - the miners can decide this alone - then activating the “Asymmetric Moving Maxblocksize etc” is literally taking power away from the market and from the miners.

Taking the power to set the blocksize away from the miners may be the way to go, but be bloody honest about your intention and explain why this is a net gain.

With a 90th percentile, if we really hit some low-activity period spanning a year, then by day 328 miners would be forced to start gaming it to preserve the cap, padding blocks with dummy data for the remaining 37 days. Allowing it to drop would instead require them to build it up again from a potentially much lower base, working their way up to prove again what has already been proven.
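
The 328-day and 37-day figures follow from simple window arithmetic: in a 365-day window, the 90th percentile is held up by roughly the top 10% of days. A quick back-of-the-envelope check, under the same one-observation-per-day assumption as the sketches above:

```python
import math

WINDOW_DAYS = 365
PERCENTILE = 90

# The 90th percentile of a 365-day window is held up by roughly the
# top 10% of days, so about that many days must keep containing big blocks:
days_to_pad = math.ceil(WINDOW_DAYS * (100 - PERCENTILE) / 100)  # 37
# Once a low-activity stretch grows past the other ~90% of the window,
# the old big days start falling out and the percentile begins to drop:
day_it_starts_dropping = WINDOW_DAYS - days_to_pad               # 328
print(days_to_pad, day_it_starts_dropping)  # 37 328
```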

“The limit never goes down” could be dangerous if applied to both 1. and 2., because then it would take only 45 days of consistently bigger blocks to lift it forever.
If applied only to 2., then it translates into floor = max(previously observed 365-day medians), and it would require 183 days of bigger blocks to lift it forever.
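
On the arithmetic behind those day counts: a trailing median only rises permanently once bigger blocks occupy more than half of its window, so for a 365-day window that is 183 days (the 45-day figure presumably corresponds to whatever shorter window rule 1 uses - an assumption on my part). A small sketch verifying the 365-day case:

```python
from statistics import median

WINDOW_DAYS = 365
old_size, big_size = 1, 2   # arbitrary units; only the ordering matters

# Fill the window with "old" days, then replace days with "big" ones
# until the window median moves above the old level.
window = [old_size] * WINDOW_DAYS
for days_of_big_blocks in range(1, WINDOW_DAYS + 1):
    window[days_of_big_blocks - 1] = big_size
    if median(window) > old_size:
        print(days_of_big_blocks)   # 183 -- just over half of 365
        break
```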