Asymmetric Moving Maxblocksize Based On Median

Note that in this proposal, miners can (and likely will) affect the ongoing cap by softcapping the blocks they mine themselves. Simply putting a softcap of 1/10th the current size in mining software by default, for example, will result in the max cap never moving unless/until 51% of miners manually raise that softcap, and even then only after a while.

So with a proper implementation, this is basically a “miner decide” proposal, except they decide with the actual sizes of blocks they mine instead of voting up or down. If they really want to they can even stuff their blocks with garbage to “vote” without external transactions, albeit at a cost to their orphan rate over time.
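To make that concrete, here is a toy sketch of the softcap dynamic. The trigger condition and multiplier are placeholders of mine, not the proposal’s actual rules; the point is only that a default softcap of 1/10th the limit keeps the rolling median far below any plausible raise trigger:

```python
# Toy model (assumed rules, not the CHIP's): the cap only rises when the rolling
# median of mined block sizes exceeds TRIGGER * cap, and then jumps to GROWTH * median.
from statistics import median
from collections import deque

WINDOW = 365            # assumed: one representative block-size sample per day
TRIGGER = 0.5           # assumed: cap only rises when median > TRIGGER * cap
GROWTH = 2.0            # assumed: when it rises, new cap = GROWTH * median

def next_cap(cap, recent_sizes):
    """Return the next cap given the current cap and a window of block sizes."""
    m = median(recent_sizes)
    return max(cap, GROWTH * m) if m > TRIGGER * cap else cap

cap = 32_000_000        # bytes; the current default EB on BCH
sizes = deque(maxlen=WINDOW)

for day in range(10 * 365):
    softcap = cap / 10                   # miners' default self-imposed limit
    demand = 50_000_000                  # plenty of transaction demand, but...
    sizes.append(min(demand, softcap))   # ...no mined block exceeds the softcap
    if len(sizes) == WINDOW:
        cap = next_cap(cap, sizes)

print(cap)  # still 32,000,000: the median sits at cap/10, well below the trigger
```

Under these made-up numbers the cap never moves; only once a majority of miners raise their softcap above the trigger does the median start pulling the limit up.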

3 Likes

Absolutely, I agree that they will cap themselves. And this is why I said that the outcome will not hurt (will be correct): you can see this as miners working around this consensus-level limit.

While it’s great that a change doesn’t hurt miners, I’m not convinced this is a change that anyone actually needs.

Well, there currently is no voting.

Yeah, sounds like you expect miners to work around this.

If you expect miners to game the system, and we currently have no system, what, then, is the reason for introducing one?

I support a proposal such as this. I don’t accept the “what the software can handle” argument. Which software, on what machines? Who decides what software to use as a benchmark? It makes more sense to just raise the blocksize as demand increases and make sure there is room for spikes.

3 Likes

Have you thought of a case where an adjustable or increased minimum-max-blocksize would be meaningful?

I don’t have a particular reason to think it is important, but just to ask - did recent DAA discussion and issues around boundary conditions of a rolling window have any impact on your thoughts about the design here?

This is your friendly regular reminder that the blocksize limit was removed from the consensus rules in August 2017, with the following statement in the original BCH spec:

the client shall enforce that the “fork EB” is configured to at least 8,000,000 (bytes)

Notice the word configured here: this is a user-configured size, very different from a consensus rule. The term EB backs that up, as it is the BU-invented abbreviation for “Excessive Block”: the user-configurable size above which blocks are deemed too big (excessive).

Notice the same “EB” listed in the ‘bitcoin cash node versions’ table on Coin Dance | Bitcoin Cash Nodes Summary, where, following a later upgrade, that 8 MB was changed to a 32 MB default setting.


People sometimes still talk about us needing a protocol upgrade for block-size limits in one form or another. Please realize that any rule we add to manage the block-size limit makes Bitcoin Cash less free. It makes the block-size limit again open to consensus discussions, and such proposals can lead to chain forks based on those discussions.

Bitcoin Cash has a block-size limit that is completely open to the free market, with limits imposed only by technical capabilities and actual usage.

I vastly prefer the open market deciding the limits, which is what we have today. I hope this is enough for others too.

4 Likes

Do we want small businesses running their own nodes to maintain their payment medium? Are they going to accept the overhead of having a node technician on site adjusting block size at the software level, or consultant fees for that?

I reckon most will want to simply “press a button and go”. For decentralization, the barrier-to-entry issue is the same.

Those running a node should have the option to set their limits. The chain itself should also ensure its own integrity. If you leave something up to human failure, it’s bound to fail in proportion to how little it has been tested. BCH can’t test market-based responsibility and preparation by nodes encountering larger scales in a live environment. BCH can test the ability of the network to protect itself against human error.

What’s the downside? You burn out hardware that can’t handle the load of the increased block size?

I agree, and maybe this proposal could be adjusted to establish a sort of slow-moving bottom, so once miners “win” a certain limit by demonstrating they can handle it, the network keeps that limit forever, until the next “proof of capacity” is produced by miners.

The problem with “basing it on what software can handle” is that it’s impossible to get an automatically adjusted metric out of that. We’ll be in the “do a CHIP and debate extensively every time we want to lift it” regime forever - whether that is desirable is subjective.

Many in BCH are deathly afraid of this regime given history, and rightly so; this proposal attempts to address such skepticism.

This proposal is basically BCH’s “we shalt never let blocks get consistently full” mission engraved into the most conservative possible automation regime, and assumes “if we move slow enough, we can make software catch up”. It further makes the assumption that “if we cannot do that, BCH has failed, so assuming software will catch up to a slow enough expansion is a reasonable assumption”. Whether that is actually reasonable is also subjective.

5 Likes

My suggestion is to use the median as proof of “what software can handle”. The whole proposal would be the same, but with one more candidate for establishing the max:

add: 4. Highest 365 day median seen so far, times 2 (or some other multiplier)

Do we all agree with @tom’s statement?
Is the maximum block size currently a consensus rule, or is it not?

@im_uname @freetrader @Jonathan_Silverblood @emergent_reasons

1 Like

It’s absolutely a consensus rule, which for all practical purposes is “what the network accepts and what miners will build on right now”. People who don’t think it’s consensus should make a 33MB block and find out.

2 Likes

What does “highest 365 day median” mean - take 365 medians, one from each day, and take the max of those?

Yup, exactly that. Misread at first - I meant: take a median of the last 365 days, every day, and update the max accordingly. So any past success sticks around, and there’s no risk that some period of lower activity, such as a bear market or an economic depression, would reduce the median and later require a whole year to adjust back to where we already know it could be.

The problem with taking the median over a period as short as one day is that it becomes easier to manipulate, while the upside is that it responds faster (at the cost of software not being able to catch up to a rapid ramp). To demonstrate, consistently fill blocks to max and see how fast the limit ramps with or without the new rule (a rough sketch of that comparison follows below).

As for the limit having a sticky bottom, note that rule 2 is already intended to address that - except that instead of the max of 365 medians (which in practice will likely push ceilings rather than provide floors), rule 2 is a simple median over the entire past year. That should ensure the “floor” falls really slowly; we can make it even slower by prolonging that duration.
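A rough, runnable sketch of that “consistently fill blocks to max” comparison. The adjustment rule here (cap ratchets up to twice the window median, never down) and all numbers are stand-ins of mine, not the proposal’s; it only illustrates how the window length throttles a deliberate ramp:

```python
# Assumed rule for illustration: each day, cap = max(cap, MULTIPLIER * median(window)),
# while an attacker mines every block at the current cap.
from statistics import median
from collections import deque

MULTIPLIER = 2.0   # assumed growth factor applied to the window median

def days_to_reach(target, window_days, start_cap=32_000_000):
    """Days of max-stuffed blocks needed to ramp the cap from start_cap to target."""
    cap = start_cap
    # seed a full window of history at the starting cap before the stress test
    window = deque([start_cap] * window_days, maxlen=window_days)
    days = 0
    while cap < target and days < 100_000:
        days += 1
        window.append(cap)                        # every block is stuffed to the cap
        cap = max(cap, MULTIPLIER * median(window))
    return days

print(days_to_reach(256_000_000, window_days=1))    # 1-day median: doubles daily, 3 days
print(days_to_reach(256_000_000, window_days=365))  # 365-day median: roughly a year
```

With a 1-day window the limit doubles every day; with a 365-day window each doubling takes about half a year under these assumptions, which is the “slow enough for software to catch up” property being argued for.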

To clarify - this is not what I meant (edited above). I wanted to say: a moving median with a 365-day window, where every day you update the floor with something like if (today_365median > floor_cap) floor_cap = today_365median;
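As a minimal runnable version of that one-liner (the daily granularity and variable names are carried over from the post; the block sizes are illustrative):

```python
# Ratcheting floor: every day, take the median of the last 365 daily block sizes
# and only ever move the floor up.
from statistics import median
from collections import deque

window = deque(maxlen=365)   # last 365 days of (representative) block sizes
floor_cap = 0                # the sticky floor; it only ever moves up

def update_floor(floor, window):
    """Return the new floor given today's 365-day median (never lowered)."""
    today_365median = median(window)
    return today_365median if today_365median > floor else floor

# A busy year pushes the floor up; a quiet year afterwards leaves it untouched.
for _ in range(365):
    window.append(8_000_000)          # sustained ~8 MB blocks
    floor_cap = update_floor(floor_cap, window)
for _ in range(365):
    window.append(500_000)            # bear-market lull
    floor_cap = update_floor(floor_cap, window)

print(floor_cap)                      # 8,000,000: the bar set during the busy year sticks
```

Equivalently, the floor at any point is just the maximum over all past 365-day medians, i.e. the “highest 365 day median seen so far”.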

1 Like

I see, won’t that make it effectively the same as rule 2?

Yes, except that it can’t adjust back down. Once it moves the bar up, the bar stays there. We could have a whole year of low activity where the median would drop, but the bar would stay in place until activity ramps up again enough to move it up further. The rationale being: if miners could have set the bar to X over some past 365-day period, then that is proof that X can be sustained, so why would we drop the cap below X?

1 Like

I agree with this as long as the multiplier used is low enough. With a high enough multiplier you can end up with a floor based on a false claim of sustainability.

1 Like

Hmm, if “the limit should basically never go down” is a desired trait, we can simply remove rule 2 and hard-specify “the limit never goes down”, or otherwise modify rule 2 to use a higher percentile (say, the 90th percentile instead of the median), no? Both of these have much less complexity than your proposed rule, and should have similar effects.
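For comparison, both of those simpler alternatives are essentially one-liners too; the numbers below are illustrative only:

```python
from statistics import quantiles

# (a) hard-specify "the limit never goes down": whatever the other rules produce,
#     clamp the new limit to at least the current one.
def next_limit(current_limit, candidate_from_other_rules):
    return max(current_limit, candidate_from_other_rules)

# (b) keep rule 2 but take a higher percentile of the past year instead of the median.
year_of_sizes = [500_000] * 300 + [8_000_000] * 65   # mostly quiet, one busy stretch
p90 = quantiles(year_of_sizes, n=10)[-1]             # 90th percentile = 8,000,000 here
print(next_limit(32_000_000, p90))                   # 32,000,000: never drops below today's cap
```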