CHIP 2023-04 Adaptive Blocksize Limit Algorithm for Bitcoin Cash

    Title: Excessive Block-size Adjustment Algorithm (EBAA) Based on Weighted-target Exponential Moving Average (WTEMA) for Bitcoin Cash
    First Submission Date: 2023-04-13
    Owners: bitcoincashautist (ac-A60AB5450353F40E)
    Type: Technical, automation
    Layers: Network, consensus-sensitive

This has been in draft for a while and still is, but I thought I'd make a new topic for it as a continuation of discussions here.

The idea of automating adjustment of the blocksize limit has been entertained for a long time. The code-fork of BCH launched by BitcoinUnlimited actually implemented the dual-median algorithm proposed by @im_uname.

The last post in the linked discussion compares the algos back-tested against actual BCH and ETH blocks; I’ll repeat it here:

  • single median, like Pair’s (365 day median window, 7.5x multiplier)
  • dual median, like im_uname’s OP here (90 and 365 day windows, but 7.5x multiplier)
  • wtema with zeta=10 and 4x/yr max, my old iteration
  • wtema with zeta=1.5 and 4x/yr max, multiplied by fixed constant (5x)
  • wtema with zeta=1.5 and 4x/yr max, multiplied by a variable constant (1-5x) - my current proposal

BCH data:

Ethereum data (note: size is that of “chunks” of 50 blocks, so we get a 10-min equivalent and can compare better). The right figure is the same data but zoomed in on the early days; notice the differences in when the algos start responding, and their speed. Also notice how the fixed-multiplier WTEMA pretty much tracks the medians but is smooth, and its max speed is limited, as observed in the period from day 547 to 1095.

The key difference between wtema with zeta=10 and wtema with zeta=1.5 is in the algo's stability when some percentage of hash-rate mines at the max. With zeta=1.5, 50% mining at max and 50% mining at some flat value would be stable, while with zeta=10 it would not, and the limit would continue growing.
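To make the stability claim concrete, here's a toy simulation (my own sketch with illustrative constants, not the CHIP's reference code or exact parameters, and it ignores the 4x/yr response cap): assuming the plain WTEMA form where the limit ε is nudged by γ ⋅ (ζ ⋅ x − ε) each block, a fraction h of hash-rate mining exactly at the limit keeps ε bounded precisely when ζ ⋅ h < 1.

```python
# Toy WTEMA stability check. Assumptions (mine, not the CHIP's exact
# parameters): the limit is updated per block as
#   eps += gamma * (zeta * x - eps),
# a fraction h of hash-rate mines exactly at the limit, the rest mines
# a flat 8 MB. Stable iff zeta * h < 1, with fixed point
#   eps* = zeta * (1 - h) * flat / (1 - zeta * h).

def simulate(zeta, h, flat=8.0, eps0=32.0, gamma=1e-4, blocks=144 * 365):
    eps = eps0
    for _ in range(blocks):
        # average mined size this block: h at the limit, (1 - h) flat
        x = h * eps + (1 - h) * flat
        eps += gamma * (zeta * x - eps)
    return eps

stable = simulate(zeta=1.5, h=0.5)    # zeta*h = 0.75 < 1: converges toward 24 MB
unstable = simulate(zeta=10, h=0.5)   # zeta*h = 5 > 1: grows without bound
```

With zeta=1.5 the 50/50 split settles toward a finite fixed point; with zeta=10 the same split makes the limit run away, matching the behaviour described above.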

To illustrate, below is a simulated scenario where we initialize the algorithm with 32 MB limit & multiplier=1 and then:

  • 1 year where 80% hash-rate mines exactly at the limit, and 20% mines flat 8 MB - this epoch results in about 5x increase of the limit
  • 1 year where 50% hash-rate mines exactly at the limit, and 50% mines flat 8 MB - this epoch results in a reduction of about 25% (and asymptotically approaches a point of stability)

Left - wtema zeta=1.5 with variable multiplier 1-5
Right - wtema zeta=10

image image


You have two ways to set prices in the world. It is either a free market, or it is controlled by a central deciding system. The latter has failed again and again and again. Central groups are really bad at predicting what needs to happen. See Economics in One Lesson - Wikipedia

This idea of making something consensus a setter of limits, which implicitly sets the blocksize, which implicitly defines the price of service, is not a free market.
You can add as many control levers in there as you wish to make it seem like it's controlled by some 3rd party; it will continue to not be a free market.

Moving Bitcoin Cash away from the free market it is today, where no central entity has ANY influence over the blocksize, is putting a noose on the coin. It will eventually be captured, just as certainly as the BTC chain has been captured due to its (rather crude in comparison) 1MB limit.

We will need to repeat this as long as you keep pushing this bad idea, which simply shows you and im_uname lack understanding of basic economics.


It is consensus because the configuration parameter is consensus-sensitive: any block over the limit will be rejected outright, without any other consideration given.

Consensus doesn’t set the blocksize, the aggregate of all miners individual max-produced + user demand for blocksize sets the blocksize:

  • If there are no users making TX-es, blocksize will be low.
  • If miners don’t lift their individual max-produced, blocksize will be capped at that regardless of max-accept, and if users aren’t making enough TX-es, blocksize will still be lower than max-produced.
  • If miners don’t lift their individual max-produced but there is demand for TX-es, mempools could start to grow, and users could start bidding more, until miners are convinced to lift their individual max-produced - up to the limit set by everyone’s max-accept.

The max-accept won’t interfere with the price of anything as long as the market-negotiated max-produced is below the max-accept, which is the current situation: blocks are about 200 kB, max-produced is mostly 8 MB, and max-accept is 32 MB where it matters. That is how it was on Bitcoin for the period 2009-2014.

What happens when there’s user demand but max-produced gets close to max-accept like what happened with Bitcoin in 2015? If engineering permits it, then it should be moved, right? But because it’s consensus-sensitive, moving it is risky due to not being sure what others will do, not being sure your block won’t get reorged, and not being sure the network/markets will even buy the hashes made for the too big blocks (by giving block reward liquidity and market price).

Imagine Satoshi had planned ahead and, when he set the 1 MB limit, also had an algorithm to automatically adjust it when adoption started to approach the limit. We could’ve had a 3 MB limit before 2017, adoption wouldn’t have halted for artificial causes, and the max-produced wouldn’t have reached max-accept to start interfering with the market. This is how it could have looked:

image

With the algo, the max-accept would reach 3MB by 2017 even if all miners kept max-produced at 1MB. Any miner adjusting max-produced upward and mining a bigger block would slightly increase the rate of increase of max-accept.

The BCH increase of max-accept from 8 MB to 32 MB was a coordination event: the MTP at which to switch 8 MB → 32 MB was agreed upon in advance and coded into upgraded software. It did not happen smoothly through some fuzzy market process of testing the limits and paying the reorg price. Everyone changed it at the exact same time, because they knew that everyone else knew that everyone else would change it at the exact same time, to the exact same value. What if coordination fails next time and we get stuck with 32 MB, or someone wants to YOLO to 256 MB before infrastructure is ready? This is the risk I want to avoid.

With the algo, we could avoid coordination events, or need them only in cases where algo would become inadequate (if it ever would).

The same method you imagine people will use to adjust the flat max-accept we have now can be used to adjust the algo’s parameters or revert to some flat value again.

Why do you keep ignoring this in all talks? BitcoinUnlimited actually has RPC to adjust max-accept at runtime. Imagine this proposal as a little program offered to the market, that would automatically adjust the max-accept for the node that wants to run it. Other nodes can keep their flat max-accept, they’ll just have to occasionally bump it up to stay above whatever the algo would produce for other nodes on the network.

With the algo you’d run a node with

./bitcoind <-excessiveblocksize 32000000> <-ebaaconfig n0,alpha0_xB7,gammaReciprocal,zeta,thetaReciprocal,delta,alphaL_xB7>

instead of
./bitcoind <-excessiveblocksize 32000000>

(and currently you don’t need to specify it; it defaults to 32 MB)

So now we have a default of 32 MB. With the algo it’d default to 32 MB + (the algo’s dynamic allowance, depending on conditions & algo config).

Nodes running with the algo would be consensus-compatible with nodes running with a flat value above what the algo would produce on algo nodes!

“Consensus” is a tricky word. Check out what it means in Bitcoin context here:

https://en.bitcoin.it/wiki/Consensus

That sounds fine.

You remove the ‘consensus’ part (notice that was the ONLY thing I quoted) and provide this as an optional solution (only full nodes that want it, only users that want it) that other people can just ignore. That would be great.

Let this idea compete on merit and avoid the community being locked into this solution until such a day as we hard-fork to remove it because, as it turns out, models inevitably fail.


The definition of “consensus” in this context is something that is industry-wide. It's a 10-year-old industry with millions of people; we defer to that definition.

The core point of said definition is that while it is possible to add a consensus rule with either a soft or a hard fork, it requires a hard fork to remove one.

This latter part has always been the core of my contention with your proposal. And I gave open markets as a reason.

I found a much simpler reason that may appeal to you.

Your entire point of doing this, I understand, is to avoid a future “coordination event” (like a protocol upgrade, or what the rest of the industry calls a hard fork).

But as long as you insist to do this as a consensus rule that means that should this ever fail or need to be disabled, your proposal is the source of the need to have such a coordination event. Disabling your new consensus rule, by definition, requires a coordination event and subsequent hard fork.

That fact is the main reason why I’ll always object to this proposal. It's not doing what you think it's doing.

And the only reason I have heard why you think this is supposed to be a consensus rule is because you don’t believe enough people will voluntarily enable this concept to make it work…


Do you really insist that this is a consensus rule proposal? Knowing that the definition of a consensus rule is that it requires a hard fork to remove it later? Does your stuff really qualify for that?


It is not a new rule. It is a proposal about automatically adjusting the existing consensus rule’s parameter at node’s run-time.

You had shipped Flowee with 128 MB as the default parameter, right?

If BCHN implemented this proposal, it would auto-adjust the parameter from 32 MB to X, and both Flowee and BCHN would stay in consensus until the auto-adjusted value of BCHN reaches X > 128 MB and a block above 128 MB is mined and most hash-rate continues extending that chain.

From then on, Flowee would stay on some minority or dead-end chain until the node operator intervened. He could simply bump the config to some higher flat value, like 160 MB, and it would again stay in consensus with BCHN nodes running the algo until the algo brings X above 160 MB. Likewise, people running BCHN could restart their node to use flat 160 MB instead of the algo, and they’d also stay in consensus. The time it takes for the algo to go from 128 MB to 160 MB is predictable; the max rate of auto-change is about 0.38% / day, so if you switched from algo to flat 160 MB, you can be sure you won’t fall out for at least 2 months.
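The "2 months" figure can be sanity-checked with a one-liner, assuming the stated ~0.38%/day maximum rate compounds daily:

```python
# Days for the algo to move the limit from 128 MB to 160 MB at the
# stated maximum auto-adjust rate of ~0.38% per day (compounded).
import math

days = math.log(160 / 128) / math.log(1.0038)
print(round(days))  # roughly 59 days, i.e. about two months
```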

In a hypothetical scenario where everyone restarted their nodes to go back to flat, the algo would be no more, as if it never was.

What I’m trying to say is - even with the algo, the network would be agnostic of how someone has set their max-accept value. Algo or no algo, if both node’s values are above what’s getting mined, they’d stay on the same network.

It is not introducing a new consensus rule but it is tweaking the consensus-sensitive parameter at run-time, so what label would you apply to this?


Moving goalpost detected.

Your statement is actually quite irrelevant to the topic at hand. Your proposed rule requires a hard fork to be removed or altered if it is to be a consensus rule. That's not my opinion, that's simply following the definition, which is also not my writing but what is used in the much bigger industry.
If you disagree that it requires a hard fork (and coordination event) to be removed or changed, then it likely is not a consensus rule. :man_shrugging:

Anyway, I have my answer, I fail to see why you are so stubborn on this, it seems like a cheap option to just make it optional, best of both worlds. No lock-in created. You avoid the coordination event you want to avoid in all cases. If you believe in this proposal, don’t force it down people’s throats, let them freely choose it. Why not?

You have my position: as long as it is a consensus-level change it is not acceptable, because the cost of being wrong is too high. It creates the same ingredients that caused BTC's slow, painful-to-watch death.

To close this: please do try steel-manning the non-consensus-change CHIP-Block-Growth, which accomplishes the same with fewer moving parts and using simple economics. (Allow producers to adjust over time to a changing environment.) It also shows effectively that the excessive size feature is NOT a consensus rule, because it does not require a hard fork to change the effective network size.

No, it is attempting to translate it into your mental model. Let’s try again.

Can you now restart Flowee with max-accept 129 MB, then restart it tomorrow with 130 MB, then next day with 131 MB? What if you exposed the config via RPC and allowed it to be adjusted at run-time, then change manually daily? Is it a new consensus rule? What if you delegated changing it to a script?

The proposal would be updating the max-accept config at run-time. If altering max-accept causes a hard-fork, then adjusting it manually without an algo (the node operator changing the config by hand) ALSO causes a hard-fork. At the same time, you’re claiming that adjusting it manually DOESN’T cause a hard-fork, since your CHIP proposes exactly that: nodes play with the config and adjust it upwards ahead of max-mined actually reaching it. The algo would be doing the same, but automated. I don’t understand where the gap in understanding is here. Changing max-accept either does or doesn’t cause a HF; what does the method of changing it matter?

Maybe the source of our misunderstanding is that changing the max-accept doesn’t immediately cause a HF, but it exposes nodes to hard-fork potential. A split could happen only if hash-rate were to suddenly start mining with max-mined above what some nodes have. Splits happen only if something actually tests the rule in such a way that different nodes have different views. As long as all nodes have their max-accept above miners' max-mined they should be fine, so why does it matter how the nodes decided their max-accept, manually or automated? Like, even without the algo, if BCHN has it at 32 MB and Flowee at 128 MB, there won’t be a fork unless most miners switch to Flowee and set their max-mined to 33 MB.

Let’s say most miners run BCHN with the algo, imagine they have max-mined at 20 MB for a while and average mined blocks actually get mined at some 16 MB or w/e. After some time the algo would adjust their max-accept to some 40 MB all the while max-mined would still be at 20 MB. All nodes would still be in consensus: BCHN with flat (32 MB), BCHN with algo (32 MB + X), and Flowee with flat (128 MB). The algo BCHN could revert back to 32 MB flat without causing a HF.

What does “optional” mean in this context? Optional for whom? If BCHN implemented the algo and had it on by default (but possible to disable with a config flag), would you say it is optional? Flowee could stay at 128 MB and it wouldn’t be forced to do anything.


If someone were to write a bash script or something that will automatically tune my BCHN’s --excessiveblocksize according to this algorithm without me having to touch it, I’d probably run it.

If enough node operators run such a script, then both bitcoincashautist and Tom are happy. :slight_smile:
No need to implement it as a consensus rule or directly into any node software, since as bcha said, it’s just tuning the already-existing runtime parameters. And I can choose to stop using the script if it turns out that the parameters it's setting don’t work for my infrastructure. Open market, yay.
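As a sketch of that idea (entirely hypothetical: the constants and the clipped-WTEMA form are illustrative, not the CHIP's exact parameters, and fetching block sizes from a node's RPC is left out), such a script could boil down to:

```python
# Hypothetical node-tuning sketch: fold recent block sizes through a
# clipped WTEMA update and print the -excessiveblocksize flag a node
# operator could restart bitcoind with. Constants are illustrative
# only; a real script would read sizes from the node's RPC.

def next_limit(current_eps, recent_sizes, zeta=1.5, gamma=1e-5,
               floor=32_000_000):
    eps = current_eps
    for x in recent_sizes:
        # clipped WTEMA step with a flat floor, per block
        eps = max(eps + gamma * (zeta * min(x, eps) - eps), floor)
    return int(eps)

if __name__ == "__main__":
    sizes = [200_000] * 144  # e.g. yesterday's blocks, mostly empty
    print(f"-excessiveblocksize={next_limit(32_000_000, sizes)}")
```

With mostly-empty blocks the floor holds the limit at 32 MB; only sustained big blocks would nudge the printed value upward.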

Edit: side-thought on this (assuming double-posting is frowned upon here)

I think sometime in our future, it will be nice to say from the service/app-provider level that my node’s blocksize parameters are expected to grow according to this-or-that schedule. It could be bcha’s algorithm, or it could be something simpler like Gavin’s original scaling plan (which would have us all raise our excessiveblocksize to 64 MB next year).

The “fruit” of bcha’s proposal is removing the ambiguity in timing that Tom’s proposal doesn’t really address. We mustn’t forget that as we add value to the network, the incentive to keep the network from fracturing grows. Most real users will be represented by SPV providers, and most real economic activity will be represented by service/app-providers (much like exchanges have wielded considerable power over “consensus” in the past). Those providers are all incentivized to be actively aware of any potential threat to network cohesion (as their business depends on it), and to participate to prevent such an outcome. Chain splits obliterate value, and we all recognize that we must do everything we can to prevent them.

As long as the SPV providers serve the chain that provide the most economic value to users, the users won’t care about the blocksize, it’s irrelevant to them.

The SPV providers only care to the extent that hosting their full node/indexer becomes prohibitive… as long as blocksize scales reasonably, this is fine. These providers can also profit on other services that benefit from a full node.

Miners are mostly concerned about supplying hashpower to the chain that benefits them the most economically. They have the most to lose from a chainsplit, so they mine well under excessiveblocksize until it’s proven that the network will support larger blocks.

This is where I see bcha’s proposal as useful, simply as a signalling mechanism. If I run the algo as a script over my existing BCHN, then I can sign a message that says that “bch.ninja’s node will upgrade according to the algorithm” and if enough other nodes/service providers do this, then miners have long-term confidence in increasing the size of the blocks they mine.



Look,
the definition of a “consensus rule” is that it takes a hard fork to remove it.
The question YOU have to answer is if your proposal is of type “consensus”.

If it is:

  • then it is mandatory for all (full node) clients to implement your exact algo.
  • disabling or removing it will take a hard fork.

If it is not of type “consensus”:

  • I have no problem with it, it is then simply a permissionless innovation.
  • It can simply be a possibly good enough solution that a lot of people will love.
  • It can be improved upon in the future without a coordination event.
  • BCHN can ship it by default on, no problem.
  • Other in-consensus full nodes may not implement it at all.

As I wrote before:

From your writing, it sounds to me that it is NOT actually meant to be of type consensus, as per the definition that is used by the wider cryptocurrency ecosystem spanning millions of people. Your thinking that it is consensus is likely due to the bad naming of the term. :man_shrugging:

My point of view is that the max-accept is not consensus. It stands to reason that your stuff isn’t either. If you agree, please update your proposal to no longer make it be of type consensus. The implications of that one change are pretty big, and IMHO all of them positive.


It is this:

Ok, so our whole dispute then is about how to label it?

By that same logic, the algorithm to auto-adjust the max-accept by nodes that implement it would also not be consensus. But what’s the appropriate label then, consensus-sensitive? Because if the node’s max-accept isn’t adequate (ends up lagging behind max-mined of mining majority), then it will fall out of consensus once other nodes & miners go above the value.


Labels in specifications have a lot of power. They imply things not directly stated otherwise. The usage of ‘layer: consensus’ implied that it becomes mandatory for all full nodes and can’t be removed easily. That implied behaviour was the dispute, and maybe you never intended those side-effects, which makes the dispute easy to resolve.

A full node that is somehow not going to download or apply a block may indeed stall and stop following the main chain. This can happen for a number of reasons, the simplest being that its Internet connection went offline. It could also simply have manually set a low value because that is what the hardware is capable of, and the operator decides it is better to have it stop following the chain than simply dying or destroying the hardware due to heavy swap usage.

Is a node that is turned off for some time out of consensus? (answer: no). Does BCHN retry the formerly excessive block when restarted with a higher max-accept number? (answer: probably yes).

Just because a full node fails to follow the main-chain doesn’t make the property causing it a ‘consensus’ property. A good Internet connection is not a “consensus” thingy either, the amount of memory in a device isn’t consensus either.

It seems to me that most types of CHIPs are OK for this, as most of them don’t really have the baggage that ‘consensus’ has.


How’s this then:

    Type: Technical, automation
    Layers: Network, consensus-sensitive

The max-accept affects the node’s interaction with the network, right? It’s consensus-sensitive because a mismatch exposes it to potentially splitting off until the config is updated.


What about “Layer: Full Node”.


I’m making a small but important tweak to the algo, changing:

  • ε_n = max(ε_{n-1} + γ ⋅ (ζ ⋅ min(x_{n-1}, ε_{n-1}) − ε_{n-1}), ε_0) , if n > n_0

to:

  • ε_n = ε_{n-1} + (ζ − 1) / (α_{n-1} ⋅ ζ − 1) ⋅ γ ⋅ (ζ ⋅ x_{n-1} − ε_{n-1}) , if n > n_0 and ζ ⋅ x_{n-1} > ε_{n-1}

  • ε_n = max(ε_{n-1} + γ ⋅ (ζ ⋅ x_{n-1} − ε_{n-1}), ε_0) , if n > n_0 and ζ ⋅ x_{n-1} ≤ ε_{n-1}

This means that the response to block sizes over the threshold will adjust smoothly up to the max., instead of having the max. response triggered by any size between ε_{n-1} (“control block size”) and y_{n-1} (“excessive block size”).

The per-block response is illustrated with the figures below.

Old version:

image

New version:

In other words, instead of having the response clipped (using the min() function), we now scale it down proportionally to the multiplier increase, so it always takes a 100% full block to get the max. growth rate, as opposed to the previous version, where any size above the control block size would trigger the max. response.
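For concreteness, the two response functions can be sketched in code (my transcription of the formulas above; variable names and test constants are mine, not the CHIP's):

```python
# Old response: min() clips the driver at eps, so every size from eps
# up to the excessive size triggers the same (maximum) response.
def eps_next_old(eps, x, zeta, gamma, eps0):
    return max(eps + gamma * (zeta * min(x, eps) - eps), eps0)

# New response: above the threshold zeta*x > eps the step is scaled by
# (zeta - 1) / (alpha * zeta - 1), so growth ramps smoothly and only a
# 100%-full block yields the maximum rate.
def eps_next_new(eps, x, alpha, zeta, gamma, eps0):
    if zeta * x > eps:
        return eps + (zeta - 1) / (alpha * zeta - 1) * gamma * (zeta * x - eps)
    return max(eps + gamma * (zeta * x - eps), eps0)
```

With e.g. ζ = 1.5, the old form responds identically to a block of size ε and one twice as large, while the new form responds proportionally more to the larger block.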

Back-testing:

  • BCH

image

  • BTC

image

  • ETH

image

Scenario 03

  • Year 1: 10% at 100% fullness, 90% at 16 MB
  • Year 2: 10% at 100% fullness, 90% at 24 MB
  • Year 3: 10% at 100% fullness, 90% at 36 MB
  • Year 4: 10% at 100% fullness, 90% at 54 MB

image

Scenario 04

  • Year 1: 50% at 32 MB, 50% at 8 MB
  • Year 2: 20% at 64 MB, 80% at 32 MB
  • Year 3: 30% at 80 MB, 70% at 40 MB
  • Year 4: 10% at 80 MB, 90% at 64 MB

image

Scenario 05

  • Year 1: 80% at 100% fullness, 20% at 24 MB
  • Year 2: 33% at 100% fullness, 67% at 32 MB
  • Years 3-4: 50% at 100% fullness, 50% at 32 MB

image

Scenario 07

  • Year 1: 50% at 100% fullness, 50% at 32 MB
  • Year 2: 50% at 100% fullness, 50% at 8 MB
  • Year 3: 10% at 100% fullness, 90% at 48 MB
  • Year 4: 10% at 100% fullness, 90% at 72 MB

image


In general I have seen this topic being discussed now for about 7 years; the first talks about something like this started on Reddit.com in 2016 or so, maybe even earlier.

I have also seen you work on tweaks for this and a previous, similar algorithm [maybe it was the same and this is the upgraded version] for about 2 years.

I think this code could easily be reaching a production-ready stage right now. Am I wrong here?


It has seen many iterations, but the core idea has stayed the same: observe the current block and, based on that, slightly adjust the limit for the next block, and so on. I believe the current iteration now does the job we want it to do. Finding the right function and then proving it’s the right function takes the bulk of the work, and of the CHIP process.

The code part is the easy part :slight_smile: Calin should have no problems porting my C implementation, the function is just 30 lines of code.


Truly Excellent!


Over the last few days I had an extensive discussion with @jtoomim on Reddit, his main concern is this:

This does not guarantee an increase in the limit in the absence of demand, so it is not acceptable to me. The static floor is what I consider to be the problem with CHIP-2023-01, not the unbounded or slow-bounded ceiling.

BCH needs to scale. This does not do that.

and he proposed a solution:

My suggestion for a hybrid BIP101+demand algorithm would be a bit different:

  1. The block size limit can never be less than a lower bound, which is defined solely in terms of time (or, alternately and mostly equivalently, block height).
  2. The lower bound increases exponentially at a rate of 2x every e.g. 4 years (half BIP101’s rate). Using the same constants and formula in BIP101 except the doubling period gives a current value of 55 MB for the lower bound, which seems fairly reasonable (but a bit conservative) to me.
  3. When blocks are full, the limit can increase past the lower bound in response to demand, but the increase is limited to doubling every 1 year (i.e. 0.0013188% increase per block for a 100% full block).
  4. If subsequent blocks are empty, the limit can decrease, but not past the lower bound specified in #2.

My justification for this is that while demand is not an indicator of capacity, it is able to slowly drive changes in capacity. If demand is consistently high, investment in software upgrades and higher-budget hardware is likely to also be high, and network capacity growth is likely to exceed the constant-cost-hardware-performance curve.

This is easily achievable with this CHIP, I just need to tweak the max() part to use a calculated absolutely-scheduled y_0 instead of the flat one.

There were some concerns about the algorithm being too fast - like, what if some nation-state wants to jump on board and it pushes the algorithm beyond what’s safe considering the technological advancement trajectory? For that purpose we could use BIP101 as the upper bound, and together get something like this:

image

To get the feel for what the upper & lower bounds would produce, here’s a table:

Year | Lower Bound (half BIP-0101 rate) | Upper Bound (BIP-0101 rate)
-----|----------------------------------|----------------------------
2016 | N/A                              | 8 MB
2020 | 32 MB                            | 32 MB
2024 | 64 MB                            | 128 MB
2028 | 128 MB                           | 512 MB
2032 | 256 MB                           | 2,048 MB
2036 | 512 MB                           | 8,192 MB
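The table values can be reproduced with a simple doubling schedule (a sketch at yearly granularity: real BIP101 interpolates per block, and the anchor years/values here are just read off the table, not taken from the BIP's activation constants):

```python
# Scheduled bounds, assuming: the upper bound doubles every 2 years
# from 8 MB in 2016 (BIP101 rate), and the lower bound doubles every
# 4 years from 32 MB in 2020 (half the BIP101 rate). Yearly
# granularity only, for reproducing the table above.

def upper_bound_mb(year):
    return 8 * 2 ** ((year - 2016) / 2)

def lower_bound_mb(year):
    return 32 * 2 ** ((year - 2020) / 4)
```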

If there’s no demand, our limit would stick to the lower bound. If there’s demand, our limit would move towards the upper bound. The scheduled lower bound means recovery from a period of inactivity will get easier with time, since at worst the algorithm would be starting from a higher base - determined by the lower bound.
