Asymmetric Moving Maxblocksize Based On Median

This one is likely not well known;

3: the AD mechanism specifically was about jumping chains when a certain number of orphaned blocks were found which were bigger than the EB allowed. Or, in other words, the designers of this scheme wanted miners to start mining blocks they knew would be orphaned by some others. Then the client would re-think that orphaning after several blocks and would jump to the ‘right’ chain after n (default was 6) blocks to avoid a chain-split.

Miners utterly rejected this because ANY orphaning was unacceptable, and they have made clear that they will talk among themselves to decide what blocksize they will limit themselves to - realizing that a technical solution doesn’t solve a social problem.


Imaginary Username puts forward similar arguments to the ones I made, so I absolutely agree with those.

As a closing point I want to suggest that in order to convince miners to mine bigger blocks we need to not just argue among non-miners, but actually show that mining bigger blocks is safe and that they do not get orphaned (talk is cheap). This starts with a scalenet and ends with one miner actually mining bigger blocks on mainnet so other miners can see they are safe.
Naturally this effort is useless until there is economic activity to actually fill those blocks.

Miners are our partners in this effort. We move together when both miners and software devs are satisfied that changes in size are a good idea - in a soft way, to avoid hard rules that orphan blocks.

They don’t need convincing, they need something to fill those blocks with, and for that we need users. What’s the current accepted-blocksize value? What’s the current created-blocksize median? Year to date, most BCH blocks have been in the 200-300kB range - usage levels BTC had back in 2013/14. No wonder BCH market cap is at 2013/14 levels, too. Plenty of headroom till we hit the 32MB cap. If miners need anything, they need users to actually generate enough transactions so they have something to fill those blocks with and increase the total value of the SHA-256 hash market.

I kind of get your point about it being a configurable option - even if 100% of the miners need to coordinate to configure it the same. It could be a checkbox: “stick to the algo” or temporary override. The algo would then pick up from whatever override was mined.

Why bother making the algorithm opt-in and not opt-out? Unless I misunderstood your statement.

Soft-coded only means it’s end-user configurable (not developer, or ‘developer’ via recompiling); configurable doesn’t mean it’s opt-in or opt-out.

All default values of soft-coded parameters in any program are by nature opt-out and not opt-in, unless the program forces the end user to configure them (without copy-pasting from some tutorial) before the program can run.

The algorithm should be soft-coded and opt-out.

I concur.

After all we mostly agree here that miners are passive players in this ecosystem, so they should only be bothered to change the values if there is an emergency or something is not working right.

So opt-out (meaning ON by default) would seem to be the logical solution.

Of course we do not want to break anything and cause problems, so extensive testing on testnet has to be done.

Thanks for your understanding.
I don’t want to hijack this exciting discussion with merely an administrative matter.
But we can open a parallel post and talk about this in more detail.
I don’t think that a suboptimal direction is the consequence of the issue being highly political, nor do I think that conflicts have to be avoided; on the contrary, it is better to face them and do everything possible to resolve them.
Perhaps the problem lies in the fact that sometimes the boundaries between conflict and violence become blurred. That is a warning that we cannot let pass.


Just adding this for future reference:

I think the main issue with BSV in general is that it’s not quite possible to predict the blockchain size at all. The current limit seems to be 4 GB and people were actively testing to hit it. So potentially it’s 4 * 144 = 576 GB of blockchain data every day. Plus indexes. Plus services like block explorers run their own database (we run even two for extra speed and analytics). So for Blockchair this is potentially up to 60 terabytes a month just with the current limit (which is expected to get increased).

The second important issue is that if it were some useful data like real transactions, real people would come to block explorers to see their transactions, businesses would buy API subscriptions, so we’d be able to cover the disk costs, the development costs, the cost of trying to figure out how to fit 10 exabytes into Postgres (not very trivial I think), etc.

But the reality is that 99.99% or so of Bitcoin SV transactions are junk, so despite being the biggest Bitcoin-like blockchain with the most transactions, Bitcoin SV constitutes only 0.3% of our visitor numbers and there are very few API clients using Bitcoin SV (0.2% of all API requests, most of which are free API calls for the stats). Unfortunately, this doesn’t cover all these costs. So that’s why we can’t run more than 2 nodes, and even these two nodes will get stuck at some point because we’ll go bankrupt buying all these disks to store the junk data. But we’re trying our best :)

With this amount of junk data I just don’t see a business model for a BSV explorer which would work in the long term (maybe an explorer run by a miner?). I think the same goes for exchanges, for example. If you have to buy 10 racks of servers to validate the blockchain but you only have 10 clients paying trading fees, you’ll go bankrupt.


Just to illustrate, I plotted how the above algorithm would have behaved were it in there since block 0. The algorithm wouldn’t have allowed block sizes above the green line.

100KB initialization

100KB initialization + one-time hard-fork bump to 500KB at block 175,000
Note that the algo wouldn’t have to be changed for the HF since it doesn’t have memory; it looks at whatever MAX was recorded in the previous block and picks up from there. The HF would just do a one-time override of the value written into the coinbase script, and the next block would calculate the max based off that and simply continue from there.

8MB initialization

Edit: after some discussion with @tom on Telegram I realized the approach of having the algorithm’s state written in every block’s coinbase and enforced by consensus is flawed. I was proposing a new consensus rule for no reason - it’s not the algorithm that’s the problem, but writing the algorithm’s state in the coinbase and having that coinbase field consensus-enforced. It would prevent someone syncing from scratch from just setting his -excessiveblocksize to whatever is the latest; he’d have to verify the algorithm’s field even after it has served its purpose.

So, I will change the approach to “excessiveblocksize autoconfig”, and if miners like it and make it the default, then everyone else could either snap to the algorithm just as well, or set a constant -excessiveblocksize to some size above whatever the algorithm could produce.

Ok, new approach, where the algorithm would be an extension of -excessiveblocksize.
If a node specified -autoebstart <AEBHeight> then it would enforce a fixed EB (like it does now) until AEBHeight, from where the node’s EB would start slowly growing whenever blocks are more than 12.5% full. It doesn’t matter whether a block is 50% or 20% full: the growth rate per block is the same for anything above the threshold, so whatever gets mined, growth is capped at 2x/year and the yearly rate will be in the 0%-100% range, depending on the frequency of blocks above the 12.5% fullness threshold.
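To make the cap concrete, here’s a quick back-of-the-envelope sketch (not code from any node; it just assumes the usual ~52,595 ten-minute blocks per year) of how the 2x/year cap translates into a per-block growth factor, and why the realized yearly rate lands in the 0%-100% range:

```python
# Per-block growth factor such that growth on *every* block compounds to 2x/year.
BLOCKS_PER_YEAR = 52595                       # ~10-minute blocks over 365.25 days
growth_factor = 2.0 ** (1 / BLOCKS_PER_YEAR)  # ~1.0000132 per qualifying block
print(growth_factor)

# If only a fraction p of blocks exceed the 12.5% threshold, the yearly
# multiplier is 2**p, i.e. anywhere from +0% (p=0) up to +100% (p=1).
for p in (0.0, 0.25, 0.5, 1.0):
    print(p, growth_factor ** (p * BLOCKS_PER_YEAR) - 1)  # 0%, ~19%, ~41%, 100%
```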

Nodes could coordinate a switch back to flat EB (obviously it would have to be greater than previously mined block sizes), or they could coordinate shifting the algo start time. So if we want to tweak it as we go, we could still coordinate a change of defaults on upgrade days without making exceptions in the code - it’d be just a change of default config. But if we ever stop doing coordinated upgrades, then sticky defaults would act like a dead-man’s switch, giving everyone assurance that EB can’t ever get stuck or go down. I mean, it could - some future people could SF to remove it, but action would be required, right? No action would mean it keeps growing. In ’17 it was the opposite: action was required to grow, and no action meant growth would stop.

I plotted some hypothetical scenarios. Imagine Satoshi had the algo right from the start with EB=500KB (red line). Then he’d HF to 1MB in 2010 but keep the algo (zoomed in, the ’10 HF would change the curve from red to green), so in response to adoption the EB would move up to adjust to demand. He’d then disappear and we’d be fine, and maybe the wars would be about soft-forking to remove the algo, hah, because it’d keep growing and reach 8MB in ’17 even if actual mined blocks had always been <1MB.

Then in ’17 we’d change config to 8MB and bump the curve up further, then in ’18 we’d bump it to 32MB. The algo would only get us up to 32.27MB today because our actual usage had very few blocks >12.5% full, which would’ve triggered only a few slight increases. So if we did tests with 256MB we could bump it up again in ’24 to, say, 64MB and the algo would pick up from there.

Made a little program to calculate this quickly, can be found here: ac-0353f40e / Autoebs · GitLab

PS how about “Monotonic Excessive Blocksize Adjuster (MEBA)” for the name :)

Nice evolving thought here!

Some comments;

  1. Any algorithm is fine by me as long as it doesn’t concern protocol rules and, as a result, participation stays optional and voluntary.
  2. Changes in such probably should follow the established capacity of widely used infrastructure. So the BCHN node currently ships with EB=32MB because we know it can handle that. When that moves to 50MB or 64MB at some time in the future, again based purely on the software being able to handle that, your algorithm may become more aggressive.
    You can make this internally consistent by having it simply respond to the user-set EB, which for most is the software-default EB.
  3. Any changes to the algo have nothing to do with the yearly protocol upgrades. I would even suggest staggering them or otherwise de-coupling any changes to make clear that they are not related - that such auto-max-generated sizes are not part of the protocol and thus don’t coincide with said upgrades.
    For instance you could do changes in January, based on the idea that software has been released by half of November and many will have upgraded.
  4. (repeat from my Telegram message) I expect that when blocks get considerably bigger we will see a separation of block-validation and block-building: two pieces of software that communicate. In fact the block template being created by the full node is already partly ignored by mining software, and the duplication of effort there adds up the larger a block gets.
    So picture mining software that gets the transactions as they come in from the full node and periodically builds a new block to mine based on the mining software’s local mempool. This separation suddenly allows a lot more power to the mining operator in the form of configurability.
    Mining software can suddenly start selecting transactions based on reaching a certain amount of fee income.
    Miners are really the customer of this, and features are enabled based on merit: does this increase the miner’s paycheck?
    So, in the end, any such ideas as you write here follow the same market: you can build it, but the ones to sell it to are the miners. Do they actually want to use it? Does this give them value?
    I honestly don’t know, which is why I can’t really comment more on this thread. I’m not the customer of this new iteration of this feature.

Just my 2 cents here:

This seems like a good idea; it should decrease potential contention on yearly upgrades.

As long as these changes do not produce a maximum hard limit, but only a soft/suggested limit that can be easily overridden by config files, it will be OK.


I removed the part where it would introduce a new rule. It’s the same EB, just adjusted upwards by those who’d decide to stick to the algo. Those running with EB=32 and those with EB=32+algo would be compatible for as long as a majority of hashpower doesn’t produce a block that is >32MB and <(32+algo_bonus). If a >32MB chain did come to dominate, then those running with EB=32 could either bump it up manually to, say, EB=64 and have peace until the algo catches up to 64, or just snap to the algo too.

Changes in such probably should follow the established capacity of widely used infrastructure. So the BCHN node currently ships with EB=32MB because we know it can handle that. When that moves to 50MB or 64MB at some time in the future, again based purely on the software being able to handle that, your algorithm may become more aggressive.
You can make this internally consistent by having it simply respond to the user-set EB, which for most is the software-default EB.

If need be, a faster increase could be accommodated ahead of being driven by the algo, by changing the EB config so nodes would then enforce EB64(+algo_bonus). It wouldn’t make the algo more aggressive - it would just mean it will not do anything until mined blocks start hitting threshold_size = EB/HEADROOM_FACTOR.

Any changes to the algo have nothing to do with the yearly protocol upgrades.

I see what you mean, but IMO it’s more convenient to snap the change in EB to protocol upgrades, to have just 1 “coordination event” instead of 2 and avoid accidentally starting reorg games.

Mining software can suddenly start selecting transactions based on reaching a certain amount of fee income.
Miners are really the customer of this, and features are enabled based on merit: does this increase the miner’s paycheck?

It would be nice to suggest some good (optional) policy for mined block size, one that would maximize revenue and minimize orphan risk while accounting for miner-specific capabilities, risk tolerance, and connectivity. Right now it seems like it’s mostly a flat 8MB, and no miner has bothered to implement some algo by himself even though nothing is preventing him from doing so.


In the meantime I had a few other discussions that might be of interest:

Some backtesting against Ethereum (left - linear scale, right - log scale), trying to understand whether the headroom (target “empty” blockspace, set to 87.5%) is sufficient to absorb rapid waves of adoption without actually hitting the limit:

The red line is too slow; it doesn’t reach the target headroom till the recent bear market.

Here’s some more back-testing against BTC+BCH, assuming the algo was activated at same height as the 1MB limit (red- max speed 4x/yr, orange- 2x/yr):

It wouldn’t get us to 32MB today (due to blocks in the dataset all being below 1MB until ’17 - otherwise it actually could have), but it would have gotten us to 8MB in 2016 even with mined blocks <1MB!

The thing I really like is it “saving” our progress. If there was enough activity and network capacity in the past, then can we assume the capacity is still there and that activity could come rushing back at any moment? See how it just picks up where it left off at the beginning of 2022 and continues growing from there, even though the median blocksize was below 1MB, because the frequency of blocks above the threshold (1MB for EB8) was enough to start pushing the EB up again (at a rate slower than max, though). The algo limit would only cut off those few test blocks (I think only 57 blocks bigger than 8MB).


New CHIP

I believe this approach can be “steel-manned”, and to that end I have started drafting a CHIP. Below is a 1-page technical description - I would appreciate any feedback: is it clear enough? I’ll create a new topic soon; any suggestions for the title? I’m thinking CHIP-2022-12 Automated Excessive Blocksize (AEB).

Some other sections I think would be useful:

  • Rationale
    • Choice of algo - jtoomim’s and my arguments above in this thread
    • Choice of constants (tl;dr: I got 4x/yr as max speed from backtesting; target headroom has a similar rationale as @im_uname’s above; and the combo seems to do well tested against historical waves of adoption)
  • Backtesting
    • Bitcoin 2009-2017 (1mb + algo)
    • BCH 2019-now (1mb + algo)
    • BTC+BCH full history flatEB VS autoEB, with eb config changes to match historical EB changes (0, 0.5mb, 1mb, 8mb, 32mb)
    • Ethereum
    • Cardano?
  • Specification - separated from the math description because we need to define the exact calculation method using integer ops so it can be reproduced bit-for-bit
  • Attack scenarios & cost of growth

Technical Description

The proposed algorithm can be described as intermittent exponential growth.
Exponential growth means that the excessive block size (EBS) limit for the next block will be obtained by multiplying the current block’s EBS by a constant growth factor.
Intermittent means that growth will simply be skipped for some blocks, i.e. the EBS will carry over without growing.

The condition for growth will be block utilization: whenever a mined block is sufficiently full, the EBS for the next block will grow by a small, constant factor; otherwise it will remain the same.
To decide whether a block is sufficiently full, we will define a threshold block size as the EBS divided by a constant HEADROOM_FACTOR.

The figure below illustrates the proposed algorithm.

[figure: illustration of the proposed algorithm]

If current blockchain height is n then EBS for the next block n+1 will be given by:

excessiveblocksize_next(n) = excessiveblocksize_0 * power(GROWTH_FACTOR, count_utilized(ANCHOR_BLOCK, n, HEADROOM_FACTOR))

where:

  • excessiveblocksize_0 - configured flat limit (default 32MB) for blocks before and including AEBS_ANCHOR_HEIGHT;
  • GROWTH_FACTOR - constant chosen such that maximum rate of increase (continuous exponential growth scenario) will be limited to +300% per year (52595 blocks);
  • count_utilized - an aggregate function, counting the number of times the threshold block size was exceeded in the specified interval;
  • AEBS_ANCHOR_HEIGHT - the last block which will be limited by the flat excessiveblocksize_0 limit;
  • HEADROOM_FACTOR - constant 8, to provide ample room for short bursts in blockchain activity.
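To illustrate the calculation above, here’s an informal Python sketch (floating point only, not the bit-exact integer version that belongs in the Specification section):

```python
BLOCKS_PER_YEAR = 52595
GROWTH_FACTOR = 4.0 ** (1 / BLOCKS_PER_YEAR)  # caps continuous growth at +300%/year
HEADROOM_FACTOR = 8

def ebs_after_anchor(excessiveblocksize_0, mined_sizes):
    """Yield the EBS enforced on each block after AEBS_ANCHOR_HEIGHT.

    mined_sizes[i] is the size of block AEBS_ANCHOR_HEIGHT + 1 + i; the running
    `utilized` counter plays the role of count_utilized(ANCHOR_BLOCK, n, HEADROOM_FACTOR).
    """
    utilized = 0
    for size in mined_sizes:
        ebs = excessiveblocksize_0 * GROWTH_FACTOR ** utilized
        yield ebs
        if size > ebs / HEADROOM_FACTOR:  # block was "sufficiently full"
            utilized += 1

# Example: with a 32MB flat limit, only the 5MB block exceeds the 4MB threshold,
# so only one growth step is applied.
print(list(ebs_after_anchor(32_000_000, [1_000_000, 5_000_000, 100_000])))
```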

I have discussed this topic elsewhere with bitcoincashautist, and based on the following:

  • The maximum yearly increase has been increased to 4x (from 2x in a previous version)
  • Miners can always increase the limit regardless of this algo if need be
  • We start with the current 32MB block limit

I tentatively support this initiative.


I am still open to considering competing alternatives, especially stateless ones. But while no others appear, this one is good.


The need for coordination goes away if you let the EB be set by the software devs (based on the capability of the software) and your algo only has an effect on the max-mined-block-size.


During some more brainstorming with @mtrycz he proposed we’d want the following features:

  • stateless/memoryless
  • fast responding to txrate increases
  • allowed to decay slowly

and so I ended up rediscovering WTEMA (the other candidate for BCH’s new DAA), but our “target” here is block fullness (mined block size divided by the excessive blocksize limit), and “difficulty” is the excessive blocksize limit.

The algorithm is fully defined as follows:

  • y_n = y_0, if n ≤ n_0
  • y_n = max(y_(n-1) + γ (ζ x_(n-1) - y_(n-1)), y_0), if n > n_0

where:

  • n stands for block height;
  • y stands for excessive block size limit;
  • n_0 and y_0 are initialization values, so the limit will be flat at y_0 until height n_0+1;
  • x stands for mined block size;
  • γ (gamma) is the “forget factor”;
  • ζ (zeta) is the “headroom factor”, reciprocal of target utilization rate.
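Here’s the same update rule as a tiny Python sketch (floating point, for illustration only; the γ value below is just one way to cap the maximum rate near 4x/year using the r_max bound derived further below, not a final constant):

```python
ZETA = 8.0                                     # headroom factor: target fullness = 1/8
GAMMA = (4.0 ** (1 / 52595) - 1) / (ZETA - 1)  # example "forget factor", ~4x/year cap
Y0 = 32_000_000                                # y_0: flat limit before activation (bytes)

def next_limit(prev_limit, prev_block_size):
    """y_n computed from y_(n-1) and x_(n-1); never drops below y_0."""
    return max(prev_limit + GAMMA * (ZETA * prev_block_size - prev_limit), Y0)

# Example: an 8MB block (25% full) nudges a 32MB limit slightly upwards.
print(next_limit(32_000_000.0, 8_000_000.0))
```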

The core equation y_(n-1) + γ (ζ x_(n-1) - y_(n-1)) is very similar to the first-order IIR filter described in Arzi, J., “Tutorial on a very simple yet useful filter: the first order IIR filter”, section 1.2, and has all the nice properties of that filter.

Our proposed equation differs in 2 ways, in filter terms:

  • The input is delayed by 1 sample (x_(n-1) instead of x_n), so that we don’t have to recalculate the excessive block size limit when updating the block template;
  • The input is first amplified by the headroom factor.

This way the algorithm will continuously adjust the excessive block size limit towards a value ζ times bigger than the exponential moving average of previously mined block sizes.
The greater the deviation from target, the faster the response will be.
To illustrate, we will plot a step response:

[figure: step response plot]

We can observe that the response to a step change decreases according to exponential law, and that it reaches the target after a constant number of samples.

Because the mined block size can’t be less than 0 or greater than the excessive block size limit, the maximum deviation from the target is limited, so the rate of change is bounded.
The extreme rates of change can therefore be calculated from just the 2 constants γ and ζ, by plugging in y_(n-1) or the constant 0 as x_(n-1).
The rate of change is given by r = y_n / y_(n-1) - 1, and from that we can calculate the extremes:

  • r_min = -γ
  • r_max = γ (ζ - 1)

We can observe that for ζ = 2 the extremes would be symmetrical, i.e. r_min = -r_max, and it would provide headroom only for a +100% spike in activity since the point of stability would be at 50% fullness.
We instead propose ζ = 8, meaning target block fullness will be 12.5%.
With such a factor, the extreme rates of change will be asymmetric:

r_max = -7 r_min.
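A quick numeric sanity check of these bounds, plugging the two extremes into the core equation (the γ value is an arbitrary example, and the y_0 floor is ignored since the limit is assumed well above it):

```python
gamma, zeta = 0.0001, 8.0     # example values, not the proposed constants
y_prev = 32.0                 # some limit well above y_0, so the floor doesn't bind

def rate(x_prev):             # r = y_n / y_(n-1) - 1 from the core equation
    y_next = y_prev + gamma * (zeta * x_prev - y_prev)
    return y_next / y_prev - 1

assert abs(rate(0.0)    - (-gamma))           < 1e-12  # empty block     -> r_min = -γ
assert abs(rate(y_prev) - gamma * (zeta - 1)) < 1e-12  # 100% full block -> r_max = γ(ζ-1)
print(rate(0.0), rate(y_prev))                         # -0.0001, 0.0007: r_max = -7 r_min
```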

Here’s how it looks back-tested with 500kB initialization from genesis, with γ chosen to limit the max YOY increase as indicated in the legend.

Allowing the max blocksize (= EB) to decrease below 32MB would open BCH up to a chain split attack.

I don’t see any good reasons to drop below 32MB, and I think that should be kept as the minimum value below which no algorithm should drop. It is firmly in the realm of the manageable in terms of processing, all round.


Sorry, should’ve made this clear - the proposal would be to have a y_0=32MB config, so the limit would never drop below 32MB regardless of actual mined sizes. The plotted y_0=500kB config is just for info, so we can observe how it would’ve responded to historical data. If I’d plotted using the y_0=32MB config against historical data it’d just be a flat line and wouldn’t show us much :)


Update - working on new iteration.

The above approach would work but it has two problems:

  1. Minority hashrate could make the limit go up by mining at max_produced = EB (100% full blocks) while everyone else mines at a flat max_produced = some_constant. EB growth would be much slower, though, with the growth rate proportional to the hashrate mining 100% full blocks.
  2. Concern raised by @emergent_reasons: some network steady state would make the limit sit at about 10x the average mined sizes. This is fine now, but will it be fine if we get to 1GB blocks? The algo would then work to grow the limit to 10GB. This is because the above iteration targets a fixed %fullness.

The new iteration aims to address both by reducing zeta while introducing a secondary “limit multiplier” function applied after the main function.

To address 1. we must first reduce zeta. If it were exactly 2, then 50% of hashrate would have to mine empty blocks to prevent the other 50% mining max-size blocks from lifting the limit further - but that’s no good, because who’d want to mine at 0 for the greater good while there are fee-paying TX-es in the mempool? Reducing zeta to 1.5 would allow 50% honest hashrate to mine blocks at 33% fullness while the other 50% mines at 100% by stuffing blocks with 0-fee TXes.
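A quick arithmetic check of that 50/50 claim (the limit holds steady when the average of zeta * x equals the current limit, before any multiplier is applied):

```python
zeta = 1.5
limit = 1.0                     # normalize the current limit to 1
attacker_fullness = 1.0         # 50% of hashrate stuffs blocks to 100% of the limit
honest_fullness = 1.0 / 3.0     # 50% mines ~33% full blocks
avg_block = 0.5 * attacker_fullness + 0.5 * honest_fullness
print(zeta * avg_block)         # ≈ 1.0 -> zero net drift, the limit stays put
```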

That doesn’t give us much headroom. We could apply a flat multiplier (labeled alpha in figure below) to provide more room:

[figure: limit with a flat alpha multiplier applied]

To address problem 2. we make the multiplier variable and have it respond to the growth rate of the underlying zeta=1.5 function:

[figure: limit with a variable alpha multiplier applied]

This way, miners can allow rapid growth in a scenario where the limit is far below technological limits, or they can limit their max-produced to be conservative and only allow careful growth by moving it in smaller, stretched-out steps as technological limits are improved.

How it works: rapid growth means “alpha” expansion and extra room to accommodate the growth, while slower growth (or flat or degrowth) means alpha decays back to its lower bound.

Testing against actual BTC+BCH data:

Scenario where all miners remain at max_produced = 1MB:

Scenario where 10% of miners defect and mine at max, while 90% mine at 1MB:

Function Definition

I’ll just post the definition here; the detailed explanation and more back-testing and testing against other scenarios will be in the CHIP.

The function driving the proposed algorithm belongs to the exponential moving average (EMA) family and is very similar to the weighted-target exponential moving average (WTEMA) algorithm, which was the other good candidate for the new Bitcoin Cash difficulty adjustment algorithm (DAA).

The full algorithm is defined as follows:

  • y_n = y_0, if n ≤ n_0

  • α_n = α_0, if n ≤ n_0

  • ε_n = y_0 / α_0, if n ≤ n_0

  • ε_n = max(ε_(n-1) + γ (ζ min(x_(n-1), ε_(n-1)) - ε_(n-1)), y_0 / α_0), if n > n_0

  • α_n = min(α_(n-1) + δ max(ε_n - ε_(n-1), 0) / ε_(n-1) - θ (α_(n-1) - α_l), α_u), if n > n_0

  • y_n = max(ε_n α_n, y_0), if n > n_0

where:

  • y stands for excessive block size limit;

  • x stands for mined block size;

  • n stands for block height;

  • ε (epsilon) is the “control function”;

  • α (alpha) is the “limit multiplier function”;

  • n_0, α_0, and y_0 are initialization values;

  • α_l and α_u are the limit multiplier’s lower and upper bounds;

  • γ (gamma) is the control function’s “forget factor”;

  • ζ (zeta) is the control function’s “asymmetry factor”;

  • δ (delta) is the limit multiplier function’s growth constant amplifier;

  • θ (theta) is the limit multiplier function’s decay constant.
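For readers who prefer code, here’s a direct transcription of the definition above as a per-block update (floating point and illustrative only - the constant values below are placeholders, not the proposed values):

```python
GAMMA = 0.0001               # γ: control function's forget factor (placeholder)
ZETA = 1.5                   # ζ: control function's asymmetry factor
DELTA = 10.0                 # δ: limit multiplier's growth constant amplifier (placeholder)
THETA = 0.0001               # θ: limit multiplier's decay constant (placeholder)
ALPHA_L, ALPHA_U = 1.0, 4.0  # α_l, α_u: limit multiplier's bounds (placeholders)

def step(eps_prev, alpha_prev, x_prev, y_0, alpha_0):
    """One block: (ε_(n-1), α_(n-1), x_(n-1)) -> (ε_n, α_n, y_n)."""
    eps = max(eps_prev + GAMMA * (ZETA * min(x_prev, eps_prev) - eps_prev),
              y_0 / alpha_0)
    alpha = min(alpha_prev
                + DELTA * max(eps - eps_prev, 0) / eps_prev
                - THETA * (alpha_prev - ALPHA_L),
                ALPHA_U)
    y = max(eps * alpha, y_0)
    return eps, alpha, y

# Example: one step from the initial state (ε_0 = y_0/α_0) with a 20MB block mined.
print(step(eps_prev=16_000_000, alpha_prev=2.0, x_prev=20_000_000,
           y_0=32_000_000, alpha_0=2.0))
```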


The proposal re-assigns a meaning to the existing “EB” property. The Excessive Blocksize is a property meant for a full node to protect itself. Re-purposing that property removes the ability of software platforms to indicate what is a tested-safe blocksize.

Please consider introducing a new property to avoid losing existing functionality.

How exactly does falling out of the network “protect” a node? It falls out either way - if its EB is too low it will fail gracefully, and if it’s too high and some internal limit breaks it will fail ungracefully; either way, it ends up falling out. We should come up with a “correctness limit”, well above EB and established by tests: the maximum size at which the node software is able to correctly validate a block (even if it takes too long to actually stay in sync) without hitting some buffer overflow bug or other undefined behavior.

Can you point me to these test results of well-known services? Software runs on hardware; even if BCHN established that it behaves well with 32MB on most hardware, the entity running it is expected to have adequate hardware. If it does not, what should it do - set the limit to 8MB and just fail on hitting the first >8MB block? Also, the entity running BCHN could have services attached to it that may start cracking well before 32MB, so according to you they should set their EB to some smaller value, and again not be able to sync? Or should they upgrade their stack to keep up with what everyone else is doing?

EB is not about the single node, it’s about the whole network having consensus on a minimum requirement for all nodes, mining or not - you have to be able to handle blocks at EB or you won’t be able to stay in sync in case miners produce even 1 block at max. If that limit is raised too high, then fewer nodes will be able to participate.

The proposal re-assigns a meaning to the existing “EB” property.

How I see it, it is you who has reassigned it to suit fantasies laid out in your CHIP.

EB has a rocky history, with the dark smoky-room decision to make it 32MB, which was kind of weird as that was never what the limit was for.
That is probably the reason for the confusion here, because every single other piece of usage, documentation and difference between implementations indicates that it has always been there to protect node operations.
Which is not a new idea; it started in 2016 or so when full nodes (Core, and XT only) started getting abused for not having a mempool policy, so someone sent so many transactions that full nodes crashed due to memory usage.
A node protecting itself is the first concept in decentralized software development. We learned over the years what that means.
BU crashed many times due to the same issue; its EB invention didn’t exist yet, and people had crafted a thin-block that expanded to a huge size, which crashed many BU instances. This was the main source of the “Excessive Block” invention: a block that was too big to be safely processed.

Dude. Calling names is not OK. You may not understand the history and we can get into differences of opinion, but calling me names is just not OK.