Asymmetric Moving Maxblocksize Based On Median

Just to illustrate, I plotted how the above algorithm would have behaved had it been in place since block 0. The algorithm wouldn’t have allowed block sizes above the green line.

100KB initialization

100KB initialization + one-time hard-fork bump to 500KB at block 175,000
Note that the algo wouldn’t have to be changed for the HF since it doesn’t have memory: it looks at whatever MAX was recorded in the previous block and picks up from there. The HF would just do a one-time override of the value written into the coinbase script, and the next block would calculate the max based on that and simply continue from there.

8MB initialization

Edit: after some discussion with @tom on Telegram I realized the approach of having the algorithm’s state written in every block’s coinbase and enforced by consensus is flawed. I was proposing a new consensus rule for no reason; it’s not the algorithm that’s the problem but writing the algorithm’s state into the coinbase and having the state of that coinbase field consensus-enforced. This would prevent someone syncing from scratch from just setting his -excessiveblocksize to whatever is the latest value; he’d have to verify the algorithm’s field even after it has served its purpose.

So, I will change the approach to an “excessiveblocksize autoconfig”, and if miners like it and make it the default, then everyone else could either snap to the algorithm as well, or set a constant -excessiveblocksize to some size above whatever the algorithm could produce.

Ok, new approach, where the algorithm would be an extension of -excessiveblocksize.
If a node specifies -autoebstart <AEBHeight> then it would enforce a fixed EB (like it does now) until AEBHeight, from where the node’s EB would start slowly growing whenever blocks are more than 12.5% full. It doesn’t matter whether a block is 50% or 20% full: the growth rate per block is the same for anything above the threshold, so whatever gets mined, growth is capped at 2x/year, and the yearly rate will land in the 0%-100% range depending on the frequency of blocks above the 12.5% fullness threshold.
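
To make that rule concrete, here is a minimal sketch of the per-block update as I understand it; the names are illustrative only, and the constants follow the text above (2x/year cap over ~52595 blocks, 12.5% fullness threshold):

```python
# Minimal sketch of the per-block EB growth rule described above.
# All names here are illustrative; constants follow the text (2x/year cap,
# ~52595 blocks/year, 12.5% fullness threshold).

BLOCKS_PER_YEAR = 52595
MAX_YEARLY_GROWTH = 2.0
GROWTH_PER_BLOCK = MAX_YEARLY_GROWTH ** (1.0 / BLOCKS_PER_YEAR)
THRESHOLD_FULLNESS = 0.125  # blocks more than 12.5% full trigger growth

def next_eb(current_eb, mined_block_size):
    """EB to enforce on the next block, given this block's EB and mined size."""
    if mined_block_size > THRESHOLD_FULLNESS * current_eb:
        return current_eb * GROWTH_PER_BLOCK  # fixed growth factor, regardless of how full
    return current_eb                         # below threshold: carry over unchanged
```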

Nodes could coordinate a switch back to a flat EB (obviously it would have to be greater than previously mined block sizes), or they could coordinate shifting the algo start time. So if we want to tweak it as we go, we could still coordinate a change of defaults on upgrade days without making exceptions in the code; it’d be just a change of default config. But if we ever stop doing coordinated upgrades then sticky defaults will act like a dead-man switch, giving everyone assurance that EB can’t ever get stuck or go down. I mean, it could: some future people could SF to remove it, but action would be required, right? No action would mean it keeps growing. In '17 it was the opposite: action was required to grow, and no action meant growth would stop.

I plotted some hypothetical scenarios. Imagine Satoshi had the algo right from the start with EB=500KB (red line). Then he’d HF to 1MB in 2010 but keep the algo (zoomed in, the '10 HF would change the curve from red to green), so in response to adoption the EB would move up to adjust to demand. He’d then disappear and we’d be fine, and maybe the wars would have been about soft-forking to remove the algo, hah, because it’d keep growing and reach 8MB in '17 even if actual mined blocks had always stayed below 1MB.

Then in '17 we’d change the config to 8MB and bump the curve further up, then in '18 we’d bump it to 32MB. The algo would only get us up to 32.27MB today, because our actual usage had very few blocks >12.5% full that would’ve triggered the slight increases, so if we did tests with 256MB we could bump it up again in '24 to something like 64MB and the algo would pick up from there.

Made a little program to calculate this quickly, can be found here: ac-0353f40e / Autoebs · GitLab

PS how about “Monotonic Excessive Blocksize Adjuster (MEBA)” for the name :slight_smile:

Nice evolving thought here!

Some comments:

  1. Any algorithm is fine by me as long as it doesn’t concern protocol rules and, as a result, participation remains optional and voluntary.
  2. Changes in such an algorithm should probably follow the established capacity of widely used infrastructure. So the BCHN node currently ships with EB=32MB because we know it can handle that. When that moves to 50MB or 64MB at some time in the future, again based purely on the software being able to handle that, your algorithm may become more aggressive.
    You can make this internally consistent by having it simply respond to the user-set EB, which for most is the software-default EB.
  3. Any changes to the algo have nothing to do with the yearly protocol upgrades. I would even suggest staggering them or otherwise de-coupling any changes to make clear that they are not related: such auto-max-generated sizes are not part of the protocol and thus don’t coincide with said upgrades.
    For instance you could do changes in January, based on the idea that the software has been released by mid-November and many will have upgraded.
  4. (repeat from my Telegram message). I expect that when blocks get considerably bigger we will see a separation of block-validation and block-building: two pieces of software that communicate. In fact, the block template created by the full node is already partly ignored by mining software, and the duplication of effort there adds up the larger a block gets.
    So, a mining software that gets the transactions as they come in from the full node and periodically builds a new block to mine based on the mining software’s local mempool. This separation suddenly allows a lot more power to the mining operator in the form of configurability.
    Mining software can suddenly start selecting transactions based on reaching a certain amount of fee income.
    Miners are really the customer of this, and features are enabled based on merit. Does this increase the paycheck of the miner?
    So, in the end, any such ideas as you write here follow the same market: you can build it, but the ones to try to sell it to are the miners. Do they actually want to use it? Does this give them value?
    I honestly don’t know, which is why I can’t really comment more on this thread. I’m not the customer of this new iteration of this feature.

Just my 2 cents here:

This seems like a good idea; it should decrease potential contention around yearly upgrades.

As long as these changes do not produce a maximum hard limit, but only a soft/suggested limit that can easily be overridden by config files, it will be OK.


I removed the part where it would introduce a new rule. It’s the same EB, but adjusted upwards by those who decide to stick to the algo. Those running with EB=32 and those with EB=32+algo would be compatible for as long as the majority of hashpower doesn’t produce a block that is >32MB and <(32+algo_bonus). If a >32MB chain were to dominate, then those running with EB32 could either bump it up manually to some EB64 and have peace until the algo catches up to 64, or just snap to the algo too.

Changes in such an algorithm should probably follow the established capacity of widely used infrastructure. So the BCHN node currently ships with EB=32MB because we know it can handle that. When that moves to 50MB or 64MB at some time in the future, again based purely on the software being able to handle that, your algorithm may become more aggressive.
You can make this internally consistent by having it simply respond to the user-set EB, which for most is the software-default EB.

If need be, a faster increase could be accommodated ahead of being driven by the algo, by changing the EB config so nodes would then enforce EB64(+algo_bonus). It wouldn’t make the algo more aggressive; it would just mean it won’t do anything until mined blocks start hitting threshold_size = EB/HEADROOM_FACTOR.

Any changes to the algo have nothing to do with the yearly protocol upgrades.

I see what you mean, but IMO it’s more convenient to snap the change in EB to protocol upgrades, to have just one “coordination event” instead of two and to avoid accidentally starting reorg games.

Mining software can suddenly start selecting transactions based on reaching a certain amount of fee income.
Miners are really the customer of this, and features are enabled based on merit. Does this increase the paycheck of the miner?

It would be nice to suggest some good (optional) policy for mined block size, one that would maximize revenue, minimize orphan risk, and account for miner-specific capabilities, risk tolerance, and connectivity. Right now it seems like it’s mostly a flat 8MB, and no miner has bothered to implement such an algo himself even though nothing is preventing him from doing so.


In the meantime I had a few other discussions that might be of interest:

Some backtesting against Ethereum (left - linear scale, right - log scale) in trying to understand whether headroom (target “empty” blockspace, set to 87.5%) is sufficient to absorb rapid waves of adoption without actually hitting the limit:

The red line is too slow, it doesn’t reach the target headroom till the recent bear market.

Here’s some more back-testing against BTC+BCH, assuming the algo was activated at same height as the 1MB limit (red- max speed 4x/yr, orange- 2x/yr):

It wouldn’t get us to 32MB today (because the blocks in the dataset were all below 1MB until '17, otherwise it actually could have), but it would have gotten us to 8MB in 2016 even with mined blocks <1MB!

The thing I really like is that it “saves” our progress. If there was enough activity and network capacity in the past, can we assume the capacity is still there and that the activity could come rushing back at any moment? See how it just picks up where it left off at the beginning of 2022 and continues growing from there, even though the median block size was below 1MB, because the frequency of blocks above the threshold (1MB for EB8) was enough to start pushing the EB up again (at a rate slower than max, though). The algo limit would only have cut off those few test blocks (I think only 57 blocks were bigger than 8MB).


New CHIP

I believe this approach can be “steel-manned” and to that end I have started drafting a CHIP; below is a 1-pager technical description, and I would appreciate any feedback: is it clear enough? I’ll create a new topic soon. Any suggestions for the title? I’m thinking CHIP-2022-12 Automated Excessive Blocksize (AEB).

Some other sections I think would be useful:

  • Rationale
    • Choice of algo - jtoomim’s and my arguments above in this thread
    • Choice of constants (tl;dr I got 4x/yr as max speed from backtesting, target headroom- similar rationale as @im_uname 's above, and the combo seems to do well tested against historical waves of adoption)
  • Backtesting
    • Bitcoin 2009-2017 (1MB + algo)
    • BCH 2019-now (1MB + algo)
    • BTC+BCH full history, flat EB vs. auto EB, with EB config changes to match historical EB changes (0, 0.5MB, 1MB, 8MB, 32MB)
    • Ethereum
    • Cardano?
  • Specification - separated from math description because we need to define exact calculation method using integer ops so it can be reproduced to the bit
  • Attack scenarios & cost of growth

Technical Description

The proposed algorithm can be described as intermittent exponential growth.
Exponential growth means that the excessive block size (EBS) limit for the next block is obtained by multiplying the current block’s EBS by a constant growth factor.
Intermittent means that growth is simply skipped for some blocks, i.e. EBS carries over without growing.

The condition for growth is block utilization: whenever a mined block is sufficiently full, the EBS for the next block grows by a small, constant factor; otherwise it remains the same.
To decide whether a block is sufficiently full, we define a threshold block size as EBS divided by a constant HEADROOM_FACTOR.

The figure below illustrates the proposed algorithm.

[figure: illustration of the proposed algorithm]

If the current blockchain height is n, then the EBS for the next block n+1 will be given by the formula below (a code sketch follows the definitions):

excessiveblocksize_next(n) = excessiveblocksize_0 * power(GROWTH_FACTOR, count_utilized(ANCHOR_BLOCK, n, HEADROOM_FACTOR))

where:

  • excessiveblocksize_0 - configured flat limit (default 32MB) for blocks before and including AEBS_ANCHOR_HEIGHT;
  • GROWTH_FACTOR - constant chosen such that maximum rate of increase (continuous exponential growth scenario) will be limited to +300% per year (52595 blocks);
  • count_utilized - an aggregate function, counting the number of times the threshold block size was exceeded in the specified interval;
  • AEBS_ANCHOR_HEIGHT is the last block which will be limited by the flat excessiveblocksize_0 limit;
  • HEADROOM_FACTOR - constant 8, to provide ample room for short bursts in blockchain activity.
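
To make the formula above concrete, here is a small sketch of an equivalent iterative computation (my own illustration; the CHIP’s Specification section would pin down the exact integer arithmetic):

```python
# Sketch of the intermittent exponential growth rule, computed iteratively.
# Walking the chain and multiplying by GROWTH_FACTOR on every "utilized" block
# is equivalent to raising GROWTH_FACTOR to count_utilized(...) in the formula.

BLOCKS_PER_YEAR = 52595
GROWTH_FACTOR = 4.0 ** (1.0 / BLOCKS_PER_YEAR)  # caps continuous growth at +300%/year
HEADROOM_FACTOR = 8                             # threshold = EBS / 8, i.e. 12.5% fullness

def ebs_series(block_sizes, excessiveblocksize_0):
    """Return the EBS enforced on each block after AEBS_ANCHOR_HEIGHT,
    given the mined sizes of those blocks in order."""
    ebs = excessiveblocksize_0
    limits = []
    for size in block_sizes:
        limits.append(ebs)                      # limit enforced on this block
        if size > ebs / HEADROOM_FACTOR:        # block counted as "utilized"
            ebs *= GROWTH_FACTOR                # grow the limit for the next block
    return limits
```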

I have discussed this topic elsewhere with bitcoincashautist, and based on the following:

  • The maximum yearly increase has been increased to 4x (from 2x in a previous version)
  • Miners can always increase the limit regardless of this algo if need be
  • We start with the current 32MB block limit

I tentatively support this initiative.


I am still open to considering competing alternatives, especially stateless ones. But while no others appear, this one is good.


The need for coordination goes away if you let the EB be set by the software devs (based on the capability of the software) and have your algo only affect the max-mined-block-size.


During some more brainstorming with @mtrycz he proposed we’d want the following features:

  • stateless/memoryless
  • fast responding to txrate increases
  • allowed to decay slowly

and so I ended up rediscovering WTEMA (the other candidate for BCH’s new DAA), but our “target” here is block fullness (mined block size divided by the excessive block size limit), and the “difficulty” is the excessive block size limit.

The algorithm is fully defined as follows:

  • y_n = y_0 , if n ≤ n_0
  • y_n = max(y_{n-1} + γ·(ζ·x_{n-1} − y_{n-1}), y_0) , if n > n_0

where:

  • n stands for block height;
  • y stands for excessive block size limit;
  • n_0 and y_0 are initialization values, so the limit will stay flat at y_0 until height n_0+1;
  • x stands for mined block size;
  • γ (gamma) is the “forget factor”;
  • ζ (zeta) is the “headroom factor”, reciprocal of target utilization rate.

The core equation y_{n-1} + γ·(ζ·x_{n-1} − y_{n-1}) is very similar to the first-order IIR filter described in Arzi, J., “Tutorial on a very simple yet useful filter: the first order IIR filter”, section 1.2, and has all the nice properties of that filter.

Our proposed equation differs in 2 ways, in filter terms:

  • The input is delayed by 1 sample (x_{n-1} instead of x_n), so that we don’t have to recalculate the excessive block size limit when updating the block template;
  • The input is first amplified by the headroom factor.

This way the algorithm will continuously adjust the excessive block size limit towards a value ζ times bigger than the exponential moving average of previously mined block sizes.
The greater the deviation from target, the faster the response will be.
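
Here is a rough floating-point sketch of that update (my own illustration; a spec would use exact integer arithmetic). The value of γ is derived from the constants used in this post, via the r_max = γ·(ζ − 1) bound given further below and a ~4x/year cap over 52595 blocks:

```python
# WTEMA-style EB update: y_n = max(y_{n-1} + gamma*(zeta*x_{n-1} - y_{n-1}), y_0)

BLOCKS_PER_YEAR = 52595
ZETA = 8.0                                                  # headroom factor, target fullness 12.5%
GAMMA = (4.0 ** (1.0 / BLOCKS_PER_YEAR) - 1) / (ZETA - 1)   # caps growth at ~4x/year via r_max = gamma*(zeta-1)

def next_limit(y_prev, x_prev, y_0):
    """Limit for block n, from the previous block's limit y_prev and mined size x_prev."""
    y = y_prev + GAMMA * (ZETA * x_prev - y_prev)
    return max(y, y_0)  # never drop below the initialization value
```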
To illustrate, we will plot a step response:

[figure: step response of the proposed filter]

We can observe that the response to a step change decreases according to exponential law, and that it reaches the target after a constant number of samples.

Because the mined block size can’t be less than 0 or greater than the excessive block size limit, the maximum deviation from the target is limited, so the rate of change is bounded.
The extreme rates of change can therefore be calculated from just the two constants γ and ζ, by plugging in either y_{n-1} or the constant 0 as x_{n-1}.
The rate of change is given by r = y_n / y_{n-1} − 1, and from that we can calculate the extremes (a short derivation follows the list):

  • r_min = −γ
  • r_max = γ·(ζ − 1)
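
For completeness, here is the short derivation of those two bounds (my own working, ignoring the max(…, y_0) clamp), obtained by substituting the extreme inputs into the update equation:

```latex
% x_{n-1} = 0 (empty block):
y_n = y_{n-1} + \gamma(0 - y_{n-1}) = (1 - \gamma)\, y_{n-1}
  \;\Rightarrow\; r_{\min} = \frac{y_n}{y_{n-1}} - 1 = -\gamma
% x_{n-1} = y_{n-1} (100% full block):
y_n = y_{n-1} + \gamma(\zeta\, y_{n-1} - y_{n-1}) = \bigl(1 + \gamma(\zeta - 1)\bigr)\, y_{n-1}
  \;\Rightarrow\; r_{\max} = \gamma(\zeta - 1)
```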

We can observe that for ζ = 2 the extremes would be symmetrical, i.e. r_min = −r_max, and it would provide headroom only for a +100% spike in activity since the point of stability would be at 50% fullness.
We instead propose ζ = 8, meaning target block fullness will be 12.5%.
With such factor, the extreme rates of change will be asymmetric:

r_max = −7·r_min.

Here’s how it looks back-tested with 500kB initialization from genesis, with γ chosen to limit the max YOY increase as indicated in the legend.

Allowing the max blocksize (= EB) to decrease below 32MB would open BCH up to a chain split attack.

I don’t see any good reasons to drop below 32MB, and I think that should be kept as the minimum value below which no algorithm should drop. It is firmly in the realm of the manageable in terms of processing, all round.


Sorry, I should’ve made this clear: the proposal would be to use the y_0=32MB config, so it’d always be >32MB regardless of actual mined sizes. The plotted y_0=500kB config is just for info, so we can observe how it would’ve responded to historical data. If I’d plotted using the y_0=32MB config against historical data it’d just be a flat line and wouldn’t show us much :slight_smile:


Update - working on new iteration.

The above approach would work but it has two problems:

  1. Minority hashrate could make the limit go up by mining at max_produced = EB (100% full blocks), while everyone else mines at a flat max_produced = some_constant. EB growth would be much slower, though, with the growth rate proportional to the hashrate mining 100% full blocks.
  2. Concern raised by @emergent_reasons: some network steady state would make the limit 10x the average mined sizes. This is good now, but will it be good once we get to 1GB blocks? The algo would then work to grow the limit to 10GB. This is because the above iteration targets a fixed %fullness.

The new iteration aims to address both by reducing zeta while introducing a secondary “limit multiplier” function applied after the main function.

To address 1. we must first reduce zeta. If it were exactly 2, then 50% of hashrate would have to mine empty blocks to prevent another 50% mining max-size blocks from lifting the limit further, but that’s no good, because who’d want to mine at 0 for the greater good while there are fee-paying TXes in the mempool? Reducing zeta to 1.5 would allow a 50% honest hashrate to mine blocks at 33% fullness while the other 50% mines at 100% by stuffing blocks with 0-fee TXes.
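
A quick sanity check of that 33% figure (my own arithmetic): the limit is stationary when the average of ζ·x over mined blocks equals the current limit y, so with half the hashrate at 100% fullness and half at fullness f:

```latex
\zeta \left( \tfrac{1}{2}\, y + \tfrac{1}{2}\, f\, y \right) = y
\;\Rightarrow\; 1.5 \cdot \frac{1 + f}{2} = 1
\;\Rightarrow\; f = \frac{2}{1.5} - 1 = \tfrac{1}{3} \approx 33\%
```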

That doesn’t give us much headroom. We could apply a flat multiplier (labeled alpha in the figure below) to provide more room:

[figure: zeta=1.5 control function with a flat alpha multiplier]

To address problem 2. we make the multiplier variable and have it respond to the growth rate of the underlying zeta=1.5 function:

[figure: zeta=1.5 control function with a variable alpha multiplier]

This way, miners can allow rapid growth in a scenario where the limit is far below technological limits, or they can limit their max-produced to be conservative and only allow careful growth by moving it in smaller, stretched-out steps as technological limits improve.

How it works is: rapid growth means “alpha” expansion and extra room to accommodate the growth, while slower growth (or flat, or degrowth) means alpha decays back to its lower bound.

Testing against actual BTC+BCH data:

Scenario where all miners remain at max_produced = 1MB:

Scenario where 10% of miners defect and mine at max, while 90% mine at 1MB:

Function Definition

I’ll just post the definition here; a detailed explanation and more back-testing against other scenarios will be in the CHIP.

The function driving the proposed algorithm belongs to the exponential moving average (EMA) family and is very similar to the weighted-target exponential moving average (WTEMA) algorithm, which was the other good candidate for the new Bitcoin Cash difficulty adjustment algorithm (DAA).

The full algorithm is defined as follows (a code sketch follows the definitions below):

  • y_n = y_0 , if n ≤ n_0

  • α_n = α_0 , if n ≤ n_0

  • ε_n = y_0 / α_0 , if n ≤ n_0

  • ε_n = max(ε_{n-1} + γ·(ζ·min(x_{n-1}, ε_{n-1}) − ε_{n-1}), y_0 / α_0) , if n > n_0

  • α_n = min(α_{n-1} + δ·max(ε_n − ε_{n-1}, 0) / ε_{n-1} − θ·(α_{n-1} − α_l), α_u) , if n > n_0

  • y_n = max(ε_n·α_n, y_0) , if n > n_0

where:

  • y stands for excessive block size limit;

  • x stands for mined block size;

  • n stands for block height;

  • ε (epsilon) is the “control function”;

  • α (alpha) is the “limit multiplier function”;

  • n_0, α_0, and y_0 are initialization values;

  • α_l and α_u are the headroom multiplier’s lower and upper bounds;

  • γ (gamma) is the control function’s “forget factor”;

  • ζ (zeta) is the control function’s “asymmetry factor”;

  • δ (delta) is the limit multiplier function’s growth constant amplifier;

  • θ (theta) is the limit multiplier function’s decay constant;
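
For readers who prefer code, here is a hedged floating-point sketch of one step of the two-part update above (parameter values are deliberately left as inputs; the CHIP will fix the constants and exact integer arithmetic):

```python
# One block step of the control function (epsilon) plus limit multiplier (alpha).
# Implements the equations above for n > n_0; all parameters are passed in.

def next_state(eps_prev, alpha_prev, x_prev,
               y0, alpha0, gamma, zeta, delta, theta, alpha_lo, alpha_hi):
    """Return (eps_n, alpha_n, y_n) given the previous state and mined size x_{n-1}."""
    # Control function: WTEMA on the clamped mined size, floored at y0/alpha0.
    eps = max(eps_prev + gamma * (zeta * min(x_prev, eps_prev) - eps_prev),
              y0 / alpha0)
    # Limit multiplier: grows with epsilon's relative growth, decays toward alpha_lo,
    # and is capped at alpha_hi.
    alpha = min(alpha_prev + delta * max(eps - eps_prev, 0) / eps_prev
                - theta * (alpha_prev - alpha_lo),
                alpha_hi)
    # Excessive block size limit: control function times multiplier, never below y0.
    y = max(eps * alpha, y0)
    return eps, alpha, y
```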


The proposal re-assigns a meaning to the existing “EB” property. The Excessive Blocksize is a property meant for a full node to protect itself. Re-purposing that property removes the ability of software platforms to indicate what is a tested-safe blocksize.

Please consider introducing a new property to avoid losing existing functionality.

How exactly does falling out of the network “protect” a node? It falls out either way: if its EB is too low it will fail gracefully, and if its EB is too high but some internal limit breaks then it will fail ungracefully; either way it ends up falling out. We should come up with a “correctness limit”, well above EB and established by tests: the max size at which the node software is able to correctly validate a block (even if it takes too long to actually stay in sync) without hitting some buffer overflow bug or something else that would result in undefined behavior.

Can you point me to these test results of well-known services? Software runs on hardware; even if BCHN established that it behaves well with 32MB on most hardware, the entity running it is expected to have adequate hardware. If it does not, what should it do: set the limit to 8MB and just fail on hitting the first >8MB block? Also, the entity running BCHN could have services attached to it that may start cracking well before 32MB, so according to you they should set their EB to some smaller value, and again not be able to sync? Or should they upgrade their stack to keep up with what everyone else is doing?

EB is not about the single node; it’s about the whole network having consensus on a minimum requirement for all nodes, mining or not: you have to be able to handle blocks at EB or you won’t be able to stay in sync in case miners produce even one block at the max. If that limit is raised too high, then fewer nodes will be able to participate.

The proposal re-assigns a meaning to the existing “EB” property.

How I see it, it is you who has reassigned it to suit fantasies laid out in your CHIP.

EB has a rocky history, with the dark smokey-room decision to make it 32MB, which was kind of weird as that was never what the limit was for.
That is probably the reason for the confusion here, because every other piece of usage, documentation and difference between implementations indicates that it has always been there to protect node operations.
Which is not a new idea: it started in 2016 or so, when full nodes (Core, and XT only) started getting abused by not having a mempool policy, so someone sent so many transactions that a full node crashed due to memory usage.
A node protecting itself is the first concept in decentralized software development. We learned over the years what that means.
BU crashed many times due to the same issue; its EB invention didn’t exist yet, and people had crafted a thin-block that expanded to a huge size which crashed many BU instances. This was the main source of the “Excessive Block” invention: a block that was too big to be safely processed.

Dude. Calling names is not ok. You may not understand the history and we can get into differences of opinions, but calling me names is just not Ok.

It has existed since Satoshi first set it to 1MB; it was just called a different name.

And what if the node just dropped out of the network because it refused to attempt blocks over some limit? How is that practically different from crashing out, from the PoV of the node or from the PoV of the rest of the network?

Against what, staying in consensus? It either crashes, or drops out onto a dead chain if its EB is too low.

I didn’t call you any names. Your CHIP is not your person; it’s not good to get attached to ideas. I’m attacking your ideas laid out in the CHIP, not you. Same as you’re attacking my ideas about how the EB should get adjusted, and not attacking me.


The Excessive Blocksize is a property meant for a full node to protect itself.

In every interpretation of EB, the initial protective response of a node has been to not extend the chain which features a block with size > EB.

The complex emergent consensus algorithm within which this “excessive blocksize” concept was born, was not adopted by the market.

Indeed there was a paper claiming to have basically invalidated that approach:

https://www.researchgate.net/publication/321234622_On_the_Necessity_of_a_Prescribed_Block_Validity_Consensus_Analyzing_Bitcoin_Unlimited_Mining_Protocol

Either way, Bitcoin Cash ended up adopting EB as a term for the hard limit above which blocks are firmly rejected.

I suggest that if you want an alternative definition, you propose it rigorously. If you want the same definition that BU introduced when they coined the term, then at least attempt to refute the above paper’s challenge to the security claims of that algorithm (which, apart from BU, no other full node in Bitcoin Cash adopted, and I think not even any other full node of any cryptocurrency that I’m aware of). Notably, no one from BU mounted a rigorous defense of the emergent consensus algorithm or a refutation of that paper.


Even Andrew Stone seems to be in favor of a prescribed algo, here’s a quote from my recent talk with him on BU Telegram:

EC is more a recognition of reality not an algorithm. If you are considering EC verses an algorithmic max block size like you proposed for BCH, go with the alg. The EC reality is there anyway… hopefully never employed because a dynamic schelling point means nobody has to disrupt the network to raise the block size.

By the way, Bitcoin Unlimited has implemented exactly the above proposed algorithm for their new blockchain: The adaptive block size algorithm of Nexa | by jQrgen | Jan, 2023 | Medium


Interesting interpretation. I can see the appeal of this limited idea as it’s not entirely wrong, but it is indeed entirely missing the point.
EB has from day 1 been about dismissing a block before it is validated. This step is important. It is, and always has been, about protecting the node from doing work (and maybe spending too much memory, so it gets killed).

All nodes implementing this indeed don’t even try to validate a block that is excessive in size.

You are adding a completely different concept here; BU indeed also had an “AD” concept which they liked, but miners didn’t, and for good reason it was rejected. It feels off-topic to add this concept here, as the BCH 2017 hard fork specifically uses EB, while this “emergent consensus” has never been adopted by anyone but BU.

I think if you look up what “excessive” means in the dictionary, you’ll notice that it falls neatly in line with the history and designs that I’ve reminded you guys about in the last messages. I’m not introducing new definitions of existing terms. This is all historical; BU’s Bitcoin Forum is still up. You can look it up.

I think the proposal here is doing this. So we don’t have to argue about what it is today.

And if BCA can avoid using that EB variable and use something else (like a max-block var, which makes MUCH more sense), then we suddenly don’t have to have any arguments. They can run at the same time. It would be nice if harsh words no longer had to be spoken because of disbelief in the merit of someone else’s ideas.

Yes, that’s how it’s implemented: src/validation.cpp · master · Bitcoin Cash Node / Bitcoin Cash Node · GitLab

Same as how the original 1MB limit was implemented: cleanup, · bitcoin/bitcoin@8c9479c · GitHub

“Day 1” was 2010-09-12.

Which is why I’m having trouble understanding your claim that BCH “invented” this EB limit. We just relabeled it. It’s the same thing as the original 1MB limit. Sure, you can restart your node and change it because it’s a configurable parameter now, and the BU node even allows changing it at run-time. It has the same impact.

Agreed, failure to reject a block before attempting to load some too-big block candidate would be an implementation bug.

No, the whole point of the proposal here is to fill in the blank:

Out of scope: actual coordination

Well, the coordination method proposed here is to use on-chain size data as a signal to everyone, so they’re sure to be moving their limit at the exact same time and to the exact same value as everyone else, without the “you first, no, you first” problem.

Regarding the “max produced” limit:

That can be adjusted at a miner’s whim with no need for coordination; if they want an algorithm to help them optimize their mining, nothing is stopping them from implementing whatever they want, no need to coordinate with anyone.

Ok, now that we’ve gone through the same old argument for the Nth time, here’s a little update: I implemented a few more algorithms to see how they compare (a sketch of the median variant follows the list):

  • single median, like Pair’s (365 day median window, 7.5x multiplier)
  • dual median, like im_uname’s OP here (90 and 365 day windows, but 7.5x multiplier)
  • wtema with zeta=10 and 4x/yr max, my old iteration
  • wtema with zeta=1.5 and 4x/yr max, multiplied by fixed constant (5x)
  • wtema with zeta=1.5 and 4x/yr max, multiplied by a variable constant (1-5x) - my current proposal
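
For reference, a rough sketch of the single-median variant as described in the list above (365-day window, 7.5x multiplier). The windowing details are my assumption; the dual-median variant would combine two such windows, and the WTEMA variants are sketched earlier in the thread:

```python
# Sketch of the single-median limit: multiplier times the median block size
# over a trailing 365-day window. Exact windowing is an assumption on my part.

import statistics

BLOCKS_PER_DAY = 144
WINDOW = 365 * BLOCKS_PER_DAY
MULTIPLIER = 7.5

def median_limit(block_sizes):
    """Limit for the next block, from the trailing window of mined block sizes."""
    window = block_sizes[-WINDOW:]
    return MULTIPLIER * statistics.median(window)
```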

BCH data:

Ethereum data (note: the size is that of “chunks” of 50 blocks, so we get a 10-minute equivalent and can better compare). The right fig is the same data but zoomed in on the early days; notice the differences in when the algos start responding and their speed. Also notice how the fixed-multiplier WTEMA pretty much tracks the medians but is smooth, and its max speed is limited, as observed in the period 547 to 1095.

1 Like