Asymmetric Moving Maxblocksize Based On Median

@tom

PS.

I will address your second and third arguments later, I am busy now.

Which miners do you mean exactly? The 3-5% of miners that supported the BCH fork in August 2017?

Or perhaps you mean the 1.5-2% of miners (combined ABC+BSV) that chose ABC instead of BSV?

Or maybe you mean the 1.5% of miners that chose BCHN instead of ABC in the 2020 split?

In the best scenario we are talking about 3% of total SHA256 miners. And by “miners” I mean all SHA256 miners; why should I omit BTC miners? There is no logical reason to do so.

So your claim that “In Bitcoin Cash the miners have always been our partners” is totally bogus.

I am sorry, but this is complete nonsense.

Lack of blocksize increases killed BTC. What we are doing is the reverse.

And keep in mind that we are not actually deciding about blocksize increases here. Talking about “increase” is a strawman.

We are merely automatically allowing for the blocksize to increase with (expected) increase of popularity of BCH without needing further discussions about it, nothing more.

The mathematics of the algorithm could be a point of contention.

I just don’t see a reason not to auto-scale the block size as the default, except development time. The upside is not relying on community ‘relationships’ to have miner blocksize consensus. Exploiting those who just ‘run default’ (which is 90%+ of end users of any software) for the benefit of the coin is just a winning move.

I see a bunch of strawmen about how ‘we shouldn’t set a hardcoded blocksize limit’ when you literally don’t have to, and that it’s ‘getting rid of end user choice’ when it literally isn’t - it’s actually giving more choice. You’re just making a continually adjusting default via an algorithm. You can still allow end users to set custom static block sizes. It can still be a ‘marketplace of choice’, just with a healthy default.


Spot on.

I generally agree, however I wouldn’t call it exploiting, because it is no more “exploiting” than exploiting the fact that water flows, fire burns, and particles with opposite electrical charges attract each other.

People following other people, like animals following other animals is just basic nature of the universe.

People follow the herd (community consensus) or the alpha (BCHN), just like any other animals in nature [for more details see my human herd theory, also the shorter version].

Yes, and it would be ideal if the accepted-blocksize adjusted auto-magically instead of requiring effort by the whole ecosystem every time we get close. Not only does it require effort, but as we grow we will be more inert and it will take more time to gather that effort. Having it auto-magically adjusted upwards would remove this existential risk forever. It is almost equivalent to entirely removing the cap, but safer, because it would happen at a predictable pace and everyone would be synced to the same value.

If it turns out too slow, that is still better than not moving at all (like now), and we have a chance to bump it up when we do the yearly hard-fork upgrades. So really, the risk of “getting it wrong” is not big either. Similar to the DAA: we had one that kind of worked, then it had problems, then we fixed them, and now it works great and we probably won’t need to touch it again - ever. It would be real nice if we had an auto-pilot solution for the blocksize cap, too.
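To make the idea concrete, an asymmetric, median-based adjustment (in the spirit of the thread title) could be sketched in a few lines. This is a hypothetical illustration only: the lookback window, the headroom multiplier, and the floor are invented parameters, not the actual CHIP under discussion.

```python
# Hypothetical sketch of an asymmetric, median-based max-blocksize
# adjustment. All constants are illustrative assumptions.
import statistics

WINDOW = 144 * 90        # look back ~90 days of blocks (assumed)
HEADROOM = 10            # cap = 10x the median created-blocksize (assumed)
FLOOR = 32_000_000       # never drop below the current 32 MB cap

def next_accepted_blocksize(recent_block_sizes, current_cap):
    """Return the new accepted-blocksize given recent created-blocksizes.

    Asymmetric: the cap can ratchet up when sustained demand pushes
    the median higher, but it never adjusts downward below the floor.
    """
    median_size = statistics.median(recent_block_sizes[-WINDOW:])
    candidate = int(median_size * HEADROOM)
    # Only move up, never down (the "asymmetric" part), with a floor.
    return max(current_cap, candidate, FLOOR)
```

With today’s mostly sub-megabyte blocks such a rule would simply sit at the 32 MB floor; only sustained organic growth of the median would ever move it.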

It will be good to lay some framework for the discussion and clarify a few things, so we don’t get lost discussing the same points ad nauseam. I’ll use your terms:

  • created-blocksize - the actual size of some block that has been mined and accepted,
  • accepted-blocksize - maximum size of blocks that some node will accept,

and define a few more:

  • threshold-blocksize - some arbitrary level that will quantify whether created-blocksize has got too close to accepted-blocksize,
  • orphan-blocksize - maximum size of blocks that a producing (mining) node will extend, this would be used for some soft-fork mechanics.

Of the four variables, only one defines a node’s consensus specification, and here I define that term as the set of block and transaction processing rules where incompatibility will lead to a hard-fork. Maybe my definition doesn’t match yours; let’s clarify it then, and please let’s not confuse accepted-blocksize with orphan-blocksize, because then we’ll talk past each other.
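As a quick illustration, the four terms and the natural ordering a sanely configured node should satisfy can be captured in a tiny policy object. The class and its invariant are hypothetical; the field names mirror the terms defined above.

```python
# Hypothetical container for the four blocksize variables defined in
# the discussion, with the ordering invariant between them.
from dataclasses import dataclass

@dataclass
class NodeBlocksizePolicy:
    created_blocksize: int    # size of blocks this node produces
    threshold_blocksize: int  # alarm level: created got "too close" to accepted
    orphan_blocksize: int     # largest block a miner will build on (soft-fork knob)
    accepted_blocksize: int   # consensus: largest block this node accepts at all

    def is_consistent(self) -> bool:
        # A producer should never create blocks it would itself orphan,
        # and orphaning policy only makes sense at or below the hard cap.
        return (self.created_blocksize <= self.orphan_blocksize <= self.accepted_blocksize
                and self.threshold_blocksize <= self.accepted_blocksize)
```

Only `accepted_blocksize` is consensus in the sense above; the other three are local policy that can differ between nodes without forcing a fork.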

Implementation details don’t matter here; whether our accepted-blocksize lives in a config file or as a hard-coded constant is of no importance, so please let’s stop pretending you need “a developer” to change a single value and recompile. If people had wanted to change it in the past, they could have - but they didn’t, because they knew that without broad coordination the consequence would be forking themselves off the network and wasting their hashes and, as a consequence, money. Miners are not in the mining business to lose money, and even when they’re earning money, margins are thin, so don’t expect them to take any risks. Every hash must be sold.

Ok, now let’s analyze some scenarios.

What if everyone had accepted-blocksize = 32MB? Mining blocks with any created-blocksize ≤ accepted-blocksize would orderly extend the chain. Any block ≤ 32MB would be OK: empty blocks, 1MB blocks, 32MB blocks.

What if a single producing node misconfigured it to accepted-blocksize = 33MB?

As long as it still produced blocks with created-blocksize ≤ 32MB it would orderly extend the chain when it lands a block, so it would orderly participate in the network. The problem is, if such a node mined a 33MB block, it would extend the chain - and get reorged, because others would reject it and eventually mine a ≤32MB block on top of which everyone would continue extending the chain. So the misconfigured mining node would be wasting hashes and, with that, losing money.

What if that single node happened to have 51% hash-power?

The rest of the ecosystem would outright reject 33MB blocks regardless of the PoW and automatically hard-fork away, so the node would end up extending a lonely chain, and both chains would at first slow down to 20-min blocks until the DAA adjusted them back to 10-min. The problem for this single 33MB node would be that it wouldn’t have a place to sell its mined coins, because the chain wouldn’t be recognized as a currency and wouldn’t have a market value, since it has exactly 1 user - the misconfigured mining node itself - so the problem of paying the electricity bill would make it re-evaluate its life choices. Not only that, but it would still see the other chain as valid and would be at risk of being automatically merged back into the main chain and losing all the block rewards. This is why no miner would risk adjusting it upwards without being damn certain everyone else will do the same. The miner needs to know not just his own accepted-blocksize, but also everyone else’s. Misconfiguration means the risk of irreversibly losing the value of block rewards.

Adjusting it downwards is similar. If it was misconfigured to 31MB, everything would work fine as long as everyone mined blocks with created-blocksize ≤ 31MB. The problem is, some other mining node could mine a 32MB block, and everyone else would happily extend that chain; again, the misconfigured 31MB node would be on a lonely chain, wasting electricity extending a chain nobody uses.
When it comes to reorg risk, it would be “everyone else” at risk from the lone but powerful miner messing up people’s transactions by reorging their 32MB blocks.
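The scenarios above all reduce to one acceptance rule, which this toy model makes explicit. It deliberately ignores PoW weight, timing, and reorg mechanics; it only shows how a single out-of-range cap splits a node off once a block outside its range appears.

```python
# Toy model of the misconfiguration scenarios: a node accepts a block
# iff it is within the node's accepted-blocksize. Purely illustrative;
# real chain selection also weighs accumulated proof-of-work.
MB = 1_000_000

def accepts(node_cap, block_size):
    return block_size <= node_cap

def follows_majority_chain(node_cap, majority_cap, block_sizes):
    """True if a node with node_cap accepts every block that the
    majority of the network (capped at majority_cap) would build."""
    return all(accepts(node_cap, s) for s in block_sizes
               if accepts(majority_cap, s))
```

Note the asymmetry: a node misconfigured upward (33MB) still follows a 32MB majority because it accepts a superset, while a node misconfigured downward (31MB) forks itself off the moment a full 32MB block lands.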

What if a non-producing node misconfigured the cap?

It would risk living in a parallel reality and taking all dependent services and their users with it. Let’s say a hard-fork split happened and now 2 chains exist, one with 1MB and one with 32MB. How does the node know which one is “real”, as in, which currency each chain maps to? It needs external info - economic and social. It needs to know what everyone else is thinking, it needs to know the market price, where it can be exchanged and with whom, which one we will call BTC and which one we will call BCH. Even removing the cap entirely is a decision which will impact which chain this node ends up on. It is the non-producing, economic nodes which determine the mapping from some consensus specification to a real-world currency recognized by users and businesses.

SHA-256 miners will extend ANY viable chain that has a market price. Miners don’t need to care about consensus specifications, because BTC locked it in forever, succeeded at becoming “digital gold”, and so ensures their business models. Whatever other SHA-256 coins exist are good for them too, and their survival is in miners’ interest too, because they increase the total reward for SHA-256 hashes; and if some minority coin like BCH hits a home run and starts growing - great. If not - no biggie, they can still sell their hashes to the BTC network, which pays well.

What if everyone removed the accepted-blocksize cap entirely?

Then we would be removing the blocksize from consensus specification, and the consequence would be that hashpower would decide on a soft-cap through orphan-blocksize and using soft-fork mechanics. Miners don’t like orphan risk because it irreversibly costs them electricity, but at least here the risk would be contained to just miners - and they could phone each other or w/e, without impacting the rest of the ecosystem (unless they mine too big blocks which the rest can’t handle). Do we want a cabal of miners phoning each other and essentially becoming a single entity or do we want decentralization where you just buy a miner, plug it in, and don’t need to phone anyone to know your blocks won’t get orphaned? Just mine according to consensus specification of the SHA-256 coin and you’re safe. My point is - miners WANT a stable consensus specification, they DON’T want to have to be burdened with extra-blockchain activities just to be certain their hashes won’t go to waste.

Playing soft-fork orphan games is messy, nobody likes it, and BCH doesn’t do that. BTC does soft-forks which are essentially consensual 51% attacks on itself because it has no other choice since consensus specification “lock-in” has essentially become a feature. The risk for BTC is reduced through convoluted signaling and activation mechanisms (95% activation threshold IIRC) that ensure the upgrade never gets rolled back by a reorg.

In this scenario there’s also the problem with non-producing nodes. Removing the cap would introduce the risk of miners mining too-big blocks and surprising non-producing service-backend nodes, which could then break - causing service downtime, reputation loss, user frustration, etc.

Ok, enough for now… I wanted to make an argument about recording accepted-blocksize into coinbase (regardless of the process we use to set it), but the post is already too long, some other day :slight_smile:


You may have missed my point several times in this thread: the idea of having some suggested blocksize is a great idea to me, as long as it is not a protocol rule but a suggestion. Make the defaults increase according to some algorithm, I don’t care.

We already agreed on that, no need to keep on arguing that point, Shadow.

What some people are pushing for is a change that requires the software devs to agree before the market can break that rule. You have never responded to that danger: the lock-in this generates, where capture of a few software devs can stifle growth, just like on BTC.

What is wrong with the idea of a suggested max block size (a default parameter that someone claimed no miner changes anyway) being increased automatically? (see earlier post).
You never answered that either. You keep arguing a point that is not in dispute.

So you mean exactly the “suggestion” we are having right now. This is already in place.

Suggestions do not work, however, because every time a suggestion is considered seriously, it requires further discussion. Like the one we are having now, for example. The discussions cause contention. Like the one we are having now. Contention causes splits and problems.

So how about we do the reverse?

We establish a hard-coded, time-based automated increase plan to allow the max blocksize to scale on its own, and then discuss only when it needs to be tweaked because there is a problem?
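A time-based plan like this could resemble BIP101-style scheduled growth, where the cap doubles on a fixed calendar schedule. The constants below (32 MB start, doubling every two years) are purely illustrative assumptions, not the poster’s concrete numbers.

```python
# Hypothetical sketch of a hard-coded, time-based increase plan in the
# spirit of BIP101's scheduled growth. Constants are assumed for
# illustration only.
START_CAP = 32_000_000          # 32 MB at activation (assumed)
DOUBLING_PERIOD_DAYS = 365 * 2  # double every two years (assumed)

def scheduled_cap(days_since_activation: int) -> int:
    """Max blocksize as a pure function of time since activation."""
    doublings = days_since_activation // DOUBLING_PERIOD_DAYS
    return START_CAP * (2 ** doublings)
```

Because the cap is a pure function of time, every node computes the same value with no coordination, which is exactly the “no further discussions needed” property argued for above.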

99% of the BCH community (except obvious trolls) already wants to scale BCH with demand, so this is the consensus now. We are not doing anything remotely controversial by implementing into the protocol what BCH fans always wanted.

Sounds like a much better idea.


LOL, really?

Some posts between someone who makes a high-speed but unused node and a guy who likes to discuss aggressively are going to cause a chain split? Really? I think you are giving either of us waaay more power than we have :roll_eyes:

ps. you didn’t answer the question.

Unfortunately, this discussion has evolved in a suboptimal direction, which several forum participants and moderators have noted.
Several posts have even been flagged by people who have found certain sections of its content offensive.
We strongly encourage the participants in the ongoing discussion to remain calm, reduce animosities and take paths that promote a respectful and well-intentioned exchange of ideas.
Please review your posts and edit any content that may offend others.
We have a policy aimed at making this forum a civilized place. Please visit our FAQ for more information.
Please avoid at any cost: name-calling and ad-hominem attacks.

Destroying is much easier than creating… And we need creators with the ability to work together.

Thanks for your understanding.


Which one? I remember I answered 3 or 4 main points of your posts.

I will gladly answer all questions.

The issue here is highly political, which is causing the evolution in the suboptimal direction.

As long as everybody is civil and strong arguments are being provided, everything should be alright, because we will make some progress.

I mean, you did not actually expect the economic future of the world to be decided without at least minor conflicts?

EDIT:

PS.

I am also noticing Reddit is becoming less active for development discussions and the devs are moving here. The trolls will unfortunately follow, so you should be prepared for worse.

A block size algorithm that is protocol hard-coded, not designed to be adjusted in the mining software (as in, turning it off by choice isn’t possible): No.

A block size algorithm that is soft-coded, designed to be adjusted in the mining software, and DEFAULTS to on: Yes.

(This applies to both the generated and allowed block size parameters, with the appropriate mathematics.)

Alternatives:
Suggesting a particular block size to miners via community channels and arguing about it: No.

Static adjustments to static block size defaults in mining software: This is already the current state of things anyway, correct? If so: No. (Proposing the automatic algorithm instead.)

The OP did not make it clear, IMO, whether it was supposed to be hard-coded or soft-coded. This is almost a different discussion depending on what the OP means, which is where the contention seems to be coming from.

I gave my positions above.

Correct, by “hard-coded” I did not mean “fixed in stone”, but just the default.

Sorry for being slightly unclear.

I am not the author of this proposal anyway, but this is the way that is being worked on by @bitcoincashautist, so I assumed it is obvious.

Just skimmed this thread - in order to save potential readers some time, I’d like to ask anyone who has trouble following the thread to do the following homework on a few things:

  1. Literally all consensus rules are "default"s. Sometimes they are adjustable literally by conf (activationtime, for example); at other times they’re easily adjusted by changing a few characters in the source code and recompiling. Why does nobody ever do this? What does it mean that a rule is consensus? Is it important or unimportant whether it’s “hardcoded” vs “softcoded” if it is consensus?

  2. What is a “softcap” blocksize limit vs a “hardcap”? Which limit’s adjustment is being discussed here? Which is consensus? Why is it consensus? Can it ever not be consensus? What do consensus rules mean to a miner?

  3. Why did the EB / AD mechanism (implemented by BU back in 2016; google if unfamiliar) fail utterly?

  4. EB alone is configurable. Why does no miner ever adjust it themselves?

  5. A certain other chain decided EB isn’t good enough, so they pulled out all the stops to make their miners “decide”. It went on a really stupid path full of spectacles. Why?

  6. Which among the following would you rather have, given the goal to keep blocks from getting full:
    6(i) Have loud debates over social media and developer channels every time the blocks get close to being full (note: they need to be loud, or you won’t have good participation/representation);
    6(ii) Have perpetual debates over social media, developer channels and miner telegram channels about the above;
    6(iii) Have something good enough in the background so the limit ceases to be a focus, and people can spend their effort on more productive things instead

In any case it’s not like we’re under grave pressure to expand blocksize limit, so parties who like to do so can continue to debate at their leisure.

I suspect that in the end if we got a good/safe enough CHIP through at some point, these debates will all look rather pointless in hindsight. Blocksize limit is just plumbing that should get out of the way for as many people as possible on the way to permissionless money, nobody should spend a minute more on it (and its consequences) than absolutely necessary.


This one is likely not well known;

3: the AD mechanism specifically was about jumping chains when a certain number of orphaned blocks were found that were bigger than EB allowed. In other words, the designers of this scheme wanted miners to start mining blocks they knew would be orphaned by some others. The client would then re-think that orphaning and jump to the ‘right’ chain after n (default 6) blocks to avoid a chain split.

Miners utterly rejected this, because ANY orphaning was unacceptable, and they made clear that they will talk among themselves to decide what blocksize they will limit themselves to. They realized that a technical solution doesn’t solve a social problem.
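The EB/AD rule described above can be sketched roughly as follows. Parameter names follow BU’s terminology (EB = Excessive Blocksize, AD = Acceptance Depth); the code itself is a simplification, not BU’s implementation.

```python
# Rough model of BU's EB/AD rule: a node rejects blocks bigger than its
# EB setting, but if the chain built on such an "excessive" block gets
# AD blocks deep, the node gives in and reorgs onto it.
DEFAULT_AD = 6  # BU's default acceptance depth, per the post above

def should_accept_chain(excessive_block_size, eb, depth_on_top, ad=DEFAULT_AD):
    """Accept a chain whose ancestry contains a block larger than EB
    only once at least AD blocks have been mined on top of it."""
    if excessive_block_size <= eb:
        return True                 # not excessive at all
    return depth_on_top >= ad       # excessive, but buried deep enough
```

The sketch makes the miners’ objection visible: during those first AD blocks a node is knowingly mining on a chain it may abandon, so some orphaning is built into the design by construction.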


Imaginary Username puts forward arguments similar to the ones I made, so I absolutely agree with those.

As a closing point I want to suggest that in order to convince miners to mine bigger blocks, we need to not just argue between non-miners, but actually show that mining bigger blocks is safe and they do not get orphaned (talk is cheap). This starts with a scalenet and ends with one miner actually mining bigger blocks on mainnet so other miners can see they are safe.
Naturally, this effort is useless until there is economic activity to actually fill those blocks.

Miners are our partners in this effort. We move together when both miners and software devs are satisfied changes in size are a good idea. In a soft way, to avoid hard rules that orphan blocks.

They don’t need convincing, they need something to fill those blocks with, and for that we need users. What’s the current accepted-blocksize value? What’s the current created-blocksize median? YTD, BCH had most blocks in the 200-300kB range, usage levels BTC had back in 2013/14. No wonder BCH’s market cap is at 2013/14 levels, too. There is plenty of headroom until we hit the 32MB cap. If miners need anything, it’s users actually generating enough transactions so they have something to fill those blocks with and increase the total value of the SHA-256 hash market.

I kind of get your point about it being a configurable option - even if 100% of the miners would need to coordinate to configure it the same. It could be a checkbox: “stick to the algo”, or a temporary override. The algo would then pick up from whatever override was mined.

Why bother making the algorithm opt-in and not opt-out? Unless I misunderstood your statement.

Soft-coded only means it’s end-user configurable (not developer, or ‘developer’ via recompiling); configurable doesn’t mean it’s opt-in or opt-out.

All default values of soft-coded parameters in any program are by nature opt-out, not opt-in, unless the program forces the end user to configure them (without copy-pasting from some tutorial) before the program can run.

The algorithm should be soft-coded and opt-out.
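The “soft-coded, default-on, opt-out” idea amounts to ordinary config handling, which can be sketched as follows. The option names and the legacy fallback value are invented for illustration, not taken from any node software.

```python
# Sketch of a default-on, opt-out algorithmic cap: the algorithm runs
# unless the operator explicitly overrides or disables it. Option
# names are hypothetical.
def effective_blocksize_cap(config: dict, algo_cap: int) -> int:
    """Return the cap a node actually enforces.

    "use-auto-blocksize" defaults to True (opt-out), and a static
    override wins only if the operator explicitly sets one.
    """
    if config.get("static-blocksize-cap") is not None:
        return config["static-blocksize-cap"]       # explicit override
    if config.get("use-auto-blocksize", True):      # default ON
        return algo_cap
    return 32_000_000                               # legacy fallback (assumed)
```

An empty config - the 90%+ “run default” case discussed earlier in the thread - lands on the algorithmic cap without the operator doing anything.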

I concur.

After all, we mostly agree here that miners are passive players in this ecosystem, so they should only be bothered to change the values if there is an emergency or something is not working right.

So opt-out (meaning ON by default) would seem to be the logical solution.

Of course we do not want to break anything and cause problems, so extensive testing on testnet has to be done.

Thanks for your understanding.
I don’t want to hijack this exciting discussion with merely an administrative matter.
But we can open a parallel post and talk about this in more detail.
I don’t think that a suboptimal direction is the consequence of the issue being highly political, nor do I think that conflicts have to be avoided; on the contrary, it is better to face them and do everything possible to resolve them.
Perhaps the problem lies in the fact that sometimes the boundaries between conflict and violence become blurred. That is a warning that we cannot let pass.


Just adding this for future reference:

I think the main issue with BSV in general is that it’s not quite possible to predict the blockchain size at all. The current limit seems to be 4 GB and people were actively testing to hit it. So potentially it’s 4 * 144 = 576 GB of blockchain data every day. Plus indexes. Plus services like block explorers run their own database (we run even two for extra speed and analytics). So for Blockchair this is potentially up to 60 terabytes a month just with the current limit (which is expected to get increased).
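The arithmetic above checks out, and the gap from raw chain growth to the ~60 TB/month figure can be bridged with an overhead multiplier for indexes plus the two extra analytics databases. The 3.5x multiplier below is an assumed figure for illustration, not Blockchair’s actual ratio.

```python
# Back-of-the-envelope check of the storage figures in the post above.
# The 3.5x overhead multiplier (indexes + two analytics databases) is
# an assumption chosen to illustrate how ~60 TB/month is reached.
block_limit_gb = 4          # current BSV limit per the post
blocks_per_day = 144        # one block per ~10 minutes

raw_per_day = block_limit_gb * blocks_per_day   # GB/day of raw blocks
raw_per_month = raw_per_day * 30                # GB/month of raw blocks
with_overhead = raw_per_month * 3.5             # indexes + extra DBs (assumed)

print(f"{raw_per_day} GB/day, {raw_per_month} GB/month raw, "
      f"~{with_overhead / 1000:.1f} TB/month with overhead")
```

So even before the limit is raised again, a fully utilized 4 GB cap already puts an explorer into tens of terabytes of new storage per month.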

The second important issue is that if it were some useful data, like real transactions, real people would come to block explorers to see their transactions, businesses would buy API subscriptions, and we’d be able to cover the disk costs, the development costs, the cost of trying to figure out how to fit 10 exabytes into Postgres (not very trivial, I think), etc.

But the reality is that 99.99% or so of Bitcoin SV transactions are junk, so despite being the biggest Bitcoin-like blockchain with most transactions, Bitcoin SV constitutes only 0.3% of our visitor numbers and there are very few API clients using Bitcoin SV (0.2% of all API requests most of which are free API calls for the stats). Unfortunately, this doesn’t cover all these costs. So that’s why we can’t run more than 2 nodes, and even these two nodes will get stuck at some point because we’ll go bankrupt buying all these disks to store the junk data. But we’re trying our best :slight_smile:

With this amount of junk data I just don’t see a business model for a BSV explorer that would work in the long term (maybe an explorer run by a miner?). The same goes for exchanges, I think. If you have to buy 10 racks of servers to validate the blockchain, but you only have 10 clients paying trading fees, you’ll go bankrupt.
