2021 BCH upgrade items brainstorm

It was just a joke. I’m sorry that wasn’t clear.


FWIW, I got it and thought it was funny. :wink:


I am not very technical, but think about it from the perspective of onboarding any use case that might bring in a very high amount of usage:
1. Name: Default blocksize limit behavior to be adjustable
2. Description in a couple sentences what the change is technically:
Changing the default blocksize limit behavior to be dynamic; miners could still change this setting and set their own limit.
3. Problem it’s trying to solve
Even though the limit is currently adjustable, there can be some friction in changing the settings to allow larger blocks. Having the default behavior be an adjustable limit would give confidence to people and use cases that will require a very high number of transactions, and assurance that they will not be driven out by fees (a config sketch of today's manual adjustment follows this list).
4. Potential positive impact on ecosystem
Large businesses wanting to onboard a lot of users or a high number of transactions will be able to do so with higher confidence that fees will stay low (this would answer the concerns of heads of state wanting to transact on BCH, or of large businesses with millions of users building their infrastructure on top of BCH).
5. Potential negative impact on ecosystem
Could be abused by some people if not implemented in the best way.
Could affect miner revenue (however, miners will always have the option to set their own limit manually, i.e. opt in).
The adjustable limit could exceed the network's actual capacity, or the guardrails constraining it could end up acting as a blocksize limit of their own.
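For context on "currently adjustable": node operators can already raise the limits by hand today; the friction is that each operator must do it manually. A minimal sketch, assuming a BCHN/Bitcoin ABC-lineage node (option names, units, and defaults vary by implementation and version; values here are purely illustrative):

```
# bitcoin.conf, illustrative values in bytes
excessiveblocksize=64000000   # accept blocks up to 64 MB
blockmaxsize=8000000          # produce blocks up to 8 MB when mining
```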

We introduced a new customisable dividend tool that is automated in the Zapit wallet, as demonstrated here. We had to hardcode a limit of 900 addresses due to the 50 unconfirmed-transaction chain limit (as each SLP transaction can have only 18 outputs).
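The arithmetic behind that hardcoded cap, for anyone following along (a trivial sketch using the figures quoted above; the constant names are ours):

```python
# Why the tool caps out at 900 addresses: at most 50 chained unconfirmed
# transactions, each paying out to at most 18 SLP recipients.
MAX_UNCONFIRMED_CHAIN = 50   # relay-policy chain limit at the time
SLP_RECIPIENTS_PER_TX = 18   # recipients per SLP transaction, as quoted above

max_recipients = MAX_UNCONFIRMED_CHAIN * SLP_RECIPIENTS_PER_TX
print(max_recipients)  # 900
```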

Most token projects with some sort of utility will have to send dividends/airdrops to well over 900 addresses, which isn't possible due to the current 50-transaction limit. Of course, one can always send them out manually, but that's a huge blow to user experience, which is what we at Zapit are focusing on.

The dividend tool can be used by any regular individual without much knowledge about BCH, since everything is automated, but the 900-address limit is still a problem if larger projects want to use the tool.

We also reward users through a payment interface that handles millions of transactions per day. To tap into that market, we need to make sure that the limit is raised before we move forward with onboarding more users.

As we want to send rewards every time users make a transaction with that payment interface, we face a problem whenever more than 50 users have to be rewarded within 10 minutes.


While the 50 limit will likely be worked on in the near future, isn't the limit 18^50 instead of 18*50? It's just slightly more complex arithmetic :smiley:
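For what it's worth, the quip checks out arithmetically if only chain depth is limited (and not total descendant count, which is an assumption here): a linear chain of payouts gives 18 × 50 recipients, while a full fan-out tree of depth 50 gives 18^50. A toy comparison:

```python
# Linear chain: 50 transactions deep, 18 recipients each.
linear = 18 * 50    # 900

# Fan-out tree: every output funds another 18-output transaction,
# down to the 50-deep chain limit, so leaves = 18**50.
tree = 18 ** 50     # about 5.8e62, rather more than 900
print(linear, tree)
```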

Name

Multiple OP_RETURNs

Description in a couple sentences what the change is technically

Change either the standardness rules regarding OP_RETURN, to allow multiple outputs with OP_RETURN in the same transaction, or the interpretation of OP_RETURN itself, to use a separator within a single OP_RETURN output.
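A minimal sketch of what the two variants could look like at the script level (the helper function and payloads are hypothetical, shown only to illustrate the distinction; 0x6a is the actual OP_RETURN opcode):

```python
OP_RETURN = 0x6a  # the OP_RETURN opcode value

def op_return_script(payload: bytes) -> bytes:
    # Minimal data push: a single length byte works for payloads under 76 bytes.
    assert len(payload) < 76
    return bytes([OP_RETURN, len(payload)]) + payload

# Variant 1: relax standardness so one transaction may carry several
# OP_RETURN outputs, e.g. one per protocol.
outputs = [
    op_return_script(b"protocol A payload"),
    op_return_script(b"protocol B payload"),
]

# Variant 2: keep a single OP_RETURN output, but treat each separate
# data push inside it as a boundary between protocol payloads.
single_output = bytes([OP_RETURN]) + b"".join(
    bytes([len(p)]) + p for p in (b"protocol A payload", b"protocol B payload")
)
```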

Problem it’s trying to solve

Making OP_RETURN based protocols interoperable.

Potential positive impact on ecosystem

By supporting multiple OP_RETURNs in a way that makes interoperable OP_RETURN protocols possible, we signal to developers interested in building such protocols that BCH wants their business. This would enable new use cases as well as empower existing ones, ultimately working towards more adoption and a stronger network effect for BCH.

Potential negative impact on ecosystem

Depending on developer preference, could be a point of contention.
Depending on implementation, could end up encouraging more data storage on-chain.
Depending on implementation, could disrupt existing OP_RETURN based protocols or tooling.


This feature has become rather urgent for our development. Is there an ongoing topic here about it, or should I create a new one? We'd really like to see this get included in May's fork, and I'd like to address any concerns or blockers asap.


Does this feature mean multiple OP_RETURNs?

I know someone will soon publish a more comprehensive suggestion on how to propose and move network upgrades forward. In the meantime, I think the TLDR is that if anyone has a specific requirement and believes it is a good idea for the network as well, they need to create and own a proposal for it. The proposal can have any level of detail on problem statement, implementation, RFC, etc., as long as it starts somewhere and the owner is committed to an iterative, exploratory process with no expectation of eventual inclusion. In other words, a shared process is our arbiter, and there is no single entity that can make it happen.


I’m preparing that now, with details of our protocol that has the requirement of multiple OP_RETURNs, so it would serve as a real-world use case. If that person wants to forward their suggestion on organizing proposals, we might be inclined to be their guinea pig.

– Ben Scherrey


Congratulations to all involved on successful May 2022 upgrade activation!

Let’s have a little update:

DONE (2021 and 2022)

Also, not mentioned above:

  • Native Introspection

2023 Candidates

Now being merged with CashTokens2.0

I overlooked this one while I was focused on Group, but now, with Introspection, I expect more development of public covenants, so this is really important: 160 bits is vulnerable to birthday attacks, so for big pots of money locked up it may become a real risk. We're actually behind BTC on this one, as SegWit enabled 32-byte contract hashes.
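For reference, the rough birthday-bound arithmetic behind that concern (generic collision estimates, nothing BCH-specific):

```python
# A collision on an n-bit hash takes roughly 2**(n/2) work (birthday bound),
# versus 2**n for a preimage. Relevant when an attacker can grind two
# different scripts that share the same contract hash.
hash160_collision = 2 ** (160 // 2)  # 2**80, ~1.2e24: conceivable for a well-funded attacker
hash256_collision = 2 ** (256 // 2)  # 2**128, ~3.4e38: far out of reach
print(f"{hash160_collision:.2e} vs {hash256_collision:.2e}")
```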

Additionally, and not mentioned above:

  • UTXO commitments
  • VM limits
  • tx size != 64
  • lock tx version field

Also, there's now some janitor work to do; I think I'm gonna make some PRs…

Update docs:

and CHIP list:


I think these are pretty straightforward and already have informal, untested consensus. I hope they are both pushed for early this cycle so we can get ahead of any remaining work and schedule them for inclusion. If anyone disagrees or knows of any reason why they shouldn't be, please raise your concerns so they can be addressed.

If someone has time and wants to help, updating, reaching out to, and double-checking with potential stakeholders would hopefully solidify things.


Some folks were asking about a BCH roadmap and I remembered this thread; it could work as an informal, loose roadmap (as if we could have any other kind, hah). Anyway, I thought I'd update it:

May 2023 Upgrade CHIPs

All implemented here:

Testnet4 is up and running; a node built off that MR must be started with -upgrade9activationtime=1662250000 (testnet).


It's that time of the year: only 1 CHIP is on track for May 2024.

I would've loved to see OP_MULDIV too, but the idea came in late and there wasn't enough time to fully research it and make the case for it, although it did get pretty far, which is nice.

UTXO fast-sync has been moving slowly; that one would've been great together with the adaptive blocksize limit. I hope it can be ready for '25.

May 2024 Upgrade CHIPs

CHIPs / ideas for next cycle(s)


If “Adaptive Blocksize Limit” is the only one,

does it make sense to clarify that it is NOT a consensus change? If a full node does not add the code, it is still pretty trivial for said full node to stay in consensus. We probably won't even see 10 MB blocks in the next year anyway, so taking no action already keeps a node in consensus.

This means that BCHN can ship this blocksize CHIP outside of the coin-upgrade cycle.

So, the question should be raised:

does skipping a year of “mandatory upgrade” not gain us more?

This is purely about the PR. Call it a year of building. Stability is good. Troutner was asking pointed questions about how ‘wormhole’ died due to a protocol upgrade. While the answer is more complex, stability is a selling point.

To re-iterate, this is purely PR, because the blocksize algo is not a hard fork: it does not cause nodes to fall out of sync if they don't implement it. A simple user-level setting is enough to stay in sync, and that setting doesn't even need to be changed for quite some time.

How about them PR gainz, can we do that? Stability sells. Let us build.

All valid, just one question:

Clarify to whom? Node devs? Node operators?

I think you and BCA already made it clear that this is not a hard forking “consensus” change.

Sticking to the established upgrade cycle is good PR regardless. :slight_smile:


Clarify to the world: to investors, to people interested in Bitcoin Cash.

"This year we decided to let the ecosystem build as we have already done X and Y and Z, which is already putting us well ahead of the competition. etc.

The point is: free positive publicity.

It'll probably be the Bitcoin Cash Foundation, BCHN, you, and BCH Podcast who will be the primary sources of news that outlets like Bitcoin.com, Coindesk, etc. pick up from. The first wave of news is the most important for setting those expectations, I think.


Would love to see if we can get some consensus on the lack of a consensus change :laughing: (and how to explain the why)

I think everyone's used to the software upgrade cycle by now, even if an upgrade is not consensus-level (like the May 2021 upgrade, which only changed TX relay rules and mempool limits).

It's still consensus-sensitive, since it changes the acceptance limit of the nodes that run it. Because of that, it's good to have nodes flip from the flat limit to the algo-adjusted limit at a predetermined time, so everyone has a clear window in which to prepare for the possibility of some test blocks being mined (either by implementing the algo or by bumping up the flat limit).

Right, non-algo nodes would just need to bump it to something like 33 MB, and that would probably be fine for a while, unless hash-rate decided to mine a sequence of 530 100%-full blocks, which would bring the limit to 33 MB; if “only” 50% of hash-rate did that, it would take a total of 1396 blocks to stretch it to 33 MB.

It is not, but someone may want to test whether nodes will really auto-adjust the limit, and for that they'd need about 25% of hash-rate: mine 1 block of 32 MB to trigger algo nodes to bump the limit to 32.0018 MB, then mine another block slightly greater than 32 MB before the algo decays back to 32 MB (a window of 4 blocks' time). Nodes still running with a flat 32 MB limit would fall out of sync until they bumped their limit to at least 32.0018 MB, or whatever other value they feel comfortable with.
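To sanity-check those figures, here is a toy model of the growth side of the algorithm. The per-block multiplier below is reverse-engineered from the 32 to 32.0018 MB step quoted above and is purely illustrative; the actual CHIP also has an elastic buffer and decay behavior that this sketch ignores:

```python
import math

# Toy model: the limit grows multiplicatively with each 100%-full block.
GROWTH = 32.0018 / 32.0  # ~1.0000563 per full block, from the step quoted above

full_blocks = math.log(33.0 / 32.0) / math.log(GROWTH)
print(round(full_blocks))      # ~547 with this toy constant; the thread's 530
                               # reflects the real algorithm's faster early growth
print(round(full_blocks) * 2)  # ~1094 if only half the blocks are full; the
                               # quoted 1396 also accounts for decay in between
```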


Why would you encourage miners to mine a 32 MB block when we have, on average, maybe 1 MB blocks?

That, personally, sounds like an even better reason to avoid making this into an “upgrade”, as that is just bad for the ecosystem.

Maybe it makes sense to highlight that what BCH needs right now is not more protocol features; it needs the wider ecosystem to build and catch up to the many features that are as yet unexplored.
The opportunity shows itself where there is no actually-breaking protocol change planned, and as such my suggestion is to take that opportunity and put at ease the various builders: this is a stable and still exciting platform to build on.

Mining a 32 MB block is very likely going to cause problems for various people's tools. Please do not suggest this seriously!