- 256-bit P2SH addresses
- P2SH addresses are currently 160-bit (see BIP16). I propose another standard script: “OP_HASH256 <32-byte data> OP_EQUAL”. These scripts should get the same consensus rule as BIP16 scripts, i.e., in addition to checking the hash, the last item on the input stack (the redeemScript) is evaluated. Moreover, wallets should support cashaddr addresses with type P2SH (1) and size 256 (3) and translate them to this kind of output script.
- Currently only 160-bit addresses are supported, which are prone to collision (birthday) attacks, especially in settings where P2SH scripts are used for smart contracts between two or more parties: finding a collision on a 160-bit hash takes only about 2^80 work. A malicious party could produce a collision, give the other party an innocent-looking public key that lets them fund the P2SH address, and later redeem it with a colliding script over which they have full control.
- Secure smart contracts without complicated key commitment schemes to avoid collision attacks. Very simple extension of a widely-used and well-tested script type.
- This is both a consensus change (soft fork) and a wallet infrastructure change. If wallets don’t support the new address format, they obviously cannot be used to fund a smart contract using the new addresses. This also affects withdrawal addresses on exchanges or hardware wallets. The wallet change doesn’t have to be deployed by all wallets at the same time, though, since a different wallet can be used to fund the smart contract address.
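A minimal sketch of how such an output script could be constructed (opcode byte values are from the Bitcoin script encoding; the helper names are mine, not part of the proposal):

```python
import hashlib

def hash256(data: bytes) -> bytes:
    """Double SHA-256, as computed by OP_HASH256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def p2sh256_output_script(redeem_script: bytes) -> bytes:
    """Build the proposed 'OP_HASH256 <32-byte hash> OP_EQUAL' locking script."""
    OP_HASH256, OP_EQUAL = 0xAA, 0x87
    digest = hash256(redeem_script)
    assert len(digest) == 32
    # 0x20 is the direct push of the following 32 bytes
    return bytes([OP_HASH256, 0x20]) + digest + bytes([OP_EQUAL])

# Example: a trivial redeemScript consisting of just OP_1 (0x51)
script = p2sh256_output_script(b"\x51")
assert len(script) == 35  # 1 opcode + 1 push byte + 32 hash bytes + 1 opcode
```

The only spending-time difference from BIP16 would be the hash function checked (double SHA-256 over 32 bytes instead of HASH160 over 20), so wallet and node changes should be small.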
this may very well be the beginning of a new transaction format!
- Xthinner proto-
Wait… im_uname stole my idea and is trying to take credit for it himself. Fine…
- Blocktorrent protocol
- Better block propagation method that splits blocks into independently verifiable IP packet-sized chunks. Allows blocks to be transmitted in a swarm at the speed of Bittorrent instead of Napster.
- Scaling
- Enable scaling to the 1 GB/block (3k tx/sec) level and beyond
- A lot of new code and 0-day risk
Both Blocktorrent and Xthinner are opt-in and require no backwards-incompatible protocol changes. They can be deployed asynchronously and permissionlessly by nodes.
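The “independently verifiable chunks” idea rests on Merkle proofs: a receiver holding only the block header can check any transaction (or packet-sized group of transactions) against the Merkle root before it has the rest of the block. A rough sketch of that building block, assuming a Bitcoin-style tree with odd-tail duplication (function names are mine):

```python
import hashlib

def hash256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_levels(txids):
    """All levels of a Bitcoin-style Merkle tree, bottom (txids) to root."""
    levels = [txids]
    while len(levels[-1]) > 1:
        layer = levels[-1]
        if len(layer) % 2:
            layer = layer + [layer[-1]]  # duplicate odd tail
        levels.append([hash256(layer[i] + layer[i + 1])
                       for i in range(0, len(layer), 2)])
    return levels

def merkle_branch(levels, index):
    """Sibling hashes needed to verify one txid against the root."""
    branch = []
    for layer in levels[:-1]:
        if len(layer) % 2:
            layer = layer + [layer[-1]]
        branch.append(layer[index ^ 1])
        index //= 2
    return branch

def verify(txid, index, branch, root):
    """Recompute the root from a leaf and its branch; no other block data needed."""
    h = txid
    for sibling in branch:
        h = hash256(sibling + h) if index & 1 else hash256(h + sibling)
        index //= 2
    return h == root

txids = [hash256(bytes([i])) for i in range(5)]
levels = merkle_levels(txids)
root = levels[-1][0]
assert all(verify(txids[i], i, merkle_branch(levels, i), root) for i in range(5))
```

A chunk that ships a few transactions plus their branches can thus be accepted or rejected on its own, which is what lets peers forward pieces of a block before receiving the whole thing.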
- Blocksize limit increase after Xthinner and/or Blocktorrent
- Increase the default block size limits for BCH, potentially according to a default schedule like BIP101
- Prevent a repeat of 2015-2017 in which blocksize increases weren’t performed before they were needed
- Fidelity effect
- Orphans; distraction from other projects
Link to an ongoing discussion regarding improved math capability, especially larger integers and multiplication.
- Block time reduction
- Reduce the target block time to e.g. 2.5 minutes or 1.0 minutes after Blocktorrent is active
- User experience improvement
- Improves confirmation times. Makes solo mining easier (lower variance). Reduces chain limit pressure.
- Somewhat complicated. Requires changing halving schedule, coins per block, and some opcode changes for height-based locktime. Also increases baseline block orphan rate and may reduce scalability (but Blocktorrent may mitigate this).
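For instance, moving from 10-minute to 2.5-minute blocks means four times as many blocks per day, so keeping the emission curve intact requires dividing the block subsidy and multiplying the halving interval by the same factor (the numbers below are illustrative):

```python
# Hypothetical rescaling that keeps the emission schedule unchanged
OLD_TARGET, NEW_TARGET = 600, 150       # seconds per block
OLD_SUBSIDY = 6.25                      # coins per block in the post-2020 era
OLD_HALVING_INTERVAL = 210_000          # blocks between halvings

ratio = OLD_TARGET // NEW_TARGET        # 4x more blocks per unit of time
new_subsidy = OLD_SUBSIDY / ratio                     # 1.5625 coins per block
new_halving_interval = OLD_HALVING_INTERVAL * ratio   # 840,000 blocks

# Coins issued per day are preserved (900 either way):
assert (86_400 // OLD_TARGET) * OLD_SUBSIDY == (86_400 // NEW_TARGET) * new_subsidy
```

Height-based locktimes and similar height-denominated rules would need the analogous adjustment, which is where the opcode changes mentioned above come in.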
This item is probably more like 2022 or 2023, because it’s a huge change, but I’m including it here because I think it’s fascinating and worth looking into.
- Block DAG (e.g. Jute)
- Convert the blockchain into a block DAG in order to neutralize the incentive problem with orphans. Instead of marking entire blocks invalid and throwing them out, we only throw out individual invalid transactions.
- Allows for very fast block times (~6 sec)
- Claims to solve selfish mining. Makes mining variance a non-issue. Massive UX improvement. Dramatically enhanced scalability (no need to keep orphan rate ≤ 3%). Reduces effect of network latency on revenue.
- Huge changes. Changes block header format. Changes game theory. Threat model and security claims have not been fully reviewed by third parties.
Please calm down, you were being credited and nobody else saw his post as taking credit.
Also, this site is about collaboration first and foremost. It would be nice to aim to share ownership of ideas, that makes things go much smoother.
It was just a joke. I’m sorry that wasn’t clear.
FWIW, I got it and thought it was funny.
I am not very technical, but think about it from the perspective of onboarding any use case that might bring in a very high amount of usage.
1. Name: Default blocklimit behavior to be adjustable
2. Description in a couple sentences what the change is technically:
Change the default blocksize limit behavior to be dynamic; however, miners could change this setting and set their own limit.
3. Problem it’s trying to solve
Even though the limit is currently adjustable, there can be some friction in changing the settings to allow larger blocks. Having the default behaviour be an adjustable limit would give confidence to people / use cases that require a very high number of transactions, and assurance that they will not be driven out by fees.
4. Potential positive impact on ecosystem
Large businesses wanting to onboard a lot of users or a high number of transactions will be able to do so with higher confidence that fees will stay low (this would answer the concerns of heads of state wanting to transact on BCH, or of large businesses with millions of users building their infrastructure on top of BCH).
5. Potential negative impact on ecosystem
Could be abused by some people if not implemented well.
Could affect miner revenue (however, miners will always have the option to set their own limit manually, i.e. it is opt-in).
The adjustable limit could exceed actual network capacity; guardrails added to prevent that would themselves act as a blocksize limit.
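One possible shape for such a dynamic default (my illustration, not part of the proposal's text) is a multiple of the median size of recent blocks, in the spirit of Monero's penalty-free zone or the BIP100 family of proposals:

```python
from statistics import median

def adaptive_limit(recent_block_sizes, multiplier=10, floor=32_000_000):
    """Hypothetical dynamic default: a multiple of the median recent block
    size, never below a hard floor (32 MB here, matching BCH's current
    default). Miners could still override this with a manual limit."""
    return max(floor, int(multiplier * median(recent_block_sizes)))

# Mostly-empty blocks: the limit rests at the 32 MB floor
assert adaptive_limit([200_000] * 100) == 32_000_000
# Sustained 8 MB blocks: the limit grows to 80 MB
assert adaptive_limit([8_000_000] * 100) == 80_000_000
```

A median-based rule limits how fast any one party can drag the default upward, which speaks to the abuse concern above, though the multiplier and floor would need careful analysis against real network capacity.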
We introduced a new customisable dividend tool that is automated in the Zapit wallet as demonstrated here. We had to hardcode a limit of 900 addresses due to the 50-transaction limit (as each SLP transaction can have only 18 outputs).
Most token projects with some sort of utility will have to send dividends/airdrops to well over 900 addresses, which isn’t possible under the current 50-transaction limit. Of course one can always send them out manually, but it’s a huge blow to user experience, which we at Zapit are focusing on.
The dividend tool can be used by any regular individual without much knowledge about BCH since everything is automated but the 900 address limit is still a problem if larger projects want to use the tool.
We also reward users using a payment interface that has millions of transactions per day. To tap into that market, we need to make sure that the limit is at least raised before we move forward with onboarding more users.
As we want to be able to send rewards every time users make a transaction with that payment interface, if there are more than 50 users that have to be rewarded within 10 minutes, we face a problem.
While the 50-tx limit will likely be worked on in the near future, isn’t the limit 18^50 rather than 18×50? It just takes slightly more complex arithmetic.
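To make the commenter's arithmetic concrete (assuming the 50-tx limit bounds the *depth* of an unconfirmed chain rather than the total transaction count, which would need confirming against actual node policy):

```python
# Hypothetical fan-out arithmetic for SLP dividend payments.
outputs_per_tx = 18   # SLP output limit per transaction
depth_limit = 50      # unconfirmed-chain depth limit

# Straight chain: one paying tx per level, 17 recipients each,
# with 1 output reserved to feed the next transaction in the chain.
chain_recipients = depth_limit * (outputs_per_tx - 1)

# Fan-out tree: every output spawns a new tx at the next level,
# so recipients multiply by 18 per level (a theoretical upper bound).
tree_recipients_upper_bound = outputs_per_tx ** depth_limit

assert chain_recipients == 850
assert tree_recipients_upper_bound > 10**62
```

In practice the 900-address figure suggests the tool was computing 50 × 18 linearly; a tree-shaped fan-out would pay far more addresses within the same depth limit, at the cost of more dust-funding outputs and bookkeeping.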
Name
Multiple OP_RETURNs
Description in a couple sentences what the change is technically
Change in either standardness rules regarding OP_RETURN to allow multiple outputs with OP_RETURN in the same transaction, or change in interpretation of OP_RETURN itself to use it as a separator within the same OP_RETURN output.
Problem it’s trying to solve
Making OP_RETURN based protocols interoperable.
Potential positive impact on ecosystem
By supporting multiple OP_RETURNs in a way that makes interoperable OP_RETURN protocols possible, we signal to developers interested in building such protocols that BCH desires their business. This would allow new use cases as well as empower existing ones, ultimately working towards more adoption and a stronger network effect for BCH.
Potential negative impact on ecosystem
Depending on developer preference, could be a point of contention.
Depending on implementation, could end up encouraging more data storage on-chain.
Depending on implementation, could disrupt existing OP_RETURN based protocols or tooling.
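The standardness-rule variant of this change is small in code terms: relay policy counts data-carrier outputs per transaction and rejects more than one, so the change amounts to raising (or removing) that count. A sketch under those assumptions (0x6A is the OP_RETURN opcode byte; the function names are mine):

```python
OP_RETURN = 0x6A

def is_data_carrier(output_script: bytes) -> bool:
    """An output whose script begins with OP_RETURN carries data, not value."""
    return len(output_script) > 0 and output_script[0] == OP_RETURN

def passes_standardness(output_scripts, max_data_outputs=1):
    """Current relay policy roughly allows one data-carrier output per tx;
    the proposal would raise max_data_outputs or lift the cap entirely."""
    return sum(map(is_data_carrier, output_scripts)) <= max_data_outputs

payment_output = b"\x76\xa9"  # placeholder prefix of a normal P2PKH script
two_protocols = [b"\x6a\x04test", b"\x6a\x04more", payment_output]

assert not passes_standardness(two_protocols)                  # rejected today
assert passes_standardness(two_protocols, max_data_outputs=2)  # relaxed rule
```

The separator-within-one-output variant would instead keep the single-output rule and change how existing protocols parse the payload, which is where the disruption risk listed above comes from.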
This feature has become rather urgent for our development. Is there an ongoing topic in place here about it, or should I create a new one? We’d really like to see this get included in May’s fork, and I’d like to address any concerns or blockers ASAP.
This feature means multiple op returns?
I know someone will soon publish a more comprehensive suggestion about how to go about proposing and moving network upgrades ahead. In the meantime, I think the TLDR is that if anyone has a specific requirement and believes it is a good idea for the network as well, then they would need to create and own a proposal to do it. The proposal can have any level of detail on problem statement, implementation, RFC, etc. as long as it starts somewhere and the owner is committed to an iterative exploratory process with no expectation of eventual inclusion. In other words, a shared process is our arbiter and there is no single entity that can make it happen.
I’m preparing that now with details of our protocol that has the requirement of multiple OP_RETURNs so it would serve as a real-world use case. If that person wants to forward their suggestion on organizing proposals we might be inclined to be their guinea pig.
– Ben Scherrey
Congratulations to all involved on successful May 2022 upgrade activation!
Let’s have a little update:
DONE (2021 and 2022)
Also, not mentioned above:
- Native Introspection
2023 Candidates
Now being merged with CashTokens2.0
I overlooked this one while I was focused on Group, but now with Introspection I expect more developments on public covenants, so it’s really important: 160 bits is vulnerable to birthday attacks, so for big pots of money locked up it may become a real risk. We’re actually behind BTC on this one, as SegWit enabled 32-byte contract hashes.
Additionally, and not mentioned above:
- UTXO commitments
- VM limits
- tx size != 64
- lock tx version field
Also there’s now some janitor work to do, I think I’m gonna make some PRs…
Update docs:
and chip list:
I think these are pretty straightforward and already have informal, untested consensus. I hope they are both pushed for early this cycle so we can get ahead of any remaining work and schedule them for inclusion. If anyone disagrees or knows of any reason why they shouldn’t be, please raise your concerns so they can be addressed.
If someone has time and wants to help, reaching out to and double-checking with potential stakeholders would hopefully solidify things.