agreed with above & moving forward
my apologies to the group for being disruptive
No need to apologize. You can be more disruptive next year.
We do not need a perfect solution right now. We only need a very good solution.
Once it is “locked in” in the brains of the populace (and especially miners) as a default way forward, we can bikeshed improvements ad infinitum.
This morning I reviewed the complete CHIP and suggested a number of typo/grammar corrections to @bitcoincashautist.
My two non-typo points that I raised were:
That said, I think there’s been an immense amount of quality work done on this proposal, and I am in full support of it.
I agree heavily with @ShadowOfHarbringer’s points about social consensus: we shouldn’t let perfect be the enemy of the good (or in this case, the already very, very good), and activating this CHIP in the 2024 upgrade would be a massive step forward. Delaying it for a year seems foolhardy. Pretty much everyone is on board with having an algorithm in the first place, so it’s far better to spend next year discussing changes to the algorithm than still debating whether to have one at all, and if so which exact algorithm.
@Jessquit it’s not exactly a topic for this thread, but I’m not so much a fan of the “Untethered” idea. The marketing play on words is fine, but I’m averse to this kind of “limitless” messaging because it strongly reminds me of the endless BSV arguments and opinions, which always seem to come down to “Well now the Bitcoin blockchain has been UNLEASHED! And you will see the TRUE power of Bitcoin/the free market now that we have UNLEASHED it with no blocksize/data limits/consensus changes etc.”. Their arguments always rest on a naive dismissal of engineering tradeoffs or an un-nuanced belief in the supposedly endless powers of “free markets”. In my mind, their advocacy always turns into something like a Dragon Ball Z villain (“haha, my true powers have been unveiled, NOW you’ll see what I can do”). They’ve been “unleashing” BSV for years, and it’s not impressing anyone lol.

So I think we should avoid marketing BCH in any style reminiscent of that, because it tends to attract loud and opinionated people with a binary and uneducated view that any kind of sensible engineering is an affront to their free-market sensibilities. But that’s a separate discussion to have, and I like that you’re thinking up new promotional ideas - let’s workshop this some more in another venue.
Oh, I just discovered that the “light wallets” affected by this change means full nodes syncing backwards from a UTXO commitment - not pruned nodes and not SPV. That fixes my Selene Wallet concern entirely; hopefully the spec can be updated to avoid any confusion there.
That’s fine, nobody is required to use the idea. It’s there for people who find it useful.
Proposed final draft of Executive Summary online now:
“CHIP-2023-04 Adaptive Blocksize Limit Algorithm for Bitcoin Cash” (ac-0353f40e / Adaptive Blocksize Limit Algorithm for Bitcoin Cash · GitLab) is implementation-ready. Note that the CHIP title has been updated.
If anyone wants to have a go at a test implementation, or just to review the CHIP and state approval/abstention/disapproval of activating it, now is the time!
To get a feel for how it works, I suggest checking out the risks section first:
The CHIP has reference implementations in both C and C++, and a simple test suite that locally generates .csv test vectors covering the full range of inputs to the algorithm.
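For anyone who wants a feel for the harness’s shape before opening the repo, here is a minimal C++ sketch of the idea: iterate a state through a sequence of block sizes and emit CSV rows. The update rule and constants below are stand-in placeholders, not the CHIP’s actual control function or parameters - see the reference implementations for the real thing.

```cpp
// Hypothetical sketch of a CSV test-vector generator. ablaUpdate() is a
// placeholder, NOT the CHIP's actual control function or constants.
#include <cstdint>
#include <fstream>

struct AblaState {
    uint64_t limit; // current blocksize limit in bytes
};

// Stand-in update rule: nudge the limit upward when blocks run near it.
AblaState ablaUpdate(AblaState s, uint64_t blockSize) {
    if (blockSize > s.limit / 2) {
        s.limit += s.limit / 1000; // illustrative bounded growth step
    }
    return s;
}

int main() {
    std::ofstream csv("vectors.csv");
    csv << "height,blocksize,limit\n";
    AblaState s{32'000'000}; // e.g. start from a 32 MB limit
    for (uint64_t height = 0; height < 1000; ++height) {
        uint64_t blockSize = (height % 2) ? s.limit : uint64_t{0}; // alternate full/empty
        s = ablaUpdate(s, blockSize);
        csv << height << ',' << blockSize << ',' << s.limit << '\n';
    }
}
```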
My endorsement post:
Blocksize Algorithm CHIP endorsement
I (and The Bitcoin Cash Podcast & Selene Wallet) wholly endorse CHIP-2023-04 Adaptive Blocksize Limit Algorithm for Bitcoin Cash for lock-in 15 November 2023. A carefully selected algorithm that responds to real network demand is an obvious improvement: it relieves the social burden of recurring discussion around optimal blocksizes, plus the implementation costs & uncertainty around scaling for miners & node operators. There is also some benefit in the community signalling its commitment to scaling & its refusal to repeat the historic delays that resulted from previous blocksize-increase contention.
The amount of work done by bitcoincashautist has been very impressive & inspiring. I refer not only to work on the spec itself but also to the iteration from feedback & the patient communication with stakeholders to address concerns across a variety of mediums. Having reviewed the CHIP thoroughly, I am convinced the chosen parameters accommodate edge cases in a technically sustainable manner.
It is a matter of some urgency to lock in this CHIP for November. This will solidify the social contract to scale the BCH blocksize as demand justifies it, all the way to global reserve currency status. Furthermore, it will free up the community zeitgeist to tackle new problems for the 2025 upgrade.
A blocksize algorithm implementation is a great step forward for the community. I look forward to this CHIP locking in in November & going live in May 2024!
Jeremy
The revisions to this CHIP have made it significantly easier to understand, and it now clearly addresses all major concerns I’ve seen raised regarding this issue.
As a Fulcrum server operator, I endorse this proposal. I know I’d be able to keep up with operating costs even if we implemented BIP-101 instead, which would presently have us at 64 MB block capacity.
I also recognize the urgency of solving this potential social attack before it becomes a problem again. I encourage adoption of this proposal for the November 2023 lock-in, allowing it to be activated in May 2024.
Like Jeremy, I also endorse this CHIP on behalf of Selene Wallet.
(Ignore this, I got confused).
I support this CHIP but I do want to point out one thing:
Once this is deployed, “pruned nodes” aren’t technically fully validating anymore until there is a blocksize commitment - a node can’t verify whether a block is valid without knowing the historical blocksizes from which to calculate the state of the algorithm.
The CHIP touches on this when discussing (potential) fast-sync schemes, but I think more clarification is needed, because it does change the nature of some current nodes.
But they are, if they validated the history before pruning it - they will know the algo’s state and can resume from there; you only need the algo’s state at any one block to be able to continue calculating the limit. Same with UTXO state tracking. So nothing fundamentally changes there, right? They have to build their UTXO state somehow - and while they do that they can work out the algo’s state too, and they can locally store the entire blocksize history while at it, since it’s a small dataset anyway.
It affects the implementation of pruned nodes, though - in addition to UTXO state they’ll need to keep track of the algo’s state at least some X blocks back, to be able to recover from a crash or similar. That part we could highlight for implementers.
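Something like this is what I mean - a hypothetical sketch of a pruned node retaining recent algo states alongside its UTXO set so it can resume after a crash or shallow reorg (names and fields are illustrative, not from the CHIP):

```cpp
// Hypothetical sketch: keep the algorithm's state for the last N blocks
// so a pruned node can recover without re-reading historical blocks.
#include <cstdint>
#include <deque>
#include <stdexcept>

struct AblaState {
    uint64_t blockHeight;
    uint64_t controlState; // whatever state the algorithm carries
};

class AblaStateLog {
    std::deque<AblaState> log_; // oldest at front, chain tip at back
    size_t maxDepth_;
public:
    explicit AblaStateLog(size_t maxDepth) : maxDepth_(maxDepth) {}

    // Record the state after connecting a block; drop entries beyond maxDepth.
    void push(const AblaState& s) {
        log_.push_back(s);
        if (log_.size() > maxDepth_) log_.pop_front();
    }

    // On crash recovery or a shallow reorg, resume from the state at `height`.
    AblaState stateAt(uint64_t height) const {
        for (auto it = log_.rbegin(); it != log_.rend(); ++it)
            if (it->blockHeight == height) return *it;
        throw std::runtime_error("state pruned; must resync from a checkpoint");
    }
};
```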
If that was not yet somehow clear enough from my previous comments, I hereby officially endorse this CHIP.
Post title updated as per author’s request.
Likewise, I formally endorse this CHIP.
I personally would like agreement and early implementation on at least two clients by November if at all possible.
Has there been any official statement or acknowledgement from BCHN, BU, or BCHD on this?
I’ve been talking with them today and opened an MR as a result
Pinged Josh (Verde) too; he’s busy migrating the node’s DB from MySQL to a custom solution, after which he’ll have bandwidth to resume work on fast-sync and check out this CHIP.
BCHD is busy getting up to speed with CashTokens etc.; I’ll reach out once they catch up.
Fernando (kth) has been busy with other stuff, too.
Not sure about BU - I had some talks with Andrew months ago and he informally blessed the approach; maybe @Griffith knows better? How’s dual-median been performing on your new chain? You guys wanna replace dual-median with this one while you’re at it?
An interesting real-life example of miners not self-limiting block size when block propagation breaks down during high transaction volumes:
[The spam]'s impact on mining was significant and immediately apparent. At times, up to 80% or more of the network’s total hashrate was dropped, but this number stabilized at approximately 60%. This “lost” hashrate was unable to keep up with the blocks from other miners, resulting in at least 3 distinct and competing chains which I was able to identify. At some points, some of the slow miners were able to catch up with the main chain, only to split off again soon after. The numerous forks relatively quickly re-established consensus after the conclusion of [the spam].
Pirate Chain is a code fork of Zcash that permits only private shielded transactions. Its block verification time per transaction is much higher than bitcoin’s.
Nice find!
How do we avoid this risk on BCH? The maximum rate of change is chosen so as to stay under the original BIP-101 curve (What if too fast?), which is a good estimate of Bitcoin-tech growth in deployable capacity. Another blockchain like Zcash would need its own variant of BIP-101, with a lower base and maybe a lower rate.
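For reference, here’s a sketch of the original BIP-101 schedule itself (8 MB base on 2016-01-11, doubling every two years with linear interpolation between doublings, frozen after 20 years). This is illustrative code, not part of the CHIP:

```cpp
// Sketch of the BIP-101 maximum-blocksize schedule, used here only to
// illustrate the growth ceiling the CHIP's rate limit stays under.
#include <cstdint>
#include <cstdio>

uint64_t bip101Limit(int64_t timestamp) {
    const int64_t start = 1452470400;          // 2016-01-11 00:00:00 UTC
    const int64_t twoYears = 63072000;         // two years in seconds
    const int64_t end = start + 10 * twoYears; // schedule frozen after 20 years
    if (timestamp < start) return 8000000;
    if (timestamp > end) timestamp = end;
    uint64_t doublings = uint64_t(timestamp - start) / uint64_t(twoYears);
    uint64_t remainder = uint64_t(timestamp - start) % uint64_t(twoYears);
    uint64_t base = 8000000ull << doublings;
    // BIP-101 interpolates linearly between doublings.
    return base + base * remainder / uint64_t(twoYears);
}

int main() {
    // Mid-2023 gives ~111 MB with interpolation; the step value between
    // the 2022 and 2024 doublings is 64 MB, the figure cited above.
    std::printf("%llu\n", (unsigned long long)bip101Limit(1688169600));
}
```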
Fees are the first line of defense against irrational use of the network, and I think Zcash / Pirate Chain simply had them too low. On BCH, one cannot just run a “Free And Unscheduled Scalability Audit” (FAUSA) as they did on Pirate Chain, because 1 satoshi / byte is not free. It is cheap for regular use, but it is not free, meaning that introducing an artificial TX load at scale will still cost any single actor too much.
I’ll add some good numbers specific to BCH, which Jonathan Toomim laid out here (a quick check of the arithmetic follows the quote below). For those going to the original comment: his remarks about the algo refer to an older iteration with a 4x/yr rate limit at the extreme, which was deemed too fast, as it could more easily intercept the original BIP-101 curve.
The BCH network currently has enough performance to handle around 100 to 200 MB per block. That’s around 500 tps, which is enough to handle all of the cash/retail transactions of a smallish country like Venezuela or Argentina, or to handle the transaction volume of (e.g.) an on-chain tipping/payment service built into a medium-large website like Twitch or OnlyFans.
If you mine a 256 MB block with transactions that are not in mempool, the block propagation delay is about 10x higher than if you mine only transactions that are already in mempool. This would likely result in block propagation delays on the order of 200 seconds, not merely 20 seconds. At that kind of delay, Gorilla would see an orphan rate on the order of 20-30%. This would cost them about $500 per block in expected losses to spam the network in this way, or $72k/day. For comparison, if you choose to mine BCH with 110% of BCH’s current hashrate in order to scare everyone else away, you’ll eventually be spending $282k/day while earning $256k/day for a net cost of only $25k/day. It’s literally cheaper to do a 51% attack on BCH than to do your Gorilla spam attack.
If you mine 256 MB blocks using transactions that are in mempool, then either those transactions are real (i.e. generated by third parties) and deserve to be mined, or are your spam and can be sniped by other miners. At 1 sat/byte, generating that spam would cost 2.56 BCH/block or $105k/day. That’s also more expensive than a literal 51% attack.
Currently, a Raspberry Pi can keep up with 256 MB blocks as a full node, so it’s only fully indexing nodes like block explorers and light wallet servers that would ever need to be upgraded. I daresay there are probably a couple hundred of those nodes. If these attacks were sustained for several days or weeks, then it would likely become necessary for those upgrades to happen. Each one might need to spend $500 to beef up the hardware. At that point, the attacker would almost certainly have spent more money performing the attack than spent by the nodes in withstanding the attack.
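As a sanity check of the quoted spam-cost figures, here’s the arithmetic in a few lines of C++. The ~$285/BCH price is an assumption backed out of the quoted numbers, not a figure from the CHIP:

```cpp
// Cost of filling 256 MB blocks with 1 sat/byte spam, all day.
#include <cstdio>

int main() {
    const double bytesPerBlock = 256e6; // 256 MB blocks
    const double satPerByte = 1.0;      // minimum fee rate
    const double blocksPerDay = 144;    // ~one block per 10 minutes
    const double usdPerBch = 285.0;     // assumed price at time of comment

    double bchPerBlock = bytesPerBlock * satPerByte / 1e8; // 2.56 BCH
    double usdPerDay = bchPerBlock * blocksPerDay * usdPerBch;
    std::printf("%.2f BCH/block, ~$%.0f/day\n", bchPerBlock, usdPerDay);
    // Prints: 2.56 BCH/block, ~$105062/day -- matching the ~$105k/day quoted.
}
```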
honestly… not sure… has not had a good go yet. not enough tx volume. we had some issues early on because it was not possible to adjust the blocksize higher for the first… year? (i forget the exact params) due to the algo median windows and the baseline block size that was configured for the dual median approach was too small (100KB). there was not enough baseline space to accommodate short term spikes in the network tx rate due to exchange output consolidation. when enough time had passed for the network to consider raising the maximum block size, the exchanges had already fixed all of the consolidation issues so the size still has not risen off of the baseline.
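For anyone unfamiliar with the dual-median approach described above, its general shape is something like the sketch below: the limit is the larger of a fixed baseline and a multiple of median block sizes over two lookback windows, so it stays pinned to the baseline until sustained usage lifts the medians. Window lengths and multipliers here are placeholders, not the chain’s actual parameters:

```cpp
// Illustrative dual-median blocksize limit; parameters are placeholders.
#include <algorithm>
#include <cstdint>
#include <vector>

uint64_t medianOfLast(std::vector<uint64_t> sizes, size_t window) {
    if (sizes.empty()) return 0;
    if (sizes.size() > window)
        sizes.erase(sizes.begin(), sizes.end() - window);
    std::sort(sizes.begin(), sizes.end());
    return sizes[sizes.size() / 2];
}

uint64_t dualMedianLimit(const std::vector<uint64_t>& blockSizes) {
    const uint64_t baseline = 100'000; // the 100 KB floor mentioned above
    const size_t shortWindow = 2016;   // placeholder window lengths
    const size_t longWindow = 12096;
    uint64_t m = std::max(medianOfLast(blockSizes, shortWindow) * 10,
                          medianOfLast(blockSizes, longWindow) * 100);
    return std::max(baseline, m);
}
```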
BCHN implementation ongoing: Draft: Work-in-progress ABLA implementation (!1782) · Merge requests · Bitcoin Cash Node / Bitcoin Cash Node · GitLab