Open Discussion on Block Propagation Times

As we are discussing CHIP-2025-03 Faster Blocks for Bitcoin Cash, I wanted to open a discussion on block propagation times, which is related but not the same topic.

As I recall, three technologies have been discussed/implemented:

  • Xthin
  • Xthinner (the only one that requires CTOR(?))
  • Graphene (benefits from, but does not strictly need, CTOR(?))

The obvious first callout is that today's transaction volume doesn't yet make these technologies critically important; however, it might not be bad to look at them now with future implementation in mind.

Are we still looking at such technologies today? I believe BU uses Graphene. I'm not certain about other node implementations (BCHN, BCHC, BCHD, Kth, Verde). Is there a reason to explore these now? Are there drawbacks that stopped implementation of the more efficient Xthinner/Graphene in more nodes?

1 Like

Re-reading the spec: https://github.com/BitcoinUnlimited/BitcoinUnlimited/blob/release/doc/graphene-specification-v2.2.mediawiki

It hit me that my calculations in the CHIP are slightly inaccurate: I forgot that each TX in the block adds 6 bytes to the cmpctblock message, so for a 32 MB block even the fastest 0.5 * RTT case would need to transmit about 1 MB. It shouldn't change much: it would add about 300 ms to my calculated propagation for 10-minute blocks, or 30 ms for 1-minute blocks, so an additional 0.05% to the calculated orphan rates. If bandwidth grows faster than our block size limit, this overhead would only shrink with time.
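A back-of-envelope check of that 1 MB figure, assuming an average TX size of roughly 192 bytes (my assumption, not a spec value) and the 6-byte short IDs mentioned above:

```python
# Rough check of the compact-block overhead figure above.
# Assumptions: ~192-byte average TX (not from the spec), 6-byte short IDs.
AVG_TX_SIZE = 192   # bytes, assumed average transaction size
SHORT_ID_SIZE = 6   # bytes per TX in the cmpctblock message

def short_id_list_size(block_size_bytes: int) -> int:
    """Approximate size of the short-ID list for a block of the given size."""
    return (block_size_bytes // AVG_TX_SIZE) * SHORT_ID_SIZE

print(short_id_list_size(32_000_000))   # ~1,000,000 bytes, i.e. about 1 MB
```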

Graphene does not strictly need CTOR, but it does need some sort of agreed ordering algorithm between peers. If one is not set by consensus, one would need to be negotiated as part of network communication before block/TX data transmission could take place. Having CTOR by consensus removes this negotiation requirement (everyone has pre-agreed on an algorithm).
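A minimal sketch of why a consensus ordering removes the negotiation step, assuming CTOR means lexicographic ordering by TXID with the coinbase pinned first:

```python
# With a consensus canonical ordering (here: lexicographic by TXID, coinbase
# first), both peers derive the same TX sequence locally, so Graphene does not
# need to transmit or negotiate any order information.
def canonical_order(coinbase_txid: bytes, other_txids: set[bytes]) -> list[bytes]:
    return [coinbase_txid] + sorted(other_txids)

# Both sides call canonical_order() on the same TX set and get identical lists;
# without a consensus rule, the ordering (or the algorithm to derive it) would
# have to be communicated before the block could be reconstructed.
```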

1 Like

That makes sense, thanks for the clarification!

1 Like

For those interested, the explanation about Graphene requiring an ordering algorithm can be found here: https://github.com/BitcoinUnlimited/BitcoinUnlimited/blob/release/doc/graphene-specification-v2.2.mediawiki#intuition

Read the paragraph that starts with “Our fifth and best solution is a combination of both data structures.”
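A toy illustration of the intuition in that section: the sender's Bloom filter lets the receiver guess the block's TX set from its own mempool, and the small number of remaining errors is repaired with an IBLT. This sketch only shows the Bloom-filter pass; the IBLT step is described in a comment, and all names here are illustrative, not from the spec.

```python
import hashlib

# Toy Bloom filter, purely to illustrate the "fifth solution" intuition.
class ToyBloom:
    def __init__(self, size_bits: int = 8192, hashes: int = 3):
        self.size, self.hashes, self.bits = size_bits, hashes, bytearray(size_bits)

    def _positions(self, item: bytes):
        for i in range(self.hashes):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "little") % self.size

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p] = 1

    def maybe_contains(self, item: bytes) -> bool:
        return all(self.bits[p] for p in self._positions(item))

def receiver_guess(bloom: ToyBloom, mempool_txids: set[bytes]) -> set[bytes]:
    """Receiver's first pass: every mempool TX that passes the sender's filter."""
    return {txid for txid in mempool_txids if bloom.maybe_contains(txid)}

# In Graphene proper, an IBLT of the block's TXIDs is sent alongside the Bloom
# filter; decoding it against the receiver's guess reveals false positives and
# missing TXs, which is far cheaper than sizing the Bloom filter for zero error.
```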

2 Likes

There are a bunch of important ingredients here.

First, naturally, is header-first mining, which is now used by everyone. The idea is that a miner assumes a block header with the correct difficulty and proof of work represents a real block. A direct effect of this is that it removes the orphan risk associated with bigger blocks, which makes it a really important part of block propagation.
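A hedged sketch of that decision logic: check that the header's own proof of work meets the difficulty encoded in its nBits field, then mine an empty block on top while the full block downloads and validates. The miner interface here is hypothetical, and a real node would also verify nBits against its own chain state.

```python
import hashlib

def header_pow_ok(header80: bytes) -> bool:
    """Check a serialized 80-byte header's proof of work against its own nBits.

    Sketch only: a real node also checks that nBits matches the difficulty its
    chain state requires at this height.
    """
    h = hashlib.sha256(hashlib.sha256(header80).digest()).digest()
    nbits = int.from_bytes(header80[72:76], "little")   # nBits field of the header
    exponent, mantissa = nbits >> 24, nbits & 0x007FFFFF
    target = mantissa * (1 << (8 * (exponent - 3)))
    return int.from_bytes(h, "little") <= target

def on_header(header80: bytes, miner) -> None:
    # Header-first mining: as soon as the header's PoW checks out, assume it
    # represents a real block and mine an empty block on top of it, removing
    # the orphan risk while the full block is fetched and validated.
    if header_pow_ok(header80):
        miner.mine_empty_block_on(header80)   # hypothetical miner interface
```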

Second is the idea, which has been gaining more and more traction, that miners want their mempools to look mostly the same. Plus or minus some transactions, but generally they will not disagree on conflicts and such.
I think there have been various efforts in this direction, where pools actively exchange transactions continuously in order to avoid doing that only AFTER a block is found.
A big deal there, which is relatively easy to do when the time comes for it to be useful, is the idea of having a mempool that holds as many transactions as possible in preparation for receiving a newly mined block. Because if you get a compact block that refers only to transactions you already have, that saves milliseconds.
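A sketch of why a well-stocked mempool saves that time: a compact block is essentially a list of short IDs, and only the ones a node cannot match locally need an extra round trip. The precomputed short-ID index is assumed here.

```python
# Reconstruct a block from a compact-block announcement using the local
# mempool; only unmatched short IDs have to be requested over the network.
def reconstruct(short_ids: list[bytes], mempool_index: dict[bytes, object]):
    """mempool_index maps short ID -> full transaction (assumed precomputed)."""
    txs, missing = [], []
    for pos, sid in enumerate(short_ids):
        tx = mempool_index.get(sid)
        if tx is None:
            missing.append(pos)   # placeholder; fetch these via getblocktxn
        txs.append(tx)
    return txs, missing           # empty `missing` => no extra round trip
```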

But a second “mempool” is then going to be used to create a sub-set of valid transactions that are actually the ones the miner would mine. Think of it as a live-updating version of GetBlockTemplate, allowing instant build times, a concept needed to get to much bigger block (templates).

This second part, the holding-everything mempool, is quite relevant as well, as it helps immensely in setting assumptions and direction on how to propagate that block (not to mention that miners can set policies on what they mine without that increasing the double-spend attack surface, but that is a digression).
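A hypothetical sketch of that split: a broad holding-everything pool used for block reconstruction and double-spend detection, and a separate, continuously maintained template holding only what this miner actually wants to mine. The class and policy interface are illustrative, not from any existing node.

```python
# Hypothetical two-pool design: relay/reconstruction pool vs. live template.
class NodePools:
    def __init__(self, policy):
        self.holding_pool = {}   # txid -> tx, as inclusive as resources allow
        self.template = []       # live block template ("live GetBlockTemplate")
        self.policy = policy     # miner's own selection rules (feerate, filters)

    def on_tx(self, txid: bytes, tx) -> None:
        self.holding_pool[txid] = tx       # keep it regardless of local policy
        if self.policy.accepts(tx):        # selection happens in the template,
            self.template.append(tx)       # never by dropping from holding_pool
            self.template.sort(key=self.policy.feerate, reverse=True)
```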

My last point is that I'm surprised you don't mention what we use today.
Bitcoin Cash full nodes don't send full blocks; we already use Compact Blocks (BIP152), which is “good enough” for quite a bit longer.
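For reference, a sketch of the shape of BIP152's short-ID scheme: the 6-byte IDs are derived with SipHash-2-4 keyed from SHA256 of the block header plus a per-block nonce. Python's standard library has no SipHash, so keyed BLAKE2b stands in here purely for illustration; this is not the real derivation.

```python
import hashlib

def short_id(txid: bytes, header80: bytes, nonce: bytes) -> bytes:
    """Shape of BIP152's short-ID scheme, not the real thing.

    BIP152 keys SipHash-2-4 with the first 16 bytes of SHA256(header || nonce)
    and keeps the low 6 bytes of the output; keyed BLAKE2b is a stand-in here.
    """
    key = hashlib.sha256(header80 + nonce).digest()[:16]
    return hashlib.blake2b(txid, key=key, digest_size=6).digest()
```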

And, yes, it is relevant to know that any new block propagation algorithm can be implemented permissionlessly. You hardly need any coordination either. As such, this is one of those things that will happen when the pain it would resolve gets big enough for there to be an incentive to make it happen. No pre-planning required.

2 Likes

It takes away orphan risk, but it is also living on borrowed time because it requires the block subsidy to be worthwhile. Without the subsidy, header-first mining would produce an empty block with zero revenue, which from the miner's point of view is no better than an orphaned block.

I think compact block relay (which you point out) is a bigger deal for propagation, especially if mempools are in sync, in which case a block announcement will require minimal data and have propagation of 0.5 * RTT + the time it takes to transmit the list of short TXIDs (6 bytes each). For BTC that would typically be just about a 30 kB download, and for BCH it would be 1 MB for a 32 MB block.
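The rough arithmetic behind those figures, under assumed averages (a BTC block of ~5,000 TXs, a 32 MB BCH block of ~166,000 TXs at ~192 bytes each, and an assumed 100 ms RTT and 25 Mbps link):

```python
# Announcement time = 0.5 * RTT + transmit time of the short-ID list.
def announce_time_ms(tx_count: int, rtt_ms: float, bandwidth_mbps: float) -> float:
    short_id_bytes = tx_count * 6
    transmit_ms = short_id_bytes * 8 / (bandwidth_mbps * 1e6) * 1000
    return 0.5 * rtt_ms + transmit_ms

print(5_000 * 6)                          # BTC: ~30 kB of short IDs
print(166_000 * 6)                        # BCH 32 MB block: ~1 MB of short IDs
print(announce_time_ms(166_000, 100, 25)) # ~370 ms at assumed 100 ms RTT, 25 Mbps
```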

Yes, and even if some TXs are missing, there won't be a lot of them, so the extra download and impact on propagation are minimal. Analysis from Chin et al. (2024) (Springer) reveals mempool overlap often exceeds 99%: over 95% of requests involve 19 or fewer missing transactions (4 kB total).

According to Bitnodes, average propagation time is typically under 300ms.

Isn't that what they currently do? But current implementations are limited by RAM, so if a node's mempool grows too big (which can happen during a demand burst) then the node will evict some TXs and just forget about them. What do you think about a kind of mempool swap? Nodes could save those TXs to disk and, later, when the minimum fee bar drops, replay them into the mempool again. Also, what if some other miner had a different eviction policy? Then you could just find those TXs in your swap rather than having to download them again.
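A hypothetical sketch of that "mempool swap" idea: evicted TXs spill to a simple on-disk store instead of being forgotten, and come back when the fee bar drops or when another miner's block references one of them. The class, file format, and method names are all illustrative.

```python
import os, pickle

class MempoolSwap:
    """Hypothetical on-disk store for TXs evicted from the mempool."""

    def __init__(self, path: str = "mempool.swap"):
        self.path = path
        if os.path.exists(path):
            with open(path, "rb") as f:
                self.store = pickle.load(f)   # txid -> (feerate, tx)
        else:
            self.store = {}

    def _flush(self) -> None:
        with open(self.path, "wb") as f:
            pickle.dump(self.store, f)

    def evict(self, txid: bytes, tx, feerate: float) -> None:
        """Called instead of forgetting a TX when the mempool is full."""
        self.store[txid] = (feerate, tx)
        self._flush()

    def replay(self, min_feerate: float):
        """Yield swapped-out TXs that clear the (now lower) fee bar."""
        for txid, (feerate, tx) in list(self.store.items()):
            if feerate >= min_feerate:
                del self.store[txid]
                yield txid, tx
        self._flush()

    def lookup(self, txid: bytes):
        """A block references a TX we evicted? Check here before downloading."""
        entry = self.store.get(txid)
        return entry[1] if entry else None
```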

Yeah, it could last us for a long time, especially if bandwidth keeps growing faster than network use.
With 256 MB blocks, the list of short TXIDs would be about 8 MB, which shouldn't be a problem to transmit considering over 80% of VPS hosts provide ≥436 Mbps upload (VPS Benchmarks, 2025-03-30).
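A quick check of those numbers, again assuming ~192-byte average TXs:

```python
tx_count = 256_000_000 // 192            # ~1.33 million TXs in a 256 MB block
short_id_bytes = tx_count * 6            # ~8,000,000 bytes, i.e. about 8 MB
upload_s = short_id_bytes * 8 / 436e6    # transmit time at 436 Mbps
print(tx_count, short_id_bytes, round(upload_s, 2))   # upload ≈ 0.15 s
```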

Graphene has some blocksize/bandwidth threshold where it will pay off by further compacting the data needed to sync, but what if bandwidth just keeps growing and the threshold is never reached?

Only by grace of us centrally controlling properties like the minimum relay fee.
Only because the cost of validation (as measured by the VM) is not taken into account in relay either.
And the mempools also only have most transactions because miners empty the mempool and don't filter.

The risk I'm thinking of here is that miners may very well want to change some of those properties. Some may want to start mining half-a-sat-per-byte transactions should the price of a single BCH go up a lot.
I don't think that is unrealistic. In fact, if miners want to use their ability to pick and choose what they want to put in a block (or not), I think we should welcome that.

So, this is a multi-step logical argument. Sorry for jumping over the steps before. I hope I can get it across fully. Let me try:

So, a miner may want to mine something different from some other miner. They can set local filters, change their relay rules, etc. Because (this is how it works) their local mempool is what is being mined by that miner (or pool).

So the risk here is that should a miner choose to play with what they want to mine, they are instantly putting requirements on their view of the world. They NEED to change what is in their mempool. A miner decides to mine nothing below 2 sats/byte? Their mempool is forced to reflect that.
And that is pretty bad the moment that miner receives another miner's block. The further the two miners are out of balance, the more costly it is to validate the block, because they now have to download the transactions they had previously rejected for the block they wanted to mine.

So, the goal of giving a miner the right to mine whatever they want to mine is not really reachable in a more mature Bitcoin Cash.

That is why I stated the separate goal above: a mempool should hold as many transactions as reasonably possible, in order to be able to reconstruct a block without downloads. Additionally, such a mempool is helpful for detecting double spends.

So, to wind back: miners want their mempools to look uniform. That is the goal I think is healthy to continue to aim for. It is mostly true today, but the moment miners start innovating in what they include and when, this may become the first victim.

So my thinking is, we should look into the future of what the miner needs and have everyone agree to set as a goal keeping the mempools mostly identical. Because then, when miners start to innovate, it will become clear that they shall not touch the mempool. Instead they need to do their tx selection in a different pool of transactions.

A memory pool that is the same across the majority of Bitcoin Cash network participants is what keeps block propagation quick and smooth, which is the topic today. It should be remarked that we can have future development possibilities and free-market innovation just fine, but that needs more software development.

1 Like