Tailstorm: A Secure and Fair Blockchain for Cash Transactions

From the abstract:

Tailstorm merges multiple recent protocol improvements addressing security, confirmation latency, and throughput with a novel incentive mechanism improving fairness. We implement a parallel proof-of-work consensus mechanism with k PoWs per block to obtain state-of-the-art consistency guarantees [29]. Inspired by Bobtail [9] and Storm [4], we structure the individual PoWs in a tree which, by including a list of transactions with each PoW, reduces confirmation latency and improves throughput. Our proposed incentive mechanism discounts rewards based on the depth of this tree. Thereby, it effectively punishes information withholding, the core attack strategy used to reap an unfair share of rewards.

Paper link: https://arxiv.org/pdf/2306.12206

If we want to achieve faster TX confirmations on Bitcoin Cash, then this could be the most promising direction of research, because it could offer fast sub-block confirmations (10 seconds per sub-block) without the negative impact on orphan rates that a plain block-time reduction would incur.

The main benefits of reducing block time are discussed here, and with Tailstorm we could claim more of those benefits with much better trade-offs, as already discussed here:

Figure - Block arrangements in Blockchain, blockDAG, and Tailstorm (source: Andrew Stone, slightly edited here for accuracy)

Let’s use this thread to understand it all better!

7 Likes

Without rehashing too much, I see TailStorm as an absolute win, provided it performs as expected.

  1. Faster confirmations without a need to reduce (summary) block times
  2. Significantly improved orphan rates
  3. More fair system with discouragement of selfish mining
  4. (Just sprinkles (this has no bearing on whether we explore an upgrade or not, but still a good mention)) A standout major upgrade that could be all the rage: differentiating us from a plain faster block time like Litecoin, significantly improving UX (as discussed in the shorter block times thread) without the drawbacks, encouraging devs from other protocols to take a serious look at BCH, and keeping fully with SHA256 PoW.

Another great benefit is that BU has already spun this up on a testnet, and it is in active development. So this is more than just theory. It is real.

2 Likes

soooo … wen Tailnet?? :smirk:

2 Likes

net? I don’t get the reference

1 Like

TailStorm substantially changes Nakamoto Consensus by introducing parallel, non-linear processing.

The worst part about it is that, to discourage withholding, miners get their reward slashed, which was not a thing in Nakamoto Consensus.

The whole concept of parallel processing is not actually new to blockchains; it has been tried before (braid/braided blockchains, google it) without success [I remember one of the coins doing it failed catastrophically and is not even in the TOP100 right now].

Such technology is essentially BETA-quality and is not suited to a live system that is supposed to always work, with ~100% uptime.

After TailStorm has been running in another coin for 3-5 years, I may change my mind.

But right now, implementing a completely experimental technology with unknown real-life drawbacks on a LIVE🔴 system with ~100% expected uptime is just dangerous and reckless.

4 Likes

It still incentivizes a linear structure, but with a more graceful failure mode - merge a side sub-block as opposed to discarding it. I guess whether this counts as “heavy” is in the eye of the beholder.

Let’s not use “slashed” as the term is usually associated with PoS systems, and that’s just not what’s happening here.

Yes, if an out-of-chain sub-block is merged, then everyone’s rewards in that epoch are equally reduced. I believe that’s what makes it work - it creates an incentive to mine as a chain whenever possible, while improving outcomes if/when some failure happens. If some miner doesn’t like having less reward, he’s free to try to chainify it by finding an additional sub-block to extend the chain structure and kick out some side sub-block.

Individual miners’ profit per hash would stay the same or even increase, while collective miner revenue may be reduced on occasion.

Why would this be the “worst” part, why would it be a problem at all? Miners can choose to not emit some coins, which has actually already happened, so our max theoretical supply is already down to 20999821.02921183 BCH.

Found this: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/2_breaking_the_chain_1_mcelrath.pdf

It looks like a similar/precursor idea that had problems which Tailstorm fixed.

Bitcoin is still in beta :slight_smile: But yeah, caution is warranted.

So, BU has plans to ship it on Nexa. Imagine they deliver it in 2025; +5 years, so 2030 we get a CHIP, and have it activated in 2032. What’s the opportunity cost in that scenario?

Interestingly, I think the whole Tailstorm could be implemented as a soft fork, in which case blocks would slow down for no apparent reason until the DAA adjusted difficulty down to 1/k, old nodes would start witnessing some extra data in the coinbase TX, and variance would magically go down: each 10-minute block would now be the sum of k sub-block intervals, so the relative spread shrinks by a factor of √k, clustering blocks much more tightly around the 10-minute mark.
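Here's a quick simulation to sanity-check that variance claim (my own back-of-envelope sketch, not from the paper): summary-block intervals become sums of k exponential sub-block intervals, i.e. Erlang-distributed.

```python
import random

def interval(k: int, target_minutes: float = 10.0) -> float:
    """One summary-block interval: the sum of k exponential sub-block intervals."""
    return sum(random.expovariate(k / target_minutes) for _ in range(k))

for k in (1, 60):  # k=1 is today's single-PoW block; k=60 means 10 s sub-blocks
    samples = [interval(k) for _ in range(100_000)]
    mean = sum(samples) / len(samples)
    std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    print(f"k={k:2d}: mean ~{mean:.1f} min, std ~{std:.2f} min")

# Expected: std ~10 min for k=1, but only ~1.3 min (10/sqrt(60)) for k=60.
```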

Testnet for Tailstorm = Tailnet

3 Likes

Let me flip that: what is the opportunity cost of continuing to have drastically inconsistent block times, which are very damaging to consumers’ UX? Can we afford to leave that an issue for so long?
With more hash and a more consistent price, our variance would absolutely decrease. However, the best way to ensure that happens is to ship the best UX possible as quickly as possible (for clarity, “as quickly as possible” of course includes working through real issues). This applies to TailStorm or any other proposal. Granted, this is a bit of a chicken-and-egg problem; however, improving the UX is something controllable that absolutely has an impact on price/dominance (again, for clarity, no change should be made exclusively for price, but price drives adoption, awareness, use of 0-conf, etc.), whereas it is very hard to predict what price will be or how to directly impact it. ((I again want to clarify that price is not just for bag holders’ NGU, that would be foolish; but price impacts every other aspect, which then leads to more freedom for all - heck, take price out of the equation and a better UX still improves awareness, keeps people engaged, etc.))

As a separate thought, Shadow says he may put together a different proposal of his own design. Great, if it works. However, nearly all of his major concerns about implementing TailStorm would apply to his own proposal, only further delayed, as it has no white paper, no proof-of-concept testnet, etc. TailStorm is far ahead in those regards. Probably years ahead.

Does that mean TailStorm is the absolute answer? No, absolutely not. But these are all things we need to consider.

And to be frank, confirmation inconsistency is one of the biggest UX complaints that exists today. And it’s not the fault of the user. 0-conf is hard to come by because services lock in some confirmation number. Education will help, but at the end of the day, if services keep requiring confirmations, having more consistent block times will be critical.
That’s something that shorter block times by themselves may help with, in some regards: some services won’t bother to change their 1 or 10 confirmation count, so users would see a big difference! However, for the major services that very likely will just increase the number of confirmations accordingly, the consistency issue is not fixed.
If TailStorm, or another proposal, can significantly improve UX, we have a responsibility to diligence it in full!

1 Like

Can someone steelman this proposal? And maybe TL;DR it for people who are not adept at reading scientific papers (which don’t take the real world very seriously).

I’d like to know what are the downsides.
What happens if miners ignore it? What happens when a 40% hashpower miner ignores it, and thus all the blocks they mine contain different data than the Tailstorm stuff?

I hear absurdly small block times; what does that do to bandwidth requirements for all players?

If there's anything else that breaks it, please share that here as well.

1 Like

just suggesting that there’ll need to be a Tailnet before this tech is “allowed/supported” on Chipnet…

1 Like

I just wanted to kick it off, and start expanding on things as we discuss and learn. Thanks for the prompt, let’s go!

The main concern highlighted by Shadow is that miners can have their rewards reduced. That’s actually what makes the scheme work to incentivize chain formation (d=3) as opposed to a fully parallel structure (d=1), as illustrated by the figure below.

[Figure: k=3 sub-block arrangements - a full chain (d=3) vs. a fully parallel structure (d=1)]

This is why miners would be incentivized to update their (sub)block template to point to the longest subchain as soon as they become aware of it.
Under current consensus, if they happen to be late and create a branch, it results in total loss of their work (reorg), whereas with Tailstorm it can be merged when the full block is announced - the failure becomes more graceful.
Other miners could still choose to try to mine another subblock and reorg the side one, in order to avoid the haircut.
Consider the d=2 case: if another subblock is found inside that same epoch, it’d make the total reward r=3/3, and it would kick out whichever extra subblock happens to be on the shorter branch.

Consider a bigger subdivision: imagine you’re a slow, not-so-well-connected miner, there are 30 subblocks already, but you lagged and forked a new branch - what do? You can just continue mining on it and not suffer from lag anymore, because you’re mining on top of your own blocks; by the time others get to 50 you could be at 2, then others get to 57 and you’re at 3. Someone announces a new subblock and now there are 60 in total - at that point anyone can announce a block deterministically composed of those 60 and get on with mining subblocks in the next epoch - and all miners would suffer only a little haircut because there’s just 1 extra branch.
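To make the haircut math concrete, here's a tiny sketch of the depth discount as I read it from the paper and this thread (the function and example numbers are mine, for illustration):

```python
def epoch_reward(k: int, depth: int, base: float = 1.0) -> float:
    """Tailstorm-style discount: an epoch of k sub-blocks whose tree has
    the given depth pays out depth/k of the full reward overall."""
    assert 1 <= depth <= k
    return base * depth / k

print(epoch_reward(3, 3))    # pure chain: r = 3/3, no haircut
print(epoch_reward(3, 2))    # one side sub-block: r = 2/3
print(epoch_reward(3, 1))    # fully parallel: r = 1/3
print(epoch_reward(60, 58))  # the 60-sub-block scenario above, one short side branch: ~0.97
```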

Thing is - miners as a collective may get less revenue - BUT they get better ROI, and from the PoV of individual miners, they each get MORE revenue, because the reduction in orphans more than offsets the occasional haircut when would-be orphans are merged.

The main downside is data redundancy and additional overheads. You need to know from which subblock each of the block’s TXs came, you need some data structure for subblock headers, and you need to keep any conflicting TXs stored just so you can verify the merkle roots (even though the conflicting TXs will be ignored and won’t affect UTXO state).
The nice thing is - it does not break SPV; clients can just ignore this extra data and use only the regular headers.

Of course, there’s also the implementation costs & risks, it’s a bigger change than block time adjustment.

It’s a consensus change - they can’t ignore it. Another drawback.

Good question, I am not sure at this moment, but I guess it inevitably adds more overheads, I just don’t know how much.

Another good Q you asked on Tg:

or can miners still use the same software for mining BTC and BCH?

hmm, not sure about this actually; if the subblock header format is the same, maybe the software could pretend it’s just mining shorter block times

2 Likes

by Tailnet, do you mean a testnet for TailStorm?

If so, agreed. I think that’s the basic flow anyways.

I have confirmed this with @Griffith: mining would not need to change; miners would simply have a lower difficulty target to mine against.

I was asking him about a hypothetical implementation, too:

I’ve been wondering what’d be the least-invasive way to implement it on BCH; here are some thoughts:

  1. The block header can stay the same; it would point to the previous summary block, but the target would get reduced to 1/60 (in the case of 10s sub-blocks)
  2. Consensus would require the coinbase TX to have OP_RETURNs with references to the parent sub-blocks or the summary block
  3. We need to know from which sub-block a TX came, so we’d maybe need an additional 2 bytes/TX to encode this in a compressed manner
  4. We need a place for sub-block headers

this would make it so that non-upgraded nodes don’t notice anything; a summary block announcement would look just like a normal block announcement does now, and the only thing they’d observe is a reduction in difficulty to 1/60

what about mining software etc.? if the subblock header format were the same as the block format, they could pretend they’re mining a reduced-blocktime chain, right?

and he pretty much confirmed my understanding.

1 Like

In order to keep this discussion productive and moving forward, I ask you to be more technical / practical in comparisons and statements.

Your statement that under current consensus miners create a branch is not true. Miners instantly adopt a block any other miner has mined, after only seeing and validating the 80-byte header.
Your comparison then is also less impactful; it solves a problem that the current system does not have.

I’d like to avoid being that annoying one that needs to go and correct statements of fact, so please help me by being factual from the start.

If this is as good as you claim, fully truthful representing the idea should be enough. Someone needing to correct your statements actually makes it look like a sales pitch and not a good one.

So, the ENTIRE first part seems superfluous and solves a problem that does not exist. We don’t have a significant number of orphans on BCH. It is interesting to read that miners collectively get less revenue; that’s a negative, without an actual benefit on the other side.

What you’re not saying out loud is that anyone mining then needs to have all the transactions included in other miners’ blocks, at least if they want to extend that chain.

Does that mean that a miner can no longer decide which transactions to include in the blocks they mine?

Imagine a miner that has 200KB of free (zero-fee) transactions, and they fill the block to the brim with other transactions.

Imagine another miner just taking all highest fee transactions.

Both blocks are full to the max blocksize allowed.

What happens? This is a relevant question as miner business models highly depend on the miner being able to decide which transactions they want to include.

A similar question: when a miner doesn’t have the capacity to make big blocks, they make 8MB blocks. Then another miner makes a 32MB block. Does that mean the small miner can no longer participate?

If those points are true, then the statement you guys made on Telegram yesterday is invalid. All ecosystem participants need to do work to support this, with the sole exception of SPV wallets. If the way that a basic block is stored changes (pt 3), that means nobody will be able to use the coin without work.

1 Like

I’ll try; the above was a clumsy way to say: right now a reorged block results in total loss of reward for the miner who found it, whereas with Tailstorm his block could be integrated alongside the longest subchain, and he can still get paid for finding a winning nonce.

[Figure: sketch of a side sub-block being merged alongside the longest subchain]

What if they mined their own block seconds before being notified of another block at the same height found by another miner? They have to make a choice: build on top of their own, or switch to building on top of the other one. All other miners have to pick a side, too. They use the first seen, right? But that’s subjective; not everyone has to see it the same.

Thanks for this, I want to avoid any misunderstandings so we can best evaluate this. First we must correctly understand all the implications and costs/risks/benefits; then we can proceed to making judgements about whether it would be worth it, and everyone would have to be convinced. For that to happen, everyone must first understand it.

It’s just my enthusiasm, because it looks so promising I can’t help myself from being excited by the possibility of improving BCH.

My node has logged some alternative tips (current best 856932): 848151, 839006, 834892, 833934, 832208, 826704. From that, the current orphan rate is about 0.02%, which is nice. But our blocks are small right now. Wouldn’t orphan rates grow with block size?

That’s the whole argument against simply reducing block time, right? With 200 MB blocks maybe we’d hit 1%, but if we reduced block time at the same TPS (say, 50MB x 4 for 2.5-min blocks), we could be looking at what, 2, 3, 10%?
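For intuition, here's a rough model (a standard approximation; the delay numbers are made up for illustration): a block risks being orphaned whenever a competing block is found during its propagation/validation delay.

```python
from math import exp

def orphan_rate(delay_s: float, block_interval_s: float) -> float:
    """P(orphan) ~ 1 - exp(-delay/interval): the chance another block is
    found somewhere while ours is still propagating and validating."""
    return 1 - exp(-delay_s / block_interval_s)

print(orphan_rate(0.12, 600))  # ~0.02%: roughly today's observed rate
print(orphan_rate(6.0, 600))   # ~1.0%: hypothetical slow-propagating 200 MB blocks
print(orphan_rate(6.0, 150))   # ~3.9%: same delay, but 2.5-minute blocks
```

Smaller per-block sizes at the same TPS would reduce the delay somewhat, but fixed latency components keep it from shrinking proportionally, which is why shorter intervals push the rate up.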

Hold on, I think there could be some misunderstanding here. Isn’t that the same as now? You need to have the parent block’s TXs if you want to be sure you’re extending a valid block. Same here: you need to validate the parent (sub)block if you want to build on top of it, just as if we had a simple block-time reduction. But while you validate the latest subblock, you can mine on top of the previous one in parallel, and if you find a block, it can still be integrated and you get the reward for contributing PoW.

Maybe you mean the scenario where there are 2 competing subblocks with almost the same set of TXs in each. After all, if everyone’s mining off the public mempool, their block headers are expected to commit to almost the same set of TXs, right? So they both announce their blocks - and how do nodes currently deal with storing the orphaned block, isn’t there some deduplication method? All you have to do is store a list of TXs, right? Orphaned blocks are not part of the permanent blockchain record; they’re just local to the nodes that happened to see them. Tailstorm would integrate them into the permanent blockchain record.

So, when the time comes to merge these 2 blocks (enough accumulated PoW for a summary block), how do we deduplicate? That’s why I said +2 bytes/TX, but it would depend on the chosen subdivision, because a single TX can appear in any number of subblocks (worst case fully parallel, where some TXs are replicated K times). With +2 bytes and using a bitfield, you could subdivide up to K=15 (you need the last bit for rejected conflicting TXs).
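A minimal sketch of what that per-TX bitfield could look like (an entirely hypothetical encoding, not from the paper or any implementation):

```python
REJECTED = 1 << 15  # last bit: the TX is a rejected conflicting TX

def encode_membership(subblocks: list[int], rejected: bool = False) -> bytes:
    """Pack which of up to K=15 sub-blocks included a TX into 2 bytes."""
    field = REJECTED if rejected else 0
    for i in subblocks:
        assert 0 <= i < 15, "a 2-byte bitfield supports at most K=15 sub-blocks"
        field |= 1 << i
    return field.to_bytes(2, "little")

def decode_membership(raw: bytes) -> tuple[list[int], bool]:
    field = int.from_bytes(raw, "little")
    return [i for i in range(15) if field & (1 << i)], bool(field & REJECTED)

# A TX that two competing sibling sub-blocks (indices 0 and 2) both included:
raw = encode_membership([0, 2])
print(raw.hex(), decode_membership(raw))  # 0500 ([0, 2], False)
```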

Why would it mean that? If the subblock tip has been updated, he can’t include the TXs already included in the parent subblock; but if he mined a parallel block, then he could include the same TXs and even a conflicting TX, and the individual TX conflicts would be resolved based on subchain length when they later get merged in the summary block.

They can; imagine the small miner being the side block in the above sketch. Even if his block were a subset of the TXs found in the parallel block, his block would get merged and he’d get his fair share of rewards.

I’m not sure how fees are split in the summary when distributing rewards; another good Q has emerged here.

Why? Can’t the extended block data be added and passed around in a non-breaking way? Non-upgraded software would just see plain old blocks every 10 minutes.

1 Like

Blocksize limitation is largely based on what the miner is comfortable with (has tested). This is a combination of bandwidth, hardware, and software capability.

We currently have block compression that is consistently (well) above 90%, miners all do ‘headers-first’ mining, and back in, I think, 2019, we had a couple of 32MB blocks on the BCH chain. Just concluding that bigger blocks will somehow make orphans more likely is oversimplifying the way things work. Just like “driving faster makes more people die in car accidents” - a correlation that is statistically not easy to dismiss, but certainly not causation.

Today a miner (or pool) decides which transactions to include in the final block (there are only “final” blocks). So, for instance, if they don’t want to mine OP_RETURN-based transactions, they have that right. If they want to mine empty blocks, they have that right. And as another good example, miners have the right to decide they want to create a fee market: make transactions with zero fee get mined, but only after 5 hours, for instance.

But back to the point: a miner in the future, when we have very little to no block reward, will need to find enough fee-paying transactions to actually turn on that mining hardware. To allow this, we need an open and competitive mining market.

From what you’re explaining, this Tailstorm idea turns everything into a smoothing operation that removes competition and open-market incentives.

Because the block datastructure as Satoshi designed it is what all software now expects. Add some bytes after every transaction and that software breaks. Conclusion: all ecosystem participants (with the exception of SPV wallets) need to do work.

Basic rule of blockchain: 100% of all data is covered by some hash. The merkle root covers the entire block (after the header).
This makes sense, since if any byte is not covered by the block-id hash, then that byte can be changed to invalidate the block while the ‘actual’ block stays valid.

So, barring a segwit-style second merkle tree in the ugliest softfork ever: no, you can’t.

1 Like

We could say that, yeah: it smooths out reward payouts similar to how pools smooth them out. This would add a layer of smoothing at the protocol level, so pools would receive smoother payouts, which they’d then further smooth when paying out to individual miners.

I think it’s not correct to say this. Or rather, it would depend on the details of how exactly fees from subblocks get redistributed in the summary block, where the miner payout happens. The paper barely mentions fees (and I'm not sure how the implementation handles them), so this is a gap we have to figure out; some thoughts below.

If the epoch is a pure chain of subblocks, then it’s easy: just pass the fees to the same miner who collected the TXs in their subblock, and there can’t be duplicate or conflicting TXs in this case. Remember, this case is just as if the network reduced block time. It’s the failure mode that differs (reorg vs. merge).

So what if we need to merge another subchain or subblock (like in the above sketch) and the block template contains the same set of TXs as a sibling block? Then each miner should get exactly half of the collected fees, since they both included all the same TXs and found a winning block at almost the same time. This is no different from how a pool would distribute rewards to miners who contributed: they all contributed hash to the same block template, and they all take their fair share of subsidy+fees in that template.

If the sets of sibling subblock TXs overlap, we can do a 50:50 split for the TXs found in both, while awarding the full fees of TXs found exclusively in one or the other to their corresponding miner. Can such fee accounting be computed easily? When a full block is announced, the proposed bitfield will tell you in which subblock(s) each TX appeared. So just make a pass through the TX list and the per-TX bitfield I mentioned, and match the bit index with the coinbase output index to see how much each output is allowed to claim. For TXs that have multiple bits flipped, just split the fee accordingly, e.g. 1/3:1/3:1/3 if the same TX is found in 3 subblocks.
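Here's what that accounting pass could look like (my own illustration of the proposal above, reusing the hypothetical bitfield idea):

```python
def split_fees(txs: list[tuple[int, list[int]]], k: int) -> list[int]:
    """txs: (fee_sats, sub-block indices that included the TX).
    Returns the fee payout per sub-block miner, splitting shared TXs equally."""
    payouts = [0] * k
    for fee, included_in in txs:
        share = fee // len(included_in)  # integer sats; remainder handling omitted
        for i in included_in:
            payouts[i] += share
    return payouts

# Three sub-blocks: one TX included by all three, one shared by siblings 0 and 1,
# and one exclusive to sub-block 2:
print(split_fees([(300, [0, 1, 2]), (200, [0, 1]), (500, [2])], k=3))
# -> [200, 200, 600]
```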

Depends on how you add it, more on that below.

In any case, the summary block will cover all TXs, just as it does now. The header will point to the previous summary block, so you can verify all pre-upgrade consensus rules without having the Tailstorm extra data. To verify Tailstorm, you’d need some more data, and that data must somehow be covered by the summary block hash.

Subblock headers are part of the extra data. Each will have its own merkle root covering the subset of TXs later to be found in the summary (+ any conflicting TX, which is NOT to be found in the summary), and the hash of each subblock header must meet the 1/K target.

So all we have to do is add the subblock hashes to some OP_RETURN in the summary, and this can easily fit in the coinbase OP_RETURN.

The Tailstorm extra data doesn’t have to be specially covered by the summary block, because it is transitively covered, i.e. through summary merkle root -> coinbase opreturn -> subblock headers -> subblock TXs.

So if someone gives you just the full block in current format without Tailstorm extra data, what can you verify with just that?

You can verify the integrity of all TXs and all consensus rules, and you can verify the partial PoW of just the summary block and that it’s correctly chained with the previous summary block. The per-header difficulty will be lower than pre-upgrade (the target easier to meet), so an adversary could more cheaply produce a forged header, because non-upgraded SPV doesn’t know that there’s extra PoW in a new place (the subblock header chain). An SPV client could be extended to fetch the coinbase TX plus just the subblock headers; then it can correctly verify the PoW accumulated in the subblock header chain.
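Roughly what that extended SPV check could look like (a hedged sketch; the data layout and function names are my assumptions, not a real client API):

```python
import hashlib

def pow_hash(header: bytes) -> int:
    """Double-SHA256 of a header, interpreted little-endian as Bitcoin does."""
    return int.from_bytes(
        hashlib.sha256(hashlib.sha256(header).digest()).digest(), "little")

def verify_epoch_pow(summary_header: bytes, subblock_headers: list[bytes],
                     subblock_target: int, k: int) -> bool:
    """Extended SPV: besides the summary header's own (reduced, 1/k) PoW,
    require all k sub-block headers to meet the sub-block target too.
    A real check would also verify that the sub-block headers chain together
    and are committed in the coinbase OP_RETURN."""
    if len(subblock_headers) != k:
        return False
    if pow_hash(summary_header) > subblock_target:
        return False
    return all(pow_hash(h) <= subblock_target for h in subblock_headers)
```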

For IBD, nodes could just ask for the summary block + Tailstorm extra (dashed rectangles).

Online nodes will have obtained the data from individual subblock announcements.

I may have added some confusion with this bitfield idea; it’s not strictly necessary, I think - it just helped me reason about deduplication when storing this stuff locally and communicating it all during IBD.

We don’t need to cover the bitfield with any hash! Why? Because the subblock merkle roots cover both the TXs and their particular order, and the main block’s coinbase opreturn contains the merkle roots, so it’s all covered by the main merkle root. You need to know which TXs belong to which (sub)block, and in which order, just so you can verify the subblock roots. Once verified, you can throw away this extra info.

1 Like

Talking about fees with @tom, I had a realization that this issue highlighted by @ShadowOfHarbringer is solvable while still maintaining incentives to form subchains rather than mining subblocks in parallel.

Instead of burning, just require the penalty to be sent to an anyone-can-spend output in the coinbase TX. This means the BCH will still be created as normal, and once the UTXO matures, any miner can spend it to claim it as a fee in some N+100 epoch.
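A sketch of how the coinbase could implement that (my construction for discussion, not from the paper; amounts and script placeholders are illustrative):

```python
def coinbase_outputs(subsidy_sats: int, fees_sats: int, depth: int, k: int):
    """Pay miners the depth-discounted amount; instead of burning the
    haircut, lock it in an anyone-can-spend output a later miner can claim."""
    total = subsidy_sats + fees_sats
    paid = total * depth // k    # discounted payout, per the paper's d/k rule
    penalty = total - paid       # the haircut, kept in circulation
    outputs = [(paid, "<miner scriptPubKey>")]
    if penalty:
        outputs.append((penalty, "OP_TRUE"))  # anyone-can-spend once matured
    return outputs

# A k=3 epoch that ended at depth 2 (one side sub-block):
print(coinbase_outputs(312_500_000, 5_000_000, depth=2, k=3))
```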

Back to fees: the paper is lacking in that regard; the only mention is this:

On a separate note, Carlsten et al. [12] demonstrate that selfish mining becomes more profitable when considering transaction fees in addition to mining rewards. They present a strategy targeting Bitcoin which leverages transaction fees to outperform honest behavior for any α > 0 and γ < 1. Similar attacks are likely feasible against all PoW cryptocurrencies, including Tailstorm, however they also exceed the scope of this paper.

1 Like

Yeah, maybe, but you would essentially have to redo their whole whitepaper, and probably their synthetic tests / AI attack tests too, in order to evaluate the possible dangerous scenarios again.

You'd essentially have created a new version of TailStorm that is even more experimental than the old version. So this increases the instability risk further, because it’s completely novel.

1 Like

Yup, it’s a big job, gotta get started somehow.

More like trying to tune it to our needs, like we did with native tokens. I had a chat with Griffith about fees: in their variant, fees are pooled together and redistributed equally in the summary (each miner gets 1/K of the total fees, no matter which TXs they contributed). We could do what I proposed above, so each miner gets the fees from the TXs they included, unless multiple miners included the same TX, in which case they’d share.

Also, the WP penalizes all subblocks equally for non-chain formations, but I wonder whether incentives would still work if you didn’t punish the longest subchain at all. If there are 2 subchains of length K/2 each, then maybe yes; but if there’s a (K/2)+1 and a (K/2)-1, then the (K/2)-1 one would take a hit while the (K/2)+1 wouldn’t.

EDIT: It would be interesting to explore the difference in punishment schemes. I need to think more about the potential complications, but I'm not aware of any that haven’t already been discussed.
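To make the comparison tangible, here's a toy sketch of the two schemes (a deliberate simplification: one longest subchain of length d plus k-d side sub-blocks; the paper's actual reward function is more general):

```python
def uniform_discount(k: int, d: int) -> list[float]:
    """The paper's scheme: all k sub-blocks share the d/k-discounted reward,
    so each earns d/k of its 1/k slice of a unit epoch reward."""
    return [d / k / k] * k

def spare_longest_chain(k: int, d: int) -> list[float]:
    """Variant floated above: the d sub-blocks on the longest subchain keep
    their full 1/k shares; only the k-d off-chain ones take the discount."""
    return [1 / k] * d + [d / k / k] * (k - d)

print(uniform_discount(3, 2))     # [~0.222, ~0.222, ~0.222] -> everyone punished
print(spare_longest_chain(3, 2))  # [~0.333, ~0.333, ~0.222] -> only the side block
```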

1 Like