Spearheading the future implementation of Weak Blocks (Intermediate Blocks) for BCH proof of work

Skimming the PDF provided by Griffith, it looks like parallel mining is used largely to avoid wasted blocks.

Doesn’t seem to work how you’re thinking either, though. Sub-blocks act as votes for summary blocks, which actually addresses a bunch of issues that exist today. Still reading, but this isn’t the same idea as weak and strong blocks; this is far more interesting.

@Griffith Very curious – is there a reason Bobtail was not implemented on Nexa? Thus far I see mostly good in this paper.

Well if this is so, then this is not really what I want and what I am pushing.

The sub-blocks and the normal blocks should be merge-mined at the same time, so hashpower is never wasted.

Any sub-block that accidentally achieves the difficulty of a normal block simply becomes a normal block.

Another question is whether such a scheme is even possible.

I will investigate how merged mining works in order to maybe come up with such a solution.

Have you read the paper yet? I’m not sure you’re grasping the concept (and that’s likely because of my rushed explanation). The votes concept isn’t exactly accurate; I was on a train. Many sub-blocks make up a summary block, so little is wasted. They can work like this:

On my way there. I need proper sleep too, you know.

Just edited my comment; that should help with visualization.

There are plans to implement tailstorm in a hardfork. It was not included at chain launch because it was not done yet. That is really the only reason.

A little history on the progression of the protocols BU worked with…
Weak blocks is the term used in Storm. They are called proofs in Bobtail. Tailstorm calls them sub-blocks.

I think it was… Peter Rizun? who began tinkering with the weak-block idea that became Storm, and awemany who had started to implement it. It was originally thought that the weak blocks should be optional because Amaury was gatekeeping consensus changes at the time. However, we found that Storm needed to be added to consensus, otherwise incentives dictated that it was better not to use the protocol (I touched on that briefly in a previous post). Without consensus changes the feature was broken, and it was dropped.

Bobtail, as you pointed out, has a lot of positives. The issue is that there are intra-block attacks possible with Bobtail that had not been considered; a few proof-withholding attacks were found. The original paper never claimed to address all attacks, it only showed that inter-block attacks could be mitigated with Bobtail. This was the reason for Tailstorm.

Tailstorm adds mitigation for proof/sub-block withholding attacks (intra-block attacks). Discounting the sub-block reward based on the topology of how the sub-blocks were linked seemed to encourage people to reveal their sub-blocks quickly. It also allows the proofs/sub-blocks to be used for pre-consensus without overriding PoW like Avalanche did.
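
To make the discount idea concrete, here is a minimal sketch under the assumption that the reward simply scales with the depth of the sub-block tree relative to k. It captures the general incentive, not Tailstorm's exact formula; the function and variable names are mine:

```python
# Minimal sketch of a depth-based reward discount (assumption: reward scales
# linearly with sub-block tree depth / k; not Tailstorm's exact formula).

def discounted_reward(tree_depth: int, k: int, base_reward: float) -> float:
    """Scale the per-summary reward by how linear the sub-block tree is.

    A fully linear chain of k sub-blocks has depth k and earns the full
    reward; parallel branches (e.g. caused by withholding) reduce the depth
    and therefore the payout shared among the sub-block miners.
    """
    assert 1 <= tree_depth <= k
    return base_reward * tree_depth / k

print(discounted_reward(3, 3, 6.25))  # linear tree   -> 6.25 (full reward)
print(discounted_reward(2, 3, 6.25))  # one branch    -> ~4.17
print(discounted_reward(1, 3, 6.25))  # all parallel  -> ~2.08
```

The point is only that withholding a sub-block tends to create branches, which lowers everyone's payout, so revealing quickly is the profitable strategy.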

If you want, you can think of Tailstorm as Bobtail 2.0, a Bobtail with more consideration for attack mitigation.

4 Likes

Nit: even now the chainwork of a block is determined by the target difficulty and NOT by the actual difficulty of the winning hash. In other words, if a miner gets lucky and finds a hash with more 0s than needed, his block WILL NOT weigh more than a competitor's with fewer 0s (but still meeting the target).
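
For reference, chainwork is derived from the block's target alone; the sketch below mirrors the standard formula (equivalent to Bitcoin Core's GetBlockProof), so two blocks at the same target contribute identical work regardless of their actual hashes:

```python
# Chainwork is computed from the target, not from the winning hash.
# work = 2^256 // (target + 1), equivalent to Bitcoin Core's GetBlockProof.

def block_work(target: int) -> int:
    """Expected number of hash attempts needed to meet `target`."""
    return 2**256 // (target + 1)

# A "lucky" hash with extra leading zeros adds exactly the same chainwork
# as a hash that only barely meets the target.
genesis_target = 0x00000000FFFF0000000000000000000000000000000000000000000000000000
print(block_work(genesis_target))  # ~4.3 billion hashes of work
```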

1 Like

So basically we can sort blocks by their number of zeroes and then put the ones with “enough” difficulty into the “strong block” bag, and anything lower we drop into the “sub block” bag.

[ Again, assuming such a scheme is even possible. ]
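
A minimal sketch of that sorting, assuming a hypothetical sub-block target that is some multiple of the strong-block target (all names and numbers below are illustrative, not taken from any of the papers):

```python
# Illustrative sketch: one PoW attempt checked against two targets.
# strong_target and sub_target are hypothetical parameters; a sub-block
# target would be some multiple of the strong target (i.e. easier to hit).

def classify(block_hash: int, strong_target: int, sub_target: int) -> str:
    if block_hash <= strong_target:
        return "strong block"   # automatically also satisfies the sub target
    if block_hash <= sub_target:
        return "sub block"
    return "discard"            # met neither difficulty

strong_target = 2**224           # made-up numbers for illustration
sub_target = strong_target * 64  # sub-blocks ~64x easier to find

print(classify(2**220, strong_target, sub_target))  # strong block
print(classify(2**228, strong_target, sub_target))  # sub block
print(classify(2**240, strong_target, sub_target))  # discard
```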

I will read the Tailstorm whitepaper today and think about it more.

Hey everybody. I’m one of the authors of the Tailstorm paper linked above. I’ll try to answer your questions in case you have any!

I largely agree with @Griffith’s summary of how the different proposals came to be. It’s accurate coming from the perspective of Storm and Bobtail and how these ideas ended up in Tailstorm. A core insight for this discussion here should be that multiple attempts to add weak blocks without adapting the consensus rules have failed before.

I want to add that Tailstorm merges not only Storm and Bobtail, but also (my) Parallel Proof-of-Work. Storm/Bobtail provide the weak block idea. Tailstorm itself adds a nifty reward scheme to punish block withholding. Parallel Proof-of-Work provides a compatible consensus algorithm.

If you decide here that the improved properties are worth changing the consensus, then you should also check out my most recent proposal. While Tailstorm structures the weak blocks as trees (inherited from Bobtail), my new protocol puts them into a DAG. This change comes without modifying the consensus rules for the strong blocks. In Tailstorm, branches in the weak block tree can only be merged with the next strong block; the new protocol’s DAG structure allows merging branches back into a linear chain as soon as possible. I also propose a DAG-based reward scheme for punishing withholding, which turns out to perform better than Tailstorm’s. There is no fancy name yet, so I refer to this as parallel proof-of-work with DAG-style voting.

References:

[ I wanted to post a longer list of references here but new users can only post two links ]

3 Likes

References cont’d:

2 Likes

References cont’d:

[ I’m also limited to three messages, so I’ll edit this here to respond to a recent message above ]

This idea has some history. I don’t know it completely, but some relevant pointers are: (no links due to new-user restriction)

  • Garay et al., The Bitcoin Backbone Protocol, 2015. Search for “2-for-1 POWs”.
  • Pass and Shi, FruitChains: A Fair Blockchain, 2017. Search for “2-for-1 trick”.
  • Yu et al., OHIE: Blockchain Scaling Made Simple, 2020. Search for “Mining in OHIE”.

1 Like

Attached PDF here for easier reference: Parallel Proof-of-Work with DAG-Style Voting & Targeted Reward Discounting [NextGen TailStorm].pdf (276.1 KB)

1 Like

Actually, I do have a question. I am deep-reading (some sections I read up to 8 times) the TailStorm whitepaper right now. I have stumbled upon something that is hard for me to understand.

There is an equation (attributed to Peter Rizun) that defines block propagation delays and orphan rates:

τ = τ0 + z·Q

So according to this, the delays are equal to

Network latency(τ0) + ( Bandwidth(z) * Block Size(Q) ).

I must be misunderstanding something, because greater bandwidth should generally speed up block propagation, not multiply the delay, shouldn’t it? The bigger the bandwidth, the faster a block can propagate. So it would seem that the equation should be

Network latency(τ0) + ( Block Size(Q) / Bandwidth(z) ).

Can you explain this to me?

1 Like

View of Subchains: A Technique to Scale Bitcoin and Improve the User Experience (pitt.edu) – you can find the same formula here on page 1 (item definitions below).

Glad you asked, because I was going to do the same thing. My initial thought when skimming the reference papers was that z means propagation impedance (as it is defined in the above-referenced paper by Peter Rizun), which made me think the units were backwards and should be [seconds / byte] (in order to get the bytes to cancel out), which also fits the definition. However, when I look at Peter’s paper that is referenced in the above paper ( 1txn.pdf (bitcoinunlimited.info)), z is defined as “maximum theoretical network bandwidth” (seemingly alternatively called propagation impedance as well!) – and the formulas below it support that the units are bytes / second.

BUT THEN! There is another twist! THAT paper references this paper ( feemarket.pdf (bitmex.com)) that shows propagation impedance as seconds / [M]byte as well!

So I think there was a mistranslation somewhere. I think the formulas are correct but z actually represents seconds / byte rather than bytes / second.

It seems to me that Peter at some point changed what z meant for one paper and changed his formulas, but then returned to the original definition linked in the feemarket.pdf paper.
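
A quick numeric check of that reading, with made-up values: if z is impedance in seconds per megabyte, the formula as printed already yields seconds, and it coincides with the Q/bandwidth form once bandwidth = 1/z.

```python
# Unit check with illustrative numbers (not taken from any of the papers).
# z as impedance [s/MB]: tau0 + z*Q is already in seconds.
# z as bandwidth [MB/s]: you would need tau0 + Q/z instead.

tau0 = 2.0   # first-byte latency, seconds
z = 10.0     # propagation impedance, seconds per MB (i.e. 0.1 MB/s effective)
Q = 1.0      # block size, MB

bandwidth = 1 / z                            # 0.1 MB/s
delay_as_printed = tau0 + z * Q              # 12.0 s, formula as printed
delay_via_bandwidth = tau0 + Q / bandwidth   # 12.0 s, same thing rewritten

print(delay_as_printed, delay_via_bandwidth)
```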

2 Likes

Thanks for your comment.

Since the whitepaper author is here, I will wait for his answer as well.

I believe I may have found 2 vulnerabilities in your protocol that were not taken into account when performing threat analyses:


Cloned summary problems:

It looks to me like the Byzantine Generals problem for summary blocks is not completely solved by the TailStorm protocol. Assuming summaries can differ from each other even while containing the same list of sub-blocks (for example, they could differ by timestamps or some other unique identifier), a “summary spam” attack is possible due to a race condition, where an attacker supplies thousands of different summaries built from the same sub-blocks in order to confuse nodes and cause some kind of chaos.

What kind of “chaos” that would be is difficult to say, because I can only perform basic and simple simulations directly in my brain, unlike the authors of the whitepaper, who used advanced algorithms and AI. But right now I can already imagine something like spamming millions of summary block candidates to all non-mining nodes, thus using up their CPU power, delaying propagation of honest summary blocks significantly, or confusing other miners so that they mine on top of an invalid block tip and lose their profits as their next summary blocks become invalid because they picked the wrong summary to build on. This potentially creates a scenario where full summary blocks with all included sub-blocks can be orphaned.

Bitcoin solves this, because each block needs to prolong the chain with added Proof of Work, and miners throw dice in order to unilaterally decide which of the duplicates is “real” (contains the most PoW).

Possible quick and easy solutions:
A: A summary block should not just include a summary, but should also include its own added PoW, so the network can choose the summary with the most Proof of Work, instantly solving the Byzantine Generals problem and the race condition.
B: Every summary block built from the same sub-blocks should be identical, generated automatically/procedurally in a way that makes it impossible for 2 different summary blocks with identical contents to co-exist in the network.

Unfortunately, in the case of solution A, the Proof of Work validating the summary block would be partially wasted, because summary blocks do not contain transactions themselves and therefore cannot earn an additional block reward for processing transactions to pay for the burned power.

So solution B would probably be more beneficial in this case, assuming it is possible.
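
For what it's worth, here is a minimal sketch of what solution B could look like, assuming the summary identifier is derived purely from the canonical set of sub-block hashes. This is my illustration, not anything from the whitepaper:

```python
import hashlib

# Sketch of solution B (illustration only, not the Tailstorm specification):
# derive the summary purely from its sub-block hashes in a canonical order,
# so two summaries over the same sub-blocks are bit-for-bit identical.

def canonical_summary_id(subblock_hashes):
    ordered = sorted(set(subblock_hashes))   # canonical order, duplicates removed
    h = hashlib.sha256()
    for sub in ordered:
        h.update(sub)
    return h.digest()

a = hashlib.sha256(b"sub-1").digest()
b = hashlib.sha256(b"sub-2").digest()

# The same sub-blocks in any order produce the same summary, leaving nothing
# extra (timestamps, nonces) for a spammer to vary.
assert canonical_summary_id([a, b]) == canonical_summary_id([b, a])
```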

[ The whitepaper does not address the above attack scenario; I have thoroughly read all the attack analysis scenarios. ]


Sub-block UTXO spending duplication:

In case 2 sub-blocks end up having the same PoW/hash and contain transactions spending the same UTXO twice, how is such a conflict resolved?

The whitepaper talks about it in this manner:

What if hash(sub_a) == hash(sub_b)? This should theoretically be possible, since the sub-blocks are mined completely in parallel.

EDIT:

This attack scenario is resolved; it is comparing hashes and not difficulties. I have no idea why I thought it was comparing difficulties, maybe I did not get enough sleep.


Invalid UTXO spending spam:

The third problem is a possible attack where either a dishonest miner or a dishonest node that has good connectivity to miners sends conflicting transactions spending the same UTXO to different miners.

The whitepaper addresses it in this manner:

Assuming the above is true, it should be theoretically possible to spam conflicting transactions spending the same UTXO twice, knowing they will be ignored. The attacker’s intention is to increase the size of the blockchain in order to increase the maintenance costs of all related services (miners, block explorers, other nodes).

It would only be possible to spam in this manner assuming that the attacker can selectively connect to different miners and supply a specific transaction to miner A and the conflicting transaction only to miner B, thus massively increasing the chance of the miners mining conflicting transactions.

According to the whitepaper, such a transaction would have to be kept in the blockchain for eternity.

Bitcoin solves this by enforcing a strict block ordering at all times; parallel transaction insertion is impossible, which makes this scenario impossible as well.

How does TailStorm deal with this attack?

[ The whitepaper does not address precisely this attack scenario ]


EDIT: Addendum to this attack scenario.

In the scenarios described in the whitepaper, k=3 is used, meaning 3 sub-blocks make up 1 full summary block.

Assuming we want a 10-second sub-block interval and a 10-minute strong-block interval (like in current Bitcoin), it is theoretically possible for up to 60 parallel blocks to be generated by different miners. Of course such a scenario is unrealistic, because there have never been 60 miners in the Bitcoin system with hashpower distributed so evenly.

So realistically, let’s assume a (still edge-case, of course) scenario of 15 sub-blocks being mined in parallel.

In this scenario, it would be theoretically possible for an adversarial player to generate 15 different versions of the same conflicting transaction, 1 valid and 14 invalid, all spending the same UTXO, and send them to 15 different miners.

So this in effect would make spamming the Bitcoin (Cash) network 15 times cheaper than it is now, as the transaction fee only has to be paid once; the other 14 invalid transactions do not require a fee for inclusion.

That is of course an edge case, but calculating this in a more realistic manner, adjusting for the current number of pools and their percentage of total hashpower, scenarios with 5-8 sub-blocks being created in parallel could be pretty common.
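
For illustration only, one way such conflicts could be resolved deterministically when sub-blocks are merged into a summary is sketched below; this is an assumption on my part, not something the whitepaper specifies: walk the sub-blocks in a fixed order and treat only the first spend of each output as effective.

```python
# Illustration only (my assumption, not from the Tailstorm whitepaper):
# resolve double-spends across parallel sub-blocks by walking them in a
# deterministic order and keeping only the first spend of each outpoint.

def effective_transactions(subblocks):
    spent = set()          # outpoints already consumed
    effective = []
    for sub in subblocks:              # deterministic sub-block order
        for tx in sub:
            inputs = set(tx["inputs"])
            if inputs & spent:
                continue               # conflicts with an earlier spend: ignored
            spent |= inputs
            effective.append(tx)
    return effective

sub_a = [{"id": "tx1", "inputs": ["utxo:0"]}]
sub_b = [{"id": "tx2", "inputs": ["utxo:0"]}]  # double-spend of the same UTXO

print([tx["id"] for tx in effective_transactions([sub_a, sub_b])])  # ['tx1']
```

Note that the ignored duplicate would still take up space in the sub-block that carried it, which is exactly the spam concern raised above.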

1 Like

While waiting for a solution for the [possible] TailStorm vulnerabilities, I am going to investigate how precisely merged mining works, in order to evaluate whether it is more suitable for the Intermediate Block protocol.

Found some documentation here:

https://en.bitcoin.it/wiki/Merged_mining_specification

Toomim wrote block_prop_time = first_byte_latency + block_size/effective_bandwidth here.

Maybe it’s an error in the paper.

What if now we get hash(blockA) == hash(blockB)? Bitcoin security relies on that being computationally impossible to achieve. A 256-bit collision is a 2^128 problem and should not happen, not even once.
The assumption is always: if the hashes are the same, then the contents are the same. If that were not the case, then the whole system would be broken.
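
For scale, a rough birthday-bound estimate of that collision risk (plain arithmetic, nothing protocol-specific):

```python
# Rough birthday-bound estimate: probability of any SHA-256 collision among
# n random hashes is about n*(n-1)/2 / 2^256.

def collision_probability(n: int, bits: int = 256) -> float:
    return n * (n - 1) / 2 / 2**bits

# Even a century of 10-second sub-blocks (~315 million hashes):
print(collision_probability(315_000_000))  # ~4e-61, i.e. effectively never
```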