Let’s talk about block time

Tailstorm would offer benefits immediately upon activation. Depending on the implementation, it could:

  • reduce variance so that 95% of the time you have to wait less than 13 minutes for 1-conf (vs 47 minutes now).
  • reduce block time so that 95% of the time you have to wait less than 1 minute for 1-conf.
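
A quick way to see where numbers of this shape come from is to model block discovery as a Poisson process. The sketch below is only an illustration under that assumption, with a hypothetical K = 60 subblocks per 10-minute block; the exact 13- and 47-minute figures above may rest on additional assumptions, so treat this purely as a demonstration of the variance effect.

```python
# Minimal sketch, assuming idealized Poisson mining (stable hashrate, no
# propagation delay) and a hypothetical K = 60 subblocks per 10-minute block.
# A single full-difficulty block wait is Exp(mean = 10 min); the sum of K
# subblock waits is Erlang(K, mean = 10 min), which has far lower variance.
from scipy.stats import erlang, expon

TARGET = 10.0  # minutes of expected work per "full block"
K = 60         # hypothetical subblock count, not a number from the post

p95_single = expon(scale=TARGET).ppf(0.95)       # one full-difficulty block
p95_sub = erlang(K, scale=TARGET / K).ppf(0.95)  # K summed subblock waits

print(f"95th percentile, single block : {p95_single:.1f} min")  # ~30 min
print(f"95th percentile, {K} subblocks: {p95_sub:.1f} min")      # ~12 min
```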

Based on my research I think there are 3 ways (edit: 4 actually) to improve confirmation time:

| Effect | Plain block time reduction | Plain subblocks | “Inner” Tailstorm | “Outer” Tailstorm |
|---|---|---|---|---|
| Reduced target wait time variance (e.g. for 10 or 60 min. target wait) | Y | Y | Y | Y |
| Increased TX confirmation granularity | 1-2 minutes | Opt-in, 1-2 minutes | Opt-in, 10-20s | 10-20s |
| Requires services to increase confirmation requirements to maintain same security | Y | N | N | Y |
| Legacy SPV security | full | 1/K | 1/K | near full |
| Breaks legacy SPV height estimation | Y | N | N | Y |
| Increases legacy SPV overheads (headers) | Y | N | N | Y |
| Selective opt-in “aux PoW” SPV security | N | Y | Y | N |
| Breaks header-first mining every Kth (sub)block | N | Y | Y | Y |
| Additional merkle tree hashing | N | Y, minimal if we’d break CTOR for summary blocks | Y, minimal if we’d break CTOR for summary blocks | minimal |
| Increased orphan rate | Y | Y | N | N |
| Reduces selfish mining and block withholding | N | N | Y | Y |

I could say the same: pretending there’s no confirmation time problem is harmful for BCH.

I can accept this criticism; we’re not yet at the stage where we could hype anything as a solution.

These two tables should be sufficient reason, unless you hand-wave away the need to ever wait for any confirmations.

Table - likelihood of first confirmation wait time exceeding N minutes

Table - likelihood of 1 hour target wait time exceeding N minutes
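
The tables themselves aren’t reproduced here. As a rough illustration of how such exceedance figures can be derived under an idealized Poisson-mining assumption (stable hashrate, no propagation delay), a minimal sketch; the actual tables may use different assumptions:

```python
# Minimal sketch of how exceedance tables like the two above can be built,
# assuming idealized Poisson mining. The "1 hour target wait" is modelled
# as the sum of six 10-minute block waits (an Erlang distribution).
from scipy.stats import erlang, expon

first_conf = expon(scale=10.0)       # wait for the next 10-minute block
hour_target = erlang(6, scale=10.0)  # six blocks of target work, ~1 hour

for n in (10, 20, 30, 45, 60, 90, 120):
    print(f"P(1-conf wait > {n:3d} min) = {first_conf.sf(n):.3f}   "
          f"P(1-hour target wait > {n:3d} min) = {hour_target.sf(n):.3f}")
```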


This is a great summary – thank you.

Might be helpful to tag this consolidated reply back to the TailStorm chain too.


I made a schematic to better illustrate this idea: it would be just like speeding up blocks, but in a way that doesn’t break legacy SPV:

It would preserve legacy links (header pointers) & merkle tree coverage of all TXs in the epoch.
This doesn’t break SPV at all; it’d be just as if the price did 1/K and some hash left the chain.
Legacy SPV clients would continue to work fine (at reduced security) with just the legacy headers.
However, they could be upgraded to fetch and verify the aux PoW proofs just for the most recent blocks, to prove that the whole chain is being actively mined with full hash.
So, the increased-overheads drawback of simply accelerating the chain would be mitigated by this approach, too.
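
To make the idea concrete, here is an illustrative sketch of what an upgraded SPV check might look like under this scheme. The types, parameters, and the way subblock proofs are linked to the summary headers (merkle commitments, proof format) are all assumptions for illustration, not part of any spec:

```python
# Illustrative sketch only (hypothetical types, not an actual client API):
# a legacy SPV client keeps verifying the plain header chain at 1/K security,
# while an upgraded client additionally checks aux-PoW proofs for just the
# most recent blocks to confirm the chain is still mined with full hash.
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def pow_ok(header: bytes, target: int) -> bool:
    # A header "meets" a target if its hash, read as an integer, is below it.
    return int.from_bytes(dsha256(header), "little") <= target

def verify_chain(headers, legacy_target, aux_proofs, sub_target, k, recent=100):
    """headers: 80-byte legacy headers, oldest first.
    aux_proofs: for each of the newest blocks, the list of subblock headers
    whose combined work backs that summary header (linkage via merkle
    commitments is elided here)."""
    # 1) Legacy rule: every summary header meets the reduced (1/K) target,
    #    so an un-upgraded SPV client keeps working, just at lower security.
    if not all(pow_ok(h, legacy_target) for h in headers):
        return False
    # 2) Opt-in rule: for the newest `recent` blocks only, check that k
    #    subblock proofs each meet the subblock target, demonstrating that
    #    the chain tip is still being mined with the full hashrate.
    for subs in aux_proofs[-recent:]:
        if len(subs) < k or not all(pow_ok(s, sub_target) for s in subs):
            return False
    return True
```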


OK, now your only advantage left is having more consistent block times. What about not having all this Tailstorm complexity, and instead just allowing a more advanced p2peer way of mining?

Specifically I’m thinking that a p2peer setup may be extended to be cumulative.

P2Peer is today a mining standard that fits in the current consensus rules and allows miners with a partial proof-of-work to update the to-be-mined block with a new distribution of the rewards and such.
Where it differs from your suggested approach is that, simply said, you lower the number to throw on a 20-sided die from 20 to 16, but you require a lot more such throws to compensate.

This is the simple, conceptual difference that you claim is the reason Tailstorm has more consistent block times.
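
The dice analogy can be checked with a toy simulation: requiring several easier hits, with the same expected total number of rolls, gives a visibly tighter spread. This is only a sketch of the statistical effect, not of either protocol:

```python
# Toy Monte Carlo of the dice analogy above: one "hard" success (roll a 20
# on a d20) vs. accumulating five "easier" successes (roll 16 or higher,
# which is 5x as likely), so the expected number of rolls is 20 either way.
# Accumulating easier hits gives a much tighter spread of total rolls,
# i.e. more consistent block times.
import random
import statistics

def rolls_until(hits_needed: int, p: float, rng: random.Random) -> int:
    rolls = hits = 0
    while hits < hits_needed:
        rolls += 1
        if rng.random() < p:
            hits += 1
    return rolls

rng = random.Random(1)
hard = [rolls_until(1, 1 / 20, rng) for _ in range(100_000)]
easy = [rolls_until(5, 5 / 20, rng) for _ in range(100_000)]

for name, xs in (("one hard hit", hard), ("five easier hits", easy)):
    print(f"{name:16s} mean={statistics.mean(xs):5.1f} "
          f"stdev={statistics.stdev(xs):5.1f}")
```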

My point is, if that is your argument, you should strip it down: ship those 20 (or whatever number) block headers in the block, and make the consensus update to allow that.
Minimum change, direct effect.

I dislike the lying soft-fork changes that claim to be less of a change because they hide 90% of the changes from validating peers. That is a lie, plain and simple. It is why we reject segwit, and it is why we prefer clean hard forks. As such, a simple and minimal version of what you propose (a list of proofs of work instead of a single proof of work) is probably going to be much easier to get approved.

A simple list of 4 to 8 bytes per PoW item, one for each block-id that reached the required partial PoW, can be added at the beginning of the block (before the transactions) to carry this information.
The block header would stay identical; the difficulty, the merkle root etc. are all shared between PoW items, and the items themselves just cover the nonce and maybe the timestamp offset (an offset, so a variable size for that one).

The main downside here is that a pool changing the merkle root loses any gained partial PoW. As such, while confirmations may be much more consistent, the chance of getting into the very next block drops, and most transactions should expect to be in block+2.

Reasonably simple changes:

  • block format changes slightly. Some bytes added after the header.
  • A simple header can no longer be checked for correct PoW without downloading the extra (maybe 200) bytes, which should be included in the ‘headers’ p2p calls (see the rough size estimate after this list).
  • Miner software should reflect this, though there is no need to actually follow it; they just need one extra byte for the number of extra nonces.
  • The block-id should be calculated over the header PLUS the extra nonces, and the next block thus points back to the previous one PLUS the extra nonces. This has the funny side effect of block-ids no longer starting with loads of zeros :man_shrugging:
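
For a rough sense of where the “maybe 200 bytes” above could come from, here is a back-of-envelope calculation using the field sizes sketched in the spec below; the var-int sizes are my assumptions:

```python
# Back-of-envelope size of the extra per-block data, using the field sizes
# sketched in the spec below: a 4-byte start-of-mining timestamp, a var-int
# count, then a 4-byte nonce plus a var-int time offset per sub-work item.
# Assumes offsets up to ~10 minutes need a 3-byte CompactSize var-int.
def extra_bytes(n_items: int, offset_bytes: int = 3) -> int:
    return 4 + 1 + n_items * (4 + offset_bytes)

for n in (10, 20, 30):
    print(n, "sub-work items ->", extra_bytes(n), "bytes")
# 10 -> 75, 20 -> 145, 30 -> 215: the "maybe 200 bytes" ballpark.
```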

New block-header-extended:

  • The current 80-byte header: Block Header.
    Used to calculate the PoW details over; what we always called the block-id is now called the ‘proof-hash’.
  • A start-of-mining timestamp. (4 bytes unsigned int)
  • Number of sub-work items. (var-int)
    This implies the sub-item targets. If there are 10, then the target of PoW is adjusted based on that. Someone do the math to make this sane, please.
  • Subwork: nonce (4 bytes)
  • Subwork: time-offset against ‘start of mining’. (Var-int).

This entire dataset is to be hashed to become the block-id which is used in the next block to chain blocks.

To verify, one takes the final block header and hashes it to get its work. Then, for each sub-work item in the list, replace the nonce and the time in the final block header; the time is replaced by taking the ‘start-of-mining’ timestamp and adding the offset to it. After that, hash the new 80-byte header to get its work and add it to the total work done.
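
Here is a minimal sketch of the extended structure and of the verification loop described above. The exact var-int encoding, the work-from-hash formula, and how the per-item target would scale with the item count are assumptions (the post itself leaves that math open):

```python
# Minimal sketch of the "block-header-extended" above: serialize the final
# 80-byte header plus start-of-mining timestamp and sub-work items, hash the
# whole thing to get the block-id, and accumulate work by patching each
# item's nonce/time back into the header and hashing it.
import hashlib
import struct

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def varint(n: int) -> bytes:  # Bitcoin CompactSize encoding
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b"\xfd" + struct.pack("<H", n)
    return b"\xfe" + struct.pack("<I", n)

def serialize_extended(header80: bytes, start_ts: int, subworks) -> bytes:
    """header80: the final 80-byte header; subworks: list of (nonce, offset)."""
    out = header80 + struct.pack("<I", start_ts) + varint(len(subworks))
    for nonce, offset in subworks:
        out += struct.pack("<I", nonce) + varint(offset)
    return out

def block_id(header80: bytes, start_ts: int, subworks) -> bytes:
    # The block-id (what the next block points back to) is hashed over the
    # header PLUS the extra fields, so it no longer starts with zeros.
    return dsha256(serialize_extended(header80, start_ts, subworks))

def patched_subheader(header80: bytes, start_ts: int, nonce: int, offset: int) -> bytes:
    # Replace the time (bytes 68..72) and nonce (bytes 76..80) fields of the
    # final header with this sub-work item's values; everything else is shared.
    return (header80[:68]
            + struct.pack("<I", start_ts + offset)  # time = start + offset
            + header80[72:76]                       # nBits stays shared
            + struct.pack("<I", nonce))

def accumulated_work(header80: bytes, start_ts: int, subworks) -> int:
    """Sum the work proven by the final header and by every sub-work item,
    using a naive work-from-hash estimate (2^256 / (hash + 1)); checking
    each item against a per-item target is left open, as in the post."""
    headers = [header80] + [patched_subheader(header80, start_ts, n, o)
                            for n, o in subworks]
    return sum((1 << 256) // (int.from_bytes(dsha256(h), "little") + 1)
               for h in headers)
```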

Now, I’m not suggesting this approach. It is by far the best way to do what tailstorm is trying to do without all the downsides, but I still don’t think it is worth the cost. But that is my opinion.
I’m just saying that if you limit your upgrade to JUST this part of tailstorm, it will have a hugely improved chance of getting accepted.

That’d be the only immediate advantage. However, nodes could extend their API with subchain confirmations, so users who opt in to read them could get more granularity. It’d be like opt-in faster blocks from a userspace PoV.

This looking like a SF is just a natural consequence of it being non-breaking to non-node software. It would still be a hard fork because:

  • We’d HF the difficulty adjustment down to 1/K. To do this as a SF would require intentionally slowing down mining over the transition so the difficulty would adjust “by itself”, which would be a very ugly thing to do. So, still a HF.
  • Maybe we’d change the TXID list order for the settlement block’s merkle root. I’m not sure of the trade-offs here; it’s definitely a point to grind out. The options I see:
    • Keep full TXID list in CTOR order when merging subblock TXID lists. Slows down blocktemplate generation by the time it takes to insert the last subblock’s TXIDs.
    • Keep them in subblock order and just merge the individual subblock lists (K x CTOR sorted lists), so you can reuse bigger parts of subblock trees when merging their lists.
    • Just merge them into an unbalanced tree (compute new merkle root over subblock merkle roots, rather than individual TXs).
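
A minimal sketch of the third option (computing the settlement root over the subblock merkle roots, so each subblock’s tree is reused as-is), assuming the usual double-SHA256 pairing; this is illustrative only, not a worked-out spec:

```python
# Sketch of option 3 above: instead of re-sorting or merging TXID lists,
# the settlement block's root is computed over the K subblock merkle roots,
# so each subblock's already-computed tree hangs unchanged below it.
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:  # Bitcoin-style: duplicate the last node
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def settlement_root(subblock_roots: list[bytes]) -> bytes:
    # Each entry is the (already computed) merkle root of one subblock's TXs.
    return merkle_root(subblock_roots)
```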

Just to make something clear, the above subchain idea is NOT Tailstorm. What really makes Tailstorm Tailstorm is allowing every Kth (sub)block to reference multiple parents + the consensus rules for the incentive & conflict-resolution scheme.

With the above subchain idea, it’s the same “longest chain wins, losers lose everything” race as now: it is still fully serial mining, orphans simply get discarded, and there is no merging, no multiple subchains, and no parallel blocks.

Nice thing is that the above subchain idea is forward-compatible, and it could be later extended to become Tailstorm.

Sorry, but all that looks like it would break way more things for fewer benefits. However, I’m not sure I understand your ideas right, so let’s confirm.

First, a note on pool mining, just so we’re on the same page: when pools distribute the work, a lot of it will be based off the same block template (it will get updated as new TXs come in, but work distributed between updates will commit to the same TXs). Miners send back lower-target wins as proof they’re not slacking off and are really grinding to find the real win, but such work can’t be accumulated to win a block, because someone must hit the real, full-difficulty win. Eventually one miner will get lucky and win it, and his reward will be redistributed to the others. He could try to cheat by skipping the pool and announcing his win by himself, but he can’t hide such a practice for long, because the lesser PoWs serve to prove his hashrate, and if he doesn’t win blocks as expected based on that proven hashrate, the pool would notice his suspiciously low win rate.
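
To illustrate that last point (detecting withholding from the lesser PoWs), a rough sketch of the share bookkeeping; the Poisson test and its threshold are illustrative choices, not how any particular pool actually does it:

```python
# Rough sketch of "prove your hashrate with lesser PoWs": shares at an easier
# target estimate a miner's hashrate, and a pool can flag a miner whose
# full-difficulty wins fall suspiciously short of what that estimate predicts
# (possible block withholding). Targets are the usual "bigger = easier" kind.
from scipy.stats import poisson

def expected_wins(shares: int, share_target: int, block_target: int) -> float:
    # Each submitted share had probability block_target / share_target of
    # also meeting the full block target, so wins scale with that ratio.
    return shares * block_target / share_target

def looks_like_withholding(shares, wins, share_target, block_target,
                           alpha=0.001) -> bool:
    mu = expected_wins(shares, share_target, block_target)
    # Probability of seeing this few wins (or fewer) if the miner is honest.
    return poisson(mu).cdf(wins) < alpha
```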

Now, if I understand right, you’re proposing to have PoW accumulated from these lesser-target wins - but for that to work they’d all have to be based off the same block template, else how would you later determine exactly which set of TXs the accumulated PoW is confirming?

I think it would reduce variance only if all miners joined the same pool, so they’d all work on the same block template and the work would never reset, because each reset increases variance. Adding even one TX resets the progress of the lesser PoWs. So, if you want less variance, you’d have to spend maybe the first 30 seconds collecting TXs, then lock the template and mine it for 10 minutes while ignoring any new TXs.
Also, you’d lose the advantage of having subblock confirmations.
And the cost of implementing it would be a breaking change: from legacy SPV PoV, the difficulty target would have to be 1 because of:

SPV clients would have to be upgraded in order to see the extra stuff (sub nonces) and verify PoW, and that would add to their overheads, although the same trick I proposed above could be used to lighten those overheads: just keep the last 100 blocks’ worth of this extra data, and keep the rest of the header chain light.

yes, very good to avoid, in other words.

If you disagree then the onus of proof lies on you.

You understand right, and the tech spec I added in a later edit last night makes this clear. There is exactly one merkle-root.

You are wrong to say that in order to reduce variance ALL miners must join the same pool, for the same reason that the opposite of “all miners are solo miners” is not “there is exactly one pool”.
Every pool added will already have the effect of reducing variance.

You can suggest that making it mandatory for all miners to join 1 pool is better, but then I’d have to retort with the good old saying that socialism is soo good, it has to be made mandatory.
In other words, don’t force 1 pool, but allow pools to benefit the chain AND the miner.

Actually, this is incorrect; SPV mining doesn’t derive the difficulty (and thus the work) from the block-id. There is a specific field in the header for it. I linked the specification in my previous message if you want to check the details.

The details on how it does work are also in the original post. Apologies for editing it, which means you may not have seen the full message in the initial email notification.

Again, not promoting this personally. Just saying that this has the same effective gains as your much more involved system suggestions, without most of the downsides.
I still don’t think this is a good idea; even though avoiding subblocks, avoiding the difficulty change, and all the other things are useful, the balance is still not giving us enough benefit.
