Let's talk about block time

This speculation is easy to check in the real world. For instance, coinsbee is my go-to gift card site. It's more EU-focused.

But their support of BCH is excellent. Their webpage instantly shows when a transaction has been seen, and they send an email once enough confirmations are in. You can also just keep the webpage open; it updates itself and unlocks the 'secret codes' and so on.

So, whatever Bitrefill's reasons are, it is not that it can't be done. We KNOW it can be. It works for a competitor.
In practice, such companies are very often known to actively hate Bitcoin Cash specifically. As someone who has done a lot of outreach, I know this is the case. We have a PR problem that stops companies from ever looking at things like block times.

2 Likes

it :100: can be done! no doubt about it, but the UX is different with BCH … the UX you described CANNOT work for an in-person POS purchase, as the wait is indeterminate – whereas the payment options offered by Bitrefill (aside from BTC, which is probably just there for decoration) can ALL be fulfilled in less than 2 minutes (on average)

i certainly DO NOT know why Bitrefill has not implemented BCH, but i’m only pointing out that it’s NOT fair to just assume “hate”, when there are clear “UX challenges” to accommodate with ~10min confs

2 Likes

So I’ve been looking into TailStorm as well. I think it’s the most promising R&D direction because it gives us the benefits of shorter blocks while actually improving orphan rates, wow! And we could go down to 15-second sub-blocks rather than 2 minutes!

2 Likes

I failed to explain my position well. Please allow me to retry:

There are a huge number of PR issues around BCH. People openly call it a fraud. It is slowly improving, though. We are gaining ground there, which is awesome.

The 10-minute (or sometimes 2-hour) block time is a separate problem, and it is being raised as a root cause of people not adopting BCH. I argue that conclusion is too simple. Block time may be improved, but that won't really move the needle on adoption or recognition, just as the dozen other unique features we added to BCH over the years didn't do much for adoption, for price, or for NOT being labelled a scam.

I think we'll all broadly agree on the cost of changing the block time.
Where we then differ is that, in my assessment, the benefit is hugely overstated. Having 2-minute blocks will make people happy who already accept the coin. It will hardly add any new adopters.

And this statement, I realize as I'm writing it, may look like I'm just making things up. But it is based on knowledge of people and groups, and on experience trying to get people to adopt Bitcoin Cash and running into the problem of "but all crypto is a scam".
This is a people problem. Not a technical problem.
The core issue is that the majority opinion is that we are to be avoided. This goes for either crypto as a whole or BCH specifically. And I've seen this many times as well: especially in the last years, with the government-pushed ideas, people can certainly start to change their minds.
And when they do, the little details are utterly irrelevant.

More specifically, when people start to change their minds on crypto, and on BCH specifically, the companies rejecting it today will likely abruptly change their position and start to implement zero-conf. Because it is not only cool, it sells. It is superior tech that brings a real UX improvement.

And as such, my assessment is that the core reason for the lackluster uptake isn't block times. The real reason is very different, as described above, and when that real reason is resolved the block time isn't relevant anymore, because whatever can be made zero-conf will be.
What does need confirmations doesn't have a time limit (you won't be sitting there waiting for it).

And this last part we already figured out a decade ago, when BTC wasn't completely hijacked yet.

2 Likes

Just a very brief note on this: TailStorm introduces a very neat mechanism. We still have our 10-minute summary blocks, but we get 15-second confirmations. Kind of a win-win.

1 Like

i disagree that the “benefit” has been overstated, at least not by @bitcoincashautist … unfortunately, NGU ppl will always WANT “this” to be the panacea that solves everything, and so they’ll argue it doesn’t do that … in my view, this has reasonably been presented as “a part” of the solution to improve the BCH UX

i can concede that it’s MORE of a people problem, but there are certainly technical issues (imo being completely ignored); like the current 10-block Checkpoint Consensus that MUST be fixed eventually, right?!

:100: agree!

some blockchains (L2s w/ VC funding) set up Business Development teams that work around the clock to foster relationships with “the real world”, both businesses and individuals … but they have FUNDING to support them…

imho, BCH will never realize its true potential until there is some consistency and general sustainability of the value-offerings made to its beloved Community – and “The Funding Problem” is allowed to be discussed openly and publicly :+1:

(thank you for making such an effort to express your opinions so clearly – it’s very much appreciated :pray:)

I would encourage everyone in this discussion to look at TailStorm being discussed here: Tailstorm: A Secure and Fair Blockchain for Cash Transactions - #2 by _minisatoshi

All the benefits + more of shorter block times, without the drawbacks.

2 Likes

With a random process like PoW mining is, there’s a 14% chance you’ll have to wait more than 2 times the target (Poisson distribution) in order to get that 1-conf.

This is not really correct. It’s much worse than that: 40.6%. Russell O’Connor discusses this here.

The 14% is the probability that the time difference between two blocks is longer than 20 minutes. (The time between blocks has an exponential distribution with rate parameter 1/10 minutes).
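
(A quick R check of where that 14% comes from; more precisely it is about 13.5%, computed either way:)

pexp(20, rate = 1/10, lower.tail = FALSE)
# Probability that the gap between two consecutive blocks exceeds 20 minutes
# [1] 0.1353353
ppois(0, lambda = 20/10)
# Equivalently, the probability of zero block arrivals in a 20-minute window
# [1] 0.1353353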

That is not the probability that a user waits longer than 20 minutes for their first confirmation. A user is not equally likely to broadcast a transaction during a long block time and a short block time. They are much more likely to broadcast during a long block time because long block times cover longer periods. (We assume that transactions are confirmed in the next mined block and that the timing of a broadcast transaction and the timing of a block being mined are independent.)

The probability distribution of a user’s wait time to first confirmation is an Erlang distribution with shape parameter 2 and rate parameter 1/10 minutes. This is the same as a gamma distribution with shape parameter 2 and rate parameter 1/10 minutes.

This statement is also not correct: “With 2-minute blocks, however, there’d be only a 0.2% chance of having to wait more than 12 minutes for 1-conf!” The correct probability is about 1.7%. You can get this probability by inputting this statement in R: pgamma(12, shape = 2, rate = 1/2, lower.tail = FALSE).

IMHO, O’Connor’s explanation isn’t very detailed, but you can convince yourself that the user’s wait time is an Erlang(2, 1/10) distribution with a simulation. In R it would be:

set.seed(314)

exp_draws <- rexp(1e+07, rate = 1/10)
# Draw ten million block inter-arrival times

user_wait_index <- sample(length(exp_draws), size = 1e+06, replace = TRUE, prob = exp_draws)
# Draw one million indexes from the exp_draws vector. prob = exp_draws means
# that the probability of selecting each index is proportional to the inter-arrival time.

user_waits <- exp_draws[user_wait_index]
# Create the user_waits vector by selecting the appropriate exp_draws elements

prop.table(table(user_waits >= 20))
# The proportion of user_waits that are longer than 20 minutes:
#    FALSE     TRUE
# 0.593514 0.406486

ks.test(user_waits, pgamma, shape = 2, rate = 1/10)
# Kolmogorov-Smirnov test fails to reject the null hypothesis that the
# user_waits empirical distribution is the same as a gamma(2, 1/10) (i.e. Erlang(2, 1/10))
# Asymptotic one-sample Kolmogorov-Smirnov test
# data:  user_waits
# D = 0.00074215, p-value = 0.6404
# alternative hypothesis: two-sided

# Make a histogram of user_waits and compare it to the
# probability density function of gamma(2, 1/10)
hist(user_waits, breaks = 200, probability = TRUE)
lines(seq(0, max(user_waits), by = 0.01),
  dgamma(seq(0, max(user_waits), by = 0.01), shape = 2, rate = 1/10),
  col = "red")
legend("topright", legend = c("Histogram", "PDF of gamma(2, 1/10)"),
  lty = 1, col = c("black", "red"))

I’m not 100% sure about this, but I think your table is also incorrect. Let x be the number of confirmations and y be the average block time. We use the Erlang distribution again because the shape parameter is the number of events that we are waiting for. We add 1 to the shape parameter to account for the probability that the user broadcast their transaction during an “unluckily” long block interval (similar reasoning as before). The distribution of the waiting time would be Erlang(x + 1, 1/y).

When the user is “expecting” a certain number of confirmations in 60 minutes, it would be 6 with the current 10-minute block time and 30 with a 2-minute block time. The distribution of waiting times for a user waiting for 6 blocks with a 10-minute average block time would be Erlang(6 + 1, 1/10). For 30 blocks with a 2-minute block time it would be Erlang(30 + 1, 1/2).
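
As a cross-check (my addition, not from the original argument), the Erlang tail can also be computed via the Poisson CDF, e.g. for the 6-conf, 10-minute case:

pgamma(70, shape = 6 + 1, rate = 1/10, lower.tail = FALSE)
# P(Erlang(7, 1/10) wait > 70 minutes)
# ≈ 0.4497, i.e. the 45.0% in the table below
ppois(6, lambda = 70/10)
# Same number via the Poisson view: at most 6 block arrivals in a 70-minute window
# ≈ 0.4497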

If my conjecture is correct, to fill in the table you would input pgamma(c(70, 80, 90, 100), shape = 6 + 1, rate = 1/10, lower.tail = FALSE) in R for the 10-minute column and pgamma(c(70, 80, 90, 100), shape = 30 + 1, rate = 1/2, lower.tail = FALSE) for the 2-minute column. That would give you:

| expected to wait (min) | actually having to wait more than (min) | probability with 10-minute blocks | probability with 2-minute blocks |
|---|---|---|---|
| 60 | 70 | 45.0% | 22.7% |
| 60 | 80 | 31.3% | 6.2% |
| 60 | 90 | 20.7% | 1.2% |
| 60 | 100 | 13.0% | 0.16% |
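
(For convenience, the two probability columns can be reproduced in one go with the same calls, vectorised:)

pgamma(c(70, 80, 90, 100), shape = 6 + 1, rate = 1/10, lower.tail = FALSE)
# 10-minute-block column: ~0.450, 0.313, 0.207, 0.130
pgamma(c(70, 80, 90, 100), shape = 30 + 1, rate = 1/2, lower.tail = FALSE)
# 2-minute-block column: ~0.227, 0.062, 0.012, 0.0016
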
2 Likes

For reference, I simply used the spreadsheet Poisson function to calculate my numbers:

=1-POISSON(0,time/target_time,0)

which calculates the inverse (complement) of observing exactly 0 occurrences during a period in which I expect to see time/target_time occurrences.
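
(For anyone following along in R rather than a spreadsheet, the equivalent of that formula would be something like:)

time <- 20; target_time <- 10
# e.g. a 20-minute wait with a 10-minute target
1 - dpois(0, lambda = time / target_time)
# [1] 0.8646647
# i.e. an ~86.5% chance of at least one block; the complement is the ~13.5% chance of none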

Oh I see, my numbers are really about block intervals longer than X. But from an individual user’s PoV, when he randomly decides to make a TX he lands somewhere between two block occurrences, and because longer intervals are, well, longer, they take up a bigger % of the timeline. So there’s a higher probability of landing in one of the longer intervals, and from the user’s PoV the wait probability is different, which is why we need to use Erlang rather than Poisson. Got it. And wow, it’s even worse than I thought. :woozy_face:

Thanks for the peer review!

PS I reproduced the above numbers using LibreOffice Calc spreadsheet function GAMMA.DIST:

For the 6-conf wait, the probability of waiting >70 min is =1-GAMMA.DIST(70, 7, 10, 1) with 10-min blocks.
For the 1-conf wait, the probability of waiting >20 min is =1-GAMMA.DIST(20, 2, 10, 1) with 10-min blocks.

2 Likes

So, the Poisson distribution gives us a bird’s-eye view of block times by answering the question: “What % of block intervals will be longer than X minutes?”
But that’s not the question I wanted to ask, which is: “If I make a TX, what are the odds I’ll have to wait more than X minutes for N confirmations?”

As explained above by @Rucknium, the Erlang / gamma distribution answers that.

And it looks like there’s a 50% chance of having to wait longer than 17 minutes, and a 19.9% chance of longer than 30 minutes, wow! We’d need a 6-minute target if we wanted a 50% chance of <10 min.

Put another way, with a 10-minute target you can have only 50% confidence that you’ll get that 1 confirmation before 17 minutes pass.
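
(Those numbers can be double-checked with the same gamma/Erlang functions in R:)

qgamma(0.5, shape = 2, rate = 1/10)
# [1] 16.78347
# median 1-conf wait with a 10-minute target, i.e. ~17 minutes
pgamma(30, shape = 2, rate = 1/10, lower.tail = FALSE)
# [1] 0.1991483
# chance of waiting more than 30 minutes, i.e. ~19.9%
qgamma(0.5, shape = 2, rate = 1/6)
# [1] 10.07008
# with a 6-minute target, the median 1-conf wait drops to ~10 minutes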


2 Likes

Further and further we go towards why TailStorm is so promising.

Moving my reply here to keep on-topic:

I’ll take the opportunity to share the historical ideas on this.

Zero-conf is (and always has been) secure enough for most. It is a risk level that most merchants will be confident with up to maybe $10,000, heavily dependent on the actual price of a coin and the cost of mining a single block (w.r.t. miner-assisted double-spends).

This is like insurance: historical losses decide the risk profile. Which means that the better we do at merchants not losing money to double-spends, the higher the limit for safe zero-conf payments becomes.

As such, in a world where Bitcoin Cash is actually used for payments, the situations you refer to as “requiring a confirmation or more” tend to be the kind that doesn’t really care about block time.
They are the situations where you’re handing over your personal details and doing a bank transfer. They are the situations where you order something online and it won’t be delivered today anyway. These are the situations where, in simple words, the difference between a 10-minute and a 2-hour confirmation is completely irrelevant.

1 Like

I don’t disagree at all about the security of 0-conf. Not one bit. That doesn’t mean we should completely stay away from improving the confirmation experience, which would impact real-world users and situations for the foreseeable future. Not to mention the other benefits.

1 Like

Nobody disagrees it would be nice. But the cost is outlandish. Would you buy a $500 pair of jeans for a kid who will outgrow them in a year? What about someone buying a $5000 bicycle just to cover the year until they’re legally allowed to drive?

This is the disconnect that really gets me…

When do you think any such changes could actually be in use by real people? TailStorm will take several years before it can be deployed, and another couple before it is used by companies (if ever). As a reminder, it took SegWit 7 years to become the majority-used address type.
So you’re advocating ideas that are meant to be intermediary, but that can’t possibly be in the hands of users in less than 5 years… See how that is a contradiction?

I’m not stopping you, I’m just realistic about what can be done and what gives the best return on investment.

But it has come to the point where it needs to be said that this series of ideas is mostly just harmful for BCH right now. If it stayed on this site it wouldn’t be harmful, but a premature idea that nobody endorses is being pushed in the main Telegram channels daily, on Reddit and on 𝕏. The general public thinks this is happening, while not a single stakeholder is actually buying into it.
Hell, there isn’t even any actual reason given for this shortening that stands up to scrutiny.
That is mostly on @bitcoincashautist, but you both are not listening to stakeholders and just marching on. Again, if it’s just here, that’s no problem. It is taking this to end-user venues that gives a completely different impression.

Tailstorm would offer benefits immediately upon activation; depending on implementation it can immediately:

  • reduce variance so that 95% of the time you have to wait less than 13 minutes for 1-conf (vs 47 minutes now; see the quick check below).
  • reduce block time so that 95% of the time you have to wait less than 1 minute for 1-conf.
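
The “47 minutes now” figure matches the 95th percentile of the Erlang(2, 1/10) 1-conf wait model discussed earlier (a quick R check, not from the original post); the 13-minute and 1-minute figures depend on the chosen sub-block parameters:

qgamma(0.95, shape = 2, rate = 1/10)
# [1] 47.43865
# 95th percentile of the current 1-conf wait time, i.e. ~47 minutes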

Based on my research I think there are 3 ways (edit: 4 actually) to improve confirmation time:

| effect | plain block time reduction | plain subblocks | “inner” Tailstorm | “outer” Tailstorm |
|---|---|---|---|---|
| Reduced target wait time variance (e.g. for 10 or 60 min. target wait) | Y | Y | Y | Y |
| Increased TX confirmation granularity | 1-2 minutes | Opt-in, 1-2 minutes | Opt-in, 10-20s | 10-20s |
| Requires services to increase confirmation requirements to maintain same security | Y | N | N | Y |
| Legacy SPV security | full | 1/K | 1/K | near full |
| Breaks legacy SPV height estimation | Y | N | N | Y |
| Increases legacy SPV overheads (headers) | Y | N | N | Y |
| Selective opt-in “aux PoW” SPV security | N | Y | Y | N |
| Breaks header-first mining every Kth (sub)block | N | Y | Y | Y |
| Additional merkle tree hashing | N | Y, minimal if we’d break CTOR for summary blocks | Y, minimal if we’d break CTOR for summary blocks | minimal |
| Increased orphan rate | Y | Y | N | N |
| Reduces selfish mining and block withholding | N | N | Y | Y |

I could say the same: pretending there’s no confirmation time problem is harmful for BCH.

I can accept this criticism; we’re not yet at the stage where we could hype anything as a solution.

These two tables should be sufficient reason, unless you hand-wave away the need to ever wait for any confirmations.

Table - likelihood of first confirmation wait time exceeding N minutes

Table - likelihood of 1 hour target wait time exceeding N minutes

2 Likes

This is a great summary – thank you.

Might be helpful to tag this consolidated reply back to the TailStorm thread too.

1 Like

I made a schematic to better illustrate this idea. It would be just like speeding up blocks, but in a way that doesn’t break legacy SPV:

It would preserve legacy links (header pointers) & merkle tree coverage of all TXs in the epoch.
This doesn’t break SPV at all; it’d be just as if the price did 1/K and some hashrate left the chain.
Legacy SPV clients would continue to work fine (at reduced security) with just the legacy headers.
However, they could be upgraded to fetch and verify the aux PoW proofs just for the most recent blocks, to prove that the whole chain is being actively mined with full hash.
So, the increased overheads drawback of just accelerating the chain would be mitigated by this approach, too.

2 Likes

ok, now your only advantage left is having more consistent block times. What about not having all this Tailstorm complexity, but just allowing a more advanced p2peer way of mining?

Specifically I’m thinking that a p2peer setup may be extended to be cumulative.

P2Peer is today a mining standard that fits within the current consensus rules and allows miners with a partial proof-of-work to update the to-be-mined block with a new distribution of the rewards and such.
Where it differs from your suggested approach is that, simply put, you lower the number you need to throw on a 20-sided die to 16 instead of 20, but you require a lot more of those throws to compensate.

This simple, conceptual difference is what you claim is the reason Tailstorm has more consistent block times.
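
For intuition, the variance-reduction effect of needing many easier proofs instead of one hard one can be sketched with a small R simulation (mine, with an arbitrary K = 10 and a 600-second target; not part of the original proposal):

set.seed(1)
K <- 10
n <- 1e5
full <- rexp(n, rate = 1/600)
# One full-difficulty solve: exponential waiting time with a 600-second mean
parts <- rgamma(n, shape = K, rate = K/600)
# K partial solves at 1/K difficulty each: total time is a sum of K exponentials
# with mean 600/K seconds, i.e. Erlang(K, K/600)
c(mean(full), sd(full))
# ~600, ~600: same mean, large spread
c(mean(parts), sd(parts))
# ~600, ~190: same mean, spread shrinks by roughly 1/sqrt(K)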

My point is: if that is your argument, you should pare it down and have those 20 (or whatever number of) block headers shipped in the block, and make the consensus update to allow that.
Minimum change, direct effect.

I dislike the soft-fork style of lying changes that claim to be less of a change because they hide 90% of the changes from validating peers. That is a lie, plain and simple. It is why we rejected SegWit, and it is why we prefer clean hard forks. As such, a simple and minimal version of what you propose (a list of proofs of work instead of a single proof of work) is probably going to be much easier to get approved.

A simple list of 4 to 8 bytes per PoW item, one for each block-id that reached the required partial PoW, can be added at the beginning of the block (before the transactions) to carry this information.
The block header would stay identical; the difficulty, the merkle root etc. are all shared between PoW items, and the items themselves just carry the nonce and maybe the timestamp offset (an offset, so a variable size for that one).

The main downside here is that a pool changing the merkle root loses any accumulated partial PoW. As such, while confirmations may be much more consistent, the chance of getting into the next block drops and most transactions should expect to be included in block+2.

Reasonably simple changes:

  • The block format changes slightly: some bytes are added after the header.
  • A bare header can no longer be checked for correct PoW without downloading the extra (maybe 200) bytes, which should be included in the ‘headers’ p2p calls.
  • Miner software should reflect this, though there is no need to actually follow it; miners just need one extra byte for the number of extra nonces.
  • The block-id should be calculated over the header PLUS the extra nonces, and the next block thus points back to the previous header PLUS its extra nonces. Which has the funny side effect of block-ids no longer starting with loads of zeros :man_shrugging:

New block-header-extended:

  • The current 80-byte header: Block Header
    This is what the PoW is calculated over; what we always called the block-id is now called the ‘proof-hash’.
  • A start-of-mining timestamp (4-byte unsigned int).
  • Number of sub-work items (var-int).
    This implies the sub-item targets: if there are 10, the PoW target is adjusted accordingly. Someone do the math to make this sane, please.
  • Subwork: nonce (4 bytes).
  • Subwork: time offset against ‘start of mining’ (var-int).

This entire dataset is to be hashed to become the block-id which is used in the next block to chain blocks.

To verify, one takes the final block header and hashes it to get its work. Then, for each sub-work item in the list, replace the nonce and the time in the final block header; the time is replaced by taking the ‘start-of-mining’ timestamp and adding the item’s offset to it. Then hash the resulting 80-byte header to get its work and add it to the total work done.

Now, I’m not advocating this approach. It is by far the best way to do what Tailstorm is trying to do without all the downsides, but I still don’t think it is worth the cost. That is just my opinion, though.
I’m just saying that if you limit your upgrade to JUST this part of Tailstorm, it will have a hugely improved chance of getting accepted.

That’d be the only immediate advantage. However, nodes could extend their API with subchain confirmations, so users who opt in to reading it could get more granularity. It’d be like opt-in faster blocks from a userspace PoV.

This looking like a SF is just a natural consequence of it being non-breaking to non-node software. It would still be a hard fork because:

  • We’d HF the difficulty jump from K to 1/K. To do this as a SF would require intentionally slowing down mining for the transition so difficulty would adjust “by itself”, and it would be a very ugly thing to do. So, still a HF.
  • Maybe we’d change the TXID list order for the settlement block merkle root; not sure of the trade-offs here, definitely a point to grind out. The options I see:
    • Keep full TXID list in CTOR order when merging subblock TXID lists. Slows down blocktemplate generation by the time it takes to insert the last subblock’s TXIDs.
    • Keep them in subblock order and just merge the individual subblock lists (K x CTOR sorted lists), so you can reuse bigger parts of subblock trees when merging their lists.
    • Just merge them into an unbalanced tree (compute new merkle root over subblock merkle roots, rather than individual TXs).

Just to make something clear, the above subchain idea is NOT Tailstorm. What really makes Tailstorm Tailstorm is allowing every Kth (sub)block to reference multiple parents + the consensus rules for the incentive & conflict-resolution scheme.

With the above subchain idea, it’s the same “longest chain wins, losers lose everything” race as now: it is still fully serial mining, orphans simply get discarded, and there is no merging, no multiple subchains, no parallel blocks.

Nice thing is that the above subchain idea is forward-compatible, and it could be later extended to become Tailstorm.

Sorry, but all of that looks like it would break way more things for fewer benefits. I’m not sure I understand your idea correctly, though, so let’s confirm.

First, a note on pool mining, just so we’re on the same page: when pools distribute the work, a lot of the work will be based off the same block template (it gets updated as new TXs come in, but work distributed between updates commits to the same TXs). Miners send back lower-target wins as proof that they’re not slacking off and are really grinding to find the real win, but such work can’t be accumulated to win a block, because someone must hit the real, full-difficulty win. Eventually one miner gets lucky and wins it, and his reward is redistributed to the others. He could try to cheat by skipping the pool and announcing the win by himself, but he can’t hide such a practice for long, because the lesser PoWs serve to prove his hashrate, and if he doesn’t win blocks as expected based on his proven hash the pool would notice that his win rate is suspiciously lower than expected.

Now, if I understand right, you’re proposing to have PoW accumulated from these lesser-target wins - but for that to work they’d all have to be based off the same block template, else how would you later determine exactly which set of TXs the accumulated PoW is confirming?

I think it would reduce variance only if all miners joined the same pool, so that they all work on the same block template and the work never resets, because each reset increases variance. Adding one TX resets the progress of the lesser PoWs. Like, if you want less variance you’d have to spend maybe the first 30 seconds collecting TXs, then lock the template and mine it for 10 minutes while ignoring any new TXs.
Also, you’d lose the advantage of having subblock confirmations.
And the cost of implementing it would be a breaking change: from a legacy SPV PoV the difficulty target would effectively have to be 1, because of your change where the block-id is calculated over the header plus the extra nonces.

SPV clients would have to be upgraded in order to see the extra stuff (sub-nonces) and verify PoW, and that would add to their overheads, although the same trick I proposed above could be used to lighten those overheads: you just keep the last 100 blocks’ worth of this extra data, and keep the rest of the header chain light.

yes, very good to avoid, in other words.

If you disagree then the onus of proof lies on you.

You understand correctly, and the tech spec I added in a later edit last night makes this clear: there is exactly one merkle root.

You are wrong to say that in order to reduce variance ALL miners must join the same pool, for the same reason that the opposite of “all miners are solo miners” is not “there is exactly 1 pool”.
Every pool added already has the effect of reducing variance.

You can suggest that making it mandatory for all miners to join 1 pool is better, but then I’d have to retort with the good old saying that socialism is soo good, it has to be made mandatory.
In other words, don’t force 1 pool, but allow pools to benefit the chain AND the miner.

Actually, this is incorrect: SPV doesn’t derive the difficulty (and thus the work) from the block-id. There is a specific field in the header for it. I linked the specification in my previous message if you want to check the details.

The details of how it does work are also in the original post. Apologies for editing it, which means you may not have seen the full message in the initial email notification.

Again, not promoting this personally. Just saying that this has the same effective gains as your much more involved system suggestions, without most of the downsides.
I still don’t think this is a good idea: even though avoiding subblocks, avoiding the difficulty change, and all the other things are useful, the balance still doesn’t give us enough benefit.

1 Like