I agree with that. It would be a great step toward removing a silly limit in the software.
Ah, gotcha. Yeah, that’s a fair criticism. To summarize your point to make sure I understand it: even if the limit is removed, all of the problems described in the anecdote don’t just vanish. Transactions can fail for a plethora of other reasons. I agree with this; application code doesn’t become completely streamlined merely by removing the limit. It does reduce the likelihood that the edge cases are encountered, though, and more importantly: it’s one less thing to worry about among a sea of others.
In the end, I think it’s a criticism of the document though, right? Not the limit itself. I’ll add a paragraph to the document in a near-future revision that explicitly calls out that BCH still won’t be “fire and forget” with removal of the limit so it’s clear for historic reasons and for the other stakeholders.
I totally agree. Application developers still need to handle the edge cases, of which there are many. I’m just hoping we can make it ever so slightly easier for them.
I think this is super valuable feedback for the release of the CHIP. Thanks, Tom. I want to include anecdotes such that the need is communicated well, but at the same time we definitely need to be clear that this change doesn’t promise things that aren’t true. I’ll revise this weekend or early next week. Would appreciate your feedback on that once it’s done.
Thanks for raising this CHIP.
BCHN is still evaluating whether we can only raise (and by how much), or whether we can eliminate this limit entirely for May.
At this point I don’t have numbers yet on the performance of the options we’re investigating, but we should have those quite soon and we’ll report back here once we’ve got them.
For reference, General Protocols have also encountered problems with regards to the 50-unconfirmed chain limit. We don’t create these transactions ourselves, but we have had submissions sent to us that we were unable to broadcast due to the limit.
My reasoning for a two-step approach is that we get some real-world testing between 0 (current state) and 1 (fully removed). We might be really confident right now, but I feel it’s prudent to get more real-world data.
Ideally, the 25->50 raise would’ve given us data, but the change was so small that I just can’t say I learned anything from it, other than that “50 is not enough”.
As for the 5000 number, I feel that if there’s anything we might’ve missed, maybe it’s visible at 5k? I’m not sold on any specific number, but I don’t want a case where it’s just a minor bump and we still don’t learn what stresses can come from it.
This tweet kind of shows why I feel the way I feel about this: https://twitter.com/brenankeller/status/1068615953989087232
As for removing CPFP, I think you’re absolutely right in that we shouldn’t let that distract from the focused discussion, and it’s up to each node to offer it or not. Generally speaking I like clean solutions, and sometimes that means cutting features no one, or only a few, actually uses in order to get to a better state for everyone else.
Text from summary:
GP is supportive of the premise of the CHIP. In order for GP to have confidence that the CHIP is worth investing resources into, we would require it to have a more explicit owner who takes on some accountability for the eventual outcome, and a more complete description of the costs, risks and benefits of the various options. As one example, CPFP is objectively still used, and misaligned changes could lead to fragmented mempools, degraded 0-conf reliability, and ultimately damage BCH utility and value. More generally, since this is not a consensus change, it may also be a good idea to add discussion about what safe cooperation looks like. In an effort to be constructive, GP created this template as an example of how to fit these concerns into a CHIP.
GP looks forward to providing more specific feedback and support as the CHIP evolves.
In BCHN, we figured out that part of the reason we need a limit is because of performance issues associated with the counting needed to check the limit. Fully removing the limit could thus reduce code complexity and improve performance compared to simply raising the limit. So unless there is a reason to keep the limit, we should remove it. As noted before, even if there is no limit, in practice you still have the mempool memory limit - but that does not have performance issues because the node has much more freedom on what to do if that limit is hit.
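To illustrate why the count itself is costly: enforcing an ancestor limit means walking the in-mempool dependency graph every time a new transaction arrives. The sketch below uses hypothetical data structures (a plain `parents` map), not BCHN’s actual code, but it shows why a deep chain makes this check expensive:

```python
from collections import deque

def count_ancestors(txid, parents):
    """Count all in-mempool ancestors of a transaction via BFS.

    `parents` maps each txid to the set of its in-mempool parent txids.
    The walk touches every ancestor, so checking a depth-N chain on
    each new arrival costs O(N) per transaction - roughly O(N^2) work
    to accept the whole chain.
    """
    seen = set()
    queue = deque(parents.get(txid, ()))
    while queue:
        p = queue.popleft()
        if p in seen:
            continue
        seen.add(p)
        queue.extend(parents.get(p, ()))
    return len(seen)

# A toy 4-deep chain: tx0 <- tx1 <- tx2 <- tx3
parents = {"tx1": {"tx0"}, "tx2": {"tx1"}, "tx3": {"tx2"}}
print(count_ancestors("tx3", parents))  # → 3
```

With no limit to enforce, none of this bookkeeping needs to run on the acceptance path at all, which is the performance win described above.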
The only reason I can offer is what I’ve given above - but I’m developing a full node myself, so my views are from a more generalized perspective. It’s also just an opinion, and not a conviction or faith, so if everyone lands on the same page and says “let’s remove the limit entirely”, then I’d be happy about that outcome as well.
I think it’s not good enough. If I fail to send given the UTXOs I’ve selected, and I have other UTXOs I could use instead, then I shouldn’t be expected to wait. The problem is that I can’t go around and try to brute-force the UTXO selection by broadcasting over and over.
Particularly so since this isn’t consensus, and one node might accept while another rejects.
I would like to be able to know the policy of the node I’m broadcasting to as part of any errors (too-long-mempool, utxo [outpoint] has X ancestors while we only allow Y), and better access to understanding my own UTXOs (how many ancestors does this UTXO have).
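To make that concrete, here is a sketch of what such a structured reject could look like. Every field name here is invented for illustration; this is not part of any existing node’s RPC interface:

```python
import json

# Hypothetical structured reject message a node could return instead of a
# bare "too-long-mempool-chain" string. All field names are made up for
# this example.
reject = {
    "error": "too-long-mempool-chain",
    "policy": {"max_unconfirmed_ancestors": 50},
    "offending_inputs": [
        {"outpoint": "deadbeef...:0", "ancestor_count": 50}
    ],
}

payload = json.dumps(reject)

# A wallet could parse this and pick a UTXO under the node's limit
# instead of blindly re-broadcasting with a different selection.
parsed = json.loads(payload)
limit = parsed["policy"]["max_unconfirmed_ancestors"]
usable = [i for i in parsed["offending_inputs"]
          if i["ancestor_count"] < limit]
print(len(usable))  # → 0
```

The point is not the exact schema, but that the error carries enough policy information for the sender to make a better second attempt deterministically.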
I don’t know if @BigBlockIfTrue has concluded something, but work on CPFP removal is still in progress. There are some open questions related to the keeping of counts (which we wanted to retain if possible even if we remove the CPFP feature), and the status is that we don’t know how feasible the retention of that counting is. If we can’t count accurately, how can we maintain a limit? - this is one open question. The other is the performance with CPFP fully removed - we have yet to assess whether it would allow us to remove the limit entirely, or how much we could raise it.
Figures mentioned in this thread are obviously higher than the x10 increase for which we were conservatively aiming so far.
Even if we get positive results for the path of complete CPFP removal, I’d prefer that we also assess how much performance we can get out of it while keeping CPFP.
If push comes to shove, we could probably remove CPFP in BCHN first, and re-introduce it if there is an actual need for it, but it’s not my preferred route.
Was asked to add my comments here after making them on Telegram. I think they echo some already made but here goes:
Cache management is one of the hardest things to get right in “computer science” - and that’s ultimately what the mempool is for our decentralized ledger. Right now we’re depending on undefined behavior (as in, it’s not part of the “official” specification) where nodes have created an arbitrary limit on how deep chains can go. Worse yet, different nodes can have different limits. This gives us an excuse to ignore the actual realities of the physical resource limits of whatever hardware the node happens to run on, and likely results in an underutilization of its potential in most cases. Best to recognize that there are physical limits that will be hit, and enact official behavioral guidance policies that inform transaction submitters about what conditions might make their txs more likely to get prioritized. But absolutely make clear that a pure “fire & forget” wallet model in a decentralized trustless system is utterly irresponsible.
So I think we need to remove this limit, but that’s only the first step in a much more complex and longer effort to make clear what is defined and undefined behavior for the network side of the protocol. The goal should be to clearly define the boundaries of what can be counted on, and good heuristics to help devs understand the conditions that might make their transactions get kicked out and have to be submitted again. I admit our own initial BCH development assumed a fire-and-forget model, as there wasn’t any documentation that implied otherwise. Perhaps a good specification of what a well-behaved wallet model should be would be in order? Anyway - I’m glad to see the efforts BCHN are making to get this first step done and wish them luck.
I think we can say now with high confidence that we will be able to remove the unconfirmed tx chain limit from BCHN completely in May.
The time before May seems too short for us to fully explore keeping CPFP around in the way that BU did in their client. So BCHN client will most likely NOT support CPFP after May, at least for a while. If there is some real demand for CPFP that would merit additional complexity, we would consider re-instating it at some later time.
I’ve updated the post to match the proposed CHIP format/structure and updated this CHIP’s version to 1.1. These changes are reviewable at bitcoin-cash-chips/unconfirmed-transaction-chain-limit.md at master · SoftwareVerde/bitcoin-cash-chips · GitHub .
Nice update! GP has accordingly updated its statement.
Without intending to be rude, I want to draw attention to the fact that the Technical Description section of this CHIP is essentially garbage at this time (unused template stuff and a link to another document without any context). I really want to urge the CHIP authors to improve this, quickly. At the very minimum, this section should specify the exact activation time and mechanism of the change.
Dude! This is just not helpful.
A CHIP is not meant to be a technical document; its main (probably only) purpose is to convince the community to do something. And, indeed, a CHIP having technical details can be immensely helpful in fulfilling that goal.
But, really, when all clients have already implemented the change, what point is there in altering the CHIP? It has served its purpose. There is really nobody left to convince, and hence the CHIP has fulfilled its purpose. People support it in its current form.
So, on top of being rude, you misunderstood the goal, and you put work on someone that you have no right to expect them to do.
As you seem interested in some sort of technical document for some reason, I invite you to write it. You want it, you get it written. Decentralized; no need to wait for anyone.
If you want the community to do something technical, it is very helpful to include an actual technical description of what that something is, exactly.
Which clients have already implemented the change? How did they implement it? What activation logic do they use?
I am a BCHN maintainer and I am pretty sure that at least BCHN did not implement the change yet. We are working hard on getting it implemented, though. I am requesting technical details to be included in the CHIP so that we can be sure different implementations are in fact compatible.
Support in current form is meaningless if the current form does not include an actual technical description of what we are going to do.
I indeed have no right to expect this CHIP to be written properly. You also have no right to expect BCHN to implement this CHIP.
We’re trying to solve a collective problem here. I propose all parties do their part. For BCHN, that means getting the implementation finished. For the CHIP authors, that means getting the CHIP finished.
@joshmg is listed as the CHIP owner. I am noting a significant deficiency in his CHIP, so it’s primarily his responsibility to address the problem. I believe he is very capable to write this technical description himself, but if he would like assistance from BCHN devs, that is possible. (I similarly assisted with improving the multiple OP_RETURNs CHIP already.) Either way, a technical description belongs in the CHIP - as evidenced by the CHIP template chosen for writing this very CHIP.
Yeah, keep pushing. Really nice.
I really don’t think it’s fair or even useful to bully others into doing work you want done. Really quite sad to see you act like this.
Remove the chain limit at the previously defined activation time of May 15th.
Can you say what more details you want?