I agree with that. It would be a great step toward removing a silly limit in the software.
Ah, gotcha. Yeah, that's a fair criticism. To summarize your point to make sure I understand it: even if the limit is removed, all of the problems described in the anecdote don't just vanish. Transactions can fail for a plethora of other reasons. I agree with this; application code doesn't become completely streamlined just because the limit is removed. It does reduce the likelihood that the edge cases are encountered, though, and more importantly: it's one less thing to worry about among a sea of others.
In the end, I think it's a criticism of the document though, right? Not the limit itself. I'll add a paragraph to the document in a near-future revision that explicitly calls out that BCH still won't be "fire and forget" with removal of the limit, so it's clear for historic reasons and for the other stakeholders.
I totally agree. Application developers still need to handle the edge cases, of which there are many. I'm just hoping we can make it ever so slightly easier for them.
I think this is super valuable feedback for the release of the CHIP. Thanks, Tom. I want to include anecdotes such that the need is communicated well, but at the same time we definitely need to be clear that this change doesn't promise things that aren't true. I'll revise this weekend or early next week. Would appreciate your feedback on that once it's done.
Thanks for raising this CHIP.
BCHN is still evaluating whether we can only raise (and by how much), or whether we can eliminate this limit entirely for May.
At this point I don't have numbers yet on the performance of the options we're investigating, but we should have those quite soon and we'll report back here once we've got them.
For reference, General Protocols has also encountered problems with regard to the 50-unconfirmed chain limit. We don't create these transactions ourselves, but we have had submissions sent to us that we were unable to broadcast due to the limit.
My reasoning for a two-step approach is that we get some real-world testing between 0 (current state) and 1 (fully removed). We might be really confident right now, but I feel it's prudent to get more real-world data.
Ideally, the 25->50 raise would've given us data, but the change was so small that I just can't say I learned anything from it, other than that "50 is not enough".
As for the 5000 number, I feel that if there's anything we might've missed, maybe it's visible at 5k? I'm not sold on any specific number, but I don't want a case where it's just a minor bump and we still don't learn what stresses can come from it.
This tweet kind of shows why I feel the way I feel about this: https://twitter.com/brenankeller/status/1068615953989087232
As for removing CPFP, I think you're absolutely right that we shouldn't let that detract from the focused discussion, and it's up to each node to offer it or not. Generally speaking I like clean solutions, and sometimes that means cutting features that no one, or only a few, actually uses in order to get to a better state for everyone else.
Link: GP Statement on CHIP "Request Update to the Unconfirmed-Transaction Chain Limit"
Text from summary:
GP is supportive of the premise of the CHIP. In order for GP to have confidence that the CHIP is worth investing resources into, we would require it to have a more explicit owner who takes on some accountability for the eventual outcome, and a more complete description of the costs, risks and benefits of the various options. As one example, CPFP is objectively still used, and misaligned changes could lead to fragmented mempools, degraded 0-conf reliability, and ultimately damage BCH utility and value. More generally, since this is not a consensus change, it may also be a good idea to add discussion about what safe cooperation looks like. In an effort to be constructive, GP created this template as an example of how to fit these concerns into a CHIP.
GP looks forward to providing more specific feedback and support as the CHIP evolves.
In BCHN, we figured out that part of the reason we need a limit at all is the performance cost of the counting needed to check the limit. Fully removing the limit could thus reduce code complexity and improve performance compared to simply raising the limit. So unless there is a reason to keep the limit, we should remove it. As noted before, even with no limit, in practice you still have the mempool memory limit - but that does not have the same performance issues, because the node has much more freedom in what to do if that limit is hit.
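To illustrate the cost being described, here is a deliberately simplified model of ancestor-count enforcement (hypothetical code, not BCHN's actual data structures): every transaction accepted into a long chain has to re-walk its whole in-mempool ancestor set, which is exactly the work that goes away if the limit goes away.

```python
# Hypothetical, simplified model of unconfirmed-chain limit enforcement.
# `parents` maps txid -> list of direct in-mempool parent txids.

def count_ancestors(txid, parents):
    """Count all in-mempool ancestors of `txid` via a depth-first walk."""
    seen = set()
    stack = list(parents.get(txid, []))
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents.get(p, []))
    return len(seen)

def accept_to_mempool(txid, parents, limit=50):
    """Reject if the tx's unconfirmed-ancestor count would reach `limit`."""
    return count_ancestors(txid, parents) < limit

# A straight chain tx0 <- tx1 <- ...: accepting each new link re-walks the
# entire chain so far, giving O(n^2) total work - the cost the limit bounds.
parents = {f"tx{i}": [f"tx{i-1}"] for i in range(1, 60)}
parents["tx0"] = []
print(count_ancestors("tx49", parents))   # 49 ancestors: accepted
print(accept_to_mempool("tx50", parents)) # 50 ancestors: rejected at limit 50
```

With no limit, a node never needs the walk at all; it only reacts to the mempool memory cap, which can be enforced without per-transaction ancestor counting.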
The only reason I can offer is what I've given above - but I'm developing a full node myself, so my views come from a more generalized perspective. It's also just an opinion, and not a conviction or faith, so if everyone lands on the same page and says "let's remove the limit entirely", then I'd be happy about that outcome as well.
I think it's not good enough. If I fail to send given the UTXOs I've selected, and I have other UTXOs I could use, then I shouldn't be expected to wait. The problem is that I can't go around and try to brute-force the UTXO selection by broadcasting over and over.
Particularly so since this isn't consensus, and one node might accept while another rejects.
I would like to be able to learn the policy of the node I'm broadcasting to as part of any errors (too-long-mempool, UTXO [outpoint] has X ancestors while we only allow Y), and to have better access to understanding my own UTXOs (how many ancestors does this UTXO have?).
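For illustration, the kind of structured rejection I mean might look like the sketch below. All field names here are hypothetical, not any node's actual RPC; the point is that a wallet could reselect UTXOs instead of brute-force rebroadcasting.

```python
import json

# Hypothetical structured rejection a node could return instead of a bare
# "too-long-mempool-chain" string. Every field name is made up for this sketch.
rejection = {
    "code": "too-long-mempool-chain",
    "node_policy": {"max_unconfirmed_ancestors": 50},
    "offending_inputs": [
        # which selected UTXO tripped the limit, and its current chain depth
        {"outpoint": "ab12...:0", "unconfirmed_ancestors": 50},
    ],
}

def pick_usable_utxos(utxos, rejection):
    """Keep only UTXOs under the node's advertised ancestor limit, so the
    wallet can reselect inputs deterministically rather than retrying blind."""
    limit = rejection["node_policy"]["max_unconfirmed_ancestors"]
    return [u for u in utxos if u["unconfirmed_ancestors"] < limit]

utxos = [
    {"outpoint": "ab12...:0", "unconfirmed_ancestors": 50},
    {"outpoint": "cd34...:1", "unconfirmed_ancestors": 3},
]
print(json.dumps(pick_usable_utxos(utxos, rejection)))
```

The same policy data exposed up front (rather than only in errors) would answer the second half of the request: knowing a UTXO's ancestor count before selecting it.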
@freetrader @BigBlockIfTrue have you been able to conclude whether BCHN can remove the limit without adverse effect, or am I misunderstanding your take?
I don't know if @BigBlockIfTrue has concluded anything, but work on CPFP removal is still in progress. There are some open questions related to the keeping of counts (which we wanted to retain if possible even if we remove the CPFP feature), and the status is that we don't know how feasible retaining that counting is. If we can't count accurately, how can we maintain a limit? That is one open question. The other is performance with CPFP fully removed - we have yet to assess whether it would allow us to remove the limit entirely, or how much we could raise it.
Figures mentioned in this thread are obviously higher than the x10 increase for which we were conservatively aiming so far.
Even if we get positive results for the path of complete CPFP removal, I'd prefer that we also assess how much performance we can get out of it while keeping CPFP.
If push comes to shove, we could probably remove CPFP in BCHN first, and re-introduce it if there is an actual need for it, but it's not my preferred route.
Was asked to add my comments here after making them on Telegram. I think they echo some already made but here goes:
Cache management is one of the hardest things to get right in "computer science" - and that's ultimately what the mempool is for our decentralized ledger. Right now we're depending on undefined behavior (as in, it's not part of the "official" specification) where nodes have created an arbitrary limit on how deep a chain can go. Worse yet, different nodes can have different limits. This gives us an excuse to ignore the actual realities of the physical resource limits of whatever hardware the node happens to run on, and likely results in an underutilization of its potential in most cases. Best to recognize that there are physical limits that will be hit, and enact official behavioral guidance policies that inform transaction submitters about what conditions make their txs more likely to get prioritized. But absolutely make clear that a pure "fire & forget" wallet model in a decentralized trustless system is utterly irresponsible.
So I think we need to remove this limit, but that's only the first step in a much more complex and longer effort to make clear what is defined and undefined behavior for the network side of the protocol. The goal should be to clearly define the boundaries of what can be counted on, plus good heuristics to help devs understand the conditions that might get their transactions kicked out and force them to be submitted again. I admit our own initial BCH development assumed a fire-and-forget model, as there wasn't any documentation that implied otherwise. Perhaps a good specification of what a well-behaved wallet model should be would be in order? Anyway - I'm glad to see the efforts BCHN is making to get this first step done and wish them luck.
I think we can say now with high confidence that we will be able to remove the unconfirmed tx chain limit from BCHN completely in May.
The time before May seems too short for us to fully explore keeping CPFP around in the way that BU did in their client. So BCHN client will most likely NOT support CPFP after May, at least for a while. If there is some real demand for CPFP that would merit additional complexity, we would consider re-instating it at some later time.
I've updated the post to match the proposed CHIP format/structure and updated this CHIP's version to 1.1. These changes are reviewable at bitcoin-cash-chips/unconfirmed-transaction-chain-limit.md at master · SoftwareVerde/bitcoin-cash-chips · GitHub.
Without intending to be rude, I want to draw attention to the fact that the Technical Description section of this CHIP is essentially garbage at this time (unused template stuff and a link to another document without any context). I really want to urge the CHIP authors to improve this, quickly. At the very minimum, this section should specify the exact activation time and mechanism of the change.
Dude! This is just not helpful.
A CHIP is not meant as a technical document; its main (probably only) purpose is to convince the community to do something. And, indeed, a CHIP having technical details is immensely helpful in fulfilling that goal.
But, really, when all clients have already implemented the change, what point is there in altering the CHIP? It's served its purpose. There is really nobody left to convince, and hence the CHIP has fulfilled its purpose. People support it in its current form.
So, on top of being rude, you misunderstood the goal, and you put work on someone that you have no right to expect of them.
Edit:
As you seem interested in some sort of technical document for some reason, I invite you to write it. You want it, you get it written. Decentralized, no need to wait for anyone.
If you want the community to do something technical, it is very helpful to include an actual technical description of what that something is, exactly.
Which clients have already implemented the change? How did they implement it? What activation logic do they use?
I am a BCHN maintainer and I am pretty sure that at least BCHN did not implement the change yet. We are working hard on getting it implemented, though. I am requesting technical details to be included in the CHIP so that we can be sure different implementations are in fact compatible.
Support in current form is meaningless if the current form does not include an actual technical description of what we are going to do.
I indeed have no right to expect this CHIP to be written properly. You also have no right to expect BCHN to implement this CHIP.
We're trying to solve a collective problem here. I propose all parties do their part. For BCHN, that means getting the implementation finished. For the CHIP authors, that means getting the CHIP finished.
@joshmg is listed as the CHIP owner. I am noting a significant deficiency in his CHIP, so it's primarily his responsibility to address the problem. I believe he is very capable of writing this technical description himself, but if he would like assistance from BCHN devs, that is possible. (I similarly assisted with improving the multiple OP_RETURNs CHIP already.) Either way, a technical description belongs in the CHIP - as evidenced by the CHIP template chosen for writing this very CHIP.
Yeah, keep pushing. Really nice.
I really don't think it's fair or even useful to bully others into doing work you want done. Really quite sad to see you act like this.
Remove the chain limit at the previously defined timepoint of May 15th.
Can you say what more details you want?