CHIP - Unconfirmed Transaction Chain Limit

I would like to see this limit completely removed.

6 Likes

I think I would prefer a two-step approach: schedule a raise to something like 5000 for May 2021, and complete removal for May 2022.

I also think that the CPFP “feature” should be removed entirely - it doesn’t have a strong enough use case to motivate its existence unless you aim for a congested network as the default state - and Bitcoin Cash aims to support usage, not limit it.

3 Likes

I think I explained why it is there: because there are limits in the software, hardware and other setups. I don’t like the accusation of being arrogant. What I’m trying to do is not over-promise.

The simple fact of the matter is that unlimited is impossible. See Andrew’s post: mempool limits are a great example here. Should those go away too? The result would be full nodes crashing due to running out of memory (they did exactly that around 2015, because they didn’t have limits yet).

Demanding the impossible repeatedly won’t change the facts. It will just cause people to lie to you and give you the impression it’s unlimited while it really isn’t.

The network cannot promise that 100% of the transactions sent to it will get mined in a fire-and-forget manner. This is impossible in centralized solutions, and doubly so in decentralized ones.

So, I repeat, what is a number that you and other stakeholders find acceptable?

P.S. This is not a consensus rule; the idea that there will be “fights” in the future about this limit is not substantiated by anything I can see. Just like our block size, the limits will increase as the software and hardware limits increase. If some infrastructure software lags behind this demand, it will be outcompeted on the open market.

This is great to see! I have some technical and operational feedback on the details but that can come later. For now, from my perspective every CHIP requires an owner who commits to driving discussion and minimizing polarization.

Can someone in the list of authors commit to ownership, communication and sensitive handling of this proposal in a canonical location that can be pointed at and tracked?

2 Likes

Doing a bit of an inventory here, and hoping that stakeholders chip in with what is needed.

P2P

On the P2P network a client can send a transaction to a full node (as SPV wallets may do) and receives a “reject” message when the transaction fails for some reason.
Messages are standardized, at least in Satoshi-derived nodes, with texts like “bad-txns-fee-negative” or, for our specific topic today, “too-long-mempool-chain”.
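For illustration, a minimal sketch in Python of decoding such a reject payload (per BIP 61); it assumes the var_int length prefixes stay below 0xfd, which holds for these short strings, and a robust decoder would handle the longer encodings too:

```python
def read_var_str(buf: bytes, offset: int):
    """Read a Bitcoin var_str (var_int length + ASCII bytes) from buf."""
    length = buf[offset]  # single-byte var_int (assumed < 0xfd)
    offset += 1
    return buf[offset:offset + length].decode("ascii"), offset + length

def decode_reject(payload: bytes):
    message, off = read_var_str(payload, 0)    # rejected command, e.g. "tx"
    ccode = payload[off]                       # numeric reject code
    off += 1
    reason, off = read_var_str(payload, off)   # e.g. "too-long-mempool-chain"
    data = payload[off:off + 32]               # for "tx": the rejected txid
    txid = data[::-1].hex()                    # wire order is little-endian
    return message, ccode, reason, txid
```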

RPC

The main way to broadcast a transaction is the sendrawtransaction end-point. It replies either with the txid, or with the exact same error message as given in the P2P message.
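For example, a minimal sketch of calling that end-point over JSON-RPC and surfacing the error; host, port and credentials are placeholders for your own node’s settings:

```python
import base64
import json
import urllib.error
import urllib.request

def send_raw_transaction(tx_hex, url="http://127.0.0.1:8332",
                         user="rpcuser", password="rpcpass"):
    body = json.dumps({"id": 0, "method": "sendrawtransaction",
                       "params": [tx_hex]}).encode()
    request = urllib.request.Request(url, data=body)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    try:
        with urllib.request.urlopen(request) as response:
            reply = json.load(response)
    except urllib.error.HTTPError as http_error:
        reply = json.load(http_error)  # RPC errors come with a non-200 status
    if reply.get("error"):
        # e.g. {"code": -26, "message": "too-long-mempool-chain, ..."}
        raise RuntimeError(reply["error"]["message"])
    return reply["result"]  # the txid on success
```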

ElectrumX servers (Fulcrum et al.)

The exact same behavior seen on RPC also applies here. Error messages with text and code are forwarded from the RPC as-is.
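For example, a minimal sketch of broadcasting through such a server with the Electrum protocol (newline-delimited JSON-RPC over TCP); the host name is a placeholder, and most public servers require SSL on port 50002 rather than the plain TCP used here:

```python
import json
import socket

def electrum_broadcast(tx_hex, host="fulcrum.example.com", port=50001):
    request = json.dumps({"id": 0, "jsonrpc": "2.0",
                          "method": "blockchain.transaction.broadcast",
                          "params": [tx_hex]}) + "\n"
    with socket.create_connection((host, port)) as sock:
        sock.sendall(request.encode())
        reply = json.loads(sock.makefile("r").readline())
    if "error" in reply:
        # the node's text (e.g. "too-long-mempool-chain") passes through
        raise RuntimeError(reply["error"])
    return reply["result"]  # the txid
```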

Bitcoin.com’s REST API

On the “rawtransactions/sendRawTransaction” end-point the reply has an error field in the JSON, which includes the same (but uppercased) version of the error messages we see full nodes generate. This would presumably include the “too-long-mempool-chain” error.
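Because of the uppercasing, a client checking for the chain-limit error should match case-insensitively. A minimal sketch against that reply shape (the request itself is omitted here, as it is service-specific):

```python
def is_chain_limit_error(reply_json: dict) -> bool:
    """Check a JSON reply of the shape described above for the chain-limit
    error; the uppercased error text is why we normalize before matching."""
    error = reply_json.get("error") or ""
    return "TOO-LONG-MEMPOOL-CHAIN" in str(error).upper()
```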


The CHIP requests better tooling, but with error messages being available at all levels of the APIs I’m not entirely sure what more would be needed. A service that generates a transaction which isn’t accepted can simply re-broadcast it after the next block has come in.
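A sketch of that strategy, assuming an `rpc(method, *params)` helper like the JSON-RPC one above; it simply polls the block count, though node implementations also offer ZMQ block notifications:

```python
import time

def broadcast_when_possible(rpc, tx_hex, max_blocks=10):
    """Retry a chain-limit-rejected broadcast after each new block."""
    for _ in range(max_blocks):
        try:
            return rpc("sendrawtransaction", tx_hex)  # the txid on success
        except RuntimeError as error:
            if "too-long-mempool-chain" not in str(error):
                raise                    # a different failure: surface it
        # wait for the next block, which confirms part of the chain
        height = rpc("getblockcount")
        while rpc("getblockcount") == height:
            time.sleep(10)
    raise RuntimeError(f"chain limit still hit after {max_blocks} blocks")
```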

Can stakeholders comment on this strategy? Is that good enough?

1 Like

Points of order regarding OP:

  • CHIPs are intended to be concrete specifications, not vague feature requests.
  • CHIPs need to be made available under an open licence and include licensing info.

So this needs more work before it can become an actual CHIP.

See here for more info: CasH Improvement Proposal (CHIP)

1 Like

Hi Tom. I hope you don’t think I was calling you arrogant–my intention was to describe the potential for animosity if node/protocol developers begin viewing the other developers as people they need to teach lessons to. I apologize if my original verbiage did not communicate that accurately.

To my knowledge there isn’t a resource that becomes exhausted by a deep 0-conf chain if we remove CPFP. Since 0-conf in general is already bounded by the mempool limits, I don’t think we need to have the chaining limit defined explicitly–the current limit is redundant; it doesn’t solve any problems and only creates new ones, to no one’s benefit.

If there is a resource that is consumed by chaining, then I will change my personal opinion about unlimited vs. limited, and I would advocate that change of opinion to the other stakeholders. Furthermore, if there is indeed a technical limit, then I would advocate for node developers to find a responsible value for that limit and suggest it here. That being said, I have seen no evidence of a need for such a limit, and I encourage you to bring some forward. Until that’s done, I think unlimited is the most responsible path forward.

I don’t see how the unconfirmed transaction chaining limit asserts that promise. I think the debate at hand is whether we artificially limit tx chaining. If there is a technical reason for the limit then that will persuade me (as I already addressed above).

I believe I speak for all associated stakeholders in this document that, unless there is a technical limitation otherwise, the limit should be removed.

1 Like

I’d like to hear the reasoning for a two-step approach. Where does the number 5000 come from? If there’s a technical reason why 5000 is better/safer/etc. than 2147483647, then I think it would be great to have a discussion around that.

While I personally agree, I don’t want to distract from the discussion around unconfirmed transaction chaining. The way I see it, if the network agrees to remove the limit then it’s up to the implementations to decide whether that means removing CPFP (if they even had it to begin with–not all do) or improving it so that the implementation can support unlimited chains.

I think this proposal is a precursor to what you’re describing. It is my intention that the discussion will take place here, and that in the process the CHIP will be created and updated. With this process there will be a documented history of why the proposal is what it is, instead of all of that being hidden away. In the end, this is an experiment; if this process works then great–if it doesn’t, then that’s okay. That’s the beauty of a permissionless environment.

This is a great callout and an accidental oversight on our part. I put the document in the public domain so that if another party disagrees with the recommended result they can make their own changes and propose an alternative. Finally, I have put the document under version control here: bitcoin-cash-chips/unconfirmed-transaction-chain-limit.md at master · SoftwareVerde/bitcoin-cash-chips · GitHub

2 Likes

Absolutely agreed. Nobody is saying this, so we are good.

> I don’t think we need to have the chaining limit defined explicitly

And they are not. They are not part of the consensus rules. Even today different implementations have different limits, and some have no limits at all. Nobody will suggest that those without limits should implement them. Again, this is not a consensus rule.

You are changing the topic and focusing on one parameter in a more complex system. The complex system is the one your anecdotes have an issue with, not one integer limit. I assume we are here to fix the issues you put on the table.

Let me get us back to the topic by quoting myself:

1 Like

Firstly, this is great, thank you, Tom. I agree that (at least some) tooling exists. I’ll even go so far as to say that it’s perhaps the 3rd-party library developers that need to put in more work to make other developers’ lives easier, rather than node developers. I’m certain it’s a joint effort at least in some capacity, though.

I think the other points are very good, but this one definitely has room for improvement. The P2P error messages aren’t well-tied to the offending transaction. For instance, if a node transmits multiple transactions in succession, it will receive an error message, but that error message does not carry (canonical) identifying information to know which transaction failed. Instead, the response looks something like: too many descendants for tx <txid> [limit: 25], which is “okay” for a human, but for a computer the txid should be its own field within the response. Otherwise the sender has to grep through the error message for any one of the transactions it’s sent recently, hope that there’s a match, and also hope that the particular node doesn’t word things slightly differently. In other words, this part is super brittle and can definitely be improved. I’ve personally been on the receiving end of accounting for these edge cases, and I know first-hand that the solutions aren’t straightforward.
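To illustrate the brittleness, a sketch of what that grep-and-hope parsing looks like; the wording of the message is implementation-specific, which is exactly the problem:

```python
import re

# Matches one known wording; another node version may phrase it differently.
DESCENDANT_ERROR = re.compile(
    r"too many descendants for tx ([0-9a-f]{64}) \[limit: (\d+)\]")

def parse_offending_txid(error_text, recently_sent_txids):
    match = DESCENDANT_ERROR.search(error_text)
    if match is None:
        return None                         # unrecognized wording: stuck
    txid = match.group(1)
    # ...and still hope it corresponds to something we actually sent
    return txid if txid in recently_sent_txids else None
```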

EDIT: After speaking with Tom offline, I’m convinced we have the ability to extract the offending txid reliably by looking at the extra field within the error message for P2P messages.

Are you sure? That is for the log file (debug.log); the text I pasted is the one that goes to the sender.

I think this is the root of our miscommunication. This discussion is ONLY about the unconfirmed transaction chaining limit. I am not advocating that BCH begin promising “fire and forget guaranteed”. The other stakeholders are also not making this claim as part of this document. Please help me find how I can better articulate this distinction so that others do not assume the same.

Does this clarification change your opinion regarding the unconfirmed chaining limit? I assume and hope that it will, but if not, then let’s please focus on the technical reasons why a limit should be imposed and what that limit should be, because at the current point in time I still don’t see anyone providing evidence supporting its continued existence.

I believe I am that person. I’m representing (to the best of my ability) the people that have endorsed this request. I am also taking care to point out when I’m speaking on behalf of others and when I am speaking to my own personal opinion, though I’m sure there will be mistakes.

2 Likes

Your anecdotes specifically refer to the problem where you, Verde, did a fire-and-forget broadcast which failed. And you seem to conclude that the limit should be changed. The reality is that this is too simple.

The other stakeholders have told exactly the same kind of stories, in which they claim a single integer will change their product from non-working to working. Again, that is simply not true and too simple.

I think I fully agree that this is the core of the discussion. But the one that failed to communicate this is me. I apologize and I will be more clear.

I want to make clear that solving the problems stakeholders have stated will take more than changing one number. Changing one number would be much like BSV changing to a huge block size. That change doesn’t accomplish scaling. It works for a little while, until it doesn’t.

Removing the limit doesn’t make stakeholders’ use cases more reliable for very long either.

Edit: We can make this a simple technical statement, without any use cases, about a number that should change. Nothing more, nothing less.

As long as the CHIP is about solving use cases that assume fire-and-forget, you should include not just this magic number; you should actually solve the use case, not just for this year but also for next year, so it doesn’t come back to bite you or the other stakeholders.

2 Likes

This is unlimited for all intents and purposes, and if other nodes can achieve this as well, it’d be a great solution.

2 Likes

I agree with that. It would be a great step toward removing a silly limit in the software.

2 Likes

Ah, gotcha. Yeah, that’s a fair criticism. To summarize your point to make sure I understand it: even if the limit is removed, all of the problems described in the anecdote don’t just vanish. Transactions can fail for a plethora of other reasons. I agree with this; application code doesn’t become completely streamlined with the removal of the limit alone. It does reduce the likelihood that the edge cases are encountered, though, and more importantly: it’s one less thing to worry about among a sea of others.

In the end, I think it’s a criticism of the document though, right? Not the limit itself. I’ll add a paragraph to the document in a near-future revision that explicitly calls out that BCH still won’t be “fire and forget” even with the removal of the limit, so that this is clear for the historical record and for the other stakeholders.

I totally agree. Application developers still need to handle the edge cases–of which there are many. I’m just hoping we can make it ever so slightly easier for them.

I think this is super valuable feedback for the release of the CHIP. Thanks, Tom. I want to include anecdotes so that the need is communicated well, but at the same time we definitely need to be clear that this change doesn’t promise things that aren’t true. I’ll revise this weekend or early next week. I would appreciate your feedback on that once it’s done.

2 Likes

Thanks for raising this CHIP.

BCHN is still evaluating whether we can only raise the limit (and by how much), or whether we can eliminate it entirely for May.

At this point I don’t have numbers yet on the performance of the options we’re investigating, but we should have those quite soon and we’ll report back here once we’ve got them.

2 Likes

For reference, General Protocols has also encountered problems with regard to the 50-transaction unconfirmed chain limit. We don’t create these transactions ourselves, but we have had submissions sent to us that we were unable to broadcast because of the limit.