CHIP - Unconfirmed Transaction Chain Limit

Request Update to the Unconfirmed-Transaction Chain Limit



John Jamiel, Software Verde
Josh Green, Software Verde
Doug McCollough, City of Dublin, OH
Emil Oldenburg, Bitcoin.com
Roger Ver, Bitcoin.com
Mark Lamb, CoinFLEX

About the Stakeholders

Software Verde is a custom software development company based in Columbus, Ohio, USA that has been in operation since 2011 and has been working within the public and crypto sectors since early 2017. Software Verde has extensive experience working with local governments to promote the adoption of blockchain technology and the utilization of cryptocurrencies, and is the author and maintainer of the BCH full node Bitcoin Verde.

City of Dublin, OH is a municipality of approximately 50k residents that has invested in the adoption of blockchain technology. In turn, Dublin has built a blockchain-based digital identity management system utilizing BCH SLP tokens as a reward mechanism. In late 2019, Dublin’s identity management project moved into a beta-testing phase in which Software Verde was tasked with creating digital IDs for city employees and rewarding them with tokens for their participation.

Bitcoin.com is a provider of Bitcoin Cash related financial services and the owner of the Bitcoin.com wallet, one of Bitcoin Cash’s most popular non-custodial, mobile-friendly wallets. Its website provides important services such as a cryptocurrency exchange, network-related news, and information that helps drive the growth and adoption of Bitcoin Cash and Bitcoin Cash related businesses.

CoinFLEX is a popular cryptocurrency exchange service as well as the providers of the first Bitcoin Cash futures and lending exchange. CoinFLEX is the primary distributor of Flex Coins, an SLP token used to pay dividends to their users. Their business provides several unique financial services that attract cryptocurrency investors to the network, as well as foster a culture of professional trading within the Bitcoin Cash community.


When a transaction is first transmitted on the Bitcoin Cash network, it is considered “unconfirmed” until it is “mined” into a block. Transactions that have not yet been mined are also referred to as “zero-conf” transactions. Transactions depend upon other transactions, such that they are chained together: the value allocated by one transaction is then spent by a subsequent transaction. Currently, the Bitcoin Cash network only permits transactions to be chained together 50 times before a block must include them. Transactions beyond the 50th link in a chain are often ignored by the network, despite being valid. Once a transaction is submitted to the network, it cannot be revoked.
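The chaining rule described above can be modeled as follows: a transaction whose inputs are all already mined has depth 1, and each unconfirmed ancestor adds one link. This is only an illustrative sketch, not how any particular node implements the check; all names here are hypothetical.

```python
# Illustrative model of the unconfirmed-transaction chain limit.
# `mempool` maps txid -> list of parent txids; parents that are not
# present in the mempool are assumed to be already mined.

MAX_CHAIN_DEPTH = 50  # the network-policy limit discussed above


def chain_depth(txid, mempool):
    """Depth of `txid` within its unconfirmed chain.

    A transaction whose inputs are all mined has depth 1; otherwise its
    depth is one more than its deepest unconfirmed parent.
    """
    unconfirmed_parents = [p for p in mempool.get(txid, []) if p in mempool]
    if not unconfirmed_parents:
        return 1
    return 1 + max(chain_depth(p, mempool) for p in unconfirmed_parents)


def would_exceed_limit(txid, mempool):
    """True when the transaction sits past the 50th link in the chain."""
    return chain_depth(txid, mempool) > MAX_CHAIN_DEPTH
```

A wallet spending an output at depth 50 would, under this model, produce a depth-51 transaction that the network refuses to relay even though it is otherwise valid.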


When a transaction exceeds the unconfirmed transaction chaining limit, it is still considered a valid transaction; however, it is often ignored by the network. This leaves the value transferred by the transaction in an ambiguous state: it has been transferred, but some (or all) of the network may not record the transfer. Once value has been transferred by a transaction, the balance may not be redistributed in a different proportion or to a different receiver (often known as a “double-spend”), due to the network’s convention of preferring the first-seen transfer over the newer one. Therefore, the only viable path forward in this scenario is to transmit the same (perfectly identical) transaction to the network again. However, if the wallet or service is connected to peers that already accepted the transaction, rebroadcasting the same transaction does not cause those peers to retransmit it themselves, leaving the transaction stuck with no recourse other than hoping to connect to a new peer that has not yet seen it. For this reason, it is important that all nodes agree on the unconfirmed transaction chain limit.

Additionally, determining whether a transaction was not accepted by the network is a difficult problem to solve with the currently available toolset. Error responses from nodes rejecting a transaction have not been standardized, and oftentimes nodes will silently reject the transaction. Sometimes the node may not even be aware that the transaction is invalid because its dependency has not been seen yet, and the node itself cannot determine the transaction’s chain depth.

It is also not always known to the user/service/wallet how deep the unconfirmed transaction chain already is when a transaction is received; it is possible that the coins they have received are already at the 50th link, and determining that state can be near-impossible without a full node.

The problem from a user’s or application’s perspective is that they have created a valid transaction and are given little indication that it will not be mined into a block. The tools for recourse are limited, and the tools for monitoring for such a situation are also limited.

The unconfirmed transaction chain limit is mostly an artifact of “Child Pays For Parent” (CPFP), an unused feature held over from artificially restricted block sizes. This feature is not used in BCH, yet it still restricts the user experience and increases the complexity of developing wallets and applications built on top of Bitcoin Cash.


During a Dublin Identity beta test with real users, an issue occurred causing sign-ups to periodically fail. After investigation, it was identified that the transactions from the server used to fund users’ SLP token transfers were not being accepted by the network. Further research revealed that the transactions were being rejected because of the transaction chain limit enforced by the network.

CoinFLEX uses SLP to distribute FLEX token dividends to its users. The server distributes these dividends periodically via chained transactions. These distributions were found to periodically fail due to reaching the unconfirmed transaction chaining limit. CoinFLEX mitigates this problem by using multiple UTXOs to spawn the chains; however, their large user base and the small limit of 50 transactions per UTXO cause disproportionate complexity within their system. Raising the limit, combined with increasing the base number of originating UTXOs, helps to limit backend complexity. Removing the limit would remove significant complexity around rejection edge cases where a rejection could not be determined or went unnoticed.
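The mitigation described above amounts to splitting each payout batch across enough originating UTXOs that no single unconfirmed chain exceeds the limit. A minimal sketch of that sizing logic, with illustrative names and a round-robin assignment policy chosen here for simplicity (not CoinFLEX’s actual implementation):

```python
# Sketch: sizing and filling UTXO pools so each unconfirmed chain
# stays within the per-UTXO chaining limit.
import math


def pools_needed(num_payments, chain_limit=50):
    """Minimum number of originating UTXOs for one unconfirmed batch."""
    return math.ceil(num_payments / chain_limit)


def assign_to_pools(payments, chain_limit=50):
    """Distribute payments round-robin across the minimum pool count."""
    count = pools_needed(len(payments), chain_limit)
    pools = [[] for _ in range(count)]
    for i, payment in enumerate(payments):
        pools[i % count].append(payment)
    return pools
```

Note how the bookkeeping grows with the user base: 120 payouts already require three separate funding UTXOs at a limit of 50, and each pool must be replenished and tracked independently.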

During BCH meetups it is not uncommon for wallet users to make more than 50 transactions within the timespan of a block, especially given the encouraged behavior of brand-new users transferring their BCH to other members of the meetup to “try it out”. The user experience and “wow” factor of the technology is quickly doused when a new user’s transaction fails to send because their received UTXO is deeply chained. Varying block times exacerbate this problem.

Software Verde has developed multiple applications that create and distribute transactions across the BCH network. Managing multiple UTXO pools in order to scale appropriately is doable but creates much unwanted complexity. While transactions will likely never be completely “fire and forget” on BCH, striking a balance with a larger buffer (i.e., supporting a longer chain limit) and having better tools available would allow us to produce applications more reliably and at lower cost, facilitating the adoption of Bitcoin Cash by businesses and enterprises.


Ideally, the unconfirmed transaction chain limit would be removed completely from BCH. If it is deemed necessary or worthwhile to keep the unconfirmed transaction chain limit in some capacity, then a significantly larger limit would be a reasonable alternative. A 32MB block can hold approximately 135k transactions, which could serve as a hypothetical starting point for the unconfirmed transaction chain limit, although feedback from protocol developers and members of the greater BCH community is very much welcomed. Additionally, increased tooling for determining a transaction’s current chain depth, and standardization around error/rejection messages, would better equip non-protocol developers to take on the responsibility of owning and monitoring the transactions they have transmitted.
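The ~135k figure above follows from simple arithmetic; the average transaction size assumed below is an estimate (simple one-input, two-output payments are on the order of 226-250 bytes), not a protocol constant:

```python
# Rough arithmetic behind the "~135k transactions per 32MB block" figure.
BLOCK_SIZE_BYTES = 32_000_000
ASSUMED_AVG_TX_BYTES = 235  # assumption: typical simple-payment size

txs_per_block = BLOCK_SIZE_BYTES // ASSUMED_AVG_TX_BYTES
print(txs_per_block)  # on the order of 135k
```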


The change to the unconfirmed transaction chaining limit is a network-level change only, meaning it is enforced only at the network/P2P layer and therefore cannot cause a chain split. This proposal is low risk and provides high benefit to the network as a whole. For this reason, and because of the severity of the impact on non-protocol developers, it is requested that this change be implemented in a coordinated manner as soon as possible. We believe the most sensible timeline for that is the previous, although deprecated, date of May 15th. This date is not significant in and of itself, although there seems to be no reason to deviate from history purely for the sake of it.


This document is placed in the public domain.


Notice the strong similarity of this problem to one described on Thinking about TX propagation - Technical - Bitcoin Cash Research, which states:

There are two sides here. Most of the stakeholders are on the side of the “services” that really like the “fire and forget” approach. What is not to like there! Right?

On the other side you have the infrastructure people running the network that would LOVE to provide this service, and I am the first to admit that the 50 tx limit is silly low and needs to be made more in line with realistic limits.

I suggest we compromise. The limits are raised but not removed, for the simple reason that nobody can promise that ‘fire-and-forget’ actually works. Memory isn’t unlimited, nodes don’t have infinite uptime, and other real-world problems stop the limit from being completely removed. So the question is, what limit is reasonable to the stakeholders? Is 1000 outputs as a limit good enough for now?

On the side of the services people, we need to make sure that they accept that no-limits is not a promise the network can make. They need to accept that the “fire and forget” is only Ok for non-critical solutions. The ones where (thumb in air) 99.9% of the transactions will get mined. For more critical solutions the services need to re-broadcast transactions till they get mined.

Let us know what you think!


Bitcoin Unlimited’s release supports “unlimited” unconfirmed transaction chains, where “unlimited” actually means “limited by the maximum size of your mempool”. I like this approach because it’s one less error message that tx creators need to deal with, but at the same time I can see that raising the limit incrementally is the more conservative approach.

tldr; I think we should do something for May 15. I’d prefer unlimited, but am ok with a bump to 1000 or more.


I believe this is covered in the request section. This section requests removing the limit completely and providing better tooling so that non-protocol developers have the ability to take responsibility for their transactions. It is my opinion that an artificial limit just for the sake of “sending a message” to non-protocol devs is counter-productive (and could be perceived as arrogant). Let’s not set ourselves up for another limit discussion in the future when 1,000 becomes insufficient; instead, let’s just remove it and move on to better things.

If your counter argument to that is to make it 10,000, then let’s just make it 2,147,483,647.


I would like to see this limit completely removed.


I think I would prefer a two-step approach: schedule a raise to something like 5000 for May 2021, and the complete removal for May 2022.

I also think that the CPFP “feature” should be removed entirely. It doesn’t have a strong enough use case to motivate its existence unless you aim for a congested network as the default state, and Bitcoin Cash aims to support usage, not limit it.


I think I explained why: there are limits in the software, hardware, and other setups. I don’t like the accusation of being called arrogant. What I’m trying to do is not over-promise.

The simple fact of the matter is that unlimited is impossible. See Andrew’s post: mempool limits are a great example here. Should those go away too? The result would be full nodes crashing from running out of memory (they did exactly that around 2015, before they had limits).

Demanding the impossible repeatedly won’t change the facts. It will just cause people to lie to you and give you the impression that it’s unlimited while it really isn’t.

The network cannot promise that 100% of the transactions sent to it will get mined in a fire-and-forget manner. This is impossible in centralized solutions, and doubly so in decentralized ones.

So, I repeat, what is a number that you and other stakeholders find acceptable?

ps. this is not a consensus rule; the idea that there will be “fights” in the future about this limit is not substantiated by anything I can see. Just like our block size, the limits will increase as the software and hardware limits increase. If some infrastructure software lags behind this demand, it will be outcompeted on the open market.

This is great to see! I have some technical and operational feedback on the details but that can come later. For now, from my perspective every CHIP requires an owner who commits to driving discussion and minimizing polarization.

Can someone in the list of authors commit to ownership, communication and sensitive handling of this proposal in a canonical location that can be pointed at and tracked?


Doing a bit of an inventory here, and hoping that stakeholders chip in with what is needed.


On the P2P network, a client can send a transaction to a full node (as SPV wallets may do) and receive a “reject” message when the transaction fails for some reason.
Messages are standardized, at least in satoshi nodes, with texts like “bad-txns-fee-negative” or, for our specific topic of today, “too-long-mempool-chain”.


The main way to broadcast a transaction is using the sendrawtransaction endpoint. It replies with either the txid or the exact same error message as given in the P2P message.
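A minimal sketch of driving that endpoint over JSON-RPC is below. The node URL is a placeholder, authentication is omitted (real nodes typically require HTTP basic auth), and the helper names are illustrative; only the method name `sendrawtransaction` and the “too-long-mempool-chain” error string come from the discussion above.

```python
# Sketch: broadcasting a raw transaction via a full node's JSON-RPC
# sendrawtransaction endpoint and classifying the reply.
import json
import urllib.request


def build_send_request(raw_tx_hex, request_id=1):
    """Build the JSON-RPC payload for sendrawtransaction."""
    return json.dumps({
        "jsonrpc": "1.0",
        "id": request_id,
        "method": "sendrawtransaction",
        "params": [raw_tx_hex],
    })


def classify_reply(reply):
    """Map a sendrawtransaction reply (parsed JSON) to a coarse outcome."""
    if reply.get("error") is None:
        return "accepted"        # "result" holds the txid
    message = reply["error"].get("message", "")
    if "too-long-mempool-chain" in message:
        return "chain-limit"     # the rejection this CHIP is about
    return "rejected"


def broadcast(raw_tx_hex, url="http://localhost:8332/"):  # hypothetical URL
    request = urllib.request.Request(
        url,
        data=build_send_request(raw_tx_hex).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return classify_reply(json.load(response))
```

Separating `classify_reply` from the transport lets a service react specifically to chain-limit rejections (e.g. by queuing the transaction for re-broadcast) rather than treating all errors alike.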

ElectrumX servers (Fulcrum et al.).

The exact same behavior as RPC is also seen here. Error messages with text and code are forwarded from the RPC as-is.

Bitcoin(.)com’s REST API.

On the “rawtransactions/sendRawTransaction” endpoint, the reply has an error field in the JSON which includes the same (but uppercased) version of the error messages we see full nodes generate. This would presumably include the “too-long-mempool-chain” error.

The CHIP requests better tooling, but with error messages being available at all levels of the APIs, I’m not entirely sure what more would be needed. A service that generates a transaction which isn’t accepted can just re-broadcast it after the next block has come in.

Can stakeholders comment on this strategy? Is that good enough?
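The re-broadcast-after-the-next-block strategy suggested above could be sketched as follows. The `node` interface (`best_block_hash`, `wait_for_block`, `send_raw_transaction`) is an assumed abstraction over whatever RPC client a service uses, not a real library API:

```python
# Sketch: retry a rejected transaction once per new block, on the theory
# that each block may confirm enough ancestors to shorten the chain.

def rebroadcast_until_accepted(node, raw_tx, max_blocks=10):
    """Re-broadcast `raw_tx` after each new block, up to `max_blocks`."""
    last_tip = node.best_block_hash()
    for _ in range(max_blocks):
        try:
            return node.send_raw_transaction(raw_tx)  # txid on success
        except Exception:
            # Rejected (e.g. too-long-mempool-chain): wait for the chain
            # tip to advance before trying again.
            while node.best_block_hash() == last_tip:
                node.wait_for_block()
            last_tip = node.best_block_hash()
    raise RuntimeError(
        "transaction still not accepted after %d blocks" % max_blocks)
```

This is the “more critical solutions need to re-broadcast till mined” approach from the quoted post; the open question raised in this thread is how the service reliably distinguishes a chain-limit rejection (worth retrying) from a permanent one.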


Points of order regarding OP:

  • CHIPs are intended to be concrete specifications, not vague feature requests.
  • CHIPs need to be made available under an open licence and include licensing info.

So this needs more work before this can become an actual CHIP.

See here for more info: CasH Improvement Proposal (CHIP)


Hi Tom. I hope you don’t think I was calling you arrogant; my intention was to describe the potential for animosity if node/protocol developers begin viewing other developers as people they need to teach lessons to. I apologize if my original verbiage did not communicate that well.

To my knowledge there isn’t a resource that becomes exhausted by a deep 0-conf chain if we remove CPFP. Since 0-conf in general is already bounded by the mempool limits, I don’t think we need the chaining limit defined explicitly; the current limit is redundant, doesn’t solve any problems, and only creates more problems, to no one’s benefit.

If there is a resource that is consumed by chaining, then I will change my personal opinion about unlimited vs limited, and I would advocate that change of opinion to the other stakeholders. Furthermore, if there is indeed a technical limit, then I would advocate for node developers to find a responsible value for that limit and suggest it here. That being said, I have seen no evidence of a need for such a limit, and I encourage you to bring some forward. Until that’s done, I think unlimited is the most responsible path forward.

I don’t see how the unconfirmed transaction chaining limit asserts that promise. I think the debate at hand is whether we artificially limit tx chaining. If there is a technical reason for the limit, then that will persuade me (as I already addressed above).

I believe I speak for all stakeholders associated with this document in saying that, unless there is a technical limitation otherwise, the limit should be removed.


I’d like to hear the reasoning for a two-step approach. Where does the number 5000 come from? If there’s a technical reason why 5000 is better/safer than 2147483647, then I think it would be great to have a discussion around that.

While I personally agree, I don’t want to distract from the discussion around unconfirmed transaction chaining. The way I see it, if the network agrees to remove the limit, then it’s up to the implementations to decide whether that means removing CPFP (if they even had it to begin with; not all do) or improving it so that an implementation can support unlimited chains.

I think this proposal is a precursor to what you’re describing. It is my intention that the discussion will take place here, and in the process the CHIP will be updated and created. With this process, there will be documented history for why the proposal is what it is instead of all of that being hidden away. In the end, this is an experiment; if this process works then great–if it doesn’t then that’s okay. It’s the beauty of a permissionless environment.

This is a great callout and an accidental oversight on our part. I have placed the document in the public domain so that if another party disagrees with the recommended result, they can make their own changes and propose an alternative. Finally, I have put the document under version control here: bitcoin-cash-chips/ at master · SoftwareVerde/bitcoin-cash-chips · GitHub


Absolutely agreed. Nobody is saying this, so we are good.

I don’t think we need to have the chaining limit defined explicitly

And they are not. They are not part of the consensus rules. Even today there are different implementations that have different limits and some have no limits. Nobody will suggest those that have no limits should implement limits. Again, this is not a consensus rule.

You are changing the topic and focusing on one parameter in a more complex system. The complex system is the one your anecdotes have an issue with, not one integer limit. I assume we are here to fix the issues you put on the table.

Let me get us back to the topic by quoting myself:


Firstly, this is great, thank you, Tom. I agree that (at least some) tooling exists. I’ll even go so far to say that it’s perhaps the 3rd-party library developers that need to put in more work to make other developers’ lives easier rather than node developers. I’m certain it’s a joint effort at least in some capacity, though.

I think the other points are very good, but this one definitely has room for improvement. The P2P error messages aren’t well tied to the offending message. For instance, if a node transmits multiple transactions in succession, it will receive an error message, but that error message does not carry (canonical) identifying information about which transaction failed. Instead, the response looks something like: too many descendants for tx <txid> [limit: 25], which is “okay” for a human, but for a computer the txid should be its own field within the response. Otherwise the sender has to grep through the error message for any one of the transactions it has sent recently, hope that there’s a match, and also hope that the particular node doesn’t do something slightly differently. In other words, this part is super brittle and can definitely be improved. I’ve personally been on the receiving end of accounting for these edge cases, and I know first-hand that the solutions aren’t straightforward.
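The grepping described above might look like the sketch below. The regex assumes the exact wording quoted (“too many descendants for tx <txid> [limit: N]”), which is precisely the brittleness being complained about: another implementation, or another version, can phrase the message differently and silently break the parser.

```python
# Sketch: extracting the offending txid from a free-text reject message.
# The message format here is the one quoted in this thread and is NOT
# guaranteed to be stable across node implementations.
import re

REJECT_PATTERN = re.compile(
    r"too many descendants for tx ([0-9a-fA-F]{64}) \[limit: (\d+)\]")


def parse_descendant_reject(message):
    """Return (txid, limit) if the message matches, else None."""
    match = REJECT_PATTERN.search(message)
    if match is None:
        return None
    return match.group(1), int(match.group(2))
```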

EDIT: After speaking with Tom offline, I’m convinced we have the ability to extract the offending txid reliably by looking at the extra field within the error message for P2P messages.

Are you sure? That is for the log file (debug.log); the text I pasted is the one that goes to the sender.

I think this is the root of our miscommunication. This discussion is ONLY about the unconfirmed transaction chaining limit. I am not advocating BCH begins promising “fire and forget guaranteed”. The other stakeholders are also not making this claim as a part of this document. Please help me find how I can better articulate this distinction so that others do not also assume the same.

Does this clarification change your opinion regarding the unconfirmed chaining limit? I assume and hope that it will, but if not, then let’s please focus on the technical reasons why a limit should be imposed and what that limit should be, because at the current point in time I still don’t see anyone providing evidence supporting its continued existence.

I believe I am that person. I’m representing (to the best of my ability) the people that have endorsed this request. I am also taking care to point out when I’m speaking on behalf of others or when I am speaking to my own personal opinion. Although I’m sure there will be mistakes.


Your anecdotes specifically refer to the problem where you, Verde, did a fire-and-forget which failed. And you seem to conclude that the limit should be changed. The reality is that this is too simple.

The other stakeholders have told exactly the same kind of stories, where they claim a single integer will change their product from non-working to working. Again, that is simply not true and too simple.

I think I fully agree that this is the core of the discussion. But the one that failed to communicate this is me. I apologize, and I will be more clear.

I want to make clear that solving the problems stakeholders have stated will take more than changing one number. Changing one number would be much like BSV changing to a huge block size. That change doesn’t accomplish scaling. It just does for a little while, until it doesn’t.

Removing the limit doesn’t make stakeholders’ usecases more reliable for very long either.

Edit: We can make this a simple technical statement, without any use cases, of a number that should change. Nothing more, nothing less.

As long as the CHIP is about solving use cases that assume fire-and-forget, you should not just include this magic number; you should actually solve the use case, not just for this year but also for next year, so it doesn’t come back to bite you or the other stakeholders.

1 Like

This is unlimited for all intents and purposes, and if other nodes can achieve this as well, it would be a great solution.