CHIP - Unconfirmed Transaction Chain Limit


CHIP Owner:
Josh Green, Software Verde

John Jamiel, Software Verde
Doug McCollough, City of Dublin, OH
Emil Oldenburg,
Roger Ver,
Mark Lamb, CoinFLEX


Version 1.2.1

When a transaction is first transmitted on the Bitcoin Cash network, it is considered “unconfirmed” until it is “mined” into a block. These transactions that are not yet mined are also referred to as “zero-conf” transactions. Transactions are dependent upon other transactions, such that they are chained together; the value allocated by one transaction is then spent by a subsequent transaction.

Currently, the Bitcoin Cash network only permits transactions to be chained together 50 times before a block must include them. Transactions beyond the 50th link in such a chain are often ignored by the network, despite being valid. Once a transaction is submitted to the network, it cannot be revoked. This situation, when encountered, can be extremely difficult to remedy with today’s available tools, and it simultaneously creates an unnecessary amount of complexity for the network and application developers who must account for it. This CHIP is a formal request to remove the unconfirmed transaction chain limit, both its depth (50 transactions) and size (101 KB) components, entirely from the Bitcoin Cash ecosystem.

Discussion URL: CHIP - Unconfirmed Transaction Chain Limit - #32 by proteusguy

Full Change History URL: bitcoin-cash-chips/ at master · SoftwareVerde/bitcoin-cash-chips · GitHub


Transactions exceeding the unconfirmed transaction chaining limit are often ignored by the network, despite being considered a valid transaction. For these transactions, this leaves the value transferred in an ambiguous state: it has been transferred, but some (or all) of the network may not record this transfer. Once value has been transferred by a transaction, the balance may not be distributed in a different proportion or to a separate receiver (often known as a “double-spend”) due to the network’s convention to prefer the first-seen transfer rather than the newer transfer. Therefore, the only viable path forward in this scenario is to transmit the same (perfectly identical) transaction again to the network. However, if the wallet or service is connected to peers that accepted the transaction, rebroadcasting the same transaction does not cause the connected peers to retransmit it themselves–causing the transaction to be stuck with no recourse other than hoping to connect to a new peer that has not yet seen the transaction. For this reason, it is important that all nodes agree on the unconfirmed transaction chain limit.

Additionally, determining whether a transaction was not accepted by the network is a difficult problem to solve with the currently available toolset. Error responses from nodes rejecting a transaction have not been standardized, and oftentimes nodes will silently reject the transaction. Sometimes the node may not even be aware that the transaction is invalid, because the transaction it depends on has not been seen yet, and the node itself cannot determine the transaction’s chain depth.

It is also not always known to the user, service, or wallet how deep the unconfirmed transaction already is when it’s received; it’s entirely possible the coins received are at the limit, and determining that state can be near-impossible without the help of a full-node.
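With access to the full mempool graph, computing that depth is straightforward; the difficulty is that SPV wallets do not have this data. A minimal sketch of the computation a full node could perform (hypothetical code; real implementations track ancestor counts incrementally):

```python
# Hypothetical sketch of how a full node could compute the unconfirmed
# ancestor depth of a transaction. `mempool` maps a txid to the txids of
# its *unconfirmed* parents; already-confirmed parents are simply absent.
def ancestor_depth(txid, mempool):
    """Longest chain of unconfirmed ancestors above txid (0 = all inputs confirmed)."""
    parents = mempool.get(txid, [])
    if not parents:
        return 0
    return 1 + max(ancestor_depth(p, mempool) for p in parents)

# Toy chain a -> b -> c, where a spends only confirmed outputs.
mempool = {"a": [], "b": ["a"], "c": ["b"]}
print(ancestor_depth("c", mempool))  # 2
```

A receiver without this graph cannot tell whether a freshly received UTXO sits at depth 2 or depth 49, which is the ambiguity the paragraph above describes.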

The problem from a user/app’s perspective is that they have created a valid transaction and are given little indication that it will not be mined into a block. The tools for recourse are limited, and the tools for monitoring for such a situation are also limited.

The unconfirmed transaction chain limit is mostly an artifact of a relatively unused feature held over from the era of artificially restricted block sizes: “Child Pays for Parent” (CPFP). According to research conducted by Tom Zander, there is very limited usage of CPFP on the BCH network. In short, in his three months of monitoring network activity there were only 7 valid use cases where CPFP was used to lift a transaction above the 1-sat-per-byte fee floor. This feature is not used in BCH, yet it still restricts the user experience and increases the complexity of developing wallets and applications built on top of Bitcoin Cash.
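For context, CPFP works on package fee rates: a miner considers a low-fee parent together with the higher-fee child that spends it. A toy illustration with made-up numbers:

```python
# Toy CPFP arithmetic (all numbers made up): a miner evaluates a low-fee
# parent together with the higher-fee child that spends it, as one package.
def package_feerate(txs):
    """Fee rate in sat/byte for a list of (fee_sats, size_bytes) pairs."""
    total_fee = sum(fee for fee, _ in txs)
    total_size = sum(size for _, size in txs)
    return total_fee / total_size

parent = (0, 250)    # zero-fee parent, 250 bytes: below a 1 sat/byte floor on its own
child = (750, 250)   # child pays 750 sats over 250 bytes

print(package_feerate([parent, child]))  # 1.5 sat/byte: the child "pays for" the parent
```

Tracking these packages is what forces nodes to walk unconfirmed ancestor chains in the first place, which is why the chain limit and CPFP are linked.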

Issues with transaction chaining are exacerbated by the long block times periodically seen in Bitcoin Cash, the causes of which have been discussed elsewhere and were a major motivating factor in switching to the ASERT difficulty adjustment algorithm. Having a static transaction chaining limit while blocks somewhat frequently take over an hour (or even two hours) to be mined results in a scenario where transactions can be significantly more at risk than normal. Note, though, that even without these extenuating circumstances, this is always a risk with a proof-of-work system.

Given that the motivations for implementing the transaction chaining limit are largely no longer relevant, that SPV clients lack sufficient tooling to account for it, and that it interacts poorly with the current semantics of transaction relaying, it appears that the transaction chaining limit provides little value while simultaneously increasing the difficulty of transacting on the Bitcoin Cash network.

Personal Impacts

During a Dublin Identity beta test with real users, an issue occurred causing sign-ups to periodically fail. After investigation, it was identified that users’ transactions from the server used to fund SLP token transfers were not being accepted by the network due to the transaction chain limit being enforced. This problem has since been mitigated by the limit being increased to 50, along with some process changes.

CoinFLEX uses SLP to distribute FLEX token dividends to its users. The server distributes these dividends periodically, via chained transactions. These distributions were found to periodically fail due to reaching the unconfirmed transaction chaining limit. CoinFLEX is mitigating this problem by using multiple UTXOs to spawn the chains; however, their large user base and the small limit of 50 transactions per UTXO cause disproportionate complexity within their system. Raising the limit, combined with increasing the base number of originating UTXOs, helps to limit backend complexity. Removing the limit would remove significant complexity around rejection edge cases, where a rejection could not be determined or went unnoticed.

During Bitcoin Cash meetups it is not uncommon for users of the wallet to make more than 50 transactions within the timespan of a block, especially due to the encouraged behavior of brand-new users transferring their BCH to other members of the meetup to “try it out”. The user experience and “wow” factor of the technology is quickly doused when a new user’s transaction fails to send because their received UTXO is deeply chained. Varying block times exacerbate this problem.

Software Verde has developed multiple applications that create and distribute transactions across the BCH network. Managing multiple UTXO pools in order to scale appropriately is doable, but it creates additional unwanted complexity. While transactions will likely never be completely “fire and forget” on BCH, striking a balance with a larger buffer (i.e. supporting a longer chain limit) and having better available tools would allow us to produce applications more reliably and at lower cost, facilitating the adoption of Bitcoin Cash by businesses and enterprises.

Technical Description

The current policy limit of 50 unconfirmed ancestors or descendants, along with the 101 KB unconfirmed chain size limit, is to be removed entirely once the chain tip’s median time past (MTP) is >= 1621080000. This removal remains in effect even in the case of a subsequent re-org to below that MTP.
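A sketch of these activation semantics (hypothetical node code, not from any implementation); the key point is that the removal latches on and is not undone by a re-org:

```python
# Hypothetical sketch of the activation semantics described above: once the
# chain tip's median time past (MTP) reaches the activation timestamp, the
# limit removal latches on and is never undone, even by a re-org below it.
ACTIVATION_MTP = 1621080000  # 2021-05-15 12:00:00 UTC

class ChainLimitPolicy:
    def __init__(self):
        self.limit_removed = False

    def on_new_tip(self, median_time_past):
        if median_time_past >= ACTIVATION_MTP:
            self.limit_removed = True
        # deliberately no `else`: a re-org to below the MTP does not re-enable the limit

policy = ChainLimitPolicy()
policy.on_new_tip(ACTIVATION_MTP)      # activates
policy.on_new_tip(ACTIVATION_MTP - 1)  # re-org below the MTP
print(policy.limit_removed)            # True: still removed
```

Latching the flag avoids mempool policy flip-flopping across a re-org boundary, which would otherwise let nodes disagree about which chained transactions to accept.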

Security Considerations

Uncoordinated changes to mempool rules would likely result in a degradation of 0-conf transaction security. 0-conf transaction security depends on the network’s ability to prevent double-spends. If nodes do not agree to enforce the same limits, merchants accepting transactions that have exceeded the unconfirmed transaction chaining limit would be at an increased risk of encountering and accepting a double-spend transaction.

Example: A malicious user submits a transaction exceeding the current chaining limit, knowing the merchant is connected to a node that does not enforce the limit. The node accepts this transaction as it considers it valid, and the merchant believes they’ve received a payment from a valid 0-conf transaction. Due to its unconfirmed chain depth, for this transaction to propagate, the node in question must wait to broadcast it to its peers until after a new block has been found. During this time the malicious user can prepare a second transaction spending the same coin. If submitted immediately after the new block has been found, the two transactions will be in a race. Since the first transaction has not yet been broadcast to the rest of the network, there is an increased likelihood the second transaction will be seen by the majority of the network before the first has had an opportunity to propagate. This situation is exacerbated if the node accepting the longer unconfirmed chain does not re-relay the transaction after a new block is mined that does not contain it.

Implementation Costs and Risks

From our research and discussions, removal of the Unconfirmed Transaction Chain Limit does not present any apparent risks if conducted in a coordinated manner, and it presents zero risk of a network split. According to the research conducted by developer FreeTrader of BCHN, there is no apparent loss of performance in BCHN with the limit removed. However, if changes to the mempool rules are not coordinated among the different node implementations, 0-conf transaction facility and security will likely suffer.

Costs associated with implementing this change are hard to encapsulate in this proposal. At a minimum, this CHIP recognizes the operational burden that coordinated network upgrades place on node developers and users. Overall, this change will require a non-negligible amount of development time to implement, translating to a cost of labor which is bound to vary depending on the full-node implementation and route to resolution.

Additionally, the cost of investigating solutions for the unconfirmed transaction chaining limit has been significant for those who have undertaken the task. Based on an informal survey of BU and BCHN members, General Protocols has estimated that approximately 500 engineering hours have been invested in development and general investigation of increasing the chained-transaction limit. This commitment of hours has been useful for understanding the potential limitations preventing the limit from being completely removed. After thorough investigation, no ill effect on performance has been found.

Evaluation of Alternatives

If it is deemed necessary to keep the unconfirmed transaction chain limit in some capacity, then a significantly larger increase to the limit would be a reasonable alternative.

From our research, there isn’t a resource that becomes exhausted by a deep 0-conf chain. If there is indeed a technical limit, then we would advocate for node developers to find a responsible value for that limit and suggest it here.

For the purposes of proposing an alternative solution: a 32 MB block can hold approximately 135k transactions. That figure could serve as a hypothetical starting point for the limit.
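As a back-of-envelope check (the average transaction size of roughly 237 bytes is our assumption, not a number from the proposal):

```python
# Back-of-envelope check of the ~135k figure. The implied average transaction
# size of roughly 237 bytes is an assumption (about the size of a simple
# 1-input, 2-output P2PKH transaction), not a number from the proposal.
BLOCK_SIZE_LIMIT = 32_000_000  # 32 MB, in bytes
AVG_TX_SIZE = 237              # assumed average transaction size, in bytes

print(BLOCK_SIZE_LIMIT // AVG_TX_SIZE)  # 135021, i.e. "approximately 135k"
```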


Stakeholders

Stakeholders relative to this proposal include:

Full-node implementations
Node Developers
Wallet Developers
Bitcoin Cash related businesses

In our previous discussions we have engaged with several key stakeholders to understand their positions on the requested change.

Stakeholders Engaged in Discussion
Bitcoin Unlimited
Bitcoin Verde
General Protocols

Stakeholders Position Unknown

Stakeholder Statements

Jonathan Silverblood - Casual Wallet

I believe that for money to be useful, it needs to be able to move at low cost and with ease. The current unconfirmed transaction chain limitation is effectively friction that makes Bitcoin Cash less useful as money, and I support a complete removal of the limit.

John Nieri - General Protocols

GP supports the updated recommendations of this CHIP and commits any reasonable resources toward its realization. There is still room to expand security considerations and costs which are not trivial. Although this is a non-consensus CHIP, the expansion would make it an even better precedent for the high bar that we want to establish in the BCH ecosystem.

CHIP Sponsors

Software Verde is a custom software development company based out of Columbus, Ohio, USA that has been in operation since 2011 and working within public and crypto sectors since early 2017. Software Verde has extensive experience working with local governments to promote the adoption of blockchain technology and utilization of cryptocurrencies, and is the author and maintainer of the BCH Full Node, Bitcoin Verde.

City of Dublin, OH is a municipality of approximately 50k residents that has made an investment into the adoption of blockchain technology. In turn, Dublin has built a blockchain-based digital identity management system utilizing BCH SLP tokens as a reward mechanism. In late 2019, Dublin’s identity management project moved into a beta-testing phase where Software Verde was tasked with creating digital IDs for city employees and rewarding them with tokens for their participation.

is a provider of Bitcoin Cash related financial services and the owner of the wallet, one of Bitcoin Cash’s most popular non-custodial, mobile-friendly wallets. Their website provides important services such as a cryptocurrency exchange, network-related news, and information that helps influence the growth and adoption of Bitcoin Cash and Bitcoin Cash related businesses.

CoinFLEX is a popular cryptocurrency exchange service as well as the providers of the first Bitcoin Cash futures and lending exchange. CoinFLEX is the primary distributor of Flex Coins, an SLP token used to pay dividends to their users. Their business provides several unique financial services that attract cryptocurrency investors to the network, as well as foster a culture of professional trading within the Bitcoin Cash community.


This proposal is low risk and stands to provide a high benefit to the network as a whole. In addition, this request changes relay policy only, not consensus rules, and therefore is not at risk of causing a chain split; however, uncoordinated changes to mempool rules by different full nodes would likely result in a degradation of 0-conf transaction security. For these reasons, it is requested that this change be implemented in a coordinated manner on May 15th.

The choice of this date is not significant in and of itself, although there seems to be no reason to deviate from the established upgrade schedule purely for the sake of it.


To the extent possible under law, the authors have waived all copyright and related or neighboring rights to this work under CC0.


Notice the strong similarity of this problem to the one described in Thinking about TX propagation - Technical - Bitcoin Cash Research, which states:

There are two sides here. Most of the stakeholders are on the side of the “services” that really like the “fire and forget” approach. What is not to like there! Right?

On the other side you have the infrastructure people running the network that would LOVE to provide this service, and I am the first to admit that the 50 tx limit is silly low and needs to be made more in line with realistic limits.

I suggest we compromise. The limits are raised but not removed, for the simple reason that nobody can promise that ‘fire-and-forget’ actually works. Memory isn’t unlimited, nodes don’t have infinite uptime, and other real-world problems stop the limit from being completely removed. So the question is, what limit is reasonable to the stakeholders? Is 1000 outputs as a limit good enough for now?

On the side of the services people, we need to make sure that they accept that no-limits is not a promise the network can make. They need to accept that “fire and forget” is only OK for non-critical solutions: the ones where (thumb in air) 99.9% of the transactions will get mined. For more critical solutions, the services need to re-broadcast transactions until they get mined.

Let us know what you think!


Bitcoin Unlimited’s release supports “unlimited” unconfirmed transaction chains, where “unlimited” actually means “limited by the maximum size of your mempool”. I like this approach because it’s one less error message that tx creators need to deal with, but at the same time I can see that raising the limit incrementally is the more conservative approach.

tldr; I think we should do something for May 15. I’d prefer unlimited, but am ok with a bump to 1000 or more.


I believe this is covered in the request section. This section requests to remove the limit completely and to provide better tooling so that non-protocol developers have the ability to take responsibility for their transactions. It is my opinion that an artificial limit kept just for the sake of “sending a message” to non-protocol devs is counter-productive (and could be perceived as arrogant). Let’s not set ourselves up for another limit discussion in the future when 1,000 becomes insufficient; instead, let’s just remove it and move on to better things.

If your counter argument to that is to make it 10,000, then let’s just make it 2,147,483,647.


I would like to see this limit completely removed.


I think I would prefer a two-step approach: schedule a raise to something like 5000 for May 2021, and the complete removal for May 2022.

I also think that the CPFP “feature” should be removed entirely - it doesn’t have a strong enough use case to motivate its existence unless you aim for a congested network as the default state - and Bitcoin Cash aims to support usage, not limit it.


I think I explained why it is; the reason is that there are limits in the software, hardware, and other setups. I don’t like the accusation of being arrogant. What I’m trying to do is not over-promise.

The simple fact of the matter is that unlimited is impossible. See Andrew’s post: mempool limits are a great example here. Should those go away too? The result would be full nodes crashing due to running out of memory (they did exactly that around 2015, before such limits existed).

Demanding the impossible repeatedly won’t change the facts. It will just cause people to lie to you and give you the impression it’s unlimited while it really isn’t.

The network cannot promise that 100% of the transactions sent to it will get mined in a fire-and-forget manner. This is impossible in centralized solutions, and doubly so in decentralized ones.

So, I repeat, what is a number that you and other stakeholders find acceptable?

ps. this is not a consensus rule; the idea that there will be “fights” in the future about this limit is not substantiated by anything I can see. Just like our block size, the limits will increase as the software and hardware limits increase. If some infrastructure software lags behind this demand, it will be outcompeted on the open market.

This is great to see! I have some technical and operational feedback on the details but that can come later. For now, from my perspective every CHIP requires an owner who commits to driving discussion and minimizing polarization.

Can someone in the list of authors commit to ownership, communication and sensitive handling of this proposal in a canonical location that can be pointed at and tracked?


Doing a bit of an inventory here, and hoping that stakeholders chip in with what is needed.


On the P2P network, a client can send a transaction to a full node (as SPV wallets may do), and it gets a “reject” message when the transaction fails for some reason.
Messages are standardized, at least in satoshi nodes, with texts like “bad-txns-fee-negative” or “too-long-mempool-chain” for our specific topic of today.


The main way to broadcast a transaction is the sendrawtransaction end-point. It replies with either the txid or the exact same error message as given in the P2P message.

ElectrumX servers (Fulcrum et al.).

The exact same behavior that RPC has is also seen here. Error messages with text and code are forwarded from the RPC as-is.

Bitcoin.com’s REST API.

On the “rawtransactions/sendRawTransaction” end-point the reply has an error field in the json which includes the same (but uppercased) version of the error messages we see full nodes generate. This would presumably include the “too-long-mempool-chain” error.

The CHIP requests better tooling, but with error messages being available at all levels of the APIs, I’m not entirely sure what would be needed. A service that generates a transaction which isn’t accepted can simply re-broadcast it after the next block has come in.
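One hedged sketch of the strategy described here, from the broadcasting service’s side; only “too-long-mempool-chain” and “bad-txns-fee-negative” are error codes quoted in this thread, and the rest of the classification is invented for illustration:

```python
# Hypothetical service-side sketch of the "re-broadcast after the next block"
# strategy. Only "too-long-mempool-chain" and "bad-txns-fee-negative" are
# error codes quoted in this thread; the others are invented for illustration.
RETRY_AFTER_BLOCK = {"too-long-mempool-chain"}       # transient: depth shrinks each block
PERMANENT = {"bad-txns-fee-negative", "tx-invalid"}  # "tx-invalid" is made up

def next_action(error_code):
    """Decide what a broadcasting service should do with a node's reply."""
    if error_code is None:
        return "accepted"
    if error_code in RETRY_AFTER_BLOCK:
        return "rebroadcast-after-next-block"
    if error_code in PERMANENT:
        return "abandon"
    return "manual-review"  # unknown rejection: flag for a human

print(next_action("too-long-mempool-chain"))  # rebroadcast-after-next-block
```

The key design point is distinguishing transient rejections (a deep chain shrinks once a block confirms some ancestors) from permanent ones, which is only possible if the node’s error codes are stable and machine-readable.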

Can stakeholders comment on this strategy? Is that good enough?


Points of order regarding OP:

  • CHIPs are intended to be concrete specifications, not vague feature requests.
  • CHIPs need to be made available under an open licence and include licensing info.

So this needs more work before this can become an actual CHIP.

See here for more info: CasH Improvement Proposal (CHIP)


Hi Tom. I hope you don’t think I was calling you arrogant–my intention was to describe the potential for animosity if node/protocol-developers begin viewing the other developers as people node-developers need to teach lessons to. I apologize if my original verbiage did not accurately communicate that well.

To my knowledge there isn’t a resource that becomes exhausted by a deep 0-conf chain if we remove CPFP. Since 0-conf in general is already bounded by the mempool limits, I don’t think we need to have the chaining limit defined explicitly – the current limit is redundant: it doesn’t solve any problems and only creates more, to no one’s benefit.

If there is a resource that is consumed by chaining, then I will change my personal opinion about unlimited vs limited, and I would advocate that change of opinion to the other stakeholders. Furthermore, if there is indeed a technical limit, then I would advocate for node developers to find a responsible value for that limit and suggest it here. That being said, I have seen no evidence of a need for such a limit, and I encourage you to bring some forward. Until that’s done, I think unlimited is the most responsible path forward.

I don’t see how the unconfirmed transaction chaining limit asserts that promise. I think the debate at hand is whether we artificially limit the tx chaining. If there is a technical reason for the limit, then that will persuade me (as I already addressed above).

I believe I speak for all associated stakeholders in this document that, unless there is a technical limitation otherwise, the limit should be removed.


I’d like to hear the reasoning for a two-step approach. Where does the number 5000 come from? If there’s a technical reason why 5000 is better/safer/etc. than 2147483647, then I think it would be great to have a discussion around that.

While I personally agree, I don’t want to distract from the discussion around unconfirmed transaction chaining. The way I see it is if the network agrees to remove the limit then it’s up to the implementations to decide if that means removing CPFP (if they even had it to begin with–not all do), or improving it so that a new implementation can support unlimited.

I think this proposal is a precursor to what you’re describing. It is my intention that the discussion will take place here, and in the process the CHIP will be updated and created. With this process, there will be documented history for why the proposal is what it is instead of all of that being hidden away. In the end, this is an experiment; if this process works then great–if it doesn’t then that’s okay. It’s the beauty of a permissionless environment.

This is a great callout and an accidental oversight on our part. I put the document in the public domain so that if another party disagrees with the recommended result they can make their own changes and propose an alternative. Finally, I have put the document under version control here: bitcoin-cash-chips/ at master · SoftwareVerde/bitcoin-cash-chips · GitHub


Absolutely agreed. Nobody is saying this, so we are good.

I don’t think we need to have the chaining limit defined explicitly

And they are not. They are not part of the consensus rules. Even today there are different implementations that have different limits and some have no limits. Nobody will suggest those that have no limits should implement limits. Again, this is not a consensus rule.

You are changing the topic and focusing on one parameter in a more complex system. The complex system is the one your anecdotes have an issue with, not one integer limit. I assume we are here to fix the issues you put on the table.

Let me get us back to the topic by quoting myself:


Firstly, this is great, thank you, Tom. I agree that (at least some) tooling exists. I’ll even go so far as to say that it’s perhaps the 3rd-party library developers that need to put in more work to make other developers’ lives easier, rather than node developers. I’m certain it’s a joint effort at least in some capacity, though.

I think the other points are very good, but this one definitely has room for improvement. The P2P error messages aren’t well-tied to the offending message. For instance, if a node transmits multiple transactions in succession, it will receive an error message, but that error message does not have (canonical) identifying information to know which transaction failed. Instead, the response looks something like: too many descendants for tx <txid> [limit: 25], which is “okay” for a human, but for a computer it should be its own field within the response. Otherwise the sender has to grep through the error message for any one of the transactions it’s sent recently, hope that there’s a match, and also hope that the particular node doesn’t do something slightly differently. In other words, this part is super brittle and can definitely be improved. I’ve personally been on the receiving end of accounting for these edge cases, and I know first-hand that the solutions aren’t straightforward.
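A sketch of the brittle parsing being described, assuming the quoted message format (another node may word it differently, which is exactly the problem):

```python
import re

# Sketch of the fragile txid extraction described above. The message format
# is assumed from the example quoted in this thread; a node that words it
# differently makes this parser silently fail: exactly the brittleness at issue.
REJECT_RE = re.compile(r"too many descendants for tx ([0-9a-f]{64}) \[limit: (\d+)\]")

def parse_reject(message):
    """Return (txid, limit) if the message matches the assumed format, else None."""
    m = REJECT_RE.search(message)
    if m is None:
        return None
    return m.group(1), int(m.group(2))

msg = "too many descendants for tx " + "ab" * 32 + " [limit: 25]"
txid, limit = parse_reject(msg)
print(limit)  # 25
```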

EDIT: After speaking with Tom offline, I’m convinced we have the ability to extract the offending txid reliably by looking at the extra field within the error message for P2P messages.

Are you sure? That is for the logfile (debug.log); the text I pasted is the one that goes to the sender.

I think this is the root of our miscommunication. This discussion is ONLY about the unconfirmed transaction chaining limit. I am not advocating BCH begins promising “fire and forget guaranteed”. The other stakeholders are also not making this claim as a part of this document. Please help me find how I can better articulate this distinction so that others do not also assume the same.

Does this clarification change your opinion regarding the unconfirmed chaining limit? I assume and hope that it will, but if not, then let’s please focus on the technical reasons why a limit should be imposed and what that limit should be, because at the current point in time I still don’t see anyone providing evidence supporting its continued existence.

I believe I am that person. I’m representing (to the best of my ability) the people that have endorsed this request. I am also taking care to point out when I’m speaking on behalf of others or when I am speaking to my own personal opinion. Although I’m sure there will be mistakes.


Your anecdotes specifically refer to a problem where you, Verde, did a fire-and-forget which failed, and you seem to conclude that the limit should be changed. The reality is that this is too simple.

The other stakeholders have had exactly the same kinds of stories, where they claim a single integer will change their product from non-working to working. Again, that is simply not true and too simple.

I think I fully agree that this is the core of the discussion. But the one that failed to communicate this is me. I apologize, and I will be more clear.

I want to make clear that solving the problems stakeholders have stated will take more than changing one number. Changing one number would be much like BSV changing to a huge block size. That change doesn’t accomplish scaling; it just does for a little while, until it doesn’t.

Removing the limit doesn’t make stakeholders’ usecases more reliable for very long either.

Edit: We can make this a simple technical statement, without any use cases, of a number that should change. Nothing more, nothing less.

As long as the CHIP is about solving use cases that assume fire-and-forget, you should include not just this magic number; you should actually solve the use case, not just for this year but also for next year, so it doesn’t come back to bite you or the other stakeholders.


This is unlimited for all intents and purposes, and if other nodes can achieve this as well, it’d be a great solution.