Lower the default relay fee, create a fee estimation algorithm

Thanks @ShadowOfHarbringer ! I also want it back, but if it turns out that it’s controversial, I don’t want a split over such matters. If people want it – I would love to see it return.

The code for it is dead simple and it fits in our current BCHN mempool scheme quite easily code-wise. It’s also easy to explain to wallet devs – it’s basically a simple formula.

I suspect they removed it because they desired a pure fee market with 0 free txns? That is my suspicion based on everything else they did in 2015 and 2016… to prep the chain to be a pure fee market on steroids.


They haven’t even cleared out their old backlog, it has definitely not been okay. As long as they remain in “free lunch” mode they’ll never be okay.

I would highly recommend that we don’t hijack default-fee-lowering coordination, a real problem that may legitimately affect usability in the future (though it’s debatable how near-term), with free/coindays-adjusted-free transactions, a sounds-good feature that businesses didn’t really miss and that is in any case tangential to the default-fee discussion. If needed, it can go into its own thread.


I am actually thinking this could be a better solution. @cculianu pointed out that adding things to the coinbase transaction wasn’t as trivial as I thought due to BTC dependencies for the miners.

If we could use two of the nVersion bits in the block header for voting, we could have this possible usage:
Every X blocks, accumulate all votes (-1, 0 or +1) cast during the last X blocks. If the sum is < -Y, adjust the fee by -1 sat/kB; if the sum is > Y, adjust it by +1 sat/kB, with an absolute maximum value of 1000 sat/kB.

If we would like to be able to reach 500 sat/kB in one year (roughly 52,560 blocks, and 500 downward steps from 1000), we would need X ≈ 105: a little over one adjustment per day, and no steep adjustments.

SPV wallets would need to download the block headers (they need to do that anyway) and perform some trivial summation. Checkpoints, as today, could be built into the software to avoid going back too far for newly created wallets.
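The tally described above can be sketched as follows. This is only an illustration of the proposal, not an implementation: the decoding of the two nVersion bits into a vote is assumed, and the threshold Y is an arbitrary placeholder (X = 105 is taken from the estimate above).

```python
# Sketch of the proposed miner-vote fee adjustment. Assumes each header's
# two voting bits have already been decoded to a vote in {-1, 0, +1}.
# WINDOW (X) and THRESHOLD (Y) are illustrative placeholders.

WINDOW = 105      # X: blocks per adjustment period
THRESHOLD = 35    # Y: net votes needed to trigger a one-step change
MAX_FEE = 1000    # absolute ceiling, sat/kB
MIN_FEE = 0       # floor, sat/kB

def adjust_fee(current_fee, votes):
    """votes: one integer in {-1, 0, +1} per header of the last WINDOW blocks."""
    assert len(votes) == WINDOW
    total = sum(votes)
    if total > THRESHOLD:
        current_fee = min(current_fee + 1, MAX_FEE)
    elif total < -THRESHOLD:
        current_fee = max(current_fee - 1, MIN_FEE)
    return current_fee
```

For example, a window of 105 blocks voting -1 while the fee sits at 1000 sat/kB would step it to 999; a fee already at the ceiling stays capped.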

There already is a stopgap that will work for the next year: keeping a default value of 1000 sat/kB in bitcoin.conf. I think the correct move is to discuss and plan a proper method of adjusting the fees, as we are doing right now.


Isn’t version-rolling potentially in conflict with ASICBoost? A coinbase scriptSig or OP_RETURN output is likely much safer as a signaling mechanism, in terms of not conflicting with mining operations.

One other thing about SPV wallets: they don’t actually have to do any calculation - it’s not consensus, just a “collective recommendation” from miners, so imo it’s safe enough to just ask the node it’s connected to. Unlike longest-chain and received-tx merkle verification, there’s no room for the node to fool the SPV wallet into accepting the wrong chain here, it’s just telling a nonconsensus policy parameter.

“What if the node lies, resulting in non-relay?” Well, nodes can already choose today not to relay a transaction from an SPV wallet, and there’s little the wallet can do other than go somewhere else, so…


There are no free lunches.

So we can bring back Coin Days Destroyed, but not in “free” mode. How much the coin days destroyed affect the paid fee is another matter.

There can be a simple algorithm that weighs CoinDaysDestroyed against the minimum fee and calculates the final effective fee from that.

This is not really that hard. Somebody has to pick a number. Arbitrarily (like 45% CoinDays, 55% minimum fee). This is a start. Over time it can be improved.
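One possible way to cash out that picked number is sketched below. The 45/55 split comes from the post; the saturation constant `CDD_SCALE` and the shape of the discount curve are invented purely for illustration, exactly the kind of arbitrary starting values the post says someone has to pick.

```python
# Illustrative weighting of coin-days-destroyed (CDD) against the minimum
# fee. W_CDD/W_MINFEE mirror the 45%/55% example; CDD_SCALE is a made-up
# normalization at which the CDD-weighted portion is fully discounted.

W_CDD = 0.45
W_MINFEE = 0.55
CDD_SCALE = 1_000_000  # coin-days at which the discount saturates (placeholder)

def effective_fee(min_fee_per_kb, coin_days_destroyed):
    # The CDD-weighted share of the fee shrinks as CDD grows, down to zero.
    discount = min(coin_days_destroyed / CDD_SCALE, 1.0)
    return min_fee_per_kb * (W_MINFEE + W_CDD * (1.0 - discount))
```

With these placeholders, a tx destroying no coin-days pays the full minimum fee, while one at or beyond the saturation point pays 55% of it.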

Of course, these would be the default settings. Miners will be able to adjust them as they please. Maybe even broadcast their settings using a mechanism like BMP (tried to call @Javier Gonzales, but he does not seem to be here).

Another idea I just had is giving CoinDaysDestroyed a very high fee-calculation weight when it comes to relaying transactions (so old coins never “get lost in transit” even with low fees paid) and a smaller weight when it comes to actual inclusion in a block.

I am sensing this could be a stupid idea for some reason so you are free to criticize me [as always].

CDD isn’t bad as a congestion control mechanism, that’s for sure, and it may be quite useful generally if we find ourselves in long stretches of congestion. It will likely not be very useful for wallets to determine reliable sending policies due to the complexity, but may be useful to make sure congestion does less damage to “legit” transactions.

Note that this is also why it’s essential in a proposed “free space”, since that space will likely be permanently congested until/unless nobody’s interested in the coin anymore.

BCH does, though, have as a very high-priority goal that we shall never have long stretches of congestion. With that in mind, the usefulness of CDD might be limited - not gonna stop anyone from implementing it (…as long as they also want to maintain it), but it doesn’t look like a priority to me either.

It’s not complex.

I think it just “feels” complex because we haven’t looked at it in 6 years or whatever. It’s a formula something like (pseudocode below):

priority = 0
for coin in tx.inputs:
    priority += coin.value * (current_height - coin.height)
priority_factor = priority / tx_byte_size
# priority_factor is then applied in some way to modify the fee requirement ...

Wallets always know their coins’ heights – or if they don’t, they can simply forgo this “optimization” and pay the standard 1 sat/B fee until someone writes code to remember the height…

If we do it right – either way nothing breaks. But people wanting to pay less or nothing during times of 0 congestion – can easily do so with a good wallet.

This is 1 idea.

It is indeed not complex for the node, but for the wallets, the main question is “what can I do to ensure my tx will send reliably?”

A 1 sat/B fee is simple: just calculate inputs minus outputs, divided by size, and make sure it’s above 1.

A fee floor voted by miners/whatever is also simple: fetch a number n from the node, calculate inputs minus outputs divided by size, and make sure it’s above n.

CDD though will involve additional logic: now the wallet will have to first calculate CDD - fetch how old the input UTXOs are, something not immediately apparent from just looking at the tx - then (in the simplest case I can imagine, feel free to suggest other possibilities) compare it against some “this is safe” CDD number supplied by the node. Wallets that sync all history (say, Electron Cash) will have better access to this than wallets that depend more on their full node’s indexer (say, Bitcore). There’s additional logic there involving new information already >.>
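The three wallet-side checks compared above can be put side by side like this. Everything here is a sketch of the discussion, not any node’s actual API: `n` stands for a node-supplied fee floor, and `safe_cdd_from_node` for the hypothetical “this is safe” CDD threshold mentioned in the simplest scheme.

```python
# Wallet-side "will my tx send reliably?" checks, in increasing complexity.
# `n` and `safe_cdd_from_node` are hypothetical node-supplied parameters.

def feerate(input_sats, output_sats, size_bytes):
    # Fee is what goes in minus what comes out, per byte.
    return (input_sats - output_sats) / size_bytes

def will_relay_flat(input_sats, output_sats, size_bytes, n=1.0):
    # Flat-floor case: today's 1 sat/B, or a voted floor n fetched from a node.
    return feerate(input_sats, output_sats, size_bytes) >= n

def will_relay_cdd(input_sats, output_sats, size_bytes,
                   tx_cdd, safe_cdd_from_node):
    # CDD case: either pay the full floor, or destroy enough coin-days.
    # tx_cdd requires knowing the age of every input UTXO first.
    return (will_relay_flat(input_sats, output_sats, size_bytes)
            or tx_cdd >= safe_cdd_from_node)
```

The extra argument `tx_cdd` is exactly the “new information” point: the flat checks need only the transaction itself, while the CDD check needs input ages the wallet may not have locally.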

Oh, this is very easy.

Whatever we, node builders, do here will get implemented in all major nodes (BCHN, BU, Knuth, BCHD etc.) and after that wallet authors will just copy/port the code.

This is a really simple matter of following. Wallet authors do not need to do anything, they just need to copy.

This is also pretty easily solvable.

  1. Wallets that do not have full history but instead rely on an indexer should calculate without CDD and pay the full fee (and also display a warning if the user wants to send with a lower fee)

  2. Wallets that see full history, like Electron Cash, can take advantage of the discount. Because CDD can be exactly this - a discount.

If we treat sending old/high-CDD coins for cheaper as a special discount, while keeping default BCH fees <$0.01 anyway as they already were, this will basically solve itself.

Best wallets whose authors care about their users will implement the discount by fetching the full history.

Some other wallets won’t. But there will be no loss, because BCH fees will remain low anyway.


I can’t speak for all wallets, but Electron Cash, which does SPV, has to know which block a coin was mined in as part of its verification process (you can’t verify a tx without knowing the block it was in, and having its header).

Electron Cash stores the height of the UTXO with the UTXO… so for Electron Cash it’s simple to get to that information.

And as far as fetching data from the node for the CDD “rate” goes – that can be as simple as the 1.0 sat/B rate is now. You start with that number, 1.0 sat/B – then you deduct your “coindays destroyed discount” (which may be 0 in some cases) – and voilà → your new “optimized” fee.

Note that wallets can opt out of this and not optimize the fee – and just pay 1 sat/B. This is only for wallets wanting to take advantage of the discount. This is how it worked in old Electrum back in the day too…
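The “start at 1.0 sat/B and subtract a discount” arithmetic is just this; note the discount curve and its saturation point below are invented for illustration, since the thread has not settled how CDD maps to a discount.

```python
# "Base rate minus CDD discount" fee calculation. BASE_RATE matches the
# current default; the discount curve (linear, saturating at
# cdd_per_full_discount coin-days) is a made-up placeholder.

BASE_RATE = 1.0  # sat/B, today's default relay fee rate

def discounted_fee_rate(coin_days_destroyed, cdd_per_full_discount=100_000):
    # Discount grows with coin-days destroyed, capped so the rate
    # never goes below zero (i.e. at most a full discount).
    discount = min(coin_days_destroyed / cdd_per_full_discount, 1.0) * BASE_RATE
    return BASE_RATE - discount

def tx_fee(tx_size_bytes, coin_days_destroyed):
    # A wallet opting out simply passes coin_days_destroyed = 0
    # and pays the plain 1 sat/B fee.
    return tx_size_bytes * discounted_fee_rate(coin_days_destroyed)
```

A wallet that doesn’t track input heights pays `tx_fee(size, 0)`, which is exactly today’s 1 sat/B behaviour, so opting out stays trivial.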

I don’t see it as more complex than the other proposals. Is it more complex than simple 1 sat/B? Slightly yeah because now you have to calculate 2 things and deduct one from the other… But it’s simpler than some of the other proposals in this thread, I would argue.

Edit: @ShadowOfHarbringer yeah basically everything you just said. :slight_smile:

There’s a BIP that defines which versionbits are open to grinding by version rolling.

If we stay out of the way of those, we should be fine.

Of course, it’s something to double check.


The basic concept of fee-setting is not a consensus rule today. On the other hand, there is really only one “lever” for the network to adjust, the min-relay-fee, and our fear for zero-conf safety makes us state pretty loudly that miners should not actually touch it.

This is an undesirable situation; nobody really wants developers to have the final say here. And should the price reach $10k, the fees are going to make certain groups of people simply unable to participate.
At some point miners will take action, and it’s our job to make sure they have tools to take action that is not destructive to the network.

A lot of questions came up in this discussion, both here and on other channels. I’ve collected various ones and written up the answers as I’ve found them.


Q: Why is usecase X made more expensive and usecase Y made cheaper? Why are you deciding that?

So, this was in relation to my proposal that we turn our one lever (min-relay-fee) into a couple of levers, where a transaction is given a priority based on several properties: 1) ratio of inputs vs outputs, 2) coin-days-destroyed and 3) actual fee paid (per byte).

To put a finger on this: a transaction that has one input and 400 outputs would be made really, really low priority. And in many cases this is a transaction that is not good for the network - for instance, dusting transactions.

I would personally like it if miners would take really low-priority transactions and simply reject them (not mine them) if they pay a low fee as well.
Now, I’m not the market, but obviously such examples will make people ask why I suggest a valid usecase should be made more expensive.

A fair question, which is basically sidestepped by the fact that any solution where we measure more variables and give more options to the miner should come with sane defaults that are easy to change. Maybe the ecosystem should even require them to be set to sane values.
This is not consensus, so iterating on options and tweaking values is possible every day to get things right.

The bottom line is that someone needs to decide what to down-prioritize and make more expensive. The network doesn’t scale infinitely. And in my opinion it is our job to provide levers so the market can actually determine the limits it is willing to accept.

Q: How can we make sure that transactions still stay zero-conf safe?

This question is based on the fact that we have scared the ecosystem into this idea that if we do not have the same exact mempool policies network-wide, zero conf becomes unsafe and double spending is going to get out of hand.

Naturally, this has a core of truth. But the basic issue here is one of tooling. It is also one of perception.

The general perception for many is that they can treat “the network” like a central server. They can take the answer from the nearest node (accepted, rejected etc) and assume that the rest of the network agrees. And that once a transaction is delivered to one or two nodes, it will get mined.

This is an illusion that is very convenient but it can’t possibly be true for a decentralized system. There are 1000s of node operators that are able to tweak their properties and there are serious people out there that will spin up 10000 nodes if that will postpone the rise of BCH.

So, if we have to let go of the notion that we CAN keep all mempool policies the same, how then can we guarantee zero-conf safety? Here is a list of things that each will help tremendously:

  • Wallets need to keep ownership of transactions till they get mined. For this we need better communication and a wallet should be able to find out the mempool status of its tx.
  • Mempools should move from keeping a transaction for weeks to keeping it only for 4-6 hours. This helps a lot with mempool pressure.
  • Mempools should keep many more transactions (as many as possible, really) which may not actually get mined in a block. This also solves the problem of mining bigger blocks (orphan risk) as the receivers already have the transactions and won’t have to download&validate them.
  • Wallets should innovate on payment protocols. One super important item here is that a customer should be able to send a transaction to the merchant, and the merchant should be able to say “No, that does not have enough fee or priority, please fix and try again”. This avoids the current silly situation where the first time a merchant sees a transaction is when it has already been sent to a miner.
  • Double spend proofs should be rolled out in order to give notice to merchants.

Q: How do we avoid being flooded with zero-fee transactions filling up the mempool?

Re-instate the feature that used to be on the Satoshi client where low-priority
transactions are rate-limited for broadcast and mempool-entry.
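That old Satoshi-client mechanism was, roughly, an exponentially decaying counter of recently accepted free bytes. The sketch below is loosely modeled on that idea; the limit, decay constant, and class/method names are illustrative, not the historical values or code.

```python
import math
import time

# Loose sketch of a Satoshi-style free-transaction rate limiter: a counter
# of recently accepted free bytes that decays exponentially over time.
# Constants are illustrative placeholders, not the original values.

class FreeTxRateLimiter:
    def __init__(self, limit_bytes=10_000, decay_seconds=600,
                 now=time.monotonic):
        self.limit = limit_bytes      # budget of recent free bytes
        self.decay = decay_seconds    # time constant of the decay
        self.count = 0.0              # decayed count of accepted free bytes
        self.now = now                # injectable clock, for testing
        self.last = now()

    def allow(self, tx_size_bytes):
        t = self.now()
        # Decay the counter by the time elapsed since the last decision.
        self.count *= math.exp(-(t - self.last) / self.decay)
        self.last = t
        if self.count + tx_size_bytes > self.limit:
            return False  # too many free bytes recently: defer this tx
        self.count += tx_size_bytes
        return True
```

Because the counter decays rather than resetting, bursts of free transactions are smoothed out instead of being admitted in window-aligned clumps.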

Q: How can merchants be sure the receiving TX will get mined if it’s zero-fee?

The simple answer is that they can’t be sure, but that doesn’t mean we should somehow forbid people from sending any at all.

The problem is local to a certain group of people. Merchants accepting zero-conf. And those merchants will anyway need more advanced tools to protect themselves and lower their risk.

Most specific here is that a transaction that is too low fee should simply be rejected before it is sent to the network and the sender should correct or be prepared to wait for confirmations.

This requires better tooling on the (merchant’s) wallet side.

Q: How can a wallet know what fee is a good fee to get mined?

Many ideas have been going round, and I’d expect a lot of fear can be felt if we were to look at the BTC situation, where fee calculation is simply a guessing game.
What we should thus make clear is that we need to keep the network as a whole healthy and operating well below its limit. Having many full blocks would really make this an impossible problem to solve.

Without always-full blocks, the algorithm becomes rather simple, and any observer that has the full chain can determine the effective min fee, which can be relayed via some API.

Simply put, when using zero-conf your merchant will likely force you to use the minimum fee, regardless of how aged your coins are.

When you send something that may take 10 blocks to get mined, then you may try a lower fee but higher priority transaction.

Q: How can a full node actually implement this? (aka is there a spec?)

First of all, ideally a lot of work is going to be done on the wallet side as well. But, yeah, a full node:

  • mempool entries should have a priority added.
    The priority is based on input/output ratio, days-destroyed and fee/byte.
    User settings are used to determine the actual priority as some number.

  • mining gets some more properties to determine when to include transactions based on priority (much like the current min-mining-fee).
    One other property it can use is the existing timestamp when a tx was entered into the mempool.
    User-settable properties (levers for the miner to adjust) can include the ability to delay transaction inclusion for a certain time based on low priority. They also include the ability to include a certain number of kilobytes of high-priority, low-fee transactions.
    What properties are important to expose should definitely get more research.

  • low-priority transactions are relayed with a rate-limiter (as was done in the Satoshi code). We might need to adjust the original to the block size and expected tx/sec on the network. Take something like EB (-blocksizeacceptlimit).

  • mempool expiry will go down to maybe 6 hours, after which people can re-send their transaction if it never got included - including with a changed fee.
    Tx priority may change this time, so really low-priority transactions get removed faster (say, 3 hours or 20 blocks).
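Putting the three axes from the first bullet together, a mempool entry’s priority could be a weighted score like the following. Every weight and normalization here is a placeholder for the user-settable levers described above, not a proposed default.

```python
# Illustrative priority score over the three named axes: input/output
# ratio, coin-days-destroyed, and fee per byte. Weights and normalization
# constants are placeholders a node operator would tune.

def priority(num_inputs, num_outputs, coin_days_destroyed,
             fee_sats, tx_size_bytes,
             w_ratio=1.0, w_cdd=1.0, w_feerate=1.0):
    # Many outputs per input (e.g. a dusting tx) drags this ratio toward 0.
    io_ratio = num_inputs / max(num_outputs, 1)
    feerate = fee_sats / tx_size_bytes  # sat/B
    # Each component is clamped to [0, 1] before weighting, so no single
    # axis can dominate regardless of its raw magnitude.
    return (w_ratio * min(io_ratio, 1.0)
            + w_cdd * min(coin_days_destroyed / 1_000_000, 1.0)
            + w_feerate * min(feerate, 1.0))
```

Under this toy scoring, the 1-input/400-output zero-fee dusting transaction from the example earlier scores near zero, while an ordinary 2-in/2-out 1 sat/B payment scores much higher, which is the ordering the proposal wants.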


I don’t have much more to add right now, other than that I enjoy Mr Zander’s thinking, in that he questions our premises (tx ownership, payment protocols, …)

I am generally unsure that we should allow 0-fee transactions. A 1 sat/kB fee, or even a 1 sat/tx fee, should generally be ok for anyone.


It should be easy for EC, and I did mention that :smiley: Not all wallets work that way though; most rely on some form of trusted server. Last time I checked, Coinbase wallet doesn’t even show history in the wallet (likely for compatibility with operating on ETH in a very light way). I wonder how they’ll cope with that.

But more to the point, the problem isn’t whether you can implement the new logic, but rather whether they will. We do live in a world where most wallets add BCH as an afterthought, convincing them to add new logic specifically for BCH is going to be quite an uphill battle.

If the new logic affects the relayability of existing, simple 1 sat/B transactions in any way, existing multicoin wallets may become hostile to us. That would likely overwhelm any possible benefits of this pre-emptive fee-lowering effort.

If, as you said, the new logic strictly only discounts from 1sat/B with CDD (and do not deprioritize or make unrelayable in any way), these wallets don’t have to do anything, and that’s probably better. Adoption of the new logic may be very lacking for a long time though, people who are advocating for this policy may want to think twice whether that’s a good outcome.


This reply relates as much to your statement above as to @tom’s most recent post with the detailed thoughts on his proposal and the questions and answers.

I very much agree that this relies hugely on the “will they actually implement it” part, including when considering a new common mempool eviction policy.

Even though there are only a handful of full nodes right now, we can’t be sure they will all agree that such a new eviction policy is what they wish to make their default. And even if they do, it is our users (node operators) who have the final say; if they have that knob to turn, they may decide to run with different parameters. That could spoil the result and all the good intentions.

Maybe persuasion that a common eviction policy is needed would work, maybe not. If it turns out it doesn’t, then the switch to a more complicated “more knobs to adjust priorities” policy would be harmful.

My final thoughts here (this has been said elsewhere too):

To keep on topic, we should separate out the concerns / requests about:

  • free transactions
  • additional CDD priority mechanisms

And keep this thread purely about lowering the default minfee (relay+mining) in a way that sticks to the current model of a unified network fee.

This is something much simpler for nodes and wallets to implement + understand, can likely be reasonably maintained for the foreseeable future, doesn’t impact current use cases (unless it devolves to the state of a non-uniform policy) and keeps us focused in this topic.

The testing effort on such a simpler adjustment scheme is probably an order (or orders) of magnitude less than schemes where we introduce much more complexity through several new tweakable parameters. I think that really matters if the objective is to have something workable within reasonable time, given that the expectation that underpins this thread is that we need a solution for the high-fees problem that can occur unpredictably and perhaps relatively soon.

And to be fair to the originator of this thread, I suggest that we also move proposals that involve miner voting instead of a “fee estimation algorithm” out of here, since they are different things.

And another thing: I have this nagging doubt that if we were to make mempool admission rate-limited by a more complex calculation, we would create ourselves a new obstacle to scaling, because now we need to put a rate limiter in front of the mempool. I’m not yet sure it’s needed; the major economic disincentive against spam would be paying some fee, so a simple “did this transaction pay my required minimum fee?” check seems all that is needed as the major spam deterrent. Maybe I’m wrong, maybe such rate limiting can work quite well in parallel too. I suppose it ought to be workable. But it requires some architectural change to BCHN at least; I don’t know about others.


Side note: I like that nearly everyone in this thread strongly agrees that CDD needs to come back.

Their removal never made any sense, it was just a Core idiocy.


I am going to open a separate thread for CDD prioritization discussion.

That’s an opinion mixed with what looks like a genetic fallacy (“because it came from Core, it must be devoid of sense”).

Here is where I think we need to be more specific on actual usefulness thereof (hence a new thread).

nearly everyone in this thread strongly agrees that CDD needs to come back

I’m not sure that’s true, it may be more an impression you got?
To check, you can enumerate who you think “strongly agrees” in this thread.

Count me out, since I’m not convinced.

Actually I meant it was devoid of sense regardless, not because it came from Core.

I have been here for a very long time, and I remember Coin Days Destroyed working splendidly in 2015 and before. I did not even know it had been removed before this discussion.

The removal of CDD was dumb, regardless of whether Core did it or somebody else did. But surely it all comes together and makes exponentially more sense when you add that Core wanted to destroy on-chain transactions and this was their hidden goal all along.

Well, so far I think that Tom Zander, you, Calin, mtrycz and somebody else agreed that CDD needs to come back.

Maybe I overexaggerated a little with “strongly” agreeing (I do that for more dramatic effect, like filmmakers add CGI effects to their movies, because otherwise long discussions get boring), but generally yes, we almost universally agreed that CDD needs to come back in some form.