Rolling minimum fee mempool policy with decay

Back in 2015, Bitcoin Core introduced (in PR6722) the concept of a “rolling minimum fee” policy, where, in the case of a full mempool, the minimum acceptance fee would be set to the fee of the transaction that was evicted.

However, this fee floor would not be reset on finding a new block, but would instead decay over time. This appears to be a DoS measure. morcos has a good comment on the logic behind it and did the math behind this policy in this comment.

Future transactions should be obligated to pay for the cost of transactions that were evicted (and their own relay fee); otherwise a large package of transactions could be evicted by a small tx with a slightly higher fee rate. This could happen repeatedly for a bandwidth attack.

It appears this code is still present in the codebase of BCHN.
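For readers not familiar with that code, here is a much-simplified sketch of how I understand the mechanism; the names, the 1000 sat/kB increment and the 12-hour halflife are illustrative, not a copy of the Core/BCHN implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Much-simplified sketch of the "rolling minimum fee" idea from PR6722.
// Names and constants are illustrative, not the actual Core/BCHN code.
struct RollingFeeFloor {
    static constexpr double INCREMENTAL_RELAY_FEE = 1000.0;  // sat/kB
    static constexpr double HALFLIFE_SECONDS = 12 * 60 * 60;  // decay halflife

    double rollingMinFee = 0.0;  // current floor in sat/kB
    int64_t lastUpdate = 0;      // time of the last decay step

    // Called when eviction happens on a full mempool: future transactions
    // must beat the evicted feerate plus the incremental relay fee.
    void OnEviction(double evictedFeeRate, int64_t now) {
        rollingMinFee = std::max(rollingMinFee, evictedFeeRate + INCREMENTAL_RELAY_FEE);
        lastUpdate = now;
    }

    // Not reset when a block is found; the floor only decays with time
    // (halving every HALFLIFE_SECONDS) until it becomes negligible.
    double GetMinFee(int64_t now) {
        rollingMinFee /= std::pow(2.0, double(now - lastUpdate) / HALFLIFE_SECONDS);
        lastUpdate = now;
        if (rollingMinFee < INCREMENTAL_RELAY_FEE / 2.0) rollingMinFee = 0.0;
        return rollingMinFee;
    }
};
```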

Possible problem:

This is not a big issue on BTC, where transactions are expected to be unreliable. However, on BCH, it is assumed that low-fee transactions will be accepted within a reasonable time. I believe both popular wallets and smart contracts (especially covenant contracts) hard-code fees in their clients because of this assumption. This decay is not well known and will probably cause surprises if it takes effect.

I speculate it is possible to flood the mempool to bump the fee floor, and that, while waiting for the decay to take effect:

  • Users will be confused to see their transactions rejected, even though the mempool is not full.
  • It will be more plausible to do 0-conf double-spends, due to other nodes not having this decay.

Is this a problem? How would we mitigate the DoS vector described by morcos if we were to remove the decay?


I ran into this code when rewriting the TrimToSize() mempool eviction function to use my WCFeeRate heuristic instead of the <descendant_score> sort method in !832.

One of the things that bothers me about the current mempool eviction and fee floor strategy is that it bumps the fee floor by 1 sat/byte (or, in the code, 1000 sat/kB) each time the mempool fills up. This, combined with the delayed decay, could mean that a node that accepted 0.5 sat/byte transactions while miners only accept 1.0 sat/byte could easily be made to bump that floor up to 1.5 sat/byte, at which point the node would stop receiving transactions for quite a while. If anything, I think that this should be a multiplicative increase (e.g. increase by 5%), not an additive one.

We should also consider that the BTC mempool capacity defaults to (IIRC) 300 blocks, whereas the BCH mempool size defaults to slightly less than 10 blocks.

I think my preference would be to change this to increase multiplicatively by 5% each time the mempool fills, and decrease by 5% each time a block is found.
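For illustration only, that proposal could look something like this; the structure and names are hypothetical, only the 5% step and the 1 sat/byte default come from this post:

```cpp
#include <algorithm>

// Sketch of the multiplicative alternative suggested above (illustrative
// only). Units are sat/kB, so 1000 == 1 sat/byte.
struct MultiplicativeFeeFloor {
    static constexpr double STEP = 1.05;             // 5% per event
    static constexpr double DEFAULT_FLOOR = 1000.0;  // the 1 sat/byte default

    double feeFloor = DEFAULT_FLOOR;

    // Each time the mempool fills up, raise the floor by 5% instead of
    // adding a flat 1000 sat/kB.
    void OnMempoolFull() { feeFloor *= STEP; }

    // Each time a block is found, decay the floor by 5%, but never go
    // below the configured default.
    void OnNewBlock() { feeFloor = std::max(DEFAULT_FLOOR, feeFloor / STEP); }
};
```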

Also noteworthy: my TrimToSize() WCFeeRate MR (!832) changes the eviction behavior slightly, and in a fashion that makes the bandwidth attack that Morcos mentions impossible or ineffective. In Morcos’s bandwidth attack, the min fee bump from a single 200 byte tx causes the eviction of a 25 tx * 100 kB = 2.5 MB package, which allows the coins to be re-spent at a marginally higher fee for another 2.5 MB of bandwidth used. This is possible because the current (legacy) TrimToSize() code chooses to sort transactions by the feerate of the tx-and-descendants (i.e. parents-paid-for-by-children); and if a root tx has a low feerate, but its children are higher, that means that the rest of the package can be safe until the root tx needs eviction, at which point the whole thing goes at once.

But with the WCFeeRate version of TrimToSize(), only childless transactions are considered for eviction, and they’re considered based on the worst-case estimated feerate of the tx-and-ancestors (i.e. child-after-having-paid-for-parents). This results in purely incremental eviction; a single 200-byte transaction can only trigger the eviction of 200 bytes of other transactions (rounded up), or at most 100 kB given standardness rules. If the root tx has a low feerate, then that lowers all of its descendants’ WCFeeRate scores, causing a descendant to be the first to be pruned instead of the root.
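I haven’t reproduced the actual !832 code here, but the incremental-eviction idea can be sketched roughly as follows; all types and names are placeholders:

```cpp
#include <cstdint>
#include <vector>

// Rough sketch of the incremental-eviction idea: only childless ("leaf")
// transactions are eviction candidates, scored by the worst-case feerate
// of the tx plus its unconfirmed ancestors. Types and names are
// placeholders, not the actual !832 code.
struct MempoolEntry {
    int64_t size = 0;          // bytes
    int64_t ancestorFee = 0;   // fee of this tx plus all unconfirmed ancestors (sat)
    int64_t ancestorSize = 0;  // size of this tx plus all unconfirmed ancestors (bytes)
    bool hasChildren = false;
};

// A cheap root drags down the score of every descendant, so a descendant
// (leaf) gets pruned before the root does.
static double WCFeeRate(const MempoolEntry &e) {
    return double(e.ancestorFee) / double(e.ancestorSize);
}

// Evict leaves, worst score first, until `bytesNeeded` has been freed.
// A 200-byte incoming tx therefore only forces out roughly 200 bytes of
// other transactions, not a whole multi-megabyte package.
void TrimIncrementally(std::vector<MempoolEntry> &pool, int64_t bytesNeeded) {
    while (bytesNeeded > 0) {
        auto worst = pool.end();
        for (auto it = pool.begin(); it != pool.end(); ++it) {
            if (it->hasChildren) continue;  // only childless txs are candidates
            if (worst == pool.end() || WCFeeRate(*it) < WCFeeRate(*worst))
                worst = it;
        }
        if (worst == pool.end()) break;  // nothing evictable
        bytesNeeded -= worst->size;
        pool.erase(worst);  // (real code would also update the parents' child flags)
    }
}
```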

To solve this I think it is very useful to take a step back and consider some observations:

  • Software that generates a transaction is expected to re-submit that transaction regularly in case it didn’t get seen or got ejected from a mempool.
  • The design of using fees in the mempool is based on the idea of a fee market arising from a limited block size. This has been known to be incorrect since about 2015. We do want a fee market, and Peter Rizun’s paper “A Transaction Fee Market Exists Without a Block Size Limit” proves we can have one without limiting block size.
  • In Bitcoin Cash we actively sponsor (keep fees artificially low) transactions because of two (main) reasons:
    1. We want more transactions, making the price very low helps.
    2. Our mempool code doesn’t follow the economic model we actually want.
  • A fee market is not going to be based purely on fees. Popular ideas are days-destroyed etc. Additionally we want free transactions back.

The bottom line here is that a new mempool design could be made today, but I just don’t think it makes sense to have a mempool be anything more than a collection of transactions. Regardless of fee, regardless of priority. Removing transactions when it’s full is fine, but I’d suggest doing this based on age, not fee.

The longer term solution would include something like this:

  • a separation between a mempool for validation (receiving a block) and a mempool for mining [more].
  • Make mining prioritize (select transactions to include) based on a combination of factors, fee being only one of them.
  • Make a validation mempool have as its primary task the collection of (all valid) transactions in order to make block-transfer protocols work better.
  • Make this validation mempool expunge transactions after 6 hours instead of 72.
  • Fill the mining mempool from the validation one; the mining mempool can reject “bad” transactions that the miner does not want to include.
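To make the separation a bit more concrete, here is a very rough interface sketch; every name is hypothetical and the real design would need far more than this:

```cpp
#include <chrono>
#include <vector>

// Very rough interface sketch of the validation/mining split described
// above; every name here is hypothetical.
struct Tx { /* raw transaction plus metadata */ };

// Validation mempool: its primary task is collecting (all valid)
// transactions so block-transfer protocols work well; entries expire on
// age (e.g. 6 hours) rather than on fee.
class ValidationMempool {
public:
    void Accept(const Tx &tx);                     // validity checks only, no fee policy
    void ExpireOlderThan(std::chrono::hours age);  // e.g. 6h instead of 72h
    std::vector<Tx> Snapshot() const;
};

// Mining mempool: filled from the validation pool, and free to reject
// "bad" transactions; selection weighs several factors, fee being only
// one of them.
class MiningMempool {
public:
    void FillFrom(const ValidationMempool &source);
    std::vector<Tx> SelectForBlockTemplate() const;
};
```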

If the mempool fills up (and it will fill up, as memory is a finite resource), you have two options when receiving a transaction:

  1. Drop it.
  2. Evict another transaction to make room.

So, I think there is no way around the fact that if the mempool fills up, then the user experience is degraded.

The choices to A) evict low-feerate transactions and B) raise the fee floor are consistent, and at the very least they provide a natural cost increase for any DoS, and a way for normal people to get their transactions past the DoS. How the fee floor decays is an interesting question, but I don’t think it changes anything fundamental.

We have at least gotten rid of one of the biggest problems here: the badly designed BCH SigOps limiting rule, which had inadvertently made it much cheaper to sustain mempool-filling DoSes.


It’s also worth mentioning a few things:

  • Mempool size is a config parameter and probably ought to be increased at some point.
  • In BCHN at least, the calculation of the size of the mempool is not based on adding up raw tx sizes; rather it is a more accurate (though still imperfect) estimate of how much memory is consumed by having the tx in memory, including all overhead in data structures (a rough illustration follows this list). These data structure sizes can change between software releases, differ between architectures, and will of course vary between node implementations.
  • Based on the above point, different nodes will start evicting at different times, so they will all have different feerate floors in the case of a mempool flooding event.
  • Due to the above, and also inevitable race conditions, even after a mempool flooding event has finished and everyone’s fee floor is reset back to 1 sat/byte, it can easily be the case that some transactions only exist on certain nodes. If you only ask one node, you have no way of knowing how widespread a tx might be; it may never be mined (and resending the tx to the same node will do nothing of course).
  • Any tendency to evict means that unconfirmed transactions can be threatened even when the mempool is quiet, since the mempool flooding can happen after the tx is broadcast. Not evicting means that flooding can be a hard blockade.
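As a rough illustration of the second bullet above, the per-transaction accounting is conceptually something like the following; the overhead numbers are made up to show the idea, not taken from any implementation:

```cpp
#include <cstddef>

// Illustrative only: how a memory-usage estimate differs from the raw
// serialized size. The overhead constants here are made up; a real node
// derives them from its actual data structures, which is why they differ
// between releases, architectures and implementations.
struct MempoolMemUsage {
    static constexpr size_t ENTRY_OVERHEAD = 272;  // hypothetical per-entry struct cost
    static constexpr size_t INDEX_OVERHEAD = 128;  // hypothetical index/map node cost

    // What counts against the configured mempool size budget per transaction.
    static size_t EstimateFor(size_t rawTxSize) {
        return rawTxSize + ENTRY_OVERHEAD + INDEX_OVERHEAD;
    }
};
```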

But as a mildly comforting note, I would point out that mempool flooding events are rare. BTC mempools did fill up in mid and late 2017, as can be seen by the loss of 1 sat/byte bands around that time on johoe’s explorer.

Exactly, thanks for explaining this in steps. This is why it is relevant to have mempools drop “old” transactions faster (as I mentioned above). Instead of 72 hours I moved Flowee the Hub to 6h. It can possibly be shortened even further, but I’m hesitant to do so without actual empirical data.

It is, and remains, the responsibility of the receiver of the transaction (typically the merchant) to rebroadcast it at intervals until it gets mined.

I’m fondly remembering the time when some people flooded the BCH network and then some miners mined 8 MB blocks, making the problem go completely away in 2 or 3 blocks.
The point being that such flooding becomes rarer as the gap between usage and max capacity increases, because the cost will simply be too high. Even without a fee floor increase.

  3. Dump the tx to disk. (And store the txid in a bloom filter so that we know the tx is there.) We’d need to be careful with this to avoid DoS vulnerability, though. Disk space is pricier than network throughput, although disk throughput is cheaper than network.

BTC’s mempool is about 150x the size of its base blocksize limit. BCH’s mempool is about 5x the size of its base blocksize limit. Flooding on BCH might be more feasible than you’d think.

This raises the question why you would prefer to keep the tx over just tossing it. Remember, the originator of the tx (typically the merchant that receives it) is going to be able to re-send it regularly.

Since, IMO, first-seen is important to keep, we need to make sure that people have the flexibility to double-spend their own transaction after N hours should the network dislike it for whatever reason. Keeping it forever in some mempool (on disk or not) stops this.

So, why would you prefer to keep it over tossing it and allowing the originator of the tx to get his wallet to re-submit it for another stay in the mempool?

Dumping to disk sounds tricky, insofar as you want to keep the tx linkages (spent TXOs, and new TXOs) in memory still. I guess that you just need multiple bloom filters.

I guess you might actually want cuckoo filters since element removal is an important property.
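To illustrate the interface such a scheme needs (insert on dump, probabilistic lookup, and removal once the tx is mined or expired), here is a toy counting-filter stand-in rather than a real cuckoo filter; the hashing and sizing are arbitrary:

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Toy counting filter, just to show the insert/contains/remove interface
// such a scheme would need. A real implementation would likely use a
// cuckoo filter; the hashing and sizing here are arbitrary.
class DumpedTxidFilter {
    static constexpr size_t BUCKETS = 1 << 20;
    static constexpr int HASHES = 4;
    std::vector<uint16_t> counters = std::vector<uint16_t>(BUCKETS, 0);

    size_t BucketFor(const std::string &txid, int i) const {
        // Derive several bucket indices from one txid (toy construction).
        return std::hash<std::string>{}(txid + char('A' + i)) % BUCKETS;
    }

public:
    // Record a txid when its transaction is written out to disk.
    void Insert(const std::string &txid) {
        for (int i = 0; i < HASHES; ++i) ++counters[BucketFor(txid, i)];
    }

    // "Probably dumped to disk" check before touching storage; false
    // positives are possible, false negatives are not.
    bool MaybeContains(const std::string &txid) const {
        for (int i = 0; i < HASHES; ++i)
            if (counters[BucketFor(txid, i)] == 0) return false;
        return true;
    }

    // Forget a txid once the tx is mined, expired or re-admitted to the
    // in-memory pool; this is the deletion support plain Bloom filters lack.
    void Remove(const std::string &txid) {
        for (int i = 0; i < HASHES; ++i) {
            auto &c = counters[BucketFor(txid, i)];
            if (c > 0) --c;
        }
    }
};
```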

Anyway, it’s interesting to note the mempool flooding on ABC right now. Obviously exacerbated by lack of people mining blocks, but informative nonetheless. (And note that practically, ‘full mempool’ on ABC is only ~90-100 MB of tx data, for the txes being used. The 300 MB limit is, as I mentioned, including data structure overheads.)

Perhaps, but I would love it if we had mempool set reconciliation that works to heal any source of mempool mismatch.

(Here is an article describing the real-life full-mempool problem and mempool inconsistencies, on BTC: https://b10c.me/blog/001-the-300mb-default-maxmempool-problem/)

Time to get this going again!


Bumping this as it’s been brought up recently.

My 2c: wallets should get smarter about their fees.

I think too many people default to the minimum fee and have gotten used to having the TX mined in the next block. This is only possible because our network is underutilized and the mempool is mostly empty. This is not something the network can guarantee, as it would require infinite capacity.

we can’t expect all users to think about fees too much… one day someone will spend 1 BCH to blast 100MB worth of TXs, and a bunch of users who defaulted to 1sat / byte will be like “bch is sloooow reeee, my tx had to wait for 5 blocks!!”

and then some users/wallets may be like “lol, that’s what you get for not thinking ahead, OUR users were not affected”


I have a dream… Ok, recurring dream. Hmm, checking. Yeah, the earliest blogpost I found is Sept 2017, where I introduced these ideas. But they have been refined ever since. The way-too-complicated post nr 3 above shows some progress, but not much eloquence in explaining it.

The ideas are themselves not exactly complex, but they are quite different and thus they need a lot of supporting descriptions. So I sat down when this came up on Telegram again and started writing that. I’m sure I missed some concepts, please feel free to ask so I can improve it.

Again, the concepts are not especially hard; they are just different from what Bitcoin Core introduced. I tried to learn from their mistakes and do it better. They kept adding on, and it has become a mess with RBF and more. Instead, I think taking a step back and looking at this slightly differently will help.

I feel that these are the ideas that will help our child leave the nest and start living on their own.


Love the writeup. Quick thought, though. Why would wallets getting smarter themselves be an issue? As long as they put the min fee necessary to be in the next block, they should be mostly fine.

On the contrary: with an algorithm to score/weight tx priority, we have little way of predicting future types of transactions, so we don’t know for certain whether they might need to be weighted differently in the future. Also, now with CashTokens, this gets significantly more complicated. Imagine a financial institution settling stocks built on CashTokens. They will pay a higher fee to guarantee they are in the next block regardless. But they might engage in some really weird transaction format, or they could be swapping the same CTs 1,000 times in a day. And that could all be very valid economic activity.

Scoring transactions, I think, might be taking it too far and would introduce another potential complication. Maybe one could put together a filter that looks at recursive transactions on a P2SH (similar to what I did in tests), but if we filter for economic or anti-attack purposes, people will always refine their attack. I think the economic hurdle is big enough that we don’t need to worry significantly.


A transaction that increases the value of BCH is marked as an “economic transaction”. This is much narrower than the same term as used in finance, and I expect that to have caused confusion here.

The point is that a miner that gets paid in BCH is supposedly glad to sponsor a transaction that directly increases the value of BCH, but not one that just moves tokens around, which are likely only very indirectly connected to the value of the coin.

It is an unpopular opinion, I understand, but really nothing anyone should worry about because it is not a huge difference. A cashtoken transaction is in actual fact not buying something in exchange for BitcoinCash; it is thus not an economic transaction in the sense that the simple fact that people voluntarily did that trade increased the total value of BitcoinCash.
As a result it should pay a slightly higher fee BECAUSE today’s low fees are sponsored by miners that want more economic activity and a higher price.

No, it is actually a required part of a healthy coin. You keep the balance as described in:
https://twitter.com/FloweeTheHub/status/1664722936744271872

A choice quote:

Everyday trades are the trunk of our tree, the more there are the bigger and sturdier the tree-trunk and thus our entire tree. On a big tree there is no problem with some side-branches sticking out quite far.
We just have to keep the balance, should the side-branch get bigger than the trunk, the whole thing will collapse.

To not keep the balance is what we see in BTC and in BSV.

BSV has almost no actual economic transactions, all are fake. Impossible to filter on, but nobody will disagree they are fake. And the tree has collapsed more than once.

BTC has ordinals, and those are not economic transactions. They can have them because the BTC value doesn’t stem from utility anyway, but it becomes clear every time people make a big deal of yet another block without any transactions, while their ACTUAL economic activity has to pay more in fees just to get mined…

Not prioritizing your actual economic activity is not healthy. I think BCH did well by keeping fees discounted, but as the worry today is about people abusing that discount, I’ll challenge anyone to solve it better.

Simply making actual economic activity (what people refer to as “wallets” above) pay more without any other adjustments is not good enough IMO.

Let me be more clear about the whole idea that’s still floating around of “cashtokens should get the same fee as other transactions”.

We, acting as a kind of central planner, made the miners give a fee discount with the 1 sat/byte.

The idea is to stop us central planners from doing central planning. Which means we give the power to set fees to the miners.

What do you think will happen if we don’t create any tools? What will happen is that ALL fees will rise. Obviously. We can’t demand that miners give away stuff for free.

Alternatively, my suggestion is that we produce tools to help differentiate between economic transactions and non-economic transactions. If we agree we need to stop centrally controlling the fees and let the miners set them, then they can use those tools to NOT raise the prices on economic transactions.

Thereby helping the most people.


A cashtoken transaction is in actual fact not buying something in exchange for BitcoinCash

Every BCH transaction is buying something with BCH: it’s buying hashes from a miner.

The fee is the ultimate “score” of a TX’s economic worthiness. Value is subjective, and the fee is proof that the TX has at least that much economic value to whomever made the TX.

Why should miners discriminate their fee-paying customers?


Yes, but that is a tiny tiny revenue stream on the whole. A rounding error.

Me paying travalla a couple hundred euros or the transaction paying less than a cent in fees. Both the same transaction.

Which do you think will move the BitcoinCash price?

Because money gains value from usage. People that use your infrastructure without creating value are thus to be charged more.

Only if you haven’t figured out that USING bitcoincash (the money, not the chain) makes the value of the coin go up can you possibly disagree with that…