Rolling minimum fee mempool policy with decay

Back in 2015, Bitcoin Core introduced (in PR6722) the concept of a “rolling minimum fee” policy: when the mempool is full, the minimum acceptance fee is set to the fee of the transaction that was evicted.

However, this fee is not reset when a new block is found; instead it decays over time. This appears to be a DoS measure. morcos has a good comment on the logic behind it and did the math behind this policy in this comment.

Future transactions should be obligated to pay for the cost of transactions that were evicted (and their own relay fee); otherwise a large package of transactions could be evicted by a small tx with a slightly higher fee rate. This could happen repeatedly for a bandwidth attack.

It appears this code is still present in the codebase of BCHN.
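
For concreteness, the decay lives in Bitcoin Core’s `CTxMemPool::GetMinFee()` (and, as far as I can tell, the inherited BCHN code). A minimal sketch of that logic, paraphrased from memory rather than copied from either codebase:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>

// Sketch of the rolling-fee decay, paraphrased from Bitcoin Core's
// CTxMemPool::GetMinFee(); constants are from memory and may differ in BCHN.
static constexpr double ROLLING_FEE_HALFLIFE = 60.0 * 60 * 12; // 12 hours
static constexpr double INCREMENTAL_RELAY_FEE = 1000.0;        // sat/kB, i.e. 1 sat/byte

// `rolling_floor` is the current fee floor in sat/kB. Returns the floor
// after decaying it for the time elapsed since the last update.
double DecayedFeeFloor(double rolling_floor, int64_t now, int64_t last_update,
                       size_t mempool_usage, size_t mempool_limit)
{
    double halflife = ROLLING_FEE_HALFLIFE;
    // Decay faster when the mempool is far from full.
    if (mempool_usage < mempool_limit / 4)      halflife /= 4;
    else if (mempool_usage < mempool_limit / 2) halflife /= 2;

    rolling_floor /= std::pow(2.0, (now - last_update) / halflife);

    // Once the floor drops below half the incremental relay fee, it snaps
    // back to zero and the normal minimum-relay policy applies again.
    if (rolling_floor < INCREMENTAL_RELAY_FEE / 2) return 0.0;
    return rolling_floor;
}
```

So the floor is never reset by a new block; it halves every 12 hours while the mempool stays full, decays faster once the mempool drains, and only returns to normal policy after falling below half the incremental relay fee.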

Possible problem:

This is not a big issue on BTC, where transactions are expected to be unreliable. However, on BCH it is assumed that low-fee transactions will be accepted within a reasonable time. I believe both popular wallets and smart contracts (especially covenant contracts) hard-code fees in their clients because of this assumption. This decay is not well known and will probably cause surprises if it ever takes effect.

I speculate that it is possible to flood the mempool to bump the fee floor, and that while waiting for the decay to take effect:

  • Users will be confused to see their transactions rejected, even though the mempool is not full.
  • It will be more plausible to do 0-conf double spends, due to other nodes not having this decay.
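
To put rough numbers on the first point (assuming BCHN kept Bitcoin Core’s constants: a 12-hour half-life, shortened to 3 hours once the mempool drops below a quarter full): a floor that was pushed up to 8 sat/byte would still need roughly three half-lives, on the order of 9 hours after the mempool drains, before ordinary 1 sat/byte transactions are accepted again. For that whole window the node rejects them despite having a nearly empty mempool.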

Is this a problem? How would we mitigate the DoS vector described by morcos if we were to remove the decay?


I ran into this code when rewriting the TrimToSize() mempool eviction function to use my WCFeeRate heuristic instead of the <descendant_score> sort method in !832.

One of the things that bothers me about the current mempool eviction and fee floor strategy is that it bumps the fee floor by 1 sat/byte (or, in the code, 1000 sat/kB) each time the mempool fills up. This, combined with the delayed decay, could mean that a node that accepted 0.5 sat/byte transactions while miners only accept 1.0 sat/byte could easily be made to bump that floor up to 1.5 sat/byte, at which point the node would stop receiving transactions for quite a while. If anything, I think that this should be a multiplicative increase (e.g. increase by 5%), not an additive one.

We should also consider that the BTC mempool capacity defaults to (IIRC) 300 blocks, whereas the BCH mempool size defaults to slightly less than 10 blocks.

I think my preference would be to change this to increase multiplicatively by 5% each time the mempool fills, and decrease by 5% each time a block is found.
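
Roughly what I have in mind, as an illustrative sketch only (invented helper names, not a patch against the actual TrimToSize() / fee floor code):

```cpp
#include <algorithm>

// Hypothetical multiplicative fee-floor adjustment (sat/kB), instead of
// the current "evicted feerate + 1000 sat/kB" additive bump.
static constexpr double FLOOR_STEP = 1.05;    // 5% per event
static constexpr double MIN_FLOOR  = 1000.0;  // 1 sat/byte baseline (illustrative)

// Called whenever TrimToSize() has to evict something.
double BumpFeeFloor(double current_floor, double evicted_feerate)
{
    // Never let the floor fall below the feerate we just evicted, but grow
    // it by 5% rather than by a fixed 1000 sat/kB increment.
    return std::max(current_floor, evicted_feerate) * FLOOR_STEP;
}

// Called whenever a new block is connected: shrink by the same factor.
double DecayFeeFloor(double current_floor)
{
    return std::max(MIN_FLOOR, current_floor / FLOOR_STEP);
}
```

The decay could equally stay time-based; the important part is that repeated fill-ups compound gently instead of adding a full 1000 sat/kB each time.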

Also noteworthy: my TrimToSize() WCFeeRate MR (!832) changes the eviction behavior slightly, and in a fashion that makes the bandwidth attack that Morcos mentions impossible or ineffective. In Morcos’s bandwidth attack, the min fee bump from a single 200 byte tx causes the eviction of a 25 tx * 100 kB = 2.5 MB package, which allows the coins to be re-spent at a marginally higher fee for another 2.5 MB of bandwidth used. This is possible because the current (legacy) TrimToSize() code chooses to sort transactions by the feerate of the tx-and-descendants (i.e. parents-paid-for-by-children); and if a root tx has a low feerate, but its children are higher, that means that the rest of the package can be safe until the root tx needs eviction, at which point the whole thing goes at once.

But with the WCFeeRate version of TrimToSize(), only childless transactions are considered for eviction, and they’re considered based on the worst-case estimated feerate of the tx-and-ancestors (i.e. child-after-having-paid-for-parents). This results in purely incremental eviction; a single 200-byte transaction can only trigger the eviction of 200 bytes of other transactions (rounded up), or at most 100 kB given standardness rules. If the root tx has a low feerate, then that lowers all of its descendants’ WCFeeRate scores, causing a descendant to be the first to be pruned instead of the root.
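
To make that concrete, here is an illustrative sketch of the heuristic described above (this is not the code from !832; names and data structures are simplified):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative sketch of the eviction heuristic described above; not the
// code from !832. Feerates are in sat/kB.
struct TxInfo {
    int64_t fee;            // this tx's fee
    int64_t size;           // this tx's size
    int64_t ancestor_fee;   // fee of this tx plus all unconfirmed ancestors
    int64_t ancestor_size;  // size of this tx plus all unconfirmed ancestors
    bool has_children = false;
};

// Worst-case feerate: a tx can never look better than the package of
// ancestors it would drag into a block with it.
double WCFeeRate(const TxInfo& tx)
{
    const double own = 1000.0 * tx.fee / tx.size;
    const double ancestors = 1000.0 * tx.ancestor_fee / tx.ancestor_size;
    return std::min(own, ancestors);
}

// Evict childless transactions, lowest WCFeeRate first, until we have
// freed `bytes_needed`. Each incoming tx can only displace roughly its
// own size, so a 200-byte tx cannot knock out a multi-MB package.
int64_t TrimByWCFeeRate(std::vector<TxInfo>& pool, int64_t bytes_needed)
{
    int64_t freed = 0;
    while (freed < bytes_needed) {
        auto victim = pool.end();
        for (auto it = pool.begin(); it != pool.end(); ++it) {
            if (it->has_children) continue;  // only leaves are candidates
            if (victim == pool.end() || WCFeeRate(*it) < WCFeeRate(*victim))
                victim = it;
        }
        if (victim == pool.end()) break;     // nothing left to evict
        freed += victim->size;
        pool.erase(victim);                  // (real code would update parents)
    }
    return freed;
}
```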

To solve this I think it is very useful to take a step back and consider some observations:

  • Software that generates a transaction is expected to re-submit that transaction regularly in case it didn’t get seen or got ejected from a mempool.
  • The design of using fees in the mempool is based on the idea of a fee market driven by a limited block size. This has been known to be incorrect since about 2015. We do want a fee market, and Peter Rizun’s paper “A Transaction Fee Market Exists Without a Block Size Limit” proves we can have one without limiting block size.
  • In Bitcoin Cash we actively sponsor transactions (keep fees artificially low) for two (main) reasons:
    1. We want more transactions, making the price very low helps.
    2. Our mempool code doesn’t follow the economic model we actually want.
  • A fee market is not going to be based purely on fees. Popular ideas include days-destroyed and similar metrics. Additionally, we want free transactions back.

The bottom line here is that a new mempool design could be made today, but I just don’t think it makes sense for a mempool to be anything more than a collection of transactions, regardless of fee and regardless of priority. Removing transactions when it’s full is fine, but I’d suggest doing this based on age, not fee.

The longer term solution would include something like this:

  • a separation of mempool for validation (receiving a block) and a mempool for mining [more].
  • Make mining prioritize (select transactions to include) based on a combination of factors, fee being only one of them.
  • Make a validation mempool have as its primary task the collection of (all valid) transactions in order to make block-transfer protocols work better.
  • Make this validation mempool expunge transactions after 6 hours instead of 72.
  • Fill the mining mempool from the validation one; the mining side can reject “bad” transactions that the miner does not want to include.
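
In code terms, the split might look roughly like this (a shape sketch only, with invented types and no concern for locking or performance):

```cpp
#include <chrono>
#include <functional>
#include <map>
#include <string>
#include <utility>

// Sketch: a validation mempool that collects all valid transactions and
// expires them by age, plus a mining mempool filled from it through a
// miner-chosen policy. All names and types are invented for illustration.
using Txid = std::string;
struct Tx { Txid id; /* rest of the transaction data */ };

class ValidationMempool {
    std::map<Txid, std::pair<Tx, std::chrono::steady_clock::time_point>> txs;
public:
    void Add(const Tx& tx) { txs[tx.id] = {tx, std::chrono::steady_clock::now()}; }

    // Primary eviction policy: age (e.g. 6 hours), not feerate.
    void ExpireOlderThan(std::chrono::hours max_age) {
        const auto now = std::chrono::steady_clock::now();
        for (auto it = txs.begin(); it != txs.end();) {
            if (now - it->second.second > max_age) it = txs.erase(it);
            else ++it;
        }
    }

    template <typename Fn> void ForEach(Fn fn) const {
        for (const auto& [id, entry] : txs) fn(entry.first);
    }
};

class MiningMempool {
    std::map<Txid, Tx> txs;
public:
    // The miner's policy (fee, age, days-destroyed, ...) decides what gets in.
    void FillFrom(const ValidationMempool& pool,
                  const std::function<bool(const Tx&)>& policy) {
        pool.ForEach([&](const Tx& tx) { if (policy(tx)) txs.emplace(tx.id, tx); });
    }
};
```

The point of the split is that the validation side keeps everything valid it has seen recently, which is what block-transfer protocols want, while what miners exclude becomes a local policy decision rather than a relay policy.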

If the mempool fills up (and it will fill up, as memory is a finite resource), you have two options when receiving a transaction:

  1. Drop it.
  2. Evict another transaction to make room.

So, I think there is no way around the fact that if the mempool fills up, then the user experience is degraded.

The choices to A) evict low-feerate transactions and B) raise the fee floor are consistent, and at the very least they provide a natural cost increase for any DoS and a way for normal people to get their transactions past the DoS. How the fee floor decays is an interesting question, but I don’t think it changes anything fundamental.

We have at least gotten rid of one of the biggest problems here, which was the badly designed BCH SigOps limiting rule that had inadvertently made it much cheaper to sustain mempool-filling DoS attacks.

It’s also worth mentioning a few things:

  • Mempool size is a config parameter and probably ought to be increased at some point.
  • In BCHN at least, the calculation of the size of mempool is not based on adding up raw tx sizes, rather it is a more accurate (though still imperfect) estimate of how much memory is consumed by having the tx in memory, including all overhead in data structures. These data structure sizes can change between software releases, differ between architectures, and will of course vary between node implementations.
  • Based on the above point, different nodes will start evicting at different times, so they will all have different feerate floors in the case of a mempool flooding event.
  • Due to the above, and also inevitable race conditions, even after a mempool flooding event has finished and everyone’s fee floor is reset back to 1 sat/byte, it can easily be the case that some transactions only exist on certain nodes. If you only ask one node, you have no way of knowing how widespread a tx might be; it may never be mined (and resending the tx to the same node will do nothing of course).
  • Any tendency to evict means that unconfirmed transactions can be threatened even when the mempool is quiet, since the mempool flooding can happen after the tx is broadcast. Not evicting means that flooding can be a hard blockade.

But as a mildly comforting note, I would point out that mempool flooding events are rare. BTC mempools did fill up in mid and late 2017, as can be seen by the loss of 1 sat/byte bands around that time on johoe’s explorer.

Exactly, thanks for explaining this in steps. This is why it is relevant to have mempools drop “old” transactions faster (as I mentioned above). Instead of 72 hours I moved Flowee the Hub to 6h. It can possibly be shortened even further, but I’m hesitant to do so without actual empirical data.

It is, and remains, the responsibility of the receiver of the transaction (typically the merchant) to rebroadcast it at intervals until it gets mined.

I fondly remember the time when some people flooded the BCH network and miners then mined 8 MB blocks, making the problem go away completely within 2 or 3 blocks.
The point being that such flooding becomes rarer as the gap between usage and maximum capacity grows, because the cost simply becomes too high. Even without a fee floor increase.

  3. Dump the tx to disk. (And store the txid in a bloom filter so that we know the tx is there.) We’d need to be careful with this to avoid DoS vulnerability, though. Disk space is pricier than network throughput, although disk throughput is cheaper than network throughput.

BTC’s mempool is about 150x the size of its base blocksize limit. BCH’s mempool is about 5x the size of its base blocksize limit. Flooding on BCH might be more feasible than you’d think.

This raises the question of why you would prefer to keep the tx over just tossing it. Remember, the originator of the tx (typically the merchant that receives it) is going to be able to re-send it regularly.

Since, IMO, first-seen is important to keep, we need to make sure that people have the flexibility to double-spend their own transaction after N hours should the network dislike it for whatever reason. Keeping it forever in some mempool (on disk or not) prevents this.

So, why would you prefer to keep it over tossing it and allowing the originator of the tx to get his wallet to re-submit it for another stay in the mempool?

Dumping to disk sounds tricky, insofar as you want to keep the tx linkages (spent TXOs, and new TXOs) in memory still. I guess that you just need multiple bloom filters.

I guess you might actually want cuckoo filters since element removal is an important property.
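
For illustration, a toy cuckoo filter with the deletion property looks roughly like this (parameters are arbitrary, it never resizes, and it is not vetted for production; a real node would want a reviewed implementation and txid-sized keys):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Toy cuckoo filter: approximate set membership that supports deletion,
// which is the property plain bloom filters lack.
class CuckooFilter {
    static constexpr size_t BUCKET_SLOTS = 4;
    static constexpr int MAX_KICKS = 500;
    std::vector<std::array<uint16_t, BUCKET_SLOTS>> buckets; // 0 == empty slot

    static size_t RoundUpPow2(size_t n) { size_t p = 1; while (p < n) p <<= 1; return p; }
    size_t Mask() const { return buckets.size() - 1; }
    static uint64_t Hash(const std::string& key) { return std::hash<std::string>{}(key); }
    static uint16_t Fingerprint(uint64_t h) { uint16_t f = h >> 48; return f ? f : 1; }
    size_t AltIndex(size_t i, uint16_t fp) const {
        // XOR trick; involutive because the bucket count is a power of two.
        return (i ^ std::hash<uint16_t>{}(fp)) & Mask();
    }
    bool TryInsert(size_t i, uint16_t fp) {
        for (auto& slot : buckets[i]) if (slot == 0) { slot = fp; return true; }
        return false;
    }

public:
    explicit CuckooFilter(size_t min_buckets) : buckets(RoundUpPow2(min_buckets)) {
        for (auto& b : buckets) b.fill(0);
    }

    bool Insert(const std::string& txid) {
        const uint64_t h = Hash(txid);
        uint16_t fp = Fingerprint(h);
        size_t i = static_cast<size_t>(h) & Mask();
        if (TryInsert(i, fp) || TryInsert(AltIndex(i, fp), fp)) return true;
        // Both candidate buckets are full: evict a resident fingerprint and
        // relocate it to its alternate bucket, repeating up to MAX_KICKS.
        for (int kicks = 0; kicks < MAX_KICKS; ++kicks) {
            std::swap(fp, buckets[i][kicks % BUCKET_SLOTS]);
            i = AltIndex(i, fp);
            if (TryInsert(i, fp)) return true;
        }
        return false; // treat the filter as full
    }

    bool Contains(const std::string& txid) const {
        const uint64_t h = Hash(txid);
        const uint16_t fp = Fingerprint(h);
        const size_t i1 = static_cast<size_t>(h) & Mask();
        const size_t i2 = AltIndex(i1, fp);
        for (uint16_t s : buckets[i1]) if (s == fp) return true;
        for (uint16_t s : buckets[i2]) if (s == fp) return true;
        return false;
    }

    // Removal is what bloom filters can't do: clear one matching slot.
    bool Remove(const std::string& txid) {
        const uint64_t h = Hash(txid);
        const uint16_t fp = Fingerprint(h);
        const size_t i1 = static_cast<size_t>(h) & Mask();
        const size_t i2 = AltIndex(i1, fp);
        for (auto& s : buckets[i1]) if (s == fp) { s = 0; return true; }
        for (auto& s : buckets[i2]) if (s == fp) { s = 0; return true; }
        return false;
    }
};
```

Dumping a tx to disk would Insert() its txid, later re-acceptance or expiry would Remove() it, and Contains() answers “did we stash this somewhere?” with a small false-positive rate.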

Anyway, it’s interesting to note the mempool flooding on ABC right now. Obviously exacerbated by lack of people mining blocks, but informative nonetheless. (And note that practically, ‘full mempool’ on ABC is only ~90-100 MB of tx data, for the txes being used. The 300 MB limit is, as I mentioned, including data structure overheads.)

Perhaps, but I would love it if we had mempool set reconciliation that works to heal any source of mempool mismatch.

(Here is an article describing the real-life full-mempool problem and mempool inconsistencies, on BTC: https://b10c.me/blog/001-the-300mb-default-maxmempool-problem/)