Lower the default relay fee, create a fee estimation algorithm

It’s not complex.

I think it just “feels” complex because we haven’t looked at it in 6 years or whatever. It’s a formula something like (pseudocode below):

coin_day_sum = 0
foreach coin in tx.inputs:
    coin_day_sum += coin.value * (current_height - coin.height)
priority_factor = coin_day_sum / tx_byte_size
# priority_factor is then applied in some way to modify the fee requirement ...
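Fleshed out, the pseudocode above might look like this in Python (the `Coin` type and the example numbers are illustrative, not from any actual node codebase):

```python
from dataclasses import dataclass

@dataclass
class Coin:
    value: int   # satoshis
    height: int  # block height the coin was confirmed at

def priority_factor(inputs, current_height, tx_byte_size):
    """Satoshi-era 'priority': value-weighted coin age per transaction byte."""
    coin_day_sum = sum(c.value * (current_height - c.height) for c in inputs)
    return coin_day_sum / tx_byte_size

# Example: a single 1 BCH coin aged 100 blocks, spent in a 250-byte tx
coins = [Coin(value=100_000_000, height=700_000)]
print(priority_factor(coins, current_height=700_100, tx_byte_size=250))  # 40000000.0
```

Historically the Satoshi client compared a number like this against a threshold to decide whether a transaction qualified for free relay.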

Wallets always know their coin’s height – or if they do not, they can just not take advantage of this “optimization” and pay the standard 1 sat/B fee until someone writes code to remember the height…

If we do it right – either way nothing breaks. But people wanting to pay less or nothing during times of 0 congestion – can easily do so with a good wallet.

This is 1 idea.

It is indeed not complex for the node, but for the wallets, the main question is “what can I do to ensure my tx will send reliably?”

A 1 sat/B fee is simple: just calculate inputs minus outputs, divided by size, and make sure it’s above 1.

A fee floor voted by miners/whatever is also simple: fetch a number n from the node, calculate inputs minus outputs divided by size, and make sure it’s above n.
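Both of those checks are only a couple of lines; a sketch (the function names are mine, and `floor` stands in for the number fetched from the node in the voted-floor case):

```python
def fee_per_byte(input_sum, output_sum, tx_size):
    """Fee rate in sat/B: total inputs minus total outputs, per byte."""
    return (input_sum - output_sum) / tx_size

def meets_floor(input_sum, output_sum, tx_size, floor=1.0):
    """floor is 1.0 for the fixed 1 sat/B rule, or the n fetched from a node."""
    return fee_per_byte(input_sum, output_sum, tx_size) >= floor
```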

CDD, though, will involve additional logic: the wallet first has to calculate CDD (fetch how old the input UTXOs are, something not immediately apparent from just looking at the tx), then, in the simplest case I can imagine (feel free to suggest other possibilities), compare it against some “this is safe” CDD number supplied by the node. Wallets that sync all history (say, Electron Cash) will have better access to this than wallets that depend more on their full node indexer (say, Bitcore). There’s additional logic there involving new information >.>

Oh, this is very easy.

Whatever we, node builders, do here will get implemented in all major nodes (BCHN, BU, Knuth, BCHD etc) and after that wallet authors will just copy/port the code.

This is a really simple matter of following. Wallet authors do not need to do anything, they just need to copy.

This is also pretty easily solvable.

  1. Wallets that do not have full history but instead rely on an indexer should calculate without CDD and pay the full fee (and also display a warning if the user wants to send with a lower fee).

  2. Wallets that see full history, like Electron Cash, can take advantage of the discount. Because CDD can be exactly this: a discount.

If we treat sending old coins (lots of CDD) for cheaper as a special discount, while keeping default BCH fees under $0.01 as they already are, this will basically solve itself.

Best wallets whose authors care about their users will implement the discount by fetching the full history.

Some other wallets won’t. But there will be no loss, because BCH fees will remain low anyway.


i can’t speak for all wallets but Electron Cash which does SPV has to know which block a coin was mined in as part of its verification process (you can’t verify a tx without knowing the block it was in, and having its header).

Electron Cash stores the height of the UTXO with the UTXO… so for Electron Cash it’s simple to get to that information.

And as far as fetching data from the node for the CDD “rate” goes – that can be as simple as the current 1.0 sats/B number. You start with that number, 1.0 sats/B – and then you can deduct your “coindays destroyed discount” (which may be 0 in some cases) – and voila → your new “optimized” fee.
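As a sketch of that wallet-side arithmetic (the discount rate here is a made-up tuning value the node would supply, not a proposed constant):

```python
BASE_RATE = 1.0  # sat/B, today's standard minimum relay fee

def optimized_fee_rate(coin_days_destroyed, tx_size, discount_per_cdd_byte=0.0):
    """Start from 1.0 sat/B, deduct the coindays-destroyed discount,
    and never go below zero. A wallet that opts out just passes 0 discount."""
    discount = (coin_days_destroyed / tx_size) * discount_per_cdd_byte
    return max(0.0, BASE_RATE - discount)
```

A wallet that can’t (or doesn’t want to) compute CDD simply keeps paying `BASE_RATE`, which is exactly the opt-out described above.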

Note that wallets can opt out of this and not optimize the fee – and just pay 1 sat/B. This is only for wallets wanting to take advantage of the discount. This is how it worked in old Electrum back in the day too…

I don’t see it as more complex than the other proposals. Is it more complex than simple 1 sat/B? Slightly yeah because now you have to calculate 2 things and deduct one from the other… But it’s simpler than some of the other proposals in this thread, I would argue.


Edit: @ShadowOfHarbringer yeah basically everything you just said. :slight_smile:

There’s a BIP that defines which versionbits are open to grinding by version rolling.

If we stay out of the way of those, we should be fine.

Of course, it’s something to double check.


The basic concept of fee-setting is not a consensus rule today. On the other hand, there is really only one “lever” the network can adjust, which is the min-relay-fee, and our fear for zero-conf safety makes us state pretty loudly that miners should not actually touch it.

This is an undesirable situation; nobody really wants developers to have the final say here. And should the price reach $10k, the fees are going to make certain groups of people simply no longer able to participate.
At some point miners will take action, and it’s our job to make sure they have tools to take action that is not destructive to the network.

A lot of questions came up in this discussion, both here and on other channels. I’ve collected various ones and written down the answers as I’ve found them.

FAQ:

Q: Why is usecase X made more expensive and usecase Y made cheaper? Why are you deciding that?

So, this was in relation to my proposal that we turn our one lever (min-relay-fee) into a couple of levers, where a transaction is given a priority based on several properties: 1) ratio of inputs vs outputs, 2) coin-days-destroyed and 3) actual fee paid (per byte).

To put a finger on it: a transaction that has one input and 400 outputs would be made really, really low priority. And in many cases this is a transaction that is not good for the network, for instance the dusting transactions.

I would personally like it if miners took really low priority transactions that also pay a low fee and simply rejected them (did not mine them).
Now, I’m not the market, but obviously such examples will make people ask why I suggest a valid usecase should be made more expensive.

A fair question, which is basically sidestepped by the fact that any solution where we measure more variables and give more options to the miner should come with sane defaults that are easy to change. Maybe even required to be set to sane values by the ecosystem.
This is not consensus, so iterating on options and tweaking values is possible every day to get things right.

The bottom line is that someone needs to make the decisions about what to down-prioritize and make more expensive. The network doesn’t scale infinitely. And in my opinion it is our job to provide levers so the market can actually determine the limits it is willing to accept.

Q: How can we make sure that transactions still stay zero-conf safe?

This question is based on the fact that we have scared the ecosystem into this idea that if we do not have the same exact mempool policies network-wide, zero conf becomes unsafe and double spending is going to get out of hand.

Naturally, this has a core of truth. But the basic issue here is one of tooling. It is also one of perception.

The general perception for many is that they can treat “the network” like a central server. They can take the answer from the nearest node (accepted, rejected etc) and assume that the rest of the network agrees. And that once a transaction is delivered to one or two nodes, it will get mined.

This is an illusion that is very convenient but it can’t possibly be true for a decentralized system. There are 1000s of node operators that are able to tweak their properties and there are serious people out there that will spin up 10000 nodes if that will postpone the rise of BCH.

So, if we have to let go of the notion that we CAN keep all mempool policies the same, how then can we guarantee zero-conf safety? Here is a list of things that each will help tremendously:

  • Wallets need to keep ownership of transactions till they get mined. For this we need better communication and a wallet should be able to find out the mempool status of its tx.
  • Mempools should move from keeping a transaction for weeks to keeping it only for 4-6 hours. This helps a lot with mempool pressure.
  • Mempools should keep many more transactions (as many as possible, really) which may not actually get mined in a block. This also solves the problem of mining bigger blocks (orphan risk) as the receivers already have the transactions and won’t have to download&validate them.
  • Wallets should innovate on payment protocols. One super important item here is that a customer should be able to send a transaction to the merchant, and the merchant should be able to say “No, that does not have enough fee or priority, please fix and try again”. This avoids the current silly situation where the first time a merchant sees a transaction is when it has already been sent to a miner.
  • Double spend proofs should be rolled out in order to give notice to merchants.

Q: How do we avoid being flooded with zero-fee transactions filling up the mempool?

Re-instate the feature that used to be in the Satoshi client where low-priority transactions are rate-limited for broadcast and mempool-entry.
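The mechanism in the old Satoshi client was an exponentially decaying byte counter; a simplified sketch of that idea (the rate and window constants here are illustrative, not the original values):

```python
import time

class FreeTxRateLimiter:
    """Let roughly `rate_bytes_per_min` of low-priority tx bytes through;
    bursts above a ceiling are rejected until the counter decays."""

    def __init__(self, rate_bytes_per_min=15_000):
        self.ceiling = rate_bytes_per_min * 10  # burst ceiling in bytes
        self.count = 0.0
        self.last = time.monotonic()

    def allow(self, tx_size):
        now = time.monotonic()
        # exponential decay: the counter halves in roughly 7 minutes
        self.count *= pow(1.0 - 1.0 / 600.0, now - self.last)
        self.last = now
        if self.count + tx_size > self.ceiling:
            return False  # low-priority tx is not relayed right now
        self.count += tx_size
        return True
```

Full-fee transactions would bypass the limiter entirely; only the low-priority/zero-fee class is throttled.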

Q: How can merchants be sure the receiving TX will get mined if it’s zero-fee?

The simple answer is that they can’t be sure, but that doesn’t mean we should somehow forbid people from sending them at all.

The problem is local to a certain group of people. Merchants accepting zero-conf. And those merchants will anyway need more advanced tools to protect themselves and lower their risk.

Most specific here is that a transaction with too low a fee should simply be rejected before it is sent to the network, and the sender should correct it or be prepared to wait for confirmations.

This requires better tooling on the (merchant’s) wallet side.

Q: How can a wallet know what fee is a good fee to get mined?

Many ideas have been going round, and I’d expect a lot of fear can be felt if we were to look at the BTC situation where fee calculation is simply a guessing game.
What we should thus make clear is that we need to keep the network as a whole healthy and operating well below its limit. Having many full blocks will really make this an impossible problem to solve.

Without always-full blocks the algorithm becomes rather simple: any observer that has the full chain can determine the effective min fee, which can then be relayed via some API.
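A hypothetical sketch of what such an observer could compute from recent blocks (the data shapes and the fallback value are mine):

```python
def effective_min_fee(recent_blocks, fallback=1.0):
    """Lowest fee rate (sat/B) actually mined recently.

    recent_blocks: iterable of blocks, each a list of (fee_paid, tx_size)
    tuples with the coinbase excluded. Returns `fallback` if empty.
    """
    rates = [fee / size for block in recent_blocks for fee, size in block]
    return min(rates, default=fallback)

# Example: two blocks; the cheapest mined tx paid 0.5 sat/B
blocks = [[(250, 250), (125, 250)], [(600, 300)]]
print(effective_min_fee(blocks))  # 0.5
```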

Simply said, when using zero-conf your merchant will likely force you to use the minimum fee. Regardless of how aged your coins are.

When you send something that may take 10 blocks to get mined, then you may try a lower fee but higher priority transaction.

Q: How can a full node actually implement this? (aka is there a spec?)

First of all, ideally a lot of work is going to be done on the wallet side as well. But, yeah, a full node:

  • mempool entries should have a priority added.
    The priority is based on input/output ratio, days-destroyed and fee/byte.
    User settings are used to determine the actual priority as some number.

  • mining gets some more properties to determine when to include transactions based on priority (much like the current min-mining-fee).
    One other property it can use is the existing timestamp when a tx was entered into the mempool.
    User-settable properties (levers for the miner to adjust) can include the ability to delay inclusion of low-priority transactions for a certain time. They also include the ability to include a certain number of kilobytes of high-priority, low-fee transactions.
    What properties are important to expose should definitely get more research.

  • low priority transactions are relayed with a rate-limiter (as was done in the Satoshi code). We might need to adjust the original to the block-size and expected tx/sec on the network. Take something like EB (--blocksizeacceptlimit).

  • mempool expiry will go down to maybe 6 hours, after which people can re-send their transaction if it never got included, possibly with a changed fee.
    Tx priority may change this time so that really low-prio transactions get removed faster (say, 3 hours or 20 blocks).
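Tying the bullet points together, a node-side sketch might look like this (the weights and thresholds are placeholders for the user-settable levers, not proposed values):

```python
import time
from dataclasses import dataclass, field

@dataclass
class MempoolEntry:
    txid: str
    fee_per_byte: float
    coin_days_destroyed: float
    n_inputs: int
    n_outputs: int
    entered: float = field(default_factory=time.time)  # mempool entry timestamp

def priority(e, w_fee=1.0, w_cdd=0.001, w_ratio=0.1):
    """Combine the three signals into one number; the weights are operator levers."""
    io_ratio = e.n_inputs / max(e.n_outputs, 1)  # very low for 1-in/400-out dusting
    return w_fee * e.fee_per_byte + w_cdd * e.coin_days_destroyed + w_ratio * io_ratio

def expiry_seconds(e, prio_threshold=1.0):
    """Really low-prio entries expire after ~3 hours, the rest after ~6."""
    return 3 * 3600 if priority(e) < prio_threshold else 6 * 3600
```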


I don’t have much more to add right now, other than that I enjoy Mr Zander’s thinking, in that he questions our premises (tx ownership, payment protocols, …)

I am generally unsure that we should allow 0 fee transactions. A 1sat/kB fee, or even a 1sat/tx fee, should be generally ok for anyone.


It should be easy for EC and I did mention that :smiley: Not all wallets work that way though; most rely on some form of trusted server. Last time I checked, Coinbase wallet doesn’t even show history in the wallet (likely due to compatibility with operating on ETH in a very light way), so I wonder how they’ll cope with that.

But more to the point, the problem isn’t whether you can implement the new logic, but rather whether they will. We do live in a world where most wallets add BCH as an afterthought, convincing them to add new logic specifically for BCH is going to be quite an uphill battle.

If the new logic affects relayability of existing, simple 1 sat/B transactions in any way, existing multicoin wallets may become hostile to us. That’ll likely overwhelm any possible benefits of this pre-emptive fee-lowering effort.

If, as you said, the new logic strictly only discounts from 1 sat/B with CDD (and does not deprioritize transactions or make them unrelayable in any way), these wallets don’t have to do anything, and that’s probably better. Adoption of the new logic may be very lacking for a long time though; people who are advocating for this policy may want to think twice about whether that’s a good outcome.


This reply relates as much to your statement above as to @tom’s most recent post with the detailed thoughts on his proposal and the questions and answers.

I very much agree that this relies hugely on the “will they actually implement it” part, including when considering a new common mempool eviction policy.

Even though there are only a handful of full nodes right now, we can’t be sure they will all agree that such a new eviction policy is what they wish to make their default, and even if they do, it is our users (node operators) who have the final say, and if they have that knob to turn, they may decide to run with different parameters. It seems that could spoil the result and all the good intentions.

Maybe persuasion that a common eviction policy is needed would work, maybe not. If it turns out it doesn’t, then the switch to a more complicated “more knobs to adjust priorities” policy would be harmful.

My final thoughts here (this has been said elsewhere too):

To keep on topic, we should separate out the concerns / requests about:

  • free transactions
  • additional CDD priority mechanisms

And keep this thread purely about lowering the default minfee (relay+mining) in a way that sticks to the current model of a unified network fee.

This is something much simpler for nodes and wallets to implement + understand, can likely be reasonably maintained for the foreseeable future, doesn’t impact current use cases (unless it devolves to the state of a non-uniform policy) and keeps us focused in this topic.

The testing effort on such a simpler adjustment scheme is probably an order (or orders) of magnitude less than schemes where we introduce much more complexity through several new tweakable parameters. I think that really matters if the objective is to have something workable within reasonable time, given that the expectation that underpins this thread is that we need a solution for the high-fees problem that can occur unpredictably and perhaps relatively soon.

And to be fair to the originator of this thread, I suggest that we also move proposals that involve miner voting instead of a “fee estimation algorithm” out of here, since they are different things.

And another thing: I have this nagging doubt that if we were to make mempool admission rate-limited by a more complex calculation, we create a new obstacle to scaling, because now we need to put a rate limiter in front of the mempool. I’m not yet sure it’s needed; the major economic factor to disincentivize spam is paying some fee, so a simple “did this transaction pay my required minimum fee?” check seems all that is needed to cover the major spam deterrent. Maybe I’m wrong, maybe such rate limiting can parallelize quite well too. I suppose it ought to be workable. But it requires some architectural change to BCHN at least; I don’t know about the others.


Side note: I like that nearly everyone in this thread strongly agrees that CDD needs to come back.

Its removal never made any sense, it was just Core idiocy.


I am going to open a separate thread for CDD prioritization discussion.

That’s an opinion mixed with what looks like a genetic fallacy (“because it came from Core, it must be devoid of sense”).

Here is where I think we need to be more specific on actual usefulness thereof (hence a new thread).

nearly everyone in this thread strongly agrees that CDD needs to come back

I’m not sure that’s true, it may be more an impression you got?
To check, you can enumerate who you think “strongly agrees” in this thread.

Count me out, since I’m not convinced.

Actually I meant it was devoid of sense regardless, not because it came from Core.

I have been here for a very long time and I remember Coin Days Destroyed working splendidly in 2015 and before. I did not even know it got removed before this discussion.

Removal of CDD was dumb, regardless of whether Core did it or somebody else did. But surely it all comes together and makes exponentially more sense when you add that Core wanted to destroy on-chain transactions and this was their hidden goal all along.

Well, so far I think that Tom Zander, you, Calin, mtrycz and somebody else agreed that CDD needs to come back.

Maybe I exaggerated a little with “strongly” agreeing (I do that for dramatic effect, like filmmakers add CGI effects to their movies, because otherwise long discussions get boring), but generally yes, we almost universally agreed that CDD needs to come back in some form.

This is how small its impact on usage is. You didn’t even notice for 2+ years that someone removed it. And you ARE a person who transacts on the network, I know that. So this is more an indication that it’s not a significant thing.

Uh.

I did not notice it, because I stopped using BTC after 2017. And on BCH congestion/high fees were never a problem, so CDD was (initially) not needed.

I did notice it disappearing from the charts on blockchain.info (and others) though, and I always wondered why it disappeared.

Well now I know.

So this makes no sense to me as an argument for reintroducing it.

The whole point is to keep congestion / high fees from happening, if CDD’s only use is in that situation, it is effectively of no use on BCH.

I’m not saying that it might not have some use (in fact I played devil’s advocate for it yesterday on BCHN’s telegram-bridge) but I think this discussion belongs in a separate thread to explore exactly what those uses are.

I did notice it disappearing from the charts on blockchain.info (and others) though, and I always wondered why it disappeared.

Chart sites can still compute it. Blockchain.com doesn’t even have BCH charts (I wish they would)

Hm perhaps you could be right.

But what about spam prevention in relaying transactions? CDD could be used as an additional countermeasure.

Ah yeah. Case in point.

Reintroduction of CDDs is offtopic in this discussion and should be done in another thread.

Opened thread for collecting CDD information and benefits:


There is a clear fallacy in this logic.

The basic fact is that there is no guarantee that block-space supply will outpace demand. All those < $0.01 transactions will find companies exploiting them. With chain success I can almost guarantee blocks will be full.
So what you put up as “the whole point” is an unattainable goal. Congestion WILL happen in a space where the price of a transaction is lower than the competition’s, and with the current codebase this WILL lead to high fees. The only question is the timeline.

I’m personally arguing for more levers to make A not automatically lead to B. To avoid the connection that full blocks lead to high fees and lead to cutting off the people living on $2 a day.

The ability to use more features (coin-days and inputs vs outputs) will allow us to not fall in the same trap of success that BTC and ETH have stepped into.


There is also a clear fallacy here, in that this space isn’t inevitably used by others - this has not happened on other blockchains (even very similar ones like BSV).

It could of course happen to BCH, but I think it is more likely to be in the form of a hostile action than everyone rushing to use our competitively priced block space.

I mean, costs right now are < $0.01 for average transactions. And we can see that our 32 MB blocks are not being filled up, and haven’t been for the past 3 years or so.

I’ll acknowledge that congestion WILL happen if we don’t stay ahead of demand.
But there’s a condition in there, I’m not going to argue that the effect of people rushing to use our lower-fees chain is necessarily so quick that we cannot meet it.

Also, more levers might be useful, but they also come with costs as I have pointed out (complexity, maintenance costs etc)

That is not a fallacy. This is you putting new data on the table and drawing new conclusions from it that you say conflict with mine. The problem with your conclusions based on BCH and BSV is that your logic has another fallacy in it.
The fact that BCH’s blocks are smaller than 32MB can much more easily be explained by us not yet having grown our ecosystem to contain enough people to fill them.
The example is a good one, though, because on both BSV and BCH the percentage of transactions that pays very low fees is quite significant. And that proves my argument.

Good, you do agree with my (seemingly not fallacious) conclusion. :smiley:

I do agree that there is a chance we may keep innovation ahead of usage. I personally don’t think that is going to be the case.

Adoption of successful products follows a so-called S-curve, and we hope that BCH is going to be successful and follow the same kind of adoption. Staying ahead of adoption is a nearly impossible thing, and BCH frankly doesn’t do enough work on scaling to make me think we will have no problems in the next 2-5 years.

I mean, even if you disagree that adoption will outgrow block-space increases, it is still good practice to plan for the case where your optimistic view fails.
