Fee-setting is not a consensus rule today. On the other hand, there is really only one “lever” the network can adjust, the min-relay-fee, and our fear of breaking zero-conf makes us state pretty loudly that miners should not actually touch it.
This is an undesirable situation; nobody really wants developers to have the final say here. And should the price reach $10k, fees are going to make certain groups of people simply unable to participate.
At some point miners will take action, and it's our job to make sure they have tools to take action that is not destructive to the network.
A lot of questions came up in this discussion, both here and on other channels. I’ve collected various questions and written down the answers as I’ve found them.
FAQ:
Q: Why is usecase X made more expensive and usecase Y made cheaper? Why are you deciding that?
So, this was in relation to my proposal that we turn our one lever (min-relay-fee) into a couple of levers, where a transaction is given a priority based on several properties: 1) the ratio of inputs vs outputs, 2) coin-days-destroyed, and 3) the actual fee paid (per byte).
To put a finger on this: a transaction that has one input and 400 outputs would be given a really, really low priority. In many cases such a transaction is not good for the network; dusting transactions are one example.
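To make that concrete, here is a minimal sketch of how such a priority number could be combined from the three properties. The weights, the scaling and the example numbers are purely my own placeholders for illustration, not a finished formula:

```cpp
// prio_sketch.cpp -- illustrative only; weights and scaling are assumptions.
#include <algorithm>
#include <cstdint>
#include <cstdio>

struct TxStats {
    int numInputs;
    int numOutputs;
    double coinDaysDestroyed; // sum over inputs of (value in BCH * age in days)
    uint64_t feeInSatoshi;
    uint32_t sizeInBytes;
};

// Combine the three measurements into one priority number. Higher is better.
// A 1-input / 400-output dusting tx scores very low on the input/output ratio
// component, which is exactly the intent.
double priority(const TxStats &tx)
{
    double ioRatio = double(tx.numInputs) / std::max(1, tx.numOutputs);
    double feePerByte = double(tx.feeInSatoshi) / tx.sizeInBytes;
    double agePart = tx.coinDaysDestroyed / tx.sizeInBytes;

    // The weights below are placeholders a node operator would tune.
    return 1.0 * std::min(ioRatio, 1.0)   // penalize output-heavy transactions
         + 0.5 * agePart                  // reward coin-days-destroyed
         + 2.0 * feePerByte;              // reward actual fee paid per byte
}

int main()
{
    TxStats dusting { 1, 400, 0.1, 500, 15000 };   // 1-in, 400-out, low fee
    TxStats payment { 2, 2, 30.0, 250, 400 };      // ordinary payment, aged coins
    std::printf("dusting tx priority: %.3f\n", priority(dusting));
    std::printf("payment tx priority: %.3f\n", priority(payment));
    return 0;
}
```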
I would personally like it if miners took really low-priority transactions that also pay a low fee and simply rejected them (did not mine them).
Now, I’m not the market, but obviously such examples will make people ask why I suggest a valid usecase should be made more expensive.
That is a fair question, and it is largely sidestepped by the fact that any solution where we measure more variables and give more options to the miner should come with sane defaults that are easy to change. Maybe the ecosystem should even be required to set them to sane values itself.
This is not consensus; iterating on the options and tweaking the values to get things right is possible every day.
The bottom line is that someone needs to decide what to down-prioritize and make more expensive. The network doesn’t scale infinitely. And in my opinion it is our job to provide the levers so that the market can determine the actual limits it is willing to accept.
Q: How can we make sure that transactions still stay zero-conf safe?
This question is based on the fact that we have scared the ecosystem into the idea that if we do not have the exact same mempool policies network-wide, zero-conf becomes unsafe and double spending will get out of hand.
Naturally, this has a core of truth. But the basic issue here is one of tooling. It is also one of perception.
The general perception for many is that they can treat “the network” like a central server: they can take the answer from the nearest node (accepted, rejected, etc.) and assume that the rest of the network agrees, and that once a transaction is delivered to one or two nodes, it will get mined.
This is a very convenient illusion, but it can’t possibly be true for a decentralized system. There are thousands of node operators that are able to tweak their settings, and there are serious people out there that will spin up 10,000 nodes if that will postpone the rise of BCH.
So, if we have to let go of the notion that we CAN keep all mempool policies the same, how then can we guarantee zero-conf safety? Here is a list of things, each of which will help tremendously:
- Wallets need to keep ownership of transactions until they get mined. For this we need better communication, and a wallet should be able to find out the mempool status of its tx (a rough sketch of such tracking follows after this list).
- Mempools should move from keeping a transaction for weeks to keeping it only for 4-6 hours. This helps a lot with mempool pressure.
- Mempools should keep many more transactions (as many as possible, really) which may not actually get mined in a block. This also solves a problem with mining bigger blocks (orphan risk), as the receivers already have the transactions and won’t have to download and validate them.
- Wallets should innovate on payment protocols. One super important item here is that a customer should be able to send a transaction to the merchant and the merchant should be able to say “No, that does not have enough fee or priority, please fix and try again”. This avoids the current silly situation where the first time a merchant sees a transaction is when it has already been sent to a miner.
- Double spend proofs should be rolled out in order to give notice to merchants.
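To illustrate the first point about wallets keeping ownership: a wallet could track each outgoing transaction until it is mined and decide what to do based on what its node reports back. The states and decision rules below are assumptions of mine, only meant to show the shape of the tooling:

```cpp
// wallet_tracking_sketch.cpp -- assumed states and rules, for illustration only.
#include <cstdint>
#include <cstdio>
#include <string>

enum class TxStatus { Unknown, InMempool, Mined, Expired, Rejected };
enum class Action { Nothing, Rebroadcast, BumpFeeAndResend, MarkDone };

struct TrackedTx {
    std::string txid;
    uint64_t feeInSatoshi;
    int64_t firstSeen;     // unix time the wallet first broadcast it
};

// Decide what the wallet should do, given what its node reports about the tx.
Action decide(const TrackedTx &tx, TxStatus status, int64_t now)
{
    switch (status) {
    case TxStatus::Mined:
        return Action::MarkDone;               // ownership ends here
    case TxStatus::InMempool:
        return Action::Nothing;                // still pending, keep watching
    case TxStatus::Expired:                    // fell out of the mempool
    case TxStatus::Rejected:
        return Action::BumpFeeAndResend;       // re-send, possibly with more fee
    case TxStatus::Unknown:
        // Not seen for a while? Assume it got lost and re-broadcast it.
        return (now - tx.firstSeen > 600) ? Action::Rebroadcast : Action::Nothing;
    }
    return Action::Nothing;
}

int main()
{
    TrackedTx tx { "d2a4...", 250, 1000 };
    Action a = decide(tx, TxStatus::Expired, 2000);
    std::printf("action: %d\n", static_cast<int>(a));
    return 0;
}
```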
Q: How do we avoid being flooded with zero-fee transactions filling up the mempool?
Re-instate the feature that used to be in the Satoshi client where low-priority transactions are rate-limited for broadcast and mempool entry.
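For reference, the old Satoshi logic kept an exponentially decaying byte counter and refused further free transactions once a per-window budget (the -limitfreerelay setting) was used up. Here is a minimal sketch of that idea; the window length and default limit are from memory and should be treated as assumptions:

```cpp
// free_relay_limiter_sketch.cpp -- decaying byte counter in the spirit of the
// old -limitfreerelay logic; the window and limit values here are assumptions.
#include <cmath>
#include <cstdint>
#include <cstdio>

class FreeTxRateLimiter {
public:
    // limitKbPerMinute: how many kilobytes of zero/low-fee tx we accept per minute.
    explicit FreeTxRateLimiter(double limitKbPerMinute)
        : m_limitBytesPerWindow(limitKbPerMinute * 1000 * 10) {} // ~10 min window

    // Returns true if a low-priority tx of this size may enter the mempool now.
    bool allow(uint32_t txSizeBytes, int64_t now)
    {
        // Exponentially decay the counter over a ~10 minute (600 s) window.
        m_count *= std::pow(1.0 - 1.0 / 600.0, double(now - m_lastTime));
        m_lastTime = now;
        if (m_count + txSizeBytes > m_limitBytesPerWindow)
            return false;        // over budget: drop or delay the transaction
        m_count += txSizeBytes;
        return true;
    }

private:
    double m_limitBytesPerWindow;
    double m_count = 0;
    int64_t m_lastTime = 0;
};

int main()
{
    FreeTxRateLimiter limiter(15); // assume 15 kB/minute of free transactions
    std::printf("first 500-byte free tx allowed: %d\n", limiter.allow(500, 1000));
    return 0;
}
```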
Q: How can merchants be sure the TX they receive will get mined if it's zero-fee?
The simple answer is that they can’t be sure, but that doesn’t mean we should somehow forbid people from sending them at all.
The problem is local to a certain group of people: merchants accepting zero-conf. And those merchants will need more advanced tools anyway to protect themselves and lower their risk.
Most specifically, a transaction whose fee is too low should simply be rejected before it is sent to the network, and the sender should correct it or be prepared to wait for confirmations.
This requires better tooling on the (merchant’s) wallet side.
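A minimal sketch of what such a check could look like on the merchant side; the threshold and the inputs are assumptions, and in reality the size and fee would come from parsing the transaction the customer handed over:

```cpp
// merchant_check_sketch.cpp -- accept/reject a payment before it hits the network.
// Threshold and structure are assumptions for illustration.
#include <cstdint>
#include <cstdio>
#include <string>

struct PaymentCheckResult {
    bool accepted;
    std::string reason;   // sent back to the customer's wallet when rejected
};

PaymentCheckResult checkPayment(uint64_t feeSatoshi, uint32_t sizeBytes,
                                double minFeePerByte)
{
    double feePerByte = double(feeSatoshi) / sizeBytes;
    if (feePerByte < minFeePerByte)
        return { false, "fee too low, please raise the fee and resend" };
    return { true, "" };
}

int main()
{
    // Merchant policy (assumed): require at least 1 satoshi per byte for zero-conf.
    PaymentCheckResult r = checkPayment(100, 400, 1.0);
    std::printf("accepted: %d reason: %s\n", r.accepted, r.reason.c_str());
    return 0;
}
```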
Q: How can a wallet know what fee is a good fee to get mined?
Many ideas have been going around, and I’d expect a lot of fear comes from looking at the BTC situation, where fee calculation is simply a guessing game.
What we should thus make clear is that we need to keep the network as a whole healthy and operating well below its limit. Having many full blocks would really make this an impossible problem to solve.
Without permanently full blocks the algorithm becomes rather simple: any observer that has the full chain can determine the effective minimum fee, which can then be relayed via some API.
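A rough sketch of such an observer (the data layout and the “not full” cutoff are my own assumptions): when blocks are not full, the fee floor that recent blocks actually applied is a usable estimate of the effective minimum:

```cpp
// min_fee_observer_sketch.cpp -- derive an effective min fee from recent blocks.
// Data layout and the 'not full' cutoff are assumptions for illustration.
#include <algorithm>
#include <cstdio>
#include <vector>

struct BlockFeeStats {
    double lowestFeePerByte;   // lowest fee rate of any non-coinbase tx mined
    double fillRatio;          // block size / miner's configured max size
};

// Take the highest of the per-block minimums seen recently: that approximates
// the strictest fee floor any miner currently applies. If blocks are routinely
// full this stops being meaningful, which is exactly why the network should
// operate well below its limit.
double effectiveMinFee(const std::vector<BlockFeeStats> &recentBlocks)
{
    double result = 0;
    for (const BlockFeeStats &b : recentBlocks) {
        if (b.fillRatio < 0.9)                       // skip (nearly) full blocks
            result = std::max(result, b.lowestFeePerByte);
    }
    return result;
}

int main()
{
    std::vector<BlockFeeStats> blocks = {
        { 1.0, 0.05 }, { 0.0, 0.10 }, { 1.0, 0.95 } // last block nearly full
    };
    std::printf("effective min fee: %.2f sat/byte\n", effectiveMinFee(blocks));
    return 0;
}
```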
Simply said, when using zero-conf your merchant will likely force you to pay the minimum fee, regardless of how aged your coins are.
When you send something that may take 10 blocks to get mined, you may try a lower-fee but higher-priority transaction.
Q: How can a full node actually implement this? (aka is there a spec?)
First of all, ideally a lot of work is going to be done on the wallet side as well. But, yeah, a full node:
- Mempool entries get a priority added. The priority is based on the input/output ratio, coin-days-destroyed and fee per byte. User settings are used to turn these into the actual priority number (see the sketch after this list for how this could fit together with inclusion and expiry).
- Mining gets some more properties to determine when to include transactions based on priority (much like the current min-mining-fee). One other property it can use is the existing timestamp of when a tx entered the mempool. User-settable properties (levers for the miner to adjust) can include the ability to delay inclusion of low-priority transactions for a certain time, and the ability to include a certain amount of kilobytes of high-priority, low-fee transactions. Which properties are important to expose should definitely get more research.
- Low-priority transactions are relayed with a rate-limiter (as was done in the Satoshi code). We might need to adjust the original limit to the block size and expected tx/sec on the network; take something like EB (--blocksizeacceptlimit).
- Mempool expiry goes down to maybe 6 hours, after which people can re-send their transaction if it never got included, possibly with a changed fee. Transaction priority may change this time, so that really low-priority transactions get removed faster (say, after 3 hours or 20 blocks).
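To tie these together, here is a minimal sketch of how a block-template selector and the expiry rule could use such a priority number. All the names, thresholds and default lever values are placeholders of my own, not a spec; which properties to actually expose is, as said, a research question:

```cpp
// mining_levers_sketch.cpp -- illustrative only; thresholds and lever names are
// placeholders, not a spec.
#include <cstdint>
#include <cstdio>

struct MempoolEntry {
    double priority;          // the number computed from io-ratio, CDD, fee/byte
    double feePerByte;        // satoshi per byte
    uint32_t sizeBytes;
    int64_t timeEntered;      // unix time the tx entered the mempool
};

// Miner-adjustable levers (defaults here are assumptions).
struct MinerLevers {
    double minMiningFeePerByte = 1.0;    // like the current min-mining-fee
    double lowPriorityThreshold = 0.5;   // below this a tx counts as low priority
    int64_t lowPriorityDelay = 3600;     // keep low-prio txs waiting this long (s)
    uint32_t freeBytesPerBlock = 50000;  // budget for high-prio, low-fee txs
};

// Decide whether a tx goes into the current block template.
// 'freeBytesUsed' tracks how much of the low-fee budget this block already spent.
bool includeInBlock(const MempoolEntry &tx, const MinerLevers &levers,
                    int64_t now, uint32_t &freeBytesUsed)
{
    if (tx.feePerByte >= levers.minMiningFeePerByte)
        return true;                                   // pays enough, always in
    if (tx.priority < levers.lowPriorityThreshold &&
        now - tx.timeEntered < levers.lowPriorityDelay)
        return false;                                  // low prio: make it wait
    if (freeBytesUsed + tx.sizeBytes > levers.freeBytesPerBlock)
        return false;                                  // free budget exhausted
    freeBytesUsed += tx.sizeBytes;
    return true;                                       // high prio, low fee: in
}

// Expiry: ~6 hours normally, faster for really low-priority transactions.
int64_t expiryTime(const MempoolEntry &tx)
{
    const int64_t sixHours = 6 * 3600, threeHours = 3 * 3600;
    return tx.timeEntered + (tx.priority < 0.1 ? threeHours : sixHours);
}

int main()
{
    MinerLevers levers;
    uint32_t freeBytesUsed = 0;
    MempoolEntry agedButFree { 2.0, 0.0, 300, 1000 };  // high priority, zero fee
    std::printf("include: %d expires at: %lld\n",
                includeInBlock(agedButFree, levers, 2000, freeBytesUsed),
                (long long)expiryTime(agedButFree));
    return 0;
}
```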