CHIP 2021-05 Minimum Fee Rate Voting Via Versionbits

Some feedback on the proposal:

  • I think that voting up or down is too stateful and complicated.
  • IF we were to go with a scheme like this, I would prefer we use at least 3 bits and just have a fixed array of 7 or 8 values for fee rates such as: 1 sat/kB, 5, 10, 50, 100, 500, 1000.
    • This covers just about anything anybody would want to do and is simpler to evaluate (all you need is the last 1008 headers; see the sketch after this list).
  • An up/down mechanism requires you to evaluate every fee rate that ever existed – meaning you need to download headers since the beginning of this scheme’s deployment if you wanted to evaluate the fee rate trustlessly. (After a year of deployment that would be over 50k headers!).
  • If you just have an absolute array you only need 1008 headers. This is much easier to evaluate for an SPV wallet. It is also less bug-prone for devs.
  • Having a fixed array of values makes it very clear what the actual relay fee is without rounding errors and other things accumulating.
  • It’s easier to configure as well. Just set your node to a target and fire and forget.
  • How do you configure 50 sats/kB?
    • What happens when the fee rate is 44 sats/kB? Does your node vote up?
    • If it does it will overshoot. (To 55 sats/kB).
    • If it votes “no change” it will forever be unhappy with the undershoot.
  • A fixed array doesn’t have this problem. The user has a set of choices he can make and he will know what to expect at the end based on the choice he made.
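To make that concrete, here is a minimal sketch of what evaluating such a fixed-table vote over the last 1008 headers could look like. The 3-bit field position, the table layout, and the simple-majority rule are assumptions made purely for illustration, not anything specified by the CHIP:

```python
# Illustrative sketch only: evaluate a fixed-table fee vote from the last 1008
# headers. The 3-bit field position, the table values and the simple-majority
# rule are assumptions for this example, not part of any published spec.
from collections import Counter
from typing import List, Optional

FEE_TABLE = [None, 1, 5, 10, 50, 100, 500, 1000]  # sat/kB; slot 0 = "no vote"

def vote_from_version(version: int) -> Optional[int]:
    # Hypothetical: read a 3-bit vote out of the header's version field.
    return FEE_TABLE[(version >> 13) & 0b111]

def evaluate_min_fee(window_versions: List[int], current_fee: int) -> int:
    votes = Counter(v for v in map(vote_from_version, window_versions) if v is not None)
    if not votes:
        return current_fee
    value, count = votes.most_common(1)[0]
    # Only switch if a majority of the 1008-header window agrees on one value.
    return value if count > len(window_versions) // 2 else current_fee
```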

Also, I think a simple majority might be beneficial, for the reasons @BigBlockIfTrue mentioned: you can have a situation where a previous majority voted for something and now only a minority supports it but we’re stuck with it.


As for @alex_asicseer’s thoughts: I am all for just doing it immediately or soonish without all of this, FWIW. I actually think we are overshooting the fee rate drastically now and we can just agree as a network to lower it to say 100 sats/kB or whatever right now and nobody would even get mad.

My two cents.


Thanks for the feedback, @cculianu.
I am almost persuaded by the “use a simple majority” argument, and it’s very likely I will modify the CHIP in that regard unless I receive strong arguments against it.

I think that voting up or down is too stateful and complicated.

We will have to disagree there, because I think the alternatives, except for a one-time reduction, still involve voting up or down and are, in that regard, not significantly less stateful.

IF we were to go with a scheme like this, I would prefer we use at least 3 bits and just have a fixed array of 7 or 8 values for fee rates such as: 1 sat/kB, 5, 10, 50, 100, 500, 1000. This covers just about anything anybody would want to do and is simpler to evaluate (all you need is the last 1008 headers).

You may need more than 1008 headers anyway, since you might be into the start of a new period and you’d need not the last 1008, but (1008 + however many headers past the last vote evaluation you’re at).

Practically I don’t think it makes much difference. As I explained in the CHIP, SPV wallets could cache the last vote result and come with checkpoints (e.g. “at checkpointed block X, the minfee was Y”), which means they don’t need to evaluate all the fee changes since this CHIP is put into effect, but only those since they’ve last been updated.
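To illustrate, here is a rough sketch of that checkpointing idea. All names and the checkpoint value are hypothetical, and evaluate_period stands in for whatever per-period tally the CHIP ends up specifying:

```python
# Sketch of the checkpoint idea: an SPV wallet ships a (height, min_fee) pair
# and only replays vote evaluations for complete periods after that point.
# The checkpoint value and the period length are illustrative assumptions.

CHECKPOINT = (700_000, 1000)  # hypothetical: "at block 700000 the minfee was 1000 sat/kB"

def min_fee_at_tip(headers_since_checkpoint, evaluate_period, period=1008):
    _height, fee = CHECKPOINT
    for start in range(0, len(headers_since_checkpoint), period):
        window = headers_since_checkpoint[start:start + period]
        if len(window) < period:
            break  # incomplete period at the tip: no new evaluation yet
        fee = evaluate_period(window, fee)  # per-period tally as defined by the CHIP
    return fee
```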

An up/down mechanism requires you to evaluate every fee rate that ever existed – meaning you need to download headers since the beginning of this scheme’s deployment if you wanted to evaluate the fee rate trustlessly. (After a year of deployment that would be over 50k headers!).

No, in most cases you don’t, for the reason of caching/checkpointing given above.

Even for 50,000 headers: a “whopping” 4MB, or less than 3 floppy disks (anyone remember the 1.44MB ones?).
This is not even a concern to me. Such software needs to obtain the headers anyway to decide which chain to follow.
This data just falls out as a byproduct, and the calculation is dirt cheap and fast.
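For anyone who wants to sanity-check that figure (block headers are 80 bytes each):

```python
# Back-of-the-envelope size of a year's worth of headers.
headers = 50_000
size_bytes = headers * 80           # 80-byte block headers -> 4,000,000 bytes
print(size_bytes / 1e6)             # 4.0 MB
print(size_bytes / 1_440_000)       # ~2.8 nominal "1.44 MB" floppy disks
```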

If you just have an absolute array you only need 1008 headers. This is much easier to evaluate for an SPV wallet. It is also less bug-prone for devs.

That is an advantage of signaling for values in an absolute array, yes.

I think the complexity is not much less than simple up/down voting and therefore not really less bug-prone.
But I may be convinced otherwise if I see a CHIP for it that has dramatically simpler proposed code etc.

Having a fixed array of values makes it very clear what the actual relay fee is without rounding errors and other things accumulating.

Yes, that is an advantage of a fixed fee table, you can just look up the new fee value without math.

This could really be a successful alternative to this CHIP. I invite anyone to specify it as such.

It’s easier to configure as well. Just set your node to a target and fire and forget.

Yet it also does not solve the undershoot/overshoot problem.

How do you configure 50 sats/kB?

You could configure it as a target as per @im_uname’s suggestion so that your node always votes “in the direction of 50 sat/kB”. Logically your node’s votes would oscillate around the target as long as it is not hit exactly, if the implementation does not make a choice to say “closer than X to the target value is good enough for me to vote ‘no change’”.
I view a target fee setting via parameter, as proposed by @im_uname, as a good interface suggestion for implementors of this CHIP.
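As a rough sketch of what such a target parameter could mean for the voting logic (the function, the optional “close enough” tolerance, and the +25% step size are illustrative assumptions, not implemented options):

```python
# Hypothetical target-based voting: the node votes in the direction of its
# configured target and can optionally treat "close enough" as "no change".
def choose_vote(current_fee: float, target_fee: float, tolerance: float = 0.0) -> str:
    if abs(current_fee - target_fee) <= tolerance:
        return "no change"  # dead band stops the oscillation around the target
    return "up" if current_fee < target_fee else "down"

# Example: with the network at 44 sat/kB and a 50 sat/kB target the node votes
# "up"; a +25% step would then overshoot to 55 sat/kB.
print(choose_vote(44, 50))               # "up"
print(choose_vote(55, 50))               # "down"
print(choose_vote(48, 50, tolerance=5))  # "no change"
```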

What happens when the fee rate is 44 sats/kB? Does your node vote up?

Yes.

If it does it will overshoot. (To 55 sats/kB).

Yes.

If it votes “no change” it will forever be unhappy with the undershoot.

Maybe not. There can still be one or more fee walks that reach exactly 50 sat/kB and make your node supremely happy.
How long that may take exactly depends on the choices made by other network participants, but it is not guaranteed to take forever.

A fixed array doesn’t have this problem.

Disagree. Taking your example array, if the user wants 200 or 250 sat/kB, that’s simply not a value they will ever be able to set or get.

The user has a set of choices he can make and he will know what to expect at the end based on the choice he made.

Half agree. Yes, they can make a choice, but I think an individual “voter” will have greater uncertainty about the outcome, since it can be any of the values in the array, and not simply one of three choices (current fee or one of the new values computed for “up” or “down”).
As I understand your proposal with the fixed array, the outcome still depends on the choice of all others who generate blocks. And they can vote for any value in the array.

Overall it’s not a big problem as people will be able to see what votes come in over the time of the voting window, and can set their expectations accordingly.

So I’m not saying a fixed array scheme couldn’t work. Only that I think it makes its own set of tradeoffs, beginning with granularity and choice of values. And we can discuss those in detail once someone makes a concrete proposal to that effect.


No, they can not.

Your step-by-step is mostly accurate, except for the fact that we still have a min-relay-fee property that is essentially putting up a wall between wallets and cheaper miners.

It is like you advertise for cheaper food, but it is not available in any supermarket.

The relay-fee level is network-wide, and your CHIP is also network-wide. Anyone wanting to go under that can negotiate (a “2 foxes and a sheep voting on what to eat” setup, as posted elsewhere)…

The problem I have with it is that you (as a BCHN core member) have the power to relax the network fees without hurting the network (we have 5 years of empirical data on that) using the strategy posted in this reply of mine.
Yet you choose to go and add an extra layer of control that will affect the entire network, because any transaction lower than X-fee will simply not be relayed to miners.

I get what you’re saying @tom .

Can I summarize your position as “there is no need to have a coordination scheme for such a minimum, just let market participants set what they like” ?

Therefore you see this CHIP as an unnecessary instrument of control which does not fully meet your definition of a free market.

My position is that participants, if unhappy with the minimum, will either introduce CHIPs to lower the fee floor (perhaps right down to its absolute floor of zero) or just not implement the CHIP and carry on using the existing configuration options to set what they like.

Right now, it seems people value the fact that transactions can simply adhere to some minimum fee rate and be sure to propagate well.

Those who go below this rate have been informed that they are at risk of double spend attacks against them, and with more rollout of awareness features for double spend, that may become a safer option.

I raised this CHIP because there was pressure to reduce the fees, and I’d prefer to see it happen in a coordinated (although decentralized) manner that reduces this fragmentation risk as long as Double Spend Proofs are still in beta, implementations still do not cover as many transaction types as they could, and wallet support is minimal.
This may all change, but even then I think it will take time, and some people think a “High Fee Event” may occur on BCH sooner than many would think.


Well, I’m not directly disagreeing, but I think your simple statement has the issue that it is dismissing how the free market works. Innovation has to happen and markets have to be able to react. The point of free markets is that the devs do not put up any boundaries to the free trade. Any imposition is limiting the power of free trade.

I believe that if you do basic things to protect the full node, you can let the market decide on the fees.

Your CHIP is not voluntary, even if you write it to be so, as a minority can not escape it.

Ok, if we go back to the beginnings of Bitcoin, well before Core started adding misfeatures to create a fee-market, this was the default. At that point the market was completely free. Transactions propagated without issues.

If you open BCHN and go to the input-selection dialog (enable it in options if you didn’t already), you see that inputs have a priority. All-high-priority inputs create a transaction that is free from rate-limiting even with zero fees. Back then you could know a transaction would propagate well. But if you had fears, you just added a bit of fee.

If your worry is that you want people to have this peace of mind, be sure that this will not change. As long as there are miners mining low-fee transactions, people can expect low-fee transactions to get mined.

Ehm, what are you talking about? Can you point to such a past informative message?
The current state is that if you go below this level, you will get a message from your nearest full node that your transaction is not acceptable. Most wallets do not even allow you to lower the fees because of this.

Ok, economics-wise this is equivalent to you proposing that wives make a proposal to lower the price of baby food.
The way that the market works, if it is healthy and open, is that someone actually capable of producing at this lower rate puts that on the market, and the wives vote with their money by buying those products instead of those from the competitor.

Your proposal would dictate how all non-mining nodes LIMIT the flow of transactions, at the behest of a majority of miners.
Miners that are more capable of lowering fees do not have the option of doing so because all the non-mining nodes are still tethered to the vote of the majority.

Your position is thus not realistic and not making economic sense.

Generally agreed. I think that some of the current crop of client software uses the min relay fee to protect itself to some extent, and that may just be popular because it is easy to understand and can be used to establish a common policy on the network which hasn’t diverged much from what miners consider acceptable.

I’d like to hear feedback from relevant mining pools whether they would be ok with less reliance on such a min relay fee (perhaps to the point of removing it completely) and instead more using priority as a tool to protect their nodes. However, that is considered outside the scope of this CHIP (as pointed out in the CHIP), even though still worth following up on.

I think min relay fee is a basic form of those protections, and the market can already decide it (by each node operator configuring their node as they see fit, although I think it is currently limited down to 1 sat/kB and no further - but principally this is an implementation concern; that floor could be lowered, as the CHIP already acknowledges but considers out of scope).

A min relay fee > 0 is just the node operator telling the rest of the market that they’ll consider any transactions above that fee. That’s also still part of the functioning of the free market.

I have tried to describe in my previous comment that this is not the case: a minority could escape it by operating nodes which accept a lower fee, and mining those transactions. They would be opening the door to any users wishing to pay a lower fee. So I struggle a bit to see how you conclude that “a minority can not escape it”.
They do not have to adhere to the votes of those implementing this CHIP.

In BCH everyone remains free to choose which policy to run on their node(s), which transactions to mine into their blocks, … none of that is changed by this CHIP.
As @im_uname has pointed out, it is only a coordination tool.

Now, I have taken your point about such a minimum fee possibly not being needed at all if other protections against flooding are adequate. Maybe that is not the only concern or reason for their existence. I would like to hear more opinions from actual miners & pools.

Agree that Core’s fee market was (and is) disastrous, but it’s false to claim that transactions always propagated without issues before Core.

In fact, the 1MB limit was set (temporarily) precisely as an emergency measure against a spam flood which happened because flooding the network was ultra cheap. Core did not introduce that limit, so we should not pretend that everything was fine before them.

Seriously, all of this discussion is off-topic for this CHIP as measures to better protect a node against flooding are out of scope.

Pretty sure this was removed by ABC; I see no such priority displayed for inputs, nor any means of selecting it (I am looking at BCHN v23.0.0). Since I don’t remember BCHN removing it, it must have already happened before February 2020.

If I’m just being dense and not seeing a field, please post a screenshot of what you see and in which BCHN version.

It would be best we continue this in a more suitable thread on re-introduction of rate-limiting through priority.

I am talking about the double spend risks that those incur who lower their fee rate below the currently established policy of 1sat/byte.

This Reddit comment by @emergent_reasons contains a brief discussion of such a situation:

emergent_reasons comments on It is already time to implement fractional satoshis per byte if we want to be money for the world.

I base my “have been informed” on this information being generally available in the public sphere, in discussion threads on this topic.

Again, this proposal does not dictate. If you read the fine print, it says “SHOULD” in the places where it mentions what nodes do with the new relay min fee value.
There isn’t a requirement that they MUST adhere to it.
They are still free to operate their own policies and bear the responsibility of doing so.

Yes, I will point out again that they DO have the option, despite your claims to the contrary.
They can break free of the majority and run their own lower fees pool.

This is not a consensus rule.

I like the idea of not imposing unnecessary constraints on the system too.
However, I would not say that this proposal makes no economic sense - I leave that up to the economic actors in the system to decide.

You should check your facts before writing them here.

You say that someone able to operate cheaper should invest in duplicating the full-node network in order to allow users to maybe be able to find such a place to buy cheaper transactions.

There definitely is something wrong with the economics… :frowning:

You are right, it was preventative instead of an emergency measure.

Why “invest in duplicating the full-node network”?

They are already running nodes. They just set their minfee to lower than others and run like that.

Advertising the existence of your service is nothing new. It does not require duplication of the entire node network. Fallacious argument there.

Yes and no. Miners have their private network of nodes (count depending on size of mining operation, typically pools have a series of them all over the world).
But those mining nodes are not reachable from the Internet. Basic security, you understand. And thus could not be used for the purpose that you state.

If you use 25% increases and 20% decreases, then it can be made equivalent to a fixed fee table. Instead of recursively calculating the fee level, it can be calculated directly without intermediate rounding:

fee rate = 1 sat/B * 1.25 ^ (total number of increases - total number of decreases)

For convenient implementation the exact values of 1.25^n can be stored in a look-up table.
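This works because 0.80 is exactly 1/1.25, so one decrease exactly cancels one increase and only the net count of increases matters. A minimal sketch of the direct calculation (the base rate and the range of precomputed levels are illustrative):

```python
# Direct, non-recursive fee calculation for 25% up / 20% down steps.
BASE_RATE = 1.0  # sat/B, as in the formula above

def fee_rate(total_increases: int, total_decreases: int) -> float:
    return BASE_RATE * 1.25 ** (total_increases - total_decreases)

# For implementations that want exact, consistent rounding, the 1.25^n values
# (for whatever range of n a deployment allows) can be precomputed once:
LEVELS = {n: BASE_RATE * 1.25 ** n for n in range(-20, 21)}
```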


Thanks for the bitcoin.com charts link.

While I didn’t verify the accuracy of their raw data (still something on my todo list but it will take longer) I did download their CSV data and ran it through some stats software at face value.

timeframe                       min   max    mean             sstdev           q1      q3      iqr    median
2009-01-10 to now               135   10223  521.04438065798  402.92721552031  383     615     232    497
2017-08-01 to now               235   10223  600.65883190883  549.29857662921  388     600.5   212.5  481
Last 12 months to 2021-06-04    278   1682   681.88524590164  316.83560051598  382.75  940.75  558    627
Last 6 months to 2021-06-04     278   1163   421.77049180328  118.37921838001  342.5   478     135.5  382
Last 3 months to 2021-06-04     298   830    419.32258064516  104.92359137987  344     481     137    378

sstdev is the sample standard deviation; iqr is the inter-quartile range (third quartile value minus first quartile value)
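For reproducibility, this is roughly what the computation looks like; the CSV file name and column names are my guesses at the bitcoin.com export, not verified:

```python
# Sketch of reproducing the summary stats from the downloaded daily-average CSV.
# File name and column names ("date", "avg_tx_size") are assumptions.
import pandas as pd

df = pd.read_csv("bch-avg-transaction-size.csv", parse_dates=["date"])

def summarize(s: pd.Series) -> dict:
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    return {"min": s.min(), "max": s.max(), "mean": s.mean(),
            "sstdev": s.std(ddof=1),  # sample standard deviation
            "q1": q1, "q3": q3, "iqr": q3 - q1, "median": s.median()}

print(summarize(df["avg_tx_size"]))                                   # whole data set
print(summarize(df.loc[df["date"] >= "2017-08-01", "avg_tx_size"]))   # BCH era only
```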

The mean and median of this data seem to be higher over the entire BCH lifetime up to now than in, say, the last 12, 6, or 3 months, with a tendency toward smaller average transaction sizes.

Over the whole BCH lifetime the mean of the daily averages (I suppose this is what the bitcoincom data contains) is closer to 600 bytes, but more recently closer to 400.

500 bytes as you suggested seems a decent middle ground for an overall average based on this data (very roughly).

I have updated the fee projection table in v0.2 of this CHIP.


If you use 25% increases and 20% decreases, […]

Thanks, I think this is a good simplification (hopefully); I’m planning to adopt it in v0.3.
Will inform when that is ready for review.
