Lower the default relay fee, create a fee estimation algorithm

Interesting point of view. It’s obvious that a lot of people would not mind stuff being free, so that’s half of your argument gone. Your point that it would somehow not be reliable has already been solved in Thinking about TX propagation.
The concept of “Free transactions” also misses the bigger picture I tried to explain. This is not just about fees; this is about the wider picture with several metrics. Transactions that have very, very low priority need to pay fees. Transactions that have high priority may skip the fee, but may have to wait an extra block or two.

This is thus a false assumption, nobody was suggesting a system.

I invite you to look at my post above
where I explain the economics in which miners are fully allowed to be greedy without hurting the system. No need to trust or distrust.

This statement is demonstrably false. Imagine one restaurant charging twice the price of all the others. The only effect is that it will not get any customers and thus even less income. Even if 90% of the miners ask for higher fees, the only effect will be that people paying lower fees wait for more confirmations.
Your assertion is not supported by basic economics.

Yes. And more to the point, the fee/price should be left to the open market to allow discovery of the optimum. Centrally planned fees, costs, prices, etc. have been tried and they always cause massive issues. Free market, learn to trust it :slight_smile:

Interesting thoughts.
First of all, I don’t think that asking people to pay a minimal fee is equal to discouraging them from participation.
My design is that a mining policy prioritizes transactions for inclusion. And transactions that need to be in the same block as 100 of their parent transactions can get that guaranteed by paying a small fee. As the system grows and we reach 10k tx/sec, the fee per transaction can be very low while miners still get a good income every 10 minutes. So don’t worry about this being anything like a high fee.
Like freetrader said: they are paying more because they use more block space.
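
A minimal sketch of what such a selection policy could look like (all names, weights and numbers below are invented for illustration, not taken from any node implementation):

```cpp
// Hypothetical sketch (not code from any actual node): a miner-side
// selection policy that ranks mempool candidates mainly by fee rate, but
// lets a high priority score substitute for fee. Names, weights and
// numbers are made up for illustration only.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

struct Candidate {
    int64_t feeSats;    // fee paid, in satoshis
    int64_t sizeBytes;  // serialized transaction size
    double priority;    // e.g. a coin-age based score
};

// Combined score: fee rate in sat/byte, plus a weighted priority term.
double SelectionScore(const Candidate &c, double priorityWeight) {
    double feeRate = static_cast<double>(c.feeSats) / c.sizeBytes;
    return feeRate + priorityWeight * c.priority;
}

int main() {
    std::vector<Candidate> mempool = {
        {0, 250, 5e8},    // zero fee, but old high-value inputs
        {250, 250, 1e3},  // 1 sat/byte, fresh coins
        {50, 500, 1e5},   // 0.1 sat/byte, moderate priority
    };
    const double priorityWeight = 1e-9;  // tuning knob, purely illustrative

    // Fill the block template in descending score order.
    std::sort(mempool.begin(), mempool.end(),
              [&](const Candidate &a, const Candidate &b) {
                  return SelectionScore(a, priorityWeight) >
                         SelectionScore(b, priorityWeight);
              });

    for (const auto &c : mempool)
        std::cout << "fee=" << c.feeSats << " size=" << c.sizeBytes
                  << " score=" << SelectionScore(c, priorityWeight) << '\n';
}
```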

Abuse is a good point to take into account should mining software implement this idea: doing 1 → 300 and then 300 → 1 should not end up costing the same as a single 1 → 1. I’m not worried about this being too hard; finding some good curves to fine-tune it doesn’t seem like a big problem, and since this is not a consensus part we can iterate and test.
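
To make that requirement concrete, here is a toy minimum-fee rule (entirely hypothetical, just one possible curve) with a per-input and a per-output component; with any positive components, the fan-out/fan-in round trip is strictly more expensive than a single 1 → 1 transaction:

```cpp
// Toy illustration of the requirement above (entirely hypothetical, not a
// concrete proposal): if the minimum fee has a per-input and a per-output
// component, the 1 -> 300 plus 300 -> 1 round trip is always strictly more
// expensive than a single 1 -> 1 transaction.
#include <cstdint>
#include <iostream>

int64_t MinFeeSats(int nInputs, int nOutputs) {
    const int64_t base = 100;  // flat component, illustrative value
    const int64_t alpha = 50;  // per-input cost, illustrative value
    const int64_t beta = 34;   // per-output cost, illustrative value
    return base + alpha * nInputs + beta * nOutputs;
}

int main() {
    const int64_t fanOut = MinFeeSats(1, 300);  // 1 -> 300
    const int64_t fanIn = MinFeeSats(300, 1);   // 300 -> 1
    const int64_t simple = MinFeeSats(1, 1);    // 1 -> 1

    std::cout << "round trip: " << (fanOut + fanIn) << " sats, plain 1->1: "
              << simple << " sats\n";
    // With any positive per-input and per-output costs the round trip can
    // never be as cheap as the single transaction.
}
```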

Notice that this would not add any complexity to the “system”. It would be a better algorithm that miners use to build a block. Even if only a small number of miners use it, the benefit would be that the system as a whole becomes more responsive to economic incentives. None of the other components need more complexity.

I’m not sure whether tom forgot to read most of my actual post before picking out-of-context quotes to straw-man against, or deliberately chose not to. But I’ll encourage other interested readers to go through my original post.

To be honest, I felt the same way when you simply dismissed my argument with rather strong but scary statements in your reply. To be fair, you did start out with a disclaimer that you were going against everyone.

To be constructive: your argument rests on the conclusion that free transactions are pointless, based on the assertion that they have no reliability.

This assertion makes no sense to me. Reliability is not impacted. Most likely you have not understood the proposal to expand priority so that it is not only about fees but also about other factors. How would a different way of assigning priority affect reliability?

I would also really like to ask you to be more constructive and cooperative. There is a proposal, and it would be nice if you tried to understand it rather than just dismiss it as if I were a junior with no clue. You are exhausting to work with. Please, questions instead of dismissive statements.

A quick question to better understand it. Let’s say Alice’s node has a 1 sat/KB relay fee and Bob runs a BCHN node with default settings. Someone gives Alice a TX paying 1 sat/KB and she relays it to Bob. Will Bob’s BCHN node accept it into its mempool?

Will Bob’s BCHN node accept it into its mempool?

No.

From the BCHN method that accepts a transaction into the mempool (or not):

// No transactions are allowed below minRelayTxFee except from
// disconnected blocks.
// Do not change this to use virtualsize without coordinating a network
// policy upgrade.

That second sentence should be taken generally imo, as "do not change the policy here without coordinating a network upgrade".
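
For illustration only, a stripped-down sketch of the gist of that check; this is not BCHN’s actual code, and the type and function names here are invented:

```cpp
// Stripped-down sketch of the gist of that check (this is NOT BCHN's actual
// code; FeeRate and the function signature are invented for illustration).
#include <cstdint>
#include <iostream>

struct FeeRate {
    int64_t satsPerKB;
    int64_t GetFee(int64_t sizeBytes) const {
        return satsPerKB * sizeBytes / 1000;
    }
};

// Reject any transaction paying less than the node's min relay fee, unless
// it came from a disconnected block during a reorg.
bool PassesMinRelayFee(int64_t feeSats, int64_t txSizeBytes,
                       const FeeRate &minRelayTxFee,
                       bool fromDisconnectedBlock) {
    if (!fromDisconnectedBlock && feeSats < minRelayTxFee.GetFee(txSizeBytes))
        return false;  // "min relay fee not met"
    return true;       // all other policy/consensus checks omitted
}

int main() {
    const FeeRate bobDefault{1000};  // default policy: 1000 sats per KB
    const FeeRate aliceCustom{1};    // Alice's node: 1 sat per KB

    // A 1000-byte transaction paying 1 satoshi, i.e. 1 sat/KB:
    std::cout << "Alice accepts: "
              << PassesMinRelayFee(1, 1000, aliceCustom, false) << '\n';  // 1
    std::cout << "Bob accepts:   "
              << PassesMinRelayFee(1, 1000, bobDefault, false) << '\n';   // 0
}
```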

Thank you! But if somebody mines this 1 sat/KB transaction, then the block will still be accepted, right?

Dammit, I can’t have an answer less than 20 characters.

My answer would have been just:

Yes.

Adding to the BCHN answer: they do not allow anything below the relay fee.

This is not what the Satoshi client did for many years. This behavior was changed later.
In Flowee this still lives: libs/server/validation/TxValidation.cpp · master · Flowee / thehub · GitLab

Relevant upstream: ⚙ D4745 remove priority free transactions mechanism (currently off by default)

The proposal I wrote above is essentially to go back to what Satoshi designed: prioritize transactions based not only on fee, but also on hard, measurable things that cannot be spoofed. It is my opinion that this would create a more stable system.
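
For readers who have not seen it, a simplified sketch of that legacy coin-age priority (a rough paraphrase, not the exact Satoshi or Flowee code):

```cpp
// Rough paraphrase of the legacy Satoshi-client priority (not the exact
// code): sum of each input's value times its age in confirmations, divided
// by transaction size. Old clients admitted a transaction to the "free"
// area of a block when this exceeded COIN * 144 / 250, i.e. roughly one
// coin aged one day spent in a 250-byte transaction.
#include <cstdint>
#include <iostream>
#include <vector>

static const int64_t COIN = 100000000;

struct Input {
    int64_t valueSats;
    int confirmations;  // age of the coin being spent
};

double GetPriority(const std::vector<Input> &inputs, size_t txSizeBytes) {
    double priority = 0;
    for (const auto &in : inputs)
        priority += static_cast<double>(in.valueSats) * in.confirmations;
    return priority / txSizeBytes;
}

bool AllowFree(double priority) {
    return priority > COIN * 144 / 250;
}

int main() {
    const std::vector<Input> agedCoin = {{1 * COIN, 288}};  // ~2 days old
    const std::vector<Input> freshDust = {{1000, 1}};       // tiny and fresh

    std::cout << "aged coin may go free:  "
              << AllowFree(GetPriority(agedCoin, 250)) << '\n';   // 1
    std::cout << "fresh dust may go free: "
              << AllowFree(GetPriority(freshDust, 250)) << '\n';  // 0
}
```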

While there may be some fancy, more permanent way to handle the min relay fee through an algorithm, I think the simplest approach would be the best. We already use this simple approach when deciding on block size, which is to just change the variable through coordination. Just lower the min relay fee variable. I suggest decreasing the min fee from 1,000 sat per KB to 100 sat per KB, a factor-of-10 decrease. And if I remember correctly, lowering the min relay fee will also lower the dust limit by the same factor.
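
If I recall the standard dust rule correctly (an output is dust when spending it would cost more than a third of its value, counting a ~34-byte P2PKH output plus the ~148-byte input that later spends it), the relationship looks roughly like this sketch, so a 10x lower relay fee drops the dust limit from 546 sats to about 54:

```cpp
// Sketch of the long-standing dust rule as I understand it (assumption: an
// output is dust when spending it would cost more than a third of its value,
// counting a ~34-byte P2PKH output plus the ~148-byte input that later
// spends it). Lowering the relay fee 10x lowers the dust limit 10x as well.
#include <cstdint>
#include <iostream>

int64_t DustThreshold(int64_t relayFeeSatsPerKB) {
    const int64_t outputPlusSpendBytes = 34 + 148;  // = 182 bytes
    return 3 * relayFeeSatsPerKB * outputPlusSpendBytes / 1000;
}

int main() {
    std::cout << "dust @ 1000 sat/KB: " << DustThreshold(1000) << " sats\n";  // 546
    std::cout << "dust @  100 sat/KB: " << DustThreshold(100) << " sats\n";   //  54
}
```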

EDIT: To elaborate on why I prefer a simple approach over a more algorithmic one: the algorithmic suggestions here may take a longer time to discuss, research, and implement. A simple approach will be less risky and will gather support much more quickly than a complex, untested solution.

Also, I would like to add that SLP tokens are impacted by the min relay fee through the dust limit. Each time my application sends someone an SLP token it costs me the network fee + the dust limit, which is 3x more expensive than a plain BCH transaction.

A “simple” adjustment also leaves node developers open as targets for continued future lobbying/harassment no matter the actual situation, such as what led to this thread. There is no pressing, urgent need to adjust minfee, a parameter deeply intertwined with 0-conf reliability and the general perception of network reliability. Yet node devs who could be spending their time working on hard scaling to stay ahead of congestion - the actual specter that will kill low-fee situations very quickly - are socially pressured into reading and participating in threads like this way ahead of any real need.

Solutions that do not free node developers from future lobbying efforts like this, imo, cannot be good solutions.

I agree that, as of today, the min relay fee is not a pressing issue. I do share the concerns that OP has pointed out, namely that the market price of a coin can change drastically in a very short period of time. If Bitcoin Cash rises sharply in price over a short period, the min relay fee will become a really big issue really fast. The same problems you would get if blocks filled up.

It would be smart to be proactive on min relay fee like we are with block size by leaving a lot of room for growth.

I agree with you that devs should not spend a whole lot of time on this issue, which is why I think the simplest approach of decreasing the min relay fee would save people time and effort.

I don’t have a response on developer lobbying and harassment. That sounds like a separate issue entirely.

@im_uname, you mention “a parameter deeply intertwined with 0-conf reliability and general perception of network reliability” and that “there is no pressing, urgent need to adjust [it]”.

Which means that this parameter is very important in your opinion and must not be changed in haste. Why would it then be better to start thinking about this only when there is a “pressing, urgent need”? Are you sure it’s not better to think about this now, when there is no pressure?

I really don’t think people think better under pressure.

This pressure might materialize at any time, and within days.

Devs will suddenly be forced to come up with a solution, right now, for a parameter that (in your own words) is “deeply intertwined with 0-conf reliability and general perception of network reliability”.

…or people will leave for a network that offers better fees, which is what we see with Ethereum and alternatives like BSC, Solana, etc. There, the need is now pressing and urgent, and nobody can do anything about it, because it’s way harder to change now.

@im_uname I’m also very surprised by this attitude of “BCH devs don’t care about what you, the actual user, want or foresee, let’s only work on things that are on fire right now and wait until something new catches fire… oh and please refrain from contacting/lobbying BCH devs about your petty problems in the future, they have more important things to do than read your babble about BCH” (paraphrased, obviously, but the gist is there).

There is no pressing, urgent need to adjust minfee, a parameter deeply intertwined with 0-conf reliability and the general perception of network reliability. Yet node devs who could be spending their time working on hard scaling to stay ahead of congestion - the actual specter that will kill low-fee situations very quickly - are socially pressured into reading and participating in threads like this way ahead of any real need.

Solutions that do not free node developers from future lobbying efforts like this, imo, cannot be good solutions.

This is the first time I have my doubts about BCH since the split. Very surprising. Eye-opening.

Both are important. If we have a 1TB block limit and the actual capability to send and process 1TB blocks, yet only 100KB of transactions per block and $50,000 per coin (~BTC) with transaction fees of 1 sat/byte - which would mean a fee of ~$0.20 for the cheapest transaction and something like $10-50 for consolidation transactions - none of that 1TB stuff will save BCH. Both are important, in my opinion.
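
A quick worked version of that arithmetic (the transaction sizes here are illustrative assumptions):

```cpp
// Worked version of the arithmetic above; the transaction sizes are just
// illustrative assumptions.
#include <cstdint>
#include <iostream>

double FeeUsd(double satsPerByte, int64_t txSizeBytes, double coinPriceUsd) {
    const double feeSats = satsPerByte * txSizeBytes;
    return feeSats * coinPriceUsd / 100000000.0;  // 1e8 satoshis per coin
}

int main() {
    const double price = 50000.0;  // USD per coin, as in the example above
    std::cout << "small payment (~400 B):    $" << FeeUsd(1.0, 400, price)
              << '\n';  // ~$0.20
    std::cout << "consolidation (~60,000 B): $" << FeeUsd(1.0, 60000, price)
              << '\n';  // ~$30
}
```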

I can shut up and avoid getting devs “socially pressured into reading and participating in threads like this”, but it won’t change the fact that we need to understand how we’re going to solve this problem when it arises, instead of dismissing it and having this same talk while BCH businesses are crashing and burning under high fees with 99.9% empty blocks. We’ll suddenly have to coordinate an emergency change and an emergency network upgrade around a parameter that’s uber-important.

I seriously thought that “low fees forever” was one of our key selling points. Boy, was I mistaken!

P.S. There doesn’t seem to be a way to “close” this thread, so I’m unfollowing it. That reply was the most disappointing thing I’ve ever read about BCH.

There are two types of failure, and both apply to fees in different ways.

  1. Gradual, linear failure: a fee rise due to coin value/minrelayfee rise is of this type. The fee rises linearly with price, which goes up and down but does not fail catastrophically (see #2 below). Sometimes the price does change dramatically, even manyfold, in a matter of weeks; during such times people are more concerned about maintaining their coin’s value than whether the fees are 0.5 cents or 2 cents anyway. There is no hard “failure line” beyond which things suddenly become unusable - things can become stressed, but there will always be time to act.

  2. Qualitative, catastrophic failure: failure to scale blocksize safely falls into this category. There is a huge difference between averaging 28MB and 32MB, while the rise from 2MB to 28MB will barely feel like anything. If we hit a prolonged period at 32MB we have outright failed no matter what: the much-feared fee market can pump fees to arbitrary levels and choke off businesses dramatically. This does require planning far ahead, because the failure is decidedly not graceful and likely fatal.

Ethereum and BTC both failed at #2. I have yet to see any actual case of a chain failing by #1, no matter the price fluctuations. I really hope you see why it’s unproductive to mix different categories of problems.

I’m not sure how you came to this conclusion from reading the post. I was trying to make the point that solutions need to be human-free and long-term; otherwise developers - not the best people to sit in committees listening to complaints about how knobs should be turned 50% or 80% lower - will be forever bound to the kind of scorn you expressed here, no matter what they do and no matter how much time they sink into this. It’ll keep coming up over and over and over again. One-time knob-turning solutions that don’t free people from this vicious cycle aren’t real solutions.

Ethereum and BTC both failed at #2.

ETH devs: “let’s ignore the fee problem until it becomes real”
(“Qualitative, catastrophic failure” comes)
ETH devs (later): “Oh, the problem is real, but it seems we’ll require a few years and a machinery that blows everyone’s minds to solve it!”
ETH fees: $100+
Users: What’s that Solana and BSC thing?

BCH devs: “let’s ignore the fee problem until it becomes real”


I totally agree that one-time solutions don’t solve anything - hence the word “algorithm” in the topic title. Yet you dismissed the problem entirely: “There is no pressing, urgent need to adjust minfee … node devs who could be spending their time working on [other problem] are socially pressured into reading and participating in threads like this way ahead of any real need”

I disagree that this problem needs to be discussed later, when fees suddenly creep up, users leave, and we find ourselves in THIS discussion only to figure out that none of us has ANY IDEA how to solve it. Literally, we don’t have any good idea.

So, we could have this discussion now (“way ahead of any real need”), or when fees explode under our feet and people yell and leave - but since we’ll still be the same humans, I guarantee that the discussion will be exactly the same then. Nobody will have any idea how to solve it fast. We will not have any immediate solution. Look at the ETH devs. (Or we could discuss it now, without dismissing it as “way ahead of any real need”.)

In the second case, we’ll possibly have a huge network of nodes that might be unwilling to upgrade or something. We might even find ourselves in a situation where opinions on what to do next differ so wildly that we’ll experience yet another split. (As you can see, it is a contentious issue.)

We’re in the same situation that BTC experienced: waiting to bump up the block size for so long that it became impossible and became a huge issue.

We’re in the same situation that ETH experienced: dismissing the fee issue for later and then finding it impossible to solve as fast as they thought they could.

This is what scares me. BCH is tiny. If we already have the attitude of not discussing inevitable (again, inevitable!) problems because devs want to work on something else, think about what will happen when 1 million people use BCH… 1 billion? Think about how much harder it will be to discuss any changes then. We have no future if we don’t want to discuss it now, when we’re tiny and barely a few (mostly cooperative) voices. It will be 1000x harder to discuss when we’re 10x-100x bigger. We will repeat the same mistake that BTC and ETH made.

I’d like to apologize to you (and anyone else) who was offended by this exchange - it’s definitely not my intention to say that people should not think about this at all. Considerations about a longer-term fee policy have been going on for far longer than the recent episode; in fact, I think Shammah was even convinced enough to work on consolidation-encouraging fee policies back in 2019 - you might be able to find that PR in the ABC repo.

This recent episode, though, was a very different kind of stress - it’s not “we should consider different solutions carefully”, but rather a bunch of ultimatums that say “lower it now or you failed” and “fee is already too high”. Folks were put under this sudden, and I’d argue undeserved, pressure from high-profile people seemingly out of nowhere, instead of getting the space to come up with good solutions. Maybe you haven’t seen those, in which case I’ll have to apologize again.

Thank you! I apologize too if I went over the line somewhere; that was not my intention either.

I don’t think I’ve ever put any blame on anyone for fees - they are quite low - except for some cases like consolidation (which could be cheaper, since it’s actually good for the network to decrease the UTXO set size, which is the only thing that will eventually matter, but that’s another topic).

I want the community to come up with a plan for the future about how we will solve this problem, when it comes.

I would certainly prefer that we lower the fees 10x-100x now/next upgrade (since, as I explained, it will be much harder to do later), especially seeing that users are already unhappy, as you describe it: “lower it now or you failed” and “fee is already too high”. I haven’t seen those, but aren’t BCH devs here for the users? Shouldn’t they listen to them and make sure they [the users] can do what they do? I mean, I’m pretty sure that these people didn’t voice these opinions just because they are bored. They have a problem. BCH devs can solve it. Maybe a 10x lower fee will not solve it forever, but it will surely go a long way now. As I described above, miners won’t care now, since fees aren’t even noticeable in their earnings.

Alternatively, set the default fee terribly low (1 sat/MB) - that’s the solution that won’t require any developer intervention in the future. Miners will have to adjust it by hand. They will, since they want the network to work; they won’t try to destroy their own business.

Right now we have a 1 sat/byte fee, which is as arbitrary as the 1MB block limit, just from the other side.

Though, of course, if there are better solutions - let’s discuss them. So far, it doesn’t seem like we have any universally acceptable better solution.
