Non-standard transactions & out-of-band miner submission

The BTC community is in contention over Ordinals/Inscriptions (which some BTC people don’t like), and this has created pushback from some node runners under the banner “#FixTheFilters” (update BTC relay policy to disallow these “unwanted” transactions, or at least make them harder to broadcast). They are already doing this at the individual node level, but want standardness filter changes added to Bitcoin Core (which is of course controversial).

The latest outcome of this is that the MARA pool has proactively launched a service, called Slipstream, that lets people pay out of band to have non-standard transactions mined. Basically, it allows the public to get around the filters in advance, and it gives everyone access to out-of-band shenanigans (such as those used by Luxor to mine the original “4MB Wizard” of the Ordinals craze), in a similar way to mempool accelerators. This is ringing alarm bells.

It certainly isn’t a promising sign, but it seems to be the way the mining industry is going, and of course the BCH community doesn’t have much clout or influence there due to low price/hashrate.

At the same time, suggestions are arising in the BTC community about changes to fix some “non-standard vulnerabilities.” See here.

There is already a rough 2019 draft BIP to address some of these issues: see the “BIP XXX” draft from Matt Corallo (his “Great Consensus Cleanup” proposal). Here are the suggested changes:

It seems to me that if non-standard transactions were “fixed” at the consensus layer, it would completely void any miner out-of-band submission service: such services wouldn’t be necessary, and wouldn’t work any more for those transactions, short of the miner attempting a 51% attack over it. The BTC side is in for a nightmare trying to discuss or coordinate such a change, but potentially BCH isn’t.

I am not sure what to make of this. I also can’t really tell to what extent these suggested changes are needed on BCH, or how much they overlap with what BCH has already done. One of the suggestions is to disallow transactions smaller than 65 bytes, and I know BCH already made a consensus fix to prevent issues around 64-byte transactions (a 64-byte transaction is exactly the size of an inner node of the Merkle tree, which lets the two be confused in Merkle/SPV proofs), so I would guess we already have that particular problem resolved, but I can’t tell if the same is true of the other suggested changes.
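For illustration, a consensus-level minimum-size check is tiny. Here’s a sketch in Python (the constant and function names are mine, and actual node implementations are in C++):

```python
# Sketch only: names and structure are illustrative, not any node's actual code.
# An inner Merkle-tree node is two concatenated 32-byte hashes = 64 bytes,
# so consensus must reject any transaction that serializes to exactly that size.
MIN_TX_SIZE = 65  # bytes; BCH has enforced a consensus minimum since 2018
                  # (originally 100 bytes, relaxed to 65 in the 2023 upgrade)

def check_min_tx_size(serialized_tx: bytes) -> bool:
    """Consensus check: reject txs small enough to look like Merkle nodes."""
    return len(serialized_tx) >= MIN_TX_SIZE
```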

Is this something the BCH community should try to be ahead of the game on? Should we also consider fixing non-standard issues by changes straight into consensus? Do we need to, or have we already? Is there a “good reason” for non-standard transactions? And why is there a separation between consensus rules & node relay policies in the first place? That one is a bit out of my depth and I’ve never understood it; maybe someone else can explain.

Would love some opinions.


Some interesting context, for scale. Source

On BTC, non-standard txs have made up only 0.0074% of transactions since Nov 2021. I wonder what the BCH numbers are; I would guess even lower, but perhaps I’d be surprised.


Alright, here I go with some opinions. I’ll qualify this immediately by saying I’m sometimes wrong, I am not a Bitcoin historian, and I don’t have as deep an appreciation of why things are the way they are as some others in the Bitcoin Cash community.

I’ll take your questions in reverse:

Why is there a separation between consensus rules & node relay policies

IMO: the relay policies offer a protective layer with two functions:

  1. originally, to restrict mainstream use to those transaction types whose performance impact on the network is better understood, i.e. safer to scale
  2. layered defense: policy changes can often be adopted quickly and independently by node operators to mitigate new threats, and changes within policy scope do not cause chain splits. Whereas consensus is consensus: changing it is harder and requires wide coordination (see the sketch after this list).
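To make that layering concrete, here is a rough sketch of how mempool acceptance stacks policy on top of consensus, while block validation applies consensus alone. All names and constants are hypothetical illustrations, not Bitcoin Core’s or BCHN’s actual code:

```python
# Illustrative sketch of the consensus/policy layering; all names and
# constants here are hypothetical, not Bitcoin Core's or BCHN's actual code.

MIN_CONSENSUS_TX_SIZE = 65         # illustrative consensus floor
MAX_CONSENSUS_TX_SIZE = 1_000_000  # illustrative consensus ceiling
MAX_STANDARD_TX_SIZE = 100_000     # illustrative, tighter policy ceiling

def check_consensus(tx_bytes: bytes) -> bool:
    """Rules every node must agree on; a violating tx invalidates its block."""
    return MIN_CONSENSUS_TX_SIZE <= len(tx_bytes) <= MAX_CONSENSUS_TX_SIZE

def check_policy(tx_bytes: bytes) -> bool:
    """Local relay rules; operators can tune these without risking a split."""
    return len(tx_bytes) <= MAX_STANDARD_TX_SIZE

def accept_to_mempool(tx_bytes: bytes) -> bool:
    # Relay path: consensus AND policy must both pass ("standard" txs only).
    return check_consensus(tx_bytes) and check_policy(tx_bytes)

def validate_in_block(tx_bytes: bytes) -> bool:
    # Block path: consensus only. A miner may include a policy-violating
    # ("non-standard") tx submitted out of band, and the block is still valid.
    return check_consensus(tx_bytes)
```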

Is there a “good reason” for non-standard transactions?

I would see it in being able to push the envelope: new use cases can be allowed to develop while being challenged in the real economic environment, something which may not be revealed on testnets despite best intentions. Sort of an entry gate into later being accepted into standardness, once they’ve proven themselves non-problematic. In practice I’m not sure we can say there are good enough checks and balances in case some non-standard transactions turn out to cause problems.

But maybe someone else has a better reason for non-standardness than I do. Of course the dichotomy arises directly from having a distinction between what policy permits and what consensus permits; non-standard is just the term for that which is allowed by consensus but not by relay policy. So this boils down to “why allow such a bifurcation between relay policy and consensus in the first place”, which is again your first question, so … somewhat circular.

Should we also consider fixing non-standard issues by changes straight into consensus? Do we need to, or have we already?

We can consider it, but I don’t know that the need is very great if we even have to ask this question. I do hear some people argue that in certain specific cases, such as the discrepancy between the max standard tx size and the max consensus tx size, this poses difficulties for some protocols and an alignment would be good.

Alignment would generally reduce complexity in codebases, and I would consider that a large benefit, but it has to be done very thoughtfully, of course. One doesn’t want to haphazardly relax rules that might serve a protective function without being quite certain that exploitation is impossible, or costly enough to deter.

And any alignment of policy <-> consensus comes with the cost of changing the many codebases that have baked these rules in one way or the other.

Is this something the BCH community should try to be ahead of the game on?

We should be ahead in all areas, so my answer is YES, and I love the initiative to table this for discussion.

However, I think it’s less urgent, and a much broader and more long-winded topic, than the VM limits CHIP, which already has an analytical approach and is better defined in terms of its benefits. I don’t think we in BCH have a burning issue here, unlike Core and the BSV crowd. For now, our challenge is to not fuck things up rather than to unfuck things, and I think by filtering changes through the CHIP process carefully enough, we might be able to keep it that way 🙂


Thanks for opening the topic @BitcoinCashPodcast. Copying my thoughts from BCHN Telegram for linking:


I wrote about where I’d like to see standardness further relaxed here:

and some of the issues we need to fix first (the VM limits CHIP gets us pretty close). Most notably, current standardness forces covenants and non-interactive multiparty contracts to waste at least 20-32 bytes per TX input (see the back-of-the-envelope sketch below).
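A rough illustration of that overhead (my own back-of-the-envelope accounting, not from the linked post): since only hash-template outputs are standard, a covenant must live behind P2SH or P2SH32, which adds the hash commitment on top of revealing the full script at spend time:

```python
# My own rough accounting; the script length is a hypothetical example.
redeem_script_len = 200  # bytes: some covenant script

# Bare output (non-standard today): the script sits directly in the output.
bare_total = redeem_script_len

# P2SH wrap: 23-byte output template (OP_HASH160 <20-byte hash> OP_EQUAL),
# plus the full redeem script re-revealed in the spending input.
p2sh_total = 23 + redeem_script_len    # 20-byte hash variant
p2sh32_total = 35 + redeem_script_len  # 32-byte hash (P2SH32) variant

print(p2sh_total - bare_total)    # 23 bytes of overhead per spend
print(p2sh32_total - bare_total)  # 35 bytes of overhead per spend
```

The wasted bytes are roughly the committed hash itself (20 or 32 bytes) plus the few template opcodes around it.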

I’d say it’s hard to summarize into “standardness good” or “standardness bad”. Right now I think standardness is an excellent Fence; I wrote more about it here: Standard Vs. Non-Standard VMs

Over time I can see standardness fading away as we figure out more about contract design requirements and pick away at rough areas. I can also see standardness remaining a permanent and important part of upgrading the network with less downstream technical debt. I doubt I’ll be certain either way any time soon.


For standardness issues like transaction fee rates and the dust limit, I really like the ABLA approach of taking something with soft consensus and firming it up into a long-term, reliable consensus rule. It seems like most of standardness could be replaced that way with enough research.


I’ll just add that BCH has done extremely well on stewarding the VM design and consensus parameters; it’s hard to even summarize all the technical details that BCH “got right” vs. various other bitcoin splits. The P2P cash community obviously had game theory missteps leading up to the split and loss of BTC, but it’s a nice silver lining that the ecosystem is more technically advanced and harder to kill now than we might have been without a remnant phase.

A VM example that comes to mind:


Thank you for opening this thread. It is indeed a topic that resurfaces every now and then, and today’s discussion on Telegram gave me the impression that, even though people approach it from different directions, the ideas and the “what next” kind of thinking are remarkably similar.

The first thing to point out is that ‘standardness’ is a collection of rules. They cover a wide range of topics, and it is not really useful to discuss them as a whole.

I echo freetrader’s general points. The reason these rules live in policy instead of in consensus is that this makes changing them decentralized, which means changes do not need a coordination event: no protocol upgrade needed. I think that is a good thing, especially as I still hope we can slow protocol changes down to a trickle over the next decade or so.

In a large number of cases, I’d say that what is a standardness rule today exists because of some problem elsewhere. We have seen miners create transactions well over 100 kB in size in order to consolidate outputs; we have a dust limit in the standardness rules; we also have an op_return limit there.

Those three are there because there is no economic balance: no tools or policies present that allow these items to be decided by “the market”. The issue is best explained by going back to this 2016 blog post by Gavin Andresen: One-dollar lulz

What the blog post shows is that there had to be a max block size because it was too cheap to create a block, meaning there was no incentive against creating blocks others might reject. The balance only started working when the reward for a single block rose to a significant amount.

Back to those three standardness rules: tx size, dust & op_returns.
I postulate that we can provide the tools to allow the market to set the cost of every single transaction. It can be based on things like op_return size, on the number of dust outputs being consolidated into only a couple of outputs, and naturally on the fee that needs to be paid (a sketch of such a pricing function follows below). In such a world, the standardness rules are no longer needed.
You don’t need a dust limit if it becomes nearly free to consolidate old dusting outputs.
You don’t need an op_return limit if the cost of creating such a transaction is many times the cost of an economic transaction.

Here too I’d like to refer to freetrader, as I fully agree with this:


Here’s an additional case we might want to consider regarding Output Standardness (P2PKH, P2SH, P2SH32). It can be worked around, but is inelegant and adds size/op-code overhead:


So, to start, this is technically a different topic. Yes, both are standardness rules, but they are not really all that related.

That said, I’d be supportive of completely removing the standardness rule requiring “only known templates in output scripts”. The P2SH wrap makes all things possible anyway (a minimal sketch of the wrap follows below), so the rule isn’t really limiting people, and your example is a good illustration of why having that check actually hurts.
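For readers unfamiliar with the wrap, here is a minimal sketch of BCH’s P2SH32 variant (double SHA-256 behind a standard template); the redeem script is a trivial placeholder, not a real contract:

```python
import hashlib

# Minimal sketch of BCH's P2SH32 wrap (double SHA-256 behind a standard
# template); the redeem script is a trivial placeholder, not a real contract.

def hash256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

redeem_script = b"\x51"  # OP_1: a trivially-true placeholder script

# Standard locking template: OP_HASH256 <32-byte hash> OP_EQUAL
OP_HASH256, OP_EQUAL = b"\xaa", b"\x87"
locking_script = OP_HASH256 + bytes([32]) + hash256(redeem_script) + OP_EQUAL

# At spend time, the unlocking script pushes any arguments plus the full
# redeem script; the node hashes the revealed script, checks it against the
# committed hash, then executes it. Any script logic fits behind the wrap.
unlocking_script = bytes([len(redeem_script)]) + redeem_script
print(locking_script.hex(), unlocking_script.hex())
```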

However, it probably should wait until after the VM limits CHIP has activated, because we likely don’t want to just allow output scripts of any length, and the VM limits CHIP could be a great place to synchronize the introduction of limits.
