Raising the 520 byte push limit & 201 operation limit

I think there are two current limits which most constrain the functionality of the Bitcoin Cash VM. They are closely related, and any changes to them must be tested together.

What work needs to be done before these limits may be relaxed?

520 Byte Push Limit

This limit prevents items longer than 520 bytes from being pushed, OP_NUM2BIN'd, or OP_CAT'd onto the stack. In the C++ implementation, the constant is called MAX_SCRIPT_ELEMENT_SIZE. The limit was present as a magic number in the earliest versions of Bitcoin.
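For intuition, here's a minimal Python sketch of how this check behaves (an illustration only — the real check lives in the C++ interpreter):

```python
# Toy model of the element-size check (assumption: mirrors the behavior of
# the MAX_SCRIPT_ELEMENT_SIZE check; this is not the actual consensus code).
MAX_SCRIPT_ELEMENT_SIZE = 520

def push_allowed(item: bytes) -> bool:
    """Items longer than 520 bytes may not be pushed, OP_CAT'd,
    or OP_NUM2BIN'd onto the stack."""
    return len(item) <= MAX_SCRIPT_ELEMENT_SIZE

print(push_allowed(b"\x00" * 520))  # True
print(push_allowed(b"\x00" * 521))  # False
```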

Increasing this limit has the potential to:

  • make hashing operations much more expensive
  • increase VM memory usage (OP_DUP OP_CAT OP_DUP OP_CAT [...])
  • magnify other pathological constructions, especially by allowing larger P2SH scripts
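To make the memory-usage point concrete, here's a hedged Python sketch (a toy model, not consensus code) of how repeated OP_DUP OP_CAT doubles the top stack item until the element-size limit cuts it off:

```python
# Hypothetical model: each OP_DUP OP_CAT pair doubles the top stack item,
# so per-item memory grows exponentially until the element limit rejects it.
def dup_cat_growth(initial_push_bytes, max_iterations, element_limit):
    """Return (iterations completed, final item size) for repeated
    OP_DUP OP_CAT, stopping when the element-size limit would be hit."""
    size = initial_push_bytes
    for i in range(max_iterations):
        doubled = size * 2            # OP_DUP then OP_CAT doubles the item
        if doubled > element_limit:   # the push/element limit halts growth
            return i, size
        size = doubled
    return max_iterations, size

# With the current 520-byte limit, doubling a 1-byte push stops after 9 rounds.
print(dup_cat_growth(1, 100, 520))      # (9, 512)
# With a 10,000-byte limit, only four more doublings become possible.
print(dup_cat_growth(1, 100, 10_000))   # (13, 8192)
```

Note this only bounds the size of a single item; the total memory across all stack items is a separate quantity worth modeling.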

If we can reduce or eliminate these potential bottlenecks/DoS vectors, raising the limit will allow:

  • larger P2SH scripts – constraining scripts to 520 bytes prevents a huge variety of more complex use cases (including many useful CashToken-based public covenants)
  • more efficient P2SH scripts – some data is more efficient to include in the P2SH script itself, but the 520 byte limit forces authors to design contracts to pull and validate this data from the unlocking bytecode. By allowing larger P2SH scripts, this overhead can be avoided.
  • larger hash preimages – many OP_CHECKDATASIG use cases require inspecting the contents of a signed message; the 520 byte limit also limits the size of these messages. (This could be worked around using Merkle trees, but at the cost of much more expensive hashing.) In particular, CashToken proofs must include their parent transactions, so this limit also sets the upper bound for CashToken transaction sizes.

I think we should consider making this limit the same as the current MAX_SCRIPT_SIZE limit: 10,000 bytes.

This would effectively eliminate it as a roadblock to complex use cases, while still serving its purpose for DoS prevention.

Some things I think need to be done before this is possible:

What other DoS attacks are possible? What else could raising this limit break?

201 Operation Limit

This limit invalidates scripts which use more than 201 non-push opcodes. In the Satoshi implementation, the constant is called MAX_OPS_PER_SCRIPT.

This one is more straightforward – more operations allow for more complex scripts.

What are some concerning constructions here?

  • A full block of transactions containing <n> OP_HASH160 OP_HASH160 OP_HASH160 [...x201], where n is incremented for every script (preventing caches from being useful) – increasing the limit reduces the “transaction overhead” in this block, allowing more pathological hashing work to be packed in
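As a back-of-the-envelope model of the construction above (the block size, per-transaction overhead, and per-opcode byte counts below are my assumptions, not measured values), one can estimate how the total hashing work in such a block scales with the operation limit:

```python
# Rough cost model (assumptions, not consensus code): each OP_HASH160 hashes
# the 20-byte output of the previous one, so total hashing work in the block
# scales with how many opcodes fit after per-transaction overhead is paid.
def block_hash_ops(block_size_bytes, tx_overhead_bytes, ops_per_script):
    """Estimate how many OP_HASH160 evaluations fit in one block if every
    transaction is just <n> followed by a chain of OP_HASH160s."""
    script_bytes = ops_per_script + 5            # ~1 byte/opcode + small push
    tx_bytes = tx_overhead_bytes + script_bytes  # hypothetical per-tx overhead
    tx_count = block_size_bytes // tx_bytes
    return tx_count * ops_per_script

# Assuming a 32 MB block and ~150 bytes of per-transaction overhead:
current = block_hash_ops(32_000_000, 150, 201)
raised = block_hash_ops(32_000_000, 150, 10_000)
print(current, raised, raised / current)
```

Under these (hypothetical) numbers the total hash-op count less than doubles, because the per-transaction overhead is already small relative to a 201-opcode script; the model mainly shows the effect is bounded, not explosive.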

To raise this limit, I think we need:

  • some hashing-specific operation limit

Can anyone offer any other attacks which are magnified by raising these limits?

Also, if anyone has links to any analyses of relevant DoS attacks, it would be great if we could collect those here.


I see Core PR#16902 has been backported to ABC, I’ll look into backporting it to BCHN too.


Hi, the developer of BlockUpload.io here,

I would vote NACK on this change unless the OP_RETURN size limit is proportionally increased. The website uses 1-of-8 multisig in a P2SH where there is one real public key and the rest are fake 65-byte data pushes. The OP_RETURN size limit was raised to 220 because DexX7, the Omni developer, calculated that it was more feasible to use the P2SH method than OP_RETURN if the limit was less than 220 bytes. We should keep the limits proportional, or OP_RETURN will only be used by those who can’t afford to change the code.

It should be adjusted so that OP_RETURN is always more feasible (because that’s its purpose); otherwise we can expect everyone who pushes lots of data to move to P2SH multisig.


Thanks for the heads up! I had totally forgotten the origin of the 220 byte OP_RETURN limit. Here’s the commit in BCHN. Is there any more BCH-specific discussion we can link to here?

I don’t remember ever digging into this, and think I may be misunderstanding the analysis in the linked CIP6 document:

What is requiring the un-hashed data to be directly included in the 520 byte P2SH redeem script?

Is it not possible to include only a hash of the data in the redeem script (to protect the data from being modified before the TX is mined), then provide the un-hashed data before it in the unlocking bytecode?

If I’m not mistaken, the 1650 byte limit on unlocking bytecode (MAX_TX_IN_SCRIPT_SIG_SIZE) should give you over 1.5 KB of space to commit data, and the 520 byte push limit only affects how many chunks that data must be divided into.
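The arithmetic here is simple enough to sketch (using the limits as quoted in this thread):

```python
# Quick arithmetic for the point above: the ~1650-byte scriptSig cap, not the
# 520-byte push limit, bounds the total data; the push limit only dictates
# how many chunks the payload is split into.
import math

MAX_TX_IN_SCRIPT_SIG_SIZE = 1650  # standardness limit (policy, not consensus)
MAX_SCRIPT_ELEMENT_SIZE = 520     # the 520-byte push limit

def chunks_needed(data_bytes):
    """How many separate pushes a payload must be split into."""
    return math.ceil(data_bytes / MAX_SCRIPT_ELEMENT_SIZE)

# Committing ~1500 bytes of data just means using 3 chunks instead of 1.
print(chunks_needed(1500))  # 3
```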


You’re right - multisig doesn’t matter, as presented in CIP6.

That’s new to me, I’ll experiment with that idea and report back.

Could you please give a link to a source code where MAX_TX_IN_SCRIPT_SIG_SIZE is defined?

It’s in policy.h in most C++ implementations. :+1:

I e-mailed Dexx about it:

Hi Dexx7,

I’m the developer of a tool used to store files in the blockchain (blockupload) and
I would be grateful if you could share your opinion on something:

Jason Dreyzehner and I were discussing (link to this webpage)
the most efficient way of pushing data. My website uses Class B transactions, multisig-in-P2SH,
which I thought of as efficient. We were wondering whether the following would work, and why Omni doesn’t use the following (in P2SH):

ScriptPubKey: OP_HASH160 <hash> OP_EQUALVERIFY 1 <fake public keys> <real public key> 7 OP_CHECKMULTISIG(VERIFY OP_DEPTH OP_0 OP_EQUAL)
ScriptSig: <1600 byte data push> <real signature>

The parts in parentheses are optional, but desirable malleability checks.

The scriptsig’s length limit can be found here: bitcoin/policy.cpp at 4a540683ec40393d6369da1a9e02e45614db936d · bitcoin/bitcoin · GitHub

Jason thought of this way as strictly superior to Class B or even CIP-6 (link that you posted), which seems to be the current best. This seems unbelievably efficient to me. I was wondering whether you had any experience with an upload method similar to this, or why Omni decided not to use this?

Thank you,

OK, the 220 byte limit is no longer relevant, since you just invented a second method that makes P2SH data pushing 4x as feasible as before, or 1.25x as feasible as CIP-6.
His answer:


thanks for reaching out. First of all, the proposal to raise the
OP_RETURN limit was not merged in Bitcoin Core, but it’s actually in
Bitcoin Cash.

Have you tested this script? I’m very curious. It’s an interesting
approach. :slight_smile:

In Omni, the bare-multisig approach is a relic from the past and we
haven’t added a new way so far.



I agree that this seems promising. I’ll try to upload something that
uses this method. I haven’t tried this script yet.

If this works, then the OP_RETURN limit needs to be QUADRUPLED to stay competitive;
your graph at https://github.com/bitcoin/bitcoin/issues/12033 would become archaic.
It seems to be a nice candidate for a Class D transaction format.

I’ll report back, once I try this in a few weeks.


This is so exciting, although it’s off-topic to this discussion.


I backported this to BCHN, see here:


Bitcoin Unlimited is also porting this: Port core 16902 O(1): OP_IF/NOTIF/ELSE/ENDIF script implementation (!2428) · Merge Requests · bitcoinunlimited / BCHUnlimited · GitLab


I’m separately pushing for allowing multiple OP_RETURNs in a single transaction. Talk thus far was that the aggregate size of all OP_RETURNs would need to stay within the present 220 byte limit. Just running the question by you all to make sure that support for multiple OP_RETURNs won’t break any of this work. Any comments/feedback?


Nope, I don’t think there will be any interaction with these two changes. :+1:

Want to start a new topic on allowing multiple OP_RETURNs? I’d love to see that happen.

In the past, I think a lot of resistance stemmed from the perception that OP_RETURN is a temporary hack in need of replacement by some formal “data” field in a new transaction format. (I think that misunderstands the transaction format, and OP_RETURN is actually an ideal solution within the TX model, e.g. SIGHASH_SINGLE.) So I think the strongest remaining concern is probably about the “fee structure” of adding extra data to the blockchain.

One conservative option might be to allow a total of N bytes across all OP_RETURN outputs, where the full, serialized size of the OP_RETURN output is counted. Also, I think we’ve just realized that the 220 byte limit was selected partially by mistake. It might be a good idea to select a new N based on the info above.
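To illustrate what “full, serialized size” would count, here's a Python sketch (the push and length-prefix overheads follow the standard transaction serialization; the aggregate-budget rule itself is hypothetical):

```python
# Sketch of counting the full serialized size of an OP_RETURN output for a
# hypothetical aggregate budget: 8-byte value + script-length prefix + script.
def serialized_op_return_size(payload_len):
    """Serialized size of an output whose script is OP_RETURN <push payload>."""
    if payload_len <= 75:
        push_overhead = 1                 # direct push opcode
    elif payload_len <= 255:
        push_overhead = 2                 # OP_PUSHDATA1 + 1 length byte
    else:
        push_overhead = 3                 # OP_PUSHDATA2 + 2 length bytes
    script_len = 1 + push_overhead + payload_len  # OP_RETURN + push + data
    varint = 1 if script_len < 253 else 3         # compact-size length prefix
    return 8 + varint + script_len                # plus the 8-byte value field

# A 220-byte payload costs 232 serialized bytes, not 220.
print(serialized_op_return_size(220))  # 232
```

So a budget stated in payload bytes and one stated in serialized output bytes differ by roughly a dozen bytes per output, which matters once multiple OP_RETURN outputs are allowed.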


FYI the multiple OP_RETURN proposal is here: Multiple OP_RETURNS - This time for real!