CHIP 2021-05 Targeted Virtual Machine Limits

While thinking about the effects of contract authors having targeted VM limits, it occurred to me that targeted VM limits could serve as a natural intermediate step for gauging whether MAST would be a significant optimization in the long term.

If, with targeted VM limits, we see contract authors write smart contract code in a single script instead of utilizing ‘side car outputs’, then the smart contract bytecode will carry a lot of unused functions taking up space. This might be a reasonable tradeoff on Bitcoin Cash, where transaction fees are low and developer experience (DX) is an important limiting factor.

MAST optimizes exactly this case, where a contract’s bytecode includes many unused functions, compared to the alternative of separate ‘side car outputs’ holding the logic for unused scripts.

We would then have a better, more realistic estimate of the space savings that MAST would offer. 😄


Stack memory usage limit should count the size of both the primary and “alt” stacks


Note that there are two stacks in the VM – the primary stack and the “alt” stack (accessed by e.g. OP_FROMALTSTACK and OP_TOALTSTACK).

The current stack depth limit of 1000 applies to the depth of both stacks summed together.

Proposal: update the spec to specify that the 130,000-byte limit applies to the primary stack and the altstack summed together.

In this way, it mirrors the current logic for the stack depth limit.
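For concreteness, here’s a minimal Python sketch of what that combined accounting might look like (all names are hypothetical, not from the CHIP; whether anything beyond element byte lengths is charged is an assumption):

```python
class ScriptError(Exception):
    """Raised when a VM limit is violated."""
    pass

MAX_STACK_MEMORY = 130_000  # proposed combined limit, in bytes

def stack_memory_usage(stack, altstack):
    # Sum the byte length of every element on BOTH stacks.
    # Assumption: only element lengths are charged; a real implementation
    # might also charge per-element overhead.
    return sum(len(item) for item in stack) + sum(len(item) for item in altstack)

def check_stack_memory(stack, altstack):
    if stack_memory_usage(stack, altstack) > MAX_STACK_MEMORY:
        raise ScriptError("combined stack memory usage exceeds the limit")
```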


The specification should state at what point during execution the stack memory usage limit is enforced


I would suggest the specification state that the 130,000-byte limit is enforced after the currently-executing opcode completes.

Why specify this? Because it makes the behavior very clear and future-proofs the specification.

Note: the current stack depth limit of 1,000 is likewise only enforced after execution of the current opcode completes.

Further rationale: we may introduce complex future opcodes that temporarily exceed limits mid-execution, only to return below the limit once the opcode completes. As such, the limit should be specified to apply at a specific point, and it makes sense to mirror the current operation of the stack depth limit (which is only applied after the current opcode completes execution).
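To pin down the enforcement point, here’s a sketch of the interpreter loop (parse_ops and execute_op are hypothetical stand-ins for the tokenizer and per-opcode dispatch; ScriptError and check_stack_memory come from the sketch earlier in the thread):

```python
MAX_STACK_DEPTH = 1_000

def evaluate(script, stack, altstack):
    for op in parse_ops(script):
        # An opcode may transiently exceed limits while it runs...
        execute_op(op, stack, altstack)
        # ...but both limits are enforced only HERE, after the opcode
        # completes, mirroring today's stack depth limit behavior.
        if len(stack) + len(altstack) > MAX_STACK_DEPTH:
            raise ScriptError("stack depth limit exceeded")
        check_stack_memory(stack, altstack)
```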


In the spec, what is the definition of an “evaluation” for the purposes of hitting the hash ops limit? For example, a P2SH spend:

(1) evaluates the locking script, hashing the redeem script via e.g. OP_HASH160;
(2) then performs another evaluation using the redeem script (which may itself contain further hashing opcodes).

When (2) begins evaluating the redeem script, is the hash count reset to 0, or does it continue from where (1) left off?

EDIT: So far in my test implementation, I have it not resetting to 0 as it proceeds to the redeem script, so the hash ops count is already 1 when the redeem script begins execution in the P2SH case.

EDIT2: For the hash limit: OP_CHECKSIG and friends don’t count towards the hash ops limit, right?
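For what it’s worth, here is how that carry-over behavior might look in code (hypothetical names throughout; this counts only the five explicit hashing opcodes, since whether signature checks count is exactly the open question in EDIT2):

```python
# Explicit hashing opcodes; whether OP_CHECKSIG and friends also count
# toward the limit is the open question above (assumed "no" in this sketch).
HASH_OPS = {"OP_RIPEMD160", "OP_SHA1", "OP_SHA256", "OP_HASH160", "OP_HASH256"}

class HashOpsCounter:
    """Per-input counter. It lives outside any single evaluation, so it
    carries over from the locking-script evaluation into the redeem script
    evaluation instead of resetting to 0."""
    def __init__(self, limit):
        self.count = 0
        self.limit = limit

    def record(self, op):
        if op in HASH_OPS:
            self.count += 1
            if self.count > self.limit:
                raise ScriptError("hash ops limit exceeded")

def evaluate_with(counter, script):
    # Hypothetical: run the interpreter loop over `script`, calling
    # counter.record(op) for each executed opcode.
    ...

def validate_p2sh_input(counter, unlocking_script, locking_script, redeem_script):
    evaluate_with(counter, unlocking_script)  # push-only: no hash ops
    evaluate_with(counter, locking_script)    # OP_HASH160 -> counter.count == 1
    evaluate_with(counter, redeem_script)     # continues from 1, not reset
```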


I’m late to the party, but I wanted to share: having brought myself up to date with the CHIP, I noticed that the 130 kB limit seems to lack a rationale.

I understand that it preserves the current maximum limit, but the CHIP does not state why we’d want to keep that maximum.

Thank you in advance, and apologies if this has already been addressed; the thread is too long to parse in one go!


It has a rationale. Please read the spec section: GitHub - bitjson/bch-vm-limits: By fixing poorly-targeted limits, we can make Bitcoin Cash contracts more powerful (without increasing validation costs) – in particular, expand the “Selection of 130,000 Byte Stack Memory Usage Limit” bullet point.

In summary: 130 KB is the limit we implicitly have now; the new explicit limit just preserves the status quo.


I worry now that with 10 KB pushes it might be possible for miner-only txns to contain data blobs in the scriptSig. We have a 1,650-byte limit as a relay rule for scriptSig – what do people think about moving that limit into consensus, perhaps as part of a separate CHIP?

Without having done the in-depth research probably required, I am in favour of bringing consensus rules into alignment with relay rules; I’m still not entirely sure why they’re different. Maybe there is a good reason – Chesterton’s Fence says not to fuck that up – but it seems like an area the BCH community should be looking into anyway. Of course, we’re all unreasonably busy.


My two cents: I am 10,000% in favor of tightening consensus rules to match relay rules exactly, now. The fact that the two differ suggests to me that the original Bitcoin devs (before they were captured)… were deprecating some things and intended to remove them. Tightening consensus to precisely match relay would make life a lot easier for everybody: no surprises, and no possibility of perverse incentives. As it stands, consensus rules are so liberal that they may end up allowing for… shenanigans. Better to plug that hole.


I agree.

I am still waiting for somebody to give a good reason “why not”.

Until I hear some strong arguments against, I will remain in support of this.
