CHIP 2021-05 Targeted Virtual Machine Limits

Yeah, the CHIP does indeed mention that the baseline is P2PKH hashing.

And while I fully appreciate the concept of doing no harm and taking one step at a time, this adds a link between transaction size and VM processing, which, as I explained above, is ‘weird’. So it doesn’t actually fully avoid harm. Removing that scriptsig-size link in the future would require a hardfork, so let’s not add it in the first place, is my thinking.

So when we look at the actual cost of p2pkh (the baseline) and the cost of math and other opcodes, being conservative is not needed at all.

For reference, the massively expensive 97ms script would require 40GB of inputs to be allowed to compute. THAT is just :exploding_head:. :laughing:


How big is the Script? If you managed to generate 29kB worth of data with 20-ish bytes of Script and have the TX validate in 0.00003s (= 30μs), then one could stuff a 10MB block with those TXs and have the block take… 5 minutes to verify?

OP_1 0x028813 OP_NUM2BIN OP_REVERSEBYTES generates you a 5000-byte number (all zero bytes except the most significant one) while spending only 6 bytes of script, and you can then abuse the stack item (OP_DUP and keep hashing it, or use OP_MUL on it, or some mix, or whatever) to your heart’s content (until you fill up the max. script size).

Can you try benchmarking this:

// Assumes the node's script test harness: StackT is the harness's stack
// type; CScript and opcodetype come from the node's script code.
StackT stack = {};

// Preamble (7 bytes): OP_1 0x028813 OP_NUM2BIN OP_REVERSEBYTES OP_DUP.
// The raw bytes 0x02 0x88 0x13 form a 2-byte push of 5000 (0x1388,
// little-endian), so OP_NUM2BIN widens the 1 into 5000 bytes and
// OP_REVERSEBYTES turns that into a huge number.
CScript script = CScript()
                << OP_1
                << opcodetype(0x02) // raw push opcode: push the next 2 bytes
                << opcodetype(0x88)
                << opcodetype(0x13)
                << OP_NUM2BIN
                << OP_REVERSEBYTES
                << OP_DUP;
// Fill the rest of the script with repeated 5000-byte multiplications
// (3 bytes per iteration).
for (size_t i = 0; i < 3330; ++i) {
    script = script
                << OP_2DUP
                << OP_MUL
                << OP_DROP;
}
script = script << OP_EQUAL;

You can adjust the 3330 to whatever you want; 3330 iterations yield a 9998-byte script (7 bytes of preamble + 3 × 3330 bytes of loop body + 1 byte for OP_EQUAL).

With density-based limits, the 10kB input generated this way would get rejected after executing the 20th OP_MUL or so, and a small one (say i < 1) would get rejected on the 1st OP_MUL because it’d be too dense for even one, so filling the block with either variant could not exceed our target validation cost.
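
To illustrate the mechanics (just a sketch; the constants and the cost accounting below are placeholder values I picked for the example, not necessarily the CHIP’s final numbers):

#include <cstdint>
#include <cstddef>

// Illustrative density-budget sketch. Constants are placeholders,
// not the CHIP's final values.
static constexpr int64_t BUDGET_PER_BYTE = 800; // cost units granted per spending-input byte (assumed)
static constexpr int64_t BASE_ALLOWANCE = 41;   // fixed per-input allowance in bytes (assumed)

// The budget is a linear function of the unlocking bytecode's size, so a
// 6-byte script that conjures up a 5000-byte number gets almost no budget,
// while the 10kB variant gets proportionally more, but still a bounded amount.
int64_t OperationCostBudget(size_t unlockingBytecodeLength) {
    return (BASE_ALLOWANCE + int64_t(unlockingBytecodeLength)) * BUDGET_PER_BYTE;
}

// Evaluation accumulates a per-operation cost (for OP_MUL, something
// proportional to the operand sizes) and aborts the moment the running
// total exceeds the budget.
bool ChargeOperation(int64_t &accumulatedCost, int64_t operationCost, int64_t budget) {
    accumulatedCost += operationCost;
    return accumulatedCost <= budget; // false => reject the input
}

The exact constants and per-opcode cost formulas are the CHIP’s business; the point is only that the budget scales linearly with input size, so neither variant can push a whole block past the target validation cost.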

I’m afraid you completely missed the point of my post. Nobody is generating data, no block is validated on arrival, and nobody has been suggesting any removal of limits. Removal of limits is needed to understand the relationship between cost and score, but obviously that is in a test environment, not meant to be taken into production.

Everyone is in agreement that we need density-based limits. Read my posts again, honestly. You’re not making sense.

Hi all! Just a status update:

We’re up to 4 developers publicly testing the C++ implementation now, thanks @cculianu, @bitcoincashautist, and @tom for all of the review and implementation performance testing work you’re doing!

Reviews so far seem to range from “these limits are conservative enough” to “these limits could easily be >100x higher” – which is great news.

It also looks like one or two additional node implementations will have draft patches by October 1. I think we should coordinate a cross-implementation test upgrade of chipnet soon – how about October 15th at 12 UTC? (Note: a live testnet can only meaningfully verify ~1/6 of the behaviors and worst-case performance exercised by our test vectors and benchmarks, but sanity-checking activation across implementations is a good idea + testnets are fun.) I’ll mine/maintain the test fork and a public block explorer until after Nov 15.

I just published a cleaned-up and trimmed-down set of ~36K test vectors and benchmarks. The previous set(s) had grown too large and were getting unwieldy (many GBs of tests); this set is just over 500MB and compresses down to less than 15MB, so it can be committed or submodule-ed directly into node implementation repos without much bloat. (And diffs for future changes will also be much easier for humans to review and for Git to compress.) The test set now includes more ancillary data too:

  • *.[non]standard_limits.json includes the expected maximum and final operation cost of each test,
  • *.[non]standard_results.json provides more detailed error explanations (or true if the test is expected to succeed), and
  • *.[non]standard_stats.csv provides lots of VM metrics for easier statistical analysis in e.g. spreadsheet software (or scripted; see the sketch after this list):
    • Test ID, Description, Transaction Length, UTXOs Length, UTXO Count, Tested Input Index, Density Control Length, Maximum Operation Cost, Operation Cost, Maximum SigChecks, SigChecks, Maximum Hash Digest Iterations, Hash Digest Iterations, Evaluated Instructions, Stack Pushed Bytes, Arithmetic Cost
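
For anyone who’d rather script the analysis than open a spreadsheet, here’s a rough C++ sketch that pulls two of those columns out of a stats CSV (the filename and column positions are assumed from the header list above; quoted fields aren’t handled):

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Rough sketch: print each test's Operation Cost as a fraction of its
// Maximum Operation Cost. Column indices follow the header order listed
// above (assumed; verify against the actual files).
int main() {
    std::ifstream in("standard_stats.csv"); // hypothetical filename
    std::string line;
    std::getline(in, line); // skip the header row
    while (std::getline(in, line)) {
        std::vector<std::string> cols;
        std::stringstream row(line);
        std::string field;
        while (std::getline(row, field, ',')) cols.push_back(field);
        if (cols.size() < 9) continue; // skip short/malformed rows
        const double maxCost = std::stod(cols[7]); // Maximum Operation Cost
        const double cost = std::stod(cols[8]);    // Operation Cost
        if (maxCost > 0) std::cout << cols[0] << "," << (cost / maxCost) << "\n";
    }
    return 0;
}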

Next I’ll be working on merging and resolving the open PRs/issues on the CHIP repos:

  • Committing test vectors and benchmarks directly to the CHIP repos (now that they’ve been trimmed down),
  • A risk assessment that reviews and summarizes each testing/benchmarking methodology and results, and
  • Some language clarifications requested by reviews so far.

After that, I’ll cut spec-finalized versions of the CHIPs and start collecting formal stakeholder statements on September 23. Next week:

  • I’ll host a written AMA about the CHIPs on Reddit, Telegram, and/or 𝕏 on Wednesday, September 25;
  • I’ll be joining @BitcoinCashPodcast at 20 UTC, Thursday, September 26; and
  • I’ll be joining General Protocols’ Space on 𝕏 at 16 UTC, Friday September 27.

Thanks everyone!


Bitcoin Verde has announced a flipstarter for their v3.0 release, which includes bringing the node implementation back into full consensus (the May 2024 upgrade), a >5x performance leap, and support for the 2025 CHIPs!

We seek to immediately complete our technical review of the CHIP-2024-07-BigInt and CHIP-2021-05-vm-limits CHIPs, and implement those CHIPs as a technical proof of concept (to include integrating the CHIPs’ test-vectors) in order to facilitate the timely and responsible assurance of node cross-compatibility of the BCH '25 upgrade. We consider the goals outlined in these CHIPs to be a positive incremental betterment of the BCH protocol, and look forward to supporting their inclusion in the next upgrade. [emphasis added]

The flipstarter is here:
