CHIP 2021-05 Targeted Virtual Machine Limits

How big is the Script? If you managed to generate 29kB worth of data with 20-ish bytes of Script and have the TX validate in 0.00003s (= 30µs), then one could stuff a 10MB block with those TXs and have the block take… 5 minutes to verify?

OP_1 0x028813 OP_NUM2BIN OP_REVERSEBYTES generates you a 4999B number (all 0s except the highest bit) while spending only 6 bytes, and you can then abuse the stack item (OP_DUP and keep hashing it, or use OP_MUL on it, or some mix, or whatever) to your heart's content (until you fill up the max. script size).

Can you try benchmarking this:

StackT stack = {};
CScript script = CScript()
                << OP_1
                << opcodetype(0x02)
                << opcodetype(0x88)
                << opcodetype(0x13)
                << OP_NUM2BIN
                << OP_REVERSEBYTES
                << OP_DUP;
for (size_t i = 0; i < 3330; ++i) {
    script = script
                << OP_2DUP
                << OP_MUL
                << OP_DROP;
}
script = script << OP_EQUAL;

You can adjust the 3330 to whatever you want; 3330 would yield a 9998-byte script.

With density-based limits, the 10kB input generated this way would get rejected after executing the 20th OP_MUL or so, and the small one (say i < 1) would get rejected on the 1st OP_MUL because it'd be too dense for even one, so filling the block with either variant could not exceed our target validation cost.
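To make that concrete, here is a rough back-of-envelope sketch (mine, not text from the CHIP) of why per-input density budgets also bound whole-block validation cost, assuming the (41 + unlocking_bytecode_length) * 800 budget formula quoted later in this thread:

#include <cstdint>
#include <cstddef>

// Per-input opCost budget under the density-based approach.
uint64_t OpCostBudget(size_t unlockingBytecodeLength)
{
    return (41 + uint64_t(unlockingBytecodeLength)) * 800;
}

// Every input occupies at least 41 bytes plus its unlocking bytecode inside
// the block, so the sum of all per-input budgets in a block can never exceed
// roughly 800 * blockSizeBytes, no matter how scripts are arranged.
uint64_t BlockOpCostUpperBound(uint64_t blockSizeBytes)
{
    return blockSizeBytes * 800;
}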

I'm afraid you completely missed the point of my post. Nobody is generating data, no block is validated on arrival, and nobody has been suggesting any removal of limits. Removal of limits is needed to understand the relationship of cost and score, but obviously that is in a test environment, not meant to be taken into production.

Everyone is in agreement that we need density-based limits. Read my posts again, honestly. You're not making sense.

Hi all! Just a status update:

We're up to 4 developers publicly testing the C++ implementation now, thanks @cculianu, @bitcoincashautist, and @tom for all of the review and implementation performance testing work you're doing!

Reviews so far seem to range from "these limits are conservative enough" to "these limits could easily be >100x higher" – which is great news.

It also looks like one or two additional node implementations will have draft patches by October 1. I think we should coordinate a cross-implementation test upgrade of chipnet soon – how about October 15th at 12 UTC? (Note: a live testnet can only meaningfully verify ~1/6 of the behaviors and worst-case performance exercised by our test vectors and benchmarks, but sanity-checking activation across implementations is a good idea + testnets are fun.) I'll mine/maintain the test fork and a public block explorer until after Nov 15.

I just published a cleaned up and trimmed down set of ~36K test vectors and benchmarks; the previous set(s) had grown too large and were getting unwieldy (many GBs of tests). This set is just over 500MB and compresses down to less than 15MB, so it can be committed or submoduled directly into node implementation repos without much bloat. (And diffs for future changes will also be much easier for humans to review and Git to compress.) The test set now includes more ancillary data too:

  • *.[non]standard_limits.json includes the expected maximum and final operation cost of each test,
  • *.[non]standard_results.json provides more detailed error explanations (or true if expected to succeed)
  • *.[non]standard_stats.csv provides lots of VM metrics for easier statistical analysis in e.g. spreadsheet software:
    • Test ID, Description, Transaction Length, UTXOs Length, UTXO Count, Tested Input Index, Density Control Length, Maximum Operation Cost, Operation Cost, Maximum SigChecks, SigChecks, Maximum Hash Digest Iterations, Hash Digest Iterations, Evaluated Instructions, Stack Pushed Bytes, Arithmetic Cost

Next I'll be working on merging and resolving the open PRs/issues on the CHIP repos:

  • Committing test vectors and benchmarks directly to the CHIP repos (now that they've been trimmed down),
  • A risk assessment that reviews and summarizes each testing/benchmarking methodology and results, and
  • Some language clarifications requested by reviews so far.

After that, I'll cut spec-finalized versions of the CHIPs and start collecting formal stakeholder statements on September 23. Next week:

  • I'll host a written AMA about the CHIPs on Reddit, Telegram, and/or 𝕏 on Wednesday, September 25;
  • I'll be joining @BitcoinCashPodcast at 20 UTC, Thursday, September 26; and
  • I'll be joining General Protocols' Space on 𝕏 at 16 UTC, Friday, September 27.

Thanks everyone!

7 Likes

Bitcoin Verde has announced a flipstarter for their v3.0 release, which includes bringing the node implementation back into full consensus (the May 2024 upgrade), a >5x performance leap, and support for the 2025 CHIPs!

We seek to immediately complete our technical review of the CHIP-2024-07-BigInt and CHIP-2021-05-vm-limits CHIPs, and implement those CHIPs as a technical proof of concept (to include integrating the CHIPs' test-vectors) in order to facilitate the timely and responsible assurance of node cross-compatibility of the BCH '25 upgrade. We consider the goals outlined in these CHIPs to be a positive incremental betterment of the BCH protocol, and look forward to supporting their inclusion in the next upgrade. [emphasis added]

The flipstarter is here:

4 Likes

As this seems to not be taken seriously, let me re-type this out and maybe we can fix this before the CHIP is frozen.

Edit: this is not a new issue. I notified this thread about it quite some time ago and had long voice conversations with Jason about it. I listened to all his arguments and spent time verifying the numbers he assumed. In the end I stated very clearly that this is an issue that should be solved and suggested various solutions. Communication died at that point. So this leaves me no choice other than to air the dirty laundry here.

The system, as Jason designed it, has a misalignment of incentives.

A 10KB transaction input has CPU time allotted to execute just below 8 million opCost.
Here is an example that actually stays within this limit (7,474,730 is used):

std::vector<uint8_t> bigItem;
for (int i = 0; i < 9296; ++i) {
    bigItem.push_back(10);                 // build a 9,296-byte stack item
}
CScript script;
script << bigItem;                         // one big push
for (int i = 0; i < 200; ++i) {
    script << OP_DUP << OP_9 << OP_DIV;    // 200 rounds of big-number division
}

Since 10KB (total) is the maximum script size, this is about the most opCost points a script can gather.
The above executes in 1 ms on my laptop. Use OP_MUL instead and it runs in 500 microseconds.

Now,
if I were to move this script to live in the transaction output, I can no longer run it.
To run it I would be forced to add big pushes to the spending transaction to give the script enough budget to run.

There are two misalignments of incentives here:

  1. scripts have to be stored in the scriptSig (aka spending transaction) and not in the transaction output (locking transaction). The total amount of bytes on the blockchain goes up as a result. Overhead is between 25 and 40 bytes.
  2. To get more execution time, bogus bytes may be added in order to increase the opCost budget.

In Bitcoin Cash the fees are intentionally kept low; as a result, the cost of pushing more bytes in order to get a bigger CPU allowance is similarly very low. It is inevitable that people will start pushing bogus information onto the blockchain just to get more CPU time.

Notice that wasted blockchain space is, long term, much more expensive than CPU time, because historical transactions do not get their scripts re-validated.

Example two

Consider usage of OP_UTXOBYTECODE. This copies data from a UTXO other than the one we are spending. The result is an input script that has nearly no bytes. The script will end up being severely restricted in what it can do.
You literally can't do a single OP_DIV from the above example in this case (limit=37600, cost=46881).
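For a sense of scale, here is the budget arithmetic for the two cases (my own numbers, using the (41 + unlocking_bytecode_length) * 800 formula quoted later in the thread; the lengths are approximations of the examples above):

#include <cstdint>

constexpr uint64_t Budget(uint64_t unlockingBytecodeLength)
{
    return (41 + unlockingBytecodeLength) * 800;
}

// A padded ~10 KB input: just under 8 million opCost of budget.
static_assert(Budget(9958) == 7'999'200, "roughly the limit quoted above");
// A near-empty input (~6 bytes of unlocking data): 37,600 opCost, which is
// below the 46,881 measured for a single big OP_DIV, so the OP_UTXOBYTECODE
// variant cannot afford even one such division.
static_assert(Budget(6) == 37'600, "the limit reported for example two");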

Based on the fact that pushes are the only thing allowed in the spending transaction, this has the potential to forever lock funds on the chain, until the rules are relaxed.

Understanding 'cost'.

The CHIP confusingly talks about opCost and the limits like they are the same thing. There is a term called "density control length", which may be the source of the confusion since that is the actual basis for the limits, but it is not named as such.

In simple English:

opCost is calculated based on actual work done. Each opcode has its own formula for its op-cost, so running the program adds to the tallied op-cost at every step of the way.
This is a concept used by various other chains; nobody has a problem with this.

When the cost exceeds the limit, the script fails. Simple as that.
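A minimal sketch of that flow (illustrative only; the real per-opcode cost formulas are specified in the CHIP, and the budget formula is the one quoted elsewhere in this thread):

#include <cstdint>
#include <cstddef>
#include <vector>

// Density-based budget, fixed up front from the input's length:
// (41 + unlocking_bytecode_length) * 800.
uint64_t OpCostBudget(size_t unlockingBytecodeLength)
{
    return (41 + uint64_t(unlockingBytecodeLength)) * 800;
}

// `stepCosts` stands in for the per-operation costs the VM tallies while
// executing the script (each opcode contributes per its own cost formula).
bool WithinBudget(size_t unlockingBytecodeLength,
                  const std::vector<uint64_t>& stepCosts)
{
    const uint64_t budget = OpCostBudget(unlockingBytecodeLength);
    uint64_t spent = 0;
    for (uint64_t cost : stepCosts) {
        spent += cost;
        if (spent > budget) return false; // cost exceeded the limit: script fails
    }
    return true;
}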

Understanding that the cost is a separate concept from the limit, we can agree that the cost calculation is great. Don't change that. The limits, however, cause the problems described in this post.

So this leaves the question of where the limit comes from.

The CHIP explains that a transaction has a maximum size which is then spread over all its inputs, and thus the total amount of CPU time spent on a single transaction is bounded.
Using the example above, a 100KB transaction can have 10 inputs of 10KB each, because the max is 10KB. This puts the maximum amount of CPU time in my example at 10 ms to run the scripts of this transaction (which is to say, very fast).

As such this seems like a sane solution at first. Workarounds to pack more inputs don't work since the tx-size is still limited…
Yet the solution does have those bad side-effects. I mentioned this above, and unfortunately my repeatedly pointing this out has yet to make it into the CHIP. I think the CHIP rules state that problems should be mentioned clearly in the CHIP, whether the author agrees with them or not.

Future opcodes

Today we had some discussions about concepts like MAST. We contemplated that op-eval may be better since it allows interaction with things like OP_UTXOBYTECODE. A cool idea since that means your script can stay very small because the actual locking script from a cashtoken can be copied and reused from another input.

The downside is that the limits will be so low that this is practically not going to work, unless we add a bogus push to work around the limits coming from the unlocking script.

Conclusion

The CHIP as it stands today has economic issues. Users are incentivized to pollute the blockchain in order to 'buy' more CPU cycles (in the form of opCost).

This should not be activated like it is today. If this canā€™t be fixed, we should move this CHIP to May 2026.

The op-cost of those two opcodes should probably reflect the fact that one is twice as slow as the other.

It is taken seriously; there is a whole section in the rationale about it. It's just that others aren't reaching the same conclusions as you. It is a defensible design choice, not a problem to be fixed.

And there's a good reason for that: if you could, then you could also prepare 24,389 UTXOs with such a script, and then spend them all together in a single 1 MB TX, which would take 24 seconds to validate.

If you were to propose a flat limit, it would have to be set based on that edge case (an empty input script spending a max-pain locking script). That is how Jason got to 41 in the (41 + unlocking_bytecode_length) * 800 formula - an empty input's overhead is exactly 41 bytes.
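For reference (my arithmetic, from the standard input serialization): an empty-script input is 32 (prevout txid) + 4 (output index) + 1 (script length byte) + 4 (sequence) = 41 bytes. That is also where the 24,389 figure above comes from: a ~1 MB transaction fits roughly 1,000,000 / 41 ≈ 24,390 such empty inputs, and at about 1 ms each (the worst case measured earlier in this thread) that adds up to about 24 seconds of validation.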

An alternative is to have a flat limit per input, and it would have to be 41*800 or somewhere close to that value, else we'd introduce a worse pathological case than the current ones. With such a limit, nobody would be allowed to execute the above 10kB example script all at once in any context (as a locking script, or as a P2SH redeem script), even when it would be diluted enough by input data that such a TX wouldn't be usable as a DoS vector anymore. So we'd be limiting harmless uses just so we can catch pathological ones, too. The density-based approach allows us to limit pathological cases while still allowing more of the harmless uses.

The total amount of prunable bytes goes up. Why would that be a problem long-term? Whether blockchain size goes up for this or that reason makes no difference - if it is used then the size will go up. The ultimate rate-limit for that is the blocksize limit, not incentives on which kinds of bytes will fill the space. Below that hard boundary, miners are free to implement custom policies for whatever filler bytes (which are highly compressible), if they would be seen as a problem.

Yes, they may. Why is that a problem?

Consider the alternative: say I really need to add two 10k numbers, but all I have is the flat 41*800 budget. What do I do then? I split the calculation into a bunch of the biggest additions that can fit the budget and carry the calculation over a bunch of TXs, and those TXs will add up to more total data on the chain than if I had just done it in one go in a single input with some filler bytes. The filler bytes save me (and the network) from the overhead of additional script bytes required to verify that the execution was carried over from the previous TX correctly (which means having redundant data in intermediate verification steps).

Also, filler bytes would be highly compressible for storage. The carry-over redundancy of splitting it across multiple TXs would not be so compressible.

Ordinary people can't push more than 1,650 bytes in a single input because we're keeping the relay limit the same. Also, if filler bytes are found to be a problem, then miners are free to implement their own policies to further restrict inclusion of inputs that use padding, or just have a custom fee policy for these filler bytes and price them differently.
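(For scale, my arithmetic under the same formula: a maximal standard input of 1,650 unlocking bytes gets a budget of (41 + 1,650) * 800 = 1,352,800 opCost, versus 37,600 for a near-empty input.)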

It copies, but doesn't execute it; it just puts the bytecode on the stack of the input that executed the opcode, as part of that input's own script. Normally this is used in a pair with OP_OUTPUTBYTECODE: you just verify they match, and then verify some other things like the BCH amount etc. Usually scripts that do this will be a few tens of bytes or more, which will be more than enough budget for the usual applications.

Yes, there is this side-effect, it is a trade-off of having a simpler system rather than implementing a more complex gas-like system. The CHIP does specifically mention padding in that context:

Finally, this proposal could also increase the operation cost limit proportionally to the per-byte mining fee paid or based on the declaration of "virtual" bytes, e.g. for fee rates of 2 satoshis-per-real-byte, the VM could allow a per-byte budget of 2000 (a 2.5 multiple, incentivizing contracts to pay the higher fee rate rather than simply padding the unlocking bytecode length). However, any allowed increase in per-real-byte operation cost also equivalently changes the variability of per-byte worst-case transaction validation time; such flexibility would need to be conservatively capped and/or interact with the block size limit to ensure predictability of transaction and block validation requirements. Additionally, future research based on real-world usage may support simpler alternatives, e.g. a one-time increase in the per-byte operation cost budget.

I did have more words about it in my proposed edit; maybe the CHIP should give this side-effect more words? I still disagree that it is a problem to be solved; the system is good enough as proposed.

Disagree. Your conclusion was the reason for my thumbs-down (which I wanted to undo on second thought, but the forum won't let me).

2 Likes

Interesting to hear about your benchmarking, Tom, where OP_DIV takes twice as long as OP_MUL.

I think @bitcoincashautist did a good job above re-stating the reason why input script length is used for the formula.

Strictly More Permissive

When considering your example 2 with OP_UTXOBYTECODE, it's worth emphasizing the following parts of the rationale:

this proposal ensures that all currently-standard transactions remain valid, while avoiding any significant increase in worst case validation performance vs. current limits

By design, this proposal reduces the overall worst-case computation costs of node operation, while significantly extending the universe of relatively inexpensive-to-validate contract constructions available to Bitcoin Cash contract developers.

So contract authors will only be able to use this opcode more permissively than is currently the case. Only abusive constructions intentionally trying to drain CPU cycles would get meaningfully limited.

Any construction that we can imagine currently where you split the utxobytecode, replace some variables of the contract and then hash it into a new locking bytecode script (this process has been called 'simulated state') will still be possible in the same way, but now freed from the 201-opcode limit & 520-byte size limit.

Push Operations Included

When I was going over the 'OpCost' table I was slightly concerned that OP_CHECKSIG and OP_CHECKDATASIGVERIFY had a cost of >26,000 and OP_CHECKMULTISIGVERIFY even a cost of >k * 26,000. However, all honest contract usage involves pushing arguments to the stack (and not just spawning random sigchecks on the stack); signatures are already 65 bytes and the pubkey is another 33 bytes, so this results in an ~80k budget for each sig+key.
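(Spelling that estimate out, my arithmetic: a signature push plus a public key push is roughly 65 + 33 = 98 bytes of unlocking data, and 98 * 800 = 78,400 opCost of additional budget, comfortably above the ~26,000 cost of a single signature check; this ignores the 41-byte base and the push opcodes themselves.)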

Implications for Contract Authors

The CHIP is designed to prevent abusive contracts, and from reviewing the proposal I don't expect ANY of the CashScript contract authors to run into these new accounting limits. Our aim should be to not increase the learning curve and barrier to entry of contract development; I do not want contract authors to have to worry/learn about opcode accounting costs, and I don't believe they will have to.

To me the only redeeming quality of the 201-opcode limit & 520-byte size limit was that they were easy to understand. I expect the OpCost accounting to be invisible to normal contract developers, and the main limitation that contract authors will still have to be aware of after these VM-limits changes to be the limit on standard input bytecode length:

Maximum standard input bytecode length (A.K.A. MAX_TX_IN_SCRIPT_SIG_SIZE – 1,650 bytes),

1 Like

The first case seems like a theoretical situation that I donā€™t think has meaningful impact, but the second case looks more sensible to me.

I agree that there could be some scheme where you intentionally have very small inputs, that might get restricted by this VM-limit change.

If that is the case, they have the option of doing the now unnecessary push, but I think a better outcome can be had: Raising the base limit.

However, doing that would raise the worst-case validation cost, and it cannot be done as part of the VM-Limits CHIP, since that CHIP explicitly has as a design goal not to raise the current worst-case validation cost, even if doing so would have other benefits.

I think the use case you are imagining is interesting and I'd love to see it well supported, but my preferred solution is to activate VM-Limits as they are, then raise the base limit slightly with a separate CHIP.

EDIT: Actually, I think such schemes would require non-standard transactions, and so would need to be miner-assisted. This further reduces the "risk" of these things being negatively impacted and puts it at the point where I think it's better for someone to build a proof of concept, then propose an updated limit later on, using the proof of concept as motivation for why the limit should be changed.

2 Likes

Well, first of all, it means that the limits can be worked around or artificially increased. Are they still limits if you can just increase them with "this one cheap trick"?

Everyone hopefully expects blocks to be full at some point in the (far?) future :-). At that time the miners will end up deciding which transactions to prioritize, and which to make wait. So, if you can't spend your money without adding a large push, you will likely have to pay a heavy fee for that bigger-than-average transaction. That sounds quite sub-optimal to me.

The actually mined block size going up is going to be based on what utility we provide. Miners are not going to invest in more hardware and faster Internet when the price is low and their income is low, so it is not a given that the block size will increase. Miners still have to configure their nodes to mine that bigger block size.
With that basic economic fact in mind: the goal is to have as many economic transactions in a block as possible, since that makes the value of the coin go up. Higher value, more actual real money for miners per block.
"Wasting" space on pushes that are workarounds has a direct adverse effect on that. The most extreme example of this is seen on BTC, where 'normal' transactions have become a minority in blocks filled with jpegs and whatever. If the BTC price were based on utility, this would have been really bad news for the price. It already is pretty bad for utility.

To repeat the original red flag:
p2sh takes about 40 more bytes of blockspace than a non-standard script solution, and the proposal actively encourages people to use p2sh for non-trivial scripts.
There is no incentive for users to use less blockspace, so why would they? There now IS an incentive for a subset of contracts to use more blockspace, which is what will happen as a result.

In short:

in a shared system like bitcoin (any) where resources are scarce and demand will hopefully exceed the supply of blockspace, it is important to keep the actual per-transaction byte count low, because that keeps per-transaction COST low, and that keeps the system competitive in a world where there are dozens of chains willing to eat our lunch.

I understand there are various options to solve this. I'm not in favor of any one specifically; even though I considered two above, I'm not pushing anything in particular.

The problem I'm running into is that the various alternatives people came up with are extremely hard to compare due to the lack of actual real-world numbers in the CHIP. Here we make claims like "it takes 20 seconds to verify a transaction!". Regardless of whether that is true or not, it indicates we really, really need absolute numbers to understand the different trade-offs. The different alternatives.

Maybe the current solution is the best, which would be scary, but we can't tell at this point because the 'rationale' sections in the CHIP are hand-waving and making claims that look wrong on their face. It excludes discussion and excludes proper assessment of solutions. It makes people scared to propose alternative options because it is too hard to understand the actual trade-offs of the suggestions made.

Nobody is using non-standard scripts for anything, maybe because they can't (because hey, they are still non-standard) or maybe because there's no interest. If you want to relax standardness rules and allow people to experiment with those, fine, and I would also agree to relax those rules, but this is not the place for proposing that.

And they're not unfairly penalized by this CHIP: they're not even allowed by standardness rules, and they will always have to be limited "unfairly", because scripts light on input script but heavy on locking script can be more tightly packed into TXs and used as DoS vectors. We always need to restrict them "unfairly" because they have an "unfair" DoS advantage due to the architecture of our system: they can bring a lot of data into the local TX context without that data being limited by the TX size or block size limit.

Do you have a particular app or a proof-of-concept you'd want to deploy as a "bare" script which would be too limited by having the 41*800 budget?
Even "bare" scripts (like the currently standard P2PK, P2MS, P2PKH) will normally require some additional unlocking data (like keys & signatures) - and that data will increase their budget sufficiently so that they will work just fine - and the same data will dilute their CPU density enough that they can't be packed into a DoS-bomb TX.

After this CHIP you can make another CHIP to relax relay rules, and if we see people experiment with those and start hitting the limits then you will have empirical evidence to suggest something has to be done to enable this or that use-case. Right now, all you have is conjecture, which is not convincing.

FWIW, I believe a CHIP to relax standardness rules could pass, but why hasn't anyone bothered to do the work and propose it all these years?

1 Like

This is indeed very likely the best long term outcome.

There is a big assumption today that we will be able to have a protocol upgrade next iteration (May 2026) and that we can correct things then. Maybe the space 100x-ed and getting consensus is too hard. Or, if we do have a protocol upgrade, it may be driven by people that don't have the intention of doing such a fix. As such, maybe we should avoid doing protocol upgrades that have negative side-effects under the assumption that we can fix them later.

The Limits CHIP is today based on not moving the validation cost much. The side-effect of that is that all results are relative. Compare that to ABLA, where there were dozens of graphs that, with a little effort, people can understand and place. Now we have a comparison to the time it takes to do an operation that has been around in Bitcoin since nearly the beginning. We're comparing CPU cost to an operation that was considered acceptable 15 years ago.

This is distorting people's perception. When I reported the actual cost of simple operations (50 nanoseconds for the most expensive XOR, etc.), most people were surprised that this stuff is so cheap.

I strongly suspect that the question, when rephrased to be about absolute clock time, will give very different answers. And we might come up with a different way to set a limit that doesn't have side-effects like the ones I described.

Happy to hear you'd be on board with that.

I think the loosening of standardness rules is quite a different beast than adding consensus rules to the core protocol. The standardness rules can be changed much more cheaply and, if you can accept the chaos, without an actual synchronization point.

In short, adding [the objected-to part of the limits CHIP] now increases the cost of the changes to standardness rules. Maybe we don't want to add something that now two people in a row say we can remove later again…

I just walked out of a room after a great debate with a smile on my face. The 202* BitcoinCash crew is awesome.

Thanks to Kallisti as sparring partner and discussion lead, BitcoinCashAutist for the insights, ideas and experience, and Jeremy for lightening the mood.

Copying Kallisti's conclusion:

okay. so in this discussion we addressed Tom's issue, walked through the logic on how it's not a problem in reality, and also addressed his concern about the CHIP's need for absolute time benchmarks including notes about the hardware the tests are running on; we then also described a mitigation strategy to Tom's primary concern such that the mitigation offers a cleaner way for contract authors to do complex ops that may fall under this edge case without contributing any additional stress to any network resource.

So, from the top.

Everyone agreed that the result of this CHIP may be that people will stuff bytes if they need to. Which surprised me. People were also a bit confused as to why the CHIP (no longer) mentions this.
Walking through the problems of people making their transactions intentionally bigger, we realized there is a lot of confusion due to the lack of numbers in the CHIP. It's like the numbers may be linked from the CHIP, but there are only high-level conclusions that feel like they fell from the sky.

The most specific problem with the lack of numbers is that a grasp of the physical cost of this script processing is missing. If you read the entire CHIP you may honestly walk away thinking that validating a p2pkh takes 0.1 seconds, while in reality it is closer to 0.00003 seconds. This is quite relevant with regards to understanding the idea behind limits, no?
Likewise, the CHIP didn't make the link between wall-clock duration and op-cost count, nor between wall-clock time and the actual limits.

Based on all this, I asked: what if we can run all good scripts in 10 milliseconds or less? If that is the case, do we really need such a complex solution for finding upper limits? We did the math, and yeah, we do.

So, then, how do we deal with people cheaply (1 sat per byte) filling blockspace with essentially a bunch of rubbish data?
Well, two ways.

  1. we need to have some system where the miner can decide to charge for that dummy push not 1 sat per byte, but more. Or make it wait much longer for confirmations. This is then able to ensure the throughput of 'normal' transactions. Memo dot cash transactions were mentioned here too :wink:
  2. BCA came up with the idea that we can introduce an opcode "OP_PFX" that is like a push without the data. You add the OP_PFX to the beginning of your input with a number specifying how much. You then pay for the amount you declared as if it were actual bytes on the blockchain, without it taking any actual space on the chain.
    So you pay for that dummy push, you get the rights for that dummy push and thus more CPU time, but it doesn't actually cost more than 2 or 3 bytes on the blockchain.

Nobody expects these new ideas to be ready in the next months, so we still add risk by activating an upgrade that can have negative side-effects. But the actual side-effects are less than expected and we have a road forward that is long-term viable.

2 Likes

Posting Jason's podcast appearance from yesterday here; in the episode he made a last call for technical reviews because the CHIP is very close to being finalized. The above discussion about non-standard transactions was mentioned as the one remaining open discussion point.

VM Limits & BigInt Podcast episode:

The Bitcoin Cash Podcast #130: Big Big Ints feat. Jason Dreyzehner

Additionally, Jason did a Reddit AMA titled:

"I proposed the Limits & BigInt CHIPs for the May 2025 Upgrade, Ask Me Anything!"

Podcast Overview

VM Limits

In the episode it was laid out how the CHIP has undergone a long process starting in 2021. The CHIP is now at a point where stakeholder statements started being collected. The date laid out by the CHIP process for finalization is Oct 1st.

The one numeric parameter mentioned as still being actively discussed is the 'Density Control Limit Calculation', which is said to only affect non-standard transactions (which can't be broadcast on the network). In the podcast it was also discussed how future CHIPs might want to expand the range of contracts by allowing the worst case to get worse; the calculated 'safety margin' of the CHIP is the important metric to look at here, and with the CHIP this margin would be 10-100 times the current limits.

It was mentioned how in 2020 BCH got signature checking under control (through the SigOps limit) and how that was an important prerequisite for the current VM CHIP. Jason explained how 'the tooling needed to build the tooling to build the tests' to get an accurate sense of the current limits took years to build up.
Right now honest/legit contracts can use less than 1% of the compute that malicious contracts can.

Further, Jason explained how the current CHIP has multiple layers of defense 'built in', so that if an implementation were to have a bug in the implementation of an opcode, these multiple layers would prevent the bug from becoming an exploit.

Jason confirmed that normal contract authors shouldn't ever see these cost-of-operation limits, and only need to worry about the transaction limits. Jason expects that some people will ultimately hit the limits when doing some of the new things enabled by BigInts. Careful thought went into the CHIP to future-proof it should we want to raise some of the limits in the future (for example, for post-quantum crypto).

BigInts

Next, BigInts were discussed: BigInt was explained to technically be one of the VM limits (with the re-targeted limits), but this differs from the perception that it is a stand-alone issue (and outside of the scope of the original CHIP), which is why it was separated into its own CHIP. The BigInt CHIP is downstream from the VM Limits CHIP and is exceptional in this way.

In the podcast it was extensively discussed how the BigInt CHIP was technically too late according to the CHIP deadlines, and how this puts us in the position of weighing the specifics of the CHIP being downstream from the VM Limits CHIP and pushing for a 10-month inclusion, versus waiting 22 months until next year's network upgrade. Jason said this consideration will still be added to the BigInt CHIP. Jason stressed the BigInt CHIP is technically just one sentence removing the maximum length on number size.

CHIP Process

There was extensive discussion about the CHIP process and how it relates to the VM limits CHIP & the BigInts CHIP.

ZCE

The last part of the podcast dives deep into ZCEs, which are best further discussed as a dedicated subject on the research forum.

4 Likes

I'd like to elaborate more on this. It would declare some "filler" bytes, without actually encoding them and wasting everyone's bandwidth / storage. For all practical purposes we'd treat them as if they exist, as if some imaginary <0x00..00> OP_DROP NOP sequence were actually present in the input, meaning they'd count against the min. relay fee, script size limit, TX size limit, and block size limit.

It could be defined as PREFIX_EXTRA_BUDGET: something encoded outside the input script, a "prefix" to the actual input script, like how we now have PREFIX_TOKEN (0xef) extending the output format; but instead of extending the output format, PREFIX_EXTRA_BUDGET would extend the input format:

  • previous output transaction hash (32 bytes, transaction hash): The hash of the transaction containing the output to be spent.
  • output index (4 bytes, unsigned integer): The zero-based index of the output to be spent in the previous output's transaction.
  • extra budget prefix and unlocking script length (variable, variable-length integer): The combined size of the extra budget prefix and the unlocking script in bytes.
  • [PREFIX_EXTRA_BUDGET] (1 byte, constant): Magic byte defined at codepoint 0xf0 (240); indicates the presence of an extra budget prefix.
  • [extra budget byte count] (variable, variable-length integer): Number of extra budget bytes. These bytes count toward the input's compute budget calculation, and also against script, TX, and block size limits.
  • unlocking script (variable, bytes): The contents of the unlocking script.
  • sequence number (4 bytes, unsigned integer): As of BIP-68, the sequence number is interpreted as a relative lock-time for the input.

For example, an input with a 100-byte (0x64) unlocking script and an additional budget of 1000 (0x3e8) bytes would be encoded as:

{txid} {index} 68 f0 fde803 {100 bytes of input script} {sequence}.

Such an input would have a compute budget of (41 + 100 + 1000) * 800 rather than just (41 + 100) * 800 (if no prefix). The smallest TX with this input would be 164 bytes if using the prefix, or 160 bytes if not, but the former would have about 8x the compute allowance of the latter.
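To make the proposed encoding concrete, here is a minimal parsing sketch of my own (not part of the proposal; it assumes Bitcoin-style CompactSize varints and only handles the 3-byte 0xfd form needed for a 1,000-byte extra budget):

#include <cstdint>
#include <vector>
#include <stdexcept>

static const uint8_t PREFIX_EXTRA_BUDGET = 0xf0; // proposed magic byte

struct ParsedInputScript {
    uint64_t extraBudgetBytes = 0;        // declared "virtual" filler bytes
    std::vector<uint8_t> unlockingScript;
};

// `scriptField` holds the bytes covered by the input's combined
// "extra budget prefix and unlocking script length" varint.
ParsedInputScript ParseInputScriptField(const std::vector<uint8_t>& scriptField)
{
    ParsedInputScript out;
    if (!scriptField.empty() && scriptField[0] == PREFIX_EXTRA_BUDGET) {
        // Only the 3-byte CompactSize form (0xfd + 2 bytes, little-endian)
        // is handled here, enough for the worked example above.
        if (scriptField.size() < 4 || scriptField[1] != 0xfd)
            throw std::runtime_error("unsupported varint form in this sketch");
        out.extraBudgetBytes = scriptField[2] | (uint64_t(scriptField[3]) << 8);
        out.unlockingScript.assign(scriptField.begin() + 4, scriptField.end());
    } else {
        out.unlockingScript = scriptField;
    }
    return out;
}

// Density-based budget per the CHIP formula quoted earlier in the thread,
// with the declared extra bytes counted as if they were real unlocking bytes.
uint64_t OpCostBudget(const ParsedInputScript& in)
{
    return (41 + in.unlockingScript.size() + in.extraBudgetBytes) * 800;
}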

The extra bytes would count against min. relay fee, script size, TX size, and block size limits, so one could fit a max. of 27,491 of these compute-intense 164-byte TXs into a 32MB block, rather than 200,000 of compute-light 160-byte TXs, and the block would have a min. fee of 0.32 BCH in both cases.

Miners could easily start pricing these extra budget bytes differently, if need be.

3 Likes

Here is the latest Poster Photo for the Vilma T. upgrade:

image

EDIT:

Another proposition (colors can be fixed/changed to green in GIMP, AI will not do it right anyway):

EDIT:

Got one more good one:

Or go with a non-AI-female name!
e.g. Quantum Surge

Did somebody actually not like the idea of giving the upgrades female names?

I think ABLA worked out great.

1 Like

BCH Podcast has published an approval post for VM Limits.

1 Like

CHIP 2021-05 VM Limits: Targeted Virtual Machine Limits is now activated on chipnet at block 227,228 (0000000000b8dc4625844fa367b12317645fac7c9afbc5fb8def4025a6822c86)! :tada:

https://x.com/bitjson/status/1857434429230076167

4 Likes