This proposal increases the maximum length of VM numbers again (following the 2022 increase from 4 to 8 bytes), this time from 8 bytes to 258 bytes, the first limit selected by Satoshi in 2010.
—
Hi everyone,
I’ve spent the past few months reviewing and benchmarking various solutions for the VM Limits CHIP; in the course of that research (and earlier, while working on Jedex), I devoted quite a bit of time to exploring approaches for contracts to use or emulate higher-precision math.
In short: emulated precision is woefully inefficient in the VM system, and Bitcoin Cash is well designed to offer native support for high-precision arithmetic.
Some background
Satoshi originally designed the VM to support arbitrary-precision arithmetic: numbers had no direct limits at all, and the sign-magnitude number format it uses is typically more efficient and less complex for arbitrary-precision implementations than other options (Satoshi originally used OpenSSL’s big number library).
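To illustrate the format: VM numbers encode the magnitude little-endian, with the sign carried in the high bit of the final byte. Here’s a minimal TypeScript sketch (the function names are illustrative, not any particular implementation’s actual API):

```typescript
// Encode a bigint as a VM number: little-endian magnitude, with the sign
// carried in the high bit of the most significant (final) byte.
const bigIntToVmNumber = (value: bigint): Uint8Array => {
  if (value === 0n) return new Uint8Array(0); // zero is the empty byte string
  const negative = value < 0n;
  let magnitude = negative ? -value : value;
  const bytes: number[] = [];
  while (magnitude > 0n) {
    bytes.push(Number(magnitude & 0xffn));
    magnitude >>= 8n;
  }
  // If the top byte's high bit is already set, append a padding byte so the
  // sign bit can't corrupt the magnitude.
  if ((bytes[bytes.length - 1] & 0x80) !== 0) bytes.push(0);
  if (negative) bytes[bytes.length - 1] |= 0x80;
  return Uint8Array.from(bytes);
};

// Decode a VM number back to a bigint.
const vmNumberToBigInt = (bytes: Uint8Array): bigint => {
  if (bytes.length === 0) return 0n;
  const negative = (bytes[bytes.length - 1] & 0x80) !== 0;
  let result = 0n;
  for (let i = bytes.length - 1; i >= 0; i--) {
    const byte = i === bytes.length - 1 ? bytes[i] & 0x7f : bytes[i];
    result = (result << 8n) | BigInt(byte);
  }
  return negative ? -result : result;
};
```

So 1 encodes as 0x01 and -1 as 0x81, while 128 requires a padding byte (0x80 0x00) to keep the sign bit clear.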
In July 2010, Satoshi began sneaking emergency patches into his Bitcoin implementation to prevent a variety of possible attacks against the network. His first big VM patch instituted a range of changes and limits, among them capping VM numbers at 258 bytes. However, in August 2010, he decided to remove the VM’s dependency on OpenSSL entirely, following instability he had previously discovered in the right-shift operation, which he now realized would have split the network. In that patch, he limited VM numbers to 4 bytes (no longer having a big number implementation he trusted) and disabled a number of opcodes (many of which Bitcoin Cash has since re-enabled).
Relation to Limits CHIP
VM numbers are limited to 8 bytes because the VM’s current limit system cannot account for the actual cost of expensive arithmetic operations.
The Limits CHIP re-examines this system to replace the existing opcode and stack item length limits with better-targeted equivalent limits, allowing contracts more power/flexibility without increasing transaction processing or memory requirements.
It turns out that to replace these limits, it’s prudent for us to correctly account for the cost of expensive arithmetic operations: OP_MUL, OP_DIV, and OP_MOD are O(n^2) in the worst case (where n is the operand byte length), while OP_ADD and OP_SUB are only O(n).
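To make that cost difference concrete, here’s a minimal TypeScript sketch of length-based cost accounting; the formulas are placeholders for illustration, not the constants actually specified by the Limits CHIP:

```typescript
// Illustrative cost accounting for arithmetic operations, charged by operand
// byte length: linear for addition and subtraction, quadratic (product of
// operand lengths) for multiplication, division, and modulo. Placeholder
// formulas only; the real limit system is defined by the Limits CHIP.
type ArithmeticOpcode = 'OP_ADD' | 'OP_SUB' | 'OP_MUL' | 'OP_DIV' | 'OP_MOD';

const arithmeticCost = (
  opcode: ArithmeticOpcode,
  aByteLength: number,
  bByteLength: number,
): number =>
  opcode === 'OP_ADD' || opcode === 'OP_SUB'
    ? aByteLength + bByteLength // O(n)
    : aByteLength * bByteLength; // O(n^2) in the worst case
```

Under these placeholder formulas, multiplying two 258-byte numbers is charged 66,564 units, while adding them costs just 516.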
Further, it’s critical that we carefully test the arithmetic-limiting system and high-precision arithmetic itself together: we want to discover any potential issues with the arithmetic limits before they’re activated, and the safest way to do that is to actually use them.
2025 Deployment
I had been considering proposing this CHIP for 2026, but after chatting with other developers, and given that a large part of the implementation (the arithmetic cost limit) must necessarily be activated by the Limits CHIP, I think we can and should increase the VM number limit at the same time.
Review & Feedback
Despite the close relation to the Limits CHIP, I’ve written this as a separate CHIP for ease of discussion and feedback.
You’ll notice the technical specification has only one sentence; the necessary changes will be quite different from implementation to implementation, so I expect that the test vectors will do the majority of the lifting in this CHIP. (In Libauth, for example, the underlying implementation has always used arbitrary-precision arithmetic, so only the constant specifying the limit needs to be changed, 8 → 258. BCHN, on the other hand, would need to re-introduce big number arithmetic.)
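To sketch roughly what that one-constant change looks like (identifiers here are hypothetical, not Libauth’s actual exports):

```typescript
// Previously (since the 2022 upgrade): VM numbers limited to 8 bytes.
// const maximumVmNumberByteLength = 8;

// With this CHIP: the 258-byte limit from Satoshi's July 2010 patch.
const maximumVmNumberByteLength = 258;

// Enforcement is a simple length check before decoding:
const isWithinVmNumberLimit = (vmNumber: Uint8Array): boolean =>
  vmNumber.length <= maximumVmNumberByteLength;
```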
Thanks to @cculianu for implementing Libauth’s benchmarking suite in BCHN and answering my questions over the past few months, and thank you to everyone who joined in the impromptu discussion in various Telegram groups recently!