I will add an addendum to this, though. I’d really like to see this activate in 2025 as well, but we need some serious movement happening!
I would rather have everything well-tested, and especially thoroughly benchmarked, before we proceed with deployment.
A late upgrade is 1000x better than a serious malfunction with massive repercussions and a PR hit X years from now when we reach gigabyte blocks.
It’s already being tested; the test cases are extensive:
I’m a little annoyed with the “oh, maybe we can’t make it” attitude. Don’t make this into a self-fulfilling prophecy; like, what the hell, it’s a simple enough upgrade, and there are 2-3 months until “the freeze”. Most of the work has actually been done by Jason already; it’s just a matter of wrapping it in a nice CHIP wrapper and shipping it, and 2-3 months is more than enough time for that.
I would rather be too careful than sorry later.
Excellent; being annoyed raises brain activity, shifts processing into a “higher gear”, and makes a person rethink things twice.
It’s just evolution.
Being too nice can often result in closed-loop thinking and “small circles of love and pleasure” where everybody follows everybody else; such groups are bound to make mistakes, because everybody is afraid to criticize anybody.
I am not saying “slow down” or “stop”. I am just saying “be veeeeeery careful”.
Performance impact can be assessed by “big O” complexity analysis of the opcodes and confirmed by benchmarks. I have some preliminary results obtained from Calin today, and they look promising. I extracted the interesting part (ops near the cost limit):
# ID, TxByteLen, Hz, Cost, CostPerByte, Description
trxhzt, 366, 11876.3, 1.000, 1.000000, "[baseline] 2 P2PKH inputs, 2 P2PKH outputs (one Schnorr signature, one ECDSA signature) (nonP2SH)"
6w680g, 8419, 12328.7, 0.963, 0.041878, "[benchmark] OP_MUL 2048-byte number (alternating bits set) and 2048-byte number (alternating bits set) (P2SH32)"
34vzam, 8408, 8772.7, 1.354, 0.058930, "[benchmark] OP_MUL 4096-byte number (alternating bits set) and 4096-byte number (alternating bits set) (nonP2SH)"
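For anyone who wants to sanity-check the table: the Cost and CostPerByte columns appear to be derived from Hz and TxByteLen like this (my reading of the data, not an official definition):

# python (illustrative):
BASELINE_LEN, BASELINE_HZ = 366, 11876.3  # the trxhzt baseline row

def derive(tx_byte_len, hz):
    cost = BASELINE_HZ / hz                        # CPU cost relative to baseline
    cost_per_byte = cost * BASELINE_LEN / tx_byte_len  # normalized by TX size
    return round(cost, 3), round(cost_per_byte, 6)

print(derive(8419, 12328.7))  # (0.963, 0.041878), matches row 6w680g
print(derive(8408, 8772.7))   # (1.354, 0.05893), matches row 34vzam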
So, with the budgeting system introduced by the VM Limits CHIP, a 400-byte TX could be allowed to do one 2 kB x 2 kB multiplication, or a bunch of 256-bit multiplications, but in any case the CPU cost can’t exceed that of a typical P2PKH TX, because exceeding the budget means an invalid TX.
With the VM Limits CHIP framework you’d get an allowance of how many bytes your TX can operate on, and you can’t exceed it no matter what you do, because doing so would result in an invalid TX.
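As a toy illustration of that allowance (my simplification: I assume the budget scales linearly with TX size relative to the 366-byte baseline; the CHIP defines the exact accounting):

# python (toy budget check; linear scaling is my assumption):
BASELINE_LEN = 366

def fits_budget(tx_byte_len, relative_cost):
    """True if a script's CPU cost (relative to baseline) stays within budget."""
    return relative_cost <= tx_byte_len / BASELINE_LEN

print(fits_budget(400, 0.963))  # True: one 2kB x 2kB OP_MUL fits in a 400-byte TX
print(fits_budget(400, 1.354))  # False: the 4kB x 4kB OP_MUL would invalidate it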
The BigInt CHIP is basically the VM Limits CHIP applied to arithmetic opcodes the same way as to others (hash ops, stack ops): as soon as the counter would be exceeded, the TX evaluates as invalid. It’s a no-brainer IMO, but Jason did the responsible thing and extracted it as a separate CHIP.
The impact of BigInt on node operating costs will be zero, and we can have volunteers reproduce the benchmark results on various hardware as a final verification.
BigInt does introduce an additional implementation risk, which should be mentioned in the CHIP’s “risks” section: that of correctly implementing BigInt operations using int64 ops in C++. This risk will be mitigated by the node developers’ quality-control process and an extensive test suite, which can be verified against multiple open-source BigInt libraries.
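To illustrate the mitigation, a minimal sketch of cross-checking a limb-based implementation against an arbitrary-precision reference; here Python’s native int stands in for an open-source BigInt library, and limb_mul is a hypothetical example, not BCHN’s code:

# python (sketch: schoolbook multiplication over 32-bit limbs, the kind of
# routine a node might build from int64 ops, verified against Python's int):
import random

LIMB_BITS = 32
LIMB_MASK = (1 << LIMB_BITS) - 1

def to_limbs(n):
    limbs = []
    while n:
        limbs.append(n & LIMB_MASK)
        n >>= LIMB_BITS
    return limbs or [0]

def from_limbs(limbs):
    n = 0
    for limb in reversed(limbs):
        n = (n << LIMB_BITS) | limb
    return n

def limb_mul(a, b):
    out = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = out[i + j] + ai * bj + carry  # always fits in 64 bits
            out[i + j] = t & LIMB_MASK
            carry = t >> LIMB_BITS
        out[i + len(b)] = carry
    return out

for _ in range(100):
    x, y = random.getrandbits(2048), random.getrandbits(2048)
    assert from_limbs(limb_mul(to_limbs(x), to_limbs(y))) == x * y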
Hi all, I just posted some updates to the Limits topic:
Just want to add for anyone interested: the “mega-bench” branch here includes a combinatorial set of benchmarks over all 23 BigInt-manipulating opcodes at different combinations of lengths [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192] and several types of “fills” (to expose performance differences related to e.g. carry behavior). These are the tests @cculianu got running against BCHN last week.
Since there’s >1GB of tests now, it’s going to take me a bit longer to merge these and continue building out additional test vectors and benchmarks (I’m having to revise how Libauth generates and runs them to support incremental builds and benchmarking, and I’m also going to compress everything by default), but I should have things cleaned up more over the next week or two. Until then, implementers can just grab the .zip files from that branch.
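For a rough idea of the combinatorial structure (the lengths are from the post above; the opcode and fill lists here are illustrative stand-ins, not the exact sets in the branch):

# python (illustrative; the real set covers all 23 opcodes and its own fills):
from itertools import product

LENGTHS = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192]
OPCODES = ["OP_ADD", "OP_SUB", "OP_MUL", "OP_DIV", "OP_MOD"]  # subset of the 23
FILLS = ["alternating bits set", "all bits set"]  # first is from the data; second hypothetical

cases = list(product(OPCODES, LENGTHS, LENGTHS, FILLS))
print(len(cases))  # 5 * 14 * 14 * 2 = 1960 combinations for this toy subset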
I also just merged this: Consider dropping the VM number length limit · Issue #1 · bitjson/bch-bigint · GitHub (thanks for opening @bitcoincashautist!)
Going to copy the latest rationale:
Removal of Number Length Limit
With the activation of the Arithmetic Operation Cost Limit, the additional limitation on VM number length has no impact on worst-case transaction validation performance (see Tests & Benchmarks). This proposal removes the limit, allowing valid VM numbers to extend to the stack element size limit (A.K.A. MAX_SCRIPT_ELEMENT_SIZE) of 10,000 bytes. See CHIP: Targeted Virtual Machine Limits.
Alternatively, this proposal could raise the limit to a higher constant value like 258, the constant selected by Satoshi Nakamoto in reverted makefile.unix wx-config -- version 0.3.6 (July 29, 2010). However, because the limit is no longer necessary for mitigating worst-case transaction validation cost, selection of any particular constant would be arbitrary.
By fully removing the limit, overall protocol complexity is reduced, simplifying both future VM implementations and contract development. For VM implementations, eliminating out-of-range cases significantly reduces the combinatorial set of possible inputs and outputs for the numeric operations: OP_1ADD (0x8b), OP_1SUB (0x8c), OP_NEGATE (0x8f), OP_ABS (0x90), OP_NOT (0x91), OP_0NOTEQUAL (0x92), OP_ADD (0x93), OP_SUB (0x94), OP_MUL (0x95), OP_DIV (0x96), OP_MOD (0x97), OP_BOOLAND (0x9a), OP_BOOLOR (0x9b), OP_NUMEQUAL (0x9c), OP_NUMEQUALVERIFY (0x9d), OP_NUMNOTEQUAL (0x9e), OP_LESSTHAN (0x9f), OP_GREATERTHAN (0xa0), OP_LESSTHANOREQUAL (0xa1), OP_GREATERTHANOREQUAL (0xa2), OP_MIN (0xa3), OP_MAX (0xa4), and OP_WITHIN (0xa5). For contract authors, eliminating the possibility of out-of-range errors prevents a class of potential vulnerabilities arising from a contract system’s failure to validate that intermediate arithmetic results never exceed an (uncommonly-encountered) maximum number length limit.
Non-Inclusion of Implementation-Specific Technical Details
This proposal specifies only the necessary change to the consensus protocol: removal of the number length limit.
The software changes required to support this consensus change differ significantly from implementation to implementation. VM implementations which already internally use arbitrary-precision arithmetic for VM number manipulation may only need to disable code enforcing the previous limit, while other implementations may be required to integrate an arbitrary-precision arithmetic library or language primitive. In all cases, implementations should verify their functional behavior and performance characteristics against the Tests & Benchmarks.
Non-Inclusion of VM Number Format or Operation Descriptions
Beyond dropping the unnecessary limit on VM number length, this proposal does not modify any other properties of the VM. Notably, the precise format and behavior of VM numbers across all VM operations – especially OP_1ADD (0x8b), OP_1SUB (0x8c), OP_NEGATE (0x8f), OP_ABS (0x90), OP_NOT (0x91), OP_0NOTEQUAL (0x92), OP_ADD (0x93), OP_SUB (0x94), OP_MUL (0x95), OP_DIV (0x96), OP_MOD (0x97), OP_BOOLAND (0x9a), OP_BOOLOR (0x9b), OP_NUMEQUAL (0x9c), OP_NUMEQUALVERIFY (0x9d), OP_NUMNOTEQUAL (0x9e), OP_LESSTHAN (0x9f), OP_GREATERTHAN (0xa0), OP_LESSTHANOREQUAL (0xa1), OP_GREATERTHANOREQUAL (0xa2), OP_MIN (0xa3), OP_MAX (0xa4), and OP_WITHIN (0xa5) – are part of network consensus and do not otherwise change as a result of this proposal. For the avoidance of doubt, see the Tests & Benchmarks.
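For reference, here is the VM number format sketched in Python, as implemented by CScriptNum in Bitcoin-derived codebases (my summary, not the CHIP’s text): little-endian, sign-magnitude with the sign in the high bit of the most significant byte, minimally encoded, with zero as the empty byte string.

# python (sketch; encode_vm_number is a hypothetical helper name):
def encode_vm_number(n):
    if n == 0:
        return b""
    negative, magnitude = n < 0, abs(n)
    out = bytearray()
    while magnitude:
        out.append(magnitude & 0xFF)
        magnitude >>= 8
    if out[-1] & 0x80:
        out.append(0x80 if negative else 0x00)  # high bit taken: add a sign byte
    elif negative:
        out[-1] |= 0x80
    return bytes(out)

assert encode_vm_number(127) == b"\x7f"
assert encode_vm_number(128) == b"\x80\x00"  # extra byte keeps 128 positive
assert encode_vm_number(-128) == b"\x80\x80"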
I’m now working on this CHIP in parallel with @bitjson, and there are 3 pull requests open:
- add detailed specification
- add costs & risks sections
- add some practical applications to benefits section
Still waiting on Jason to review them, and if anyone wants to see the CHIP with those merged in the meantime, they can see it in my working repo here:
Amazing, exciting work as always.
Some feedback:
Under Arithmetic Operations → OP_MOD (0x97):
“e.g. 7/3 will return 1, -7/3 will return -1, 7/-3 will return 1, and -7/3 will return -1.”
The second and fourth examples are the same, so I assume you meant “and -7/-3 will return -1.” at the end.
Under Ternary Operations → OP_WITHIN (0xa5):
“The 2nd-to-top item defines the “left” and includive…”
I think that’s supposed to be “inclusive”.
Also, you might want to clarify that the examples are in [evaluated][left boundary][right boundary] order; I was confused and had to reread a few times.
Under Arithmetic Operations → General requirements:
Typo: “With this upgrade, that requirement has been removes” should say “removed”.
Under Unary Operations → OP_0NOTEQUAL (0x92):
It says “If value is not 0 change it to 1. If it is 0, leave it unchanged.”
But later says “if executed on aabbdd the result will be an empty stack item (0).”
Shouldn’t the result be 01 according to the description?
Thanks for your comments and feedback; I’ve implemented all of them in the repo now (along with some formatting improvements to the examples)!
Welcome to Bitcoin Cash Research!
Just for reference, I found nice documentation on big-integer algorithms (from the bc documentation; bc is an arbitrary-precision numeric processing language whose interpreter binary ships with most Linux distros):
Could be useful if later we want to add ops like OP_EXP.
The CHIP says BigInt libraries are available, but considering the new unlimited design, it is worth mentioning the standard upper limits on big-integer size in major languages. The reasons are:
- 258 bytes… OK, probably any arbitrary-precision int library can handle that. Maybe. But you have a little doubt unless you have actually checked the spec of a given language/library.
- 10k bytes… OK, definitely getting into “probably, but really not sure” territory.
- future 10k+ bytes…
I have looked into the answers to these to convince myself, but authoritative sources/examples should be cited in the CHIP. I’m not sure what the right sources would be. For example, I found the limit in the V8 spec for BigInt, but that’s not exactly the JS spec.
In addition to this, it needs to be confirmed that those libraries perform operations 1:1 with the intent of the VM math ops. Especially for negative numbers, the handling of integer math diverges across systems.
Already from the simple example below, you can see that AT LEAST one of Python or JavaScript will need special handling to match spec integer behavior exactly. This is a problem that exists already, but I haven’t seen it spelled out anywhere. Minimally:
- What are the exact boundaries and edge conditions on arbitrary size VM integers?
- What is the exact behavior for negative numbers?
// javascript:
-10n / 3n
// result: -3n
# python:
-10 // 3
# result: -4
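To make the intended behavior concrete, here is truncating (round-toward-zero) division and the matching modulo in Python, consistent with the OP_MOD examples quoted earlier (the vm_div/vm_mod names are mine):

# python (sketch of truncating semantics; vm_div/vm_mod are hypothetical names):
def vm_div(a, b):
    q = abs(a) // abs(b)
    return -q if (a < 0) != (b < 0) else q

def vm_mod(a, b):
    return a - vm_div(a, b) * b  # remainder's sign follows the dividend

print(vm_div(-10, 3), -10 // 3)  # -3 -4 (truncated vs. floored quotient)
print(vm_mod(-10, 3), -10 % 3)   # -1 2 (truncated vs. floored remainder)
print(vm_mod(7, 3), vm_mod(-7, 3), vm_mod(7, -3), vm_mod(-7, -3))  # 1 -1 1 -1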
To pre-empt, this is not a shitting contest for languages you don’t like. It’s a practical issue that the CHIP should cover or at least mention.
That’s why I added examples here: https://github.com/bitjson/bch-bigint/pull/4/files#diff-5a831ea67cf5cf8703b0de46901ab25bd191f56b320053be9332d9a3b0d01d15R175
Should add some text saying “rounds towards zero (truncation)”, since “floors” would suggest rounding towards negative infinity.
GP has proposed this PR, which would work as a replacement for PR 4 and in combination with PRs 2 and 3. We think these changes, in some form, are required rather than optional before our endorsement can be considered for 2025 activation.
Great, thanks to General Protocols for the review! I’ll work on incorporating the feedback on both CHIPs.
Heads up, I’m working with Calin on creating an extensive suite of property tests for arithmetic opcodes, draft/WIP here: https://github.com/bitjson/bch-bigint/pull/7/files
Example: we know that a + b == b + a must hold no matter what, so we run this script: <a> <b> OP_2DUP OP_ADD OP_SWAP OP_ROT OP_ADD OP_EQUAL, and we test it for many random values of a and b (such that a + b <= MAX_INT); the script must always evaluate to true.
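For illustration, here’s one way such (a, b) pairs could be generated (my sketch, not the suite’s actual code; MAX_INT here stands for the largest magnitude encodable in a 10,000-byte VM number):

# python (illustrative):
import random

MAX_INT = (1 << (8 * 10_000 - 1)) - 1  # largest magnitude in 10,000 bytes

def commutativity_vectors(count):
    """Yield random (a, b) pairs with a + b still in VM number range."""
    while count > 0:
        a = random.randint(-MAX_INT, MAX_INT)
        b = random.randint(-MAX_INT, MAX_INT)
        if abs(a + b) <= MAX_INT:
            count -= 1
            yield a, b  # feed as <a> <b> to the script; expect true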
So far so good: all the tests implemented so far pass as expected, giving us more confidence in BCHN’s BigInt implementation.
Test suite completed, all checks pass!
The suite executes a total of 1,765,429 script evaluations, testing all combinations of arguments for each test!
Edit: the property test plan has moved from the CHIP into the BCHN repo (MR #1876) at /doc/bigint-script-property-tests.md.