As this does not seem to be taken seriously, let me type this out again so that maybe we can fix it before the CHIP is frozen.
Edit: this is not a new issue. I notified this thread about it quite some time ago and had long voice conversations with Jason about it. I listened to all his arguments and spent time verifying the numbers he assumed. In the end I stated very clearly that this is an issue that should be solved and suggested various solutions. Communication died at that point. So, this leaves me no choice other than to air the dirty laundry here.
The system has a misalignment of incentives as Jason designed it.
A 10KB transaction input is allotted an execution budget of just below 8 million opCost.
Here is an example that actually stays within this limit (7,474,730 opCost is used):
#include <cstdint>
#include <vector>

#include <script/script.h>  // CScript and the OP_* constants from the node codebase

// Build a roughly 9.9KB unlocking script: one 9296-byte push, followed by
// 200 rounds of OP_DUP OP_9 OP_DIV, each dividing a roughly 9KB number by 9.
std::vector<uint8_t> bigItem;
for (int i = 0; i < 9296; ++i) {
    bigItem.push_back(10);
}
CScript script;
script << bigItem;
for (int i = 0; i < 200; ++i) {
    script << OP_DUP << OP_9 << OP_DIV;
}
Since a 10KB (total) script is the maximum, this is about the most opCost a single script can gather.
The above executes in 1 ms on my laptop. Use OP_MUL instead and it runs in 500 microseconds.
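For reference, here is the arithmetic behind those numbers as I understand the CHIP: the budget is derived from the unlocking bytecode length at 800 opCost per byte, counting 41 bytes of fixed per-input overhead. A minimal sketch (my own arithmetic, consistent with the figures in this post):

#include <cstdint>
#include <cstdio>

int main() {
    const int64_t pushOverhead = 3;                            // PUSHDATA2 prefix of the 9296-byte item
    const int64_t scriptSize = pushOverhead + 9296 + 200 * 3;  // 9,899 bytes in total
    const int64_t limit = (scriptSize + 41) * 800;             // 7,952,000 opCost budget
    const int64_t used = 7'474'730;                            // measured for the example above
    std::printf("size=%lld limit=%lld used=%lld fits=%d\n",
                (long long)scriptSize, (long long)limit, (long long)used, int(used <= limit));
}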
Now, if I were to move this script to live in the transaction output, I could no longer run it.
To run it I would be forced to add big pushes to the spending transaction, purely to give the script enough budget to run.
There are two misalignments of incentives here:
- Scripts have to be stored in the scriptSig (i.e. in the spending transaction) and not in the transaction output (the locking transaction). The total number of bytes on the blockchain goes up as a result; the overhead is between 25 and 40 bytes.
- To get more execution time, bogus bytes may be added in order to increase the opCost budget (see the sketch below).
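Here is that sketch: the padding math for moving the 200-division program from the first example into the locking script, assuming the same per-byte budget as above. The construction is mine, not from the CHIP:

#include <cstdint>
#include <cstdio>

int main() {
    const int64_t neededCost = 7'474'730;               // the 200 divisions from the example above
    // If the program lives in the locking script, the unlocking script is empty
    // and the spender only gets the fixed-overhead budget:
    const int64_t emptyInputBudget = (0 + 41) * 800;    // 32,800 opCost, nowhere near enough
    // To run it anyway, pad the scriptSig with a bogus push until the budget fits:
    const int64_t paddingBytes = (neededCost + 799) / 800 - 41;   // about 9,303 bytes of filler
    std::printf("budget without padding: %lld, bogus bytes needed: %lld\n",
                (long long)emptyInputBudget, (long long)paddingBytes);
}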
In Bitcoin Cash the fees are intentionally kept low; as a result, the cost of pushing more bytes in order to get a bigger CPU allowance is similarly very low. It is inevitable that people will start pushing bogus information onto the blockchain just to get more CPU time.
Notice that wasting blockchain space is, in the long term, much more expensive than CPU time: historical transactions do not get their scripts validated again, yet their bytes remain part of the blockchain forever.
Example two
Consider usage of OP_UTXOBYTECODE. This copies data from a UTXO other than the one we are spending. The result is an input script that has nearly no bytes, so the script will end up being severely restricted in what it can do.
You literally can't do a single OP_DIV from the above example in this case (limit=37600, cost=46881).
Since pushes are the only thing allowed in the spending transaction, this has the potential to lock funds on the chain forever, or at least until the rules are relaxed.
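For completeness, the quoted numbers line up with the same assumed density formula; a quick sanity check (my arithmetic, not the CHIP's reference code):

#include <cstdint>
#include <cstdio>

int main() {
    const int64_t limit = 37'600;                       // quoted above
    const int64_t unlockingBytes = limit / 800 - 41;    // corresponds to 6 bytes of unlocking bytecode
    const int64_t oneDivision = 46'881;                 // quoted cost of a single big OP_DIV
    std::printf("unlocking bytes: %lld, one division exceeds the budget by %lld\n",
                (long long)unlockingBytes, (long long)(oneDivision - limit));
}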
Understanding "cost".
The CHIP confusingly talks about opCost and the limits as if they are the same thing. There is a term called "density control length", which may be the source of the confusion since that is the actual basis for the limits, but it is not named as such.
In simple English:
opCost is calculated based on actual work done. Each opcode has its own formula for its op-cost, so running the program adds to the tallied op-cost every step of the way.
This is a concept used by various other chains; nobody has a problem with it.
When the cost exceeds the limit, the script fails. Simple as that.
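To illustrate that these are two separate concepts, here is a minimal sketch of the metering as I read it, in toy C++ rather than the CHIP's reference implementation, and assuming the standard budget of 800 opCost per unlocking-bytecode byte plus 41 bytes of per-input overhead:

#include <cstdint>
#include <stdexcept>
#include <vector>

struct Step { int64_t opCost; };   // the cost of one executed opcode, per its own formula

void Run(const std::vector<Step>& program, int64_t unlockingBytecodeLength) {
    // Assumed budget: 800 opCost per input byte, including 41 bytes of fixed
    // per-input overhead (the "density control length").
    const int64_t limit = (unlockingBytecodeLength + 41) * 800;
    int64_t tally = 0;
    for (const Step& step : program) {
        tally += step.opCost;          // cost: actual work done, accumulated every step
        if (tally > limit)             // limit: fixed up front by input size alone
            throw std::runtime_error("opCost limit exceeded, script fails");
    }
}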
Once we understand that the cost is a separate concept from the limit, we can agree that the cost calculation is great. Don't change that. The limits, however, cause the problems described in this post.
So this leaves the question of where the limit comes from.
The CHIP explains that a transaction has a maximum size, which is then spread over all its inputs, and thus the total amount of CPU time spent on a single transaction is bounded.
Using the example above, a 100KB transaction can have 10 inputs of 10KB each, because 10KB is the per-input maximum. In my example that caps the CPU time to run the scripts of this transaction at about 10 ms (which is to say, very fast).
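A quick back-of-the-envelope check of that bound, using the same assumed budget formula and the 1 ms measurement from above:

#include <cstdint>
#include <cstdio>

int main() {
    const int64_t inputs = 10;
    const int64_t bytesPerInput = 10'000;                        // a 100KB transaction in total
    const int64_t budgetPerInput = (bytesPerInput + 41) * 800;   // ~8 million opCost each
    const int64_t txBudget = inputs * budgetPerInput;            // ~80 million opCost
    // The divisions above burned ~7.5 million opCost in ~1 ms, so the whole
    // transaction stays around 10 ms of script execution.
    std::printf("per input: %lld opCost, whole tx: %lld opCost\n",
                (long long)budgetPerInput, (long long)txBudget);
}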
As such this seems like a sane solution at first. Workarounds that pack in more inputs don't work since the transaction size is still limited…
Yet the solution does have the bad side effects I mentioned above, and unfortunately my repeatedly pointing this out has yet to make it into the CHIP. I think the CHIP rules state that problems should be mentioned clearly in the CHIP, whether the author agrees with them or not.
Future opcodes
Today we had some discussions about concepts like MAST. We contemplated that OP_EVAL may be better since it allows interaction with things like OP_UTXOBYTECODE. A cool idea, since it means your script can stay very small: the actual locking script from a cashtoken can be copied and reused from another input.
The downside is that the limits will be so low that this is practically not going to work, unless we add a bogus push to work around the limits derived from the unlocking script.
Conclusion
The CHIP as it stands today has economic issues. Users are incentivized to pollute the blockchain in order to "buy" more CPU cycles (in the form of opCost).
This should not be activated as it is today. If this can't be fixed, we should move this CHIP to May 2026.