The problem is over-complicating things: an open standard that accumulates too many little limits and features. I call it an open standard somewhat optimistically, but if we want more implementations in the decades to come, we have to treat it as one.
For instance, the quadratic hashing issue caused a consensus change in BitcoinCash several years ago, based on very similar discussions that did not include actual real-hardware testing. Just models and speculation. Even though, years before that, the issue had already been shown to be a non-issue. I want to make sure we don’t make the same type of mistake again.
The idea I showed above, assigning a cost to each opcode that is executed, would replace a lot of different concepts:
Big chunks of:
And indeed the proposed 1000 hash-units from this CHIP draft.
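To make the idea concrete, here is a minimal sketch of what a single per-opcode cost budget could look like. All opcode names and cost numbers below are illustrative assumptions of mine, not part of any proposal: the point is only that one accumulator can subsume separate opcode-count, sigop, and hash-unit limits.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical opcode set and cost table. The numbers are purely
// illustrative: expensive operations (signature checks, hashing)
// are charged more than cheap stack ops.
enum class Op { Dup, Add, Sha256, CheckSig };

int64_t OpCost(Op op) {
    switch (op) {
        case Op::Dup:      return 1;
        case Op::Add:      return 1;
        case Op::Sha256:   return 50;   // hashing costs more
        case Op::CheckSig: return 1000; // sig checks dominate
    }
    return 1;
}

// Run a script against a single cost budget; returns false when the
// script exceeds its allowance. One limit, instead of several
// special-purpose ones.
bool RunWithBudget(const std::vector<Op>& script, int64_t budget) {
    int64_t spent = 0;
    for (Op op : script) {
        spent += OpCost(op);
        if (spent > budget) return false; // abort: script too expensive
        // ... actual execution of the opcode would happen here ...
    }
    return true;
}
```

The key property is that tuning becomes a matter of adjusting one table against measured hardware performance, rather than debating several independent consensus limits.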
So, to reiterate my point, requirements are great input to this discussion. I fully trust Jason to be capable and motivated to find the lower limits he feels we should be able to process in a BitcoinCash full node.
Now let's check what the actual, tested-in-running-software bottlenecks are before we introduce new consensus-level limits, no?
Edit: sigop limits were introduced by Core a very long time ago, similarly without doing the actual research. That is why we replaced them in BCH at the first opportunity we got: they were useless at protecting the thing they were supposed to protect.
So this is not a new thing; it is exactly the problem I’m trying to fight here.