This is definitely needed, indeed.
I’ve been blunt with Jason in recent days and will be again; hope he will understand.
The bottom line with limits for the VM is that, in practice, they are limits on the C++ implementation, since that is the only realistic place where we’ll run those validations with massive throughput expectations.
For stack limits, for instance, the current design uses a vector of byte-arrays. Two, actually, since we have an alt-stack too. By far the riskiest part of heavy stack usage is then the calls to malloc (which std::vector makes internally). Especially with an expectation of tens of thousands of script validations every second, malloc is a bad choice and its performance is unpredictable.
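For context, the rough shape of that design is something like the following (a sketch of my own, not the exact code of any node implementation); the point is that every pushed element is its own heap-allocated buffer:

```cpp
#include <cstdint>
#include <vector>

// Each stack element is an independent byte-array, so pushing data onto
// the stack can call into malloc via std::vector's internal allocation.
using StackElement = std::vector<uint8_t>;

struct ScriptStacks {
    std::vector<StackElement> stack;     // main stack
    std::vector<StackElement> altstack;  // alt stack
};

// At tens of thousands of script validations per second, this per-push
// allocation pattern turns the allocator into a hot, contended path.
```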
The way to arrive at good stack limits, then, is to start a small project (likely less than a man-month of work): a single class that does a single memory allocation at full-node startup and represents the stack for one specific script invocation. @cculianu will likely understand the idea without much explanation.
If that approach is taken, I expect the stack limits in script can trivially go up to some 25 MB without any issues for the full node. Even running 100 threads in parallel means only 5 GB of static allocation for this.
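A minimal sketch of the idea, assuming one fixed-capacity buffer owned per validation thread (all names here are hypothetical, not existing code):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <memory>
#include <stdexcept>
#include <vector>

// Hypothetical sketch: one contiguous buffer allocated once at node startup
// (per validation thread) backs every element of the script stack for a
// single script invocation. No malloc happens during validation itself.
class PreallocatedScriptStack {
public:
    PreallocatedScriptStack(size_t capacityBytes, size_t maxElements)
        : m_buffer(std::make_unique<uint8_t[]>(capacityBytes)),
          m_capacity(capacityBytes) {
        m_offsets.reserve(maxElements);  // bookkeeping also allocated up front
    }

    // Push a byte-array; fails the script (throws here, for brevity) if the
    // configured stack limit would be exceeded.
    void push(const uint8_t* data, size_t len) {
        if (m_used + len > m_capacity)
            throw std::runtime_error("script stack limit exceeded");
        std::memcpy(m_buffer.get() + m_used, data, len);
        m_offsets.push_back(m_used);
        m_used += len;
    }

    // Pop the top element; pure bookkeeping, no deallocation.
    void pop() {
        if (m_offsets.empty())
            throw std::runtime_error("pop from empty stack");
        m_used = m_offsets.back();
        m_offsets.pop_back();
    }

    // Reset between script invocations; the buffer itself is reused.
    void clear() {
        m_used = 0;
        m_offsets.clear();
    }

private:
    std::unique_ptr<uint8_t[]> m_buffer;
    size_t m_capacity = 0;
    size_t m_used = 0;
    std::vector<size_t> m_offsets;  // start offset of each element
};
```

If each of the 100 threads owns one such 25 MB buffer for the main stack and one for the alt-stack, that works out to roughly the 5 GB of static allocation mentioned above.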
So, to repeat, the bottom line is that we need an actual implementation capable of full, parallel validation of scripts in order to find the limits that are possible today and will remain so for a long time to come (decades).
Nobody doubts that the current limits should be raised, but my vote for what the new limits should be goes to a technical analysis based on actual written code.