I’d like to talk a bit about this one…
There are a couple of assumptions that Core pushed upon us that really aren't logical, and they seem to underlie the conclusions you reached.
In short, I think the approach should be to base the budget on the combined output and input scripts.
The most important assumption to challenge is the idea that fees alone pay for transaction processing and for UTXO ‘storage’. The Satoshi client never relied solely on fees; they were one of several inputs, and Core has over time removed and changed things until fees became the only way to differentiate transactions.
A lot went wrong there, from dropping coin-days as a priority input to the most harmful change of all: transaction priority is no longer something miners decide in a free market (see the min-relay fee).
If we internalize this, then the idea that a big output script (aka locking script, aka scriptPubKey) should have no influence on the execution budget fails the sniff test. Someone already paid for that data, after all.
Look at this from a practical point of view:
Imagine a setup where the full node remembers how much processing budget a transaction actually consumed, and the miner uses that as one input to a transaction priority. A transaction that burns a lot of CPU (obviously never more than the limit) gets a lower priority, which may mean it takes 10 blocks to get mined.
The obvious way to counter this is to have a high coin-days-destroyed or, as a last resort, to pay a larger fee. (All of this is permissionless innovation that will very likely happen once blocks get full.)
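To make that concrete, here is a minimal sketch of what such a miner-side priority could look like. All names, weights and the formula itself are hypothetical, just to illustrate that measured validation cost, coin-days-destroyed and fee can be blended into one ranking; a real miner would tune or replace this however they like.

```cpp
#include <cstdint>

struct TxStats {
    uint64_t feeSats;           // fee paid, in satoshis
    uint64_t sizeBytes;         // serialized transaction size
    uint64_t cpuNanosMeasured;  // validation time recorded when the tx entered the mempool
    double   coinDaysDestroyed; // sum over inputs of (coin value * age in days)
};

// Higher result = mined sooner. Weights are illustrative only.
double Priority(const TxStats& tx)
{
    const double feePerByte = double(tx.feeSats) / double(tx.sizeBytes);
    const double cpuPerByte = double(tx.cpuNanosMeasured) / double(tx.sizeBytes);
    // Heavy validation lowers priority; fee and coin-days-destroyed raise it.
    return feePerByte + 0.1 * tx.coinDaysDestroyed - 0.01 * cpuPerByte;
}
```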
The bottom line is that the fear stated in the rationale, “volatility of worst-case transaction validation performance”, is solved in a neat way by simply charging for actual usage (let's not call it a gas fee, though). Again, this happens in a permissionless way that spreads the load, because spreading the load is the most profitable thing to do.
As an aside: the attack has always been based solely on a crappy UTXO implementation; it was never a real problem of CPU consumption, because transactions are already validated well before the block header comes in. So worries about that should not remove normal incentives. End aside.
To get back to the point: whatever is mined in the output script has already been paid for in some way or other. Ignoring that already-paid-for data in the VM limits CHIP creates the wrong incentives. As Calin wrote above:
Regardless of anyone's opinion of P2SH or whatever, it should be obvious that VM limits should not have such side effects. Side effects are bad, mkay?
Edit: I think the simple way to do this is to add together the sizes of the UTXO's output script and the input script that unlocks it, and use that combined size as the input.
These scripts are ALREADY going to be combined and passed as one to the VM. As such, it is the natural approach that actually accounts for the VM usage.
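A minimal sketch of that idea, with hypothetical names and an assumed per-byte factor (not taken from the CHIP): the budget is simply derived from the combined byte size of the locking and unlocking scripts that the VM evaluates together.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using Script = std::vector<uint8_t>;

// Execution budget scales with the total bytes of both scripts.
// kBudgetPerByte is an illustrative constant, not a value from the CHIP.
size_t ExecutionBudget(const Script& outputScript,  // scriptPubKey of the spent UTXO
                       const Script& inputScript)   // unlocking script of the spending input
{
    constexpr size_t kBudgetPerByte = 800;
    return (outputScript.size() + inputScript.size()) * kBudgetPerByte;
}
```

The point of the sketch is only the shape of the formula: both halves of what the VM executes count toward the budget, so a large, already-paid-for locking script buys its spender a correspondingly larger budget.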