Thanks for bringing it up!
I don’t have a strong opinion on data carrier outputs right now. In talking with stakeholders about it over the past few years, there’s a lot of technical disagreement on whether or not it should be limited at all (and if so, to what extent).
I don’t see the current ~223-byte limit as a barrier to contract development right now, so I’ve tried to avoid any impact to that status quo in the initial P2S CHIP draft. That being said:
(edit: adding headings to link to later)
## Miscalculation in existing data carrier limit
There was a misunderstanding in the calculation behind the ~223-byte data carrier limit; assuming the original justification(s) for extending standard OP_RETURN to 220 bytes, that limit should arguably be higher. (And of course, the most relevant limit is currently ~100KB.)
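As an aside, the ~223-byte figure describes the full locking script, not just the payload. A minimal sketch of that arithmetic (my own illustration, using commonly cited serialization constants, not text from the CHIP):

```python
# Illustrative size arithmetic for a maximum-length standard data carrier
# output (constants mirror common node defaults; hypothetical breakdown).

OP_RETURN = 1      # one byte for the OP_RETURN opcode
OP_PUSHDATA1 = 1   # one byte for the OP_PUSHDATA1 opcode
LENGTH_BYTE = 1    # one byte encoding the payload length (pushes of 76-255 bytes)
PAYLOAD = 220      # commonly cited maximum standard payload

script_size = OP_RETURN + OP_PUSHDATA1 + LENGTH_BYTE + PAYLOAD
print(script_size)  # 223 -- the "~223-byte" limit is the whole locking bytecode
```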
I’m not advocating for an increase in that limit, but the issue is relevant here if we were to try to simplify by raising the P2S limit from 201 to 223 bytes. Sticking instead to the existing 201-byte bare multisig limit is probably most conservative, especially if the data carrier limit is eventually corrected upward.
## Standard locking bytecode length < standard data-carrier length
Under the existing rules, the maximum-length standard locking bytecode (201-byte multisig) is a little shorter than the max-length data carrier output (220 bytes). There may be a slight impact to some incentives if this proposal were to make them equivalent.
Data carrier outputs are still “slightly cheaper” in that they can have a value set to zero (no dust limit), but their longer contiguous limit may currently help to incentivize some use cases toward data carrier vs. less- or non-prunable data commitment techniques. (Again, arguably not important, but the P2S CHIP avoids changing the status quo out of an abundance of caution.)
If I understand the question, no – evaluation stages would work exactly as today: unlocking bytecode is evaluated first (10KB limit, restricted to push operations), then the resulting stack is copied and intermediate validation is performed. Then the locking bytecode is evaluated: a 201-byte limit in standard mode (replacing most of the script-type pattern matching that happens today), or the current 10KB limit in nonstandard mode (block validation). P2SH and various follow-up validation also remain the same.
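To make the ordering concrete, here is a hedged sketch of that validation flow; the function and parameter names are my own illustration, not actual node code, and the `evaluate`/`is_push_only` callables stand in for the real VM:

```python
# Sketch of the per-input validation order described above (illustrative only;
# limits mirror the discussion: 10KB unlocking, 201-byte standard locking).

MAX_UNLOCKING_STANDARD = 10_000   # 10KB unlocking bytecode limit
MAX_LOCKING_STANDARD = 201        # proposed standard-mode locking limit
MAX_LOCKING_CONSENSUS = 10_000    # existing consensus limit (block validation)

def validate_input(unlocking, locking, standard_mode, is_push_only, evaluate):
    # 1. Unlocking bytecode: length limit and push-only check, then evaluate.
    if len(unlocking) > MAX_UNLOCKING_STANDARD or not is_push_only(unlocking):
        return False
    stack = evaluate(unlocking, stack=[])
    # 2. Copy the stack (later used for e.g. P2SH redeem script evaluation).
    stack_copy = list(stack)
    # 3. Locking bytecode: standard mode enforces the 201-byte limit in place
    #    of today's script-type pattern matching; nonstandard (block) mode
    #    only enforces the existing 10KB consensus limit.
    limit = MAX_LOCKING_STANDARD if standard_mode else MAX_LOCKING_CONSENSUS
    if len(locking) > limit:
        return False
    stack = evaluate(locking, stack=stack)
    # 4. P2SH and other follow-up validation would proceed unchanged here.
    return bool(stack) and bool(stack[-1])
```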
The CHIP can’t invalidate any existing UTXOs/use cases, and it doesn’t create a difference between output (creation time) and UTXO (spend time) standardness validation. In the interest of staying as minimal as possible, I don’t think we should add any new schemes for e.g. extending the spending-standardness length limit if the unlocking bytecode is shorter than its maximum. After all, the remaining argument for keeping any standardness limit on locking bytecode length centers around UTXO set growth, and unlocking bytecode length is irrelevant there. (@bitcoincashautist did that answer your question?)