Brainstorming OP_EVAL

Thanks for reviewing!

Sort of possible, but the gains don’t apply to adversarial cases (<data> OP_HASH256 OP_HASH256 OP_HASH256 ..., where each operation’s only input is the previous operation’s output), so we can’t, for example, safely raise limits based on such gains.
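
To make the adversarial case concrete, here’s a minimal Python sketch (just an illustration, not node code) of what <data> OP_HASH256 OP_HASH256 ... boils down to – each step depends on the previous step’s output, so there is nothing to hand off to a second thread:

```python
# A minimal sketch of why the adversarial case is inherently serial: each
# OP_HASH256 consumes the previous step's output, so no evaluation step can
# begin before the one before it has finished.
import hashlib

def hash256(data: bytes) -> bytes:
    """Double SHA-256, as computed by OP_HASH256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def eval_repeated_hash256(item: bytes, reps: int) -> bytes:
    """Simulate `<data> OP_HASH256 OP_HASH256 ...` on a single stack item."""
    for _ in range(reps):
        # The only input to step i is the output of step i-1: a strict
        # data-dependency chain with no parallelizable split points.
        item = hash256(item)
    return item

print(eval_repeated_hash256(b"\x00" * 32, 1000).hex())
```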

In more detail:

Yes, there are some contracts that could theoretically be “evaluated in parts”, but efficiently-factored programs generally don’t have many such division points: a static analyzer would have to split the program at locations with one or more data pushes, then determine whether the operations following those pushes can manipulate them without looking deeper down the stack. Some minor acceleration via parallelization at this level is theoretically possible, but that puts you in altogether different territory with respect to optimization, and it’s hard to justify that level of complexity in critical consensus code – especially when it can’t apply to adversarial cases.
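
For illustration, a very rough sketch of what that split-point analysis would have to do. This uses a toy opcode model (name, items popped, items pushed) and hypothetical helper names; a real analyzer would work on actual parsed script and handle far more cases:

```python
# Simplified sketch: find pushes at which the remaining operations never
# reach below the stack items pushed since that point, i.e. candidate
# locations where the tail of the script could be evaluated independently.
from typing import List, Tuple

# (name, items_popped, items_pushed) - a toy stack-effect table.
Op = Tuple[str, int, int]

def independent_split_points(script: List[Op]) -> List[int]:
    splits = []
    for i, (_, pops, _) in enumerate(script):
        if pops != 0:          # only a pure data push can start a segment
            continue
        depth = 0              # items available above the candidate split
        ok = True
        for _, p, q in script[i:]:
            if p > depth:      # would have to look deeper down the stack
                ok = False
                break
            depth += q - p
        if ok:
            splits.append(i)
    return splits

# Example: the push at index 2 starts a segment (push followed by hashing)
# that never reaches below it, so it is reported as a split point.
toy_script = [
    ("OP_DUP", 1, 2),
    ("OP_EQUALVERIFY", 2, 0),
    ("<data>", 0, 1),
    ("OP_HASH256", 1, 1),
    ("OP_HASH256", 1, 1),
]
print(independent_split_points(toy_script))  # -> [2]
```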

On the other hand, we already have far more parallelization than consumer hardware can use: individual transaction inputs can always be validated in parallel. So not only is validation parallelized across all transactions flowing over the network, but every separate contract evaluation inside those transactions is also performed in parallel. So while pushing parallelization even further down into the contract layer might improve the validation latency of a single transaction, it’s hard to imagine it yielding any cumulative scaling benefit on consumer hardware (the extra overhead would probably even reduce cumulative performance), even with orders of magnitude more validation threads available. And it’s a shape-of-the-problem thing (independent of scale) – if consumer hardware is capable of validating 1M transactions in some time period, our “parallelizability” over that time period is already, by definition, at least 1M.
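
For context, roughly the kind of input-level parallelism nodes can already exploit – a hedged sketch where verify_input is a hypothetical stand-in for a real script interpreter, not actual node code:

```python
# Every input's script evaluation is independent of every other, so inputs
# (both across and within transactions) can be checked concurrently.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import List

@dataclass
class Tx:
    inputs: List[bytes]  # placeholder for real input data

def verify_input(tx: Tx, index: int) -> bool:
    # Stand-in: a real node would evaluate the unlocking + locking scripts
    # for this input here.
    return True

def verify_all(transactions: List[Tx], workers: int = 8) -> bool:
    jobs = [(tx, i) for tx in transactions for i in range(len(tx.inputs))]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda job: verify_input(*job), jobs)
    return all(results)

print(verify_all([Tx(inputs=[b"a", b"b"]), Tx(inputs=[b"c"])]))
```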
