P2SH32: a long-term solution for 80-bit P2SH collision attacks

Pay to Script Hash (P2SH, BIP16) addresses/contracts are vulnerable to a 2^80 collision attack, which can also be performed with trivial memory usage in roughly 2^82 operations. The attack is possible whenever an attacker can introduce data into a contract without being forced to pre-commit to that data. (This problem has been mentioned before on BCR, but it’s probably time for a dedicated topic.)
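
For intuition on where those numbers come from, here is a toy sketch of the birthday bound (illustrative only; the digest is truncated so it runs instantly, and plain SHA-256 stands in for the real P2SH20 hash):

```python
# Toy birthday-collision sketch (illustration only, not an actual attack):
# an n-bit hash yields collisions after roughly 2^(n/2) attempts, so a
# 160-bit script hash is what gives the ~2^80 attack described above.
# The digest is truncated to 24 bits here so a collision appears instantly;
# a real attacker grinds full digests of candidate redeem scripts.
import hashlib

def script_hash(redeem_script: bytes) -> bytes:
    # P2SH20 actually uses RIPEMD-160(SHA-256(script)); plain SHA-256 keeps
    # this toy self-contained without changing the point.
    return hashlib.sha256(redeem_script).digest()

def toy_collision(bits: int = 24):
    seen = {}
    nonce = 0
    while True:
        # The attacker varies data they are allowed to introduce, e.g. a pushed
        # value that is immediately dropped: <8-byte nonce> OP_DROP OP_1.
        candidate = bytes([8]) + nonce.to_bytes(8, "big") + b"\x75\x51"
        prefix = script_hash(candidate)[: bits // 8]
        if prefix in seen:
            return seen[prefix], candidate
        seen[prefix] = candidate
        nonce += 1

a, b = toy_collision()
assert a != b and script_hash(a)[:3] == script_hash(b)[:3]
```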

This means:

  • Common single-signature addresses (Pay to Public Key Hash, P2PKH) are not affected.

  • Practically all multi-party contracts (P2SH) are vulnerable unless the wallet software uses a carefully implemented pre-commitment scheme: when creating the wallet, the parties must share a hash of their public keys before revealing the actual public keys. This prevents any party from grinding for a collision that allows them to later sweep the wallet. (A minimal sketch of such a commit-reveal round follows this list.)
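
A minimal sketch of that commit-reveal round (an assumed flow for illustration, not any specific wallet’s protocol; function names are hypothetical):

```python
# Commit-reveal sketch: each party shares sha256(pubkey) first; keys are
# revealed only after every commitment has been collected, so no party can
# grind a key chosen in response to the others' data.
import hashlib

def commitment(pubkey: bytes) -> bytes:
    return hashlib.sha256(pubkey).digest()

def verify_reveals(commitments: list[bytes], revealed_pubkeys: list[bytes]) -> bool:
    # Wallet setup must abort if any revealed key doesn't match its commitment.
    return len(commitments) == len(revealed_pubkeys) and all(
        commitment(pk) == c for c, pk in zip(commitments, revealed_pubkeys)
    )
```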

Further, many kinds of contracts cannot practically implement the above pre-commitment scheme at all (e.g. if users can “join” or otherwise provide new data to a contract after it is initially funded), so those use cases remain vulnerable regardless of wallet implementation.

This attack is still only practical/profitable against large wallets because it requires significant investment in hardware and/or electricity, but it’s also more profitable with scale (e.g. building ASICs). I’m not the right person to estimate cost, but it seems safest to assume an attack can be made profitable against vulnerable setups holding as little as a few hundred thousand (2021) USD in value. (Can anyone offer a more substantiated estimate? Note also, the fact that we’re discussing solutions is protective in that it reduces the expected value of attack infrastructure.)

Bitcoin Cash contracts are becoming much more useful in May 2022 with introspection and 64-bit math – use cases like long/hedge derivatives and recurring payments become much more efficient, and we can now build far more secure wallets. While many of these use cases can be carefully implemented to prevent the attack, pre-commitment strategies are also impractical for some important use cases (e.g. decentralized oracles). Even for use cases with viable pre-commitment strategies, these strategies can require both larger contracts and larger transaction sizes than would be required if a 32-byte P2SH solution were available. (And with current VM limits, that means some valuable use cases are also not possible to make secure.)

Additionally, because designing and auditing implementations of pre-commitment schemes is not trivial (even for simple multi-signature wallets), a 32-byte P2SH solution would be valuable for improving the overall security of the ecosystem, reducing the chance that an average BCH user is impacted by a vulnerability in one of their preferred wallets or applications. (When an opportunity arises to eliminate a class of vulnerabilities, we should take it.)

Finally, any solution here should not affect existing, 20-byte P2SH applications – simple multi-signature wallets may choose to continue using 20-byte P2SH in cases where a pre-commitment scheme is easily implemented, saving the additional 12 bytes per output.


32-byte P2SH

The Bitcoin Cash VM currently supports hashing algorithms with two output lengths:

  • 20 bytes (160 bits): RIPEMD160 and SHA1, and
  • 32 bytes (256 bits): SHA256.

The existing 20-byte P2SH uses OP_HASH160 (one pass through SHA256, then a pass through RIPEMD160), so the simple solution is to add a 32-byte P2SH construction which uses OP_SHA256. Because the existing P2SH feature is already implemented using pattern matching (and changing that would break many contracts/clients), implementing a 32-byte P2SH using the same strategy costs very little in terms of additional protocol complexity.
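
For concreteness, a sketch of that pattern matching – the 20-byte pattern is today’s P2SH, the 35-byte pattern is the proposed 32-byte variant (opcode values are the standard codepoints):

```python
# Locking-bytecode pattern matching sketch.
# P2SH20 (existing): OP_HASH160 <20-byte hash> OP_EQUAL  -> 23 bytes
# P2SH32 (proposed): OP_SHA256  <32-byte hash> OP_EQUAL  -> 35 bytes
OP_SHA256, OP_HASH160, OP_EQUAL = 0xA8, 0xA9, 0x87

def is_p2sh20(locking_bytecode: bytes) -> bool:
    return (len(locking_bytecode) == 23
            and locking_bytecode[0] == OP_HASH160
            and locking_bytecode[1] == 0x14          # push 20 bytes
            and locking_bytecode[22] == OP_EQUAL)

def is_p2sh32(locking_bytecode: bytes) -> bool:
    return (len(locking_bytecode) == 35
            and locking_bytecode[0] == OP_SHA256
            and locking_bytecode[1] == 0x20          # push 32 bytes
            and locking_bytecode[34] == OP_EQUAL)
```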

The CashAddress format also offers a simple, existing solution for representing 32-byte P2SH addresses – the Version byte has 3 bits devoted to Size, where a value of 3 (0b011) is already specified to represent 256 bit (32-byte) hashes. In fact, 32-byte P2SH CashAddresses have been in the test vectors and implemented in many libraries since 2018, in anticipation of some future solution to this issue.
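
A sketch of that version byte layout, following the CashAddress spec (bits 0-2 encode hash size, bits 3-6 encode the address type, bit 7 is reserved):

```python
# CashAddress version byte sketch: type 0 = P2PKH, type 1 = P2SH.
HASH_SIZE_BITS = {160: 0b000, 192: 0b001, 224: 0b010, 256: 0b011,
                  320: 0b100, 384: 0b101, 448: 0b110, 512: 0b111}

def cashaddr_version_byte(address_type: int, hash_bits: int) -> int:
    return (address_type << 3) | HASH_SIZE_BITS[hash_bits]

assert cashaddr_version_byte(1, 160) == 0x08  # today's 20-byte P2SH
assert cashaddr_version_byte(1, 256) == 0x0B  # 32-byte P2SH (Size value 3)
```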

Given all of this existing infrastructure – an existing hashing algorithm by which the attack is mitigated (OP_SHA256), an existing P2SH implementation/deployment strategy to reuse, and an existing 32-byte P2SH address format – there are surprisingly few technical decisions left to make.

All that to say: I think we should hammer out the details and deploy 32-byte P2SH in 2023.

Before I put together a CHIP, I’d appreciate any thoughts on some of these technical details I’m considering:

Eliminate boolean malleability (e.g. MINIMALIF)

I’d like to see boolean malleability resolved for 32-byte P2SH contracts – that would eliminate the remaining known sources of third-party transaction malleability, paying dividends in practical security for average users over the coming decade(s). (This was considered for the Nov. 2019 upgrade, but deemed better to leave for a future upgrade.)

In practice, eliminating boolean malleability means simpler, safer-by-default, easier-to-audit contracts, less error-prone wallet implementations, and reduced transaction sizes (e.g. this merkle tree leaf replacement construction would save ~3 bytes per level by eliminating the need for the extra malleability protection code). An incomplete solution (the MINIMALIF flag) has been present in most Bitcoin Cash implementations since before 2017, and is now part of consensus on other networks (BTC after BIP342).
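
For reference, roughly the rule MINIMALIF enforces (a sketch; consensus/standardness specifics vary by implementation):

```python
# MINIMALIF sketch: the value consumed by OP_IF/OP_NOTIF must be exactly
# empty (false) or 0x01 (true). A third party can no longer swap in some
# other "truthy" encoding (e.g. 0x02) without invalidating the transaction.
def minimal_bool(item: bytes) -> bool:
    if item == b"":
        return False
    if item == b"\x01":
        return True
    raise ValueError("non-minimal boolean: script evaluation fails")
```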

Deploying this for 32-byte P2SH contracts offers immediate security gains: phasing in malleability protection for existing P2SH contracts would probably require multiple years (first with standardness, then consensus), but if boolean malleability is eliminated at the outset for 32-byte P2SH contracts, new development can completely avoid having to handle that class of vulnerabilities. (And I think we should also consider phasing protection into P2SH via standardness over the next few years.)

Using OP_SHA256 rather than OP_HASH256

Existing P2SH contracts use OP_HASH160 ... OP_EQUAL, so it’s easy to assume OP_HASH256 ... OP_EQUAL is the proper 32-byte equivalent. However, length extension attacks are not relevant to P2SH, so there’s no reason to spend an extra SHA256 digest iteration on 32-byte script hashes. (See also: BTC’s P2WSH uses single-SHA256.)
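
The two operations side by side (sketch):

```python
# OP_SHA256 vs. OP_HASH256: the latter spends an extra SHA-256 pass hashing
# the first 32-byte digest. For committing to a redeem script, single SHA-256
# already provides the full 128-bit collision resistance of a 256-bit hash.
import hashlib

def op_sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def op_hash256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()
```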

Limiting Contracts with MAX_TX_IN_SCRIPT_SIG_SIZE

P2SH contracts are currently limited to 520 bytes because BIP16 was introduced as a covert upgrade where the contract is technically pushed to the stack (and stack items are limited to 520 bytes). There’s actually no reason for implementations to stepwise-evaluate the “P2SH wrapper” – the VM is already pattern matching to process P2SH outputs differently, it can just as easily skip pretending the P2SH contract is pushed to the stack. (Note, such an upgrade is only possible on Bitcoin Cash because Bitcoin Cash has been using opt-in upgrades since 2017; our upgrades aren’t designed to fool outdated full nodes into believing no upgrade occurred.)

Cleaning this up would allow P2SH contracts to use the same limit as the rest of the scriptSig (unlocking bytecode), MAX_TX_IN_SCRIPT_SIG_SIZE (1650 bytes). And with only a reasonable hashing limit, we can both eliminate the OP_CODESEPARATOR quadratic sighash issue and stop naively counting opcodes. That would allow developers to write much more interesting/advanced contracts without impacting node validation cost (and we don’t even need to increase maximum stack item length from 520 bytes).
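
A sketch of the limit change, using the constants discussed here (520 is the current stack-item limit, 1650 the existing standard scriptSig limit):

```python
MAX_SCRIPT_ELEMENT_SIZE = 520      # stack item limit, inherited today by P2SH redeem scripts
MAX_TX_IN_SCRIPT_SIG_SIZE = 1650   # existing standardness limit on the whole scriptSig

def max_redeem_script_size(skip_fake_push: bool) -> int:
    # If the VM stops pretending the redeem script is "pushed" to the stack,
    # the contract is bounded only by the scriptSig limit it already lives inside.
    return MAX_TX_IN_SCRIPT_SIG_SIZE if skip_fake_push else MAX_SCRIPT_ELEMENT_SIZE
```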

10 Likes

(Posting this separately to avoid cluttering the initial post.)

Alternative Solutions

Several other relevant proposals have been made in the past 10 years. While some of these are possible directions for future upgrade proposals, I think the solution outlined above is the prudent choice for a 2023 upgrade.

OP_EVAL

The current 20-byte P2SH solution (BIP16) replaced an earlier OP_EVAL proposal (BIP12).

To summarize, OP_EVAL was abandoned in early 2012 primarily because it removed a hypothetically-valuable property of the virtual machine: the cost of executing a contract can be known prior to evaluating it. (This is commonly described as “static analysis of scripts”.) This mailing list post is probably the best summary of the topic, kicked off by this GitHub issue.

This was a strong argument in early 2012 – details of the system were poorly understood, and it was unclear if static execution cost analysis would prove important for scaling or Denial of Service prevention. However, developments since then seem to offer a strong counterargument: if static execution cost analysis of contracts were valuable, it likely would have been leveraged in the intervening 10 years. Instead, the technique remains unused (as far as I can tell), even by scaling-focused researchers and competing currency networks.

The intervening 10 years have also clarified which bottlenecks actually concern developers of bitcoin-like systems: bandwidth and storage requirements. If anything, contract execution costs are even more negligible than they were in 2012 – improvements to hashing and signature validation performance (in both hardware and software) have significantly outpaced improvements to bandwidth and storage availability (though both continue to improve rapidly). This reality has caused most cryptocurrency networks to focus on bandwidth/storage over contract execution costs. (See also: the block size debate and the BTC/BCH split.)

In fact, it’s clearer than ever why static execution cost analysis remains unused: it wastes effort in the common case. If you first “statically analyze” every contract before executing it, you waste effort for the vast majority of valid transactions in order to save – at most – milliseconds evaluating some invalid transactions sent by a misbehaving peer before it is disconnected and banned. Most node implementations are well hardened against misbehaving peers, and I’m not aware of any that have found that particular case to be worth optimizing.

So: I think reality now refutes the primary original reasoning for OP_EVAL's rejection. However, there are several other reasons we can disqualify OP_EVAL as a solution for 2023:

Output size – Some OP_EVAL-like proposal would likely specify a locking script like OP_DUP OP_SHA256 <32_byte_hash> OP_EQUALVERIFY OP_EVAL. Outputs of this format would cost 2 additional bytes (37 vs. 35 bytes) beyond the OP_SHA256 <32_byte_hash> OP_EQUAL option, for negligible benefit. If we didn’t already have P2SH, we could say this introduced less technical debt to the protocol, but since P2SH is already part of the protocol, the only meaningful difference here is wasted bytes. (Note, if we did eventually get some OP_EVAL opcode, the OP_SHA256 ... OP_EQUAL construction remains valuable for this reason.)

OP_EVAL design decisions require further research – selecting a specific design for some OP_EVAL is not trivial, and we’d want to get it right the first time. Is OP_EVAL just a general strategy for enabling developers to commit to unique MAST/Taproot-like structures, and/or is it a sandboxing strategy for consumers/other contracts to contribute contract behavior? Would we need a new “instruction stack”? Or maybe instructions get parsed and inserted into the array of instructions at ip? The execution stack (OP_IF/ELSE/ENDIF) needs to account for the evaluation, how does that interact with loops? (If we have OP_EVAL, bounded loops are simpler and likely valuable to more contracts.) Should the alt stack be passed, cleared, or saved/restored? And I’m sure there are more questions to answer.

If Bitcoin Cash grows a sizable decentralized application ecosystem (e.g. using CashTokens), I’d bet that some sort of OP_EVAL could be a useful tool for a lot of use cases, but I expect we’ll need several more years of growth and development before we understand the problem space well enough to settle on a particular design. Because the OP_SHA256...OP_EQUAL construction already solves the problem of 80-bit P2SH collision attacks with minimal technical debt (existing P2SH strategy, the 2 byte savings remain valuable forever, etc.), it’s very hard to argue that an OP_EVAL solution should be deployed urgently, even if a solid-looking proposal existed.

Other Statically-Analyzable OP_EVAL Alternatives

Before P2SH/BIP16 was locked in, several other statically-analyzable OP_EVAL alternatives were proposed.

Given the existence of P2SH/BIP16 on Bitcoin Cash already, there’s little reason to use one of these strategies to achieve a similar goal.

Tail Call Execution Semantics

In more recent years, an OP_EVAL-like strategy for adding Tail Call Execution Semantics (BIP117) was proposed. While this idea might inform future upgrade proposals, it would require more research to demonstrate that it has made optimal design decisions (as with OP_EVAL). Also, for our more limited goal of resolving the 80-bit collision issue, this strategy is slightly inferior to OP_SHA256...OP_EQUAL, as outputs would require 1 additional byte (36 vs. 35 bytes): OP_DUP OP_SHA256 ... OP_EQUALVERIFY. (So even if a future upgrade added this sort of evaluation mechanic, the simpler 32-byte P2SH solution would remain superior.)

A 32-byte P2SH Opcode

Another alternative we should have a rationale for dismissing: 32-byte P2SH could be deployed with a new opcode, i.e. <32-byte hash> OP_CHECKP2SH32. This has the advantage of saving one additional byte vs. the OP_SHA256...OP_EQUAL strategy, but it costs a codepoint in the instruction set and non-trivial additional protocol complexity/technical debt. There are several more effective strategies to save a single byte in transaction outputs at the cost of an opcode/protocol complexity – e.g. a compound OP_EQUALVERIFYCHECKSIG opcode would save at least 10x as many total bytes on-chain by reducing the size of adopting P2PKH outputs and some P2SH contracts (but still not worth the cost).

3 Likes

Ah, another detail to mention: the proposed 32-byte P2SH template OP_SHA256 ... OP_EQUAL (matching the current 20-byte P2SH) is valid on forks of Bitcoin Cash, so users with unsplit coins could be vulnerable to replay attacks. (BCH signatures are invalid on BTC, so only post-2017 splits are of concern.)

In practice, this means users with unsplit BCH/BSV/XEC outputs could have their transactions copied to those chains, and miners could sweep the funds by simply providing the un-hashed script (since those chains wouldn’t enforce the P2SH semantics). This is actually already the case for BSV (because they removed P2SH last year), so it’s primarily BCH users with unsplit XEC who would have XEC at risk.

I was more worried about this earlier, but in chatting with @im_uname, he pointed out that replay-able transactions should be extremely rare – use cases which immediately require 32-byte P2SH are generally complex multi-party contracts and covenants, and those will almost always use or be descendants of transactions which use the new introspection opcodes. So in practice, probably only a very negligible number of transactions would also be replayable on XEC. (And hopefully XEC will add 32-byte P2SH too.)

Aside: one possible optimization we could consider which would prevent replay, save a byte, and possibly simplify the P2SH pattern matching logic – we could just use the template OP_SHA256 <32_byte_hash> (without the OP_EQUAL). That would make 32-byte P2SH as theoretically efficient as possible, and the pattern matching would behave no differently. (It would even be an option to allow a matching OP_HASH160 <20_byte_hash> for future wallets to save a byte – the pattern matching could even use the same codepath as OP_HASH160 <20_byte_hash> OP_EQUAL.)

3 Likes

It sounds reasonable to me to only use one round of hashing if the hashing algorithm would be the same on both rounds otherwise.

As for the boolean malleability situation, I think it’s great, but I’m concerned that solving it in one place but not another would reduce the incentive to do something that would solve it everywhere. If it came with a clear “this is step 1, and later we expect/intend to remove the vulnerable 20-byte setups”, that would eventually make it so all P2SH ends up protected – but I don’t know what that would mean in practice, and there might be use cases or stakeholders that would disagree. I don’t think my concern is particularly strong though, so I’ve shared my initial reaction and will leave it to others for now.

The contract max size change would be much appreciated, and perhaps easier to get consensus for than the broader VM Limits CHIP. That said, I think it’s still better to be principled and not bundle things that don’t need to be bundled, so each can be evaluated on its own.

3 Likes

Some notes:

a) Agree with the primary premise that p2sh is a significant part of bch future volume and security can’t be left to luck. In a CHIP, this would need to be spelled out in plain language so that at the highest level, stakeholders can understand it’s an existential issue for (roughly speaking) smart contracts on BCH.

b) Regarding “Using OP_SHA256 rather than OP_HASH256”, you might also mention that it’s not an insignificant cost issue in a potential future that is full of p2sh transactions.

c) “MAX_TX_IN_SCRIPT_SIG_SIZE” I actually was not aware of this possibility. Sounds great to remove an artificial cap that was never actually intended, and leave serious increases for later. If I understand correctly, there is a tiny issue, the same as math overflow, where there is hypothetically a system somewhere that depends on the network rejecting transactions that are oversized as part of its spec. Doesn’t seem like an issue to stop progress though. Most significantly, I agree with Jonathan that this should be pushed into its own CHIP, to be handled if we can get the other more critical things done with time to spare.

d) “OP_EVAL” We hypothesized using this to get around the 520 byte limit by injecting as much code as we want in blocks during unlock. But yikes. Yeah, that’s a potential Pandora’s box that certainly wouldn’t be an obvious first choice.

4 Likes

:100: I agree – it would be fantastic to minimize long term technical debt by aiming for all modes to use the same exact VM implementation. I think it would be reasonable for a CHIP to specify a multi-year upgrade path – first boolean malleability is made non-standard in 2023 for existing modes (P2SH and “bare” contracts), and the rule is upgraded to consensus in 2024. That gives any impacted users (if any exist) a whole year to notice that boolean-malleable transactions can’t be relayed and either migrate or build consensus around a solution that works better for them. :ok_hand:

Yes, I don’t have a strong opinion on this yet – in principle I like breaking things apart as much as possible, but in practice we’re talking about the same ~10 lines of code in almost every implementation. It might be a serious hassle for implementors to try to stitch together an understanding from 2 or more specifications. It’s almost like the Summary, Benefits, Costs & Risks, etc. should be separate, but they should share one technical specification?

Right – this issue has been widely known for ~8 years, but BCH stakeholders need to understand that it’s now more important for us than in the past (or on any other chain): our contract system is becoming far more useful – enabling true decentralized applications/organizations – and some new use cases are far more exposed than the previous, less-practically-exploited cases. The larger the sum of money held in vulnerable cases, the larger the potential payoff of investment in attack infrastructure. If we wait too long, the per-contract exploit cost could fall much lower, so some urgency is warranted.

Good idea, I’ll make a note :+1: it’s also important for contracts that are close to some consensus hashing limit – it’s only one final digest iteration (hashing the first SHA256 result as one final 32-byte message), but that cost could really add up for contracts that need to validate many P2SH32 outputs.

Same! I only recently realized it’s viable. Previously, I expected many future contract use cases would require some sort of hash preimage inspection, but if we get a more efficient alternative like CashTokens, I’m now doubting that there are even hypothetical use cases for preimage inspection in contracts. If that’s the case, it’s possible we’ll never need longer stack items (the 520 byte limit) for other use cases, and the only remaining issue with the current stack size limit is just vestigial baggage from P2SH’s soft fork activation strategy. With good alternatives for both issues, we might never need a “stack memory limit”, the existing 1000 item * 520 byte boundaries would be sufficient. (Note: this means we could cut out half of the VM Limits CHIP – all we need is a hashing limit, and the opcode limit can be safely absorbed by contract length limits.)

Yes, though this is practically equivalent to deploying an opcode – only software that parses and evaluates the actual locking/unlocking bytecode of transactions would need to be updated. Even indexers which only parse (but do not evaluate) bytecode are unaffected – the would-be 1650 byte limit for P2SH contracts still fits in OP_PUSHDATA2 (the opcode for existing 520 byte contracts), so only software that actually validates transactions will even be capable of noticing the change.
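
To illustrate why parsers are unaffected, a sketch of standard push-opcode selection – every push from 256 to 65535 bytes uses the same OP_PUSHDATA2 prefix, so a 1650-byte redeem script parses exactly like today’s 256-520 byte ones:

```python
def push_encode(data: bytes) -> bytes:
    """Encode a minimal data push (standard script push-opcode selection)."""
    n = len(data)
    if n <= 75:
        return bytes([n]) + data                         # direct length opcode
    if n <= 0xFF:
        return b"\x4c" + bytes([n]) + data               # OP_PUSHDATA1
    if n <= 0xFFFF:
        return b"\x4d" + n.to_bytes(2, "little") + data  # OP_PUSHDATA2
    return b"\x4e" + n.to_bytes(4, "little") + data      # OP_PUSHDATA4

assert push_encode(bytes(520))[0] == push_encode(bytes(1650))[0] == 0x4D
```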

Right. It’s certainly possible that some proposal in the next decade will identify the right way to treat data as code within the Bitcoin Cash VM, but it’s really not a viable strategy for solving this more urgent problem. For our purposes, it’s only relevant in that we want to make sure our P2SH32 solution will stand the test of time – we don’t want to add some baggage to the system that will be “deprecated” by some future “OP_EVAL”. Fortunately it’s easy to demonstrate that: P2SH32 saves 1-3 bytes per output vs. various hypothetical OP_EVAL solutions (depending on whether or not we require the OP_EQUAL in P2SH32).

3 Likes

The problems created by (traditional) P2SH are well explained, and while I’m sure the idea of making a more complex P2SH (32) may work, I have to ask us to take a step back, design from base principles, and see if the problem doesn’t just go away (I think it does).

The idea of P2SH was introduced as a solution to various problems; the main one was really not a core technical problem but a thinking problem. It is the basic concept of script-contracts that really doesn’t fit very well with our traditional “bank-account” thinking. P2PKH is already much closer, and we have always seen tooling (the QR code for payment) and similar as a result. It fits nicely and easily into our traditional thinking.

P2SH furthers this thinking, it is again a nice and easy address-like thing we pay to. Allowing a receiver to just give an address.

Other problems P2SH solved are more technical, like the conflict with the standardness checks for opcodes. But really, “it fits our traditional way of thinking” is the main one.

My question to bitjson and others here is this:
what is needed for people to build transactions with the “complex” script in the output, instead of in the input as P2SH does?

Advantages would be:

  • there is no hash, so there is no birthday paradox – the one problem of P2SH just goes away.
  • less space used on-chain for most usages.
  • wallets do not need to keep a “complex” script for unlocking UTXOs – that is, one script per UTXO, which are a b*tch to back up compared to an HD wallet.

What I think is needed to make this a reality:

  • A more advanced payment protocol needs to be established, since current ones assume that same “bank-account” concept and thus assume an address, which is very limiting. This is not hard, and it is 100% certain to be needed in the next year(s) anyway.
  • Some more freedom in the standardness rules.

Wallets coordinating a multisig or other multi-party contracts simply coordinate that using some different data, based on what I now call “templates”. The transaction sent to a miner then holds the template with data from each participant inserted. The script is in the output. Like P2PKH is in the output. And that is all there is to it.

I am a big fan of solving problems from base principles and when we notice that the problem just doesn’t even appear, because we do less work, that rocks! It makes the system as a whole easier to maintain, better to manage and cheaper.

How do others look at this, is it worth it to attempt?

1 Like

To some degree it seems very sensible, but it would also mean losing a significant advantage that the P2SH version has – hiding the script from prying eyes before it is spent from.

It might not seem that important at first glance, but it actually provides both security and privacy for the users: the time window in which a potential attacker has to understand a UTXO locked behind a hash, find an exploit, and abuse it is limited to after the spending transaction is known, which also puts them in a race condition.

Further, from a privacy perspective, the hash allows the participants to keep information about what they are doing hidden until the point of spending.

Contracts with multiple possible spending mechanics could be made where you have the actual mechanic (say, illegal betting) under one condition, and then use an all-parties-agree-to-whatever condition that doesn’t reveal the full script and therefore doesn’t reveal the actual activity taking place, as long as the participants choose to agree with the outcome voluntarily. (This might need some more work to be made practical.)

I’m not dead set in either way, so I’ll just see where the discussion goes for now.

1 Like

BU has done some work on that front; it seems well designed, although a CHIP would need to elaborate more on design decisions: OP_EXEC

In short: the code modules being loaded count towards all limits of the “main” program, recursion depth and number of injections have sane limits, and the user of the opcode specifies how far into the stack the module will have access and how many new items it may add. We have enough on our plate already; this is just to make note of it so it’s not forgotten when the time comes to discuss this.

Privacy-by-hashing is an interesting tool in the bag of privacy options. Other options – BIP69, CashFusion, and using a new address for every use – are, if not equally powerful, probably even more powerful. After all, the aim here is to make millions of people use these script-types. With well-written wallets we can all but remove the ‘semi’ in Satoshi’s “Semi-anonymous”.
But, yeah, I understand the urge to use all tools available at this time.

I like your thinking when it comes to “illegal betting”, great example. Let’s hope that Crypto as a whole is not made illegal any time soon for any of you guys. Notice, btw, that in almost all countries betting in general is illegal, unless the local government can tax it. Ehm, I mean, licence it. Betting online happens anyway, and will always happen anyway. :man_shrugging:

So, there are better ways to do that, but indeed some may want to use all tools available. The question is whether that is a reason to make BCH harder to maintain.
I’ll just say that we techies tend to severely over-estimate the privacy perspective and needs. No website listing all BIP69-compliant wallets has popped up yet :wink:

Thanks for the writeup! Agreed that it would be valuable for improving the overall security of the ecosystem so there’s no need for contract developers to learn or worry about this hash collision risk. I think it’s a great idea to eliminate boolean malleability simultaneously, because this is another one of those quirks that is easily overlooked! I also like the idea of P2SH contracts using the same limit as the rest of the scriptSig and not being counted as one item pushed to the stack.

Very glad to hear this!

Another major reason to keep complex scripts in the input, besides privacy & security, is who has to pay the transaction fee. It does not make sense for the sender to have to pay for the receiver’s complicated contract.

3 Likes

This. :slight_smile: I’m all for this too, and it’s awesome that we can get more bang for the buck than just fixing a security hole in the P2SH feature.

The security argument is pretty much the same as it is for CashTokens/Group: we need 32 bytes because an attacker could create a collision and keep the more permissive contract to himself, secret for later use. Have everyone pay into the covenant, and when it grows big enough – steal everything.

Hehe, let’s make sure transactions stay cheap :wink:

It is also relevant that in most smart contract use cases, especially those with multiple people, the transaction that locks up the funds is made by those same people, and as such it’s fine that they pay any fees.

Relaxing Output Standardness

I’d love to get to a place where outputs can have non-standard contracts!

That would be particularly useful for covenant applications – right now covenants have to waste 20 bytes (or for P2SH32, 32 bytes) and some contract bytes to place their next iteration in a P2SH output. And from the chain’s perspective, this sometimes wastes an extra copy of each covenant contract (the P2SH hash preimage has to be constructed in the first transaction for validation and also pushed again in the second transaction – though OP_ACTIVEBYTECODE usually eliminates this waste). It would be more efficient if covenants could simply validate the next output directly rather than doing the P2SH dance.

In practice though, there are some issues we need to solve first:

P2SH is currently protective in that it limits abuse of the contract system: with P2SH, transactions that are expensive to validate must include their own contract code within the spending transaction, so the expensive-to-validate transaction is much larger than it would be if that contract code was already present in the UTXO set.

This behavior is currently protecting miners from unwittingly creating expensive-to-validate blocks that include some malicious (non-standard) transactions and thus risking their blocks becoming stale during propagation (and/or set of miners possibly diverging due to such blocks). E.g. this worst-case contract could be stuffed into non-P2SH transactions with far fewer bytes per input. (Of course, part of this protection is that miners validate transactions themselves first before mining them, so honest miners are still reasonably protected if they only include transactions that have been broadcasted over the public network.)

In the same way, the quadratic sighash issue with OP_CODESEPARATOR is currently only avoided by the P2SH + isStandard strategy.

Both of those issues are handled for P2SH contracts by a hashing limit as proposed above (a simplification of the VM Limits CHIP), but to get rid of isStandard, do we need to instead define a limit in relation to spending transaction size (i.e. a per-byte hashing budget)? I think that requires some review.
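
As a strawman of what such a budget might look like (the ratio below is a placeholder, purely illustrative, not a proposed value):

```python
# Hypothetical per-byte hashing budget: the number of bytes a transaction may
# feed through SHA-256/RIPEMD-160 during validation scales with its own
# serialized size, so a small transaction can't trigger disproportionate
# hashing work.
HASH_BYTES_PER_TX_BYTE = 100  # placeholder ratio, purely illustrative

def hashing_budget(tx_size_bytes: int) -> int:
    return tx_size_bytes * HASH_BYTES_PER_TX_BYTE
```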

Beyond those validation cost questions, there’s also the question of data storage – right now OP_RETURN outputs are generally limited to storing ~220 bytes of arbitrary data, but that limit is only meaningfully enforced by the concept of standardness. If output standardness is relaxed to allow custom contracts, there’s no reasonable way to prevent arbitrary data up to the same length as such contracts. So if, e.g., standardness was relaxed to allow output contracts up to 1650 bytes, OP_RETURN outputs of at least this size should also be allowed (and probably larger, since it’s in the network’s interest for data-carrier users to commit data in provably unspendable outputs, allowing us to prune that output from the UTXO set). I’ve written about how I think the current OP_RETURN limit is basically theater; I think in the long term we’ll probably want to simplify standardness to treat output OP_RETURN data and contracts similarly (e.g. the same per-output byte limit), but we probably need a really rigorous review of the topic to get widespread consensus.

One final development direction this brings up: increased occurrences of larger outputs (containing, e.g. 1650 byte contracts) would mean that many node implementations may want to revisit how they represent the UTXO set. Right now most implementations keep the full contents of each UTXO accessible to fast lookups, but it would be possible to instead only keep the hash of UTXOs – a UTXO Hash Set (UHS). This is part of the architecture explored by OpenCBDC. If a significant number of outputs are eventually larger than 32 bytes – for raw output covenants and/or something like CashTokens – some P2P protocol extension to enable pruned node implementations to use a UHS could be valuable.

So: I think relaxing standardness is a promising development direction. There’s a lot of work to be done; I’m not sure we can get all the way there before upgrade 2023, but I support the effort!

Nice! Do you have any links you can share related to your template work? I’ve also been working on a template concept in Libauth. (Here’s a test of multi-party contract creation and transaction signing.) It supports both P2SH and non-P2SH usage, and I’ve always hoped that BCH mainnet would eventually support non-P2SH usage.

If we relax output standardness, do we still need P2SH32?

Even if we were able to relax output standardness by 2023, I still think it would be important to deploy some sort of P2SH32. Relaxing standardness would solve this 80-bit collision issue for “public” types of contracts (e.g. covenants), but it’s not a complete alternative to P2SH32; without P2SH32, other use cases would be forced to accept reduced privacy/security (as others have mentioned).

On privacy/security, I’d just add that in my view, a critical security feature of P2SH wallets is that unspent funds are hard for an attacker to analyze. This contributes little to long-term privacy (when they’re spent, they can be analyzed – you still want to use CashFusion regularly), but in practice, I think it offers meaningful operational privacy and therefore security (from meatspace attackers). An organization having funds in a set of P2SH-based multisig wallets (held by different operational teams) has quite a different privacy/security posture than the same set of wallets using raw outputs. UTXOs of well-designed P2SH wallets are not trivial to cluster, but for the same raw-outputs wallet, an attacker could determine with greater certainty how much is currently available to steal and which teams they need to kidnap/blackmail.

From a high level, a rough mapping of use cases for which I think each option is superior:

  • P2SH20 – Non-public, multi-party contracts with sufficiently interactive setups (can implement a pre-commitment scheme + HD derivation), saves 12 bytes vs. P2SH32, offers better privacy/security than raw outputs, and practically equivalent to Taproot for contracts with one/few spending paths. (Exception: highly-interactive use cases save even more with Taproot by looking like a single-signature spend.)
  • P2SH32 – Same as P2SH20, but better for use cases where interactive setup is more costly (maybe for some particular use case, 12 bytes per output is a reasonable price for avoiding pre-commitment schemes), or the wallet adds new addresses over time (not a covenant, but participants create new addresses via some highly-asynchronous coordination method).
  • Raw outputs – superior for covenants – the contract validates the spending transaction (publicly, on-chain), so the most efficient place to do this is raw outputs. (Hypothetically, partially-public covenants could be designed for Taproot-like outputs so that only one covenant path is revealed during spend – requires both Taproot and new opcodes though.)
  • Taproot (see @markblundeberg’s BCH Taproot discussion) – most designs could be superior to P2SH20/32 for many use cases (especially with sufficiently-interactive parties) by saving at least the cost of the hash in collaborative spends (that look like single-signature spends).

Aside: Taproot

I think some Taproot-like construction would be a viable alternative to P2SH32 for many use cases (in the same way it would be a superior alternative to the existing P2SH20), but given that the network already supports P2SH20 – and we aren’t going to “deprecate” P2SH20 – it’s reasonable that the network should support a “hardened” version of the P2SH primitive, too. The cost of adding a 32-byte variant in terms of protocol complexity/technical debt is trivial (P2SH20 already exists), but 1) the existence of a hardened P2SH32 option offers a much simpler upgrade path for vulnerable use cases, and 2) the hardened P2SH32 option offers some user-actionable resilience if a particular use case is discovered to be vulnerable to a new attack (when using the Taproot-like alternative).

And of course, as with OP_EVAL-like alternatives, I don’t expect we’ll have sufficient information to settle on a specific Taproot design before 2023. BTC has already deployed multiple versions of its virtual machine, so the relative increase in protocol complexity from its recent Taproot deployment is not as significant as it would be on BCH (considering BTC’s support for e.g. SegWit, the legacy sighash algorithm, etc.). BCH’s existing features and ecosystem also make deploying Taproot less valuable (BCH has more advanced contracts, covenants, and low fees + CashFusion), so we should take our time selecting a particular Taproot design, if any.

TL;DR

I think we should work on both: relaxing standardness would be great for covenants, and P2SH32 is important for privacy/security of non-covenant use cases.

Some future Taproot design could also replace P2SH (and P2PKH) for most use cases, but that doesn’t mean we should leave P2SH “partially broken” – we’re not going to deprecate P2SH20, so we should also support P2SH32 for completeness/out of an abundance of caution.

2 Likes

BTW, isn’t “Taproot” just a fancy name for “Threshold signature”? https://eprint.iacr.org/2020/1390.pdf

A threshold signature scheme (TSS) enables a group of parties to collectively compute a signature without learning information about the private key. In a (t, n)-threshold signature scheme, n parties hold distinct key shares and any subset of t + 1 ≤ n distinct parties can issue a valid signature, whereas any subset of t or fewer parties can’t. TSS’ setup phase relies on distributed key generation (DKG) protocol, whereby the parties generate shares without exposing the key. In practice, TSS is often augmented with a reshare protocol (a.k.a. share rotation), to periodically update the shares without changing the corresponding key.

Some quick responses. As usual, bitjson’s posts are long and loaded with gems, so I’ll try to come back for other points that need more thinking.

This kind of issue has been the main reason for the sunsetting of sigops. We now use SigChecks, which I think catches the issue you are talking about. It runs at validation time and is thus agnostic to where the complexity comes from: it will catch it regardless of whether it is part of an input or an output.
This then protects the network from blocks and/or transactions that are overly heavy on the validation phase.

I agree. From my point of view this is an economic matter that devs should not be in charge of. I don’t think it makes sense to define limits at any standardness or even consensus level to govern which data should be allowed to be mined. There are strong economic incentives available; I feel that the best way to solve this allocation of production-space is by allowing miners to set fees and priorities on them.
This is a discussion topic that would indeed be good to have. My current feeling is that we can redefine transaction priority (which transactions combine into what size of block) and move from today’s solution, where it is just about fees, to a score based on transaction-local properties – like whether it spends more UTXOs than it creates and, in the context of this point, how much block space it uses that is not for economic activity.
Most of this is still quite irrelevant with current blocksizes, which is likely why it’s not been discussed much :slight_smile:

This is a very good point. The original UTXO did not actually copy the output script into the UTXO; that was added by Core much later. Anyone good at databases would loathe to see people dumping byte arrays in the same row as their primary key. :roll_eyes:
What happened is that, in order to make pruning work, the output scripts had to go somewhere, since the original data would be deleted. Someone figured that the UTXO was to be that place.

I would expect an effort for the reference client that moves the output scripts out of the UTXO to not be a huge amount of work. Not simple, but not overly complex either. The goal is simply copying the output scripts somewhere safe and occasionally pruning those separately, so they don’t have any negative impact on the UTXO database.

Actually, each full node has a very different way of doing this today. Flowee the Hub has its own UTXO database (a raw C++ codebase), Bitcoin Verde uses a SQL one. BCHD is something different again (I don’t know really). BCHN still uses the way you are talking about.
But it should be pointed out that while this becomes more relevant the longer the stored data gets, the tests on huge blocks show that this is not a bottleneck any time soon. So those improvements are rather academic in nature.

1 Like

P2SH is something enforced by the Script VM’s hypervisor – the native consensus code – which “hacks” the Script VM state from outside the sandbox.
The paradigm (send to address, spender reveals the contract later) has stood the test of time well, but in hindsight it is obvious that the implementation could have been better.
It is what it is for historical reasons which deployed upgrades as soft-forks, and, as you clearly demonstrated, there’s still baggage of those upgrades to sort out.

We can stop pretending it belongs inside script - at all, and extract the contract hash and hashed contract code into their own respective transaction fields using the non-breaking PreFiX method.
Something like the below…

Output Format

  • transaction outputs
    • output0
      • satoshi amount, 8-byte uint
      • locking script length, compact variable length integer
      • locking script
        • PFX_LOCK_HASH, 1-byte constant 0xED
          • lock hash, 20 or 32 raw bytes
    • outputN

Consensus would never hand this to Script VM. From the point of view of Script VM it would be a NULL locking script.

From point of view of old software, this will still look like a script, one starting with a disabled opcode, one of the 2 possible “scripts”:

  • 0xED1122334455667788990011223344556677889900
  • 0xED1122334455667788990011223344556611223344556677889900112233445566

Consensus code would fail any TX where an output’s locking script starts with 0xED and the remaining payload has (length != 20 && length != 32).
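
A sketch of that consensus check (PFX_LOCK_HASH = 0xED is the hypothetical constant from this proposal, not a deployed codepoint):

```python
PFX_LOCK_HASH = 0xED

def parse_lock_hash(locking_script: bytes):
    if not locking_script or locking_script[0] != PFX_LOCK_HASH:
        return None                      # ordinary script: hand to the VM as usual
    payload = locking_script[1:]
    if len(payload) not in (20, 32):
        raise ValueError("invalid PFX_LOCK_HASH output")  # consensus failure
    return payload                       # hash of the real locking script; the VM sees a NULL script
```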

Input Format

  • transaction inputs
    • input 0
      • previous output transaction hash, 32 raw bytes
      • previous output index, 4-byte uint
      • unlocking script length, compact variable length integer
      • unlocking script
        • PFX_LOCK, 1-byte constant 0xED
        • real unlocking script, variable number of raw bytes
      • sequence number, 4-byte uint
    • input N

Introspection

  • New one, OP_OUTPUTLOCKHASH = 0xED - pops an index, returns the hash or empty stack item if feature is not used on the output. This completes definition of 0xED across all 3 contexts (input, VM, output).
  • OP_UTXOBYTECODE would return an empty stack item.
  • OP_ACTIVEBYTECODE would work the same.
  • OP_INPUTBYTECODE would work the same from Script VM PoV, it would return the whole thing i.e. concatenation of real unlocking script and redeem script

Speaking in relational database terms, we index UTXOs by their TXID/index and accept a level of denormalization with the locking script.

You’d want something like this instead:

(txid, index, satoshi_amount, locking script hash) M → 1 (locking script hash, locking script)

right?

When broadcast, transactions could even omit the actual redeem script from the input if it matches one seen before, because nodes will already have it and could retrieve it unless pruned.

Fundamentally it doesn’t even matter how nodes learn of some script, whether it’s been broadcasted as locking script (relaxed standardness), or as redeem script (P2SH unlocking script), nodes will only need it at time of execution - when unlocking the input and updating the UTXO state. “bare” output gives it before it will be needed, and P2SH gives it the script’s primary key first, and data will come later.

I think we need to recognize that P2SH and OP_EVAL are fundamentally different even if they’d seemingly do the same thing, because they’re in different execution contexts. The P2SH context is the hypervisor (consensus) layer, while OP_EVAL is called within a particular VM, where the VM gets to control its execution state – it knows that it’s about to run some module of its own code authenticated by the OP_EVAL hash, so the module must be valid bytecode for that VM; and if unknown hashes were let in, then the contract could even pre-authenticate some known template at runtime, letting potential spenders keep their variable data private if the execution path is not used.

My point is - it’s not one or the other - we’d want both P2SH32 (hopefully 2023) and OP_EVAL (later) :slight_smile:

Some more theoretical ramblings…
P2SH is fundamentally VM agnostic but its true nature may not be obvious because it’s been rolled out as if part of a VM. In theory it could support multiple VMs and languages while also hiding which VM it’ll use until time of execution comes, hiding everything about how some UTXO can be spent. @tom 's points about database structure made me realize that. Every UTXO is associated with some spending constraint. It’s a relationship (UTXOs) M → 1 (Constraints). If we use constraint hash as Constraints primary key it makes it easier to reason about - you just need to ignore those few bytes of P2SH wrapping that make it look like a Script when it’s really not.

Then, thinking in a blockchain-as-database mental model, we can better observe the difference between “bare” and P2SH, and how it relates to database operations:

  • “bare” executes 2 inserts: one into the Constraints table (hash as key, code as value), the other into the UTXO table (outpoint ref. as key, hash as foreign key, sat amount as 1-to-1 data)
  • P2SH seemingly does the same, where the Constraints table would have a NULL in place of the Script (hash as key, NULL as value) – but it does not ACTUALLY do this, because the hash is available in the UTXO map, so we save an insert-into-Constraints operation here

When execution time comes, what happens is:

  • “bare”: 1 delete from UTXO (optional delete from Constraints)
  • P2SH: 1 insert into Constraints, 1 delete from UTXO (optional delete from Constraints)

Deletes are authenticated by the spender, who provides the data for the Constraint to unlock. To be able to provide that data, he ought to be aware of the Constraint before sending the TX, so we relieve nodes of keeping that stuff in the blockchain state – they don’t need it until they get the unlocking values from the spender. And because Constraints always had some unique data, we didn’t bother keeping them around: we skipped managing the Constraints table entirely and saved an insert and the later housekeeping deletes.

That held up well until we got Introspection. It changes the game, because it makes it possible to code a contract where data is detached in another output, so we can have contract templates with a fixed hash – and if some contract template is often used, then it would make sense for nodes to cache its code. Even so, relaxing the networking standardness rules would matter only for the first reveal of the contract (and saving some input bytes when forward-validating the next contract step). An alternative would be for contract authors to just do a single spend and reveal the contract to everyone, and nodes could update the Contracts table just the same.

The next spend would only require the Contract key (hash) to be broadcast, and network messages could be optimized for this, so transactions could omit the Script code entirely if it matches a previously seen one.

Anyway, lots of interesting stuff for research; it’d be easy to get lost in it :sweat_smile: However, the clock is ticking – if we want P2SH32 then let’s go for the least-friction upgrade for 2023: just extend the legacy wrapper format
0xA914111122223333444455556666777788889999000087 with 0xA820111122223333444455556666777788889999000011112222333344445555666687.

It would be really nice to snip off the 87, and also save a byte or two on the input’s push opcode for scripts longer than 75 bytes.

However, clock is ticking, if we want P2SH32 then let’s go for the least-friction upgrade for 2023: just extend the legacy wrapper format

When given the choice between doing the “easy” (all things considered) thing and the elegant thing, always pick the easy path. :slight_smile:

1 Like