Supporting Input Aggregation/MuSig using Transaction Introspection

Transaction sizes could be significantly reduced if the same signature could be used to validate multiple inputs.

Consider a transaction with hundreds of inputs – right now, the inputs section of that transaction will cost at least input count * 100 to 108 bytes (in the best case, signatures are 65 bytes and public keys are 33 bytes, + 2 bytes of push ops). With TxInt supported in unlocking bytecode, that would be reduced to around input count * 3 to 11 bytes, saving many KBs for large transactions.
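As a rough sanity check on those numbers, here's a back-of-the-envelope Python sketch for a hypothetical 500-input transaction (the 500-input figure and the 11-byte worst case are illustrative assumptions, not part of any spec):

```python
# Back-of-the-envelope unlocking-data sizes, assuming the best case of
# 65-byte Schnorr signatures and 33-byte compressed public keys (plus 2
# push opcodes), versus a hypothetical 11-byte introspection-based input.
# Outpoints, script-length bytes, and sequence numbers are identical in
# both cases, so only the unlocking bytecode is compared.

input_count = 500
per_input_now = 2 + 65 + 33            # push ops + signature + public key
per_input_txint = 11                   # worst case of the 3-11 byte estimate

print(input_count * per_input_now)     # 50000 bytes of unlocking data
print(input_count * per_input_txint)   # 5500 bytes, saving ~44.5 KB
```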

This has some overlap with the MuSig idea, which recently came up in the Transaction Introspection Telegram group. Background:

My summary/comments from the discussion:

There may now be a more promising strategy for implementing the MuSig idea than adding it as a new TX field: we could allow sibling inputs to pull signatures from each other’s stack using transaction introspection opcodes. That would even allow transactions to have multiple MuSigs, e.g. if there are two distinct aggregate public keys and many outputs for each need to be spent in the same transaction. […]

So some inputs could be as simple as e.g. <N> OP_INPUTBYTECODE, and the unlocking bytecode of input N would be <AggregateSig> <AggregatePubKey>. That would allow most inputs to have only 2 bytes, and the same signature(s) to be used by an unlimited number of inputs. […]

One non-obvious issue: anytime we allow a “non-constant” operation in an unlocking script, we introduce a malleability vector. An active third party can intercept the transaction and replace it with a TX where all inputs are “pre-computed” – the evaluated result on the stack is pushed directly rather than using the original unlocking bytecode. […]

In this particular case, the “precomputation malleability” isn’t as acute because the larger unlocking bytecode of the malleated transaction would make its relative fee rate lower than the original transaction’s. So if most of the network uses the 1 satoshi-per-byte heuristic, the malleated transactions would never be accepted. But that also seems insufficient to rely on. (E.g. it’s conceivable that some transactions could accidentally be malleable – someone could scan incoming transactions for snippets of repeated unlocking bytecode and grief them by broadcasting a more byte-efficient, introspection-opcode-based version. That wouldn’t be bad for the network, but it would modify the eventual TXID and possibly disrupt the sending wallet.) […]

I’m wondering if this would be best solved by a new signing serialization/sighash type which also includes non-exempt unlocking bytecode, where exemption is somehow indicated by the presence of the sighash/type flag in each other input. (The simplest option would be to exclude only the current input, which in practice would limit transactions to one input using this sighash/type flag.)

I’m going to think about this more (probably along the vein of a new sighash/signing serialization type), but if anyone has any ideas for resolving the malleability question, please share!


Update: the initial example was missing some necessary parsing – the other inputs in this scheme would still have to trim the pushing opcodes from, e.g., the 0th input’s bytecode, which would actually require 11 bytes with the current instruction set:

0x00XX517f7701417f517f77 (IDE example):
<0> OP_INPUTBYTECODE <1> OP_SPLIT OP_NIP <65> OP_SPLIT <1> OP_SPLIT OP_NIP
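To check that this script really leaves just the signature and public key on the stack, here's a minimal Python simulation of the relevant stack operations (the placeholder 100-byte input-0 bytecode and zeroed sig/pubkey values are assumptions for illustration only):

```python
# Simulating the stack effect of:
#   <0> OP_INPUTBYTECODE <1> OP_SPLIT OP_NIP <65> OP_SPLIT <1> OP_SPLIT OP_NIP
# on an assumed P2PKH-style unlocking bytecode for input 0:
#   0x41 <65-byte sig> 0x21 <33-byte pubkey>  (100 bytes total)

sig = bytes(65)       # placeholder 65-byte signature (including sighash byte)
pubkey = bytes(33)    # placeholder 33-byte compressed public key
input0_bytecode = bytes([0x41]) + sig + bytes([0x21]) + pubkey

stack = [input0_bytecode]   # <0> OP_INPUTBYTECODE pushes input 0's bytecode

def op_split(stack, n):
    top = stack.pop()
    stack.extend([top[:n], top[n:]])

def op_nip(stack):
    del stack[-2]

op_split(stack, 1)    # <1> OP_SPLIT: peel off the 0x41 push opcode
op_nip(stack)         # OP_NIP: drop the push opcode
op_split(stack, 65)   # <65> OP_SPLIT: separate the signature
op_split(stack, 1)    # <1> OP_SPLIT: peel off the 0x21 push opcode
op_nip(stack)         # OP_NIP: drop it, leaving [<sig>, <pubkey>]

assert stack == [sig, pubkey]
```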

This would be much simpler with @Tobias_Ruck’s OP_TXINPUTDATA/OP_INPUTDATAITEM idea, which reads a particular stack item from another input’s parsed instructions. That would bring this down to 6 bytes:
<0> <0> OP_INPUTDATAITEM <0> <1> OP_INPUTDATAITEM
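Since OP_INPUTDATAITEM doesn't exist, the following is only a sketch of one plausible semantics for it – parse the referenced input's unlocking bytecode into its pushed items and return item k – under the simplifying assumption that the bytecode contains only direct pushes (opcodes 0x01-0x4b):

```python
# Hypothetical OP_INPUTDATAITEM semantics (a sketch, not a specification):
# parse another input's unlocking bytecode into its pushed stack items
# and return item k. Only direct pushes (0x01-0x4b) are handled here.
def input_data_item(unlocking_bytecode: bytes, k: int) -> bytes:
    items, i = [], 0
    while i < len(unlocking_bytecode):
        n = unlocking_bytecode[i]        # direct push: opcode byte is the length
        items.append(unlocking_bytecode[i + 1:i + 1 + n])
        i += 1 + n
    return items[k]

# <AggregateSig> <AggregatePubKey> as input 0's unlocking bytecode:
input0 = bytes([0x41]) + bytes(65) + bytes([0x21]) + bytes(33)
assert input_data_item(input0, 0) == bytes(65)   # <0> <0> OP_INPUTDATAITEM
assert input_data_item(input0, 1) == bytes(33)   # <0> <1> OP_INPUTDATAITEM
```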

And of course, if we had OP_EVAL (which was discarded over the same unfounded fears that kept covenants off of BTC – see history, P2SH rationale), the other inputs would cost 3 bytes:
<0> OP_INPUTBYTECODE OP_EVAL

Chris Pacia also clarified here that I was misunderstanding the intention of his MuSig proposal. If we integrated that proposal as a transaction format change plus a new sighash/signing serialization type (or just as a special case for 0th inputs plus a new sighash/signing serialization type), the VM would calculate the aggregate public key from the public keys provided by each of the other inputs. So other inputs would be 34-35 bytes:
<1-byte sighashAggregate> <32-byte pubkey>

In this case, the VM would internally create the aggregated public key. Though the savings aren’t as large, they would be available to practically all P2PKH transactions, even if they weren’t pre-planned for MuSig aggregation. (Note: there’s a difference in CPU validation cost which we should study more, see background post.)


TL;DR: this is probably something we should continue thinking about – there’s no obvious best answer yet. It seems like there are actually multiple issues to untangle, and possibly multiple solutions which would be reasonable to deploy together:

Large transactions moving many UTXOs paying to the same address – right now, these may duplicate the signature and public key hundreds or thousands of times. With a deduplication strategy, we could bring these from 100-108 bytes per input down to 11 bytes or less.

CashFusion transactions and other large-TX protocols – I think these can be made much smaller using any of the above techniques, particularly the pre-planned ones, bringing them from 100 bytes per input down to 11 bytes or less.

All other transactions – using Chris’ transaction-time MuSig strategy, practically all other P2PKH transactions (with wallet support) could cut sizes from 100 bytes to ~35 bytes per input.
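To put the three cases side by side, here's a small Python comparison using the hypothetical per-input figures from this thread (illustrative estimates, not a formal specification):

```python
# Rough per-input unlocking-data sizes for the three cases above.
baseline = 100                       # status quo P2PKH: <sig> <pubkey> + push ops
schemes = {
    "introspection dedup": 11,       # split/nip parsing of input 0's bytecode
    "transaction-time MuSig": 35,    # sighash flag + 32-byte public key
}
for name, size in schemes.items():
    saved = 100 * (baseline - size) / baseline
    print(f"{name}: {size} bytes/input, ~{saved:.0f}% smaller")
```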

I’m probably going to remain focused on some of the other protocol upgrades which I’m more confident will be ready for activation in May 2022, but if anyone else is interested in running with one of these areas, I’d be happy to pitch in.


Interesting history on the OP_EVAL stuff. FWIW this Input Aggregation & MuSig stuff seems quite compelling. I will definitely keep it in mind when considering some of the capabilities we’re trying to develop on top of BCH and see what real-world use cases that we have might benefit from it. My initial reaction is I’d like to see this move forward in a more detailed proposal.


This looks like something that would be very useful, actually. I’ve always used a single-key address for my public donations (jonathan#100), and I know there are thousands of other users in the same boat.

Furthermore, any charity that wishes to provide public auditability, or any fee-taking service that doesn’t care about the privacy costs, would be able to reduce its transaction sizes, fees, and blockchain network and storage cost impact.

Now, it does have some drawbacks. In particular, it financially incentivizes people to trade away the privacy benefit of using multiple addresses. On a congested blockchain with high fees, I’d probably be against this for that reason alone, but on Bitcoin Cash I think it’s something to consider.

It might only help a few percent of users, but for the users it impacts, it could have profound value.

Imagine a microtransaction use case, something like a trading card game where you buy random card packs or similar. If popular, that might mean tens of thousands of transactions per day (and we’re not talking about tokenizing the cards – just paying for the service).

Furthermore, for those who don’t care about the privacy, their wallet could by default spend all of its UTXOs in every transaction, reducing the UTXO set and keeping it consistently small, while ensuring that if a fee market does show up in the future, all the consolidation needed to minimize fee impact will already have been done.
