I think PMv3 is a great proposal. It’s minimal, powerful and backwards compatible.
It did take me several days to wrap my head around it, and I thought it might be useful to explain where I got confused along the way…
To start with, it’s the first time I’ve seen the term witness used in the context of Bitcoin Cash. We already have the terms input script, scriptSig, redeemScript and unlocking script, so I wasn’t sure why the new term was needed. Then, to make matters worse, I read “hashed witness” to mean “the hash of the witness” as opposed to “the full witness, which is referenced elsewhere by its hash”. I spent ages trying to understand why you’d optionally append a hash at the end of the transaction. This led me further astray as I progressed through the doc: “If the parent transaction chose to provide a hash”… my interpretation: oh, that must be the optional hash at the end of the transaction.
The term Hashed Unlocking Script would still have confused me here. Perhaps Detached Input Script, Detached Unlocking Bytecode or just Detached Script / Detached Bytecode would have been clearer. Then you could replace “Optional Hashing of Witness Data” with “Optional Detachment of Input Scripts”.
In the rationale section, I found it useful to think of this change as enabling provenance (restrictions on ancestors) as opposed to covenants (restrictions on descendants). A simple example of provenance would be a proof that “my parent has the same redeem script as me”, which translates to “my grandparent pays the same P2SH address as my parent”. This can be proved as follows:
include both the parent and the grandparent transaction in the input script (in front of the redeem script)
in the redeem script, inspect them and verify they have the matching output scripts
in the redeem script, verify the embedded parent by comparing its hash to an outpoint hash in the current transaction
in the redeem script, verify the embedded grandparent by comparing its hash to an outpoint in the parent
To avoid infinite growth, embed truncated transactions which are sufficient to inspect their output scripts (in step 2) and calculate their hashes (in steps 3 and 4).
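The hash checks in steps 3 and 4 can be sketched roughly like this in Python (the truncated-transaction handling is elided, and all names here are illustrative, not from the spec):

```python
import hashlib


def double_sha256(data: bytes) -> bytes:
    """TXIDs are the double SHA-256 of the serialized transaction."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()


def verify_lineage(embedded_parent: bytes,
                   embedded_grandparent: bytes,
                   current_outpoint_txid: bytes,
                   parent_outpoint_txid: bytes) -> bool:
    """Steps 3 and 4: tie the embedded transactions to real outpoints.

    The embedded copies are trusted only because their hashes match
    outpoint TXIDs that consensus has already validated.
    """
    parent_ok = double_sha256(embedded_parent) == current_outpoint_txid
    grandparent_ok = double_sha256(embedded_grandparent) == parent_outpoint_txid
    return parent_ok and grandparent_ok
```

In the real contract this comparison happens inside the redeem script, of course – the Python is only meant to show the shape of the check.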
Side note: I use the term redeem script because locking script is ambiguous. It could refer to the redeem script (as I think it does in Bitauth IDE) or it could refer to the scriptPubKey. I might be a bit confused here and would be happy to be corrected. Also, while on terminology, I’d like to suggest that there is no such thing as parent introspection, only parent inspection.
In the scheme above, the wallet which constructs the transaction is responsible for obtaining the raw parent and grandparent transactions so that they can be embedded. Nothing in the virtual machine allows inspecting the parent or grandparent transactions. But that’s not what I thought at first. Why? The Medium article cites CashTokens as an example of “Proof by Induction”, but it’s not referenced in that section – it’s referenced under “Fixed Size Inductive Proofs” – so I was looking for some of the answers about PMv3 there. What I found was new opcodes, so I thought these were parent inspection opcodes, but there was no mention of opcodes in the PMv3 spec. Eventually I realised they were introspection opcodes related to TxInt and were not part of PMv3, and that fixed-size inductive proofs don’t need parent inspection opcodes (although those would certainly be one solution).
Basically I was completely confused due to lack of knowledge on the subject matter. Once I cleared up all of the above, the penny dropped and I had the eureka moment. All down to my lack of background, and I’m only brain-dumping my confusion here in the hopes it will help others at my level get there faster. Because wow, it’s awesome! Well done on thinking this up.
I can’t see how Hashed Witnesses affect signing serialization, so I’m wondering if a transaction could be malleated by moving a witness from its usual location in the input to the hashed witnesses section?
Thanks for the detailed walkthrough @rnbrady, this is really helpful!
I’ll try to revise the spec to make those areas clearer. And that step-by-step summary probably needs to be included too.
Also agree the terminology needs work – “detached” could be a good option, I’ll explore that when I’m working on incorporating everything else. (And thanks for the catch on “parent introspection” → “parent inspection”, I can’t unsee it now.)
Ah, you’re right! As specified, there’s a third-party malleability vector. Not sure if I have a solid answer yet. It’s just one bit of data we need to commit to for each signature (either true or false), so it’s a good candidate for a sighash/type flag. That will definitely need to be in the spec, thanks.
Just an update for those following this topic: I’m continuing to work on a large decentralized BCH application as a more full-featured example. It’s a “two-way-bridge” covenant for a prediction market with support for millions of shareholders, an on-chain continuous IPO, and a quarterly BCH withdrawal process (which uses a Hivemind-like voting/commitment system among shareholders).
I think a large, real-world example will be valuable for evaluating PMv3 and several of the other VM-related proposals. There are several dependencies I’ll need to iron out, so I’ll also release that tooling over the next few months as I’m working on the larger goal.
You know, I think another discussion made this finally “click” with me just now!
While pondering this, I realized that CashTokens is exactly where you’d arrive if you want to solve those problems I describe! Let me see if I got it right:
You implement all CashToken interfaces in the actual contract, hash it, and it becomes the genesis. Nobody knows it’s a CashToken genesis because they only see the P2SH hash in the UTXO at this point and it could be any contract. Only those with whom the author shared the contract via a side-channel could know.
You then make another TX spending it, where the full contract is written to the blockchain in the input script and revealed to the world.
Through a covenant, you enforce the outputs to carry that same token contract, and it can be hash-compressed in the following TXes.
The new P2SH hash covers the previous hash plus the token contract – that’s why it’s always different – but that only proves input script == output script, right?
The HashedWitness is then needed to prove that input script == previous TX output script
By induction, any CashToken output can then prove its lineage back to genesis
Anyone can verify that he’s got a token from just the receiving TX and the contract. I’m unsure where he obtains the contract, from the genesis TX? In all TXes that follow it’s compressed into a hash yeah?
Script stuff still scares me but the nice thing is: I don’t need to understand it to understand how CashTokens work!
One detail, where do you store the token state? It must live outside the part that’s enforced to be the same in this covenant chain. So I guess it’s somewhere in the input script and that part doesn’t need to satisfy the in==out but must satisfy CashToken semantics which are verified with the fixed part, something like that yeah?
Hey everyone, just wanted to share an update on PMv3:
I’ve spent a lot of time testing these ideas over the past few months, and I stumbled upon a modification to the PMv3 draft that both 1) fixes a malleability vector and 2) opens a path for signature aggregation.
In working on solutions, I spent a lot of time thinking about optimizing contracts, covenants, and transactions in general. I realized there are several closely related problems that an ideal solution should cover:
Malleability makes contracts less efficient and harder to validate – most non-trivial contracts must carefully validate all unlocking bytecode data to prevent vulnerabilities introduced by malleation, and this validation both bloats the contract and makes it harder to review for security. (For example, most covenants which use OP_SPLIT are vulnerable to a sort of “padding” attack which is not intuitive to first-time contract authors.)
The primary blocker to deduplication in transactions is unlocking bytecode malleability – because unlocking bytecode typically contains signatures (and signatures can’t sign themselves), unlocking bytecode is excluded from transaction signing serialization (“sighash”) algorithms. This is also the reason why unlocking bytecode must contain only push operations – the result of any non-push-including unlocking bytecode is a “viable malleation” for that unlocking bytecode. But if unlocking bytecode is signed, transaction introspection operations offer an opportunity to further reduce transaction sizes via deduplication. In a sense, if non-push operations could be used in unlocking bytecode, transactions would effectively have safe, efficient, zero-cost decompression via introspection opcodes.
Signature aggregation could save >20% of network bandwidth/storage – we know signature aggregation could save a lot of space and possibly improve transaction validation performance, but there’s no good place to put signatures which are shared between inputs. While we don’t want to bloat PMv3 with signature aggregation, a good v3 transaction format should not ignore this problem.
There are several possible solutions for each of these, but there is one particular solution I think is very elegant – it’s now specified in the CHIP:
“Hashed Witnesses” have been renamed to Detached Proofs.
Ah, sorry to keep you waiting for an answer here – your numbered steps are very close – maybe Richard’s summary will help to make 4 through 7 clearer:
In the latest CHIP revision, I’ve tried to make things a bit clearer too, but I may try to add an appendix of sorts to walk through this particular “cross-contract interface” in detail.
Yes! Each token’s “genesis” transaction hash is stored in the top-level “corporation” covenant, which holds the full set in a merkle tree. So CashTokens can be moved around independently, then eventually checked back in to the top-level covenant. If you haven’t read it yet, see the description in the CashTokens demo.
That may help to get a sense for why detached proofs are so useful – they allow contract designers to build interfaces between different contracts (without resorting to the “global state” model of systems like Ethereum).
@bitjson looking at the CHIP fresh after talking yesterday, there is one thing I notice that I didn’t mention before – I think a contrast exercise would be helpful for both high-level general understanding and high-level technical understanding. What I mean is taking a contract or use case and then showing where the dividing line is: currently we can only do X; with PMv3, Y becomes possible. The CHIP as I read it now basically skips straight to Y, so it is harder to understand the scope of the gain.
Good idea, thanks! I’m just now realizing that this revision doesn’t even link to the CashToken demo anymore.
I’ll work on adding that appendix directly to the spec. Probably want 2 examples:
basic parent transaction inspection - the covenant checks that it’s been given an un-hashed copy of a parent transaction for one of its inputs (validated by comparing the hash with an outpoint TX hash), then inspects some simple parent property like transaction version.
inductive proof covenants - the covenant inspects two transactions back to confirm that its grandparent used the expected covenant and its parent spent that covenant successfully, proving that the grandparent either is or descends from an original “mint” transaction of the same covenant. (This one is what @rnbrady outlines above.)
As a followup and to make things more concrete I have the following suggestion for your proposal.
I would like to split the proposal into two parts with the aim to simplify activation and to make sure that we have a smaller change that I believe can reach consensus next May.
The actual changes in the transaction format would then become this:
2022-05: Transaction v3 introduced; where needed, the input script is replaced with a 32-byte hash, and the list of detached proofs is appended to the transaction.
When it comes to the transaction format, this is the only change. It actually does unlock your many improvements and inductive proofs. It allows Script to look backwards in time. It enables tokens and it enables most of your cool example scripts. Other scripting improvements are a separate CHIP.
2023: Transaction v4 introduced for variable-int sizes and various other ideas (tbd).
Variable-size integers are one of the ideas you separately came up with which was previously part of FlexTrans (BIP 134); there were some other neat ideas there that would be useful to combine in such a transaction format change.
The direct result of separating your one CHIP is that parsers that read current (v2) style transactions will very likely not need to be modified to accept v3 transactions. After all, you just add some more data to the end. Even TXID calculation is going to be unchanged.
This is going to make deployment much simpler and since the May 2022 upgrade already has a large number of proposals I think it makes sense to keep it simple.
Changing the integer format (the var-int idea) in transactions is simply a much more invasive change – 100% of the existing transaction-parsing code will need to support it – so it really makes sense to wait until after Restrict Transaction Version has activated, which is specifically meant to make this kind of deployment simpler. Hence the suggestion to push it to a later upgrade.
So it would be nice to separate those more invasive transaction changes out of the detached signature change and move them to the 2023 upgrade, giving ourselves as well as the wider ecosystem plenty of time to get it right.
Hey @tom, thanks so much for the review, and really sorry to keep you waiting! I’ve been offline this past week, and I wanted to put a bit more research into this response.
(Sorry everyone for the length – I’ve added headings and bolding to make it a bit more skim-able. Also see TL;DR below.)
Proposed Advantages of FlexTrans (BIP134)
I’d like to get a full accounting of what advantages we could get from a more FlexTrans-inspired v4 like you mention. So far I see:
variable ordering of transaction elements – since each field is “labeled”, transaction contents don’t need to be specified in a particular order (like v1/v2 or PMv3 transactions). This could make some contracts simpler, e.g. some transactions could place their outputs first, so covenants which inspect those parent transactions would have an easier time verifying properties of outputs. (Note: this efficiency gain only applies to contracts which inspect parent transactions, since they couldn’t use introspection opcodes.)
ability to remove transaction elements – PMv3 leaves two sub-optimal fields as 4-byte Uint32s: sequence number and locktime. A labeled transaction format would allow these fields to be optionally excluded from transactions which don’t need them, saving up to 4 + 4 * input count bytes for many transactions.
easier implementation of future format changes – since transaction parsing code would be designed to read labels rather than rely on field ordering, future transaction format changes which add fields could fit into the code more easily. Older implementations would ignore fields they don’t recognize, so new fields could sometimes be deployed without breaking changes.
Am I missing any other potential advantages here?
Before digging into those topics, I should note: the goal (in PMv3) of unifying transaction integer formats is to allow contracts to read/manipulate them. Transaction introspection opcodes would partially solve this issue, but for the primary PMv3 use case – parent transaction inspection – introspection opcodes will never be available (that would massively increase validation costs).
A core concern I would have with a FlexTrans-like transaction format (like BIP134) is that it would also break this contract compatibility: if labels are shorter than 1 byte, contracts would need to use combinations of bitwise operations (or a new set of opcodes) to parse each label and field (possibly in variable orders). If each label was a full byte, the average transaction would waste 20-30 bytes on field labels.
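For a rough sense of where a 20-30 byte estimate comes from, here’s a back-of-envelope count assuming one-byte labels. The field breakdown is my own guess at a plausible labeling scheme, not something taken from BIP134:

```python
# Hypothetical one-byte field labels on a typical 2-input, 2-output
# transaction (field breakdown is an assumption for illustration):
fields_per_input = 4    # outpoint hash, outpoint index, script, sequence
fields_per_output = 2   # value, locking script
top_level_fields = 2    # version, locktime

inputs, outputs = 2, 2
label_bytes = (top_level_fields
               + inputs * fields_per_input
               + outputs * fields_per_output)
print(label_bytes)  # 14 for this coarse layout; labeling fields more
                    # granularly pushes it into the 20-30 byte range
```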
On variable ordering of transaction elements
As mentioned, variable ordering of transaction elements could theoretically simplify some contracts which can’t otherwise use introspection opcodes. In practice, I think I can save ~10 bytes in a covenant which inspects parent transaction outputs (by “skipping” the version and input fields). However, I think this advantage breaks down when we consider compatibility between multiple contracts: in order for this advantage to be utilized, all “downstream” contracts must support the same “transaction layout”: a particular, expected ordering of transaction fields.
In the v1/v2 and PMv3 transaction formats, all contracts can expect a standard layout (version, inputs, outputs, locktime). In a FlexTrans-like transaction format, if a contract gained efficiency from variable ordering, it would necessarily be making itself incompatible with similar contracts which don’t use that same layout. This doesn’t make it useless (there are many contracts which wouldn’t have a “layout affinity”), but I think it does significantly complicate any systems which would attempt to employ that kind of optimization in practice.
I think the downsides of variable ordering are pretty notable:
parsing is more complicated – parsing for v1/v2 and PMv3 transactions can be implemented without high-level functions or recursion – fields always occur in an expected order and can be parsed by simple, imperative operations. A format with labeled fields requires some abstraction to represent the parsed contents of the transaction, including which required fields have and have not yet been parsed. (For example, it’s currently fairly easy to parse most transactions in BCH VM bytecode – a variable-order format would make this practically impossible.)
ordering information costs bytes – v1/v2 and PMv3 transactions contain no field ordering information. A labeled-field transaction format must always spend some bytes encoding this information, and for contract-compatibility, we’d likely need labels to be one byte, adding 20-30 bytes per average transaction.
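To illustrate how simple fixed-order parsing is, here’s a minimal v1/v2 parser sketch in Python – illustrative only, with no error handling and no awareness of PMv3’s extensions:

```python
def read_varint(data: bytes, i: int) -> tuple[int, int]:
    """Read a Bitcoin CompactSize integer; return (value, next offset)."""
    prefix = data[i]
    if prefix < 0xfd:
        return prefix, i + 1
    size = {0xfd: 2, 0xfe: 4, 0xff: 8}[prefix]
    return int.from_bytes(data[i + 1:i + 1 + size], "little"), i + 1 + size


def parse_tx(data: bytes) -> dict:
    """Fields occur in a fixed order: version, inputs, outputs, locktime."""
    version = int.from_bytes(data[0:4], "little")
    i = 4
    n_in, i = read_varint(data, i)
    inputs = []
    for _ in range(n_in):
        outpoint = data[i:i + 36]; i += 36          # prev TXID + index
        script_len, i = read_varint(data, i)
        script = data[i:i + script_len]; i += script_len
        sequence = int.from_bytes(data[i:i + 4], "little"); i += 4
        inputs.append((outpoint, script, sequence))
    n_out, i = read_varint(data, i)
    outputs = []
    for _ in range(n_out):
        value = int.from_bytes(data[i:i + 8], "little"); i += 8
        script_len, i = read_varint(data, i)
        outputs.append((value, data[i:i + script_len])); i += script_len
    locktime = int.from_bytes(data[i:i + 4], "little")
    return {"version": version, "inputs": inputs,
            "outputs": outputs, "locktime": locktime}
```

Note there are no lookups, no label dispatch, and no “have we seen this required field yet?” bookkeeping – exactly the properties a labeled, variable-order format would give up.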
Given all this, I don’t see much value in adding this “layout” information to transactions.
On removing optional fields from transactions
When we talked about this on the last dev chat, I was most interested in the possibility of saving bytes by dropping unused sequence number fields. After thinking about it this week, I’m afraid a feature of sequence numbers actually makes this unwise in the same way it is unwise to drop or optimize the locktime field: changes to those fields can negatively interact with mining incentives.
How transaction formats can interact with mining incentives
Right now, many transactions include a locktime with a recent block height to disincentivize miners from “fee sniping” – mining one block behind to “snipe” fees paid to the miner of the most recent block.
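As a concrete sketch, this is roughly the heuristic Bitcoin Core-style wallets use when choosing that locktime (the 10%-of-the-time random reach-back follows Bitcoin Core’s convention, but treat the exact numbers as illustrative):

```python
import random


def anti_sniping_locktime(current_height: int) -> int:
    """Set locktime to the current tip height so the transaction can't
    be pulled into a replacement of an earlier block.

    Occasionally reach further back so the locktime doesn't perfectly
    fingerprint when the transaction was built (privacy measure).
    """
    locktime = current_height
    if random.random() < 0.1:
        locktime -= random.randint(0, 99)
    return max(locktime, 0)
```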
I’m inclined to believe “fee sniping” is not as serious of a problem as it has been considered in the past, but I can’t confidently write it off – particularly as block subsidies fall below transaction fee payouts. If most mining revenue is derived from fees, lulls in network transaction throughput could incentivize some miners to attempt replacement of recent high-fee blocks (especially if hash rate falls during these network lulls).
From the network’s perspective, allowing locktime to be optionally excluded from transactions would be a direct tradeoff between bandwidth and this existing security measure against fee sniping: if locktime is optional, average users are incentivized to not include a locktime to save on bandwidth/fees, decreasing network security in general. The same is generally true (but more nuanced) for sequence number fields, since they can limit spending by UTXO “age”.
Especially in a world with many available sha256-based networks for miners to switch between, future miners aren’t guaranteed to behave in the best interest of any single network. Since we can’t rule out the possibility of fee sniping causing network instability in the future, I think it’s unwise for Bitcoin Cash to make either locktime or sequence number optional (or even variable-length).
Other than those two fields (and a possible 1-byte OP_RETURN optimization), I believe PMv3 is already as efficient as any uncompressed transaction format could be: there are no other wasted bytes, and any further optimization would require packing information from multiple fields into the same subset(s) of bytes (e.g. compression algorithms). (Note: if anyone identifies any other wasted bytes in PMv3, please respond here.)
Given all this, I believe PMv3 is already more byte-efficient than any equally-secure, labeled/tagged transaction format could be.
On ease of implementation / future changes
Tagged/labeled formats are popular for their flexibility (XML, JSON, Protocol Buffers, etc.) – they allow new fields to be added more easily and without breaking existing clients. However, I think Bitcoin Cash transactions are not well-suited to tagged formats.
I think there are two kinds of changes we could make in future transaction versions:
validation-affecting changes: these are changes like PMv3’s detached proofs and detached signatures – they alter how the transaction is validated, requiring consensus logic which is not yet known. If blindly accepted by existing clients (rather than in a specified context like PMv3), other validation-affecting changes necessarily create a vulnerability in outdated nodes: old clients might consider a transaction to be valid, but updated clients might know to reject transactions with an invalid value for a particular field (as was the case with BIP16/P2SH).
new “annotation” fields: these are other types of “data” fields which don’t affect transaction validation.
In the case of validation-affecting changes: I don’t believe it is ever wise for implementations to blindly accept new fields as valid. Financial software should “fail safe” – if the software receives a message it can’t interpret, it should reject the message rather than ignoring the offending fields. E.g. for a wallet, an unrecognized field in a received transaction could prevent the funds from being spent as expected in the future, or even open the funds to theft by some new mechanism – it’s never wise for the wallet to accept a transaction with an unrecognized field.
In the case of “annotation” fields: I would argue that OP_RETURN data-carriers are already better than any possible annotation field. Fundamentally, the transaction format allows owners of transaction inputs to commit to a set of payment destinations: outputs. The format even allows for variability in signing algorithm: some inputs may authorize only their corresponding output with SIGHASH_SINGLE and others may choose to authorize all outputs with SIGHASH_ALL. As such, outputs are also the natural place for annotations, commitments, claims, and other kinds of signable data.
Adding special transaction fields for these “annotation”-type fields would duplicate this existing infrastructure (but effectively only supporting the equivalent of SIGHASH_ALL), while providing no additional value. (With PMv3, OP_RETURN outputs are just as byte-efficient as any “built-in” field could be.)
In short, I also don’t expect a tagged/labeled transaction format to make future transaction format changes any easier.
Conclusion on tagged/labeled transaction formats
So those are my latest thoughts on tagged/labeled transaction formats. There are certainly differences which seem advantageous on the surface, but I haven’t been able to identify many benefits which seem likely to hold in practice.
Should we deploy a PMv3-lite? (no integer changes)
On the idea of developing a minimum-viable PMv3 upgrade – In the past few weeks I’ve also spoken with the General Protocols and Software Verde teams, so I’m also going to document some realizations from those calls here too:
I’ll call this proposal “PMv3-lite” (PMv3 with only the detached signatures and detached proofs). I should note that this was how the very first draft of PMv3 worked – I expected that not modifying anything about the existing transaction format (before the locktime field) would make migration smoother for many wallets. This sounds good in theory, but after some research, I realized it doesn’t really hold in practice because:
few wallets even parse transactions, and
of the wallets that do, none will actually handle these hacked-for-backwards-compatibility transactions without errors, i.e. there is little value in making a new transaction format “more backwards-compatible”.
I’ll walk through the thought process – I’d appreciate if anyone can identify a concrete counterexample where a “PMv3-lite” would be advantageous.
All nodes must update
Let’s start by noting that a new transaction format (with new fields committed to the block merkle tree) requires that all nodes upgrade to support it for full validation. Even with a technical debt-filled soft fork deployment strategy (e.g. SegWit), we’d just be hiding the change from old nodes (who could be vulnerable to payment fraud if they never heard about the soft fork).
So: as with any other consensus change (like new opcodes or a new difficulty adjustment algorithm), if we change the transaction format, all nodes must upgrade. Nothing we do to the transaction format makes this any less disruptive. For these users, there’s no meaningful difference between PMv3 and a “PMv3-lite”.
Consensus-validating wallets and services must update
As with nodes, any consensus-validating or SPV wallets must also be updated to support either PMv3 or “PMv3-lite”. This includes Electron Cash, Pokket, BRD, wallet backends (like Bitcore and Blockbook), and likely several consensus-aware, proprietary systems.
It seems plausible that PMv3-lite could be designed to allow the newer transactions to be parsed with existing v1/v2 parsing infrastructure; we could avoid modifying any field formats, then mislabel the length of the transaction in the P2P message header to trick the client into not parsing beyond locktime. However, there are at least two issues for which PMv3-lite would still break every implementation I’ve reviewed:
The 0x00 byte marking an unlocking bytecode as a detached-proof hash – any existing parsing code should error on this byte – unlocking bytecode may not have a length of zero. We could hack around this by changing PMv3-lite to use some alternative indicator, like 0x21 (33) and prefixing the hash with a new OP_DETACHEDPROOF opcode, but that would still only work for software which doesn’t evaluate VM bytecode. (And it would cost us an opcode.)
The TXID is not actually compatible – the TXID calculation for PMv3 transactions includes any detached signatures (after locktime). Detached signatures are necessary for preventing malleation by detaching proofs, but they also comprehensively protect against all malleability. We could replace those with a different kind of sighash algorithm, but we’d be losing signature aggregation and related benefits. Even if we did kick detached signatures to a future upgrade, there are lots of implementation details that I still think would ultimately break our “backwards-compatibility” (E.g. how do you tell the difference between an old client and one which is maliciously dropping detached proofs to DOS peers? If upgraded clients often ban old clients, do we still pretend the change is backwards-compatible?)
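To make the first issue concrete, here’s a sketch of how a legacy parser and a PMv3-aware parser diverge on that 0x00 byte (this is my simplified reading of the draft – the real encoding details live in the spec):

```python
def read_unlocking_bytecode(data: bytes, i: int, pmv3: bool):
    """Illustrative only: how the 0x00 length byte is reinterpreted.

    A v1/v2 parser reads 0x00 as a zero-length script (which most
    implementations reject or mishandle for unlocking bytecode);
    a PMv3-aware parser treats it as "the next 32 bytes are a
    detached-proof hash".
    """
    length = data[i]
    i += 1
    if pmv3 and length == 0x00:
        return ("detached_proof_hash", data[i:i + 32]), i + 32
    return ("inline_script", data[i:i + length]), i + length
```

The same bytes parse to two different structures depending on which rules the client knows – which is exactly why “existing parsing infrastructure just works” doesn’t hold here.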
And these are just the formatting issues – for the P2P message we’d still either be bifurcating the network (like SegWit) into “PMv3-enabled” and “legacy” peers or depending on “undefined behavior” where peers ignore (without errors) extra data at the end of P2P TX messages.
I don’t want to minimize the effort that will be required here: like any transaction format change, PMv3 would require more work across the ecosystem than, e.g. a new opcode (where practically only fully-validating nodes need to upgrade). Full nodes, wallet backends, SPV wallets, and indexers will all need to support new consensus rules to stay in sync with the rest of the network.
However, I think the difference between PMv3 and a “PMv3-lite” in this case could be best described as “compatibility theater” (of the same variety as SegWit’s “backwards-compatibility”). Outdated implementations need to update or they’re at risk of payment fraud – “backwards-compatible”/“soft fork” deployment schemes do not alter this reality. I think the Bitcoin Cash community should spend our time and energy on pull requests to important software rather than adding new layers of technical debt to pretend that “the protocol never changed”.
Most wallets don’t require updates for PMv3
Somewhat surprisingly, many popular Bitcoin Cash wallets don’t require client-side software updates to support PMv3.
While these wallets hold their own keys and sign transactions, they do not connect to the P2P network themselves: they make API requests to indexing servers which respond with address balances, UTXO data, and other useful information (like current exchange rates). These messages rarely even include information about transaction versions – wallet software simply keeps track of whichever UTXOs the server has shared, selecting from them when building transactions.
While these wallets would still only be able to create and sign v1 or v2 transactions, they would be fully interoperable with PMv3-supporting wallets: 1) wallets send v3 transactions to addresses controlled by an outdated wallet, 2) the wallet’s backend node parses the v3 transactions and sends the expected UTXO representation to the outdated wallet client, 3) the outdated wallet can still create v1/v2 transactions as it always does with the new UTXOs.
For these wallet systems to support PMv3, only the backend nodes/services need to be updated (which need to be updated anyways to stay in consensus with the rest of the Bitcoin Cash network). Even if end users do not upgrade their wallet application, their wallets will continue working as expected.
So in this case too, there’s no meaningful difference in backwards-compatibility between PMv3 and PMv3-lite.
If you’re interested, here’s a summary of my API review of all popular non-custodial wallets (please comment if I’ve missed one we should review):
BitPay
The BitPay app doesn’t parse inbound transactions itself – and doesn’t even identify in which format each UTXO was originally packaged – so we can assume that end-user wallet software will not require any updates to support receiving PMv3 payments. In the event of a PMv3 upgrade, only BitPay’s wallet API backend needs to be upgraded.
Bitcoin.com Wallet
(The Bitcoin.com wallet isn’t open source, so I used mitmproxy to inspect its iOS API.)
The wallet requests a list of all UTXOs for all wallets over a GRPC API (https://neko.api.wallet.bitcoin.com/com.bitcoin.mwallet.TxService/getUtxos). The response seems to contain a unique identifier, the address to which the UTXO paid (in legacy, non-cash address format), a value, and a few other fields. As far as I can tell, the wallet application itself has no knowledge of inbound transaction formats and should not require any updates to support receiving PMv3 payments.
Jaxx Liberty
(The Jaxx Liberty wallet also isn’t open source, so I used mitmproxy to inspect its iOS API.)
Jaxx Liberty also fetches both balance and transaction information via a JSON API. (GET https://generic.jaxx.io/bch/v2/balance/[LEGACY_ADDR],[LEGACY_ADDR],[...] and GET https://generic.jaxx.io/bch/v2/txs/[TXID]). The app also doesn’t appear to parse any inbound transactions, so it should not require any updates to support receiving PMv3 payments.
(The Exodus wallet also isn’t open source, so I used mitmproxy to inspect its iOS API.)
Exodus appears to use some version of the old Insight v1 API (POSTs to https://bitcoincashnode.a.exodus.io/insight/addrs/txs), which returns a JSON message with a list of decoded transactions (parsed server-side). So Exodus also doesn’t appear to do any wallet-side inbound transaction parsing.
Notably, Exodus backend APIs seem to label BCH as bcash, and BCH payments on Exodus feel slow because they aren’t even registered in the app until they have been mined (no indication of unconfirmed payments), so I’m doubting it’s very popular with BCH users today? Nevertheless, I don’t think Exodus apps will require any client-side updates to support receiving PMv3 payments.
Trezor (and other Blockbook-based wallets)
Trezor is easy to confirm: its open-source backend, Blockbook, provides UTXO information in JSON format. Software used by Trezor end-users has no knowledge of which transaction version contains the UTXOs they receive over the API. If the Blockbook instance/node is updated (which must happen for the instance to remain on the Bitcoin Cash network after every upgrade), downstream wallet software does not require any updates to support receiving PMv3 payments. (The same is true for any other wallets which depend on Blockbook.)
To wrap up, I think this review demonstrates how little transaction formats actually matter to most (server-based) wallets in the wild: in every case I’ve reviewed, the real upgrade work lies with nodes, backends, and other consensus-validating software.
A “v3” CashAddress version is probably a waste of effort
I initially thought it might be a good idea to designate upgraded CashAddress version byte(s) to indicate that a wallet is “v3 ready” and can accept and re-spend UTXOs which it receives in v3 transactions. However, as far as I know, the only wallet for which this would matter is Electron Cash. And as far as I can tell, even Electron Cash would still “fail safe” without a new address version: if an outdated version of Electron Cash received a v3 transaction it didn’t know how to process, it would display a transaction decoding error to the end-user (rather than doing something which risks losing money).
With that in mind, I think it’s also likely a waste of time and effort to deploy a new CashAddress version – that would cost everyone in the ecosystem some implementation time and add technical debt (create “legacy” CashAddresses and also burn two available CashAddress version bytes) without measurably improving any end-user experiences. Instead of the Electron Cash receiver, only the payer would see the error, but the outdated Electron Cash wallet must still be upgraded to stay on the BCH network (as was necessary with the DAA upgrade, for example).
Other than what PMv3 is already doing (by maintaining most of the existing v1/v2 format), I don’t think there’s anything more we can do to make a transaction-format upgrade easier. If PMv3 is not ready by May 2022, I think we should simply delay it – there is no “PMv3-lite” which could save us any time/effort in practice.
It would be slightly more convenient if transaction versions were already restricted by consensus – if that were the case, we could pick a number which has never been used to identify PMv3 transactions, and parsing code could switch to PMv3 parsing based on that version.
Unfortunately, unless we get transaction version restriction in an update before PMv3 (or any transaction format change), we can’t rely solely on a transaction’s contents to determine its version – a rogue miner can mine a block with many transactions claiming every unused version, forever breaking such naive parsing code on mainnet (not to mention testnets where it is already broken). And it’s probably not worth the effort to deploy that in November since it costs so little to foil the value of such an upgrade (a single block of rogue transactions).
While it would be nice to parse a transaction’s version from its contents alone, I think a lot of developers have significantly overvalued the idea due to poor naming. Another way to view this problem: v1/v2 transactions begin with a 4-byte claimed_version field. It is not – and has never been – a version field. A proper version field would control parsing and fail transactions which do not follow the version’s scheme.
It’s possible we’ll be able to later hack this claimed_version field to serve as a real version field, but we are in an adversarial environment (with competing sha256 chains); it is safer and easier for implementations to simply re-evaluate their internal naming choices: v1/v2 transactions do not have a trustworthy version field, the version can only be determined by get_version(block_height, claimed_version). Until the block_height of a new transaction format, all claimed_version values greater than 1 indicate a version of 2.
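The resolution rule above can be sketched as follows (a minimal illustration; the function shape and the post-upgrade activation height are my own assumptions, not part of any spec):

```python
# Hypothetical sketch: resolving the effective transaction version
# from context rather than trusting the claimed_version field.
# ACTIVATION_HEIGHT is a placeholder; no height has been chosen.
ACTIVATION_HEIGHT = 1_000_000

def get_version(block_height: int, claimed_version: int) -> int:
    if block_height < ACTIVATION_HEIGHT:
        # Pre-upgrade: all claimed_version values greater than 1
        # indicate a version of 2, as described above.
        return 1 if claimed_version == 1 else 2
    # Post-upgrade (speculative): 3 becomes meaningful; other
    # values still collapse to the legacy interpretations.
    if claimed_version == 3:
        return 3
    return 1 if claimed_version == 1 else 2
```

The point is simply that the version is a function of both the field and the block height, never of the field alone.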
As far as I can tell, a tagged/labeled transaction format doesn’t offer any benefit over v1/v2 or PMv3 transactions (and has serious downsides).
PMv3 is already as minimal as possible – kicking the integer format change to a future version would add technical debt and offer no additional backwards-compatibility.
PMv3 does not require most wallets to be updated; only consensus-validating nodes and SPV wallets are required to update, and a “v3” CashAddress version for PMv3 is probably a waste of effort.
We should just use 0x03 as the next transaction version value, despite bad historical choices and the existence of any v2 transactions with a “claimed_version” of 0x03.
Thank you very much to @tom for taking the time to review the PMv3 CHIP and for prompting this post, and sorry to everyone for the length! I’ll try to cut it down and get some of this rationale into the PMv3 CHIP itself.
Well, I never proposed FlexTrans to be reviewed, my entire point was that changing of the transaction format is a much bigger thing that should not be pushed into a cool change that is rather unrelated.
So thank you for writing down your thoughts and I certainly think they will be useful to increase the quality of a v4 version that we can start designing in a year or so.
The idea for you to review it now is interesting, but really only improves the quality of a breaking change to the transaction format, which will take longer to roll out. I thank you for that!
So far the focus is on a handful of wallets and the full nodes. But that is too little!
There are a LOT of companies that have their own software built on top of the protocol. From companies accepting payments in their website to companies walking the chain for their own purposes. And, yes, various wallets you actually listed later really do need to be changed. And we have several companies that are doing really interesting things and they built a new wallet for BCH too, for just their customers.
The ecosystem is considerably bigger than just a handful of popular wallets. It would be irresponsible to break them without giving ample notice. A breaking transaction change like this will need so many companies to upgrade their software (and maybe their users to upgrade too) that finalizing the proposal and the upgrade-date should be done and locked in probably a year before the actual activation. Those companies never read this forum and basically assume the basic protocol is not going to change very often. So getting them to notice this is indeed going to take that long. Sorry.
To avoid your excellent ideas for past-tx-introspection being delayed that long, it would be useful to make your chip as non-breaking as it can be.
I mean, the May 2021 upgrade is already going to be plenty big. Keep it simple, avoid people losing money.
The fewer parties required to make changes, the fewer stakeholders you have to engage with and convince to do the work. My suggestion to disconnect the two is really there to make sure your proposal actually has a much greater chance of getting activated. A much lower number of people to convince and fewer balls in the air.
I’m really impressed with the wall of text and your actual review of actual details. But it is also obvious you have not looked at the actual full node software in use. For instance:
The nature of parsing a transaction that is not tagged is that you simply read each item (input, output, sequence etc) until you run out of items. And if you have some space left after the message, some coder had to write layer-breaking code to even notice. The default is that it just works. So you come up with a solution without checking if there is a problem in the C++ code (or Java code etc).
Reviewing these changes would be immensely easier if you split them up for separate review. I love reviewing things, and spent some weeks on this. But most people do not have that time. And those people will find the issues much later (if at all) with a much more complex set of not related changes in one CHIP.
Some more items that stood out which I found while you were offline;
SigHash concepts (these determine how to calculate the transaction digest, called the preimage in your CHIP).
The chip states you want to remove ForkID. I suggest this to be a separate CHIP.
You add a new SigHash item for detached signatures, dropping output selection (SIGHASH_SINGLE, SIGHASH_ALL, SIGHASH_NONE). That seems wrong.
Numbers would be encoded in a transaction following a proprietary design. Please follow a standard multi-byte encoding, and preferably move this to a disconnected CHIP to make review easier.
The hashing of the detached proof uses a non-standard method: it includes the varint size bytes in front in the hash. This is inconsistent (we don’t include the block size or transaction size in their hashes), and there is no need for this deviation. Please fix.
And generally speaking, malleability vectors have been fixed on BCH. It’s good to not reintroduce new vectors, for sure.
But designs need to be minimal. The check we have today that all numbers are minimally encoded is an atomic solution that is neat and clean. A solution of including the size in the hash doesn’t add anything, as we already had a rule to avoid malleability there. Keep it simple.
SigHash changes/Fork ID handling - The details are explained in that issue, but PMv3 now uses the same Fork ID mechanism as other signature types. (And the spec for signing serialization has been clarified.)
Hi All, I’d like to present some alternatives for discussion.
The first avoids adding the new hash fields to the TX by changing the TXID preimage so the hash is computed on-the-fly from available data.
The second avoids changing the TXID preimage computation by introducing a data-carrier TX format and uses that to partition the actual TX across multiple merkle tree leaves, one leaf for each detached proof. Credit to imaginary_username, who arrived at the same concept independently.
As for the third, I was trying to build a covenant from the ground up, first by using just introspection, then realizing there’s a proving gap which a detached proof would bridge, so it all finally “clicked” with me. But there I also realized that the Group’s tokenID is a cryptographic witness which could be used for the same purpose, and it inspired an alternative, group-like solution to the covenant problem.
Alternatives to CHIP-2021-01-PMv3 Detached Proofs
In studying alternatives we will consider the below as the objective of PMv3 detached proofs:
By “compressing” the unlocking bytecode of a detached proof into this hash, child transactions can efficiently inspect this transaction by pushing this transaction’s TXID preimage, comparing the TXID preimage’s double-SHA256 hash to one of the child’s Outpoint Transaction Hashes, then manipulating the TXID preimage to verify required properties. By enabling child transactions to embed and thereby inspect their parent(s), complex cross-contract interactions can be developed.
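The inspection flow in that objective can be sketched outside of Script (Python stands in for the VM here; the function names are illustrative):

```python
import hashlib

def hash256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_embedded_parent(parent_txid_preimage: bytes,
                           outpoint_tx_hash: bytes) -> bool:
    # The child pushes its parent's TXID preimage, hashes it, and
    # compares the result to one of its own Outpoint Transaction
    # Hashes; on success, the preimage can then be picked apart to
    # verify required properties of the parent.
    return hash256(parent_txid_preimage) == outpoint_tx_hash
```

The detached-proof hash is what keeps that pushed preimage fixed-size regardless of how large the parent’s unlocking bytecode was.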
On-the-fly Proof Hash Computation
We can avoid additional TX fields by changing the TXID computation to hash the unlocking bytecode on-the-fly.
When building the TXID preimage we propose to use a hash of the unlocking bytecode instead of the unlocking bytecode.
It would still break unupgraded node-independent software, which would consider all new transactions invalid.
Avoids changing the TX format.
Permanently increases CPU costs for transaction verification, because every transaction would require one additional hash operation per input.
We could make the proofs even smaller, and save on the CPU by hashing all unlocking scripts at once instead of individually, so TXID computation would be:
TXID = HASH(HASH(entire tx, but unlocking scripts left out) + HASH(all unlocking scripts))
Then, any script later reconstructing the TXID only needs to include the hash of the unlocking scripts and can reconstruct the other parts to make proofs about locking script.
Avoids the per-input hash operations; just one additional hash per TX.
If we want to prove something about any input’s unlocking script, then all unlocking scripts must be “uncompressed” in the proof.
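The two-level hash above can be sketched like this (serialization details are illustrative, not a spec):

```python
import hashlib

def hash256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def compute_txid(tx_without_unlocking_scripts: bytes,
                 unlocking_scripts: list) -> bytes:
    # Hash all unlocking scripts at once...
    scripts_hash = hash256(b"".join(unlocking_scripts))
    # ...then commit to the remainder of the TX plus that hash:
    # TXID = HASH(HASH(tx minus unlocking scripts) + HASH(all scripts))
    return hash256(hash256(tx_without_unlocking_scripts) + scripts_hash)
```

A later proof then only needs the 32-byte `scripts_hash` (plus the skeleton) to reconstruct the TXID, which is exactly the “compression” trade-off discussed above.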
New Data Structure
The only way to avoid changing the TXID computation is to store the data outside the TX itself.
It should be stored somewhere inside the block where it can be part of the merkle tree.
A new TX format could be introduced just for the data:
Where consensus would enforce witness to be either:
Equal to a double SHA256 of byte concatenation of:
All TX input <TXID><index><satoshiAmount><pubKeyScript> fields in order, and
All outputs in order with their witness field left out.
Equal to any prevout’s witness.
The witness must be accessible from within Script VM, so we require introspection opcodes for accessing it:
OP_WITNESS to access the witness of the prevout being evaluated,
OP_UTXOWITNESS to access the witness of another input’s prevout,
OP_OUTPOINTWITNESS to access the witness of another output.
These primitives would let us create covenants where we could make fixed sized proofs about the covenant genesis TX.
Avoids changing the TX format
Avoids changing TXID computation
Avoids changing the block header
Adds 33 bytes of data to each UTXO part of the covenant graph.
Unlocking bytecode not included in the preimage, so can’t make proofs about it. Possible to resolve this by introducing a SIGHASH_WITNESSGEN flag, which would exclude the genesis output witness field from the signature preimage, so the witness preimage could include unlocking bytecode hash(es).
Does this proposal really need to change TX and TXID formats to achieve its main objective?
The one key feature of PMv3 is the cryptographic “commitment” scheme that would commit to the whole TX in a way structured to suit contract proving needs, where inputs’ unlocking bytecodes are first hashed individually before being appended to the TX commitment preimage, and the commitment is later accessible from the scope of child transactions.
That alone would enable the new kinds of contracts @bitjson hopes to build, and I will demonstrate below that it is possible to introduce the required commitment scheme without breaking the TX/TXID formats.
I think “unforgeable covenants” would be a good descriptor for those kinds of contracts.
My understanding of how such contracts would work finally caught up, and this working example demonstrates why such contracts are currently not possible: because the proof size would grow with each TX.
If having those kinds of contracts using fixed-size proofs is the objective, and I have already convinced myself that it’s a desirable objective, then all other PMv3 changes are nice-to-have, but not really necessary.
With that in mind, let’s focus on the main objective first.
The current proposal (v2.1.0 fbe89e0d) wants to change the TXID preimage format so the TXID can double as the required commitment.
It also changes the TX format, which hasn’t been changed since Bitcoin genesis.
This is not to say that we can’t talk about changing it, but I’ve been led to believe that changing it would be a big and unknown cost.
Jason already recognized the problem and he did well to analyze what node-dependent software he could.
The problem is not node software, or well known and open-source wallet software.
The problem is the unknowns.
Do we know what software out there relies on a trusted node to feed it verified TXes and then does something with those TXes assuming the format was set in stone at Bitcoin genesis?
A change to the TX format would break this assumption, and by breaking the assumption it would break all software that relied on the assumption.
How big of a job would it be to thoroughly investigate this, and then coordinate a timely update of that assumption, so non-node software doesn’t suddenly stop working when such a change is activated?
The TX/TXID format change should be thought of as a standalone feature, and it is really an optional dependency for this proposal.
Would changing the TX/TXID format solve an important enough problem that it would justify implementation costs?
“We do not break userspace!” – Linus Torvalds
Create a dedicated consensus-enforced commitment scheme independent of the TXID and tailored to contract proving needs. Introduce a new introspection opcode to access it:
OP_OUTPOINTALTHASH Pop the top item from the stack as an input index (Script Number). From that input, push the outpoint alternative transaction hash – the alternative hash of the transaction which created the Unspent Transaction Output (UTXO) which is being spent – to the stack in OP_HASH256 byte order.
The altHash would be generated by appending the hash of each input’s unlocking script to the altHash preimage, instead of the unlocking script itself. As with the TXID, the altHash doesn’t need to be stored anywhere in the “raw” block, but nodes must calculate it and extend their local db and cache to execute the new opcode efficiently.
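A sketch of that construction (the real v1/v2 serialization is simplified here; the only point is the substitution of each unlocking script by its hash):

```python
import hashlib

def hash256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def alt_hash(version: bytes, inputs, serialized_outputs: bytes,
             locktime: bytes) -> bytes:
    preimage = version
    for outpoint, unlocking_script, sequence in inputs:
        preimage += outpoint
        # The only difference from the TXID preimage: the hash of
        # the unlocking script is appended instead of the script.
        preimage += hash256(unlocking_script)
        preimage += sequence
    preimage += serialized_outputs + locktime
    return hash256(preimage)
```

Because each unlocking script collapses to 32 bytes, a child proving something about its parent via the altHash carries a fixed-size preimage regardless of how large the parent’s scripts were.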
This way neither the TX nor the TXID format would be broken. The solution contains the implementation costs to node developers, as opposed to burdening the whole ecosystem as a new TX/TXID format would.
This also solves a problem that comes with fields detached from the TXID preimage: that of banning TXes whose detached fields are missing or gibberish:
Note: node implementations must never blacklist TXIDs for non-presence of detached signatures/proof(s), but should instead ban peers which repeatedly transmit malformed transactions (such as transactions from which detached signatures/proofs have been dropped).
Prevout references would continue using the TXID, so proving something about the grandparent would rely on the TXID instead of the altTXID. However, because the inductive proof relies on the altTXID, the size of the proof should grow only with ancestor depth, and not with contract steps.
If the ability to prove the grandparent is needed, then I think we’re faced with two choices: break the TXID, or extend the inputs to use both the TXID and altTXID at the same time, where the altTXID could be optional and added using the PFX method described below.
We can prefix an input’s unlocking script with “detached” signatures for that input, and signature opcodes would optionally parse a stack item as an index into the input’s signature array:
[PFX_DS<N><signature0>[...<signatureN-1>]]<unlocking script>, where fields would be excluded from the SIGHASH preimage.
Note: PFX_DS would be a constant PreFiX byte to the script, read by consensus and its fields parsed and snipped off before passing the rest to the Script VM.
The distinction from the original proposal is that signatures are detached only from the input’s own unlocking script (which is the key feature needed to achieve the objective of detached signatures); they don’t need to be detached from the input itself.
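A sketch of the consensus-side splitting (the PFX_DS value and the per-signature length prefix are my assumptions, chosen only for illustration):

```python
PFX_DS = 0xFA  # hypothetical prefix byte; no value has been assigned

def split_detached_signatures(script: bytes):
    """Return (detached_signatures, remaining_unlocking_script)."""
    if not script or script[0] != PFX_DS:
        return [], script  # no prefix: pass the script through as-is
    count = script[1]
    pos = 2
    sigs = []
    for _ in range(count):
        sig_len = script[pos]  # assume a 1-byte length prefix per sig
        pos += 1
        sigs.append(script[pos:pos + sig_len])
        pos += sig_len
    # The fields are snipped off before the rest reaches the Script VM,
    # and they are excluded from the SIGHASH preimage.
    return sigs, script[pos:]
```

Scripts that don’t start with the prefix byte are untouched, which is what makes the scheme non-breaking for existing transactions.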
Ranged Script Numbers
The same arguments against TX format breakage hold here. An alternative would be to introduce opcodes to parse existing VarInts, which would solve the main problem RSN set out to solve, with costs and risks again contained to node software, as discussed here.
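For reference, the existing VarInt (CompactSize) encoding that such opcodes would parse looks like this:

```python
def read_varint(data: bytes, pos: int = 0):
    """Decode a Bitcoin CompactSize integer; return (value, new_pos)."""
    prefix = data[pos]
    if prefix < 0xFD:
        return prefix, pos + 1             # single-byte value
    if prefix == 0xFD:
        return int.from_bytes(data[pos + 1:pos + 3], "little"), pos + 3
    if prefix == 0xFE:
        return int.from_bytes(data[pos + 1:pos + 5], "little"), pos + 5
    return int.from_bytes(data[pos + 1:pos + 9], "little"), pos + 9
```

A varint-2-num opcode would essentially perform this decoding on a stack item inside the VM, without touching the transaction format.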
It can be done at network and local storage layers and doesn’t need to be implemented using consensus.
The TX format should be thought of as a standalone feature. When you really break it apart, PMv3 proposes all of this:
Update TX format.
Change TXID preimage, where it would break the assumption that TXID alone can be used to fully verify integrity of the TX.
Detached proof. Non-breaking alternative is possible by introducing an altTXID where it could be recorded alongside input prevout TXID references using the PFX approach.
Detached signature. Non-breaking alternative is possible with the PFX approach.
RSN, non-breaking alternative is possible with varint-2-num opcodes.
Compression. Non-breaking alternative is possible by compressing at network and local layers.
I think that groupID commitment primitive is a valid alternative to 3., even though it doesn’t work exactly the same way so contracts using groupID as the commitment primitive would look somewhat different but costs on everyone else should be smaller. I still have a lot of work to better illustrate the difference between the 2 approaches. Maybe we could combine the 2 approaches so groupID could be used as the altTXID while giving us savings when doing normal transfers where the “carried digest” feature of the Group proposal could shine.
As a follow-up to Spec Talk Held on 2022-02-01, I went ahead and created the 2 chips to capture those “big hitter” features of PMv3, but in a non-breaking and I think least-complex way. Just the tech description is written for now. RSN was dropped, and TX compression was sacrificed to keep it simple, but the path is open to enable deduplication by allowing cross-references and signature aggregation in some later upgrade.
Together with native tokens I call them “the big 3”, and under the “Deployment” section of each I marry them using the codename “PMv3+”.
General purpose induction covenants would be enabled through detached proofs, and group genesis is made simple again, but it still commits to one input ref and itself, so where it makes sense you can make some unforgeable contracts with a smaller proof.
There’s a nice synergistic benefit: by creating a token genesis you can preserve some TXID and carry it for later use in some contract’s proof by first unpacking groupID and then the TXID from its preimage.
The trickiest was “detached proofs”, where credit goes to imaginary_username for cracking it with the dual-API idea: old software would see the TX in compressed form, with a disabled opcode followed by 32 random bytes in place of the actual input unlocking script.
The breakthrough PFX method, for inserting new fields without messing with the Script or breaking old software, was proposed by Calin Culianu a long time ago while discussing the Group proposal.
I’m now convinced that the problem space is well-defined enough to identify two basic “token” primitives which safely extend Bitcoin Cash’s “model of computation” by allowing contracts to offer interfaces for use by other contracts. With these primitives, Bitcoin Cash can support decentralized applications comparable to Ethereum contract functionality, while retaining Bitcoin Cash’s >1000x efficiency advantage in transaction and block validation.
The specification implements these primitives using only opcodes/VM codepoints. (Much like @andrewstone and @bitcoincashautist’s above Group proposals!) As such, the new primitives can be deployed in an upgrade requiring action only from full node operators.
I’ve posted an initial draft specification here:
There’s also a short blog post to summarize the main ideas:
Because CashTokens could now enable more efficient, user-friendly decentralized prediction markets than PMv3, I’m withdrawing the PMv3 CHIP. I hope that other PMv3 benefits – comprehensive malleability protection, signature aggregation, and an upgrade path for fractional satoshis – will remain goals for future transaction format proposals.
I deeply appreciate everyone who has contributed to the current PMv3 specification (helping to clarify the ideas now in CashTokens!) – here and via other channels. I’m working on tooling for developing and auditing contracts employing CashTokens – if you’re interested in helping explore this direction of Bitcoin Cash covenant development, please join CashToken Devs on Telegram! (There’s also a quiet, announcement-only CashTokens channel.)
To anyone working on a future transaction format proposal: I now disagree with the core premise of “Ranged Script Numbers” – given the existence of transaction introspection VM operations, I think the VM should be developed to remain transaction-format agnostic. (I’ll write something more formal later about backwards-compatibility in the context of an introspection-capable VM.) I think any new primitives (if developed) should be introduced with a matching inspection operation, and future transaction formats can be developed independently from Bitcoin Cash’s contracting system, allowing such development to focus on important properties of a transaction format: byte-efficiency and decoding speeds.