Post Quantum Cryptography

I hope that in '25 we’ll be activating CHIP 2021-05 Targeted Virtual Machine Limits

With the 10k stack-item limit and removal of the opcode count limit, a Lamport implementation could fit in a single input. Quoting Moonsetler on X:

I think you could do a variant of Lamport specifically altered to fit BCH Script better, that is about 4 kB (plus change) and fits into the ops limit.

Lamport requires one-time keys, so using the contract’s address for receiving payments would be risky: there’s no way to ensure people won’t pay into it after it has been used.

However, thanks to CashTokens, we can work around that and have a constant receiving address - one that requires a particular NFT (held by a Lamport locking bytecode) to be spent as sibling:

<0> OP_UTXOTOKENCATEGORY                // token category of input 0 (the NFT held by the Lamport lock)
<16> OP_SPLIT OP_DROP                   // keep only the first 16 bytes of the 32-byte category ID
<half_of_your_NFT_categoryID> OP_EQUAL  // must match the first half of our NFT's category ID

Then only the owner would need to interact with the Lamport contract, whenever they want to spend from the pay-to-token address, and they could rotate the key on every spend.
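For intuition, here’s a minimal Python sketch of plain, textbook Lamport one-time signatures (not the size-optimized Script variant Moonsetler describes): the public key commits to 256 pairs of hashes, and signing reveals one preimage per message-digest bit, which is exactly why each key must only ever sign once.

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random 32-byte secrets: one pair per digest bit
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    # public key is the hash of every secret
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def bits_of(message: bytes):
    digest = H(message)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    # reveal one secret per bit -- this is why keys are strictly one-time
    return [sk[i][b] for i, b in enumerate(bits_of(message))]

def verify(pk, message: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits_of(message)))
```

Note the sizes: a signature here reveals 256 × 32 bytes = 8 KiB, which is why an in-script variant would need compression tricks (shorter chains, Winternitz-style encoding, etc.) to get near the quoted ~4 kB.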

According to Pierre-Luc (CEO of Pauli Group, focused on QC-hardening on Ethereum), we could see ECC broken by the end of this decade:

The sweet spot is to aim at machines that can break keys in hours/days at first. It takes more error correction (i.e. more physical-qubit overhead) for a calculation to last a whole year. The number of steps is fixed (~10^8) and going slow doesn’t necessarily help with resources.
Everyone is aiming at machines that can do about this number of steps by the end of the decade. Then there is still some distance to the machines that can do the 10^12 operations to break RSA, 2-3 years afterward.

If we are able to implement Lamport in Script, then we could secure our wealth until we find something better. Quoting Pierre-Luc again (responding to my idea of Lamport + pay-to-token):

Excellent, if the largest addresses add Lamport then already that’s most of the attack surface covered.
Yeah it makes sense to keep one address.
The BitVM folks are using the statefulness to create state machines too, it seems; there are a few neat tricks to play with hash-based signatures.


Another thing that would be interesting would be how many have their public key exposed.

If you ever do the parser, it would be good to have something like:

P2PKH-secret: 20%
P2PKH-revealed: 80%

That would be more complex to parse though. You would have to create a record of all revealed keys first. This would also apply to P2SH.

Also, some may be revealed but never included in the blockchain.
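The two-pass idea could be sketched like this (hypothetical data structures, and SHA-256 truncated to 20 bytes as a stand-in for HASH160 so the sketch runs anywhere): first record every public key revealed by a spend, then bucket unspent P2PKH value into secret vs revealed.

```python
import hashlib

def pkh(pubkey: bytes) -> bytes:
    # Stand-in for HASH160 (SHA-256 then RIPEMD-160): SHA-256 truncated to
    # 20 bytes, so this sketch runs without RIPEMD-160 support.
    return hashlib.sha256(pubkey).digest()[:20]

def classify_p2pkh(utxos, revealed_pubkeys):
    """utxos: iterable of (pubkey_hash, value) pairs for unspent P2PKH outputs.
    revealed_pubkeys: every public key observed in a spend (pass 1 of the parse)."""
    revealed = {pkh(pk) for pk in revealed_pubkeys}  # pass 1: record revealed keys
    totals = {"P2PKH-secret": 0, "P2PKH-revealed": 0}
    for pubkey_hash, value in utxos:                 # pass 2: bucket UTXO value
        bucket = "P2PKH-revealed" if pubkey_hash in revealed else "P2PKH-secret"
        totals[bucket] += value
    return totals
```

As noted, keys revealed off-chain (or in pre-signed but unbroadcast transactions) would still show up in the "secret" bucket, so this classification is a lower bound on exposure.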

Another question is how secure Armory outputs are. Even without re-use they may be breakable due to how they are generated.


I parsed the blockchain up until block 872000 and got the data on value:

[image: chart of value locked (TVL) by output type]

That’s great. I assume tvl is total value?

P2PKH overwhelms the others by value.

So, most of the coins would be potentially protected against quantum computers. The exceptions would be for ones that have key re-use.

Even normal P2SH (160-bit) would be moderately protected at 80-bit security. They would have lower than 128-bit security, but it would take real effort to break them one at a time.

The only ones that are definitely exposed are the P2PK outputs. They are likely mostly old, and maybe the private keys are lost.

Hopefully quantum computers won’t suddenly jump to being able to break cryptographic keys instantly. If people have notice, it is likely almost everything (where the private key isn’t lost) could be protected.


Yes

Yes, and if keys are exposed it likely means the keys are not lost. So, if need be, they could be moved to non-exposed P2PKH.

I’ll see if I can extract the exposed vs non-exposed info, too.

More than “moderately”. I was wondering about post-quantum preimage resistance of HASH160 addresses (both P2PKH and P2SH), too. Turns out, it would be physically possible but still infeasible to crack even 1 address:

We could design a black-box function to break both P2PKH and P2SH (and P2WSH, etc.) addresses in 2^80 single-threaded quantum computer cycles.
Assuming a clock speed on the scale of GHz, this would take about 10 million years.
Importantly, splitting the work and doing it in parallel is not as beneficial as with classical computers, because it offers only a quadratic speedup (Fluhrer, S., Reassessing Grover’s Algorithm).
In other words, doing the work in 1 year would require building 100 trillion quantum computers, because sqrt(100T) == 10M.
Therefore, we can say that breaking a 160-bit hash preimage is physically possible, because 10 million years is a finite amount of time and less than the age of the universe.
However, it is still infeasible.
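The arithmetic can be checked in a few lines. The exact figure depends on the assumed clock speed; with these assumptions it comes out around 4×10^7 years, the same order of magnitude as the rounded 10 million above.

```python
# Back-of-envelope check: a 160-bit preimage search needs ~2^80 sequential
# quantum cycles, and Grover-style parallelism gives only a quadratic speedup.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

cycles = 2 ** 80          # black-box attack cost on a 160-bit hash
clock_hz = 1e9            # assumed ~GHz quantum clock speed
years_single = cycles / clock_hz / SECONDS_PER_YEAR
# ~4e7 years with these assumptions; the post rounds to ~10 million (10^7).
print(f"single machine: ~{years_single:.1e} years")

# Quadratic speedup: finishing in 1 year takes (years of serial work)^2 machines.
machines_for_one_year = round(years_single) ** 2
# With the rounded 10^7 years, this is the ~100 trillion machines quoted above.
print(f"machines to finish in 1 year: ~{machines_for_one_year:.1e}")
```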


Some old keys have been recently observed moving on BTC, but still, Satoshi’s P2PK stash didn’t move, and it is still the biggest quantum-computing bounty in the world.

Yeah. There isn’t much that can be really done about it.

Anyone who knows where their keys are would likely have time before the break is possible.

Once/If quantum computers can trivially break the outputs, it would be impossible to distinguish the original owner from someone with a QC. It isn’t clear why anyone should get those outputs.

Before that it would be worth looking into having a system to handle hash protected outputs.

It would probably be some kind of on-chain two stage release system. Submit the digest of the insecure public key and one that works with a new algorithm. Once it is buried, the new public key can be used to spend the funds.
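A hypothetical Python sketch of that two-stage release (`COMMIT_DEPTH` is an illustrative burial requirement, not anything specified): stage one publishes only a digest binding the vulnerable EC key to a new post-quantum key; stage two reveals both keys, which must reproduce the buried digest.

```python
import hashlib

COMMIT_DEPTH = 100  # illustrative burial requirement, in blocks

def make_commit(ec_pubkey: bytes, pq_pubkey: bytes) -> bytes:
    # Stage 1: publish only a digest binding the vulnerable EC key to the
    # new post-quantum key; nothing useful is revealed yet.
    return hashlib.sha256(ec_pubkey + pq_pubkey).digest()

def can_spend(commit: bytes, commit_height: int, tip_height: int,
              ec_pubkey: bytes, pq_pubkey: bytes) -> bool:
    # Stage 2: once the commit is buried deep enough, the revealed keys must
    # reproduce the digest. An attacker who cracks ec_pubkey only after it is
    # revealed cannot rewrite the already-buried commitment.
    buried = tip_height - commit_height >= COMMIT_DEPTH
    return buried and hashlib.sha256(ec_pubkey + pq_pubkey).digest() == commit
```

A real deployment would also need rules for competing commits to the same key (e.g. first-buried wins), which this sketch leaves out.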


Sounds like a commit-delay-reveal scheme. I first saw it described here: Stewart, I., et al. (2018). “Committing to quantum resistance: A slow defence for Bitcoin against a fast quantum computing attack”

On BCH, we can implement that as a smart-contract, here’s the proof-of-concept: Quantum-resistant One-time-use Lock

I assume that requires moving all P2SH outputs to new outputs (with the smart contract) manually first to protect them?

In that case, they could just be moved to quantum resistant outputs.

If a quantum computer was suddenly able to break the elliptic curve signature algorithm, then there would need to be a soft fork to force all P2SH outputs to additionally require a commit transaction.

After the soft fork, you would have to send the commit first, because P2SH outputs would require both the normal signature and a reference to a commit transaction.

Edit:
Also, it would be a good idea if it could handle pre-signed transactions. If someone had a valid transaction in a safe, then the client would take that transaction and produce a valid commit transaction for sending.


Really good talk from the BTC guys at OP_NEXT. I’ll need to watch it again, but for me the key points were that the government is looking to be quantum-secure by 2030–2033, and somewhere in the range of 2026 to 2030 is probably ideal for Bitcoin.

This means that since we’ve already locked in 2025, we do need to get this onto the radar so that we can ship BCH quantum-resistance changes sometime in 2026–2028.

It’s also a point of community discussion as to the economic effect. For instance, is there a case to hard fork in a “burn” of all Satoshi + other vulnerable coins after an announced window to move to quantum secure? Or do we let a huge amount of value in the UTXO set get grabbed by quantum computers?

The 2025 VM Limits upgrade may also help us out significantly I guess in terms of opening the ability for more cryptographic defenses.

See this graph of how you need to prep & migrate coins (with some margin for error) BEFORE the attack becomes viable obviously.

Government timeline:


The PoC smart contract is only good for someone who’d want to preemptively move their funds to a quantum-resistant address. Once QCs become available, you’d want a fork to enable safely migrating funds from any stranded and vulnerable contracts (including any non-standard variations of P2PK, P2PKH, P2SH), where commit-delay-reveal could work to prove original ownership.

A caveat with commit-delay-reveal is that miners could collude to steal, making a 51% attack potentially more damaging (right now, 51% can only censor transactions or execute a double-spend fraud, but not outright steal someone else’s funds). But once signatures are broken there’s no way around it: you have to use your P2PKH / P2SH address as an aged hashlock. With a long enough aging requirement, I think this would not be a deal-breaker.

I don’t consider P2SH or P2PKH preimages to be breakable, so the consideration here is just the vulnerability of redeem scripts: they’d become vulnerable only after a vulnerable redeem script is exposed, the same way P2PKH becomes vulnerable only after exposing the committed key.

With P2SH, it is more complex, because some contracts may already be quantum-resistant (signatureless covenants etc.), and only some would be vulnerable to QCs. Having a soft fork require a commit for all of P2SH could inconvenience non-vulnerable contracts, or could even break them (if a contract requires a particular number of inputs & outputs and the commitment would be provided by an additional input or output).

Therefore, I think a complete solution would be to extend the TX format so that this commit-delay-reveal happens “outside” the legacy TX: it wouldn’t interfere with the functioning of existing contracts, and it would only be required for scripts that execute OP_CHECKSIG, OP_CHECKDATASIG, or OP_CHECKMULTISIG with elliptic-curve keys.

BTC Core has a speculative PR for Quantum resistant addresses as a soft fork: QuBit - P2QRH spending rules by cryptoquick · Pull Request #1670 · bitcoin/bips · GitHub

Of course it’s overengineered, yet another level of segregating the witness.
But anyway, nice doc, here’s the link to view the doc: https://github.com/bitcoin/bips/blob/e186b52cff5344c789bc5996de86697e62244323/bip-p2qrh.mediawiki

Are we in big trouble if this is true?
https://x.com/0xRacist/status/1866952585644576835

Smells like FUD, and conveniently there’s a bunch of new coins claiming quantum resistance.

Check the community note. This is an industry/insider joke.


Look guys, you need to not just follow what some companies or governments say.

Companies and universities have had this quantum computing “dream” for decades, and billions have gone into actual research for 40 years. The results are basically papers and announcements published at a rate that is really only about getting more funding.

Governments are a really big part of getting funding nowadays; they are, after all, experts in spending other people’s money.

And in the meantime we see no real progress, just the next billions burned on research and on building something to make it look good.

There is basically zero risk to old funds stored in P2PKH outputs, even if some quantum computer with an actually useful number of qubits becomes available tomorrow. So stop worrying and wasting money on protecting against something that literally doesn’t exist.

You have some reading up to do. It is evolving; it is not just about more research dollars. There is and will be a market. Things take time.
Google’s latest QC breakthrough

This is not to say it is close to being a threat, on that I would agree, but it also does not mean we should ignore it or assume:

That is not an assumption. A RIPEMD-160-hashed pubkey on chain is safe from quantum computers for quite some time yet.

(Also, as a PS: please be more careful about quoting actual statements, not ones that look like quotes but aren’t.)

Interesting, this is your response to my stating it is all about getting more funding. You might make the connection yourself: how Google is indeed getting work from NASA based on this, how the Department of Energy is stating it will buy contracts based on this. (That took me 5 minutes of research; follow the money is not that hard!)

So do your research, and again: you need to not just follow what some companies or governments say.