Brainstorming OP_BLOCKHASHMATURED

We should compare the computational cost against today’s most expensive opcode to verify that it isn’t too much.

It would be better to think of each block hash as 200 bytes, because the node stores each block’s hash as a key in a map pointing to a block-index value rather than storing the hashes alone. This map is important later.

A block hash is 32 bytes, and a block index entry, unless I added wrong, is 144 bytes. The block index is stored behind a pointer (8 more bytes), bringing the total to 184 bytes. There is also some data-structure overhead, but I am unsure how much, so to be conservative round it up to 200 bytes, which also makes the math easier. This works out to about 10.5 MB per year, or about 630 MB at your 60-year figure, not that this changes the conclusion at all.
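
Spelling the arithmetic out (assuming ~52,560 blocks per year, i.e. one block every ten minutes):

```
 32 B (hash key) + 144 B (block index) + 8 B (pointer) = 184 B ≈ 200 B per block
 52,560 blocks/year × 200 B ≈ 10.5 MB/year
 60 years × 10.5 MB/year ≈ 630 MB
```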

Equality comparison between hashes is an incredibly cheap computation: a simple byte-by-byte comparison until a mismatch is found. No worries there.
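
To make it concrete, this is roughly all that comparison amounts to (a minimal C++ sketch, not actual node code):

```cpp
#include <cstring>

// Hash equality is one memcmp over 32 bytes; it stops at the first
// mismatching byte, and two random hashes usually differ within a few.
bool SameHash(const unsigned char a[32], const unsigned char b[32]) {
    return std::memcmp(a, b, 32) == 0;
}
```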

Because the hashes are stored in a map, we do not have to check every hash in the map to see whether the hash we are looking for exists. Searching for a specific hash in the map is a constant-time operation on average: it does not take longer as the number of hashes in the map grows.
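
A minimal self-contained sketch of that lookup (type and function names are illustrative, not BCHN’s actual code):

```cpp
#include <array>
#include <cstdint>
#include <cstring>
#include <unordered_map>

using Hash256 = std::array<uint8_t, 32>;

// Block hashes are already uniformly distributed, so the first few bytes
// make a perfectly good hash-table key.
struct Hash256Hasher {
    size_t operator()(const Hash256& h) const {
        size_t key;
        std::memcpy(&key, h.data(), sizeof(key));
        return key;
    }
};

struct BlockIndexEntry {
    int height;
    // ... the rest of the ~144-byte block index ...
};

using BlockMap = std::unordered_map<Hash256, BlockIndexEntry, Hash256Hasher>;

// Average O(1): one bucket probe, regardless of how many blocks exist.
bool HashIsKnownAndMature(const BlockMap& index, const Hash256& hash, int tipHeight) {
    auto it = index.find(hash);
    return it != index.end() && it->second.height <= tipHeight - 100;
}
```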

The only possible issue with this opcode is that it requires locking the block map to check for the hash, but again, it would not be locked for long because the search is fast.
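
Reusing the types and helper from the sketch above, the critical section would be a single probe (the locking here is hypothetical; real nodes synchronize differently):

```cpp
#include <mutex>

std::mutex g_blockMapMutex;  // hypothetical name; guards the shared block map

bool LookupUnderLock(const BlockMap& index, const Hash256& hash, int tipHeight) {
    std::lock_guard<std::mutex> lock(g_blockMapMutex);   // held for one probe only
    return HashIsKnownAndMature(index, hash, tipHeight); // fast, so contention stays low
}
```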

This should be a far cheaper operation than running a signature check.

2 Likes

A better version of the opcode: pop two stack items, interpreting one as a height and the other as a block hash. If they match a mature block header, return 1, else return 0. This makes the opcode easier to implement and also lets the user learn a block’s height without having to unpack the coinbase TX. For older blocks, which do not encode the height in the coinbase, it would be impossible to tell the height at all with the hash-only version of the opcode.
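
A sketch of those semantics, assuming a hypothetical Chain view with O(1) access by height (an illustration, not a formal spec):

```cpp
#include <array>
#include <cstdint>
#include <vector>

using Hash256 = std::array<uint8_t, 32>;
constexpr int COINBASE_MATURITY = 100;

// Hypothetical view of the active chain with O(1) hash-at-height access.
struct Chain {
    std::vector<Hash256> hashesByHeight;  // index == block height
    int TipHeight() const { return static_cast<int>(hashesByHeight.size()) - 1; }
};

// Pop order per the proposal: one stack item is the height, the other the
// block hash. Returns the value the opcode would push: 1 (true) or 0 (false).
bool EvalBlockHashMatured(const Chain& chain, int64_t height, const Hash256& hash) {
    if (height < 0 || height > chain.TipHeight() - COINBASE_MATURITY)
        return false;  // out of range or not yet mature
    return chain.hashesByHeight[static_cast<size_t>(height)] == hash;  // one O(1) compare
}
```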

I think a better version would allow implementers to look it up either by height or by hash, and would also tell users the height; see above.

Oh, this is much better indeed.

I started thinking about an attack vector against the previous version of this opcode you proposed: create millions of transactions, each containing a few OP_BLOCKHASHMATURED XHASHX invocations, where each XHASHX is generated by taking an existing block hash and altering it so that it does not exist and takes the longest to process (making sure the search algorithm has to go through as many existing block hashes as possible just to determine that the hash does not exist).

Your updated version defeats this attack, because now the node only needs to check a single position at block height Y_BLOCKHEIGHT; going through the whole index is unnecessary.

OK, so scaling the earlier 34-bytes-per-hash estimate up to 200 bytes per entry, we now have 107,222,400 × (200 / 34) = 630,720,000 bytes.

Still acceptable by today’s computing standards; it can run on an average laptop. It does not make much of a difference.

While the discussion of cost is good to have, it skips directly to the ‘cons’ side of the equation. There is probably a bit of bikeshedding going on, as nobody is talking about the ‘pro’ side.

The gains have at best been explained only in terse form. What new usecases are we missing out on if this isn’t added?

I’m a dev, but I do know the meaning of “devs just like to dev”, and while the execution cost may be low, the cost of any hard-fork change is not supposed to be just about that…

I asked this before and didn’t get an answer back then either.

When a new idea is posted, the cons should always be discussed first, because there is little point in debating whether something should be done before considering whether it is feasible. A lot of ideas die in the evaluation of feasibility. Once feasibility has been established, the arguments for why it should be done have their time to shine.

Collect a bunch of ideas, throw away the ones that are not possible, re-evaluate the list, throw out the ideas that could be done but serve no purpose; the remainder are possible changes to make, based on ecosystem support.

I’d rather limit the time spent on ideas to the ones that we actually deem to be needed for some purpose other than looking good to some devs.

Remember op_reverse? Let’s not do that again.

You mean OP_REVERSEBYTES? I was not there, but I was glad to find it once I started dabbling with Script and found a use for it: parsing a block header pushed to the stack and calculating chainwork from the compressed target. Part of the Script required converting BE to LE script numbers, and the opcode came in handy there.
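
For the curious, the work per header is 2^256 / (target + 1), with the target unpacked from the compact nBits field. A rough C++ illustration (doubles suffice here; real chainwork accounting uses 256-bit integers):

```cpp
#include <cmath>
#include <cstdint>

// Unpack the compact "nBits" target and approximate the header's work.
// Illustration only, not consensus code.
double ApproxWorkFromNBits(uint32_t nBits) {
    const int exponent = nBits >> 24;              // size byte
    const uint32_t mantissa = nBits & 0x007fffff;  // 23-bit mantissa
    // target = mantissa * 256^(exponent - 3)
    const double target = std::ldexp(static_cast<double>(mantissa), 8 * (exponent - 3));
    return std::ldexp(1.0, 256) / (target + 1.0);  // work = 2^256 / (target + 1)
}
```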

Recent talk about tokenizing AnyHedge contracts prompted this. Imagine you want to create a DAO that pools together tokenized AnyHedge contracts. The DAO needs to be able to verify, using nothing but Script and the local TX context, that some NFT satisfies this:

  • Has rights to payout of a particular AnyHedge UTXO with such and such contract params;
  • The payout address is a pay-to-NFT;
  • There’s only 1 NFT of the target category.

As it is, we can set this up only if the NFT interacts with the funding TX, and it can interact in one of two ways:

  • spend some public covenant’s NFT UTXO as an extra input so it can attest to the funding TX and emit an NFT as an extra output, with an attestation of the TX encoded in the commitment;
  • have the funding TX set a covenant on output 0, with the AnyHedge contract UTXO at some other index. The covenant would require creation of an NFT of a new category in the next TX, and the proof of the funding TX’s existence would then be compressed into the categoryID of the NFT.

Could we tokenize AnyHedge without changing the funding TX structure, by simply setting the payout address(es) to some pay-to-token P2SH? Sure we could, but then the target token couldn’t make proofs to other contracts, and they couldn’t verify it for acceptance into some pool. This method of tokenization would only be usable for listing contracts on the secondary market as a means of early exit; the contracts wouldn’t be composable.

If we had something like the opcode above, then we’d have a way of tokenizing a proof of existence of any mature TX, and do so non-interactively. Simply have some public NFT covenant require the user to push the whole SPV proof as input script data; the contract would verify that it matches a mature block hash, after which it would emit an attestation NFT with the TXID encoded in the commitment.
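
The verification step itself would be the standard SPV Merkle fold. A generic sketch in C++, with the hash function injected as an assumption (note that Script has no loops, so the variable proof length is a real obstacle, as the objection below points out):

```cpp
#include <array>
#include <cstdint>
#include <functional>
#include <vector>

using Hash256 = std::array<uint8_t, 32>;

// Double-SHA256 of the 64-byte concatenation of two nodes; any SHA-256
// library can supply this, so it's injected rather than implemented here.
using HashPair = std::function<Hash256(const Hash256&, const Hash256&)>;

// Fold a Merkle branch from the TX's hash up to the root. branchOnLeft[i]
// says whether branch[i] is the left sibling at level i.
Hash256 FoldMerkleBranch(Hash256 node, const std::vector<Hash256>& branch,
                         const std::vector<bool>& branchOnLeft,
                         const HashPair& hashPair) {
    for (size_t i = 0; i < branch.size(); ++i)
        node = branchOnLeft[i] ? hashPair(branch[i], node)
                               : hashPair(node, branch[i]);
    return node;
}
// The covenant would compare the computed root against the header's Merkle
// root field, then hand the header's hash (and height) to the new opcode.
```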

Apart from that reaching an insane size, this isn’t exactly a usecase.

Side-note: I’m a bit vague on the technicals of your short version, but I don’t think it is possible to do what you said, because Merkle proofs have a variable number of hashes. The only way to do that without loops is to have the details already available at the time of building the locking script, and then there is no point in including the proof at all.

Anyway, the point of me asking for a “usecase” is to avoid this going over the heads of 99% of the people.

I’m guessing this would go something like:

A user could already sell their contract to a third party today, without breaking the terms with the second party, by using tokens.
What this opcode adds is the ability to make derivative markets on-chain.

Personally I think the whole financial system is severely broken, and ever more complex ways of trading are OK to leave behind. The financial system, with all its derivatives and other fancy instruments, extracts more value out of normal people’s hands than any other system in existence. Why on earth would we want to invite those leeches onto cryptocurrencies? Let them do it via some centralized system. I’m sure they eventually will, and they’ll likely prefer it over anything DeFi we build.

Keep it simple. AnyHedge (specifically its selling-of-risk contract) is probably the best idea that actually helps normal people. Adding more layers on top does not have my vote.

@bitcoincashautist

Can you think of some use case that would be useful for common people right now?

It may give us all a better idea and jump-start our imagination about the potential usability of such an opcode.

1 Like

Same as in Viability of Native Introspection of Mature Block Headers and Coinbase Transactions, but with a simpler opcode; updated list:

  • non-interactively “introspecting” any transaction older than 100 blocks (SPV method);
  • a source of entropy / randomness;
  • easier and more secure implementation of a chainwork oracle, which could be used for truly decentralized AnyHedge betting on BCH / BCH_chainwork;
  • non-interactive tokenization of AnyHedge contracts (as opposed to interactive, i.e. needing to make the tokenizer contract part of the funding TX).

Why would it be an insane size? To generate the NFT you’d push up to 520 bytes (raw TX) + some hashes + up to 520 bytes of redeem script that validates the commitment set on the new NFT. That would be done only once per TX of interest. The NFT would be P2PKH afterwards, free to move without having to replicate the validation script, and another covenant could enable cloning the NFT and its proof with much less code. The initial proof would have to be made only once, and the NFT could be copied as needed to plug the compact proof into any number of contracts that need it.
Other contracts could then just inspect the NFT’s categoryID and commitment and be assured that the TXID exists, simply because an NFT of that category exists. That could be used to switch some logic (like a contract verifying that some pre-signed TX has indeed been mined), or it could be used to decompress the whole TX in order to inspect parts of it (which again requires pushing the whole TX to the stack, but without having to re-verify it against the header; that part was already done by an ancestor TX).
Here again you can take the prove-once, use-many-times approach: a contract can verify some template of the examined TX and emit compact proofs as NFTs, where contracts that need the proof could simply trust the NFT’s claim without having to inspect the TX in question themselves.
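
To make the prove-once, use-many-times shape concrete, a consumer contract’s check would reduce to two field comparisons (a sketch with a hypothetical field layout; the 32-byte TXID fits in the up-to-40-byte commitment):

```cpp
#include <array>
#include <cstdint>

using Hash256 = std::array<uint8_t, 32>;

// Hypothetical view of the attestation NFT: categoryID identifies the prover
// covenant, commitment carries the proven TXID.
struct AttestationNFT {
    Hash256 categoryID;
    Hash256 commitment;
};

// A consumer contract never redoes the SPV proof; it only checks that the NFT
// comes from the trusted prover category and commits to the expected TXID.
bool AcceptsProof(const AttestationNFT& nft, const Hash256& proverCategory,
                  const Hash256& expectedTxid) {
    return nft.categoryID == proverCategory && nft.commitment == expectedTxid;
}
```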

No, it’s an outline of a generic procedure. I have ideas for some use-cases listed above, but it could enable more if people get creative.

I want to tokenize AnyHedge so I can exit a hedge position early by trading it for BCH. It’s possible to tokenize it now, but it would require a different setup of the funding TX.
With the opcode, I could tokenize current BCH_BULL contracts all by myself, without having to depend on them changing the funding TX structure.

1 Like

Thanks, this is promising.

AnyHedge has already been proven to be a powerful monster that could propel BCH to infinity, so this is “good enough” for me.

1 Like

I agree.

However, AnyHedge has clearly shown that what they are doing has massive merit and a huge upside for BCH as a whole.

They can remove volatility risk from merchants, as if it never existed. And at the same time it joins merchants and speculators under a common goal. It’s almost like magic.

I think if this proposal makes it easier for services like AnyHedge to achieve success and continue what they are doing, this is a path worth following.

That ‘however’ states disagreement. There is no disagreement.

Read my closing statement again:

To split that into easy-to-digest facts:

AnyHedge today works. I love it, I support it fully. Always have. No new opcodes are needed (we already activated those!).

AnyHedge could become tradable on-chain, allowing people to sell a contract that sold their risk. This is possible today using NFTs, no new opcodes needed. Source: BCA above my post.

However, what this opcode would possibly add is betting on, and selling of, contracts which sold your risk.

So, sure, AnyHedge is great. But that is not what we are talking about. It’s like pushing through censorship laws to protect the children: you can’t disagree without seeming to want to harm children. Please, my IQ is above 100…
So I ask, do you really need to add all those layers on top, on-chain?

I also stand by my earlier point that traders and financial people will absolutely build such things anyway, since BCH allows permissionless innovation. Most probably centralized. And arguably that will work better and cheaper than anything on-chain. So any argument that saying NO to this is blocking someone or something is plainly false.

Come up with a better usecase and I might change my mind. I will not, however, support anything that is there purely to reinvent the worst parts of our current society.

To @bitcoincashautist: usecases are not some technical statement like “source of entropy”; that is not a usecase. Usecases involve an actual person, their ‘wants’, and how they interact with the system to get what they want.

The advantage of that is that when you say “we can do a lottery using this and a source of entropy”, we can talk about it: I point to Satoshi’s Dice and you may need to explain why that isn’t good enough. And we can actually talk about this without it being academic, high-level BS that nobody else can chime in on.
Usecases are essential here; otherwise we are just devs that like to dev.

2 Likes

Well, you do have a strong argument here.

I agree that keeping BCH simple P2P cash above all is the top priority; it always has been.

Maybe we could wait for some stakeholders to actually present a case for how such an addition to the protocol would be beneficial to many users?

We should not “dev just to dev”; that will certainly lead to a disaster like SegWit or the Lightning Network one day.

Code should solve common problems of everyday users, of people, like Bitcoin Cash solves the problem of separation of money from state.

I think this violates the unspoken but strict no-re-evaluation policy for opcodes. The value this opcode produces changes depending on blockchain state, specifically across reorgs.

That means every transaction using this, or descended from a transaction using this, has to be re-evaluated continuously. That’s no bueno for scaling.

Or have I missed something?

3 Likes

You’re correct, but you did miss that we already have a class of outputs that “violate” it: coinbase outputs and their descendants.
This is why we have the coinbase maturity rule: they’re unspendable before they mature, so their validity can’t really change, as that would require a 100-deep reorg.

This is why the idea here is to push / verify only matured headers. If a header is not yet mature, the opcode would fail and the TX would be invalid. Once the TX becomes valid, it does so because the header matured, and flipping back to invalid would require a 100-deep reorg.

Some of the functionality is possible in a roundabout way, by the way: convince some hash-rate to include a covenant as a coinbase output, one that requires a CashTokens genesis and allows anyone to clone NFTs. Each such NFT would have the categoryID of a coinbase TX, and “unpacking” the hash would reveal the coinbase TX. From there, the user could extract the height, add the other Merkle node hash, and verify it against a header produced by a chain of covenants. This could pin the height, but I think it’d still allow alternative covenant states by replaying the same coinbase TX on an alternative chain, so we still can’t 100% prove a header to some contract.

Side-note: this high-level idea would allow Drivechain-like hash-rate voting:

Hash-rate would opt in by using a simple one-time vote covenant (OTVC) address that requires a bridge DAO input as a sibling.

The bridge DAO could then burn and tally these one-time vote covenants and accumulate them in an NFT’s commitment. By “unpacking” the TXID of the vote covenant’s input, the DAO can verify the vote is indeed coming from a coinbase TX, and extract the height from the input to verify it belongs in the voting window.

I guess so. Same order-of-magnitude issue as checking descendants of coinbase outputs.

Flipping back to immature requires “only” a reorg of m blocks replacing n, where m < n, right? Not 100. For example, a block right at the maturity threshold becomes immature again if the top two blocks are replaced by a single higher-work block. I don’t know how, or how efficiently, nodes handle that.

It’s a nice idea. It would be nice to have for a very slow-reacting chainwork oracle, which could potentially be incredibly valuable.

2 Likes

Indeed, I have encountered the same problem when designing OP_LOTTERY [temp name].

To prevent infinite re-evaluation due to reorgs, the 100-block maturity rule has to be used there as well.