Pre-release of a new CHIP-SubSatoshi

Right, good catch!

Yeah, then I’m definitely leaning towards an opcode that switches ‘mode’.

2 Likes

Tx Version now 5

On the advice of people here, I investigated the actual usage of versions on-chain. The result is this CSV file: CHIP-SubSatoshi/tx-versions.csv at master - bitcoincash/CHIP-SubSatoshi - Codeberg.org

In short: there is one version-3 transaction waaay back in 2015 and one v4 in 2016. More recently, someone added 859 version-4 transactions.

Geeking out detail: I wanted to get the data out in a fast way, so in true software developer fashion I took the opportunity to learn cpp2 (an experimental compile-to-cpp language) and write it there. Took me nearly 2 days to write this simple app.
It was massively faster than calling bitcoin-cli a lot, and it was fun to play with new tech…
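For anyone who wants to sanity-check the CSV without a dedicated tool: the version field is just the first four bytes of the serialized transaction, little-endian. A minimal Python sketch (the actual scan used the cpp2 app against raw block data; this is only an illustration):

```python
import struct

def tx_version(raw_tx_hex: str) -> int:
    """Return the version field of a serialized transaction:
    the first 4 bytes, a little-endian signed 32-bit integer."""
    raw = bytes.fromhex(raw_tx_hex)
    return struct.unpack_from("<i", raw, 0)[0]

# A classic version-1 transaction starts with bytes 01 00 00 00:
assert tx_version("01000000") == 1
# A hypothetical "version 10" transaction would start with 0a 00 00 00:
assert tx_version("0a000000") == 10
```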

Cheers!

3 Likes

No problem. Just do what Linus did in the kernel. Increase / skip the numbers by an order of magnitude.

Instead of Version v4 we get Version v40.

Sure, lack of continuity is not aesthetically pleasant, but it fixes any backward compatibility problems and removes 99% of confusion.

2 Likes

That actually makes a lot of sense, and is quite attractive. Maybe jump to ver 10…

2 Likes

Linus literally said he went full alpha male on it and took the shot.

So just take the shot.

1 Like

Oh, so someone added a bunch, care to update this list?

1 Like

There is a CSV in my git repo. (see OP for link)

3 Likes

I just had a couple of minutes, so I checked testnet4.

I’m surprised too, but there are ONLY version 2 transactions there.

3 Likes

Ver X, per Apple :grinning_face_with_smiling_eyes:
Or go retro Apple, and let's start naming versions after different cats!

1 Like

I did go for version 10. :wink:

In the meantime, some more insight around the opcode has made me revisit that topic. The main one was the observation that you really don’t want to allow a script to change behavior mid-calculation between 8 and 11 digits behind the dot.

Some more stuff was added too; please look at the merge request and check whether it makes sense.

1 Like

I really dislike having state, but I can’t think of a better way to do this, and requiring that it MUST be done at script initialization seems sensible.

This opcode, when needed, shall be present once in the script initialization. Script initialization is the start of the script: it can contain pushes and it can contain this new value-mode opcode. The script initialization ends when any other opcode is encountered by the interpreter.
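A sketch of how an interpreter could enforce that rule, treating the script as a flat list of opcode bytes (push payload bytes omitted for brevity; the opcode value 0xEF is a made-up placeholder, not an assigned codepoint):

```python
# Hypothetical opcode values, for illustration only.
OP_VALUE_MODE = 0xEF      # the proposed value-mode opcode (placeholder)
PUSH_MIN, PUSH_MAX = 0x01, 0x4E  # direct pushes and OP_PUSHDATA1/2/4

def is_push(op: int) -> bool:
    return PUSH_MIN <= op <= PUSH_MAX

def mode_opcode_valid(script: list[int]) -> bool:
    """OP_VALUE_MODE may appear at most once, and only inside the
    script initialization: the leading run of pushes. Any other
    opcode permanently ends the initialization."""
    in_init = True
    seen_mode = False
    for op in script:
        if op == OP_VALUE_MODE:
            if not in_init or seen_mode:
                return False     # mid-script or duplicate: reject
            seen_mode = True
        elif not is_push(op):
            in_init = False      # first real opcode ends initialization
    return True

# Push, mode switch, then normal opcodes: fine.
assert mode_opcode_valid([0x04, OP_VALUE_MODE, 0xAC])
# Mode switch after a non-push opcode: rejected.
assert not mode_opcode_valid([0xAC, OP_VALUE_MODE])
```

This keeps the check purely syntactic, so a node can validate placement without executing the script.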

Otherwise, the only non-stateful way I could see us doing this is having explicit distinct op-codes. E.g.

  • OP_UTXOVALUE_SUBSATS / OP_UTXOVALUE_BIGNUM
  • OP_OUTPUTVALUE_SUBSATS / OP_OUTPUTVALUE_BIGNUM

… which might turn into a bigger evil.

Regarding OP_BIGNUM_MODE, do we have any other situations (now or in future), where we might need state flags like this? Just thinking about whether something like ${bitFlags} OP_SETSTATE is worth considering or not.
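For illustration, a hypothetical `${bitFlags} OP_SETSTATE` could pack several independent mode switches into one pushed integer; every name and bit position below is invented for the sketch, not part of any proposal:

```python
# Invented flag layout for a single OP_SETSTATE-style opcode.
FLAG_SUBSAT_VALUES  = 1 << 0  # value opcodes return sub-satoshi units
FLAG_BIGNUM_ARITH   = 1 << 1  # arithmetic opcodes use big numbers
FLAG_HIRES_LOCKTIME = 1 << 2  # higher-precision locktime semantics

def decode_state(flags: int) -> dict[str, bool]:
    """Expand a pushed bit-flag integer into per-feature switches."""
    return {
        "subsat": bool(flags & FLAG_SUBSAT_VALUES),
        "bignum": bool(flags & FLAG_BIGNUM_ARITH),
        "hires_locktime": bool(flags & FLAG_HIRES_LOCKTIME),
    }

# A script pushing 0b101 would enable sub-sat values and
# high-resolution locktimes, but not bignum arithmetic.
state = decode_state(0b101)
assert state == {"subsat": True, "bignum": False, "hires_locktime": True}
```

The alternative raised below, an incremental VM version number, would replace the independent bits with a single monotonically increasing integer.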

1 Like

Some potential applications:

  • Switch the VM arithmetic mode to uint64, though here it would be good to be able to toggle during runtime, so IDK how it fits in
  • Upgrade CashTokens with new NFT capability (see here for some thoughts on upgradeability and here for some ideas on what other capabilities could be useful)
  • Switch to a whole new VM
  • Increase sequence/locktime precision and upgrade locktime opcodes to work around the Y2K38 issue
  • … really this way we can upgrade any opcode in a way that doesn’t break existing contracts.

Should it be flags or should it just be incremental VM versioning?

1 Like

This actually sounds a lot cleaner imo. We have versioned txs, and support for versioned scripts (a versioned VM) sounds like it would play along nicely with that. I think it might also be less complexity to maintain and work with?

Any notable down-sides?

1 Like

great list!

I would expect that at some point normal CPUs may go 128-bit, at which point it would be practically free to make the arithmetic bigger.
I also think that the only sane way to do that is for the script to request it. This is relevant because “OP_BIGNUM_MODE” may not be very forward-looking in that case :wink:

Some OP_EXE_VERSION may be interesting, setting an execution-environment version. Things like OP_CLTV would have been better done that way than via the tx-version mechanism, IMHO. So not too bad an idea. Needs more thought.

1 Like

Sorry if this was already answered; I could not re-read the full discussion. But in your CHIP you said:

It is fair to say that 1% of all coins will never be on a single input or output and as a result we could be mostly safe about adding an additional 2 digits for nearly free. It fits in the same 8 byte number, as long as no more than 1% of total issuable supply never ends up on a single UTXO.

So what happens if some large entity (exchange, bank, institution) controls 1% of BCH and wants to send all of it to a single address?

Just a warning message? Or some kind of malfunction of the network?

1 Like

The TX would be rejected by the hypothetical consensus rules; they’d have to split the balance across 2 UTXOs belonging to the address.

Well then this is very suboptimal.

1% of BCH being moved at once is an unlikely scenario, but still very possible to happen at least once in the future. 1% is only 210,000 BCH, this is definitely not impossible.

If this is the case, I would be against this CHIP and for another CHIP that does a simple “clean cut” increase, by increasing the 8-byte field size.

Such an upgrade would be implemented quicker, but set to activate X blocks (I am thinking 2-4 years) in the future, to give the ecosystem time to upgrade all software of course.

You copied a part from the “Discussion more digits” section.
That section is about a bigger discussion: maybe adding even more digits than the CHIP already does.

Please quote it in context. The scenario you stated is NOT part of the proposal.

It being further in the future makes sense; if you read on in the CHIP, my thinking is that this second-stage increase is possible and, with the caveat explained by BCA, may still be very useful. But likely several decades after this initial increase.

There is really nothing sub-optimal about limiting such a huge amount of funds on a single UTXO. Notice that a single UTXO is not a single address, or even equivalent to a single transaction. You can move all these funds in a single transaction using multiple inputs and multiple outputs even then. A minor annoyance, at best.

Frankly, if in several decades that 1% of all possible funds (of the 21M BCH) is not just collected by one entity, but also moved, then we likely have a mostly flawed ecosystem anyway. Don’t optimize for failure.

Either way, this was not the proposal. The proposal is a 3-digit increase because that is completely free from such limitations.
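The arithmetic behind that claim can be sanity-checked directly: with 3 extra digits the entire supply still fits in a signed 8-byte amount field, while 2 digits beyond that only fit under the 1%-per-UTXO assumption:

```python
MAX_SUPPLY_BCH = 21_000_000
SATS_PER_BCH = 10**8          # today: 8 digits behind the dot
INT64_MAX = 2**63 - 1         # signed 8-byte amount field

# Current format: full supply fits with lots of headroom.
assert MAX_SUPPLY_BCH * SATS_PER_BCH <= INT64_MAX

# Proposed 3-digit increase (11 digits behind the dot): the
# entire supply on a single output would still fit.
assert MAX_SUPPLY_BCH * SATS_PER_BCH * 10**3 <= INT64_MAX

# A further 2 digits (13 behind the dot) would overflow for the
# full supply, but still fit for 1% of it -- the scenario from
# the "Discussion more digits" section.
assert MAX_SUPPLY_BCH * SATS_PER_BCH * 10**5 > INT64_MAX
assert (MAX_SUPPLY_BCH // 100) * SATS_PER_BCH * 10**5 <= INT64_MAX
```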

2 Likes

Indeed, I misunderstood the CHIP. Lack of sleep due to the nightly BCH pump is probably the culprit.

We can indeed add 3 zeros without any downsides right now.

Sorry for causing ruckus.

So, for almost the last two years, I’ve been programming, using, and explaining contracts that will/would run into this same long tail sub-sat problem that bitcoin itself will hit in 2140.

My approach, or stance, was to deliberately ignore the sub-sat problem so as not to prejudice anyone toward a solution on the basis that it’s needed for ### BCH TLV of perpetuities. I didn’t want to assume a solution without a chip and force or influence a chip outside the process.

Overall, this opcode flag and version 10 idea seems great and simple. However, I have to point out that if this solution is adopted, and people start designing for sub-sat transactions, everyone will also start prejudging a myriad of things about sub-1-sat-per-byte fees.

The default fee problem, which might start rearing its head again any day now, is tightly coupled with what can be done with sub-sat values.

If application developers have access to sub-sat logic, they can quite easily design things that may need to spend much lower, sub-1-sat/byte fees far into the future (decades), where the default fee schedule or policy isn’t known or defined.


Also, in terms of immediate priorities, it’s quite conceivable that the fiat value of the existing default fees may increase five- or eighty-fold in a short time. The last time the fee schedule came up, BCH was ~$1,500. I personally think nickel-and-dime-size fees are a huge issue, and although a fee schedule is as hard as or harder than a blocksize schedule, I think it’d be better if we had a known sub-sat fee policy far into the future.


Overall I think the proposal looks fine, but I think it is a problem that is coupled with fees.

1 Like