Pre-release of a new CHIP-SubSatoshi

Sorry if this is already answered, I could not re-read the full discussion, but in your CHIP you said

It is fair to say that 1% of all coins will never be on a single input or output and as a result we could be mostly safe about adding an additional 2 digits for nearly free. It fits in the same 8 byte number, as long as no more than 1% of total issuable supply never ends up on a single UTXO.

So what happens if some large entity (exchange, bank, institution) controls 1% of BCH and wants to send all of it to a single address?

Just a warning message? Or some kind of malfunction of the network?

1 Like

TX would be rejected by the hypothetical consensus rules, they’d have to split the balance to 2 UTXOs belonging to the address.

Well then this is very suboptimal.

1% of BCH being moved at once is an unlikely scenario, but still very possible to happen at least once in the future. 1% is only 210,000 BCH, this is definitely not impossible.
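For concreteness, here's a quick back-of-the-envelope check (a Python sketch, assuming a signed 8-byte amount field and the 21M BCH cap) of the headroom the quoted caveat is based on:

```python
# Headroom check for the int64 amount field (assumptions: 8-byte signed
# integer, 21,000,000 BCH total supply).
INT64_MAX = 2**63 - 1            # 9,223,372,036,854,775,807
TOTAL_SATS = 21_000_000 * 10**8  # 2.1e15 sats of total supply

# 3 extra digits (millisats): the entire supply still fits in one output.
assert TOTAL_SATS * 10**3 < INT64_MAX

# 5 extra digits (two more on top of that): the full supply no longer
# fits, but 1% of it (210,000 BCH) does -- hence the single-UTXO caveat.
assert TOTAL_SATS * 10**5 > INT64_MAX
assert (TOTAL_SATS // 100) * 10**5 < INT64_MAX
```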

If this is the case, I would be against the CHIP and for another CHIP that does a simple “clean cut” increase, with increasing the 8 byte field size.

Such an upgrade would be implemented quicker, but set to activate X blocks (I am thinking 2-4 years) in the future, to give the ecosystem time to upgrade all software of course.

You copied a part from the “Discussion more digits” section.
This is about a bigger discussion and maybe adding more digits than the CHIP already does.

Please quote it in context. The scenario you stated is NOT part of the proposal.

It being further in the future makes sense, if you read on in the CHIP my thinking is that this second-stage increase is possible and with the caveat explained by BCA, it may still be very useful. But likely several decades after this initial increase.

There is really nothing sub-optimal about this limitation on holding such a huge amount of funds in a single UTXO. Notice that a single UTXO is not a single address, or even equivalent to a single transaction. You can move all these funds in a single transaction using multiple inputs and multiple outputs even then. A minor annoyance, at best.

Frankly, if in several decades those 1% of all possible funds (of the 21M BCH) are not just collected by one entity, but also moved, then we likely have a mostly flawed ecosystem anyway. Don’t optimize for failure.

Either way, this was not the proposal. The proposal is a 3 digit increase because that is completely free from such limitations.

2 Likes

Indeed, I misunderstood the CHIP. Lack of sleep due to the nightly BCH pump is probably the culprit.

We can indeed add 3 zeros without any downsides right now.

Sorry for causing ruckus.

So, for almost the last two years, I’ve been programming, using, and explaining contracts that will/would run into this same long tail sub-sat problem that bitcoin itself will hit in 2140.

My approach, or stance, was to deliberately ignore the sub-sat problem so as not to prejudice anyone toward a solution on the basis that it’s needed for ### BCH TVL of perpetuities. I didn’t want to assume a solution without a CHIP and force or influence a CHIP outside the process.

Overall, this op_code flag and version 10 idea seems great and simple. However, I have to point out that if this solution is adopted, and people start designing for sub-sat transactions, everyone will also start making assumptions about a myriad of things related to sub-1-sat-per-byte fees.

The default fee problem, which might start rearing its head again any day now, is tightly coupled with what can be done with sub-sat values.

If application developers have access to sub-sat logic, they can quite easily design things that may need to spend much lower sub 1 sat/byte fees, far into the future (decades), where the default fee schedule or policy isn’t known or defined.


Also, in terms of immediate priorities, it’s quite conceivable that the fiat value of the existing default fees may increase five- or eighty-fold in a short time. The last time the fee schedule came up, BCH was ~$1,500. I personally think nickel-and-dime-sized fees are a huge issue, and although a fee schedule is as hard or harder than a blocksize schedule, I think it’d be better if we had a known sub-sat fee policy far into the future.


Overall I think the proposal looks fine, but I think it is a problem that is coupled with fees.

1 Like

I hear what you are saying, and naturally the fee market is relevant here indeed, as fees may go down. Notice that fees today are per 1000 bytes, so we already have three orders of magnitude of flexibility before any sub-sats, which would add another three.

Based on the fact that fees are not actually consensus, but simply a default born of the total lack of tooling and algorithms, I expect fee levels are a separate problem: essentially, one of making the fee market a free market, which solves the issue you talk about.

It’s my impression that a totally free market would lead to weather data and the like—gigablocks of junk instantly. A free market above some floor that ratchets down over decades might protect us from that. Otherwise, it would only take one miner to ruin The Commons, I feel there does have to be some future logic or agreement about it amongst them.

I don’t want to divert the discussion into a completely different topic. And consensus or network level rules aren’t really in my wheelhouse (you’ve certainly thought about it a lot more). We certainly can’t solve the future fees issue here.

But since miners have a gentlemen’s agreement about the fee threshold… I propose we say:

Application protocol developers should agree, in the absence of a long term fee schedule, that:

Long lived contracts should assume a constant fee threshold forever, even though that will probably not be the case. And it’s best practice in such cases to have a pathway, in script, to clean any outputs that cannot pay a 1 sat/byte fee.

Prepaying today’s fee far into the future will lead to more incentive for miners to maintain the network. And sub-sat fees are an issue REALLY far away from being necessary.

As long as application protocol developers can behave responsibly, things will be okay with sub-sat logic. And if an app breaks the fee rule and goes off the reservation, it’s on the developer if their app bricks their customer’s coins in 10-20 years.

It would be an error to think that the opposite of a centrally controlled market is one that has no boundaries or limits.

Instead, a free market still has boundaries and limits, the easiest are supply and demand. Also, in BitcoinCash there are various others that will have an effect on what a miner will want to mine.

I do agree that that is getting off topic :wink:

I want to chime in on this CHIP since it came up yesterday on the BeCash and Chill Twitter space.

Overall, this is a good idea imo

First off, I’m in full support of this CHIP being targeted for the 2025 upgrade cycle (Nov 2024 lock-in). I don’t really see any reason NOT to introduce sub-satoshis, especially if it is at negligible cost as this CHIP suggests. Simply using the rest of the byte space we have available to us already seems incredibly sensible; perhaps Satoshi thought ahead in this regard?

I see major benefits to enabling sub-satoshis (going forward I will call these “millisats,” explained below):

  1. As noted in the CHIP, if all bitcoins were distributed evenly across the human population, that only leaves ~262,500 sats per person. In this scenario, the value of a single satoshi would likely be higher than many common goods and services. Having 262,500,000 millisats per person in circulation instead will allow significantly more robust pricing for the world’s economic activity.

  2. In addition to #1, millisats would better facilitate on-chain micropayments. I anticipate that in a future bitcoinized economy, there will be lots of value that can be derived and earned from an emergent “micro-economy.” I would expect such a “micro-economy” to function in some parts similarly to the way platforms like Twitter and Facebook pay their users a portion of ad revenue generated by their audience’s engagement. In other words, in a proper p2p, decentralized economy, many new forms of income generation become possible by realizing multiple/high-volume microtransaction streams.

May 2025 rationale

I also agree with the rationale for deploying in May 2025: there’s (ostensibly) negligible cost, reasonable benefit even today, and the costs increase as the value of BCH rises due to the millisat->sat conversion issue.

My two millisats on the implementation details

As far as implementation, based on the discussion here, a dedicated OP_BIGNUM_MODE seems to be a better solution than a new tx version. However, versioning the VM itself also seems like a potentially good idea - probably better than some ${flags} OP_SETSTATE. In this case I’d imagine an opcode like ${version} OP_ENV, and then the protocol would enforce VM behavior for each version. This would allow all scripts deployed previously to some upgrade to be guaranteed to have predictable behavior, as it would explicitly set its own execution context in the locking bytecode. The use cases proposed by bchautist seem like reasonable rationale for this approach.

The only downside to the OP_ENV approach that I can think of is that it may increase the computational costs of transaction validation… but in practice, we can benchmark this. A conditional like this should be cheap, I think… but I don’t actually know for sure :slight_smile:

Regardless of which opcode we end up going with, it seems sensible to reject any transaction that uses the opcode more than once, or outside of script initialization, in order to ensure the script behaves predictably.
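To make the “once, at script initialization” rule concrete, here is a toy Python sketch; this is not a real VM, and `OP_ENV` and its semantics are hypothetical, but it shows how a validator might pin the execution context up front and reject any later or repeated use:

```python
# Toy sketch of a hypothetical `${version} OP_ENV` rule: the opcode is
# only valid as the very first operation, which also rules out using it
# more than once, keeping script behavior predictable.
def run_script(ops):
    vm_version = 1          # default: legacy semantics
    for i, (op, arg) in enumerate(ops):
        if op == "OP_ENV":
            if i != 0:      # only allowed as script initialization
                raise ValueError("OP_ENV outside script initialization")
            vm_version = arg
        # ... dispatch remaining opcodes against `vm_version` semantics ...
    return vm_version

assert run_script([("OP_ENV", 10), ("OP_1", None)]) == 10  # context pinned
assert run_script([("OP_1", None)]) == 1                   # legacy default
```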

On “millisats” and the Sats Standard (Ꞩ)

During the BeCash and Chill space yesterday, we came up with the idea of standardizing “sats” (Ꞩ) nomenclature like so: https://twitter.com/kzKallisti/status/1765837741177155848

'Sats Standard (Ꞩ)' denominations from 'millisat' to 'petasat'

Sats Standard (Ꞩ):

1 sat = 0.00000001 BCH
1 BCH = 100,000,000 sats


1 millisat = 0.001 sats (sub-satoshis)
1 sat = 1 Ꞩ

1000 sats = 1 Kilosat
1000 Kilosats = 1 Megasat = 1,000,000 sats

100 Megasats = 1 BCH = 100,000,000 sats

1000 Megasats = 1 Gigasat = 10 BCH
1000 Gigasats = 1 Terasat = 10,000 BCH
1000 Terasats = 1 Petasat = 10,000,000 BCH

2.1 Petasats = 2,100 Terasats = 21 million BCH = 2.1 billion Megasats
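A quick consistency check of the unit table (note that 2.1 petasats works out to 2,100 terasats):

```python
# "Sats Standard" units, expressed in sats (1 BCH = 10**8 sats).
SAT = 1
KILOSAT = 1000 * SAT
MEGASAT = 1000 * KILOSAT
GIGASAT = 1000 * MEGASAT
TERASAT = 1000 * GIGASAT
PETASAT = 1000 * TERASAT
BCH = 100 * MEGASAT              # 100,000,000 sats

assert GIGASAT == 10 * BCH
assert TERASAT == 10_000 * BCH
assert PETASAT == 10_000_000 * BCH
assert 21_000_000 * BCH == 2_100 * TERASAT   # total supply = 2.1 petasats
```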

Going forward, I will likely start using these denominations myself, even prior to the implementation of millisats.

The Fee Problem

While nodes currently will not relay any transaction with a fee less than 1000 sats/kB (1 sat/byte), this parameter is trivial to change on the node level, and miners can directly accept transactions with any fee rate they wish.

If social consensus agrees that fees on BCH should not exceed something like $0.10 (2024 USD), it’s inevitable that 1 sat/byte will not be able to deliver that promise. 1 sat equaling 1 USD implies a price of $100,000,000 USD/BCH. At that price, the dust limit makes $546 USD the smallest transaction possible, and the fee to send it at 1 sat/byte would be ~$219 USD.

Measuring fees in millisats is the only way to alleviate this. At 1 millisat per byte (1 sat/kB), the minimum transaction amount becomes $0.546 USD and the fee becomes $0.219 USD instead, which is significantly closer to the fee rates we expect on BCH.

Unfortunately, this still exceeds our arbitrary $0.10 USD desired fee threshold, but we can also reasonably expect that by the time we are measuring fees in millisats, $0.10 in 2024 USD will probably be something more like $1.00 in 2036 USD.
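The arithmetic above, spelled out (assuming $100,000,000/BCH, so 1 sat = $1, and the ~219-byte transaction implied by the quoted fee):

```python
# Fee arithmetic at an assumed price of $100,000,000/BCH (1 sat = $1).
usd_per_sat = 100_000_000 / 10**8          # = 1.0
dust_limit_sats = 546
tx_size_bytes = 219                        # implied by the ~$219 fee at 1 sat/byte

# At 1 sat/byte:
assert dust_limit_sats * usd_per_sat == 546.0          # smallest output: $546
assert tx_size_bytes * 1 * usd_per_sat == 219.0        # fee: ~$219

# At 1 millisat/byte (1 sat/kB), everything scales down 1000x:
assert dust_limit_sats * usd_per_sat / 1000 == 0.546   # $0.546 minimum
assert tx_size_bytes * usd_per_sat / 1000 == 0.219     # $0.219 fee
```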

Some napkin math on miner revenue at these levels:

Assumptions:
BCH price: $100,000,000 USD/BCH
Fee threshold: 1 millisat / byte
2-in-2-out P2PKH transaction size: 360 bytes
Max blocksize: 256 MB

Calculations:
Max transactions per block: 711,111
Fee earned per block: 255,999,960 millisats (~256 kilosats)
USD per block: $255,999.96

Conclusion: a mere 1% of the population making 1 transaction per day nets roughly $29 million USD in daily miner revenue (nearly $37 million if blocks run full all day); introducing millisat fees should not significantly disrupt mining operations at scale. Market forces will dictate the appropriate fee rate, as miners will accept transactions that utilize their resources (blockspace) most efficiently. Having empty blocks due to restrictive/user-inaccessible fee rate policy is bad business.
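Redoing the napkin math in Python (same assumptions, plus 144 blocks/day and ~8 billion people): the 1%-of-population scenario comes out closer to $29M/day, while ~$37M corresponds to every block running full:

```python
# Napkin math: $100,000,000/BCH, 1 millisat/byte, 360-byte transactions,
# 256 MB blocks, 144 blocks/day, ~8 billion people.
usd_per_sat = 1.0
block_bytes = 256_000_000
tx_bytes = 360

txs_per_block = block_bytes // tx_bytes                    # 711,111
fee_per_block_msats = txs_per_block * tx_bytes             # 255,999,960 millisats
usd_per_block = fee_per_block_msats / 1000 * usd_per_sat   # $255,999.96

daily_txs = 80_000_000                                     # 1% of 8 billion, 1 tx/day
usd_per_day = daily_txs * tx_bytes / 1000 * usd_per_sat    # $28.8M
usd_per_day_full = usd_per_block * 144                     # ~$36.9M at capacity

assert txs_per_block == 711_111
assert fee_per_block_msats == 255_999_960
assert usd_per_day == 28_800_000.0
assert round(usd_per_day_full) == 36_863_994
```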

3 Likes

We need both:

  • TX version would tell parsers they need to decode the TX differently.
  • ${version} OP_ENV would keep past & future contracts consistent:
    • if executed, the OP_*VALUE introspection opcodes would return millisats instead of sats (irrespective of TX version)
    • if not executed, OP_*VALUE introspection would return sats irrespective of TX version, and if the underlying prevout/output used millisats, they are truncated, e.g. the returned value will be millisats / 1000, where / is integer division.
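A minimal illustration of the proposed truncation rule for legacy introspection:

```python
# Legacy introspection under the proposal: report millisats // 1000
# (integer division, i.e. truncation, never rounding up).
def legacy_value(millisats: int) -> int:
    return millisats // 1000

assert legacy_value(1_999) == 1                      # 1.999 sats reads as 1 sat
assert legacy_value(100_000_000_000) == 100_000_000  # exactly 1 BCH is unchanged
```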
1 Like

I want to give a lot of push-back to what I’m reading here.

Introducing a new VM mode and a new third transaction version to enable sub-satoshi denominations is a HUGE cost. For no immediate benefit, this would require the most ecosystem-wide changes of any upgrade Bitcoin Cash has ever done.

Past upgrades limited the ecosystem cost: only full nodes and indexers needed to upgrade, together with software which emulates the VM, like the BitauthIDE.

This proposal would require all BCH software libraries to upgrade, it requires all end user wallets to upgrade, all block explorers, and it would require all exchanges to update.

Introducing subsatoshis to libraries like cashscript would add great additional complexity.

Arguing in favor of subsatoshis should acknowledge these unprecedented ecosystem wide costs or not be taken seriously at all.

It’s easy to get carried away, and you’re right: the CHIP should carefully examine and address costs/risks/alternatives, and people who’d have to bear those costs would have to agree to bear the said costs.

It would be a subset of those: many of them don’t parse raw blocks or raw transactions themselves, but instead request a deserialized TX or block through the node’s RPC. The response is a .json where the node already did the parsing and put everything in the correct fields, and there the node could still report truncated sats instead of millisats to maintain API backwards-compatibility; we’d just have to add a millisats field to the response for those ready to read it.

But you’re right, it could become a mess with many services wrongly displaying 1000x people’s balances, and it would create opportunity for scams, too (e.g. “look at this explorer, I paid you 10 BCH already” while really I paid 0.01 BCH).
If you’d load a raw TX using millisats in ElectronCash now, even with a different version number, it would just report amounts 1000x, and funnily I think signing & broadcast operations on a raw TX would work without an upgrade, but it would display wrong balances.
UTXO selection might break, though: ElectronCash could think it has 1000 BCH instead of 1 BCH and then try to build a ver2 TX paying out 999.9 BCH change, which would get rejected. Not sure how it works under the hood: does it parse TXs on its own, or depend on Fulcrum/BCHN and cache the amounts as reported by the node RPC?

I was suggesting an alternative, to add a new field to outputs using the PFX method, like we did with CashTokens:

And there ElectronCash would again display something wrong but instead of amount it would be the locking script that’s not displayed correctly (remember how, before it got upgraded, EC was showing CashToken outputs as some garbled locking script).

Here the alternative is again to do it the CashTokens way: just add 2 more introspection opcodes: OP_UTXOMILLISATS & OP_OUTPUTMILLISATS (and old ones would report truncated sats)

You make a great point, this indeed would be useful to go over in the CHIP. It takes a bit of thinking about how this small change at the core of our system ends up rippling out. Are there barriers available to protect players from changes? Or does everyone need to do “something”…

I’d love to know from the cashscript people what is easier or cleaner to do: a vm-version style opcode, or a single “op_enable_sub_sats” opcode. The vm-version idea is new but could be a building-block for future such changes, so the work done now would be avoided in future upgrades. But at the same time, we don’t know for sure that it isn’t a premature optimization. So I’m still not sure. Would love to have more people share their thoughts.

So, for the full node it would not be a huge amount of work to support this.

  • the actual amount data structure would need minor adjustments only.
  • the idea to have the transaction version decide how to parse the data is not complex, it just needs to be very well tested, as some refactors may be needed to encapsulate the amount. The good thing is that BCHN already does some of that encapsulation today.
  • The RPC needs an extra field in various places to show the sub-sats.

Middle-layer, Fulcrum:

  • methods like get_balance need to be told how to react. Just like with cashtokens this may be a connection boolean.
  • the internal database needs to be capable of storing the bigger number; I’d estimate that a new version would do a database upgrade that multiplies all balances stored in the db by 1000.
  • It obviously needs to actually consume the new RPC from the full node.

Middle-layer, chaingraph:

  • it likely can follow the same idea, don’t break old APIs but add new ones. But this needs to be verified with the maintainers.

Exchanges:

An upgrade would be good, but frankly it is not a big problem if they don’t. The point in the CHIP is this:

This proposal uses the fact that a fractional-Satoshi is practically free as an advantage. While the loss to a user is near zero going from a transaction with sub-Satoshi to one without, there is much less push for the entire ecosystem to rush to support this. And that makes the upgrade cheaper. When the price goes up 1000x, the cost of enabling this upgrade also becomes somewhat more expensive.

An exchange takes a small fee anyway, and an exchange doesn’t let you withdraw a balance at a precision finer than 1 sat out of there. A user losing their sub-sats is really not going to even be noticed by either the exchange or the user.

So, the exchanges can and should support it. But they can do this on their own timeline and simply use the old RPCs to figure out the balance of transactions they receive. The cost: sub-sats (so by definition less than 1 sat) are paid to the miner instead of to the exchange balance.

BlockExplorers:

To the best of my knowledge, they use the RPC for getting their information. So same thing again, it would be nice to get them to upgrade but stuff won’t break as those RPCs are backwards compatible. The need to upgrade is there, user pressure and all, but not doing so won’t break anything.

Wallets:

The majority of wallets use Fulcrum APIs or a full node’s RPC. Neither should actually change behavior, they should just add new data. The parsing of the transaction itself can be done by libAuth, which indeed needs support (see below).

Flowee Pay and maybe some more high end wallets actually parse the actual transaction binary, so they should be made aware.

Libraries:

The majority of the libraries are all based on APIs discussed above. Electron, RPC etc. Yes, it would be good to get them to support it, but there is no big rush as nothing will break and no substantial amount of money will get lost (go to miners) if they don’t.

libAuth stands out as one that actually parses the actual raw transaction, it does seem that that one is required to be upgraded in time in order to avoid issues.

BitAuth / CashScript

As there is no requirement for anyone to start sending tx-version-10 transactions the moment the upgrade is done, this is similarly a great thing to have, but nothing breaks if they do not have support at the time of the protocol upgrade (planned May 2025).

I think it makes a lot of sense to talk to each and every one of those groups to help them support sub-satoshis in a way that is best for them and best for the rest of BitcoinCash.

Upgrade early, add support later.

I think the approach of getting the actual plumbing into the full node early, while the price is low and support is optional, is the best I’ve heard yet. Waiting until we need sub-sats, or waiting until everyone coded it, those options don’t sound realistic to me.

Adding support later in apps is very similar to CashTokens. Much less intrusive, even, as the majority of the ecosystem can be shielded from changes by backwards-compatible RPC and ElectronCash APIs. I highly recommend those teams pick handshakes and APIs that help shield the rest of the ecosystem from such a change until those players are ready to upgrade. Which, to be clear, can be years from now. Even on BTC with its price today at $70,000, a single satoshi is worth $0.0007. People losing a fraction of THAT is not incentive for people to rush and add this code.

But it is great to have this upgrade in place when we need it.

1 Like

fun fact,

getrawtransaction prints like this:

"vout": [
    {
      "value": 10.00000000,
      "n": 0,

So, apart from double-precision limitations, it might just be that adding more zeros at the end there is fully backwards compatible.
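A quick check (assuming the consumer parses the JSON number as a standard double, as most JSON libraries do) that extra trailing digits would not change the parsed value:

```python
import json

# The RPC prints amounts as decimal BCH values; a consumer parsing them
# as doubles reads extra trailing digits without breaking, up to the
# double-precision limits mentioned elsewhere in the thread.
old = json.loads('{"value": 10.00000000}')
new = json.loads('{"value": 10.00000000000}')   # three more digits
assert old["value"] == new["value"] == 10.0
```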

1 Like

Can’t agree more with this.

  • Although it does seem likely to be necessary at some point, it’s still a hypothetical benefit.
  • Massive social and financial cost. This kind of thing is going to require convincing, coordination, cooperation, software development, consulting, contracting, etc. on an unprecedented scale that our ecosystem does not currently have the social or financial capital to execute.

Even after reaching some kind of consensus that something of this scale is necessary, there is going to be at least a year of heavy effort to pave the way before activation. This is not only not a good idea for 2025, but not possible without potentially fatal consequences to the chain due to the inevitable fallout of the rush.

At some point, I agree that it’s probably needed. It’s the last thing to be rushed though.

1 Like

Can you give some examples?

For reference, this page (part of the CHIP) shows some background.

Choice quote:

In other words, if we do this correctly then the vast majority of teams need to do nothing until they are ready to support this feature. Which may be in several years.

You’re right, I was reminded of this the same day you wrote it, when I was updating my AnyHedge indexer from ChainGraph to node RPC.
ChainGraph was returning sats and I was consuming sats, while the RPC returns BCH. I was actually annoyed by that when I was switching, hah, and it got me thinking about how I can safely just multiply by 1e8 and round, because the double can’t exceed the max number of digits due to the 21M BCH supply.

With fractional sats, representation as double would not always be exact since a double can safely hold a maximum integer value of 9007199254740991 which would mean values greater than 90,071.99254740991 BCH would be imprecise if handled by software using double precision.
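That limit is easy to demonstrate:

```python
# Integers are exact in an IEEE 754 double only up to 2**53 - 1.
MAX_SAFE = 2**53 - 1
assert MAX_SAFE == 9007199254740991
assert float(MAX_SAFE) == MAX_SAFE
assert float(MAX_SAFE + 1) == float(MAX_SAFE + 2)   # precision lost beyond it

# In millisats (1 BCH = 10**11 millisats) that limit is ~90,071.99 BCH:
assert MAX_SAFE / 10**11 == 90071.99254740991
```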

Yeah I don’t see a need to rush it, but we can still move towards it by working out the technical nits, building out the CHIP, and having it all ready to roll when time comes. What’s stopping software from getting ready now to parse a hypothetical new TX format ahead of time? When we locked TX version, we locked it precisely because maybe one day we might need to use it to switch something.

1 Like

Moving ahead is great! Pushing for 2025 is an absolute no-go for me, barring appearance of some miraculous incentive to make it necessary.

Also I think we will need 128 bit integers if we are going for subsats. 64 bits is already a tight jacket.

That sounds like a terrible idea. Since this CHIP is still in an early stage, a piece of software that locks in the currently proposed functionality would severely misbehave if the CHIP changes and version=10 gets a different meaning. I would recommend any software that parses raw transactions to produce an error if it encounters a transaction with version>2 (possibly with exceptions for those old transactions with other versions…) and only add support for other versions once the corresponding CHIPs are locked in.

2 Likes