Pre-release of a new CHIP-SubSatoshi

It’s my impression that a totally free market would lead to weather data and the like: gigablocks of junk, instantly. A free market above some floor that ratchets down over decades might protect us from that. Otherwise it would only take one miner to ruin the commons, so I feel there has to be some future logic or agreement about it amongst the miners.

I don’t want to divert the discussion into a completely different topic. And consensus or network level rules aren’t really in my wheelhouse (you’ve certainly thought about it a lot more). We certainly can’t solve the future fees issue here.

But since miners have a gentlemen’s agreement about the fee threshold… I propose we say:

Application protocol developers should agree, in the absence of a long-term fee schedule, that:

Long-lived contracts should assume a constant fee threshold forever, even though that will probably not be the case. And it’s best practice in such cases to have a pathway, in script, to clean up any outputs that cannot pay a 1 sat/byte fee.

Prepaying today’s fee far into the future will give miners more incentive to maintain the network. And sub-sat fees are an issue REALLY far away from being necessary.

As long as application protocol developers behave responsibly, things will be okay with sub-sat logic. And if an app breaks the fee rule and goes off the rails, it’s on the developer if their app bricks their customers’ coins in 10-20 years.

It would be an error to think that the opposite of a centrally controlled market is one that has no boundaries or limits.

Instead, a free market still has boundaries and limits; the most obvious are supply and demand. In Bitcoin Cash there are various others that will have an effect on what a miner will want to mine.

I do agree that that is getting off topic :wink:

I want to chime in on this CHIP since it came up yesterday on the BeCash and Chill Twitter space.

Overall, this is a good idea imo

First off, I’m in full support of this CHIP being targeted for the 2025 upgrade cycle (Nov 2024 lock-in). I don’t really see any reason NOT to introduce sub-satoshis, especially if it comes at negligible cost, as this CHIP suggests. Simply using the rest of the byte space we already have available seems incredibly sensible; perhaps Satoshi thought ahead in this regard?

I see major benefits to enabling sub-satoshis (going forward I will call these “millisats,” explained below):

  1. As noted in the CHIP, if all bitcoins were distributed evenly across the human population, that only leaves ~262,500 sats per person. In this scenario, the value of a single satoshi would likely be higher than many common goods and services. Having 262,500,000 millisats per person in circulation instead will allow significantly more robust pricing for the world’s economic activity.

  2. In addition to #1, millisats would better facilitate on-chain micropayments. I anticipate that in a future bitcoinized economy, there will be lots of value that can be derived and earned from an emergent “micro-economy.” I would expect such a “micro-economy” to function in some parts similarly to the way platforms like Twitter and Facebook pay their users a portion of ad revenue generated by their audience’s engagement. In other words, in a proper p2p, decentralized economy, many new forms of income generation become possible by realizing multiple/high-volume microtransaction streams.

May 2025 rationale

I also agree with the rationale for deploying in May 2025: there’s (ostensibly) negligible cost, reasonable benefit even today, and the costs increase as the value of BCH rises due to the millisat->sat conversion issue.

My two millisats on the implementation details

As far as implementation goes, based on the discussion here, a dedicated OP_BIGNUM_MODE seems to be a better solution than a new tx version. However, versioning the VM itself also seems like a potentially good idea - probably better than some ${flags} OP_SETSTATE. In this case I’d imagine an opcode like ${version} OP_ENV, and the protocol would then enforce VM behavior for each version. This would guarantee predictable behavior for all scripts deployed prior to an upgrade, since each script explicitly sets its own execution context in its locking bytecode. The use cases proposed by bchautist seem like reasonable rationale for this approach.

The only downside to the OP_ENV approach that I can think of is that it may increase the computational costs of transaction validation… but in practice, we can benchmark this. A conditional like this should be cheap, I think… but I don’t actually know for sure :slight_smile:

Regardless of which opcode we end up going with, it seems sensible to reject any transaction that uses the opcode more than once, or outside of script initialization, in order to ensure the script behaves predictably.
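To make the once-only rule concrete, here’s a rough sketch in TypeScript (the opcode value and helper names are hypothetical, and real script parsing would iterate decoded instructions rather than raw bytes):

const OP_ENV = 0xf0; // hypothetical, unassigned opcode value

interface VmContext { vmVersion: number }

// Accept `${version} OP_ENV` only as the very first operation of the
// locking bytecode, and reject the opcode anywhere else, so the script's
// execution context is fixed before anything else runs.
function initializeContext(lockingBytecode: Uint8Array, defaultVersion: number): VmContext {
  let vmVersion = defaultVersion;
  if (lockingBytecode.length >= 2 && lockingBytecode[1] === OP_ENV) {
    vmVersion = lockingBytecode[0]; // assume a single-byte version push
  }
  // Note: a real implementation must skip push data so it isn't misread
  // as opcodes; a raw byte scan is used here only for brevity.
  for (let i = 2; i < lockingBytecode.length; i++) {
    if (lockingBytecode[i] === OP_ENV) {
      throw new Error('OP_ENV used more than once or outside initialization');
    }
  }
  return { vmVersion };
}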

On “millisats” and the Sats Standard (Ꞩ)

During the BeCash and Chill space yesterday, we came up with the idea of standardizing “sats” (Ꞩ) nomenclature like so: https://twitter.com/kzKallisti/status/1765837741177155848

'Sats Standard (Ꞩ)' denominations from 'millisat' to 'petasat'

Sats Standard (Ꞩ):

1 sat = 0.00000001 BCH
1 BCH = 100,000,000 sats


1 millisat = 0.001 sats (sub-satoshis)
1 sat = 1 Ꞩ

1000 sats = 1 Kilosat
1000 Kilosats = 1 Megasat = 1,000,000 sats

100 Megasats = 1 BCH = 100,000,000 sats

1000 Megasats = 1 Gigasat = 10 BCH
1000 Gigasats = 1 Terasat = 10,000 BCH
1000 Terasats = 1 Petasat = 10,000,000 BCH

2.1 Petasats = 2,100 Terasats = 21 million BCH = 2.1 billion Megasats
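For exact conversions between these units, a tiny helper sketch (names follow the tweet; everything is expressed in millisats so the math stays integral):

// “Sats Standard” denominations, expressed in millisats.
const MILLISATS: Record<string, bigint> = {
  millisat: 1n,
  sat: 1_000n,
  kilosat: 1_000_000n,
  megasat: 1_000_000_000n,
  gigasat: 1_000_000_000_000n,
  terasat: 1_000_000_000_000_000n,
  petasat: 1_000_000_000_000_000_000n,
  bch: 100_000_000_000n, // 1 BCH = 100,000,000 sats = 10^11 millisats
};

function convert(amount: bigint, from: string, to: string): bigint {
  // Integer result; any sub-unit remainder is truncated.
  return (amount * MILLISATS[from]) / MILLISATS[to];
}

console.log(convert(10n, 'bch', 'gigasat'));         // 1n
console.log(convert(21_000_000n, 'bch', 'terasat')); // 2100n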

Going forward, I will likely start using these denominations myself, even prior to the implementation of millisats.

The Fee Problem

While nodes currently will not relay any transaction with a fee less than 1000 sats/kB (1 sat/byte), this parameter is trivial to change on the node level, and miners can directly accept transactions with any fee rate they wish.

If social consensus agrees that fees on BCH should not exceed something like $0.10 (2024 USD), it’s inevitable that 1 sat/byte will not be able to deliver on that promise. At a price of $100,000,000 USD/BCH, 1 sat is worth $1 USD. That would make $546 USD the smallest output possible at 1 sat/byte due to the dust limit, and the fee for a typical ~219-byte transaction would be ~$219 USD.

Measuring fees in millisats is the only way to alleviate this. At 1 millisat per byte (1 sat/kB), the minimum transaction amount becomes $0.546 USD and the fee becomes $0.219 USD instead, which is significantly closer to the fee rates we expect on BCH.

Unfortunately, this still exceeds our arbitrary $0.10 USD desired fee threshold, but we can also reasonably expect that by the time we are measuring fees in millisats, $0.10 in 2024 USD will probably be something more like $1.00 in 2036 USD.
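Working those numbers through (assuming the standard 546-sat dust limit and a ~219-byte transaction, as above):

// Fee math at $100,000,000 USD/BCH, where 1 sat = $1.
const usdPerSat = 100_000_000 / 1e8;           // 1 USD per sat
const dustLimitSats = 546;
const txBytes = 219;

// At 1 sat/byte:
console.log(dustLimitSats * usdPerSat);        // $546 minimum output
console.log(txBytes * 1 * usdPerSat);          // $219 fee

// At 1 millisat/byte (1 sat/kB), the fee-derived dust limit and the
// fee itself both scale down 1000x:
console.log(dustLimitSats * usdPerSat / 1000); // $0.546
console.log(txBytes * 0.001 * usdPerSat);      // $0.219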

Some napkin math on miner revenue at these levels:

Assumptions:
BCH price: $100,000,000 USD/BCH
Fee threshold: 1 millisat / byte
2-in-2-out P2PKH transaction size: 360 bytes
Max blocksize: 256 MB

Calculations:
Max transactions per block: 711,111
Fee earned per block: 255,999,960 millisats (~256 kilosats)
USD per block: $255,999.96

Conclusion: roughly 1% of the population making 1 transaction per day keeps blocks full (144 blocks/day) and nets nearly $37 million USD in daily miner revenue; introducing millisat fees should not significantly disrupt mining operations at scale. Market forces will dictate the appropriate fee rate, as miners will accept transactions that utilize their resources (blockspace) most efficiently. Having empty blocks due to restrictive/user-inaccessible fee rate policy is bad business.
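The same napkin math in code, including where the ~$37M/day figure comes from (144 full blocks per day):

// Miner revenue at 1 millisat/byte, $100M/BCH, 256 MB blocks.
const blockBytes = 256_000_000;
const txBytes = 360;                                  // 2-in-2-out P2PKH
const txPerBlock = Math.floor(blockBytes / txBytes);  // 711,111
const millisatsPerBlock = txPerBlock * txBytes;       // 255,999,960
const satsPerBlock = millisatsPerBlock / 1000;        // 255,999.96 sats
console.log(satsPerBlock);                            // = $255,999.96 at $1/sat

// 144 full blocks/day is ~102M transactions, on the order of 1% of the
// world population making one transaction each:
console.log(satsPerBlock * 144);                      // ~$36.9M per day
console.log(txPerBlock * 144);                        // 102,399,984 tx/day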


We need both:

  • A TX version would tell parsers they need to decode the TX differently.
  • ${version} OP_ENV would keep past & future contracts consistent:
    • if executed, the OP_*VALUE introspection opcodes would return millisats instead of sats (irrespective of TX version);
    • if not executed, OP_*VALUE introspection would return sats irrespective of TX version, and if the underlying prevout/output used millisats, the value is truncated: the returned value is millisats / 1000, where / is integer division (see the sketch below).
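A minimal sketch of that truncation rule (assuming amounts are carried internally as BigInt millisats; the function names are just for illustration):

const MILLISATS_PER_SAT = 1000n;

// Legacy OP_*VALUE behavior: report whole sats, hiding any sub-sat part.
// BigInt division truncates, matching integer division.
function legacyValueSats(outputMillisats: bigint): bigint {
  return outputMillisats / MILLISATS_PER_SAT;
}

// OP_ENV-enabled behavior: report the full millisat amount.
function millisatValue(outputMillisats: bigint): bigint {
  return outputMillisats;
}

console.log(legacyValueSats(1_234_567n)); // 1234n (567 millisats invisible)
console.log(millisatValue(1_234_567n));   // 1234567n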

I want to give a lot of push-back to what I’m reading here.

Introducing a new VM mode and a new third transaction version to enable sub-satoshi denominations is a HUGE cost. For no immediate benefit, this would require the most ecosystem-wide changes of any upgrade Bitcoin Cash has ever done.

Past upgrades limited the ecosystem cost: only full nodes and indexers needed to upgrade, together with software that emulates the VM, like the BitauthIDE.

This proposal would require all BCH software libraries to upgrade; it would require all end-user wallets, all block explorers, and all exchanges to update.

Introducing sub-satoshis in libraries like CashScript would add significant additional complexity.

Arguing in favor of sub-satoshis should acknowledge these unprecedented ecosystem-wide costs, or not be taken seriously at all.

It’s easy to get carried away, and you’re right: the CHIP should carefully examine and address costs/risks/alternatives, and the people who’d have to bear those costs would have to agree to bear them.

It would be a subset of those: many of them don’t parse raw blocks or raw transactions themselves, but instead request the deserialized TX or block through the node’s RPC. The response is JSON where the node has already done the parsing and put everything in the correct fields. There, the node could still report truncated sats instead of millisats to maintain API backwards-compatibility, and we’d add a millisats field to the response for those ready to read it.
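A sketch of what such a backwards-compatible response could look like (field names are hypothetical; the actual RPC schema would be up to the node):

// Hypothetical shape of one output in a getrawtransaction-style response,
// extended for millisats without breaking old consumers.
interface RpcTxOutput {
  value: number;      // existing field: BCH derived from truncated whole sats
  n: number;
  millisats?: string; // new optional field: full-precision amount, as a
                      // string to avoid double-precision issues in consumers
}

// Old software ignores `millisats`; upgraded software prefers it.
function effectiveMillisats(out: RpcTxOutput): bigint {
  return out.millisats !== undefined
    ? BigInt(out.millisats)
    : BigInt(Math.round(out.value * 1e8)) * 1000n;
}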

But you’re right, it could become a mess with many services wrongly displaying people’s balances at 1000x, and it would create opportunities for scams, too (e.g. “look at this explorer, I paid you 10 BCH already” while really I paid 0.01 BCH).
If you’d load a raw TX using millisats in ElectronCash now, even with a different version number, it would just report amounts at 1000x, and funnily enough I think signing & broadcast operations on the raw TX would work without an upgrade; it would just display wrong balances.
UTXO selection might break, though: ElectronCash could think it has 1000 BCH instead of 1 BCH and then try to build a v2 TX paying out 999.9 BCH in change, which would get rejected. I’m not sure how it works under the hood: does it parse TXs on its own, or does it depend on Fulcrum/BCHN and cache the amounts as reported by the node RPC?

I was suggesting an alternative: add a new field to outputs using the PFX method, like we did with CashTokens:

And there ElectronCash would again display something wrong but instead of amount it would be the locking script that’s not displayed correctly (remember how, before it got upgraded, EC was showing CashToken outputs as some garbled locking script).

Here the alternative is again to do it the CashTokens way: just add 2 more introspection opcodes, OP_UTXOMILLISATS & OP_OUTPUTMILLISATS (and the old ones would report truncated sats).
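For illustration, a rough sketch of how a PFX-style output could carry the sub-satoshi remainder (the marker byte and layout here are made up, by analogy with the CashTokens prefix):

const PREFIX_MILLISATS = 0xee; // hypothetical, unassigned marker byte

interface DecodedOutput {
  sats: bigint;            // legacy 8-byte amount field, whole sats
  extraMillisats: number;  // 0-999 remainder from the optional prefix
  lockingBytecode: Uint8Array;
}

// The marker sits where CashTokens puts PREFIX_TOKEN (0xef): in front of
// the locking bytecode, leaving the legacy amount field untouched.
function decodeOutputScriptField(sats: bigint, field: Uint8Array): DecodedOutput {
  if (field[0] === PREFIX_MILLISATS) {
    const extraMillisats = field[1] | (field[2] << 8); // little-endian, 0-999
    return { sats, extraMillisats, lockingBytecode: field.slice(3) };
  }
  return { sats, extraMillisats: 0, lockingBytecode: field }; // plain output
}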

You make a great point, this indeed would be useful to go over in the CHIP. It takes a bit of thinking to see how this small change at the core of our system ends up rippling out. Are there barriers available to protect players from changes? Or does everyone need to do “something”…

I’d love to know from the CashScript people which is easier or cleaner to do: a vm-version style opcode, or a single “op_enable_sub_sats” opcode. The vm-version idea is new, but it would potentially be a building block for future such changes, so work done now would be saved in future upgrades. At the same time, we don’t know for sure that it isn’t a premature optimization. So I’m still not sure. I would love more people to share their thoughts.

So, supporting this in the full node would not be a huge amount of work:

  • the actual amount data structure would need only minor adjustments.
  • the idea of having the transaction version decide how to parse the data is not complex; it just needs to be very well tested, as some refactors may be needed to encapsulate the amount handling. The good thing is that BCHN already does some of that encapsulation today (see the sketch after this list).
  • the RPC needs an extra field in various places to show the sub-sats.
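A minimal sketch of the version-gated parsing idea (TypeScript rather than the node’s C++, with an assumed new version number of 10):

const MILLISAT_TX_VERSION = 10; // hypothetical version number

// Parse an output's 8-byte amount field according to the TX version:
// pre-upgrade versions carry sats, the new version carries millisats.
function amountFieldToMillisats(raw: bigint, txVersion: number): bigint {
  return txVersion >= MILLISAT_TX_VERSION
    ? raw           // field already denominated in millisats
    : raw * 1000n;  // legacy field in sats; scale up internally
}

// The node can then work in millisats everywhere; the RPC layer truncates
// back to sats for old fields and exposes the full value in a new field.
function millisatsToLegacySats(millisats: bigint): bigint {
  return millisats / 1000n;
}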

Middle-layer, Fulcrum:

  • methods like get_balance need to be told how to react. Just like with CashTokens, this may be a connection boolean.
  • the internal database needs to be capable of storing the bigger number; I’d estimate that a new version would do a database upgrade that multiplies all balances stored in the db by 1000.
  • It obviously needs to actually consume the new RPC from the full node.

Middle-layer, chaingraph:

  • it likely can follow the same idea: don’t break old APIs, but add new ones. This needs to be verified with the maintainers, though.

Exchanges:

An upgrade would be good, but frankly it is not a big problem if they don’t upgrade. The point in the CHIP is this:

This proposal uses the fact that a fractional-Satoshi is practically free as an advantage. While the loss to a user is near zero going from a transaction with sub-Satoshi to one without, there is much less push for the entire ecosystem to rush to support this. And that makes the upgrade cheaper. When the price goes up 1000x, the cost of enabling this upgrade also becomes somewhat more expensive.

An exchange takes a small fee anyway, and an exchange won’t let you withdraw a balance anywhere near as small as 1 sat. A user losing their sub-sats is really not going to be noticed by either the exchange or the user.

So, the exchanges can and should support it. But they can do this on their own timeline and simply use the old RPCs to figure out the balance of transactions they receive. The cost: sub-sats (so by definition less than 1 sat) are paid to the miner instead of being credited to the exchange balance.

BlockExplorers:

To the best of my knowledge, they use the RPC for getting their information. So, same thing again: it would be nice to get them to upgrade, but stuff won’t break, as those RPCs are backwards compatible. The need to upgrade is there, user pressure and all, but not doing so won’t break anything.

Wallets:

The majority of wallets use Fulcrum APIs or a full node’s RPC. Neither should actually change behavior, they should just add new data. The parsing of the transaction itself can be done by libAuth, which indeed needs support (see below).

Flowee Pay and maybe some more high-end wallets actually parse the raw transaction binary, so they should be made aware.

Libraries:

The majority of the libraries are based on the APIs discussed above: Electron, RPC, etc. Yes, it would be good to get them to support it, but there is no big rush, as nothing will break and no substantial amount of money will get lost (it goes to miners) if they don’t.

libAuth stands out as one that actually parses the raw transaction; it does seem that this one is required to be upgraded in time in order to avoid issues.

BitAuth / CashScript

As there is no requirement for anyone to start sending tx-version-10 transactions the moment the upgrade is done, this is similarly a great thing to have, but nothing breaks if they don’t have support at the time of the protocol upgrade (planned for May 2025).

I think it makes a lot of sense to talk to each and every one of those groups to help them support sub-satoshis in a way that is best for them and best for the rest of Bitcoin Cash.

Upgrade early, add support later.

I think the approach of getting the actual plumbing into the full node early, while the price is low and support is optional, is the best I’ve heard yet. Waiting until we need sub-sats, or waiting until everyone has coded it: those options don’t sound realistic to me.

Adding support later in apps is very similar to CashTokens. Much less intrusive, even, as the majority of the ecosystem can be shielded from changes by backwards-compatible RPC and ElectronCash APIs. I highly recommend those teams pick handshakes and APIs that help shield the rest of the ecosystem from such a change until those players are ready to upgrade. Which, to be clear, can be various years from now. Even on BTC, with its price today at $70,000, a single satoshi is worth $0.0007. People losing a fraction of THAT is no incentive to rush and add this code.

But it is great to have this upgrade in place when we need it.


fun fact,

getrawtransaction prints like this:

"vout": [
    {
      "value": 10.00000000,
      "n": 0,

So, apart from double-precision limitations, it might just be that adding more decimal places at the end there is fully backwards compatible.


Can’t agree more with this.

  • Although it does seem likely to be necessary at some point, it’s still a hypothetical benefit.
  • Massive social and financial cost. This kind of thing is going to require convincing, coordination, cooperation, software development, consulting, contracting, etc. on an unprecedented scale that our ecosystem does not currently have the social or financial capital to execute.

Even after reaching some kind of consensus that something of this scale is necessary, there is going to be at least a year of heavy effort to pave the way before activation. This is not only a bad idea for 2025; it is not possible without potentially fatal consequences to the chain due to the inevitable fallout of the rush.

At some point, I agree that it’s probably needed. It’s the last thing to be rushed though.


Can you give some examples?

For reference, this page (part of the CHIP) shows some background.

Choice quote:

In other words, if we do this correctly then the vast majority of teams need to do nothing until they are ready to support this feature. Which may be in several years.

You’re right, I was reminded of this the same day you wrote it, when I was updating my AnyHedge indexer from ChainGraph to node RPC.
ChainGraph was returning sats and I was consuming sats, while the RPC returns BCH, and I was actually annoyed by that when I was switching, hah. It got me thinking about how I can safely just multiply by 1e8 and round, because the double can’t exceed the maximum number of digits thanks to the 21M BCH supply.

With fractional sats, representation as a double would not always be exact, since a double can safely hold a maximum integer value of 9,007,199,254,740,991. That means values greater than 90,071.99254740991 BCH (in millisats) would be imprecise if handled by software using double precision.
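A quick TypeScript illustration of that cliff, and the BigInt escape hatch:

// 2^53 - 1 is the largest integer a 64-bit double represents exactly.
console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991
// In millisats that is 90,071.99254740991 BCH; past it, doubles round silently:
console.log(9007199254740993 === 9007199254740992); // true (!)

// BigInt sidesteps the problem for amounts kept in millisats:
const supplyMillisats = 21_000_000n * 100_000_000n * 1000n;
console.log(supplyMillisats);                       // 2100000000000000000n, exact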

Yeah, I don’t see a need to rush it, but we can still move towards it by working out the technical nits, building out the CHIP, and having it all ready to roll when the time comes. What’s stopping software from getting ready now to parse a hypothetical new TX format ahead of time? When we locked the TX version, we locked it precisely because one day we might need to use it to switch something.


Moving ahead is great! Pushing for 2025 is an absolute no-go for me, barring appearance of some miraculous incentive to make it necessary.

Also, I think we will need 128-bit integers if we are going for sub-sats. 64 bits is already a tight jacket.

That sounds like a terrible idea. Since this CHIP is still at an early stage, a piece of software that locks in the currently proposed functionality would severely misbehave if the CHIP changed and version=10 got a different meaning. I would recommend that any software that parses raw transactions produce an error if it encounters a transaction with version > 2 (possibly with exceptions for those old transactions with other versions…) and only add support for other versions once the corresponding CHIPs are locked in.


Yeah this, just get ready for the possibility that the TX version might be upgraded in some unknown way and have a plan on how to handle it.

Some seem to fear that if one upgrades, everyone needs to upgrade.

This is a good thing to talk about. What we propose is to make it completely invisible to anyone outside of the full node teams that a milli-sats transaction has even been sent.

This means that, indeed, if I sent a v10 transaction to the Paytaca wallet, the only effect would be that it does not notice the sub-sat value. When the user spends from it as normal, the milli-sats will simply end up being added to the fees paid.

Similarly,

if a user creates a v10 transaction and re-uses a script that is currently using 64-bit math, that script will not actually see the milli-sats. Even though they are present in the transaction itself, the switch to read the millisats is a decision made INSIDE the script.

This means that if you’re not certain 64-bit math and milli-sats will work together for your use case, you probably want to push for a 128-bit math upgrade and wait for that before you start using milli-sats.

Just because millisats are available on Bitcoin Cash doesn’t mean you need to use them.
Just because someone may send you a v10 transaction with milli-sats doesn’t mean you are forced to consume them.

That is a good idea; we’ve even had it as a consensus rule for some time now.

I suggest having the cut-off at version 9; that’s safer.

Just for clarity’s sake, granted this is all hypothetical: $100,000,000 per coin isn’t really in the cards (without some major inflation, which maybe IS in the cards in the next 10-20 years!), but it’s still a bit irrelevant.

There is approximately $49T in the M1 money supply and $83T in M2. Rounding to $50T and $100T nets a price of BCH at $2.38M and $4.76M, respectively. So the fee per block would be roughly $6,000 or $12,000 (respectively), rather than $255,000.

Though, inflation doesn’t matter, because the actual value of the currency wouldn’t change.

So for today’s value, an adjusted calc would be… roughly $12,000 per block in today’s dollars. And that number should scale up proportionally with inflation, whereas the real value would remain constant.

1 sat/byte, assuming the $4.76M price per BCH, would mean a fee of $17, which in today’s dollars would still be far too expensive. So 1 millisat/byte does make sense to keep the fee at around a penny or two.

So in the end, the daily fee to miners would only be about $1.7M (in today’s dollars) of value. But the beauty is that that’s only at 1% adoption, and assuming no improvements are made to transaction sizes! So there should never really be a fee problem at scale.
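For reference, the adjusted calculation in code (using the M2-derived $4.76M/BCH figure from above):

// Block reward napkin math at an M2-derived price.
const priceUsdPerBch = 4_760_000;          // ~$100T M2 / 21M coins
const feeSatsPerBlock = 255_999.96;        // from the earlier 256 MB napkin math
const usdPerBlock = (feeSatsPerBlock / 1e8) * priceUsdPerBch;
console.log(usdPerBlock.toFixed(0));       // ~12186, i.e. roughly $12,000/block

const usdPerDay = usdPerBlock * 144;
console.log((usdPerDay / 1e6).toFixed(2)); // ~1.75, i.e. about $1.7M/day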


EDIT:
For a fun calc with today’s dollars/value in a full-replacement scenario…
60,000 tps × 600 s/block × 360 bytes/tx × 1 millisat/byte ÷ 100,000,000,000 millisats/BCH × $4,760,000/BCH × 144 blocks/day = $88.8M in daily miner fees!

That should be plenty of fee budget.

EDIT: Amount of daily storage needed at the above scale:
60,000 tps × 86,400 s/day × 360 bytes/tx = 1,866,240,000,000 bytes ≈ 1.866 TB per day (or ~1.697 TiB per day).
Maybe little by little I’ll break it down as I get bored, but here are some more assumptions!
Today, Seagate has a 30TB HDD (planned to retail for $450, iirc). So $450 will cover 16 days of transactions. Let’s say there are 100 mining pools. That’s $450 / 16 days × 100 pools = $2,800 per day across all mining pools! That doesn’t even make a dent in the daily mining fees. And the dent will only get smaller over time as storage gets cheaper in relative value.
The natural counter to this would be “fast storage is needed!”. OK, well, 100TB SSDs exist today (in 3.5in form factor), but they cost $40,000. Let’s do that math:
$40,000 / 53 days × 100 pools = $75,000, or 0.08% of the daily mining revenue. This is basically non-existent.
But at the same time, if you can’t be bothered to spend THAT much, run HDDs in RAID 10. Sure, now you’re paying for extra drives, but the relative cost (to daily revenue) is still basically non-existent. Now you have fast, redundant storage at a fraction of the cost.

Heck, let’s do those 100TB drives in a RAID 10 configuration. Double the SSD cost! That’s 0.16% of daily mining fees. This is where it becomes noticeable, but it is still so tiny. And this is before accounting for the decrease in the relative cost of this storage, which would likely drop another 10x by the time 60,000 tps ever occurs.

Now let’s think about internet bandwidth: 1.866 TB/day ÷ 24 hrs ÷ 60 min ÷ 60 sec ≈ 21.6 MB/s ≈ 175 Mbps. That’s pretty close to a normal home internet connection today, so the associated costs are completely insignificant.
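The same fee, storage, and bandwidth napkin math in code form (figures as assumed above):

const tps = 60_000;
const bytesPerTx = 360;
const feeMillisatsPerByte = 1;
const priceUsdPerBch = 4_760_000;
const millisatsPerBch = 100_000_000_000;

// Daily miner fees.
const bytesPerDay = tps * 86_400 * bytesPerTx;                  // 1.86624e12 bytes
const feeBchPerDay = bytesPerDay * feeMillisatsPerByte / millisatsPerBch;
console.log((feeBchPerDay * priceUsdPerBch / 1e6).toFixed(1));  // ~88.8 ($M/day)

// Daily storage and sustained bandwidth.
console.log((bytesPerDay / 1e12).toFixed(3));                   // ~1.866 (TB/day)
console.log((bytesPerDay * 8 / 86_400 / 1e6).toFixed(0));       // ~173 (Mbps)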

Then what about businesses and others running nodes? I don’t really think it’s necessary to account for their cost, since UTXO commitments will likely exist well before this sort of scale.

What about other high-performance full archival nodes? Such services would be offered, and they would likely charge for them. They will charge what the market determines they are worth, so this is not really a consideration.


What’s the TL;DR?
60,000 tps with a 0.001 sat/byte fee at a $4.76M relative value per coin equates to $88.8 million of relative miner compensation per day.
This requires a daily incremental cost (over 0) of ~0.16% of those fees, assuming a high-performance configuration.
The necessary uninterrupted internet connection would be ~175 Mbps, which is common for homes today, so this is completely insignificant.


As adoption grows, so will the costs. But I don’t disagree at all with what you’re saying.
Just for curiosity’s sake: would there be a scenario where, rather than changing sats, we could piggyback off of CashTokens and basically create a “centoshi”? Kind of like we have dollars (BCH) and coins (centoshis). Not sure how to do this without a central body… maybe it would need some kind of decentralized bridge… like a BCH stablecoin that stabilizes BCH… on BCH?

This contains good ideas for brainstorming and it sounds like the discussion succeeded in uncovering some hidden complexity.

The question the authors are asking is “What needs to be done now to ensure we can support more granular transactions in the future, if and when we need them?”


Won’t doing 128-bit math without actually having 128-bit processors be very CPU-intensive?

There are libraries that do this, but the math is not done directly using single CPU instructions as usual; it is “emulated”, like running unsupported software in an emulator.

Honestly, I have no idea how computationally expensive it would be in practice. Maybe the overall load is so small nobody would notice.
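For a sense of what that emulation looks like: wider products are assembled from native-word partial products, which is what bignum libraries (and JavaScript’s BigInt) do internally. A toy sketch:

// Toy 64x64 -> 128-bit multiply built from 32-bit partial products,
// the same trick bignum libraries use when no native 128-bit type exists.
function mul64to128(a: bigint, b: bigint): { hi: bigint; lo: bigint } {
  const MASK32 = 0xffffffffn;
  const [aLo, aHi] = [a & MASK32, a >> 32n];
  const [bLo, bHi] = [b & MASK32, b >> 32n];

  const p0 = aLo * bLo;                           // low partial product
  const p1 = aLo * bHi + aHi * bLo + (p0 >> 32n); // middle terms plus carry
  const lo = (p0 & MASK32) | ((p1 & MASK32) << 32n);
  const hi = aHi * bHi + (p1 >> 32n);
  return { hi, lo };
}

// Sanity check against BigInt's built-in arbitrary-precision multiply:
const a = 0xdeadbeefcafebaben;
const b = 0x123456789abcdefn;
const { hi, lo } = mul64to128(a, b);
console.log((hi << 64n) + lo === a * b);          // true

A handful of extra adds and shifts per multiplication, so it is slower than native math, but on modern CPUs the per-operation cost is still tiny.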