Pre-release of a new CHIP-SubSatoshi

It’s easy to get carried away, and you’re right: the CHIP should carefully examine and address costs/risks/alternatives, and the people who’d have to bear those costs would have to agree to bear them.

It would be a subset of those: many of them don’t parse raw blocks or raw transactions themselves but instead request a deserialized TX or block through the node’s RPC. The response is JSON where the node has already done the parsing and put everything in the correct fields, so there the node could still report truncated sats instead of millisats to maintain API backwards-compatibility, and we’d add a millisats field to the response for those ready to read it.
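As a rough sketch (the `value_millisats` field name below is hypothetical and not part of any CHIP), a backwards-compatible RPC output entry could look like this:

```typescript
// Sketch of a backwards-compatible RPC "vout" entry. Old consumers keep reading
// `value` as BCH truncated to whole sats; upgraded consumers opt in to the new field.
const vout = {
  value: 10.0,                      // BCH, truncated to whole sats as today
  value_millisats: "1000000000123", // hypothetical full-precision field; a string avoids double rounding
  n: 0,
};
```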

But you’re right, it could become a mess with many services wrongly displaying people’s balances 1000x too high, and it would create an opportunity for scams, too (e.g. “look at this explorer, I paid you 10 BCH already” while really I paid 0.01 BCH).
If you’d load a raw TX using millisats in ElectronCash now, even with a different version number, it would just report amounts 1000x too high, and funnily enough I think signing & broadcast operations on such a raw TX would work without an upgrade, it would just display wrong balances.
UTXO selection might break, though: ElectronCash could think it has 1000 BCH instead of 1 BCH and then try to build a ver2 TX paying out 999.9 BCH in change, which would get rejected. I’m not sure how it works under the hood; does it parse TXs on its own, or does it depend on Fulcrum/BCHN and cache the amounts as reported by the node RPC?

I was suggesting an alternative: add a new field to outputs using the PFX method, like we did with CashTokens.

And there ElectronCash would again display something wrong, but instead of the amount it would be the locking script that’s not displayed correctly (remember how, before it got upgraded, EC was showing CashToken outputs as some garbled locking script).

Here the alternative is again to do it the CashTokens way: just add 2 more introspection opcodes, OP_UTXOMILLISATS & OP_OUTPUTMILLISATS (and the old ones would keep reporting truncated sats).
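To illustrate the relationship (a sketch only; the new opcode names come from the suggestion above and nothing here is specified in a CHIP), the old introspection opcodes would simply report the millisat amount truncated to whole sats:

```typescript
// Sketch: how the existing OP_UTXOVALUE and a hypothetical OP_UTXOMILLISATS
// could relate, assuming amounts are carried internally in millisats.
function utxoValueSats(millisats: bigint): bigint {
  return millisats / 1000n; // BigInt division truncates: old opcode keeps reporting whole sats
}

function utxoValueMillisats(millisats: bigint): bigint {
  return millisats; // new opcode exposes the full-precision amount
}

// 1.5 sats carried as 1500 millisats:
console.log(utxoValueSats(1500n));      // 1n  (what existing scripts see)
console.log(utxoValueMillisats(1500n)); // 1500n
```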

You make a great point, this indeed would be useful to go over in the CHIP. It takes a bit of thinking to see how this small change at the core of our system ends up rippling out. Are there barriers available to protect players from changes? Or does everyone need to do “something”…

I’d love to know from the CashScript people what is easier or cleaner to do: a vm-version style opcode, or a single “op_enable_sub_sats” opcode. The vm-version idea is new but would potentially be a building block for future such changes, so work done now would be avoided in future upgrades. But at the same time, we don’t know for sure that it isn’t a premature optimization. So I’m still not sure. I’d love more people to share their thoughts.

So, supporting this in the full node would not be a huge amount of work.

  • the actual amount data structure would need minor adjustments only.
  • the idea to have the transaction-version decide how to parse the data is not complex, it just needs to be very well tested as some refactors may be needed to encapsulate the amount. The good thing is that BCHN already does some of that encapsulation today (see the sketch after this list).
  • The RPC needs an extra field in various places to show the sub-sats.
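A sketch of the version-gated parsing idea (assuming, as this thread does, a hypothetical tx-version 10 that carries amounts in millisats; nothing here is final):

```typescript
// Sketch: version-dependent interpretation of an output's 8-byte amount field.
// Assumes v10 transactions store millisats directly, while older versions store sats.
const MILLISATS_PER_SAT = 1000n;
const SUBSAT_TX_VERSION = 10; // hypothetical version number used in this discussion

function outputAmountMillisats(txVersion: number, rawAmount: bigint): bigint {
  return txVersion >= SUBSAT_TX_VERSION
    ? rawAmount                      // already millisats
    : rawAmount * MILLISATS_PER_SAT; // legacy: whole sats, scale up internally
}

// A legacy 546-sat output and a v10 output carrying 546,500 millisats:
console.log(outputAmountMillisats(2, 546n));     // 546000n
console.log(outputAmountMillisats(10, 546500n)); // 546500n
```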

Middle-layer, Fulcrum:

  • methods like get_balance need to be told how to react. Just like with cashtokens this may be a connection boolean.
  • the internal database needs to be capable of storing the bigger numbers; I’d estimate that a new version would do a database upgrade that multiplies all balances stored in the db by 1000 (see the sketch after this list).
  • It obviously needs to actually consume the new RPC from the full node.
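A minimal sketch of such a one-time upgrade (written in TypeScript for brevity, Fulcrum itself is C++; the storage interface is invented purely for illustration):

```typescript
// Sketch of a one-time "x1000" migration: every stored sat amount becomes millisats.
// BigInt avoids overflowing a plain number for large balances after scaling.
interface AmountStore {
  entries(): Iterable<[key: string, sats: bigint]>;
  put(key: string, millisats: bigint): void;
  setSchemaVersion(version: number): void;
}

function migrateToMillisats(db: AmountStore): void {
  for (const [key, sats] of db.entries()) {
    db.put(key, sats * 1000n); // same value, finer unit
  }
  db.setSchemaVersion(2); // so the upgrade only ever runs once
}
```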

Middle-layer, chaingraph:

  • it likely can follow the same idea, don’t break old APIs but add new ones. But this needs to be verified with the maintainers.

Exchanges:

An upgrade would be good, but frankly it is not a big problem if they don’t upgrade. The point in the CHIP is this:

This proposal uses the fact that a fractional satoshi is practically free as an advantage. Because the loss to a user is near zero when going from a transaction with sub-satoshis to one without, there is much less pressure for the entire ecosystem to rush to support this. And that makes the upgrade cheaper. When the price goes up 1000x, enabling this upgrade also becomes somewhat more expensive.

An exchange takes a small fee anyway, and an exchange won’t let you withdraw a balance that isn’t much higher than 1 sat in the first place. A user losing their sub-sats is really not going to be noticed by either the exchange or the user.

So, the exchanges can and should support it. But they can do this on their own timeline and simply use the old RPCs to figure out the balance of transactions they receive. The cost: sub-sats (so by definition less than 1 sat) are paid to the miner instead of to the exchange balance.

BlockExplorers:

To the best of my knowledge, they use the RPC for getting their information. So same thing again, it would be nice to get them to upgrade but stuff won’t break as those RPCs are backwards compatible. The need to upgrade is there, user pressure and all, but not doing so won’t break anything.

Wallets:

The majority of wallets use Fulcrum APIs or a full node’s RPC. Neither should actually change behavior, they should just add new data. The parsing of the transaction itself can be done by libAuth, which indeed needs support (see below).

Flowee Pay and maybe some more high end wallets actually parse the actual transaction binary, so they should be made aware.

Libraries:

The majority of the libraries are based on the APIs discussed above: Electron Cash, RPC, etc. Yes, it would be good to get them to support it, but there is no big rush as nothing will break and no substantial amount of money will be lost (it goes to miners) if they don’t.

libAuth stands out as one that actually parses the raw transaction itself; that one does seem to require an upgrade in time to avoid issues.

BitAuth / CashScript

As there is no requirement for anyone to start sending tx-version-10 transactions the moment the upgrade is done, this is similarly a great thing to have, but nothing breaks if they do not have support at the time of the protocol upgrade (planned May 2025).

I think it makes a lot of sense to talk to each and every one of those groups to help them support sub-satoshis in a way that is the best for them and the best for the rest of BitcoinCash.

Upgrade early, add support later.

I think the approach of getting the actual plumbing into the full node early, while the price is low and support is optional, is the best I’ve heard yet. Waiting until we need sub-sats, or waiting until everyone has coded it, those options don’t sound realistic to me.

Adding support later in apps is very similar to cash-tokens. Much less intrusive, even, as the majority of the ecosystem can be shielded from changes by backwards-compatible RPC and electroncash APIs. I highly recommend that those teams pick handshakes and APIs that help shield the rest of the ecosystem from such a change until those players are ready to upgrade. Which, to be clear, can be several years from now. Even on BTC with its price today at $70,000, a single satoshi is worth $0.0007. People losing a fraction of THAT is no incentive for anyone to rush and add this code.

But it is great to have this upgrade in place when we need it.


fun fact,

getrawtransaction prints like this:

"vout": [
    {
      "value": 10.00000000,
      "n": 0,

So, apart from double-precision limitations, it might just be that adding more decimal places at the end there is fully backwards compatible.


Can’t agree more with this.

  • Although it does seem likely to be necessary at some point, it’s still a hypothetical benefit.
  • Massive social and financial cost. This kind of thing is going to require convincing, coordination, cooperation, software development, consulting, contracting, etc. on an unprecedented scale that our ecosystem does not currently have the social or financial capital to execute.

Even after reaching some kind of consensus that something of this scale is necessary, there is going to be at least a year of heavy effort to pave the way before activation. This is not only not a good idea for 2025, but not possible without potentially fatal consequences to the chain due to the inevitable fallout of the rush.

At some point, I agree that it’s probably needed. It’s the last thing to be rushed though.


Can you give some examples?

For reference, this page (part of the CHIP) shows some background.

Choice quote:

In other words, if we do this correctly then the vast majority of teams need to do nothing until they are ready to support this feature. Which may be in several years.

You’re right, I was reminded of this the same day you wrote it, when I was updating my AnyHedge indexer from ChainGraph to node RPC.
ChainGraph was returning sats and I was consuming sats, but the RPC returns BCH, and I was actually annoyed by that when switching, hah. It got me thinking about how I can safely just multiply by 1e8 and round, because with the 21M BCH supply the number can’t exceed what a double holds exactly.

With fractional sats, representation as a double would not always be exact: a double can safely hold a maximum integer value of 9,007,199,254,740,991, which means any amount greater than 90,071.99254740991 BCH would be imprecise if handled in millisats by software using double precision.
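A quick TypeScript/JavaScript illustration of that limit (this just demonstrates the precision point, it is not any proposed API):

```typescript
// Demonstrating the double-precision limit for millisat amounts.
const MILLISATS_PER_BCH = 100_000_000_000; // 1e11 (1000 millisats per sat, 1e8 sats per BCH)
const limit = Number.MAX_SAFE_INTEGER;     // 9007199254740991
console.log(limit / MILLISATS_PER_BCH);    // ≈ 90071.99254740991 BCH

// Past the limit, distinct millisat amounts become indistinguishable as doubles:
console.log(limit + 1 === limit + 2);                   // true  (precision lost)
console.log(BigInt(limit) + 1n === BigInt(limit) + 2n); // false (BigInt keeps them exact)
```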

Yeah, I don’t see a need to rush it, but we can still move towards it by working out the technical nits, building out the CHIP, and having it all ready to roll when the time comes. What’s stopping software from getting ready now to parse a hypothetical new TX format ahead of time? When we locked the TX version, we locked it precisely because maybe one day we might need to use it to switch something.


Moving ahead is great! Pushing for 2025 is an absolute no-go for me, barring appearance of some miraculous incentive to make it necessary.

Also I think we will need 128 bit integers if we are going for subsats. 64 bits is already a tight jacket.

That sounds like a terrible idea. Since this CHIP is still in an early stage, a piece of software that locks in the currently proposed functionality would severely misbehave if the CHIP were to change and version=10 got a different meaning. I would recommend any software that parses raw transactions to produce an error if it encounters any transaction with version>2 (possibly with exceptions for those old transactions with other versions…) and only add support for other versions once the corresponding CHIPs are locked in.
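A minimal sketch of that defensive check (the cut-off of 2 is the one suggested here; historical transactions with odd version numbers would need their own exceptions):

```typescript
// Sketch: refuse to interpret transaction versions we don't understand yet.
const MAX_SUPPORTED_TX_VERSION = 2;

function checkTxVersion(version: number): void {
  if (version > MAX_SUPPORTED_TX_VERSION) {
    throw new Error(
      `Unsupported transaction version ${version}; refusing to guess at its meaning`
    );
  }
}
```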


Yeah this, just get ready for the possibility that the TX version might be upgraded in some unknown way and have a plan on how to handle it.

Some seem to fear that if one upgrades, everyone needs to upgrade.

This is a good thing to talk about. What we propose is to make it so that nobody outside of the full node software even has to realize that a milli-sats transaction has been sent.

This means that indeed if I send a v10 transaction to the Paytaca wallet, the only effect will be that it will not notice the sub-sat value. When it spends from it as normal, the effect will be that the milli-sats end up being added to the fees paid.
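A small worked example of that (the amounts are invented; the point is the consensus behaviour described above, not any wallet API):

```typescript
// An unaware wallet sees only whole sats, so any sub-sat remainder on its
// inputs is simply left to the miner as part of the fee.
const inputMillisats = 100_000_500n;     // 100,000 sats + 500 millisats received
const seenSats = inputMillisats / 1000n; // the wallet sees 100000 sats

const sendSats = 99_000n;
const feeSatsPlanned = 300n;
const changeSats = seenSats - sendSats - feeSatsPlanned; // 700 sats of change

// The actual fee as the node computes it, in millisats:
const outputsMillisats = (sendSats + changeSats) * 1000n;
console.log(inputMillisats - outputsMillisats); // 300500n: the planned 300 sats plus the unnoticed 500 millisats
```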

Similarly,

if a user creates a v10 transaction and re-uses a script that is now using 64bit math, that script will not actually see the milli-sats. Even though they are present on the transaction itself, the switch to get the milli-sats is a decision INSIDE the script.

This means that if you’re not certain that 64bit math and milli-sats will work together for your usecase, you probably want to push for a 128bit math upgrade and wait for that before you start using milli-sats.

Just because millisats are available on BitcoinCash doesn’t mean you need to use them.
Just because someone may send you a v10 transaction with milli-sats doesn’t mean you are forced to consume them.

That is a good idea; we have even had it as a consensus rule for some time now.

I suggest having the cut-off at version 9; that’s safer.

Just for clarity’s sake, granted this is all hypothetical: $100,000,000 per coin isn’t really in the cards (without some major inflation, which maybe is in the cards in the next 10-20 years!), but that’s a bit beside the point anyway.

There is approximately $49T in the M1 money supply and $83T for M2. Rounding to $50T and $100T nets a price of BCH at $2.38M and $4.76M, respectively. So the fee would be $7,000 or $15,000, roughly (respectively), rather than $255,000.

Though, inflation doesn’t matter, because the actual value of the currency wouldn’t change.

So for today’s value, an adjusted calc would be… roughly $12,000 per block in today’s dollars. And that nominal amount should scale up proportionally with inflation, whereas the real value would remain constant.

1sat/byte, assuming the $4.76M price per BCH, would be a fee of $17. Which in today’s dollars would still be far too expensive. So 1millisat/byte does make sense to keep the fee at just around 1 penny.

So in the end, the daily fee to miners would only be $1.7M (in today’s dollars) of value. But the beauty is that that’s only at 1%, and assuming no improvements are made to transaction sizes! So there should never really be a fee problem at scale.


EDIT:
For fun calc with today’s dollars/value in a full replacement scenario…
60,000 tps * 60 seconds * 10 min * 360 bytes/tx * 1 millisat/byte / 100,000,000,000 millisats/BCH * $4,760,000/BCH * 144 blocks/day = $88.8M in daily miner fees!

That should be plenty of fee budget.
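For anyone wanting to re-check that arithmetic, the same calculation in code (using the assumptions from the post above: 1 millisat/byte, 360 bytes/tx, $4.76M per BCH):

```typescript
// Daily miner fees in the full-replacement scenario sketched above.
const tps = 60_000;
const bytesPerTx = 360;
const feeMillisatsPerByte = 1;
const millisatsPerBch = 100_000_000_000; // 1e11
const usdPerBch = 4_760_000;
const blocksPerDay = 144;

const txPerBlock = tps * 60 * 10; // 36,000,000
const feeBchPerBlock = txPerBlock * bytesPerTx * feeMillisatsPerByte / millisatsPerBch; // ≈ 0.1296 BCH
const dailyFeesUsd = feeBchPerBlock * usdPerBch * blocksPerDay;
console.log(dailyFeesUsd); // ≈ 88,833,024 → roughly $88.8M per day
```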

EDIT: Amount of daily storage needed at the above scale:
60,000 tps * 60 sec * 60 min * 24 hr * 360 bytes/tx = 1,866,240,000,000 bytes = 1.866 TB per day (about 1,738 GiB, or roughly 1.70 TiB, per day).
Maybe little by little I’ll break it down as I get bored, but some more assumptions!
Today, Seagate has a 30TB HDD (plans to retail for $450 iirc). So $450 will cover 16 days of transactions. Let’s say there are 100 mining pools. That’s $450 / 16 days * 100 pools = $2,800 per day for all mining pools! Doesn’t even make a dent in the daily mining fees. Except, that dent will become smaller over time as storage gets cheaper and cheaper in relative value.
The natural counter to this would be “but fast storage is needed!”. Ok, well, 100TB SSDs exist today (in 3.5in form factor), but they cost $40,000. Let’s do that math:
$40,000 / 53 days * 100 pools = roughly $75,000 per day, or about 0.08% of the daily mining revenue. This is basically non-existent.
But at the same time, if you can’t be bothered to spend THAT much, run HDDs in RAID 10. Sure, now you’re paying for extra drives, but the relative cost (to daily revenue) is still basically non-existent. Now you have fast, redundant storage, at a fraction of the cost.

Heck, let’s do those 100TB drives in a RAID 10 configuration. Double that cost for the SSDs! 0.16% of daily mining fees. Now this is where it becomes noticeable, but it is still so tiny. And this is before accounting for the decrease in relative cost for this storage, which would likely send this cost down 10x by the time 60,000tps would ever occur.

Now let’s think about internet bandwidth. 1.866 TB / day. 1.866 TB / 24 hr / 60 min / 60 sec ≈ 21.6 MBps ≈ 175 Mbps. That’s pretty close to a normal home internet connection today. Associated costs are completely insignificant.
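The storage and bandwidth figures above, re-derived in code for anyone who wants to tweak the assumptions (tx size, tps and drive prices are taken from this post, they are not measured data):

```typescript
// Storage and bandwidth at the hypothetical 60,000 tps scale.
const tps = 60_000;
const bytesPerTx = 360;

const bytesPerDay = tps * 86_400 * bytesPerTx;          // 1,866,240,000,000 ≈ 1.87 TB/day
const hdd = { capacityTB: 30, priceUsd: 450 };          // the example 30 TB HDD
const daysPerHdd = hdd.capacityTB * 1e12 / bytesPerDay; // ≈ 16 days of chain data per drive

const pools = 100;
const storageUsdPerDayAllPools = hdd.priceUsd / daysPerHdd * pools; // ≈ $2,800/day across all pools

const mbps = bytesPerDay / 86_400 * 8 / 1e6;            // ≈ 173 Mbps sustained
console.log({ bytesPerDay, daysPerHdd, storageUsdPerDayAllPools, mbps });
```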

Then what about for businesses/otherwise running nodes? I don’t really think it’s necessary to account for their cost since UTXO commitments will likely exist well before this sort of scale.

What about other high-performance full archival nodes? Such services would be offered, and those offering them would likely charge for them. They will charge what the market determines it is worth, so this is not really a consideration.


What’s the TLDR?
60,000tps with a 0.001sat fee at $4.76M relative value per coin equates to $88.8 million of relative miner compensation a day.
This requires a daily incremental cost (over 0) of 0.16% of those fees assuming a high performance configuration.
The necessary uninterrupted internet connection would be under 200 Mbps, which is common today for homes, so this is completely insignificant.


As adoption grows, so will the costs. But I don’t disagree at all with what you’re saying.
Just for curiosity’s sake, would there be a scenario where, rather than changing sats, we could piggyback off of CashTokens and basically create a “centoshi”? Kind of like we have dollars (BCH) and coins (centoshis). Not sure how to do this without a central body… maybe it would need some kind of decentralized bridge to do this… like a BCH stablecoin that stabilizes BCH… on BCH?

This contains good ideas for brainstorming and it sounds like the discussion succeeded in uncovering some hidden complexity.

The question the authors are asking is “What needs to be done now to ensure we can support more granular transactions in the future, if and when we need them?”


Won’t doing 128Bit math without actually having 128bit processors be very CPU-intensive?

There are libraries that do this, but the math is not done directly using CPU instructions as usual; it is “simulated”, like running unsupported software in an emulator.

Honestly I have no idea how computationally expensive it would be in effect. Maybe the overall load is so small nobody would notice.

In short, yes.

Multiplication is one of the most CPU-intensive things to do, and a 128-bit multiplication on a 64-bit CPU has to be emulated with several such instructions (three 64-bit multiplies plus shifts and additions for the low 128 bits; see Karatsuba algorithm - Wikipedia).
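For a feel of what that emulation looks like (a sketch using BigInt to stand in for 64-bit machine words; a real implementation would use native registers and carry flags):

```typescript
// Low 128 bits of a 128x128-bit multiply, built from 64-bit partial products.
const MASK64 = (1n << 64n) - 1n;
const MASK128 = (1n << 128n) - 1n;

function mul128(a: bigint, b: bigint): bigint {
  const [aLo, aHi] = [a & MASK64, a >> 64n];
  const [bLo, bHi] = [b & MASK64, b >> 64n];
  // Three 64-bit multiplies (aHi*bHi only affects bits above 128, so it is skipped),
  // plus shifts and additions to combine the partial products.
  const lo = aLo * bLo;
  const mid = (aLo * bHi + aHi * bLo) & MASK64;
  return (lo + (mid << 64n)) & MASK128;
}

console.log(mul128(3n << 70n, 5n) === (15n << 70n)); // true
```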

I’m absolutely certain we’ll get it natively, though. x86 architectures are in their last phase. Apple is now competing with an entirely new design, and Intel / AMD / etc. will have to move over to these new architectures to compete. It will probably take 10 years, though.

There is a pretty good chance that in that time CPUs will go to 128-bit. Simply because it’s 1000x easier to do on a much more modern architecture, and partly because it is competitive.

So, my personal preference is to wait for the CPUs to do it. But that wish is entirely based on the fact that I have no pressing need for the feature. When we went to 64 bit that pressing need didn’t exist either, we talked about it at length back then.

Going 128 bits will be much more disruptive than going 64bit, so I hope we can postpone it a bit longer. And likely use the OP_ENV idea to actually move over safely.


Not on Linux.

Usually going from one architecture to a higher architecture is very easy, since all you need to do is recompile things.

And SPV wallets on Android can just do the simulated computation. Wallets don’t really do a lot of computations to begin with, so this should be straightforward.

…and who runs their business/production nodes on Windows these days? Nobody.

the context of that statement was about the scripting language in BitcoinCash. Specifically about making multiplications done in the VM use 128 bits instead of the standard 64.

It will be more disruptive for the simple reason that on the 32-bit setup the number of math-using scripts was quite low. They didn’t really do anything interesting.
The current 64-bit engine for BitcoinCash opened up a LOT of extra options. And it is impossible to know if those scripts will handle 128-bit well. We simply can’t know, since we can’t see the scripts until they are successfully spent.

This is why I wrote it will be a lot more difficult to do. Because I start with the assumption (stolen from the Kernel): we will never break user-space.

Unless they’re relying on overflow to fail the TX (which the CHIP explicitly discourages), they’d be fine.

Notice of Possible Future Expansion

While unusual, it is possible to design contracts which rely on the rejection of otherwise-valid Script Numbers which are larger than 8 bytes. Contract authors are advised that future upgrades may further expand the supported range of BCH VM Script Numbers beyond 8 bytes.

This proposal interprets such failure-reliant constructions as intentional – they are designed to fail unless/until a possible future network upgrade in which larger Script Numbers are enabled (i.e. a contract branch which can only be successfully evaluated in the event of such an upgrade).

As always, the security of a contract is the responsibility of the entity locking funds in that contract; funds can always be locked in insecure contracts (e.g. OP_DROP OP_1). This notice is provided to warn contract authors and explicitly codify a network policy: the possible existence of poorly-designed contracts will not preclude future upgrades from further expanding the range of Script Numbers.

To ensure a contract will always fail when arithmetic results overflow or underflow 8-byte Script Numbers (in the rare case that such a behavior is desirable), that behavior must be either 1) explicitly validated or 2) introduced to the contract prior to the activation of any future upgrade which expands the range of Script Numbers.

This is why I said that my assumption is that we should not break userspace. And I referenced the kernel for a reason.

Because people WILL go against the suggestions made. Even more so since I suspect that this suggestion is not actually shown in the docs for CashScript, just in the CHIP, which nobody reads after activation.

So, while I completely agree people should really not be doing that, this isn’t a crashing app on Linux; this is about people losing access to their money. And there are going to be a LOT more scripts using math on-chain since we actually made it useful for a lot of usecases. A percentage of those will do stupid stuff. Fact of life. Downside of success.

And breaking ‘userspace’ in the kernel is super annoying, but in BitcoinCash it makes people lose actual money forever. I’m just saying we should be extremely reluctant and would indeed do well to weigh all the options, OP_ENV being the winner in my book so far.
