Significance of the Issues in the Current SLP System

I’ve written an article about my observations of the issues with the current SLP system.

Businesses and users are having trouble using it, and IMHO a higher priority should be given to fixing the current system. Problems that hinder adoption should be addressed if we want to keep our stakeholders and invite more usage.

If any of the proposals (or a better one) can’t make it into May 2022, maybe we should delay the upgrade instead of having to wait 20 months, until May 2023, to get the issue fixed.


Nice write-up! However, I don’t like the idea of changing the upgrade schedule to accommodate late proposals.

I’d like to add this bit I realized about SLP, from a risk point of view:

Now consider an SLP consensus failure, which could result in a total loss of funds. What impact on BCH’s image would there be if SLP had a catastrophic failure and loss of funds? I’d argue that moving the risk of holding tokens from consensus to userland actually increased the risk for the whole ecosystem. You also give up levers of control: because it’s permissionless, it can grow on its own and eventually blow up, damaging BCH’s image in the process. That risk never goes away, because every new wallet or piece of middleware software introduces it independently of what everyone else is doing.

If there had been a competitive miner-validated token solution from the beginning, then nobody would have bothered with SLP. If the popularity of the current SLP solution increases, the risk only grows with time: there will be more funds at risk and more different software out there, so both probability and severity increase with popularity. It can’t simply be fixed, because it’s not a software issue but an architecture issue, which is not really new and has been known from the start. SLP middleware can serve faulty data to a 100% compliant wallet, and the wallet will have a choice: trust it and risk catastrophic failure, or do the DAG walk itself, but then that doesn’t scale. So the choice is between security and scalability. Only a miner-validated solution gives you both.
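To make the “DAG walk” concrete, here is a minimal illustrative sketch, not real SLP consensus rules: the transaction structure and field names are hypothetical. The point it demonstrates is that a wallet can only prove a token transaction valid by recursing through every token ancestor back to the genesis transaction, so the work grows with the size of the token’s history, and any parent the middleware withholds or fakes leaves the wallet unable to prove anything.

```python
# Illustrative sketch of an SLP-style DAG walk (hypothetical data model,
# NOT the real SLP validation rules).

def validate_slp(txid, txs, cache=None):
    """A tx is valid only if every token input it spends comes from a
    valid parent, all the way back to the GENESIS tx."""
    if cache is None:
        cache = {}
    if txid in cache:
        return cache[txid]
    tx = txs.get(txid)
    if tx is None:                      # unknown parent: validity cannot be proven
        cache[txid] = False
        return False
    if tx["type"] == "GENESIS":         # root of the token DAG
        cache[txid] = True
        return True
    # Every token input must trace back to a valid ancestor.
    ok = all(validate_slp(parent, txs, cache) for parent in tx["inputs"])
    cache[txid] = ok
    return ok

# Tiny example DAG: genesis -> send1 -> send2, plus a tx with a missing parent.
txs = {
    "g":   {"type": "GENESIS", "inputs": []},
    "s1":  {"type": "SEND", "inputs": ["g"]},
    "s2":  {"type": "SEND", "inputs": ["s1"]},
    "bad": {"type": "SEND", "inputs": ["missing"]},  # parent data not served
}
print(validate_slp("s2", txs))   # True
print(validate_slp("bad", txs))  # False
```

Even with caching, a wallet must obtain and check the full ancestry once per token, which is exactly the download and CPU cost reported further down in this thread.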

To reiterate, within a risks-vs.-benefits framework, can we agree that these premises hold?

  • As SLP adoption grows it increases both risks & benefits

  • With miner-validated tokens, post-activation risk remains fixed or close to fixed, while the benefits increase with adoption

OK, to be fair, here I’m arguing for any kind of miner-validated tokens, or even Script-validated tokens, which are also miner validated but at another layer, so there’s some distinction. Still, a consensus failure in interpreting some opcode or sequence of operations would result in a fork just like Group tokens would, would it not? I don’t see how the risk of a fork differs between a native and a Script token implementation. If we already had all the primitives needed to build Script tokens, then you could say there’s no new risk with Script tokens. But if enabling Script tokens requires a consensus change, then it, too, introduces this risk of consensus failure.

Quoting myself from:


I don’t like it either, so I hope we don’t need it. I’ve just learned that the Graph Search feature, which is supposed to help with SLP validation, is pretty expensive on resources, and currently only 3 servers are running BCHD with the SLP index vs. 24+ SPV servers.

I’ve just learned that the Graph Search feature which is supposed to help with SLP validation is pretty expensive on resources

Interested in more details - I’ve not run such a server myself but I do agree strongly with your point that the community needs to do something to improve the SLP situation while alternatives are under development.


I think @quest has more info about this, as he said that graph search is super expensive. Also @blockparty-sh and the hosts of the other servers.

What I can tell from my experiments using the EC SLP version is that when you receive a token, all transactions involving the token are downloaded from the server and validation is then done locally. Tokens with a high number of transactions will cause larger downloads.

One example is the Honk token, which caused 200+ MB of download from the Graph Search server.
Here is a 300,000-line, 10 MB file just from the output of the local wallet doing the validation, with high CPU usage:

For flexUSD, it is reported to require downloading 3 GB of data.

A screenshot from a wallet that received the Honk token.


Akad pointed me here, this looks like a great summary of the UX problem.

Curious, with all the focus on consensus-validated tokens, has anyone worked recently on other ideas for making the current validation strategy more efficient?

It seems like wallets and services end up hard-coding lists of “trusted” tokens pretty often – what’s stopping us from developing a standard checkpointing system so end clients can do less validation?

Just put the list of currently valid tokens “as of 10 blocks ago” in a giant merkle tree, then use that as the starting point for validation. Could be updated every few months or so to reduce the size of these huge “graph search” data downloads. (And the latest merkle tree is easy for the community to validate: lots of different devs can run the software to generate the merkle tree as of a certain block to confirm it’s a valid checkpoint.)
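The checkpoint idea above can be sketched in a few lines. This is a minimal illustration, assuming the set of currently valid token outputs can be serialized as sorted byte strings; the serialization format and double-SHA256 tree shown here are assumptions for the sketch, not a worked-out standard.

```python
# Sketch of a community-verifiable checkpoint: a merkle root over the set
# of valid token outputs as of some block. Serialization is hypothetical.
import hashlib

def sha256d(b: bytes) -> bytes:
    """Double SHA-256, as commonly used in Bitcoin merkle trees."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    """Pair-and-hash up the tree (odd leaf duplicated, Bitcoin-style)."""
    if not leaves:
        return sha256d(b"")
    level = [sha256d(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Each dev builds the leaf set from their own node's data; sorting makes
# the root deterministic so independently computed roots can be compared.
valid_outputs = sorted([b"txid0:1", b"txid1:0", b"txid2:3"])
checkpoint = merkle_root(valid_outputs)
print(checkpoint.hex())
```

Because the construction is deterministic, many independent devs can regenerate the root from their own data and simply compare hex strings; a match confirms the checkpoint, and wallets could then start their DAG walks from the checkpointed outputs instead of from genesis.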

A solution like that wouldn’t require network changes and could be deployed immediately and incrementally by a few heavily-used tokens. (And that would at least hold the ecosystem over nicely until other consensus-validated options are available.)

I think that most known and experienced people in the BCH ecosystem now agree that SLP has become a clusterfuck.

I sincerely hope we can upgrade this (clearly) obsolete tech to OP_GROUP and convert all tokens ASAP so it becomes easier to sustain and maintain token infrastructure on BCH.