Hi all, I’d like to help here.
I will do my best to understand Group Tokens, and then to present them in a way that Tom requires. Any pointers to where I can find the relevant documentation would be helpful.
As I see it, Tom cares about the process a lot and refuses to dig deeply into content if it isn't presented according to that process. That's fine and understandable. We're doing peer review, but we need to fit the work into a form that peers will want to review. So there are two ways to go about this: nag Tom to look into it even though it isn't presented the way he requires, or help Andrew present it according to spec. I'd like to help by helping Andrew present it. I started some discussions on Reddit, and that's all good for bringing attention, but it doesn't package the work into the form the process requires or move things forward here.
I will likely need to bother people to explain things to me, and I'll find channels for that which don't increase the noise here.
Here’s what I have so far.
Somebody has to do the math for tokens to function. I believe it would be better for the whole ecosystem if miners did it. Do we want such tokens, or not? Because if we don't want miners to do some arithmetic in a scalable way, we're stuck with SLP, and that isn't really taking off: it lacks a competitive advantage and has other issues to do with its hacky implementation, which couldn't be avoided given the historical context. Now we are here, and we could move forward. SLP already competes with other blockchains, so what would be the problem if it also competed with a solution on the same blockchain, and if that solution were better than other blockchains? The ecosystem would benefit, and maybe we'd attract more users, adoption and talent instead of having them build elsewhere.
This is not a technical argument, but still an important one. Do we want it? Why do we want it? Who will have to “pay” for it? Is the price acceptable for what we’d be getting? What do the miners think about paying that CPU price? What about opportunity cost of not implementing it? Anyway…
Below is how I addressed some concerns on Reddit. I think it's a start, and I hope I got it right and readable for a lay audience; if not, please correct me.
Processing a BCH transaction today can use CPU time that is bounded by a linear function of transaction size.
Linear scaling would continue to hold even if Group Tokens were introduced. Right now I take Andrew's claim(s) at face value, but it would be nice if others verified them at this stage, and verification would be a must before inclusion in the HF.
A peer-review process. But we have to get the peers to review. Seems like Andrew could use some help motivating his peers to review, or presenting his work in a reviewable way. That’s what prompted me to get the ball rolling.
This includes database operations, since each of these can be performed in linear time and the number of these is bounded linearly by the size of the transaction.
Isn't it bounded by the number of outputs, though? Size can mean different things. SLP tokens add to the size of transactions, do they not? Anything in the OP_RETURN adds to the size, and that hasn't been a problem even though anyone is allowed to put whatever they want in there to inflate a transaction's size, even now.
Thing is, with OP_RETURN, miners don't have to do anything with that data, so it only increases the kilobytes of data passed around; it doesn't add cycles. More transaction outputs, linearly more time. Bigger outputs, because of more data in each, add less than linearly. Why? Because miners have to doSomething() with every output, whether it contains a token transaction or not. That doSomething() is quite big: they have to verify the signatures, perform crypto math, etc. Every output increases the number of times a miner has to call doSomething() by one. Adding Group Tokens doesn't increase that number. It adds a little basic bookkeeping math inside the doSomething() function. It piggy-backs on something miners have to do anyway, and then, when all the doSomething() calls are finished, it checks the signatures plus this group-token running total and says whether the TX is valid or not.
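To make the piggy-backing concrete, here's a minimal sketch. All the names and field layouts are mine, for illustration only, not from Andrew's spec or any node codebase; the point is just that the token bookkeeping is a constant bit of arithmetic inside work the validator already performs per output.

```cpp
#include <cstdint>
#include <map>

// Hypothetical output shape, for illustration only.
struct Output {
    int64_t amountSats;    // plain BCH value
    uint64_t groupId;      // 0 = not a token output (assumption)
    int64_t groupAmount;   // token quantity, if any
};

// Running per-group totals for the transaction being validated.
using GroupTotals = std::map<uint64_t, int64_t>;

// The per-output work a validator does anyway: script execution,
// signature checks, crypto math. The token part is one map update.
bool doSomething(const Output& out, GroupTotals& totals) {
    // ... existing expensive checks (signatures, scripts) go here ...
    if (out.groupId != 0) {
        totals[out.groupId] += out.groupAmount;  // the "little math"
    }
    return true;
}

// After all outputs are processed, per-group input totals must match
// per-group output totals, much like the BCH balance check.
// (Simplified: real rules for creating/destroying tokens would relax this.)
bool groupsBalance(const GroupTotals& inputs, const GroupTotals& outputs) {
    return inputs == outputs;
}
```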
Every Group Token TX is also a BCH TX, because it has to pay the fee. This is no different from some TX which amounts to 0 BCH and has something in the OP_RETURN: it spends the entirety of its inputs on fees and gets that piece of data written to the blockchain. The only difference is that the OP_RETURN TX doesn't ask the miner to add a few numbers to check whether the data makes sense. Today, SLP users/nodes do that math.
Somebody has to do the math for tokens to function. The argument is - it would be better for the whole ecosystem if miners did it.
Processing time would continue to be bounded by a linear function, with a slightly altered slope. That's it. That would be the cost of processing, if my understanding is correct.
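As a rough cost model (my own notation, with hypothetical constants, not figures from the proposal):

```latex
% n = number of outputs; c_0 = fixed per-TX overhead;
% c_out = existing per-output work; c_grp = added token arithmetic
t_{\mathrm{now}}(tx) \approx c_0 + c_{\mathrm{out}} \cdot n
\qquad
t_{\mathrm{group}}(tx) \approx c_0 + (c_{\mathrm{out}} + c_{\mathrm{grp}}) \cdot n
```

Same linear form in both cases; only the slope moves from c_out to c_out + c_grp, and c_grp (a map update and an addition) should be tiny next to signature verification. Verifying that c_grp really is constant per output is exactly what the peer review needs to establish.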
Moreover any changes should not significantly increase the size of the UTXO database.
Why shouldn't it increase just a little, though? And how do we define what is little and what is significant? We aren't the stakeholders here. It's not our CPU and RAM that will have to process this, so we should really be asking nodes & miners that.
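To put rough numbers on "a little" (my own ballpark layout, not figures from the proposal), here is what per-entry growth might look like if token-bearing entries carried a group annotation:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical layouts, for sizing intuition only.
struct UtxoEntryToday {
    int64_t amountSats;                 // 8 bytes
    std::vector<uint8_t> scriptPubKey;  // commonly ~25 bytes of script
};

struct UtxoEntryWithGroup {
    int64_t amountSats;                 // 8 bytes
    std::vector<uint8_t> scriptPubKey;  // commonly ~25 bytes of script
    uint8_t groupId[32];                // e.g. a 32-byte group identifier
    int64_t groupAmount;                // 8 bytes
};
```

Ballpark: a few tens of extra bytes, and plausibly only on token-bearing entries (an assumption on my part). Whether that counts as "a little" or "significant" is exactly the question for the people running the hardware.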
Would you work just a little extra to have the best simple token operations on the market? That's what we have to ask nodes & miners. There are always trade-offs. Will refusing a little sacrifice now prevent BCH from achieving its potential? There's an opportunity cost involved, and it grows every day we don't take action. Users will enjoy the benefit for "free"; we're not the stakeholders. All we have to do is pay the fee if we want to use it. Users will also enjoy the benefit that comes with other users and new developers adopting our first-class tokens, and having proper tokens should help there. They will all pay for the services provided by our blockchain through BCH fees.
There are two limiting possible UTXO implementations: a “fast” one that keeps database entries in RAM hashed by identifier; and a “cheap” one that stores database entries on random access storage.
We're far below what the hardware can handle. Changing the slope of the linear scaling won't push us past the hardware just like that, if ever. Maybe this could be argued better, but I'm not equipped with the arguments right now.
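For reference, here's a minimal sketch of the two limiting designs, with names of my own invention. The token-relevant observation: in either design the group data would live inside the entry the validator fetches anyway, so tokens add no extra lookups, only slightly bigger entries.

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

struct UtxoEntry { int64_t amountSats; /* ... plus any group fields ... */ };
using OutPoint = std::string;  // "txid:index", simplified for the sketch

// "Fast": database entries in RAM, hashed by identifier.
struct FastUtxoSet {
    std::unordered_map<OutPoint, UtxoEntry> entries;
    std::optional<UtxoEntry> fetch(const OutPoint& op) const {
        auto it = entries.find(op);
        if (it == entries.end()) return std::nullopt;
        return it->second;  // one hash probe per lookup
    }
};

// "Cheap": database entries on random-access storage, same interface.
struct CheapUtxoSet {
    std::optional<UtxoEntry> fetch(const OutPoint& op) const;
    // ... implemented as a disk read: one seek per lookup instead of
    // a hash probe; tokens change the entry size, not the lookup count
};
```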
Any need for locking adds huge difficulty to the designer of node software.
Agreed. Thing is, it looks like Group Tokens don't require locking, at least that's what I see from recent discussions. If processing each output means calling doSomethingWithOutput(), then that function can be executed for each output in parallel. Then you have to tally the outputs of the transaction, and again we add a little math to doSomethingWithTXes(). That step has to wait for all the output processors to finish anyway, because it has to tally the BCH balance of the TX. So no extra locks there, either.
Where are the locks?
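Here's a minimal sketch of that claim, again with hypothetical names and shapes of my own. Each output is validated independently with no shared mutable state, and the per-group tallies are merged only at the join point that already exists for the BCH balance check: no locks anywhere.

```cpp
#include <cstdint>
#include <future>
#include <map>
#include <vector>

struct Output { int64_t amountSats; uint64_t groupId; int64_t groupAmount; };
using GroupTotals = std::map<uint64_t, int64_t>;

struct OutputResult { bool ok; int64_t sats; GroupTotals groups; };

// Runs independently per output: reads only its own output, writes
// only its own result, so outputs can be processed in parallel.
OutputResult doSomethingWithOutput(const Output& out) {
    OutputResult r{true, out.amountSats, {}};
    // ... the expensive existing checks would go here ...
    if (out.groupId != 0) r.groups[out.groupId] += out.groupAmount;
    return r;
}

// The join point that exists anyway for the BCH tally: wait for every
// output processor, then fold in the (tiny) extra group arithmetic.
bool doSomethingWithTx(const std::vector<Output>& outputs) {
    std::vector<std::future<OutputResult>> jobs;
    for (const auto& out : outputs)
        jobs.push_back(std::async(std::launch::async, doSomethingWithOutput, out));

    int64_t totalSats = 0;
    GroupTotals totals;
    for (auto& job : jobs) {
        OutputResult r = job.get();  // the wait the BCH tally needs anyway
        if (!r.ok) return false;
        totalSats += r.sats;
        for (const auto& [gid, amount] : r.groups) totals[gid] += amount;
    }
    // ... compare totalSats and totals against the transaction's inputs ...
    return true;
}
```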