The stack is temporary memory the running Script uses; it doesn’t exist as data on chain. What exists are data pushes that place the contract’s or input’s constants on the stack so the data can be processed. The stack as a whole doesn’t matter; what matters is whether you can place the TX on it or not. To place a TX you have to provide it as an input’s data push. But if all TXs have the same size limit, it’s obviously impossible to place a 1MB TX inside a 1MB TX, no matter the stack limits, because the TX has to be encoded inside the TX for it to be pushed on the stack.
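To make the size argument concrete, here's a minimal Python sketch, assuming a shared 1 MB cap and a nominal ~100-byte envelope overhead (both numbers are just illustrative, not consensus constants):

```python
# Toy size argument: to put a raw TX on the stack, its full serialization must
# appear as a data push inside the spending TX, so the spender is always
# strictly larger than the TX it embeds.

MAX_TX_SIZE = 1_000_000          # assumed shared size cap, in bytes (illustrative)

def min_spender_size(embedded_tx_size: int, overhead: int = 100) -> int:
    """Lower bound on the size of a TX that pushes `embedded_tx_size` bytes
    as input data. `overhead` stands in for version, locktime, other
    inputs/outputs, and push opcodes (rough assumption)."""
    return embedded_tx_size + overhead

inner = MAX_TX_SIZE              # a max-size (e.g. 1 MB) coinbase TX
print(min_spender_size(inner) <= MAX_TX_SIZE)   # False: it can never fit
```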
No, with the May 2025 upgrade, 10kB is the max item size. The size of other items is irrelevant.
Of course, during its runtime, a Script will produce and destroy stack items of various sizes. To get a “pass” it must leave exactly one non-zero item of any size on the stack.
We could, yeah, but it would not solve the problem, because you can’t have a 1 MB TX contain another 1 MB TX as an input’s data.
Lots of payout addresses, typically P2Pool.
Yes. It can pay out to just 1 output where a hash commits to an unlimited number of addresses. Then the contract “unrolls” the payouts in a bunch of TXs descendant of the coinbase.
It is, but nobody cared enough to build a P2P pool using this technique.
This works at any limit.
Doesn’t matter, because the data (the payout address list) is provided piece-wise in later transactions as the coinbase is unrolled, and can be processed TX by TX. The contract could be unrolled as a tree. The coinbase output commits just the root. The next transaction spends the coinbase and pays out to the child nodes. Each child node gets spent in the next TX and so on until you reach the leaves, which are payments to individual miners.
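Roughly, the commitment could look like this toy Python sketch (the hashing and leaf serialization here are placeholders, not an actual covenant design):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def node_hash(children: list[bytes]) -> bytes:
    # Commit to an ordered list of child commitments (or leaf payouts).
    return h(b"".join(children))

# Leaves: individual miner payouts, serialized somehow (placeholder encoding).
leaves = [h(f"miner-{i}:{1000 + i}".encode()) for i in range(8)]

# Build a binary tree bottom-up; the coinbase output only commits to the root.
level = leaves
tree = [level]
while len(level) > 1:
    level = [node_hash(level[i:i + 2]) for i in range(0, len(level), 2)]
    tree.append(level)
root = level[0]

# "Unrolling": the TX spending the coinbase reveals the root's children and
# pays to outputs committing to them; each of those is spent in turn, and so
# on, until the leaves (actual miner payouts) are revealed. Verification at
# each step is just: hash of revealed children == commitment being spent.
assert node_hash(tree[-2][0:2]) == root
```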
So the problem you are trying to solve is actually “recursive TX algorithms”.
You are not really trying to solve “2kB being too much”; you’re trying to solve a maximum-size TX (the coinbase TX) not being able to fit into another TX that could be a smart contract of some sort, which could be very useful for P2Pool payouts (and generally for including coinbase TX data in contracts, which breaks when the coinbase carries more data than fits in a maximum-size TX).
Honestly, you could be more clear about it. Your purely mathematical approach to things kind of sucks at explaining things clearly.
Now that I more-or-less get what is going on, I initially support this proposal.
However I would still like a comment from another person that is more technical than me (maybe calin).
This has nothing to do with enabling P2Pool payouts; those don’t need a TX-within-TX contract pattern. I only mentioned P2Pool payouts because really the only use of bigger coinbase TXs is P2Pool payouts, and 2kB would affect potential direct payouts to a big number of addresses.
So I just showed there’s an alternative that could work even today and can pay out to even more addresses than could fit in a 1MB coinbase TX; therefore a lower limit like 2kB wouldn’t prevent big P2Pools from operating.
Not really. A general solution to recursive TX-within-TX would need something like SegWit or PMv3, because what if your TX already had a TX in it as some input’s data, and that TX had a TX in it… the size of the TX needed to contain all that would keep growing. Jason wrote a nice blog post about that: PMv3: Build Decentralized Applications on Bitcoin Cash
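A toy calculation of why the naive approach keeps growing (the per-level sizes are made-up numbers, just to show the pattern):

```python
# Toy arithmetic for why naive TX-within-TX recursion grows without bound:
# each level's TX must embed the full previous TX as a data push, so sizes
# only ever increase with depth.
base_tx = 300        # assumed size of the innermost TX, in bytes (illustrative)
overhead = 150       # assumed per-level envelope: inputs, outputs, push opcodes

size = base_tx
for level in range(1, 6):
    size = size + overhead   # the new TX carries the whole previous TX plus its own envelope
    print(f"level {level}: {size} bytes")

# Schemes like PMv3 sidestep this by letting a TX reference a parent by hash
# instead of embedding the parent's full serialization.
```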
When you want to push a coinbase TX into an input, you don’t have that problem, because a coinbase doesn’t have a parent TX and spends no previous outputs, so it doesn’t run any Scripts that might want to operate on some past TX.
That’s nice too, it fixes things. Even a 1-level reference is useful.
So initially your scheme breaks P2Pool payouts.
All P2Pool software would have to be updated to support the new way of doing payouts.
I knew there would be some downsides, because the previous big coinbase payouts happened for a reason.
See? You were a little unclear about it.
To determine whether it is “worth it” maybe we (but actually you, since the idea is yours) should contact whoever codes current P2Pool software and ask what they think about it.
I mean, an unlimited number of payout addresses seems nice; it would probably be useful to them. Unless they already do this today, in which case the problem is solved.
Miner pool centralization and related geopolitical pressure is a real thing, so P2Pools might get very important in the future.
P2Pools should not be treated lightly. They are the embodiment of decentralization, basically. If somebody in the future invents P2Pool that works as well as normal pools or better, it could be a cornerstone of the future for BCH.
It breaks hypothetical P2Pool payouts that would just extend the current method to more addresses, but it does not break current P2Pools, because we can see on chain that current ones don’t use more than 2kB (~60 addresses), and it doesn’t generally break the concept of huge P2Pools because the alternative method is possible even now.
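For reference, a quick back-of-the-envelope check of that ~60-address figure (the output and overhead sizes are approximate):

```python
# Rough check: a P2PKH output is 8 (value) + 1 (script length) + 25 (script)
# = 34 bytes, and the coinbase needs roughly 100 bytes for version, the
# coinbase input, locktime, etc. (rough assumption).
P2PKH_OUTPUT = 8 + 1 + 25
COINBASE_OVERHEAD = 100
print((2000 - COINBASE_OVERHEAD) // P2PKH_OUTPUT)   # ~55 outputs in a 2kB coinbase
```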
Another thing is that there may already be a natural limit: ASIC design. The nonce is too small to grind; it gets exhausted quickly and then you have to get a new Merkle root to keep going. If you want to avoid having to sync with the node too often, you’d pack the coinbase TX + intermediate Merkle hashes as part of the work given to the ASIC, so it can just grind the coinbase TX to change the root and “reset” the nonce whenever it gets exhausted. This could be the reason why we don’t see >2kB coinbases anymore.
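Something like this rough Python sketch of that Stratum-style workflow (placeholder data, not real TX serialization):

```python
import hashlib

def dsha(b: bytes) -> bytes:
    # Bitcoin-style double SHA-256.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(coinbase_tx: bytes, branch: list[bytes]) -> bytes:
    # The miner only needs the coinbase TX and the sibling hashes ("merkle
    # branch"); it never touches the other transactions in the block.
    node = dsha(coinbase_tx)
    for sibling in branch:
        node = dsha(node + sibling)
    return node

# Placeholder coinbase template with an extraNonce slot, and placeholder siblings.
coinbase_template = b"...coinbase-prefix..." + b"%08d" + b"...outputs..."
branch = [hashlib.sha256(bytes([i])).digest() for i in range(3)]

# When the 32-bit header nonce is exhausted, bump the extraNonce inside the
# coinbase and recompute the root locally, with no round-trip to the node.
for extra_nonce in range(3):
    cb = coinbase_template.replace(b"%08d", b"%08d" % extra_nonce)
    print(extra_nonce, merkle_root(cb, branch).hex())
```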
Yeah, that’d be part of a CHIP process to limit coinbase, but I’m not sure it’s worth kicking it off just yet. This is just some preliminary talk to see what people think about it.
Yup. I doubt they do it, because focus is on BTC and BTC can’t do it. But that CTV proposal would let BTC do it, too.
I have a bad feeling about this. Sometimes putting in a restrictive limit does good in the short term but ends up being very bad in the long term, similar to what happened with the blocksize limit: everyone at the time thought it was consensus, and later on we got the blocksize war.
I think having an alternative solution for the main problem you mentioned would be the better route.
Shadow: You may not have any interest in avalanche, but it’s relevant to point out how a proposal interacts with other proposals. If you want to flag a post, please just do that and don’t add additional commentary noise.
Marius: This is a technical forum so it would be nice if you could confirm any AI input and curate your own notes about a cutting edge topic like this where AIs are well known to hallucinate.
I’m leaving this publicly as a note for others to use also. If you guys have any further comments on the topic, please do not post here and feel free to DM me.
What about my dreams of one day liberating the world from custodial mining and having pools pay out individual miners all in the coinbase txn with like ~25k outputs?
This proposal makes that dream harder. Yes, I know you can commit to some crazy contract, but there is nothing that beats the simplicity of doing it directly.
I am not a fan of artificially limiting it for that reason. I see some imagined utopian mining scenario as more advantageous than some imagined utopian coinbase-txn VM-verification scenario, where it’s not clear to me how that would even be used in VM script in any way…
In general, we should take care of all participants of our ecosystem, including miners.
We should not “break userspace”. Breaking miner’s (including P2Pool miners) daily jobs or making them significantly harder is also “breaking userspace”.
In general, I like the idea (I’m also thinking there might be cases in the future where wallets would want the coinbase TX for some reason - e.g. some metadata that it provides - so keeping it reasonably sized might be beneficial). But now I have some hesitations because of this.
I think it’s already limited due to how ASICs are designed, and that is unlikely to change, so you can pretty much count on coinbases staying small, but there’s still a 0.01% chance someone decides to somehow mine a bigger one.
My main motivation for this was feeding coinbase TXs to a contract. With the 2026 upgrade, a workaround will be available: it will be possible to generate a compact proof that a coinbase TX is bigger than 10kB, without actually needing to push the whole TX as proof. With this, the coinbase oracle can simply skip the too-big coinbases, which should be extremely rare anyway; by implementing the skip option, such TXs can’t break the contract.