CHIP 2021-01: Allow Transactions to Be Smaller in Size

It’s likely beneficial to be as inclusive as possible when picking between two possible changes with the same costs. A comparison between change and no change, though, should not follow the same line of thought, because changing the rules has an inherent cost that not changing does not.

A comparison between change and no change should absolutely follow the same logic, as long as you add that inherent extra cost to the change side of the change-vs-no-change scenario. The evaluation process is the same: does the benefit over time outweigh the immediate cost?

Unrelated to the philosophy:
Tom made a couple of good points about the non-negligible benefits of this change in the post above mine. It is clear that for a negligible cost there are non-negligible benefits.

1 Like

Yeah, this is another benefit I was thinking of, but didn’t feel I should bring up. Now that someone else mentioned it, I’m happy to pile on. As someone who has written an occasional regtest functional test for ABC, I can say it is definitely irksome to have to pad transactions, and this happens all the time because in regtest it’s most efficient to do mock transactions with OP_1 scriptpubkeys, no signature required. An OP_1 -> OP_1 tx is 61 bytes, or 71 if it has two such outputs; it is rare to hit 64 bytes exactly.
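
For anyone wondering where 61 and 71 come from, here is a rough byte-accounting sketch (plain Python; the helper name is mine, field sizes follow the standard transaction serialization):

```python
# Byte accounting for a minimal OP_1 -> OP_1 regtest transaction.
# Spending an OP_1 output needs an empty scriptSig (no signature).
def tx_size(n_outputs: int) -> int:
    version = 4
    input_count = 1              # varint, 1 byte for small counts
    tx_input = 36 + 1 + 0 + 4    # outpoint + scriptSig len + empty scriptSig + sequence
    output_count = 1             # varint
    tx_output = 8 + 1 + 1        # value + script len + OP_1
    locktime = 4
    return (version + input_count + tx_input
            + output_count + n_outputs * tx_output + locktime)

print(tx_size(1))  # 61
print(tx_size(2))  # 71
```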

Another anecdote: The 100 byte limit caused many functional tests to fail in ABC. Amusingly, they only found this out by surprise when the rule activated on mainnet in Nov 2018. (I.e., they had never tried running their functional test suite with the upgraded rules, and CTOR at the same time caused further chaos.)

2 Likes

I kinda agree with both you and im_uname. The topic is essentially that people love to fix things in a project they adopt; the question is what to evaluate when the fix is in the protocol that a bunch of projects implement today.

So, yeah, you absolutely need to take into account the cost for all implementations that need to change. The product owner of this change can’t just assume they can outsource the work of a change to all the stakeholders. There needs to be some negotiation on that between the PO and the individual stakeholders.

As a second metric we have the project-wide cost of future adoption. For BTC there are SegWit and LN, which make it much harder for developers to enter that ecosystem. This is a metric that IMHO should count for something too: making things more complex for future stakeholders to enter should carry a large cost.
Making things simpler for future stakeholders to join our movement may alleviate some of the cost put upon the current stakeholders, basically because we all want and need BCH to have massive growth over the next decade.

2 Likes

A CHIP: 2021-01-Minimum Transaction Size.md

4 Likes

Reviewed, looks good to me. Recommend activation of this CHIP in the first network upgrade after May 2021.

4 Likes

I definitely am a fan of this CHIP and think it can be grouped into the next protocol upgrade, though it does not warrant a protocol upgrade by itself. My thought process is simple: existing miners can use the exact same code they have today, with extra padding in their coinbase transaction; it doesn’t require them to change anything. New miners or users will have a better experience, and we aren’t continuing with a limit that was never well specified. I complained about this issue to ABC, and would love to see it simply be != 64.

Then as a bonus, there are a few transactions smaller than 64 bytes that would be allowed.
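
To make the two candidate rules concrete, here is a minimal sketch (hypothetical Python, not actual node code; the function names are mine):

```python
# Two candidate replacements for the old 100-byte minimum (illustrative only):
def allowed_ge65(size: int) -> bool:
    return size >= 65   # forbids everything below 65 bytes

def allowed_ne64(size: int) -> bool:
    return size != 64   # forbids only the one size that breaks merkle proofs

# The difference is exactly the sub-64-byte transactions mentioned above:
print([s for s in range(60, 66) if allowed_ne64(s) and not allowed_ge65(s)])
# -> [60, 61, 62, 63]
```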

3 Likes

Concur, except I also prefer the more precise !=64 assertion because it helps with institutional memory. Imagine one day the issue with the merkle tree somehow goes away: this restriction can then be easily removed with little risk. However, imagine also that before then some issue pops up in an unrelated “improvement” that would break if a tx is <65 bytes, but it’s either not detected or not documented because this wider-scope restriction already exists. Now, when the attempt is made to remove the restriction because the merkle tree caveat no longer applies, we hit surprise runtime issues.

This is the kind of technical debt we find in long-term “enterprisey” ™ projects, and I’d prefer to nip it in the bud rather than introduce the potential for it.

Mostly, I agree 1000% with Tom’s consideration of the project-wide cost of future adoption and the desire to make things simpler for future stakeholders. This is critical, and it has clearly not been a community-wide priority given the current state of much of the core software. Little by little we should remedy this whenever possible, and certainly not introduce more of the issue when not absolutely necessary.

2 Likes

I find that a very compelling argument, and frankly the first real argument to decide between the two options.

Together with the same feeling from quest, I’m tempted to change the proposal to be “NOT 64 bytes”. I’ll leave this topic open here a while longer to allow others to find it and comment before I change it, though.

3 Likes

Perhaps ≠64 makes more sense if you think the limit is going to be removed eventually, and >64 makes more sense if you think the limit will remain permanently. ≠64 is more work now, >64 is potentially more work in the future. The technical debt only exists if you anticipate future removal of the limit.

Personally I don’t think it’s likely that the limit is ever going to be removed, because removal requires significant changes across the ecosystem and there is now no more incentive to make these changes.

The origin of this story, in its simplest form, is that the merkle-proof message sends over a list of hashes, and whether any given hash is a leaf or an inner node is determined by assumption. That assumption can go wrong should a transaction be exactly twice the hash size, i.e. 64 bytes.

The simplest solution is to avoid having such transactions.
The technically more correct solution is to fix the message-format for merkle-proofs and make things explicit there. A simple one-byte addition per hash to indicate its type.
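
To spell out why exactly 64 bytes is the dangerous size: a merkle inner node is SHA-256d over the 64-byte concatenation of its two 32-byte children, so a 64-byte transaction hashes exactly like an inner node. A minimal sketch of the ambiguity (plain Python):

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Any 64-byte blob can be read two ways by a merkle-proof verifier:
blob = bytes(64)                        # stand-in for a serialized 64-byte transaction
txid = sha256d(blob)                    # hashed as a leaf (a transaction)
node = sha256d(blob[:32] + blob[32:])   # hashed as an inner node (two 32-byte child hashes)
assert txid == node                     # identical, so the verifier cannot tell them apart
```

The one-byte fix described above would live in the proof message itself, tagging each entry as leaf or node.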

In the future I’m sure the message-format will change as the medium changes. People might start to send stuff over JSON, as a simple example.

As such I’m quite optimistic that as time goes on we will fix this properly, in the right layer. It may take 10 years as communication layers are upgraded, but I do think that will happen. Probably in much less than 10 years.

And when that happens, there will no longer be any need to forbid the 64-byte tx.
Remember, the only reason this solution was picked is that it had practically zero impact.

3 Likes

I’ve reviewed the CHIP and support it as written. My interest is as a development stakeholder.

2 Likes

I reviewed this and would support activation of this CHIP (revision 75b97e22a3dc295de7373255025f38cd0911b866) in the first network upgrade after May 2021.

1 Like

I renamed the CHIP, adding “CHIP” to the filename and generally making it clearer, which made the link above fail.

Here is the new link: CHIP-2021-01-Allow Smaller Transactions.md

Are 63 bytes and smaller allowed now? The wording is ambiguous: the word “limit” in “the limit that transactions shall not be 64 bytes in size” could perhaps better be replaced with “restriction” or “rule”. The impact section still says changing the minimum from 100 to 65, but other sections appear to have changed in line with the argumentation for allowing 63 bytes and smaller.

1 Like

Thank you for proofreading it; I agree the wording was not as clear as it could have been. I pushed a commit to clarify and used the word “rule”.

I updated the last-edit date to today, but made no change to the version number, since this is a language change, not a change that affects the spec.

Awemany (working as a BU dev) and I originally brought this problem to the BCH community’s attention. When ABC chose to set the limit at 100 bytes, I pointed out this was idiocy way back before the original fork, and was ignored. So BU is happy to support a change that fixes yet another dumb decision autocratically made during the ABC days.

3 Likes

Is this ready for implementation? (for May 2023 activation)

2 Likes

In my opinion it is ready for implementation into BCHN.

3 Likes

“The technically more correct solution is to fix the message-format for merkle-proofs and make things explicit there. A simple one-byte addition per hash to indicate its type.”

I think we should go that way. Or even better: make a tagged hash, to avoid reusing hashes across contexts (so that “some 512-bit tag” can be used to fill the first 512-bit block for SHA-256). Also because the merkle branch is not the only thing hashed by SHA-256d: there are also block headers (80 bytes per header), and it is possible to create some block header that could be interpreted as a valid transaction. Of course, attacking in this way is hard, because it would require mining a lot of bytes, but technically it can be aligned quite well (also because it is possible to set the sequence to 0xffffffff and then use the locktime as a nonce to mine a transaction).
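
A sketch of what such a tagged hash could look like, borrowing the construction from BIP340, where the repeated tag hash fills exactly one 512-bit SHA-256 block (the tag names are made up for illustration; this is not a spec):

```python
import hashlib

def tagged_sha256d(tag: str, msg: bytes) -> bytes:
    # The 32-byte tag hash repeated twice fills exactly one 64-byte
    # (512-bit) SHA-256 input block, domain-separating each context.
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(hashlib.sha256(t + t + msg).digest()).digest()

# The same 64 bytes no longer hash identically in different contexts:
assert tagged_sha256d("MerkleNode", bytes(64)) != tagged_sha256d("Transaction", bytes(64))
```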

I think it would break waaay too many things, without a clear benefit.
If quantum computing ever becomes a threat, we’d need to move to 384-bit hashes, which would break stuff all the same, but the benefit would be survival of the blockchain :smiley: So, in the same go we would be able to fix this too.

Is it, though? I don’t think it has enough degrees of freedom for it to even be theoretically possible to match a valid TX. I entertained this idea while working on the group tokens (“unforgeable groups”) CHIP and had the idea of also allowing coinbase TX-es to create a new group, where it would be generated from the previous block hash instead of the TXID.

PS: oh, but you mean using an 80-byte TX to match a block header… in a sort of collision attack? OK, suppose you manage it and now there’s a TX and a block with the same hash: what’s the consequence?