Left this out of my initial 2025 cleanup commit: I think we should also consider a more conservative approach of limiting hash digests by the size of the containing transaction. So hashing would have a density-based limit like SigChecks. (I'd also like to see SigChecks simplified to make validation more stateless – any set of valid transactions should make a valid block, but currently, there are block-level SigChecks limits.)
Context: I'd like to see a lot of mobile wallets internally running pruned nodes for the best privacy and security, and it would be nice to ensure TX validation requirements remain inconsequential vs. bandwidth requirements. So even if the limits as initially proposed are safe for the network itself, more careful targeting can still help to reduce worst-case power usage on mobile devices.
For standard transactions, there's little reason a transaction should ever need more than 1 hash digest iteration per 32 bytes (i.e. even if every hash input is double-hashed and every hash input is shorter than one 64-byte message block, no efficiency is gained from longer messages). Even if the hashed data is provided by the locking script/UTXO, the pattern would necessarily be P2SH, so the data must still be pushed within the transaction itself as redeem bytecode.
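To make the density concrete, a "digest iteration" here corresponds to one 64-byte SHA-256 message block, and the block count for an n-byte message follows from SHA-256's padding rule (a 0x80 byte, zero padding, and an 8-byte length field). The helper names below are illustrative, not from any implementation:

```typescript
// Count the SHA-256 message blocks ("digest iterations") required to
// hash a message of `messageLength` bytes. Padding appends 0x80, zero
// bytes, and an 8-byte length, rounding up to a multiple of 64 bytes.
const digestIterations = (messageLength: number): number =>
  Math.ceil((messageLength + 9) / 64);

// Messages up to 55 bytes fit in a single block once padded:
console.log(digestIterations(55)); // 1
// A 56-byte message no longer fits with padding, so it takes two:
console.log(digestIterations(56)); // 2

// Double-hashing (e.g. OP_HASH256) adds a second pass over the
// 32-byte intermediate digest, costing one more iteration:
const hash256Iterations = (messageLength: number): number =>
  digestIterations(messageLength) + digestIterations(32);
console.log(hash256Iterations(20)); // 2
```

Note that a sub-block message double-hashed costs 2 iterations for its (at most) 64 bytes of input, which is how the 1-iteration-per-32-bytes worst case arises for standard transactions.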
For nonstandard transactions, we would set the limit at ~2 digest iterations per byte, the density that is currently possible under today's limits. This would allow experimentation past the standardness limit for transaction relay and also ensure that any currently spendable UTXOs remain spendable indefinitely. (As an aside, I think the 2020 SigChecks changes may have theoretically violated this, so on principle we should probably target slightly higher nonstandard signature operation limits to remedy that: https://gitlab.com/bitcoin-cash-node/bitcoin-cash-node/-/issues/109. I'll mention that OP_CODESEPARATOR would be a problem without SigChecks, and OP_CODESEPARATOR remains useful in some scenarios for allowing signatures to efficiently commit to an executed path within a transaction.)
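Putting the two densities together, the per-transaction digest budget might be derived as follows. The constants and function names are illustrative of the proposal above, not a specification:

```typescript
// Hypothetical density constants from the proposal:
// standard transactions: 1 digest iteration per 32 transaction bytes;
// nonstandard transactions: ~2 digest iterations per transaction byte.
const STANDARD_BYTES_PER_ITERATION = 32;
const NONSTANDARD_ITERATIONS_PER_BYTE = 2;

// Compute the digest iteration budget allocated to a transaction
// based solely on its serialized byte length.
const digestIterationBudget = (
  transactionByteLength: number,
  standard: boolean,
): number =>
  standard
    ? Math.floor(transactionByteLength / STANDARD_BYTES_PER_ITERATION)
    : transactionByteLength * NONSTANDARD_ITERATIONS_PER_BYTE;

// A 400-byte standard transaction may use at most 12 iterations...
console.log(digestIterationBudget(400, true)); // 12
// ...while the nonstandard (consensus) allowance would be 800.
console.log(digestIterationBudget(400, false)); // 800
```

Because the budget is purely a function of transaction size, the limit composes cleanly: any set of individually valid transactions stays valid in aggregate, matching the statelessness goal above.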
So VM implementations would track both digestIterations and signatureChecks over the course of the evaluation, failing the evaluation if either exceeds the fixed allocation determined by the transaction's size. When validating a full transaction, the totals after each input's evaluation can be passed in as the initial values for the next input's evaluation to minimize total computation; validation can stop as soon as a limit is exhausted, with no need to wait for all inputs to be evaluated and the transaction-level metrics to be aggregated. (Note that parallelization of input validation within a transaction is somewhat possible but low-value, as double spending must still be prevented between inputs in the same transaction. It's more useful to parallelize validation at the transaction level, where the same coordination has to happen to prevent double spending anyway.)
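A validation loop along those lines, threading the cumulative counters from input to input and aborting as soon as either budget is exhausted, might look like this (all types and names are a sketch, not any particular implementation's API):

```typescript
interface Metrics {
  digestIterations: number;
  signatureChecks: number;
}

interface Limits {
  maxDigestIterations: number;
  maxSignatureChecks: number;
}

// Hypothetical per-input evaluator: runs one input's unlocking and
// locking scripts, returning the running totals with that input's
// usage added.
type EvaluateInput = (inputIndex: number, running: Metrics) => Metrics;

// Validate all inputs sequentially, passing each input's ending
// metrics in as the next input's starting metrics, and failing early
// the moment either transaction-wide limit is exceeded.
const validateTransactionInputs = (
  inputCount: number,
  limits: Limits,
  evaluateInput: EvaluateInput,
): boolean => {
  let metrics: Metrics = { digestIterations: 0, signatureChecks: 0 };
  for (let inputIndex = 0; inputIndex < inputCount; inputIndex++) {
    metrics = evaluateInput(inputIndex, metrics);
    if (
      metrics.digestIterations > limits.maxDigestIterations ||
      metrics.signatureChecks > limits.maxSignatureChecks
    ) {
      return false; // budget exhausted; skip remaining inputs
    }
  }
  return true;
};
```

Because the counters only ever increase, carrying them forward is equivalent to summing per-input totals at the end, but the early return avoids evaluating inputs that can no longer change the outcome.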