
Conversation

@petertodd

Essentially this makes passing CHECKLOCKTIMEVERIFY (BIP65) evaluation an IsStandard() rule, to make it easy to test applications using it; blocks containing transactions with invalid CHECKLOCKTIMEVERIFY arguments are still accepted.

This pull-req is not a soft-fork, nor does it contain any code to start that process.

An alternative would be a command-line flag to enable CLTV; however, actually implementing that is somewhat ugly. There is a small risk of people thinking that it's safe to depend on CLTV, although I think that's outweighed by the value of thoroughly testing everything but the soft-fork itself.

Thanks to @sipa and also @btcdrak for the rebase. Credit goes to @gmaxwell for the suggestion of comparing the argument against the transaction nLockTime rather than the current time/blockheight directly.

Background: BIP65 - https://github.com/bitcoin/bips/blob/master/bip-0065.mediawiki
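
For illustration, the kind of scriptPubKey this enables (symbolic script notation; the timestamp is an arbitrary example):

    <1433116800 (2015-06-01 UTC)> CHECKLOCKTIMEVERIFY DROP
    <pubkey> CHECKSIG

Such an output can only be spent by a transaction whose nLockTime is at or above the given value and whose spending input is non-final, i.e. it cannot be spent by a transaction minable before that time.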

@gmaxwell

Okay, I was about to wag my finger at you about making it IsStandard and thereby making the hardfork possible, but then I caught that you said "passing". Your text is reasonably clear; I'm leaving this comment just in case someone else makes the same reading error.

I'm unsure how useful this is. There is some risk that people will mistake IsStandardness enforcement for adequate security. ("I tried to cheat and couldn't!"). OTOH, it sounds like a really interesting way to roll out softforking changes that paces the risk out somewhat better.

@petertodd

Okay, I was about to wag my finger at you about making it IsStandard and thereby making the hardfork possible

To be clear, what exactly were you thinking I'd done?

I'm unsure how useful this is. There is some risk that people will mistake IsStandardness enforcement for adequate security.

Yeah. OTOH, many of the applications for CLTV, like micropayment channels, already have some of that risk in that they're vulnerable to tx mutability. Not as serious, but it's analogous.

Anyway, people who make that kind of mistake are probably making other mistakes that'll have them lose money anyway...

@gmaxwell

@petertodd My initial read of the description had me thinking you made CLTV IsStandard with no constraint on validity ("no code to start the process of a soft fork"), creating a risk for miners who might include invalid ones.

@petertodd

@gmaxwell Ah! Yeah, the NOPx discouragement code still applies, and in fact has a special case to make sure that if CLTV isn't enabled, NOP2 usage is discouraged: https://github.com/bitcoin/bitcoin/pull/5496/files#diff-be2905e2f5218ecdbe4e55637dac75f3R339

(DISCOURAGE_UPGRADABLE_NOPS is unittested too, so without that the unittests would fail)
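
For context, the shape of that check (a condensed sketch of the interpreter's upgradable-NOP case from this era, not a verbatim copy):

    case OP_NOP1: case OP_NOP2: case OP_NOP3: case OP_NOP4: case OP_NOP5:
    case OP_NOP6: case OP_NOP7: case OP_NOP8: case OP_NOP9: case OP_NOP10:
    {
        // With CLTV not enabled, OP_NOP2 falls through to here and its use
        // is rejected under this policy flag, i.e. kept non-standard.
        if (flags & SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS)
            return set_error(serror, SCRIPT_ERR_DISCOURAGE_UPGRADABLE_NOPS);
    }
    break;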

@btcdrak

btcdrak commented Dec 17, 2014

@petertodd I have to agree with @gmaxwell, something about your PR wording immediately made me jump to the wrong conclusion too.

@petertodd

Any suggestions on wording this better?

@SergioDemianLerner

I don't understand why this check returns false, and why it's not done before the (nLockTime > (int64_t)txTo.nLockTime) check. The comment says that if IsFinal() = true, then CHECKLOCKTIMEVERIFY is bypassed (accepted).

@SergioDemianLerner

I will read the spec again.

@petertodd

You're reading it backwards: we're trying to prevent someone from bypassing the nLockTime of the transaction by setting every nSequence field to MAXINT.

As for the order of the tests, I designed it to go from the least data required to prove to the most.

On 17 December 2014 09:21:43 GMT-05:00, SergioDemianLerner wrote, quoting the code under review:

    if (nLockTime > (int64_t)txTo.nLockTime)
        return false;

    // Finally the nLockTime feature can be disabled and thus
    // CHECKLOCKTIMEVERIFY bypassed if every txin has been
    // finalized by setting nSequence to maxint. The
    // transaction would be allowed into the blockchain, making
    // the opcode ineffective.
    //
    // Testing if this vin is not final is sufficient to
    // prevent this condition. Alternatively we could test all
    // inputs, but testing just this input minimizes the data
    // required to prove correct CHECKLOCKTIMEVERIFY execution.
    if (txTo.vin[nIn].IsFinal())
        return false;

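
For context, the checks preceding the quoted fragment follow the same least-data-first ordering (a condensed sketch based on BIP65 and this PR; the surrounding scaffolding is assumed):

    // 1. A negative lock time can never be satisfied; checking it needs
    //    nothing beyond the argument itself. (CLTV arguments may be up to
    //    5-byte bignums, unlike the 4-byte limit of the older numeric
    //    opcodes.)
    if (nLockTime < 0)
        return false;

    // 2. Heights and unix times must not be compared with each other, so
    //    both values must be on the same side of LOCKTIME_THRESHOLD; this
    //    additionally needs the spending transaction's nLockTime.
    if (!((txTo.nLockTime <  LOCKTIME_THRESHOLD && nLockTime <  LOCKTIME_THRESHOLD) ||
          (txTo.nLockTime >= LOCKTIME_THRESHOLD && nLockTime >= LOCKTIME_THRESHOLD)))
        return false;

    // 3-4. ...then the two checks quoted above: the numeric comparison and
    //      the non-final-input requirement.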

@SergioDemianLerner

Right. The Bitcoin protocol first checks the nLockTime, and THEN checks the sequence numbers if the nLockTime check fails. I had to read the IsFinalTx() function again.
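
For reference, the function being described, IsFinalTx(), looked roughly like this at the time (paraphrased, not a verbatim copy):

    bool IsFinalTx(const CTransaction &tx, int nBlockHeight, int64_t nBlockTime)
    {
        // nLockTime == 0 always means final.
        if (tx.nLockTime == 0)
            return true;
        // Below LOCKTIME_THRESHOLD nLockTime is a block height, above it a
        // unix timestamp; if the lock has already passed, the tx is final.
        if ((int64_t)tx.nLockTime < ((int64_t)tx.nLockTime < LOCKTIME_THRESHOLD
                                         ? (int64_t)nBlockHeight : nBlockTime))
            return true;
        // Otherwise nLockTime is only enforced while some input is non-final.
        BOOST_FOREACH(const CTxIn& txin, tx.vin)
            if (!txin.IsFinal())
                return false;
        return true;
    }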

@petertodd force-pushed the mempool-only-checklocktimeverify branch from 70fa6b6 to bc2dc8b on December 25, 2014 06:20
@petertodd

Rebased on top of #5521

@NicolasDorier

Just to check that I understand correctly.

You would want such a mempool-only check of CLTV out for 0.11, and then, if it works without any notable problems and with some people building stuff on top of it, push it into the miners' rules for 0.12?

I'm asking because I intend to build some stuff on top of it, and I'm not really motivated to run and adapt my code for a Viacoin node. So I guess my only option for now is to wait for 0.11?

@TheBlueMatt

Hopefully we enforce more than mempool for 0.11, but enforcing for mempool is a one-line change after merging the code (which we should move forward with, whether with mempool enforcement or not)
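
For context, the one-line change in question would roughly be adding the new verify flag to the standard (policy) flag set; a sketch, with the flag name as in the eventual BIP65 implementation and the other flags elided:

    // In policy code: mempool-only enforcement is just including the CLTV
    // verify flag in the flags used for standardness checks.
    static const unsigned int STANDARD_SCRIPT_VERIFY_FLAGS =
        MANDATORY_SCRIPT_VERIFY_FLAGS |
        /* ...the existing standardness flags... */
        SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY;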

@gmaxwell

@petertodd Say the relay rule were widely deployed. What would be the plan if we discovered that the exact rule construction wasn't the one we wanted?

@NicolasDorier

I see. That's great to know!

I'll be waiting for 0.11; I don't have the time/skill to compile bitcoind from source on Linux.

Not a deal breaker though: I have my own script evaluator in .NET, so I will port CLTV + its tests into it and test without a node. Not perfect, but I should be able to play with it faithfully enough.

@petertodd

Since it's just a relay rule, at worst you reject transactions with NOP2 from the mempool (the default behaviour), pick a new NOP#, and try out the new behaviour.

This means the implementation does either one version or the other - nice and clean - and neither version will interfere with the other. Of course both NOPs will make it into blocks, but that's not going to do us any harm, as neither side has applied a soft-forking rule.

Now, in many cases you don't need to do all this. For instance, if we were to decide to make CLTV act against the current block height/time rather than the tx's nLockTime, nodes with the old rule would just reject txs from the less permissive version. You'd want to be careful re: the DoS ban code, but we're finding that crops up over and over again anyway...


@petertodd force-pushed the mempool-only-checklocktimeverify branch from bc2dc8b to e65187a on April 21, 2015 04:43
@petertodd

Rebased on master.

@petertodd

^ @jgarzik @btcdrak

@morcos

morcos commented May 1, 2015

ACK
I reviewed this and tested by creating this small RPC test:
https://gist.github.com/morcos/61084ac8a33363278638
(it requires a rebase to master first as it uses some of the new python tools)

@jgarzik

jgarzik commented May 1, 2015

Concept ACK - I think this falls on the side of being a useful soft fork upgrade half-step.

Of course, I would prefer to be more aggressive and go the entire way in one step.

@petertodd

@morcos Actually your RPC test should work on master now; pull-req #5981 has been merged.

@jgarzik We don't know if the most recent soft-fork will actually trigger prior to the release of v0.11, especially with @laanwj's proposed schedule. Merging this for v0.11 would at least get 95% of the code in the release, with actual enforcement happening in a v0.11.1 release. Getting off the rebase treadmill would also be nice.

While there has been discussion about a relative CLTV by @TheBlueMatt and @jtimon, I don't think any of it really impacts the implementation of an absolute CLTV, which is also needed. My most recent proposal is that absolute CLTV be implemented as-is, and relative CLTV be left for a future scripting-system rethinking: http://www.mail-archive.com/[email protected]/msg07418.html

@TheBlueMatt

I like merging this, but doing both CLTV things in one swoop would be really nice. Certainly if we're gonna use one of the precious few OP_NOPs we have, we might as well make it more flexible. The relative-CLTV idea is incredibly useful in systems like the proposed Lightning network, a proposal to build on top of payment-channel hub-and-spoke networks (though I've been unsuccessful in getting them to comment on this specific issue here, as they can't seem to get their paper rewritten).

@petertodd force-pushed the mempool-only-checklocktimeverify branch from e65187a to b2f4b67 on May 9, 2015 08:09
petertodd added 4 commits May 9, 2015 04:29
1. While the existing numeric opcodes are all limited to 4-byte bignum arguments, new opcodes will need different limits.

2. Will now be needed by CHECKLOCKTIMEVERIFY code.

3. <nLockTime> 1 CHECKLOCKTIMEVERIFY -> <nLockTime> 1

   Fails if tx.nLockTime < nLockTime, allowing the funds in a txout to be locked until some block height or block time in the future is reached.

   Only the logic and unittests are implemented; this commit does not have any actual soft-fork logic in it.

   Thanks to Pieter Wuille for rebase.

   Credit goes to Gregory Maxwell for the suggestion of comparing the argument against the transaction nLockTime rather than the current time/blockheight directly.

4. Transactions that fail CLTV verification will be rejected from the mempool, making it easy to test the feature. However, blocks containing "invalid" CLTV-using transactions will still be accepted; this is *not* the soft-fork required to actually enable CLTV for production use.
@petertodd force-pushed the mempool-only-checklocktimeverify branch from b2f4b67 to 14a63f4 on May 9, 2015 08:40
@petertodd

Updated to add a type field for future soft-forks, e.g. a relative CLTV upgrade.

BIP updated: bitcoin/bips#153

CLTV demos: petertodd/checklocktimeverify-demos@148e91e

This should satisfy all objections. I propose this be merged for v0.11.0, and the actual soft-fork implemented in v0.11.1 once the ongoing BIP66 soft-fork triggers (likely with @sipa's new nVersion feature-flags BIP).

@btcdrak

btcdrak commented May 11, 2015

I have tested the code and sent the following tx on testnet:

    fc6fa964c510e79b55ccea5bc8f21a505867b8ed286f6da02172bf8c297720dd

using the following redeem script:

    632102b0efefbe8f8feaabc74ebde89928bfacc6471fe5f4db43b19c2ff4e58fdfc290ad670400ca9a3b51b16d6821032c8296c603d3c5802027d57a8f3ae382d7464abfda6f80e4f7d592ce23d696e7ac

which decodes to:

    OP_IF 02b0efefbe8f8feaabc74ebde89928bfacc6471fe5f4db43b19c2ff4e58fdfc290 OP_CHECKSIGVERIFY
    OP_ELSE 1000000000 1 OP_NOP2 OP_2DROP OP_ENDIF
    032c8296c603d3c5802027d57a8f3ae382d7464abfda6f80e4f7d592ce23d696e7 OP_CHECKSIG

ACK.

@NicolasDorier

The goal of doing that is to work around NOPx scarcity.
Would it not be better to take a currently unused NOP and create an "OP_EXT" instruction?

Then define:

0 OP_EXT would be CHECKLOCKTIMEVERIFY,

1 OP_EXT would be the next soft fork,

2 OP_EXT would be the third soft fork, etc...

Your PR seems to only add extensibility for a future relative CLTV, when I think it can be broader without downside.
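
A sketch of the dispatch being proposed (entirely hypothetical: OP_EXT and the sub-opcode numbering exist only in this suggestion):

    case OP_EXT: // a single repurposed OP_NOPx
    {
        if (stack.size() < 1)
            return set_error(serror, SCRIPT_ERR_INVALID_STACK_OPERATION);
        const CScriptNum nSubOp(stacktop(-1), fRequireMinimal);
        switch (nSubOp.getint()) {
        case 0: /* CHECKLOCKTIMEVERIFY semantics */ break;
        case 1: /* reserved for the next soft fork */ break;
        case 2: /* reserved for the one after that */ break;
        default:
            break; // unknown sub-opcodes stay NOPs, upgradable by soft fork
        }
    }
    break;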

@btcdrak

btcdrak commented May 12, 2015

@NicolasDorier I am not sure I understand; the parameter that precedes OP_NOP2 already allows for any number of future CLTV implementations (e.g. relative CLTV and other implementations yet to be imagined): 1 OP_NOP2, 2 OP_NOP2, 3 OP_NOP2, 4 OP_NOP2, 5 OP_NOP2, 6 OP_NOP2, 7 OP_NOP2, ... n OP_NOP2
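
In symbolic script form, the shapes under discussion (illustrative only):

    <nLockTime> 1 OP_NOP2   (parameterized: the 1 selects absolute CLTV)
    <nLockTime> 2 OP_NOP2   (a 2 could later select, e.g., relative CLTV)
    <nLockTime> OP_NOP2     (unparameterized, as in #6124)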

@NicolasDorier

What I mean is that you just changed CLTV by requiring 1 OP_NOP2 to invoke it.
You did it for NOP-scarcity reasons, and (not a big deal, though) it made the previously simpler code more complex.

The rationale for adding the PUSH 1 was permitting new modes and implementations of CLTV in the future without wasting other NOPs.

My point is that NOP scarcity is not a problem worth considering.

Because the day we use all NOPs except NOP10, we can just say that NOP10 is an "OP_EXT" operation, and then consider "1 OP_EXT" like a "NOP11" and "2 OP_EXT" like a "NOP12".

So I see no problem with using NOP3 for RCLTV or any other mode we might invent in the future.

@NicolasDorier

Following what I just said, the latest PR is equivalent to choosing OP_EXT = NOP2, CLTV = NOP11 and RCLTV = NOP12; so why not wait until NOP10 for OP_EXT and just use NOP2 for CLTV and NOP3 for RCLTV?

@gavinandresen

I agree with @NicolasDorier: no reason to worry about NOP scarcity until we hit OP_NOP10, and when we do, we can redefine OP_NOP10 as OP_EXT.

I think there is a good chance that before then one of the OP_NOPs will be used to define an entirely different, redesigned scripting system, so we'll never get there.

@NicolasDorier

Just to be clear, this is not an objection to the parametrized CLTV. I just wanted to point out that the reasons for doing it were flawed, but I don't feel strongly about it.

I would rather have a parametrized CLTV in the 15 May freeze than delay CLTV one more time, for the reasons I gave.

@petertodd

I opened a pull-req with the unparameterized version as well: #6124

I personally prefer the unparameterized version without the type argument; I agree with @gavinandresen that running out of NOPs isn't a major concern. More importantly though, let's pick one and get this merged for v0.11.0.

@btcdrak

btcdrak commented May 12, 2015

@NicolasDorier It's a valid point. I don't particularly care either way, although the PR was adjusted to account for discussion on the mailing list. I'm fine with either, but prefer the original #6124.

@jtimon

Can you move all this to an inline function outside of EvalScript?
It will build equivalently but be much more readable (at the very least for the indentation, but seeing the parameters of each opcode is interesting too).
At some point we should do something like #5153 (having a check-identical-build script would help make that trivial to review and non-risky in a completely obvious way), so we could at least start doing it with the new opcodes.
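
A sketch of the kind of extraction being asked for (hypothetical helper name; the body would move verbatim out of EvalScript's switch, so the build should stay equivalent):

    static inline bool EvalCheckLockTimeVerify(const CScriptNum& nLockTime,
                                               const BaseSignatureChecker& checker,
                                               ScriptError* serror)
    {
        // Same checks as the in-switch version, just one indentation level deep.
        if (nLockTime < 0)
            return set_error(serror, SCRIPT_ERR_NEGATIVE_LOCKTIME);
        if (!checker.CheckLockTime(nLockTime))
            return set_error(serror, SCRIPT_ERR_UNSATISFIED_LOCKTIME);
        return true;
    }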


@jtimon Could codestyle nits be left until after the PR is merged?

@jtimon

Well, it would just delay that until the later, bigger cleanup of EvalScript; the point is to make the history cleaner by squashing the fixup into petertodd@dc17027.
But adding a fixup commit that is proven to produce an identical build is not really a blocker IMO.
What's blocking this is the lack of a decision on the parametrized version versus the single-opcode one.
In any case, this is just a nit, and if we decide before fixing the nit, I'll just remember to say this before considering anything else when the next opcode is proposed.


I think the issue of using the parameterised version or not is quite clear: there isn't scarcity, as pointed out by @gavinandresen and @NicolasDorier, because OP_NOP10 can be repurposed for a general extension. Until we reach the limits, surely it makes sense to produce concise scripts without parameter hacks?

@jtimon

Well, another blocker is the new softfork deployment mechanism @sipa is working on.
And remember this doesn't have to be part of 0.11; we could just release 0.11.1 with this a week after releasing 0.11.
So, yes, I think we have time for an identical-build fixup commit (it won't imply more testing); why clean up later when you can avoid the mess in the first place?

@petertodd

@jtimon I'm skeptical of making this an inline function when no other opcode is one, precisely because when (if!) we change the other opcodes to be inline functions, the last thing we need is two different ways of doing it. Better to stick to the same well-tested mechanism everything else uses; making it an inline function now risks having to change it again anyway.

re: the softfork mechanism, I'm well aware of it and have been reviewing it myself. The idea here is just to get all but the actual softfork into a release, to make network-wide testing of the new functionality easy. In the unlikely event that the functionality has to change, this gives us one last chance to do so prior to the actual soft-fork.

@jtimon

re softfork: ok, that makes sense.

I really hope it's "when" and not "if". Your argument sounds like "let's just dump the trash on the kitchen floor because the rest of the house is also dirty".
I don't understand why having ugly code should preclude us from writing the new stuff in a cleaner way; it would mean less work and smaller diffs in future cleanups.

same well-tested mechanism everything else uses

This would be equally well tested, since the build would be identical. That's the main reason to make the function inline (one could also argue about performance, but I'm pretty sure the compiler is smart enough to "think of that by itself").

I insist this is a nit and shouldn't be a blocker: "I'm too lazy to do that now. You should have asked before; too late, it will be more work for you or someone else later" would be a perfectly valid answer.

On #5153 in general, it's sad that the interpreters of other implementations (like libbitcoin's) are much more readable than libconsensus'...
And it's worse that people not only don't want to implement it but actively oppose that kind of obvious, trivial, and completely safe improvement.
This also applies to a subset of #4646 and any other huge nesting we have.
Years of experience in software engineering show that deep nesting is a bad practice that leads to less readable and thus less maintainable code. That's why tools like checkstyle exist.

But, anyway, it's not worth discussing this further here. Do the more-readable-but-equivalent-build fixup commit or don't; it's completely up to you.
Nobody coded the identical-build-check tool that I proposed and you created a bounty for, so actually checking that the build is identical will be more work than writing the inline function.

@petertodd

I don't understand why having ugly code should preclude us from writing the new stuff in a cleaner way; it would mean less work and smaller diffs in future cleanups.

Well, like I say, I think having to fix something in the future in two different ways rather than just one is more work. I'd give the analogy of a factory that produces drums of arsenic as a byproduct, which currently get dumped in a big pile out the back. You'd be better off continuing to put it all in the one big pile, rather than deciding "Ah, we're pretty sure we're going to dump it in the river in the future; arsenic doesn't bioaccumulate, so the dilution will fix the problem. So stick all new drums in the river." If it turns out that plan was a bad idea, now you have to clean up both a backyard and a river: twice as much work to plan the cleanup! Better to have a problem with exactly one solution that can be applied repeatedly and mechanically to fix it.

@petertodd

Seems we have rough consensus that a type parameter isn't needed, so closing in favor of #6124

@petertodd closed this May 15, 2015
@maaku

maaku commented May 16, 2015

Gah. I was not subscribed to this issue and didn't see the last few days' debate.

Allow me to make an argument in favor of the type parameter. I have never seen opcode scarcity as the prime justification, and that was not the motivation for suggesting it as I did in an earlier PR. The reason I believe it is valuable to include the type parameter is the possible type-checking efficiencies and future-proofing, if the convention is adopted that OP_NOP(n) has the format [n n-1 ... 3 2 aux OP_NOPn], where the first parameter is the auxiliary opcode. So OP_NOP1 is used for opcodes which take no parameters [aux OP_NOP1]; OP_NOP2 for those like CLTV and RCLTV which take a single parameter [x aux OP_NOP2]; OP_NOP3 for those which take two parameters [y x aux OP_NOP3]; etc.

The implementation becomes a little bit cleaner, as the OP_NOP(n) case statement can check that a sufficient number of parameters is on the stack for all of its auxiliary opcodes, and future script authoring tools can have some rudimentary, future-proof stack bounds checking.

I am strongly in favor of including the type parameter.
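
A sketch of the bounds check this convention would permit (hypothetical; it assumes the [params... aux OP_NOPn] layout described above, where OP_NOPn carries n-1 parameters plus the auxiliary opcode):

    case OP_NOP1: case OP_NOP2: case OP_NOP3: case OP_NOP4: case OP_NOP5:
    case OP_NOP6: case OP_NOP7: case OP_NOP8: case OP_NOP9: case OP_NOP10:
    {
        // Under the convention, OP_NOPn always has n items beneath it:
        // the auxiliary opcode plus its n-1 parameters.
        unsigned int nRequired = opcode - OP_NOP1 + 1;
        if (stack.size() < nRequired)
            return set_error(serror, SCRIPT_ERR_INVALID_STACK_OPERATION);
        // What (aux, params...) mean is left to future soft forks.
    }
    break;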

@maaku

maaku commented May 16, 2015

Also, I should mention that including the type parameter allows OP_CLTV and its relative variant to share nearly all of their implementation inline, without a call-out. I hope to have code demonstrating this available in the very near future.

@jtimon

jtimon commented May 16, 2015

You can still reuse code in the checker (https://github.com/bitcoin/bitcoin/pull/6124/files#diff-be2905e2f5218ecdbe4e55637dac75f3R1129); having two separate opcodes for CLTV and RCLTV doesn't preclude code reuse.

@petertodd

@maaku I think that's making a lot of premature assumptions about how the scripting system will be extended in the future. I think we're a lot more likely to implement an OP_MAST/OP_EVAL which just calls into a new scheme, rather than continue extending what we have now.

re: code reuse, I'm not convinced that code reuse really makes all that much sense in cases where we're talking about just a few lines of consensus-critical code. As @gmaxwell commented a few months ago, if every detail of a codebase needs to be reviewed, having everything in one place can be easier than splitting things up across many different functions.

@maaku

maaku commented May 16, 2015

@petertodd The only non-aesthetic argument I've seen for multiple top-level opcodes instead of an extended opcode is that it saves 1 byte. However, that byte is in the scriptSig, as there truly isn't any reason to use CLTV/RCLTV within a scriptPubKey. On the other hand, there may be future extensions to script for which we want the use of an unallocated NOP in scriptPubKeys; I'd much rather take the hit of an extra byte in the scriptSig for CLTV/RCLTV than in the UTXO set if we find later that we've exhausted the NOP space.

@petertodd

@maaku Hmm? It's quite likely that we'll see CLTV used in scriptPubKeys for payment channels, for backup reasons, and it's absolutely guaranteed CLTV will appear in redeemScripts. CLTV arguments are certainly always going to be in scriptPubKeys/redeemScripts, as they are rarely, if ever, something it makes sense to provide when spending a txout.

Anyway, the cost of an extra script version byte, when it comes time to eventually do an OP_MAST, is <3% of the total storage required to store a single UTXO.

@jtimon

jtimon commented May 16, 2015

@maaku I believe the only non-aesthetic argument in the other direction is NOP scarcity.
The reuse argument is invalid, and your other "primary argument" seems aesthetic to me as well.

@jgarzik

jgarzik commented May 16, 2015

Non-aesthetic? In a wider context, I agree w/ @petertodd re: "I think we're a lot more likely to implement an OP_MAST/OP_EVAL which just calls a new scheme, rather than continue extending what we have now."

e.g. OP_ETHEREUM would likely just require one opcode.

@maaku

maaku commented May 16, 2015

Even though I'm the first to argue in favor of a script replacement, when discussing changes to the existing script system I think it is prudent to assume that we'll be stuck with script for much longer than we anticipate. I think that conservatism favors a two-byte extension opcode: (1) it leaves more single-byte nopcodes available for OP_EVAL-like extensions, which benefit more from brevity, and (2) if done in a certain way, it allows future-proof stack bounds checking in script authoring tools or policy checks.

@petertodd

Without a full script replacement, why would you ever need any script authoring tools? Script is just too limited to warrant that kind of complexity right now; there are literally fewer than a dozen basic script fragments that are useful.


@jtimon

jtimon commented May 16, 2015

(1) it leaves more single-byte nopcodes available for OP_EVAL-like extensions, which benefit more from brevity,

Let's reserve OP_16 for script 2.0 and assume that will take ages to be deployed.
Let's reserve OP_15 for taking an extra byte instead of doing it now; that will give us 255 more instructions.
Let's reserve OP_15 255 to take a short (2 bytes) in a similar way; that will give us 65536 more.
Let's reserve OP_15 255 65535 to take 4 bytes; that will give us 4294967296 more.
Etc, etc...
I really think we're safe.

(2) if done in a certain way, it allows future-proof stack bounds checking in script authoring tools or policy checks.

This is the part I'm not understanding. I'm missing why the reuse of "stack bounds checks" is not just bikeshedding, and how the alternative is any less "future-proof".
Can you elaborate on this?

@maaku

maaku commented May 16, 2015

Make OP_NOP2 fail unless there are at least two items on the stack. It doesn't care what the items are, but there must be at least two. At that point a script tool or relay policy can be written that has no idea what the extra OP_NOP2 functionality is, but it knows that when OP_NOP2 is executed there must be two items on the stack. A script compiler or transaction validator throws an error if it can determine that the stack would contain fewer than two items when the OP_NOP2 is executed. For some scriptPubKeys, the minimum number of pushes in the scriptSig could be inferred based on this and other constraints.

Later, CLTV is implemented, but none of this software has been updated to understand the new rules yet. Nevertheless, the compiler still throws an error when the lock time parameter is not provided, relay nodes drop the same transactions, and your script type checker helpfully reminds you that you are probably missing a value for that unrecognised OP_NOP2 extended instruction.

It's a very limited form of type checking, and not nearly as useful as it could be in a much better-designed language. But it is an improvement over the status quo for existing script.
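
A toy illustration of such a checker (a standalone sketch, not proposed code; it only tracks a lower bound on stack depth from pushes):

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    enum class Op { PushData, Nop2 /* , ... */ };

    // Reject scripts that provably reach OP_NOP2 with fewer than two stack
    // items, without knowing what OP_NOP2 will eventually mean.
    void CheckStackBounds(const std::vector<Op>& script)
    {
        std::size_t depth = 0;
        for (Op op : script) {
            switch (op) {
            case Op::PushData:
                ++depth;
                break;
            case Op::Nop2:
                if (depth < 2)
                    throw std::runtime_error("OP_NOP2 requires >= 2 stack items");
                break;
            }
        }
    }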

@petertodd

@maaku Relay nodes already drop transactions that use unknown opcodes; in fact, I should have done that for this type patch... which shows it's an ill-advised idea IMO: yet another thing to take into account over the simpler "one opcode, one function" principle.

Again, I'm just not seeing the value of things like "type checkers" for the scripting language right now, given how utterly simple it is. Equally, if in the future the choice were between making the scripting language more or less complex for the sake of type checking vs. getting consensus, I'd 100% choose the latter, always. People are free to write scripts that are bad ideas; the important thing is that the consensus-critical codebase stay simple, to ensure we don't make mistakes that affect everyone (like the one I made above!).

bitcoin locked this conversation as resolved and limited it to collaborators Sep 8, 2021