1996, Automated Deduction — Cade-13
Speculating intermediate lemmas is one of the main reasons for user interaction/guidance while mechanically attempting proofs by induction. An approach for generating intermediate lemmas is developed, and its effectiveness is demonstrated while proving properties of recursively defined functions. The approach is guided by the paradigm of attempting to generate a proof of the conclusion subgoal in an induction step by the application of an induction hypothesis (or hypotheses). Generation of intermediate conjectures is motivated by attempts to find appropriate instantiations for non-induction variables in the main conjecture. In case the main conjecture does not have any non-induction variables, such variables are introduced by attempting its generalization. A constraint-based paradigm is proposed for guessing the missing side of an intermediate conjecture by identifying constraints on the term schemes introduced for the missing side. Definitions and properties of functions are judiciously used for generating instantiations and intermediate conjectures. Heuristics are identified for performing such analysis. The approach fails if appropriate instantiations of non-induction variables cannot be generated. Otherwise, proofs of intermediate conjectures are attempted and the proposed method is recursively applied. The method has proven to be surprisingly effective in speculating intermediate lemmas for tail-recursive programs. The method is demonstrated using a number of examples on numbers and lists.
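A classic instance of the problem this abstract describes is tail-recursive list reversal. The example below is a hypothetical illustration (the function names and the particular lemma are not taken from the paper): proving that the accumulator version agrees with naive reversal requires speculating an intermediate lemma that mentions the accumulator.

```python
# Hypothetical illustration: a tail-recursive list reversal with an
# accumulator. Proving reverse(xs) == rev_acc(xs, []) by induction gets
# stuck, because the induction hypothesis must mention the accumulator;
# the speculated intermediate lemma is
#   rev_acc(xs, acc) == reverse(xs) + acc

def reverse(xs):
    # naive (non-tail-recursive) definition
    return reverse(xs[1:]) + [xs[0]] if xs else []

def rev_acc(xs, acc):
    # tail-recursive definition with an accumulator argument
    return rev_acc(xs[1:], [xs[0]] + acc) if xs else acc

# bounded evidence for the speculated lemma (a real prover would
# establish it by induction on xs)
for xs in ([], [1], [1, 2, 3]):
    for acc in ([], [9]):
        assert rev_acc(xs, acc) == reverse(xs) + acc
```

The original property is then the instance acc = [] of the speculated lemma.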
2010
We present a succinct account of dynamic rippling, a technique used to guide the automation of inductive proofs. This simplifies termination proofs for rippling and hence facilitates extending the technique in ways that preserve termination. We illustrate this by extending rippling with a terminating version of middle-out reasoning for lemma speculation. This supports automatic speculation of schematic lemmas which are incrementally instantiated by unification as the rippling proof progresses. Middle-out reasoning and lemma speculation have been implemented in higher-order logic and evaluated on typical libraries of formalised mathematics. This reveals that, when applied, the technique often finds the needed lemmas to complete the proof, but it is not as frequently applicable as initially expected. In comparison, we show that theory formation methods, combined with simpler proof methods, offer an effective alternative.
Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation, 2015
We consider the problem of automated reasoning about dynamically manipulated data structures. Essential properties are encoded as predicates whose definitions are formalized via user-defined recursive rules. Traditionally, proving relationships between such properties is limited to the unfold-and-match (U+M) paradigm, which employs systematic transformation steps of folding/unfolding the rules. A proof, using U+M, succeeds when we find a sequence of transformations that produces a final formula which is obviously provable by simply matching terms. Our contribution here is the addition of the fundamental principle of induction to this automated process. We first show that some proof obligations that are dynamically generated in the process can be used as induction hypotheses in the future, and then we show how to use these hypotheses in an induction step which generates a new proof obligation aside from those obtained by using the fold/unfold operations. While adding induction is an obvious need in general, no automated method has managed to include it in a systematic and general way. The main reason for this is the problem of avoiding circular reasoning. We overcome this with a novel checking condition. In summary, our contribution is a proof method which, beyond U+M, performs automatic formula rewriting by treating previously encountered obligations in each proof path as possible induction hypotheses. In the practical evaluation part of this paper, we show, using realistic benchmarks, how the commonly used technique of relying on unproven lemmas can be avoided. This not only removes the current burden of coming up with the appropriate lemmas, but also significantly speeds up the verification process, since lemma applications, coupled with unfolding, often induce a large search space. In the end, our method can automatically reason about a new class of formulas arising from practical program verification.
Lecture Notes in Computer Science, 2003
Automatically proving properties of tail-recursive function definitions by induction is known to be challenging. The difficulty arises because a property of a tail-recursive function definition is typically expressed by instantiating the accumulator argument to be a constant on only one side of the property; the application of the induction hypothesis then gets blocked in a proof attempt. Following an approach developed by Kapur and Subramaniam, a transformation heuristic is proposed which hypothesizes the other side of the property to also have an occurrence of the same constant. Constraints on the transformation are identified which enable a generalization of the constant on both sides, with the hope that the generalized conjecture is easier to prove. Conditions are generated from which intermediate lemmas necessary to make a proof attempt succeed can be speculated. By considering structural properties of recursive definitions, it is possible to identify properties of the functions used in recursive definitions for the conjecture to be valid. The heuristic is demonstrated on well-known tail-recursive definitions on numbers as well as on other recursive data structures, including finite lists, finite sequences, and finite trees, where a definition is expressed using one recursive call or multiple recursive calls. In case a given conjecture is not valid because of a possible bug in an implementation (a tail-recursive definition) or a specification (a recursive definition), the heuristic can often be used to generate a counterexample. Conditions under which the heuristic is applicable can be checked easily. The proposed heuristic is likely to be helpful for automatically generating loop invariants as well as in proofs of correctness of properties of programs with respect to their specifications.
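The blocked-constant pattern described above can be made concrete with factorial (a hypothetical example chosen for illustration; it is not claimed to be one of the paper's benchmarks):

```python
# Hypothetical example of the pattern described above: the constant
# accumulator value 1 appears on only one side of the property
#   fact_acc(n, 1) == fact(n)
# which blocks the induction hypothesis. Hypothesizing an occurrence of
# the constant on the other side (fact(n) == 1 * fact(n)) and then
# generalizing 1 to a fresh variable a yields the provable conjecture
#   fact_acc(n, a) == a * fact(n)

def fact(n):
    # recursive definition
    return 1 if n == 0 else n * fact(n - 1)

def fact_acc(n, a):
    # tail-recursive definition with accumulator
    return a if n == 0 else fact_acc(n - 1, n * a)

# bounded evidence for the generalized conjecture
for n in range(8):
    for a in (1, 2, 5):
        assert fact_acc(n, a) == a * fact(n)
```

The generalized conjecture goes through by straightforward induction on n, and the original property is its instance a = 1.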
Lecture Notes in Computer Science, 2003
Using recent results on integrating induction schemes into decidable theories, a method for generating lemmas useful for reasoning about T-based function definitions is proposed. The method relies on terms in a decidable theory admitting a (finite set of) canonical form scheme(s) and on the ability to solve parametric equations relating two canonical form schemes with parameters. Using nontrivial examples, it is shown how the method can be used to automatically generate many simple lemmas; these lemmas are likely to be found useful in automatically proving other nontrivial properties of T-based functions, thus unburdening the user from having to provide many simple intermediate lemmas. During the formalization of a problem, after a user inputs T-based definitions, the method can be employed in the background to explore a search space of possible conjectures which can be attempted, thus building a library of lemmas as well as false conjectures. This investigation was motivated by our attempts to automatically generate lemmas arising in proofs of generic, arbitrary data-width parameterized arithmetic circuits. The scope of applicability of the proposed method is broader, however, including generating proofs for proof-carrying code, certification of proof-carrying code, as well as reasoning about distributed computation algorithms. This research was partially supported by an NSF ITR award CCR-0113611.
1. cost(0) → 0
2. cost(1) → 2
3. cost(s(x) + s(x)) → cost(x) + cost(x) + 4
4. cost(s(s(x) + s(x))) → cost(x) + cost(x) + 6
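The four rewrite rules above can be transcribed directly: with s the successor function, s(x) + s(x) is 2x + 2 and s(s(x) + s(x)) is 2x + 3. A plausible simple lemma of the kind such a method could generate is the closed form cost(n) = 2n; the sketch below only checks it by bounded testing, whereas the paper's method would derive such lemmas symbolically.

```python
# The four rewrite rules for cost, transcribed to Python: the pattern
# s(x)+s(x) matches even n >= 2 (n = 2x+2) and s(s(x)+s(x)) matches
# odd n >= 3 (n = 2x+3).

def cost(n):
    if n == 0:
        return 0                      # rule 1: cost(0) -> 0
    if n == 1:
        return 2                      # rule 2: cost(1) -> 2
    if n % 2 == 0:                    # rule 3: n = 2x + 2
        x = (n - 2) // 2
        return cost(x) + cost(x) + 4
    else:                             # rule 4: n = 2x + 3
        x = (n - 3) // 2
        return cost(x) + cost(x) + 6

# candidate lemma cost(n) = 2*n, checked on a bounded range
assert all(cost(n) == 2 * n for n in range(200))
```

The lemma is in fact provable by strong induction on n from the four rules, e.g. cost(2x+2) = 2·cost(x) + 4 = 2·(2x) + 4 = 2·(2x+2).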
Journal of Automated Reasoning, 1996
Zhang, Kapur, and Krishnamoorthy introduced a cover set method for designing induction schemes for automating proofs by induction from specifications expressed as equations and conditional equations. This method has been implemented in the theorem prover Rewrite Rule Laboratory (RRL) and a proof management system Tecton built on top of RRL, and it has been used to prove many nontrivial theorems and reason about sequential as well as parallel programs. The cover set method is based on the assumption that a function symbol is defined by using a finite set of terminating (conditional or unconditional) rewrite rules. The termination ordering employed in orienting the rules is used to perform proofs by well-founded induction. The left sides of the rules are used to design different cases of an induction scheme, and recursive calls to the function made in the right sides can be used to design appropriate instantiations for generating induction hypotheses. A weakness of this method is that it relies on syntactic unification for generating an induction scheme for a conjecture. This paper goes a step further by proposing semantic analysis for generating an induction scheme for a conjecture from a cover set. We discuss the use of a decision procedure for Presburger arithmetic (the quantifier-free theory of numbers with the addition operation and relational predicates >, <, =, ≥, ≤) for performing semantic analysis about numbers. The decision procedure is used to generate appropriate induction schemes for a conjecture by using cover sets of functions taking numbers as arguments. This extension of the cover set method automates proofs of many theorems that otherwise require human guidance and hints. The effectiveness of the method is demonstrated by using some examples that commonly arise in reasoning about specifications and programs.
It is also shown how semantic analysis using a Presburger arithmetic decision procedure can be used for checking the completeness of a cover set of a function defined by using operations such as + and − on numbers. With this check, many function definitions used in a proof of the prime factorization theorem, stating that every number can be factored uniquely into prime factors, which previously had to be checked manually, can now be checked automatically in RRL. The use of the decision procedure for guiding generalization, generating conjectures, and merging induction schemes is also illustrated.
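As a concrete reading of cover set completeness, consider a hypothetical definition such as half(0) = 0, half(1) = 0, half(x+2) = half(x) + 1 (this particular function is an invented illustration, not taken from the abstract). Completeness means every natural number matches some left-hand-side pattern. The paper decides this with a Presburger arithmetic procedure; the sketch below only does a bounded check.

```python
# Hypothetical illustration of cover set completeness: the left-hand
# sides of half(0), half(1), half(x+2) give the patterns {0, 1, x+2}.
# Completeness = every natural number matches at least one pattern.
# The paper decides this symbolically with Presburger arithmetic; here
# we only gather bounded evidence.

patterns = [
    lambda n: n == 0,   # pattern 0
    lambda n: n == 1,   # pattern 1
    lambda n: n >= 2,   # pattern x+2, x ranging over naturals
]

assert all(any(p(n) for p in patterns) for n in range(10_000))
```

Dropping the pattern for 1 would make the cover set incomplete (no rule matches n = 1), which is exactly the kind of gap the completeness check catches.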
Formal Aspects of Computing, 1997
This paper deals with a particular approach to the verification of functional programs. A specification of a program can be represented by a logical formula [Con86, NPS90]. In a constructive framework, developing a program then corresponds to proving this formula. Given a specification and a program, we focus on reconstructing a proof of the specification whose algorithmic contents correspond to the given program. The best we can hope for is to generate proof obligations on atomic parts of the program corresponding to logical properties to be verified. First, this paper studies a weak extraction of a program from a proof that keeps track of intermediate specifications. From such a program, we prove the determinism of retrieving proof obligations. Then, heuristic methods are proposed for retrieving the proof from a natural program containing only partial annotations. Finally, the implementation of this method as a tactic of the Coq proof assistant is presented.
Lecture Notes in Computer Science, 1974
9th International Conference on Automated Deduction
We propose the use of explicit proof plans to guide the search for a proof in automatic theorem proving. By representing proof plans as the specifications of LCF-like tactics, [Gordon et al 79], and by recording these specifications in a sorted meta-logic, we are able to reason about the conjectures to be proved and the methods available to prove them. In this way we can build proof plans of wide generality, formally account for and predict their successes and failures, apply them flexibly, recover from their failures, and learn them from example proofs. We illustrate this technique by building a proof plan based on a simple subset of the implicit proof plan embedded in the Boyer-Moore theorem prover, [Boyer & Moore 79].
Next Generation Design and Verification Methodologies for Distributed Embedded Control Systems, 2007
Inductive reasoning is critical for ensuring reliability of computational descriptions, especially of algorithms defined on recursive data structures. Despite advances made in automating inductive reasoning, proof attempts by theorem provers frequently fail while performing inductive reasoning. A user of such a system must scrutinize a failed proof attempt and do intensive debugging to understand the cause of failure, and then provide additional information to make a failed proof attempt succeed. A method for predicting a priori failure of proof attempts by induction is proposed. It is based on analyzing the definitions of function symbols appearing in a conjecture. Further, failure analysis is shown to provide information that can be used to make those proof attempts succeed for valid conjectures. The failure of proof attempts can have a number of causes even when a conjecture is believed to be valid. It might be that an induction scheme used in a proof attempt is not powerful enough to yield useful induction hypotheses which can be applied effectively. Or, even when induction hypotheses are applicable, the proof attempt might not succeed because of missing lemmas. A method for speculating intermediate lemmas which can make induction hypotheses applicable and/or lead to simplifications that establish validity is proposed. The analysis can be automated and is illustrated on several examples. A preliminary implementation demonstrates the effectiveness of the proposed approach.
Journal of Automated Reasoning, 1991
The technique of proof plans is outlined. This technique is used to guide automatic inference in order to avoid a combinatorial explosion. Empirical research to test this technique in the domain of theorem proving by mathematical induction is described. Heuristics, adapted from the work of Boyer and Moore, have been implemented as Prolog programs, called tactics, and used to guide an inductive proof checker, Oyster. These tactics have been partially specified in a meta-logic, and plan formation has been used to reason with these specifications and form plans. These plans are then executed by running their associated tactics and, hence, performing an Oyster proof. Results are presented of the use of this technique on a number of standard theorems from the literature. Searching in the planning space is shown to be considerably cheaper than searching directly in Oyster's search space. The success rate on the standard theorems is high. These preliminary results are very encouraging.
Journal of Automated Reasoning, 2010
We have developed a program for inductive theory formation, called IsaCoSy, which synthesises conjectures 'bottom-up' from the available constants and free variables. The synthesis process is made tractable by only generating irreducible terms, which are then filtered through counterexample checking and passed to the automatic inductive prover IsaPlanner. The main technical contribution is the presentation of a constraint mechanism for synthesis. As theorems are discovered, this generates additional constraints on the synthesis process. We evaluate IsaCoSy as a tool for automatically generating the background theories one would expect in a mature proof assistant, such as the Isabelle system. The results show that IsaCoSy produces most, and sometimes all, of the theorems in the Isabelle libraries. The number of additional uninteresting theorems are small enough to be easily pruned by hand.
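The synthesise-then-filter loop described above can be caricatured in a few lines. The sketch below is a toy illustration, not IsaCoSy itself (which works inside Isabelle, restricts synthesis to irreducible terms, and passes survivors to IsaPlanner); the candidate term set here is invented for the example.

```python
# Toy sketch of conjecture synthesis by enumerate-and-filter: a small
# pool of candidate terms over list reversal and append, with candidate
# equations kept only if they survive counterexample testing. In the
# real system, survivors would then be handed to an inductive prover.

def rev(xs):
    return xs[::-1]

def app(xs, ys):
    return xs + ys

candidates = {
    "rev(rev(xs))":          lambda xs, ys: rev(rev(xs)),
    "xs":                    lambda xs, ys: xs,
    "rev(app(xs, ys))":      lambda xs, ys: rev(app(xs, ys)),
    "app(rev(ys), rev(xs))": lambda xs, ys: app(rev(ys), rev(xs)),
    "app(rev(xs), rev(ys))": lambda xs, ys: app(rev(xs), rev(ys)),
}
tests = [([], []), ([1], [2]), ([1, 2], [3]), ([1, 2, 3], [4, 5])]

names = list(candidates)
conjectures = [
    (lhs, rhs)
    for i, lhs in enumerate(names)
    for rhs in names[i + 1:]
    if all(candidates[lhs](xs, ys) == candidates[rhs](xs, ys)
           for xs, ys in tests)
]
# The two survivors correspond to the familiar library lemmas
# rev(rev(xs)) = xs and rev(app(xs, ys)) = app(rev(ys), rev(xs)).
```

Even this naive version shows why counterexample checking matters: plausible-looking candidates such as rev(app(xs, ys)) = app(rev(xs), rev(ys)) are eliminated by a single small test.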
Artificial intelligence, 1993
Bundy, A., A. Stevens, F. van Harmelen, A. Ireland and A. Smaill, Rippling: a heuristic for guiding inductive proofs, Artificial Intelligence 62 (1993) 185-253.
2009
We have implemented a program for inductive theory formation, called IsaCoSy, which synthesises conjectures about recursively defined datatypes and functions. Only irreducible terms are generated, which keeps the search space tractably small. The synthesised terms are filtered through counterexample checking and then passed on to the automatic inductive prover IsaPlanner. Experiments have given promising results, with high recall of 83% for natural numbers and 100% for lists when compared to libraries for the Isabelle theorem prover. However, precision is somewhat lower, 38-63%.
1999
Mathematical induction is required for reasoning about objects or events containing repetition, e.g. computer programs with recursion or iteration, or electronic circuits with feedback loops or parameterized components. Thus mathematical induction is a key enabling technology for the use of formal methods in information technology. Failure to automate inductive reasoning is one of the major obstacles to the widespread use of formal methods in industrial hardware and software development. Recent developments in automatic theorem proving promise significant improvements in our ability to automate mathematical induction. As a result of these developments, the functionality of inductive theorem provers has begun to improve. Moreover, there are some promising signs that even more significant improvements are possible. This enlarges the applicability of automated induction to "real world" problems, and research topics for applications were discussed at the seminar. Automated induction is a relatively small subfield of automated reasoning. Research is based on two competing paradigms, each having its merits but also its shortcomings compared with the other: • Implicit induction evolved from Knuth-Bendix completion, and most of the work based on this paradigm was performed by researchers concerned with term rewriting systems in general. • Explicit induction has its roots in traditional automated theorem proving. It resembles the more familiar idea of theorem proving by induction, where induction axioms are explicitly given and specific inference techniques are tailored for proving base and step formulas. This seminar brought together leading scientists from both areas to discuss recent advancements within both paradigms, to evaluate and compare the state of the art, and to work for a synthesis of both approaches.
It summarized the results of a series of workshops held on automated induction in conjunction with the CADE conferences 1992 (Saratoga Springs) and 1994 (Nancy) and the AAAI conference 1993 (Washington DC). The success of this meeting was due in no small part to the Dagstuhl Seminar Center and its staff for creating such a friendly and productive environment. The organizers and participants greatly appreciate their effort. The organizers also thank Jürgen Giesl and Martin Protzen for their support in many organizational details.
Journal of Computer and System Sciences, 1975
Manna's theorem on (partial) correctness of programs essentially states that in the statement of the Floyd inductive assertion method, "A flow diagram is correct with respect to given initial and final assertions if suitable intermediate assertions can be found," we may replace "if" by "if and only if." In other words, the method is complete. A precise formulation and proof for the flow chart case is given. The theorem is then extended to programs with (parameterless) recursion; for this the structure of the intermediate assertions has to be refined considerably. The result is used to provide a characterization of recursion which is an alternative to the minimal fixed point characterization, and to clarify the relationship between partial and total correctness. Important tools are the relational representation of programs, and Scott's induction.
We introduce a general static analysis framework to reason about program properties at an infinite number of runtime control points, called instances. Infinite sets of instances are represented by rational languages. Based on this instancewise framework, we extend the concept of induction variables to recursive programs. For a class of monoid-based data structures, including arrays and trees, induction variables capture the exact memory location accessed at every step of the execution. This compile-time characterization is computed in polynomial time as a rational function.
1999
We present an automatic method which combines logical proof search and rippling heuristics to prove specifications. The key idea is to instantiate meta-variables in the proof with a simultaneous match based on rippling/reverse rippling heuristic. Underlying our rippling strategy is the rippling distance strategy which introduces a new powerful approach to rippling, as it avoids termination problems of other rippling strategies. Moreover, we are able to synthesize conditional substitutions for meta-variables in the proof. The strength of our approach is illustrated by discussing the specification of the integer square root and automatically synthesizing the corresponding algorithm. The described procedure has been integrated as a tactic into the NuPRL system but it can be combined with other proof methods as well.
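The integer square root mentioned above has the specification: given n ≥ 0, find r with r² ≤ n < (r+1)². The sketch below is a hand-written reference implementation of that specification, not the algorithm the paper's procedure synthesizes.

```python
# Reference implementation of the integer square root specification
#   r*r <= n < (r+1)*(r+1)
# written by hand for illustration; the paper synthesizes such an
# algorithm automatically from the specification.

def isqrt(n):
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

# check the specification on a bounded range
assert all(isqrt(n) ** 2 <= n < (isqrt(n) + 1) ** 2 for n in range(1000))
```

The while-condition is exactly the negation of the postcondition's upper bound, which is why the loop exit immediately establishes the specification.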
1993
A for-loop is somewhat similar to an inductive argument. Just as the truth of a proposition P(n + 1) depends on the truth of P(n), the correctness of iteration n+1 of a for-loop depends on iteration n having been completed correctly. This paper presents the induce-construct, a new programming construct based on the form of inductive arguments. It is more expressive than the for-loop yet less expressive than the while-loop. Like the for-loop, it is always terminating. Unlike the for-loop, it allows the convenient and concise expression of many algorithms. Where the for-loop traverses a set of consecutive natural numbers, the induce-construct generalizes to other data types. The induce-construct is presented in two forms, one for imperative languages and one for functional languages. The expressive power of languages in which this is the only recursion construct is greater than primitive recursion: it is the class of multiply recursive functions in the first-order case and the set of functions expressible in Gödel's system T in the general case. Data Types: We consider languages in which some of the data types are defined by recursion, as in Hoare's 'Recursive Data Types' [8] or the language ML. The examples in this paper use the following (polymorphic) types: tree2(α) = empty | node(α, tree2(α), tree2(α)); list(α) = nil | α :: list(α); treen(α) = empty | node(α, list(treen(α))); natural = 0 | natural′.
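One way to picture the functional form of such a construct is as a combinator that consumes exactly one constructor of its data type per step, so termination is guaranteed by construction. The sketch below is a hypothetical Python rendering for naturals and lists; the names and signatures are invented for illustration and are not the paper's notation.

```python
# Hypothetical rendering of an always-terminating structural-recursion
# construct: each call performs exactly one step per constructor of the
# input (successor for naturals, cons cell for lists), so it cannot
# loop forever the way a while-loop can.

def induce_nat(n, base, step):
    # structural recursion on a natural number: exactly n applications
    # of step, visiting 0, 1, ..., n-1
    acc = base
    for i in range(n):
        acc = step(i, acc)
    return acc

def induce_list(xs, base, step):
    # structural recursion on a list, folding from the tail
    acc = base
    for x in reversed(xs):
        acc = step(x, acc)
    return acc

# algorithms expressed with the construct as the only recursion form
factorial = lambda n: induce_nat(n, 1, lambda i, acc: (i + 1) * acc)
length = lambda xs: induce_list(xs, 0, lambda x, acc: acc + 1)

assert factorial(5) == 120
assert length([7, 8, 9]) == 3
```

Because step is applied once per constructor, these functions terminate for every input, mirroring the guaranteed-termination property the abstract claims for the induce-construct.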
This paper presents how to automatically prove that an "optimized" program is correct with respect to a set of given properties, that is, a specification. Proofs of specifications contain logical and computational parts. Programs can be seen as the computational parts of proofs. They can thus be extracted from proofs and be certified to be correct. The inverse problem can also be solved: it is possible to reconstruct proof obligations from a program and its specification [18, 19]. The framework is a type theory where a proof can be represented as a typed term [2, 14], in particular the Calculus of Inductive Constructions [7]. This paper shows how programs can be simplified in order to be written in a way much closer to their ML counterparts. Indeed, proof structures are often much heavier than program structures. The problem is consequently to consider natural programs (in the ML sense) and see how to retrieve natural proof structures from them.