2023, APhEx, 28, 322-338.
This paper examines a central problem in the philosophy of computer science, namely the question of whether computer programs can be verified by means of a mathematical proof. Firstly, program verification is defined and the classical debate on its feasibility is recalled in the light of a dual ontology of computational systems. Secondly, the current resurgence of the debate is analysed, underlining its logical and philosophical motivations. Finally, it is shown how, by adopting a stratified ontology for computational systems, one may recombine the different positions in the debate, arguing that program verification involves both deductive and inductive reasoning.
1993
Series Preface. Prologue: Computer Science and Philosophy T.R. Colburn. Part I: The Mathematical Paradigm. Towards a Mathematical Science of Computation J. McCarthy. Proof of Algorithms by General Snapshots P. Naur. Assigning Meanings to Programs R.W. Floyd. An Axiomatic Basis for Computer Programming C.A.R. Hoare. Part II: Elaborating the Paradigm. First Steps towards Inferential Programming W.L. Scherlis, D.S. Scott. Mathematics of Programming C.A.R. Hoare. On Formalism in Specifications B. Meyer. Formalization in Program Development P. Naur. Part III: Challenges, Limits, and Alternatives. Formalism and Prototyping in the Software Process B.I. Blum. Outline of a Paradigm Change in Software Engineering C. Floyd. The Place of Strictly Defined Notation in Human Insight P. Naur. Limits of Correctness in Computers B.C. Smith. Part IV: Focus on Formal Verification. Social Processes and Proofs of Theorems and Programs R.A. DeMillo, R.J. Lipton, A.J. Perlis. Program Verification: The Very ...
Communications of the ACM, 1988
The notion of program verification appears to trade upon an equivocation. Algorithms, as logical structures, are appropriate subjects for deductive verification. Programs, as causal models of those structures, are not. The success of program verification as a generally applicable and completely reliable method for guaranteeing program performance is not even a theoretical possibility. I hold the opinion that the construction of computer programs is a mathematical activity like the solution of differential equations, that programs can be derived from their specifications through mathematical insight, calculation, and proof, using algebraic laws as simple and elegant as those of elementary arithmetic. Popper advocates the conception of science as an objective process of "trial and error" whose results are always fallible, while Kuhn emphasizes the social dimension of scientific communities in accepting and rejecting what he calls "paradigms." See, for example, [23], [32-33]. A fascinating collection of papers discussing their similarities and differences is presented in [25].
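The contrast drawn here between deductively verifiable algorithms and their causal realizations can be made concrete. A minimal, illustrative sketch (not from the paper): a Floyd-Hoare-style specification for integer summation, with the precondition, loop invariant, and postcondition written as runtime assertions. The proof obligation is purely deductive; the assertions merely check particular executions of one causal realization.

```python
# Illustrative sketch: Floyd-Hoare-style annotations for summation,
# expressed as runtime assertions. Specification (hypothetical, for
# illustration): sum_to(n) == n*(n+1)//2 for all n >= 0.

def sum_to(n: int) -> int:
    assert n >= 0                                  # precondition
    total, i = 0, 0
    while i < n:
        # loop invariant: total == i*(i+1)//2 and 0 <= i <= n
        assert total == i * (i + 1) // 2 and 0 <= i <= n
        i += 1
        total += i
    assert total == n * (n + 1) // 2               # postcondition
    return total
```

On the paper's view, discharging the invariant by algebra verifies the algorithm; whether any physical run of this code satisfies the assertions remains an empirical question about the machine executing it.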
Minds and Machines, 2013
Questions concerning the epistemological status of computer science are, in this paper, answered from the point of view of the formal verification framework. State space reduction techniques adopted to simplify computational models in model checking are analysed in terms of the Aristotelian abstractions and Galilean idealizations characterizing the inquiry of empirical systems. Methodological considerations drawn here are employed to argue in favour of the scientific understanding of computer science as a discipline. Specifically, reduced models gained by Data Abstraction are acknowledged as Aristotelian abstractions that include only those data which are sufficient to examine the executions of interest. The present study highlights how the need to maximize incompatible properties is at the basis of both Abstraction Refinement, the process of generating a cascade of computational models to achieve a balance between simplicity and informativeness, and the Multiple Model Idealization approach in biology. Finally, fairness constraints, imposed on computational models to allow only fair behaviours, are defined as ceteris paribus conditions under which temporal formulas, formalizing software requirements, acquire the status of law-like statements about the executions of software systems.
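The idea of Data Abstraction described here can be sketched in a few lines. The following toy example (not from the paper; all names are hypothetical) abstracts an unbounded counter, stepped by +2 from 0, down to its parity: the concrete state space is infinite, but the abstract model is finite and still suffices to check the property "the counter is never odd".

```python
# Illustrative sketch of Data Abstraction in explicit-state model
# checking. Concrete states are integers (0, 2, 4, ...); the abstraction
# function keeps only the parity, collapsing them to a finite model.

def abstract(n):
    # abstraction function: a concrete state is mapped to its parity
    return n % 2

def reachable_abstract(init=0):
    # explicit-state reachability computed entirely on the abstract domain
    seen = set()
    frontier = [abstract(init)]
    while frontier:
        a = frontier.pop()
        if a in seen:
            continue
        seen.add(a)
        # abstract successor of the concrete transition n -> n + 2
        frontier.append(abstract(a + 2))
    return seen
```

Since the only reachable abstract state is parity 0, the property holds of every concrete execution; the abstraction retained just the data sufficient for this question, in the spirit of an Aristotelian abstraction.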
Artificial Intelligence, 1974
This paper describes a theorem prover that embodies knowledge about programming constructs, such as numbers, arrays, lists, and expressions. The program can reason about these concepts and is used as part of a program verification system that uses the Floyd-Naur explication of program semantics. It is implemented in the QA4 language; the QA4 system allows many pieces of strategic knowledge, each expressed as a small program, to be coordinated so that a program stands forward when it is relevant to the problem at hand. The language allows clear, concise representation of this sort of knowledge. The QA4 system also has special facilities for dealing with commutative functions, ordering relations, and equivalence relations; these features are heavily used in this deductive system. The program interrogates the user and asks his advice in the course of a proof. Verifications have been found for Hoare's FIND program, a real-number division algorithm, and some sort programs, as well as for many simpler algorithms. Additional theorems have been proved about a pattern matcher and a version of Robinson's unification algorithm. The appendix contains a complete, annotated listing of the deductive system and annotated traces of several of the deductions performed by the system.
2000
Much research in computer science, ever since its inception, has been devoted to the problem: "How can we be sure that a computer program is correct?" The general problem is extremely difficult, and the enormous variety of computer software in use demands a corresponding variety of approaches: e.g., structured design methods [YC86], automated testing [Ber91], and model checking [GL94].
Annals of the History of Computing, IEEE, 2003
Lecture Notes in Computer Science, 2008
A perspective on program verification is presented from the point of view of a university professor who has been active over a period of 35 years in the development of formal methods and their supporting tools. To date he has educated approximately 25 Ph.D. researchers in those fields and has written two handbooks in the field of program verification, one unifying known techniques for proving data refinement, and the other on compositional verification of concurrent and distributed programs, and communication-closed layers. This essay closes by formulating a grand challenge worthy of modern Europe. 1 Background Conjecture: It has become a real possibility that Germany's most powerful industrialist, Jürgen Schrempp, heading the largest industry of Germany, DaimlerChrysler, will be fired next year because his company has not paid sufficient attention to improving the reliability of the software of its prime product, Mercedes Benz cars. For, as a consequence of the poor quality of the top range of Mercedes Benz limousines, BMW has now replaced Mercedes Benz as the leading top-range car manufacturer in Germany. And this fact is unpalatable to the main shareholders of DaimlerChrysler (Deutsche Bank, e.g.). The underlying reason for this fact is that 60% of the current production of Mercedes Benz cars has to be frequently called back because of software failures, the highest percentage of any car manufacturer in the world. And this percentage cannot be changed in, say, a year, the period of time Schrempp has to defend his industrial strategy to his shareholders again (this year his defense took place on April 6, 2005). This conjecture is at least the second of its kind: the Pentium Bug convinced the top-level chip manufacturers that chips should be reliable and bug-free to the extent that any bug occurring after the production phase should be removable, at least to the extent that patches should be applicable circumventing those bugs.
A third fact, not a conjecture, would be that two crashes of a fully loaded Airbus 380 due to software failure in a row would lead to the demise of the European aircraft industry. And one such crash of the Airbus 380 would have
Lecture Notes in Computer Science, 2001
In this paper we describe a new protocol that we call the Curry-Howard protocol between a theory and the programs extracted from it. This protocol leads to the expansion of the theory and the production of more powerful programs. The methodology we use for automatically extracting "correct" programs from proofs is a development of the well-known Curry-Howard process. Program extraction has been developed by many authors (see, for example, [9], [5] and [12]), but our presentation is ultimately aimed at a practical, usable system and has a number of novel features. These include 1. a very simple and natural mimicking of ordinary mathematical practice and likewise the use of established computer programs when we obtain programs from formal proofs, and 2. a conceptual distinction between programs on the one hand, and proofs of theorems that yield programs on the other. An implementation of our methodology is the Fred system. 1 As an example of our protocol we describe a constructive proof of the well-known theorem that every graph of even parity can be decomposed into a list of disjoint cycles. Given such a graph as input, the extracted program produces a list of the (non-trivial) disjoint cycles as promised. Research partly supported by ARC grant A 49230989. The authors are deeply indebted to John S. Jeavons and Bolis Basit who produced the graph-theoretic proof that we use. 1 The name Fred stands for "Frege-style dynamic [system]". Fred is written in C++ and runs under Windows 95/98/NT only because this is a readily accessible platform. See http://www.csse.monash.edu.au/fred 2 We write "correct" because the word is open to many interpretations. In this paper the interpretation is that the program meets its specifications.
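The behaviour promised of the extracted program can be illustrated directly. The following sketch is not the Fred system's extracted code; it is a hypothetical Python rendering of the underlying graph-theoretic fact, peeling edge-disjoint closed walks off an even-degree multigraph: starting a walk at any vertex with remaining edges, even degrees guarantee the walk can only get stuck back at its starting vertex.

```python
# Illustrative sketch (not the extracted Fred program): decompose an
# even-degree multigraph into edge-disjoint cycles by repeatedly walking
# from a vertex with unused edges until the walk returns to it.

from collections import defaultdict

def cycle_decomposition(edges):
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def pop_edge(u, v):
        # consume one copy of the undirected edge {u, v}
        adj[u].remove(v)
        adj[v].remove(u)

    cycles = []
    for start in list(adj):
        while adj[start]:
            # every vertex has even degree, so after entering a vertex
            # an unused edge always remains to leave by, and the walk
            # can terminate only back at `start`
            path, u = [start], start
            while True:
                v = adj[u][0]
                pop_edge(u, v)
                path.append(v)
                u = v
                if u == start:
                    break
            cycles.append(path)
    return cycles
```

For two disjoint triangles, `cycle_decomposition([(1, 2), (2, 3), (3, 1), (4, 5), (5, 6), (6, 4)])` yields the two closed walks `[1, 2, 3, 1]` and `[4, 5, 6, 4]`, using every edge exactly once.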
Grazer Philosophische Studien
In philosophical contexts, logical formalisms are often resorted to as a means to render the validity and invalidity of informal arguments formally transparent. Since Oliver (1967) and Massey (1975), however, it has been recognized in the literature that identifying valid arguments is easier than identifying invalid ones. Still, any viable theory of adequate logical formalization should at least reliably identify valid arguments. This paper argues that accounts of logical formalization as developed by Blau (1977) and Brun (2004) do not meet that benchmark. The paper ends by suggesting different strategies to remedy the problem. * We profited a lot from intensive discussions with Georg Brun about his book and earlier drafts of this paper. Moreover, we are grateful to Sebastian Leugger, Johannes Marti, Tim Raez, and to the anonymous referees for this journal for helpful comments on an earlier draft.