2015
Constructive type theories, such as that of Martin-Löf, allow program construction and verification to take place within a single system: proofs may be read as programs and propositions as types. However, parts of proofs may be seen to be irrelevant from a computational viewpoint. We show how a form of abstract interpretation may be used to detect computational redundancy in a functional language based upon Martin-Löf's type theory. Thus, without making any alteration to the system of type theory itself, we present an automatic way of discovering and removing such redundancy. We also note that the strong normalisation property of type theory means that proofs of correctness of the abstract interpretation are simpler, being based upon a set-theoretic rather than a domain-theoretic semantics.
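The kind of redundancy detection this abstract describes can be illustrated with a tiny abstract interpretation over a two-point demand domain. The sketch below is an invented toy in Python, not the paper's analysis: the expression language, the domain, and all names are assumptions made for illustration only.

```python
# Two-point abstract domain: is a subterm's value ever needed?
UNUSED, USED = "unused", "used"

# Expressions (hypothetical encoding):
#   ("lit", n), ("pair", a, b), ("fst", e), ("snd", e)

def needed_lits(demand, expr):
    """Propagate demand through an expression, reporting for each
    literal whether its value can ever reach the result."""
    tag = expr[0]
    if tag == "lit":
        return [(expr[1], demand)]
    if tag == "pair":
        return needed_lits(demand, expr[1]) + needed_lits(demand, expr[2])
    if tag == "fst":
        return demand_pair(demand, UNUSED, expr[1])
    if tag == "snd":
        return demand_pair(UNUSED, demand, expr[1])
    raise ValueError(tag)

def demand_pair(d_fst, d_snd, expr):
    # A projection pushes demand into one component of a literal pair;
    # for any other expression we conservatively join the two demands.
    if expr[0] == "pair":
        return needed_lits(d_fst, expr[1]) + needed_lits(d_snd, expr[2])
    join = USED if USED in (d_fst, d_snd) else UNUSED
    return needed_lits(join, expr)
```

On `("fst", ("pair", x, y))` the analysis marks every literal inside `y` as unused, which is exactly the kind of computationally redundant subterm a compiler could then erase.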
1996
Constructive type theories, such as that of Martin-Löf, allow program construction and verification to take place within a single system: proofs may be read as programs and propositions as types. However, parts of proofs may be seen to be irrelevant from a computational viewpoint. We show how a form of abstract interpretation may be used to detect computational redundancy in a functional language based upon Martin-Löf's type theory.
This thesis presents a new algorithm for Normalisation by Evaluation for different type theories. This algorithm is later used to define a type-checking algorithm and to prove other meta-theoretical results about predicative type systems.
Logical Methods in Computer Science, 2011
We define a logical framework with singleton types and one universe of small types. We give the semantics using a PER model; it is used for constructing a normalisation-by-evaluation algorithm. We prove completeness and soundness of the algorithm, and obtain as a corollary the injectivity of type constructors. Then we give the definition of a correct and complete type-checking algorithm for terms in normal form. We extend the results to proof-irrelevant propositions. (A. Abel, T. Coquand, and M. Pagano; 1998 ACM Subject Classification: F.4.1.)
Proceedings of the 2005 ACM SIGPLAN workshop on Haskell - Haskell '05, 2005
Proof assistants based on dependent type theory are closely related to functional programming languages, and so it is tempting to use them to prove the correctness of functional programs. In this paper, we show how Agda, such a proof assistant, can be used to prove theorems about Haskell programs. Haskell programs are translated into an Agda model of their semantics, by translating via GHC's Core language into a monadic form specially adapted to represent Haskell's polymorphism in Agda's predicative type system. The translation can support reasoning about either total values only, or total and partial values, by instantiating the monad appropriately. We claim that, although these Agda models are generated by a relatively complex translation process, proofs about them are simple and natural, and we offer a number of examples to support this claim.
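The monadic instantiation described above can be shown in miniature: the same semantic definitions, parameterized by bind, cover total values when bottom never arises and partial values when it does. This is an illustrative Python sketch under invented names; the paper's actual translation targets Agda, not Python.

```python
# The semantic monad, instantiated for partial values: BOTTOM is the
# denotation of a diverging computation. (Names invented for
# illustration only.)
BOTTOM = object()

def bind(m, f):
    """Kleisli bind for the partial-value monad: bottom propagates."""
    return BOTTOM if m is BOTTOM else f(m)

def head_p(xs):
    """A monadic model of Haskell's head: bottom on the empty list."""
    return BOTTOM if not xs else xs[0]
```

Reasoning about total values only corresponds to never introducing `BOTTOM`, in which case `bind` degenerates to plain function application, mirroring the identity-monad instantiation.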
2008
We show how programming language semantics and definitions of their corresponding type systems can both be written in a single framework amenable to proofs of soundness. The framework is based on full rewriting logic (not to be confused with context reduction or term rewriting), where rules can match anywhere in a term (or configuration).
Computer Languages, 1990
Adoption of a scientific approach to language design, featuring the identification of language design with programming, leads to the advocacy of an explicitly hierarchical approach to language development. Choice of the lambda-calculus as basis of the linguistic hierarchy is determined when "elegance" is the predominant criterion for assessing the quality of designs. The consequence is that the hierarchy is primarily comprised of a family of untyped functional languages. We embark upon the hierarchical development of a quality data type system for such languages, including a powerful mechanism for generic data abstraction, in five stages. We show that the expressiveness of the system, in comparison with the widespread "polymorphic" typing approach, is well worth the price of the necessarily dynamic checking for type conformance. Keywords: applicative (functional) programming; software tools and techniques; extensible languages.
Journal of Functional Programming, 2002
Type systems are indispensable in modern higher-order, polymorphic languages. An important explanation for Haskell's and ML's popularity is their advanced type system, which helps a programmer in finding program errors before the program is run. Although the type system in a polymorphic language is important, the reported error messages are often poor. The goal of this research is to improve the quality of error messages for ill-typed expressions. In a unification-based system type conflicts are often detected far from the source of the conflict. To indicate the actual source of a type conflict an analysis of the complete program is necessary. For example, if there are three occurrences where x::Int and only one where x::Bool, we expect that there is something wrong with the occurrence of x::Bool. The order in which subexpressions occur should not influence the reported error. Unfortunately, this is not the case for unification-based systems. This article presents another approach to inferring the type of an expression. A set of typing rules is given together with a type assignment algorithm. From the rules and the program at hand we construct a set of constraints on types. This set replaces the unification of types in a unification-based system. If the set is inconsistent, some constraints are removed from the set and error messages are constructed. Several heuristics are used to determine which constraint is to be removed. With this approach we increase the chance that the actual source of a type conflict is reported. As a result we are able to produce more precise error messages.
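The majority-vote idea behind the x::Int versus x::Bool example can be phrased as a constraint-filtering heuristic: collect one constraint per occurrence, then blame the minority. The sketch below is a deliberately simplified illustration in Python, not the article's actual constraint solver.

```python
from collections import Counter

def blame(constraints):
    """Given one (variable, type) constraint per occurrence, report
    the strict-minority occurrences as the likely source of a type
    conflict, independent of the order occurrences appear in."""
    counts = {}
    for var, ty in constraints:
        counts.setdefault(var, Counter())[ty] += 1
    return [(var, ty) for var, ty in constraints
            if 2 * counts[var][ty] < sum(counts[var].values())]
```

Because the whole constraint set is inspected before anything is blamed, reordering the occurrences does not change the answer, unlike left-to-right unification.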
1999
Research in dependent type theories [M-L71a] has, in the past, concentrated on its use in the presentation of theorems and theorem-proving. This thesis is concerned mainly with the exploitation of the computational aspects of type theory for programming, in a context where the properties of programs may readily be specified and established. In particular, it develops technology for programming with dependent inductive families of datatypes.
2007
Martin-Löf's type theory can be described as an intuitionistic theory of iterated inductive definitions developed in a framework of dependent types. It was originally intended to be a full-scale system for the formalization of constructive mathematics, but has also proved to be a powerful framework for programming. The theory integrates an expressive specification language (its type system) and a functional programming language (where all programs terminate). There now exist several proof-assistants based on type theory, and many non-trivial examples from programming, computer science, logic, and mathematics have been implemented using these. In this series of lectures we shall describe type theory as a theory of inductive definitions. We emphasize its open nature: much like in a standard functional language such as ML or Haskell the user can add new types whenever there is a need for them. We discuss the syntax and semantics of the theory. Moreover, we present some examples ...
Information Processing Letters, 2012
We describe a derivational approach to proving the equivalence of different representations of a type system. Different ways of representing type assignments are convenient for particular applications such as reasoning or implementation, but some kind of correspondence between them should be proven. In this paper we address two such semantics for type checking: one, due to Kuan et al., in the form of a term rewriting system and the other in the form of a traditional set of derivation rules. By employing a set of techniques investigated by Danvy et al., we mechanically derive the correspondence between a reduction-based semantics for type-checking and a traditional one in the form of derivation rules, implemented as a recursive descent. The correspondence is established through a series of semantics-preserving functional program transformations.
ACM SIGPLAN Notices, 2003
We develop an explicit two level system that allows programmers to reason about the behavior of effectful programs. The first level is an ordinary ML-style type system, which confers standard properties on program behavior. The second level is a conservative extension of the first that uses a logic of type refinements to check more precise properties of program behavior. Our logic is a fragment of intuitionistic linear logic, which gives programmers the ability to reason locally about changes of program state. We provide a generic resource semantics for our logic as well as a sound, decidable, syntactic refinement-checking system. We also prove that refinements give rise to an optimization principle for programs. Finally, we illustrate the power of our system through a number of examples.
1998
Written while visiting the Computer Science Department of the University of Torino in November/December 1997. The evaluation rules for PCF are a subset of the ones for PCFP (in Figure 2.2), since the rules APP 2 and PROJ i (i ∈ {1, 2}) are not needed. All the results in the previous sections apply to PCF as well. 2.6 PCFP with primitive recursion. In this section we introduce an extension of PCFP, called PCFP T, which is more suitable for expressing programs extracted from formal proofs. The language PCFP T is obtained from PCFP by adding a program constructor for primitive recursion over natural numbers (rec). There are also two constructors for specifying simplified uses of primitive recursion: the constructor it (for iteration) and the constructor case. PCFP T, which is a variant of Gödel's system T, is the language considered in Part III of this thesis. The term formation rules for the new constructors are in Fig. 2.4, and the evaluation rules are in Fig. 2.5. Since for every PCFP T term there is an equivalent (w.r.t. the interpretation in the closed term model described in Section 2.4) PCFP term (see Fact 2.10 below), all the results for PCFP presented in the previous sections apply to PCFP T as well.
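The three constructors rec, it and case can be modelled directly over Python's non-negative integers. This is an illustrative sketch of Gödel's-T-style primitive recursion, not the thesis's PCFP T syntax or evaluation rules.

```python
def rec(z, s, n):
    """Godel's T recursor over naturals (n a Python int >= 0):
       rec z s 0 = z;  rec z s (n+1) = s(n, rec z s n)."""
    acc = z
    for k in range(n):
        acc = s(k, acc)
    return acc

def it(z, s, n):
    """Iteration: a simplified use of rec ignoring the predecessor."""
    return rec(z, lambda _, acc: s(acc), n)

def case(z, s, n):
    """Case analysis: a simplified use of rec ignoring the recursion."""
    return rec(z, lambda k, _: s(k), n)

def add(m, n):
    """Addition defined by iteration, as in system T."""
    return it(m, lambda x: x + 1, n)
```

Both `it` and `case` are definable from `rec` by discarding one of its two arguments, which is exactly why they count as simplified uses of primitive recursion.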
Lecture Notes in Computer Science, 2009
Rewriting logic semantics (RLS) was proposed as a programming language definitional framework that unifies operational and algebraic denotational semantics; see [MR04, MR07] and the references there. Once a language is defined as an RLS theory, many generic tools are immediately available for use with no additional cost to the designer. These include a formal inductive theorem proving environment, an efficient interpreter, a state space explorer, and even a model checker. RLS has already been used to define a series of didactic and real languages.
ACM SIGPLAN Notices, 2017
While type soundness proofs are taught in every graduate PL class, the gap between realistic languages and what is accessible to formal proofs is large. In the case of Scala, it has been shown that its formal model, the Dependent Object Types (DOT) calculus, cannot simultaneously support key metatheoretic properties such as environment narrowing and subtyping transitivity, which are usually required for a type soundness proof. Moreover, Scala and many other realistic languages lack a general substitution property. The first contribution of this paper is to demonstrate how type soundness proofs for advanced, polymorphic, type systems can be carried out with an operational semantics based on high-level, definitional interpreters, implemented in Coq. We present the first mechanized soundness proofs in this style for System F and several extensions, including mutable references. Our proofs use only straightforward induction, which is significant, as the combination of big-step semantics...
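The definitional-interpreter style this paper mechanizes in Coq can be sketched with a fuel parameter that makes big-step evaluation a total function. The Python toy below is an assumption-laden illustration (term encodings and names are invented), not the paper's Coq development.

```python
# A definitional interpreter for the untyped lambda calculus with
# numerals, written with a fuel counter so evaluation always
# terminates. Terms (hypothetical encoding): ("var", i) with i a de
# Bruijn index into the environment, ("lam", body), ("app", f, a),
# ("lit", n).

def eval_tm(fuel, env, tm):
    if fuel == 0:
        return None                      # out of fuel: timeout, not an error
    tag = tm[0]
    if tag == "lit":
        return ("int", tm[1])
    if tag == "var":
        return env[tm[1]] if tm[1] < len(env) else None
    if tag == "lam":
        return ("clo", env, tm[1])       # closure captures the environment
    if tag == "app":
        f = eval_tm(fuel - 1, env, tm[1])
        a = eval_tm(fuel - 1, env, tm[2])
        if f is None or a is None or f[0] != "clo":
            return None                  # stuck term: ruled out by typing
        return eval_tm(fuel - 1, [a] + f[1], f[2])
    raise ValueError(tag)
```

Soundness for such an interpreter can be stated as: for well-typed terms, the result is never a stuck term, only a value or a fuel timeout, and this is provable by straightforward induction on fuel.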
Mathematical Structures in Computer Science, 1993
The study of type theory may offer a uniform language for modular programming, structured specification and logical reasoning. We develop an approach to program specification and data refinement in a type theory with a strong logical power and nice structural mechanisms to show that it provides an adequate formalism for modular development of programs and specifications. Specification of abstract data types is considered, and a notion of abstract implementation between specifications is defined in the type theory and studied as a basis for correct and modular development of programs by stepwise refinement. The higher-order structural mechanisms in the type theory provide useful and flexible tools (specification operations and parameterized specifications) for modular design and structured specification. Refinement maps (programs and design decisions) and proofs of implementation correctness can be developed by means of the existing proof development systems based on type theories.
Meseguer and Rosu [MR04, MR07] proposed rewriting logic semantics (RLS) as a programming language definitional framework that unifies operational and algebraic denotational semantics. Once a language is defined as an RLS theory, many generic tools are immediately available for use with no additional cost to the designer. These include a formal inductive theorem proving environment, an efficient interpreter, a state space explorer, and even a model checker. RLS has already been used to define a series of didactic and real languages [MR04, MR07], but its benefits in connection with defining and reasoning about type systems have not been fully investigated yet. This paper shows how the same RLS style employed for giving formal definitions of languages can be used to define type systems. The same term-rewriting mechanism used to execute RLS language definitions can now be used to execute type systems, giving type checkers or type inferencers. Since both the language and its type system are ...
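The idea of executing a type system with the same rewriting machinery can be mimicked with a small-step rewrite relation in which a well-typed term rewrites to its type. This is a toy Python sketch under invented constructors, not an actual RLS/Maude definition.

```python
# Terms (hypothetical encoding): ("lit", n), ("bool", b),
# ("add", a, b), ("if", c, t, e). The types "Int" and "Bool" are
# themselves terms of the rewrite system.

def step(t):
    """One rewrite step toward the type, applicable anywhere in t."""
    if not isinstance(t, tuple):
        return None                     # already a type: no redex
    tag = t[0]
    if tag == "lit":
        return "Int"
    if tag == "bool":
        return "Bool"
    for i in range(1, len(t)):          # rewrite inside any subterm
        s = step(t[i])
        if s is not None:
            return t[:i] + (s,) + t[i + 1:]
    if tag == "add" and t[1:] == ("Int", "Int"):
        return "Int"
    if tag == "if" and t[1] == "Bool" and t[2] == t[3]:
        return t[2]
    return None

def check(t):
    """Rewrite to normal form: a well-typed term ends at its type; an
    ill-typed one gets stuck at a non-type term, exposing the error."""
    while (s := step(t)) is not None:
        t = s
    return t
```

A stuck normal form such as `("add", "Int", "Bool")` plays the role of a type error message: the residual term shows exactly which rule failed to apply.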
Anales de la 25 …, 1996
We propose a series of work areas related to type theory and functional programming. By type theory we mean the formulation of Martin-Löf's set theory using the theory of types as logical framework, extended with record types and subtyping. The areas presented are: the ...
2007
The decidability of equality is proved for Martin-Löf type theory with a universe à la Russell and typed beta-eta-equality judgements. A corollary of this result is that the constructor for dependent function types is injective, a property which is crucial for establishing the correctness of the type-checking algorithm. The decision procedure uses normalization by evaluation, an algorithm which first interprets terms in a domain with untyped semantic elements and then extracts normal forms. The correctness of this algorithm is established using a PER-model and a logical relation between syntax and semantics. Intensional type theories such as Coq, Agda, and Epigram all rely on the decision algorithm for a : A using normalization: the user attempts to prove the theorem A by building a construction a, and the system checks that a is indeed a proof of A.
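The two-phase shape of the algorithm, interpreting terms in an untyped semantic domain and then extracting normal forms, can be shown on the plain untyped lambda calculus. The Python sketch below is a much-simplified stand-in (no universe, no typed equality judgements), with invented encodings.

```python
# Terms use de Bruijn indices: ("var", i), ("lam", body),
# ("app", f, a). Semantic values are Python closures ("fun", f) or
# neutrals ("ne", n) built from de Bruijn levels.

def eval_tm(env, tm):
    """Interpret a term in the untyped semantic domain."""
    tag = tm[0]
    if tag == "var":
        return env[tm[1]]
    if tag == "lam":
        return ("fun", lambda v: eval_tm([v] + env, tm[1]))
    f, a = eval_tm(env, tm[1]), eval_tm(env, tm[2])
    if f[0] == "fun":
        return f[1](a)
    return ("ne", ("app", f, a))        # neutral application

def reify(k, v):
    """Extract a normal-form term from a semantic value; k counts
    binders passed, converting levels back to indices."""
    if v[0] == "fun":
        return ("lam", reify(k + 1, v[1](("ne", ("lvl", k)))))
    ne = v[1]
    if ne[0] == "lvl":
        return ("var", k - ne[1] - 1)
    return ("app", reify(k, ne[1]), reify(k, ne[2]))

def nf(tm):
    return reify(0, eval_tm([], tm))
```

Deciding equality of two terms then reduces to computing `nf` for each and comparing the results syntactically, which is exactly how such a decision procedure is used for checking a : A.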
1994
This paper gives an introduction to type theory, focusing on its recent use as a logical framework for proofs and programs. The first two sections give a background to type theory intended for the reader who is new to the subject. The following sections present Martin-Löf's monomorphic type theory and an implementation, ALF, of this theory. Finally, a few small tutorial examples in ALF are given. This work has been done within the ESPRIT Basic Research Action "Types for proofs and programs." It has been funded by NUTEK and Chalmers. An earlier version was published in the EATCS bulletin no. 52, February 1994.