1995, IEEE Software
The principle behind concurrent logic programming is a set of processes which cooperate in monotonically constraining a global set of variables to particular values. Each process will have access to only some of the variables, and a process may bind a variable to a tuple containing further variables which may be bound later by other processes. This is a suitable model for a coordination language. In this paper we describe a type system which ensures the cooperation principle is never breached, and which makes clear through syntax the pattern of data flow in a concurrent logic program. This overcomes problems previously associated with the practical use of concurrent logic languages.
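As a rough illustration of this model (a hedged Python sketch, not the paper's type system; the LogicVar class and the producer process are invented for this example), a single-assignment variable can be bound at most once, readers suspend until a value arrives, and a binding may itself contain further unbound variables bound later by other processes:

```python
import threading

class LogicVar:
    """Write-once (single-assignment) variable: binding is monotonic,
    and readers block until some process supplies a value."""
    def __init__(self):
        self._lock = threading.Lock()
        self._bound = threading.Event()
        self._value = None

    def bind(self, value):
        with self._lock:
            if self._bound.is_set():
                raise RuntimeError("co-operation breached: variable already bound")
            self._value = value
            self._bound.set()

    def read(self):
        self._bound.wait()          # suspend until another process binds the variable
        return self._value

def producer(x):
    tail = LogicVar()
    x.bind(("head", tail))          # bind to a tuple containing an unbound variable
    tail.bind("end")                # refine the structure later

x = LogicVar()
threading.Thread(target=producer, args=(x,)).start()
head, tail = x.read()
print(head, tail.read())            # -> head end
```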
1999
This article introduces a concurrent object-oriented language whose underlying operational semantics is based on the logic variable. The language is designed in response to Kahn's criticisms [Kahn 89] of previous attempts to build concurrent object-oriented languages on top of concurrent logic languages. We believe Aldwych is a language which removes the verbosity of concurrent logic language code without removing the power for abstract concurrent programming.
Journal of Logic and Computation, 2014
This work presents three increasingly expressive Dynamic Logics in which the programs are described in a language based on CCS. Our goal is to build dynamic logics that are suitable for the description and verification of properties of communicating concurrent systems, in a similar way as PDL is used for the sequential case. To accomplish this, CCS's operators and constructions are added to a basic modal logic. The semantics of CCS's parallel operator then allows us to build dynamic logics that support communicating concurrent programs. We build a simple Kripke semantics for these logics, provide complete axiomatizations for them and show that they have the finite model property. This contrasts with other dynamic logics with parallel operators presented in the literature, such as Peleg's Concurrent PDL with Channels, where either the parallel programs cannot communicate, or at least one of the properties mentioned above (simple Kripke semantics, complete axiomatization and finite model property) is missing.
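As a hedged illustration of the style of specification such logics aim at (the precise syntax and semantics of the three logics are defined in the paper; the formula below only suggests the PDL-like reading of a CCS term as a modality):

```latex
% Illustrative only: a sender a.0 and a receiver \bar{a}.P composed in parallel;
% the diamond asserts that some execution of the composed program can reach a
% state satisfying the proposition "done".
\langle\, a.\mathbf{0} \parallel \bar{a}.P \,\rangle\, \mathit{done}
```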
Logic Programming, 1995
Tempo is a declarative concurrent programming language based on classical first-order logic. It improves on traditional concurrent logic programming languages (e.g., Parlog) by explicitly specifying aspects of the behaviour of concurrent programs, namely their safety properties. This provides great advantages in writing concurrent programs and manipulating them while preserving correctness. The language has a procedural interpretation that allows the specification to be executed, also concurrently. Tempo is sufficiently high-level to simulate practical concurrent programming paradigms, and can act as a common framework in which algorithms for a variety of paradigms may be expressed, compared, and manipulated.
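A hedged sketch of the underlying idea of programming with safety properties (generic Python, not Tempo syntax; the mutual_exclusion predicate and the prefix encoding are invented for this illustration): a safety property is a predicate over finite execution prefixes, and a step is admissible only if the extended prefix still satisfies the property.

```python
def mutual_exclusion(prefix):
    """Safety: at most one process is between its 'enter' and 'exit' events."""
    inside = set()
    for pid, event in prefix:
        if event == "enter":
            if inside:
                return False
            inside.add(pid)
        elif event == "exit":
            inside.discard(pid)
    return True

def safe_to_take(prefix, step, prop=mutual_exclusion):
    # A scheduler would only allow steps that keep the prefix inside the property.
    return prop(prefix + [step])

trace = [(1, "enter")]
print(safe_to_take(trace, (2, "enter")))  # False: would violate mutual exclusion
print(safe_to_take(trace, (1, "exit")))   # True
```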
Lecture Notes in Computer Science, 2008
Coordination languages are often used to describe open-ended systems. This makes it challenging to develop tools for guaranteeing security of the coordinated systems and correctness of their interaction. Successful approaches to this problem have been based on type systems with dynamic checks; therefore, the correctness properties cannot be statically enforced. By contrast, static analysis approaches based on Flow Logic usually guarantee properties statically. In this paper we show how the insights from the Flow Logic approach can be used to construct a type system for statically ensuring secure access to tuple spaces and safe process migration for an extension of the language Klaim.
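A toy illustration of what "statically ensuring secure access to tuple spaces" amounts to in spirit (hypothetical Python with an invented policy format; the paper's system is a genuine type system over process terms, not this kind of table lookup): the operations a process intends to perform are checked against a policy before execution rather than trapped at run time.

```python
# Capabilities each process holds on the tuple space (invented policy format).
POLICY = {
    "workerA": {"out", "read"},
    "workerB": {"in", "out", "read"},
}

def check(process_name, operations, policy=POLICY):
    """Reject, before running, any tuple-space operation the policy forbids."""
    allowed = policy.get(process_name, set())
    return [op for op in operations if op not in allowed]

print(check("workerA", ["out", "read"]))   # [] : accepted
print(check("workerA", ["in"]))            # ['in'] : rejected statically
```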
Proceedings of the 18th …, 1991
Concurrent constraint programming [Sar89, SR90] is a simple and powerful model of concurrent computation based on the notions of store-as-constraint and process as information transducer. The store-as-valuation conception of von Neumann computing is replaced by the notion that the store is a constraint (a finite representation of a possibly infinite set of valuations) which provides partial information about the possible values that variables can take. Instead of "reading" and "writing" the values of variables, processes may now ask (check if a constraint is entailed by the store) and tell (augment the store with a new constraint). This is a very general paradigm which subsumes (among others) nondeterminate dataflow and the (concurrent) (constraint) logic programming languages. This paper develops the basic ideas involved in giving a coherent semantic account of these languages. Our first contribution is to give a simple and general formulation of the notion that a constraint system is a system of partial information (a la the information systems of Scott). Parameter passing and hiding is handled by borrowing ideas from the cylindric algebras of Henkin, Monk and Tarski to introduce diagonal elements and "cylindrification" operations (which mimic the projection of information induced by existential quantifiers).
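A minimal sketch of the ask/tell discipline (Python, with the store simplified to a set of ground facts; a real constraint system would use entailment over partial information, and the Store class here is invented for illustration):

```python
import threading

class Store:
    """Monotonically growing store: tell adds information, ask suspends
    until the requested constraint is present (here, a literal fact)."""
    def __init__(self):
        self._facts = set()
        self._cond = threading.Condition()

    def tell(self, fact):
        with self._cond:
            self._facts.add(fact)            # the store only grows
            self._cond.notify_all()

    def ask(self, fact):
        with self._cond:
            while fact not in self._facts:   # suspend until entailed
                self._cond.wait()

store = Store()

def agent():
    store.ask(("x", ">", 0))                 # blocks until the constraint is told
    store.tell(("y", "=", 1))

threading.Thread(target=agent).start()
store.tell(("x", ">", 0))
store.ask(("y", "=", 1))
print("both constraints entailed")
```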
2008
There is a large class of applications, notably those in high-performance computation (HPC), for which parallelism is necessary for performance, not expressiveness. Such applications are typically determinate and have no natural notion of deadlock. Unfortunately, today's dominant HPC programming paradigms (MPI and OpenMP) are based on imperative concurrency and do not guarantee determinacy or deadlock-freedom. This substantially complicates writing and debugging such code. We present a new concurrent model for mutable variables, the clocked final model, CF, that guarantees determinacy and deadlock-freedom. CF views a mutable location as a monotonic stream together with a global stability rule which permits reads to stutter (return a previous value) if it can be established that no other activity can write in the current phase. Each activity maintains a local index into the stream and advances it independently as it performs reads and writes. Computation is aborted if two different activities write different values in the same phase. This design unifies and extends several well-known determinate programming paradigms: single-threaded imperative programs, the "safe asynchrony" of [31], reader-writer communication via immutable variables, Kahn networks, and barrier-based synchronization. Since it is predicated quite narrowly on a re-analysis of mutable variables, it is applicable to existing sequential and concurrent languages, such as Jade, Cilk, Java and X10. We present a formal operational model for a specific CF language, MJ/CF, based on the MJ calculus of [15]. We present an outline of a denotational semantics based on a connection with default concurrent constraint programming. We show that CF leads to a very natural programming style: often an "obvious" shared-variable formulation provides the correct solution under the CF interpretation. We present several examples and discuss implementation issues.
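A much-simplified sketch of the per-phase write rule described above (hypothetical Python; it omits the stability analysis, per-activity stream indices and the MJ/CF formalization): a clocked variable records one value per phase, a second differing write within the same phase aborts, and a read in a phase with no write yet stutters to the previous phase's value.

```python
class ClockedVar:
    def __init__(self, initial):
        self.history = [initial]      # value at the end of each completed phase
        self.pending = None           # value written in the current phase, if any

    def write(self, value):
        if self.pending is not None and self.pending != value:
            raise RuntimeError("determinacy violation: conflicting writes in one phase")
        self.pending = value

    def read(self):
        # stutter: if nothing written this phase, return the previous phase's value
        return self.pending if self.pending is not None else self.history[-1]

    def next_phase(self):
        self.history.append(self.read())
        self.pending = None

v = ClockedVar(0)
v.write(5); v.write(5)        # repeated identical writes are fine
print(v.read())               # -> 5
v.next_phase()
print(v.read())               # -> 5 (stutters until someone writes in the new phase)
```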
Computer Languages, 1992
Lecture Notes in Computer Science, 1997
We present a new type system for TyCO, a name-passing calculus of concurrent objects. The system captures dynamic aspects of the behaviour of objects, namely non-uniform service availability. The notion of processes without errors is loosened, demanding only weak fairness in the treatment of messages.
… of the 1998 ACM symposium on …, 1998
I was asked to contribute a personal, historical perspective of logic programming (and presumably its relation to concurrency). I once wrote my personal perspective for Communications of the ACM in 1993 [1]. The article was also intended to record a detailed, historical account of the design process of Guarded Horn Clauses (GHC) and the kernel language (KL1) of the Japanese Fifth Generation Computer Systems (FGCS) project based on GHC. The readers are invited to read the CACM article, which is full of facts and inside stories. My view of the field remains basically the same after thirteen years. Another related article appeared in the "25-Year Perspective" book on logic programming [2], in which I tried to convey the essence of concurrent logic/constraint programming and its connection to logic programming in general. If your view of concurrent logic/constraint programming is approximately "an incomplete variant of logic programming," I would like to invite you to read [2] and update the view. In a word, it is a simple and powerful formalism of concurrency. In this article, I'll try to convey the essence of the field to the present audience, highlighting my beliefs, principles and lessons learned. I'll also briefly describe diverse offspring of concurrent logic/constraint programming.
Artificial Intelligence, 1999
Proceedings Sixth IEEE International Conference on Engineering of Complex Computer Systems. ICECCS 2000, 2000
The task of programming concurrent systems is substantially more difficult than the task of programming sequential systems with respect to both correctness and efficiency. In this paper we describe a constraint-based methodology for writing concurrent applications. A system is modeled as: (a) a set of processes containing a sequence of "markers" denoting the processes' points of interest; and (b) a constraint store. Process synchronization is specified by incrementally adding constraints on the markers' execution order into the constraint store. The constraint store contains a declarative specification based on a temporal constraint logic program. The store thus acts as a coordination entity which on the one hand encapsulates the system synchronization requirements, and on the other hand provides a declarative specification of the system's concurrency issues. This provides great advantages in writing concurrent programs and manipulating them while preserving correctness.
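An illustrative sketch of coordination through markers and ordering constraints (invented Python names; the paper's store is a temporal constraint logic program, not the hard-coded precedence set used here): each process announces when it reaches a marker, and the coordinator blocks it until every marker constrained to come earlier has occurred.

```python
import threading

class Coordinator:
    def __init__(self, precedences):
        self._prec = precedences            # set of (earlier, later) marker pairs
        self._done = set()
        self._cond = threading.Condition()

    def reach(self, marker):
        with self._cond:
            required = {a for (a, b) in self._prec if b == marker}
            while not required <= self._done:
                self._cond.wait()           # block until all predecessors occurred
            self._done.add(marker)
            self._cond.notify_all()

coord = Coordinator({("init", "work"), ("work", "report")})

def worker():
    coord.reach("work")
    print("working")

threading.Thread(target=worker).start()
coord.reach("init")
coord.reach("report")                       # only proceeds after "work" has occurred
print("all markers reached")
```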
2005
We introduce a novel way to integrate functional and concurrent programming based on intuitionistic linear logic. The functional core arises from interpreting proof reduction as computation. The concurrent core arises from interpreting proof search as computation. The two are tightly integrated via a monad that permits both sides to share the same logical meaning for the linear connectives while preserving their different computational paradigms. For example, concurrent computation synthesizes proofs which can be evaluated as functional programs. We illustrate our design with some small examples, including an encoding of the pi-calculus.
2012
Information flow for concurrent imperative languages is defined and studied. As a working formalism we use UNITY, where programs consist of sets of assignments executed randomly, i.e. without control flow. We study noninterference both for programs which reach a fixed point (a state which is not changed by subsequent execution) and for programs which do not. We present a logic formulation of noninterference as well as a type system for it.
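A minimal sketch of the UNITY-style execution model referred to above (Python; encoding the program as a list of state-transforming functions is an assumption made for this illustration): assignments are applied in random order, with no control flow, until no assignment changes the state.

```python
import random

def run_unity(state, assignments, max_steps=10_000):
    """Apply randomly chosen assignments until a fixed point is reached."""
    for _ in range(max_steps):
        if all(assign(dict(state)) == state for assign in assignments):
            return state                      # fixed point: no assignment changes the state
        assign = random.choice(assignments)
        state = assign(dict(state))
    return state

# Example program: x := min(x+1, 5) ; y := x   (executed in arbitrary order)
prog = [
    lambda s: {**s, "x": min(s["x"] + 1, 5)},
    lambda s: {**s, "y": s["x"]},
]
print(run_unity({"x": 0, "y": 0}, prog))      # -> {'x': 5, 'y': 5}
```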
Proceedings of the 1999 ACM symposium on Applied …, 1999
1996
Many different notions of property of interest and methods of verifying such properties arise naturally in programming. A general framework of Specification Structures is presented for combining different notions and methods in a coherent fashion. This is then applied to concurrency in the setting of Interaction Categories.
1999
Concurrent constraint programming is classically based on asynchronous communication via a shared store. In previous work ([1, 2]), we presented a new version of the ask and tell primitives which features synchronicity, our approach being based on the idea of telling new information just in the case that a concurrently running process is asking for it. We turn in this paper to a semantic study of this new framework, called Scc. It is first shown to be different in nature from classical concurrent constraint programming and from CCS, a classical reference in traditional concurrency theory. This suggests the interest of new semantics for Scc. To that end, an operational semantics reporting the steps of the computations is presented. A denotational semantics is then proposed. It uses monotonic sequences of labelled pairs of input-output states, possibly containing gaps, and ending, in keeping with the logic programming tradition, with marks reporting success or failure. This denotational semantics is proved to be correct with respect to the operational semantics as well as fully abstract. L. Brim and M. Křetínský thank the Grant Agency of the Czech Republic, grant 201/97/0456, for supporting their research.
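A rough sketch of the synchronous reading of ask and tell (Python; the SyncStore class is invented and does not reflect the Scc semantics, only the idea that a tell completes together with a matching ask): a tell of constraint c goes through only when some concurrent process is currently asking for c.

```python
import threading

class SyncStore:
    def __init__(self):
        self._cond = threading.Condition()
        self._asking = {}        # constraint -> number of processes currently asking
        self._facts = set()

    def ask(self, c):
        with self._cond:
            self._asking[c] = self._asking.get(c, 0) + 1
            self._cond.notify_all()              # announce the pending ask
            while c not in self._facts:
                self._cond.wait()
            self._asking[c] -= 1

    def tell(self, c):
        with self._cond:
            while self._asking.get(c, 0) == 0:   # wait for a matching ask
                self._cond.wait()
            self._facts.add(c)
            self._cond.notify_all()

store = SyncStore()
threading.Thread(target=lambda: store.ask("ready")).start()
store.tell("ready")          # completes only once the asker is waiting
print("synchronised on 'ready'")
```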
1987
A logic programming environment should provide users with declarative control of program development, execution, and resource access and allocation. It is argued that the concurrent logic language PARLOG is well suited to the implementation of such environments. The essential features of the PARLOG Programming System (PPS) are presented. The PPS is a multiprocessing programming environment that supports PARLOG (and is intended to support Prolog). Users interact with the PPS by querying and updating collections of logic clauses termed data bases. The PPS understands certain clauses as describing system configuration, the status of user deductions and the rules determining access to resources. Other clauses are understood as describing meta relationships such as inheritance between data bases. The paper introduces the facilities of the PPS and explains the essential structure of its implementation in PARLOG by a top-down development of a PARLOG program which reads as a specification of a multiprocessing operating system.