1993
In a declarative programming language a computation is expressed in a static fashion, as a list of declarations. A program in such a language is regarded as a specification that happens to be executable as well. In this textbook we focus on a subclass of the declarative languages, the functional programming languages, sometimes called applicative languages. In these languages a program consists of a list of function definitions. The execution of a program consists of the evaluation of a function application given the functions that have been defined.
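The description above can be made concrete with a small Haskell sketch (the names are illustrative): a program is a list of function definitions, and execution is the evaluation of one function application given those definitions.

```haskell
-- A program as a list of function definitions; running it
-- means evaluating a single function application.
square :: Int -> Int
square x = x * x

sumOfSquares :: [Int] -> Int
sumOfSquares = sum . map square

main :: IO ()
main = print (sumOfSquares [1, 2, 3])  -- evaluates the application sumOfSquares [1,2,3]
```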
Functional Programming Languages and Computer Architecture, 1987
Clean is an experimental language for specifying functional computations in terms of graph rewriting. It is based on an extension of Term Rewriting Systems (TRS) in which the terms are replaced by graphs. Such a Graph Rewriting System (GRS) consists of a, possibly cyclic, directed graph, called the data graph and graph rewrite rules which specify how this data graph may be rewritten. Clean is designed to provide a firm base for functional programming. In particular, Clean is suitable as an intermediate language between ...
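Such a graph rewrite rule can be sketched in Haskell (a hypothetical encoding, not Clean's actual notation): nodes live in a map keyed by node identifier, so children are shared by reference, and a rule overwrites a matched node in place.

```haskell
import qualified Data.Map as Map

-- Hypothetical term-graph encoding: nodes refer to their children
-- by identifier (Int), so subgraphs can be shared.
data Node = Lit Int | Add Int Int deriving (Eq, Show)
type Graph = Map.Map Int Node

-- One rewrite rule: an Add node whose children are both literals
-- is overwritten with their sum; references to the node stay valid.
step :: Int -> Graph -> Graph
step n g
  | Just (Add i j) <- Map.lookup n g
  , Just (Lit a)   <- Map.lookup i g
  , Just (Lit b)   <- Map.lookup j g = Map.insert n (Lit (a + b)) g
  | otherwise                        = g
```

With `Map.fromList [(0, Add 1 1), (1, Lit 3)]`, node 1 is shared: a single rewrite of node 0 yields `Lit 6` without visiting the child twice.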
1997
The lambda calculus forms, without any question, the theoretical backbone of functional programming languages. For the design and implementation of the lazy functional language Concurrent Clean we have used a related computational model: Term Graph Rewriting Systems (TGRSs). This paper wraps up our main conclusions after 10 years of experience with graph rewriting semantics for functional programming languages.
Lecture Notes in Computer Science, 1991
Graph rewriting models are well suited to serve as the basic computational model for functional languages and their implementation. Graphs are used to share computations, which is needed to make efficient implementations of functional languages on sequential hardware possible. When graphs are rewritten (reduced) on parallel loosely coupled machine architectures, subgraphs have to be copied from one processor to another such that sharing is lost. In this paper we introduce the notion of lazy copying. With lazy copying it is possible to duplicate a graph without duplicating work. Lazy copying can be combined with simple annotations which control the order of reduction. In principle, only interleaved execution of the individual reduction steps is possible. However, a condition is deduced under which parallel execution is allowed. When only certain combinations of lazy copying and annotations are used it is guaranteed that this so-called non-interference condition is fulfilled. Abbreviations for these combinations are introduced. Now complex process behaviours, such as process communication on a loosely coupled parallel machine architecture, can be modelled. This also includes a special case: modelling multiprocessing on a single processor. Arbitrary process topologies can be created. Synchronous and asynchronous process communication can be modelled. The implementation of the language Concurrent Clean, which is based on the proposed graph rewriting model, has shown that complicated parallel algorithms which can go far beyond divide-and-conquer-like applications can be expressed.
2012
Abstract This paper presents a new functional programming model for graph structures called structured graphs. Structured graphs extend conventional algebraic datatypes with explicit definition and manipulation of cycles and/or sharing, and offer a practical and convenient way to program graphs in functional programming languages like Haskell. The representation of sharing and cycles (edges) employs recursive binders and uses an encoding inspired by parametric higher-order abstract syntax.
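A minimal Haskell sketch of the idea (greatly simplified from the paper's actual library; names here are illustrative): a cycle is introduced by an explicit binder, and the type parameter `v` keeps bound variables abstract, in the PHOAS style the abstract mentions.

```haskell
-- Cyclic streams with an explicit cycle binder (PHOAS-style sketch).
data CStream v a
  = Var v                      -- reference back to an enclosing binder
  | Mu (v -> CStream v a)      -- introduce a cycle
  | Cons a (CStream v a)       -- an ordinary constructor

-- Unroll a cyclic stream into an ordinary (infinite) Haskell list
-- by tying the knot at each Mu binder.
unroll :: CStream [a] a -> [a]
unroll (Var xs)   = xs
unroll (Mu f)     = let xs = unroll (f xs) in xs
unroll (Cons a s) = a : unroll s

-- the cyclic stream 1, 2, 1, 2, ...
oneTwo :: CStream v Int
oneTwo = Mu (\v -> Cons 1 (Cons 2 (Var v)))
```

Because the cycle is part of the datatype, it can be observed and transformed explicitly, rather than being an invisible knot in the heap.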
2018
This article considers some aspects of functional programming that are used for parallel computation and for building asynchronous applications, on the basis of referential transparency, pure functions, persistent data structures and data immutability. It reviews the features that functional algorithms such as Map/Reduce possess in use cases involving parallel processing of huge datasets. Keywords—distributed programming, functional programming, execution in parallel.
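The Map/Reduce pattern described above can be sketched in Haskell (function names are illustrative): both stages are pure, so each partition could be processed by a different worker with no shared state, and the results combined with an associative operator.

```haskell
import Data.List (foldl')

-- Map/Reduce over partitioned data: each inner list is a partition.
mapStage :: (a -> b) -> [[a]] -> [[b]]
mapStage f = map (map f)

-- Reduce each partition locally, then combine the partial results.
reduceStage :: (b -> b -> b) -> b -> [[b]] -> b
reduceStage op z = foldl' op z . map (foldl' op z)

-- example job: sum of squares across partitions
sumOfSquaresMR :: [[Int]] -> Int
sumOfSquaresMR = reduceStage (+) 0 . mapStage (\x -> x * x)
```

Because the stages are referentially transparent, the partitions can be evaluated in any order, or in parallel, without changing the result.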
Proceedings of the 21st Scientific Conference “Scientific Services & Internet – 2019”, 2019
It is proposed to add a static type system to the dataflow functional model of parallel computing and to the dataflow functional parallel programming language developed on its basis. The use of static typing increases the possibility of transforming dataflow functional parallel programs into programs running on modern parallel computing systems. Language constructions are proposed, and their syntax and semantics are described. It is noted that the single-assignment principle must be followed when forming data storages of a particular type. The features of instrumental support for the proposed approach are considered.
Scientific Conference “Scientific Services & Internet”, 2021
The article presents the results of an analysis of modern trends in functional programming, considered as a methodology for solving the problems of organizing parallel computing. A paradigm analysis of functional programming languages and systems is carried out. Taking paradigmatic features into account is useful for predicting how application processes will unfold, as well as for planning their study and development. Functional programming helps improve the performance of programs by allowing their prototypes to be prepared in advance. The complexity of creating programs for solving new problems is noted, as is the role of paradigmatic decomposition of programs in the technology of developing long-lived programs. The perspective of functional programming as a universal technique for solving complex problems, burdened with difficult ...
Proceedings of the 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2016
Declarative programming has been hailed as a promising approach to parallel programming since it makes it easier to reason about programs while hiding the implementation details of parallelism from the programmer. However, its advantage is also its disadvantage as it leaves the programmer with no straightforward way to optimize programs for performance. In this paper, we introduce Coordinated Linear Meld (CLM), a concurrent forward-chaining linear logic programming language, with a declarative way to coordinate the execution of parallel programs allowing the programmer to specify arbitrary scheduling and data partitioning policies. Our approach allows the programmer to write graph-based declarative programs and then optionally to use coordination to fine-tune parallel performance. In this paper we specify the set of coordination facts, discuss their implementation in a parallel virtual machine, and show, through examples, how they can be used to optimize parallel execution. We compare the execution of CLM programs against the original uncoordinated Linear Meld and several other frameworks.
We present in this paper a graph-narrowing abstract machine which has been designed to support a sequential eager implementation of a functional logic language. Our approach has been to extend a purely functional, (programmed) graph reduction machine by mechanisms capable of performing unification and backtracking. We describe the structure of the machine and explain the compilation scheme which generates machine code from a given source program. Both the machine and the compilation scheme have been formally specified. A prototype emulator of the machine has been implemented in Occam on a transputer system. Future work is planned for incorporating lazy evaluation and parallelism to the machine.
1997
A methodology is developed for mapping a wide class of concurrent logic languages (CLLs) onto Dactl, a compiler target language based on generalized graph rewriting. We show how features particular to the generalized graph rewriting model (such as non-root overwrites and sharing) can be used to implement CLLs. We identify problems in the mapping of a concurrent logic program to an equivalent set of rewrite rules and provide solutions. We also show some important optimizations and compilation techniques that can be adopted in the process. Finally, we take advantage of the underlying graph reduction model to enhance a concurrent logic program with some capabilities found usually only in functional languages such as lazy evaluation, sharing of computation and higher order programming.
Lean is an experimental language for specifying computations in terms of graph rewriting. It is based on an alternative to Term Rewriting Systems (TRS) in which the terms are replaced by graphs. Such a Graph Rewriting System (GRS) consists of a set of graph rewrite rules which specify how a graph may be rewritten. Besides supporting functional programming, Lean also describes imperative constructs and allows the manipulation of cyclic graphs. Programs may exhibit non-determinism as well as parallelism. In particular, Lean can serve as an intermediate language between declarative languages and machine architectures, both sequential and parallel. This paper is a revised version of Barendregt et al. (1987b) which was presented at the ESPRIT, PARLE conference in Eindhoven, The Netherlands, June 1987.
1992
The seminar emphasized four issues: static program analysis; extensions for programmer control of parallelism; functional+logic languages and constraints; and implementation of functional languages. There were two formal discussion sessions, which addressed the problems of input/output in functional languages and the utility of static program analysis. It is gratifying that the first Dagstuhl seminar in this area (Functional Languages: Optimization for Parallelism) had stimulated many developments which were reported at this one. A particular feature of this seminar was the large number of prototypes which were demonstrated and which vividly illustrated the issues raised in discussions and presentations. Static program analysis has been thoroughly investigated for optimising sequential implementations, but parallel ones offer new problems. Discovering properties of synchronisation, for example, requires richer domains than those used in the sequential setting, leading to a combinatorial explosion in cost. Current sequential analyses operate at or beyond the limits of today's algorithm technology. The most expensive aspect is computing fixpoints, which requires a convergence test and therefore a decision procedure for equality of abstract values. New work reported here helps reduce the need for convergence tests, and, using the algebraic properties of the operators in the resulting straight-line code, partial evaluation can be used to generate very efficient parallel code with a high degree of processor utilization. This is exemplified by the well-known problem of parallel evaluation of expressions (and, more generally, straight-line code) over a semi-ring. In this context, partial evaluation is evaluation over the induced polynomial semi-ring, which can be executed in parallel by tree contraction.
1994
Parallel functional programming has a relatively long history. Burge was one of the first to suggest the basic technique of evaluating function arguments in parallel, with the possibility of functions absorbing unevaluated arguments and perhaps also exploiting speculative evaluation [22]. Berkling also considered the application of functional languages to parallel processing [17].
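Burge's basic technique can be sketched with GHC's `par` and `pseq` combinators (these come from the `parallel` package, which is an assumption here, not part of the surveyed work): evaluation of one argument is sparked alongside evaluation of the other.

```haskell
import Control.Parallel (par, pseq)  -- from the `parallel` package

-- Evaluate both recursive arguments in parallel:
-- `par` sparks evaluation of `a` on another capability,
-- `pseq` forces `b` locally before combining the results.
parFib :: Int -> Int
parFib n
  | n < 2     = n
  | otherwise = let a = parFib (n - 1)
                    b = parFib (n - 2)
                in a `par` (b `pseq` a + b)
```

The result is the same as the sequential version; only the evaluation order (and, with `-threaded`, the degree of parallelism) changes.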
Journal of Programming Languages, 1997
A generalized computational model based on graph rewriting is presented along with Dactl, an associated compiler target (intermediate) language. An illustration of the capability of graph rewriting to model a variety of computational formalisms is presented by showing how some examples written originally in a number of languages can be described as graph rewriting transformations using Dactl notation. This is followed by a formal presentation of the Dactl model before giving a formal definition of the syntax and semantics of the language. Some implementation issues are also discussed.
Lecture Notes in Computer Science, 2001
We identify a set of programming constructs ensuring that a programming language based on graph transformation is computationally complete. These constructs are (1) nondeterministic application of a set of graph transformation rules, (2) sequential composition and (3) iteration. This language is minimal in that omitting either sequential composition or iteration results in a computationally incomplete language. By computational completeness we refer to the ability to compute every computable partial function on labelled graphs. Our completeness proof is based on graph transformation programs which encode arbitrary graphs as strings, simulate Turing machines on these strings, and decode the resulting strings back into graphs.
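The three constructs can be sketched as Haskell combinators over nondeterministic steps (an illustrative encoding, not the paper's formal notation; lists model the set of possible outcomes of a step).

```haskell
-- A nondeterministic step on an abstract state s.
type Step s = s -> [s]

-- (1) nondeterministic application of a set of rules
choice :: [Step s] -> Step s
choice rules s = concatMap ($ s) rules

-- (2) sequential composition
andThen :: Step s -> Step s -> Step s
andThen p q s = concatMap q (p s)

-- (3) iteration: apply a step as long as it is applicable
asLongAsPossible :: Step s -> Step s
asLongAsPossible p s = case p s of
  [] -> [s]
  ss -> concatMap (asLongAsPossible p) ss
```

For example, with `dec n = [n - 1 | n > 0]`, iterating `dec` from 3 terminates with the single result 0.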
Theoretical Computer Science, 2001
A framework is presented for designing parallel programming languages whose semantics is functional and where communications are explicit. To this end, Brookes and Geva's generalized concrete data structures are specialized with a notion of explicit data layout to yield a CCC of distributed structures called arrays. Arrays' symmetric replicated structures, suggested by the data-parallel SPMD paradigm, are found to be incompatible with sum types. We then outline a functional language with explicitly distributed (monomorphic) concrete types, including higher-order, sum and recursive ones. In this language, programs can be as large as the network and can observe communication events in other programs. Such flexibility is missing from current data-parallel languages and amounts to a fusion with their so-called annotations, directives or meta-languages.