1991
Many researchers have tried to exploit the implicit parallelism of logic languages by parallelizing the execution of independent clauses. However, this approach has the disadvantage of heavy overhead for process scheduling and synchronization, for data migration, and for collecting the results. In this paper a different approach is proposed: the data-parallel one. The focus is on large collections of data, and the core idea is to parallelize the execution of element-wise operations.
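The data-parallel idea described above can be sketched outside a logic language as well. The following is a minimal illustrative sketch (not the paper's system): an element-wise operation is applied across a collection in parallel, with the operation `square` being a hypothetical stand-in.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # hypothetical element-wise operation
    return x * x

def parallel_map(op, xs, workers=4):
    # apply `op` to every element in parallel; results keep input order
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(op, xs))

results = parallel_map(square, range(8))
```

Note that the per-element work is independent, so no scheduling or synchronization beyond the pool itself is needed, which is exactly the overhead the data-parallel approach aims to avoid.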
Computer Languages, 1996
Much work has been done in the areas of and-parallelism and data parallelism in Logic Programs. Such work has proceeded to a certain extent in an independent fashion. Both types of parallelism offer advantages and disadvantages. Traditional (and-) parallel models offer generality, being able to exploit parallelism in a large class of programs (including that exploited by data parallelism techniques). Data parallelism techniques on the other hand offer increased performance for a restricted class of programs. The thesis of this paper is that these two forms of parallelism are not fundamentally different and that relating them opens the possibility of obtaining the advantages of both within the same system. Some relevant issues are discussed and solutions proposed. The discussion is illustrated through visualizations of actual parallel executions implementing the ideas proposed.
Abstract One of the advantages of logic programming (LP) and constraint logic programming (CLP) is the fact that one can exploit implicit parallelism in logic programs. Logic programs have two major forms of implicit parallelism: or-parallelism (ORP) and and-parallelism (ANDP). In this work we survey some of the work that has been taking place within the CLoPn project towards fast execution of logic programs.
Lecture Notes in Computer Science, 1998
Computer Languages, 1992
Information Processing Letters, 1994
An efficient backward execution method for AND-parallelism in logic programs is proposed. It is devised through a close examination of a relation over literals, which can be represented by paths in the data dependency graphs. The scheme is more efficient than previous work in the sense that it performs fewer unfruitful resetting (and consequently, canceling) operations. Its rationale is also intuitively clear and natural. Furthermore, the analyses needed for the proposal are largely achieved at compile time, which suggests practical efficiency.
2000
One of the advantages of logic programming is the fact that it offers many sources of implicit parallelism, such as and-parallelism and or-parallelism. Arguably, or-parallel systems, such as Aurora and Muse, have been the most successful parallel logic programming systems so far. Or-parallel systems rely on techniques such as Environment Copying to address the problem that branches being explored in parallel may need to assign different bindings for the same shared variable.
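The Environment Copying technique mentioned above can be illustrated with a small sketch (this is an analogy, not the Aurora/Muse implementation): each or-branch receives its own copy of the binding environment, so parallel branches can bind the same shared variable to different values without conflict. The dictionary-as-environment and the names `explore` and `X` are illustrative assumptions.

```python
import copy

def explore(env, var, candidates):
    # try each alternative binding for `var` in an independent copy of env
    results = []
    for value in candidates:
        branch_env = copy.deepcopy(env)   # the "copying" step: branch gets its own environment
        branch_env[var] = value           # branch-local binding; siblings are unaffected
        results.append(branch_env)
    return results

base = {"X": None}
branches = explore(base, "X", [1, 2, 3])
```

The copy cost is paid once per branch, after which each branch proceeds with no synchronization on variable bindings, which is the trade-off copying-based or-parallel systems make.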
2001
Abstract One of the advantages of logic programming is that it offers several sources of implicit parallelism, such as and-parallelism (ANDP) and or-parallelism (ORP). Recent research has concentrated on integrating the different forms of parallelism into a single combined system. In this work we deal with the problem of integrating ORP and independent and-parallelism (IAP), the two forms of parallelism most suitable for parallel Prolog systems.
In this work, we present an automatic way to parallelize logic programs for finding all the answers to queries using a transformation to low-level threading primitives. Although much work was done on the parallelization of logic programming more than a decade ago (e.g., Aurora, Muse, YapOR), the current state of parallelizing logic programs is still very poor. This work presents a way to parallelize tabled logic programs in XSB Prolog under the well-founded semantics. An important contribution of this work lies in merging answer tables from multiple children threads without incurring copying or full sharing and synchronization of data structures. The implementation of the parent-children shared answer tables surpasses in efficiency all the other data structures currently implemented for completion of answers in parallelization using multi-threading. The transformation and its lower-level answer-merging predicates were implemented as an extension to the XSB system.
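The copy-free merging idea can be sketched as follows (an illustrative analogy, not the XSB data structures): each child thread fills a private answer list, and the parent "merges" by chaining the child lists in place rather than copying answers into a single table. The worker function and the doubling "answer computation" are hypothetical stand-ins for solving a subgoal.

```python
import itertools
import threading

def child_worker(goal_inputs, out):
    # stand-in for solving a subgoal; each child writes only to its own table
    for x in goal_inputs:
        out.append(x * 2)

def solve_in_parallel(partitions):
    tables = [[] for _ in partitions]          # one private answer table per child
    threads = [threading.Thread(target=child_worker, args=(p, t))
               for p, t in zip(partitions, tables)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # merge without copying individual answers: iterate the child tables in place
    return list(itertools.chain.from_iterable(tables))

answers = solve_in_parallel([[1, 2], [3, 4]])
```

Because no table is shared between children while they run, no per-answer locking is needed; the only synchronization point is the join before the final chaining.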
2015
Declarative programming in the style of functional and logic programming has been hailed as an alternative parallel programming style where computer programs are automatically parallelized without programmer control. Although this approach removes many pitfalls of explicit parallel programming, it hides important information about the underlying parallel architecture that could be used to improve the scalability and efficiency of programs. In this paper, we present a novel programming model that allows the programmer to reason about thread state in data-driven declarative programs. This abstraction has been implemented on top of Linear Meld, a linear logic programming language that is designed for writing graph-based programs. We present several programs that show the flavor of our new programming model, including graph algorithms and a machine learning algorithm. Our goal is to show that it is possible to take advantage of architectural details without losing the key advantages of l...
Parallel Computing, 1999
The OASys (Or/And SYStem) is a software implementation designed for AND/OR-parallel execution of logic programs. In order to combine these two types of parallelism, OASys considers each alternative path as a totally independent computation (leading to OR-parallelism) which consists of a conjunction of determinate subgoals (leading to AND-parallelism). This computation model is motivated by the need to eliminate communication between processing elements (PEs). OASys aims towards a distributed memory architecture in which the PEs performing the OR-parallel computation possess their own address space, while other simple processing units are assigned the AND-parallel computation and share the same address space. OASys execution is based on distributed scheduling which allows either recomputation of paths or environment copying. We discuss the OASys execution scheme in detail and demonstrate OASys effectiveness by presenting results obtained by a prototype implementation running on a network of workstations. The results show that the speedup obtained by AND/OR-parallelism is greater than the speedups obtained by exploiting AND- or OR-parallelism alone. In addition, comparative performance measurements show that copying has a minor advantage over recomputation.
The Concurrent Prolog predicate for merging n input streams is investigated, and a compilation technique for obtaining efficient code for it is presented. Using the technique, data are transferred with a delay independent of n. Furthermore, it is shown that the addition and the removal of an input stream can be done within an average time of O(1). The predicate for distributing data on an input stream to n output streams can also be realized as efficiently as n-ary merge. The compilation technique for the distribute predicate can further be applied to the implementation of mutable arrays that allow constant-time accessing and updating. Although the efficiency stated above could be achieved by a sophisticated compiler, the codes should be provided directly by the system to get rid of the bulk of source programs and the time required to compile them.

1 INTRODUCTION When we implement a large-scale distributed system in a parallel logic programming language such as Concurrent Prolog (Shapiro 1983) and PARLOG (Clark and Gregory 1984), the performance of the system will be influenced significantly by how efficiently streams as interprocess communication channels can be merged and distributed. This paper deals with implementation techniques for the predicates that merge many input streams and those which distribute data on a single input stream into multiple output streams. The language we chose for the following discussion is Concurrent Prolog. However, the results obtained are applicable also to PARLOG. For readers unfamiliar with Concurrent Prolog, an outline of Concurrent Prolog is given in Appendix I. This paper focuses on implementation on conventional sequential computers. Of course, to demonstrate the viability of Concurrent Prolog on parallel computers, the scope of discussion cannot be limited to sequential computers.
However, even on a parallel architecture, it would be very likely for each processor to deal with multiple processes for the following reasons. First, the number of processes a user can create should not be limited to the number of processors available. Second, even if a lot of processors are available, the best way to allocate two processes which communicate intensively with each other
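The constant-delay n-ary merge described above can be sketched in a thread-based analogy (illustrative only, not the Concurrent Prolog compilation technique): all input streams feed one shared queue, so transferring an item costs a single enqueue regardless of the number n of streams, unlike a tree of binary merges where delay grows with n.

```python
import queue
import threading

def producer(items, out):
    # one enqueue per item: transfer delay is independent of how many
    # producers share the output queue
    for item in items:
        out.put(item)

def merge_streams(streams):
    out = queue.Queue()
    threads = [threading.Thread(target=producer, args=(s, out)) for s in streams]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # all producers are done, so draining the queue here is safe
    return [out.get() for _ in range(out.qsize())]

merged = merge_streams([[1, 2], [3, 4], [5]])
```

Adding or removing an input stream amounts to starting or finishing one producer thread, which mirrors the O(1) average cost of stream addition and removal claimed for the n-ary merge predicate.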
ACM SIGPLAN Notices, 1998
1987
A logic programming environment should provide users with declarative control of program development and execution, and of resource access and allocation. It is argued that the concurrent logic language PARLOG is well suited to the implementation of such environments. The essential features of the PARLOG Programming System (PPS) are presented. The PPS is a multiprocessing programming environment that supports PARLOG (and is intended to support Prolog). Users interact with the PPS by querying and updating collections of logic clauses termed databases. The PPS understands certain clauses as describing system configuration, the status of user deductions, and the rules determining access to resources. Other clauses are understood as describing meta-relationships such as inheritance between databases. The paper introduces the facilities of the PPS and explains the essential structure of its implementation in PARLOG by a top-down development of a PARLOG program which reads as a specification of a multiprocessing operating system.
Theory and Practice of Logic …, 2005