2001
Tabling is an implementation technique that improves the declarativeness and expressiveness of Prolog by reusing solutions to goals. Quite a few interesting applications of tabling have been developed in the last few years, and several are by nature non-deterministic. This raises the question of whether parallel search techniques can be used to improve the performance of tabled applications. In this work we demonstrate that the mechanisms proposed to parallelize search in the context of SLD resolution naturally generalize to parallel tabled computations, and that the resulting systems can achieve good performance on multi-processors. To do so, we present the OPTYap parallel engine. In our system, individual SLG engines communicate data through stack copying. Completion is detected through a novel parallel completion algorithm that builds upon the data structures proposed for or-parallelism. Scheduling is simplified by building on previous research on or-parallelism. We show initial performance results for our implementation. Our best result is for an actual application, model checking, where we obtain linear speedups.
2002
Tabling, or memoing, is a technique where one stores intermediate answers to a problem so that they can be reused in further calls. Tabling is of interest to logic programming because it addresses some of the most significant weaknesses of Prolog. Namely, it can guarantee termination for programs with the bounded term-size property. Tabled programs exhibit a more complex execution mechanism than traditional Prolog's left-to-right search with backtracking.
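As a minimal illustrative sketch (the relation and fact names are invented for this example, not taken from the paper), the standard left-recursive transitive-closure program shows the termination guarantee: under plain SLD resolution the first clause recurses forever, while with a table directive the answers to path/2 are memoized and the query terminates even on a cyclic graph.

    :- table path/2.                        % memoize calls and answers to path/2

    path(X, Y) :- path(X, Z), edge(Z, Y).   % left recursion: loops under plain SLD
    path(X, Y) :- edge(X, Y).

    edge(a, b).
    edge(b, c).
    edge(c, a).                             % cycle: untabled evaluation would not terminate

    % ?- path(a, W).  enumerates W = b, c, a exactly once each and then stops.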
ACM Transactions on Programming Languages and Systems, 2001
Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of non-determinism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this paper is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The paper describes the major techniques used for shared-memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.
Parallel Computing, 1992
This paper presents two algorithms as extensions to the basic Resolution Principle of logic programs which exploit parallelism retaining the full semantics of Prolog. The first algorithm, called SPU, allows parallel execution of unifications belonging to deterministic paths of the proof tree, giving in effect AND-parallel execution. The second algorithm, called MPU, retains the benefits of SPU while exploiting OR-parallelism. We also present simulation results which are indicative of the performance of the proposed algorithms and finally we discuss implementation issues which give rise to the development of a parallel machine.
2010
The paradigm of Tabled Logic Programming (TLP) is now supported by a number of Prolog systems, including XSB, YAP Prolog, B-Prolog, Mercury, ALS, and Ciao. The reasons for this are partly theoretical: tabling ensures termination and optimal known complexity for queries to a large class of programs. However, the overriding reasons are practical. TLP allows sophisticated programs to be written concisely and efficiently, especially when mechanisms such as tabled negation and call and answer subsumption are supported. As a result, TLP has now been used in a variety of applications from program analysis to querying over the semantic web. This paper provides a survey of TLP and its applications as implemented in XSB Prolog, along with discussion of how XSB supports tabling with dynamically changing code, and in a multi-threaded environment.
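As a hedged sketch of tabled negation of the kind the abstract mentions (this is the textbook win/1 game example, not code from the paper; it assumes tnot/1 as the system's tabled-negation operator, as in XSB), a position is won if some move leads to a position that is not won; tabling evaluates this under the well-founded semantics even when positions depend on each other through negation.

    :- table win/1.

    % A position is won if some move reaches a position that is not won.
    win(X) :- move(X, Y), tnot(win(Y)).

    move(a, b).
    move(b, a).
    move(b, c).

    % c has no moves, so win(c) is false; hence win(b) is true and win(a) is false.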
Computing Research Repository, 2003
Logic programming languages, such as Prolog, provide a high-level, declarative approach to programming. Logic Programming offers great potential for implicit parallelism, thus allowing parallel systems to often reduce a program's execution time without programmer intervention. We believe that for complex applications that take several hours, if not days, to return an answer, even limited speedups from parallel execution can directly translate to very significant productivity gains. It has been argued that Prolog's evaluation strategy, SLD resolution, often limits the potential of the logic programming paradigm. The past years have therefore seen widening efforts at increasing Prolog's declarativeness and expressiveness. Tabling has proved to be a viable technique to efficiently overcome SLD's susceptibility to infinite loops and redundant subcomputations. Our research demonstrates that implicit or-parallelism is a natural fit for logic programs with tabling. To substantiate this belief, we have designed and implemented an or-parallel tabling engine, OPTYap, and we used a shared-memory parallel machine to evaluate its performance. To the best of our knowledge, OPTYap is the first implementation of a parallel tabling engine for logic programming systems. OPTYap builds on Yap's efficient sequential Prolog engine. Its execution model is based on the SLG-WAM for tabling, and on environment copying for or-parallelism. Preliminary results indicate that the mechanisms proposed to parallelize search in the context of SLD resolution can indeed be effectively and naturally generalized to parallelize tabled computations, and that the resulting systems can achieve good performance on shared-memory parallel machines. More importantly, the results reinforce our belief that by applying or-parallelism and tabling to logic programs, the range of applications for Logic Programming can be broadened.
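To illustrate the redundant-subcomputation problem that tabling addresses (a generic sketch, not an example from the paper), the naive Fibonacci relation below takes exponential time under plain SLD because the same subgoals are re-derived over and over; with the table directive each fib(N, _) subgoal is evaluated once and its answer reused, giving linear behavior.

    :- table fib/2.                  % each fib(N, _) subgoal is computed only once

    fib(0, 0).
    fib(1, 1).
    fib(N, F) :-
        N > 1,
        N1 is N - 1, N2 is N - 2,
        fib(N1, F1), fib(N2, F2),
        F is F1 + F2.

    % ?- fib(30, F).  returns F = 832040 without recomputing shared subgoals.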