1997, Foundations of Software Technology and …
We study several data structures and operations that commonly arise in parallel implementations of logic programming languages. The main problems that arise in implementing such parallel systems are abstracted out and precisely stated, and upper and lower bounds are derived for them.
1991
Many researchers have tried to exploit the implicit parallelism of logic languages by parallelizing the execution of independent clauses. However, this approach has the disadvantage of requiring heavy overhead for process scheduling and synchronization, for data migration, and for collecting the results. This paper proposes a different approach, the data-parallel one: the focus is on large collections of data, and the core idea is to parallelize the execution of element-wise operations.
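The element-wise idea can be sketched in a few lines of Python (the predicate and the data below are our own illustration, not from the paper): the same test is applied independently to every element of a large collection, so the calls parallelize with no inter-clause scheduling or synchronization.

```python
from multiprocessing import Pool

# Hypothetical element-wise relation: the same test is applied
# independently to every element of a large collection.
def is_even(n: int) -> bool:
    return n % 2 == 0

if __name__ == "__main__":
    data = range(1_000_000)
    with Pool() as pool:
        # map distributes the element-wise calls across workers and
        # gathers the results back in order, mirroring the "collect
        # the results" step the abstract mentions.
        flags = pool.map(is_even, data, chunksize=10_000)
    print(sum(flags))  # number of even elements
```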
One of the advantages of logic programming (LP) and constraint logic programming (CLP) is the fact that one can exploit implicit parallelism in logic programs. Logic programs have two major forms of implicit parallelism: or-parallelism (ORP) and and-parallelism (ANDP). In this work we survey some of the work that has been taking place within the CLoPn project towards fast execution of logic programs.
Lecture Notes in Computer Science, 1998
2000
One of the advantages of logic programming is the fact that it offers many sources of implicit parallelism, such as and-parallelism and or-parallelism. Arguably, or-parallel systems, such as Aurora and Muse, have been the most successful parallel logic programming systems so far. Or-parallel systems rely on techniques such as Environment Copying to address the problem that branches being explored in parallel may need to assign different bindings for the same shared variable.
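The binding problem that Environment Copying addresses can be shown with a small Python sketch (our own illustration, not Aurora or Muse internals): two or-branches must bind the same shared variable differently, and giving each worker a private copy of the environment keeps the bindings from interfering.

```python
import copy
from concurrent.futures import ThreadPoolExecutor

# A trivial stand-in for an execution environment: variable -> binding.
base_env = {"X": None}

def explore(branch_value):
    # Environment copying: each or-branch works on its own copy, so
    # conflicting bindings for the shared variable X can coexist.
    env = copy.deepcopy(base_env)
    env["X"] = branch_value
    return env

with ThreadPoolExecutor() as ex:
    results = list(ex.map(explore, ["a", "b"]))

print(results)  # [{'X': 'a'}, {'X': 'b'}] -- no interference
```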
Computer Languages, 1996
Much work has been done in the areas of and-parallelism and data parallelism in Logic Programs. Such work has proceeded to a certain extent in an independent fashion. Both types of parallelism offer advantages and disadvantages. Traditional (and-) parallel models offer generality, being able to exploit parallelism in a large class of programs (including that exploited by data parallelism techniques). Data parallelism techniques on the other hand offer increased performance for a restricted class of programs. The thesis of this paper is that these two forms of parallelism are not fundamentally different and that relating them opens the possibility of obtaining the advantages of both within the same system. Some relevant issues are discussed and solutions proposed. The discussion is illustrated through visualizations of actual parallel executions implementing the ideas proposed.
New Generation Computing, 1999
We formalize the implementation mechanisms required to support or-parallel execution of logic programs in terms of operations on dynamic data structures. Upper and lower bounds are derived, in terms of the number of operations n performed on the data structure, for the problem of guaranteeing correct semantics during or-parallel execution.
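As a much-simplified concrete reading of these operations, consider the classic or-tree binding lookup: the tree grows dynamically as branches are created, and a variable's binding at a leaf is the nearest one on the root-to-leaf path. The naive walk below costs time proportional to the path length per lookup; the paper's bounds concern how much better any data structure can do. Names and structure here are our illustration, not the paper's formalization.

```python
class Node:
    """One or-branch choice point, holding the bindings made there."""
    def __init__(self, parent=None):
        self.parent = parent
        self.bindings = {}   # conditional bindings made at this node

def create_child(parent):
    return Node(parent)      # the or-tree grows dynamically

def lookup(node, var):
    # Walk toward the root: the binding visible at a leaf is the
    # nearest one on its branch.  Naive cost: O(path length).
    while node is not None:
        if var in node.bindings:
            return node.bindings[var]
        node = node.parent
    return None              # unbound

root = Node()
left, right = create_child(root), create_child(root)
left.bindings["X"] = 1
right.bindings["X"] = 2
print(lookup(left, "X"), lookup(right, "X"))  # 1 2
```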
ACM SIGPLAN Notices, 1998
2001
One of the advantages of logic programming is that it offers several sources of implicit parallelism, such as and-parallelism (ANDP) and or-parallelism (ORP). Recent research has concentrated on integrating the different forms of parallelism into a single combined system. In this work we deal with the problem of integrating ORP and independent and-parallelism (IAP), the two forms of parallelism most suitable for parallel Prolog systems.
Information Processing Letters, 1994
An efficient backward execution method for AND-parallelism in logic programs is proposed. It is devised through a close examination of a relation over literals, which can be represented by paths in the data dependency graph. The scheme is more efficient than previous work in the sense that it performs fewer unfruitful resetting (and, consequently, canceling) operations. Its rationale is also intuitively clear and natural. Furthermore, the analyses needed for the proposal are largely carried out at compile time, which points to efficiency in practice.
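A rough sketch of the underlying idea, under our own simplifications: literals form a data dependency graph, and when a literal fails, only the literals that transitively produced its input bindings are candidates for resetting, so unrelated work is left untouched. The graph and reset rule below are illustrative, not the paper's exact relation.

```python
# Data dependency graph: deps[v] lists the literals whose bindings
# literal v consumes.
deps = {"p": [], "q": ["p"], "r": ["p"], "s": ["q", "r"]}

def producers(literal):
    """Literals the given literal depends on, collected transitively
    by walking the dependency edges back toward the producers."""
    seen, stack = set(), list(deps[literal])
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(deps[u])
    return seen

# If s fails, only its (transitive) producers are candidates for
# resetting and re-execution; an unrelated literal is not touched.
print(sorted(producers("s")))  # ['p', 'q', 'r']
```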
Computer Languages, 1992
1978
We investigate a certain model of synchronous parallelism. Syntax, semantics, and complexity of programs within it are defined. We consider algorithmic properties of synchronous parallel programs in connection with sequential programs with arrays. The complexity theorem states that the class PP-time (polynomial-time bounded parallel languages) is equal to P-space (languages requiring a polynomial amount of memory). Programs take the form cobegin (I_1, p_1), ..., (I_r, p_r) coend, where each p_j is a relation programmable in R, written as Kα for some program K ∈ FS_R and an open formula α; each I_j is a sequential program from FS_R; for every j the sets of free index variables of I_j and p_j coincide; and no index variable of I_j may occur on the left-hand side of a substitution. Writing T_j for the set of index tuples satisfying p_j, effectiveness and the complexity theorem require that each p_j be given by a system of linear inequalities over the index variables (for an arbitrary formula Kα even the finiteness of T_j can be undecidable); execution first checks that each T_j is finite, then checks for an unavoidable variable conflict, stopping without a result (respectively, with an undefined result) if either check fails.
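Our reading of the cobegin/coend construct, as a hedged Python sketch: each sequential body I_j is instantiated once per index tuple satisfying its relation p_j, and all instances run in parallel. Everything here (the array, the body, the constraint) is made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# One (I_j, p_j) pair: the body runs once for every index value
# satisfying the relation, all instances in parallel.
A = [0] * 10

def body(i):                               # stand-in for a sequential I_j
    A[i] = i * i

p = [i for i in range(10) if i % 2 == 0]   # p_j as a decidable constraint

with ThreadPoolExecutor() as ex:
    list(ex.map(body, p))                  # cobegin ... coend

print(A)  # [0, 0, 4, 0, 16, 0, 36, 0, 64, 0]
```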
In this work, we present an automatic way to parallelize logic programs for finding all the answers to queries, using a transformation to low-level threading primitives. Although much work on the parallelization of logic programming was done more than a decade ago (e.g., Aurora, Muse, YapOR), the current state of parallelizing logic programs is still very poor. This work presents a way to parallelize tabled logic programs in XSB Prolog under the well-founded semantics. An important contribution of this work lies in merging answer tables from multiple children threads without incurring copying or full sharing and synchronization of data structures. The implementation of the parent-children shared answer tables surpasses in efficiency all the other data structures currently implemented for completion of answers in parallelization using multi-threading. The transformation and its lower-level answer-merging predicates were implemented as an extension to the XSB system.
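A toy sketch of the merging step (our illustration, not XSB's actual table representation): each child thread accumulates answers privately while it runs, and the parent performs a single deduplicating merge at completion, so no per-answer locking or wholesale copying of another thread's table is needed.

```python
from concurrent.futures import ThreadPoolExecutor

def child_solve(chunk):
    # Each child thread collects its answers privately; no locks are
    # needed while it runs.
    return {x * x for x in chunk}       # stand-in "answers"

chunks = [range(0, 50), range(25, 75), range(50, 100)]
with ThreadPoolExecutor() as ex:
    child_tables = list(ex.map(child_solve, chunks))

# Parent-children merge: union the child tables once, at completion,
# so duplicate answers are dropped without per-answer synchronization.
merged = set().union(*child_tables)
print(len(merged))  # 100 distinct answers
```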
1994
Several types of parallelism can be exploited in logic programs while preserving correctness and efficiency, i.e., ensuring that the parallel execution obtains the same results as the sequential one and that the amount of work performed is not greater. However, such results do not take into account a number of overheads which appear in practice, such as process creation and scheduling, which can induce a slowdown, or at least limit speedup, if they are not controlled in some way. This paper describes a methodology whereby the granularity of parallel tasks, i.e., the work available under them, is efficiently estimated and used to limit parallelism so that the effect of such overheads is controlled. The run-time overhead associated with the approach is usually quite small, since as much work as possible is done at compile time. A number of run-time optimizations are also proposed. Moreover, a static analysis of the overhead associated with the granularity control process itself is performed in order to decide when it is worthwhile. The performance improvements resulting from the incorporation of grain-size control are shown to be quite good, especially for systems with medium to large parallel execution overheads.
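The policy can be sketched as: estimate (ideally at compile time) the work under a call, and spawn a parallel task only when the estimate exceeds a threshold. In the Python sketch below, the cost estimate, the threshold, and the Fibonacci workload are all placeholders for the paper's compile-time analysis.

```python
from concurrent.futures import ThreadPoolExecutor

GRAIN = 25                    # placeholder threshold for "enough work"

def cost_estimate(n):
    # Stand-in for a compile-time estimate of the work under fib(n).
    return n

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def par_fib(n):
    if cost_estimate(n) < GRAIN:
        return fib(n)         # grain too small: run sequentially
    # Grain large enough: pay the task-creation overhead.
    with ThreadPoolExecutor(max_workers=2) as ex:
        left = ex.submit(par_fib, n - 1)
        right = ex.submit(par_fib, n - 2)
        return left.result() + right.result()

print(par_fib(30))            # 832040
```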
Computing Research Repository, 2003
Logic programming languages, such as Prolog, provide a high-level, declarative approach to programming. Logic programming offers great potential for implicit parallelism, thus allowing parallel systems to often reduce a program's execution time without programmer intervention. We believe that for complex applications that take several hours, if not days, to return an answer, even limited speedups from parallel execution can directly translate into very significant productivity gains. It has been argued that Prolog's evaluation strategy, SLD resolution, often limits the potential of the logic programming paradigm. The past years have therefore seen widening efforts at increasing Prolog's declarativeness and expressiveness. Tabling has proved to be a viable technique to efficiently overcome SLD's susceptibility to infinite loops and redundant subcomputations. Our research demonstrates that implicit or-parallelism is a natural fit for logic programs with tabling. To substantiate this belief, we have designed and implemented an or-parallel tabling engine, OPTYap, and we used a shared-memory parallel machine to evaluate its performance. To the best of our knowledge, OPTYap is the first implementation of a parallel tabling engine for logic programming systems. OPTYap builds on Yap's efficient sequential Prolog engine. Its execution model is based on the SLG-WAM for tabling and on environment copying for or-parallelism. Preliminary results indicate that the mechanisms proposed to parallelize search in the context of SLD resolution can indeed be effectively and naturally generalized to parallelize tabled computations, and that the resulting systems can achieve good performance on shared-memory parallel machines. More importantly, this emphasizes our belief that applying or-parallelism and tabling to logic programs can widen the range of applications for logic programming.
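Tabling itself is easy to illustrate: memoize subgoal answers so that a query that would loop under plain SLD resolution (e.g., reachability in a cyclic graph) terminates. The sketch below shows only the sequential tabling half of what OPTYap combines with or-parallel search; the graph is made up.

```python
# Edges of a deliberately cyclic graph: naive SLD-style search for
# everything reachable from "a" would recurse through a -> b -> a forever.
edges = {"a": ["b"], "b": ["a", "c"], "c": []}

def reachable(start):
    """Tabled evaluation: the 'table' records nodes whose answers are
    already known, so each subgoal is evaluated at most once."""
    table = set()
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for nxt in edges[node]:
            if nxt not in table:      # answer not yet tabled
                table.add(nxt)
                frontier.append(nxt)
    return table

print(sorted(reachable("a")))  # ['a', 'b', 'c'] despite the cycle
```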