1999, New Generation Computing
We formalize the implementation mechanisms required to support or-parallel execution of logic programs in terms of operations on dynamic data structures. Upper and lower bounds are derived, in terms of the number of operations n performed on the data structure, for the problem of guaranteeing correct semantics during or-parallel execution.
2000
One of the advantages of logic programming is the fact that it offers many sources of implicit parallelism, such as and-parallelism and or-parallelism. Arguably, or-parallel systems, such as Aurora and Muse, have been the most successful parallel logic programming systems so far. Or-parallel systems rely on techniques such as Environment Copying to address the problem that branches being explored in parallel may need to assign different bindings for the same shared variable.
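A minimal sketch of the idea behind Environment Copying (hypothetical helper and variable names; this is not code from Aurora or Muse): each or-branch receives a private copy of the current bindings, so parallel branches may bind the same shared variable to different values without conflict.

```python
import copy

def or_branches(env, var, alternatives):
    """Environment Copying sketch: give each or-branch its own copy of
    the bindings, so conflicting bindings for `var` become safe."""
    branches = []
    for value in alternatives:
        branch_env = copy.deepcopy(env)  # copy instead of sharing
        branch_env[var] = value          # branch-local binding
        branches.append(branch_env)
    return branches

base = {"Y": 1}  # bindings made before the choice point
branches = or_branches(base, "X", ["a", "b"])
print(branches)  # each branch sees its own X but the shared Y
```

The copy makes variable access trivially constant-time within a branch; the cost is paid up front when the branch is created, which is the trade-off copying systems accept.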
Foundations of Software Technology and …, 1997
We study several data-structures and operations that commonly arise in parallel implementations of logic programming languages. The main problems that arise in implementing such parallel systems are abstracted out and precisely stated. Upper and ...
Abstract: One of the advantages of logic programming (LP) and constraint logic programming (CLP) is the fact that one can exploit implicit parallelism in logic programs. Logic programs have two major forms of implicit parallelism: or-parallelism (ORP) and and-parallelism (ANDP). In this work we survey some of the work that has taken place within the CLoPn project towards fast execution of logic programs.
ACM Transactions on Programming Languages and Systems, 1993
We discuss fundamental limitations of or-parallel execution models of nondeterministic programming languages. Or-parallelism corresponds to executing the different nondeterministic computational paths in parallel. A natural way to represent the state of (parallel) execution of a nondeterministic program is by means of an or-parallel tree. We identify three important criteria that underlie the design of or-parallel implementations based upon the or-parallel tree: constant-time access to variables, constant-time task creation, and constant-time task switching, where the term 'constant-time' means that the time for these operations is independent of the number of nodes in the or-parallel tree, as well as the size of each node. We prove that all three criteria cannot be simultaneously satisfied by any or-parallel execution model based on a finite number of processors but unbounded memory. We discuss in detail the application of our result to the class of logic programming languages, and show how our result can serve as a useful way to categorize the various or-parallel methods proposed in this field. We also discuss the suitability of different or-parallel implementation strategies for different parallel architectures. This paper generalizes and expands the results in our paper, "Criteria for Or-Parallel Execution Models of Logic Programs."
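The trade-off among the three criteria can be illustrated with a binding-array-style scheme (a hypothetical toy, not the paper's formal model): variable access through the flat array is constant-time, but switching tasks requires de-installing one branch's conditional bindings and installing another's, a cost that grows with tree depth.

```python
class Node:
    """One node of the or-parallel tree; `bindings` holds the
    conditional bindings made on the arc into this node."""
    def __init__(self, parent=None, bindings=()):
        self.parent, self.bindings = parent, dict(bindings)

def path(node):
    """Root-to-node path (empty for None)."""
    out = []
    while node:
        out.append(node)
        node = node.parent
    return out[::-1]

def switch(binding_array, old, new):
    """Task switch: de-install the old branch, install the new one.
    Access via `binding_array` stays O(1), but this walk makes task
    switching proportional to tree depth, not constant-time."""
    for n in path(old):
        for v in n.bindings:
            binding_array.pop(v, None)
    for n in path(new):
        binding_array.update(n.bindings)

root = Node(bindings={"X": "root"})
left = Node(root, {"Y": 1})
right = Node(root, {"Y": 2, "Z": 3})

ba = {}
switch(ba, None, left)   # worker starts on the left branch
switch(ba, left, right)  # then switches to the right branch
print(ba)                # {'X': 'root', 'Y': 2, 'Z': 3}
```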
1991
Many researchers have tried to exploit the implicit parallelism of logic languages by parallelizing the execution of independent clauses. However, this approach has the disadvantage of requiring heavy overhead for process scheduling and synchronization, for data migration, and for collecting the results. In this paper a different approach is proposed: the data-parallel one. The focus is on large collections of data, and the core idea is to parallelize the execution of element-wise operations.
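The element-wise parallelization described above can be sketched, in Python rather than a logic language, as mapping one operation over a collection with one task per element (the operation `step` is a made-up example):

```python
from concurrent.futures import ThreadPoolExecutor

def step(x):
    """An element-wise operation (hypothetical example)."""
    return x * x

data = [1, 2, 3, 4]
with ThreadPoolExecutor(max_workers=4) as pool:
    # one element per task; map preserves input order in the output
    result = list(pool.map(step, data))
print(result)  # [1, 4, 9, 16]
```

Because every task runs the same operation on its own element, there is no inter-task synchronization beyond the final collection of results, which is exactly the overhead saving the data-parallel approach aims for.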
Information Processing Letters, 1994
An efficient backward execution method for AND-parallelism in logic programs is proposed. It is devised through a close examination of a relation over literals, which can be represented by paths in the data-dependency graph. The scheme is more efficient than previous work in the sense that it performs fewer unfruitful resetting (and, consequently, canceling) operations. Its rationale is also intuitively clear and natural. Furthermore, the analyses needed for the proposal are largely performed at compile time, which contributes to its actual efficiency.
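A toy illustration of the underlying idea (the graph encoding and literal names are made up, not the paper's formalism): given a data-dependency graph among literals, only the transitive producers of a failed literal need to be reset on backward execution, so independent literals are spared unfruitful resetting.

```python
def reset_set(failed, producers):
    """Transitively collect the producers of `failed` in the
    data-dependency graph; literals outside this set keep their
    bindings and need no resetting or canceling."""
    seen, stack = set(), [failed]
    while stack:
        lit = stack.pop()
        for p in producers.get(lit, ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

# p3 consumes a variable bound by p1; p2 is independent of both
producers = {"p3": {"p1"}, "p1": set(), "p2": set()}
print(reset_set("p3", producers))  # {'p1'}: p2 is left alone
```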
1989
Thus the extended and-or tree has two new nodes: a 'sequential' node (for RAP's sequential goals), and a 'cross-product' node (for the cross-product of solutions from and-or-parallel goals). The other main features of our approach are: (i) each processor's binding-array is accompanied by a base-array, for constant access-time to variables in the presence of and-parallelism; (ii) coarse-grain parallelism is supported by our processor scheduling policy, to minimize the cost of binding-array updates during task-switching; (iii) essentially, two new classes of WAM instructions are introduced: the 'check' instructions of RAP-WAM, and 'allocate' instructions for the different types of nodes; (iv) several optimizations are proposed to minimize the cost of task switching. This extended WAM is currently being implemented on a Sequent Balance 8000.
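The base-array idea in (i) can be sketched as follows (a hypothetical layout with made-up slot counts, not the actual WAM extension): each and-parallel goal owns a contiguous region of the processor's flat binding array, and a variable is located by one addition, so access stays constant-time even with and-parallelism.

```python
base = [0, 3]               # goal 0 owns slots 0..2, goal 1 owns slots 3..5
binding_array = [None] * 6  # one flat binding array per processor

def bind(goal, i, value):
    """O(1) binding: one add plus one array index."""
    binding_array[base[goal] + i] = value

def deref(goal, i):
    """O(1) lookup, independent of the size of the and-or tree."""
    return binding_array[base[goal] + i]

bind(1, 0, "a")
print(deref(1, 0))  # 'a'
print(deref(0, 0))  # None: goal 0's slots are untouched
```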
Computer Languages, 1996
Much work has been done in the areas of and-parallelism and data parallelism in Logic Programs. Such work has proceeded to a certain extent in an independent fashion. Both types of parallelism offer advantages and disadvantages. Traditional (and-) parallel models offer generality, being able to exploit parallelism in a large class of programs (including that exploited by data parallelism techniques). Data parallelism techniques on the other hand offer increased performance for a restricted class of programs. The thesis of this paper is that these two forms of parallelism are not fundamentally different and that relating them opens the possibility of obtaining the advantages of both within the same system. Some relevant issues are discussed and solutions proposed. The discussion is illustrated through visualizations of actual parallel executions implementing the ideas proposed.
The Journal of Logic Programming, 1991
This paper presents the implementation and performance results of an AND-parallel execution model of logic programs on a shared-memory multiprocessor. The execution model is meant for logic programs with "don't-know nondeterminism", and handles binding conflicts by dynamically detecting dependencies among literals. The model also incorporates intelligent backtracking at the clause level. Our implementation of this model is based upon the Warren Abstract Machine (WAM); hence it retains most of the efficiency of the WAM for sequential segments of logic programs. Performance results on a Sequent Balance 21000 show that on suitable programs, our parallel implementation can achieve linear speedup on dozens of processors. We also present an analysis of different overheads encountered in the implementation of the execution model.
2001
Abstract: One of the advantages of logic programming is that it offers several sources of implicit parallelism, such as and-parallelism (ANDP) and or-parallelism (ORP). Recent research has concentrated on integrating the different forms of parallelism into a single combined system. In this work we deal with the problem of integrating ORP and independent and-parallelism (IAP), the two forms of parallelism most suitable for parallel Prolog systems.