PDPTA
Abstract Program parallelization becomes increasingly important when new multi-core architectures provide ways to improve performance. One of the greatest challenges of this development lies in programming parallel applications. Using declarative languages, such as constraint ...
This article presents the qualitative and quantitative results of experiments applying parallelism to constraint programming. We do not set out to prove that parallel constraint programming is the perfect problem-solving tool, but rather to characterize what can be expected from this combination and to discover when using parallelism is productive. In these experiments, we do not deal with scalability involving large numbers of processors; nor do we try to solve extremely difficult or large problems by using brute force algorithms running for days, weeks or even longer. Instead, we concentrate on parallelism involving one to four processors and we restrict our solving time to short durations (generally around 15 minutes). This protocol is applied to different series of problems. The results are analyzed in order to gain a better understanding of the practical benefits of parallel constraint programming.
Lecture Notes in Computer Science, 1999
Many problems from artificial intelligence can be described as constraint satisfaction problems over finite domains (CSP(FD)), that is, a solution is an assignment of a value to each problem variable such that a set of constraints is satisfied. Arc-consistency algorithms remove inconsistent values from the set of values that can be assigned to a variable (its domain), thus reducing the search space. We have developed a parallelisation scheme for arc-consistency to be run on a MIMD multiprocessor. The set of constraints is divided into N partitions, which are executed in parallel on N processors. The parallelisation scheme has been implemented on a CRAY T3E multiprocessor with up to thirty-four processors. Empirical results on speedup and behaviour are reported and discussed.
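A minimal sketch, in Python rather than the authors' implementation, of the partitioning idea described above: the constraint set is split into N subsets, each subset is revised against shared domains, and rounds repeat until a global fixed point is reached. The revise() helper, the round-robin split, and the toy X < Y problem are illustrative assumptions.

```python
def revise(domains, xi, xj, allowed):
    """Drop values of xi that have no supporting value in xj under `allowed`."""
    removed = False
    for v in set(domains[xi]):
        if not any((v, w) in allowed for w in domains[xj]):
            domains[xi].discard(v)
            removed = True
    return removed

def partitioned_arc_consistency(domains, constraints, n_partitions=2):
    # Split constraints round-robin; on a MIMD machine each partition would
    # run on its own processor, exchanging domain updates between rounds.
    partitions = [constraints[i::n_partitions] for i in range(n_partitions)]
    changed = True
    while changed:                     # repeat until a global fixed point
        changed = False
        for part in partitions:        # sequential stand-in for the parallel step
            for xi, xj, allowed in part:
                if revise(domains, xi, xj, allowed):
                    changed = True
    return domains

# Toy problem: X < Y with both domains {1, 2, 3}.
lt = {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}
gt = {(b, a) for a, b in lt}
doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
print(partitioned_arc_consistency(doms, [("X", "Y", lt), ("Y", "X", gt)]))
# X keeps {1, 2}, Y keeps {2, 3}
```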
2007
Abstract. Constraint programming libraries are useful when building applications developed mostly in mainstream programming languages: they do not require the developers to acquire skills for a new language, providing instead declarative programming tools for use within conventional systems. Some approaches to constraint programming favour completeness, such as propagation-based systems.
1996
This paper describes the first results from research on the compilation of constraint systems into task level procedural parallel programs. Algorithms are expressed as constraint systems. A data flow graph is derived from the constraint system and a set of input variables. The data flow graph, which exploits the parallelism in the constraints, is mapped to the target language CODE 2.0, which represents parallel computation structures as generalized dependence graphs. Finally, parallel C programs are generated. The granularity of the derived data flow graphs depends upon the complexity of the operations directly represented in the constraint system. To extract parallel programs of appropriate granularity, the following features have been included: (i) modularity, (ii) operations over structured types as primitives, (iii) definition of sequential C functions. A prototype of the compiler has been implemented. The domain of matrix computations has been targeted for applications. Some examples have been programmed. Initial results are encouraging.
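The following is a hedged sketch, not the compiler itself, of the front-end step the abstract describes: deriving a data flow ordering from a constraint system and a set of input variables. It assumes a constraint can fire once all but one of its variables are known; the constraint representation and all names are illustrative.

```python
def derive_data_flow(constraints, inputs):
    """constraints: list of (name, set_of_variables); inputs: known variables.
    Returns a firing order [(constraint_name, variable_it_computes), ...]."""
    known = set(inputs)
    pending = list(constraints)
    order = []
    progress = True
    while pending and progress:
        progress = False
        for item in list(pending):
            name, variables = item
            unknown = variables - known
            if len(unknown) == 1:   # solvable for exactly one variable
                out = unknown.pop()
                order.append((name, out))
                known.add(out)
                pending.remove(item)
                progress = True
    return order                    # anything left in `pending` is unresolved

# a = b + c and d = a * e, with b, c, e given as inputs:
print(derive_data_flow([("C1", {"a", "b", "c"}), ("C2", {"a", "d", "e"})],
                       {"b", "c", "e"}))
# [('C1', 'a'), ('C2', 'd')]
```

Constraints that become solvable in the same pass are mutually independent, which is where the task-level parallelism comes from; mapping the resulting graph to CODE 2.0 and emitting parallel C is beyond this sketch.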
1996
Abstract This paper describes the extraction of parallel procedural programs from specifications of computations as constraint systems and an initial set of input variables. It is shown that all types of parallelism (AND-OR, task, and data) can be extracted from constraint representations. Both computation and logic can be spanned in a single representation.
LISP and Symbolic Computation, 1994
In this paper we describe the parallelization of a medium-size symbolic fixed-point computation, CONSAT. CONSAT is a constraint satisfaction system that computes globally consistent solutions. The parallel version of CONSAT is implemented using abstractions from a parallel programming toolbox we developed. The toolbox is intended for novice parallel programmers, and programs based on abstractions from this toolbox may be executed on both uniprocessors and shared-memory multiprocessors without modifications. We explain how parallelism is introduced, and how concurrent accesses to shared data structures are handled. We will also describe the performance of CONSAT on sample inputs.
Lecture Notes in Computer Science, 2004
As a case study that illustrates our view on coordination and component-based software engineering, we present the design and implementation of a parallel constraint solver. The parallel solver coordinates autonomous instances of a sequential constraint solver, which is used as a software component. The component solvers achieve load balancing of tree search through a time-out mechanism. Experiments show that the purely exogenous mode of coordination employed here yields a viable parallel solver that effectively reduces turn-around time for constraint solving on a broad range of hardware platforms.
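A minimal sketch of the coordination scheme, assuming nothing about the component solver's actual API: each solver instance receives a subproblem and a budget (a node count standing in for the time-out), and hands the unexplored part of its tree back to the coordinator when the budget expires. The toy all-different CSP and every name below are assumptions for illustration.

```python
def component_solve(assignment, domains, budget):
    """Depth-first search; returns ('sat', solution), ('unsat', None),
    or ('timeout', list_of_unexplored_subproblems)."""
    if not domains:
        return "sat", assignment
    if budget[0] <= 0:
        return "timeout", [(assignment, domains)]   # hand the subtree back
    budget[0] -= 1
    var = next(iter(domains))
    rest = {v: d for v, d in domains.items() if v != var}
    leftovers = []
    for value in domains[var]:
        if value in assignment.values():            # toy constraint: all different
            continue
        status, result = component_solve({**assignment, var: value}, rest, budget)
        if status == "sat":
            return status, result
        if status == "timeout":
            leftovers.extend(result)
    return ("timeout", leftovers) if leftovers else ("unsat", None)

def coordinator(domains, budget_per_task=3):
    queue = [({}, domains)]                         # pool of open subproblems
    while queue:
        assignment, doms = queue.pop()              # an idle component takes one
        status, result = component_solve(assignment, doms, [budget_per_task])
        if status == "sat":
            return result
        if status == "timeout":
            queue.extend(result)                    # split-off work is re-queued
    return None

print(coordinator({v: {0, 1, 2} for v in "xyz"}))
# e.g. {'x': 0, 'y': 1, 'z': 2}
```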
2011
With the increased availability of affordable parallel and distributed hardware, programming models for these architectures have become the focus of significant attention. Constraint programming, which can be seen as the encoding of processes as a Constraint Satisfaction Problem, is, because of its data-driven and control-insensitive approach, a prime candidate to serve as the basis for a framework which effectively exploits parallel architectures.
1995
Abstract This paper reports on experimental research on extraction of coarse grain parallelism from constraint systems. Constraint specifications are compiled into task level procedural parallel programs in C. Three issues found to be important are: (i) inclusion of operations over structured types as primitives in the representation, (ii) inclusion of modularity in the constraint systems, and (iii) use of functions in the constraint representation. The role of these issues is described.
International Conference on Logic Programming/Joint International Conference and Symposium on Logic Programming, 1998
We present in this paper a parallel execution model of arc-consistency for Constraint Satisfaction Problems (CSP), implemented on a scalable MIMD distributed memory machine. We have adopted the indexical scheme, an adequate approach to arc-consistency for functional constraints. The CSP is partitioned into N partitions, which are executed in parallel on N processors. Each processor applies sequential arc-consistency to its subset of constraints, updating remote ...
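A small sketch of the indexical idea mentioned above ("X in r" rules, in the style of clp(FD) solvers): each indexical narrows one variable's domain to a range computed from the current domains of other variables, and propagation iterates to a fixed point. The helper names and the encoding of X = Y + 1 are assumptions for illustration; the paper distributes such rules over N processors.

```python
def propagate(domains, indexicals):
    """Apply 'X in lo..hi' rules until no domain changes (fixed point)."""
    changed = True
    while changed:
        changed = False
        for var, expr in indexicals:
            lo, hi = expr(domains)                     # range from other domains
            narrowed = domains[var] & set(range(lo, hi + 1))
            if narrowed != domains[var]:
                domains[var] = narrowed
                changed = True
    return domains

# X = Y + 1 as two indexicals:
#   X in min(Y)+1 .. max(Y)+1    and    Y in min(X)-1 .. max(X)-1
rules = [("X", lambda d: (min(d["Y"]) + 1, max(d["Y"]) + 1)),
         ("Y", lambda d: (min(d["X"]) - 1, max(d["X"]) - 1))]
print(propagate({"X": set(range(10)), "Y": {2, 3, 4}}, rules))
# X narrows to {3, 4, 5}; Y stays {2, 3, 4}
```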
Lecture Notes in Computer Science, 1994
Because of synchronization based on blocking ask, some of the most important techniques for data flow analysis of (sequential) constraint logic programs (clp) are no longer applicable to cc languages. In particular, the generalized approach to the semantics, intended to factorize the (standard) semantics so as to make explicit the domain-dependent features (i.e. operators and semantic objects which may be influenced by abstraction) becomes useless for relevant applications. A possible solution to this problem is based on a more abstract (non-standard) semantics: the success semantics, which models non-suspended computations only. With a program transformation (NoSynch) that simply ignores synchronization, we obtain a clp-like program which allows us to apply standard techniques for data flow analysis. For suspension-free programs the success semantics is equivalent to the standard semantics, thus justifying the use of suspension analysis to generate sound approximations. A second transformation (Angel) is introduced, applying a different abstraction of synchronization in possibly suspending programs and resulting in a framework which is adequate to suspension analysis. Applicability and accuracy of these solutions are investigated.
Theoretical Computer Science, 1997
Concurrent constraint programming (ccp), like most of the concurrent paradigms, has a mechanism of global choice which makes computations dependent on the scheduling of processes. This is one of the main reasons why the formal semantics of ccp is more complicated than that of its deterministic and local-choice sublanguages. In this paper we study various subsets of ccp obtained by adding some restriction on the notion of choice, or by requiring confluency, i.e. independency from the scheduling strategy. We show that it is possible to define simple denotational semantics for these subsets, for various notions of observables. Finally, as an application of our results we develop a framework for the compositional analysis of full ccp. The basic idea is to approximate an arbitrary ccp program by a program in the restricted language, and then analyze the latter, by applying the standard techniques of abstract interpretation to its denotational semantics.
1997
Compositional semantics allow reasoning about programs in an incremental way, providing the basis for the development of modular data-flow analysis. The major drawback of these semantics is their complexity. This observation applies in particular to concurrent constraint programming (ccp). In this work we consider an operational semantics of ccp, using sequences of pairs of finite constraints to represent ccp computations, which is equivalent to a denotational semantics, providing the basis for the development of an abstract interpretation framework for the analysis of ccp.
Constraints, 2013
Concurrent Constraint Programming (CCP) has been used over the last two decades as an elegant and expressive model for concurrent systems. It models systems of agents communicating by posting and querying partial information, represented as constraints over the variables of the system. This covers a vast variety of systems, such as those arising in biological phenomena, reactive systems, net-centric computing, and the advent of social networks and cloud computing. In this paper we survey the main applications, developments and current trends of CCP.
The computing industry is currently facing a major architectural shift. Extra computing power is not coming anymore from higher processor frequencies, but from a growing number of computing cores and processors. For AI, and constraint solving in particular, this raises the question of how to scale current solving techniques to massively parallel architectures. While prior work focusses mostly on small scale parallel constraint solving, we conduct the first study on scalability of constraint solving on 100 processors and beyond in this paper. We propose techniques that are simple to apply and show empirically that they scale surprisingly well. These techniques establish a performance baseline for parallel constraint solving technologies against which more sophisticated parallel algorithms need to compete in the future. 1 Context and Goals of the Paper: A major achievement of the digital hardware industry in the second half of the 20th century was to engineer processors who...
1995
Abstract. Concurrent Constraint Programming (CCP) has been the subject of growing interest as the focus of a new paradigm for concurrent computation. Like logic programming it claims close relations to logic. In fact CCP languages are logics in a certain sense that we make precise in this paper.
Lecture Notes in Computer Science, 1996
Concurrent constraint programming is a simple but powerful framework for computation based on four basic computational ideas: concurrency (multiple agents are simultaneously active), communication (they interact via the monotonic accumulation of constraints on shared variables), coordination (the presence or absence of information can guard evolution of an agent), and localization (each agent has access to only a finite, though dynamically varying, number of variables, and can create new variables on the fly). Unlike other foundational models of concurrency such as CCS, CSP, Petri nets and the π-calculus, such flexibility is already made available within the context of determinate computation. This allows the development of a rich and tractable theory of concurrent processes within the context of which additional computational notions such as indeterminacy, reactivity, instantaneous interrupts and continuous (dense-time) autonomous evolution have been developed. We survey the development of some of these extensions and the relationships between their semantic models.
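The sketch below illustrates, under simplifying assumptions, how the four ideas can be read operationally: a store of facts that only grows (communication), agents that tell and ask (concurrency), and a scheduler under which an agent whose ask is not yet entailed suspends (coordination). It is a toy model in Python, not a formal semantics, and all names are illustrative.

```python
class Store:
    def __init__(self):
        self.facts = set()             # monotonic accumulation of constraints

    def tell(self, fact):
        self.facts.add(fact)           # communication: post information

    def ask(self, fact):
        return fact in self.facts      # coordination: is this entailed yet?

def producer(store):
    yield                              # do nothing for one scheduling step
    store.tell(("x", 1))

def consumer(store):
    while not store.ask(("x", 1)):     # guard: suspend until x = 1 is known
        yield
    store.tell(("y", 2))               # react by adding further information

store = Store()
agents = [consumer(store), producer(store)]
while agents:                          # trivial round-robin scheduler
    for agent in list(agents):
        try:
            next(agent)
        except StopIteration:
            agents.remove(agent)
print(store.facts)                     # {('x', 1), ('y', 2)}
```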
Journal of Parallel and Distributed Computing, 1992
1998
This paper reports on a compiler for translation of constraint specifications into procedural parallel programs. A constraint program in our system consists of a set of constraints and an input set containing a subset of the variables appearing in the constraints. The compiler described in this paper successfully compiles a substantially larger class of constraint specifications to efficient programs than did its predecessors.
Proceedings of the International Symposium on Combinatorial Search
This paper introduces two adaptive paradigms that parallelize search for solutions to constraint satisfaction problems. Both are intended for any sequential solver that uses contention-oriented variable-ordering heuristics and restart strategies. Empirical results demonstrate that both paradigms improve the search performance of an underlying sequential solver, and also solve challenging problems left open after recent solver competitions.
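As a point of reference for the restart-based side of such paradigms, here is a hedged sketch (not the paper's algorithms) of one simple way to parallelize a restart strategy: several workers run the same randomized backtracking search with independent seeds and a growing node cutoff, and the first to succeed wins. The n-queens model and all names are illustrative assumptions.

```python
import random
from concurrent.futures import FIRST_COMPLETED, ProcessPoolExecutor, wait

def restart_search(n, seed):
    """Randomized backtracking for n-queens with a geometric restart schedule."""
    rng = random.Random(seed)
    limit = 100                                       # node cutoff per restart

    def extend(cols, budget):
        if len(cols) == n:
            return cols
        if budget[0] <= 0:
            return None                               # cutoff hit: give up, restart
        budget[0] -= 1
        row = len(cols)
        candidates = [c for c in range(n)
                      if all(c != cc and abs(c - cc) != row - r
                             for r, cc in enumerate(cols))]
        rng.shuffle(candidates)                       # randomized value ordering
        for c in candidates:
            result = extend(cols + [c], budget)
            if result is not None:
                return result
        return None

    while True:
        solution = extend([], [limit])
        if solution is not None:
            return seed, solution
        limit = int(limit * 1.5)                      # restart with a larger cutoff

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(restart_search, 20, seed) for seed in range(4)]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        seed, solution = next(iter(done)).result()
        print(f"seed {seed} found a 20-queens solution: {solution}")
```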