1989, Lecture Notes in Computer Science
This paper discusses various parallel algorithmic techniques for combinatorial computation, emphasizing the efficiency gains achievable through parallel processing models, particularly the shared memory PRAM model. Key techniques include prefix computation, Euler tours, and graph algorithms for problems such as perfect matching. The authors provide insights into constructing higher-level algorithms using these techniques and highlight their utility in breaking down complex combinatorial problems into manageable parallel tasks.
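Prefix computation is the simplest of the techniques named above. As a minimal sketch (not taken from the paper itself), the Hillis–Steele scan below computes an inclusive prefix sum in O(log n) rounds; on a shared-memory PRAM every addition within a round would execute simultaneously, which the sequential simulation models by rebuilding the whole array once per round:

```python
def prefix_sum(values):
    """Hillis-Steele inclusive prefix sum.
    O(log n) rounds; within each round, all additions are
    independent and could run in parallel on a PRAM."""
    a = list(values)
    n = len(a)
    step = 1
    while step < n:
        # One PRAM round: element i adds the value step positions back.
        a = [a[i] if i < step else a[i] + a[i - step] for i in range(n)]
        step *= 2
    return a
```

Higher-level algorithms (list ranking, Euler-tour tree computations) are built by replacing `+` with other associative operators in exactly this round structure.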
Computers in Physics, 1989
Symposium on the Theory of Computing, 1982
1982
including computational physics, weather forecasting, etc. The current state of hardware capabilities will facilitate the use of such parallel processors in many more applications as the speed, and the number of processors that can be tightly coupled, increase dramatically. (A very good introduction to the future promise of "highly parallel computing" can be found in the January 1982 issue of Computer, published by the IEEE Computer Society.)
Wiley-Interscience eBooks, 2005
Shared memory systems form a major category of multiprocessors. In this category, all processors share a global memory. Communication between tasks running on different processors is performed by writing to and reading from the global memory; all interprocessor coordination and synchronization is likewise accomplished via the global memory. A shared memory computer system consists of a set of independent processors, a set of memory modules, and an interconnection network, as shown in Figure 4.1. Two main problems need to be addressed when designing a shared memory system: performance degradation due to contention, and coherence problems. Performance degradation can occur when multiple processors try to access the shared memory simultaneously. A typical design uses caches to reduce contention. However, having multiple copies of data spread throughout the caches can lead to a coherence problem. The copies in the caches are coherent if they all hold the same value; if one processor overwrites its copy, that copy becomes inconsistent because it no longer equals the others. In this chapter we study a variety of shared memory systems and their solutions to the cache coherence problem.
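The abstract's point that both communication and synchronization go through the global memory can be illustrated with an elementary (and deliberately simplified) sketch, not drawn from the book itself: several threads update one shared counter, and a lock — itself a shared-memory object — serializes the read-modify-write so no update is lost:

```python
import threading

counter = 0                # shared state living in the global memory
lock = threading.Lock()    # coordination also goes through shared memory

def worker(n):
    """Increment the shared counter n times under the lock."""
    global counter
    for _ in range(n):
        with lock:         # without the lock, concurrent read-modify-write races
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: every increment was serialized by the lock
```

The hardware analogue of the lost-update race is exactly the coherence problem described above: a processor writing its cached copy leaves the other cached copies stale unless the coherence protocol intervenes.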
Acta Informatica, 1976
This paper presents a model of parallel computing. Six examples illustrate the method of programming. An implementation scheme for programs is also presented.
International Conference on Parallel Processing, 1990
2010
This thesis reviews selected topics from the theory of parallel computation. The research begins with a survey of the proposed models of parallel computation. It examines the characteristics of each model and discusses its use either for theoretical studies or for practical applications. Subsequently, it employs common simulation techniques to evaluate the computational power of these models. The simulations establish certain model relations before advancing to a detailed study of parallel complexity theory, which is the subject of the second part of this thesis. The second part examines classes of feasible highly parallel problems and investigates the limits of parallelization. It is concerned with the benefits of parallel solutions and the extent to which they can be applied to all problems. It analyzes the parallel complexity of various well-known tractable problems and discusses the automatic parallelization of efficient sequential algorithms. Moreover, it ...
ICPP, 1983
Recent developments in integrated circuit technology have suggested a new building block for parallel processing systems: the single chip computer. This building block makes it economically feasible to interconnect large numbers of computers to form a multimicrocomputer network. Because the nodes of such a network do not share any memory, it is crucial that an interconnection network capable of efficiently supporting message passing be found. We present a model of time varying computation based on task precedence graphs that corresponds closely to the behavior of fork/join algorithms such as divide-and-conquer. Using this model, we investigate the behavior of five interconnection networks under varying workloads with distributed scheduling.
Theory of Computing Systems / Mathematical Systems Theory, 1999
IEEE Symposium on Foundations of Computer Science, 1980
Lecture Notes in Computer Science, 2012
Journal of Parallel and Distributed Computing, 1991
Wiley-Interscience eBooks, 2004
21st Annual Symposium on Foundations of Computer Science (sfcs 1980), 1980
Texts in Computational Science and Engineering, 2010
Undergraduate Topics in Computer Science, 2018
The Computer Journal, 2001