Parallel Computing Concepts - Complete Notes
1. Introduction
Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem: the work is divided into parts that execute concurrently, reducing the time to solution.
2. Key Terminologies
- Process: An independent instance of an executing program, with its own address space.
- Processor: The hardware unit that executes instructions.
- Speedup = Sequential Time / Parallel Time
- Efficiency = Speedup / Number of processors
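The two formulas above can be checked with a small worked example; the timings below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical timings (assumed values, not measurements).
t_seq = 120.0   # time of the sequential program, in seconds
t_par = 40.0    # time of the parallel program on p processors
p = 4           # number of processors

speedup = t_seq / t_par      # 120 / 40 = 3.0
efficiency = speedup / p     # 3.0 / 4  = 0.75

print(f"Speedup: {speedup:.2f}, Efficiency: {efficiency:.2f}")
# Speedup: 3.00, Efficiency: 0.75
```

An efficiency of 0.75 means each processor is doing useful work 75% of the time; the rest is lost to overheads such as communication and load imbalance.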
3. Parallel Architectures
- Shared Memory: Multiple processors access the same memory.
- Distributed Memory: Each processor has its own local memory; communication via message passing.
4. Parallel Programming Models
- Message Passing (MPI): Explicit communication between processes.
- Shared Memory (OpenMP): Threads share memory space.
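The two models can be contrasted in a short sketch. This is a simulation in plain Python (threads standing in for OpenMP threads, a queue standing in for an MPI message channel), not real OpenMP or MPI code:

```python
import threading
import queue

# Shared-memory model (OpenMP-style): threads update one shared variable.
# A lock protects the update, since unsynchronized access is a race condition.
total = 0
lock = threading.Lock()

def add(n):
    global total
    with lock:
        total += n

threads = [threading.Thread(target=add, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# total is now 0 + 1 + 2 + 3 + 4 = 10

# Message-passing model (MPI-style): no memory is shared; data moves only
# through explicit send/receive operations, simulated here with a queue.
channel = queue.Queue()
channel.put(total)          # "send"
received = channel.get()    # "receive"
print(total, received)
```

The key contrast: in the shared-memory model the programmer synchronizes access to common data, while in the message-passing model the programmer moves data explicitly between private memories.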
5. Common Parallel Algorithms
- Matrix Multiplication: Divide matrices into blocks; each processor computes a block.
- Prefix Sum: Compute cumulative sums in parallel using tree-based approach.
- Broadcast and Reduction: Communication operations to distribute or combine data.
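The tree-based prefix sum can be sketched as a Hillis-Steele inclusive scan: in round d, every element at index i >= 2^d adds the value 2^d positions to its left. All updates within a round are independent, so on a real parallel machine each round runs in one parallel step, giving log2(n) steps in total. This sequential simulation only illustrates the data flow:

```python
def inclusive_scan(xs):
    """Hillis-Steele inclusive prefix sum.

    Runs ceil(log2(n)) rounds; in each round, element i adds the value
    at i - d (d doubling each round). The list comprehension reads only
    old values, mimicking the lock-step parallel update.
    """
    a = list(xs)
    n = len(a)
    d = 1
    while d < n:
        a = [a[i] + (a[i - d] if i >= d else 0) for i in range(n)]
        d *= 2
    return a

print(inclusive_scan([1, 2, 3, 4, 5, 6, 7, 8]))
# [1, 3, 6, 10, 15, 21, 28, 36]
```

Note the total work is O(n log n), more than the O(n) of a sequential scan; the gain is the drop from n - 1 sequential additions to log2(n) parallel steps.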
6. Performance Metrics
- Speedup, Efficiency, Scalability, Load Balancing, Communication Overhead
7. Example Diagram: Recursive Doubling Broadcast on 8-node Ring
Step 1: Node 0 sends to Node 1
0 --> 1
Step 2: Nodes 0 and 1 send to Nodes 2 and 3
0,1 --> 2,3
Step 3: Nodes 0-3 send to Nodes 4-7
0,1,2,3 --> 4,5,6,7
After each step the number of nodes holding the data doubles, so all 8 nodes receive it in log2(8) = 3 steps.
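The doubling pattern in the diagram can be reproduced with a small simulation (a sketch of the communication schedule, not an MPI implementation):

```python
def recursive_doubling_broadcast(p):
    """Simulate a recursive-doubling broadcast among p nodes.

    In each step, every node that already holds the data sends it to one
    node that does not, so the informed set doubles; the broadcast
    finishes in ceil(log2(p)) steps.
    """
    has_data = {0}          # node 0 starts with the data
    steps = []
    while len(has_data) < p:
        senders = sorted(has_data)
        offset = len(has_data)
        receivers = [s + offset for s in senders if s + offset < p]
        steps.append((senders[:len(receivers)], receivers))
        has_data |= set(receivers)
    return steps

for step, (src, dst) in enumerate(recursive_doubling_broadcast(8), 1):
    print(f"Step {step}: {src} --> {dst}")
# Step 1: [0] --> [1]
# Step 2: [0, 1] --> [2, 3]
# Step 3: [0, 1, 2, 3] --> [4, 5, 6, 7]
```

This matches the diagram above and shows why the step count grows only logarithmically in the number of nodes.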