Parallel & Distributed Computing

Study Guide for Parallel & Distributed Computing

Uploaded by Huzaifa Awan
Parallel & Distributed Computing Quiz Study Guide

1. Introduction & Basic Concepts

What is Parallel Computing?


Real-world analogy: Think of cooking a large dinner party meal. Instead of one chef doing everything
sequentially (serial), you have multiple chefs working simultaneously on different dishes (parallel).

Definition: Using multiple processors/cores to solve a problem simultaneously, breaking it into smaller tasks that can run concurrently.

Key Terms with Examples

Task

Definition: A logically discrete section of computational work


Example: In a restaurant, one task might be "chop vegetables," another might be "grill meat"
Computing example: Processing one frame of a video vs. processing the entire video

Pipelining

Definition: Breaking a task into steps performed by different units, like an assembly line
Real-world example: Car manufacturing - one station installs engines, next installs doors, etc.

Computing example: CPU instruction pipeline - fetch, decode, execute, write-back
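The fetch/decode/execute stages above can be sketched as a toy pipeline built from Python generators, where each stage hands items to the next like assembly-line stations. This is a conceptual illustration only, not real CPU behavior; the tiny "ADD" instruction format is invented for the example:

```python
# Toy instruction pipeline: each generator is one "station".
# (Illustrative sketch; the instruction format is made up.)

def fetch(program):
    for instruction in program:
        yield instruction                  # stage 1: fetch raw instruction

def decode(instructions):
    for instruction in instructions:
        yield instruction.split()          # stage 2: "ADD 1 2" -> ["ADD", "1", "2"]

def execute(decoded):
    for op, *args in decoded:
        if op == "ADD":                    # stage 3: perform the operation
            yield sum(int(a) for a in args)

program = ["ADD 1 2", "ADD 3 4"]
results = list(execute(decode(fetch(program))))
print(results)  # [3, 7]
```

Each instruction flows through all three stages, and in hardware the stages would work on different instructions at the same time.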

Shared Memory vs Distributed Memory

Shared Memory: Like a shared whiteboard in an office - everyone can see and modify the same information

Distributed Memory: Like separate offices connected by phone - each has its own workspace but can communicate

2. Flynn's Classical Taxonomy (1966)


This classifies computers based on instruction and data streams:

SISD (Single Instruction, Single Data)


What it is: Traditional single-core computer

Example: Your old single-core PC running one program


Analogy: One chef following one recipe with one set of ingredients

SIMD (Single Instruction, Multiple Data)

What it is: Same operation on multiple data simultaneously

Example: Modern GPUs, image processing


Analogy: One instructor teaching 30 students the same math problem with different numbers

Real example: Applying Instagram filter to all pixels in a photo simultaneously
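The SIMD idea can be sketched conceptually in plain Python: one logical operation ("add") applied across every element of the data. Real SIMD happens in hardware lanes (GPU cores, AVX registers); the pixel values here are made up:

```python
# Conceptual SIMD sketch: same instruction, multiple data elements.
# (In hardware this happens in one instruction, not a Python loop.)

def simd_add(xs, ys):
    # One logical "add" applied to every pair of data elements.
    return [x + y for x, y in zip(xs, ys)]

pixels_a = [10, 20, 30, 40]    # made-up brightness values
pixels_b = [1, 2, 3, 4]        # made-up filter offsets
print(simd_add(pixels_a, pixels_b))  # [11, 22, 33, 44]
```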

MISD (Multiple Instruction, Single Data)


What it is: Different operations on same data (rare in practice)
Example: Multiple antivirus programs scanning the same file

Analogy: Multiple doctors examining the same patient with different tests

MIMD (Multiple Instruction, Multiple Data)


What it is: Different programs running on different data

Example: Modern multi-core computers, supercomputers


Analogy: Multiple chefs making different dishes with different ingredients

Real example: Your laptop running Chrome, Spotify, and Word simultaneously

3. Memory Architectures

Shared Memory (SMP - Symmetric Multi-Processor)


How it works: All processors access the same physical memory
Analogy: Family members sharing one refrigerator

Advantages: Easy programming, fast communication


Disadvantages: Limited scalability, memory contention

Example: Your multi-core laptop

Distributed Memory
How it works: Each processor has its own local memory
Analogy: Each house has its own refrigerator, neighbors communicate by phone

Advantages: Highly scalable, no memory contention


Disadvantages: More complex programming, communication overhead

Example: Computer clusters, Google's data centers

4. Performance Metrics
Speedup

Speedup = Time_serial / Time_parallel

Perfect speedup: If you have 4 cores, ideally get 4x speedup


Reality: Usually less due to overhead
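The speedup formula is a one-liner; the timings below are made-up example numbers:

```python
# Speedup = Time_serial / Time_parallel

def speedup(time_serial, time_parallel):
    return time_serial / time_parallel

# A job that took 100 s serially and 30 s on 4 cores:
print(speedup(100, 30))   # about 3.33x, less than the ideal 4x
```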

FLOPS (Floating Point Operations Per Second)


What it measures: Theoretical peak performance

Formula: Sockets × Cores × Clock_Speed × Operations_per_cycle


Example: Intel Core i7-970 calculation from your notes:
1 socket × 6 cores × 3.46 GHz × 8 ops/cycle = 166 GFLOPS
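The same peak-FLOPS calculation can be checked directly:

```python
# Peak GFLOPS = Sockets x Cores x Clock (GHz) x Ops_per_cycle

def peak_gflops(sockets, cores, clock_ghz, ops_per_cycle):
    return sockets * cores * clock_ghz * ops_per_cycle

# The i7-970 figures from the notes: 1 socket, 6 cores, 3.46 GHz, 8 ops/cycle
print(peak_gflops(1, 6, 3.46, 8))  # 166.08 GFLOPS
```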

Amdahl's Law
The most important concept for your quiz!

Speedup = 1 / [(1-p) + p/s]

Where:

p = proportion that can be parallelized

s = speedup of the parallel portion

Real-world example: Building a house

30% of work can be parallelized (painting rooms simultaneously)


70% must be sequential (foundation before walls)

Even with infinite painters, you can't get more than 1/0.7 = 1.43x speedup

Computing example from your notes:

30% can be sped up 2x

Speedup = 1 / [(1-0.3) + 0.3/2] = 1 / [0.7 + 0.15] = 1.18x
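Both worked examples can be verified with a small Amdahl's-law helper (a very large s stands in for the "infinite painters" case):

```python
# Amdahl's law: Speedup = 1 / [(1-p) + p/s]

def amdahl_speedup(p, s):
    """p = parallelizable fraction, s = speedup of the parallel portion."""
    return 1 / ((1 - p) + p / s)

# House example: 30% parallel, effectively infinite painters
print(round(amdahl_speedup(0.3, 1e9), 2))  # 1.43

# Notes example: 30% sped up 2x
print(round(amdahl_speedup(0.3, 2), 2))    # 1.18
```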

5. Types of Parallelism

Process-based Parallelization
What it is: Separate programs running independently
Example: Running multiple applications on your computer

Characteristics: Separate memory spaces, communicate via IPC

Thread-based Parallelization
What it is: Multiple execution paths within same program
Example: Web browser using different threads for tabs, downloads, rendering

Characteristics: Shared memory space, faster communication
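A minimal thread-based sketch using the standard `threading` module: two threads inside one process write into the same shared dictionary. (Note that in CPython the GIL limits true CPU parallelism for pure-Python code; the point here is the shared memory space.)

```python
import threading

results = {}   # shared memory: both threads write into the same dict

def work(name, n):
    results[name] = sum(range(n))   # each thread stores its own result

t1 = threading.Thread(target=work, args=("a", 10))
t2 = threading.Thread(target=work, args=("b", 5))
t1.start(); t2.start()
t1.join(); t2.join()                # wait for both threads to finish

print(sorted(results.items()))      # [('a', 45), ('b', 10)]
```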

Vectorization (SIMD)
What it is: Single instruction operating on multiple data elements
Example: Adding two arrays element by element in one instruction

Modern example: AVX instructions processing 8 numbers simultaneously

GPU Processing
What it is: Using graphics cards for general computation

Why it works: GPUs have thousands of small cores vs. a CPU's few large cores

Example: Cryptocurrency mining, machine learning, image processing

6. Parallel Programming Challenges

Granularity
Coarse-grained: Large chunks of work between communication
Example: Each core processes entire image sections

Fine-grained: Small chunks, frequent communication


Example: Each core processes individual pixels

Synchronization
What it is: Coordinating parallel tasks

Example: Waiting for all workers to finish their part before proceeding

Problem: Can cause performance bottlenecks
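The "wait for all workers before proceeding" pattern can be sketched with a `threading.Barrier`: no worker enters phase 2 until every worker has finished phase 1.

```python
import threading

N_WORKERS = 3
barrier = threading.Barrier(N_WORKERS)   # sync point for all workers
lock = threading.Lock()
order = []

def worker():
    with lock:
        order.append("part_done")   # phase 1: finish our part
    barrier.wait()                  # block until ALL workers reach here
    with lock:
        order.append("proceed")     # phase 2: only after everyone finished

threads = [threading.Thread(target=worker) for _ in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every "part_done" is guaranteed to come before every "proceed":
print(order == ["part_done"] * 3 + ["proceed"] * 3)  # True
```

The bottleneck mentioned above is visible here: the fastest worker sits idle at the barrier until the slowest one arrives.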

Scalability
Strong scaling: Fixed problem size, more processors
Goal: Solve same problem faster

Weak scaling: Problem size grows with processors


Goal: Solve bigger problems in same time
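Strong scaling can be explored numerically by plugging a fixed parallel fraction into Amdahl's law and increasing the processor count (the 90% figure below is a made-up example):

```python
# Strong scaling sketch: fixed problem, more processors (n).
# Speedup follows Amdahl's law with s = n.

def strong_scaling_speedup(p, n):
    return 1 / ((1 - p) + p / n)

for n in (2, 4, 8):
    print(n, round(strong_scaling_speedup(0.9, n), 2))
# diminishing returns: doubling processors stops doubling speedup
```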

7. Distributed Computing Types

Cluster Computing
What it is: High-end systems connected via LAN

Characteristics: Homogeneous hardware/OS, managed as a single system


Example: University's computer lab networked for research

Grid Computing
What it is: Heterogeneous systems across organizations
Characteristics: Different hardware/OS, wide-area network

Example: SETI@home - using volunteers' computers worldwide

Cloud Computing
What it is: On-demand access to computing resources
Examples: AWS, Google Cloud, Microsoft Azure

Benefits: Pay-as-you-use, elastic (near-unlimited) scalability

Pervasive/Ubiquitous Computing
What it is: Small, mobile, embedded systems
Examples: IoT devices, smartphones, sensor networks

Goal: Seamlessly blend into user's environment

8. Why Parallel Computing?

Main Reasons:
1. Save Time/Money: More resources = faster completion
Example: Rendering Pixar movie with 1000s of computers vs. 1

2. Solve Larger Problems: Some problems too big for single computer
Example: Weather simulation requiring petabytes of data

3. Provide Concurrency: Multiple things happening simultaneously


Example: Web servers handling thousands of users

4. Utilize Modern Hardware: Don't waste multi-core potential


Example: Your quad-core laptop should use all 4 cores

Quiz Preparation Tips

Key Formulas to Remember:


1. Speedup = Time_serial / Time_parallel
2. Amdahl's Law: Speedup = 1 / [(1-p) + p/s]

3. FLOPS = Sockets × Cores × Clock × Ops_per_cycle

Important Concepts:
Flynn's taxonomy (SISD, SIMD, MISD, MIMD)
Shared vs. Distributed memory trade-offs

Strong vs. Weak scaling


Granularity effects on performance

Why parallelism is necessary in modern computing

Practice Problems:
Try calculating speedup for different scenarios using Amdahl's law, and understand why perfect
speedup is rarely achieved in practice.

Real-World Applications to Remember:


GPUs for gaming and AI
Web servers handling multiple requests

Scientific simulations

Image/video processing

Cryptocurrency mining

Weather forecasting
