Detailed Lecture Notes: Parallel Computing

Week 2: Introduction to Parallel Computing


Parallel computing is the use of multiple processors or computers to solve a problem faster. Performance improves because the problem is divided into smaller parts that run simultaneously. Example: if five workers assemble different parts of a car at the same time, the car is finished sooner.
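The divide-and-run-simultaneously idea can be sketched in Python (the notes give no code, so the 4-way split and the summing task are illustrative assumptions):

```python
# Sketch: summing a large list by splitting it into chunks that worker
# processes handle in parallel. The 4-way split is an assumption; real
# code would match the number of available cores.
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker sums only its own slice of the data."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as sum(data), computed by four workers
```

Each worker is like one of the five car assemblers: it handles its own part, and the results are combined at the end.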

Concepts:
- Concurrency: Multiple tasks making progress in overlapping time periods, possibly by taking turns on a single processor. Example: one cook alternating between stirring a pot and chopping vegetables.
- Parallelism: Tasks executing at the same instant on different processors. Example: two cooks each preparing a different dish at the same time.
- Threads & Processes: Threads are the smallest units of execution. A process can contain multiple threads, and those threads share the process's memory.
- Synchronous vs Asynchronous: Synchronous tasks execute one after another, each waiting for the previous one to finish; asynchronous execution starts a task and moves on without waiting for it to complete. Example: writing letters one at a time (synchronous) vs. starting the washing machine and making a sandwich while it runs (asynchronous).
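The threads-within-a-process idea above can be shown concretely (a minimal sketch; the thread names and shared list are illustrative, not from the notes):

```python
# Sketch: two threads in one process working in overlapping time periods.
# They share the same memory (the `results` list), so access to it is
# guarded with a lock.
import threading

results = []
lock = threading.Lock()

def worker(name):
    for i in range(3):
        with lock:                      # synchronize access to shared state
            results.append(f"{name}-{i}")

t1 = threading.Thread(target=worker, args=("sing",))
t2 = threading.Thread(target=worker, args=("eat",))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(results))  # 6: both threads completed their overlapping work
```

Note that in CPython the two threads interleave rather than run truly in parallel for CPU-bound work; the example illustrates concurrency within one process.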

Week 3: Multi-core Processors and Limitations of Single CPUs


Single-core processors have performance limits due to heat dissipation, power consumption, and memory access speed. Example: a single worker assembling an entire car alone is slow and quickly exhausted.

Multi-core processors solve this by allowing multiple processing units (cores) to work in parallel.
Example: A factory where each worker focuses on a single task to speed up production.

Von Neumann Architecture:


- Uses the stored-program concept: memory holds both data and instructions.
- Fetch-execute cycle: the CPU fetches an instruction from memory, decodes it, and executes it, then repeats.
Example: following a recipe step by step, where the ingredients (data) and the instructions (code) are kept in one place.
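The fetch-execute cycle can be simulated in a few lines (a toy sketch; the instruction names and addresses are invented for illustration):

```python
# Toy sketch of the fetch-decode-execute cycle: one memory dict holds
# both the "program" (instructions) and its data, as in a von Neumann
# machine.
memory = {
    0: ("LOAD", 100),    # acc = mem[100]
    1: ("ADD", 101),     # acc += mem[101]
    2: ("STORE", 102),   # mem[102] = acc
    3: ("HALT", None),
    100: 2, 101: 3, 102: 0,   # the data lives alongside the code
}

pc, acc = 0, 0
while True:
    op, addr = memory[pc]                 # fetch the next instruction
    pc += 1
    if op == "LOAD":                      # execute it
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[102])  # 5: the program added 2 and 3 and stored the result
```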

Week 4: Evolution of Multi-core Processors


As single-core performance hit a limit, computers started using multiple cores. Instead of increasing clock speed, modern processors increase the number of processing units.

Example: Instead of hiring one super-fast chef, a restaurant hires multiple chefs working on different
dishes.
Memory Hierarchy:
- Shared Memory: All cores access the same memory.
- Private Memory: Each core has its own memory (cache) for faster data access.
Example: Students sharing a single book (shared memory) vs. each having their own copy (private
memory).
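The shared vs. private distinction can be demonstrated with threads and processes (a sketch under the assumption that Python threads model shared memory and child processes model private memory):

```python
# Sketch: threads share one address space, so a write by one thread is
# visible to the rest; a child process gets its own copy, so its writes
# stay private.
import threading
import multiprocessing

shared = {"count": 0}

def thread_writer():
    shared["count"] = 42            # same memory as the main thread

t = threading.Thread(target=thread_writer)
t.start(); t.join()
print(shared["count"])              # 42: the thread's update is visible

def process_writer(d):
    d["count"] = 99                 # modifies the child's own copy only

if __name__ == "__main__":
    p = multiprocessing.Process(target=process_writer, args=(shared,))
    p.start(); p.join()
    print(shared["count"])          # still 42: the parent's memory is unchanged
```

This mirrors the book analogy: threads annotate the same shared book, while each process writes in its own copy.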

Week 5: Cache Coherence and Thread Management


Cache coherence ensures all processors have updated and consistent data. If one core updates a
value in its cache, others must also see the change.

Example: If one person in a meeting updates a document, all attendees should receive the new
version.

Cache Coherence Protocols:


- MESI: Each cache line is in one of four states - Modified, Exclusive, Shared, or Invalid - and transitions between them keep all caches consistent.
- MOESI: Adds an 'Owned' state, which lets a cache share modified data with other caches without first writing it back to main memory.
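The MESI states can be sketched as a transition table (a simplified illustration that assumes a snooping bus and ignores write-back details):

```python
# Illustrative MESI transitions for one cache line, driven by local
# accesses and remote (bus-observed) accesses from other cores.
MESI = {
    # (current state, event) -> next state
    ("I", "local_read"):   "S",  # simplification: another cache may share it
    ("I", "local_write"):  "M",
    ("S", "local_write"):  "M",  # other sharers are invalidated
    ("E", "local_write"):  "M",  # silent upgrade, no bus traffic needed
    ("E", "remote_read"):  "S",
    ("M", "remote_read"):  "S",  # must supply the dirty data to the reader
    ("M", "remote_write"): "I",
    ("S", "remote_write"): "I",
    ("E", "remote_write"): "I",
}

def step(state, event):
    """Apply one event; events not in the table leave the state unchanged."""
    return MESI.get((state, event), state)

state = "I"
for event in ["local_read", "local_write", "remote_read"]:
    state = step(state, event)
print(state)  # I -> S -> M -> S: the line ends up Shared
```

This mirrors the meeting analogy: once one core modifies the line (M), other cores' stale copies are invalidated, and a remote read forces the fresh value to be shared.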

Thread Scheduling:
- The OS assigns threads to available cores as efficiently as it can.
- Soft Affinity: The OS tries to keep a thread on the same core to preserve its cached data.
- Hard Affinity: Programmers can explicitly bind threads to specific cores.
Example: assigning tasks to employees based on their expertise for efficiency.
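Hard affinity can be set from Python on Linux (a sketch; `os.sched_setaffinity` is Linux-specific, so the code guards for platforms where it is absent):

```python
# Sketch of hard affinity: pinning the current process to CPU 0.
# os.sched_setaffinity exists only on Linux; other OSes need other APIs
# (e.g. SetThreadAffinityMask on Windows).
import os

if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})        # pid 0 means the calling process
    print(os.sched_getaffinity(0))      # the process may now run only on core 0
else:
    print("CPU affinity API not available on this platform")
```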
