Parallel and Distributed Computing Outline

The course CS-362, titled Parallel & Distributed Computing, focuses on parallel processing and distributed systems, covering essential topics such as parallel algorithms, cluster computing, and optimization techniques. It includes a comprehensive course outline with practical applications in modern tools and programming models, aiming to equip students with the ability to design and analyze efficient parallel and distributed systems. The course also features a structured grading policy and a detailed weekly topic breakdown.

Course Title: Parallel & Distributed Computing
Code: CS-362
Credit Hours: 2+1
Prerequisites: Object Oriented Programming, Operating Systems
Category: Core
Course Description: This course explores parallel processing and distributed systems, covering topics such as parallel algorithms, cluster computing, and grid computing, with emphasis on the design and optimization of algorithms for parallel execution across multiple processors and distributed environments.
Course Outline: Asynchronous/synchronous computation/communication, concurrency control, fault tolerance, GPU architecture and programming, heterogeneity, interconnection topologies, load balancing, memory consistency models, memory hierarchies, Message Passing Interface (MPI), MIMD/SIMD, multithreaded programming, parallel algorithms & architectures, parallel I/O, performance analysis and tuning, power, programming models (data parallel, task parallel, process-centric, shared/distributed memory), scalability and performance studies, scheduling, storage systems, synchronization, and tools (CUDA, Swift, Globus, Condor, Amazon AWS, OpenStack, Cilk, gdb, threads, MPICH, OpenMP, Hadoop, FUSE).
Aim & Objectives: This course addresses issues in the design of parallel and distributed systems, focusing on Architectural Models, Software System Models, Models of Synchrony, Processes and Threads, and Synchronization.
Mapping of CLOs with PLOs
Course Learning Outcomes (CLOs) / Domain / BT Level / PLO
1. Learn about parallel and distributed computers (C, BT 1, PLO 2)
2. Describe portable programs for parallel or distributed architectures using the Message-Passing Interface (MPI) library (C, BT 2, PLO 2)
3. Analyze the modelling and performance of parallel programs (C, BT 3, PLO 4)
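CLO 2 concerns message passing in the MPI style. Real coursework would use MPICH in C (or mpi4py), but the send/receive pattern can be sketched without an MPI installation using Python's multiprocessing pipes; the function and message names below are illustrative, not part of any MPI API:

```python
from multiprocessing import Process, Pipe

def worker(conn, rank):
    # Each worker receives a message from rank 0, tags it with its
    # rank, and sends a reply back -- analogous to MPI_Recv/MPI_Send.
    msg = conn.recv()
    conn.send(f"rank {rank} got: {msg}")
    conn.close()

def run_exchange(n_workers=2):
    """Rank-0 process sends a message to each worker over a private
    channel and collects the replies, mimicking MPI point-to-point
    communication (processes here run one after another for clarity,
    so there is no real overlap)."""
    replies = []
    for rank in range(1, n_workers + 1):
        parent_conn, child_conn = Pipe()
        p = Process(target=worker, args=(child_conn, rank))
        p.start()
        parent_conn.send("hello")
        replies.append(parent_conn.recv())
        p.join()
    return replies
```

Unlike shared-memory threading, the processes here share no state: all coordination happens through explicit messages, which is the defining property of the MPI programming model.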
Mapping of LLOs with PLOs
Lab Learning Outcomes (LLOs) / Domain / BT Level / PLO
1. Discuss parallel and distributed computing solutions, incorporating efficient algorithms and optimizing resource utilization across interconnected systems (C, BT 2, PLO 2)
2. Excel in lab work as per requirements using modern tools (P, BT 5, PLO 5)
3. Communicate effectively, both orally and in writing (A, BT 2, PLO 7)
Reference Materials:
1. A. S. Tanenbaum and M. van Steen, Distributed Systems: Principles and Paradigms, 2nd Edition, Prentice Hall, 2007.
2. K. Hwang, J. Dongarra, and G. C. Fox, Distributed and Cloud Computing: Clusters, Grids, Clouds, and the Future Internet, 1st Edition, Elsevier, 2011.
Grading Breakup and Policy:
Quiz 1: 10%
Quiz 2: 10%
Sessional: 10%
Midterm Examination: 30%
Final Examination: 40%
Week# Topics
1 Asynchronous/synchronous computation/communication
2 Concurrency control
3 GPU architecture and programming
4 Heterogeneity
5 Interconnection topologies
6 Load balancing
7 Memory consistency model
8 Memory hierarchies
Mid Term Exam
9 Message Passing Interface (MPI)
10 MIMD/SIMD
11 Multithreaded programming
12 Parallel algorithms & architectures
13 Parallel I/O techniques and applications
14 Performance analysis and tuning
15 Power consumption in parallel computing, Power-saving techniques and architectures
16 Programming models (data parallel, task parallel, process-centric, shared/distributed memory, synchronization, tools)
Final Term Exam
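Week 16 covers programming models, including the data-parallel model in which the same operation is applied independently to disjoint pieces of the input. As a minimal, toolchain-free sketch (a hypothetical helper, not part of the course materials), this can be expressed with Python's standard thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def data_parallel_map(func, data, workers=4):
    """Apply `func` to every element of `data` concurrently.

    This is the data-parallel model in miniature: one operation,
    many independent data items, no ordering dependencies between
    them. `pool.map` preserves the input order in its results.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, data))

# Example: square each element independently
squares = data_parallel_map(lambda x: x * x, range(5))  # → [0, 1, 4, 9, 16]
```

The same pattern scales from thread pools to OpenMP parallel loops and CUDA kernels covered earlier in the course: only the execution substrate changes, not the model.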
