First Course Handout

January 2025

Course Title: Parallel Computing (CS633)


Credits: 3-0-0-9

Lecture Hours
MW: 3:35 - 5:00 PM (L16)

Office Hour
W: 5 - 6 PM (KD-221). Other times by appointment.

Extra Class
Saturday (time and venue will be announced via email)

Prerequisites:
Exposure to CS330 (Operating Systems), CS422 (Computer Architecture) and CS425 (Computer Networks)
is desirable. Familiarity with C/C++ is a must.

1 Course Objectives
Parallel programming is ubiquitous in today’s multi-core era and is used to solve many real-world scientific problems.
Massive parallelism entails significant hardware and software challenges. The course is structured so that
participants understand the challenges in efficiently executing large-scale parallel applications. It will cover
programming across multiple compute nodes using the Message Passing Interface (MPI) paradigm. The
assignment will be designed to strengthen understanding of parallel programming.

2 Course Content

1. Introduction: Why parallel computing? Amdahl’s law, speedup and efficiency (a worked example appears after this list).


2. Message passing: MPI basics, point-to-point communication, collective communication, synchronous/asynchronous
send/recv, parallel algorithms for collectives (a short code sketch appears after this list).

3. Parallel communication: Network topologies, network evaluation metrics, communication cost, routing
in interconnection networks, process-to-processor mapping.
4. Performance: Scalability, benchmarking, performance modeling, impact of network topologies, parallel
code analysis and profiling.
5. Parallel code design: Domain decomposition, communication-to-computation ratio, load balancing,
adaptivity, case studies: weather and material simulation codes.
6. Parallel I/O: MPI I/O algorithms, contemporary large-scale I/O architecture, I/O bottlenecks.
7. Supercomputer design: Study of the most powerful supercomputers, exascale systems.
8. Advanced topics from recent research papers including parallel machine learning.

An approximate breakdown of lecture hours may be found in [1].
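
As a worked example for topic 1, consider Amdahl’s law (the notation here is illustrative): if a fraction f of a
program’s sequential execution time is perfectly parallelizable over p processes, the speedup and efficiency are

    S(p) = 1 / ((1 - f) + f/p),        E(p) = S(p) / p.

For f = 0.9 and p = 16 this gives S(16) = 6.4 and E(16) = 0.4, and however large p grows the speedup never
exceeds 1/(1 - f) = 10. This serial-fraction bound is the reason speedup and efficiency are studied together
with scalability.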
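
To give a flavor of the message-passing style listed in topic 2, the following is a minimal sketch of a C/MPI
program (C/C++ with MPI is the assignment language), assuming a standard MPI installation such as MPICH or
Open MPI; the file name, message value, and process counts are illustrative only.

/* hello_mpi.c - illustrative sketch; compile with: mpicc hello_mpi.c -o hello_mpi
   and run with: mpirun -np 4 ./hello_mpi (exact commands depend on the local MPI setup). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                    /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* total number of processes */

    /* Point-to-point: rank 0 sends one integer to rank 1 (needs at least 2 processes). */
    int value = 0;
    if (size >= 2) {
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", value);
        }
    }

    /* Collective: every rank contributes its rank number; rank 0 receives the sum. */
    int sum = 0;
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Sum of ranks 0..%d = %d\n", size - 1, sum);

    MPI_Finalize();                            /* shut down the MPI runtime */
    return 0;
}

The blocking MPI_Send/MPI_Recv pair shown here is the simplest form of point-to-point communication; the
synchronous and nonblocking variants (MPI_Ssend, MPI_Isend/MPI_Irecv) mentioned in the same topic build on
this pattern.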

3 Evaluation Scheme
We will follow a relative grading scheme; the grading policy is shown in Figure 1. The quizzes (3–4) will
be pre-announced, and there will be no make-up quiz if you miss one. The assignment will be based on
programming in C/C++ with MPI; it will be done in groups of 5, and all group members will be evaluated
individually. A minimum of 75% attendance is mandatory.

Figure 1: Grading Policy

4 Books/References

1. D. E. Culler, J. P. Singh, and A. Gupta, Parallel Computer Architecture: A Hardware/Software Approach,
Morgan Kaufmann, 1998.

2. A. Grama, A. Gupta, G. Karypis, and V. Kumar, Introduction to Parallel Computing, 2nd Ed., Addison-
Wesley, 2003.

3. Marc Snir, Steve W. Otto, Steven Huss-Lederman, David W. Walker, and Jack Dongarra, MPI - The
Complete Reference, Second Edition, Volume 1: The MPI Core.

4. William Gropp, Ewing Lusk, and Anthony Skjellum, Using MPI: Portable Parallel Programming with the
Message-Passing Interface, 3rd Ed., MIT Press, 2014.

5. Peter S Pacheco, An Introduction to Parallel Programming, Morgan Kaufmann, 2011.


6. Research papers (will be notified during class).

5 Course Policy
Please refer to the CSE policy on plagiarism [2]; it will be strictly followed. If you are found to be involved
in unfair means, you will not be allowed to drop the course and will be reported to higher authorities. A
minimum of 75% attendance is compulsory.

References
[1] CS633 Course Content. https://www.cse.iitk.ac.in/pages/CS633.html.
[2] CSE Plagiarism Policy. https://www.cse.iitk.ac.in/pages/AntiCheatingPolicy.html.
