RNS INSTITUTE OF TECHNOLOGY
Department of Computer Science and Engineering
QUESTION BANK – MODULE 3 & MODULE 4
MODULE 3 – Distributed Memory Programming with MPI
Questions
1. What is MPI? Mention its key features.
2. Explain MPI_Init and MPI_Finalize functions.
3. Define communicator in MPI. What is MPI_COMM_WORLD?
4. What is message passing? List two advantages.
5. Differentiate between blocking and non-blocking communication in MPI.
6. Explain point-to-point communication with MPI_Send and MPI_Recv examples.
7. Discuss collective communication and give examples of MPI_Bcast and MPI_Reduce.
8. Describe the trapezoidal rule implementation using MPI.
9. Explain the importance of derived datatypes in MPI with an example.
10. What is MPI_Wtime()? Explain how performance is measured using it.
11. Illustrate the trapezoidal rule and parallelize it using MPI. Explain the code structure (see the sketch after this list).
12. Describe a parallel sorting algorithm using MPI. Explain data distribution and communication steps.
13. Discuss performance metrics like speedup, efficiency, and scalability with suitable MPI examples.
14. Explain the steps to parallelize matrix multiplication using MPI with pseudocode.
15. Discuss the challenges of distributed-memory programming using MPI and how they can be minimized.
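For reference, a minimal sketch of the parallel trapezoidal rule relevant to questions 2, 3, 7, 8, 10, and 11 above: it uses MPI_Init/MPI_Finalize, MPI_COMM_WORLD, MPI_Reduce, and MPI_Wtime. The integrand f(x) = x*x, the interval [0, 1], and the subinterval count n = 1024 are assumptions chosen only for illustration, and the sketch assumes the process count divides n evenly. Compile with mpicc and launch with, for example, mpiexec -n 4 ./a.out; rank 0 prints the result.

/* Minimal MPI trapezoidal rule sketch (assumed integrand and interval). */
#include <stdio.h>
#include <mpi.h>

/* Hypothetical integrand used only for illustration. */
static double f(double x) { return x * x; }

/* Serial trapezoidal rule over [left, right] with n subintervals of width h. */
static double trap(double left, double right, int n, double h) {
    double sum = (f(left) + f(right)) / 2.0;
    for (int i = 1; i < n; i++)
        sum += f(left + i * h);
    return sum * h;
}

int main(void) {
    int rank, size;
    double a = 0.0, b = 1.0;   /* global interval, assumed for the example */
    int n = 1024;              /* total subintervals, assumed              */

    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double h = (b - a) / n;    /* same step size on every process          */
    int local_n = n / size;    /* assumes size divides n evenly            */
    double local_a = a + rank * local_n * h;
    double local_b = local_a + local_n * h;

    double start = MPI_Wtime();
    double local_int = trap(local_a, local_b, local_n, h);

    /* Collective reduction gathers the partial integrals onto rank 0. */
    double total = 0.0;
    MPI_Reduce(&local_int, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    double elapsed = MPI_Wtime() - start;

    if (rank == 0)
        printf("Integral of x^2 over [%.1f, %.1f] ~= %f (%.6f s on %d processes)\n",
               a, b, total, elapsed, size);

    MPI_Finalize();
    return 0;
}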
MODULE 4 – Shared Memory Programming with OpenMP
Questions
1. What is OpenMP? State its advantages.
2. Write the syntax for #pragma omp parallel and explain its usage.
3. Differentiate between shared and private variables in OpenMP.
4. What is the role of the critical directive in OpenMP?
5. Explain loop-carried dependency with a small example.
6. Explain the use of the reduction clause in OpenMP and its benefits over critical (see the sketch after this list).
7. Describe the various scheduling techniques in OpenMP (static, dynamic, guided, runtime).
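For reference, a minimal sketch relevant to questions 2, 4, 6, and 7 above: it computes the same array sum twice, once with a critical section and once with the reduction clause, and attaches a schedule clause to each loop. The array size, its contents, and the chunk size 1000 are assumptions made only for illustration. Compile with an OpenMP flag such as -fopenmp (GCC/Clang).

/* Minimal OpenMP sketch: critical vs. reduction, with schedule clauses. */
#include <stdio.h>
#include <omp.h>

#define N 1000000   /* assumed array size, for illustration only */

int main(void) {
    static double a[N];
    for (int i = 0; i < N; i++)
        a[i] = 1.0;            /* assumed contents, so the expected sum is N */

    /* Version 1: a critical section serializes every update of the shared sum. */
    double sum_critical = 0.0;
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++) {
        #pragma omp critical
        sum_critical += a[i];
    }

    /* Version 2: reduction gives each thread a private partial sum that the
       runtime combines once at the end, avoiding per-iteration contention. */
    double sum_reduction = 0.0;
    #pragma omp parallel for schedule(dynamic, 1000) reduction(+: sum_reduction)
    for (int i = 0; i < N; i++)
        sum_reduction += a[i];

    printf("critical: %.1f  reduction: %.1f  max threads: %d\n",
           sum_critical, sum_reduction, omp_get_max_threads());
    return 0;
}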