Analysis Modeling of Parallel Programs

1. Explain the various sources of overhead in parallel systems.

In parallel computing systems, overhead refers to any additional time, resources, or complexity
introduced beyond the essential computational tasks. Overheads can arise from various sources, and
understanding them is crucial for optimizing performance and efficiency. Here are some common
sources of overhead in parallel systems:
1. Communication Overhead:
 Data Transfer: Transmitting data between processes or nodes incurs overhead due to communication protocols, network latency, and bandwidth limitations. (A small timing sketch that makes this cost visible follows this list.)
 Synchronization: Coordination and synchronization between parallel tasks or processes can lead to overhead, especially when using barriers, locks, or other synchronization mechanisms.
 Message Passing: Overhead associated with sending and receiving messages, including message creation, buffering, and routing.
2. Load Balancing Overhead:
 Unequal distribution of workload among processing units can lead to load imbalance, requiring overhead for load monitoring, redistribution, and adjustment.
 Dynamic load balancing mechanisms introduce additional overhead for workload assessment, decision-making, and task migration.
3. Parallelization Overhead:
 Overhead incurred during the parallelization process, such as task decomposition, mapping, and scheduling.
 Additional computation and resource consumption due to parallelization frameworks, libraries, or runtime systems.
4. Resource Management Overhead:
 Overhead associated with managing resources such as processors, memory, and storage in parallel systems.
 Resource allocation, deallocation, tracking, and scheduling operations contribute to overhead.
5. Context Switching Overhead:
 In multiprocessor systems, switching between execution contexts (e.g., switching between threads or processes) introduces overhead due to state saving and restoration.
 A high frequency of context switches can degrade performance and increase overhead.
6. Scalability Overhead:
 Overhead that arises when scaling parallel systems to larger configurations or increasing the number of processing units.
 Scalability challenges, such as diminishing returns, increased communication overhead, and contention for resources, can impact overall system performance.
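The following is a minimal sketch (all names and sizes are illustrative, not taken from the notes above) that makes communication and startup overhead concrete: it compares summing a large list locally with shipping the same list to a worker process over a pipe and reading the single result back.

import time
from multiprocessing import Process, Pipe

def worker(conn):
    data = conn.recv()            # receiving the data incurs deserialization cost
    conn.send(sum(data))          # send the single result back
    conn.close()

if __name__ == "__main__":
    data = list(range(1_000_000))

    # Local computation: no communication at all.
    t0 = time.perf_counter()
    local_result = sum(data)
    t_local = time.perf_counter() - t0

    # Remote computation: process creation + data transfer + result transfer.
    parent_conn, child_conn = Pipe()
    t0 = time.perf_counter()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send(data)        # data transfer overhead
    remote_result = parent_conn.recv()
    p.join()
    t_remote = time.perf_counter() - t0

    assert local_result == remote_result
    print(f"local:  {t_local:.4f} s")
    print(f"remote: {t_remote:.4f} s (includes communication and startup overhead)")

On typical hardware the remote version is noticeably slower even though it performs the same arithmetic; the difference is exactly the communication and parallelization overhead described in points 1 and 3 above.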

2. What is granularity? What is the effect of granularity on the performance of a parallel system?
Granularity in parallel computing refers to the size or scale of the tasks or operations that are
parallelized within a system. It can be categorized into two main types: fine-grained granularity and
coarse-grained granularity.

1. Fine-Grained Granularity:
 Fine-grained granularity involves dividing tasks into very small units of work. These units often correspond to small portions of code or computations.
 Fine-grained parallelism allows for a high degree of concurrency, as many small tasks can be executed simultaneously by different processing units.
 Examples of fine-grained parallelism include parallelizing loops, individual function calls, or operations within a loop.
2. Coarse-Grained Granularity:
 Coarse-grained granularity involves dividing tasks into larger units of work. These units are typically more substantial and encompass multiple smaller operations or computations.
 Coarse-grained parallelism reduces the overhead associated with task scheduling, communication, and synchronization compared to fine-grained parallelism.
 Examples of coarse-grained parallelism include parallelizing entire algorithms, functions, or sections of a program that perform significant computations. (A short sketch contrasting the two decompositions follows this list.)
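As a concrete illustration of the two granularities, the sketch below (the names and sizes are assumptions chosen for the example, not from the notes) decomposes the same element-wise computation once into one task per element and once into one task per large block.

def square(x):                       # fine-grained unit of work: a single element
    return x * x

def square_block(block):             # coarse-grained unit of work: a whole block
    return [x * x for x in block]

data = list(range(1_000_000))

# Fine-grained decomposition: one tiny task per element (1,000,000 tasks).
fine_tasks = [(square, (x,)) for x in data]

# Coarse-grained decomposition: a handful of large tasks, e.g. one per processor.
n_blocks = 4
block_size = len(data) // n_blocks
coarse_tasks = [(square_block, (data[i:i + block_size],))
                for i in range(0, len(data), block_size)]

print(len(fine_tasks), "fine-grained tasks vs", len(coarse_tasks), "coarse-grained tasks")

Both decompositions perform the same arithmetic; they differ only in how much work each schedulable task carries, which is what drives the trade-offs discussed next.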

The effect of granularity on the performance of a parallel system can vary based on several factors:

1. Communication Overhead:
 Fine-grained parallelism can lead to higher communication overhead due to frequent synchronization and data exchange between processing units.
 Coarse-grained parallelism reduces communication overhead, as tasks are larger and require less frequent coordination between processing units.
2. Task Scheduling and Load Balancing:
 Fine-grained tasks may require more sophisticated task scheduling algorithms to efficiently distribute work among processing units and achieve load balancing.
 Coarse-grained tasks are easier to schedule and balance, as they involve fewer tasks with potentially longer execution times.
3. Scalability:
 Fine-grained parallelism can improve scalability by allowing more tasks to be executed concurrently, especially in systems with a large number of processing units.
 However, excessive fine-grained parallelism can lead to diminishing returns or performance degradation due to increased overhead.
 Coarse-grained parallelism may be more scalable in scenarios where communication overhead and synchronization costs are significant factors.
4. Resource Utilization:
 Fine-grained parallelism can lead to better resource utilization by exploiting more concurrency and keeping processing units busy with smaller tasks.
 Coarse-grained parallelism may underutilize resources if tasks are too large and processing units remain idle while waiting for tasks to complete.
In practice, the optimal granularity for a parallel system depends on the characteristics of the
workload, the architecture of the parallel system, the number of processing units available,
communication latency, and the trade-offs between concurrency, overhead, and resource utilization.
Finding the right balance between fine-grained and coarse-grained parallelism is essential for
maximizing the performance and efficiency of parallel systems.
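A hedged timing sketch of this trade-off is given below; it runs the same map through a process pool once with a tiny chunk size (fine-grained scheduling, more inter-process traffic) and once with a large chunk size (coarse-grained scheduling, less overhead). The exact numbers depend on the machine; only the trend matters.

import time
from multiprocessing import Pool, cpu_count

def square(x):
    return x * x

if __name__ == "__main__":
    data = list(range(200_000))
    with Pool(processes=cpu_count()) as pool:
        for chunksize in (1, 10_000):      # fine-grained vs coarse-grained task size
            t0 = time.perf_counter()
            pool.map(square, data, chunksize=chunksize)
            elapsed = time.perf_counter() - t0
            print(f"chunksize={chunksize:>6}: {elapsed:.3f} s")

With chunksize=1 each element is dispatched to a worker individually, so scheduling and communication dominate; with a large chunksize the same work is shipped in a few big pieces and most of that overhead disappears.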

3. What do you mean by Asymptotic Analysis of Parallel Programs?
Asymptotic analysis in the context of parallel programs involves evaluating the performance
characteristics and scalability of such programs as the input size grows towards infinity. It helps in
understanding how the performance of a parallel algorithm or program behaves as the problem size
increases and as more resources (such as processors) are utilized.

Here are some key aspects of asymptotic analysis of parallel programs:

Time Complexity:

 Just as for sequential algorithms, parallel algorithms have a time complexity that describes the relationship between the size of the input (n) and the computational steps required.
 Asymptotic analysis for time complexity in parallel programs often considers factors like the number of processors available (p) and the efficiency of parallelization.
 Common notations such as O(), Θ(), and Ω() are used to express the upper bound, tight bound, and lower bound of the time complexity, respectively. A standard example appears just after this list.
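As a standard illustrative example (assumed here, not stated in the original notes): summing n numbers with a tree reduction on p processors takes

    T_p(n) = \Theta\left(\frac{n}{p} + \log p\right)

since each processor first adds its n/p local elements and the p partial sums are then combined in \log p parallel steps, whereas the sequential time is \Theta(n).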
Scalability:

 Asymptotic analysis helps in assessing the scalability of parallel programs. Scalability refers to
how well a program can handle increasing input sizes or utilize additional resources (such as
processors) to improve performance.
 Scalability analysis involves evaluating how the execution time or resource utilization
changes as the problem size or the number of processors varies.
 Ideally, a parallel program exhibits strong scalability, where adding processors to a fixed-size problem reduces the execution time roughly in proportion. A simple measurement sketch follows this list.
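The sketch below (an assumed, minimal experiment, not from the notes) measures this directly: it runs the same fixed-size workload with 1, 2, and 4 worker processes and records the elapsed time. Under ideal strong scaling the time would roughly halve each time the worker count doubles; in practice the curve flattens as overhead grows.

import time
from multiprocessing import Pool

def heavy(x):
    # a deliberately CPU-bound task so process-level parallelism can help
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    workload = [20_000] * 64           # fixed problem size
    for p in (1, 2, 4):
        with Pool(processes=p) as pool:
            t0 = time.perf_counter()
            pool.map(heavy, workload)
            print(f"p={p}: {time.perf_counter() - t0:.3f} s")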

Speedup and Efficiency:

 Speedup is a metric used to measure the performance improvement achieved by parallelizing a program. It is often expressed as the ratio of the execution time of the sequential program to the execution time of the parallel program.
 Asymptotic analysis considers how the speedup of a parallel program changes with increasing input size or the number of processors.
 Efficiency, another important metric, is the ratio of the speedup achieved to the number of processors used. It indicates how effectively the available processors are utilized. (These two definitions are written out in symbols below.)
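Writing the definitions above in symbols, with T_s the sequential execution time and T_p the parallel execution time on p processors:

    S(p) = \frac{T_s}{T_p}, \qquad E(p) = \frac{S(p)}{p}

As an assumed worked example (the numbers are illustrative, not measurements from the notes): if T_s = 100 s and T_p = 30 s on p = 4 processors, then S(4) = 100/30 \approx 3.33 and E(4) \approx 3.33/4 \approx 0.83, i.e. the processors are about 83% utilized.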

Work and Depth Analysis:

 Work and depth analysis is a common approach in asymptotic analysis of parallel programs.
It involves breaking down the computation into work (total computational steps) and depth
(longest path of dependencies or critical path).
 Work and depth analysis helps in understanding the inherent parallelism in a program and
estimating its potential for parallel execution.
 Algorithms with low depth and high work (i.e., low depth relative to the total work) are often well-suited for parallelization, as they exhibit high inherent parallelism. The small counting sketch below shows both quantities for a tree-structured sum.
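A minimal sketch of work and depth in action (the recursive splitting and the names are assumptions chosen for illustration): a pairwise tree sum of n numbers, instrumented to count its total additions (work) and the longest chain of dependent additions (depth).

def tree_sum(values):
    # returns (result, work, depth) for a pairwise (tree) reduction
    if len(values) == 1:
        return values[0], 0, 0
    mid = len(values) // 2
    left, wl, dl = tree_sum(values[:mid])     # the two halves are independent,
    right, wr, dr = tree_sum(values[mid:])    # so they could run in parallel
    # combining costs one addition; depth is the deeper half plus this step
    return left + right, wl + wr + 1, max(dl, dr) + 1

total, work, depth = tree_sum(list(range(16)))
print(total, work, depth)   # 120, 15, 4: work grows like n, depth like log2(n)

Here the work is Θ(n) while the depth is Θ(log n), so the available parallelism (work divided by depth) grows with n, which is what makes this pattern attractive for parallel execution.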

In summary, asymptotic analysis of parallel programs involves analyzing their time complexity,
scalability, speedup, efficiency, and inherent parallelism as the problem size and the number of
processors change. It provides insights into the performance characteristics and helps in designing
efficient parallel algorithms and systems.
