CSC 415 Lecture Note
Computer Performance and Evaluation
Performance evaluation of computer systems refers to the process of assessing and analysing the behaviour and overall performance of various components within a computing environment. It involves measuring and quantifying different aspects of system performance, such as response time, throughput, resource utilization, scalability, reliability, and efficiency. Performance evaluation helps stakeholders understand how well a system performs under different workloads, configurations, and operating conditions. It provides valuable insights for optimizing system design, configuration, and resource allocation to meet performance goals and user requirements.
Objectives of Performance Study
• Evaluating design alternatives (system design)
• Comparing two or more systems (system selection)
• Determining the optimal value of a parameter (system tuning)
• Finding the performance bottleneck (bottleneck identification)
• Characterizing the load on the system (workload characterization)
• Determining the number and sizes of components (capacity planning)
• Predicting the performance at future loads (forecasting)
BASIC TERMS
System: Any collection of hardware, software, and networks.
Metrics: The criteria used to analyse the performance of the system or its components.
Workloads: The requests made by the users of the system.
Importance of Performance Evaluation in Computer Systems:
Performance evaluation plays a crucial role in ensuring the reliability, scalability, and efficiency of computer systems. Some key reasons why performance evaluation is important include:
1. System Optimization: It helps identify bottlenecks, inefficiencies, and areas for improvement within a computer system. By analysing performance metrics, system designers can optimize hardware, software, and configuration settings to achieve better performance and resource utilization.
2. Capacity Planning: Performance evaluation provides insights into the capacity and scalability of computer systems. It helps predict future resource requirements and plan for capacity upgrades or scaling strategies to accommodate growing workloads and user demands.
Selecting Metrics and Measurement Techniques
When choosing performance metrics and measurement techniques, consider the following criteria:
• Sensitivity and Insight: Choose metrics that are sensitive to changes in system behaviour and provide meaningful insights into system performance.
• Comprehensive Coverage: Consider a diverse set of performance metrics and measurement techniques to capture different aspects of system performance comprehensively.
• Applicability Across Layers: Choose performance metrics and measurement techniques that are applicable and relevant across different layers of the computer system stack.
• Cost and Resources: Consider the cost, resources, and feasibility associated with measuring and collecting performance data using different techniques, balancing the trade-offs between accuracy and resource consumption.
• Standardization and Comparability: Prefer standardized performance metrics and measurement techniques that facilitate comparability across different systems and environments.
By selecting performance metrics and measurement techniques based on these criteria, evaluators can conduct effective performance evaluations that provide valuable insights into system performance, scalability, and efficiency to meet user requirements.
4. Interdependencies: Components within a computer system are interdependent. Changes in one component can impact the performance of other components or the overall system. Evaluating performance requires considering these interdependencies and their effects on system behaviour and resource utilization.
5. Trade-offs and Optimization: Performance evaluation often involves trade-offs between competing objectives, such as speed versus accuracy, throughput versus latency, and cost versus performance. Optimizing performance requires balancing these trade-offs while considering system constraints, such as hardware limitations, budget constraints, and user requirements.
6. Complexity of Analysis Techniques: Analyzing performance data and drawing meaningful conclusions often requires sophisticated analysis techniques, such as statistical modelling, queuing theory, simulation, and benchmarking. These techniques may require specialized expertise and computational resources to apply effectively.
7. Real-world Variability: Real-world conditions, such as hardware failures, network congestion, software bugs, and environmental factors, can introduce variability and uncertainty into performance evaluation. Accounting for these factors and their potential impact on system performance adds another layer of complexity to the evaluation process.
Overall, computer performance evaluation is complex due to the diverse nature of computer systems, the multitude of performance metrics and factors to consider, the dynamic and unpredictable nature of workloads, the interdependencies among system components, the trade-offs inherent in performance optimization, the complexity of analysis techniques, and the variability introduced by real-world conditions. Addressing these complexities requires a holistic and systematic approach to performance evaluation, combining theoretical knowledge, practical experience, and empirical analysis techniques.
Performance Evaluation Activities
Performance evaluation activities involve assessing the performance of computer systems, software applications, networks, and other technology-related entities. These activities vary depending on the specific context and objectives but generally include the following:
1. Benchmarking: Benchmarking involves running standardized tests or benchmarks to measure the performance of hardware components (such as CPUs, GPUs, and storage devices) or software applications (such as databases, web servers, and operating systems).
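As an illustration of the benchmarking idea, the sketch below times a stand-in workload with Python's standard timeit module. The sorting task is only a placeholder chosen for this example, not a benchmark prescribed by the text.

```python
import timeit

# Workload under test: a sorting task used purely as a stand-in benchmark.
def workload():
    data = list(range(10_000, 0, -1))
    return sorted(data)

# Run the workload repeatedly and keep the best time, which filters out
# transient interference from other processes on the machine.
times = timeit.repeat(workload, number=10, repeat=5)
best = min(times) / 10  # seconds per single run
print(f"best time per run: {best:.6f} s")
```

Reporting the minimum of several repeats is a common convention: the fastest observation is the one least disturbed by background activity.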
2. Stress Testing: Stress testing assesses stability and performance under extreme conditions, such as high traffic spikes, resource exhaustion, or denial-of-service attacks, to evaluate system robustness and resilience.
3. Profiling: Profiling involves analyzing the behaviour and resource utilization patterns of software applications or system components to identify performance hotspots, bottlenecks, and optimization opportunities. Profiling tools monitor CPU usage, memory usage, disk I/O, network traffic, and other performance metrics to diagnose performance issues and improve software performance.
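The profiling activity described above can be sketched with Python's built-in cProfile. The slow_sum function is a made-up target, chosen only to give the profiler something to attribute time to.

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive loop so the profiler has a clear hotspot.
    total = 0
    for i in range(n):
        total += i * i
    return total

# Collect call counts and timings while the target function runs.
profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Summarize: which functions consumed the most cumulative time.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

The report lists each function with its call count and time, which is exactly the "hotspot" data the text describes.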
4. Monitoring and Logging: Monitoring involves continuously tracking and collecting performance metrics, system health indicators, and operational data in real time. It helps observe system performance, identify anomalies or degradations, and enable a proactive response to performance issues. Logging involves recording events, errors, warnings, and diagnostic information for analysis, troubleshooting, and auditing purposes.
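A minimal sketch of the monitoring-and-logging idea, using Python's standard logging module. The simulated request and the 0.5 s alert threshold are illustrative assumptions, not values from the text.

```python
import logging
import time

# Record timestamp, severity, and message for every event, mirroring the
# events/warnings/diagnostics described above.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("monitor")

def handle_request():
    start = time.perf_counter()
    time.sleep(0.01)  # simulated work standing in for a real request
    elapsed = time.perf_counter() - start
    log.info("request served in %.4f s", elapsed)
    if elapsed > 0.5:  # hypothetical alert threshold
        log.warning("response time exceeded threshold")
    return elapsed

handle_request()
```

In a real deployment the same pattern feeds a metrics pipeline instead of the console, but the principle, timestamped records plus threshold-triggered warnings, is the same.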
5. Capacity Planning: Capacity planning involves forecasting future demand, estimating resource requirements, and provisioning infrastructure resources to meet performance objectives. Capacity planning considers factors such as growth projections, workload trends, peak usage periods, and service-level agreements (SLAs) to ensure adequate capacity and performance headroom.
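The growth-projection arithmetic behind capacity planning can be sketched as follows. Every figure here (current load, growth rate, per-server capacity) is an invented example, not data from the text.

```python
# Project future resource demand from a current utilization figure and an
# assumed compound monthly growth rate.
current_rps = 500          # current peak requests per second (illustrative)
monthly_growth = 0.08      # assumed 8% growth per month (illustrative)
capacity_per_server = 400  # requests/second one server sustains (illustrative)

horizon_months = 12
projected_rps = current_rps * (1 + monthly_growth) ** horizon_months
servers_needed = -(-projected_rps // capacity_per_server)  # ceiling division

print(f"projected peak load in {horizon_months} months: {projected_rps:.0f} rps")
print(f"servers required: {servers_needed:.0f}")
```

The compound-growth model is the simplest possible forecast; real capacity plans also account for seasonality, peak periods, and SLA headroom, as the text notes.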
6. Tuning and Optimization: Tuning and optimization involve fine-tuning system configurations, software settings, and resource allocations to maximize performance and efficiency. Optimization activities may include code optimization, database tuning, network optimization, caching strategies, and hardware upgrades to improve system responsiveness, throughput, and resource utilization.
7. Performance Modelling and Simulation: Performance models and simulations are tools for understanding, analysing, predicting, and optimizing system behaviour. They are used across various disciplines, including science, engineering, and the social sciences, and provide a structured framework for inquiry and decision-making, helping individuals and organizations navigate the complexity of the world around them. A performance modelling study generally involves the following steps:
• Define Objectives: Define the goals, objectives, and scope of the study, including the system components and performance metrics of interest.
• Choose a Modelling Approach: Select a modelling approach or technique based on the characteristics of the system components, the available data, and the modelling objectives, choosing an appropriate level of abstraction and complexity that balances accuracy with computational tractability.
• Build the Model: Identify the components, parameters, variables, and relationships within the system, representing the behaviour and interactions of the system components. Implement the model using mathematical equations, simulation tools, or other modelling tools.
• Parameter Estimation: Estimate the parameters of the performance model, such as service demands and resource capacities, based on measurements and available data.
• Validation: Validate the predictions of the performance model against measurements and expert judgment, to ensure that the model reflects the behaviour of the real system.
Performance Metrics
Load Balance: The distribution of workload or user activity across system resources; it measures how evenly work is spread across them.
Turnaround Time: In batch job systems, the time between submission of a request and completion of the result.
Response Time: The time a system takes to react to a request. Even if the actual processing time is short, delays in the network can make the system feel slow to users.
User Experience and Business Impact: Response time directly impacts business outcomes such as customer acquisition, conversion rates, and revenue generation. Faster response times drive higher user engagement, increased sales, and improved customer satisfaction, while slower response times can result in lost opportunities and negative perception.
Service Level Agreements (SLAs): Many organizations define response time targets as part of their SLAs to ensure that services meet performance requirements and user expectations. Adherence to response time SLAs is critical for maintaining service quality and meeting contractual obligations.
Measurement and Optimization:
• Measurement: Response time is typically measured using performance monitoring tools, synthetic testing tools, or application performance monitoring (APM) solutions. It is important to measure response time from the user's perspective, taking into account all components of the request-response cycle.
• Optimization: Techniques include optimizing server-side processing through code optimization or caching, and reducing client-side processing through client-side caching.
Throughput
• Network Communication: Throughput reflects the rate at which data is transferred between nodes or devices in a network. It encompasses the speed and efficiency of data transmission.
• Transaction Handling: Throughput measures the rate at which transactions or business operations can be processed by a system or application. It is important in transactional systems such as databases, e-commerce platforms, and online transaction processing, where high throughput is essential for meeting demand.
• Concurrency and Scalability: Throughput reflects a system's ability to handle multiple concurrent requests or transactions efficiently. Scalable systems maintain high throughput even as the workload or user demand increases.
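A small sketch of how response time and throughput might be computed from raw request timestamps. The request log below is fabricated for illustration.

```python
import statistics

# Hypothetical request log: (start_time, end_time) pairs in seconds.
requests = [(0.00, 0.12), (0.05, 0.31), (0.10, 0.18), (0.40, 0.55),
            (0.60, 0.65), (0.70, 1.02), (0.90, 0.98), (1.10, 1.35)]

# Response time per request, and throughput over the observation window.
latencies = [end - start for start, end in requests]
window = max(end for _, end in requests) - min(start for start, _ in requests)
throughput = len(requests) / window  # completed requests per second

print(f"mean response time: {statistics.mean(latencies):.3f} s")
print(f"max response time:  {max(latencies):.3f} s")
print(f"throughput:         {throughput:.1f} req/s")
```

Note that the two metrics answer different questions: latency describes the experience of one request, throughput the rate of the system as a whole, which is why both appear in the text.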
Transmission Delay: The time it takes to transmit data or signals across the transmission medium (e.g., cable) of a network. Transmission delay is determined by the capacity of the medium and the size of the data being transmitted.
Processing Delay: The time it takes for a system to process, analyze, or respond to requests. Processing delay includes activities such as data processing, computation, and queuing, which occur within the system.
Queuing Delay: The time spent waiting in a queue or buffer before being processed. Queuing delay occurs when multiple requests or data packets arrive at a component faster than they can be handled, resulting in temporary storage or backlog.
Round-Trip Time (RTT): The total time delay between sending a request and receiving the response, including transmission time from the client to the server, processing time at the server, and transmission of the response back to the client.
One-Way Latency: The time delay between sending a request and its arrival, measured either from the client to the server (client-side latency) or from the server to the client (server-side latency).
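A worked example of how the delay components above combine. The packet size, link rate, and the assumed processing and queuing delays are all illustrative; a real network path would also include propagation delay over distance.

```python
# Transmission delay = data size / link capacity.
packet_bits = 12_000          # a 1500-byte packet
link_bps = 10_000_000         # a 10 Mbit/s link

transmission_delay = packet_bits / link_bps   # time to push the bits onto the link
processing_delay = 0.0005                     # assumed per-node processing time
queuing_delay = 0.0020                        # assumed wait in the buffer

one_way = transmission_delay + processing_delay + queuing_delay
rtt = 2 * one_way  # symmetric path assumed; real paths rarely are

print(f"transmission delay: {transmission_delay * 1000:.2f} ms")
print(f"one-way latency:    {one_way * 1000:.2f} ms")
print(f"round-trip time:    {rtt * 1000:.2f} ms")
```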
Speed
Speed is sometimes compared to the horsepower of an engine: the higher the horsepower, the more powerful the engine. Similarly, speed as a performance metric can refer to the rate at which a system completes a specific task or operation. It is a measure of how quickly a system operates, and it is often used in various contexts to evaluate performance. However, it is important to note that "speed" can be a somewhat vague term and is usually measured more precisely depending on the specific context. Here are a few ways speed can be considered as a performance metric:
Processing Speed: This refers to the rate at which a system can process data, perform computations, or execute instructions. It is often measured in terms of operations per second and is influenced by factors such as CPU clock speed, instructions per cycle, and the efficiency of algorithms.
Data Transfer Speed: This refers to the rate at which data can be transferred between components of a system or between systems. It is commonly measured as transfer rate, bandwidth, or throughput and is influenced by factors such as link capacity, latency, and protocol overhead.
Response Speed: This refers to the rate at which a system can respond to requests. It is often measured in terms of response time and reflects the system's responsiveness and efficiency in processing user requests.
the rate at which a u‘which a signal can be transmitted
in hertz (Hz) and represents the spectrum of fre
ansmission.
< 10 transmit data between devices or nodes. It is commonly used to meas
local area networks (LANs), and wide
‘Bandwidth: In storage systems, bandwidth refers to the rate at
from o written to storage devices such as hard drives, soliute Systen:s Performance Evaluation in Selection Problems
Computer system performance evaluation is crucial in selection problems because it enables decision-makers to assess and compare the performance of different computer systems. Here is how performance evaluation plays a role in selection problems:
1. Objective Comparison: Performance evaluation provides an objective basis for comparing different computer systems. By quantifying metrics such as processing speed, memory utilization, throughput, and response time, decision-makers can make informed choices about which system best suits their needs.
2. Identifying Requirements: Before selecting a system, it is important to identify the specific requirements of the intended application. Performance evaluation helps in this process by characterizing the workload at hand. For example, a data-intensive application may demand high processing speed and memory capacity, while a transaction-oriented application may require high throughput.
3. Benchmarking: Benchmarking involves running standardized tests to compare the performance of different computer systems under identical conditions. This allows a fair comparison and helps in identifying strengths and weaknesses. Benchmarks can include synthetic workloads, industry-standard suites, and application-specific simulations.
4. Scalability Assessment: Performance evaluation also considers the scalability of candidate systems, i.e., how well they can handle increasing workloads as demands continue to grow.
Performance Evaluation in Design Problems
Performance evaluation is equally important in design problems, as it helps engineers optimize the system design for efficiency, scalability, and reliability. Here is how performance evaluation plays a role in design problems:
1. Understanding Trade-offs: Performance evaluation helps designers understand the trade-offs between design alternatives. For example, increasing the number of processor cores may improve parallelism and throughput but could also lead to higher power consumption and complexity. Performance evaluation allows designers to make informed trade-offs based on the specific requirements of the system.
2. Performance Modeling: Before implementing a computer system design, engineers often create performance models to predict how the system will behave under various conditions. These models can include factors such as workload characteristics, resource utilization, and system architecture. Performance evaluation validates these models and helps refine them to better reflect real-world behavior.
3. Optimization: Performance evaluation exposes bottlenecks and inefficiencies in the design, allowing engineers to optimize critical components. This could involve redesigning algorithms, optimizing data structures, or fine-tuning hardware configurations. By iteratively evaluating performance improvements, designers can raise overall system performance.
4. Scalability Planning: Designing a system that can scale to accommodate growing workloads is essential. Performance evaluation helps engineers assess the scalability of different design choices and identify potential scalability bottlenecks. This includes evaluating the impact of factors such as data volume, user concurrency, and resource contention on system performance.
5. Resource Allocation: Performance evaluation informs decisions about resource allocation within the system. For example, it helps determine how to distribute processing power, memory, and network bandwidth among different components to maximize performance. By understanding resource usage patterns and requirements, designers can allocate resources more effectively.
6. Reliability and Fault Tolerance: Designing reliable and fault-tolerant systems requires consideration of performance implications.
Evaluating Program Performance
Define Objectives: Establish clear objectives and criteria for success. These objectives may vary depending on the nature of the program, but common goals include minimizing execution time, reducing memory usage, optimizing energy consumption, and improving responsiveness.
Performance Metrics: Selecting appropriate performance metrics is essential for accurately measuring program performance. These metrics may include execution time, memory usage, CPU utilization, and disk I/O operations.
Benchmarking: Benchmarking involves comparing the performance of a program against reference or competing implementations. Benchmark suites and standardized tests provide a consistent methodology for evaluating performance. By benchmarking against established benchmarks or competitors, developers can identify areas for improvement and track progress over time.
Profiling: Profiling tools help developers identify performance bottlenecks and hotspots within their programs. These tools collect data on function call frequencies, memory allocations, and other relevant aspects of program execution. By analyzing profiling data, developers can pinpoint inefficiencies and prioritize optimization efforts.
Scalability Analysis: Evaluating a program's scalability involves assessing how effectively it utilizes additional resources as the workload grows.
Evaluating Software Project Performance
As a software project manager, you need to assess how well your team members and your processes are capable of delivering quality software on time and within budget. This is a crucial process that can help you identify strengths, weaknesses, areas for improvement, and opportunities for recognition and reward. But how do you choose effective methods for evaluating software project performance? In this section we explore some of the most common performance evaluation methods for software project managers and discuss their advantages and disadvantages.
One of the most widely used performance evaluation methods for software project managers is goal-based evaluation. This method involves setting specific, measurable, achievable, relevant, and time-bound (SMART) goals for each team member and the project manager at the start of the project and reviewing them periodically throughout the project lifecycle. Goals can be related to various aspects of the project, such as scope, schedule, budget, quality, stakeholder satisfaction, and personal development. The main advantage of this method is that it provides a clear and objective way of measuring performance and aligning it with the project objectives. The main disadvantage is that it can be challenging to define SMART goals for complex and dynamic software projects that may require changes and adjustments.
Goal setting is a fundamental aspect of assessing my performance. Setting clear goals at the beginning of a project provides a roadmap for success. Throughout the project lifecycle, I continually align my efforts with these goals, staying within established timelines and budget constraints. Meeting or exceeding these objectives quantifies my performance under goal-based evaluation.
Profiling and Performance Tuning
In software engineering, profiling ("program profiling", "software profiling") is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization. Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler (or code profiler). A number of different techniques may be used, such as event-based, statistical, instrumented, and simulation methods.
Performance tuning is the improvement of system performance. This is typically a computer application, but the same methods can be applied to economic markets, bureaucracies, or other complex systems. The motivation for such activity is called a performance problem, which can be real or anticipated. Most systems will respond to increased load with some degree of decreasing performance. A system's ability to accept a higher load is called scalability, and modifying a system to handle a higher load is synonymous with performance tuning. Systematic tuning follows
these steps:
1. Assess the problem and establish numeric values that categorize acceptable behaviour.
2. Measure the performance of the system before modification.
3. Identify the part of the system that is critical for improving the performance. This is called the bottleneck.
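The measure-before-modify discipline in these steps can be sketched as a baseline/candidate comparison. String building is a stand-in workload chosen for this example, and whether the "tuned" version actually wins must be read off the measurement, not assumed.

```python
import timeit

# Baseline implementation: build a report by repeated string concatenation.
def build_report_slow(n):
    out = ""
    for i in range(n):
        out += f"line {i}\n"
    return out

# Candidate optimization: join the parts in one pass.
def build_report_fast(n):
    return "".join(f"line {i}\n" for i in range(n))

# Establish a numeric baseline before modifying anything.
baseline = min(timeit.repeat(lambda: build_report_slow(5_000), number=20, repeat=3))
# Measure the candidate change under identical conditions and compare.
tuned = min(timeit.repeat(lambda: build_report_fast(5_000), number=20, repeat=3))

print(f"baseline: {baseline:.4f} s, tuned: {tuned:.4f} s")
assert build_report_slow(100) == build_report_fast(100)  # behaviour unchanged
```

The final assertion captures a point the steps imply but do not state: a tuning change only counts if the system's observable behaviour is preserved.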