What are Performance Metrics?
Measuring network performance means being able to answer questions like:
Is my network performing well enough to:
– Ensure a successful migration to the cloud?
– Increase the use of Unified Communication solutions for my business?
– Ensure users have the best experience when using web applications?
Device Monitoring refers to monitoring the use of network resources
or network devices using the SNMP protocol: for example, monitoring
the state of a firewall, or determining the CPU or bandwidth usage of
an interface.
Network Performance Monitoring is end-to-end monitoring of the
end-user experience. It differs from traditional monitoring because
performance is monitored from the end-user perspective and is
measured between two points in the network, for example:
The performance between a user, who works in the office, and
the application they use in the company’s data center
The performance between two offices in a network
The performance between the head office and the Internet
The performance between your users and the cloud
If there’s a problem with your Internet connection, you can't just
monitor your devices to find your problem. You need to monitor the
user experience to identify performance issues affecting your Internet
connection.
What is Network Performance?
Network performance monitoring refers to the analysis and review of
network performance as seen by end-users.
There are three important concepts in that definition:
1. Network Analysis and Review:
Before you can analyze and compare network performance
measurements data over time, you must first measure key network
metrics associated with network performance and collect a history of
the data you’ve measured.
2. Measuring Network Performance:
Network Performance refers to the quality of the network. The quality
will be different depending on where in the network the
measurements are taken.
Therefore, your network performance may be great early in the
workday when fewer users are online, and begin to degrade later in
the day when more users log on.
3. The End-User Experience: User experience is the most important
factor when measuring network performance. But just hearing about
the user experience isn't enough!
1. Latency
In a network, latency refers to the measure of time it takes for data to
reach its destination across a network. You usually measure network
latency as a round trip delay, in milliseconds (ms), taking into account
the time it takes for the data to get to its destination and then back
again to its source.
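As an illustration, round-trip latency can be measured by timing a small probe and its reply. The Python sketch below runs a local echo server so the example is self-contained; a real measurement would probe a remote host, the way ping or a monitoring agent does.

```python
# Minimal sketch: measure round-trip latency in milliseconds against a local
# TCP echo server. The loopback server only makes the example self-contained.
import socket
import threading
import time

def run_echo_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(64)
        conn.sendall(data)          # echo the probe straight back

def measure_rtt_ms(host, port):
    start = time.perf_counter()
    with socket.create_connection((host, port)) as s:
        s.sendall(b"probe")
        s.recv(64)                  # wait for the reply before stopping the clock
    return (time.perf_counter() - start) * 1000.0

server = socket.socket()
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

rtt = measure_rtt_ms("127.0.0.1", port)
print(f"round-trip latency: {rtt:.2f} ms")
```

On loopback the result is well under a millisecond; over a real network path it would include the propagation, queuing, and processing delays discussed in this article.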
Measuring round-trip delay is key to understanding network
performance because a computer on a TCP/IP network sends a limited
amount of data to its destination and then waits for an
acknowledgement that the data has arrived before sending any more.
This round-trip delay therefore has a big impact on network
performance.
When measuring latency, consistent delays or odd spikes in delay
time are signs of a major performance issue that can happen for a
variety of reasons. Most delays are actually undetectable from a user’s
perspective and can therefore go unnoticed but can have a huge
impact when using VoIP, or unified communication systems such
as Zoom, Skype, Microsoft Teams and so on.
A network performance monitoring (NPM) solution is a great network
latency monitor because it measures latency and can track and log
these delays to find the source of the problem.
How to Measure Latency
Learn how to measure network latency using Obkio’s Network
Monitoring software to identify network issues & collect data to
troubleshoot.
How Latency Affects Throughput
When learning how to monitor latency, it's important to note that
latency also affects maximum throughput of a data transmission,
which is how much data can be transmitted from point A to point B in
a given time. We’ll be covering throughput in point 4.
But the reason that latency affects throughput is because of TCP
(Transmission Control Protocol). TCP makes sure all data packets
reach their destination successfully and in the right order. It also
requires that only a certain amount of data is transmitted before
waiting for an acknowledgement.
A common analogy of the relationship is to imagine a network path
like pipe filling a bucket with water. TCP requires that once the
bucket is full, the sender has to wait for an acknowledgement to come
back along the pipe before any more water can be sent.
If it takes half a second for water to get down the pipe, and another
half a second for the acknowledgement to come back, this equals a
latency of 1 second. Therefore, TCP would prevent you from sending
any more than the amount of data, or water in this example, that can
travel in any one second period.
Essentially, latency can affect throughput, which is why it’s so
important to know how to check network latency.
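The bucket-and-pipe relationship above can be put into numbers: with a fixed amount of data allowed in flight, maximum throughput is the window size divided by the round-trip time. A minimal Python sketch, where the window size and latencies are illustrative assumptions:

```python
# TCP window/latency relationship: throughput is capped at window size
# divided by round-trip time, regardless of the link's raw capacity.
def max_tcp_throughput_bps(window_bytes, rtt_seconds):
    return (window_bytes * 8) / rtt_seconds   # bits per second

# A 64 KB window with 1 second of round-trip latency:
print(max_tcp_throughput_bps(65_536, 1.0))     # 524288.0 bps, ~0.5 Mbps
# The same window with 10 ms of latency:
print(max_tcp_throughput_bps(65_536, 0.010))   # 52428800.0 bps, ~52 Mbps
```

Note how cutting latency a hundredfold raises the ceiling a hundredfold, even though nothing else changed.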
How to Identify Network Problems & Diagnose Network Issues
Learn how to identify network issues by looking at common
problems, causes, consequences and solutions.
2. Jitter
To put it bluntly, network jitter is your network transmission’s
biggest enemy when using real-time apps such as unified
communications, including IP telephony, video conferencing, and
virtual desktop infrastructure. Simply put, jitter is a variation in delay:
a disruption that occurs while data packets travel across the network.
There are many factors that can cause jitter, and many of these factors
are the same as those that cause delay. One difficult thing about jitter
is that it doesn’t affect all network traffic in the same way.
Jitter can be caused by network congestion. Network
congestion occurs when network devices are unable to send the
equivalent amount of traffic they receive, so their packet buffer fills
up and they start dropping packets. If there is no disturbance on the
network at an endpoint, every packet arrives. However, if the
endpoint buffer becomes full, packets arrive later and later.
If you’ve ever been talking to someone on a video call or other
unified communication system, and suddenly their voice speeds up
significantly, then slows down to catch up, or keeps fluctuating
between the two - you have a jitter problem.
When measuring network jitter, remember that jitter can also be
caused by the type of connection you use. A connection on a shared
medium, such as a cable, is more likely to have higher jitter than a
dedicated connection. So that’s something to keep in mind when
choosing a connection medium.
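As a rough illustration, jitter can be estimated as the average variation between consecutive latency samples. The Python sketch below is a simplification of the interarrival-jitter estimator defined in RFC 3550, and the sample values are made up:

```python
# Simplified jitter estimate: mean absolute difference between
# consecutive latency samples, in milliseconds.
def jitter_ms(latency_samples_ms):
    diffs = [abs(b - a) for a, b in zip(latency_samples_ms, latency_samples_ms[1:])]
    return sum(diffs) / len(diffs)

steady = [45, 46, 45, 44, 45]        # consistent delay: low jitter
spiky  = [45, 120, 30, 150, 40]      # fluctuating delay: high jitter
print(jitter_ms(steady))   # 1.0
print(jitter_ms(spiky))    # 98.75
```

Both connections might show a similar average latency, but only the second one would make a video call stutter.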
How to Measure Jitter
Learn how to measure network jitter using Obkio’s Network
Monitoring software to identify network problems & collect data to
troubleshoot.
3. Packet Loss
Packet loss refers to the number of data packets that were successfully
sent out from one point in a network, but were dropped during data
transmission and never reached their destination.
It’s important for your IT team to measure packet loss to know how
many packets are being dropped across your network to be able to
take steps to ensure that data can be transmitted as it should be.
Knowing how to measure packet loss provides a metric for
determining good or poor network performance.
If you're wondering how to measure packet loss easily, a network
performance monitoring software, like Obkio, uses a synthetic
monitoring tactic which involves generating and measuring synthetic
traffic in order to count the number of packets sent and the number of
packets received.
Packet loss is usually expressed as a percentage of the total number of
sent packets. Often, more than 3% packet loss implies that the
network is performing below optimal levels, but even just 1% packet
loss might be enough to affect VoIP quality.
Packet loss is something that is determined over a period of time. If
you record 1% packet loss over 10 minutes, it can suggest that you
have 1% during the whole 10 minutes, but it can also be that you have
10% packet loss over 1 min and then 0% over the remaining 9
minutes. How can you figure that out? Well that’s why Obkio
calculates packet loss every minute, so you always get an up-to-date
and precise measure of packet loss.
How to Measure Packet Loss
Learn how to measure packet loss using Obkio’s Network Monitoring
software to proactively identify problems in your network & collect
data to troubleshoot.
4. Throughput
Throughput refers to the amount of data passing through the network
from point A to point B in a determined amount of time. When
referring to communication networks, throughput is the rate of data
that was successfully delivered over a communication channel.
Measuring network throughput is usually done in bits per second
(bit/s or bps).
"Internet Connection Speed" or "Internet Connection Bandwidth" are
general terms used by internet companies to sell you high-speed
internet, but what they refer to by default is throughput: the actual
rate of packet delivery over a specific medium.
That’s why the best way to learn how to measure network throughput
is to use Speed Tests.
Measuring Network Throughput with Speed Tests
A Speed Test is the best solution for measuring network throughput to
give you an idea of how fast your Internet connection is right now.
Essentially, a speed test measures network speed by sending out the
most amount of information possible throughout your network, and
monitoring how long it takes to get delivered to its destination.
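At its core, that calculation is just data delivered divided by elapsed time. A minimal Python sketch, with illustrative transfer numbers:

```python
# Throughput = data delivered / time taken, expressed in megabits per second.
def throughput_mbps(bytes_transferred, seconds):
    return (bytes_transferred * 8) / seconds / 1_000_000

# e.g. a speed test that delivered 100 MB in 8 seconds:
print(throughput_mbps(100_000_000, 8.0))   # 100.0 Mbps
```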
A network performance monitoring solution like Obkio allows you to
run speed tests manually, or schedule speed tests between monitoring
Agents, or a group of multiple Agents to ensure your speed or
throughput is constantly being monitored.
Obkio also allows you to perform speed tests with multiple TCP
sessions at the same time, which makes for the most accurate speed
test results.
How to Monitor Network Speed
Learn how to monitor network speed with Obkio Network Monitoring
tool for end-to-end network monitoring and continuous network speed
monitoring.
5. Packet Duplication
Put simply, packet duplication refers to data packets being duplicated
somewhere in the network and then received twice at their destination.
Often, if the source believes a data packet was not transmitted
correctly because of packet loss, it may retransmit that packet. The
destination may have already received the first packet, and will then
receive a second, duplicate packet.
Once again, in the example of a video chat, packet duplication may
cause you to hear as though someone is repeating words or sentences
as they’re speaking to you - which isn’t a very pleasant experience.
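A monitoring tool can detect duplication by numbering its synthetic probe packets and counting repeats. A hypothetical Python sketch, with invented sequence numbers:

```python
# Count duplicated packets by sequence number: any sequence number
# seen more than once indicates a duplicate arrival.
from collections import Counter

received_seqs = [1, 2, 3, 3, 4, 5, 5, 5]   # sequence numbers as they arrived
counts = Counter(received_seqs)
duplicates = {seq: n - 1 for seq, n in counts.items() if n > 1}

print(duplicates)   # {3: 1, 5: 2} -> packet 3 arrived twice, packet 5 three times
```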
6. Packet Reordering
Packet reordering is also pretty self-explanatory: it occurs when data
packets arrive at their destination in the wrong order. This can happen
for various reasons, such as multi-path routing, route fluttering, and
incorrect QoS queue configuration.
Packet reordering is also very simple to spot. If you’re talking to
someone over a video call and all of a sudden the words in their
sentences sound scrambled and out of order, it may be because the
data arrived in the wrong sequence.
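Reordering can likewise be spotted by checking sequence numbers on arrival. A small Python sketch with invented sequence numbers:

```python
# Count reordered packets: a packet is out of order when it arrives
# with a lower sequence number than one already received.
def count_reordered(received_seqs):
    reordered, highest = 0, -1
    for seq in received_seqs:
        if seq < highest:
            reordered += 1        # arrived after a later packet
        else:
            highest = seq
    return reordered

print(count_reordered([1, 2, 4, 3, 5, 7, 6]))  # 2 (packets 3 and 6 arrived late)
```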
Once again, a network performance monitoring solution will be able
to catch these problems, right as they happen. Having continuous
monitoring of your network, whether from your head office, data
center, or home office, means that you’ll catch these network
issues long before you’re on an important video call with a client who
can’t understand a word you’re saying because of packet loss or
packet reordering.
Top 7 Reasons Why You Should Monitor Network Performance
Learn the 7 reasons to monitor network performance & why network
performance monitoring is important to troubleshoot issues &
optimize end-user experience.
7. User Quality of Experience
Now you may be wondering how all these network performance
metrics could possibly play a part in how to measure network
performance. All the metrics we mentioned, in addition to user
requirements and user perceptions, play a role in determining the
perceived performance of your network.
Each metric on its own gives you an idea of how your infrastructure is
performing, but you need to look at all of these factors to give a true
measurement of network performance.
The best way to measure and quantify user experience is by
measuring User Quality of Experience (QoE). Quality of Experience
(QoE) allows you to measure performance from the end-user
perspective and is essentially the perception of the user of the
effectiveness and quality of the system or service. In fact, users base
their opinions about the network exclusively on their perception of
QoE.
Measuring QoE is a culmination of all these network metrics we
discussed, as well as the ability of the network to meet the user’s
expectations. That’s basically what network performance is all about.
Other network performance metrics you can use to measure QoE
include:
8. MOS Score
The MOS score was created by the ITU, a United Nations agency that
seeks to facilitate international connectivity in communications
networks, as a metric that could be measured and understood by all.
MOS was originally developed for traditional voice calls but has been
adapted to Voice over IP (VoIP) in the ITU-T PESQ P.862 standard,
which defines how to calculate a MOS score for VoIP calls based on
multiple factors, such as the specific codec used for the call.
The MOS score is a rating from 1 to 5 of the perceived quality of a
voice call, 1 being the lowest score and 5 the highest for excellent
quality.
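As a hedged illustration of how such a score can be derived, the Python sketch below uses the R-factor-to-MOS mapping from the ITU-T G.107 E-model, with deliberately simplified, assumed penalties for delay and packet loss; a real implementation accounts for many more impairment factors (codec, echo, equipment):

```python
# R-factor to MOS mapping from the ITU-T G.107 E-model.
def r_to_mos(r):
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# Toy MOS estimate: the delay and loss penalties below are illustrative
# assumptions, NOT the full standard's impairment model.
def estimate_mos(one_way_delay_ms, loss_percent):
    r = 93.2                          # default base R-factor
    r -= 0.024 * one_way_delay_ms     # simplified delay impairment (assumed)
    r -= 2.5 * loss_percent           # simplified loss impairment (assumed)
    return round(r_to_mos(r), 2)

print(estimate_mos(20, 0.0))   # a healthy network: MOS around 4.4
print(estimate_mos(150, 5.0))  # latency + loss: MOS drops below 4
```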
You can learn more about MOS score in our article on Measuring
VoIP Quality with MOS.
9. VoIP Quality
VoIP quality refers to the quality of voice communications and
multimedia sessions carried over Internet Protocol (IP) networks,
such as the Internet.
Obkio’s network performance monitoring software calculates VoIP
quality for each network monitoring session every minute. Obkio
measures VoIP quality with a MOS score even if there is no ongoing
call, providing proactive monitoring rather than relying on a packet
capture solution.
Don't wait for bad user experience complaints to start network
troubleshooting! This Quality of Experience (QoE) metric helps IT
pros understand the complex impact of network performance on
VoIP.
Nowadays, the best performance monitoring tools can synthesize a
rather precise evaluation of the QoE score based on their
measurement of all the previously discussed performance-affecting
metrics!
How to Measure VoIP Quality & MOS Score (Mean Opinion Score)
Learn how to measure VoIP Quality using MOS Score (Mean
Opinion Score) & Obkio’s VoIP monitoring solution to identify poor
VoIP Quality issues & dropped calls.
Start Measuring Network Performance
As you can see, there are a lot of factors to keep in mind when
choosing how to measure network performance, and all of them need
to be monitored simultaneously to reach a concrete conclusion.
Lucky for you, the number one key to learning how to measure
network performance is finding a solution that measures throughput,
latency, packet loss, jitter, and more, to give you a simple and quick
overview of your network.
Obkio Network Monitoring is your personal network admin that
continuously measures network performance metrics in real time to
help you understand how they’re affecting your network’s performance.
As soon as a problem occurs, with any of the metrics being measured,
you’ll be notified - even before it reaches the end user.
Start measuring network performance for free!
As a reminder, you can check out the other articles in
our Introduction to Network Monitoring series for a complete
overview of everything you need to know about network monitoring:
Top 7 Reasons Why You Should Monitor Network Performance
Fault Monitoring vs. Network Performance Monitoring
How to Measure Network Performance: 9 Network Metrics (this
article)
How to Identify Network Problems & Diagnose Network Issues
Bandwidth requirements vary from one network to another, and
understanding how to calculate bandwidth properly is vital to building
and maintaining a fast, functional network.
As most network administrators can attest, bandwidth is one of the
more important factors in the design and maintenance of a functional
LAN, WAN or wireless network. Unlike a server, which can be
configured and reconfigured throughout the life of the network,
bandwidth is a network design element usually optimized by figuring
out the correct formula for your network from the outset.
Wondering how to calculate bandwidth requirements when designing
the network? What specific considerations apply? The answers to
these important questions follow.
Understanding bandwidth
The term bandwidth refers to the data rate supported by the network
connection or the interfaces that connect to the network. It represents
both volume and time: the amount of data that can be transmitted
between two points in a set period. Data coming into the network is
known as ingress traffic, and data leaving the network is called egress
traffic. Bandwidth is usually expressed in bits per second or,
sometimes, bytes per second.
Network bandwidth represents the capacity of the network
connection, though it's important to understand the distinction
between theoretical throughput and real-world results when figuring
out the right bandwidth formula for your network. For example,
a 1000BASE-T Gigabit Ethernet (GbE) network, which uses
unshielded twisted pair cables, can theoretically support 1,000
Mbps, but this level can never be achieved in practice due to hardware
and systems software overhead.
Bandwidth vs. speed: 2 different measurements
One point to consider when thinking about how to calculate
bandwidth needs on your network is this: Bandwidth should not be
confused with throughput, which refers to speed. While high-
bandwidth networks are often fast, that is not always the case.
A helpful metaphor when thinking about bandwidth is cars on a
highway:
A high-bandwidth network is like a six-lane highway that can fit
hundreds of cars at any given moment.
A low-bandwidth network is like a single-lane road in which one
car drives directly behind another.
Although the large highway is likely to move vehicles faster, rush-
hour traffic can easily bring cars and trucks to a standstill. Or,
perhaps, cars can't get onto the highway quickly because it's clogged
with large delivery trucks that take up a lot of space on the road.
Similarly, even a high-bandwidth network can run slowly in the face
of problems, such as congestion and bandwidth-hungry applications.
What happens if you miscalculate your bandwidth needs?
These points make calculating bandwidth allowances and
requirements a challenge, yet the consequences of getting the
bandwidth formula wrong are considerable. If you don't procure
enough and hit your bandwidth limit, you all but guarantee the
network will run slowly. Yet, significantly overprovisioning
bandwidth can be cost-prohibitive for most enterprises.
So, how do you determine the right formula that will meet your
bandwidth requirements? The process begins with asking the right
questions: What applications are users running, and what is
the performance service-level agreement for these applications? Some
network managers are only concerned with how many users are on a
virtual LAN. But, to determine actual bandwidth usage, what you
need to know is what the users will be doing on the network.
It's possible that 200 users will cause less of a bottleneck than a group
of three users who really beat the heck out of the network because of a
funky client-server application or extensive use of a bandwidth-heavy
service, like high-definition video conferencing.
Coming up with a formula for how to calculate bandwidth
Calculating bandwidth requirements has two basic steps:
1. Determine the amount of available network bandwidth.
2. Determine the average utilization required by the specific
application.
Both of these figures should be expressed in bytes per second.
Consider the following formula: A 1 GbE network has 125 million
Bps of available bandwidth. This is computed by taking the number of
bits -- in a 1 GbE network, that would be 1 billion -- and dividing it
by eight to determine the bytes:
1,000,000,000 bps / 8 = 125,000,000 Bps
After determining the network's bandwidth, assess how much
bandwidth each application is using. You can use a network
analyzer to detect the number of bytes per second the application
sends across the network.
To do this, follow these steps:
1. Enable the cumulative bytes column of your network analyzer.
2. Capture traffic to and from a test workstation running the
application.
3. In the decode summary window, mark the packets at the beginning
of the file transfer.
4. Follow the timestamp down to one second later, and then look at
the cumulative bytes field.
How to interpret the results of your bandwidth calculations
If you determine that your application is transferring data at 200,000
Bps, then you have the information to perform the calculation:
125,000,000 Bps / 200,000 Bps = 625 concurrent users. In this case,
the network will be fine even with several hundred concurrent users.
Look what would happen, though, if you had a 100 Mbps network:
12,500,000 Bps / 200,000 Bps = 62.5 concurrent users. You would
then have a network that couldn't support more than approximately 62
users running the application concurrently. Knowing the formula to
calculate bandwidth is extremely important to network administrators.
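Both calculations follow the same two-step formula, sketched here in Python. Note that a 100 Mbps link yields 12,500,000 Bps under the same decimal convention used for the 1 GbE example:

```python
# Step 1: convert the link's bit rate to bytes per second.
def bytes_per_second(bits_per_second):
    return bits_per_second / 8

# Step 2: divide available bandwidth by one application's per-user byte rate.
def concurrent_users(link_bps, app_bytes_per_sec):
    return bytes_per_second(link_bps) / app_bytes_per_sec

print(concurrent_users(1_000_000_000, 200_000))  # 1 GbE    -> 625.0 users
print(concurrent_users(100_000_000, 200_000))    # 100 Mbps -> 62.5 users
```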
Here are some final recommendations:
Capture the data in 10-second spurts, and then do the division.
Check multiple workstations to ensure the number is reflective of
the general population.
Determine how many concurrent users you will have.
Latency
Latency or network delay is the overall amount of time it takes for
information to be transmitted from the source to the destination in a
data network.
Latency can affect the interaction between the participants on the
call. If there is high latency, the caller and callee may begin talking
over one another or they may believe the other end has disconnected.
The highest recommended latency is 150 ms; beyond that, call quality
may be impacted.
How is Bandwidth Associated with Latency
The team at Bandwidth is obsessed with call quality. That is why we
work hard every day to improve our communications services. With
over a decade in the telecom and communications software industry,
Bandwidth is constantly innovating and improving our network to
provide our customers with the highest quality services. The
innovations that Bandwidth’s team creates aim to make quality issues
like latency a thing of the past.
How Bandwidth Works to Prevent Latency
Since Bandwidth owns and operates one of the largest All-IP Voice
Networks in the nation, we are able to quickly and directly address
any issues that might arise such as latency. When our customers
submit a support ticket, Bandwidth’s team doesn’t have to reach out
to a middleman, but is instead able to go directly to the source.
This means our customers can experience faster resolution times while
directly collaborating with Bandwidth’s support team to improve our
service offerings. At Bandwidth, we believe in constantly striving to
improve our services to create the best possible communication
experience for every end user. With industry leading technology and
a world class team, our goal is to master communications software so
you can focus on communicating what matters most to you.
Terms Related to Latency
Voice Over Internet Protocol (VoIP)
All-IP Voice Network
Termination/Outbound Calling
Definition
Latency is the delay between a user’s action and a web
application’s response to that action, often referred to in networking
terms as the total round trip time it takes for a data packet to travel.
Overview
Latency is generally measured in milliseconds (ms) and is
unavoidable due to the way networks communicate with each other. It
depends on several aspects of a network and can vary if any of them
are changed.
There are four main components that affect network latency:
1. Transmission medium: The physical path between the start point
and the end point. The type of medium can impact latency. For
instance, old copper cable-based networks have a higher latency than
modern optic fibers.
2. Propagation: Latency depends on the distance between the two
communicating nodes, so the further apart they are, the higher the
latency. Theoretically, the latency of a packet making a round trip
across the world is 133ms. In actuality, such a round trip takes
longer, though latency is decreased when direct connections through
network backbones are achieved.
3. Routers: The efficiency with which routers process incoming data has a
direct impact on latency. Router-to-router hops can increase latency.
4. Storage delays: Accessing stored data can increase latency as the
storage network may take time to process and return information.
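The propagation component can be computed directly as distance divided by signal velocity. A small Python sketch, using the usual approximations of ~300,000 km/s for light in a vacuum and roughly two-thirds of that in optical fiber:

```python
# Propagation delay = distance / signal velocity, reported in milliseconds.
def propagation_delay_ms(distance_km, velocity_km_per_s):
    return distance_km / velocity_km_per_s * 1000

earth_circumference_km = 40_075
print(propagation_delay_ms(earth_circumference_km, 300_000))  # ~133.6 ms in a vacuum
print(propagation_delay_ms(earth_circumference_km, 200_000))  # ~200.4 ms in fiber
```

This is where the theoretical 133 ms round-the-world figure comes from; real paths are longer and add router hops, which is why measured latency is higher.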
How to reduce latency
Latency can be reduced by addressing the aforementioned
components and ensuring that they are working correctly. It can also
be reduced by using dedicated networks that streamline the network
path and provide direct communication between nodes.
Content Delivery Network (CDN) providers such as StackPath
provide customers with private networks that allow them to bypass
the public internet. These private networks reduce latency by
providing more efficient paths for data packets to travel on.
How Latency Works
Let’s look into how latency actually works and how, as a user, you’re
usually impacted by it. Consider that you are buying a product
through an online shop and you press the “Add to Cart” button for a
product that you like.
The chain of events that occur when you press that button are:
1. You press the “Add to Cart” button.
2. The browser identifies this as an event and initiates a request to the
website’s servers. The clock for latency starts now.
3. The request travels to the site’s server with all the relevant
information.
4. The site’s server acknowledges the request and the first half of the
latency is completed.
5. The request gets accepted or rejected and processed.
6. The site’s server replies to the request with relevant information.
7. The request reaches your browser and the product gets added to your
cart. With this, the latency cycle is completed.
The time it takes for all these events to complete is known as latency.
Latency vs bandwidth vs throughput
The performance of your network and application depend on latency,
bandwidth, and throughput, but it’s important not to confuse one with
the other.
Bandwidth is the amount of data that can pass through a network at
any given time. Consider bandwidth as how narrow or wide a pipe is.
A wider pipe allows for more data to be pushed through. Throughput,
on the other hand, is the amount of data that can be transferred over a
given period of time.
An efficient network connection is composed of low latency and high
bandwidth. This allows for maximum throughput. The bandwidth can
only be increased by a finite amount as latency will eventually create
a bottleneck and limit the amount of data that can be transferred over
time.
Examples of Latency
High latency can adversely affect the performance of your network
and greatly reduce how quickly your application communicates with
users. A content delivery network allows customers to reach their
users in the most efficient manner possible. These delivery channels
greatly reduce the network lag between the user and the application.
You can check the network latency of your internet connection with
any website by passing its web address or IP address in the command
prompt on a Windows or Mac. Here is an example of the command
prompt on Windows:
C:\Users\username>ping www.google.com
Pinging www.google.com [172.217.19.4] with 32 bytes of data:
Reply from 172.217.19.4: bytes=32 time=47ms TTL=52
Reply from 172.217.19.4: bytes=32 time=45ms TTL=52
Reply from 172.217.19.4: bytes=32 time=47ms TTL=52
Reply from 172.217.19.4: bytes=32 time=43ms TTL=52
Ping statistics for 172.217.19.4:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 43ms, Maximum = 47ms, Average = 45ms
Here you can see the result of pinging www.google.com. The
statistics show that the average round-trip time between the given PC
and Google’s network is 45ms.
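Those statistics can be recomputed by parsing the reply lines, for example with a short Python snippet:

```python
# Extract the time=NNms values from ping's reply lines and recompute
# the minimum, maximum, and average round-trip times.
import re

ping_output = """
Reply from 172.217.19.4: bytes=32 time=47ms TTL=52
Reply from 172.217.19.4: bytes=32 time=45ms TTL=52
Reply from 172.217.19.4: bytes=32 time=47ms TTL=52
Reply from 172.217.19.4: bytes=32 time=43ms TTL=52
"""

times = [int(m) for m in re.findall(r"time=(\d+)ms", ping_output)]
print(min(times), max(times), sum(times) / len(times))
# 43 47 45.5 -- Windows ping truncates the average to a whole 45ms
```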
Key Takeaways
Latency is the time it takes for a data packet to travel from the sender
to the receiver and back to the sender.
High latency can bottleneck a network, reducing its performance.
You can reduce latency in your web applications by using a CDN and
a private network backbone to transfer data.
Bandwidth and Throughput are frequently used terms in
telecommunications and sometimes we tend to use them as
synonyms, but there's a subtle difference between these two
terms.
Bandwidth refers to the data capacity of a channel. It is defined
as the total amount of data which can be transferred over a
network in a specific period of time. Throughput, on the other
hand, refers to the exact measurement of data transferred in a
specific time period. It is also termed as "effective data rate" or
"payload rate". Every network connection has a throughput,
which describes how many bits per second the link actually
transmits across the network.
Read through this article to find out more about Bandwidth and
Throughput and how they are different from each other.
What is Bandwidth?
Bandwidth is the quantity of data that can be transferred over a
network in a given amount of time. It refers to the
network/transmission medium's data carrying capability. In
computing, the maximum pace of data flow via a particular
route is known as the bandwidth. Network bandwidth, data
bandwidth, and digital bandwidth are all examples of
bandwidth.
Bandwidth also refers to the frequency range between the
lowest and highest attainable frequency. Analog signal
bandwidth is measured in Hertz.
It is always better to have a wide bandwidth because it enables
more users/subscribers to access the network at the same time
with less traffic or network congestion.
What is Throughput?
Throughput is the measurement of the amount of data being
transmitted across a network, interface, or channel in a given
length of time. Throughput is also known as "effective data
rate" or "payload rate". In general, throughput refers to the pace
at which something is produced or processed.
Throughput, also known as "network throughput", is the rate of
successful message transmission through a communication
channel in communication networks such as Ethernet or packet
radio. The data included in these messages may be transmitted
through a physical or logical link, or it may transit through a
network node.
The most common unit of measurement for throughput is bits
per second (bit/s or bps), although it may also be measured in
data packets per second (p/s or PPS) or data packets per time
slot.
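The relationship between bits per second and packets per second can be sketched in Python; the 1500-byte packet size below is an illustrative assumption (a typical Ethernet frame), not something fixed by the definition:

```python
def bps_to_pps(bits_per_second: float, packet_size_bytes: int = 1500) -> float:
    """Convert a bit rate to a packet rate, assuming fixed-size packets."""
    bits_per_packet = packet_size_bytes * 8
    return bits_per_second / bits_per_packet

# 100 Mbps carried as 1500-byte packets:
print(round(bps_to_pps(100e6)))  # 8333 packets per second
```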
The sum of the data rates delivered to all terminals in a network
is known as its "system throughput" or "aggregate throughput".
Throughput is essentially equivalent to digital bandwidth
consumption. It can be modeled mathematically using queueing
theory, where the load in packets per time unit is the arrival
rate and the throughput is the departure rate.
If the bandwidth of a channel is 100 Mbps but its throughput is
50 Mbps, the channel can carry at most 100 megabits per second
but is actually carrying only 50 megabits per second.
In practice, throughput is almost always less than the
bandwidth; in the best-case scenario, throughput equals the
bandwidth.
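The 100 Mbps / 50 Mbps example can be checked with a small, hypothetical helper that computes channel utilization as the ratio of throughput to bandwidth:

```python
def utilization(throughput_bps: float, bandwidth_bps: float) -> float:
    """Fraction of a channel's capacity actually carrying data."""
    if throughput_bps > bandwidth_bps:
        raise ValueError("throughput cannot exceed bandwidth")
    return throughput_bps / bandwidth_bps

# The channel above: 100 Mbps of bandwidth, 50 Mbps of throughput.
print(f"{utilization(50e6, 100e6):.0%}")  # 50%
```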
Difference between Bandwidth and Throughput
The following table highlights the major differences between
Bandwidth and Throughput.
Definition
– Bandwidth is the maximum quantity of data that can be
transmitted through a channel in a specific period of time.
– Throughput is the actual measurement of data moving across a
medium at any given point in time.
Measurement Unit
– Bandwidth is measured in bits per second (Hertz for analog
signals).
– Throughput is measured in bits per second.
OSI Layer
– Bandwidth is a property of the Physical Layer of the OSI model.
– Throughput can be measured at any layer of the OSI model.
Dependency
– Bandwidth does not depend on latency.
– Throughput is dependent on latency.
Impact
– Bandwidth is not impacted by physical obstructions.
– Throughput is highly impacted by external interference,
network devices, and transmission errors.
Analogy
– Bandwidth is the maximum rate at which water can come out of
a tap in a particular time frame.
– Throughput is the water actually flowing out of the tap in that
time frame.
Conclusion
From the above discussion, we can conclude that Bandwidth is
the total capacity of a channel, while Throughput is the actual
volume of data passing through a channel in a specific time
frame.
Bandwidth delay product is a measurement of how many bits can fill up a network
link. It gives the maximum amount of data that can be transmitted by the sender at a
given time before waiting for acknowledgment. Thus it is the maximum amount of
unacknowledged data.
Measurement
Bandwidth delay product is calculated as the product of the link capacity of the
channel and the round-trip delay time of transmission.
The link capacity of a channel is the number of bits transmitted per second. Hence,
its unit is bps, i.e. bits per second.
The round-trip delay time is the sum of the time taken for a signal to be transmitted
from the sender to the receiver and the time taken for its acknowledgment to reach
the sender from the receiver. The round-trip delay includes all propagation delays
in the links between the sender and the receiver.
The unit of bandwidth delay product is bits or bytes.
Example
Consider that the link capacity of a channel is 512 Kbps and the round-trip delay
time is 1000 ms.
Bandwidth delay product = 512 × 10³ bits/sec × 1000 × 10⁻³ sec
= 512,000 bits = 64,000 bytes = 62.5 KB
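The worked example above can be reproduced with a short sketch (the function name is illustrative):

```python
def bandwidth_delay_product(link_capacity_bps: float, rtt_seconds: float) -> float:
    """Maximum amount of unacknowledged data in flight, in bits."""
    return link_capacity_bps * rtt_seconds

# 512 Kbps link with a 1000 ms round-trip delay:
bdp_bits = bandwidth_delay_product(512e3, 1.0)
print(bdp_bits)             # 512000.0 bits
print(bdp_bits / 8)         # 64000.0 bytes
print(bdp_bits / 8 / 1024)  # 62.5 KB
```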
Long Fat Networks
A long fat network (LFN) is a network with a high bandwidth delay product,
greater than 10⁵ bits.
Ultra-high-speed LANs (local area networks) are one example of an LFN. Another
example is WANs running over geostationary satellite connections.
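A link can be classified against the 10⁵-bit threshold with a small check; the link speeds and round-trip times below are hypothetical figures chosen only to illustrate the two cases:

```python
LFN_THRESHOLD_BITS = 1e5  # threshold from the definition above

def is_long_fat_network(link_capacity_bps: float, rtt_seconds: float) -> bool:
    """True when the link's bandwidth delay product exceeds 10^5 bits."""
    return link_capacity_bps * rtt_seconds > LFN_THRESHOLD_BITS

# Hypothetical geostationary satellite WAN: 10 Mbps, ~500 ms RTT
print(is_long_fat_network(10e6, 0.5))      # True  (BDP = 5,000,000 bits)
# Hypothetical office LAN: 100 Mbps, 0.5 ms RTT
print(is_long_fat_network(100e6, 0.0005))  # False (BDP = 50,000 bits)
```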
What is Goodput?
Goodput is the application-level throughput: the number of useful bits per
unit of time forwarded by the network from a certain source address to a
certain destination, excluding protocol overhead, retransmissions, and so
forth. It is also described as the bandwidth of the useful packets at the
receiver side, called the effective receiving rate, or as the ratio of the
number of packets successfully delivered to the number of packets actually
transmitted.
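As a minimal sketch of the "useful bits per unit of time" idea, the following computes goodput for a hypothetical file transfer (the figures are invented for illustration):

```python
def goodput_bps(useful_payload_bits: int, elapsed_seconds: float) -> float:
    """Application-level throughput: useful bits delivered per second,
    excluding protocol headers and retransmitted data."""
    return useful_payload_bits / elapsed_seconds

# Hypothetical transfer: a 1 MB file delivered in 2 seconds.
# Only the file's own bytes count toward goodput, no matter how many
# header or retransmitted bytes crossed the wire.
file_bits = 1_000_000 * 8
print(goodput_bps(file_bits, 2.0))  # 4000000.0 bits/s, i.e. 4 Mbps
```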
Propagation delay is defined as the flight time of packets over the
transmission link and is limited by the speed of light. For example, if the
source and destination are in the same building at a distance of 200 m, the
propagation delay will be about 1 μs. If they are located in different
countries at a distance of 20,000 km, however, the delay is on the order of
0.1 sec. These values represent physical limits and cannot be reduced.
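Both distance examples follow directly from dividing distance by signal speed; the sketch below uses the vacuum speed of light as an upper bound (in fiber the signal travels closer to 2 × 10⁸ m/s, so real delays are somewhat larger):

```python
SIGNAL_SPEED_M_PER_S = 3e8  # speed of light in vacuum (upper bound)

def propagation_delay_s(distance_m: float,
                        speed_m_per_s: float = SIGNAL_SPEED_M_PER_S) -> float:
    """Flight time of a signal over the given distance, in seconds."""
    return distance_m / speed_m_per_s

print(propagation_delay_s(200))         # same building: ~6.7e-07 s (about 1 us)
print(propagation_delay_s(20_000_000))  # 20,000 km: ~0.067 s (order of 0.1 s)
```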