Elements of Computer Networking - 1
Computer Networking
An Integrated Approach
By
Narasimha Karumanchi
Dr. A. Damodaram
Dr. M. Sreenivasa Rao
All rights reserved. No part of this book may be reproduced in any form or by any
electronic or mechanical means, including information storage and retrieval systems,
without written permission from the publisher or author.
Acknowledgements
To my parents: it is impossible to thank you adequately for everything you have
done, from loving me unconditionally to raising me in a stable household, where your
persistent efforts instilled traditional values and taught your children to celebrate and
embrace life. I could not have asked for better parents or role models. You showed me
that anything is possible with faith, hard work and determination.
This book would not have been possible without the help of many people. I would like to
thank them for their efforts in improving the end result. Before we do so, however, I
should mention that I have done my best to correct the mistakes that the reviewers
have pointed out and to accurately describe the protocols and mechanisms. I alone am
responsible for any remaining errors.
First and foremost, I would like to express my gratitude to many people who saw me
through this book, to all those who provided support, talked things over, read, wrote,
offered comments, allowed me to quote their remarks and assisted in the editing,
proofreading and design. In particular, I would like to thank the following individuals.
, Senior Consultant, Juniper Networks Pvt. Ltd.
ℎ, Lecturer, Nagarjuna Institute of Technology and Sciences, MLG
- Narasimha Karumanchi
M. Tech, Founder, CareerMonk Publications
Other Titles by Narasimha Karumanchi
Data Structures and Algorithms Made Easy
Data Structures and Algorithms for GATE
Data Structures and Algorithms Made Easy in Java
Coding Interview Questions
Peeling Design Patterns
IT Interview Questions
Data Structure and Algorithmic Thinking with Python
Preface
Dear Reader,
Please hold on! We know many people do not read the preface, but we strongly
recommend that you at least go through the preface of this book.
There are hundreds of books on computer networking already flooding the market. The
reader may naturally wonder why another book on computer networking is needed.
This book assumes you have basic knowledge of computer science. The main objective
of the book is to provide you with the concepts of computer networking and related
interview questions. Before writing the book, we set ourselves the following goals:

The book should be written in simple language, so that readers without any
background in computer networking are able to understand it.
The book should present the concepts of computer networking in a clear and
straightforward manner, with step-by-step explanations.
The book should provide enough examples so that readers get a better
understanding of the concepts and also find it useful for interviews.
That is, the book should cover frequently asked interview questions.
Please remember, the books available in the market lack one or more of these goals.
Based on our teaching and industry experience, we thought of writing this book to
achieve these goals in a simple way. A 3-stage formula is used in writing this book:

Concepts + Problems + Interview Questions

We used very simple language, such that even a school-going student can understand
the concepts easily. Once a concept is discussed, it is then interlaced into problems.
The solution to each and every problem is well explained.
Finally, interview questions with answers on every concept are covered. All the interview
questions in this book were collected from various interviews conducted by top software
development companies.
This book talks about networks in everyday terms. The language is friendly; you don’t
need a graduate education to go through it.
As a job seeker, if you read the complete book with good understanding, we are sure
you will challenge the interviewers; that is the objective of this book.
This book is very useful for engineering and master's students during their academic
courses. All the chapters of this book contain theory and related problems. If you read
it as a student preparing for competitive exams (e.g. GATE), the content of this book
covers all the required topics in full detail.
Note that at least two readings of this book are needed to get a full understanding of
all the topics. In later readings, you can go directly to any chapter and refer to it.
Even though careful reviews were done to correct errors, due to human tendency there
could be some minor typos in the book.
If any such typos are found, they will be updated on the book's website. We request
you to monitor the site for corrections, new problems and solutions, and to send us
your valuable suggestions.
Wish you all the best. We are sure that you will find this book useful.
Narasimha Karumanchi
M. Tech, Founder, CareerMonk Publications

Dr. A. Damodaram
M. Tech. (C.S.E.), Ph.D. (C.S.E.)
Director, School of IT

Dr. M. Sreenivasa Rao
M. Tech. (C.S.E.), Ph.D. (C.S.E.)
Dean, MSIT Programme, Director, School of IT
Table of Contents
2. Introduction
 2.1 What is a Computer Network?
 2.2 Basic Elements of Computer Networks
 2.3 What is an Internet?
 2.4 Fundamentals of Data and Signals
 2.5 Network Topologies
 2.6 Network Operating Systems
 2.7 Transmission Medium
 2.8 Types of Networks
 2.9 Connection-oriented and Connectionless Services
 2.10 Segmentation and Multiplexing
 2.11 Network Performance
 2.12 Network Switching
 Problems and Questions with Answers
Chapter 1: Organization of Chapters
1.1 Why are Computer Networks Needed?
Computer networking is one of the most exciting and important technical fields of our
time. Information and communication are two of the most important strategic issues for
the success of every enterprise.
Necessity is the mother of invention, and whenever we really need something, humans
will find a way to get it. Let us imagine that we did not have computer networking in
our lives. People could communicate with each other only through phone or fax.
Sharing a printer between many computers would be difficult, as the printer cable
must be attached to whichever computer requires printing. Removable media such as
diskettes or thumb drives would have to be used to share files or to transfer files from
one machine to another.
Just imagine you need to transfer a file to one thousand computers. To address these
problems, computer networks are needed. Computer networks allow the user to
access and share resources, whether from the same organization or from
other enterprises or public sources. Computer networks provide faster communication
than other facilities. Many applications have also been developed to enhance
communication; some examples are email, instant messaging, and Internet telephony.
Also, TCP provides a mechanism for error correction. In TCP, error detection and
correction are achieved by:
1. Checksum
2. Acknowledgement
3. Timeout and retransmission
In this chapter, we discuss several error control algorithms such as Stop-and-Wait
ARQ, Go-Back-N ARQ, Selective Reject ARQ, etc.
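As a sketch of the first of these mechanisms, the checksum used by TCP is a 16-bit one's-complement sum of the segment's 16-bit words (in the style of RFC 1071); the function name here is ours:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words, then complemented."""
    if len(data) % 2:
        data += b"\x00"                # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

# A receiver summing the data together with the transmitted checksum
# gets zero when no bit errors occurred:
payload = b"\x00\x01\xf2\x03"
csum = internet_checksum(payload)
assert internet_checksum(payload + csum.to_bytes(2, "big")) == 0
```

When the verification sum is nonzero, the receiver discards the segment and the sender's timeout-and-retransmission mechanism recovers it.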
11. Flow Control: TCP provides a way for the receiver to control the amount of
data sent by the sender. Nodes that send and receive TCP data segments can
operate at different data rates because of differences in CPU and network
bandwidth. As a result, it is possible for the sender to send data at a faster rate
than the receiver can handle.
If the receiver is slower than the sender, bytes will have to be dropped from the
receiver’s sliding window buffer. TCP deals with this issue using what is known
as flow control.
As an example, consider a conversation with your friend. One of you listens
while the other speaks. You might nod your head as you listen, or you might
interrupt the flow with a "Whoa, slow down, you are talking too fast!" This is
actually flow control. Some of us are better at it than others, but we all do it to
some degree. You nod to indicate you understood and are ready for the next
statement of information, or you tell your friend when they are going too fast.
That's flow control.
This chapter focuses on such controlling algorithms (such as the sliding window
mechanism) for computer networks.
12. Congestion Control: In today’s world, the Transmission Control Protocol (TCP)
carries huge Internet traffic, so the performance of the Internet depends to a great
extent on how well TCP works. TCP provides a reliable transport service between
two processes running on source and destination nodes.
In this chapter, we discuss another important component of TCP: its congestion
control mechanism. The important strategy of TCP is to send packets into the
network and then to react to the events that occur. TCP congestion
control was introduced into the Internet in the late 1980s by Van Jacobson,
roughly eight years after the TCP/IP protocol stack had become operational.
To address congestion, multiple mechanisms were implemented in TCP to
govern the rate at which data can be sent: slow start, congestion avoidance,
fast retransmit, and fast recovery. These are the subject of this chapter.
13. Session Layer: The session layer resides above the transport layer, and provides
value-added services to the underlying transport layer services. The session
layer (along with the presentation layer) adds services to the transport layer that
are likely to be of use to applications, so that each application doesn't have to
provide its own implementation. Layer 5 of the OSI reference model is the session
layer. It does not add many communications features or functionality, and is thus
termed a very thin layer. On many systems, the Layer 5 features are disabled,
but you should nonetheless know what failures can be prevented by a session
layer.
The session layer provides the following services:
I. Dialog management
II. Synchronization
III. Activity management
IV. Exception handling
This chapter ends with a discussion of major session layer protocols such as the
AppleTalk Data Stream Protocol (ADSP) and the AppleTalk Session Protocol (ASP).
Chapter 2: Introduction
2.1 What is a Computer Network?
A computer network is a group of computers that are connected together and
communicate with one another. These computers can be connected by telephone
lines, coaxial cables, satellite links, or other communication techniques.
[Figure: the elements of communication: message, devices, medium, and network]
Communication begins with a message that must be sent from one device to another.
People exchange ideas using many different communication methods, and all of these
methods have three elements in common.
The first of these elements is the message source (the sender). Message sources are
people, or electronic devices, that need to send a message to other
individuals or devices.
The second element of communication is the destination (the receiver) of the
message. The destination receives the message and interprets it.
Important Question
From the choices below, select the best definition of a network.
A. A collection of printers with no media.
B. Devices interconnected by some common communication channel.
C. A device that sends communication to the Internet.
D. A shared folder stored on a server.
Answer: B.
Analog data take continuous values on some interval. Examples of analog data are voice
and video. The data that are collected from the real world with the help of transducers
are continuous-valued, or analog, in nature.
On the contrary, digital data take discrete values. Text or character strings can be
considered examples of digital data. Characters are represented by suitable codes,
e.g. the ASCII code, where each character is represented by a 7-bit code.
[Figure: an analog signal: amplitude varying continuously with time]
The readings from a thermometer are also continuous in amplitude. This means that,
assuming your eyes are sensitive enough to read the mercury level, readings of 37,
37.4, or 37.440183432°C are possible. In actuality, most cardiac signals of interest are
analog by nature. For example, voltages recorded on the body surface and cardiac
motion are continuous functions in time and amplitude.
In a digital system, the signals 0 and 1 are transmitted as electric waves; a system
that transmits analog signals is called an analog system. Sound and video stored in
a computer are held and manipulated as patterns of binary values.
2.4.7 Frequency
Frequency describes the number of waves that pass a fixed place in a given amount of
time. So if the time it takes for a wave to pass is 1/2 second, the frequency is 2 per
second. If it takes 1/100 of an hour, the frequency is 100 per hour.
[Figure: high-frequency vs. low-frequency waves, and a digital bit pattern
(1 0 1 1 1 0 0 0 1 0) over time]
The speed of the data is expressed in bits per second (bits/s or bps). The data rate R
is a function of the duration of a bit, or bit time T_b:

R = 1 / T_b

Rate is also called channel capacity C. If the bit time is 10 ns, the data rate equals:

R = 1 / (10 × 10⁻⁹) = 10⁸ bps, or 100 million bits per second.
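The relation R = 1/T_b can be sketched in a few lines of Python (the function name is ours):

```python
def data_rate(bit_time_seconds: float) -> float:
    """Data rate R in bits/sec from the duration of one bit: R = 1 / T_b."""
    return 1.0 / bit_time_seconds

# A 10 ns bit time yields about 1e8 bps, i.e. 100 Mbps:
rate = data_rate(10e-9)
assert abs(rate - 1e8) < 1.0
```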
This can be used to translate baud into a bit rate using the following formula:

bit rate = baud rate × bits per symbol

Baud can be abbreviated to Bd when being used for technical purposes.
The significance of these formulas is that higher baud rates equate to greater amounts
of data transmission, as long as the bits per symbol are the same. A system using
4800-baud modems with 4 bits per symbol will send less data than a system using
9600-baud modems that also has 4 bits per symbol. So, all other things being equal, a
higher baud rate is generally preferred.
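The baud-to-bit-rate conversion is a one-line computation; here is a minimal sketch (the function name is ours):

```python
def bit_rate(baud: float, bits_per_symbol: int) -> float:
    """Bit rate = baud (symbols/sec) x bits carried per symbol."""
    return baud * bits_per_symbol

# 9600 baud at 4 bits/symbol sends more data than 4800 baud at 4 bits/symbol:
assert bit_rate(9600, 4) > bit_rate(4800, 4)   # 38400 vs 19200 bits/sec
```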
2.4.10 Attenuation
Attenuation (also called loss) is a general term that refers to any reduction in the
strength of a signal. Attenuation occurs with any type of signal, whether digital or
analog. It usually occurs while transmitting analog or digital signals over long
distances.
How is it expressed?
As the name suggests, signal to noise ratio is a comparison or ratio of the amount of
signal to the amount of noise and is expressed in decibels. Signal to noise ratio is
abbreviated S/N, and higher numbers mean a better specification. A component
with a signal to noise ratio of 100 dB means that the level of the audio signal is 100 dB
higher than the level of the noise and is a better specification than a component with a
S/N ratio of 90 dB.
Signal-to-noise ratio is defined as the power ratio between a signal and noise. It can be
derived from the formula

SNR = P_signal / P_noise = (µ / σ)²

where µ is the signal mean or expected value, and
σ is the standard deviation of the noise.
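The power ratio and its decibel form can be sketched as follows (function names are ours):

```python
import math

def snr(signal_power: float, noise_power: float) -> float:
    """Linear signal-to-noise ratio: P_signal / P_noise."""
    return signal_power / noise_power

def snr_db(signal_power: float, noise_power: float) -> float:
    """SNR expressed in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(snr(signal_power, noise_power))

# A signal 10^10 times stronger than the noise corresponds to 100 dB:
assert round(snr_db(1e10, 1.0), 6) == 100.0
```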
[Figure: bus topology: nodes attached to a common link]
Only one computer at a time can transmit a packet on a bus topology. Systems in a bus
topology listen to all traffic on the network but accept only the packets that are
addressed to them. Broadcast packets are an exception because all computers on the
network accept them. When a computer sends out a packet, it travels in both directions
from the computer. This means that the network is occupied until the destination
computer accepts the packet.
The number of computers on a bus topology network has a major influence on the
performance of the network. A bus is a passive topology. The computers on a bus
topology only listen or send data. They do not take data and send it on or regenerate it.
So if one computer on the network fails, the network is still up.
Advantages
One advantage of a bus topology is cost. The bus topology uses less cable than other
topologies. Another advantage is the ease of installation. With the bus topology, we
simply connect the system to the cable segment. We need only the amount of cable
required to connect the workstations we have. The ease of working with a bus topology
and the minimum amount of cable make this the most economical choice for a network
topology. If a computer fails, the network stays up.
Disadvantages
The main disadvantage of the bus topology is the difficulty of troubleshooting. When
the network goes down, usually it is from a break in the cable segment. With a large
network this can be tough to isolate. A cable break between computers on a bus
topology would take the entire network down. Another disadvantage of a bus topology
is that the heavier the traffic, the slower the network.
Scalability is an important consideration with the dynamic world of networking. Being
able to make changes easily within the size and layout of your network can be
important in future productivity or downtime. The bus topology is not very scalable.
[Figure: star topology: nodes linked to a central hub]
Advantages
One advantage of a star topology is the centralization of cabling. With a hub, if one link
fails, the remaining systems are not affected, unlike with some other topologies, which
we will look at in this chapter.
Centralizing network components can make an administrator’s life much easier in the
long run. Centralized management and monitoring of network traffic can be vital to
network success. With this type of configuration, it is also easy to add or change
configurations with all the connections coming to a central point.
Disadvantages
On the flip side to this is the fact that if the hub fails, the entire network, or a good
portion of the network, comes down. This is, of course, an easier fix than trying to find
a break in a cable in a bus topology.
[Figure: ring topology: nodes connected in a closed loop]
As shown in the figure, the ring topology is a circle that has no start and no end.
Terminators are not necessary in a ring topology. Signals travel in one direction on a
ring while they are passed from one system to the next. Each system checks the packet
for its destination and passes it on as a repeater would. If one of the systems fails, the
entire ring network goes down.
Advantages
The nice thing about a ring topology is that each computer has equal access to
communicate on the network. (With bus and star topologies, only one workstation can
communicate on the network at a time.) The ring topology provides good performance
for each system. This means that busier systems that send out a lot of information do
not inhibit other systems from communicating. Another advantage of the ring topology
is that signal degeneration is low.
Disadvantages
The biggest problem with a ring topology is that if one system fails or the cable link is
broken, the entire network could go down. With newer technology this isn’t always the
case. The concept of a ring topology is that the ring isn’t broken and the signal hops
from system to system, connection to connection.
Another disadvantage is that if we make a cabling change to the network or a system
change, such as a move, the brief disconnection can interrupt or bring down the entire
network.
[Figure: mesh topology: every node linked to every other node]
A mesh topology is not very common in computer networking. The mesh topology is
more commonly seen with something like the national phone network. With the mesh
topology, every system has a connection to every other component of the network.
Systems in a mesh topology are all connected to every other component of the network.
If we have 4 systems, we must have six cables: three from each system to the
other systems.
Two nodes are connected by a dedicated point-to-point link between them. So the total
number of links needed to connect n nodes would be

n(n − 1) / 2

which is proportional to n².
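The link count n(n − 1)/2 can be checked with a short sketch (the function name is ours):

```python
def mesh_links(n: int) -> int:
    """Number of point-to-point links in a full mesh of n nodes: n*(n-1)/2."""
    return n * (n - 1) // 2

# 4 systems need six cables, three attached to each system:
assert mesh_links(4) == 6
```

The quadratic growth is what makes large full meshes expensive: doubling the node count roughly quadruples the cabling.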
Advantages
The biggest advantage of a mesh topology is fault tolerance. If there is a break in a cable
segment, traffic can be rerouted. This fault tolerance means that it is almost impossible
for a single cable fault to bring the network down.
Disadvantages
A mesh topology is very hard to administer and manage because of the numerous
connections. Another disadvantage is cost. With a large network, the amount of cable
needed to connect and the interfaces on the workstations would be very expensive.
[Figure: a topology with repeaters (Repeater-1, Repeater-2, Repeater-3) connecting segments]
All the topologies discussed so far are symmetric and constrained by well-defined
interconnection pattern. However, sometimes no definite pattern is followed and nodes
are interconnected in an arbitrary manner using point-to-point links.
Unconstrained topology allows a lot of configuration flexibility but suffers from the
complex routing problem. Complex routing involves unwanted overhead and delay.
2.6.1 Peer-to-peer
[Figure: a peer-to-peer network; a file server with other equipment]
Twisted pair cables are widely used in the telephone network and are increasingly
being used for data transmission.
The larger the cable diameter, the lower the transmission loss and the higher the
transfer speeds that can be achieved. A coaxial cable can be used over a distance of
about 1 km and can achieve a transfer rate of up to 100 Mbps.
On the flip side, the difficult installation and maintenance procedures of fiber require
skilled technicians. Also, the cost of a fiber-based solution is higher. Another drawback
of implementing a fiber solution is the cost of fitting it to existing network
equipment/hardware. Fiber is incompatible with most electronic network equipment.
That means we have to purchase fiber-compatible network hardware.
Fiber-optic cable is made-up of a core glass fiber surrounded by cladding. An insulated
covering then surrounds both of these within an outer protective sheath.
Since light waves give a much higher bandwidth than electrical signals, fiber achieves
a high data transfer rate of about 1000 Mbps. It can be used for long- and
medium-distance transmission links.
A MAN may be fully owned and operated by a private company or be a service provided
by a public company.
[Figure: networked devices: projector, camera, scanner, printer]
This service does not have the reliability of the connection-oriented method, but it is
useful for periodic burst transfers.
Neither system needs to maintain state information about the systems it sends
transmissions to or receives transmissions from. A connectionless network provides
minimal services. User Datagram Protocol (UDP) is a connectionless protocol.
Common features of a connectionless service are:
Data (packets) do not need to arrive in a specific order
Reassembly of any packet broken into fragments during transmission must be
in proper order
No time is used in creating a session
No acknowledgement is required.
These large streams of data would result in significant delays. Further, if a link in the
interconnected network infrastructure failed during the transmission, the complete
message would be lost and have to be retransmitted in full.
A better approach is to divide the data into smaller, more manageable pieces to send
over the network. This division of the data stream into smaller pieces is called
segmentation. Segmenting messages has two primary benefits.
First, by sending smaller individual pieces from source to destination, many different
conversations can be interleaved on the network. The process used to interleave the
pieces of separate conversations together on the network is called multiplexing.
data in the segment, the host retransmits the segment. The time from when the timer
is started until it expires is called the timeout of the timer.
What should the ideal timeout be? Clearly, the timeout should be larger than the
connection's round-trip time, i.e., the time from when a segment is sent until it is
acknowledged.
Otherwise, unnecessary retransmissions would be sent. But the timeout should not be
much larger than the round-trip time; otherwise, when a segment is lost, TCP would
not quickly retransmit the segment, thereby introducing significant data transfer
delays into the application. Before discussing the timeout interval in more detail, let
us take a closer look at the round-trip time (RTT).
The round-trip time calculation algorithm is used to calculate the average time for data
to be acknowledged. When a data packet is sent, the elapsed time for the
acknowledgment to arrive is measured and a smoothing algorithm is applied. This time
is used to determine the interval after which to retransmit data.

SRTT = (1 − α) × SRTT + α × RTT

The above formula is written in the form of a programming language statement: the
new value of SRTT is a weighted combination of the previous value of SRTT and the
new measurement of RTT. A typical value of α is 0.1, in which case the above formula
becomes:

SRTT = 0.9 × SRTT + 0.1 × RTT
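This exponentially weighted moving average can be sketched in a few lines (the function name is ours; times are in milliseconds for illustration):

```python
def update_srtt(srtt: float, rtt_sample: float, alpha: float = 0.1) -> float:
    """Smoothed RTT estimate: SRTT = (1 - alpha) * SRTT + alpha * RTT."""
    return (1 - alpha) * srtt + alpha * rtt_sample

# Starting from 100 ms, repeated 200 ms samples pull the estimate up slowly:
srtt = 100.0
for _ in range(3):
    srtt = update_srtt(srtt, 200.0)
# srtt is now about 127.1 ms: the small alpha damps sudden RTT spikes
```

The small weight on new samples is deliberate: it keeps one delayed acknowledgment from triggering a burst of spurious retransmissions.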
[Figure: delays at each hop between source, routers/switches, and destination,
including a processing delay at each node]

Total delay = processing delay + queuing delay + transmission delay + propagation delay
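The total per-hop delay is the sum of the processing, queuing, transmission, and propagation components. A minimal sketch (helper names are ours):

```python
def transmission_delay(packet_bits: float, link_rate_bps: float) -> float:
    """Time to push all of the packet's bits onto the link."""
    return packet_bits / link_rate_bps

def propagation_delay(distance_m: float, speed_mps: float = 2e8) -> float:
    """Signals travel at roughly 2/3 the speed of light in copper or fiber."""
    return distance_m / speed_mps

def nodal_delay(processing: float, queuing: float,
                transmission: float, propagation: float) -> float:
    """Total per-hop delay as the sum of its four components (seconds)."""
    return processing + queuing + transmission + propagation
```

For example, a 12,000-bit packet on a 1 Mbps link contributes 12 ms of transmission delay, while 200 km of fiber contributes about 1 ms of propagation delay, regardless of packet size.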
These are the latencies caused only by propagation delays in the transmission medium.
If you were the only one sending one single data bit and you had unlimited bandwidth
available, the speed of the packet would still be delayed by the propagation delay.
This delay happens without regard for the amount of data being transmitted, the
transmission rate, the protocol being used or any link impairment.
For example:
Serialization of a 1500 byte packet used on a 56K modem link will take 214
milliseconds
Serialization of the same 1500 byte packet on a 100 Mbps LAN will take 120
microseconds
Serialization can represent a significant delay on links that operate at lower
transmission rates, but for most links this delay is a tiny fraction of the overall latency
compared to the other contributors.
Voice and video data streams generally use small packet sizes (~20 ms of data) to
minimize the impact of serialization delay.
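The two example figures above follow directly from dividing the packet size by the link rate; a minimal sketch (the function name is ours):

```python
def serialization_delay(packet_bytes: int, link_bps: float) -> float:
    """Time to clock a packet onto the wire, in seconds."""
    return packet_bytes * 8 / link_bps

# A 1500-byte packet: ~214 ms on a 56 kbps modem, 120 us on a 100 Mbps LAN
assert round(serialization_delay(1500, 56_000) * 1000) == 214
assert abs(serialization_delay(1500, 100e6) - 120e-6) < 1e-12
```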
the use of radio spectrum is controlled by nearly every government on the planet. The
amount of radio spectrum occupied by any given radio signal is called its bandwidth.
The nature of radio spectrum use is beyond the scope of this book, but it is important
to understand that the occupied radio spectrum of a modem signal generally increases
with the data rate:
Higher modem data rates cause the modem to occupy more radio bandwidth
Lower modem data rates will let the modem occupy less radio bandwidth
Since radio spectrum is a limited resource, the occupied radio bandwidth is an
important limiting factor in wireless and satellite links.
Noise in the radio channel will perturb the analog signal waveform and can cause the
demodulator at the receiver to change a digital one into a zero or vice versa. The effect
of noise can be overcome by increasing the power level of the transmitted signal, or by
adding a few extra error correcting bits to the data that is being transmitted. These
error correcting bits help the receiver correct bit errors. However, the error correction
bits increase the bandwidth that is required.
Bandwidth × Delay Product
The Bandwidth × Delay Product, or BDP for short, determines the amount of data that
can be in transit in the network. It is the product of the available bandwidth and the
latency. Sometimes it is calculated as the data link's capacity multiplied by its
round-trip time (RTT). BDP is a very important concept in a window-based protocol
such as TCP.
It plays an especially important role in high-speed / high-latency networks, such as
most broadband internet connections. It is one of the most important factors of
tweaking TCP in order to tune systems to the type of network used.
The BDP simply states that:

BDP (bits) = bandwidth (bits/sec) × round-trip time (sec)
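The BDP formula is a single multiplication; here is a minimal sketch (the function name is ours):

```python
def bdp_bits(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-Delay Product: the number of bits 'in flight' on the path."""
    return bandwidth_bps * rtt_seconds

# A 10 Mbps link with a 100 ms RTT keeps about 1,000,000 bits (125 KB) in transit:
in_flight = bdp_bits(10e6, 0.100)
assert abs(in_flight - 1_000_000) < 1e-3
```

For TCP, this number is the minimum window size needed to keep such a link fully utilized, which is why BDP matters for tuning.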
2.11.5 Revisiting Queuing Delay
Queuing delay depends on the number and size of the other packets already in the
queue, as well as the transmission rate of the interface. With queuing delays, a
common question frequently arises.
When is the queuing delay large and when is it insignificant?
The answer to this question depends on the rate at which traffic arrives at the queue,
the transmission rate of the link, and the nature of the arriving traffic, that is, whether
the traffic arrives occasionally or in bursts. To get some insight here, let a
denote the average rate at which packets arrive at the queue (a is in units of
packets/sec). Also, let R be the transmission rate; that is, the rate (in
bits/sec) at which bits are pushed out of the queue.
For simplicity, assume that all packets consist of L bits. Then the average rate at which
bits arrive at the queue is L × a bits/sec. Finally, suppose that the queue is very big,
so that it can hold essentially an infinite number of bits. The ratio L × a / R, called the
traffic intensity, often plays an important role in estimating the extent of the queuing
delay.

Traffic intensity = (L × a) / R

where a is the average arrival rate of packets (e.g. packets/sec),
L is the average packet length (e.g. in bits), and
R is the transmission rate (e.g. bits/sec)
If L × a / R > 1, then the average rate at which bits arrive at the queue exceeds the rate
at which the bits can be transmitted from the queue. In this unfortunate situation, the
queue will tend to grow without bound and the queuing delay will approach infinity.
Therefore, one of the golden rules in traffic engineering is: design your system so that
the traffic intensity is no greater than 1.
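The traffic intensity ratio can be sketched directly (the function name is ours):

```python
def traffic_intensity(L_bits: float, a_pkts_per_sec: float, R_bps: float) -> float:
    """Traffic intensity = L * a / R; keep it below 1 when engineering a link."""
    return L_bits * a_pkts_per_sec / R_bps

# 1000-bit packets arriving 900 times/sec on a 1 Mbps link: intensity 0.9, stable.
# At 2000 packets/sec the intensity exceeds 1 and the queue grows without bound.
assert traffic_intensity(1000, 900, 1e6) < 1 < traffic_intensity(1000, 2000, 1e6)
```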
Now consider the case L × a / R ≤ 1. Here, the nature of the arriving traffic impacts
the queuing delay. For instance, if packets arrive periodically, that is, one packet
arrives every L/R seconds, then every packet will arrive at an empty queue and there
will be no queuing delay. On the other hand, if packets arrive in bursts but only
occasionally, there can be a considerable average queuing delay.
For instance, suppose N packets arrive simultaneously every (L/R) × N seconds. Then
the first packet transmitted has no queuing delay; the second packet transmitted has
a queuing delay of L/R seconds; and more generally, the nth packet transmitted has a
queuing delay of (n − 1) × L/R seconds.
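The burst-arrival case can be sketched by listing each packet's wait (the function name is ours):

```python
def burst_queuing_delays(n_packets: int, L_bits: float, R_bps: float) -> list:
    """Queuing delays when n packets arrive simultaneously: the k-th packet
    (0-indexed) waits k * L/R seconds behind those ahead of it."""
    return [k * L_bits / R_bps for k in range(n_packets)]

# Five 1000-bit packets arriving at once on a 1 Mbps link:
# delays of 0, 1, 2, 3, and 4 milliseconds respectively
delays = burst_queuing_delays(5, 1000, 1e6)
assert delays[0] == 0.0
assert abs(delays[4] - 0.004) < 1e-12
```

Averaging this list shows why bursty traffic hurts: the same five packets spread out periodically would each see an empty queue.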
Most of us know from experience that the actual network speed is much slower than
what is specified. Throughput is the actual amount of data that could be transferred
through the network. That is the actual amount of data that gets transmitted back and
forth from your computer, through the Internet to the web server in a single unit of
time.
When downloading a file, you will see a window with a progress bar and a number.
This number is actually the throughput, and you must have noticed that it is not
constant and almost always lower than the specified bandwidth of your connection.
Several factors, such as the number of users accessing the network, network topology,
physical media and hardware capabilities, can affect this reduction. As you can
imagine, throughput is measured using the same units as bandwidth.
At first glance, bandwidth and throughput seem to give a similar measurement of a
network. They are also measured using the same units. Despite these similarities, they
are actually different. We can simply say that bandwidth is the maximum throughput
you can ever achieve, while the actual speed we experience while surfing is the
throughput.
To simplify further, you can think of bandwidth as the width of a highway. As we
increase the width of the highway, more vehicles can move through it in a given period
of time. But when we consider the road conditions (craters or construction work on
the highway), the number of vehicles that can actually pass in that period could be
lower. This is analogous to throughput. So it is clear that bandwidth and throughput
give two different measurements of a network.
10. Effective data rate: It is defined as the number of data bits sent per unit time.
It is found by dividing the number of data bits by the elapsed time between
sending two frames. For unrestricted (pipelined) protocols the effective data rate
approaches the link rate; for the stop-and-wait protocol it is much lower, since
the sender must wait a full round trip for each frame's acknowledgment.
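The stop-and-wait case can be sketched as a simple model, ignoring ACK size and processing time (an assumption of this sketch; the function name is ours):

```python
def stop_and_wait_rate(frame_bits: float, link_bps: float,
                       prop_delay_s: float) -> float:
    """Effective data rate of stop-and-wait: one frame is delivered per
    transmission time plus a full round trip waiting for the ACK
    (ACK size and processing time ignored for simplicity)."""
    t_frame = frame_bits / link_bps
    return frame_bits / (t_frame + 2 * prop_delay_s)

# A 12,000-bit frame on a 1 Mbps link with 10 ms one-way propagation delay:
# 12 ms sending + 20 ms waiting gives only about 375 kbps effective.
rate = stop_and_wait_rate(12_000, 1e6, 0.010)
assert round(rate) == 375_000
```

This is why pipelined (sliding window) protocols exist: they keep several frames in flight instead of idling for each acknowledgment.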
It is not efficient to build a physically separate path for each pair of communicating
end systems. An alternative to a point-to-point connection is switching through a
communication network. In a communication network, each communicating device
(also called a station, node, or host) is connected to a network node.
The interconnected nodes are capable of transferring data between stations.
[Figure: stations on a shared medium vs. stations attached to a communication network]
Depending on the architecture and techniques used to transfer data, two basic
categories of communication networks are broadcast networks and switched networks.
Broadcast Networks
2.12 Network Switching 42
Elements of Computer Networking Introduction
Switched Networks
[Figure: a switched communication network of interconnected nodes and links]
Broadcast is a method of sending a signal where multiple nodes may hear a single
sender node. As an example, consider a conference room full of people. In this
conference room, a single person starts saying some information loudly.
During that time, some people may be sleeping and may not hear what the person is
saying. Some people may not be sleeping, but are not paying attention (they are able to
hear the person, but choose to ignore him). Another group of people may not only be awake,
but be interested in what is being said. This last group is not only able to hear the
person speaking, but is also listening to what is being said.
In this example, we can see that a single person is broadcasting a message to all others
that may or may not be able to hear it, and if they are able to hear it, may choose to
listen or not.
[Figure: the three phases of circuit switching: circuit establishment, data transfer, and circuit disconnect]
Therefore, the channel capacity must be reserved between the source and destination
throughout the network and each node must have available internal switching capacity
to handle the requested connection. Clearly, the switching nodes must have the
intelligence to make proper allocations and to establish a route through the network.
Parameters, such as the maximum packet size, are also exchanged between the source
and the destination during call setup. The virtual circuit is cleared after the data
transfer is completed.
Virtual circuit packet switching is connection-oriented. This is in contrast to
datagram switching, which is a connectionless packet switching methodology.
In message switching, the entire message is stored and forwarded at each
hop as the message travels through the path toward its destination. Hence, to ensure
proper delivery, each intermediate switch may maintain a copy of the message until its
delivery to the next hop is guaranteed.
In case of message broadcasting, multiple copies may be stored for each individual
destination node. The store-and-forward property of message-switched networks is
different from simple queuing, in which messages are stored until their preceding
messages are processed. With store-and-forward capability, a message will only be
delivered if the next hop and the link connecting to it are both available. Otherwise, the
message is stored indefinitely. For example, consider a mail server that is disconnected
from the network and cannot receive the messages directed to it. In this case, the
intermediary server must store all messages until the mail server is connected and
receives the e-mails.
The store-and-forward technology is also different from admission control techniques
implemented in packet-switched or circuit switched networks. Using admission control,
the data transmission can temporarily be delayed to avoid overprovisioning the
resources. Hence, a message-switched network can also implement an admission
control mechanism to reduce the network's peak load.
The message delivery in message-switched networks includes wrapping the entire
information in a single message and transferring it from the source to the destination
node. The message size has no upper bound; although some messages can be as small
as a simple database query, others can be very large. For example, messages obtained
from a meteorological database center can contain several million bytes of binary data.
Practical limitations in storage devices and switches, however, can enforce limits on
message length.
Each message must be delivered with a header. The header often contains the message
routing information, including the source and destination, priority level, and expiration
time. It is worth mentioning that while a message is being stored at the source or any
other intermediary node in the network, it can be bundled or aggregated with other
messages going to the next node. This is called message interleaving. One important
advantage of message interleaving is that it can reduce the amount of overhead
generated in the network, resulting in higher link utilization.
Question 2: Imagine the length of a cable is 2500 metres. If the speed of propagation in
a thick co-axial cable is 60% of the speed of light, how long does it take for a bit to
travel from the beginning to the end of the cable? Ignore any propagation delay in
the equipment. (Speed of light = 3 × 10⁸ metres/sec)
Answer: Speed of propagation = 60% × c = 60 × 3 × 10⁸/100 = 1.8 × 10⁸ metres/sec.
So it would take a bit 2500/(1.8 × 10⁸) = 13.9 µsecs.
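The same calculation, as a small Python sketch:

```python
# Question 2: propagation delay = distance / propagation speed.

speed_of_light = 3e8                 # metres per second
prop_speed = 0.60 * speed_of_light   # 60% of c in thick co-axial cable
cable_length_m = 2500

delay_s = cable_length_m / prop_speed
print(delay_s * 1e6)                 # about 13.9 microseconds
```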
Question 3: Suppose that data are stored on 2.44 Mbyte floppy diskettes that weigh
20 gm each. Suppose that an airliner carries 10 kg of these floppies at a speed of
2000 km/h over a distance of 8000 km. What is the data transmission rate in bits
per second of this system?
Answer: Let us first calculate the time for which the data was carried.
Time = Distance/Speed = 8000/2000 = 4 hrs.
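The remaining arithmetic can be sketched in Python (assuming, as the similar Question 23 later does, that one Mbyte is 1024 × 1024 bytes; with 10⁶-byte megabytes the result changes slightly):

```python
# Question 3: bits carried divided by flight time ("sneakernet" rate).

floppies = 10_000 / 20                      # 10 kg of 20 g diskettes = 500
bits_per_floppy = 2.44 * 1024 * 1024 * 8    # 2.44 Mbyte, in powers of two
flight_time_s = 8000 / 2000 * 3600          # 8000 km at 2000 km/h = 4 hours

rate_bps = floppies * bits_per_floppy / flight_time_s
print(rate_bps / 1000)                      # about 711 kbps
```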
10 µs (microseconds) and that the switch begins retransmitting immediately after it
has finished receiving the packet.
Answer:
A) For each link it takes 5 µs to transmit the packet on the link, after which it
takes an additional 10 µs for the last bit to propagate across the link. Thus, for a LAN
with only one switch that starts forwarding only after receiving the whole
packet, the total transfer delay is two transmission delays + two propagation delays = 30 µs.
B) For 3 switches and thus 4 links, the total delay is 4 transmission delays + 4
propagation delays = 60 µs.
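The same store-and-forward accounting, parameterized by the number of switches (the 5 µs transmission and 10 µs propagation figures are taken from the answer above):

```python
# Store-and-forward delay: with n switches there are n + 1 links; each
# link contributes one transmission delay and one propagation delay when
# a switch forwards only after receiving the whole packet.

def transfer_delay_us(n_switches, transmit_us=5, prop_us=10):
    links = n_switches + 1
    return links * (transmit_us + prop_us)

print(transfer_delay_us(1))   # A) one switch, two links: 30 microseconds
print(transfer_delay_us(3))   # B) three switches, four links: 60 microseconds
```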
Question 10: Determine the maximum length of the cable (in km) for transmitting data
at a rate of 500 Mbps in an Ethernet LAN with frames of size 10,000 bits. Assume
the signal speed in the cable to be 200,000 km/s.
A) 1 B) 2 C) 2.5 D) 5
Answer: B
Transmission time = 2 × Propagation time
Frame size/Bandwidth = 2 × Length/Signal speed
10000 bits/(500 × 10⁶ bits/sec) = 2 × Length/(200000 km/s)
Length = 2 km
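The same rearrangement in code, solving for the length directly:

```python
# Question 10: for collision detection the transmission time must equal
# the round-trip propagation time: F/B = 2*L/v  =>  L = F*v/(2*B).

frame_bits = 10_000
bandwidth_bps = 500e6          # 500 Mbps
signal_speed_kmps = 200_000    # km per second

max_length_km = frame_bits * signal_speed_kmps / (2 * bandwidth_bps)
print(max_length_km)           # 2.0 km
```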
Question 11: A packet switch receives a packet and determines the outbound link to
which the packet should be forwarded. When the packet arrives, one other packet is
halfway done being transmitted on this outbound link and four other packets are
waiting to be transmitted. Packets are transmitted in order of arrival. Suppose all
packets are 1,200 bytes and the link rate is 3 Mbps. What is the queuing delay for
the packet? More generally, what is the queuing delay when all packets have length
S, the transmission rate is T, X bits of the currently being transmitted packet have
been transmitted, and N packets are already in the queue?
Answer: The arriving packet must first wait for the link to transmit 5,400 bytes, or
43,200 bits. Since these bits are transmitted at 3 Mbps, the queuing delay is 14.4 msec.
Generally, with N packets already queued, the queuing delay is (N × S + (S − X))/T.
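The general formula can be written directly (S, T, X, and N as defined in the question):

```python
# Question 11: the arriving packet waits for the remainder of the packet
# in service plus every packet already queued.

def queuing_delay_s(S_bits, T_bps, X_bits, N_queued):
    return (N_queued * S_bits + (S_bits - X_bits)) / T_bps

S = 1200 * 8                              # 1,200-byte packets, in bits
d = queuing_delay_s(S, 3e6, S / 2, 4)     # half a packet in service, 4 queued
print(round(d * 1000, 1))                 # 14.4 ms
```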
Question 12: Suppose we would like to urgently deliver 60 Terabytes of data from
to . We have a 1000 Mbps dedicated link for data transfer
available. Would you prefer to transmit the data via this link or instead use AirMail
overnight delivery? Explain.
Answer: 60 Terabytes = 60 × 10¹² × 8 bits. So, if using the dedicated link, it will take
60 × 10¹² × 8/(1000 × 10⁶) = 480,000 seconds ≈ 5.6 days. But with AirMail overnight
delivery, we can guarantee the data arrives in one day, and it costs us no more
than USD 100.
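The link-transfer arithmetic, spelled out:

```python
# Question 12: 60 TB over a dedicated 1000 Mbps link.

data_bits = 60e12 * 8           # 60 Terabytes, in bits
link_bps = 1000e6               # 1000 Mbps

transfer_s = data_bits / link_bps
print(transfer_s)               # 480000.0 seconds
print(transfer_s / 86400)       # about 5.6 days
```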
Question 13: Two nodes, A and B, communicate through a store & forward network.
Node A is connected to the network by a 10 Mbps link, while node B is connected
by a 5 Mbps link. Node A sends two back-to-back packets of 1000 bits each. The
difference between the arrival times of the two packets at B is 1 ms. What is the
smallest capacity of a link along the path between A and B?
Note: Assume that there are no other packets in the network except the ones sent
by A, and ignore the packet processing time. Assume both packets follow the same
path, and they are not reordered. The arrival time of a packet at a node is defined
as the time when the last bit of the packet has arrived at that node.
Answer: Since packets are sent back-to-back, the difference between the arrival times of
the packets at B represents the transmission time of the second packet on the slowest
link in the path. Thus, the capacity of the slowest link is 1000 bits/1 ms = 1 Mbps.
Question 14: Consider an infinite queue that can send data at 10 Kbps. Assume the
following arrival traffic:
• During every odd second the queue receives a 1000-bit packet every 50 ms
• During every even second the queue receives no data.
Assume an interval I of 10 sec starting with an odd second (i.e., a second in which
the queue receives data). At the beginning of interval I the queue is empty. What is
the maximum queue size during interval I?
Answer: 10 packets. There are 20 packets arriving during the 1st second and 10 packets
sent by the end of that second. Thus at the end of the 1st second there are 10 packets in
the queue. All 10 packets will be sent during the 2nd second (since no new packets are
received). Thus, at the end of the 2nd second the queue size is 0. After that the process
repeats.
(Note: The following alternate answers: 11 packets, 10 Kb, and 11 Kb all received
maximum points. The 11 packets and 11 Kb answers assume that at a time when a packet is
received and another one is sent out, the received packet is already in the queue as the
other packet is sent out.)
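A small discrete-time simulation reproduces this answer, and also the answer to Question 16 below (stepping once per arrival slot and draining the queue after each arrival, which matches the 10-packet reading; draining before the arrival gives the 11-packet reading mentioned in the note):

```python
# Discrete-time sketch of the queue in Questions 14 and 16. The link
# drains 10 Kbps; during "odd" seconds (the 1st, 3rd, ...) a 1000-bit
# packet arrives every gap_ms milliseconds; even seconds are silent.

def max_queue_bits(gap_ms, interval_s=10, link_bps=10_000, pkt_bits=1000):
    drain = link_bps * gap_ms // 1000          # bits the link sends per step
    queue = max_seen = 0
    for i in range(interval_s * 1000 // gap_ms):
        if (i * gap_ms) // 1000 % 2 == 0:      # arrival seconds: 0, 2, 4, ...
            queue += pkt_bits
        queue = max(0, queue - drain)          # drain after the arrival
        max_seen = max(max_seen, queue)
    return max_seen

print(max_queue_bits(50) // 1000)    # Question 14: 10 packets
print(max_queue_bits(25) // 1000)    # Question 16: 110 packets
```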
Question 15: For the Question 14, what is the average time (delay) a packet spends in
the queue during interval I?
Answer:
• The 1st packet arrives at time 0 and starts being transmitted immediately, at time 0 →
delay 0
• The 2nd packet arrives at 0.05 s and starts being transmitted at 0.1 s (after the first
packet) → delay 0.05 s
• The 3rd packet arrives at 0.1 s and starts being transmitted at 0.2 s (after the first two
packets) → delay 0.1 s
• The 4th packet arrives at 0.15 s and starts being transmitted at 0.3 s (after the first three
packets) → delay 0.15 s
…
• The kth packet arrives at (k − 1) × 0.05 s and starts being transmitted at (k − 1) × 0.1 s →
delay (k − 1) × 0.05 s
This process continues every 2 seconds.
Thus, the average delay of the first 20 packets is
((0 + 1 + ⋯ + 19) × 0.05)/20 = 0.475 s
Alternate solution that approximates the average delay: We use Little's theorem:
average_delay = average_number_of_packets/average_arrival_rate.
During an odd second the number of packets in the
queue increases linearly from 0 to 10, and during the next second it decreases from 10
to 0. This means that the average number of packets in the queue is 5. Over an odd and
an even second the average arrival rate is 20/2 = 10 packets/sec.
Then, average_delay = 5/10 = 0.5 sec.
(Note: The following answers also received maximum points: 0.575s and 0.5s.)
Question 16: Similar to Question 14 and Question 15, now assume that during odd
seconds the queue receives 1000 bit packets every 25 ms (instead of every 50 ms),
and during even seconds it still receives no data. For this traffic patterns answer
the same questions. What is the maximum queue size during interval I?
Answer: 110 packets. In this case the queue is never empty. During the first 9 seconds
of interval I there are 5 × (1 s/25 ms) = 200 packets received and 90 packets sent out.
Thus, at the end of the 9th second there are 110 packets in the queue.
(Note: The following answers also received the maximum number of points: 111
packets, 110 Kb, and 111 Kb.)
Question 17: For the Question 16, what is the average time (delay) a packet spends in
the queue during interval I?
Answer: Packets received during the 1st second:
• The 1st packet arrives at time 0 and starts being transmitted immediately, at time 0 →
delay 0
• The 2nd packet arrives at 0.025 s and starts being transmitted at 0.1 s (after the first
packet) → delay 0.075 s
• The 3rd packet arrives at 0.05 s and starts being transmitted at 0.2 s (after the first two
packets) → delay 0.15 s
• The 4th packet arrives at 0.075 s and starts being transmitted at 0.3 s (after the first three
packets) → delay 0.225 s
…
• The kth packet arrives at (k − 1) × 0.025 s and starts being transmitted at (k − 1) × 0.1 s →
delay (k − 1) × 0.075 s
The average delay of the packets received in the first two seconds is
((0 + 1 + ⋯ + 39) × 0.075)/40 = 1.4625 s
Packets received during the 3rd second: note that at the beginning of the 3rd second there
are still 20 packets in the queue
• The 1st packet arrives at time 2 s and starts being transmitted at 4 s (after the 20
queued packets) → delay 2 s
• The 2nd packet arrives at 2.025 s and starts being transmitted at 4.1 s → delay
2 + 0.075 s
• The 3rd packet arrives at 2.05 s and starts being transmitted at 4.2 s → delay 2 + 0.15
s
• The 4th packet arrives at 2.075 s and starts being transmitted at 4.3 s → delay
2 + 0.225 s
…
• The kth packet arrives at 2 + (k − 1) × 0.025 s and starts being transmitted at 4 + (k − 1) × 0.1 s
→ delay 2 + (k − 1) × 0.075 s
The average delay of the packets received during the 3rd second is
(2 × 40 + (0 + 1 + ⋯ + 39) × 0.075)/40 = 3.4625 s
…
Packets received during the 9th second: note that at the beginning of the 9th second there
are still 80 packets in the queue
• The 1st packet arrives at time 8 s and starts being transmitted at 16 s (after the 80
queued packets) → delay 8 s
• The 2nd packet arrives at 8.025 s and starts being transmitted at 16.1 s → delay
8 + 0.075 s
Alternate solution that approximates the average delay: The average arrival rate is 40
packets/2 sec = 20 packets/sec.
During the 1st sec the number of packets in the queue increases linearly from 0 to 30,
thus the average number of packets in the queue in the 1st sec is 15. During the 2nd second
the queue decreases linearly from 30 to 20, thus the average number of packets in the
queue is 25, and the average number of packets in the queue over the first two seconds
is 20.
During the 3rd and 4th seconds the process repeats, with the difference that there are 20
packets in the queue at the beginning of the 3rd second. Thus, the average number of
packets in the queue during the 3rd and 4th seconds is 20 + 20 = 40.
Similarly, the average number of packets during the 5th and 6th seconds is 40 + 20 = 60,
during the 7th and 8th seconds 60 + 20 = 80, and during the 9th and 10th seconds
80 + 20 = 100.
Thus the average number of packets over the entire interval I is (20 + 40 + 60 + 80 + 100)/5 = 60.
According to Little's theorem, average_delay = average_number_of_packets/average_rate
= 60/10 = 6 sec (using the 10 packets/sec rate at which the busy link drains the queue).
(Note: In general, the average number of packets over the interval defined by the (2k − 1)th
and (2k)th seconds is k × 20, where k ≥ 1.)
Question 18: Suppose a CSMA/CD network is operating at 1 Gbps, and suppose there
are no repeaters and the length of the cable is 1 km. Determine the minimum frame
size if the signal propagation speed is 200 km/ms.
Answer: Since the length of the cable is 1 km, we have a one-way propagation time of
τ = 1/200 = 0.005 ms = 5 µs. So, 2τ = 10 µs. The minimum frame size is therefore
1 Gbps × 10 µs = 10,000 bits.
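The CSMA/CD constraint as code: the frame must still be on the wire when a collision report returns, so it must last at least one round trip.

```python
# Question 18: minimum frame size = bandwidth * round-trip propagation time.

bandwidth_bps = 1e9              # 1 Gbps
length_km = 1
speed_km_per_ms = 200

tau_s = length_km / speed_km_per_ms / 1000   # one-way delay: 5 microseconds
min_frame_bits = bandwidth_bps * 2 * tau_s
print(round(min_frame_bits))                 # 10000 bits
```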
Answer: Period = 1/frequency = 1 ms.
Question 20: A digital signaling system is required to operate at 9600 bps. If a symbol
encodes a 4-bit word, what is the minimum required channel bandwidth?
Answer: The formula to use is:
Maximum number of bits/sec = 2 × Channel bandwidth × Number of bits per symbol
So 9600 = 2 × Channel bandwidth × 4, giving a minimum channel bandwidth of 1200 Hz.
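Rearranged for the bandwidth, the Nyquist formula gives:

```python
# Question 20: Nyquist signalling, C = 2 * B * bits_per_symbol, so the
# minimum channel bandwidth is B = C / (2 * bits_per_symbol).

data_rate_bps = 9600
bits_per_symbol = 4

min_bandwidth_hz = data_rate_bps / (2 * bits_per_symbol)
print(min_bandwidth_hz)     # 1200.0 Hz
```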
Question 23: Suppose that data is stored on 1.4 Mbyte floppy diskettes that weigh 30
grams each. Suppose that an airliner carries 10 Kg of these floppies at a speed of
1000 Km/h over a distance of 5000 Km. What is the data transmission rate in bps
of this system?
Answer: Each floppy weighs 30 grams. We have the following:
Number of floppies = 10 × 10³/30 ≈ 333.33
Total number of bits transported = Number of floppies × 1.4 × 1024 (not 1000!) × 1024 ×
8
Number of bits/sec = Total number of bits transported/(5 × 3600)
The answer is ≈ 217.5 Kbps.
Note: When we talk of computer memory, it is typically measured in powers of 2; therefore
1 KB = 2¹⁰ bytes. When it comes to networks, we use clocks to send data: if the clock is 1
kHz, we transmit at the rate of 1 kilobit per sec, where kilo is 1000, because we are
transmitting at the clock rate.
Question 24: Consider a 150 Mb/s link that is 800 km long, with a queue large enough
to hold 5,000 packets. Assume that packets arrive at the queue with an average
rate of 40,000 packets per second and that the average packet length is 3,000 bits.
Approximately, what is the propagation delay for the link?
Answer: 800 km × 5 microseconds per km = 4,000 microseconds, or 4 ms.
Question 25: For the Question 24:, what is the transmission time for an average length
packet?
Answer: Link speed is 150 bits per microsecond, so a 3,000 bit packet can be sent in 20
microseconds.
Question 26: For the Question 24:, what is the traffic intensity?
Answer: Bit arrival rate is 40,000 × 3,000 = 120 Mb/s. Since the link rate is 150
Mb/s, I = 0.8.
Question 27: For the Question 24:, what is the average number of packets in the queue?
Answer: ρ/(1 − ρ) = 0.8/(1 − 0.8) = 4 packets.
Question 28: What is the average number in the queue, if the average arrival rate is
80,000 packets per second?
Answer: In this case, the traffic intensity is 1.6, so the queue will be nearly full all the
time. So, the average number is just under 5,000 packets.
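The metrics of Questions 24 through 27, computed together (the average-queue-size formula ρ/(1 − ρ) is the M/M/1-style approximation the answer to Question 27 relies on):

```python
# Questions 24-28: basic link and queue metrics for the 150 Mb/s link.

link_bps = 150e6               # 150 Mb/s link
length_km = 800
prop_us_per_km = 5             # about 5 microseconds per km
arrival_rate = 40_000          # packets per second
pkt_bits = 3000

prop_delay_ms = length_km * prop_us_per_km / 1000   # 4.0 ms
transmit_us = pkt_bits / link_bps * 1e6             # 20.0 microseconds
rho = arrival_rate * pkt_bits / link_bps            # traffic intensity 0.8
avg_queue = rho / (1 - rho)                         # about 4 packets
```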
Question 29: A user in Hyderabad, connected to the Internet via a 5 Mb/s connection
retrieves a 50 KB (B=bytes) web page from a web server in New York, where the
page references 4 images of 300 KB each. Assume that the one way propagation
delay is 20 ms. Approximately how long does it take for the page (including images)
to appear on the user’s screen, assuming persistent HTTP?
Answer: Total time is 3 RTT + Transmission time.
3 RTT = 120 ms and Transmission time = (50 + 4 × 300) KB/(5 Mb/s) = 10 Mb/(5 Mb/s) = 2 seconds.
Total time = 2.12 seconds.
Question 30: For the Question 29, how long would it take using non-persistent HTTP
(assume a single connection)?
Answer: 2 × (1 + number of objects in page) RTT + Transmission time = 10 RTT + 2 seconds
= 400 ms + 2 seconds = 2.4 seconds.
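Both HTTP timings, computed side by side (KB taken as 1000 bytes, which matches the 2-second transmission time in the answers):

```python
# Questions 29-30: page download time = setup/request RTTs plus the time
# to push all the bytes through the 5 Mb/s access link.

one_way_s = 0.020
rtt = 2 * one_way_s                        # 40 ms round trip
page_bits = (50 + 4 * 300) * 1000 * 8      # 50 KB page + four 300 KB images
link_bps = 5e6

transmission = page_bits / link_bps        # 2.0 s
persistent = 3 * rtt + transmission        # 2.12 s
non_persistent = 2 * (1 + 4) * rtt + transmission   # 2.4 s
```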
Question 31: Suppose a movie studio wants to distribute a new movie as a digital file to
1,000 movie theaters across country using peer-to-peer file distribution. Assume
that the studio and all the theaters have DSL connections with an 8 Mb/s
downstream rate and a 4 Mb/s upstream rate and that the file is 10 GB long.
Approximately, how much time is needed to distribute the file to all the theaters
under ideal conditions?
Answer: The total upstream bandwidth is about 4 Gb/s. Since the file must be delivered
to 1,000 theaters, we have 10 TB of data to be delivered. At 4 Gb/s, this takes 20,000
seconds, or roughly 6 hours.
Question 32: For the Question 31, suppose the studio wanted to use the client-server
method instead. What is the smallest link rate that is required at the studio that
will allow the file to be distributed in under 40,000 seconds?
Answer: This time period is twice the time used for the first part, so the server's
upstream bandwidth must be half as large as the upstream bandwidth of the peers in
the first part. So, 2 Gb/s is enough.
Question 33: Suppose a file of 10,000 bytes is to be sent over a line at 2400 bps.
Calculate the overhead in bits and time in using asynchronous communication.
Assume one start bit and a stop element of length one bit, and 8 bits to send the
byte itself for each character. The 8-bit character consists of all data bits, with no
parity bit.
Answer: Each character has 25% overhead (2 extra bits per 8 data bits). For 10,000
characters, there are 20,000 extra bits. This would take an extra 20,000/2400 = 8.33 seconds.
Question 34: Calculate the overhead in bits and time using synchronous
communication. Assume that the data are sent in frames. Each frame consists of
1000 characters - 8000 bits and an overhead of 48 control bits per frame.
Answer: The file takes 10 frames, or 480 additional bits. The transmission time for the
additional bits is 480/2400 = 0.2 seconds.
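The two overhead calculations, side by side:

```python
# Questions 33-34: framing overhead at 2400 bps. Asynchronous framing
# adds a start bit and a stop bit around every 8-bit character;
# synchronous framing adds 48 control bits per 1000-character frame.

chars = 10_000
rate_bps = 2400

async_overhead_bits = chars * 2                  # 20,000 extra bits
async_extra_s = async_overhead_bits / rate_bps   # about 8.33 s

frames = chars // 1000                           # 10 frames
sync_overhead_bits = frames * 48                 # 480 extra bits
sync_extra_s = sync_overhead_bits / rate_bps     # 0.2 s
```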
Question 35: What would the answers to Question 33 and Question 34 be for a file of
100,000 characters?
Answer: Ten times as many extra bits and ten times as long for both.
Question 36: What would the answers to Question 33 and Question 34 be for the
original file of 10,000 characters, except at a data rate of 9600 bps?
Answer: The number of overhead bits would be the same, and the time would be
decreased by a factor of 4 (= 9600/2400).
stack, or peer, on the other device. This allows computers running different operating
systems to communicate with each other easily.
If you are having trouble understanding this concept, then imagine that you need to
mail a large document to a friend, but do not have a big enough envelope. You could
put a few pages into several small envelopes, and then label the envelopes so that your
friend knows what order the pages go in. This is exactly the same thing that the
Network Layer does.
Since other types of devices, such as printers and routers, can be involved in network
communication, devices (including computers) on the network are actually referred to
as nodes. Therefore, a client computer on the network or a server on the network would
each be referred to as a node.
While sending data over a network, it moves down through the OSI stack and is
transmitted over the transmission media. When the data is received by a node, such as
another computer on the network, it moves up through the OSI stack until it is again in
a form that can be accessed by a user on that computer.
Each of the layers in the OSI model is responsible for certain aspects of getting user
data into a format that can be transmitted on the network. Some layers are for
establishing and maintaining the connection between the communicating nodes, and
other layers are responsible for the addressing of the data so that it can be determined
where the data originated (on which node) and where the data's destination is.
An important aspect of the OSI model is that each layer in the stack provides services to
the layer directly above it. Only the Application layer, which is at the top of the stack,
would not provide services to a higher-level layer.
The process of moving user data down the OSI stack on a sending node (again, such as
a computer) is called encapsulation. The process of moving raw data received by a node
up the OSI stack is referred to as de-encapsulation.
[Figure: encapsulation: data passes down the sender's OSI stack, across the physical link, and up the receiver's stack]
To encapsulate means to enclose or surround, and this is what happens to data that is
created at the Application layer and then moves down through the other layers of the
OSI model. A header, which is a segment of information affixed to the beginning of the
data, is generated at each layer of the OSI model, except for the Physical layer.
This means that the data is encapsulated in a succession of headers: first the
Presentation layer header, then the Session layer header, and so on. When the data
reaches the Physical layer, it is like a candy bar that has been enclosed in several
different wrappers.
When the data is transmitted to a receiving node, such as a computer, the data travels
up the OSI stack and each header is stripped off of the data. First, the Data Link layer
header is removed, then the Network layer header, and so on. Also, the headers are not
just removed by the receiving computer; the header information is read and used to
determine what the receiving computer should do with the received data at each layer of
the OSI model.
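The wrapping and unwrapping can be illustrated with a toy sketch (plain Python dictionaries standing in for real headers; this is an illustration of the idea, not a real protocol stack):

```python
# Toy encapsulation: each layer wraps the payload with its own header on
# the way down the stack, and strips it again on the way up.

LAYERS = ["Application", "Presentation", "Session",
          "Transport", "Network", "Data Link"]

def encapsulate(data):
    for layer in LAYERS:                  # moving down the stack
        data = {"header": layer, "payload": data}
    return data

def decapsulate(pdu):
    while isinstance(pdu, dict):          # moving up: read and strip headers
        pdu = pdu["payload"]
    return pdu

frame = encapsulate("user data")
assert frame["header"] == "Data Link"     # the outermost wrapper is added last
assert decapsulate(frame) == "user data"  # the receiver recovers the data
```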
In the OSI model, the sending computer uses these headers to communicate with the
receiving computer and provide it with useful information. As the data travels
up the layers of the peer computer, each header is removed by its equivalent protocol.
These headers contain different information depending on the layer they receive the
header from, but tell the peer layer important information, including packet size,
frames, and datagrams.
Control is passed from one layer to the next, starting at the application layer in one
station, and proceeding to the bottom layer, over the channel to the next station and
back up the hierarchy.
Each layer's header and data are together called a service data unit. Although it may seem confusing, each
layer has a different name for its service data unit. Here are the common names for
service data units at each level of the OSI model.
Layer # | Name | Encapsulation Units | Devices | Keywords/Description
7 | Application | data | PC | Network services for application processes, such as file, print, messaging, database services
6 | Presentation | data | | Standard interface to data for the application layer. MIME encoding, data encryption, conversion, formatting, compression
5 | Session | data | | Inter-host communication. Establishes, manages and terminates connections between applications
4 | Transport | segments | | Provides end-to-end message delivery and error recovery (reliability). Segmentation/desegmentation of data in proper sequence (flow control)
3 | Network | packets | router | Logical addressing and path determination. Routing. Reporting delivery errors
2 | Data Link | frames | bridge, switch, NIC | Physical addressing and access to media. Two sublayers: Logical Link Control (LLC) and Media Access Control (MAC)
1 | Physical | bits | repeater, hub, transceiver | Binary transmission signals and encoding. Layout of pins, voltages, cable specifications, modulation
The TCP/IP model, similar to the OSI model, has a set of layers. The OSI has seven
layers and the TCP/IP model has four or five layers, depending on different preferences.
Some people use the Application, Transport, Internet and Network Access layers.
Others split the Network Access layer into its Physical and Data Link components.
[Figure: the TCP/IP layers: data passes down the sender's stack through the Internet layer to the physical link, and back up at the receiver]
The OSI model and the TCP/IP model were both created independently. The TCP/IP
network model represents reality in the world, whereas the OSI model represents an
ideal.
Layer | Description | Protocols
Application | Defines TCP/IP application protocols and how host programs interface with transport layer services to use the network. | HTTP, Telnet, FTP, TFTP, SNMP, DNS, SMTP, X Windows, other application protocols
Transport | Provides communication session management between the nodes/computers. Defines the level of service and status of the connection used when transporting data. | TCP, UDP, RTP
Internet | Packages data into IP datagrams, which contain source and destination address information that is used to forward the datagrams between hosts and networks. Performs routing of IP datagrams. | IP, ICMP, ARP, RARP
Network Access | Specifies details of how data is physically sent through the network, including how bits are electrically signaled by hardware devices that interface directly with a network medium, such as coaxial cable, optical fiber, or twisted-pair copper wire. | Ethernet, Token Ring, FDDI, X.25, Frame Relay, RS-232
SNMP (Simple Network Management Protocol), SMTP (Simple Mail Transfer Protocol), DHCP
(Dynamic Host Configuration Protocol), RDP (Remote Desktop Protocol), etc.
If two hosts transmit data at the same instant, the signals will collide with each other,
destroying the data. If the data is destroyed during transmission, it will need to be
retransmitted. After a collision, each host will wait for a small interval of time and then
the data will be retransmitted.
This is a combination of the Data Link and Physical layers of the OSI model which
consists of the actual hardware.
We send data back and forth over the connection by speaking to one another over the
phone lines. Like the phone company, TCP guarantees that data sent from one end of
the connection actually gets to the other end and in the same order it was sent.
Otherwise, an error is reported.
TCP provides a point-to-point channel for applications that require reliable
communications. The Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP),
and Telnet are all examples of applications that require a reliable communication
channel. The order in which the data is sent and received over the network is critical to
the success of these applications. When HTTP is used to read from a URL, the data
must be received in the order in which it was sent. Otherwise, we end up with a
jumbled HTML file, a corrupt zip file, or some other invalid information.
Application: HTTP, ftp, telnet, SMTP,…
Transport: TCP, UDP,…
Network: IP,…
Link: Device driver,…
For many applications, the guarantee of reliability is critical to the success of the
transfer of information from one end of the connection to the other. However, other
forms of communication don't require such strict standards. In fact, they may be
slowed down by the extra overhead or the reliable connection may invalidate the service
altogether.
Consider, for example, a clock server that sends the current time to its client when
requested to do so. If the client misses a packet, it doesn't really make sense to resend
it because the time will be incorrect when the client receives it on the second try. If the
client makes two requests and receives packets from the server out of order, it doesn't
really matter because the client can figure out that the packets are out of order and
make another request. The reliability of TCP is unnecessary in this instance because it
causes performance degradation and may hinder the usefulness of the service.
Another example of a service that doesn't need the guarantee of a reliable channel is the
ping command. The purpose of the ping command is to test the communication
between two programs over the network. In fact, ping needs to know about dropped or
out-of-order packets to determine how good or bad the connection is. A reliable channel
would invalidate this service altogether.
The UDP protocol provides for communication that is not guaranteed between two
applications on the network. UDP is not connection-based like TCP. Rather, it sends
independent packets of data from one application to another. Sending datagrams is
much like sending a letter through the mail service: The order of delivery is not
important and is not guaranteed, and each message is independent of any others.
The UDP connectionless protocol differs from the TCP connection-oriented protocol in
that it does not establish a link for the duration of the connection. An example of a
connectionless protocol is postal mail. To mail something, you just write down a
destination address (and an optional return address) on the envelope of the item you're
sending and drop it in a mailbox. When using UDP, an application program writes the
destination port and IP address on a datagram and then sends the datagram to its
destination. UDP is less reliable than TCP because there are no delivery-assurance or
error-detection and error-correction mechanisms built into the protocol.
Application protocols such as FTP, SMTP, and HTTP use TCP to provide reliable,
stream-based communication between client and server programs. Other protocols,
such as the Time Protocol, use UDP because speed of delivery is more important than
end-to-end reliability.
Ports: The TCP and UDP protocols use ports to map incoming data to a particular
process running on a computer.
In datagram-based communication such as UDP, the datagram packet contains the
port number of its destination and UDP routes the packet to the appropriate
application, as illustrated in this figure:
3.7 Understanding Ports 68
Elements of Computer Networking OSI and TCP/IP Models
[Figure: a TCP or UDP packet carries a destination port number that maps its data to the right application]
Port numbers range from 0 to 65,535 because ports are represented by 16-bit numbers.
The port numbers ranging from 0 to 1023 are restricted; they are reserved for use by
well-known services such as HTTP and FTP and other system services. These ports are
called well-known ports. Your applications should not attempt to bind to them.
Port Protocol
21 File Transfer Protocol
23 Telnet Protocol
25 Simple Mail Transfer Protocol
80 Hypertext Transfer Protocol
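A minimal demonstration of ports in practice, using Python's standard socket module on the loopback interface (binding to port 0 asks the OS for any free, non-well-known port):

```python
# UDP port demo: the receiver binds a port, the sender addresses a
# datagram to that port, and the OS routes it by port number.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))           # 0 = let the OS pick a free port
port = receiver.getsockname()[1]          # the port the datagram will target

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)      # datagram delivered by port number
print(data)                               # b'hello'
receiver.close()
sender.close()
```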
Question 5: Which OSI layer is concerned with reliable end-to-end delivery of data?
A) Application B) Transport C) Network D) Data Link
Answer: B
Question 6: Logical addressing is found in the ____ layer, while physical addressing is
found in the __ layer.
A) Physical, Network B) Network, Physical
C) Data Link, Network D) Network, Data Link
Answer: D
Question 7: The OSI Reference Model layers, in order from top to bottom, are:
A) Application, Physical, Session, Transport, Network, Data Link, Presentation
B) Application, Presentation, Network, Session, Transport, Data Link, Physical
C) Physical, Data Link, Network, Transport, Session, Presentation, Application
D) Application, Presentation, Session, Transport, Network, Data Link, Physical
Answer: D
Question 8: The process-to-process delivery of the entire message is the responsibility
of the ___ layer.
A) Network B) Transport C) Application D) Physical
Answer: B
Question 9: The ___ layer is the layer closest to the transmission medium.
A) Physical B) Data link C) Network D) Transport
Answer: A
Question 10: Mail services are available to network users through the __ layer.
A) Data link B) Physical C) Transport D) Application
Answer: D
Question 11: As the data packet moves from the lower to the upper layers, headers are
__
A) Added B) Subtracted C) Rearranged D) Modified
:B
Question 12: As the data packet moves from the upper to the lower layers, headers are
___
A) Added B) Removed C) Rearranged D) Modified
:A
Question 13: The __ layer lies between the network layer and the application layer.
A) Physical B) Data link C) Transport D) None of the above
:C
Question 14: Layer 2 lies between the physical layer and the __ layer.
A) Network B) Data link C) Transport D) None of the above
:A
Question 15: When data are transmitted from device A to device B, the header from A’s
layer 4 is read by B’s __ layer.
A) Physical B) Transport C) Application D) None of the above
:B
Question 16: The ___ layer changes bits into electromagnetic signals.
A) Physical B) Data link C) Transport D) None of the above
:A
Question 17: The physical layer is concerned with the transmission of ___over the
physical medium.
A) Programs B) Dialogs C) Protocols D) Bits
:D
Question 18: Which layer functions as a connection between user support layers and
network support layers?
A) Network layer B) Physical layer C) Transport layer D) Application layer
:C
Question 19: What is the main function of the transport layer?
A) Node-to-node delivery B) Process-to-process delivery
C) Synchronization D) Updating and maintenance of routing tables
:B
Question 20: Which of the following is an application layer service?
A) Remote log-in B) File transfer and access
C) Mail service D) All the above
:D
Question 21: Best effort means packets are delivered to destinations as fast as possible.
Is it true or false?
Answer: False. Best effort refers to no guarantees about performance of any kind, not
high performance.
Question 22: In the OSI model, the transport layer can directly invoke (use) the data link
layer. Is it true or false?
Answer: False. In the OSI model a layer can only use the service provided by the layer
below it. In this case, the transport layer can only use the service provided by the
network layer.
Question 23: Data are transmitted over an internet in packets from a source system to a
destination across a path involving a single network and routers. Is it true or false?
Answer: False. A path across an internet typically involves multiple networks connected
by routers.
Question 24: In the TCP/IP model, exactly one protocol data unit (PDU) in layer n is
encapsulated in a PDU at layer (n – 1). It is also possible to break one n-level PDU
into multiple (n – 1)-level PDUs (segmentation) or to group multiple n-level PDUs
into one (n – 1)-level PDU (blocking). In the case of segmentation, is it necessary
that each (n – 1)-level segment contain a copy of the n-level header?
In the case of blocking, is it necessary that each n-level PDU retain its own header,
or can the data be consolidated into a single n-level PDU with a single n-level
header?
Answer: No. This would violate the principle of separation of layers. To layer (n – 1), the
n-level PDU is simply data. The (n – 1) entity does not know about the internal format of
the n-level PDU. It breaks that PDU into fragments and reassembles them in the proper
order.
Question 25: For Question 24, in the case of blocking, is it necessary that each n-
level PDU retain its own header, or can the data be consolidated into a single n-level
PDU with a single n-level header?
Answer: Each n-level PDU must retain its own header, for the same reason given in
Question 24.
Question 26: A TCP segment consisting of 1500 bits of data and 20 bytes of header is
sent to the IP layer, which appends another 20 bytes of header. The packet is then
transmitted through two networks, each of which uses a 3-byte packet header. The
destination network has a maximum packet size of 800 bits. How many bits,
including headers, are delivered to the network layer protocol at the destination?
Answer: Data + transport header + Internet (IP) header = 1500 + 160 + 160 = 1820
bits. This data is delivered in a sequence of packets, each of which contains 24 bits of
network header and up to 776 bits of higher-layer headers and/or data. Three network
packets are needed. Total bits delivered = 1820 + 3 × 24 = 1892 bits.
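The arithmetic in this answer can be reproduced with a short calculation (values taken directly from the question):

```python
import math

data_bits = 1500
tcp_header_bits = 20 * 8   # 20-byte TCP header
ip_header_bits = 20 * 8    # 20-byte IP header
payload = data_bits + tcp_header_bits + ip_header_bits  # 1820 bits for the network

net_header_bits = 3 * 8    # 3-byte network packet header
max_packet_bits = 800
max_payload = max_packet_bits - net_header_bits          # 776 bits per packet

packets = math.ceil(payload / max_payload)               # 3 packets needed
total_bits = payload + packets * net_header_bits         # 1820 + 3 * 24
print(packets, total_bits)  # 3 1892
```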
Question 27: In the OSI model, the ____ layer is concerned with finding the best path
for the data from one point to the next within the network.
A) Data Link B) Network C) Physical D) Application
Answer: B
Question 28: Error detection is performed at the ____ layer of the OSI model.
A) Data Link B) Transport C) Network D) Both A and B
Answer: D
Question 29: ____ is a very powerful error detection technique and should be considered
for all data transmission systems.
A) Vertical redundancy check B) Cyclic redundancy check
C) Simple parity D) Horizontal parity
Answer: B
Question 30: Which layer addresses do routers use to determine a packet's path?
A) Data Link B) Network C) Physical D) Application
Answer: B
Question 31: Why does the data communication industry use the layered OSI reference
model?
1. It divides the network communication process into smaller and simpler
components, thus aiding component development, design, and troubleshooting.
2. It enables equipment from different vendors to use the same electronic
components, thus saving research and development funds.
3. It supports the evolution of multiple competing standards and thus provides
business opportunities for equipment manufacturers.
4. It encourages industry standardization by defining what functions occur at each
layer of the model.
A) 1 only B) 1 and 4 C) 2 and 3 D) 3 only
Answer: B. The main advantage of a layered model is that it allows application
developers to change aspects of a program in just one layer of the model's
specifications. Advantages of using the OSI layered model include, but are not limited
to, the following:
It divides the network communication process into smaller and simpler
components, thus aiding component development, design, and
troubleshooting
It allows multiple-vendor development through standardization of network
components
It encourages industry standardization by defining what functions occur at
each layer of the model
It allows various types of network hardware and software to communicate
Question 32: Which of the following functionalities must be implemented by a transport
protocol over and above the network protocol?

Chapter 4: Networking Devices
4.1 Glossary
Bridge: Network segments that typically use the same communication protocol
use bridges to pass information from one network segment to the other.
Gateway: When different communications protocols are used by networks,
gateways are used to convert the data from the sender's protocol into a form
the receiver's network understands.
Hub: Another name for a hub is a concentrator. Hubs reside in the core of the
LAN cabling system. The hub connects workstations and sends every
transmission to all the connected workstations.
Media Dependent Adapter (MDA): An MDA is a plug-in module allowing selection among
fiber-optic, twisted pair, and coaxial cable.
Media filter: When the electrical characteristics of various networks are
different, media filter adapter connectors make the connections possible.
Multistation Access Unit (MAU): MAUs are special concentrators or hubs used in
Token Ring networks instead of Ethernet networks.
Modem: A modem is a device that converts digital signals to analog signals and
analog signals to digital signals.
Network Interface Card (NIC): NICs are printed circuit boards that are installed in
computer workstations. They provide the physical connection and circuitry
required to access the network.
Repeater: A connectivity device used to regenerate and amplify weak signals, thus
extending the length of the network. Repeaters perform no other action on the
data.
Router: Links two or more networks together, such as Internet Protocol
networks. A router receives packets and selects the optimum path to forward the
packets to other networks.
Switch: A connection device in a network that functions much like a bridge, but
directs transmissions to specific workstations rather than forwarding data to all
workstations on the network.
Transceiver: The name transceiver is derived from the combination of the words
transmitter and receiver. It is a device that both transmits and receives signals
and connects a computer to the network. A transceiver may be external or
located internally on the NIC.
Firewall: A firewall provides controlled data access. Firewalls can be hardware or
software based and sit between networks. They are an essential part of a
network's security strategy.
A NIC provides physical access to a networking medium and often provides a low-level
addressing system through the use of MAC addresses. It allows users to connect to
each other either by using cables or wirelessly.
The network interface card (NIC) is an add-on component for a computer, much like a
video card or sound card. On most systems the NIC is integrated into the
system board. On others it has to be installed into an expansion slot.
Most network interface cards use the Ethernet protocol as the language of the data
that is being transferred back and forth. However, network interface cards do not all
necessarily need physical Ethernet or other cables to be functional. Some have wireless
capabilities through a small built-in antenna that uses radio waves to transmit
information. The computer must have a software driver installed to enable it to interact
with the NIC. These drivers enable the operating system and higher-level protocols to
control the functions of the adapter.
Each NIC has a unique Media Access Control (MAC) address to direct traffic. This unique
MAC address ensures that information is sent only to the intended computer and not to
unintended ones. Circled in the picture below is an example of an integrated network
interface card.
The MAC (Media Access Control) address, or hardware address, is a 12-digit number
consisting of digits 0-9 and letters A-F. It is basically a hexadecimal number assigned to
the card. The MAC address consists of two pieces: the first signifies which vendor it
comes from; the second is a serial number unique to that manufacturer.
Example MAC addresses:
00-B0-D0-86-BB-F7 01-23-45-67-89-AB 00-1C-B3-09-85-15
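The vendor/serial split described above can be illustrated with a short Python sketch (the helper name split_mac is ours; resolving the OUI half to an actual vendor name would require the IEEE registry, which is not shown):

```python
def split_mac(mac: str):
    """Split a MAC address into its OUI (vendor) and serial (vendor-assigned) halves."""
    octets = mac.replace("-", ":").split(":")
    assert len(octets) == 6, "a MAC address has six octets (12 hex digits)"
    oui = "-".join(octets[:3]).upper()      # first 24 bits: vendor identifier
    serial = "-".join(octets[3:]).upper()   # last 24 bits: per-card serial number
    return oui, serial

print(split_mac("00-B0-D0-86-BB-F7"))  # ('00-B0-D0', '86-BB-F7')
```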
The NIC performs the following functions:
It translates data from the parallel data bus to a serial bit stream for
transmission across the network.
It formats packets of data in accordance with the network protocol.
It transmits and receives data based on the hardware address of the card.
4.4.3 Transceivers
The term transceiver does not necessarily describe a separate network device; a
transceiver is often embedded in devices such as network cards.
Transceiver is a short name for transmitter-receiver. It is a device that both transmits
and receives analog or digital signals. The term transceiver is used most frequently to
describe the component in local-area networks (LANs) that actually applies signals onto
the network wire and detects signals passing through the wire. For many LANs, the
transceiver is built into the network interface card (NIC). Older types of networks,
however, require an external transceiver.
The transceiver does not make changes to information transmitted across the network;
it adapts the signals so devices connected by varying media can interpret them. A
transceiver operates at the physical layer of the OSI model. Technically, on a LAN the
transceiver is responsible for placing signals onto the network media and for detecting
incoming signals traveling through the same cable. Given this description of the function
of a transceiver, it makes sense that the technology would be found with network
interface cards (NICs).
Amplifier
Amplifier is an electronic circuit that increases the power of an input signal. There are
many types of amplifiers ranging from voice amplifiers to optical amplifiers at different
frequencies.
Repeater
A repeater is an electronic circuit that receives a signal and retransmits the same
signal with higher power. Therefore, a repeater consists of a signal receiver, an
amplifier, and a transmitter. Repeaters are often used in submarine communication
cables, as the signal would be attenuated to just random noise when travelling such a
distance. Different types of repeaters have different configurations depending
on the transmission medium. If the medium is microwaves, a repeater may consist of
antennas and waveguides. If the medium is optical, it may contain photo detectors and
light emitters.
[Figure: a repeater placed between a sending node and a receiving node spanning 6000 meters.]
Thicknet can normally transmit a distance of 500 meters, and this can be extended by
introducing repeaters. Thinnet can normally transmit a distance of 185 meters, and
can also be extended by using a repeater. This is the advantage of using a repeater: if a
network layout exceeds the normal specifications of the cable, we can use repeaters to
build the network. This allows for greater lengths when planning the cabling scheme.
Repeaters perform no other action on the data. Repeaters were originally separate
devices. Today a repeater may be a separate device or it may be incorporated into a
hub. Repeaters operate at the physical layer of the OSI model.
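As a rough illustration of the distances involved, the number of repeaters needed to cover a span can be estimated with simple arithmetic (a back-of-the-envelope sketch only; real Ethernet designs also limit how many repeaters may be cascaded, e.g. the 5-4-3 rule):

```python
import math

def repeaters_needed(total_m: float, segment_limit_m: float) -> int:
    """Repeaters sit between segments, so a span of k segments needs k - 1 repeaters."""
    segments = math.ceil(total_m / segment_limit_m)
    return max(segments - 1, 0)

print(repeaters_needed(6000, 500))  # 11 for 500 m Thicknet segments
print(repeaters_needed(6000, 185))  # 32 for 185 m Thinnet segments
```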
4.4.5 Hubs
Hubs are commonly used to connect segments of a LAN. A hub contains multiple ports.
When a packet arrives at one port, it is copied to all the other ports (broadcast) so that
all segments of the LAN can see all packets. When the packets are copied, the
destination address in the frame does not change to a broadcast address. The hub does
this in a rudimentary way; it simply copies the data to all of the nodes connected
to the hub.
The main function of the hub is to broadcast signals to different workstations in a LAN.
Generally speaking, the term hub is used instead of repeater when referring to the device
that serves as the center of a network.
4.4.6 Modems
A modem is a device that converts digital signals to analog signals and analog signals to
digital signals. The word modem stands for modulator and demodulator. The process of
converting digital signals to analog signals is called modulation. The process of
converting analog signals to digital signals is called demodulation. Modems are used
with computers to transfer data from one computer to another computer through
telephone lines.
Analog Connection
The connection between the modem and the telephone line is called an analogue connection.
The modem converts digital signals from a computer to analogue signals that are then sent down
the telephone line. A modem on the other end converts the analogue signal back to a
digital signal the computer can understand. A workstation is connected to an analogue
modem. The analogue modem is then connected to the telephone exchange's analogue
modem, which is then connected to the internet.
Digital Connection
The connection between the modem and the computer is called a digital connection.
Types of Modems
There are two types of modems:
Internal modems
External modems
Internal Modems
An internal modem fits into an expansion slot inside the computer. It is directly linked to
the telephone line through the telephone jack. It is normally less expensive than an
external modem, and its transmission speed is also lower than that of an external modem.
External Modems
An external modem is a unit outside the computer and is connected to the computer
through a serial port. It is also linked to the telephone line through a telephone jack.
External modems are more expensive and have more operational features and higher
transmission speeds.
Advantages of Modems
Inexpensive hardware and telephone lines
Easy to setup and maintain
Disadvantage of Modems
Very slow performance
When a frame arrives at a bridge, the bridge not only regenerates the frame but also
checks the address of the destination and forwards the new copy only to the segment
to which the destination address belongs.
A bridge device filters data traffic at a network boundary. Bridges reduce the amount of
traffic on a LAN by dividing it into segments. Key features of a bridge are mentioned
below:
A bridge operates at both the physical and data-link layers
A bridge uses a table for filtering/forwarding decisions
A bridge does not change the physical (MAC) addresses in a frame
Types of bridges:
1. Simple bridge [flooding]: places an incoming
frame onto all outgoing ports except the original incoming port.
2. Learning (transparent) bridge: stores the origin of a frame (from which port) and
later uses this information to place frames destined to that port.
3. Spanning tree bridge: uses a subset of the LAN topology for a loop-free
operation.
4. Source routing bridge: depends on routing information in the frame to place the
frame on an outgoing port.
[Flowchart: bridge forwarding logic for a frame arriving on port X. If the destination address is not found in the table, forward the frame to all LANs except X. If it is found and its direction is a port other than X, forward the frame to the correct LAN; if its direction is port X itself, discard the frame and count it among discarded frames.]
1. If the source address is not present in the forwarding table, the bridge adds the
source address and corresponding interface to the table. It then checks the
destination address to determine if it is in the table.
2. If the destination address is listed in the table, the bridge determines if the
destination address is on the same LAN as the source address. If it is, then the
bridge discards the frame, since all the nodes have already received the frame.
3. If the destination address is listed in the table but is on a different LAN than
the source address, then the frame is forwarded to that LAN.
4. If the destination address is not listed in the table, then the bridge forwards the
frame to all the LANs except the one which originally received the frame.
This process is called flooding.
In some bridges, if the bridge has not accessed an address in the forwarding table over
a period of time, the address is removed to free up memory space on the bridge. This
process is referred to as aging.
Packets with a source A and destination B are received and discarded, since the node B
is directly connected to the LAN-1, whereas packets from A with a destination C are
forwarded to network LAN-2 by the bridge.
At the time of installation of a transparent bridge, the table is empty. When a packet is
encountered, the bridge checks its source address and builds up the table by associating
the source address with the port address to which it is connected. The flowchart explains
the learning process.
[Flowchart: bridge learning logic. If the source address is not found in the table, add the source to the table with its direction (port) and a timer; if it is found, update its direction and timer.]
Table Building
The table build-up operation is illustrated in the figure: a bridge with three ports connects LAN-1 (nodes A and B, on port 1), LAN-2 (nodes C and D, on port 2), and LAN-3 (nodes E and F, on port 3). Initially the table (Address, Port) is empty.
1. When node A sends a frame to node D, the bridge does not have any entry for
either D or A. The frame goes out from all three ports. The frame floods the
network. However, by looking at the source address, the bridge learns that node
A must be located on the LAN connected to port 1.
This means that frames destined for A (in the future) must be sent out through port
1. The bridge adds this entry to its table. The table now has its first entry.
Address Port
A 1
2. When node E sends a frame to node A, the bridge has an entry for A, so it
forwards the frame only to port 1. There is no flooding. Also, it uses the source
address of the frame (E in this case), to add a second entry to the table.
Address Port
A 1
E 3
3. When node B sends a frame to C, the bridge has no entry for C, so once again it
floods the network and adds one more entry to the table.
Address Port
A 1
E 3
B 1
4. The process of learning continues as the bridge forwards frames.
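The learning walkthrough above can be simulated with a minimal sketch of a transparent bridge (node and port labels follow the figure; timers and aging are ignored):

```python
class LearningBridge:
    """Minimal transparent-bridge model: learn source ports, then forward or flood."""

    def __init__(self):
        self.table = {}  # MAC address -> port

    def handle(self, src, dst, in_port):
        self.table[src] = in_port        # learning: remember which port src lives on
        out = self.table.get(dst)
        if out is None:
            return "flood"               # unknown destination: send out all other ports
        if out == in_port:
            return "discard"             # destination is on the same segment
        return f"forward to port {out}"

bridge = LearningBridge()
print(bridge.handle("A", "D", 1))  # flood (table empty)
print(bridge.handle("E", "A", 3))  # forward to port 1
print(bridge.handle("B", "C", 1))  # flood
print(bridge.table)                # {'A': 1, 'E': 3, 'B': 1}
```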
Loop Problem
Forwarding and learning processes work without any problem as long as there is no
redundant bridge in the system. On the other hand, redundancy is desirable from the
viewpoint of reliability, so that the function of a failed bridge is taken over by a
redundant bridge.
The existence of redundant bridges creates the so-called loop problem, as shown in the
figure. Assuming that after initialization the tables in both bridges are empty, let us
consider the following steps:
[Figure: two parallel bridges, Bridge-A and Bridge-B, both connecting LAN-1 (with node B) and LAN-2 (with node A).]
Step 1: Node A sends a frame to node B. Both bridges forward the frame to
LAN-1 and update their tables with the source address of A.
Step 2: Now there are two copies of the frame on LAN-1. The copy sent by
Bridge-A is received by Bridge-B and vice versa. As both bridges have no
information about node B, both will forward the frames to LAN-2.
Step 3: Again both bridges will forward the frames to LAN-1 because of the
lack of information about node B in their databases, and Step 2 will be
repeated, and so on.
So, the frame will continue to loop around the two LANs indefinitely.
An Example
Let us walk through the example below, running the spanning tree algorithm on it.
Note that some of the LAN segments have a cost 3 times that of others. The following
convention is used for the remaining discussion:
DC means designated cost for a LAN segment
Bridge-# means bridge number
[Figure: six bridges (Bridge-1 through Bridge-6) interconnecting six LAN segments. LAN-3 and LAN-6 have DC = 1; LAN-1, LAN-2, LAN-4, and LAN-5 have DC = 3. Bridge-1 is the root bridge.]
Step 1 of the algorithm is already shown in the first picture: Bridge-1 is chosen as the
root bridge, since all the bridges are assumed to have the same priority. The tie is
broken by choosing the bridge with the smallest ID number.
Next, we determine the root path cost (RPC) for each port on each bridge, i.e., the cost
of reaching the root bridge through that port. Then each bridge other than the root
chooses the port with the lowest RPC as its root port (RP). Ties are broken by choosing
the lowest-numbered port. The root port is used for all control messages from the root
bridge to this particular bridge.
[Figure: root path costs per port. Bridge-3 and Bridge-4 have RPC = 1 on port 1 (via LAN-3); Bridge-6 has RPC = 3 on port 1 (via LAN-1) and RPC = 2 on port 2 (via LAN-6 and LAN-3); Bridge-5 has RPC = 4 on port 1 (via LAN-4 and LAN-3) and RPC = 7 on port 2. The lowest-RPC port on each bridge is marked as its root port (RP).]
The root bridge is always the designated bridge for the LAN segments directly attached
to it. The ports by which the root bridge attaches to the LAN segments are thus
designated ports. We assume that no LAN segment attaches to the root bridge by more
than one port. Since a root port cannot be chosen as a designated port, we need not
even consider root ports as possible designated ports.
In the drawing on the next page, we see that LAN-1, LAN-2, and LAN-3 are directly
attached to the root bridge via ports 1, 2, and 3 respectively on the root bridge. Thus we
only need to consider LAN-4, LAN-5, and LAN-6. LAN-4 could use either port 2 on
Bridge-3 or port 3 on Bridge-4 as its designated port. The DPC for each is 1 since
anything sent from LAN-4 through such a port goes across LAN-3 to the root bridge and
the cost of LAN-3 is just 1.
Since we have a tie for the DP, we choose the one on the lowest-numbered bridge. That
means that Bridge-3 is the designated bridge and its port 2 is the designated port for
LAN-4. For LAN-5 there is only one port that could be chosen, so the designated port for
LAN-5 is port 2 on Bridge-5 and the designated bridge is Bridge-5. There is no choice
for LAN-6 either, as one port is a root port. Thus the designated port for LAN-6 is the
other one: port 2 on Bridge-4.
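The tie-breaking rule used for LAN-4 can be expressed as taking the minimum over the candidate ports, ordered first by designated path cost and then by bridge ID (a sketch using the costs from this example):

```python
# Candidate designated ports for LAN-4: (bridge_id, port, path_cost_to_root).
# Both candidates reach the root across LAN-3 (cost 1), so the DPC ties at 1.
candidates = [
    (3, 2, 1),  # port 2 on Bridge-3
    (4, 3, 1),  # port 3 on Bridge-4
]

# Choose by lowest cost first, then by lowest bridge ID to break the tie.
bridge_id, port, cost = min(candidates, key=lambda c: (c[2], c[0]))
print(f"designated port for LAN-4: port {port} on Bridge-{bridge_id} (DPC={cost})")
```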
[Figure: designated ports (DP). Port 2 on Bridge-3 (DPC = 1) is the DP for LAN-4, port 2 on Bridge-5 (DPC = 4) is the DP for LAN-5, and port 2 on Bridge-4 (DPC = 1) is the DP for LAN-6.]
[Figure: the resulting spanning tree. The root bridge's ports 1, 2, and 3 are designated ports for LAN-1, LAN-2, and LAN-3. Port 1 on Bridge-2, port 1 on Bridge-6, and port 3 on Bridge-4 are blocked; every other port is either a root port or a designated port.]
Finally, in step 4 each port that is not a root port or designated port is set to be in a
blocking state so that no traffic can flow through it. The blocked ports are X-ed out
above. This, then, produces our spanning tree (no loops). To better see the spanning
tree, the picture can be redrawn as shown on the next page, with the root bridge as the
root of the tree.
[Figure: a translational bridge connecting an FDDI ring LAN to an Ethernet LAN.]
Translational bridges are a type of transparent bridge that connects LANs that use
different protocols at the data link and physical layers, for example, FDDI (Fiber
Distributed Data Interface) and Ethernet.
[Figure: source route bridges linking multiple Token Ring LANs.]
Source route bridging is used in token ring networks. A source route bridge links two or
more rings together. There are fundamental characteristics in how a source route bridge
transmits a frame between rings. A source route bridge does not create and maintain
forwarding tables. The decision to forward or drop a frame is based on information
provided in the frame.
The destination station is responsible for maintaining routing tables that define a route
to all workstations on the network. The source workstation is responsible for
determining the path of a frame to its destination. If no route information is available,
then the source station has the ability to perform route discovery to learn the potential
paths that can be taken.
4.4.2 Switches
A switch is a device that filters and forwards packets between LAN segments. A switch
works at layer 2 of the OSI model. The main purpose of the switch is to concentrate
connectivity while making data transmission more efficient. Think of the switch as
something that combines the connectivity of a hub with the traffic regulation of a bridge
on each port. Switches make decisions based on MAC addresses.
A switch is a device that performs switching. Specifically, it forwards and filters OSI
layer 2 frames (chunks of data communication) between ports (connected cables)
based on the MAC addresses in the frames.
As discussed earlier, a hub forwards data to all ports, regardless of whether the data is
intended for the system connected to the port. This mechanism is inefficient, and
switches address this issue to some extent. A switch differs from a hub in that
it only forwards frames to the ports involved in the communication rather than to
all connected ports. Strictly speaking, a switch is not capable of routing traffic based on
IP address (layer 3), which is necessary for communicating between network segments
or within a large or complex LAN.
Fragment-free switching reads at least the first 64 bytes of the Ethernet frame before
switching it, to avoid forwarding Ethernet runt frames (Ethernet frames smaller than 64
bytes).
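The 64-byte rule can be sketched as a simple check (64 bytes is the minimum legal Ethernet frame size; this sketch ignores preamble and checksum details):

```python
MIN_FRAME_BYTES = 64  # minimum legal Ethernet frame; anything shorter is a runt

def should_forward(frame: bytes) -> bool:
    """Fragment-free rule of thumb: only forward once 64 bytes have arrived intact."""
    return len(frame) >= MIN_FRAME_BYTES

print(should_forward(b"\x00" * 64))  # True  - full minimum-size frame
print(should_forward(b"\x00" * 40))  # False - runt, likely a collision fragment
```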
4.4.3 Routers
What is a Router?
Routers are hardware devices that join multiple networks together. Technically, a router is
a Layer 3 device, meaning that it connects two or more networks and that the router
operates at the network layer of the OSI model.
Routers maintain a table (called a routing table) of the available routes and their
conditions and use this information along with distance and cost algorithms to
determine the best route for a given packet. Typically, a packet may travel through a
number of network points with routers before arriving at its destination.
The purpose of the router is to examine incoming packets (layer 3), choose the best path
for them through the network, and then switch them to the proper outgoing port.
Routers are the most important traffic-controlling devices on large networks.
Routers are networking devices that forward data packets between networks using
headers and forwarding tables to determine the best path to forward the packets.
Routers also provide interconnectivity between like and unlike media (networks which
use different protocols).
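Route selection from a routing table can be sketched with Python's ipaddress module using longest-prefix matching, the rule real routers use to choose among overlapping routes; the prefixes and next-hop names below are made up for illustration:

```python
import ipaddress

# Hypothetical routing table: destination prefix -> next hop.
routes = {
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
    ipaddress.ip_network("10.0.0.0/8"):  "router-A",
    ipaddress.ip_network("10.1.0.0/16"): "router-B",
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(next_hop("10.1.2.3"))  # router-B (the most specific /16 match)
print(next_hop("10.9.9.9"))  # router-A
print(next_hop("8.8.8.8"))   # default-gateway
```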
4.4.4 Gateways
The term gateway is used in networking to describe the entrance to the Internet. The
gateway controls traffic that travels from the inside network to the Internet and
provides security against traffic that wants to enter the inside network from the Internet.
A network gateway is an internetworking system which joins two networks that use
different base protocols. A network gateway can be implemented completely in software,
completely in hardware, or as a combination of both. Depending on the types of
protocols they support, network gateways can operate at any level of the OSI model.
Since a gateway (by definition) appears at the edge of a network, related capabilities like
firewalls tend to be integrated with it. On home networks, a router typically serves as
the network gateway although ordinary computers can also be configured to perform
equivalent functions.
[Figure: three sub-networks connected to one another through gateways.]
As mentioned earlier, the Internet is not a single network but a collection of networks
that communicate with each other through gateways. A gateway is defined as a system
that performs relay functions between networks, as shown in figure above. The different
networks connected to each other through gateways are often called subnetworks,
because they are a smaller part of the larger overall network.
With TCP/IP, all interconnections between physical networks are through gateways. An
important point to remember for use later is that gateways route information packets
based on their destination network name, not the destination machine. Gateways are
completely transparent to the user.
If the default gateway becomes unavailable, the system cannot communicate outside its
own subnet, except with systems that it had established connections with prior to
the failure.
Gateway
The difference between a gateway and a router is this: a gateway is defined as a network
node that allows a network to interface with another network that uses different protocols,
while a router is a device that is capable of sending and receiving data packets between
computer networks, also creating an overlay network.
Gateways and routers are two terms that are often confused due to their similarities. Both
gateways and routers are used to regulate traffic between separate networks. However,
these are two different technologies and are used for different purposes.
The term gateway can be used to refer to two different things: a gateway and a default
gateway. These two terms should not be confused. In terms of a communications
network, a gateway is defined as a network node that allows a network to interface with
another network that uses different protocols. In simple terms, a gateway allows two
different networks to communicate with each other. It contains devices such as protocol
translators, impedance matching devices, rate converters, or signal translators to allow
system interoperability.
A protocol translation/mapping gateway interconnects networks that use different
network protocol technologies. A gateway acts as a network point that serves as an
entrance to another network. The gateway can also allow the network to connect the
computer to the internet. Many routers are available with gateway technology,
which knows where to direct the packet of data when it arrives at the gateway.
Gateways are often associated with both routers and switches.
Router
A router is a device that is capable of sending and receiving data packets between
computer networks, also creating an overlay network. A router connects two or more
data lines, so when a packet comes in through one line, the router reads the address
information in the packet to determine the right destination; it then uses the
information in its routing table or routing policy to direct the packet to the next
network. On the internet, routers perform the traffic-directing functions. Routers can
be wireless as well as wired.
The most common type of router is the small office or home router. These are used for
passing data from the computer to the owner's cable or DSL modem, which is
connected to the internet. Other routers are huge enterprise types that connect large
businesses to powerful routers that forward data to the Internet.
When connected in interconnected networks, the routers exchange data such as
destination addresses by using a dynamic routing protocol. Each router is responsible
for building up a table that lists the preferred routes between any two systems on the
interconnected networks. Routers can also be used to connect two or more logical
groups of computer devices known as subnets. Routers can offer multiple features such
as a DHCP server, NAT, Static Routing, and Wireless Networking.
These days, routers are mostly available with built-in gateway systems, making it easier
for users, who do not have to buy separate systems.
4.4.5 Firewalls
The term firewall was derived from construction, where a firewall is intended to prevent
the spread of fire from one structure to another. From the computer security perspective,
the Internet is an unsafe environment; therefore firewall is an excellent metaphor for
network security. A firewall is a system designed to prevent unauthorized access to or
from a private network. Firewalls can be implemented in either hardware or software
form, or a combination of both. Firewalls prevent unauthorized users from accessing
private networks. A firewall sits between the two networks, usually a private network and
a public network such as the Internet.
[Figure: A firewall sits between the internal (secure) network and the Internet (unsecure)]
The functions of a router, hub and a switch are all quite different from one another,
even if at times they are all integrated into a single device. Let's start with the hub and
the switch since these two devices have similar roles on the network. Each serves as a
central connection for all of your network equipment and handles a data type known as
frames. Frames carry the data. When a frame is received, it is amplified and then
transmitted on to the port of the destination PC. The big difference between these two
devices is in the method in which frames are being delivered.
In a hub, a frame is broadcast to every one of its ports. It doesn't matter that the frame is
only destined for one port; the hub cannot distinguish which port a frame should be
sent to. Broadcasting it on every port ensures that it will reach its intended destination.
This places a lot of traffic on the network and can lead to poor network response times.
Additionally, a 10/100Mbps hub must share its bandwidth with each and every one of
its ports. So, when only one PC is broadcasting, it will have access to the maximum
available bandwidth. If, however, multiple PCs are broadcasting, then that bandwidth
will need to be divided among all of those systems, which will degrade performance.
A switch, however, keeps a record of the addresses of all the devices connected to
it. With this information, a switch can identify which system is sitting on which port.
So, when a frame is received, it knows exactly which port to send it to, without
significantly increasing network response times. And, unlike a hub, a 10/100Mbps
switch will allocate a full 10/100Mbps to each of its ports. So regardless of the number
of PCs transmitting, users will always have access to the maximum amount of
bandwidth. It's for these reasons why a switch is considered to be a much better choice
than a hub.
Routers are completely different devices. Where a hub or switch is concerned with
transmitting frames, a router's job, as its name implies, is to route packets to other
networks until that packet ultimately reaches its destination. One of the key features of
a packet is that it not only contains data, but the destination address of where it's
going.
A router is typically connected to at least two networks, commonly two Local Area
Networks (LANs) or Wide Area Networks (WAN) or a LAN and its ISP's network, for
example, your PC or workgroup and EarthLink. Routers are located at gateways, the
places where two or more networks connect. Using headers and forwarding tables,
routers determine the best path for forwarding the packets. Routers use protocols such
as ICMP to communicate with each other and configure the best route between any two
hosts.
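The forwarding decision described above can be sketched in a few lines. This is a minimal illustration, not a real router: the three-entry forwarding table and the next-hop addresses below are invented for the example, and real routers fill such tables from routing protocols or static configuration. The rule shown, longest-prefix match, is how IP routers choose among overlapping table entries.

```python
import ipaddress

# Hypothetical forwarding table: prefix -> next-hop address.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",
    ipaddress.ip_network("0.0.0.0/0"): "192.168.1.254",  # default route
}

def next_hop(destination: str) -> str:
    """Pick the most specific (longest) matching prefix for the destination."""
    dest = ipaddress.ip_address(destination)
    matches = [p for p in forwarding_table if dest in p]
    best = max(matches, key=lambda p: p.prefixlen)
    return forwarding_table[best]

print(next_hop("10.1.2.3"))  # matches both 10/8 and 10.1/16; the /16 wins
print(next_hop("8.8.8.8"))   # only the default route matches
```

Note that every address matches the default route (prefix length 0), so the lookup never fails; the default entry simply loses to any more specific prefix.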
[Figure: A message travels from source to destination through two packet switches, shown without segmentation (the full message is forwarded hop by hop) and with segmentation (the message travels as a stream of packets)]
A) Consider sending the message from source to destination without message
segmentation. How long does it take to move the message from the source host to
the first packet switch? Keeping in mind that each switch uses store-and-forward
packet switching, what is the total time to move the message from source host to
destination host?
B) Now suppose that the message is segmented into 5,000 packets, with each
packet being 1,500 bits long. How long does it take to move the first packet from
source host to the first switch? When the first packet is being sent from the first
switch to the second switch, the second packet is being sent from the source host to
the first switch. At what time will the second packet be fully received at the first
switch?
C) How long does it take to move the file from source host to destination host when
message segmentation is used? Compare this result with your answer in part (A)
and comment.
Answer: The message is 7.5 × 10^6 bits (5,000 packets × 1,500 bits), and each link has a
rate of 1.5 Mbps (consistent with part B, where a 1,500-bit packet takes 1 msec per link).
A) Time to send the message from the source host to the first packet switch =
(7.5 × 10^6)/(1.5 × 10^6) sec = 5 sec.
With store-and-forward switching, the total time to move the message from source host to
destination host = 5 sec × 3 hops = 15 sec.
B) Time to send the 1st packet from the source host to the first packet switch =
(1.5 × 10^3)/(1.5 × 10^6) sec = 1 msec.
The time at which the second packet is fully received at the first switch equals the time at
which the first packet is fully received at the second switch = 2 × 1 msec = 2 msec.
C) Time at which the 1st packet is received at the destination host = 1 msec × 3 hops = 3
msec. After this, one packet is received every 1 msec; thus the time at which the last
(5,000th) packet is received = 3 msec + 4,999 × 1 msec = 5.002 sec.
Compared with part (A), the delay with message segmentation is significantly smaller: the
total time drops to roughly one-third of the store-and-forward time.
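The arithmetic in this answer can be checked with a short script. The link rate of 1.5 Mbps is an assumption implied by a 1,500-bit packet taking 1 msec per link, and the function names are ours.

```python
def store_and_forward_delay(message_bits, rate_bps, hops):
    """Whole message forwarded hop by hop (propagation delay ignored)."""
    return hops * message_bits / rate_bps

def segmented_delay(num_packets, packet_bits, rate_bps, hops):
    """Pipelined hops: the first packet takes `hops` transmission times,
    then one additional packet arrives every transmission time."""
    t = packet_bits / rate_bps
    return hops * t + (num_packets - 1) * t

R = 1.5e6  # assumed link rate: a 1,500-bit packet then takes 1 msec
print(store_and_forward_delay(7.5e6, R, 3))         # 15.0 seconds
print(round(segmented_delay(5000, 1500, R, 3), 3))  # 5.002 seconds
```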
Question 2: For the following statement, indicate whether the statement is True or
False.
Switches exhibit lower latency than routers.
Answer: True. There is no routing table look-up and no delay associated with storing data
packets; bits flow through the switch essentially as soon as they arrive.
Question 3: Packet switches have queues while circuit switches do not. Is it true or
false?
Answer: True. Routers (packet switches) have queues, while circuit switches do not; a
packet switch must have enough memory to receive a full packet before it can
forward it on.
Question 4: Consider the arrangement of learning bridges shown in the following
figure. Assuming all are initially empty, give the forwarding tables for each of the
bridges B1-B4 after the following transmissions:
B3 C
A B1 B2
B4 D
D sends to C; A sends to D; C sends to A
Answer: When D sends to C, all bridges see the packet and learn where D is. However,
when A sends to D, the packet is routed directly to D and B3 does not learn where A is.
Similarly, when C sends to A, the packet is routed by B2 towards B1 only, and B4 does
not learn where C is.
The forwarding table for Bridge B1:
Destination Next Hop
A A-Interface
C B2-Interface
D B2-Interface
The forwarding table for Bridge B2:
Destination Next Hop
A B1-Interface
C B3-Interface
D B4-Interface
The forwarding table for Bridge B3:
Destination Next Hop
C C-Interface
D B2-Interface
The forwarding table for Bridge B4:
Destination Next Hop
A B2-Interface
D D-Interface
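The learning behavior in Question 4 can be reproduced with a small simulation. The port names follow the tables above; the flood-or-forward rule is the standard transparent-bridge algorithm, and the topology encoding is our own sketch.

```python
# Topology from the question: A - B1 - B2, with B3 (to C) and B4 (to D) off B2.
# Each bridge maps a port name -> the neighbor reachable through that port.
links = {
    "B1": {"A-Interface": "A", "B2-Interface": "B2"},
    "B2": {"B1-Interface": "B1", "B3-Interface": "B3", "B4-Interface": "B4"},
    "B3": {"B2-Interface": "B2", "C-Interface": "C"},
    "B4": {"B2-Interface": "B2", "D-Interface": "D"},
}
tables = {bridge: {} for bridge in links}  # per-bridge forwarding tables

def deliver(node, in_port, src, dst):
    """Transparent-bridge rule: learn the source port, then forward or flood."""
    table = tables[node]
    table[src] = in_port
    if dst in table:
        out_ports = [] if table[dst] == in_port else [table[dst]]
    else:
        out_ports = [p for p in links[node] if p != in_port]
    for port in out_ports:
        neighbor = links[node][port]
        if neighbor in links:  # another bridge; plain hosts just receive
            back_port = next(p for p, n in links[neighbor].items() if n == node)
            deliver(neighbor, back_port, src, dst)

def host_send(src, dst):
    """Inject a frame from host `src` at the bridge port it attaches to."""
    for bridge, ports in links.items():
        for port, neighbor in ports.items():
            if neighbor == src:
                deliver(bridge, port, src, dst)
                return

host_send("D", "C")  # flooded everywhere: every bridge learns where D is
host_send("A", "D")  # forwarded A-B1-B2-B4; B3 never sees it
host_send("C", "A")  # forwarded C-B3-B2-B1; B4 never sees it
print(tables)
```

Running the three transmissions yields exactly the four tables listed above, including the missing entries (no A at B3, no C at B4).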
Question 5: Which type of bridge observes network traffic flow and uses this
information to make future decisions regarding frame forwarding?
A) Remote B) Source routing C) Transparent D) Spanning tree
Answer: C
Question 6: Learning network addresses and converting frame formats are the function
of which device?
A) Switch B) Hub C) MAU D) Bridge
Answer: D
Question 7: The device that can operate in place of a hub is a:
A) Switch B) Bridge C) Router D) Gateway
Answer: A
Question 8: Which of the following is NOT true with respective to a transparent bridge
and a router?
A) Both bridge and router selectively forward data packets
B) A bridge uses IP addresses while a router uses MAC addresses
C) A bridge builds up its routing table by inspecting incoming packets
D) A router can connect between a LAN and a WAN.
Answer: B. A bridge is a device that works at the data link layer, whereas a router works
at the network layer. Both selectively forward packets, build forwarding tables, and can
connect a LAN and a WAN; but since a bridge works at the data link layer it uses MAC
addresses to forward, whereas a router uses IP addresses.
Chapter 5
LAN Technologies
5.1 Introduction
The bottom two layers of the Open Systems Interconnection (OSI) model deal with the
physical structure of the network and the means by which network devices can send
information from one device on a network to another.
The data link layer controls how data packets are sent from one node to another.
[Figure: A sender and a receiver exchanging data over a physical link]
Consider a group of singers performing together: they are not only able to speak, but to
listen at the same time they are speaking. All of them will speak and listen at the same
time. How is this possible? In
order to sing in harmony, each singer must be able to hear the frequencies being used
by the other singers, and strive to create a frequency with their voice that matches the
desired frequency to create that harmony.
This feed-back of each singer to listen to the collective, and possibly key into a specific
singer's voice is used by them as they sing to create the exact frequency needed, and
ensure their timing is the same as the rest of the singers. All members are able to hear
all other members, and speak at the same time. They are all acting as both senders and
receivers, like full-duplex communication in a broadcast network.
Point-to-Point Networks
A point-to-point link consists of a single sender on one end of the link, and a single
receiver at the other end of the link. Many link-layer protocols have been designed for
point-to-point links; PPP (point-to-point protocol) and HDLC (High-level Data Link
Control) are two such protocols.
Now, let us consider a different kind of scenario in which we have a medium which is
shared by a number of users.
[Figure: A number of users attached to a shared medium]
Any user can broadcast data into the network. Whenever data is broadcast, there is
obviously a possibility that several users will try to broadcast simultaneously. This
problem can be addressed with medium access control techniques.
Now the question arises: how will different users send through the shared medium? It is
necessary to have a protocol or technique to regulate the transmission from the users.
That means, at a time only one user can send through the medium, and that has to be
decided with the help of Medium Access Control (MAC) techniques. Medium access
control techniques determine the next user to talk (i.e., transmit into the channel).
A good example is something we are familiar with - a classroom - where teacher(s) and
student(s) share the same, single, broadcast medium. As humans, we have evolved a set
of protocols for sharing the broadcast channel ("Give everyone a chance to speak."
"Don't speak until you are spoken to." "Don't monopolize the conversation." "Raise your
hand if you have question." "Don't interrupt when someone is speaking." "Don't fall
asleep when someone else is talking.").
Similarly, computer networks have protocols called multiple access protocols. These
protocols control the nodes' data transmission onto the shared broadcast channel.
There are various ways to classify multiple access protocols. Multiple access protocols
can be broadly divided into four types: random, round-robin, reservation and
channelization. These four categories are needed in different situations. Among these
four types, the channelization technique is static in nature. We shall discuss each of them
one by one.
Broadcast multiple access techniques can be grouped as follows:
Random access: ALOHA, CSMA, CSMA/CA
Round-robin: Polling
Reservation: R-ALOHA
Channelization: FDMA, TDMA, CDMA
When each node has a fixed flow of information to transmit (for example, a data file
transfer), reservation based access methods are useful as they make an efficient use of
communication resources. If the information to be transmitted is bursty in nature, the
reservation-based access methods are not useful as they waste communication
resources.
Random-access methods are useful for transmitting short messages. The random
access methods give each user the freedom to access the network whenever the
user has information to send.
5.4.1 ALOHA
The Aloha protocol was developed by Norman Abramson at the University of Hawaii in
the early 1970s. In the Hawaiian language, Aloha means hello, goodbye, and love. The
state of Hawaii consists of a number of islands, and obviously a wired network could not
easily be set up between these islands. The University of Hawaii had a centralized
computer, and there were terminals distributed across the different islands. It was
necessary for the central computer to communicate with the terminals, and for that
purpose Abramson developed a protocol called ALOHA.
[Figure: ALOHA system. A central node communicates with Terminal-1 ... Terminal-4; terminals transmit to the central node over a shared random-access channel, and the central node broadcasts back to all terminals]
The central node and the terminals (stations) communicate by using a wireless (radio)
technique. Each station transmits by using an uplink frequency that is shared by all the
terminals (random access). After receiving the data, the central node retransmits it by
using a separate broadcast frequency that is received by all terminals.
There are two different types of ALOHA:
1. Pure ALOHA
2. Slotted ALOHA
A station can listen to broadcasts on the medium, even its own, and determine whether
its frames were transmitted successfully.
[Figure: Pure ALOHA. Stations A, B, C and D transmit frames at arbitrary times; frames that overlap in time are destroyed during the collision durations]
As shown in the diagram, whenever two frames try to occupy the channel at the same
time, there will be a collision and both will be damaged. Even if the first bit of a new
frame overlaps with just the last bit of a frame that is almost finished, both frames will
be totally destroyed and both will have to be retransmitted.
[Flowchart: Pure ALOHA procedure. Send the frame and wait for an ACK; if no ACK arrives, increment the attempt counter K, back off for a random time, and retransmit]
If data was received correctly at the central node, a short acknowledgment packet was
sent to the terminal; if an acknowledgment was not received by a terminal after a short
wait time, it would automatically retransmit the data packet after waiting a randomly
selected time interval. This acknowledgment mechanism was used to detect and correct
for collisions created when two terminals both attempted to send a packet at the same
time.
In pure ALOHA, the stations transmit frames whenever they have data to send.
When two or more stations transmit at the same time, there will be a collision
and the frames will get destroyed.
In pure ALOHA, whenever any station transmits a frame, it expects the
acknowledgement from the receiver.
If acknowledgement is not received within specified time, the station assumes
that the frame has been destroyed.
If the frame is destroyed because of collision the station waits for a random
amount of time and sends it again. This waiting time must be random
otherwise same frames will collide again and again.
Therefore, pure ALOHA dictates that when the time-out period passes, each station must
wait for a random amount of time before resending its frame. This randomness helps
reduce repeated collisions.
[Figure: Vulnerable period of pure ALOHA. A frame sent at time t collides with any frame that starts between t - T_fr and t + T_fr, so the vulnerable duration = 2 × T_fr]
A packet transmitted within this range will overlap with other packets. As a result, a
collision will occur and the central node will broadcast the garbled packet to the
terminals. When the garbled packet is received by the terminals, they will know that the
packet has not been transmitted successfully, and the terminals will perform
retransmission. Retransmission is used here whenever there is a collision.
The maximum throughput of pure ALOHA is 1/(2e), which is about 0.184. So, the best
channel utilization with the pure ALOHA protocol is only 18.4%.
The probability of collision in pure ALOHA is
1 - e^(-2G)
where G = λT is the average number of frames generated during one frame time T, the
total time being the sum of the propagation and transmission times.
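The throughput figures quoted above can be evaluated numerically. This sketch uses the classical formulas S = G·e^(-2G) for pure ALOHA and S = G·e^(-G) for slotted ALOHA, where G is the offered load in frames per frame time; the function names are ours.

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): successful frames per frame time in pure ALOHA."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G): halving the vulnerable period doubles the peak."""
    return G * math.exp(-G)

# Pure ALOHA peaks at G = 0.5, slotted ALOHA at G = 1.0:
print(round(pure_aloha_throughput(0.5), 3))     # 0.184
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368
```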
[Figure: Slotted ALOHA. Stations A, B, C and D may begin transmitting only at slot boundaries; frames sent in the same slot collide]
[Figure: Vulnerable period of slotted ALOHA. Only frames sent in the same slot collide, so the vulnerable duration = T_fr]
There are three persistence methods for CSMA:
1. 1-Persistent CSMA
2. Non-Persistent CSMA
3. p-Persistent CSMA
[Flowchart: 1-persistent CSMA. If the channel is busy, keep sensing; as soon as it becomes idle, send the data]
In this method, a station that wants to transmit data senses the channel to
check whether the channel is idle or busy. If the channel is busy, the station waits until
it becomes idle. When the station detects an idle-channel, it immediately transmits the
frame with probability 1. Hence it is called 1-persistent CSMA. This method has the
highest chance of collision because two or more nodes may find channel to be idle at
the same time and transmit their frames. When the collision occurs, the nodes wait a
random amount of time and start all over again.
[Figure: 1-persistent CSMA. The station senses continuously while the channel is busy, then transmits for one frame time]
Disadvantages of 1-Persistent CSMA
The propagation delay greatly affects this protocol. As an example, just after node-1
begins its transmission, node-2 also becomes ready to send its data and senses the
channel. If the node-1 signal has not yet reached node-2, node-2 will sense the channel
to be idle and will begin its transmission. This will result in a collision.
Even if the propagation delay is zero, collisions will still occur. If two nodes become
ready in the middle of a third node's transmission, both nodes will wait until the
transmission of the third node ends and then both will begin their transmissions exactly
simultaneously. This will also result in a collision.
[Flowchart: Non-persistent CSMA. If the channel is busy, wait a random time before sensing again; when it is found idle, send the data]
In non-persistent CSMA, a node senses the channel. If the channel is busy, the
node waits for a random amount of time and senses the channel again. After the wait
time, if the channel is idle, it sends the packet immediately. If a collision occurs,
the node waits for a random amount of time and starts all over again.
In non-persistent CSMA, a node does not sense the channel continuously while it is
busy. Instead, after sensing the busy condition, it waits for a randomly selected interval
of time before sensing again.
[Figure: Non-persistent CSMA. After sensing a busy channel, the station waits a random time before sensing again, then transmits for one frame time]
Advantages of Non-Persistent CSMA
It reduces the chance of collision because the stations wait a random amount of time. It
is unlikely that two or more stations will wait for the same amount of time and
retransmit at the same time.
Disadvantages of Non-Persistent CSMA
It reduces the efficiency of the network because the channel remains idle even when
there may be stations with frames to send. This is due to the fact that the stations wait
a random amount of time after sensing a busy channel.
[Flowchart: p-persistent CSMA. While the channel is busy, keep sensing; when it becomes idle, transmit with probability p; otherwise (with probability 1 - p) wait for the next slot and sense again]
Whenever a station becomes ready to send, it senses the channel. If the channel is busy,
the station waits until the next slot. If the channel is idle, it transmits with probability p.
With probability q = 1 - p, the station waits for the beginning of the next time slot. If the
next slot is also idle, it again either transmits or waits, with probabilities p and q.
This process is repeated till either frame has been transmitted or another station has
begun transmitting. In case of the transmission by another station, the station acts as
though a collision has occurred and it waits a random amount of time and starts again.
[Figure: p-persistent CSMA. The station senses continuously; in each idle time slot it transmits with probability p, and with probability 1 - p it defers for that slot]
This method is used when the channel has time slots such that the time slot duration is
equal to or greater than the maximum propagation delay time.
Advantages of p-Persistent CSMA
It reduces the chance of collision and improves the efficiency of the network.
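The transmit-or-defer decision made in each idle slot can be sketched as follows; the helper name and the simulation loop are ours, and the fraction printed is only a statistical sanity check, not part of the protocol.

```python
import random

def p_persistent_decision(p, rng=random.random):
    """One decision on an idle slot: transmit with probability p, else defer
    to the next slot (where the channel is sensed again)."""
    return "transmit" if rng() < p else "defer"

random.seed(1)  # deterministic demonstration
decisions = [p_persistent_decision(0.1) for _ in range(10_000)]
# Over many idle slots, roughly a fraction p of decisions are "transmit":
print(decisions.count("transmit") / len(decisions))  # close to 0.1
```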
[Figure: Channel state over time: a busy period followed by an idle channel]
After a collision, the nodes retransmit at random intervals. This reduces the probability
of a collision after the first retry. The nodes are not supposed to transmit immediately
after the collision has occurred; otherwise, there is a possibility that the same frames
would collide again.
CSMA/CD uses the electrical activity on the cable to find the status of the channel. A
collision is detected by measuring the power of the received pulse and comparing it with
the power of the transmitted signal.
After a collision is detected, the node stops transmitting and waits a random amount of
time (the back-off time) and then sends its data again, assuming that no other station is
transmitting in this time. This waiting time is measured in back-off slots. If a collision
occurs again, the back-off delay time is increased progressively.
1. If the channel is idle, transmit; otherwise, go to Step 2.
2. If the channel is busy, continue sensing until the channel is idle, and then
transmit immediately.
3. If a collision is detected during transmission, send a jam signal to other nodes
sharing the medium saying that there has been a collision and then stop
transmission.
4. After sending the jam signal, wait for a random amount of time, then try
sending again.
[Flowchart: CSMA/CD transmission procedure. The node sends and receives while monitoring for collisions; on detecting a collision it sends a jam signal to the other nodes, chooses a random number R between 0 and 2^K - 1, and retries after backing off; K is incremented on each collision and the transmission is aborted once K exceeds 15]
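The binary exponential back-off used in the flowchart can be sketched as follows. The attempt limit of 15 matches the flowchart above; capping the exponent at 10 mirrors classic Ethernet practice and is an assumption here, as are the function names.

```python
import random

def backoff_slots(collisions, max_exp=10):
    """After the k-th collision, wait a random number of slot times chosen
    uniformly from 0 .. 2^k - 1; the exponent is capped (here at 10)."""
    k = min(collisions, max_exp)
    return random.randint(0, 2 ** k - 1)

# The average wait roughly doubles with every successive collision:
for k in range(1, 6):
    print(k, (2 ** k - 1) / 2)  # expected back-off in slots: 0.5, 1.5, 3.5, ...
```

Doubling the window on each collision adapts the retry rate to the (unknown) number of contending stations: the more collisions a frame suffers, the more thinly the retries are spread.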
In wireless networks, a transmitting node cannot detect a collision while it is sending.
So, the only solution for wireless networks is collision avoidance. In the previous
section, we saw that CSMA/CD deals with transmissions after a collision has occurred;
CSMA/CA, by contrast, acts to prevent collisions before they happen.
In CSMA/CA, a station will signal its intention to transmit before it actually transmits data.
In this way, stations will sense when a collision might occur; this allows them to avoid
transmission collisions. Unfortunately, this broadcasting of the intention to transmit data
increases the amount of traffic on the channel and slows down network performance.
CSMA/CA avoids the collisions using three basic concepts.
1. Inter-frame space (IFS)
2. Contention window
3. Acknowledgements
Acknowledgements
Despite all the precautions, collisions may occur and destroy the data. The positive
acknowledgment and the time-out timer help guarantee that the receiver has received
the frame.
A station first senses the channel; if the channel is idle, the packet is sent. If the
channel is not idle, the station waits for a randomly chosen period of time, and then
checks again to see if the channel is idle. This period of time is called the back-off
factor, and is counted down by a back-off counter.
[Flowchart: CSMA/CA transmission procedure. Legend: K = number of attempts (back-off counter), T_p = maximum propagation time, T_fr = average frame transmission time, T_B = back-off time. When the station has data to send, it sets K = 0, waits until the channel is idle and still idle after the inter-frame space, chooses a random number R between 0 and 2^K - 1 (the contention window size is 2^K - 1), sends the frame, and waits for an ACK; if no ACK arrives before the time-out, K is incremented and the process repeats, and the attempt is aborted once K > 15]
If the channel is idle when the back-off counter reaches zero, the node transmits the
packet. If the channel is not idle when the back-off counter reaches zero, the back-off
factor is set again, and the process is repeated.
[Figure: FDM. Signals from Source-1 ... Source-4, each occupying its own frequency sub-band, are combined onto a common channel]
At the receiving end of the system, bandpass filters are used to pass the desired signal
(the signal in the appropriate frequency sub-band) to the appropriate user and to block
all the unwanted signals.
It is also appropriate to design an FDM system so that the bandwidth allocated to each
sub-band is slightly larger than the bandwidth needed by each source. This extra
bandwidth is called a guard-band.
As we can see in figure, FDM divides one channel (with frequency between 0 Hz and 3000
Hz) into several sub-channels including the Guard-band. Guard-band acts as a
delimiter for each logical sub-channel so that the interference (crosstalk) from other
sub-channels can be minimized.
[Figure: An FDM system over a channel capable of passing frequencies between 0 Hz and 3000 Hz. Modulator-1 places Source-1 in the sub-band between 2000 Hz and 2800 Hz, Modulator-2 places Source-2 between 1000 Hz and 1800 Hz, and Modulator-3 places Source-3 between 0 Hz and 800 Hz, with guard-bands separating the sub-bands]
For example, the multiplexed circuit is divided into 3 frequencies. Channel #1 (for
Source-1) uses 0-800 Hz for its data transfer and is delimited by a 200 Hz guard-band;
Channel #2 (for Source-2) uses 1000-1800 Hz and is delimited by 200 Hz too; and so on.
In regards to speed, we simply need to divide the main circuit amongst the available
sub-channels. For example, if we have a 64 Kbps physical circuit and wanted to use 4
sub-channels, each sub-channel will have 16 Kbps.
However, the guard-bands also consume part of this 64 Kbps physical circuit; with 4
guard-bands (1 Kbps per guard-band), each channel will be using only 15 Kbps. The
exact calculation depends on the specification.
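The per-channel arithmetic above can be written as a one-line calculation; the function name and parameters are ours.

```python
def sub_channel_rate(total_bps, channels, guard_bps_each=0):
    """Capacity left for each sub-channel after carving out guard-bands."""
    guard_total = guard_bps_each * channels
    return (total_bps - guard_total) / channels

print(sub_channel_rate(64_000, 4))         # 16000.0 (ignoring guard-bands)
print(sub_channel_rate(64_000, 4, 1_000))  # 15000.0 (1 Kbps guard each)
```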
Now, normally each of these channels is statically allocated a frequency; but if the
traffic is bursty, all the channels do not have data to send all the time. In such a case
there can be under-utilization of the channels, because a channel is statically
(permanently) allocated to a particular station or user.
[Figure: FDMA viewed on code/frequency/time axes. The frequency axis is divided into Channel-1 ... Channel-n; each channel keeps its frequency band for all time]
What can be done to improve the utilization? The solution is, instead of statically
allocating a channel to a station, to assign the channels on demand.
That means not only is the overall bandwidth divided into a number of channels, but
each channel can be allocated to a number of stations or users. Given the total available
bandwidth, we can use the following equation to find the number of channels:

Number of channels, N = (total bandwidth) / (bandwidth per channel, including its guard-band)

This is how we get the total number of channels that is possible in Frequency Division
Multiplexing.
If we have N channels, then since each channel can be shared by more than one user,
the total number of stations that can be provided service can be greater than N. If the
channels are statically allocated, then the total number of stations that can be given
service is equal to N.
However, since this is allocated or assigned dynamically on demand the total number of
stations can be larger than the number of channels. This is possible only when the
traffic is bursty and if the traffic is streamed (continuously sent) then of course it
cannot be done.
[Figure: TDM. Data from Source-1 ... Source-4 is interleaved in time onto a common channel]
As an example, consider a channel with speed 192 kbit/sec from Hyderabad to Delhi.
Suppose that three sources, all located in Hyderabad, each have 64 kbit/sec of data
and they want to transmit to individual users in Delhi. As shown in Figure 7-2, the
high-bit-rate channel can be divided into a series of time slots, and the time slots can be
alternately used by the three sources.
The three sources are thus capable of sending all of their data across the single, shared
channel. Clearly, at the other end of the channel (in this case, in Delhi), the process
must be reversed (i.e., the system must divide the 192 kbit/sec multiplexed data stream
back into the original three 64 kbit/sec data streams, which are then provided to three
different users). This reverse process is called demultiplexing.
[Figure: TDM multiplexing and demultiplexing. A multiplexer interleaves the low-bit-rate input channels a, b and c into the slot sequence a b c a b c ...; at the far end, a demultiplexer restores the original low-bit-rate output channels]
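The interleaving and the reverse demultiplexing step can be sketched as follows; the function names are ours, and equal-length streams are assumed for simplicity.

```python
def tdm_multiplex(streams):
    """Round-robin interleave equal-length streams into one slot sequence."""
    return [item for group in zip(*streams) for item in group]

def tdm_demultiplex(slots, n):
    """Reverse process: give every n-th slot back to its stream."""
    return [slots[i::n] for i in range(n)]

a = ["a1", "a2", "a3"]
b = ["b1", "b2", "b3"]
c = ["c1", "c2", "c3"]
muxed = tdm_multiplex([a, b, c])
print(muxed)                      # ['a1', 'b1', 'c1', 'a2', 'b2', 'c2', ...]
print(tdm_demultiplex(muxed, 3))  # the original three streams
```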
Choosing the proper size for the time slots involves a trade-off between efficiency and
delay. If the time slots are too small (say, one bit long) then the multiplexer must be fast
enough and powerful enough to be constantly switching between sources (and the
demultiplexer must be fast enough and powerful enough to be constantly switching
between users).
If the time slots are larger than one bit, data from each source must be stored (buffered)
while other sources are using the channel. This storage will produce delay.
If the time slots are too large, then a significant delay will be introduced between each
source and its user.
If we assign a variable-rate stream enough slots for its peak rate (say, 50 kbit/sec), then we
will be wasting slots when the rate drops well below the peak value. This waste will be
high if the system has many variable-speed low-bit-rate streams.
Statistical TDM works by calculating the average transmission rates of the streams to
be combined, and then uses a high-speed multiplexing link with a transmission rate
that is equal to (or slightly greater than) the statistical average of the combined streams.
Since the transmission rates from each source are variable, we no longer assign a fixed
number of time slots to each data stream.
If the slots are assigned dynamically, based on demand, then we call it Time Division
Multiple Access (TDMA). That means a particular channel can be shared by a number of
stations or users: we divide the channel into different time slots, and each of these time
slots can be shared by more than one station or user.
[Figure: TDMA viewed on code/frequency/time axes. The time axis is divided into slots for Channel-1 ... Channel-n; each channel uses the full frequency band during its slot]
Given the total data rate of the link, we can use the following equation to find the
number of channels:

Number of channels, N = (total data rate) / (data rate per channel)

This is how we get the total number of channels that is possible in Time Division
Multiplexing.
The block diagram of a CDMA system is shown in the figure. Since human speech is an
analog signal, it has to be first converted into digital form. This function is performed by
5.5 Static Channelization Techniques 122
Elements of Computer Networking LAN Technologies
the source encoding module. After the source information is coded into a digital form,
redundancy needs to be added to this digital message or data. This is done to improve
performance of the communication system (due to noise).
[Figure: CDMA transmitter. The data bit of each station is multiplied (⊗) by its chip sequence and the results are added onto the common channel. Chip sequences: station 1: +1, +1, +1, +1; station 2: +1, -1, +1, -1; station 3: +1, +1, -1, -1; station 4: +1, -1, -1, +1]
For station 1, multiplying the chip sequence +1, +1, +1, +1 with the data bit -1 gives
-1, -1, -1, -1. On the other hand, for station 2, multiplying +1 with +1, -1, +1, -1 gives
+1, -1, +1, -1. Then for station 3, multiplying -1 with the chip sequence +1, +1, -1, -1
gives -1, -1, +1, +1. For station 4, the chip sequence is multiplied with +1; that means
multiplying +1 with the chip sequence +1, -1, -1, +1 gives +1, -1, -1, +1.
[Figure: The chip-multiplied signals from the four stations are added onto the common channel]
Now these are added bit by bit and for the first bit we can see that sum of +1, -1, +1, -1
becomes 0. For the second bit, -1, -1, -1, -1 becomes -4 (we have to add all the four).
Similarly, for third bit -1, +1, +1, -1 it is 0 and for fourth bit –1, -1, +1, +1 it is 0.
[Figure: The four sources and their chip codes: Source-1: +1, +1, +1, +1; Source-2: +1, -1, +1, -1; Source-3: +1, +1, -1, -1; Source-4: +1, -1, -1, +1]
The final composite signal corresponds to 0, -4, 0, 0 and this can be sent over the
medium. After it is received, the same chip sequences (which were used before
transmission) are used for demultiplexing.
[Figure: CDMA receiver. The composite signal 0, -4, 0, 0 is multiplied bit by bit with each station's chip sequence, the products are added, and the sum is divided by 4:
Code +1, +1, +1, +1: products 0, -4, 0, 0; sum -4; -4/4 = -1, i.e., data bit 0
Code +1, -1, +1, -1: products 0, +4, 0, 0; sum +4; +4/4 = +1, i.e., data bit 1
Code +1, +1, -1, -1: products 0, -4, 0, 0; sum -4; -4/4 = -1, i.e., data bit 0
Code +1, -1, -1, +1: products 0, +4, 0, 0; sum +4; +4/4 = +1, i.e., data bit 1]
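The whole encode/decode walk-through can be verified with a few lines of Python. The chip sequences are the four codes used above; the function names and the station labels S1 ... S4 are ours.

```python
CODES = {
    "S1": (+1, +1, +1, +1),
    "S2": (+1, -1, +1, -1),
    "S3": (+1, +1, -1, -1),
    "S4": (+1, -1, -1, +1),
}

def encode(bits):
    """bits maps station -> data bit (+1 or -1); returns the composite signal."""
    chipped = [[bit * chip for chip in CODES[s]] for s, bit in bits.items()]
    return [sum(column) for column in zip(*chipped)]

def decode(composite, station):
    """Inner product with the station's code, normalized by the code length."""
    code = CODES[station]
    return sum(x * c for x, c in zip(composite, code)) / len(code)

composite = encode({"S1": -1, "S2": +1, "S3": -1, "S4": +1})
print(composite)                              # [0, -4, 0, 0]
print([decode(composite, s) for s in CODES])  # [-1.0, 1.0, -1.0, 1.0]
```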
using an antenna. This is how the transmission is performed, and as we can see, the
bandwidth here is four times the bandwidth of each of the channels (one four-chip
sequence is sent per data bit).
At the receiving end, the signals from all the transmitters are received by the antenna
and passed to the digital demodulator. After demodulation we get the composite signal,
and that is multiplied with the station's unique pseudo-random binary sequence. After
multiplying with the same pseudo-random sequence that was used at the transmitter,
we get the original signal back. Of course, it will have some noise because of
interference and other problems, but we get back the binary information.
Let's consider one other operation, a cyclic shift. Let the notation S[j] indicate
the sequence of bits S cyclically shifted j places to the right. For example, if
S[0] = [b1 b2 b3 ... bm], then
S[1] = [bm b1 b2 ... b(m-1)], and
S[2] = [b(m-1) bm b1 ... b(m-2)].
The third desirable property of a chip sequence is that the m-bit sequence
produced by S[0] ⊕ S[j] should exhibit balance for all non-zero values of j less
than m (the number of 1's in S[0] ⊕ S[j] should differ from the number of 0's by
no more than one for any value 1 ≤ j ≤ m - 1).
Why does this property help ensure successful transmission and recovery?
The ⊕ of two bits produces a 0 if both bits are the same (if both bits are 0's or if
both bits are 1's) and produces a 1 if the two bits are different (if one of the bits is 0
and the other is 1). If a sequence of bits is truly random and independent, then
cyclically shifting the sequence by an arbitrary number of places and performing a
bit-by-bit comparison of the original and shifted sequences should produce the same number
of agreements (the values of the two bits are the same) as disagreements (the values of
the two bits are different). Of course, if the sequence contains an odd number of bits,
the number of agreements and disagreements will have to differ by at least one.
As shown above, adding the products gives 4, and dividing the result by the number of
bits gives 1. So we found that multiplying a chip sequence by itself gives 1. Multiplying
two different chip sequences gives 0, and multiplying a sequence with the complement
of a different sequence also gives 0 (multiplying a sequence with its own complement
gives -1). That means:
Si . Si = 1
Si . Sj = 0, if i ≠ j
Si . complement(Sj) = 0, if i ≠ j
This is the orthogonality property that is to be satisfied by the chip sequences; only then
is the multiplexing and demultiplexing possible. In other words, transmission and
subsequent recovery at the receiving end is possible only when this orthogonality property
is satisfied.
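The orthogonality property can be checked mechanically for the four chip sequences used in this section; the helper names are ours. Note that a sequence multiplied by its own complement gives -1, while any two different sequences give 0.

```python
S = [(+1, +1, +1, +1), (+1, -1, +1, -1), (+1, +1, -1, -1), (+1, -1, -1, +1)]

def inner(a, b):
    """Normalized inner product of two chip sequences."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

def complement(seq):
    return tuple(-x for x in seq)

assert all(inner(s, s) == 1 for s in S)                    # Si . Si = 1
assert all(inner(S[i], S[j]) == 0
           for i in range(4) for j in range(4) if i != j)  # Si . Sj = 0
assert all(inner(s, complement(s)) == -1 for s in S)       # Si . comp(Si) = -1
print("orthogonality holds for all four codes")
```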
5.6 LocalTalk
LocalTalk is a network protocol developed by Apple for Macintosh computers. Older
Macintosh computers can be connected through a serial port with special twisted-pair
cable and adapters. The main disadvantage of LocalTalk is its speed (230 Kbps).
Although LocalTalk networks are slow, they are popular because they are easy and
inexpensive to install and maintain.
5.7 Ethernet
The most popular set of protocols for the Physical and Data Link layers is Ethernet.
Ethernet operates at the first two layers of the OSI model: Physical and Data Link
layers. Initially, Ethernet was given the name Alto Aloha Network. Ethernet was created by
Robert Metcalfe (in 1973). Metcalfe thought the name ether suitable because the cable
used to build a network is a passive medium that permits the propagation of data.
5.6 LocalTalk 128
Elements of Computer Networking LAN Technologies
The cost of an Ethernet port on a node is very low compared to other technologies.
Many vendors build Ethernet into the motherboard of the computer so that it is not
necessary to purchase a separate NIC.
In Ethernet, both the data link and the physical layers are involved in the creation and
transmission of frames. The physical layer is related to the type of LAN cabling and how
the bits are transmitted and received on the cable. Ethernet divides the Data Link layer
into two separate layers:
Logical Link Control (LLC) layer
Medium Access Control (MAC) layer
The MAC sublayer address is the physical hardware address of the source and
destination computer. All devices on a LAN must be identified by a unique MAC
address. This sublayer controls which computer devices send and receive the data and
allows NICs to communicate with the physical layer. The next level of processing is the
LLC sublayer. It is responsible for identifying and passing data to the network layer
protocol.
Naming convention of Ethernet:
[speed in Mbps] Base [cable type]
For example, the 10BaseT Ethernet protocol uses 10 for the speed of transmission at 10
megabits per second [Mbps], Base for baseband [meaning it has full control of the wire
on a single frequency], and T for twisted-pair cable.
Fast Ethernet types:
100BaseT 100 Mbps over Twisted-pair category 5
100BaseFX 100 Mbps over fiber optic cable
100BaseSX 100 Mbps over multimode fiber optic cable
100BaseBX 100 Mbps over single mode fiber cable
Gigabit Ethernet types:
1000BaseT 1000 Mbps over 2-pair category 5
1000BaseTX 1000 Mbps over 2-pair category 6
1000BaseFX 1000 Mbps over fiber optic cable
1000BaseSX 1000 Mbps over multimode fiber cable
1000BaseBX 1000 Mbps over single mode fiber cable
The choice of an Ethernet technology depends on parameters like: location and size of
user communities, bandwidth, and QoS requirements.
a previous query. The advantage of full-duplex Ethernet is that the transmission rate is
theoretically double what it is on a half-duplex link.
Full-duplex operation requires the cabling to dedicate one wire pair for transmitting and
another for receiving. Full-duplex operation does not work on cables with only one path
(for example, coaxial cable).
Feature                            10BASE-FP   10BASE-FB                     10BASE-FL                Old FOIRL
Topology                           Star        Backbone or repeater system   Repeater-repeater link   Repeater-repeater link
Maximum cable length (in meters)   500         2000                          2000                     1000
Allows end system connections?     Yes         No                            No                       No
Allows cascaded repeaters?         No          Yes                           No                       No
Maximum collision domain (meters)  2500        2500                          2500                     2500
Frame field                 Description
Preamble                    Indicates the start of a new frame and establishes synchronization conditions between devices. The last byte, or start frame delimiter, always has a 10101011-bit pattern. This byte indicates the start of a frame.
Destination Address (DA)    The Destination Address is the hardware (MAC) address of the receiving device.
Source Address (SA)         Specifies the hardware (MAC) address of the sending device.
Type                        The Type field specifies the network layer protocol used to send the frame, for example TCP/IP.
Data                        The Data field is for the actual data being transmitted from device to device. It also contains information used by the network layer and indicates the type of connection.
Frame Check Sequence        Contains CRC error-checking information.
Frame field                 Description
Preamble                    Indicates the start of a new frame and establishes synchronization conditions between devices. The last byte, or start frame delimiter, always has a 10101011-bit pattern. This byte indicates the start of a frame (same as DIX frame).
Start Frame Delimiter       The Start Frame Delimiter (SFD) has the same 10101011-bit sequence found at the end of the DIX preamble. Both formats use the same number of bytes to perform the synchronization of the signals.
Destination Address (DA)    The Destination address can be either 2 or 6 bytes. Whether 2 or 6 bytes are used, all devices within the same network must use the same format. IEEE protocols specify that a 10 Mbps network must use 6 bytes. The 2-byte length is obsolete.
Source Address (SA)         Same as DA.
Length                      The Length field indicates the number of bytes in the data field. If the data field is less than the required 46 bytes, a pad field is added to the data frame. The bytes added for padding purposes are usually zeros.
Data and Padding            The Data field is for the actual data being transmitted from device to device. It also contains information used by the network layer and indicates the type of connection.
Frame Check Sequence        Contains CRC error-checking information (same as DIX frame).
It should be noted that if one device uses an IEEE 802.3 NIC and the other device uses a DIX
Ethernet NIC, they would not be able to communicate with one another. Devices must create the
same Ethernet frame format in order to be compatible. One way to tell them apart is that the DIX
frame has a Type field, which defines the protocol used for the frame, and IEEE 802.3 has a Length
field in its place. IEEE 802.3 also has additional fields not used with the DIX format.
particular server on the network. They are the only devices that receive frames
announcing the availability of that server. Any device that does not belong to this group
will ignore or discard these frames.
A broadcast frame is addressed for all network devices to read and process. A broadcast
address is a unique address used only for broadcast frames. It is not a hardware
address. Broadcast frames are transmitted across bridges and switches, but routers
will stop broadcast frames.
[Figure: token passing: (1) sender holding token sends bits of frame; (3) destination makes a copy of the data and passes it on; (4) sender receives bits of frame.]
When a computer wants to send data to another computer, it waits for the token to
come around and then attaches its data to it. The token is then passed to the next
computer in the ring until it reaches the recipient computer. The recipient attaches two
bits of data to the token to inform the sender that the data was received. Other
computers can't send data until the ring is free again.
[Figure: a free token circulating around a ring of six machines.]
At the start, an empty information frame is continuously circulated on the ring. To
use the network, a machine first has to capture the free token and replace the data
with its own message.
In the example above, machine 1 wants to send some data to machine 4, so it first has
to capture the free Token. It then writes its data and the recipient's address onto the
Token.
The packet of data is then sent to machine 2 who reads the address, realizes it is not its
own, so passes it on to machine 3.
This time it is the correct address and so machine 4 reads the message. It cannot,
however, release a free Token on to the ring; it must first send the frame back to
machine 1 with an acknowledgement to say that it has received the data.
The receipt is then sent to machine 5 who checks the address, realizes that it is not its
own and so forwards it on to the next machine in the ring, machine 6.
Machine 6 does the same and forwards the data to machine 1, who sent the original
message.
Machine 1 recognizes the address, reads the acknowledgement from number 4 and then
releases the free Token back on to the ring ready for the next machine to use.
[Figure: the token ring wired through a central hub.]
The Token still circulates around the network and is still controlled in the same
manner. Using a hub or a switch greatly improves reliability because the hub can
automatically bypass any ports that are disconnected or have a cabling fault.
Multiple-Token
In multiple-token mode, the transmitting machine generates a new free token and
places it on the ring immediately following the last bit of transmitting data. This type of
operation allows several busy tokens and one free token on the ring at the same time.
Single-Token
Single-token operation requires that a transmitting machine wait until it has cleared its
own busy token before generating a new free token. If a packet is longer than the ring
latency, however, the machine will receive (and erase) its busy token before it has
finished transmitting data.
In this case, the machine must continue transmitting data and generate a new free
token only after the last data bit has been transmitted. This is the same as multiple-
token operation. Thus single-token and multiple-token operation differ only in cases for
which the packet is shorter than the ring latency.
Single-Packet
For single-packet operation, a machine does not issue a new free token until after it has
circulated completely around the ring and erased its entire transmitted packet. This
type of operation is the most conservative of the three in ensuring that two
transmissions do not interfere.
Both single-packet and single-token operation ensure that there is only a single token
on the ring at any given time, but the difference is that single-packet operation requires
that the complete packet be cleared before generating a new free token.
Percent utilization = 100 × (time to send frame) / (time to send frame + time to send token)
Usually the time to send a token is small compared to the time to send a frame, so percent utilization
is close to 100%.
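As a quick numerical sketch of this formula (the timing values below are hypothetical, chosen only to illustrate the point):

```python
def percent_utilization(frame_time, token_time):
    """Token-ring utilization: share of time spent sending frames."""
    return 100 * frame_time / (frame_time + token_time)

# A 5 ms frame time with a 0.05 ms token-passing time (hypothetical values)
# gives roughly 99% utilization, matching the "close to 100%" observation.
print(percent_utilization(5.0, 0.05))
```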
Data Sent     0 0 1 1 0 1 1 0
Data Received 0 0 0 1 0 1 1 0
As an example, consider the figure above. It shows the effect of a single-bit error on a data
unit. To understand the impact of the change, imagine that each group of 8 bits is an
ASCII character with a 0 bit added to the left. In the figure, 00110110 was sent but
00010110 was received.
The term burst error (or error burst) means that 2 or more bits in the data unit have
changed from 1 to 0 or from 0 to 1. The figure below shows the effect of a burst error on a data
unit.
A burst error means that 2 or more bits in the data unit have changed.
Data Sent 0 0 1 1 0 1 1 0
Data Received 1 0 0 1 0 0 1 0
In this case, 00110110 was sent, but 10010010 was received. Note that a burst error
does not necessarily mean that the errors occur in consecutive bits. The length of the
burst is measured from the first corrupted bit to the last corrupted bit. Some bits in
between may not have been corrupted.
5.9.1 Redundancy
The basic idea in detecting or correcting errors is redundancy. To be able to detect or
correct errors, we need to send some extra bits with our data. These redundant bits are
added by the sender and removed by the receiver. Their presence allows the receiver to
detect or correct corrupted bits.
the number of 1 bits in the frame will thus be odd, and the error is detected. In fact, the single parity
check is sufficient to detect any odd number of transmission errors in the received
frame.
1 0 0 1 1 0 0 1 0
1 1 0 1 0 1 1 0 1
0 0 1 0 0 1 0 1 1
0 1 0 1 0 0 0 1 1   <- each row ends with its row parity check bit (last column)
1 0 0 1 0 0 1 0 1
0 1 0 0 0 1 0 1 1
1 1 1 0 1 0 1 1 0
0 0 0 0 0 1 1 1 1   <- column parity check row; its last bit is the column parity check on the row parity checks
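The two-dimensional parity scheme above can be sketched in a few lines, assuming even parity throughout (the function name is ours):

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then append a column-parity
    row computed over the data rows and their row-parity bits."""
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    col_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [col_parity]

block = two_d_parity([[1, 0, 0, 1], [1, 1, 0, 1], [0, 0, 1, 0]])
# Every row and every column of the completed block has even parity, so a
# single-bit error flips exactly one row check and one column check,
# pinpointing the corrupted bit.
```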
5.9.6 Checksums
A checksum is a computed value that allows you to check the validity of a
transmission. Checksums take on various forms, depending upon the nature of the
transmission and the needed reliability. For example, the simplest checksum is to sum
up all the bytes of a transmission, computing the sum in an 8-bit counter. This value is
appended as the last byte of the transmission.
The idea is that upon receipt of n bytes, you sum up the first n − 1 bytes and see if the
answer is the same as the last byte. Since this is a bit awkward, a variant on this theme
is, on transmission, to sum up all the bytes, then (treating the checksum as a signed, 8-bit
value) negate the checksum byte before transmitting it. This means that the sum of all
n bytes should be 0.
These techniques are not terribly reliable; for example, if the packet is known to be 64
bytes in length, and you receive 64 '\0' bytes, the sum is 0, so the result appears correct.
Of course, if there is a hardware failure that simply fails to transmit the data bytes
(particularly easy on synchronous transmission, where no "start bit" is involved), then
the fact that you receive a packet of 64 zero bytes with a checksum result of 0 is
misleading; you think you have received a valid packet when you have received nothing at all.
A solution to this is to negate the checksum value computed, subtract 1
from it, and expect that the result of the receiver's checksum of the n bytes is 0xFF (-1, as a
signed 8-bit value). This resolves the all-zeros problem. As another
example, let's say the checksum of a packet is 1 byte long. A byte is made up of 8 bits, and
each bit can be in one of two states, leading to a total of 256 (2^8) possible combinations.
Since the first combination equals zero, a byte can have a maximum value of 255.
If the sum of the other bytes in the packet is 255 or less, then the checksum
contains that exact value.
If the sum of the other bytes is more than 255, then the checksum is the
remainder of the total value after it has been divided by 256.
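A minimal sketch of this byte-wise checksum, with the negated-sum variant described above (the helper names are ours):

```python
def simple_checksum(data: bytes) -> int:
    """8-bit checksum: the sum of all bytes, modulo 256."""
    return sum(data) & 0xFF

def append_checksum(data: bytes) -> bytes:
    """Append the negated sum, so the receiver's sum over all
    n bytes comes out to 0 modulo 256."""
    return data + bytes([(-sum(data)) & 0xFF])

packet = append_checksum(b"hello")
assert sum(packet) & 0xFF == 0  # a valid packet sums to zero
```

The `& 0xFF` masks implement the "remainder after dividing by 256" rule from the two cases above.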
The Cyclic Redundancy Check (CRC) is the most powerful of the redundancy checking techniques.
The CRC is based on binary division. In CRC, a sequence of redundant bits, called the CRC or
the CRC remainder, is appended to the end of a data stream. The resulting data becomes
exactly divisible by a second, predetermined binary number.
At its destination, the incoming data is divided by the same number. The diagram below will
show you the sequence of events that takes place when using CRC.
CRC technique is also applicable to data storage devices, such as a disk drive. In this
situation each block on the disk would have check bits, and the hardware might
automatically initiate a reread of the block when an error is detected, or it might report the
error to software.
then check the data by repeating the calculation, dividing by the key word k, and verifying
that the remainder is r. The only novel aspect of the CRC process is that it uses a simplified
form of arithmetic, which we'll explain below, in order to perform the division.
By the way, this method of checking for errors is obviously not foolproof, because there are
many different message strings that give a remainder of r when divided by k. In fact, about 1
out of every k randomly selected strings will give any specific remainder. Thus, if our
message string is garbled in transmission, there is a chance (about 1/k, assuming the
corrupted message is random) that the garbled version would agree with the check word. In
such a case the error would go undetected. Nevertheless, by making k large enough, the
chances of a random error going undetected can be made extremely small. That's really all
there is to it. The rest of our discussion will consist simply of refining this basic idea to
improve its effectiveness.
When discussing CRCs it's customary to present the key word k in the form of a polynomial
whose coefficients are the binary bits of the number k. For example, suppose we
want our CRC to use the key k = 37. This number written in binary is 100101, and
expressed as a polynomial it is x^5 + x^2 + 1.
In order to implement a CRC based on this polynomial, the transmitter and receiver must
have agreed in advance that this is the key word they intend to use. So, for the sake of
discussion, let's say we have agreed to use the generator polynomial 100101.
By the way, it's worth noting that the remainder of any word divided by a 6-bit word will
contain no more than 5 bits, so our CRC words based on the polynomial 100101 will always
fit into 5 bits. Therefore, a CRC system based on this polynomial would be called a 5-bit
CRC. In general, a polynomial with n bits leads to an (n − 1)-bit CRC.
Now suppose I want to send you a message consisting of the string of bits M = 0010 1100
0101 0111 0100 011, and I also want to send you some additional information that will allow
you to check the received string for correctness.
Using our agreed key word k = 100101, I'll simply divide M by k to form the remainder r,
which will constitute the CRC check word. However, we are going to use a simplified kind of
division that is particularly well-suited to the binary form in which digital data is expressed.
If we interpret k as an ordinary integer (37), its binary representation, 100101, is really
shorthand for
(1)2^5 + (0)2^4 + (0)2^3 + (1)2^2 + (0)2^1 + (1)2^0
Every integer can be expressed uniquely in this way, i.e., as a polynomial in the base 2 with
coefficients that are either 0 or 1. This is a very powerful form of representation, but it's
actually more powerful than we need for purposes of performing a data check.
Also, operations on numbers like this can be somewhat laborious, because they involve
borrows and carries in order to ensure that the coefficients are always either 0 or 1. (The
same is true for decimal arithmetic, except that all the digits are required to be in the range 0
to 9.)
To make things simpler, let's interpret our message M, key word k, and remainder r, not
as actual integers, but as abstract polynomials in a dummy variable x (rather than a
definite base like 2 for binary numbers or 10 for decimal numbers).
Also, we'll simplify even further by agreeing to pay attention only to the parity of the
coefficients, i.e., if a coefficient is an odd number we will simply regard it as 1, and if it is an
even number we will regard it as 0.
This is a tremendous simplification, because now we don't have to worry about borrows and
carries when performing arithmetic. This is because every integer coefficient must obviously
be either odd or even, so it's automatically either 0 or 1.
To give just a brief illustration, consider the two polynomials x^2 + x + 1 and x^3 + x + 1. If
we multiply these together by the ordinary rules of algebra we get
(x^2 + x + 1)(x^3 + x + 1) = x^5 + x^4 + 2x^3 + 2x^2 + 2x + 1
but according to our simplification we are going to call every even coefficient 0, so the result
of the multiplication is simply x^5 + x^4 + 1. You might wonder if this simplified way of doing
things is really self-consistent.
For example, can we divide the product x^5 + x^4 + 1 by one of its factors, say, x^2 + x + 1, to give
the other factor? The answer is yes, and it's much simpler than ordinary long division. To
divide the polynomial 110001 by 111 (which is the shorthand way of expressing our
polynomials) we simply apply the bit-wise exclusive-OR operation repeatedly as follows
111 110001
111
0010
000
0100
111
0111
111
000
This is exactly like ordinary long division, only simpler, because at each stage we just need to
check whether the leading bit of the current three bits is 0 or 1. If it's 0, we place a 0 in the
quotient and exclusive-OR the current bits with 000. If it's 1, we place a 1 in the quotient
and exclusive-OR the current bits with the divisor, which in this case is 111.
As can be seen, the result of dividing 110001 by 111 is 1011, which was our other factor, x^3 +
x + 1, leaving a remainder of 000. (This kind of arithmetic is called the arithmetic of
polynomials with coefficients from the field of integers modulo 2.)
So now let us concentrate on performing a CRC calculation with the message string M and
key word k defined above. We simply need to divide M by k using our simplified polynomial
arithmetic.
In fact, it's even simpler, because we don't really need to keep track of the quotient; all we
really need is the remainder. So we simply need to perform a sequence of 6-bit exclusive-ORs
with our key word k, beginning from the left-most 1 of the message string, and at each
stage thereafter bringing down enough bits from the message string to make a 6-bit word
with leading 1. The entire computation is shown below:
00101 0 0 1 0 1 1 0 0 0 1 0 1 0 1 1 1 0 1 0 0 0 1 1
100101
00100101
100101
0000000101110
100101
00101110
100101
00101100
100101
001 00111
1 00101
0 0 0 0 1 0 remainder = CRC
Our CRC word is simply the remainder, i.e., the result of the last 6-bit exclusive-OR
operation. Of course, the leading bit of this result is always 0, so we really only need the
last five bits. This is why a 6-bit key word leads to a 5-bit CRC. In this case, the CRC
word for this message string is 00010, so when we transmit the message word M we will
also send this corresponding CRC word.
When you receive them you can repeat the above calculation on M with our agreed
generator polynomial k and verify that the resulting remainder agrees with the CRC
word we included in our transmission.
What we've just done is a perfectly fine CRC calculation, and many actual
implementations work exactly that way, but there is one potential drawback in our
method. As you can see, the computation shown above totally ignores any number of
"0"s ahead of the first 1 bit in the message. It so happens that many data strings in real
applications are likely to begin with a long series of "0"s, so it's a little bothersome that
the algorithm isn't working very hard in such cases.
To avoid this problem, we can agree in advance that before computing our n-bit CRC we
will always begin by exclusive-ORing the leading n bits of the message string with a
string of n "1"s. With this convention (which of course must be agreed upon by the
transmitter and the receiver in advance) our previous example would be evaluated as
follows
0100000
100101
000101001
100101
00110001
100101
0101000
100101
00110111
100101
0100101
100101
0000000100011
100101
0 0 0 1 1 0 remainder = CRC
So with this convention, the 5-bit CRC word for this message string
based on the generator polynomial 100101 is 00110. That's really all there is to
computing a CRC, and many commercial applications work exactly as we've described.
People sometimes use various table-lookup routines to speed up the divisions, but that
doesn't alter the basic computation or change the result. In addition, people sometimes
agree to various non-standard conventions, such as interpreting the bits in reverse
order, but the essential computation is still the same. (Of course, it's crucial for the
transmitter and receiver to agree in advance on any unusual conventions they intend to
observe.)
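Both worked examples above can be reproduced with a short mod-2 division routine. The following sketch (the variable names are ours) checks the two CRC words 00010 and 00110:

```python
def mod2_remainder(message: str, key: str) -> str:
    """Remainder of mod-2 (XOR, carry-less) division of message by key."""
    m = int(message.replace(" ", ""), 2)
    k = int(key, 2)
    # Align the key under the leading 1 of the working value and XOR,
    # until fewer bits remain than the key has.
    while m.bit_length() >= len(key):
        m ^= k << (m.bit_length() - len(key))
    return format(m, "0%db" % (len(key) - 1))  # a (w+1)-bit key gives a w-bit CRC

M = "0010 1100 0101 0111 0100 011"
K = "100101"
print(mod2_remainder(M, K))        # 00010, the plain CRC word

# Leading-ones convention: XOR the first 5 message bits with 11111 first.
bits = M.replace(" ", "")
flipped = format(int(bits[:5], 2) ^ 0b11111, "05b") + bits[5:]
print(mod2_remainder(flipped, K))  # 00110
```

As a sanity check, dividing 110001 by 111 with the same routine gives remainder 0, matching the factoring example earlier.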
Now that we've seen how to compute CRCs for a given key polynomial, it's natural to
wonder whether some key polynomials work better (i.e., give more robust checks) than
others. From one point of view the answer is obviously yes, because the larger our key
word, the less likely it is that corrupted data will go undetected. By appending an n-bit
CRC to our message string we are increasing the total number of possible strings by a
factor of 2^n, but we aren't increasing the degrees of freedom, since each message string
has a unique CRC word. Therefore, we have established a situation in which only 1 out
of 2^n total strings (of length m + n) is valid. Notice that if we append our CRC word to our
message word, the result is a multiple of our generator polynomial. Thus, of all possible
combined strings, only multiples of the generator polynomial are valid.
So, if we assume that any corruption of our data affects our string in a completely
random way, i.e., such that the corrupted string is totally uncorrelated with the original
string, then the probability of a corrupted string going undetected is 1/(2^n). This is the
basis on which people say a 16-bit CRC has a probability of 1/(2^16) = 1.5E-5 of failing to
detect an error in the data, and a 32-bit CRC has a probability of 1/(2^32), which is
about 2.3E-10 (less than one in a billion).
Since most digital systems are designed around blocks of 8-bit words (called bytes), it's
most common to find key words whose lengths are a multiple of 8 bits. The two most
common lengths in practice are 16-bit and 32-bit CRCs (so the corresponding generator
polynomials have 17 and 33 bits respectively). A few specific polynomials have come
into widespread use. For 16-bit CRCs one of the most popular key words is
10001000000100001, and for 32-bit CRCs one of the most popular is
100000100110000010001110110110111. In the form of explicit polynomials these
would be written as
x^16 + x^12 + x^5 + 1
and
x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
The 16-bit polynomial is known as the X25 standard, and the 32-bit polynomial is the
Ethernet standard, and both are widely used in all sorts of applications. (Another
common 16-bit key polynomial familiar to many modem operators is
11000000000000101, which is the basis of the CRC-16 protocol). These polynomials
are certainly not unique in being suitable for CRC calculations, but it's probably a good
idea to use one of the established standards, to take advantage of all the experience
accumulated over many years of use.
Nevertheless, we may still be curious to know how these particular polynomials were
chosen. It so happens that one could use just about ANY polynomial of a certain degree
and achieve most of the error detection benefits of the standard polynomials. For
example, an n-bit CRC will certainly catch any single burst of m consecutive flipped bits
for any m less than n, basically because a smaller polynomial can't be a multiple of
a larger polynomial. Also, we can ensure the detection of any odd number of flipped bits simply
by using a generator polynomial that is a multiple of the parity polynomial, which is x +
1. A polynomial of our simplified kind is a multiple of x + 1 if and only if it has an even
number of terms.
It's interesting to note that the standard 16-bit polynomials both include this parity
check, whereas the standard 32-bit CRC does not. It might seem that this represents a
shortcoming of the 32-bit standard, but it really doesn't, because the inclusion of a
parity check comes at the cost of some other desirable characteristics. In particular,
much emphasis has been placed on the detection of two separated single-bit errors, and
the standard CRC polynomials were basically chosen to be as robust as possible in
detecting such double errors. Notice that the basic error polynomial E representing two
Answer: In normal token ring operation, a station sending information holds the token
until the sending data circles the entire ring. After the sending station strips the data
from the ring, it then issues a free token.
With Early Token Release (ETR), a token is released immediately after the sending
station transmits its frame. This allows for improved performance, since there is no
delay in the downstream neighbour waiting for the token. ETR is only available on 16
megabit rings.
Question 2: What is the difference between Ethernet and Token Ring networks?
Answer: Token Ring is single access, meaning there is only one token. So, at any given time
only one station is able to use the LAN. Ethernet is a shared-access medium,
where all stations have equal access to the network at the same time.
Question 3: At what speeds does token ring run?
Answer: Token ring runs at speeds of 4 Mbps and 16 Mbps.
Question 4: What is a beacon frame?
Answer: A beacon frame is generated by a station or stations that do not detect a
receive signal. A station or stations will broadcast these beacon MAC frames until
the receive signal is restored.
Question 5: Medium access methods can be categorized as random, maximized or
minimized.
Answer: False
Question 6: ALOHA is an early multiple-random-access method that requires frame
acknowledgment.
Answer: True
Question 7: In the carrier sense multiple-access (CSMA) method, a station must listen
to the medium prior to the sending of data onto the line.
Answer: True
Question 8: In the carrier sense multiple-access (CSMA) method, the server will let a
device know when it is time to transmit.
Answer: False
Question 9: Some examples of controlled-access methods are: reservation, polling and
token passing.
Answer: True
Question 10: Carrier sense multiple access with collision avoidance (CSMA/CA) is CSMA
with procedures added to correct after a collision has happened.
Answer: False
Question 11: Carrier sense multiple access with collision detection (CSMA/CD) is CSMA
with a post collision procedure.
Answer: True
Question 12: FDMA, TDMA and CDMA are controlled-access methods.
Answer: False
Question 13: Channelization is a multiple-access method in which the available
bandwidth of a link is shared in time, frequency, or through code, between stations
on a network.
Answer: True
Question 14: In the reservation access method, a station reserves a slot for data by
controlling transmissions to and from secondary stations.
Answer: False
Question 15: Multiple Access Protocols include:
A. Random-Access Protocols C. Channelization Protocols
B. Controlled-Access Protocols D. All of the above.
Answer: D
Question 16: ALOHA is an example of the earliest:
A. Random-access method C. Channelization protocols
B. Controlled-access method D. All of the above.
Answer: A
Question 17: Polling works with topologies in which one device is designated as the
___ station and the other devices are known as ___ devices.
A. Secondary / primary C. Permanent / switched
B. Primary / secondary D. Physical / virtual
Answer: B
Question 18: The select mode is used when:
A. the sender has something to format. C. the primary device has something to send.
B. the receiver has something to receive. D. the secondary device has something to send.
Answer: C
Question 19: The act of polling secondary devices is so that:
A. The primary device can solicit transmissions from the secondary devices.
B. The secondary devices can solicit transmissions from the primary devices.
C. The secondary device wants to over-ride the primary device.
D. The primary device is in flex mode.
Answer: A
Question 20: Polling is a type of:
A. Random-access C. channelization access
B. Controlled-access D. None of the above.
Answer: B
Question 21: In the reservation access method, a station needs to make a reservation
before:
A. Sending data C. Both A and B.
B. Receiving data D. None of the above.
Answer: A
Question 22: In a channelization access method, the available bandwidth of a link is
shared:
A. In time C. via code
B. In frequency D. All of the above.
Answer: D
Question 23: What is the advantage of controlled access over random access?
Answer: In a random access method, each station has the right to the medium without
being controlled by any other station. However, if more than one station tries to send,
there is an access conflict (collision) and the frames will be either destroyed or modified.
To avoid access collisions, or to resolve them when they happen, we need suitable
procedures.
examples of random access include ALOHA and CSMA.
In controlled access, the stations consult with one another to find which station has the
right to send. A station cannot send unless it has been authorized by other stations.
Three popular controlled access methods include: Reservation, polling and token-
passing.
Question 24: Groups of N stations share a 64-kbps pure ALOHA channel. Each station
outputs a 1000-bit frame on an average of once every 100 seconds. What is the
maximum value of N (i.e. how many stations can be connected)?
Answer: The maximum throughput for pure ALOHA is 18.4%.
Therefore the usable channel rate is equal to 0.184 × 64 kbps = 11.77 kbps.
Bits per second output by each station = 1000/100 = 10 bps.
Each station outputs 10 bps on a channel which has the usable channel rate of 11.77 kbps.
∴ N = (11.77 × 10^3)/10 = 1177 stations
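The same arithmetic, restated in code (values taken from the problem statement):

```python
channel_rate = 64_000        # bps, shared pure-ALOHA channel
peak_utilization = 0.184     # pure ALOHA maximum throughput (1 / 2e)
usable_rate = peak_utilization * channel_rate   # ≈ 11776 bps
per_station = 1000 / 100     # one 1000-bit frame per 100 s = 10 bps
stations = int(usable_rate / per_station)
print(stations)              # 1177
```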
Transmission time = packet size / bit rate
For example, a 100 Mbit/s (100,000,000 bits per second) Ethernet and maximum
packet size of 1526 bytes gives a maximum packet transmission time = (1526 × 8)/(100 × 10^6) ≈ 122
µs.
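The same computation in code:

```python
packet_bits = 1526 * 8       # maximum packet size, converted to bits
bit_rate = 100e6             # 100 Mbit/s
t_us = packet_bits / bit_rate * 1e6
print(round(t_us, 2))        # 122.08 microseconds
```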
Answer: Propagation time is the amount of time it takes for the head of the
signal to travel from the sender to the receiver. It can be computed as the ratio between
the link length and the propagation speed over the specific medium.
From the problem statement we have the value: 10^4 (10,000 frames per second).
The frame transmission time works out to T_fr = 66.67 × 10^-6 seconds.
The probability of collision in pure ALOHA is:
P = 1 − e^(−2G), where G = (frames per second) × T_fr
P = 1 − e^(−2 × 10^4 × 66.67 × 10^-6)
  = 1 − (2.718)^(−1.33)
  = 1 − 0.27 = 0.73
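The collision probability above can be checked numerically:

```python
import math

frame_rate = 10_000          # offered frames per second
t_fr = 66.67e-6              # frame transmission time in seconds
G = frame_rate * t_fr        # offered load in frames per frame time
p_collision = 1 - math.exp(-2 * G)
# Evaluates to about 0.74; rounding e^(-1.33) to 0.27 first, as the
# worked answer does, gives 0.73.
```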
Question 26: Consider the delay of pure ALOHA versus slotted ALOHA at low load.
Which one is less? Explain your answer.
Answer: Statistically, pure ALOHA is less efficient than slotted ALOHA (both at normal
load and when collisions occur on a contention channel). At low load, however, pure
ALOHA is about as efficient as slotted ALOHA. If we also consider the delay of waiting
for a slot boundary, as the slotted ALOHA protocol requires, then slotted ALOHA's delay
is greater than that of pure ALOHA, which sends the packet immediately.
Question 27: The valid frame length must be at least 64 bytes long so as to prevent a
station from completing the transmission of a short frame before the first bit has
even reached the far end of the cable, where it may collide with another frame. How
is the minimum frame length adjusted if the network speed goes up?
Answer: As the network speed goes up, the minimum frame length must go up or the
maximum cable length must come down proportionally, so that the sender does not
incorrectly conclude that the frame was successfully sent when a collision occurred.
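This trade-off can be sketched as follows, assuming a round-trip collision-detection constraint and an illustrative propagation speed of 2 × 10^8 m/s (the real 802.3 minimum also budgets for repeater delays, so the actual figures differ):

```python
def min_frame_bits(bit_rate_bps, cable_m, prop_speed_mps=2e8):
    """Smallest frame that lasts a full round trip, so the sender
    is still transmitting when any collision signal returns."""
    round_trip_s = 2 * cable_m / prop_speed_mps
    return bit_rate_bps * round_trip_s

# 10 Mbps over 2500 m needs about 250 bits of frame time;
# ten times the speed needs ten times the frame (or 1/10 the cable):
print(min_frame_bits(10e6, 2500), min_frame_bits(100e6, 2500))
```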
Question 28: TDM with sources having different data rates: Consider the case of three
streams with bit rates of 8 kbit/sec, 16 kbit/sec, and 24 kbit/sec, respectively. We
want to combine these streams into a single high-speed stream using TDM.
Answer: The high-speed stream in this case must have a transmission rate of 48
kbit/sec, which is the sum of the bit rates of the three sources. To determine the
number of time slots to be assigned to each source in the multiplexing process, we
must reduce the ratio of the rates, 8 : 16 : 24, to the lowest possible form, which in this
case is 1 : 2 : 3.
The sum of the reduced ratio is 6, which represents the minimum length of the
repetitive cycle of slot assignments in the multiplexing process. The solution is now
readily obtained: in each cycle of six time slots we assign one slot to Source A (8
kbit/sec), two slots to Source B (16 kbit/sec), and three slots to Source C (24 kbit/sec).
Figure 7-4 illustrates this assignment, using “a” to indicate data from Source A, “b” to
indicate data from Source B, and “c” to indicate data from Source C.
Question 29: Consider a system with four low-bit-rate sources of 20 kbit/sec, 30
kbit/sec, 40 kbit/sec, and 60 kbit/sec. Determine the slot assignments when the
data streams are combined using TDM.
Answer: The rate ratio 20 : 30 : 40 : 60 reduces to 2 : 3 : 4 : 6. The length of the cycle is
therefore 2 + 3 + 4 + 6 = 15 slots. Within each cycle of 15 slots, we assign two slots to
the 20 kbit/sec source, three slots to the 30 kbit/sec source, four slots to the 40
kbit/sec source, and six slots to the 60 kbit/sec source.
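The reduced-ratio slot assignment used in the two answers above can be computed with a greatest common divisor; a minimal sketch:

```python
from functools import reduce
from math import gcd

def tdm_slots(rates):
    """Reduce the rate ratio to lowest terms; each reduced term is that
    source's slot count per cycle, and their sum is the cycle length."""
    g = reduce(gcd, rates)
    slots = [r // g for r in rates]
    return slots, sum(slots)

print(tdm_slots([8, 16, 24]))       # ([1, 2, 3], 6)
print(tdm_slots([20, 30, 40, 60]))  # ([2, 3, 4, 6], 15)
```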
Question 30: Explain why the hidden terminal problem can be solved by CSMA/CA
protocol.
Answer: The hidden station problem occurs in a wireless LAN if we use the CSMA
access protocol. Suppose stations A, B, C, and D are placed in a line from left to right.
If station A is transmitting to B, station C cannot sense the transmission because it is
out of range of A, so it falsely assumes that it is safe to transmit to B. This causes a
collision at station B; it is called the hidden station problem because the competitor is
too far away to be detected.
The root cause of this problem is that the sender does not have correct knowledge of
the receiver's activity: CSMA can only tell whether there is activity around the sender
itself. With the CSMA/CA protocol, however, the sender learns the receiver's status
through handshaking. For instance, station C can receive station B's CTS and learn
how long station A will take to transmit its data, so it holds off its own transmission
request until station A completes.
Question 31: Given the following information, find the minimum bandwidth required for
the path:
FDM Multiplexing
Five devices, each requiring 4000 Hz.
200 Hz guard band for each device.
Answer:
No. of devices = 5.
No. of guard bands required between them = 4.
Hence total bandwidth = (4000 × 5) + (200 × 4) = 20.8 kHz.
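A quick check of this arithmetic (n channels need only n − 1 guard bands between them):

```python
def fdm_bandwidth(devices, channel_hz, guard_hz):
    # n channels separated by n - 1 guard bands
    return devices * channel_hz + (devices - 1) * guard_hz

print(fdm_bandwidth(5, 4000, 200))  # 20800 Hz = 20.8 kHz
```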
5.9 Error Detection Techniques 156
Elements of Computer Networking LAN Technologies
Question 32: A small Local Area Network (LAN) has four machines A, B, C and D
connected in the following topology:
[Figure: LAN-1 and LAN-2 connected by a bridge]
Answer: True
Question 45: With full-duplex operation a station can transmit and receive
simultaneously. Is it true or false?
Answer: True
Question 46: A technique known as slotted ____ organizes time on the channel into
uniform slots whose size equals the frame transmission time. Transmission is
permitted to begin only at a slot boundary.
A) Ethernet B) ALOHA C) boundary relay D) CSMA
Answer: B
Question 47: Ethernet now encompasses data rates of ____.
A) 100 Mbps, 1 Gbps, 10 Gbps, and 100 Gbps
B) 10 Mbps, 100 Mbps, 1 Gbps, and 10 Gbps
C) 1 Gbps, 10 Gbps, 100 Gbps, and 1000 Gbps
D) 10 Mbps, 100 Mbps, 1000 Mbps, and 10 Gbps
Answer: B
Question 48: A problem with ____ is that capacity is wasted because the medium will
generally remain idle following the end of a transmission, even if there are one or
more stations waiting to transmit.
A) 1-persistent CSMA B) slotted ALOHA
C) p-persistent CSMA D) nonpersistent CSMA
Answer: D
Question 49: One of the rules for CSMA/CD states, "after transmitting the jamming
signal, wait a random amount of time, then attempt to transmit again". This
random amount of time is referred to as the ___.
A) Precursor B) Backoff C) Backlog D) carrier time
Answer: B
Question 50: Which of the following makes use of two optical fibre cables, one for
transmission and one for reception, and utilizes a technique known as intensity
modulation?
A) 100BASE-T4 B) 10BASE-F C) 100BASE-FX D) 10BASE-T
Answer: C
Question 51: Why do 802.11 (wireless) networks use acknowledgements?
Answer: Unlike a wired Ethernet, where collisions can be detected, it is difficult to
detect a collision on a wireless network because the strength of the signal being
transmitted is so much greater than the strength of the signal being received. Without
being able to detect a collision, a sender cannot be sure its transmitted data arrived
intact, so a mechanism for acknowledgements must be used.
Question 52: Why are the wires twisted in twisted-pair copper wire?
Answer: The twisting of the individual pairs reduces electromagnetic interference. For
example, it reduces crosstalk between wire pairs bundled into a cable.
Question 53: Which type of Ethernet framing is used for TCP/IP and AppleTalk?
A) Ethernet 802.3 B) Ethernet 802.2 C) Ethernet II D) Ethernet SNAP
Answer: D. Ethernet 802.3 is used with NetWare versions 2 through 3.11, Ethernet
802.2 is used with NetWare 3.12 and later plus OSI routing, Ethernet II is used with
TCP/IP and DECnet, and Ethernet SNAP is used with TCP/IP and AppleTalk.
Question 54: Ethernet is said to be non-deterministic because of which of the following?
A) It is not possible to determine how long it will take to get a frame from one device
to another.
B) It is not possible to determine whether an error has occurred during the
transmission of a frame.
C) It is not possible to determine if another device wishes to transmit.
D) It is not possible to determine the maximum time a device will have to wait to
transmit.
Answer: D
Question 55: The multiplexer creates a frame that contains data only from those input
sources that have something to send in __ multiplexing.
A) Frequency Division B) Statistical Time Division
C) Synchronous Time Division D) Dense Wavelength
Answer: B
Question 56: How many 8-bit characters can be transmitted per second over a 9600
baud serial communication link using asynchronous mode of transmission with one
start bit, eight data bits, two stop bits, and one parity bit?
A) 600 B) 800 C) 876 D) 1200
Answer: B. At 9600 baud, 9600 bits are transmitted per second. Each 8-bit character
is framed with one start bit, one parity bit, and two stop bits, for a total of 12 bits per
character. So no. of characters = 9600/12 = 800.
Question 57: A and B are the only two stations on an Ethernet. Each has a steady
queue of frames to send. Both A and B attempt to transmit a frame, collide, and A
wins the first backoff race. At the end of this successful transmission by A, both A
and B attempt to transmit and collide. The probability that A wins the second
backoff race is
A) 0.5 B) 0.625 C) 0.75 D) 1.0
Answer: B. Since A won the first backoff race, A has experienced one collision and
picks its backoff from {0, 1}. B has now experienced two collisions and picks its
backoff from {0, 1, 2, 3}. A wins when its choice is strictly smaller than B's:
Probability = (1/2 × 3/4) + (1/2 × 2/4) = 3/8 + 2/8 = 5/8 = 0.625
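The 0.625 can be verified by enumerating every pair of backoff choices:

```python
from itertools import product

a_choices = range(2)   # A has seen one collision: backoff from {0, 1}
b_choices = range(4)   # B has seen two collisions: backoff from {0, 1, 2, 3}

# A wins the race whenever its backoff is strictly smaller than B's
wins = sum(1 for a, b in product(a_choices, b_choices) if a < b)
total = len(a_choices) * len(b_choices)
print(wins, "/", total, "=", wins / total)  # 5 / 8 = 0.625
```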
Question 58: In a network of LANs connected by bridges, packets are sent from one LAN
to another through intermediate bridges. Since more than one path may exist
between two LANs, packets may have to be routed through multiple bridges. Why is
the spanning tree algorithm used for bridge-routing?
A) For shortest path routing between LANs
B) For avoiding loops in the routing paths
C) For fault tolerance D) For minimizing collisions
Answer: B. The spanning tree algorithm for a graph finds a tree free of cycles, so in
this network we apply the spanning tree algorithm to remove loops in the routing paths.
The hosts and routers are recognized at the network level by their logical (IP)
addresses. A logical address is an internetwork address; it is universally unique.
[Figure: Node-1, Node-2, and Node-3 on a local network]
When Node 1 tries to communicate with Node 2, the following steps resolve Node 2's
software-assigned address ([Link]) to Node 2's hardware-assigned media access
control address:
1. Based on the contents of the routing table on Node 1, IP determines that the
forwarding IP address to be used to reach Node 2 is [Link]. Node 1 then
checks its own local ARP cache for a matching hardware address for Node 2.
2. If Node 1 finds no mapping in the cache, it broadcasts an ARP request frame to
all hosts on the local network with the question "What is the hardware address
for [Link]?" Both hardware and software addresses for the source, Node 1,
are included in the ARP request.
3. Each host on the local network receives the ARP request and checks for a
match to its own IP address. If a host does not find a match, it discards the ARP
request.
4. Node 2 determines that the IP address in the ARP request matches its own IP
address and adds a hardware/software address mapping for Node 1 to its local
ARP cache.
5. Node 2 sends an ARP reply message containing its hardware address directly
back to Node 1.
6. When Node 1 receives the ARP reply message from Node 2, it updates its ARP
cache with a hardware/software address mapping for Node 2.
Once the media access control address for Node 2 has been determined, Node 1 can
send IP traffic to Node 2 by addressing it to Node 2's media access control address.
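The cache-then-broadcast flow of the steps above can be sketched as a toy simulation; the addresses and helper names below are purely illustrative, not from the text:

```python
arp_cache = {}  # Node 1's ARP cache: IP address -> MAC address

# Hypothetical hosts on the local network (IP -> MAC)
local_hosts = {
    "10.0.0.1": "00-08-5c-8d-4f-8f",   # Node 1 itself
    "10.0.0.2": "00-0c-29-aa-bb-cc",   # Node 2
}

def resolve(target_ip):
    # Step 1: check the local ARP cache first
    if target_ip in arp_cache:
        return arp_cache[target_ip]
    # Steps 2-5: broadcast the request; only the host owning the IP replies
    for ip, mac in local_hosts.items():
        if ip == target_ip:
            arp_cache[target_ip] = mac   # step 6: update the cache
            return mac
    return None  # no host claimed that IP address

print(resolve("10.0.0.2"))  # 00-0c-29-aa-bb-cc
```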
The following diagram shows how ARP resolves IP addresses to hardware addresses for
two hosts on different physical networks connected by a common router.
[Figure: Node-1 and Node-2 on different physical networks connected by a router]
Once the media access control address for Router interface 1 has been determined,
Node 1 can send IP traffic to Router interface 1 by addressing it to the Router interface
1 media access control address. The router then forwards the traffic to Node 2 through
the same ARP process as discussed in this section.
Each dynamic ARP cache entry has a potential lifetime of 10 minutes. New entries
added to the cache are time-stamped. If an entry is not reused within 2 minutes of being
added, it expires and is removed from the ARP cache. If an entry is used, it receives two
more minutes of lifetime. If an entry keeps getting used, it receives an additional two
minutes of lifetime up to a maximum lifetime of 10 minutes.
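A sketch of this aging policy, with each reuse buying two more minutes up to the ten-minute cap (the class and field names are illustrative):

```python
ADD_TTL = 120   # seconds: 2 minutes for a new or freshly reused entry
MAX_TTL = 600   # seconds: 10-minute absolute maximum lifetime

class ArpEntry:
    def __init__(self, mac, now):
        self.mac = mac
        self.created = now
        self.expires = now + ADD_TTL   # new entries start with 2 minutes

    def touch(self, now):
        # Reuse extends the entry by 2 minutes, never past 10 minutes total
        self.expires = min(now + ADD_TTL, self.created + MAX_TTL)

    def alive(self, now):
        return now < self.expires

entry = ArpEntry("00-08-5c-8d-4f-8f", now=0)
entry.touch(now=100)                       # reused at t=100 s -> expires at 220 s
print(entry.alive(150), entry.alive(500))  # True False
```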
We can view the ARP cache by using the arp command. To view the ARP cache, type
arp -a at a command prompt.
Example:
Interface: [Link] --- 0xf
Internet Address        Physical Address        Type
[Link] 00-08-5c-8d-4f-8f dynamic
[Link] ff-ff-ff-ff-ff-ff static
[Link] 01-00-5e-00-00-fc static
[Link] ff-ff-ff-ff-ff-ff static
The ARP Cache is needed for improving efficiency on the operation of the ARP. The
cache maintains the recent mappings from the Internet addresses to hardware
addresses. The normal expiration time of an entry in the cache is 20 minutes from the
time the entry was created.
The RARP packet format is almost identical to the ARP packet. An RARP request is
broadcast, identifying the sender’s hardware address, asking for anyone to respond with
the sender’s IP address. The reply is normally unicast.
[Figure: Node-1 broadcasts an RARP request; Node-2, Node-3, and the RARP server receive it]
Any device on the network that is set up to act as an RARP server responds to the
broadcast from the source. It generates an RARP reply.
Below are some key points in the RARP process.
1. Sender generates RARP request message: The source node generates an RARP
request message. It puts its own MAC address as both the sender MAC and also
the destination MAC. It leaves both the sender IP Address and the destination
IP Address blank, since it doesn't know either.
2. Sender broadcasts RARP request message: The source broadcasts the RARP
request message on the local network.
3. Local nodes process RARP request message: The message is received by each
node on the local network and processed. Nodes that are not configured to act
as RARP servers ignore the message.
4. RARP server generates RARP reply message: Any node on the network that is
set up to act as an RARP server responds to the broadcast from the source
device. It generates an RARP reply. It sets the sender MAC address and sender
IP address to its own hardware and IP address of course, since it is the sender
of the reply. It then sets the destination MAC address to the hardware address
of the original source device. It looks up in a table the hardware address of the
source, determines that device's IP address assignment, and puts it into the
destination IP address field.
5. RARP server sends RARP reply message: The RARP server sends the RARP reply
message to the device looking to be configured.
6. Source device processes RARP reply message: The source node processes the
reply from the RARP server. It then configures itself using the IP address in the
destination IP address field supplied by the RARP server.
[Figure: several hosts connected to a router]
The IP and MAC addresses for each host and the router are as follows:
Chapter 7: IP Addressing
7.1 Introduction
Let us start our discussion with a basic question: What is a protocol? Well, a protocol is
a set of communication rules used for connecting computers in a network. By way of
analogy, suppose a person travels to a distant land and wants to find his destination;
there must be some standard pattern with which such people can talk to each other, or
communicate. These standard patterns are sets of rules by which we send our data to
that distant land and talk to the person. There is thus a standard set of rules without
which communication over the Internet is impossible. These rules are called protocols.
Communication between hosts can happen only if they can identify each other on the
network. No doubt you have heard the term IP address. Unless you are a techie, though,
you may not have a clear understanding of what an IP address actually is or how it
works. Let's explore the concept with real-world scenarios.
The Internet is a global network connecting billions of devices. Each device uses a
protocol to communicate with other devices on the network. These protocols govern
communication between all devices on the Internet.
In 1969, BBN Technologies started building the Interface Message Processors (IMPs) for
the ARPANET, but an important piece of the network was missing: the software that
would govern how computers would communicate. Graduate students at various
facilities funded by the US Department of Defense Advanced Research Projects Agency
(DARPA) had been given the task in 1969 of developing the missing communication
protocols. They formed an informal Network Working Group. Besides developing the
technical protocols, these students connected to ARPANET also began to establish the
informal conventions that would influence interpersonal communications on the
Internet in general.
From 1973 to 1974, Cerf's networking research group at Stanford worked out details of
the idea, resulting in the first TCP specification. DARPA then contracted with BBN
Technologies, Stanford University, and University College London to develop
operational versions of the protocol on different hardware platforms. Four versions were
developed: TCP v1, TCP v2, TCP v3 and IP v3, and TCP/IP v4.
In 1975, a two-network TCP/IP communications test was performed between Stanford
and University College London (UCL). In November, 1977, a three-network TCP/IP test
was conducted between sites in the US, the UK, and Norway. Several other TCP/IP
prototypes were developed at multiple research centers between 1978 and 1983.
Internet Protocol version 4 (IPv4), developed in 1981, currently controls the majority of
intranet and Internet communication. It was the first viable protocol for long-distance
computer communication: its predecessors had difficulty routing data over long
distances with high reliability and fell short of the requirements for large-scale
communication. IPv4 was standardized by the Internet Engineering Task Force (IETF)
in September 1981. When IP was first standardized, the specification required that
each system attached
standardized in September 1981, the specification required that each system attached
to an IP-based internet be assigned a unique, 32-bit Internet address value.