Computer Networks - Unit 4 Notes

The document covers the network layer of computer networks, focusing on routing algorithms, congestion control, and the differences between virtual circuit and datagram subnets. It discusses various routing techniques such as shortest path routing, distance vector routing, flooding, and the principles of dynamic routing. Additionally, it highlights the importance of the network layer in both internet and ATM networks, along with key concepts like connectionless and connection-oriented services.

COMPUTER NETWORKS

UNIT - 4: THE NETWORK LAYER


Syllabus:
Virtual circuit and Datagram subnets. Routing algorithms: shortest path routing, Flooding, Hierarchical
routing, Broadcast, Multicast, Distance vector routing. Dynamic Routing: Broadcast routing, Routing for
mobility. Congestion Control Algorithms: General principles, Congestion prevention policies. Internetworking:
the Network layer in the Internet and in the ATM networks.

IMPORTANT QUESTIONS:
1. With an example explain the Dynamic routing algorithms used in computer networks.
2. What are the reasons for congestion? What are the problems with congestion?
3. With an example explain the shortest path routing algorithms used in computer networks.
4. With an example explain the distance vector routing algorithms used in computer networks
5. What are the general principles of congestion control? Explain.
6. What is the significance of The Network layer in the ATM networks?
7. With an example explain the Flooding, Hierarchical routing algorithms used in computer networks.
8. Explain the Network layer in the internet.
9. What is a Choke packet? How do they help in congestion control?
10. Explain in detail about broadcast and multicast routings.
11. What is the format of IPv4 header? Describe the significance of each field.
12. What are the differences between Static Routing Algorithm and Dynamic Routing Algorithm?
13. Differentiate the open loop congestion control and closed loop congestion control
14. How Congestion control in Datagram Subnets takes place?
15. Describe Optimality Principle for Routing.

The Network Layer


The network layer is concerned with getting packets from the source all the way to the destination. Getting
to the destination may require making many hops at intermediate routers along the way. This function clearly
contrasts with that of the data link layer, which has the more modest goal of just moving frames from one end of a
wire to the other.

4.1 NETWORK LAYER DESIGN ISSUES:


4.1.1 Store-and-Forward Packet Switching:
[Figure: the environment of the network layer protocols: the ISP's (carrier's) equipment and the customers' equipment.]

The major components of the system are the carrier's equipment (routers connected by transmission lines),
shown inside the shaded oval, and the customers' equipment, shown outside the oval. Host H1 is directly
connected to one of the carrier's routers, A, by a leased line. In contrast, H2 is on a LAN with a router, F, owned
and operated by the customer.
This equipment is used as follows. A host with a packet to send transmits it to the nearest router, either on its own
LAN or over a point-to-point link to the carrier. The packet is stored there until it has fully arrived so the
checksum can be verified. Then it is forwarded to the next router along the path until it reaches the destination
host, where it is delivered. This mechanism is store-and-forward packet switching.

4.1.2 Services Provided to the Transport Layer:

The network layer provides services to the transport layer at the network layer/transport layer interface.
The network layer services have been designed with the following goals in mind.
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of the routers present.
3. The network addresses made available to the transport layer should use a uniform numbering plan, even
across LANs and WANs.

The network service should be connectionless, with primitives SEND PACKET and RECEIVE PACKET
and little else. In particular, no packet ordering and flow control should be done, because the hosts are going to do
that anyway, and there is usually little to be gained by doing it twice. Furthermore, each packet must carry the full
destination address, because each packet sent is carried independently of its predecessors, if any.

4.1.3 Implementation of Connectionless Service:

In connectionless service, packets are injected into the subnet individually and routed independently of
each other. No advance setup is needed. In this context, the packets are frequently called datagrams and the
subnet is called a datagram subnet.
Let us now see how a datagram subnet works.
[Figure: routing within a datagram subnet, showing A's routing table (initially and later) and the tables of routers C and E.]

The process P1 on host H1 has a long message for P2 on host H2. The network layer breaks the
message into four packets, 1, 2, 3, and 4, and sends each of them in turn to router A. A has only two outgoing lines,
to B and C, so every incoming packet must be sent to one of these routers, even if the ultimate destination is some
other router. A's initial routing table is shown in the figure under the label ''initially.''
As they arrived at A, packets 1, 2, and 3 were stored briefly (to verify their checksums). Then each was
forwarded to C according to A's table. Packet 1 was then forwarded to E and then to F. When it got to F, it was
encapsulated in a data link layer frame and sent to H2 over the LAN. Packets 2 and 3 follow the same route.
However, something different happened to packet 4. When it got to A, it was sent to router B, even though
it too is destined for F. Perhaps A learned of a traffic jam somewhere along the ACE path and
updated its routing table, as shown under the label ''later.'' The algorithm that manages the tables and makes the
routing decisions is called the routing algorithm.

4.1.4 Implementation of Connection-Oriented Service:

In connection-oriented service, a path from the source router to the destination router must be established
before any data packets can be sent. This connection is called a VC (virtual circuit), and the subnet is called a
virtual-circuit subnet.
The idea behind virtual circuits is to avoid having to choose a new route for every packet sent. Instead,
when a connection is established, a route from the source machine to the destination machine is chosen as part of
the connection setup and stored in tables inside the routers.
[Figure: routing within a virtual-circuit subnet, showing the tables of routers A, C, and E.]

Here, host H1 has established connection 1 with host H2. It is remembered as the first entry in each of the
routing tables. The first line of A's table says that if a packet bearing connection identifier 1 comes in from H1, it
is to be sent to router C and given connection identifier 1. Similarly, the first entry at C routes the packet to E, also
with connection identifier 1. Similarly, if H3 wants to connect to H2, it must go through the same setup procedure.
Since this is H3's first connection, H3 also chooses connection identifier 1, so router A assigns a different
identifier, say 2, to the outgoing packets of this second connection to keep the two connections distinguishable.
Replacing connection identifiers in outgoing packets this way is called label switching.
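The per-router VC tables just described can be sketched as a mapping from (incoming line, incoming
identifier) to (outgoing line, outgoing identifier); the entries below are illustrative, with H3's connection
relabeled on the way out so it cannot be confused with H1's:

```python
# Router A's VC table: (incoming line, incoming VC id) -> (outgoing line, outgoing VC id)
# Entries are illustrative: H1's connection keeps id 1, while H3's connection,
# which also arrives bearing id 1, is relabeled to 2 on the outgoing line to C.
a_table = {
    ("H1", 1): ("C", 1),
    ("H3", 1): ("C", 2),
}

def forward_vc(table, in_line, in_vc):
    """Return the outgoing line and (possibly rewritten) VC identifier."""
    return table[(in_line, in_vc)]
```

With this table, both connections share A's line to C but remain distinguishable by their outgoing identifiers.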

4.1.5 Comparison of Virtual-Circuit and Datagram Subnets:

• Circuit setup: not needed in a datagram subnet; required in a virtual-circuit subnet.
• Addressing: each datagram contains the full source and destination addresses; a VC packet carries only a short VC number.
• State information: datagram routers hold no state about connections; each VC requires router table space per connection.
• Routing: each datagram is routed independently; with VCs, the route is chosen at setup time and all packets follow it.
• Effect of router failures: with datagrams, only packets lost during the crash are affected; with VCs, all circuits that passed through the failed router are terminated.
• Congestion control: difficult in a datagram subnet; easier in a VC subnet if enough resources can be allocated in advance for each VC.
• Quality of service: difficult to provide in a datagram subnet; easier in a VC subnet, for the same reason.

4.2 ROUTING ALGORITHMS:

The main function of the network layer is routing packets from the source machine to the destination
machine. In most subnets, packets will require multiple hops to make the journey. The routing algorithm is that
part of the network layer software responsible for deciding which output line an incoming packet should be
transmitted on. If the subnet uses datagrams internally, this decision must be made anew for every arriving data
packet since the best route may have changed since last time.
Routing algorithms can be grouped into two major classes: nonadaptive and adaptive. Nonadaptive
algorithms do not base their routing decisions on measurements or estimates of the current traffic and topology.
Instead, the choice of the route to use to get from I to J (for all I and J) is computed in advance, off-line, and
downloaded to the routers when the network is booted. This procedure is sometimes called static routing.
Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the topology, and
usually the traffic as well. Adaptive algorithms differ in where they get their information (e.g., locally, from
adjacent routers, or from all routers) and in when they change the routes (e.g., every ΔT seconds, or when the
load or topology changes). This procedure is sometimes called dynamic routing.

4.2.1 The Optimality Principle:

“It states that if router J is on the optimal path from router I to router K, then the optimal path from J to K
also falls along the same route.” To see this, call the part of the route from I to J r1 and the rest of the route r2.
If a route better than r2 existed from J to K, it could be concatenated with r1 to improve the route from I to K,
contradicting our statement that r1r2 is optimal.
As a direct consequence, the set of optimal routes from all sources to a given destination forms a tree rooted
at the destination. Such a tree is
called a sink tree and is illustrated in below Fig. where the distance metric is the number of hops. Note that a sink
tree is not necessarily unique; since a sink tree is indeed a tree, it does not contain any loops, so each packet will
be delivered within a finite and bounded number of hops.
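As a small illustration, the sink tree for a hop-count metric can be computed with a breadth-first search
rooted at the destination; the five-node topology below is made up for the example, not taken from the figure:

```python
from collections import deque

def sink_tree(graph, dest):
    """Compute a sink tree rooted at `dest` for a hop-count metric.

    `graph` maps each node to its list of neighbours. Returns a dict
    mapping every node to the next hop it uses toward `dest` (the
    reversed BFS tree), illustrating that optimal routes form a tree.
    """
    next_hop = {dest: None}          # the destination itself has no next hop
    queue = deque([dest])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in next_hop:  # first time reached => fewest hops
                next_hop[nbr] = node # nbr forwards toward dest via node
                queue.append(nbr)
    return next_hop

# Hypothetical five-node topology for illustration:
g = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
     "D": ["B", "C", "E"], "E": ["D"]}
tree = sink_tree(g, "E")
```

Because the tree contains no loops, every packet forwarded along it reaches E in a bounded number of hops.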

4.2.2 Shortest Path Routing:

The concept of a shortest path deserves some explanation. One way of measuring path length is the
number of hops. Using this metric, the paths ABC and ABE in below Fig. are equally long. Another metric is the
geographic distance in kilometers, in which case ABC is clearly much longer than ABE (assuming the figure is
drawn to scale).
Several algorithms for computing the shortest path between two nodes of a graph are known. This one is
due to Dijkstra (1959). Each node is labeled (in parentheses) with its distance from the source node along the best
known path. Initially, no paths are known, so all nodes are labeled with infinity. As the algorithm proceeds and
paths are found, the labels may change, reflecting better paths. A label may be either tentative or permanent.
Initially, all labels are tentative. When it is discovered that a label represents the shortest possible path from the
source to that node, it is made permanent and never changed thereafter.
To illustrate how the labeling algorithm works, look at the weighted, undirected graph of Fig.(a), where the
weights represent, for example, distance. We want to find the shortest path from A to D. We start out by marking
node A as permanent, indicated by a filled-in circle. Then we examine, in turn, each of the nodes adjacent to A
(the working node), relabeling each one with the distance to A. Whenever a node is relabeled, we also label it with
the node from which the probe was made so that we can reconstruct the final path later. Having examined each of
the nodes adjacent to A, we examine all the tentatively labeled nodes in the whole graph and make the one with
the smallest label permanent, as shown in Fig.(b). This one becomes the new working node.
We now start at B and examine all nodes adjacent to it. If the sum of the label on B and the distance from B
to the node being considered is less than the label on that node, we have a shorter path, so the node is relabeled.
The entire graph is searched for the tentatively-labeled node with the smallest value. This node is made permanent
and becomes the working node for the next round.
Look at Fig.(c). At that point we have just made E permanent. Suppose that there were a shorter path than
ABE, say AXYZE. There are two possibilities: either node Z has already been made permanent, or it has not been.
If it has, then E has already been probed (on the round following the one when Z was made permanent), so the
AXYZE path has not escaped our attention and thus cannot be a shorter path.
Now consider the case where Z is still tentatively labeled. Either the label at Z is greater than or equal to
that at E, in which case AXYZE cannot be a shorter path than ABE, or it is less than that of E, in which case Z and
not E will become permanent first, allowing E to be probed from Z.
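The labeling procedure above can be sketched in Python as follows; the eight-node graph and its weights are
assumptions standing in for Fig.(a), which is not reproduced in these notes:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path by Dijkstra's labeling algorithm.

    `graph` maps node -> {neighbour: edge weight}. Labels start at
    infinity (tentative); a node's label becomes permanent when it is
    popped with the smallest tentative value. Returns (distance, path).
    """
    dist = {n: float("inf") for n in graph}
    prev = {}
    dist[src] = 0
    pq = [(0, src)]
    permanent = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in permanent:
            continue                 # stale heap entry
        permanent.add(node)          # label is now permanent, never changed
        if node == dst:
            break
        for nbr, w in graph[node].items():
            if d + w < dist[nbr]:    # found a better tentative path
                dist[nbr] = d + w
                prev[nbr] = node     # remember the probing node for path reconstruction
                heapq.heappush(pq, (dist[nbr], nbr))
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return dist[dst], path[::-1]

# Illustrative weighted, undirected graph (weights are assumed):
g = {"A": {"B": 2, "G": 6}, "B": {"A": 2, "C": 7, "E": 2},
     "C": {"B": 7, "D": 3}, "D": {"C": 3, "H": 2},
     "E": {"B": 2, "F": 2, "G": 1}, "F": {"E": 2, "H": 2},
     "G": {"A": 6, "E": 1, "H": 4}, "H": {"D": 2, "F": 2, "G": 4}}
```

For this graph, dijkstra(g, "A", "D") follows the path A-B-E-F-H-D rather than the fewer-hop A-G-H-D, because the labels track total weight, not hop count.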

4.2.3 Flooding:

Another static algorithm is flooding, in which every incoming packet is sent out on every outgoing line
except the one it arrived on. Flooding obviously generates vast numbers of duplicate packets, in fact, an infinite
number unless some measures are taken to damp the process. One such measure is to have a hop counter
contained in the header of each packet, which is decremented at each hop, with the packet being discarded when
the counter reaches zero. Ideally, the hop counter should be initialized to the length of the path from source to
destination. If the sender does not know how long the path is, it can initialize the counter to the worst case,
namely, the full diameter of the subnet.
An alternative technique for damming the flood is to keep track of which packets have been flooded, to
avoid sending them out a second time. One way to achieve this is to have the source router put a sequence number in
each packet it receives from its hosts. Each router then needs a list per source router telling which sequence
numbers originating at that source have already been seen.
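A toy sketch combining both damping measures described above (a hop counter plus per-source sequence
numbers); the four-router topology is invented for the example:

```python
def flood(topology, src, max_hops, seq=0):
    """Flooding with a hop counter and per-source sequence numbers.

    Every router forwards an incoming packet on all lines except the one
    it arrived on, discards it when the hop counter reaches zero, and
    keeps a per-router set of (source, sequence number) pairs to drop
    duplicates. Returns the set of routers the packet reached.
    """
    seen = {r: set() for r in topology}      # (src, seq) pairs already flooded
    delivered = set()
    frontier = [(src, None, max_hops)]       # (router, arrival line, hops left)
    while frontier:
        nxt = []
        for router, came_from, hops in frontier:
            if (src, seq) in seen[router]:
                continue                     # duplicate: do not flood a second time
            seen[router].add((src, seq))
            delivered.add(router)
            if hops == 0:
                continue                     # hop counter exhausted: discard
            for nbr in topology[router]:
                if nbr != came_from:         # every line except the one it arrived on
                    nxt.append((nbr, router, hops - 1))
        frontier = nxt
    return delivered

# Hypothetical four-router topology:
g = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
```

With max_hops set to at least the subnet diameter, every router is reached exactly once despite the duplicate copies generated in transit.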
4.2.4 Intra- and Interdomain Routing:

Today, an internet can be so large that one routing protocol cannot handle the task of updating the routing
tables of all routers. For this reason, an internet is divided into autonomous systems. An autonomous system (AS)
is a group of networks and routers under the authority of a single administration. Routing inside an autonomous
system is referred to as intradomain routing. Routing between autonomous systems is referred to as interdomain
routing.

4.2.5 Distance Vector Routing:

In distance vector routing, the least-cost route between any two nodes is the route with minimum distance.
In this protocol, as the name implies, each node maintains a vector (table) of minimum distances to every node.
The table at each node also guides the packets to the desired node by showing the next stop in the route.

1. Initialization: At the beginning, each node can know only the distance between itself and its immediate
neighbors, those directly connected to it. So for the moment, we assume that each node can send a message
to the immediate neighbors and find the distance between itself and these neighbors. Below figure shows
the initial tables for each node. The distance for any entry that is not a neighbor is marked as infinite
(unreachable).

2. Sharing: The whole idea of distance vector routing is the sharing of information between neighbors.
Although node A does not know about node E, node C does. So if node C shares its routing table with A,
node A can also know how to reach node E. On the other hand, node C does not know how to reach node
D, but node A does. If node A shares its routing table with node C, node C also knows how to reach node
D. In distance vector routing, each node shares its routing table with its immediate neighbors periodically
and when there is a change.
3. Updating: When a node receives a two-column table from a neighbor, it needs to update its routing table.
Updating takes three steps:
1) The receiving node needs to add the cost between itself and the sending node to each value in the
second column. If node C claims that its distance to a destination is x mi, and the distance between
A and C is y mi, then the distance between A and that destination, via C, is x + y mi.
2) The receiving node needs to add the name of the sending node to each row as the third column if the
receiving node uses information from any row.
3) The receiving node needs to compare each row of its old table with the corresponding row of the
modified version of the received table.
a. If the next-node entry is different, the receiving node chooses the row with the smaller cost.
If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses the new row.
Below Figure shows how node A updates its routing table after receiving the partial table from node C.
Each node can update its table by using the tables received from other nodes. In a short time, if
there is no change in the network itself, such as a failure in a link, each node reaches a stable condition in
which the contents of its table remains the same. The below fig shows the full update of a system of five
nodes with their corresponding tables
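The three-step update can be sketched as follows. The node names echo the A/C example above, but the costs
and tables are invented, since the notes' figures are not reproduced here:

```python
INF = float("inf")

def dv_update(own, from_nbr, nbr_table, link_cost):
    """Apply the distance vector update rules described above.

    `own` and `nbr_table` map destination -> (cost, next hop). Each
    advertised cost is first increased by the cost of the link to the
    sender (step 1). A row is then replaced if its old next hop was the
    sender (rule b), or if the new cost is strictly smaller (rule a;
    ties keep the old row). Returns the updated table as a new dict.
    """
    updated = dict(own)
    for dest, (cost, _) in nbr_table.items():
        new_cost = cost + link_cost          # step 1: add cost to the sending node
        old_cost, old_next = updated.get(dest, (INF, None))
        if old_next == from_nbr:             # same next hop: always take the new row
            updated[dest] = (new_cost, from_nbr)
        elif new_cost < old_cost:            # different next hop: keep the cheaper row
            updated[dest] = (new_cost, from_nbr)
    return updated

# Illustrative tables: A's own table, and what neighbour C advertises.
a_table = {"A": (0, None), "B": (5, "B"), "C": (2, "C"), "D": (3, "D"), "E": (INF, None)}
c_table = {"A": (2, "A"), "B": (3, "B"), "C": (0, None), "D": (INF, None), "E": (4, "E")}
updated = dv_update(a_table, "C", c_table, 2)
```

After the update A has learned a route to E via C (cost 2 + 4 = 6), while its tied route to B is left unchanged, matching rule a.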

The Count-to-Infinity Problem:

A problem with distance vector routing is instability, which means that a network using this protocol can
become unstable. To understand the problem, let us look at the scenario depicted in below Figure.

A system with three nodes. We have shown only the portions of the routing table needed for our
discussion. At the beginning, both nodes A and B know how to reach node X. But suddenly, the link between A
and X fails. Node A changes its table. If A can send its table to B immediately, everything is fine. However, the
system becomes unstable if B sends its routing table to A before receiving A's routing table. Node A receives the
update and, assuming that B has found a way to reach X, immediately updates its routing table. Based on the
triggered update strategy, A sends its new update to B. Now B thinks that something has been changed around A
and updates its routing table. The cost of reaching X increases gradually until it reaches infinity. At this moment,
both A and B know that X cannot be reached. However, during this time the system is not stable.
A few solutions have been proposed for instability of this kind.
1. Defining Infinity: The first obvious solution is to redefine infinity as a smaller number, such as 16.
However, this means that distance vector routing cannot be used in large systems: the size of the
network, in each direction, cannot exceed 15 hops.
2. Split Horizon: Another solution is called split horizon. In this strategy, instead of flooding the table
through each interface, each node sends only part of its table through each interface. Node B eliminates
the last line of its routing table before it sends it to A. In this case, node A keeps the value of infinity as
the distance to X. Later when node A sends its routing table to B, node B also corrects its routing table.
The system becomes stable after the first update: both node A and B know that X is not reachable.
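A toy replay of the two-node instability, using 16 as "infinity" as in the first solution; the starting costs
and the A-B link cost are assumptions, so only the counting-up behaviour is the point:

```python
INFINITY = 16   # "infinity" redefined as a small number

def count_to_infinity(link_ab=1):
    """Replay the two-node count-to-infinity scenario after A loses X.

    B keeps advertising its (now stale) route to X back to A, and each
    exchange raises the believed cost by the A-B link cost until both
    nodes reach INFINITY. Returns the number of table exchanges needed.
    """
    a_cost = INFINITY            # A's direct link to X has just failed
    b_cost = 2                   # B still believes it reaches X via A at cost 2
    exchanges = 0
    while a_cost < INFINITY or b_cost < INFINITY:
        a_cost = min(INFINITY, b_cost + link_ab)   # A trusts B's stale advertisement
        exchanges += 1
        if a_cost >= INFINITY:
            break
        b_cost = min(INFINITY, a_cost + link_ab)   # B updates from A's new cost
        exchanges += 1
    return exchanges
```

With split horizon, B would simply omit its X row when talking to A, and the system would stabilize after a single exchange instead of counting up.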

4.2.6 Hierarchical Routing:

As networks grow in size, the router routing tables grow proportionally. Not only is router memory
consumed by ever-increasing tables, but more CPU time is needed to scan them and more bandwidth is needed to
send status reports about them.
When hierarchical routing is used, the routers are divided into what we will call regions, with each router
knowing all the details about how to route packets to destinations within its own region, but knowing nothing
about the internal structure of other regions.
For huge networks, a two-level hierarchy may be insufficient; it may be necessary to group the regions
into clusters, the clusters into zones, the zones into groups, and so on, until we run out of names for
aggregations.
Below Fig. (a) gives a quantitative example of routing in a two-level hierarchy with five regions. The full
routing table for router 1A has 17 entries, as shown in Fig. (b). When routing is done hierarchically, as in Fig. (c),
there are entries for all the local routers as before, but all other regions have been condensed into a single router, so
all traffic for region 2 goes via the 1B -2A line, but the rest of the remote traffic goes via the 1C -3B line.
Hierarchical routing has reduced the table from 17 to 7 entries. As the ratio of the number of regions to the number
of routers per region grows, the savings in table space increase.
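The condensation step can be sketched as follows; the region and router names are made up, on a smaller
scale than the five-region figure:

```python
def condense(full_table, my_region):
    """Condense a full routing table into a hierarchical one.

    `full_table` maps "region.router" -> (outgoing line, hops). Entries
    for the local region are kept in full; each remote region collapses
    into a single entry using its minimum-hop route.
    """
    condensed = {}
    for dest, (line, hops) in full_table.items():
        region, _, _ = dest.partition(".")
        if region == my_region:
            condensed[dest] = (line, hops)          # keep local detail
        else:
            best = condensed.get(region)
            if best is None or hops < best[1]:      # one entry per remote region
                condensed[region] = (line, hops)
    return condensed

# Hypothetical full table at router 1A (three regions, seven routers):
full = {
    "1.1A": ("-", 0), "1.1B": ("1B", 1), "1.1C": ("1C", 1),
    "2.2A": ("1B", 2), "2.2B": ("1B", 3),
    "3.3A": ("1C", 3), "3.3B": ("1C", 2),
}
table = condense(full, "1")
```

Here seven entries shrink to five; the saving grows as the ratio of regions to routers per region grows, exactly as the text notes.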

4.2.7 Broadcast Routing:

In some applications, hosts need to send messages to many or all other hosts. Sending a packet to all
destinations simultaneously is called broadcasting. Various methods have been proposed for doing it.
1. One broadcasting method that requires no special features from the subnet is for the source to simply
send a distinct packet to each destination. Not only is the method wasteful of bandwidth, but it also
requires the source to have a complete list of all destinations.
2. Flooding is another obvious candidate. Although flooding is ill-suited for ordinary point-to-point
communication, for broadcasting it might rate serious consideration. The problem with flooding as a
broadcast technique is the same problem it has as a point-to-point routing algorithm: it generates too
many packets and consumes too much bandwidth.
3. Reverse path forwarding is remarkably simple once it has been pointed out. When a broadcast packet
arrives at a router, the router checks to see if the packet arrived on the line that is normally used for sending
packets to the source of the broadcast. If so, there is an excellent chance that the broadcast packet itself
followed the best route from the router and is therefore the first copy to arrive at the router. This being the
case, the router forwards copies of it onto all lines except the one it arrived on. If, however, the broadcast
packet arrived on a line other than the preferred one for reaching the source, the packet is discarded as a
likely duplicate.
An example of reverse path forwarding is shown in Fig. Part (a) shows a subnet, part (b) shows a
sink tree for router I of that subnet, and part (c) shows how the reverse path algorithm works. On the first
hop, I sends packets to F, H, J, and N, as indicated by the second row of the tree. Each of these packets
arrives on the preferred path to I (assuming that the preferred path falls along the sink tree) and is so
indicated by a circle around the letter. On the second hop, eight packets are generated, two by each of the
routers that received a packet on the first hop. As it turns out, all eight of these arrive at previously
unvisited routers, and five of these arrive along the preferred line. Of the six packets generated on the third
hop, only three arrive on the preferred path (at C, E, and K); the others are duplicates. After five hops and
24 packets, the broadcasting terminates, compared with four hops and 14 packets had the sink tree been
followed exactly.
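The forwarding decision at one router can be sketched like this; F's neighbour list and its preferred line
toward the broadcast source I are assumptions for illustration:

```python
def reverse_path_forward(neighbors, next_hop, router, in_line):
    """Reverse path forwarding check at one router.

    `next_hop[router]` is the line this router normally uses to send
    unicast packets toward the broadcast source. The copy is forwarded
    on every other line only if it arrived on that preferred line;
    otherwise it is discarded as a likely duplicate.
    """
    if in_line != next_hop[router]:
        return []                                # likely duplicate: discard
    return [n for n in neighbors[router] if n != in_line]

# Hypothetical router F with three lines; its preferred line toward I is I itself.
neighbors = {"F": ["I", "A", "B"]}
next_hop = {"F": "I"}
```

A copy arriving at F from I fans out on the other two lines; a copy arriving from A is silently dropped, which is what bounds the duplicate traffic.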

4.2.8 Multicast Routing:


Some applications require that widely-separated processes work together in groups, for example, a group
of processes implementing a distributed database system. In these situations, it is frequently necessary for one
process to send a message to all the other members of the group.
Sending a message to such a group is called multicasting, and its routing algorithm is called multicast
routing.
Multicasting requires group management. Some way is needed to create and destroy groups, and to
allow processes to join and leave groups. How these tasks are accomplished is not of concern to the routing
algorithm. What is of concern is that when a process joins a group, it informs its host of this fact. It is important
that routers know which of their hosts belong to which groups. Either hosts must inform their routers about
changes in group membership, or routers must query their hosts periodically.
To do multicast routing, each router computes a spanning tree covering all other routers. For example, in
Fig. (a) we have two groups, 1 and 2. Some routers are attached to hosts that belong to one or both of these groups,
as indicated in the figure. A spanning tree for the leftmost router is shown in Fig. (b). When a process sends a
multicast packet to a group, the first router examines its spanning tree and prunes it, removing all lines that do
not lead to hosts that are members of the group. In our example, Fig. (c) shows the pruned spanning tree for group
1. Similarly, Fig. (d) shows the pruned spanning tree for group 2. Multicast packets are forwarded only along the
appropriate spanning tree.

Various ways of pruning the spanning tree are possible. The simplest one can be used if link state routing is
used and each router is aware of the complete topology, including which hosts belong to which groups. Then the
spanning tree can be pruned, starting at the end of each path, working toward the root, and removing all routers
that do not belong to the group in question.
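The leaf-to-root pruning can be sketched as follows; the spanning tree and the group membership are invented
for the example:

```python
def prune(tree, root, members):
    """Prune a spanning tree for one multicast group.

    `tree` maps node -> list of children; `members` is the set of nodes
    with group members attached. Works from the leaves toward the root:
    a subtree is kept only if it contains at least one member. Returns
    the pruned child lists.
    """
    pruned = {}
    def keep(node):
        kept = [c for c in tree.get(node, []) if keep(c)]   # recurse to leaves first
        pruned[node] = kept
        return bool(kept) or node in members
    keep(root)
    return pruned

# Hypothetical spanning tree rooted at A; D and E host group members.
tree = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"]}
pruned = prune(tree, "A", {"D", "E"})
```

The memberless leaf F drops out of C's child list, so multicast packets are forwarded only along branches that lead to group members.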

4.2.9 Routing for Mobile Hosts:

The mobile hosts introduce a new complication: to route a packet to a mobile host, the network first has to
find it. The subject of incorporating mobile hosts into a network is very young, but in this section we will sketch
some of the issues and give a possible solution.
The model of the world that network designers typically use is shown in below Fig. Here we have a WAN
consisting of routers and hosts. Connected to the WAN are LANs, MANs, and wireless cells of the type

Hosts that never move are said to be stationary. In contrast, we can distinguish two other kinds of hosts.
Migratory hosts are basically stationary hosts who move from one fixed site to another from time to time but use
the network only when they are physically connected to it. Roaming hosts actually compute on the run and want
to maintain their connections as they move around. We will use the term mobile hosts to mean either of the
latter two categories, that is, all hosts that are away from home and still want to be connected.
In the above Fig. the world is divided up (geographically) into small units. Let us call them areas, where
an area is typically a LAN or wireless cell. Each area has one or more foreign agents, which are processes that
keep track of all mobile hosts visiting the area. In addition, each area has a home agent, which keeps track of
hosts whose home is in the area, but who are currently visiting another area.
When a new host enters an area, either by connecting to it (e.g., plugging into the LAN) or just wandering
into the cell, its computer must register itself with the foreign agent there.
The registration procedure typically works like this:
1. Periodically, each foreign agent broadcasts a packet announcing its existence and address. A newly-
arrived mobile host may wait for one of these messages, but if none arrives quickly enough, the
mobile host can broadcast a packet saying: Are there any foreign agents around?
2. The mobile host registers with the foreign agent, giving its home address, current data link layer
address, and some security information.
3. The foreign agent contacts the mobile host's home agent and says: One of your hosts is over here.
The message from the foreign agent to the home agent contains the foreign agent's network address.
4. The home agent examines the security information, which contains a timestamp, to prove that it
was generated within the past few seconds. If it is happy, it tells the foreign agent to proceed.
5. When the foreign agent gets the acknowledgement from the home agent, it makes an entry in its
tables and informs the mobile host that it is now registered.
Ideally, when a host leaves an area, that, too, should be announced to allow deregistration, but many users
abruptly turn off their computers when done.

4.3 CONGESTION CONTROL ALGORITHMS:

When too many packets are present in the subnet, performance degrades. This situation is called
congestion.
Below Figure depicts the symptom. When the number of packets dumped into the subnet by the hosts is
within its carrying capacity, they are all delivered and the number delivered is proportional to the number sent.
However, as traffic increases too far, the routers are no longer able to cope and they begin losing packets. This
tends to make matters worse. At very high traffic, performance collapses completely and almost no packets are
delivered.

Congestion can be brought on by several factors:


• If, all of a sudden, streams of packets begin arriving on three or four input lines and all need the same
output line, a queue will build up. If there is insufficient memory to hold all of them, packets will be lost.
• Slow processors can also cause congestion. If the routers' CPUs are slow at performing the bookkeeping
tasks required of them, queues can build up, even though there is excess line capacity.
• Low-bandwidth lines can also cause congestion. Upgrading the lines but not changing the processors, or
vice versa, often helps a little, but frequently just shifts the bottleneck.
Difference between congestion control and flow control:
• Congestion control has to do with making sure the subnet is able to carry the offered traffic. It is a global
issue, involving the behavior of all the hosts, all the routers, and the store-and-forward processing within the
routers.
• Flow control, in contrast, relates to the point-to-point traffic between a given sender and a given receiver.
Its job is to make sure that a fast sender cannot continually transmit data faster than the receiver is able to
absorb it.

4.3.1 General Principles of Congestion Control:

In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion
control (prevention) and closed-loop congestion control (removal) as shown in Figure.
1) Open-Loop Congestion Control (Congestion Prevention Policies): In open-loop congestion
control, policies are applied to prevent congestion before it happens. In these mechanisms, congestion
control is handled by either the source or the destination.
a) Retransmission Policy: If the sender feels that a sent packet is lost or corrupted, the packet
needs to be retransmitted. Retransmission in general may increase congestion in the
network. The retransmission timers must be designed to optimize efficiency and at the same
time prevent congestion.
b) Window Policy: The type of window at the sender may also affect congestion. The
Selective Repeat window is better than the Go-Back-N window for congestion control.
c) Acknowledgment Policy: The acknowledgment policy imposed by the receiver may also
affect congestion. If the receiver does not acknowledge every packet it receives, it may slow
down the sender and help prevent congestion.
d) Discarding Policy: A good discarding policy by the routers may prevent congestion and at
the same time may not harm the integrity of the transmission.
e) Admission Policy: An admission policy, which is a quality-of-service mechanism, can also
prevent congestion in virtual-circuit networks. Switches in a flow first check the resource
requirement of a flow before admitting it to the network. A router can deny establishing a
virtual circuit connection if there is congestion in the network.
2) Closed-Loop Congestion Control: Closed-loop congestion control mechanisms try to alleviate congestion
after it happens. Several mechanisms have been used by different protocols. We describe a few of them
here.
a) Backpressure: The technique of backpressure refers to a congestion control mechanism in
which a congested node stops receiving data from the immediate upstream node or nodes.
This may cause the upstream node or nodes to become congested, and they, in turn, reject
data from their upstream nodes, and so on. Backpressure is a node-to-node
congestion control that starts with a node and propagates, in the opposite direction of the data
flow, to the source.

b) Choke Packet: A choke packet is a packet sent by a node to the source to inform it of
congestion. Note the difference between the backpressure and choke packet methods. In
backpressure, the warning is from one node to its upstream node, although the warning may
eventually reach the source station. In the choke packet method, the warning is from the
router, which has encountered congestion, to the source station directly.
c) Implicit Signaling: In implicit signaling, there is no communication between the congested
node or nodes and the source. The source guesses that there is congestion somewhere in the
network from other symptoms. For example, when a source sends several packets and there
is no acknowledgment for a while, one assumption is that the network is congested.
d) Explicit Signaling: The node that experiences congestion can explicitly send a signal to the
source or destination. The explicit signaling method, however, is different from the choke
packet method. In the choke packet method, a separate packet is used for this purpose; in the
explicit signaling method, the signal is included in the packets that carry data.
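The choke-packet reaction in (b) above can be sketched as follows. The 50% rate cut and the 10% recovery step are illustrative assumptions, not values mandated by any protocol.

```python
# Sketch of a source's reaction to choke packets. The 50% rate cut and
# the 10% recovery step are illustrative assumptions, not protocol values.

class ChokeAwareSender:
    def __init__(self, rate_pps, min_rate=1.0):
        self.rate = float(rate_pps)   # current sending rate, packets/s
        self.min_rate = min_rate

    def on_choke_packet(self):
        # Reduce the flow by a certain percentage (here: halve it).
        self.rate = max(self.min_rate, self.rate * 0.5)

    def on_quiet_interval(self):
        # No choke packets for a while: cautiously raise the rate again.
        self.rate *= 1.10

s = ChokeAwareSender(rate_pps=1000)
s.on_choke_packet()    # rate drops to 500 packets/s
s.on_choke_packet()    # rate drops to 250 packets/s
s.on_quiet_interval()  # rate climbs back up (about 275 packets/s)
```

Each choke packet halves the rate, while quiet intervals let it creep back up, so the source converges toward whatever rate the congested router can tolerate.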

4.3.2 Congestion Control in Virtual-Circuit Subnets:

One technique that is widely used to keep congestion that has already started from getting worse is
admission control. The idea is simple: once congestion has been signaled, no more virtual circuits are set up until
the problem has gone away. Thus, attempts to set up new transport layer connections fail.
An alternative approach is to allow new virtual circuits but carefully route all new virtual circuits around
problem areas. For example, consider the subnet of Fig. (a), in which two routers are congested, as indicated.
Suppose that a host attached to router A wants to set up a connection to a host attached to router B.
Normally, this connection would pass through one of the congested routers. To avoid this situation, we can redraw
the subnet as shown in Fig. (b), omitting the congested routers and all of their lines.

4.3.3 Congestion Control in Datagram Subnets:

Some approaches that can be used in datagram subnets for congestion control are:
1. The Warning Bit: A special bit in the packet header is set by the router to warn the source when
congestion is detected. The bit is copied and piggybacked on the ACK sent back to the sender. The
sender monitors the number of ACK packets it receives with the warning bit set and adjusts its
transmission rate accordingly.
2. Choke Packets: A more direct way of telling the source to slow down. A choke packet is a control
packet generated at a congested node and transmitted to restrict traffic flow. The source, on receiving
the choke packet, must reduce its transmission rate by a certain percentage. An example of a choke
packet is the ICMP Source Quench packet.
The above figure depicts the functioning of choke packets: (a) heavy traffic between nodes P and Q,
(b) node Q sends the choke packet to P, (c) the choke packet reaches P, (d) P reduces the flow and
sends a reduced flow out, (e) the reduced flow reaches node Q.
3. Hop-by-Hop Choke Packets: Over long distances or at high speeds, choke packets are not very
effective. A more efficient method is to send choke packets hop-by-hop. This requires each hop to
reduce its transmission even before the choke packet arrives at the source.
The below figure depicts the functioning of hop-by-hop choke packets: (a) heavy traffic between
nodes P and Q, (b) node Q sends the choke packet to P, (c) the choke packet reaches R and the flow
between R and Q is slowed down, (d) the choke packet reaches P, and P reduces the flow out.

4. Load Shedding: When buffers become full, routers simply discard packets. Which packet is chosen to
be the victim depends on the application and on the error strategy used in the data link layer. For a file
transfer, for example, an older packet cannot be discarded, since this would cause a gap in the received
data. For real-time voice or video, it is better to throw away old data and keep new packets. One
option is to have the application mark packets with a discard priority.
5. Random Early Discard (RED): This is a proactive approach in which the router discards one or more
packets before the buffer becomes completely full. Each time a packet arrives, the RED algorithm
computes the average queue length, avg. If avg is lower than some lower threshold, congestion is
assumed to be minimal or non-existent and the packet is queued. If avg is greater than some upper
threshold, congestion is assumed to be serious and the packet is discarded. If avg is between the two
thresholds, this might indicate the onset of congestion. The probability of congestion is then calculated.
6. Jitter: For applications such as audio and video streaming, it does not matter much if the packets take
20 msec or 30 msec to be delivered, as long as the transit time is constant. The variation (i.e., standard
deviation) in the packet arrival times is called jitter.
7. Traffic Shaping: Another method of congestion control is to “shape” the traffic before it enters the
network. Traffic shaping controls the rate at which packets are sent (not just how many). Used in ATM
and Integrated Services networks. At connection set-up time, the sender and carrier negotiate a traffic
pattern (shape). Two traffic shaping algorithms are: Leaky Bucket and Token Bucket.
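The RED decision rule from item 5 above can be sketched as follows. The thresholds, the maximum drop probability, and the averaging weight are illustrative assumptions, not standardized constants.

```python
import random

# Sketch of the RED decision rule. MIN_TH, MAX_TH, MAX_P, and W are
# illustrative assumptions, not standardized constants.

MIN_TH, MAX_TH = 5.0, 15.0   # lower/upper thresholds (packets)
MAX_P = 0.1                  # drop probability at the upper threshold
W = 0.2                      # weight for the average queue length

def red_update(avg, queue_len):
    """Exponentially weighted moving average of the queue length."""
    return (1 - W) * avg + W * queue_len

def red_should_drop(avg):
    """Decide whether to discard the arriving packet."""
    if avg < MIN_TH:
        return False     # congestion minimal or non-existent: queue it
    if avg >= MAX_TH:
        return True      # congestion serious: discard the packet
    # Between the thresholds: the onset of congestion. Drop with a
    # probability growing linearly from 0 to MAX_P.
    p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

print(red_should_drop(2.0))   # False
print(red_should_drop(20.0))  # True
```

Dropping a few packets early, before the buffer overflows, makes senders that react to loss slow down while the queue is still manageable.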
The Leaky Bucket Algorithm:
Imagine a bucket with a small hole in the bottom, as illustrated in Fig. (a). No matter the rate at which
water enters the bucket, the outflow is at a constant rate, ρ, when there is any water in the bucket and zero when
the bucket is empty. Also, once the bucket is full, any additional water entering it spills over the sides and is lost.

The same idea can be applied to packets, as shown in Fig. (b). Conceptually, each host is connected to the
network by an interface containing a leaky bucket, that is, a finite internal queue. If a packet arrives at the queue
when it is full, the packet is discarded. In other words, if one or more processes within the host try to send a packet
when the maximum number is already queued, the new packet is unceremoniously discarded. This arrangement
can be built into the hardware interface or simulated by the host operating system. It was first proposed by Turner
(1986) and is called the leaky bucket algorithm. In fact, it is nothing other than a single-server queuing system
with constant service time.
Bursty traffic is converted to uniform traffic by the leaky bucket. In practice the bucket is a finite queue
that outputs at a finite rate. The leaky bucket enforces a constant output rate (the average rate) regardless of the
burstiness of the input, and does nothing when the input is idle. The host injects one packet per clock tick onto the
network. This results in a uniform flow of packets, smoothing out bursts and reducing congestion. When packets
are all the same size (as with ATM cells), sending one packet per tick works well. For variable-length packets, though, it is better
to allow a fixed number of bytes per tick; e.g., 1024 bytes per tick allows one 1024-byte packet, two 512-byte
packets, or four 256-byte packets per tick.
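The byte-counting variant just described can be sketched as follows. The 1024-byte budget mirrors the example above; the queue contents are illustrative.

```python
from collections import deque

# Sketch of the byte-counting leaky bucket: each clock tick, a counter
# is reset to n bytes (1024 here, as in the example above) and queued
# packets are sent while they fit in the remaining budget.

def leaky_bucket_tick(queue, n=1024):
    """Run one clock tick; return the sizes of the packets sent."""
    budget = n
    sent = []
    while queue and queue[0] <= budget:
        size = queue.popleft()
        budget -= size
        sent.append(size)
    return sent

q = deque([512, 512, 256, 1024])
print(leaky_bucket_tick(q))  # [512, 512]: two 512-byte packets fit in one tick
print(leaky_bucket_tick(q))  # [256]: the 1024-byte packet must wait
print(leaky_bucket_tick(q))  # [1024]
```

Note how the 1024-byte packet is held over to the third tick: the bucket never exceeds its per-tick byte budget, no matter how bursty the arrivals are.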

The Token Bucket Algorithm:


The leaky bucket algorithm enforces a rigid output pattern at the average rate, no matter how bursty the
traffic is. For many applications, it is better to allow the output to speed up somewhat when large bursts arrive, so a
more flexible algorithm is needed, preferably one that never loses data. One such algorithm is the token bucket
algorithm. In this algorithm, the leaky bucket holds tokens, generated by a clock at the rate of one token every ∆T
sec. In Fig. (a) we see a bucket holding three tokens, with five packets waiting to be transmitted. For a packet to
be transmitted, it must capture and destroy one token. In Fig. (b) we see that three of the five packets have gotten
through, but the other two are stuck waiting for two more tokens to be generated.
The token bucket algorithm provides a different kind of traffic shaping than that of the leaky bucket
algorithm. The leaky bucket algorithm does not allow idle hosts to save up permission to send large bursts later.
The token bucket algorithm does allow saving, up to the maximum size of the bucket, n. This property means that
bursts of up to n packets can be sent at once, allowing some burstiness in the output stream and giving faster
response to sudden bursts of input.
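A minimal sketch of the token bucket mechanics, mirroring the Fig. (a)/(b) scenario (three saved tokens, five waiting packets); the bucket size of 8 is an assumption.

```python
# Sketch of the token bucket. The bucket size n = 8 is an assumption;
# the three saved tokens and five waiting packets mirror Fig. (a)/(b).

class TokenBucket:
    def __init__(self, n, tokens=0):
        self.n = n                  # maximum number of saved tokens
        self.tokens = tokens        # tokens currently in the bucket

    def tick(self):
        # One token is generated every ΔT sec, saved up to the bucket size.
        self.tokens = min(self.n, self.tokens + 1)

    def try_send(self):
        # To be transmitted, a packet must capture and destroy one token.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

b = TokenBucket(n=8, tokens=3)
sent = sum(b.try_send() for _ in range(5))
print(sent)  # 3: three packets get through, two wait for new tokens
```

Unlike the leaky bucket, an idle host accumulates tokens (up to n), so it can later emit a burst of up to n packets at once.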

4.4 QUALITY OF SERVICE:

“Quality of Service (QoS) refers to the capability of a network to provide better service to
selected network traffic over various technologies, including Frame Relay, Asynchronous Transfer
Mode (ATM), Ethernet and 802.11 networks, SONET, and IP-routed networks that may use any or all
of these underlying technologies. The primary goal of QoS is to provide priority, including dedicated
bandwidth, controlled jitter and latency (required by some real-time and interactive traffic), and
improved loss characteristics.”

Basic QoS Architecture:

• Admission Control: Ensures that new flows are admitted to the network only if the QoS of all the existing
flows, along with the new flow, can be satisfied. The network does not admit new flows if all the network
resources are occupied servicing the existing flows based on their QoS requirements
• Classification and Marking: Classifies packets based on their application QoS requirements, and
marks the packets accordingly
• Policing and Markdown: Monitors the flow characteristics and takes appropriate actions based on the
flow's QoS. Traffic policing monitors the flow of traffic and marks packets down for appropriate action
(reduced priority, drop, etc.)
• Scheduling: Based on the markdown by the traffic policers, schedules the traffic into the output buffers of an
interface. Example: priority queuing
• Traffic Shaping: Controls the outgoing traffic rate irrespective of the incoming traffic rate (e.g., constant-bit-
rate output from the interface buffer)

4.5 THE NETWORK LAYER IN THE INTERNET:

4.5.1 The IP Protocol:


An appropriate place to start our study of the network layer in the Internet is the format of the IP datagrams
themselves. An IP datagram consists of a header part and a text part. The header has a 20-byte fixed part and a
variable length optional part. The header format is shown in Fig.

 The Version field keeps track of which version of the protocol the datagram belongs to. By including the
version in each datagram, it becomes possible to have the transition between versions take years, with
some machines running the old version and others running the new one.
 Since the header length is not constant, a field in the header, IHL, is provided to tell how long the header is, in
32-bit words. The minimum value is 5, which applies when no options are present. The maximum value of
this 4-bit field is 15, which limits the header to 60 bytes, and thus the Options field to 40 bytes. For some
options, such as one that records the route a packet has taken, 40 bytes is far too small, making that option
useless.
 The Type of service field is one of the few fields that has changed its meaning (slightly) over the years. It
was and is still intended to distinguish between different classes of service. Originally, the 6-bit field
contained (from left to right), a three-bit Precedence field and three flags, D, T, and R. The Precedence
field was a priority, from 0 (normal) to 7 (network control packet). The three flag bits allowed the host to
specify what it cared most about from the set {Delay, Throughput, Reliability}.
 The Total length includes everything in the datagram—both header and data. The maximum length is
65,535 bytes. At present, this upper limit is tolerable, but with future gigabit networks, larger datagrams
may be needed.
 The Identification field is needed to allow the destination host to determine which datagram a newly
arrived fragment belongs to. All the fragments of a datagram contain the same Identification value.
 Next comes an unused bit and then two 1-bit fields. DF stands for Don't Fragment. It is an order to the
routers not to fragment the datagram because the destination is incapable of putting the pieces back
together again.
 MF stands for More Fragments. All fragments except the last one have this bit set. It is needed to know
when all fragments of a datagram have arrived.
 The Fragment offset tells where in the current datagram this fragment belongs. All fragments except the
last one in a datagram must be a multiple of 8 bytes, the elementary fragment unit.
 The Time to live field is a counter used to limit packet lifetimes. It is supposed to count time in seconds,
allowing a maximum lifetime of 255 sec. It must be decremented on each hop and is supposed to be
decremented multiple times when queued for a long time in a router. In practice, it simply counts hops.
When it hits zero, the packet is discarded and a warning packet is sent back to the source host.
 The Protocol field tells the network layer which transport process to give the datagram to. TCP is one
possibility, but so are UDP and some others. The numbering of protocols is global across the entire Internet.
 The Header checksum verifies the header only. Such a checksum is useful for detecting errors generated
by bad memory words inside a router. The algorithm is to add up all the 16-bit half words as they arrive,
using one's complement arithmetic and then take the one's complement of the result.
 The Source address and Destination address indicate the network number and host number. Every host
and router on the Internet has an IP address, which encodes its network number and host number. The
combination is unique: in principle, no two machines on the Internet have the same IP address. All IP
addresses are 32 bits long and are used in the Source address and Destination address fields of IP packets.
 The Options field was designed to provide an escape to allow subsequent versions of the protocol to
include information not present in the original design. Some of the options are:
1. The Security option tells how secret the information is. In theory, a military router might use this field
to specify not to route through certain countries.
2. The Strict source routing option gives the complete path from source to destination as a sequence of
IP addresses.
3. The Loose source routing option requires the packet to traverse the list of routers specified, and in the
order specified, but it is allowed to pass through other routers on the way.
4. The Record route option tells the routers along the path to append their IP address to the option field.
5. The Timestamp option is like the Record route option, except that in addition to recording its 32-bit IP
address, each router also records a 32-bit timestamp.
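The one's complement checksum of the Header checksum field above can be sketched as follows. The 20-byte sample header is a commonly used illustrative example with the checksum field (bytes 10-11) zeroed, not a value taken from the figure.

```python
# Sketch of the one's complement header checksum described above. The
# 20-byte sample header is a commonly used illustrative example with
# the checksum field (bytes 10-11) zeroed, not a value from the figure.

def ip_header_checksum(header: bytes) -> int:
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]   # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)    # end-around carry
    return ~total & 0xFFFF                          # one's complement

hdr = bytes.fromhex("4500003c1c46400040060000ac100a63ac100a0c")
print(hex(ip_header_checksum(hdr)))  # 0xb1e6
```

A receiver that sums all ten words of the stored header, checksum word included, gets 0xFFFF; its complement is 0, and any other result signals a corrupted header.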

4.5.2 IP Addresses:
IP addresses were divided into the five categories listed in Fig. This allocation has come to be called
classful addressing. It is no longer used, but references to it in the literature are still common.

The class A, B, and C formats allow for up to 128 networks with 16 million hosts each, 16,384
networks with up to 64K hosts each, and 2 million networks (e.g., LANs) with up to 256 hosts each (although a few of
these are special). Class D supports multicast, in which a datagram is directed to multiple hosts. Addresses
beginning with 1111 (class E) are reserved for future use.
Network addresses, which are 32-bit numbers, are usually written in dotted decimal notation. In this
format, each of the 4 bytes is written in decimal, from 0 to 255. For example, the 32-bit hexadecimal address
C0290614 is written as 192.41.6.20. The lowest IP address is 0.0.0.0 and the highest is 255.255.255.255.
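The conversion in the example above can be sketched as follows: each of the 4 bytes of the 32-bit address is written in decimal.

```python
# Sketch of the dotted decimal conversion: write each of the 4 bytes
# of the 32-bit address in decimal, separated by dots.

def to_dotted_decimal(addr: int) -> str:
    return ".".join(str((addr >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(to_dotted_decimal(0xC0290614))  # 192.41.6.20
print(to_dotted_decimal(0xFFFFFFFF))  # 255.255.255.255
```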

Subnets:
As we have seen, all the hosts in a network must have the same network number. This property of IP
addressing can cause problems as networks grow. The solution is to allow a network to be split into several parts
for internal use but still act like a single network to the outside world.
In the Internet literature, the parts of the network (in this case, Ethernets) are called subnets.
To implement subnetting, the main router needs a subnet mask that indicates the split between network +
subnet number and host, as shown in Fig. Subnet masks are also written in dotted decimal notation, with the
addition of a slash followed by the number of bits in the network + subnet part. For the example of Fig. the subnet
mask can be written as 255.255.252.0. An alternative notation is /22, indicating that the subnet mask is 22 bits
long.
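The /22 mask arithmetic above can be sketched as follows; the helper names are our own.

```python
# Sketch of the /22 subnet mask arithmetic; the helper names are our own.

def prefix_to_mask(bits: int) -> int:
    """32-bit mask with `bits` leading ones (network + subnet part)."""
    return (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF

def dotted(addr: int) -> str:
    return ".".join(str((addr >> s) & 0xFF) for s in (24, 16, 8, 0))

print(dotted(prefix_to_mask(22)))  # 255.255.252.0
```

ANDing an incoming destination address with this mask strips the host bits, leaving the network + subnet number the router actually routes on.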

CIDR—Classless Inter Domain Routing:


The basic idea behind CIDR, which is described in RFC 1519, is to allocate the remaining IP addresses in
variable-sized blocks, without regard to the classes. If a site needs, say, 2000 addresses, it is given a block of
2048 addresses aligned on a 2048-address boundary.
Using CIDR, each IP address has a network prefix that identifies either one or several network gateways.
The length of the network prefix in IPv4 CIDR is also specified as part of the IP address and varies depending on
the number of bits needed, rather than any arbitrary class assignment structure. A destination IP address or route
that describes many possible destinations has a shorter prefix and is said to be less specific. A longer prefix
describes a destination gateway more specifically.
CIDR notation: a.b.c.d/n
where n = number of bits in the network prefix,
host bits = 32 − n,
and number of addresses in the block = 2^(32 − n).

Example: a /20 block.
Network prefix = 20 bits.
Host bits = 32 − 20 = 12.
Number of addresses = 2^12 = 4096.
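The /n arithmetic above can be sketched as:

```python
# Sketch of the /n arithmetic: a /20 prefix leaves 12 host bits,
# so the block holds 2**12 = 4096 addresses.

def cidr_block_size(prefix_len: int) -> int:
    """Number of addresses in an a.b.c.d/prefix_len block."""
    return 2 ** (32 - prefix_len)

print(cidr_block_size(20))  # 4096
print(cidr_block_size(21))  # 2048, the smallest block covering 2000 addresses
```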

NAT—Network Address Translation:

IP addresses are scarce. An ISP might have a /16 (formerly class B) address, giving it 65,534 host numbers.
If it has more customers than that, it has a problem. A quick fix came in the form of NAT (Network Address
Translation), which is described in RFC 3022 and which we will summarize below.
The basic idea behind NAT is to assign each company a single IP address (or at most, a small number of
them) for Internet traffic. Within the company, every computer gets a unique IP address, which is used for routing
intramural traffic. However, when a packet exits the company and goes to the ISP, an address translation takes
place. To make this scheme possible, three ranges of IP addresses have been declared as private.
10.0.0.0 – 10.255.255.255/8 (16,777,216 hosts)
172.16.0.0 – 172.31.255.255/12 (1,048,576 hosts)
192.168.0.0 – 192.168.255.255/16 (65,536 hosts)
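Membership in the three private ranges can be tested with the standard library, as a sketch; the sample addresses are illustrative.

```python
import ipaddress

# Sketch: testing membership in the three RFC 1918 private ranges that
# NAT relies on; the sample addresses are illustrative.

PRIVATE = [
    ipaddress.ip_network("10.0.0.0/8"),       # 16,777,216 addresses
    ipaddress.ip_network("172.16.0.0/12"),    # 1,048,576 addresses
    ipaddress.ip_network("192.168.0.0/16"),   # 65,536 addresses
]

def is_private(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE)

print(is_private("10.1.2.3"))    # True
print(is_private("172.32.0.1"))  # False: just outside 172.16.0.0/12
```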
The operation of NAT is shown in Fig. Within the company premises, every machine has a unique address
of the form 10.x.y.z. However, when a packet leaves the company premises, it passes through a NAT box that
converts the internal IP source address (a 10.x.y.z address in the figure) to the company's true, globally routable
IP address.
4.5.3 Internet Control Protocols:

1. The Internet Control Message Protocol:


The operation of the Internet is monitored closely by the routers. When something
unexpected occurs, the event is reported by the ICMP (Internet Control Message Protocol), which is
also used to test the Internet.

The DESTINATION UNREACHABLE message is used when the subnet or a router cannot locate the
destination or when a packet with the DF bit cannot be delivered because a ''small-packet'' network stands
in the way.
The TIME EXCEEDED message is sent when a packet is dropped because its counter has reached zero.
This event is a symptom that packets are looping, that there is enormous congestion, or that the timer
values are being set too low.
The PARAMETER PROBLEM message indicates that an illegal value has been detected in a header
field. This problem indicates a bug in the sending host's IP software or possibly in the software of a router
transited.
The SOURCE QUENCH message was formerly used to throttle hosts that were sending too many
packets. When a host received this message, it was expected to slow down. It is rarely used anymore
because when congestion occurs, these packets tend to add more fuel to the fire.
The REDIRECT message is used when a router notices that a packet seems to be routed wrong. It is used
by the router to tell the sending host about the probable error.
The ECHO and ECHO REPLY messages are used to see if a given destination is reachable and alive.
Upon receiving the ECHO message, the destination is expected to send an ECHO REPLY message back.
The TIMESTAMP REQUEST and TIMESTAMP REPLY messages are similar, except that the arrival
time of the message and the departure time of the reply are recorded in the reply. This facility is used to
measure network performance.

2. ARP—The Address Resolution Protocol:


Address Resolution Protocol is a communication protocol used for discovering the physical
address associated with a given network address. Typically, ARP is a network-layer-to-data-link-layer
mapping process, which is used to discover the MAC address for a given Internet Protocol address.
In order to send data to a destination, having the IP address is necessary but not sufficient; we also
need the physical address of the destination machine. ARP is used to get the physical address (MAC
address) of the destination machine.

Before sending an IP packet, the MAC address of the destination must be known. If it is not, the
sender broadcasts an ARP-discovery packet requesting the MAC address of the intended destination.
Since ARP-discovery is broadcast, every host inside the network receives this message, but the packet
is discarded by everyone except the intended receiver, the host whose IP address matches. This
receiver then sends a unicast packet with its MAC address (ARP-reply) to the sender of the ARP-
discovery packet. After the original sender receives the ARP-reply, it updates its ARP cache and starts
sending unicast messages to the destination.
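The resolution flow above can be sketched as follows; the broadcast is simulated with an in-memory table of hosts, and all names and addresses are illustrative assumptions.

```python
# Sketch of the ARP resolution flow. The broadcast is simulated with an
# in-memory table of hosts; all addresses here are illustrative.

arp_cache = {}      # IP address -> MAC address (the sender's ARP cache)

def resolve(ip, hosts):
    """Return the MAC address for ip, consulting the cache first."""
    if ip in arp_cache:
        return arp_cache[ip]            # no broadcast needed
    # ARP-discovery is broadcast: every host sees it, but only the one
    # whose IP address matches answers with a unicast ARP-reply.
    for host_ip, host_mac in hosts.items():
        if host_ip == ip:
            arp_cache[ip] = host_mac    # update the ARP cache
            return host_mac
    return None                         # nobody answered

lan = {"192.168.0.7": "aa:bb:cc:dd:ee:07"}
print(resolve("192.168.0.7", lan))  # answered by broadcast, then cached
print(resolve("192.168.0.7", {}))   # served from the ARP cache this time
```

The second lookup succeeds even with no hosts reachable, which is exactly the point of the cache: one broadcast per destination, then unicast from memory.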

3. Reverse Address Resolution Protocol (RARP):


Reverse ARP is a networking protocol used by a client machine in a local area network to
request its Internet Protocol address (IPv4) from the gateway router's ARP table. The network
administrator creates a table in the gateway router, which is used to map each MAC address to the
corresponding IP address.

When a new machine is set up, or a machine that has no storage to retain an IP address boots,
it needs an IP address for its own use. The machine therefore sends a RARP broadcast packet which
contains its own MAC address in both the sender and receiver hardware address fields.

4.6 THE NETWORK LAYER IN ATM NETWORKS:

The lowest layer that goes from source to destination, and thus involves routing and switching (i.e., is multihop), is
the network layer. The ATM layer deals with moving cells from source to destination and definitely involves routing
algorithms and protocols within the ATM switches. It also deals with global addressing. Thus functionally, the ATM layer
performs the work expected of the network layer.
The ATM layer is connection oriented, both in terms of the service it offers and the way it operates internally. The
basic element of the ATM layer is the virtual circuit (officially called a virtual channel). A virtual circuit is normally a
connection from one source to one destination. Virtual circuits are unidirectional, but a pair of circuits can be created at
the same time.
The ATM layer does not provide any acknowledgments; it leaves error control to higher layers. The reason for this
design is that ATM was designed for use on fiber optics networks, which are highly reliable. Furthermore, ATM networks
are often used for real-time traffic, such as audio and video. For this kind of traffic, retransmitting an occasional bad cell is
worse than just ignoring it.
Despite its lack of acknowledgment, the ATM layer does provide one hard guarantee: cells sent along a virtual
circuit will never arrive out of order. The ATM subnet is permitted to discard cells if congestion occurs but under no
conditions may it reorder the cells sent on a single virtual circuit.
The ATM layer supports a two-level connection hierarchy that is visible to the transport layer. Along any
transmission path from a given source to a given destination, a group of virtual circuits can be grouped together into what
is called a virtual path.
4.6.1. Cell Formats:

In the ATM layer, two interfaces are distinguished:

 UNI (User-Network Interface) - defines the boundary between a host and an ATM network.
 NNI (Network-Network Interface) - applies to the line between two ATM switches (ATM switch is the
ATM term for router).

In both cases the cells consist of a 5-byte header followed by a 48-byte payload, but the two headers are slightly
different. Cells are transmitted leftmost byte first and leftmost bit within a byte first.

 VPI (Virtual Path Identifier) - occupies bits 0 - 11. This field contains an integer selecting a particular
virtual path. In cells between a host and a network (UNI), VPI occupies in fact just bits 4 - 11, bits 0 - 3 are
called GFC (General Flow Control) field. But this field has no significance and the network ignores it. It
was originally conceived as perhaps having some utility for flow control or priority between hosts and
networks.
 VCI (Virtual Channel Identifier) - occupies bits 12 - 27. This field selects a particular virtual circuit
within the chosen virtual path.
 PTI (Payload Type) - occupies bits 28 - 30. This field defines the type of payload the cell contains. Cell
type information is provided by the user; congestion information is supplied by the network. In other words, a cell
sent with PTI 000 might arrive with PTI 010 to warn the destination of congestion underway.
 CLP (Cell Loss Priority) - occupies bit 31. This bit is set by a host to differentiate between high-priority
traffic and low-priority traffic. If congestion occurs and cells must be discarded, switches first attempt to
discard cells with CLP set to 1 before throwing out any set to 0.
 HEC (Header Error Check) - occupies bits 32 - 39. This field is a checksum over the header only; it does
not cover the payload. The chosen code can correct all single-bit errors and detect about 90% of all multibit
errors. Various studies have shown that the vast majority of errors on optical links are single-bit errors.

The NNI format is the same as the UNI format, except that the GFC field is not present and those 4 bits are used to
make VPI field 12 bits instead of 8.
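The UNI header layout above can be sketched as follows. The HEC generator polynomial x^8 + x^2 + x + 1 with the 0x55 XOR coset comes from the ATM specification (ITU-T I.432), but treat the exact constants here as assumptions.

```python
# Sketch: packing the 40-bit UNI cell header described above
# (GFC 4 bits, VPI 8, VCI 16, PTI 3, CLP 1, HEC 8). The HEC uses the
# CRC-8 polynomial x^8 + x^2 + x + 1 with the 0x55 coset per ITU-T
# I.432; treat these constants as assumptions if in doubt.

def hec(first4: bytes) -> int:
    """CRC-8 over the first four header bytes, XORed with 0x55."""
    crc = 0
    for byte in first4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

def uni_header(gfc: int, vpi: int, vci: int, pti: int, clp: int) -> bytes:
    # Leftmost bit first: GFC sits in the top 4 bits of the first byte.
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pti << 1) | clp
    first4 = word.to_bytes(4, "big")
    return first4 + bytes([hec(first4)])

# The signaling circuit from Section 4.6.2: VPI 0, VCI 5.
h = uni_header(gfc=0, vpi=0, vci=5, pti=0, clp=0)
print(h[:4].hex())  # 00000050
```

Because the HEC covers only these five bytes, a switch can verify (and even repair single-bit errors in) the header without touching the 48-byte payload.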

4.6.2. Connection Setup:

ATM supports both permanent virtual circuits (i.e., always present, like leased lines) and switched virtual
circuits (they have to be established each time they are used, like making phone calls).
Several ways are provided for setting up a connection. The normal way is to first acquire a virtual circuit
for signaling and use it. To establish such a circuit, a cell containing a request is sent on virtual path 0, virtual circuit
5. If successful, a new virtual circuit is opened on which connection setup requests and replies can be sent and
received.
Virtual circuit establishment uses six message types. Each message occupies one or more cells and
contains the message type, length, and parameters. The messages can be sent by a host to the network or by the
network to a host. Various other status and reporting messages also exist but are not mentioned here.

The normal procedure for establishing a call is for a host to send a SETUP message on a special virtual
circuit. The network then responds with CALL PROCEEDING to acknowledge receipt of the request. As the
SETUP message propagates toward the destination, it is acknowledged at each hop by CALL PROCEEDING.
When the SETUP message finally arrives, the destination host can respond with CONNECT to accept the
call. The network then sends a CONNECT ACK message to indicate that it has received the CONNECT message.
As the CONNECT message propagates back toward the originator, each switch receiving it acknowledges it with a
CONNECT ACK message, as shown in Fig. (a).
When a host wants to terminate a virtual circuit, it just sends a RELEASE message that propagates to the
other end and causes the circuit to be released. At each hop along the way, the message is acknowledged, as
shown in Fig. (b).
ATM networks allow multicast channels to be set up. A multicast channel has one sender and more than
one receiver. These are constructed by setting up a connection to one of the destinations in the usual way. Then an
ADD PARTY message is sent to attach a second destination to the virtual circuit returned by the previous call.
Additional ADD PARTY messages can be sent afterward to increase the size of the multicast group.
