
COMPUTER NETWORK

MODULE-4
Internetworking & devices:

1. Repeater – A repeater operates at the physical layer. Its job is to regenerate the signal over the
same network before it becomes too weak or corrupted, extending the distance over which the
signal can be transmitted. An important point to note about repeaters is that they do not amplify
the signal: when the signal becomes weak, they copy it bit by bit and regenerate it at the original
strength. It is a 2-port device.

2. Hub – A hub is basically a multiport repeater. A hub connects multiple wires coming from
different branches, for example, the connector in star topology which connects different stations.
Hubs cannot filter data, so data packets are sent to all connected devices. In other words, collision
domain of all hosts connected through Hub remains one. Also, they do not have intelligence to
find out best path for data packets which leads to inefficiencies and wastage.

Types of Hub
 Active Hub: - These are hubs which have their own power supply and can clean, boost
and relay the signal along the network. An active hub serves both as a repeater and as a wiring
center, and is used to extend the maximum distance between nodes.

 Passive Hub: - These are hubs which collect wiring from nodes and draw power from an
active hub. They relay signals onto the network without cleaning or boosting them
and can't be used to extend the distance between nodes.
 Intelligent Hub: - An intelligent hub works like an active hub and includes remote management
capabilities. Intelligent hubs also provide flexible data rates to network devices and enable an
administrator to monitor the traffic passing through the hub and to configure each port.

3. Bridge – A bridge operates at the data link layer. A bridge is a repeater with the add-on
functionality of filtering content by reading the MAC addresses of source and destination. It is
also used for interconnecting two LANs working on the same protocol. It has a single input and
a single output port, making it a 2-port device.

Types of Bridges

a) Transparent Bridges :- These are bridges in which the stations are completely unaware of the
bridge's existence, i.e. whether or not a bridge is added to or deleted from the network,
reconfiguration of the stations is unnecessary. These bridges make use of two processes: bridge
forwarding and bridge learning.

b) Source Routing Bridges :- In these bridges, the routing operation is performed by the source
station and the frame specifies which route to follow. The host can discover the route by sending
a special frame called a discovery frame, which spreads through the entire network using all
possible paths to the destination.
4. Switch – A switch is a multiport bridge with a buffer and a design that can boost its efficiency
(a large number of ports implies less traffic) and performance. A switch is a data link layer device. The
switch can perform error checking before forwarding data, which makes it very efficient as it does
not forward packets that have errors and forward good packets selectively to the correct port
only. In other words, the switch divides the collision domain of hosts, but the broadcast
domain remains the same.

Types of Switch
a) Unmanaged switches: These switches have a simple plug-and-play design and do not offer
advanced configuration options. They are suitable for small networks or for use as an expansion
to a larger network.
b) Managed switches: These switches offer advanced configuration options such as VLANs, QoS,
and link aggregation. They are suitable for larger, more complex networks and allow for
centralized management.
c) Smart switches: These switches have features similar to managed switches but are typically
easier to set up and manage. They are suitable for small- to medium-sized networks.
d) Layer 2 switches: These switches operate at the Data Link layer of the OSI model and are
responsible for forwarding data between devices on the same network segment.
e) Layer 3 switches: These switches operate at the Network layer of the OSI model and can route
data between different network segments. They are more advanced than Layer 2 switches and
are often used in larger, more complex networks.
f) PoE switches: These switches have Power over Ethernet capabilities, which allows them to
supply power to network devices over the same cable that carries data.
g) Gigabit switches: These switches support Gigabit Ethernet speeds, which are faster than
traditional Ethernet speeds.
h) Rack-mounted switches: These switches are designed to be mounted in a server rack and are
suitable for use in data centers or other large networks.
i) Desktop switches: These switches are designed for use on a desktop or in a small office
environment and are typically smaller in size than rack-mounted switches.
j) Modular switches: These switches have a modular design, which allows for easy expansion or
customization. They are suitable for large networks and data centers.

5. Routers – A router is a device like a switch that routes data packets based on their IP addresses.
The router is mainly a Network Layer device. Routers normally connect LANs and WANs and have
a dynamically updating routing table based on which they make decisions on routing the data
packets. The router divides the broadcast domains of hosts connected through it.
6. Gateway – A gateway, as the name suggests, is a passage to connect two networks that may work
upon different networking models. They work as messenger agents that take data from one system,
interpret it, and transfer it to another system. Gateways are also called protocol converters and can
operate at any network layer. Gateways are generally more complex than switches or routers.

IP ADDRESS :

IPv4 addresses are 32-bit binary addresses (divided into 4 octets) used by the Internet Protocol (OSI
Layer 3) for delivering packets to a device located in the same or a remote network. A MAC address
(hardware address) is a globally unique address which represents the network card and cannot be
changed. An IPv4 address is a logical, configurable address used to identify which network a host
belongs to, together with a network-specific host number. In other words, an IPv4 address consists
of two parts: a network part and a host part.
This can be compared to your home address. A letter addressed to your home address will be
delivered to your house because of this logical address. If you move to another house, your address
will change, and letters addressed to you will be sent to your new address. But the person the
letter is being delivered to, that is "you", is still the same.

IPv4 addresses are stored internally as binary numbers but they are represented in decimal
for simplicity.
An example of IPv4 address is 192.168.10.100, which is actually
11000000.10101000.00001010.01100100.

For each network, one address is used to represent the network and one address is used for
broadcast. The network address is the IPv4 address with all host bits set to "0". The broadcast
address is the IPv4 address with all host bits set to "1".

That means, for a network, the first IPv4 address is the network address and the last IPv4 address
is the broadcast address. These two addresses cannot be configured on devices. All the usable IPv4
addresses in any IP network lie between the network address and the broadcast address.

We can use the following equation to find the number of usable IPv4 addresses in a network (we
have to set aside two IPv4 addresses in each network to represent the network id and the broadcast id):
Number of usable IPv4 addresses = (2^n) - 2, where "n" is the number of bits in the host part.
Many IPv4 addresses are reserved and we cannot use those IPv4 address. There are five IPv4
address Classes and certain special addresses.
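The formula above can be checked with a short Python snippet (a minimal sketch; the host-bit counts shown are illustrative):

```python
def usable_hosts(host_bits: int) -> int:
    """Number of usable IPv4 addresses = 2^n - 2.
    Two addresses are excluded: the network id and the broadcast id."""
    return 2 ** host_bits - 2

print(usable_hosts(8))   # 254 (e.g. a Class C network, 8 host bits)
print(usable_hosts(16))  # 65534 (e.g. a Class B network, 16 host bits)
```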

Class A IPv4 addresses

"Class A" IPv4 addresses are for very large networks. The leftmost bit of the leftmost octet of a
"Class A" network is reserved as "0". The first octet of a "Class A" IPv4 address is used to identify
the network and the three remaining octets are used to identify the host in that particular network
(Network.Host.Host.Host).

The 32 bits of a "Class A" IPv4 address can be represented as


0xxxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx.

The minimum possible value for the leftmost octet in binary is 00000000 (decimal 0) and the
maximum possible value is 01111111 (decimal 127). Therefore, for a "Class A" IPv4 address, the
leftmost octet must have a value between 0 and 127 (0.X.X.X to 127.X.X.X).

The network 127.0.0.0 is known as loopback network. The IPv4 address 127.0.0.1 is used by the
host computer to send a message back to itself. It is commonly used for troubleshooting and
network testing.

Computers not connected directly to the Internet need not have globally-unique IPv4 addresses.
They need IPv4 addresses unique to their own network only. The 10.0.0.0 network, which belongs
to "Class A", is reserved for private use and can be used inside any organization.

Class B IPv4 addresses


"Class B" IPv4 addresses are used for medium-sized networks. The two leftmost bits of the leftmost
octet of a "Class B" network are reserved as "10". The first two octets of a "Class B" IPv4 address
are used to identify the network and the remaining two octets are used to identify the host in that
particular network (Network.Network.Host.Host).

The 32 bits of a "Class B" IPv4 address can be represented as


10xxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx.

The minimum possible value for the leftmost octet in binary is 10000000 (decimal 128) and the
maximum possible value is 10111111 (decimal 191). Therefore, for a "Class B" IPv4 address, the
leftmost octet must have a value between 128 and 191 (128.X.X.X to 191.X.X.X).

Network 169.254.0.0 is known as the APIPA (Automatic Private IP Addressing) network. APIPA
addresses are used when a client configured to obtain an IPv4 address automatically is unable to
contact a DHCP server.
Networks from 172.16.0.0 to 172.31.0.0 are reserved for private use.

Class C IPv4 addresses


"Class C" IPv4 addresses are commonly used for small to mid-size businesses. The three leftmost
bits of the leftmost octet of a "Class C" network are reserved as "110". The first three octets of a
"Class C" IPv4 address are used to identify the network and the remaining octet is used to identify
the host in that particular network (Network.Network.Network.Host).

The 32 bits of a "Class C" IPv4 address can be represented as


110xxxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx.

The minimum possible value for the leftmost octet in binary is 11000000 (decimal 192) and the
maximum possible value is 11011111 (decimal 223). Therefore, for a "Class C" IPv4 address, the
leftmost octet must have a value between 192 and 223 (192.X.X.X to 223.X.X.X).
Networks starting from 192.168.0.0 to 192.168.255.0 are reserved for private use.

Class D IPv4 addresses


Class D IPv4 addresses are known as multicast IPv4 addresses. Multicasting is a technique
developed to send packets from one device to many other devices, without any unnecessary packet
duplication. In multicasting, one packet is sent from a source and is replicated as needed in the
network to reach as many end-users as necessary. You cannot assign these IPv4 addresses to your
devices.

The four leftmost bits of the leftmost octet of a "Class D" network are reserved as "1110". The
other 28 bits are used to identify the group of computers the multicast message is intended for.

The minimum possible value for the leftmost octet in binary is 11100000 (decimal 224) and the
maximum possible value is 11101111 (decimal 239). Therefore, for a "Class D" IPv4 address, the
leftmost octet must have a value between 224 and 239 (224.X.X.X to 239.X.X.X).

Class E IPv4 addresses

Class E is used for experimental purposes only and you cannot assign these IPv4 addresses to your
devices. The four leftmost bits of the leftmost octet of a "Class E" network are reserved as "1111".
The minimum possible value for the leftmost octet in binary is 11110000 (decimal 240) and the
maximum possible value is 11111111 (decimal 255). Therefore, for a "Class E" IPv4 address, the
leftmost octet must have a value between 240 and 255 (240.X.X.X to 255.X.X.X).
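The class ranges above can be summarized in a small helper (a sketch; `ipv4_class` is an illustrative name, not a standard library function):

```python
def ipv4_class(address: str) -> str:
    """Classify an IPv4 address by the value of its leftmost octet."""
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"  # leading bit 0
    if first <= 191:
        return "B"  # leading bits 10
    if first <= 223:
        return "C"  # leading bits 110
    if first <= 239:
        return "D"  # leading bits 1110 (multicast)
    return "E"      # leading bits 1111 (experimental)

print(ipv4_class("10.0.0.1"))        # A
print(ipv4_class("192.168.10.100"))  # C
print(ipv4_class("224.0.0.5"))       # D
```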

SUBNET MASK:
An IPv4 address has two components, a "Network" part and a "Host" part. To identify which part
of an IPv4 address is the network part and which is the host part, we need another identifier,
known as the "Subnet Mask". A host's IPv4 configuration is a combination of an IPv4 address and
a subnet mask, and the purpose of the subnet mask is to identify which part of the address is the
network part and which is the host part. The subnet mask is also a 32-bit number in which all the
bits of the network part are "1" and all the bits of the host part are "0".

A subnet mask separates the IP address into the network and host addresses
(<network><host>). Subnetting further divides the host part of an IP address into a subnet and a
host address (<network><subnet><host>) if an additional subnetwork is needed. It is called a
subnet mask because it is used to identify the network address of an IP address by performing a
bitwise AND operation between the address and the mask.

A subnet mask is a 32-bit number that masks an IP address and divides it into a network address
and a host address. The subnet mask is made by setting the network bits to all "1"s and the host
bits to all "0"s. Within a given network, two host addresses are reserved for special purposes and
cannot be assigned to hosts: the all-zeros host address is the network address, and the all-ones
host address (e.g. "255" in the last octet of a /24) is the broadcast address.

Subnetting an IP network is to separate a big network into smaller multiple networks for
reorganization and security purposes. All nodes (hosts) in a subnetwork see all packets transmitted
by any node in a network. Performance of a network is adversely affected under heavy traffic load
due to collisions and retransmissions.

Applying a subnet mask to an IP address separates network address from host address. The network
bits are represented by the 1's in the mask, and the host bits are represented by 0's. Performing a
bitwise logical AND operation on the IP address with the subnet mask produces the network
address. For example, applying the Class C subnet mask to our IP address 216.3.128.12 produces
the following network address:
IP:      1101 1000 . 0000 0011 . 1000 0000 . 0000 1100  (216.003.128.012)
Mask:    1111 1111 . 1111 1111 . 1111 1111 . 0000 0000  (255.255.255.000)
Network: 1101 1000 . 0000 0011 . 1000 0000 . 0000 0000  (216.003.128.000)
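The bitwise AND can be verified with a few lines of Python (a minimal sketch using the same example address and mask):

```python
def network_address(ip: str, mask: str) -> str:
    """AND each octet of the address with the corresponding mask octet."""
    octets = zip(ip.split("."), mask.split("."))
    return ".".join(str(int(i) & int(m)) for i, m in octets)

print(network_address("216.3.128.12", "255.255.255.0"))  # 216.3.128.0
```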

STATIC AND DYNAMIC ROUTING:

Routing algorithms in networking can be classified in various ways. The primary classification is
based on how the routing table is built and modified. This can be done in two manners, statically
or dynamically; these are known as static and dynamic routing, respectively.

In static routing, the table is set up and modified manually, whereas in dynamic routing the table
is built automatically with the help of routing protocols. Dynamic routing is preferred over static
routing because of the major limitation of static routing: in case of a link or node failure the
system cannot recover on its own. Dynamic routing overcomes this limitation.

Routing is the process of transferring the packets from one network to another network and
delivering the packets to the hosts. The traffic is routed to all the networks in the internetwork by
the routers. In the routing process a router must know following things:

• Destination device address.

• Neighbor routers for learning about remote networks.

• Possible routes to all remote networks.

• The best route with the shortest path to each remote network.

• How the routing information can be verified and maintained.

ROUTING INFORMATION PROTOCOL:
The Routing Information Protocol (RIP) defines a way for routers, which connect networks using
the Internet Protocol (IP), to share information about how to route traffic among networks. RIP is
classified by the Internet Engineering Task Force (IETF) as an Interior Gateway Protocol (IGP),
one of several protocols for routers moving traffic around within a larger autonomous system
network -- e.g., a single enterprise's network that may comprise many separate local area
networks (LANs) linked through routers.
Each RIP router maintains a routing table, which is a list of all the destinations (networks) it knows
how to reach, along with the distance to that destination. RIP uses a distance vector algorithm to
decide which path to put a packet on to get to its destination. It stores in its routing table the
distance for each network it knows how to reach, along with the address of the "next hop" router
-- another router that is on one of the same networks -- through which a packet has to travel to get
to that destination. If it receives an update on a route, and the new path is shorter, it will update its
table entry with the length and next-hop address of the shorter path; if the new path is longer, it
will wait through a "hold-down" period to see if later updates reflect the higher value as well, and
only update the table entry if the new, longer path is stable.

Using RIP, each router sends its entire routing table to its closest neighbors every 30 seconds. (The
neighbors are the other routers to which this router is connected directly -- that is, the other routers
on the same network segments this router is on.) The neighbors in turn will pass the information
on to their nearest neighbors, and so on, until all RIP hosts within the network have the same
knowledge of routing paths, a state known as convergence.
If a router crashes or a network connection is severed, the network discovers this because that
router stops sending updates to its neighbors, or stops sending and receiving updates along the
severed connection. If a given route in the routing table isn't updated across six successive update
cycles (that is, for 180 seconds) a RIP router will drop that route, letting the rest of the network
know via its own updates about the problem and begin the process of reconverging on a new
network topology.
RIP uses a modified hop count as a way to determine network distance. (Modified reflects the fact
that network engineers can assign paths a higher cost.) By default, if a router's neighbor owns a
destination network (i.e., if it can deliver packets directly to the destination network without using
any other routers), that route has one hop, described as a cost of 1. RIP allows only 15 hops in a
path. If a packet can't reach a destination in 15 hops, the destination is considered unreachable.
Paths can be assigned a higher cost (as if they involved extra hops) if the enterprise wants to limit
or discourage their use. For example, a satellite backup link might be assigned a cost of 10 to force
traffic to follow other routes when available.
RIP has largely been supplanted, mainly due to its simplicity and its inability to scale to very large
and complex networks. Other routing protocols push less of their own information onto the
network, while RIP pushes its whole routing table every 30 seconds. As a result, other protocols
can converge more quickly and use more sophisticated routing algorithms that account for latency,
packet loss, actual monetary cost and other link characteristics, as well as hop count with arbitrary
weighting.

OPEN SHORTEST PATH FIRST:


Open Shortest Path First (OSPF) was designed as an interior gateway protocol (IGP), for use in an
autonomous system such as a local area network (LAN). It implements Dijkstra's algorithm, also
known as the shortest path first (SPF) algorithm. As a link-state routing protocol it was based on
the link-state algorithm developed for the ARPANET in 1980 and the IS-IS routing protocol.
OSPF was first standardised in 1989 as RFC 1131, which is now known as OSPF version 1. The
development work for OSPF prior to its codification as open standard was undertaken largely by
the Digital Equipment Corporation, which developed its own proprietary DECnet protocols.

Routing protocols like OSPF calculate the shortest route to a destination through the network
based on an algorithm. The first routing protocol that was widely implemented, the Routing
Information Protocol (RIP), calculated the shortest route based on hops, that is the number of
routers that an IP packet had to traverse to reach the destination host. RIP successfully
implemented dynamic routing, where routing tables change if the network topology changes. But
RIP did not adapt its routing according to changing network conditions, such as data-transfer rate.
Demand grew for a dynamic routing protocol that could calculate the fastest route to a destination.
OSPF was developed so that the shortest path through a network was calculated based on the cost
of the route, taking into account bandwidth, delay and load. Therefore OSPF undertakes route cost
calculation on the basis of link-cost parameters, which can be weighted by the administrator. OSPF
was quickly adopted because it became known for reliably calculating routes through large and
complex local area networks.

As a link-state routing protocol, OSPF maintains link-state databases, which are really network
topology maps, on every router on which it is implemented. The state of a given route in the
network is its cost, and the OSPF algorithm allows every router to calculate the cost of the routes
to any given reachable destination. Unless the administrator has configured it otherwise, the link
cost of a path connected to a router is determined by the bit rate (1 Gbit/s, 10 Gbit/s, etc.) of the
interface.
A router interface with OSPF will then advertise its link cost to neighbouring routers through
multicast, known as the hello procedure. All routers with OSPF implementation keep sending
hello packets, and thus changes in the cost of their links become known to neighbouring routers.
The information about the cost of a link, that is the speed of a point to point connection between
two routers, is then cascaded through the network because OSPF routers advertise the information
they receive from one neighbouring router to all other neighbouring routers. This process of
flooding link state information through the network is known as synchronisation. Based on this
information, all routers with OSPF implementation continuously update their link state databases
with information about the network topology and adjust their routing tables.
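The SPF calculation that each router runs over its link-state database is Dijkstra's algorithm; a minimal sketch (the topology and costs are made-up examples):

```python
import heapq

def shortest_path_costs(graph, source):
    """Dijkstra's shortest path first, as OSPF runs it over the
    link-state database. graph maps router -> {neighbor: link_cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale entry
        for neighbor, link_cost in graph.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

topology = {
    "A": {"B": 10, "C": 1},
    "B": {"A": 10, "C": 1},
    "C": {"A": 1, "B": 1},
}
# The direct A-B link costs 10, but going via C costs only 2
print(shortest_path_costs(topology, "A"))  # {'A': 0, 'B': 2, 'C': 1}
```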

An OSPF network can be structured, or subdivided, into routing areas to simplify administration
and optimize traffic and resource utilization. Areas are identified by 32-bit numbers, expressed
either simply in decimal, or often in the same dot-decimal notation used for IPv4 addresses. By
convention, area 0 (zero), or 0.0.0.0, represents the core or backbone area of an OSPF network.

While the identifiers of other areas may be chosen at will, administrators often select the IP
address of a main router in an area as the area identifier. Each additional area must have a
connection to the OSPF backbone area. Such connections are maintained by an interconnecting
router, known as an area border router (ABR). An ABR maintains separate link-state databases for
each area it serves and maintains summarized routes for all areas in the network.

OSPF detects changes in the topology, such as link failures, and converges on a new loop-free
routing structure within seconds. OSPF has become a popular dynamic routing protocol. Other
commonly used dynamic routing protocols are the RIP and the Border Gateway Protocol (BGP).

BGP (Border Gateway Protocol):


BGP (Border Gateway Protocol) is the protocol used to exchange routing information between
autonomous systems (AS) on the internet. It is classified as an Exterior Gateway Protocol (EGP),
meaning it is responsible for inter-domain routing, managing how packets are routed between
different networks (autonomous systems), especially across the internet.
Key Concepts of BGP:

1. Autonomous System (AS):


 An Autonomous System is a collection of IP networks and routers under the control of a single
organization that presents a common routing policy to the internet.
 Each AS is assigned a unique Autonomous System Number (ASN) to identify it in the BGP
system.
 BGP routes traffic between autonomous systems.
2. Path Vector Protocol:
 BGP is a path vector protocol, meaning it uses path information to make routing decisions. Each
router advertises the complete path (a sequence of AS numbers) that packets must traverse to reach
a particular network destination.
 BGP advertises routes as a list of AS numbers, ensuring loop-free routing.
3. Interior vs. Exterior BGP:
 Interior BGP (iBGP): BGP used within a single autonomous system to exchange routing
information between routers in the same AS.
 Exterior BGP (eBGP): BGP used between routers in different autonomous systems to exchange
routing information.
4. BGP Sessions:
 BGP routers establish sessions with each other to exchange routing information.
 Peers (or neighbors) are routers that have an established BGP session to exchange routes.
 Peers exchange keepalive messages to ensure the connection is alive, and update messages to share
routing information.
5. Routing Decisions in BGP:
 BGP routers select the best path to a destination based on various attributes, including:
 AS Path: Shortest path (in terms of the number of autonomous systems) is preferred.
 Next Hop: The next router in the path towards the destination.
 Local Preference: Indicates the preferred path within the AS.
 MED (Multi-Exit Discriminator): Used to influence incoming traffic across multiple links
between two ASes.
 Weight: Cisco-specific attribute to prioritize routes locally.
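The loop-free property that the AS path provides can be illustrated in a few lines (hypothetical ASNs; real BGP implementations apply many more policy checks before and after this one):

```python
def accept_route(local_asn: int, as_path: list) -> bool:
    """An eBGP speaker rejects any advertisement whose AS_PATH already
    contains its own ASN -- this is how BGP guarantees loop-free routing."""
    return local_asn not in as_path

print(accept_route(65001, [65002, 65003]))  # True: no loop
print(accept_route(65001, [65002, 65001]))  # False: our own ASN is in the path
```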

ADDRESS RESOLUTION PROTOCOL:


Address Resolution Protocol (ARP) is a protocol for mapping an Internet Protocol address (IP
address) to a physical machine address that is recognized in the local network. For example, in IP
Version 4, the most common level of IP in use today, an address is 32 bits long. In an Ethernet
local area network, however, addresses for attached devices are 48 bits long. (The physical
machine address is also known as a Media Access Control or MAC address.) A table, usually
called the ARP cache, is used to maintain a correlation between each MAC address and its
corresponding IP address. ARP provides the protocol rules for making this correlation and
providing address conversion in both directions.
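The ARP cache behaves like a simple IP-to-MAC lookup table; a minimal sketch (the MAC value and the `broadcast_lookup` callback are illustrative stand-ins for a real ARP request/reply on the LAN):

```python
arp_cache = {}  # IP address -> MAC address

def resolve(ip, cache, broadcast_lookup):
    """Return the MAC address for ip, consulting the cache first.
    Only on a cache miss is an ARP request 'broadcast' on the network."""
    if ip not in cache:
        cache[ip] = broadcast_lookup(ip)
    return cache[ip]

mac = resolve("192.168.1.10", arp_cache, lambda ip: "aa:bb:cc:dd:ee:ff")
print(mac)        # aa:bb:cc:dd:ee:ff
print(arp_cache)  # subsequent lookups are answered from the cache
```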

IP (Internet Protocol):
Internet Protocol (IP) is a fundamental protocol in the Internet layer of the TCP/IP model, responsible
for addressing and routing packets of data between source and destination devices over the internet or
other networks.
Key Functions of IP:

1. Addressing: IP assigns unique IP addresses to devices (hosts) in a network, which are used for
identifying the source and destination during communication.
 IPv4: Uses a 32-bit address (e.g., 192.168.1.1), supporting approximately 4.3 billion unique
addresses.
 IPv6: Uses a 128-bit address (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334), designed to
provide a much larger address space.
2. Routing: IP is responsible for forwarding packets between networks (routers) by determining the
best path based on IP addresses. Routers use routing tables to make decisions on where to forward
packets next.
3. Packetization: IP breaks down large chunks of data from the transport layer into smaller units
called packets (also known as datagrams) to fit the network's frame size, adding an IP header that
contains necessary control information.
4. Fragmentation and Reassembly: If a packet is too large for the transmission link, IP can
fragment it into smaller pieces. These fragments are reassembled at the destination.
5. Connectionless Protocol: IP operates in a connectionless manner, meaning it sends packets
without establishing a dedicated connection between sender and receiver. It does not guarantee
delivery, order, or error checking.
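The fragmentation step (point 4) can be sketched in a simplified form (real IP fragments carry their own headers and record offsets in 8-byte units; this only shows the splitting):

```python
def fragment(payload: bytes, mtu: int):
    """Split a payload into MTU-sized pieces, as IP fragmentation does.
    The receiver reassembles the pieces in order at the destination."""
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

frags = fragment(b"x" * 3000, 1480)
print([len(f) for f in frags])  # [1480, 1480, 40]
print(b"".join(frags) == b"x" * 3000)  # True: reassembly restores the data
```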
ICMP (Internet Control Message Protocol):

ICMP (Internet Control Message Protocol) is an error-reporting protocol used by network devices, like
routers, to send messages about network issues back to the source IP address. It operates at the Internet
layer (like IP) and works closely with IP to provide feedback on network problems.

Key Functions of ICMP:

1. Error Reporting: ICMP is used to notify the sender when an error occurs, such as:
 Destination Unreachable: When a router or host cannot deliver a packet to its intended destination.
 Time Exceeded: If a packet's TTL (Time-to-Live) value reaches zero, indicating that the packet
has taken too long to reach its destination and is discarded.
 Redirect Message: Tells the source to use a better route for sending packets.
2. Diagnostics: ICMP is often used for network diagnostics and troubleshooting.
 Ping: Uses ICMP Echo Request and Echo Reply messages to test the reachability of a host. It's a
common tool to check if a device is connected to the network.
 Traceroute: Uses ICMP to trace the path that a packet takes from source to destination, identifying
each hop along the route.
3. Flow Control: Although rare, ICMP can be used for flow control, informing the sender to slow
down the packet transmission rate if the network is congested.
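The TTL behaviour behind the Time Exceeded message (and behind traceroute) can be sketched as follows (simplified; a real router also returns part of the offending packet's header in the ICMP message):

```python
def forward(ttl: int):
    """Decrement TTL as each router does. When it reaches zero the packet
    is discarded and an ICMP Time Exceeded message goes back to the source."""
    ttl -= 1
    if ttl <= 0:
        return None, "ICMP Time Exceeded"
    return ttl, None

print(forward(64))  # (63, None): packet is forwarded with a lower TTL
print(forward(1))   # (None, 'ICMP Time Exceeded'): packet is dropped
```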

IPv6 (Internet Protocol Version 6):


IPv6 (Internet Protocol version 6) is a set of specifications from the Internet Engineering Task
Force (IETF) that's essentially an upgrade of IP version 4 (IPv4). The basics of IPv6 are similar to
those of IPv4 -- devices can use IPv6 as source and destination addresses to pass packets over a
network, and tools like ping work for network testing as they do in IPv4, with some slight
variations.
The most obvious improvement in IPv6 over IPv4 is that IP addresses are lengthened from 32 bits
to 128 bits. This extension anticipates considerable future growth of the Internet and provides
relief for what was perceived as an impending shortage of network addresses. IPv6 also supports
auto-configuration to help correct most of the shortcomings in version 4, and it has integrated
security and mobility features.
IPv6 features include:

• Supports source and destination addresses that are 128 bits (16 bytes) long.

• Requires IPSec support.

• Uses Flow Label field to identify packet flow for QoS handling by router.

• Allows only the sending host, not routers, to fragment packets.

• Doesn't include a checksum in the header.

• Uses a link-local scope all-nodes multicast address.


• Does not require manual configuration or DHCP.

• Uses host address (AAAA) resource records in DNS to map host names to IPv6 addresses.

• Uses pointer (PTR) resource records in the IP6.ARPA DNS domain to map IPv6 addresses to
host names.
• Requires support for a minimum packet size (MTU) of 1280 bytes without fragmentation.

• Moves optional data to IPv6 extension headers.

• Uses Multicast Neighbor Solicitation messages to resolve IP addresses to link-layer addresses.

• Uses Multicast Listener Discovery (MLD) messages to manage membership in local subnet
groups.

• Uses ICMPv6 Router Solicitation and Router Advertisement messages to determine the IP
address of the best default gateway.
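Python's standard `ipaddress` module can illustrate the 128-bit address format and the link-local all-nodes multicast address mentioned above:

```python
import ipaddress

# A full 128-bit IPv6 address and its compressed form
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)     # 2001:db8::1 -- the longest run of zeros collapses to ::
print(addr.exploded)       # full eight-group notation
print(addr.max_prefixlen)  # 128 -- four times IPv4's 32 bits

# The link-local scope all-nodes multicast address from the feature list
all_nodes = ipaddress.IPv6Address("ff02::1")
assert all_nodes.is_multicast
```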

BORDER GATEWAY PROTOCOL:


BGP (Border Gateway Protocol) is a protocol that manages how packets are routed across the
internet through the exchange of routing and reachability information between edge routers. BGP
directs packets between autonomous systems (AS) -- networks managed by a single enterprise or
service provider. Traffic routed within a single AS is referred to as internal BGP,
or iBGP. More often, BGP is used to connect one AS to other autonomous systems, and it is then
referred to as external BGP, or eBGP.

BGP offers network stability, guaranteeing that routers can quickly adapt and send packets over
another path if one internet route goes down. BGP makes routing decisions based on paths,
rules or network policies configured by a network administrator. Each BGP router maintains a
standard routing table used to direct packets in transit. This table is used in conjunction with a
separate routing table, known as the routing information base (RIB), which is a data table stored
on the BGP router. The RIB contains route information from both directly connected
external peers and internal peers, and continually updates the routing table as changes
occur. BGP runs on top of TCP and uses a client-server model to exchange routing
information, with the client initiating a BGP session by sending a request to the server.
TRANSPORT LAYER:

Process-to-Process Delivery:

Process-to-process delivery refers to the mechanism in networking that ensures data is transferred
from a running process (application) on one device to a corresponding process on another device. This
concept is central to communication between applications in networked systems.

In a networked environment, devices communicate using layers as defined by the OSI (Open Systems
Interconnection) or TCP/IP models. At the transport layer, the Transmission Control Protocol (TCP)
and User Datagram Protocol (UDP) are used for process-to-process communication.

Key Elements in Process-to-Process Delivery

1. Transport Layer Protocols:


The transport layer is responsible for process-to-process delivery. It manages the transfer of
data between applications on different hosts in a network. The two primary protocols that
handle process-to-process communication are:
 TCP (Transmission Control Protocol): Reliable, connection-oriented protocol used
for error-checked and ordered data delivery.
 UDP (User Datagram Protocol): Simple, connectionless protocol for fast and
unordered data transmission without error recovery.
2. Port Numbers:
 Each process (application) on a host is assigned a port number by the transport layer.
This ensures that data is delivered to the correct application.
 A port number is a 16-bit identifier, with a range from 0 to 65535.
 Well-known ports (0-1023): Assigned to common services like HTTP (port
80) or FTP (port 21).
 Registered ports (1024-49151): Used by user-specific applications.
 Dynamic/private ports (49152-65535): Used for temporary connections or
custom applications.
3. Socket:
 A socket is an endpoint for process-to-process communication, consisting of an IP
address and a port number.
 A socket uniquely identifies a process on a networked host, allowing data to be routed
correctly from sender to receiver.
4. Multiplexing and Demultiplexing
 Multiplexing at the sender's transport layer refers to gathering data from multiple
processes and transmitting them over a single network connection.
 Demultiplexing at the receiver’s transport layer refers to the process of extracting and
delivering the correct data to the appropriate process based on the destination port
number.
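The demultiplexing step above can be sketched as a table mapping destination ports to processes; the handler functions here are purely illustrative stand-ins for application processes:

```python
# Hypothetical handlers standing in for application processes.
def http_handler(data):
    return "web:" + data

def dns_handler(data):
    return "dns:" + data

# The transport layer's demultiplexing table: destination port -> process.
port_table = {80: http_handler, 53: dns_handler}

def demultiplex(dest_port, payload):
    """Deliver payload to the process bound to dest_port, if any."""
    handler = port_table.get(dest_port)
    if handler is None:
        return None   # no process bound: in practice this triggers an error reply
    return handler(payload)

assert demultiplex(80, "GET /") == "web:GET /"
assert demultiplex(9999, "x") is None
```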

Steps for Process-to-Process Delivery

1. Establishing Communication
 Connection Establishment (for TCP): Before data transmission, TCP establishes a
connection between two processes using a three-way handshake.
 UDP, being connectionless, does not establish a connection before sending data.
2. Data Encapsulation
 Data generated by the application layer is encapsulated with transport layer headers
containing information like port numbers, sequence numbers (TCP), and checksum
(UDP/TCP).
3. Data Transmission
 The transport layer protocol transmits the encapsulated segments to the network layer,
which forwards them to the appropriate destination host over the internet.
4. Data Reception
 At the receiver’s transport layer, the header is examined to determine the destination
port, and data is delivered to the correct application via demultiplexing.
5. Connection Termination (for TCP)
 After the data transmission, TCP terminates the connection using a four-way
handshake to ensure that both parties have finished sending data.
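These steps can be observed with a minimal loopback example using Python's socket API: `connect()` performs the three-way handshake, `sendall()`/`recv()` carry the encapsulated data, and `close()` begins connection termination. This is a sketch; real applications add error handling and timeouts.

```python
import socket
import threading

def echo_server(sock):
    conn, _ = sock.accept()          # completes the three-way handshake
    data = conn.recv(1024)           # data reception / demultiplexing by port
    conn.sendall(data.upper())
    conn.close()                     # begins connection teardown

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick an ephemeral port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # SYN -> SYN-ACK -> ACK
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
assert reply == b"HELLO"
```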

UDP (User Datagram Protocol):


UDP (User Datagram Protocol) is an alternative communications protocol to Transmission Control
Protocol (TCP) used primarily for establishing low-latency and loss-tolerating connections
between applications on the internet.
Both UDP and TCP run on top of the Internet Protocol (IP) and are sometimes referred to as
UDP/IP or TCP/IP. But there are important differences between the two.
Both protocols deliver data between application processes, but they differ in how. TCP
is considered a reliable transport: it delivers data as an ordered, error-checked stream. UDP
sends individual messages, called datagrams, and is considered a best-effort mode of communications.
In addition, where TCP provides error and flow control, no such mechanisms are supported in
UDP. UDP is considered a connectionless protocol because it doesn't require a virtual circuit to be
established before any data transfer occurs.
UDP provides two services not provided by the IP layer. It provides port numbers to help
distinguish different user requests and, optionally, a checksum capability to verify that the data
arrived intact.
TCP has emerged as the dominant protocol used for the bulk of internet connectivity due to its
ability to break large data sets into individual packets, check for and resend lost packets, and
reassemble packets in the correct sequence. But these additional services come at a cost in terms
of additional data overhead and delays called latency.
In contrast, UDP just sends the packets, which means that it has much lower bandwidth overhead
and latency. With UDP, packets may take different paths between sender and receiver and, as a
result, some packets may be lost or received out of order.

Applications of UDP:
UDP is an ideal protocol for network applications in which perceived latency is critical, such as in
gaming and voice and video communications, which can suffer some data loss without adversely
affecting perceived quality. In some cases, forward error correction techniques are used to improve
audio and video quality in spite of some loss.

UDP can also be used in applications that require lossless data transmission when the application
is configured to manage the process of retransmitting lost packets and correctly arranging received
packets. This approach can help to improve the data transfer rate of large files compared to TCP.

In the Open Systems Interconnection (OSI) communication model, UDP, like TCP, is in Layer 4,
the transport layer. UDP works in conjunction with higher level protocols to help manage data
transmission services including Trivial File Transfer Protocol (TFTP), Real Time Streaming
Protocol (RTSP), Simple Network Management Protocol (SNMP) and Domain Name System (DNS) lookups.

User datagram protocol features:


The user datagram protocol has attributes that make it advantageous for use with applications that
can tolerate lost data.

• It allows packets to be dropped and received in a different order than they were transmitted,
making it suitable for real-time applications where latency might be a concern.

• It can be used for transaction-based protocols, such as DNS or Network Time Protocol.

• It can be used where a large number of clients are connected and where real-time error
correction isn't necessary, such as gaming, voice or video conferencing, and streaming media.

UDP header composition:


The User Datagram Protocol header has four fields, each of which is 2 bytes. They are:
• source port number, which is the number of the sender;

• destination port number, the port the datagram is addressed to;

• length, the length in bytes of the UDP header and any encapsulated data; and

• Checksum, which is used in error checking. Its use is required in IPv6 and optional in IPv4.
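The four-field header can be packed and unpacked with Python's `struct` module. This sketch leaves the checksum at 0, which IPv4 interprets as "not computed" (IPv6 does not permit this); the port numbers are illustrative.

```python
import struct

def build_udp_header(src_port, dst_port, payload: bytes, checksum=0):
    """Four 2-byte fields: source port, destination port, length, checksum.
    Length covers the 8-byte header plus the payload."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

dgram = build_udp_header(53000, 53, b"query")
src, dst, length, csum = struct.unpack("!HHHH", dgram[:8])
assert (src, dst, length, csum) == (53000, 53, 13, 0)
```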

TCP (Transmission Control Protocol):


TCP (Transmission Control Protocol) is a standard that defines how to establish and maintain a
network conversation via which application programs can exchange data. TCP works with the
Internet Protocol (IP), which defines how computers send packets of data to each other. Together,
TCP and IP are the basic rules defining the Internet.
TCP is a connection-oriented protocol, which means a connection is established and maintained
until the application programs at each end have finished exchanging messages. It determines how
to break application data into packets that networks can deliver, sends packets to and accepts
packets from the network layer, manages flow control, and—because it is meant to provide
error-free data transmission—handles retransmission of dropped or garbled packets as well as
acknowledgement of all packets that arrive. In the Open Systems Interconnection (OSI)
communication model, TCP covers parts of Layer 4, the Transport Layer, and parts of Layer 5,
the Session Layer.
For example, when a Web server sends an HTML file to a client, it uses the HTTP protocol to do
so. The HTTP program layer asks the TCP layer to set up the connection and send the file. The
TCP stack divides the file into packets, numbers them and then forwards them individually to the
IP layer for delivery. Although each packet in the transmission will have the same source and
destination IP addresses, packets may be sent along multiple routes. The TCP program layer in the
client computer waits until all of the packets have arrived, then acknowledges those it receives and
asks for the retransmission on any it does not (based on missing packet numbers), then assembles
them into a file and delivers the file to the receiving application.

Retransmissions and the need to reorder packets after they arrive can introduce latency in a TCP
stream. Highly time-sensitive applications like voice over IP (VoIP) and streaming video generally
rely on a transport like User Datagram Protocol (UDP) that reduces latency and jitter (variation in
latency) by not worrying about reordering packets or getting missing data retransmitted.

Congestion Control Overview:

Congestion control refers to the techniques used to prevent network congestion, where too much data
traffic slows down network performance, causing delays and packet losses. Congestion control is
crucial for maintaining the efficiency of data transmission in networks such as the internet.

There are two primary approaches to congestion control: Open Loop and Closed Loop.

1. Open Loop Congestion Control

In Open Loop control, the system attempts to prevent congestion from occurring in the first place.
The approach is preventive and typically involves:

 Traffic Shaping: Methods like leaky bucket or token bucket are used to control the data
transmission rate.
 Admission Control: Before a new flow is allowed into the network, checks are made to ensure
the network can handle the new data without causing congestion.

Characteristics:

 No feedback from the network is considered.


 Control is implemented at the source, without dynamically adapting to network conditions.

2. Closed Loop Congestion Control

In Closed Loop control, the system responds to congestion after it happens and attempts to correct the
problem. The approach is reactive and includes:

 Choke Packets: These are packets sent by a router to inform the source of a particular flow
to reduce the transmission rate.
 Explicit Congestion Notification (ECN): A mechanism where routers signal the presence of
congestion to the sending and receiving parties, who then adjust their transmission rates.
Characteristics:

 Feedback from the network is used to adjust the flow of data.


 Control is based on current network conditions, enabling dynamic response.
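As an illustrative sketch (not a full TCP implementation), a closed-loop sender might react to feedback such as a choke packet or an ECN mark by cutting its rate multiplicatively, and otherwise probe for spare bandwidth additively — the additive-increase/multiplicative-decrease pattern. The step sizes here are assumptions for illustration.

```python
def adjust_rate(rate, congestion_signal, increase=1.0, min_rate=1.0):
    """Closed-loop reaction: back off multiplicatively when the network
    signals congestion (choke packet / ECN mark), probe additively otherwise."""
    if congestion_signal:
        return max(min_rate, rate / 2)   # multiplicative decrease
    return rate + increase               # additive increase

rate = 10.0
rate = adjust_rate(rate, congestion_signal=False)   # grows to 11.0
rate = adjust_rate(rate, congestion_signal=True)    # halves to 5.5
assert rate == 5.5
```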

Quality of Service (QoS):

Quality of Service (QoS) refers to the overall performance of a network or service, particularly the ability
to provide guaranteed service quality levels for specific types of traffic. QoS is essential for applications
such as video conferencing, VoIP, and streaming, which require reliable, real-time data transmission.
Various techniques are used to improve and maintain QoS in networks.

Techniques to Improve QoS:

1. Traffic Shaping (Policing)

 Traffic Shaping controls the flow of outgoing data to ensure that it conforms to a predetermined
rate, helping to prevent congestion.
 Token Bucket and Leaky Bucket algorithms are common methods used for shaping traffic.

Benefits:

 Helps to smooth out bursty traffic.


 Prevents network overload by regulating the rate of data transmission.

2. Packet Scheduling: Packet Scheduling determines the order in which packets are sent through the
network to prioritize certain types of traffic.

Common Scheduling Techniques:

 First-In-First-Out (FIFO): The simplest form; packets are sent in the order they arrive.
 Priority Queuing: High-priority packets are sent first, ensuring that delay-sensitive traffic (e.g.,
video and voice) is given precedence.
 Weighted Fair Queuing (WFQ): Assigns different weights to different types of traffic, allowing
more important traffic to receive more resources.

Benefits:

 Ensures time-sensitive applications like video and voice get timely delivery.
 Helps avoid unnecessary delays for critical traffic.
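Priority queuing can be sketched with a heap; the priority values and traffic classes below are illustrative (lower number = higher priority).

```python
import heapq
from itertools import count

class PriorityScheduler:
    """Priority queuing sketch: lower number = higher priority; the
    counter preserves FIFO order among equal-priority packets."""
    def __init__(self):
        self._heap, self._seq = [], count()

    def enqueue(self, priority, packet):
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = PriorityScheduler()
q.enqueue(2, "file-transfer")
q.enqueue(0, "voice")        # delay-sensitive traffic goes first
q.enqueue(1, "video")
assert [q.dequeue() for _ in range(3)] == ["voice", "video", "file-transfer"]
```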

3. Traffic Classification and Marking:

 Traffic Classification involves categorizing network traffic into different types based on their
requirements (e.g., voice, video, file transfer).
 Marking refers to tagging packets with QoS priority levels (e.g., using Differentiated Services Code
Point - DSCP).
Benefits:

 Enables routers and switches to prioritize traffic effectively.


 Helps enforce QoS policies by assigning appropriate priorities.

4. Admission Control: Admission Control checks the available network resources (bandwidth, buffers)
before allowing a new traffic flow. If the network cannot meet the QoS requirements for the new flow, it
rejects the request.

Benefits:

 Prevents oversubscription of resources.


 Ensures that ongoing services maintain the required QoS levels.

5. Resource Reservation (RSVP): RSVP (Resource Reservation Protocol) is used to reserve specific
resources for a particular traffic flow. It guarantees bandwidth for critical applications.

Benefits:

 Provides guaranteed QoS for high-priority traffic.


 Helps avoid congestion by reserving resources beforehand.

6. Congestion Avoidance Mechanisms: Techniques like Random Early Detection (RED) and Weighted
Random Early Detection (WRED) are used to manage congestion by dropping packets early before the
network becomes overwhelmed.

Benefits:

 Proactively prevents congestion, maintaining smooth traffic flow.


 Ensures better overall network performance by minimizing packet loss.
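The RED idea can be sketched as a drop-probability function: no drops below a minimum queue threshold, certain drops above a maximum threshold, and a linearly rising probability in between. The threshold values here are illustrative assumptions.

```python
def red_drop_probability(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Random Early Detection sketch: probability of dropping an arriving
    packet as a function of the average queue length."""
    if avg_queue < min_th:
        return 0.0                    # queue short: never drop
    if avg_queue >= max_th:
        return 1.0                    # queue too long: always drop
    # Linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)

assert red_drop_probability(3) == 0.0
assert red_drop_probability(20) == 1.0
assert abs(red_drop_probability(10) - 0.05) < 1e-9
```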

7. Jitter Control: Jitter refers to the variation in packet arrival times. Buffering and timing adjustments
can be used to control jitter, ensuring smoother data transmission.

Benefits:

 Critical for real-time applications like voice and video, ensuring minimal interruptions and
smoother playback.

8. Load Balancing: Load Balancing distributes network traffic across multiple paths or servers to avoid
overloading any single resource.

Benefits:

 Improves performance by preventing bottlenecks.


 Enhances reliability and availability of services.
9. Bandwidth Management: Bandwidth Allocation ensures that different types of traffic receive the
appropriate amount of bandwidth.

Techniques:

 Bandwidth Reservation: A portion of the available bandwidth is reserved for specific applications
or users.
 Dynamic Bandwidth Allocation: Adjusts bandwidth allocation in real-time based on traffic
conditions.

Benefits:

 Guarantees bandwidth for high-priority traffic.


 Improves performance for critical applications.

Leaky Bucket Algorithm:


The leaky bucket algorithm is a method of temporarily storing a variable number of requests and
organizing them into a set-rate output of packets in an asynchronous transfer mode (ATM)
network.
The leaky bucket is used to implement traffic policing and traffic shaping in Ethernet and cellular
data networks. The algorithm can also be used to control metered-bandwidth Internet connections
to prevent going over the allotted bandwidth for a month, thereby avoiding extra charges.
The algorithm works similarly to the way an actual leaky bucket holds water: The leaky bucket
takes data and collects it up to a maximum capacity. Data in the bucket is only released from the
bucket at a set rate and size of packet. When the bucket runs out of data, the leaking stops. If
incoming data would overfill the bucket, then the packet is considered to be non-conformant and
is not added to the bucket. Data is added to the bucket as space becomes available for conforming
packets.

The leaky bucket algorithm can also be used as a counter to detect both gradually increasing and
sudden spikes in error rates (for example, memory errors) by comparing how the average and peak
rates exceed set acceptable background levels.
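The water analogy above can be sketched as a simple byte-level simulation: data collects in the bucket up to its capacity, each tick leaks out at a fixed rate, and data that would overfill the bucket is rejected as non-conformant. This is a sketch that ignores real timing; the capacity and rate are illustrative.

```python
class LeakyBucket:
    """Byte-level leaky-bucket shaper sketch: data collects up to
    `capacity`; each tick leaks out at most `rate`, a steady drain."""
    def __init__(self, capacity, rate):
        self.capacity, self.rate = capacity, rate
        self.stored = 0

    def arrive(self, size):
        if self.stored + size > self.capacity:
            return False              # non-conformant: would overfill, not added
        self.stored += size
        return True

    def tick(self):
        sent = min(self.rate, self.stored)
        self.stored -= sent
        return sent                   # when the bucket is empty, leaking stops

b = LeakyBucket(capacity=10, rate=4)
assert b.arrive(6) and b.arrive(3)    # 9 units buffered
assert not b.arrive(5)                # would overflow: rejected
assert [b.tick(), b.tick(), b.tick()] == [4, 4, 1]
```

Note how the output stays at the set rate regardless of how bursty the arrivals were — exactly the smoothing behaviour described above.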
TOKEN BUCKET ALGORITHM:
When we apply traffic policing to the input or output traffic at an interface, the rate limits and
actions specified in the policer configuration are used to enforce a limit on the average throughput
rate at the interface while also allowing bursts of traffic up to a maximum number of bytes based
on the overall traffic load. Junos OS policers measure traffic-flow conformance to a policing rate
limit by using a token bucket algorithm. An algorithm based on a single token bucket allows bursts
of traffic for short periods, whereas an algorithm based on dual token buckets allows more sustained
bursts of traffic.

Single Token Bucket Algorithm:


A single-rate two-color policer limits traffic throughput at an interface based on how the traffic
conforms to rate-limit values specified in the policer configuration. Similarly, a hierarchical
policer limits traffic throughput at an interface based on how aggregate and premium traffic
subflows conform to aggregate and premium rate-limit values specified in the policer
configuration. For both two-color policer types, packets in a conforming traffic flow are
categorized as green, and packets in a non-conforming traffic flow are categorized as red.

The single token bucket algorithm measures traffic-flow conformance to a two-color policer rate
limit as follows:

• The token arrival rate represents the single bandwidth limit configured for the policer. You
can specify the bandwidth limit as an absolute number of bits per second by including the
bandwidth-limit bps statement. Alternatively, for single-rate two-color policers only, you
can use the bandwidth-percent percentage statement to specify the bandwidth limit as a
percentage of either the physical interface port speed or the configured logical interface
shaping rate.

• The token bucket depth represents the single burst size configured for the policer. You
specify the burst size by including the burst-size-limit bytes statement.

• If the bucket is filled to capacity, arriving tokens "overflow" the bucket and are lost.

When the bucket contains insufficient tokens for receiving or transmitting the traffic at the
interface, packets might be dropped or else re-marked with a lower forwarding class, a higher
packet loss priority (PLP) level, or both.

Conformance Measurement for Two-Color Marking

In two-color-marking policing, a traffic flow whose average arrival or departure rate does not
exceed the token arrival rate (bandwidth limit) is considered conforming traffic. Packets in a
conforming traffic flow (categorized as green traffic) are implicitly marked with a packet loss
priority (PLP) level of low and then passed through the interface.

For a traffic flow whose average arrival or departure rate exceeds the token arrival rate,
conformance to a two-color policer rate limit depends on the tokens in the bucket. If sufficient
tokens remain in the bucket, the flow is considered conforming traffic. If the bucket does not
contain sufficient tokens, the flow is considered non-conforming traffic. Packets in a
nonconforming traffic flow (categorized as red traffic) are handled according to policing actions.
Depending on the configuration of the two-color policer, packets might be implicitly discarded; or
the packets might be re-marked with a specified forwarding class, a specified PLP, or both, and
then passed through the interface.
The token bucket is initially filled to capacity, and so the policer allows an initial traffic burst
(back-to-back traffic at average rates that exceed the token arrival rate) up to the size of the token
bucket depth.

During periods of relatively low traffic (traffic that arrives at or departs from the interface at
average rates below the token arrival rate), unused tokens accumulate in the bucket, but only up
to the configured token bucket depth.
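The single token bucket behaviour described above — a bucket that starts full, refills at the token arrival rate up to its depth, and classifies traffic as conforming (green) or non-conforming (red) — can be sketched as follows. The rates and sizes are illustrative values, not Junos OS configuration.

```python
class TokenBucket:
    """Single token bucket sketch: tokens arrive at `rate` per tick up to
    `depth` (the burst size); a packet conforms (green) only if enough
    tokens remain, otherwise it is non-conforming (red)."""
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.tokens = depth              # bucket starts full: initial burst allowed

    def tick(self):
        # Tokens arriving at a full bucket "overflow" and are lost
        self.tokens = min(self.depth, self.tokens + self.rate)

    def conforms(self, size):
        if size <= self.tokens:
            self.tokens -= size
            return True                  # green: passed through
        return False                     # red: dropped or re-marked

tb = TokenBucket(rate=2, depth=8)
assert tb.conforms(8)        # full-depth initial burst is allowed
assert not tb.conforms(1)    # bucket now empty: traffic is red
tb.tick(); tb.tick()         # tokens accumulate during quiet periods
assert tb.conforms(4)
```

Contrast this with the leaky bucket: a token bucket permits bursts up to its depth, while the leaky bucket enforces a strictly smooth output rate.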
