B.TECH COMPUTER SCIENCE ENGINEERING
ADVANCED COMPUTER NETWORKS - 22CSE529
MODULE 4
Course faculty:
Dr. N. Priya, B.E., M.Tech., Ph.D.
Assistant Professor,
Computer Science Engineering - Blockchain Technology,
Faculty of Engineering & Technology,
Jain (Deemed-To-Be) University
[email protected]
TOPICS
• Link Layer and LAN: Multiple Access Networks,
• ALOHA, CSMA, CSMA/CD and CSMA/CA,
• Stability in Slotted ALOHA,
• Bayesian Algorithm, Splitting Algorithms,
• Tree and First-Come First-Serve (FCFS) Scheduling,
• Switch
• VLANs,
• Trunk Protocol - VTP, Implement Spanning Tree Protocol,
• Inter-VLAN Routing, Wireless Router Services in Converged WAN,
• Point to Point Protocol, Frame Relay.
• ATM Networks: Introduction, ATM, The ATM AAL Layer Protocols, Historical perspective,
protocol architecture.
Link Layer
• Data Link Layer
• In the OSI model, the data link layer is the 6th layer from the top and the 2nd layer from the bottom.
• The communication channels that connect adjacent nodes are known as links, and in order to move a datagram from
source to destination, the datagram must be moved across each individual link.
• The main responsibility of the Data Link Layer is to transfer the datagram across an individual link.
• The Data Link Layer protocol defines the format of the packet exchanged across the nodes, as well as actions such as
error detection, retransmission, flow control, and random access.
• The Data Link Layer protocols include Ethernet, Token Ring, FDDI (Fiber Distributed Data Interface), and PPP (Point-to-Point Protocol).
• An important characteristic of the Data Link Layer is that a datagram can be handled by different link layer protocols on
different links in a path. For example, a datagram may be handled by Ethernet on the first link and by PPP on the second link.
Link Layer -Services
• Framing & Link access: Data Link Layer protocols encapsulate each network layer datagram within a link layer frame before
transmission across the link. A frame consists of a data field, in which the network layer datagram is inserted, and a number of
header fields. The protocol specifies the structure of the frame as well as a channel access protocol by which the frame is to be transmitted
over the link.
• Reliable delivery: The Data Link Layer can provide a reliable delivery service, i.e., transmit the network layer datagram without
error. A reliable delivery service is accomplished with retransmissions and acknowledgements. The data link layer mainly
provides reliable delivery over links with high error rates, so that an error can be corrected locally, on the link at
which it occurs, rather than forcing an end-to-end retransmission of the data.
• Flow control: A receiving node can receive the frames at a faster rate than it can process the frame. Without flow control,
the receiver's buffer can overflow, and frames can get lost. To overcome this problem, the data link layer uses the flow
control to prevent the sending node on one side of the link from overwhelming the receiving node on another side of the
link.
• Error detection: Errors can be introduced by signal attenuation and noise. Data Link Layer protocol provides a
mechanism to detect one or more errors. This is achieved by adding error detection bits in the frame and then receiving
node can perform an error check.
• Error correction: Error correction is similar to error detection, except that the receiving node not only detects the errors
but also determines where in the frame the errors have occurred.
• Half-Duplex & Full-Duplex: In a Full-Duplex mode, both the nodes can transmit the data at the same time. In a
Half-Duplex mode, only one node can transmit the data at the same time.
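As an illustration of the error-detection bits mentioned above, a minimal Python sketch of a single even-parity bit (the simplest such scheme; real link layers use stronger codes such as CRC):

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(frame):
    """Return True if the frame passes the even-parity check."""
    return sum(frame) % 2 == 0

frame = add_parity([1, 0, 1, 1])   # parity bit = 1
print(check_parity(frame))         # no error: True
frame[2] ^= 1                      # flip one bit "in transit"
print(check_parity(frame))         # single-bit error detected: False
```

Note that a single parity bit detects any odd number of bit errors but misses even-numbered ones, which is why practical protocols add several error-detection bits per frame.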
Multiple access Networks
• If there is a dedicated link between the sender and the receiver, then the data link control layer is sufficient; however, if
there is no dedicated link, multiple stations can access the channel simultaneously.
• Hence, multiple access protocols are required to reduce collisions and avoid crosstalk.
• Thus, protocols are required for sharing data on non dedicated channels. Multiple access protocols can be subdivided
further as
Multiple access Networks
1. Random Access Protocol
• In this, all stations have the same priority, that is, no station has more priority than another. Any station can send
data depending on the medium's state (idle or busy). It has two features:
• There is no fixed time for sending data
• There is no fixed sequence of stations sending data
The Random access protocols are further subdivided as:
• ALOHA
• CSMA
• CSMA/CD
• CSMA/CA
2. Controlled Access
• Controlled access protocols ensure that only one device uses the network at a time. Think of it like taking turns in a
conversation so everyone can speak without talking over each other.
• In this, the data is sent by that station which is approved by all other stations.
Multiple access Networks
• 3. Channelization
• In this, the available bandwidth of the link is shared in time, frequency and code to multiple stations to access channel
simultaneously.
• Frequency Division Multiple Access (FDMA) – The available bandwidth is divided into equal bands so that each
station can be allocated its own band. Guard bands are also added so that no two bands overlap to avoid crosstalk and
noise.
• Time Division Multiple Access (TDMA) – In this, the bandwidth is shared between multiple stations. To avoid
collisions, time is divided into slots and stations are allotted these slots to transmit data. However, there is an overhead of
synchronization, as each station needs to know its time slot. This is resolved by adding synchronization bits to each slot.
Another issue with TDMA is propagation delay, which is resolved by the addition of guard times.
• Code Division Multiple Access (CDMA) – One channel carries all transmissions simultaneously. There is neither
division of bandwidth nor division of time. For example, if there are many people in a room all speaking at the same
time, perfect reception is still possible as long as only the two people in each conversation speak the same language. Similarly, data from
different stations can be transmitted simultaneously using different code languages.
• Orthogonal Frequency Division Multiple Access (OFDMA) – In OFDMA, the available bandwidth is divided into
small subcarriers in order to increase the overall performance, and the data is transmitted through these small
subcarriers. It is widely used in 5G technology.
ALOHAs
• What is ALOHA?
• ALOHA is an early computer networking method created at the University of Hawaii in the early 1970s. It’s a
straightforward way to send data over a shared medium, like a wireless or wired network. The main idea of ALOHA is
how it handles collisions, which happen when two devices try to send data at the same time, causing interference.
• Pure ALOHA
• Pure ALOHA refers to the original ALOHA protocol. The idea is that each station sends a frame whenever one is
available. Because there is only one channel to share, there is a chance that frames from different stations will collide.
• The pure ALOHA protocol utilizes acknowledgments from the receiver to ensure successful transmission. When a user
sends a frame, it expects confirmation from the receiver. If no acknowledgment is received within a designated time
period, the sender assumes that the frame was not received and retransmits the frame.
ALOHAs
•When two frames attempt to occupy the channel simultaneously, a collision occurs and both frames become garbled. If the
first bit of a new frame overlaps with the last bit of a frame that is almost finished, both frames will be completely
destroyed and will need to be retransmitted. If all users retransmit their frames at the same time after a time-out, the frames
will collide again.
•To prevent this, the pure ALOHA protocol dictates that each user waits a random amount of time, known as the back-off
time, before retransmitting the frame. This randomness helps to avoid further collisions.
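The collision behaviour described above can be explored with a small Monte-Carlo sketch (an illustrative simulation, not part of the protocol itself). A frame of one frame-time starting at time t succeeds only if no other frame starts in the two-frame-time window (t - 1, t + 1):

```python
import random

def pure_aloha_throughput(G, n_frames=200_000, seed=1):
    """Monte-Carlo estimate of pure ALOHA throughput at offered load G
    (attempted frames per frame-time). A frame starting at t succeeds
    only if no other frame starts in (t - 1, t + 1) -- the two-frame-time
    vulnerable period."""
    random.seed(seed)
    T = n_frames / G                      # total simulated time
    starts = sorted(random.uniform(0, T) for _ in range(n_frames))
    ok = 0
    for i, t in enumerate(starts):
        prev_ok = i == 0 or t - starts[i - 1] >= 1
        next_ok = i == n_frames - 1 or starts[i + 1] - t >= 1
        ok += prev_ok and next_ok
    return ok / T                         # successful frames per frame-time

# Theory predicts S = G * e^(-2G), peaking at about 0.184 when G = 0.5.
print(round(pure_aloha_throughput(0.5), 3))
```

Because the vulnerable period spans two frame-times, pure ALOHA never exceeds roughly 18.4% channel utilization, which motivates the slotted variant discussed next.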
ALOHAs
Key Features of Pure ALOHA
•Random Access: Devices can send data whenever they have something to transmit, without needing to wait for a
predetermined time slot.
•Uncoordinated Transmission: Devices do not coordinate with each other before transmitting. They simply attempt to
send data whenever they have data to send.
•Simple Implementation: Pure ALOHA is straightforward to implement, making it suitable for early network experiments
and scenarios with low traffic.
•Persistent Approach: Devices continue to attempt transmission even after a collision, using a form of exponential
backoff. This means they introduce random delays before retrying, which helps reduce the chances of repeated collisions.
•Contention-Based: Since devices transmit without coordination, collisions may occur if two or more devices transmit
simultaneously. Collisions are detected through feedback from the receiver or by the transmitting device itself.
SLOTTED ALOHA
• Slotted ALOHA is an improved version of the pure ALOHA protocol that aims to make communication networks
more efficient.
• In this version, the channel is divided into small, fixed-length time slots and users are only allowed to transmit
data at the beginning of each time slot. This synchronization of transmissions reduces the chances of collisions
between devices, increasing the overall efficiency of the network.
• How Does Slotted ALOHA work?
• The channel time is separated into time slots in slotted ALOHA, and stations are only authorized to transmit at
particular times. These time slots correspond exactly to the packet transmission time. All users are then
synchronized to these time slots, so that whenever a user sends a packet, it must precisely align with the next available
channel slot. As a result, wasted time due to collisions is reduced to one packet time, and the vulnerable period
is halved.
• When a user wants to transmit a frame, it waits until the next time slot and then sends the frame. If the frame is
received successfully, the receiver sends an acknowledgment. If the acknowledgment is not received within a
time-out period, the sender assumes that the frame was not received and retransmits the frame in the next time slot.
SLOTTED ALOHA
• Slotted ALOHA increases channel utilization by reducing the number of collisions. However, it also increases the
delay for users, as they have to wait for the next time slot to transmit their frames.
• It’s also worth noting that there is a variant called “non-persistent slotted ALOHA”, in which a station that wants
to send data first listens to the channel before sending. If the channel is busy, it waits for a certain time before
trying again.
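A slot-by-slot simulation makes the mechanism concrete (an illustrative sketch with assumed parameters, not a protocol implementation): each station transmits at the start of a slot with some probability, and a slot succeeds only when exactly one station transmits.

```python
import random

def slotted_aloha(n_stations=50, p=0.02, n_slots=50_000, seed=1):
    """Fraction of slots carrying exactly one transmission when each of
    n_stations independently transmits with probability p per slot."""
    random.seed(seed)
    success = 0
    for _ in range(n_slots):
        senders = sum(random.random() < p for _ in range(n_stations))
        success += senders == 1         # exactly one sender: no collision
    return success / n_slots

# With n * p = 1 attempt per slot on average, throughput approaches 1/e.
print(round(slotted_aloha(), 3))
```

With these assumed values (50 stations, p = 0.02), the average offered load is one attempt per slot, and the measured success fraction comes out near the theoretical maximum of 1/e ≈ 0.368.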
CSMA
• Carrier Sense Multiple Access (CSMA) is a network protocol that regulates how data packets are transmitted over a
shared communication channel. It plays a crucial role in local area networks (LANs) and wireless networks, ensuring
efficient use of the communication medium while minimizing collisions. This article provides an in-depth exploration of
CSMA, covering its fundamental concepts, types, operation, advantages, disadvantages, and real-world applications.
• Introduction to CSMA
• In any network where multiple devices share a common communication channel, there is a need for a protocol to
manage access to this channel to prevent data collisions. CSMA is one such protocol that helps manage this access by
ensuring that devices "listen" to the channel before transmitting data. If the channel is found to be busy, the device waits
for a random period before trying again. This "listen before talk" strategy forms the basis of CSMA.
• Fundamental Concepts of CSMA
• The primary goal of CSMA is to avoid collisions, which occur when two or more devices transmit data simultaneously
over the same channel. Key concepts in CSMA include:
• Carrier Sense: Before transmitting, a device checks the carrier signal on the channel to determine if it is free (idle) or
busy. This is the "sense" part of CSMA.
• Multiple Access: Multiple devices have access to the same communication channel. CSMA helps coordinate this access
to avoid collisions.
• Collision Detection and Avoidance: CSMA includes mechanisms to detect collisions when they occur and take steps to
minimize their likelihood.
CSMA
• Types of CSMA
• There are several variations of CSMA, each with different mechanisms for handling potential collisions:
• 1-persistent CSMA: In this method, a device continuously senses the channel and transmits immediately when the
channel is found to be free. If the channel is busy, the device waits until it becomes free and then transmits. This
approach can lead to higher collision rates, especially in highly congested networks.
• Non-persistent CSMA: Here, if the channel is busy, the device waits for a random amount of time before sensing the
channel again. This reduces the chances of collisions compared to 1-persistent CSMA, but can result in longer delays.
• P-persistent CSMA: This is a hybrid approach used in slotted channels (time is divided into discrete intervals). When
the channel is free, the device transmits with a probability ‘p’ or waits for the next slot with a probability ‘1-p’. This
method balances the need for immediate transmission and collision avoidance.
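The three persistence strategies above differ only in what a ready station does per sensing step; a minimal sketch of the p-persistent decision rule (illustrative, with an assumed p):

```python
import random

def p_persistent_decision(channel_idle, p=0.3):
    """One slot of p-persistent CSMA for a station with a frame queued.
    Returns 'transmit', 'defer' (wait for the next slot), or 'busy'."""
    if not channel_idle:
        return 'busy'          # keep sensing until the channel frees up
    # Channel idle: transmit with probability p, else defer one slot.
    return 'transmit' if random.random() < p else 'defer'

random.seed(0)
decisions = [p_persistent_decision(True) for _ in range(1000)]
print(round(decisions.count('transmit') / 1000, 2))   # close to p = 0.3
```

Setting p = 1 recovers 1-persistent behaviour; smaller p trades immediacy for fewer collisions when many stations are ready.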
CSMA/CD
• CSMA/CD (Collision Detection)
• CSMA/CD is an extension of CSMA used primarily in wired networks like Ethernet. It includes mechanisms for detecting
collisions and taking corrective actions. The key steps in CSMA/CD are:
• Carrier Sensing: Devices monitor the channel to check if it is free before transmitting.
• Transmission: If the channel is free, the device starts transmitting data.
• Collision Detection: Devices continue to monitor the channel while transmitting. If a collision is detected (by sensing
interference or a drop in signal quality), the device immediately stops transmitting.
• Jamming Signal: The device sends a jamming signal to inform other devices of the collision.
• Backoff Algorithm: After a collision, devices wait for a random period before attempting to retransmit. The waiting time
typically increases exponentially with each subsequent collision.
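The backoff step can be sketched as truncated binary exponential backoff, as used in classic Ethernet (the 51.2 µs slot time is the 10 Mbps Ethernet value, used here only as an assumed example):

```python
import random

def backoff_delay(n_collisions, slot_time=51.2e-6, max_exp=10):
    """Truncated binary exponential backoff: after the k-th successive
    collision, wait a random number of slots in [0, 2^min(k, 10) - 1]."""
    k = min(n_collisions, max_exp)
    return random.randint(0, 2 ** k - 1) * slot_time   # delay in seconds

random.seed(2)
# The contention window doubles with each successive collision.
for k in (1, 4, 16):
    print(k, backoff_delay(k))
```

Doubling the window spreads retransmissions over ever longer intervals, so repeated collisions among the same stations become increasingly unlikely.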
CSMA/CA
• CSMA/CA (Collision Avoidance)
• CSMA/CA is predominantly used in wireless networks, such as Wi-Fi, where collision detection is more challenging
due to the nature of wireless communication. Instead of detecting collisions, CSMA/CA focuses on avoiding them. The
key steps are:
• Carrier Sensing: Devices listen to the channel before transmitting.
• Collision Avoidance: If the channel is busy, the device waits for a random backoff period before attempting to sense the
channel again. Some implementations use a Request to Send (RTS) and Clear to Send (CTS) mechanism to reserve the
channel for a specific communication.
• Transmission: If the channel is free, the device transmits the data.
• Advantages of CSMA
• CSMA offers several benefits, making it a popular choice for managing shared communication channels:
• Efficiency: By allowing devices to listen to the channel before transmitting, CSMA reduces the likelihood of collisions
and improves the overall efficiency of the network.
• Simplicity: The protocol is relatively simple to implement, making it suitable for a wide range of network types and
sizes.
• Scalability: CSMA can accommodate a varying number of devices, from small local networks to large-scale
deployments.
CSMA/CA
• Disadvantages of CSMA
• Despite its advantages, CSMA also has some limitations:
• Collision Probability: In high-traffic networks, the probability of collisions increases, leading to potential delays and
reduced network performance.
• Hidden Terminal Problem: In wireless networks, devices may be unable to detect transmissions from other devices
due to obstacles or distance, leading to collisions. This is known as the hidden terminal problem.
• Exposed Terminal Problem: Conversely, a device may refrain from transmitting even when it could have successfully
done so, due to sensing another transmission that would not actually cause a collision. This is the exposed terminal
problem.
• Real-World Applications of CSMA
• CSMA is widely used in various network environments, including:
• Ethernet Networks: CSMA/CD is a fundamental protocol in traditional Ethernet networks, enabling multiple devices
to share the same wired medium.
• Wi-Fi Networks: CSMA/CA is integral to Wi-Fi networks, helping manage the shared wireless spectrum and reduce
the likelihood of collisions.
• IoT Networks: In the Internet of Things (IoT) context, CSMA protocols help manage communication between
numerous devices, ensuring efficient use of the shared communication channels.
Stability in Slotted ALOHA
• Stability in the Slotted ALOHA protocol refers to the network's ability to keep the throughput at a sustainable level despite varying levels of
load (i.e., the number of packets transmitted). Slotted ALOHA, a random access protocol used in shared communication networks, allows
multiple users to send packets in distinct time slots. If too many users attempt to send packets at once, collisions occur, leading to packet loss
and retransmission, which can destabilize the system if the load is too high.
1. Throughput vs. Load:
1. Throughput represents the successful transmission rate of packets, while load is the total of transmission attempts by all users (including
retransmissions). Stability is maintained when throughput remains close to or at the maximum, without spiralling down as load increases.
2. Critical Load:
1. Slotted ALOHA achieves maximum throughput when the total offered load G is around 1 packet per time slot.
2. If the load increases significantly beyond this critical point, collisions multiply rapidly, causing a severe drop in throughput.
3. Control Mechanisms for Stability:
1. To maintain stability, the system can implement control algorithms that limit the retransmission attempts, or adjust access probability
based on network load.
4. Instability and Congestion Collapse:
1. Without controls, if users persist in retransmitting after collisions, the network may become unstable, causing a "congestion collapse."
2. This situation arises when the effective throughput drastically falls because the load far exceeds the network's collision-free capacity.
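The throughput-versus-load behaviour described above follows the classic relation S = G·e^(-G) for slotted ALOHA; a short numeric illustration:

```python
import math

def slotted_aloha_throughput(G):
    """Throughput S (successful frames per slot) of slotted ALOHA at
    offered load G (total attempts per slot): S = G * e^(-G)."""
    return G * math.exp(-G)

# S rises to its maximum of 1/e (~0.368) at G = 1, then collapses as
# collisions dominate -- the instability region described above.
for G in (0.5, 1.0, 2.0, 4.0):
    print(G, round(slotted_aloha_throughput(G), 3))
```

The printed values show throughput falling on both sides of G = 1, which is exactly why control mechanisms try to hold the offered load near that critical point.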
Bayesian Algorithm
In the link layer, Bayesian algorithms can be applied to improve decision-making under uncertainty, especially in wireless
networks where signal quality, interference, and contention vary dynamically. Bayesian methods can help adaptively manage
channel access, estimate collision likelihoods, and optimize contention resolution strategies.
Applications of Bayesian Algorithms in Link Layer Protocols
1. Collision Detection and Avoidance:
• Bayesian algorithms can estimate the probability of a collision based on observed traffic patterns and past transmission
outcomes. For instance, in a wireless network where nodes contend for the channel, a Bayesian model could predict the
likelihood of collision in the next transmission attempt, guiding whether a node should wait or transmit.
• This approach uses historical data on successful and failed transmissions (prior) and incorporates recent observations
(likelihood) to calculate the probability of a collision-free transmission (posterior).
2. Adaptive Backoff Mechanisms:
• Backoff mechanisms, like those in CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance), can be enhanced using
Bayesian inference. By continuously updating the probability of finding an idle channel, a node can adjust its backoff time
dynamically, reducing idle waiting and improving network efficiency.
• Bayesian approaches allow each node to adjust its transmission probability based on the estimated load, helping to avoid the
extremes of high collision rates or underutilization.
3. Channel Estimation and Selection:
• In wireless networks with multiple channels, Bayesian algorithms can estimate the likelihood of finding an idle or
low-interference channel. By collecting signal quality data (such as SNR) and feedback on channel utilization, Bayesian models
update beliefs about which channel is optimal for transmission.
• This application helps nodes avoid congested channels and select better options for efficient communication, which is
particularly useful in high-traffic scenarios or heterogeneous networks like Wi-Fi and cellular networks.
Bayesian Algorithm
4. Traffic Prediction and Load Balancing:
•Bayesian models can predict traffic loads by learning from historical transmission patterns, such as peak times or typical
data bursts. Nodes can adjust their transmission schedules or access strategies based on these predictions, reducing
contention during expected high-load periods.
•In link-layer load balancing, Bayesian inference can guide nodes to spread traffic evenly across available resources, like
multiple access points in a Wi-Fi network, balancing the load and improving network performance.
5. Error Detection and Correction:
•Bayesian approaches can enhance error detection and correction mechanisms by predicting the probability of transmission
errors in different conditions (e.g., based on signal strength or interference levels).
• By adjusting error correction parameters adaptively, the link layer can maintain reliable transmission with optimal resource
usage.
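The adaptive-backoff idea above can be sketched with the standard Beta-Bernoulli update (a toy illustration of Bayesian belief updating, not any specific protocol): each idle or busy observation updates the node's belief that the channel is idle.

```python
def update_idle_belief(alpha, beta, observed_idle):
    """Beta-Bernoulli update of the belief that the channel is idle:
    alpha counts idle observations (prior successes), beta counts busy
    ones. The new observation is the likelihood; the returned counts
    define the posterior."""
    return (alpha + 1, beta) if observed_idle else (alpha, beta + 1)

alpha, beta = 1, 1                        # uniform Beta(1, 1) prior
for idle in [True, True, False, True]:    # recent channel observations
    alpha, beta = update_idle_belief(alpha, beta, idle)

p_idle = alpha / (alpha + beta)           # posterior mean
print(round(p_idle, 2))                   # 4/6 -> 0.67
```

A node could then scale its backoff window inversely with p_idle: a high estimated idle probability justifies more aggressive transmission attempts, a low one justifies longer waits.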
Splitting Algorithms
• In the link layer of network protocols, splitting algorithms are used to manage and resolve contention when multiple devices or
nodes attempt to access a shared communication channel. These algorithms help to coordinate access by splitting groups of
contending nodes until only one node can transmit without collisions. This process is particularly important in random access
protocols, where multiple nodes can initiate transmissions at any time, such as in Ethernet networks or wireless LANs.
• Purpose of Splitting Algorithms
• When many nodes want to send data simultaneously, collisions can occur if more than one node transmits at the same time.
• Splitting algorithms minimize these collisions by dynamically grouping and separating the nodes based on access attempts, allowing
one node to transmit successfully while others wait for their turn.
• How Splitting Algorithms Work
• Splitting algorithms work by breaking down a group of contending nodes into smaller subsets, iterating this division process until a
single node or a small enough group remains to transmit without collisions.
• This can involve organizing nodes into binary groups, time slots, or spatial groups based on channel feedback (such as detecting if
the channel is busy or idle).
• Applications
Splitting algorithms are fundamental in random access protocols like:
• ALOHA and Slotted ALOHA networks
• Carrier Sense Multiple Access with Collision Detection (CSMA/CD), as seen in Ethernet
• Wireless LAN protocols (e.g., IEEE 802.11), where collision resolution is key in managing access to shared channels
Types of Splitting Algorithms
• Different splitting algorithms can be applied depending on the network requirements and characteristics of the link layer. Here are a few commonly
used ones:
1. Binary Splitting – The contending group is divided into two subsets (binary division). The process continues recursively on
each subset until each subset contains only one node, ensuring successful transmission.
2. Tree Splitting – A tree structure is used, where each node is a branch, and the splitting is based on nodes traveling down to
unique branches in each round. This algorithm is efficient in managing larger groups of nodes.
3. Window Splitting – The algorithm divides the contention period into multiple time windows, and nodes are assigned to
windows based on random selection. This can reduce collisions but requires synchronization.
4. Adaptive Splitting – This approach dynamically adjusts the splitting criteria based on the observed network load and collision
frequency, providing a more responsive approach in high-traffic situations.
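Binary (tree) splitting can be sketched in a few lines (an illustrative simulation: station names and the fair coin flip are assumed details): each collision splits the colliding set in two, and the left subset is resolved before the right.

```python
import random

def resolve(stations, log):
    """Recursively resolve a collision by binary splitting: each
    colliding station flips a fair coin; subset 0 retries first,
    subset 1 waits until subset 0 is fully resolved."""
    if len(stations) <= 1:
        log.append(('success', stations) if stations else ('idle', []))
        return
    log.append(('collision', stations))
    left = [s for s in stations if random.random() < 0.5]
    right = [s for s in stations if s not in left]
    resolve(left, log)
    resolve(right, log)

random.seed(3)
log = []
resolve(['A', 'B', 'C'], log)
# Every station ends up in exactly one successful (singleton) slot.
successes = [s for kind, subset in log for s in subset if kind == 'success']
print(sorted(successes))   # ['A', 'B', 'C']
```

The log also records the collision and idle slots spent along the way, which is exactly the overhead the adaptive variants above try to minimize.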
• Advantages and Disadvantages
• Advantages:
• Reduces collisions, especially in high-traffic environments.
• Improves overall throughput by ensuring more efficient use of the channel.
• Adapts to different contention levels, minimizing delays in transmissions.
• Disadvantages:
• May introduce complexity, especially for large networks with many contending nodes.
• Requires channel feedback to detect collisions and adjust splitting, which can add to overhead.
• Potential for inefficiency in very low or very high contention periods if the splitting criteria are not adaptive.
Tree and First Come First Serve
• Tree topology in networking refers to a hierarchical arrangement where devices are interconnected. It has a resemblance
to a tree, with a central node known as the root and several offshoots called branches.
• The root node is linked to many tiers of child nodes, creating a hierarchical structure.
• FCFS is the simplest CPU scheduling algorithm; it schedules processes according to their arrival times.
• The first come first serve scheduling algorithm states that the process that requests the CPU first is allocated the CPU
first.
• It is implemented by using the FIFO queue. When a process enters the ready queue, its PCB is linked to the tail of the
queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed
from the queue. FCFS is a non-preemptive scheduling algorithm.
Characteristics of FCFS
• FCFS is a non-preemptive scheduling algorithm: once a process gets the CPU, it runs to completion.
• Tasks are always executed on a First-come, First-serve concept.
• FCFS is easy to implement and use.
• This algorithm is not very efficient in performance, and the wait time is quite high.
Tree and First Come First Serve
Algorithm for FCFS Scheduling
The waiting time for the first process is 0 as it is executed first.
The waiting time for the upcoming process can be calculated by:
wt[i] = ( at[i – 1] + bt[i – 1] + wt[i – 1] ) – at[i]
where
wt[i] = waiting time of current process
at[i-1] = arrival time of previous process
bt[i-1] = burst time of previous process
wt[i-1] = waiting time of previous process
at[i] = arrival time of current process
The Average waiting time can be calculated by:
Average Waiting Time = (sum of all waiting time)/(Number of processes)
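The recurrence above can be implemented directly (a small sketch with assumed arrival and burst times; the max with 0 covers the case where the CPU sits idle between arrivals):

```python
def fcfs_waiting_times(arrival, burst):
    """Waiting time of each process under FCFS, with processes already
    ordered by arrival time. Follows
    wt[i] = (at[i-1] + bt[i-1] + wt[i-1]) - at[i], clamped at 0."""
    wt = [0]                               # first process never waits
    for i in range(1, len(arrival)):
        w = (arrival[i - 1] + burst[i - 1] + wt[i - 1]) - arrival[i]
        wt.append(max(0, w))
    return wt

at = [0, 1, 2]     # arrival times (assumed example)
bt = [5, 3, 8]     # burst times (assumed example)
wt = fcfs_waiting_times(at, bt)
print(wt)                        # [0, 4, 6]
print(sum(wt) / len(wt))         # average waiting time = 10/3
```

Tracing the example: process 2 waits until process 1 finishes at t = 5 (arrived at 1, waits 4), and process 3 waits until t = 8 (arrived at 2, waits 6).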
Switch
• Switches are networking devices operating at layer 2 or a data link layer of the OSI model. They connect devices in a
network and use packet switching to send, receive or forward data packets or data frames over the network.
• A switch has many ports, to which computers are plugged in. When a data frame arrives at any port of a network switch,
it examines the destination address, performs necessary checks and sends the frame to the corresponding device(s). It
supports unicast, multicast as well as broadcast communications.
Features of Switches
• A switch operates in the layer 2, i.e. data link layer of the OSI model.
• It is an intelligent network device that can be conceived as a multiport network bridge.
• It uses MAC addresses (addresses of medium access control sublayer) to send data packets to selected destination ports.
• It uses packet switching technique to receive and forward data packets from the source to the destination device.
• It supports unicast (one-to-one), multicast (one-to-many), and broadcast (one-to-all) communications.
• Transmission mode is full duplex, i.e. communication in the channel occurs in both the directions at the same time. Due
to this, collisions do not occur.
• Switches are active devices, equipped with network software and network management capabilities.
• Switches can perform some error checking before forwarding data to the destined port.
• The number of ports is higher, typically 24 or 48.
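The learn-and-forward behaviour described above can be sketched as a minimal learning switch (an illustrative model; port numbers and MAC strings are assumed placeholders):

```python
class LearningSwitch:
    """Minimal sketch of layer-2 forwarding: learn the source MAC's
    port, then forward to the known port or flood to all other ports."""
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                 # MAC address -> port

    def handle(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port   # learn where src lives
        if dst_mac in self.mac_table:       # known unicast destination
            return {self.mac_table[dst_mac]}
        return self.ports - {in_port}       # unknown/broadcast: flood

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle(1, 'aa', 'bb'))   # 'bb' unknown: flood to {2, 3, 4}
print(sw.handle(2, 'bb', 'aa'))   # 'aa' was learned on port 1: {1}
```

Real switches also age out table entries and handle multicast separately, but the learn-then-forward core is the same.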
Switch
• Types of Switches
• There are variety of switches that can be broadly categorized into 4 types −
Unmanaged Switch − These are inexpensive switches commonly used in home networks and small businesses. They can
be set up by simply plugging into the network, after which they instantly start operating. When more devices need to be
added, more switches are simply added by this plug-and-play method. They are referred to as unmanaged since they do not
need to be configured or monitored.
Managed Switch − These are costly switches that are used in organizations with large and complex networks, since they
can be customized to augment the functionalities of a standard switch. The augmented features may be QoS (Quality of
Service) like higher security levels, better precision control and complete network management. Despite their cost, they are
preferred in growing organizations due to their scalability and flexibility. Simple Network Management Protocol (SNMP)
is used for configuring managed switches.
LAN Switch − Local Area Network (LAN) switches connect devices in the internal LAN of an organization. They are
also referred to as Ethernet switches or data switches. These switches are particularly helpful in reducing network congestion
or bottlenecks. They allocate bandwidth in a manner so that there is no overlapping of data packets in a network.
PoE Switch − Power over Ethernet (PoE) switches are used in PoE Gigabit Ethernet networks. PoE technology combines data and
power transmission over the same cable, so that devices connected to it can receive both electricity and data over the
same line. PoE switches offer greater flexibility and simplify cabling connections.
VLANs
• Virtual LAN (VLAN) is a concept in which we can divide the devices logically on layer 2 (data link layer). Generally,
layer 3 devices divide the broadcast domain but the broadcast domain can be divided by switches using the concept of
VLAN.
• A broadcast domain is a network segment in which, if a device broadcasts a packet, all the devices in the same
broadcast domain will receive it. The devices in the same broadcast domain receive all broadcast packets, but this
is limited to switches, as routers don’t forward broadcast packets. To forward packets between different
VLANs (from one VLAN to another) or broadcast domains, inter-VLAN routing is needed. Through VLANs, different
small-size sub-networks are created which are comparatively easy to handle.
• VLAN ranges:
• VLAN 0, 4095: These are reserved VLANs which cannot be seen or used.
• VLAN 1: It is the default VLAN of switches. By default, all switch ports are in VLAN 1. This VLAN can’t be deleted or
edited, but it can be used.
• VLAN 2-1001: This is the normal VLAN range. We can create, edit and delete these VLANs.
• VLAN 1002-1005: These are Cisco defaults for FDDI and Token Ring. These VLANs can’t be deleted.
• VLAN 1006-4094: This is the extended range of VLANs.
VLANs
• VLANs offer several features and benefits, including:
• Improved network security: VLANs can be used to separate network traffic and limit access to specific network
resources. This improves security by preventing unauthorized access to sensitive data and network resources.
• Better network performance: By segregating network traffic into smaller logical networks, VLANs can reduce the
amount of broadcast traffic and improve network performance.
• Simplified network management: VLANs allow network administrators to group devices together logically, rather
than physically, which can simplify network management tasks such as configuration, troubleshooting, and
maintenance.
• Flexibility: VLANs can be configured dynamically, allowing network administrators to quickly and easily adjust
network configurations as needed.
• Cost savings: VLANs can help reduce hardware costs by allowing multiple virtual networks to share a single physical
network infrastructure.
• Scalability: VLANs can be used to segment a network into smaller, more manageable groups as the network grows in
size and complexity.
VLANs
• Types of connections in VLAN –
• There are three ways to connect devices on a VLAN, the type of connections are based on the connected devices i.e. whether they are
VLAN-aware(A device that understands VLAN formats and VLAN membership) or VLAN-unaware(A device that doesn’t understand
VLAN format and VLAN membership).
1. Trunk link – All devices connected to a trunk link must be VLAN-aware. All frames on a trunk link carry a special header (a tag);
such frames are called tagged frames.
2. Access link –It connects VLAN-unaware devices to a VLAN-aware bridge. All frames on the access link must be untagged.
3. Hybrid link –It is a combination of the Trunk link and Access link. Here both VLAN-unaware and VLAN-aware devices are attached and
it can have both tagged and untagged frames.
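Tagged frames on a trunk link carry a 4-byte IEEE 802.1Q tag inserted after the source MAC address. A minimal Python sketch of the tagging step (the frame layout is simplified and omits the trailing FCS):

```python
import struct

def tag_frame(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag after the destination and source MAC addresses.

    frame: untagged Ethernet frame (dst MAC | src MAC | EtherType | payload),
    given here without the trailing FCS.
    """
    assert 0 < vlan_id < 4095, "VLAN IDs 0 and 4095 are reserved"
    tci = (pcp << 13) | vlan_id            # PCP(3) | DEI(1)=0 | VID(12)
    tag = struct.pack("!HH", 0x8100, tci)  # TPID 0x8100 identifies 802.1Q
    return frame[:12] + tag + frame[12:]

untagged = bytes(12) + b"\x08\x00" + b"payload"  # dummy MACs, IPv4 EtherType
tagged = tag_frame(untagged, vlan_id=10)
assert tagged[12:14] == b"\x81\x00" and len(tagged) == len(untagged) + 4
```

A VLAN-unaware device on an access link never sees this tag; the switch strips it before forwarding the frame out an access port.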
• Real-Time Applications of VLAN
1. Voice over IP (VoIP) : VLANs can be used to isolate voice traffic from data traffic, which improves the quality of VoIP calls and reduces
the risk of network congestion.
2. Video Conferencing : VLANs can be used to prioritize video traffic and ensure that it receives the bandwidth and resources it needs for
high-quality video conferencing.
3. Remote Access : VLANs can be used to provide secure remote access to cloud-based applications and resources, by isolating remote users
from the rest of the network.
4. Cloud Backup and Recovery : VLANs can be used to isolate backup and recovery traffic, which reduces the risk of network congestion
and improves the performance of backup and recovery operations.
5. Gaming : VLANs can be used to prioritize gaming traffic, which ensures that gamers receive the bandwidth and resources they need for a
smooth gaming experience.
6. IoT : VLANs can be used to isolate Internet of Things (IoT) devices from the rest of the network, which improves security and reduces the
risk of network congestion.
Trunk Protocol(VTP)
• VTP (VLAN Trunking Protocol) is a Cisco proprietary protocol used by Cisco switches to exchange VLAN
information. With VTP, you can synchronize VLAN information (such as VLAN ID or VLAN name) across switches
inside the same VTP domain, keeping the VLAN configuration consistent throughout the network.
• A VTP domain is a set of trunked switches with matching VTP settings (domain name, password and VTP
version). All switches inside the same VTP domain share their VLAN information with each other.
• VTP allows you to add, delete and rename VLANs; the changes are then propagated to the other switches in the
VTP domain. VTP advertisements can be sent over both 802.1Q and ISL trunks.
• Requirements – There are some requirements for VTP to communicate VLAN information between switches. These
are:
1. The VTP version must be the same on the switches the user wants to configure
2. The VTP domain name must be the same on the switches
3. One of the switches must be a VTP server
4. Authentication (password) must match if applied
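The requirements above amount to a compatibility check between two trunked switches. A Python sketch (the class and field names are illustrative, not a real API):

```python
from dataclasses import dataclass

@dataclass
class VtpConfig:
    domain: str
    version: int
    password: str = ""   # empty string = no authentication configured

def can_sync(a: VtpConfig, b: VtpConfig) -> bool:
    """Return True if two trunked switches satisfy the VTP requirements
    for exchanging VLAN information."""
    return (a.domain == b.domain            # same VTP domain name
            and a.version == b.version      # same VTP version
            and a.password == b.password)   # authentication must match

server = VtpConfig("CAMPUS", 2, "s3cret")
client = VtpConfig("CAMPUS", 2, "s3cret")
other  = VtpConfig("CAMPUS", 1, "s3cret")
assert can_sync(server, client)
assert not can_sync(server, other)  # version mismatch blocks synchronization
```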
Trunk Protocol(VTP)
• VTP Modes :
The user can configure a switch to work in any one of the following VTP modes:
1. Server –
In VTP server mode, the user can create, modify and delete VLANs, and can also set domain-wide
configuration parameters such as VTP pruning and the VTP version. Servers advertise their
VLAN configuration to the other switches in the same domain, and they synchronize their configuration
with other switches based on the advertisements received over trunk links. Server is the default
VTP mode.
2. Client –
VTP clients synchronize and forward VLAN information just as servers do, but the user cannot create, change, or delete VLANs on them.
3. Transparent –
VTP transparent switches do not participate in VTP. A transparent switch does not advertise its VLAN configuration and
does not synchronize its VLAN configuration from received advertisements; however, in VTP version 2,
transparent switches do forward the VTP advertisements they receive on their trunk ports.
4. VTP mode Off –
In the three modes described above, VTP advertisements are processed and forwarded as soon as the switch joins the
management domain. In off mode, switches behave as in VTP transparent mode, with one difference:
VTP advertisements are not forwarded.
Trunk Protocol(VTP)
• VTP configuration :
For exchanging VTP messages there are some basic conditions that need to be fulfilled.
1. The VTP domain name should be the same on both switches.
2. VTP versions should be the same.
3. VTP domain password should be the same.
4. The switch should be configured as either a VTP client or a VTP server.
5. A trunk link should be used between switches.
Implement Spanning Tree Protocol,
• The Spanning Tree Protocol (STP) is a network protocol designed to prevent loops in Ethernet networks by creating a
loop-free logical topology.
• It achieves this by identifying and disabling redundant links that could cause broadcast storms and MAC table
instability
• The Spanning Tree Protocol (STP) is defined by IEEE standard 802.1D-1988. STP generates a single spanning tree
inside a network. This mode proved useful for supporting applications and protocols that cannot tolerate frames being
delivered out of sequence or as duplicates.
• The topology is named Spanning Tree, because it is constructed as a loop-free active forwarding topology, meaning that
it is a tree-type topology that spans the entire network.
• The spanning tree is generated during the process of exchanging Bridge Protocol Data Units (BPDUs) between bridges
in a LAN. The spanning tree algorithm functions in two following ways:
• Computing a loop-free portion of the topology, called a spanning tree, via an automated process. The topology is
dynamically pruned to the spanning tree by declaring certain redundant ports on a switch and placing them into a
“blocking” state.
• If possible, automatically recovering from a switch failure that could result in the partitioning of the extended LAN
by reconfiguring the spanning tree to use redundant paths.
• By default, RSTP is the mode enabled on every port of a switch. It prevents Layer 2 loops in a network.
Implement Spanning Tree Protocol,
• STP is a Layer 2 link management protocol that provides path redundancy while preventing loops in the network. For a Layer 2
network to function properly, only one active path can exist between any two stations. Spanning-tree operation is transparent to
end stations, which cannot detect whether they are connected to a single LAN segment or to a LAN of multiple segments.
• When you create fault-tolerant internetworks, you must have a loop-free path between all nodes in a network. The spanning-tree
algorithm calculates the best loop-free path throughout a Layer 2 network. Infrastructure devices such as wireless bridges and
switches send and receive spanning-tree frames, called bridge protocol data units (BPDUs), at regular intervals. The devices do
not forward these frames but use them to construct a loop-free path.
• Multiple active paths among end stations cause loops in the network. If a loop exists, end stations might receive
duplicate messages, and infrastructure devices might learn end-station MAC addresses on multiple Layer 2 interfaces. These
conditions result in an unstable network.
• STP defines a tree with a root device and a loop-free path from the root to all infrastructure devices in the Layer 2 network.
• STP forces redundant data paths into a standby (blocked) state. If a network segment in the spanning tree fails and a redundant
path exists, the spanning-tree algorithm recalculates the spanning-tree topology and activates the standby path.
• When two interfaces are part of a loop, the spanning-tree port priority and path cost settings determine which interface is put in the
forwarding state and which is put in the blocking state. The port priority value represents the location of an interface in the
network topology and how well it is located to pass traffic. The path cost value represents media speed.
• The bridge supports both per-VLAN spanning tree (PVST) and a single 802.1Q spanning tree without VLANs. The bridge cannot
run 802.1s Multiple Spanning Tree (MST) or 802.1D Common Spanning Tree, which map multiple VLANs into a single
spanning-tree instance.
• The bridge maintains a separate spanning-tree instance for each active VLAN configured on it. A bridge ID, consisting of the
bridge priority and the MAC address, is associated with each instance. For each VLAN, the bridge with the lowest bridge ID
becomes the spanning-tree root for that VLAN.
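The per-VLAN root election described above compares bridge IDs, i.e., (priority, MAC address) pairs, and the lowest wins. A Python sketch of the election for one VLAN (switch names and addresses are made up):

```python
def elect_root(bridges):
    """Elect the spanning-tree root: the bridge with the lowest bridge ID.

    The bridge ID is compared as (priority, MAC address); the priority is
    compared first, and the MAC address breaks ties.
    """
    return min(bridges, key=lambda b: (b["priority"], b["mac"]))

bridges = [
    {"name": "SW1", "priority": 32768, "mac": "00:1a:2b:3c:4d:5e"},
    {"name": "SW2", "priority": 32768, "mac": "00:1a:2b:3c:4d:01"},
    {"name": "SW3", "priority": 4096,  "mac": "00:ff:ee:dd:cc:bb"},
]
root = elect_root(bridges)
print(root["name"])  # SW3 — lowest priority wins before MACs are compared
```

Lowering a switch's priority (e.g., from the default 32768 to 4096) is the usual way an administrator forces a particular switch to become root.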
Inter-VLAN Routing
• Normally, Routers are used to divide the broadcast domain and switches (at layer 2) Operate in a single broadcast
domain but Switches can also divide the broadcast domain by using the concept of VLAN (Virtual LAN).
• A VLAN is a logical grouping of devices in the same or different broadcast domains. By default, all switch ports are
in VLAN 1. As the single broadcast domain is divided into multiple broadcast domains, routers or Layer 3 switches are
used for intercommunication between the different VLANs. The process of intercommunication between different VLANs is
known as inter-VLAN routing (IVR).
• Suppose we have made 2 logical groups of devices (VLAN) named sales and finance. If a device in the sales department
wants to communicate with a device in the finance department, inter-VLAN routing has to be performed. These can be
performed by either router or layer 3 switches.
• Switch Virtual Interface (SVI): SVI is a logical interface on a multilayer switch that provides layer 3 processing for
packets to all switch ports associated with that VLAN. A single SVI can be created for a VLAN. SVI on the layer 3
switch provides both management and routing services while SVI on layer 2 switch provides only management services
like creating VLANs or telnet/SSH services.
• Process of inter-VLAN routing by a Layer 3 switch: The SVI created for each VLAN acts as the default gateway
for that VLAN, just like a router sub-interface in the router-on-a-stick method. If a packet must be
delivered to a different VLAN (i.e., inter-VLAN routing is to be performed on the Layer 3 switch), the packet is first
delivered to the Layer 3 switch and then forwarded to the destination, just as with router-on-a-stick.
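The gateway role of the SVI can be pictured with a small table mapping each VLAN to its SVI address. A toy Python sketch (VLAN numbers and addresses are illustrative):

```python
# Each VLAN's SVI acts as the default gateway for hosts in that VLAN.
svi_table = {10: "192.168.10.1",   # e.g. sales VLAN
             20: "192.168.20.1"}   # e.g. finance VLAN

def needs_inter_vlan_routing(src_vlan: int, dst_vlan: int) -> bool:
    """Traffic between hosts in different VLANs must cross a Layer 3 hop."""
    return src_vlan != dst_vlan

def first_hop(src_vlan: int, dst_vlan: int):
    """Return the gateway (SVI) a packet is sent to, or None if the
    destination is in the same VLAN and can be reached at Layer 2."""
    return svi_table[src_vlan] if needs_inter_vlan_routing(src_vlan, dst_vlan) else None

print(first_hop(10, 20))  # 192.168.10.1 — routed via the VLAN 10 SVI
print(first_hop(10, 10))  # None — same-VLAN traffic stays at Layer 2
```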
Inter-VLAN Routing
• Advantages:
• In the router-on-a-stick method, both a switch and a router are needed, but a single Layer 3 switch
performs inter-VLAN routing as well as the Layer 2 (VLAN) functions, so this method is cost-effective and
requires less configuration.
1. Speed: Inter-VLAN routing by Layer 3 switch is faster than other methods, as the Layer 3 switch can perform the routing process
quickly, without requiring the intervention of an external router.
2. Cost-effective: Using a Layer 3 switch for inter-VLAN routing can be more cost-effective than using a separate router, as it
eliminates the need for an external router and its associated costs.
3. Scalability: Inter-VLAN routing by Layer 3 switch is scalable, as additional VLANs can be added easily without significant
changes to the network topology.
4. Security: Inter-VLAN routing by Layer 3 switch provides better security than other methods, as it allows for the creation of
access control lists (ACLs) to restrict traffic between VLANs.
• Disadvantages:
1. Complexity: Inter-VLAN routing by Layer 3 switch can be complex to configure and manage, particularly in large networks with
many VLANs.
2. Limited functionality: Layer 3 switches have limited routing capabilities compared to dedicated routers, which can limit the
routing options available to network administrators.
3. Single point of failure: If the Layer 3 switch used for inter-VLAN routing fails, all traffic between VLANs will be disrupted,
which can cause significant network downtime.
4. Bandwidth utilization: Inter-VLAN routing by Layer 3 switch can lead to increased bandwidth utilization, as all traffic between
VLANs must pass through the Layer 3 switch, which can create a bottleneck if not properly managed.
Wireless Router Services in Converged WAN
Wireless Router Services in Converged WAN (Wide Area Network) refers to the use of wireless routers as integral parts of a
WAN, enabling the convergence of different services and communication technologies across a unified network infrastructure. Here
are the key points to understand this concept:
1. Overview of Converged WAN
• Convergence: A converged WAN integrates multiple types of traffic (data, voice, video) over a single infrastructure. This can
include wired and wireless connections.
• Services: The services in a converged WAN include:
• Internet access
• VoIP (Voice over IP)
• Video conferencing
• Cloud services
• Unified communication (UC)
2. Role of Wireless Routers in a Converged WAN
• Wireless Connectivity: Wireless routers provide seamless connectivity, enabling mobile devices and remote users to access
WAN services without being physically connected via cables.
• Hybrid WANs: A wireless router allows for the integration of wireless networks (such as 4G/5G) with traditional WAN
technologies (like MPLS, Ethernet).
• Backup and Failover: Wireless routers in converged WANs can serve as backup connections for wired links, ensuring failover in
case of primary link failure.
• Load Balancing: Routers can distribute traffic over both wireless and wired links to optimize bandwidth usage and ensure high
availability.
Point to Point Protocol(PPP)
• Point - to - Point Protocol (PPP) is a communication protocol of the data link layer that is used to transmit
multiprotocol data between two directly connected (point-to-point) computers.
• It is a byte-oriented protocol that is widely used in broadband communications with heavy loads and high speeds.
Since it is a data link layer protocol, data is transmitted in frames. PPP is defined in RFC 1661.
• Services Provided by PPP
• Defining the frame format of the data to be transmitted.
• Defining the procedure of establishing link between two points and exchange of data.
• Stating the method of encapsulation of network layer data in the frame.
• Stating authentication rules of the communicating devices.
• Providing address for network communication.
• Providing connections over multiple links.
• Supporting a variety of network layer protocols by providing a range of services.
Point to Point Protocol(PPP)
• Components of PPP
• Point - to - Point Protocol is a layered protocol having three components −
• Encapsulation Component − It encapsulates the datagram so that it can be transmitted over the specified physical layer.
• Link Control Protocol (LCP) − It is responsible for establishing, configuring, testing, maintaining and terminating links for
transmission. It also imparts negotiation for set up of options and use of features by the two endpoints of the links.
• Authentication Protocols (AP) − These protocols authenticate endpoints for use of services. The two authentication
protocols of PPP are −
1.Password Authentication Protocol (PAP)
2.Challenge Handshake Authentication Protocol (CHAP)
• Network Control Protocols (NCPs) − These protocols are used for negotiating the parameters and facilities for the network
layer. There is one NCP for every network-layer protocol supported by PPP. Some of the NCPs of PPP are −
• Internet Protocol Control Protocol (IPCP)
• OSI Network Layer Control Protocol (OSINLCP)
• Internetwork Packet Exchange Control Protocol (IPXCP)
• DECnet Phase IV Control Protocol (DNCP)
• NetBIOS Frames Control Protocol (NBFCP)
• IPv6 Control Protocol (IPV6CP)
Point to Point Protocol(PPP)
• PPP Frame
• PPP is a byte - oriented protocol where each field of the frame is composed of one or more bytes. The fields of a PPP
frame are −
• Flag − 1 byte that marks the beginning and the end of the frame. The bit pattern of the flag is 01111110 (0x7E).
• Address − 1 byte set to 11111111 (0xFF), the broadcast address, since PPP does not assign individual station addresses.
• Control − 1 byte set to the constant value 00000011 (0x03), imitating an HDLC unnumbered frame.
• Protocol − 1 or 2 bytes that define the type of data contained in the payload field.
• Payload − This carries the data from the network layer. The default maximum length of the payload field is 1500 bytes;
however, this may be negotiated between the endpoints of communication.
• FCS − A 2-byte or 4-byte frame check sequence for error detection, computed as a cyclic redundancy check (CRC).
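The frame layout and the 16-bit FCS can be illustrated in Python. This sketch follows the FCS-16 algorithm of RFC 1662 but omits byte stuffing of flag and escape bytes, so it is a teaching sketch rather than a full transmitter:

```python
def ppp_fcs16(data: bytes) -> int:
    """PPP 16-bit FCS (RFC 1662): CRC-16 with reversed polynomial 0x8408,
    initial value 0xFFFF, final one's complement."""
    fcs = 0xFFFF
    for byte in data:
        fcs ^= byte
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF

def build_ppp_frame(protocol: int, payload: bytes) -> bytes:
    # address 0xFF, control 0x03, then the protocol field and payload
    body = bytes([0xFF, 0x03]) + protocol.to_bytes(2, "big") + payload
    fcs = ppp_fcs16(body)
    # flag | address | control | protocol | payload | FCS | flag
    return b"\x7e" + body + fcs.to_bytes(2, "little") + b"\x7e"

frame = build_ppp_frame(0x0021, b"hello")  # 0x0021 = IPv4 in PPP
assert frame[0] == frame[-1] == 0x7E
assert ppp_fcs16(b"123456789") == 0x906E   # standard CRC-16/X-25 check value
```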
Frame Relay.
• Frame Relay is a packet-switching network protocol that is designed to work at the data link layer of the network. It is used to
connect Local Area Networks (LANs) and transmit data across Wide Area Networks (WANs).
• It is a better alternative to a point-to-point network for connecting multiple nodes that require separate dedicated links to be
established between each pair of nodes.
• It allows transmission of different size packets and dynamic bandwidth allocation. Also, it provides a congestion control
mechanism to reduce the network overheads due to congestion. It does not have an error control and flow management
mechanism.
• Types:
1. Permanent Virtual Circuit (PVC) –These are the permanent connections between frame relay nodes that exist for long
durations. They are always available for communication even if they are not in use. These connections are static and do not
change with time.
2. Switched Virtual Circuit (SVC) –These are the temporary connections between frame relay nodes that exist for the duration
for which nodes are communicating with each other and are closed/ discarded after the communication. These connections are
dynamically established as per the requirements.
• Advantages: high speed, scalability, reduced network congestion, cost efficiency, and secure connections.
• Disadvantages: lacks an error control mechanism, delay in packet transfer, and lower reliability.
Frame Relay.
• Frame relay switches set up virtual circuits to connect multiple LANs to build a WAN. Frame relay transfers data
between LANs across WAN by dividing the data in packets known as frames and transmitting these packets across the
network. It supports communication with multiple LANs over the shared physical links or private lines.
• Frame relay network is established between Local Area Networks (LANs) border devices such as routers and service
provider network that connects all the LAN networks. Each LAN has an access link that connects routers of LAN to the
service provider network terminated by the frame relay switch. The access link is the private physical link used for
communication with other LAN networks over WAN. The frame relay switch is responsible for terminating the access
link and providing frame relay services.
• For data transmission, the LAN's router (or another border device attached to the access link) sends the data packets over the
access link. The frame relay switch examines each packet to read its Data Link Connection Identifier
(DLCI), which indicates the packet's destination. Because the switch already knows the addresses of
the LANs connected to the network, it identifies the destination LAN from the DLCI of the data packet. The DLCI
identifies the virtual circuit (a logical, rather than dedicated physical, path between nodes) between the source and
destination networks. The switch then forwards the packet to the frame relay switch of the destination LAN, which in turn
delivers it to the destination LAN over its access link. Hence, in this way, a LAN is
connected with multiple other LANs while sharing a single physical link for data transmission.
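Because DLCIs are only locally significant, each switch along a permanent virtual circuit rewrites the DLCI as it forwards a frame. A toy Python sketch of the lookup (the table entries are invented for illustration):

```python
# DLCIs are locally significant: each frame relay switch rewrites the DLCI
# as it forwards a frame along the PVC.
switching_table = {
    # (incoming port, incoming DLCI) -> (outgoing port, outgoing DLCI)
    (1, 100): (3, 200),
    (2, 150): (3, 201),
}

def forward(in_port: int, dlci: int):
    """Look up an arriving frame's DLCI and forward it along the PVC;
    frames arriving on an unknown circuit are discarded."""
    return switching_table.get((in_port, dlci))

assert forward(1, 100) == (3, 200)   # forwarded on port 3 with DLCI 200
assert forward(1, 999) is None       # unknown DLCI: frame discarded
```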
ATM Networks: Introduction
• ATM stands for Asynchronous Transfer Mode. It is a switching technique that uses asynchronous time-division
multiplexing (TDM) for data communications.
• ATM networks are connection oriented networks for cell relay that supports voice, video and data communications. It
encodes data into small fixed - size cells so that they are suitable for TDM and transmits them over a physical medium.
• The size of an ATM cell is 53 bytes: a 5-byte header and a 48-byte payload. There are two different cell header
formats − user-network interface (UNI) and network-network interface (NNI).
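The 5-byte UNI header packs GFC (4 bits), VPI (8), VCI (16), payload type (3), CLP (1) and HEC (8). A Python sketch that decodes these fields (the HEC is left unverified for simplicity):

```python
def parse_uni_header(header: bytes) -> dict:
    """Decode the 5-byte ATM UNI cell header:
    GFC(4) | VPI(8) | VCI(16) | PT(3) | CLP(1) | HEC(8)."""
    assert len(header) == 5
    h = int.from_bytes(header, "big")  # 40 bits, GFC in the top nibble
    return {
        "gfc": (h >> 36) & 0xF,
        "vpi": (h >> 28) & 0xFF,
        "vci": (h >> 12) & 0xFFFF,
        "pt":  (h >> 9)  & 0x7,
        "clp": (h >> 8)  & 0x1,
        "hec":  h        & 0xFF,
    }

# GFC=0, VPI=1, VCI=5, PT=0, CLP=0; HEC left as 0 here (in practice it is
# a CRC-8 computed over the first four header bytes).
header = ((1 << 28) | (5 << 12)).to_bytes(5, "big")
fields = parse_uni_header(header)
assert fields["vpi"] == 1 and fields["vci"] == 5
```

The NNI format differs only in that it drops the GFC field and widens the VPI to 12 bits.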
ATM Networks: Introduction
• The ATM reference model comprises three layers:
• Physical Layer − This layer corresponds to the physical layer of the OSI model. At this layer, the cells are converted into bit
streams and transmitted over the physical medium. This layer has two sublayers: the PMD (Physical Medium
Dependent) sublayer and the TC (Transmission Convergence) sublayer.
• ATM Layer − This layer is comparable to the data link layer of the OSI model. It accepts 48-byte segments from the upper
layer and adds a 5-byte header to each segment, forming 53-byte cells. This layer is responsible for the routing of each
cell, traffic management, multiplexing and switching.
• ATM Adaptation Layer (AAL) − This layer corresponds to the network layer of the OSI model. It provides facilities for
existing packet-switched networks to connect to an ATM network and use its services. It accepts data and converts it
into fixed-size segments. Transmissions can be at a fixed or variable data rate. This layer has two sublayers −
the convergence sublayer and the segmentation and reassembly sublayer.
• ATM endpoints − It contains ATM network interface adaptor. Examples of endpoints are workstations, routers,
CODECs, LAN switches, etc.
• ATM switch −It transmits cells through the ATM networks. It accepts the incoming cells from ATM endpoints (UNI)
or another switch (NNI), updates cell header and retransmits cell towards destination.
The ATM AAL Layer Protocols
ATM Adaptation Layer (AAL)
• In Asynchronous Transfer Mode (ATM) networks, the ATM
Adaptation Layer (AAL) provides facilities for non-ATM
based networks to connect to ATM network and use its
services.
• AAL is basically a software layer that accepts user data,
which may be digitized voice, video or computer data, and
makes them suitable for transmission over an ATM network.
• The transmissions can be of fixed or variable data rate. AAL
accepts higher layer packets and segments them into fixed
sized ATM cells before transmission via ATM. It also
reassembles the received segments to the higher layer
packets.
• This layer has two sub layers −
• Convergence sub layer
• Segmentation and Reassembly sub layer.
• Some networks that need AAL services are Gigabit Ethernet, IP, Frame
Relay, SONET/SDH and UMTS/Wireless.
The ATM AAL Layer Protocols
AAL Protocols
• AAL Type 0 − This is the simplest service that provides direct interface to ATM services without any restrictions.
These cells are called raw cells that contain 48-byte payload field without any special fields. It lacks guaranteed delivery
and interoperability.
• AAL Type 1 − This service provides interface for synchronous, connection oriented traffic. It supports constant rate bit
stream between the two ends of an ATM link. An AAL 1 cell contains a 4-bit sequence number, a 4-bit sequence
number protection and a 47-byte payload field.
• AAL Type 2 − This service also provides interface for synchronous, connection oriented traffic. However, this is for
variable rate bit stream between the two ends of an ATM link. It is used in wireless applications.
• AAL Type 3/4 − This includes a range of services for variable rate data or bit streams. It is suitable for both
connection-oriented asynchronous traffic and connectionless traffic. These ATM cells contain a 4-byte header.
• AAL Type 5 − AAL 5 provides the similar services as AAL 3/4, but with simplified header information. It was
originally named Simple and Efficient Adaptation Layer (SEAL). It is used in a number of areas like Internet Protocol
(IP) over ATM, Ethernet over ATM and Switched Multimegabit Data Service (SMDS).
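AAL 5's convergence sublayer pads the user data and appends an 8-byte trailer so that the PDU divides evenly into 48-byte cell payloads. A simplified Python sketch (`zlib.crc32` stands in for the AAL 5 CRC-32, and the trailer's UU/CPI fields are set to zero):

```python
import struct
import zlib

def aal5_segment(data: bytes):
    """Sketch of AAL 5 CPCS: append padding plus an 8-byte trailer (UU, CPI,
    length, CRC-32) so the PDU is a multiple of 48 bytes, then cut it into
    48-byte cell payloads."""
    pad_len = (-(len(data) + 8)) % 48               # pad so PDU % 48 == 0
    trailer = struct.pack("!BBH", 0, 0, len(data))  # UU, CPI, length
    pdu = data + bytes(pad_len) + trailer
    pdu += struct.pack("!I", zlib.crc32(pdu) & 0xFFFFFFFF)  # CRC over PDU
    return [pdu[i:i + 48] for i in range(0, len(pdu), 48)]

cells = aal5_segment(b"x" * 100)  # 100 data + 36 pad + 8 trailer = 144 bytes
assert len(cells) == 3 and all(len(c) == 48 for c in cells)
```

The receiver reassembles the payloads, checks the CRC, and uses the length field in the trailer to strip the padding.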
Historical perspective
ATM technology, developed in the late 1980s and early 1990s, was designed to revolutionize high-speed data transfer
across broadband networks. Its key design elements include:
Cells: The basic data transmission units in ATM are cells, each 53 bytes in size, comprising a 5-byte header and a 48-byte
payload.
Virtual Channels (VCs) and Virtual Paths (VPs): These are crucial for establishing connections in an ATM network. VPs
are bundles of VCs sharing the same network path, and VCs carry data between devices.
Switches: Operating at layer 2 of the OSI model, ATM switches use the Virtual Path Identifier (VPI) and Virtual Channel
Identifier (VCI) in cell headers to route cells correctly.
Quality of Service (QoS) Mechanisms: These mechanisms, including traffic shaping and priority queuing, are integral to
ATM's ability to deliver data with specific performance levels.
ATM's design was based on fast, reliable communication using small, fixed-size cells and dedicated connections. Its ability
to support both connection-oriented and connectionless communication made it versatile for various networking
applications. Moreover, ATM's efficient use of bandwidth and support for multiple traffic types, including voice, video,
and data, set it apart from traditional packet-switched networks.
protocol architecture.
• The ATM standard uses two types of connections: virtual path connections (VPCs), which consist of virtual channel
connections (VCCs) bundled together. A VCC is the basic unit, carrying a single stream of cells from user to user.
• A virtual path can be created end-to-end across an ATM network, as intermediate switches route cells by virtual path
rather than by individual virtual circuit. In case of a major failure, all cells belonging to a particular virtual path are
rerouted the same way through the ATM network, which speeds up recovery.
• Switches connected to subscribers use both VPIs and VCIs to switch cells. Virtual path and virtual channel
switches can have different virtual channel connections between them, creating a virtual trunk between the switches
that can be handled as a single entity.
• The basic switching operation is straightforward: look up the connection value (VPI/VCI) in the local translation table to
determine the outgoing port and the new VPI/VCI value for the connection on that link.
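The translation-table lookup described above can be sketched in a few lines of Python (ports and VPI/VCI values are invented for illustration):

```python
# Per-link translation table of an ATM switch: the VPI/VCI pair is only
# locally significant, so it is rewritten on every hop.
translation_table = {
    # (incoming port, VPI, VCI) -> (outgoing port, new VPI, new VCI)
    (1, 0, 32): (4, 2, 77),
    (1, 0, 33): (5, 2, 78),
}

def switch_cell(in_port: int, vpi: int, vci: int):
    """Core ATM switching step: look up the connection, pick the outgoing
    port, and rewrite the VPI/VCI for the next link."""
    out_port, new_vpi, new_vci = translation_table[(in_port, vpi, vci)]
    return out_port, new_vpi, new_vci

assert switch_cell(1, 0, 32) == (4, 2, 77)
```

A pure virtual-path switch does the same lookup keyed on the VPI alone, leaving the VCI untouched, which is what makes rerouting an entire virtual path fast.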
protocol architecture.
1. ATM Adaptation Layer (AAL) –
It isolates higher-layer protocols from the details of the ATM processes and prepares user data for
conversion into cells, segmenting it into 48-byte cell payloads. The AAL accepts
transmissions from upper-layer services and helps map applications, e.g., voice and data, to
ATM cells.
2. Physical Layer –
It manages medium-dependent transmission and is divided into two parts: the physical
medium-dependent (PMD) sublayer and the transmission convergence (TC) sublayer. The main functions are as
follows:
1. It converts cells into a bitstream.
2. It controls the transmission and receipt of bits in the physical medium.
3. It can track the ATM cell boundaries.
4. It packages cells into the appropriate type of frame.
3. ATM Layer –
It handles transmission, switching, congestion control, cell header processing, sequential delivery, etc.
It is responsible for sharing virtual circuits over the physical link, known as cell
multiplexing, and for passing cells through an ATM network, known as cell relay, making use of the VPI
and VCI information in the cell header.
• ATM Applications:
1. ATM WANs –
ATM can be used as a WAN to send cells over long distances, with a router serving as an endpoint between the ATM network and other
networks, running both protocol stacks.
2. Multimedia virtual private networks and managed services –
It helps in managing ATM, LAN, voice, and video services and is capable of full-service virtual private networking, which
includes integrated access to multimedia.
3. Frame relay backbone –
ATM can serve as the backbone infrastructure for frame relay data services, enabling frame relay to ATM
service interworking.
4. Residential broadband networks –
ATM provides the networking infrastructure for residential broadband services where highly scalable
solutions are needed.
5. Carrier infrastructure for telephone and private-line networks –
ATM infrastructure can carry telephone and private-line traffic, making more effective use of SONET/SDH
fiber infrastructure.