
NOTES

Subject COMPUTER NETWORKS Code CIC-307

Community SnapED Code Campus

UNIT - 2
Data Link Layer: Design issues
o In the OSI model, the data link layer is the 6th layer from the top and the 2nd layer from the bottom.
o The communication channel that connects the adjacent nodes is known as links, and in order
to move the datagram from source to the destination, the datagram must be moved across an
individual link.
o The main responsibility of the Data Link Layer is to transfer the datagram across an individual
link.
o The Data link layer protocol defines the format of the packet exchanged across the nodes as
well as the actions such as Error detection, retransmission, flow control, and random access.
o The Data Link Layer protocols are Ethernet, token ring, FDDI and PPP.
o An important characteristic of a Data Link Layer is that datagram can be handled by different
link layer protocols on different links in a path. For example, the datagram is handled by
Ethernet on the first link, PPP on the second link.

Following services are provided by the Data Link Layer:


o Framing & Link access: Data Link Layer protocols encapsulate each network layer
datagram within a link layer frame before transmission across the link. A frame consists of
a data field, in which the network layer datagram is inserted, and a number of header
fields. The protocol specifies the structure of the frame as well as a channel access
protocol by which the frame is to be transmitted over the link. There are two types of
framing: fixed-size framing and variable-size framing. In fixed-size framing there is no
need to define the boundaries of the frames; the size itself can act as a delimiter. An
example of this type of framing is the ATM wide-area network, which uses fixed-size
frames called cells. Variable-size framing is prevalent in local-area networks; here we
need a way to define the end of one frame and the beginning of the next. Historically, two
approaches were used for this purpose: a character-oriented approach and a bit-oriented
approach.
o Reliable delivery: The Data Link Layer provides a reliable delivery service, i.e., it
transmits the network layer datagram without error. Reliable delivery is accomplished with
retransmissions and acknowledgements. The data link layer mainly provides this service
over links that have high error rates, so errors can be corrected locally, on the link where
they occur, rather than forcing an end-to-end retransmission of the data.
o Flow control: A receiving node can receive the frames at a faster rate than it can process
the frame. Without flow control, the receiver's buffer can overflow, and frames can get lost.
To overcome this problem, the data link layer uses the flow control to prevent the sending
node on one side of the link from overwhelming the receiving node on another side of the
link.
o Error detection: Errors can be introduced by signal attenuation and noise. Data Link
Layer protocol provides a mechanism to detect one or more errors. This is achieved by
adding error detection bits in the frame and then receiving node can perform an error check.
o Error correction: Error correction is similar to error detection, except that the receiving
node not only detects the errors but also determines where in the frame the errors have
occurred.
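The error-detection idea above can be illustrated with a minimal single-bit even-parity sketch in Python (a toy scheme for illustration only; real links use stronger codes such as CRCs):

```python
def parity_bit(data: bytes) -> int:
    """Return the even-parity bit: 0 if the number of 1 bits in the
    data is already even, else 1."""
    ones = sum(bin(b).count("1") for b in data)
    return ones % 2

def frame_with_parity(data: bytes) -> bytes:
    # For simplicity, append the parity bit as a whole trailing byte.
    return data + bytes([parity_bit(data)])

def check_frame(frame: bytes) -> bool:
    data, p = frame[:-1], frame[-1]
    return parity_bit(data) == p

frame = frame_with_parity(b"hello")
assert check_frame(frame)                          # intact frame passes
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit
assert not check_frame(corrupted)                  # single-bit error is detected
```

Single-bit parity catches any odd number of bit errors but misses even-numbered ones, which is why practical data link protocols use CRCs instead.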

Design issues
1. Services provided to the network layer –
The data link layer acts as a service interface to the network layer. Its principal service is
transferring data from the network layer on the sending machine to the network layer on
the destination machine; this transfer takes place via the DLL (Data Link Layer) on each
machine.
Here are the important services given by the Data Link layer to the Network layer −

● Unacknowledged connectionless service


● Acknowledged connectionless service
● Acknowledged connection-oriented service
2. Frame synchronization –
The source machine sends data in the form of blocks called frames to the destination
machine. The starting and ending of each frame should be identified so that the frame can
be recognized by the destination machine. Relying on timing to mark the starting and
end points of each frame is difficult and risky, so explicit framing techniques are used −

● Character count
● Starting and ending characters, with character stuffing
● Starting and ending flags, with bit stuffing
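The character-stuffing technique above can be sketched as follows; the FLAG and ESC byte values here are arbitrary choices for illustration:

```python
FLAG, ESC = b"~", b"}"   # illustrative delimiter and escape bytes

def stuff(payload: bytes) -> bytes:
    """Character-oriented framing: escape any FLAG/ESC byte occurring
    in the payload, then delimit the frame with FLAG bytes."""
    out = bytearray(FLAG)
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            out += ESC          # prefix special bytes with the escape
        out.append(b)
    out += FLAG
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    body = frame[1:-1]          # drop the two delimiting flags
    out, esc = bytearray(), False
    for b in body:
        if esc:
            out.append(b); esc = False
        elif bytes([b]) == ESC:
            esc = True
        else:
            out.append(b)
    return bytes(out)

msg = b"data~with}flags"
assert unstuff(stuff(msg)) == msg   # round-trips even with special bytes
```

The same idea, applied to bit patterns instead of bytes (inserting a 0 after five consecutive 1s), is the bit-stuffing used by bit-oriented protocols such as HDLC.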

3. Flow control –
Flow control prevents the sender from overrunning the receiver. The source machine
must not send data frames at a rate faster than the destination machine can accept them.
Even if transmission is error-free, the receiver may be unable to handle frames as fast as
they arrive, so a mechanism is needed by which the receiver can throttle the transmitter.
Two approaches used are:
i) Feedback-based flow control: the receiver sends information back to the sender
granting permission to send more data.
ii) Rate-based flow control: a built-in mechanism limits the rate at which the
sender may transmit data, without any feedback from the receiver.
The two categories of flow control are: a. Stop and wait b. Sliding window
4. Error control –
Error control ensures that frames reach the destination without error and without
duplication. Errors introduced during transmission from source to destination must be
detected and corrected at the destination machine. To achieve this, the receiver sends
positive or negative acknowledgements about incoming frames: a positive
acknowledgement means the frame arrived safely, while a negative acknowledgement
means something is wrong with the frame and it must be retransmitted. Timers are kept
at the sender's end so that lost frames or lost acknowledgements trigger retransmission,
and sequence numbers are assigned to outgoing frames so that the receiver can easily
identify a retransmitted (duplicate) frame. Error control is one of the main responsibilities
of the data link layer.

Data Link Control and Protocols


The protocols that can be used for noiseless (error-free) channels and those that can be used for
noisy (error-causing) channels are studied separately. Although the first group of protocols cannot
be applied in real life, they serve as a foundation for the noisy channel protocols.

● Noiseless Channel:- A noiseless channel is an ideal or nearly perfect channel in which


no frames are lost, distorted, or duplicated. The protocol does not include error control in
these types of channels.
● Noisy Channel:- A noisy channel indicates that there will be a lot of disruption in the
path when data is transmitted from the sender to the receiver.

The flow control protocols are generally divided into two categories i.e.
1. Stop and wait protocol
2. Sliding window protocol.
The difference between these two categories of flow control protocols is that in the stop and wait
protocol, only one data frame is sent at a time from sender to receiver. While in sliding window
protocol, multiple frames can be sent at a time from sender to receiver.
Sliding window protocol has two types:
1. Go-Back-N ARQ
2. Selective Repeat ARQ
Detail about protocol
1. Simplex Protocol
● The simplest protocol has no flow control or error control mechanism. It is a
unidirectional protocol in which the data frames travel in only one direction, from the
sender to the receiver. The processing time of the simplest protocol is assumed to be
negligible.
● The transmission channel is completely noiseless (a channel in which no frames are lost,
corrupted, or duplicated).
● The sender and the receiver are always ready to send and receive data.
● The sender sends a sequence of data frames without thinking about the receiver.
● There is no data loss hence no ACK (Acknowledgment) or NACK (Negative
acknowledgment).
● The protocol is divided into two steps: the sender and the receiver. The sender runs in the
source machine's data link layer, while the receiver operates in the destination machine's
data link layer. There is no usage of a sequence number or acknowledgments in this case.

Design:
● The data link layer at the sender site receives data from its network layer, creates a frame,
and sends it. The data link layer (receiver site) receives a frame from the physical layer,
extracts data from the frame, and sends it to the network layer. The sender and receiver's
data link layers provide communication/transmission services to their network layers. For
the physical movement of bits, the data link levels rely on the services offered by their
physical layers.
Flow Diagram:
This flow diagram depicts a communication scenario using the simplest protocol. It is
quite simple: without regard for the receiver, the sender transmits a succession of frames.
For example, if the sender sends three frames, the receiver receives three frames.
Data frames are represented by slanted boxes, with the height of the box showing the
transmission time difference between the first and last bit of the frame. Data transmission
is carried out in one direction only. The transmitter (Tx) and receiver (Rx) are always ready,
and processing time can be ignored. In this protocol, infinite buffer space is available and
no errors occur, i.e., no damaged frames and no lost frames. The Unrestricted Simplex
Protocol is diagrammatically represented as follows –
2. Stop and Wait Protocol
● Stop-and-wait is the most basic retransmission protocol.
● The transmitter (Station A) sends a frame over the communication line and waits for
the receiver's positive or negative acknowledgment (station B).
● Station B sends a positive acknowledgment (ACK) to station A if there is no error in the
transmission.
● The transmitter now begins to send the next frame. If a frame is received with errors at
station B, a negative acknowledgment (NAK) is transmitted to station A. Station 'A' must
retransmit the original packet in a new frame in this situation.
● There is also a chance that the information frames or ACKs will be lost.
● The sender is then outfitted with a timer. When the timer expires at the end of the time-out
interval and no recognizable acknowledgment is received, the same frame is transmitted
again.
● Stop and wait refers to a sender who delivers one frame and then waits for an
acknowledgment before proceeding.
Design:
We can see traffic on the front/forward channel (from sender to receiver) and the back/reverse
channel when we compare the stop-and-wait protocol design model to the Simplest protocol
design model. There is always one data frame on the forward channel and one acknowledgment
frame on the reverse channel. We now demand a half-duplex connection.
Flow diagram:
This diagram depicts a communication utilizing the Stop-and-Wait protocol. It remains
straightforward. The sender delivers a frame and then waits for the recipient to respond. When the
receiver's ACK (acknowledged) arrives, send the next frame, and so on. Remember that when two
frames are present, the sender will be involved in four events, and the receivers will be involved
in two events.
In this protocol we assume that data is transmitted in one direction only and that no errors occur,
but the receiver can only process the received information at a finite rate. These assumptions
imply that the transmitter must not send frames at a rate faster than the receiver can process them.
The main problem here is how to prevent the sender from flooding the receiver. The general
solution for this problem is to have the receiver send some sort of feedback to sender, the process
is as follows −
Step 1 − The receiver sends an acknowledgement frame back to the sender, telling the
sender that the last received frame has been processed and passed to the host.
Step 2 − Permission to send the next frame is granted.
Step 3 − The sender, after sending a frame, must wait for an acknowledgement frame
from the receiver before sending another frame.
This protocol is called Simplex Stop and wait protocol, the sender sends one frame and waits for
feedback from the receiver. When the ACK arrives, the sender sends the next frame.
The Simplex Stop and Wait Protocol is diagrammatically represented as follows −
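The three steps above can be sketched as a toy in-process simulation, using two one-slot queues as the forward and reverse channels (an illustrative model under ideal-channel assumptions, not a real network stack):

```python
import queue
import threading

data_q = queue.Queue(maxsize=1)  # forward channel: carries one frame at a time
ack_q = queue.Queue(maxsize=1)   # reverse channel: carries the acknowledgement

def receiver(n_frames: int) -> None:
    for _ in range(n_frames):
        frame = data_q.get()     # receive and "process" the frame
        ack_q.put("ACK")         # Step 1: tell the sender it was processed

def sender(frames) -> None:
    for f in frames:
        data_q.put(f)            # send one frame...
        ack_q.get()              # Step 3: ...then wait for the ACK before the next

frames = ["f0", "f1", "f2"]
t = threading.Thread(target=receiver, args=(len(frames),))
t.start()
sender(frames)
t.join()
print("all frames delivered and acknowledged")
```

Because the sender blocks on `ack_q.get()`, it can never have more than one outstanding frame, which is exactly the stop-and-wait discipline.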

Noisy Channel
There are three major protocols in the noisy channel, which are as follows.

1. Stop and Wait Automatic Repeat Request


● It is used in connection-oriented communication.
● Stop and Wait provides error and flow control.
● It is employed in the Data Link and Transport Layers.
● Stop and Wait ARQ is essentially a sliding window protocol implemented with a
window size of 1.
Working
● If a frame is destroyed during transmission on a noisy channel, the receiver will detect it
using the checksum.
● If a damaged frame is received, it is discarded, and the transmitter will retransmit the same
frame whenever a proper acknowledgment is received.
● If the acknowledgment frame is missing, the data link layer on 'A' will eventually time out.
Because it has not received an ACK, it concludes that its data frame has been lost or
corrupted and resends the frame containing packet 1. This duplicate frame also arrives at
the data link layer on 'B', resulting in a duplicated portion of the file and a protocol failure.
● To resolve this issue, a sequence number is inserted into the message's header.
● Whenever a frame arrives, the receiver checks the sequence number to see whether
the message is a duplicate.
● The relationship between a transmitted message and its ACK/NAK can be maintained
between the transmitting and receiving stations using just a 1-bit alternating sequence
number of '0' or '1'.
● Positive acknowledgments take the form of ACK 0 and ACK 1, and the frames are
alternately labeled with "0" or "1" in a modulo-2 numbering scheme.

Problems of Stop and Wait:


1. Lost Data

2. Lost Acknowledgement:

3. Delayed Acknowledgement/Data:
After a timeout on the sender side, a long-delayed acknowledgement might be wrongly
considered as acknowledgement of some other recent packet.
The above 3 problems are resolved by Stop and Wait ARQ (Automatic Repeat Request), which
does both error control and flow control.

1. Time Out:

2. Sequence Number (Data)


3. Delayed Acknowledgement:
This is resolved by introducing sequence numbers for acknowledgement also.
Working of Stop and Wait ARQ:
1) Sender A sends a data frame or packet with sequence number 0.
2) Receiver B, after receiving the data frame, sends an acknowledgement with sequence number
1 (the sequence number of the next expected data frame or packet)
There is only a one-bit sequence number that implies that both sender and receiver have a buffer
for one frame or packet only.
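The 1-bit alternating sequence number can be sketched as follows; the duplicated frame in the example stream stands in for a retransmission after a lost ACK:

```python
def sender_frames(packets):
    """Label outgoing frames with an alternating 0/1 sequence number."""
    seq = 0
    for p in packets:
        yield (seq, p)
        seq ^= 1                      # modulo-2: 0, 1, 0, 1, ...

def receiver(frames):
    """Deliver payloads in order, discarding duplicates by sequence number."""
    expected, delivered = 0, []
    for seq, payload in frames:
        if seq == expected:
            delivered.append(payload)
            expected ^= 1             # now expect the other number
        # a frame with the wrong seq is a duplicate: re-ACK it, don't deliver
    return delivered

data = ["p0", "p1", "p2"]
# simulate a duplicated frame: (0, "p0") arrives twice, as it would
# after the sender times out on a lost ACK and retransmits
stream = [(0, "p0"), (0, "p0"), (1, "p1"), (0, "p2")]
assert receiver(stream) == data       # the duplicate is discarded
```

One bit suffices precisely because at most one frame is ever outstanding; larger windows need larger sequence-number spaces.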

Characteristics of Stop and Wait ARQ:


● It uses a link between sender and receiver as a half-duplex link
● Throughput = 1 Data packet/frame per RTT
● If the bandwidth-delay product is very high, Stop and Wait ARQ is not very useful:
the sender has to keep waiting for acknowledgements before sending the next packet.
● It is an example of a "closed loop" or connection-oriented protocol.
● It is a special case of the sliding window protocol with a window size of 1.
● Irrespective of the number of packets the sender has, Stop and Wait ARQ requires
only 2 sequence numbers, 0 and 1.
Constraints:
Stop and Wait ARQ has very low efficiency; it can be improved by increasing the window size.
For better efficiency, the Go-Back-N and Selective Repeat protocols are used.
The Stop and Wait ARQ solves the main three problems but may cause big performance issues
as the sender always waits for acknowledgement even if it has the next packet ready to send.
Consider a situation where you have a high bandwidth connection and propagation delay is also
high (you are connected to some server in some other country through a high-speed connection).
To solve this problem, we can send more than one packet at a time, with a larger sequence
number space. These protocols are discussed in the next sections.
So Stop and Wait ARQ may work fine where propagation delay is very small, for example on LAN
connections, but it performs badly on long-distance links such as satellite connections.
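The performance point can be made concrete with the standard stop-and-wait efficiency formula 1/(1 + 2a), where a = Tp/Tt is the ratio of propagation delay to transmission time (the link parameters below are made-up illustrative values):

```python
def stop_and_wait_efficiency(frame_bits: float, bandwidth_bps: float,
                             prop_delay_s: float) -> float:
    """Link utilization of stop-and-wait: useful time Tt out of a full
    cycle Tt + 2*Tp (frame out, ACK back)."""
    tt = frame_bits / bandwidth_bps   # transmission time of one frame
    a = prop_delay_s / tt             # a = Tp / Tt
    return 1 / (1 + 2 * a)

# LAN-like link: short propagation delay -> efficiency near 1
lan = stop_and_wait_efficiency(8000, 10e6, 5e-6)
# satellite-like link: huge propagation delay -> efficiency near 0
sat = stop_and_wait_efficiency(8000, 10e6, 270e-3)

print(f"LAN: {lan:.3f}  satellite: {sat:.6f}")
assert lan > 0.9 and sat < 0.01
```

The satellite case is the high bandwidth-delay-product situation described above: the channel sits idle almost the entire round trip while the sender waits for each ACK.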
Advantages of Stop and Wait ARQ :
● Simple Implementation: Stop and Wait ARQ is a simple protocol that is easy to
implement in both hardware and software. It does not require complex algorithms or
hardware components, making it an inexpensive and efficient option.
● Error Detection: Stop and Wait ARQ detects errors in the transmitted data by using
checksums or cyclic redundancy checks (CRC). If an error is detected, the receiver
sends a negative acknowledgment (NAK) to the sender, indicating that the data needs
to be retransmitted.
● Reliable: Stop and Wait ARQ ensures that the data is transmitted reliably and in
order. The receiver cannot move on to the next data packet until it receives the current
one. This ensures that the data is received in the correct order and eliminates the
possibility of data corruption.
● Flow Control: Stop and Wait ARQ can be used for flow control, where the receiver
can control the rate at which the sender transmits data. This is useful in situations
where the receiver has limited buffer space or processing power.
● Backward Compatibility: Stop and Wait ARQ is compatible with many existing
systems and protocols, making it a popular choice for communication over unreliable
channels.
Disadvantages of Stop and Wait ARQ :
● Low Efficiency: Stop and Wait ARQ has low efficiency as it requires the sender to
wait for an acknowledgment from the receiver before sending the next data packet.
This results in a low data transmission rate, especially for large data sets.
● High Latency: Stop and Wait ARQ introduces additional latency in the transmission
of data, as the sender must wait for an acknowledgment before sending the next
packet. This can be a problem for real-time applications such as video streaming or
online gaming.
● Limited Bandwidth Utilization: Stop and Wait ARQ does not utilize the available
bandwidth efficiently, as the sender can transmit only one data packet at a time. This
results in underutilization of the channel, which can be a problem in situations where
the available bandwidth is limited.
● Limited Error Recovery: Stop and Wait ARQ has limited error recovery
capabilities. If a data packet is lost or corrupted, the sender must retransmit the entire
packet, which can be time-consuming and can result in further delays.
● Vulnerable to Channel Noise: Stop and Wait ARQ is vulnerable to channel noise,
which can cause errors in the transmitted data. This can result in frequent
retransmissions and can impact the overall efficiency of the protocol.

2. Go‐Back‐N ARQ
Go Back N ARQ stands for Go Back N Automatic Repeat Request. It is a data link layer
protocol used for data flow control. It is a sliding window protocol in which multiple frames are
sent from sender to receiver at once; the number of frames sent at once depends on the size of
the window that is taken.
Go Back N ARQ is a sliding window protocol which is used for flow control purposes. Multiple
frames present in a single window are sent together from sender to receiver.
Pipelining is allowed in the Go Back N ARQ protocol. Pipelining means sending a frame before
receiving the acknowledgment for the previously sent frame.

Sender Window and Receiver Window in Go-Back-N ARQ Protocol


The sender window is a fixed-size window that defines the number of frames that are transmitted
from sender to receiver at once. The integer 'N' in Go Back 'N' is the sender window size.
For example in Go Back 4 ARQ, the size of the sender window is 4.

The Receiver window in Go Back N ARQ protocol is always of size 1. This means that the receiver
takes at most 1 frame at a single time.

Working of Go-Back-N ARQ Protocol


Given below are the steps to clearly explain how the Go Back N ARQ algorithm works.

1. Data packets are divided into multiple frames. Each frame contains information about the
destination address, the error control mechanism it follows, etc. These multiple frames are
numbered so that they can be distinguished from each other.

2. The integer 'N' in Go Back 'N' ARQ tells us the size of the window i.e. the number of frames
that are sent at once from sender to receiver. Suppose the window size 'N' is equal to 4. Then, 4
frames (frame 0, frame 1, frame 2, and frame 3) will be sent first from sender to receiver.
3. Receiver sends the acknowledgment of frame 0. Then the sliding window moves by one and
frame 4 is sent.

4. Receiver sends the acknowledgment of frame 1. Then the sliding window moves by one and
frame 5 is sent.
5. The sender waits for the acknowledgment for some fixed amount of time. If the sender does not
get the acknowledgment for a frame in the time, it considers the frame to be corrupted. Then the
sliding window moves to the starting of the corrupted frame and all the frames in the window are
retransmitted.

For example, if the sender does not receive the acknowledgment for frame 2, it retransmits all the
frames in the windows i.e. frames [2, 3, 4, 5].
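The retransmission behaviour above can be sketched with a toy Go-Back-N sender (an illustrative model; real implementations use timers and cumulative ACKs, and the loss set here is a hypothetical input that simply marks which first transmissions fail):

```python
def go_back_n(frames, window, lost):
    """Toy Go-Back-N sender. 'lost' is the set of frame numbers whose
    first transmission is dropped; on a miss the sender goes back to
    that frame and resends the whole window from it. Returns the
    sequence of frame numbers actually put on the wire."""
    log, base = [], 0
    dropped = set(lost)
    while base < len(frames):
        # transmit every frame in the current window [base, base+window)
        for i in range(base, min(base + window, len(frames))):
            log.append(i)
        # find the first frame in the window that was not acknowledged
        for i in range(base, min(base + window, len(frames))):
            if i in dropped:
                dropped.discard(i)   # its retransmission will succeed
                base = i             # go back to the lost frame
                break
        else:
            base = min(base + window, len(frames))  # whole window ACKed
    return log

# window 4, frame 2 lost once: frames [2, 3, 4, 5] are sent again,
# matching the example in the text
log = go_back_n(list(range(7)), window=4, lost={2})
assert log == [0, 1, 2, 3, 2, 3, 4, 5, 6]
```

Note how frame 3 is transmitted twice even though its first copy arrived intact; that wasted bandwidth is the inefficiency Selective Repeat is designed to remove.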

Characteristics of Go-Back-N ARQ


Given below are the characteristics of the Go-Back-N ARQ protocol.
1. The size of the sender window in Go Back N ARQ is equal to N.
2. The size of the receiver window in Go Back N ARQ is equal to 1.
3. When the acknowledgment for one frame is not received by the sender or the frames
received by the receiver are out of order, then the whole window starting from the corrupted
frame is retransmitted.
4. Go-Back-N ARQ follows the principle of pipelining i.e. a frame can be sent by the sender
before receiving the acknowledgment of the previously sent frame.

Advantages of Go-Back-N ARQ


Given below are some of the advantages of Go Back N ARQ.
1. It can send multiple frames at once.
2. Pipelining is present in the Go-Back-N ARQ i.e. a frame can be sent by the sender before
receiving the acknowledgment of the previously sent frame. This results in a lesser waiting
time for the frame.
3. It handles corrupted as well as out-of-order frames which result in minimal frame loss.

Disadvantages of Go-Back-N ARQ


Given below are some of the disadvantages of Go Back N ARQ.
1. If acknowledgment for a frame is not received, the whole window of frames is
retransmitted instead of just the corrupted frame. This makes the Go Back N ARQ protocol
inefficient.
2. Retransmission of all the frames on detecting a corrupted frame increases channel
congestion and also increases the bandwidth requirement.
3. It is more time-consuming because while retransmitting the frames on detecting a corrupted
frame, the error-free frames are also transmitted.


Selective Repeat ARQ,

Selective Repeat ARQ


Selective Repeat ARQ is also known as Selective Repeat Automatic Repeat Request. It is a
data link layer protocol that uses the sliding window method. The Go-Back-N ARQ protocol works
well when there are few errors, but if frames are frequently corrupted, a lot of bandwidth is wasted
resending whole windows of frames. So we use the Selective Repeat ARQ protocol. In this
protocol, the size of the sender window is always equal to the size of the receiver window, and
the window size is always greater than 1.
If the receiver receives a corrupt frame, it does not simply discard it silently: it sends a negative
acknowledgment to the sender, and the sender retransmits that frame as soon as the negative
acknowledgment is received. There is no waiting for any time-out to send that frame. The design
of the Selective Repeat ARQ protocol is shown below.
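A toy Selective Repeat receiver makes the contrast with Go-Back-N concrete: out-of-order frames are buffered rather than discarded, so only the missing frame itself needs to be resent (an illustrative sketch, not a full protocol with NAKs and timers):

```python
def selective_repeat_receiver(arrivals):
    """Toy Selective Repeat receiver: buffers out-of-order frames and
    delivers them in order once the gap is filled. 'arrivals' is the
    order in which frame numbers actually arrive."""
    buffer, delivered, expected = {}, [], 0
    for seq in arrivals:
        buffer[seq] = True
        while expected in buffer:     # deliver any in-order run
            delivered.append(expected)
            del buffer[expected]
            expected += 1
    return delivered

# frame 1 is lost on its first transmission and arrives only after
# frames 2 and 3; frames 2 and 3 are buffered, not re-sent
# (under Go-Back-N they would all have been retransmitted)
assert selective_repeat_receiver([0, 2, 3, 1]) == [0, 1, 2, 3]
```

The price of this efficiency is a receiver that must buffer up to a full window of frames, which is why the receiver window in Selective Repeat equals the sender window instead of being 1.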

4 Example data link protocols

Data Link Layer protocols are generally responsible for ensuring and confirming that the bits and
bytes received are identical to the bits and bytes transmitted. A data link protocol is basically a
set of specifications used to implement the data link layer just above the physical layer of the
Open Systems Interconnection (OSI) model.

Some Common Data Link Protocols :


There are various data link protocols that are required for Wide Area Network (WAN) and modem
connections. Logical Link Control (LLC) is a data link protocol for Local Area Networks (LANs).
Some of the data link protocols are given below:
Synchronous Data Link Control (SDLC) –
SDLC is a computer communication protocol designed and developed by IBM in 1975. It supports
multipoint links as well as error recovery and error correction. It is usually used to carry SNA
(Systems Network Architecture) traffic and is the precursor to HDLC. It is used to connect remote
devices to mainframe computers at central locations, in point-to-point (one-to-one) or
point-to-multipoint (one-to-many) configurations, and to make sure that data units arrive correctly
and with the right flow from one network point to the next.

High-Level Data Link Control (HDLC) –


HDLC is a protocol that is now considered an umbrella under which many Wide Area protocols
sit. It has also been adopted as part of the X.25 network. It was standardized by ISO in 1979 and
is generally based on SDLC. It provides both best-effort unreliable service and reliable service.
HDLC is a bit-oriented protocol that is applicable to both point-to-point and multipoint
communications.

Serial Line Internet Protocol (SLIP) –


SLIP is an older protocol that simply adds a framing byte at the end of each IP packet. It is a data
link control facility for transferring IP packets, usually between an Internet Service Provider (ISP)
and a home user over a dial-up link. It is an encapsulation of TCP/IP especially designed to work
over serial ports and router connections. It has some limitations: for example, it provides no
mechanisms for error detection or error correction.
Point to Point Protocol (PPP) –
PPP is a protocol that provides the same basic functionality as SLIP, but it is more robust and can
transport other types of packets in addition to IP packets. It can be used on dial-up and leased
router-to-router lines. It provides a framing method to delimit frames, and it is a character-oriented
protocol that also performs error detection. PPP comprises two sub-protocols, LCP and NCP:
LCP is used for bringing lines up, negotiating options, and bringing lines down, whereas NCP is
used for negotiating network-layer protocols. PPP can be used on the same serial interfaces as
HDLC.

Link Control Protocol (LCP) –


LCP is a PPP protocol used for establishing, configuring, testing, maintaining, and terminating
links for the transmission of data frames. (It should not be confused with Logical Link Control,
LLC, defined by IEEE 802.2, which provides HDLC-style services on a LAN.)

Link Access Procedure (LAP) –


LAP protocols are data link layer protocols used for framing and transferring data across
point-to-point links. They also include some reliability features. There are basically three variants
of LAP: LAPB (Link Access Procedure, Balanced), LAPD (Link Access Procedure, D-Channel),
and LAPF (Link Access Procedure, Frame-Mode Bearer Services). LAP originated from IBM's
SDLC, which IBM submitted to the ISO for standardization.

Network Control Protocol (NCP) –


In PPP, NCP refers to a family of Network Control Protocols: one NCP is available for each
higher-layer protocol supported by PPP, and it is used to negotiate network-layer options. (A
different, older protocol also named NCP was implemented on the ARPANET; it allowed users to
access computers and devices at remote locations and to transfer files between computers, and
it was replaced by TCP/IP in the 1980s.)

5 Medium Access Sub layer

The following diagram depicts the position of the MAC layer −


Functions of MAC Layer

● It provides an abstraction of the physical layer to the LLC and upper layers of the OSI
network.
● It is responsible for encapsulating frames so that they are suitable for transmission via the
physical medium.
● It resolves the addressing of source station as well as the destination station, or groups of
destination stations.
● It performs multiple access resolutions when more than one data frame is to be
transmitted. It determines the channel access methods for transmission.
● It also performs collision resolution and initiates retransmission in case of collisions.
● It generates the frame check sequences and thus contributes to protection against
transmission errors.

MAC Addresses

MAC address or media access control address is a unique identifier allotted to a network interface
controller (NIC) of a device. It is used as a network address for data transmission within a network
segment like Ethernet, Wi-Fi, and Bluetooth.
A MAC address is assigned to a network adapter at the time of manufacturing; it is hardwired or
hard-coded into the network interface card (NIC). A MAC address comprises six groups of two
hexadecimal digits, separated by hyphens, colons, or no separators. An example of a MAC
address is 00:0A:89:5B:F0:11.
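Since a MAC address may be written with hyphens, colons, or no separators at all, a small helper can normalize the notation (a hypothetical utility for illustration; `normalize_mac` is not a standard library function):

```python
import re

def normalize_mac(mac: str) -> str:
    """Accept a MAC written with hyphens, colons, or no separators and
    return the canonical colon-separated lowercase form."""
    digits = re.sub(r"[^0-9a-fA-F]", "", mac)   # keep only hex digits
    if len(digits) != 12:                       # 48 bits = 12 hex digits
        raise ValueError(f"not a 48-bit MAC address: {mac!r}")
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2)).lower()

assert normalize_mac("00-0A-89-5B-F0-11") == "00:0a:89:5b:f0:11"
assert normalize_mac("000A895BF011") == "00:0a:89:5b:f0:11"
```

Normalizing to one canonical form is useful whenever MAC addresses are compared or used as dictionary keys, since the three notations all denote the same 48-bit identifier.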

6 Channel allocation problem

Channel allocation is a process in which a single channel is divided and allotted to multiple users
in order to carry user-specific tasks. The number of users may vary every time the process takes
place. If there are N users and the channel is divided into N equal-sized sub-channels, each user
is assigned one portion. If the number of users is small and does not vary over time, then
Frequency Division Multiplexing can be used, as it is a simple and efficient channel bandwidth
allocation technique.

Channel allocation problem can be solved by two schemes: Static Channel Allocation in LANs and MANs,
and Dynamic Channel Allocation. These are explained as following below.

Static Channel Allocation in LANs and MANs: This is the classical or traditional approach of
allocating a single channel among multiple competing users using Frequency Division
Multiplexing (FDM). If there are N users, the frequency channel is divided into N equal-sized
portions (bandwidth), each user being assigned one portion. Since each user has a private
frequency band, there is no interference between users. However, it is not efficient to divide the
channel into a fixed number of chunks.

T = 1 / (μC − λ)

T(FDM) = 1 / (μ(C/N) − λ/N) = N · T
Where,

T = mean time delay,

C = capacity of the channel,
λ = arrival rate of frames,
1/μ = average frame length in bits (so μC is the service rate in frames/sec),
N = number of subchannels,
T(FDM) = mean time delay under Frequency Division Multiplexing

Dividing the channel statically thus multiplies the mean delay by N.
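The N-fold delay penalty of static FDM can be checked numerically. The sketch below assumes the standard M/M/1-style delay model T = 1/(μC − λ), with illustrative parameter values (a 100 Mbps channel, 10,000-bit frames, 5,000 frames/s):

```python
def mean_delay(mu: float, capacity: float, arrival_rate: float) -> float:
    """Mean delay T = 1 / (mu*C - lambda) on a single shared channel."""
    return 1.0 / (mu * capacity - arrival_rate)

def fdm_delay(mu: float, capacity: float, arrival_rate: float, n: int) -> float:
    """Mean delay when the channel is split into N static FDM subchannels."""
    return 1.0 / (mu * (capacity / n) - arrival_rate / n)

# C = 100 Mbps, 1/mu = 10,000 bits/frame, lambda = 5,000 frames/s, N = 10
mu, C, lam, N = 1 / 10_000, 100e6, 5_000, 10
print(f"single channel: {mean_delay(mu, C, lam) * 1e6:.1f} us")
print(f"static FDM (N={N}): {fdm_delay(mu, C, lam, N) * 1e6:.1f} us")  # N times worse
```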

Dynamic Channel Allocation: Possible assumptions include:


● Station Model: Assumes that each of the N stations independently produces frames. The probability
of a frame being generated in an interval of length Δt is λΔt, where λ is the constant arrival rate of new frames.
● Single Channel Assumption: A single channel is available for all communication; all stations are
equivalent and can send and receive on that channel.
● Collision Assumption: If two frames overlap in time, a collision occurs. Any collision is an
error, and both frames must be retransmitted. Collisions are the only possible errors.
● Time can be slotted or continuous.
● Stations may be able to sense whether the channel is busy before trying to use it.

7 Multiple access protocols


When a sender and receiver have a dedicated link for transmitting data packets, data link control is
enough to handle the channel. But suppose there is no dedicated path for communicating or transferring
data between two devices. In that case, multiple stations access the channel and may transmit data over
it simultaneously, which can cause collisions and crosstalk. Hence, a multiple
access protocol is required to reduce collisions and avoid crosstalk between the channels.
For example, consider a classroom full of students. When a teacher asks a question,
all the students (small channels) start answering at the same time
(transmitting data simultaneously). Because everyone responds at once, the answers overlap and
information is lost. It is therefore the teacher's (multiple access protocol's) responsibility to
manage the students and have them answer one at a time.
The types of multiple access protocols, subdivided into different categories, are as follows:

A. Random Access Protocol

In this protocol, all stations have equal priority to send data over the channel. In a random access
protocol, no station depends on or is controlled by another station.
Depending on the channel's state (idle or busy), each station transmits its data frame. However, if more
than one station sends data over the channel at the same time, a collision or data conflict may occur. Due to the
collision, the data frame packets may be lost or corrupted, and therefore not received correctly at the receiver
end.

Following are the different methods of random-access protocols for broadcasting frames on the channel.

o Aloha
o CSMA
o CSMA/CD
o CSMA/CA

ALOHA Random Access Protocol

It was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium. Using
this method, any station may transmit data across the network whenever it has a data frame ready for
transmission.

Aloha Rules

1. Any station can transmit data to a channel at any time.


2. It does not require any carrier sensing.
3. Collisions may occur, and data frames may be lost, when multiple stations transmit at the same time.
4. Aloha relies on acknowledgment of frames; there is no collision detection.
5. It requires retransmission of data after a random amount of time.

Pure Aloha

Pure Aloha is used whenever a station has data to send over the channel. In pure Aloha, each station
transmits its data on the channel without checking whether the channel is idle, so collisions may occur
and data frames can be lost. After transmitting a data frame on the channel, the station waits for the
receiver's acknowledgment. If no acknowledgment arrives within the specified time, the station assumes
the frame has been lost or destroyed, waits for a random amount of time, called the backoff time (Tb),
and then retransmits the frame, repeating until the data is successfully delivered to the receiver.

1. The total vulnerable time of pure Aloha is 2 * Tfr.


2. Maximum throughput occurs when G = 1/2, i.e., 18.4%.
3. The probability of successful transmission of a data frame is S = G * e ^ (-2G).
As we can see in the figure above, there are four stations accessing a shared channel and transmitting
data frames. Some frames collide because several stations send their frames at the same time; only frame
1.1 and frame 2.2 are successfully delivered to the receiver, while the other frames are lost or destroyed.
Whenever two frames occupy the shared channel at the same time, a collision occurs and both are
damaged: even if only the first bit of a new frame overlaps with the last bit of a frame that has almost
finished, both frames are destroyed and both stations must retransmit their data frames.
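The throughput formula above is easy to check numerically. A small sketch confirming that pure Aloha peaks at G = 0.5 with about 18.4%:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """Pure Aloha: S = G * e^(-2G); the vulnerable time is 2 * Tfr."""
    return G * math.exp(-2 * G)

# Throughput peaks at G = 0.5, giving S = 1/(2e), roughly 18.4%
for G in (0.25, 0.5, 1.0):
    print(f"G={G}: S={pure_aloha_throughput(G):.4f}")
```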

Slotted Aloha

Slotted Aloha was designed to improve on pure Aloha's efficiency, because pure Aloha has a very high
probability of frame collision. In slotted Aloha, the shared channel is divided into fixed time intervals
called slots. A station that wants to send a frame on the shared channel can do so only at
the beginning of a slot, and only one frame may be sent in each slot. If a station misses the beginning
of a slot, it must wait until the beginning of the next slot. However, a collision is still possible if
two or more stations try to send a frame at the beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha when G = 1, i.e., 37%.


2. The probability of successfully transmitting a data frame in slotted Aloha is S = G * e ^ (-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
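Numerically, slotted Aloha's throughput S = G·e^(−G) peaks at G = 1 with about 36.8%, double pure Aloha's 18.4%:

```python
import math

def slotted_aloha_throughput(G: float) -> float:
    """Slotted Aloha: S = G * e^(-G); the vulnerable time is Tfr."""
    return G * math.exp(-G)

# Peak at G = 1: S = 1/e, roughly 36.8%
print(f"S(1.0) = {slotted_aloha_throughput(1.0):.4f}")
```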
CSMA (Carrier Sense Multiple Access)

Carrier sense multiple access (CSMA) is a media access protocol in which a station senses the traffic on
the channel (idle or busy) before transmitting data. If the channel is idle, the station can send its data
on the channel; otherwise, it must wait until the channel becomes idle. This reduces the chance of a
collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared
channel and, if the channel is idle, sends its data immediately. Otherwise, it keeps sensing
the channel continuously and transmits the frame unconditionally as soon as the
channel becomes idle.

Non-Persistent: In this CSMA access mode, each node senses the channel before transmitting
and, if the channel is idle, sends its data immediately. Otherwise, the station waits for a
random time (rather than sensing continuously) and, when the channel is then found
to be idle, transmits its frames.

P-Persistent: This mode combines the 1-persistent and non-persistent modes. Each node
senses the channel and, if the channel is idle, sends a frame with probability p. With
probability q = 1 − p, it defers to the next time slot and repeats the process.

O-Persistent: In the O-persistent method, a transmission order (priority) is assigned to the
stations before any frame is sent on the shared channel. When the channel is found to be idle,
each station waits for its turn in that order to transmit its data.
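The persistence strategies above differ only in what a station does after sensing the channel. The toy function below summarizes the decision logic; the function name and return strings are illustrative only, not a real protocol implementation:

```python
import random

def persistent_decision(mode: str, channel_idle: bool, p: float = 0.5) -> str:
    """Toy decision step for CSMA persistence modes (illustrative only)."""
    if mode not in ("1-persistent", "non-persistent", "p-persistent"):
        raise ValueError(f"unknown mode: {mode}")
    if not channel_idle:
        # Busy channel: non-persistent backs off; the others keep sensing
        if mode == "non-persistent":
            return "wait a random time, then sense again"
        return "keep sensing until idle"
    # Idle channel: p-persistent transmits only with probability p
    if mode == "p-persistent":
        return "transmit" if random.random() < p else "defer to next slot"
    return "transmit immediately"

print(persistent_decision("1-persistent", channel_idle=True))
```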

CSMA/ CD

Carrier sense multiple access with collision detection (CSMA/CD) is a network protocol for transmitting
data frames that operates in the medium access control (MAC) layer. A station first senses the shared channel
and, if the channel is idle, transmits a frame, then checks whether the
transmission was successful. If the frame is successfully received, the station can send the next frame. If a
collision is detected, the station sends a jam signal on the shared channel to
terminate the data transmission, and then waits a random time before attempting to retransmit the frame.
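The "random time" after a collision is conventionally chosen by truncated binary exponential backoff, as in classic Ethernet: after the n-th collision, the station waits k slot times with k drawn uniformly from 0 .. 2^min(n,10) − 1, giving up after 16 attempts. A sketch, assuming the 51.2 µs slot time of 10 Mbps Ethernet:

```python
import random

def backoff_delay(collisions: int, slot_time: float = 51.2e-6) -> float:
    """Truncated binary exponential backoff (Ethernet CSMA/CD style).

    After the n-th collision, wait k * slot_time where k is uniform in
    0 .. 2^min(n, 10) - 1; the frame is dropped after 16 attempts.
    """
    if collisions > 16:
        raise RuntimeError("too many collisions: frame dropped")
    k = random.randrange(2 ** min(collisions, 10))
    return k * slot_time

print(f"after 3rd collision: wait {backoff_delay(3) * 1e6:.1f} us")
```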

CSMA/ CA
Carrier sense multiple access with collision avoidance (CSMA/CA) is a network protocol for the
transmission of data frames, and it too operates in the medium access control layer. When a station
sends a data frame on the channel, it listens to the channel to check whether the transmission was clear.
If the station receives back only its own signal, the data frame has been successfully transmitted to the
receiver. But if it receives two signals (its own plus one from another station), the frames have collided
on the shared channel; this is how the sender detects a collision.

Following are the methods used in the CSMA/ CA to avoid the collision:

Interframe space: In this method, the station waits for the channel to become idle, and when it finds
the channel idle, it does not send the data immediately. Instead it waits for a period of time
called the interframe space (IFS). The IFS duration is also used to
define the priority of a station.

Contention window: In the contention window method, the total time is divided into slots.
When a station is ready to transmit a data frame, it chooses a random number of slots as its
wait time. If the channel becomes busy during the wait, the station does not restart the entire
process; it pauses its timer and resumes it when the channel becomes idle again, transmitting
when the timer expires.

Acknowledgment: In the acknowledgment method, the sender retransmits the data frame if a
positive acknowledgment is not received before its timer expires.
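The contention-window idea can be sketched as follows, assuming 802.11 DCF-style parameters (CWmin = 15, CWmax = 1023, window doubling on each retry) purely for illustration:

```python
import random

def ca_backoff_slots(retries: int, cw_min: int = 15, cw_max: int = 1023) -> int:
    """Pick a random backoff slot count from a contention window that
    doubles with each retry, capped at cw_max (802.11-style values)."""
    cw = min((cw_min + 1) * (2 ** retries) - 1, cw_max)
    return random.randint(0, cw)

# First attempt draws from 0..15; each retry roughly doubles the window
for r in range(4):
    print(f"retry {r}: window 0..{min(16 * 2 ** r - 1, 1023)}, drew {ca_backoff_slots(r)}")
```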

B. Controlled Access Protocol

It is a method of reducing data frame collisions on a shared channel. In the controlled access method, the
stations consult one another, and a data frame is sent by a particular station only when it is authorized by
the other stations. In other words, no station can send data frames unless it has been granted the right to do so. It
has three types of controlled access: Reservation, Polling, and Token Passing.

C. Channelization Protocols

Channelization protocols allow the total usable bandwidth of a shared channel to be shared among
multiple stations on the basis of time, frequency, or code. All stations can thereby access the channel at
the same time to send their data frames.

Following are the various methods of accessing the channel based on time, frequency, and code:

1. FDMA (Frequency Division Multiple Access)


2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)

FDMA

In frequency division multiple access (FDMA), the available bandwidth is divided into equal frequency
bands so that multiple users can send data at the same time, each through a different subchannel. Each station
is reserved a particular band, which prevents crosstalk between channels and interference between
stations.

TDMA

Time Division Multiple Access (TDMA) is a channel access method that allows the same frequency
band to be shared among multiple stations. To avoid collisions on the shared channel, the channel is
divided into time slots, and each station is allocated a slot in which to transmit its data frames; stations
thus share the same frequency band by dividing the signal in time. However, TDMA has a
synchronization overhead: synchronization bits are added to each slot so that every station knows its
own time slot.

CDMA

Code division multiple access (CDMA) is a channel access method in which all stations can
simultaneously send data over the same channel. Each station may transmit its
data frames on the full bandwidth of the shared channel at all times; there is no division of the
channel into time slots or frequency bands. When multiple stations send data on the channel
simultaneously, their data frames are separated by unique code sequences: each station has its own
unique code for transmitting data over the shared channel. For example, imagine a room full of people
all speaking at once. A listener can follow a particular conversation only if the two people are speaking
a language the listener knows. Similarly, in the network, different stations can communicate
simultaneously, each using its own code "language".
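The code-separation idea can be demonstrated with two stations using orthogonal chip sequences (2-chip Walsh codes). This is a toy illustration of the principle, not a real CDMA implementation:

```python
def cdma_demo():
    # Orthogonal chip sequences (2x2 Walsh codes) -- one per station
    c1, c2 = [1, 1], [1, -1]
    d1, d2 = 1, -1                 # data bits: +1 means '1', -1 means '0'
    # Each station spreads its bit over its code; the channel adds the signals
    channel = [d1 * a + d2 * b for a, b in zip(c1, c2)]
    # Receiver despreads with the target station's code (normalized dot product)
    r1 = sum(s * a for s, a in zip(channel, c1)) // len(c1)
    r2 = sum(s * b for s, b in zip(channel, c2)) // len(c2)
    return r1, r2

print(cdma_demo())  # (1, -1): each bit recovered despite simultaneous sending
```

Because the codes are orthogonal (their dot product is zero), despreading with one station's code cancels the other station's contribution exactly.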

8 IEEE standard 802.3 LANS/WLANs


Ethernet is a set of technologies and protocols used primarily in LANs. It was first standardized
in the 1980s as IEEE 802.3. IEEE 802.3 defines the physical layer and the medium access control
(MAC) sublayer of the data link layer for wired Ethernet networks. Ethernet is classified into two
categories: classic Ethernet and switched Ethernet.
Classic Ethernet is the original form of Ethernet, providing data rates between 3 and 10 Mbps. The
varieties are commonly referred to as 10BASE-X. Here, 10 is the maximum data rate, i.e. 10 Mbps, BASE
denotes the use of baseband transmission, and X is the type of medium used. Most varieties of classic
Ethernet have become obsolete in modern communication.
Switched Ethernet uses switches to connect the stations in the LAN. It replaces the repeaters used
in classic Ethernet and allows full bandwidth utilization.

IEEE 802.3 Popular Versions

There are a number of versions of IEEE 802.3 protocol. The most popular ones are -
● IEEE 802.3: This was the original standard given for 10BASE-5. It used a thick single coaxial
cable into which a connection can be tapped by drilling into the cable to the core. Here,
10 is the maximum throughput, i.e. 10 Mbps, BASE denoted use of baseband transmission,
and 5 refers to the maximum segment length of 500m.
● IEEE 802.3a: This gave the standard for thin coax (10BASE-2), which is a thinner variety
where the segments of coaxial cables are connected by BNC connectors. The 2 refers to
the maximum segment length of about 200m (185m to be precise).
● IEEE 802.3i: This gave the standard for twisted pair (10BASE-T) that uses unshielded
twisted pair (UTP) copper wires as physical layer medium. The further variations were
given by IEEE 802.3u for 100BASE-TX, 100BASE-T4 and 100BASE-FX.
● IEEE 802.3j: This gave the standard for Ethernet over fiber (10BASE-F), which uses fiber optic
cables as the medium of transmission.

9 IEEE standard 802.11 for LANS/WLANs


IEEE 802.11 standard, popularly known as WiFi, lays down the architecture and specifications of
wireless LANs (WLANs). WiFi or WLAN uses high frequency radio waves for connecting the
nodes.
There are several standards of IEEE 802.11 WLANs. The prominent among them are 802.11,
802.11a, 802.11b, 802.11g, 802.11n and 802.11p. All the standards use carrier-sense multiple
access with collision avoidance (CSMA/CA). They also support both centralised base-station-based
and ad hoc modes of operation.

IEEE 802.11
IEEE 802.11 was the original version released in 1997. It provided 1 Mbps or 2 Mbps data rate in
the 2.4 GHz band and used either frequency-hopping spread spectrum (FHSS) or direct-sequence
spread spectrum (DSSS). It is obsolete now.
IEEE 802.11a
802.11a was published in 1999 as a modification to 802.11, with orthogonal frequency division
multiplexing (OFDM) based air interface in physical layer instead of FHSS or DSSS of 802.11.
It provides a maximum data rate of 54 Mbps operating in the 5 GHz band. Besides, it provides
an error-correcting code. As the 2.4 GHz band is crowded, the relatively sparsely used 5 GHz band
gives 802.11a an additional advantage.
Further amendments to 802.11a are 802.11ac, 802.11ad, 802.11af, 802.11ah, 802.11ai, 802.11aj
etc.
IEEE 802.11b
802.11b is a direct extension of the original 802.11 standard that appeared in early 2000. It uses
the same modulation technique as 802.11, i.e. DSSS, and operates in the 2.4 GHz band. It has a
higher data rate of 11 Mbps compared to the 2 Mbps of 802.11, due to which it was rapidly adopted
in wireless LANs. However, since the 2.4 GHz band is quite crowded, 802.11b devices face
interference from other devices.
Further amendments to 802.11b are 802.11ba, 802.11bb, 802.11bc, 802.11bd and 802.11be.
IEEE 802.11g
802.11g was ratified in 2003. It operates in the 2.4 GHz band (as does 802.11b) and provides an
average throughput of 22 Mbps. It uses the OFDM technique (as in 802.11a) and is fully backward
compatible with 802.11b. 802.11g devices also face interference from other devices operating in the
2.4 GHz band.
IEEE 802.11n
802.11n was approved and published in 2009. It operates on both the 2.4 GHz and 5 GHz
bands and has a variable data rate ranging from 54 Mbps to 600 Mbps. It provides a marked
improvement over previous 802.11 standards by incorporating multiple-input multiple-output
(MIMO) antennas.
IEEE 802.11p
802.11p is an amendment that adds wireless access in vehicular environments (WAVE) to
support Intelligent Transportation Systems (ITS). It covers network communication between
vehicles moving at high speed and between vehicles and the roadside environment. It has a data
rate of 27 Mbps and operates in the 5.9 GHz band.

10 Network devices-repeaters, hubs, Bridge, Switches and Routers

1. Repeater: Functions at the physical layer. A repeater is an electronic device that
receives a signal and retransmits it at a higher level and/or higher power, or onto the other
side of an obstruction, so that the signal can cover longer distances. Repeaters have only two
ports, so they cannot be used to connect more than two devices.
2. Hub: An Ethernet hub, active hub, network hub, repeater hub, hub or concentrator is a
device for connecting multiple twisted pair or fiber optic Ethernet devices together and
making them act as a single network segment. Hubs work at the physical layer (layer 1)
of the OSI model. The device is a form of multiport repeater. Repeater hubs also
participate in collision detection, forwarding a jam signal to all ports if it detects a
collision.
3. Switch: A network switch or switching hub is a computer networking device that
connects network segments. The term commonly refers to a network bridge that
processes and routes data at the data link layer (layer 2) of the OSI model. Switches that
additionally process data at the network layer (layer 3 and above) are often referred to as
Layer 3 switches or multilayer switches.
4. Bridge: A network bridge connects multiple network segments at the data link layer
(Layer 2) of the OSI model. In Ethernet networks, the term bridge formally means a
device that behaves according to the IEEE 802.1D standard. A bridge and a switch are
very much alike; a switch being a bridge with numerous ports. Switch or Layer 2 switch
is often used interchangeably with bridge. Bridges can analyse incoming data packets to
determine if the bridge is able to send the given packet to another segment of the
network.
5. Router: A router is an electronic device that interconnects two or more computer
networks, and selectively interchanges packets of data between them. Each data packet
contains address information that a router can use to determine if the source and
destination are on the same network, or if the data packet must be transferred from one
network to another. Where multiple routers are used in a large collection of
interconnected networks, the routers exchange information about target system addresses,
so that each router can build up a table showing the preferred paths between any two
systems on the interconnected networks.
6. Gate Way: In a communications network, a network node equipped for interfacing
with another network that uses different protocols.
● A gateway may contain devices such as protocol translators, impedance matching
devices, rate converters, fault isolators, or signal translators as necessary to
provide system interoperability. It also requires the establishment of mutually
acceptable administrative procedures between both networks.
● A protocol translation/mapping gateway interconnects networks with different
network protocol technologies by performing the required protocol conversions.
Controlled Access Protocols
Controlled Access Protocols (CAPs) in computer networks control how data packets are sent
over a common communication medium. These protocols ensure that data is transmitted
efficiently, without collisions, and with little interference from other data transmissions.
What is the Controlled Access?
In controlled access, the stations consult one another to determine which station has the right
to send. Only one node is allowed to send at a time, to avoid collision of messages on the shared
medium. The three controlled-access methods are:
 Reservation
 Polling
 Token Passing
1. Reservation
 In the reservation method, a station needs to make a reservation before sending data.
 The timeline has two kinds of periods:
o Reservation interval of fixed time length
o Data transmission period of variable frames.
 If there are N stations, the reservation interval is divided into N slots, and each station
has one slot.
 Suppose if station 1 has a frame to send, it transmits 1 bit during the slot 1. No other
station is allowed to transmit during this slot.
 In general, the ith station may announce that it has a frame to send by inserting a 1 bit into
the ith slot. After all the slots have been checked, each station knows which stations wish to
transmit.
 The stations which have reserved their slots transfer their frames in that order.
 After data transmission period, next reservation interval begins.
 Since everyone agrees on who goes next, there will never be any collisions.
The following figure shows a situation with five stations and a five-slot reservation frame. In
the first interval, only stations 1, 3, and 4 have made reservations. In the second interval, only
station 1 has made a reservation.
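The reservation interval behaves like a bitmap: each station sets its own bit, and every station reads the result to learn the transmission order. A minimal sketch reproducing the five-station example above:

```python
def reservation_order(requests):
    """Given the reservation-interval bitmap (one bit per station, in slot
    order), return the station numbers that will transmit, in order."""
    return [i for i, bit in enumerate(requests, start=1) if bit]

# Five stations; stations 1, 3, and 4 set their bits (first interval above)
print(reservation_order([1, 0, 1, 1, 0]))  # [1, 3, 4]
# Second interval: only station 1 reserves
print(reservation_order([1, 0, 0, 0, 0]))  # [1]
```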
 Advantages of Reservation

 The main advantage of reservation is that the maximum and minimum data-access times and
rates on the channel are fixed and can be predicted easily.
 Priorities can be set to provide speedier access for some secondary stations.
 Reservation-based access methods can provide predictable network performance,
which is important in applications where latency and jitter must be minimized, such as
in real-time video or audio streaming.
 Reservation-based access methods can reduce contention for network resources, as
access to the network is pre-allocated based on reservation requests. This can improve
network efficiency and reduce packet loss.
 Reservation-based access methods can support QoS requirements, by providing
different reservation types for different types of traffic, such as voice, video, or data.
This can ensure that high-priority traffic is given preferential treatment over lower-
priority traffic.
 Reservation-based access methods can enable more efficient use of available
bandwidth, as they allow for time and frequency multiplexing of different reservation
requests on the same channel.
 Reservation-based access methods are well-suited to support multimedia applications
that require guaranteed network resources, such as bandwidth and latency, to ensure
high-quality performance.

 Disadvantages of Reservation

 High dependence on the reliability of the controller.


 Capacity and channel data rate decrease under light loads, and turnaround time
increases.
2. Polling
 Polling process is similar to the roll-call performed in class. Just like the teacher, a
controller sends a message to each node in turn.
 In this, one acts as a primary station(controller) and the others are secondary stations.
All data exchanges must be made through the controller.
 The message sent by the controller contains the address of the node being selected for
granting access.
 Although all nodes receive the message, only the addressed one responds and sends data,
if any. If there is no data, a "poll reject" (NAK) message is usually sent back.
 Problems include high overhead of the polling messages and high dependence on the
reliability of the controller.
 Advantages of Polling

 The maximum and minimum access times and data rates on the channel are fixed and
predictable.
 Efficiency is high.
 Bandwidth utilization is high.
 No slot is wasted in polling.
 Priorities can be assigned to ensure faster access for some secondary stations.

 Disadvantages of Polling

 It consumes more time.


 Link sharing is controlled by the primary station, so it may be biased.
 Stations may be polled even when they have no data to send, wasting poll cycles.
 An increase in turnaround time leads to a drop in the channel data rate under
low loads.
Efficiency
Let Tpoll be the time for polling and Tt be the time required for transmission of data. Then,
Efficiency = Tt/(Tt + Tpoll)
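The efficiency formula above can be evaluated directly. For instance, with an illustrative 1 ms transmission time and 0.1 ms polling overhead:

```python
def polling_efficiency(t_transmit: float, t_poll: float) -> float:
    """Efficiency = Tt / (Tt + Tpoll): the fraction of channel time
    spent on useful data rather than polling overhead."""
    return t_transmit / (t_transmit + t_poll)

# Tt = 1 ms, Tpoll = 0.1 ms: about 91% of channel time carries data
print(f"{polling_efficiency(1.0e-3, 0.1e-3):.3f}")
```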

3. Token Passing
 In the token passing scheme, the stations are logically connected to each other in the form of a
ring, and access to the channel is governed by a token.
 A token is a special bit pattern or small message that circulates from one station to
the next in a predefined order.
 In a token ring, the token is passed from one station to the adjacent station in the ring,
whereas in a token bus, each station uses the bus to send the token to the next
station in a predefined order.
 In both cases, the token represents permission to send. If a station has a frame queued for
transmission when it receives the token, it can send that frame before it passes the token
to the next station. If it has no queued frame, it simply passes the token along.
 After sending a frame, each station must wait for all N stations (including itself) to send
the token to their neighbours and the other N – 1 stations to send a frame, if they have
one.
 Problems such as token duplication, token loss, and the insertion or removal of a
station must be handled for correct and reliable operation of this scheme.

The performance of a token ring can be characterized by two parameters:


 Delay is a measure of the time between when a packet is ready and when it is delivered.
The average time (delay) required to send the token to the next station is a/N.
 Throughput, which is a measure of successful traffic.
Throughput, S = 1/(1 + a/N) for a<1
and
S = 1/{a(1 + 1/N)} for a>1.
where N = number of stations
a = Tp/Tt
(Tp = propagation delay and Tt = transmission delay)
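The throughput formulas above can be evaluated directly; for example, for a = 0.5 (propagation delay half the transmission delay) and N = 10 stations:

```python
def token_ring_throughput(a: float, n: int) -> float:
    """Token ring throughput: S = 1/(1 + a/N) for a < 1,
    else S = 1/(a*(1 + 1/N)), where a = Tp/Tt."""
    if a < 1:
        return 1 / (1 + a / n)
    return 1 / (a * (1 + 1 / n))

print(f"a=0.5, N=10: S = {token_ring_throughput(0.5, 10):.3f}")
print(f"a=2.0, N=10: S = {token_ring_throughput(2.0, 10):.3f}")
```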
 Advantages of Token passing

 It includes built-in fault-management features such as beaconing and automatic ring
reconfiguration.
 It provides good throughput under high-load conditions.

 Disadvantages of Token passing

 It is expensive.


 Topology components are more expensive than those of other, more widely used
standards.
 Token ring hardware is complex, which tends to tie a deployment to a single
manufacturer's equipment.
 HDLC

HDLC is a synchronous data link layer protocol designed for reliable point-to-point and
multipoint communication. HDLC is a bit-oriented protocol, meaning frames are interpreted as a
sequence of bits rather than characters. It provides error detection, correction, flow control, and
multiplexing capabilities. It is commonly used in WAN environments and on leased lines.

 Features of HDLC

 Error Detection and Correction: HDLC uses error detection and correction mechanisms such
as CRCs and acknowledgments to ensure the integrity of the transmitted data.
 Full-Duplex Communication: HDLC supports full-duplex communication, which allows data
to be transmitted in both directions simultaneously.
 Multiplexing: HDLC supports multiplexing, which enables multiple data streams to be
transmitted over a single communication channel.
 Efficiency: HDLC uses efficient bandwidth utilization techniques, such as sliding windows, to
optimize data transmission.

 PPP

PPP is a versatile data link layer protocol that can be used for both synchronous and
asynchronous transmission over point-to-point links only. PPP is a byte-oriented protocol,
meaning frames are interpreted as a sequence of bytes (characters). It offers features like error
detection, correction, flow control, and negotiation of various options. PPP is widely used in
dial-up connections, DSL, and other point-to-point links.

 Features of PPP

 Authentication: PPP includes authentication mechanisms such as Password Authentication


Protocol (PAP) and Challenge Handshake Authentication Protocol (CHAP) to ensure secure
communication.
 Error Detection and Correction: PPP uses error detection and correction mechanisms such
as CRCs and acknowledgments to ensure the integrity of the transmitted data.
 Network Layer Protocol Independence: PPP is independent of the network layer protocol
being used, making it compatible with various network protocols.
 Multilink Support: PPP supports multilink connections, which allow multiple physical
connections to be combined to increase the data transmission rate.

 Various fields of Frame are given below:

 Flag field – The PPP frame, like the HDLC frame, always begins and ends with the standard HDLC
flag. It has a fixed value of 1 byte, i.e., the binary value 01111110.
 Address field – The address field contains the broadcast address: all 1s indicate
that all stations may accept the frame. It has a value of 1 byte, i.e., the binary value 11111111.
PPP does not assign individual station addresses.
 Control field – This field uses the format of the U-frame (unnumbered frame) in HDLC.
In HDLC the control field serves various purposes, but in PPP it is set to 1 byte,
i.e., the binary value 00000011, indicating a connectionless data link.
 Protocol field – This field identifies the network protocol of the datagram, i.e., what
exactly is being carried in the data field. It is 1 or 2 bytes long and helps identify the PDU
(Protocol Data Unit) being encapsulated by the PPP frame.
 Data field – It contains the upper-layer datagram. The network layer datagram is
encapsulated in this field for regular PPP data frames. The length of this field is not
constant; it varies.
 Checksum (FCS) field – This field contains the checksum used for error detection. It can be
either 16 or 32 bits in size and is calculated over the address, control, protocol, and
information fields.
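The 16-bit FCS variant is the CRC defined in RFC 1662 (polynomial x^16 + x^12 + x^5 + 1, processed bit-reflected as 0x8408, initial value 0xFFFF, complemented on output). A minimal sketch of how a PPP frame's FCS is computed and verified; the payload bytes are illustrative:

```python
def fcs16(data: bytes, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 register used for the PPP FCS (RFC 1662)."""
    reg = init
    for byte in data:
        reg ^= byte
        for _ in range(8):
            reg = (reg >> 1) ^ 0x8408 if reg & 1 else reg >> 1
    return reg

payload = b"\xff\x03\x00\x21hello"          # address, control, protocol, data
fcs = fcs16(payload) ^ 0xFFFF                # complement before transmission
frame = payload + bytes([fcs & 0xFF, fcs >> 8])   # FCS is sent low byte first
# Receiver check: running the CRC over data + FCS yields the constant 0xF0B8
assert fcs16(frame) == 0xF0B8
print(f"FCS = 0x{fcs:04X}, frame verified")
```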

 Similarities Between HDLC and PPP

 Both are data link layer protocols used for data communication between network devices.
 Both use error detection and correction mechanisms to ensure the integrity of the transmitted
data.
 Both support full-duplex communication.
 Both are widely used in various communication systems and have been standardized by
international organizations.
 Both support bandwidth optimization techniques, such as sliding windows.

 Difference Between HDLC and PPP

Parameters HDLC PPP

Basics HDLC is an ISO-developed bit-oriented PPP is a data link layer communication


code-transparent synchronous data link protocol for establishing a direct link
layer protocol. between two nodes.

Stands for HDLC stands for High-level Data Link PPP stands for Point-to-Point Protocol.
Control.

Protocol type HDLC is a bit-oriented protocol. PPP is a byte-oriented protocol.


Configuration HDLC is implemented by Point-to-point PPP is implemented by Point-to-Point
link configuration and also multi-point configuration only.
link configurations.

Layer HDLC works at layer 2 (Data Link PPP also works at layer 2 (Data Link
Layer). Layer).

Media Type HDLC is used in synchronous media. PPP is used in synchronous media
as well as asynchronous media.

Authentication HDLC does not provide PPP provides authentication using PAP
authentication. (Password Authentication Protocol) and
CHAP (Challenge Handshake
Authentication Protocol).
Compatibility HDLC is not compatible with non-Cisco PPP is compatible with non-Cisco
devices. devices.
Format HDLC protocols have two types- ISO PPP uses a defined format of HDLC.
HDLC and CISCO HDLC.

Cost HDLC is more costly comparatively. PPP is comparatively less costly.


Channel Access Methods:
A multiple-access method shares the available bandwidth of a link among different stations
in time, in frequency, or through codes.
Channelization protocols: FDMA, TDMA & CDMA

1. Frequency Division Multiple Access (FDMA):


FDMA is a channelization protocol in which the available bandwidth is divided into
frequency bands. Each station is allocated one band in which to send data, and that band is
reserved for that station at all times.

The frequency bands of different stations are separated by small bands of unused frequency,
called guard bands, which prevent interference between stations. FDMA is an access method
in the data link layer: the data link layer at each station tells its physical layer to create a
bandpass signal from the data passed to it. The signal is created within the allocated band,
and there is no physical multiplexer at the physical layer.
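The band-and-guard-band arithmetic above can be sketched with a small helper that splits a link's bandwidth into per-station frequency ranges (the function name and parameters are illustrative, not from any standard API):

```python
def fdma_bands(total_bw_hz: float, num_stations: int, guard_hz: float):
    """Divide a link's bandwidth into per-station frequency bands.

    Adjacent bands are separated by guard bands of `guard_hz`;
    the usable width per station is what remains after subtracting
    the (num_stations - 1) guard bands.
    Returns a list of (start_hz, end_hz) tuples, one per station.
    """
    usable = total_bw_hz - (num_stations - 1) * guard_hz
    band = usable / num_stations
    allocations = []
    start = 0.0
    for _ in range(num_stations):
        allocations.append((start, start + band))
        start += band + guard_hz   # skip over the guard band
    return allocations

# 1 MHz shared among 4 stations with 10 kHz guard bands:
for lo, hi in fdma_bands(1_000_000, 4, 10_000):
    print(f"{lo:>9.0f} .. {hi:.0f} Hz")
```

Each station then keeps its band permanently, whether or not it has data to send, which is why FDMA wastes capacity under bursty traffic.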

2. Time Division Multiple Access (TDMA):

TDMA is a channelization protocol in which the bandwidth of the channel is divided among
the stations on a time basis. Each station is assigned a time slot and may transmit data only
during its own slot.

Each station must know the beginning and location of its time slot, so TDMA requires
synchronization between the different stations. It is also an access method in the data link
layer: at each station, the data link layer tells the station to use its allocated time slot.

1. FDMA stands for Frequency Division Multiple Access; TDMA stands for Time Division
Multiple Access.
2. In FDMA, the overall bandwidth is shared among a number of stations; in TDMA, the
channel (e.g., a satellite transponder) is time-shared among the stations.
3. FDMA needs guard bands between adjacent channels; TDMA needs guard times between
adjacent slots.
4. FDMA does not require synchronization; TDMA requires synchronization.
5. Power efficiency is lower in FDMA and higher in TDMA.
6. FDMA requires high carrier-frequency stability; TDMA does not.
7. FDMA is used in systems such as AMPS (Advanced Mobile Phone System); TDMA is
used in systems such as GSM and PDC.

3. Code Division Multiple Access (CDMA)


Code division multiple access (CDMA) is a channel access method in which all stations can
send data simultaneously over the same channel. Each station transmits its data frames over
the full bandwidth of the shared channel at all times; the channel is not divided into
frequency bands or time slots. Instead, the transmissions of different stations are separated
by unique code sequences: each station is assigned its own code for transmitting over the
shared channel. The situation resembles a room full of people speaking simultaneously in
different languages: a listener understands a speaker only if they share a language, and all
other conversations sound like background noise. Similarly, a CDMA receiver recovers one
station's data by applying that station's code, while transmissions using other codes cancel
out.
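The code-separation idea can be sketched with short orthogonal chip sequences (rows of a Walsh/Hadamard matrix). The two-station setup and the 4-chip codes below are illustrative; real systems use much longer sequences:

```python
# Orthogonal chip sequences: their pairwise dot product is zero.
CODES = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
}

def transmit(bits_by_station: dict) -> list:
    """Each station multiplies its data bit (+1 or -1) by its chip
    sequence; the shared channel simply adds the chip streams."""
    length = len(next(iter(CODES.values())))
    channel = [0] * length
    for station, bit in bits_by_station.items():
        for i, chip in enumerate(CODES[station]):
            channel[i] += bit * chip
    return channel

def receive(channel: list, station: str) -> int:
    """Correlate the summed signal with one station's code; bits
    sent with orthogonal codes cancel to zero in the dot product."""
    inner = sum(c * s for c, s in zip(CODES[station], channel))
    return +1 if inner > 0 else -1

# A sends bit +1 and B sends bit -1 at the same instant:
signal = transmit({"A": +1, "B": -1})
print(signal)                 # [0, 2, 0, 2] on the shared channel
print(receive(signal, "A"))   # +1 (A's bit recovered)
print(receive(signal, "B"))   # -1 (B's bit recovered)
```

Because the codes are orthogonal, each receiver's correlation isolates exactly one sender, which is the mathematical counterpart of "understanding only your own language" in the analogy above.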
