Cn2 Notes (Snaped)
UNIT - 2
Data Link Layer: Design issues
o In the OSI model, the data link layer is the 6th layer from the top and the 2nd layer from the bottom.
o The communication channel that connects adjacent nodes is known as a link, and in order to
move a datagram from source to destination, it must be moved across each individual link
along the path.
o The main responsibility of the Data Link Layer is to transfer the datagram across an individual
link.
o The Data link layer protocol defines the format of the packet exchanged across the nodes as
well as the actions such as Error detection, retransmission, flow control, and random access.
o The Data Link Layer protocols are Ethernet, token ring, FDDI and PPP.
o An important characteristic of the Data Link Layer is that a datagram can be handled by different
link-layer protocols on different links in a path. For example, a datagram may be handled by
Ethernet on the first link and by PPP on the second link.
Design issues
1. Services provided to the network layer –
The data link layer acts as a service interface to the network layer. Its principal service is
transferring data from the network layer on the sending machine to the network layer on the
destination machine.
The framing methods commonly discussed under this design issue are −
● Character count
● Starting and ending characters, with character stuffing
● Starting and ending flags, with bit stuffing
3. Flow control –
Flow control prevents the sender from overwhelming the receiver with data frames. The source
machine must not send data frames at a rate faster than the destination machine can accept
them. Even if the transmission is error-free, the receiver may be unable to process frames as
fast as they arrive, so a mechanism is needed by which the receiver can ask the transmitter to
slow down or stop.
Two approaches used are
i) Feedback-based flow control: the receiver sends information back to the sender
granting permission to send more data.
ii) Rate-based flow control: the protocol has a built-in mechanism that limits the rate at
which the sender may transmit, without any feedback from the receiver.
The two categories of flow control are: a. Stop and wait b. Sliding window
4. Error control –
Error control ensures that frames lost or damaged in transit are detected and corrected, and
that duplicate frames are discarded, so that every frame is delivered to the destination safely
and exactly once. The receiver returns a positive or negative acknowledgement for each
incoming frame: a positive acknowledgement means the frame arrived safely, while a negative
acknowledgement means something went wrong and the frame must be retransmitted. A timer
is kept at the sender's end so that a lost frame or lost acknowledgement triggers retransmission,
and a sequence number is assigned to each outgoing frame so that the receiver can easily
recognize a retransmitted duplicate. Error control is one of the main responsibilities of the
data link layer.
The flow control protocols are generally divided into two categories i.e.
1. Stop and wait protocol
2. Sliding window protocol.
The difference between these two categories of flow control protocols is that in the stop and wait
protocol, only one data frame is sent at a time from sender to receiver. While in sliding window
protocol, multiple frames can be sent at a time from sender to receiver.
Sliding window protocol has two types:
1. Go-Back-N ARQ
2. Selective Repeat ARQ
Detail about protocol
1. Simplex Protocol
● The simplest protocol has no flow control or error control mechanism. It is a unidirectional
protocol in which data frames travel in only one direction, from the sender to the receiver.
Its processing time is negligible and can be ignored.
● The transmission channel is completely noiseless (a channel in which no frames are lost,
corrupted, or duplicated).
● The sender and the receiver are always ready to send and receive data.
● The sender sends a sequence of data frames without thinking about the receiver.
● There is no data loss hence no ACK (Acknowledgment) or NACK (Negative
acknowledgment).
● The protocol consists of two procedures: a sender and a receiver. The sender runs in the
source machine's data link layer, while the receiver runs in the destination machine's
data link layer. No sequence numbers or acknowledgments are used.
Design:
● The data link layer at the sender site receives data from its network layer, creates a frame,
and sends it. The data link layer (receiver site) receives a frame from the physical layer,
extracts data from the frame, and sends it to the network layer. The sender and receiver's
data link layers provide communication/transmission services to their network layers. For
the physical movement of bits, the data link levels rely on the services offered by their
physical layers.
Flow Diagram:
This flow diagram depicts a communication scenario using the simplest protocol. It is
quite simple: without regard for the receiver, the sender transmits a succession of frames.
For example, if the sender sends three frames, the receiver receives three frames.
Data frames are represented by slanted boxes, with the height of each box showing the
transmission time between the first and last bit of the frame. Data is transmitted in one
direction only, the transmitter (Tx) and receiver (Rx) are always ready, and the processing
time can be ignored. The protocol assumes infinite buffer space and no errors, that is, no
damaged frames and no lost frames. The Unrestricted Simplex Protocol is diagrammatically
represented as follows –
2. Stop and Wait Protocol
● Stop-and-wait is the most basic retransmission protocol.
● The transmitter (station A) sends a frame over the communication line and waits for
a positive or negative acknowledgment from the receiver (station B).
● Station B sends a positive acknowledgment (ACK) to station A if there is no error in the
transmission.
● The transmitter now begins to send the next frame. If a frame is received with errors at
station B, a negative acknowledgment (NAK) is transmitted to station A. Station 'A' must
retransmit the original packet in a new frame in this situation.
● There is also a chance that the information frames or ACKs will be lost.
● The sender is then outfitted with a timer. When the timer expires at the end of the time-out
interval and no recognizable acknowledgment is received, the same frame is transmitted
again.
● Stop and wait refers to a sender who delivers one frame and then waits for an
acknowledgment before proceeding.
Design:
We can see traffic on the front/forward channel (from sender to receiver) and the back/reverse
channel when we compare the stop-and-wait protocol design model to the Simplest protocol
design model. There is always one data frame on the forward channel and one acknowledgment
frame on the reverse channel. We now demand a half-duplex connection.
Flow diagram:
This diagram depicts a communication utilizing the Stop-and-Wait protocol. It remains
straightforward. The sender delivers a frame and then waits for the recipient to respond. When the
receiver's ACK (acknowledged) arrives, send the next frame, and so on. Remember that when two
frames are present, the sender will be involved in four events, and the receivers will be involved
in two events.
In this protocol we assume that data is transmitted in one direction only. No error occurs; the
receiver can only process the received information at finite rate. These assumptions imply that the
transmitter cannot send frames at rate faster than the receiver can process them.
The main problem here is how to prevent the sender from flooding the receiver. The general
solution for this problem is to have the receiver send some sort of feedback to sender, the process
is as follows −
Step 1 − The receiver sends an acknowledgement frame back to the sender, telling the
sender that the last received frame has been processed and passed to the host.
Step 2 − Permission to send the next frame is granted.
Step 3 − The sender after sending the sent frame has to wait for an acknowledge frame
from the receiver before sending another frame.
This protocol is called Simplex Stop and wait protocol, the sender sends one frame and waits for
feedback from the receiver. When the ACK arrives, the sender sends the next frame.
The Simplex Stop and Wait Protocol is diagrammatically represented as follows −
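The send-then-wait-for-ACK loop above can be sketched as a tiny in-memory simulation (a toy model, not a real network implementation; the loss rate, seed, and frame names below are made up for illustration):

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=42):
    """Toy stop-and-wait simulation: the sender transmits one frame,
    then blocks until an ACK arrives; a lost frame (or lost ACK)
    triggers a retransmission after a 'timeout'."""
    rng = random.Random(seed)
    delivered, attempts = [], 0
    for frame in frames:
        while True:
            attempts += 1
            if rng.random() < loss_rate:   # frame or its ACK was lost
                continue                   # timeout -> retransmit same frame
            delivered.append(frame)        # receiver got it, ACK returned
            break                          # sender may now send the next frame
    return delivered, attempts

data = ["F0", "F1", "F2"]
received, tries = stop_and_wait(data)
print(received, tries)
```

Because each frame needs its own ACK before the next one goes out, the number of attempts always meets or exceeds the number of frames.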
Noisy Channel
There are three major protocols for the noisy channel: Stop-and-Wait ARQ, Go-Back-N ARQ,
and Selective Repeat ARQ. On a noisy channel, the basic Stop and Wait protocol runs into
three problems:
1. Time Out: if a frame is lost in transit, the sender waits forever unless a timer forces a
retransmission after a fixed interval.
2. Lost Acknowledgement: if the acknowledgement is lost, the sender times out and
retransmits, so the receiver may get a duplicate frame.
3. Delayed Acknowledgement/Data: after a timeout on the sender side, a long-delayed
acknowledgement might be wrongly considered as the acknowledgement of some other recent
packet.
These three problems are resolved by Stop and Wait ARQ (Automatic Repeat Request), which
performs both error control and flow control.
2. Go‐Back‐N ARQ
Go-Back-N ARQ (Go Back N Automatic Repeat Request) is a data link layer protocol used
for flow control. It is a sliding window protocol in which multiple frames are sent from
sender to receiver at once. The number of frames sent at one time depends on the size of the
window that is taken.
Go Back N ARQ is a sliding window protocol which is used for flow control purposes. Multiple
frames present in a single window are sent together from sender to receiver.
Pipelining is allowed in the Go Back N ARQ protocol. Pipelining means sending a frame before
receiving the acknowledgment for the previously sent frame.
The Receiver window in Go Back N ARQ protocol is always of size 1. This means that the receiver
takes at most 1 frame at a single time.
1. Data packets are divided into multiple frames. Each frame contains information about the
destination address, the error control mechanism it follows, etc. These multiple frames are
numbered so that they can be distinguished from each other.
2. The integer 'N' in Go Back 'N' ARQ tells us the size of the window i.e. the number of frames
that are sent at once from sender to receiver. Suppose the window size 'N' is equal to 4. Then, 4
frames (frame 0, frame 1, frame 2, and frame 3) will be sent first from sender to receiver.
3. Receiver sends the acknowledgment of frame 0. Then the sliding window moves by one and
frame 4 is sent.
4. Receiver sends the acknowledgment of frame 1. Then the sliding window moves by one and
frame 5 is sent.
5. The sender waits for the acknowledgment for some fixed amount of time. If the sender does not
get the acknowledgment for a frame in the time, it considers the frame to be corrupted. Then the
sliding window moves to the starting of the corrupted frame and all the frames in the window are
retransmitted.
For example, if the sender does not receive the acknowledgment for frame 2, it retransmits all the
frames in the windows i.e. frames [2, 3, 4, 5].
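The window roll-back in steps 1-5 can be sketched as a short trace generator (a simplified model that assumes a retransmitted frame always succeeds; the function and variable names are ours, and the call below mirrors the notes' example of a missing ACK for frame 2):

```python
def go_back_n_events(num_frames, window, lost_acks):
    """Toy Go-Back-N trace: up to `window` frames are pipelined;
    a missing ACK rolls the window back to the bad frame and every
    frame from that point on is resent (receiver window size is 1)."""
    sent, base = [], 0
    attempts = {}                         # frame -> number of transmissions
    while base < num_frames:
        # transmit the whole window starting at `base`
        for seq in range(base, min(base + window, num_frames)):
            sent.append(seq)
            attempts[seq] = attempts.get(seq, 0) + 1
        # the first un-ACKed frame in the window forces a go-back
        for seq in range(base, min(base + window, num_frames)):
            if seq in lost_acks:
                lost_acks.discard(seq)    # assume the retransmission succeeds
                base = seq                # slide the window back to that frame
                break
        else:
            base = min(base + window, num_frames)
    return sent, attempts

trace, counts = go_back_n_events(num_frames=6, window=4, lost_acks={2})
print(trace)   # frames 2 and 3 are transmitted twice after the go-back
```

With window size 4 and the ACK for frame 2 lost, frames 0-3 go out first, then the window rolls back to frame 2 and frames 2-5 are sent.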
The sender window size in Go-Back-N ARQ is 'N' and the receiver window size is 1.
When the acknowledgment for one frame is not received by the sender or the frames received by
the receiver are out of order, then the whole window starting from the corrupted frame is
retransmitted.
Data Link Layer protocols are generally responsible for ensuring and confirming that the bits and
bytes received are identical to the bits and bytes transmitted. They are essentially a set of
specifications used to implement the data link layer just above the physical layer of the
Open Systems Interconnection (OSI) model.
● It provides an abstraction of the physical layer to the LLC and upper layers of the OSI
network.
● It is responsible for encapsulating frames so that they are suitable for transmission via the
physical medium.
● It resolves the addressing of source station as well as the destination station, or groups of
destination stations.
● It performs multiple access resolutions when more than one data frame is to be
transmitted. It determines the channel access methods for transmission.
● It also performs collision resolution and initiates retransmission in case of collisions.
● It generates the frame check sequences and thus contributes to protection against
transmission errors.
MAC Addresses
MAC address or media access control address is a unique identifier allotted to a network interface
controller (NIC) of a device. It is used as a network address for data transmission within a network
segment like Ethernet, Wi-Fi, and Bluetooth.
MAC address is assigned to a network adapter at the time of manufacturing. It is hardwired or hard-coded
into the network interface card (NIC). A MAC address comprises six groups of two hexadecimal digits,
separated by hyphens, colons, or no separators at all. An example of a MAC address is 00:0A:89:5B:F0:11.
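Since the same MAC address can be written with hyphens, colons, or no separators, a small helper (hypothetical, not part of any standard library API) can normalize all three forms:

```python
import re

def normalize_mac(mac):
    """Accept a MAC in colon, hyphen, or bare form and return the
    canonical colon-separated lowercase representation."""
    digits = re.sub(r"[^0-9A-Fa-f]", "", mac)   # strip any separators
    if len(digits) != 12:                       # 48 bits = 12 hex digits
        raise ValueError(f"not a 48-bit MAC address: {mac!r}")
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2)).lower()

print(normalize_mac("00-0A-89-5B-F0-11"))   # 00:0a:89:5b:f0:11
print(normalize_mac("000A895BF011"))        # same address, no separators
```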
Channel allocation is the process of dividing a single channel among multiple users so that each user
can carry out its own tasks. The number of users may vary each time the process takes place. If there
are N users and the channel is divided into N equal-sized subchannels, each user is assigned one
portion. If the number of users is small and does not vary over time, Frequency Division Multiplexing
(FDM) can be used, as it is a simple and efficient channel-bandwidth allocation technique.
Channel allocation problem can be solved by two schemes: Static Channel Allocation in LANs and MANs,
and Dynamic Channel Allocation. These are explained as following below.
Static Channel Allocation in LANs and MANs: This is the classical or traditional approach of allocating a
single channel among multiple competing users using Frequency Division Multiplexing (FDM). If there are
N users, the frequency band is divided into N equal-sized portions (bandwidth), each user being
assigned one portion. Since each user has a private frequency band, there is no interference between
users. However, dividing the channel into a fixed number of chunks is inefficient when the number of
senders or their traffic varies.
For a single shared channel, the mean time delay is
T = 1/(uC - L)
and for FDM with N subchannels,
T(FDM) = 1/(u(C/N) - L/N) = N/(uC - L) = N*T
where u is the service rate in frames per bit (the reciprocal of the mean frame length in bits), C is the
channel capacity in bps, and L is the mean frame arrival rate in frames per second. FDM therefore makes
the mean delay N times worse than a single shared channel.
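Plugging illustrative numbers into the delay formulas above (the capacity, frame length, and arrival rate below are assumed values, not taken from the notes) confirms that FDM multiplies the mean delay by N:

```python
# Mean frame delay for one shared channel vs. FDM with N subchannels,
# using T = 1/(uC - L): u = frames per bit, C = capacity (bps),
# L = arrival rate (frames/s). All numbers here are illustrative.
C = 100e6          # channel capacity: 100 Mbps
u = 1 / 10000      # frames per bit (mean frame length 10,000 bits)
L = 5000           # total arrival rate: 5000 frames/s
N = 10             # number of FDM subchannels

T = 1 / (u * C - L)                    # single shared channel
T_fdm = 1 / (u * (C / N) - L / N)      # each user gets C/N capacity, L/N load

print(T * 1e6, "microseconds")         # ~200 microseconds
print(T_fdm * 1e6, "microseconds")     # ~2000 microseconds, i.e. N times worse
```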
A. Random Access Protocols
In random access protocols, all stations have equal priority to send data over the channel; no station
depends on or controls another station. Depending on the channel's state (idle or busy), each station
transmits its data frame. However, if more than one station sends data over the channel at the same
time, a collision (data conflict) may occur. Because of the collision, the data frame packets may be
lost or corrupted, and so they are not received by the receiver.
Following are the different methods of random-access protocols for broadcasting frames on the channel.
o Aloha
o CSMA
o CSMA/CD
o CSMA/CA
Aloha was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium.
Using this method, any station can transmit data across the network whenever a data frame is available
for transmission.
Aloha Rules
Pure Aloha
Pure Aloha is used whenever data is available for sending over the channel. In pure Aloha, each station
transmits its data on the channel without checking whether the channel is idle or busy, so collisions
may occur and data frames may be lost. After transmitting a frame, the station waits for the receiver's
acknowledgment. If the acknowledgment does not arrive within the specified time, the station assumes
the frame has been lost or destroyed, waits for a random amount of time, called the backoff time (Tb),
and then retransmits the frame. It retransmits until the data is successfully delivered to the receiver.
Slotted Aloha
Slotted Aloha was designed to improve on pure Aloha's efficiency, because pure Aloha has a very high
probability of frame collision. In slotted Aloha, the shared channel is divided into fixed time intervals
called slots. A station that wants to send a frame may begin transmitting only at the beginning of a
slot, and only one frame may be sent in each slot. If a station misses the beginning of a slot, it must
wait until the beginning of the next slot. However, a collision is still possible when two or more
stations try to send a frame at the beginning of the same time slot.
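The collision behaviour of the two Aloha variants is usually summarized by the classical throughput formulas S = G·e^(-2G) for pure Aloha and S = G·e^(-G) for slotted Aloha (standard results, not stated explicitly in these notes). A quick check of their peak values:

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): the vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G): collisions happen only within one slot."""
    return G * math.exp(-G)

# Peak throughput occurs at G = 0.5 (pure) and G = 1.0 (slotted)
print(round(pure_aloha_throughput(0.5), 4))     # about 0.1839 -> 18.4%
print(round(slotted_aloha_throughput(1.0), 4))  # about 0.3679 -> 36.8%
```

Slotting the channel roughly doubles the achievable throughput, which is exactly why slotted Aloha was introduced.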
CSMA
Carrier sense multiple access (CSMA) is a media access protocol in which a station senses the traffic
on the channel (idle or busy) before transmitting data. If the channel is idle, the station can send
data on the channel; otherwise, it must wait until the channel becomes idle. This reduces the chance
of a collision on the transmission medium.
1-Persistent: in the 1-persistent mode of CSMA, each node first senses the shared channel;
if the channel is idle, it immediately sends the data. Otherwise it keeps sensing the channel
continuously and transmits the frame as soon as the channel becomes idle.
Non-Persistent: in this access mode, each node senses the channel before transmitting; if the
channel is idle, it immediately sends the data. Otherwise, the station waits for a random time
(it does not sense continuously), then senses the channel again and transmits when the channel
is found idle.
O-Persistent: in the O-persistent method, a transmission order (priority) of the stations is
defined before transmission on the shared channel. If the channel is found to be idle, each
station waits for its assigned turn to transmit the data.
CSMA/ CD
CSMA/CD (carrier sense multiple access with collision detection) is a network protocol for transmitting
data frames that works within the medium access control layer. A station first senses the shared channel
before broadcasting; if the channel is idle, it transmits the frame while monitoring whether the
transmission succeeds. If the frame is delivered successfully, the station can send the next frame. If a
collision is detected, the station sends a jam signal onto the shared channel to abort the transmission,
and then waits a random amount of time before trying to send again.
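The "waits for a random time" step is, in classic Ethernet CSMA/CD, truncated binary exponential backoff. A sketch in terms of slot counts (the helper name is ours):

```python
import random

def backoff_slots(collision_count, rng=None):
    """Truncated binary exponential backoff as in classic Ethernet
    CSMA/CD: after the n-th collision, wait a random number of slot
    times drawn from 0 .. 2^min(n, 10) - 1; give up after 16 collisions."""
    if collision_count > 16:
        raise RuntimeError("too many collisions: abort transmission")
    rng = rng or random.Random()
    k = min(collision_count, 10)          # the exponent is capped at 10
    return rng.randrange(2 ** k)

for n in (1, 2, 3, 12):
    w = backoff_slots(n)
    assert 0 <= w < 2 ** min(n, 10)       # wait time stays in range
```

Doubling the backoff range after each collision spreads retransmissions out, so repeated collisions between the same stations become increasingly unlikely.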
CSMA/ CA
CSMA/CA (carrier sense multiple access with collision avoidance) is a network protocol for carrier
transmission of data frames that works with the medium access control layer. When a station sends a
data frame onto the channel, it listens to check whether the channel is clear. If the station receives
only a single signal (its own), the data frame has been successfully transmitted to the receiver. If it
receives two signals (its own plus another station's transmission), a collision of frames has occurred
on the shared channel. The sender detects such a collision when the expected acknowledgment signal is
not received.
Following are the methods used in the CSMA/ CA to avoid the collision:
Interframe space: the station waits for the channel to become idle, and even when it finds the
channel idle, it does not send the data immediately. Instead it waits for a period of time called
the interframe space (IFS). The IFS duration is also often used to define the priority of a station.
Contention window: the total time is divided into slots. When the station/sender is ready to
transmit a data frame, it chooses a random number of slots as its wait time. If the channel becomes
busy during the wait, the station does not restart the entire process; it pauses the timer and
resumes it when the channel becomes idle again, transmitting when the timer runs out.
Acknowledgment: the sender retransmits the data frame on the shared channel if an acknowledgment
is not received before its timer expires.
B. Controlled Access Protocols
Controlled access is a method of reducing data frame collisions on a shared channel. In the controlled
access method, the stations consult one another, and a station may send a data frame only when it has
been approved by the other stations. There are three types of controlled access: Reservation, Polling,
and Token Passing.
C. Channelization Protocols
Channelization protocols allow the total usable bandwidth of a shared channel to be shared among
multiple stations based on time, frequency, and code. All stations can access the channel at the same
time to send their data frames.
Following are the various methods of accessing the channel based on time, frequency, and code:
FDMA
Frequency Division Multiple Access (FDMA) divides the available bandwidth into equal bands so that
multiple users can send data simultaneously, each through its own frequency subchannel. Each station
is assigned a reserved band, which prevents crosstalk between the channels and interference between
stations.
TDMA
Time Division Multiple Access (TDMA) is a channel access method that allows multiple stations to share
the same frequency bandwidth. To avoid collisions on the shared channel, it divides the channel into
time slots and allocates a slot to each station in which to transmit its data frames. However, TDMA
has a synchronization overhead: synchronization bits must be added to each slot so that every station
knows its own time slot.
CDMA
Code Division Multiple Access (CDMA) is a channel access method in which all stations can send data
over the same channel simultaneously. Each station may transmit its data frames with the full frequency
of the shared channel at all times; there is no division of bandwidth by frequency bands or time slots.
When multiple stations send data on the channel simultaneously, their data frames are separated by
unique code sequences: each station has its own unique code for transmitting over the shared channel.
For example, imagine a room full of people speaking at the same time; two people understand each other
only when both use the same language, while the other conversations appear as background noise.
Similarly, in the network, different stations communicate simultaneously over the same channel, each
pair using its own code.
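The "same language" analogy can be made concrete with orthogonal chip sequences (a textbook-style toy with four stations; the station names and the 4-chip Walsh codes are illustrative):

```python
# Four orthogonal chip sequences (rows of a 4x4 Walsh/Hadamard matrix).
CHIPS = {
    "A": (+1, +1, +1, +1),
    "B": (+1, -1, +1, -1),
    "C": (+1, +1, -1, -1),
    "D": (+1, -1, -1, +1),
}

def encode(station, bit):
    """Bit 1 -> send the chip sequence; bit 0 -> send its negation."""
    sign = 1 if bit else -1
    return [sign * c for c in CHIPS[station]]

def decode(channel, station):
    """Normalized inner product with the station's chips recovers its bit."""
    chips = CHIPS[station]
    score = sum(x * c for x, c in zip(channel, chips)) / len(chips)
    return 1 if score > 0 else 0

# A sends 1 and C sends 0 simultaneously; their signals add on the channel.
channel = [a + c for a, c in zip(encode("A", 1), encode("C", 0))]
print(decode(channel, "A"), decode(channel, "C"))   # 1 0
```

Because the codes are orthogonal, the inner product cancels every station's contribution except the one being decoded.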
There are a number of versions of IEEE 802.3 protocol. The most popular ones are -
● IEEE 802.3: This was the original standard given for 10BASE-5. It used a thick single coaxial
cable into which a connection can be tapped by drilling into the cable to the core. Here,
10 is the maximum throughput, i.e. 10 Mbps, BASE denoted use of baseband transmission,
and 5 refers to the maximum segment length of 500m.
● IEEE 802.3a: This gave the standard for thin coax (10BASE-2), which is a thinner variety
where the segments of coaxial cables are connected by BNC connectors. The 2 refers to
the maximum segment length of about 200m (185m to be precise).
● IEEE 802.3i: This gave the standard for twisted pair (10BASE-T) that uses unshielded
twisted pair (UTP) copper wires as physical layer medium. The further variations were
given by IEEE 802.3u for 100BASE-TX, 100BASE-T4 and 100BASE-FX.
● IEEE 802.3j: This gave the standard for Ethernet over Fiber (10BASE-F) that uses fiber optic
cables as the medium of transmission.
IEEE 802.11
IEEE 802.11 was the original version released in 1997. It provided 1 Mbps or 2 Mbps data rate in
the 2.4 GHz band and used either frequency-hopping spread spectrum (FHSS) or direct-sequence
spread spectrum (DSSS). It is obsolete now.
IEEE 802.11a
802.11a was published in 1999 as a modification to 802.11, with orthogonal frequency division
multiplexing (OFDM) based air interface in physical layer instead of FHSS or DSSS of 802.11.
It provides a maximum data rate of 54 Mbps operating in the 5 GHz band. Besides, it provides
an error-correcting code. As the 2.4 GHz band is crowded, the relatively sparsely used 5 GHz
band gives 802.11a an additional advantage.
Further amendments to 802.11a are 802.11ac, 802.11ad, 802.11af, 802.11ah, 802.11ai, 802.11aj
etc.
IEEE 802.11b
802.11b is a direct extension of the original 802.11 standard that appeared in early 2000. It uses
the same modulation technique as 802.11, i.e. DSSS and operates in the 2.4 GHz band. It has a
higher data rate of 11 Mbps as compared to 2 Mbps of 802.11, due to which it was rapidly adopted
in wireless LANs. However, since the 2.4 GHz band is quite crowded, 802.11b devices face
interference from other devices.
Further amendments to 802.11b are 802.11ba, 802.11bb, 802.11bc, 802.11bd and 802.11be.
IEEE 802.11g
802.11g was ratified in 2003. It operates in the 2.4 GHz band (as does 802.11b) and provides an
average throughput of 22 Mbps. It uses the OFDM technique (as does 802.11a) and is fully backward
compatible with 802.11b. 802.11g devices also face interference from other devices operating in
the 2.4 GHz band.
IEEE 802.11n
802.11n was approved and published in 2009. It operates on both the 2.4 GHz and the 5 GHz
bands, with a variable data rate ranging from 54 Mbps to 600 Mbps. It provides a marked
improvement over previous 802.11 standards by incorporating multiple-input multiple-output
(MIMO) antennas.
IEEE 802.11p
802.11p is an amendment that adds wireless access in vehicular environments (WAVE) to
support Intelligent Transportation Systems (ITS). It covers network communication between
vehicles moving at high speed and between vehicles and the roadside environment. It has a
data rate of 27 Mbps and operates in the 5.9 GHz band.
1. Reservation
Advantages of Reservation
The main advantage of reservation is that the maximum and minimum data access times and data
rates on the channel can be predicted easily, since times and rates are fixed.
Priorities can be set to provide speedier access for some secondaries.
Reservation-based access methods can provide predictable network performance,
which is important in applications where latency and jitter must be minimized, such as
in real-time video or audio streaming.
Reservation-based access methods can reduce contention for network resources, as
access to the network is pre-allocated based on reservation requests. This can improve
network efficiency and reduce packet loss.
Reservation-based access methods can support QoS requirements, by providing
different reservation types for different types of traffic, such as voice, video, or data.
This can ensure that high-priority traffic is given preferential treatment over lower-
priority traffic.
Reservation-based access methods can enable more efficient use of available
bandwidth, as they allow for time and frequency multiplexing of different reservation
requests on the same channel.
Reservation-based access methods are well-suited to support multimedia applications
that require guaranteed network resources, such as bandwidth and latency, to ensure
high-quality performance.
Disadvantages of Reservation
A station must wait for its reserved slot even if the channel is otherwise idle, and reserved
slots that go unused waste channel capacity.
2. Polling
In polling, a primary station asks (polls) each secondary station in turn whether it has data
to send; a secondary station may transmit only when it is polled.
Advantages of Polling
The maximum and minimum access times and data rates on the channel are fixed and predictable.
It has maximum efficiency.
It has maximum bandwidth.
No slot is wasted in polling.
Priorities can be assigned to ensure faster access for some secondaries.
Disadvantages of Polling
The exchange of polling messages consumes channel time and adds overhead.
If the primary station fails, the whole system goes down.
3. Token Passing
In token passing scheme, the stations are connected logically to each other in form of
ring and access to stations is governed by tokens.
A token is a special bit pattern or a small message, which circulate from one station to
the next in some predefined order.
In Token ring, token is passed from one station to another adjacent station in the ring
whereas in case of Token bus, each station uses the bus to send the token to the next
station in some predefined order.
In both cases, token represents permission to send. If a station has a frame queued for
transmission when it receives the token, it can send that frame before it passes the token
to the next station. If it has no queued frame, it passes the token simply.
After sending a frame, each station must wait for all N stations (including itself) to send
the token to their neighbours and the other N – 1 stations to send a frame, if they have
one.
Problems such as duplication of the token, loss of the token, insertion of a new station,
and removal of a station need to be tackled for correct and reliable operation of this
scheme.
Token passing can be used with modern structured cabling and includes built-in debugging
features such as protective relays and automatic reconfiguration.
It provides good throughput under high-load conditions.
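One circulation of the token, as described above, can be sketched as a toy model (station names and queued frames are invented; real token rings also handle token loss and ring monitoring, which this omits):

```python
from collections import deque

def token_ring_round(queues):
    """One full circulation of the token around a logical ring.
    `queues` maps station -> deque of frames waiting to be sent.
    A station holding the token sends at most one frame, then
    passes the token on to its neighbour."""
    transmitted = []
    for station, q in queues.items():     # token visits stations in ring order
        if q:                             # a frame is queued: send it first
            transmitted.append((station, q.popleft()))
        # otherwise the station simply passes the token on
    return transmitted

stations = {"S1": deque(["f1"]), "S2": deque(), "S3": deque(["f3a", "f3b"])}
print(token_ring_round(stations))   # [('S1', 'f1'), ('S3', 'f3a')]
```

Note that S3's second frame stays queued: it must wait for the token to come around again, which is exactly the fairness property of token passing.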
HDLC is a synchronous data link layer protocol designed for reliable point-to-point and
multipoint communication. HDLC is a bit-oriented protocol, meaning the frame contents are
treated as a sequence of bits rather than as characters. It provides error detection,
correction, flow control, and multiplexing capabilities. It is commonly used in WAN
environments and on leased lines.
Features of HDLC
Error Detection and Correction: HDLC uses error detection and correction mechanisms such
as CRCs and acknowledgments to ensure the integrity of the transmitted data.
Full-Duplex Communication: HDLC supports full-duplex communication, which allows data
to be transmitted in both directions simultaneously.
Multiplexing: HDLC supports multiplexing, which enables multiple data streams to be
transmitted over a single communication channel.
Efficiency: HDLC uses efficient bandwidth utilization techniques, such as sliding windows, to
optimize data transmission.
PPP
PPP is a versatile data link layer protocol that can be used for both synchronous and
asynchronous transmission, over point-to-point links only. PPP is a byte-oriented protocol,
meaning the frame contents are treated as a sequence of bytes (characters). It offers
features like error detection, correction, flow control, and negotiation of various options.
PPP is widely used in dial-up connections, DSL, and other point-to-point links.
Features of PPP
Flag field – A PPP frame, like an HDLC frame, always begins and ends with the standard HDLC
flag: 1 byte with the binary value 01111110.
Address field – The address field holds the broadcast address: 1 byte of all 1's (binary
value 11111111), indicating that all stations should accept the frame. PPP does not provide
or assign individual station addresses.
Control field – This field uses the format of the HDLC U-frame (unnumbered frame). In HDLC
the control field serves various purposes, but in PPP it is fixed at 1 byte with the binary
value 00000011, indicating a connectionless data link.
Protocol field – This field identifies the network protocol of the datagram, i.e., what
exactly is being carried in the data field. It is 1 or 2 bytes long and identifies the PDU
(Protocol Data Unit) being encapsulated by the PPP frame.
Data field – This field contains the upper-layer datagram; for regular PPP data frames, a
network-layer datagram is encapsulated here. Its length is not constant but varies.
Checksum (FCS) field – This field contains a checksum for the detection of errors. It can be
either 16 bits or 32 bits in size and is calculated over the address, control, protocol, and
information fields.
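For the 16-bit case, PPP's FCS is the CRC-16/X.25 checksum defined in RFC 1662. A minimal bitwise sketch (real implementations use a table-driven version for speed):

```python
def pppfcs16(data: bytes) -> int:
    """Bitwise CRC-16/X.25, the 16-bit FCS used by PPP (RFC 1662):
    reflected polynomial 0x8408, initial value 0xFFFF, final XOR 0xFFFF."""
    fcs = 0xFFFF
    for byte in data:
        fcs ^= byte
        for _ in range(8):
            # shift right; fold in the polynomial when a 1 falls off
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF

print(hex(pppfcs16(b"123456789")))   # 0x906e, the standard check value
```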
Both are data link layer protocols used for data communication between network devices.
Both use error detection and correction mechanisms to ensure the integrity of the transmitted
data.
Both support full-duplex communication.
Both are widely used in various communication systems and have been standardized by
international organizations.
Both support bandwidth optimization techniques, such as sliding windows.
Stands for: HDLC stands for High-level Data Link Control. PPP stands for Point-to-Point
Protocol.
Layer: HDLC works at layer 2 (Data Link Layer). PPP works at layer 2 and layer 3 (Network
Layer).
Media type: HDLC is used on synchronous media. PPP is used on synchronous as well as
asynchronous media.
Error detection: HDLC provides error detection using the FCS in its frame format. PPP
provides error detection using FCS (Frame Check Sequence) while transmitting data.
Compatibility: Cisco's HDLC is not compatible with non-Cisco devices. PPP is compatible
with non-Cisco devices.
Format: HDLC has two variants - ISO HDLC and Cisco HDLC. PPP uses a format derived from
HDLC.
The frequency bands of different stations are separated by small bands of unused frequency
called guard bands, which prevent interference between stations. FDMA is an access method in
the data link layer: the data link layer at each station tells its physical layer to make a
bandpass signal from the data passed to it. The signal is created in the allocated band, and
there is no physical multiplexer at the physical layer.