Unit-2 Computer Networks
The data link layer is the second layer from the bottom in the OSI (Open System Interconnection)
network architecture model. It is responsible for the node-to-node delivery of data. Its major role is to
ensure error-free transmission of information. The DLL is also responsible for encoding, decoding, and
organizing the outgoing and incoming data. It is considered the most complex layer of the OSI
model because it hides all the underlying complexities of the hardware from the layers above.
Sub-layers of the Data Link Layer
The data link layer is further divided into two sub-layers, which are as follows:
Logical Link Control (LLC)
This sublayer of the data link layer deals with multiplexing, the flow of data among applications and
other services, and LLC is responsible for providing error messages and acknowledgments as well.
Media Access Control (MAC)
The MAC sublayer manages the device's interaction with the medium, is responsible for addressing
frames, and also controls physical media access.
The data link layer receives information in the form of packets from the network layer, divides the
packets into frames, and sends those frames bit by bit to the underlying physical layer.
Functions of the Data-link Layer
The data link layer performs various functions; let's look into them.
Error Detection Methods | CRC, VRC, LRC, Checksum error detection techniques
This page describes error detection methods and techniques. The methods of error detection in
networking are VRC, LRC, CRC, and checksum. A CRC error detection and correction example is also
explained.
What is Error Detection and Correction ?
As we know errors occur to the data during data transmission as well as data processing stages. The
errors can occur due to various reasons such as electrostatic interference from nearby circuits,
attenuation due to cable or path resistance, distortion due to inductance/capacitance, transmission loss
due to leakages etc.
LAN and optical cables introduce fewer errors than wireless networks. Errors can be of two types,
viz. single bit error and burst error.
Types Of Error | Single Bit Error, Burst Error
Single bit error : Only one bit in the data unit changes. Single bit errors can happen when data is
sent using parallel transmission.
Burst error : Multiple bits are changed in a burst error. Burst errors can be caused by impulse noise.
Principle of Error Detection
• When a frame is transmitted from the transmitter to the receiver, there are two possibilities: the
frame is received without error, or the frame is received in error (i.e. the frame is bad).
• Error detection helps in detecting errors in a received block or frame by the receiver.
• Once the error is detected receiver informs the transmitter to re-transmit the same frame again.
• Error detection can be made possible by adding redundant bits in each frame during transmission.
Based on all the bits in the frame (i.e. data + error check bits), receiver is capable of detecting errors
in the frame.
Following is an explanation of the error detection modules.
➤Message (or data) source : Source of information in bits (k bits)
➤Encoding process : Process of converting a block of k-bit information into an n-bit codeword, where n > k.
➤Channel : Medium through which the n-bit codewords pass.
➤Decoding process : Process of converting the n-bit received block back to the k-bit message.
➤Message sink : Destination for the k-bit information.
Error Detection Methods
Following are the error detection methods or techniques of error detection in networking.
1. VRC Method 2. LRC method 3. CRC method 4. Checksum method
Parity check or vertical redundancy check (VRC) method
In this error detection technique, a redundant bit called a parity bit is appended to every data unit so
that the total number of 1's in the unit (including the parity bit) becomes even. The system then
transmits the entire extended unit across the network link. At the receiver, all received bits are
checked by an even-parity checking function. If it counts an even number of 1's, the data unit passes.
If it counts an odd number of 1's, an error has been introduced into the data somewhere, so the
receiver rejects the whole data unit. Odd-parity VRC can be implemented in a similar way; in that
method, the total number of 1's should be odd before transmission.
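The even-parity scheme described above can be sketched in a few lines of Python (the bit-list representation is an illustrative choice):

```python
def add_even_parity(data_bits):
    """Append a parity bit so the total number of 1's becomes even."""
    parity = sum(data_bits) % 2           # 1 if the count of 1's is odd
    return data_bits + [parity]

def check_even_parity(unit):
    """Receiver side: the unit passes only if the count of 1's is even."""
    return sum(unit) % 2 == 0

sent = add_even_parity([1, 1, 0, 0, 1, 0, 1])   # four 1's -> parity bit 0
assert check_even_parity(sent)                   # passes when intact

corrupted = sent.copy()
corrupted[2] ^= 1                                # a single-bit error in transit
assert not check_even_parity(corrupted)          # receiver rejects the unit
```

Note that a single parity bit detects any odd number of flipped bits but misses an even number of flips.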
Longitudinal Redundancy Check (LRC) method
In this error detection method, a block of bits is organized in a table (of rows and columns). For
example, instead of sending a block of 32 bits as-is, it is first organized into four rows and eight
columns. Then a parity bit for each column is calculated, forming a new row of eight parity bits.
These eight parity bits are appended to the original data before transmission.
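The row-and-column arrangement can be sketched as follows (the 32-bit block is an arbitrary example):

```python
def lrc(rows):
    """Compute the LRC row: an even-parity bit over each column of the block."""
    return [sum(col) % 2 for col in zip(*rows)]

# A 32-bit block organized as four rows of eight columns
block = [
    [1, 1, 0, 0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1, 0, 1],
]
parity_row = lrc(block)     # eight parity bits appended before transmission
print(parity_row)           # [0, 0, 1, 0, 0, 0, 0, 0]
```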
CRC Error Detection and Correction Example
Figure-2 depicts CRC addition at the transmitter end. At the receiver, the CRC is calculated over the
received block and compared with the CRC appended by the transmitter. When the calculated CRC
and the original CRC are equal, the frame is considered error free; when they are not equal, the frame
is said to be erroneous.
As shown in the figure, k bits are passed to the encoder block to generate the parity bits. These parity
bits are added to the input data bits and transmitted as n bits; hence n-k of the transmitted bits are
parity bits. This happens at the transmitter.
At the receiver, the parity bits along with the data bits, of total length n bits, are passed to the
decoder. From the data part, the CRC is computed again and compared with the received CRC bits;
based on this comparison, it is decided whether the data is corrupted. This process is called error
detection and correction.
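A minimal modulo-2 division sketch illustrates the transmitter and receiver steps; the generator x^3 + x + 1 and the data bits here are illustrative choices, not values from the text:

```python
def mod2_div(dividend, generator):
    """Modulo-2 long division (XOR); returns the remainder bits."""
    bits = list(dividend)
    for i in range(len(bits) - len(generator) + 1):
        if bits[i] == 1:                     # divide only where the lead bit is 1
            for j, g in enumerate(generator):
                bits[i + j] ^= g
    return bits[-(len(generator) - 1):]

generator = [1, 0, 1, 1]                     # x^3 + x + 1 (illustrative)
data = [1, 0, 0, 1, 1, 0]                    # k data bits
crc = mod2_div(data + [0, 0, 0], generator)  # append n-k = 3 zero bits first
codeword = data + crc                        # the n bits actually transmitted

# Receiver: the remainder over the whole codeword is zero when error free
assert mod2_div(codeword, generator) == [0, 0, 0]

bad = codeword.copy()
bad[1] ^= 1                                  # a single-bit error in transit
assert mod2_div(bad, generator) != [0, 0, 0] # error detected
```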
Checksum method
There are two modules in this error detection method, viz. a checksum generator and a checksum
checker.
In the transmitter, the checksum generator subdivides the data unit into equal segments of n bits
(usually 16). These segments are added together using one's complement arithmetic in such a way
that the total is also n bits long. The total (i.e. the sum) is then complemented and appended to the
end of the original data unit as redundancy bits, called the checksum field. The extended data unit is
transmitted across the network. So if the sum of the data segments is equal to T, the checksum will
be -T.
The receiver subdivides the data unit as above, adds all segments together, and complements the
result. If the extended data unit is intact, the total value found by adding all data segments and the
checksum field will be zero. If the result is not zero, the packet contains an error and the receiver
rejects the packet.
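A sketch of the 16-bit one's complement procedure follows (the data words are arbitrary examples):

```python
def ones_complement_sum(segments, bits=16):
    """Add n-bit segments, wrapping any carry back into the low end."""
    mask = (1 << bits) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> bits)   # end-around carry
    return total

def make_checksum(segments):
    """Complement of the one's complement sum (so if the sum is T, it is -T)."""
    return ~ones_complement_sum(segments) & 0xFFFF

data = [0x4500, 0x0030, 0x4422]                    # illustrative 16-bit words
chk = make_checksum(data)

# Receiver: sum of all segments plus the checksum, complemented, must be zero
assert ~ones_complement_sum(data + [chk]) & 0xFFFF == 0
```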
Error Correction
Error Correction codes are used to detect and correct the errors when data is transmitted from the
sender to the receiver.
Error Correction can be handled in two ways:
o Backward error correction: Once the error is discovered, the receiver requests the sender to
retransmit the entire data unit.
o Forward error correction: In this case, the receiver uses the error-correcting code which
automatically corrects the errors.
A single additional bit can detect the error, but cannot correct it.
For correcting the errors, one has to know the exact position of the error. For example, If we want to
calculate a single-bit error, the error correction code will determine which one of seven bits is in error.
To achieve this, we have to add some additional redundant bits.
Suppose r is the number of redundant bits and d is the total number of the data bits. The number of
redundant bits r can be calculated by using the formula:
2^r >= d + r + 1
The value of r is calculated by using the above formula. For example, if the value of d is 4, then the
smallest value of r that satisfies the above relation is 3.
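The smallest r can be found by simply trying values until the relation holds, as in this short sketch:

```python
def redundant_bits(d):
    """Smallest r satisfying 2**r >= d + r + 1."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

assert redundant_bits(4) == 3    # 2^3 = 8 >= 4 + 3 + 1 = 8
assert redundant_bits(7) == 4    # 2^4 = 16 >= 7 + 4 + 1 = 12
```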
To determine the position of the bit which is in error, a technique developed by R. W. Hamming,
known as the Hamming code, can be applied to any length of data unit; it uses the relationship
between data bits and redundant bits.
Hamming Code
Parity bits: The bit which is appended to the original data of binary bits so that the total number of 1s
is even or odd.
Even parity: To check for even parity, if the total number of 1s is even, then the value of the parity
bit is 0. If the total number of 1s occurrences is odd, then the value of the parity bit is 1.
Odd Parity: To check for odd parity, if the total number of 1s is even, then the value of parity bit is 1.
If the total number of 1s is odd, then the value of parity bit is 0.
Algorithm of Hamming code:
o Information of 'd' bits is combined with the redundant bits 'r' to form a unit of d+r bits.
o At the receiving end, the parity bits are recalculated. The decimal value of the parity bits
determines the position of an error.
Relationship between error position and binary number.
We observe from the above figure that the bit positions that include 1 in the second position are 2, 3,
6, 7. Now, we perform the even-parity check at these bit positions. The total number of 1s at these bit
positions corresponding to r2 is odd; therefore, the value of the r2 bit is 1.
Determining r4 bit
The r4 bit is calculated by performing a parity check on the bit positions whose binary representation
includes 1 in the third position.
We observe from the above figure that the bit positions that include 1 in the third position are 4, 5, 6,
7. Now, we perform the even-parity check at these bit positions. The total number of 1s at these bit
positions corresponding to r4 is even; therefore, the value of the r4 bit is 0.
Data transferred is given below:
Suppose the 4th bit is changed from 0 to 1 at the receiving end, then parity bits are recalculated.
R1 bit
The bit positions of the r1 bit are 1,3,5,7
We observe from the above figure that the binary representation of r1 is 1100. Now, we perform the
even-parity check, the total number of 1s appearing in the r1 bit is an even number. Therefore, the
value of r1 is 0.
R2 bit
The bit positions of r2 bit are 2,3,6,7.
We observe from the above figure that the binary representation of r2 is 1001. Now, we perform the
even-parity check, the total number of 1s appearing in the r2 bit is an even number. Therefore, the
value of r2 is 0.
R4 bit
The bit positions of r4 bit are 4,5,6,7.
We observe from the above figure that the binary representation of r4 is 1011. Now, we perform the
even-parity check, the total number of 1s appearing in the r4 bit is an odd number. Therefore, the
value of r4 is 1.
o The binary representation of redundant bits, i.e., r4r2r1 is 100, and its corresponding decimal
value is 4. Therefore, the error occurs in a 4th bit position. The bit value must be changed
from 1 to 0 to correct the error.
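The whole worked example (encode, corrupt the 4th bit, recompute r1, r2, r4, and correct) can be sketched for a 7-bit code with four data bits; the bit layout follows the position numbering used above:

```python
def hamming74_encode(d1, d2, d3, d4):
    """Place data at positions 3, 5, 6, 7 and even-parity bits at 1, 2, 4."""
    r1 = (d1 + d2 + d4) % 2          # covers positions 1, 3, 5, 7
    r2 = (d1 + d3 + d4) % 2          # covers positions 2, 3, 6, 7
    r4 = (d2 + d3 + d4) % 2          # covers positions 4, 5, 6, 7
    return [r1, r2, d1, r4, d2, d3, d4]

def hamming74_correct(code):
    """Recompute the parities; r4 r2 r1 read as binary gives the error position."""
    c = [0] + list(code)             # 1-indexed access
    p1 = (c[1] + c[3] + c[5] + c[7]) % 2
    p2 = (c[2] + c[3] + c[6] + c[7]) % 2
    p4 = (c[4] + c[5] + c[6] + c[7]) % 2
    pos = p4 * 4 + p2 * 2 + p1
    if pos:                          # nonzero -> single-bit error at `pos`
        code[pos - 1] ^= 1
    return code, pos

sent = hamming74_encode(1, 0, 1, 1)
received = sent.copy()
received[3] ^= 1                     # the 4th bit changes in transit
fixed, pos = hamming74_correct(received)
assert pos == 4 and fixed == sent    # error located and corrected
```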
Methods of Line Discipline
Line Discipline :
Line discipline is the function of the data link layer that determines which device, out of the
various devices connected in a network, can send data at a given time, and when it can send that
data. Thus, line discipline coordinates the link system. It is also used to make sure that data sent by
the sender is received by the intended receiver. It may happen that while the sender is sending data,
the receiver is not ready to accept it or is busy. In such a case, the sender would keep on sending
data that is never received, and the data would go to waste.
To overcome this problem, the line discipline method confirms the existence and readiness of the
receiver before data is sent by the sender. The line discipline function looks after the establishment
of the link between sender and receiver and also takes care of the right of a particular device to
transmit data at a given time.
Methods of Line Discipline :
There are two ways to provide line discipline functions :
1. Enquiry/Acknowledgement (ENQ/ACK) :
This method is used when there is dedicated link between sender and receiver.
This method is used in peer-to-peer communication.
It coordinated which device may start transmission and whether receiver is ready to
accept data or not.
If the two communicating devices are of the same rank, either of them can start the
communication.
Figure : ENQ/ACK Mode
2. Poll/Select :
This method is used in client-server type networks having primary-secondary
relationships.
In this, one device is called primary station that provides and control services to all
other devices called secondary devices.
In such network, primary device controls link and secondary devices follows its
instructions.
Primary device determines which device has control over link at a given time.
Whenever communication is to be established, whether between the primary and a
secondary device or between two secondary devices, the primary device is always the
initiator of the session.
In such networks two functions are possible : Poll, and Select. These are explained as
following below.
(i). Polling :
When the primary device wants to receive data, it asks the secondary devices whether they
have anything to send; this function is called polling.
Figure : Poll Mode
(ii). Selecting :
When the primary device wants to send data to a secondary device, it tells that device to get
ready to receive the data; this function is called selecting.
Flow Control
o It is a set of procedures that tells the sender how much data it can transmit before the data
overwhelms the receiver.
o The receiving device has limited speed and limited memory to store the data. Therefore, the
receiving device must be able to inform the sending device to stop the transmission
temporarily before the limits are reached.
o It requires a buffer, a block of memory for storing the information until it is processed.
o The two main methods of flow control are Stop-and-wait and Sliding Window.
Stop-and-wait
o In the Stop-and-wait method, the sender waits for an acknowledgement after every frame it
sends.
o When acknowledgement is received, then only next frame is sent. The process of alternately
sending and waiting of a frame continues until the sender transmits the EOT (End of
transmission) frame.
Advantage of Stop-and-wait
The Stop-and-wait method is simple as each frame is checked and acknowledged before the
next frame is sent.
Disadvantage of Stop-and-wait
The Stop-and-wait technique is inefficient because each frame must travel all the way to the
receiver, and an acknowledgement must travel all the way back, before the next frame is sent.
Each frame sent and received uses the entire time needed to traverse the link.
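The alternating send-then-wait behaviour can be sketched with two threads and a pair of queues standing in for the forward and reverse channels (all names here are illustrative, not a real link API):

```python
import queue
import threading

def sender(frames, data_ch, ack_ch):
    """Transmit one frame, then block until its ACK before sending the next."""
    for seq, payload in enumerate(frames):
        data_ch.put((seq, payload))      # send the frame
        assert ack_ch.get() == seq       # wait: nothing else goes out until ACK

def receiver(n, data_ch, ack_ch, out):
    for _ in range(n):
        seq, payload = data_ch.get()
        out.append(payload)
        ack_ch.put(seq)                  # acknowledge every frame individually

data_ch, ack_ch, out = queue.Queue(), queue.Queue(), []
t = threading.Thread(target=receiver, args=(3, data_ch, ack_ch, out))
t.start()
sender(["f0", "f1", "f2"], data_ch, ack_ch)
t.join()
assert out == ["f0", "f1", "f2"]         # frames arrive strictly one at a time
```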
Sliding Window
o The Sliding Window is a method of flow control in which a sender can transmit several
frames before getting an acknowledgement.
o In sliding window control, multiple frames can be sent one after another, due to which the
capacity of the communication channel is utilized efficiently.
o A single ACK can acknowledge multiple frames.
o Sliding Window refers to imaginary boxes at both the sender and receiver end.
o The window can hold the frames at either end, and it provides the upper limit on the number
of frames that can be transmitted before the acknowledgement.
o Frames can be acknowledged even when the window is not completely filled.
o The window has a specific size n, and the frames are numbered modulo-n, meaning they are
numbered from 0 to n-1. For example, if n = 8, the frames are numbered from
0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1........
o The size of the window is n-1. Therefore, a maximum of n-1 frames can be sent before an
acknowledgement.
o When the receiver sends an ACK, it includes the number of the next frame that it wants to
receive. For example, to acknowledge the string of frames ending with frame number 4, the
receiver sends an ACK containing the number 5. When the sender sees the ACK with the
number 5, it knows that frames 0 through 4 have been received.
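The modulo-n numbering and the cumulative meaning of an ACK can be sketched as follows (n = 8, as in the example above):

```python
N = 8                                  # sequence numbers 0 .. n-1

def seq(frame_index, n=N):
    """Frames are numbered modulo n, wrapping back to 0 after n-1."""
    return frame_index % n

def frames_acked(ack_num, n=N):
    """ACK k means every frame up to and including k-1 has been received."""
    return (ack_num - 1) % n

assert [seq(i) for i in range(10)] == [0, 1, 2, 3, 4, 5, 6, 7, 0, 1]
assert frames_acked(5) == 4            # ACK 5 acknowledges frames 0 through 4
```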
Sender Window
o At the beginning of a transmission, the sender window contains n-1 frames, and as they are
sent out, the left boundary moves inward, shrinking the size of the window. For example, if
the size of the window is w and three frames are sent out, then the number of frames left in
the sender window is w-3.
o Once the ACK has arrived, then the sender window expands to the number which will be
equal to the number of frames acknowledged by ACK.
o For example, suppose the size of the window is 7, and frames 0 through 4 have been sent out
with no acknowledgement yet; then the sender window contains only two frames, i.e., 5 and 6.
Now, if an ACK arrives with the number 4, it means that frames 0 through 3 have arrived
undamaged, and the sender window expands to include the next four frames. Therefore,
the sender window contains six frames (5,6,7,0,1,2).
Receiver Window
o At the beginning of transmission, the receiver window does not contain n frames, but it
contains n-1 spaces for frames.
o When the new frame arrives, the size of the window shrinks.
o The receiver window does not represent the number of frames received, but it represents the
number of frames that can be received before an ACK is sent. For example, the size of the
window is w, if three frames are received then the number of spaces available in the window
is (w-3).
o Once the acknowledgement is sent, the receiver window expands by the number equal to the
number of frames acknowledged.
o Suppose the size of the window is 7, meaning that the receiver window contains seven spaces
for seven frames. If one frame is received, the receiver window shrinks, moving the boundary
from 0 to 1. In this way, the window shrinks one space at a time, so it then contains six
spaces. If frames 0 through 4 have been received, the window contains two spaces before an
acknowledgement is sent.
Error Control
Error Control is a technique of error detection and retransmission.
Categories of Error Control:
Stop-and-wait ARQ
Stop-and-wait ARQ is a technique used to retransmit the data in case of damaged or lost
frames.
This technique works on the principle that the sender will not transmit the next frame until it
receives the acknowledgement of the last transmitted frame.
Four features are required for the retransmission:
o The sending device keeps a copy of the last transmitted frame until the acknowledgement is
received. Keeping the copy allows the sender to retransmit the data if the frame is not
received correctly.
o Both the data frames and the ACK frames are numbered alternately 0 and 1 so that they can
be identified individually. Suppose an ACK 1 frame acknowledges the data 0 frame; this means
that the data 0 frame has arrived correctly and the receiver expects the data 1 frame.
o If an error occurs in the last transmitted frame, then the receiver sends the NAK frame which
is not numbered. On receiving the NAK frame, sender retransmits the data.
o It works with the timer. If the acknowledgement is not received within the allotted time, then
the sender assumes that the frame is lost during the transmission, so it will retransmit the
frame.
Two possibilities of the retransmission:
o Damaged Frame: When the receiver receives a damaged frame, i.e., a frame containing an
error, it returns a NAK frame. For example, when the data 0 frame is sent, the receiver returns
the ACK 1 frame, meaning that data 0 arrived correctly. The sender then transmits the next
frame, data 1. It arrives undamaged, and the receiver returns ACK 0. The sender transmits the
next frame, data 0. This time the receiver detects an error and returns a NAK frame, so the
sender retransmits the data 0 frame.
o Lost Frame: The sender is equipped with a timer that starts when a frame is transmitted.
Sometimes a frame does not arrive at the receiving end, so it can be acknowledged neither
positively nor negatively. The sender waits for an acknowledgement until the timer goes off;
when it does, the sender retransmits the last transmitted frame.
Sliding Window ARQ
Sliding Window ARQ is a technique used for continuous transmission error control.
Three Features used for retransmission:
o In this case, the sender keeps copies of all transmitted frames until they have been
acknowledged. Suppose frames 0 through 4 have been transmitted and the last
acknowledgement was for frame 2; the sender keeps copies of frames 3 and 4 until they are
received correctly.
o The receiver can send either NAK or ACK depending on the conditions. The NAK frame tells
the sender that the data have been received damaged. Since the sliding window is a
continuous transmission mechanism, both ACK and NAK must be numbered for the
identification of a frame. The ACK frame consists of a number that represents the next frame
which the receiver expects to receive. The NAK frame consists of a number that represents
the damaged frame.
o The sliding window ARQ is equipped with a timer to handle lost acknowledgements.
Suppose that n-1 frames have been sent before any acknowledgement is received. The sender
starts the timer and waits before sending any more frames. If the allotted time runs out, the
sender retransmits one or all of the frames, depending on the protocol used.
Two protocols used in sliding window ARQ:
o Go-Back-n ARQ: In the Go-Back-N ARQ protocol, if one frame is lost or damaged, the
sender retransmits all the frames sent since the last positive ACK.
Three possibilities can occur for retransmission:
o Damaged Frame: When the frame is damaged, then the receiver sends a NAK frame.
In the above figure, three frames have been transmitted before an error is discovered in the third
frame. In this case, ACK 2 has been returned, telling that frames 0 and 1 have been received
successfully without any error. The receiver discovers an error in the data 2 frame, so it returns
the NAK 2 frame. Frame 3 is also discarded, as it was transmitted after the damaged frame.
Therefore, the sender retransmits frames 2 and 3.
o Lost Data Frame: In sliding window protocols, data frames are sent sequentially. If any
frame is lost, the next frame to arrive at the receiver is out of sequence. The receiver
checks the sequence number of each frame, discovers the frame that has been skipped,
and returns a NAK for the missing frame. The sending device retransmits the frame
indicated by the NAK as well as the frames transmitted after the lost frame.
o Lost Acknowledgement: The sender can send as many frames as the window allows before
waiting for any acknowledgement. Once the limit of the window is reached, the sender has no
more frames to send and must wait for an acknowledgement. If the acknowledgement is lost,
the sender could wait forever. To avoid such a situation, the sender is equipped with a
timer that starts counting whenever the window capacity is reached. If the acknowledgement
has not been received within the time limit, the sender retransmits every frame sent since the
last ACK.
Selective-Reject ARQ
o Selective-Reject ARQ technique is more efficient than Go-Back-n ARQ.
o In this technique, only those frames are retransmitted for which negative acknowledgement
(NAK) has been received.
o The receiver's storage buffer holds the frames received after a damaged frame until the frame
in error is correctly received.
o The receiver must have an appropriate logic for reinserting the frames in a correct order.
o The sender must consist of a searching mechanism that selects only the requested frame for
retransmission.
Piggybacking
In reliable full - duplex data transmission, the technique of hooking up acknowledgments onto
outgoing data frames is called piggybacking.
Why Piggybacking?
Communications are mostly full - duplex in nature, i.e. data transmission occurs in both
directions. One method of achieving full - duplex communication is to treat the connection as a
pair of simplex links: each link comprises a forward channel for sending data and a reverse
channel for sending acknowledgments.
However, in the above arrangement, the traffic load doubles for each data unit that is transmitted:
half of all transmissions consist of acknowledgments.
So, a solution that provides better utilization of bandwidth is piggybacking. Here, sending of
acknowledgment is delayed until the next data frame is available for transmission. The
acknowledgment is then hooked onto the outgoing data frame. The data frame consists of an
ack field. The size of the ack field is only a few bits, while an acknowledgment frame
comprises of several bytes. Thus, a substantial gain is obtained in reducing bandwidth
requirement.
Working Principle
Suppose that there are two communication stations X and Y. The data frames transmitted have
an acknowledgment field, ack field that is of a few bits length. Additionally, there are frames
for sending acknowledgments, ACK frames. The purpose is to minimize the ACK frames.
The three principles governing piggybacking when the station X wants to communicate with
station Y are −
If station X has both data and acknowledgment to send, it sends a data frame with the ack
field containing the sequence number of the frame to be acknowledged.
If station X has only an acknowledgment to send, it waits for a finite period of time to see
whether a data frame is available to be sent. If a data frame becomes available, then it
piggybacks the acknowledgment with it. Otherwise, it sends an ACK frame.
If station X has only a data frame to send, it adds the last acknowledgment with it. The station
Y discards all duplicate acknowledgments. Alternatively, station X may send the data frame
with the ack field containing a bit combination denoting no acknowledgment.
Example
The following diagram illustrates the three scenarios −
HDLC
High-level Data Link Control (HDLC) is a group of communication protocols of the data
link layer for transmitting data between network points or nodes. Since it is a data link
protocol, data is organized into frames. A frame is transmitted via the network to the
destination that verifies its successful arrival. It is a bit - oriented protocol that is applicable
for both point - to - point and multipoint communications.
Transfer Modes
HDLC supports two types of transfer modes, normal response mode and asynchronous
balanced mode.
Normal Response Mode (NRM) − Here, there are two types of stations: a primary station that
sends commands and secondary stations that respond to the received commands. It is used for
both point - to - point and multipoint communications.
Asynchronous Balanced Mode (ABM) − Here, the configuration is balanced, i.e. each station
can both send commands and respond to commands. It is used for only point - to - point
communications.
HDLC Frame
HDLC is a bit - oriented protocol where each frame contains up to six fields. The structure
varies according to the type of frame. The fields of a HDLC frame are −
Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The bit
pattern of the flag is 01111110.
Address − It contains the address of the receiver. If the frame is sent by the primary station, it
contains the address(es) of the secondary station(s). If it is sent by the secondary station, it
contains the address of the primary station. The address field may be from 1 byte to several
bytes.
Control − It is 1 or 2 bytes containing flow and error control information.
Payload − This carries the data from the network layer. Its length may vary from one network
to another.
FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard code
used is CRC (cyclic redundancy check).
Random access protocols include:
o Aloha (Pure and Slotted)
o CSMA
o CSMA/CD
o CSMA/CA
Pure Aloha
Pure Aloha is used whenever a station has data available for sending over a channel. In pure
Aloha, each station transmits data over the channel without checking whether the channel is
idle or busy, so collisions may occur and data frames can be lost. After a station transmits a
data frame, it waits for the receiver's acknowledgment. If the acknowledgment does not arrive
within the specified time, the station assumes the frame has been lost or destroyed and waits
for a random amount of time, called the backoff time (Tb). It then retransmits the frame,
repeating until the data is successfully delivered to the receiver.
1. The total vulnerable time of pure Aloha is 2 × Tfr.
2. Maximum throughput occurs when G = 1/2 and is 18.4%.
3. The probability of successful transmission of a data frame is S = G × e^(-2G).
As we can see in the figure above, there are four stations accessing a shared channel and
transmitting data frames. Some frames collide because several stations send their frames at the
same time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the
receiver; the other frames are lost or destroyed. Whenever two frames occupy the shared
channel simultaneously, a collision occurs and both suffer damage: even if only the first bit of a
new frame overlaps with the last bit of a frame that is almost finished, both frames are
destroyed and both stations must retransmit their data frames.
Slotted Aloha
Slotted Aloha is designed to improve on pure Aloha's efficiency, because pure Aloha has a very
high possibility of frames colliding. In slotted Aloha, the shared channel is divided into fixed
time intervals called slots. If a station wants to send a frame over the shared channel, the frame
can only be sent at the beginning of a slot, and only one frame may be sent in each slot. If a
station is unable to begin sending at the start of a slot, it must wait until the beginning of the
next slot. However, a collision is still possible when two or more stations try to send a frame at
the beginning of the same time slot.
1. Maximum throughput occurs in slotted Aloha when G = 1 and is 36.8%.
2. The probability of successfully transmitting a data frame in slotted Aloha is S = G × e^(-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
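Both throughput formulas can be checked numerically; evaluating each at its optimal load G reproduces the percentages quoted above:

```python
import math

def pure_aloha_S(G):
    """S = G * e^(-2G): the vulnerable time is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_S(G):
    """S = G * e^(-G): slot boundaries halve the vulnerable time."""
    return G * math.exp(-G)

print(round(pure_aloha_S(0.5), 3))     # 0.184  -> about 18.4 %
print(round(slotted_aloha_S(1.0), 3))  # 0.368  -> about 36.8 %
```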
CSMA (Carrier Sense Multiple Access)
CSMA is a media access protocol in which a station senses the traffic on a channel (idle or busy)
before transmitting data. If the channel is idle, the station can send data onto the channel;
otherwise, it must wait until the channel becomes idle. This reduces the chances of a collision
on the transmission medium.
CSMA Access Modes
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel and,
if the channel is idle, immediately sends the data. Otherwise, it keeps tracking the status of the
channel and broadcasts the frame unconditionally as soon as the channel becomes idle.
Non-Persistent: It is the access mode of CSMA that defines before transmitting the data,
each node must sense the channel, and if the channel is inactive, it immediately sends the
data. Otherwise, the station must wait for a random time (not continuously), and when the
channel is found to be idle, it transmits the frames.
P-Persistent: It is a combination of the 1-persistent and non-persistent modes. In the p-
persistent mode, each node senses the channel and, if the channel is inactive, sends a frame
with probability p. If it does not transmit (probability q = 1-p), it waits a random time and tries
again in the next time slot.
O-Persistent: In the O-persistent method, a supervisory order defines each station's priority
before transmission of frames on the shared channel. When the channel is found inactive, each
station waits for its assigned turn to transmit its data.
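The per-slot decision of the p-persistent mode can be sketched as follows (the function name and the value of p are illustrative):

```python
import random

def p_persistent_step(channel_idle, p=0.1, rng=random.random):
    """One time slot of p-persistent CSMA: transmit with probability p
    when the channel is idle, otherwise defer to the next slot."""
    if not channel_idle:
        return "keep sensing"                  # busy channel: wait for idle
    return "transmit" if rng() < p else "defer to next slot"

assert p_persistent_step(False) == "keep sensing"
assert p_persistent_step(True, p=1.0) == "transmit"        # p = 1 behaves 1-persistent
assert p_persistent_step(True, p=0.0) == "defer to next slot"
```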
CSMA/ CD
It is a carrier sense multiple access/collision detection network protocol for transmitting data
frames. The CSMA/CD protocol works with the medium access control layer. A station first
senses the shared channel before broadcasting a frame; if the channel is idle, it transmits the
frame while monitoring whether the transmission succeeds. If the frame is successfully
received, the station sends the next frame. If a collision is detected, the station sends a jam/stop
signal to the shared channel to terminate the data transmission. After that, it waits for a random
time before sending the frame again.
CSMA/CA
It is a carrier sense multiple access / collision avoidance network protocol for transmitting
data frames. It also operates at the medium access control layer. After a station sends a
frame on the channel, it listens to the channel to judge whether the transmission was
successful. If the station hears only its own signal, the frame has been transmitted to the
receiver successfully. But if it hears a second signal superimposed on its own, a collision
of frames has occurred on the shared channel. In other words, the sender detects a collision
from the signal it receives back from the channel.
Following are the methods used in CSMA/CA to avoid collisions:
Interframe space (IFS): In this method, the station waits for the channel to become idle,
but even then it does not send the data immediately. Instead it waits for an additional short
period called the interframe space (IFS) and senses the channel again before transmitting.
The IFS duration is also used to assign priority: higher-priority stations get a shorter IFS.
Contention window: In this method, time is divided into slots. When a station is ready to
transmit a data frame, it chooses a random number of slots as its backoff time. If the channel
becomes busy during the countdown, the station does not restart the whole process; it freezes
the timer and resumes the countdown when the channel becomes idle again, transmitting the
data packet once the timer reaches zero.
Acknowledgment: The receiver returns an acknowledgment for each frame it receives correctly.
If the sender does not receive the acknowledgment before its timer expires, it retransmits
the data frame.
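The freeze-and-resume behaviour of the contention-window timer can be sketched in Python. This is an illustrative model only; the function name and the slot-by-slot channel representation are our own assumptions:

```python
def contention_countdown(backoff, channel_idle_per_slot):
    """Count down a CSMA/CA contention-window backoff timer.

    backoff: number of idle slots the station must observe before sending.
    channel_idle_per_slot: per-slot channel state (True = idle, False = busy).
    The timer decrements only in idle slots; in busy slots it is frozen, not
    restarted. Returns the slot index at which the station may transmit.
    """
    remaining = backoff
    for slot, idle in enumerate(channel_idle_per_slot):
        if idle:
            remaining -= 1
            if remaining == 0:
                return slot
        # Busy slot: do nothing -- the remaining count is preserved.
    raise RuntimeError("channel never idle long enough to finish backoff")
```

With a backoff of 3 and the channel pattern idle, busy, idle, idle, the station transmits in slot 3: the busy slot pauses the countdown but does not reset it.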
B. Controlled Access Protocol
It is a method of reducing data-frame collisions on a shared channel. In the controlled
access method, the stations consult one another, and a particular station may send a data
frame only when it has been authorized by the other stations. In other words, a station
cannot transmit unless the other stations have approved it. There are three controlled-access
techniques: Reservation, Polling, and Token Passing.
C. Channelization Protocols
Channelization is a multiple-access method in which the total usable bandwidth of a shared
channel is divided among multiple stations by frequency, by time, or by code. This allows all
the stations to access the channel and send their data frames at the same time.
Following are the methods of sharing the channel by frequency, time, and code:
1. FDMA (Frequency Division Multiple Access)
2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)
FDMA
Frequency division multiple access (FDMA) divides the available bandwidth into equal
frequency bands so that multiple stations can send data simultaneously, each over its own
sub-channel. Each station is assigned a reserved band to prevent crosstalk between the
channels and interference between stations.
TDMA
Time Division Multiple Access (TDMA) is a channel access method that allows the same
frequency bandwidth to be shared across multiple stations. To avoid collisions on the shared
channel, it divides time on the channel into slots and allocates a slot to each station for
transmitting its data frames. TDMA does, however, carry a synchronization overhead:
synchronization bits must be added to each slot so that every station knows when its own
time slot begins.
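A round-robin slot assignment, the simplest TDMA schedule, can be sketched in Python (illustrative only; the function name and the round-robin policy are our own assumptions):

```python
def tdma_schedule(stations, num_slots):
    """Assign time slots round-robin: station i owns every slot whose index
    satisfies slot % len(stations) == i, so no two stations ever share a
    slot and transmissions cannot collide."""
    return {slot: stations[slot % len(stations)] for slot in range(num_slots)}
```

For three stations A, B, C over six slots this yields A, B, C, A, B, C: each station transmits only in its own slots, at the cost of idle slots when a station has nothing to send.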
CDMA
Code division multiple access (CDMA) is a channel access method in which all stations can
send data over the same channel simultaneously. Each station may transmit its data frames
over the full bandwidth of the shared channel at all times; no division of the channel into
time slots or frequency bands is required. When multiple stations send data to the channel
simultaneously, their data frames are separated by unique code sequences: each station
transmits using its own code. As an analogy, consider a room in which many pairs of people
are talking at once. Two people can understand each other only if they share a common
language that the others do not use. Similarly, stations on the network communicate
simultaneously, each pair using its own code.
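The code-separation idea can be demonstrated with mutually orthogonal chip sequences, e.g. Walsh codes. The sketch below is illustrative (function names and the +1/-1 signal model are our own assumptions): each station spreads its bit with its code, the channel carries the sum, and a receiver recovers one station's bit by correlating with that station's code.

```python
def cdma_transmit(bits_by_station, codes):
    """Spread each station's bit with its chip code and sum the signals.

    bits_by_station maps a station name to a single data bit (0 or 1);
    codes maps the same names to mutually orthogonal +1/-1 chip sequences.
    Bit 1 is sent as +code, bit 0 as -code; the shared channel carries the
    chip-wise sum of all stations' signals.
    """
    length = len(next(iter(codes.values())))
    channel = [0] * length
    for name, bit in bits_by_station.items():
        sign = 1 if bit == 1 else -1
        for i, chip in enumerate(codes[name]):
            channel[i] += sign * chip
    return channel

def cdma_receive(channel, code):
    """Recover one station's bit by correlating the channel with its code.
    Orthogonality cancels the other stations' contributions."""
    correlation = sum(c * chip for c, chip in zip(channel, code))
    return 1 if correlation > 0 else 0
```

With length-4 Walsh codes, three stations can transmit at once and each bit is still recovered exactly, because the cross-correlation between distinct codes is zero.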
Wired LANs
What is a LAN?
A local Area Network (LAN) is a data communication network connecting various terminals or
computers within a building or limited geographical area. The connection between the devices could
be wired or wireless. Ethernet, Token rings, and Wireless LAN using IEEE 802.11 are examples of
standard LAN technologies.
What is Ethernet?
Ethernet is the most widely used LAN technology and is defined under IEEE standards 802.3.
The reason behind its wide usability is that Ethernet is easy to understand, implement, and
maintain, and allows low-cost network implementation. Ethernet is also flexible in terms of
the topologies it allows: classic Ethernet used a bus topology, while modern switched
Ethernet typically uses a star topology. Ethernet operates at two layers of the OSI model,
the physical layer and the data link layer. At the data link layer, the Ethernet protocol
data unit is a frame. To handle collisions on a shared medium, Ethernet uses the CSMA/CD
access control mechanism.
Although wireless networking has displaced Ethernet in many settings, Ethernet remains the
dominant technology for wired networks. Wi-Fi eliminates the need for cables by letting
users connect their smartphones or laptops to a network wirelessly, and the 802.11ac Wi-Fi
standard offers higher maximum data rates than Gigabit Ethernet. However, wired connections
are more secure and less susceptible to interference than wireless networks, which is the
main reason so many companies and organizations continue to use Ethernet.
History of Ethernet
Robert Metcalfe’s invention of Ethernet in 1973 completely changed computer networking.
The original design ran at 2.94 Mbps; Ethernet Version 2, which raised the data rate to
10 Mbps, brought it wide popularity in 1982, and the IEEE 802.3 standardization in 1983
accelerated its adoption further. Local area networks (LANs) and the internet were able to
expand quickly thanks to the rapid evolution of Ethernet, which over time reached speeds of
100 Mbps, 1 Gbps, 10 Gbps, and higher. It became the standard technology for wired network
connections, enabling dependable, fast data transmission for homes, commercial buildings,
and data centers all over the world.
There are different types of Ethernet networks that are used to connect devices and transfer
data.
Let’s discuss them in simple terms:
1. Fast Ethernet: This type of Ethernet network uses cables called twisted pair or CAT5. It
can transfer data at a speed of around 100 Mbps (megabits per second). Fast Ethernet uses
both fiber optic and twisted pair cables to enable communication. There are three categories
of Fast Ethernet: 100BASE-TX, 100BASE-FX, and 100BASE-T4.
2. Gigabit Ethernet: This is an upgrade from Fast Ethernet and is more common nowadays.
It can transfer data at a speed of 1000 Mbps, i.e. 1 Gbps (gigabit per second). Gigabit
Ethernet also uses fiber optic and twisted pair cables for communication, typically over
Cat5e or better twisted-pair cabling.
3. 10-Gigabit Ethernet: This is an advanced and high-speed network that can transmit data at
a speed of 10 gigabits per second. It uses special cables like CAT6a or CAT7 twisted-pair
cables and fiber optic cables. With the help of fiber optic cables, this network can cover
longer distances, up to around 10,000 meters.
4. Switch Ethernet: This type of network involves using switches or hubs to improve
network performance. Each workstation in this network has its own dedicated connection,
which improves the speed and efficiency of data transfer. Switch Ethernet supports a wide
range of speeds, from 10 Mbps to 10 Gbps, depending on the version of Ethernet being used.
In summary, Fast Ethernet is the basic version with a speed of 100 Mbps, Gigabit Ethernet is
faster with a speed of 1 Gbps, 10-Gigabit Ethernet is even faster with a speed of 10 Gbps, and
Switch Ethernet uses switches or hubs to enhance network performance.
The Manchester Encoding Technique is used in Ethernet.
Using Manchester encoding, data can be transmitted over a physical medium in
communication systems. It is a type of line coding where the signal transitions, as opposed to
the absolute voltage levels, serve as the data representation.
Each bit period is split into two equal halves in Manchester encoding, and the bit value is
conveyed by the transition in the middle of the period rather than by an absolute level.
In IEEE 802.3 standard Ethernet, a 0 is expressed by a high-to-low transition and a 1 by a
low-to-high transition. In both Manchester encoding and differential Manchester encoding,
the baud rate is double the bit rate.
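A small Python sketch of the IEEE 802.3 convention (illustrative only; the function name and the "H"/"L" half-bit level labels are our own) maps each bit to its pair of signal levels:

```python
def manchester_encode(bits):
    """Encode a bit string using IEEE 802.3 Manchester encoding.

    Each bit becomes two half-bit signal levels: a 0 is a high-to-low
    transition ("HL") and a 1 is a low-to-high transition ("LH").
    The output is twice as long as the input, reflecting that the baud
    rate is double the bit rate.
    """
    return "".join("LH" if b == "1" else "HL" for b in bits)
```

For example, the bit string "101" encodes to the level sequence "LHHLLH": two signal elements per bit, with a guaranteed mid-bit transition that the receiver uses for clock recovery.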
Key Features of Ethernet
1. Speed: Ethernet is capable of transmitting data at high speeds, with current Ethernet
standards supporting speeds of up to 100 Gbps.
2. Flexibility: Ethernet is a flexible technology that can be used with a wide range of devices
and operating systems. It can also be easily scaled to accommodate a growing number of
users and devices.
3. Reliability: Ethernet is a reliable technology that uses error-detection techniques (a
CRC-based frame check sequence) so that corrupted frames can be recognized and discarded,
ensuring that data is delivered accurately.
4. Cost-effectiveness: Ethernet is a cost-effective technology that is widely available and easy
to implement. It is also relatively low-maintenance, requiring minimal ongoing support.
5. Interoperability: Ethernet is an interoperable technology that allows devices from different
manufacturers to communicate with each other seamlessly.
6. Security: Ethernet networks can be secured with standardized extensions such as IEEE
802.1X port authentication and MACsec encryption to protect data from unauthorized access.
7. Manageability: Ethernet networks are easily managed, with various tools available to help
network administrators monitor and control network traffic.
8. Compatibility: Ethernet is compatible with a wide range of other networking technologies,
making it easy to integrate with other systems and devices.
9. Availability: Ethernet is a widely available technology that can be used in almost any setting,
from homes and small offices to large data centers and enterprise-level networks.
10. Simplicity: Ethernet is a simple technology that is easy to understand and use. It does not
require specialized knowledge or expertise to set up and configure, making it accessible to a
wide range of users.
11. Standardization: Ethernet is a standardized technology, which means that all Ethernet
devices and systems are designed to work together seamlessly. This makes it easier for
network administrators to manage and troubleshoot Ethernet networks.
12. Scalability: Ethernet is highly scalable, which means it can easily accommodate the addition
of new devices, users, and applications without sacrificing performance or reliability.
13. Broad compatibility: Ethernet is compatible with a wide range of protocols and
technologies, including TCP/IP, HTTP, FTP, and others. This makes it a versatile technology
that can be used in a variety of settings and applications.
14. Ease of integration: Ethernet can be easily integrated with other networking technologies,
such as Wi-Fi and Bluetooth, to create a seamless and integrated network environment.
15. Ease of troubleshooting: Ethernet networks are easy to troubleshoot and diagnose, thanks to
a range of built-in diagnostic and monitoring tools. This makes it easier for network
administrators to identify and resolve issues quickly and efficiently.
16. Support for multimedia: Ethernet supports multimedia applications, such as video and audio
streaming, making it ideal for use in settings where multimedia content is a key part of the
user experience.
In short, Ethernet is a reliable, cost-effective, and widely used LAN technology that offers
high-speed connectivity and easy manageability for local networks.
Advantages of Ethernet
Speed: Compared to a wireless connection, Ethernet provides significantly higher speed,
because an Ethernet cable is a dedicated point-to-point link. As a result, speeds of up to
10 Gigabits per second (Gbps), or even 100 Gbps, are possible.
Efficiency: An Ethernet connection over a cable such as Cat6 consumes less power than a
Wi-Fi connection, making Ethernet cabling among the most energy-efficient options.
Good data transfer quality: Because it is resistant to noise, the information transferred is of
high quality.
Ethernet LANs consist of network nodes and interconnecting media, or links.
The network nodes can be of two types:
Data Terminal Equipment (DTE): Generally, DTEs are the end devices that convert user
information into signals or reconvert received signals. Examples of DTE devices are personal
computers, workstations, file servers, and print servers, also referred to as end stations.
These devices are either the source or the destination of data frames. The data terminal
equipment may be a single piece of equipment or multiple interconnected pieces that together
perform all the functions required for the user to communicate. A user can interact with a
DTE, or the DTE may itself be the user.
Data Communication Equipment (DCE): DCEs are the intermediate network devices that receive
and forward frames across the network. They may be standalone devices such as repeaters,
network switches, and routers, or communication interface units such as interface cards and
modems. The DCE performs functions such as signal conversion and coding, and may be part of
the DTE or a separate piece of intermediate equipment.
Disadvantages of Ethernet
Distance limitations: Ethernet has distance limitations, with the maximum cable length for a
standard Ethernet network being 100 meters. This means that it may not be suitable for larger
networks that require longer distances.
Bandwidth sharing: On shared (hub-based) Ethernet, all connected devices share the
bandwidth, which can reduce network speeds as the number of devices increases; switched
Ethernet mitigates this by giving each port a dedicated link.
Security vulnerabilities: Although Ethernet includes built-in security features, it is still
vulnerable to security breaches, including unauthorized access and data interception.
Complexity: Ethernet networks can be complex to set up and maintain, requiring specialized
knowledge and expertise.
Compatibility issues: While Ethernet is generally interoperable with other networking
technologies, compatibility issues can arise when integrating with older or legacy systems.
Cable installation: Ethernet networks require the installation of physical cables, which can
be time-consuming and expensive to install.
Physical limitations: Ethernet networks require physical connections between devices, which
can limit mobility and flexibility in network design.