CN Unit I (B)

The document covers the Data Link Layer of the OSI model, detailing its responsibilities such as error detection and correction, framing, and flow control. It discusses various error control techniques including Parity Check, Checksum, and Cyclic Redundancy Check (CRC), as well as error-correcting codes like Hamming Code. Additionally, it explains the structure of link layer addresses and the types of errors that can occur during data transmission.


Computer Networks (CSE2PM03A)

S.Y. Computer Science and Engineering

Unit 2: Data Link Layer Services

Department of Computer Engineering and Technology (DCET)


Data Link Layer

Contents:
Data Link Layer Services, Types of Errors, Block Coding, Error Control:
Cyclic Redundancy Check (CRC) Code, Hamming Code, Checksum,
Sliding Window Protocols: Selective Repeat (SR) & Go-Back-N (GBN),
Channel Allocation, Multiple Access Protocols: ALOHA, CSMA/CD,
CSMA/CA, Ethernet Frame Format.

2
Data Link Layer

• The data link layer is the second layer from the bottom in the OSI (Open Systems Interconnection) network architecture model.
• It is responsible for the node-to-node delivery of data.
• Its major role is to ensure error-free transmission of information.
• The DLL is also responsible for encoding, decoding, and organizing the outgoing and incoming data.
• It is considered the most complex layer of the OSI model.

3
Data Link Layer
At the sender's side, the DLL receives packets from the Network layer, divides them into small frames, and then sends each frame bit by bit to the Physical layer.

It also attaches some special bits (for error control and addressing) at the header and the end of the frame.

At the receiver's end, the DLL takes bits from the Physical layer, organizes them into frames, and sends them to the Network layer.

4
Data Link Layer

Services provided
1. Framing
2. Addressing
3. Error Control
4. Flow Control
5. Access Control

5
Addresses in TCP/IP Network

6
Link Layer Address
 Six bytes = 48 bits
 Flat address not hierarchical
 Burned into the NIC ROM
 The first three bytes (from the left) identify the vendor, e.g. Cisco 00-00-0C, 3Com 02-60-8C; the last 24 bits are assigned uniquely by the vendor.
 The destination address can be:
 Unicast: second hex digit from the left is even (one recipient)
 Multicast: second hex digit from the left is odd (a group of stations receives the frame, e.g. conferencing applications)
 Broadcast: all ones (all stations receive the frame)
 The source address is always unicast.

7
Error Control
▪ To detect or correct errors, we need to send extra (redundant) bits with
data.

▪ Enough redundancy is added to detect an error.


▪ In error detection, the receiver knows an error occurred but does not
know which bit(s) is(are) in error.
▪ Error detection has less overhead than error correction.

8
Types of Error
• Single-Bit Error
• Multiple-Bit Error
• Burst Error

9
Error Detection and Correction Codes
• During transmission of binary data from one system to another, noise may be added. Due to this, there may be errors in the received data at the other system.

• That means a bit 0 may change to 1, or a bit 1 may change to 0. We cannot avoid the interference of noise.

• The original data can be recovered by first detecting whether any error is present and then correcting those errors.

For this purpose, the following codes are used:

 Error detection codes

 Error correction codes

10
Error Detection Techniques

11
Parity Check
Parity check is done by adding an extra bit, called parity bit to the data to
make number of 1s either even in case of even parity, or odd in case of odd
parity.

12
Parity Check

While creating a frame, the sender counts the number of 1s in it and adds
the parity bit in following way

• In case of even parity: If number of 1s is even then parity bit value is 0. If


number of 1s is odd then parity bit value is 1.
• In case of odd parity: If number of 1s is odd then parity bit value is 0. If
number of 1s is even then parity bit value is 1.

On receiving a frame, the receiver counts the number of 1s in it. In case of


even parity check, if the count of 1s is even, the frame is accepted,
otherwise it is rejected. Similar rule is adopted for odd parity check.

13
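The parity rules above can be sketched in a few lines of Python (an illustrative sketch; the function names are ours, not from the slides):

```python
def parity_bit(data: str, even: bool = True) -> str:
    """Return the parity bit for a string of '0'/'1' characters."""
    ones = data.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"   # even parity
    return "1" if ones % 2 == 0 else "0"       # odd parity

def accept(frame: str, even: bool = True) -> bool:
    """Receiver side: accept the frame if the count of 1s has the right parity."""
    return frame.count("1") % 2 == (0 if even else 1)

# Sender appends the parity bit; the receiver verifies it.
frame = "011" + parity_bit("011")   # even parity -> "0110"
assert accept(frame)                # frame accepted
assert not accept("0111")           # single-bit error detected
```

Note that a two-bit error (e.g. "0110" received as "1010") leaves the count of 1s unchanged in parity, which is exactly the limitation discussed below.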
Parity Check
Example of Even Parity Code

Binary Code    Even Parity bit    Even Parity Code
000            0                  0000
001            1                  0011
010            1                  0101
011            0                  0110

Example of Odd Parity Code

Binary Code    Odd Parity bit     Odd Parity Code
000            1                  0001
001            0                  0010
010            0                  0100
011            1                  0111

Parity check is suitable for single-bit error detection only.
14
Drawbacks Of Single Parity Checking
It can detect only single-bit errors (more precisely, any odd number of bit errors).

If two bits are interchanged, or any even number of bits flip, it cannot detect the error.

Performance can be improved by using Two-Dimensional Parity Check, which organizes the data in the form of a table.

15
Checksum
A Checksum is an error detection technique based on the concept of redundancy. The
checksum is used in the Internet by several protocols although not at the data link layer.

Checksum Sender
The sender follows the given steps:
1. The block unit is divided into k sections, each of n bits.
2. All the k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented and becomes the checksum field.
4. The original data and the checksum field are sent across the network.
Checksum Checker
The receiver follows the given steps:
1. The block unit is divided into k sections, each of n bits.
2. All the k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented.
4. If the result of the sum is zero, the data is accepted; otherwise, the data is discarded.
16
Checksum

17
Checksum Example
▪ Suppose our data is a list of five 4-bit numbers that we want to send to a
destination.
▪ For example, if the set of numbers is (7, 11, 12, 0, 6)
▪ Sum of the numbers is 36
▪ The data transmitted is (7, 11, 12, 0, 6, −36).
▪ Receiver will add all the numbers including checksum
▪ In this case summation = 0 meaning that the data is received correctly
▪ If the sum is non zero then it is assumed that data has error

18
Checksum Example
▪ Let us redo the exercise using one's complement arithmetic.

▪ The sender initializes the checksum to 0 and adds all data items and the checksum (the checksum is considered as one data item).

▪ The result is 36. However, 36 cannot be expressed in 4 bits. The extra two bits are wrapped and added to the sum to create the wrapped sum value 6.

▪ The sum is then complemented, resulting in the checksum value 9 (15 − 6 = 9). The sender now sends six data items to the receiver, including the checksum 9.

19
Checksum Example
▪ The receiver follows the same procedure as the sender.

▪ It adds all data items (including the checksum); the result is 45. The sum
is wrapped and becomes 15.

▪ The wrapped sum is complemented and becomes 0.

▪ Since the value of the checksum is 0, this means that the data is not
corrupted.

▪ The receiver drops the checksum and keeps the other data items.

▪ If the checksum is not zero, the entire packet is dropped.


20
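The worked example above can be reproduced with a short Python sketch of 4-bit one's complement arithmetic (the function name is ours):

```python
def ones_complement_sum(values, bits=4):
    """Add values, wrapping any carry beyond `bits` back into the sum."""
    mask = (1 << bits) - 1
    total = 0
    for v in values:
        total += v
        total = (total & mask) + (total >> bits)  # end-around carry
    return total

data = [7, 11, 12, 0, 6]
wrapped = ones_complement_sum(data)        # 36 wraps to 6
checksum = ((1 << 4) - 1) - wrapped        # complement: 15 - 6 = 9
assert wrapped == 6 and checksum == 9

# Receiver: add all items including the checksum; the complement must be 0.
received = data + [checksum]
assert ((1 << 4) - 1) - ones_complement_sum(received) == 0
```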
Checksum Example

21
Checksum
Advantages
• Simple to implement − It uses very minimal hardware and software resources.
• Efficient − Checksum codes can detect the most common errors in digital communication.
• Fast − Checksums can be computed very quickly, which makes them suitable for real-time data packet transmission.
• Low latency − Since few resources and computations are required, latency is very low.

22
Checksum
Disadvantages
• Limited error detection capability − Although versatile and easy to implement, a checksum can reliably detect only the most common error patterns.
• Limited error correction capability − Checksum codes can detect errors in the data packet but cannot correct them, so checksumming is only an error detection technique.
• Susceptible to errors − Checksum codes can be fooled, especially when multiple errors occur in the same data packet and cancel out in the sum.
• Sensitivity to data packet size − Checksum codes are sensitive to the length of the data packet: if the packet is too small, there is a higher chance an error is missed, while for large packets the checksum overhead increases.
23
Cyclic Redundancy Check (CRC)
• A cyclic redundancy check (CRC) is an error-detecting code commonly used in
digital networks and storage devices to detect accidental changes to raw data
• Unlike the checksum scheme, which is based on addition, CRC is based on binary
division.
• In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are
appended to the end of the data unit so that the resulting data unit becomes
exactly divisible by a second, predetermined binary number.
• At the destination, the incoming data unit is divided by the same number. If at this
step there is no remainder, the data unit is assumed to be correct and is therefore
accepted.
• A remainder indicates that the data unit has been damaged in transit and therefore
must be rejected. 24
Cyclic Redundancy Check (CRC)

25
Cyclic Redundancy Check (CRC)
Modulo-2 Division:
The process of modulo-2 binary division is the same as the familiar division process we use for decimal numbers, except that subtraction is replaced by XOR.
• In each step, a copy of the n-bit divisor (the key) is XORed with the leading n bits of the dividend (the data).
• The result of the XOR operation (the remainder) is (n−1) bits; one extra bit is pulled down to make it n bits long for the next step.
• When there are no bits left to pull down, the division is complete. The (n−1)-bit remainder is appended to the data at the sender side.

26
Standard Polynomials

Generator polynomial = x³ + x² + 1
                     = 1·x³ + 1·x² + 0·x¹ + 1·x⁰ → 1101
27
CRC: Example 1- Sender side
Data word to be sent – 100100
Key = 1101

The remainder is 001 and hence the encoded data sent is 100100001.
28
CRC: Example 1- Receiver side
Code word received at the receiver side is 100100001

The remainder is all zeros. Hence, the data received has no error.

29
CRC: Example 2- Receiver side
Code word received-100000001 Key - 1101

Since the remainder is not all zeroes, the error is detected at the receiver side.
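Both examples can be checked with a small modulo-2 division routine in Python (a teaching sketch operating on bit strings, not a production CRC implementation):

```python
def crc_remainder(bits: str, key: str) -> str:
    """Modulo-2 division: XOR replaces subtraction at every step."""
    rem = 0
    key_val = int(key, 2)
    for b in bits:
        rem = (rem << 1) | int(b)      # pull the next bit down
        if rem >> (len(key) - 1):      # leading bit set -> XOR with the key
            rem ^= key_val
    return format(rem, "0{}b".format(len(key) - 1))

data, key = "100100", "1101"
crc = crc_remainder(data + "0" * (len(key) - 1), key)  # append 3 zero bits
assert crc == "001"
codeword = data + crc                                  # "100100001"
assert crc_remainder(codeword, key) == "000"           # receiver: no error
assert crc_remainder("100000001", key) != "000"        # receiver: error detected
```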
Error-correcting codes
• Error-correcting codes (ECC) are a sequence of numbers generated by
specific algorithms for detecting and removing errors in data that has
been transmitted over noisy channels.
• Error correcting codes ascertain the exact number of bits that has been
corrupted and the location of the corrupted bits

Types of Error
Correcting
Codes

31
Hamming Code
• Hamming code is a block code that is capable of detecting up to two
simultaneous bit errors and correcting single-bit errors
• The source encodes the message by inserting redundant bits within the
message.
• These redundant bits are extra bits that are generated and inserted at
specific positions in the message itself to enable error detection and
correction.
• When the destination receives this message, it performs recalculations to
detect errors and find the bit position that has error.

32
Hamming Code: Encoding
Encoding a message by Hamming Code
Step 1 − Calculation of the number of redundant bits.
Step 2 − Positioning the redundant bits.
Step 3 − Calculating the values of each redundant bit.

Step 1
If the message contains m data bits, then r redundant bits are added to it.
The value of r must satisfy the relation: 2^r ≥ m + r + 1.
For example: if the number of data bits m = 7, then r = 4.
Length of the code word = 7 + 4 = 11.

33
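The relation 2^r ≥ m + r + 1 has no closed-form solution, but r can be found by trying increasing values, e.g. in Python:

```python
def redundant_bits(m: int) -> int:
    """Smallest r satisfying 2**r >= m + r + 1."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

assert redundant_bits(7) == 4   # 2^4 = 16 >= 7 + 4 + 1 = 12
assert redundant_bits(4) == 3   # the classic (7, 4) Hamming code
```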
Hamming Code: Encoding
Step 2
The r redundant bits are placed at bit positions that are powers of 2, i.e. 1, 2, 4, 8, 16, etc.
They are referred to as r1 (at position 1), r2 (at position 2), r3 (at position 4), r4 (at position 8), and so on.

Step 3
The redundant bits are parity bits. A parity bit is an extra bit that makes the number of 1s either even or odd. The two types of parity are −
Even Parity − Here the total number of 1s in the message is made even.
Odd Parity − Here the total number of 1s in the message is made odd.

34
Hamming Code: Encoding
Each redundant bit, ri, is calculated as the parity, generally even parity, based
upon its bit position. It covers all bit positions whose binary representation
includes a 1 in the ith position except the position of ri
• r1 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the least significant position excluding 1 (3, 5, 7, 9, 11 and so
on)
• r2 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the position 2 from right except 2 (3, 6, 7, 10, 11 and so on)
• r3 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the position 3 from right except 4 (5-7, 12-15, 20-23 and so
on)

35
Hamming Code: Encoding
Sometimes the redundant bits are named after their position, e.g. R1, R2, R4, R8, instead of R1, R2, R3, R4. In both cases the calculation process remains the same.

36
Hamming Code: Encoding
• The number of data bits = 7 (m)
• The number of redundant bits = 4 (r)
• The total number of bits = 11 (n = m + r)
• The redundant bits are placed at positions corresponding to power of 2- 1, 2, 4,
8
• data to be transmitted is 1011001

37
Hamming Code: Encoding
Determining the parity bits:

To find the redundant bit R1, we check for even parity. Since the total number of 1s in all the bit positions corresponding to R1 is an even number, R1 = 0.

To find the redundant bit R2, we check for even parity. Since the total number of 1s in all the bit positions corresponding to R2 is odd, R2 = 1.
38
Hamming Code: Encoding
Determining the parity bits:

To find the redundant bit R4, we check for even parity. Since the total number of 1s in all the bit positions corresponding to R4 is odd, R4 = 1.

To find the redundant bit R8, we check for even parity. Since the total number of 1s in all the bit positions corresponding to R8 is an even number, R8 = 0.
39
Hamming Code: Encoding

Thus, the data transferred is:

40
Hamming Code: Encoding
Consider an example, Data: 1001101. Use modulo-2 addition to calculate the parity bits:

R1 = D3 + D5 + D7 + D9 + D11 => 1+0+1+0+1 => 1
R2 = D3 + D6 + D7 + D10 + D11 => 1+1+1+0+1 => 0
R4 = D5 + D6 + D7 => 0+1+1 => 0
R8 = D9 + D10 + D11 => 0+0+1 => 1

41
Hamming Code: Encoding
Consider another example where we send the data stream 1000001 [ASCII 'A'].
Now we will encode the message.
Use modulo-2 addition to calculate the parity bits:

R1 = D3 + D5 + D7 + D9 + D11 => 1+0+0+0+1 => 0
R2 = D3 + D6 + D7 + D10 + D11 => 1+0+0+0+1 => 0
R4 = D5 + D6 + D7 => 0+0+0 => 0
R8 = D9 + D10 + D11 => 0+0+1 => 1

Data stream to be sent on the transmission line from the sender:

Bit:      1  0  0  1  0  0  0  0  1  0  0
Position: 11 10 9  8  7  6  5  4  3  2  1
42
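The three encoding steps can be combined into one Python function; it assumes the data string lists the data bits in position order (D3, D5, D6, D7, ...) and returns the codeword MSB-to-LSB as on the slide. The function name is ours:

```python
def hamming_encode(data: str) -> str:
    """Hamming-encode a bit string using even parity."""
    m = len(data)
    r = 0
    while 2 ** r < m + r + 1:        # step 1: number of redundant bits
        r += 1
    n = m + r
    code = [0] * (n + 1)             # 1-indexed bit positions
    j = 0
    for pos in range(1, n + 1):      # step 2: data bits skip powers of 2
        if pos & (pos - 1):          # true when pos is NOT a power of 2
            code[pos] = int(data[j])
            j += 1
    for i in range(r):               # step 3: parity bit at position p
        p = 1 << i                   # covers positions with bit p set
        for pos in range(1, n + 1):
            if pos != p and pos & p:
                code[p] ^= code[pos]
    return "".join(str(b) for b in reversed(code[1:]))  # positions n .. 1

assert hamming_encode("1000001") == "10010000100"   # the ASCII 'A' example
```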
Hamming Code: Decoding
Once the receiver gets an incoming message, it performs recalculations to
detect errors and correct them. The steps for recalculation are −
• Step 1 − Calculation of the number of redundant bits.
• Step 2 − Positioning the redundant bits.
• Step 3 − Parity checking.
• Step 4 − Error detection and correction

Step 1
• Using the same formula as in encoding, the number of redundant bits is ascertained.
Step 2
• The r redundant bits are placed at bit positions that are powers of 2, i.e. 1, 2, 4, 8, 16, etc.
43
Hamming Code: Decoding
Step 3
Parity bits are recalculated from the data bits and the redundant bits using the same rule as during generation of c1, c2, c3, c4, etc. Thus
c1 = parity(1, 3, 5, 7, 9, 11 and so on)
c2 = parity(2, 3, 6, 7, 10, 11 and so on)
c4 = parity(4-7, 12-15, 20-23 and so on)
Step 4
The decimal equivalent of the parity bits' binary values is calculated. If it is 0, there is no error. Otherwise, the decimal value gives the bit position which is in error.
For example, if c4c3c2c1 = 1001, it implies that the bit at position 9 is in error.
The bit is flipped to get the correct message.
44
Hamming Code: Example
- when this stream is received at receiver same process will be followed for
decoding if we are getting same values for parity bits as we got at sender side
then we say that transmission was error free
- Consider an erroneous stream where 5th bit is flipped
Data stream received on
1 1 0 1 0 0 0 0 1 0 0
transmission line from sender
11 10 9 8 7 6 5 4 3 2 1

C1 = R1, D3, D5, D7, D9, D11 => 0+1+1+0+0+1 => 1. Now getting corrupted bit position
C2 = R2, D3, D6, D7, D10, D11 => 0+1+0+0+0+1 => 0. = (C8, CR4, C2, C1)
C4 = R4, D5, D6, D7 => 0+1+0+0 => 1.
= (0, 1, 0, 1)2
C8 = R8, D9, D10, D11 => 1+0+0+1 => 0.
= (5)10
To get the correct data, receiver will complement the bit in the position 5
45
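The receiver's recalculation (steps 3 and 4) can be sketched the same way, again assuming the codeword is written MSB-to-LSB as in the slide:

```python
def hamming_syndrome(codeword: str) -> int:
    """Return 0 if no error, else the 1-based position of the flipped bit."""
    n = len(codeword)
    bits = [0] + [int(b) for b in reversed(codeword)]  # bits[pos], pos = 1..n
    syndrome, p = 0, 1
    while p <= n:
        parity = 0
        for pos in range(1, n + 1):
            if pos & p:              # check bit Cp covers these positions
                parity ^= bits[pos]
        if parity:
            syndrome |= p            # builds (C8 C4 C2 C1) bit by bit
        p <<= 1
    return syndrome

received = "10010010100"             # the 'A' codeword with the 5th bit flipped
pos = hamming_syndrome(received)
assert pos == 5
i = len(received) - pos              # string index of bit position `pos`
corrected = received[:i] + ("0" if received[i] == "1" else "1") + received[i + 1:]
assert corrected == "10010000100"    # the original codeword is recovered
```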
Flow Control and Error control

• The most important responsibilities of the data link layer are flow control and error control. Collectively, these functions are known as data link control.

• Flow control refers to a set of procedures used to restrict the amount of data that the sender can send before waiting for acknowledgment.

• Error control in the data link layer is based on automatic repeat request (ARQ), which is the retransmission of data.

46
Flow Control
• Flow control is a technique that allows two stations working at different speeds to
communicate with each other.
• In data link layer, flow control restricts the number of frames the sender can send
before it waits for an acknowledgment from the receiver.
• If the rate of produced frames is higher than the rate of consumed frames,
frames at the receiving end need to be buffered while waiting to be consumed
(processed).
• As an unlimited buffer size at the receiving side is not possible, one of the following approaches is used:
 The receiving data-link layer drops frames if its buffer is full.
 The receiving data-link layer sends feedback to the sending data-link layer, asking it to stop or slow down.
• Different data-link-layer protocols use different strategies for flow control
47
Flow Control

• Feedback-based flow control − In these protocols, the sender sends frames only after it has received acknowledgments from the receiver. This is used in the data link layer.

• Rate-based flow control − These protocols have built-in mechanisms to restrict the rate of transmission of data without requiring acknowledgment from the receiver. This is used in the network layer and the transport layer.
48
Flow Control

- The data-link layer at the sending node tries to push frames toward the data-link layer at the receiving node.
- If the receiving node cannot process and deliver the packets to its network layer at the same rate that the frames arrive, it becomes overwhelmed with frames.
- Flow control in this case can be feedback from the receiving node to the sending node to stop or slow down pushing frames.

49
Flow Control
- Flow and error control can be combined.
- In a simple situation, the acknowledgment that is sent for flow control can also be used for error control to tell the sender the packet has arrived uncorrupted.
- The lack of an acknowledgment means that there is a problem with the sent frame.
- A frame that carries an acknowledgment is normally called an ACK to distinguish it from a data frame.
50
Taxonomy of protocols
Data link layer protocols are divided into two categories based on whether the transmission
channel is noiseless or noisy.

51
Noiseless Channels
A noiseless channel is an ideal channel where no frames are lost, duplicated, or corrupted. We introduce two protocols for this type of channel:

• Protocol that does not use flow control: Simplest Protocol
• Protocol that uses flow control: Stop-and-Wait Protocol

52
Simplest Protocol
Step 1 − The Simplest Protocol has neither flow nor error control.
Step 2 − It is a unidirectional protocol where data frames travel in one direction only: from the sender to the receiver.
Step 3 − It assumes that:
• The receiver can handle any frame it receives with a processing time that is small enough to be negligible.
• The data link layer of the receiver immediately removes the header from the frame and hands the data packet to its network layer.
• The network layer can also accept the packet immediately.
53
Simple Protocol

• The sender sends a sequence of frames without even thinking about the receiver.

• To send three frames, three events occur at the sender side and three events at the receiver side.

• The data frames are shown as tilted boxes; the height of a box represents the transmission time between the first bit and the last bit of the frame.
54
Stop & Wait Protocol
Step 1 − If data frames arrive faster than they can be processed, the frames must be stored.
Step 2 − If the receiver is receiving data from many sources, it may not have enough storage space.
Step 3 − To prevent the receiver from becoming overwhelmed with frames, the sender must slow down. There must be an ACK from the receiver to the sender.
Step 4 − In this protocol the sender sends one frame, stops until it receives confirmation from the receiver, and then sends the next frame.
Step 5 − We still have unidirectional communication for data frames, but auxiliary ACK frames travel in the other direction. We add flow control to the previous protocol.
55
Stop & Wait Protocol
• The sender sends one frame and waits
for feedback from the receiver.
• When the ACK arrives, the sender
sends the next frame.
• Note that sending two frames in the
protocol involves the sender in four
events and the receiver in two events.

56
Noisy Channel

In reality, noiseless channels are nonexistent. For noisy channels, the following protocols are used for flow and error control:

• Stop-and-Wait Automatic Repeat Request
• Go-Back-N Automatic Repeat Request
• Selective Repeat Automatic Repeat Request
57
Characteristics of Stop-and-Wait ARQ
• The Stop-and-Wait ARQ is a sliding window protocol with a window size equal to 1.
• The Stop-and-Wait ARQ is an example of a closed-loop, or connection-oriented, protocol.
• The sender sends the data frame with a sequence number.
• The sender also maintains a copy of the data frame that is being currently sent so
that if the ACK is not received then the sender can re-transmit the frame.
• The sender can send only one frame at a time and the receiver can also receive
only one frame at a time.
• In the Stop-and-Wait ARQ, the sender needs to maintain a timer.
• To detect corrupted frames, we need to add a CRC to each data frame. When a
frame arrives at the receiver site, it is checked.
58
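The retransmit-on-timeout behaviour can be illustrated with a toy simulation. This is purely illustrative: the channel is modelled by a list of booleans saying whether each ACK arrives, and all names are ours:

```python
def stop_and_wait(frames, ack_arrives):
    """Send frames one at a time; resend whenever the ACK does not arrive."""
    log = []
    acks = iter(ack_arrives)
    for seq, frame in enumerate(frames):
        while True:
            log.append(("send", seq % 2, frame))    # 1-bit sequence number
            if next(acks):
                log.append(("ack", (seq + 1) % 2))  # ACK names the next frame
                break                               # window slides by one
            log.append(("timeout", seq % 2))        # timer expires: resend
    return log

# Frame "B" is transmitted twice because the ACK for its first copy is lost.
log = stop_and_wait(["A", "B"], [True, False, True])
assert log.count(("send", 1, "B")) == 2
```

Keeping a copy of the outstanding frame until its ACK arrives is exactly the "sender maintains a copy" rule listed above.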
Stop-and-Wait Automatic Repeat
Request

59
Stop-and-Wait Automatic Repeat
Request
Problem of Lost Data Packet Problem of Lost Acknowledgement

60
Stop-and-Wait Automatic Repeat
Request
Problem of Delayed Acknowledgement Problem of Damaged Packet

61
Advantages of Stop-and-Wait ARQ

• The stop and wait ARQ can be used in both the data link layer and the
transport layer.
• The stop and wait ARQ provides both error management and flow
control management.
• The stop and wait ARQ works in half-duplex mode as at a time only
the sender can send the message or the receiver can send the ACK.
• The stop and wait ARQ ensures that the information is received by the
receiver in the order it was sent.

62
Limitations of Stop-and-Wait ARQ

• The data can be lost during transmission. In such a case, the sender waits for the ACK and the receiver waits for the data frame for an infinite amount of time.
• The ACK from the receiver may get lost in the channel, so the sender waits for the ACK for an infinite amount of time.
• The window size of the sender and the receiver is only 1, so only one frame can be sent at a time.
• Because the protocol relies on a timer, the sender must wait for a long duration before retransmission. Hence, the Stop-and-Wait ARQ is a slow protocol.

63
Sliding Window Protocol
• The sliding window is a technique for sending multiple frames at a time.
• It controls the data packets between two devices where reliable and gradual delivery of data frames is needed.
• In this technique, each frame is sent with a sequence number.
• The sequence numbers are used to find missing data at the receiver end.
• The sliding window technique uses sequence numbers to avoid duplicate data.

64
Sliding Window
Protocol

• Suppose that we have a sender window and a receiver window, each of size 4.
• So the sequence numbering of both windows will be 0, 1, 2, 3, 0, 1, 2, and so on.
• The diagram shows the positions of the windows after sending frames and receiving acknowledgments.
65
Types of Sliding Window Protocol

66
Go-Back-N ARQ
• Go-Back-N ARQ stands for Go-Back-N Automatic Repeat Request.
• It is a data link layer protocol that uses a sliding window method.
• Go-Back-N ARQ, we use the concept of a timer.
• When the receiver receives the correct frame, it sends back an
acknowledgment or ACK.
• When sender gets an ACK for a data frame, it shifts its window
forward and sends the next frame.
• If the ACK is not received within the specified period then all the data
frames starting from the lost frame are retransmitted.
• The frames are sequentially numbered and a finite number of frames
are sent. 67
Go-Back-N ARQ
• In these protocols, the sender has a buffer called the sending window and the receiver has a buffer called the receiving window.
• The size of the sending window determines the sequence numbers of the outbound frames.
• If the sequence number of the frames is an n-bit field, then the range of sequence numbers that can be assigned is 0 to 2^n − 1.
• Consequently, the maximum size of the sending window is 2^n − 1.
• For example, with n = 2 bits the binary sequence numbers are 00, 01, 10, 11, i.e. 0, 1, 2, 3, repeating as 0, 1, 2, 3, 0, 1, and so on, and the sending window can hold at most 3 frames.
• The size of the receiving window is the maximum number of frames that the receiver can accept at a time. The receiver's window size is always 1 for Go-Back-N ARQ.
• The sending window determines the maximum number of frames that the sender can send before receiving an acknowledgment.
68
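The go-back rule (on a loss, resend everything from the unacknowledged frame onward) can be shown with a toy sender. This is an illustrative sketch with our own names; `lost` lists the frames whose first transmission fails:

```python
def go_back_n(num_frames, window, lost):
    """Return the order in which frame numbers are transmitted."""
    sent, base, lost = [], 0, set(lost)
    while base < num_frames:
        in_window = range(base, min(base + window, num_frames))
        sent.extend(in_window)                  # send the whole window
        failed = [s for s in in_window if s in lost]
        if failed:
            lost.discard(failed[0])             # retransmission will succeed
            base = failed[0]                    # go back to the lost frame
        else:
            base = min(base + window, num_frames)
    return sent

# Frames 0..5, window 4, frame 2 lost once:
# 0 1 2 3 go out, ACKs stop at frame 1, so 2 3 4 5 are all re-sent.
assert go_back_n(6, 4, lost=[2]) == [0, 1, 2, 3, 2, 3, 4, 5]
```

Frames 3, 4, and 5 were received (or would have been) after the loss, yet they are retransmitted anyway; this is the inefficiency that Selective Repeat removes.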
Go-Back-N ARQ
In Go-Back-4 ARQ, the size of the sender window is 4.

69
Advantages of Go-Back-N ARQ
Advantages of Go-Back-N ARQ
1. It can send multiple frames at once.
2. Pipelining is present in the Go-Back-N ARQ i.e. a frame can be sent by the
sender before receiving the acknowledgment of the previously sent frame. This
results in a lesser waiting time for the frame.
3. It handles corrupted as well as out-of-order frames which result in minimal
frame loss.

Disadvantages of Go-Back-N ARQ


1. If acknowledgment for a frame is not received, the whole window of frames is
retransmitted instead of just the corrupted frame. This makes the Go Back N
ARQ protocol inefficient.
2. Retransmission of all the frames on detecting a corrupted frame increases
channel congestion and also increases the bandwidth requirement.
3. It is more time-consuming because while retransmitting the frames on detecting
a corrupted frame, the error-free frames are also transmitted. 70
Selective Repeat ARQ
• The Go-Back-N ARQ protocol works well if errors are rare.
• But if there are many errors in the frames, a lot of bandwidth is lost in sending the frames again.
• In the Selective Repeat ARQ protocol, the size of the sender window is always equal to the size of the receiver window. The size of the sliding window is always greater than 1.
• If the receiver receives a corrupt frame, it does not directly discard it.
• It sends a negative acknowledgment (NAK) to the sender.
• The sender sends that frame again as soon as it receives the negative acknowledgment.
• There is no waiting for a time-out to resend that frame.

71
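The standard window-size limits for the two sliding-window protocols (not stated explicitly on these slides: with n-bit sequence numbers, Go-Back-N allows a sender window of up to 2^n − 1, while Selective Repeat caps both windows at 2^(n−1) to avoid confusing old and new frames) can be written down directly:

```python
def gbn_max_window(n_bits: int) -> int:
    """Go-Back-N: sender window up to 2^n - 1 (receiver window is 1)."""
    return 2 ** n_bits - 1

def sr_max_window(n_bits: int) -> int:
    """Selective Repeat: sender and receiver windows are both 2^(n-1)."""
    return 2 ** (n_bits - 1)

assert gbn_max_window(2) == 3   # sequence numbers 0..3 -> window of 3
assert sr_max_window(3) == 4    # sequence numbers 0..7 -> window of 4
```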
Selective Repeat ARQ

72
Selective Repeat ARQ

73
Advantages of Selective Repeat ARQ
• Efficient Data Transmission
• Minimized Congestion
• Faster Recovery
• Optimized Network Utilization
• Improved Reliability
• Supports High Error Rates
• Adaptive
• Higher Throughput
• Minimal Latency
• Efficient for Large Windows
74
Disadvantages of Selective Repeat ARQ

• Increased Complexity
• Higher Overhead
• Out-of-Order Delivery
• Retransmission Overhead
• Increased Delay
• Synchronization Dependency

75
76
Channel Allocation
• Channel allocation is a process in which a single channel is divided and allotted to multiple users in order to carry out user-specific tasks.
• Channel allocation strategies are designed in such a way that there is efficient use of frequencies, time slots, and bandwidth.

77
Static Channel Allocation
• In static channel allocation scheme, a fixed portion of the frequency
channel is allotted to each user.
• For N competing users, the bandwidth is divided into N channels using
frequency division multiplexing (FDM), and each portion is assigned to
one user.
• This scheme is also referred as fixed channel allocation or fixed channel
assignment.
• In this allocation scheme, there is no interference between the users since
each user is assigned a fixed channel.
• However, it is not suitable in case of a large number of users with variable
bandwidth requirements

78
Dynamic Channel Allocation
 In dynamic channel allocation scheme, frequency bands are not
permanently assigned to the users.
 Instead channels are allotted to users dynamically as needed, from a
central pool.
 The allocation is done considering a number of parameters so that
transmission interference is minimized.
 This allocation scheme optimizes bandwidth usage and results in faster transmission.
 Dynamic channel allocation is further divided into centralized and
distributed allocation.

79
Multiple Access Protocol

 The data link layer is used in a computer network to transmit data between two devices or nodes.
 It is divided into two sublayers: data link control and multiple access resolution/protocol.
 The upper sublayer is responsible for flow control and error control in the data link layer, and hence it is termed logical link control (LLC).
 The lower sublayer is used to handle and reduce collisions caused by multiple access on a channel.
 Hence it is termed media access control (MAC), or multiple access resolution.
80
Multiple Access Protocols

81
Random Access Protocol
• No station is superior to another station and none is assigned the control
over another.
• No station permits, or does not permit, another station to send.
• There is no fixed time for sending data
• There is no fixed sequence of stations sending data
• Protocols
•ALOHA (Pure and Slotted ALOHA)
•CSMA
•CSMA/CD
•CSMA/CA

82
ALOHA
It is designed for wireless LAN (Local Area Network) but can also be used in
a shared medium to transmit data.
Aloha Rules
1. Any station can transmit data to the channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur, and data frames may be lost, when multiple
stations transmit at the same time.
4. Acknowledgment of the frames exists in Aloha; there is no collision
detection.
5. It requires retransmission of data after some random amount of time.
83
Pure ALOHA
• In pure Aloha, each station transmits data on the channel without
checking whether the channel is idle or not.
• Collisions may therefore occur, and the data frame can be lost.
• When a station transmits a data frame on the channel, pure Aloha
waits for the receiver's acknowledgment.
• If the acknowledgment does not arrive within the specified time,
the station waits for a random amount of time, called the backoff
time (Tb), and assumes the frame has been lost or destroyed.
• Therefore, it retransmits the frame until all the data are successfully
transmitted to the receiver.
84
Pure ALOHA

85
Pure ALOHA
• There are four stations for
accessing a shared channel and
transmitting data frames.
• Some frames collide because
most stations send their frames
at the same time.
• Only two frames, including frame
1.1, are successfully transmitted
to the receiver end.
• At the same time, other frames
are lost or destroyed.
• Whenever two frames occupy the
shared channel simultaneously, a
collision occurs and both suffer
damage.
• A collision happens even if only
the first bit of a new frame
overlaps with the last bit of a
frame that is almost finished. 86
Pure ALOHA
Efficiency-
Efficiency of Pure Aloha (η) = G × e^(-2G)
where G = average number of frames offered to the channel per frame
transmission time

Maximum Efficiency-
For maximum efficiency,
• We put dη / dG = 0
• Maximum value of η occurs at G = 1/2
• Substituting G = 1/2 in the above expression, we get-

Maximum efficiency of Pure Aloha


= (1/2) × e^(-2 × 1/2)
= 1 / 2e
≈ 0.184
= 18.4%
87
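The maximization on this slide can be checked numerically. The short Python sketch below uses a grid search instead of calculus and confirms that S(G) = G·e^(-2G) peaks at G = 1/2 with throughput ≈ 0.184:

```python
import math

# Grid-search check of the slide's result: S(G) = G * e^(-2G) peaks at G = 1/2.
def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)

# Evaluate the throughput on a fine grid of offered loads G in (0, 3).
peak_G = max((g / 1000 for g in range(1, 3000)), key=pure_aloha_throughput)
print(round(peak_G, 2), round(pure_aloha_throughput(0.5), 3))  # 0.5 0.184
```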
Vulnerable period for Pure ALOHA

88
Slotted ALOHA
• In slotted Aloha, the shared channel is divided into a fixed time interval
called slots.
• If a station wants to send a frame on the shared channel, it can
only begin transmission at the start of a slot, and only one frame
may be sent per slot.
• If a station misses the beginning of a slot, it must wait until
the beginning of the next slot.
• However, a collision is still possible when two or more stations
try to send a frame at the beginning of the same time slot.

89
Slotted ALOHA

90
Slotted ALOHA
Efficiency-
Efficiency of Slotted Aloha (η) = G × e^(-G)
where G = average number of frames offered to the channel per time slot

Maximum Efficiency-
For maximum efficiency,
We put dη / dG = 0
Maximum value of η occurs at G = 1
Substituting G = 1 in the above expression, we get-

Maximum efficiency of Slotted Aloha


= 1 × e^(-1)
= 1 / e
≈ 0.368
= 36.8%
91
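The 1/e bound can also be checked empirically. The sketch below is a minimal Monte-Carlo model (the station count, transmit probability, and slot count are arbitrary illustrative choices): each of n stations transmits in a slot with probability p, and a slot succeeds only when exactly one station transmits. With offered load G = n·p = 1, the success rate lands near 36.8%:

```python
import random

# Minimal Monte-Carlo model of slotted Aloha: a slot succeeds iff exactly one
# station transmits in it. Station count, transmit probability, and slot count
# are arbitrary illustrative choices giving offered load G = n*p = 1.
random.seed(42)
n_stations, p, n_slots = 50, 1 / 50, 100_000
successes = sum(
    1 for _ in range(n_slots)
    if sum(random.random() < p for _ in range(n_stations)) == 1
)
rate = successes / n_slots
print(round(rate, 3))  # close to 1/e ≈ 0.368
```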
Vulnerable period for Slotted ALOHA

92
Key                   Pure Aloha                             Slotted Aloha
Time slot             Any station can transmit data at       A station can transmit data only at
                      any time.                              the beginning of a time slot.
Time                  Time is continuous and is not          Time is discrete and is globally
                      globally synchronized.                 synchronized.
Vulnerable time       2 × Tt                                 Tt
Probability of        (η) = G × e^(-2G)                      (η) = G × e^(-G)
successful
transmission
Maximum efficiency    18.4%                                  36.8%
Number of collisions  Does not reduce the number of          Reduces the number of collisions to
                      collisions.                            half, thus doubling the efficiency.

93
Carrier Sense MA protocols
• Carrier sense multiple access (CSMA) is a media access protocol in
which a station senses the traffic on the channel (idle or busy)
before transmitting data.
• If the channel is idle, the station can send data on the channel;
otherwise, it must wait until the channel becomes idle.
• Hence, it reduces the chances of a collision on the transmission
medium.
• The CSMA method was developed to decrease the chances of collisions
when two or more stations start sending their signals over the data
link layer.

94
Carrier Sense MA protocols

95
Carrier Sense MA protocols
But what should a station do if the channel is busy? The
persistence methods define how a station acts when the channel
is busy or idle.

96
Carrier Sense MA protocols

The CSMA has 4 access modes:

• 1-persistent mode
• Non-persistent mode
• P-persistent mode
• O-persistent mode

• 1-persistent mode: The node first senses the channel. If the channel
is idle, the station transmits the data frame immediately; otherwise
it keeps sensing continuously and transmits the frame as soon as the
channel becomes idle. 97
Carrier Sense MA protocols
• Non-persistent mode: The station senses the channel as in the
1-persistent mode, but when the channel is busy it senses again only
after a random amount of time, instead of sensing continuously.
• P-persistent mode: The station senses the channel; if it is idle, it
transmits the data frame with probability P, and with probability
(1 − P) it defers to the next time slot and repeats the procedure.
This cycle continues until the data frame is successfully sent.
• O-persistent mode: Transmission occurs in an order of station
priority decided beforehand. When the channel is idle, each station
waits for its turn to send its data frame. 98
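The timing-based persistence strategies can be summarized as small decision functions. This is an illustrative sketch only; the function names and returned strings are invented for the example, not part of any standard API:

```python
import random

# Illustrative decision functions for the CSMA persistence strategies
# (function names and returned strings are invented for this sketch).
def one_persistent(channel_idle):
    # Sense continuously; transmit the moment the channel is idle.
    return "send" if channel_idle else "keep sensing"

def non_persistent(channel_idle):
    # When busy, back off for a random time before sensing again.
    return "send" if channel_idle else "wait random time, then sense again"

def p_persistent(channel_idle, p=0.3):
    # For slotted channels: when idle, transmit with probability p,
    # otherwise defer to the next slot and repeat the procedure.
    if not channel_idle:
        return "keep sensing"
    return "send" if random.random() < p else "defer to next slot"

print(one_persistent(True), "|", non_persistent(False))
# send | wait random time, then sense again
```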
Carrier Sense Multiple Access (CSMA)

1-persistent mode

99
Carrier Sense Multiple Access (CSMA)

Non-persistent mode

100
Carrier Sense Multiple Access (CSMA)
P-persistent mode

101
Carrier Sense Multiple Access (CSMA)
O-persistent mode

• In this method of CSMA, a supervisory node assigns a transmission
order to each node in the network.
• When the channel is idle, instead of sending immediately, each node
waits for the transmission order assigned to it.
• This mode defines the priority of stations before data transmission
on the medium: when the channel is idle, every station waits for its
turn to transmit the data.
• Every station on the channel transmits its data in its turn.

102
CSMA with Collision Detection
• Carrier sense multiple access with collision detection (CSMA/CD) is
a network protocol for transmitting data frames.
• The CSMA/CD protocol works within the medium access control layer.
• A station first senses the shared channel before broadcasting frames;
if the channel is idle, it transmits a frame while monitoring whether
the transmission succeeds.
• If the frame is received successfully, the station can send the next
frame.
• If a collision is detected, the station sends a jam signal on the
shared channel to terminate the data transmission.
• After that, it waits for a random time before retransmitting the
frame.

103
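The retry-after-collision behaviour can be sketched as a loop with binary exponential backoff, the rule classic Ethernet uses. The channel itself is stubbed out here by a hypothetical `collides_on_attempt` callback invented for the example:

```python
import random

# Sketch of the CSMA/CD retry loop with binary exponential backoff (the rule
# classic Ethernet uses). The channel is stubbed by a hypothetical callback:
# collides_on_attempt(n) returns True if attempt n suffers a collision.
def csma_cd_send(collides_on_attempt, max_attempts=16):
    slots_waited = 0
    for attempt in range(max_attempts):
        if not collides_on_attempt(attempt):
            return attempt + 1, slots_waited   # success: attempts used, slots waited
        # Collision: send a jam signal, then back off k slot times, where
        # k is drawn uniformly from [0, 2^min(attempt+1, 10) - 1].
        slots_waited += random.randrange(2 ** min(attempt + 1, 10))
    return None, slots_waited                  # give up after 16 attempts

attempts, _ = csma_cd_send(lambda n: n < 2)    # first two attempts collide
print(attempts)  # 3
```

Doubling the backoff window after each collision spreads the retries of competing stations over time, which is what makes repeated collisions increasingly unlikely.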
CSMA with Collision Detection

104
CSMA/CA (MACA)
(Multiple Access with Collision Avoidance)
• Carrier sense multiple access with collision avoidance (CSMA/CA) is
a network protocol for transmitting data frames.
• It works within the medium access control layer. When a data frame
is sent on the channel, the station listens to the channel to check
whether the transmission was clear.
• If the station receives only a single signal (its own), the data
frame has been successfully transmitted to the receiver.
• But if it receives two signals (its own and another with which its
frame collided), a collision has occurred on the shared channel; the
sender detects the collision when the acknowledgment signal does not
arrive.
• The following methods are used in CSMA/CA to avoid collisions:
105
CSMA/CA (MACA)
(Multiple Access with Collision Avoidance)

106
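One concrete collision-avoidance mechanism is a random backoff drawn from a contention window that doubles after each failed attempt. The sketch below uses 802.11-style window sizes (cw_min = 15, cw_max = 1023) purely as illustrative defaults:

```python
import random

# Sketch of CSMA/CA's random backoff: the contention window doubles after
# each failed attempt. The window sizes (cw_min=15, cw_max=1023) follow
# 802.11-style defaults and are used here purely for illustration.
def csma_ca_backoff_slots(retry, cw_min=15, cw_max=1023):
    cw = min((cw_min + 1) * 2 ** retry - 1, cw_max)  # 15, 31, 63, ... capped at 1023
    return random.randint(0, cw)                     # wait this many idle slots

print(0 <= csma_ca_backoff_slots(0) <= 15)  # True
```

Because a wireless sender cannot detect collisions while transmitting, this backoff (together with an explicit acknowledgment) stands in for the collision detection that CSMA/CD performs on a wire.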
Ethernet Frame Format

107
802.3 MAC frame
Basic frame format which is required for all MAC
implementation is defined in IEEE 802.3 standard.

108
802.3 MAC frame
PREAMBLE – An Ethernet frame starts with a 7-byte preamble, a pattern
of alternating 0s and 1s that marks the start of the frame and allows
the sender and receiver to establish bit synchronization. The preamble
tells the receiver that a frame is coming.

Start of frame delimiter (SFD) – A 1-byte field that is always set to
10101011. The SFD indicates that the upcoming bits start the frame,
beginning with the destination address. It warns the station(s) that
this is the last chance for synchronization.

Destination Address – A 6-byte field that contains the MAC address of
the machine for which the data is destined.

Source Address – A 6-byte field that contains the MAC address of the
source machine. As the source address is always an individual
(unicast) address, the least significant bit of the first byte is
always 0. 109
802.3 MAC frame
Length – A 2-byte field that indicates the length of the data field of
the Ethernet frame. This 16-bit field could hold values up to 65535,
but the length cannot exceed 1500 bytes, the maximum Ethernet payload
size.

Data – This is the place where actual data is inserted, also known as Payload.
Both IP header and data will be inserted here if Internet Protocol is used over
Ethernet. The maximum data present may be as long as 1500 Bytes.

Cyclic Redundancy Check (CRC) – A 4-byte field containing a 32-bit CRC
computed over the Destination Address, Source Address, Length, and
Data fields. If the CRC computed by the destination does not match the
received value, the data is corrupted.

110
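The field layout above can be exercised by packing a frame in software. The sketch below is illustrative only: it pads the payload to the 46-byte minimum and appends a CRC-32 (zlib's CRC-32 uses the same generator polynomial as the Ethernet FCS, though real hardware also fixes the bit ordering and prepends the preamble/SFD):

```python
import struct
import zlib

# Illustrative packing of an 802.3 MAC frame in software. Field widths follow
# the slides; zlib's CRC-32 uses the same polynomial as the Ethernet FCS,
# though real hardware also handles bit ordering and prepends preamble/SFD.
def build_frame(dst, src, payload):
    assert len(dst) == 6 and len(src) == 6        # 6-byte MAC addresses
    payload = payload.ljust(46, b"\x00")          # pad to the 46-byte data minimum
    header = dst + src + struct.pack("!H", len(payload))   # 2-byte Length field
    fcs = struct.pack("<I", zlib.crc32(header + payload))  # 4-byte CRC field
    return header + payload + fcs

frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"hello")
print(len(frame))  # 64: minimum frame size (14 header + 46 data + 4 CRC)
```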
References
Text Books:
1. Behrouz A. Forouzan, ‘Data Communications and Networking’, 5th Edition, McGraw-Hill Publishing Company, ISBN 978-0-07-
337622-6
2. Tanenbaum A. S., ‘Computer Networks’, Pearson Education , 5th Edition, ISBN-978-0-13-212695-3
Reference Books:
3. James F. Kurose and Keith W Ross ‘Computer Networking, A Top-Down Approach’, 5th Edition, Pearson Education, ISBN- 978-81-
317-9054-0
4. W. Richard Stevens, Unix Network Programming, The Sockets Networking API, Vol 1, 3rd Edition, PHI Learning Pvt. Ltd.
Supplementary Reading:
5. William Stallings, ‘Data and Computer Communications’, 6th Edition, Prentice Hall of India Pvt.
Web Resources:
6. [Link]
7. [Link]
Web links:
8. [Link]
9. [Link]
10. [Link]
MOOCs:
11. [Link]
12. [Link]

111
Thank You

Any Questions

112
