CN Module2 Notes
(BEC702)
Module 2
Coding
• The sender adds redundant bits through a process that creates a relationship
between the redundant bits and the actual data bits.
• The receiver checks the relationships between the two sets of bits to detect errors.
• Coding schemes are divided into two broad categories: block coding and convolution
coding.
Example 10.1
Let us assume that k = 2 and n = 3. Table 10.1 shows the list of datawords and codewords.
Assume the sender encodes the dataword 01 as 011 and sends it to the receiver. Consider
the following cases:
1. The receiver receives 011. It is a valid codeword. The receiver extracts the dataword
01 from it.
2. The codeword is corrupted during transmission, and 111 is received. This is not a valid
codeword and is discarded.
3. The codeword is corrupted during transmission, and 000 is received, a valid
codeword. The receiver incorrectly extracts the dataword 00. Two corrupted bits
have made the error undetectable.
An error-detecting code can detect only the types of errors for which it is designed; other
types of errors may remain undetected.
Hamming Distance
• The Hamming distance between two words is the number of positions in which the
corresponding bits differ; the Hamming distance between two words x and y is
written d(x, y).
• The Hamming distance between the received codeword and the sent codeword is
the number of bits that are corrupted during transmission.
• If the Hamming distance between the sent and the received codeword is not zero,
the codeword has been corrupted during transmission.
• For example, if the codeword 00000 is sent and 01101 is received, 3 bits are in error
and the Hamming distance between the two is d(00000, 01101) = 3.
• If d is not zero, the message has been corrupted.
• The Hamming distance can easily be found if we apply the XOR operation (⊕) on the
two words and count the number of 1s in the result.
• For example, the Hamming distance d(10101, 11110) is 3 because (10101 ⊕ 11110) is
01011 (three 1s).
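As a quick illustration, here is a minimal Python sketch (the function name hamming_distance is ours) that compares two equal-length bit strings position by position, which is equivalent to XORing them and counting the 1s:

# Hamming distance: XOR the two words bit by bit and count the 1s.
def hamming_distance(x: str, y: str) -> int:
    assert len(x) == len(y), "words must have the same length"
    return sum(1 for a, b in zip(x, y) if a != b)

print(hamming_distance("00000", "01101"))   # 3
print(hamming_distance("10101", "11110"))   # 3, since 10101 XOR 11110 = 01011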
Example
The minimum Hamming distance for our first code scheme (Table 10.1) is 2. This code
guarantees detection of only a single error. For example, if the third codeword (101) is sent
and one error occurs, the received codeword does not match any valid codeword. If two
errors occur,however, the received codeword may match a valid codeword and the errors
are not detected.
Linear Block Codes
• A linear block code is a code in which the exclusive OR (addition modulo-2) of two
valid codewords creates another valid codeword.
Example
The code in Table 10.1 is a linear block code because the result of XORing any codeword with
any other codeword is a valid codeword.
Minimum Distance for Linear Block Codes
• The minimum Hamming distance is the number of 1s in the nonzero valid codeword
with the smallest number of 1s.
Example
In our first code (Table 10.1), the numbers of 1s in the nonzero codewords are 2, 2, and 2. So
the minimum Hamming distance is dmin = 2.
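A small sketch of this check, assuming the code of Table 10.1 is the set {000, 011, 101, 110} implied by the examples above; for a linear code, dmin equals the smallest number of 1s in a nonzero codeword:

# Assumed codeword set for Table 10.1 (datawords 00, 01, 10, 11).
codewords = ["000", "011", "101", "110"]

def xor(a: str, b: str) -> str:
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

# Linearity check: the XOR of any two codewords is another codeword.
assert all(xor(a, b) in codewords for a in codewords for b in codewords)

# Minimum distance of a linear code = minimum weight of a nonzero codeword.
dmin = min(c.count("1") for c in codewords if c != "000")
print(dmin)   # 2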
Parity-Check Code
• This code is a linear block code. In this code, a k-bit dataword is changed to an n-bit
codeword where n = k + 1. The extra bit, called the parity bit, is selected to make the
total number of 1s in the codeword even.
• Minimum Hamming distance for this category is dmin = 2, which means that the
code is a single-bit error-detecting code. The code in Table 10.2 is also a parity-check
code with k = 4 and n = 5.
A possible structure of an encoder (at the sender) and a decoder (at the receiver) is described below.
• The encoder uses a generator that takes a copy of a 4-bit dataword (a0, a1, a2, and
a3) and generates a parity bit r0. The dataword bits and the parity bit create the 5-bit
codeword. The parity bit that is added makes the number of 1s in the codeword
even. In other words,
r0 = a3 + a2 + a1 + a0 (modulo-2)
• If the number of 1s is even, the result is 0; if the number of 1s is odd, the result is 1.
• The result, which is called the syndrome, is just 1 bit. The syndrome is 0 when the
number of 1s in the received codeword is even; otherwise, it is 1.
s0 = b3 + b2 + b1 + b0 + q0 (modulo-2)
• If the syndrome is 0, there is no detectable error in the received codeword; the data
portion of the received codeword is accepted as the dataword; if the syndrome is 1,
the data portion of the received codeword is discarded. The dataword is not created.
Example 10
Let us look at some transmission scenarios. Assume the sender sends the dataword 1011.
The codeword created from this dataword is 10111, which is sent to the receiver. We
examine five cases:
1. No error occurs; the received codeword is 10111. The syndrome is 0. The dataword
1011 is created.
2. One single-bit error changes a1. The received codeword is 10011. The syndrome is 1.
No dataword is created.
3. One single-bit error changes r0. The received codeword is 10110. The syndrome is 1.
No dataword is created. Note that although none of the dataword bits are corrupted,
no dataword is created because the code is not sophisticated enough to show the
position of the corrupted bit.
4. An error changes r0 and a second error changes a3. The received codeword is 00110.
The syndrome is 0. The dataword 0011 is created at the receiver. Note that here the
dataword is wrongly created due to the syndrome value. The simple parity-check
decoder cannot detect an even number of errors. The errors cancel each other out
and give the syndrome a value of 0.
5. Three bits—a3, a2, and a1—are changed by errors. The received codeword is 01011.
The syndrome is 1. The dataword is not created. This shows that the simple parity
check, guaranteed to detect one single error, can also find any odd number of errors.
A parity-check code can detect an odd number of errors.
2.2 DLC SERVICES
• The data link control (DLC) deals with procedures for communication between two
adjacent nodes—node-to-node communication—no matter whether the link is
dedicated or broadcast. Data link control functions include framing and flow and
error control.
2.2.1 Framing
Data transmission in the physical layer means moving bits in the form of a signal from
the source to the destination.
The physical layer provides bit synchronization to ensure that the sender and receiver
use the same bit durations and timing.
The data-link layer, on the other hand, needs to pack bits into frames, so that each
frame is distinguishable from another.
Framing in the data-link layer separates a message from one source to a destination
by adding a sender address and a destination address.
Frame Size
Byte stuffing by the escape character allows the presence of the flag in the
data section of the frame.
The receiver removes the escape character but keeps the next byte, so that it is not
incorrectly interpreted as the end of the frame. If the escape character is part of
the text, an extra one is added to show that the second one is part of the text.
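A minimal byte-stuffing sketch, assuming one-byte FLAG and ESC markers (the values below are placeholders; the actual values depend on the protocol):

FLAG, ESC = b"\x7e", b"\x7d"   # assumed one-byte flag and escape values

# Sender: precede every flag or escape byte in the data with an extra ESC.
def byte_stuff(data: bytes) -> bytes:
    out = bytearray()
    for b in data:
        if bytes([b]) in (FLAG, ESC):
            out += ESC
        out.append(b)
    return bytes(out)

# Receiver: drop each ESC and keep the byte that follows it as ordinary data.
def byte_unstuff(stuffed: bytes) -> bytes:
    out, skip = bytearray(), False
    for b in stuffed:
        if not skip and bytes([b]) == ESC:
            skip = True          # the next byte is data, not a delimiter
            continue
        out.append(b)
        skip = False
    return bytes(out)

assert byte_unstuff(byte_stuff(b"ab\x7ecd\x7d")) == b"ab\x7ecd\x7d"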
Bit-Oriented Framing
If the flag pattern appears in the data, we need to somehow inform the receiver
that this is not the end of the frame. We do this by stuffing 1 single bit (instead
of 1 byte) to prevent the pattern from looking like a flag. The strategy is called
bit stuffing.
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s
follow a 0 in the data, so that the receiver does not mistake the pattern 0111110
for a flag.
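A sketch of this rule operating on a string of '0'/'1' characters (an illustration only; real implementations work on raw bits):

# Sender: after five consecutive 1s, stuff one extra 0.
def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

# Receiver: remove the 0 that follows five consecutive 1s.
def bit_unstuff(bits: str) -> str:
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip = False      # drop the stuffed 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
            run = 0
    return "".join(out)

data = "0001111110101"
assert bit_unstuff(bit_stuff(data)) == data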
Flow Control
In communication at the data-link layer, we are dealing with four entities: network
and data-link layers at the sending node and network and data-link layers at
the receiving node.
Although we can have a complex relationship with more than one producer and
consumer, we ignore the relationships between networks and data-link layers
and concentrate on the relationship between two data-link layers.
Buffers
Although flow control can be implemented in several ways, we normally use two
buffers; one at the sending data-link layer and the other at the receiving
data-link layer.
A buffer is a set of memory locations that can hold packets at the sender and
receiver.
The flow control communication can occur by sending signals from the consumer to
the producer. When the buffer of the receiving data-link layer is full, it informs
the sending data-link layer to stop pushing frames.
Error Control
Because the underlying technology at the physical layer is not fully reliable, we need to
implement error control at the data-link layer to prevent the receiving node from
delivering corrupted packets to its network layer.
Error control at the data-link layer is normally very simple and implemented using
one of the following two methods.
➢ In the first method, if the frame is corrupted, it is silently discarded; if it is not
corrupted, the packet is delivered to the network layer. This method is used mostly in
wired LANs such as Ethernet.
➢ In the second method, if the frame is corrupted, it is silently discarded; if it is not
corrupted, an acknowledgment is sent to the sender.
Combination of Flow and Error Control
The acknowledgment that is sent for flow control can also be used for error control
to tell the sender the packet has arrived uncorrupted.
A frame that carries an acknowledgment is normally called an ACK to distinguish it
from the data frame.
2.2.3 Connectionless and Connection-Oriented
A DLC protocol can be either connectionless or connection-oriented.
Connectionless Protocol
In a connectionless protocol, frames are sent from one node to the next without any
relationship between the frames; each frame is independent.
Note that the term connectionless here does not mean that there is no physical
connection (transmission medium) between the nodes; it means that there
is no connection between frames.
The frames are not numbered and there is no sense of ordering. Most of the data-
link protocols for LANs are connectionless protocols.
Connection-Oriented Protocol
In a connection-oriented protocol, a logical connection is first established between the
two nodes, the frames are numbered and sent in order, and the connection is
terminated after the frame transfer is complete.
2.3.1 Simple Protocol
Our first protocol is a simple protocol with neither flow nor error control.
The data-link layer at the sender gets a packet from its network layer, makes a frame
out of it, and sends the frame.
The data-link layer at the receiver receives a frame from the link, extracts the packet
from the frame, and delivers the packet to its network layer.
The data-link layers of the sender and receiver provide transmission services for their
network layers.
FSMs
Each FSM has only one state, the ready state. The sending machine remains in the
ready state until a request comes from the process in the network layer.
When this event occurs, the sending machine encapsulates the message in a frame
and sends it to the receiving machine.
The receiving machine remains in the ready state until a frame arrives from the
sending machine.
When this event occurs, the receiving machine decapsulates the message out of the
frame and delivers it to the process at the network layer.
2.3.2 Stop-and-Wait Protocol
Our second protocol is called the Stop-and-Wait protocol, which uses both flow and
error control.
The sender sends one frame at a time and waits for an acknowledgment before
sending the next one.
To detect corrupted frames, we need to add a CRC to each data frame. When a frame
arrives at the receiver site, it is checked. If its CRC is incorrect, the frame is
corrupted and silently discarded.
Every time the sender sends a frame, it starts a timer. If an acknowledgment arrives
before the timer expires, the timer is stopped and the sender sends the next
frame.
If the timer expires, the sender resends the previous frame, assuming that the frame
was either lost or corrupted.
FSMs
The FSMs for our primitive Stop-and-Wait protocol are described below.
Sender States
The sender is initially in the ready state, but it can move between the ready and
blocking state.
Ready State.
When the sender is in this state, it is only waiting for a packet from the network layer.
If a packet comes from the network layer, the sender creates a frame, saves a
copy of the frame, starts the only timer and sends the frame. The sender then
moves to the blocking state.
Blocking State.
When the sender is in this state, three events can occur:
a) If a time-out occurs, the sender resends the saved copy of the frame and restarts the
timer.
b) If a corrupted ACK arrives, it is discarded.
c) If an error-free ACK arrives, the sender stops the timer and discards the saved copy
of the frame. It then moves to the ready state.
Receiver
The receiver is always in the ready state. Two events may occur:
a) If an error-free frame arrives, the message in the frame is delivered to the network
layer and an ACK is sent.
b) If a corrupted frame arrives, the frame is discarded.
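A compressed sketch of the sender-side FSM described above (state, event, and message names are ours; no real network I/O is involved):

READY, BLOCKING = "ready", "blocking"

class StopAndWaitSender:
    def __init__(self):
        self.state, self.saved = READY, None

    def event(self, name, packet=None):
        if self.state == READY and name == "packet_from_network":
            self.saved = ("FRAME", packet)          # make a frame and keep a copy
            print("send", self.saved, "and start the timer")
            self.state = BLOCKING
        elif self.state == BLOCKING and name == "timeout":
            print("resend", self.saved, "and restart the timer")
        elif self.state == BLOCKING and name == "corrupted_ack":
            pass                                    # discard the ACK, stay blocked
        elif self.state == BLOCKING and name == "clean_ack":
            print("stop the timer, discard the saved copy")
            self.saved, self.state = None, READY

s = StopAndWaitSender()
s.event("packet_from_network", "P1")
s.event("timeout")
s.event("clean_ack")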
Sequence and Acknowledgment Numbers
If an acknowledgment is lost or delayed, the sender may retransmit a frame that the
receiver has already accepted, so frames and acknowledgments are numbered. In
Stop-and-Wait the sequence numbers alternate between 0 and 1, and the
acknowledgment number announces the sequence number of the next frame expected.
2.3.3 Piggybacking
To make the communication more efficient, the data in one direction is piggybacked
with the acknowledgment in the other direction.
In other words, when node A is sending data to node B, node A also acknowledges
the data received from node B. Because piggybacking makes communication at
the data-link layer more complicated, it is not a common practice.
2.4 POINT-TO-POINT PROTOCOL (PPP)
2.4.1 Framing
PPP uses a character-oriented (or byte-oriented) frame. Figure 11.20 shows the format of a
PPP frame. The description of each field follows:
• Flag. A PPP frame starts and ends with a 1-byte flag with the bit pattern 01111110.
• Address. The address field in this protocol is a constant value and set to 11111111
(broadcast address).
• Control. This field is set to the constant value 00000011 (imitating unnumbered
frames in HDLC). As we will discuss later, PPP does not provide any flow control. Error
control is also limited to error detection.
• Protocol. The protocol field defines what is being carried in the data field: either user
data or other information. This field is by default 2 bytes long, but the two parties
can agree to use only 1 byte.
• Payload field. This field carries either the user data or other information. The data
field is a sequence of bytes with the default of a maximum of 1500 bytes; but this
can be changed during negotiation.
• FCS. The frame check sequence (FCS) is simply a 2-byte or 4-byte standard CRC.
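A sketch that lays these fields out as bytes, using the constant flag, address, and control values given above; the protocol identifier and the 2-byte FCS below are placeholder values, not computed CRCs:

FLAG    = bytes([0b01111110])   # start and end of every PPP frame
ADDRESS = bytes([0b11111111])   # constant broadcast address
CONTROL = bytes([0b00000011])   # constant value imitating HDLC unnumbered frames

def build_ppp_frame(protocol: bytes, payload: bytes, fcs: bytes) -> bytes:
    # Field order: Flag | Address | Control | Protocol | Payload | FCS | Flag
    assert len(payload) <= 1500, "default maximum payload is 1500 bytes"
    return FLAG + ADDRESS + CONTROL + protocol + payload + fcs + FLAG

# Placeholder protocol id and FCS bytes, for illustration only.
frame = build_ppp_frame(b"\x00\x21", b"hello", b"\x00\x00")
print(frame.hex())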
Byte Stuffing
• Since PPP is a byte-oriented protocol, the flag in PPP is a byte that needs to be
escaped whenever it appears in the data section of the frame.
• The escape byte is 01111101, which means that every time the flaglike pattern
appears in the data, this extra byte is stuffed to tell the receiver that the next byte is
not a flag. Obviously, the escape byte itself should be stuffed with another escape
byte.
2.4.2 Transition Phases
A PPP connection goes through phases which can be shown in a transition phase
diagram (see Figure 11.21).
The transition diagram, which is an FSM, starts with the dead state. In this state,
there is no active carrier (at the physical layer) and the line is quiet.
When one of the two nodes starts the communication, the connection goes into the
establish state. In this state, options are negotiated between the two parties.
If the two parties agree that they need authentication, then the system needs to do
authentication; otherwise, the parties can simply start communication.
Data transfer takes place in the open state. When a connection reaches this state, the
exchange of data packets can be started. The connection remains in this state
until one of the endpoints wants to terminate the connection.
In this case, the system goes to the terminate state. The system remains in this state
until the carrier (physical-layer signal) is dropped, which moves the system to
the dead state again.
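A minimal table-driven sketch of this phase diagram (the state and event names are ours, and authentication is shown as optional):

# Allowed transitions of the simplified PPP phase diagram described above.
transitions = {
    ("dead", "carrier detected"):           "establish",
    ("establish", "authentication needed"): "authenticate",
    ("establish", "no authentication"):     "open",
    ("authenticate", "success"):            "open",
    ("open", "done transferring"):          "terminate",
    ("terminate", "carrier dropped"):       "dead",
}

state = "dead"
for event in ("carrier detected", "no authentication",
              "done transferring", "carrier dropped"):
    state = transitions[(state, event)]
    print(event, "->", state)           # ends back in the dead state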
12.1 RANDOM ACCESS
• The original ALOHA protocol is called pure ALOHA. This is a simple but elegant
protocol.
• The idea is that each station sends a frame whenever it has a frame to send.
However, since there is only one channel to share, there is the possibility of collision
between frames from different stations.
• The pure ALOHA protocol relies on acknowledgments from the receiver. When a
station sends a frame, it expects the receiver to send an acknowledgment. If the
acknowledgment does not arrive after a time-out period, the station assumes that
the frame (or the acknowledgment) has been destroyed and resends the frame
• Pure ALOHA dictates that when the time-out period passes, each station waits a
random amount of time before resending its frame. The randomness will help avoid
more collisions. We call this time the backoff time TB.
• Pure ALOHA has a second method to prevent congesting the channel with
retransmitted frames. After a maximum number of retransmission attempts Kmax, a
station must give up and try later.
• The time-out period is equal to the maximum possible round-trip propagation delay,
which is twice the amount of time required to send a frame between the two most
widely separated stations (2 × Tp).
• The formula for TB depends on the implementation. One common formula is the
binary exponential backoff. In this method, for each retransmission, a multiplier R = 0
to 2^K − 1 is randomly chosen and multiplied by Tp or Tfr to find TB. The value of Kmax
is usually chosen as 15.
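A sketch of this backoff computation, assuming TB = R × Tfr with R drawn uniformly from 0 to 2^K − 1 and K capped at Kmax (the 1 ms frame time is an assumed value):

import random

Kmax = 15          # maximum number of retransmission attempts
Tfr  = 0.001       # assumed frame transmission time in seconds (1 ms)

def backoff_time(k: int) -> float:
    # For the k-th retransmission, pick R in [0, 2^K - 1] and scale it by Tfr.
    k = min(k, Kmax)
    R = random.randint(0, 2 ** k - 1)
    return R * Tfr

for attempt in range(1, 5):
    print(attempt, backoff_time(attempt))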
Vulnerable time
• The vulnerable time is the length of time in which there is a possibility of collision. We
assume that the stations send fixed-length frames, with each frame taking Tfr seconds
to send.
• Station B starts to send a frame at time t. Station A has already started to send a frame
at some time after t − Tfr (and before t), so the end of A's frame collides with the
beginning of B's frame.
• On the other hand, suppose that station C starts to send a frame after time t but before
t + Tfr. Here, the beginning of C's frame collides with the end of B's frame.
Pure ALOHA vulnerable time = 2 × Tfr
Example 12.2
A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is
the requirement to make this frame collision-free?
Solution
Average frame transmission time Tfr is 200 bits/200 kbps or 1 ms. The vulnerable time is 2 ×
1 ms = 2 ms. This means no station should send later than 1 ms before this station starts
transmission and no station should start sending during the period (1 ms) that this station is
sending.
Throughput
Example 12.3
A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is
the throughput if the system (all stations together) produces the following number of frames?
a. If the system creates 1000 frames per second, or 1 frame per millisecond, then G = 1. In
this case S = G × e^(−2G) = 0.135 (13.5 percent). This means that the throughput is 1000 ×
0.135 = 135 frames. Only 135 frames out of 1000 will probably survive.
b. If the system creates 500 frames per second, or 1/2 frame per millisecond, then G = 1/2.
In this case S = G × e^(−2G) = 0.184 (18.4 percent). This means that the throughput is 500 ×
0.184 = 92 and that only 92 frames out of 500 will probably survive. Note that this is the
maximum throughput case, percentagewise.
c. If the system creates 250 frames per second, or 1/4 frame per millisecond, then G = 1/4.
In this case S = G × e^(−2G) = 0.152 (15.2 percent). This means that the throughput is 250 ×
0.152 = 38. Only 38 frames out of 250 will probably survive.
Slotted ALOHA
• In slotted ALOHA we divide the time into slots of Tfr seconds and force the station to
send only at the beginning of the time slot.
• Because a station is allowed to send only at the beginning of the synchronized time
slot, if a station misses this moment, it must wait until the beginning of the next time
slot.
• The vulnerable time is now reduced to one-half, equal to Tfr.
Slotted ALOHA vulnerable time = Tfr
Throughput
Example 12.4
A slotted ALOHA network transmits 200-bit frames using a shared channel with a 200-kbps
bandwidth. Find the throughput if the system (all stations together) produces the following
number of frames:
a. 1000 frames per second. Here G is 1. In this case S = G × e^(−G) = 0.368 (36.8 percent).
This means that the throughput is 1000 × 0.368 = 368. Only 368 frames out of 1000 will
probably survive. Note that this is the maximum throughput case, percentagewise.
b. 500 frames per second. Here G is 1/2. In this case S = G × e^(−G) = 0.303 (30.3 percent).
This means that the throughput is 500 × 0.303 = 151. Only 151 frames out of 500 will
probably survive.
c. 250 frames per second. Here G is 1/4. In this case S = G × e^(−G) = 0.195 (19.5 percent).
This means that the throughput is 250 × 0.195 = 49. Only 49 frames out of 250 will
probably survive.
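The surviving-frame counts in Examples 12.3 and 12.4 can be reproduced (to within rounding) from the two formulas S = G × e^(−2G) for pure ALOHA and S = G × e^(−G) for slotted ALOHA; a minimal sketch:

import math

def pure_aloha_throughput(G: float) -> float:
    return G * math.exp(-2 * G)           # S = G * e^(-2G)

def slotted_aloha_throughput(G: float) -> float:
    return G * math.exp(-G)               # S = G * e^(-G)

for frames_per_second in (1000, 500, 250):
    G = frames_per_second / 1000          # 1 frame per millisecond means G = 1
    pure = frames_per_second * pure_aloha_throughput(G)
    slotted = frames_per_second * slotted_aloha_throughput(G)
    # Surviving-frame counts agree with the examples to within rounding.
    print(f"{frames_per_second} fps: pure ~{pure:.0f}, slotted ~{slotted:.0f}")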
12.1.2 CSMA
• Carrier sense multiple access (CSMA) requires that each station first listen to the
medium (or check the state of the medium) before sending. In other words, CSMA is
based on the principle “sense before transmit” or “listen before talk.”
• CSMA can reduce the possibility of collision, but it cannot eliminate it. Stations are
connected to a shared channel (usually a dedicated medium).
• The possibility of collision still exists because of propagation delay; when a station
sends a frame, it still takes time (although very short) for the first bit to reach every
station and for every station to sense it.
Vulnerable Time
• The vulnerable time for CSMA is the propagation time Tp. This is the time needed for
a signal to propagate from one end of the medium to the other.
Persistence Methods
What should a station do if the channel is busy? What should it do if the channel is idle?
Three methods have been devised to answer these questions: the 1-persistent method, the
nonpersistent method, and the p-persistent method.
1-Persistent
• The 1-persistent method is simple and straightforward. In this method, after the
station finds the line idle, it sends its frame immediately (with probability 1).
• This method has the highest chance of collision because two or more stations may
find the line idle and send their frames immediately.
Nonpersistent
• In the nonpersistent method, a station that has a frame to send senses the line. If the
line is idle, it sends immediately. If the line is not idle, it waits a random amount of
time and then senses the line again.
• This method reduces the efficiency of the network because the medium remains idle
when there may be stations with frames to send.
p- Persistent
• The p-persistent method is used if the channel has time slots with a slot duration
equal to or greater than the maximum propagation time.
• The p-persistent approach combines the advantages of the other two strategies. It
reduces the chance of collision and improves efficiency.
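A sketch of the standard p-persistent procedure, which is not spelled out above: when the line is idle, the station transmits with probability p; otherwise it waits for the next slot and senses again, backing off if the line has become busy. The channel_is_idle, wait_one_slot, transmit, and backoff callables are placeholders supplied by the caller.

import random

def p_persistent_send(p, channel_is_idle, wait_one_slot, transmit, backoff):
    while True:
        while not channel_is_idle():
            pass                      # keep sensing until the line is idle
        if random.random() < p:
            transmit()                # with probability p, send in this slot
            return
        wait_one_slot()               # with probability 1 - p, wait one slot
        if not channel_is_idle():
            backoff()                 # line became busy: behave as after a collision
            return

# Tiny demonstration with an always-idle channel.
p_persistent_send(0.5,
                  channel_is_idle=lambda: True,
                  wait_one_slot=lambda: None,
                  transmit=lambda: print("frame sent"),
                  backoff=lambda: print("backoff"))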
12.1.3 CSMA/CD
• Carrier sense multiple access with collision detection (CSMA/CD) augments the
algorithm to handle the collision.
• In this method, a station monitors the medium after it sends a frame to see if the
transmission was successful. If so, the station is finished. If, however, there is a
collision, the frame is sent again.
• Let us look at the first bits transmitted by the two stations involved in the collision.
• At time t1, station A has executed its persistence procedure and starts sending the
bits of its frame.
• At time t2, station C has not yet sensed the first bit sent by A.
• Station C executes its persistence procedure and starts sending the bits in its frame,
which propagate both to the left and to the right.
• The collision occurs sometime after time t2. Station C detects a collision at time t3
when it receives the first bit of A’s frame.
• Station C immediately aborts transmission.
• Station A detects collision at time t4 when it receives the first bit of C’s frame; it also
immediately aborts transmission.
• Looking at the figure, we see that A transmits for the duration t4 − t1; C transmits for
the duration t3 − t2.
The flow diagram for CSMA/CD is shown in Figure 12.13. It is similar to the one for the
ALOHA protocol, but there are differences.
• The level of energy in a channel can have three values: zero, normal, and abnormal.
• At the zero level, the channel is idle.
• At the normal level, a station has successfully captured the channel and is sending its
frame.
• At the abnormal level, there is a collision and the level of the energy is twice the
normal level.
Throughput
• The traditional Ethernet was a broadcast LAN that used the 1-persistence method to
control access to the common media.
12.1.4 CSMA/CA
• Carrier sense multiple access with collision avoidance (CSMA/CA) was invented for
wireless networks.
• Collisions are avoided through the use of CSMA/CA’s three strategies: the interframe
space, the contention window, and acknowledgments.
❑ Interframe Space (IFS).
• When an idle channel is found, the station does not send immediately. It waits for a
period of time called the interframe space or IFS.
• Even though the channel may appear idle when it is sensed, a distant station may
have already started transmitting.
• The IFS time allows the front of the transmitted signal by the distant station to reach
this station.
• After waiting an IFS time, if the channel is still idle, the station can send, but it still
needs to wait a time equal to the contention window (described next).
• The IFS variable can also be used to prioritize stations or frame types. For example, a
station that is assigned a shorter IFS has a higher priority.
❑ Contention Window.
• The contention window is an amount of time divided into slots. A station that is
ready to send chooses a random number of slots as its wait time. The number of
slots in the window changes according to the binary exponential backoff strategy.
• The station needs to sense the channel after each time slot. However, if the station
finds the channel busy, it does not restart the process; it just stops the timer and
restarts it when the channel is sensed as idle. This gives priority to the station with
the longest waiting time. See Figure 12.16.
❑ Acknowledgment.
• With all these precautions, there still may be a collision resulting in destroyed data.
In addition, the data may be corrupted during the transmission. The positive
acknowledgment and the time-out timer can help guarantee that the receiver has
received the frame.
Frame Exchange Time Line
1. Before sending a frame, the source station senses the medium by checking the energy
level at the carrier frequency.
a. The channel uses a persistence strategy with backoff until the channel is idle.
b. After the channel is found to be idle, the station waits for a period of time called the DCF
interframe space (DIFS); then the station sends a control frame called the request to send
(RTS).
2. After receiving the RTS and waiting a period of time called the short interframe space
(SIFS), the destination station sends a control frame, called the clear to send (CTS), to the
source station. This control frame indicates that the destination station is ready to receive
data.
3. The source station sends data after waiting an amount of time equal to SIFS.
4. The destination station, after waiting an amount of time equal to SIFS, sends an
acknowledgment to show that the frame has been received. Acknowledgment is needed in
this protocol because the station does not have any means to check for the successful arrival
of its data at the destination.
Network Allocation Vector
• When a station sends an RTS frame, it includes the duration of time that it needs to
occupy the channel.
• The stations that are affected by this transmission create a timer called a network
allocation vector (NAV) that shows how much time must pass before these stations
are allowed to check the channel for idleness.
• Each station, before sensing the physical medium to see if it is idle, first checks its
NAV to see if it has expired. Figure 12.17 shows the idea of NAV.
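A toy sketch of the NAV idea: the RTS (or CTS) carries the duration the sender needs, and every station that overhears it sets its NAV and refuses to sense the medium until that timer expires. The class and attribute names are ours.

import time

class Station:
    def __init__(self, name):
        self.name, self.nav_expires = name, 0.0

    def hear_reservation(self, duration):
        # Set the network allocation vector from the duration field of the RTS/CTS.
        self.nav_expires = time.time() + duration

    def may_sense_channel(self):
        # A station checks its NAV before physically sensing the medium.
        return time.time() >= self.nav_expires

c = Station("C")
c.hear_reservation(duration=0.05)   # C overhears a reservation of 50 ms
print(c.may_sense_channel())        # False: C must wait for its NAV to expire
time.sleep(0.06)
print(c.may_sense_channel())        # True: the reserved period is over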
Collision During Handshaking
• Two or more stations may try to send RTS frames at the same time. These control
frames may collide.
• However, because there is no mechanism for collision detection, the sender assumes
there has been a collision if it has not received a CTS frame from the receiver. The
backoff strategy is employed, and the sender tries again.
Hidden-Station Problem
• The solution to the hidden station problem is the use of the handshake frames (RTS
and CTS). Figure 12.17 also shows that the RTS message from B reaches A, but not C.
• However, because both B and C are within the range of A, the CTS message, which
contains the duration of data transmission from B to A, reaches C.
• Station C knows that some hidden station is using the channel and refrains from
transmitting until that duration is over.
CSMA/CA and Wireless Networks
CSMA/CA is the access method used in wireless LANs such as IEEE 802.11.
12.2 CONTROLLED ACCESS
In controlled access, the stations consult one another to find which station has the
right to send; a station cannot send unless it has been authorized by other stations.
12.2.1 Reservation
• In the reservation method, a station needs to make a reservation before sending
data. Time is divided into intervals. In each interval, a reservation frame precedes the
data frames sent in that interval.
• If there are N stations in the system, there are exactly N reservation minislots in the
reservation frame. Each minislot belongs to a station. When a station needs to send a
data frame, it makes a reservation in its own minislot. The stations that have made
reservations can send their data frames after the reservation frame.
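A small sketch of a reservation frame with N minislots kept as a list of flags; a station reserves by setting its own minislot (N = 5 is an assumed value):

N = 5                                   # assumed number of stations

def new_reservation_frame():
    return [0] * N                      # one minislot per station

def reserve(frame, station):
    frame[station] = 1                  # a station may only set its own minislot

frame = new_reservation_frame()
reserve(frame, 1)
reserve(frame, 3)
print(frame)                            # [0, 1, 0, 1, 0]
# Stations 1 and 3 may now send their data frames after the reservation frame;
# the other stations must wait for the next interval.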
12.2.2 Polling
• Polling works with topologies in which one device is designated as a primary station
and the other devices are secondary stations.
• All data exchanges must be made through the primary device even when the
ultimate destination is a secondary device.
• The primary device controls the link; the secondary devices follow its instructions.
• The primary device, therefore, is always the initiator of a session (see Figure 12.19).
This method uses poll and select functions to prevent collisions.
Select
• The select function is used whenever the primary device has something to send.
• The primary must alert the secondary to the upcoming transmission and wait for an
acknowledgment of the secondary’s ready status.
• Before sending data, the primary creates and transmits a select (SEL) frame, one field
of which includes the address of the intended secondary.
Poll
• The poll function is used by the primary device to solicit transmissions from the
secondary devices. When the primary is ready to receive data, it must ask (poll) each
device in turn if it has anything to send.
• When the first secondary is approached, it responds either with a NAK frame if it has
nothing to send or with data (in the form of a data frame) if it does.
• If the response is negative (a NAK frame), then the primary polls the next secondary
in the same manner until it finds one with data to send. When the response is
positive (a data frame), the primary reads the frame and returns an acknowledgment
(ACK frame), verifying its receipt.
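A compact sketch of the poll cycle: the primary polls each secondary in turn, a NAK means the secondary has nothing to send, and a data frame is answered with an ACK. The secondaries are modeled here as simple queues with assumed names.

from collections import deque

# Each secondary is a queue of pending data frames (possibly empty).
secondaries = {"S1": deque(), "S2": deque(["frame-from-S2"]), "S3": deque()}

def poll_cycle():
    for name, queue in secondaries.items():
        if not queue:
            print(name, "-> NAK (nothing to send)")
            continue
        data = queue.popleft()          # positive response: a data frame
        print(name, "->", data)
        print("primary -> ACK to", name)

poll_cycle()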
12.2.3 Token Passing
• In the token-passing method, the stations in a network are organized in a logical ring.
• For each station, there is a predecessor and a successor. The predecessor is the
station which is logically before the station in the ring; the successor is the station
which is after the station in the ring.
• The current station is the one that is accessing the channel now.
• A special packet called a token circulates through the ring.
• The possession of the token gives the station the right to access the channel and
send its data. When a station has some data to send, it waits until it receives the
token from its predecessor.
• It then holds the token and sends its data. When the station has no more data to
send, it releases the token, passing it to the next logical station in the ring.
• Token management is needed for this access method. Stations must be limited in the
time they can have possession of the token.
• The token must be monitored to ensure it has not been lost or destroyed.
• And finally, token management is needed to make low-priority stations release the
token to high-priority stations.
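A toy round of token passing on the logical ring: the token visits each station in order, a station holding the token sends its queued frames subject to a holding limit, and then passes the token to its successor. The station names and the holding limit are assumed values.

from collections import deque

# Stations in logical-ring order, each with a queue of frames to send.
ring = {"A": deque(["A1", "A2"]), "B": deque(), "C": deque(["C1"])}
MAX_FRAMES_PER_TOKEN = 2                 # assumed token-holding limit

def one_token_rotation():
    for station, queue in ring.items():  # the token travels around the ring once
        sent = 0
        while queue and sent < MAX_FRAMES_PER_TOKEN:
            print(station, "sends", queue.popleft())
            sent += 1
        print(station, "passes the token to its successor")

one_token_rotation()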
Logical Ring
• In the physical ring topology, when a station sends the token to its successor, the
token cannot be seen by other stations; the successor is the next one in line.
• The dual ring topology uses a second (auxiliary) ring which operates in the reverse
direction compared with the main ring. The high-speed Token Ring networks called
FDDI (Fiber Distributed Data Interface) and CDDI (Copper Distributed Data Interface)
use this topology.
• In the bus ring topology, also called a token bus, the stations are connected to a
single cable called a bus. They, however, make a logical ring, because each station
knows the address of its successor (and also predecessor for token management
purposes).
• In a star ring topology, the physical topology is a star. There is a hub, however, that
acts as the connector. The wiring inside the hub makes the ring; the stations are
connected to this ring through the two wire connections.