INFORMATION AND COMMUNICATIONS UNIVERSITY
(School of Engineering)
Computer Networks
Assignment 1
First Name: Gilbert
Surname: Makovwa
Program: Bachelor of ICT in Computer Systems Engineering
SIN: 2201260844
Mode of study: Distance Learning
Semester: Eight
Lecturer: Mr Kalenga
Due date: 15th September 2025
Q1.
Reliable data transfer refers to the methods and protocols used in computer networking
to ensure that data is transmitted accurately and without loss across unreliable channels. A
reliable service guarantees that no data bits are corrupted or lost during transmission and that
all data is delivered in the order it was sent. The transport layer introduces mechanisms that
transform unreliable packet delivery into dependable end-to-end communication. The diagrams
below show the service model and the implementation model.
[Figure: provided service vs. implemented service. In the service model, the sending and
receiving application processes see a reliable channel supplied by the transport layer. In the
implementation model, a reliable data transfer protocol runs on each side: the sender accepts
data via rdt_send() and passes packets to the network layer via udt_send(); the receiver takes
packets via rdt_rcv() and hands data upward via deliver_data(), all over an unreliable channel.]
Reliable data transfer is achieved through a combination of techniques/protocols such
as:
Checksums: Detect data corruption. This method uses a checksum generator on the
sender side and a checksum checker on the receiver side. On the sender side, the checksum
generator divides the data into equal subunits of n bits, where n is typically 16. These subunits
are added together using one's complement arithmetic, producing an n-bit sum. This sum is
then complemented; the complemented sum, called the checksum, is appended to the end of
the original data unit, and the whole unit is transmitted to the receiver. On receiving the data,
the receiver passes it to the checksum checker, which divides the unit into subunits of equal
length (the checksum itself being one of the subunits) and adds them all together. The result is
then complemented. If the complemented result is zero, the data is error-free; if it is non-zero,
the data contains an error, and the receiver rejects it.
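The one's complement procedure above can be sketched in Python (a minimal illustration, assuming 16-bit subunits and odd-length data padded with a zero byte; the sample message is invented):

```python
def ones_complement_checksum(data: bytes) -> int:
    """Split data into 16-bit subunits, add them with one's complement
    arithmetic (wrap-around carry), then complement the sum."""
    if len(data) % 2:                               # pad odd-length data
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]       # one 16-bit subunit
        total = (total & 0xFFFF) + (total >> 16)    # wrap the carry back in
    return ~total & 0xFFFF                          # the checksum

def verify(unit: bytes) -> bool:
    """Receiver side: summing every subunit, checksum included, and
    complementing must give zero for error-free data."""
    return ones_complement_checksum(unit) == 0

data = b"network!"                                  # even length, for alignment
packet = data + ones_complement_checksum(data).to_bytes(2, "big")
assert verify(packet)                               # intact data passes
assert not verify(b"xetwork!" + packet[-2:])        # a corrupted byte is caught
```

The wrap-around carry is what makes the addition "one's complement": any overflow out of the 16th bit is folded back into the low bits before the final complement.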
Stop-and-wait protocol: The sender transmits one packet at a time and waits for an
acknowledgment (ACK) before sending the next one. If no ACK is received within a timeout, the
packet is retransmitted. Acknowledgments confirm delivery of data. In TCP, reliable
communication begins with a three-step connection setup known as the 3-way handshake. In
the first step, the client wants to establish a connection with a server, so it sends a segment
with the SYN (Synchronize Sequence Number) flag set, which informs the server that the client
intends to start communication and with what sequence number its segments will begin. In the
second step, the server responds with a segment that has both the SYN and ACK bits set:
the acknowledgment (ACK) confirms the segment it received, and the SYN indicates with what
sequence number the server's own segments will begin. In the final step, the client
acknowledges the server's response, and the two establish a reliable connection over which the
actual data transfer begins.
3-Way Handshake diagram
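The stop-and-wait behaviour described above can be simulated in a few lines of Python (a sketch with a randomly lossy channel; the loss rate, seed, and packet names are illustrative assumptions, not part of any real protocol stack):

```python
import random

def stop_and_wait(packets, loss_rate=0.3, seed=1):
    """Send one packet at a time; on a simulated loss, the timeout fires
    and the same packet is retransmitted until its ACK arrives."""
    rng = random.Random(seed)
    delivered, attempts = [], 0
    seq = 0                                  # alternating 0/1 sequence number
    for data in packets:
        while True:
            attempts += 1
            if rng.random() >= loss_rate:    # packet and ACK both got through
                delivered.append((seq, data))
                break                        # ACK received: move on
            # else: timeout expired, the loop retransmits the same packet
        seq ^= 1
    return delivered, attempts

delivered, attempts = stop_and_wait(["p1", "p2", "p3"])
assert [d for _, d in delivered] == ["p1", "p2", "p3"]   # in-order delivery
assert attempts >= len(delivered)            # every loss costs a retransmission
```

The alternating sequence number is all stop-and-wait needs, because at most one packet is ever outstanding.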
Go-Back-N (GBN) Protocol: The Go-Back-N (GBN) protocol is a sliding window protocol
used in networking for reliable data transmission. It is part of the Automatic Repeat reQuest
(ARQ) protocols, which ensure that data is correctly received and that any lost or corrupted
packets are retransmitted. The sender is allowed to transmit multiple packets (when available)
without waiting for an acknowledgment, but is constrained to have no more than some
maximum allowable number, N, of unacknowledged packets in the pipeline. If one packet is lost
or not acknowledged, the sender must go back and resend that packet and all the packets that
followed it, even if they were received correctly. For example, if packets 1, 2, 3, 4, and 5 are sent
and packet 3 gets lost, the sender will have to resend packets 3, 4, and 5, even if 4 and 5 were
received.
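The packet-3 example above can be traced with a small sketch (illustrative only; it assumes each listed packet is lost exactly once and that its retransmission succeeds):

```python
def go_back_n_trace(total, window, lost_once):
    """Return the sequence numbers a GBN sender transmits when the
    packets in `lost_once` are lost on their first transmission."""
    sent = []
    base = 1
    lost = set(lost_once)
    while base <= total:
        end = min(base + window - 1, total)
        for seq in range(base, end + 1):     # send the whole window
            sent.append(seq)
        failed = [s for s in range(base, end + 1) if s in lost]
        if failed:
            lost.discard(failed[0])          # its retransmission will succeed
            base = failed[0]                 # go back: resend it and all that follow
        else:
            base = end + 1                   # window fully acknowledged
    return sent

# Packets 1-5 with packet 3 lost: 4 and 5 are resent even though they arrived.
assert go_back_n_trace(5, window=5, lost_once=[3]) == [1, 2, 3, 4, 5, 3, 4, 5]
```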
Selective Repeat: This protocol also allows additional frames to be sent before the first
frame's acknowledgment is received, but in this case the correctly received frames are buffered,
and only the corrupted or lost frames are retransmitted. Selective Repeat sender events and
actions: when data is received from above, the SR sender checks the next available sequence
number for the packet. If the sequence number is within the sender's window, the data is
packetized and sent; otherwise, it is either buffered or returned to the upper layer for later
transmission. Timers are again used to protect against lost packets. However, each packet must
now have its own logical timer, since only a single packet will be retransmitted on timeout (a
single hardware timer can mimic the operation of multiple logical timers). If an ACK is received,
the SR sender marks that packet as having been received, provided it is in the window. If the
packet's sequence number is equal to the send base, the window base is moved forward to the
unacknowledged packet with the smallest sequence number. If the window moves and there
are untransmitted packets with sequence numbers that now fall within the window, these
packets are transmitted.
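For contrast with Go-Back-N, a similar sketch of Selective Repeat shows only the lost packet being retransmitted (same illustrative assumptions: each listed packet is lost once and its retransmission succeeds):

```python
def selective_repeat_trace(total, lost_once):
    """Return (transmissions, in-order deliveries) for an SR exchange in
    which the packets in `lost_once` are lost on their first try."""
    sent, buffered = [], set()
    lost = set(lost_once)
    for seq in range(1, total + 1):          # first pass: send everything once
        sent.append(seq)
        if seq not in lost:
            buffered.add(seq)                # receiver buffers out-of-order arrivals
    for seq in sorted(lost):                 # per-packet timers fire...
        sent.append(seq)                     # ...retransmitting only the missing ones
        buffered.add(seq)
    return sent, sorted(buffered)

sent, delivered = selective_repeat_trace(5, lost_once=[3])
assert sent == [1, 2, 3, 4, 5, 3]            # only packet 3 is resent
assert delivered == [1, 2, 3, 4, 5]          # buffering preserves everything
```

Compare the six transmissions here with the eight a GBN sender would need for the same loss.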
Data flow control: This ensures smooth communication by matching the transmission
speed with the receiver's capacity, preventing packet loss and ensuring reliability. In credit-
based flow control, the receiver informs the sender how many units of data (credits) it can
accept, and the sender adjusts its sending rate based on the available credits; this is common in
high-performance systems like Fibre Channel. The functions of flow control include preventing
buffer overflow at the receiver, ensuring efficient use of network resources, and maintaining
reliable and orderly data transmission.
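The credit-based scheme can be sketched as follows (a toy model; the buffer size and drain rate are arbitrary assumptions, and real systems such as Fibre Channel manage credits in hardware):

```python
from collections import deque

class Receiver:
    """Advertises credits equal to its free buffer space."""
    def __init__(self, buffer_size):
        self.buffer = deque()
        self.size = buffer_size

    def credits(self):
        return self.size - len(self.buffer)

    def accept(self, item):
        assert len(self.buffer) < self.size   # credits make overflow impossible
        self.buffer.append(item)

    def drain(self, n):                       # application consumes buffered data
        for _ in range(min(n, len(self.buffer))):
            self.buffer.popleft()

def send_with_credits(data, rx):
    """Sender transmits only while it holds credits, pausing otherwise."""
    queue = list(data)
    while queue:
        credit = rx.credits()                 # receiver advertises its capacity
        if credit == 0:
            rx.drain(2)                       # wait for the receiver to free space
            continue
        for _ in range(min(credit, len(queue))):
            rx.accept(queue.pop(0))

rx = Receiver(buffer_size=3)
send_with_credits(range(7), rx)               # never overflows the 3-slot buffer
```

The assertion inside accept() documents the guarantee: the sender can never hand the receiver more data than it has room for.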
Q2.
Multiplexing in data transmission is the gathering of data chunks at the source host from
different sockets, encapsulating each data chunk with header information to create segments,
and passing the segments to the network layer. The diagram below shows the process of
multiplexing.
Multiplexing plays an important role in modern networks, including the following:
Internet Communication (TCP/UDP)
Multiple applications, such as browsing, email, and streaming, use the same network connection
to transmit data through the TCP/UDP protocols, which encapsulate data streams by assigning
port numbers, ensuring that many services can run on one device simultaneously. This has
made it possible for a user to multitask (listening to music, working on a document, surfing the
net) without interruption or interference while using the computer.
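Port-based multiplexing and demultiplexing can be modelled in a few lines (a pure simulation; the port numbers and message contents are illustrative, not real sockets):

```python
def multiplex(app_messages):
    """Sender side: wrap each application's data in a segment whose
    header carries the source and destination port numbers."""
    return [{"src": src, "dst": dst, "data": data}
            for (src, dst, data) in app_messages]

def demultiplex(segments, listening_ports):
    """Receiver side: hand each segment to the socket bound to its
    destination port; segments for unknown ports are dropped."""
    sockets = {port: [] for port in listening_ports}
    for seg in segments:
        if seg["dst"] in sockets:
            sockets[seg["dst"]].append(seg["data"])
    return sockets

# A browser (port 80) and a mail client (port 25) share one link.
wire = multiplex([(51000, 80, "GET /"), (51001, 25, "HELO"),
                  (51000, 80, "GET /img")])
inbox = demultiplex(wire, listening_ports=[80, 25])
assert inbox[80] == ["GET /", "GET /img"]     # each service sees only its data
assert inbox[25] == ["HELO"]
```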
Telephone Networks (TDM & FDM)
In traditional telephony, Time Division Multiplexing (TDM) and Frequency Division Multiplexing
(FDM) are used to carry multiple voice calls over the same physical line, increasing the
efficiency of the communication network. FDM is an analog multiplexing technique that divides
the total available bandwidth of a communication channel into a series of non-overlapping
frequency bands; each phone call is assigned its own unique frequency band, and all calls are
transmitted simultaneously over the same medium. TDM is a digital multiplexing technique that
allows multiple data streams to share a single channel by dividing the channel's time into
discrete, recurring slots, and it is the dominant multiplexing method in today's digital telephone
networks. An example of this technique is a single optical fiber carrying thousands of calls at
once.
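TDM's recurring slots can be illustrated with a short sketch (the "calls" and their sample names are invented; real systems interleave fixed-duration hardware slots):

```python
def tdm_multiplex(streams):
    """Interleave one time slot from each stream, round robin, onto a
    single shared channel."""
    frame_len = max(len(s) for s in streams)
    channel = []
    for t in range(frame_len):
        for i, s in enumerate(streams):
            # None marks an idle slot when a stream has nothing to send
            channel.append((i, s[t] if t < len(s) else None))
    return channel

def tdm_demultiplex(channel, n_streams):
    """Receiver reassembles each stream from its recurring slot position."""
    out = [[] for _ in range(n_streams)]
    for stream_id, sample in channel:
        if sample is not None:
            out[stream_id].append(sample)
    return out

calls = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]
mux = tdm_multiplex(calls)                    # a1 b1 c1 a2 b2 c2 on the wire
assert tdm_demultiplex(mux, 3) == calls       # each call is recovered intact
```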
Digital TV and Radio Broadcasting
Frequency Division Multiplexing (FDM) enables multiple TV channels or FM radio stations to
share the same frequency spectrum without interference. The process works by taking different
sources, such as video streams for various TV channels, audio for different radio stations, and
even additional data services like program guides or closed captions. These signals are
compressed and then combined into a single, high-speed data stream.
Satellite and Cellular Communication
In satellite communication, multiplexing is essential for enabling multiple ground stations to
communicate with a single satellite transponder simultaneously. The most common technique
is Frequency Division Multiple Access (FDMA), which divides the satellite's available frequency
spectrum into non-overlapping frequency bands or channels; each user or earth station is
assigned a specific frequency band for continuous transmission. In cellular networks,
multiplexing allows many mobile users to communicate with a single base station. A primary
technique, Code Division Multiple Access (CDMA), is a spread-spectrum technology in which
multiple users transmit on the same frequency band simultaneously. It assigns a unique
pseudo-random code to each user's data stream, and the data is "spread" across a wide
frequency band using this code. To receive the signal, the receiver uses the same unique code
to "de-spread" it, isolating the desired signal from all others, which appear as background
noise.
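The spread/de-spread idea can be demonstrated with length-4 Walsh-style codes (a minimal noiseless sketch; real CDMA uses much longer pseudo-random codes and must handle noise and power control):

```python
# Orthogonal chip codes for two users (an assumed, simplified choice).
CODE_A = [1,  1, 1,  1]
CODE_B = [1, -1, 1, -1]

def spread(bits, code):
    """Map each data bit (0/1) to a +/-1 level and multiply it by the
    user's chip code, spreading one bit over len(code) chips."""
    signal = []
    for b in bits:
        level = 1 if b else -1
        signal.extend(level * c for c in code)
    return signal

def despread(channel, code):
    """Correlate the combined channel with one user's code; the other
    user's orthogonal signal cancels out in the correlation."""
    bits = []
    for i in range(0, len(channel), len(code)):
        chunk = channel[i:i + len(code)]
        corr = sum(x * c for x, c in zip(chunk, code))
        bits.append(1 if corr > 0 else 0)
    return bits

# Both users transmit at once on the same band: their signals simply add.
channel = [a + b for a, b in
           zip(spread([1, 0, 1], CODE_A), spread([0, 0, 1], CODE_B))]
assert despread(channel, CODE_A) == [1, 0, 1]   # user A recovered
assert despread(channel, CODE_B) == [0, 0, 1]   # user B recovered
```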
Optical Communication Network
Fiber optic Mux and Demux units enhance optical communication networks by increasing
transmission capacity, enabling long-distance communication, and providing network topology
flexibility. Using multiplexers and demultiplexers, multiple signals can be transmitted on the
same fiber, maximizing bandwidth to meet growing needs. They extend signal transmission
distances through WDM and DWDM, supporting long-distance communications like backbone
networks and submarine cables. Additionally, they allow cross-connections between different
fiber links, improving signal routing and network management efficiency.
Demultiplexing diagram
Data Center
Fiber optic Mux and Demux units are widely used in data centers to improve bandwidth utilization and
data transmission efficiency. It enables high-density data transmission by combining multiple
optical signals into a single signal, thereby increasing the overall bandwidth of the data center.
Additionally, it facilitates fast data exchange by allocating different data traffic to separate fiber
channels for parallel transmission, enhancing the speed and efficiency of data exchanges
between servers and storage devices. As data centers expand and business needs grow, fiber
optic Mux and Demux supports network expansion and upgrades by increasing the number of
fiber channels or utilizing tighter wavelength spacing to accommodate rising data demands.
Data center architecture diagram
Q3
As much as building reliable data transfer is important for communication, it comes with its
own challenges to implement effectively. The challenges can be categorized in terms of
unreliable networks, performance limitations, security threats, data integrity, and
environmental complexity.
Unreliable network
Packet loss refers to the loss or non-delivery of packets during data transmission over a
network. When data is transmitted through a network, it is broken into tiny units known as
packets. These packets carry a portion of the data as well as addressing information for proper
routing. Packets can get lost during transmission due to network congestion, faulty equipment,
or other issues. One real-life situation where packet loss can occur is during video streaming.
When you watch a video online, the video data is divided into packets and transmitted over the
internet to your device. If there is network congestion or a slow internet connection, some
packets may not reach your device in time. This can result in buffering or stuttering of the video,
as missing packets need to be retransmitted or skipped, causing interruptions in the playback.
Packet corruption or bit errors occur due to noise or interference, leading to invalid data, which
can have varying impacts depending on the specific application. In some cases, a single-bit error
may not be critical, especially if error correction techniques are employed to detect and correct
errors. However, in certain sensitive applications, such as high-speed data transfers or critical
systems like medical devices or aerospace systems, even a single bit error can lead to significant
consequences.
Performance limitations
Limited bandwidth, the maximum rate at which data can be sent, can be insufficient for a large
dataset, impacting the overall transfer time and reliability. This can slow down the speed at
which applications and processes run, leading to a decrease in overall system performance. For
instance, if a system is designed to handle a large volume of data but the available bandwidth is
limited, the system's resources may not be utilized to their full potential. This can lead to
inefficiencies and increased costs, as the system is not operating at its optimal capacity.
Network congestion occurs when the traffic moving through a network exceeds its maximum
capacity. Congestion can be caused by excessive bandwidth consumption, poor subnet
management, broadcast storms, multicasting, Border Gateway Protocol issues, too many
devices, outdated hardware, and over-subscription. It impacts data transfer in terms of
queueing delay, packet loss, slow networks, blocking of new connections, and low throughput.
System latency refers to the overall time it takes for a request to go from its origin in the system
to its destination and receive a response. The time between clicking and seeing the updated
webpage is the system latency. It includes processing time on both client and server, network
transfers, and rendering delays. The time taken for each step, transmitting the action, server
processing, transmitting the response, and updating your screen, contributes to the overall
latency.
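End-to-end latency is usually quantified by timing the full request/response round trip, for example (a minimal sketch; the sleep stands in for network and server time and is purely illustrative):

```python
import time

def measure_latency(operation, rounds=5):
    """Time a request/response operation several times; report the best
    and average observed round-trip latency in seconds."""
    samples = []
    for _ in range(rounds):
        start = time.perf_counter()
        operation()                          # send request, wait for response
        samples.append(time.perf_counter() - start)
    return min(samples), sum(samples) / len(samples)

# A stand-in "request" whose server-side processing takes about 10 ms.
best, average = measure_latency(lambda: time.sleep(0.01))
assert best >= 0.009                         # latency is at least the work done
```

Taking the minimum of several samples filters out one-off scheduling noise; the average better reflects what a user experiences.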
Data security and integrity issues
Data security is the process of safeguarding digital information throughout its entire life cycle to
protect it from corruption, theft, or unauthorized access. A poorly designed system can cause a
data breach; for instance, a high-profile hack or loss of data can result in customers losing trust
in an organization and taking their business to a competitor. This also runs the risk of serious
financial losses, along with fines, legal payments, and damage repair in case sensitive data is
lost.
Data integrity refers to the overall accuracy, consistency, and reliability of data stored in a
database, data warehouse, or any other information storage system. Compromised data can
lead to inaccuracies in reports and analysis, and these inaccuracies can have severe
consequences, as they can result in misguided decisions, inefficient operations, and loss of
competitive advantage. Such problems also erode trust in data: individuals may become
reluctant to rely on data-driven insights and may instead resort to intuition or guesswork,
hindering the decision-making process. There are also regulatory compliance issues:
organizations are required to maintain accurate and reliable data to meet the standards set by
regulatory bodies, and failure to ensure data integrity can result in non-compliance, leading to
fines, penalties, and reputational damage.
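One common integrity safeguard is storing a cryptographic digest alongside the data and recomputing it before trusting the record (a minimal sketch using Python's standard hashlib; the record contents are invented):

```python
import hashlib

def fingerprint(record: bytes) -> str:
    """SHA-256 digest of a record; any later change to the record
    produces a different digest."""
    return hashlib.sha256(record).hexdigest()

record = b"account=42;balance=1000"
stored = fingerprint(record)                  # saved next to the data

# Later, before using the data, recompute and compare:
assert fingerprint(b"account=42;balance=1000") == stored   # record is intact
assert fingerprint(b"account=42;balance=9000") != stored   # tampering detected
```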
Environmental complexity
A heterogeneous network is a type of network architecture that combines multiple
access technologies and different types of cells to provide seamless connectivity and improve
network performance. Despite its benefits, it also complicates reliable data transfer.
Interoperability issues: different network technologies use varying communication protocols,
which can lead to incompatibility; ensuring seamless data transfer between these disparate
networks requires complex management systems and middleware to translate protocols and
maintain connectivity. Interference and resource management issues: the various network
technologies often operate in the same frequency bands, which can cause signal interference;
effectively managing these shared resources and coordinating transmissions is crucial to
prevent performance degradation. Handover and mobility issues: seamlessly transitioning
connections from one network type to another, as from a cellular network to a Wi-Fi network,
can be a challenge; poorly executed handovers can result in temporary interruptions or
dropped connections, which negatively impacts data transfer, especially for real-time
applications.
Heterogeneous network diagram
References
1. Kurose, J. F. & Ross, K. W. (2013). Computer Networking: A Top-Down Approach, 6th
edition. United States of America: Pearson Education.
2. Yusmadie, N. A., College of Computing, Informatics and Media, UiTM Tapah, Malaysia.
"Challenges in Data Communication and Networking."
3. Latha, M., Assistant Professor (2019). "Data Communication and Networks", IV B.Tech
I Semester (JNTUA-R15).
4. Jamieson, K. "Computer Networks", Lecture 5: The Transport Layer.