
The Copperbelt University

SCHOOL OF INFORMATION COMMUNICATION TECHNOLOGY


DEPARTMENT OF COMPUTER ENGINEERING
IS 212 INTERNET TECHNOLOGIES
ASSIGNMENT 2

NAME: CLEMENCY CHIFUNDO MICHEAL


SURNAME: NJOBVU
PROGRAM: BSc. INFORMATION SYSTEMS
SIN: 18121550
LECTURER: Dr. SAMPA NKONDE
Question 1
1. Open Systems Interconnection (OSI) provides a general framework for standardization
and defines a set of layers and services provided by each layer. With the aid of a schematic
diagram, explain how the TCP/IP Protocol Suite compares with the OSI model and clearly
explain the functions of the layers.

[Schematic diagram comparing the TCP/IP model with the OSI model]
The diagram above compares the TCP/IP model with the OSI model.
The OSI (Open Systems Interconnection) model was introduced by the ISO (International Organization for Standardization). It is not a protocol but a reference model based on the concept of layering. As shown in the diagram, the OSI model has seven layers and follows a bottom-up approach to transferring data. It is robust and flexible, but it is a conceptual model rather than a tangible implementation. The purpose of the OSI reference model is to guide the design and development of digital communication hardware, devices and software so that they can interoperate efficiently.
The seven layers of the OSI model and their functions are discussed below.
1. Physical layer - Transmits raw bits over a medium and provides the mechanical and electrical specifications of the interface.
2. Data link layer - Organizes bits into frames and provides hop-to-hop (node-to-node) delivery.
3. Network layer - Provides internetworking and moves packets from source to destination.
4. Transport layer - Builds on the network layer to provide data transport from a process on the source machine to a process on the destination machine.
5. Session layer - Establishes, manages, and terminates sessions.
6. Presentation layer - Translates, encrypts and compresses data, and converts data into the proper format.
7. Application layer - Allows users to access network resources.
TCP/IP (Transmission Control Protocol / Internet Protocol), by contrast, was developed by the United States Department of Defense (DoD) research projects agency. Unlike the OSI model, it consists of four layers, each with its own protocols. Internet protocols are the sets of rules defined for communication over the network. TCP/IP is considered the standard protocol model for networking: TCP handles data transmission and IP handles addressing. The TCP/IP protocol suite includes TCP, UDP, ARP, DNS, HTTP, ICMP and other protocols. It is a robust and flexible model, and it is the model most widely used for interconnecting computers over the Internet.
The four layers of the TCP/IP model are outlined below.
1. Network Interface layer: Acts as the interface between hosts and transmission links and is used for transmitting datagrams. It also specifies what operations links such as serial lines and classic Ethernet must perform to satisfy the requirements of the connectionless internet layer.
2. Internet layer: Injects independent packets into any network and lets them travel to the destination. It defines IP (Internet Protocol) as the standard packet format for the layer, along with ICMP (Internet Control Message Protocol) and ARP (Address Resolution Protocol).
3. Transport layer: Provides end-to-end delivery of data between the source and destination hosts. The protocols defined at this layer are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
4. Application layer: Permits users to access the services of the global or a private internet. Protocols at this layer include virtual terminal (TELNET), electronic mail (SMTP) and file transfer (FTP), as well as DNS (Domain Name System), HTTP (Hypertext Transfer Protocol) and RTP (Real-time Transport Protocol). This layer combines the functions of the application, presentation and session layers of the OSI model.
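The correspondence between the two models described above can be summarized in a small sketch. The mapping below is an illustrative summary for this answer, not an official table:

```python
# Illustrative mapping of the seven OSI layers onto the four TCP/IP layers
# described above (an assumption for this sketch, not an official standard).
OSI_TO_TCPIP = {
    "Application":  "Application",
    "Presentation": "Application",       # folded into TCP/IP's Application layer
    "Session":      "Application",       # folded into TCP/IP's Application layer
    "Transport":    "Transport",
    "Network":      "Internet",
    "Data Link":    "Network Interface",
    "Physical":     "Network Interface",
}

for osi_layer, tcpip_layer in OSI_TO_TCPIP.items():
    print(f"{osi_layer:<12} -> {tcpip_layer}")
```

Note how the top three OSI layers collapse into the single TCP/IP Application layer, and the bottom two into the Network Interface layer.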
Question 2
1. Explain how flow control is realized in TCP
When data is transmitted, the recipient may not always be able to accept it as fast as it arrives. To avoid data loss, flow control is used to prevent loss when the receiving buffer fills up. It is implemented by sending signals back to the source when the sending or receiving buffer begins to overflow, telling the sender to slow down. Flow control effectively prevents a large instantaneous burst of data from overwhelming the receiver and helps ensure the efficient and stable operation of the network.
• There is a "sending window" on the sender's host and a "receiving window" on the receiver's host.
• During TCP connection setup, the two parties negotiate the window size.
• Based on the negotiated size, the sender transmits a stream of data bytes that fits within the window and then waits for acknowledgement from the other party's receiving window (the acknowledgement mechanism).
• The sender adjusts the size of the window according to the acknowledgement information it receives.
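The window behaviour described above can be sketched as a toy model. The function below is an illustration only (the function name, buffer size and return value are assumptions for this sketch, not part of TCP itself): the sender never puts more bytes in flight than the receiver's advertised window allows, then waits for an acknowledgement that re-opens the window.

```python
def send_stream(data: bytes, recv_buffer_size: int) -> list:
    """Toy model of TCP flow control: the receiver advertises a window equal
    to the free space in its buffer; the sender transmits at most that many
    bytes, then waits for an ACK that re-opens the window as the receiving
    application drains the buffer."""
    sent = 0
    rounds = []
    while sent < len(data):
        window = recv_buffer_size          # receiver's advertised window
        chunk = data[sent:sent + window]   # sender respects the window limit
        sent += len(chunk)
        rounds.append(len(chunk))          # ACK arrives; window re-opens
    return rounds

print(send_stream(b"x" * 10, 4))  # -> [4, 4, 2]
```

With a 4-byte window, 10 bytes of data are delivered in three rounds of 4, 4 and 2 bytes: the sender is throttled by the receiver, never the other way round.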
2. With the aid of a schematic diagram, explain what is meant by "three-way handshake"
in TCP connection management.
A three-way handshake is used to create a TCP connection for reliable data transfer between devices. When a client requests a communication session with the server, the three-way handshake initiates TCP traffic in the three steps illustrated below.

Step 1: The client sends a SYN segment to the server

In the first step, the client initiates a connection with the server. It sends a segment with the SYN flag set, informing the server that the client wants to start communication and what its initial sequence number will be.
Step 2: The server receives the SYN segment from the client
The server responds to the client's request with a segment that has both the SYN and ACK flags set. The ACK acknowledges the segment that was received, and the SYN indicates the sequence number with which the server will start its own segments.
Step 3: The client receives the SYN-ACK from the server and responds with an ACK segment
In this final step, the client acknowledges the server's response. Both sides now have a stable connection and can begin the actual data transfer.
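In practice the operating system performs the handshake for us. The sketch below (an illustration, not an assignment requirement) shows this with Python sockets: the client's `connect()` sends the SYN and returns once the SYN-ACK/ACK exchange completes, and the server's `accept()` returns once the final ACK arrives.

```python
import socket
import threading

# The kernel carries out the SYN / SYN-ACK / ACK exchange during
# connect() and accept(); the application never sees the raw segments.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, addr = server.accept()     # returns after the client's final ACK
    conn.sendall(b"connected")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # sends SYN, waits for SYN-ACK, sends ACK
data = client.recv(16)
print(data)                          # -> b'connected'
client.close()
t.join()
server.close()
```

By the time `connect()` returns, the full three-step exchange has already happened, which is why the subsequent `recv()` can reliably receive data.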

3. Based on the concept of sliding window, explain how TCP performance can be improved
The simple ACK-per-packet (stop-and-wait) protocol wastes bandwidth, since the sender must delay sending the next packet until it receives an ACK. Using a sliding window can therefore improve TCP performance.
The sliding window allows the sender to transmit several packets, for example four at a time, without waiting for an ACK after each one. When a packet is acknowledged, the window slides forward to the next packet to be sent, still keeping up to four packets in flight. This keeps the link busy and minimizes idle time during transmission.
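The four-packet example above can be sketched as a toy sender. The function below is an illustration (its name, the event log format and the single cumulative ACK per round are assumptions of this sketch): up to `window_size` packets may be unacknowledged at once, and each ACK slides the window forward.

```python
def sliding_window_send(packets: list, window_size: int = 4) -> list:
    """Toy sliding-window sender: up to `window_size` packets may be in
    flight at once; each ACK slides the window forward by one, so the
    sender never stalls waiting for a per-packet ACK."""
    base = 0                 # oldest unacknowledged packet
    next_seq = 0             # next packet to send
    log = []
    while base < len(packets):
        # fill the window with as many packets as it allows
        while next_seq < len(packets) and next_seq < base + window_size:
            log.append(f"send {next_seq}")
            next_seq += 1
        # an ACK for the oldest packet arrives, sliding the window forward
        log.append(f"ack {base}")
        base += 1
    return log

print(sliding_window_send(list(range(6)), window_size=4))
```

Running this shows the sender bursting four packets before the first ACK, in contrast to stop-and-wait, which would alternate one send with one ACK for all six packets.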
Question 3
1. State and explain the four TCP congestion control algorithms.
The TCP congestion control algorithms prevent a sender from overrunning the capacity of the network (for example, slower WAN links). TCP adapts the sender's rate to the network capacity and attempts to avoid potential congestion situations. TCP uses the following four intertwined algorithms: slow start, congestion avoidance, fast retransmit and fast recovery.
1. Slow start
TCP slow start is an algorithm that balances the speed of a network connection. Slow start gradually increases the amount of data transmitted until it finds the network's maximum carrying capacity. It works as outlined below.
• A sender begins communicating with a receiver. The sender's initial transmission uses a small congestion window, determined from the sender's maximum window.
• The receiver acknowledges the packets and responds with its own window size. If the receiver fails to respond, the sender knows not to continue sending data.
• After receiving the acknowledgements, the sender increases the window for the next round of packets. The window size grows until the receiver can no longer acknowledge every packet, or until either the sender's or the receiver's window limit is reached.
Once a limit has been determined, slow start's job is done, and other congestion control algorithms take over to maintain the speed of the connection.
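The growth pattern described above can be sketched numerically. The function below is an illustration (the function name and the unit of one segment per MSS are assumptions of this sketch): the congestion window starts at one segment and doubles every round-trip time until it reaches the slow-start threshold, after which congestion avoidance takes over.

```python
def slow_start(ssthresh: int, mss: int = 1) -> list:
    """Toy model of TCP slow start: cwnd starts at one segment and doubles
    each round-trip time (growing by one MSS per ACK received) until it
    reaches ssthresh, where congestion avoidance takes over."""
    cwnd = mss
    history = [cwnd]
    while cwnd < ssthresh:
        cwnd = min(cwnd * 2, ssthresh)   # exponential growth, capped at ssthresh
        history.append(cwnd)
    return history

print(slow_start(ssthresh=16))  # -> [1, 2, 4, 8, 16]
```

Despite its name, slow start is exponential: the window reaches 16 segments in only four round trips.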
2. Congestion avoidance
Congestion control reacts once congestion has happened: TCP repeatedly increases the load it imposes on the network in an effort to find the point at which congestion occurs, and then backs off from that point.
Congestion avoidance, by contrast, tries to predict when congestion is about to happen and to reduce the rate at which hosts send data just before packets start being discarded, but this predictive approach is not yet widely used.
Three congestion avoidance methods are the DECbit scheme, Random Early Detection (RED) and source-based congestion control.
3. Fast retransmit
This is is a modification to the congestion avoidance algorithm. As when the sender
receives for instance 3rd duplicate ACK, it assumes that the packet is lost and retransmit
that packet without waiting for a retransmission timer to expire. After retransmission, the
sender continues normal data transmission. That means TCP does not wait for the other
end to acknowledge the retransmission.
4. Fast recovery
Fast recovery is the last of the four improvements to TCP. Using only fast retransmit, the congestion window is dropped down to 1 each time network congestion is detected, so it takes some time to regain high link utilization. Fast recovery alleviates this problem by removing the slow-start phase in this case: slow start is used only at the beginning of a connection and whenever a retransmission timeout (RTO) expires.
The reason for not performing slow start after receiving three duplicate ACKs is that duplicate ACKs tell the sender more than just that a packet has been lost. Since the receiver can generate a duplicate ACK only when it receives an out-of-order packet, a duplicate ACK shows the sender that one packet has left the network. The sender therefore does not need to drastically decrease cwnd to 1 and restart slow start. Instead, the sender decreases cwnd to one-half of its current value and increases cwnd by one segment for each further duplicate ACK it receives.
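The combined behaviour of fast retransmit and fast recovery can be sketched as follows. The function below is a toy model (the function name, the event log and the single cwnd halving are assumptions of this sketch, and window inflation per extra duplicate ACK is omitted for brevity): three duplicates of the same ACK trigger an immediate retransmission, and cwnd is halved rather than reset to 1.

```python
def react_to_acks(acks: list, cwnd: int = 16) -> list:
    """Toy fast retransmit / fast recovery. `acks` is a sequence of ACK
    numbers; the third duplicate of the same ACK triggers an immediate
    retransmission, and cwnd is halved instead of being reset to 1
    (no slow-start restart)."""
    dup_count = 0
    last_ack = None
    events = []
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:                  # third duplicate ACK seen
                events.append(f"fast-retransmit seg {ack}")
                cwnd = max(cwnd // 2, 1)        # fast recovery: halve cwnd
                events.append(f"cwnd -> {cwnd}")
        else:
            last_ack, dup_count = ack, 0
    return events

print(react_to_acks([1, 2, 3, 3, 3, 3, 7], cwnd=16))
```

Here the three extra ACKs for segment 3 trigger one retransmission and halve cwnd from 16 to 8; the later ACK of 7 shows the retransmission filled the gap, and transmission continues without a slow-start restart.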

2. Briefly explain the two main approaches to implementing QoS in TCP/IP networks

QoS (Quality of Service) can be described as a set of parameters that characterize the quality (for example, bandwidth, buffer usage, priority, and CPU usage) of a specific stream of data. The two main approaches to implementing QoS in TCP/IP networks are:
1. Integrated Services
2. Differentiated Services
Integrated Services (IntServ)
The Integrated Services framework was developed within the IETF to provide individualized QoS guarantees to individual sessions. It provides service on a per-flow basis, where a flow is a packet stream with a common source address, destination address and port number. Integrated Services routers must maintain per-flow state information. The framework has two key features, resource reservation and call setup, and it defines three service classes: best-effort service, controlled-load service and guaranteed service.
Differentiated Services (DiffServ)
In Differentiated Services, flows are aggregated into classes that receive treatment by class. More complex operations are pushed out to the edge routers, while simpler operations are performed by the core routers. Per-hop behaviour (PHB) defines the differences in performance among the classes. Differentiated Services is motivated by scalability, flexibility, and better-than-best-effort service without RSVP signalling.
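From the application side, DiffServ class selection amounts to marking packets with a DSCP value. The sketch below assumes a Linux-like socket API (the constant name `DSCP_EF` is this sketch's own): the DSCP occupies the top six bits of the IP TOS byte, and 46 is the standard Expedited Forwarding code point.

```python
import socket

# Minimal sketch of DiffServ marking from an application (assumes a
# Linux-like socket API). The DSCP value sits in the upper six bits of
# the IP TOS byte; 46 is the Expedited Forwarding (EF) code point.
DSCP_EF = 46
tos = DSCP_EF << 2                    # shift DSCP into the TOS byte: 184

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Edge routers can now classify this flow's packets by their DSCP marking.
tos_readback = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos_readback)
sock.close()
```

Whether the marking is honoured depends entirely on the routers along the path; edge routers typically re-mark or police traffic according to local DiffServ policy.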
