PG NP Mod 1 Notes
R. L. JALAPPA INSTITUTE OF TECHNOLOGY
(Approved by AICTE, New Delhi, Affiliated to VTU, Belagavi & Accredited by NAAC “A” Grade)
Kodigehalli, Doddaballapur- 561 203
Department of CS&E - PG
Subject Code: MCS203
Subject Name: Network Programming Module Number: 01
Name of the Module: Introduction Scheme: 2024
Prepared by: Dr. Mamatha C M
Professor
Institute Vision
To be a premier Institution by imparting quality Technical education, Professional Training and Research.
Institute Mission
M1: To provide an outstanding Teaching, Learning and Research environment through Innovative Practices in
Quality Education.
M2: To develop Leaders with a high level of Professionalism to have careers in the Industry, zeal for Higher
Education, and a focus on Entrepreneurial and Societal activities.
Department Vision- PG
To nurture students with advanced expertise and research-oriented skills in Computer Science and Engineering,
empowering them to drive technological innovation and thrive in an evolving global landscape.
Department Mission- PG
M1: To foster advanced skills in specialized domains of Computer Science and Engineering, equipping students
with the necessary expertise to address contemporary challenges and meet the evolving demands of the global
industry.
M2: To promote cutting-edge research and technological innovation, while cultivating entrepreneurship and
consultancy skills that empower students to contribute to the technological needs of industries, governments, and
society.
PROGRAMME SPECIFIC OUTCOMES (PSOs)
PSO1: Students will have knowledge of Advanced Software, Hardware, Network Models, and Algorithms.
PSO2: Students will be able to develop applications in the areas related to Artificial Intelligence, Machine
Learning, Data Science and IoT for efficient design of computer-based systems.
PROGRAMME EDUCATIONAL OBJECTIVES (PEOs)
PEO1: Our Graduates will have prospective careers in the IT Industry.
PEO2: Our Graduates will exhibit a high level of Professionalism and Leadership skills in work Environment.
PEO3: Our Graduates will pursue Research, and focus on Entrepreneurship.
Module-1
Introduction to network applications, client/server communication, OSI Model, BSD Networking history, Test Networks
and Hosts, Unix Standards, 64-bit architectures, Transport Layer: TCP, UDP and SCTP
OSI Model:
What is OSI Model? – Layers of OSI Model
The OSI (Open Systems Interconnection) Model is a set of rules that explains how different computer systems
communicate over a network. It was developed by the International Organization for Standardization (ISO).
The OSI Model consists of 7 layers, and each layer has specific functions and responsibilities. This layered approach
makes it easier for different devices and technologies to work together, provides a clear structure for data
transmission, and helps in managing network issues. The OSI Model is widely used as a reference to understand how
network systems function.
Layers of the OSI Model
There are 7 layers in the OSI Model and each layer has its specific role in handling data. All the layers are mentioned
below:
Physical Layer
Data Link Layer
Network Layer
Transport Layer
Session Layer
Presentation Layer
Application Layer
Layer 1 – Physical Layer
The lowest layer of the OSI reference model is the Physical Layer. It is responsible for the actual physical connection
between the devices. The physical layer contains information in the form of bits. Physical Layer is responsible for
transmitting individual bits from one node to the next. When receiving data, this layer gets the incoming signal,
converts it into 0s and 1s, and sends them to the Data Link Layer, which puts the frame back together. Common
Physical Layer devices are hubs, repeaters, modems, and cables.
Functions of the Physical Layer
Bit Synchronization: The physical layer provides the synchronization of the bits by providing a clock. This clock
controls both sender and receiver thus providing synchronization at the bit level.
Bit Rate Control: The Physical layer also defines the transmission rate i.e. the number of bits sent per second.
Physical Topologies: The Physical layer specifies how the different devices/nodes are arranged in a network, i.e., bus
topology, star topology, or mesh topology.
Transmission Mode: Physical layer also defines how the data flows between the two connected devices. The various
transmission modes possible are Simplex, half-duplex and full duplex.
Layer 2 – Data Link Layer (DLL)
The data link layer is responsible for the node-to-node delivery of the message. The main function of this layer is to
make sure data transfer is error-free from one node to another, over the physical layer. When a packet arrives in a
network, it is the responsibility of the DLL to transmit it to the Host using its MAC address. Packet in the Data Link
layer is referred to as Frame. Switches and Bridges are common Data Link Layer devices.
The Data Link Layer is divided into two sublayers:
Logical Link Control (LLC)
Media Access Control (MAC)
The packet received from the Network layer is further divided into frames depending on the frame size of the NIC
(Network Interface Card). DLL also encapsulates Sender and Receiver’s MAC address in the header.
The Receiver’s MAC address is obtained by placing an ARP (Address Resolution Protocol) request onto the wire
asking, “Who has that IP address?” and the destination host will reply with its MAC address.
Functions of the Data Link Layer
Framing: Framing is a function of the data link layer. It provides a way for a sender to transmit a set of bits that are
meaningful to the receiver. This can be accomplished by attaching special bit patterns to the beginning and end of the
frame.
Physical Addressing: After creating frames, the Data link layer adds physical addresses (MAC addresses) of the sender
and/or receiver in the header of each frame.
Error Control: The data link layer provides the mechanism of error control in which it detects and retransmits damaged
or lost frames.
Flow Control: The data rate must be constant on both sides else the data may get corrupted thus, flow control
coordinates the amount of data that can be sent before receiving an acknowledgment.
Access Control: When a single communication channel is shared by multiple devices, the MAC sub-layer of the data
link layer helps to determine which device has control over the channel at a given time.
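The framing function described above can be sketched with a simple character-stuffing scheme: special marker bytes delimit the frame, and any occurrence of those bytes inside the payload is escaped. The FLAG and ESC values below are illustrative choices (similar in spirit to those used by HDLC/PPP), not the exact format of any real protocol:

```c
#include <stddef.h>

/* Illustrative marker values, not any protocol's actual constants. */
#define FLAG 0x7E   /* marks the start and end of a frame */
#define ESC  0x7D   /* escapes FLAG/ESC bytes inside the payload */

/* Frame a payload by surrounding it with FLAG bytes and escaping any
 * FLAG or ESC bytes that occur in the data. Returns the framed length.
 * The caller must supply an output buffer of at least 2*len + 2 bytes. */
size_t frame(const unsigned char *data, size_t len, unsigned char *out)
{
    size_t n = 0;
    out[n++] = FLAG;                      /* start-of-frame marker */
    for (size_t i = 0; i < len; i++) {
        if (data[i] == FLAG || data[i] == ESC) {
            out[n++] = ESC;               /* escape the special byte */
            out[n++] = data[i] ^ 0x20;    /* flip one bit so it differs */
        } else {
            out[n++] = data[i];
        }
    }
    out[n++] = FLAG;                      /* end-of-frame marker */
    return n;
}
```

The receiver reverses the process: it discards the FLAG delimiters and un-escapes any ESC sequences to recover the original payload.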
Layer 3 – Network Layer
The network layer works for the transmission of data from one host to the other located in different networks. It also
takes care of packet routing i.e. selection of the shortest path to transmit the packet, from the number of routes
available. The sender's and receiver's IP addresses are placed in the header by the Network layer. A segment in the
Network layer is referred to as a Packet. The Network layer is implemented by networking devices such as routers.
Functions of the Network Layer
Routing: The network layer protocols determine which route is suitable from source to destination. This function of the
network layer is known as routing.
Logical Addressing: To identify each device inter-network uniquely, the network layer defines an addressing scheme.
The sender and receiver’s IP addresses are placed in the header by the network layer. Such an address distinguishes
each device uniquely and universally.
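As a small illustration of logical addressing, the standard inet_pton() and inet_ntop() functions convert between the dotted-decimal text form of an IPv4 address and the 32-bit binary form carried in the Network layer header. The helper below is a sketch for illustration; the function name is our own:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>

/* Convert a dotted-decimal IPv4 string to its 32-bit binary form and
 * back again. Returns 1 on success, 0 if the string is not a valid
 * IPv4 address. */
int roundtrip_ipv4(const char *text, char *out, size_t outlen)
{
    struct in_addr addr;
    if (inet_pton(AF_INET, text, &addr) != 1)
        return 0;                         /* invalid address string */
    return inet_ntop(AF_INET, &addr, out, outlen) != NULL;
}
```

For example, roundtrip_ipv4("192.168.1.10", buf, sizeof(buf)) stores the 4-byte binary address internally and writes the same dotted-decimal string back into buf.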
Layer 4 – Transport Layer
The transport layer provides services to the application layer and takes services from the network layer. The data in the
transport layer is referred to as Segments. It is responsible for the end-to-end delivery of the complete message. The
transport layer also provides the acknowledgment of successful data transmission and re-transmits the data if an
error is found. Protocols used in the Transport Layer are TCP, UDP, and SCTP.
At the sender’s side, the transport layer receives the formatted data from the upper layers, performs Segmentation, and
also implements Flow and error control to ensure proper data transmission. It also adds Source and Destination port
number in its header and forwards the segmented data to the Network Layer.
Generally, this destination port number is configured, either by default or manually. For example, when a web
application requests a web server, it typically uses port number 80, because this is the default port assigned to web
applications. Many applications have default ports assigned.
At the Receiver’s side, Transport Layer reads the port number from its header and forwards the Data which it has
received to the respective application. It also performs sequencing and reassembling of the segmented data.
Functions of the Transport Layer
Segmentation and Reassembly: This layer accepts the message from the (session) layer and breaks the message into
smaller units. Each of the segments produced has a header associated with it. The transport layer at the destination
station reassembles the message.
Service Point Addressing: To deliver the message to the correct process, the transport layer header includes a type of
address called service point address or port address. Thus, by specifying this address, the transport layer makes sure
that the message is delivered to the correct process.
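Service point addressing can be illustrated with the sockets API: the sketch below fills in a struct sockaddr_in with an IPv4 wildcard address and a port number, converting the port to network byte order with htons(). The helper name is our own, chosen for illustration:

```c
#include <netinet/in.h>
#include <string.h>

/* Build a transport-layer address: the IPv4 wildcard address plus a
 * port number, with the port converted to network byte order. */
struct sockaddr_in make_addr(unsigned short port)
{
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));       /* zero all fields first */
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);          /* e.g., 80 for a web server */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    return addr;
}
```

A server would pass the returned structure to bind() so that incoming segments carrying destination port 80 are delivered to it.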
Services Provided by Transport Layer
Connection-Oriented Service
Connectionless Service
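With the sockets API, the choice between these two services is made when the socket is created: SOCK_STREAM selects the connection-oriented service (TCP) and SOCK_DGRAM the connectionless service (UDP). A minimal sketch, with a helper name of our own:

```c
#include <sys/socket.h>
#include <unistd.h>

/* Open either a connection-oriented (TCP) or connectionless (UDP)
 * IPv4 socket. Returns the descriptor, or -1 on failure. */
int open_transport_socket(int connection_oriented)
{
    int type = connection_oriented ? SOCK_STREAM : SOCK_DGRAM;
    return socket(AF_INET, type, 0);
}
```

A TCP socket must then go through connect() (or listen()/accept()) before data flows, while a UDP socket can sendto() immediately; that difference is exactly the connection-oriented versus connectionless distinction.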
Layer 5 – Session Layer
The Session Layer in the OSI Model is responsible for the establishment, management, and termination of sessions
between two devices. It also provides authentication and security. Protocols used in the Session Layer are NetBIOS
and PPTP.
Functions of the Session Layer
Session Establishment, Maintenance, and Termination: The layer allows the two processes to establish, use, and
terminate a connection.
Synchronization: This layer allows a process to add checkpoints that are considered synchronization points in the data.
These synchronization points help to identify the error so that the data is re-synchronized properly, and ends of the
messages are not cut prematurely, and data loss is avoided.
Dialog Controller: The session layer allows two systems to start communication with each other in half-duplex or full
duplex.
Layer 6 – Presentation Layer
The Presentation Layer receives data from the Application Layer and prepares it for transmission; its responsibilities
include translation, data compression, and encryption/decryption.
Example
Consider a user who wants to send a message through some Messenger application running in their browser. The
“Messenger” here acts as the application layer, which provides the user with an interface to create the data. This
message, or so-called data, is compressed and optionally encrypted (if the data is sensitive) by the Presentation
Layer before being passed down the stack to be converted into bits (0’s and 1’s) for transmission.
Layer 7 – Application Layer
The Application Layer is the topmost layer of the OSI Model and is implemented by the network applications
themselves. It produces the data to be transferred over the network and serves as a window through which application
services access the network.
Functions of the Application Layer
The main functions of the application layer are given below.
Network Virtual Terminal (NVT): It allows a user to log on to a remote host.
File Transfer Access and Management (FTAM): This application allows a user to access files in a remote host, retrieve
files in a remote host, and manage or control files from a remote computer.
Mail Services: Provide email service.
Directory Services: This application provides distributed database sources and access for global information about
various objects and services.
How Data Flows in the OSI Model?
When we transfer information from one device to another, it travels through the 7 layers of the OSI Model. Data first
travels down through the 7 layers at the sender’s end and then climbs back up through the 7 layers at the receiver’s end.
Data flows through the OSI model in a step-by-step process:
Application Layer: Applications create the data.
Presentation Layer: Data is formatted and encrypted.
Session Layer: Connections are established and managed.
Transport Layer: Data is broken into segments for reliable delivery.
Network Layer: Segments are packaged into packets and routed.
Data Link Layer: Packets are framed and sent to the next device.
Physical Layer: Frames are converted into bits and transmitted physically.
Each layer adds specific information to ensure the data reaches its destination correctly, and these steps are reversed
upon arrival.
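The encapsulation steps above can be pictured as each layer prepending its own header to whatever it receives from the layer above. The toy sketch below uses made-up text tags rather than real header formats, purely to show the nesting:

```c
#include <stdio.h>
#include <string.h>

/* Toy encapsulation: prepend an illustrative tag for each layer's
 * header in front of the application payload. The tags are labels
 * for teaching, not real Ethernet/IP/TCP header layouts.
 * Returns the number of characters written. */
int encapsulate(const char *payload, char *out, size_t outlen)
{
    return snprintf(out, outlen, "[ETH][IP][TCP]%s", payload);
}
```

On the receiving side the process runs in reverse: each layer strips its own header ("decapsulation") and hands the remaining bytes up to the layer above.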
We can understand how data flows through OSI Model with the help of an example mentioned below.
Let us suppose, Person A sends an e-mail to his friend Person B.
Step 1: Person A interacts with an e-mail application like Gmail or Outlook and writes the email to be sent. (This
happens at the Application Layer).
Step 2: At Presentation Layer, Mail application prepares for data transmission like encrypting data and formatting it for
transmission.
Step 3: At Session Layer, there is a connection established between the sender and receiver on the internet.
Step 4: At Transport Layer, Email data is broken into smaller segments. It adds sequence number and error-checking
information to maintain the reliability of the information.
Step 5: At Network Layer, addressing of packets is done in order to find the best route for transfer.
Step 6: At Data Link Layer, data packets are encapsulated into frames, then MAC address is added for local devices
and then it checks for error using error detection.
Step 7: At Physical Layer, Frames are transmitted in the form of electrical/ optical signals over a physical network
medium like ethernet cable or WiFi.
After the email reaches the receiver, i.e., Person B, the process is reversed and the e-mail content is decrypted.
Finally, the email is displayed in Person B’s email client.
Layer | Responsibility | Data Unit | Example Protocols
2 – Data Link Layer | Node-to-node delivery of the message | Frames | Ethernet, PPP, etc.
4 – Transport Layer | Takes service from the Network Layer and provides it to the Application Layer | Segments (for TCP) or Datagrams (for UDP) | TCP, UDP, SCTP, etc.
OSI vs TCP/IP
OSI Model | TCP/IP Model
In the OSI model, only layers 1, 2 and 3 are necessary for data transmission. | All layers of the TCP/IP model are needed for data transmission.
Protocols at each layer are independent of the other layers. | Layers are integrated; some layers are required by other layers of the TCP/IP model.
The OSI Model is a conceptual framework, less used in practical applications. | Widely used in actual networks like the Internet and communication systems.
Advantages of OSI Model
The OSI Model defines the communication of a computing system into 7 different layers. Its advantages include:
It divides network communication into 7 layers which makes it easier to understand and troubleshoot.
It standardizes network communications, as each layer has fixed functions and protocols.
Diagnosing network problems is easier with the OSI model.
It is easier to improve with advancements as each layer can get updates separately.
Disadvantages of OSI Model
The OSI Model has seven layers, which can be complicated and hard to understand for beginners.
In real-life networking, most systems use a simpler model called the Internet protocol suite (TCP/IP), so the OSI Model
is not always directly applicable.
Each layer in the OSI Model adds its own set of rules and operations, which can make the process more time-
consuming and less efficient.
The OSI Model is more of a theoretical framework, meaning it’s great for understanding concepts but not always
practical for implementation.
BSD(Berkeley Software Distribution) Networking history:
The history of the Berkeley Software Distribution began in the 1970s when University of California, Berkeley received
a copy of Unix. Professors and students at the university began adding software to the operating system and released it
as BSD to select universities. Since it contained proprietary Unix code, it originally had to be distributed subject to
AT&T licenses. The bundled software from AT&T was then rewritten and released as free software under the BSD
license. However, this resulted in a lawsuit with Unix System Laboratories, the AT&T subsidiary responsible for Unix.
Eventually, in the 1990s, the final versions of BSD were publicly released without any proprietary licenses, which led
to many descendants of the operating system that are still maintained today.
Sl. No. | Year | Milestone
1 | Late 1970s | BSD started as a series of modifications to AT&T’s original UNIX, made by students and faculty at UC Berkeley.
2 | Early 1980s | The 4.x BSD series became increasingly independent and feature-rich, particularly in areas like file systems and networking.
3 | 1983 (4.2BSD) | Integrated the TCP/IP networking stack, making 4.2BSD the first widely available OS to do so. Developed under DARPA sponsorship, with the goal of implementing the emerging ARPANET protocols. Provided a reference implementation of the TCP/IP suite, which was adopted by many other operating systems.
4 | 1982 | Sockets API: this API became a de facto standard for Unix-like systems and is still in use today across Linux, macOS, and Windows. The sockets model helped unify communication between different network protocol families (e.g., IPv4, IPv6, Unix domain sockets).
5 | 1989 | Net/1: released the networking code from 4.3BSD as open source, under the BSD license.
6 | 1991 | Net/2: contained much of the BSD system minus AT&T-licensed code. This led to the creation of FreeBSD, NetBSD, and OpenBSD, all of which continued to develop advanced networking features.
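The sockets API introduced with BSD is still how network programs are written today. The sketch below exercises socket(), bind(), getsockname(), sendto(), and recvfrom() by sending a UDP datagram to itself over the loopback interface; the helper name and structure are our own, for illustration only:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send a message from one UDP socket to another over loopback and
 * read it back. Returns the number of bytes received, or -1 on any
 * failure. */
int loopback_roundtrip(const char *msg, char *out, size_t outlen)
{
    int rx = socket(AF_INET, SOCK_DGRAM, 0);   /* receiving socket */
    int tx = socket(AF_INET, SOCK_DGRAM, 0);   /* sending socket */
    if (rx < 0 || tx < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                          /* kernel picks a port */
    if (bind(rx, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return -1;

    socklen_t len = sizeof(addr);               /* learn the chosen port */
    if (getsockname(rx, (struct sockaddr *)&addr, &len) < 0)
        return -1;

    if (sendto(tx, msg, strlen(msg), 0,
               (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return -1;

    int n = (int)recvfrom(rx, out, outlen - 1, 0, NULL, NULL);
    if (n >= 0)
        out[n] = '\0';
    close(tx);
    close(rx);
    return n;
}
```

Every call used here (socket, bind, sendto, recvfrom) traces directly back to the 4.2BSD networking code, which is why the interface is still called the "BSD sockets" API.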
Test Networks and Hosts:
To test networks and hosts, you can use tools like ping to check basic connectivity, traceroute to trace the path of
packets, and telnet to verify port access. Other methods include network testing tools, which provide a range of
functionalities for verifying network performance and connectivity.
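The first step performed by tools like ping and telnet is resolving the target host string to a socket address, which a program can do with getaddrinfo(). The sketch below restricts itself to numeric addresses (AI_NUMERICHOST) so that it works without DNS; the helper name is our own:

```c
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>

/* Check whether a host string is a valid numeric IP address, the
 * first step of reaching a test host. AI_NUMERICHOST disables DNS
 * lookups, so this works offline. Returns 1 if valid, 0 otherwise. */
int host_is_valid_address(const char *host)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;         /* accept IPv4 or IPv6 */
    hints.ai_socktype = SOCK_DGRAM;
    hints.ai_flags = AI_NUMERICHOST;     /* numeric only; no DNS */
    if (getaddrinfo(host, NULL, &hints, &res) != 0)
        return 0;
    freeaddrinfo(res);                   /* release the result list */
    return 1;
}
```

Dropping the AI_NUMERICHOST flag would make the same call perform a real DNS lookup, turning this into the resolution step a connectivity tester would run against a hostname.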
Figure 1.1 shows the various networks and hosts used in the examples throughout the text. For each host, we show the
OS and the type of hardware (since some of the operating systems run on more than one type of hardware). The name
within each box is the hostname that appears in the text.
Unix Standards
The term UNIX standards generally refers to a set of specifications that define how UNIX
operating systems should behave to ensure compatibility, portability, and interoperability
across different UNIX systems. Here are the main UNIX standards:
1. POSIX (Portable Operating System Interface)
POSIX is an acronym for Portable Operating System Interface. POSIX is not a single standard, but a family of
standards being developed by the Institute for Electrical and Electronics Engineers, Inc., normally called the IEEE. The
POSIX standards have also been adopted as international standards by ISO and the International Electrotechnical
Commission (IEC), called ISO/IEC.
2. The Open Group
The Open Group was formed in 1996 by the consolidation of the X/Open Company (founded in 1984) and the Open
Software Foundation (OSF, founded in 1988). It is an international consortium of vendors and end-user customers from
industry, government, and academia.
3. Unification of Standards
The POSIX and Open Group efforts described above converged with The Austin Group's publication of The Single
UNIX Specification (SUS) Version 3. Getting over 50 companies to agree on a single standard is certainly a landmark
in the history of Unix. Most Unix systems today conform to some version of POSIX.1 and POSIX.2; many comply
with The Single UNIX Specification Version 3.
4. Internet Engineering Task Force (IETF)
The Internet Engineering Task Force (IETF) is a large, open, international community of network designers, operators,
vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the
Internet. It is open to any interested individual. The Internet standards process is documented in RFC 2026 [Bradner
1996]. Internet standards normally deal with protocol issues and not with programming APIs. Nevertheless, two RFCs
(RFC 3493 [Gilligan et al. 2003] and RFC 3542 [Stevens et al. 2003]) specify the sockets API for IPv6. These are
informational RFCs, not standards, and were produced to speed the deployment of portable applications by the
numerous vendors working on early releases of IPv6. Although standards bodies tend to take a long time, many APIs
were standardized in The Single Unix Specification Version 3.
64-Bit Architectures
During the mid to late 1990s, the trend began toward 64-bit architectures and 64-bit software. One reason is for larger
addressing within a process (i.e., 64-bit pointers), which can address large amounts of memory (more than 2^32 bytes).
The common programming model for existing 32-bit Unix systems is called the ILP32 model, denoting that integers (I),
long integers (L), and pointers (P) occupy 32 bits. The model that is becoming most prevalent for 64-bit Unix systems is
called the LP64 model, meaning only long integers (L) and pointers (P) require 64 bits. Figure 1.2 compares these two
models.
Figure 1.2. Comparison of the number of bits used to hold various datatypes in the ILP32 and LP64 models.
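The difference between the two models can be observed directly by printing the sizes of these datatypes. On an LP64 system (e.g., 64-bit Linux) the sketch below reports int = 32 bits while long and pointers are 64 bits; on an ILP32 system all three are 32 bits:

```c
#include <stdio.h>

/* Print the widths, in bits, of the datatypes that distinguish the
 * ILP32 and LP64 programming models. */
void print_model_sizes(void)
{
    printf("int     : %zu bits\n", sizeof(int) * 8);
    printf("long    : %zu bits\n", sizeof(long) * 8);
    printf("pointer : %zu bits\n", sizeof(void *) * 8);
}
```

Note that under both ILP32 and LP64, a long is always wide enough to hold a pointer; code that assumes an int can hold a pointer breaks when moved from 32-bit to 64-bit systems.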