
Client-Server Implementation in a Distributed System
1. Objectives
• To learn how clients and servers interact in a networked environment

• To set up multiple server instances to reflect a real-world distributed system

2. Theory
Introduction
In modern computing, client-server architecture is often implemented in distributed systems,
where server functionality is distributed across multiple servers. In this case, clients interact
with a network of interconnected servers rather than a single centralized server. Cloud computing is an example of such a system, where client requests are processed by various servers
across data centers worldwide, providing scalable resources based on demand. Distributed
client-server models support high availability, failover, and redundancy, allowing services to
continue operating even if one server fails.
The client-server model is a core framework in networked computing, supporting a wide
array of applications, from simple local networks to complex, distributed cloud systems.
Its defined roles, centralization, and ability to handle multiple clients concurrently make it
a highly efficient and scalable model for managing resources and services across network
environments. Despite the growing variety of network architectures, the client-server model
remains fundamental to data processing and resource management in modern computing.

Roles and Responsibilities of Clients and Servers


Client Role: The client component primarily serves as the user’s interface to the system.
It is responsible for allowing a computer user to request services from the server and for
displaying the results that the server returns. Clients are frequently situated at workstations
or on personal computers, acting as the point of interaction for end-users. A key aspect
of client behavior is that these devices typically do not share resources directly with other
clients; instead, they rely on the centralized server to provide the requested resources or
services.

Server Role: Conversely, the server component is designed to listen for incoming client
requests, process them, and then send the required information back to the requesting client.
Servers are generally more powerful machines, strategically located elsewhere on the network,
capable of handling multiple client requests concurrently.

Figure 1: Distributed Client-Server Architecture

3. Algorithm
Server-Side Algorithm
1. Initialize server with a unique ID, host, and port.

2. Bind the socket to the host and port, and start listening.

3. Continuously accept new client connections.

4. For each client:

• Start a new thread for the session.
• Receive data from the client.
• Process the data.
• Send a response.
• Close the client connection.
5. On shutdown, close the socket and unblock accept() by connecting once.
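Steps 3–4 can be sanity-checked in isolation by sending a single request directly to one server instance, bypassing the load balancer. The snippet below is a minimal sketch of such a test; it assumes a server instance with ID 1 is already listening on 127.0.0.1:8001, one of the backend ports used later by the load balancer.

import socket

# Direct test against one backend server (assumed to be listening on port 8001)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(('127.0.0.1', 8001))
    s.sendall("hello".encode('utf-8'))
    print(s.recv(1024).decode('utf-8'))  # expected form: "Server 1 processed: HELLO"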

Client-Side Algorithm
1. Define constants for load balancer host and port.

2. Implement a send_request(message) function:

• Open a socket.
• Connect to the load balancer.
• Send a message string encoded in UTF-8.
• Receive and decode the server’s response.

3. Simulate multiple clients in a loop:

• Iterate 10 times.
• Each iteration sends a request.
• Print the server’s response.
• Wait 0.5 seconds before the next request.
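As a concrete example (assuming the servers and load balancer from Section 4 are running), send_request("Request 0") opens a connection to the load balancer, which relays the message to one backend; the printed reply then has the form Server <id> processed: REQUEST 0, since each server uppercases the message and prepends its own ID.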

Load Balancer Algorithm


1. Define a list of backend server addresses.

2. Maintain a global index for round-robin tracking.

3. Start a TCP socket and bind it to the load balancer’s host and port.

4. Continuously listen for client connections.

5. For each client connection:

• Select the server at current_index, then advance the index for the next connection: current_index = (current_index + 1) % len(SERVERS).
• Create a socket to the chosen server and connect.
• Receive data from the client and forward it to the server.
• Receive the server’s response and send it back to the client.
• Close both server and client sockets.
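The wrap-around arithmetic in the round-robin step can be seen in isolation in the short sketch below, which mirrors the selection logic of load_balancer.py (Section 4) for five consecutive connections:

SERVERS = [('127.0.0.1', 8001), ('127.0.0.1', 8002), ('127.0.0.1', 8003)]
current_index = 0

for request in range(5):
    server = SERVERS[current_index]                      # pick the current backend
    current_index = (current_index + 1) % len(SERVERS)   # advance and wrap around
    print(f"Request {request} -> {server}")
# Requests 0, 1, 2 go to ports 8001, 8002, 8003; requests 3 and 4 wrap back to 8001 and 8002.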

4. Source Code
server.py

import socket
import threading

class Server:
    def __init__(self, host, port, server_id):
        self.host = host
        self.port = port
        self.server_id = server_id
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.running = True

    def start(self):
        self.socket.bind((self.host, self.port))
        self.socket.listen(5)
        print(f"[Server {self.server_id}] Listening on {self.host}:{self.port}")

        while self.running:
            client_socket, addr = self.socket.accept()
            # Handle each client in its own thread
            threading.Thread(
                target=self.handle_client,
                args=(client_socket, addr)
            ).start()

    def handle_client(self, client_socket, addr):
        try:
            print(f"[Server {self.server_id}] Connection from {addr}")
            data = client_socket.recv(1024).decode('utf-8')

            # Process request (simple echo service)
            response = f"Server {self.server_id} processed: {data.upper()}"

            client_socket.sendall(response.encode('utf-8'))
        finally:
            client_socket.close()

    def stop(self):
        self.running = False
        # Create a connection to unblock accept()
        socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(
            (self.host, self.port))
        self.socket.close()

if __name__ == "__main__":
    import sys
    if len(sys.argv) != 3:
        print("Usage: python server.py <server_id> <port>")
        sys.exit(1)

    server_id = sys.argv[1]
    port = int(sys.argv[2])
    server = Server('127.0.0.1', port, server_id)
    try:
        server.start()
    except KeyboardInterrupt:
        server.stop()

client.py

import socket
import time

# Load balancer address
LB_HOST = '127.0.0.1'
LB_PORT = 8000

def send_request(message):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((LB_HOST, LB_PORT))
        s.sendall(message.encode('utf-8'))
        response = s.recv(1024).decode('utf-8')
        return response

# Simulate multiple clients
for i in range(10):
    response = send_request(f"Request {i}")
    print(f"Client received: {response}")
    time.sleep(0.5)
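Note that send_request opens a fresh connection for every request and relies on the with statement to close the socket automatically when the block exits, so no explicit close() call is needed on the client side.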

load_balancer.py

import socket
import threading

# Configuration
LB_HOST = '0.0.0.0'
LB_PORT = 8000
SERVERS = [('127.0.0.1', 8001), ('127.0.0.1', 8002), ('127.0.0.1', 8003)]
current_index = 0

def handle_client(client_socket):
    global current_index

    # Round-robin server selection
    server = SERVERS[current_index]
    current_index = (current_index + 1) % len(SERVERS)

    try:
        # Connect to selected server
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server_socket:
            server_socket.connect(server)
            print(f"[LB] Routing to {server}")

            # Forward data
            data = client_socket.recv(1024)
            server_socket.sendall(data)

            # Return response
            response = server_socket.recv(1024)
            client_socket.sendall(response)
    except Exception as e:
        print(f"[LB] Error: {e}")
    finally:
        client_socket.close()

def start_load_balancer():
    lb_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lb_socket.bind((LB_HOST, LB_PORT))
    lb_socket.listen(5)
    print(f"[LB] Listening on {LB_HOST}:{LB_PORT}")

    while True:
        client_socket, addr = lb_socket.accept()
        print(f"[LB] Accepted connection from {addr}")
        client_handler = threading.Thread(
            target=handle_client,
            args=(client_socket,)
        )
        client_handler.start()

if __name__ == "__main__":
    start_load_balancer()
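To reproduce the outputs in Section 5, the three components are started in separate terminals, with backend ports matching the SERVERS list above; for example, python server.py 1 8001, python server.py 2 8002, and python server.py 3 8003, followed by python load_balancer.py and then python client.py (the server IDs here are arbitrary labels chosen for illustration).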

5. Output

Figure 2: First Server Output

Figure 3: Second Server Output

Figure 4: Third Server Output

Figure 5: Client Side Output

Figure 6: Load Balancer Output

6. Discussion
This lab demonstrated a practical implementation of a distributed client-server model using Python sockets, where a load balancer routed client requests across multiple server instances. The round-robin approach effectively distributed the load, preventing any single server from being overwhelmed and showcasing a simple but functional load balancing strategy. Threading enabled concurrent handling of client connections on both the server and load balancer sides, reflecting real-world system behavior in terms of scalability and responsiveness. While the implementation worked as intended, it lacked features such as fault detection, server health checks, and encrypted communication, which are essential for production-grade systems.
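As an illustration of the missing fault-detection piece, the sketch below shows one possible extension to load_balancer.py that skips unreachable backends; the helper name pick_server_with_failover and the retry policy are illustrative assumptions, not part of the lab code.

def pick_server_with_failover():
    # Try the backends in round-robin order, skipping any that refuse the connection.
    # Relies on the SERVERS list and current_index defined in load_balancer.py.
    global current_index
    for _ in range(len(SERVERS)):
        server = SERVERS[current_index]
        current_index = (current_index + 1) % len(SERVERS)
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.connect(server)   # reachable: hand back the open socket for forwarding
            return server, sock
        except OSError:
            sock.close()           # unreachable: log it and try the next backend
            print(f"[LB] {server} unreachable, trying next server")
    raise RuntimeError("No backend servers available")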

7. Conclusion
In conclusion, this lab successfully illustrated the key concepts of client-server interaction within a distributed system, emphasizing load distribution, concurrent processing, and system scalability. By developing a modular architecture comprising clients, a load balancer, and multiple backend servers, we gained hands-on experience with essential aspects of networked systems.
