Enroll no - 202103103510490

Practical 1
Aim: Study any two real-time distributed computing systems developed
based on distributed computing system models (ARPAnet, Sprite System,
Experimental System, Amoeba, Plan 9, and Cambridge Distributed
Computing System).

1) Amoeba:
Amoeba is a distributed operating system designed to provide a transparent and efficient
computing environment across multiple machines. The main goal of Amoeba is to enable the
seamless use of distributed resources as if they were a single coherent system. The working
process of Amoeba can be understood by examining its key components and their
interactions:

Key Components of Amoeba:


• Microkernel: The core of the Amoeba operating system, responsible for basic functions
such as process management, inter-process communication (IPC), and low-level resource
management.
• Servers: Specialized processes that provide various services such as file storage, directory
management, and network communication.
• Clients: User applications that request services from the servers through IPC.
• Objects: Encapsulated data and operations, accessible via capabilities which are like keys
granting access rights.

Working Process:
• Initialization:
✓ The system starts by initializing the microkernel on each machine.
✓ Servers are started on designated machines to provide various services.
✓ Clients (user applications) are launched on any machine in the network.

• Resource Discovery:
✓ Amoeba uses a directory server to manage and locate resources within the network.
✓ Clients query the directory server to find the services they need.

• Inter-Process Communication (IPC):


✓ Amoeba uses a message-passing mechanism for IPC.

UTU/CGPIT/CE/SEM-7/Distributed Computing System


✓ Clients send requests to servers using the RPC (Remote Procedure Call) model.
✓ The microkernel handles the transmission of messages between clients and servers.
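The request/reply pattern above can be sketched with thread-safe queues standing in for the kernel's message transport (a simplified illustration in Python, not Amoeba's actual kernel interface; the operation names and queue-based delivery are assumptions):

```python
import queue
import threading

# Each "machine" gets a mailbox; the kernel's role of delivering
# messages is played here by a thread-safe queue.
server_inbox = queue.Queue()

def file_server():
    """Toy server: receive a request message, send back a reply."""
    while True:
        request = server_inbox.get()
        if request is None:  # shutdown signal
            break
        op, payload, reply_box = request
        reply_box.put(f"result of {op}({payload})")

def rpc_call(op, payload):
    """Client-side stub: send a request and block until the reply
    arrives, mirroring the synchronous RPC model Amoeba clients use."""
    reply_box = queue.Queue()
    server_inbox.put((op, payload, reply_box))
    return reply_box.get()

t = threading.Thread(target=file_server)
t.start()
result = rpc_call("read", "document.txt")
print(result)  # result of read(document.txt)
server_inbox.put(None)
t.join()
```

The client blocks on its private reply queue, so request and reply are paired without the client and server sharing any state beyond the mailboxes.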

• Capability-Based Security:
✓ Each object in Amoeba is accessed via a capability, which is a protected reference
containing access rights.
✓ Capabilities ensure that clients can only perform operations for which they have been
granted permission.
✓ Capabilities can be delegated, allowing flexible and secure resource sharing.
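A capability can be modeled as a token that the issuing server can verify but clients cannot forge. The sketch below signs (object id, rights) with a server-side secret; the field layout and HMAC signing are illustrative assumptions, not Amoeba's exact capability format:

```python
import hashlib
import hmac
import os

# Server-side secret; anyone without it cannot mint valid capabilities.
SECRET = os.urandom(16)

def make_capability(object_id, rights):
    """Issue a capability granting `rights` on `object_id`."""
    msg = (object_id + "|" + ",".join(sorted(rights))).encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return (object_id, frozenset(rights), sig)

def check(capability, object_id, right):
    """Verify the signature, then check the requested right."""
    obj, rights, sig = capability
    msg = (obj + "|" + ",".join(sorted(rights))).encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and obj == object_id and right in rights

cap = make_capability("doc-42", {"read"})
print(check(cap, "doc-42", "read"))   # True
print(check(cap, "doc-42", "write"))  # False: right was never granted
```

Because the capability is just data, it can be passed (delegated) to another process, which then holds exactly the rights encoded in it.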

• Distributed File System:


✓ Amoeba provides a distributed file system where files are stored on file servers.
✓ Clients interact with files as objects, using capabilities to perform operations like read,
write, and execute.
✓ The file system ensures consistency and replication for fault tolerance.

• Process Management:
✓ Amoeba allows processes to be distributed across multiple machines.
✓The microkernel manages process creation, scheduling, and communication.
✓Processes can migrate between machines for load balancing or fault tolerance.

• Fault Tolerance:
✓ Amoeba uses replication and checkpointing to ensure reliability.
✓ Critical data and services are replicated across multiple servers.
✓ Checkpointing allows the system to recover processes from saved states in case of failures.
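Checkpoint-based recovery can be shown in miniature: state is serialized at a known-good point and restored after a simulated crash (the checkpoint is held in memory here for brevity; a real system would write it to replicated stable storage):

```python
import pickle

class Checkpointer:
    """Minimal checkpoint/restore: serialize a process's state so a
    restarted process can resume from the last saved point."""
    def __init__(self):
        self.saved = None

    def checkpoint(self, state):
        self.saved = pickle.dumps(state)

    def recover(self):
        return pickle.loads(self.saved)

cp = Checkpointer()
state = {"counter": 0}
for i in range(10):
    state["counter"] += 1
    if state["counter"] == 7:
        cp.checkpoint(state)  # known-good point

# Simulate a crash and recovery: work resumes from the checkpoint,
# not from zero.
restored = cp.recover()
print(restored["counter"])  # 7
```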

Example Workflow:

• Application Startup:
✓ A user starts a text editing application on their workstation (Client).
✓ The client queries the directory server to locate the file server holding the document.

• File Access:
✓ The client sends a request to the file server to open the document.

✓The file server verifies the capability provided by the client to ensure proper access rights.
✓The file server sends the document data back to the client.

• Editing and Saving:


✓The user edits the document, and the client updates the local copy.
✓When the user saves the document, the client sends an RPC request to the file server with
the updated data.
✓The file server updates the document and replicates the changes to ensure fault tolerance.

• Process Migration:
✓The system detects high load on the current machine running the text editor.
✓The microkernel decides to migrate the text editor process to another machine with more
available resources.
✓The process state is transferred, and the text editor resumes operation on the new machine
seamlessly.

Advantages:
• Transparency:
✓ Unified View: Amoeba provides a seamless and unified view of distributed resources,
making it easier for users and applications to interact with the system without
worrying about the underlying distribution.
✓ Location Transparency: Users and applications can access resources without
needing to know their physical location.

• Scalability:
✓Horizontal Scaling: Amoeba can scale by adding more machines to the network,
distributing the workload across these machines.
✓Efficient Resource Utilization: The system can dynamically allocate resources based on
demand, optimizing performance and utilization.

• Fault Tolerance:
✓Replication: Critical data and services are replicated across multiple servers to ensure
availability even in the case of failures.
✓Checkpointing: Processes can be checkpointed and recovered, enabling the system to
recover from crashes.

• Security:
✓Capability-Based Security: Access to objects is controlled using capabilities, which define
specific access rights. This ensures fine-grained security and access control.
✓Delegation: Capabilities can be securely delegated to other processes, allowing flexible
sharing of resources.

• Flexibility:
✓Microkernel Architecture: The microkernel design allows for minimal and efficient core
operations, with other services provided by user-space servers. This modularity facilitates
easier maintenance and upgrades.
✓Process Migration: Processes can be migrated between machines for load balancing and
resource optimization.

• High Performance:
✓Optimized IPC: The message-passing mechanism for inter-process communication is
designed to be efficient, supporting high-performance distributed applications.

Disadvantages:
• Complexity:
✓System Management: Managing and configuring a distributed operating system like
Amoeba can be complex, requiring specialized knowledge and expertise.
✓Debugging and Monitoring: Debugging distributed applications and monitoring the system
can be more challenging compared to traditional, centralized systems.

• Network Dependency:
✓Latency: The performance of the system can be affected by network latency, especially for
applications requiring frequent communication between distributed components.
✓Bandwidth: High network traffic can lead to bandwidth constraints, potentially impacting
system performance.

• Overhead:
✓ Capability Management: Managing capabilities can introduce overhead, both in
terms of storage and computational complexity.
✓ Replication and Checkpointing: While these features provide fault tolerance, they also
add overhead in terms of storage and processing.

• Compatibility and Adoption:

✓Legacy Systems: Integrating Amoeba with existing legacy systems and applications can be
difficult.
✓Adoption: The unique architecture and concepts of Amoeba may hinder its widespread
adoption, as organizations may prefer more conventional distributed systems.

• Resource Contention:
✓Shared Resources: Contention for shared resources can lead to performance bottlenecks,
especially in a heavily loaded system.
✓Load Balancing: Efficiently balancing the load across distributed resources requires
sophisticated algorithms and can be challenging to implement effectively.

2) Cambridge Distributed Computing System:


The Cambridge Distributed Computing System (CDCS) is a framework developed in the
1980s at the University of Cambridge, primarily aimed at exploring the use of distributed
computing. It focused on leveraging multiple interconnected computers to perform tasks
more efficiently than a single machine could.

Key Features and Components:


• Distribution: The system was designed to allow processes to run across multiple
machines, effectively distributing workloads to enhance performance and reliability.
• Interconnection: It relied on a network of computers that communicated with each other to
share data and resources.
• Fault Tolerance: By distributing tasks, the system aimed to be more resilient to failures, as
the failure of a single machine wouldn't necessarily disrupt the entire system.
• Scalability: The system could be scaled by adding more machines to the network,
increasing its overall processing power and storage capacity.
• Resource Sharing: It facilitated the sharing of resources like storage and computational
power among different nodes in the network.

Working Process:
• Architecture and Components:
✓Nodes: Individual computers (nodes) connected via a network.
✓Communication Infrastructure: Mechanism for nodes to communicate with each other,
typically through message passing.
✓Distributed Operating System (DOS): Managed the distribution of tasks and resources
across the nodes.

• Task Distribution:
✓Task Decomposition: The overall computational task is broken down into smaller subtasks.
✓Task Assignment: Subtasks are assigned to different nodes based on their capabilities and
current load.
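Decomposition and load-aware assignment can be sketched as follows (the node names and the size-based cost model are invented for illustration):

```python
def decompose(data, chunk_size):
    """Split one large task into independent subtasks."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def assign(subtasks, node_loads):
    """Greedily give each subtask to the currently least-loaded node."""
    assignment = {node: [] for node in node_loads}
    for task in subtasks:
        node = min(node_loads, key=node_loads.get)  # least-loaded node
        assignment[node].append(task)
        node_loads[node] += len(task)  # assume cost ~ input size
    return assignment

subtasks = decompose(list(range(100)), 25)          # 4 subtasks of 25 items
plan = assign(subtasks, {"node-a": 0, "node-b": 10})  # node-b starts busier
print({n: len(ts) for n, ts in plan.items()})
```

The greedy rule keeps the per-node load roughly balanced even when nodes start with different loads, which is the behaviour the Task Assignment step describes.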

• Resource Management:
✓ Resource Allocation: The DOS allocates resources (CPU, memory, storage) to the
subtasks, ensuring efficient utilization.
✓ Load Balancing: Ensures that no single node is overwhelmed by distributing the
workload evenly across nodes.

• Communication:
✓Message Passing: Nodes communicate via messages to share data and synchronize
processes.
✓Remote Procedure Calls (RPC): Nodes can invoke procedures on other nodes, allowing for
coordinated execution of subtasks.

• Synchronization and Coordination:


✓Synchronization Mechanisms: Ensures that subtasks running on different nodes are
properly synchronized, often using techniques like locks, semaphores, or barriers.
✓Coordination Protocols: Manage the order and timing of task execution to ensure that
dependencies are respected.
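Barrier synchronization, one of the mechanisms named above, can be demonstrated with Python's threading primitives: no worker proceeds past the barrier until every worker has finished its phase (local threads stand in for distributed nodes):

```python
import threading

barrier = threading.Barrier(3)   # all 3 workers must arrive
results = []
lock = threading.Lock()          # protects the shared results list

def worker(n):
    partial = n * n              # phase 1: local computation
    with lock:
        results.append(partial)
    barrier.wait()               # block until all three workers arrive
    # phase 2 would start here, only after every partial is recorded

threads = [threading.Thread(target=worker, args=(n,)) for n in (1, 2, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [1, 4, 9]
```

In a distributed setting the barrier and lock would be implemented with messages rather than shared memory, but the ordering guarantee is the same.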

• Fault Tolerance:
✓Redundancy: Critical tasks or data are replicated across multiple nodes to prevent data loss
in case of node failure.
✓Failure Detection and Recovery: The system continuously monitors node health,
reassigning tasks if a node fails.

• Execution and Monitoring:


✓Task Execution: Nodes execute their assigned subtasks, reporting progress and results back
to the coordinating system.
✓Monitoring: The DOS monitors the performance and health of nodes, dynamically
adjusting task distribution as needed.

• Result Aggregation:

✓Result Collection: Once subtasks are completed, results are collected and combined to form
the final output.
✓Data Integration: Integrates results from different nodes, ensuring consistency and
correctness.
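The collection-and-integration step amounts to a scatter-gather pattern, sketched here with a thread pool standing in for a pool of nodes:

```python
from concurrent.futures import ThreadPoolExecutor

def subtask(chunk):
    """Work done on one node: sum its chunk."""
    return sum(chunk)

chunks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
with ThreadPoolExecutor(max_workers=3) as pool:
    partials = list(pool.map(subtask, chunks))  # result collection

total = sum(partials)                           # data integration
print(total)  # 45
```

`pool.map` preserves input order, so the integration step sees the partial results in a deterministic order regardless of which "node" finished first.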

Example Workflow:
✓ Initialization: The system initializes and nodes are connected via the network.
✓ Task Submission: A user submits a computational task to the system.
✓ Decomposition: The DOS decomposes the task into smaller subtasks.
✓ Distribution: Subtasks are distributed to available nodes.
✓ Execution: Nodes execute their subtasks, communicating and synchronizing as
needed.

Advantages:
• Improved Performance and Scalability:
✓Parallel Processing: By distributing tasks across multiple nodes, the system could perform
computations in parallel, significantly improving performance.
✓Scalability: Adding more nodes to the system increased its computational power and
storage capacity, allowing it to handle larger and more complex tasks.

• Resource Utilization:
✓ Efficient Resource Use: The system ensured that the computational resources (CPU,
memory, storage) of all nodes were utilized effectively, reducing idle times and
increasing overall efficiency.
✓ Load Balancing: The DOS balanced the workload among nodes, preventing any
single node from becoming a bottleneck.

• Fault Tolerance and Reliability:


✓Redundancy: Critical data and tasks could be replicated across multiple nodes, ensuring
that the system could recover from individual node failures without data loss or
significant downtime.
✓Failure Recovery: The system could detect node failures and reassign tasks to other nodes,
maintaining the continuity of operations.

• Flexibility and Adaptability:


✓Modular Design: The system's architecture allowed for easy integration of new nodes,
enabling it to adapt to changing computational needs.
✓Dynamic Resource Allocation: Resources could be dynamically allocated based on the
current workload, optimizing performance for different types of tasks.

• Cost-Effectiveness:
✓Resource Sharing: Multiple users and applications could share the same set of
computational resources, maximizing their utilization and reducing costs.
✓Scalable Investment: Organizations could start with a smaller setup and gradually expand
their system by adding more nodes as needed, spreading out costs over time.

Disadvantages:
• Complexity:
✓System Management: Managing a distributed system is inherently more complex than a
single machine, requiring sophisticated software to handle task distribution,
synchronization, and fault tolerance.
✓Development Difficulty: Writing software for a distributed system requires a different
approach compared to traditional single-machine programming, often involving
complex algorithms for communication, coordination, and error handling.

• Communication Overhead:
✓Latency: Communication between nodes introduces latency, which can slow down the
overall system performance, especially for tasks requiring frequent data exchange.
✓Bandwidth: The network bandwidth can become a bottleneck, particularly in data-intensive
applications where large amounts of data need to be transferred between nodes.

• Synchronization Issues:
✓Concurrency Control: Ensuring data consistency across distributed nodes is challenging,
requiring mechanisms like locking, which can introduce additional complexity and
potential for deadlocks.
✓Clock Synchronization: Keeping the clocks synchronized across different nodes is difficult,
and discrepancies can lead to issues in coordinating tasks and data.

• Fault Tolerance Limitations:


✓Partial Failures: While the system is designed to handle node failures, partial failures (e.g.,
network partitioning) can complicate recovery and consistency.
✓Complex Recovery: Recovering from failures often requires complex protocols and can
lead to performance degradation during the recovery process.

• Security Concerns:
✓Vulnerability: Distributed systems can be more vulnerable to security threats such as
unauthorized access, data breaches, and distributed denial-of-service (DDoS) attacks.
✓Data Integrity: Ensuring the integrity and security of data as it is transferred across nodes
and networks adds an additional layer of complexity.

Future Scope:
• Integration with Cloud Computing:
✓ Cloud Services: Modern cloud platforms (e.g., AWS, Azure, Google Cloud) inherently
rely on distributed systems. The principles of CDCS can be extended to optimize
resource allocation, improve fault tolerance, and enhance scalability in cloud
environments.
✓ Hybrid Cloud Solutions: CDCS principles can aid in developing hybrid cloud
solutions that seamlessly integrate on-premises and cloud resources, optimizing
performance and cost-efficiency.

• Advancements in Edge Computing:


✓Edge Networks: With the rise of IoT and edge computing, the concepts from CDCS can be
applied to create distributed systems that process data closer to the source, reducing latency
and bandwidth usage.
✓Decentralized Processing: CDCS techniques can improve the performance and reliability of
edge computing networks by ensuring efficient distribution and fault tolerance.

• Enhanced Fault Tolerance and Resilience:


✓ Self-Healing Systems: Building on the fault-tolerance mechanisms of CDCS, future
systems can develop more sophisticated self-healing capabilities, automatically
detecting and recovering from failures without human intervention.
✓ Resilient Architectures: Leveraging CDCS principles, distributed systems can be
designed to withstand and quickly recover from large-scale failures or cyber-attacks,
enhancing overall system resilience.

• Artificial Intelligence and Machine Learning:


✓Distributed AI/ML Models: CDCS can contribute to the development of distributed AI
and ML models, where training and inference are performed across multiple nodes to
speed up processing and handle larger datasets.
✓Federated Learning: Techniques from CDCS can be applied to federated learning, where
models are trained across decentralized devices or servers while keeping data localized
for privacy and security.

• Big Data and Analytics:


✓Distributed Data Processing: CDCS concepts can enhance big data platforms like Hadoop
and Spark, optimizing data distribution, processing speed, and fault tolerance.
✓Real-Time Analytics: By applying CDCS principles, systems can be designed to process
and analyze data in real-time across distributed nodes, providing faster insights and
decision-making capabilities.

• Blockchain and Decentralized Systems:


✓Distributed Consensus: The synchronization and coordination techniques from CDCS
can improve the efficiency and scalability of blockchain networks, facilitating faster and
more reliable consensus mechanisms.
✓Decentralized Applications (DApps): CDCS principles can support the development of
DApps that run on distributed networks, ensuring robustness, security, and efficient
resource utilization.

• Collaborative and Distributed Work Environments:


✓Remote Collaboration Tools: CDCS can influence the design of distributed collaboration
tools, enhancing performance and reliability for remote work environments.
✓Distributed Virtual Environments: Principles from CDCS can be applied to create scalable
and resilient virtual environments for remote collaboration, training, and education.

• Quantum Computing Integration:


✓Hybrid Classical-Quantum Systems: The distributed computing principles from CDCS
can be extended to coordinate hybrid systems that combine classical nodes with
quantum processors.

Practical : 2

Aim: Implement inter-process communication using message passing.


Code :
import java.io.*;
import java.util.*;

public class p2 {
    public static void sender(PipedOutputStream outputStream, String message, int processId)
            throws IOException {
        System.out.println("Process " + processId + " sender started.");
        System.out.println("Process " + processId + " sending: " + message);
        outputStream.write(message.getBytes());
        outputStream.close();
        System.out.println("Process " + processId + " sender complete.");
    }

    public static void receiver(PipedInputStream inputStream, int processId) throws IOException {
        System.out.println("Process " + processId + " receiver started.");
        BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream));
        String message = reader.readLine();
        System.out.println("Process " + processId + " received: " + message);
        reader.close();
        System.out.println("Process " + processId + " receiver complete.");
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        int numProcesses = 5;
        Map<Integer, PipedOutputStream> outputPipes = new HashMap<>();
        Map<Integer, PipedInputStream> inputPipes = new HashMap<>();
        List<Thread> senderThreads = new ArrayList<>();
        List<Thread> receiverThreads = new ArrayList<>();

        // Create a pipe pair per process: the input end is its mailbox,
        // the output end is written to by whichever process targets it.
        for (int i = 1; i <= numProcesses; i++) {
            PipedOutputStream outputStream = new PipedOutputStream();
            PipedInputStream inputStream = new PipedInputStream(outputStream);
            outputPipes.put(i, outputStream);
            inputPipes.put(i, inputStream);
        }

        Random random = new Random();
        for (int i = 1; i <= numProcesses; i++) {
            // Each process sends to its ring successor so that every
            // receiver gets exactly one message (a purely random target
            // could leave some receivers blocked forever on an unused pipe).
            int targetProcess = (i % numProcesses) + 1;
            final PipedOutputStream outputStream = outputPipes.get(targetProcess);
            final String message = "Hello from Process " + i + "!";
            final int processId = i;
            senderThreads.add(new Thread(() -> {
                try {
                    sender(outputStream, message, processId);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }));
            final PipedInputStream inputStream = inputPipes.get(i);
            receiverThreads.add(new Thread(() -> {
                try {
                    receiver(inputStream, processId);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }));
        }

        List<Thread> allThreads = new ArrayList<>();
        allThreads.addAll(senderThreads);
        allThreads.addAll(receiverThreads);
        Collections.shuffle(allThreads, random);
        System.out.println("Starting all processes in random order");
        for (Thread thread : allThreads) {
            thread.start();
        }
        for (Thread thread : allThreads) {
            thread.join();
        }
        System.out.println("All processes have completed.");
    }
}

Output :


Practical : 3

Aim: Implement inter-process communication between two systems using sockets.
Code:
import socket
import threading

# Server code (first version: loops forever, echoing each client once)
def start_server():
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    host = '127.0.0.1'
    port = 5000
    server_socket.bind((host, port))
    server_socket.listen(5)
    print(f"Server listening on {host}:{port}")

    while True:
        client_socket, addr = server_socket.accept()
        print(f"Connection from {addr} has been established.")

        data = client_socket.recv(1024).decode('utf-8')
        print(f"Received from client: {data}")

        response = f"Server received: {data}"
        client_socket.send(response.encode('utf-8'))
        client_socket.close()

# Client code (final version below redefines start_server to handle a
# single connection, then runs server and client together)
import socket
import threading
import time

def start_server():
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    host = '127.0.0.1'
    port = 12345
    server_socket.bind((host, port))
    server_socket.listen(1)
    print(f"Server listening on {host}:{port}")
    conn, addr = server_socket.accept()
    print(f"Connection from {addr}")

    data = conn.recv(1024).decode('utf-8')
    print(f"Received from client: {data}")

    response = "Hii Dhaval!"
    conn.send(response.encode('utf-8'))
    conn.close()
    server_socket.close()

def start_client():
    client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    host = '127.0.0.1'
    port = 12345
    client_socket.connect((host, port))
    message = "Hiii Server!"
    client_socket.send(message.encode('utf-8'))

    response = client_socket.recv(1024).decode('utf-8')
    print(f"Received from server: {response}")
    client_socket.close()

# Threading to run both server and client
server_thread = threading.Thread(target=start_server)
client_thread = threading.Thread(target=start_client)

# Start the server thread
server_thread.start()

# Add a small delay to ensure the server is ready before the client tries to connect
time.sleep(1)

# Start the client thread
client_thread.start()

# Join the threads to ensure they complete before finishing the notebook cell
server_thread.join()
client_thread.join()

Output:

Practical : 4

Aim: Implement group communication using socket programming.


Code:
1) server.py
import socket
import threading

def handle_client(client_socket, clients):
    while True:
        try:
            message = client_socket.recv(1024).decode()
            if not message:
                break
            print(f"Received message from {client_socket.getpeername()}: {message}")
            broadcast(message, clients, client_socket)
        except:
            break
    clients.remove(client_socket)
    client_socket.close()

def broadcast(message, clients, sender_socket):
    # Forward the message to every connected client except the sender.
    for client in clients:
        if client != sender_socket:
            try:
                client.send(message.encode())
            except:
                client.close()
                clients.remove(client)

def start_server():
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_host = '127.0.0.1'
    server_port = 12345
    server_socket.bind((server_host, server_port))
    server_socket.listen(10)
    print("Server started. Waiting for connections...")
    clients = []
    while True:
        client_socket, client_address = server_socket.accept()
        print("Connected by:", client_address)
        clients.append(client_socket)
        client_handler = threading.Thread(target=handle_client, args=(client_socket,
                                                                      clients))
        client_handler.start()

if __name__ == "__main__":
    start_server()
2) client.py
import socket
import threading

def receive_messages(client_socket):
    while True:
        try:
            message = client_socket.recv(1024).decode()
            print(message)
        except Exception as e:
            print(f"Error occurred: {e}")
            print("Disconnected from the server.")
            client_socket.close()
            break

def send_message(client_socket):
    while True:
        message = input("Enter your message: ")
        try:
            client_socket.sendall(message.encode())
        except Exception as e:
            print(f"Error occurred: {e}")
            print("Disconnected from the server.")
            client_socket.close()
            break

def start_client():
    client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_host = '127.0.0.1'
    server_port = 12345
    try:
        client_socket.connect((server_host, server_port))
        print("Connected to the server.")
    except ConnectionRefusedError:
        print("Connection refused. Is the server running?")
        return

    receive_thread = threading.Thread(target=receive_messages, args=(client_socket,))
    send_thread = threading.Thread(target=send_message, args=(client_socket,))
    receive_thread.start()
    send_thread.start()

if __name__ == "__main__":
    start_client()

Output:
Client_1 :


Client_2 :

Server:


Practical : 5

Aim: Implement a TCP-based distributed client-server application.


Code :
(1) server.java
import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class server {
    private static final int PORT = 8888;

    public static void main(String[] args) {
        ExecutorService executor = Executors.newCachedThreadPool();
        try (ServerSocket serverSocket = new ServerSocket(PORT)) {
            System.out.println("Serving on " + InetAddress.getLocalHost() + ":" + PORT);
            while (true) {
                Socket clientSocket = serverSocket.accept();
                executor.submit(() -> handleClient(clientSocket));
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private static void handleClient(Socket clientSocket) {
        try (InputStream input = clientSocket.getInputStream();
             OutputStream output = clientSocket.getOutputStream();
             BufferedReader reader = new BufferedReader(new InputStreamReader(input,
                     StandardCharsets.UTF_8));
             PrintWriter writer = new PrintWriter(new OutputStreamWriter(output,
                     StandardCharsets.UTF_8), true)) {

            InetAddress addr = clientSocket.getInetAddress();
            System.out.println("Connected to " + addr);

            String message;
            while ((message = reader.readLine()) != null) {
                System.out.println("Received: " + message);
                String response = "Echo: " + message;
                writer.println(response);
            }
            System.out.println("Connection closed by " + addr);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

(2) Client.java
import java.io.*;
import java.net.*;
import java.util.Scanner;

public class Client {

    public static void main(String[] args) {
        try {
            Socket socket = new Socket("127.0.0.1", 8888);
            System.out.println("Connected to the server");
            PrintWriter writer = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader reader = new BufferedReader(new
                    InputStreamReader(socket.getInputStream()));
            Scanner scanner = new Scanner(System.in);

            while (true) {
                System.out.print("Enter message to send: ");
                String message = scanner.nextLine();
                if (message.equalsIgnoreCase("exit")) {
                    break;
                }

                writer.println(message);

                // Wait for and print the server's response
                String data = reader.readLine();
                System.out.println("Received from server: " + data);
            }

            System.out.println("Closing connection");
            scanner.close();
            reader.close();
            writer.close();
            socket.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Output:


Practical : 6

Aim: Implement communication between two systems using remote procedure call (RPC).

• XmlRpcServerExample.java:
import java.io.IOException;
import java.net.InetAddress;
import java.util.Arrays;

import org.apache.xmlrpc.server.PropertyHandlerMapping;
import org.apache.xmlrpc.server.XmlRpcServer;
import org.apache.xmlrpc.server.XmlRpcServerConfigImpl;
import org.apache.xmlrpc.webserver.WebServer;

public class XmlRpcServerExample {

    public static class RequestHandler {

        private final InetAddress clientAddress;

        public RequestHandler(InetAddress clientAddress) {
            this.clientAddress = clientAddress;
        }

        public Object dispatch(String method, Object[] params) throws Exception {
            System.out.println("Client " + clientAddress.getHostAddress() + " called method: " +
                    method + " with params: " + Arrays.toString(params));
            if ("add".equals(method)) {
                return add((Integer) params[0], (Integer) params[1]);
            } else {
                throw new Exception("method \"" + method + "\" is not supported");
            }
        }

        public static int add(int x, int y) {
            return x + y;
        }
    }

    public static void main(String[] args) throws Exception {
        WebServer server = new WebServer(8000);
        XmlRpcServer xmlRpcServer = server.getXmlRpcServer();
        PropertyHandlerMapping phm = new PropertyHandlerMapping();
        phm.addHandler("RequestHandler", RequestHandler.class);
        xmlRpcServer.setHandlerMapping(phm);
        xmlRpcServer.setConfig(new XmlRpcServerConfigImpl());
        System.out.println("Server is running...");
        server.start();
    }
}


• client.java
import java.net.MalformedURLException;
import java.net.URL;

import org.apache.xmlrpc.XmlRpcException;
import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

public class client {

    public static void main(String[] args) {
        try {
            XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
            config.setServerURL(new URL("http://localhost:8000"));
            XmlRpcClient client = new XmlRpcClient();
            client.setConfig(config);

            int result1 = (int) client.execute("add", new Object[]{2, 3});
            System.out.println("Add x + y: " + result1);

            int result2 = (int) client.execute("divide", new Object[]{10, 5});
            System.out.println("Divide x / y: " + result2);

            String message = (String) client.execute("print_message", new
                    Object[]{"Hello from client"});
            System.out.println(message);
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (XmlRpcException e) {
            e.printStackTrace();
        }
    }
}

Output :


Practical : 7

Aim: Implement an object-based system using remote method invocation (RMI).
• server.py
import Pyro4

@Pyro4.expose
class Calculator(object):
    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

def main():
    calculator = Calculator()
    daemon = Pyro4.Daemon()
    uri = daemon.register(calculator)

    ns = Pyro4.locateNS()
    ns.register("example.calculator", uri)

    print("Calculator server is ready.")
    daemon.requestLoop()

if __name__ == "__main__":
    main()


• client.py
import Pyro4

def main():
    ns = Pyro4.locateNS()
    uri = ns.lookup("example.calculator")
    calculator = Pyro4.Proxy(uri)
    result_add = calculator.add(10, 5)
    result_subtract = calculator.subtract(10, 5)
    print(f"Addition Result: {result_add}")
    print(f"Subtraction Result: {result_subtract}")

if __name__ == "__main__":
    main()

Output :


Practical : 8

Aim: Demonstrate the use of mutual exclusion.


Mutual exclusion is a property of process synchronization which asserts that no two
processes can be in the critical section at the same point in time. The term was first
coined by Dijkstra. Any process-synchronization method employed must satisfy the
mutual-exclusion requirement, without which race conditions cannot be eliminated.
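The idea can first be seen in miniature with a single lock guarding a critical section; the distributed simulation that follows replaces the lock with request/reply messages between processes:

```python
import threading

counter = 0
lock = threading.Lock()

def increment():
    global counter
    for _ in range(100_000):
        with lock:        # critical section: at most one thread inside
            counter += 1

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 — no updates are lost under mutual exclusion
```

Without the lock, interleaved read-modify-write sequences would lose updates: exactly the race condition mutual exclusion exists to prevent.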

Code:
import threading
import time
import random

class Process:
    def __init__(self, process_id, total_processes):
        self.process_id = process_id
        self.total_processes = total_processes
        self.replies_received = 0

    def request_access(self):
        print(f"[Process {self.process_id}] Requesting access...")

        # Simulate the message exchange: when this process sends a request to a
        # peer, the peer's reply counter is incremented, standing in for the
        # reply that peer would receive for its own pending request.
        for process in self.total_processes:
            if process.process_id != self.process_id:
                print(f"[Process {self.process_id}] Sending request to Process {process.process_id}")
                process.replies_received += 1

        # Wait for replies from all other processes
        while self.replies_received < len(self.total_processes) - 1:

            time.sleep(0.1)

        print(f"[Process {self.process_id}] All replies received")

        # Reset for next request
        self.replies_received = 0

def run_process(process):
    time.sleep(random.uniform(0.1, 1))  # simulate random delays
    process.request_access()

if __name__ == "__main__":
    # Create processes
    processes = [Process(i, []) for i in range(3)]

    # Give each process object a reference to the full list of processes
    for process in processes:
        process.total_processes = processes

    # Start each process in its own thread
    threads = [threading.Thread(target=run_process, args=(process,)) for process in processes]

    # Start and join threads
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("Simulation finished.")
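The simulation above grants every request immediately. In the full Ricart–Agrawala algorithm, the reply decision is timestamp-based: a process defers its reply while it holds the critical section, or while it is also requesting and its own (timestamp, process id) pair is smaller. A sketch of that comparison (the function and state names here are illustrative, not from the practical's code):

```python
def should_defer(my_state, my_ts, my_id, req_ts, req_id):
    """Return True if the receiver should defer its reply to a request.

    my_state: "RELEASED", "WANTED", or "HELD" (receiver's state)
    (my_ts, my_id): receiver's own request timestamp and process id
    (req_ts, req_id): incoming request's timestamp and sender id
    Ties on timestamp are broken by the smaller process id.
    """
    if my_state == "HELD":
        return True  # inside the critical section: always defer
    if my_state == "WANTED":
        # Defer only if our own pending request has priority
        return (my_ts, my_id) < (req_ts, req_id)
    return False     # RELEASED: reply immediately

print(should_defer("HELD", 0, 0, 5, 1))      # True
print(should_defer("WANTED", 3, 1, 5, 2))    # True  (our request is older)
print(should_defer("WANTED", 7, 1, 5, 2))    # False (their request is older)
print(should_defer("RELEASED", 0, 1, 5, 2))  # False
```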

Output:


Practical 9
Aim: Implement a clock synchronization algorithm using a counter.
Code:
from datetime import datetime, timedelta

# Note: this script uses f-strings, so Python 3.6 or newer is required.

class Process:
    def __init__(self, pid, clock):
        self.pid = pid      # Process ID
        self.clock = clock  # Initial clock time as a datetime object

    def get_time(self):
        """Returns the local clock time."""
        return self.clock

    def adjust_time(self, adjustment_seconds):
        """Adjusts the local clock by a given offset in seconds."""
        self.clock += timedelta(seconds=adjustment_seconds)
        return self.clock

    def receive_request(self, send_time):
        """Simulates receiving a request with the arrival time stamped."""
        arrival_time = self.get_time()  # Local clock time when request is received
        print(f"Process {self.pid} received request at time: {arrival_time.time()} (Coordinator sent it at {send_time.time()})")

        return self.get_time()  # Return the current clock time to the coordinator

class BerkeleyAlgorithm:
    def __init__(self, coordinator):
        self.coordinator = coordinator
        self.processes = [coordinator]  # List of all processes (starting with the coordinator)

    def add_process(self, process):
        """Add a new process to the system."""
        self.processes.append(process)

    def synchronize_clocks(self):
        """Synchronize clocks across all processes."""
        print(f"Coordinator (Process {self.coordinator.pid}) polling other processes for their times...")

        # Step 1: Polling - gather each process's clock, with request and arrival timestamps
        times = {}
        coordinator_time = self.coordinator.get_time()  # Coordinator's time when sending the request
        print(f"Coordinator sent requests at time: {coordinator_time.time()}")

        for process in self.processes:
            if process != self.coordinator:
                # Send request and capture arrival time at process
                process_time = process.receive_request(coordinator_time)
                times[process.pid] = process_time


        # Add coordinator's time
        times[self.coordinator.pid] = coordinator_time
        print("Collected times from processes:")
        for pid, time in times.items():
            print(f"Process {pid}: {time.time()}")

        # Step 2: Calculate the average difference from the coordinator's time, in seconds
        time_differences = {pid: (time - coordinator_time).total_seconds() for pid, time in times.items()}
        avg_diff = sum(time_differences.values()) / len(times)  # Average offset in seconds
        print(f"Average time difference: {avg_diff} seconds")

        # Step 3: Adjust times for all processes based on the average difference
        for process in self.processes:
            if process != self.coordinator:
                adjustment = avg_diff - time_differences[process.pid]
                process.adjust_time(adjustment)
                print(f"Process {process.pid} clock adjusted by {adjustment} seconds, new time: {process.get_time().time()}")

        # Coordinator adjusts itself last
        self.coordinator.adjust_time(avg_diff)
        print(f"Coordinator (Process {self.coordinator.pid}) clock adjusted, new time: {self.coordinator.get_time().time()}")

def input_time(prompt):
    """Helper function to input time in HH:MM:SS format and return a datetime object."""
    while True:


        try:
            time_input = input(prompt)
            return datetime.strptime(time_input, "%H:%M:%S")
        except ValueError:
            print("Invalid time format. Please enter in HH:MM:SS format.")

# Simulation of the Berkeley Algorithm with user input for initial clock times

# Get the number of processes from the user
num_processes = int(input("Enter the number of processes (including coordinator): "))

# Initialize the coordinator process with user-provided time
coordinator_time = input_time("Enter the initial clock time for the coordinator (Process 1) in HH:MM:SS: ")
coordinator = Process(1, coordinator_time)

# Berkeley Algorithm coordinator setup
berkeley = BerkeleyAlgorithm(coordinator)

# Add other processes with user-provided clock times
for i in range(2, num_processes + 1):
    process_time = input_time(f"Enter the initial clock time for Process {i} in HH:MM:SS: ")
    process = Process(i, process_time)
    berkeley.add_process(process)

# Run the clock synchronization
berkeley.synchronize_clocks()
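The arithmetic of the synchronization step can be checked with fixed clocks instead of interactive input (the times below are chosen for illustration): offsets from the coordinator of 0 s, +10 s, and -20 s average to -10/3 s, and shifting every clock by (average - own offset) makes all three clocks agree.

```python
from datetime import datetime, timedelta

clocks = {
    1: datetime(2024, 1, 1, 10, 0, 0),   # coordinator
    2: datetime(2024, 1, 1, 10, 0, 10),  # 10 s fast
    3: datetime(2024, 1, 1, 9, 59, 40),  # 20 s slow
}

coordinator_time = clocks[1]
offsets = {pid: (t - coordinator_time).total_seconds() for pid, t in clocks.items()}
avg = sum(offsets.values()) / len(offsets)  # (0 + 10 - 20) / 3 ≈ -3.33 s

# Each clock is shifted by (average - its own offset); all clocks then agree.
synced = {pid: clocks[pid] + timedelta(seconds=avg - offsets[pid]) for pid in clocks}

assert len(set(synced.values())) == 1  # all three clocks are now identical
print(offsets)                         # {1: 0.0, 2: 10.0, 3: -20.0}
```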


Output:

