IV YEAR VIII SEM Distributed Computing

UNIT II
Unit II: Remote Invocation – Indirect Communication – Operating System Support: Introduction,
OS Layer, Protection, Processes and Threads, Communication and Invocation, Operating System
Architecture – Distributed Objects and Components.

2.1 INTRODUCTION TO DISTRIBUTED OBJECTS


 Distributed object systems may adopt the client-server architecture.
 In this case, objects are managed by servers and their clients invoke their methods using RMI.
 Objects can be replicated in order to obtain the usual benefits of fault tolerance and enhanced performance.
 Having client and server objects in different processes enforces encapsulation.
2.2 COMMUNICATION BETWEEN DISTRIBUTED OBJECTS
The communication between distributed objects is addressed by means of RMI under the following headings:
2.2.1 The object model
 An object-oriented program consists of a collection of interacting objects, each of which consists of a set of data
and a set of methods.
 An object communicates with other objects by invoking their methods, generally passing arguments, and
receiving results.
 Some languages, such as Java and C++, allow programmers to define objects whose instance variables can be
accessed directly.
 However, in a distributed object system, an object's data should be accessible only via its methods.
• Object references: Objects can be accessed via object references. In Java, a variable that appears to
hold an object actually holds a reference to that object. To invoke a method of an object, the object
reference and method name are given, together with any necessary arguments. The object whose method
is invoked is called the target and sometimes the receiver.
• Interfaces: An interface provides a definition of the signatures of a set of methods without specifying
their implementation.
• Actions: An action is initiated by an object invoking a method in another object. An invocation of a
method can have two effects:
• The state of the receiver may be changed
• Further invocations on methods in other objects may take place.
• Exceptions: Programs can encounter many sorts of errors and unexpected conditions of varying
seriousness. Exceptions provide a way to deal with error conditions without
complicating the code.
• Garbage collection: A language such as Java, which can detect automatically when an object is no
longer accessible, recovers the space and makes it available for allocation to other objects. This
process is called garbage collection.
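The object model above can be sketched in a few lines of Java (the Account class and its exception are invented for illustration): the instance data is private and reachable only through methods, and an error condition is reported with an exception rather than a special return value.

```java
// Error conditions are signalled with an exception, keeping the main code simple.
class InsufficientFundsException extends Exception {
    InsufficientFundsException(String msg) { super(msg); }
}

class Account {
    private long balance;                 // instance data, hidden from callers

    Account(long initialBalance) { this.balance = initialBalance; }

    // The only way to read the state is via a method.
    long getBalance() { return balance; }

    // An invocation may change the state of the receiver...
    void deposit(long amount) { balance += amount; }

    // ...or raise an exception instead of returning a result.
    void withdraw(long amount) throws InsufficientFundsException {
        if (amount > balance)
            throw new InsufficientFundsException("balance too low");
        balance -= amount;
    }
}
```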
2.2.3. The distributed object model
 The object model can be extended to make it applicable to distributed objects.
 Each process contains a collection of objects, some of which can receive both local and remote invocations,
whereas the other objects can receive only local invocations as shown in Figure 2.1.

Figure 2.1: Remote and local method invocations

The following fundamental concepts are at the heart of the distributed object model.
 Remote object references
 A remote object reference is an identifier that can be used throughout a distributed system to refer to a particular,
unique remote object.
Remote object references are analogous to local ones in that:
• The remote object that is to receive a remote method invocation is specified as a remote object reference.
• Remote object references may be passed as arguments and results of remote method invocations.
 Remote interfaces
 The class of a remote object implements the methods of its remote interface, for example as public instance
methods in Java.
 Every remote object has a remote interface that specifies which of its methods can be invoked remotely.
 Objects in other processes can invoke only the methods that belong to the remote interface.
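A minimal sketch of a remote interface in Java (the HelloService name and greet method are hypothetical): only methods declared in the remote interface can be invoked from other processes, and each must declare RemoteException, since any remote invocation can fail.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// The remote interface: the only methods callable from other processes.
interface HelloService extends Remote {
    String greet(String name) throws RemoteException;
}

// The class of the remote object implements the interface methods as
// public instance methods; the implementation may legally narrow the
// throws clause and omit RemoteException.
class HelloServiceImpl implements HelloService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```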
 Actions in distributed system
 An action is initiated by a method invocation, which may result in further invocations on methods in other
objects.
 In the distributed case, the objects involved in a chain of related invocations may be located in different
processes or on different computers.
 When an invocation crosses the boundary of a process or computer, RMI is used, and the remote reference of the
object must be available to make the RMI possible.
 Garbage collection in a distributed object system
• Distributed garbage collection is generally achieved by cooperation between the existing local garbage collector
and an added module that carries out a form of distributed garbage collection, usually based on reference
counting.
 Exceptions
• Any remote invocation may fail for reasons related to the invoked object being in a different process or on a
different computer from the invoker.
• Therefore, remote method invocation should be able to raise exceptions, such as timeouts, that are due to
distribution, as well as those raised during the execution of the method invoked.
2.2.4. Design Issues for RMI:
 Although local invocations are executed exactly once, this cannot always be the case for remote method
invocations.
 Another design issue is the level of transparency that is desirable for RMI.
 RMI invocation semantics
 The combinations of the following choices lead to a variety of possible semantics for the reliability of remote
invocations:
Retry request message – whether to retransmit the request message until a reply is received.
Duplicate filtering – when retransmissions are used, whether to filter out duplicate requests at the server.
Retransmission of results – whether to keep a history of result messages to enable lost results to be
retransmitted without re-executing the operations at the server.
 Maybe invocation semantics
• With maybe invocation semantics, the invoker cannot tell whether a remote method has been executed
once or not at all.
• Maybe semantics arises when none of the fault tolerance measures is applied.
• It can suffer from omission failures, if the invocation or result message is lost, and from crash failures,
when the server containing the remote object fails.
 At-least-once invocation semantics
 With this semantics, the invoker receives either a result, in which case the invoker knows that the method was
executed at least once, or an exception informing it that no result was received.
 At-least-once semantics can be achieved by the retransmission of request messages, which masks omission
failures.
 It can suffer from crash failures when the server fails, and from some arbitrary failures.
 At-most-once invocation semantics:

 With this semantics, the invoker receives either a result, in which case the invoker knows that the method
was executed exactly once, or an exception informing it that no result was received, in which case the method
will have been executed either once or not at all.
 It can be achieved by using all of the fault tolerance measures.
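The server-side measures behind at-most-once semantics can be sketched as follows (a local simulation; request ids and the doubling operation are invented for illustration): duplicate requests are filtered by request id, and a history of result messages lets a lost reply be resent without re-executing the operation.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a server providing at-most-once semantics via duplicate
// filtering and retransmission of results.
class AtMostOnceServer {
    private final Map<Long, Integer> history = new HashMap<>();
    private int executions = 0;   // how many times the operation actually ran

    // Handle a request; the "operation" here simply doubles its argument.
    int handle(long requestId, int arg) {
        Integer cached = history.get(requestId);
        if (cached != null) return cached;   // duplicate: resend stored result
        executions++;
        int result = arg * 2;
        history.put(requestId, result);      // keep result for retransmission
        return result;
    }

    int executionCount() { return executions; }
}
```

A retransmitted request with the same id returns the stored result without running the operation again.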
 Transparency
Remote invocations should be made transparent in the sense that the syntax of a remote
invocation is the same as that of a local invocation, but the difference between local and
remote objects should be expressed in their interfaces.
2.2.5 Implementation of RMI
• Several separate objects and modules are involved in achieving a remote method invocation.
• These are shown in the Figure 2.2 in which an application level object A invokes a method in a remote
application level object B for which it holds a remote object reference. The roles of each of the components are
explained below.

Figure 2.2 Remote Method Invocation

 Communication module
• The communication modules are responsible for providing a specified invocation semantics, e.g. at-most-once.
• The communication module in the server selects the dispatcher for the class of the object to be invoked, passing
on its local reference, which it gets from the remote reference module.
 Remote Reference module
• This module is responsible for translating between remote and local object references and for creating remote
object references.
• To support these responsibilities, the remote reference module in each process has a remote object table, which
includes:
• An entry for each remote object held by the process, e.g. remote object B will be recorded in the table at
the server.
• An entry for each local proxy, e.g. the proxy for B will be recorded in the table at the client.
 RMI software
This consists of a layer of software between the application-level objects and the communication and
remote reference modules. It includes:
 Proxy
• The role of a proxy is to make remote method invocation transparent to clients by behaving like
a local object to the invoker; but instead of executing an invocation, it forwards it in a message to a
remote object.
• It hides from the client the details of the remote object reference, the marshalling of arguments, the
unmarshalling of results, and the sending and receiving of messages.
• There is one proxy for each remote object for which a process holds a remote object reference.
 Dispatcher
• The dispatcher receives the request message from the communication module.
• It uses the method ID to select the appropriate method in the skeleton, passing on the
request message.
 Skeleton
• The class of a remote object has a skeleton, which implements the methods in the remote interface.
• A skeleton method unmarshals the arguments in the request message and invokes the corresponding
method in the remote object.

• It waits for the invocation to complete and then marshals the result, together with any exceptions, into a
reply message to the sending proxy's method.
• The classes for the proxy, dispatcher and skeleton used in RMI are generated automatically by an
interface compiler.
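The proxy, dispatcher and skeleton roles described above can be simulated locally in Java (the Echo interface and the string "messages" are invented for illustration; a real implementation would marshal arguments into request messages sent through the communication modules):

```java
// The application-level interface shared by client and server.
interface Echo { String echo(String s); }

// Skeleton: unmarshals the request and invokes the target object.
class EchoSkeleton {
    private final Echo target;
    EchoSkeleton(Echo target) { this.target = target; }
    String handle(String requestArg) { return target.echo(requestArg); }
}

// Dispatcher: uses the method id to select the skeleton method.
class Dispatcher {
    private final EchoSkeleton skeleton;
    Dispatcher(EchoSkeleton s) { this.skeleton = s; }
    String dispatch(int methodId, String arg) {
        if (methodId == 1) return skeleton.handle(arg);
        throw new IllegalArgumentException("unknown method id");
    }
}

// Proxy: looks like a local Echo to the invoker, but forwards the call.
class EchoProxy implements Echo {
    private final Dispatcher server;
    EchoProxy(Dispatcher server) { this.server = server; }
    public String echo(String s) { return server.dispatch(1, s); }
}
```

The client invokes `echo` on the proxy exactly as it would on a local object; the call travels proxy → dispatcher → skeleton → remote object.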
 Related topics to RMI
 Binder
• It is a separate service that contains a table of mappings from textual names of objects to
remote object references.
 Server threads
• To avoid the execution of one remote invocation delaying the execution of another, servers generally
allocate a separate thread for the execution of each remote invocation.
 Activation of remote objects
• Some applications require that information survive for long periods. However, it is not practical for the
objects representing such information to be kept in running processes for unlimited periods.
• To avoid the potential waste of resources due to running all of the servers that manage objects
all of the time, the servers can be started whenever they are needed by clients.
• Processes that start server processes to host remote objects are called activators.
• A remote object is described as active when it is available for invocation within a running process,
whereas it is called passive if it is not currently active but can be made active.
• Activation consists of creating an active object from the corresponding passive object by creating
a new instance of its class and initializing its instance variables from the stored state.
• Passive objects can be activated on demand.
 Persistent object store
• An object that is guaranteed to live between activations of processes is called a persistent object.
• Persistent objects are generally managed by persistent object stores, which store their state in a
marshalled form on disk.
• Persistent objects are activated when their methods are invoked by other objects.
• Activation is generally designed to be transparent, i.e. the invoker should not be able to tell whether an
object is already in main memory or has to be activated before its method is invoked.
• Persistent objects that are no longer needed in main memory can be passivated.
 Objects Location
• The remote object reference can be used as an address for a remote object so long as that object
remains in the same process for the rest of its life.
• However, some remote objects will exist in a series of different processes, possibly on different
computers, throughout their lifetime.
• In this case, a remote object reference cannot act as an address.
• A location service helps clients to locate remote objects from their remote object references.
• It uses a database that maps remote object references to their probable current locations.
2.2.6 Distributed garbage collection
• The aim of a distributed garbage collector is to ensure that if a local or remote reference to an object is still held
anywhere in a set of distributed objects, then the object will continue to exist; but as soon as no such
reference remains, the object will be deleted and the memory recovered.
• The distributed garbage collection is mainly based on reference counting.
• Whenever a remote object reference enters a process, a proxy will be created and will stay there for as long as it
is needed.
• The process where the object lives (its server) should be informed of the new proxy at the client.
• Then later when there is no longer a proxy at the client, the server should be informed.
• The distributed garbage collector works in cooperation with the local garbage collectors as follow:
 Each server process maintains a set of the client processes that hold remote object references for each of
its remote objects; for example, B.holders is the set of client processes that have proxies for object B.
 When a client C first receives a remote reference to a particular remote object B, it makes an addRef(B)
invocation to the server of that remote object and then creates a proxy; the server adds C to B.holders.
 When client C's garbage collector notices that a proxy for remote object B is no longer reachable, it makes a
removeRef(B) invocation to the corresponding server and deletes the proxy; the server removes C from
B.holders.
 When B.holders is empty, the server's local garbage collector will reclaim the space occupied by B unless
there are any local holders.
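The server side of this reference-counting protocol can be sketched as follows (a local simulation; objects and clients are identified by plain strings for illustration):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the server's bookkeeping for distributed garbage collection:
// for each remote object, the set of client processes holding a proxy
// (B.holders), updated by addRef/removeRef invocations from clients.
class GcServer {
    private final Map<String, Set<String>> holders = new HashMap<>();

    // Client created its first proxy for the object: record it.
    void addRef(String object, String client) {
        holders.computeIfAbsent(object, k -> new HashSet<>()).add(client);
    }

    // Client's proxy became unreachable: remove the client from holders.
    void removeRef(String object, String client) {
        Set<String> h = holders.get(object);
        if (h != null) h.remove(client);
    }

    // The local collector may reclaim the object once its holders set is
    // empty (provided there are no local holders either).
    boolean collectable(String object) {
        Set<String> h = holders.get(object);
        return h == null || h.isEmpty();
    }
}
```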
2.3 REMOTE PROCEDURE CALL

• A remote procedure call is very similar to RMI in that a client program calls a procedure in another program
running in a server process.
• Servers may be clients of other servers, allowing chains of RPCs. RPC, like RMI, may be implemented to have
one of the choices of invocation semantics discussed earlier.
• Figure 2.3 shows the software that supports RPC. The client that accesses a service includes one stub
procedure for each procedure in the service interface.
• The role of a stub procedure is similar to that of a proxy. It marshals the procedure identifier and the arguments
into a request message, which it sends via the communication module to the server.
• When the reply message arrives, it unmarshals the results.
• The server process contains a dispatcher together with one server stub procedure and one service procedure for
each procedure in the service interface.

Figure 2.3: Role of client and server stub procedures in RPC


• The dispatcher selects one of the server stub procedures according to the procedure identifier in the
request message.
• A server stub procedure is like a skeleton method in that it unmarshals the arguments in the request
message, calls the corresponding service procedure and marshals the return values for the reply message.
• The service procedures implement procedures in the service interface.
2.3.1 SUN RPC-case study
• Sun RPC was designed for client-server communication in the Sun Network File System. Sun RPC is sometimes
called Open Network Computing (ONC) RPC.
• Implementors have the choice of using RPC over either UDP or TCP. It uses at-least-once call semantics.
• The Sun RPC system provides an interface language called XDR and an interface compiler called rpcgen,
which is intended for use with the C programming language.
• The Sun XDR language was originally designed for specifying external data representations and was
extended to become an interface definition language.
• Most languages allow interface names to be specified, but Sun RPC does not; instead, a program number
and a version number are supplied in the request message.
• Only a single input parameter is allowed.
• Therefore, procedures requiring multiple parameters must include them as components of a single structure.
• The output parameters of a procedure are returned via a single result.
Files interface in Sun XDR:

const MAX = 1000;
typedef int FileIdentifier;
typedef int FilePointer;
typedef int Length;

struct Data {
    int length;
    char buffer[MAX];
};

struct writeargs {
    FileIdentifier f;
    FilePointer position;
    Data data;
};

struct readargs {
    FileIdentifier f;
    FilePointer position;
    Length length;
};

program FILEREADWRITE {
    version VERSION {
        void WRITE(writeargs) = 1;
        Data READ(readargs) = 2;
    } = 2;
} = 9999;
• The XDR definition of an interface with a pair of procedures for writing and reading files is shown above.
• The program number is 9999 and the version number is 2.
• The READ procedure takes as its input parameter a structure with three components specifying a file
identifier, a position in the file and the number of bytes required.
• Its result is a structure containing the number of bytes returned and the file data. The WRITE procedure has no
result.
• The WRITE and READ procedures are given the numbers 1 and 2. The interface compiler rpcgen can be used
to generate the following from an interface definition:
• Client stub procedures
• Server main procedure
• Dispatcher
• Server stub procedures
• XDR marshalling and unmarshalling procedures for use by the dispatcher and the client and server stub
procedures
2.4 EVENTS AND NOTIFICATIONS
• The idea behind the use of events is that one object can react to a change occurring in another object.
• Notification of events is essentially asynchronous and determined by their receivers.
• Distributed event-based systems extend the local event model by allowing multiple objects at different
locations to be notified of events taking place at an object.
• They use the publish-subscribe paradigm, in which an object that generates events publishes the types of events
that it will make available for observation by other objects.
• Objects that want to receive notifications from an object that has published its events subscribe to the types of
events that are of interest to them.
• Notifications may be stored, sent in messages, queried and applied in a variety of ways.
• Distributed event-based system has two main characteristics:
 Heterogeneous
When event notifications are used as a means of communication between distributed
objects, components in a distributed system that were not designed to interoperate can
be made to work together.
 Asynchronous
Notifications are sent asynchronously by the event-generating object to all the objects that
have subscribed to them, to prevent publishers needing to synchronize with the participants in
a distributed event notification.
• The architecture is designed to decouple the publishers from the subscribers, allowing publishers to be developed
independently of their subscribers, and limiting the work imposed on publishers by subscribers.
• The main component is an event service that maintains a database of published events and of subscribers'
interest.
• Events at an object of interest are published at the event service.
• Subscribers inform the event service about the types of events they are interested in.
• When an event occurs at an object of interest, a notification is sent to the subscribers to that type of event.

Figure 2.4: Architecture of distributed event notification


The roles of the participating objects are as follows:
 The object of interest
• This is an object that experiences changes of state, as a result of its operations being invoked. Its changes of
state might be of interest to other objects.
 Event:
• An event occurs at an object of interest as the result of the completion of a method execution.
 Notification:
• A notification is an object that contains information about an event.
• Typically, it contains the type of the event and its attributes, which generally include the identity of the object
of interest, the method invoked and the time of occurrence or a sequence number.
 Subscriber
• A subscriber is an object that has subscribed to some type of events in another object. It receives
notifications about such events.
 Observer objects
• The main purpose of an observer is to decouple an object of interest from its subscribers.
• An object of interest can have many different subscribers with different interests.
• It could over-complicate the object of interest if it had to perform all of the logic for distinguishing between
the needs of its subscribers.
• One or more observers can be interposed between an object of interest and its subscribers.
 Publisher
• This object declares that it will generate notifications of particular types of event.
• A publisher may be an object of interest or an observer.
• An object of interest inside the event service without an observer. It sends notifications directly to the
subscribers.
• An object of interest inside the event service with an observer. The object of interest sends notifications via
the observer to the subscribers.
• An object of interest outside the event service. In this case, an observer quene the object of interestin
orderto discover when events occur. The observer send notifications to the subscribers
 Roles for observers
The task of processing notifications can be divided among observer processes playing a variety of
different roles. Some examples are:
• Forwarding – A forwarding observer may carry out all the work of sending notifications to subscribers on
behalf of one or more objects of interest.
• Filtering of notifications – Filters may be applied by an observer to reduce the number of notifications
received, according to some predicate on the content of each notification.
• Patterns of events – A pattern specifies a relationship between several events. E.g. a subscriber may
be interested only when there are three withdrawals from a bank account without any intervening deposits.
• Notification mailboxes – In some cases, notifications need to be delayed until a potential subscriber is
ready to receive them. An observer may take on the role of a notification mailbox, which receives
notifications on behalf of a subscriber, passing them on only when the subscriber is ready to receive them.
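The roles above can be sketched locally in Java (the Notification fields, the Subscriber interface and the "withdrawal" filter predicate are invented for illustration): a filtering observer is interposed between the object of interest and its subscribers, passing on only the notifications that match a predicate.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// A notification carries information about an event: its type and attributes.
class Notification {
    final String type;
    final int value;
    Notification(String type, int value) { this.type = type; this.value = value; }
}

// A subscriber receives notifications about events it has subscribed to.
interface Subscriber { void notify(Notification n); }

// A filtering observer: forwards only the notifications matching a predicate.
class FilteringObserver implements Subscriber {
    private final Predicate<Notification> filter;
    private final List<Subscriber> subscribers = new ArrayList<>();
    FilteringObserver(Predicate<Notification> filter) { this.filter = filter; }
    void subscribe(Subscriber s) { subscribers.add(s); }
    public void notify(Notification n) {
        if (filter.test(n))                   // drop uninteresting notifications
            for (Subscriber s : subscribers) s.notify(n);
    }
}
```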
 Jini distributed event specification

• It allows a subscriber in one Java virtual machine (JVM) to subscribe to and receive notifications of
events in an object of interest in another JVM, usually on another computer. The main objects are:
• Event generators – objects that allow other objects to subscribe to their events and that generate notifications.
• Remote event listeners – objects that can receive notifications.
• Remote events – the notifications themselves.
• Third-party agents – interposed between an object of interest and a subscriber. They are equivalent to observers.
• Jini events are provided by means of the following interfaces and classes:
• Remote event listener: This interface provides a method called notify. Subscribers and third party agents
implement this interface so that they can receive notifications when the notify method is invoked.
• Remote event: This class has instance variables that hold:
o A reference to the event generator.
o An event identifier.
o A sequence number, which applies to events of that type.
o A marshalled object, which is supplied when the recipient subscribes to that type of event and may
be used by the recipient for any purpose.
• Event generator
• This interface provides a method called register.
• Event generators implement this interface, whose register method is used to subscribe to events at the event
generator.
• The arguments of register specify:
o An event identifier.
o A marshalled object to be handed back with each notification.
o A remote reference to an event listener object.
• A requested leasing period specifies the duration of the lease required by the subscriber; the lease can be
renewed whenever its time limit expires.
2.5 JAVA RMI CASE STUDY
• The RMI architecture is based on one important principle: the definition of behaviour and the implementation of
that behaviour are separate concepts.
• RMI allows the code that defines the behaviour and the code that implements the behaviour to remain separate and
to run on separate JVMs.
• This fits nicely with the needs of a distributed system, where clients are concerned with the definition of a service
and servers are focused on providing the service.
• Specifically, in RMI, the definition of a remote service is coded using a Java interface. The implementation of the
remote service is coded in a class.
• Therefore, the key to understanding RMI is to remember that interfaces define behaviour and classes define
implementation. Figure 2.5 illustrates this separation.

• A Java interface does not contain executable code. RMI supports two classes that implement the same interface.
• The first class is the implementation of the behaviour, and it runs on the server.
• The second class acts as a proxy for the remote service and it runs on the client and is shown in Figure 2.6.
• A client program makes method calls on the proxy object; RMI sends the request to the remote JVM and forwards
it to the implementation.
• Any return values provided by the implementation are sent back to the proxy and then to the client's program.

Figure 2.7: Layers of RMI

2.5.1 RMI Architecture Layers


The RMI implementation is essentially built from three abstraction layers (Figure 2.7):
 Stub and Skeleton layer
• This layer lies just beneath the view of the developer.
• It intercepts method calls made by the client to the interface reference variable and redirects
these calls to a remote RMI service.
 Remote Reference Layer
• This layer understands how to interpret and manage references made from clients to the remote service
objects.
• In JDK 1.1, this layer connects clients to remote service objects that are running and exported on a
server.
• The connection is a one-to-one (unicast) link.
• In the Java 2 SDK, this layer was enhanced to support the activation of dormant remote service objects
via Remote Object Activation.
 Transport layer
• It is based on TCP/IP connections between machines in a network.
• It provides basic connectivity, as well as some firewall penetration strategies.
• By using a layered architecture, each of the layers can be enhanced or replaced without
affecting the rest of the system.
• For example, the transport layer could be replaced by a UDP/IP layer without affecting the upper layers.
The three layers function as follows:
 Stub and Skeleton Layer
• The stub and skeleton layer of RMI lies just beneath the view of the Java developer.
• In this layer, RMI uses the Proxy design pattern. In the Proxy pattern, an object in one context is
represented by another (the proxy) in a separate context.
• The proxy knows how to forward method calls between the participating objects. Figure 2.7 illustrates the
Proxy pattern.
• In RMI's use of the Proxy pattern, the stub class plays the role of the proxy, and the remote service
implementation class plays the role of the RealSubject.
• A skeleton is a helper class that is generated for RMI to use.
• The skeleton understands how to communicate with the stub across the RMI link.
• The skeleton carries on a conversation with the stub; it reads the parameters for the method call from the
link, makes the call to the remote service implementation object, accepts the return value, and then writes
the return value back to the stub.

• In the Java 2 SDK implementation of RMI, the new wire protocol has made skeleton classes obsolete. RMI
uses reflection to make the connection to the remote service object.
 Remote Reference layer
• The Remote Reference Layer defines and supports the invocation semantics of the RMI connection.
• This layer provides a RemoteRef object that represents the link to the remote service implementation object.
• The stub objects use the invoke method in RemoteRef to forward the method call.
• The RemoteRef object understands the invocation semantics for services.
• The JDK 1.1 implementation of RMI provides only one way for clients to connect to the service
implementations: a unicast, point-to-point connection.
• Before a client can use an RMI service, the remote service must be instantiated on the server and exported to the
RMI system. (If it is the primary service, it must also be named and registered in the RMI Registry.)
• The Java 2 SDK implementation of RMI adds a new semantic for the client-server connection.
• In this version, RMI supports activatable remote objects.
• When a method call is made to the proxy for an activatable object, RMI determines if the remote service
implementation object is dormant.
• If it is dormant, RMI will instantiate the object and restore its state from a disk file.
• Once an activatable object is in memory, it behaves just like JDK 1.1 remote service implementation
objects.
• Other types of connection semantics are possible.
• For example, with multicast, a single proxy could send a method request to multiple implementations
simultaneously and accept the first reply (this improves response time and possibly improves availability).
 Transport Layer
• The Transport Layer makes the connection between JVMs.
• All connections are stream- based network connections that use TCP/IP.
• Even if two JVMs are running on the same physical computer, they connect through their host
computer's TCP/IP network protocol stack.
• Figure 2.9 shows the unfettered use of TCP/IP connections between JVMs.
• TCP/IP provides a persistent, stream-based connection between two machines based on an IP address
and port number at each end; usually a DNS name is used instead of an IP address.

Figure 2.9: TCP/IP connections between JVMs


• On top of TCP/IP, RMI uses a wire-level protocol called Java Remote Method Protocol (JRMP).
• JRMP is a proprietary, stream-based protocol that is only partially specified. It is now in two versions.
• The first version was released with the JDK 1.1 version of RMI and required the use of skeleton classes on
the server.
• The second version was released with the Java 2 SDK. It has been optimized for performance and does not
require skeleton classes.
• Some other changes with the Java 2 SDK are that RMI service interfaces are not required to extend
java.rmi.Remote and their service methods do not necessarily throw RemoteException.
• The RMI transport layer is designed to make a connection between clients and server, even in the face of
networking obstacles.
• While the transport layer prefers to use multiple TCP/IP connections, some network configurations only allow
a single TCP/IP connection between a client and server (some browsers restrict applets to a single network
connection back to their hosting server).

• In this case, the transport layer multiplexes multiple virtual connections within a single TCP/IP connection.
2.5.2 Naming Remote Objects
• Clients find remote services by using a naming or directory service.
• A naming or directory service is run on a well-known host and port number.
• RMI can use many different directory services, including the Java Naming and Directory Interface (JNDI).
• RMI itself includes a simple service called the RMI Registry, rmiregistry.
• The RMI Registry runs on each machine that hosts remote service objects and accepts queries for services,
by default on port 1099.
• On a host machine, a server program creates a remote service by first creating a local object that
implements that service.
• Next, it exports that object to RMI.
• When the object is exported, RMI creates a listening service that waits for clients to connect and request the
service.
• After exporting, the server registers the object in the RMI Registry under a public name. On the client side, the
RMI Registry is accessed through the static class Naming.
• It provides the method lookup() that a client uses to query a registry.
• The method lookup() accepts a URL that specifies the server host name and the name of the desired service.
• The method returns a remote reference to the service object. The URL takes the form:
rmi://<host_name>[:<name_service_port>]/<service_name>
- where host_name is a name recognized on the Local Area Network (LAN) or a DNS name
on the Internet.
- name_service_port needs to be specified only if the naming service is running on a
port other than the default 1099.
2.5.3 Using RMI
• A working RMI system is composed of several parts.
• Interface definitions for the remote services
• Implementations of the remote services
• Stub and Skeleton files
• A server to host the remote services
• An RMI Naming service that allows clients to find the remote services
• A class file provider (an HTTP or FTP server)
• A client program that needs the remote services
The steps to be followed to build a RMI system are:
• Write and compile Java code for interfaces
• Write and compile Java code for implementation classes
• Generate Stub and Skeleton class files from the implementation classes
• Write Java code for a remote service host program
• Develop Java code for RMI client program
• Install and run RMI system
2.5.3.1 Interfaces
• The first step is to write and compile the Java code for the service interface.
• The Calculator interface defines all of the remote features offered by the service:
public interface Calculator extends java.rmi.Remote
{
    public long add(long a, long b) throws java.rmi.RemoteException;
    public long sub(long a, long b) throws java.rmi.RemoteException;
    public long mul(long a, long b) throws java.rmi.RemoteException;
    public long div(long a, long b) throws java.rmi.RemoteException;
}
• The interface extends Remote, and each method signature declares that it may throw a RemoteException
object. Compile the above code with a java compiler.
2.5.3.2 Implementation
The implementation for the remote service is given below:
public class CalculatorImpl extends java.rmi.server.UnicastRemoteObject implements Calculator
{
    // Implementations must have an explicit constructor in order to declare the RemoteException exception
    public CalculatorImpl() throws java.rmi.RemoteException
    {
        super();
    }
    public long add(long a, long b) throws java.rmi.RemoteException
    {
        return a + b;
    }
    public long sub(long a, long b) throws java.rmi.RemoteException
    {
        return a - b;
    }
    public long mul(long a, long b) throws java.rmi.RemoteException
    {
        return a * b;
    }
    public long div(long a, long b) throws java.rmi.RemoteException
    {
        return a / b;
    }
}

• Compile the above code with java compiler.


• The implementation class uses UnicastRemoteObject to link into the RMI system.
• In the example the implementation class directly extends UnicastRemoteObject. This is not a requirement.
• A class that does not extend UnicastRemoteObject may use its exportObject() method to be linked into RMI.
• When a class extends UnicastRemoteObject, it must provide a constructor that declares that it may throw a
RemoteException object.
• When this constructor calls super(), it activates code in UnicastRemoteObject that performs the
RMI linking and remote object initialization.
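As a sketch of the alternative mentioned above, the self-contained example below (class and method names are invented for illustration) exports an object whose class does not extend UnicastRemoteObject, then invokes it through the returned stub without involving a registry:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// Sketch: the implementation does NOT extend UnicastRemoteObject;
// it is linked into RMI via the static exportObject() method instead.
public class ExportDemo {
    public interface Adder extends Remote {
        long add(long a, long b) throws RemoteException;
    }

    public static class AdderImpl implements Adder {
        public long add(long a, long b) { return a + b; }
    }

    public static void main(String[] args) throws Exception {
        AdderImpl impl = new AdderImpl();
        // Export on an anonymous port; RMI returns a remote stub for the object.
        Adder stub = (Adder) UnicastRemoteObject.exportObject(impl, 0);
        System.out.println(stub.add(4, 5));             // invoked via the stub
        UnicastRemoteObject.unexportObject(impl, true); // allow the JVM to exit
    }
}
```

Here the stub is used directly; in a full system the server would instead register it with the RMI Registry under a public name, as described in the next sections.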
2.5.3.3 Stubs and Skeletons
• The RMI compiler, rmic, is used to generate the stub and skeleton files.
• The compiler runs on the remote service implementation class file (CalculatorImpl).
• After rmic is run, the files CalculatorImpl_Stub.class and CalculatorImpl_Skel.class are created (in the case of Java 2, the skeleton is not required).
• Instructions for the JDK 1.1 version of the RMI compiler, rmic, are-
-Usage: rmic <options> <class name>, where <options> includes:
-keep: Do not delete intermediate generated source files
-keepgenerated: (same as "-keep")
-g: Generate debugging info
-depend: Recompile out-of-date files recursively
-nowarn: Generate no warnings
-verbose: Output messages about what the compiler is doing
-classpath <path>: Specify where to find input source and class files
-d <directory>: Specify where to place generated class files
-J<runtime flag>: Pass argument to the java interpreter
• The Java 2 platform version of rmic adds three new options:
• -v1.1: Create stubs/skeletons for the JDK 1.1 stub protocol version
• -vcompat (default): Create stubs/skeletons compatible with both the JDK 1.1 and Java 2 stub protocol
versions
• -v1.2: Create stubs for the Java 2 stub protocol version only
2.5.3.4 Host Server
• Remote RMI services must be hosted in a server process.
• The class CalculatorServer is a very simple server that provides the bare essentials for hosting.
import java.rmi.Naming;

public class CalculatorServer
{
    public CalculatorServer()
    {
        try
        {
            Calculator c = new CalculatorImpl();
            Naming.rebind("rmi://localhost:1099/CalculatorService", c);
        }
        catch (Exception e)
        {
            System.out.println("Trouble: " + e);
        }
    }

    public static void main(String args[])
    {
        new CalculatorServer();
    }
}
2.5.3.5 Client
The source code for the client is:
import java.rmi.Naming;
import java.rmi.RemoteException;
import java.net.MalformedURLException;
import java.rmi.NotBoundException;

public class CalculatorClient
{
    public static void main(String[] args)
    {
        try
        {
            Calculator c = (Calculator) Naming.lookup("rmi://localhost/CalculatorService");
            System.out.println(c.sub(4, 3));
            System.out.println(c.add(4, 5));
            System.out.println(c.mul(3, 6));
            System.out.println(c.div(9, 3));
        }
        catch (MalformedURLException murle)
        {
            System.out.println();
            System.out.println("MalformedURLException");
            System.out.println(murle);
        }
        catch (RemoteException re)
        {
            System.out.println();
            System.out.println("RemoteException");
            System.out.println(re);
        }
        catch (NotBoundException nbe)
        {
            System.out.println();
            System.out.println("NotBoundException");
            System.out.println(nbe);
        }
        catch (java.lang.ArithmeticException ae)
        {
            System.out.println();
            System.out.println("java.lang.ArithmeticException");
            System.out.println(ae);
        }
    }
}
2.5.3.6 Running the RMI System
Start three consoles, one for the server, one for the client, and one for the RMIRegistry.
• Start the Registry: Be in the directory that contains the classes that have been written and run rmiregistry. The registry
will start running; switch to the next console.
• Start the server: Enter "java CalculatorServer" at the console; it will start, load the implementation into memory
and wait for a client connection.
• Start the client: Enter "java CalculatorClient"; if all goes well, the results of the four invocations are printed.
2.6 DISTRIBUTED OPERATING SYSTEM SUPPORT

2.6.1 INTRODUCTION
• The task of any operating system is to provide problem-oriented abstractions of the underlying physical resources: the
processors, memory, communications and storage media.
• It takes over the physical resources on a single node
• Manages them to present these resource abstractions through the system-call interface.
• UNIX and Windows NT are examples of network operating systems.
• They have a networking capability built into them and can be used to access remote resources.
• Access is network-transparent for some types of resources.
• With a network operating system, a user can remotely log into another computer and run processes there.
• Unlike the operating system's control of the processes running at its own node, it does not schedule processes across
the nodes.
 Distributed Operating Systems
There are two types of distributed operating systems: multiprocessor and multicomputer operating systems.
 A Multiprocessor Operating System
• It manages the resources of a multiprocessor.
• Multiprocessor operating systems aim to support high performance through multiple CPUs.
• An important goal is to make the number of CPUs transparent to the application.
 A Multicomputer Operating System
• A multicomputer operating system is an operating system that is developed for a homogeneous
multicomputer.
• The functionality of distributed operating systems is the same as that of traditional operating systems
for uniprocessor systems, except that they handle multiple CPUs.
 Uni Processor Operating Systems
• Operating systems have traditionally been built to manage computers with only a single CPU.
• The main goal of these systems is to allow users and applications an easy way of sharing resources
such as the CPU, main memory, disks, and peripheral devices.
2.6.2 THE OPERATING SYSTEM LAYER
• Middleware runs on a variety of OS-hardware combinations (platforms) at the nodes of a distributed system.
• The OS running at a node provides its own flavour of abstractions of local hardware resources for processing, storage
and communication.
• Middleware utilizes a combination of these local resources to implement its mechanisms for remote invocations
between objects or processes at the nodes.
• Figure 3.1 shows how the OS layer at each of two nodes support a common middleware layer in providing a
distributed infrastructure for applications and services.
• Kernels and server processes are the components that manage resources and present clients with an interface to the
resources. The following functions are to be satisfied for an effective operating system.
 Encapsulation -They should provide a useful service interface to their resources i.e. a set of
operations to meet their clients' need.
 Details such as memory management and device management used to implement resources should be
hidden from clients.
 Protection - Resources require protection from illegitimate accesses
 Concurrent Processing - Clients may share resources and access them concurrently. Clients access
resources by making RMIs to server objects, or system calls to the kernel.

Figure 3.1: System layers (applications and services above a common middleware layer, which runs over
the OS kernel, processes, threads and communication facilities of each node's platform)



 Communication - Operation parameters and results have to be passed to and from resource
managers, over a network or within a computer.
 Scheduling - When an operation is invoked, its processing must be scheduled within the kernel or
server. The core OS functionality is concerned with process and thread management, memory
management and communication between processes on the same computer.
 The core OS components are listed below:
 Process manager - Handles the creation of and operations upon processes. A process is a unit of
resource management, including an address space and one or more threads.
 Thread manager - Handles thread creation, synchronization and scheduling.
 Communication manager - Handles communication between threads attached to different
processes on the same computer. Some kernels also support communication between threads in
remote processes.
 Memory manager - Handles management of physical and virtual memory.
 Supervisor - Dispatching of interrupts, system call traps and other exceptions; control of the memory
management unit and hardware caches; processor and floating-point unit register manipulations. This
is known as the Hardware Abstraction Layer in Windows NT.

Figure 3.2: Core OS functionality (process manager, communication manager, thread manager,
memory manager, supervisor)

2.6.3 PROTECTION
• Resources require protection from illegitimate accesses.
• One way is to use a type-safe programming language such as Java or Modula-3.
• A type-safe language is one in which no module may access a target module unless it has a reference to it.
• We can also employ hardware support to protect modules from one another at the level of individual
invocations, regardless of the language in which they are written.
• To operate this scheme on a general-purpose computer, we require a kernel.
 Kernel and protection:
• The kernel is a program that is distinguished by the facts that it always runs and its code is executed with
complete access privileges for the physical resources on its host computer.
• In particular, it can control the memory management unit and set the processor registers so that no other code may
access the machine's physical resources except in acceptable ways.
• Most processors have a hardware mode register whose setting determines whether privileged instructions can be
executed, such as those used to determine which protection tables are currently employed by the memory
management unit.
• A kernel process executes with the processor in supervisor (privileged) mode; the kernel arranges that other
processes execute in user (unprivileged) mode.
• The kernel also sets up address spaces to protect itself.
• An address space is a collection of ranges of virtual memory locations, in each of which a specified combination
of memory access rights applies, such as read-only or read-write.
• A process cannot access memory outside its address space.
• The terms user process or user-level process are normally used to describe one that executes in user mode and has
a user-level address space. (That is one with restricted memory access rights compared with the kernel's address
space).
• When a process executes application code, it executes in a distinct user-level address space for that application;
when the same process executes kernel code, it executes in the kernel's address space.

• The process can safely transfer from a user-level address space to the kernel's address space via an exception such
as an interrupt or a system call trap.
• When the TRAP instruction is executed, the hardware forces the processor to execute kernel-supplied handler
function, in order that no process may gain illicit control of the hardware.
2.7 PROCESSES AND THREADS
• A process consists of an execution environment together with one or more threads.
• A thread is the operating system abstraction of an activity.
• An execution environment is the unit of resource management: a collection of local kernel-managed resources
to which its threads have access.
• An execution environment primarily consists of:
• an address space
• thread synchronization and communication resources such as semaphores and communication interfaces (e.g. sockets)
• higher-level resources such as open files and windows.
• Execution environments are normally expensive to create and manage, but several threads can share them.
• Threads can be created and destroyed dynamically as needed.
• The central aim of having multiple threads of execution is to maximize the degree of concurrent execution
between operations, thus enabling the overlap of computation with input and output and enabling concurrent
processing on multiprocessors.
• This can be useful within servers, where concurrent processing of clients' requests can reduce the tendency
for servers to become bottlenecks.
2.7.1 Address spaces
•An address space is a unit of management of a process's virtual memory.
•It is large (up to 2^64 bytes) and consists of one or more regions, separated by inaccessible areas of virtual
memory.
• A region is an area of contiguous virtual memory that is accessible by the threads of the owning process.
• Regions do not overlap.
• Each region is specified by the following properties:
 Its extent (lowest virtual address & size)
 Read/write/execute permissions for the process's threads
 Whether it can be grown upwards or downwards. This model is page-oriented rather than
segment-oriented.
• A shared memory region is one that is backed by the same physical memory as one or more regions belonging to
other address spaces.
• Processes therefore access identical memory contents in the regions that are shared, while their nonshared
regions remain protected.
• The uses of shared regions include libraries, the kernel, etc.
2.7.2 Creation of a new process
• The creation of a new process can be separated into two independent aspects:
• The choice of a target host
• The creation of an execution environment.
 Choice of process host
• The choice of node at which the new process will reside, i.e. the process allocation decision, is a matter of policy.
• In general, process allocation policies range from always running new processes at their originator's workstation to
sharing the process load between a set of computers.
• The transfer policy determines whether to situate a new process locally or remotely.
• This may depend on whether the local node is lightly or heavily loaded.
• The location policy determines which node should host a new process selected for transfer.
• In all cases, the choice of target host for each process is transparent to the programmer and the user.
• Process location policies may be static or adaptive.
• The former operate without regard to the current state of the system.
• The later apply heuristics to make their allocation decisions, based on unpredictable run time factors such as a
measure of the load on each node.
• Load sharing systems may be centralized, hierarchical or decentralized.
• Load managers collect information about the nodes and use it to allocate new processes to nodes.
• In centralized systems, there is one load manager component. In hierarchical systems, managers make process
allocation decisions as far down the tree as possible.
• In the decentralized load-sharing algorithm, nodes exchange information with one another directly to make
allocation decisions.

• In sender-initiated load-sharing algorithms, the node that requires a new process to be created is responsible for
initiating the transfer decision.
• It typically initiates a transfer when its own load crosses a threshold.
• By contrast, in receiver-initiated algorithms, a node whose load is below a given threshold advertises its existence to
other nodes so that relatively loaded nodes will transfer work to it.
• Migratory systems can shift load at any time, not just when a new process is created.
• They use a mechanism called process migration: the transfer of an executing process from one node to another.
• Creation of a new execution environment: once the host computer has been selected, a new process requires an
execution environment consisting of an address space with initialized contents.
• There are two approaches to defining and initializing the address space of a newly created process.
 The first approach is used when the address space is of statically defined format. It could contain just a program
text region, heap region and stack region.
 In the second approach, the address space can be defined with respect to an existing execution environment.
• In the case of UNIX fork semantics, for example, the newly created child process physically shares the parent's
text region and has heap and stack regions that are copies of the parent's in extent.
• This scheme has been generalized so that each region of the parent process may be inherited by the child process.
An inherited region may either be shared with or logically copied from the parent's region.
• With the copy-on-write technique, an inherited region is logically copied from the parent to the child.
• The region is copied, but no physical copying takes place by default.
• The page frames are shared between the two address spaces.
• A page in the region is only physically copied when one or the other process attempts to modify it.

Figure 6.3: Copy on write

• Initially, all page frames associated with the inherited regions are shared between the two processes' page tables.
• The pages are initially write-protected at the hardware level. If a thread in either process attempts to modify the
data, a hardware exception called a page fault is taken.
• Let us say that process B attempted the write.
• The page fault handler allocates a new frame for process B and copies the original frame's data into it.
• The old frame number is replaced by the new frame number in one process's page table.
• The two corresponding pages in processes A and B are then made writable at the hardware level. After all of this
has taken place, process B's modifying instruction is allowed to proceed.
2.7.3 Threads
• To understand the role of threads in a distributed system, it is important to understand what a process is, and how
process and threads relate.
• To execute a program an OS creates a number of virtual processors, each one for running a different program.
• To keep track of these virtual processors, the OS has a process table, containing entries to store CPU register
values, memory maps, open files, accounting information, privileges, etc.
• A process is defined as a program in execution, i.e. a program that is currently being executed on one of the OS's
virtual processors.

• An important issue is that the OS takes great care to ensure transparency in sharing the same CPU and other
hardware resources among multiple processes.
• This concurrency transparency comes at a relatively high price: e.g., each time a process is created, the OS must
create a completely independent address space.
• A thread is very similar to a process in the sense that it can also be seen as the execution of a (part of a) program on
a virtual processor.
• However, in contrast to processes, no attempt is made to achieve a high degree of concurrency transparency.
• Therefore, a thread system generally maintains only the minimum information to allow a CPU to be shared by
several threads.

2.7.3.1 Architectures for multi-threaded servers


 Work pool Architecture
• The server creates a fixed pool of worker threads to process the requests when it starts.
• The module marked receipt and queuing is typically implemented by an I/O thread, which receives requests
from a collection of sockets or ports and places them on a shared request queue for retrieval by the workers.
• It is also possible to handle varying request priorities by introducing multiple queues into the worker pool
architecture, so that the worker threads scan the queues in order of decreasing priority.
Figure 6.4: Worker Pool Architecture

Disadvantage
• Inflexibility in the number of worker threads.
• High level of switching between the I/O and worker threads as they manipulate the shared queue.
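The worker pool architecture can be sketched in standard Java as follows; the request type, pool size and poison-pill shutdown are illustrative choices, not from the text. An "I/O thread" role (here, the main thread) places requests on a shared queue that a fixed pool of workers drains:

```java
import java.util.concurrent.*;

// Minimal worker-pool sketch: a shared request queue drained by a fixed pool.
public class WorkerPoolDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> requests = new LinkedBlockingQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(4); // fixed pool of workers
        CountDownLatch done = new CountDownLatch(8);
        for (int w = 0; w < 4; w++) {
            pool.execute(() -> {
                try {
                    while (true) {
                        Integer req = requests.take();   // blocks while the queue is empty
                        if (req < 0) break;              // poison pill: stop this worker
                        System.out.println("handled " + req);
                        done.countDown();
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
        }
        for (int i = 0; i < 8; i++) requests.put(i);     // the "I/O thread" role
        done.await();                                    // wait until all requests are handled
        for (int w = 0; w < 4; w++) requests.put(-1);    // shut the workers down
        pool.shutdown();
    }
}
```

Because the pool size is fixed at creation, this sketch also illustrates the inflexibility noted above: a burst of requests simply queues up until a worker is free.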
 Thread-per-request architecture
• The I/O thread spawns a new worker thread for each request, and the worker destroys itself when it has
processed the request against its designated remote object.
Advantage
• Flexibility in the number of worker threads
• Throughput is potentially maximized.
Disadvantage
• Overhead associated with thread creation and destruction operations.
 Thread- per-connection architecture:
• The server creates a new worker thread when a client makes a connection and destroys the thread when the
client closes the connection.
• In between the client may make many requests over the connection, targeted at one or more remote objects.
Disadvantage
Thread management overhead.

Figure 6.5: Thread per request/ connection architectures

 Thread-per-object architecture:
• It associates a thread with each remote object. An I/O thread receives requests and queues them for the
workers.
• But this time there is a per-object queue.

Disadvantage
Clients may be delayed while a worker thread has several outstanding requests but another thread has no work to
perform.

2.7.3.2 Threads within clients

• Threads can be useful for clients as well as servers.


• Figure 6.6 shows a client process with two threads.
• The first thread generates results to be passed to a server by RMI, but does not require a reply.
• RMI calls typically block the caller, even when there is strictly no need to wait.
• This client process can incorporate a second thread, which performs the remote method invocations and blocks while
the first thread is able to continue computing further results.
• The first thread places its results in buffers, which are emptied by the second thread. It is only blocked when all the
buffers are full.
• Creating a thread is cheaper than creating a process.
• Switching to a different thread within the same process is cheaper than switching between threads belonging to
different processes.
• Threads within a process may share data and other resources conveniently and efficiently compared with separate
processes.
• Threads within a process are not protected from one another.

Figure 6.6: Client process with two threads
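The two-thread client can be sketched with a bounded buffer; the "RMI call" is simulated here by a print, and all names and sizes are illustrative. The first thread keeps computing and only blocks when the buffer is full, while the second thread drains the buffer and performs the blocking invocations:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the two-thread client: producer thread + invoking thread.
public class TwoThreadClient {
    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> buffers = new ArrayBlockingQueue<>(4); // bounded buffer
        Thread invoker = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    int result = buffers.take();       // blocks like an RMI call would
                    System.out.println("sent " + result);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        invoker.start();
        for (int i = 0; i < 5; i++)
            buffers.put(i * i);                        // first thread continues computing
        invoker.join();
    }
}
```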



2.7.3.3 Thread Programming



Thread programming is concurrent programming. Java provides methods for creating threads, and for destroying
and synchronizing them.
• The Java Thread class includes the following methods:
 Thread(ThreadGroup group, Runnable target, String name) - Creates a new thread in the SUSPENDED state,
which will belong to group and be identified as name; the thread will execute the run() method of target.
 setPriority(int newPriority), getPriority() - Set and return the thread's priority.
 run() - A thread executes the run() method of its target object.
 start() - Changes the state of the thread from SUSPENDED to RUNNABLE.
 sleep(int millisecs) - Causes the thread to enter the SUSPENDED state for the specified time.
 destroy() - Destroys the thread.
2.7.3.4 Thread lifetimes
• A new thread is created on the same JVM in the SUSPENDED state.
• After it is made RUNNABLE with the start() method, it executes the run() method of an object designated in its
constructor.
• A thread ends its life when it returns from the run() method or when its destroy() method is called.
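A minimal sketch of this lifecycle using the standard Thread API (the lambda and thread name are illustrative):

```java
// Lifecycle sketch: a thread is created suspended, made runnable by start(),
// and ends its life when run() returns.
public class LifecycleDemo {
    public static void main(String[] args) throws Exception {
        Thread t = new Thread(() -> System.out.println("run() executing"), "demo");
        System.out.println("before start: alive=" + t.isAlive());
        t.start();                       // SUSPENDED -> RUNNABLE
        t.join();                        // wait for run() to return
        System.out.println("after join: alive=" + t.isAlive());
    }
}
```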
2.7.3.5 Thread synchronization
• Programming a multi-threaded process requires great care.
• The main difficult issues are the sharing of objects and the techniques used for thread coordination and
cooperation. Java provides the synchronized keyword for programmers to designate the well-known monitor
construct for thread coordination.
• The monitor's guarantee is that at most one thread can execute within it at any time.
• The actions of the I/O thread and the worker threads can be serialized by designating the addTo() and removeFrom()
methods in the queue class as synchronized methods.
• When a worker thread discovers that there are no requests to process, it calls wait() on the instance of the queue.
• When the I/O thread subsequently adds a request to the queue, it calls the queue's notify() method to wake up the
worker.
• The join() method blocks the caller until the target thread's termination. The interrupt() method is useful for
prematurely waking a waiting thread.
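The request queue described above can be sketched as follows; addTo() and removeFrom() follow the text, while the element type and the demo in main() are illustrative:

```java
import java.util.LinkedList;
import java.util.Queue;

// Monitor-style request queue: synchronized addTo()/removeFrom(),
// with workers wait()ing until the I/O thread notify()s them.
public class RequestQueue<T> {
    private final Queue<T> items = new LinkedList<>();

    public synchronized void addTo(T item) {
        items.add(item);
        notify();                       // wake one waiting worker
    }

    public synchronized T removeFrom() throws InterruptedException {
        while (items.isEmpty())
            wait();                     // release the monitor and sleep
        return items.remove();
    }

    public static void main(String[] args) throws Exception {
        RequestQueue<String> q = new RequestQueue<>();
        Thread worker = new Thread(() -> {
            try { System.out.println("worker got: " + q.removeFrom()); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        worker.start();                 // blocks in wait() until a request arrives
        Thread.sleep(100);
        q.addTo("request-1");           // the I/O thread's role
        worker.join();
    }
}
```

Note the while loop around wait(): a woken thread must recheck the condition before proceeding, since wait() may return when the queue is still empty.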
2.7.3.6 Thread scheduling
• In preemptive scheduling, a thread may be suspended at any point to make way for another thread.
• In non-preemptive scheduling, a thread runs until it makes a call to the threading system, when the system may
de-schedule it and schedule another thread to run.
2.7.3.7 Thread implementation.
• Many kernels provide native support for multi-threaded processes, including Windows NT, Solaris, Mach and
Chorus.
• These kernels provide thread creation and management system calls, and they schedule individual threads.
• Some other kernels have only a single-threaded process abstraction.
• Multi-threaded processes must then be implemented in library procedures linked to application programs.
• In such cases, the kernel has no knowledge of these user-level threads and therefore cannot schedule them
independently.
• A threads run-time library organizes the scheduling of threads.
• A user-level threads implementation suffers from the following problems:
 The threads within a process cannot take advantage of a multiprocessor.
 A thread that takes a page fault blocks the entire process and all threads within it.
 Threads within different processes cannot be scheduled according to a single scheme of relative prioritization.
• Advantages of user-level threads over kernel-level thread implementations are:
 Certain thread operations are significantly less costly, e.g., switching between threads belonging to the
same process does not necessarily involve a system call.
 The thread scheduling can be customized or changed to suit particular application requirements.
 It is possible to combine the advantages of user-level and kernel-level thread implementations, e.g., in a
hierarchic event-based scheduling system.
 In a hierarchic, event-based scheduling system, each application process contains a user-level scheduler,
which manages the threads inside the process.
 The kernel is responsible for allocating virtual processors to processes.
 The number of virtual processors assigned to a process depends on the application requirements.
• Figure 6.7 shows an example of a three-processor machine, on which the kernel allocates one virtual processor to
process A, running a relatively low-priority job, and two virtual processors to process B.

• The number of virtual processors assigned to a process can also vary.


• Processes can give back a virtual processor that they no longer need; they can also request extra virtual
processors.

Figure 6. 7: Scheduler Activation

• A Scheduler Activation (SA) is a call from the kernel to a process, which notifies the process's scheduler of an
event.
• A process notifies the kernel when either of two types of event occurs: when a virtual processor is idle and
no longer needed, or when an extra virtual processor is required.
The four types of events that the kernel notifies to the user-level scheduler are as follows:
 Virtual processor allocated: the kernel has assigned a new virtual processor to the process.
 SA blocked - an SA has blocked in the kernel, and the kernel is using a fresh SA to notify the
scheduler.
 SA unblocked - an SA that was blocked in the kernel has become unblocked and is ready to
execute at user-level again.
 SA preempted - the kernel has taken away the specified SA from the process.

2.8 COMMUNICATION AND INVOCATION


• Clients and servers may make millions of invocation-related operations in their lifetime, so that small fractions of
milliseconds count in invocation costs.
• Network technologies continue to improve, but invocation times have not decreased in proportion with increases in
network bandwidth.
Figure 6.8: Invocations between address spaces

• Calling a conventional procedure or invoking a conventional method, making a system call, RPC and RMI are all
examples of invocation mechanisms.
• Invocation mechanisms can be either synchronous or asynchronous.

• The important performance-related distinctions between invocation mechanisms are whether they involve a domain
transition, whether they involve communication across a network, and whether they involve thread scheduling and
switching.
• The Figure 6.8 shows the invocations between address spaces.
• The main components accounting for remote invocation delay are:
 Marshalling: Marshalling and unmarshalling, which involve copying and converting data, become a
significant overhead as the amount of data grows.
 Data copying: Message data is copied several times in the course of an RPC
• Across the user- kernel boundary, between the client and server address space and kernel buffers.
• Across protocol layer
• Between the network interface and kernel buffers.
 Packet Initialization: This involves initializing protocol headers and trailers, including checksums.
 Thread scheduling and context switching
 Waiting for acknowledgements.
 Choice of Protocol
• The delay that a client experiences during request-reply interactions over TCP is not necessarily worse than for
UDP, and it is sometimes better, particularly for large messages.
• The connection overheads of TCP are particularly evident in web invocations, since HTTP 1.0 makes a separate
TCP connection for every invocation. However, HTTP 1.1 makes use of persistent connections, which last over the
course of several invocations.
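A persistent HTTP/1.1 connection can be demonstrated with the Python standard library (a sketch only; the local echo server and port choice are illustrative, not from the text):

```python
# Sketch: HTTP/1.1 persistent connections let several invocations reuse
# one TCP connection, avoiding a connection setup per request as in HTTP/1.0.
import http.client
import http.server
import threading

class EchoHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"    # enables persistent connections
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):    # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One HTTPConnection == one TCP connection, reused for both invocations.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
statuses = []
for _ in range(2):
    conn.request("GET", "/")
    resp = conn.getresponse()
    statuses.append(resp.status)
    resp.read()                      # drain body so the connection can be reused
conn.close()
server.shutdown()
print(statuses)
```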
 Invocations within a computer
• Shared regions may be used for rapid communication between a user process and the kernel, or between user
processes.
• Data is communicated by writing to and reading from the shared region.
• Data is thus passed efficiently, without being copied to and from the kernel's address space.
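A shared region between two processes can be sketched with Python's shared memory support (assumptions: POSIX platform, since a fork context is used to avoid the `__main__` guard; the region contents are illustrative):

```python
# Sketch: two processes communicate by writing to and reading from a
# shared memory region, instead of copying data via kernel messages.
import multiprocessing as mp
from multiprocessing import shared_memory

def writer(name):
    # The child attaches to the same physical region by name.
    region = shared_memory.SharedMemory(name=name)
    region.buf[:5] = b"hello"        # write directly into the shared region
    region.close()

region = shared_memory.SharedMemory(create=True, size=16)
ctx = mp.get_context("fork")         # POSIX-only; fork shares the parent's state
p = ctx.Process(target=writer, args=(region.name,))
p.start()
p.join()
data = bytes(region.buf[:5])         # read the same physical memory
region.close()
region.unlink()
print(data)
```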
• Light weight RPC (LRPC) is a more efficient invocation mechanism for the case of two processes on the same
machine.
• The LRPC design is based on optimizations concerning data copying and thread scheduling. Instead of RPC
parameters being copied between the kernel and the user address spaces involved, the client and server are able to pass
arguments and return values directly via a shared argument stack, the A stack.
• The same A stack is used by the client and server stubs. In LRPC, arguments are copied only once: when they are
marshalled onto the A stack.
• In an equivalent RPC, they are copied four times: from the client stub's stack to a message; from the message to a
kernel buffer; from the kernel buffer to a server message; and from the message to the server stub's stack.
• A client thread enters the server's execution environment by first trapping to the kernel and presenting it with a
capability.
• The kernel checks this and only allows a context switch to a valid server procedure; if it is valid, the kernel switches the
thread's context to call the procedure in the server's execution environment.
• When the procedure in the server returns, the thread returns to the kernel, which switches the thread back to the client
execution environment.
• A client may use two threads to make concurrent invocations. The first client thread marshals the arguments and calls the send operation.
• The second thread then immediately makes the second invocation.
• Each thread waits to receive its results.
 Asynchronous invocations
 An asynchronous invocation is one that is performed asynchronously with respect to the caller.
 That is, it is made with a non-blocking call, which returns as soon as the invocation request message has been created
and is ready for dispatch.
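A non-blocking invocation that returns immediately to the caller can be sketched as follows (an assumption of this example: `concurrent.futures` stands in for an RMI runtime's asynchronous call mechanism; `remote_method` is hypothetical):

```python
# Sketch: an asynchronous invocation returns at once; the caller continues
# with other work and collects the result later via a future.
import time
from concurrent.futures import ThreadPoolExecutor

def remote_method(x):
    time.sleep(0.05)          # stands in for network + server processing time
    return x * 2

with ThreadPoolExecutor() as pool:
    future = pool.submit(remote_method, 21)   # non-blocking: returns immediately
    # ... the caller continues with other work here ...
    result = future.result()                  # rendezvous with the result later
print(result)
```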
 Persistent asynchronous invocations
 A conventional invocation mechanism is designed to fail after a given number of timeouts have occurred.
 A system for persistent asynchronous invocation tries indefinitely to perform the invocation, until it is known to
have succeeded or failed, or until the application cancels the invocation, e.g. Queued RPC.
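The retry-until-known-outcome behaviour can be sketched as follows (the retry policy and helper names here are hypothetical, for illustration only):

```python
# Sketch: a persistent asynchronous invocation keeps retrying until it
# definitely succeeds, fails, or is cancelled by the application, rather
# than giving up after a fixed number of timeouts.
def persistent_invoke(invoke, cancelled=lambda: False):
    attempts = 0
    while not cancelled():
        attempts += 1
        try:
            return invoke(), attempts          # known to have succeeded
        except TimeoutError:
            continue                           # keep trying indefinitely
    return None, attempts                      # the application cancelled it

# A flaky "remote server" that times out twice, then replies.
failures = iter([TimeoutError, TimeoutError, None])
def flaky():
    err = next(failures)
    if err:
        raise err()
    return "reply"

result, attempts = persistent_invoke(flaky)
print(result, attempts)
```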
2.9 OPERATING SYSTEM ARCHITECTURE
• The separation of fixed resource management mechanisms from resource management policies, which vary from
application to application and service to service, has been a guiding principle in operating system design for a long
time.
• Ideally, the kernel would provide only the most basic mechanisms upon which the general resource management tasks at a
node are carried out. Server modules would be dynamically loaded as required, to implement the required resource
management policies for the currently running applications.

There are two key examples of kernel design: the monolithic and micro kernel approaches.
2.9.1 Monolithic kernels
• The UNIX operating system kernel has been called monolithic: it performs all basic OS functions, takes up on the
order of megabytes of code and data, and is undifferentiated, i.e. coded in a non-modular way.
• The result is that to a large extent it is intractable: altering any individual software component to adapt it to changing
requirements is difficult.
• A monolithic kernel can contain some server processes that execute within its address space, including file servers
and some networking.
• The code that these processes execute is part of the standard kernel configuration.
• The advantage is the relative efficiency with which operations can be invoked.

Figure 6.11: Monolithic kernel & Micro kernel


2.9.2 Micro kernel
 In the micro kernel design, the kernel provides only the most basic abstractions, principally address spaces, threads and
local interprocess communication; all other services are provided by servers that are dynamically loaded precisely at
those computers in the distributed system that require them.
• Clients access these system services using the kernel's message-based invocation mechanisms. The place of the micro
kernel in the design of a distributed system is shown in Figure 6.12.

Figure 6.12: Micro kernel supports middleware via subsystems


• The micro kernel appears as a layer between the hardware layer and a layer consisting of major system components
called subsystems.
• If performance is the main goal, middleware may use the facilities of the micro kernel directly.
• Otherwise, it uses a language run-time support subsystem, or a higher-level operating system interface provided by an
emulation subsystem.
• Each of these, in turn, is implemented by a combination of library procedures linked into applications and a set of
servers running on top of the micro kernel.
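The kernel's role as a pure message router between clients and dynamically registered servers can be sketched as follows (all names are hypothetical and the threads/queues merely model address spaces and kernel message passing):

```python
# Sketch: a microkernel provides only message passing; services such as a
# file server are ordinary servers registered dynamically and reached by
# sending request messages through the kernel.
import queue
import threading

class MicroKernel:
    """Routes request messages to dynamically registered servers."""
    def __init__(self):
        self.servers = {}

    def register(self, name, handler):
        # A server is loaded dynamically: it gets an inbox and a thread.
        inbox = queue.Queue()
        def serve():
            while True:
                reply_box, request = inbox.get()
                reply_box.put(handler(request))
        threading.Thread(target=serve, daemon=True).start()
        self.servers[name] = inbox

    def invoke(self, name, request):
        reply_box = queue.Queue()
        self.servers[name].put((reply_box, request))  # kernel-mediated message
        return reply_box.get()                        # wait for the reply message

kernel = MicroKernel()
kernel.register("file_server", lambda req: f"contents of {req}")
print(kernel.invoke("file_server", "/etc/motd"))
```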
Advantages:
• Extensibility.
• Modularity behind Memory Protection Boundaries
