Module - 3 Updated

The document discusses process management and synchronization in operating systems, emphasizing the importance of coordinating process execution to prevent data inconsistency. It covers essential sections of a program, types of processes, the critical section problem, and various synchronization algorithms, including hardware solutions like Test and Set, Swap, and Unlock and Lock. Additionally, it introduces semaphores for process synchronization and explains the Banker's algorithm for deadlock avoidance in resource allocation.


Module - 3

Chapter-1 : Process Management and Synchronization

Process Synchronization in Operating System

Process synchronization means coordinating the execution of processes so
that no two processes simultaneously access the same shared resources and
data. It is required in a multi-process system, where multiple processes run
together and more than one process tries to gain access to the same shared
resource or data at the same time.

Without synchronization, changes made by one process may not be reflected
when another process accesses the same shared data. Processes must
therefore be synchronized with each other to avoid inconsistency of shared
data.

For example: suppose a process P1 is changing the data in a particular
memory location while, at the same time, another process P2 is reading the
data from the same location. There is a high probability that the data read
by P2 will be incorrect.

Sections of a Program in OS
Following are the four essential sections of a program:

1. Entry Section: The code in which a process requests permission to enter
its critical section.
2. Critical Section: The code in which the process accesses and modifies
the shared variables or resources.
3. Exit Section: The code a process executes when leaving its critical
section; it signals the processes waiting in the Entry Section that the
critical section is now free.
4. Remainder Section: The parts of the code not present in the above three
sections are collectively called the Remainder Section.
Types of process in Operating System
On the basis of synchronization, the following are the two types of processes:

1. Independent Processes: The execution of one process does not affect
the execution of another.
2. Cooperative Processes: The execution of one process affects the
execution of another. Such processes must be synchronized in order to
guarantee the order of execution.

The critical section problem


Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process
has a segment of code, called a critical section, in which the process may be
changing common variables, updating a table, writing a file, and so on. The
important feature of the system is that, when one process is executing in its
critical section, no other process is allowed to execute in its critical
section. Thus, the execution of critical sections by the processes is mutually
exclusive in time. The critical-section problem is to design a protocol that the
processes can use to cooperate. Each process must request permission to
enter its critical section. The section of code implementing this request is the
entry section. The critical section may be followed by an exit section. The
remaining code is the remainder section.

do {
    entry section
    critical section
    exit section
    remainder section
} while (1);
A solution to the critical-section problem must satisfy the following three
requirements:

1. Mutual Exclusion: If process Pi is executing in its critical section, then
no other process can be executing in its critical section.

2. Progress: If no process is executing in its critical section and some
processes wish to enter their critical sections, then only those processes that
are not executing in their remainder sections can participate in the decision
on which will enter its critical section next, and this selection cannot be
postponed indefinitely.

3. Bounded Waiting: There exists a bound on the number of times that
other processes are allowed to enter their critical sections after a process
has made a request to enter its critical section and before that request is
granted.

Hardware Synchronization Algorithms:

Test and Set, Swap, and Unlock and Lock

Process synchronization problems occur when two processes running
concurrently share the same data or the same variable. The value of that
variable may not be updated correctly before it is used by a second
process. Such a condition is known as a race condition. There are
software as well as hardware solutions to this problem. This section covers
the hardware solutions to the process synchronization problem and their
implementation.
There are three algorithms in the hardware approach to solving the process
synchronization problem:
1. Test and Set
2. Swap
3. Unlock and Lock
Hardware instructions in many operating systems help in the effective
solution of critical section problems.
1. Test and Set:

Here, the shared variable is lock, which is initialized to false.
TestAndSet(lock) works in this way: as a single atomic operation, it returns
the current value of lock and sets lock to true. The first process enters the
critical section at once, since TestAndSet(lock) returns false and the process
breaks out of the while loop. Other processes cannot enter now, because lock
is set to true and so their while condition remains true. Mutual exclusion is
ensured. Once the first process gets out of the critical section, lock is
changed to false, so the other processes can enter one by one; progress is
also ensured. However, after the first process, any process may go in: no
queue is maintained, so whichever process happens to find the lock false
can enter. Hence bounded waiting is not ensured.

Test and Set Pseudocode –

// Shared variable lock, initialized to false
boolean lock = false;

// Atomically returns the old value of target and sets it to true
boolean TestAndSet(boolean &target) {
    boolean rv = target;
    target = true;
    return rv;
}

while (1) {
    while (TestAndSet(lock));   // entry section: spin until lock was false
    // critical section
    lock = false;               // exit section
    // remainder section
}

2. Swap:

The Swap algorithm is much like the TestAndSet algorithm. Instead of
directly setting lock to true inside the function, a per-process variable key is
set to true and then swapped with lock. The first process executes
while(key): since key = true, the swap takes place, giving lock = true and
key = false (as lock was initially false). On the next evaluation of
while(key), key = false, so the loop breaks and the first process enters the
critical section. When another process now tries to enter, it sets key = true;
the swap inside its while(key) loop leaves lock = true and key = true (since
lock is already true from the first process), so the loop keeps executing and
that process cannot enter the critical section. Therefore mutual exclusion is
ensured. On leaving the critical section, lock is changed to false, so any
process that finds it gets to enter; progress is ensured. However, bounded
waiting is again not ensured, for the very same reason as before.

Swap Pseudocode –

// Shared variable lock, initialized to false,
// and per-process variable key, initialized to false
boolean lock = false;
boolean key = false;   // one per process

// Atomically exchanges the values of a and b
void swap(boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}

while (1) {
    key = true;
    while (key)
        swap(lock, key);    // entry section
    // critical section
    lock = false;           // exit section
    // remainder section
}
3. Unlock and Lock :

The Unlock and Lock algorithm uses TestAndSet to regulate the value of
lock, but it adds another variable, waiting[i], for each process, which
records whether that process is waiting. A queue is maintained for the
processes trying to enter the critical section; processes are considered in
order of their process number, not necessarily the order of arrival. Once
the ith process gets out of the critical section, it does not simply set lock
to false so that any process can grab the critical section, which was the
problem with the previous algorithms. Instead, it checks whether any
process is waiting in the queue. The queue is treated as circular: with j as
the next process in line, the while loop checks from the jth process around
to the (i-1)th process for any process waiting to access the critical
section. If no process is waiting, lock is set to false and any process that
comes next can enter the critical section. If some process j is waiting,
then waiting[j] is set to false, so that process j's entry loop terminates
and it can enter the critical section. This ensures bounded waiting, so the
problem of process synchronization can be solved through this algorithm.

Unlock and Lock Pseudocode –

// Shared variable lock, per-process variable key, and
// shared array waiting[n], all initialized to false
boolean lock = false;
boolean key;
boolean waiting[n];

while (1) {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)        // entry section
        key = TestAndSet(lock);
    waiting[i] = false;
    // critical section
    j = (i + 1) % n;                 // exit section: scan for a waiter
    while (j != i && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;                // nobody waiting: free the lock
    else
        waiting[j] = false;          // hand the critical section to process j
    // remainder section
}

Semaphores

Semaphores are integer variables that are used to solve the critical section
problem by means of two atomic operations, wait and signal, which are used
for process synchronization.

The definitions of wait and signal are as follows −

 Wait

The wait operation decrements the value of its argument S if it is
positive. If S is zero or negative, the process busy-waits until S
becomes positive.
wait(S) {
    while (S <= 0);   // busy wait
    S--;
}

 Signal

The signal operation increments the value of its argument S.


signal(S) {
    S++;
}
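As a concrete illustration of wait and signal, here is a small Python sketch using a counting semaphore initialized to 3; acquire corresponds to wait and release to signal. The thread count, pool size, and sleep duration are arbitrary demo values:

```python
import threading, time

pool = threading.Semaphore(3)   # counting semaphore: 3 identical resources
active = 0                      # how many threads currently hold a resource
max_active = 0                  # highest concurrency ever observed
guard = threading.Lock()        # protects the two counters above

def use_resource():
    global active, max_active
    pool.acquire()              # wait(S): blocks while the count is 0
    with guard:
        active += 1
        max_active = max(max_active, active)
    time.sleep(0.01)            # simulate holding the resource
    with guard:
        active -= 1
    pool.release()              # signal(S): increments the count

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(max_active)  # never exceeds 3
```

The semaphore guarantees that at most 3 of the 10 threads hold a resource at any moment, which is exactly the resource-counting use of semaphores described above.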

Types of Semaphores

There are two main types of semaphores i.e. counting semaphores and
binary semaphores. Details about these are given as follows −

 Counting Semaphores

These are semaphores with an unrestricted integer value domain. They
are used to coordinate access to resources, where the semaphore count
is the number of available resources. If resources are added, the
semaphore count is incremented, and if resources are removed, the
count is decremented.

 Binary Semaphores

Binary semaphores are like counting semaphores, but their value is
restricted to 0 and 1. The wait operation succeeds only when the
semaphore is 1, and the signal operation succeeds when the semaphore
is 0. Binary semaphores are sometimes easier to implement than
counting semaphores.

Advantages of Semaphores

Some of the advantages of semaphores are as follows −

 Semaphores allow only one process into the critical section. They
follow the mutual exclusion principle strictly and are more efficient
than some other methods of synchronization.
 When semaphores are implemented with a waiting queue rather than
busy waiting, no processor time is wasted in repeatedly checking
whether a process may access the critical section.
 Semaphores are implemented in the machine-independent code of the
microkernel, so they are machine independent.

Disadvantages of Semaphores

Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated, and the wait and signal operations must
be implemented in the correct order to prevent deadlocks.
 Semaphores are impractical for large-scale use, as their use leads to
loss of modularity. This happens because the wait and signal operations
prevent the creation of a structured layout for the system.
 Semaphores may lead to priority inversion, where low-priority
processes access the critical section first and high-priority processes
later.
Banker's Algorithm in Operating
System

In computer systems, the Banker's algorithm is used to avoid deadlock,
so that resources can be safely allocated to each process. The algorithm is
so named because it could be used in a banking system to ensure that the
bank never allocates its available cash in such a way that it can no longer
satisfy the requirements of all of its customers.

In this article, we will discuss the banker's algorithm in detail. But before
that, let us understand a real-world situation analogous to it.

Banker's Algorithm: How Does It Work?

Consider a bank with "n" account holders and total cash "M". When a
customer applies for a loan, the bank first subtracts the loan amount from
the total cash available and approves the loan only if the remaining cash is
still enough to satisfy the needs of all of its customers. This practice lets
the bank manage and operate all of its banking functions without any
restriction.

The banker's algorithm works in the same way in a computer system's
operating system. When a new process enters the system, it must declare
the maximum number of instances of each resource type that it may require
during its execution. This number must not exceed the total number of
resources in the system. Later, when the process requests resources, the
system must determine whether granting the request will leave the system
in a safe state. If so, the process is allocated the requested resources;
otherwise it must wait until some other process releases enough resources.

By following this practice, the banker's algorithm avoids deadlock and
allocates resources safely. For this reason it is classed as a deadlock
avoidance algorithm in the operating system.
Data Structures Used in Banker’s Algorithm

In a computer system, suppose the number of processes is "n" and the
number of resource types is "m". The following data structures are required
to implement the banker's algorithm −

 Available − A vector of length "m" that indicates the number of
available resources of each type in the system. If Available[j] = k, then
"k" instances of resource type Rj are available in the system.

 Max − An [n × m] matrix that defines the maximum resource demand
of each process. If Max[i, j] = k, then process Pi may request at most
k instances of resource type Rj.

 Allocation − Also an [n × m] matrix, defining the number of resources
of each type currently allocated to each process. If Allocation[i, j] = k,
then process Pi is currently allocated "k" instances of resource type Rj.

 Need − An [n × m] matrix that represents the remaining resource
need of each process. If Need[i, j] = k, then process Pi may need "k"
more instances of resource type Rj to accomplish its assigned task.

 Finish − A vector of length "n". It holds a Boolean value (TRUE or
FALSE) for each process, indicating whether the process has been
allocated all the resources it needs and has released them after
finishing its assigned work.

Also, it is to be noted that,

Need [i, j] = Max [i, j] – Allocation [i, j]


The banker's algorithm is a combination of two algorithms, namely the
Safety Algorithm and the Resource-Request Algorithm. Together, these two
algorithms control the processes and avoid deadlock in the system.

Safety Algorithm

The safety algorithm determines whether or not the system is in a safe
state. It proceeds as follows −
Step 1

 Consider two vectors named Work and Finish of lengths "m" and "n"
respectively.
 Initialize: Work = Available
 Finish[i] = false for i = 0, 1, ..., n − 1.

Step 2

 Find an "i" such that both Finish[i] = false and Needi ≤ Work.
 If no such "i" exists, go to step 4.

Step 3

 Work = Work + Allocationi
 Finish[i] = true
 Go to step 2.

Step 4

 If Finish[i] = true for all "i", the system is in a safe state; otherwise
it is in an unsafe state.
 This algorithm takes O(m × n²) operations to decide whether the state
is safe or not.
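The steps above can be sketched in Python. The matrices below are illustrative sample data (the classic five-process, three-resource textbook example), not values taken from this text:

```python
def is_safe(available, allocation, need):
    """Safety Algorithm: returns (is the state safe?, a safe sequence)."""
    n, m = len(allocation), len(available)
    work = available[:]            # Step 1: Work = Available
    finish = [False] * n           #         Finish[i] = false for all i
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):         # Step 2: find i with Finish[i] = false, Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m): # Step 3: Work = Work + Allocation_i
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence   # Step 4: safe iff every process can finish

# Illustrative data: Allocation, Max, and Available for processes P0..P4.
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
available  = [3,3,2]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]

safe, seq = is_safe(available, allocation, need)
print(safe, seq)  # True [1, 3, 4, 0, 2]
```

The returned sequence is one safe order in which all processes can run to completion; there may be others.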

Now, let us discuss the Resource Request Algorithm.

Resource Request Algorithm

This algorithm determines how the system behaves when a process makes
a request for resources.

Let us consider a request vector Requesti for a process Pi. If Requesti[j] = k,
then process Pi wants k instances of resource type Rj.

When a process Pi requests for resources, the following sequence is followed



Step 1

If Requesti ≤ Needi, go to step 2. Otherwise, raise an error, since the
process has exceeded its maximum claim for the resource.

Step 2

If Requesti ≤ Available, go to step 3. Otherwise, process Pi must wait until
the resources are available.

Step 3

Have the system pretend to allocate the requested resources to process Pi
by modifying the state as follows −

Available = Available − Requesti
Allocationi = Allocationi + Requesti
Needi = Needi − Requesti

If the resulting resource-allocation state is safe, the requested resources
are allocated to process Pi. If the new state is unsafe, process Pi must wait
for Requesti and the old resource-allocation state is restored.
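The three steps can be combined with a safety check into one sketch. As before, the matrices are illustrative sample data, and the request shown (P1 asking for one instance of the first resource and two of the third) is a demo value:

```python
def is_safe(available, allocation, need):
    # Safety Algorithm: can every process still run to completion?
    work = available[:]
    finish = [False] * len(allocation)
    while True:
        for i, done in enumerate(finish):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                break
        else:
            return all(finish)

def request_resources(i, request, available, allocation, need):
    # Step 1: the request must not exceed the process's declared need.
    if any(request[j] > need[i][j] for j in range(len(request))):
        raise ValueError("process exceeded its maximum claim")
    # Step 2: the resources must currently be available.
    if any(request[j] > available[j] for j in range(len(request))):
        return False                      # Pi must wait
    # Step 3: pretend to allocate, then test safety.
    for j in range(len(request)):
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if is_safe(available, allocation, need):
        return True                       # safe: the request is granted
    for j in range(len(request)):         # unsafe: restore the old state
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return False

# Illustrative data; P1 requests (1, 0, 2).
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
available  = [3,3,2]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]

granted = request_resources(1, [1, 0, 2], available, allocation, need)
print(granted, available)  # True [2, 3, 0]
```

Because the pretend-allocation leaves the system in a safe state, the request is granted and the state changes stick; an unsafe request would have been rolled back.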

The Sleeping Barber problem

The Sleeping Barber problem is a classic problem in process
synchronization, used to illustrate synchronization issues that can arise in
a concurrent system. The problem is as follows:
There is a barber shop with one barber and a number of chairs for waiting
customers. Customers arrive at random times and if there is an available
chair, they take a seat and wait for the barber to become available. If there
are no chairs available, the customer leaves. When the barber finishes with
a customer, he checks if there are any waiting customers. If there are, he
begins cutting the hair of the next customer in the queue. If there are no
customers waiting, he goes to sleep.
The problem is to write a program that coordinates the actions of the
customers and the barber in a way that avoids synchronization problems,
such as deadlock or starvation.
One solution to the Sleeping Barber problem is to use semaphores to
coordinate access to the waiting chairs and the barber chair. The solution
involves the following steps:
1. Initialize two semaphores: one for the number of waiting chairs and one
for the barber chair. The waiting-chairs semaphore is initialized to the
number of chairs, and the barber-chair semaphore is initialized to zero.
2. Customers acquire the waiting-chairs semaphore before taking a seat in
the waiting room. If no chairs are available, they leave.
3. When the barber finishes cutting a customer's hair, he releases the
barber-chair semaphore and checks whether any customers are waiting. If
there are, he acquires the barber-chair semaphore and begins cutting the
hair of the next customer in the queue.
4. The barber waits on the barber-chair semaphore if no customers are
waiting.
The solution ensures that the barber never cuts the hair of more than one
customer at a time, and that customers wait if the barber is busy. It also
ensures that the barber goes to sleep if there are no customers waiting.
However, there are variations of the problem that can require more
complex synchronization mechanisms to avoid synchronization issues. For
example, if multiple barbers are employed, a more complex mechanism
may be needed to ensure that they do not interfere with each other.
The analogy is based on a hypothetical barber shop with one barber, one
barber chair, and n chairs for customers to wait on if there are any.
 If there is no customer, then the barber sleeps in his own chair.
 When a customer arrives, he has to wake up the barber.
 If there are many customers and the barber is cutting a customer’s hair,
then the remaining customers either wait if there are empty chairs in the
waiting room or they leave if no chairs are empty.
Solution: The solution to this problem uses three semaphores. The first,
customers, counts the number of customers present in the waiting room
(the customer in the barber chair is not included, because he is not
waiting). The second, barber (0 or 1), tells whether the barber is idle or
working. The third is a mutex, used to provide the mutual exclusion
required for the processes to execute correctly. The solution also keeps a
count of the customers waiting in the waiting room; if this count equals
the number of chairs, an arriving customer leaves the barbershop.
When the barber shows up in the morning, he executes the barber
procedure, which blocks on the semaphore customers because it is
initially 0; the barber then goes to sleep until the first customer arrives.
When a customer arrives, he executes the customer procedure, first
acquiring the mutex to enter the critical region; if another customer enters
shortly after, the second one cannot do anything until the first has
released the mutex. The customer then checks the chairs in the waiting
room: if the number of waiting customers is less than the number of
chairs he sits down, otherwise he releases the mutex and leaves. If a chair
is available, the customer sits in the waiting room, increments the waiting
count, and signals the customers semaphore, which wakes up the barber
if he is sleeping. At this point both the customer and the barber are
awake, and the barber is ready to give that person a haircut. When the
haircut is over, the customer exits the procedure, and if there are no
customers in the waiting room, the barber goes back to sleep.
Semaphore Customers = 0;
Semaphore Barber = 0;
Mutex Seats = 1;
int FreeSeats = N;

Barber {
    while (true) {
        /* wait for a customer (sleep if there are none). */
        down(Customers);

        /* mutex to protect the number of available seats. */
        down(Seats);

        /* a chair gets free. */
        FreeSeats++;

        /* bring customer for haircut. */
        up(Barber);

        /* release the mutex on the chairs. */
        up(Seats);

        /* barber is cutting hair. */
    }
}

Customer {
    while (true) {
        /* protect the seats so only one customer at a time
           manipulates the free-seat count. */
        down(Seats);
        if (FreeSeats > 0) {

            /* sit down. */
            FreeSeats--;

            /* notify the barber. */
            up(Customers);

            /* release the lock. */
            up(Seats);

            /* wait in the waiting room if the barber is busy. */
            down(Barber);

            /* customer is having a haircut. */
        } else {
            /* release the lock. */
            up(Seats);

            /* customer leaves. */
        }
    }
}
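A runnable Python version of this pseudocode, using semaphores from the threading module. The chair and customer counts are demo values chosen so that every customer finds a chair, and the barber serves a fixed number of customers instead of looping forever:

```python
import threading

CHAIRS = 5
N_CUSTOMERS = 5                      # chairs >= customers, so nobody is turned away

customers = threading.Semaphore(0)   # counts waiting customers (Customers)
barber = threading.Semaphore(0)      # barber-chair readiness (Barber)
seats = threading.Lock()             # mutex protecting free_seats (Seats)
free_seats = CHAIRS
haircuts = 0
turned_away = 0

def barber_thread():
    global free_seats, haircuts
    for _ in range(N_CUSTOMERS):
        customers.acquire()          # sleep until a customer arrives
        with seats:
            free_seats += 1          # the customer vacates a waiting chair
        barber.release()             # invite the customer to the barber chair
        haircuts += 1                # cutting hair

def customer_thread():
    global free_seats, turned_away
    with seats:
        if free_seats > 0:
            free_seats -= 1          # take a waiting chair
            customers.release()      # wake the barber if he is asleep
            seated = True
        else:
            seated = False
    if seated:
        barber.acquire()             # wait for a turn in the barber chair
    else:
        turned_away += 1             # no chair free: leave

b = threading.Thread(target=barber_thread)
cs = [threading.Thread(target=customer_thread) for _ in range(N_CUSTOMERS)]
b.start()
for c in cs: c.start()
for c in cs: c.join()
b.join()
print(haircuts, turned_away)  # 5 0
```

With fewer chairs than simultaneous arrivals, some customers would find free_seats == 0 and increment turned_away instead, matching the "customer leaves" branch of the pseudocode.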

Critical Regions in Operating System

In an operating system, a critical region refers to a section of code or a data
structure that must be accessed exclusively by one process or thread at a time.
Critical regions are used to prevent concurrent access to shared resources, such
as variables, data structures, or devices, in order to maintain data integrity and
avoid race conditions.
The concept of critical regions is closely tied to the need for synchronization
and mutual exclusion in multi-threaded or multi-process environments. Without
proper synchronization mechanisms, concurrent access to shared resources
can lead to data inconsistencies, unpredictable behavior, and errors.
To enforce mutual exclusion and protect critical regions, operating systems
provide synchronization mechanisms such as locks, semaphores, or monitors.
These mechanisms ensure that only one process or thread can access the
critical region at any given time, while other processes or threads are
prevented from entering until the current occupant releases the lock.

Critical Region Characteristics and Requirements


Following are the characteristics and requirements for critical regions in
an operating system.

1. Mutual Exclusion
Only one process or thread can access the critical region at a time.
This ensures that concurrent access does not result in data corruption or
inconsistent states.

2. Atomicity
The execution of code within a critical region is treated as an
indivisible unit of execution. That is, once a process or thread enters a
critical region, it completes its execution without interruption.

3. Synchronization
Processes or threads waiting to enter a critical region are synchronized
to prevent simultaneous access. They typically employ synchronization
primitives, such as locks or semaphores, to control access and enforce
mutual exclusion.

4. Minimal Time Spent in Critical Regions
It is preferable to minimize the time spent inside critical regions, to
reduce the potential for contention and improve overall system
performance. Lengthy execution within critical regions increases the
waiting time for other processes or threads.
Conclusion
Operating systems often provide high-level abstractions and libraries to
facilitate the management of critical regions, such as mutexes,
condition variables, or monitors. These abstractions encapsulate
the necessary synchronization mechanisms and offer easy-to-use
interfaces for programmers to protect shared resources and ensure correct
concurrent behavior.
Properly identifying and protecting critical regions is essential in operating
system design to preserve data integrity, avoid race conditions, and allow
safe concurrent execution. By employing synchronization mechanisms
and following best practices for critical region management, operating
systems can ensure the reliable and efficient execution of
programs in multi-threaded or multi-process environments.

Monitors in Process Synchronization


Monitors are a higher-level synchronization construct that simplifies process
synchronization by providing a high-level abstraction for data access and
synchronization. Monitors are implemented as programming language
constructs, typically in object-oriented languages, and provide mutual
exclusion, condition variables, and data encapsulation in a single construct.
1. A monitor is essentially a module that encapsulates a shared resource and
provides access to that resource through a set of procedures. The
procedures provided by a monitor ensure that only one process can
access the shared resource at any given time, and that processes waiting
for the resource are suspended until it becomes available.
2. Monitors are used to simplify the implementation of concurrent programs
by providing a higher-level abstraction that hides the details of
synchronization. Monitors provide a structured way of sharing data and
synchronization information, and eliminate the need for complex
synchronization primitives such as semaphores and locks.
3. The key advantage of using monitors for process synchronization is that
they provide a simple, high-level abstraction that can be used to
implement complex concurrent systems. Monitors also ensure that
synchronization is encapsulated within the module, making it easier to
reason about the correctness of the system.
However, monitors have some limitations. For example, they can be less
efficient than lower-level synchronization primitives such as semaphores and
locks, as they may involve additional overhead due to their higher-level
abstraction. Additionally, monitors may not be suitable for all types of
synchronization problems, and in some cases, lower-level primitives may be
required for optimal performance.
The monitor is one of the ways to achieve process synchronization.
Monitors are supported by programming languages to achieve mutual
exclusion between processes. For example, Java's synchronized methods,
together with its wait() and notify() constructs.
1. It is the collection of condition variables and procedures combined
together in a special kind of module or a package.
2. The processes running outside the monitor can’t access the internal
variable of the monitor but can call procedures of the monitor.
3. Only one process at a time can execute code inside monitors.
Condition Variables: Two different operations are performed on the
condition variables of a monitor:
1. wait
2. signal

Suppose we have two condition variables:
condition x, y;    // declaring condition variables

Wait operation, x.wait(): A process performing a wait operation on a
condition variable is suspended. The suspended processes are placed in
the blocked queue of that condition variable.
Note: Each condition variable has its own blocked queue.

Signal operation, x.signal(): When a process performs a signal operation
on a condition variable, one of the blocked processes is given a chance to
run:
if (x's blocked queue is empty)
    // ignore the signal
else
    // resume a process from the blocked queue

Advantages of Monitors: Monitors have the advantage of making parallel
programming easier and less error-prone than techniques such as
semaphores.

Disadvantages of Monitors: Monitors have to be implemented as part of
the programming language. The compiler must generate code for them,
which gives the compiler the additional burden of having to know what
operating system facilities are available to control access to critical
sections in concurrent processes.
Some languages that do support monitors are Java, C#, Visual Basic, Ada
and Concurrent Euclid.
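As an illustration of these ideas, here is a Python sketch of a monitor-style bounded buffer: a lock provides the monitor's mutual exclusion, and two condition variables play the role of x.wait() and x.signal(). The class name and buffer size are illustrative choices, not from the text:

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: one lock for mutual exclusion,
    two condition variables to coordinate producers and consumers."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:                      # enter the monitor
            while len(self.items) >= self.capacity:
                self.not_full.wait()         # x.wait(): block on the condition
            self.items.append(item)
            self.not_empty.notify()          # x.signal(): wake one blocked process

    def get(self):
        with self.lock:                      # enter the monitor
            while not self.items:
                self.not_empty.wait()
            item = self.items.pop(0)
            self.not_full.notify()
            return item

buf = BoundedBuffer(2)
results = []
consumer = threading.Thread(target=lambda: results.extend(buf.get() for _ in range(5)))
consumer.start()
for i in range(5):
    buf.put(i)          # producer blocks automatically whenever the buffer is full
consumer.join()
print(results)  # [0, 1, 2, 3, 4]
```

Callers never touch items, lock, or the condition variables directly, which is exactly the encapsulation a monitor provides: only the put and get procedures can run inside, and only one at a time.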
Inter-Process Communication
Definition

"Inter-process communication is used for exchanging useful information between numerous


threads in one or more processes (or programs)."


Role of Synchronization in Inter Process Communication

Synchronization is one of the essential parts of inter-process communication.
Typically it is provided by the inter-process communication control
mechanisms, but sometimes it can also be handled by the communicating
processes themselves.

The following methods are used to provide synchronization:

1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock

Mutual Exclusion:-

It is generally required that only one process or thread can enter the critical
section at a time. This helps in synchronization and creates a stable state,
avoiding race conditions.

Semaphore:-
Semaphore is a type of variable that usually controls the access to the shared resources by
several processes. Semaphore is further divided into two types which are as follows:

1. Binary Semaphore
2. Counting Semaphore

Barrier:-

A barrier does not allow an individual process to proceed until all
participating processes reach it. Barriers are used by many parallel
languages, and collective routines impose them.

Spinlock:-

A spinlock is a type of lock, as its name implies. A process trying to acquire
a spinlock waits in a loop ("spins") while repeatedly checking whether the
lock is available. This is known as busy waiting because, even though the
process is active, it does not perform any useful work.

Approaches to Interprocess Communication

These are a few different approaches to inter-process communication:


1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO

To understand them in more detail, we will discuss each of them individually.

Pipe:-
A pipe is a data channel that is unidirectional in nature, meaning the data can
move in only a single direction at a time. Still, one can use two channels of this type so
that processes can both send and receive data. Pipes typically use the standard methods for
input and output, and they are available on all POSIX systems as well as on different
versions of the Windows operating system.
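A minimal POSIX-style pipe sketch using `os.pipe`; for brevity a thread plays the role of the second party, whereas in practice the two ends are usually split between a parent and a child process:

```python
import os
import threading

# A pipe is a unidirectional channel: one read end, one write end.
read_fd, write_fd = os.pipe()

def producer():
    os.write(write_fd, b"hello from the writer")
    os.close(write_fd)           # closing signals end-of-data to the reader

t = threading.Thread(target=producer)
t.start()
data = os.read(read_fd, 1024)   # blocks until data is available
t.join()
os.close(read_fd)
print(data.decode())  # hello from the writer
```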

Shared Memory:-
Shared memory is memory that can be used or accessed by multiple processes
simultaneously. It is primarily used so that the processes can communicate with each
other. Shared memory is supported by almost all POSIX and Windows operating
systems.
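A sketch using Python's `multiprocessing.shared_memory` (Python 3.8+): one handle creates a named block, and a second handle attaches to it by name. Here both handles live in one process for brevity; in a real scenario the second handle would be opened by another process:

```python
from multiprocessing import shared_memory

# Create a named block of shared memory visible to other processes on the host.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# A second handle attaches to the same block by name.
other = shared_memory.SharedMemory(name=shm.name)
seen = bytes(other.buf[:5])
print(seen)  # b'hello'

other.close()
shm.close()
shm.unlink()   # release the block once no process needs it
```

Because both parties write to the same bytes, access must be synchronized (e.g. with a semaphore) to prevent conflicts, exactly as the paragraph above notes.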

Message Queue:-
In general, several different processes are allowed to read messages from and write
messages to the message queue. Messages are stored in the queue until their recipients
retrieve them. In short, the message queue is very helpful for inter-process communication
and is supported by all major operating systems.

To understand the concepts of message queue and shared memory in more detail, take a
look at the diagram given below:
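The queue behavior described above can be sketched with `multiprocessing.Queue`; the "fork" start method is assumed (POSIX), and the message contents are illustrative:

```python
import multiprocessing as mp

def worker(q):
    # The message waits in the queue until the recipient retrieves it.
    q.put({"sender": "child", "body": "task done"})

# "fork" start method assumed (available on POSIX systems).
ctx = mp.get_context("fork")
q = ctx.Queue()
p = ctx.Process(target=worker, args=(q,))
p.start()
msg = q.get()        # blocks until a message arrives
p.join()
print(msg["body"])   # task done
```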
Message Passing:-
Message passing is a mechanism that allows processes to synchronize and communicate
with each other. Using message passing, processes can communicate without resorting to
shared variables.

Usually, the inter-process communication mechanism provides two operations:

o send (message)
o receive (message)

Note: The size of the message can be fixed or variable.
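The send/receive pair maps directly onto `multiprocessing.Pipe`, whose connection objects expose `send()` and `recv()`; the "fork" start method is assumed (POSIX), and the message strings are illustrative:

```python
import multiprocessing as mp

def child(conn):
    conn.send("ping")            # send (message)
    reply = conn.recv()          # receive (message)
    conn.send(reply.upper())
    conn.close()

ctx = mp.get_context("fork")     # POSIX fork start method assumed
parent_end, child_end = ctx.Pipe()   # a duplex connection
p = ctx.Process(target=child, args=(child_end,))
p.start()
first = parent_end.recv()        # "ping"
parent_end.send("pong")
second = parent_end.recv()       # "PONG"
p.join()
print(first, second)
```

Note that no variable is shared here: every value crosses the connection as a message, which is exactly what distinguishes message passing from shared memory.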

Direct Communication:-
In direct communication, a link is created or established between each pair of
communicating processes. Between every pair of communicating processes, exactly one
link exists.

Indirect Communication
Indirect communication can only be established when processes share a common
mailbox. Each pair of processes may share several communication links, and these
shared links can be unidirectional or bidirectional.
FIFO:-
A FIFO (named pipe) supports communication between two unrelated processes. On some
systems it can also be considered full-duplex, meaning that one process can communicate
with another process and vice versa.
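A FIFO exists as a special file in the file system, so unrelated processes can find it by path. This POSIX-only sketch uses `os.mkfifo`; a thread stands in for the second process, and the file name is arbitrary:

```python
import os
import tempfile
import threading

# A FIFO is a special file; unrelated processes open it by pathname.
path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
os.mkfifo(path)

def writer():
    with open(path, "w") as f:   # open-for-write blocks until a reader opens
        f.write("via fifo")

t = threading.Thread(target=writer)
t.start()
with open(path) as f:            # open-for-read blocks until a writer opens
    text = f.read()
t.join()
os.unlink(path)
print(text)  # via fifo
```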

Some other approaches

o Socket:-

A socket acts as an endpoint for sending or receiving data over a network. It works both for
data sent between processes on the same computer and for data sent between different
computers on the same network. Hence, it is used by several types of operating systems.
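A sketch of the two socket endpoints using a TCP connection over the loopback interface (both ends run in one script here, with a thread standing in for the server process; port 0 asks the OS for any free port):

```python
import socket
import threading

# A socket is an endpoint; here both endpoints live on one machine via loopback.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024).upper())   # echo the request back, upper-cased
    conn.close()

t = threading.Thread(target=serve)
t.start()
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"socket demo")
reply = client.recv(1024)
t.join()
client.close()
server.close()
print(reply)  # b'SOCKET DEMO'
```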

o File:-

A file is a data record or a document stored on disk that can be retrieved on
demand by the file server. Importantly, several processes can access the same
file as required or needed.

o Signal:-

As the name implies, signals are used in inter-process communication in a
minimal way. Typically, they are system messages sent by one process to
another. They are therefore not used for sending data but for sending commands
between processes.
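A POSIX-only sketch: a handler is installed for SIGUSR1, and the process signals itself with `os.kill` (in practice the sender would be a different process). The signal carries only its number — a command, not a data payload:

```python
import os
import signal

events = []

def handler(signum, frame):
    events.append(signum)        # the signal delivers a command, not data

signal.signal(signal.SIGUSR1, handler)   # install the handler
os.kill(os.getpid(), signal.SIGUSR1)     # one process signals another (here: itself)

while not events:                # wait for asynchronous delivery of the signal
    pass
print(events[0] == signal.SIGUSR1)  # True
```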

Why do we need inter-process communication?

There are numerous reasons to use inter-process communication for sharing data. Some
of the most important ones are given below:

o It supports modularity
o Computational speedup
o Privilege separation
o Convenience
o It helps processes communicate with each other and synchronize their
actions.
IPC between processes on a single system:

Inter-Process Communication (IPC) in an operating system refers to the mechanisms and
techniques that enable processes to communicate and synchronize with each other. On a
single system, there are several ways processes can achieve IPC:

1. Pipes: A pipe is a one-way communication channel. It allows the output of one process to be
connected directly to the input of another process. Pipes are typically used for
communication between a parent process and its child process.
2. Named Pipes (FIFOs): Similar to pipes, FIFOs or named pipes allow communication
between unrelated processes. Unlike pipes, FIFOs exist as special files in the file system and
can be accessed by multiple processes for communication.
3. Shared Memory: This technique involves creating a shared area of memory that multiple
processes can access. This allows processes to communicate by reading and writing to the
same memory space. It's a fast and efficient way of sharing data but requires careful
synchronization to prevent conflicts.
4. Message Queues: Message queues allow processes to communicate by sending and
receiving messages through a queue data structure. Messages can have a predefined
format and are stored in a queue until they are received by the intended process.
5. Sockets: In the context of IPC on a single system, UNIX domain sockets can be used. They
allow communication between processes on the same system, functioning similarly to
network sockets but without the overhead of network communication.
6. Signals: Processes can use signals to notify each other about specific events or to handle
interrupts. Signals can be used for simple forms of IPC, such as process termination
notification.
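For same-host sockets (item 5 above), Python's `socket.socketpair()` returns two already-connected UNIX-domain sockets — a common way to wire up a parent and child around `fork()`. A minimal sketch, with both ends in one process for brevity:

```python
import socket

# socketpair() returns two connected UNIX-domain sockets: no network
# stack involved, just kernel-mediated IPC on one machine.
a, b = socket.socketpair()
a.sendall(b"same-host IPC")
data = b.recv(1024)
a.close()
b.close()
print(data)  # b'same-host IPC'
```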

Each method has its own advantages and limitations. The choice of IPC mechanism depends
on factors such as the nature of communication, data size, synchronization requirements,
and the relationship between the communicating processes.

IPC between processes on a different system:
Inter-Process Communication (IPC) between processes on different systems involves
establishing communication channels or mechanisms that allow processes running on
separate systems to exchange data and information. There are several methods for
achieving IPC between processes on different systems:

1. Sockets: Using network sockets (like TCP/IP or UDP) is a common way for processes on
different systems to communicate. One process acts as a server, listening for connections,
while the other acts as a client, initiating the connection. This allows data to be sent and
received between the processes over the network.
2. Message Queues: Message queues can be implemented using middleware like Message
Queuing Telemetry Transport (MQTT), RabbitMQ, or ZeroMQ. These systems facilitate
asynchronous communication between processes on different systems by allowing one
process to send messages to a queue, which can be received by another process.
3. Remote Procedure Calls (RPC): RPC mechanisms like gRPC or XML-RPC enable processes
on different systems to call functions or procedures located on remote systems as if they
were local. These systems encapsulate the communication details and make it appear as
though the function call is local to the caller.
4. Shared Memory: While trickier to implement across different systems due to differences in
memory addressing, shared memory can still be achieved using techniques like memory-
mapped files or using a distributed file system that allows different systems to access
shared files.
5. Distributed Computing Frameworks: Systems like Hadoop, Spark, or MPI (Message
Passing Interface) are designed explicitly for distributed computing and allow processes
running on different systems to collaborate and share data for computation.
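As an RPC sketch (item 3 above), the Python standard library's XML-RPC modules are enough to show the idea: the client calls `add()` as if it were a local function, while the call actually crosses an HTTP connection. Here the "remote" server runs on localhost in a helper thread; the function name and arguments are illustrative:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Minimal XML-RPC sketch: the call looks local but travels over HTTP.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]

t = threading.Thread(target=server.handle_request)  # serve exactly one request
t.start()
result = ServerProxy(f"http://127.0.0.1:{port}").add(2, 3)
t.join()
server.server_close()
print(result)  # 5
```

The proxy hides serialization and transport, which is exactly the encapsulation the paragraph above describes; gRPC plays the same role with a binary protocol and generated stubs.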

When implementing IPC between processes on different systems, it's essential to consider
aspects like network security, latency, reliability, and serialization/deserialization of data as
data is transmitted between different machines.

Additionally, the choice of IPC mechanism depends on factors such as the nature of the
application, the level of required communication, and the systems' capabilities.
