Note - DCS30102 - Module 3

BRAINWARE UNIVERSITY

CLASS NOTES [Operating System]

What is Inter Process Communication?


Inter Process Communication (IPC) is a mechanism provided by the operating system whose goal is to allow communication between several processes. In short, it allows one process to let another process know that some event has occurred, or to exchange data with it.

Definition
"Inter-process communication is used for exchanging useful information between numerous
threads in one or more processes (or programs)."

Role of Synchronization in Inter Process Communication


Synchronization is an essential part of inter-process communication. Typically it is provided by the inter-process communication control mechanism, but sometimes it is handled by the communicating processes themselves.

The following methods are used to provide synchronization:

1. Mutual Exclusion

2. Semaphore

3. Barrier

4. Spinlock

Mutual Exclusion:-

Mutual exclusion requires that only one process thread enter the critical section at a time. This helps in synchronization and creates a stable state by avoiding race conditions.

Semaphore:-

Semaphore is a type of variable that usually controls the access to the shared resources by several
processes. Semaphore is further divided into two types which are as follows:

1. Binary Semaphore

2. Counting Semaphore

Barrier:-

A barrier does not allow an individual process to proceed until all participating processes reach it. It is used by many parallel languages, and collective routines impose barriers.

Spinlock:-

A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock waits in a loop, repeatedly checking whether the lock is available. This is known as busy waiting because, even though the process is active, it does not perform any useful work while spinning.
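As an illustrative sketch (not part of the original notes), busy waiting can be imitated in Python by spinning on a non-blocking acquire; here `threading.Lock.acquire(blocking=False)` stands in for the hardware's atomic test-and-set, which is an assumption of this demo:

```python
import threading

class SpinLock:
    """Minimal spinlock sketch: spins until the lock becomes free."""
    def __init__(self):
        # acquire(blocking=False) plays the role of the atomic test-and-set
        self._flag = threading.Lock()

    def lock(self):
        while not self._flag.acquire(blocking=False):
            pass  # busy waiting: the thread stays active but does no useful work

    def unlock(self):
        self._flag.release()

counter = 0
sl = SpinLock()

def worker():
    global counter
    for _ in range(10_000):
        sl.lock()
        counter += 1          # critical section
        sl.unlock()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no increments were lost
```

Real spinlocks only pay off when the lock is held briefly; otherwise the spinning wastes CPU time that a blocking lock would give back to the scheduler.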

Approaches to Interprocess Communication


These are a few different approaches for Inter-Process Communication:

1. Pipes

2. Shared Memory

3. Message Queue

4. Direct Communication

5. Indirect communication

6. Message Passing

7. FIFO
To understand them in more detail, we will discuss each of them individually.

Pipe:-

A pipe is a data channel that is unidirectional in nature, meaning data can move in only a single direction at a time. Two pipes can be used together so that two processes can both send and receive data. A pipe typically uses the standard input and output methods. Pipes are used on all POSIX systems and in different versions of the Windows operating system as well.
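As a minimal sketch of the idea (assuming a POSIX system, since it uses `os.fork`), a parent can create a pipe, fork a child that writes into one end, and read the data from the other:

```python
import os

# A pipe is unidirectional: r is the read end, w is the write end.
r, w = os.pipe()

pid = os.fork()              # POSIX-only: the child inherits both ends
if pid == 0:                 # child process: the writer
    os.close(r)              # close the unused read end
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:                        # parent process: the reader
    os.close(w)              # close the unused write end
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)
    print(data.decode())     # prints "hello from child"
```

For communication in both directions, two such pipes would be created, one per direction.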

Shared Memory:-

Shared memory is memory that can be used or accessed by multiple processes simultaneously. It is primarily used so that the processes can communicate with each other. Shared memory is supported by almost all POSIX and Windows operating systems.

Message Queue:-

Several processes are allowed to read and write messages to a message queue. The messages are stored in the queue until their recipients retrieve them. In short, the message queue is very helpful in inter-process communication and is supported by most operating systems.

Message Passing:-

Message passing is a mechanism that allows processes to synchronize and communicate with each other without resorting to shared variables.

Usually, the inter-process communication mechanism provides two operations:

o send (message)

o receive (message)

Note: The size of the message can be fixed or variable.

Direct Communication:-

In this type of communication, a link is established between each pair of communicating processes, and every pair of communicating processes has exactly one link.

Indirect Communication

Indirect communication is established when processes share a common mailbox; each pair of such processes may share several communication links. These shared links can be unidirectional or bi-directional.

FIFO:-

A FIFO (named pipe) supports general communication between two unrelated processes. It can also be used full-duplex, meaning that each process can communicate with the other in both directions.
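A sketch of a FIFO (assuming a POSIX system, since `os.mkfifo` is POSIX-only). The FIFO appears under a filesystem path, which is what lets unrelated processes find it; here a single process opens both ends, with the read end opened non-blocking first so that opening the write end does not block waiting for a reader:

```python
import os
import tempfile

# A FIFO appears as a file name, so two unrelated processes can find it.
path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
os.mkfifo(path)

# Open the read end non-blocking first, so that opening the
# write end does not block waiting for a reader.
rfd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
wfd = os.open(path, os.O_WRONLY)
os.write(wfd, b"via fifo")
data = os.read(rfd, 1024)
os.close(wfd)
os.close(rfd)
os.remove(path)
print(data.decode())  # prints "via fifo"
```

In normal use, two separate programs would each open the FIFO by its path, one for reading and one for writing.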

Some other different approaches

o Socket:-
A socket acts as an endpoint for sending or receiving data over a network. It works both for data sent between processes on the same computer and for data sent between different computers on the same network. Hence, it is used by several types of operating systems.

o File:-
A file is a data record or a document stored on disk that can be acquired on demand by the file server. Several processes can access the same file as required.

o Signal:-
As the name implies, signals are used in inter-process communication in a minimal way. They are system messages sent from one process to another. They are generally not used for transferring data but for sending remote commands between processes.

Why do we need interprocess communication?


There are numerous reasons for using inter-process communication to share data. Some of the most important are:

o Modularity

o Computational speedup

o Privilege separation

o Convenience

o It helps cooperating processes communicate with each other and synchronize their actions.

Process Synchronization in OS (Operating System)


When two or more processes cooperate with each other, their order of execution must be preserved, otherwise there can be conflicts in their execution and incorrect outputs can be produced.

A cooperative process is one which can affect, or be affected by, the execution of another process. Such processes need to be synchronized so that their order of execution can be guaranteed.

The procedure involved in preserving the appropriate order of execution of cooperative processes is
known as Process Synchronization. There are various synchronization mechanisms that are used to
synchronize the processes.

Race Condition
A race condition typically occurs when two or more threads read, write, and possibly make decisions based on memory that they are accessing concurrently.

Critical Section
The regions of a program that access shared resources and may cause race conditions are called critical sections. To avoid race conditions among processes, we need to ensure that only one process at a time executes within the critical section.

Critical Section Problem in OS (Operating System)


The critical section is the part of a program which accesses shared resources. The resource may be any resource in a computer, such as a memory location, a data structure, the CPU, or an I/O device.

Because the critical section must not be executed by more than one process at the same time, the operating system faces difficulty in allowing and disallowing processes to enter it.

The critical section problem is to design a set of protocols which ensure that a race condition among the processes never arises.

Requirements of Synchronization mechanisms

Primary

1. Mutual Exclusion

Our solution must provide mutual exclusion. By mutual exclusion we mean that if one process is executing inside the critical section, then no other process may enter it.

2. Progress

Progress means that if one process does not need to execute in the critical section, it should not stop other processes from getting into the critical section.

Secondary

1. Bounded Waiting

We should be able to predict the waiting time for every process to get into the critical section. A process must not wait endlessly to get into the critical section.

2. Architectural Neutrality
Our mechanism must be architecture-neutral: if our solution works on one architecture, it should run on the other ones as well.

Lock Variable
This is the simplest synchronization mechanism: a software mechanism implemented in user mode. It is a busy-waiting solution and can be used for more than two processes.

In this mechanism, a lock variable lock is used. The lock can take two values, 0 or 1. Lock value 0 means that the critical section is vacant, while lock value 1 means that it is occupied.

A process which wants to get into the critical section first checks the value of the lock variable. If it is 0, the process sets the lock to 1 and enters the critical section; otherwise it waits.

The pseudo code of the mechanism looks like the following.

Entry Section →
while (lock != 0);
lock = 1;
// Critical Section
Exit Section →
lock = 0;
If we look at the Pseudo Code, we find that there are three sections in the code. Entry Section, Critical
Section and the exit section.

Initially the value of the lock variable is 0. A process which needs to get into the critical section enters the entry section and checks the condition in the while loop.

The process keeps spinning in the while loop as long as the lock is 1, and proceeds only when the lock becomes 0. Since at the very first time the critical section is vacant, the process enters the critical section after setting the lock variable to 1.

When the process exits from the critical section, then in the exit section, it reassigns the value of lock as
0.
Every Synchronization mechanism is judged on the basis of four conditions.

1. Mutual Exclusion

2. Progress

3. Bounded Waiting

4. Portability (Architectural Neutrality)

Of the four parameters, Mutual Exclusion and Progress must be provided by any solution. Let's analyze this mechanism on the basis of the above mentioned conditions.

Mutual Exclusion

The lock variable mechanism doesn't provide mutual exclusion in some cases. This is best seen from the operating system's point of view, i.e., the assembly code of the program. Let's convert the code into assembly language.

1. Load Lock, R0      ; read the lock into register R0
2. CMP R0, #0         ; is the critical section vacant?
3. JNZ Step 1         ; if not, keep checking
4. Store #1, Lock     ; occupy the critical section
5. Store #0, Lock     ; exit section: release the lock

Let us consider two processes, P1 and P2. P1 wants to execute its critical section and gets into the entry section. Since the value of lock is 0, P1 reads it into its register, but it is preempted by the CPU before it can set the lock to 1.

Now P2 gets scheduled. There is no process in the critical section and the value of the lock variable is still 0. P2 also wants to execute its critical section, so it enters the critical section by setting the lock variable to 1.

Then the CPU changes P1's state from waiting to running. P1 has already checked the value of the lock variable and remembers that it was 0 when it checked. Hence, it also enters the critical section without checking the updated value of the lock variable.

Now we have two processes in the critical section. According to the condition of mutual exclusion, more than one process must not be present in the critical section at the same time. Hence, the lock variable mechanism doesn't guarantee mutual exclusion.
The problem with the lock variable mechanism is that more than one process can see the "vacant" value at the same time, and thus more than one process can enter the critical section. The lock variable therefore doesn't provide mutual exclusion, which is why it cannot be used in general.

Since this method fails at the first requirement, there is no need to examine the other conditions to be fulfilled.
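The failure can be replayed deterministically by splitting the entry section into its two machine steps (read the lock, then set it) and interleaving two processes by hand. This is an illustrative sketch; the names `read_lock` and `set_lock_and_enter` are hypothetical helpers, not part of the notes:

```python
lock = 0
in_critical_section = []

def read_lock():
    return lock              # step 1 of the entry section: read the lock

def set_lock_and_enter(name, observed):
    global lock
    if observed == 0:        # decision based on a possibly stale value
        lock = 1             # step 2: set the lock
        in_critical_section.append(name)

# Interleaving: P1 reads the lock (0) and is then preempted...
p1_saw = read_lock()
# ...P2 runs its whole entry section and enters the critical section...
p2_saw = read_lock()
set_lock_and_enter("P2", p2_saw)
# ...P1 resumes with its stale observation and enters as well.
set_lock_and_enter("P1", p1_saw)

print(in_critical_section)   # ['P2', 'P1']: both are inside at once
```

Both processes end up in the critical section because the read and the write were separately interruptible, which is exactly the gap the TSL instruction closes.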

Test and Set Lock Mechanism

Modification in the assembly code

In the lock variable mechanism, a process sometimes reads the old value of the lock variable and enters the critical section, so more than one process might get into the critical section. However, the code shown in Section 1 below can be replaced with the code shown in Section 2. This doesn't change the algorithm, but it provides mutual exclusion to some extent (though not completely).

In the updated version of the code, the value of Lock is loaded into the local register R0 and then the value of lock is immediately set to 1.

However, in step 3, the previous value of the lock (now stored in R0) is compared with 0. If it is 0, the process simply enters the critical section; otherwise it waits by executing the loop continuously.

The benefit of setting the lock immediately to 1 by the process itself is that, now the process which enters
into the critical section carries the updated value of lock variable that is 1.

In the case when it gets preempted and scheduled again then also it will not enter the critical section
regardless of the current value of the lock variable as it already knows what the updated value of lock
variable is.

Section 1 (original)

1. Load Lock, R0
2. CMP R0, #0
3. JNZ step 1
4. Store #1, Lock

Section 2 (modified)

1. Load Lock, R0
2. Store #1, Lock
3. CMP R0, #0
4. JNZ step 1

TSL Instruction

The solution in the previous segment provides mutual exclusion to some extent, but it does not ensure that mutual exclusion always holds: there is still a possibility of having more than one process in the critical section.

What if the process gets preempted just after executing the first instruction of the assembly code written in Section 2? In that case, it will carry the old value of the lock variable with it and will enter the critical section regardless of the current value of the lock variable. This may put two processes in the critical section at the same time.

To get rid of this problem, we have to make sure that preemption cannot take place after loading the previous value of the lock variable and before setting it to 1; in other words, the first two instructions must be merged into one.

To address this, the hardware provides a special instruction called Test and Set Lock (TSL), which loads the value of the lock variable into the local register R0 and sets the lock to 1 in a single atomic step.

The process which executes TSL first enters the critical section, and no other process can enter until the first one leaves; no process can execute the critical section even if the first process is preempted.

The assembly code of the solution looks like the following.

1. TSL Lock, R0   ; atomically load the lock into R0 and set the lock to 1
2. CMP R0, #0     ; was the critical section vacant?
3. JNZ step 1     ; if not, spin
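As a single-threaded model (an assumption of this sketch: on real hardware the two steps below are one uninterruptible instruction), the essential property of TSL is that it returns the old lock value while unconditionally setting the lock to 1:

```python
lock = 0

def tsl():
    """Model of the TSL instruction: return the old value of the lock
    and set the lock to 1. On real hardware these two steps happen
    as one uninterruptible instruction."""
    global lock
    old = lock
    lock = 1
    return old

history = []
history.append(tsl())   # first process: sees 0 and enters the critical section
history.append(tsl())   # any later attempt sees 1 and must keep spinning
history.append(tsl())   # ...even if the first process was preempted
lock = 0                # exit section: the first process releases the lock
history.append(tsl())   # now another process can enter
print(history)  # [0, 1, 1, 0]
```

Only the caller that observed 0 may enter; everyone else observed 1, which is why preemption between "read" and "set" can no longer admit two processes.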

Let's examine TSL on the basis of the four conditions.

o Mutual Exclusion

Mutual Exclusion is guaranteed in TSL mechanism since a process can never be preempted just
before setting the lock variable. Only one process can see the lock variable as 0 at a particular
time and that's why, the mutual exclusion is guaranteed.

o Progress

According to the definition of progress, a process which doesn't want to enter the critical section should not stop other processes from getting into it. In the TSL mechanism, a process executes the TSL instruction only when it wants to get into the critical section. The value of the lock is 0 whenever no process wants to enter the critical section, hence progress is always guaranteed in TSL.

o Bounded Waiting

Bounded waiting is not guaranteed in TSL. Some process might not get a chance for a long time; we cannot guarantee that a waiting process will definitely get to enter the critical section within a bounded time.

o Architectural Neutrality

TSL doesn't provide architectural neutrality. It depends on the hardware platform: the TSL instruction must be provided by the underlying processor, and some platforms might not provide it. Hence it is not architecture-neutral.

Semaphores in OS (Operating System)


To get rid of the problem of wasted wake-up signals, Dijkstra proposed an approach which involves storing the wake-up calls: instead of giving the wake-up calls directly to the consumer, the producer stores them in a variable, and any consumer can read it whenever it needs to.

A semaphore is the variable which stores the wake-up calls being transferred from producer to consumer. It is a variable on which read, modify, and update happen atomically in kernel mode.

A semaphore cannot be implemented in user mode because a race condition may arise when two or more processes try to access the variable simultaneously. It always needs support from the operating system to be implemented.

According to the demand of the situation, Semaphore can be divided into two categories.

1. Counting Semaphore

2. Binary Semaphore or Mutex

Counting Semaphore
There are scenarios in which more than one process needs to execute in the critical section simultaneously. A counting semaphore is used when we need to allow more than one process in the critical section at the same time.
The programming code of the semaphore implementation is shown below, including the structure of the semaphore and the logic with which entry to and exit from the critical section are performed.

struct Semaphore
{
    int value;      // number of processes that can enter the critical section simultaneously
    queue type L;   // L contains the set of processes which get blocked
}
Down (Semaphore S)
{
    S.value = S.value - 1;   // the semaphore's value is decreased when a new
                             // process tries to enter the critical section
    if (S.value < 0)
    {
        put_process(PCB) in S.L;  // if the value is negative then
                                  // the process goes into the blocked state
        sleep();
    }
    else
        return;
}
Up (Semaphore S)
{
    S.value = S.value + 1;   // the semaphore's value is increased when a
                             // process exits the critical section
    if (S.value <= 0)
    {
        select a process from S.L;  // a non-positive value means some process
                                    // is blocked: wake the first one in the queue
        wake_up();
    }
}

In this mechanism, entry to and exit from the critical section are performed on the basis of the value of the counting semaphore. The value of the counting semaphore at any point of time indicates the maximum number of additional processes that can enter the critical section at that moment.

A process which wants to enter the critical section first decreases the semaphore value by 1 and then checks whether it has become negative. If it has, the process is pushed into the list of blocked processes (L); otherwise it enters the critical section.

When a process exits the critical section, it increases the counting semaphore by 1 and then checks whether the result is still less than or equal to zero. If it is, at least one process is waiting in the blocked state; to ensure bounded waiting, the first process in the list of blocked processes is woken up and enters the critical section.

The processes in the blocked list are woken in the order in which they slept. If the value of the counting semaphore is negative, its magnitude is the number of processes in the blocked state; if it is positive, it is the number of slots available in the critical section.
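The "number of slots" behavior can be sketched with Python's `threading.Semaphore` (an assumption of this illustration; the `active`/`max_active` counters are hypothetical instrumentation added to observe the limit):

```python
import threading
import time

S = threading.Semaphore(3)   # at most 3 threads inside the critical section
active = 0
max_active = 0
guard = threading.Lock()     # protects the two counters above

def worker():
    global active, max_active
    with S:                           # Down: blocks if 3 are already inside
        with guard:
            active += 1
            max_active = max(max_active, active)
        time.sleep(0.02)              # simulate work in the critical section
        with guard:
            active -= 1
                                      # leaving the with-block performs Up

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_active)  # never exceeds 3
```

Even with ten competing threads, the observed concurrency inside the guarded region stays at or below the semaphore's initial value.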

Problem on Counting Semaphore


Questions on counting semaphores are asked in GATE. Generally the questions are very simple, involving only subtraction and addition.

1. Wait → Decrement → Down → P

2. Signal → Increment → Up → V

The following type of question can be asked in GATE.

A counting semaphore was initialized to 12, then 10 P (wait) and 4 V (signal) operations were performed on it. What is the result?
1. S = 12 (initial)
2. 10 P (wait):
3. S = S - 10 = 12 - 10 = 2
4. then 4 V (signal):
5. S = S + 4 = 2 + 4 = 6
Hence, the final value of counting semaphore is 6.
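The arithmetic above can be checked mechanically, treating each P as a decrement and each V as an increment:

```python
S = 12                  # initial value of the counting semaphore
for _ in range(10):     # 10 P (wait) operations, each decrements
    S -= 1
for _ in range(4):      # 4 V (signal) operations, each increments
    S += 1
print(S)  # 6
```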

Binary Semaphore or Mutex


A counting semaphore initialized to more than 1 does not provide mutual exclusion, because it deliberately allows a set of processes to execute in the critical section simultaneously.

A binary semaphore, however, strictly provides mutual exclusion. Instead of having more than one slot available in the critical section, we can have at most one process in the critical section. The semaphore can have only two values, 0 or 1.

Let's see the programming implementation of a binary semaphore.

struct Bsemaphore
{
    enum value(0,1);  // value is an enumerated data type which can only be 0 or 1
    queue type L;     // L contains all PCBs of processes blocked while
                      // performing an unsuccessful down operation
}
Down (Bsemaphore S)
{
    if (S.value == 1)    // if the critical section is free,
    {                    // let the process enter it
        S.value = 0;     // set the value to 0 so that no other process reads it as 1
    }
    else
    {
        put the process (PCB) in S.L;  // if the critical section is occupied,
                                       // let the process wait in the blocked queue
        sleep();
    }
}
Up (Bsemaphore S)
{
    if (S.L is empty)    // an empty blocked list means no process is
                         // waiting to enter the critical section
    {
        S.value = 1;
    }
    else
    {
        select a process from S.L;
        wakeup();        // otherwise wake the first process of the blocked queue
    }
}
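As a small runnable sketch (not the notes' code), a binary semaphore in Python, `threading.Semaphore(1)`, can serve as a mutex guarding a shared list:

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore: value is 0 or 1
shared = []

def appender(tag):
    for i in range(100):
        mutex.acquire()          # Down: 1 -> 0, or block if already 0
        shared.append((tag, i))  # critical section: exclusive access
        mutex.release()          # Up: 0 -> 1, waking one blocked thread

threads = [threading.Thread(target=appender, args=(t,)) for t in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(shared))  # 200: no appends were lost
```

Because at most one thread holds the semaphore at a time, all 200 appends survive; this is the mutual exclusion the counting semaphore (with value > 1) deliberately relaxes.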

THE DINING PHILOSOPHERS PROBLEM


The dining philosophers problem is a classical synchronization problem. Five philosophers sit around a circular table, and their job is to alternately think and eat. A bowl of noodles is placed at the center of the table along with five chopsticks, one between each pair of philosophers. To eat, a philosopher needs both the chopstick on their left and the chopstick on their right. A philosopher can eat only if both the immediate left and right chopsticks are available; if they are not, the philosopher puts down whichever chopstick is in hand (left or right) and starts thinking again.

The dining philosophers problem demonstrates a large class of concurrency-control problems; hence it is a classic synchronization problem.

Fig 1: Five philosophers sitting around the table

Dining Philosophers Problem - Let's understand the problem with the code below; Fig 1 is used as a reference. The five philosophers are represented as P0, P1, P2, P3, and P4, and the five chopsticks by C0, C1, C2, C3, and C4.

void Philosopher()
{
    while(1)
    {
        take_chopstick[i];
        take_chopstick[(i+1) % 5];
        /* ... EATING THE NOODLE ... */
        put_chopstick[i];
        put_chopstick[(i+1) % 5];
        /* ... THINKING ... */
    }
}
Let's discuss the above code:

Suppose philosopher P0 wants to eat. It enters the Philosopher() function and executes take_chopstick[i]; thereby holding chopstick C0. It then executes take_chopstick[(i+1) % 5]; thereby holding chopstick C1 (since i = 0, (0 + 1) % 5 = 1).

Similarly, suppose philosopher P1 now wants to eat. It enters the Philosopher() function and executes take_chopstick[i]; thereby holding chopstick C1. It then executes take_chopstick[(i+1) % 5]; thereby holding chopstick C2 (since i = 1, (1 + 1) % 5 = 2).

But in practice chopstick C1 is not available, as it has already been taken by philosopher P0. Hence the above code causes problems and produces a race condition.

The solution of the Dining Philosophers Problem


We use a semaphore to represent each chopstick, and this acts as a solution to the dining philosophers problem. The wait and signal operations are used: wait is executed to pick up a chopstick, and signal is executed to release one.

Semaphore: A semaphore is an integer variable S that, apart from initialization, is accessed only through two standard atomic operations, wait and signal, defined as follows:

1. wait( S )
{
    while( S <= 0 );   // busy wait until the semaphore becomes positive
    S--;
}

2. signal( S )
{
    S++;
}
From the above definition of wait, it is clear that while the value of S <= 0, the process spins in a loop (because of the semicolon after the while condition). The job of signal is simply to increment the value of S.

The chopsticks are represented by an array of semaphores:

semaphore C[5];

Initially each element C0, C1, C2, C3, and C4 is initialized to 1, as all the chopsticks are on the table and not picked up by any philosopher.

Let's modify the above code of the Dining Philosopher Problem by using the semaphore operations wait and signal; the desired code looks like:

void Philosopher()
{
    while(1)
    {
        Wait( take_chopstickC[i] );
        Wait( take_chopstickC[(i+1) % 5] );
        /* ... EATING THE NOODLE ... */
        Signal( take_chopstickC[i] );
        Signal( take_chopstickC[(i+1) % 5] );
        /* ... THINKING ... */
    }
}
In the above code, wait operations are first performed on take_chopstickC[i] and take_chopstickC[(i+1) % 5], which shows that philosopher i has picked up the chopsticks on its left and right. The eating function is performed after that.

When philosopher i has finished eating, signal operations are performed on take_chopstickC[i] and take_chopstickC[(i+1) % 5], which shows that philosopher i has eaten and put down both the left and right chopsticks. Finally, the philosopher starts thinking again.

Let's understand how the above code solves the dining philosophers problem.

Let i = 0. Suppose philosopher P0 wants to eat. It enters the Philosopher() function and executes Wait( take_chopstickC[i] ); thereby holding chopstick C0 and reducing semaphore C0 to 0. It then executes Wait( take_chopstickC[(i+1) % 5] ); thereby holding chopstick C1 (since i = 0, (0 + 1) % 5 = 1) and reducing semaphore C1 to 0.

Now suppose philosopher P1 wants to eat. It enters the Philosopher() function and executes Wait( take_chopstickC[i] ); trying to hold chopstick C1. It cannot, since semaphore C1 has already been set to 0 by philosopher P0, so P1 busy-waits in the loop and cannot pick up chopstick C1. Meanwhile, if philosopher P2 wants to eat, it enters the Philosopher() function and executes Wait( take_chopstickC[i] ); thereby holding chopstick C2 and reducing semaphore C2 to 0. It then executes Wait( take_chopstickC[(i+1) % 5] ); thereby holding chopstick C3 (since i = 2, (2 + 1) % 5 = 3) and reducing semaphore C3 to 0.

Hence the above code provides a solution to the dining philosophers problem: a philosopher can eat only if both the immediate left and right chopsticks are available; otherwise the philosopher must wait. Also, two non-adjacent philosophers can eat simultaneously (e.g., P0 and P2, P1 and P3, or P2 and P4 can eat at the same time, as they are independent processes following the above constraint of the dining philosopher problem).

The drawback of the above solution of the dining philosopher problem


The above solution guarantees that no two neighbouring philosophers can eat at the same time. Its drawback is that it can lead to deadlock: if all the philosophers pick up their left chopstick at the same time, none can acquire a right chopstick, and none of the philosophers can eat.

To avoid deadlock, some of the solutions are as follows -

o At most four philosophers should be allowed at the table. In that case chopstick C4 will be available to philosopher P3, so P3 will start eating; when P3 finishes, it puts down both chopsticks C3 and C4, i.e. semaphores C3 and C4 are incremented to 1. Now philosopher P2, which was holding chopstick C2, also has chopstick C3 available; similarly, it will put down its chopsticks after eating and enable the other philosophers to eat.

o A philosopher at an even position should pick up the right chopstick first and then the left, while a philosopher at an odd position should pick up the left chopstick first and then the right.

o A philosopher should be allowed to pick up its chopsticks only if both (left and right) are available at the same time.

o The first four philosophers (P0, P1, P2, and P3) should pick up the left chopstick first and then the right, whereas the last philosopher P4 should pick up the right chopstick first and then the left. This forces P4 to wait for its right chopstick first: since the right chopstick of P4 is C0, which is already held by philosopher P0 (so semaphore C0 is 0), P4 busy-waits and chopstick C4 remains free. Hence philosopher P3 has both its left (C3) and right (C4) chopsticks available; it starts eating, puts down both chopsticks once finished, and lets the others eat, which removes the problem of deadlock.
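The asymmetric-pickup idea from the last bullet can be sketched as runnable Python (an assumption of this illustration: threads stand in for philosophers, `threading.Semaphore(1)` for each chopstick, and the round count is arbitrary):

```python
import threading

N = 5
# One binary semaphore per chopstick, all initialized to 1 (on the table).
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=50):
    left, right = i, (i + 1) % N
    # The last philosopher picks up the right chopstick first;
    # this asymmetry breaks the circular wait that causes deadlock.
    first, second = (left, right) if i != N - 1 else (right, left)
    for _ in range(rounds):
        chopstick[first].acquire()    # wait()
        chopstick[second].acquire()   # wait()
        meals[i] += 1                 # eating
        chopstick[second].release()   # signal()
        chopstick[first].release()    # signal()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()                          # completes only because no deadlock occurs
print(meals)  # every philosopher finished all rounds
```

If every philosopher instead grabbed the left chopstick first, the program could hang forever with each thread holding one semaphore and waiting on the next, which is the deadlock described above.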

The problem was designed to illustrate the challenges of avoiding deadlock; a deadlocked state of a system is one in which no progress is possible. Consider a proposal where each philosopher is instructed to behave as follows:

o The philosopher thinks until the left fork is available; when it is, the philosopher picks it up.

o The philosopher thinks until the right fork is available; when it is, the philosopher picks it up.

o The philosopher eats when both forks are held.

o Then the philosopher puts the right fork down first,

o then puts the left fork down next,

o and repeats from the beginning.
