Note - DCS30102 - Module 3
Let us now look at the general definition of inter-process communication, which will explain the same
thing that we have discussed above.
Definition
"Inter-process communication is used for exchanging useful information between numerous
threads in one or more processes (or programs)."
Processes that exchange information in this way must also coordinate their access to shared data.
The following methods are used to provide synchronization:
1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock
Mutual Exclusion:-
It is generally required that only one process or thread can enter the critical section at a time. This helps
in synchronization and creates a stable state that avoids race conditions.
Semaphore:-
Semaphore is a type of variable that usually controls the access to the shared resources by several
processes. Semaphore is further divided into two types which are as follows:
1. Binary Semaphore
2. Counting Semaphore
Barrier:-
A barrier does not allow an individual process to proceed until all of the participating processes have reached it.
Many parallel languages use barriers, and collective routines impose them.
Spinlock:-
A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock waits in a loop,
repeatedly checking whether the lock is available. This is known as busy waiting because, even though the
process is active, it does not perform any useful work while it waits.
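A minimal sketch of a spinlock in C (assuming C11 atomics; the names spin_acquire and spin_release are chosen here only for illustration) could look like this:

    #include <stdatomic.h>

    atomic_flag lock_flag = ATOMIC_FLAG_INIT;      /* clear = free, set = held */

    void spin_acquire(void)
    {
        /* Keep testing-and-setting until the previous value was clear (lock was free). */
        while (atomic_flag_test_and_set(&lock_flag))
            ;   /* busy wait (spin) */
    }

    void spin_release(void)
    {
        atomic_flag_clear(&lock_flag);             /* mark the lock as free again */
    }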
These are the common approaches used to implement inter-process communication:
1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO
To understand them in more detail, we will discuss each of them individually.
Pipe:-
The pipe is a type of data channel that is unidirectional in nature, which means that data in this type of
channel can move in only one direction at a time. Still, two channels of this type can be used together so
that data can be both sent and received between two processes. A pipe typically uses the standard methods
for input and output. Pipes are used on all types of POSIX systems and on different versions of the
Windows operating system as well.
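As a rough sketch only (assuming a POSIX system; error handling is omitted), a parent process could send a message to a child process through a pipe like this:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];                      /* fd[0] = read end, fd[1] = write end */
        pipe(fd);

        if (fork() == 0) {              /* child: reads from the pipe */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        } else {                        /* parent: writes into the pipe */
            close(fd[0]);
            write(fd[1], "hello", strlen("hello"));
            close(fd[1]);
            wait(NULL);
        }
        return 0;
    }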
Shared Memory:-
Shared memory is a region of memory that can be used or accessed by multiple processes simultaneously.
It is primarily used so that the processes can communicate with each other directly. Shared memory is
supported by almost all POSIX and Windows operating systems.
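A rough sketch of POSIX shared memory (the name "/demo_shm" is made up for this example; error handling is omitted, and on some systems the program must be linked with -lrt):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create (or open) a named shared-memory object and give it a size. */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);

        /* Map it into this process's address space. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        strcpy(p, "written through shared memory");
        printf("%s\n", p);          /* another process mapping /demo_shm would see the same bytes */

        munmap(p, 4096);
        shm_unlink("/demo_shm");    /* remove the name when done */
        return 0;
    }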
Message Queue:-
In general, several different processes are allowed to write messages to and read messages from a message
queue. The messages are stored in the queue until their recipients retrieve them. In short, the message
queue is very helpful for inter-process communication and is supported by all major operating systems.
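A rough sketch using POSIX message queues (the queue name "/demo_queue" is illustrative; error handling is omitted; link with -lrt on Linux):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

        /* Create (or open) a named queue; another process opens the same name. */
        mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);

        mq_send(q, "hello", strlen("hello") + 1, 0);    /* enqueue a message */

        char buf[64];
        mq_receive(q, buf, sizeof(buf), NULL);          /* dequeue it (blocks if the queue is empty) */
        printf("received: %s\n", buf);

        mq_close(q);
        mq_unlink("/demo_queue");
        return 0;
    }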
Message Passing:-
It is a type of mechanism that allows processes to synchronize and communicate with each other.
However, by using message passing, the processes can communicate with each other without resorting to
shared variables.
Usually, the inter-process communication mechanism provides two operations that are as follows:
o send (message)
o receive (message)
Direct Communication:-
In this type of communication process, usually, a link is created or established between two
communicating processes. However, in every pair of communicating processes, only one link can exist.
Indirect Communication
Indirect communication can only exist or be established when processes share a common mailbox, and
each pair of these processes shares multiple communication links. These shared links can be
unidirectional or bi-directional.
FIFO:-
It is a general-purpose channel for communication between two unrelated processes. It can also be considered
full-duplex, which means that one process can communicate with another process and vice versa.
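As a small illustration (assuming a POSIX system; the path /tmp/demo_fifo is made up, and error handling is omitted), a writer process could use a FIFO like this; a separate, unrelated reader would simply open the same path with O_RDONLY and call read():

    #include <fcntl.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create a named pipe in the filesystem; unrelated processes can open it by name. */
        mkfifo("/tmp/demo_fifo", 0600);

        int fd = open("/tmp/demo_fifo", O_WRONLY);   /* blocks until a reader opens the FIFO */
        write(fd, "hello via FIFO", strlen("hello via FIFO"));
        close(fd);
        return 0;
    }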
o Socket:-
It acts as an endpoint for sending or receiving data over a network. It works both for data exchanged
between processes on the same computer and for data exchanged between different computers on the same
network. Hence, it is supported by several types of operating systems.
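Sockets can also connect two processes on the same machine. A minimal sketch (assuming a POSIX system and using socketpair() for related processes; error handling omitted):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];
        /* A connected pair of local sockets: whatever is written on one end
           can be read from the other, in both directions. */
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

        if (fork() == 0) {                           /* child uses sv[1] */
            char buf[32];
            ssize_t n = read(sv[1], buf, sizeof(buf) - 1);
            buf[n] = '\0';
            printf("child got: %s\n", buf);
        } else {                                     /* parent uses sv[0] */
            write(sv[0], "ping", 4);
            wait(NULL);
        }
        return 0;
    }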
o File:-
A file is a data record or document stored on disk, which can be retrieved on demand by the file server.
Another important point is that several processes can access the same file as required.
o Signal:-
As the name implies, signals are used for inter-process communication in a minimal way. Typically, they are
system messages sent by one process to another. Therefore, they are generally not used for transferring data
but for sending remote commands between processes.
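A small sketch of this idea in C (assuming a POSIX system; the one-second sleep is only there to keep this simplified example from racing):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    static void handler(int sig)
    {
        (void)sig;
        got_signal = 1;                 /* just record that the signal arrived */
    }

    int main(void)
    {
        signal(SIGUSR1, handler);       /* install a handler for SIGUSR1 */

        if (fork() == 0) {              /* child: send a "remote command" to the parent */
            sleep(1);                   /* crude way to make sure the parent is already waiting */
            kill(getppid(), SIGUSR1);
            _exit(0);
        }

        pause();                        /* parent: sleep until a signal is delivered */
        if (got_signal)
            printf("parent received SIGUSR1 from the child\n");
        wait(NULL);
        return 0;
    }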
Some of the reasons for using inter-process communication are:
o Computational speed-up
o Privilege separation
o Convenience
o It helps processes communicate with each other and synchronize their actions.
Process Synchronization
A cooperative process is one that can affect the execution of another process or be affected by it. Such
processes need to be synchronized so that their order of execution can be guaranteed.
The procedure involved in preserving the appropriate order of execution of cooperative processes is
known as Process Synchronization. There are various synchronization mechanisms that are used to
synchronize the processes.
Race Condition
A race condition typically occurs when two or more threads read, write, and possibly make decisions based
on memory that they are accessing concurrently, so the final outcome depends on the order in which the
accesses happen.
Critical Section
The regions of a program that access shared resources and may cause race conditions are called critical
sections. To avoid race conditions among the processes, we need to ensure that only one process at a time
can execute within its critical section.
Since the critical section must not be executed by more than one process at the same time, the operating
system faces the difficulty of deciding when to allow and when to disallow processes from entering it.
The critical section problem is about designing a set of protocols which ensure that a race condition among
the processes will never arise.
A solution to the critical section problem must satisfy the following conditions.
Primary
1. Mutual Exclusion
Our solution must provide mutual exclusion. By mutual exclusion, we mean that if one process
is executing inside the critical section, then no other process may enter the critical section.
2. Progress
Progress means that a process which does not need to enter the critical section should not
prevent other processes from getting into the critical section.
Secondary
1. Bounded Waiting
We should be able to predict the waiting time for every process to get into the critical section.
The process must not be endlessly waiting for getting into the critical section.
2. Architectural Neutrality
Our mechanism must be architecture neutral. It means that if our solution works fine on one
architecture, then it should also run on other architectures as well.
Lock Variable
This is the simplest synchronization mechanism. This is a Software Mechanism implemented in User
mode. This is a busy waiting solution which can be used for more than two processes.
In this mechanism, a lock variable, lock, is used. The lock can take one of two values, 0 or 1. Lock
value 0 means that the critical section is vacant, while lock value 1 means that it is occupied.
A process which wants to get into the critical section first checks the value of the lock variable. If it is 0
then it sets the value of lock as 1 and enters into the critical section, otherwise it waits.
The pseudo code of the mechanism looks like the following.
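The original listing is not reproduced in these notes; a minimal sketch of the intended entry and exit sections, in C-like pseudocode, would be:

    // Entry Section
    while (lock != 0)
        ;           // busy wait while the critical section is occupied
    lock = 1;       // mark the critical section as occupied

    // Critical Section

    // Exit Section
    lock = 0;       // mark the critical section as vacant again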
Initially the value of lock variable is 0. The process which needs to get into the critical section, enters
into the entry section and checks the condition provided in the while loop.
The process keeps waiting in the while loop as long as the value of lock is 1. Since, at the very first time,
the critical section is vacant, the process enters the critical section after setting the lock variable to 1.
When the process exits from the critical section, then in the exit section, it reassigns the value of lock as
0.
Every Synchronization mechanism is judged on the basis of four conditions.
1. Mutual Exclusion
2. Progress
3. Bounded Waiting
4. Portability
Out of the four parameters, Mutual Exclusion and Progress must be provided by any solution. Let's
analyze this mechanism on the basis of the above-mentioned conditions.
Mutual Exclusion
The lock variable mechanism doesn't provide mutual exclusion in some cases. This can be seen better by
looking at the pseudo code from the operating system's point of view, i.e., the assembly code of the
program. Let's convert the code into assembly language.
1. Load Lock, R0
2. CMP R0, #0
3. JNZ Step 1
4. Store #1, Lock
Let us consider two processes, P1 and P2. Process P1 wants to execute its critical section. P1 gets into the
entry section. Since the value of lock is 0, P1 reads it as 0 and is about to change it from 0 to 1 so that it
can enter the critical section.
Meanwhile, P1 is pre-empted by the CPU and P2 gets scheduled. Now there is no process in the critical
section and the value of the lock variable is still 0. P2 also wants to execute its critical section. It enters
the critical section by setting the lock variable to 1.
Now, the CPU changes P1's state from waiting to running. P1 is yet to enter its critical section. P1 has
already checked the value of the lock variable and remembers that its value was 0 when it previously
checked it. Hence, it also enters the critical section without checking the updated value of the lock
variable.
Now we have two processes in the critical section at the same time. According to the condition of mutual
exclusion, more than one process must not be present in the critical section at the same time. Hence, the
lock variable mechanism doesn't guarantee mutual exclusion.
The problem with the lock variable mechanism is that, at the same time, more than one process can see the
critical section as vacant, and therefore more than one process can enter the critical section. Hence, the lock
variable doesn't provide mutual exclusion, and that's why it cannot be used in general.
Since this method fails at the most basic requirement, there is no need to examine the remaining conditions.
In the lock variable mechanism, a process sometimes reads an old value of the lock variable and enters the
critical section. Due to this, more than one process might get into the critical section. However, the
code shown in part one of the following section can be replaced with the code shown in part two.
This doesn't change the overall algorithm but, by doing this, we can provide mutual exclusion to
some extent, though not completely.
In the updated version of the code, the value of lock is first loaded into the local register R0 and then the
lock is immediately set to 1.
However, in step 3, the previous value of lock (that is now stored in R0) is compared with 0. If it is 0,
then the process simply enters the critical section; otherwise, it waits by executing the loop continuously.
The benefit of having the process set the lock to 1 immediately is that any process which executes the load
afterwards reads the updated value of the lock variable, which is 1.
Even if that process is preempted and scheduled again, it will not enter the critical section, regardless of
the current value of the lock variable, because the value it already carries in R0 is the updated value 1.
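The two code sections compared above are not reproduced in these notes. Section 1 is the listing already shown (load, compare, jump, and then set the lock). A reconstruction of Section 2, in which the lock is set immediately after being loaded, would be:

    1. Load Lock, R0
    2. Store #1, Lock
    3. CMP R0, #0
    4. JNZ Step 1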
TSL Instruction
However, the solution provided in the above segment provides mutual exclusion to some extent but it
doesn't make sure that the mutual exclusion will always be there. There is a possibility of having more
than one process in the critical section.
What if the process gets preempted just after executing the first instruction of the assembly code written
in Section 2? In that case, it will carry the old value of the lock variable with it and will enter the
critical section without checking the current value of the lock variable. This may result in two processes
being present in the critical section at the same time.
To get rid of this problem, we have to make sure that preemption cannot take place just after loading the
previous value of the lock variable and before setting it to 1. The problem can be solved if we can merge
the first two instructions into a single, indivisible operation.
In order to address the problem, the hardware provides a special instruction called Test and Set Lock
(TSL), which loads the value of the lock variable into the local register R0 and sets the lock to 1 as a
single, indivisible operation.
The process which executes the TSL instruction first will enter the critical section, and no other process can
enter after that until the first process comes out. No other process can enter the critical section even if the
first process is preempted.
1. TSL Lock, R0
2. CMP R0, #0
3. JNZ step 1
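For comparison only, the effect of TSL can be imitated in C with an atomic exchange (assuming C11 atomics; this is a sketch of the idea, not the hardware instruction itself):

    #include <stdatomic.h>

    atomic_int lock = 0;                    /* 0 = vacant, 1 = occupied */

    void enter_critical_section(void)
    {
        /* atomic_exchange reads the old value and writes 1 in one indivisible step,
           just as TSL loads the lock into R0 and sets it to 1 simultaneously. */
        while (atomic_exchange(&lock, 1) != 0)
            ;   /* busy wait until the old value read was 0 */
    }

    void exit_critical_section(void)
    {
        atomic_store(&lock, 0);             /* exit section: mark the critical section as vacant */
    }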
o Mutual Exclusion
Mutual exclusion is guaranteed in the TSL mechanism, since the load and the set happen as one
indivisible operation: a process can never be preempted between reading the lock variable and setting it.
Only one process can see the lock variable as 0 at a particular time, and that's why mutual exclusion is
guaranteed.
o Progress
According to the definition of progress, a process which doesn't want to enter the critical
section should not stop other processes from getting into it. In the TSL mechanism, a process executes
the TSL instruction only when it wants to get into the critical section. The value of the lock will
always be 0 if no process wants to enter the critical section, hence progress is always guaranteed in TSL.
o Bounded Waiting
Bounded waiting is not guaranteed in TSL. Some process might not get a chance to enter the critical
section for a long time; we cannot guarantee that a process will definitely get a chance to enter the
critical section after a certain amount of time.
o Architectural Neutrality
TSL doesn't provide architectural neutrality. It depends on the hardware platform: the TSL
instruction is provided by the hardware, and some platforms might not provide it. Hence, it is not
architecture neutral.
Semaphore
A semaphore is a variable which stores the count of wake-up calls transferred from the producer to the
consumer. It is a variable on which the read, modify and update operations happen atomically in kernel
mode. A semaphore cannot be implemented in user mode because a race condition may always arise when
two or more processes try to access the variable simultaneously. It always needs support from the operating
system to be implemented.
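As an illustration, POSIX exposes this operating-system support through a semaphore API; a minimal sketch (compile with -pthread; error handling omitted):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t sem;                               /* the semaphore */
    int shared_counter = 0;                  /* shared resource protected by the semaphore */

    void *worker(void *arg)
    {
        (void)arg;
        sem_wait(&sem);                      /* down: block if no slot is free */
        shared_counter++;                    /* critical section */
        sem_post(&sem);                      /* up: release the slot */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        sem_init(&sem, 0, 1);                /* initial value 1 => behaves as a binary semaphore */

        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);

        printf("counter = %d\n", shared_counter);
        sem_destroy(&sem);
        return 0;
    }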
According to the demands of the situation, semaphores can be divided into two categories:
1. Counting Semaphore
2. Binary Semaphore
Counting Semaphore
There are scenarios in which more than one process needs to execute in the critical section
simultaneously. A counting semaphore can be used when we need to allow more than one process into the
critical section at the same time.
The programming code of semaphore implementation is shown below which includes the structure of
semaphore and the logic using which the entry and the exit can be performed in the critical section.
struct Semaphore
{
    int value;                  // number of processes that can enter the critical section simultaneously
    Queue L;                    // L contains the set of processes which get blocked
};

Down(Semaphore S)
{
    S.value = S.value - 1;      // the semaphore value is decreased when a new
                                // process wants to enter the critical section
    if (S.value < 0)
    {
        put the process (PCB) in S.L;   // if the value is negative then
                                        // the process goes into the blocked state
        sleep();
    }
    else
        return;
}

Up(Semaphore S)
{
    S.value = S.value + 1;      // the semaphore value is increased when a process
                                // exits from the critical section
    if (S.value <= 0)
    {
        select a process from S.L;      // a non-positive value means at least one process is blocked,
                                        // so wake one of the processes in the blocked queue
        wake_up();
    }
}
In this mechanism, the entry and exit in the critical section are performed on the basis of the value of
counting semaphore. The value of counting semaphore at any point of time indicates the maximum
number of processes that can enter in the critical section at the same time.
A process which wants to enter the critical section first decreases the semaphore value by 1 and then
checks whether it has become negative. If it is negative, the process is pushed into the list of blocked
processes (i.e. L); otherwise it enters the critical section.
When a process exits from the critical section, it increases the counting semaphore by 1 and then checks
whether the value is negative or zero. If so, at least one process is waiting in the blocked state; hence, to
ensure bounded waiting, the first process among the list of blocked processes is woken up and enters the
critical section.
The processes in the blocked list are woken in the order in which they slept. If the value of the counting
semaphore is negative, its absolute value gives the number of processes in the blocked state, while if it is
positive it gives the number of slots still available in the critical section.
A counting semaphore was initialized to 12. Then 10 P (wait) and 4 V (signal) operations were performed
on this semaphore. What is the resulting value?

S = 12 (initial value)
After 10 P (wait) operations: S = 12 - 10 = 2
After 4 V (signal) operations: S = 2 + 4 = 6

Hence, the final value of the counting semaphore is 6.
Binary Semaphore
The programming code of the binary semaphore implementation is shown below.

struct Bsemaphore
{
    enum Value {0, 1};          // value is an enumerated data type which can only be 0 or 1
    Queue L;                    // L contains all PCBs corresponding to processes blocked
                                // while performing an unsuccessful down operation
};

Down(Bsemaphore S)
{
    if (S.value == 1)           // if the critical section is vacant
    {
        S.value = 0;            // set the value to 0 so that no other process can read it as 1
    }
    else
    {
        put the process (PCB) in S.L;   // if the critical section is occupied,
                                        // let the process wait in the blocked queue
        sleep();
    }
}

Up(Bsemaphore S)
{
    if (S.L is empty)           // an empty blocked list implies that no process
                                // is waiting to enter the critical section
    {
        S.value = 1;            // mark the critical section as vacant again
    }
    else
    {
        select a process from S.L;      // otherwise wake up one of the blocked processes;
        wake_up();                      // it will enter the critical section
    }
}
Dining Philosophers Problem
The dining philosophers problem demonstrates a large class of concurrency-control problems; hence it is a
classic synchronization problem. Let's understand it with the code below. The five philosophers are
represented as P0, P1, P2, P3 and P4, and the five chopsticks as C0, C1, C2, C3 and C4, with each
chopstick placed between two neighbouring philosophers.
void Philosopher(int i)
{
    while (1)
    {
        take_chopstick[i];
        take_chopstick[(i + 1) % 5];
        . . .
        /* EATING THE NOODLE */
        . . .
        put_chopstick[i];
        put_chopstick[(i + 1) % 5];
        . . .
        /* THINKING */
    }
}
Let's discuss the above code:
Suppose philosopher P0 wants to eat. It will enter the Philosopher() function and execute
take_chopstick[i]; by doing this it holds chopstick C0. After that it executes take_chopstick[(i+1) % 5];
by doing this it holds chopstick C1 (since i = 0, therefore (0 + 1) % 5 = 1).
Similarly, suppose philosopher P1 now wants to eat. It will enter the Philosopher() function and execute
take_chopstick[i]; by doing this it tries to hold chopstick C1. After that it executes
take_chopstick[(i+1) % 5]; by doing this it tries to hold chopstick C2 (since i = 1, therefore (1 + 1) % 5 = 2).
But practically chopstick C1 is not available, as it has already been taken by philosopher P0; hence the
above code creates problems and produces a race condition.
Semaphore: A semaphore is an integer variable S that, apart from initialization, is accessed by only two
standard atomic operations, wait and signal, whose definitions are as follows:
wait(S)
{
    while (S <= 0)
        ;
    S--;
}

signal(S)
{
    S++;
}
From the definition of wait, it is clear that if the value of S <= 0 then the process keeps looping
(busy waiting) because of the semicolon after the while condition; it can proceed and decrement S only
when S becomes positive. The job of signal is simply to increment the value of S.
The chopsticks are represented as an array of semaphores, declared as shown below:

semaphore C[5];

Initially, each element C[0], C[1], C[2], C[3] and C[4] is initialized to 1, as all the chopsticks are on the
table and have not been picked up by any of the philosophers.
Let's modify the above code of the dining philosophers problem by using the semaphore operations wait
and signal; the desired code looks like this:
void Philosopher(int i)
{
    while (1)
    {
        wait(C[i]);                 // pick up the left chopstick
        wait(C[(i + 1) % 5]);       // pick up the right chopstick
        . . .
        /* EATING THE NOODLE */
        . . .
        signal(C[i]);               // put down the left chopstick
        signal(C[(i + 1) % 5]);     // put down the right chopstick
        . . .
        /* THINKING */
    }
}
In the above code, the wait operation is first performed on C[i] and C[(i+1) % 5]. This shows that
philosopher i has picked up the chopsticks on its left and right. The eating function is performed after
that, and the chopsticks are released with signal.
Let's understand how the above code gives a solution to the dining philosophers problem.
Let the value of i = 0 (initial value). Suppose philosopher P0 wants to eat. It will enter the Philosopher()
function and execute wait(C[i]); by doing this it holds chopstick C0 and reduces semaphore C[0] to 0.
After that it executes wait(C[(i+1) % 5]); by doing this it holds chopstick C1 (since i = 0, therefore
(0 + 1) % 5 = 1) and reduces semaphore C[1] to 0.
Similarly, suppose philosopher P1 now wants to eat. It will enter the Philosopher() function and execute
wait(C[i]); by doing this it will try to hold chopstick C1, but it will not be able to do that, since the value
of semaphore C[1] has already been set to 0 by philosopher P0. Therefore it will keep busy-waiting in the
loop, and philosopher P1 will not be able to pick up chopstick C1. Whereas if philosopher P2 wants to eat,
it will enter the Philosopher() function and execute wait(C[i]); by doing this it holds chopstick C2 and
reduces semaphore C[2] to 0. After that it executes wait(C[(i+1) % 5]); by doing this it holds chopstick C3
(since i = 2, therefore (2 + 1) % 5 = 3) and reduces semaphore C[3] to 0.
Hence the above code provides a solution to the dining philosophers problem: a philosopher can eat only
if both the chopsticks immediately to its left and right are available; otherwise the philosopher needs to
wait. Also, two non-adjacent philosophers can eat at the same time (i.e., philosophers P0 and P2, P1 and
P3, or P2 and P4 can eat simultaneously, since they are independent processes and they follow the above
constraint of the dining philosophers problem).
However, this solution can still lead to a deadlock if every philosopher picks up his left chopstick at the
same moment, because each will then wait forever for the right one. Some of the ways to avoid this
deadlock are as follows:
o The maximum number of philosophers at the table should not be more than four. In this case,
chopstick C4 will be available for philosopher P3, so P3 will start eating, and after finishing, he
will put down both his chopsticks C3 and C4, i.e., semaphores C[3] and C[4] will be incremented
to 1. Now philosopher P2, which was holding chopstick C2, will also have chopstick C3 available;
hence he will similarly put down his chopsticks after eating and enable other philosophers to eat.
o A philosopher at an even position should pick the right chopstick and then the left chopstick
while a philosopher at an odd position should pick the left chopstick and then the right chopstick.
o A philosopher should be allowed to pick up chopsticks only if both the left and the right
chopsticks are available at the same time.
o All of the first four philosophers (P0, P1, P2 and P3) should pick up the left chopstick first and
then the right chopstick, whereas the last philosopher P4 should pick up the right chopstick first
and then the left chopstick (a sketch of this ordering appears after this list). This forces P4 to
reach for its right chopstick first; since the right chopstick of P4 is C0, which is already held by
philosopher P0 (its semaphore value is already 0), P4 will keep waiting and chopstick C4 remains
free. Hence philosopher P3 has both its left chopstick C3 and its right chopstick C4 available, so it
will start eating and will put down both chopsticks once it finishes, letting others eat and removing
the problem of deadlock.
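A sketch of the asymmetric ordering described in the last point, reusing the semaphore array C and the wait/signal operations from above (this illustrates the idea only; it is not the only possible solution):

    void Philosopher(int i)
    {
        while (1)
        {
            if (i < 4)                      /* philosophers P0..P3: left chopstick first */
            {
                wait(C[i]);
                wait(C[(i + 1) % 5]);
            }
            else                            /* philosopher P4: right chopstick first */
            {
                wait(C[(i + 1) % 5]);       /* right chopstick of P4 is C0 */
                wait(C[i]);                 /* left chopstick of P4 is C4 */
            }
            /* EATING */
            signal(C[i]);
            signal(C[(i + 1) % 5]);
            /* THINKING */
        }
    }

Because at least one philosopher acquires its chopsticks in the opposite order, a circular chain in which every philosopher holds one chopstick and waits for the next can no longer form, which is why the deadlock disappears.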
The problem was designed to illustrate the challenges of avoiding deadlock; a deadlock state of a system
is a state in which no progress of the system is possible. Consider a proposal where each philosopher is
instructed to behave as follows:
o The philosopher is instructed to think until the left chopstick becomes available; when it is available, pick it up.
o The philosopher is instructed to think until the right chopstick becomes available; when it is available, pick it up.