Process Synchronization
Critical section
• A critical section is a segment of code in a process in which the process may be
modifying or accessing shared data items.
• The most important requirement a system must enforce is that, when one process is
executing in its critical section, no other process is allowed to execute in its critical section.
• Before executing its critical section, a process must get permission from the system to
enter. The code that makes this request is called the entry section.
• The process then executes its critical section and comes out of it through the exit
section.
• Finally, it executes the remaining code, called the remainder section.
Starvation:
Two or more processes are said to be in starvation if they wait perpetually for a
resource that is occupied by another process. The process occupying the resource may or may
not itself appear in the list of starved processes.
Let P1, P2 and P3 be three processes, each of which requires periodic access to a resource
R. If access to resource R is granted to the process P1, then the other two processes P2 and P3
are delayed, as they are waiting for the resource R.
Now let access be granted to P3, and suppose P1 again needs R before P3 completes
its critical section. If the OS permits P1 to use R after P3 has completed its execution, then
these alternating access grants to P1 and P3 can leave P2 blocked indefinitely.
We now consider whether starvation is possible in scheduling algorithms such as FCFS, SRT
and priority.
• Consider the FCFS (First Come, First Served) algorithm: here starvation is not possible.
The CPU picks processes in order of their arrival time and runs each process to
completion.
• In the SRT (Shortest Remaining Time) algorithm, starvation is possible for processes
with long remaining times, because the CPU always picks the process with the shortest
remaining time. The problem can be overcome by eventually giving a chance to
processes that have been waiting for a long time.
• Finally, consider the priority algorithm: starvation is possible for low-priority
processes, because the CPU always picks the process with the highest priority. Starvation
can be overcome by a technique called aging, which gradually increases the priority of
processes that have been waiting for a long time.
2) Progress: If no process is executing in its critical section and some processes wish
to enter their critical sections, then only those processes that are not executing in their
remainder sections can participate in deciding which will enter its critical section next,
and this selection cannot be postponed indefinitely.
3) Bounded Waiting:
• When a process requests access to a critical section, a decision that grants its
access may not be delayed indefinitely.
• A process may not be denied access because of starvation or deadlock.
• A bound must exist on the number of times that other processes are allowed to
enter their critical sections after a process has made a request to enter its critical
section and before that request is granted (i.e., every requester must eventually
be let into the critical section).
There are two general ways for handling critical sections in the operating systems.
They are:
1) Preemptive Kernel: allows a process running in kernel mode to be preempted (i.e.,
interrupted) during execution.
2) Non-preemptive Kernel: does not allow a process running in kernel mode to be preempted;
the process runs until it exits kernel mode, blocks, or voluntarily yields
control of the CPU. This approach is helpful in avoiding race conditions on kernel data structures.
Peterson’s Solution:
• A classic software-based solution to the critical section problem is known as
Peterson’s solution.
• It provides a good algorithmic description of solving the critical section problem and
illustrates some of the complexities involved in designing software that meets the
requirements of mutual exclusion, progress and bounded waiting.
• Peterson’s solution is restricted to two processes that alternate execution between their
critical sections and remainder sections.
Peterson’s solution requires two data items to be shared between the two processes:
Boolean flag[]:
• An array of boolean values indicating whether a process is ready to enter its critical
section.
• flag[0] = true means Process 0 wants to enter.
• flag[1] = true means Process 1 wants to enter.
int turn:
• An integer variable indicating whose turn it is to enter the critical section.
• turn = 0 allows Process 0 to enter.
• turn = 1 allows Process 1 to enter.
Algorithm:
The structure of process Pi in Peterson’s solution.
For two processes, Process 0 and Process 1:
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;   /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
Where:
• i is the current process (0 or 1).
• j is the other process (1 - i).
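The entry and exit protocol above can be exercised as a runnable sketch. The Python below is an illustrative translation (the names `peterson_worker`, `N` and `counter` are ours, not part of the algorithm); it relies on CPython's global interpreter lock to provide the sequentially consistent memory ordering Peterson's algorithm assumes, which real hardware does not guarantee without memory barriers.

```python
import sys
import threading

sys.setswitchinterval(0.001)  # switch threads often so the busy-wait resolves quickly

flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # whose turn it is to yield
counter = 0            # shared data that the critical section protects
N = 200                # entries per process (illustrative)

def peterson_worker(i):
    global turn, counter
    j = 1 - i                          # the other process
    for _ in range(N):
        flag[i] = True                 # entry section: announce intent
        turn = j                       # politely give the other process priority
        while flag[j] and turn == j:
            pass                       # busy-wait while the other may be inside
        counter += 1                   # critical section
        flag[i] = False                # exit section

t0 = threading.Thread(target=peterson_worker, args=(0,))
t1 = threading.Thread(target=peterson_worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 2 * N = 400: no increment is lost
```

Without the entry section, the unsynchronized `counter += 1` could lose updates, since it compiles to separate load and store steps.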
The algorithm satisfies the three essential criteria for solving the critical section problem. For
two processes P0 and P1:
1) Mutual exclusion:
• P0 and P1 can never be in the critical section at the same time.
• If P0 is in its critical section, then flag[0] is true and either flag[1] is false (meaning
P1 has left its critical section) or turn is 0 (meaning P1 is just now trying to enter the
critical section, but is graciously waiting).
• In either case, P1 cannot be in its critical section while P0 is in its critical section.
2) Progress:
• Each process can only be blocked at the while loop if the other process wants to use the
critical section (flag[j] == true), AND it is the other process's turn to use the critical
section (turn == j).
• If both of those conditions are true, then the other process (j) will be allowed to enter
the critical section, and upon exiting the critical section will set flag[j] to false,
releasing process i.
• The shared variable turn ensures that only one process at a time can be blocked, and
the flag variable allows one process to release the other when exiting its critical
section.
3) Bounded waiting:
As each process enters its entry section, it sets the turn variable to the other
process's turn. Since no process ever sets turn back to its own value, this ensures that
each process will have to let the other process go first at most once before it
gets its own turn again.
Limitations
Two Processes Only:
Peterson’s solution is specifically designed for two processes. Extending it to multiple
processes is complex and less efficient.
Mutex Locks
• A mutex (mutual exclusion) lock is a synchronization mechanism used in operating
systems to prevent concurrent processes or threads from accessing a critical section
simultaneously.
• It ensures mutual exclusion, allowing only one thread or process to execute the critical
section at a time and thus preventing race conditions.
• A process must acquire the lock before entering a critical section; it releases the lock
when it exits the critical section.
• The acquire() function acquires the lock, and the release() function releases the lock.
Working:
• A mutex lock has a boolean variable available whose value indicates if the lock is
available or not.
• If the lock is available, a call to acquire() succeeds, and the lock is then considered
unavailable.
• A process that attempts to acquire an unavailable lock is blocked until the lock is
released.
Key Concepts of Mutex Locks:
1. Critical Section: A section of code where shared resources are accessed, requiring
synchronization to prevent data inconsistency.
2. Mutex Lock:
1. A binary lock (locked or unlocked) used to protect the critical section.
2. Only one thread or process can acquire the lock at a time.
3. Other threads/processes attempting to acquire the lock must wait until it is
released.
3. Operations:
1. Lock (acquire): If the mutex is available, a thread/process acquires it and enters
the critical section.
2. Unlock (release): Once the critical section is exited, the thread/process releases
the mutex, allowing others to acquire it.
The definition of acquire() is as follows:
acquire()
{
    while (!available)
        ;   /* busy wait */
    available = false;
}
The definition of release() is as follows:
release()
{
    available = true;
}
Solution to the critical-section problem using mutex locks:
do {
    acquire lock
    critical section
    release lock
    remainder section
} while (true);
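This acquire/critical-section/release structure maps directly onto Python's built-in `threading.Lock`, whose `acquire()` and `release()` methods match the functions described above. A minimal sketch (`worker` and `shared_total` are illustrative names):

```python
import threading

lock = threading.Lock()   # the mutex: starts out available
shared_total = 0          # shared data guarded by the lock

def worker():
    global shared_total
    for _ in range(100000):
        lock.acquire()        # entry section: blocks while unavailable
        shared_total += 1     # critical section
        lock.release()        # exit section
        # remainder section would go here

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_total)  # 400000: every increment survives
```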
Advantages of Mutex Locks:
1. Mutual Exclusion: Ensures only one thread/process accesses the critical section at a
time.
2. Simple to Use: Well-supported by most programming languages and operating
systems.
3. Thread Safety: Helps avoid race conditions.
Disadvantages of Mutex Locks:
1. Deadlocks:
1. Can occur if multiple threads hold locks and wait indefinitely for others.
2. Overhead:
1. Lock acquisition and release involve some computational cost.
3. Priority Inversion:
1. A high-priority thread might be waiting for a low-priority thread holding the
lock, causing delays.
Semaphore:
• Semaphores are most often used to synchronize operations (to avoid race
conditions) when multiple processes access a common, non-shareable resource.
• Semaphores are integer variables on which only two (atomic) operations are
defined: wait (P) and signal (V).
• To indicate a process has gained access to the resource, the process decrements the
semaphore.
• Modifications to the integer value of the semaphore in the wait and signal operations
must be executed indivisibly.
• When one process modifies the semaphore value, no other process can
simultaneously modify that same semaphore value.
• Access to the semaphore is provided by a series of semaphore system calls.
• Semaphores can be used to deal with the n-processes critical section problem, where
the n-processes share a semaphore mutex (mutual exclusion) initialized to 1.
Each process Pi is then organized as shown:

do {
    wait(mutex);
    /* critical section */
    signal(mutex);
    /* remainder section */
} while (true);
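The wait (P) and signal (V) semantics can be sketched as a busy-waiting semaphore in Python. `SpinSemaphore` is an illustrative class, not a standard API; the internal lock stands in for the hardware-level indivisibility the text requires.

```python
import sys
import threading

sys.setswitchinterval(0.001)  # let spinners recover quickly after a preemption

class SpinSemaphore:
    """Busy-waiting semaphore sketch: wait() and signal() model the classic
    P and V operations on an integer value."""
    def __init__(self, value):
        self.value = value
        self._guard = threading.Lock()  # makes test-and-decrement indivisible

    def wait(self):   # P: spin until value > 0, then decrement
        while True:
            with self._guard:
                if self.value > 0:
                    self.value -= 1
                    return
            # value was 0: keep spinning and retry

    def signal(self):  # V: increment, letting one spinner through
        with self._guard:
            self.value += 1

mutex = SpinSemaphore(1)  # initialized to 1 for mutual exclusion
count = 0

def worker():
    global count
    for _ in range(5000):
        mutex.wait()
        count += 1        # critical section
        mutex.signal()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)  # 3 * 5000 = 15000
```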
Implementation
• The big problem with the semaphores described above is the busy loop in the wait call
(busy waiting), which consumes CPU cycles without doing any useful work.
• This type of lock is known as a spinlock, because the lock just sits there and spins
while it waits.
• While this is generally a bad thing, it has the advantage of not invoking
context switches, and so it is sometimes used in multiprocessor systems when the
wait time is expected to be short: one thread spins on one processor while another
completes its critical section on another processor.
• An alternative approach is to block a process when it is forced to wait for an
unavailable semaphore, and swap it out of the CPU.
• In this implementation each semaphore maintains a list of processes that are
blocked waiting for it, so that one of these processes can be woken up and swapped
back in when the semaphore becomes available. (Whether it gets swapped back into
the CPU immediately or has to wait in the ready queue for a while
is a scheduling problem.)
The new definition of a semaphore and the corresponding wait and signal operations are
as follows:

typedef struct {
    int value;
    struct process *list;   /* queue of waiting processes */
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
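This blocking behavior can be sketched in Python with a condition variable standing in for the kernel's block() and wakeup() calls. `BlockingSemaphore` and its `wakeups` counter are illustrative, not a standard API; a negative value records how many processes are waiting.

```python
import threading

class BlockingSemaphore:
    """Semaphore sketch without busy waiting: waiters sleep on a condition
    variable instead of spinning, mirroring block()/wakeup()."""
    def __init__(self, value):
        self.value = value              # negative: number of waiters
        self.wakeups = 0                # pending wakeups not yet consumed
        self._cond = threading.Condition()

    def wait(self):   # P
        with self._cond:
            self.value -= 1
            if self.value < 0:
                while self.wakeups == 0:
                    self._cond.wait()   # sleep instead of spinning
                self.wakeups -= 1

    def signal(self):  # V
        with self._cond:
            self.value += 1
            if self.value <= 0:         # someone is waiting
                self.wakeups += 1
                self._cond.notify()     # wake exactly one waiter

sem = BlockingSemaphore(1)
total = 0

def worker():
    global total
    for _ in range(10000):
        sem.wait()
        total += 1        # critical section
        sem.signal()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 30000
```

The `wakeups` counter ensures a woken waiter does not go back to sleep just because another signal has not yet arrived; each signal frees exactly one waiter.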
Binary Semaphore:
• The value of a binary semaphore can range only between 0 and 1.
• A binary semaphore must be initialized with 1 or 0, and the completion of P and V
operations must alternate.
• If the semaphore is initialized with 1, then the first completed operation must be P.
• If the semaphore is initialized with 0, then the first completed operation must be
V.
• A P or V operation blocks if two operations of the same kind are attempted
consecutively.
• Binary semaphores are known as mutex locks as they are locks that provide
mutual exclusion. Binary semaphores are used to deal with the critical section
problem for multiple processes.
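The rule that a semaphore initialized to 0 must complete a V before any P is exactly how one thread signals another. A sketch using Python's `threading.Semaphore` (`producer_step` and `consumer_step` are illustrative names; `release` is V and `acquire` is P):

```python
import threading

done = threading.Semaphore(0)  # binary semaphore initialized to 0
events = []                    # records the order things happened

def producer_step():
    events.append("produced")
    done.release()             # V: the first operation that can complete

def consumer_step():
    done.acquire()             # P: blocks until the producer signals
    events.append("consumed")

c = threading.Thread(target=consumer_step)
p = threading.Thread(target=producer_step)
c.start()                      # the consumer starts first, but must wait
p.start()
c.join(); p.join()
print(events)  # ['produced', 'consumed'] regardless of scheduling
```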
Counting Semaphore
• The value of a counting semaphore can range over an unrestricted domain.
• Counting semaphores can be used to control access to a given resource consisting
of a finite number of instances.
• Counting semaphores maintain a count of the number of times a resource is given.
• The semaphore is initialized to the number of resources available. Each process that
wishes to use a resource performs a wait() operation on the semaphore. When a
process releases a resource, it performs a signal() operation.
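A counting semaphore initialized to the number of resource instances caps concurrent users at that number. In the sketch below (`use_resource`, `peak` and `MAX_INSTANCES` are illustrative names), the peak count of simultaneous holders can never exceed the semaphore's initial value:

```python
import threading
import time

MAX_INSTANCES = 3                           # finite number of resource instances
pool = threading.Semaphore(MAX_INSTANCES)   # initialized to the instance count
in_use = 0
peak = 0
guard = threading.Lock()                    # protects the two counters

def use_resource():
    global in_use, peak
    pool.acquire()                  # wait(): claim one instance
    with guard:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.01)                # pretend to use the resource
    with guard:
        in_use -= 1
    pool.release()                  # signal(): return the instance

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds MAX_INSTANCES
```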
Drawbacks of Semaphore:
• They are essentially shared global variables
• Access to semaphores can come from anywhere in a program.
• There is no control or guarantee of proper usage.
• They serve two purposes: mutual exclusion and scheduling constraints.
The Bounded-Buffer (Producer-Consumer) Problem:
• Here the pool consists of n buffers, each capable of holding one item.
• The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is
initialized to the value 1.
• The empty and full semaphores count the number of empty and full buffers.
• The semaphore empty is initialized to the value n, and the semaphore full is initialized
to the value 0.
The code below can be interpreted as the producer producing full buffers for the consumer or
as the consumer producing empty buffers for the producer.
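The scheme above translates directly into code. This sketch uses Python's `threading.Semaphore` for mutex, empty and full, with a single producer and consumer (`ITEMS` and the deque buffer are illustrative details):

```python
import threading
from collections import deque

N = 5                              # the pool holds n = 5 buffers
buffer = deque()
mutex = threading.Semaphore(1)     # mutual exclusion on the buffer pool
empty = threading.Semaphore(N)     # counts empty buffers, initialized to n
full = threading.Semaphore(0)      # counts full buffers, initialized to 0
ITEMS = 100
consumed = []

def producer():
    for item in range(ITEMS):
        empty.acquire()            # wait(empty): claim an empty buffer
        mutex.acquire()            # wait(mutex): enter the pool
        buffer.append(item)
        mutex.release()            # signal(mutex)
        full.release()             # signal(full): one more full buffer

def consumer():
    for _ in range(ITEMS):
        full.acquire()             # wait(full): wait for a full buffer
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()            # signal(empty): the buffer is free again

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(ITEMS)))  # True: every item arrives, in order
```

Note the ordering: the producer waits on empty before mutex (and the consumer on full before mutex); swapping them can deadlock, with one thread sleeping on a count while holding the pool.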
Example:
Suppose that a database is to be shared among several concurrent processes. Some of these
processes may want only to read the database, whereas others may want to update (that is, to
read and write) the database. We distinguish these two types of processes by referring to the
former as readers and to the latter as writers.
Obviously, if two readers access the shared data simultaneously, no adverse effects will
result. However, if a writer and some other process (either a reader or a writer) access the
database simultaneously, chaos may ensue. To ensure that these difficulties do not arise, we
require that the writers have exclusive access to the shared database while writing to the
database. This synchronization problem is referred to as the readers–writers problem.
There are several variations of the readers–writers problem, most centered around the relative
priorities of readers versus writers.
• The first readers–writers problem gives priority to readers. In this problem, if a reader
wants access to the data, and there is not already a writer accessing it, then access is
granted to the reader. A solution to this problem can lead to starvation of the writers, as
there could always be more readers coming along to access the data.
• The second readers–writers problem gives priority to the writers. In this problem, when
a writer wants access to the data it jumps to the head of the queue: all waiting readers
are blocked, and the writer gets access to the data as soon as it becomes available. In
this solution the readers may be starved by a steady stream of writers.
In the solution to the first readers–writers problem, the reader processes share the following
data structures:

semaphore rw_mutex = 1;
semaphore mutex = 1;
int read_count = 0;

If a writer is in the critical section and n readers are waiting, then one reader is queued on
rw_mutex, and n − 1 readers are queued on mutex. Also observe that, when a writer executes
signal(rw_mutex), we may resume the execution of either the waiting readers or a single waiting
writer. The selection is made by the scheduler.
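The classic first readers–writers solution has the first reader lock writers out and the last reader let them back in. A Python sketch using `threading.Semaphore` (the `log` list and thread counts are illustrative):

```python
import threading

rw_mutex = threading.Semaphore(1)  # held by writers, and by the reader group
mutex = threading.Semaphore(1)     # protects read_count
read_count = 0
log = []                           # what each thread did (append is GIL-safe)

def reader(i):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:            # first reader locks out writers
        rw_mutex.acquire()
    mutex.release()
    log.append(("read", i))        # reading the database
    mutex.acquire()
    read_count -= 1
    if read_count == 0:            # last reader lets writers back in
        rw_mutex.release()
    mutex.release()

def writer(i):
    rw_mutex.acquire()             # exclusive access while writing
    log.append(("write", i))
    rw_mutex.release()

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=(0,)))
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))  # three reads and one write, in some interleaving
```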
The readers–writers problem and its solutions have been generalized to provide reader–writer
locks on some systems. Acquiring a reader–writer lock requires specifying the mode of the
lock: either read or write access. When a process wishes only to read shared data, it requests
the reader–writer lock in read mode. A process wishing to modify the shared data must request
the lock in write mode. Multiple processes are permitted to concurrently acquire a reader–
writer lock in read mode, but only one process may acquire the lock for writing, as exclusive
access is required for writers.
Reader–writer locks are most useful in the following situations:
• In applications where it is easy to identify which processes only read shared data and
which processes only write shared data.
• In applications that have more readers than writers. This is because reader–writer locks
generally require more overhead to establish than semaphores or mutual-exclusion
locks. The increased concurrency of allowing multiple readers compensates for the
overhead involved in setting up the reader–writer lock.
The Dining-Philosophers Problem:
The dining philosophers problem is a classic synchronization problem involving the allocation
of limited resources among a group of processes in a deadlock-free and starvation-free manner.
Consider five philosophers sitting around a table on which five chopsticks are evenly
distributed, with an endless bowl of rice in the center. (There is
exactly one chopstick between each pair of adjacent philosophers.)
These philosophers spend their lives alternating between two activities: eating and
thinking. When it is time for a philosopher to eat, it must first acquire two chopsticks - one
from their left and one from their right. When a philosopher thinks, it puts down both chopsticks
in their original locations.
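One standard deadlock-avoidance strategy (not the only one) is to let at most four philosophers reach for chopsticks at a time, so at least one can always pick up both. A Python sketch with one chopstick semaphore per position and a `seats` counting semaphore (`meals` and `ROUNDS` are illustrative):

```python
import threading

N = 5
chopsticks = [threading.Semaphore(1) for _ in range(N)]  # one per position
seats = threading.Semaphore(N - 1)   # at most N-1 philosophers compete at once
meals = [0] * N
ROUNDS = 50

def philosopher(i):
    left, right = i, (i + 1) % N
    for _ in range(ROUNDS):
        # think ...
        seats.acquire()              # deadlock avoidance: one seat withheld
        chopsticks[left].acquire()   # pick up the left chopstick
        chopsticks[right].acquire()  # then the right
        meals[i] += 1                # eat
        chopsticks[right].release()  # put both chopsticks back
        chopsticks[left].release()
        seats.release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher eats ROUNDS times; no deadlock
```

Without the `seats` semaphore, all five could grab their left chopstick simultaneously and wait forever for the right one.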