Process Synchronization

Uploaded by Kiran M k

Process Synchronization: The Critical-Section Problem, Peterson’s Solution, Synchronization, Semaphores, Classic Problems of Synchronization.


Process Synchronization
• There are several situations where different processes need to interact with each other
to achieve a common goal.
• The interaction can be done by sharing data or by sending messages.
• When data is shared by several independent processes, there is a chance of the data
becoming inconsistent.
• Concurrent access to shared data may result in data inconsistency.
• Maintaining data consistency requires mechanisms to ensure the orderly execution of
cooperating processes.
Example:
Consider two processes P1 and P2 that share a variable named ‘counter’, and suppose both
execute simultaneously, with P1 incrementing the counter while P2 is decrementing it.
The final value of the variable may then be one that neither P1 nor P2 expects, because the
data has become inconsistent. This situation is often called a race condition.
Hence a synchronization mechanism is needed to avoid inconsistency among several
processes.
Basically, process synchronization is implemented by making a process wait until another
process performs an appropriate action on shared data. It is also called signaling, where one
process waits for notification of an event that will occur in another process.
Race Condition: A race condition is the situation that results when multiple processes or
threads read and write shared data items in such a way that the final result depends on the
order in which the instructions of the processes are executed.
• When several processes access and manipulate the same data concurrently, and the
outcome of the execution depends on the particular order in which the accesses take
place, the situation is called a race condition.
• To guard against the race condition, we need to ensure that only one process at a time
can be manipulating the variable counter. Hence processes must be synchronized.
Example:
1. Let P1 and P2 be two processes that share a global variable ‘x’. During execution,
suppose that at some point one process updates the value of ‘x’ to 5 and at some other
point the other updates it to 10. Thus a race between the two processes starts for changing
the value of ‘x’, and the process that performs its update operation last determines the
final value of ‘x’.
2. Consider another situation in which two processes P3 and P4 share two global
variables ‘y’ and ‘z’, whose initial values are 5 and 10 respectively. Suppose that during
execution process P3 executes the assignment statement y = y + z and process P4
executes the assignment statement z = y + z. The final values of y and z depend on
the order in which the two assignment statements are executed: if process P3 executes
prior to P4, then the final values of y and z are 15 and 25 respectively; on the other hand,
if the execution of P4 precedes P3, then the resulting values of y and z are 20 and 15
respectively.
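As an illustrative sketch (the helper functions are ours, not from the text), the two possible outcomes of the second example can be reproduced by running each ordering in turn:

```python
# Sketch of the second race-condition example: the final values of y and z
# depend on which assignment statement executes first. Each possible
# ordering is run sequentially to show the two outcomes.

def run_p3_then_p4():
    y, z = 5, 10
    y = y + z   # P3 executes first: y = 15
    z = y + z   # P4 executes next:  z = 25
    return y, z

def run_p4_then_p3():
    y, z = 5, 10
    z = y + z   # P4 executes first: z = 15
    y = y + z   # P3 executes next:  y = 20
    return y, z

print(run_p3_then_p4())  # (15, 25)
print(run_p4_then_p3())  # (20, 15)
```

When the two statements run concurrently without synchronization, either outcome is possible, which is exactly what makes the result inconsistent.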
Process Competition for Resources: Conflict arises among the various concurrently
executing processes when they are competing for the same resource.
Consider a situation in which two or more processes want to access a resource. Each
of these concurrently executing processes is unaware of the presence of other processes and
the execution of one process does not cause any effect on the execution of the other process.
Hence, the state of the resources used, remains unaffected.
However, even though information is not exchanged between these processes, the
execution of one process can significantly affect the behavior of the competing
processes, i.e., if two processes want to access a single resource, then the OS grants the
resource to only one process and makes the other wait; as a result, the blocked
process may never get the resource and may terminate in an inappropriate manner.
Three problems dominate in case of competing process.
1) The need for mutual exclusion.
2) The occurrence of deadlock.
3) The problems of starvation

The need for Mutual Exclusion:


• Consider a situation in which two or more processes need access to a single non-
sharable resource (Example printer).
• During its execution, each process sends commands to I/O devices, sends and
receives data, or receives status information. Such an I/O device is said to be a
critical resource, and the portion of the program that uses it is called a critical section.
• An important point is that only one program is permitted to enter its
critical section at any time.
The occurrence of deadlocks:
• The major cause for the occurrence of a deadlock is the imposition of the mutual
exclusion.
• Consider an example, granting of two resources R1 and R2 to two processes P1 and P2.
• Further suppose that each of these processes want to access both the resources in order
to execute some function.
• A situation may occur in which an operating system assigns the resource R1 to process
P2 and resource R2 to process P1.
• Hence, each process is waiting for one of the resources and will not release the acquired
resource till it gets the other resource that is a process needs both these resources in
order to proceed. This leads to deadlock.
The problem of Starvation:
• Let P1, P2 and P3 be three processes, each of which requires periodic access to
resource R. If access to resource ‘R’ is granted to process P1, then the other two
processes P2 and P3 are delayed, as they are waiting for the resource ‘R’.
• Now suppose access is granted to P3, and P1 again needs ‘R’ prior to the completion of
its critical section.
• If the OS permits P1 to use ‘R’ after P3 has completed its execution, then these alternate
access permissions granted to P1 and P3 cause P2 to be blocked.
• This competition among the processes can be controlled by the OS, which is
responsible for allocating the resources to all processes in the system.

Critical Section Problem:


• A resource that cannot be shared between two or more processes at the same
time is called a critical resource.
• There may be a situation where more than one process requires access to the critical
resource. During their execution, these processes can send data to the critical
resource, receive data from it, or just get information about its status by sending
the related commands to it.
• An example of a critical (non-sharable) resource is a printer. A critical resource
can be accessed only from the critical section of a program.

Critical section
• A critical section is a segment of code in a process in which the process may be
modifying or accessing common shared data items.
• The most important thing the system must ensure is that, when one process is
executing in its critical section, no other process is allowed to execute in its critical section.
• Before executing its critical section, a process must get permission from the system
to enter it. The code requesting this permission is called the entry section.
• The process then executes its critical section and comes out of it; this is called the exit
section.
• Then it executes the remaining code, called the remainder section.

Starvation:
Two or more processes are said to be in starvation if they are waiting perpetually for a
resource that is occupied by another process. The process that has occupied the resource may
or may not be present in the list of processes that are starved.
Let P1, P2 and P3 be three processes, each of which requires periodic access to resource
R. If access to resource ‘R’ is granted to process P1, then the other two processes P2 and P3
are delayed, as they are waiting for the resource ‘R’.
Now suppose access is granted to P3, and P1 again needs ‘R’ prior to the completion
of its critical section. If the OS permits P1 to use ‘R’ after P3 has completed its execution, then
these alternate access permissions granted to P1 and P3 cause P2 to be blocked.
We can now consider whether starvation is possible in algorithms such as FCFS, SRT
and priority scheduling.
• In the FCFS (First Come First Served) algorithm, starvation is not possible. The
reason is that the CPU picks processes in order of arrival and runs each process
to completion.
• In the SRT (Shortest Remaining Time) algorithm, starvation is possible for
processes that do not have the shortest remaining time, because the CPU always
picks the process with the shortest remaining time. The problem of starvation can
be overcome by giving a chance to processes that have been waiting for a long period of time.
• Finally, in the priority algorithm, starvation is possible for low-priority
processes, because the CPU picks the process with the highest priority. We can overcome
the starvation problem by a technique called aging, which increases the priority of
processes that have been waiting for a long period of time.

Requirements for Mutual Exclusion


Mutual Exclusion must meet the following requirements.
1) Mutual exclusion must be enforced: only one process at a time is allowed into its critical
section, among all processes that have critical sections for the same resource or shared
object.
2) A process that halts in its non-critical section must do so without interfering with other
processes.
3) It must not be possible for a process requiring access to a critical section to be delayed
indefinitely, no deadlock or starvation.
4) When no process is in a critical section, any process that requests entry to its critical section
must be permitted to enter without delay.
5) No assumptions are made about relative process speeds or number of processors
6) A process remains inside its critical section for a finite time only.

Critical section problem:


• Each process has a segment of code called a critical section.
• A critical section is used to avoid race conditions on shared data items: in the critical
section, the process may be changing common variables, updating a table, writing a file and
so on, and at any moment at most one process can be executing in its critical section.
• A critical section is a piece of code that accesses a shared resource (data structure or a
device) that must not be concurrently accessed by more than one thread of execution.
• The critical-section problem is to design a protocol that the processes can use to
cooperate.

A Critical Section Environment contains:


• Entry Section: Code requesting entry into the critical section.
• Critical Section: Code in which only one process can execute at any one time.
• Exit Section: The end of the critical section, releasing or allowing others in.
• Remainder Section: Rest of the code after the critical section.
A solution to the critical-section problem must satisfy the following requirements
1) Mutual Exclusion: If process Pi is executing in its critical section, then no other
process can be executing in its critical section.

2) Progress: If no process is executing in its critical section and there exist some
processes that wish to enter their critical sections, then only those processes that are not
executing in their remainder sections can participate in deciding which will enter its
critical section next, and this selection cannot be postponed indefinitely.
3) Bounded Waiting:
• When a process requests access to a critical section, a decision that grants its
access may not be delayed indefinitely.
• A process may not be denied access because of starvation or deadlock.
• A bound must exist on the number of times that other processes are allowed to
enter their critical sections after a process has made a request to enter its critical
section and before that request is granted. (i.e. All requesters must eventually
be let into the critical section).
There are two general ways for handling critical sections in the operating systems.
They are:
1) Preemptive Kernel: Allows a kernel-mode process to be preempted (i.e.,
interrupted) during execution.
2) Non-preemptive Kernel: Does not allow a kernel-mode process to be preempted
during execution; the process runs until it exits kernel mode, blocks, or voluntarily yields
control of the CPU. This approach is helpful in avoiding race conditions on kernel data.
Peterson’s Solution:
• A classic software-based solution to the critical section problem is known as
Peterson’s solution.
• It provides a good algorithmic description of solving the critical section problem and
illustrates some of the complexities involved in designing software that addresses the
requirements of mutual exclusion, progress and bounded waiting requirements.
• Peterson’s solution is restricted to two processes that alternate execution between their
critical sections and remainder sections.
Peterson’s solution requires two data items to be shared between the two processes:
Boolean flag[]:
• An array of boolean values indicating whether a process is ready to enter its critical
section.
• flag[0] = true means Process 0 wants to enter.
• flag[1] = true means Process 1 wants to enter.
int turn:
• An integer variable indicating whose turn it is to enter the critical section.
• turn = 0 allows Process 0 to enter.
• turn = 1 allows Process 1 to enter.
Algorithm:
The structure of process Pi in Peterson’s solution.
For two processes, Process 0 and Process 1:
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;  /* busy wait */

    /* critical section */

    flag[i] = false;

    /* remainder section */
} while (true);
Where:
• i is the current process (0 or 1).
• j is the other process (1 - i).
The algorithm satisfies the three essential criteria to solve the critical section problem. For
two processes P0 and P1:
1)Mutual exclusion:
• P0 and P1 can never be in the critical section at the same time.
• If P0 is in its critical section, then flag[0] is true and either flag[1] is false (meaning
P1 has left its critical section) or turn is 0 (meaning P1 is just now trying to enter the
critical section, but graciously waiting).
• In both cases, P1 cannot be in critical section when P0 is in critical section.
2) Progress:
• Each process can be blocked at the while loop only if the other process wants to use the
critical section (flag[j] == true) AND it is the other process's turn to use the critical
section (turn == j).
• If both of those conditions are true, then the other process (j) will be allowed to enter
the critical section and, upon exiting the critical section, will set flag[j] to false,
releasing process i.
• The shared variable turn assures that only one process at a time can be blocked, and
the flag variable allows one process to release the other when exiting their critical
section.
3) Bounded waiting:
As each process enters its entry section, it sets the turn variable to the other
process's turn. Since no process ever sets it back to its own turn, this ensures that
each process will have to let the other process go first at most one time before it
becomes its turn again.
Limitations
Two Processes Only:
Peterson’s solution is specifically designed for two processes. Extending it to multiple
processes is complex and less efficient.
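As a runnable sketch of the algorithm above (our own demo, not from the text), two Python threads can protect a shared counter with Peterson's entry and exit sections. This relies on CPython's GIL making the plain loads and stores of flag and turn sequentially consistent, which is what Peterson's algorithm assumes; on weaker memory models the variables would need memory barriers.

```python
import sys
import threading

# A short GIL switch interval lets the busy-wait loops hand control
# between the two threads quickly (CPython-specific assumption).
sys.setswitchinterval(1e-4)

flag = [False, False]   # flag[i] = True: process i wants to enter
turn = 0                # whose turn it is to defer
counter = 0             # shared data protected by the algorithm
N = 1000                # iterations per thread (illustrative)

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True              # entry section
        turn = j
        while flag[j] and turn == j:
            pass                    # busy wait
        counter += 1                # critical section
        flag[i] = False             # exit section
        # remainder section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000: no increment was lost
```

Without the entry-section handshake, some of the `counter += 1` operations could interleave and be lost, which is exactly the race condition the algorithm prevents.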

Mutex Locks
• A mutex (mutual exclusion) lock is a synchronization mechanism used in operating
systems to prevent concurrent processes or threads from accessing a critical section
simultaneously.
• It ensures mutual exclusion, allowing only one thread or process to execute the critical
section at a time, and thus prevents race conditions.
• A process must acquire the lock before entering a critical section; it releases the lock
when it exits the critical section.
• The acquire()function acquires the lock, and the release() function releases the lock.
Working:
• A mutex lock has a boolean variable available whose value indicates if the lock is
available or not.
• If the lock is available, a call to acquire() succeeds, and the lock is then considered
unavailable.
• A process that attempts to acquire an unavailable lock is blocked until the lock is
released.
Key Concepts of Mutex Locks:
1. Critical Section: A section of code where shared resources are accessed, requiring
synchronization to prevent data inconsistency.
2. Mutex Lock:
1. A binary lock (locked or unlocked) used to protect the critical section.
2. Only one thread or process can acquire the lock at a time.
3. Other threads/processes attempting to acquire the lock must wait until it is
released.
3. Operations:
1. Lock (acquire): If the mutex is available, a thread/process acquires it and enters
the critical section.
2. Unlock (release): Once the critical section is exited, the thread/process releases
the mutex, allowing others to acquire it.
The definition of acquire() is as follows:
acquire()
{
    while (!available)
        ;  /* busy wait */
    available = false;
}
The definition of release() is as follows:
release()
{
    available = true;
}
Solution to the critical-section problem using mutex locks:
do {
    acquire lock
        critical section
    release lock
        remainder section
} while (true);
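The acquire/release pattern above can be sketched with Python's `threading.Lock` standing in for the pseudocode's acquire() and release() (the worker function and iteration counts are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()   # the mutex protecting the critical section

def worker():
    global counter
    for _ in range(50_000):
        lock.acquire()    # blocks while the lock is unavailable
        counter += 1      # critical section
        lock.release()    # lets a waiting thread proceed
        # remainder section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: every increment is preserved
```

Because only one thread holds the lock at a time, the read-modify-write on `counter` cannot interleave, and no updates are lost.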
Advantages of Mutex Locks:
1. Mutual Exclusion: Ensures only one thread/process accesses the critical section at a
time.
2. Simple to Use: Well-supported by most programming languages and operating
systems.
3. Thread Safety: Helps avoid race conditions.
Disadvantages of Mutex Locks:
1. Deadlocks:
1. Can occur if multiple threads hold locks and wait indefinitely for others.
2. Overhead:
1. Lock acquisition and release involve some computational cost.
3. Priority Inversion:
1. A high-priority thread might be waiting for a low-priority thread holding the
lock, causing delays.
Semaphore:
• Semaphores are most often used to synchronize operations (to avoid race
conditions) when multiple processes access a common, non-shareable resource.
• Semaphores are integer variables for which only two (atomic) operations are
defined, the wait (P) and signal operations, whose definitions in pseudocode are
shown in the following figure.

• To indicate a process has gained access to the resource, the process decrements the
semaphore.
• Modifications to the integer value of the semaphore in the wait and signal operations
must be executed indivisibly.
• When one process modifies the semaphore value, no other process can
simultaneously modify that same semaphore value.
• Access to the semaphore is provided by a series of semaphore system calls.
• Semaphores can be used to deal with the n-processes critical section problem, where
the n-processes share a semaphore mutex (mutual exclusion) initialized to 1.
Each process Pi is organized as shown:

Implementation
• The big problem with the semaphores described above is the busy loop in the wait
operation (busy waiting), which consumes CPU cycles without doing any useful work.
• This type of lock is known as a spinlock, because the lock just sits there and spins
while it waits.
• While this is generally a bad thing, it has the advantage of not invoking
context switches, so it is sometimes used in multiprocessor systems when the
wait time is expected to be short: one thread spins on one processor while another
completes its critical section on another processor.
• An alternative approach is to block a process when it is forced to wait for an
unavailable semaphore, and swap it out of the CPU.
• In this implementation each semaphore maintains a list of processes that are
blocked waiting for it, so that one of those processes can be woken up and swapped
back in when the semaphore becomes available. (Whether it gets swapped back into
the CPU immediately or has to wait in the ready queue for a while
is a scheduling problem.)
The new definition of a semaphore and the corresponding wait and signal operations are
shown as follows:

There are two types of Semaphores:


1. Binary Semaphore
2. Counting Semaphore

Binary Semaphore:
• The value of a binary semaphore can range only between 0 and 1.
• A binary semaphore must be initialized with 1 or 0, and the completion of P and V
operations must alternate.
• If the semaphore is initialized with 1, then the first completed operation must be P.
• If the semaphore is initialized with 0, then the first completed operation must be
V.
• Either operation will block if it is attempted twice consecutively, since
completions of P and V must alternate.
• Binary semaphores are known as mutex locks as they are locks that provide
mutual exclusion. Binary semaphores are used to deal with the critical section
problem for multiple processes.
Counting Semaphore
• The value of a counting semaphore can range over an unrestricted domain.
• Counting semaphores can be used to control access to a given resource consisting
of a finite number of instances.
• Counting semaphores maintain a count of the number of times a resource is given.
• The semaphore is initialized to the number of resources available. Each process that
wishes to use a resource performs a wait() operation on the semaphore. When a
process releases a resource, it performs a signal() operation.
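A counting semaphore controlling a finite resource pool can be sketched with Python's `threading.Semaphore`; the pool size of 3 and the thread count of 8 below are illustrative, as are the bookkeeping counters used to observe the behavior:

```python
import threading
import time

pool = threading.Semaphore(3)   # initialized to the number of resource instances
in_use = 0                      # how many threads currently hold a resource
peak = 0                        # highest concurrency observed
state_lock = threading.Lock()   # protects the two counters above

def worker():
    global in_use, peak
    pool.acquire()              # wait(): claim one resource instance
    with state_lock:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.01)            # simulate using the resource
    with state_lock:
        in_use -= 1
    pool.release()              # signal(): return the instance

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3, the semaphore's initial value
```

However many threads compete, the semaphore's count guarantees that at most three are ever past their wait() at the same time.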

Drawbacks of Semaphore:
• They are essentially shared global variables
• Access to semaphores can come from anywhere in a program.
• There is no control or guarantee of proper usage.
• They serve two purposes mutual exclusion and scheduling constraint

Classic problems of synchronization


A number of synchronization problems as examples of a large class of concurrency-control
problems are discussed. These problems are used for testing nearly every newly proposed
synchronization scheme. Semaphores are used for synchronization.

1. Bounded-buffer problem (or producer-consumer problem)


The bounded-buffer problem is commonly used to illustrate the power of synchronization
primitives.
In this problem the producer and consumer processes share the following data structures:

• Here the pool consists of n buffers, each capable of holding one item.
• The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is
initialized to the value 1.
• The empty and full semaphores count the number of empty and full buffers.
• The semaphore empty is initialized to the value n, and the semaphore full is
initialized to the value 0.
The code below can be interpreted as the producer producing full buffers for the consumer or
as the consumer producing empty buffers for the producer.
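The three-semaphore scheme just described can be sketched in Python; the buffer size n = 4 and the ten items are illustrative:

```python
import threading
from collections import deque

n = 4
buffer = deque()                 # the pool of buffers (a FIFO queue here)
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer pool
empty = threading.Semaphore(n)   # counts empty buffers, initialized to n
full = threading.Semaphore(0)    # counts full buffers, initialized to 0
consumed = []

def producer():
    for item in range(10):
        empty.acquire()          # wait(empty): claim an empty slot
        mutex.acquire()          # wait(mutex): enter critical section
        buffer.append(item)
        mutex.release()          # signal(mutex)
        full.release()           # signal(full): one more full slot

def consumer():
    for _ in range(10):
        full.acquire()           # wait(full): wait for a full slot
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()          # signal(empty): one more empty slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The producer blocks on empty when all n slots are occupied, and the consumer blocks on full when the buffer is empty, so neither can overrun the bounded pool.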

2. Readers-Writers Problem:


In the readers-writers problem there are some processes (termed readers) that only read the
shared data and never change it, and there are other processes (termed writers) that may
change the data in addition to, or instead of, reading it. There is no limit to how many readers
can access the data simultaneously, but when a writer accesses the data, it needs exclusive
access.
This synchronization problem is referred to as the readers-writers problem.

Example:
Suppose that a database is to be shared among several concurrent processes. Some of these
processes may want only to read the database, whereas others may want to update (that is, to
read and write) it. We distinguish between these two types of processes by referring to the
former as readers and to the latter as writers.
Obviously, if two readers access the shared data simultaneously, no adverse effects will
result. However, if a writer and some other process (either a reader or a writer) access the
database simultaneously, chaos may ensue. To ensure that these difficulties do not arise, we
require that the writers have exclusive access to the shared database while writing to the
database. This synchronization problem is referred to as the readers–writers problem.

There are several variations of the readers-writers problem, most centered around the relative
priorities of readers versus writers.
• The first readers-writers problem gives priority to readers. In this problem, if a reader
wants access to the data and no writer is already accessing it, then access is
granted to the reader. A solution to this problem can lead to starvation of the writers, as
there could always be more readers coming along to access the data.
• The second readers-writers problem gives priority to the writers. In this problem, when
a writer wants access to the data it jumps to the head of the queue: all waiting readers
are blocked, and the writer gets access to the data as soon as it becomes available. In
this solution the readers may be starved by a steady stream of writers.
In the solution to the first readers–writers problem, the reader processes share the following
data structures:

• The semaphores mutex and rw_mutex are initialized to 1; read_count is initialized to 0.


The semaphore rw_mutex is common to both reader and writer processes.
• The mutex semaphore is used to ensure mutual exclusion when the variable read_count
is updated.
• The read_count variable keeps track of how many processes are currently reading the
object. The semaphore rw_mutex functions as a mutual-exclusion semaphore for the
writers.
• rw_mutex is also used by the first reader that enters and the last reader that exits the
critical section. It is not used by readers that enter or exit while other readers are in
their critical sections.
The following code is an example of the first readers-writers problem, and involves an
important counter and two binary semaphores:

If a writer is in the critical section and n readers are waiting, then one reader is queued on
rw_mutex, and n − 1 readers are queued on mutex. Also observe that, when a writer executes
signal(rw_mutex), we may resume the execution of either the waiting readers or a single waiting
writer. The selection is made by the scheduler.
The readers–writers problem and its solutions have been generalized to provide reader–writer
locks on some systems. Acquiring a reader–writer lock requires specifying the mode of the
lock: either read or write access. When a process wishes only to read shared data, it requests
the reader–writer lock in read mode. A process wishing to modify the shared data must request
the lock in write mode. Multiple processes are permitted to concurrently acquire a reader–
writer lock in read mode, but only one process may acquire the lock for writing, as exclusive
access is required for writers.
Reader–writer locks are most useful in the following situations:
• In applications where it is easy to identify which processes only read shared data and
which processes only write shared data.
• In applications that have more readers than writers. This is because reader–writer locks
generally require more overhead to establish than semaphores or mutual-exclusion
locks. The increased concurrency of allowing multiple readers compensates for the
overhead involved in setting up the reader–writer lock.
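The first readers-writers solution described above can be sketched in Python using the shared data the text names: the semaphores mutex and rw_mutex (both initialized to 1) and the read_count counter. The three writers and five readers below are illustrative:

```python
import threading

mutex = threading.Semaphore(1)      # protects read_count
rw_mutex = threading.Semaphore(1)   # exclusive access for writers
read_count = 0                      # number of readers currently reading
data = 0                            # the shared object
observed = []                       # values seen by readers

def reader():
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        rw_mutex.acquire()          # first reader locks out writers
    mutex.release()
    observed.append(data)           # reading is performed
    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        rw_mutex.release()          # last reader lets writers back in
    mutex.release()

def writer():
    global data
    rw_mutex.acquire()              # exclusive access
    data += 1                       # writing is performed
    rw_mutex.release()

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(data)  # 3: all three writes were applied exclusively
```

Readers that arrive while other readers are inside never touch rw_mutex, so they overlap freely; only the first reader in and the last reader out interact with the writers' semaphore.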

3. Dining-Philosophers Problem:
The dining philosophers problem is a classic synchronization problem involving the allocation
of limited resources among a group of processes in a deadlock-free and starvation-free manner.
Consider five philosophers sitting around a table, in which there are five chopsticks evenly
distributed and an endless bowl of rice in the center, as shown in the diagram below. (There is
exactly one chopstick between each pair of dining philosophers.)
These philosophers spend their lives alternating between two activities: eating and
thinking. When it is time for a philosopher to eat, it must first acquire two chopsticks - one
from their left and one from their right. When a philosopher thinks, it puts down both chopsticks
in their original locations.

The dining-philosophers problem is considered a classic synchronization problem because it is


an example of a large class of concurrency-control problems. It is a simple representation of
the need to allocate several resources among several processes in a deadlock-free and
starvation-free manner.
One simple solution is to represent each chopstick with a semaphore.
• A philosopher tries to grab a chopstick by executing a wait() operation on that
semaphore.
• She releases her chopsticks by executing the signal() operation on the appropriate
semaphores.
• Thus, the shared data are Semaphore chopstick[5]; where all elements of chopstick are
initialized to 1.
• This solution must be rejected as it could create a deadlock. Suppose that all five
philosophers become hungry simultaneously and each grabs her left chopstick. All the
elements of chopstick will now be equal to 0. When each philosopher tries to grab her
right chopstick, she will be delayed forever.

Potential solution to the problem includes:


• Allow at most four philosophers to be sitting simultaneously at the table.
• Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do
this, she must pick them up in a critical section).
• Use an asymmetric solution: that is, an odd-numbered philosopher picks up first her left
chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her
right chopstick first and then her left chopstick.
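The asymmetric solution from the last bullet can be sketched in Python, with one semaphore per chopstick as described above (the single round of eating per philosopher is illustrative):

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N   # how many times each philosopher has eaten

def philosopher(i):
    left, right = i, (i + 1) % N
    if i % 2 == 1:
        first, second = left, right   # odd-numbered: left, then right
    else:
        first, second = right, left   # even-numbered: right, then left
    chopstick[first].acquire()        # wait() on the first chopstick
    chopstick[second].acquire()       # wait() on the second chopstick
    meals[i] += 1                     # eat
    chopstick[second].release()       # signal()
    chopstick[first].release()        # signal()
    # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [1, 1, 1, 1, 1]: everyone ate, no deadlock
```

Because not every philosopher reaches for the same-side chopstick first, the circular wait condition cannot form: even if all five become hungry at once, at least one philosopher can always acquire both chopsticks and proceed.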
