UNIT -3
SYNCHRONIZATION TOOLS
A critical section is a part of a program where shared resources (like memory, files, or
variables) are accessed by multiple processes or threads. To avoid problems such as race
conditions and data inconsistency, only one process/thread should execute the critical section
at a time using synchronization techniques.
Structure of a Critical Section
1. Entry Section
The process requests permission to enter the critical section.
Synchronization tools (e.g., mutex, semaphore) are used to control access.
2. Critical Section: The actual code where shared resources are accessed or modified.
3. Exit Section: The process releases the lock or semaphore, allowing other processes to enter
the critical section.
4. Remainder Section: The rest of the program that does not involve shared resource access.
Peterson’s Solution
BIET College (A) Asst prof CSE Department [OS CSM, CSE, ALLIED BRANCHES R23 Regulation] prepared By SHAIK ARAFATH
1 of 16
Peterson’s Algorithm is a classic software-based solution for the critical section problem in
operating systems. It ensures mutual exclusion between two processes, meaning only one
process can access a shared resource at a time, thus preventing race conditions.
The algorithm uses two shared variables:
flag[i]: shows whether process i wants to enter the critical section.
turn: indicates whose turn it is to enter if both processes want to access the critical section
at the same time.
The Algorithm
For process Pi:
    do {
        flag[i] = true;               // Pi wants to enter
        turn = j;                     // give the turn to Pj
        while (flag[j] && turn == j); // wait while Pj also wants in and it is Pj's turn
        // Critical Section
        flag[i] = false;              // Pi leaves the critical section
        // Remainder Section
    } while (true);

For process Pj (symmetric):
    do {
        flag[j] = true;
        turn = i;
        while (flag[i] && turn == i);
        // Critical Section
        flag[j] = false;
        // Remainder Section
    } while (true);
Step-by-Step Explanation
1. Intent to Enter: A process sets its flag to true when it wants to enter the critical section.
2. Turn Assignment: It sets the turn variable to the other process, giving the other process the
chance to enter first if it also wants to.
3. Waiting Condition: A process waits if the other process also wants to enter and it is the
other’s turn.
4. Critical Section: Once the condition is false, the process enters the critical section safely.
5. Exit: On leaving, the process resets its flag to false, allowing the other process to proceed.
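As a sketch, the five steps above can be exercised in Python. This illustrates the logic only: a production C version would need memory barriers, and here CPython's execution model is what keeps the naive loads and stores well behaved.

```python
import threading

flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # whose turn it is when both want to enter
counter = 0             # shared resource

def peterson_worker(i, iterations):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True          # 1. intent to enter
        turn = j                # 2. give the other thread the turn
        while flag[j] and turn == j:
            pass                # 3. busy-wait in the entry section
        counter += 1            # 4. critical section
        flag[i] = False         # 5. exit section

threads = [threading.Thread(target=peterson_worker, args=(i, 10000)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000 when mutual exclusion holds
```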
Mutex Locks
mutex lock makes it possible to implement mutual exclusion by limiting the number of threads
or processes that can simultaneously acquire the lock. A single thread or procedure has to first
try to obtain the mutex lock for something that is shared before it can access it. The seeking
string or procedure gets halted and placed in a state of waiting as long as the encryption key turns
into accessible if it is being organized by a different thread or process.
Components of Mutex Locks
Below we discuss some of the main components of Mutex Locks.
Mutex Variable − A mutex variable is used to represent the lock. It is a data structure that
maintains the state of the lock and allows threads or processes to acquire and release it.
Lock Acquisition − Threads or processes can attempt to acquire the lock by requesting it. If the
lock is available, the requesting thread or process gains ownership of the lock. Otherwise, it
enters a waiting state until the lock becomes available.
Lock Release − Once a thread or process has finished using the shared resource, it releases the
lock, allowing other threads or processes to acquire it.
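A minimal sketch of these three components using Python's threading.Lock; the names balance and deposit are illustrative:

```python
import threading

lock = threading.Lock()        # the mutex variable
balance = 0                    # shared resource

def deposit(amount, times):
    global balance
    for _ in range(times):
        lock.acquire()         # lock acquisition: blocks if another thread holds it
        balance += amount      # critical section
        lock.release()         # lock release: lets a waiting thread proceed

threads = [threading.Thread(target=deposit, args=(1, 5000)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 20000
```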
Types of Mutex Locks
Mutex locks come in a variety of forms that offer varying levels of capabilities and behavior.
Below are a few different kinds of mutex locks that are frequently used −
Recursive Mutex
A recursive mutex allows the same thread or process to acquire the lock multiple times
without blocking itself. It keeps track of the number of times it has been acquired and
requires the same number of releases before it is completely unlocked.
Example
Consider a file system where the directory structure is represented as a tree. Each node in the
tree stands for a directory, and several threads traverse the tree concurrently to carry out
different operations. A recursive mutex can prevent conflicts here: a thread locks a directory
node, performs its operations, and re-acquires the same lock as it descends into
subdirectories. Because the recursive mutex permits multiple acquisitions of the same lock by
the same thread without blocking, traversal remains correctly synchronized.
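A short sketch of this traversal using Python's threading.RLock; the tree contents are made up for illustration, and a plain Lock would deadlock on the nested re-acquisition:

```python
import threading

rlock = threading.RLock()   # recursive mutex: same thread may re-acquire it
visited = []

tree = {"src": {"lib": {}}, "docs": {}}   # hypothetical directory tree under "root"

def traverse(name, children):
    with rlock:                           # re-acquired at every level by the same thread
        visited.append(name)
        for child, sub in children.items():
            traverse(child, sub)          # descend into subdirectories

traverse("root", tree)
print(visited)  # ['root', 'src', 'lib', 'docs']
```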
Error-Checking Mutex
An error-checking mutex performs additional error checking during lock operations. By
rejecting repeated lock acquisition, it guarantees that a thread or process does not acquire a
mutex lock it already holds.
Example
Consider a multi-threaded program in which several threads update a shared counter
variable. The counter is protected by a mutex lock to avoid conflicts and race conditions. An
error is reported if a thread unintentionally attempts to acquire the mutex lock it already
holds. Such coding mistakes can be found and quickly fixed thanks to the error-checking
mutex.
Timed Mutex
A thread or process can try to acquire a timed mutex for a predetermined amount of time. If
the lock does not become available within the allotted time, the acquire operation fails and
the thread or process can respond appropriately.
Example
Consider a real-time system with a number of tasks or threads that need access to a limited,
fixed set of resources, where each task requires a particular resource to complete. If a task
cannot obtain the needed resource within a reasonable period of time, it may need to take an
alternative course of action or report an error. Using a timed mutex, each task can attempt to
acquire the resource for a set amount of time. If the resource does not become available
within the allotted time, the task can fall back to an alternative method or take whatever
action the timeout condition requires.
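A sketch of this timed behaviour using the timeout parameter of Lock.acquire in Python; the thread names hog and impatient are illustrative:

```python
import threading
import time

lock = threading.Lock()
results = []

def hog():
    with lock:
        time.sleep(0.5)                   # hold the resource for a while

def impatient():
    got = lock.acquire(timeout=0.1)       # try for at most 0.1 s
    if got:
        lock.release()
        results.append("acquired")
    else:
        results.append("timed out")       # fall back to an alternative action

t1 = threading.Thread(target=hog); t1.start()
time.sleep(0.05)                          # let the hog grab the lock first
t2 = threading.Thread(target=impatient); t2.start()
t1.join(); t2.join()
print(results)  # ['timed out']
```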
Priority Inheritance Mutex
A priority inheritance mutex (related to the priority ceiling mutex) helps reduce the priority
inversion problem. When a low-priority thread or process holds the lock and a higher-priority
thread or process needs it, the mutex temporarily raises the priority of the low-priority holder
to that of the highest-priority waiter until the lock is released.
Example
Consider a real-time system with several threads running at different priorities. These
threads access shared resources protected by mutex locks. A priority inheritance mutex can
be used to avoid priority inversion, which happens when a lower-priority thread blocks a
higher-priority thread by holding the lock. To ensure that the highest-priority thread gets
timely access to the resource, the mutex temporarily raises the lower-priority thread's
priority to match that of the highest-priority thread waiting for the lock.
Read-Write Mutex
A read-write lock is a synchronization mechanism that permits several threads or processes
to read a shared resource concurrently while enforcing mutual exclusion during write
operations. Strictly speaking, it is not a plain mutex lock, but it serves a closely related
purpose.
Example
In a real-time online video application, one thread writes new frames into a shared buffer
while several other threads read frames from it. A read-write lock can be used to permit
simultaneous reading but exclusive writing. Multiple reader threads can hold the read lock
concurrently for unrestricted access to the frames. The writer thread acquires the write lock
only when it needs to modify the buffer, ensuring that no other threads can read or write
while the update is in progress.
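Python's standard library does not ship a read-write lock, so the sketch below builds a minimal reader-preference one from a Condition; it is a teaching sketch, not a production implementation:

```python
import threading

class RWLock:
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def acquire_read(self):
        with self._cond:
            self._readers += 1            # readers may enter concurrently

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()   # last reader readmits writers

    def acquire_write(self):
        self._cond.acquire()              # writers hold the condition's lock...
        while self._readers > 0:
            self._cond.wait()             # ...and wait until no readers remain

    def release_write(self):
        self._cond.release()

rw = RWLock()
frames = []

rw.acquire_write()
frames.append("frame-1")   # exclusive write
rw.release_write()

rw.acquire_read()
snapshot = list(frames)    # shared read: many readers could do this at once
rw.release_read()
print(snapshot)  # ['frame-1']
```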
Semaphores in Process Synchronization : (VVV IMP)
In multiprogramming systems, multiple processes may need to access shared resources like
files, printers, or memory. To handle this, operating systems use synchronization mechanisms.
One of the most widely used mechanisms is the Semaphore.
A semaphore works using two fundamental operations:
1. Wait (P operation / down)
Decreases the semaphore value.
If the value becomes negative, the process is blocked until the resource becomes available.
2. Signal (V operation / up)
Increases the semaphore value.
If there are waiting processes, one of them gets unblocked.
Types of Semaphores
Semaphores are mainly of two Types:
1. Counting Semaphore
Used when multiple instances of a resource exist.
The semaphore value can range from 0 to N, where N is the number of resource instances.
Example: Managing access to a pool of 5 printers.
2. Binary Semaphore
Special case of counting semaphore with only two values: 0 and 1.
Works like a lock: either the resource is free (1) or busy (0).
Example: Managing access to a single critical section.
Working of Semaphore
A semaphore works by maintaining a counter that controls access to a specific resource,
ensuring that no more than the allowed number of processes access the resource at the same
time.
To achieve synchronization, every critical section of code is surrounded by the two
operations discussed above: Wait (P operation) before it and Signal (V operation) after it.
Example: Let’s consider two processes P1 and P2 sharing a semaphore S, initialized to 1:
State 1: Both processes are in their non-critical sections, and S = 1.
State 2: P1 enters the critical section. It performs wait(S), so S = 0. P2 continues in the non-
critical section.
State 3: If P2 now wants to enter, it cannot proceed since S = 0. It must wait until S > 0.
State 4: When P1 finishes, it performs signal(S), making S = 1. Now P2 can enter its critical
section and again sets S = 0.
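The walkthrough above can be sketched with Python's threading.Semaphore; the two "processes" are run one after the other purely to keep the output deterministic:

```python
import threading

S = threading.Semaphore(1)       # binary semaphore, initially 1
order = []

def process(name):
    S.acquire()                  # wait(S): S becomes 0, others must wait
    order.append(name + " enters")
    order.append(name + " leaves")
    S.release()                  # signal(S): S becomes 1 again

t1 = threading.Thread(target=process, args=("P1",))
t2 = threading.Thread(target=process, args=("P2",))
t1.start(); t1.join()
t2.start(); t2.join()
print(order)  # ['P1 enters', 'P1 leaves', 'P2 enters', 'P2 leaves']
```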
Features of Semaphores
Mutual Exclusion: Semaphore ensures that only one process accesses a shared resource at a
time.
Process Synchronization: Semaphore coordinates the execution order of multiple
processes.
Resource Management: Limits access to a finite set of resources, like printers, devices, etc.
Reader-Writer Problem: Allows multiple readers but restricts the writers until no reader is
present.
Avoiding Deadlocks: Prevents deadlocks by controlling the order of allocation of resources.
Monitors in Process Synchronization
Monitors are a high-level synchronization mechanism that simplify process and thread
synchronization. They are built on top of locks and are mostly used in multithreading systems
like Java.
Key Points:
A monitor is similar to a class/module that groups shared variables and the functions that
operate on them.
Only one thread can execute inside a monitor at a time, ensuring automatic mutual
exclusion.
In Java, monitors are implemented using classes and synchronized methods.
There is no monitor keyword in Java – the functionality is achieved through synchronized.
How to Implement Monitors
Monitors are implemented at the programming language level, not directly by the operating
system.
In Java, monitor-like behavior is achieved using the synchronized keyword, which ensures
that only one thread can execute inside the monitor at a time.
A monitor encapsulates both shared data (critical resource) and the operations that access or
modify it.
Mutual exclusion is enforced automatically, unlike semaphores where programmers must
explicitly call wait() and signal().
Synchronization can be applied through synchronized methods or synchronized blocks.
Condition variables are used to control thread waiting and signaling.
The methods wait(), notify(), and notifyAll() provide process coordination.
Condition Variables in Monitors
A condition variable allows threads to wait until a certain condition becomes true. Condition
variables are always used inside a monitor. Three main operations are defined on them:
wait(): temporarily releases the monitor lock and puts the thread to sleep until it is signaled.
signal(): wakes up one waiting thread (if any).
broadcast() (in some languages): wakes up all waiting threads.
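A monitor-style sketch in Python using a Condition, with wait() and notify() used exactly as described above; the Mailbox class is illustrative:

```python
import threading

class Mailbox:
    def __init__(self):
        self._cond = threading.Condition()   # the monitor lock + condition variable
        self._message = None

    def put(self, msg):
        with self._cond:                     # only one thread inside at a time
            self._message = msg
            self._cond.notify()              # signal(): wake one waiting thread

    def take(self):
        with self._cond:
            while self._message is None:
                self._cond.wait()            # releases the monitor lock and sleeps
            msg, self._message = self._message, None
            return msg

box = Mailbox()
out = []
consumer = threading.Thread(target=lambda: out.append(box.take()))
consumer.start()            # blocks inside wait() until a message arrives
box.put("hello")
consumer.join()
print(out)  # ['hello']
```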
Classical Problems of Synchronization
In this tutorial we will discuss various classical problems of synchronization.
Semaphores can be used in other synchronization problems besides mutual exclusion.
Below are some classical problems depicting flaws of process synchronization in systems
where cooperating processes are present.
We will discuss the following three problems:
1. Bounded Buffer (Producer-Consumer) Problem
2. Dining Philosophers Problem
3. The Readers Writers Problem
Bounded Buffer Problem
Because the buffer pool has a maximum size, this problem is often called the Bounded buffer
problem.
This problem is generalised in terms of the Producer Consumer problem, where
a finite buffer pool is used to exchange messages between producer and consumer
processes.
Solution to this problem is, creating two counting semaphores "full" and "empty" to keep
track of the current number of full and empty buffers respectively.
Here producers produce items and consumers consume them, but both operate on one
buffer slot at a time.
The main complexity of this problem is that we must maintain counts of both the empty
and full buffers that are available.
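A sketch of the solution described above, with counting semaphores full and empty plus a mutex guarding the buffer itself:

```python
import threading
from collections import deque

N = 3                            # bounded buffer capacity
buffer = deque()
mutex = threading.Lock()         # protects the buffer structure
empty = threading.Semaphore(N)   # counts free slots (initially N)
full = threading.Semaphore(0)    # counts filled slots (initially 0)
consumed = []

def producer(items):
    for item in items:
        empty.acquire()          # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()           # one more filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()           # wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # one more free slot

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # 0 through 9, in order
```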
Dining Philosophers Problem
The dining philosopher's problem involves the allocation of limited
resources to a group of processes in a deadlock-free and starvation-free
manner.
There are five philosophers sitting around a table, with five chopsticks/forks kept beside
them and a bowl of rice in the centre. When a philosopher wants to eat, he uses two
chopsticks: one from his left and one from his right. When a philosopher wants to think, he
puts both chopsticks back in their original place.
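One common deadlock-free solution, sketched below, numbers the chopsticks and has every philosopher pick up the lower-numbered one first, which breaks the circular wait:

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)   # global chopstick ordering
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1      # eating with both chopsticks held
        # thinking happens outside the locks

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # [100, 100, 100, 100, 100]
```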
The Readers Writers Problem
In this problem there are some processes (called readers) that only read the shared data
and never change it, and other processes (called writers) that may change the data in
addition to, or instead of, reading it.
There are various types of the readers-writers problem, most centred on the relative
priorities of readers and writers.
The main complexity of this problem comes from allowing more than one reader to
access the data at the same time.
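A sketch of the classic first readers-writers solution with semaphores: read_count tracks active readers, the first reader locks out writers, and the last reader readmits them. The writer is run before the readers only to make the output deterministic.

```python
import threading

resource = threading.Semaphore(1)     # held by writers, and by the first reader
count_mutex = threading.Lock()        # protects read_count
read_count = 0
data = {"value": 0}
reads = []

def reader():
    global read_count
    with count_mutex:
        read_count += 1
        if read_count == 1:
            resource.acquire()        # first reader blocks writers
    reads.append(data["value"])       # many readers may be here at once
    with count_mutex:
        read_count -= 1
        if read_count == 0:
            resource.release()        # last reader readmits writers

def writer(v):
    resource.acquire()                # exclusive access
    data["value"] = v
    resource.release()

w = threading.Thread(target=writer, args=(42,))
w.start(); w.join()
rs = [threading.Thread(target=reader) for _ in range(3)]
for t in rs: t.start()
for t in rs: t.join()
print(reads)  # [42, 42, 42]
```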
Deadlocks:
A deadlock is a situation in a computing environment where a set of processes gets
permanently stuck because each process is waiting for a resource held by another process, and
none of them can proceed.
Deadlock System model
A deadlock occurs when a set of processes is stalled because each process is holding a
resource and waiting for a resource held by another process. In the diagram below, for
example, Process 1 is holding Resource 1 and waiting for Resource 2, while Process 2 is
holding Resource 2 and waiting for Resource 1.
System Model :
For the purposes of deadlock discussion, a system can be modeled as a collection of limited
resources that can be divided into different categories and allocated to a variety of processes,
each with different requirements.
Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other resources are
examples of resource categories.
By definition, all resources within a category are equivalent, and any of the resources within
that category can equally satisfy a request from that category. If this is not the case (i.e. if
there is some difference between the resources within a category), then that category must be
subdivided further. For example, the term “printers” may need to be subdivided into “laser
printers” and “color inkjet printers.”
Some categories may only have one resource.
For all kernel-managed resources, the kernel keeps track of which resources are free and
which are allocated, to which process they are allocated, and a queue of processes waiting
for each resource to become available. Application-managed resources (e.g., binary or
counting semaphores) can be controlled with mutexes or wait() and signal() calls.
When every process in a set is waiting for a resource that is currently assigned to another
process in the set, the set is said to be deadlocked.
Deadlock characterization:
Mutual Exclusion
There should be a resource that can only be held by one process at a time. In the diagram below,
there is a single instance of Resource 1 and it is held by Process 1 only.
Hold and Wait
A process can hold multiple resources and still request more resources from other processes
which are holding them. In the diagram given below, Process 2 holds Resource 2 and Resource 3
and is requesting the Resource 1 which is held by Process 1.
No Preemption:
A resource cannot be preempted from a process by force. A process can only release a resource
voluntarily. In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will
only be released when Process 1 relinquishes it voluntarily after its execution is complete.
Circular Wait
A process is waiting for the resource held by the second process, which is waiting for the
resource held by the third process and so on, till the last process is waiting for a resource held by
the first process. This forms a circular chain. For example: Process 1 is allocated Resource2 and
it is requesting Resource 1. Similarly, Process 2 is allocated Resource 1 and it is requesting
Resource 2. This forms a circular wait loop.
BIET College (A) Asst prof CSE Department [OS CSM, CSE , ALLIED BRANCHS R23 Regulation] perpared By SHAIK ARAFATH
11 of 16
Methods for handling Deadlocks:
There are four approaches to dealing with deadlocks.
Let's discuss them briefly:
1. Deadlock Prevention
Deadlock Prevention is a method of handling deadlocks where the operating system ensures
that at least one of the necessary conditions for deadlock (mutual exclusion, hold and wait, no
preemption, circular wait) never occurs. By breaking these conditions in advance, the system
prevents deadlock from happening.
2. Deadlock Avoidance
Deadlock Avoidance is a method where the operating system makes decisions dynamically to
ensure the system never enters an unsafe state, thereby avoiding the possibility of deadlock.
This is usually done using algorithms like the Banker’s Algorithm.
Deadlock avoidance mainly relies on algorithms that check resource allocation before making
decisions. The two common types are:
1. Banker’s Algorithm: Used when multiple instances of resources exist. It simulates
allocation and ensures that the system remains in a safe state before granting resources.
2. Resource Allocation Graph (RAG) Algorithm: Used when each resource has only
one instance. It checks for cycles in the graph to avoid unsafe states.
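The safety check at the heart of the Banker's Algorithm can be sketched as follows, using a standard textbook-style set of example matrices:

```python
def is_safe(available, allocation, need):
    """Return (safe?, order): safe if some execution order lets every
    process finish with the currently available resources."""
    work = list(available)
    finish = [False] * len(allocation)
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                work = [w + a for w, a in zip(work, alloc)]  # i finishes, frees resources
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order

# Example with 5 processes and 3 resource types:
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe, order = is_safe(available, allocation, need)
print(safe, order)  # True [1, 3, 4, 0, 2]
```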
3. Deadlock Detection & Recovery
Deadlock Detection: Deadlock detection periodically checks for circular waits and resolves
deadlocks by aborting and restarting processes, releasing their resources. It allows unrestricted
resource access and immediate request fulfillment, enabling online handling. The main
drawback is potential loss due to preemption.
Deadlock Recovery is the process of handling a deadlock after it has occurred. It involves:
1. Process Termination: Aborting one or more deadlocked processes to break the cycle.
2. Resource Preemption: Temporarily taking resources from some processes and reallocating
them to others.
4. Deadlock Ignorance
In the deadlock ignorance method, the OS acts as if deadlock never occurs and completely
ignores it even when it does. This method applies only if deadlocks occur very rarely. The
algorithm is very simple: if a deadlock occurs, simply reboot the system and act as if the
deadlock never occurred. That is why it is called the Ostrich Algorithm.
Deadlock prevention:
Deadlock prevention is a strategy used in computer systems to ensure that different processes
can run smoothly without getting stuck waiting for each other forever. Think of it like a traffic
system where cars (processes) must move through intersections (resources) without getting
into a gridlock.
As we have already discussed that deadlock can only happen if all four of the following
conditions are met simultaneously:
Mutual Exclusion
Hold and Wait
No Preemption
Circular Wait
1. Eliminate Mutual Exclusion
Some resources, like a printer, are inherently non-sharable, so this condition is difficult to
break.
However, sharable resources like read-only files can be accessed by multiple processes at
the same time.
For non-sharable resources, prevention through this method is not possible.
2. Eliminate Hold and Wait
Hold and wait is a condition in which a process holds one resource while simultaneously
waiting for another resource that is being held by a different process. The process cannot
continue until it gets all the required resources.
There are two ways to eliminate hold and wait:
By eliminating wait: The process specifies the resources it requires in advance so that it
does not have to wait for allocation after execution starts.
For Example, Process1 declares in advance that it requires both Resource1 and Resource2.
By eliminating hold: The process has to release all resources it is currently holding before
making a new request.
For Example: Process1 must release Resource2 and Resource3 before requesting Resource1.
3. Eliminate No Preemption
No preemption means resources can’t be taken away once allocated. To prevent this:
Processes must release resources voluntarily: A process gives up resources once it
finishes using them.
Avoid partial allocation: If a process requests resources that are unavailable, it must release
all currently held resources and wait until all required resources are free.
4. Eliminate Circular Wait
Circular wait happens when processes form a cycle, each waiting for a resource held by the
next. To prevent this:
Impose a strict ordering on resources.
Assign each resource a unique number.
Processes can only request resources in increasing order of their numbers.
This prevents cycles, as no process can go backwards in numbering.
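A small sketch of this ordering rule in Python: each lock is numbered, and a helper always acquires in increasing order, so no cycle can form even when threads need overlapping lock sets:

```python
import threading

locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}
log = []

def acquire_in_order(ids):
    for i in sorted(ids):                    # always request in increasing order
        locks[i].acquire()
    log.append(tuple(sorted(ids)))           # work with all locks held
    for i in sorted(ids, reverse=True):
        locks[i].release()

t1 = threading.Thread(target=acquire_in_order, args=([3, 1],))
t2 = threading.Thread(target=acquire_in_order, args=([2, 3],))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(log))  # [(1, 3), (2, 3)]
```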
Deadlock avoidance:
Deadlock avoidance is an operating system (OS) strategy that dynamically prevents deadlocks by
ensuring that resource allocations always maintain a "safe state".
How Deadlock Avoidance Works
1. Maximum Resource Needs:
Each process must declare its maximum resource requirements to the OS before it begins
execution.
2. Resource Allocation State:
The OS keeps track of the current state of resource allocation, including available resources,
allocated resources, and the remaining maximum needs of each process.
3. Safe State:
The core concept is a "safe state". A system is in a safe state if there exists a sequence of
process executions that can complete without any deadlock occurring, even if every process
requests its maximum resources.
4. Dynamic Checking:
When a process requests a resource, the OS checks if granting that request would lead to an
unsafe state.
5. Grant or Wait:
If granting the request keeps the system in a safe state, the resource is allocated to the
process; otherwise, the process must wait.
Deadlock detection:
Deadlock Detection and Recovery is the mechanism of detecting and resolving deadlocks in an
operating system. In operating systems, deadlock recovery is important to keep everything
running smoothly.
Deadlock detection is the process of identifying when processes are stuck waiting for
resources held by other processes.
Recovery is the method of resolving the deadlock to allow the system to continue
functioning.
Detection is done using techniques like Resource Allocation Graphs (RAG) or Wait-for
Graphs.
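Detection on a wait-for graph reduces to cycle detection. A minimal sketch, where an edge A -> B means process A is waiting for a resource held by B and the process names are illustrative:

```python
def has_cycle(graph):
    """Depth-first search with three colors; a back edge to a GRAY node
    means a cycle, i.e. a deadlock in the wait-for graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:      # back edge: cycle found
                return True
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

deadlocked = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}   # P1 -> P2 -> P3 -> P1
healthy    = {"P1": ["P2"], "P2": ["P3"], "P3": []}
print(has_cycle(deadlocked), has_cycle(healthy))  # True False
```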
Once a deadlock is detected, recovery methods include process termination, resource
preemption, or process rollback.
Deadlock Recovery in Operating System
Deadlock recovery is the process of resolving a deadlock after it has already occurred. Unlike
deadlock prevention (which avoids deadlocks in advance) or deadlock avoidance (which
carefully allocates resources to avoid unsafe states), recovery accepts that deadlocks may occur
and focuses on fixing them.
Goals of Deadlock Recovery
1. Free resources locked by processes stuck in the deadlock.
2. Ensure system consistency without corruption or partial updates.
3. Minimize loss of work by carefully choosing which processes to terminate or roll back.
4. Restore normal operation so other processes can continue execution.
Deadlock Recovery Techniques
There are several ways an operating system can recover from a deadlock:
1. Process Termination
The OS can break the deadlock by terminating one or more processes.
Kill All Deadlocked Processes:
Simple but drastic: all processes involved in the deadlock are terminated.
Wasteful since a lot of work may be lost.
2. Resource Preemption:
Instead of killing processes, the OS may take away resources from one process and give them to another to break the cycle.
Challenges in resource preemption:
Which resources to take? Choosing the right resource is tricky.
Victim selection: Pick a process whose rollback cost is lowest.
Starvation risk: A process might repeatedly lose its resources (to avoid this, the OS tracks how
many times a process has been chosen as a victim).
3. Process Rollback:
The OS can roll back one or more processes to a previous safe state before the deadlock
happened.
Rollback requires the system to maintain checkpoints of process states.
Once rolled back, the process can be restarted later when resources are available.
Useful in systems where killing processes is not acceptable (like banking or critical systems).
4. Combination of Methods:
In practice, the OS may use a mix of termination, preemption, and rollback depending on the
system design and type of processes running.
******