OPERATING SYSTEM
UNIT 2
CPU Scheduling in Operating Systems
CPU scheduling is the process of deciding which process will own the CPU
while another process is suspended. The main goal of CPU scheduling is to
ensure that whenever the CPU becomes idle, the OS selects one of the
processes waiting in the ready queue. In multiprogramming, if the long-term
scheduler selects mostly I/O-bound processes, the CPU remains idle much of
the time. An effective scheduler therefore improves resource utilization.
Terminologies Used in CPU Scheduling
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which the process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turnaround Time: Difference between completion time and arrival time.
Turnaround Time = Completion Time – Arrival Time
Waiting Time (W.T.): Difference between turnaround time and burst time.
Waiting Time = Turnaround Time – Burst Time
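The two formulas above can be sketched directly in code (Python is used for illustration; the values plugged in are hypothetical):

```python
# Turnaround and waiting time, exactly as defined above.
def turnaround_time(completion, arrival):
    return completion - arrival

def waiting_time(turnaround, burst):
    return turnaround - burst

# Hypothetical process: arrives at t=3, needs 5 units of CPU, finishes at t=10.
tat = turnaround_time(completion=10, arrival=3)
wt = waiting_time(tat, burst=5)
print(tat, wt)  # 7 2
```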
Things to Take Care While Designing a CPU Scheduling
Algorithm
Different CPU scheduling algorithms have different structures, and the
choice of a particular algorithm depends on a variety of factors. Many
criteria have been proposed for comparing CPU scheduling algorithms.
The criteria include the following:
● CPU Utilization: The main purpose of any CPU scheduling algorithm is
to keep the CPU as busy as possible. Theoretically, CPU utilization can
range from 0 to 100 percent, but in a real system it typically varies
from 40 to 90 percent depending on the system load.
● Throughput: The number of processes completed per unit of time is
called throughput. It may vary depending on the length or duration of
the processes.
● Turnaround Time: For a particular process, an important criterion is
how long it takes to execute that process. The time elapsed from
submission of the process to its completion is known as the turnaround
time. Turnaround time is the total time spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing
I/O.
● Waiting Time: The scheduling algorithm does not affect the time
required to complete a process once it has started executing. It only
affects the waiting time of the process, i.e. the time the process
spends waiting in the ready queue.
● Response Time: In an interactive system, turnaround time is not the
best criterion. A process may produce some output early and continue
computing new results while previous results are being shown to the
user. Another measure is therefore the time from submission of a
request until the first response is produced. This measure is called
response time.
What Are The Different Types of CPU Scheduling
Algorithms?
There are mainly two types of scheduling methods:
● Preemptive Scheduling: Preemptive scheduling is used when a
process switches from running state to ready state or from the
waiting state to the ready state.
● Non-Preemptive Scheduling: Non-preemptive scheduling is used when a
process terminates, or when a process switches from the running state
to the waiting state.
1. First Come First Serve
FCFS is considered the simplest of all operating system scheduling
algorithms. The first come first serve scheduling algorithm states that the
process that requests the CPU first is allocated the CPU first; it is
implemented using a FIFO queue.
Characteristics of FCFS
● FCFS is a non-preemptive CPU scheduling algorithm: once a process
gets the CPU, it runs to completion.
● Tasks are always executed on a first-come, first-serve basis.
● FCFS is easy to implement and use.
● This algorithm is not very efficient in performance, and the wait
time is quite high.
Advantages of FCFS
● Easy to implement
● First come, first serve method
Disadvantages of FCFS
● FCFS suffers from the convoy effect.
● The average waiting time is much higher than in the other algorithms.
● Although FCFS is very simple and easy to implement, it is not very
efficient.
Examples to Show Working of Non-Preemptive First come First Serve
CPU Scheduling Algorithm
Example-1: Consider the following table of arrival time and burst time for
five processes P1, P2, P3, P4 and P5.
Processes   Arrival Time   Burst Time
P1          0              4
P2          1              3
P3          2              1
P4          3              2
P5          4              5
SOLVE ABOVE QUESTION BY CLASS NOTES
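As a sketch (not the class-notes solution), the table above can be run through a small non-preemptive FCFS simulation; the helper function below is illustrative:

```python
# Minimal non-preemptive FCFS: processes run in arrival order, each to
# completion. Names and times are taken from the example table above.
def fcfs(processes):
    """processes: list of (name, arrival, burst)."""
    time, results = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)   # CPU may sit idle until the process arrives
        time += burst               # non-preemptive: run to completion
        tat = time - arrival        # Turnaround = Completion - Arrival
        results[name] = (time, tat, tat - burst)   # (CT, TAT, WT)
    return results

table = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 5)]
res = fcfs(table)
avg_wt = sum(r[2] for r in res.values()) / len(res)
print(res, avg_wt)
```

For this table the sketch yields an average waiting time of 3.8, which can be checked against the class-notes solution.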
Shortest Job First (SJF) CPU Scheduling (Non-Preemptive)
Shortest job first (SJF), also called shortest job next (SJN), is a
scheduling policy that selects the waiting process with the smallest
execution time to execute next. SJF can be preemptive or non-preemptive.
Characteristics of SJF Scheduling:
● Shortest Job first has the advantage of having a minimum average
waiting time among all scheduling algorithms.
● It is a Greedy Algorithm.
● It may cause starvation if shorter processes keep coming. This
problem can be solved using the concept of ageing.
● It is practically infeasible, as the Operating System may not know
the burst times in advance and therefore cannot sort on them. While it
is not possible to predict execution time exactly, several methods can
be used to estimate it, such as a weighted average of previous
execution times.
● SJF can be used in specialized environments where accurate
estimates of running time are available.
Algorithm:
● Sort all the processes according to arrival time.
● Then select the process with the minimum arrival time and, among
those, the minimum burst time.
● After that process completes, pool the processes that arrived during
its execution and select from the pool the process with the minimum
burst time.
Examples to show working of Non-Preemptive Shortest Job First CPU
Scheduling Algorithm:
Example-1: Consider the following table of arrival time and burst time for
five processes P1, P2, P3, P4 and P5.
Process   Burst Time   Arrival Time
P1        6 ms         2 ms
P2        2 ms         5 ms
P3        8 ms         1 ms
P4        3 ms         0 ms
P5        4 ms         4 ms
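The algorithm steps above can be sketched in code against this table; this is an illustrative simulation, not the class-notes solution:

```python
# Minimal non-preemptive SJF: repeatedly pick the arrived process with
# the smallest burst time. Table values are from the example above.
def sjf(processes):
    """processes: list of (name, arrival, burst)."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    time, results = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                     # CPU idles until the next arrival
            time = pending[0][1]
            ready = [p for p in pending if p[1] <= time]
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        pending.remove((name, arrival, burst))
        time += burst
        tat = time - arrival
        results[name] = (time, tat, tat - burst)   # (CT, TAT, WT)
    return results

table = [("P1", 2, 6), ("P2", 5, 2), ("P3", 1, 8), ("P4", 0, 3), ("P5", 4, 4)]
res = sjf(table)
print(res)
```

For this table the sketch schedules P4, P1, P2, P5, P3, giving an average waiting time of 5.2.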
Advantages of SJF:
● SJF is better than the First Come First Serve (FCFS) algorithm as it
reduces the average waiting time.
● SJF is generally used for long-term scheduling.
● It is suitable for the jobs running in batches, where run times are
already known.
● SJF is probably optimal in terms of average turnaround time.
Disadvantages of SJF:
● SJF may cause very long turnaround times or starvation.
● In SJF, job completion time must be known in advance, but it is often
hard to predict.
● It is complicated to predict the length of the upcoming CPU request.
● It can lead to starvation of long processes, which increases their
turnaround time.
Round Robin Scheduling Algorithm
Round Robin is a CPU scheduling algorithm where each process is cyclically
assigned a fixed time slot. It is the preemptive version of the First come First
Serve CPU Scheduling algorithm.
● Round Robin CPU Algorithm generally focuses on Time Sharing
technique.
● The period of time for which a process or job is allowed to run in a
pre-emptive method is called time quantum.
● Each process or job present in the ready queue is assigned the CPU
for that time quantum. If the process finishes its execution within
that time, it terminates; otherwise it goes back to the end of the
ready queue and waits for its next turn to complete its execution.
Characteristics of Round Robin CPU Scheduling
Algorithm
● It is simple, easy to implement, and starvation-free as all processes
get a fair share of CPU.
● It is one of the most commonly used techniques in CPU scheduling.
● It is preemptive, as processes are assigned the CPU only for a fixed
slice of time at most.
● Its disadvantage is the higher overhead of context switching.
Advantages of Round Robin CPU Scheduling Algorithm
● There is fairness since every process gets an equal share of the
CPU.
● The newly created process is added to the end of the ready queue.
● A round-robin scheduler generally employs time-sharing, giving
each job a time slot or quantum.
● While performing a round-robin scheduling, a particular time
quantum is allotted to different jobs.
● Each process gets a chance to be rescheduled after a particular
quantum in this scheduling.
Disadvantages of Round Robin CPU Scheduling
Algorithm
● Waiting time and response time are larger.
● Throughput is low.
● There are frequent context switches.
● The Gantt chart grows very large if the time quantum is small (for
example, 1 ms for a long schedule).
● Scheduling becomes time-consuming for a small quantum.
Examples to show working of Round Robin Scheduling
Algorithm
Example-1: Consider the following table of arrival time and burst time for
four processes P1, P2, P3, and P4 and given Time Quantum = 2
Process   Burst Time   Arrival Time
P1        5 ms         0 ms
P2        4 ms         1 ms
P3        2 ms         2 ms
P4        1 ms         4 ms
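This example can be sketched with a queue-based simulation (quantum = 2). One convention is assumed here: processes arriving during a time slice enter the queue before the preempted process is re-queued; other conventions give slightly different schedules.

```python
from collections import deque

# Minimal Round Robin simulation for the example above (quantum = 2).
def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst)."""
    procs = sorted(processes, key=lambda p: p[1])     # by arrival time
    burst = {name: bt for name, _, bt in procs}
    arrival = {name: at for name, at, _ in procs}
    remaining = dict(burst)
    queue, time, i, results = deque(), 0, 0, {}
    while i < len(procs) or queue:
        if not queue:                                 # idle until next arrival
            time = max(time, procs[i][1])
        while i < len(procs) and procs[i][1] <= time:
            queue.append(procs[i][0]); i += 1
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= time: # arrivals during slice
            queue.append(procs[i][0]); i += 1
        if remaining[name] > 0:
            queue.append(name)                        # back to end of queue
        else:
            tat = time - arrival[name]
            results[name] = (time, tat, tat - burst[name])  # (CT, TAT, WT)
    return results

table = [("P1", 0, 5), ("P2", 1, 4), ("P3", 2, 2), ("P4", 4, 1)]
res = round_robin(table, quantum=2)
print(res)
```

Under this convention the sketch gives an average waiting time of 4.75 for the table above.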
Multilevel Queue (MLQ) CPU Scheduling
It may happen that processes in the ready queue can be divided into different
classes where each class has its own scheduling needs. For example, a
common division is a foreground (interactive) process and a background
(batch) process. These two classes have different scheduling needs. For this
kind of situation, Multilevel Queue Scheduling is used.
Now, let us see how it works.
Features of Multilevel Queue (MLQ) CPU Scheduling:
● Multiple queues: In MLQ scheduling, processes are divided into
multiple queues based on their priority, with each queue having a
different priority level. Higher-priority processes are placed in
queues with higher priority levels, while lower-priority processes
are placed in queues with lower priority levels.
● Priorities assigned: Priorities are assigned to processes based on
their type, characteristics, and importance. For example, interactive
processes like user input/output may have a higher priority than
batch processes like file backups.
● Preemption: Preemption is allowed in MLQ scheduling, which
means a higher priority process can preempt a lower priority
process, and the CPU is allocated to the higher priority process. This
helps ensure that high-priority processes are executed in a timely
manner.
● Scheduling algorithm: Different scheduling algorithms can be used
for each queue, depending on the requirements of the processes in
that queue. For example, Round Robin scheduling may be used for
interactive processes, while First Come First Serve scheduling may
be used for batch processes.
● Feedback mechanism: A feedback mechanism can be implemented
to adjust the priority of a process based on its behavior over time.
For example, if an interactive process has been waiting in a lower-
priority queue for a long time, its priority may be increased to ensure
it is executed in a timely manner.
● Efficient allocation of CPU time: MLQ scheduling ensures that
processes with higher priority levels are executed in a timely
manner, while still allowing lower priority processes to execute
when the CPU is idle.
● Fairness: MLQ scheduling provides a fair allocation of CPU time to
different types of processes, based on their priority and
requirements.
● Customizable: MLQ scheduling can be customized to meet the
specific requirements of different types of processes.
Advantages of Multilevel Queue CPU Scheduling:
● Low scheduling overhead: Since processes are permanently
assigned to their respective queues, the overhead of scheduling is
low, as the scheduler only needs to select the appropriate queue for
execution.
● Efficient allocation of CPU time: The scheduling algorithm ensures
that processes with higher priority levels are executed in a timely
manner, while still allowing lower priority processes to execute
when the CPU is idle. This ensures optimal utilization of CPU time.
● Fairness: The scheduling algorithm provides a fair allocation of CPU
time to different types of processes, based on their priority and
requirements.
● Customizable: The scheduling algorithm can be customized to meet
the specific requirements of different types of processes. Different
scheduling algorithms can be used for each queue, depending on
the requirements of the processes in that queue.
● Prioritization: Priorities are assigned to processes based on their
type, characteristics, and importance, which ensures that important
processes are executed in a timely manner.
● Preemption: Preemption is allowed in Multilevel Queue Scheduling,
which means that higher-priority processes can preempt lower-
priority processes, and the CPU is allocated to the higher-priority
process. This helps ensure that high-priority processes are executed
in a timely manner.
Disadvantages of Multilevel Queue CPU Scheduling:
● Some processes may starve for CPU time if the higher-priority queues
never become empty.
● It is inflexible in nature.
● There may be added complexity in implementing and maintaining
multiple queues and scheduling algorithms.
Ready Queue is divided into separate queues for each class of processes. For
example, let us take three different types of processes System processes,
Interactive processes, and Batch Processes. All three processes have their
own queue. Now, look at the below figure.
The Description of the processes in the above diagram is as follows:
● System Processes: The operating system has its own processes to run,
which are generally termed system processes.
● Interactive Processes: An interactive process is a type of process
that involves ongoing user interaction.
● Batch Processes: Batch processing is generally a technique in the
Operating system that collects the programs and data together in
the form of a batch before the processing starts.
All three types of processes have their own queue, and each queue has its
own scheduling algorithm. For example, queue 1 and queue 2 may use Round
Robin while queue 3 uses FCFS to schedule their processes.
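The fixed-priority selection at the heart of MLQ can be sketched as follows; the queue names match the three classes above, while the sample process names are purely illustrative:

```python
from collections import deque

# Three queues in fixed priority order, matching the classes above.
queues = {
    "system":      deque(),   # highest priority
    "interactive": deque(),
    "batch":       deque(),   # lowest priority
}

def pick_next():
    """Serve the highest-priority non-empty queue; a lower queue only
    runs when every queue above it is empty."""
    for kind in ("system", "interactive", "batch"):
        if queues[kind]:
            return kind, queues[kind].popleft()
    return None               # all queues empty: CPU is idle

# Illustrative processes, enqueued in arbitrary order.
queues["batch"].append("backup")
queues["interactive"].append("editor")
queues["system"].append("kernel-task")

first = pick_next()
print(first)   # the system process is served before the others
```

Within each queue, `popleft` here behaves like FCFS; a per-queue Round Robin could be layered on the same structure.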
A deadlock is a situation where a set of processes is blocked because each
process is holding a resource and waiting for another resource acquired by
some other process. In this section, we will discuss deadlock, its
necessary conditions, etc. in detail.
What is Deadlock?
Deadlock is a situation in computing where two or more processes are
unable to proceed because each is waiting for the other to release resources.
Key concepts include mutual exclusion, resource holding, circular wait, and
no preemption.
Consider an example of two trains approaching each other on a single
track: once they are face to face, neither train can move. This is a
practical example of deadlock.
How Does Deadlock occur in the Operating System?
Before going into detail about how deadlock occurs in the Operating System,
let’s first discuss how the Operating System uses the resources present. A
process in an operating system uses resources in the following way.
● Requests a resource
● Uses the resource
● Releases the resource
A situation occurs in operating systems when there are two or more
processes that hold some resources and wait for resources held by other(s).
For example, in the below diagram, Process 1 is holding Resource 1 and
waiting for resource 2 which is acquired by process 2, and process 2 is
waiting for resource 1.
Necessary Conditions for Deadlock in OS
Deadlock can arise if the following four conditions hold simultaneously
(Necessary Conditions)
● Mutual Exclusion: Two or more resources are non-shareable (only one
process can use a resource at a time).
● Hold and Wait: A process is holding at least one resource and
waiting for additional resources held by other processes.
● No Preemption: A resource cannot be taken from a process unless the
process releases it.
● Circular Wait: A set of processes wait for each other in circular
form.
What is Deadlock Detection?
Deadlock detection is a process in computing where the system checks if
there are any sets of processes that are stuck waiting for each other
indefinitely, preventing them from moving forward. In simple words,
deadlock detection is the process of finding out whether any processes
are stuck in such a loop. There are several algorithms, such as:
● Resource Allocation Graph
● Banker’s Algorithm
These algorithms help in the detection of deadlock in an operating system.
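A simple form of detection can be sketched as a cycle check over a wait-for graph (an edge P1 → P2 meaning "P1 waits for a resource held by P2"). This is an illustrative DFS sketch, not the full Resource Allocation Graph or Banker's Algorithm:

```python
# Wait-for graph cycle detection: a cycle means the processes on it
# are deadlocked, since each waits for the next indefinitely.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, ()):
            c = color.get(nxt, WHITE)
            if c == GRAY:                  # back edge: cycle found
                return True
            if c == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

deadlocked = has_cycle({"P1": ["P2"], "P2": ["P1"]})   # mutual wait
no_deadlock = has_cycle({"P1": ["P2"], "P2": []})      # P2 can finish
print(deadlocked, no_deadlock)
```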
What are the Methods For Handling Deadlock?
There are three ways to handle deadlock
● Deadlock Prevention or Avoidance
● Deadlock Recovery
● Deadlock Ignorance
Deadlock Prevention or Avoidance
Deadlock prevention and avoidance is one of the methods for handling
deadlock. First, we will discuss deadlock prevention, then deadlock
avoidance.
What is Deadlock Prevention?
In deadlock prevention, the aim is to ensure that at least one of the four
necessary conditions for deadlock can never hold. This can be done per
condition as follows:
(i) Mutual Exclusion
We use a lock only for non-shareable resources; if a resource is
shareable (like a read-only file), we do not use a lock, which ensures
that multiple processes can access it at the same time. Problem: this
works only for shareable resources; for non-shareable resources like a
printer, we still have to use mutual exclusion.
(ii) Hold and Wait
To ensure that hold and wait never occurs in the system, we must
guarantee that whenever a process requests a resource, it does not hold
any other resources.
● We can provide all the resources a process requires before it
starts executing. Problem: a process may initially need only two of
three resources and request the third an hour later; during that
time, the idle third resource starves other processes that want it
and could have used it to complete their execution.
● We can ensure that when a process requests any resource, it holds
no other resources. Example: suppose there are three resources, a
DVD drive, a file, and a printer. The process first requests the DVD
drive and the file to copy data into the file (say this takes one
hour); it then releases all its resources before requesting the file
and the printer to print that file.
(iii) No Preemption
If a process is holding some resources and requests other resources that
are already acquired elsewhere and not immediately available, then the
resources the process is currently holding are preempted. After some
time, the process requests the old resources together with the other
required resources and restarts.
For example, process P1 holds resource R1 and requests R2, which is held
by process P2. R1 is then preempted from P1, and after some time P1 tries
to restart by requesting both R1 and R2.
Problem: this can cause the livelock problem.
Livelock: a situation where two or more processes continuously change
their state in response to each other without making any real progress.
Example:
● Suppose there are two processes P1 and P2 and two resources R1 and
R2.
● Now, P1 acquires R1 and needs R2, while P2 acquires R2 and needs
R1.
● According to the above method, both P1 and P2 detect that they
cannot acquire the second resource, so each releases the resource it
is holding and tries again.
● Continuous cycle: P1 again acquires R1 and requests R2, and P2
again acquires R2 and requests R1. There is no overall progress, yet
the processes keep changing state as they preempt resources and hold
them again. This is the situation of livelock.
(iv) Circular Wait
To remove circular wait from the system, we can impose an ordering on the
resources in which every process must acquire them.
Example: if there are processes P1 and P2 and resources R1 and R2, we can
fix the acquisition order so that a process must acquire resource R1
before resource R2. The process that has acquired R1 is then allowed to
acquire R2; the other process must wait until R1 is free.
These are the deadlock prevention methods, but in practice only the
fourth one is used, as the removal methods for the other three conditions
have significant disadvantages.
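The resource-ordering rule can be sketched with two locks: every process acquires them in one global order, so a circular wait cannot form even when the processes request them in opposite orders. The helper and process names below are illustrative:

```python
import threading

# Global lock order: r1 before r2, for every process in the system.
r1, r2 = threading.Lock(), threading.Lock()
ORDER = {id(r1): 1, id(r2): 2}

def acquire_in_order(*locks):
    """Acquire locks in the fixed global order, whatever order was asked."""
    for lock in sorted(locks, key=lambda l: ORDER[id(l)]):
        lock.acquire()

results = []

def worker(name, *locks):
    acquire_in_order(*locks)          # both workers lock r1 first, then r2
    results.append(name)              # "critical section"
    for lock in locks:
        lock.release()

# p2 requests the locks in the reverse order; without the global
# ordering this interleaving could deadlock.
t1 = threading.Thread(target=worker, args=("p1", r1, r2))
t2 = threading.Thread(target=worker, args=("p2", r2, r1))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))
```

Both workers complete because neither can ever hold r2 while waiting for r1.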
What is Deadlock Avoidance?
Avoidance is kind of futuristic. By using the strategy of “Avoidance”, we
have to make an assumption. We need to ensure that all information
about resources that the process will need is known to us before the
execution of the process. We use the Banker’s Algorithm (originally due
to Dijkstra) to avoid deadlock.
In prevention and avoidance, we get the correctness of data but
performance decreases.
What is Deadlock Recovery?
If deadlock prevention or avoidance is not applied in the software, then
we can handle deadlock by detection and recovery, which consists of two
phases:
1. In the first phase, we examine the state of the processes and
check whether there is a deadlock in the system.
2. If a deadlock is found in the first phase, we apply an algorithm
to recover from it.
In Deadlock detection and recovery, we get the correctness of data but
performance decreases.
Methods of Deadlock Recovery
There are several Deadlock Recovery Techniques:
● Manual Intervention
● Automatic Recovery
● Process Termination
● Resource Preemption
1. Manual Intervention
When a deadlock is detected, one option is to inform the operator and let
them handle the situation manually. While this approach allows for
human judgment and decision-making, it can be time-consuming and may
not be feasible in large-scale systems.
2. Automatic Recovery
An alternative approach is to enable the system to recover from deadlock
automatically. This method involves breaking the deadlock cycle by either
aborting processes or preempting resources. Let’s delve into these
strategies in more detail.
3. Process Termination
● Abort all Deadlocked Processes
○ This approach breaks the deadlock cycle, but it
comes at a significant cost. The processes that
were aborted may have executed for a
considerable amount of time, resulting in the loss
of partial computations. These computations may
need to be recomputed later.
● Abort one process at a time
○ Instead of aborting all deadlocked processes
simultaneously, this strategy involves selectively
aborting one process at a time until the deadlock
cycle is eliminated. However, this incurs overhead
as a deadlock-detection algorithm must be invoked
after each process termination to determine if any
processes are still deadlocked.
○ Factors for choosing the termination order:
○ The process’s priority
○ Completion time and the progress
made so far
○ Resources consumed by the process
○ Resources required to complete the
process
○ Number of processes to be
terminated
○ Process type (interactive or batch)
4. Resource Preemption
● Selecting a Victim
○ Resource preemption involves choosing which
resources and processes should be preempted to
break the deadlock. The selection order aims to
minimize the overall cost of recovery. Factors
considered for victim selection may include the
number of resources held by a deadlocked process
and the amount of time the process has consumed.
● Rollback
○ If a resource is preempted from a process, the
process cannot continue its normal execution as it
lacks the required resource. Rolling back the
process to a safe state and restarting it is a
common approach. Determining a safe state can be
challenging, leading to the use of total rollback,
where the process is aborted and restarted from
scratch.
● Starvation Prevention
○ To prevent resource starvation, it is essential to
ensure that the same process is not always chosen
as a victim. If victim selection is solely based on
cost factors, one process might repeatedly lose its
resources and never complete its designated task.
To address this, it is advisable to limit the number
of times a process can be chosen as a victim,
including the number of rollbacks in the cost
factor.
What is Deadlock Ignorance?
If a deadlock is very rare, then let it happen and reboot the system.
This is the approach that both Windows and UNIX take. We use the Ostrich
Algorithm for deadlock ignorance.
With deadlock ignorance, performance is better than with the above two
methods, but correctness of data is not guaranteed.
Safe State
A safe state can be defined as a state in which there is no deadlock and
from which all processes can run to completion in some order. It is
achievable if:
● Whenever a process needs an unavailable resource, it can wait until
the resource is released by the process to which it is currently
allocated. If no such sequence of allocations exists, the state is
unsafe.
● All the requested resources can eventually be allocated to the
process.
BANKER’S ALGORITHM
The Banker’s Algorithm is a resource allocation and deadlock avoidance
algorithm that tests for safety by simulating the allocation for the
predetermined maximum possible amounts of all resources, then makes an
“s-state” check to test for possible activities, before deciding whether
allocation should be allowed to continue.
The Banker’s Algorithm is a smart way for computer systems to manage how
programs use resources, like memory or CPU time. It helps prevent situations
where programs get stuck and can’t finish their tasks, which is called
deadlock. By keeping track of what resources each program needs and
what’s available, the algorithm makes sure that programs only get what they
need in a safe order. This helps computers run smoothly and efficiently,
especially when lots of programs are running at the same time.
Why Banker’s Algorithm is Named So?
The banker’s algorithm is named so because it resembles the check a bank
performs before sanctioning a loan. Suppose there are n account holders
in a bank and the total sum of their money is S, and the bank holds a
certain amount of money Y. If a person applies for a loan, the bank first
subtracts the loan amount from the total money it has (Y), and only if
the remaining amount is greater than S is the loan sanctioned. This is
done so that if all the account holders come to withdraw their money, the
bank can easily do so.
It also helps the OS share resources successfully among all the
processes. It is called the banker’s algorithm because bankers need a
similar algorithm: they admit loans that collectively exceed the bank’s
funds and then release each borrower’s loan in installments. The banker’s
algorithm uses the notion of a safe allocation state to ensure that
granting a resource request cannot lead to a deadlock, either immediately
or in the future.
In other words, the bank would never allocate its money in such a way that it
can no longer satisfy the needs of all its customers. The bank would try to be
in a safe state always.
Example
Consider a system with five processes P0 through P4 and three resource
types A, B, and C. Resource type A has 10 instances, B has 5 instances,
and C has 7 instances. Suppose at time t0 the following snapshot of the
system has been taken:
SOLVE THE BELOW QUESTION BY CLASS NOTES
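Since the snapshot table itself is left to the class notes, the sketch below assumes the classic textbook Allocation/Max matrices for these resource counts (an assumption, not this document's data) and shows how the safety check of the Banker's Algorithm works:

```python
# Assumed snapshot: the classic textbook example for 5 processes and
# resources A=10, B=5, C=7 instances (not taken from the class notes).
allocation = [
    [0, 1, 0],   # P0
    [2, 0, 0],   # P1
    [3, 0, 2],   # P2
    [2, 1, 1],   # P3
    [0, 0, 2],   # P4
]
maximum = [
    [7, 5, 3],   # P0
    [3, 2, 2],   # P1
    [9, 0, 2],   # P2
    [2, 2, 2],   # P3
    [4, 3, 3],   # P4
]
available = [3, 3, 2]    # total (10, 5, 7) minus everything allocated

def is_safe(alloc, maxm, avail):
    """Safety check: find an order in which every process can finish."""
    n = len(alloc)
    need = [[m - a for m, a in zip(maxm[i], alloc[i])] for i in range(n)]
    work, finish, order = avail[:], [False] * n, []
    while len(order) < n:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j]
                                     for j in range(len(work))):
                work = [w + a for w, a in zip(work, alloc[i])]  # i releases all
                finish[i] = True
                order.append(i)
                break
        else:
            return False, order      # no process can proceed: unsafe
    return True, order

safe, order = is_safe(allocation, maximum, available)
print(safe, order)
```

For this assumed snapshot the state is safe, with safe sequence P1, P3, P0, P2, P4 (the first one the scan finds; others exist).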