UNIT – II PROCESS MANAGEMENT
Process
A process is basically a program in execution.
The execution of a process must progress in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
To put it in simple terms, we write our computer programs in a text file and
when we execute this program, it becomes a process which performs all the
tasks mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text and data.
These four sections are described below −
1. Stack − The process stack contains temporary data such as method/function parameters, return addresses and local variables.
2. Heap − This is memory that is dynamically allocated to the process during its run time.
3. Text − This contains the compiled program code, i.e. the instructions the processor executes; the current activity is represented by the value of the Program Counter within this section.
4. Data − This section contains the global and static variables.
Program Vs Process:
A program is a piece of code which may be a single line or millions of lines.
A computer program is usually written by a computer programmer in a
programming language.
A computer program is a collection of instructions that performs a specific task
when executed by a computer.
When we compare a program with a process, we can conclude that a process is
a dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is known as
an algorithm.
A collection of computer programs, libraries and related data is referred to as
software.
Process Life Cycle:
When a process executes, it passes through different states.
These stages may differ in different operating systems, and the names of these
states are also not standardized.
In general, a process can have one of the following five states at a time.
1. Start − This is the initial state when a process is first started/created.
2. Ready − The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler to assign the CPU to some other process.
3. Running − Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4. Waiting − The process moves into the waiting state if it needs to wait for a resource, such as user input, or for a file to become available.
5. Terminated or Exit − Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state where it waits to be removed from main memory.
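As a minimal sketch, the five states and their legal transitions can be modeled in Python. The names and the transition table below are illustrative, not taken from any particular operating system:

```python
from enum import Enum

class ProcessState(Enum):
    START = "start"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal transitions in the five-state model described above.
TRANSITIONS = {
    ProcessState.START: {ProcessState.READY},
    ProcessState.READY: {ProcessState.RUNNING},
    ProcessState.RUNNING: {ProcessState.READY,
                           ProcessState.WAITING,
                           ProcessState.TERMINATED},
    ProcessState.WAITING: {ProcessState.READY},
    ProcessState.TERMINATED: set(),
}

def can_transition(src, dst):
    """Return True if a process may move directly from src to dst."""
    return dst in TRANSITIONS[src]
```

Note, for example, that a waiting process cannot go straight back to running: it must first become ready and be scheduled again.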
Process Control Block (PCB):
A Process Control Block is a data structure maintained by the Operating System
for every process.
The PCB is identified by an integer process ID (PID).
A PCB keeps all the information needed to keep track of a process as listed
below in the table −
1. Process State − The current state of the process, i.e. whether it is ready, running, waiting, etc.
2. Process Privileges − Required to allow/disallow access to system resources.
3. Process ID − Unique identification for each process in the operating system.
4. Pointer − A pointer to the parent process.
5. Program Counter − A pointer to the address of the next instruction to be executed for this process.
6. CPU Registers − The various CPU registers whose contents must be saved when the process leaves the running state, so that it can resume execution later.
7. CPU Scheduling Information − Process priority and other scheduling information required to schedule the process.
8. Memory Management Information − Information such as the page table, memory limits and segment table, depending on the memory system used by the operating system.
9. Accounting Information − The amount of CPU time used for process execution, time limits, execution ID, etc.
10. I/O Status Information − A list of the I/O devices allocated to the process.
The architecture of a PCB is completely dependent on Operating System and
may contain different information in different operating systems.
The PCB is maintained for a process throughout its lifetime, and is deleted
once the process terminates.
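A PCB can be sketched as a simple record type. The field names below are illustrative, cover only a subset of the table above, and do not correspond to any real OS's structures:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """Illustrative Process Control Block; real systems store far more."""
    pid: int                                        # process ID
    state: str = "ready"                            # process state
    priority: int = 0                               # CPU scheduling information
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    parent_pid: Optional[int] = None                # pointer to parent process
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: int = 0                          # accounting information

pcb = PCB(pid=42, priority=5)
```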
PROCESS SCHEDULING:
Definition
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating
systems.
Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using
time multiplexing.
Categories of Scheduling:
There are two categories of scheduling:
1. Non-preemptive:
Here the resource (the CPU) cannot be taken away from a process until the
process completes its execution.
Switching occurs only when the running process terminates or moves to the
waiting state.
2. Preemptive:
Here the OS allocates the CPU to a process for a fixed amount of time.
A process may switch from the running state to the ready state, or from
the waiting state to the ready state.
This switching occurs because the CPU may give priority to other processes,
replacing the running process with a higher-priority one.
Process Scheduling Queues
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling
Queues.
The OS maintains a separate queue for each of the process states and PCBs of
all processes in the same execution state are placed in the same queue.
When the state of a process is changed, its PCB is unlinked from its current
queue and moved to its new state queue.
The Operating System maintains the following important process scheduling
queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in
this queue.
Device queues − The processes which are blocked due to unavailability
of an I/O device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.).
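The per-state queues and the unlink-and-move operation described above can be sketched in Python. Here plain PIDs stand in for full PCBs, and the queue names are illustrative:

```python
from collections import deque

# One FIFO queue per process state, holding PIDs (stand-ins for PCBs).
queues = {"ready": deque(), "waiting": deque()}

def change_state(pid, old_state, new_state):
    """Unlink a PCB from its current queue and append it to the
    queue for its new state."""
    queues[old_state].remove(pid)
    queues[new_state].append(pid)

queues["ready"].extend([1, 2, 3])       # three processes ready to run
change_state(2, "ready", "waiting")     # process 2 blocks on I/O
```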
The OS scheduler determines how to move processes between the ready and
run queues; the run queue can have only one entry per processor core on the
system.
Two-State Process Model
Two-state process model refers to running and non-running states which are
described below −
1. Running − When the dispatcher assigns the CPU to a process, that
process is in the running state.
2. Not Running − Processes that are not running are kept in a queue,
waiting for their turn to execute. Each entry in the queue is a pointer
to a particular process, and the queue is implemented using a linked
list. The dispatcher works as follows: when the running process is
interrupted, it is transferred back to the waiting queue; if the process
has completed or aborted, it is discarded. In either case, the dispatcher
then selects a process from the queue to execute.
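The dispatcher behavior described in the two-state model can be sketched as follows; the process names are hypothetical:

```python
from collections import deque

# Not-running processes wait in a FIFO queue; names are hypothetical.
not_running = deque(["A", "B", "C"])

def dispatch():
    """Select the next process from the queue to execute."""
    return not_running.popleft()

def interrupt(pid):
    """An interrupted (but unfinished) process returns to the queue;
    a completed process is simply never re-queued."""
    not_running.append(pid)

running = dispatch()      # "A" starts running
interrupt(running)        # "A" is interrupted before finishing
running = dispatch()      # dispatcher picks "B" next
```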
Schedulers
Schedulers are special system software which handle process scheduling in
various ways.
Their main task is to select the jobs to be submitted into the system and to
decide which process to run.
Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler.
A long-term scheduler determines which programs are admitted to the system
for processing.
It selects processes from the job queue and loads them into memory for
execution; the loaded processes then compete for the CPU under CPU
scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and processor bound.
It also controls the degree of multiprogramming.
If the degree of multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes leaving the
system.
On some systems, the long-term scheduler may not be available or minimal.
Time-sharing operating systems have no long term scheduler.
When a process changes the state from new to ready, then there is use of
long-term scheduler.
Short Term Scheduler
It is also called the CPU scheduler.
Its main objective is to increase system performance in accordance with the
chosen set of criteria.
It moves a process from the ready state to the running state.
CPU scheduler selects a process among the processes that are ready to
execute and allocates CPU to one of them.
Short-term schedulers, also known as dispatchers, make the decision of which
process to execute next.
Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler:
Medium-term scheduling is a part of swapping.
It removes the processes from the memory.
It reduces the degree of multiprogramming.
The medium-term scheduler is in-charge of handling the swapped out-
processes.
A running process may become suspended if it makes an I/O request.
A suspended process cannot make any progress towards completion.
In this condition, to remove the process from memory and make space for
other processes, the suspended process is moved to the secondary storage.
This process is called swapping, and the process is said to be swapped out or
rolled out.
Swapping may be necessary to improve the process mix.
Comparison among Schedulers:
1. Nature − The long-term scheduler is a job scheduler, the short-term
scheduler is a CPU scheduler, and the medium-term scheduler is a
process-swapping scheduler.
2. Speed − The long-term scheduler is slower than the short-term
scheduler; the short-term scheduler is the fastest of the three; the
medium-term scheduler's speed lies between the two.
3. Multiprogramming − The long-term scheduler controls the degree of
multiprogramming; the short-term scheduler provides less control over
it; the medium-term scheduler reduces it.
4. Presence − The long-term scheduler is almost absent or minimal in
time-sharing systems; the short-term scheduler is also minimal in
time-sharing systems; the medium-term scheduler is a part of
time-sharing systems.
5. Function − The long-term scheduler selects processes from the pool
and loads them into memory for execution; the short-term scheduler
selects from among the processes that are ready to execute; the
medium-term scheduler can re-introduce a swapped-out process into
memory so that its execution can be continued.
Context Switching
Context switching is the mechanism of storing and restoring the state or
context of a CPU in the Process Control Block, so that process execution
can be resumed from the same point at a later time.
Using this technique, a context switcher enables multiple processes to share a
single CPU.
Context switching is an essential feature of a multitasking operating
system.
When the scheduler switches the CPU from executing one process to execute
another, the state from the current running process is stored into the process
control block.
After this, the state for the process to run next is loaded from its own PCB and
used to set the PC, registers, etc.
At that point, the second process can start executing.
Context switches are computationally intensive, since register and memory
state must be saved and restored.
To reduce context switching time, some hardware systems employ two or
more sets of processor registers.
When the process is switched, the following information is stored for later use.
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
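A toy model of a context switch, with the "CPU" reduced to a dict of register values and each PCB to a dict; all names are illustrative:

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Store the CPU state into the outgoing PCB, then restore the
    incoming process's saved state (or a fresh state) from its PCB."""
    old_pcb["saved"] = dict(cpu)                          # save into PCB
    cpu.clear()
    cpu.update(new_pcb.get("saved", {"pc": 0, "r0": 0}))  # restore from PCB

cpu = {"pc": 100, "r0": 7}        # process p1 is currently running
p1, p2 = {}, {}
context_switch(cpu, p1, p2)       # p1 out, fresh p2 in: pc resets to 0
context_switch(cpu, p2, p1)       # back to p1: it resumes at pc = 100
```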
Scheduling criteria:
The process scheduler assigns processes to the CPU based on
particular scheduling algorithms.
Scheduling is governed by a set of policies and mechanisms that control
the order in which jobs are completed; the scheduler enforces these
policies through its scheduling algorithms.
Types of Process Scheduling Algorithms
The different types of process scheduling algorithms are as follows −
FCFS (First Come First Serve)
SJF or shortest job next.
Round Robin.
Shortest Remaining time.
Priority Scheduling.
Multiple level queues.
The scheduling criteria help in the design of a good
scheduler. These criteria are as follows −
CPU Utilization
The scheduling algorithm should be designed in such a way that the
usage of the CPU should be as efficient as possible.
The main objective of any CPU scheduling algorithm is to keep the
CPU as busy as possible.
Theoretically, CPU utilization can range from 0 to 100 percent, but in a
real system it typically varies from 40 to 90 percent depending on the
load on the system.
Throughput
It can be defined as the number of processes executed by the CPU in a
given amount of time.
It is used to find the efficiency of a CPU.
A measure of the work done by the CPU is the number of processes
being executed and completed per unit of time. This is called
throughput.
The throughput may vary depending on the length or duration of the
processes.
Response Time
Response time is the time from when a job enters the queue until it starts
executing for the first time; the scheduler should minimize it.
Response time = Time at which the process gets the CPU for the first
time - Arrival time
Turnaround time
Turnaround time is the total amount of time spent by the process from
coming in the ready state for the first time to its completion.
Turnaround time = Burst time + Waiting time
or
Turnaround time = Exit time - Arrival time
Waiting time
Waiting time is the total time a process spends waiting in the ready queue
while competing jobs execute; the scheduler should minimize it.
Waiting time = Turnaround time - Burst time
Fairness
The scheduler should ensure fairness, so that every process gets a fair
share of chances to be executed.
Completion time:
The completion time is the time when the process stops
executing, which means that the process has completed its
burst time and is completely executed.
Priority:
If the operating system assigns priorities to processes, the
scheduling mechanism should favor the higher-priority
processes.
Predictability:
A given process always should run in about the same amount
of time under a similar system load.
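The time-based criteria above (turnaround, waiting, response time) can be collected into one small helper; the example process and its times are hypothetical:

```python
def metrics(arrival, burst, exit_time, first_cpu):
    """Compute the criteria above for one process (times in ticks)."""
    turnaround = exit_time - arrival    # Turnaround = Exit time - Arrival time
    waiting = turnaround - burst        # Waiting = Turnaround - Burst time
    response = first_cpu - arrival      # Response = first CPU grant - Arrival
    return turnaround, waiting, response

# Hypothetical process: arrives at 2, burst 8, first runs at 5, exits at 20.
print(metrics(2, 8, 20, 5))   # (18, 10, 3)
```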
SCHEDULING ALGORITHMS:
A Process Scheduler schedules different processes to be assigned to the
CPU based on particular scheduling algorithms.
There are six popular process scheduling algorithms which we are going
to discuss in this chapter −
First-Come, First-Served (FCFS) Scheduling
Shortest-Job-Next (SJN) Scheduling
Priority Scheduling
Shortest Remaining Time
Round Robin(RR) Scheduling
Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive.
Non-preemptive algorithms are designed so that once a process enters
the running state, it cannot be preempted until it completes its CPU
burst, whereas preemptive scheduling is based on priority: the
scheduler may preempt a low-priority running process at any time when a
high-priority process enters the ready state.
First Come First Serve (FCFS)
Jobs are executed on first come, first serve basis.
It is a non-preemptive scheduling algorithm.
Easy to understand and implement.
Its implementation is based on FIFO queue.
Poor in performance as average wait time is high.
Given: Table of processes, their Arrival and Execution times, and the
resulting Service time under FCFS:
Process  Arrival Time  Execution Time  Service Time
P0       0             5               0
P1       1             3               5
P2       2             8               8
P3       3             6               16
Wait time of each process is as follows −
Process  Wait Time : Service Time - Arrival Time
P0       0 - 0 = 0
P1       5 - 1 = 4
P2       8 - 2 = 6
P3       16 - 3 = 13
Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
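The FCFS wait times can be reproduced with a short simulation, a sketch assuming the four processes with arrival times 0-3 and burst times 5, 3, 8, 6 used throughout this section:

```python
def fcfs(procs):
    """Non-preemptive FCFS. procs: list of (name, arrival, burst),
    already sorted by arrival time. Returns {name: wait_time},
    where wait = service time - arrival time."""
    time, waits = 0, {}
    for name, arrival, burst in procs:
        time = max(time, arrival)   # CPU idles until the process arrives
        waits[name] = time - arrival
        time += burst
    return waits

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
waits = fcfs(procs)
print(waits)                              # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
print(sum(waits.values()) / len(waits))   # 5.75
```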
Shortest Job Next (SJN)
This is also known as shortest job first, or SJF.
This is a non-preemptive scheduling algorithm.
It is the best approach to minimize waiting time.
It is easy to implement in batch systems where the required CPU time
is known in advance.
It is impossible to implement in interactive systems where the required
CPU time is not known.
The processor should know in advance how much time the process
will take.
Given: Table of processes, and their Arrival time, Execution time
Process Arrival Time Execution Time Service Time
P0 0 5 0
P1 1 3 5
P2 2 8 14
P3 3 6 8
Waiting time of each process is as follows −
Process  Waiting Time
P0       0 - 0 = 0
P1       5 - 1 = 4
P2       14 - 2 = 12
P3       8 - 3 = 5
Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25
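The SJF waiting times above can be checked with a sketch that, whenever the CPU becomes free, picks the ready process with the shortest burst:

```python
def sjf(procs):
    """Non-preemptive shortest-job-first. procs: list of
    (name, arrival, burst). Returns {name: wait_time}."""
    time, done, waits = 0, set(), {}
    while len(done) < len(procs):
        ready = [p for p in procs if p[1] <= time and p[0] not in done]
        if not ready:
            time += 1               # CPU idles until something arrives
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        waits[name] = time - arrival
        time += burst
        done.add(name)
    return waits

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
waits = sjf(procs)
print(sum(waits.values()) / len(waits))   # 5.25
```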
Priority Based Scheduling
Priority scheduling is a non-preemptive algorithm and one of
the most common scheduling algorithms in batch systems.
Each process is assigned a priority. Process with highest
priority is to be executed first and so on.
Processes with same priority are executed on first come first
served basis.
Priority can be decided based on memory requirements, time
requirements or any other resource requirement.
Given: Table of processes, and their Arrival time, Execution time, and
priority. Here we consider a larger number to be a higher priority, with 1
the lowest priority.
Process Arrival Time Execution Time Priority Service Time
P0 0 5 1 0
P1 1 3 2 11
P2 2 8 1 14
P3 3 6 3 5
Waiting time of each process is as follows −
Process  Waiting Time
P0       0 - 0 = 0
P1       11 - 1 = 10
P2       14 - 2 = 12
P3       5 - 3 = 2
Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
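The priority example can be simulated the same way as SJF, but selecting on priority instead of burst length. This sketch assumes, as in the table, that a larger number means a higher priority:

```python
def priority_sched(procs):
    """Non-preemptive priority scheduling. procs: list of
    (name, arrival, burst, priority); a larger number means a higher
    priority. Returns {name: wait_time}."""
    time, done, waits = 0, set(), {}
    while len(done) < len(procs):
        ready = [p for p in procs if p[1] <= time and p[0] not in done]
        if not ready:
            time += 1
            continue
        # Highest priority first; earlier arrival (FCFS) breaks ties.
        name, arrival, burst, _ = max(ready, key=lambda p: (p[3], -p[1]))
        waits[name] = time - arrival
        time += burst
        done.add(name)
    return waits

procs = [("P0", 0, 5, 1), ("P1", 1, 3, 2), ("P2", 2, 8, 1), ("P3", 3, 6, 3)]
waits = priority_sched(procs)
print(sum(waits.values()) / len(waits))   # 6.0
```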
Shortest Remaining Time
Shortest remaining time (SRT) is the preemptive version of the
SJN algorithm.
The processor is allocated to the job closest to completion but
it can be preempted by a newer ready job with shorter time to
completion.
Impossible to implement in interactive systems where required
CPU time is not known.
It is often used in batch environments where short jobs need
to be given preference.
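A tick-by-tick sketch of SRT, run on the same four processes as above; it returns completion times rather than wait times, and always runs the ready process with the least work remaining:

```python
def srt(procs):
    """Preemptive SJF (shortest remaining time), one tick at a time.
    procs: list of (name, arrival, burst). Returns completion times."""
    arrival = {n: a for n, a, _ in procs}
    remaining = {n: b for n, _, b in procs}
    time, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        name = min(ready, key=lambda n: remaining[n])  # shortest remaining
        remaining[name] -= 1
        time += 1
        if remaining[name] == 0:       # finished: record completion time
            del remaining[name]
            finish[name] = time
    return finish

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
finish = srt(procs)
```

Here P1 (burst 3) preempts P0 at time 1 and finishes first, at time 4.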
Round Robin Scheduling
Round Robin is a preemptive process scheduling algorithm.
Each process is provided a fixed time slice to execute, called
a quantum.
Once a process has executed for the given time period, it is
preempted and another process executes for its time
period.
Context switching is used to save the states of preempted
processes.
Given the same four processes and a time quantum of 3 units, the wait
time of each process is as follows −
Process  Wait Time : Service Time - Arrival Time
P0       (0 - 0) + (12 - 3) = 9
P1       (3 - 1) = 2
P2       (6 - 2) + (14 - 9) + (20 - 17) = 12
P3       (9 - 3) + (17 - 12) = 11
Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
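The Round Robin schedule can be reproduced with a deque-based sketch (quantum 3, same four processes; a process arriving during a slice queues up before the preempted process, matching the accounting above):

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, arrival, burst) sorted by arrival time.
    Returns {name: wait_time}, wait = finish - arrival - burst."""
    arrival = {n: a for n, a, _ in procs}
    burst = {n: b for n, _, b in procs}
    remaining, finish = dict(burst), {}
    ready, time, i = deque(), 0, 0
    while len(finish) < len(procs):
        while i < len(procs) and procs[i][1] <= time:   # admit arrivals
            ready.append(procs[i][0])
            i += 1
        if not ready:
            time += 1
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= time:   # arrivals first
            ready.append(procs[i][0])
            i += 1
        if remaining[name] == 0:
            finish[name] = time
        else:
            ready.append(name)          # preempted: back of the queue
    return {n: finish[n] - arrival[n] - burst[n] for n in burst}

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
waits = round_robin(procs, quantum=3)
print(sum(waits.values()) / len(waits))   # 8.5
```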
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm. They
make use of other existing algorithms to group and schedule jobs with
common characteristics.
Multiple queues are maintained for processes with common
characteristics.
Each queue can have its own scheduling algorithms.
Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-
bound jobs in another queue. The Process Scheduler then alternately
selects jobs from each queue and assigns them to the CPU based on the
algorithm assigned to the queue.
Multilevel Queue Scheduling:
No single algorithm suits every process: some processes require
scheduling by a priority algorithm, some (interactive processes) need to
respond quickly, and others are background processes whose execution can
be delayed.
The scheduling algorithms used between queues and within queues may
differ from system to system.
A round-robin method with varying time quanta is typically used within
such queues.
Several types of scheduling algorithms are designed for circumstances
where the processes can be readily separated into groups.
There are two sorts of processes that require different scheduling
algorithms because they have varying response times and resource
requirements: the foreground (interactive) processes and the background
(batch) processes.
Foreground processes take priority over background processes.
Using the multilevel queue scheduling technique, the ready queue is
partitioned into several separate queues.
Each process is assigned to one queue based on properties such
as memory size, process priority, or process type.
The method for scheduling each queue is different. Some queues are
utilized for the foreground process, while others are used for the
background process.
The foreground queue may be scheduled using a round-robin
method, and the background queue can be scheduled using
an FCFS strategy.
Advantages and Disadvantages of Multilevel Queue
Scheduling
There are various advantages and disadvantages of multilevel queue
scheduling. Some of the advantages and disadvantages of the multilevel
queue scheduling are as follows:
Advantages
1. You can use multilevel queue scheduling to apply different scheduling
methods to distinct processes.
2. It will have low overhead in terms of scheduling.
Disadvantages
1. There is a risk of starvation for lower priority processes.
2. It is rigid in nature.
Example
Let's take an example of a multilevel queue-scheduling algorithm with five
queues to understand how this scheduling works:
1. System process
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Every queue has absolute priority over the lower-priority queues.
No process may execute until the higher-priority queues are empty.
In the above instance, no batch or student process may execute until the
queues for system, interactive, and interactive editing processes are empty.
If an interactive editing process enters the ready queue while a batch
process is underway, the batch process will be preempted.
There are the descriptions of the processes that are used in the above
example:
System Process
The OS has its process to execute, which is referred to as the System
Process.
Interactive Process
It is a process that interacts with the user and therefore requires quick
response times.
Batch Process
Batch processing is an operating system feature that collects programs
and data into a batch before processing starts.
Student Process
The system process is always given the highest priority, whereas the
student processes are always given the lowest.
Example Problem
Let's take an example of a multilevel queue-scheduling (MQS) algorithm
that shows how the multilevel queue scheduling work. Consider the four
processes listed in the table below under multilevel queue scheduling. The
queue number denotes the process's queue.
Process Arrival Time CPU Burst Time Queue Number
P1 0 4 1
P2 0 3 1
P3 0 8 2
P4 10 5 1
Queue 1 has a higher priority than queue 2. Round Robin is used in
queue 1 (Time Quantum = 2), while FCFS is used in queue 2.
Working:
1. Both queues contain processes at the start. Queue 1 (P1,
P2) therefore runs first (due to its greater priority) in a round-robin way,
and its processes finish after 7 units.
2. The process in queue 2 (Process P3) starts running (since there is no
process in queue 1), but while it is executing, P4 enters queue 1 and
interrupts P3, and then P4 takes the CPU and finishes its execution.
Multilevel Feedback queue Scheduling
No single algorithm suits every process, and in a general system some
processes require scheduling by a priority algorithm.
In plain multilevel queue scheduling there is a separate queue for
foreground and background operations, and processes do not switch between
queues or change their foreground or background nature; this organization
has the benefit of low scheduling overhead but is inflexible.
This strategy prioritizes operations that require I/O and are interactive.
It is a distinct process with a distinct CPU burst time. It enables a process
to switch between queues.
If a process consumes too much processor time, it will be switched to the
lowest priority queue.
A process waiting in a lower priority queue for too long may be shifted to a
higher priority queue. This type of aging prevents starvation.
The parameters of the multilevel feedback queue scheduler are as follows:
1. The scheduling algorithm for each queue in the system.
2. The number of queues in the system.
3. The method for determining when a process should be demoted to a lower-
priority queue.
4. The method for determining when a process should be promoted to a
higher-priority queue.
5. The method for determining which queue a process will enter when it
requires service.
Multilevel Feedback Queue Scheduling (MLFQ) CPU Scheduling is like Multilevel
Queue (MLQ) Scheduling, but here processes can move between the queues,
making it much more efficient than multilevel queue scheduling.
Characteristics of Multilevel Feedback Queue Scheduling:
In a plain multilevel queue-scheduling algorithm, processes are permanently
assigned to a queue on entry to the system; this permanent assignment has
the advantage of low scheduling overhead.
In the feedback variant, processes are instead allowed to move between
queues.
Advantages of Multilevel Feedback Queue Scheduling:
It is more flexible.
It allows different processes to move between different queues.
It prevents starvation by moving a process that has waited too long in a
lower priority queue to a higher priority queue.
Disadvantages of Multilevel Feedback Queue Scheduling:
Selecting the best scheduler requires some other means of choosing
values for all of its parameters.
It produces more CPU overheads.
It is the most complex algorithm.
Multilevel feedback queue scheduling, however, allows a process to move
between queues. Multilevel Feedback Queue Scheduling (MLFQ) keeps analyzing
the behavior (execution time) of processes and changes their priorities
accordingly.
Now let us suppose that queues 1 and 2 follow round robin with time quanta 4
and 8 respectively, and queue 3 follows FCFS.
Implementation of MFQS is given below –
When a process starts executing the operating system can insert it into
any of the above three queues depending upon its priority. For example,
if it is some background process, then the operating system would not
like it to be given to higher priority queues such as queue 1 and 2. It will
directly assign it to a lower priority queue i.e. queue 3. Let’s say our
current process for consideration is of significant priority so it will be
given queue 1.
In queue 1 process executes for 4 units and if it completes in these 4
units or it gives CPU for I/O operation in these 4 units then the priority of
this process does not change and if it again comes in the ready queue
then it again starts its execution in Queue 1.
If a process in queue 1 does not complete in 4 units, its priority is
reduced and it is shifted to queue 2.
Above points 2 and 3 are also true for queue 2 processes but the time
quantum is 8 units. In a general case if a process does not complete in a
time quantum then it is shifted to the lower priority queue.
In the last queue, processes are scheduled in an FCFS manner.
A process in a lower priority queue can execute only when the higher
priority queues are empty.
A process running in the lower priority queue is interrupted by a process
arriving in the higher priority queue.
Well, the above implementation may differ for example the last queue can also
follow Round-robin Scheduling.
Problems in the above implementation: A process in the lower priority queue can
suffer from starvation due to some short processes taking all the CPU time.
Solution: A simple solution can be to boost the priority of all the processes
after regular intervals and place them all in the highest priority queue.
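The three-queue implementation described above can be sketched as follows. This is a simplification: all processes are assumed to arrive at time 0 in the top queue, so arrival-driven preemption and the priority-boost fix are not modeled, and the process names and bursts are hypothetical:

```python
from collections import deque

def mlfq(procs, quanta=(4, 8)):
    """Three-queue MLFQ sketch: queues 0 and 1 are round robin with
    quanta 4 and 8; the last queue is FCFS (run to completion).
    procs: list of (name, burst), all arriving at time 0 into the
    top queue. Returns the order in which processes finish."""
    queues = [deque(), deque(), deque()]
    remaining = {}
    for name, burst in procs:
        queues[0].append(name)
        remaining[name] = burst
    order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name = queues[level].popleft()
        if level == 2:                       # FCFS: run to completion
            slice_ = remaining[name]
        else:
            slice_ = min(quanta[level], remaining[name])
        remaining[name] -= slice_
        if remaining[name] == 0:
            order.append(name)
        else:
            queues[level + 1].append(name)   # used its full quantum: demote

    return order

print(mlfq([("A", 3), ("B", 20), ("C", 6)]))   # ['A', 'C', 'B']
```

The short job A finishes in queue 1; C needs one demotion; the long job B sinks to the FCFS queue, exactly the behavior the text describes.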