Note - DCS30102 - Module 2
Process Management in OS
A program does nothing unless its instructions are executed by a CPU. A program in execution is called a
process. To accomplish its task, a process needs computer resources.
More than one process may exist in the system, and several may require the same resource at the same
time. Therefore, the operating system has to manage all the processes and the resources in a convenient
and efficient way.
Some resources may be usable by only one process at a time to maintain consistency;
otherwise the system can become inconsistent and a deadlock may occur.
The operating system is responsible for the activities described below in connection with process management.
Attributes of a process
The Attributes of the process are used by the Operating System to create the process control block (PCB)
for each of them. This is also called context of the process. Attributes which are stored in the PCB are
described below.
1. Process ID
When a process is created, a unique ID is assigned to it, which identifies the process uniquely
within the system.
2. Program counter
The program counter stores the address of the next instruction to be executed, i.e., the point at which
the process was suspended. The CPU uses this address when the execution of the process is resumed.
3. Process State
The Process, from its creation to the completion, goes through various states which are new, ready,
running and waiting. We will discuss about them later in detail.
BRAINWARE UNIVERSITY
CLASS NOTES [Operating System]
4. Priority
Every process has its own priority. The process with the highest priority among the processes gets the
CPU first. This is also stored on the process control block.
Process States
State Diagram
The process, from its creation to completion, passes through various states. The minimum number of
states is five.
The names of the states are not standardized, but a process is generally in one of the following states
during its lifetime.
1. New
A program that has been submitted to the OS and is about to be loaded into main memory is called a new process.
2. Ready
Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to be
assigned. The OS picks new processes from secondary memory and puts them all in main
memory.
The processes which are ready for the execution and reside in the main memory are called ready state
processes. There can be many processes present in the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS depending upon the scheduling
algorithm. Hence, if we have only one CPU in our system, the number of running processes for a
particular time will always be one. If we have n processors in the system then we can have n processes
running simultaneously.
4. Block or wait
From the running state, a process can make the transition to the block or wait state depending upon the
scheduling algorithm or the intrinsic behavior of the process.
When a process waits for a certain resource to be assigned or for input from the user, the OS
moves the process to the block or wait state and assigns the CPU to other processes.
5. Completion or termination
When a process finishes its execution, it enters the termination state. The whole context of the process
(process control block) is deleted, and the process is terminated by the operating system.
6. Suspend ready
A process in the ready state which is moved to secondary memory from main memory due to a lack of
resources (mainly primary memory) is said to be in the suspend ready state.
If main memory is full and a higher-priority process arrives for execution, the OS has to
make room for it in main memory by swapping a lower-priority process out to
secondary memory. Suspend ready processes remain in secondary memory until main
memory becomes available.
7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a blocked process that is
waiting for some resource in main memory. Since it is already waiting for a resource to become
available, it may as well wait in secondary memory and make room for a higher-priority
process. These processes complete their execution once main memory becomes available and their wait is
over.
Operations on a Process
1. Creation
Once the process is created, it will be ready and come into the ready queue (main memory) and will be
ready for the execution.
2. Scheduling
Out of the many processes present in the ready queue, the operating system chooses one process and
starts executing it. Selecting the process which is to be executed next is known as scheduling.
3. Execution
Once the process is scheduled for execution, the processor starts executing it. The process may move to
the blocked or wait state during execution; in that case the processor starts executing other
processes.
4. Deletion/killing
Once the process has served its purpose, the OS kills it. The context of the process
(PCB) is deleted and the process is terminated by the operating system.
Process Queues
The Operating system manages various types of queues for each of the process states. The PCB related to
the process is also stored in the queue of the same state. If the Process is moved from one state to another
state then its PCB is also unlinked from the corresponding queue and added to the other state queue in
which the transition is made.
1. Job Queue
Initially, all processes are stored in the job queue, which is maintained in secondary memory. The
long-term scheduler (job scheduler) picks some of the jobs and puts them in primary memory.
2. Ready Queue
The ready queue is maintained in primary memory. The short-term scheduler picks a job from the ready
queue and dispatches it to the CPU for execution.
3. Waiting Queue
When a process needs an I/O operation in order to complete its execution, the OS changes the state of the
process from running to waiting. The context (PCB) associated with the process is kept on the
waiting queue and is used again once the process finishes its I/O.
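The queue bookkeeping above can be sketched in a few lines of Python; the queue names and the PCB label are illustrative:

```python
from collections import deque

# Hypothetical sketch of per-state queues: the OS keeps one queue per
# process state and moves a PCB between queues on a state transition.
queues = {"ready": deque(), "waiting": deque()}

def transition(pcb, src, dst):
    """Unlink the PCB from its current state queue and link it to the new one."""
    queues[src].remove(pcb)
    queues[dst].append(pcb)

queues["ready"].append("PCB-7")
transition("PCB-7", "ready", "waiting")   # e.g. the process requests I/O
print(len(queues["ready"]), len(queues["waiting"]))  # -> 0 1
```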
Times Related to a Process
1. Arrival Time
The time at which the process enters into the ready queue is called the arrival time.
2. Burst Time
The total amount of CPU time required to execute the whole process is called the burst time. This
does not include waiting time. It is difficult to know the execution time of a process before
executing it; hence scheduling decisions based on exact burst times cannot be fully realized in practice.
3. Completion Time
The Time at which the process enters into the completion state or the time at which the process completes
its execution, is called completion time.
4. Turnaround time
The total amount of time spent by the process from its arrival to its completion, is called Turnaround
time.
5. Waiting Time
The Total amount of time for which the process waits for the CPU to be assigned is called waiting time.
6. Response Time
The difference between the arrival time and the time at which the process first gets the CPU is called
Response Time.
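These quantities can be computed mechanically once a schedule is known. A minimal Python sketch; the function name and example times are illustrative:

```python
def metrics(arrival, burst, completion, first_run):
    """Compute the scheduling metrics for one process (times are hypothetical)."""
    tat = completion - arrival          # turnaround time
    waiting = tat - burst               # time spent waiting for the CPU
    response = first_run - arrival      # delay until the first CPU allocation
    return tat, waiting, response

# e.g. a process arriving at t=0, first scheduled at t=2,
# with a 5 ms burst, finishing at t=10:
print(metrics(0, 5, 10, 2))  # -> (10, 5, 2)
```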
CPU Scheduling
In uniprogramming systems like MS-DOS, when a process waits for any I/O operation to be
done, the CPU remains idle. This is an overhead since it wastes time and causes the problem of
starvation. However, in multiprogramming systems, the CPU does not remain idle during the
waiting time of a process; it starts executing other processes. The operating system has to decide
which process the CPU will be given to.
In Multiprogramming systems, the Operating system schedules the processes on the CPU to have
the maximum utilization of it and this procedure is called CPU scheduling. The Operating System
uses various scheduling algorithm to schedule the processes.
It is the task of the short-term scheduler to schedule the CPU among the processes present
in the job pool. Whenever the running process requests some I/O operation, the short-term
scheduler saves the current context of the process (in its PCB) and changes its state from
running to waiting. While the process is in the waiting state, the short-term scheduler picks
another process from the ready queue and assigns the CPU to it. This procedure is
called context switching.
The Operating system maintains a process control block during the lifetime of the process. The
Process control block is deleted when the process is terminated or killed. There is the following
information which is saved in the process control block and is changing with the state of the
process.
In multiprogramming, if the long-term scheduler picks mostly I/O-bound processes, then most of the
time the CPU remains idle. The task of the operating system is to optimize the utilization of resources.
If most of the running processes change their state from running to waiting, there may always
be a possibility of deadlock in the system. Hence, to reduce this overhead, the OS needs to schedule
the jobs so as to get optimal CPU utilization and to avoid the possibility of deadlock.
PROCESS SCHEDULING
Scheduler:
The short-term scheduler selects the process to get the processor (cpu) from among the
processes which are already in memory (i.e., from the ready queue). This scheduler will be
executing frequently. So, it has to be very fast in order to achieve a better processor utilization.
Processor scheduling algorithms try to answer the following crucial question. Which process in
the ready queue will get the processor? In order to answer this, one should consider the relative
importance of the following performance criteria.
Performance Criteria:
1) Processor utilization:
= (processor busy time) / (processor busy time + processor idle time)
We would like to keep the processor as busy as possible, i.e., we want to reduce the idle
time, (in real life, 40% - 90%).
2) Throughput:
= no. of processes completed per time unit
3) Turnaround time (tat):
= the time interval from the submission of a process to its completion.
→ It is the sum of: time spent waiting to get into the ready queue, execution time and I/O
time.
4) Waiting time:
= the average time spent by a process waiting in the ready queue.
→ Considering only the waiting time of each process, instead of the full tat, is usually sufficient.
It is desirable that:
Throughput → Max.
Turnaround → Min.
A scheduling decision may take place under five possible circumstances:
1. After a process switches from the running state to the blocked (waiting) state (process
requests an I/O).
2. After a process switches from the running state to the ready state (interrupt occurs).
3. After a process switches from the blocked (waiting) state to the ready state (completion of
I/O).
4. After a process terminates (completion of the running process).
5. After a newly created process arrives at ready state (a new job is accepted).
For circumstances 1 and 4 → a new process must be selected from the ready processes queue.
For circumstances 2, 3 and 5 → should the previously running process be re-selected or would it be
possible to select a new process from the ready processes queue?
→ In a non-preemptive algorithm, once the CPU has been allocated to a process, that process keeps
the CPU until it releases it (either by terminating or by requesting I/O).
A scheduling algorithm which acts on all circumstances is called preemptive. (i.e. such an
algorithm can select a new process in circumstances 2, 3 and 5).
→ The highest-priority process among all ready processes is allocated the CPU. The
scheduler is called each time a process enters the ready state.
The First Come First Served (FCFS) scheduling algorithm is the simplest one. In this algorithm
the set of ready processes is managed as a FIFO (first-in-first-out) queue. The processes are serviced
by the CPU until completion in the order in which they entered the FIFO queue. A process, once allocated
the CPU, keeps it until releasing the CPU either by terminating or by requesting I/O. For example, an
interrupted process is allowed to continue running after interrupt handling is done.
Example:
Let processes P1, P2, and P3 arrive to the ready queue in that order and let the run times (CPU
burst times) of these processes be as follows:
P1 24
P2 3
P3 3
These processes are served by the CPU in FCFS order. The result can be shown in the following
Gantt chart:
P1 P2 P3
0 24 27 30
If these processes arrived in the following order: P2, P3, P1 then the Gantt chart would be as
follows:
P2 P3 P1
0 3 6 30
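The FCFS behavior above can be reproduced by a short simulation. A minimal Python sketch, assuming all processes arrive at time 0 in the given order (the function name is illustrative):

```python
def fcfs(bursts):
    """FCFS with all processes arriving at t=0, in the given order.
    Returns (gantt_boundaries, average_waiting_time)."""
    t, waits, bounds = 0, [], [0]
    for b in bursts:
        waits.append(t)      # each process waits until all earlier ones finish
        t += b
        bounds.append(t)
    return bounds, sum(waits) / len(waits)

print(fcfs([24, 3, 3]))  # order P1,P2,P3 -> ([0, 24, 27, 30], 17.0)
print(fcfs([3, 3, 24]))  # order P2,P3,P1 -> ([0, 3, 6, 30], 3.0)
```

The two calls reproduce the two Gantt charts above and show how strongly arrival order affects the average wait (17 ms versus 3 ms).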
Exercise:
Consider the following processes with the relative CPU bursts which arrive together in the order
given.
A 16
B 4
C 1
Draw the Gantt charts, and find out the average turn around time (tat) for FCFS CPU scheduling
algorithm if:
a) Coming order is A, B, C.
b) Coming order is C, B, A.
The Shortest Job First Scheduling Algorithm chooses the process that has the smallest next CPU
burst.
Example:
Assume there are 4 ready processes with their next CPU burst time as follows:
P1 6
P2 8
P3 7
P4 3
Using SJF scheduling algorithm, these processes are scheduled according to the following Gantt
Chart:
P4 P1 P3 P2
0 3 9 16 24
NOTE:
If the processes (in order P1, P2, P3, P4) are scheduled using the FCFS algorithm then the average
waiting time is (0+6+14+21) /4 = 10.25 ms.
P1 P2 P3 P4
0 6 14 21 24
→ SJF minimizes the average wait time because it services small processes before large ones (here
7 ms versus 10.25 ms for FCFS).
The main problem with the SJF algorithm is knowing each process's next CPU burst time in advance.
However, techniques exist to predict it, such as an exponential average of the previous bursts.
Another problem is that starvation of long processes may occur.
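Non-preemptive SJF itself is simple once the next burst times are (somehow) known. A minimal Python sketch, assuming all processes arrive at t=0 (function name and P-labels are illustrative):

```python
def sjf(bursts):
    """Non-preemptive SJF, all arrivals at t=0.
    Returns (execution_order, average_waiting_time)."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    t, waits = 0, {}
    for i in order:
        waits[i] = t          # waits until all shorter jobs finish
        t += bursts[i]
    avg = sum(waits.values()) / len(bursts)
    return [f"P{i + 1}" for i in order], avg

# the example above: P1=6, P2=8, P3=7, P4=3
print(sjf([6, 8, 7, 3]))  # -> (['P4', 'P1', 'P3', 'P2'], 7.0)
```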
Exercise:
Consider the following processes which are in the ready queue in the given order with the relative
next CPU bursts.
P1 7
P2 14
P3 3
P4 6
FCFS scheduling
SJF scheduling
FCFS:
P1 P2 P3 P4
0 7 21 24 30
Average W = (0+7+21+24)/4 = 13 ms
SJF:
P3 P4 P1 P2
0 3 9 16 30
Average W = (0+3+9+16)/4 = 7 ms
Shortest Remaining Time First (SRTF)
This is the preemptive version of SJF. The currently executing process is preempted from the
CPU if a process with a shorter remaining CPU burst time arrives.
Example:
(process, arrival time, next CPU burst)
P1 0 8
P2 1 4
P3 2 9
P4 3 5
P1 P2 P4 P1 P3
0 1 5 10 17 26
Average waiting time = (9 + 0 + 15 + 2) / 4 = 26/4 = 6.5 ms
Average tat = (17 + 4 + 24 + 7) / 4 = 52/4 = 13 ms
For comparison, non-preemptive SJF on the same processes gives:
P1 P2 P4 P3
0 8 12 17 26
=> SJF gives smaller wait time than FCFS since shorter processes are executed before longer ones!
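The SRTF schedule above can be reproduced by simulating one millisecond at a time. A minimal sketch (the 1 ms step and the function name are illustrative):

```python
def srtf(arrival, burst):
    """Shortest Remaining Time First, simulated 1 ms at a time.
    Returns the average waiting time."""
    n = len(burst)
    remaining = list(burst)
    finish = [0] * n
    t = 0
    while any(r > 0 for r in remaining):
        ready = [i for i in range(n) if arrival[i] <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # preempt to shortest remaining
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    waits = [finish[i] - arrival[i] - burst[i] for i in range(n)]
    return sum(waits) / n

# the example above: arrivals 0,1,2,3 and bursts 8,4,9,5
print(srtf([0, 1, 2, 3], [8, 4, 9, 5]))  # -> 6.5
```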
Exercise:
(process, arrival time, next CPU burst)
P1 0 15
P2 2 2
P3 2 9
P4 10 3
SJF scheduling
SRTF scheduling
Round Robin (RR) Scheduling
Each process is given the CPU for at most one fixed time quantum. If the running process does not
complete its CPU burst before the quantum has elapsed, the scheduler will then proceed with the next
process in the ready queue, and the preempted process joins the back of the queue.
Example:
Consider the following set of processes that arrive at time 0 in the order A, B, C, D with the following
CPU burst times. Find the average waiting time with RR and a quantum of 10 ms.
A 20
B 40
C 14
D 6
A B C D A B C B B
0 10 20 30 36 46 56 60 70 80
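The RR schedule above can be sketched as follows; P1 to P4 stand for A to D, and the function name is illustrative:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round robin: all processes arrive at t=0 in the given order.
    Returns (gantt slots as (name, start, end), average waiting time)."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    t, gantt, finish = 0, [], [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        gantt.append((f"P{i + 1}", t, t + run))
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)   # unfinished: go to the back of the ready queue
        else:
            finish[i] = t
    waits = [finish[i] - bursts[i] for i in range(len(bursts))]
    return gantt, sum(waits) / len(bursts)

# A=P1(20), B=P2(40), C=P3(14), D=P4(6), quantum = 10 ms:
gantt, avg = round_robin([20, 40, 14, 6], 10)
print([name for name, _, _ in gantt])
# -> ['P1', 'P2', 'P3', 'P4', 'P1', 'P2', 'P3', 'P2', 'P2']
print(avg)  # -> 35.5
```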
Example:
Redo the previous example by introducing switching time of 2 ms and some I/O. i.e. A has 10 ms
CPU burst – 40 ms I/O burst -10 ms CPU burst , and the others are as before.
CPU (each gap between slots is a 2 ms context switch):
A1 B1 C1 D1 B2 C2 A2 B3 B4
0 10 12 22 24 34 36 42 44 54 56 60 62 72 74 84 86 96
I/O:
idle A idle
0 10 50
Important note: smaller time quantum increases the number of context switches! So, time
quantum should be large with respect to the context switching time.
Consider the following processes with the arrival, CPU and I/O burst times given. Assume that
context-switching time is negligible and there is a single I/O device which operates in a FCFS manner.
Process Arrival CPU I/O CPU I/O CPU I/O
A 0 5 10 5 10 5 10
B 5 5 12 5 12 5 12
Draw the Gantt charts for the CPU and I/O device, and find out the average turn around time (tat)
and utilization for FCFS processor scheduling algorithm.
CPU:
A1 B1 A2 B2 A3 B3
0 5 10 15 20 27 32 37 42 49 54
I/O:
A1 B1 A2 B2 A3 B3
0 5 15 27 37 49 59 71
tat(A) = 59 ms
tat(B) = 66 ms
average tat = (59 + 66) / 2 = 62.5 ms
Priority Scheduling
→ The process with the highest priority among the ready processes is allocated the CPU (here, a larger
number means higher priority).
Example:
(process, arrival time, CPU burst, priority)
A 0 100 3
B 0 10 1
C 0 300 3
D 0 60 5
E 80 150 4
Draw the Gantt chart and compute the average waiting time in case of:
a) Non-preemptive
b) Preemptive.
a) Non-preemptive:
D A E C B
0 60 160 310 610 620
W (E) = 160 - 80 = 80 ms
b) Preemptive:
D A E A C B
0 60 80 230 310 610 620
Wait (E) = 0 ms
Note: SJF is a case of the priority scheduling where priority = 1/ (next CPU burst time).
A major problem with priority scheduling is starvation: a low-priority process may
wait forever before being executed. (When MIT's IBM 7094 was shut down in 1973, a ready
process submitted in 1967 was found still waiting for execution.)
Solution: Gradually increase the priority of processes that wait for a long time (this technique is
called aging).
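Aging can be sketched as a priority boost proportional to how long a process has waited. A hypothetical sketch; the boost_interval value, dict layout, and function name are invented for illustration:

```python
def aged_pick(processes, t, boost_interval=100):
    """Pick the next process, boosting priority by 1 for every
    boost_interval ms a process has waited (aging).
    Larger number = higher priority."""
    def effective(p):
        return p["base_priority"] + (t - p["arrival"]) // boost_interval
    return max(processes, key=effective)

ready = [{"name": "old", "base_priority": 1, "arrival": 0},
         {"name": "new", "base_priority": 3, "arrival": 400}]
# At t=500 the old process has waited 500 ms -> effective priority 1+5=6,
# beating the newer, nominally higher-priority process (3+1=4):
print(aged_pick(ready, 500)["name"])  # -> old
```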
The priority scheduling algorithm may be combined with the RR algorithm. In this case,
processes are grouped into priority classes:
Among classes a priority scheduling algorithm is used.
Inside a class a RR algorithm is used => 1 ready queue for each class.
The following shows a system with four priority classes. The scheduling algorithm is as follows:
as long as there are runnable processes in priority class 4, just run each one for one
quantum (i.e. in round-robin fashion), and never bother with lower priority classes. If priority
class 4 is empty, then run the class 3 processes in round-robin fashion, and so on. If priorities
are not adjusted from time to time, lower priority classes may all starve.
Example:
(process, arrival time, CPU burst, priority class)
A 0 22 1
B 0 12 2
C 0 23 2
D 0 11 3
E 0 21 2
F 0 15 3
G 30 22 3
Assume quantum =10 ms , and 0 switching time, draw the Gantt chart and compute the average
waiting time in case of:
Queue_1 : A
Queue_2 : B, C, E
Queue_3 : D, F, G
a)
D1 F1 D2 F2 B1 C1 E1 B2 C2 E2 C3 E3 G1 G2 G3 A1 A2 A3
0 10 20 21 26 36 46 56 58 68 78 81 82 92 102 104 114 124 126
b)
D1 F1 D2 F2 B1 G1 G2 G3 C1 E1 B2 C2 E2 C3 E3 A1 A2 A3
0 10 20 21 26 36 46 56 58 68 78 80 90 100 103 104 114 124 126
Threads
A process can be split into multiple threads. For example, in a browser, each tab can be
viewed as a thread. MS Word uses many threads: formatting text in one thread, processing input in
another thread, and so on.
Need for Threads:
o It takes far less time to create a new thread in an existing process than to create a new process.
o Threads share common data, so they do not need to use inter-process communication.
Types of Threads
1. Kernel-level thread.
2. User-level thread.
User-level thread
The operating system does not recognize user-level threads. User threads are implemented entirely in
user space and can be implemented easily. If a user-level thread performs a blocking operation, the
whole process is blocked. The kernel knows nothing about user-level threads and manages the
process as if it were single-threaded.
Examples: Java threads, POSIX threads, etc.
Advantages:
1. User-level threads are easier to implement than kernel-level threads.
2. User-level threads can be used on operating systems that do not support threads
at the kernel level.
3. The representation of user-level threads is very simple: the registers, PC, stack, and a small thread
control block are all stored in the address space of the user process.
4. It is simple to create, switch, and synchronize threads without any kernel intervention.
Disadvantages:
1. User-level threads lack coordination between the threads and the kernel.
2. The kernel schedules the process as a whole, so a process does not get more CPU time just because
it contains a large number of threads.
3. If one user-level thread blocks, the whole process blocks; kernel-level threads are better for
applications that block frequently.
Components of Threads
1. Program counter
2. Register set
3. Stack space
Benefits of Threads
o Enhanced throughput of the system: When the process is split into many threads, and each
thread is treated as a job, the number of jobs done in the unit time increases. That is why the
throughput of the system also increases.
o Effective Utilization of Multiprocessor system: When you have more than one thread in one
process, you can schedule more than one thread in more than one processor.
o Faster context switch: The context switching period between threads is less than the process
context switching. The process context switch means more overhead for the CPU.
o Responsiveness: When a process is split into several threads, and one thread completes its
execution, its output can be returned immediately.
o Communication: Multiple-thread communication is simple because the threads share the same
address space, while in process, we adopt just a few exclusive communication strategies for
communication between two processes.
o Resource sharing: Resources can be shared between all threads within a process, such as code,
data, and files. Note: The stack and register cannot be shared between threads. There is a stack
and register for each thread.
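The sharing described above is easy to see with Python's threading module: several threads update one variable in the shared address space, using a lock to keep it consistent:

```python
import threading

counter_lock = threading.Lock()
total = 0

def worker(n):
    """Each thread increments shared state; because threads share the
    process's address space, a lock is needed for consistency."""
    global total
    for _ in range(n):
        with counter_lock:
            total += 1

# four threads, each adding 1000 to the same shared variable
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(total)  # -> 4000
```

Each thread has its own stack and registers, but all four see the same `total`; a process-based version would need explicit inter-process communication to achieve the same result.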
Benefits of Multithreading
Before discussing the benefits of multithreading, you must know what multithreading is.
What is Multithreading?
Multithreading is a feature of the CPU (and operating system) that permits multiple threads to run
independently while sharing the same process resources. A thread is a self-contained sequence of
instructions that can run concurrently with other threads of the same parent process.
Multithreading allows many parts of a program to run simultaneously. These parts are referred to as
threads, and they are lightweight processes that are available within the process. As a result,
multithreading increases CPU utilization through multitasking. In multithreading, a computer may
execute and process multiple tasks simultaneously.
Multithreading needs a detailed understanding of these two terms: process and thread. A process is a
running program, and a process can also be subdivided into independent units called threads.
Examples of Multithreading
Multiple threads run behind the scenes in most of the applications you use regularly. At any given time,
you may have numerous tabs open in the system and every tab displaying different types of content.
Many threads of execution are used to display animations, load content, play a video, etc.
A word processor is another instance of a multithreaded program with which you are all familiar.
Multiple threads are used to show the content, asynchronously check content's spelling and grammar, and
generate a PDF version of content while typing. These are all happening simultaneously, with
independent threads doing these tasks internally.
Benefits of Multithreading
1. Responsiveness
Multithreading in an interactive application enables a program to continue running even if a section is
blocked or executing a lengthy process, increasing user responsiveness.
A server in a non-multithreading environment listens to a port for a request, processes the request, and
then resumes listening for another request. Other users are made to wait unnecessarily because of the
time it takes to execute a request. Instead, a better approach will be to pass the request to a worker thread
while listening on a port.
For instance, a multithreaded web browser permits the user interaction in one thread while a video is
loading in another thread. As a result, instead of waiting for the entire web page to load, the user can
continue viewing a section of a web page.
2. Resource Sharing
Processes can share resources only via two techniques:
1. Message Passing
2. Shared Memory
The programmer must explicitly structure such strategies. On the other hand, by default, threads share the
memory and resources of the process they belong to.
The advantage of sharing code and data is that it permits an app to execute multiple code threads in the
same address space.
3. Economy
Allocating memory and resources for process creation is an expensive procedure because it is a time and
space-consuming task.
Because threads share a memory with the process to which they belong, establishing and context
switching threads is more cost-effective. In general, generating and managing processes takes far more
time than threads.
4. Scalability
The advantages of multithreading become much more apparent in a multiprocessor
architecture, where threads may execute in parallel on many processors. With just one thread, it
is impossible to break a process into smaller jobs performed by different processors.
A single-threaded process can only run on one processor, regardless of the number of processors available.
Multithreading on multiple CPU machines increases parallelism.
5. Better Communication
Thread synchronization functions could be used to improve inter-process communication. Moreover,
sharing huge amounts of data across multiple threads of execution inside the same address space provides
extremely high-bandwidth, low-latency communication across various tasks within an application.
The CPU switches among threads so quickly in single-processor architecture that it creates the illusion of
parallelism, but only one thread is running at a particular time.
Disadvantages of Multithreading
There are various disadvantages of multithreading in the operating system; some of them are as follows:
o If a parent process has several threads for proper process functioning, its child processes may
also need to be multithreaded.