Chapter 2: Process and Process Scheduling

The document discusses the concept of processes in modern computer systems, highlighting the evolution from single-program execution to concurrent execution of multiple processes. It covers process states, scheduling algorithms, and the role of process control blocks, emphasizing the importance of effective scheduling for CPU utilization and system performance. Additionally, it explains various scheduling criteria and algorithms, including First Come First Serve and Shortest Job First, detailing their implementation and implications.

Process and Process Scheduling
Prof. Chaitali Godse, Watumull COE


Concept of a PROCESS

• Early computer systems allowed only one program to be executed at a time. This program had complete control of the system and had access to all the system's resources. In contrast, current-day computer systems allow multiple programs to be loaded into memory and executed concurrently.
• This evolution required firmer control and more compartmentalization of the
various programs; and these needs resulted in the notion of a process which is
a program in execution. A process is the unit of work in a modern time-sharing
system.
• The more complex the operating system is, the more it is expected to do on
behalf of its users. Although its main concern is the execution of user
programs, it also needs to take care of various system tasks that are better
left outside the kernel itself.
• A system therefore consists of a collection of processes: operating system
processes executing system code and user processes executing user code.
Potentially all these processes can execute concurrently with the CPU (or
CPUs) multiplexed among them. By switching the CPU between processes, the
operating system can make the computer more productive. In this chapter you
will read about what processes are and how they work.



Process

• Informally, as mentioned earlier, a process is a program in execution. A process is more than the program code, which is sometimes known as the text section.
• It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers. A process generally also includes the process stack, which contains temporary data (such as function parameters, return addresses, and local variables), and a data section, which contains global variables. A process may also include a heap, which is memory that is dynamically allocated during process run time.
• Although two processes may be associated with the same program, they are nevertheless considered two separate execution sequences. For instance, several users may be running different copies of the mail program, or the same user may invoke many copies of the Web browser program. Each of these is a separate process; and although the text sections are equivalent, the data, heap, and stack sections vary. It is also common to have a process that spawns many processes as it runs.

  Example text section (the small assembly fragment shown on the slide):
      MOV AL,02H
      MOV BL,03H
      ADD AL,BL
      MOV CL,AL



Process State Diagram (5-state model)
• As a process executes, it changes
state. The state of a process is
defined in part by the current
activity of that process. Each
process may be in one of the
following states:
• New. The process is being created.
• Running. Instructions are being
executed.
• Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready. The process is waiting to be
assigned to a processor.
• Terminated. The process has
finished execution.
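
As an illustration (not from the slides), these five states could be encoded in C as a simple enumeration; the names are illustrative:

    /* Hypothetical encoding of the five-state model. */
    enum proc_state {
        STATE_NEW,        /* the process is being created           */
        STATE_READY,      /* waiting to be assigned to a processor  */
        STATE_RUNNING,    /* instructions are being executed        */
        STATE_WAITING,    /* waiting for an event such as I/O       */
        STATE_TERMINATED  /* the process has finished execution     */
    };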



Process State Diagram (2-state model)
• Not-running State: Process waiting for execution.
• Running State: Process currently executing.
• When a process is first created, the OS initializes the process control block for the process, and the new process enters the system in the Not-running state. After some time, the currently running process will be interrupted by some event, and the OS will move it from the Running state to the Not-running state. The dispatcher then selects one process from among the Not-running processes and moves it to the Running state for execution.
• For example, if we implement round-robin scheduling, the running process moves to the Not-running state after its time quantum expires.
• When a process finishes execution, it exits the system, and the dispatcher again selects a new process and moves it to the Running state.
• All the processes in the Not-running state are
maintained in a queue.
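
A minimal sketch of the two-state model in C (an assumption for illustration, keeping the Not-running processes in a simple FIFO linked list; all names are illustrative):

    #include <stddef.h>

    struct pcb { int pid; struct pcb *next; };

    static struct pcb *head = NULL, *tail = NULL;   /* not-running queue */

    /* Running -> Not-running: an interrupted process rejoins the queue. */
    static void enqueue(struct pcb *p)
    {
        p->next = NULL;
        if (tail) tail->next = p; else head = p;
        tail = p;
    }

    /* Not-running -> Running: the dispatcher picks the next process. */
    static struct pcb *dispatch(void)
    {
        struct pcb *p = head;
        if (p) { head = p->next; if (!head) tail = NULL; }
        return p;
    }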



Process Control Block

Each process is represented in the operating system by a Process Control Block (PCB), which contains information associated with that specific process: the process state, program counter, CPU registers, CPU-scheduling information, memory-management information, accounting information, and I/O status information, as sketched below.
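
A sketch of how a PCB might look in C (field names and sizes are illustrative assumptions; real kernels differ):

    struct pcb {
        int           pid;             /* process identifier               */
        int           state;           /* new/ready/running/waiting/...    */
        void         *program_counter; /* address of the next instruction  */
        unsigned long registers[16];   /* saved CPU register contents      */
        int           priority;        /* CPU-scheduling information       */
        void         *page_table;      /* memory-management information    */
        long          cpu_time_used;   /* accounting information           */
        int           open_files[16];  /* I/O status information           */
    };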
Process Scheduling

• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
• The objective of time sharing is to switch the CPU among
processes so frequently that users can interact with each
program while it is running.
• To meet these objectives, the process scheduler selects an
available process (possibly from a set of several available
processes) for program execution on the CPU.
• For a single-processor system, there will never be more
than one running process. If there are more processes, the
rest will have to wait until the CPU is free and can be
rescheduled.



Process Scheduling Queue

• As processes enter the system, they are put into a job queue, which consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue.
• When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request.
• Suppose the process makes an I/O request to a shared device, such as a disk. Since there are many processes in the system, the disk may be busy with the I/O request of some other process.
• The process therefore may have to wait for the disk. The list of processes waiting for a particular I/O device is called a device queue.
Process Schedulers

• The operating system must select, for scheduling purposes, processes from these queues in some fashion. The selection process is carried out by the appropriate scheduler.
• The long-term scheduler, or job scheduler, selects
processes from the job pool and loads them into memory
for execution.
• The short-term scheduler, or CPU scheduler, selects
from among the processes that are ready to execute and
allocates the CPU to one of them.



Process Schedulers

• It is important that the long-term scheduler make a careful selection. In general, most processes can be described as either I/O bound or CPU bound.
• An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations. It is important that the long-term scheduler select a good process mix of I/O-bound and CPU-bound processes.
• If all processes are I/O bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. If all processes are CPU bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced. The system with the best performance will thus have a combination of CPU-bound and I/O-bound processes.
Process Schedulers

• The primary distinction between these two schedulers lies in frequency of execution. The short-term scheduler must select a new process for the CPU frequently. A process may execute for only a few milliseconds before waiting for an I/O request. Often, the short-term scheduler executes at least once every 100 milliseconds. Because of the short time between executions, the short-term scheduler must be fast. If it takes 10 milliseconds to decide to execute a process for 100 milliseconds, then 10 / (100 + 10) ≈ 9 percent of the CPU is being wasted simply on scheduling the work.
• The long-term scheduler executes much less frequently; minutes may separate
the creation of one new process and the next. The long-term scheduler
controls the degree of multiprogramming (the number of processes in
memory).
• If the degree of multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes leaving the
system. Thus, the long-term scheduler may need to be invoked only when a
process leaves the system. Because of the longer interval between executions,
the long-term scheduler can afford to take more time to decide which process
should be selected for execution.



Process Schedulers

• The key idea behind a medium-term scheduler is that sometimes it can be advantageous to remove processes from memory (and from active contention for the CPU) and thus reduce the degree of multiprogramming.
• Later, the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is called swapping.
• The process is swapped out, and is later swapped in, by the medium-term scheduler. Swapping may be necessary to improve the process mix or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up.



Creation of a process

(Figure slides: a parent process creates child processes; the parent handles memory and file management for user processes.)

Process Termination

(Figure slides.)
fork() system call

• The system call fork() is used to create processes. It takes no arguments and returns a process ID. The purpose of fork() is to create a new process, which becomes the child process of the caller.
• After a new child process is created, both processes will execute the next instruction following the fork() system call. Therefore, we have to distinguish the parent from the child. This can be done by testing the returned value of fork():
• If fork() returns a negative value, the creation of a child process was
unsuccessful.
• fork() returns a zero to the newly created child process.
• fork() returns a positive value, the process ID of the child process, to
the parent. Normally, the process ID is an integer. Moreover, a
process can use function getpid() to retrieve the process ID assigned
to this process.
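
The behaviour described above can be demonstrated with a short C program:

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();               /* create a child process */

        if (pid < 0) {
            perror("fork");               /* negative: creation failed      */
            return 1;
        } else if (pid == 0) {            /* zero: we are in the child      */
            printf("child: pid = %d\n", (int)getpid());
        } else {                          /* positive: child's PID (parent) */
            printf("parent: created child %d\n", (int)pid);
        }
        return 0;                         /* both processes reach this line */
    }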
Scheduling Algorithms

• Context Switch: assume main memory contains more than one process. If the CPU is executing a process and its time expires, or a higher-priority process enters main memory, the scheduler saves information about the current process in its PCB and switches to executing another process. This act of the scheduler moving the CPU from one process to another is known as a context switch.
• Non-Preemptive Scheduling: once the CPU is assigned to a process, it is not released until the completion of that process. The CPU is assigned to some other process only after the previous process has finished.
• Preemptive Scheduling: here the CPU can be released even in the middle of a process's execution. Suppose the CPU is executing process p1 when process p2 arrives. The OS compares the priorities of p1 and p2. If p1's priority is higher, the CPU continues executing p1; if p2's priority is higher, p1 is preempted and the CPU is assigned to p2.



Scheduling Criteria

• CPU utilization: we want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
• Throughput: if the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be ten processes per second.
• Turnaround time: from the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.



Scheduling Criteria

• Waiting time: the CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
• Response time: in an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.
• It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time.
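
The worked examples that follow use these standard relations, with AT = arrival time, BT = burst time, and CT = completion time:

    Turnaround Time:  TAT = CT - AT
    Waiting Time:     WT  = TAT - BT
    Response Time:    RT  = (time the process first gets the CPU) - AT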



FCFS Scheduling Algorithm

• First Come First Serve Scheduling Algorithm: the process that requests the CPU first is allotted the CPU first.
• Implementation is managed with the FIFO principle: when the CPU is free, it is allocated to the process at the head of the ready queue.
• Example (all processes arrive at t = 0; ready queue order: P1, P2, P3):

  Process | Burst Time | Waiting Time | Turnaround Time (WT + BT)
  P1      | 24         | 0            | 24
  P2      | 3          | 24           | 27
  P3      | 3          | 27           | 30

• Average WT = (0 + 24 + 27) / 3 = 17 ms
• Gantt chart:  | P1 | P2 | P3 |
                0    24   27   30
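
A small C program (an illustrative sketch, not part of the slides) that reproduces this table for processes arriving together at t = 0:

    #include <stdio.h>

    int main(void)
    {
        int burst[] = {24, 3, 3};        /* P1, P2, P3 */
        int n = 3, clock = 0, total_wt = 0;

        for (int i = 0; i < n; i++) {
            int wt  = clock;             /* waits for everything ahead of it */
            int tat = wt + burst[i];     /* TAT = WT + BT                    */
            printf("P%d: WT=%d TAT=%d\n", i + 1, wt, tat);
            total_wt += wt;
            clock    += burst[i];
        }
        printf("Average WT = %.2f ms\n", (double)total_wt / n);  /* 17.00 */
        return 0;
    }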
FCFS Scheduling Algorithm

• Convoy Effect: all the processes in the system wait for one big process to release the CPU. In the previous example the long process was served first and the short processes queued behind it; reordering so that the big process comes last reduces the average waiting time.
• Example (all processes arrive at t = 0; ready queue order: P1, P2, P3):

  Process | Burst Time | Waiting Time | Turnaround Time (WT + BT)
  P1      | 3          | 0            | 3
  P2      | 3          | 3            | 6
  P3      | 24         | 6            | 30

• Average WT = (0 + 3 + 6) / 3 = 3 ms
• Gantt chart:  | P1 | P2 | P3 |
                0    3    6    30
FCFS Example with Arrival Time

• Example (the CPU is idle from t = 1 to t = 2, since no process has arrived):

  Process | Arrival Time | Burst Time | CT | TAT | WT
  P1      | 2            | 2          | 4  | 2   | 0
  P2      | 0            | 1          | 1  | 1   | 0
  P3      | 2            | 3          | 7  | 5   | 2
  P4      | 3            | 5          | 12 | 9   | 4
  P5      | 4            | 4          | 16 | 12  | 8

• Gantt chart:  | P2 | idle | P1 | P3 | P4 | P5 |
                0    1      2    4    7    12   16
Shortest Job First Algorithm

• Pre-emptive: the running process can be interrupted to give the CPU to another, higher-priority process.
• Non-Pre-emptive: once a process gets the CPU, it keeps the CPU until it releases it, either by terminating or by requesting I/O.
• Shortest Job First: the jobs in the ready queue are sorted according to CPU burst time. The process with the shortest CPU burst time is executed first, as sketched below.
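
The core of the non-pre-emptive SJF decision can be sketched in C as follows (an illustrative sketch; struct and function names are assumptions): among the processes that have arrived and are not finished, pick the smallest burst, breaking ties by earlier arrival.

    struct job { int arrival, burst, done; };

    /* Return the index of the next job to run at time 'now', or -1. */
    int pick_sjf(const struct job *jobs, int n, int now)
    {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (jobs[i].done || jobs[i].arrival > now)
                continue;                           /* not eligible        */
            if (best < 0
                || jobs[i].burst < jobs[best].burst
                || (jobs[i].burst == jobs[best].burst
                    && jobs[i].arrival < jobs[best].arrival))
                best = i;                           /* shorter burst wins  */
        }
        return best;
    }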



Shortest Job First Non-Pre-emptive

• Problem (arrival order into the ready queue: P1, P2):

  Process | Arrival Time | Burst Time | CT | TAT | WT
  P1      | 1            | 5          | 11 | 10  | 5
  P2      | 5            | 7          | 18 | 13  | 6
  P3      | 0            | 6          | 6  | 6   | 0

• Gantt chart:  | P3 | P1 | P2 |
                0    6    11   18

Shortest Job First Non-Pre-emptive

• Problem (arrival order into the ready queue: P4, P2, P1, P5, P3):

  Process | Arrival Time | Burst Time | CT | TAT | WT | RT
  P1      | 2            | 1          | 7  | 5   | 4  | 4
  P2      | 1            | 5          | 16 | 15  | 10 | 10
  P3      | 4            | 1          | 8  | 4   | 3  | 3
  P4      | 0            | 6          | 6  | 6   | 0  | 0
  P5      | 2            | 3          | 11 | 9   | 6  | 6

• Gantt chart:  | P4 | P1 | P3 | P5 | P2 |
                0    6    7    8    11   16

Shortest Job First Pre-emptive

• Problem (arrival order: P1, P2, P3, P4; a new decision is taken at every arrival):

  Process | Arrival Time | Burst Time | CT | TAT | WT | RT
  P1      | 1            | 10         | 23 | 22  | 12 | 0
  P2      | 2            | 4          | 6  | 4   | 0  | 0
  P3      | 3            | 5          | 14 | 11  | 6  | 6
  P4      | 4            | 3          | 9  | 5   | 2  | 2

• Gantt chart:  | P1 | P2 | P2 | P4 | P3 | P1 |
                1    2    3    6    9    14   23
Shortest Job First Pre-emptive (SRTF)

• Problem (arrival order: P4, P2, P1, P5, P3; at every arrival the scheduler compares remaining burst times):

  Process | Arrival Time | Burst Time | CT | TAT | WT | RT
  P1      | 2            | 1          | 3  | 1   | 0  | 0
  P2      | 1            | 5          | 16 | 15  | 10 | 10
  P3      | 4            | 1          | 5  | 1   | 0  | 0
  P4      | 0            | 6          | 11 | 11  | 5  | 0
  P5      | 2            | 3          | 7  | 5   | 2  | 1

• Gantt chart:  | P4 | P4 | P1 | P5 | P3 | P5 | P4 | P2 |
                0    1    2    3    4    5    7    11   16
Priority Scheduling

• The CPU is allocated to the process with the highest priority. If priorities are equal, the processes are scheduled FCFS.
• SJF is a special case of priority scheduling: the larger the CPU burst, the lower the priority.
• Priority scheduling can be non-pre-emptive or pre-emptive.
• Priorities can be set internally or externally.
• Internal: time limits, memory requirements, ratio of I/O burst to CPU burst.
• External (outside the OS): importance of the process, type and amount of work done, funds being paid for use, the department sponsoring the work.



Priority Scheduling

• Starvation, or indefinite blocking, is a phenomenon associated with priority scheduling algorithms, in which a process that is ready to run can wait indefinitely because of its low priority. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
• Think of a scenario in which only one process has very low priority (for example 127) while we keep running high-priority processes; the low-priority process can wait indefinitely for the CPU. This is starvation. The solution to starvation is aging.
• Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time. For example, if the priority range is from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1 every 15 minutes. Eventually, even a process with an initial priority of 127 would take no more than 32 hours to age to a priority-0 process.
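
Aging can be sketched in C as follows (an illustrative assumption, with 0 the highest priority and 127 the lowest, matching the example above):

    struct waiting_proc { int priority; int minutes_waiting; };

    /* Called once per simulated minute for each waiting process. */
    void age(struct waiting_proc *p)
    {
        p->minutes_waiting++;
        if (p->minutes_waiting % 15 == 0 && p->priority > 0)
            p->priority--;    /* every 15 minutes, raise priority by 1 */
    }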

Priority Scheduling Non-Pre-emptive

• Problem (lower number = higher priority):

  Process | AT | BT | Priority | CT | TAT | WT | RT
  P1      | 0  | 8  | 3        | 8  | 8   | 0  | 0
  P2      | 1  | 2  | 4        | 17 | 16  | 14 | 14
  P3      | 3  | 4  | 4        | 21 | 18  | 14 | 14
  P4      | 4  | 1  | 5        | 22 | 18  | 17 | 17
  P5      | 5  | 6  | 2        | 14 | 9   | 3  | 3
  P6      | 6  | 5  | 6        | 27 | 21  | 16 | 16
  P7      | 10 | 1  | 1        | 15 | 5   | 4  | 4

• Gantt chart:  | P1 | P5 | P7 | P2 | P3 | P4 | P6 |
                0    8    14   15   17   21   22   27
Priority Scheduling Pre-emptive

• Problem (lower number = higher priority; same process set as above):

  Process | AT | BT | Priority | CT | TAT | WT | RT
  P1      | 0  | 8  | 3        | 15 | 15  | 7  | 0
  P2      | 1  | 2  | 4        | 17 | 16  | 14 | 14
  P3      | 3  | 4  | 4        | 21 | 18  | 14 | 14
  P4      | 4  | 1  | 5        | 22 | 18  | 17 | 17
  P5      | 5  | 6  | 2        | 12 | 7   | 1  | 0
  P6      | 6  | 5  | 6        | 27 | 21  | 16 | 16
  P7      | 10 | 1  | 1        | 11 | 1   | 0  | 0

• Gantt chart (a new decision is taken at every arrival):
  | P1 | P1 | P1 | P1 | P5 | P5 | P7 | P5 | P1 | P2 | P3 | P4 | P6 |
  0    1    3    4    5    6    10   11   12   15   17   21   22   27
Priority Scheduling Pre-emptive

• Problem (lower number = higher priority; arrival order: P3, P1, P2):

  Process | AT | BT | Priority | CT | TAT | WT | RT
  P1      | 1  | 5  | 1        | 6  | 5   | 0  | 0
  P2      | 5  | 7  | 3        | 18 | 13  | 6  | 6
  P3      | 0  | 6  | 2        | 11 | 11  | 5  | 0

• Gantt chart:  | P3 | P1 | P3 | P2 |
                0    1    6    11   18

Round Robin Scheduling

• In this algorithm all jobs are treated equally.
• The CPU time is divided into slices called quanta.
• With a CPU quantum of, for example, 2 ms, a job executes for only 2 ms at a time; if its remaining CPU burst is more than 2 ms, it is forcibly pre-empted, moved back to the ready queue, and the next job in the ready queue is executed.
• When a job's burst completes, the job may exit or move to the waiting state (for example, for I/O).
• If a job needs only 1 ms of its quantum, the CPU does not stay idle for the remaining time; the next task is taken up immediately (see the sketch below).
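
A minimal round-robin sketch in C (an illustrative sketch, valid when all processes arrive at t = 0, so cycling through the array matches the queue order); with bursts 24, 3, and 3 ms and a 3 ms quantum it reproduces the first example below:

    #include <stdio.h>

    int main(void)
    {
        int rem[] = {24, 3, 3};      /* remaining burst of P1, P2, P3 */
        int n = 3, q = 3, clock = 0, left = n;

        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (rem[i] == 0) continue;           /* already finished   */
                int slice = rem[i] < q ? rem[i] : q; /* at most one quantum */
                clock  += slice;
                rem[i] -= slice;
                if (rem[i] == 0) {                   /* burst completed    */
                    printf("P%d completes at t=%d\n", i + 1, clock);
                    left--;
                }
            }
        }
        return 0;
    }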



Round Robin Scheduling Algorithm

• Example (time quantum = 3 ms; all processes arrive at t = 0; ready queue order: P1, P2, P3):

  Process | Burst Time | CT | TAT | WT
  P1      | 24         | 30 | 30  | 6
  P2      | 3          | 6  | 6   | 3
  P3      | 3          | 9  | 9   | 6

• Gantt chart:  | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 | P1 | P1 |
                0    3    6    9    12   15   18   21   24   27   30

• Average WT = (6 + 3 + 6) / 3 = 5 ms
Round Robin Scheduling

• Example (time quantum = 3 ms; ready queue order: P1, P3, P1, P2, P4, P3, P5, P1, P3, P5):

  Process | AT | BT | CT | TAT | WT
  P1      | 0  | 8  | 22 | 22  | 14
  P2      | 5  | 2  | 11 | 6   | 4
  P3      | 1  | 7  | 23 | 22  | 15
  P4      | 6  | 3  | 14 | 8   | 5
  P5      | 8  | 5  | 25 | 17  | 12

• Gantt chart:  | P1 | P3 | P1 | P2 | P4 | P3 | P5 | P1 | P3 | P5 |
                0    3    6    9    11   14   17   20   22   23   25
Round Robin Scheduling

• Example (time quantum = 2 ms; ready queue order: P1, P2, P3, P1, P4, P2, P1):

  Process | AT | BT | CT | TAT | WT
  P1      | 0  | 5  | 12 | 12  | 7
  P2      | 1  | 4  | 11 | 10  | 6
  P3      | 2  | 2  | 6  | 4   | 2
  P4      | 4  | 1  | 9  | 5   | 4

• Gantt chart:  | P1 | P2 | P3 | P1 | P4 | P2 | P1 |
                0    2    4    6    8    9    11   12
Threads

• A process can be divided into a number of lightweight processes; each lightweight process is called a Thread.
• Thread States:
• 1. Born state: the thread has just been created.
• 2. Ready state: the thread is waiting for the CPU.
• 3. Running: the system has assigned the processor to the thread.
• 4. Sleep: a sleeping thread becomes ready after the designated sleep time expires.
• 5. Dead: the execution of the thread has finished.

• E.g., a word processor: typing, formatting, spell check, and saving the file can each run as a thread.
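
The word-processor example maps naturally onto POSIX threads; here is a minimal sketch in C (the task names are illustrative):

    #include <stdio.h>
    #include <pthread.h>

    /* Each thread runs one task of the process concurrently. */
    void *task(void *name)
    {
        printf("thread running: %s\n", (const char *)name);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, task, "typing");
        pthread_create(&t2, NULL, task, "spell check");
        pthread_join(t1, NULL);    /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }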
Threads

• https://www.youtube.com/watch?v=O3EyzlZxx3g&t=20s
Multithreading

• A process is divided into a number of smaller tasks; each task is called a Thread. When a number of threads within a process execute at a time, it is called Multithreading.
• If a program is multithreaded, then even when some portion of it is blocked, the whole program is not blocked; the rest of the program continues working if multiple CPUs are available.
• Multithreading gives the best performance. If we have only a single thread, no performance benefit is achieved, regardless of the number of CPUs available.
• Process creation is heavy-weight while thread creation is light-weight.
• Threads can simplify code and increase efficiency.
• Kernels are generally multithreaded.



Threads

• 1) User Threads: thread creation, scheduling, and management happen in user space, handled by a thread library. User threads are fast to create and manage. If a user thread performs a blocking system call, all the other threads in that process are also blocked; the whole process is blocked.
• Advantages
• Thread switching does not require kernel-mode privileges.
• User-level threads can run on any operating system.
• Scheduling can be application-specific in the user-level thread.
• User-level threads are fast to create and manage.
• Disadvantages
• In a typical operating system, most system calls are blocking.
• A multithreaded application cannot take advantage of multiprocessing.
Threads

• 2) Kernel Threads: the kernel creates, schedules, and manages these threads. They are slower to create and manage. If one thread in a process blocks, the overall process need not be blocked.
• Advantages
• The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.
• Disadvantages
• Kernel threads are generally slower to create and manage than user threads.
• Transfer of control from one thread to another within the same process requires a mode switch to the kernel.



Multithreading Models

• Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models:
• Many-to-many relationship.
• Many-to-one relationship.
• One-to-one relationship.
Multithreading Models

• Many-to-One Model
• The many-to-one model maps many user-level threads to one kernel-level thread.
• Thread management is done in user space. When a thread makes a blocking system call, the entire process is blocked.
• Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
• User-level thread libraries implemented on an operating system whose kernel does not support kernel threads use the many-to-one model.
Multithreading Models

• One-to-One Model
• There is a one-to-one relationship between each user-level thread and a kernel-level thread.
• This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call.
• It supports multiple threads executing in parallel on multiprocessors.
• The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one relationship model.
Multithreading Models

• Many-to-Many Model
• In this model, many user-level threads are multiplexed onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine.
• In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
