
CPU Scheduling Unit - 3

CPU Scheduling
CPU scheduling is a process used by the operating system to decide
which task or process gets to use the CPU at a particular time. This is
important because a CPU core can execute only one task at a time, while
there are usually many tasks waiting to be processed. The main goals of
CPU scheduling are to:
 Maximize CPU utilization.
 Minimize the response and waiting time of processes.

Need for CPU Scheduling

CPU scheduling is the process of deciding which process will own the
CPU while another process is suspended. The main function of CPU
scheduling is to ensure that whenever the CPU becomes idle, the OS
selects one of the processes waiting in the ready queue.
In multiprogramming, if the long-term scheduler selects many I/O-bound
processes, then most of the time the CPU remains idle. An effective
scheduler improves resource utilization.

Terminologies Used in CPU Scheduling


 Arrival Time: The time at which the process arrives in the ready
queue.
 Completion Time: The time at which the process completes its
execution.
 Burst Time: The time required by a process for CPU execution.
 Turn Around Time: The difference between completion time and
arrival time.
Turn Around Time = Completion Time – Arrival Time
 Waiting Time (W.T.): The difference between turnaround time and
burst time.
Waiting Time = Turn Around Time – Burst Time
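The two formulas above can be checked with a short calculation. The process names and times below are made-up sample values, not taken from the text:

```python
# Hypothetical per-process times: arrival, burst, completion.
processes = {
    "P1": {"arrival": 0, "burst": 5, "completion": 5},
    "P2": {"arrival": 1, "burst": 3, "completion": 8},
    "P3": {"arrival": 2, "burst": 8, "completion": 16},
}

for name, p in processes.items():
    # Turn Around Time = Completion Time - Arrival Time
    tat = p["completion"] - p["arrival"]
    # Waiting Time = Turn Around Time - Burst Time
    wt = tat - p["burst"]
    print(f"{name}: TAT={tat}, WT={wt}")
```

For P2, for example, this prints TAT=7 (8 − 1) and WT=4 (7 − 3).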

Prepared by: Mrs Ankita Tripathi [KCNIT, College of Education, Banda]

Different Types of CPU Scheduling

There are mainly two types of scheduling methods:

 Preemptive Scheduling: In preemptive scheduling, the CPU can be
taken away from a running process. Scheduling decisions of this kind
take place when a process switches from the running state to the ready
state, or from the waiting state to the ready state.

 Non-Preemptive Scheduling: In non-preemptive scheduling, a process
keeps the CPU until it releases it. Scheduling decisions of this kind
take place when a process terminates, or when a process switches from
the running state to the waiting state.

CPU Scheduling Criteria


1. CPU utilization
The main objective of any CPU scheduling algorithm is to keep the
CPU as busy as possible. Theoretically, CPU utilization can range from
0 to 100 percent, but in a real system it varies from about 40 percent
(lightly loaded) to 90 percent (heavily loaded), depending on the load
on the system.

2. Throughput
A measure of the work done by the CPU is the number of processes
being executed and completed per unit of time. This is called
throughput. The throughput may vary depending on the length or
duration of the processes.

3. Turnaround Time
For a particular process, an important criterion is how long it takes to
execute that process. The time elapsed from the time of submission of a
process to the time of completion is known as the turnaround time.
Turn-around time is the sum of times spent waiting to get into memory,
waiting in the ready queue, executing in CPU, and waiting for I/O.

Turn Around Time = Completion Time – Arrival Time.


4. Waiting Time
A scheduling algorithm does not affect the time required to complete
the process once it starts execution. It only affects the waiting time of a
process i.e. time spent by a process waiting in the ready queue.

Waiting Time = Turnaround Time – Burst Time.

5. Response Time
In an interactive system, turnaround time is not the best criterion. A
process may produce some output fairly early and continue computing
new results while previous results are being output to the user. Thus
another criterion is the time from submission of a request until the
first response is produced. This measure is called response time.


Response Time = Time of first CPU allocation – Arrival Time

6. Completion Time
The completion time is the time when the process stops executing,
which means that the process has completed its burst time and is
completely executed.

7. Priority
If the operating system assigns priorities to processes, the scheduling
mechanism should favor the higher-priority processes.

8. Predictability
A given process should always run in about the same amount of time
under a similar system load.

Process States
In operating systems, processes go through several distinct states as they
are managed by the CPU scheduler. These states are: New, Ready,
Running, Wait/Block, and Termination.

Here's a breakdown of each state and its implications for CPU
scheduling:
 New:
A process is initially in the "New" state when it's created but hasn't
been admitted into the ready queue yet.
 Ready:
The "Ready" state signifies that a process is waiting to be executed by
the CPU and is available for scheduling.
 Running:
A process in the "Running" state is currently being executed by the
CPU.
 Wait/Block:
A process in the "Wait/Block" state is waiting for a certain event to
occur, such as an I/O operation or a system call, and is therefore not
currently eligible for CPU execution.
 Termination:
The "Termination" state signifies that the process has completed its
execution or has been terminated.
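The five states and their legal transitions can be captured as a small table. This is an illustrative sketch; the transition table below follows the state descriptions above, and the function name is made up for this example:

```python
# Allowed process state transitions (illustrative sketch).
TRANSITIONS = {
    "New": {"Ready"},                                    # admitted by the long-term scheduler
    "Ready": {"Running"},                                # dispatched by the CPU scheduler
    "Running": {"Ready", "Wait/Block", "Termination"},   # preempted, blocks on I/O, or exits
    "Wait/Block": {"Ready"},                             # awaited event (e.g. I/O) completes
    "Termination": set(),                                # final state: no outgoing transitions
}

def can_move(src: str, dst: str) -> bool:
    """Return True if a process may move directly from src to dst."""
    return dst in TRANSITIONS.get(src, set())

print(can_move("Running", "Wait/Block"))  # True: process issues an I/O request
print(can_move("Wait/Block", "Running"))  # False: it must pass through Ready first
```

Note that a blocked process never jumps straight back onto the CPU; it re-enters the ready queue and waits to be scheduled again.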

Process Transition Diagram

Scheduler

A scheduler is a software component responsible for managing how the CPU
allocates its time among multiple processes or tasks. It determines which process
should be executed next and for how long, ensuring efficient CPU utilization and
fair resource sharing. The scheduler's primary goal is to create the illusion that
each process is running on its own dedicated computer, even though they are
sharing the same CPU resources.

Functions of a Scheduler:
 Process Selection:
The scheduler selects processes from the ready queue (processes that are ready to
be executed) and loads them into memory for execution.
 CPU Allocation:
It allocates the CPU to the selected process, allowing it to execute its
instructions.
 Process Swapping:
When a process needs to wait for an I/O operation or its time slice expires, the
scheduler may swap it out of the CPU and place it in a waiting or ready queue.
 Resource Management:
The scheduler helps manage CPU resources, preventing any single process from
monopolizing the CPU and ensuring that all processes receive a fair share of
processing time.
 Scheduling Algorithms:
Different scheduling algorithms, such as First-Come, First-Served (FCFS),
Shortest Job First (SJF), Round Robin, and priority-based scheduling, can be used
to determine which process to execute next.
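As a minimal sketch, the simplest of the algorithms named above, FCFS, can be simulated in a few lines. The process data is hypothetical and the function is only an illustration, not an OS implementation:

```python
def fcfs(procs):
    """Simulate First-Come, First-Served (non-preemptive) scheduling.

    procs: list of (name, arrival_time, burst_time), assumed sorted by arrival.
    Returns {name: (completion, turnaround, waiting)}.
    """
    time, results = 0, {}
    for name, arrival, burst in procs:
        time = max(time, arrival)   # CPU may sit idle until the process arrives
        time += burst               # run the process to completion
        tat = time - arrival        # Turn Around Time = Completion - Arrival
        results[name] = (time, tat, tat - burst)
    return results

print(fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]))
# {'P1': (5, 5, 0), 'P2': (8, 7, 4), 'P3': (16, 14, 6)}
```

A preemptive algorithm such as Round Robin would differ only in that a process can be put back into the queue before its burst is finished.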

Types of Schedulers:


 Long-term scheduler (Job scheduler): Selects processes from secondary
memory (disk) and puts them into the ready queue for execution.
 Short-term scheduler (CPU scheduler): Selects a process from the ready queue
and allocates the CPU to it.
 Medium-term scheduler: May be used to remove processes from memory (swap
them out to disk) to improve memory usage and prioritize other processes.

Importance of Scheduling:
 Efficient CPU Utilization: Ensures that the CPU is not idle and that processes are
executed efficiently.
 Fair Resource Sharing: Allows multiple users and processes to share system
resources effectively.
 Reduced Waiting Time: Minimizes the time processes spend waiting for their
turn to run on the CPU.
 Multitasking and Multiprogramming: Enables operating systems to run multiple
programs concurrently.

Process Address Space


A process address space, also known as a virtual address space, is the set of
memory addresses that a process can use. It's a logical view of memory, separate
from the physical memory. Each process has its own unique address space, though
some processes can share address spaces (like threads).
 Logical Addresses:
Processes use logical addresses (also called virtual addresses) to reference
memory locations. These addresses are relative to the process's own address
space and not the physical memory location.
 Address Space Size:
The size of a process's address space depends on the system's architecture (32-bit
or 64-bit). A 32-bit system provides a 4 GB virtual address space (commonly split
between user and kernel, e.g. 2 GB of user space on 32-bit Windows), while 64-bit
systems can have much larger address spaces (e.g., 128 TB on Windows 8.1 and
later).
 Memory Areas:
A process's address space can be divided into memory areas, each with its own
access permissions (read, write, execute). The OS can dynamically add or remove
these memory areas.
 Virtual Memory:
The OS uses virtual memory to map logical addresses to physical addresses. This
allows processes to use memory that exceeds the system's physical RAM.


Process Identification Information

Threads
A thread is a single sequence stream within a process. Because threads share most
properties of the process that contains them, they are called lightweight
processes. On a single-core processor, threads are rapidly switched, giving the
illusion that they execute in parallel. On multi-core systems, threads can
execute truly in parallel across different cores. Each thread passes through its
own set of states.

Types of Threads
There are mainly two types of threads -

User Level Thread (ULT)

User-level threads are implemented in a user-level library; they are not created
using system calls. Thread switching does not require a call into the OS or an
interrupt to the kernel. The kernel does not know about user-level threads and
manages the process as if it were single-threaded.

Advantages of ULT
 Can be implemented on an OS that doesn't support multithreading.
 Simple representation: each thread needs only a program counter,
register set, and stack space.
 Simple to create, since no kernel intervention is required.
 Thread switching is fast, since no OS calls need to be made.

Disadvantages of ULT
 Little or no coordination between the threads and the kernel.
 If one thread causes a page fault, the entire process blocks.

Kernel Level Thread (KLT)

The kernel knows about and manages the threads. Instead of a thread table in each
process, the kernel itself keeps a master thread table that tracks all threads in
the system. In addition, the kernel maintains the traditional process table to
keep track of processes. The OS kernel provides system calls to create and manage
threads.
Advantages of KLT
 Since the kernel has full knowledge of all threads in the system, the
scheduler may decide to give more time to processes with a large number
of threads.
 Good for applications that frequently block.

Disadvantages of KLT
 Slower and less efficient than user-level threads, since thread
operations require system calls.
 Each thread requires a thread control block, which is an overhead.

Advantages of Threading

 Responsiveness: A multithreaded application increases responsiveness to the
user.
 Resource Sharing: Resources like code and data are shared between threads,
allowing a multithreaded application to have several threads of activity
within the same address space.
 Increased Concurrency: Threads may run in parallel on different
processors, increasing concurrency on a multiprocessor machine.
 Lesser Cost: It costs less to create and context-switch threads than processes.
 Lesser Context-Switch Time: Threads take less context-switch time than
processes.
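The resource-sharing advantage can be seen with Python's standard threading module: all threads live in one address space, so they read and update the same variable directly (a lock is needed to synchronize the updates). The worker function and counts are made up for this sketch:

```python
import threading

counter = 0                  # shared data: visible to every thread in the process
lock = threading.Lock()      # synchronizes access to the shared counter

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # avoid a lost-update race on counter
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: all four threads updated the same variable
```

Without the lock, the final count would usually be wrong, which is exactly the synchronization complexity listed under the disadvantages below.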

Disadvantages of Threading
 Complexity: Threading can make programs more complicated to write and
debug, because threads need to synchronize their actions to avoid conflicts.
 Resource Overhead: Each thread consumes memory and processing power,
so having too many threads can slow down a program and use up system
resources.
 Difficulty in Optimization: It can be challenging to optimize threaded
programs for different hardware configurations, as thread performance can
vary based on the number of cores and other factors.
 Debugging Challenges: Identifying and fixing issues in threaded programs
can be more difficult than in single-threaded programs, making
troubleshooting complex.

Deadlock
A deadlock is a situation where a set of processes is blocked because
each process is holding a resource and waiting for another resource
acquired by some other process.


Deadlock is a situation in computing where two or more processes are
unable to proceed because each is waiting for the other to release
resources.
 Key concepts include mutual exclusion, resource holding, circular
wait, and no preemption.

How Does Deadlock Occur in the Operating System?

A deadlock occurs in an operating system when two or more processes
each hold some resources and wait for resources held by the other(s).
For example, Process 1 may hold Resource 1 and wait for Resource 2,
which is acquired by Process 2, while Process 2 is in turn waiting for
Resource 1.

Necessary Conditions/Characterization for Deadlock in OS

Deadlock can arise only if the following four conditions hold
simultaneously (necessary conditions):


Mutual Exclusion
There must be a resource that can only be held by one process at a time. For
example, if there is a single instance of Resource 1, only one process (say
Process 1) can hold it at a time.

Hold and Wait

A process can hold multiple resources and still request more resources held by
other processes. For example, Process 2 may hold Resource 2 and Resource 3 while
requesting Resource 1, which is held by Process 1.

No Preemption
A resource cannot be forcibly taken from a process; a process can only release a
resource voluntarily. For example, Process 2 cannot preempt Resource 1 from
Process 1; it will be released only when Process 1 relinquishes it voluntarily
after its execution is complete.


Circular Wait
A process is waiting for the resource held by a second process, which is waiting
for the resource held by a third process, and so on, until the last process is
waiting for a resource held by the first process. This forms a circular chain.
For example: Process 1 is allocated Resource 2 and is requesting Resource 1,
while Process 2 is allocated Resource 1 and is requesting Resource 2. This forms
a circular wait loop.
Methods of Handling Deadlocks in Operating System

There are three ways to handle deadlock:
1. Deadlock Prevention or Avoidance
2. Deadlock Detection and Recovery
3. Deadlock Ignorance

Prevention:
Deadlock can be prevented by ensuring that at least one of the four necessary
conditions cannot hold, as follows.


Condition-1:
Mutual Exclusion:

 Shared resources, such as read-only files, do not cause deadlocks.
 Unfortunately, some resources, such as printers and tape drives, require
exclusive access by a single process.

Condition-2:
Hold and Wait:
To avoid this condition, processes must be prevented from holding one or more
resources while also waiting for one or more others. There are a few possibilities
here:
 Make it a requirement that all processes request all resources at the same time.
This can be a waste of system resources if a process requires one resource
early in its execution but does not require another until much later.
 Processes that hold resources must release them prior to requesting new ones,
and then re-acquire the released resources alongside the new ones in a single
new request. This can be a problem if a process uses a resource to partially
complete an operation and then cannot re-acquire it after it is released.
 If a process necessitates the use of one or more popular resources, either of the
methods described above can result in starvation.

Condition-3:
No Preemption:
When possible, preemption of process resource allocations can help to avoid
deadlocks.
 One approach is that if a process is forced to wait when requesting a new
resource, all other resources previously held by this process are implicitly
released (preempted), forcing this process to re-acquire the old resources
alongside the new resources in a single request, as discussed previously.
 Another approach is that when a resource is requested, and it is not available,
the system looks to see what other processes are currently using those
resources and are themselves blocked while waiting for another resource. If
such a process is discovered, some of their resources may be preempted and
added to the list of resources that the process is looking for.
 Either of these approaches may be appropriate for resources whose states can
be easily saved and restored, such as registers and memory, but they are
generally inapplicable to other devices, such as printers and tape drives.


Condition-4:
Circular Wait:
 To avoid circular waits, number all resources and insist that processes
request resources in strictly increasing (or decreasing) order.
 To put it another way, before requesting resource Rj, a process must first
release all Ri such that i >= j.
 The relative ordering of the various resources is a significant challenge in
this scheme.
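The resource-ordering rule can be sketched in Python: every thread acquires locks in ascending index order, so no thread can ever hold a higher-numbered resource while waiting for a lower-numbered one, and a circular wait is impossible. The numbering and helper names are illustrative:

```python
import threading

# Number all resources; rule: always acquire in strictly increasing order.
locks = [threading.Lock() for _ in range(3)]

def acquire_in_order(*indices):
    """Acquire the requested locks in ascending index order; return the order."""
    ordered = sorted(indices)
    for i in ordered:
        locks[i].acquire()
    return ordered

def release(ordered):
    for i in reversed(ordered):
        locks[i].release()

# Two threads want resources {0, 2} in opposite orders. Both end up locking
# 0 before 2, so neither holds 2 while waiting for 0 -- no circular wait.
def task(a, b):
    held = acquire_in_order(a, b)
    release(held)

t1 = threading.Thread(target=task, args=(0, 2))
t2 = threading.Thread(target=task, args=(2, 0))
t1.start(); t2.start()
t1.join(); t2.join()
print("completed without deadlock")
```

Had each thread locked its resources in the order requested, the classic hold-and-wait cycle could occur.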

Deadlock Avoidance:

 The general idea behind deadlock avoidance is to grant a resource request
only when it cannot lead the system into a deadlock.
 This necessitates more information about each process and results in low
device utilization. (This is a conservative approach.)
 The scheduler only needs to know the maximum number of each resource that
a process could potentially use in some algorithms. In more complex
algorithms, the scheduler can also use the schedule to determine which
resources are required and in what order.
 When a scheduler determines that starting a process or granting resource
requests will result in future deadlocks, the process is simply not started or the
request is denied.
 The number of available and allocated resources, as well as the maximum
requirements of all processes in the system, defines a resource allocation state.
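A well-known avoidance algorithm of this kind is the Banker's algorithm: given the allocation state and maximum requirements described above, its safety check decides whether every process can still run to completion in some order. The matrices below are made-up sample values in the style of textbook examples:

```python
def is_safe(available, allocation, maximum):
    """Banker's algorithm safety check.

    available:  free units per resource type
    allocation: allocation[i][j] = units of resource j held by process i
    maximum:    maximum[i][j]    = maximum demand of process i for resource j
    """
    work = available[:]
    need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish and release everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)   # safe iff every process could be driven to completion

print(is_safe(available=[3, 3, 2],
              allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
              maximum=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]))
# True: a safe sequence such as P1, P3, P4, P2, P0 exists
```

A request is granted only if the state after the (tentative) allocation still passes this check.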

Deadlock Detection:
 If deadlocks cannot be avoided, another approach is to detect them and
recover in some way.
 Aside from the performance hit of constantly checking for deadlocks, a
policy/algorithm for recovering from deadlocks must be in place, and when
processes must be aborted or have their resources preempted, there is the
possibility of lost work.
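Detection is typically done by searching a wait-for graph (an edge from P to Q means P waits for a resource Q holds) for a cycle. A minimal depth-first-search sketch, with hypothetical edge data:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph.

    wait_for: dict mapping each process to the set of processes it waits on.
    A cycle in this graph means the involved processes are deadlocked.
    """
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on current path / finished
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:              # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits for P2 and P2 waits for P1: circular wait, hence deadlock.
print(has_deadlock({"P1": {"P2"}, "P2": {"P1"}}))   # True
print(has_deadlock({"P1": {"P2"}, "P2": set()}))    # False
```

An OS would run such a check periodically or whenever a resource request cannot be granted immediately, which is the performance cost mentioned above.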

Recovery From Deadlock:

There are three basic approaches to recovering from a deadlock:
1. Inform the system operator and allow him/her to intervene manually.
2. Stop one or more of the processes involved in the deadlock.
3. Preempt resources from one or more of the deadlocked processes.

Approach of Recovery From Deadlock :

Approach-1:
Process Termination:
There are two basic approaches for recovering resources allocated to terminated
processes as follows.
1. Stop all processes that are involved in the deadlock. This does break the
deadlock, but at the expense of terminating more processes than are absolutely
necessary.
2. Processes should be terminated one at a time until the deadlock is broken. This
method is more conservative, but it necessitates performing deadlock
detection after each step.
In the latter case, many factors can influence which process is terminated
next:
1. The priority of the process.
2. How long the process has been running, and how close it is to completion.
3. How many and what kind of resources the process holds (are they simple to
preempt and restore?).
4. How many more resources the process needs in order to complete.
5. How many processes will have to be killed.
6. Whether the process is batch or interactive.

Approach-2:
Resource Preemption:
When preempting resources to break the deadlock, three critical issues must be
addressed:

1. Selecting a victim -
Many of the decision criteria outlined above apply to determine which
resources to preempt from which processes.

2. Rollback -
A preempted process should ideally be rolled back to a safe state before the
point at which that resource was originally assigned to the process.
Unfortunately, determining such a safe state can be difficult or impossible, so
the only safe rollback is to start from the beginning. (In other words, halt and
restart the process.)


3. Starvation -
How do you ensure that a process does not starve because its resources are
constantly being preempted? One option is to use a priority system and raise
the priority of a process whenever its resources are preempted. It should
eventually gain a high enough priority that it will no longer be preempted.
