UNIT3solved OS

The document outlines key concepts of Operating System (OS) principles, focusing on process management, including definitions of processes, process control blocks (PCBs), context switches, threads, and various types of scheduling queues. It explains the roles of different types of schedulers, the importance of CPU and I/O bursts, and the distinctions between independent and cooperating processes. Additionally, it discusses performance metrics like throughput, turnaround time, and response time, as well as scheduling strategies such as preemptive and non-preemptive scheduling.

National Education Policy – 2020 [NEP-2020]

QUESTION BANK OF Operating System Concepts


IV SEMESTER BCA
UNIT III
2 marks Questions
1. What is a process? List the four sections of a process.
 A process can be thought of as a program in execution. A process needs
certain resources, such as CPU time, memory, files, and I/O devices, to
accomplish its task. These resources are allocated to the process either
when it is created or while it is executing.
 A process is the unit of work in most systems; such a system consists of
a collection of processes.
 The four sections of a process are: stack, heap, data, and text.

2. What is PCB? List its component.


Each process is represented in the OS by a process control block (PCB),
also called a task control block. It contains many pieces of information
associated with a specific process, including:
 Process state
 Program counter
 CPU registers
 CPU scheduling information.
 Memory-management information
 Accounting information
 I/O status information

3. What is Context switch?

Mr. K Praveen Kumar, SMC SHIRVA Page 1


When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process.

4. Define Thread.
 A thread, sometimes called a lightweight process (LWP), is a basic unit of
CPU utilization;
 It comprises a thread ID, a program counter, a register set, and a stack.
 It shares with other threads belonging to the same process its code section,
data section, and other operating-system resources, such as open files and
signals.

5. What is meant by a scheduling queue?


Scheduling Queues: As processes enter the system, they are put into a job
queue. This queue consists of all processes in the system. The processes
that are residing in main memory and are ready and waiting to execute are
kept on a list called the ready queue.
The ready queue is generally stored as a linked list; a ready-queue header
contains pointers to the first and last PCBs in the list. Each PCB has a
pointer field that points to the next process in the ready queue. The list
of processes waiting for a particular I/O device is called a device queue.
A new process is initially put in the ready queue. It waits in the ready
queue until it is selected for execution and is given the CPU. Once the
process is allocated the CPU and is executing, one of several events could
occur:
 The process could issue an I/O request and then be placed in an I/O queue.
 The process could be removed forcibly from the CPU, as the result of an
interrupt, and be put back in the ready queue.
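The linked-list layout described above can be sketched in C (a minimal
illustrative sketch; the PCB fields and function names here are invented
for this example, not taken from any real kernel):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal PCB: just a PID plus the pointer field that links it to the
 * next process in the ready queue. */
struct pcb {
    int pid;
    struct pcb *next;
};

/* The ready-queue header holds pointers to the first and last PCBs. */
struct ready_queue {
    struct pcb *head;
    struct pcb *tail;
};

/* Append a PCB at the tail (FIFO order). */
void rq_enqueue(struct ready_queue *rq, struct pcb *p) {
    p->next = NULL;
    if (rq->tail)
        rq->tail->next = p;
    else
        rq->head = p;
    rq->tail = p;
}

/* Remove and return the PCB at the head, or NULL if the queue is empty. */
struct pcb *rq_dequeue(struct ready_queue *rq) {
    struct pcb *p = rq->head;
    if (p) {
        rq->head = p->next;
        if (!rq->head)
            rq->tail = NULL;
    }
    return p;
}
```

A process removed forcibly from the CPU would simply be re-enqueued at the
tail with rq_enqueue().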

6. List three types of implementation of queues in buffering


Job queue, ready queue and waiting queue.
7. Define dispatcher and dispatch latency.
Dispatcher: It gives control of the CPU to the process selected by the short-
term scheduler. The function involves:
1. Switching context
2. Switching to user mode &
3. Jumping to the proper location in the user program to restart that program
 It should be as fast as possible, since it is invoked during every process
switch.
Dispatch latency means the time taken by the dispatcher to
 Stop one process and
 Start another running.

8. Define independent and cooperating processes.


• Independent Processes – processes that cannot affect other processes or be
affected by other processes executing in the system.
• Cooperating Processes – processes that can affect other processes or be
affected by other processes executing in the system.

9. What do you mean by Interprocess Communication (IPC)?


Interprocess Communication (IPC) is communication between processes. It
provides a mechanism that allows processes to communicate and to
synchronize their actions.

10. What is a scheduler? Mention their types.


A scheduler is a program which selects a user program (for example, from
disk) and allocates the CPU to it. The types are: long-term, short-term,
and medium-term schedulers.

11. What is ready queue and device queue?


As processes enter the system, they are put into a job queue. This queue
consists of all processes in the system. The processes that are residing in
main memory and are ready and waiting to execute are kept on a list called
the ready queue. The ready queue is generally stored as a linked list; a
ready-queue header contains pointers to the first and last PCBs in the
list. Each PCB has a pointer field that points to the next process in the
ready queue. The list of processes waiting for a particular I/O device is
called a device queue.

12. Define CPU and I/O Burst.


CPU burst: the scheduling process state in which the process executes on
the CPU.
I/O burst: the scheduling process state in which the process performs (or
waits for) I/O.

13. Define throughput and turnaround time.


 Throughput: If the CPU is busy executing processes, then work is being
done. One measure of work is the number of processes that are completed per
time unit, called throughput
 Turnaround time: The interval from the time of submission of a process
to the time of completion is the turnaround time. Turnaround time is the sum of
the periods spent waiting to get into memory, waiting in the ready queue,
executing on the CPU, and doing I/O.

14. Define response time and arrival time.


Response time: Another measure is the time from the submission of a request
until the first response is produced. This measure, called response time, is the
amount of time it takes to start responding, but not the time that it takes to output
that response. The turnaround time is generally limited by the speed of the
output device
Arrival Time: The time at which the process enters into the ready queue is
called the arrival time.

15. What is waiting time?


Waiting time: The CPU scheduling algorithm does not affect the amount of
time during which a process executes or does I/O; it affects only the amount of
time that a process spends waiting in the ready queue. Waiting time is the sum
of the periods spent waiting in the ready queue.

16. Define preemptive scheduling and non-preemptive scheduling.


Non Preemptive Scheduling: Once the CPU has been allocated to a
process, the process keeps the CPU until it releases the CPU either by
terminating or by switching to the waiting state.



Preemptive Scheduling: This is driven by the idea of prioritized
computation. A runnable process may be temporarily suspended (preempted)
so that the CPU can be given to another process.

17. What do you mean by CPU scheduling?


CPU scheduling: The process by which the system chooses which job will run
next if several jobs are ready to run at the same time

18. Difference between job and process.


Job: A set of commands or processes executed by a batch system
A process can be thought of as a program in execution. A process will need
certain resources — such as CPU time, memory, files, and I/O devices — to
accomplish its task. These resources are allocated to the process either when it
is created, or while it is executing. A process is the unit of work in most
systems; such a system consists of a collection of processes.

19. List the limitation of SJF.


Shortest-job-first (SJF) scheduling is provably optimal, providing the shortest
average waiting time. Implementing SJF scheduling is difficult, however,
because predicting the length of the next CPU burst is difficult

20. What is aging?


Aging is a technique of gradually increasing the priority of processes that
wait in the system for a long time.

21. List the disadvantages of SJF.


A major drawback associated with this algorithm is indefinite blocking,
also called starvation. A long process may never get the CPU, because
shorter jobs keep arriving and are always scheduled ahead of it. To avoid
this, a technique called 'aging' is employed, which increases the priority
of processes that have been present in the ready queue for a long time.

22. Differentiate long term scheduler and short term scheduler.


• A long-term scheduler, or job scheduler – selects jobs from the job pool
(on secondary memory, i.e. disk) and loads them into memory. If more
processes are submitted than can be executed immediately, the excess
processes wait in secondary memory. It runs infrequently and can take time
to select the next process. The short-term scheduler, or CPU scheduler –
selects a process from memory and assigns the CPU to it. It must select a
new process for the CPU frequently.

Three or More Marks questions.


1. Explain process state transitions with a neat diagram.
A process is an entity with a program counter specifying the next
instruction to be executed and a set of associated resources.
Process State: As a process executes, it changes state. Each process may be
in one of the following states:
 New: The process is being created.
 Running: Instructions are being executed.
 Waiting: The process is waiting for some event to occur.
 Ready: The process is waiting to be assigned to a processor.
 Terminated: The process has finished execution.

2. What is PCB? Explain with diagram.


Each process is represented in the OS by a process control block (PCB),
also called a task control block. It contains many pieces of information
associated with a specific process, including:
 Process state: The state may be new, ready, running, waiting, halted,
and so on.



 Program counter: The counter indicates the address of the next instruction to
be executed for this process.
 CPU registers: The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack
pointers, and general-purpose registers, plus any condition-code information.
Along with the program counter, this state information must be saved when an
interrupt occurs, to allow the process to be continued correctly afterward.
 CPU scheduling information: This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
 Memory-management information: This information may include such
information as the value of the base and limit registers, the page tables, or the
segment tables depending on the memory system used by the operating
system.
 Accounting information: This information includes the amount of CPU and
real time used, time limits, account numbers, job or process numbers, and so
on.
 I/O status information: The information includes the list of I/O devices;
(such as tape drives) allocated to this process, a list of open files, and so on.

3. Explain process scheduling.


 The objective of multiprogramming is to have some process running at all
times so as to maximize CPU utilization.
 The objective of time sharing is to switch a CPU core among processes so
frequently that users can interact with each
program while it is running.
 To meet these objectives, the process scheduler
selects an available process (possibly from a set of several available
processes) for program execution on a core.
 Each CPU core can run one process at a time.
 For a system with a single CPU core, there will never be more than one
process running at a time, whereas a multicore system can run multiple
processes at one time.
 If there are more processes than cores, excess processes will have to wait
until a core is free and can be rescheduled.



 The number of processes currently in memory is known as the degree of
multiprogramming
 In general, most processes can be described as either I/O bound or CPU
bound.
 An I/O-bound process is one that spends more of its time doing I/O than it
spends doing computations.
 A CPU-bound process, in contrast, generates I/O requests
infrequently, using more of its time doing computations.

4. With a diagram, explain the queueing-diagram representation of process
scheduling.
 A common representation of process scheduling is a queueing diagram.
 Each rectangular box represents a queue.
 Two types of queues are present: the ready queue and a set of device queues.
 The circles represent the resources that serve the queues, and the arrows
indicate the flow of processes in the system.
 A new process is initially put in the ready queue. It waits in the ready
queue until it is selected for execution (or dispatched). Once the process
is assigned to the CPU and is executing, one of several events could occur:
 The process could issue an I/O request, and then be placed in an I/O queue.
 The process could create a new subprocess and wait for its termination.
 The process could be removed forcibly from the CPU, as a result of an
interrupt, and be put back in the ready queue.

5. Write a note on schedulers.


Schedulers: A scheduler is software that selects an available process to be
assigned to the CPU.



• A long-term scheduler, or job scheduler – selects jobs from the job pool
(on secondary memory, i.e. disk) and loads them into memory. If more
processes are submitted than can be executed immediately, the excess
processes wait in secondary memory. It runs infrequently and can take time
to select the next process.
• The short-term scheduler, or CPU scheduler – selects a process from
memory and assigns the CPU to it. It must select a new process for the CPU
frequently.
• The medium-term scheduler – swaps processes out of memory and later
swaps them back in, reintroducing them into the ready queue.

Processes can be described as either:


• I/O-bound process – spends more time doing I/O than computations,
• CPU-bound process – spends more time doing computations and few I/O
operations.
An efficient scheduling system will select a good mix of CPU-bound
processes and I/O bound processes.
• If the scheduler selects mostly I/O-bound processes, the I/O queues will
be full and the ready queue will be empty.
• If the scheduler selects mostly CPU-bound processes, the ready queue will
be full and the I/O queues will be empty.
Time-sharing systems employ a medium-term scheduler. It swaps processes out
of the ready queue and later swaps them back in. When the system load gets
high, this scheduler will swap one or more processes out of the ready queue
for a few seconds, in order to allow smaller, faster jobs to finish quickly
and clear the system.
Advantages of medium-term scheduler –



• To remove processes from memory and thus reduce the degree of
multiprogramming (the number of processes in memory).
 To make a proper mix of processes (CPU-bound and I/O-bound).

6. Explain Context switch with diagram.


 When an interrupt occurs, the system needs to save the current context of
the process running on the CPU core so that it can restore that context
when its processing is done, essentially suspending the process and then
resuming it.
 The context is represented in the PCB of the process. It includes the
value of the CPU registers, the process state (see Figure 3.2), and
memory-management information. Generically, we perform a state save of the
current state of the CPU core, be it in kernel or user mode, and then a
state restore to resume operations.
 Switching the CPU core to another process requires performing a state
save of the current process and a state restore of a different process. This
task is known as a context switch and is illustrated in Figure 3.6.
 When a context switch occurs, the kernel saves the context of the old process
in its PCB and loads the saved context of the new process scheduled to run.
Context switch time is pure overhead, because the system does no useful
work while switching. Switching speed varies from machine to machine,
depending on the memory speed, the number of registers that must be copied,



and the existence of special instructions (such as a single instruction to
load or store all registers). A typical speed is a few microseconds.
Context-switch times are highly dependent on hardware support.
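The state-save/state-restore sequence described above can be sketched in C
(an illustrative sketch only; the context fields and function names are
invented for this example, and real register sets are
architecture-specific):

```c
#include <assert.h>
#include <string.h>

/* Illustrative CPU context: a program counter and a few registers. */
struct context {
    unsigned long pc;       /* program counter */
    unsigned long regs[8];  /* general-purpose registers */
};

/* The saved context lives in the task's PCB. */
struct task {
    struct context ctx;
};

/* State save: copy the running CPU state into the old task's PCB. */
void state_save(struct task *t, const struct context *cpu) {
    memcpy(&t->ctx, cpu, sizeof *cpu);
}

/* State restore: load the new task's saved context onto the CPU. */
void state_restore(const struct task *t, struct context *cpu) {
    memcpy(cpu, &t->ctx, sizeof *cpu);
}

/* A context switch is a state save of the old task followed by a
 * state restore of the new one. */
void context_switch(struct task *old_t, struct task *new_t,
                    struct context *cpu) {
    state_save(old_t, cpu);
    state_restore(new_t, cpu);
}
```

The two memcpy() calls are the "pure overhead" the text mentions: no useful
work happens while they run.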

7. Write a note on Inter Process Communication (IPC).


Interprocess Communication: Processes executing concurrently may be either
cooperating or independent processes.
• Independent Processes – processes that cannot affect other processes or be
affected by other processes executing in the system.
• Cooperating Processes – processes that can affect other processes or be
affected by other processes executing in the system.
Co-operation among processes is allowed for the following reasons –
• Information Sharing - There may be several processes which need to access
the same file. So the information must be accessible at the same time to all users.
• Computation speedup - Often a problem can be solved faster if it can be
broken down into sub-tasks that are solved simultaneously (particularly
when multiple processors are involved).
• Modularity - A system can be divided into cooperating modules that
execute by sending information among one another.
• Convenience - Even a single user can work on multiple tasks by information
sharing.
Cooperating processes require some type of inter-process communication. This is
allowed by two models: 1. Shared Memory systems 2. Message passing systems
Shared Memory vs. Message Passing:
• Shared memory: a region of memory is shared by the communicating
processes, into which information is written and read. Message passing:
messages are exchanged among the processes using message objects.
• Shared memory is useful for sending large blocks of data; message
passing is useful for sending small amounts of data.
• With shared memory, a system call is used only to create the shared
region; with message passing, a system call is used during every read and
write operation.
• Shared-memory communication is faster, as no system calls are needed
once it is set up; message-passing communication is slower, since every
message requires system calls.
Shared memory is faster once it is set up, because no system calls are
required and access occurs at normal memory speeds. Shared memory is
generally preferable when large amounts of information must be shared
quickly on the same computer.
Message passing requires system calls for every message transfer and is
therefore slower, but it is simpler to set up and works well across
multiple computers. Message passing is generally preferable when the
amount and/or frequency of data transfers is small.

8. Explain IPC in Message-Passing System.


Message Passing Method: A mechanism that allows processes to communicate
without sharing an address space. It is used in distributed systems.
• Message passing systems uses system calls for "send message" and "receive
message".
• A communication link must be established between the cooperating processes
before messages can be sent.
• There are three design issues for the link between the sender and the
receiver:
 Direct or indirect communication (naming)
 Synchronous or asynchronous communication (synchronization)
 Automatic or explicit buffering
1. Naming: Processes that want to communicate must have a way to refer to
each other. They can use either direct or indirect communication.
a) In direct communication, the sender and receiver must explicitly know
each other's name. The syntax for the send() and receive() functions is as
follows –
• send (P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q
Properties of communication link:
• A link is established automatically between every pair of processes that wants
to communicate. The processes need to know only each other's identity to
communicate.
• A link is associated with exactly one pair of communicating processes
• Between each pair, there exists exactly one link.
Types of addressing in direct communication –
• Symmetric addressing – the above-described communication is symmetric
communication. Here both the sender and the receiver processes have to name
each other to communicate.
• Asymmetric addressing – Here only the sender names the recipient; the
receiving process can accept a message from any process.
send (P, message) – send a message to process P
receive (id, message) – receive a message from any process; the variable id
is set to the name of the sending process.
Disadvantage of direct communication – if the identifier of a process
changes, every occurrence of that identifier in the other processes (both
senders and receivers) must be found and changed.
b) Indirect communication uses shared mailboxes, or ports.
A mailbox or port is used to send and receive messages. Mailbox is an object
into which messages can be sent and received. It has a unique ID. Using this
identifier messages are sent and received.
Two processes can communicate only if they have a shared mailbox. The send
and receive functions are –
• send (A, message) – send a message to mailbox A
• receive (A, message) – receive a message from mailbox A
Properties of communication link:
• A link is established between a pair of processes only if they have a shared
mailbox
• A link may be associated with more than two processes
• Between each pair of communicating processes, there may be any number of
links, each link is associated with one mailbox.
• A mailbox can be owned by the operating system. The OS must provide
mechanisms that allow a process to –
• create a new mailbox
• send and receive messages from mailbox
• delete mailboxes.
2. Synchronization: The send and receive messages can be implemented as
either blocking or non-blocking.
Blocking (synchronous) send - sending process is blocked (waits) until the
message is received by receiving process or the mailbox.



Non-blocking (asynchronous) send - sends the message and continues (does
not wait)
Blocking (synchronous) receive - The receiving process is blocked until a
message is available
Non-blocking (asynchronous) receive - receives the message without
blocking. The retrieved message may be a valid message or null.

3. Buffering: When messages are passed, a temporary queue is created. Such
a queue can have one of three capacities:
Zero capacity – The buffer size is zero (buffer does not exist). Messages are
not stored in the queue. The senders must block until receivers accept the
messages.
Bounded capacity - The queue has a fixed size n. Senders must block if the
queue is full; after the queue holds n messages, the sender is blocked.
Unbounded capacity - The queue is of infinite capacity. The sender never
blocks
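The bounded-capacity case can be sketched in C (an illustrative sketch; a
real blocking send would wait instead of returning an error code, and the
names mq_send/mq_receive are invented for this example):

```c
#include <assert.h>

#define CAPACITY 2  /* bounded capacity: at most CAPACITY queued messages */

struct msg_queue {
    int msgs[CAPACITY];
    int count;
};

/* send(): returns 0 on success; -1 means the queue is full, which is
 * exactly the point at which a real blocking send would have to wait. */
int mq_send(struct msg_queue *q, int m) {
    if (q->count == CAPACITY)
        return -1;  /* sender must block */
    q->msgs[q->count++] = m;
    return 0;
}

/* receive(): stores the oldest message in *out and returns 0; -1 means
 * the queue is empty and a blocking receive would have to wait. */
int mq_receive(struct msg_queue *q, int *out) {
    if (q->count == 0)
        return -1;  /* receiver must block */
    *out = q->msgs[0];
    for (int i = 1; i < q->count; i++)  /* shift remaining messages up */
        q->msgs[i - 1] = q->msgs[i];
    q->count--;
    return 0;
}
```

With CAPACITY set to 0 this degenerates to the zero-capacity case, where
every send must rendezvous with a receive.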

9. Explain the shared memory system of IPC with example


Shared-Memory Systems
• A region of shared-memory is created within the address space of a process,
which needs to communicate. Other process that needs to communicate uses this
shared memory.
• The form of the data and the location of the shared-memory region are
decided by the communicating processes. Generally, a few messages must be
passed back and forth between the cooperating processes first, in order to
set up and coordinate shared-memory access.
• The processes must take care that they do not write to the same
shared-memory location at the same time.
Producer-Consumer Example Using Shared Memory
• This is a classic example, in which one process is producing data and another
process is consuming the data.
• The data is passed via an intermediary buffer (shared memory). The producer
puts the data to the buffer and the consumer takes out the data from the buffer. A
producer can produce one item while the consumer is consuming another item.
The producer and consumer must be synchronized, so that the consumer does not



try to consume an item that has not yet been produced. In this situation, the
consumer must wait until an item is produced.
There are two types of buffers into which information can be put –
• Unbounded buffer
• Bounded buffer
• With Unbounded buffer, there is no limit on the size of the buffer, and so on
the data produced by producer. But the consumer may have to wait for new
items.
• With bounded-buffer– As the buffer size is fixed. The producer has to wait if
the buffer is full and the consumer has to wait if the buffer is empty
This example uses shared memory as a circular queue. The in and out are two
pointers to the array. Note in the code below that only the producer changes "in",
and only the consumer changes "out".
First the following data is set up in the shared memory area:
#define BUFFER_SIZE 10
typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;   /* next free slot; written only by the producer */
int out = 0;  /* first full slot; written only by the consumer */

Producer Process Code
Note that the buffer is full when ((in + 1) % BUFFER_SIZE == out):

item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;  /* do nothing: buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

Consumer Process Code
Note that the buffer is empty when (in == out):

item next_consumed;
while (true) {
    while (in == out)
        ;  /* do nothing: buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
In the above code, the producer must wait while ((in + 1) % BUFFER_SIZE ==
out), because in that case the buffer is full: there are still items for
the consumer to consume, and no room to produce more. The producer starts
producing again once the consumer frees a slot. Similarly, when (in == out)
the buffer is empty and there are no items to consume.
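The same circular-buffer logic can be exercised in a single process to see
why one slot always stays empty (a minimal sketch using int items; the
produce/consume helpers are invented names and return -1 where the real
producer and consumer would busy-wait):

```c
#include <assert.h>

#define BUFFER_SIZE 10

static int buffer[BUFFER_SIZE];
static int in = 0;   /* written only by the producer */
static int out = 0;  /* written only by the consumer */

/* One producer step: returns -1 when the buffer is full, where the real
 * producer would spin in its while loop. */
int produce(int item) {
    if ((in + 1) % BUFFER_SIZE == out)
        return -1;  /* full */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    return 0;
}

/* One consumer step: returns -1 when the buffer is empty. */
int consume(int *item) {
    if (in == out)
        return -1;  /* empty */
    *item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return 0;
}
```

Note that only BUFFER_SIZE - 1 items fit at once: the full test
((in + 1) % BUFFER_SIZE == out) deliberately sacrifices one slot so that a
full buffer and an empty buffer are distinguishable.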

10. Write a note on Synchronization


Synchronization: Any two processes interact with each other by using the
system calls send() and receive().
Message passing can be either blocking or non-blocking, that is,
synchronous or asynchronous. Blocking means halting until the send or
receive of a message completes, whereas non-blocking allows the sending or
receiving process to continue immediately. Various combinations of send
and receive are allowed. A situation called a rendezvous occurs between
the sender and receiver when both send() and receive() are blocking. An
example of synchronization in message passing is the producer-consumer
problem.
Producer Process
message producednext;
while (true) {
    /* produce the message in producednext */
    send(producednext);
}
Consumer Process
message consumednext;
while (true) {
    receive(consumednext);
    /* consume the message in consumednext */
}

11. Explain operation on process.


Process Creation: A process may create several new processes. The creating
process is called a parent process, and the new processes are called the children
of that process. Each of these new processes may in turn create other processes.
Every process has a unique process ID.
• On typical Solaris systems, the process at the top of the tree is the ‘sched’
process with PID of 0. The ‘sched’ process creates several children processes –
init, pageout and fsflush. Pageout and fsflush are responsible for managing
memory and file systems. The init process with a PID of 1, serves as a parent
process for all user processes.
A process will need certain resources (CPU time, memory, files, I/O devices) to
accomplish its task. When a process creates a sub process, the sub process may
be able to obtain its resources in two ways:
• Directly from the operating system
• Sub process may take the resources of the parent process.
The resources can be taken from the parent in two ways:
▪ the parent may have to partition its resources among its children, or
▪ the parent may share its resources among several children.
There are two options for the parent process after creating the child:
• Wait for the child process to terminate and then continue execution. The
parent makes a wait() system call.
• Run concurrently with the child, continuing to execute without waiting.
Two possibilities for the address space of the child relative to the parent:
• The child may be an exact duplicate of the parent, sharing the same program
and data segments in memory. Each will have their own PCB, including program
counter, registers, and PID. This is the behaviour of the fork system call in
UNIX.
• The child process may have a new program loaded into its address space, with
all new code and data segments. This is the behaviour of the spawn system calls
in Windows.

In the UNIX OS, a child process is created by the fork() system call. The
fork system call, if successful, returns the PID of the child process to
the parent and returns zero to the child process. On failure, it returns
-1 to the parent. The process ID of the current process or of its direct
parent can be obtained using the getpid() and getppid() system calls,
respectively.
The parent waits for the child process to complete with the wait() system
call. When the child process completes, the parent process resumes and
completes its execution.
In Windows, the child process is created using the CreateProcess()
function. CreateProcess() returns a nonzero value if the child is created
and zero if it is not.
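The fork()/wait() pattern described above can be sketched as a minimal
POSIX example in C (the helper name spawn_and_wait is invented for this
sketch):

```c
#include <assert.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that immediately exits with the given status; the parent
 * waits for it and returns the child's exit status (-1 on failure). */
int spawn_and_wait(int child_status) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;               /* fork failed */
    if (pid == 0)
        _exit(child_status);     /* child: fork() returned 0 */
    int status;
    waitpid(pid, &status, 0);    /* parent: fork() returned child's PID */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The branch on fork()'s return value is exactly the PID/zero convention
described in the text: the parent sees the child's PID, the child sees 0.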
Process Termination
• A process terminates when it finishes executing its last statement and asks the
operating system to delete it, by using the exit () system call. All of the resources
assigned to the process like memory, open files, and I/O buffers, are deallocated
by the operating system.
• A process can cause the termination of another process by using appropriate
system call. The parent process can terminate its child processes by knowing of
the PID of the child.
• A parent may terminate the execution of children for a variety of reasons, such
as:
• The child has exceeded its usage of the resources it has been allocated.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system terminates all the children. This
is called cascading termination.

12. Explain Preemptive and Non preemptive Scheduling.


SJF Scheduling: The CPU is assigned to the process that has the smallest
next CPU burst.
 If two processes have the same length CPU burst, FCFS scheduling is used
to break the tie.
 For long-term scheduling in a batch system, we can use the process time
limit specified by the user, as the ‘length’
 SJF can't be implemented at the level of short-term scheduling, because
there is no way to know the length of the next CPU burst
 Advantage: The SJF is optimal, i.e. it gives the minimum average
waiting time for a given set of processes.
 Disadvantage: Determining the length of the next CPU burst.
 SJF algorithm may be either 1) non-preemptive or 2) preemptive.
 Non-preemptive SJF

The current process is allowed to finish its CPU burst.
 Preemptive SJF
If the new process has a shorter next CPU burst than what is left of the
executing process, that process is preempted. It is also known as SRTF
scheduling(Shortest-Remaining-Time-First).
Eg. Consider the following set of processes with the length of the CPU-burst time given in milliseconds:
Process  Burst time
P1       6
P2       8
P3       7
P4       3
Using SJF scheduling, we would schedule these processes according to the following Gantt chart:
| P4 | P1 | P3 | P2 |
0    3    9    16   24
Waiting time for P1 = 3; P2 = 16; P3 = 9; P4 = 0
Average waiting time = (3+16+9+0) /4 = 7 ms
If we were using the FCFS scheduling scheme, the processes would run in the order P1, P2, P3, P4:
| P1 | P2 | P3 | P4 |
0    6    14   21   24
Average waiting time = (0+6+14+21) /4 = 10.25 ms
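When all jobs arrive at time 0, non-preemptive SJF simply runs the bursts in ascending order, and each process's waiting time equals its start time. The 7 ms figure above can be checked with a few lines (an illustrative sketch, not part of the original answer):

```python
def sjf_avg_wait(bursts):
    """Non-preemptive SJF, all processes arriving at t = 0.
    Run the shortest burst first; a process's waiting time equals
    the time at which it starts executing."""
    t = 0            # current time
    total_wait = 0
    for b in sorted(bursts):
        total_wait += t   # this process waited from t = 0 until now
        t += b
    return total_wait / len(bursts)

# Bursts from the example: P1=6, P2=8, P3=7, P4=3
print(sjf_avg_wait([6, 8, 7, 3]))  # -> 7.0
```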

Preemptive SJF/SRTF: Consider the following set of processes, with the length of the CPU-burst time given in milliseconds:
Process  Arrival time  Burst time
P1       0             8
P2       1             4
P3       2             9
P4       3             5
Gantt chart:
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
Average waiting time = [ (10-1) + (1-1) + (17-2) + (5-3) ] /4 = 26/4 = 6.5 ms
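The SRTF result can be double-checked with a unit-time simulation (an illustrative sketch, not part of the original answer; the arrival/burst values are those of the example, i.e. arrivals 0, 1, 2, 3 with bursts 8, 4, 9, 5):

```python
def srtf_avg_wait(jobs):
    """Preemptive SJF (SRTF), simulated in 1 ms steps.
    jobs: list of (arrival_time, burst_time) tuples."""
    remaining = {i: burst for i, (_, burst) in enumerate(jobs)}
    finish = {}
    t = 0
    while remaining:
        ready = [i for i in remaining if jobs[i][0] <= t]
        if not ready:                       # CPU idle until the next arrival
            t = min(jobs[i][0] for i in remaining)
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
            del remaining[i]
    waits = [finish[i] - arr - burst for i, (arr, burst) in enumerate(jobs)]
    return sum(waits) / len(jobs)

print(srtf_avg_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))  # -> 6.5
```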

13. List and explain various Scheduling Criteria.
Criteria that are used include the following:
 CPU utilization: We want to keep the CPU as busy as possible. CPU
utilization may range from 0 to 100 percent.
 Throughput: If the CPU is busy executing processes, then work is being
done. One measure of work is the number of processes that are completed per
time unit, called throughput.
 Turnaround time: The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of
the periods spent waiting to get into memory, waiting in the ready queue,
executing on the CPU, and doing I/O.
 Waiting time: The CPU scheduling algorithm does not affect the amount of
time during which a process executes or does I/O; it affects only the amount
of time that a process spends waiting in the ready queue. Waiting time is the
sum of the periods spent waiting in the ready queue.
 Response time: Another measure is the time from the submission of a request
until the first response is produced. This measure, called response time, is the
amount of time it takes to start responding, but not the time that it takes to
output that response. The turnaround time is generally limited by the speed of
the output device.

14. Explain FCFS scheduling with an example.


FCFS: The process that requests the CPU first is allocated the CPU first.
 The implementation is easily done using a FIFO queue.
 Procedure:
1. When a process enters the ready-queue, its PCB is linked on to the tail
of the queue.
2. When the CPU is free, the CPU is allocated to the process at the queue’s
head.
The running process is then removed from the queue.
 Advantage: Code is simple to write & understand.
 Disadvantages:
1. Convoy effect: All other processes wait for one big process to get off the
CPU.
2. Non-preemptive (a process keeps the CPU until it releases it).

3. Not good for time-sharing systems.
4. The average waiting time is generally not minimal.
Eg. Consider the following set of processes with the length of the CPU-burst time given in milliseconds:
Process  Burst time
P1       24
P2       3
P3       3
If the processes arrive in the order P1, P2, P3 and are served in FCFS order, we get the result shown in the following Gantt chart:
| P1 | P2 | P3 |
0    24   27   30

 Waiting time for P1 = 0; P2 = 24; P3 = 27


Average waiting time = (0+24+27)/3 = 17 ms
If the processes arrive in the order P2, P3, P1, the result is shown in the following Gantt chart:
| P2 | P3 | P1 |
0    3    6    30
 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time = (6+0+3)/3 = 3 ms
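Under FCFS with simultaneous arrivals, each process waits for the total burst time of the processes ahead of it, so both averages above can be checked with a short sketch (illustrative, not part of the original answer):

```python
def fcfs_avg_wait(bursts):
    """FCFS, all processes arriving at t = 0, served in list order.
    Each process waits for the sum of the bursts before it."""
    t = 0
    total_wait = 0
    for b in bursts:
        total_wait += t
        t += b
    return total_wait / len(bursts)

print(fcfs_avg_wait([24, 3, 3]))  # order P1, P2, P3 -> 17.0
print(fcfs_avg_wait([3, 3, 24]))  # order P2, P3, P1 -> 3.0
```

The large gap between the two results is exactly the convoy effect mentioned above: one long burst at the head of the queue inflates everyone else's wait.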

15. Explain Shortest-Job-First Scheduling with an example.


Refer the 12th answer

16. Explain round robin scheduling algorithm with an example.


Round Robin scheduling:
 Designed especially for time-sharing systems.
 It is similar to FCFS scheduling, but with preemption.
 A small unit of time is called a time quantum (or time slice).
 The time quantum typically ranges from 10 to 100 ms.
 The ready-queue is treated as a circular queue.
 The CPU scheduler goes around the ready-queue and allocates the CPU to each process for a time interval of up to 1 time quantum.
 To implement: The ready-queue is kept as a FIFO queue of processes
 CPU scheduler
1. Picks the first process from the ready-queue.
2. Sets a timer to interrupt after 1time quantum and
3. Dispatches the process.
 One of two things will then happen.
1. The process may have a CPU burst of less than 1 time quantum.
In this case, the process itself will release the CPU voluntarily.
2. If the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and will cause an interrupt to the OS. The process will be put at the tail of the ready-queue.
 Advantage: Better response time than SJF.
 Disadvantage: Higher average turnaround time than SJF.
Eg. Consider the following set of processes with the length of the CPU-burst time given in milliseconds:
Process  Burst time
P1       24
P2       3
P3       3
Time quantum = 4 ms
Gantt chart:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
Waiting time for P1 = 10-4 = 6; P2 = 4; P3 = 7
Average waiting time = (6+4+7) /3 = 17/3 = 5.66 ms
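The quantum-expiry behaviour described above (an unfinished process goes back to the tail of the ready queue) can be simulated directly (an illustrative sketch, not part of the original answer, with all arrivals at t = 0):

```python
from collections import deque

def rr_avg_wait(bursts, quantum):
    """Round robin, all processes arriving at t = 0.
    A process that does not finish within its quantum is put
    back at the tail of the ready queue."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    t = 0
    finish = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)      # quantum expired: back to the tail
        else:
            finish[i] = t
    waits = [finish[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(bursts)

print(rr_avg_wait([24, 3, 3], 4))  # -> 5.666..., i.e. 17/3 ms
```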

17. Explain Priority Scheduling with an example.


Priority scheduling: A priority is associated with each process.
 The CPU is allocated to the process with the highest priority.
 Equal-priority processes are scheduled in FCFS order.
 Priorities can be defined either internally or externally.
Internally-defined priorities.
 Use some measurable quantity to compute the priority of a process.
 For example: time limits, memory requirements, no. of open files.

Externally-defined priorities.
 Set by criteria that are external to the OS For example:
 importance of the process, political factors
 Priority scheduling can be either preemptive or non-preemptive.
1. Preemptive
The CPU is preempted if the priority of the newly arrived process is higher than the priority of the currently running process.
2. Non-Preemptive
The newly arrived process is simply put at the head of the ready-queue.
 Advantage: Higher priority processes can be executed first.
 Disadvantage: Indefinite blocking, where low-priority processes are left
waiting indefinitely for CPU. Solution: Aging is a technique of increasing
priority of processes that wait in system for a long time.

Eg. Consider the following set of processes, with the CPU-burst time given in milliseconds (a smaller number means a higher priority):
Process  Burst time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2
Using priority scheduling, we would schedule these processes according to the following Gantt chart:
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
Average waiting time = (6+0+16+18+1) /5 = 41/5 = 8.2 ms
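With all arrivals at t = 0, non-preemptive priority scheduling is just FCFS over the priority-sorted order, so the average can be verified with a short sketch (illustrative, not part of the original answer; the (burst, priority) pairs are those of the example above):

```python
def priority_avg_wait(jobs):
    """Non-preemptive priority scheduling, all processes arriving at t = 0.
    jobs: list of (burst_time, priority); a smaller priority number means
    a higher priority. sorted() is stable, so equal-priority processes
    keep FCFS (submission) order."""
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][1])
    t = 0
    total_wait = 0
    for i in order:
        total_wait += t      # waiting time = start time
        t += jobs[i][0]
    return total_wait / len(jobs)

# (burst, priority) for P1..P5 from the example
print(priority_avg_wait([(10, 3), (1, 1), (2, 4), (1, 5), (5, 2)]))  # -> 8.2
```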

18. Explain Multilevel Queue Scheduling.


Multilevel Queue Scheduling
 Useful for situations in which processes are easily classified into
different groups.
For example, a common division is made between foreground (or interactive) processes and background (or batch) processes.
 The ready-queue is partitioned into several separate queues
 The processes are permanently assigned to one queue, based on some property such as memory size, process priority, or process type.
 Each queue has its own scheduling algorithm. For example, separate queues might be used for foreground and background processes.
 There must be scheduling among the queues, which is commonly
implemented as fixed-priority preemptive scheduling.
 For example, the foreground queue may have absolute priority over the background queue.
Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% of the CPU time to the foreground queue in RR and 20% to the background queue in FCFS.

19. Explain Multilevel Feedback Queue Scheduling.


Multilevel Feedback Queue Scheduling
 A process may move between queues
 The basic idea: Separate processes according to the features of their
CPU bursts. For example
1. If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
2. If a process waits too long in a lower-priority queue, it may be moved to a higher-priority queue. This form of aging prevents starvation.
In general, a multilevel feedback queue scheduler is defined by the following parameters:

 The number of queues
 The scheduling algorithm for each queue.
 The method used to determine when to upgrade a process to a higher
priority queue.
 The method used to determine when to demote a process to a lower priority
queue.
 The method used to determine which queue a process will enter when
that process needs service
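The demote-on-quantum-expiry idea can be sketched with a toy two-level feedback queue (an illustrative sketch with assumed parameters, not a full scheduler: it ignores new arrivals and preemption of the lower level):

```python
from collections import deque

def mlfq_finish_order(bursts, quantum=4):
    """Toy two-level multilevel feedback queue.
    Level 0 is round robin with the given quantum; a process that uses
    its whole quantum is demoted to level 1 (FCFS), which runs only
    when level 0 is empty. Returns process indices in finish order."""
    remaining = list(bursts)
    level0 = deque(range(len(bursts)))
    level1 = deque()
    order = []
    while level0 or level1:
        if level0:
            i = level0.popleft()
            run = min(quantum, remaining[i])
            remaining[i] -= run
            if remaining[i] > 0:
                level1.append(i)   # CPU-bound: demote to the lower queue
            else:
                order.append(i)    # short/interactive burst finishes early
        else:
            i = level1.popleft()   # level 1 runs to completion, FCFS
            remaining[i] = 0
            order.append(i)
    return order

print(mlfq_finish_order([24, 3, 3]))  # short jobs finish first: [1, 2, 0]
```

The long 24 ms job is demoted after its first quantum, so the two short jobs finish first, matching the stated goal of keeping interactive processes in the higher-priority queues.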

20. Describe the differences between short-term, medium-term and long-term schedulers.
Refer 5th answer

21. Consider the following set of processes, with length of the CPU-
burst time given in milliseconds.
Process Burst time
P1 15
P2 4
P3 10
P4 8
P5 5
Draw Gantt chart using round robin with time quantum of 5 milliseconds
and find average waiting time.

Gantt chart:
| P1 | P2 | P3 | P4 | P5 | P1 | P3 | P4 | P1 |
0    5    9    14   19   24   29   34   37   42
Waiting time of P1 = (0-0) + (24-5) + (37-29) = 0+19+8 = 27 ms
Waiting time of P2 = (5-0) = 5 ms
Waiting time of P3= (9-0) + (29-14) =24ms
Waiting time of P4= (14-0) + (34-19) =29ms
Waiting time of P5= (19-0) =19ms
Average waiting time= (27+5+24+29+19)/5 = 104/5 =20.8ms
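The 20.8 ms answer can be double-checked with a short round-robin simulation (an illustrative sketch, not part of the original answer; all arrivals at t = 0):

```python
from collections import deque

def rr_waits(bursts, quantum):
    """Round robin with all arrivals at t = 0.
    Returns per-process waiting times (finish time minus burst time)."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    t = 0
    finish = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)     # quantum expired: requeue at the tail
        else:
            finish[i] = t
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = rr_waits([15, 4, 10, 8, 5], 5)
print(waits)                    # -> [27, 5, 24, 29, 19]
print(sum(waits) / len(waits))  # -> 20.8
```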

22. Explain briefly preemptive and non-preemptive priority scheduling.


Refer the 12th answer
23. Consider the following set of processes, with length of the CPU-
burst time given in milliseconds.
Process Burst time
P1 6
P2 8
P3 7
P4 3
Find the average turnaround and waiting time. And Also Draw the gantt
chart Using SJF.
Using SJF scheduling, we would schedule these processes according to the
Following Gantt chart:
| P4 | P1 | P3 | P2 |
0    3    9    16   24
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
Since all processes arrive at time 0, the turnaround time of each process equals its completion time:
The turnaround time of P1 = 9 ms
The turnaround time of P2 = 24 ms
The turnaround time of P3 = 16 ms
The turnaround time of P4 = 3 ms
The average turnaround time = (9+24+16+3)/4 = 52/4 = 13 ms
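Both averages for this SJF schedule can be computed in one pass, since with all arrivals at t = 0 a process's waiting time is its start time and its turnaround time is its completion time (an illustrative sketch, not part of the original answer):

```python
def sjf_times(bursts):
    """Non-preemptive SJF, all arrivals at t = 0.
    Returns (average waiting time, average turnaround time)."""
    t = 0
    total_wait = 0
    total_tat = 0
    for b in sorted(bursts):
        total_wait += t     # waiting time = start time
        t += b
        total_tat += t      # turnaround time = completion time
    n = len(bursts)
    return total_wait / n, total_tat / n

print(sjf_times([6, 8, 7, 3]))  # -> (7.0, 13.0)
```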

24. Consider the following set of processes, with length of the CPU-
burst time given in milliseconds.
Process Arrival Time Burst Time
P1 0.0 8
P2 0.4 4
P3 1.0 1
a. What is the average turnaround time for these processes with the
FCFS scheduling algorithm?
b. What is the average turnaround time for these processes with the
Preemptive SJF scheduling algorithm?
a. FCFS Gantt chart:
| P1 | P2 | P3 |
0    8    12   13
Average turnaround time = (8 + 11.6 + 12)/3 = 31.6/3 ≈ 10.53 ms
b. Preemptive SJF (SRTF) Gantt chart:
| P1 | P2 | P3 | P2 | P1 |
0   0.4   1    2   5.4   13
Average turnaround time = (13 + 5 + 1)/3 = 19/3 ≈ 6.33 ms
(The corresponding average waiting times are: FCFS = 6.2 ms; non-preemptive SJF = 5.2 ms; SRTF = 2.0 ms.)
25. Consider the following set of processes, with the length of the CPU burst
time given in milliseconds:
Process Burst Time Priority
P1 2 2
P2 1 1
P3 8 4
P4 4 2
P5 5 3

The processes are assumed to have arrived in the order P1, P2, P3, P4,
P5, all at time 0.
a. Draw four Gantt charts that illustrate the execution of these
processes using the following scheduling algorithms: FCFS, SJF,
non preemptive priority (a larger priority number implies a higher
priority), and RR (quantum = 2).
b. What is the turnaround time of each process for each of the
scheduling algorithms in part a?
c. What is the waiting time of each process for each of these scheduling
algorithms?
a. Gantt charts:
FCFS:     | P1 | P2 | P3 | P4 | P5 |   (0, 2, 3, 11, 15, 20)
SJF:      | P2 | P1 | P4 | P5 | P3 |   (0, 1, 3, 7, 12, 20)
Priority: | P3 | P5 | P1 | P4 | P2 |   (0, 8, 13, 15, 19, 20)
RR (q=2): | P1 | P2 | P3 | P4 | P5 | P3 | P4 | P5 | P3 | P5 | P3 |
          (0, 2, 3, 5, 7, 9, 11, 13, 15, 17, 18, 20)
b. Turnaround times (all processes arrive at time 0, so turnaround = completion time):
FCFS: P1=2, P2=3, P3=11, P4=15, P5=20
SJF: P1=3, P2=1, P3=20, P4=7, P5=12
Priority: P1=15, P2=20, P3=8, P4=19, P5=13
RR: P1=2, P2=3, P3=20, P4=13, P5=18
c. Waiting times (turnaround time minus burst time):
FCFS: P1=0, P2=2, P3=3, P4=11, P5=15
SJF: P1=1, P2=0, P3=12, P4=3, P5=7
Priority: P1=13, P2=19, P3=0, P4=15, P5=8
RR: P1=0, P2=2, P3=12, P4=9, P5=13

26. The following processes are being scheduled using a preemptive, round-robin scheduling algorithm.

Each process is assigned a numerical priority, with a higher number


indicating a higher relative priority. The length of a time quantum is 10
units. If a process is preempted by a higher-priority process, the
preempted process is placed at the end of the queue.
a. Show the scheduling order of the processes using a Gantt chart.
b. What is the turnaround time for each process?
c. What is the waiting time for each process?
d. What is the CPU utilization rate
