VARDHAMAN COLLEGE OF ENGINEERING

UNIT I
1. Definition
An Operating System can be defined as an interface between the user and the hardware. It provides an environment in which the user can perform tasks in a convenient and efficient way. It is responsible for the execution of all processes, resource allocation, CPU management, file management, and many other tasks.

Structure of a Computer System

A Computer System consists of:

o Users (people who are using the computer)


o Application Programs (Compilers, Databases, Games, Video player, Browsers, etc.)
o System Programs (Shells, Editors, Compilers, etc.)
o Operating System (a special program which acts as an interface between user and hardware)
o Hardware (CPU, disks, memory, etc.)

What does an Operating system do?

1. Process Management
2. Process Synchronization


3. Memory Management
4. CPU Scheduling
5. File Management
6. Security

2. Types of Operating Systems (OS)

An operating system is a well-organized collection of programs that manages the computer hardware. It is a type of system software that is responsible for the smooth functioning of the computer system.

Batch Operating System

Batch processing was very popular in the 1970s. In this technique, similar types of jobs were batched together and executed as a group. At the time, an organization typically had a single large computer, called a mainframe.

In a batch operating system, access is given to more than one person; each submits their respective jobs to the system for execution.

The system puts all of the jobs in a queue on a first-come, first-served basis and then executes them one by one. Users collect their respective output once all the jobs have been executed.


The purpose of this operating system was mainly to transfer control from one job to another as
soon as the job was completed. It contained a small set of programs called the resident monitor
that always resided in one part of the main memory. The remaining part is used for servicing
jobs.


Advantages of Batch OS
o The use of a resident monitor improves computer efficiency, as it eliminates CPU idle time between two jobs.

Disadvantages of Batch OS

1. Starvation

Batch processing suffers from starvation.

For Example:

There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very
high, then the other four jobs will never be executed, or they will have to wait for a very long
time. Hence the other processes get starved.
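The starvation scenario above can be made concrete with a small sketch (the burst times are hypothetical): under first-come, first-served execution, a very long first job inflates the waiting time of every job behind it.

```python
# FCFS waiting times: each job waits for the total burst time of all
# jobs queued before it, so one long job delays every job behind it.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time spent waiting before starting
        elapsed += burst
    return waits

# J1 has a very high execution time (1000 units); J2..J5 are short.
jobs = [1000, 5, 5, 5, 5]
print(fcfs_waiting_times(jobs))  # [0, 1000, 1005, 1010, 1015]
```

Every short job behind J1 waits over 1000 time units, even though its own work takes only 5.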

2. Not Interactive

Batch Processing is not suitable for jobs that are dependent on the user's input. If a job requires
the input of two numbers from the console, then it will never get it in the batch processing
scenario since the user is not present at the time of execution.

Multiprogramming Operating System

Multiprogramming is an extension of batch processing in which the CPU is always kept busy.
Each process needs two types of system time: CPU time and I/O time.

In a multiprogramming environment, when a process performs its I/O, the CPU can start executing other processes. Therefore, multiprogramming improves the efficiency of the system.


Advantages of Multiprogramming OS
o Throughput increases, as the CPU always has one program to execute.
o Response time can also be reduced.

Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various system resources are used efficiently, but they do not provide any user interaction with the computer system.

Multiprocessing Operating System

In multiprocessing, parallel computing is achieved. More than one processor is present in the system, and they can execute multiple processes simultaneously, which increases the throughput of the system.

Advantages of Multiprocessing operating system:

o Increased reliability: Processing tasks can be distributed among several processors. This increases reliability, as if one processor fails, the task can be given to another processor for completion.
o Increased throughput: With several processors, more work can be done in less time.

Disadvantages of Multiprocessing operating System


o A multiprocessing operating system is more complex and sophisticated, as it has to take care of multiple CPUs simultaneously.

Multitasking Operating System

The multitasking operating system is a logical extension of a multiprogramming system that enables running multiple programs simultaneously. It allows a user to perform more than one computer task at the same time.

Advantages of Multitasking operating system


o This operating system is more suited to supporting multiple users simultaneously.
o The multitasking operating systems have well-defined memory management.


Disadvantages of Multitasking operating system


o Multiple processors are kept busy at the same time to complete tasks in a multitasking environment, so the CPU generates more heat.

Network Operating System

An Operating system, which includes software and associated protocols to communicate with
other computers via a network conveniently and cost-effectively, is called Network Operating
System.


Advantages of Network Operating System


o In this type of operating system, network traffic reduces due to the division between
clients and the server.
o This type of system is less expensive to set up and maintain.

Disadvantages of Network Operating System


o In this type of operating system, the failure of any node in a system affects the whole
system.
o Security and performance are important issues. So trained network administrators are
required for network administration.

Real Time Operating System

In real-time systems, each job carries a deadline within which it is supposed to be completed; otherwise, there will be a huge loss, or, even if a result is produced, it will be completely useless.

Real-time systems are applied in cases such as military applications: if a missile is to be launched, it must be delivered with a certain precision.


Advantages of Real-time operating system:


o It is easy to design, develop, and execute real-time applications under a real-time operating system.
o In a real-time operating system, devices and systems are utilized to the maximum.

Disadvantages of Real-time operating system:


o Real-time operating systems are very costly to develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.

Time-Sharing Operating System

In a time-sharing operating system, computer resources are allocated in a time-dependent fashion to several programs simultaneously. It thus helps to provide a large number of users direct access to the main computer. It is a logical extension of multiprogramming: the CPU is switched among multiple programs submitted by different users on a scheduled basis.
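The scheduled switching of the CPU among programs can be sketched as a round-robin simulation (the program names, burst times, and quantum below are hypothetical): each program runs for one fixed time slice, then goes to the back of the queue if it still has work left.

```python
from collections import deque

# Round-robin: each program gets a fixed time slice (quantum), then the
# CPU switches to the next program waiting in the queue.
def round_robin(bursts, quantum):
    queue = deque(bursts.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                     # this program runs for one slice
        if remaining > quantum:
            queue.append((name, remaining - quantum))  # back of the queue
    return order

print(round_robin({"A": 5, "B": 3, "C": 1}, quantum=2))
# ['A', 'B', 'C', 'A', 'B', 'A']
```

Note how every program gets CPU time early on, instead of waiting for the programs ahead of it to finish completely.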


A time-sharing operating system allows many users to be served simultaneously, so sophisticated CPU scheduling schemes and input/output management are required.

Time-sharing operating systems are very difficult and expensive to build.

Advantages of Time Sharing Operating System


o The time-sharing operating system provides effective utilization and sharing of
resources.
o This system reduces CPU idle time and response time.

Disadvantages of Time Sharing Operating System


o It requires very high data transmission rates in comparison to other methods.
o Security and integrity of user programs loaded in memory and data need to be
maintained as many users access the system at the same time.

Distributed Operating System

The Distributed Operating system is not installed on a single machine, it is divided into parts,
and these parts are loaded on different machines. A part of the distributed Operating system is
installed on each machine to make their communication possible. Distributed Operating
systems are much more complex, large, and sophisticated than Network operating systems
because they also have to take care of varying networking protocols.


Advantages of Distributed Operating System


o The distributed operating system provides sharing of resources.
o This type of system is fault-tolerant.

Disadvantages of Distributed Operating System


o Protocol overhead can dominate computation cost.

3. Operating System operations

An operating system is software on which application programs are executed. It acts as an interface between the user and the computer hardware.

The major operations of the operating system are as follows:


1. Process Management
2. Device Management
3. File Management
4. Memory Management

Process Management.

Process Management involves tasks like the creation, scheduling, and termination of processes, as well as deadlock handling. The operating system performs these tasks using process scheduling, an OS activity that schedules processes according to their states, such as ready, waiting, and running.
The operating system is responsible for managing processes, i.e., assigning the processor to one process at a time. This is known as process scheduling. The different algorithms used for process scheduling are FCFS (first come, first served), SJF (shortest job first), priority scheduling, round-robin scheduling, etc.
There are many scheduling queues that are used to handle processes in process management.
When the processes enter the system, they are put into the job queue. The processes that are
ready to execute in the main memory are kept in the ready queue. The processes that are
waiting for the I/O device are kept in the device queue.
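The movement of processes between these queues can be sketched with simple FIFO structures (the process names below are hypothetical): jobs enter the job queue, move to the ready queue once loaded into main memory, and sit in the device queue while waiting on I/O.

```python
from collections import deque

# Three scheduling queues, modeled as FIFO queues.
job_queue = deque(["P1", "P2", "P3"])   # processes that entered the system
ready_queue = deque()                   # ready to execute in main memory
device_queue = deque()                  # waiting for an I/O device

ready_queue.append(job_queue.popleft())  # P1 admitted into main memory
ready_queue.append(job_queue.popleft())  # P2 admitted
running = ready_queue.popleft()          # dispatcher picks P1 to run
device_queue.append(running)             # P1 requests I/O and must wait

print(list(ready_queue), list(device_queue))  # ['P2'] ['P1']
```

When the I/O completes, the process would move from the device queue back to the ready queue, competing for the CPU again.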

Device Management.

Device Management is the process of managing the application and operation of the
input and output devices, such as keyboard, mouse, etc. There are different device
drivers that can be connected to the operating system to handle a specific device. The device
controller is an interface between the device and the device driver. The user applications can
access all the I/O devices using the device drivers, which are device specific codes.

Applications of device management:


1. It helps in configuring a device so that it can perform as expected.
2. It executes security measures and processes.
3. Gives users access to the I/O devices using device drivers.

File Management.

File management is used to organize important data and create a searchable database for quick retrieval. It administers the system for the effective handling of digital data.
Files are used by the operating system to provide a uniform view of data storage. All files are mapped onto physical devices, which are usually non-volatile, so data is safe in the case of system failure.
The files can be accessed by the system in two ways, i.e., sequential access and direct access:

 Sequential Access
The information in a file is processed in order using sequential access: the file's records are accessed one after another. Most file-processing programs, such as editors and compilers, use sequential access.
 Direct Access


In direct access (or relative access), a file's records can be read and written in random order. The direct access model is based on the disk model of a file, since disks allow random access.
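The two access methods can be illustrated with a small sketch (the file name and the fixed-size record layout are made up for the example): sequential reads walk the file front to back, while direct access seeks straight to the desired record.

```python
import os
import tempfile

# Write a small file of three fixed-size 4-byte records.
path = os.path.join(tempfile.mkdtemp(), "records.txt")
with open(path, "wb") as f:
    f.write(b"AAAABBBBCCCC")

# Sequential access: records come one after another, in order.
with open(path, "rb") as f:
    first = f.read(4)
    second = f.read(4)

# Direct access: jump straight to the third record without reading the rest.
with open(path, "rb") as f:
    f.seek(8)              # record index 2 * record size 4
    third = f.read(4)

print(first, second, third)  # b'AAAA' b'BBBB' b'CCCC'
```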

Memory Management.

Memory management is a form of resource management: it allocates portions of memory to programs at their request. It is necessary for any modern computer system that may run more than one process at a time.

Memory management plays an important part in the operating system. It deals with memory and the moving of processes from disk to primary memory for execution and back again.
The activities performed by the operating system for memory management are:

o The operating system assigns memory to processes as required. This can be done using the best-fit, first-fit, and worst-fit algorithms.
o All memory is tracked by the operating system, i.e., it knows which parts of memory are in use by processes and which are free.
o The operating system deallocates memory from processes as required. This may happen when a process has terminated or no longer needs the memory.
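The three allocation strategies named above can be sketched in a few lines (the free-hole sizes and the request are hypothetical): first fit takes the first hole big enough, best fit the smallest hole that still fits, and worst fit the largest hole available.

```python
# Hypothetical free-memory holes (sizes in KB) and one allocation request.
holes = [100, 500, 200, 300, 600]
request = 212

first_fit = next(h for h in holes if h >= request)  # first hole big enough
best_fit = min(h for h in holes if h >= request)    # smallest hole that fits
worst_fit = max(holes)                              # largest hole overall

print(first_fit, best_fit, worst_fit)  # 500 300 600
```

Best fit minimizes leftover space in the chosen hole (300 - 212 = 88 KB wasted here), while worst fit leaves the largest usable remainder for future requests.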
Applications of memory management:

1. It helps to protect different processes from each other.
2. It helps to utilize memory to the full extent by placing programs in memory.
3. It tracks the memory inventory and updates its status when memory is freed or deallocated.

4. Services of Operating System


1. Program execution
2. Input Output Operations
3. Communication between Process
4. File Management
5. Memory Management
6. Process Management
7. Security and Privacy
8. Resource Management
9. User Interface
10. Networking
11. Error handling
12. Time Management
Program Execution
It is the Operating System that manages how a program is executed. It loads the program into memory, after which the program runs. The order of execution depends on the CPU scheduling algorithm in use, such as FCFS or SJF. While programs are in execution, the Operating System also handles deadlocks, ensuring that processes do not block one another indefinitely. The Operating System is responsible for the smooth execution of both user and system programs and utilizes the available resources for the efficient running of all types of functionality.
Input Output Operations
The Operating System manages the input-output operations and establishes communication between the user and the device drivers. Device drivers are software associated with the hardware being managed by the OS, keeping the devices properly synchronized. The OS also provides a program access to input-output devices when needed.
Communication between Processes
The Operating system manages the communication between processes. Communication
between processes includes data transfer among them. If the processes are not on the same
computer but connected through a computer network, then also their communication is
managed by the Operating System itself.
File Management
The operating system helps in managing files also. If a program needs access to a file, it is
the operating system that grants access. These permissions include read-only, read-write, etc.
It also provides a platform for the user to create and delete files. The Operating System is responsible for making decisions regarding the storage of all types of data or files, e.g., on a floppy disk, hard disk, or pen drive, and it decides how the data should be manipulated and stored.
Memory Management
Let’s understand memory management by the OS in a simple way. Imagine a cricket team with a limited number of players. The team manager (the OS) decides whether an upcoming player will be in the playing 11, the playing 15, or not included in the team at all, based on his performance. In the same way, the OS first checks whether an incoming program fulfils all the requirements to get memory space; if so, it checks how much memory will be sufficient for the program and then loads the program into memory at a certain location. It thus prevents programs from using unnecessary memory.
Process Management
Let’s understand process management in a unique way. Imagine the kitchen stove as the CPU, where all the cooking (execution) really happens, and the chef as the OS, who uses the stove (CPU) to cook different dishes (programs). The chef (OS) has to cook many dishes (programs), so he ensures that no particular dish (program) takes unnecessarily long and that all dishes (programs) get a chance to be cooked (executed). The chef (OS) schedules time for all the dishes (programs) so that the kitchen (the whole system) runs smoothly, and thus all the different dishes (programs) are cooked (executed) efficiently.
Security and Privacy
 Security: The OS keeps our computer safe from unauthorized users by adding a security layer to it. Security is a layer of protection that shields the computer from threats such as viruses and hackers. The OS provides defenses like firewalls and anti-virus software and ensures the safety of the computer and of personal information.
 Privacy: The OS gives us the facility to keep essential information hidden, like having a lock on a door that only you can open. It respects our secrets and provides the facility to keep them safe.
Resource Management
System resources are shared between various processes. It is the Operating system that
manages resource sharing. It also manages the CPU time among processes using CPU
Scheduling Algorithms. It also helps in the memory management of the system. It also
controls input-output devices. The OS also ensures the proper use of all the resources
available by deciding which resource to be used by whom.
User Interface
A user interface is essential, and all operating systems provide one. Users interact with the operating system either through a command-line interface (CLI) or through a graphical user interface (GUI). In a CLI, the command interpreter executes the next user-specified command.
A GUI offers the user a mouse-based window and menu system as an interface.
Networking
This service enables communication between devices on a network, such as connecting to
the internet, sending and receiving data packets, and managing network connections.
Error Handling
The Operating System also handles errors occurring in the CPU, in input-output devices, and elsewhere. It ensures that errors do not occur frequently and fixes them when they do. It also prevents processes from reaching deadlock and looks for any type of error or bug that can occur during any task. A well-secured OS sometimes also acts as a countermeasure, preventing breaches of the computer system from external sources and handling them.
Time Management
Imagine a traffic light as the OS, which indicates to all the cars (programs) whether they should stop (red: waiting in a queue), get ready (yellow: the ready queue), or move (green: under execution). The light (control) changes after a certain interval of time on each side of the road (computer system) so that the cars (programs) from all sides of the road move smoothly, without congestion.

The structure of the UNIX OS layers is as follows:


While working with the UNIX OS, several layers of the system provide interaction between the computer hardware and the user. The following is a description of each layer of the UNIX system:

Layer-1: Hardware -

This layer of UNIX consists of all hardware-related information in the UNIX environment.

Layer-2: Kernel –

Every operating system, whether Windows, Mac, Linux, or Android, has a core program called the kernel, which acts as the 'boss' of the whole system. It is the heart of the OS, the program that controls everything else and is responsible for maintaining the system's full functionality. The UNIX kernel runs directly on the machine hardware and interacts with it effectively.


It also works as a device manager and performs valuable functions for the processes
which require access to the peripheral devices connected to the computer. The kernel
controls these devices through device drivers.

The kernel also manages memory. Processes are executing programs, whose owners (humans or the system itself) initiated their execution.

The system must provide all processes with access to an adequate amount of memory, and a few processes require a lot of it. To make effective use of main memory and to allocate a sufficient amount of memory to every process, the kernel uses essential techniques like paging, swapping, and virtual storage.
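The paging technique mentioned above can be sketched as a simple address translation (the page size and page-table entries below are hypothetical): a virtual address is split into a page number and an offset, and the page table maps each page to a physical frame.

```python
# Paging sketch with a hypothetical 4 KB page size.
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 9}    # virtual page number -> physical frame

def translate(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]        # a missing entry here would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 12292
```

In a real kernel the page table is consulted by the hardware MMU, and a lookup miss triggers a page fault that may swap the page in from disk.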

Layer-3: The Shell -

The shell is an interpreter that interprets the commands submitted by the user at the terminal and calls the program the user wants.

It also keeps a history of the commands you have typed. If you need to repeat a command, use the cursor keys to scroll up and down the list, or type history for a list of previous commands. There are various commands, like cat, mv, grep, id, wc, and many more.

A shell is a program that provides the interface between the user and the operating system. When the user logs in, the OS starts a shell for the user. The kernel controls all essential computer operations, restricts hardware access, coordinates all executing utilities, and manages resources between processes. Only through the kernel can the user access the utilities provided by the operating system.

o The C Shell: denoted csh
o The Bourne Shell: denoted sh
o The Korn Shell: denoted ksh
o The GNU Bourne-Again Shell: denoted bash


Types of Shell in UNIX System:

o Bourne Shell: This Shell is simply called the Shell. It was the first Shell for UNIX
OS. It is still the most widely available Shell on a UNIX system.
o C Shell: The C shell is another popular shell commonly available on a UNIX
system. The C shell was developed by the University of California at Berkeley
and removed some of the shortcomings of the Bourne shell.
o Korn Shell: This Shell was created by David Korn to address the Bourne Shell's
user-interaction issues and to deal with the shortcomings of the C shell's
scripting quirks.

Layer-4: Application Programs Layer -

It is the outermost layer, which executes the given external applications. UNIX distributions typically come with several useful application programs as standard, for example the emacs editor, StarOffice, the xv image viewer, the g++ compiler, etc.


5. System Calls and System Programs

A system call establishes a connection between user software and the operating system's services. In contrast, system programs define the OS user interface. System programs also offer a proper environment for the development and execution of programs. For example, a modern operating system includes system programs such as an assembler, compiler, editor, and loader. These programs enable programmers to create and run new programs.

What is a System Call?


It is a method of interacting with the OS through system programs: a technique by which a computer program requests a service from the OS kernel.

The Application Programming Interface (API) helps connect OS functions with user programs. It serves as a bridge between a process and the OS, enabling user-level programs to request OS services. System calls can only be made through the kernel, and any software that consumes resources must use system calls.

Types of System call


There are mainly five kinds of system calls. These are classified as follows:

1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication

Now, you will learn all these different types of system calls one by one.

Process Control

It is responsible for process manipulation jobs, including creating and terminating processes, loading and executing programs, waiting, aborting, etc.

File Management

It is responsible for file manipulation jobs, including creating files, opening files,
deleting files, closing files, etc.

Device Management


These are responsible for device manipulation, including reading from device buffers,
writing into device buffers, etc.

Information Maintenance

These are used to maintain and exchange information between the OS and the user program. Some common instances of information maintenance are getting or setting the time or date and getting or setting system data.

Communication

These are used for interprocess communication (IPC). Some examples of IPC are
creating, sending, receiving messages, deleting communication connections, etc.
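Several of the categories above can be exercised directly from user code. As a small illustration (on a POSIX-style system, using Python's os and time modules as thin wrappers over the underlying system calls):

```python
import os
import time

# Information maintenance: get the process id and the current time.
pid = os.getpid()
now = time.time()

# Communication: create a pipe, an interprocess communication channel,
# then write a message into it and read the message back out.
r, w = os.pipe()
os.write(w, b"hello")          # write via the pipe's file descriptor
msg = os.read(r, 5)            # read the 5-byte message
os.close(r)                    # file/device management: release descriptors
os.close(w)

print(pid > 0, msg)  # True b'hello'
```

Each call here ultimately traps into the kernel; the Python functions are convenience wrappers around the corresponding system-call interface.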

What is System Program?


System programming may be defined as the act of creating system software using system programming languages. A system program offers an environment in which programs may be developed and run. In simple terms, system programs serve as a link between the user interface (UI) and system calls. Some system programs are simple user interfaces, while others are complex; a compiler, for instance, is complicated system software.

System programs are a component of the OS, and they typically lie between the user interface (UI) and system calls. The user's view of the system is defined by the system programs rather than by the system calls, because the user interacts with the system programs, which sit closer to the user interface.

Types of the System Program


There are mainly six types of system programs. These are classified as follows:

1. File Management
2. Status Information
3. File Modification
4. Programming-Language support
5. Program Loading and Execution
6. Communication

Now, you will learn all these different types of system programs one by one.

File Management


A file is a collection of specific information saved in a computer system's memory. File management is described as manipulating files in a computer system, including the creation, modification, and deletion of files.

Status Information

Status information covers the input and output processes, storage, and CPU utilization time: how a process will be computed and how much memory is necessary to execute a task.

File Modification

These system programs are utilized to change files on hard drives or other storage
media. Besides modification, these programs are also utilized to search for content
within a file or to change content within a file.

Programming-Language Support

The OS includes certain standard system programs that support programming languages such as C, Visual Basic, C++, Java, and Perl. There are various such system programs, including compilers, debuggers, assemblers, and interpreters.

Program Loading and Execution

After assembling and compiling, a program must be loaded into memory for execution. A loader is the component of an operating system responsible for loading programs and libraries; loading is one of the essential steps in starting a program. The system includes linkage editors, relocatable loaders, overlay loaders, and other loaders.

Communication

System programs offer virtual links between processes, people, and computer systems. Users may browse websites, log in remotely, send messages to other users on their screens, send emails, and transfer files from one user to another.

There are various key differences between the System Call and System Program in the
operating system. Some main differences between the System Call and System Program are as
follows:
1. A user may request access to the operating system's services by using a system call. In contrast, a system program fulfils a common user request and provides a suitable environment in which programs can be created and run effectively.
2. The programmer creates system calls using high-level languages like C and C++; assembly language is used to create the calls that directly interact with the system's hardware. On the other hand, a programmer solely uses a high-level language to create system programs.
3. A system call defines the interface between the user process and the services provided by the OS. In contrast, system programs define the operating system's user interface.
4. A system program satisfies a user program's high-level request and converts it into a series of low-level requests. In contrast, a system call fulfils the low-level requests of the user program.
5. The user process requests an OS service using a system call. In contrast, a system program transforms the user request into the set of system calls needed to fulfil the requirement.
6. The system call may be categorized into file manipulation, device manipulation,
communication, process control, information maintenance, and protection. On the other
hand, a System program may be categorized into file management, program loading
and execution, programming-language support, status information, communication,
and file modification.
Head-to-head comparison between the System Call and System Program in the Operating System:

Definition
o System Call: a technique in which a computer system program requests a service from the OS kernel.
o System Program: offers an environment for a program to be created and run.

Request
o System Call: fulfils the low-level requests of the user program.
o System Program: fulfils the high-level request or requirement of the user program.

Programming Languages
o System Call: usually written in C and C++; assembly language is used where direct hardware access is required.
o System Program: commonly written in high-level programming languages only.

User View
o System Call: defines the interface between the user process and the services provided by the OS.
o System Program: defines the user interface (UI) of the OS.

Action
o System Call: the user process requests an OS service using a system call.
o System Program: transforms the user request into the set of system calls needed to fulfil the requirement.

Classification
o System Call: may be categorized into file manipulation, device manipulation, communication, process control, information maintenance, and protection.
o System Program: may be categorized into file management, program loading and execution, programming-language support, status information, file modification, and communication.

6. Process concepts- Process, Process State Diagram

Process in an Operating System


A process is a program in active execution. The activities of a process must be
carried out in a specific order. A process is the entity that describes the
fundamental unit of work to be implemented in a system.

In other words, we create computer programs as text files that, when executed, create
processes that carry out all of the tasks listed in the program.

When a program is loaded into memory, it may be divided into the four components
stack, heap, text, and data to form a process. The simplified depiction of a process in
the main memory is shown in the diagram below.

Stack
The process stack stores temporary information such as method or function
arguments, the return address, and local variables.

Heap

This is the memory that is dynamically allocated to the process while it is running.

Text

This section contains the compiled program code. The current activity of the
process is represented by the value of the program counter and the contents of
the processor's registers.

Data

This section contains the global and static variables.

Program
A program is a set of instructions that are executed to complete a certain task.
Programs are usually written in a programming language like C, C++, Python,
Java, R, C# (C sharp), etc.

In other words, a computer program is a set of instructions that, when carried
out by a computer, accomplishes a certain task.

Difference between process and the program

S. No | Process | Program
----- | ------- | -------
1 | A process is a program in active execution; it is the entity that describes the fundamental unit of work to be implemented in a system. | A program is a set of instructions which are executed to complete a certain task.
2 | A process is dynamic in nature. | A program is static in nature.
3 | A process is active in nature. | A program is passive in nature.
4 | A process is created during execution and is loaded directly into main memory. | A program already exists and is present in secondary memory.
5 | A process has its own control structure, known as the Process Control Block. | A program has no control structure; it is simply invoked when specified and executes as a whole.
6 | A process changes state from time to time by itself. | A program cannot change on its own; it must be changed by the programmer.
7 | A process needs extra data, in addition to the program data, for its management and execution. | A program is basically divided into two parts: a code part and a data part.
8 | A process has significant resource demands; it requires resources like memory addresses, the CPU, and I/O for as long as it exists in the operating system. | A program just needs memory space to store its instructions; no further resources are needed.

A process passes through several stages from its beginning to its end; there must
be a minimum of five states. Although during execution a process is in exactly one
of these states at any moment, the names of the states are not standardized across
operating systems. Each process goes through several stages throughout its life
cycle.

Process States in Operating System

The states of a process are as follows:


 New (Create): In this step, the process is about to be created but not yet
created. It is the program that is present in secondary memory that will be
picked up by OS to create the process.
 Ready: New -> Ready to run. After the creation of a process, the process
enters the ready state i.e. the process is loaded into the main memory. The
process here is ready to run and is waiting to get the CPU time for its
execution. Processes that are ready for execution by the CPU are maintained
in a queue called ready queue for ready processes.
 Run: The process is chosen from the ready queue by the CPU for execution
and the instructions within the process are executed by any one of the
available CPU cores.
 Blocked or Wait: Whenever the process requests access to I/O, needs
input from the user, or needs access to a critical region (the lock for which
is already held), it enters the blocked or wait state. The process
continues to wait in main memory and does not require the CPU. Once the
I/O operation is completed, the process goes to the ready state.
 Terminated or Completed: Process is killed as well as PCB is deleted. The
resources allocated to the process will be released or deallocated.
 Suspend Ready: Process that was initially in the ready state but was
swapped out of main memory(refer to Virtual Memory topic) and placed
onto external storage by the scheduler is said to be in suspend ready state.
The process will transition back to a ready state whenever the process is
again brought onto the main memory.
 Suspend wait or suspend blocked: Similar to suspend ready but uses the
process which was performing I/O operation and lack of main memory
caused them to move to secondary memory. When work is finished it may
go to suspend ready.
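The state names above can be sketched as a small state machine. The transition table below is an illustrative assumption based on the descriptions given, not a standardized specification:

```python
# Minimal sketch of the process life cycle described above.
# The set of allowed transitions is illustrative, not exhaustive.
ALLOWED = {
    "new":           {"ready"},                          # admitted by the OS
    "ready":         {"running", "suspend_ready"},       # dispatched or swapped out
    "running":       {"ready", "waiting", "terminated"}, # preempt / I/O wait / exit
    "waiting":       {"ready", "suspend_wait"},          # I/O done or swapped out
    "suspend_ready": {"ready"},                          # swapped back in
    "suspend_wait":  {"suspend_ready", "waiting"},       # I/O done or swapped in
    "terminated":    set(),
}

class Process:
    def __init__(self, pid: int):
        self.pid = pid
        self.state = "new"

    def move_to(self, new_state: str) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

if __name__ == "__main__":
    p = Process(1)
    for s in ("ready", "running", "waiting", "ready", "running", "terminated"):
        p.move_to(s)
    print(p.state)
```

A real scheduler enforces exactly this kind of discipline: a process cannot, for example, jump from new straight to running without first being admitted to the ready queue.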

7. PCB and Operations on processes


While creating a process the operating system performs several operations. To identify
the processes, it assigns a process identification number (PID) to each process. As the
operating system supports multiprogramming, it needs to keep track of all the
processes. For this task, the process control block (PCB) is used to track the process's
execution status. Each PCB contains information about the process state,
program counter, stack pointer, status of opened files, scheduling algorithms, etc. All
this information must be saved when the process is switched from one
state to another; when the process makes such a transition, the
operating system updates the information in the process's PCB. A process control
block (PCB) thus contains information about the process, i.e. registers, quantum, priority,
etc. The process table is an array of PCBs, meaning that it logically contains a PCB for
each of the current processes in the system.

 Pointer – It is a stack pointer which is required to be saved when the


process is switched from one state to another to retain the current position
of the process.
 Process state – It stores the respective state of the process.
 Process number – Every process is assigned with a unique id known as
process ID or PID which stores the process identifier.
 Program counter – It stores the counter which contains the address of the
next instruction that is to be executed for the process.
 Register – These are the CPU registers, which include the accumulator,
base and index registers, and the general-purpose registers.
 Memory limits – This field contains information about the memory-
management system used by the operating system. This may include the
page tables, segment tables, etc.
 Open files list – This information includes the list of files opened for a
process.
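The PCB fields listed above, the process table, and the context-switch operation described next can be sketched as follows. The field names and the `context_switch` helper are illustrative assumptions, not a real kernel layout:

```python
from dataclasses import dataclass, field

# Illustrative PCB mirroring the fields listed above; real kernels store far
# more, in C structs (e.g. task_struct in Linux).
@dataclass
class PCB:
    pid: int                                        # process number (PID)
    state: str = "new"                              # process state
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    stack_pointer: int = 0                          # saved stack pointer
    memory_limits: tuple = (0, 0)                   # e.g. base/limit pair
    open_files: list = field(default_factory=list)  # open-files list

# The process table maps each PID to its PCB.
process_table: dict[int, PCB] = {}

def context_switch(old: PCB, new: PCB, cpu_registers: dict) -> None:
    """Save the running process's context into its PCB, then restore the next one's."""
    old.registers = dict(cpu_registers)   # save execution context
    old.state = "ready"
    cpu_registers.clear()
    cpu_registers.update(new.registers)   # restore saved context
    new.state = "running"
```

Saving into one PCB and restoring from another is exactly the context-switching role the PCB plays in the list above.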
Additional Points to Consider for Process Control Block (PCB)
 Interrupt handling: The PCB also contains information about the
interrupts that a process may have generated and how they were handled
by the operating system.
 Context switching: The process of switching from one process to another
is called context switching. The PCB plays a crucial role in context
switching by saving the state of the current process and restoring the state
of the next process.
 Real-time systems: Real-time operating systems may require additional
information in the PCB, such as deadlines and priorities, to ensure that
time-critical processes are executed in a timely manner.
 Virtual memory management: The PCB may contain information about
a process’s virtual memory management, such as page tables and page
fault handling.
 Inter-process communication: The PCB can be used to facilitate inter-
process communication by storing information about shared resources and
communication channels between processes.
 Fault tolerance: Some operating systems may use multiple copies of the
PCB to provide fault tolerance in case of hardware failures or software
errors.

Advantages:

1. Efficient process management: The process table and PCB provide an


efficient way to manage processes in an operating system. The process
table contains all the information about each process, while the PCB
contains the current state of the process, such as the program counter and
CPU registers.
2. Resource management: The process table and PCB allow the operating
system to manage system resources, such as memory and CPU time,
efficiently. By keeping track of each process’s resource usage, the
operating system can ensure that all processes have access to the resources
they need.
3. Process synchronization: The process table and PCB can be used to
synchronize processes in an operating system. The PCB contains
information about each process’s synchronization state, such as its waiting
status and the resources it is waiting for.
4. Process scheduling: The process table and PCB can be used to schedule
processes for execution. By keeping track of each process’s state and
resource usage, the operating system can determine which processes
should be executed next.
Disadvantages:

1. Overhead: The process table and PCB can introduce overhead and reduce
system performance. The operating system must maintain the process
table and PCB for each process, which can consume system resources.
2. Complexity: The process table and PCB can increase system complexity
and make it more challenging to develop and maintain operating systems.
The need to manage and synchronize multiple processes can make it more
difficult to design and implement system features and ensure system
stability.
3. Scalability: The process table and PCB may not scale well for large-scale
systems with many processes. As the number of processes increases, the
process table and PCB can become larger and more difficult to manage
efficiently.
4. Security: The process table and PCB can introduce security risks if they
are not implemented correctly. Malicious programs can potentially access
or modify the process table and PCB to gain unauthorized access to
system resources or cause system instability.
Miscellaneous accounting and status data – This PCB field includes
information about the amount of CPU time used, time constraints, the job or
process number, etc. The process control block also stores the register
contents, known as the execution context, of the processor from when the
process was last blocked from running. This execution-context mechanism
enables the operating system to restore a process's execution context when
the process returns to the running state. When the process makes a transition
from one state to another, the operating system updates its information in the
process's PCB. The operating system maintains pointers to each process's PCB
in a process table so that it can access the PCB quickly.

Operation on a Process
The execution of a process is a complex activity. It involves various operations.
Following are the operations that are performed while execution of a process:

Creation
This is the initial step of the process execution activity. Process creation means the
construction of a new process for execution. This might be performed by the system,
the user, or the old process itself. Several events can lead to process creation; some
of them are the following:
1. When we start the computer, the system creates several background
processes.
2. A user may request to create a new process.
3. A process can create a new process itself while executing.
4. The batch system initiates a batch job.
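Event 3 above (a process creating a new process itself) can be demonstrated with Python's standard `multiprocessing` module; the `spawn_and_reap` helper below is an illustrative name, not an OS primitive:

```python
import multiprocessing as mp
import os

# The child does its work and then terminates normally.
def child_task() -> None:
    print("child", os.getpid(), "running")

def spawn_and_reap() -> tuple:
    """Create a child process, wait for it to terminate, report pid and exit status."""
    p = mp.Process(target=child_task)
    p.start()               # creation: the OS builds a new process
    p.join()                # the parent waits; termination ends the child
    return p.pid, p.exitcode

if __name__ == "__main__":
    pid, code = spawn_and_reap()
    print("child", pid, "exited with status", code)
```

The `start`/`join` pair maps directly onto the creation and termination operations described in this section: the parent requests a new process, and later reaps the terminated child and its exit status.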
Scheduling/Dispatching
The event or activity in which the state of the process is changed from ready to run.
It means the operating system puts the process from the ready state into the running
state. Dispatching is done by the operating system when the resources are free or the
process has higher priority than the ongoing process. There are various other cases in
which the process in the running state is preempted and the process in the ready state
is dispatched by the operating system.
Blocking
When a process invokes an input-output system call that blocks it, the
operating system puts the process into blocked mode. Blocked mode is basically a
mode in which the process waits for input-output to complete. Hence, on the demand
of the process itself, the operating system blocks the process and dispatches another
process to the processor.

Hence, in process-blocking operations, the operating system puts the process in a


‘waiting’ state.
Preemption
When a timeout occurs that means the process hadn’t been terminated in the allotted
time interval and the next process is ready to execute, then the operating system
preempts the process. This operation is only valid where CPU scheduling supports
preemption. Basically, this happens in priority scheduling where on the incoming of
high priority process the ongoing process is preempted. Hence, in process
preemption operation, the operating system puts the process in a ‘ready’ state.
Process Termination
Process termination is the activity of ending the process. In other words, process
termination is the relaxation of computer resources taken by the process for the
execution. Like creation, in termination also there may be several events that may
lead to the process of termination. Some of them are:
1. The process completes its execution fully and it indicates to the OS that it
has finished.
2. The operating system itself terminates the process due to service errors.
3. There may be a problem in hardware that terminates the process.
4. One process can be terminated by another process.

Process table is a table that contains Process ID and the reference to the
corresponding PCB in memory. We can visualize the Process table as a
dictionary containing the list of all the processes running.

8. Process Scheduling

Process scheduling in CPU refers to the technique used by operating


systems to manage the execution of multiple processes on a single CPU. The

operating system is responsible for allocating CPU time to each process in a


fair and efficient manner, based on a predefined scheduling algorithm.

Types of CPU Scheduling –


There are two main types of CPU scheduling algorithms used by operating
systems: preemptive and non-preemptive.

Pre-emptive CPU scheduling


Pre-emptive CPU scheduling is a technique used by operating systems to
efficiently allocate the CPU to multiple processes or threads. In pre-emptive
scheduling, the operating system interrupts the currently executing process or
thread and switches to another process or thread based on a predefined
scheduling algorithm.

The goal of pre-emptive scheduling is to ensure that all processes or threads


have a fair share of CPU time, and that no process or thread monopolizes the
CPU. This improves the overall system performance and responsiveness, as it
allows multiple processes or threads to run concurrently without waiting for
each other.

Non-Pre-emptive CPU scheduling


Non-preemptive CPU scheduling is another technique used by operating
systems to allocate the CPU to multiple processes or threads. In non-
preemptive scheduling, the currently executing process or thread continues to
run until it completes its execution or blocks for some reason.

Unlike pre-emptive scheduling, the operating system does not interrupt the
currently executing process or thread to switch to another one, unless the
currently running process blocks or voluntarily gives up the CPU. As a result,
non-preemptive scheduling algorithms may lead to longer waiting times for
some processes or threads, and may not be as efficient as pre-emptive
scheduling in certain situations.

One advantage of non-preemptive scheduling is that it is simpler and less


resource-intensive than pre-emptive scheduling, as there is no need to
constantly switch between processes or threads. This can be useful in certain
real-time applications, where predictable execution times are more important
than overall system performance.

Some common non-preemptive scheduling algorithms include First-Come-


First-Serve (FCFS), Shortest Job First (SJF), and Priority-based scheduling. In

these algorithms, the scheduler chooses the next process or thread to run
based on its arrival time, execution time, or priority, respectively.

Criteria for CPU scheduling –


There are several processes kept in the main memory by the short-term
scheduler. The selection of a process from the main memory and given to the
CPU for execution is done on the basis of certain algorithms. These algorithms
are known as CPU scheduling algorithms. There are various criteria according
to which an algorithm is selected over another. The various scheduling criteria
are –

 CPU Utilization – CPU needs to be kept busy all the time. It should
not be idle. CPU utilization can range from 0 to 100 percent. CPU
utilization from 40 to 90 percent is considered as good whereas below
this is considered poor.
 Throughput – This is a measure of rate of work done in a system. It
is defined as the number of processes per unit time.
 Turn Around Time – It is defined as the time interval between the
submission of a process to the time of completion.
 Waiting Time – It is defined as the sum of time periods that are spent
by the processes waiting in the queue.
 Response Time – Turnaround time is sometimes considered a poor
criterion, because a process may finish one piece of work quickly and
begin computing the next while the results of the previous one are
still being output to the user. Therefore, response time is considered
a better criterion. The time from the submission of a process to the
time it first executes is called the response time.
So, it is considered better to have a maximum CPU utilization and throughput
and a minimum turnaround time, waiting time, and response time. In some
cases, it is preferred to optimize the average value of these criteria whereas
cases may arise where optimizing the minimum and maximum values may
give better results.
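The criteria above reduce to simple arithmetic once completion times are known: turnaround time = completion − arrival, and waiting time = turnaround − burst. A minimal sketch (the function names are our own):

```python
# Per-process metrics from parallel lists of arrival, burst, and completion times.
def metrics(arrival, burst, completion):
    tat = [c - a for a, c in zip(arrival, completion)]   # turnaround times
    wt = [t - b for t, b in zip(tat, burst)]             # waiting times
    return tat, wt

# Average turnaround and waiting time, the quantities schedulers try to minimize.
def averages(arrival, burst, completion):
    tat, wt = metrics(arrival, burst, completion)
    n = len(arrival)
    return sum(tat) / n, sum(wt) / n
```

These two helpers are reused implicitly in every worked example that follows: each example's table columns are exactly the outputs of `metrics`.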

9. Scheduler Types
Process Scheduling handles the selection of a process for the processor on the basis of a scheduling
algorithm and also the removal of a process from the processor. It is an important part of
multiprogramming operating system.

There are many scheduling queues that are used in process scheduling. When the processes enter
the system, they are put into the job queue. The processes that are ready to execute in the main

memory are kept in the ready queue. The processes that are waiting for the I/O device are kept in
the I/O device queue.

The different schedulers that are used for process scheduling are −

Long Term Scheduler


The job scheduler or long-term scheduler selects processes from the storage pool in the secondary
memory and loads them into the ready queue in the main memory for execution.

The long-term scheduler controls the degree of multiprogramming. It must select a careful mixture
of I/O bound and CPU bound processes to yield optimum system throughput. If it selects too many
CPU bound processes then the I/O devices are idle and if it selects too many I/O bound processes
then the processor has nothing to do.

The job of the long-term scheduler is very important and directly affects the system for a long time.

Short Term Scheduler


The short-term scheduler selects one of the processes from the ready queue and schedules them for
execution. A scheduling algorithm is used to decide which process will be scheduled for execution
next.

The short-term scheduler executes much more frequently than the long-term scheduler as a process
may execute only for a few milliseconds.

The choices of the short term scheduler are very important. If it selects a process with a long burst
time, then all the processes after that will have to wait for a long time in the ready queue. This is
known as starvation and it may happen if a wrong decision is made by the short-term scheduler.

A diagram that demonstrates long-term and short-term schedulers is given as follows −

Medium Term Scheduler


The medium-term scheduler swaps out a process from main memory. It can again swap in the
process later from the point it stopped executing. This can also be called as suspending and
resuming the process.

This is helpful in reducing the degree of multiprogramming. Swapping is also useful to improve
the mix of I/O bound and CPU bound processes in the memory.

A diagram that demonstrates medium-term scheduling is given as follows −

I. Non Preemptive
1. First Come First Serve
First Come First Serve, shortly known as FCFS, is the simplest CPU process
scheduling algorithm. In the First Come First Serve algorithm, processes are
allowed to execute in a linear manner.

This means that whichever process enters the ready queue first is executed
first. This shows that the First Come First Serve algorithm follows the First In
First Out (FIFO) principle.

Characteristics of FCFS CPU Process Scheduling


The characteristics of FCFS CPU Process Scheduling are:

1. Implementation is simple.
2. It does not cause starvation: every process eventually gets the CPU.
3. It adopts a non-preemptive strategy.
4. It runs the processes in the order in which they are received.
5. Arrival time is used as the selection criterion for processes.

Advantages of FCFS CPU Process Scheduling


The advantages of FCFS CPU Process Scheduling are:

1. In order to allocate processes, it uses a First In First Out queue.

2. The FCFS CPU scheduling process is straightforward and easy to implement.
3. Since FCFS is non-preemptive, there is no chance of process starvation.
4. As process priority is not considered, it is an equitable algorithm.

Disadvantages of FCFS CPU Process Scheduling


The disadvantages of FCFS CPU Process Scheduling are:

o The FCFS CPU scheduling algorithm has long waiting times.

o FCFS CPU scheduling favors CPU-bound processes over input/output-bound ones.
o In FCFS there is a chance of the convoy effect occurring.
o Because FCFS is so straightforward, it often isn't very effective. Extended
waiting periods go hand in hand with it: all other processes are left waiting
while the CPU is busy with one time-consuming process.

Problems in the First Come First Serve CPU Scheduling Algorithm

Example
Process ID Arrival Time Burst Time
P1 0 9
P2 1 3
P3 1 2
P4 1 4
P5 2 3
P6 3 2

Gantt chart for the above Example 1:

| P1 0-9 | P2 9-12 | P3 12-14 | P4 14-18 | P5 18-21 | P6 21-23 |

Turn Around Time = Completion Time - Arrival Time

Waiting Time = Turn Around Time - Burst Time

Solution to the Above Question Example 1

S. No | Process ID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
----- | ---------- | ------------ | ---------- | --------------- | ---------------- | ------------
1 | P1 | 0 | 9 | 9 | 9 | 0
2 | P2 | 1 | 3 | 12 | 11 | 8
3 | P3 | 1 | 2 | 14 | 13 | 11
4 | P4 | 1 | 4 | 18 | 17 | 13
5 | P5 | 2 | 3 | 21 | 19 | 16
6 | P6 | 3 | 2 | 23 | 20 | 18

The Average Completion Time is:

Average CT = ( 9 + 12 + 14 + 18 + 21 + 23 ) / 6

Average CT = 97 / 6

Average CT = 16.16667

The Average Waiting Time is:

Average WT = ( 0 + 8 + 11 + 13 + 16 + 18 ) /6

Average WT = 66 / 6

Average WT = 11

The Average Turn Around Time is:

Average TAT = ( 9 + 11 + 13 + 17 + 19 +20 ) / 6

Average TAT = 89 / 6

Average TAT = 14.83334
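The worked example can be checked with a small FCFS simulator; this sketch assumes ties in arrival time are broken by process ID, as in the example:

```python
# First Come First Serve: run processes in arrival order, never preempting.
def fcfs(processes):
    """processes: list of (pid, arrival, burst); returns {pid: (ct, tat, wt)}."""
    time, result = 0, {}
    for pid, at, bt in sorted(processes, key=lambda p: (p[1], p[0])):
        time = max(time, at) + bt                 # CPU may idle until arrival
        result[pid] = (time, time - at, time - at - bt)  # CT, TAT, WT
    return result

if __name__ == "__main__":
    procs = [("P1", 0, 9), ("P2", 1, 3), ("P3", 1, 2),
             ("P4", 1, 4), ("P5", 2, 3), ("P6", 3, 2)]
    r = fcfs(procs)
    avg_wt = sum(v[2] for v in r.values()) / len(r)
    print(r, avg_wt)  # average waiting time 11.0, matching the table above
```

Running it reproduces the completion, turnaround, and waiting times in the solution table.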

2. SJF (Shortest Job First)


Example
In the following example, there are five jobs named as P1, P2, P3, P4 and P5. Their
arrival time and burst time are given in the table below.

PID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
--- | ------------ | ---------- | --------------- | ---------------- | ------------
P1 | 1 | 7 | 8 | 7 | 0
P2 | 3 | 3 | 13 | 10 | 7
P3 | 6 | 2 | 10 | 4 | 2
P4 | 7 | 10 | 31 | 24 | 14
P5 | 9 | 8 | 21 | 12 | 4

Since no process arrives at time 0, there is an empty slot in the Gantt chart from
time 0 to 1 (the time at which the first process arrives):

| idle 0-1 | P1 1-8 | P3 8-10 | P2 10-13 | P5 13-21 | P4 21-31 |

Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time

Disadvantages of SJF
1. May suffer with the problem of starvation
2. It is not implementable because the exact Burst time for a process can't be
known in advance.
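The non-preemptive SJF schedule above can be reproduced with a short simulator. Note that, per disadvantage 2, the sketch assumes burst times are known in advance:

```python
# Non-preemptive Shortest Job First: of the processes that have arrived,
# always run the one with the smallest burst time to completion.
def sjf(processes):
    """processes: list of (pid, arrival, burst); returns {pid: (ct, tat, wt)}."""
    pending = list(processes)
    time, done = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                         # CPU idles until the next arrival
            time = min(p[1] for p in pending)
            continue
        pid, at, bt = min(ready, key=lambda p: (p[2], p[1]))  # shortest burst
        time += bt
        done[pid] = (time, time - at, time - at - bt)          # CT, TAT, WT
        pending.remove((pid, at, bt))
    return done
```

With the five jobs of the example, the simulator yields the same completion, turnaround, and waiting times as the table.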

II. Pre-emptive scheduling


1. Round Robin
This algorithm is notable because it removes several of the flaws we identified
in the previous CPU process scheduling algorithms.

Round Robin CPU scheduling is popular because it works in a purely pre-emptive
manner, which makes its behaviour predictable and fair.

Important Abbreviations
1. CPU - - - > Central Processing Unit
2. AT - - - > Arrival Time
3. BT - - - > Burst Time
4. WT - - - > Waiting Time
5. TAT - - - > Turn Around Time
6. CT - - - > Completion Time
7. FIFO - - - > First In First Out
8. TQ - - - > Time Quantum

Round Robin CPU Scheduling


Round Robin is one of the most widely used CPU scheduling algorithms. Round
Robin CPU scheduling uses a Time Quantum (TQ). Each time a process runs, one
time quantum is subtracted from its remaining burst time, so the process is
completed one chunk at a time.

Time sharing is the main emphasis of the algorithm. Each step of this algorithm
is carried out cyclically. The system defines a specific time slice, known as a time
quantum.

Processes that are eligible enter the ready queue. The process at the head of the
ready queue is executed for up to one Time Quantum and is then removed from
the CPU. If the process still requires time to complete its execution, it is added
to the back of the ready queue.

The ready queue never holds duplicate entries: each process appears in it at
most once, since holding the same process twice would only add redundancy.

Once a process's execution is complete, it is not returned to the ready queue.

Advantages
The Advantages of Round Robin CPU Scheduling are:

1. A fair amount of CPU is allocated to each job.


2. Because it doesn't depend on the burst time, it can truly be implemented in the system.
3. It is not affected by the convoy effect or the starvation problem as occurred in First
Come First Serve CPU Scheduling Algorithm.

Disadvantages
The Disadvantages of Round Robin CPU Scheduling are:

1. Low Operating System slicing times will result in decreased CPU output.
2. Round Robin CPU Scheduling approach takes longer to swap contexts.
3. Time quantum has a significant impact on its performance.
4. The procedures cannot have priorities established.

Examples:

S. No Process ID Arrival Time Burst Time


1. P1 0 7
2 P2 1 4
3 P3 2 15
4 P4 3 11
5 P5 4 20
6 P6 4 9

Assume Time Quantum TQ = 5

Ready Queue:

1. P1, P2, P3, P4, P5, P6, P1, P3, P4, P5, P6, P3, P4, P5, P5
(P5 occupies the last two slices because it is the only process remaining.)

Gantt chart:

| P1 0-5 | P2 5-9 | P3 9-14 | P4 14-19 | P5 19-24 | P6 24-29 | P1 29-31 | P3 31-36 | P4 36-41 | P5 41-46 | P6 46-50 | P3 50-55 | P4 55-56 | P5 56-61 | P5 61-66 |

Average Completion Time

1. Average Completion Time = ( 31 +9 + 55 +56 +66 + 50 ) / 6


2. Average Completion Time = 267 / 6
3. Average Completion Time = 44.5

Average Waiting Time

1. Average Waiting Time = ( 24 + 4 + 38 + 42 + 42 + 37 ) / 6

2. Average Waiting Time = 187 / 6
3. Average Waiting Time = 31.16667

Average Turn Around Time

1. Average Turn Around Time = ( 31 + 8 + 53 + 53 + 62 + 46 ) / 6


2. Average Turn Around Time = 253 / 6
3. Average Turn Around Time = 42.16667
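The example above (TQ = 5) can be reproduced with a small Round Robin simulator. The sketch assumes the usual convention that processes arriving during a time slice enter the ready queue ahead of the preempted process being re-queued:

```python
from collections import deque

# Round Robin: each process runs for at most one time quantum, then is
# re-queued at the back of the ready queue if it still has work left.
def round_robin(processes, tq):
    """processes: list of (pid, arrival, burst); returns {pid: completion time}."""
    procs = sorted(processes, key=lambda p: p[1])
    remaining = {pid: bt for pid, _, bt in procs}
    queue, time, i, ct = deque(), 0, 0, {}
    while i < len(procs) or queue:
        if not queue:                              # CPU idles until next arrival
            time = max(time, procs[i][1])
        while i < len(procs) and procs[i][1] <= time:
            queue.append(procs[i][0])
            i += 1
        pid = queue.popleft()
        run = min(tq, remaining[pid])
        time += run
        remaining[pid] -= run
        while i < len(procs) and procs[i][1] <= time:  # admit arrivals during slice
            queue.append(procs[i][0])
            i += 1
        if remaining[pid]:
            queue.append(pid)                      # back of the ready queue
        else:
            ct[pid] = time
    return ct
```

Running it on the six processes of the example yields exactly the completion times used in the average-completion-time calculation above.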
2. Shortest Remaining Time First (SRTF)
This algorithm is the preemptive version of SJF scheduling. In SRTF, the execution
of a process can be stopped after a certain amount of time. At the arrival of every
process, the short-term scheduler schedules, from among the available processes
and the running process, the one with the least remaining burst time.

Once all the processes are available in the ready queue, no further preemption is
done and the algorithm works as SJF scheduling. The context of a process is saved
in its Process Control Block when the process is removed from execution and the
next process is scheduled. This PCB is accessed on the next execution of the process.

Example
In this example, there are six jobs P1, P2, P3, P4, P5 and P6. Their arrival time and
burst time are given below in the table.

Process ID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time | Response Time
---------- | ------------ | ---------- | --------------- | ---------------- | ------------ | -------------
P1 | 0 | 8 | 20 | 20 | 12 | 0
P2 | 1 | 4 | 10 | 9 | 5 | 0
P3 | 2 | 2 | 4 | 2 | 0 | 0
P4 | 3 | 1 | 5 | 2 | 1 | 1
P5 | 4 | 3 | 13 | 9 | 6 | 6
P6 | 5 | 2 | 7 | 2 | 0 | 0

(Response time is measured from a process's arrival until it first gets the CPU:
P1 first runs at time 0, P2 at 1, P3 at 2, P4 at 4, P5 at 10, and P6 at 5.)

Gantt chart: | P1 0-1 | P2 1-2 | P3 2-4 | P4 4-5 | P6 5-7 | P2 7-10 | P5 10-13 | P1 13-20 |

Avg Waiting Time = ( 12 + 5 + 0 + 1 + 6 + 0 ) / 6 = 24 / 6 = 4
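The SRTF schedule above can be reproduced by stepping one time unit at a time and always choosing the process with the least remaining burst time (ties broken by earlier arrival):

```python
# Shortest Remaining Time First: a unit-step simulation of preemptive SJF.
def srtf(processes):
    """processes: list of (pid, arrival, burst); returns {pid: completion time}."""
    remaining = {pid: bt for pid, _, bt in processes}
    arrival = {pid: at for pid, at, _ in processes}
    time, ct = 0, {}
    while remaining:
        ready = [p for p in remaining if arrival[p] <= time]
        if not ready:                  # no process has arrived yet: CPU idles
            time += 1
            continue
        # least remaining time; ties broken by earlier arrival, then by pid
        pid = min(ready, key=lambda p: (remaining[p], arrival[p], p))
        remaining[pid] -= 1            # run the chosen process for one time unit
        time += 1
        if remaining[pid] == 0:
            ct[pid] = time
            del remaining[pid]
    return ct
```

For the six jobs above this returns the completion times in the table, from which an average waiting time of 4 follows.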

III Priority Scheduling


a. Priority scheduling (non-preemptive) is one of the most common
scheduling algorithms in batch systems. Processes are selected by
arrival time (the process with the earlier arrival time goes first); if
two processes have the same arrival time, their priorities are
compared (the higher-priority process goes first). If they also have
the same priority, their process numbers are compared (the lower
process number goes first). This is repeated until all processes have
executed.
Implementation –
1. Input the processes with their arrival time, burst time, and priority.
2. The process with the lowest arrival time is scheduled first; if two or
more processes share the lowest arrival time, the one with the
higher priority is scheduled first.
3. Further processes are scheduled according to their arrival time and
priority. (Here we assume that a lower priority number means a
higher priority.) If two processes have the same priority, they are
sorted by process number.
Note: A question will state clearly which numbers denote higher and
which denote lower priority.
4. Once all the processes have arrived, they are scheduled based on
their priority.

Gantt Chart –

b. Priority Scheduling Preemptive

Preemptive Priority CPU Scheduling is a pre-emptive CPU scheduling algorithm
that works based on the priority of a process. In this algorithm, the scheduler
schedules the tasks according to their priority, which means that a higher-priority
process is executed first. In case of any conflict, i.e., when there is more than
one process with equal priority, the pre-emptive priority CPU scheduling
algorithm falls back on FCFS (First Come First Serve) order.

Example

Process | Arrival Time | Priority | Burst Time
------- | ------------ | -------- | ----------
P1 | 0 ms | 3 | 3 ms
P2 | 1 ms | 2 | 4 ms
P3 | 2 ms | 4 | 6 ms
P4 | 3 ms | 6 | 4 ms
P5 | 5 ms | 10 | 2 ms

Assuming that a larger priority number denotes a higher priority (with FCFS
breaking ties), the Gantt chart is:

| P1 0-2 | P3 2-3 | P4 3-5 | P5 5-7 | P4 7-9 | P3 9-14 | P1 14-15 | P2 15-19 |

After calculating the above fields, the final table looks like:

Process | Arrival Time | Priority | Burst Time | Completion Time | Turn Around Time | Waiting Time
------- | ------------ | -------- | ---------- | --------------- | ---------------- | ------------
P1 | 0 ms | 3 | 3 ms | 15 ms | 15 ms | 12 ms
P2 | 1 ms | 2 | 4 ms | 19 ms | 18 ms | 14 ms
P3 | 2 ms | 4 | 6 ms | 14 ms | 12 ms | 6 ms
P4 | 3 ms | 6 | 4 ms | 9 ms | 6 ms | 2 ms
P5 | 5 ms | 10 | 2 ms | 7 ms | 2 ms | 0 ms
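The schedule for this example can be reproduced with a unit-step simulator; the sketch assumes a larger priority number means a higher priority, with FCFS (earlier arrival) breaking ties:

```python
# Preemptive priority scheduling: at every time unit, run the highest-priority
# process that has arrived; a newly arrived higher-priority process preempts.
def preemptive_priority(processes):
    """processes: list of (pid, arrival, priority, burst) -> {pid: completion time}."""
    remaining = {pid: bt for pid, _, _, bt in processes}
    info = {pid: (at, pr) for pid, at, pr, _ in processes}
    time, ct = 0, {}
    while remaining:
        ready = [p for p in remaining if info[p][0] <= time]
        if not ready:
            time += 1
            continue
        # highest priority number wins; earlier arrival breaks ties (FCFS)
        pid = max(ready, key=lambda p: (info[p][1], -info[p][0]))
        remaining[pid] -= 1            # run the chosen process for one time unit
        time += 1
        if remaining[pid] == 0:
            ct[pid] = time
            del remaining[pid]
    return ct
```

With the opposite convention (lower number = higher priority), only the `max(...)` key would need to flip to a `min`.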

IV Multilevel Queue (MLQ) CPU Scheduling


It may happen that processes in the ready queue can be divided into different classes
where each class has its own scheduling needs. For example, a common division is
a foreground (interactive) process and a background (batch) process. These two
classes have different scheduling needs. For this kind of situation, Multilevel Queue
Scheduling is used.

Features of Multilevel Queue (MLQ) CPU Scheduling:

 Multiple queues: In MLQ scheduling, processes are divided into multiple


queues based on their priority, with each queue having a different priority
level. Higher-priority processes are placed in queues with higher priority
levels, while lower-priority processes are placed in queues with lower
priority levels.
 Priorities assigned: Priorities are assigned to processes based on their
type, characteristics, and importance. For example, interactive processes


like user input/output may have a higher priority than batch processes like
file backups.
 Preemption: Preemption is allowed in MLQ scheduling, which means a
higher priority process can preempt a lower priority process, and the CPU
is allocated to the higher priority process. This helps ensure that high-
priority processes are executed in a timely manner.
 Scheduling algorithm: Different scheduling algorithms can be used for
each queue, depending on the requirements of the processes in that queue.
For example, Round Robin scheduling may be used for interactive
processes, while First Come First Serve scheduling may be used for batch
processes.
 Feedback mechanism: A feedback mechanism can be implemented to
adjust the priority of a process based on its behavior over time. For
example, if an interactive process has been waiting in a lower-priority
queue for a long time, its priority may be increased to ensure it is executed
in a timely manner.
 Efficient allocation of CPU time: MLQ scheduling ensures that
processes with higher priority levels are executed in a timely manner,
while still allowing lower priority processes to execute when the CPU is
idle.
 Fairness: MLQ scheduling provides a fair allocation of CPU time to
different types of processes, based on their priority and requirements.
 Customizable: MLQ scheduling can be customized to meet the specific
requirements of different types of processes.
Advantages of Multilevel Queue CPU Scheduling:
 Low scheduling overhead: Since processes are permanently assigned to
their respective queues, the overhead of scheduling is low, as the scheduler
only needs to select the appropriate queue for execution.
 Efficient allocation of CPU time: The scheduling algorithm ensures that
processes with higher priority levels are executed in a timely manner,
while still allowing lower priority processes to execute when the CPU is
idle. This ensures optimal utilization of CPU time.
 Fairness: The scheduling algorithm provides a fair allocation of CPU time
to different types of processes, based on their priority and requirements.
 Customizable: The scheduling algorithm can be customized to meet the
specific requirements of different types of processes. Different scheduling
algorithms can be used for each queue, depending on the requirements of
the processes in that queue.
 Prioritization: Priorities are assigned to processes based on their type,
characteristics, and importance, which ensures that important processes
are executed in a timely manner.
 Preemption: Preemption is allowed in Multilevel Queue Scheduling,
which means that higher-priority processes can preempt lower-priority
processes, and the CPU is allocated to the higher-priority process. This
helps ensure that high-priority processes are executed in a timely manner.


Disadvantages of Multilevel Queue CPU Scheduling:

 Processes in lower priority queues may starve if the higher priority
queues never become empty.
 It is inflexible in nature.
 There may be added complexity in implementing and maintaining multiple
queues and scheduling algorithms.
Ready Queue is divided into separate queues for each class of processes. For
example, let us take three types of processes: System processes, Interactive
processes, and Batch processes. Each type has its own queue. Now, look at the
figure below.

The description of the processes in the above diagram is as follows:

 System Processes: Processes that the operating system itself needs to run
(kernel and other housekeeping tasks) are termed System Processes.
 Interactive Processes: An Interactive Process is one that interacts with the
user and therefore needs a quick response.
 Batch Processes: Batch processing is a technique in which the Operating
System collects programs and data together in a batch before processing
starts.
Each of the three process types has its own queue, and each queue has its own
scheduling algorithm. For example, queues 1 and 2 may use Round Robin while
queue 3 uses FCFS to schedule their processes.
Scheduling among the queues: What will happen if all the queues have some
processes? Which process should get the CPU? To determine this Scheduling among
the queues is necessary. There are two ways to do so –
1. Fixed priority preemptive scheduling method – Each queue has
absolute priority over the lower priority queues. Consider the
priority order queue 1 > queue 2 > queue 3. Under this scheme, no
process in the batch queue (queue 3) can run unless queues 1 and 2
are empty. If a batch process (queue 3) is running and a system
(queue 1) or interactive (queue 2) process enters the ready queue,
the batch process is preempted.
2. Time slicing – In this method, each queue gets a certain portion of CPU
time and can use it to schedule its own processes. For instance, queue 1


takes 50 percent of the CPU time, queue 2 takes 30 percent, and queue 3
gets 20 percent.
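The fixed-priority method can be sketched as always serving the highest-priority non-empty queue. The queue names and contents below are illustrative:

```python
from collections import deque

# Three queues in strict priority order: system > interactive > batch.
queues = {
    "system": deque(),                       # empty in this example
    "interactive": deque(["edit", "shell"]),
    "batch": deque(["backup"]),
}

def next_process():
    """Fixed-priority MLQ: serve the first non-empty queue in priority order."""
    for name in ("system", "interactive", "batch"):
        if queues[name]:
            return queues[name].popleft()
    return None  # all queues empty -> CPU idle

print(next_process())  # 'edit': the system queue is empty, so interactive runs
```

Note that under this scheme a batch process runs only once both higher queues are empty, which is exactly why starvation can occur without time slicing.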
V Multilevel Feedback Queue Scheduling (MLFQ) CPU
Scheduling

Multilevel Feedback Queue (MLFQ) CPU Scheduling is like Multilevel Queue (MLQ)
Scheduling, but here a process can move between the queues, which makes it much
more flexible and efficient than multilevel queue scheduling.
Characteristics of Multilevel Feedback Queue Scheduling:
 Unlike a multilevel queue algorithm, where processes are permanently
assigned to a queue on entry to the system, MLFQ allows processes to
move between queues.
 Permanent assignment is what gives MLQ its low scheduling overhead;
MLFQ gives up some of that overhead in exchange for flexibility.

Features of Multilevel Feedback Queue Scheduling (MLFQ) CPU Scheduling:

Multiple queues: Similar to MLQ scheduling, MLFQ scheduling divides processes


into multiple queues based on their priority levels. However, unlike MLQ
scheduling, processes can move between queues based on their behavior and
needs.
Priorities adjusted dynamically: The priority of a process can be adjusted
dynamically based on its behavior, such as how much CPU time it has used or how
often it has been blocked. Higher-priority processes are given more CPU time and
lower-priority processes are given less.
Time-slicing: Each queue is assigned a time quantum or time slice, which
determines how much CPU time a process in that queue is allowed to use before it
is preempted and moved to a lower priority queue.
Feedback mechanism: MLFQ scheduling uses a feedback mechanism to adjust
the priority of a process based on its behavior over time. For example, a
process that uses up its entire time slice is demoted to a lower-priority
queue, while a process that has waited a long time in a lower-priority queue
may be promoted so that it gets CPU time.
Preemption: Preemption is allowed in MLFQ scheduling, meaning that a higher-
priority process can preempt a lower-priority process to ensure it gets the CPU
time it needs.
Advantages of Multilevel Feedback Queue Scheduling:
 It is more flexible.
 It allows processes to move between different queues.
 It prevents starvation by moving a process that has waited too long in a
lower priority queue to a higher priority queue.
Disadvantages of Multilevel Feedback Queue Scheduling:
 Choosing the best values for the number of queues, time quanta, and
promotion/demotion rules is difficult and requires tuning.
 It incurs more CPU overhead.
 It is the most complex of these algorithms.


Working of Multilevel Feedback Queue Scheduling

MLFQ allows a process to move between queues. It keeps analyzing the behavior
(time of execution) of processes and changes their priority accordingly.
Now, look at the diagram and explanation below to understand it properly.

Now let us suppose that queues 1 and 2 follow round robin with time quantum 4
and 8 respectively and queue 3 follow FCFS.
Implementation of MLFQ is given below –
 When a process enters the system, the operating system can insert it into
any of the above three queues depending upon its priority. For example,
the operating system would not place a background process in the higher
priority queues 1 and 2; it would assign it directly to the lower priority
queue 3. Let us say our current process has significant priority, so it is
placed in queue 1.
 In queue 1, a process executes for 4 units. If it completes within these 4
units, or gives up the CPU for an I/O operation within them, its priority
does not change; when it returns to the ready queue, it again executes in
queue 1.
 If a process in queue 1 does not complete in 4 units then its priority
gets reduced and it is shifted to queue 2.
 Above points 2 and 3 are also true for queue 2 processes but the time
quantum is 8 units. In general, if a process does not complete within its
time quantum, it is shifted to the lower priority queue.
 In the last queue, processes are scheduled in an FCFS manner.
 A process in a lower priority queue can execute only when all higher
priority queues are empty.
 A process running in a lower priority queue is preempted by a process
arriving in a higher priority queue.
The above implementation may differ; for example, the last queue could also
follow Round Robin scheduling.
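The steps above can be sketched as a small simulation. This is one illustrative implementation under simplifying assumptions: all jobs arrive at time 0, queues 1 and 2 use quanta of 4 and 8, and queue 3 runs each job to completion (FCFS):

```python
from collections import deque

def mlfq(jobs, quanta=(4, 8)):
    """Simulate a 3-level MLFQ. jobs: {name: burst time}, all arriving at t=0.
    A job that exhausts its quantum is demoted one level; the last level is FCFS.
    Returns the sequence of (job, queue level) slices that were executed."""
    queues = [deque(jobs.items()), deque(), deque()]
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, rem = queues[level].popleft()
        used = min(rem, quanta[level]) if level < 2 else rem  # queue 3 runs to end
        trace.append((name, level))
        rem -= used
        if rem > 0:  # quantum expired before completion -> demote one level
            queues[level + 1].append((name, rem))
    return trace

# A short job finishes in queue 1; a long job sinks down through the levels.
print(mlfq({"A": 3, "B": 20}))  # [('A', 0), ('B', 0), ('B', 1), ('B', 2)]
```

Job A (burst 3) fits inside the 4-unit quantum and never changes priority, while job B (burst 20) is demoted twice, exactly as the walk-through describes.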

Problem in the above implementation: A process in a lower priority queue can
suffer starvation when a stream of short processes in the higher priority
queues takes all the CPU time.
Solution: A simple solution is to boost the priority of all processes after
regular intervals, placing them all in the highest priority queue (this is
often called priority boosting, a form of aging).
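The boosting fix can be sketched as periodically moving every queued process back to the top queue. The boost interval is a tunable parameter of the scheduler, not a fixed rule:

```python
from collections import deque

def boost(queues):
    """Anti-starvation priority boost: move every waiting process to queue 0.
    A scheduler would call this every S time units (S is a tunable value)."""
    for q in queues[1:]:
        while q:
            queues[0].append(q.popleft())

queues = [deque(["A"]), deque(["B"]), deque(["C", "D"])]
boost(queues)
print(list(queues[0]))  # ['A', 'B', 'C', 'D'] -- everyone back at top priority
```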
What is the need for such complex scheduling?
 Firstly, it is more flexible than multilevel queue scheduling.
 To optimize turnaround time, algorithms like SJF are needed, but they
require knowing the running time of processes in advance, which is not
available. MLFQ instead runs a process for a time quantum and then
changes its priority if it turns out to be a long process. It thus learns
from the past behavior of a process, predicts its future behavior, and
tends to run shorter processes first, optimizing turnaround time.
 MLFQ also reduces response time.
