UNIT I
1. Definition
An Operating System can be defined as an interface between the user and the hardware. It provides
an environment in which the user can perform tasks in a convenient and efficient way, and it is
responsible for the execution of all processes, resource allocation, CPU management, file
management, and many other tasks.
1. Process Management
2. Process Synchronization
VARDHAMAN COLLEGE OF ENGINEERING
3. Memory Management
4. CPU Scheduling
5. File Management
6. Security
In the 1970s, Batch processing was very popular. In this technique, similar types of jobs were
batched together and executed one after another. People typically shared a single computer,
which was called a mainframe.
In a Batch operating system, access is given to more than one person; they submit their respective
jobs to the system for execution.
The system puts all of the jobs in a queue on a first come, first served basis and then executes
the jobs one by one. The users collect their respective output when all the jobs have been executed.
The purpose of this operating system was mainly to transfer control from one job to another as
soon as a job was completed. It contained a small set of programs called the resident monitor
that always resided in one part of the main memory. The remaining part was used for servicing
jobs.
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency as it eliminates CPU idle time
between two jobs.
Disadvantages of Batch OS
1. Starvation
For Example:
There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very
high, then the other four jobs will never be executed, or they will have to wait for a very long
time. Hence the other processes get starved.
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's input. If a job requires
the input of two numbers from the console, then it will never get it in the batch processing
scenario since the user is not present at the time of execution.
Multiprogramming is an extension to batch processing where the CPU is always kept busy.
Each process needs two types of system time: CPU time and IO time.
In a multiprogramming environment, when a process does its I/O, The CPU can start the
execution of other processes. Therefore, multiprogramming improves the efficiency of the
system.
Advantages of Multiprogramming OS
o Throughput of the system is increased, as the CPU almost always has a program to execute.
o Response time can also be reduced.
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various systems
resources are used efficiently, but they do not provide any user interaction with the
computer system.
In Multiprocessing, parallel computing is achieved. More than one processor is present in the
system, and they can execute more than one process simultaneously, which increases the
throughput of the system.
An Operating system, which includes software and associated protocols to communicate with
other computers via a network conveniently and cost-effectively, is called Network Operating
System.
In Real-Time Systems, each job carries a certain deadline within which it is supposed to be
completed; otherwise there will be a huge loss, or, even if the result is produced, it will be
completely useless.
Applications of Real-Time systems exist in military contexts: if you want to launch a missile,
the missile must be launched with a certain precision.
In the Time Sharing operating system, computer resources are allocated in a time-dependent
fashion to several programs simultaneously. Thus it helps to provide a large number of users
direct access to the main computer. It is a logical extension of multiprogramming. In time-
sharing, the CPU is switched among multiple programs given by different users on a scheduled
basis.
The Distributed Operating system is not installed on a single machine, it is divided into parts,
and these parts are loaded on different machines. A part of the distributed Operating system is
installed on each machine to make their communication possible. Distributed Operating
systems are much more complex, large, and sophisticated than Network operating systems
because they also have to take care of varying networking protocols.
Process Management.
Process Management involves tasks like creation, scheduling, deadlock, and termination
of processes. The operating system performs these tasks using process scheduling,
which is an OS task that schedules processes according to their states like ready,
waiting, and running.
The operating system is responsible for managing processes, i.e., assigning the processor to
a process at a time. This is known as process scheduling. The different algorithms used for
process scheduling are FCFS (first come first served), SJF (shortest job first), priority
scheduling, round robin scheduling, etc.
There are many scheduling queues that are used to handle processes in process management.
When the processes enter the system, they are put into the job queue. The processes that are
ready to execute in the main memory are kept in the ready queue. The processes that are
waiting for the I/O device are kept in the device queue.
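The flow between these queues can be sketched with ordinary Python deques. This is only an illustrative sketch: the process names (P1, P2, P3) and the first-come, first-served admission order are made up, not taken from a real operating system.

```python
from collections import deque

# Illustrative sketch of the three scheduling queues; process names are made up.
job_queue = deque(["P1", "P2", "P3"])   # processes entering the system
ready_queue = deque()                   # processes ready to execute in main memory
device_queue = deque()                  # processes waiting for an I/O device

# Admit every job into main memory in first-come, first-served order.
while job_queue:
    ready_queue.append(job_queue.popleft())

# Dispatch the process at the head of the ready queue.
running = ready_queue.popleft()         # "P1" arrived first, so it runs first

# If the running process requests I/O, it moves to the device queue.
device_queue.append(running)
```

A real scheduler would, of course, pick from the ready queue according to its scheduling algorithm rather than always taking the head.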
Device Management.
Device Management is the process of managing the application and operation of the
input and output devices, such as keyboard, mouse, etc. There are different device
drivers that can be connected to the operating system to handle a specific device. The device
controller is an interface between the device and the device driver. The user applications can
access all the I/O devices using the device drivers, which are device specific codes.
File Management.
File management is used to organize important data and create a searchable database
for quick retrieval. It administers the system for effective handling of digital data.
Files are used by the operating system to provide a uniform view of data storage. All files
are mapped onto physical devices that are usually non-volatile, so data is safe in the case of
system failure.
The system can access the files in two ways, i.e., sequential access and direct access −
Sequential Access
The information in a file is processed in order using sequential access. The file's
records are accessed one after another. Most software, such as editors and
compilers, uses sequential access.
Direct Access
In direct access or relative access, the file can be accessed at random for read
and write operations. The direct access model is based on the disk model of a
file, since disks allow random access.
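The contrast between the two access methods can be sketched with an ordinary file. The record contents and the 4-byte record size below are made-up values for illustration.

```python
import os
import tempfile

# Create a throwaway file of four fixed-size 4-byte records (made-up data).
path = os.path.join(tempfile.mkdtemp(), "records.dat")
with open(path, "wb") as f:
    f.write(b"AAAABBBBCCCCDDDD")

RECORD = 4  # assumed record size in bytes

# Sequential access: read the records one after another, in order.
with open(path, "rb") as f:
    first = f.read(RECORD)    # record 0
    second = f.read(RECORD)   # record 1

# Direct (relative) access: seek straight to record 2, skipping 0 and 1.
with open(path, "rb") as f:
    f.seek(2 * RECORD)        # offset = record number * record size
    third = f.read(RECORD)    # record 2
```

Direct access works because a disk file, like the disk itself, supports jumping to any byte offset in constant time.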
Memory Management.
Memory management plays an important part in the operating system. It deals with memory and
with moving processes from disk to primary memory for execution and back again.
The activities performed by the operating system for memory management are −
The operating system assigns memory to processes as required. This can be
done using the best fit, first fit, and worst fit algorithms.
All the memory is tracked by the operating system, i.e., it knows which memory
parts are in use by processes and which are empty.
The operating system deallocates memory from processes as required. This
may happen when a process has been terminated or when it no longer needs the
memory.
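The three allocation strategies named above (first fit, best fit, worst fit) can be sketched as follows. The hole sizes and the 212 KB request are hypothetical textbook-style values; each function returns the index of the chosen free block, or None if no hole is large enough.

```python
def first_fit(holes, request):
    # Pick the first hole big enough for the request.
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    # Pick the smallest hole that still fits (least leftover space).
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    # Pick the largest hole (largest leftover space).
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free memory blocks, in KB
print(first_fit(holes, 212))        # index 1: the 500 KB hole
print(best_fit(holes, 212))         # index 3: the 300 KB hole
print(worst_fit(holes, 212))        # index 4: the 600 KB hole
```

Best fit minimizes the leftover fragment for each request, while worst fit leaves the largest leftover hole in the hope it will still be usable later.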
Applications of memory management:
depends on the CPU Scheduling Algorithms, a few of which are FCFS, SJF, etc. When programs
are in execution, the Operating System also handles deadlock, i.e., situations where processes
block one another indefinitely while waiting for resources. The Operating System is responsible
for the smooth execution of both user and system programs, and it utilizes the various resources
available for the efficient running of all types of functionalities.
Input Output Operations
Operating System manages the input-output operations and establishes communication
between the user and device drivers. Device drivers are software that is associated with
hardware that is being managed by the OS so that the sync between the devices works
properly. It also provides access to input-output devices to a program when needed.
Communication between Processes
The Operating system manages the communication between processes. Communication
between processes includes data transfer among them. If the processes are not on the same
computer but connected through a computer network, then also their communication is
managed by the Operating System itself.
File Management
The operating system helps in managing files also. If a program needs access to a file, it is
the operating system that grants access. These permissions include read-only, read-write, etc.
It also provides a platform for the user to create, and delete files. The Operating System is
responsible for making decisions regarding the storage of all types of data or files, i.e, floppy
disk/hard disk/pen drive, etc. The Operating System decides how the data should be
manipulated and stored.
Memory Management
Let’s understand memory management by the OS in a simple way. Imagine a cricket team with
a limited number of players. The team manager (OS) decides whether an upcoming player
will be in the playing 11, in the playing 15, or not included in the team, based on his performance.
In the same way, the OS first checks whether an upcoming program fulfils all the requirements
to get memory space; if all is good, it checks how much memory space will be sufficient for the
program and then loads the program into memory at a certain location. Thus, it prevents a
program from using unnecessary memory.
Process Management
Let’s understand process management in a unique way. Imagine our kitchen stove as the
CPU, where all the cooking (execution) really happens, and the chef as the OS, who uses the
kitchen stove (CPU) to cook different dishes (programs). The chef (OS) has to cook different
dishes (programs), so he ensures that no particular dish (program) takes an unnecessarily long
time and that all dishes (programs) get a chance to be cooked (executed). The chef (OS)
basically schedules time for all dishes (programs) so that the kitchen (the whole system) runs
smoothly, and thus all the different dishes (programs) are cooked (executed) efficiently.
Security and Privacy
Security: The OS keeps our computer safe from unauthorised users by adding a
security layer. Security is nothing but a layer of protection which protects the
computer from bad actors like viruses and hackers. The OS provides us
defenses like firewalls and anti-virus software and ensures good safety of the
computer and personal information.
Privacy: The OS gives us the facility to keep our essential information hidden, like
having a lock on our door, where only you can enter and others are not allowed.
Basically, it respects our secrets and provides us the facility to keep them safe.
Resource Management
System resources are shared between various processes. It is the Operating system that
manages resource sharing. It also manages the CPU time among processes using CPU
Scheduling Algorithms. It also helps in the memory management of the system. It also
controls input-output devices. The OS also ensures the proper use of all the resources
available by deciding which resource to be used by whom.
User Interface
A user interface is essential, and all operating systems provide one. Users interact with the
operating system either through a command-line interface or through a graphical user interface
(GUI). The command interpreter executes the next user-specified command.
A GUI offers the user a mouse-based window and menu system as an interface.
Networking
This service enables communication between devices on a network, such as connecting to
the internet, sending and receiving data packets, and managing network connections.
Error Handling
The Operating System also handles the error occurring in the CPU, in Input-Output devices,
etc. It also ensures that an error does not occur frequently and fixes the errors. It also
prevents the process from coming to a deadlock. It also looks for any type of error or bug
that can occur during any task. A well-secured OS sometimes also acts as a
countermeasure to prevent any sort of breach of the computer system from an
external source and to handle such breaches.
Time Management
Imagine a traffic light as the OS, which indicates to all the cars (programs) whether they should
stop (red => simple queue), get ready (yellow => ready queue), or move (green => under execution).
This light (control) changes after a certain interval of time on each side of the
road (computer system) so that the cars (programs) from all sides of the road move smoothly
without traffic.
While working with UNIX OS, several layers of this system provide interaction between
the pc hardware and the user. Following is the description of each and every layer
structure in UNIX system:
Layer-1: Hardware -
Layer-2: Kernel –
Every operating system- whether it is Windows, Mac, Linux, or Android, has a core program
called a Kernel which acts as the ‘boss’ for the whole system. It is the heart of the OS! The
Kernel is nothing but a computer program that controls everything else.
The core of the operating system that is responsible for maintaining the full functionality is
called the kernel. The kernel of UNIX runs on the particular machine hardware and
interacts with the hardware effectively.
It also works as a device manager and performs valuable functions for the processes
which require access to the peripheral devices connected to the computer. The kernel
controls these devices through device drivers.
The kernel also manages the memory. Processes are executing programs whose
owners, humans or systems, initiated their execution.
The system must provide all processes with access to an adequate amount of memory,
and a few processes require a lot of it. To make effective use of main memory and to
allocate a sufficient amount of memory to every process, the kernel uses essential
techniques like paging, swapping, and virtual storage.
The Shell is an interpreter that interprets the command submitted by the user at the
terminal and calls the program you want.
It also keeps a history of the commands you have typed in. If you need to
repeat a command you typed, use the cursor keys to scroll up and down the list, or
type history for a list of previous commands. There are various commands like cat, mv,
grep, id, wc, and many more.
SHELL is a program which provides the interface between the user and the operating system.
When the user logs in, the OS starts a shell for the user. The kernel controls all essential
computer operations, restricts hardware access, coordinates all executing utilities, and
manages resources between processes. Only through the kernel can the user access the
utilities provided by the operating system.
The C Shell –
Denoted as csh
The Bourne Shell –
Denoted as sh
The Korn Shell
It is denoted as ksh
GNU Bourne-Again Shell –
Denoted as bash
o Bourne Shell: This Shell is simply called the Shell. It was the first Shell for UNIX
OS. It is still the most widely available Shell on a UNIX system.
o C Shell: The C shell is another popular shell commonly available on a UNIX
system. The C shell was developed by the University of California at Berkeley
and removed some of the shortcomings of the Bourne shell.
o Korn Shell: This Shell was created by David Korn to address the Bourne Shell's
user-interaction issues and to deal with the shortcomings of the C shell's
scripting quirks.
It is the outermost layer that executes the given external applications. UNIX
distributions typically come with several useful applications programs as standard. For
Example: emacs editor, StarOffice, xv image viewer, g++ compiler etc.
The Application Program Interface (API) helps to connect the OS functions with user
programs. It serves as a bridge between a process and the OS, enabling user-level programs to
request OS services. System calls can only be serviced through the kernel, and any
software that consumes resources must use system calls.
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
Now, you will learn all these different types of system calls one by one.
Process Control
It is responsible for process control jobs, including creating processes, terminating
processes, loading and executing programs, waiting for events, etc.
File Management
It is responsible for file manipulation jobs, including creating files, opening files,
deleting files, closing files, etc.
Device Management
These are responsible for device manipulation, including reading from device buffers,
writing into device buffers, etc.
Information Maintenance
These are used to manage and transfer information between the OS and the user
program. Some common instances of information maintenance are getting the time or
date, setting the time or date, getting system data, setting system data, etc.
Communication
These are used for interprocess communication (IPC). Some examples of IPC are
creating, sending, receiving messages, deleting communication connections, etc.
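As a hedged illustration of interprocess communication, the sketch below uses the POSIX-style pipe and fork calls exposed through Python's os module, so it assumes a Unix-like system; the message text is made up.

```python
import os

# Create a one-way communication channel, then fork a child process.
read_fd, write_fd = os.pipe()
pid = os.fork()

if pid == 0:                        # child process: the sender
    os.close(read_fd)               # child only writes
    os.write(write_fd, b"hello from child")
    os.close(write_fd)
    os._exit(0)                     # terminate the child immediately
else:                               # parent process: the receiver
    os.close(write_fd)              # parent only reads
    message = os.read(read_fd, 1024)
    os.close(read_fd)
    os.waitpid(pid, 0)              # wait for the child to terminate
```

Note how this single sketch touches three system-call categories at once: process control (fork, waitpid), device/file manipulation (read, write, close), and communication (pipe).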
The system program is a component of the OS, and it typically lies between the user
interface (UI) and system calls. The user's view of the system is defined by the system
programs, not the system calls, because the user interacts with the system programs,
which are closer to the user interface.
1. File Management
2. Status Information
3. File Modification
4. Programming-Language support
5. Program Loading and Execution
6. Communication
Now, you will learn all these different types of system programs one by one.
File Management
Status Information
Status information is information about the input and output processes, storage, and
CPU utilization time, as well as how much memory is necessary to execute a task.
File Modification
These system programs are utilized to change files on hard drives or other storage
media. Besides modification, these programs are also utilized to search for content
within a file or to change content within a file.
Programming-Language Support
The OS includes certain standard system programs that support programming languages
such as C, Visual Basic, C++, Java, and Perl. There are various system programs,
including compilers, debuggers, assemblers, interpreters, etc.
Program Loading and Execution
After assembling and compiling, the program must be loaded into memory for
execution. A loader is the component of an operating system responsible for loading
programs and libraries, and loading is one of the essential steps in starting a program.
The system includes linkage editors, relocatable loaders, overlay loaders, and other loaders.
Communication
System program offers virtual links between processes, people, and computer systems.
Users may browse websites, log in remotely, communicate messages to other users
via their screens, send emails, and transfer files from one user to another.
There are various key differences between the System Call and System Program in the
operating system. Some main differences between the System Call and System Program are as
follows:
1. A user may request access to the operating system's services by using the system call.
In contrast, the system program fulfils a common user request and provides a
compatible environment for a program to create and run effectively.
2. The programmer creates system calls using high-level languages like C and C++.
Assembly level language is used to create the calls that directly interact with the
system's hardware. On the other hand, a programmer solely uses a high-level language
to create system programs.
3. A system call defines the interface between the user process and the services provided by
the OS. In contrast, the system program defines the operating system's user interface.
4. The system program satisfies the user program's high-level request. It converts the
request into a series of low-level requests. In contrast, the system call fulfils the low-
level requests of the user program.
5. The user process requests an OS service using a system call. In contrast, the system
program transforms the user request into a set of system calls needed to fulfil the
requirement.
6. The system call may be categorized into file manipulation, device manipulation,
communication, process control, information maintenance, and protection. On the other
hand, a System program may be categorized into file management, program loading
and execution, programming-language support, status information, communication,
and file modification.
Head-to-head comparison between the System Call and System Program in Operating System
The OS has various head-to-head comparisons between System Call and System Program.
Some comparisons of the System Call and System Program are as follows:
User View: A system call defines the interface between the user process and the services
provided by the OS, whereas a system program defines the user interface (UI) of the OS.
Action: The user process requests an OS service using a system call, whereas a system
program transforms the user request into a set of system calls needed to fulfil the
requirement.
In other words, we create computer programs as text files that, when executed, create
processes that carry out all of the tasks listed in the program.
When a program is loaded into memory, it may be divided into the four components
stack, heap, text, and data to form a process. The simplified depiction of a process in
the main memory is shown in the diagram below.
Stack
The process stack stores temporary information such as method or function
arguments, the return address, and local variables.
Heap
Text
This consists of the compiled program code, together with the current activity represented
by the value of the program counter and the contents of the processor's registers.
Data
Program
A program is a set of instructions which are executed to complete a certain task. Programs
are usually written in a programming language like C, C++, Python, Java, R, C# (C sharp), etc.
A process has its own control system known as the Process Control Block, whereas a
program does not have any control system; a program is just called when specified,
and the whole program executes when called.
A process has several stages that it passes through from beginning to end. There must
be a minimum of five states. Even though during execution, the process could be in
one of these states, the names of the states are not standardized. Each process goes
through several stages throughout its life cycle.
Blocked or Wait: Whenever a process requests access to I/O, needs input from the
user, or needs access to a critical region (the lock for which is already acquired), it
enters the blocked or wait state. The process continues to wait in the main memory
and does not require the CPU. Once the I/O operation is completed, the process goes
to the ready state.
Terminated or Completed: Process is killed as well as PCB is deleted. The
resources allocated to the process will be released or deallocated.
Suspend Ready: Process that was initially in the ready state but was
swapped out of main memory(refer to Virtual Memory topic) and placed
onto external storage by the scheduler is said to be in suspend ready state.
The process will transition back to a ready state whenever the process is
again brought onto the main memory.
Suspend wait or suspend blocked: Similar to suspend ready, but applies to a
process which was performing an I/O operation when a lack of main memory
caused it to be moved to secondary memory. When its work is finished, it may
go to suspend ready.
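The legal transitions between the states described above can be summarised as a small table. The state names follow this text; real operating systems use their own (non-standardised) names, and the transition set below is a simplification.

```python
# Map each state to the states a process may legally move to next.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running", "suspend ready"},
    "running": {"ready", "blocked", "terminated"},
    "blocked": {"ready", "suspend blocked"},
    "suspend ready": {"ready"},
    "suspend blocked": {"suspend ready"},
    "terminated": set(),
}

def can_move(src, dst):
    # A transition is valid only if it appears in the table above.
    return dst in TRANSITIONS.get(src, set())

print(can_move("running", "blocked"))   # True: the process starts an I/O wait
print(can_move("blocked", "running"))   # False: it must pass through ready first
```

The second check captures an important rule: a blocked process never goes straight back to running; when its I/O finishes, it re-enters the ready queue and waits to be dispatched again.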
execution status. Each block of memory contains information about the process state,
program counter, stack pointer, status of opened files, scheduling algorithms, etc. All
this information is required and must be saved when the process is switched from one
state to another. When the process makes a transition from one state to another, the
operating system must update information in the process’s PCB. A process control
block (PCB) contains information about the process, i.e. registers, quantum, priority,
etc. The process table is an array of PCBs, which means it logically contains a PCB for all
of the current processes in the system.
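A PCB and the process table can be sketched as a simple data structure. This is a minimal, hypothetical sketch: the fields below are only a few of those listed above, and the PID and priority values are made up.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # A heavily simplified Process Control Block.
    pid: int
    state: str = "new"           # current process state
    program_counter: int = 0     # address of the next instruction
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)
    priority: int = 0

# The process table maps each PID to its PCB, like a dictionary of
# all current processes in the system.
process_table = {}
pcb = PCB(pid=1, priority=5)
process_table[pcb.pid] = pcb

# On a state transition, the OS updates the process's PCB in place.
process_table[1].state = "ready"
```

Keeping the table keyed by PID mirrors how the OS uses pointers into the process table to reach any PCB quickly.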
Advantages:
Disadvantages:
1. Overhead: The process table and PCB can introduce overhead and reduce
system performance. The operating system must maintain the process
table and PCB for each process, which can consume system resources.
2. Complexity: The process table and PCB can increase system complexity
and make it more challenging to develop and maintain operating systems.
The need to manage and synchronize multiple processes can make it more
difficult to design and implement system features and ensure system
stability.
3. Scalability: The process table and PCB may not scale well for large-scale
systems with many processes. As the number of processes increases, the
process table and PCB can become larger and more difficult to manage
efficiently.
4. Security: The process table and PCB can introduce security risks if they
are not implemented correctly. Malicious programs can potentially access
or modify the process table and PCB to gain unauthorized access to
system resources or cause system instability.
5. Miscellaneous accounting and status data – This field includes
information about the amount of CPU used, time constraints, job or
process number, etc. The process control block stores the register contents,
also known as the execution context of the processor, captured when the
process was blocked from running. This execution context enables the
operating system to restore a process’s execution context when the process
returns to the running state. When the process makes a transition from one
state to another, the operating system updates its information in the
process’s PCB. The operating system maintains pointers to each process’s
PCB in a process table so that it can access the PCB quickly.
Operation on a Process
The execution of a process is a complex activity. It involves various operations.
Following are the operations that are performed while execution of a process:
Creation
This is the initial step of the process execution activity. Process creation means the
construction of a new process for execution. This might be performed by the system,
the user, or the old process itself. There are several events that lead to process
creation. Some of these events are the following:
1. When we start the computer, the system creates several background
processes.
2. A user may request to create a new process.
3. A process can create a new process itself while executing.
4. The batch system initiates a batch job.
Scheduling/Dispatching
The event or activity in which the state of the process is changed from ready to run.
It means the operating system puts the process from the ready state into the running
state. Dispatching is done by the operating system when the resources are free or the
process has higher priority than the ongoing process. There are various other cases in
which the process in the running state is preempted and the process in the ready state
is dispatched by the operating system.
Blocking
When a process invokes an input-output system call, that call blocks the process, and
the process is put in block mode. Block mode is basically a mode where the
process waits for input-output. Hence, on the demand of the process itself, the
operating system blocks the process and dispatches another process to the processor.
Process table is a table that contains Process ID and the reference to the
corresponding PCB in memory. We can visualize the Process table as a
dictionary containing the list of all the processes running.
8. Process Scheduling
Unlike pre-emptive scheduling, the operating system does not interrupt the
currently executing process or thread to switch to another one, unless the
currently running process blocks or voluntarily gives up the CPU. As a result,
non-preemptive scheduling algorithms may lead to longer waiting times for
some processes or threads, and may not be as efficient as pre-emptive
scheduling in certain situations.
these algorithms, the scheduler chooses the next process or thread to run
based on its arrival time, execution time, or priority, respectively.
CPU Utilization – CPU needs to be kept busy all the time. It should
not be idle. CPU utilization can range from 0 to 100 percent. CPU
utilization from 40 to 90 percent is considered as good whereas below
this is considered poor.
Throughput – This is a measure of rate of work done in a system. It
is defined as the number of processes per unit time.
Turn Around Time – It is defined as the time interval between the
submission of a process and its completion.
Waiting Time – It is defined as the sum of time periods that are spent
by the processes waiting in the queue.
Response Time – Turnaround time is sometimes considered a poor
criterion because there may exist situations where a process produces its
first results quite fast and starts computing the next results while the
previous results are still being output to the user. Therefore, response time
is considered a better criterion. The time from the submission of a request
until the first response is produced is called response time.
So, it is considered better to have a maximum CPU utilization and throughput
and a minimum turnaround time, waiting time, and response time. In some
cases, it is preferred to optimize the average value of these criteria whereas
cases may arise where optimizing the minimum and maximum values may
give better results.
9. Scheduler Types
Process Scheduling handles the selection of a process for the processor on the basis of a scheduling
algorithm and also the removal of a process from the processor. It is an important part of
multiprogramming operating system.
There are many scheduling queues that are used in process scheduling. When the processes enter
the system, they are put into the job queue. The processes that are ready to execute in the main
memory are kept in the ready queue. The processes that are waiting for the I/O device are kept in
the I/O device queue.
The different schedulers that are used for process scheduling are −
The long-term scheduler controls the degree of multiprogramming. It must select a careful mixture
of I/O bound and CPU bound processes to yield optimum system throughput. If it selects too many
CPU bound processes then the I/O devices are idle and if it selects too many I/O bound processes
then the processor has nothing to do.
The job of the long-term scheduler is very important and directly affects the system for a long time.
Short-Term Scheduler – The short-term scheduler (CPU scheduler) selects one of the processes
in the ready queue and allocates the CPU to it. It executes much more frequently than the
long-term scheduler, as a process may execute only for a few milliseconds.
The choices of the short term scheduler are very important. If it selects a process with a long burst
time, then all the processes after that will have to wait for a long time in the ready queue. This is
known as starvation and it may happen if a wrong decision is made by the short-term scheduler.
Medium-Term Scheduler – The medium-term scheduler swaps processes out of main memory and later
swaps them back in. This is helpful in reducing the degree of multiprogramming. Swapping is also
useful to improve the mix of I/O bound and CPU bound processes in the memory.
I. Non Preemptive
1. First Come First Serve
First Come First Serve, shortly known as FCFS, is the simplest CPU
scheduling algorithm. In the First Come First Serve algorithm, processes
are executed in a linear manner: whichever process enters the ready queue
first is executed first. Thus First Come First Serve follows the First In
First Out (FIFO) principle.
1. Implementation is simple.
2. It does not cause starvation; every process eventually gets the CPU.
3. It adopts a non-preemptive strategy.
4. It runs the processes in the order in which they are received.
5. Arrival time is used as the selection criterion for processes.
Example
Process ID Arrival Time Burst Time
P1 0 9
P2 1 3
P3 1 2
P4 1 4
P5 2 3
P6 3 2
S.No   Process ID   Process Name   Arrival Time   Burst Time   Completion Time   Turn Around Time
1      P1           A              0              9            9                 9
2      P2           B              1              3            12                11
3      P3           C              1              2            14                13
4      P4           D              1              4            18                17
5      P5           E              2              3            21                19
6      P6           F              3              2            23                20
Average CT = ( 9 + 12 + 14 + 18 + 21 + 23 ) / 6
Average CT = 97 / 6
Average CT = 16.16667
Average WT = ( 0 + 8 + 11 + 13 + 16 + 18 ) /6
Average WT = 66 / 6
Average WT = 11
Average TAT = ( 9 + 11 + 13 + 17 + 19 + 20 ) / 6
Average TAT = 89 / 6
Average TAT = 14.83333
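The averages above can be checked with a short simulation. This is a minimal FCFS sketch (the function and variable names are our own), run on the six processes of the example:

```python
# Minimal FCFS simulation: run each process to completion in arrival order.
def fcfs(procs):
    # procs: list of (name, arrival, burst), assumed sorted by arrival time
    t, rows = 0, []
    for name, at, bt in procs:
        t = max(t, at) + bt                                  # idle until arrival if needed, then run
        rows.append((name, at, bt, t, t - at, t - at - bt))  # ..., CT, TAT, WT
    return rows

procs = [("P1", 0, 9), ("P2", 1, 3), ("P3", 1, 2),
         ("P4", 1, 4), ("P5", 2, 3), ("P6", 3, 2)]
rows = fcfs(procs)
print(sum(r[3] for r in rows) / 6)   # average CT  = 16.166...
print(sum(r[5] for r in rows) / 6)   # average WT  = 11.0
print(sum(r[4] for r in rows) / 6)   # average TAT = 14.833...
```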
2. Shortest Job First (SJF)
In SJF scheduling, the process with the smallest burst time among the
processes in the ready queue is selected for execution next.

Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
1            1              7            8                 7                  0
2            3              3            13                10                 7
3            6              2            10                4                  2
4            7              10           31                24                 14
5            9              8            21                12                 4
Since no process arrives at time 0, there is an empty slot in the Gantt chart from
time 0 to 1 (the time at which the first process arrives).
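The table values can be reproduced with a small non-preemptive SJF simulation (a sketch in our own notation; ties would be broken by burst time only):

```python
# Non-preemptive SJF: among the arrived processes, run the shortest burst next.
def sjf(procs):
    # procs: list of (pid, arrival, burst)
    pending, t, done = list(procs), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:                                 # CPU idle until the next arrival
            t = min(p[1] for p in pending)
            continue
        pid, at, bt = min(ready, key=lambda p: p[2])  # shortest burst first
        pending.remove((pid, at, bt))
        t += bt                                       # run to completion
        done[pid] = (t, t - at, t - at - bt)          # CT, TAT, WT
    return done

procs = [(1, 1, 7), (2, 3, 3), (3, 6, 2), (4, 7, 10), (5, 9, 8)]
print(sjf(procs))  # matches the CT/TAT/WT columns of the table above
```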
Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time
Disadvantages of SJF
1. It may suffer from starvation, since long processes can be postponed indefinitely.
2. It is not practically implementable, because the exact burst time of a
process cannot be known in advance.
II. Preemptive
1. Round Robin
Round Robin CPU Scheduling is popular largely because it is inherently
preemptive: each process runs for at most one time quantum before the CPU
is handed to the next process, which makes the algorithm fair and responsive.
Important Abbreviations
1. CPU - - - > Central Processing Unit
2. AT - - - > Arrival Time
3. BT - - - > Burst Time
4. WT - - - > Waiting Time
5. TAT - - - > Turn Around Time
6. CT - - - > Completion Time
7. FIFO - - - > First In First Out
8. TQ - - - > Time Quantum
Time Sharing is the main emphasis of the algorithm. Each step of this algorithm is
carried out cyclically. The system defines a specific time slice, known as a time
quantum.
First, the processes that are ready to execute enter the ready queue. The
process at the head of the ready queue is executed for one time quantum.
If the process completes within that quantum, it leaves the system; if it
still requires more CPU time, it is placed at the back of the ready queue.
The ready queue never holds duplicate entries: a process appears in it at
most once, since holding the same process twice would introduce redundancy.
Once a process has completed its execution, it is not returned to the
ready queue.
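The procedure described above can be sketched as follows. This is an illustrative implementation with hypothetical process data, not taken from the example that follows:

```python
from collections import deque

# Round Robin: each process runs for at most one time quantum,
# then goes to the back of the ready queue if it is not finished.
def round_robin(procs, quantum):
    procs = sorted(procs, key=lambda p: p[1])       # (name, arrival, burst)
    remaining = {name: bt for name, at, bt in procs}
    t, i, q, ct = 0, 0, deque(), {}
    while remaining:
        while i < len(procs) and procs[i][1] <= t:  # admit newly arrived processes
            q.append(procs[i][0]); i += 1
        if not q:                                   # idle until the next arrival
            t = procs[i][1]; continue
        name = q.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= t:  # arrivals during this slice
            q.append(procs[i][0]); i += 1
        if remaining[name]:
            q.append(name)                          # not finished: back of the queue
        else:
            del remaining[name]; ct[name] = t       # finished: record completion time
    return ct

print(round_robin([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)], quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```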
Advantages
The Advantages of Round Robin CPU Scheduling are:
Disadvantages
The Disadvantages of Round Robin CPU Scheduling are:
1. A very small time quantum causes frequent context switches and therefore decreased CPU output.
2. The Round Robin approach spends comparatively more time on context switching.
3. The time quantum has a significant impact on its performance.
4. Priorities cannot be assigned to the processes.
Examples:
Ready Queue:
1. P1, P2, P3, P4, P5, P6, P1, P3, P4, P5, P6, P3, P4, P5
Gantt chart:
2. Shortest Remaining Time First (SRTF)
SRTF is the preemptive version of SJF: at every scheduling decision, the
process with the smallest remaining burst time is selected. Once all the
processes are available in the ready queue, no further preemption is done
and the algorithm works as SJF scheduling. The context of the process is
saved in the Process Control Block when the process is removed from
execution and the next process is scheduled. This PCB is accessed on the
next execution of the process.
Example
In this example, there are six jobs P1, P2, P3, P4, P5 and P6. Their arrival
times and burst times are given below in the table.
Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time   First Run Time
1            0              8            20                20                 12             0
2            1              4            10                9                  5              1
3            2              2            4                 2                  0              2
4            3              1            5                 2                  1              4
5            4              3            13                9                  6              10
6            5              2            7                 2                  0              5
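The completion, turnaround, and waiting times in this table can be verified with a unit-by-unit SRTF simulation (a sketch in our own notation; ties are broken by arrival time):

```python
# SRTF: at each time unit, run the process with the least remaining burst time.
def srtf(procs):
    # procs: {pid: (arrival, burst)}
    remaining = {pid: bt for pid, (at, bt) in procs.items()}
    t, ct = 0, {}
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:                     # CPU idle: advance the clock
            t += 1; continue
        # shortest remaining time first; ties broken by earlier arrival
        p = min(ready, key=lambda q: (remaining[q], procs[q][0]))
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            del remaining[p]; ct[p] = t   # record completion time
    return ct

procs = {1: (0, 8), 2: (1, 4), 3: (2, 2), 4: (3, 1), 5: (4, 3), 6: (5, 2)}
print(srtf(procs))  # {3: 4, 4: 5, 6: 7, 2: 10, 5: 13, 1: 20}
```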
Gantt Chart –
3. Priority Scheduling
In priority scheduling, the CPU is allocated to the process with the
highest priority among the ready processes.
Example

Process   Arrival Time   Priority   Burst Time
P1        0 ms           3          3 ms
P2        1 ms           2          4 ms
P3        2 ms           4          6 ms
P4        3 ms           6          4 ms
P5        5 ms           10         2 ms
After calculating the above fields, the final table looks like
IV Multilevel Queue (MLQ) CPU Scheduling
In MLQ scheduling, the ready queue is partitioned into several separate
queues and each process is permanently assigned to one of them based on
its type. Interactive processes like user input/output may have a higher
priority than batch processes like file backups.
Preemption: Preemption is allowed in MLQ scheduling, which means a
higher priority process can preempt a lower priority process, and the CPU
is allocated to the higher priority process. This helps ensure that high-
priority processes are executed in a timely manner.
Scheduling algorithm: Different scheduling algorithms can be used for
each queue, depending on the requirements of the processes in that queue.
For example, Round Robin scheduling may be used for interactive
processes, while First Come First Serve scheduling may be used for batch
processes.
Feedback mechanism: A feedback mechanism can be implemented to
adjust the priority of a process based on its behavior over time. For
example, if an interactive process has been waiting in a lower-priority
queue for a long time, its priority may be increased to ensure it is executed
in a timely manner.
Advantages of Multilevel Queue CPU Scheduling:
Low scheduling overhead: Since processes are permanently assigned to
their respective queues, the overhead of scheduling is low, as the scheduler
only needs to select the appropriate queue for execution.
Efficient allocation of CPU time: The scheduling algorithm ensures that
processes with higher priority levels are executed in a timely manner,
while still allowing lower priority processes to execute when the CPU is
idle. This ensures optimal utilization of CPU time.
Fairness: The scheduling algorithm provides a fair allocation of CPU time
to different types of processes, based on their priority and requirements.
Customizable: The scheduling algorithm can be customized to meet the
specific requirements of different types of processes. Different scheduling
algorithms can be used for each queue, depending on the requirements of
the processes in that queue.
Prioritization: Priorities are assigned to processes based on their type,
characteristics, and importance, which ensures that important processes
are executed in a timely manner.
Preemption: Preemption is allowed in Multilevel Queue Scheduling,
which means that higher-priority processes can preempt lower-priority
processes, and the CPU is allocated to the higher-priority process. This
helps ensure that high-priority processes are executed in a timely manner.
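To make the idea concrete, here is a minimal two-queue MLQ sketch with strict priority: the higher queue uses Round Robin and the lower queue uses FCFS. The process names, bursts, and queue assignments are hypothetical, and for simplicity all processes are present at time 0 and a running batch job is not preempted mid-burst:

```python
from collections import deque

# Two fixed queues with strict priority:
# queue 0 (interactive) uses Round Robin, queue 1 (batch) uses FCFS.
interactive = deque([("edit", 3), ("shell", 2)])   # (name, burst)
batch = deque([("backup", 5)])
quantum, t, order = 2, 0, []

while interactive or batch:
    if interactive:                    # the higher-priority queue always goes first
        name, left = interactive.popleft()
        run = min(quantum, left)
        t += run
        order.append((name, t))        # record the end of this time slice
        if left - run:
            interactive.append((name, left - run))
    else:                              # batch runs only when queue 0 is empty
        name, left = batch.popleft()
        t += left
        order.append((name, t))

print(order)  # [('edit', 2), ('shell', 4), ('edit', 5), ('backup', 10)]
```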
For example, queue 1 takes 50 percent of the CPU time, queue 2 takes 30
percent, and queue 3 gets 20 percent of the CPU time.
V Multilevel Feedback Queue Scheduling (MLFQ) CPU
Scheduling
Multilevel Feedback Queue Scheduling (MLFQ) CPU Scheduling is
like Multilevel Queue (MLQ) Scheduling, but here processes can move between
the queues. This makes it much more flexible and efficient than multilevel
queue scheduling.
Characteristics of Multilevel Feedback Queue Scheduling:
In a plain multilevel queue scheduling algorithm, processes are
permanently assigned to a queue on entry to the system, which keeps
scheduling overhead low but is inflexible. In a multilevel feedback
queue, by contrast, processes are allowed to move between queues,
trading a little overhead for much better responsiveness.
Now let us suppose that queues 1 and 2 follow round robin with time quanta 4
and 8 respectively, and queue 3 follows FCFS.
The implementation of MLFQ is given below –
When a process enters the system, the operating system can insert it
into any of the above three queues depending upon its priority. For
example, if it is some background process, then the operating system
would not give it to the higher-priority queues 1 and 2; it will
directly assign it to the lower-priority queue 3.
Let’s say our current process for consideration is of significant priority
so it will be given queue 1.
In queue 1 a process executes for 4 units. If it completes within these
4 units, or gives up the CPU for an I/O operation within them, its
priority does not change; when it re-enters the ready queue it starts
again in queue 1.
If a process in queue 1 does not complete in 4 units, its priority is
reduced and it is shifted to queue 2.
The above two points also apply to queue 2, but with a time quantum of
8 units. In general, if a process does not complete within its time
quantum it is shifted to the lower-priority queue.
In the last queue, processes are scheduled in an FCFS manner.
A process in a lower-priority queue can execute only when all
higher-priority queues are empty, and a process running in a
lower-priority queue is preempted by a process arriving in a
higher-priority queue.
The above implementation may differ; for example, the last queue can also
follow Round Robin scheduling.
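The three-queue scheme just described (RR with quantum 4, then RR with quantum 8, then FCFS, with demotion on quantum expiry) can be sketched as follows. The process data is hypothetical, and for simplicity all processes arrive at time 0, so arrival-driven preemption of the lower queues is not modeled:

```python
from collections import deque

# MLFQ with three queues: RR(q=4), RR(q=8), FCFS.
# A process that uses up its full quantum is demoted one level.
def mlfq(procs):
    # procs: list of (name, burst), all assumed to arrive at time 0
    quanta = [4, 8, None]                  # None = run to completion (FCFS)
    queues = [deque(procs), deque(), deque()]
    t, ct = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name, left = queues[level].popleft()
        run = left if quanta[level] is None else min(quanta[level], left)
        t += run
        if left - run:                     # quantum expired: demote one level
            queues[level + 1].append((name, left - run))
        else:
            ct[name] = t                   # finished: record completion time
    return ct

print(mlfq([("A", 3), ("B", 20), ("C", 6)]))  # {'A': 3, 'C': 21, 'B': 29}
```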