Introduction
o OS Tutorial
o Types of OS
Process Management
o Process Management in OS
o Attributes of a Process
o Process States
o Process Schedulers
o Process Queues
o Times Related to Process
o CPU Scheduling
o Scheduling Algorithms
o FCFS Scheduling
o SJF Scheduling
o Burst Time Prediction
o SRTF Scheduling
o SRTF GATE 2011 Example
o Round Robin Scheduling
o RR Scheduling Example
o HRRN Scheduling
o HRRN Example
o Priority Scheduling
o Non Preemptive Priority
o Preemptive Priority
o SRTF: IO Bound Processes
Synchronization
o Introduction
o Critical Section Problem
o Lock Variable Mechanism
o TSL Mechanism
o Priority Inversion in TSL
o Turn Variable
o Interested Variable
o Peterson Solution
o Without Busy Waiting
o Sleep and Wake
o Semaphore Introduction
o Counting Semaphore
o Problem on Counting Semaphore
o Binary Semaphore
Deadlocks
o Introduction
o Strategies for Handling
o Deadlock Prevention
o Deadlock Avoidance
o Resource Allocation Graph
o Detection using RAG
o Detection and Recovery
Memory Management
o Introduction
o Fixed Partitioning
o Dynamic Partitioning
o Compaction
o Bit Map for Dynamic Partitioning
o Linked List for Dynamic Partitioning
o Partitioning Algorithms
o GATE on Best Fit & First Fit
o Need for Paging
o Paging with Example
o Binary Addresses
o Physical & Logical Address
o Page Table
o Mapping from Page Table
o Page Table Entry
o Page Table Size
o Finding Optimal Page Size
o Virtual Memory
o Translation Look-aside Buffer
o GATE Question on TLB
o Demand Paging
o Inverted Page Table
o Page Replacement
o GATE on LRU and FIFO
o Numerical on LRU, FIFO
o Belady's Anomaly
o Segmentation
o Paging vs Segmentation
o Segmented Paging
File Management
o Attributes of the File
o Operations on the File
o File Access Methods
o Directory Structure
o Single Level Directory
o Two Level Directory
o Tree Structured Directory
o Acyclic Graph Directories
o File System
o File System Structure
o Master Boot Record
o On-Disk Data Structures
o In-Memory Data Structures
o Directory Implementation
o Allocation Methods
o Contiguous Allocation
o Linked List Allocation
o File Allocation Table
o Indexed Allocation
o Linked Index Allocation
o Inode
o Free Space Management
o Disk Scheduling
o FCFS Scheduling
o SSTF Scheduling
o SCAN and C-SCAN
o Look and C-Look
o Numerical on SSTF
o Numerical on Disk
Definition:
An Operating System can be defined as an interface between user and
hardware.
Structure of a Computer System:
A Computer System consists of:
o Users (people who are using the computer)
o Application Programs (Compilers, Databases, Games, Video player,
Browsers, etc.)
o System Programs (Shells, Editors, Compilers, etc.)
o Operating System (a special program which acts as an interface
between user and hardware)
o Hardware (CPU, disks, memory, etc.)
Types of Operating Systems (OS):
An operating system is a well-organized collection of programs that
manages the computer hardware. It is a type of system software that is
responsible for the smooth functioning of the computer system.
1. Batch Operating System
In this technique, similar types of jobs were batched together and
executed one after another. People used to have a single computer,
which was called a mainframe.
In a Batch operating system, access is given to more than one person;
they submit their respective jobs to the system for execution.
The system puts all of the jobs in a queue on a first come, first served
basis and then executes the jobs one by one. The users collect their
respective outputs when all the jobs have been executed.
The purpose of this operating system was mainly to transfer control
from one job to another as soon as a job was completed. It contained
a small set of programs called the resident monitor that always
resided in one part of the main memory. The remaining part was used
for servicing jobs.
Advantages of Batch OS:
o The use of a resident monitor improves computer efficiency as it
eliminates the CPU idle time between two jobs.
Disadvantages of Batch OS
o Starvation: Batch processing suffers from starvation.
For example, suppose there are five jobs J1, J2, J3, J4, and J5 present
in the batch. If the execution time of J1 is very high, then the other
four jobs will never be executed, or they will have to wait for a very
long time. Hence the other processes get starved.
o Not Interactive: Batch processing is not suitable for jobs that depend
on the user's input. If a job requires the input of two numbers from
the console, it will never get them in the batch processing scenario,
since the user is not present at the time of execution.
2. Multiprogramming Operating System
Multiprogramming is an extension of batch processing in which the CPU
is kept busy as much as possible. Each process needs two types of
system time: CPU time and IO time.
In a multiprogramming environment, when a process does its I/O, the
CPU can start executing other processes. Therefore, multiprogramming
improves the efficiency of the system.
Advantages of Multiprogramming OS
o Throughput increases, since the CPU almost always has a program
to execute.
o Response time can also be reduced.
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various
systems resources are used efficiently, but they do not provide
any user interaction with the computer system.
3. Multiprocessing Operating System
In Multiprocessing, parallel computing is achieved. More than one
processor is present in the system, and they can execute more than
one process simultaneously, which increases the throughput of the
system.
Advantages of Multiprocessing operating system:
o Increased reliability: Due to the multiprocessing system,
processing tasks can be distributed among several processors. This
increases reliability as if one processor fails, the task can be given to
another processor for completion.
o Increased throughput: With several processors, more work can be
done in less time.
Disadvantages of Multiprocessing operating System
o Multiprocessing operating system is more complex and
sophisticated as it takes care of multiple CPUs simultaneously.
4. Multitasking Operating System
The multitasking operating system is a logical extension of a
multiprogramming system that enables the execution of multiple
programs simultaneously. It allows a user to perform more than one
computer task at the same time.
Advantages of Multitasking operating system
o This operating system is more suited to supporting multiple users
simultaneously.
o The multitasking operating systems have well-defined memory
management.
Disadvantages of Multitasking operating system
o Multiple processors are kept busy at the same time to complete
tasks in a multitasking environment, so the CPU generates more
heat.
5. Network Operating System
An operating system which includes software and associated protocols
to communicate with other computers via a network conveniently and
cost-effectively is called a Network Operating System.
Advantages of Network Operating System
o In this type of operating system, network traffic reduces due to the
division between clients and the server.
o This type of system is less expensive to set up and maintain.
Disadvantages of Network Operating System
o In this type of operating system, the failure of any node in a system
affects the whole system.
o Security and performance are important issues. So trained network
administrators are required for network administration.
6. Real Time Operating System
In Real-Time Systems, each job carries a certain deadline within which
it is supposed to be completed; otherwise there will be a huge loss, or
even if the result is produced, it will be completely useless.
Applications of real-time systems exist in military use cases: if a
missile is to be launched, it must be launched with a certain precision.
Advantages of Real-time operating system:
o It is easy to lay out, develop, and execute real-time applications
under a real-time operating system.
o A real-time operating system achieves maximum utilization of
devices and the system.
Disadvantages of Real-time operating system:
o Real-time operating systems are very costly to develop.
o Real-time operating systems are very complex and can consume
critical CPU cycles.
7. Time-Sharing Operating System
In a Time Sharing operating system, computer resources are allocated
in a time-dependent fashion to several programs simultaneously. Thus,
it helps to provide a large number of users direct access to the main
computer. It is a logical extension of multiprogramming. In time-sharing,
the CPU is switched among multiple programs given by different users
on a scheduled basis.
A time-sharing operating system allows many users to be served
simultaneously, so sophisticated CPU scheduling schemes and
Input/output management are required.
Time-sharing operating systems are very difficult and expensive to build.
Advantages of Time Sharing Operating System
o The time-sharing operating system provides effective utilization and
sharing of resources.
o This system reduces CPU idle time and response time.
Disadvantages of Time Sharing Operating System
o Data transmission rates are very high in comparison to other
methods.
o Security and integrity of user programs loaded in memory and data
need to be maintained as many users access the system at the
same time.
8. Distributed Operating System
The Distributed Operating system is not installed on a single machine, it is
divided into parts, and these parts are loaded on different machines. A
part of the distributed Operating system is installed on each machine to
make their communication possible.
Distributed operating systems are much more complex, large, and
sophisticated than network operating systems because they also have
to take care of varying networking protocols.
Advantages of Distributed Operating System
o The distributed operating system provides sharing of resources.
o This type of system is fault-tolerant.
Disadvantages of Distributed Operating System
o Protocol overhead can dominate computation cost.
Process Management in OS
Process Management:
A program does nothing unless its instructions are executed by a CPU. A
program in execution is called a process. In order to accomplish its
task, a process needs computer resources.
There may exist more than one process in the system that requires the
same resource at the same time. Therefore, the operating system has to
manage all the processes and resources in a convenient and efficient
way.
Some resources may need to be used by only one process at a time to
maintain consistency; otherwise the system can become inconsistent
and deadlock may occur.
The operating system is responsible for the following activities in
connection with Process Management
1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization.
5. Providing mechanisms for process communication.
Attributes of a Process:
The Attributes of the process are used by the Operating System to create
the process control block (PCB) for each of them. This is also called
context of the process. Attributes which are stored in the PCB are
described below.
1. Process ID: When a process is created, a unique id is assigned to the
process which is used for unique identification of the process in the
system.
2. Program counter: The program counter stores the address of the next
instruction to be executed, i.e., the point at which the process was
suspended. The CPU uses this address when the execution of this process
is resumed.
3. Process State: The Process, from its creation to the completion, goes
through various states which are new, ready, running and waiting. We will
discuss about them later in detail.
4. Priority: Every process has its own priority. The process with the
highest priority among the processes gets the CPU first. This is also stored
on the process control block.
5. General Purpose Registers: Every process has its own set of
registers which are used to hold the data which is generated during the
execution of the process.
6. List of open files: During the Execution, Every process uses some
files which need to be present in the main memory. OS also maintains a
list of open files in the PCB.
7. List of open devices: The OS also maintains the list of all open devices
which are used during the execution of the process.
Process States:
The process, from its creation to completion, passes through various
states. The minimum number of states is five.
The names of the states are not standardized although the process may
be in one of the following states during execution.
1. New: A program which is going to be picked up by the OS into the
main memory is called a new process.
2. Ready: Whenever a process is created, it directly enters the ready
state, in which it waits for the CPU to be assigned. The OS picks new
processes from the secondary memory (e.g., SSD) and puts them in the
main memory (RAM).
The processes which are ready for the execution and reside in the main
memory are called ready state processes. There can be many processes
present in the ready state.
3. Running: One of the processes from the ready state will be chosen by
the OS depending upon the scheduling algorithm. Hence, if we have only
one CPU in our system, the number of running processes for a particular
time will always be one. If we have n processors in the system then we
can have n processes running simultaneously.
4. Block or wait: From the Running state, a process can make the
transition to the block or wait state depending upon the scheduling
algorithm or the intrinsic behavior of the process.
When a process waits for a certain resource to be assigned or for input
from the user, the OS moves this process to the block or wait state and
assigns the CPU to the other processes.
5. Completion or termination: When a process finishes its execution, it
comes to the termination state. The entire context of the process (its
Process Control Block) is deleted, and the process is terminated by the
operating system.
6. Suspend ready: A process in the ready state, which is moved to
secondary memory from the main memory due to lack of the resources
(mainly primary memory) is called in the suspend ready state.
If the main memory is full and a higher priority process comes for
execution, then the OS has to make room for the process in the main
memory by moving a lower priority process out into the secondary
memory. Suspend-ready processes remain in the secondary memory
until the main memory becomes available.
7. Suspend wait: Instead of removing a process from the ready queue,
it is better to remove a blocked process which is waiting for some
resource in the main memory. Since it is already waiting for a resource
to become available, it is better if it waits in the secondary memory and
makes room for a higher priority process. These processes complete
their execution once the main memory becomes available and their
wait is finished.
Operations on the Process
1. Creation: Once the process is created, it will be ready and come into
the ready queue (main memory) and will be ready for the execution.
2. Scheduling: Out of the many processes present in the ready queue,
the operating system chooses one process and starts executing it.
Selecting the process which is to be executed next is known as
scheduling.
3. Execution: Once the process is scheduled for execution, the
processor starts executing it. The process may move to the blocked or
wait state during execution; in that case the processor starts executing
other processes.
4. Deletion/killing: Once the purpose of the process gets over then the
OS will kill the process. The Context of the process (PCB) will be deleted
and the process gets terminated by the Operating system.
Process Scheduling:
Operating system uses various schedulers for the process scheduling
described below.
1. Long term scheduler: Long term scheduler is also known as job
scheduler. It chooses the processes from the pool (secondary memory)
and keeps them in the ready queue maintained in the primary memory.
Long Term scheduler mainly controls the degree of
Multiprogramming. The purpose of long term scheduler is to choose a
perfect mix of IO bound and CPU bound processes among the jobs present
in the pool.
If the job scheduler chooses more IO bound processes, then all of the jobs
may reside in the blocked state all the time and the CPU will remain
idle most of the time. This will reduce the degree of Multiprogramming.
Therefore, the Job of long term scheduler is very critical and may affect
the system for a very long time.
2. Short term scheduler: The short term scheduler is also known as the
CPU scheduler. It selects one of the jobs from the ready queue and
dispatches it to the CPU for execution.
A scheduling algorithm is used to select which job is going to be
dispatched for the execution. The Job of the short term scheduler can be
very critical in the sense that if it selects job whose CPU burst time is
very high then all the jobs after that, will have to wait in the ready queue
for a very long time.
This problem is called starvation which may arise if the short term
scheduler makes some mistakes while selecting the job.
3. Medium term scheduler: The medium term scheduler takes care of
the swapped-out processes. If a running process needs some IO time
for its completion, its state must be changed from running to waiting.
Medium term scheduler is used for this purpose. It removes the process
from the running state to make room for the other processes. Such
processes are the swapped out processes and this procedure is called
swapping. The medium term scheduler is responsible for suspending
and resuming the processes.
It reduces the degree of multiprogramming. The swapping is
necessary to have a perfect mix of processes in the ready queue.
Process Queues
The Operating system manages various types of queues for each of the
process states. The PCB related to the process is also stored in the queue
of the same state. If the Process is moved from one state to another state,
then its PCB is also unlinked from the corresponding queue and added to
the other state queue in which the transition is made.
There are the following queues maintained by the Operating system.
1. Job Queue: In the beginning, all the processes get stored in the job
queue. It is maintained in the secondary memory. The long term
scheduler (job scheduler) picks some of the jobs and puts them in the
primary memory.
2. Ready Queue: The ready queue is maintained in primary memory.
The short term scheduler picks a job from the ready queue and
dispatches it to the CPU for execution.
3. Waiting Queue: When the process needs some IO operation in order
to complete its execution, OS changes the state of the process from
running to waiting. The context (PCB) associated with the process gets
stored on the waiting queue which will be used by the Processor when the
process finishes the IO.
Various Times related to the Process
1. Arrival Time: The time at which the process enters into the ready
queue is called the arrival time.
2. Burst Time: The total amount of time required by the CPU to execute
the whole process is called the Burst Time. This does not include the
waiting time.
3. Completion Time: The Time at which the process enters into the
completion state or the time at which the process completes its execution,
is called completion time.
4. Turnaround time: The total amount of time spent by the process from
its arrival to its completion, is called Turnaround time.
5. Waiting Time: The Total amount of time for which the process waits
for the CPU to be assigned is called waiting time.
6. Response Time: The difference between the arrival time and the time
at which the process first gets the CPU is called Response Time.
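The relationships between these times can be checked with a tiny sketch; the numeric values below are made up for illustration, not taken from any example in the text.

```python
# Derived times for a single process, using the definitions above.
arrival_time = 2      # when the process enters the ready queue
burst_time = 5        # total CPU time the process needs
completion_time = 12  # when the process finishes execution

turnaround_time = completion_time - arrival_time  # arrival to completion
waiting_time = turnaround_time - burst_time       # time spent waiting for the CPU

print(turnaround_time)  # 10
print(waiting_time)     # 5
```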
CPU Scheduling
In uniprogramming systems like MS DOS, when a process waits for any
I/O operation to be done, the CPU remains idle. This is an overhead
since it wastes time and causes the problem of starvation. However, in
multiprogramming systems, the CPU doesn't remain idle during the
waiting time of a process; it starts executing other processes.
Operating System has to define which process the CPU will be given.
In Multiprogramming systems, the Operating system schedules the
processes on the CPU to have the maximum utilization of it and this
procedure is called CPU scheduling. The Operating System uses various
scheduling algorithm to schedule the processes.
This is a task of the short term scheduler to schedule the CPU for the
number of processes present in the Job Pool. Whenever the running
process requests some IO operation then the short term scheduler saves
the current context of the process (also called PCB) and changes its state
from running to waiting. During the time, process is in waiting state; the
Short term scheduler picks another process from the ready queue and
assigns the CPU to this process. This procedure is called context
switching.
What is saved in the Process Control Block?
The Operating system maintains a process control block during the
lifetime of the process. The Process control block is deleted when the
process is terminated or killed. The following information is saved in the
process control block and changes with the state of the process.
Why do we need Scheduling?
In multiprogramming, if the long term scheduler picks more I/O bound
processes, then most of the time the CPU remains idle. The task of the
operating system is to optimize the utilization of resources.
If most of the running processes change their state from running to
waiting, then there may always be a possibility of deadlock in the
system. Hence, to reduce this overhead, the OS needs to schedule the
jobs to get the optimal utilization of the CPU and to avoid the possibility
of deadlock.
Scheduling Algorithms in OS (Operating System)
There are various algorithms which are used by the Operating System to
schedule the processes on the processor in an efficient way.
The Purpose of a Scheduling algorithm
1. Maximum CPU utilization
2. Fair allocation of CPU
3. Maximum throughput
4. Minimum turnaround time
5. Minimum waiting time
6. Minimum response time
There are the following algorithms which can be used to schedule the
jobs.
1. First Come First Serve: It is the simplest algorithm to implement.
The process with the minimal arrival time will get the CPU first. The
lesser the arrival time, the sooner the process gets the CPU. It is a
non-preemptive type of scheduling.
2. Round Robin: In the Round Robin scheduling algorithm, the OS
defines a time quantum (slice). All the processes will get executed in the
cyclic way. Each of the process will get the CPU for a small amount of time
(called time quantum) and then get back to the ready queue to wait for its
next turn. It is a preemptive type of scheduling(interrupt or suspend a
currently running process).
3. Shortest Job First: The job with the shortest burst time will get the
CPU first. The lesser the burst time, the sooner the process gets the
CPU. It is a non-preemptive type of scheduling.
4. Shortest remaining time first: It is the preemptive form of SJF. In
this algorithm, the OS schedules the Job according to the remaining time
of the execution.
5. Priority based scheduling: In this algorithm, a priority is assigned to
each of the processes. The higher the priority, the sooner the process
gets the CPU. If the priority of two processes is the same, then they will
be scheduled according to their arrival time.
6. Highest Response Ratio Next: In this scheduling Algorithm, the
process with highest response ratio will be scheduled next. This reduces
the starvation in the system.
First Come First Serve CPU Process Scheduling in Operating
Systems
In the First Come First Serve Algorithm, we allow the processes to
execute in a linear manner.
This means that whichever process enters the ready queue first is
executed first. This shows that the First Come First Serve Algorithm
follows the First In First Out (FIFO) principle.
The First Come First Serve Algorithm can be executed in Pre Emptive and
Non Pre Emptive manner.
Convoy Effect In First Come First Serve (FCFS)
When the first process in the queue is large, i.e., its completion time is
too high, all the shorter processes behind it are stuck waiting; this is
called the Convoy Effect in the First Come First Serve Algorithm.
Characteristics of FCFS CPU Process Scheduling
o Implementation is simple.
o It does not involve complex scheduling logic.
o It can be used in both non-preemptive and preemptive modes.
o It runs the procedures in the order in which they are received.
o Arrival time is used as the selection criterion for procedures.
Advantages of FCFS CPU Process Scheduling
o In order to allocate processes, it uses a First In First Out queue.
o The FCFS CPU Scheduling Process is straightforward and easy to
implement.
o In FCFS scheduling, there is no chance of process starvation, since
every process eventually reaches the head of the queue.
o As there is no consideration of process priority, it is an equitable
algorithm.
Disadvantages of FCFS CPU Process Scheduling
o The FCFS CPU Scheduling Algorithm has long waiting times.
o FCFS CPU Scheduling favors CPU-bound processes over Input/Output
bound ones.
o In FCFS there is a chance of occurrence of the Convoy Effect.
o Because FCFS is so straightforward, it often isn't very effective.
Extended waiting periods go hand in hand with this: all other
processes are left idle while the CPU is busy with one time-consuming
process.
Problems in the First Come First Serve CPU Scheduling Algorithm
Example
S. No   Process ID   Process Name   Arrival Time   Burst Time
1       P1           A              0              9
2       P2           B              1              3
3       P3           C              1              2
4       P4           D              1              4
5       P5           E              2              3
6       P6           F              3              2
Non Pre Emptive Approach
Now, let us solve this problem with the help of the Scheduling Algorithm
named First Come First Serve in a Non Preemptive Approach.
Gantt chart for the above Example 1 is: P1 (0-9), P2 (9-12), P3 (12-14),
P4 (14-18), P5 (18-21), P6 (21-23).
Turn Around Time = Completion Time - Arrival Time
Waiting Time = Turn Around Time - Burst Time
Solution:
S. No   Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
1       P1           0              9            9                 9                  0
2       P2           1              3            12                11                 8
3       P3           1              2            14                13                 11
4       P4           1              4            18                17                 13
5       P5           2              3            21                19                 16
6       P6           3              2            23                20                 18
The Average Completion Time is:
Average CT = ( 9 + 12 + 14 + 18 + 21 + 23 ) / 6
Average CT = 97 / 6
Average CT = 16.16667
The Average Waiting Time is:
Average WT = ( 0 + 8 + 11 + 13 + 16 + 18 ) /6
Average WT = 66 / 6
Average WT = 11
The Average Turn Around Time is:
Average TAT = ( 9 + 11 + 13 + 17 + 19 +20 ) / 6
Average TAT = 89 / 6
Average TAT = 14.83333
This is how the FCFS is solved in Non Pre Emptive Approach.
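The non-preemptive table above can be reproduced with a short simulation sketch: FCFS reduces to sorting by arrival time and running each job to completion.

```python
# Non-preemptive FCFS: sort by arrival time and run each job to completion.
# Process data taken from the example table above: (pid, arrival, burst).
processes = [("P1", 0, 9), ("P2", 1, 3), ("P3", 1, 2),
             ("P4", 1, 4), ("P5", 2, 3), ("P6", 3, 2)]

def fcfs(procs):
    time = 0
    results = {}
    for pid, arrival, burst in sorted(procs, key=lambda p: p[1]):
        time = max(time, arrival) + burst  # CPU may idle until arrival
        turnaround = time - arrival        # TAT = CT - AT
        results[pid] = (time, turnaround, turnaround - burst)  # (CT, TAT, WT)
    return results

res = fcfs(processes)
print(res["P1"])  # (9, 9, 0)
avg_wt = sum(r[2] for r in res.values()) / len(res)
print(avg_wt)     # 11.0
```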
Pre Emptive Approach
Now, let us solve this problem with the help of the Scheduling Algorithm
named First Come First Serve in a Pre Emptive Approach.
In the Pre Emptive Approach, we search for the best process available at
each point in time: the process with the shortest remaining burst time is
scheduled, so this variant behaves like SRTF.
Gantt chart for the above Example 1 is: P1 (0-1), P3 (1-3), P6 (3-5),
P2 (5-8), P5 (8-11), P4 (11-15), P1 (15-23).
S. No   Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
1       P1           0              9            23                23                 14
2       P2           1              3            8                 7                  4
3       P3           1              2            3                 2                  0
4       P4           1              4            15                14                 10
5       P5           2              3            11                9                  6
6       P6           3              2            5                 2                  0
Semaphores in OS (Operating System)
To get rid of the problem of wasting the wake-up signals, Dijkstra proposed
an approach which involves storing all the wake-up calls. Dijkstra states
that, instead of giving the wake-up calls directly to the consumer, the
producer can store the wake-up call in a variable. Any of the consumers
can read it whenever it needs to do so.
Semaphore cannot be implemented in the user mode because race
condition may always arise when two or more processes try to access the
variable simultaneously. It always needs support from the operating
system to be implemented.
According to the demand of the situation, Semaphore can be divided into
two categories.
1. Counting Semaphore
2. Binary Semaphore or Mutex
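As a minimal sketch of how a counting semaphore stores wake-up calls, Python's standard threading.Semaphore can play the role of the variable described above; the buffer and thread structure here are illustrative assumptions, not a full producer-consumer solution.

```python
import threading

items = threading.Semaphore(0)  # counting semaphore: number of stored wake-up calls
buffer = []
lock = threading.Lock()         # protects the shared buffer itself

def producer():
    for i in range(3):
        with lock:
            buffer.append(i)
        items.release()         # store one wake-up call (count += 1)

consumed = []
def consumer():
    for _ in range(3):
        items.acquire()         # sleep until a wake-up call is stored (count -= 1)
        with lock:
            consumed.append(buffer.pop(0))

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t2.start(); t1.start()          # consumer may start first; it blocks on acquire
t1.join(); t2.join()
print(consumed)  # [0, 1, 2]
```

Note that the semaphore itself must be provided by the operating system (or, here, the runtime), as the text points out: a plain user-mode variable would be subject to race conditions.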
Shortest Job First (SJF) Scheduling
Till now, we were scheduling the processes according to their arrival
time (in FCFS scheduling). However, the SJF scheduling algorithm
schedules the processes according to their burst time.
In SJF scheduling, the process with the lowest burst time, among the list of
available processes in the ready queue, is going to be scheduled next.
However, it is very difficult to predict the burst time needed for a process
hence this algorithm is very difficult to implement in the system.
Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time
Disadvantages of SJF
1. May suffer from the problem of starvation
2. It is not implementable because the exact Burst time for a process
can't be known in advance.
Example
In the following example, there are five jobs named as P1, P2, P3, P4 and
P5. Their arrival time and burst time are given in the table below.
PID   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
P1    1              7            8                 7                  0
P2    3              3            13                10                 7
P3    6              2            10                4                  2
P4    7              10           31                24                 14
P5    9              8            21                12                 4
Since no process arrives at time 0, there will be an empty slot in the
Gantt chart from time 0 to 1 (the time at which the first process arrives).
According to the algorithm, the OS schedules the process having the
lowest burst time among the available processes in the ready queue.
Till now, we have only one process in the ready queue, hence the
scheduler will schedule it on the processor no matter what its burst
time is.
P1 will be executed for 7 units, until time 8. By then, three more
processes have arrived in the ready queue, so the scheduler will choose
the process with the lowest burst time among them.
Among the processes given in the table, P3 will be executed next since
it has the lowest burst time among all the available processes.
So that's how the procedure will go on in shortest job first
(SJF) scheduling algorithm.
Avg Waiting Time = 27/5 = 5.4
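The schedule above can be reproduced with a short non-preemptive SJF simulation sketch; the process data is taken from the example table.

```python
# Non-preemptive SJF: at each scheduling point, pick the arrived process
# with the smallest burst time. Data from the example above: pid -> (AT, BT).
procs = {"P1": (1, 7), "P2": (3, 3), "P3": (6, 2), "P4": (7, 10), "P5": (9, 8)}

def sjf(procs):
    remaining = dict(procs)
    time, completion = 0, {}
    while remaining:
        ready = [p for p, (at, _) in remaining.items() if at <= time]
        if not ready:                    # CPU idles until the next arrival
            time = min(at for at, _ in remaining.values())
            continue
        pid = min(ready, key=lambda p: remaining[p][1])  # shortest burst wins
        at, bt = remaining.pop(pid)
        time += bt                       # run the chosen job to completion
        completion[pid] = time
    return completion

ct = sjf(procs)
print(ct)  # {'P1': 8, 'P3': 10, 'P2': 13, 'P5': 21, 'P4': 31}
waits = {p: ct[p] - at - bt for p, (at, bt) in procs.items()}
print(sum(waits.values()) / 5)  # 5.4
```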
Shortest Remaining Time First (SRTF) Scheduling Algorithm
This Algorithm is the preemptive version of SJF scheduling. In SRTF,
the execution of the process can be stopped after certain amount of time.
At the arrival of every process, the short term scheduler schedules the
process with the least remaining burst time among the list of available
processes and the running process.
Once all the processes are available in the ready queue, No preemption
will be done and the algorithm will work as SJF scheduling. The context
of the process is saved in the Process Control Block when the process is
removed from the execution and the next process is scheduled. This PCB
is accessed on the next execution of this process.
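A minimal SRTF sketch can be built as a unit-time simulation; the process data below reuses the six processes from the FCFS example earlier, purely for illustration.

```python
# Shortest Remaining Time First: at every time unit, run the arrived
# process with the least remaining burst. Data: (pid, arrival, burst).
procs = [("P1", 0, 9), ("P2", 1, 3), ("P3", 1, 2),
         ("P4", 1, 4), ("P5", 2, 3), ("P6", 3, 2)]

def srtf(procs):
    remaining = {pid: bt for pid, _, bt in procs}
    arrival = {pid: at for pid, at, _ in procs}
    completion, time = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= time]
        if not ready:                 # no process has arrived yet: CPU idles
            time += 1
            continue
        pid = min(ready, key=lambda p: remaining[p])  # least remaining time
        remaining[pid] -= 1           # run it for one time unit (preemptible)
        time += 1
        if remaining[pid] == 0:
            completion[pid] = time
            del remaining[pid]
    return completion

ct = srtf(procs)
print(ct["P3"], ct["P6"], ct["P1"])  # 3 5 23
```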
Round Robin Scheduling Algorithm
In this tutorial, we are going to learn about the most efficient CPU Process
Scheduling Algorithm named Round Robin CPU Process Scheduling. This
algorithm is very special because it is going to remove all the Flaws which
we have detected in the previous CPU Process Scheduling Algorithms.
Advantages
1. A fair amount of CPU is allocated to each job.
2. Because it doesn't depend on the burst time, it can truly be
implemented in the system.
3. It is not affected by the convoy effect or the starvation problem as
occurred in First Come First Serve CPU Scheduling Algorithm.
Disadvantages
1. Low Operating System slicing times will result in decreased CPU
output.
2. Round Robin CPU Scheduling approach takes longer to swap
contexts.
3. Time quantum has a significant impact on its performance.
4. The procedures cannot have priorities established.
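The cyclic behaviour described above can be sketched with a queue; the process data and time quantum below are illustrative assumptions, and all processes are taken to arrive at time 0 to keep the sketch short.

```python
from collections import deque

# Round Robin: each process runs for at most one time quantum, then is
# preempted and sent to the back of the ready queue.
def round_robin(procs, quantum):
    queue = deque(procs)              # (pid, remaining burst), arrival at t=0
    time, completion = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run                   # the process holds the CPU for `run` units
        if remaining > run:
            queue.append((pid, remaining - run))  # preempted: back of the queue
        else:
            completion[pid] = time
    return completion

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

With a smaller quantum the CPU is shared more finely but more context switches occur, which is the performance trade-off listed in the disadvantages above.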
Highest Response Ratio Next (HRRN) Scheduling
Highest Response Ratio Next (HRRN) is one of the better-performing
non-preemptive scheduling algorithms. Scheduling is done on the basis of an
extra parameter called the Response Ratio. A Response Ratio is calculated
for each of the available jobs, and the job with the highest response ratio
is given priority over the others.
The Response Ratio is calculated by the given formula.
1. Response Ratio = (W+S)/S
Where,
1. W → Waiting Time
2. S → Service Time or Burst Time
If we look at the formula, we notice that a job with a shorter burst
time will be given priority, but the formula also includes an extra factor,
the waiting time. Since,
1. Response Ratio ∝ W
2. Response Ratio ∝ (1/S)
Hence,
1. This algorithm not only favours shorter jobs but also accounts for
the waiting time of the longer jobs, so long jobs cannot starve.
2. Its mode is non-preemptive, hence context switching is minimal in
this algorithm.
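The selection rule can be sketched directly from the formula. The arrival and burst values below are hypothetical:

```python
# HRRN sketch: at each scheduling point, compute (W + S) / S for every
# arrived job and run the one with the highest ratio to completion
# (non-preemptive).
def hrrn(processes):
    """processes: list of (pid, arrival, burst); returns execution order."""
    pending = list(processes)
    time, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                        # CPU idles until next arrival
            time = min(p[1] for p in pending)
            continue
        # Response Ratio = (W + S) / S, where W = time - arrival, S = burst
        job = max(ready, key=lambda p: (time - p[1] + p[2]) / p[2])
        time += job[2]
        order.append(job[0])
        pending.remove(job)
    return order

print(hrrn([("P1", 0, 3), ("P2", 1, 6), ("P3", 2, 2)]))
# ['P1', 'P3', 'P2'] -- at t=3, P3's ratio (3/2) beats P2's (8/6)
```

At t = 3 the short job P3 wins, but if P2 kept waiting its ratio would keep growing, which is how HRRN protects longer jobs from starvation.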
Priority Scheduling Algorithm in OS (Operating System)
In Priority scheduling, a priority number is assigned to each process.
In some systems, the lower the number, the higher the priority, while in
others, the higher the number, the higher the priority. The process with
the highest priority among the available processes is given the CPU. Two
types of priority scheduling algorithms exist: Preemptive priority
scheduling and Non-Preemptive priority scheduling.
The priority number assigned to each process may or may not change. If
the priority number doesn't change throughout the life of the process, it
is called static priority, while if it keeps changing at regular intervals,
it is called dynamic priority.
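A minimal sketch of the non-preemptive variant, assuming static priorities, that all processes arrive at time 0, and the convention that a lower number means a higher priority (the process tuples are hypothetical):

```python
# Non-preemptive priority scheduling sketch. Convention assumed here:
# a LOWER priority number means a HIGHER priority.
def priority_schedule(processes):
    """processes: list of (pid, priority, burst);
    returns (execution order, waiting time per process)."""
    order = sorted(processes, key=lambda p: p[1])   # highest priority first
    time, waiting = 0, {}
    for pid, prio, burst in order:
        waiting[pid] = time          # waited until the CPU became free
        time += burst                # runs to completion (no preemption)
    return [p[0] for p in order], waiting

print(priority_schedule([("P1", 2, 4), ("P2", 1, 3), ("P3", 3, 2)]))
# (['P2', 'P1', 'P3'], {'P2': 0, 'P1': 3, 'P3': 7})
```

In the preemptive variant, a newly arriving process with higher priority would interrupt the running one instead of waiting for it to finish.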
Process Synchronization in OS (Operating
System)
Process Synchronization:
When two or more processes cooperate with each other, their order of
execution must be preserved; otherwise there can be conflicts in their
execution and inappropriate outputs can be produced.
The procedure involved in preserving the appropriate order of execution of
cooperative processes is known as Process Synchronization.
Race Condition:
A Race Condition typically occurs when two or more threads read,
write, and possibly make decisions based on memory that they are accessing
concurrently, so that the final result depends on the order in which they run.
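The classic lost-update race can be shown with a fixed interleaving. No real threads are used here, so the outcome is deterministic; the bank-balance scenario is a hypothetical example:

```python
# Lost-update race condition, shown step by step: two "processes" A and B
# each deposit 50, but both read the balance before either writes it back.
balance = 100

read_a = balance          # A reads 100
read_b = balance          # B reads 100 (before A has written back)
balance = read_a + 50     # A writes 150
balance = read_b + 50     # B overwrites with 150 -- A's deposit is lost

print(balance)            # 150, not the expected 200
```

With real threads the same interleaving can occur nondeterministically, which is what makes race conditions hard to reproduce and debug.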
Critical Section:
A Critical Section is the part of a program which tries to access shared
resources. A shared resource may be any resource in a computer, such as a
memory location, a data structure, the CPU, or any I/O device.
The critical section cannot be executed by more than one process at the
same time; the operating system faces difficulty in allowing and
disallowing processes to enter the critical section.
Requirements of Synchronization mechanisms
Primary
1. Mutual Exclusion
Our solution must provide mutual exclusion. By Mutual Exclusion, we
mean that if one process is executing inside the critical section, then no
other process may enter the critical section.
2. Progress
Progress means that if one process doesn't need to execute in the critical
section, it should not prevent other processes from getting into the
critical section.
Secondary
1. Bounded Waiting
We should be able to bound the waiting time for every process to get into
the critical section; a process must not wait endlessly to enter the
critical section.
2. Architectural Neutrality
Our mechanism must be architecture-neutral. It means that if our solution
works fine on one architecture, then it should also run on the other
ones as well.
Solutions
I. Software Type:
(a) Lock Variables
(b) Strict Alternation or Dekker’s Algorithm
(c) Peterson’s Algorithm
II. Hardware Type:
(a) TSL (Test and Set Lock) Instruction
III. OS Type:
(a) Counting semaphore
(b) Binary Semaphore
IV. Programming Language Compiler Support Type:
(a) Monitors
Lock Variable
This is the simplest synchronization mechanism. It is a software
mechanism implemented in user mode, and it is a busy-waiting solution
which can be used for more than two processes.
In this mechanism, a lock variable lock is used, which can take one of two
values, 0 or 1. Lock value 0 means that the critical section is
vacant, while lock value 1 means that it is occupied.
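A minimal sketch of the mechanism, and of why it fails: the check of the lock and the setting of the lock are two separate steps, so two processes can both observe the section as vacant before either marks it occupied.

```python
# Lock-variable sketch. The check (lock == 0) and the set (lock = 1) are
# NOT atomic: a process can be preempted between them, which is exactly
# why this mechanism fails to guarantee mutual exclusion.
lock = 0                  # 0 = critical section vacant, 1 = occupied

def enter_critical_section():
    global lock
    while lock == 1:      # busy wait while the section is occupied
        pass
    lock = 1              # occupy -- a preemption point exists just above

def exit_critical_section():
    global lock
    lock = 0              # mark the section vacant again

enter_critical_section()
print(lock)               # 1 -> the section is now occupied
# ... critical section ...
exit_critical_section()
print(lock)               # 0 -> vacant again
```

Making the check-and-set a single indivisible step is precisely what the hardware TSL instruction provides.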
TSL Instruction Set
The TSL instruction copies the current value of the lock flag into a
register and stores the value 1 into the flag in a single atomic cycle,
without any preemption.
Peterson's Solution
This is a software mechanism implemented at user mode. It is a busy-waiting
solution that can be implemented for only two processes. It uses
two variables: the turn variable and the interested variable.
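A sketch of the two-process protocol using those variables. Real Peterson's solution assumes sequentially consistent memory; the Python threads here serve only to illustrate the structure of the entry and exit sections:

```python
import threading

# Peterson's solution sketch for two processes with ids 0 and 1,
# using the interested array and the turn variable.
interested = [False, False]
turn = 0
counter = 0               # shared data protected by the critical section

def process(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        interested[i] = True      # declare interest in entering
        turn = other              # politely yield the turn to the other
        while interested[other] and turn == other:
            pass                  # busy wait
        counter += 1              # critical section
        interested[i] = False     # leave the critical section

t0 = threading.Thread(target=process, args=(0, 1000))
t1 = threading.Thread(target=process, args=(1, 1000))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)            # 2000 -- no increments are lost
```

Setting turn to the other process before waiting is what gives the solution both mutual exclusion and progress: whichever process wrote turn last is the one that waits.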