Operating system
SUDHEESH KP
Kerala PSC Expert
OUTLINE
● INTRODUCTION
○ OPERATING SYSTEM
○ TYPES OF OPERATING SYSTEM
○ CLASSIFICATION OF OS BASED ON THEIR FEATURES
■ BATCH OS
■ MULTIPROGRAMMING/MULTITASKING
■ TIME SHARING/MULTIUSER
■ REAL TIME, DISTRIBUTED, CLUSTERED, EMBEDDED
● PROCESS MANAGEMENT
○ PROCESS CONCEPT
■ PROCESS STATE
■ PROCESS STATE DIAGRAM
■ PROCESS CONTROL BLOCK
■ SYSTEM CALL
○ PROCESS SCHEDULING
■ SCHEDULING QUEUE
■ SCHEDULERS
● SCHEDULING ALGORITHMS
○ SCHEDULING CRITERIA
○ CPU SCHEDULING (PREEMPTIVE AND NON PREEMPTIVE)
○ GANTT CHART PREPARATION
○ CONCEPT RELATED TO SCHEDULING ALGORITHMS
● THREADS AND MULTITHREADING
● MISCELLANEOUS
● IPC, SYNCHRONIZATION, AND CONCURRENCY
○ IPC MECHANISM
○ METHODS OF IPC
■ SIGNALS, PIPES, SEMAPHORES, RPC ETC.
○ INTRODUCTION TO SYNCHRONIZATION
■ RACE CONDITION
■ CRITICAL SECTION AND CRITICAL SECTION PROBLEMS
■ MUTUAL EXCLUSION PROBLEM
■ SOLUTION FOR CRITICAL SECTION PROBLEM
● SEMAPHORES
○ BINARY AND COUNTING SEMAPHORES
● CLASSICAL PROBLEMS OF SYNCHRONIZATION
○ PRODUCER-CONSUMER PROBLEM
○ READERS AND WRITERS PROBLEM
○ DINING PHILOSOPHERS PROBLEM
● DEADLOCK
○ DEFINITION
○ NECESSARY CONDITIONS FOR OCCURRENCE OF DEADLOCK
○ METHODS FOR HANDLING DEADLOCKS
■ DEADLOCK PREVENTION
■ DEADLOCK AVOIDANCE
■ DEADLOCK DETECTION
● MEMORY MANAGEMENT
○ MEMORY MANAGEMENT SCHEME
● VIRTUAL MEMORY
● PAGE REPLACEMENT POLICY
○ FIFO
○ OPTIMAL
○ LRU
● THRASHING
● File systems
○ Sequential and indexed file organization
○ Directory structures, Contiguous, linked and indexed allocations, Disk scheduling
algorithms.
INTRODUCTION TO SYLLABUS
● Batch, multiprogramming, time sharing, multi processor and real time
systems, Process management, Process Control Block, Threading,
multithreading, CPU Scheduling, Schedulers, Context switching, Pre-emptive
and non preemptive scheduling, Scheduling algorithms – FCFS, SJF, Priority,
RR, Multi-level and multilevel feed back queue, Race condition, Critical
section problem, Deadlock – detection and prevention, Memory
Management – Address bindings, logical and physical addresses, contiguous
memory allocation – first fit, best fit, worst fit allocation, internal and external
fragmentation, Paging and segmentation, Demand paging, Page replacement
algorithms – FIFO, Optimal, LRU, Thrashing, File systems, Sequential and
indexed file organization, Directory structures, Contiguous, linked and
indexed allocations, Disk scheduling algorithms.
THANK YOU
TYPES OF
OPERATING SYSTEM
SUDHEESH KP
Kerala PSC Expert
TYPES OF OPERATING SYSTEM
● BATCH OPERATING SYSTEM
● MULTIPROGRAMMING OPERATING SYSTEM
● TIME SHARING OPERATING SYSTEM
● MULTIPROCESSOR OPERATING SYSTEM
● REAL TIME OPERATING SYSTEM
BATCH OPERATING SYSTEM
● Users of a batch operating system do not interact with the
computer directly
● An operator takes similar jobs having the same requirements
and groups them into batches.
● It is the responsibility of the operator to sort jobs with similar needs.
MULTIPROGRAMMING OPERATING SYSTEM
● Sharing the processor, when two or more programs reside in memory at the
same time, is referred to as multiprogramming
● A computer running more than one program at a time (like running Excel and
Firefox simultaneously).
● non-preemptive
● no user interaction
● prime concern: resource utilization
MULTITASKING/ TIME SHARING OPERATING
SYSTEM
● The CPU executes multiple jobs by switching among them. Each task is given
some time to execute so that all the tasks work smoothly.
● Switches occur so frequently that the user can interact with each program
while it is running
● preemptive in nature
● The time that each task gets to execute is called quantum.
● After this time interval is over OS switches over to the next task.
● prime concern: good response time
MULTI PROCESSOR OPERATING SYSTEM
● These systems have multiple processors working in parallel that share the
computer clock, memory, bus, peripheral devices etc
Symmetric Multiprocessors
● In these types of systems, each processor contains a similar copy of the
operating system and they all communicate with each other.
● All the processors are in a peer to peer relationship i.e. no master - slave
relationship exists between them.
Asymmetric Multiprocessors
● In asymmetric systems, each processor is given a predefined task. There is a
master processor that gives instruction to all the other processors.
Asymmetric multiprocessor system contains a master slave relationship.
REAL TIME OPERATING SYSTEM
Hard Real Time operating system:
● These operating systems guarantee that critical tasks be completed within a
range of time.
Soft real time operating system:
● These operating systems provide some relaxation in the time limit.
USER MODE VS KERNEL MODE
● The system is in user mode when the operating system is running a user
application such as a text editor
● Kernel mode is the privileged mode in which the process has
unrestricted access to system resources like hardware, memory, etc.
● Anything related to process management, I/O hardware management, and
memory management requires the process to execute in kernel mode.
SYSTEM CALL
● A system call is a method that allows a program to request services from the
kernel.
● Typically written in a high-level language (C or C++)
● Mostly accessed by programs via a high-level Application Program Interface
(API) rather than direct system call use
TYPES OF SYSTEM CALL
● Process control
● File management
● Device management
● Information maintenance
● Communications
Thank you
PROCESS
SUDHEESH KP
Kerala PSC Expert
OUTLINE
● Process Concept
● Process states
● Process control Block
● Process Scheduling
PROCESS CONCEPT
● Process – a program in execution; process execution
must progress in sequential fashion
● A process includes:
○ program counter
○ stack
○ data section
PROCESS IN MEMORY
PROCESS STATE
● As a process executes, it changes state
● new: The process is being created
● running: Instructions are being executed
● waiting: The process is waiting for some event to occur
● ready: The process is waiting to be assigned to a
processor
● terminated: The process has finished execution
PROCESS STATE DIAGRAM
PROCESS CONTROL BLOCK (PCB)
Information associated with each process
● Process state
● Program counter
● CPU registers
● CPU scheduling information
● Memory-management information
● Accounting information
● I/O status information
PROCESS CONTROL BLOCK (PCB)
CPU SWITCH FROM PROCESS TO PROCESS
PROCESS SCHEDULING QUEUES
● Job queue – set of all processes in the system
● Ready queue – set of all processes residing in main
memory, ready and waiting to execute
● Device queues – set of processes waiting for an I/O
device
● Processes migrate among the various queues
REPRESENTATION OF PROCESS
SCHEDULING
Schedulers
● Long-term scheduler (or job scheduler) – selects which
processes should be brought into the ready queue
● Short-term scheduler (or CPU scheduler) – selects
which process should be executed next and allocates
CPU
Addition of Medium Term Scheduling
Schedulers (Cont.)
● Short-term scheduler is invoked very frequently (milliseconds) (must be
fast)
● Long-term scheduler is invoked very infrequently (seconds, minutes) (may
be slow)
● The long-term scheduler controls the degree of multiprogramming
● Processes can be described as either:
○ I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
○ CPU-bound process – spends more time doing computations; few very long CPU bursts
Which of the following personalities from
India is the only winner of Special Oscar in
the history of Indian Cinema so far?
A Mrinal Sen B Shyam Benegal
C Satyajit Ray D Mira Nair
SCHEDULERS
● Scheduler is responsible for selecting a process for
scheduling/operating with the resources, from the queues
serving the resource
● There are three types of schedulers
○ Long term scheduler
○ Short term scheduler
○ Medium term scheduler
● Long term scheduler
○ Also referred to as the job scheduler
○ Selects which processes should be brought into the
ready queue
○ It controls the degree of multiprogramming
● Short term scheduler
○ Also referred to as the CPU scheduler
○ Selects a process from the ready queue for allocation of the CPU
○ It executes at least once every 100 msec
● Medium term scheduler
○ It introduces an intermediate level of scheduling
○ It is used to decrease the degree of multiprogramming
at times so as to complete remaining processes faster
by reducing the CPU contention, giving better
performance times. Optimizes memory usage
○ It is used to decrease the load on the CPU
○ It requires swapping
THREAD IN OS
SUDHEESH KP
Kerala PSC Expert
BASICS
● A thread is a lightweight process
● A thread is the basic unit of CPU utilization
● A thread consists of:
○ Thread ID
○ Program counter
○ Register set
○ stack
● Resources like code, data, and files can be shared among
all threads within a process.
Single and Multithreaded
Processes
BENEFITS
● Responsiveness
● Resource Sharing
● Economy
● Utilization of Multiprocessor Architectures
MULTITHREADING MODEL
● There are two types of threads
○ User threads
○ Kernel threads
● User threads
● They run above the kernel and are managed without
kernel support.
● User-level threads are implemented in user-level libraries
rather than via system calls, so thread switching does not
need to call the operating system or cause an interrupt to
the kernel.
Kernel Threads:
● Kernel knows and manages the threads.
● Instead of a thread table in each process, the kernel itself
has a (master) thread table that keeps track of all the
threads in the system.
● In addition kernel also maintains the traditional process
table to keep track of the processes.
● OS kernel provides system call to create and manage
threads.
● There are three ways to establish a relationship between
user threads and kernel threads
○ Many to one model
○ One to one model
○ Many to many model
Many to one model
ONE TO ONE MODEL
MANY TO MANY MODEL
PROCESSOR OR CPU
SCHEDULING
SUDHEESH KP
Kerala PSC Expert
OUTCOMES
○ Upon the completion of the session, the learner will be
able to know the concepts of
■ NON PREEMPTIVE SCHEDULING
■ PREEMPTIVE SCHEDULING
■ SCHEDULING CRITERIA
■ SCHEDULING ALGORITHMS
BASICS
NON PREEMPTIVE SCHEDULING
● Non-preemptive algorithms are designed so that once a
process enters the running state, it cannot be preempted
until it completes its allotted time.
PREEMPTIVE SCHEDULING
● preemptive scheduling is based on priority where a
scheduler may preempt a low priority running process
anytime when a high priority process enters into a ready
state
CLASSIFICATION OF
SCHEDULING ALGORITHM
SCHEDULING CRITERIA AND
SCHEDULING TERMS
● Arrival Time: Time at which the process arrives in the
ready queue.
● Completion Time: Time at which the process completes
its execution.
● Burst Time: Time required by a process for CPU
execution.
● Turn Around Time: Difference between completion
time and arrival time.
Turn Around Time = Completion Time – Arrival Time
● Waiting Time (W.T): Difference between turn around
time and burst time.
Waiting Time = Turn Around Time – Burst Time
● CPU utilization: Measures CPU usage in terms of
how busy the processor is, i.e. the load on the processor
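The two formulas above can be checked mechanically. A minimal sketch in Python (process names and times are hypothetical):

```python
def schedule_metrics(arrival, burst, completion):
    """Per-process turnaround and waiting times, using:
    TAT = Completion Time - Arrival Time
    WT  = TAT - Burst Time"""
    tat = {p: completion[p] - arrival[p] for p in completion}
    wt = {p: tat[p] - burst[p] for p in completion}
    return tat, wt

# Hypothetical data: P1 arrives at 0, runs 24 units, finishes at 24;
# P2 arrives at 2, runs 3 units, finishes at 27.
tat, wt = schedule_metrics({"P1": 0, "P2": 2},
                           {"P1": 24, "P2": 3},
                           {"P1": 24, "P2": 27})
```

Here P2's turnaround is 27 − 2 = 25 and its waiting time is 25 − 3 = 22.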
PROCESSOR OR CPU
SCHEDULING ALGORITHMS
SUDHEESH KP
Kerala PSC Expert
OUTCOMES
○ Upon the completion of the session, the learner will be able to know the
concepts of
■ FCFS
■ SJF
■ Priority
■ RR
■ Multi-level and multilevel feed back queue
FIRST COME,FIRST SERVED
SCHEDULING (FCFS)
● Concept : THE PROCESS THAT REQUESTS THE CPU FIRST IS ALLOCATED THE
CPU FIRST
● WORKING :
○ PROCESS REQUESTS ARE KEPT IN THE READY QUEUE IN FIFO ORDER, i.e. THE
READY QUEUE IS MAINTAINED AS A FIFO QUEUE
○ WHEN A NEW PROCESS ENTERS THE QUEUE, ITS PCB IS LINKED ONTO THE
TAIL/REAR OF THE QUEUE
○ WHEN THE CPU IS FREE, IT IS ALLOCATED TO THE PROCESS THAT IS AT THE
HEAD/FRONT OF THE QUEUE
Process Burst Time
P1 24
P2 3
P3 3
● Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
● Suppose that the processes arrive in the order P2 , P3 , P1
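The effect of arrival order can be verified with a short simulation. A sketch, assuming all three processes arrive at t = 0:

```python
def fcfs_waiting(order, burst):
    """FCFS with all processes arriving at t = 0: each process
    waits while everything ahead of it in the queue runs."""
    t, waiting = 0, {}
    for p in order:
        waiting[p] = t       # time spent waiting in the ready queue
        t += burst[p]        # run to completion, no preemption
    return waiting

burst = {"P1": 24, "P2": 3, "P3": 3}
w1 = fcfs_waiting(["P1", "P2", "P3"], burst)  # average waiting time 17
w2 = fcfs_waiting(["P2", "P3", "P1"], burst)  # average waiting time 3
```

The average waiting time drops from 17 to 3 when the short jobs go first; the first order shows the convoy effect, where one long process delays everything behind it.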
OBSERVATIONS
SHORTEST JOB FIRST SCHEDULING
● CONCEPT: ALLOCATE THE CPU TO THE PROCESS WITH THE SMALLEST NEXT
CPU BURST
● WORKING: ASSOCIATE WITH EACH PROCESS THE LENGTH OF ITS NEXT CPU
BURST. USE THESE LENGTHS TO SCHEDULE THE PROCESS WITH THE
SHORTEST TIME
● TWO SCHEMES:
○ NONPREEMPTIVE – ONCE THE CPU IS GIVEN TO THE PROCESS, IT CANNOT BE
PREEMPTED UNTIL IT COMPLETES ITS CPU BURST
○ PREEMPTIVE – IF A NEW PROCESS ARRIVES WITH CPU BURST LENGTH LESS
THAN THE REMAINING TIME OF THE CURRENTLY EXECUTING PROCESS, PREEMPT.
THIS SCHEME IS KNOWN AS
SHORTEST-REMAINING-TIME-FIRST (SRTF)
SIMPLE EXAMPLE
Example of Non-Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
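The table above can be fed to a small non-preemptive SJF simulator. A sketch (ties between equal bursts are broken by listing order, which is an assumption of this sketch):

```python
def sjf_nonpreemptive(arrival, burst):
    """At each decision point, pick the arrived process with the
    smallest burst and run it to completion (no preemption)."""
    pending = list(burst)        # insertion order breaks ties
    t, completion = 0, {}
    while pending:
        ready = [p for p in pending if arrival[p] <= t]
        if not ready:            # CPU idle until the next arrival
            t = min(arrival[p] for p in pending)
            continue
        p = min(ready, key=lambda x: burst[x])
        t += burst[p]
        completion[p] = t
        pending.remove(p)
    return completion
```

For the table's data the schedule is P1 (0–7), P3 (7–8), P2 (8–12), P4 (12–16), giving an average waiting time of (0 + 6 + 3 + 7) / 4 = 4.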
PREEMPTIVE SJF
● OR SHORTEST REMAINING TIME FIRST OR SHORTEST RUNTIME FIRST
● CONCEPT: Preempt the currently executing process, if the newly arrived
process has shorter CPU burst time in comparison to the executing process.
● Preempted process is placed at the front of the ready queue
Example 1
Process Arrival Time Burst Time
P1 0.0 24
P2 0.4 3
P3 1.0 3
Example of Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
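The same data can be run through a unit-by-unit SRTF simulation. A sketch, advancing time one unit at a time (fine for the integer bursts used here):

```python
def srtf(arrival, burst):
    """Preemptive SJF (SRTF): every time unit, run the arrived
    process with the least remaining time."""
    remaining = dict(burst)
    t, completion = 0, {}
    while remaining:
        ready = [p for p in remaining if arrival[p] <= t]
        if not ready:                    # CPU idle until next arrival
            t = min(arrival[p] for p in remaining)
            continue
        p = min(ready, key=lambda x: remaining[x])
        remaining[p] -= 1                # run p for one time unit
        t += 1
        if remaining[p] == 0:
            completion[p] = t
            del remaining[p]
    return completion
```

For the table's data: P3 finishes at 5, P2 at 7, P4 at 11, P1 at 16; average waiting time is (9 + 1 + 0 + 2) / 4 = 3.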
PRIORITY SCHEDULING
● A priority number (integer) is associated with each process
● The CPU is allocated to the process with the highest priority (smallest integer ≡ highest
priority)
○ Preemptive
○ nonpreemptive
● SJF is a priority scheduling where priority is the predicted next CPU burst time
● Problem ≡ Starvation – low priority processes may never execute
● Solution ≡ Aging – as time progresses increase the priority of the process
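Non-preemptive priority scheduling with "smallest integer = highest priority" can be sketched as below; the process names, bursts and priority numbers are made up for illustration:

```python
def priority_schedule(burst, priority):
    """Non-preemptive priority scheduling, all processes arriving
    at t = 0. Smallest priority number = highest priority."""
    order = sorted(burst, key=lambda p: priority[p])
    t, waiting = 0, {}
    for p in order:
        waiting[p] = t       # waits for all higher-priority processes
        t += burst[p]
    return order, waiting

# Hypothetical data: the long P1 has the worst priority, so it runs last.
order, waiting = priority_schedule({"P1": 10, "P2": 1, "P3": 2},
                                   {"P1": 3, "P2": 1, "P3": 2})
```

If new high-priority processes kept arriving, P1 would keep moving back: that is starvation, which aging counters by gradually lowering (improving) a waiting process's priority number.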
ROUND ROBIN (RR)
● Each process gets a small unit of CPU time (time quantum), usually
10-100 milliseconds. After this time has elapsed, the process is
preempted and added to the end of the ready queue.
● If there are n processes in the ready queue and the time quantum is q,
then each process gets 1/n of the CPU time in chunks of at most q
time units at once. No process waits more than (n-1)q time units.
● Performance
○ q large ⇒ FIFO
○ q small ⇒ q must be large with respect to context switch, otherwise
overhead is too high
Example of RR with Time Quantum = 2
Process BT AT
P1 6 0
P2 2 0
P3 4 0
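The RR examples can be reproduced with a queue-based simulation. A sketch, assuming all processes arrive at t = 0 in the listed order:

```python
from collections import deque

def round_robin(burst, quantum):
    """RR for processes all arriving at t = 0, queued in dict order."""
    queue = deque(burst)             # ready queue, FIFO
    remaining = dict(burst)
    t, completion = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        if remaining[p]:
            queue.append(p)          # preempted: back of the queue
        else:
            completion[p] = t

    return completion
```

With quantum 2 and the table above, the execution order is P1, P2, P3, P1, P3, P1 over 0–12; P2 finishes at 4, P3 at 10, P1 at 12.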
Example of RR with Time Quantum = 20
Process Burst Time
P1 53
P2 17
P3 68
P4 24
● The Gantt chart is:
Process BT AT
P1 6 2
P2 2 1
P3 4 3
OBSERVATIONS
MULTILEVEL AND
MULTILEVEL FEEDBACK
QUEUE SCHEDULING
SUDHEESH KP
Kerala PSC Expert
MULTILEVEL QUEUE SCHEDULING
● The ready queue is partitioned into several separate queues
● Processes are assigned permanently to one queue based on some property
● Each queue has its own scheduling algorithm.
● Inter-queue scheduling is also done, for example using
○ Fixed-priority preemptive scheduling, i.e. each queue has a priority, and
processes in a higher-priority queue are scheduled before those in other queues
MULTILEVEL SCHEDULING
MULTILEVEL FEEDBACK QUEUE
SCHEDULING
● The processes are scheduled as per their category and, if required, moved
up/down between the queues
● Three queues:
● Q0 – RR with time quantum 8 milliseconds
● Q1 – RR time quantum 16 milliseconds
● Q2 – FCFS
● Scheduling
● A new job enters queue Q0, which is served FCFS. When it gains the CPU, the
job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is
moved to queue Q1.
● At Q1 job is again served FCFS and receives 16 additional milliseconds. If it
still does not complete, it is preempted and moved to queue Q2.
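For a single job (ignoring competition from other jobs, a simplifying assumption of this sketch), the queue-to-queue demotion described above can be traced:

```python
def mlfq_trace(burst):
    """Which queues a job with the given total CPU burst visits,
    and how much CPU time it receives in each:
    Q0 = RR with 8 ms quantum, Q1 = RR with 16 ms quantum, Q2 = FCFS."""
    trace, remaining = [], burst
    for queue, quantum in (("Q0", 8), ("Q1", 16)):
        used = min(remaining, quantum)
        trace.append((queue, used))
        remaining -= used
        if remaining == 0:           # finished within this quantum
            return trace
    trace.append(("Q2", remaining))  # Q2 runs the rest to completion
    return trace
```

A 5 ms job stays in Q0; a 20 ms job is demoted once; a 40 ms job ends up in the FCFS queue Q2.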
PROCESS
SYNCHRONIZATION
SUDHEESH KP
Kerala PSC Expert
BACKGROUND
● Process
○ Independent Processes operating concurrently on a
systems are those that can neither affect other processes or be
affected by other processes.
○ Cooperating Processes are those that can affect or be
affected by other processes.
Why we need process cooperation?
• Information Sharing - There may be several processes which need
access to the same file for example. ( e.g. pipelines. )
• Computation speedup - Often a solution to a problem can be solved
faster if the problem can be broken down into sub-tasks to be solved
simultaneously ( particularly when multiple processors are involved. )
• Modularity - The most efficient architecture may be to break a
system down into cooperating modules. ( E.g. databases with a
client-server architecture. )
• Convenience - Even a single user may be multi-tasking, such as
editing, compiling, printing, and running the same code in different
windows
Fundamental models for IPC
1. Shared memory
2. Message passing
PROCESS SYNCHRONIZATION
● Process Synchronization is a way to coordinate
processes that use shared data.
● Process is categorized into two types on the basis of
synchronization
○ Independent Process
○ Cooperative Process
WHY WE NEED PROCESS SYNCHRONIZATION?
PROCESS
SYNCHRONIZATION
SUDHEESH KP
Kerala PSC Expert
● A cooperating process is one that can affect or be affected
by other processes executing in the system
BACKGROUND
● Concurrent access to shared data may result in data
inconsistency
● Maintaining data consistency requires mechanisms to
ensure the orderly execution of cooperating processes
PRODUCER CONSUMER PROBLEM
● A PRODUCER PROCESS PRODUCES INFORMATION
THAT IS CONSUMED BY A CONSUMER PROCESS
● ONE OF THE SOLUTIONS TO THIS PROBLEM IS SHARED
MEMORY
● count++ could be implemented as
● register1 = count
register1 = register1 + 1
count = register1
● count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2
● Consider this execution interleaving with “count = 5” initially:
● S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
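The S0–S5 schedule can be replayed deterministically to see the lost update. A sketch; the point is that count++ and count-- are each three machine steps, not one:

```python
def interleave(count=5):
    """Replay the S0..S5 interleaving: the consumer reads the OLD
    count before the producer writes, so the producer's update is lost."""
    register1 = count          # S0: producer reads count        (5)
    register1 = register1 + 1  # S1: producer increments locally (6)
    register2 = count          # S2: consumer reads the OLD count (5)
    register2 = register2 - 1  # S3: consumer decrements locally (4)
    count = register1          # S4: producer writes back, count = 6
    count = register2          # S5: consumer overwrites, count = 4
    return count
```

Starting from 5, one increment plus one decrement should leave 5, but this ordering leaves 4: a race condition.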
1. A situation where several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the particular
order in which the access takes place, is called a ___________
a) DEADLOCK
b) RACE CONDITION
c) SPIN LOCK
d) PAGE FAULT
THE CRITICAL-SECTION PROBLEM
● Consider a system consisting of n processes {P0, P1, ...,
Pn−1}.
● Each process has a segment of code, called a critical
section,
in which the process may be changing common
variables, updating a table, writing a file, and so on.
● The important feature of the system is that, when one
process is executing in its critical section, no other
process is allowed to execute in its critical section.
● That is, no two processes are executing in their critical
sections at the same time.
● The critical-section problem is to design a protocol that
the processes can use to cooperate
SOLUTION TO CRITICAL-SECTION PROBLEM
1. Mutual Exclusion - If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist
some processes that wish to enter their critical section, then the selection of
the processes that will enter the critical section next cannot be postponed
indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is granted
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the N processes
PROCESS
SYNCHRONIZATION PART 3
SUDHEESH KP
Kerala PSC Expert
PETERSON’S SOLUTION
● Peterson’s Solution is a classical software based solution to the critical
section problem.
● In Peterson’s solution, we have two shared variables:
boolean flag[i] : Initialized to FALSE; initially no one is interested in
entering the critical section
int turn : The process whose turn it is to enter the critical section.
EXAMPLE
Peterson’s Solution preserves all three conditions :
● Mutual Exclusion is assured as only one process can
access the critical section at any time.
● Progress is also assured, as a process outside the critical
section does not block other processes from entering the
critical section.
● Bounded Waiting is preserved as every process gets a
fair chance.
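A runnable sketch of Peterson's solution, with two threads incrementing a shared counter inside the critical section. Note this relies on CPython's interpreter lock giving sequentially consistent reads and writes; in C or C++ the same code would need memory barriers:

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to defer to
counter = 0             # shared data protected by the critical section

def worker(i, iters):
    global turn, counter
    j = 1 - i
    for _ in range(iters):
        flag[i] = True                  # entry: announce interest
        turn = j                        # give the other process priority
        while flag[j] and turn == j:
            pass                        # busy-wait
        tmp = counter                   # critical section: a deliberately
        counter = tmp + 1               # non-atomic read-modify-write
        flag[i] = False                 # exit section

def peterson_demo(iters=100):
    global counter
    counter, flag[0], flag[1] = 0, False, False
    ts = [threading.Thread(target=worker, args=(i, iters)) for i in (0, 1)]
    for t in ts: t.start()
    for t in ts: t.join()
    return counter
```

If mutual exclusion held, no increments are lost and the final counter equals the total number of iterations.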
TestAndSet LOCK
● TestAndSet is a hardware solution to the synchronization problem.
● In TestAndSet, we have a shared lock variable which can take either
of the two values, 0 or 1
● Before entering into the critical section, a process inquires about the
lock.
● If it is locked, it keeps on waiting until it becomes free and if it is not
locked, it takes the lock and executes the critical section.
SEMAPHORE
SUDHEESH KP
Kerala PSC Expert
SEMAPHORE
● The semaphore was proposed by Dijkstra in 1965. It is a very
significant technique to manage concurrent processes by using a
simple integer value, which is known as a semaphore.
● Semaphore is simply an integer variable that is shared between
threads.
● This variable is used to solve the critical section problem and to
achieve process synchronization in the multiprocessing
environment.
● Semaphores are integer variables that are used to solve the critical
section problem by using two atomic operations, wait and signal
that are used for process synchronization.
WAIT: denoted by P
● The wait operation busy-waits while S is zero or negative, then
decrements the value of its argument S.
Definition of wait:
P(Semaphore S) {
    while (S <= 0)
        ;   // no-op (busy wait)
    S--;
}
SIGNAL : denoted by V
● The signal operation increments the value of its argument S
Definition of signal V:
V(Semaphore S) {
    S++;
}
● Can only be accessed via two indivisible (atomic) operations
Semaphores are of two types:
● Binary semaphore –
this is also known as mutex lock. It can have only two values – 0 and
1. Its value is initialized to 1. It is used to implement the solution of
critical section problems with multiple processes.
● Counting semaphore –
its value can range over an unrestricted domain. It is used to control
access to a resource that has multiple instances.
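Python's `threading.Semaphore` covers both kinds: initialized to 1 it behaves as a binary semaphore (mutex), initialized to N it is a counting semaphore. A sketch that caps concurrent access to a 3-instance resource (the worker count and sleep time are arbitrary):

```python
import threading
import time

def max_concurrency(workers=5, permits=3):
    """Run `workers` threads through a counting semaphore with
    `permits` instances; return the peak number inside at once."""
    sem = threading.Semaphore(permits)   # counting semaphore
    guard = threading.Lock()             # protects the counters below
    state = {"now": 0, "peak": 0}

    def worker():
        sem.acquire()                    # wait / P
        with guard:
            state["now"] += 1
            state["peak"] = max(state["peak"], state["now"])
        time.sleep(0.01)                 # hold one resource instance
        with guard:
            state["now"] -= 1
        sem.release()                    # signal / V

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return state["peak"]
```

Even with 5 workers, the semaphore never admits more than 3 at a time.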
Disadvantages of Semaphore
● It requires Busy waiting
Semaphore Implementation with no Busy waiting
● With each semaphore there is an associated waiting queue. Each entry in a
waiting queue has two data items:
○ value (of type integer)
○ pointer to next record in the list
● Two operations:
○ block – place the process invoking the operation on the appropriate waiting
queue.
○ wakeup – remove one of processes in the waiting queue and place it in the ready
queue.
Semaphore Implementation with no Busy waiting (Cont.)
● Implementation of wait:
wait(S) {
    value--;
    if (value < 0) {
        add this process to waiting queue
        block();
    }
}
● Implementation of signal:
signal(S) {
    value++;
    if (value <= 0) {
        remove a process P from the waiting queue
        wakeup(P);
    }
}
Deadlock and Starvation
● Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes
● Let S and Q be two semaphores initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);
● Starvation – indefinite blocking. A process may never be removed
from the semaphore queue in which it is suspended.
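The standard cure for the S/Q deadlock above is to impose a single global acquisition order. A sketch where both processes take S before Q, so the circular wait cannot form:

```python
import threading

def run_with_lock_ordering():
    """Both processes acquire S before Q (one global order),
    so neither can hold one semaphore while waiting on the other
    in the opposite order."""
    S = threading.Semaphore(1)
    Q = threading.Semaphore(1)
    finished = []

    def proc(name):
        S.acquire()                  # wait(S)  -- same order in both
        Q.acquire()                  # wait(Q)
        finished.append(name)        # work needing both resources
        Q.release()                  # signal(Q)
        S.release()                  # signal(S)

    ts = [threading.Thread(target=proc, args=(n,)) for n in ("P0", "P1")]
    for t in ts: t.start()
    for t in ts: t.join()
    return sorted(finished)
```

With the original opposite orders, P0 holding S and P1 holding Q could each wait forever on the other; with one fixed order, both always run to completion.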
Classical Problems of
Synchronization
● Bounded-Buffer Problem
● Readers and Writers Problem
● Dining-Philosophers Problem
CLASSICAL PROBLEMS
OF SYNCHRONIZATION
SUDHEESH KP
Kerala PSC Expert
CLASSICAL PROBLEMS OF SYNCHRONIZATION
● Bounded-Buffer Problem
● Readers and Writers Problem
● Dining-Philosophers Problem
Bounded-Buffer Problem
● also called the Producers and Consumers problem.
● A finite supply of containers is available. Producers take an empty container
and fill it with a product. Consumers take a full container, consume the
product and leave an empty container.
● The main complexity of this problem is that we must maintain the count for
both the number of empty and full containers that are available.
● Producers produce a product and consumers consume the product, but both
use one of the containers each time.
Consider:
● a buffer which can store n items
● a producer process which creates the items (1 at a time)
● a consumer process which processes them (1 at a time)
● A producer cannot produce unless there is an empty
buffer slot to fill.
● A consumer cannot consume unless there is at least one
produced item.
Bounded-Buffer Problem Process
● N buffers, each can hold one item
● Semaphore mutex initialized to the value 1
● Semaphore full initialized to the value 0
● Semaphore empty initialized to the value N.
PRODUCER PROCESS
do {
    ...
    produce an item in nextp
    ...
    wait(empty);
    wait(mutex);
    ...
    add nextp to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);
CONSUMER PROCESS
do {
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer to nextc
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item in nextc
} while (1);
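The producer and consumer pseudocode above translates directly to Python semaphores. A sketch with one producer and one consumer moving ten items through a 3-slot buffer (the sizes are arbitrary):

```python
import threading
from collections import deque

def bounded_buffer_run(n_slots=3, n_items=10):
    """One producer, one consumer, N-slot buffer, using the same
    semaphores as the pseudocode: mutex=1, empty=N, full=0."""
    buf = deque()
    mutex = threading.Semaphore(1)
    empty = threading.Semaphore(n_slots)   # free slots
    full = threading.Semaphore(0)          # filled slots
    consumed = []

    def producer():
        for item in range(n_items):
            empty.acquire()                # wait(empty)
            mutex.acquire()                # wait(mutex)
            buf.append(item)               # add nextp to buffer
            mutex.release()                # signal(mutex)
            full.release()                 # signal(full)

    def consumer():
        for _ in range(n_items):
            full.acquire()                 # wait(full)
            mutex.acquire()                # wait(mutex)
            consumed.append(buf.popleft()) # remove item to nextc
            mutex.release()                # signal(mutex)
            empty.release()                # signal(empty)

    p = threading.Thread(target=producer)
    c = threading.Thread(target=consumer)
    p.start(); c.start(); p.join(); c.join()
    return consumed
```

The `empty` semaphore blocks the producer when all slots are full, and `full` blocks the consumer when there is nothing to consume; every item comes out exactly once, in order.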
READER WRITER PROBLEM
● A data set is shared among a number of concurrent
processes
○ Readers – only read the data set; they do not perform
any updates
○ Writers – can both read and write.
● Problem – allow multiple readers to read at the same
time. Only one single writer can access the shared data
at the same time.
● A data item such as a file is shared among several
processes.
● Each process is classified as either a reader or writer.
● Multiple readers may access the file simultaneously.
● A writer must have exclusive access (i.e., cannot share
with either a reader or another writer).
● A solution gives priority to either readers or writers.
● Shared Data
○ Data set
○ Semaphore mutex initialized to 1.
○ Semaphore wrt initialized to 1.
○ Integer readcount initialized to 0.
● The structure of a writer process
do {
wait (wrt) ;
// writing is performed
signal (wrt) ;
} while (true)
● The structure of a reader process
do {
    wait (mutex) ;
    readcount ++ ;
    if (readcount == 1) wait (wrt) ;
    signal (mutex) ;
    // reading is performed
    wait (mutex) ;
    readcount -- ;
    if (readcount == 0) signal (wrt) ;
    signal (mutex) ;
} while (true)
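The reader/writer pseudocode maps onto a small class. A sketch of the readers-preference solution (writers can starve, as noted above):

```python
import threading

class RWLock:
    """First readers-writers solution: readers share access,
    writers get exclusive access."""
    def __init__(self):
        self.mutex = threading.Semaphore(1)  # protects readcount
        self.wrt = threading.Semaphore(1)    # writers' exclusive access
        self.readcount = 0

    def start_read(self):
        self.mutex.acquire()
        self.readcount += 1
        if self.readcount == 1:
            self.wrt.acquire()   # first reader locks out writers
        self.mutex.release()

    def end_read(self):
        self.mutex.acquire()
        self.readcount -= 1
        if self.readcount == 0:
            self.wrt.release()   # last reader lets writers back in
        self.mutex.release()

    def start_write(self):
        self.wrt.acquire()       # exclusive: no readers, no writers

    def end_write(self):
        self.wrt.release()
```

Two readers can be inside at once (readcount reaches 2), while a writer only proceeds once readcount has dropped back to 0.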
DINING-PHILOSOPHERS PROBLEM
● A table, a bowl of rice in the center, five chairs, and five
chopsticks; in each chair there is a philosopher.
● A philosopher may pick up only one chopstick at a time.
● When a hungry philosopher has both her chopsticks at
the same time, she eats without releasing her chopsticks.
● When she is finished eating, she puts down both of her
chopsticks and starts thinking.
● From time to time, a philosopher gets hungry and tries to
pick up the two chopsticks that are closest to her
(chopsticks between her left and right neighbors).
The Structure of philosopher i
do {
    wait ( chopstick[i] );
    wait ( chopstick[ (i + 1) % 5] );
    // eat
    signal ( chopstick[i] );
    signal ( chopstick[ (i + 1) % 5] );
    // think
} while (true) ;
Solution:
● Represent each chopstick by a semaphore.
● A philosopher tries to grab the chopstick by executing a wait
operation on that semaphore.
● She releases her chopsticks by executing the signal operation on the
appropriate semaphores.
● This solution guarantees that no two neighbors are eating
simultaneously, but it could cause a deadlock.
● Suppose all five philosophers become hungry simultaneously, and
each grabs her left chopstick.
● When each philosopher then tries to grab her right chopstick, she
waits forever: the system is deadlocked. Any satisfactory solution
must also guard against the possibility that a philosopher starves.
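The semaphore-per-chopstick scheme above can be sketched in Python. To keep the sketch deadlock-free it adds the standard asymmetric-ordering fix (the last philosopher picks up her right chopstick first), which goes beyond the slide's naive version:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per chopstick
meals = [0] * N

def philosopher(i, rounds):
    for _ in range(rounds):
        # Asymmetric ordering breaks the circular wait that causes
        # deadlock: philosopher N-1 takes her right chopstick first.
        first, second = (i, (i + 1) % N) if i < N - 1 else ((i + 1) % N, i)
        chopstick[first].acquire()   # wait(chopstick[first])
        chopstick[second].acquire()  # wait(chopstick[second])
        meals[i] += 1                # eat
        chopstick[second].release()  # signal
        chopstick[first].release()   # signal
        # think

threads = [threading.Thread(target=philosopher, args=(i, 50)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # every philosopher eats all 50 rounds; no deadlock
```

With the naive symmetric order (everyone left-then-right), all five threads can each hold one semaphore and block forever, exactly the scenario described above.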
DEADLOCK
SUDHEESH KP
Kerala PSC Expert
OUTLINE
● The Deadlock Problem
● System Model
● Deadlock Characterization
● Methods for Handling Deadlocks
● Deadlock Prevention
● Deadlock Avoidance
● Deadlock Detection
● Recovery from Deadlock
THE DEADLOCK PROBLEM
● A set of blocked processes each holding a resource and waiting to acquire a
resource held by another process in the set.
● Example
● System has 2 tape drives.
● P1 and P2 each hold one tape drive and each needs another one.
● Deadlock is a situation where a set of processes are
blocked because each process is holding a resource and
waiting for another resource acquired by some other
process.
System Model
● A process in operating system uses resources in the
following way.
1) Requests a resource
2) Use the resource
3) Releases the resource
● Resource types R1, R2, . . ., Rm
● CPU cycles, memory space, I/O devices
● Each resource type Ri has Wi instances
Deadlock Characterization
● Deadlock can arise if the following four conditions hold simultaneously
(Necessary Conditions)
● Mutual Exclusion: at least one resource is non-sharable (only one
process can use it at a time)
● Hold and Wait: a process is holding at least one resource and waiting
to acquire additional resources held by other processes.
● No preemption: a resource can be released only voluntarily by the process
holding it, after that process has completed its task
● Circular Wait: A set of processes are waiting for each other in circular form.
Resource-Allocation Graph
A set of vertices V and a set of edges E.
● V is partitioned into two types:
○ P = {P1, P2, …, Pn}, the set consisting of all the
processes in the system.
○ R = {R1, R2, …, Rm}, the set consisting of all resource
types in the system.
● request edge – directed edge Pi → Rj
● assignment edge – directed edge Rj → Pi
Resource-Allocation Graph (Cont.)
● Process Pi: drawn as a circle
● Resource type Rj with 4 instances: drawn as a rectangle with one dot per instance
● Pi requests an instance of Rj: request edge Pi → Rj
● Pi is holding an instance of Rj: assignment edge Rj → Pi
Example of a Resource Allocation Graph
Resource Allocation Graph With A Deadlock
Resource Allocation Graph With A Cycle
But No Deadlock
Basic Facts
● If graph contains no cycles ⇒ no deadlock.
● If graph contains a cycle ⇒
○ if only one instance per resource type, then
deadlock.
○ if several instances per resource type, possibility of
deadlock.
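The cycle rule above can be checked mechanically. A small DFS sketch over an assumed example graph (single instance per resource type, so a cycle implies deadlock; the graphs are illustrative, not from the slides):

```python
# Detect a cycle in a resource-allocation graph with depth-first search.
# Edges: request edge P -> R, assignment edge R -> P.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on stack / done
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph.get(v, []):
            if color[w] == GRAY:        # back edge found -> cycle
                return True
            if color[w] == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in graph)

# Deadlocked: P1 holds R2 and wants R1; P2 holds R1 and wants R2.
g1 = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
# Acyclic: R1 assigned to P1, P1 requests R2, R2 assigned to P2.
g2 = {"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": []}
print(has_cycle(g1), has_cycle(g2))  # True False
```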
Methods for Handling Deadlocks
● Ensure that the system will never enter a deadlock state.
● Allow the system to enter a deadlock state and then recover.
● Ignore the problem and pretend that deadlocks never occur in the
system; used by most operating systems, including UNIX.
Deadlock Prevention
● Mutual Exclusion – not required for sharable resources; must hold for
non sharable resources.
● Hold and Wait – must guarantee that whenever a process requests a
resource, it does not hold any other resources.
○ Require process to request and be allocated all its resources
before it begins execution, or allow process to request resources
only when the process has none.
○ Low resource utilization; starvation possible.
● No Preemption –
○ If a process that is holding some resources requests another resource
that cannot be immediately allocated to it, then all resources currently
being held are released.
○ Preempted resources are added to the list of resources for which the
process is waiting.
○ Process will be restarted only when it can regain its old resources, as
well as the new ones that it is requesting.
● Circular Wait – impose a total ordering of all resource types, and require
that each process requests resources in an increasing order of
enumeration.
Deadlock Avoidance
● Requires that the system has some additional a priori information
available.
● Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need.
● The deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a circular-wait
condition.
● Resource-allocation state is defined by the number of available and
allocated resources, and the maximum demands of the processes.
Safe State
● When a process requests an available resource, system must decide if
immediate allocation leaves the system in a safe state.
● System is in safe state if there exists a safe sequence of all processes.
● Sequence <P1, P2, …, Pn> is safe if for each Pi, the resources that Pi can still
request can be satisfied by currently available resources + resources held by
all the Pj, with j < i.
○ If Pi resource needs are not immediately available, then Pi can wait until
all Pj have finished.
○ When Pj is finished, Pi can obtain needed resources, execute, return
allocated resources, and terminate.
○ When Pi terminates, Pi+1 can obtain its needed resources, and so on.
Basic Facts
● If a system is in safe state ⇒ no deadlocks.
● If a system is in unsafe state ⇒ possibility of deadlock.
● Avoidance ⇒ ensure that a system will never enter an unsafe state.
Safe, Unsafe, and Deadlock States
Deadlock Avoidance
OUTLINE
● DEADLOCK AVOIDANCE
○ SAFE STATE
○ SAFE SEQUENCE
○ UNSAFE STATE
○ RESOURCE-ALLOCATION-GRAPH ALGORITHM
○ BANKER’S ALGORITHM
SAFE STATE
● A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock.
● A system is in a safe state only if there exists a safe sequence. A sequence
of processes
● <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each
Pi, the resource requests that Pi can still make can be satisfied by the
currently available resources plus the resources held by all Pj, with j <i. In this
situation, if the resources that Pi needs are not immediately available, then Pi
can wait until all Pj have finished.
Resource-Allocation Graph Algorithm
● Claim edge Pi → Rj indicates that process Pi may request resource Rj;
represented by a dashed line.
● Claim edge converts to request edge when a process requests a resource.
● When a resource is released by a process, assignment edge reconverts to a
claim edge.
● Resources must be claimed a priori in the system.
Resource-Allocation Graph For Deadlock Avoidance
Unsafe State In Resource-Allocation Graph
Banker’s Algorithm
● Multiple instances.
● Each process must a priori claim maximum use.
● When a process requests a resource it may have to wait.
● When a process gets all its resources it must return them
in a finite amount of time.
Data Structures for the Banker’s Algorithm
Let n = number of processes, and m = number of resource types.
● Available: Vector of length m. If available [j] = k, there are k instances of
resource type Rj available.
● Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k
instances of resource type Rj.
● Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated
k instances of Rj.
● Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances
of Rj to complete its task.
Need [i,j] = Max[i,j] – Allocation [i,j].
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish [i] = false for i = 1, 2, …, n.
2. Find an i such that both:
(a) Finish [i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
go to step 2.
4. If Finish [i] == true for all i, then the system is in a safe state.
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti [j] = k then
process Pi wants k instances of resource type Rj.
1. If Requesti ≤ Needi go to step 2. Otherwise, raise error
condition, since process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must
wait, since resources are not available.
3. Pretend to allocate requested resources to Pi by modifying
the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
Example of Banker’s Algorithm
● 5 processes P0 through P4; 3 resource types A (10 instances),
B (5 instances), and C (7 instances).
● Snapshot at time T0:
      Allocation   Max     Available
      A B C        A B C   A B C
P0    0 1 0        7 5 3   3 3 2
P1    2 0 0        3 2 2
P2    3 0 2        9 0 2
P3    2 1 1        2 2 2
P4    0 0 2        4 3 3
Example (Cont.)
● The content of the matrix. Need is defined to be Max –
Allocation.
Need
ABC
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
● The system is in a safe state since the sequence < P1, P3, P4, P2,
P0> satisfies safety criteria.
Example P1 Request (1,0,2) (Cont.)
● Check that Request ≤ Available (that is, (1,0,2) ≤ (3,3,2)) ⇒ true.
      Allocation   Need    Available
      A B C        A B C   A B C
P0    0 1 0        7 4 3   2 3 0
P1    3 0 2        0 2 0
P2    3 0 2        6 0 0
P3    2 1 1        0 1 1
P4    0 0 2        4 3 1
● Executing safety algorithm shows that sequence <P1, P3, P4, P0, P2>
satisfies safety requirement.
● Can request for (3,3,0) by P4 be granted?
● Can request for (0,2,0) by P0 be granted?
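The safety and request checks above can be sketched in Python for this example. The matrices are copied from the tables in the slides; the helper name is mine:

```python
# Banker's safety algorithm: find a safe sequence, if one exists.
def is_safe(available, need, allocation):
    n, m = len(need), len(available)
    work, finish, order = available[:], [False] * n, []
    progressing = True
    while progressing:
        progressing = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend Pi runs to completion and returns its allocation.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                order.append(i)
                progressing = True
    return all(finish), order

# Snapshot at T0 (Need = Max - Allocation, from the tables above).
alloc = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need  = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
avail = [3,3,2]
safe_t0, _ = is_safe(avail, need, alloc)          # True: T0 state is safe

# After granting P1's request (1,0,2):
alloc1 = [[0,1,0], [3,0,2], [3,0,2], [2,1,1], [0,0,2]]
need1  = [[7,4,3], [0,2,0], [6,0,0], [0,1,1], [4,3,1]]
avail1 = [2,3,0]
safe_after, seq = is_safe(avail1, need1, alloc1)  # True, e.g. P1,P3,P4,P0,P2

# P4's request (3,3,0): 3 > Available A = 2, so P4 must simply wait.
# P0's request (0,2,0) fits in Available, but pretending to grant it
# leaves a state where no process's remaining need fits -> unsafe, deny:
alloc0 = [[0,3,0], [3,0,2], [3,0,2], [2,1,1], [0,0,2]]
need0  = [[7,2,3], [0,2,0], [6,0,0], [0,1,1], [4,3,1]]
avail0 = [2,1,0]
safe_p0, _ = is_safe(avail0, need0, alloc0)       # False
print(safe_t0, safe_after, safe_p0)
```

So of the two questions above: P4's request for (3,3,0) cannot be granted because the resources are not available, and P0's request for (0,2,0) must be denied because it would leave the system in an unsafe state.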
MEMORY MANAGEMENT
MEMORY MANAGEMENT
● Background
● Swapping
● Contiguous Allocation
● Paging
● Segmentation
● Segmentation with Paging
Memory management
● Memory management is the functionality of an operating system
which handles or manages primary memory and moves processes
back and forth between main memory and disk during execution.
Memory management keeps track of each and every memory
location, regardless of whether it is allocated to some process or
free.
Logical and Physical Address
● Logical Address is generated by CPU while a program is running.
● The logical address is virtual address as it does not exist
physically, therefore, it is also known as Virtual Address.
● This address is used as a reference to access the physical
memory location by CPU.
● The term Logical Address Space is used for the set of all logical
addresses generated by a program.
● The hardware device called the Memory-Management Unit (MMU) is
used for mapping a logical address to its corresponding physical address.
● Physical Address identifies a physical location of required data in
a memory.
● The user never directly deals with the physical address but can
access by its corresponding logical address.
● The user program generates the logical address and thinks that
the program is running in this logical address but the program
needs physical memory for its execution, therefore, the logical
address must be mapped to the physical address by MMU before
they are used.
● The term Physical Address Space is used for all physical
addresses corresponding to the logical addresses in a Logical
address space.
Q. Consider the following heap (Figure) in which blank regions are not
in use and hatched regions are in use. The sequence of requests for
blocks of size 300, 25, 125, 50 can be satisfied if we use
Fragmentation
● As processes are loaded and removed from memory, the free
memory space is broken into little pieces.
● Over time, processes cannot be allocated to memory blocks because
the free blocks are too small, and those blocks remain unused. This
problem is known as Fragmentation.
External fragmentation
● Total memory space is enough to satisfy a request or to reside a
process in it, but it is not contiguous, so it cannot be used.
Internal fragmentation
● The memory block assigned to a process is bigger than requested.
Some portion of the block is left unused, and it cannot be used by
another process.
Dynamic Storage-Allocation Problem
● How to satisfy a request of size n from a list of free holes
○ First-fit: Allocate the first hole that is big enough
○ Best-fit: Allocate the smallest hole that is big enough;
must search entire list, unless ordered by size. Produces
the smallest leftover hole.
○ Worst-fit: Allocate the largest hole; must also search
entire list. Produces the largest leftover hole.
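The three placement strategies above can be sketched as searches over a free-hole list. The hole sizes here are an assumed example, not from the slides:

```python
# Each function returns the index of the chosen hole, or None if no fit.
def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:            # take the first hole big enough
            return i
    return None

def best_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None   # smallest hole that fits

def worst_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None   # largest hole

holes = [100, 500, 200, 300, 600]   # free hole sizes in KB (assumed)
print(first_fit(holes, 212))  # 1 -> the 500 KB hole
print(best_fit(holes, 212))   # 3 -> the 300 KB hole, smallest leftover
print(worst_fit(holes, 212))  # 4 -> the 600 KB hole, largest leftover
```

Note how best-fit and worst-fit must scan the whole list, while first-fit can stop at the first match, matching the trade-offs listed above.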
Thank you
OPERATING SYSTEM
NIMISHA K M
KERALA PSC EXPERT
Directory Implementation
Linear list
● Linear list of file names with pointer to the data blocks
○ Simple to program
○ Time-consuming to execute
■ Linear search time
■ Could keep ordered alphabetically via linked list or use
B+ tree
Hash Table
● linear list with hash data structure
● Decreases directory search time
● Collisions – situations where two file names hash to the
same location
● Only good if entries are fixed size, or use chained-overflow
method
Allocation Methods - Contiguous
● An allocation method refers to how disk blocks are allocated for
files:
● Contiguous allocation – each file occupies set of contiguous
blocks
● Best performance in most cases
● Simple – only starting location (block #) and length (number
of blocks) are required
● Problems include finding space for file, knowing file size,
external fragmentation, need for compaction off-line
(downtime) or on-line
Allocation Methods - Linked
● Linked allocation – each file a linked list of blocks
● File ends at nil pointer
● No external fragmentation
● Each block contains pointer to next block
● No compaction needed
● Free space management system called when new block
needed
● Improve efficiency by clustering blocks into groups but
increases internal fragmentation
● Reliability can be a problem
● Locating a block can take many I/Os and disk seeks
Linked Allocation
Allocation Methods - Indexed
● Indexed allocation
● Each file has its own index block(s) of pointers to its data blocks
● Logical view
Example
File Concept
● Contiguous logical address space
● Types:
○ Data ,numeric ,character ,binary
○ Program
● Contents defined by file’s creator
○ Many types :-text file, source file, executable file
File Attributes
● Name – only information kept in human-readable form
● Identifier – unique tag (number) identifies file within file system
● Type – needed for systems that support different types
● Location – pointer to file location on device
● Size – current file size
● Protection – controls who can do reading, writing, executing
● Time, date, and user identification – data for protection,
security, and usage monitoring
● Information about files are kept in the directory structure, which
is maintained on the disk
File Operations
● File is an abstract data type
● Create
● Write – at write pointer location
● Read – at read pointer location
● Reposition within file - seek
● Delete
● Truncate
● Open(Fi) – search the directory structure on disk for entry Fi
, and move the content of entry to memory
● Close (Fi) – move the content of entry Fi in memory to directory
structure on disk
File Types – Name,
Extension
Access Methods-Sequential access
Direct Access
● Relative access method.
● Fixed-length logical records allow the program to read and
write records rapidly
● The direct access is based on the disk model of
a file since disk allows random access to any
file block.
● There is no restriction on the order of reading
and writing for a direct access file.
Example of Index and Relative Files
Directory Structure
A collection of nodes containing information about all files
Both the directory structure and the files reside on disk
DISK STRUCTURE
● Disk can be subdivided into partitions
● Disk or partition can be used raw – without a file system, or
formatted with a file system
● Partitions also known as minidisks, slices
● Entity containing file system known as a volume
● Each volume containing file system also tracks that file
system’s info in device directory or volume table of contents
FILE SYSTEM ORGANIZATION
Operations Performed on Directory
●Search for a file
● Create a file
● Delete a file
● List a directory
● Rename a file
Directory Organization
The directory is organized logically to obtain
● Efficiency – locating a file quickly
● Naming – convenient to users
● Two users can have same name for different files
● The same file can have several different names
● Grouping – logical grouping of files by properties, (e.g., all
Java programs, all games, …)
Single-Level Directory
● A single directory for all users
● Naming problem
● Grouping problem
Two-Level Directory
● Separate directory for each user
● Path name
● Can have the same file name for different user
● Efficient searching
● No grouping capability
Tree-Structured Directories
● Efficient searching
● Grouping Capability
● Current directory (working directory)
● cd /spell/mail/prog
● type list
● Absolute or relative path name
● Creating a new file is done in current
directory
● Delete a file
rm <file-name>
● Creating a new subdirectory is done in
current directory
mkdir <dir-name>
Example: if in current directory /mail
mkdir count
Deleting “mail” ⇒ deleting the entire subtree
rooted by “mail”
Acyclic-Graph Directories
● Have shared
subdirectories and files
Demand Paging
● Similar to paging system with swapping
● Page is needed ⇒ reference to it
● invalid reference ⇒ abort
● not-in-memory ⇒ bring to memory
● Lazy swapper – never swaps a page into memory unless page will
be needed
● Swapper that deals with pages is a pager
Valid-Invalid Bit
With each page table entry a valid–invalid bit is
associated
(v ⇒ in-memory – memory resident, i ⇒ not-in-memory)
● Initially valid–invalid bit is set to i on all
entries
● During MMU address translation, if the valid–invalid bit in the
page table entry is i ⇒ page fault
Page Fault
● If there is a reference to a page, first reference to that page
will trap to the operating system: page fault
1. Operating system looks at another table to decide:
● Invalid reference ⇒ abort
● Just not in memory
2. Find free frame
3. Swap page into frame via scheduled disk operation
4. Reset tables to indicate page now in memory
Set validation bit = v
5. Restart the instruction that caused the page fault
Steps in Handling a Page Fault
Basic Page Replacement
1.Find the location of the desired page on disk
2. Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page replacement algorithm to
select a victim frame
- Write victim page to the disk;change the page table and frame
table accordingly
3. Bring the desired page into the (newly) free frame; update the
page and frame tables
4. Continue the process by restarting the instruction that caused
the trap
First-In-First-Out (FIFO) Algorithm
● Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
● 3 frames (3 pages can be in memory at a time per process)
● Can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5
● Adding more frames can cause more page faults!
● Belady’s Anomaly
● How to track ages of pages?
● Just use a FIFO queue
Optimal Algorithm
Replace page that will not be used for longest period of time
● 9 page faults is optimal for the example
● How do you know this?
● Can’t read the future
● Used for measuring how well your algorithm performs
Least Recently Used (LRU) Algorithm
● Use past knowledge rather than future
● Replace the page that has not been used for the longest period of
time
● Associate time of last use with each page
12 faults – better than FIFO but worse than OPT
● Generally good algorithm and frequently used
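The three policies can be compared directly on the reference string above. A small fault-counting sketch (the helper name is mine; `mem` doubles as arrival order for FIFO and recency order for LRU):

```python
def count_faults(refs, frames, policy):
    mem, faults = [], 0
    for t, p in enumerate(refs):
        if p in mem:
            if policy == "LRU":             # a hit refreshes recency
                mem.remove(p)
                mem.append(p)
            continue
        faults += 1
        if len(mem) == frames:
            if policy in ("FIFO", "LRU"):
                mem.pop(0)                  # oldest arrival / least recent
            else:                           # OPT: evict page used farthest ahead
                future = refs[t + 1:]
                victim = max(mem, key=lambda q: future.index(q)
                             if q in future else len(future) + 1)
                mem.remove(victim)
        mem.append(p)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
for pol in ("FIFO", "LRU", "OPT"):
    print(pol, count_faults(refs, 3, pol))
# FIFO 15, LRU 12, OPT 9 with 3 frames: LRU beats FIFO, OPT is the bound
```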
Thrashing
If a process does not have “enough” pages, the page-fault rate is
very high
● Page fault to get page
● Replace existing frame
● But quickly need replaced frame back
● This leads to:
○ Low CPU utilization
○ Operating system thinking that it needs to increase the
degree of multiprogramming
○ Another process added to the system
● Thrashing ≡ a process is busy swapping pages in and out
NON-CONTIGUOUS MEMORY ALLOCATION
● The Non-contiguous memory allocation allows a process
to acquire the several memory blocks at the different
location in the memory according to its requirement.
● Paging and segmentation are the two ways which allow a
process's physical address space to be non-contiguous.
PAGING
Physical address space of a process can be noncontiguous;
process is allocated physical memory whenever the latter is
available
● Avoids external fragmentation
● Avoids problem of varying sized memory chunks
● Divide physical memory into fixed-sized blocks called
frames
● Size is power of 2, between 512 bytes and 16 Mbytes
● Divide logical memory into blocks of same size called pages
● Keep track of all free frames
● To run a program of size N pages, need to find N free frames
and load program
● Set up a page table to translate logical to physical addresses
● Backing store likewise split into pages
● Still have Internal fragmentation
Address Translation Scheme
● Address generated by CPU is divided into:
● Page number (p) – used as an index into a page table
which contains base address of each page in physical
memory
● Page offset (d) – combined with base address to define
the
physical memory address that is sent to the memory unit
○ Page number (p): the high-order m − n bits
○ Page offset (d): the low-order n bits
● For a given logical address space of size 2^m and page size 2^n
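The split above is just a shift and a mask, and translation adds the frame base back on. A tiny sketch (the page-table contents are an assumed example):

```python
# Split a logical address into page number and offset (page size 2^n),
# then translate it through a page table mapping page -> frame.
def translate(addr, n, page_table):
    p = addr >> n                    # page number: high-order m-n bits
    d = addr & ((1 << n) - 1)        # offset: low-order n bits
    return (page_table[p] << n) | d  # frame base + offset

page_table = {0: 5, 1: 6, 2: 1, 3: 2}  # assumed page -> frame mapping
n = 10                                  # page size 2^10 = 1024 bytes
addr = 3500
print(addr >> n, addr & 1023)           # page 3, offset 428
print(translate(addr, n, page_table))   # frame 2 -> 2*1024 + 428 = 2476
```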
Paging Hardware
Paging Model of Logical and Physical
Memory
Free Frames
Implementation of Page Table
● Page table is kept in main memory
● Page-table base register (PTBR) points to the page table
● Page-table length register (PTLR) indicates size of the page table
● In this scheme every data/instruction access requires two
memory accesses
● One for the page table and one for the data / instruction
● The two memory access problem can be solved by the use of a
special fast-lookup hardware cache called associative memory
or translation look-aside buffers (TLBs)
Memory Protection
Memory protection implemented by associating protection bit with
each frame to indicate if read-only or read-write access is allowed
● Can also add more bits to indicate page execute-only, and so on
● Valid-invalid bit attached to each entry in the page table:
➢ “valid” indicates that the associated page is in the process’
logical address space, and is thus a legal page
➢ “invalid” indicates that the page is not in the process’ logical
address space
➢ Or use page-table length register (PTLR)
● Any violations result in a trap to the kernel
Valid (v) or Invalid (i) Bit In A Page Table
Shared Pages
Shared code
● One copy of read-only (reentrant) code shared among
processes (i.e., text editors, compilers, window systems)
● Similar to multiple threads sharing the same process space
● Also useful for interprocess communication if sharing of
read-write pages is allowed
Private code and data
● Each process keeps a separate copy of the code and data
● The pages for the private code and data can appear anywhere in
the logical address space
Hierarchical Page Tables
● Break up the logical address space into multiple page tables
● A simple technique is a two-level page table
● We then page the page table
Two-Level Page-Table Scheme
Address-Translation Scheme
Hashed Page Tables
● Common in address spaces > 32 bits
● The virtual page number is hashed into a page table
● This page table contains a chain of elements hashing to the same
location
● Each element contains (1) the virtual page number (2) the value of
the mapped page frame (3) a pointer to the next element
● Virtual page numbers are compared in this chain searching for a
match
● If a match is found, the corresponding physical frame is
extracted
● A variation for 64-bit addresses is clustered page tables
● Similar to hashed but each entry refers to several pages (such
as 16) rather than 1
● Especially useful for sparse address spaces (where memory
references are non-contiguous and scattered)
Hashed Page Table
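The chained lookup described above can be sketched as a tiny hash table keyed by virtual page number. The bucket count and mappings are assumptions for illustration:

```python
# Minimal hashed page table: each bucket chains (vpn, frame) pairs.
class HashedPageTable:
    def __init__(self, buckets=8):
        self.table = [[] for _ in range(buckets)]

    def map(self, vpn, frame):
        # Hash the virtual page number into a bucket and chain the entry.
        self.table[vpn % len(self.table)].append((vpn, frame))

    def lookup(self, vpn):
        # Walk the chain comparing virtual page numbers, as described above.
        for v, f in self.table[vpn % len(self.table)]:
            if v == vpn:
                return f
        return None       # not mapped -> page fault

pt = HashedPageTable()
pt.map(3, 42)
pt.map(11, 7)             # 3 and 11 collide in bucket 3 and share a chain
print(pt.lookup(3), pt.lookup(11), pt.lookup(5))  # 42 7 None
```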
Inverted Page Table
● One entry for each real page of memory
● Entry consists of the virtual address of the page stored in that
real memory location, with information about the process that
owns that page
● Decreases memory needed to store each page table, but
increases time needed to search the table when a page reference
occurs
● Use hash table to limit the search to one — or at most a few —
page-table entries
● TLB can accelerate access
● But how to implement shared memory?
● One mapping of a virtual address to the shared physical
address
Segmentation
● Memory-management scheme that supports user view of
memory
● A program is a collection of segments
● A segment is a logical unit such as:
main program, procedure, function, method, object,
local variables, global variables, common block,
stack, symbol table, arrays
User’s View of a Program
Logical View of Segmentation
Segmentation Architecture
● Logical address consists of a two tuple:
○ <segment-number, offset>,
● Segment table – maps two-dimensional physical addresses;
each table entry has:
● base – contains the starting physical address where the
segments reside in memory
● limit – specifies the length of the segment
● Segment-table base register (STBR) points to the segment
table’s location in memory
● Segment-table length register (STLR) indicates number of
segments used by a program;
segment number s is legal if s < STLR
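The base/limit check above can be sketched directly. The segment table values here are an assumed example, not from the slides:

```python
# Segment table: entry s holds (base, limit) for segment s (assumed values).
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(s, d):
    if s >= len(segment_table):      # s must be < STLR
        raise MemoryError("illegal segment number")
    base, limit = segment_table[s]
    if d >= limit:                   # offset must fall inside the segment
        raise MemoryError("offset beyond segment limit")
    return base + d                  # physical address = base + offset

print(translate(2, 53))   # segment 2 -> 4300 + 53 = 4353
print(translate(0, 999))  # segment 0 -> 1400 + 999 = 2399
```

An out-of-range offset such as `translate(1, 400)` traps (here raised as `MemoryError`), mirroring the hardware limit check.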
Segmentation Hardware
THREAD IN OS
BASICS
● A thread is a light weight process
● Thread is a basic unit of CPU utilization
● A thread consists of
○ Thread ID
○ Program counter
○ Register set
○ stack
● Resources like code, data, and files can be shared among
all threads within a process.
Single and Multithreaded
Processes
BENEFITS
● Responsiveness
● Resource Sharing
● Economy
● Utilization of Multiprocessor Architectures
MULTITHREADING MODEL
● There are two types of threads
○ User threads
○ Kernel threads
● User threads
● They are above the kernel and they are managed without
kernel support.
● User level threads are implemented in user-level libraries, rather
than via system calls, so thread switching does not need to call
the operating system or cause an interrupt to the kernel.
Kernel Threads:
● Kernel knows and manages the threads.
● Instead of thread table in each process, the kernel itself
has thread table (a master one) that keeps track of all the
threads in the system.
● In addition kernel also maintains the traditional process
table to keep track of the processes.
● OS kernel provides system call to create and manage
threads.
● There are three ways of establishing a relationship between user
threads and kernel threads
○ Many to one model
○ One to one model
○ Many to many model
Many to one model
ONE TO ONE MODEL
MANY TO MANY MODEL
QUESTIONS
1. A method of executing two or more
programs concurrently using the same
computer describe: (Kerala PSC - HSST 348/ 2005)
a. (A) multi processing
b. (B) multiprogramming
c. (C) virtual processing
d. (D) time sharing
2.Which one is not shared by the threads of the
same process? (Kerala psc - HSST 188/ 2006)
a. (A) Stack
b. (B) Address space
c. (C) File descriptor table
d. (D) Message queue
3. A thread consists of a thread ID, PC, register
set, and a ….
a. stack
b. queue
c. array
d. Linked list
4. A computer system that permits multiple
users to run programs at the same time is
(Kerala PSC 20/2013 - SST 2013)
a. (A) Real time system
b. (B) Time sharing system
c. (C) Multiprogramming system
d. (D) Multi tasking system
5.……….. is the scheduler which selects
processes from secondary storage for
scheduling (Kerala PSC 20/2013 - HSST 2013)
a. (A) Process scheduler
b. (B) Short term scheduler
c. (C) Medium term scheduler
d. (D) Long term scheduler
6.Which one of the following is NOT a
fundamental process state? (Kerala PSC
20/2013 HSST 2013)
a. A)blocked
b. (B) ready
c. (C)terminated
d. (D) executing
7.program in execution is called? (Kerala PSC
12/2014 )
a. Process
b. (B) Instruction
c. (C) Procedure
d. (D) Function
8.Light weight process is called (Kerala PSC
12/2014 )
a. Thread
b. (B) Process
c. (C) Fork
d. (D) None
9.Which one is the example of real time OS
a. RTLinux
b. VxWorks
c. Windows CE
d. All of the above
10. Networked OS runs on
a. server
b. Every system in the network
c. Both server and every system in the network
d. None of the above
11. Which of the following is an example of
spooled device? GATE 95
a. A line printer is used to print the output of a
number of jobs
b. A terminal used to enter input data to running
program
c. A secondary storage device in virtual memory
system
d. A graphic display device.
12. System calls are usually invoked by using ?
GATE 99
a. A software interrupt
b. Polling
c. An indirect jump
d. A privileged instruction
13. A processor needs software interrupt to?
a. Test the interrupt system of the processor
b. Implement co-routines
c. Obtain system services which need execution of
privileged instructions
d. Return from subroutine
14. Let the time taken to switch between user and
kernel modes of execution be t1 while the time taken
to switch between two process be t2. Which of the
following is true?
a. t1>t2
b. t1=t2
c. t1<t2
d. Nothing can be said about the relation between
t1 and t2
15. Consider the following statements about user
level threads and kernel level threads. Which one of
the following statements is FALSE ?
a. Context switch time is longer for kernel level
threads than the user level threads.
b. User level threads do not need any hardware
support
c. Related kernel level threads can be scheduled on
different processors in a multi processor system.
d. Blocking one kernel level thread blocks all related
threads.
16. A thread is usually defined as a “light weight
process” because an operating system (OS)
maintains smaller data structures for a thread than
for a process. In relation to this which of the
following is true?
a. On per-thread basis, the OS maintains only CPU
register state.
b. The OS does not maintain a separate stack for
each thread
c. On per-thread basis, the OS does not maintain
virtual memory state
d. On per- thread basis, the OS maintains only
scheduling and accounting information
17. A process executes the code
fork();
fork();
fork();
The total number of child processes created is
a. 3.
b. 4
c. 7
d. 8