
ADVANCED OPERATING SYSTEMS

Module I:

Introduction and Process Deadlocks


I. Introduction
What Is In This Chapter?
• Overview, Functions of an Operating System
• Design Approaches, Types of Advanced Operating Systems
• Synchronization Mechanisms, Concept of a Process, Concurrent Processes
• The Critical Section Problem, Other Synchronization Problems
• Language Mechanisms for Synchronization, Axiomatic Verification of Parallel Programs
Overview: What is an OS?
Functions of an Operating System

An operating system is the software that makes a computer actually work.
It is the software that enables all the programs we use.
The OS organizes and controls the hardware.
The OS acts as an interface between application programs and the machine hardware.
Examples: Windows, Linux, Unix, Mac OS, etc.


Functions of an Operating System

The two basic functions of an operating system are:

Resource management:
A user program accesses several hardware and software resources during its execution.
Examples: CPU, main memory, I/O devices, and various types of software (compiler, linker-loader, files, etc.).
The OS manages these resources and allocates them to users in an efficient and fair manner.
It encompasses the following functions:
• Time management (CPU and disk scheduling).
• Space management (main and secondary storage).
• Process synchronization and deadlock handling.
• Accounting and status information.

User friendliness:
The OS hides the unpleasant, low-level details and idiosyncrasies of a bare hardware machine.
It encompasses the following functions:
• Execution environment (process creation, control, and termination; file manipulation; interrupt handling; support for I/O operations; and language support).
• Error detection and handling.
• Protection and security.
• Fault tolerance and failure recovery.
Design Approaches

▪ Deal with the complexities of modern systems.
▪ Separation of policies and mechanisms:
▪ Policies - what should be done.
▪ Mechanisms - how it should be done.
▪ Three common approaches:
▪ Layered Approach
▪ Kernel Approach
▪ Virtual Machine Approach
Layered Approach
• Modular: divides the OS into functional layers.
• Simplifies design, implementation, and testing.

Kernel-Based Approach
• The kernel contains a collection of primitives that are used to build the OS.
• The OS implements policy; the kernel implements mechanisms.

Virtual Machine Approach
• Virtual software layer over the hardware.
• Creates the illusion of multiple instances of the hardware.
• Supports multiple instances of OSs.
Types of Advanced OSs

• Distributed Operating Systems
• Multiprocessor Operating Systems
• Database Operating Systems
• Real-time Operating Systems

Distributed Operating Systems

Control and manage resources for a network of autonomous computers:
• manage both hardware and software resources
• behave as a single monolithic system.

The user is not aware of program or resource location.

Design issues are the same as in traditional systems.

Practical issues:
• lack of shared memory
• lack of a global clock
• unpredictable communication delays.
Multiprocessor Operating Systems

Consist of a set of processors that:
• share a set of physical memory blocks
• share a common clock
• "share" over an interconnection network.

Control and manage hardware and software resources; the system is viewed as a uniprocessor system.

Design issues are the same as those of a traditional system.

Practical issues:
• increased complexity of synchronization and scheduling
• memory management, protection, and security.
Database Operating Systems

• Database systems place increased demands on an operating system to efficiently support:
• the concept of a transaction
• management of large volumes of data
• concurrency control
• system failure control
Real-time Operating Systems

Place application-specific special requirements on an operating system.

Policies and mechanisms are geared to ensuring jobs meet their deadlines.

The problem is one of resource scheduling and overall system utilization.
Traditional Kernel Architecture - Overview

Process:
• competes for system resources
• blocks if it cannot acquire a resource

Kernel:
• manages resources - allocates resources to processes "optimally"
• implements policy for resource allocation (sharing)
• provides a high-level abstract interface to resources
• centralized management simplifies synchronization and error recovery
Synchronization

• Nonpreemptive.
• The kernel is re-entrant.
• Only one process is active in the kernel at any given time (the others are blocked).
• Masking interrupts.
• Blocking operations.
Blocking Operations

• When a resource is unavailable (possibly locked), the process sets the flag and calls sleep().
• sleep() places the process on a blocked queue, sets its state to asleep, and calls swtch().
• When the resource is released, wakeup() is called.
• All sleeping processes are woken and their state set to runnable (placed on the runnable queues).
• When running again, the process must verify that the resource is available.
Synchronization Mechanisms

A common element in all synchronization schemes is to allow a process to finish work on a critical region of the program before other processes have access to it.
• Applicable both to multiprocessors and to two or more processes in a single-processor (time-shared) system.

It is called a critical region because its execution must be handled as a unit.
Process Synchronization: Lock-and-Key Synchronization

• The process first checks whether the key is available.
• If it is available, the process must pick it up and put it in a lock to make it unavailable to all other processes.
• For this scheme to work, both actions must be performed in a single machine cycle.
• Several locking mechanisms have been developed, including test-and-set, WAIT and SIGNAL, and semaphores.
Test-And-Set (TS) Locking

• Test-and-set is a single indivisible machine instruction (TS).
• In a single machine cycle it tests to see if the key is available and, if it is, sets it to "unavailable."
• The actual key is a single bit in a storage location that can contain a zero (if it's free) or a one (if busy).
• Simple procedure to implement.
• Works well for a small number of processes.
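As a concrete illustration (not from the slides), here is a minimal spinlock sketch built on C11's atomic_flag, whose test-and-set primitive behaves like the TS instruction described above; the names acquire and release are ours.

#include <stdatomic.h>

/* The key: one bit of state.  'Clear' means free, 'set' means busy. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

/* Busy-wait until the test-and-set finds the flag clear.
   atomic_flag_test_and_set() sets the flag and returns its previous
   value in one indivisible step, mirroring the TS instruction. */
void acquire(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;   /* spin: the key was busy, keep testing */
}

/* Release: clear the flag so another process's TS can succeed. */
void release(void)
{
    atomic_flag_clear(&lock);
}

Note that the while loop is exactly the busy waiting criticized on the next slide.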
Problems with Test-And-Set

• When many processes are waiting to enter a critical region, starvation could occur because processes gain access in an arbitrary fashion.
• Unless a first-come, first-served policy were set up, some processes could be favored over others.
• Waiting processes remain in unproductive, resource-consuming wait loops (busy waiting).
• This consumes valuable processor time and relies on the competing processes to test the key.
WAIT and SIGNAL Locking

• Modification of test-and-set.
• Adds two new operations, which are mutually exclusive and become part of the process scheduler's set of operations:
– WAIT
– SIGNAL
• WAIT and SIGNAL free processes from the "busy waiting" dilemma and return control to the OS, which can then run other jobs while the waiting processes are idle.
WAIT
• Activated when the process encounters a busy condition code.
• Sets the process control block (PCB) to the blocked state.
• Links it to the queue of processes waiting to enter this particular critical region.
• The Process Scheduler then selects another process for execution.
SIGNAL
• Activated when a process exits the critical region and the condition code is set to "free."
• Checks the queue of processes waiting to enter this critical region and selects one, setting it to the READY state.
• The Process Scheduler selects this process for running.
Semaphores

• Semaphore - a nonnegative integer variable used as a flag.
• Signals if and when a resource is free and can be used by a process.
• The most well-known semaphores are the signaling devices used by railroads to indicate whether a section of track is clear.
• Dijkstra (1965) introduced two operations on the semaphore to overcome the process synchronization problem:
• P stands for the Dutch word *proberen* (to test).
• V stands for *verhogen* (to increment).
P (Test) and V (Increment)

• If we let s be a semaphore variable, then the V operation on s is simply to increment s by 1.
• V(s): s := s + 1
• The P operation on s is to test the value of s and, if it's not zero, to decrement it by one.
• P(s): if s > 0 then s := s − 1
• P and V are executed by the OS in response to calls issued by any one process naming a semaphore as a parameter.
MUTual EXclusion (Mutex)

• P and V operations on a semaphore enforce the concept of mutual exclusion, which is necessary to avoid having two operations attempt to execute at the same time.
• Such a semaphore is called a mutex (MUTual EXclusion).
• P(mutex): if mutex > 0 then mutex := mutex − 1
• V(mutex): mutex := mutex + 1
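A hedged sketch of a binary semaphore used as a mutex, assuming POSIX semaphores (sem_wait corresponds to P, sem_post to V); the worker threads and the shared counter are illustrative additions, not from the slides. Without the P/V pair around the increment, the two threads could interleave their updates and lose counts.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;          /* binary semaphore, initialized to 1      */
static long counter = 0;     /* shared resource the threads both update */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);    /* P(mutex): enter the critical region */
        counter++;           /* only one thread executes this at a time */
        sem_post(&mutex);    /* V(mutex): leave the critical region */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    sem_init(&mutex, 0, 1);
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the mutex */
    return 0;
}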
Process Cooperation

• There are occasions when several processes work directly together to complete a common task.
• Two famous examples are the problems of "producers and consumers" and "readers and writers."
• Each case requires both mutual exclusion and synchronization, and both are implemented by using semaphores.
Concurrent Programming

• Concurrent processing system -- one job uses several processors to execute sets of instructions in parallel.
• Requires a programming language and a computer system that can support this type of construct.
• Increases computation speed.
• Increases the complexity of the programming language and hardware (machinery and communication among machines).
• Reduces the complexity of working with array operations within loops, of performing matrix multiplication, of conducting parallel searches in databases, and of sorting or merging files.
Explicit & Implicit Parallelism

• Explicit parallelism -- the programmer must explicitly state which instructions can be executed in parallel.
• Implicit parallelism -- automatic detection by the compiler of instructions that can be performed in parallel.
High-Level Synchronization Mechanisms

• Several high-level mechanisms that are easier to use have been proposed:
• Monitors
• Critical Regions
• Read/Write locks
• We will study monitors (Java and Pthreads provide synchronization mechanisms based on monitors); see the sketch below.
• They were suggested by Dijkstra, developed more thoroughly by Brinch Hansen, and formalized by Tony Hoare in the early 1970s.
• Several parallel programming languages have incorporated some version of monitors as their fundamental synchronization mechanism.
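As an illustrative sketch (the function and variable names are ours, not from the slides), the Pthreads mutex/condition-variable pair gives monitor-style protection for a single shared resource: the waiting thread is blocked by the scheduler, much like the WAIT/SIGNAL scheme described earlier, rather than busy waiting.

#include <pthread.h>
#include <stdbool.h>

/* Monitor-style state: all access goes through the lock, and threads
   wait on the condition variable instead of spinning. */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  resource_free = PTHREAD_COND_INITIALIZER;
static bool busy = false;

void acquire_resource(void)
{
    pthread_mutex_lock(&m);
    while (busy)                          /* re-check after every wakeup */
        pthread_cond_wait(&resource_free, &m);
    busy = true;
    pthread_mutex_unlock(&m);
}

void release_resource(void)
{
    pthread_mutex_lock(&m);
    busy = false;
    pthread_cond_signal(&resource_free);  /* wake one waiter, if any */
    pthread_mutex_unlock(&m);
}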
Producers and Consumers

One process produces some data that another process consumes later.
Producers and Consumers - 2

• Because the buffer holds a finite amount of data, synchronization must delay the producer from generating more data when the buffer is full.
• It must also delay the consumer from retrieving data when the buffer is empty.
• This task can be implemented with three semaphores:
• one to indicate the number of full positions in the buffer,
• one to indicate the number of empty positions in the buffer,
• and a mutex to ensure mutual exclusion between the processes.
Definitions of Producer & Consumer Processes

Producer:
  produce data
  P(empty)
  P(mutex)
  write data into buffer
  V(mutex)
  V(full)

Consumer:
  P(full)
  P(mutex)
  read data from buffer
  V(mutex)
  V(empty)
  consume data
Producers and Consumers Algorithm

empty := n
full := 0
mutex := 1
COBEGIN
  repeat PRODUCER until no more data
  repeat CONSUMER until buffer is empty
COEND
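The same scheme, written as a runnable sketch with POSIX semaphores and Pthreads; the ring buffer of N = 8 integer slots and the item counts are our assumptions for illustration.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                        /* buffer capacity (the 'n' above)  */

static int buffer[N];
static int in = 0, out = 0;        /* next write / next read positions */
static sem_t empty_slots;          /* counts empty positions, starts n */
static sem_t full_slots;           /* counts full positions,  starts 0 */
static sem_t mutex;                /* binary semaphore, starts 1       */

static void *producer(void *arg)
{
    (void)arg;
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty_slots);            /* P(empty) */
        sem_wait(&mutex);                  /* P(mutex) */
        buffer[in] = item;                 /* write data into buffer */
        in = (in + 1) % N;
        sem_post(&mutex);                  /* V(mutex) */
        sem_post(&full_slots);             /* V(full)  */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 32; i++) {
        sem_wait(&full_slots);             /* P(full)  */
        sem_wait(&mutex);                  /* P(mutex) */
        int item = buffer[out];            /* read data from buffer */
        out = (out + 1) % N;
        sem_post(&mutex);                  /* V(mutex) */
        sem_post(&empty_slots);            /* V(empty) */
        printf("consumed %d\n", item);     /* consume data */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}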
Readers and Writers

The readers and writers problem arises when two types of processes need to access a shared resource such as a file or database.

E.g., airline reservation systems.

Two solutions using P and V operations:

1. Give priority to readers over writers, so readers are kept waiting only if a writer is actually modifying the data. However, this policy results in writer starvation if there is a continuous stream of readers; see the sketch below.
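A sketch of solution 1 (readers priority) using two binary semaphores and a reader count; the names rw_mutex, count_mutex, and read_count are ours. Note how a continuous stream of readers can keep rw_mutex held indefinitely, which is exactly the writer-starvation risk noted above.

#include <semaphore.h>

/* Both semaphores must be initialized to 1 with sem_init() before use. */
static sem_t rw_mutex;       /* held by a writer, or by the first reader */
static sem_t count_mutex;    /* protects read_count                      */
static int   read_count = 0;

void reader_enter(void)
{
    sem_wait(&count_mutex);
    if (++read_count == 1)         /* first reader locks out writers */
        sem_wait(&rw_mutex);
    sem_post(&count_mutex);
    /* ... read the shared data here ... */
}

void reader_exit(void)
{
    sem_wait(&count_mutex);
    if (--read_count == 0)         /* last reader lets writers in */
        sem_post(&rw_mutex);
    sem_post(&count_mutex);
}

void writer(void)
{
    sem_wait(&rw_mutex);           /* exclusive access */
    /* ... modify the shared data here ... */
    sem_post(&rw_mutex);
}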
II. OPERATING SYSTEM Deadlocks

What Is In This Chapter?
• What is a deadlock?
• Staying Safe: Preventing and Avoiding Deadlocks
• Living Dangerously: Let the deadlock happen, then detect it and recover from it.
DEADLOCKS

EXAMPLES:
• "It takes money to make money."
• You can't get a job without experience; you can't get experience without a job.

BACKGROUND:

The cause of deadlocks: each process needs what another process has. This results from sharing resources such as memory, devices, and links.

Under normal operation, resource allocation proceeds like this:
1. Request a resource (suspend until it is available, if necessary).
2. Use the resource.
3. Release the resource.
DEADLOCKS: Bridge Crossing Example

• Traffic flows in only one direction.
• Each section of a bridge can be viewed as a resource.
• If a deadlock occurs, it can be resolved if one car backs up (preempt resources and rollback).
• Several cars may have to be backed up if a deadlock occurs.
• Starvation is possible.
DEADLOCKS: DEADLOCK CHARACTERISATION

NECESSARY CONDITIONS
ALL four of these must hold simultaneously for a deadlock to occur:

Mutual exclusion
At least one resource must be held by a process in a non-sharable (exclusive) mode.

Hold and Wait
A process holds a resource while waiting for another resource.

No Preemption
There is only voluntary release of a resource - nobody else can make a process give up a resource.

Circular Wait
Process A waits for Process B, which waits for Process C, ... which waits for Process A.
DEADLOCKS: RESOURCE ALLOCATION GRAPH

A visual (mathematical) way to determine if a deadlock has occurred, or may occur.

G = ( V, E )   The graph contains nodes and edges.

V   Nodes consist of processes = { P1, P2, P3, ... } and resource types = { R1, R2, ... }

E   Edges are ( Pi, Rj ) request edges or ( Rj, Pi ) assignment edges.

An arrow from a process to a resource indicates that the process is requesting the resource. An arrow from a resource to a process shows that an instance of the resource has been allocated to the process.

A process is drawn as a circle, a resource type as a square; dots inside the square represent the number of instances of the resource type. A request edge points to the square, an assignment edge comes from a dot.

[Figure: a request edge Pi -> Rj and an assignment edge Rj -> Pi]
DEADLOCKS: RESOURCE ALLOCATION GRAPH

• If the graph contains no cycles, then no process is deadlocked.
• If there is a cycle, then:
a) If resource types have multiple instances, then deadlock MAY exist.
b) If each resource type has one instance, then deadlock has occurred.

[Figure: resource allocation graph in which R3 is assigned to P3 and P2 requests R3]

[Figure: a resource allocation graph with a deadlock, and a resource allocation graph with a cycle but no deadlock]
DEADLOCKS: Strategy

HOW TO HANDLE DEADLOCKS - GENERAL STRATEGIES

There are three methods:

Ignore deadlocks: most operating systems do this!

Ensure deadlock never occurs, using either:
  Prevention   Prevent any one of the four conditions from happening.
  Avoidance    Allow all deadlock conditions, but calculate cycles about to happen and stop dangerous operations.

Allow deadlock to happen. This requires using both:
  Detection    Know a deadlock has occurred.
  Recovery     Regain the resources.
DEADLOCKS: Deadlock Prevention

Do not allow one of the four conditions to occur.

Mutual exclusion:
a) Automatically holds for printers and other non-sharable devices.
b) Shared entities (read-only files) don't need mutual exclusion (and aren't susceptible to deadlock).
c) Prevention is not possible here, since some devices are intrinsically non-sharable.

Hold and wait:
a) Collect all resources before execution.
b) A particular resource can only be requested when no others are being held. A sequence of resources is always collected beginning with the same one.
c) Utilization is low, and starvation is possible.
DEADLOCKS: Deadlock Prevention (continued)

Do not allow one of the four conditions to occur.

No preemption:
a) Release any resources already being held if the process can't get an additional resource.
b) Allow preemption - if a needed resource is held by another process that is also waiting on some resource, steal it; otherwise wait.

Circular wait:
a) Number the resources and only request them in ascending order (a sketch follows below).
b) EACH of these prevention techniques may cause a decrease in utilization and/or resources. For this reason, prevention isn't necessarily the best technique.
c) Prevention is generally the easiest to implement.
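As a concrete instance of circular-wait prevention rule (a), the sketch below (the lock names are ours) numbers two locks and always acquires them in ascending order, so no cycle of waiting threads can form regardless of scheduling.

#include <pthread.h>

/* Resources numbered by convention: lock_1 < lock_2.  Every thread
   that needs both must request lock_1 first, then lock_2. */
static pthread_mutex_t lock_1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_2 = PTHREAD_MUTEX_INITIALIZER;

void transfer_a_to_b(void)
{
    pthread_mutex_lock(&lock_1);   /* lower-numbered resource first */
    pthread_mutex_lock(&lock_2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock_2);
    pthread_mutex_unlock(&lock_1);
}

void transfer_b_to_a(void)
{
    /* Same ascending order, even though the "logical" direction is
       reversed -- this is what breaks the circular wait. */
    pthread_mutex_lock(&lock_1);
    pthread_mutex_lock(&lock_2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock_2);
    pthread_mutex_unlock(&lock_1);
}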
DEADLOCKS: Deadlock Avoidance

If we have prior knowledge of how resources will be requested, it's possible to determine whether we are entering an "unsafe" state.

Possible states are:

Deadlock       No forward progress can be made.

Unsafe state   A state that may allow deadlock.

Safe state     A state is safe if a sequence of processes exists such that there are enough resources for the first to finish, and as each finishes and releases its resources there are enough for the next to finish.

The rule is simple: if granting a request would cause an unsafe state, do not honor that request.

NOTE: All deadlock states are unsafe, but not all unsafe states are deadlocks.
DEADLOCKS: Deadlock Avoidance

[Figure: state space with the deadlock states inside the unsafe region; safe states lie outside it]

In an unsafe state, only with luck will processes avoid deadlock; in a safe state, the O.S. can avoid deadlock.
DEADLOCKS: Deadlock Avoidance

Let's assume a very simple model: each process declares its maximum needs. In this case, algorithms exist that will ensure that no unsafe state is reached. There are multiple instances of the resource in these examples.

EXAMPLE:
There is a total of 12 tape drives. The current state looks like this:

Process   Max Needs   Allocated   Current Needs
P0        10          5           5
P1        4           2           2
P2        9           2           7

In this example, < P1, P0, P2 > is a workable (safe) sequence.

Suppose P2 requests and is given one more tape drive. What happens then?
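Working through the numbers: 9 of the 12 drives are allocated (5 + 2 + 2), so 3 are available. P1 needs only 2 more, so it can run to completion; when it finishes it returns what it holds, leaving 3 + 2 = 5 free. Then P0's remaining need of 5 fits, and after P0 finishes 5 + 5 = 10 are free; then P2's remaining need of 7 fits, so < P1, P0, P2 > is a safe sequence. If P2 is instead granted one more drive, only 2 remain free: P1 can still finish (leaving 2 + 2 = 4 free), but then P0 still needs 5 and P2 needs 6, so no remaining process can finish -- the state is unsafe.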
DEADLOCKS: Deadlock Avoidance - Safety Algorithm

A method used to determine if a particular state is safe. The state is safe if there exists a sequence of processes such that every process can obtain its remaining needs and finish, avoiding deadlock.

The algorithm uses these variables:

need[i]   - the remaining resource needs of each process.
work      - temporary variable: how many instances of the resource are currently available.
finish[i] - flag for each process showing whether we've analyzed that process yet.

The sign of success for process i is: need[i] <= available + allocation[0] + ... + allocation[i-1], taken over the processes already shown able to finish.

Let work and finish be vectors of length m and n respectively.
DEADLOCKS: Deadlock Avoidance - Safety Algorithm

1. Initialize work = available.
   Initialize finish[i] = false, for i = 1, 2, ..., n.

2. Find an i such that:
   finish[i] == false and need[i] <= work
   If no such i exists, go to step 4.

3. work = work + allocation[i]
   finish[i] = true
   Go to step 2.

4. If finish[i] == true for all i, then the system is in a safe state.


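A sketch of this safety check in C for a single resource type, using the same variable names as the algorithm and the data from the tape-drive example; extending it to m resource types turns the scalars into vectors, as in the detection algorithm later.

#include <stdbool.h>
#include <stdio.h>

#define N 3   /* number of processes, as in the tape-drive example */

/* Returns true if the state is safe, i.e. some ordering lets every
   process finish.  'available' is the free count of the single
   resource type; allocation[] and need[] are per process.          */
bool is_safe(int available, const int allocation[N], const int need[N])
{
    int  work = available;                            /* step 1 */
    bool finish[N] = { false };

    for (;;) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {                 /* step 2 */
            if (!finish[i] && need[i] <= work) {
                work += allocation[i];                /* step 3 */
                finish[i] = true;
                progressed = true;
            }
        }
        if (!progressed)
            break;                                    /* go to step 4 */
    }
    for (int i = 0; i < N; i++)                       /* step 4 */
        if (!finish[i])
            return false;
    return true;
}

int main(void)
{
    int allocation[N] = { 5, 2, 2 };   /* P0, P1, P2 */
    int need[N]       = { 5, 2, 7 };   /* max needs minus allocated */
    printf("safe? %s\n", is_safe(3, allocation, need) ? "yes" : "no"); /* yes */

    int alloc2[N] = { 5, 2, 3 };       /* after granting P2 one more drive */
    int need2[N]  = { 5, 2, 6 };
    printf("safe? %s\n", is_safe(2, alloc2, need2) ? "yes" : "no");    /* no  */
    return 0;
}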
DEADLOCKS: Deadlock Detection

We need an algorithm that determines if a deadlock has occurred, and also a means of recovering from that deadlock.

SINGLE INSTANCE OF A RESOURCE TYPE
• Wait-for graph: remove the resources from the usual resource allocation graph and collapse the edges.
• An edge from P(j) to P(i) implies that P(j) is waiting for P(i) to release a resource.
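For the single-instance case, detection therefore reduces to finding a cycle in the wait-for graph. A small depth-first-search sketch over an adjacency matrix (the matrix representation and the NPROC constant are our assumptions):

#include <stdbool.h>

#define NPROC 4

/* waits_for[i][j] == true means process i waits for process j. */
static bool dfs(bool waits_for[NPROC][NPROC], int p,
                bool visited[NPROC], bool on_stack[NPROC])
{
    visited[p] = on_stack[p] = true;
    for (int q = 0; q < NPROC; q++) {
        if (!waits_for[p][q])
            continue;
        if (on_stack[q])                       /* back edge: cycle found */
            return true;
        if (!visited[q] && dfs(waits_for, q, visited, on_stack))
            return true;
    }
    on_stack[p] = false;
    return false;
}

/* Returns true if the wait-for graph contains a cycle, i.e. a deadlock. */
bool deadlocked(bool waits_for[NPROC][NPROC])
{
    bool visited[NPROC] = { false }, on_stack[NPROC] = { false };
    for (int p = 0; p < NPROC; p++)
        if (!visited[p] && dfs(waits_for, p, visited, on_stack))
            return true;
    return false;
}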
DEADLOCKS: Deadlock Detection

SEVERAL INSTANCES OF A RESOURCE TYPE

The complexity is of order m * n * n.

We need to keep track of:

available  - records how many resources of each type are available.
allocation - the number of resources of each type allocated to each process.
request    - the number of resources of each type requested by each process.

Let work and finish be vectors of length m and n respectively.
DEADLOCKS: Deadlock Detection

1. Initialize work[] = available[].
   For i = 1, 2, ..., n: if allocation[i] != 0 then finish[i] = false; otherwise finish[i] = true.

2. Find an i such that:
   finish[i] == false and request[i] <= work
   If no such i exists, go to step 4.

3. work = work + allocation[i]
   finish[i] = true
   Go to step 2.

4. If finish[i] == false for some i, then the system is in a deadlock state.
   Specifically, if finish[i] == false, then process P[i] is deadlocked.
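A sketch of this detection algorithm for m resource types and n processes, following the steps above; the fixed sizes M and N and the function name detect are our assumptions.

#include <stdbool.h>

#define M 2   /* resource types */
#define N 3   /* processes      */

/* Marks the deadlocked processes in deadlocked[] and returns how many
   there are.  work/available, allocation, and request follow the
   algorithm's variables. */
int detect(const int available[M], int allocation[N][M],
           int request[N][M], bool deadlocked[N])
{
    int  work[M];
    bool finish[N];

    for (int r = 0; r < M; r++)            /* step 1: work = available */
        work[r] = available[r];
    for (int i = 0; i < N; i++) {          /* finish[i] = (allocation[i] == 0) */
        finish[i] = true;
        for (int r = 0; r < M; r++)
            if (allocation[i][r] != 0)
                finish[i] = false;
    }

    bool progressed = true;
    while (progressed) {                   /* steps 2 and 3 */
        progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i])
                continue;
            bool fits = true;
            for (int r = 0; r < M; r++)
                if (request[i][r] > work[r])
                    fits = false;
            if (fits) {                    /* pretend process i finishes */
                for (int r = 0; r < M; r++)
                    work[r] += allocation[i][r];
                finish[i] = true;
                progressed = true;
            }
        }
    }

    int count = 0;                         /* step 4 */
    for (int i = 0; i < N; i++) {
        deadlocked[i] = !finish[i];
        if (deadlocked[i])
            count++;
    }
    return count;
}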
DEADLOCKS: Deadlock Recovery

So, the deadlock has occurred. Now, how do we get the resources back and regain forward progress?

PROCESS TERMINATION:
• Could delete all the processes in the deadlock -- this is expensive.
• Delete one at a time until the deadlock is broken (time consuming).
• Select which process to terminate based on priority, time executed, time to completion, needs for completion, or depth of rollback.
• In general, it's easier to preempt a resource than to terminate a process.

RESOURCE PREEMPTION:
• Select a victim - which process and which resource to preempt.
• Roll back to a previously defined "safe" state.
• Prevent one process from always being the one preempted (starvation).
DEADLOCKS: Deadlock Recovery

COMBINED APPROACH TO DEADLOCK HANDLING:

• The type of resource may dictate the best deadlock handling. Look at ease of implementation and the effect on performance.
• In other words, there is no one best technique.
• Cases include:
  Preemption for memory,
  Preallocation for swap space,
  Avoidance for devices (where needs can be extracted from the process).
DEADLOCKS: WRAPUP

In this section we have:

• Looked at the necessary conditions for a deadlock to occur.
• Determined how to prevent, avoid, detect, and recover from deadlocks.
