
Process Management

The document outlines key concepts related to processes in operating systems, including process scheduling, inter-process communication, and multithreaded programming. It explains the lifecycle of processes, the structure and role of Process Control Blocks (PCBs), and various scheduling algorithms. Additionally, it covers client-server communication methods and the importance of process creation and termination.


Processes

 Process concept, Process scheduling
 Operations on processes
 Inter-process communication
 Communication in client-server systems
 Multithreaded Programming: Multithreading models, Thread libraries, Threading issues, Examples
Objectives
 To introduce the notion of a process -- a program in execution, which
forms the basis of all computation
 To describe the various features of processes, including scheduling,
creation and termination, and communication
 To explore inter-process communication using shared memory and
message passing
 To describe communication in client-server systems
PROCESS CONCEPT
• Process – a program in execution; process execution must progress in sequential fashion
• A process has multiple parts:
 The program code, also called the text section
 Current activity, including the program counter and processor registers
 A stack containing temporary data (function parameters, return addresses, local variables)
 A data section containing global variables
 A heap containing memory dynamically allocated at run time
• A program is a passive entity stored on disk (an executable file); a process is active
 A program becomes a process when its executable file is loaded into memory
• Execution of a program is started via GUI mouse clicks, command-line entry of its name, etc.
• One program can be several processes
 Consider multiple users executing the same program
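The sections listed above can be seen in a small C program. This is a minimal sketch: the printed addresses vary from run to run and across systems, and the function and variable names are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

int global_var = 42;                          /* data section: global variable */

/* Print one address from each region of the process image. */
int show_layout(void) {
    int local_var = 7;                        /* stack: local variable */
    int *heap_var = malloc(sizeof *heap_var); /* heap: dynamic allocation */
    printf("text:  %p\n", (void *)&show_layout); /* code lives in the text section */
    printf("data:  %p\n", (void *)&global_var);
    printf("heap:  %p\n", (void *)heap_var);
    printf("stack: %p\n", (void *)&local_var);
    free(heap_var);
    return global_var;
}
```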
Process in Memory
Process State
 As a process executes, it changes state
 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur
 ready: The process is waiting to be assigned to a processor
 terminated: The process has finished execution
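The legal transitions between these five states (as drawn in the state diagram that follows) can be encoded as a small check. This function and its names are illustrative, not an OS API:

```c
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Returns 1 if from -> to is an edge in the five-state process lifecycle. */
int legal_transition(enum proc_state from, enum proc_state to) {
    switch (from) {
    case NEW:     return to == READY;          /* admitted */
    case READY:   return to == RUNNING;        /* dispatched by the scheduler */
    case RUNNING: return to == READY           /* interrupted / preempted */
                      || to == WAITING         /* I/O or event wait */
                      || to == TERMINATED;     /* exit */
    case WAITING: return to == READY;          /* I/O or event completed */
    default:      return 0;                    /* terminated: no outgoing edges */
    }
}
```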
Diagram of Process State Or Lifecycle of a Process
Process Control Block (PCB)
• A process control block (PCB) is a data structure used by computer operating systems to store all the information about a process.
• When a process is created (initialized or installed), the operating system creates a corresponding process control block.
• The process control block includes CPU-scheduling, I/O resource-management, and file-management information, etc.
• If the process is suspended, the contents of the registers are saved on a stack and the pointer to the particular stack frame is stored in the PCB.
• By this technique, the hardware state can be restored so that the process can be scheduled to run again.
Process Control Block (PCB)
 A Process Control Block is a data structure maintained by the operating system for every process.
 The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a process, as listed in the table below:
S.N.  Information & Description
1   Process State – The current state of the process, i.e., whether it is ready, running, waiting, etc.
2   Process Privileges – Required to allow/disallow access to system resources.
3   Process ID – Unique identification for each process in the operating system.
4   Pointer – A pointer to the parent process.
5   Program Counter – A pointer to the address of the next instruction to be executed for this process.
6   CPU Registers – The various CPU registers whose contents must be saved when the process leaves the running state.
7   CPU-Scheduling Information – Process priority and other scheduling information required to schedule the process.
8   Memory-Management Information – Page tables, memory limits, and segment tables, depending on the memory system used by the operating system.
9   Accounting Information – The amount of CPU time used for process execution, time limits, execution ID, etc.
10  I/O Status Information – A list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on Operating System and may contain different
information in different operating systems. Here is a simplified diagram of a PCB −

The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
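The table above maps naturally onto a C structure. The field names and sizes below are a hypothetical sketch for illustration, not any real OS's layout:

```c
enum pcb_state { PCB_NEW, PCB_READY, PCB_RUNNING, PCB_WAITING, PCB_TERMINATED };

/* Hypothetical PCB mirroring the ten table entries above. */
struct pcb {
    enum pcb_state state;        /* 1. process state */
    unsigned int   privileges;   /* 2. process privileges */
    int            pid;          /* 3. process ID */
    struct pcb    *parent;       /* 4. pointer to parent process */
    unsigned long  pc;           /* 5. program counter */
    unsigned long  regs[16];     /* 6. saved CPU registers */
    int            priority;     /* 7. CPU-scheduling information */
    void          *page_table;   /* 8. memory-management information */
    long           cpu_time;     /* 9. accounting information */
    int            open_fds[16]; /* 10. I/O status information */
};

/* Fill in a PCB the way an OS might at process creation. */
int make_pcb_demo(void) {
    struct pcb p = {0};
    p.pid = 1;
    p.state = PCB_READY;
    p.priority = 5;
    return p.pid;
}
```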
CPU Switch From Process to Process
Process Scheduling

Definition
Scheduling is a method used to distribute valuable computing resources, usually processor time, among the process threads, data flows, and applications that need them.

Scheduling is done to balance the load on the system, ensure equal distribution of resources, and enforce a given set of rules.
Operating System - Process Scheduling

Definition
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS
maintains a separate queue for each of the process states and
PCBs of all processes in the same execution state are placed in the
same queue. When the state of a process is changed, its PCB is
unlinked from its current queue and moved to its new state
queue.
The Operating System maintains the following important process
scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in
main memory, ready and waiting to execute. A new process is
always put in this queue.
Device queues − The processes which are blocked due to
unavailability of an I/O device constitute this queue.
The OS can use different policies to manage each queue (FIFO,
Round Robin, Priority, etc.). The OS scheduler determines how
to move processes between the ready and run queues which
can only have one entry per processor core on the system; in
the above diagram, it has been merged with the CPU.
Two-State Process Model

The two-state process model refers to the running and not-running states, described below:

1. Running: When a new process is created, it enters the system in the running state.

2. Not Running: Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.
Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run.

Schedulers are of three types:
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long-Term Scheduler
 It is also called a job scheduler.
 A long-term scheduler determines which programs are admitted to the system for processing.
 It selects processes from the queue and loads them into memory for execution, where they await CPU scheduling.
 The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs.
 It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.
Short-Term Scheduler

 It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It carries out the change from the ready state to the running state: the CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it.
 Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.
Medium-Term Scheduler
 Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
 A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion.
 In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Process Representation in Linux

A process is represented by the C structure task_struct, whose fields include:

    pid_t pid;                    /* process identifier */
    long state;                   /* state of the process */
    unsigned int time_slice;      /* scheduling information */
    struct task_struct *parent;   /* this process's parent */
    struct list_head children;    /* this process's children */
    struct files_struct *files;   /* list of open files */
    struct mm_struct *mm;         /* address space of this process */
Ready Queue And Various I/O Device Queues
Representation of Process Scheduling
 A queueing diagram represents queues, resources, and flows
Context Switch

 When CPU switches to another process, the system must save the state
of the old process and load the saved state for the new process via a
context switch
 Context of a process represented in the PCB
 Context-switch time is overhead; the system does no useful work while
switching
 The more complex the OS and the PCB, the longer the context switch
 Time is dependent on hardware support
 Some hardware provides multiple sets of registers per CPU, so multiple contexts can be loaded at once
OPERATIONS ON PROCESSES

 The system must provide mechanisms for:
 process creation
 process termination
Process Creation
 A parent process creates child processes, which, in turn, create other processes, forming a tree of processes
 Generally, a process is identified and managed via a process identifier (pid)
 Resource sharing options
 Parent and children share all resources
 Children share subset of parent’s resources
 Execution options
 Parent and children execute concurrently
 Parent waits until children terminate
A Tree of Processes in Linux

init (pid = 1)
 ├─ login (pid = 8415)
 │   └─ bash (pid = 8416)
 │       ├─ ps (pid = 9298)
 │       └─ emacs (pid = 9204)
 ├─ kthreadd (pid = 2)
 │   ├─ khelper (pid = 6)
 │   └─ pdflush (pid = 200)
 └─ sshd (pid = 3028)
     └─ sshd (pid = 3610)
         └─ tcsch (pid = 4005)
 Address space
 Child is a duplicate of the parent
 Child has a new program loaded into it
 UNIX examples
 fork() system call creates a new process
 exec() system call used after a fork() to replace the process's memory space with a new program
C Program Forking
Separate Process
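A C forking example along these lines can be sketched as follows: the parent forks a child and collects its exit status via wait(). In a real program the child would typically call exec() at the marked point to load a new program; here it simply exits so the result is checkable.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child, let it exit with `code`, and return the status
   the parent collects with waitpid(). */
int fork_and_wait(int code) {
    pid_t pid = fork();
    if (pid == 0) {
        /* child: a real program might call execlp() here to replace
           this process's memory space with a new program */
        _exit(code);
    }
    int status = 0;
    waitpid(pid, &status, 0);        /* parent waits for the child */
    return WEXITSTATUS(status);
}
```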
Process Termination
 A process executes its last statement and then asks the operating system to delete it using the exit() system call.
 Status data is returned from the child to the parent (via wait())
 The process's resources are deallocated by the operating system
 A parent may terminate the execution of child processes using the abort() system call. Some reasons for doing so:
 The child has exceeded its allocated resources
 The task assigned to the child is no longer required
 The parent is exiting and the operating system does not allow a child to continue if its parent terminates
 Some operating systems do not allow a child to exist if its parent has terminated. If a process terminates, then all its children must also be terminated.
 This is cascading termination: all children, grandchildren, etc. are terminated.
 The termination is initiated by the operating system.
 The parent process may wait for termination of a child process by using the wait() system call. The call returns status information and the pid of the terminated process:
pid = wait(&status);
 If no parent is waiting (did not invoke wait()), the process is a zombie
 If the parent terminated without invoking wait(), the process is an orphan
Interprocess Communication
Inter-process communication (IPC) is a mechanism which allows processes to communicate with each other and synchronize their actions.
Processes within a system may be:
1. Independent processes  2. Co-operating processes

Independent Process:
1. A process is independent if it cannot affect or be affected by the other processes executing in the system.
2. Any process that does not share data with any other process is independent.

Co-operating Process:
1. A process is co-operating if it can affect or be affected by the other processes executing in the system.
2. Any process that shares data with another process is co-operating.

Reasons for cooperating processes:
 Information sharing
 Computation speedup
 Modularity
 Convenience
Operating Systems Client/Server Communication

Client/server communication involves two components, namely a client and a server. There are usually multiple clients in communication with a single server. The clients send requests to the server, and the server responds to the client requests.

There are three main methods of client/server communication. These are given as follows:
• Sockets
• Remote Procedure Calls
• Pipes
Operating Systems Client/Server Communication

Sockets:
Sockets facilitate communication between two processes on the same
machine or different machines. They are used in a client/server
framework and consist of the IP address and port number. Many
application protocols use sockets for data connection and data transfer
between a client and a server.
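A minimal C sketch of this on a POSIX system: the parent listens on a loopback TCP socket (letting the OS pick a free port), a forked child connects as the client and sends a request, and the parent reads it. Error handling is omitted for brevity, and the "GET /" request is just an illustrative payload.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

/* Server (parent) and client (child) exchange one message over loopback. */
long socket_demo(char *buf, long cap) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                                /* OS picks a free port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    socklen_t len = sizeof addr;
    getsockname(srv, (struct sockaddr *)&addr, &len); /* learn the chosen port */
    listen(srv, 1);

    if (fork() == 0) {                                /* child acts as the client */
        int cli = socket(AF_INET, SOCK_STREAM, 0);
        connect(cli, (struct sockaddr *)&addr, sizeof addr);
        write(cli, "GET /", 5);                       /* send a request */
        close(cli);
        _exit(0);
    }
    int conn = accept(srv, NULL, NULL);               /* parent acts as the server */
    long n = 0, r;
    while ((r = read(conn, buf + n, cap - 1 - n)) > 0)
        n += r;                                       /* read until client closes */
    buf[n] = '\0';
    close(conn);
    close(srv);
    wait(NULL);
    return n;
}
```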
Operating Systems Client/Server Communication

Remote Procedure Calls


• These are interprocess communication techniques used for client-server-based applications. A remote procedure call is also known as a subroutine call or a function call.
• A client has a request that the RPC translates and sends to the server. This request may be a procedure or a function call to a remote server. When the server receives the request, it sends the required response back to the client.
Pipes
These are interprocess communication methods that have two end points. Data is entered into one end of the pipe by one process and consumed from the other end by the other process.

The two different types of pipes are ordinary pipes and named pipes. Ordinary pipes allow only one-way communication; for two-way communication, two pipes are required. Ordinary pipes require a parent-child relationship between the processes, as a pipe can only be accessed by processes that created or inherited it.
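An ordinary pipe between a parent and child can be sketched in C using the POSIX pipe() and fork() calls (error handling omitted):

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Child writes into one end of the pipe; parent reads from the other. */
long pipe_demo(char *buf, long cap) {
    int fd[2];
    pipe(fd);                       /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {
        close(fd[0]);               /* child only writes */
        write(fd[1], "hello", 5);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                   /* parent only reads */
    long n = read(fd[0], buf, cap - 1);
    buf[n > 0 ? n : 0] = '\0';
    close(fd[0]);
    wait(NULL);                     /* reap the child */
    return n;
}
```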
Operating Systems Client/Server Communication

Client Server Computing:


In client-server computing, a client requests a resource and the server provides that resource. A server may serve multiple clients at the same time, while a client is in contact with only one server. The client and server usually communicate via a computer network, but sometimes they may reside in the same system.
An illustration of the client server system is given as follows −
Process Scheduling Algorithms

A process scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. There are six popular process scheduling algorithms, which we discuss in this chapter:

 First-Come, First-Served (FCFS) Scheduling – the process that requests the CPU first gets the CPU first
 Shortest-Job-Next (SJN) Scheduling – the process with the shortest execution time is selected for execution next
 Priority Scheduling – tasks are selected for execution according to their priority
 Shortest Remaining Time – the CPU is allocated to the process that is closest to completion
 Round Robin (RR) Scheduling – works on the principle that each process gets an equal share of CPU time in turn
 Multiple-Level Queues Scheduling – maintains separate queues; processes are assigned to a queue based on a specific property
First Come First Serve (FCFS)

• Jobs are executed on a first-come, first-served basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on a FIFO queue.
• Poor in performance, as the average wait time is high.

The wait time of each process is as follows (Wait Time = Service Time − Arrival Time):

Process   Wait Time
P0        0 − 0 = 0
P1        5 − 1 = 4
P2        8 − 2 = 6
P3        16 − 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
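The computation above can be reproduced with a short C function. The burst times {5, 3, 8, 6} are inferred from the service times in the example:

```c
/* Average waiting time under FCFS: processes run strictly in arrival
   order, so wait = service start - arrival time. */
double fcfs_avg_wait(const int arrival[], const int burst[], int n) {
    double total = 0.0;
    double t = 0.0;                 /* time at which the CPU becomes free */
    for (int i = 0; i < n; i++) {
        if (t < arrival[i])
            t = arrival[i];         /* CPU idles until the process arrives */
        total += t - arrival[i];    /* waiting time of process i */
        t += burst[i];
    }
    return total / n;
}
```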


Shortest Job Next (SJF/SJN)

 This is also known as shortest job first, or SJF.
 This is a non-preemptive scheduling algorithm (its preemptive variant is shortest remaining time).
 Best approach to minimize waiting time.
 Easy to implement in batch systems where the required CPU time is known in advance.
 Impossible to implement in interactive systems where the required CPU time is not known.
 The processor should know in advance how much time the process will take.
SJF

Given: a table of processes with their arrival times and execution times.

Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              3                5
P2        2              8                14
P3        3              6                8

The waiting time of each process is as follows (Waiting Time = Service Time − Arrival Time):

Process   Waiting Time
P0        0 − 0 = 0
P1        5 − 1 = 4
P2        14 − 2 = 12
P3        8 − 3 = 5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25
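Non-preemptive SJF can be simulated in a few lines of C: among the processes that have already arrived, the one with the shortest burst runs next. The arrival and burst times are taken from the table above.

```c
/* Average waiting time under non-preemptive SJF (n <= 32 processes). */
double sjf_avg_wait(const int arrival[], const int burst[], int n) {
    int done[32] = {0};
    double total = 0.0, t = 0.0;
    for (int k = 0; k < n; k++) {
        int best = -1;
        for (int i = 0; i < n; i++)           /* shortest job already arrived */
            if (!done[i] && arrival[i] <= t &&
                (best < 0 || burst[i] < burst[best]))
                best = i;
        if (best < 0) {                       /* CPU idle: jump to next arrival */
            for (int i = 0; i < n; i++)
                if (!done[i] && (best < 0 || arrival[i] < arrival[best]))
                    best = i;
            t = arrival[best];
        }
        total += t - arrival[best];           /* wait = service start - arrival */
        t += burst[best];
        done[best] = 1;
    }
    return total / n;
}
```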


Priority Based Scheduling

• Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.
• Each process is assigned a priority. The process with the highest priority is executed first, and so on.
• Processes with the same priority are executed on a first-come, first-served basis.
• Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
Priority Scheduling Algorithm

Given: a table of processes with their arrival times, execution times, and priorities. Here we consider 1 to be the lowest priority.

Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              3                2          11
P2        2              8                1          14
P3        3              6                3          5

The waiting time of each process is as follows (Waiting Time = Service Time − Arrival Time):

Process   Waiting Time
P0        0 − 0 = 0
P1        11 − 1 = 10
P2        14 − 2 = 12
P3        5 − 3 = 2

Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
Shortest Remaining Time

• Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
• The processor is allocated to the job closest to completion, but it can be preempted by a newly ready job with a shorter time to completion.
• Impossible to implement in interactive systems where the required CPU time is not known.
• It is often used in batch environments where short jobs need to be given preference.
Round Robin Scheduling

• Round Robin is a preemptive process scheduling algorithm.
• Each process is provided a fixed time to execute, called a quantum.
• Once a process has executed for a given time period, it is preempted, and another process executes for its time period.
• Context switching is used to save the states of preempted processes.
Round Robin Scheduling

The wait time of each process is as follows (quantum = 3):

Process   Wait Time (Service Time − Arrival Time)
P0        (0 − 0) + (12 − 3) = 9
P1        (3 − 1) = 2
P2        (6 − 2) + (14 − 9) + (20 − 17) = 12
P3        (9 − 3) + (17 − 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
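The schedule above can be verified with a small round-robin simulation in C. The quantum of 3 and the burst times {5, 3, 8, 6} are inferred from the arithmetic above; a process preempted at the end of its quantum re-enters the queue behind any processes that arrived during its slice, and processes are assumed to be listed in arrival order.

```c
/* Average waiting time under Round Robin with time quantum q. */
double rr_avg_wait(const int arrival[], const int burst[], int n, int q) {
    int rem[16], queue[256], head = 0, tail = 0, next = 0;
    double total = 0.0, t = 0.0;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    queue[tail++] = 0; next = 1;              /* first arrival enters the queue */
    while (head < tail) {
        int i = queue[head++];
        if (t < arrival[i]) t = arrival[i];   /* CPU idles until it arrives */
        int run = rem[i] < q ? rem[i] : q;    /* run at most one quantum */
        t += run;
        rem[i] -= run;
        while (next < n && arrival[next] <= t)
            queue[tail++] = next++;           /* arrivals during the slice first */
        if (rem[i] > 0)
            queue[tail++] = i;                /* then re-queue if unfinished */
        else
            total += t - arrival[i] - burst[i]; /* wait = finish - arrival - burst */
        if (head == tail && next < n)
            queue[tail++] = next++;           /* idle gap: admit next arrival */
    }
    return total / n;
}
```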


Multiple-Level Queues Scheduling

• Multiple-level queues are not an independent scheduling algorithm. They make use of other existing algorithms to group and schedule jobs with common characteristics.
• Multiple queues are maintained for processes with common characteristics.
• Each queue can have its own scheduling algorithm.
• Priorities are assigned to each queue.
• For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue. The process scheduler then alternately selects jobs from each queue and assigns them to the CPU based on the algorithm assigned to that queue.
THREADS

• A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, and its own system registers.
• It is also called a lightweight process.
• A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack.
• It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.
THREADS Overview
A word processor may have a thread for displaying graphics, another
thread for responding to keystrokes from the user, and a third thread
for performing spelling and grammar checking in the background.
Thread

• In certain situations, a single application may be required to perform several similar tasks.
• For example, a web server accepts client requests for web pages, images, sound, and so forth.
• A busy web server may have several (perhaps thousands of) clients concurrently accessing it.
• If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time.
• The amount of time that a client might have to wait for its request to be serviced could be enormous.
Single and Multithreaded Processes
Difference between Process and Thread

Process:
1. A process is heavyweight.
2. It needs interaction with the OS.
3. It has its own memory and set of open files.
4. Multiple processes, without using threads, use more resources.
5. It is independent of other processes.

Thread:
1. A thread is lightweight.
2. It does not require interaction with the OS.
3. Threads of a process share the same set of open files.
4. Multiple-threaded processes use fewer resources.
5. One thread can read, write, or change another thread's data.
Benefits
• Responsiveness
• Resource sharing
• Economy
• Utilization of multiprocessor architectures

User Thread Libraries

Thread management is done by a user-level threads library. Three primary thread libraries:
• POSIX Pthreads
• Win32 threads
• Java threads
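A minimal Pthreads sketch: several threads of one process increment a shared counter, sharing the data section as described above, with a mutex protecting the update. The function names here are illustrative.

```c
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread adds 1000 to the shared counter. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock);      /* all threads share `counter` */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Create nthreads threads (max 16), join them, return the final count. */
long run_threads(int nthreads) {
    pthread_t tid[16];
    counter = 0;
    for (int i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);     /* wait for each thread to finish */
    return counter;
}
```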
Multithreading Models

 Many-to-One
 One-to-One
 Many-to-Many
1. Many-to-One Model

2. One-to-One Model

3. Many-to-Many Model
Issues of Thread

Threading Issues:
• Semantics of fork() and exec() system calls
• Thread cancellation
• Signal handling
• Thread pools
• Thread-specific data
• Scheduler activations
Threading Issues in the Different Environment

Windows XP Threads
Each thread contains
•A thread id
•Register set
•Separate user and kernel stacks
•Private data storage area
•The register set, stacks, and private storage area are known as the context of the thread
•The primary data structures of a thread include:
•ETHREAD (executive thread block)
•KTHREAD (kernel thread block)
•TEB (thread environment block)
Threading Issues in the Different Environment

Linux Threads
•Linux refers to them as tasks rather than threads
•Thread creation is done through clone() system call
• clone() allows a child task to share the address space of the
parent task (process)
Java Threads
Java threads are managed by the JVM. Java threads may be created by:
•Extending the Thread class
•Implementing the Runnable interface
Process Synchronization

• Concurrent access to shared data may result in data inconsistency (a change in behavior).
• Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
• Suppose that we want to provide a solution to the "producer-consumer" problem that fills all the buffers.
• We can do so by having an integer variable count that keeps track of the number of full buffers.
• Initially, count is set to 0.
• It is incremented by the producer after it produces a new buffer.
• It is decremented by the consumer after it consumes a buffer.
Producer Consumer Problem

Producer:

    while (true) {
        /* produce an item and put it in nextProduced */
        while (count == BUFFER_SIZE)
            ;  /* do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        count++;
    }

Consumer:

    while (true) {
        while (count == 0)
            ;  /* do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        count--;
        /* consume the item in nextConsumed */
    }
Critical section problem:

A section of code which reads or writes shared data.


Race Condition

The situation where two or more processes access and manipulate the same data concurrently, and the outcome depends on the particular order in which the accesses take place, is called a race condition.

count++ could be implemented as:

    register1 = count
    register1 = register1 + 1
    count = register1

count-- could be implemented as:

    register2 = count
    register2 = register2 - 1
    count = register2
Race Condition

Consider this execution interleaving with “count = 5” initially:


S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
Requirements for the Solution to the Critical-Section Problem

Mutual Exclusion: If process Pi is executing in its critical section, then no other process can be executing in its critical section.

Progress: If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.

Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Two general approaches are used to handle critical sections in operating systems: (1) preemptive kernels and (2) non-preemptive kernels.
A preemptive kernel allows a process to be preempted while it is running in kernel mode.
A non-preemptive kernel does not allow a process running in kernel mode to be preempted (such kernels are essentially free from race conditions on kernel data structures).
Semaphores

 A semaphore is a synchronization tool: an integer value that, apart from initialization, is accessed only through two standard atomic operations, wait and signal. Semaphores can be used to deal with the n-process critical-section problem and to solve various other synchronization problems.
 As it is difficult for the application programmer to use hardware synchronization instructions directly, we overcome this difficulty with the synchronization tool called a semaphore (implementations that do not require busy waiting also exist).
 Semaphore S is an integer variable that, apart from initialization, can be accessed only through the two standard atomic operations wait() and signal().
 Originally, the wait() and signal() operations were termed P() and V() respectively, from the Dutch words "proberen" (to test) and "verhogen" (to increment).
The definition of wait() is as follows:

    wait(S) {
        while (S <= 0)
            ;  /* busy wait */
        S--;
    }

The definition of signal() is as follows:

    signal(S) {
        S++;
    }
Classical Problems of Synchronization

• Bounded-Buffer Problem
• Readers and Writers Problem
• Dining-Philosophers Problem
Bounded-Buffer Problem

• Let us assume N buffers, each of which can hold one item.
• Semaphore mutex, initialized to 1, is used to provide mutual exclusion.
• Semaphore full is initialized to 0.
• Semaphore empty is initialized to N.
• Semaphores full and empty are used to count the full and empty buffers.
Bounded-Buffer Problem

The structure of the producer process:

    while (true) {
        /* produce an item */
        wait(empty);
        wait(mutex);
        /* add the item to the buffer */
        signal(mutex);
        signal(full);
    }

The structure of the consumer process:

    while (true) {
        wait(full);
        wait(mutex);
        /* remove an item from the buffer */
        signal(mutex);
        signal(empty);
        /* consume the removed item */
    }
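The wait/signal pseudocode above maps directly onto POSIX semaphores and Pthreads. This sketch, assuming unnamed semaphores as on Linux, fills and drains a four-slot buffer and sums what the consumer receives; the names and sizes are illustrative.

```c
#include <pthread.h>
#include <semaphore.h>

#define N 4                          /* buffer capacity */
#define ITEMS 100                    /* items produced in total */

static int buffer[N];
static int in = 0, out = 0;
static sem_t empty_slots, full_slots, mutex;
static long total = 0;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= ITEMS; i++) {
        sem_wait(&empty_slots);      /* wait(empty) */
        sem_wait(&mutex);            /* wait(mutex) */
        buffer[in] = i;              /* add the item to the buffer */
        in = (in + 1) % N;
        sem_post(&mutex);            /* signal(mutex) */
        sem_post(&full_slots);       /* signal(full) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);       /* wait(full) */
        sem_wait(&mutex);            /* wait(mutex) */
        total += buffer[out];        /* remove (consume) an item */
        out = (out + 1) % N;
        sem_post(&mutex);            /* signal(mutex) */
        sem_post(&empty_slots);      /* signal(empty) */
    }
    return NULL;
}

/* Run one producer and one consumer; return the sum of consumed items. */
long bounded_buffer_demo(void) {
    sem_init(&empty_slots, 0, N);    /* empty = N */
    sem_init(&full_slots, 0, 0);     /* full = 0  */
    sem_init(&mutex, 0, 1);          /* mutex = 1 */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return total;                    /* 1 + 2 + ... + ITEMS */
}
```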
Readers-Writers Problem

• A data set is shared among a number of concurrent processes.
• Readers only read the data set; they do not perform any updates.
• Writers can both read and write the data set (perform updates).
• If two readers read the shared data simultaneously, there is no problem. If a reader and a writer (or two writers) access the same data simultaneously, there is a problem.
• In the solution to the readers-writers problem, the processes share the following data structures:

    semaphore mutex, wrt;
    int readcount;

• Semaphore mutex is initialized to 1, semaphore wrt is initialized to 1, and integer readcount is initialized to 0.
Structure of the Reader and Writer Processes

The structure of a writer process:

    while (true) {
        wait(wrt);
        /* writing is performed */
        signal(wrt);
    }

The structure of a reader process:

    while (true) {
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(wrt);
        signal(mutex);
        /* reading is performed */
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);
        signal(mutex);
    }
Dining-Philosophers Problem

Shared data
Bowl of rice (data set)
Semaphore chopstick [5] initialized to 1
Monitors

• A high-level abstraction that provides a convenient and effective mechanism for process synchronization.
• A procedure can access only those variables that are declared in the monitor and its formal parameters.
• Only one process may be active within the monitor at a time.

Syntax of a monitor:

    monitor monitor-name
    {
        // shared variable declarations
        procedure P1 (…) { …. }
        …
        procedure Pn (…) { …… }
        initialization code (….) { … }
    }
