Amity School of Engineering & Technology
Module-2
Operating System-1
CSE 703
Advanced Problem Solving
Techniques
By: Dr. Ghanshyam Prasad Dubey
Associate Professor (CSE-ASET, AUMP)
Amity School of Engineering & Technology
• Module II: Operating System-I
– Processes and Scheduling policies: Introduction to
processes management, operating system views of
processes, various process transition states, Introduction
to Processor scheduling, Introduction to various types of
schedulers, Performance criteria in scheduling algorithms,
Concept of FCFS scheduling algorithm, Concept of priority
scheduling algorithm like SJF, Concept of non-preemptive
and preemptive algorithms, Concept of round-robin
scheduling algorithm, Concept of multi-level queues,
feedback queues.
– Interprocess Communication: Introduction to
Interprocess Communication and Synchronization, Critical
regions and Conditional critical regions, Semaphores.
Introduction to monitors, various modes of monitors,
Issues in message implementation, Concept of mutual
exclusion with messages.
Amity School of Engineering & Technology
Introduction to processes management
• A Program does nothing unless its instructions are
executed by a CPU.
• A program in execution is called a process. To
accomplish its task, a process needs computer
resources.
• There may exist more than one process in the
system which may require the same resource at the
same time.
• Therefore, the operating system has to manage all
the processes and the resources in a convenient
and efficient way.
• A process is a program in execution.
Amity School of Engineering & Technology
• The memory layout of a process is
typically divided into multiple
sections. These sections include:
• Text section—the executable code
• Data section—global variables
• Heap section—memory that is
dynamically allocated during
program run time
• Stack section—temporary data
storage when invoking functions
(such as function parameters, return
addresses, and local variables)
Amity School of Engineering & Technology
OS views of processes
• The operating system does not perform any
functions on its own, but it provides an
environment in which various programs and
applications can do useful work.
• The operating system may be observed from the
point of view of the user or the system, and it is
known as the user view and the system view.
• There are mainly two types of views of the
operating system.
1. User view
2. System view
Amity School of Engineering & Technology
• 1. User view − The user viewpoint focuses on how the
user interacts with the operating system through the
usage of various application programs.
• 1.1 Single-user viewpoint: These systems are designed
for a single-user experience and meet the needs of a
single user, where performance is not given as much
focus as in multi-user systems. Most computer users use
a monitor, keyboard, printer, mouse and other
accessories to operate their computer system.
• 1.2 Multi-user viewpoint: These systems are designed
for a multi-user experience and meet the needs of
multiple users, for example when there is one mainframe
computer and many users at their own terminals
interacting with it (e.g. client-server systems).
Amity School of Engineering & Technology
• 1.3 Handheld user viewpoint: In the handheld
user viewpoint, smartphones interact via wireless
networks to perform numerous operations, but
their interfaces are not as capable as a full
computer interface, which limits their usefulness.
• 1.4 Embedded system user viewpoint: The
embedded system largely lacks a user point of view.
The remote control used to turn the TV on or off is
part of an embedded system in which the electronic
device communicates with another program; the
user viewpoint is limited but still allows the user to
engage with the application.
Amity School of Engineering & Technology
2. System View: A computer system comprises various
resources, such as hardware and software, which must be
managed effectively. The operating system is responsible
for managing hardware resources and allocating them to
programs and users to ensure maximum performance. From
the system viewpoint, the operating system is more
concerned with hardware services: CPU time, memory
space, I/O operations, and so on.
2.1 Resource Allocation: There are many resources present
in the hardware, such as registers, cache, RAM, ROM,
processors, I/O devices, etc. This resource allocation is
done by the operating system, which uses many techniques
and strategies to get the most out of the processor and
memory space. These techniques include paging, virtual
memory, caching, etc.
Amity School of Engineering & Technology
There are two resource allocation techniques −
• Resource partitioning approach − It divides the
resources in the system into many resource partitions,
where each partition may include various resources.
In this approach the operating system decides
beforehand what resources should be allocated to
which user program.
• Pool based approach −In the pool based approach
there is a common pool of resources. The operating
system checks the allocation status in the resource
table whenever a program makes a request for a
resource. If the resource is free, it allocates the
resources to the program.
Amity School of Engineering & Technology
2.2 Control program: Control programs govern
how input and output devices (hardware)
interact with the operating system. The user
may request an action that can only be done
with I/O devices; the operating system must
therefore properly communicate with, control,
detect, and handle such devices.
Amity School of Engineering & Technology
Process State
• The state of a process is defined in part by the
current activity of that process. A process may be
in one of the following states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to
occur (such as an I/O completion or reception of a
signal).
• Ready. The process is waiting to be assigned to a
processor.
• Terminated. The process has finished execution.
Amity School of Engineering & Technology
Diagram of process state
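The state diagram itself is not reproduced here. As a minimal stand-in, the sketch below encodes the usual transitions of the five-state model as a Python table (the transition set is the standard one: new to ready on admission, ready to running on dispatch, running to ready/waiting/terminated, waiting to ready on event completion).

```python
# Minimal sketch of the five-state process model as a transition table.
ALLOWED_TRANSITIONS = {
    "new":        {"ready"},                          # admitted by the OS
    "ready":      {"running"},                        # dispatched by the scheduler
    "running":    {"ready", "waiting", "terminated"}, # interrupt, I/O or event wait, exit
    "waiting":    {"ready"},                          # I/O or event completion
    "terminated": set(),
}

def move(state, new_state):
    """Return the new state if the transition is legal, else raise."""
    if new_state not in ALLOWED_TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# Example: a typical life cycle of one process
s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)
print(s)  # terminated
```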
Amity School of Engineering & Technology
Process Control Block
• Each process is represented in the operating system by
a process control block (PCB)—also called a task
control block. It contains many pieces of information
associated with a specific process, including these:
Amity School of Engineering & Technology
• Process state. The state may be new, ready,
running, waiting, halted, and so on.
• Program counter. The counter indicates the address
of the next instruction to be executed for this
process.
• CPU registers. The registers vary in number and
type, depending on the computer architecture. They
include accumulators, index registers, stack
pointers, and general-purpose registers, plus any
condition-code information.
• CPU-scheduling information. This information
includes a process priority, pointers to scheduling
queues, and any other scheduling parameters.
Amity School of Engineering & Technology
• Memory-management information. This information
may include such items as the value of the base and
limit registers and the page tables, or the segment
tables, depending on the memory system used by the
operating system
• Accounting information. This information includes the
amount of CPU and real time used, time limits, account
numbers, job or process numbers, and so on.
• I/O status information. This information includes the list
of I/O devices allocated to the process, a list of open
files, and so on.
In brief, the PCB simply serves as the repository for all
the data needed to start, or restart, a process, along
with some accounting data.
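As an illustration only (the field names below are illustrative, not any real kernel's structure), a PCB could be modeled as a small record holding the pieces of information listed above:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PCB:
    """Illustrative process control block holding the fields listed above."""
    pid: int
    state: str = "new"                 # process state
    program_counter: int = 0           # address of the next instruction
    cpu_registers: Dict[str, int] = field(default_factory=dict)
    priority: int = 0                  # CPU-scheduling information
    base_register: int = 0             # memory-management information
    limit_register: int = 0
    cpu_time_used: float = 0.0         # accounting information
    open_files: List[str] = field(default_factory=list)   # I/O status information

# On a context switch the kernel saves the CPU context into the old
# process's PCB and reloads the context from the new process's PCB.
```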
Amity School of Engineering & Technology
Process Scheduling
• The objective of multiprogramming is to have some
process running at all times so as to maximize CPU
utilization.
• The objective of time sharing is to switch a CPU
core among processes so frequently that users can
interact with each program while it is running.
• To meet these objectives, the process scheduler
selects an available process (possibly from a set of
several available processes) for program execution
on a core.
• Each CPU core can run one process at a time.
Amity School of Engineering & Technology
• For a system with a single CPU core, there will never be
more than one process running at a time, whereas a
multicore system can run multiple processes at one time.
• If there are more processes than cores, excess processes
will have to wait until a core is free and can be
rescheduled.
• The number of processes currently in memory is known as
the degree of multiprogramming.
• In general, most processes can be described as either I/O
bound or CPU bound.
• An I/O-bound process is one that spends more of its time
doing I/O than it spends doing computations.
• A CPU-bound process, in contrast, generates I/O
requests infrequently, using more of its time doing
computations.
Amity School of Engineering & Technology
The ready queue and wait queues
Amity School of Engineering & Technology
Scheduling Queues
• As processes enter the system, they are put into a ready
queue, where they are ready and waiting to execute on a
CPU’s core.
• Processes that are waiting for a certain event to occur - such
as completion of I/O - are placed in a wait queue.
• A new process is initially put in the ready queue. It waits there
until it is selected for execution, or dispatched.
• Once the process is allocated a CPU core and is executing,
one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O
wait queue.
• The process could create a new child process and then be placed in a
wait queue while it awaits the child’s termination.
• The process could be removed forcibly from the core, as a result of an
interrupt or having its time slice expire, and be put back in the ready
queue.
Amity School of Engineering & Technology
Queueing-diagram representation of
process scheduling
Amity School of Engineering & Technology
Types of Schedulers
• Process Scheduling handles the selection of a
process for the processor on the basis of a
scheduling algorithm and also the removal of a
process from the processor.
• There are many scheduling queues that are used in
process scheduling.
• When the processes enter the system, they are put
into the job queue.
• The processes that are ready to execute in the main
memory are kept in the ready queue.
• The processes that are waiting for the I/O device are
kept in the I/O device queue.
Amity School of Engineering & Technology
Long Term Scheduler
• The job scheduler or long-term scheduler selects
processes from the storage pool in the secondary
memory and loads them into the ready queue in the
main memory for execution.
• The long-term scheduler controls the degree of
multiprogramming.
• It must select a careful mixture of I/O bound and CPU
bound processes to yield optimum system throughput.
• If it selects too many CPU bound processes then the
I/O devices are idle and if it selects too many I/O bound
processes then the processor has nothing to do.
• The job of the long-term scheduler is very important and
directly affects the system for a long time.
Amity School of Engineering & Technology
Short Term Scheduler
• The short-term scheduler selects one of the
processes from the ready queue and schedules it
for execution.
• A scheduling algorithm is used to decide which
process will be scheduled for execution next.
• The short-term scheduler executes much more
frequently than the long-term scheduler as a process
may execute only for a few milliseconds.
• If it selects a process with a long burst time, then all
the processes after that will have to wait for a long
time in the ready queue.
• This is known as starvation and it may happen if a
wrong decision is made by the short-term scheduler.
Amity School of Engineering & Technology
Amity School of Engineering & Technology
Medium Term Scheduler
• The medium-term scheduler swaps out a
process from main memory.
• It can again swap in the process later from the
point it stopped executing.
• This is also called suspending and resuming the
process.
• This is helpful in reducing the degree of
multiprogramming.
• Swapping is also useful to improve the mix of I/O
bound and CPU bound processes in the
memory.
Amity School of Engineering & Technology
Amity School of Engineering & Technology
Performance criteria in scheduling algorithms
• CPU Utilization
• Throughput
• Turnaround Time
• Waiting Time
• Response Time
Amity School of Engineering & Technology
Context Switching
• Context switching is the mechanism to store
and restore the state or context of a CPU in the
Process Control Block so that a process's
execution can be resumed from the same point
at a later time.
• Using this technique, a context switcher enables
multiple processes to share a single CPU.
Context switching is an essential feature of a
multitasking operating system.
Amity School of Engineering & Technology
Amity School of Engineering & Technology
Scheduling Algorithms
• CPU scheduling deals with the problem of deciding
which of the processes in the ready queue is to be
allocated the CPU’s core.
• There are two categories of scheduling:
– Non-preemptive: Here the resource (the CPU) can’t be taken from a
process until the process completes execution. The switch occurs
when the running process terminates or moves to a waiting state.
– Preemptive: Here the OS allocates the CPU to a process for a fixed
amount of time. The process may switch from the running state to the
ready state or from the waiting state to the ready state. This switching
occurs because the CPU may give priority to another process and
replace the running process with a higher-priority one.
Amity School of Engineering & Technology
Amity School of Engineering & Technology
• Arrival Time: Time at which the process arrives in the
ready queue.
• Completion Time: Time at which the process completes its
execution.
• Burst Time: Time required by a process for CPU
execution.
• Turn Around Time: Difference between completion
time and arrival time.
Turn Around Time = Completion Time – Arrival Time
• Waiting Time (W.T.): Difference between turn
around time and burst time.
Waiting Time = Turn Around Time – Burst Time
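Applying the two formulas is a one-liner each; the numbers below are illustrative only.

```python
# Turnaround and waiting time from completion, arrival and burst times.
def turnaround(completion, arrival):
    return completion - arrival

def waiting(completion, arrival, burst):
    return turnaround(completion, arrival) - burst

# e.g. a process arriving at t = 2 ms, needing 6 ms of CPU, finishing at t = 17 ms:
print(turnaround(17, 2))   # 15 ms
print(waiting(17, 2, 6))   # 9 ms
```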
Amity School of Engineering & Technology
First-Come, First-Served Scheduling
• With this scheme, the process that requests the
CPU first is allocated the CPU first.
• The implementation of the FCFS policy is easily
managed with a FIFO queue.
• When a process enters the ready queue, its
PCB is linked onto the tail of the queue.
• When the CPU is free, it is allocated to the
process at the head of the queue.
• FCFS is inherently a non-preemptive scheduling
algorithm: once the CPU has been allocated to a
process, that process keeps it until it terminates or
requests I/O.
Amity School of Engineering & Technology
• Consider the following set of processes that arrive at
time 0, with the length of the CPU burst given in
milliseconds:
• Suppose the processes arrive in the order P1, P2, P3 and
are served in FCFS order (Gantt chart).
• The waiting time is 0 milliseconds for process P1, 24
milliseconds for process P2, and 27 milliseconds for
process P3. Thus, the average waiting time is (0 + 24 +
27)/3 = 17 milliseconds.
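The burst-time table for this example is not reproduced on the slide; the values used below (24, 3 and 3 ms for P1, P2, P3) are the standard textbook figures and are consistent with the waiting times quoted above. A minimal sketch:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process when all arrive at time 0, served in FCFS order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)          # a process waits until every earlier one finishes
        clock += burst
    return waits

bursts = [24, 3, 3]                  # assumed bursts for P1, P2, P3 (textbook example)
waits = fcfs_waiting_times(bursts)
print(waits)                         # [0, 24, 27]
print(sum(waits) / len(waits))       # 17.0 ms average
```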
Amity School of Engineering & Technology
Exercise (Preemptive FCFS):
Process   Burst Time   Arrival Time
P1        6            2
P2        2            5
P3        8            1
P4        3            0
P5        4            4
Amity School of Engineering & Technology
Shortest Job First
• The scheduler selects the process with the minimum
burst time for its execution.
• This algorithm has two versions: preemptive and
non-preemptive.
• The algorithm helps reduce the average waiting time
of processes that are in line for execution.
• Process throughput is improved as processes with
the minimum burst time are executed first.
• The turnaround time is significantly less.
• The SJF algorithm can’t be implemented exactly for
short-term scheduling, as the length of the upcoming
CPU burst can’t be known in advance (it can only be
predicted).
Amity School of Engineering & Technology
Exercise (Non-Preemptive SJF):
Process   Burst Time   Arrival Time
P1        6            2
P2        2            5
P3        8            1
P4        3            0
P5        4            4
The average waiting time = (0 + 1 + 4 + 7 + 14)/5 = 5.2 ms.
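A short sketch that reproduces this result from the exercise table above (non-preemptive SJF with arrival times):

```python
def sjf_nonpreemptive(procs):
    """procs: name -> (burst, arrival). Returns the waiting time of each process."""
    remaining = dict(procs)
    clock, waits = 0, {}
    while remaining:
        ready = {p: (b, a) for p, (b, a) in remaining.items() if a <= clock}
        if not ready:                               # CPU idle until the next arrival
            clock = min(a for _, a in remaining.values())
            continue
        p = min(ready, key=lambda q: ready[q][0])   # shortest burst among arrived jobs
        burst, arrival = remaining.pop(p)
        waits[p] = clock - arrival
        clock += burst
    return waits

procs = {"P1": (6, 2), "P2": (2, 5), "P3": (8, 1), "P4": (3, 0), "P5": (4, 4)}
w = sjf_nonpreemptive(procs)
print(w)                                 # {'P4': 0, 'P1': 1, 'P2': 4, 'P5': 7, 'P3': 14}
print(sum(w.values()) / len(w))          # 5.2 ms, as stated above
```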
Amity School of Engineering & Technology
Preemptive SJF
The average waiting time for this example is
[(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5
milliseconds.
Nonpreemptive SJF scheduling would result in an
average waiting time of 7.75 milliseconds.
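The process table behind these figures is not shown on the slide; the data set assumed below (P1: arrival 0, burst 8; P2: 1, 4; P3: 2, 9; P4: 3, 5) is the standard textbook example and reproduces the 6.5 ms figure. A minimal preemptive-SJF (shortest-remaining-time-first) sketch:

```python
def srtf_waiting_times(procs):
    """Shortest-remaining-time-first with 1 ms ticks. procs: name -> (burst, arrival)."""
    remaining = {p: b for p, (b, a) in procs.items()}
    arrival = {p: a for p, (b, a) in procs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if not ready:
            clock += 1
            continue
        p = min(ready, key=lambda q: remaining[q])   # smallest remaining time runs
        remaining[p] -= 1
        clock += 1
        if remaining[p] == 0:
            finish[p] = clock
            del remaining[p]
    return {p: finish[p] - arrival[p] - b for p, (b, a) in procs.items()}

procs = {"P1": (8, 0), "P2": (4, 1), "P3": (9, 2), "P4": (5, 3)}   # assumed textbook data
w = srtf_waiting_times(procs)
print(w)                                  # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2}
print(sum(w.values()) / len(w))           # 6.5 ms
```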
Amity School of Engineering & Technology
Round-Robin Scheduling
• The round-robin (RR) scheduling algorithm is
similar to FCFS scheduling, but preemption is
added to enable the system to switch between
processes.
• A small unit of time, called a time quantum or
time slice, is defined. A time quantum is
generally from 10 to 100 milliseconds in length.
• The ready queue is treated as a circular queue.
The CPU scheduler goes around the ready
queue, allocating the CPU to each process for a
time interval of up to 1 time quantum.
Amity School of Engineering & Technology
• The average waiting time under the RR policy is often
long.
• Consider the following set of processes that arrive at
time 0, with the length of the CPU burst given in
milliseconds: (use a time quantum of 4 milliseconds)
• P1 waits for 6 milliseconds (10 − 4), P2 waits for 4
milliseconds, and P3 waits for 7 milliseconds.
• Thus, the average waiting time is 17/3 = 5.66
milliseconds.
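As with the FCFS example, the burst-time table is not reproduced here; assuming the standard values of 24, 3 and 3 ms with a 4 ms quantum, a small round-robin sketch reproduces the quoted waiting times:

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Round robin for processes that all arrive at time 0."""
    remaining = list(bursts)
    waits = [0] * len(bursts)
    last_seen = [0] * len(bursts)        # time each process last left the CPU (or arrived)
    queue = deque(range(len(bursts)))
    clock = 0
    while queue:
        i = queue.popleft()
        waits[i] += clock - last_seen[i] # time spent waiting since it last ran
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        last_seen[i] = clock
        if remaining[i] > 0:
            queue.append(i)              # unfinished process goes to the tail
    return waits

waits = rr_waiting_times([24, 3, 3], quantum=4)   # assumed textbook bursts for P1-P3
print(waits)                                      # [6, 4, 7]
print(sum(waits) / len(waits))                    # 5.666... ms
```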
Amity School of Engineering & Technology
Priority Scheduling
• A priority is associated with each process,
and the CPU is allocated to the process
with the highest priority.
• Equal-priority processes are scheduled in
FCFS order.
• An SJF algorithm is simply a priority
algorithm.
Amity School of Engineering & Technology
• As an example, consider the following set of processes,
assumed to have arrived at time 0 in the order P1, P2, · ·
·, P5, with the length of the CPU burst given in
milliseconds:
• A process that is ready to run but waiting for the CPU
can be considered blocked. A priority scheduling
algorithm can leave some low-priority processes waiting
indefinitely.
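Since the process table for this example is not reproduced on the slide, the sketch below uses illustrative burst times and priorities of its own (lower number means higher priority, FCFS for ties), just to show the mechanics:

```python
def priority_schedule(procs):
    """procs: name -> (burst, priority); lower number = higher priority.
    All arrive at time 0; equal priorities fall back to FCFS (insertion) order."""
    order = sorted(procs, key=lambda p: procs[p][1])   # stable sort keeps FCFS ties
    clock, waits = 0, {}
    for p in order:
        waits[p] = clock
        clock += procs[p][0]
    return waits

# Illustrative values only (not the slide's missing table):
procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}
print(priority_schedule(procs))   # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
```

Aging (gradually raising the priority of processes that wait for a long time) is the usual remedy for the indefinite blocking mentioned above.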
Amity School of Engineering & Technology
Multilevel Queue Scheduling
• With both priority and round-robin scheduling, all
processes may be placed in a single queue, and the
scheduler then selects the process with the highest priority
to run.
• In practice, it is often easier to have separate queues for
each distinct priority, and priority scheduling simply
schedules the process in the highest-priority queue.
• This approach—known as multilevel queue— also works
well when priority scheduling is combined with round-robin:
if there are multiple processes in the highest-priority
queue, they are executed in round-robin order.
Amity School of Engineering & Technology
Separate queues for each priority
Amity School of Engineering & Technology
• A multilevel queue scheduling algorithm can
also be used to partition processes into several
separate queues based on the process type.
Multilevel queue scheduling
Amity School of Engineering & Technology
• Foreground and background processes are two types
of processes that have different response-time
requirements and so may have different scheduling
needs.
• In addition, foreground processes may have priority
(externally defined) over background processes.
• Separate queues might be used for foreground and
background processes, and each queue might have
its own scheduling algorithm.
• The foreground queue might be scheduled by an
RR algorithm, while the background queue is
scheduled by an FCFS algorithm.
Amity School of Engineering & Technology
• Let’s look at an example of a multilevel
queue scheduling algorithm with four
queues, listed below in order of priority:
• 1. Real-time processes
• 2. System processes
• 3. Interactive processes
• 4. Batch processes
• Each queue has absolute priority over
lower-priority queues.
Amity School of Engineering & Technology
Multilevel Feedback Queue Scheduling
• The multilevel feedback queue scheduling
algorithm allows a process to move between
queues.
• The idea is to separate processes according to the
characteristics of their CPU bursts.
• If a process uses too much CPU time, it will be
moved to a lower-priority queue. This scheme
leaves I/O-bound and interactive processes—which
are typically characterized by short CPU bursts—in
the higher-priority queues.
• In addition, a process that waits too long in a lower-
priority queue may be moved to a higher-priority
queue. This form of aging prevents starvation.
Amity School of Engineering & Technology
• For example, consider a multilevel feedback
queue scheduler with three queues, numbered
from 0 to 2.
Amity School of Engineering & Technology
• The scheduler first executes all processes
in queue 0. Only when queue 0 is empty
will it execute processes in queue 1.
• Similarly, processes in queue 2 will be
executed only if queues 0 and 1 are
empty.
• A process that arrives for queue 1 will
preempt a process in queue 2. A process
in queue 1 will in turn be preempted by a
process arriving for queue 0.
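A minimal sketch of the feedback policy only; the quantum values and the aging threshold below are assumed for illustration and are not taken from the slide:

```python
# Sketch of multilevel feedback queue policy decisions (quanta are assumed values).
QUANTUM = {0: 8, 1: 16, 2: None}       # queue 2 is served FCFS (no quantum)

def on_quantum_expired(level):
    """A process that uses its full quantum is demoted to a lower-priority queue."""
    return min(level + 1, 2)

def on_long_wait(level, waited_ms, aging_threshold=1000):
    """Aging: a process waiting too long is promoted to prevent starvation."""
    return max(level - 1, 0) if waited_ms >= aging_threshold else level

print(on_quantum_expired(0))              # 1: a CPU-bound job drifts downward
print(on_long_wait(2, waited_ms=1500))    # 1: a long-waiting job is pulled back up
```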
Amity School of Engineering & Technology
Interprocess Communication
• "Inter-process communication is used for
exchanging useful information between numerous
threads in one or more processes (or programs)."
• Processes executing concurrently in the operating
system may be either independent processes or
cooperating processes.
• A process is independent if it does not share data
with any other processes executing in the system.
• A process is cooperating if it can affect or be affected
by the other processes executing in the system.
• Clearly, any process that shares data with other
processes is a cooperating process.
Amity School of Engineering & Technology
• There are several reasons for providing an
environment that allows process cooperation:
– Information sharing: Since several applications may
be interested in the same piece of information (for
instance, copying and pasting), the operating system
must provide an environment that allows concurrent
access to such information.
– Computation speedup: If we want a particular task
to run faster, we must break it into subtasks, each of
which will be executing in parallel with the others.
Notice that such a speedup can be achieved only if
the computer has multiple processing cores.
– Modularity: We may want to construct the system in
a modular fashion, dividing the system functions into
separate processes or threads
Amity School of Engineering & Technology
• There are two fundamental models of interprocess
communication: shared memory and message
passing.
• In the shared-memory model, a region of memory
that is shared by the cooperating processes is
established. Processes can then exchange
information by reading and writing data to the
shared region.
• In the message-passing model, communication
takes place by means of messages exchanged
between the cooperating processes. Message
passing is useful for exchanging smaller amounts of
data, because no conflicts need be avoided.
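A small sketch of the two models using Python's multiprocessing module (chosen here purely for illustration): a Queue carries messages between processes, while a shared Value lives in memory visible to both.

```python
from multiprocessing import Process, Queue, Value

def producer_msg(q):
    q.put("hello via message passing")       # message-passing model

def producer_shm(counter):
    with counter.get_lock():                  # shared-memory model needs synchronization
        counter.value += 1

if __name__ == "__main__":
    q = Queue()
    counter = Value("i", 0)                   # an integer living in shared memory
    p1 = Process(target=producer_msg, args=(q,))
    p2 = Process(target=producer_shm, args=(counter,))
    p1.start(); p2.start()
    print(q.get())                            # receive the message
    p1.join(); p2.join()
    print(counter.value)                      # 1: the shared variable was updated
```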
Amity School of Engineering & Technology
Amity School of Engineering & Technology
Synchronization in Interprocess Communication
• It is either provided by the interprocess
control mechanism or handled by the
communicating processes.
• Some of the methods used to provide
synchronization are listed below (a short
sketch follows the list):
• 1. Semaphore: A semaphore is a
variable that controls the access to a
common resource by multiple processes.
The two types of semaphores are binary
semaphores and counting semaphores.
Amity School of Engineering & Technology
• 2. Mutual Exclusion: Mutual exclusion requires
that only one process thread can enter the critical
section at a time. This is useful for synchronization
and also prevents race conditions.
• 3. Barrier: A barrier does not allow individual
processes to proceed until all the processes reach
it. Many parallel languages and collective routines
impose barriers.
• 4. Spinlock: This is a type of lock. The processes
trying to acquire this lock wait in a loop while
checking if the lock is available or not. This is known
as busy waiting because the process is not doing
any useful operation even though it is active.
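A compact sketch of three of these primitives using Python's threading module (illustrative only):

```python
import threading

sem = threading.Semaphore(2)       # counting semaphore: at most 2 threads in the region
mutex = threading.Lock()           # mutual exclusion: at most 1 thread at a time
barrier = threading.Barrier(3)     # no thread proceeds until all 3 have arrived
shared_total = 0

def worker(n):
    global shared_total
    with sem:                      # acquire/release the semaphore
        with mutex:                # critical section protected by the mutex
            shared_total += n
    barrier.wait()                 # wait for the other workers

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_total)                # 0 + 1 + 2 = 3
```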
Amity School of Engineering & Technology
Critical Regions
• A critical region refers to a section of code or a data
structure that must be accessed exclusively by one
process or thread at a time.
• Critical regions are used to prevent concurrent access to
shared resources, such as variables, data structures, or
devices, in order to maintain data integrity and avoid race
conditions.
• Following are the characteristics and requirements for
critical regions:
• 1. Mutual Exclusion: Only one process or thread can
access the critical region at a time. This ensures
that concurrent access does not cause data
corruption or inconsistent states.
Amity School of Engineering & Technology
• 2. Atomicity: The execution of code within a critical
region is treated as an indivisible unit of execution. This
means that once a process or thread enters a critical
region, it completes its execution without interruption.
• 3. Synchronization: Processes or threads waiting to enter
a critical region are synchronized to prevent
simultaneous access. They commonly employ
synchronization primitives, such as locks or
semaphores, to control access and enforce mutual
exclusion.
• 4. Minimal Time Spent in Critical Regions: It is preferable
to minimize the time spent inside critical regions in order
to reduce the potential for contention and improve overall
system performance. Lengthy execution within a critical
region can increase the waiting time for other processes
or threads.
Amity School of Engineering & Technology
Conditional critical regions
• Conditional critical regions (CCRs) are an
alternative to semaphores for thread
synchronization.
• CCRs were proposed to simplify semaphore
programming.
• Some conditional critical regions can have
multiple conditions (see the sketch after this list):
– R/W lock: readers are waiting for the writer to leave;
writers are waiting for a reader or writer to leave
– bounded queue: dequeuers are waiting for the
queue to be non-empty; enqueuers are waiting
for the queue to be non-full
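Mainstream languages do not provide CCRs directly; the sketch below imitates the bounded-queue example with a condition variable, where each operation waits until its guard condition ("non-full" or "non-empty") holds before entering the region.

```python
import threading
from collections import deque

class BoundedQueue:
    """Bounded queue in the spirit of a conditional critical region:
    each operation enters the region only when its condition holds."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.cond = threading.Condition()

    def enqueue(self, item):
        with self.cond:                                                  # enter the region
            self.cond.wait_for(lambda: len(self.items) < self.capacity)  # await "non-full"
            self.items.append(item)
            self.cond.notify_all()                                       # wake waiting dequeuers

    def dequeue(self):
        with self.cond:
            self.cond.wait_for(lambda: len(self.items) > 0)              # await "non-empty"
            item = self.items.popleft()
            self.cond.notify_all()                                       # wake waiting enqueuers
            return item
```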
Amity School of Engineering & Technology
Introduction to Monitors
• Monitors are a synchronization tool used in process
synchronization to manage access to shared resources
and coordinate the actions of numerous threads or
processes.
• Compared with low-level primitives like locks or
semaphores, they offer a higher-level abstraction for
managing concurrency.
• A synchronization technique called a monitor unifies
operations and data structures into a single entity.
• They contain both the shared data and the operations
that can be carried out on it.
• By allowing only one thread or process to execute the
methods included in the monitor at once, monitors ensure
mutual exclusion.
Amity School of Engineering & Technology
• Monitors are used to prevent concurrent access to shared
resources by numerous threads or processes.
• When several entities attempt to alter the same resource
at once, they can avoid conflicts and data inconsistencies.
• A lock or mutex connected to a monitor ensures mutual
exclusion.
• The monitor can only be accessed by the thread or
process holding the lock.
• Condition variables are employed within the monitor to
control synchronization and communication.
• Using condition variables, threads or processes examine
conditions and wait for the conditions to become true.
• When a modification to the shared resource transforms
the condition into true and frees up waiting threads or
processes, signaling or notification takes place.
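A minimal monitor-style sketch in Python (illustrative, not a language-level monitor): one lock guards every method, and a condition variable provides the wait and signal operations.

```python
import threading

class Account:
    """Monitor-style class: a single lock guards every method (mutual exclusion),
    and a condition variable supports wait/signal inside the monitor."""
    def __init__(self):
        self._lock = threading.Lock()
        self._enough = threading.Condition(self._lock)
        self._balance = 0

    def deposit(self, amount):
        with self._lock:                 # only one thread inside the monitor at a time
            self._balance += amount
            self._enough.notify_all()    # signal: the awaited condition may now hold

    def withdraw(self, amount):
        with self._lock:
            self._enough.wait_for(lambda: self._balance >= amount)   # wait on the condition
            self._balance -= amount
```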
Amity School of Engineering & Technology
Monitors offer several advantages in process synchronization
• Simplicity − The design and implementation of concurrent
systems are made simpler by the high-level abstraction that
monitors offer.
• Mutual Exclusion − Monitors enforce mutual exclusion by
limiting the concurrent execution of their methods to a single
thread or process.
• Encapsulation − Monitors encapsulate shared resources and
the operations connected to them, making it simpler to
analyze and control concurrency.
• Synchronization − To control coordination and
communication across threads or processes, monitors have
built-in synchronization methods, such as condition variables.
• Modularity − Monitors encourage modularity by gathering
similar procedures and data together.
Amity School of Engineering & Technology
Disadvantages of Monitor
• Limited Expressiveness − Within a single monitor
instance, monitors are made to handle mutual exclusion
and synchronization. They might not work effectively for
more elaborate synchronization patterns or complex
synchronization circumstances involving many monitors.
• Potential for Deadlocks − Monitors enforce mutual
exclusion, but this does not completely rule out the
possibility of deadlock.
• Performance Overhead − Because locks must be
acquired and released, as well as because there may be
conflict between threads or processes attempting to
access the monitor, monitors may result in performance
overhead.
Amity School of Engineering & Technology
• Monitors have some limitations.
• For example, they can be less efficient than lower-
level synchronization primitives such as semaphores
and locks, as they may involve additional overhead
due to their higher-level abstraction.
• Additionally, monitors may not be suitable for all
types of synchronization problems, and in some
cases, lower-level primitives may be required for
optimal performance.
• Two different operations are performed on the
condition variables of the monitor.
– wait.
– signal.
Amity School of Engineering & Technology
Concept of mutual exclusion with messages
• Mutual exclusion is a mechanism that blocks multiple users
from accessing the same shared variable or data at the
same time.
• This idea is put to use in concurrent programming through
the critical section: a region of code in which multiple
processes or threads access the same shared resource.
• Only one thread can hold a mutex at a time; typically, a
mutex with a specific name is created at program startup.
• To prevent other threads from accessing a shared resource
at the same time, a thread currently using that resource
must lock the mutex.
• The mutex is unlocked when the thread releases the
resource. Mutual exclusion can be enforced at both the
hardware and software levels.
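One way to realise mutual exclusion with messages is a single token message circulating through a queue: receiving the token grants entry to the critical section, and sending it back releases it. A hedged sketch (the queue and worker names are illustrative):

```python
import threading, queue

token_q = queue.Queue()
token_q.put("token")                 # exactly one token message exists
shared_counter = 0

def worker():
    global shared_counter
    for _ in range(1000):
        token = token_q.get()        # receive the token: enter the critical section
        shared_counter += 1          # only the token holder touches the shared data
        token_q.put(token)           # send the token back: leave the critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_counter)                # 4000: no updates were lost
```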
Amity School of Engineering & Technology
Types of mutual exclusive devices
• Locks. A synchronization primitive that limits resource
access for multiple threads. Locks impose a mutual
exclusion concurrency control policy, and various
applications use different ways.
• Readers-writer locks. A synchronization primitive that
solves a readers-writers problem. Write operations require
exclusive access, while RW locks allow concurrent
read-only access: multiple threads can read the data, but
writing or altering the data requires an exclusive lock.
• Recursive locks. A mutual exclusion device that can be
locked many times using the same process/thread
without deadlocking.
Amity School of Engineering & Technology
• Semaphores. A variable or abstract data type used to
limit access to a shared resource by multiple threads and
avoid critical section issues in a concurrent system, like an
operating system that can handle multiple tasks at once.
• Monitors. A synchronization construct in concurrent
programming that enables threads to mutually exclude one
another and to wait (block) for a condition to become
false.
• Message passing. A computer program invocation
method. The invoking software sends a message to a
process and relies on that process and its supporting
infrastructure to choose and run relevant code.
• Tuple space. For parallel and distributed computing, a
tuple space is a way to use the associative memory
paradigm. It gives you a place to store tuples that you can
access at the same time.
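As a small illustration of one item from this list, a recursive lock can be re-acquired by the thread that already holds it without deadlocking; a sketch using Python's threading.RLock:

```python
import threading

rlock = threading.RLock()            # recursive lock

def outer():
    with rlock:                      # first acquisition
        inner()                      # calling inner() re-acquires the same lock

def inner():
    with rlock:                      # the same thread may lock again without deadlock
        print("both levels hold the recursive lock")

outer()
```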
Amity School of Engineering & Technology
Thanks
Further Study:
1. A. Silberschatz, P. B. Galvin, “Operating System
Concepts”, John Wiley & Sons.
2. www.geeksforgeeks.org
3. https://Javatpoint.com
4. https://www.gatevidyalay.com/
5. https://www.tutorialspoint.com/