UNIT-1
INTRODUCTION TO OPERATING SYSTEM
Operating System
• An operating system (OS) is a crucial piece of software
that manages computer hardware and software resources
and provides common services for computer programs. It
acts as an intermediary between users and the computer
hardware, making it easier to execute and manage tasks.
Operating System - Purpose
• Mainframe OS
• Personal computer OS
• OS for handheld computers
Purpose:
• Optimize hardware utilization
• Easy interface for program execution
• Support complex games, business applications, and everything in-between
Computer System Organisation
Computer system organization refers to the way in which
the components of a computer system are structured and
interact to perform various tasks. It encompasses both the
physical hardware and the system software that manages
and integrates these components.
Components of computer system organisation:
• Computer-System Operation
• Storage Structure
• I/O Structure
Computer-System Operation
Central Processing Unit (CPU)
• Components:
• Arithmetic Logic Unit (ALU): Performs arithmetic and logical operations.
• Control Unit (CU): Directs the operation of the processor.
• Registers: Small, fast storage locations used for temporary data storage and
manipulation.
• Functions:
• Instruction Fetching: Retrieves instructions from memory.
• Instruction Decoding: Interprets the instructions.
• Instruction Execution: Carries out the instructions.
Storage/Memory:
• Types:
• Primary Memory (RAM): Volatile memory used for
temporary storage during processing.
• Secondary Memory: Non-volatile memory used for
permanent storage (e.g., hard drives, SSDs).
• Cache Memory: Small, high-speed memory located
close to the CPU to speed up access to frequently used
data.
I/O structure
• Input Devices: Allow users to provide data to the
computer (e.g., keyboard, mouse, scanner).
• Output Devices: Allow the computer to communicate
results to the user (e.g., monitor, printer, speakers).
• I/O Controllers: Manage the communication between the
CPU and peripheral devices.
Computer System architecture
• A computer system can be organised in a number of
different ways which we can categorise according to the
number of processors used, namely:
1.Single processor system
2.Multi processor system
1.Single processor system
Characteristics:
Single CPU:
• Only one central processing unit (CPU) handles all tasks.
• The CPU is responsible for executing instructions and processing data.
Simpler Design:
• Easier to design, implement, and manage.
• Fewer synchronization issues and no need for complex communication
mechanisms between CPUs.
Cost-Effective:
• Generally cheaper to produce and maintain compared to multi-
processor systems.
1.Single processor system
Limited Performance:
• Performance is limited to the capabilities of a single CPU.
• Cannot handle as many simultaneous tasks as a multi-processor
system.
Suitable Use Cases:
• Ideal for personal computers, low-end servers, and devices where the
workload is not excessively heavy.
Examples:
• Basic desktop computers
• Laptops
• Entry-level servers
2.Multi processor system
Characteristics:
Multiple CPUs:
• Consists of two or more CPUs working together.
• CPUs can be on the same chip (multi-core) or on separate chips.
Parallel Processing:
• Tasks can be divided among multiple processors, enabling parallel execution.
• Improves performance and efficiency, especially for compute-intensive tasks.
Complex Design:
• Requires sophisticated synchronization and communication mechanisms.
• More complex to design and manage due to the need to coordinate multiple
processors.
2.Multi processor system
Higher Cost:
• More expensive to produce and maintain.
• Increased power consumption and cooling requirements.
Scalability:
• Can scale to handle larger workloads by adding more
processors.
• Suitable for environments where high performance and
reliability are critical.
Advantages of Multi processor system
1.Increased throughput:
Increased throughput in an operating system (OS) refers to
the system’s ability to handle a higher volume of work in a
given amount of time. Achieving increased throughput
involves optimizing various aspects of the OS to ensure
efficient processing, resource management, and task
handling.
2.Economy of scale
3.Increased reliability
Types of Multi processor system
• Asymmetric multiprocessing
• Symmetric multiprocessing
• Clustered Systems
Asymmetric multiprocessing
• Processors are assigned specific tasks.
• One processor typically acts as the master, controlling the
system and assigning tasks to other processors.
• Used in systems where certain processors are
specialized for specific tasks.
Symmetric multiprocessing
• All processors share the same memory and I/O devices.
• Each processor runs an identical copy of the operating
system.
• Processors are equally capable and balanced in their
workload.
Clustered Systems
Interconnected Nodes:
• Nodes are connected to each other
and the shared storage via a network.
• This connection enables
communication and coordination
between the nodes.
Shared Storage Access:
• All nodes have access to the shared
storage, allowing for data sharing and
redundancy.
• Ensures that data is consistent and
available to any node in the cluster.
Clustered Systems-Benefits
High Availability:
• If one node fails, other nodes can continue processing, ensuring service
continuity.
• Shared storage ensures that data remains accessible even if a node fails.
Scalability:
• Additional nodes can be added to the cluster to increase processing
power and handle larger workloads.
• Storage capacity can also be expanded as needed.
Load Balancing:
• Workloads can be distributed across multiple nodes, preventing any single
node from becoming a bottleneck.
• Enhances system performance and resource utilization.
Operating System structure
• It represents the structure of
an operating system (OS) with
respect to memory
management, particularly
illustrating a simplistic view of
memory allocation for jobs
within the system.
• Multiprogramming increases
CPU utilization by organising
jobs so that the CPU always
has one to execute.
Operating system structures:
• An operating system is a construct that allows the user
application programs to interact with system hardware
• Since it is a complex structure, it should be created with
utmost care so it can be used and modified easily
• Based on complexity they are classified into two namely
1.Simple structure
2.Layered structure
Simple structure OS
• It is generally used for a single computer or for a small
group of computers.
• Since the interfaces and functional levels are clearly
separated in this structure, programs are able to access
Input and Output routines, which may result in illegal
access to Input and Output routines.
• Example MS-DOS
• It is better that operating systems have a modular
structure, which would lead to greater control over the
computer system and its various applications by hiding the
information internally without changing the outer interface.
Advantages of Simple/Monolithic Structure:
• It delivers better application performance because of the
few interfaces between the application program and the
hardware.
• It is easy for kernel developers to develop such an
operating system.
Disadvantages of Simple/Monolithic Structure:
• The structure is very complicated, as no clear boundaries
exist between modules.
• It does not enforce data hiding in the operating system.
Layered Structure
• An OS can be broken into pieces and retain much more control over
the system.
• In this structure, the OS is broken into a number of layers (levels).
• The bottom layer (layer 0) is the hardware, and the topmost layer
(layer N) is the user interface.
• These layers are so designed that each layer uses the functions of
the lower-level layers.
• This simplifies the debugging process, if lower-level layers are
debugged and an error occurs during debugging, then the error
must be on that layer only, as the lower-level layers have already
been debugged.
Layered Structure
Advantages of Layered Structure
• Layering makes it easier to enhance the operating system, as
the implementation of a layer can be changed easily without
affecting the other layers.
• It is very easy to perform debugging and system verification.
Disadvantages of Layered Structure
• In this structure, the application’s performance is degraded as
compared to simple structure.
• It requires careful planning for designing the layers, as the
higher layers use the functionalities of only the lower layers.
Operating system services
• The OS coordinates the use of the hardware and
application programs for various users. It provides a
platform for other application programs to work. The
operating system is a set of special programs that run on
a computer system that allows it to work properly. It
controls input-output devices, execution of programs,
managing files, etc.
Operating system services
• Program execution
• Input Output Operations
• Communication between Processes
• File Management
• Memory Management
• Security and Privacy
• Resource Management
• User Interface
• Error handling
• Time Management
• Process Management
Program Execution
• It is the Operating System that manages how a program is going
to be executed. It loads the program into the memory after which
it is executed. The order in which they are executed depends on
the CPU Scheduling Algorithms. A few are FCFS, SJF, etc.
Input Output Operations
Operating System manages the input-output operations and
establishes communication between the user and device drivers.
Device drivers are software that is associated with hardware that is
being managed by the OS so that the sync between the devices
works properly. It also provides access to input-output devices to a
program when needed.
Communication Between Processes
• The Operating system manages the communication
between processes. Communication between processes
includes data transfer among them. If the processes are
not on the same computer but connected through a
computer network, then also their communication is
managed by the Operating System itself.
File Management
• The operating system helps in managing files also. If a
program needs access to a file, it is the operating system
that grants access. These permissions include read-only,
read-write, etc. It also provides a platform for the user to
create, and delete files. The Operating System is
responsible for making decisions regarding the storage of
all types of data or files, i.e., floppy disk/hard disk/pen
drive, etc. The Operating System decides how the data
should be manipulated and stored.
Privacy and security:
• Security: The OS keeps the computer safe from
unauthorized users by adding a security layer. Security is
essentially a layer of protection that guards the computer
against threats such as viruses and hackers. The OS
provides defenses like firewalls and anti-virus software to
help ensure the safety of the computer and personal
information.
• Privacy: The OS provides the facility to keep essential
information hidden, like a lock on a door that only the
owner can open. It respects user secrets and helps keep
them private.
Resource Management
• System resources are shared between various
processes. It is the Operating system that manages
resource sharing. It also manages the CPU time among
processes using CPU Scheduling Algorithms. It also helps
in the memory management of the system. It also controls
input-output devices. The OS also ensures the proper use
of all the resources available by deciding which resource
to be used by whom
User interface
• User interface is essential and all operating systems
provide it. Users either interface with the operating
system through the command-line interface or graphical
user interface or GUI. The command interpreter executes
the next user-specified command.
• A GUI offers the user a mouse-based window and menu
system as an interface.
Error Handling
• The Operating System also handles the error occurring in
the CPU, in Input-Output devices, etc. It also ensures that
an error does not occur frequently and fixes the errors. It
also prevents the process from coming to a deadlock. It
also looks for any type of error or bug that can occur
while performing any task. A well-secured OS sometimes also acts
as a countermeasure for preventing any sort of breach of
the Computer System from any external source and
probably handling them.
Time Management
• Imagine a traffic light as the OS, which tells all the
cars (programs) whether to stop (red => simple queue),
get ready (yellow => ready queue), or move (green =>
under execution). This light (control) changes after a
certain interval of time at each side of the road (computer
system), so that the cars (programs) from all sides of the
road move smoothly without congestion.
System calls
• A system call is a programmatic way in which a computer
program requests a service from the kernel of the
operating system it is executed on. A system call is a way
for programs to interact with the operating system.
Types of system calls
1. File System Operations
• These system calls are made while working with files in
OS, File manipulation operations such as creation,
deletion, termination etc.
• open(): Opens a file for reading or writing. A file could be
of any type like text file, audio file etc.
• read(): Reads data from a file. Just after the file is opened
through open() system call, then if some process want to
read the data from a file, then it will make a read() system
call.
1. File System Operations
• write(): Writes data to a file. Whenever the user makes any
kind of modification in a file and saves it, that’s when this
is called.
• close(): Closes a previously opened file.
• seek(): Moves the file pointer within a file. This call is
typically made when the user tries to read data
from a specific position in a file.
2. Process Control
• These types of system calls deal with process creation,
process termination, process allocation, deallocation etc.
Basically manages all the process that are a part of OS.
• fork(): Creates a new process (child) by duplicating the
current process (parent). Parent and child then run
concurrently; the parent is halted temporarily only if it
waits for the child process to finish its
execution.
2. Process Control
• exec(): Loads and runs a new program in the current process
and replaces the current process with a new process.
• wait(): The primary purpose of this call is to ensure that the
parent process doesn’t proceed further with its execution
until all its child processes have finished their execution.
• exit(): It simply terminates the current process.
• kill(): This call sends a signal to a specific process and has
various purpose including – requesting it to quit voluntarily,
or force quit, or reload configuration.
3.Information maintenance:
• These types of system calls deals with memory allocation,
deallocation & dynamically changing the size of a memory
allocated to a process. In short, the overall management of
memory is done by making these system calls.
• brk(): Changes the data segment size for a process in HEAP
Memory. It takes an address as argument to define the end of
the heap and explicitly sets the size of HEAP.
• sbrk(): This call is also for memory management in heap, it also
takes an argument as an integer (+ve or -ve) specifying whether
to increase or decrease the size respectively.
3.Information maintenance:
• mlock() and munlock(): Memory locking defines a mechanism
through which certain pages stay in memory and are not
swapped out to the swap space on the disk.
4.Communications:
• When two or more process are required to communicate,
then various IPC mechanism are used by the OS which
involves making numerous system calls
• pipe(): Creates a unidirectional communication channel
between processes
• socket(): Creates a network socket for communication.
• shmget(): It is short for – ‘shared-memory-get’. It allows
one or more processes to share a portion of memory and
achieve interprocess communication.
4.Communications:
• semget(): It is short for – ‘semaphore-get’. This call
typically manages the coordination of multiple processes
while accessing a shared resource that is, the critical
section.
5. Device Management
• The device management system calls are used to interact
with various peripheral devices attached to the PC or
even the management of the current device.
• SetConsoleMode(): This call is made to set the mode of
console (input or output). It allows a process to control
various console modes. In Windows, it is used to control
the behaviour of command line.
• WriteConsole(): It allows us to write data on console
screen.
5. Device Management
• ReadConsole(): It allows us to read data from the console
input buffer.
• open(): This call is made whenever a device or a file is
opened. A unique file descriptor is created to maintain the
control access to the opened file or device.
• close(): This call is made when the system or the user
closes the file or device.
Process
• A process is a program in execution.
• For example, when we write a program in C or C++ and
compile it, the compiler creates binary code.
• The original code and binary code are both programs.
• A single program can create many processes when run
multiple times; for example, when we open a .exe or
binary file multiple times, multiple instances begin
(multiple processes are created).
Component and description
• Text Section: Contains the compiled
program code. It also includes the
current activity, represented by the value
of the Program Counter.
• Stack: The stack contains temporary data,
such as function parameters, return
addresses, and local variables.
• Data Section: Contains the global variables.
• Heap Section: Memory dynamically
allocated to the process during its run time.
Process States:
• New: Newly Created Process (or) being-created process.
• Ready: After the creation process moves to the Ready
state, i.e. the process is ready for execution.
• Running: Currently running process in CPU (only one
process at a time can be under execution in a single
processor).
• Wait (or Block): When a process requests I/O access.
• Complete (or Terminated): The process completed its
execution.
Process Control Block(PCB)
• While creating a process, the operating system performs several
operations.
• To identify the processes, it assigns a process identification number
(PID) to each process.
• As the operating system supports multi-programming, it needs to
keep track of all the processes.
• For this task, the process control block (PCB) is used to track the
process’s execution status.
• Each block of memory contains information about the process state,
program counter, stack pointer, status of opened files, scheduling
algorithms, etc.
Structure of the Process Control Block
• A Process Control Block
(PCB) is a data structure
used by the operating
system to manage
information about a
process.
• The process control
keeps track of many
important pieces of
information needed to
manage processes
• Pointer: It is a stack pointer that is required to be saved
when the process is switched from one state to another to
retain the current position of the process.
• Process state: It stores the respective state of the
process.
• Process number: Every process is assigned a unique id
known as process ID or PID which stores the process
identifier.
• Program counter: Program Counter stores the counter,
which contains the address of the next instruction that is
to be executed for the process.
• Registers: The PCB stores the CPU registers as a data
structure. When a process is running and its time slice expires,
the current values of the process-specific registers are stored
in the PCB and the process is swapped out. When the
process is scheduled to run again, the register values are read
from the PCB and written back to the CPU registers. This is the
main purpose of the registers field in the PCB.
• Memory limits: This field contains the information about
memory management system used by the operating system.
This may include page tables, segment tables, etc.
• List of Open files: This information includes the list of files
opened for a process.
Operations on process:
• There are many operations that can be performed on
processes. Some of these are process creation, process
preemption, process blocking, process execution and
process termination.
1.Process creation:
Processes need to be created in the system for different
operations. This can be done by the following events −
• User request for process creation
• System initialization
• Execution of a process creation system call by a running
process
• Batch job initialization
• A process may be created by another process using
fork(). The creating process is called the parent process
and the created process is the child process. A child
process can have only one parent but a parent process
may have many children. Both the parent and child
processes have the same memory image, open files, and
environment strings. However, they have distinct address
spaces.
2.Process preemption
• An interrupt mechanism is
used in preemption that
suspends the process
executing currently and
the next process to
execute is determined by
the short-term scheduler.
• Preemption makes sure
that all processes get
some CPU time for
execution.
3.Process Blocking
• The process is blocked if it
is waiting for some event
to occur. This event is often
I/O; I/O operations are
carried out by the devices
and do not require the
processor. After the event is
complete, the process again
goes to the ready state.
4.Process Termination
• After the process has completed the execution of its last
instruction, it is terminated. The resources held by a
process are released after it is terminated.
5.Process Execution
Processes executing concurrently in the OS may be either
independent processes or co-operating processes.
Process scheduling
• Process scheduling involves short-term schedulers, medium-term
schedulers and long-term schedulers. Details about these are given as
follows −
Long-Term Schedulers
• Long-term schedulers perform long-term scheduling. This involves
selecting the processes from the storage pool in the secondary memory
and loading them into the ready queue in the main memory for execution.
• The long-term scheduler controls the degree of multiprogramming. It must
select a careful mixture of I/O bound and CPU bound processes to yield
optimum system throughput. If it selects too many CPU bound processes
then the I/O devices are idle and if it selects too many I/O bound
processes then the processor has nothing to do.
Short-Term Schedulers
• Short-term schedulers perform short-term scheduling.
This involves selecting one of the processes from the
ready queue and scheduling them for execution. A
scheduling algorithm is used to decide which process will
be scheduled for execution next by the short-term
scheduler.
Medium-Term Schedulers
• Medium-term schedulers perform medium-term
scheduling. This involves swapping out a process from
main memory. The process can be swapped in later from
the point it stopped executing.
• This can also be called suspending and resuming the
process and is helpful in reducing the degree of
multiprogramming. Swapping is also useful to improve the
mix of I/O bound and CPU bound processes in the
memory.
Scheduling criteria
• Process scheduler assigns different processes to CPU
based on particular scheduling algorithms.
• The scheduler is responsible for the scheduling process,
that is, the set of policies and mechanisms that control
the order in which jobs are completed. The scheduler
does this by applying scheduling algorithms.
• The scheduling criterion is responsible for helping in the design of
the good scheduler. These criteria are as follows −
• CPU Utilization
• The scheduling algorithm should be designed in such a way that the
usage of the CPU should be as efficient as possible.
• Throughput
• It can be defined as the number of processes executed by the CPU
in a given amount of time. It is used to find the efficiency of a CPU.
• Response Time
• Response time is the time from when a job enters the queue
until it first starts executing; the scheduler should minimize it.
• Response time = Time at which the process gets the CPU for the
first time - Arrival time
• Turnaround time
• Turnaround time is the total amount of time spent by the process
from coming in the ready state for the first time to its completion.
• Turnaround time = Burst time + Waiting time
• or
• Turnaround time = Exit time - Arrival time
• Waiting time
• Waiting time is the time a job spends in the ready queue
while other jobs compete for execution; the scheduler
should minimize it.
• Waiting time = Turnaround time - Burst time
• Fairness
• For schedulers there should be fairness for making sure that
the processes get the fair share of chances to be executed.
Types of Process Scheduling Algorithms
The different types of process scheduling algorithms are as follows −
• FCFS(First Come First Serve)
• SJF or shortest job next.
• Round Robin.
• Shortest Remaining time.
• Priority Scheduling.
• Multiple level queues.
First Come First Serve (FCFS)
• Jobs are executed on first come, first serve basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on FIFO queue.
• Poor in performance as average wait time is high.
Waiting Time (WT):
• The time spent by a process waiting in the ready queue
for getting the CPU.
• The time difference b/w Turnaround Time and Burst Time
is called Waiting Time.
Burst Time (BT) /Service Time:This is the time required by
the process for its execution.
FCFS:
Advantages: Disadvantages:
• It is simple and easy to • The process with less execution
understand. time suffers i.e. waiting time is
• FCFS provides fairness by often quite long.
treating all processes equally and • This effect results in lower CPU
giving them an equal opportunity and device utilization.
to run.
• FCFS guarantees that every
process will eventually get a
chance to execute, as long as the
system has enough resources to
handle all the processes.
Shortest Job Next (SJN)
• This is also known as shortest job first, or SJF
• This is a non-preemptive scheduling algorithm (its preemptive version is Shortest Remaining Time).
• Best approach to minimize waiting time.
• Easy to implement in Batch systems where required CPU time is known in
advance.
• Impossible to implement in interactive systems where required CPU time is not
known.
• The processor should know in advance how much time the process will take.
Process Arrival time Execution time Service time
P0 0 5 0
P1 1 3 5
P2 2 8 14
P3 3 6 8
SJN or SJF:
Advantages:
• Shortest jobs are favored.
• It is probably optimal, in that it gives the minimum average
waiting time for a given set of processes.
Disadvantages:
• SJF may cause starvation if shorter processes keep coming.
This problem is solved by aging.
• It cannot be implemented at the level of short-term CPU
scheduling.
Starvation
• Starvation in operating system occurs when a process
waits for an indefinite time to get the resource it requires
Priority Based Scheduling
• Priority scheduling is a non-preemptive algorithm and one of the most
common scheduling algorithms in batch systems.
• Each process is assigned a priority. Process with highest priority is to be
executed first and so on.
• Processes with same priority are executed on first come first served basis.
• Priority can be decided based on memory requirements, time
requirements or any other resource requirement.
Advantages:
• This provides a good mechanism where the relative
importance of each process may be precisely defined.
• PB scheduling allows for the assignment of different
priorities to processes based on their importance, urgency,
or other criteria.
Disadvantages:
• If high-priority processes use up a lot of CPU time, lower-
priority processes may starve and be postponed indefinitely.
The situation where a process never gets scheduled to run is
called starvation.
• Another problem is deciding which process gets which
priority level assigned to it.
Shortest Remaining Time
• Shortest remaining time (SRT) is the preemptive version
of the SJN algorithm.
• The processor is allocated to the job closest to completion
but it can be preempted by a newer ready job with shorter
time to completion.
• Impossible to implement in interactive systems where
required CPU time is not known.
• It is often used in batch environments where short jobs
need to be given preference.
Round Robin (RR):
• Each process is given a fixed time slice (quantum) of CPU
time, and the ready queue is cycled through in order.
Advantages:
• Every process gets an equal share of the CPU.
• RR is cyclic in nature, so there is no starvation.
Disadvantages:
• Setting the quantum too short increases the overhead and
lowers the CPU efficiency, but setting it too long may cause a
poor response to short processes.
• The average waiting time under the RR policy is often long.
• If the time quantum is very high then RR degrades to FCFS.
Multiple-Level Queues Scheduling
• Multiple-level queues are not an independent scheduling
algorithm. They make use of other existing algorithms to
group and schedule jobs with common characteristics.
• Multiple queues are maintained for processes with
common characteristics.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue.
Advantages:
• Application of separate scheduling for various kinds of
processes is possible:
• System Process – FCFS
• Interactive Process – SJF
• Batch Process – RR
• Student Process – PB
Disadvantages:
• The lowest level process faces the starvation problem.
Thread:
• A thread is also called a lightweight process. Threads
provide a way to improve application performance
through parallelism.
• Threads represent a software approach to improving the
performance of the operating system by reducing the
overhead; in this respect a thread is equivalent to a
classical process.
• A thread is a flow of execution through the process code,
with its own program counter that keeps track of which
instruction to execute next, system registers which hold
its current working variables, and a stack which contains
the execution history.
Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch
threads.
• Threads allow utilization of multiprocessor architectures to
a greater scale and efficiency.
Types of Thread
• Threads are implemented in following two ways −
• User Level Threads − User managed threads.
• Kernel Level Threads − Operating System managed
threads acting on kernel, an operating system core.
User Level Threads
• In this case, the thread management kernel is not aware
of the existence of threads.
• The thread library contains code for creating and
destroying threads, for passing message and data
between threads, for scheduling thread execution and for
saving and restoring thread contexts.
• The application starts with a single thread.
Advantages:
• Thread switching does not require Kernel mode privileges.
• User level threads can run on any operating system.
• Scheduling can be application specific in the user level
thread.
• User level threads are fast to create and manage.
Disadvantages:
• In a typical operating system, most system calls are
blocking.
• A multithreaded application cannot take advantage of
multiprocessing.
Kernel Level Threads
• In this case, thread management is done by the Kernel. There is
no thread management code in the application area. Kernel
threads are supported directly by the operating system. Any
application can be programmed to be multithreaded. All of the
threads within an application are supported within a single
process.
• The Kernel maintains context information for the process as a
whole and for individual threads within the process. Scheduling
by the Kernel is done on a thread basis. The Kernel performs
thread creation, scheduling and management in Kernel space.
Kernel threads are generally slower to create and manage than
the user threads.
Advantages
• The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.
Disadvantages
• Kernel threads are generally slower to create and manage than user threads.
• Transfer of control from one thread to another within the same process requires a mode switch to the kernel.
Multithreading Models
• Some operating systems provide a combined user-level and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models −
• Many to many relationship.
• Many to one relationship.
• One to one relationship.
Many to Many Model
• The many-to-many model multiplexes any
number of user threads onto an equal or
smaller number of kernel threads.
• The following diagram shows the many-to-many threading model, where six user-level threads are multiplexed onto six kernel-level threads. In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine. This model provides the best level of concurrency: when a thread performs a blocking system call, the kernel can schedule another thread for execution.
Many to One Model
• Many-to-one model maps many user level
threads to one Kernel-level thread. Thread
management is done in user space by the
thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads cannot run in parallel on multiprocessors.
• If an operating system does not support kernel-level threads, its user-level thread libraries are necessarily implemented with this many-to-one relationship.
One to One Model
• This model maps each user-level thread to its own kernel-level thread. It provides more concurrency than the many-to-one model and allows another thread to run when one thread makes a blocking system call. It also supports multiple threads executing in parallel on multiprocessors.
Difference between User-Level & Kernel-Level Thread
Threading Issues
• The fork() and exec() system call
• Signal handling
• Thread cancellation
• Thread local storage
• Scheduler activation
The fork() and exec() system calls
• The fork() system call is used to create a duplicate process. The meanings of the fork() and exec() system calls change in a multithreaded program.
• If one thread in a program calls fork(), does the new process duplicate all threads, or is it single-threaded? Some UNIX systems have chosen to have two versions of fork() − one that duplicates all threads and another that duplicates only the thread that invoked the fork() system call.
• If a thread calls the exec() system call, the program specified in the
parameter to exec() will replace the entire process which includes all threads.
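The fork()-then-exec() behaviour above can be sketched in Python, whose os.fork and os.execv wrap the underlying system calls. This is a minimal Unix-only demo under our own assumptions (the pipe is only there so the parent can observe the child; /bin/echo stands in for any program):

```python
import os

# Hedged sketch (Unix only): fork() duplicates the process; exec() then
# replaces the child's entire image -- including all of its threads.
r, w = os.pipe()                             # channel to observe the child
pid = os.fork()                              # duplicate the calling process
if pid == 0:                                 # child branch
    os.dup2(w, 1)                            # send the child's stdout into the pipe
    os.execv("/bin/echo", ["echo", "replaced"])  # whole process image replaced
    os._exit(1)                              # only reached if exec fails
os.close(w)                                  # parent branch
out = os.read(r, 100).decode().strip()
os.waitpid(pid, 0)                           # reap the child
print(out)                                   # -> replaced
```

After execv succeeds, nothing of the original child program remains, which is exactly why exec() in a multithreaded process replaces every thread.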
Signal handling
• In UNIX systems, a signal is used to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signalled.
• All signals, whether synchronous or asynchronous, follow the same pattern
as given below −
• A signal is generated by the occurrence of a particular event.
• The signal is delivered to a process.
• Once delivered, the signal must be handled.
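The generate-deliver-handle pattern above can be observed with Python's signal module. A minimal Unix-only sketch, with SIGUSR1 chosen arbitrarily as the signal and `caught` as our own bookkeeping list:

```python
import os
import signal
import time

caught = []

def handler(signum, frame):
    caught.append(signum)                    # step 3: the signal is handled

signal.signal(signal.SIGUSR1, handler)       # install the handler in advance
os.kill(os.getpid(), signal.SIGUSR1)         # step 1: an event generates the signal
time.sleep(0)                                # step 2: give the interpreter a
                                             # checkpoint to deliver it
print(caught == [signal.SIGUSR1])            # -> True
```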
Thread Cancellation
• Thread cancellation is the task of terminating a thread before it has
completed.
• For example − if multiple database threads are concurrently searching through a database and one thread returns the result, the remaining threads might be cancelled.
• The thread that is to be cancelled is called the target thread. Cancellation of the target thread may occur in two different scenarios −
• Asynchronous cancellation − One thread immediately terminates the target
thread.
• Deferred cancellation − The target thread periodically checks whether it
should terminate, allowing it an opportunity to terminate itself in an ordinary
fashion.
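Deferred cancellation can be mimicked with a shared flag that the target thread polls between units of work. A sketch using Python's threading module; the flag name `cancel` and the timings are our own choices:

```python
import threading
import time

cancel = threading.Event()                   # the cancellation request flag
progress = []

def worker():
    while not cancel.is_set():               # deferred check: "should I terminate?"
        progress.append(1)                   # ... ordinary unit of work ...
        time.sleep(0.01)

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)                             # let the target do some work
cancel.set()                                 # another thread requests cancellation
t.join()                                     # target exits at its next check
print(t.is_alive())                          # -> False
```

Because the target only terminates at a point it chooses, it can release locks and clean up first, which asynchronous cancellation cannot guarantee.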
Thread Local Storage
• Thread Local Storage (TLS) is the method by which each
thread in a given multithreaded process can allocate
locations in which to store thread-specific data.
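Python exposes TLS directly as threading.local. A small sketch (the names `data`, `worker`, and `results` are ours) showing that each thread sees only its own copy:

```python
import threading

data = threading.local()                     # one independent copy per thread
results = {}

def worker(name):
    data.value = name                        # stored in this thread's slot only
    results[name] = data.value               # each thread reads back its own value

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)                               # every thread saw only what it stored
```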
Scheduler Activation
• Scheduler activations are a threading mechanism that,
when implemented in an operating system's process
scheduler, provide kernel-level thread functionality with
user-level thread flexibility and performance.
Inter process communication
• Interprocess communication is the mechanism provided
by the operating system that allows processes to
communicate with each other.
• This communication could involve a process letting
another process know that some event has occurred or
the transferring of data from one process to another.
Synchronization in Interprocess Communication
• Synchronization is a necessary part of interprocess
communication. It is either provided by the interprocess control
mechanism or handled by the communicating processes. Some of
the methods to provide synchronization are as follows −
Semaphore
• A semaphore is a variable that controls the access to a common
resource by multiple processes. The two types of semaphores are
binary semaphores and counting semaphores.
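A counting semaphore initialized to N admits at most N concurrent holders; a binary semaphore is the N = 1 case. A sketch with Python's threading.Semaphore, where `peak` (our own instrumentation, guarded by a lock) records the most holders seen at once:

```python
import threading

sem = threading.Semaphore(2)                 # counting semaphore: capacity 2
lock = threading.Lock()                      # guards the instrumentation below
in_use = 0
peak = 0

def worker():
    global in_use, peak
    with sem:                                # acquire: blocks when the count is 0
        with lock:
            in_use += 1
            peak = max(peak, in_use)         # record concurrent holders
        # ... use the shared resource here ...
        with lock:
            in_use -= 1
                                             # release on exit: count goes back up

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)                                  # never exceeds the capacity of 2
```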
Mutual Exclusion
• Mutual exclusion requires that only one process or thread can enter the critical section at a time. This is useful for synchronization and also prevents race conditions.
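The classic demonstration is a shared counter: without mutual exclusion, concurrent read-modify-write sequences can lose updates. A sketch with Python's threading.Lock:

```python
import threading

counter = 0
lock = threading.Lock()                      # guards the critical section

def increment(n):
    global counter
    for _ in range(n):
        with lock:                           # at most one thread in here at a time
            counter += 1                     # read-modify-write is now atomic

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                               # -> 40000 (no lost updates)
```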
Barrier
• A barrier does not allow individual processes to proceed
until all the processes reach it. Many parallel languages
and collective routines impose barriers.
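Python provides this primitive directly as threading.Barrier. In the sketch below (the `order` log is our own instrumentation), no thread records "passed" until all three have recorded "arrived":

```python
import threading

barrier = threading.Barrier(3)               # releases only when 3 threads arrive
order = []                                   # list appends are atomic in CPython

def worker(i):
    order.append(("arrived", i))
    barrier.wait()                           # block until all parties are here
    order.append(("passed", i))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# all three arrivals are logged before any thread gets past the barrier
print([tag for tag, _ in order])
```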
Spinlock
• This is a type of lock in which processes trying to acquire it wait in a loop, repeatedly checking whether the lock is available. This is known as busy waiting, because the process does no useful work even though it remains active.
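The busy-wait loop can be sketched in Python. Real spinlocks rely on an atomic hardware test-and-set instruction; here a non-blocking try-acquire on a threading.Lock stands in for it, and `spin_acquire`/`spin_release` are our own hypothetical helpers:

```python
import threading

flag = threading.Lock()                      # stands in for an atomic test-and-set

def spin_acquire():
    while not flag.acquire(blocking=False):  # busy wait: keep polling the lock
        pass                                 # no useful work is done while spinning

def spin_release():
    flag.release()

count = 0

def worker():
    global count
    for _ in range(1000):
        spin_acquire()
        count += 1                           # critical section
        spin_release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)                                 # -> 4000
```

Spinning only pays off when the wait is expected to be shorter than a context switch; for long waits, a blocking lock wastes far less CPU time.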
Approaches to inter process communication
• Pipe
• Socket
• File
• Signal
• Shared memory
• Message queue
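The first approach in the list, the pipe, is the simplest to demonstrate: a one-way byte channel with a write end and a read end. A minimal Unix sketch using os.pipe:

```python
import os

r, w = os.pipe()                             # read end and write end of the channel
os.write(w, b"hello via pipe")               # producer writes bytes into the pipe
os.close(w)                                  # closing the write end signals EOF
msg = os.read(r, 1024).decode()              # consumer reads them back out
os.close(r)
print(msg)                                   # -> hello via pipe
```

In practice the two ends are held by different processes: a parent creates the pipe before fork(), and parent and child each close the end they do not use.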