WHAT IS AN OPERATING SYSTEM
Operating systems perform two basically unrelated functions: extending the machine and managing resources. Depending on who is doing the talking, you hear mostly about one function or the other.
The Operating System as an Extended Machine
The architecture (instruction set, memory organization, I/O and bus structure) of most computers at the machine language level is primitive and awkward to program, especially for input/output.
The Operating System as a Resource Manager
The concept of the operating system as primarily providing its users with a convenient interface is a top-down view. An alternative, bottom-up, view holds that the operating system is there to manage all the pieces of a complex system.
The job of the operating system is to provide for an orderly and controlled allocation of the processors, memories and I/O devices among the various programs competing for them.
When a computer (or network) has multiple users, the need for managing and protecting the memory, I/O devices and other resources is even greater, since the users might otherwise interfere with one another.
TYPES OF OPERATING SYSTEM
• Mainframe operating system
• Server operating system
• Multiprocessor operating system
• Real-time operating system
• Embedded operating system
• Smart card operating system
OPERATING SYSTEM CONCEPTS
All operating systems have certain basic concepts
such as processes, memory and files that are central to
understanding them.
SYSTEM CALLS
The interface between the operating system and the user programs is defined by the set of system calls that the operating system provides.
We are thus forced to make a choice between (1) vague generalities ("operating systems have system calls for reading files") and (2) some specific system.
Modern operating systems have system calls that perform the same functions, even if the details differ. Since the actual mechanics of issuing a system call are highly machine dependent and often must be expressed in assembly code, a procedure library is provided to make it possible to make system calls from C programs, and often from other languages as well.
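As a minimal sketch of this layering (the file name data.txt is only an example), the C program below reaches the kernel entirely through the standard library wrappers open(), read() and write(), each of which traps into the kernel on the caller's behalf:

#include <fcntl.h>      /* open() */
#include <stdio.h>      /* perror() */
#include <unistd.h>     /* read(), write(), close() - POSIX wrappers */

int main(void)
{
    char buf[128];
    /* open() loads the system call number and parameters, then traps
       into the kernel; the kernel returns a file descriptor. */
    int fd = open("data.txt", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof(buf));   /* another wrapped system call */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n); /* copy the bytes to stdout */
    close(fd);
    return 0;
}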
CPU SCHEDULING
CPU scheduling is the task of selecting a waiting
process from the ready queue and allocating the CPU to
it. The CPU is allocated to the selected process by the
dispatcher.
First Come First Served (FCFS) Scheduling
First Come First Served (FCFS) scheduling is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes. Shortest-Job-First (SJF) scheduling is provably optimal, providing the shortest average waiting time. Implementing SJF scheduling is difficult, however, because predicting the length of the next CPU burst is difficult. The SJF algorithm is a special case of the general priority scheduling algorithm, which simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from starvation. Aging is a technique to prevent starvation.
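As an illustration, the sketch below uses three made-up CPU bursts (24, 3 and 3 time units). Served in arrival order (FCFS), the two short processes wait behind the long one; sorting the bursts shortest-first (the SJF order) cuts the average waiting time sharply:

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Average waiting time when bursts run in the given order:
   each process waits for the sum of the bursts before it. */
static double avg_wait(const int *burst, int n)
{
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;
        elapsed += burst[i];
    }
    return (double)total_wait / n;
}

int main(void)
{
    int order[] = {24, 3, 3};                 /* example bursts, arrival order */
    printf("FCFS average wait: %.1f\n", avg_wait(order, 3));  /* (0+24+27)/3 = 17 */
    qsort(order, 3, sizeof(int), cmp);        /* SJF: shortest burst first */
    printf("SJF  average wait: %.1f\n", avg_wait(order, 3));  /* (0+3+6)/3 = 3 */
    return 0;
}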
Round Robin (RR) Scheduling
Round Robin (RR) scheduling is more appropriate for a time-shared (interactive) system. RR scheduling allocates the CPU to the first process in the ready queue for q time units, where q is the time quantum. After q time units, if the process has not relinquished the CPU, it is preempted and put at the tail of the ready queue. The major problem is the selection of the time quantum. If the quantum is too large, RR scheduling degenerates to FCFS scheduling; if the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive.
The FCFS algorithm is nonpreemptive; the RR algorithm is preemptive. The SJF and priority algorithms may be either preemptive or nonpreemptive.
Multilevel queue algorithms allow different algorithms to be used for different classes of processes. The most common arrangement is a foreground interactive queue, which uses RR scheduling, and a background batch queue, which uses FCFS scheduling. Multilevel feedback queues allow processes to move from one queue to another.
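The time-quantum trade-off can be made concrete with a toy simulation. The sketch below (burst lengths in abstract ticks are invented) counts how often the dispatcher takes over for different quanta: a tiny quantum means many dispatches, and thus heavy context-switch overhead, while a quantum larger than every burst makes RR behave like FCFS:

#include <stdio.h>

/* Round-robin sketch: run each ready process for at most q ticks and
   count the dispatches the quantum choice causes. */
static int round_robin(int *burst, int n, int q)
{
    int remaining = n, dispatches = 0;
    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0)
                continue;                    /* already finished */
            int run = burst[i] < q ? burst[i] : q;
            burst[i] -= run;
            if (burst[i] == 0)
                remaining--;
            dispatches++;                    /* dispatcher runs after each slice */
        }
    }
    return dispatches;
}

int main(void)
{
    for (int q = 1; q <= 16; q *= 2) {
        int burst[] = {12, 7, 3};            /* example CPU bursts, in ticks */
        printf("q = %2d -> %d dispatches\n", q, round_robin(burst, 3, q));
    }
    return 0;
}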
INPUT/OUTPUT
All computers have physical devices for acquiring input and producing output. After all, what good would a computer be if the users could not tell it what to do and could not get the results after it did the work requested? Many kinds of input and output devices exist, including keyboards, monitors, printers and so on. It is up to the operating system to manage these devices.
One of the main functions of an operating system is to control all the computer's I/O (Input/Output) devices. It must issue commands to the devices, catch interrupts and handle errors. It should also provide an interface between the devices and the rest of the system that is simple and easy to use. To the extent possible, the interface should be the same for all devices. The I/O code represents a significant fraction of the total operating system. How the operating system manages I/O is the subject of this section.
First we will look at some of the principles of I/O hardware, and then we will look at I/O software in general. I/O software can be structured in layers, with each layer having a well-defined task to perform.
I/O DEVICES
I/O devices can be roughly divided into two categories: block devices and character devices. A block device is one that stores information in fixed-size blocks, each one with its own address. Common block sizes range from 512 bytes to 32,768 bytes. The essential property of a block device is that it is possible to read or write each block independently of all the other ones.
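That property is easy to demonstrate with POSIX pread(), which reads at an absolute byte offset. In the hedged sketch below, an ordinary file disk.img stands in for a block device and a 512-byte block size is assumed; block k is read directly, without touching any block before it:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLOCK_SIZE 512                       /* assumed block size */

int main(void)
{
    unsigned char block[BLOCK_SIZE];
    long k = 7;                              /* any block address will do */
    int fd = open("disk.img", O_RDONLY);     /* stand-in for a block device */
    if (fd < 0) { perror("open"); return 1; }

    /* Each block is addressable independently of all the others:
       seek straight to byte offset k * BLOCK_SIZE and read one block. */
    ssize_t n = pread(fd, block, BLOCK_SIZE, (off_t)k * BLOCK_SIZE);
    printf("read %zd bytes of block %ld\n", n, k);
    close(fd);
    return 0;
}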
Table: Some typical device, network and bus data rates

| Device                     | Data rate     |
| Keyboard                   | 10 bytes/sec  |
| Mouse                      | 100 bytes/sec |
| 56K modem                  | 7 KB/sec      |
| Telephone channel          | 8 KB/sec      |
| Dual ISDN lines            | 16 KB/sec     |
| Laser printer              | 100 KB/sec    |
| Scanner                    | 400 KB/sec    |
| Classic Ethernet           | 1.25 MB/sec   |
| USB (Universal Serial Bus) | 1.5 MB/sec    |
| ISA bus                    | 16.7 MB/sec   |
| EIDE (ATA-2) disk          | 16.7 MB/sec   |
| FireWire (IEEE 1394)       | 50 MB/sec     |
| SONET OC-12 network        | 78 MB/sec     |
| SCSI Ultra 2 disk          | 80 MB/sec     |
| Gigabit Ethernet           | 125 MB/sec    |
| PCI bus                    | 528 MB/sec    |
| Sun Gigaplane XB backplane | 20 GB/sec     |
PROCESS AND THREAD
In a multiprogramming system, the CPU also switches from program to program, running each for tens or hundreds of milliseconds. While, strictly speaking, at any instant of time the CPU is running only one program, in the course of one second it may work on several programs, thus giving the users the illusion of parallelism. Sometimes people speak of pseudoparallelism in this context, to contrast it with the true hardware parallelism of multiprocessor systems (which have two or more CPUs sharing the same physical memory). Keeping track of multiple, parallel activities is hard for people to do. Therefore, operating system designers over the years have evolved a conceptual model (sequential processes) that makes parallelism easier to deal with.
THREADS
A process has an address space containing program text and data, as well as other resources. These resources may include open files, child processes, pending alarms, signal handlers, accounting information and more. By putting them together in the form of a process, they can be managed more easily.

The other concept a process has is a thread of execution, usually shortened to just thread. The thread has a program counter that keeps track of which instruction to execute next. It has registers, which hold its current working variables. It has a stack, which contains the execution history, with one frame for each procedure called but not yet returned from. Although a thread must execute in some process, the thread and its process are different concepts and can be treated separately. Processes are used to group resources together; threads are the entities scheduled for execution on the CPU.

Fig. 1: (a) Three processes, each with one thread. (b) One process with three threads
Usage
1. Instead of thinking about interrupts, timers and context switches, we can think about parallel threads. Only now with threads we add a new element: the ability for the parallel entities to share an address space and all of its data among themselves. This ability is essential for certain applications, which is why having multiple processes (with their separate address spaces) will not work.
2. A second argument for having threads is that since they do not have any resources attached to them, they are easier to create and destroy than processes.
3. Performance argument: threads yield no performance gain when all of them are CPU bound, but when there is substantial computing and also substantial I/O, having threads allows these activities to overlap.
4. Finally, threads are useful on systems with multiple CPUs, where real parallelism is possible (a small example follows this list).
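A minimal POSIX threads sketch of point 1: three threads, created and joined cheaply, all increment one counter that lives in the shared address space (a mutex serializes the updates):

#include <pthread.h>
#include <stdio.h>

static int shared = 0;                           /* one address space, one counter */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);               /* shared data must be serialized */
        shared++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    for (int i = 0; i < 3; i++)                  /* cheap to create ... */
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 3; i++)                  /* ... and to wait for */
        pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);             /* 300000: all saw the same memory */
    return 0;
}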
Implementing Threads in User Space
Each process needs its own private thread table to keep track of the threads in that process. This table is analogous to the kernel's process table, except that it keeps track only of the per-thread properties, such as each thread's program counter, stack pointer, registers, state and so on. The thread table is managed by the run-time system.
Implementing Threads in the Kernel
No run-time system is needed in each process, as shown in Fig. 2. Also, there is no thread table in each process. Instead, the kernel has a thread table that keeps track of all the threads in the system.

The kernel's thread table holds each thread's registers, state and other information.

All calls that might block a thread are implemented as system calls, at considerably greater cost than a call to a run-time system procedure. When a thread blocks, the kernel, at its option, can run either another thread from the same process (if one is ready) or a thread from a different process.

Kernel threads do not require any new, nonblocking system calls. In addition, if one thread in a process causes a page fault, the kernel can easily check whether the process has any other runnable threads and, if so, run one of them while waiting for the required page to be brought in from the disk.
Fig. 2: (a) A user-level threads package. (b) A threads package managed by the kernel
INTER-PROCESS COMMUNICATION
Processes frequently need to communicate with other processes. For example, in a shell pipeline, the output of the first process must be passed to the second process, and so on down the line. Thus, there is a need for communication between processes.

There are three issues here. The first was alluded to above: how one process can pass information to another. The second has to do with making sure two or more processes do not get into each other's way when engaging in critical activities (suppose two processes each try to grab the last 1 MB of memory). The third concerns proper sequencing when dependencies are present: if process A produces data and process B prints them, B has to wait until A has produced some data before starting to print.
• Race Condition
• Critical Region
• Mutual Exclusion with Busy Waiting
• Sleep and Wakeup
• Semaphores
• Mutexes
• Monitors
• Message Passing
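Several of these primitives come together in the classic bounded-buffer pattern. The sketch below (buffer size and item count are arbitrary) uses a mutex for the critical region and two POSIX counting semaphores for sleep and wakeup:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 4                         /* buffer slots (example size) */

static int buffer[N], in = 0, out = 0;
static sem_t empty, full;           /* counting semaphores */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg)
{
    (void)arg;
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);           /* sleep while the buffer is full */
        pthread_mutex_lock(&m);     /* critical region */
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&m);
        sem_post(&full);            /* wake up a sleeping consumer */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);            /* sleep while the buffer is empty */
        pthread_mutex_lock(&m);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&m);
        sem_post(&empty);           /* report a freed slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);         /* N empty slots initially */
    sem_init(&full, 0, 0);          /* no items yet */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}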
CLASSICAL IPC PROBLEM
The Dining Philosophers Problem
In 1965, Dijkstra posed and solved a synchronization problem he called the dining philosophers problem. Since that time, everyone inventing yet another synchronization primitive has felt obligated to demonstrate how wonderful the new primitive is by showing how elegantly it solves the dining philosophers problem. The problem can be stated quite simply as follows: five philosophers are seated around a circular table. Each philosopher has a plate of spaghetti. The spaghetti is so slippery that a philosopher needs two forks to eat it. Between each pair of plates is one fork.
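One well-known remedy (not Dijkstra's original semaphore solution) is sketched below: each fork is modeled as a mutex, and the last philosopher picks up the forks in the opposite order, which breaks the circular wait that would otherwise cause deadlock:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define N 5
static pthread_mutex_t fork_mutex[N];        /* one mutex per fork */

static void *philosopher(void *arg)
{
    int i = *(int *)arg;
    int first = i, second = (i + 1) % N;
    if (i == N - 1) {                        /* the last philosopher reverses the
                                                order, breaking circular wait */
        first = (i + 1) % N;
        second = i;
    }
    for (int round = 0; round < 3; round++) {
        pthread_mutex_lock(&fork_mutex[first]);
        pthread_mutex_lock(&fork_mutex[second]);
        printf("philosopher %d eats\n", i);  /* holds both forks */
        pthread_mutex_unlock(&fork_mutex[second]);
        pthread_mutex_unlock(&fork_mutex[first]);
        usleep(1000);                        /* think for a moment */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        pthread_mutex_init(&fork_mutex[i], NULL);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}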
The Readers and Writers Problem
Consider, for example, an airline reservation system, with many competing processes wishing to read and write it. It is acceptable to have multiple processes reading the database at the same time, but if one process is updating (writing) the database, no other process may have access to the database, not even readers.
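POSIX read-write locks implement exactly this policy. A minimal sketch, with a toy seats_free variable standing in for the reservation database:

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t db_lock = PTHREAD_RWLOCK_INITIALIZER;
static int seats_free = 100;            /* toy reservation database */

static void *reader(void *arg)
{
    (void)arg;
    pthread_rwlock_rdlock(&db_lock);    /* many readers may hold this at once */
    printf("reader sees %d free seats\n", seats_free);
    pthread_rwlock_unlock(&db_lock);
    return NULL;
}

static void *writer(void *arg)
{
    (void)arg;
    pthread_rwlock_wrlock(&db_lock);    /* exclusive: no readers, no writers */
    seats_free--;                       /* update the database */
    pthread_rwlock_unlock(&db_lock);
    return NULL;
}

int main(void)
{
    pthread_t r[3], w;
    for (int i = 0; i < 3; i++)
        pthread_create(&r[i], NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(r[i], NULL);
    pthread_join(w, NULL);
    return 0;
}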
Computers have a memory hierarchy, with a small amount of very fast, expensive, volatile cache memory, tens of megabytes of medium-speed, medium-price, volatile main memory (RAM) and tens or hundreds of gigabytes of slow, cheap, nonvolatile disk storage. It is the job of the operating system to coordinate how these memories are used.

The part of the operating system that manages the memory hierarchy is called the memory manager. Its job is to keep track of which parts of memory are in use and which parts are not in use, to allocate memory to processes when they need it and deallocate it when they are done, and to manage swapping between main memory and disk when main memory is too small to hold all the processes.
DEADLOCKS
When two or more processes are interacting, they can sometimes get themselves into a stalemate situation that they cannot get out of; such a situation is called a deadlock.
A deadlock situation can arise if the following four
conditions hold simultaneously in a system.
1. Mutual exclusion: At least one resource must be held in a non-sharable mode.
2. Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted.
4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, and so on, with Pn waiting for a resource held by P0.
(i) A deadlock state occurs when two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes.
(ii) To prevent deadlock, we ensure that at least one of the necessary conditions never holds.
(iii) Principally, there are three methods for dealing with deadlock:
(iv) Use some protocol to prevent or avoid deadlock, ensuring that the system will never enter a deadlock state.
(v) Allow the system to enter a deadlock state, detect it and then recover.
(vi) Ignore the problem altogether and pretend that deadlocks never occur in the system. This solution is the one used by most operating systems, including UNIX.
(vii) Another method for avoiding deadlock that is less stringent than the prevention algorithms is to have a priori information on how each process will be utilizing the resources. The banker's algorithm, for example, needs to know the maximum number of each resource class that may be requested by each process. Using this information, we can define a deadlock-avoidance algorithm.
(viii) If a system does not employ a protocol to ensure that deadlock will never occur, then a detection and recovery scheme must be employed. A deadlock detection algorithm must be invoked to determine whether a deadlock has occurred. If a deadlock is detected, the system must recover, either by terminating some of the deadlocked processes or by preempting resources from some of the deadlocked processes. In a system that selects victims primarily on the basis of cost factors, starvation may occur; as a result, the selected process may never complete its designated task.
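The four conditions can be seen, and broken, in a few lines of code. In this sketch both threads acquire two mutexes in the same fixed order, so a circular wait is impossible; swapping the order in one thread would recreate the classic two-lock deadlock:

#include <pthread.h>
#include <stdio.h>

/* Two resources, each held in a non-sharable mode (mutual exclusion). */
static pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;

static void *task(void *arg)
{
    (void)arg;
    /* Both threads lock res_a before res_b. Because the ordering is
       global, no cycle of waiting threads can form, and condition 4
       (circular wait) never holds. */
    pthread_mutex_lock(&res_a);     /* hold one resource ... */
    pthread_mutex_lock(&res_b);     /* ... while waiting for another */
    pthread_mutex_unlock(&res_b);
    pthread_mutex_unlock(&res_a);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("finished without deadlock\n");
    return 0;
}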
MEMORY MANAGEMENT
Every computer has some main memory that it uses to hold executing programs. In a very simple operating system, only one program at a time is in memory. To run a second program, the first one has to be removed and the second one placed in memory.

More sophisticated operating systems allow multiple programs to be in memory at the same time. To keep them from interfering with one another (and with the operating system), some kind of protection mechanism is needed. While this mechanism has to be in the hardware, it is controlled by the operating system.

Memory management systems can be divided into two classes: those that move processes back and forth between main memory and disk during execution (swapping and paging) and those that do not.
Monoprogramming without Swapping or Paging
The simplest possible memory management scheme is to run just one program at a time, sharing the memory between that program and the operating system.
Multiprogramming with Fixed Partitions
Except on very simple systems, monoprogramming is hardly used any more. Most modern systems allow multiple processes to run at the same time. Having multiple processes running at once means that when one process is blocked waiting for I/O to finish, another one can use the CPU. Thus multiprogramming increases the CPU utilization. Network servers always have the ability to run multiple processes (for different clients) at the same time, but most client (i.e., desktop) machines also have this ability nowadays.
Fig. 3: Three simple ways of organizing memory with an operating system and one user process. Other possibilities also exist
Modeling Multiprogramming
When multiprogramming is used, the CPU utilization can be improved. If the average process computes only 20 percent of the time it is sitting in memory, then with five processes in memory at once the CPU should be busy all the time. This model is unrealistically optimistic, however, since it assumes that all five processes will never be waiting for I/O at the same time.

Suppose that a process spends a fraction p of its time waiting for I/O to complete. With n processes in memory at once, the probability that all n processes are waiting for I/O (in which case the CPU will be idle) is p^n. The CPU utilization is then given by the formula

    CPU utilization = 1 - p^n

Fig. 4 shows the CPU utilization as a function of n, which is called the degree of multiprogramming.
Fig. 4: CPU utilization as a function of the number of processes in memory
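A quick computation of the formula (assuming 80 percent I/O wait, matching the 20 percent compute example above) shows how utilization climbs with the degree of multiprogramming:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double p = 0.80;   /* assumed fraction of time a process waits for I/O */
    /* CPU utilization = 1 - p^n for n processes in memory */
    for (int n = 1; n <= 10; n++)
        printf("n = %2d  utilization = %3.0f%%\n", n, 100.0 * (1.0 - pow(p, n)));
    return 0;          /* n = 5 gives about 67%, not the naive 100% */
}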
SWAPPING
As long as enough jobs can be kept in memory to keep the CPU busy all the time, there is no reason to use anything more complicated.

Two general approaches to memory management can be used, depending (in part) on the available hardware. The simplest strategy, called swapping, consists of bringing in each process in its entirety, running it for a while, then putting it back on the disk. The other strategy, called virtual memory, allows programs to run even when they are only partially in main memory.

The operation of a swapping system is illustrated in Fig. 5. Initially, only process A is in memory. Then processes B and C are created or swapped in from disk.
Fig. 5: Memory allocation changes as processes come into memory and leave it. The shaded regions are unused memory
In Fig. 5, A is swapped out to disk. Then D comes in and B goes out. Finally, A comes in again. Since A is now at a different location, addresses contained in it must be relocated, either by software when it is swapped in or (more likely) by hardware during program execution.
VIRTUAL MEMORY
The solution usually adopted was to split the program into pieces, called overlays. Overlay 0 would start running first. When it was done, it would call another overlay. Some overlay systems were highly complex, allowing multiple overlays in memory at once.

Although the actual work of swapping overlays in and out was done by the system, the work of splitting the program into pieces had to be done by the programmer. Splitting up large programs into small, modular pieces was time consuming and boring.
With virtual memory, the operating system keeps those parts of the program currently in use in main memory, and the rest on the disk. For example, a 16-MB program can run on a 4-MB machine by carefully choosing which 4 MB to keep in memory at each instant, with pieces of the program being swapped between disk and memory as needed.
Virtual memory can also work in a multiprogramming system, with bits and pieces of many programs in memory at once. While a program is waiting for part of itself to be brought in, it is waiting for I/O and cannot run, so the CPU can be given to another process, the same way as in any other multiprogramming system. The topics covered under this heading are (a small paging sketch follows the list):
1. Paging
2. Page tables
   (i) Multi-level page tables
   (ii) Structure of a page table entry
3. TLBs - Translation Lookaside Buffers
4. Inverted page tables
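As a rough sketch of topic 1, the translation performed by a one-level page table fits in a few lines. The page size, table size and mappings below are invented for illustration; a miss in the table is where a real system would take a page fault:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u                 /* assumed 4-KB pages */
#define NPAGES    16u                   /* toy address space: 16 virtual pages */
#define NOT_PRESENT 0xFFFFFFFFu         /* marks a page that is not in memory */

/* Entry i maps virtual page i to a physical frame number. */
static uint32_t page_table[NPAGES] = {
    5, 9, NOT_PRESENT, 2,               /* remaining entries default to 0 */
};

int main(void)
{
    uint32_t vaddr = 1 * PAGE_SIZE + 123;    /* a virtual address to translate */
    uint32_t vpage = vaddr / PAGE_SIZE;      /* upper bits: virtual page number */
    uint32_t off   = vaddr % PAGE_SIZE;      /* lower bits: offset in the page */

    if (vpage >= NPAGES || page_table[vpage] == NOT_PRESENT) {
        printf("page fault at 0x%x\n", (unsigned)vaddr);
        return 1;
    }
    uint32_t paddr = page_table[vpage] * PAGE_SIZE + off;
    printf("virtual 0x%x -> physical 0x%x\n", (unsigned)vaddr, (unsigned)paddr);
    return 0;
}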
PAGE REPLACEMENT ALGORITHM
The Least Recently Used (LRU)
A good approximation to the optimal algorithm is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few. Conversely, pages that have not been used for ages will probably remain unused for a long time. This idea suggests a realizable algorithm: when a page fault occurs, throw out the page that has been unused for the longest time. This strategy is called LRU (Least Recently Used) paging.

Although LRU is theoretically realizable, it is not cheap. To fully implement LRU, it is necessary to maintain a linked list of all pages in memory, with the most recently used page at the front and the least recently used page at the rear. The difficulty is that the list must be updated on every memory reference.
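A common software compromise is to timestamp each frame instead of maintaining the list, and to evict the frame with the oldest timestamp on a fault. A small sketch with three frames and an invented reference string:

#include <stdio.h>

#define FRAMES 3

static int page_of[FRAMES];     /* which page each frame holds (-1 = empty) */
static long last_use[FRAMES];   /* pseudo-time of most recent reference */
static long now = 0;

/* Reference one page; on a fault, evict the least recently used frame. */
static void reference(int page)
{
    int victim = 0;
    for (int i = 0; i < FRAMES; i++)
        if (page_of[i] == page) {            /* hit: refresh the timestamp */
            last_use[i] = ++now;
            return;
        }
    for (int i = 1; i < FRAMES; i++)         /* fault: oldest timestamp = LRU */
        if (last_use[i] < last_use[victim])
            victim = i;
    printf("fault on page %d, evicting frame %d\n", page, victim);
    page_of[victim] = page;
    last_use[victim] = ++now;
}

int main(void)
{
    int refs[] = {0, 1, 2, 3, 2, 1, 0, 3, 2, 3};   /* example reference string */
    for (int i = 0; i < FRAMES; i++)
        page_of[i] = -1;
    for (int i = 0; i < 10; i++)
        reference(refs[i]);
    return 0;
}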
The Clock Page Replacement Algorithm
When a page fault occurs, the page the hand is pointing to is inspected. The action taken depends on its R bit:
R = 0: Evict the page.
R = 1: Clear R and advance the hand.
For a machine with n page frames, the LRU hardware can maintain a matrix of n x n bits, initially all zero. Whenever page frame k is referenced, the hardware first sets all the bits of row k to 1, then sets all the bits of column k to 0. At any instant, the row whose binary value is lowest belongs to the least recently used frame, the row whose value is next lowest belongs to the next least recently used frame, and so forth. The workings of this algorithm can be traced for four page frames and page references in the order 0 1 2 3 2 1 0 3 2 3: after page 0 is referenced, we have one situation; after page 1 is referenced, we have the next; and so forth.
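The matrix method is easy to mimic in software. A sketch for four frames and the reference order used above (page k is assumed to reside in frame k):

#include <stdio.h>

#define N 4                     /* page frames, as in the example above */

static unsigned char m[N][N];   /* the n x n bit matrix, initially all zero */

static void referenced(int k)
{
    for (int j = 0; j < N; j++) m[k][j] = 1;   /* set all bits of row k ... */
    for (int i = 0; i < N; i++) m[i][k] = 0;   /* ... then clear column k */
}

/* The row with the lowest binary value belongs to the LRU frame. */
static int lru_frame(void)
{
    int best = 0, best_val = 1 << N;
    for (int i = 0; i < N; i++) {
        int val = 0;
        for (int j = 0; j < N; j++)
            val = (val << 1) | m[i][j];
        if (val < best_val) { best_val = val; best = i; }
    }
    return best;
}

int main(void)
{
    int refs[] = {0, 1, 2, 3, 2, 1, 0, 3, 2, 3};   /* order used in the text */
    for (int i = 0; i < 10; i++) {
        referenced(refs[i]);
        printf("after page %d: LRU frame is %d\n", refs[i], lru_frame());
    }
    return 0;
}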
DISK SCHEDULING
1. The basic hardware elements involved in I/O are buses, device controllers and the devices themselves. The work of moving data between devices and main memory is performed by the CPU as programmed I/O or is offloaded to a DMA controller. The kernel module that controls a device is a device driver. The system-call interface provided to applications is designed to handle several basic categories of hardware, including block devices, character devices, memory-mapped files, network sockets and programmed interval timers. The system calls usually block the processes that issue them, but non-blocking and asynchronous calls are used by the kernel itself and by applications that must not sleep while waiting for an I/O operation to complete.
2. Disk drives are the major secondary-storage I/O device on most computers. Requests for disk I/O are generated by the file system and by the virtual memory system. Each request specifies the address on the disk to be referenced, in the form of a logical block number.
3. Disk scheduling algorithms can improve the effective bandwidth, the average response time and the variance in response time. Algorithms such as SSTF, SCAN, C-SCAN, LOOK and C-LOOK make such improvements through strategies for disk-queue ordering; an SSTF sketch follows this item.
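For instance, SSTF can be written in a few lines; the request queue and the initial head position below are invented for illustration:

#include <stdio.h>
#include <stdlib.h>

/* Shortest-Seek-Time-First: repeatedly service the pending request
   closest to the current head position and total the head movement. */
int main(void)
{
    int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};   /* example cylinders */
    int n = 8, head = 53, moved = 0;

    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)          /* find the nearest pending request */
            if (queue[i] >= 0 &&
                (best < 0 || abs(queue[i] - head) < abs(queue[best] - head)))
                best = i;
        moved += abs(queue[best] - head);
        head = queue[best];
        printf("service cylinder %d\n", head);
        queue[best] = -1;                    /* mark the request as done */
    }
    printf("total head movement: %d cylinders\n", moved);
    return 0;
}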
4. Performance can be harmed by external fragmentation. Some systems have utilities that scan the file system to identify fragmented files; they then move blocks around to decrease the fragmentation. Defragmenting a badly fragmented file system can significantly improve performance, but the system generally has reduced performance while the defragmentation is in progress.
5. The operating system manages the disk blocks. First, a disk must be low-level formatted to create the sectors on the raw hardware; new disks usually come preformatted. Then the disk is partitioned and file systems are created, and boot blocks are allocated to store the system's bootstrap program. Finally, when a block is corrupted, the system must have a way to lock out that block or to replace it logically with a spare.
6. An efficient swap space is a key to good performance. Systems usually bypass the file system and use raw disk access for paging I/O. Some systems dedicate a raw disk partition to swap space, and others use a file within the file system instead. Still other systems allow the user or system administrator to make the decision by providing both options.
7. Distributed information systems store and access information so that clients and servers can share the state needed to manage use and access. Since files are the main information-storage mechanism in most computer systems, file protection is needed. Access to files can be controlled separately for each type of access: read, write, execute, append, delete, list directory and so on. File protection can be provided by passwords, by access lists or by other ad hoc techniques.
8. The file system resides permanently on secondary storage, which is designed to hold a large amount of data permanently. The most common secondary-storage medium is the disk. Physical disks may be segmented into partitions to control media use and to allow multiple, possibly varying, file systems on a single spindle. These file systems are mounted onto a logical file system architecture to make them available for use. File systems are often implemented in a layered or modular structure. The lower levels deal with the physical properties of storage devices; the upper levels deal with symbolic file names and the logical properties of files.
9. Every file system type can have different structures and algorithms. A VFS layer allows the upper layers to deal with each file system type uniformly. Even remote file systems can be integrated into the system's directory structure and acted on by standard system calls via the VFS interface. The various files can be allocated space on the disk in three ways: through contiguous, linked or indexed allocation. Contiguous allocation can suffer from external fragmentation. Direct access is very inefficient with linked allocation. Indexed allocation may require substantial overhead for its index block. These algorithms can be optimized in many ways: contiguous space may be enlarged through extents to increase flexibility and to decrease external fragmentation, and indexed allocation can be done in clusters of multiple blocks to increase throughput and to reduce the number of index entries needed. Indexing in large clusters is similar to contiguous allocation with extents.
10. Free-space allocation methods also influence the efficiency of use of disk space, the performance of the file system and the reliability of secondary storage. The methods used include bit vectors and linked lists; one optimization is the FAT, which places the linked list in one contiguous area.
11. The directory management routines must consider efficiency, performance and reliability. A hash table is the most frequently used method; it is fast and efficient.
12. Network file systems, such as NFS, use client-server methodology to allow users to access files and directories from remote machines as if they were on local file systems. System calls on the client are translated into network protocols and retranslated into file-system operations on the server. Networking and multiple-client access create challenges in the areas of data consistency and performance.
File Structure
Files can be structured in any of several ways. Three possibilities are depicted in Fig. 7.
The file in Fig. 7(a) is an unstructured sequence of bytes. In effect, the operating system does not know or care what is in the file; all it sees are bytes. Any meaning must be imposed by user-level programs. Both UNIX and Windows use this approach.

The file in Fig. 7(b) is a sequence of fixed-length units known as records, and in Fig. 7(c) the file is arranged in the form of a tree.
File Type
Many operating systems support several types of files. UNIX and Windows, for example, have regular files and directories. UNIX also has character and block special files. Regular files are the ones that contain user information; all the files of Fig. 8 are regular files. Directories are system files for maintaining the structure of the file system. Character special files are related to input/output and are used to model serial I/O devices such as terminals, printers and networks. Block special files are used to model disks.

Regular files are generally either ASCII files or binary files.
Fig. 7: Three kinds of files. (a) Byte sequence. (b) Record sequence. (c) Tree
PROCESS MANAGEMENT

Previous Years' Questions
Q.1 (a) What do you mean by process and the life cycle of a process? Explain context switching between two processes.
(b) What do you mean by thread? Explain kernel and user level threads. [RTU, 2017]
Ans. (a) Process: A key concept in all operating systems is the process. A process is basically a program in execution. Associated with each process is its address space, a list of memory locations from some minimum (usually 0) to some maximum which the process can read and write. The address space contains the executable program, the program's data and its stack. Also associated with each process is some set of registers, including the program counter, stack pointer and other hardware registers, and all the other information needed to run the program.
Process Life Cycle
When a process executes, it passes through different states. These stages may differ in different operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following
five states at a time:
(i) Start: This is the initial state, when a process is first started/created.
(ii) Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler to assign the CPU to some other process.
(iii) Running: The process has been assigned to a processor and its instructions are being executed.
(iv) Waiting: The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input or waiting for a file to become available.
(v) Terminated or Exit: Once the process finishes its execution, or it is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
Context Switching in Processes
Process switching is context switching from one process to a different process. It involves switching out all of the process abstractions and resources in favour of those belonging to a new process. Most notably and expensively, this means switching the memory address space. This includes memory addresses, mappings, page tables and kernel resources, making it a relatively expensive operation.
Context Switching in Threads
Thread switching is context switching from one thread to another in the same process. Thread switching is much cheaper, as it involves switching out only the abstractions unique to threads, such as the program counter and registers. It is generally very efficient and is consequently used significantly more often than process switching.
(b) Thread: A thread is a single sequential flow of control within a program. Most programmers are familiar with writing sequential programs: you have probably written a program that sorts a list of names or computes a list of prime numbers. These are sequential programs; each has a beginning, an end, a sequence, and at any given time during the run time of the program there is a single point of execution. A thread is similar to a sequential program: a single thread also has a beginning, an end, a sequence, and at any given time a single point of execution. However, a thread itself is not a program; it cannot run on its own, but runs within a program.

A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set and a stack. It shares with other threads belonging to the same process its code section, data section and other operating system resources, such as open files. A traditional process has a single thread of control; if a process has multiple threads, it can perform more than one task at a time. Threads are visible only from within the process, where they share all process resources, such as the address space and open files. The following state is unique to each thread: thread ID, program counter, register set and stack pointer, signal mask, priority and thread-private storage.

Kernel Level Threads: Kernel threads are supported and managed directly by the operating system. Virtually all contemporary operating systems - including Windows XP, Linux, Mac OS X, Solaris and Tru64 UNIX - support kernel threads. The kernel has a thread table that keeps track of all threads in the system; in addition, the kernel also maintains the traditional process table to keep track of processes. The operating system provides system calls to create and manage threads.

Fig. 1: Kernel-level threads

Advantages
* Since the kernel has full knowledge of all threads, the scheduler may decide to give more CPU time to a process having a large number of threads.
* Kernel-level threads are especially good for applications that frequently block.

Disadvantages
* Kernel-level threads are slow and inefficient. For instance, thread operations are hundreds of times slower than those of user-level threads.
* Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block for each thread to maintain information about threads. As a result, there is significant overhead and increased kernel complexity.

User Level Threads: User-level threads are supported above the kernel and are implemented in user-level libraries, rather than via system calls, so thread switching does not need to call the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.

Fig. 2: User-level threads
Advantages
* The most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads. User-level threads do not require modification to the operating system.
* Simple Representation: Each thread is represented simply by a PC, registers, a stack and a small control block, all stored in the user process address space.
* Simple Management: Creating a thread, switching between threads and synchronization between threads can all be done without intervention of the kernel.
* Fast and Efficient: Thread switching is not much more expensive than a procedure call.
Disadvantages
* There is a lack of coordination between threads and the operating system kernel. Therefore, the process as a whole gets one time slice, irrespective of whether the process has one thread or 1000 threads within it. It is up to each thread to relinquish control to other threads.
* User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process will be blocked in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.
Q.2 Explain the architecture of an operating system with a neat and clean diagram. [RTU, 2017]
OR
Explain the architecture of an operating system. [RTU, 2012]
Ans. Architecture of Operating System: There are different architectures of operating systems used in the computer world.
Monolithic Architecture of Operating System: Fig. 1 shows the monolithic architecture of an operating system. It is the oldest architecture used for developing an operating system.
Fig. 1: Monolithic architecture
The key features of monolithic kernels are:
* The monolithic kernel interacts directly with the hardware. It can be optimized for particular hardware architectures, but it is not very portable.
* The monolithic kernel gets loaded completely into memory at boot time.
* Most system calls are made by trapping to the kernel, having the work performed there, and having the kernel return the desired result to the user process. A system call proceeds as: 1. trap from user to kernel mode, 2. check parameters, 3. call the service routine, 4. the service routine calls utility procedures, 5. reschedule/return to user.
Layered Architecture of Operating System
Dijkstra introduced the layered architecture of operating systems.
Virtual Machine Architecture of Operating System: The VM operating system for IBM systems is the best-known example of the virtual machine concept, because IBM pioneered the work in this area. The VM operating system is built on the virtual machine concept: it subdivides a single computer system into multiple virtual computer systems, each with its own virtual processor, memory and devices. That is, VM uses software techniques to give each user the illusion of a complete, private machine.
Fig. 3: Virtual machine architecture
A virtual machine has two main components:
(i) The control program (CP), which controls the real machine.
(ii) The conversational monitor system (CMS), which controls the virtual machine.
Micro-Kernel Architecture of Operating System: A very modern architecture is the micro-kernel architecture. This architecture strives to take out of the kernel as much functionality as possible, so as to limit the code executed in privileged mode and to allow easy modifications and extensions. It does so by moving most operating system services from the kernel to "user space", thus making the kernel as small as possible; therefore, it is called a micro-kernel. This has one advantage: the kernel always stays in main memory and thus consumes less memory of the system.
Exokernel Architecture of Operating System: The exokernel is a further extension of the micro-kernel approach, where the "kernel" has almost no functionality of its own; it merely passes requests on to managers in "user space" libraries. This would mean that (for instance) requests for file access by one process would be passed by the kernel to the library that is directly responsible for managing file systems. Initial reports are that this can, in particular, result in significant performance improvements, as it does not force data to even pass through kernel data structures.
The table below compares these architectures in brief.

Table: Comparison of kernel implementation of various operating system architectures

Implementation of operating system abstractions:
| Monolithic   | All are implemented in kernel space                        |
| Micro kernel | Only lower-level abstractions are implemented in the kernel |
| Exokernel    | Nothing is implemented in the kernel                        |
Q.3 What is an operating system? Explain its types and the services provided by an operating system.

Ans. Operating System: An operating system is software that manages the computer hardware and provides an environment in which a user can execute programs in a convenient and efficient manner. An operating system acts as an intermediary between the user of a computer and the computer hardware, as shown in the figure below.

Fig.: OS as an intermediary
Types of Operating Systems: Operating systems can be classified into the following categories:
(i) Single-user single-tasking operating system
(ii) Batch operating system
(iii) Multi-user operating system
(iv) Multi-tasking operating system
(v) Real-time operating system
(vi) Network operating system
(vii) Distributed operating system
(i) Single-user Single-tasking Operating System: An operating system that allows a single user to work on a computer at a time and can execute a single job at a time is known as a single-user single-tasking operating system. For example, MS-DOS is a single-user single-tasking operating system, because you can open and run only one application at a time in MS-DOS.
(ii) Batch Operating System: A single-user operating system that can execute various types of jobs in batches, one after another. In a batch-operating environment, users submit their programs to an operator, and the operator groups the similar programs and then loads them (all groups of programs, along with their relevant data) simultaneously. When the execution of one program for performing a similar kind of job is over, a new program is loaded for execution by the operating system. A batch operating system is suitable for applications that require long computation time without user intervention. Some examples of such applications are payroll processing, forecasting, statistical analysis, etc.
(iii) Multi-user Operating System: It permits simultaneous access to a computer system through two or more terminals for users. UNIX is an example of a multi-user operating system. It allows two or more users to run programs at the same time. Some operating systems permit hundreds or even thousands of concurrent users.
(iv) Multi-tasking Operating System: It is also called a multiprocessing operating system. A multitasking operating system is able to handle more than one process, as the jobs have to be executed on more than one processor (CPU). The running state contains more than one process or task. A multitasking operating system allows two or more processes to execute simultaneously.

A multiuser operating system allows simultaneous access to a computer system through two or more terminals. Although frequently associated with multiprogramming, a multiuser operating system does not imply multiprogramming or multitasking. A transaction processing system, such as a railway reservation system that runs hundreds of terminals under control of a single program, is an example of a multiuser operating system. Windows 98/2000/XP/Vista, OS/2 and UNIX are examples of multi-tasking operating systems.
Time Sharing System: Time sharing is a CPU management technique. In a time sharing system, the CPU is allocated to each user program for a small amount of time called a time-slice (a few milliseconds). A time slice is allocated using the round-robin scheduling algorithm. As soon as the time slice is over, the CPU is allocated to the next user program.

A time sharing system is a form of multiprogramming operating system which operates in an interactive mode with a quick response time. The user types a request to the computer through a keyboard. The computer processes it, and a response (if any) is displayed on the user's terminal. A time sharing system allows many users to simultaneously share the computer resources. Since each action or command in a time-sharing system takes a very small fraction of time, only a little CPU time is needed for each user. As the CPU switches rapidly from one user to another, each user is given the impression that he has his own computer, while it is actually one computer shared among many users. Most time sharing systems use time-slice (round robin) scheduling of the CPU. In this approach, programs are executed with a rotating priority that increases during waiting and drops after the service is granted. In order to prevent a program from monopolizing the processor, a program executing longer than the system-defined time-slice is interrupted by the operating system and placed at the end of the queue of waiting programs.

Memory management in a time sharing system provides for the protection and separation of user programs. The input/output management feature of a time-sharing system must be able to handle multiple users (terminals); however, the processing of terminal interrupts is not time critical, due to the relatively slow speed of terminals and users. As required by most multiuser environments, allocation and deallocation of devices must be performed in a manner that preserves system integrity and provides for good performance.
(v) Real-time Operating System: A real-time operating system (RTOS) is a multitasking operating system intended for real-time applications. Such applications include embedded systems (programmable thermostats, household appliance controllers), industrial robots, spacecraft, industrial control and scientific research equipment. An RTOS facilitates the creation of real-time systems, but does not guarantee that the final result will be real-time.

The objective of a real-time system is to provide quick response times; resource utilisation and user convenience are of secondary concern to a real-time system. In order to provide quick response times, most of the processing is done from main memory. If a job is not completed within its deadline, this situation is called a deadline overrun; a real-time operating system must minimise the possibility of deadline overruns. An example of a large-scale real-time operating system is the Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.
(vi) Network Operating System (NOS): A network operating system (NOS) is an operating system