
PVPSIT IT OPERATING SYSTEMS PVP20

UNIT-2
SYLLABUS:
Process Management: Processes: Process concept, Process Scheduling, Operations on Processes, Inter-process Communication.
Threads: Overview, Multithreading models
CPU Scheduling: Basic Concepts, Scheduling Criteria, Scheduling Algorithms (FCFS, SJF, Priority, RR)

Process Management:
Processes:
Process concept:
Process: A process is the fundamental concept of operating-system structure.
• A program under execution is referred to as a process.
• A process can also be defined as an active entity that can be assigned to a processor for execution.
• A process is a dynamic object that resides in main memory.
• A process includes the current values of the program counter and the processor's registers.
• Each process possesses its own virtual CPU.
• A process contains the following two elements:
1. Program code
2. A set of data
Program: A program refers to the collection of instructions given to the system in any programming language.
• A program is a static object residing in a file.
• The lifespan of a program is unlimited.
• A program can exist in a single place in space.
• A program, in contrast to a process, is a passive entity.
• A program can consist of different types of instructions, such as arithmetic instructions, memory instructions, input/output instructions, etc.
Process states:
• A program loaded into memory and executing is called a process. It is an active entity.
• As a process executes, it changes state.
• The state of a process is defined in part by the current activity of that process.
• Each process may be in one of the following states:

New: Process is being created
Running: Instructions are being executed on the processor
Waiting/Blocked: Process is waiting for some event to occur
Ready: Process is waiting to use the processor
Terminated: Process has finished execution

Diagram of Process States
1. New -> Ready: The OS creates the process and prepares it for execution, then moves it into the ready queue.
2. Ready -> Running: The OS selects one of the jobs from the ready queue and moves it from the ready state to the running state.
3. Running -> Terminated: When the execution of a process has completed, the OS terminates that process from the running state. Sometimes the OS terminates a process for other reasons, including time-limit exceeded, memory unavailable, access violation, protection error, I/O failure, and so on.
4. Running -> Ready: When the time slot of the processor expires, or if the processor receives an interrupt signal, the OS shifts the process from the running state to the ready state.
5. Running -> Waiting: A process is put into the waiting state if it needs an event to occur or requires an I/O device.
6. Waiting -> Ready: A process in the waiting state is moved to the ready state when the event for which it has been waiting completes.

Process Control Block:
• Each process is represented in the operating system by a process control block (PCB).


• It is also called a task control block.

Process Control Block
• The components of a PCB are:
1) Process state
2) Program counter
3) CPU registers
4) CPU-scheduling information
5) Memory-management information
6) Accounting information
7) I/O status information
1) Process state: The state may be new, ready, running, waiting, halted, and so on.
2) Program counter: The counter indicates the address of the next instruction to be executed for this process.
3) CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
4) CPU-scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
5) Memory-management information: This information may include such items as the value of the base and limit registers and the page tables or segment tables, depending on the memory system used by the operating system.
6) Accounting information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
7) I/O status information: This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
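
As a rough illustration, a PCB can be thought of as a C structure holding the items listed above. The sketch below is illustrative only; the field names and sizes are hypothetical and do not come from any real operating system.

/* Illustrative PCB sketch: field names and sizes are hypothetical. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* unique process identifier           */
    proc_state_t   state;            /* process state                       */
    unsigned long  program_counter;  /* address of the next instruction     */
    unsigned long  registers[16];    /* saved CPU registers                 */
    int            priority;         /* CPU-scheduling information          */
    unsigned long  base, limit;      /* memory-management information       */
    unsigned long  cpu_time_used;    /* accounting information              */
    int            open_files[16];   /* I/O status information              */
    struct pcb    *next;             /* link for the ready/device queues    */
} pcb_t;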

Diagram showing CPU switches from process to process

Process Scheduling:
Q) What is Process Scheduling?
• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
• In multi-programming systems, a situation often occurs where different programs compete for the processor to be executed first. Since only one processor is available, two processes cannot be executed simultaneously.
• The objective of time sharing is to switch the CPU among the processes so frequently that users can interact with each program while it is running.
• To meet these objectives, the process scheduler selects an available process, possibly from a set of several available processes, for program execution on the CPU.
• For a single-processor system there will never be more than one running process.
• If there is more than one process, the rest will have to wait until the CPU is free and can be rescheduled.


The ready queue and various I/O device queues
Q) Describe Process Scheduling Queues?
Ans:
• There are three types of queues in which a process may be present.
1) Job Queue: As processes enter the system, they are put into a job queue, which consists of all processes in the system.
2) Ready Queue: The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue.
3) Device Queue: The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue.
• A common representation of process scheduling is a queueing diagram.

Queueing Diagram representation of Process Scheduling
• There are two types of queues present: the ready queue and a set of device queues.
• The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.
• A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched.
• Once the process is allocated the CPU and is executing, one of several events could occur:
1) The process could issue an I/O request and then be placed in an I/O queue.
2) The process could create a new subprocess and wait for the subprocess's termination.
3) The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.
• In the first two cases, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue.
• A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.

Q) Describe Process Schedulers?
(or)
Describe different types of Schedulers?
Ans:
• Scheduling is defined as the activity of deciding which resources are to be granted to a process on its request.
• A scheduler is a program which selects a user program from disk and allocates the CPU to that program.
• A program migrates between the various scheduling queues throughout its lifetime.
• The operating system must select, for scheduling purposes, processes from these queues in some fashion.
• The selection of a process from the queues is carried out by the appropriate scheduler.
• There are three types of schedulers. They are as follows:
1. Long Term Scheduler (LTS)
2. Short Term Scheduler (STS)
3. Medium Term Scheduler (MTS)
1) Long Term Scheduler (LTS): The long-term scheduler, also called the job scheduler, selects processes from the job pool and loads them into main memory (the ready queue) for execution.
• The long-term scheduler is used to decide which processes are to be selected for processing.


• The long-term scheduler is defined as a program (part of the operating system) which selects a job from the disk and transfers it into main memory.
• If the number of ready processes in the ready queue becomes very high, the overhead in the operating system for maintaining long lists, context switching and dispatching becomes excessive.
• Therefore, it is useful to let in only a limited number of processes into the ready queue to compete for the CPU. The long-term scheduler manages this.
2) Short Term Scheduler (STS): The short-term scheduler, also called the CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them. It decides which of the ready processes is to be scheduled or dispatched next. If a process requires an I/O device that is not currently available, the process enters that device's queue. The short-term scheduler maintains the ready queue and the device queues.
• The difference between the Long Term Scheduler (LTS) and the Short Term Scheduler (STS) is that the Long Term Scheduler is called less frequently, whereas the Short Term Scheduler is called more frequently.
• The Long Term Scheduler must select a program from disk into main memory only once, i.e., when the program is executed.
• However, the Short Term Scheduler must select a job from the ready queue often (for example, every second in the UNIX operating system); i.e., every second the Short Term Scheduler is called, selects one Process Control Block (PCB) from the ready queue and gives the CPU to that job.
• After that second is completed, the Short Term Scheduler is called again to select one more job from the ready queue. This process repeats.
• Thus, because of the short duration between executions, the Short Term Scheduler must be very fast in selecting a job; otherwise the CPU will sit idle.
• However, the Long Term Scheduler is called less frequently, so because of the longer duration between executions, it can afford to take some time in selecting a good job from disk. A good selection is a mix of CPU-bound and I/O-bound processes.
• An I/O-bound process is one that spends more of its time doing I/O than it spends doing computation.
• A CPU-bound process generates I/O requests infrequently, using more of its time doing computation.
3) Medium Term Scheduler (MTS): The medium-term scheduler sometimes removes a partially executed process from memory to reduce the degree of multiprogramming. If a process requests an I/O device in the middle of its execution, the process is removed from main memory and placed in the waiting queue, often to decrease the overhead on the CPU, and is resumed later. When the I/O operation completes, the job is moved from the waiting queue to the ready queue. These two operations are performed by the medium-term scheduler.
• When the degree of multiprogramming increases, CPU utilization also increases. At some point, CPU utilization is maximum for a specific number of user programs in memory.
• Beyond this point, CPU utilization drops if the degree of multiprogramming is increased further.
• When the operating system observes a decrease in CPU utilization, it calls the Medium Term Scheduler.
• The Medium Term Scheduler swaps excess programs out of memory and puts them on disk. With this, CPU utilization increases.
• After some time, when some programs leave memory, the Medium Term Scheduler swaps those programs which were swapped out back into memory, and their execution continues. This scheme, known as swapping, is performed by the Medium Term Scheduler.
• The swap-in and swap-out must be done at appropriate times by the Medium Term Scheduler.

Addition of Medium Term Scheduler to the Queueing Diagram

Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch.

Q) Differentiate between Process Switching and Context Switching?
Ans:
Process Switching: If an executing process is interrupted, the operating system selects another process and changes its state to the running state by taking control away from the interrupted process and assigning it to the newly selected process.
• The events that can transfer control to the operating system are interrupts and traps.
• Some examples of interrupts include the following:
i. Clock-based interrupts
ii. Input/Output interrupts
iii. Memory faults, etc.
• As interrupts are caused by an event that occurs outside the currently executing process, control is transferred directly to the interrupt handler, which then shifts it to the operating-system routine; with a trap, on the other hand, the operating system determines whether the error is fatal or not.
• An example of a fatal error is a 'hyperlink' in an application: if we click on the link it must reach an application, but if due to some problem it does not get linked to that application, this is said to be a fatal type of error.
• If the error is fatal, then the process being executed is shifted to the terminated state and a process switch occurs.


Context Switching: Context switching refers to switching the CPU to some other process, thereby saving the state of the old process and loading the saved state for the new process.
• Context switching is actually an overhead, which means that apart from switching, no other useful work is performed.
• Each machine has a different switching speed based on factors such as memory speed, the number of registers that need to be copied, and the presence of special instructions (for example, a single instruction that can load or store all registers).
• The time required to do context switching typically depends on hardware support.
• For example, some processors provide more than one set of registers; a context switch then simply requires changing the pointer to the active set of registers.
• The amount of work to be done during a context switch is greater in more complex operating systems.
• A context switch may occur without changing the state of the process being executed.
• Hence, it involves less overhead than the situation in which the process state changes from running to ready or blocked.
• In case of changes in the process state, the operating system has to make certain changes in its environment, which are:
1) The context associated with the processor, along with the program counter and other registers, is saved.
2) The Process Control Block (PCB) of the process being executed is updated. This involves changing the state of the process to one of the available process states. Updating of other fields is also required.
3) The PCB of this process is moved to the appropriate queue.
4) Execution is transferred by selecting some other process to become the active process.
5) The PCB of the chosen process is updated, which includes changing its state to the running state.
6) The data structures associated with memory management are updated, which may require management of the address-translation process.
7) The context of the suspended process is restored by loading the previous values of the PC and other CPU registers.
Thus, a switch that involves a process state change requires considerably more effort than a plain context switch.

Operations on Processes:
Process creation:
• A process may create several new processes, via a create-process system call, during the course of execution.
• The creating process is called a parent process, and the new processes are called the children of that process.
• Each of these new processes may in turn create other processes, forming a tree of processes.
• Most operating systems identify processes according to a unique process identifier (pid), which is typically an integer number.
• When a process creates a subprocess, that subprocess can obtain resources from the OS directly, or it may be constrained to a subset of the parent process's resources. Restricting a child process to a subset of the parent's resources prevents any process from overloading the system by creating too many subprocesses.
• When a process creates a new process, two possibilities exist in terms of execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
• There are also two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process (it has the same program and data as the parent).
2. The child process has a new program loaded into it.
• When one process creates a new process, the identity of the newly created process is passed to the parent so that a parent knows the identities of all its children.
• A new process is created by the fork() system call.
• Both processes (the parent and the child) continue execution at the instruction after the fork(), with one difference:
• The return code for the fork() is zero for the new (child) process, and the (nonzero) process identifier of the child is returned to the parent.

C program forking a separate process


The exec() system call is used after a fork() system call by one of the two processes to replace the process's memory space with a new program.
• The parent can then create more children, or, if it has nothing else to do while the child runs, it can issue a wait() system call to move itself off the ready queue until the termination of the child.
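
The program referred to by the figure above is not reproduced here; the following is a minimal POSIX C sketch of the fork()/exec()/wait() pattern just described, with error handling kept brief.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                          /* create a child process */

    if (pid < 0) {                               /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {                       /* child: fork() returned 0 */
        execlp("ls", "ls", "-l", (char *)NULL);  /* replace memory image with a new program */
        perror("exec");                          /* reached only if exec fails */
        exit(1);
    } else {                                     /* parent: fork() returned the child's pid */
        wait(NULL);                              /* wait for the child to terminate */
        printf("Child %d terminated\n", (int)pid);
    }
    return 0;
}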

Process Creation
Process Termination:
• A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit() system call.
• At that point, the process may return a status value (typically an integer) to its parent process.
• All the resources of the process, including physical and virtual memory, open files, and I/O buffers, are deallocated by the operating system.
• A parent may terminate the execution of one of its children for a variety of reasons, such as these:
1. The child has exceeded its usage of some of the resources that it has been allocated. (To determine whether this has occurred, the parent must have a mechanism to inspect the state of its children.)
2. The task assigned to the child is no longer required.
3. The parent is exiting, and the operating system does not allow a child to continue if its parent terminates. If a process terminates (either normally or abnormally), then all its children must also be terminated. This phenomenon, referred to as cascading termination, is normally initiated by the operating system.

Inter-process Communication:
• Inter-Process Communication (IPC) is defined as communication between processes.
• IPC provides a mechanism to allow two processes to communicate and to synchronize their actions.
• Processes executing concurrently in the operating system may be either independent processes or cooperating processes.
• Any process that does not share data with any other process is an independent process.
• Any process that shares data with other processes is a cooperating process.
• There are several reasons for providing an environment that allows process cooperation:
a) Information sharing
b) Computation speedup
c) Modularity
d) Convenience
• Cooperating processes require an Inter-Process Communication (IPC) mechanism that allows them to exchange data and information.
• There are two fundamental models of inter-process communication:
1) Shared Memory
2) Message Passing
1) Shared Memory Systems:
• In inter-process communication using the shared-memory model, a region of memory that is shared by the cooperating processes is established.
• Processes can then exchange information by reading and writing data to the shared region.
• (In the message-passing model, by contrast, communication takes place by means of messages exchanged between the cooperating processes.)
• Shared memory usually has to deal with the problems of mutual exclusion and the critical-section problem.
• Shared memory allows maximum speed and convenience of communication.
• Shared memory is faster than message passing.
• In shared-memory systems, system calls are required only to establish shared-memory regions.
• Once shared memory is established, all accesses are treated as routine memory accesses, and no assistance from the kernel is required.
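
A minimal sketch of the shared-memory model using the POSIX shm_open()/mmap() interface (writer side only). The region name "/demo_shm" and size are illustrative, error checks are omitted for brevity, and on Linux the program is typically linked with -lrt.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    const char *name = "/demo_shm";                       /* illustrative region name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);      /* system call: create the region */
    ftruncate(fd, 4096);                                  /* set the region size */

    /* map the region into this process's address space */
    char *ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(ptr, "hello from the producer");               /* ordinary memory write, no kernel help */

    /* a cooperating reader would shm_open() the same name, mmap() it,
       and read the data with ordinary memory accesses */
    munmap(ptr, 4096);
    close(fd);
    return 0;
}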

2) Message-passing Systems:


• In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.
• Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided.
• Message passing is also easier to implement than shared memory for inter-computer communication.
• Message passing is slower than shared memory.
• Message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention.
• A message-passing facility provides at least two operations: send(message) and receive(message). (A short code sketch appears at the end of this section.)
• Messages sent by a process can be of either fixed or variable size.
• If processes P and Q want to communicate, they must send messages to and receive messages from each other; a communication link must exist between them.

• Several methods for logically implementing a link and the send() or receive() operations are listed below:
1. Direct or indirect communication
2. Symmetric or asymmetric communication
3. Automatic or explicit buffering
4. Send by copy or send by reference
5. Fixed-sized or variable-sized messages
• Naming:
• Processes that want to communicate must have a way to refer to each other.
• They can use either direct or indirect communication.
1) Direct Communication:
• Each process that wants to communicate must explicitly name the recipient or sender of the communication.
• Here, the send and receive primitives are defined as:
  send(P, message) - Send a message to process P.
  receive(Q, message) - Receive a message from process Q.
• Properties of a communication link in this scheme are:
1. Links are established automatically.
2. A link is associated with exactly one pair of communicating processes.
3. Between each pair there exists exactly one link.
4. The link may be unidirectional, but is usually bi-directional.
• The above scheme exhibits symmetry, as both the sender and the receiver must name the other for communication.
• A slight variation of the above scheme exhibits asymmetry in addressing. Here, only the sender names the recipient; the recipient need not name the sender. The send and receive primitives for this scheme are:
  send(P, message) - Send a message to process P.
  receive(id, message) - Receive a message from any process; the variable id is set to the name of the process with which communication has taken place.
• The disadvantage of both the symmetric and asymmetric schemes is the limited modularity of the resulting process definitions.
2) Indirect Communication:
• With indirect communication, the messages are sent to and received from mailboxes, or ports.
• A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification.
• In this scheme, a process can communicate with some other process via a number of different mailboxes. Two processes can communicate only if they share a mailbox. The send and receive primitives are defined as follows:
  send(A, message) - Send a message to mailbox A.
  receive(A, message) - Receive a message from mailbox A.
• A communication link in this scheme has the following properties:
1. A link is established only if processes share a common mailbox.
2. A link may be associated with many processes.
3. Each pair of processes may share several communication links, each link corresponding to one mailbox.
4. A link may be unidirectional or bi-directional.
A mailbox may be owned either by a process or by the operating system. If the mailbox is owned by a process (that is, the mailbox is part of the address space of the process), then we distinguish between the owner (who can only receive messages through this mailbox) and the user (who can only send messages to the mailbox).
Since each mailbox has a unique owner, there can be no confusion about who should receive a message sent to this mailbox. When a process that owns a mailbox terminates, the mailbox disappears. Any process that subsequently sends a message to this mailbox must be notified that the mailbox no longer exists. A mailbox owned by the operating system, on the other hand, is independent and is not attached to any particular process; the operating system must then provide a mechanism that allows a process to:
• Create a new mailbox
• Send and receive messages through the mailbox, and
• Delete a mailbox.
• Synchronization:
• Communication between processes takes place by calls to the send and receive primitives.
• Message passing may be either blocking or non-blocking, also known as synchronous and asynchronous.
• Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox.
• Non-blocking send: The sending process sends the message and resumes operation.
• Blocking receive: The receiver blocks until a message is available.
• Non-blocking receive: The receiver retrieves either a valid message or a null.
• Buffering:
• Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue.
• Basically, such queues can be implemented in three ways:
1. Zero capacity
2. Bounded capacity
3. Unbounded capacity
1) Zero capacity: The queue has a maximum length of zero.
2) Bounded capacity: The queue has finite length n.
3) Unbounded capacity: The queue's length is potentially infinite.
• The zero-capacity case is sometimes referred to as a message system with no buffering; the other cases are referred to as systems with automatic buffering.
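
Putting the send()/receive() idea together, here is a minimal sketch using POSIX message queues as one possible message-passing facility (indirect communication through a named mailbox). The queue name "/demo_mq" and the sizes are illustrative; on Linux the program is typically linked with -lrt.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <mqueue.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };  /* bounded-capacity queue */

    /* indirect communication: both processes name the mailbox, not each other */
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0666, &attr);

    char out[64] = "hello";
    mq_send(mq, out, strlen(out) + 1, 0);       /* send(A, message) */

    char in[64];
    mq_receive(mq, in, sizeof(in), NULL);       /* receive(A, message): blocking receive */
    printf("received: %s\n", in);

    mq_close(mq);
    mq_unlink("/demo_mq");                      /* delete the mailbox */
    return 0;
}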

Threads:
Overview
• A thread (also called a lightweight process) is a basic unit of CPU utilization.
• It consists of a program counter, a register set and stack space.
• Multiple threads can be created by a particular process.
• A thread shares the code section, data section and OS resources (open files, signals) with its peer threads.
• However, each thread has its own separate register-set values and stack.
• Consider the example of PageMaker or any word-processing application. It has two important threads executing in it: one reads the keystrokes from the keyboard, and the other is a spelling- and grammar-checking thread running in the background (a small code sketch follows the lists below).
• The following diagram shows a single-threaded process and a multithreaded process.

Single-threaded and multithreaded process
• Thread States:
1) Born State: A thread is just created.
2) Ready State: The thread is waiting for the CPU.
3) Running: The system assigns the processor to the thread.
4) Sleep: A sleeping thread becomes ready after the designated sleep time expires.
5) Dead: The execution of the thread has finished.
• Benefits of Threads:
1. Responsiveness
2. Resource sharing
3. Economy
4. Scalability
• Types of Threads:
1. Kernel-supported threads (Mach and OS/2)
2. User-level threads
3. A hybrid approach implements both user-level and kernel-supported threads (Solaris 2).
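
A minimal sketch of the word-processor example above using POSIX threads; the two thread functions are only placeholders for the keyboard-reading and spell-checking work.

#include <stdio.h>
#include <pthread.h>

/* each thread shares the process's globals, but has its own stack and registers */
void *read_keyboard(void *arg) { printf("reading keystrokes...\n"); return NULL; }
void *spell_check(void *arg)   { printf("checking spelling...\n");  return NULL; }

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, read_keyboard, NULL);   /* create two peer threads */
    pthread_create(&t2, NULL, spell_check, NULL);
    pthread_join(t1, NULL);                           /* wait for both to finish */
    pthread_join(t2, NULL);
    return 0;
}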

Multithreading models
• A process is divided into a number of smaller tasks; each task is called a thread.
• The execution of a number of threads within a process at the same time is called multithreading.


• If a program is multithreaded, then even when some portion of it is blocked, the whole program is not blocked. The rest of the program continues working if multiple CPUs are available.
• Multithreading gives the best performance. If we have only a single thread, then no matter how many CPUs are available, no performance benefit is achieved.
• A thread is a basic unit of CPU utilization.
• A traditional process has a single thread of control.
• If a process has multiple threads of control, it can perform more than one task at a time.

Single-threaded and multithreaded process
• The benefits of multithreaded programming are
1. Responsiveness
2. Resource sharing
3. Economy
4. Scalability
• Process creation is heavy-weight, while thread creation is light-weight.
• Threads can simplify code and increase efficiency.
1) Responsiveness:
• Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.
2) Resource sharing:
• Processes may only share resources through techniques such as shared memory or message passing.
• Threads share the memory and the resources of the process to which they belong by default.
• The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space.
3) Economy:
• Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads.
4) Scalability:
• The benefits of multithreading can be greatly increased in a multiprocessor architecture, where threads may be running in parallel on different processors.
• A single-threaded process can only run on one processor, regardless of how many are available.
• Multithreading on a multi-CPU machine increases parallelism.

Multithreading models:
• A thread is a basic unit of CPU utilization.
• A traditional process has a single thread of control.
• If a process has multiple threads of control, it can perform more than one task at a time.
• The benefits of multithreaded programming are responsiveness, resource sharing, economy and scalability.
• Support for threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads.
• User threads are supported above the kernel and are managed without kernel support.
• Kernel threads are supported and managed directly by the operating system.
• A relationship must exist between user threads and kernel threads.

• Different multithreading models exhibit different kinds of relationships between the user and kernel threads. They are:
1. Many-to-One Model
2. One-to-One Model
3. Many-to-Many Model
1) Many-to-One Model:


• The many-to-one model maps many user-level threads to one kernel thread.
• Thread management is done by the thread library in user space, so it is efficient.
• But the entire process will block if a thread makes a blocking system call.
• Multiple threads are unable to run in parallel on multiprocessors.
• True concurrency is not gained because the kernel can schedule only one thread at a time.

Many-to-One Model
2) One-to-One Model:
• The one-to-one model maps each user thread to a kernel thread.
• It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call.
• It also allows multiple threads to run in parallel on multiprocessors.
• The only drawback of this model is that creating a user thread requires creating the corresponding kernel thread, and the overhead of creating kernel threads can burden the performance of an application.
• It allows for greater concurrency, but the developer has to be careful not to create too many threads within an application.

One-to-One Model
3) Many-to-Many Model:
• The many-to-many model multiplexes many user-level threads to a smaller or equal number of kernel threads.
• The number of kernel threads may be specific to either a particular application or a particular machine.
• Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
• Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution.

Many-to-Many Model
• Two-level model: multiplexes many user-level threads to a smaller or equal number of kernel threads, but also allows a user-level thread to be bound to a kernel thread. This variation is sometimes referred to as the two-level model.

Two-level Model


CPU Scheduling:
Basic Concepts:
• CPU-I/O Burst Cycle: Scheduling is a fundamental operating-system function. Process execution consists of a cycle of CPU execution and I/O wait, where processes alternate between a CPU burst and an I/O burst.

Alternating Sequence of CPU and I/O bursts
• CPU Scheduler: A CPU scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. CPU-scheduling decisions may take place when a process:
1) Switches from the running state to the waiting state (I/O request or waiting for a child process)
2) Switches from the running state to the ready state (when an interrupt occurs)
3) Switches from the waiting state to the ready state (completion of I/O)
4) Terminates
• Scheduling that takes place only under circumstances 1 and 4 is called nonpreemptive or cooperative scheduling; otherwise, it is called preemptive.
• In nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
• In preemptive scheduling, a process may have the CPU taken away before completion of its current CPU burst.
• Dispatcher: The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler; this involves:
  Switching context
  Switching to user mode
  Jumping to the proper location in the user program to restart that program.
• Dispatch latency is the time taken by the dispatcher to stop one process and start another running.

Scheduling Criteria:
• Different CPU-scheduling algorithms have different properties and may favor one class of processes over another.
• Different criteria are taken into account for the comparison of CPU-scheduling algorithms. They are:
CPU utilization - keep the CPU as busy as possible.
Throughput - one measure of work is the number of processes completed per time unit, called throughput.
Turnaround time - the interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time - the sum of the periods spent waiting in the ready queue.
Response time - the amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments).
• The criteria for optimization of a CPU-scheduling algorithm are maximum CPU utilization, maximum throughput, minimum turnaround time, minimum waiting time and minimum response time.
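
The examples that follow use the standard relations, which are worth stating explicitly: turnaround time = completion time - arrival time, and (for the CPU-only examples below) waiting time = turnaround time - CPU burst time. For instance, a process that arrives at time 0, needs a 3 ms burst and completes at time 27 has a turnaround time of 27 ms and a waiting time of 27 - 3 = 24 ms.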

Scheduling Algorithms:
First-Come, First-Served (FCFS) Scheduling:
• The simplest CPU-scheduling algorithm, and a non-preemptive one, is the first-come, first-served (FCFS) scheduling algorithm.
• Here, the process that requests the CPU first is allocated the CPU first.
• The implementation is easily managed with a FIFO queue.
• When a process enters the ready queue, its PCB is linked to the tail of the queue.
• When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue.
• The average waiting time under the FCFS policy is often quite long.
• Take an example of three processes with their CPU-burst times given in milliseconds.


Example for FCFS Scheduling:
Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds for the processes P1, P2, P3 (burst times of 24, 3 and 3 ms respectively, consistent with the waiting times computed below):

If the processes arrive in the order P1, P2, P3 and are served in FCFS order, we get the resultant Gantt chart, which is a bar chart that illustrates a particular schedule, including the start and finish times of each of the processes.

So, the waiting time for process P1 = 0 milliseconds
The waiting time for process P2 = 24 milliseconds
The waiting time for process P3 = 27 milliseconds
Thus, the average waiting time = (0 + 24 + 27)/3 = 17 milliseconds.
But if the processes arrive in the order P2, P3, P1, then we get the following Gantt chart:

Here, the waiting time for process P1 = 6 milliseconds
The waiting time for process P2 = 0 milliseconds
The waiting time for process P3 = 3 milliseconds
Now, the average waiting time is (6 + 0 + 3)/3 = 3 milliseconds, which is a substantial reduction. So, the average waiting time varies with the order in which the processes' CPU bursts are served.
• Another difficulty with FCFS is that it tends to favour CPU-bound processes over I/O-bound processes. Consider a collection of processes, one of which mostly uses the CPU and a number of which mostly use I/O devices. When the CPU-bound process is running, all the I/O-bound processes must wait, which causes the I/O devices to be idle. After finishing its CPU operation, the CPU-bound process moves to an I/O device. Now all the I/O-bound processes, having very short CPU bursts, execute quickly and move back to the I/O queues, causing the CPU to sit idle. In this way, FCFS may result in inefficient use of both the processor and the I/O devices.
• Once the CPU has been allocated to a process, the process will not release the CPU until it terminates or switches to the waiting state. So, this algorithm is non-preemptive. It is particularly troublesome for time-sharing systems, in which each user gets the CPU on a time-shared basis.
• If there is one CPU-bound process and many I/O-bound processes, and the CPU-bound process gets hold of the CPU, the other processes have to wait for the big process to get off the CPU. This is called the convoy effect.
• This convoy effect results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.
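
A small C sketch that reproduces the first FCFS calculation above for processes that all arrive at time 0 (a sketch of the arithmetic only, not a scheduler):

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};            /* P1, P2, P3 burst times (ms) */
    int n = 3, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];                /* the next process waits for all earlier bursts */
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}

Running it prints waiting times 0, 24 and 27 ms and an average of 17.00 ms, matching the example.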

Shortest-Job-First (SJF) scheduling algorithm:
• In this approach, we associate with each process the length of its next CPU burst.
• We use these lengths to schedule the process with the shortest time.
• When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
• If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. This is also called "shortest-next-CPU-burst" scheduling.

Example for Shortest Job First (SJF) and Shortest-Remaining-Time-First (SRTF) Scheduling:
Consider the following arrival times and burst times for processes P1, P2, P3, P4 (consistent with the calculation below: P1 arrives at 0 with an 8 ms burst, P2 at 1 with 4 ms, P3 at 2 with 9 ms, and P4 at 3 with 5 ms):

Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1.
The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4 milliseconds), so process P1 is preempted, and process P2 is scheduled.
The average waiting time for this example is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5 milliseconds. A non-preemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.


If all the jobs are of the same length, then SRTF becomes the same as FCFS (i.e., FCFS is the best one can do if all jobs are the same length).
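
A worked timeline for the preemptive (SRTF) case, assuming the arrival and burst values given above: P1 runs from 0 to 1; P2 (shortest remaining time) runs from 1 to 5; P4 runs from 5 to 10; P1 resumes from 10 to 17; P3 runs from 17 to 26. The waiting times are therefore P1 = 10 - 1 = 9 ms, P2 = 1 - 1 = 0 ms, P3 = 17 - 2 = 15 ms and P4 = 5 - 3 = 2 ms, giving the average of 26/4 = 6.5 ms quoted above.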

Priority Scheduling Algorithm:
• The algorithm associates each process with a priority; the process with the highest priority gets the CPU first.
• If there are two processes with the same priority, FCFS scheduling is used to break the tie.
• An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the next (predicted) CPU burst: the larger the CPU burst, the lower the priority, and vice versa.
Example for Priority Scheduling (burst times and priorities consistent with the calculation below, with a smaller number denoting a higher priority: P1 burst 10, priority 3; P2 burst 1, priority 1; P3 burst 2, priority 4; P4 burst 1, priority 5; P5 burst 5, priority 2):

Using priority scheduling, we would schedule these processes according to the following Gantt chart:

So, the waiting time for process P1 = 6 milliseconds
The waiting time for process P2 = 0 milliseconds
The waiting time for process P3 = 16 milliseconds
The waiting time for process P4 = 18 milliseconds
The waiting time for process P5 = 1 millisecond
Thus, the average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2 milliseconds.

• The assignment of priorities can be carried out internally as well as externally.
• Internally defined priorities are assigned based on the computation of certain measurable quantities such as CPU-burst time, time limits, etc.
• Externally defined priorities are based on certain external factors associated with the process.
• An example is priorities based on the importance of the process.
• Similar to SJF, priority scheduling can be either preemptive or non-preemptive. However, the major drawback associated with this algorithm is indefinite blocking, which is also called starvation.
• In this case, a process with the lowest priority may never get the CPU, because the CPU keeps being allotted to higher-priority processes. To avoid this, a technique called "aging" is employed, which gradually increases the priority of processes that have been present in the ready queue for a long time.
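
A minimal, self-contained C sketch of aging; the structure fields, the threshold value and the rule "smaller number = higher priority" are assumptions for illustration only.

#define AGING_THRESHOLD 100             /* ms in the ready queue before a priority boost */

struct proc { int priority; int wait_ms; int ready; };   /* hypothetical fields */

/* called periodically by the scheduler over the ready processes */
void age_ready_queue(struct proc p[], int n)
{
    for (int i = 0; i < n; i++) {
        if (p[i].ready && p[i].wait_ms >= AGING_THRESHOLD) {
            if (p[i].priority > 0)
                p[i].priority--;        /* smaller number = higher priority */
            p[i].wait_ms = 0;           /* restart the aging clock for this process */
        }
    }
}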

Round Robin scheduling algorithm:
• Round-robin (RR) scheduling is similar to FCFS, but with preemption added to switch between the processes.
• The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems.
• Each process gets a small unit of CPU time, called the time slice or time quantum, which is usually 10-100 milliseconds.
• The ready queue is treated as a circular queue.
• The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
• If a process's CPU burst exceeds 1 time quantum, that process is preempted and is put back in the ready queue. The round-robin (RR) scheduling algorithm is therefore a preemptive scheduling algorithm.
• After this time has elapsed, if the process has not finished execution, the timer sends an interrupt to the operating system.
• A context switch will be executed, and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the ready queue.
• It is a preemptive scheduling algorithm.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.

Example for Round Robin Scheduling:
Consider the following example, where the time quantum is 4 milliseconds (burst times of 24, 3 and 3 ms for P1, P2, P3, consistent with the waiting times computed below).


Process P1 gets the first 4 milliseconds; it is then preempted after the first time quantum, and the CPU is given to the next process.
Since process P2 does not need 4 milliseconds, it quits before its time quantum expires.
The CPU is then given to the next process, process P3.
Once each process has received 1 time quantum, the CPU is returned to process P1 for an additional time quantum.
So, the waiting time for process P1 = (10 - 4) = 6 milliseconds
The waiting time for process P2 = 4 milliseconds
The waiting time for process P3 = 7 milliseconds
Thus, the average waiting time = (6 + 4 + 7)/3 = 17/3 = 5.66 milliseconds.
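
A compact C sketch that simulates round robin with a quantum of 4 ms for the burst times assumed above (all processes arriving at time 0) and reproduces the waiting times 6, 4 and 7 ms:

#include <stdio.h>

int main(void)
{
    int rem[]  = {24, 3, 3};                      /* remaining burst of P1, P2, P3 (ms) */
    int wait[3] = {0}, n = 3, q = 4, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {             /* cycle through the ready processes */
            if (rem[i] == 0) continue;
            int run = rem[i] < q ? rem[i] : q;    /* run for up to one time quantum */
            for (int j = 0; j < n; j++)           /* everyone else still waiting accumulates */
                if (j != i && rem[j] > 0) wait[j] += run;
            rem[i] -= run;
            if (rem[i] == 0) done++;
        }
    }
    for (int i = 0; i < n; i++)
        printf("P%d waited %d ms\n", i + 1, wait[i]);
    printf("average = %.2f ms\n", (wait[0] + wait[1] + wait[2]) / 3.0);
    return 0;
}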
