
PROCESS MANAGEMENT - Lecture

The document discusses process management and scheduling in operating systems. It defines a process as a program currently using the processor. Processes can be in different states like ready, running, blocked, etc. The operating system manages processes through process control blocks containing information about each process. It also discusses interrupt handling, scheduling algorithms like round robin and shortest job first, and the goals of scheduling like fairness and throughput optimization.

Uploaded by

axorbtanjiro

PROCESS MANAGEMENT/SCHEDULING

One of the major responsibilities of a multitasking operating system (O/S) is PROCESS
MANAGEMENT. Examples of multitasking O/Ss are UNIX and Windows (2000 & XP). The
operating system must allocate resources to processes, enable processes to share and
exchange information, protect the resources of each process from other processes and
enable synchronization amongst processes. (William Stallings, pg 100).

WHAT IS A PROCESS?

A process can simply be defined as a program in execution; that is, a program
currently making use of the processor at any one time. The diagram below
shows the various states of a process:

Process State Diagram (MBIS09 Handbook)

A process can be in any of the following states:

i. Ready: This is when the process is ready to be run on the processor.
ii. Running: This is when the process is currently making use of the processor.
iii. Blocked: This is when the process is waiting for an input such as a user response
or data from another process. A process may also be in the blocked state if it needs
to access a resource.
Other variations of the above-named states are:
iv. Ready Suspend: This is when a process is swapped out of memory by the
memory-management system in order to free memory for other processes.
v. Blocked Suspend: This is when a process is swapped out of memory after
incurring an I/O wait.
vi. Terminated: This is when a process has finished its run.
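The legal moves between these states can be sketched as a small transition table. This is an illustrative sketch only: the state names and the set of allowed transitions below are assumptions drawn from the list above, not the API of any real O/S.

```python
# Illustrative process-state transition table (an assumption, not a real O/S API).
VALID_TRANSITIONS = {
    "ready": {"running", "ready-suspend"},
    "running": {"ready", "blocked", "terminated"},   # time-slice end, I/O wait, or exit
    "blocked": {"ready", "blocked-suspend"},         # awaited event occurs, or swapped out
    "ready-suspend": {"ready"},                      # swapped back into memory
    "blocked-suspend": {"blocked", "ready-suspend"},
    "terminated": set(),
}

def can_transition(src, dst):
    """Return True if a process may move directly from state src to state dst."""
    return dst in VALID_TRANSITIONS.get(src, set())
```

For example, a blocked process cannot resume running directly; it must first re-enter the ready state and be dispatched again.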

Because the Operating System allocates resources such as the CPU and memory, it
must be careful to create a computational environment in which one process's
execution does not interfere with another's.
To fully identify a process at any given time, each process has a unique
component known as the Process Control Block (PCB). The PCB contains the
following information:
 The process state
 The program counter
 The CPU registers, such as index registers, condition codes and accumulators
 Accounting information, such as the amount of CPU and real time used
 Memory-management information, e.g. base and limit registers
 Input/output status information and CPU-scheduling information.
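The PCB fields listed above can be sketched as a simple record. The field names below are illustrative assumptions that mirror the list, not any real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block; fields mirror the list above."""
    pid: int                                         # unique process identifier
    state: str = "ready"                             # process state
    program_counter: int = 0                         # address of next instruction
    registers: dict = field(default_factory=dict)    # CPU registers (index, accumulators, ...)
    cpu_time_used: float = 0.0                       # accounting information
    memory_base: int = 0                             # memory-management information
    memory_limit: int = 0
    open_files: list = field(default_factory=list)   # I/O status information
```

On a context switch, the O/S saves the running process's registers and program counter into its PCB and restores them from the PCB of the process being dispatched.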
The Operating System effectively manages various processes by using the following:
I. Interrupt Handlers
II. Schedulers
III. Inter-process Communicators

Interrupt System: The interrupt is vital to the functioning of any O/S. Its functions are:
 To alert the O/S when an ‘event’ occurs so that it can suspend its current activity
and deal appropriately with the new situation.
 To enable several programs and I/O activities to proceed independently and
asynchronously while overall control is retained by the O/S.
Interrupt types:
Interrupts may be generated by a number of sources, such as:
 By an I/O controller, signalling normal completion or the occurrence of an error or
failure condition
 By an internal process clock, used to interrupt the O/S at predetermined
intervals to attend to time-critical activities
 By hardware faults, e.g. power failure
 By error conditions within a user program

PROCESS SCHEDULING

Processor management is concerned with the internal priorities of programs already in
main memory. As a program finishes processing and space becomes available, it is
important to decide which program is loaded into memory next. This is
achieved using scheduling algorithms.

According to William Stallings, a scheduler is an operating system module used to
determine which program will be loaded into main memory when space becomes
available. Queuing is the placing of application programs on a waiting line for eventual
loading into main memory.

Often, a program is placed on a particular queue based on its resource needs. For
instance, programs requiring magnetic tape or special printer forms can be separated from
those calling for a more normal setup. Once a program is in main memory, the dispatcher
uses its internal priority to determine its right of access to the processor; an external
priority, by contrast, has to do with getting the program into memory in the first place,
and once the program is in memory its external priority is no longer relevant.

As programs enter the system, they are placed on a queue by the queuing routine; when
space becomes available, the scheduler selects a program from the queue and loads it
into main memory. Generally, the first program on the queue is loaded first, but in
more sophisticated situations, more sophisticated priority schemes can be used.

Scheduling objectives:
 Fairness to all processes
 Be predictable
 Minimise overhead
 Balance available resources
 Enforcement of priorities
 Achieve balance between response and utilisation
 Maximise throughput
 Avoid indefinite postponement and starvation
 Favour processes exhibiting desirable behaviour
 Degrade gracefully under heavy load.

There are three main types of scheduler used:

 Long-term scheduler
 Medium-term scheduler
 Short-term scheduler

Long-term Scheduling: The long-term scheduler controls the degree of
multiprogramming and decides which job or jobs to accept and turn into processes.
The more jobs created, the smaller the percentage of time that each process can take to
be executed; so long-term scheduling may limit the degree of multiprogramming to
provide satisfactory service to the current set of processes.

Medium-term Scheduling: The medium-term scheduler determines which processes are
suspended and resumed. It "swaps processes out to improve the job mix" (William
Buchanan, Distributed Systems and Networks, p. 71).

Short-term Scheduling: Short-term scheduling is concerned with the allocation of
processor time to processes in order to meet some pre-defined system performance
objectives. The task of short-term scheduling is to select the specific resources and
exact times for all the activities to be scheduled.

Criteria for short term scheduling

i. User-oriented criteria: These relate to the behaviour of the system as perceived by the
individual user or process. For example, response time in an interactive system is the
elapsed time between the submission of a request and the moment the response begins to
appear as output, a quality that is visible to the user.
ii. System-oriented criteria: These focus on effective and efficient utilisation of the processor.
An example is throughput, which is the rate at which processes are completed. It
focuses on system performance rather than the service provided to the user, and is therefore
of concern to the system administrator rather than the user population.

Scheduling Levels are:

High-level Scheduler (External priority): responsible for deciding which jobs currently
on disk are to be brought into memory.

Low-level Scheduler (Internal priority): responsible for deciding which of the ready
processes is to be run.

Scheduling Policies
A scheduling policy can be pre-emptive or non-pre-emptive. Pre-emptive
scheduling permits a process to be removed from the processor when a higher-priority
process comes in; non-pre-emptive scheduling does not permit this, as it works on a
first-come-first-served basis, which is fairer though it keeps short jobs waiting.

Pre-emptive policies incur greater overhead than non-pre-emptive ones but may
provide better service to the total population of processes, because they prevent any
one process from monopolizing the processor for very long. (William Stallings,
Operating Systems: Internals and Design Principles)

Scheduling Algorithms:

FIFO (First In First Out): This non-pre-emptive policy tends to favour processor-bound
over I/O-bound processes. This results in inefficient use of both the processor and
the I/O devices, since whenever a processor-bound process is running, the I/O devices will
be idle even if there is potential work for them to do.
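The effect described above can be seen in a minimal sketch of FIFO waiting times (all jobs assumed to arrive at time 0; burst lengths are hypothetical):

```python
def fifo_schedule(bursts):
    """Non-pre-emptive FIFO: run jobs strictly in arrival order.
    bursts: list of CPU-burst lengths in arrival order.
    Returns each job's waiting time, assuming all arrive at t=0."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a job waits until everything ahead of it finishes
        clock += burst
    return waits
```

With one long processor-bound job ahead of two short ones, `fifo_schedule([10, 1, 1])` gives waiting times `[0, 10, 11]`: the short (likely I/O-bound) jobs sit idle behind the long one, which is exactly the inefficiency noted above.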

RR (Round Robin): RR is sometimes referred to as "time-slicing" (Charles H. Sauer
et al., Computer Systems Performance Modelling). With this pre-emptive algorithm,
clock interrupts are generated at periodic intervals. When an interrupt occurs, the
currently running process is placed in the ready queue and the next process is
selected from it on a first-come-first-served basis. Each process is given a time slice
before being pre-empted.

The principal design issue is the length of the time quantum/slice to be used. If the
quantum is short, then short processes will move through the system relatively
quickly, but very short quanta should be avoided: as a rule, the time quantum
should be greater than the time required for a typical interaction.

Like FIFO, RR tends to favour processor-bound processes, which results in inefficient
use of I/O devices and an increase in response time.
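The RR mechanism described above can be sketched as a small simulation (burst lengths and quantum are hypothetical; all processes assumed to arrive at t=0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Pre-emptive round robin: each process runs for at most `quantum`
    time units, then is returned to the back of the ready queue.
    Returns the completion time of each process (all arrive at t=0)."""
    queue = deque((pid, burst) for pid, burst in enumerate(bursts))
    clock, finish = 0, [0] * len(bursts)
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((pid, remaining - run))  # pre-empted, re-queued at the back
        else:
            finish[pid] = clock                   # process terminates
    return finish
```

For example, `round_robin([3, 5, 2], 2)` returns `[7, 10, 6]`: the shortest process finishes first even though it arrived last in the queue, because no process can hold the processor for more than one quantum at a time.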

SJF/SPN (Shortest Job First/Shortest Process Next): This is another approach, which
reduces the bias in favour of long processes inherent in FIFO. It is a non-pre-emptive
policy in which the process with the shortest processing time is run next: short
processes jump to the head of the queue, past longer jobs.

One difficulty with this policy is the need to know, or at least estimate, the required
processing time of each process. There is also a possibility of starvation for longer
processes as long as there is a steady supply of shorter processes. The lack of
pre-emption also makes it undesirable for a time-sharing or transaction-processing
environment.
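SJF can be sketched in the same style as the FIFO example (burst lengths hypothetical, all jobs arriving at t=0; in practice the bursts would only be estimates):

```python
def sjf_schedule(bursts):
    """Non-pre-emptive shortest-job-first: repeatedly run the shortest
    waiting job to completion. Returns each job's waiting time,
    assuming all jobs arrive at t=0."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock      # this job waited for every shorter job ahead of it
        clock += bursts[i]
    return waits
```

With the same workload as the FIFO example, `sjf_schedule([10, 1, 1])` gives `[2, 0, 1]`: the short jobs jump past the long one, cutting their waiting times from 10 and 11 down to 0 and 1, at the cost of the long job now waiting 2 units.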

SHORTEST REMAINING TIME: This is a pre-emptive version of SJF/SPN; the scheduler
always chooses the process that has the shortest expected remaining processing time.
HIGHEST RESPONSE RATIO NEXT: This is a non-pre-emptive policy. When the current
process completes or blocks, the ready process with the greatest response ratio is
chosen. Because the ratio grows as a process waits, longer processes are not
indefinitely held back by shorter jobs. The formula for calculating the response ratio
is given below:

HRRN = (Waiting Time + Service Time) / Service Time

Multi-level Feedback Policy: A simple version is to perform pre-emption in the same
fashion as RR, at periodic intervals. The operating system allocates the processor to
a process and, when the process blocks or is pre-empted, feeds it back into one of
several priority queues.
Newer and shorter jobs are favoured over older and longer jobs. When a process first
enters the system, it is placed in the highest-priority queue for execution; after each
execution it is demoted to the next lower-priority queue. This means a short process will
complete quickly without migrating far down the hierarchy of ready queues, while a
longer process will gradually drift downwards.
A problem with this is that the turnaround time of longer processes can stretch
alarmingly, and they can go into starvation if new shorter processes are regularly
entering the machine.
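The demotion mechanism can be sketched as a small simulation. The number of queues and the per-level quanta below are illustrative assumptions (real designs often give lower levels longer quanta, as here):

```python
from collections import deque

def mlfq(bursts, quanta=(1, 2, 4)):
    """Multi-level feedback sketch: a new process enters the highest-priority
    queue; each time it exhausts that level's quantum it is demoted one level.
    The bottom level is served round robin. Returns finish times (all arrive
    at t=0). Queue count and quanta are illustrative assumptions."""
    levels = [deque() for _ in quanta]
    for pid, burst in enumerate(bursts):
        levels[0].append((pid, burst))            # everyone starts at the top
    clock, finish = 0, [0] * len(bursts)
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest non-empty queue
        pid, remaining = levels[lvl].popleft()
        run = min(quanta[lvl], remaining)
        clock += run
        if remaining > run:
            dest = min(lvl + 1, len(levels) - 1)  # demote; bottom level stays put
            levels[dest].append((pid, remaining - run))
        else:
            finish[pid] = clock
    return finish
```

For example, `mlfq([1, 6])` returns `[1, 7]`: the one-unit job completes in the top queue without ever migrating down, while the six-unit job drifts through all three levels, illustrating both the benefit for short jobs and the stretching of long ones.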

There are other policies used for scheduling network processes, such as:
Multiprocessor scheduling: Scheduling in a multiprocessor involves three inter-related issues:
i. The assignment of processes to processors
ii. The use of multiprogramming on individual processors
iii. The actual dispatching of a process

Real-time scheduling: Real-time systems are operating systems that must schedule and
manage real-time tasks. A real-time task is one that is executed in connection with some
process or set of events external to the computer system, and must meet one or more
deadlines to interact effectively and correctly with its environment. The characteristics
of real-time operating systems are: determinism, responsiveness, user control,
reliability, and fail-soft operation. Real-time scheduling algorithms are:

i. Static table-driven approaches

ii. Static priority-driven pre-emptive approaches

iii. Dynamic planning-based approaches

iv. Dynamic best-effort approaches
