Embedded Systems OS Essentials

UNIT – IV

RTOS and IDE for Embedded System Design: Operating system basics, types of operating systems, tasks, processes, signals, processes and threads, multithreading, multiprocessing and multitasking, Task Scheduling: Scheduling Algorithms - Non-Preemptive Scheduling, Preemptive Scheduling, with numerical examples.
What is an Operating System?
• An operating system is software that manages a computer’s
hardware.
• It also provides a basis for application programs and acts as an
intermediary between the computer user and the computer
hardware.
OPERATING SYSTEM BASICS

• The operating system acts as a bridge between the user applications/tasks and the
underlying system resources through a set of system functionalities and services.

• The OS manages the system resources and makes them available to the user
applications/tasks on a need basis.

• The primary functions of an operating system are:
 Make the system convenient to use
 Organise and manage the system resources efficiently and correctly
• Two modes of operation in which the OS can execute: User mode and Kernel mode.
• If a program is executing in user mode, it does not have direct access to memory, hardware and the other system resources.
• In Kernel mode (a privileged mode), the OS has access to memory, hardware and all the resources.
• When a program working in user mode wants to make use of a resource, it makes a call to the OS. When this call is made, the program switches from user mode to kernel mode. This call is known as a System Call.
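As a minimal illustration (an assumed example, not from the original slides), the POSIX write() function below wraps a system call: the program runs in user mode, write() traps into the kernel, the kernel performs the output in kernel mode, and control then returns to user mode.

/* A user-mode program requesting a kernel service via a system call */
#include <unistd.h>

int main(void)
{
    const char msg[] = "Hello from user mode\n";
    /* write() is a system call wrapper: fd 1 is standard output */
    write(1, msg, sizeof(msg) - 1);
    return 0;
}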
The Kernel
• The kernel is the core of the operating system and is responsible for
managing the system resources and the communication among the
hardware and other system services.
• Kernel acts as the abstraction layer between system resources and
user applications.
• Kernel contains a set of system libraries and services.
For a general purpose OS, the kernel contains different services for
handling the following.
• Process Management
• Primary Memory Management
• File System Management
• I/O System (Device) Management
• Secondary Storage Management
• Protection Systems
• Interrupt Handler
Process Management
• A process is a program that is under execution.
• Process management refers to the activities involved in managing the
execution of multiple processes in an operating system. It includes
creating, scheduling, and terminating processes, as well as allocating
system resources such as CPU time, memory, and I/O devices.
• The OS is responsible for the following activities of process management.
• Creating & deleting both user & system processes.
• Suspending & resuming processes.
• Providing mechanism for process synchronization.
• Providing mechanism for process communication.
• Providing mechanism for deadlock handling.
Memory Image of a Process

1. Text Section: Contains the compiled program code. The current activity is represented by the value of the Program Counter, which points to the next instruction of this section to be executed.
2. Stack: The stack contains temporary data, such as function parameters, return addresses, and local variables.
3. Data Section: Contains the global variables.
4. Heap Section: Memory dynamically allocated to the process during its run time.
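A minimal C sketch (illustrative, not from the original slides) that prints one representative address from each region; the exact addresses and their ordering are platform dependent.

#include <stdio.h>
#include <stdlib.h>

int global_var = 42;                      /* Data section: initialized global */

int main(void)
{
    int local_var = 7;                    /* Stack: local variable */
    int *heap_var = malloc(sizeof(int));  /* Heap: dynamic allocation */

    printf("Text  (code)  : %p\n", (void *)main);
    printf("Data  (global): %p\n", (void *)&global_var);
    printf("Heap  (malloc): %p\n", (void *)heap_var);
    printf("Stack (local) : %p\n", (void *)&local_var);

    free(heap_var);
    return 0;
}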
The execution context of the process (values of CPU registers):
• The PC holds the address of the next instruction of the process to be executed
• The process context is in the CPU registers when the process is running
• The context is stored in memory when the process is paused and restored when it runs again (Context Switching)
Different States of a process
New: – The initial state, where the process is being created.
Ready: – After creation, the process enters the ready state and waits for the CPU to be assigned.
Running: – The process has been assigned to the CPU and its instructions are being executed.
Waiting: – When the process requests I/O, it enters the wait state. It remains in main memory but does not require the CPU.
Terminated or Exit: – When the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state.
• A process scheduler is a component of the operating system that is responsible
for determining which process to run next on the CPU. The scheduler uses various
algorithms to assign CPU time to processes based on factors such as process
priority, resource availability, and time-sharing.

• Process synchronization refers to the coordination of multiple processes to ensure that they do not interfere with each other or cause conflicts when accessing shared resources. It involves using synchronization primitives such as locks, semaphores, and monitors to ensure mutual exclusion and orderly access to shared resources.
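As a hedged illustration of one such primitive (an assumed example using POSIX threads, since the available synchronization APIs vary by OS), the sketch below uses a mutex to serialize access to a shared counter; without the lock, the two threads would race on the update.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                        /* shared resource */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);              /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);            /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);         /* always 200000 with the lock */
    return 0;
}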

• A process control block (PCB) is a data structure used by the operating system to
store information about a process, such as its state, priority, and resource usage.
The PCB allows the operating system to manage and control the execution of the
process and is used by the scheduler to make scheduling decisions.
A process has the following attributes.
1. Process Id: A unique identifier assigned by the operating system
2. Process State: Can be ready, running, etc.
3. CPU registers: Like the Program Counter (CPU registers must be saved and restored
when a process is swapped in and out of the CPU)
4. Accounting information: Amount of CPU used for process execution, time limits, execution ID, etc.
5. I/O status information: For example, devices allocated to the process, open files,
etc.
6. CPU scheduling information: For example, priority (different processes may have different priorities)
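A hypothetical PCB could be sketched as a C structure like the one below; the field names and sizes are illustrative only, since every kernel defines its own layout.

/* A hypothetical PCB layout (illustrative only; real kernels differ) */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;            /* Process Id                         */
    proc_state_t  state;          /* Process State                      */
    unsigned long pc;             /* saved Program Counter              */
    unsigned long regs[16];       /* saved CPU registers                */
    unsigned long cpu_time_used;  /* Accounting information             */
    int           open_files[16]; /* I/O status: open file descriptors  */
    int           priority;       /* CPU scheduling information         */
    struct pcb   *next;           /* link in the scheduler's queues     */
} pcb_t;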
Process Management
• Process management deals with managing the processes/tasks.
• Process management includes setting up the memory space for the process,
loading the process’s code into the memory space, allocating system resources,
scheduling and managing the execution of the process, setting up and managing
the Process Control Block (PCB), Inter Process Communication and
synchronisation, process termination/ deletion, etc.
Primary Memory Management
• The term primary memory refers to the volatile memory (RAM) where processes
are loaded and variables and shared data associated with each process are stored.
• The Memory Management Unit (MMU) of the kernel is responsible for:
 Keeping track of which part of the memory area is currently used by which process
 Allocating and de-allocating memory space on a need basis (dynamic memory allocation)
File System Management
• File is a collection of related information.
• A file could be a program (source code or executable), text files, image files,
word documents, audio/video files, etc.
• Each of these files differ in the kind of information they hold and the way in
which the information is stored.
• The file operation is a useful service provided by the OS.
• The file system management service of Kernel is responsible for
 The creation, deletion and alteration of files
Creation, deletion and alteration of directories
Saving of files in the secondary storage memory (e.g. Hard disk storage)
 Providing automatic allocation of file space based on the amount of free space
available
Providing a flexible naming convention for the files
I/O System (Device) Management
• Kernel is responsible for routing the I/O requests coming from different user
applications to the appropriate I/O devices of the system.
• In a well-structured OS, direct access to I/O devices is not allowed; access to them is provided through a set of Application Programming Interfaces (APIs) exposed by the kernel.
• The kernel maintains a list of all the I/O devices of the system.
• This list may be available in advance, at the time of building the kernel.
• The kernel talks to the I/O devices through a set of low-level system calls, which are implemented in services called device drivers.
• The device drivers are specific to a device or a class of devices.
• The Device Manager is responsible for
 Loading and unloading of device drivers
 Exchanging information and the system specific control signals to and from the
device
Secondary Storage Management
• The secondary storage management deals with managing the
secondary storage memory devices, if any, connected to the system.
• Secondary memory is used as backup medium for programs and data
since the main memory is volatile.
• In most of the systems, the secondary storage is kept in disks (Hard
Disk).
• The secondary storage management service of kernel deals with
Disk storage allocation
Disk scheduling (Time interval at which the disk is activated to backup
data)
Free Disk space management
Protection Systems
• Protection deals with implementing the security policies to restrict the access to both
user and system resources by different applications or processes or users.
• In multiuser supported operating systems, one user may not be allowed to view or
modify the whole/portions of another user’s data or profile details.
• In addition, some applications may not be granted permission to make use of some of the system resources.
• This kind of protection is provided by the protection services running within the kernel.

Interrupt Handler
• Kernel provides handler mechanism for all external/internal interrupts generated by
the system
Monolithic Kernel and Microkernel
Monolithic Kernel
• In monolithic kernel architecture, all kernel services run in the kernel
space.
• Here all kernel modules run within the same memory space under a
single kernel thread.
• The tight internal integration of kernel modules in monolithic kernel
architecture allows the effective utilisation of the low-level features of
the underlying system.
• The major drawback of monolithic kernel is that any error or failure in
any one of the kernel modules leads to the crashing of the entire kernel
application.
• LINUX, SOLARIS, MS-DOS kernels are examples of monolithic kernel.
Microkernel
• The microkernel design incorporates only the essential set of
Operating System services into the kernel.
• The rest of the Operating System services are implemented in programs known as ‘Servers’, which run in user space.
• This provides a highly modular design and OS-neutral abstraction to
the kernel.
• Memory management, process management, timer systems and interrupt handlers are the essential services which form part of the microkernel.
• Mach, QNX, Minix 3 kernels are examples for microkernel.
TYPES OF OPERATING SYSTEMS
• General Purpose Operating System (GPOS)
• Real-Time Operating System (RTOS)
General Purpose Operating System (GPOS)
• The operating systems, which are deployed in general computing
systems, are referred as General Purpose Operating Systems ( GPOS).
• The kernel of such an OS is more generalised and it contains all kinds
of services required for executing generic applications.
• General-purpose operating systems are often quite non-deterministic
in behaviour.
• Their services can inject random delays into application software and
may cause slow responsiveness of an application at unexpected times.
• GPOS are usually deployed in computing systems where deterministic
behaviour is not an important criterion.
Real-Time Operating System (RTOS)
• A Real-Time Operating System or RTOS implements policies and rules
concerning time-critical allocation of a system’s resources.
• The RTOS decides which applications should run in which order and
how much time needs to be allocated for each application.
• Predictable performance is the hallmark of a well-designed RTOS.
• This is best achieved by the consistent application of policies and
rules. Policies guide the design of an RTOS.
• Rules implement those policies and resolve policy conflicts.
Task/ Process management
• Deals with setting up the memory space for the tasks, loading the task’s code into the
memory space, allocating system resources, setting up a Task Control Block (TCB) for the
task and task/process termination/deletion.
A Task Control Block (TCB) is used for holding the information corresponding to a task.
• TCB usually contains the following set of information.
• Task ID: Task Identification Number
• Task State: The current state of the task (e.g. State = ‘Ready’ for a task which is ready to
execute)
• Task Type: Indicates the type of this task: a hard real-time, soft real-time or background task.
• Task Priority: The priority of the task (e.g. Task priority = 1)
• Task Context Pointer: Context pointer. Pointer for context saving
• Task Memory Pointers: Pointers to the code memory, data memory and stack memory for
the task
• Task System Resource Pointers: Pointers to system resources (semaphores, mutex, etc.) used
by the task
• Task Pointers: Pointers to other TCBs (TCBs for preceding, next and waiting tasks)
• The parameters and implementation of the TCB are kernel dependent.
• The TCB parameters vary across different kernels, based on the task
management implementation.
• Task management service utilises the TCB of a task in the following ways:
 Creates a TCB for a task on creating the task
 Deletes/removes the TCB of a task when the task is terminated or deleted
 Reads the TCB to get the state of a task
 Updates the TCB with updated parameters on a need basis (e.g. on a context switch)
 Modifies the TCB to change the priority of the task dynamically
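As a sketch only (the actual layout is kernel dependent, as noted above), the TCB fields listed here could be modelled in C as:

/* A hypothetical TCB mirroring the fields described above */
typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED } task_state_t;
typedef enum { HARD_REAL_TIME, SOFT_REAL_TIME, BACKGROUND } task_type_t;

typedef struct tcb {
    int           task_id;     /* Task ID                              */
    task_state_t  state;       /* Task State                           */
    task_type_t   type;        /* Task Type                            */
    int           priority;    /* Task Priority                        */
    void         *context;     /* Task Context Pointer (saved context) */
    void         *code_mem;    /* Task Memory Pointers: code           */
    void         *data_mem;    /*                       data           */
    void         *stack_mem;   /*                       stack          */
    void         *resources;   /* Task System Resource Pointers        */
    struct tcb   *prev, *next; /* Task Pointers to other TCBs          */
} tcb_t;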
 Task/Process Scheduling: Deals with sharing the CPU among various
tasks/processes. A kernel application called ‘Scheduler’ handles the task scheduling.
Scheduler is nothing but an algorithm implementation, which performs the efficient
and optimal scheduling of tasks to provide a deterministic behavior.
• Task/Process Synchronization: Deals with synchronizing the concurrent access of
a resource, which is shared across multiple tasks and the communication between
various tasks.
• Error/Exception handling: Deals with registering and handling the errors
occurred/exceptions raised during the execution of tasks. Insufficient memory,
timeouts, deadlocks, deadline missing, bus error, divide by zero, unknown
instruction execution etc, are examples of errors/exceptions. Errors/Exceptions can
happen at the kernel level services or at task level. Deadlock is an example for
kernel level exception, whereas timeout is an example for a task level exception. The
OS kernel gives the information about the error in the form of a system call (API).
• Memory Management:
 The memory management function of an RTOS kernel is slightly different compared to the General Purpose Operating
Systems
 The memory allocation time increases depending on the size of the memory block to be allocated and the state of the allocated memory block (an initialized memory block consumes more allocation time than an uninitialized memory block)
 Since predictable timing and deterministic behavior are the primary focus for an RTOS, RTOS achieves this by
compromising the effectiveness of memory allocation
 RTOS generally uses ‘block’ based memory allocation technique, instead of the usual dynamic memory allocation
techniques used by the GPOS.
 The RTOS kernel uses fixed-size blocks of dynamic memory, and a block is allocated to a task on a need basis. The free blocks are stored in a ‘Free Buffer Queue’.
 Most of the RTOS kernels allow tasks to access any of the memory blocks without any memory protection to achieve
predictable timing and avoid the timing overheads
 RTOS kernels assume that the whole design is proven correct and protection is unnecessary. Some commercial RTOS
kernels allow memory protection as optional and the kernel enters a fail-safe mode when an illegal memory access occurs
 A few RTOS kernels implement Virtual Memory concept for memory allocation if the system supports secondary memory
storage (like HDD and FLASH memory).
• Interrupt Handling:
 Interrupts inform the processor that an external device or an associated task requires immediate attention of the
CPU.
 Interrupts can be either Synchronous or Asynchronous.
 Interrupts which occur in sync with the currently executing task are known as Synchronous interrupts. Usually the software interrupts fall under the Synchronous Interrupt category. Divide by zero, memory segmentation error, etc. are examples of Synchronous interrupts.
 For synchronous interrupts, the interrupt handler runs in the same context as the interrupting task.
 Asynchronous interrupts are interrupts which occur at any point of execution of any task and are not in sync with the currently executing task.
 The interrupts generated by external devices connected to the processor/controller (by asserting the interrupt line of the processor/controller to which the interrupt line of the device is connected), timer overflow interrupts, serial data reception/transmission interrupts, etc. are examples of asynchronous interrupts.
• For asynchronous interrupts, the interrupt handler is usually written as a separate task (depending on the OS kernel implementation) and it runs in a different context. Hence, a context switch happens while handling asynchronous interrupts.
 Priority levels can be assigned to the interrupts and each interrupt can be enabled or disabled individually.
 Most RTOS kernels implement a ‘Nested Interrupts’ architecture. Interrupt nesting allows the pre-emption (interruption) of an Interrupt Service Routine (ISR), servicing an interrupt, by a higher priority interrupt.
• Time Management:
Accurate time management is essential for providing precise time reference for all
applications
The time reference to kernel is provided by a high-resolution Real Time Clock (RTC)
hardware chip (hardware timer)
The hardware timer is programmed to interrupt the processor/controller at a fixed rate. This
timer interrupt is referred as ‘Timer tick’
The ‘Timer tick’ is taken as the timing reference by the kernel. The ‘Timer tick’ interval may
vary depending on the hardware timer. Usually the ‘Timer tick’ varies in the microseconds
range
The time parameters for tasks are expressed as the multiples of the ‘Timer tick’
The System time is updated based on the ‘Timer tick’
If the System time register is 32 bits wide and the ‘Timer tick’ interval is 1 microsecond, the System time register will reset in
• 2^32 * 10^-6 / (24 * 60 * 60) = 0.0497 days =~ 1.19 hours
• If the ‘Timer tick’ interval is 1 millisecond, the System time register will reset in
• 2^32 * 10^-3 / (24 * 60 * 60) = 49.7 days =~ 50 days
• The ‘Timer tick’ interrupt is handled by the ‘Timer Interrupt’ handler of kernel. The ‘Timer tick’
interrupt can be utilized for implementing the following actions.
 Save the current context (Context of the currently executing task)
 Increment the System time register by one. Generate timing error and reset the System time
register if the timer tick count is greater than the maximum range available for System time
register
 Update the timers implemented in kernel (Increment or decrement the timer registers for each
timer depending on the count direction setting for each register. Increment registers with count
direction setting = ‘count up’ and decrement registers with count direction setting = ‘countdown’)
 Activate the periodic tasks, which are in the idle state
 Invoke the scheduler and schedule the tasks again based on the scheduling algorithm
 Delete all the terminated tasks and their associated data structures (TCBs)
 Load the context for the first task in the ready queue. Due to the re-scheduling, the task to be run may be a new one, different from the task that was pre-empted by the ‘Timer tick’ interrupt
• Hard Real-time System:
A Real Time Operating Systems which strictly adheres to the timing constraints for a task
A Hard Real Time system must meet the deadlines for a task without any slippage
Missing any deadline may produce catastrophic results for Hard Real-Time Systems, including permanent data loss and irrecoverable damage to the system/users
Hard Real-Time Systems emphasise the principle ‘A late answer is a wrong answer’
Air bag control systems and Anti-lock Brake Systems (ABS) of vehicles are typical
examples of Hard Real Time Systems
As a rule of thumb, Hard Real-Time Systems do not implement the virtual memory model for handling memory. This eliminates the delay in swapping the code corresponding to a task in and out of the primary memory
The presence of a Human In The Loop (HITL) for tasks introduces unexpected delays in task execution. Most Hard Real-Time Systems are automatic and do not contain a ‘human in the loop’
 Soft Real-time System:
Real-Time Operating Systems that do not guarantee meeting deadlines, but offer the best effort to meet them
Missing deadlines for tasks is acceptable if the frequency of deadline misses is within the compliance limit of the Quality of Service (QoS)
A Soft Real-Time system emphasises the principle ‘A late answer is an acceptable answer, but it could have been done a bit faster’
Soft Real-Time systems most often have a ‘human in the loop (HITL)’
An Automatic Teller Machine (ATM) is a typical example of a Soft Real-Time System. If the ATM takes a few seconds more than the ideal operation time, nothing fatal happens.
An audio/video playback system is another example of a Soft Real-Time system. No potential damage arises if a sample arrives late by a fraction of a second for playback.
Signals

• A signal is a software-generated interrupt that is sent to a process by the OS, for example when the user presses Ctrl-C or when another process wants to tell something to this process.
• There is a fixed set of signals that can be sent to a process. Signals are identified by integers.
• Signal numbers have symbolic names.
• There are several default signal handler routines. Each signal is associated with one of these default handler routines.
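A minimal POSIX C sketch (an assumed example, not from the original slides) that replaces the default handler for SIGINT, the signal delivered when the user presses Ctrl-C:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int signum)
{
    (void)signum;
    got_sigint = 1;              /* only async-signal-safe work in a handler */
}

int main(void)
{
    signal(SIGINT, on_sigint);   /* replace the default handler for SIGINT */
    while (!got_sigint)
        pause();                 /* sleep until a signal arrives */
    printf("Caught SIGINT (Ctrl-C), exiting cleanly\n");
    return 0;
}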
Threads
• A thread is a basic unit of CPU utilization.
• It comprises a thread ID, a program counter (PC), a register set, and a stack.
• It shares with other threads belonging to the same process its code section,
data section, and other operating-system resources, such as open files and
signals.
• A traditional process has a single thread of control.
• If a process has multiple threads of control, it can perform more than one task
at a time.
• Threads are also called lightweight processes as they possess some of the
properties of processes.
• A single-threaded process has one program counter specifying the next
instruction to execute.
• This single thread of control allows the process to perform only one task at a
time. Thus, the user cannot simultaneously type in characters and run the
spell checker.
Process vs Thread
• A process is any program in execution; a thread is a segment of a process.
• A process takes more time to terminate; a thread takes less time to terminate.
• A process takes more time to create; a thread takes less time to create.
• A process takes more time for context switching; a thread takes less time for context switching.
• A process is less efficient in terms of communication; a thread is more efficient in terms of communication.
• Multiprogramming holds the concept of multiple processes; multiple threads do not need multiple programs in action, because a single process consists of multiple threads.
• Processes are isolated; threads of the same process share memory.
• A process is called a heavyweight process; a thread is lightweight, as each thread in a process shares code, data, and resources.
• If one process is blocked, it does not affect the execution of other processes; if a user-level thread is blocked, all other user-level threads are blocked.
• A process has its own Process Control Block, stack, and address space; a thread has the parent’s PCB, its own Thread Control Block and stack, and a common address space.
Why threads?
• A multithreaded process has multiple program counters, each
pointing to the next instruction to execute for a given thread.
• A multithreaded word processor could, for example, assign one
thread to manage user input while another thread runs the spell
checker.
Concept of multithreading
Use of multiple threads to execute a process brings the following
advantage.
• Better memory utilization. Multiple threads of the same process share
the address space for data memory. This also reduces the complexity
of inter thread communication since variables can be shared across
the threads.
 Since the process is split into different threads, when one thread enters a wait state, the CPU can be utilised by the other threads of the process that are not waiting for that event. This speeds up the execution of the process.
 Efficient CPU utilisation. The CPU is engaged all the time.
Thread Creation
• The first thing you have to be able to do to write a multi-threaded
program is to create new threads, and thus some kind of thread
creation interface must exist.
The pthread_create() function starts a new thread in the calling process; a handle to the new thread is returned through the first parameter. The new thread starts execution by invoking start_routine(); arg is passed as the sole argument of start_routine().

There are four arguments: thread, attr, start_routine and arg.
• The first, thread, is a pointer to a structure of type pthread_t.
• The second argument, attr, is used to specify any attributes the thread might have, such as stack size or scheduling priority information. In most cases the defaults will be fine; simply pass the value NULL.
• The third argument specifies which function the thread should start executing: a function name (start_routine), which is passed a single argument of type void* (as indicated in the parentheses after start_routine), and which returns a value of type void* (i.e., a void pointer).
• The fourth argument, arg, is the argument to be passed to the function where the thread begins execution.
Thread Completion

The pthread_join() routine waits for a thread to complete. It takes two arguments:
• The first is of type pthread_t, and is used to specify which thread to wait for. This variable is initialized by the thread creation routine.
• The second argument is a pointer to the return value you expect to get back.
Write a C program to demonstrate Creation and Completion of Threads.
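One possible answer (a minimal sketch using the pthread API described above):

#include <pthread.h>
#include <stdio.h>

static void *start_routine(void *arg)
{
    int id = *(int *)arg;
    printf("Thread %d running\n", id);
    return arg;                    /* returned value is collected by join */
}

int main(void)
{
    pthread_t thread;              /* first argument: the thread handle   */
    int arg = 1;                   /* fourth argument: passed to the      */
    void *result;                  /* function where the thread begins    */

    /* attr = NULL selects the default thread attributes */
    pthread_create(&thread, NULL, start_routine, &arg);

    /* wait for the thread to complete and collect its return value */
    pthread_join(thread, &result);
    printf("Thread returned %d\n", *(int *)result);
    return 0;
}

Compile with the pthread library linked in, e.g. gcc demo.c -pthread (the same applies to the mutex example earlier).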
Tasks, Processes & Threads :
 In the Operating System context, a task is defined as the program in execution and the related
information maintained by the Operating system for the program
 A process requires various system resources like CPU for executing the process, memory for
storing the code corresponding to the process and associated variables, I/O devices for
information exchange etc

The structure of a Process
 The concept of ‘Process’ leads to concurrent execution (pseudo parallelism) of tasks and thereby
the efficient utilization of the CPU and other system resources
 Concurrent execution is achieved through the sharing of CPU among the processes.
 A process mimics a processor in properties and holds a set of registers, process status, a Program
Counter (PC) to point to the next executable instruction of the process, a stack for holding the
local variables associated with the process and the code corresponding to the process
 A process, which inherits all the properties of the CPU, can be considered as a virtual processor,
awaiting its turn to have its properties switched into the physical processor
 When the process gets its turn, its registers and Program counter register becomes mapped to the
physical registers of the CPU
The memory occupied by the process is segregated into three regions namely; Stack memory, Data
memory and Code memory
The ‘Stack’ memory holds all temporary data such as variables local to the process
Data memory holds all global data for the process
The code memory contains the program code (instructions) corresponding to the process
On loading a process into the main memory, a specific area of memory is allocated for the process
The stack memory usually starts at the highest memory address from the memory area allocated for the
process (Depending on the OS kernel implementation)
• Process States & State Transition
 The creation of a process to its termination is not a single step
operation
 The process traverses through a series of states during its transition
from the newly created state to the terminated state
 The cycle through which a process changes its state from ‘newly
created’ to ‘execution completed’ is known as ‘Process Life Cycle’.
The various states through which a process traverses through during a
Process Life Cycle indicates the current status of the process with
respect to time and also provides information on what it is allowed to
do next
Process States & State Transition:
 Created State: The state at which a process is being created is referred as ‘Created State’. The
Operating System recognizes a process in the ‘Created State’ but no resources are allocated to the
process
 Ready State: The state, where a process is incepted into the memory and awaiting the processor
time for execution, is known as ‘Ready State’. At this stage, the process is placed in the ‘Ready list’
queue maintained by the OS
 Running State: The state where in the source code instructions corresponding to the process is being
executed is called ‘Running State’. Running state is the state at which the process execution happens
 Blocked State/Wait State: Refers to a state where a running process is temporarily suspended from execution and does not have immediate access to resources. The blocked state might be invoked by various conditions, such as: the process enters a wait state for an event to occur (e.g. waiting for user input such as keyboard input) or is waiting to get access to a shared resource like a semaphore, mutex, etc.
 Completed State: A state where the process completes its execution
 The transition of a process from one state to another is known as ‘State transition’
 When a process changes its state from Ready to running or from running to blocked or terminated or
from blocked to running, the CPU allocation for the process may also change
Threads
 A thread is the primitive that can execute code
 A thread is a single sequential flow of control within a process
 ‘Thread’ is also known as lightweight process
 A process can have many threads of execution

 Different threads, which are part of a process, share the same address
space; meaning they share the data memory, code memory and heap
memory area
 Threads maintain their own thread status (CPU register values),
Program Counter (PC) and stack
TASK COMMUNICATION
• In a multitasking system, multiple tasks/processes run concurrently (in pseudo parallelism) and the processes may or may not interact with each other.
• Based on the degree of interaction, the processes running on an OS are classified as:
 Co-operating Processes: In the co-operating interaction model, one process requires inputs from other processes to complete its execution.
 Competing Processes: Competing processes do not share anything among themselves, but they share and compete for the system resources, such as files, display devices, etc.
• Co-operating processes exchange information and communicate through the following methods:
 Co-operation through Sharing: The co-operating processes exchange data through some shared resources.
 Co-operation through Communication: No data is shared between the processes, but they communicate for synchronisation.

The mechanism through which processes/tasks communicate with each other is known as Inter Process/Task Communication (IPC). Inter Process Communication is essential for process co-ordination. The various types of Inter Process Communication (IPC) mechanisms adopted by processes are kernel (Operating System) dependent.
MULTIPROCESSING AND MULTITASKING
• Multiprocessing describes the ability to execute multiple processes
simultaneously.
• Systems which are capable of performing multiprocessing, are known
as multiprocessor systems.
• Multiprocessor systems possess multiple CPUs and can execute
multiple processes simultaneously.
• The ability of the operating system to have multiple programs in
memory, which are ready for execution, is referred as
multiprogramming.
• In a uniprocessor system, it is not possible to execute multiple
processes simultaneously.
• However, it is possible for a uniprocessor system to achieve some
degree of pseudo parallelism in the execution of multiple processes
by switching the execution among different processes.
• The ability of an operating system to hold multiple processes in
memory and switch the processor (CPU) from executing one process
to another process is known as multitasking. Multitasking creates the
illusion of multiple tasks executing in parallel. Multitasking involves
the switching of CPU from executing one task to another
• Whenever a CPU switching happens, the current context of execution
should be saved to retrieve it at a later point of time when the CPU
executes the process, which is interrupted currently due to execution
switching.
• The context saving and retrieval is essential for resuming a process exactly
from the point where it was interrupted due to CPU switching.
• The act of switching CPU among the processes or changing the current
execution context is known as ‘Context switching’.
• The act of saving the current context which contains the context details
(Register details, memory details, system resource usage details, execution
details, etc.) for the currently running process at the time of CPU switching
is known as ‘ Context saving’.
• The process of retrieving the saved context details for a process, which is
going to be executed due to CPU switching, is known as ‘ Context retrieval’.
• Multitasking involves ‘ Context switching’ , ‘Context saving’ and ‘Context
retrieval’.
Types of Multitasking
• Co-operative Multitasking: Co-operative multitasking is the most primitive
form of multitasking in which a task/process gets a chance to execute only when
the currently executing task/process voluntarily relinquishes the CPU.
• In this method, any task/process can hold the CPU for as much time as it wants.
• Since this type of implementation relies on the tasks being at each other’s mercy for getting CPU time, it is known as co-operative multitasking.
• If the currently executing task is non-cooperative, the other tasks may have to wait for a long time to get the CPU
Preemptive Multitasking: Preemptive multitasking ensures that every task/process gets a chance to
execute.
• When and how much time a process gets is dependent on the implementation of the preemptive
scheduling.
• As the name indicates, in preemptive multitasking, the currently running task/process is preempted to give a chance to other tasks/processes to execute.
• The preemption of a task may be based on time slots or task/process priority
Non-preemptive Multitasking: The process/task, which is currently given the CPU time,
is allowed to execute until it terminates (enters the ‘Completed’ state) or enters the
‘Blocked/Wait’ state, waiting for an I/O.
Co-operative and non-preemptive multitasking differ in their behaviour when tasks are in the ‘Blocked/Wait’ state.
In co-operative multitasking, the currently executing process/task need not relinquish the CPU when it enters the ‘Blocked/Wait’ state, waiting for an I/O, a shared resource access or an event to occur, whereas in non-preemptive multitasking the currently executing task relinquishes the CPU when it waits for an I/O.
Task Scheduling:
 In a multitasking system, there should be some mechanism in place to share the CPU among the
different tasks and to decide which process/task is to be executed at a given point of time
 Determining which task/process is to be executed at a given point of time is known as
task/process scheduling
 Task scheduling forms the basis of multitasking
 Scheduling policies form the guidelines for determining which task is to be executed when
 The scheduling policies are implemented in an algorithm and run by the kernel as a service
 The kernel service/application, which implements the scheduling algorithm, is known as
‘Scheduler’
 The task scheduling policy can be pre-emptive, non-preemptive or co- operative
 Depending on the scheduling policy the process scheduling decision may take place when a
process switches its state to
 ‘Ready’ state from ‘Running’ state
 ‘Blocked/Wait’ state from ‘Running’ state
 ‘Ready’ state from ‘Blocked/Wait’ state
 ‘Completed’ state
The selection of a scheduling criteria/algorithm should consider
• CPU Utilization: The scheduling algorithm should always make the
CPU utilization high. CPU utilization is a direct measure of how
much percentage of the CPU is being utilized.
• Throughput: This gives an indication of the number of processes
executed per unit of time. The throughput for a good scheduler should
always be higher.
• Turn around Time: It is the amount of time taken by a process for
completing its execution. It includes the time spent by the process for
waiting for the main memory, time spent in the ready queue, time
spent on completing the I/O operations, and the time spent in
execution. The turn around time should be minimal for a good scheduling algorithm.
• Waiting Time: It is the amount of time spent by a process in the
‘Ready’ queue waiting to get the CPU time for execution. The waiting
time should be minimal for a good scheduling algorithm.

• Response Time: It is the time elapsed between the submission of a process and the first response. For a good scheduling algorithm, the response time should be as low as possible.
Non-preemptive Scheduling

• Non-preemptive scheduling is employed in systems which implement the non-preemptive multitasking model.
• In this scheduling type, the currently executing task/process is allowed to run until it terminates or enters the ‘Wait’ state waiting for an I/O or system resource.
• First-Come-First-Served (FCFS)/FIFO Scheduling: The First-Come-First-Served (FCFS) scheduling algorithm allocates CPU time to the processes based on the order in which they enter the ‘Ready’ queue.
• The first entered process is serviced first. It is the same as any real-world application where queue systems are used.

Drawbacks:
Favours monopolisation of the CPU by a process: a process which does not contain any I/O operation continues its execution until it finishes its task.
In general, FCFS favours CPU bound processes, and I/O bound processes may have to wait until the completion of the CPU bound process if the currently executing process is CPU bound. This leads to poor device utilisation.
The average waiting time is not minimal for the FCFS scheduling algorithm.
Three processes with process IDs P1, P2, P3 with estimated completion
time 10, 5, 7 milliseconds respectively enters the ready queue together
in the order P1, P2, P3. Calculate the waiting time and Turn Around
Time (TAT) for each process and the average waiting time and Turn
Around Time (Assuming there is no I/O waiting for the processes).
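Worked solution (assuming all three processes are available at t = 0 and scheduling overhead is negligible):
Execution order: P1 (0-10 ms), P2 (10-15 ms), P3 (15-22 ms)
Waiting time: P1 = 0, P2 = 10, P3 = 15 ms; Average waiting time = (0 + 10 + 15)/3 = 8.33 ms
TAT: P1 = 10, P2 = 15, P3 = 22 ms; Average TAT = (10 + 15 + 22)/3 = 15.67 ms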
Calculate the waiting time and Turn Around Time (TAT) for each process
and the Average waiting time and Turn Around Time (Assuming there is
no I/O waiting for the processes) for the above example if the process
enters the ‘Ready’ queue together in the order P2, P1, P3.
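Worked solution (same assumptions): Execution order: P2 (0-5 ms), P1 (5-15 ms), P3 (15-22 ms)
Waiting time: P2 = 0, P1 = 5, P3 = 15 ms; Average waiting time = (0 + 5 + 15)/3 = 6.67 ms
TAT: P2 = 5, P1 = 15, P3 = 22 ms; Average TAT = (5 + 15 + 22)/3 = 14 ms
Note how re-ordering the same workload changes the average waiting time.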
Last-Come-First-Served (LCFS)/LIFO Scheduling: The Last-Come-First-Served (LCFS) scheduling algorithm also allocates CPU time to the processes based on the order in which they enter the ‘Ready’ queue.
The last entered process is serviced first. LCFS scheduling is also known as Last In First Out (LIFO), where the process which is put last into the ‘Ready’ queue is serviced first.
Three processes with process IDs P1, P2, P3 with estimated completion
time 10, 5, 7 milliseconds respectively enters the ready queue together
in the order P1, P2, P3 (Assume only P1 is present in the ‘Ready’ queue
when the scheduler picks it up and P2, P3 entered ‘Ready’ queue after
that). Now a new process P4 with estimated completion time 6 ms
enters the ‘Ready’ queue after 5 ms of scheduling P1. Calculate the
waiting time and Turn Around Time (TAT) for each process and the
Average waiting time and Turn Around Time (Assuming there is no I/O
waiting for the processes). Assume all the processes contain only CPU
operation and no I/O operations are involved.
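Worked solution (assuming P2 and P3 enter the ‘Ready’ queue in that order at t = 0, immediately after P1 is scheduled, and P4 enters at t = 5 ms):
Execution order: P1 (0-10 ms); at t = 10 the last arrival is P4, so P4 (10-16 ms), then P3 (16-23 ms), then P2 (23-28 ms)
Waiting time: P1 = 0, P4 = 10 - 5 = 5, P3 = 16, P2 = 23 ms; Average = (0 + 5 + 16 + 23)/4 = 11 ms
TAT: P1 = 10, P4 = 16 - 5 = 11, P3 = 23, P2 = 28 ms; Average = (10 + 11 + 23 + 28)/4 = 18 ms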
• Shortest Job First (SJF) Scheduling: The Shortest Job First (SJF) scheduling algorithm sorts the ‘Ready’ queue each time a process relinquishes the CPU (either the process terminates or enters the ‘Wait’ state waiting for I/O or a system resource) to pick the process with the shortest (least) estimated completion/run time. In SJF, the process with the shortest estimated run time is scheduled first, followed by the next shortest process, and so on.

Drawbacks:
A process whose estimated execution completion time is high may not get a chance to execute if more and more processes with lesser estimated execution times enter the ‘Ready’ queue before the process with the longest estimated execution time starts its execution.
May lead to ‘Starvation’ of processes with high estimated completion times.
It is difficult to know in advance the next shortest process in the ‘Ready’ queue for scheduling, since new processes with different estimated execution times keep entering the ‘Ready’ queue at any point of time.
• Three processes with process IDs P1, P2, P3 with estimated
completion time 10, 5, 7 milliseconds respectively enters the ready
queue together. Calculate the waiting time and Turn Around Time
(TAT) for each process and the Average waiting time and Turn Around
Time (Assuming there is no I/O waiting for the processes) in SJF
algorithm.
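Worked solution: Execution order: P2 (0-5 ms), P3 (5-12 ms), P1 (12-22 ms)
Waiting time: P2 = 0, P3 = 5, P1 = 12 ms; Average waiting time = (0 + 5 + 12)/3 = 5.67 ms
TAT: P2 = 5, P3 = 12, P1 = 22 ms; Average TAT = (5 + 12 + 22)/3 = 13 ms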
Three processes with process IDs P1, P2, P3 with estimated completion
time 10, 5, 7 milliseconds respectively enters the ready queue together.
A new process P4 with estimated completion time 2 ms enters the
‘Ready’ queue after 2 ms of execution of P2. Calculate the waiting time
and Turn Around Time (TAT) for each process and the Average waiting
time and Turn Around Time
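Worked solution (non-preemptive SJF; P4 arrives at t = 2 ms, while P2 is still executing):
Execution order: P2 (0-5 ms); at t = 5 the ‘Ready’ queue holds P4 (2 ms), P3 (7 ms), P1 (10 ms), so P4 (5-7 ms), P3 (7-14 ms), P1 (14-24 ms)
Waiting time: P2 = 0, P4 = 5 - 2 = 3, P3 = 7, P1 = 14 ms; Average = (0 + 3 + 7 + 14)/4 = 6 ms
TAT: P2 = 5, P4 = 7 - 2 = 5, P3 = 14, P1 = 24 ms; Average = (5 + 5 + 14 + 24)/4 = 12 ms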
Priority Based Scheduling
Priority based non-preemptive scheduling algorithm ensures that a process with high
priority is serviced at the earliest compared to other low priority processes in the ‘Ready’
queue.
The priority of a task/process can be indicated through various mechanisms.
The Shortest Job First (SJF) algorithm can be viewed as a priority based scheduling where
each task is prioritised in the order of the time required to complete the task. The lower
the time required for completing a process the higher is its priority in SJF algorithm.
Another way of priority assigning is associating a priority to the task/process at the time of
creation of the task/process. The priority is a number ranging from 0 to the maximum
priority supported by the OS. The maximum level of priority is OS dependent. For Example,
Windows CE supports 256 levels of priority (0 to 255 priority numbers). While creating the
process/task, the priority can be assigned to it. The priority number associated with a
task/process is the direct indication of its priority. The priority variation from high to low is
represented by numbers from 0 to the maximum priority or by numbers from maximum
priority to 0. For Windows CE operating system a priority number 0 indicates the highest
priority and 255 indicates the lowest priority
Drawbacks:
Similar to the SJF scheduling algorithm, the non-preemptive priority based algorithm also possesses the drawback of ‘Starvation’, where a process whose priority is low may not get a chance to execute if more and more processes with higher priorities enter the ‘Ready’ queue before the process with lower priority starts its execution.
‘Starvation’ can be effectively tackled in priority based non-preemptive scheduling
by dynamically raising the priority of the low priority task/process which is under
starvation (waiting in the ready queue for a longer time for getting the CPU time)
The technique of gradually raising the priority of processes which are waiting in the
‘Ready’ queue as time progresses, for preventing ‘Starvation’, is known as ‘Aging’.
• Three processes with process IDs P1, P2, P3 with estimated
completion time 10, 5, 7 milliseconds and priorities 0, 3, 2 (0—highest
priority, 3—lowest priority) respectively enters the ready queue
together. Calculate the waiting time and Turn Around Time (TAT) for
each process and the Average waiting time and Turn Around Time
(Assuming there is no I/O waiting for the processes) in priority based
scheduling algorithm.
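Worked solution: Execution order (highest priority, i.e. lowest number, first): P1 (0-10 ms), P3 (10-17 ms), P2 (17-22 ms)
Waiting time: P1 = 0, P3 = 10, P2 = 17 ms; Average waiting time = (0 + 10 + 17)/3 = 9 ms
TAT: P1 = 10, P3 = 17, P2 = 22 ms; Average TAT = (10 + 17 + 22)/3 = 16.33 ms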
Three processes with process IDs P1, P2, P3 with estimated completion
time 10, 5, 7 milliseconds and priorities 0, 3, 2 (0—highest priority, 3—
lowest priority) respectively enters the ready queue together. If a new
process P4 with estimated completion time 6 ms and priority 1 enters
the ‘Ready’ queue after 5 ms of execution of P1. Assume all the
processes contain only CPU operation and no I/O operations are
involved.
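Worked solution (non-preemptive, so P4 cannot interrupt P1 even though its priority is higher than that of P3 and P2):
Execution order: P1 (0-10 ms); at t = 10 the queue holds P4 (priority 1), P3 (priority 2), P2 (priority 3), so P4 (10-16 ms), P3 (16-23 ms), P2 (23-28 ms)
Waiting time: P1 = 0, P4 = 10 - 5 = 5, P3 = 16, P2 = 23 ms; Average = (0 + 5 + 16 + 23)/4 = 11 ms
TAT: P1 = 10, P4 = 16 - 5 = 11, P3 = 23, P2 = 28 ms; Average = (10 + 11 + 23 + 28)/4 = 18 ms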
Preemptive Scheduling
Preemptive scheduling is employed in systems which implement the preemptive multitasking model.
In preemptive scheduling, every task in the ‘Ready’ queue gets a chance to execute.
When and how often each process gets a chance to execute (gets the CPU time) is
dependent on the type of preemptive scheduling algorithm used for scheduling the
processes.
In this kind of scheduling, the scheduler can preempt (stop temporarily) the
currently executing task/process and select another task from the ‘Ready’ queue
for execution.
When to pre-empt a task and which task is to be picked up from the ‘Ready’ queue
for execution after preempting the current task is purely dependent on the
scheduling algorithm.
A task which is preempted by the scheduler is moved to the ‘Ready’ queue.
The act of moving a ‘Running’ process/task into the ‘Ready’ queue by the
scheduler, without the processes requesting for it is known as ‘Preemption’
Preemptive SJF Scheduling/ Shortest Remaining Time (SRT)
• The non-preemptive SJF scheduling algorithm sorts the ‘Ready’ queue
only after completing the execution of the current process or when the
process enters ‘Wait’ state, whereas the preemptive SJF scheduling
algorithm sorts the ‘Ready’ queue when a new process enters the ‘Ready’
queue and checks whether the execution time of the new process is
shorter than the remaining of the total estimated time for the currently
executing process.
• If the execution time of the new process is less, the currently executing
process is preempted and the new process is scheduled for execution.
• Thus preemptive SJF scheduling always compares the execution
completion time (It is same as the remaining time for the new process) of
a new process entered the ‘Ready’ queue with the remaining time for
completion of the currently executing process and schedules the process
with shortest remaining time for execution.
• Preemptive SJF scheduling is also known as Shortest Remaining Time (SRT) scheduling.
Three processes with process IDs P1, P2, P3 with estimated completion
time 10, 5, 7 milliseconds respectively enters the ready queue together.
A new process P4 with estimated completion time 2 ms enters the
‘Ready’ queue after 2 ms. Assume all the processes contain only CPU
operation and no I/O operations are involved.
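Worked solution (preemptive SJF/SRT; P4 arrives at t = 2 ms):
At t = 0 the shortest job P2 (5 ms) starts. At t = 2, P4 (2 ms) arrives; P2’s remaining time (3 ms) exceeds 2 ms, so P2 is preempted.
Execution order: P2 (0-2 ms), P4 (2-4 ms), P2 (4-7 ms), P3 (7-14 ms), P1 (14-24 ms)
Waiting time: P2 = 7 - 5 = 2, P4 = 0, P3 = 7, P1 = 14 ms; Average = (2 + 0 + 7 + 14)/4 = 5.75 ms
TAT: P2 = 7, P4 = 4 - 2 = 2, P3 = 14, P1 = 24 ms; Average = (7 + 2 + 14 + 24)/4 = 11.75 ms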
Round Robin (RR) Scheduling
In Round Robin scheduling, each process in the ‘Ready’ queue is executed for a pre-
defined time slot.
The execution starts with picking up the first process in the ‘Ready’ queue
It is executed for a pre-defined time and when the pre-defined time elapses or the
process completes (before the pre-defined time slice), the next process in the
‘Ready’ queue is selected for execution.
This is repeated for all the processes in the ‘Ready’ queue.
Once each process in the ‘Ready’ queue is executed for the pre-defined time period,
the scheduler comes back and picks the first process in the ‘Ready’ queue again for
execution.
The sequence is repeated.
This reveals that Round Robin scheduling is similar to FCFS scheduling; the only difference is that a time-slice based preemption is added to switch the execution between the processes in the ‘Ready’ queue. The ‘Ready’ queue can be considered as a circular queue in which the scheduler picks up the first process for execution, moves to the next till the end of the queue, and then comes back to the beginning of the queue.
Three processes with process IDs P1, P2, P3 with estimated completion
time 6, 4, 2 milliseconds respectively, enters the ready queue together
in the order P1, P2, P3. Calculate the waiting time and Turn Around
Time (TAT) for each process and the Average waiting time and Turn
Around Time (Assuming there is no I/O waiting for the processes) in RR
algorithm with Time slice = 2 ms.
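Worked solution (time slice = 2 ms):
Execution order: P1 (0-2), P2 (2-4), P3 (4-6, completes), P1 (6-8), P2 (8-10, completes), P1 (10-12, completes)
TAT: P1 = 12, P2 = 10, P3 = 6 ms; Average TAT = (12 + 10 + 6)/3 = 9.33 ms
Waiting time (TAT minus execution time): P1 = 12 - 6 = 6, P2 = 10 - 4 = 6, P3 = 6 - 2 = 4 ms; Average = (6 + 6 + 4)/3 = 5.33 ms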
Priority Based Scheduling
• Priority based preemptive scheduling algorithm is same as that of the
non-preemptive priority based scheduling except for the switching of
execution between tasks.
• In preemptive scheduling, any high priority process entering the
‘Ready’ queue is immediately scheduled for execution whereas in the
non-preemptive scheduling any high priority process entering the
‘Ready’ queue is scheduled only after the currently executing process
completes its execution or only when it voluntarily relinquishes the
CPU.
• The priority of a task/process in preemptive scheduling is indicated in the same way as in the mechanism adopted for non-preemptive scheduling.
Three processes with process IDs P1, P2, P3 with estimated completion
time 10, 5, 7 milliseconds and priorities 1, 3, 2 (0—highest priority, 3—
lowest priority) respectively enters the ready queue together. A new
process P4 with estimated completion time 6 ms and priority 0 enters
the ‘Ready’ queue after 5 ms of start of execution of P1. Assume all the
processes contain only CPU operation and no I/O operations are
involved.
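Worked solution (preemptive priority; P4 with priority 0 arrives at t = 5 ms):
Execution order: P1 (0-5 ms), preempted by P4 (5-11 ms), P1 resumes (11-16 ms), P3 (16-23 ms), P2 (23-28 ms)
Waiting time: P1 = 16 - 10 = 6, P4 = 0, P3 = 16, P2 = 23 ms; Average = (6 + 0 + 16 + 23)/4 = 11.25 ms
TAT: P1 = 16, P4 = 11 - 5 = 6, P3 = 23, P2 = 28 ms; Average = (16 + 6 + 23 + 28)/4 = 18.25 ms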
