Module 5

The document provides an overview of Real-Time Operating Systems (RTOS) and their design, highlighting the core functions of operating systems, the architecture of kernels, and the differences between monolithic and microkernel designs. It explains the characteristics of General Purpose Operating Systems (GPOS) versus Real-Time Operating Systems (RTOS), detailing task management, memory management, and interrupt handling specific to RTOS. Additionally, it distinguishes between hard and soft real-time systems, emphasizing their operational requirements and examples.

Module 5: RTOS and IDE for Embedded System Design
@ McGraw-Hill Education

Designing with RTOS

Operating System Basics

 The Operating System acts as a bridge between the user applications/tasks and
the underlying system resources through a set of system functionalities and
services
 OS manages the system resources and makes them available to the user
applications/tasks on a need basis
 The primary functions of an Operating System are
 Make the system convenient to use
 Organize and manage the system resources efficiently and correctly
[Figure: The Operating System Architecture — user applications access the kernel services (Process Management, Memory Management, Time Management, File System Management, I/O System Management and the Device Driver Interface) through the Application Programming Interface (API), which in turn sits above the underlying hardware.]

The Kernel

 The kernel is the core of the operating system
 It is responsible for managing the system resources and the communication between the hardware and the other system services
 The kernel contains a set of system libraries and services. For a general purpose OS, the kernel contains services such as
 Process Management
 Primary Memory Management
 File System Management
 I/O System (Device) Management
 Secondary Storage Management
 Protection
 Time Management
 Interrupt Handling



Kernel Space and User Space
 The program code corresponding to the kernel applications/services is kept in a contiguous area (OS dependent) of primary (working) memory and is protected from unauthorized access by user programs/applications
 The memory space at which the kernel code is located is known as ‘Kernel Space’
 All user applications are loaded to a specific area of primary memory and this memory area is referred to as ‘User Space’
 The partitioning of memory into kernel and user space is purely Operating System dependent
 An operating system with virtual memory support loads the user applications into their corresponding virtual memory space using the demand paging technique
 Most operating systems keep the kernel application code in main memory; it is not swapped out to secondary memory



Monolithic Kernel
 All kernel services run in the kernel space
 All kernel modules run within the same memory space under a single kernel thread
 The tight internal integration of kernel modules in the monolithic kernel architecture allows effective utilization of the low-level features of the underlying system
 The major drawback of the monolithic kernel is that any error or failure in any one of the kernel modules leads to the crashing of the entire kernel application
 LINUX, SOLARIS and MS-DOS kernels are examples of monolithic kernels



Microkernel
 The microkernel design incorporates only the essential set of Operating System services into the kernel
 The rest of the Operating System services are implemented in programs known as ‘Servers’, which run in user space
 Memory management, process management, timer systems and interrupt handlers are examples of essential services that form part of the microkernel
 QNX and Minix 3 kernels are examples of microkernels

Benefits
 Robustness: If a problem is encountered in any of the services running as ‘Server’ applications, it can be reconfigured and restarted without restarting the entire OS. This approach is highly useful for systems demanding high availability
 Configurability: Any service running as a ‘Server’ application can be changed without restarting the whole system. This makes the system dynamically configurable


Types of Operating Systems
Depending on the type of kernel and kernel services, the purpose and type of computing system where the OS is deployed, and the responsiveness to applications, Operating Systems are classified into

General Purpose Operating System (GPOS)
 Operating Systems which are deployed in general computing systems
 The kernel is more generalized and contains all the services required to execute generic applications
 Need not be deterministic in execution behavior
 May inject random delays into application software and thus cause slow responsiveness of an application at unexpected times
 Usually deployed in computing systems where deterministic behavior is not an important criterion
 A Personal Computer/Desktop system is a typical example of a system where a GPOS is deployed
 Windows XP, MS-DOS etc. are examples of General Purpose Operating Systems


Real-Time Operating System (RTOS)
 Operating Systems which are deployed in embedded systems demanding real-time response
 Deterministic in execution behavior; consumes only a known amount of time for kernel operations
 Implements scheduling policies for always executing the highest priority task/application
 Implements policies and rules concerning time-critical allocation of a system’s resources
 Windows CE, QNX, VxWorks, MicroC/OS-II etc. are examples of Real-Time Operating Systems (RTOS)



The Real Time Kernel

The kernel of a Real Time Operating System is referred to as the Real Time kernel. In contrast to the conventional OS kernel, the Real Time kernel is highly specialized and contains only the minimal set of services required for running the user applications/tasks. The basic functions of a Real Time kernel are

– Task/Process management
– Task/Process scheduling
– Task/Process synchronization
– Error/Exception handling
– Memory Management
– Interrupt handling
– Time management

Real Time Kernel – Task/Process Management
Deals with setting up the memory space for the tasks, loading the task’s code into the memory space, allocating system resources, setting up a Task Control Block (TCB) for the task and task/process termination/deletion. A Task Control Block (TCB) is used for holding the information corresponding to a task. A TCB usually contains the following set of information
• Task ID: Task Identification Number
• Task State: The current state of the task (E.g. State = ‘Ready’ for a task which is ready to execute)
• Task Type: Indicates the type of the task. The task can be a hard real time, soft real time or background task
• Task Priority: The priority of the task (E.g. Task Priority = 1 for a task with priority 1)
• Task Context Pointer: Pointer used for context saving
• Task Memory Pointers: Pointers to the code memory, data memory and stack memory of the task
• Task System Resource Pointers: Pointers to system resources (semaphores, mutexes etc.) used by the task
• Task Pointers: Pointers to other TCBs (TCBs for preceding, next and waiting tasks)
• Other Parameters: Other relevant task parameters
The parameters and implementation of the TCB are kernel dependent. The TCB parameters vary across different kernels, based on the task management implementation
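The TCB fields listed above can be sketched as a plain data structure. The following Python sketch is purely illustrative — the field names and types are chosen here for readability and are not taken from any particular kernel, which would typically use a C struct:

```python
from dataclasses import dataclass, field

@dataclass
class TCB:
    """Illustrative sketch of a Task Control Block."""
    task_id: int                        # Task Identification Number
    state: str = "Created"              # e.g. 'Ready', 'Running', 'Blocked'
    task_type: str = "background"       # 'hard real time', 'soft real time' or 'background'
    priority: int = 0                   # task priority
    context_ptr: object = None          # pointer used for context saving
    memory_ptrs: dict = field(default_factory=dict)    # code/data/stack memory pointers
    resource_ptrs: list = field(default_factory=list)  # semaphores, mutexes etc.
    next_tcb: "TCB" = None              # link to other TCBs (next/waiting tasks)

# A task that has been incepted into memory and is awaiting CPU time
tcb = TCB(task_id=1, state="Ready", priority=1)
```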
• Task/Process Scheduling: Deals with sharing the CPU among various
tasks/processes. A kernel application called ‘Scheduler’ handles the task
scheduling. Scheduler is nothing but an algorithm implementation, which
performs the efficient and optimal scheduling of tasks to provide a
deterministic behavior.
• Task/Process Synchronization: Deals with synchronizing the concurrent
access of a resource, which is shared across multiple tasks and the
communication between various tasks.
• Error/Exception handling: Deals with registering and handling the errors/exceptions that occur during the execution of tasks. Insufficient memory, timeouts, deadlocks, deadline misses, bus errors, divide by zero, unknown instruction execution etc. are examples of errors/exceptions. Errors/exceptions can happen at the kernel level services or at the task level. Deadlock is an example of a kernel level exception, whereas timeout is an example of a task level exception. The OS kernel gives information about the error through a system call (API).



Memory Management
 The memory management function of an RTOS kernel is slightly different from that of General Purpose Operating Systems
 In general, the memory allocation time increases with the size of the block of memory that needs to be allocated and depends on the state of the allocated memory block (an initialized memory block consumes more allocation time than an un-initialized memory block)
 An RTOS generally uses a ‘block’ based memory allocation technique, instead of the usual dynamic memory allocation techniques used by a GPOS
 The RTOS kernel uses fixed-size blocks of dynamic memory, and a block is allocated to a task on a need basis. The blocks are stored in a ‘Free buffer Queue’
 Most RTOS kernels allow tasks to access any of the memory blocks without any memory protection, to achieve predictable timing and avoid timing overheads
 RTOS kernels assume that the whole design is proven correct and protection is unnecessary. Some commercial RTOS kernels offer memory protection as an option, and the kernel enters a fail-safe mode when an illegal memory access occurs
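The fixed-size block scheme with a ‘Free buffer Queue’ can be sketched as follows. The block size, block count and queue discipline here are illustrative assumptions, not taken from any specific RTOS:

```python
from collections import deque

class BlockPool:
    """Sketch of RTOS-style fixed-size block allocation with a 'Free buffer Queue'."""
    def __init__(self, num_blocks=8, block_size=128):
        self.block_size = block_size
        self.free_q = deque(range(num_blocks))  # indices of free blocks

    def alloc(self):
        # Deterministic O(1): pop a ready-made block — no searching, splitting
        # or initialization, so allocation time is independent of request size
        return self.free_q.popleft() if self.free_q else None

    def free(self, block):
        self.free_q.append(block)  # return the block to the Free buffer Queue

pool = BlockPool(num_blocks=2)
a, b = pool.alloc(), pool.alloc()
exhausted = pool.alloc()   # None: the pool is empty, failure is immediate
pool.free(a)               # block 'a' rejoins the Free buffer Queue
```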



Interrupt Handling

Interrupts inform the processor that an external device or an associated task requires immediate attention of the CPU.
Interrupts can be either Synchronous or Asynchronous.
Interrupts which occur in sync with the currently executing task are known as Synchronous interrupts. Usually the software interrupts fall under the Synchronous Interrupt category. Divide by zero, memory segmentation error etc. are examples of Synchronous interrupts.
Asynchronous interrupts are interrupts which occur at any point in the execution of any task and are not in sync with the currently executing task.
For asynchronous interrupts, the interrupt handler is usually written as a separate task (depending on the OS kernel implementation) and it runs in a different context. Hence, a context switch happens while handling asynchronous interrupts.
Priority levels can be assigned to the interrupts, and each interrupt can be enabled or disabled individually.


Time Management
 Accurate time management is essential for providing a precise time reference for all applications
 The time reference to the kernel is provided by a high-resolution Real Time Clock (RTC) hardware chip (hardware timer)
 The hardware timer is programmed to interrupt the processor/controller at a fixed rate. This timer interrupt is referred to as the ‘Timer tick’
 The ‘Timer tick’ is taken as the timing reference by the kernel. The ‘Timer tick’ interval may vary depending on the hardware timer. Usually the ‘Timer tick’ is in the microseconds range
 The time parameters for tasks are expressed as multiples of the ‘Timer tick’
 The System time is updated based on the ‘Timer tick’



Time Management
The ‘Timer tick’ interrupt is handled by the ‘Timer Interrupt’ handler of the kernel. The ‘Timer tick’ interrupt can be utilized for implementing the following actions.
 Save the current context (Context of the currently executing task)
 Increment the System time register by one. Generate a timing error and reset the System time register if the timer tick count is greater than the maximum range available for the System time register
 Update the timers implemented in the kernel (Increment or decrement the timer registers for each timer depending on the count direction setting for each register. Increment registers with count direction setting = ‘count up’ and decrement registers with count direction setting = ‘count down’)
 Activate the periodic tasks which are in the idle state
 Invoke the scheduler and schedule the tasks again based on the scheduling algorithm
 Delete all the terminated tasks and their associated data structures (TCBs)
 Load the context for the first task in the ready queue. Due to the re-scheduling, the ready task might be a new one, different from the task which was pre-empted by the ‘Timer Interrupt’
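The tick actions above can be sketched in outline. In this illustrative Python sketch, the register width, the timer representation and the omitted steps are assumptions made for the example, not details of any real kernel:

```python
MAX_SYSTEM_TIME = 2**32 - 1   # assumed width of the System time register

class TickHandler:
    """Sketch of a kernel 'Timer tick' interrupt handler."""
    def __init__(self):
        self.system_time = 0
        self.timers = []       # [count, direction] pairs; direction is +1 or -1

    def on_timer_tick(self):
        # (1) context saving of the running task would happen here
        # (2) advance the System time register, resetting on overflow
        self.system_time += 1
        if self.system_time > MAX_SYSTEM_TIME:
            self.system_time = 0   # a timing error would be flagged here
        # (3) update kernel timers per their count direction setting
        for t in self.timers:
            t[0] += t[1]
        # (4)-(7) activating periodic tasks, invoking the scheduler, reaping
        # terminated tasks and loading the next context are omitted here

k = TickHandler()
k.timers = [[0, +1], [10, -1]]   # one 'count up' timer, one 'count down' timer
k.on_timer_tick()
```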


Hard Real-time System
 A Real Time Operating System which strictly adheres to the timing constraints for a task
 A Hard Real Time system must meet the deadlines for a task without any slippage
 Missing any deadline may produce catastrophic results for Hard Real Time Systems, including permanent data loss and irrecoverable damage to the system/users
 Emphasizes the principle ‘A late answer is a wrong answer’
 Air bag control systems and Anti-lock Brake Systems (ABS) of vehicles are typical examples of Hard Real Time Systems
 As a rule of thumb, Hard Real Time Systems do not implement the virtual memory model for handling memory. This eliminates the delay in swapping the code corresponding to a task in and out of primary memory
 The presence of a Human In The Loop (HITL) for tasks introduces unexpected delays in task execution. Most Hard Real Time Systems are automatic and do not contain a ‘human in the loop’



Soft Real-time System
 Real Time Operating Systems that do not guarantee meeting deadlines, but offer a best effort to meet them
 Missing deadlines for tasks is acceptable if the frequency of deadline misses is within the compliance limit of the Quality of Service (QoS)
 A Soft Real Time system emphasizes the principle ‘A late answer is an acceptable answer, but it could have been done a bit faster’
 Soft Real Time systems most often have a ‘Human In The Loop (HITL)’
 An Automatic Teller Machine (ATM) is a typical example of a Soft Real Time System. If the ATM takes a few seconds more than the ideal operation time, nothing fatal happens
 An audio-video playback system is another example of a Soft Real Time system. No potential damage arises if a sample comes late by a fraction of a second during playback



Tasks, Processes & Threads

 In the Operating System context, a task is defined as a program in execution together with the related information maintained by the Operating System for that program
 A task is also known as a ‘Job’ in the operating system context
 A program, or part of it, in execution is also called a ‘Process’
 The terms ‘Task’, ‘Job’ and ‘Process’ refer to the same entity in the Operating System context and most often they are used interchangeably
 A process requires various system resources, like the CPU for executing the process, memory for storing the code corresponding to the process and its associated variables, and I/O devices for information exchange



The Structure of a Process
 The concept of a ‘Process’ leads to concurrent execution (pseudo parallelism) of tasks and thereby the efficient utilization of the CPU and other system resources
 Concurrent execution is achieved through the sharing of the CPU among the processes
 A process holds a set of registers, process status, a Program Counter (PC) to point to the next executable instruction of the process, a stack for holding the local variables associated with the process, and the code corresponding to the process
 When the process gets its turn, its registers and Program Counter become mapped to the physical registers of the CPU



Memory Organization of a Process
 The memory occupied by the process is segregated into three regions, namely Stack memory, Data memory and Code memory
 The ‘Stack’ memory holds all temporary data, such as variables local to the process
 Data memory holds all global data for the process
 The code memory contains the program code (instructions) corresponding to the process
 On loading a process into the main memory, a specific area of memory is allocated for the process
 The stack memory usually starts at the highest memory address of the memory area allocated for the process (depending on the OS kernel implementation)
[Figure: Memory layout of a process — Stack memory at the top (grows downwards), Data memory below it (grows upwards), Code memory at the bottom.]


Process States & State Transition
 The creation of a process to its termination is not a single step operation
 The process traverses through a series of states during its transition from the newly created state to the terminated state
 The cycle through which a process changes its state from ‘newly created’ to ‘execution completed’ is known as the ‘Process Life Cycle’. The various states through which a process traverses during the Process Life Cycle indicate the current status of the process with respect to time and also provide information on what it is allowed to do next
[Figure: Process state transition diagram — Created → Ready (incepted into memory); Ready → Running (scheduled for execution); Running → Ready (interrupted or preempted); Running ↔ Blocked; Running → Completed (execution completion).]
• Created State: The state at which a process is being created is referred to as the ‘Created State’. The Operating System recognizes a process in the ‘Created State’, but no resources are allocated to the process
• Ready State: The state where a process is incepted into memory and is awaiting processor time for execution is known as the ‘Ready State’. At this stage, the process is placed in the ‘Ready list’ queue maintained by the OS
• Running State: The state wherein the source code instructions corresponding to the process are being executed is called the ‘Running State’. The Running state is the state at which the process execution happens
• Blocked State/Wait State: Refers to a state where a running process is temporarily suspended from execution and does not have immediate access to resources. The blocked state might be invoked by various conditions, like the process entering a wait state for an event to occur (E.g. waiting for user input such as keyboard input) or waiting to get access to a shared resource like a semaphore, mutex etc.
• Completed State: A state where the process completes its execution
 The transition of a process from one state to another is known as ‘State transition’
 When a process changes its state from Ready to Running, from Running to Blocked or Terminated, or from Blocked to Running, the CPU allocation for the process may also change

Threads

 A thread is the primitive that can execute code
 A thread is a single sequential flow of control within a process
 A ‘Thread’ is also known as a lightweight process
 A process can have many threads of execution
 Different threads, which are part of a process, share the same address space
 Threads maintain their own thread status (CPU register values), Program Counter (PC) and stack

The Concept of multithreading

Using multiple threads to execute a process brings the following advantages.
 Better memory utilization: multiple threads of the same process share the address space for data memory. This also reduces the complexity of inter-thread communication, since variables can be shared across the threads
 Since the process is split into different threads, when one thread enters a wait state, the CPU can be utilized by other threads of the process that do not depend on the event the waiting thread is blocked on. This speeds up the execution of the process
 Efficient CPU utilization: the CPU is engaged all the time

User & Kernel level threads

• User Level Thread: User level threads do not have kernel/Operating System support and they exist solely in the running process. Even if a process contains multiple user level threads, the OS treats it as a single thread and will not switch the execution among the different threads of it. It is the responsibility of the process to schedule each thread as and when required. In summary, user level threads of a process are non-preemptive at thread level from the OS perspective.
• Kernel Level/System Level Thread: Kernel level threads are individual units of execution, which the OS treats as separate threads. The OS interrupts the execution of the currently running kernel thread and switches the execution to another kernel thread based on the scheduling policies implemented by the OS.


Thread V/s Process
Thread: A thread is a single unit of execution and is part of a process.
Process: A process is a program in execution and contains one or more threads.

Thread: A thread does not have its own data memory and heap memory; it shares the data memory and heap memory with the other threads of the same process.
Process: A process has its own code memory, data memory and stack memory.

Thread: A thread cannot live independently; it lives within a process.
Process: A process contains at least one thread.

Thread: There can be multiple threads in a process. The first thread (main thread) calls the main function and occupies the start of the stack memory of the process.
Process: Threads within a process share the code, data and heap memory. Each thread holds a separate memory area for its stack (sharing the total stack memory of the process).

Thread: Threads are very inexpensive to create.
Process: Processes are very expensive to create and involve much OS overhead.

Thread: Context switching is inexpensive and fast.
Process: Context switching is complex, involves a lot of OS overhead and is comparatively slower.

Thread: If a thread expires, its stack is reclaimed by the process.
Process: If a process dies, the resources allocated to it are reclaimed by the OS and all the associated threads of the process also die.
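The shared data memory point in the comparison can be demonstrated with a short Python sketch: every thread of the process appends to the same list (the names `results` and `worker` are illustrative):

```python
import threading

results = []               # data memory shared by every thread of the process
lock = threading.Lock()    # shared access still needs synchronization

def worker(n):
    with lock:
        results.append(n)  # each thread writes into the one shared list

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# all four appends landed in the single shared list
```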

Multiprocessing & Multitasking

 The ability to execute multiple processes simultaneously is referred to as multiprocessing
 Systems which are capable of performing multiprocessing are known as multiprocessor systems
 The ability of the Operating System to have multiple programs in memory, which are ready for execution, is referred to as multiprogramming
 Multitasking refers to the ability of an operating system to hold multiple processes in memory and switch the processor (CPU) from executing one process to another
 Multitasking involves ‘Context switching’, ‘Context saving’ and ‘Context retrieval’
 Context switching refers to the switching of execution context from one task to another
 When a task/process switch happens, the current context of execution should be saved (Context saving), to be retrieved at a later point in time when the CPU resumes executing the process which is currently interrupted due to the execution switch
 During context switching, the context of the task to be executed is retrieved from the saved context list. This is known as Context retrieval


Types of Multitasking
Depending on how the task/process execution switching is implemented, multitasking is classified into
• Co-operative Multitasking: Co-operative multitasking is the most primitive form of multitasking, in which a task/process gets a chance to execute only when the currently executing task/process voluntarily relinquishes the CPU. In this method, any task/process can hold the CPU for as much time as it wants. Since this type of implementation depends on the tasks being at each other's mercy for getting CPU time, it is known as co-operative multitasking.
• Preemptive Multitasking: Preemptive multitasking ensures that every task/process gets a chance to execute. When and for how much time a process gets the CPU depends on the implementation of the preemptive scheduling. As the name indicates, in preemptive multitasking the currently running task/process is preempted to give other tasks/processes a chance to execute. The preemption of a task may be based on time slots or task/process priority.
• Non-preemptive Multitasking: The process/task which is currently given the CPU time is allowed to execute until it terminates (enters the ‘Completed’ state) or enters the ‘Blocked/Wait’ state, waiting for an I/O. Co-operative and non-preemptive multitasking differ in their behavior in the ‘Blocked/Wait’ state. In co-operative multitasking, the currently executing process/task need not relinquish the CPU when it enters the ‘Blocked/Wait’ state, waiting for an I/O, a shared resource access or an event to occur, whereas in non-preemptive multitasking the currently executing task relinquishes the CPU when it waits for an I/O.
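Co-operative multitasking can be mimicked with Python generators, where `yield` plays the role of a task voluntarily relinquishing the CPU. This is a toy sketch, not an RTOS API; the function names are invented for the example:

```python
def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"        # voluntarily relinquish the CPU here

def run_cooperative(tasks):
    """Round-robin over tasks that only switch when they yield."""
    trace = []
    queue = list(tasks)
    while queue:
        t = queue.pop(0)
        try:
            trace.append(next(t))  # run the task until it yields
            queue.append(t)        # it rejoins the back of the queue
        except StopIteration:
            pass                   # task completed, drop it
    return trace

trace = run_cooperative([task("A", 2), task("B", 1)])
```

A task that never yields would monopolize the CPU forever — exactly the weakness of the co-operative model described above.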

Task Scheduling
 In a multitasking system, there should be some mechanism in place to share the
CPU among the different tasks and to decide which process/task is to be
executed at a given point of time
 Determining which task/process is to be executed at a given point of time is
known as task/process scheduling
 Task scheduling forms the basis of multitasking
 Scheduling policies form the guidelines for determining which task is to be executed when
 The scheduling policies are implemented in an algorithm, which is run by the kernel as a service
 The kernel service/application, which implements the scheduling algorithm, is
known as ‘Scheduler’
 The task scheduling policy can be pre-emptive, non-preemptive or co-operative



Task Scheduling - Scheduler Selection
The selection of a scheduling criterion/algorithm should consider
• CPU Utilization: The scheduling algorithm should always keep the CPU utilization high. CPU utilization is a direct measure of what percentage of the CPU is being utilized.
• Throughput: This gives an indication of the number of processes executed per unit of time. The throughput for a good scheduler should always be high.
• Turnaround Time: The amount of time taken by a process to complete its execution. It includes the time spent by the process waiting for main memory, time spent in the ready queue, time spent completing I/O operations, and the time spent in execution. The turnaround time should be minimal for a good scheduling algorithm.
• Waiting Time: The amount of time spent by a process in the ‘Ready’ queue waiting to get CPU time for execution. The waiting time should be minimal for a good scheduling algorithm.
• Response Time: The time elapsed between the submission of a process and its first response. For a good scheduling algorithm, the response time should be as low as possible.
To summarize, a good scheduling algorithm has high CPU utilization, minimum Turn Around Time (TAT), maximum throughput and least response time.


Task Scheduling - Queues
The various queues maintained by OS in association with CPU scheduling are
• Job Queue: Job queue contains all the processes in the system
• Ready Queue: Contains all the processes, which are ready for execution and
waiting for CPU to get their turn for execution. The Ready queue is empty
when there is no process ready for running.
• Device Queue: Contains the set of processes, which are waiting for an I/O
device



Task Scheduling – Task Transition through Various Queues
[Figure: Processes admitted from the Job Queue enter the Ready Queue; the Scheduler dispatches a process from the Ready Queue to the CPU, where it either runs to completion, is preempted and moved back to the ‘Ready’ queue, or issues a resource request and is moved to the Device Queue; on I/O completion the Device Manager moves the process back to the ‘Ready’ queue.]
Non-Preemptive Scheduling
First-Come-First-Served (FCFS)/FIFO Scheduling
Three processes with process IDs P1, P2, P3 and estimated completion times of 10, 5 and 7 milliseconds respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT) for each process, and the average waiting time and Turn Around Time (assuming there is no I/O waiting for the processes).

The processes run in arrival order, so the waiting times are 0, 10 and 15 ms and the Turn Around Times are 10, 15 and 22 ms for P1, P2 and P3 respectively.
Average waiting time = (Waiting time for all processes) / No. of Processes
= (0 + 10 + 15)/3 = 25/3
= 8.33 milliseconds
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (10 + 15 + 22)/3 = 47/3
= 15.66 milliseconds
Average Execution Time = (Execution time for all processes) / No. of processes
= (10 + 5 + 7)/3 = 22/3 = 7.33 milliseconds
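The FCFS figures can be checked with a few lines of Python (the list layout is illustrative; since all processes arrive at t = 0, a process waits until every earlier arrival finishes and its TAT equals its finish time):

```python
# (name, burst in ms), in arrival/queue order; all arrive at t = 0
procs = [("P1", 10), ("P2", 5), ("P3", 7)]

t, waiting, tat = 0, [], []
for name, burst in procs:
    waiting.append(t)   # waits for all earlier arrivals to finish
    t += burst
    tat.append(t)       # TAT = finish time, since arrival time is 0

avg_wait = sum(waiting) / len(procs)   # 25/3 ≈ 8.33 ms
avg_tat = sum(tat) / len(procs)        # 47/3 ≈ 15.66 ms
```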
Last-Come-First Served (LCFS)/LIFO Scheduling

Three processes with process IDs P1, P2, P3 with estimated completion
time 10, 5, 7 milliseconds respectively enters the ready queue together in
the order P1, P2, P3 (Assume only P1 is present in the ‘Ready’ queue when
the scheduler picks it up and P2, P3 entered ‘Ready’ queue after that).
Now a new process P4 with estimated completion time 6 ms enters the
‘Ready’ queue after 5 ms of scheduling P1. Calculate the waiting time and
Turn Around Time (TAT) for each process and the Average waiting time
and Turn Around Time (Assuming there is no I/O waiting for the
processes). Assume all the processes contain only CPU operation and no
I/O operations are involved
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (0 + 5 + 16 + 23)/4 = 44/4
= 11 milliseconds
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4
= (10 + 11 + 23 + 28)/4
= 72/4 = 18 milliseconds
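The LCFS walk-through can be checked in Python. The pop order P4, P3, P2 is written out explicitly here, following the example's assumption that only P1 is present when scheduling starts and that P4, as the latest arrival, sits on top of the LIFO queue when P1 finishes:

```python
burst = {"P1": 10, "P2": 5, "P3": 7, "P4": 6}
arrival = {"P1": 0, "P2": 0, "P3": 0, "P4": 5}

t = burst["P1"]                   # only P1 is in the queue at scheduling time
finish = {"P1": t}
for name in ["P4", "P3", "P2"]:   # LIFO: last arrival is served first
    t += burst[name]
    finish[name] = t

waiting = {n: finish[n] - arrival[n] - burst[n] for n in burst}
tat = {n: finish[n] - arrival[n] for n in burst}
avg_wait = sum(waiting.values()) / 4   # 44/4 = 11 ms
avg_tat = sum(tat.values()) / 4        # 72/4 = 18 ms
```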
Shortest Job First (SJF) Scheduling
Three processes with process IDs P1, P2, P3 with estimated completion
time 10, 5, 7 milliseconds respectively enters the ready queue together.
Calculate the waiting time and Turn Around Time (TAT) for each process
and the Average waiting time and Turn Around Time (Assuming there is
no I/O waiting for the processes) in SJF algorithm.

Average waiting time= (0+5+12)/3 = 17/3


= 5.66 milliseconds.
Average Turn Around Time = (5+12+22)/3 = 39/3
= 13 milliseconds
The average Execution Time = (10+5+7)/3 = 22/3 = 7.33 milliseconds
Priority Based Scheduling

Three processes with process IDs P1, P2, P3 with estimated completion
times 10, 5, 7 milliseconds and priorities 0, 3, 2 (0 is the highest priority, 3 the
lowest) respectively enter the ready queue together. Calculate the
waiting time and Turn Around Time (TAT) for each process and the
Average waiting time and Turn Around Time (Assuming there is no I/O
waiting for the processes) in priority based scheduling algorithm.

Average waiting time = (0+10+17)/3 = 27/3 = 9 milliseconds


Average Turn Around Time = (10+17+22)/3 = 49/3 = 16.33 milliseconds
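Non-preemptive SJF and non-preemptive priority scheduling are both "sort the Ready queue, then run each process to completion" policies; only the sort key differs. An illustrative Python sketch (the helper name `non_preemptive` is hypothetical), using the figures from the two examples above:

```python
def non_preemptive(bursts, key):
    """Run processes to completion in the order given by sorting on `key`.

    bursts: {pid: burst_ms}, all processes arriving at t=0.
    key(pid) -> sort key (burst time for SJF, priority value for priority).
    Returns {pid: (waiting_ms, turnaround_ms)}."""
    clock, results = 0, {}
    for pid in sorted(bursts, key=key):
        results[pid] = (clock, clock + bursts[pid])   # wait, then TAT
        clock += bursts[pid]
    return results

bursts = {'P1': 10, 'P2': 5, 'P3': 7}
print(non_preemptive(bursts, key=bursts.get))   # SJF: shortest burst first
prio = {'P1': 0, 'P2': 3, 'P3': 2}              # 0 = highest priority
print(non_preemptive(bursts, key=prio.get))     # priority based scheduling
```

With the SJF key this yields waits of 0, 5, 12 ms (average 5.66 ms), and with the priority key waits of 0, 10, 17 ms (average 9 ms), matching the worked examples.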

Preemptive scheduling

 Employed in systems which implement the preemptive multitasking model


 Every task in the ‘Ready’ queue gets a chance to execute. When and how often
each process gets a chance to execute (gets the CPU time) is dependent on the
type of preemptive scheduling algorithm used for scheduling the processes
 The scheduler can preempt the currently executing task/process and select
another task from the ‘Ready’ queue for execution
 When to pre-empt a task and which task is to be picked up from the ‘Ready’
queue for execution after preempting the current task is purely dependent on the
scheduling algorithm
 A task which is preempted by the scheduler is moved to the ‘Ready’ queue. The
act of moving a ‘Running’ process/task into the ‘Ready’ queue by the scheduler,
without the process requesting it, is known as ‘Preemption’
 Time-based preemption and priority-based preemption are the two important
approaches adopted in preemptive scheduling


Preemptive scheduling – Preemptive SJF Scheduling/
Shortest Remaining Time (SRT)
 The non-preemptive SJF scheduling algorithm sorts the ‘Ready’ queue only
after the current process completes execution or enters a wait state, whereas the
preemptive SJF scheduling algorithm sorts the ‘Ready’ queue when a new
process enters it and checks whether the execution time of the new process is
shorter than the remaining estimated execution time of the currently executing
process
 If the execution time of the new process is less, the currently executing process
is preempted and the new process is scheduled for execution
 SRT always compares the remaining execution time of a new process entering
the ‘Ready’ queue with the remaining time for completion of the currently
executing process, and schedules the process with the shortest remaining time
for execution



Preemptive scheduling – Preemptive SJF Scheduling

• Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5, 7 milliseconds
respectively enter the ready queue together. A new process P4 with estimated completion time
2 ms enters the ‘Ready’ queue after 2 ms. Assume all the processes contain only CPU operations
and no I/O operations are involved.

• At the beginning, there are only three processes (P1, P2 and P3) available in the ‘Ready’ queue
and the SRT scheduler picks up the process with the Shortest remaining time for execution
completion (In this example P2 with remaining time 5ms) for scheduling. Now process P4 with
estimated execution completion time 2ms enters the ‘Ready’ queue after 2ms of start of execution
of P2. The processes are re-scheduled for execution in the following order



Preemptive scheduling – Preemptive SJF Scheduling
The waiting times for all the processes are given as

Waiting Time for P2 = 0 ms + (4 -2) ms = 2ms (P2 starts executing first and is interrupted by P4 and has to wait till the completion of
P4 to get the next CPU slot)
Waiting Time for P4 = 0 ms (P4 starts executing by preempting P2 since the execution time for completion of P4 (2ms) is less than
that of the Remaining time for execution completion of P2 (Here it is 3ms))
Waiting Time for P3 = 7 ms (P3 starts executing after completing P4 and P2)
Waiting Time for P1 = 14 ms (P1 starts executing after completing P4, P2 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P4+P2+P3+P1)) / 4
= (0 + 2 + 7 + 14)/4 = 23/4
= 5.75 milliseconds

Turn Around Time (TAT) for P2 = 7 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 2 ms
(Time spent in Ready Queue + Execution Time = (Execution Start Time – Arrival Time) + Estimated Execution Time = (2-2) + 2)
Turn Around Time (TAT) for P3 = 14 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P1 = 24 ms (Time spent in Ready Queue + Execution Time)

Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P2+P4+P3+P1)) / 4
= (7+2+14+24)/4 = 47/4
= 11.75 milliseconds
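The preemptive behaviour above can be reproduced with a millisecond-granularity simulation keeping the ‘Ready’ queue in a min-heap keyed on remaining time. This is an illustrative sketch (the function name `srt` is hypothetical); the numbers match the worked example:

```python
import heapq

def srt(procs):
    """Shortest Remaining Time (preemptive SJF), simulated 1 ms per step.

    procs: {pid: (arrival_ms, burst_ms)}.
    Returns {pid: (waiting_ms, turnaround_ms)}."""
    remaining = {pid: burst for pid, (_, burst) in procs.items()}
    arrivals = sorted(procs, key=lambda pid: procs[pid][0])
    ready, done, clock, i = [], {}, 0, 0
    while len(done) < len(procs):
        while i < len(arrivals) and procs[arrivals[i]][0] <= clock:
            heapq.heappush(ready, (remaining[arrivals[i]], arrivals[i]))
            i += 1
        if not ready:                        # CPU idle until the next arrival
            clock = procs[arrivals[i]][0]
            continue
        _, pid = heapq.heappop(ready)        # shortest remaining time runs 1 ms
        remaining[pid] -= 1
        clock += 1
        if remaining[pid] == 0:
            arrival, burst = procs[pid]
            tat = clock - arrival            # turnaround = completion - arrival
            done[pid] = (tat - burst, tat)   # waiting = turnaround - execution
        else:
            heapq.heappush(ready, (remaining[pid], pid))
    return done

print(srt({'P1': (0, 10), 'P2': (0, 5), 'P3': (0, 7), 'P4': (2, 2)}))
# {'P4': (0, 2), 'P2': (2, 7), 'P3': (7, 14), 'P1': (14, 24)}
```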


Preemptive scheduling – Round Robin (RR) Scheduling

 Each process in the ‘Ready’ queue is executed for a


pre-defined time slot.
 The execution starts with picking up the first process
in the ‘Ready’ queue. It is executed for a pre-defined
time
 When the pre-defined time elapses or the process
completes (before the pre-defined time slice), the
next process in the ‘Ready’ queue is selected for
execution.
 This is repeated for all the processes in the ‘Ready’
queue
 Once each process in the ‘Ready’ queue is executed
for the pre-defined time period, the scheduler comes
back and picks the first process in the ‘Ready’ queue
again for execution
 Round Robin scheduling is similar to the FCFS
scheduling and the only difference is that a time slice
based preemption is added to switch the execution
between the processes in the ‘Ready’ queue


Preemptive scheduling – Round Robin Scheduling

• Three processes with process IDs P1, P2, P3 with estimated completion times 6, 4, 2 milliseconds
respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and
Turn Around Time (TAT) for each process and the Average waiting time and Turn Around Time
(Assuming there is no I/O waiting for the processes) in RR algorithm with Time slice= 2ms.

• The scheduler sorts the ‘Ready’ queue based on the FCFS policy and picks up the first process P1
from the ‘Ready’ queue and executes it for the time slice 2ms. When the time slice is expired, P1 is
preempted and P2 is scheduled for execution. The Time slice expires after 2ms of execution of P2.
Now P2 is preempted and P3 is picked up for execution. P3 completes its execution within the time
slice and the scheduler picks P1 again for execution for the next time slice. This procedure is
repeated till all the processes are serviced. The order in which the processes are scheduled for
execution is represented as



Preemptive scheduling – Round Robin Scheduling
The waiting times for all the processes are given as

Waiting Time for P1 = 0 + (6-2) + (10-8) = 0+4+2= 6ms (P1 starts executing first and waits for two time slices to get
execution back and again 1 time slice for getting CPU time)
Waiting Time for P2 = (2-0) + (8-4) = 2+4 = 6ms (P2 starts executing after P1 executes for 1 time slice and waits for two
time slices to get the CPU time)
Waiting Time for P3 = (4 -0) = 4ms (P3 starts executing after completing the first time slices for P1 and P2 and completes its
execution in a single time slice.)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1+P2+P3)) / 3
= (6+6+4)/3 = 16/3
= 5.33 milliseconds

Turn Around Time (TAT) for P1 = 12 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 10 ms (-Do-)
Turn Around Time (TAT) for P3 = 6 ms (-Do-)

Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P1+P2+P3)) / 3
= (12+10+6)/3 = 28/3
= 9.33 milliseconds
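The time-slice mechanics can be sketched with a FIFO queue: a process that exhausts its slice goes to the back; a process that finishes within the slice simply leaves. An illustrative Python sketch (the helper name is hypothetical), with the example's burst times and 2 ms slice:

```python
from collections import deque

def round_robin(bursts, slice_ms=2):
    """Round Robin for processes arriving together at t=0.

    bursts: {pid: burst_ms} in ready-queue (FCFS) order.
    Returns {pid: (waiting_ms, turnaround_ms)}."""
    remaining = dict(bursts)
    queue = deque(bursts)                  # initial FCFS order of the queue
    clock, results = 0, {}
    while queue:
        pid = queue.popleft()
        run = min(slice_ms, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            tat = clock                    # completion time, arrival is t=0
            results[pid] = (tat - bursts[pid], tat)
        else:
            queue.append(pid)              # preempted: back of the queue
    return results

print(round_robin({'P1': 6, 'P2': 4, 'P3': 2}))
# {'P3': (4, 6), 'P2': (6, 10), 'P1': (6, 12)}
```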


Preemptive scheduling – Priority based Scheduling

 Same as that of the non-preemptive priority based scheduling except for the
switching of execution between tasks
 In preemptive priority based scheduling, any high priority process entering the
‘Ready’ queue is immediately scheduled for execution whereas in the non-
preemptive scheduling any high priority process entering the ‘Ready’ queue is
scheduled only after the currently executing process completes its execution or
only when it voluntarily releases the CPU
 The priority of a task/process in preemptive priority based scheduling is
indicated in the same way as that of the mechanisms adopted for non-
preemptive multitasking



Preemptive scheduling – Priority based Scheduling

• Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5, 7 milliseconds
and priorities 1, 3, 2 (0 is the highest priority, 3 the lowest) respectively enter the ready queue
together. A new process P4 with estimated completion time 6 ms and priority 0 enters the ‘Ready’
queue after 5ms of start of execution of P1. Assume all the processes contain only CPU operation
and no I/O operations are involved.

• At the beginning, there are only three processes (P1, P2 and P3) available in the ‘Ready’ queue
and the scheduler picks up the process with the highest priority (In this example P1 with priority
1)for scheduling. Now process P4 with estimated execution completion time 6ms and priority 0
enters the ‘Ready’ queue after 5ms of start of execution of P1. The processes are re-scheduled for
execution in the following order



Preemptive scheduling – Priority based Scheduling
The waiting times for all the processes are given as

Waiting Time for P1 = 0 + (11-5) = 0+6 =6 ms (P1 starts executing first and gets preempted by P4 after 5ms and
again gets the CPU time after completion of P4)
Waiting Time for P4 = 0 ms (P4 starts executing immediately on entering the ‘Ready’ queue, by preempting P1)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (6 + 0 + 16 + 23)/4 = 45/4
= 11.25 milliseconds
Turn Around Time (TAT) for P1 = 16 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 6ms (Time spent in Ready Queue + Execution Time = (Execution Start Time –
Arrival Time) + Estimated Execution Time = (5-5) + 6 = 0 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P2+P4+P3+P1)) / 4
= (16+6+23+28)/4 = 73/4
= 18.25 milliseconds
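The same millisecond-step simulation idea used for SRT works here with the heap keyed on priority instead of remaining time. An illustrative sketch (function and parameter names are hypothetical), using the example's processes:

```python
import heapq

def preemptive_priority(procs):
    """Preemptive priority scheduling (0 = highest), simulated 1 ms per step.

    procs: {pid: (arrival_ms, burst_ms, priority)}.
    Returns {pid: (waiting_ms, turnaround_ms)}."""
    remaining = {pid: burst for pid, (_, burst, _) in procs.items()}
    arrivals = sorted(procs, key=lambda pid: procs[pid][0])
    ready, done, clock, i = [], {}, 0, 0
    while len(done) < len(procs):
        while i < len(arrivals) and procs[arrivals[i]][0] <= clock:
            pid = arrivals[i]
            heapq.heappush(ready, (procs[pid][2], pid))   # heap key: priority
            i += 1
        if not ready:                        # CPU idle until the next arrival
            clock = procs[arrivals[i]][0]
            continue
        prio, pid = heapq.heappop(ready)     # highest priority runs 1 ms
        remaining[pid] -= 1
        clock += 1
        if remaining[pid] == 0:
            arrival, burst, _ = procs[pid]
            tat = clock - arrival
            done[pid] = (tat - burst, tat)
        else:
            heapq.heappush(ready, (prio, pid))
    return done

print(preemptive_priority({'P1': (0, 10, 1), 'P2': (0, 5, 3),
                           'P3': (0, 7, 2), 'P4': (5, 6, 0)}))
# {'P4': (0, 6), 'P1': (6, 16), 'P3': (16, 23), 'P2': (23, 28)}
```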


Preemptive scheduling – Priority based Scheduling
Summary:
 Priority based preemptive scheduling gives real time attention to high priority
tasks
 Priority based preemptive scheduling is adopted in systems which demand
‘Real Time’ behavior
 Most RTOSs implement the preemptive priority based scheduling algorithm
for process/task scheduling
 Preemptive priority based scheduling also possesses the same drawback as
non-preemptive priority based scheduling – ‘Starvation’
 This can be eliminated by the ‘Aging’ technique (temporarily boosting the
priority of a task which has been ‘starving’ for a long time)



Task Communication

In a multitasking system, multiple tasks/processes run concurrently (in pseudo parallelism)


and each process may or may not interact with the others. Based on the degree of interaction,
the processes/tasks running on an OS are classified as

• Co-operating Processes: In the co-operating interaction model one process requires the
inputs from other processes to complete its execution.
• Competing Processes: The competing processes do not share anything among
themselves but they share the system resources. The competing processes compete for
system resources such as files, display devices, etc.

The co-operating processes exchange information and communicate through


• Co-operation through sharing: Exchange data through some shared resources.
• Co-operation through Communication: No data is shared between the processes. But
they communicate for execution synchronization.



Inter Process (Task) Communication (IPC)

 IPC refers to the mechanism through which tasks/processes communicate with each other
 IPC is essential for task /process execution co-ordination and synchronization
 Implementation of IPC mechanism is OS kernel dependent
 Some important IPC mechanisms adopted by OS kernels are:
 Shared Memory
 Global Variables
 Pipes (Named & Un-named)
 Memory mapped Objects
 Message Passing
 Message Queues
 Mailbox
 Mail slot
 Signals
 Remote Procedure Calls (RPC)



IPC – Shared Memory
 Processes share some area of the memory to communicate among them
 Information to be communicated by the process is written to the shared
memory area
 Processes which require this information can read the same from the shared
memory area
 Same as the real world concept where a ‘Notice Board’ is used by the college
to publish information for students (the only exception is that only the college
has the right to modify the information published on the Notice Board, and
students are given ‘Read’ only access; meaning it is only a one way channel)

Concept of Shared Memory
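As a concrete illustration, Python's `multiprocessing.shared_memory` module (Python 3.8+) exposes a named shared memory area that one process can create and another can attach to by name. This is a sketch, with both roles simulated in one interpreter for brevity:

```python
from multiprocessing import shared_memory

# "Process 1" creates a named shared memory area and writes into it
shm = shared_memory.SharedMemory(create=True, size=64)
shm.buf[:5] = b"hello"

# "Process 2" (simulated here in the same interpreter) attaches by name and reads
reader = shared_memory.SharedMemory(name=shm.name)
data = bytes(reader.buf[:5])
print(data)                # b'hello'

reader.close()
shm.close()
shm.unlink()               # release the shared memory area
```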



IPC – Shared Memory: Pipes
‘Pipe’ is a section of the shared memory used by processes for communicating. Pipes follow the client-server
architecture. A process which creates a pipe is known as pipe server and a process which connects to a pipe is
known as pipe client. A pipe can be considered as a conduit for information flow and has two conceptual ends.
It can be unidirectional, allowing information flow in one direction or bidirectional allowing bi-directional
information flow. A unidirectional pipe allows the process connecting at one end of the pipe to write to the
pipe and the process connected at the other end of the pipe to read the data, whereas a bi-directional pipe
allows both reading and writing at one end

The implementation of ‘Pipes’ is OS dependent. Microsoft® Windows Desktop Operating Systems support
two types of ‘Pipes’ for Inter Process Communication. Namely;
• Anonymous Pipes: The anonymous pipes are unnamed, unidirectional pipes used for data transfer between
two processes.
• Named Pipes: Named pipe is a named, unidirectional or bi-directional pipe for data exchange between
processes. Like anonymous pipes, the process which creates the named pipe is known as pipe server. A
process which connects to the named pipe is known as pipe client. With named pipes, any process can act as
both client and server allowing point-to-point communication. Named pipes can be used for communicating
between processes running on the same machine or between processes running on different machines
connected to a network
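A minimal sketch of an anonymous (un-named), unidirectional pipe using the POSIX-style `os.pipe()` call; the writer and reader ends are shown in one process for brevity, whereas in practice they would sit in the pipe server and pipe client:

```python
import os

r, w = os.pipe()                   # one read end, one write end (un-named pipe)
os.write(w, b"hello via pipe")     # the process at the write end writes
os.close(w)                        # closing the write end lets the reader see EOF
data = os.read(r, 1024)            # the process at the read end reads
os.close(r)
print(data)                        # b'hello via pipe'
```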
Concept of Pipes (Process 1 writes to the named/un-named pipe; Process 2 reads from it)




IPC – Message Passing
 A synchronous/asynchronous information exchange mechanism
for Inter Process/Thread Communication
 Through shared memory a large amount of data can be shared, whereas only a
limited amount of information/data is passed through message passing
 Message passing is relatively fast and free from the synchronization
overheads of shared memory



IPC – Message Passing: Message Queues

 Process which wants to talk to another process


posts the message to a First-In-First-Out
(FIFO) queue called ‘Message queue’, which
stores the messages temporarily in a system
defined memory object, to pass it to the desired
process
 Messages are sent and received through send
(Name of the process to which the message is
to be sent, message) and receive (Name of the
process from which the message is to be
received, message) methods
 The messages are exchanged through a message queue
 The implementation of the message queue, send and receive methods is OS
kernel dependent

Concept of Message Queue
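The post/receive pattern can be sketched with Python's thread-safe `queue.Queue` standing in for the kernel message queue object. This is an approximation for illustration: a real IPC message queue lives in a system-defined memory object and crosses process boundaries, whereas this sketch uses threads:

```python
import queue
import threading

mq = queue.Queue()                         # FIFO message queue object

def sender():
    mq.put("temperature: 42")              # post a message to the queue

def receiver(out):
    out.append(mq.get())                   # blocks until a message arrives

received = []
t2 = threading.Thread(target=receiver, args=(received,))
t1 = threading.Thread(target=sender)
t2.start(); t1.start()
t1.join(); t2.join()
print(received)                            # ['temperature: 42']
```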



IPC – Message Passing: Mailbox

 A special implementation of message queue
 Usually used for one way communication
 Only a single message is exchanged through mailbox whereas ‘message
queue’ can be used for exchanging multiple messages
 One task/process creates the mailbox and other tasks/processes can subscribe
to this mailbox for getting message notification
 The implementation of the mailbox is OS kernel dependent
 The MicroC/OS-II RTOS implements mailbox as a mechanism for inter task
communication

[Figure: Task 1 posts a message to the Mail Box, which broadcasts it to Task 2, Task 3 and Task 4]

Concept of Mailbox



IPC – Message Passing: Signal
 An asynchronous notification mechanism
 Mainly used for the execution synchronization of processes/tasks
 Signals do not carry any data and are not queued
 The implementation of signals is OS kernel dependent
 The VxWorks RTOS kernel implements ‘signals’ for inter process communication
 A task/process can create a set of signals and register for them
 A task or an Interrupt Service Routine (ISR) can raise a ‘signal’
 Whenever a specified signal occurs, it is handled in the signal handler associated
with that signal
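A minimal sketch of the register-then-handle pattern using POSIX signals through Python's `signal` module (Unix-like systems only). This is an analogy to, not the API of, VxWorks signals:

```python
import os
import signal

handled = []

def handler(signum, frame):
    # the signal handler associated with the registered signal
    handled.append(signum)

signal.signal(signal.SIGUSR1, handler)     # register a handler for the signal
os.kill(os.getpid(), signal.SIGUSR1)       # raise the signal (it carries no data)
print(handled)                             # e.g. [10] on Linux (SIGUSR1)
```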

Integration and Testing of
Embedded Hardware and
Firmware
Integration of
Hardware & Firmware
Deals with embedding firmware to target device
Out of Circuit Programming
In System Programming (ISP)
ISP with SPI Protocol
I/O lines involved in SPI
MOSI
MISO
SCK
RST
GND
In Application Programming (IAP)
Technique used by firmware running on the target device for modifying a
selected portion of the code memory
Use of Factory Programmed Chip

Firmware Loading for OS based Device

Board Power up
The Embedded System
Development
Environment
Integrated Development Environment (IDE)
Disassembler/Decompiler
Disassembler: Machine code to Assembly code
Decompiler: Machine code to high level language instructions
