Module 5
The Operating System acts as a bridge between the user applications/tasks and
the underlying system resources through a set of system functionalities and
services
OS manages the system resources and makes them available to the user
applications/tasks on a need basis
The primary functions of an Operating System are:
Make the system convenient to use
Organize and manage the system resources efficiently and correctly
[Figure: Operating System architecture: User Applications interact through the Application Programming Interface (API) with the Kernel Services (Memory Management, Process Management, Time Management, File System Management, I/O System Management), which use the Device Driver Interface to access the Underlying Hardware]
Benefits of the microkernel design approach
Robustness: If a problem is encountered in any of the services, which run as 'Server' applications, the affected service can be reconfigured and re-started without the need for re-starting the entire OS. This approach is therefore highly useful for systems demanding high availability.
Configurability: Any service that runs as a 'Server' application can be changed without the need to restart the whole system. This makes the system dynamically configurable.
The kernel of a Real Time Operating System is referred to as the Real Time kernel. In contrast to the conventional OS kernel, the Real Time kernel is highly specialized and it contains only the minimal set of services required for running the user applications/tasks. The basic functions of a Real Time kernel are
– Task/Process management
– Task/Process scheduling
– Task/Process synchronization
– Error/Exception handling
– Memory Management
– Interrupt handling
– Time management
Designing with RTOS
Real Time Kernel – Task/Process Management
Deals with setting up the memory space for the tasks, loading the task’s code into
the memory space, allocating system resources, setting up a Task Control Block
(TCB) for the task and task/process termination/deletion. A Task Control Block
(TCB) is used for holding the information corresponding to a task. TCB usually
contains the following set of information
• Task ID: Task Identification Number
• Task State: The current state of the task. (E.g. State= ‘Ready’ for a task which is ready to execute)
• Task Type: Indicates whether the task is a hard real time, soft real time or background task.
• Task Priority: The priority of the task (E.g. Task priority = 1)
• Task Context Pointer: Pointer to the context saving area for the task
• Task Memory Pointers: Pointers to the code memory, data memory and stack memory for the task
• Task System Resource Pointers: Pointers to system resources (semaphores, mutex etc) used by the
task
• Task Pointers: Pointers to other TCBs (TCBs for preceding, next and waiting tasks)
• Other Parameters: Other relevant task parameters
The parameters and implementation of the TCB are kernel dependent. The TCB parameters vary across different kernels, based on the task management implementation.
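A minimal sketch of how a TCB might be declared in C is given below; the field names and types are illustrative only, since every kernel defines its own TCB layout:

/* Sketch of a Task Control Block for a hypothetical kernel. */
typedef enum { TASK_CREATED, TASK_READY, TASK_RUNNING, TASK_BLOCKED, TASK_COMPLETED } task_state_t;
typedef enum { TASK_HARD_REALTIME, TASK_SOFT_REALTIME, TASK_BACKGROUND } task_type_t;

typedef struct tcb {
    unsigned int  task_id;     /* Task Identification Number              */
    task_state_t  state;       /* Current state (e.g. TASK_READY)         */
    task_type_t   type;        /* Hard real time, soft real time, ...     */
    unsigned char priority;    /* Task priority                           */
    void         *context;     /* Pointer to the saved register context   */
    void         *code_mem;    /* Pointer to code memory                  */
    void         *data_mem;    /* Pointer to data memory                  */
    void         *stack_ptr;   /* Pointer to stack memory                 */
    void         *resources;   /* Pointers to semaphores, mutexes, etc.   */
    struct tcb   *prev, *next; /* Links to preceding/next TCBs            */
    struct tcb   *waiting;     /* Link to a waiting task's TCB            */
} tcb_t;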
Designing with RTOS
• Task/Process Scheduling: Deals with sharing the CPU among various
tasks/processes. A kernel application called the 'Scheduler' handles the task
scheduling. The scheduler is essentially the implementation of a scheduling
algorithm, which performs efficient and optimal scheduling of tasks to provide
deterministic behavior.
• Task/Process Synchronization: Deals with synchronizing the concurrent
access of a resource shared across multiple tasks, and with the
communication between various tasks (a mutual-exclusion sketch follows this list).
• Error/Exception handling: Deals with registering and handling the
errors/exceptions raised during the execution of tasks. Insufficient memory,
timeouts, deadlocks, deadline misses, bus errors, divide by zero, unknown
instruction execution etc. are examples of errors/exceptions. Errors/exceptions
can happen at the level of the kernel services or at the task level. Deadlock is an
example of a kernel-level exception, whereas timeout is an example of a task-level
exception. The OS kernel gives the information about the error in the
form of a system call (API).
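As a sketch of task synchronization, the fragment below guards a shared counter with a mutex using the MicroC/OS-II services OSMutexCreate(), OSMutexPend() and OSMutexPost(); the shared counter, the task body and the priority-ceiling value are illustrative assumptions, not part of the original slides:

#include "ucos_ii.h"

#define MUTEX_PCP_PRIO  4u             /* illustrative priority ceiling      */

static OS_EVENT *shared_mutex;
static INT32U shared_counter;          /* resource shared by several tasks   */

void sync_init(void)                   /* call once before the tasks run     */
{
    INT8U err;
    shared_mutex = OSMutexCreate(MUTEX_PCP_PRIO, &err);
}

void worker_task(void *pdata)
{
    INT8U err;
    (void)pdata;
    for (;;) {
        OSMutexPend(shared_mutex, 0, &err);  /* lock: wait forever           */
        shared_counter++;                    /* exclusive access             */
        OSMutexPost(shared_mutex);           /* unlock                       */
        OSTimeDly(1);                        /* let other tasks run          */
    }
}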
Interrupts inform the processor that an external device or an associated task requires
immediate attention of the CPU.
Interrupts can be either Synchronous or Asynchronous.
Interrupts which occur in sync with the currently executing task are known as
synchronous interrupts. Usually the software interrupts fall under the synchronous
interrupt category. Divide by zero, memory segmentation error etc. are examples of
synchronous interrupts.
Asynchronous interrupts are interrupts which can occur at any point in the execution
of any task and are not in sync with the currently executing task.
For asynchronous interrupts, the interrupt handler is usually written as a separate task
(depending on the OS kernel implementation) and it runs in a different context. Hence, a
context switch happens while handling asynchronous interrupts.
Priority levels can be assigned to the interrupts and each interrupt can be enabled or
disabled individually.
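The fragment below sketches deferred handling of an asynchronous interrupt, assuming a MicroC/OS-II-style API (OSIntEnter/OSIntExit, OSSemCreate/OSSemPend/OSSemPost); the device register UART_DATA_REG and the processing routine are hypothetical:

#include "ucos_ii.h"

static OS_EVENT *uart_rx_sem;
static volatile INT8U rx_byte;
extern volatile INT8U UART_DATA_REG;      /* hypothetical memory-mapped register */

static void process_byte(INT8U b) { (void)b; /* application-specific work */ }

void uart_init(void)                      /* call before enabling the interrupt  */
{
    uart_rx_sem = OSSemCreate(0);
}

void uart_rx_isr(void)                    /* asynchronous interrupt              */
{                                         /* (port-specific register saving omitted) */
    OSIntEnter();
    rx_byte = UART_DATA_REG;              /* only the minimum work in the ISR    */
    OSSemPost(uart_rx_sem);               /* signal the handler task             */
    OSIntExit();                          /* may trigger a context switch        */
}

void uart_handler_task(void *pdata)       /* handler written as a separate task  */
{
    INT8U err;
    (void)pdata;
    for (;;) {
        OSSemPend(uart_rx_sem, 0, &err);  /* block until the ISR signals         */
        process_byte(rx_byte);            /* lengthy processing in task context  */
    }
}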
The 'Stack' memory holds all temporary data, such as variables local to the process; stack memory grows downwards.
Data memory holds all global data for the process; data memory grows upwards.
The code memory contains the program code (instructions) corresponding to the process.
On loading a process into the main memory, a specific area of memory is allocated for the process.
[Figure: Memory organization of a process: Stack Memory, Data Memory and Code Memory]
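A small host-side C illustration (not from the original text) of the three regions: printing the address of a function, a global variable and a local variable shows that they live in the code, data and stack areas respectively (the exact layout and growth directions are platform dependent):

#include <stdio.h>

int global_counter = 42;                 /* data memory (global data)   */

int main(void)
{
    int local_value = 7;                 /* stack memory (local data)   */
    printf("code  : %p\n", (void *)&main);           /* code memory     */
    printf("data  : %p\n", (void *)&global_counter);
    printf("stack : %p\n", (void *)&local_value);
    return 0;
}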
[Figure: Process state transition diagram showing the Created, Ready, Running and Blocked states, with transitions such as 'Scheduled for Execution' and 'Interrupted or Preempted']
Designing with RTOS
Process States & State Transition
• Created State: The state at which a process is being created is referred to as the 'Created
State'. The Operating System recognizes a process in the 'Created State', but no
resources are allocated to the process.
• Ready State: The state, where a process is incepted into the memory and awaiting the
processor time for execution, is known as ‘Ready State’. At this stage, the process is
placed in the ‘Ready list’ queue maintained by the OS
• Running State: The state where the instructions corresponding to the process are
being executed is called the 'Running State'. Running state is the state at which
the process execution happens.
• Blocked State/Wait State: Refers to a state where a running process is temporarily
suspended from execution and does not have immediate access to resources. The
blocked state might be invoked by various conditions, such as: the process enters a wait
state for an event to occur (e.g. waiting for user input such as keyboard input) or is
waiting to get access to a shared resource like a semaphore, mutex etc.
• Completed State: A state where the process completes its execution
The transition of a process from one state to another is known as 'State transition'.
When a process changes its state from Ready to Running, or from Running to Blocked or
Terminated, or from Blocked to Running, the CPU allocation for the process may also
change.
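A minimal sketch of the process states and the legal transitions in C; the enum and helper are illustrative and do not correspond to a specific kernel's API:

typedef enum {
    STATE_CREATED,    /* recognized by the OS, no resources allocated     */
    STATE_READY,      /* in memory, waiting in the 'Ready' list for CPU   */
    STATE_RUNNING,    /* instructions currently being executed            */
    STATE_BLOCKED,    /* waiting for an event or a shared resource        */
    STATE_COMPLETED   /* finished execution                               */
} proc_state_t;

/* Returns 1 if the transition is one of the state transitions described above. */
int is_valid_transition(proc_state_t from, proc_state_t to)
{
    switch (from) {
    case STATE_CREATED:   return to == STATE_READY;
    case STATE_READY:     return to == STATE_RUNNING;
    case STATE_RUNNING:   return to == STATE_READY   ||  /* preempted       */
                                 to == STATE_BLOCKED ||  /* waits for event */
                                 to == STATE_COMPLETED;
    case STATE_BLOCKED:   return to == STATE_READY;      /* event occurred  */
    default:              return 0;
    }
}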
Designing with RTOS
User & Kernel level threads
In non-preemptive multitasking, the currently executing task relinquishes the CPU when it waits for an I/O.
[Figure: Process scheduling: processes from the Job Queue enter the Ready Queue; the Scheduler moves a process from the Ready Queue to the CPU; a preempted process is moved back to the Ready Queue; a process making a resource/I/O request is moved to the Device Queue handled by the Device Manager and is moved back to the Ready Queue on I/O completion]
Non-Preemptive Scheduling
First-Come-First-Served (FCFS)/ FIFO Scheduling
Three processes with process IDs P1, P2, P3 and estimated completion
times 10, 5 and 7 milliseconds respectively enter the ready queue together in
the order P1, P2, P3. Calculate the waiting time and Turn Around Time
(TAT) for each process and the average waiting time and Turn Around
Time (assuming there is no I/O waiting for the processes).
Waiting time for P1 = 0 ms, P2 = 10 ms, P3 = 15 ms
Average waiting time = (Waiting time for all processes) / No. of Processes
= (0 + 10 + 15)/3 = 25/3
= 8.33 milliseconds
Turn Around Time for P1 = 10 ms, P2 = 15 ms, P3 = 22 ms
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (10 + 15 + 22)/3 = 47/3
= 15.66 milliseconds
Average Execution Time = (Execution time for all processes) / No. of processes
= (10 + 5 + 7)/3 = 22/3
= 7.33 milliseconds
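A small C sketch (not from the original slides) that reproduces these FCFS figures, using the burst times of P1, P2 and P3 given above:

#include <stdio.h>

int main(void)
{
    int burst[] = {10, 5, 7};                  /* P1, P2, P3 in arrival order */
    int n = 3, elapsed = 0, wait_sum = 0, tat_sum = 0, exec_sum = 0;

    for (int i = 0; i < n; i++) {
        int waiting = elapsed;                 /* time spent in 'Ready' queue */
        int tat     = waiting + burst[i];      /* turn around time            */
        printf("P%d: waiting=%d ms, TAT=%d ms\n", i + 1, waiting, tat);
        wait_sum += waiting; tat_sum += tat; exec_sum += burst[i];
        elapsed  += burst[i];
    }
    printf("Average waiting time   = %.2f ms\n", wait_sum / (double)n);  /* 8.33  */
    printf("Average turnaround     = %.2f ms\n", tat_sum  / (double)n);  /* 15.67 */
    printf("Average execution time = %.2f ms\n", exec_sum / (double)n);  /* 7.33  */
    return 0;
}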
Last-Come-First Served (LCFS)/LIFO Scheduling
Three processes with process IDs P1, P2, P3 and estimated completion
times 10, 5 and 7 milliseconds respectively enter the ready queue together in
the order P1, P2, P3 (assume only P1 is present in the 'Ready' queue when
the scheduler picks it up and P2, P3 entered the 'Ready' queue after that).
Now a new process P4 with estimated completion time 6 ms enters the
‘Ready’ queue after 5 ms of scheduling P1. Calculate the waiting time and
Turn Around Time (TAT) for each process and the Average waiting time
and Turn Around Time (Assuming there is no I/O waiting for the
processes). Assume all the processes contain only CPU operation and no
I/O operations are involved
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (0 + 5 + 16 + 23)/4 = 44/4
= 11 milliseconds
Average Turn Around Time = (Turn Around Time for all processes) / No.
of Processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4
= (10+11+23+28)/4
= 72/4 = 18 milliseconds
Shortest Job First (SJF) Scheduling
Three processes with process IDs P1, P2, P3 and estimated completion
times 10, 5 and 7 milliseconds respectively enter the ready queue together.
Calculate the waiting time and Turn Around Time (TAT) for each process
and the Average waiting time and Turn Around Time (Assuming there is
no I/O waiting for the processes) in SJF algorithm.
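A possible worked solution (not in the original slides), assuming non-preemptive SJF with all processes arriving at t = 0, giving the execution order P2, P3, P1:
Waiting time for P2 = 0 ms, P3 = 5 ms, P1 = 12 ms
Average waiting time = (0 + 5 + 12)/3 = 17/3 = 5.66 milliseconds
Turn Around Time for P2 = 5 ms, P3 = 12 ms, P1 = 22 ms
Average Turn Around Time = (5 + 12 + 22)/3 = 39/3 = 13 milliseconds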
Priority Based Scheduling
Three processes with process IDs P1, P2, P3 with estimated completion
times 10, 5 and 7 milliseconds and priorities 0, 3, 2 (0 = highest priority, 3 =
lowest priority) respectively enter the ready queue together. Calculate the
waiting time and Turn Around Time (TAT) for each process and the
Average waiting time and Turn Around Time (Assuming there is no I/O
waiting for the processes) in priority based scheduling algorithm.
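A possible worked solution (not in the original slides), assuming non-preemptive priority based scheduling, giving the execution order P1 (priority 0), P3 (priority 2), P2 (priority 3):
Waiting time for P1 = 0 ms, P3 = 10 ms, P2 = 17 ms
Average waiting time = (0 + 10 + 17)/3 = 27/3 = 9 milliseconds
Turn Around Time for P1 = 10 ms, P3 = 17 ms, P2 = 22 ms
Average Turn Around Time = (10 + 17 + 22)/3 = 49/3 = 16.33 milliseconds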
Preemptive Scheduling
Shortest Remaining Time (SRT) Scheduling
• Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5 and 7 milliseconds
respectively enter the ready queue together. A new process P4 with estimated completion time
2ms enters the 'Ready' queue after 2ms. Assume all the processes contain only CPU operation
and no I/O operations are involved.
• At the beginning, there are only three processes (P1, P2 and P3) available in the ‘Ready’ queue
and the SRT scheduler picks up the process with the Shortest remaining time for execution
completion (In this example P2 with remaining time 5ms) for scheduling. Now process P4 with
estimated execution completion time 2ms enters the ‘Ready’ queue after 2ms of start of execution
of P2. The processes are re-scheduled for execution in the following order
Waiting Time for P2 = 0 ms + (4 -2) ms = 2ms (P2 starts executing first and is interrupted by P4 and has to wait till the completion of
P4 to get the next CPU slot)
Waiting Time for P4 = 0 ms (P4 starts executing by preempting P2 since the execution time for completion of P4 (2ms) is less than
that of the Remaining time for execution completion of P2 (Here it is 3ms))
Waiting Time for P3 = 7 ms (P3 starts executing after completing P4 and P2)
Waiting Time for P1 = 14 ms (P1 starts executing after completing P4, P2 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P4+P2+P3+P1)) / 4
= (0 + 2 + 7 + 14)/4 = 23/4
= 5.75 milliseconds
Turn Around Time (TAT) for P2 = 7 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 2 ms
(Time spent in Ready Queue + Execution Time = (Execution Start Time – Arrival Time) + Estimated Execution Time = (2-2) + 2)
Turn Around Time (TAT) for P3 = 14 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P1 = 24 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P2+P4+P3+P1)) / 4
= (7+2+14+24)/4 = 47/4
= 11.75 milliseconds
Round Robin (RR) Scheduling
• Three processes with process IDs P1, P2, P3 with estimated completion times 6, 4 and 2 milliseconds
respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and
Turn Around Time (TAT) for each process and the average waiting time and Turn Around Time
(assuming there is no I/O waiting for the processes) in the RR algorithm with Time slice = 2ms.
• The scheduler sorts the ‘Ready’ queue based on the FCFS policy and picks up the first process P1
from the ‘Ready’ queue and executes it for the time slice 2ms. When the time slice is expired, P1 is
preempted and P2 is scheduled for execution. The Time slice expires after 2ms of execution of P2.
Now P2 is preempted and P3 is picked up for execution. P3 completes its execution within the time
slice and the scheduler picks P1 again for execution for the next time slice. This procedure is
repeated till all the processes are serviced. The order in which the processes are scheduled for
execution is represented as
Waiting Time for P1 = 0 + (6-2) + (10-8) = 0+4+2= 6ms (P1 starts executing first and waits for two time slices to get
execution back and again 1 time slice for getting CPU time)
Waiting Time for P2 = (2-0) + (8-4) = 2+4 = 6ms (P2 starts executing after P1 executes for 1 time slice and waits for two
time slices to get the CPU time)
Waiting Time for P3 = (4 -0) = 4ms (P3 starts executing after completing the first time slices for P1 and P2 and completes its
execution in a single time slice.)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1+P2+P3)) / 3
= (6+6+4)/3 = 16/3
= 5.33 milliseconds
Turn Around Time (TAT) for P1 = 12 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 10 ms (-Do-)
Turn Around Time (TAT) for P3 = 6 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P1+P2+P3)) / 3
= (12+10+6)/3 = 28/3
= 9.33 milliseconds
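A small C sketch (not from the original slides) that reproduces these round-robin figures; since all processes arrive at t = 0, a simple cyclic pass over the remaining burst times matches the schedule described above:

#include <stdio.h>

int main(void)
{
    int burst[]     = {6, 4, 2};           /* P1, P2, P3                     */
    int remaining[] = {6, 4, 2};
    int n = 3, slice = 2, t = 0, done = 0;
    int finish[3] = {0};

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int run = remaining[i] < slice ? remaining[i] : slice;
            t += run;                       /* process i runs for one slice   */
            remaining[i] -= run;
            if (remaining[i] == 0) { finish[i] = t; done++; }
        }
    }
    for (int i = 0; i < n; i++)             /* waiting = completion - burst,  */
        printf("P%d: waiting=%d ms, TAT=%d ms\n",     /* TAT = completion     */
               i + 1, finish[i] - burst[i], finish[i]);  /* 6/12, 6/10, 4/6   */
    return 0;
}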
Preemptive Priority Based Scheduling
Same as non-preemptive priority based scheduling except for the
switching of execution between tasks
In preemptive priority based scheduling, any high priority process entering the
‘Ready’ queue is immediately scheduled for execution whereas in the non-
preemptive scheduling any high priority process entering the ‘Ready’ queue is
scheduled only after the currently executing process completes its execution or
only when it voluntarily releases the CPU
The priority of a task/process in preemptive priority based scheduling is
indicated in the same way as that of the mechanisms adopted for non-
preemptive multitasking
• Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5 and 7 milliseconds
and priorities 1, 3, 2 (0 = highest priority, 3 = lowest priority) respectively enter the ready queue
together. A new process P4 with estimated completion time 6ms and priority 0 enters the 'Ready'
queue after 5ms of start of execution of P1. Assume all the processes contain only CPU operation
and no I/O operations are involved.
• At the beginning, there are only three processes (P1, P2 and P3) available in the ‘Ready’ queue
and the scheduler picks up the process with the highest priority (in this example P1 with priority
1) for scheduling. Now process P4 with estimated execution completion time 6ms and priority 0
enters the ‘Ready’ queue after 5ms of start of execution of P1. The processes are re-scheduled for
execution in the following order
Waiting Time for P1 = 0 + (11-5) = 0+6 =6 ms (P1 starts executing first and gets preempted by P4 after 5ms and
again gets the CPU time after completion of P4)
Waiting Time for P4 = 0 ms (P4 starts executing immediately on entering the ‘Ready’ queue, by preempting P1)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (6 + 0 + 16 + 23)/4 = 45/4
= 11.25 milliseconds
Turn Around Time (TAT) for P1 = 16 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 6ms (Time spent in Ready Queue + Execution Time = (Execution Start Time –
Arrival Time) + Estimated Execution Time = (5-5) + 6 = 0 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4
= (16+6+23+28)/4 = 73/4
= 18.25 milliseconds
• Co-operating Processes: In the co-operating interaction model one process requires the
inputs from other processes to complete its execution.
• Competing Processes: The competing processes do not share anything among
themselves but they share the system resources. The competing processes compete for
the system resources such as file, display device etc
IPC refers to the mechanism through which tasks/processes communicate with each other
IPC is essential for task /process execution co-ordination and synchronization
Implementation of IPC mechanism is OS kernel dependent
Some important IPC mechanisms adopted by OS kernels are:
Shared Memory
– Global Variables
– Pipes (Named & Un-named)
– Memory mapped Objects
Message Passing
– Message Queues
– Mailbox
– Mail slot
– Signals
Remote Procedure Calls (RPC)
The implementation of 'Pipes' is OS dependent. Microsoft® Windows Desktop Operating Systems support
two types of 'Pipes' for Inter Process Communication, namely:
• Anonymous Pipes: The anonymous pipes are unnamed, unidirectional pipes used for data transfer between
two processes.
• Named Pipes: Named pipe is a named, unidirectional or bi-directional pipe for data exchange between
processes. Like anonymous pipes, the process which creates the named pipe is known as pipe server. A
process which connects to the named pipe is known as pipe client. With named pipes, any process can act as
both client and server allowing point-to-point communication. Named pipes can be used for communicating
between processes running on the same machine or between processes running on different machines
connected to a network
[Figure: Process 1 writes to a pipe (named/un-named) and Process 2 reads from it]
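A minimal pipe sketch, written against the POSIX API for illustration (the Windows calls CreatePipe()/CreateNamedPipe() differ in detail); the parent acts as the pipe server (writer) and the child as the pipe client (reader):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                         /* fd[0]: read end, fd[1]: write end */
    char buf[32] = {0};

    if (pipe(fd) == -1) return 1;

    if (fork() == 0) {                 /* child: pipe client (reader)       */
        close(fd[1]);
        read(fd[0], buf, sizeof(buf) - 1);
        printf("child received: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                      /* parent: pipe server (writer)      */
    write(fd[1], "hello", strlen("hello") + 1);
    close(fd[1]);
    wait(NULL);
    return 0;
}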
Mail Box
A mailbox is normally used for exchanging a single message, whereas a 'message queue' can be used for
exchanging multiple messages
One task/process creates the mailbox and other tasks/processes can subscribe to this mailbox for
getting message notification
The implementation of the mailbox is OS kernel dependent
The MicroC/OS-II RTOS implements the mailbox as a pointer-sized variable holding a pointer to a
single message, with services for creating, posting to and pending on the mailbox
[Figure: Task 1 posts a message to the Mail Box and the message notification is broadcast to the subscribed tasks Task 2, Task 3 and Task 4]
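A sketch of mailbox usage with the MicroC/OS-II services OSMboxCreate(), OSMboxPost() and OSMboxPend(); the payload, task bodies and posting period are illustrative assumptions:

#include "ucos_ii.h"

static OS_EVENT *mbox;
static INT32U sensor_value;               /* illustrative message payload        */

void mailbox_init(void)                   /* call before starting the tasks      */
{
    mbox = OSMboxCreate((void *)0);       /* create an empty mailbox             */
}

void producer_task(void *pdata)           /* posts a pointer-sized message       */
{
    (void)pdata;
    for (;;) {
        sensor_value = 100u;              /* illustrative data acquisition       */
        OSMboxPost(mbox, (void *)&sensor_value);
        OSTimeDly(OS_TICKS_PER_SEC);      /* post roughly once per second        */
    }
}

void consumer_task(void *pdata)           /* subscribed task awaiting notification */
{
    INT8U err;
    (void)pdata;
    for (;;) {
        INT32U *msg = (INT32U *)OSMboxPend(mbox, 0, &err);  /* wait indefinitely */
        if (msg != (void *)0) {
            /* process *msg here */
        }
    }
}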
Integration and Testing of
Embedded Hardware and
Firmware
Integration of
Hardware & Firmware
Deals with embedding the firmware into the target device
Out of Circuit Programming
In System Programming (ISP)
ISP with SPI Protocol
I/O lines involved in SPI
MOSI
MISO
SCK
RST
GND
In Application Programming (IAP)
Technique used by the firmware running on the target device for modifying a
selected portion of the code memory (see the sketch after this list)
Use of Factory Programmed Chip
Board Power up
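A heavily simplified IAP sketch (not from the original slides): the firmware reprograms a byte in a selected portion of the code (flash) memory. Every register, bit and key name here is hypothetical; a real device's flash controller, unlock sequence and timing requirements differ:

#include <stdint.h>

#define UNLOCK_KEY1     0xA5u             /* hypothetical unlock keys             */
#define UNLOCK_KEY2     0x5Au
#define FLASH_PROG_EN   0x01u             /* hypothetical: program-enable bit     */
#define FLASH_BUSY      0x02u             /* hypothetical: busy/status bit        */

extern volatile uint8_t FLASH_KEY_REG;    /* hypothetical key register            */
extern volatile uint8_t FLASH_CTRL_REG;   /* hypothetical control/status register */

void iap_program_byte(volatile uint8_t *addr, uint8_t data)
{
    FLASH_KEY_REG   = UNLOCK_KEY1;        /* unlock the flash controller          */
    FLASH_KEY_REG   = UNLOCK_KEY2;
    FLASH_CTRL_REG |= FLASH_PROG_EN;      /* enter program mode                   */
    *addr = data;                         /* the write triggers programming       */
    while (FLASH_CTRL_REG & FLASH_BUSY)   /* wait for programming to complete     */
        ;
    FLASH_CTRL_REG &= (uint8_t)~FLASH_PROG_EN;  /* leave program mode             */
}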
The Embedded System
Development
Environment
Integrated Development Environment (IDE)
Disassembler/Decompiler
Disassembler: Machine code to Assembly code
Decompiler: Machine code to high level language instructions