ESIOT Unit 3

Unit 3 covers processes and operating systems, focusing on priority-based scheduling methods such as Rate-Monotonic Scheduling (RMS) and Earliest-Deadline-First (EDF), along with inter-process communication mechanisms such as shared memory and message passing. It also discusses distributed embedded systems, their architecture, and the OSI model for network communication. Finally, it presents design principles for real-world applications, including an audio player, an engine control unit, and a video accelerator.

UNIT III: PROCESSES AND OPERATING SYSTEMS

1. Priority-Based Scheduling

Priority-based scheduling is a scheduling policy in which each process (or task) is assigned a priority, and the scheduler always selects the ready process with the highest priority to run on the CPU. This policy is fundamental to real-time operating systems (RTOSs), which must ensure that critical tasks meet their deadlines.

(Source: Pages 106, 108, 118-121)

There are two main ways to assign priorities:

Static Priority Scheduling: Priorities are assigned before the system starts
running and do not change during execution.

Dynamic Priority Scheduling: Priorities are assigned during execution and can
change based on system conditions.

A. Rate-Monotonic Scheduling (RMS)

RMS is the most widely used static priority scheduling algorithm for real-time
systems.

Priority Assignment Rule: The priority of a task is inversely proportional to its period. The shorter the period, the higher the priority.
System Model and Assumptions (for RMA – Rate-Monotonic Analysis):

All processes are periodic.

Processes are independent (no data dependencies).

Deadlines are at the end of the period.

Execution time for each process is constant.

Context switching time is negligible.

Optimality: RMS is optimal among static priority algorithms. If any static priority algorithm can schedule a set of tasks, then RMS can also schedule it.

Example: Consider three processes:

P1: Execution time = 1, Period = 4

P2: Execution time = 2, Period = 6

P3: Execution time = 3, Period = 12

Under RMS, the priorities would be P1 (highest) > P2 > P3 (lowest). A timeline would show P1 running every 4 time units, preempting P2 or P3 if they are running. P2 would run when P1 is not active, and P3 would only run when neither P1 nor P2 is ready.
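
To make the schedulability check concrete, here is a minimal sketch in C of the standard Liu-Layland utilization test for this task set (the task values are the ones above; everything else is illustrative). Note that the bound is sufficient but not necessary: this set fails the test (U ≈ 0.833 > 0.780) yet is still schedulable, as the timeline shows.

    #include <math.h>
    #include <stdio.h>

    /* Liu-Layland utilization test for RMS. The bound n*(2^(1/n) - 1)
       is sufficient but not necessary: a task set may fail the test
       and still be schedulable (as this one is). */
    int main(void) {
        double c[] = {1, 2, 3};   /* execution times of P1, P2, P3 */
        double t[] = {4, 6, 12};  /* periods of P1, P2, P3 */
        int n = 3;
        double u = 0.0;
        for (int i = 0; i < n; i++)
            u += c[i] / t[i];     /* total CPU utilization */
        double bound = n * (pow(2.0, 1.0 / n) - 1.0);
        printf("U = %.3f, bound = %.3f\n", u, bound);  /* 0.833 vs 0.780 */
        if (u <= bound)
            printf("Guaranteed schedulable under RMS\n");
        else
            printf("Bound test inconclusive; check the timeline\n");
        return 0;
    }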

B. Earliest-Deadline-First (EDF) Scheduling


EDF is a dynamic priority scheduling algorithm.

Priority Assignment Rule: The priority is based on the task’s deadline. The
task with the nearest (earliest) deadline has the highest priority.

Implementation: Since deadlines are dynamic relative to the current time, the scheduler must re-evaluate priorities whenever a task completes or a new task becomes ready. This makes EDF more complex to implement than RMS.
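
As a rough illustration of that re-evaluation step, the C sketch below (the task structure is an assumption, not taken from the source) scans the ready tasks and returns the one with the earliest absolute deadline:

    /* Assumed task record; a real RTOS would keep a deadline-sorted
       queue instead of rescanning on every call. */
    typedef struct {
        const char *name;
        unsigned long deadline;  /* absolute deadline, in timer ticks */
        int ready;               /* nonzero if the task can run now */
    } task_t;

    /* EDF dispatch rule: among ready tasks, the earliest deadline wins. */
    task_t *edf_pick(task_t *tasks, int n) {
        task_t *best = 0;
        for (int i = 0; i < n; i++) {
            if (!tasks[i].ready)
                continue;
            if (!best || tasks[i].deadline < best->deadline)
                best = &tasks[i];
        }
        return best;  /* 0 (NULL) if nothing is ready */
    }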

Optimality: EDF is optimal among all priority-driven algorithms. It can achieve higher CPU utilization than RMS. If any scheduling algorithm can schedule a set of tasks, then EDF can also schedule it.

C. Comparison: RMS vs. EDF

Priority Type:
  RMS: Static (fixed).
  EDF: Dynamic (changes during execution).

CPU Utilization:
  RMS: Lower; schedulability is guaranteed only if utilization is below a certain bound (about 69% for large task sets).
  EDF: Higher; can theoretically achieve 100% utilization.

Implementation:
  RMS: Simpler; priorities are fixed, so scheduling decisions are fast.
  EDF: More complex; requires recalculating priorities and maintaining a list of tasks sorted by deadline.

Predictability:
  RMS: Highly predictable; overload behavior is easier to manage (lower-priority tasks miss their deadlines first).
  EDF: Less predictable under overload; a “domino effect” can cause many tasks to miss their deadlines.

2. Inter-Process Communication (IPC)


IPC mechanisms allow different processes to communicate and synchronize
their actions. The operating system provides these mechanisms.

(Source: Pages 124-126)

There are two major styles of IPC:

A. Shared Memory Communication

In this model, two or more processes communicate through a common, shared area of memory.

Mechanism: One process writes data to the shared memory location, and
another process reads the data from that same location.

Synchronization Problem (Race Condition): A critical issue is ensuring that processes do not interfere with each other. If one process is writing to the shared memory while another is reading or writing, the data can become corrupted.

Solution: Semaphores and Atomic Operations: To prevent race conditions, access to the shared memory must be synchronized.

Atomic Operation: An operation (like test-and-set) that is guaranteed to execute without interruption.

Semaphore: A synchronization variable used to control access to a shared resource. A process must acquire the semaphore (perform a “P” or wait operation) before accessing the resource and release it afterward (a “V” or signal operation).

Example: The ARM SWP (swap) instruction can be used to implement an atomic test-and-set for a semaphore, ensuring that only one process can enter a critical section at a time.
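
The same idea can be sketched portably in C11, whose atomic_flag provides the atomic test-and-set that SWP implements in hardware (a minimal spin lock, not the source's exact code):

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    /* "P" / wait: atomic_flag_test_and_set sets the flag and returns
       its previous value, so exactly one process sees "clear" and
       enters the critical section; the rest spin. */
    void acquire(void) {
        while (atomic_flag_test_and_set(&lock))
            ;  /* busy-wait while another process holds the lock */
    }

    /* "V" / signal: clear the flag so a waiting process can proceed. */
    void release(void) {
        atomic_flag_clear(&lock);
    }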

B. Message Passing Communication

In this model, processes communicate by explicitly sending and receiving messages. There is no shared address space between them.

Mechanism: A sender process formats a message and uses an OS service to send it to a receiver process. The receiver process uses an OS service to receive the message.

Use Cases: This model is natural for distributed systems, where processes
run on different physical processors and do not share memory. It is also
common in systems with highly independent components, such as a home
automation system where different microcontrollers control different devices.
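
A bounded "mailbox" is one simple shape such an OS service can take. The C sketch below is illustrative (a single sender and single receiver are assumed; a real kernel would add blocking and locking): messages are copied through a queue, so the two processes never share an address.

    #define MSG_LEN     32
    #define QUEUE_DEPTH  8

    typedef struct {
        char data[MSG_LEN];
    } message_t;

    typedef struct {
        message_t slots[QUEUE_DEPTH];  /* messages are copied, not shared */
        int head, tail, count;
    } mailbox_t;

    /* Send: copy the message in; fail if the receiver has fallen behind. */
    int mbox_send(mailbox_t *m, const message_t *msg) {
        if (m->count == QUEUE_DEPTH)
            return -1;                 /* queue full */
        m->slots[m->tail] = *msg;
        m->tail = (m->tail + 1) % QUEUE_DEPTH;
        m->count++;
        return 0;
    }

    /* Receive: copy the oldest message out; fail if the queue is empty. */
    int mbox_receive(mailbox_t *m, message_t *msg) {
        if (m->count == 0)
            return -1;                 /* nothing waiting */
        *msg = m->slots[m->head];
        m->head = (m->head + 1) % QUEUE_DEPTH;
        m->count--;
        return 0;
    }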

3. Distributed Embedded Systems

A distributed embedded system consists of multiple processing elements (PEs) connected by a network. These systems are used to handle complex tasks that may be physically distributed or require more performance than a single processor can offer.

(Source: Pages 128-131)

A. Reasons for Using Distributed Systems


Cost-Effective Performance: It is often cheaper to use several smaller, slower PEs than one very large, fast one.

Physical Distribution: The application may require PEs to be located in different physical places (e.g., sensors and actuators in a large factory).

Modularity and Scalability: Systems can be built in a modular way, making them easier to design, test, and upgrade.

Fault Tolerance: The failure of one PE does not necessarily bring down the
entire system.

B. Architecture

Processing Elements (PEs): These can be general-purpose CPUs, microcontrollers, DSPs, or specialized hardware (ASICs).

Network: The PEs communicate over one or more networks (e.g., CAN bus, Ethernet, I²C). The network provides the communication link between the PEs.

Key Characteristic: PEs do not fetch instructions over the network. They execute their own local programs and use the network only for explicit data communication (IPC).
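
For a sense of what that explicit communication looks like on a typical embedded network, a standard CAN frame carries an 11-bit identifier plus up to eight data bytes; a simplified in-memory view might look like this (illustrative only, not a driver API):

    /* Simplified CAN-style frame. On the real bus the identifier also
       arbitrates access: the lower the ID, the higher the priority. */
    typedef struct {
        unsigned short id;       /* 11-bit message identifier */
        unsigned char  len;      /* number of valid data bytes, 0..8 */
        unsigned char  data[8];  /* payload, e.g. a sensor reading */
    } can_frame_t;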

C. Network Abstractions (OSI Model)

To manage the complexity of network communication, the OSI 7-layer model is often used as a reference. It breaks network functions down into layers:

Physical Layer: Electrical and physical specifications.

Data Link Layer: Error detection and control on a single link.

Network Layer: End-to-end data transmission and routing.

Transport Layer: Reliable, connection-oriented service.

Higher layers: Session, Presentation, and Application.

4. Design of Audio Player, Engine Control Unit, and Video Accelerator

These examples illustrate the application of embedded system design principles to real-world problems.

A. Design of an Audio Player (MP3 Player)

(Source: Pages 133-137)

Function: To play compressed audio files (e.g., MP3). This involves three basic
functions: audio storage, audio decompression, and a user interface.

Core Technology: The key is the audio decompression algorithm. MP3 uses a
lossy compression technique based on perceptual coding, which removes
parts of the audio signal that the human ear is unlikely to notice (e.g.,
through masking).

Architecture:

A main RISC processor for system control, file management, and UI.
A specialized DSP (Digital Signal Processor) for performing the
computationally intensive audio effects and decoding.

Memory (Flash for storage, DRAM for buffering).

An audio interface (DAC) to convert digital audio to analog for headphones/speakers.

System Integration: The device needs a file system compatible with PCs (for
loading music) and a simple UI for navigation.

B. Design of an Engine Control Unit (ECU)

(Source: Pages 138-143)

Function: To control a fuel-injected engine’s operation to optimize performance, fuel economy, and emissions.

Key Challenge: A Multirate System. The ECU must handle tasks that run at
different periodic rates. For example:

Spark advance angle calculation might be updated every 1 ms.

Injector pulse width might be updated every 2 ms.

Reading other sensors might occur at slower rates.
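
One minimal way to picture this multirate structure is a timer-driven dispatcher, sketched below in C (the task names and the 1 ms tick are assumptions; a production ECU would run these as prioritized RTOS tasks):

    /* Hypothetical task functions, one per rate. */
    void update_spark_advance(void);   /* 1 ms rate */
    void update_injector_pulse(void);  /* 2 ms rate */
    void read_slow_sensors(void);      /* slow-rate sensor scan */

    volatile unsigned long ticks;      /* incremented by a 1 ms timer ISR */

    /* Called once per tick: each task fires when the tick count
       reaches a multiple of its period. */
    void dispatcher(void) {
        unsigned long t = ticks;
        update_spark_advance();        /* every tick (1 ms) */
        if (t % 2 == 0)
            update_injector_pulse();   /* every second tick (2 ms) */
        if (t % 100 == 0)
            read_slow_sensors();       /* every 100th tick (100 ms) */
    }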

Architecture:
Inputs: Reads data from multiple sensors (throttle position, RPM, air
temperature, exhaust oxygen).

Processing: It computes two main outputs: injector pulse width and spark advance angle. This calculation is often done in two stages: a baseline calculation from a lookup table, followed by a correction based on other sensor inputs (a sketch follows this list).

Outputs: Drives actuators (fuel injectors, spark plugs).
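
The two-stage computation from the Processing step can be sketched as follows (the table values and the temperature correction are invented for illustration; real calibrations come from engine mapping):

    #define N_POINTS 5

    /* Invented calibration: baseline spark advance (degrees) versus
       engine speed (RPM). */
    static const int   rpm_axis[N_POINTS] = {1000, 2000, 3000, 4000, 5000};
    static const float base_adv[N_POINTS] = {10.f, 18.f, 24.f, 28.f, 30.f};

    float spark_advance(int rpm, float air_temp_c) {
        /* Stage 1: baseline from the lookup table, linearly interpolated. */
        float base = base_adv[0];
        if (rpm >= rpm_axis[N_POINTS - 1]) {
            base = base_adv[N_POINTS - 1];
        } else if (rpm > rpm_axis[0]) {
            for (int i = 0; i < N_POINTS - 1; i++) {
                if (rpm <= rpm_axis[i + 1]) {
                    float f = (float)(rpm - rpm_axis[i]) /
                              (rpm_axis[i + 1] - rpm_axis[i]);
                    base = base_adv[i] + f * (base_adv[i + 1] - base_adv[i]);
                    break;
                }
            }
        }
        /* Stage 2: correction from another sensor (illustrative rule:
           retard the spark slightly when intake air is hot). */
        float correction = (air_temp_c > 40.0f) ? -2.0f : 0.0f;
        return base + correction;
    }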

Implementation: Due to the multirate nature and hard deadlines, an RTOS is essential for scheduling the various tasks. The hardware must be robust enough to withstand the harsh environment of an engine compartment (extreme temperatures, electrical noise).

C. Design of a Video Accelerator

(Source: Pages 144-148)

Function: A hardware accelerator designed to speed up a single, highly demanding task in video processing: block motion estimation.

Algorithm: Motion estimation is used in video compression (like MPEG) to reduce temporal redundancy. It finds the best match for a block of pixels (a macroblock) in the current frame within a search area of a previous frame. The offset between the best match and the original position is the motion vector.
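
In software form, full-search motion estimation is a small loop nest over candidate offsets, scoring each with a sum of absolute differences (SAD). The C sketch below uses assumed sizes, and the caller must keep the search window inside the reference frame; the accelerator's PE array evaluates these candidate positions in parallel rather than sequentially.

    #include <stdlib.h>   /* abs() */

    #define MB     16     /* 16x16 macroblock */
    #define SEARCH  8     /* search +/-8 pixels in each direction */

    /* cur, ref: 8-bit luminance planes of width w (row-major).
       (bx, by): top-left corner of the macroblock in the current frame.
       Writes the best offset (the motion vector) to *best_dx, *best_dy. */
    void motion_search(const unsigned char *cur, const unsigned char *ref,
                       int w, int bx, int by, int *best_dx, int *best_dy) {
        unsigned best = ~0u;
        for (int dy = -SEARCH; dy <= SEARCH; dy++) {
            for (int dx = -SEARCH; dx <= SEARCH; dx++) {
                unsigned sad = 0;  /* distortion of this candidate */
                for (int y = 0; y < MB; y++)
                    for (int x = 0; x < MB; x++)
                        sad += abs(cur[(by + y) * w + (bx + x)] -
                                   ref[(by + dy + y) * w + (bx + dx + x)]);
                if (sad < best) {
                    best = sad;
                    *best_dx = dx;
                    *best_dy = dy;
                }
            }
        }
    }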

Why an Accelerator? Full-search motion estimation is extremely computationally expensive, making it too slow for a general-purpose CPU to perform in real time.
Architecture:

Implemented as a dedicated hardware unit, often on an FPGA or ASIC.

The architecture is highly parallel. It typically consists of an array of simple PEs, where each PE calculates the difference between the macroblock and one possible position in the search area. This allows all comparisons to be done simultaneously.

It communicates with a host PC via a bus like PCI, receiving the macroblock
and search area and returning the computed motion vector.
