
📚 Operating System Comprehensive Study Notes

📖 Chapter 1: Introduction to Operating


Systems
An Operating System (OS) is system software that manages computer hardware, software
resources, and provides services for computer programs. It acts as an intermediary between
users and the computer hardware.

Functions of an Operating System:

● Process Management: Manages processes in a system, including process scheduling, creation, and termination.
● Memory Management: Keeps track of each byte in a computer’s memory and manages allocation and deallocation.
● File Management: Manages files on various storage devices.
● Device Management: Manages device communication via the respective drivers.
● Security and Protection: Protects system data and resources against unauthorized access.
● User Interface (CLI/GUI): Provides an interface for interaction between the user and the system.
● I/O System Management: Manages input/output operations and the respective devices.

Types of Operating Systems:

● Batch OS: No interaction with the user. Jobs with similar needs are batched together.
● Time-Sharing OS: Multiple users access the system at the same time via time slices.
● Distributed OS: Manages a group of distinct computers and makes them appear as a single computer.
● Network OS: Provides services to computers connected on a network.
● Real-Time OS: Provides immediate processing and response to input.
● Mobile OS: Optimized for mobile devices.

Structure of Operating Systems:

●​ Monolithic Kernel: All OS services run in kernel mode.​

●​ Microkernel: Only essential services run in kernel; others in user space.​

●​ Layered Architecture: OS is divided into layers, each built atop the other.​

●​ Client-Server Model: Client requests services from server processes.​

Table: OS Structure Models

| Structure | Description |
|---|---|
| Monolithic | All services together inside kernel space |
| Microkernel | Minimal services in kernel; others in user mode |
| Layered | Layer-wise abstraction; lower layers offer services to upper layers |
| Client-Server | Servers offer specific services; clients request them |

Key Characteristics of Modern OS:

●​ Concurrency: Ability to execute multiple processes.​

●​ Parallelism: Ability to run tasks on multiple processors.​

●​ Multi-user: Support for multiple users.​

●​ Portability: Can run on different hardware platforms.​

Table: Comparison of OS Types

| Type | Multi-user Support | Real-time | Distributed | Example |
|---|---|---|---|---|
| Batch OS | No | No | No | Early IBM batch systems |
| Time-Sharing OS | Yes | No | No | Unix, Windows |
| Distributed OS | Yes | Sometimes | Yes | LOCUS, Amoeba |
| Real-Time OS | Sometimes | Yes | No | VxWorks, RTLinux |
| Mobile OS | Yes | No | No | Android, iOS |

(Next: Chapter 2: Process Management)


📖 Chapter 2: Process Management


What is a Process?

A process is a program in execution. It consists of the program code, current activity (value
of Program Counter, contents of the CPU’s registers), process stack, heap, and a data
section.

Process States:

A process can be in one of the following states:

| State | Description |
|---|---|
| New | Process is being created |
| Ready | Waiting to be assigned to a processor |
| Running | Instructions are being executed |
| Waiting | Waiting for some event (such as I/O) |
| Terminated | Finished execution |

Diagram: Process State Transition

New → Ready → Running → Terminated
Running → Waiting → Ready (after the awaited event, e.g. I/O completion)
Running → Ready (on preemption)


Process Control Block (PCB):

Each process is represented in the operating system by a Process Control Block (PCB)
containing:

●​ Process ID
●​ Process State
●​ Program Counter
●​ CPU Registers
●​ Memory Management Info
●​ Accounting Info
●​ I/O Status Info

Diagram: PCB Contents Structure

Process Scheduling:

Process scheduling determines which process in the ready state should be moved to the
running state.

Types of Schedulers:

●​ Long-Term Scheduler (Job Scheduler): Selects processes from the job pool.
●​ Short-Term Scheduler (CPU Scheduler): Selects among processes in ready
queue.
●​ Medium-Term Scheduler: Suspends and resumes processes.

Scheduling Criteria:

●​ CPU Utilization
●​ Throughput
●​ Turnaround Time
●​ Waiting Time
●​ Response Time

CPU Scheduling Algorithms:


| Algorithm | Description | Advantage | Disadvantage |
|---|---|---|---|
| FCFS | First process that arrives is executed first | Simple | Convoy effect, poor average times |
| SJF | Executes the shortest burst-time process first | Minimum average waiting time | Starvation possible |
| Priority Scheduling | Highest-priority process executes first | Flexible | Starvation of low-priority processes |
| Round Robin | Each process gets a fixed time quantum | Fair, better response time | High context-switch overhead |
| Multilevel Queue | Multiple queues for different priority classes | Separates processes by category | Rigid, inflexible |

Diagram: Gantt Chart Example for FCFS and RR
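A minimal worked example (the process set below is assumed for illustration, not taken from the original notes): three processes arrive at time 0 in the order P1 (burst 5), P2 (burst 3), P3 (burst 2), with a Round Robin quantum of 2.

FCFS Gantt chart: | P1 0-5 | P2 5-8 | P3 8-10 |
Waiting times: P1 = 0, P2 = 5, P3 = 8 → average = 13/3 ≈ 4.33

RR (quantum = 2) Gantt chart: | P1 0-2 | P2 2-4 | P3 4-6 | P1 6-8 | P2 8-9 | P1 9-10 |
Waiting times: P1 = 5, P2 = 6, P3 = 4 → average = 15/3 = 5

Here RR trades a slightly higher average waiting time for better response time, matching the table above.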

Context Switching:

The procedure of saving the state of a running process and loading the state of another
process.

📖 Chapter 3: Process Synchronization


What is Process Synchronization?

In a multiprogramming environment, several processes may attempt to access shared data concurrently. Process synchronization ensures orderly execution so that shared data remains consistent and race conditions are avoided.

Race Condition:

Occurs when two or more processes access shared data concurrently and the final result
depends on the sequence of access.

Example: If two processes simultaneously increment a shared counter, the final result might
be incorrect without synchronization.

Critical Section Problem:

A critical section is a portion of a program where shared resources are accessed.

Conditions for a proper solution:

●​ Mutual Exclusion: Only one process can be in the critical section at a time.
●​ Progress: If no process is in its critical section, and one or more wish to enter,
selection cannot be postponed indefinitely.
●​ Bounded Waiting: A process waiting to enter its critical section must do so within a
bounded number of turns.

Diagram: Critical Section Entry/Exit

[Entry Section] → [Critical Section] → [Exit Section] → [Remainder Section] → (back to Entry Section)

Software-Based Solutions:

● Peterson’s Algorithm: Ensures mutual exclusion between two processes (a sketch follows below).
● Bakery Algorithm: Generalized for multiple processes.
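A minimal sketch of Peterson’s algorithm for two threads, using C11 atomics so the accesses are not reordered; the function names lock()/unlock() are illustrative, not a standard API.

#include <stdatomic.h>
#include <stdbool.h>

/* Shared between exactly two threads, identified as 0 and 1. */
static atomic_bool flag[2];   /* flag[i]: thread i wants to enter */
static atomic_int  turn;      /* whose turn it is to yield */

void lock(int id)             /* id is 0 or 1 */
{
    int other = 1 - id;
    atomic_store(&flag[id], true);   /* announce intent to enter */
    atomic_store(&turn, other);      /* politely let the other go first */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                            /* busy-wait until it is safe to enter */
}

void unlock(int id)
{
    atomic_store(&flag[id], false);  /* leave the critical section */
}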

Hardware-Based Solutions:

●​ Disabling Interrupts
●​ Test-and-Set Instruction (see the spin-lock sketch below)
●​ Compare-and-Swap Instruction
●​ Mutex Locks
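A minimal spin-lock sketch built on test-and-set. C11’s atomic_flag_test_and_set() plays the role of the hardware instruction: it sets the flag and returns its previous value in one atomic step. The names acquire()/release() are illustrative.

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void acquire(void)
{
    /* Keep setting the flag; if the previous value was already 'set',
       another thread holds the lock, so spin and try again. */
    while (atomic_flag_test_and_set(&lock_flag))
        ;   /* busy-wait */
}

void release(void)
{
    atomic_flag_clear(&lock_flag);   /* let the next waiter succeed */
}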

Synchronization Tools:

| Tool | Description |
|---|---|
| Semaphore | Integer variable with atomic wait() and signal() operations; used for signaling. |
| Mutex | Mutual exclusion lock allowing only one process access at a time. |
| Monitor | High-level construct with shared variables, procedures, and condition variables. |

Classical Synchronization Problems:

●​ Bounded Buffer (Producer-Consumer Problem)


●​ Readers-Writers Problem
●​ Dining Philosophers Problem

Diagram: Dining Philosophers Layout

[P1] - Fork - [P2] - Fork - [P3] - Fork - [P4] - Fork - [P5] - Fork - (back to [P1])
(five philosophers seated around a circular table, one fork between each pair of neighbours)

Each philosopher needs both the left and right fork (represented by semaphores) to eat.
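For the bounded-buffer (producer-consumer) problem listed above, a minimal sketch using POSIX semaphores and a pthread mutex; the buffer size N and the function names producer_put()/consumer_get() are assumptions for illustration.

#include <pthread.h>
#include <semaphore.h>

#define N 8                              /* buffer capacity (assumed) */

static int buffer[N], in = 0, out = 0;
static sem_t empty_slots, full_slots;    /* counting semaphores */
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

/* Call once before use: sem_init(&empty_slots, 0, N); sem_init(&full_slots, 0, 0); */

void producer_put(int item)
{
    sem_wait(&empty_slots);              /* wait for a free slot */
    pthread_mutex_lock(&mtx);
    buffer[in] = item;
    in = (in + 1) % N;
    pthread_mutex_unlock(&mtx);
    sem_post(&full_slots);               /* signal: one more full slot */
}

int consumer_get(void)
{
    sem_wait(&full_slots);               /* wait for an available item */
    pthread_mutex_lock(&mtx);
    int item = buffer[out];
    out = (out + 1) % N;
    pthread_mutex_unlock(&mtx);
    sem_post(&empty_slots);              /* signal: one more empty slot */
    return item;
}

The counting semaphores handle signaling about slot availability, while the mutex protects the shared indices; this is the split between signaling and mutual exclusion described in the tools table above.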

(Next: Chapter 4: Deadlocks)


📖 Chapter 4: Deadlocks
What is a Deadlock?

A deadlock is a situation in multiprogramming where two or more processes wait indefinitely for resources held by one another.

Necessary Conditions for Deadlock:

According to Coffman’s conditions, four conditions must hold simultaneously:

1.​ Mutual Exclusion: At least one resource must be held in a non-sharable mode.
2.​ Hold and Wait: A process holding resources can request additional resources held
by other processes.
3.​ No Preemption: Resources cannot be forcibly removed from processes holding
them.
4.​ Circular Wait: A closed chain of processes exists, where each process holds at least
one resource and waits for a resource held by the next process.

Diagram: Deadlock Circular Wait Example

P1 --> P2 --> P3 --> P4 --> P1

Each process is waiting for a resource held by the next process in the chain.

Resource Allocation Graph (RAG):

A graphical representation of processes and the resources they request/hold.

●​ Process → Circle
●​ Resource → Rectangle
●​ Request Edge → Arrow from process to resource
●​ Assignment Edge → Arrow from resource to process

Example RAG:

P1 --> R1
R1 --> P2
P2 --> R2
R2 --> P1

This cycle indicates a deadlock.

Methods for Handling Deadlocks:

1.​ Deadlock Prevention: Ensure that at least one of Coffman’s conditions cannot hold.
2.​ Deadlock Avoidance: System dynamically examines resource-allocation state to
avoid unsafe states. Example: Banker’s Algorithm.
3.​ Deadlock Detection and Recovery: Allow deadlocks, detect them, and recover.
4.​ Ignore Deadlock: Used by most OS like Windows and UNIX (Ostrich Algorithm).

Banker’s Algorithm:

A resource allocation and deadlock avoidance algorithm.

●​ Evaluates requests and only grants if the system remains in a safe state.
●​ Uses Available, Max, Allocation, and Need matrices.

Table: Banker’s Algorithm Example State

| Process | Max | Allocation | Need |
|---|---|---|---|
| P1 | 7 | 5 | 2 |
| P2 | 3 | 2 | 1 |
| P3 | 9 | 3 | 6 |
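A minimal sketch of the safety-check step of the Banker’s Algorithm, treating the example above as a single resource type; the function name is_safe() is an assumption for illustration.

#include <stdbool.h>

/* need[i] = max[i] - allocation[i]; 'available' is the currently free amount. */
bool is_safe(int n, const int need[], const int alloc[], int available)
{
    bool finished[n];
    for (int i = 0; i < n; i++) finished[i] = false;

    int work = available;
    int done = 0;
    while (done < n) {
        bool progressed = false;
        for (int i = 0; i < n; i++) {
            if (!finished[i] && need[i] <= work) {
                work += alloc[i];        /* process i can finish and release its resources */
                finished[i] = true;
                done++;
                progressed = true;
            }
        }
        if (!progressed)
            return false;                /* no remaining process can finish: unsafe state */
    }
    return true;                         /* a safe sequence exists */
}

Assuming, say, 2 instances are currently available (the table does not give Available), the safe sequence P2 → P1 → P3 exists: work grows 2 → 4 → 9 → 12, so a request leaving the system in this state would be granted.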

(Next: Chapter 5: Memory Management)

📖 Chapter 5: Memory Management


What is Memory Management?

It is a core function of an OS that handles or manages primary memory and moves processes back and forth between main memory and disk during execution.

Memory Allocation Methods:


| Method | Description | Example |
|---|---|---|
| Contiguous | Allocates a single contiguous section of memory per process | Fixed/variable partitioning |
| Non-Contiguous | Breaks memory into multiple blocks allocated separately | Paging, segmentation |

Contiguous Memory Allocation:

●​ Fixed Partitioning: Memory is divided into fixed-sized partitions.


●​ Variable Partitioning: Partitions created dynamically based on process needs.

Diagram: Memory with Fixed and Variable Partitions


| Process | Partition |
|---------|------------|
| P1 | 100 KB |
| P2 | 150 KB |
| P3 | 200 KB |

Non-Contiguous Memory Allocation:

1.​ Paging: Divides memory into fixed-size pages and frames.


○​ Logical memory split into pages.
○​ Physical memory split into frames.

Diagram: Paging

Logical Memory: Page 0, Page 1, Page 2...


Physical Memory: Frame 5, Frame 9, Frame 1...
Page Table maps Pages to Frames.
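A small worked translation, assuming a 1 KB (1024-byte) page size and that Page 2 maps to Frame 1 (consistent with the diagram’s ordering); the numbers are illustrative only:

Logical address 2500
  page number = 2500 div 1024 = 2
  offset      = 2500 mod 1024 = 452
  Page 2 → Frame 1, so physical address = 1 × 1024 + 452 = 1476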

2.​ Segmentation: Divides memory logically into variable-sized segments.

Diagram: Segmentation Table

| Segment | Base Address | Limit |


|:----------|:--------------|:--------|
| Code | 1000 | 500 |
| Data | 2000 | 800 |
| Stack | 3000 | 300 |

Virtual Memory:

Allows execution of processes that may not be completely in memory.

●​ Uses demand paging.


●​ Reduces external fragmentation.

Page Replacement Algorithms:

●​ FIFO (First-In-First-Out)
●​ LRU (Least Recently Used)
●​ Optimal (Optimal Page Replacement)

Diagram: LRU Page Replacement Example

Pages: 7 0 1 2 0 3 0 4
Frame Size: 3
LRU replaces the least recently used page when a new page arrives.
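A worked trace for the reference string above with 3 frames (illustrative; F marks a page fault, frame contents shown oldest-use first):

7 → F   [7]
0 → F   [7, 0]
1 → F   [7, 0, 1]
2 → F   [0, 1, 2]   (7 was least recently used, so it is evicted)
0 → hit [1, 2, 0]
3 → F   [2, 0, 3]   (1 evicted)
0 → hit [2, 3, 0]
4 → F   [3, 0, 4]   (2 evicted)

Total: 6 page faults.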

(Next: Chapter 6: Virtual Memory)


📖 Chapter 6: Virtual Memory
What is Virtual Memory?

Virtual Memory is a technique that allows the execution of processes that may not be
completely in memory. It enables a system to use disk space as an extension of RAM by
transferring pages of data between physical memory and disk storage.

Advantages of Virtual Memory:

●​ Enables large programs to run with limited physical memory.


●​ Increases system multiprogramming capability.
●​ Isolates process memory spaces, improving security.
●​ Reduces external fragmentation.

Demand Paging:

A type of virtual memory implementation where pages are loaded into memory only when
needed.

Diagram: Demand Paging Operation

Process requests page → Page not in memory (page fault)


→ OS loads page from disk into memory
→ Process continues execution

Page Replacement Algorithms:

When a requested page is not in memory, one of the existing pages must be replaced.

| Algorithm | Description | Advantage | Drawback |
|---|---|---|---|
| FIFO | Removes the oldest loaded page in memory. | Simple to implement | Poor performance in some cases (Belady’s anomaly) |
| LRU | Removes the page that has not been used for the longest time. | Better performance | Needs additional hardware or bookkeeping |
| Optimal | Replaces the page that will not be used for the longest time in the future. | Best possible replacement | Impractical in real systems |

Thrashing:

A condition where excessive paging operations reduce CPU performance.


Causes:

●​ High degree of multiprogramming.


●​ Insufficient physical memory.

Diagram: Thrashing Scenario

High Page Fault Rate → More Disk Swapping → Less CPU Execution Time → System
Slowdown

Solution:

●​ Reduce degree of multiprogramming.


●​ Use working set model.
●​ Employ efficient page replacement algorithms.

(Next: Chapter 7: File Systems)

📖 Chapter 7: File Systems


What is a File System?

A file system is a method by which operating systems store and organize files on storage
devices. It manages file naming, access, storage, retrieval, and protection.

File Attributes:

Each file has associated metadata:

●​ Name
●​ Type
●​ Size
●​ Location
●​ Protection
●​ Time, Date, and User ID

File Operations:

●​ Create
●​ Open
●​ Read
●​ Write
●​ Delete
●​ Close
●​ Append
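A minimal sketch showing several of these operations via the POSIX calls open(), read(), write(), and close(); the file names are placeholders.

#include <fcntl.h>
#include <unistd.h>

int copy_file(void)
{
    char buf[4096];
    ssize_t n;

    int src = open("notes.txt", O_RDONLY);                             /* open an existing file */
    int dst = open("backup.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);  /* create the destination */
    if (src < 0 || dst < 0)
        return -1;

    while ((n = read(src, buf, sizeof buf)) > 0)   /* read a block ... */
        write(dst, buf, (size_t)n);                /* ... and write it out */

    close(src);                                    /* release both descriptors */
    close(dst);
    return 0;
}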

File Access Methods:


| Access Type | Description | Example |
|---|---|---|
| Sequential | Data accessed in linear order | Reading log files |
| Direct (Random) | Any block of data can be read or written directly | Database record access |
| Indexed | Uses an index to access file blocks | Library catalogue lookup |

Directory Structures:
| Structure | Description |
|---|---|
| Single-level | All files in one directory |
| Two-level | Each user has their own directory |
| Tree-structured | Hierarchical arrangement; allows directories inside directories |
| Acyclic Graph | Shared subdirectories/files allowed |
| General Graph | Cycles permitted (managed with links and garbage collection) |

Diagram: Tree-Structured Directory

Root
├── Home
│ ├── User1
│ │ ├── FileA
│ │ └── FileB
│ └── User2
│ └── FileC
└── System
├── Config
└── Logs

File Allocation Methods:


| Method | Description | Pros | Cons |
|---|---|---|---|
| Contiguous | Each file occupies a set of contiguous blocks | Fast access | External fragmentation |
| Linked | Each block contains a pointer to the next block | No external fragmentation | Slow direct access, extra pointer overhead |
| Indexed | Uses an index block with all block addresses listed | Supports random access | Overhead of the index block |

(Next: Chapter 8: I/O Systems)

📖 Chapter 8: I/O Systems


What is an I/O System?

An I/O (Input/Output) system manages communication between the computer system and
external devices (keyboard, mouse, disk drives, printers, etc.). It is responsible for
transferring data between internal memory and external peripheral devices efficiently,
reliably, and securely.

Components of I/O Systems:

●​ I/O Devices: Physical hardware components that generate or accept data.


●​ Device Controller: Hardware interface that manages specific devices and
communicates with the operating system.
●​ I/O Modules: Coordinate and manage the data transfer between devices and system
memory/CPU.
●​ I/O Buffers: Temporary storage areas to handle data transfer mismatches between
fast CPU and slower I/O devices.

I/O Techniques:
| Technique | Description | Example |
|---|---|---|
| Programmed I/O | CPU actively polls the device until the operation completes | Keyboard input handling |
| Interrupt-Driven I/O | Device interrupts the CPU when ready; the CPU then services the interrupt | Disk read completion signal |
| Direct Memory Access (DMA) | Transfers data directly between I/O device and memory without CPU intervention | High-speed disk/network transfers |

Diagram: I/O System Overview

CPU <-------> Main Memory
 |                 |
DMA          Device Controllers
                   |
              I/O Devices

I/O Scheduling:
Efficient disk scheduling improves overall system performance and reduces disk seek time.

| Algorithm | Description |
|---|---|
| FCFS | Requests handled in arrival order |
| SSTF (Shortest Seek Time First) | Selects the request closest to the current head position |
| SCAN (Elevator) | Disk arm moves toward one end, servicing requests as it goes, then reverses direction |
| C-SCAN | Like SCAN but services in one direction only, jumping back to the beginning |

Diagram: Disk Scheduling Algorithms Comparison

[FCFS]: 20 → 50 → 10 → 90
[SSTF]: 50 → 20 → 10 → 90
[SCAN]: 20 → 50 → 90 → 10 (then reverse)
[C-SCAN]: 20 → 50 → 90 → 0 → 10
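A small worked comparison, assuming the head starts at cylinder 40 (the diagram does not state a starting position) and the requests arrive in the order 20, 50, 10, 90:

FCFS: 40 → 20 → 50 → 10 → 90   total head movement = 20 + 30 + 40 + 80 = 170 cylinders
SSTF: 40 → 50 → 20 → 10 → 90   total head movement = 10 + 30 + 10 + 80 = 130 cylinders

SSTF serves the nearest request each time, which is why it reduces total seek distance relative to FCFS here.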

Buffering:

Technique of using memory buffers to handle data transfer rate mismatches between
producers (CPU/processes) and consumers (I/O devices).

| Type | Description |
|---|---|
| Single Buffering | One buffer holds data temporarily while it is transferred |
| Double Buffering | Two buffers: one is filled while the other is emptied; improves throughput |
| Circular Buffering | A ring buffer where head/tail pointers track data, used for continuous streams |

Diagram: Double Buffering Concept

Process → Buffer 1 (being filled)   |   Buffer 2 (being drained) → I/O Device
(when Buffer 1 fills, the two buffers swap roles)

Spooling:

(Simultaneous Peripheral Operations Online) — a technique where data is temporarily held in disk files until devices are ready.

●​ Used commonly for printing.

Device Drivers:
Special software modules that enable the OS to communicate with hardware devices.
Translates OS I/O requests into device-specific commands.

Example: Printer driver converts document data into print instructions.

(Next: Chapter 9: Security and Protection)

📖 Chapter 9: Security and Protection


What is Security in Operating Systems?

Security in an operating system ensures that the system’s resources (data, memory,
processes, devices) are protected against unauthorized access, misuse, alteration, or
destruction.

Types of Security:

●​ User Authentication: Verifying the identity of a user before granting access.


●​ Access Control: Restricting access to system resources based on user
permissions.
●​ Data Protection: Ensuring data privacy and integrity.
●​ Network Security: Protecting data during transfer over a network.
●​ Physical Security: Protecting hardware from damage or theft.

Protection:

Protection in an OS refers to mechanisms that control access by programs, processes, or users to system resources.

Protection Objectives:

●​ Prevent unauthorized access.


●​ Ensure legitimate access.
●​ Isolate processes.

Access Control Mechanisms:


| Technique | Description |
|---|---|
| Access Control List (ACL) | List associated with each object defining permissions for each user |
| Capability List | List associated with each user/process defining the resources they can access |

Diagram: Access Matrix Model


File1 File2 Printer
User1 Read Read Write
User2 - Write -
User3 Read - Read

Threats to Security:

●​ Malware: Viruses, worms, trojans.


●​ Phishing: Social engineering attacks.
●​ Denial of Service (DoS): Overwhelming system resources.
●​ Data Breach: Unauthorized data access.

Security Strategies:

●​ Authentication via passwords, biometric, or smart cards.


●​ Encryption for secure data transfer.
●​ Auditing and logging of security incidents.
●​ Regular system and software updates.
●​ Firewall implementation.

Encryption Techniques:

| Method | Description |
|---|---|
| Symmetric Encryption | Same key for encryption and decryption |
| Asymmetric Encryption | Public and private key pairs |

Example: SSL/TLS protocols use asymmetric encryption for key exchange and symmetric
for session data.

Diagram: Simple Encryption Process

Plain Text → [Encryption Algorithm + Key] → Cipher Text


Cipher Text → [Decryption Algorithm + Key] → Plain Text

(Next: Chapter 10: System Calls)

📖 Chapter 10: System Calls


What is a System Call?

A System Call is a programmatic way in which a program requests a service from the
operating system's kernel. It acts as a controlled entry point into the kernel, allowing
user-level processes to access hardware and system services securely.
Why System Calls are Required:

●​ Process management
●​ File management
●​ Device handling
●​ Communication between processes
●​ Information maintenance

Types of System Calls:


| Category | Example System Calls |
|---|---|
| Process Control | fork(), exit(), wait(), exec() |
| File Management | open(), read(), write(), close() |
| Device Management | ioctl(), read(), write(), close() |
| Information Maintenance | getpid(), alarm(), sleep() |
| Communication | pipe(), shmget(), mmap(), msgget(), semop() |
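A minimal sketch of the process-control calls from the table above (fork(), exec(), wait()); running "ls -l" in the child is just an arbitrary illustration.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                          /* create a child process */

    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child's image */
        perror("execlp");                        /* reached only if exec fails */
        exit(1);
    } else if (pid > 0) {
        int status;
        wait(&status);                           /* parent blocks until the child exits */
        printf("child %d finished\n", (int)pid);
    } else {
        perror("fork");                          /* fork failed */
    }
    return 0;
}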

System Call Execution Process:

Diagram: Basic System Call Flow

User Program
  ↓
System Call Library (API)
  ↓
Trap to Kernel Mode
  ↓
Kernel Executes Service
  ↓
Control Returns to User Program

Example: open() System Call in UNIX/Linux

1.​ User program calls open() with filename and mode.


2.​ OS checks file permissions.
3.​ Allocates file descriptor.
4.​ Returns descriptor to user process.

Modes of Operation:

●​ User Mode: Limited access to system resources.


●​ Kernel Mode: Full access to all hardware resources and instructions.
Diagram: Mode Switch in OS

User Mode ↔ Kernel Mode


(System Call triggers switch to Kernel Mode)

(Next: Chapter 11: Operating System Architectures)

📖 Chapter 11: Operating System Architectures
What is an Operating System Architecture?

Operating system architecture defines the overall design and structure of the operating
system, organizing its components and how they interact. It provides a systematic way to
manage hardware and software resources while ensuring security, efficiency, and reliability.

Types of OS Architectures:

1️⃣ Monolithic Architecture:

●​ Entire OS operates in kernel space.


●​ All functionalities like process management, memory, I/O, and file systems reside
within a single kernel.
●​ Fast and efficient, but difficult to maintain and extend.

Example: Early UNIX

Diagram: Monolithic Kernel Structure

+-------------------------+
| User Apps |
+-------------------------+
| Kernel |
| (File System, Memory, I/O, Process Management) |
+-------------------------+
| Hardware |
+-------------------------+

2️⃣ Layered Architecture:

●​ OS divided into several layers, each built on top of lower ones.


●​ Each layer communicates only with adjacent layers.
●​ Enhances modularity, ease of debugging, and maintenance.

Example: THE operating system

Diagram: Layered Architecture


+------------+
| User Layer |
+------------+
| Service Layer |
+------------+
| Kernel Layer |
+------------+
| Hardware Layer |
+------------+

3️⃣ Microkernel Architecture:

●​ Minimal kernel containing only essential services: process communication, basic


scheduling, low-level hardware management.
●​ Other services (file systems, device drivers) run in user space.
●​ Improves system security and stability.

Example: QNX, MINIX

Diagram: Microkernel Structure

+-------------------------+
| User Services |
+-------------------------+
| OS Services (User) |
+-------------------------+
| Microkernel |
+-------------------------+
| Hardware |
+-------------------------+

4️⃣ Hybrid Architecture:

●​ Combines Monolithic and Microkernel principles.


●​ Kernel is monolithic-like but modular, with some services operating in user mode.
●​ Balances performance with modularity.

Example: Windows NT, macOS (XNU kernel)

5️⃣ Modular Architecture:

●​ OS functionalities split into independent, dynamically loadable modules.


●​ Offers flexibility to add/remove modules at runtime.

Example: Linux Kernel Modules

Diagram: Modular Kernel Concept

+-----------------+
| Core Kernel |
+-----------------+
| Module 1 | Module 2 | Module 3 |
+-----------------+
| Hardware |
+-----------------+

Comparison of OS Architectures:

| Architecture | Advantages | Disadvantages |
|---|---|---|
| Monolithic | Fast, efficient | Hard to maintain, risky for bugs |
| Layered | Modular, secure | Overhead due to layer separation |
| Microkernel | Secure, reliable, easy to extend | Higher communication overhead |
| Hybrid | Good balance of speed and modularity | Complex implementation |
| Modular | Flexibility, dynamic module management | Complexity in dependency handling |

(Next: Chapter 12: Distributed Systems)

📖 Chapter 12: Distributed Systems


What is a Distributed System?

A Distributed System is a collection of independent computers that appears to its users as a single coherent system. These computers collaborate, communicate, and coordinate their actions by passing messages over a network.

Key Features:

●​ Resource Sharing: Hardware and software resources shared among systems.


●​ Concurrency: Multiple processes run simultaneously on different systems.
●​ Scalability: System can easily grow by adding new nodes.
●​ Fault Tolerance: System continues to function despite partial failures.

Advantages:

●​ High reliability and availability.


●​ Better resource sharing.
●​ Increased performance via load distribution.
●​ Scalability and modular growth.
Challenges:

●​ Complex synchronization.
●​ Security issues in a distributed environment.
●​ Data consistency and integrity.
●​ Managing concurrent processes and failures.

Types of Distributed Systems:


| Type | Example |
|---|---|
| Distributed Computing | Distributed DBMS, cloud services |
| Distributed Information Systems | Google File System (GFS), Hadoop |
| Distributed Pervasive Systems | IoT networks, smart grids |

Distributed OS Models:

●​ Client-Server Model: Clients request services from centralized servers.


●​ Peer-to-Peer Model: All nodes have equal capabilities; communicate directly.
●​ Clustered Systems: Multiple systems interconnected to function as one.
●​ Cloud Systems: Dynamic resource allocation over internet-based infrastructure.

Diagram: Client-Server vs Peer-to-Peer

[Client-Server]
Clients → Server

[Peer-to-Peer]
Node ↔ Node ↔ Node

Distributed File Systems (DFS):

Allows files to be stored across multiple machines but accessed as if stored locally.

●​ Examples: NFS (Network File System), Hadoop Distributed File System (HDFS)

Synchronization in Distributed Systems:

●​ Clock Synchronization: Ensures system clocks remain consistent.


●​ Mutual Exclusion: Ensures only one process accesses a resource at a time.
●​ Election Algorithms: Select a coordinator in distributed environments.

Example:

●​ Bully Algorithm
●​ Ring Algorithm
Diagram: Distributed Mutual Exclusion (Token-Based)

Process 1 → Token → Process 2 → Process 3

Deadlocks in Distributed Systems:

●​ Similar to traditional systems but involves multiple systems.


●​ Detection requires global state.
●​ Prevention via timeout or ordered resource requests.

(Next: Chapter 13: Real-Time and Embedded Operating Systems)

📖 Chapter 13: Real-Time and Embedded Operating Systems
What is a Real-Time Operating System (RTOS)?

An RTOS is designed to serve real-time applications that process data and respond within a
strictly defined time frame. It ensures guaranteed response times for critical operations.

Characteristics of RTOS:

●​ Deterministic Response: Predictable response to events.


●​ Multitasking: Supports concurrent task execution.
●​ Priority-Based Scheduling: High-priority tasks are executed first.
●​ Minimal Latency: Reduced delay in task switching and interrupt handling.
●​ Reliability and Fault Tolerance: Continuous operation in critical systems.

Types of Real-Time Systems:


| Type | Description | Example |
|---|---|---|
| Hard RTOS | Missing a deadline is catastrophic | Flight control systems |
| Soft RTOS | Missing deadlines affects performance but is tolerable | Video streaming, gaming |

Examples:

●​ VxWorks, RTLinux, FreeRTOS, QNX, Micrium uC/OS

RTOS Task Scheduling:

●​ Preemptive Scheduling: High-priority tasks can interrupt lower ones.


●​ Non-Preemptive Scheduling: A task runs until it voluntarily yields control.
Diagram: RTOS Scheduling

Task 1 (High Priority)


↑ Preempts
Task 2 (Low Priority)

Embedded Operating Systems

An Embedded OS runs on embedded systems — specialized computing systems that


perform dedicated functions within larger systems.

Characteristics of Embedded OS:

●​ Resource-Constrained: Operates with limited memory, CPU, and power.


●​ Task-Specific: Designed for a particular application or device.
●​ Real-Time Capabilities: Many embedded OS function as RTOS.
●​ Compact and Efficient: Minimal kernel footprint.
●​ High Reliability and Stability: Crucial in mission-critical applications.

Examples:

●​ Embedded Linux, VxWorks, FreeRTOS, TinyOS, Contiki

Differences Between RTOS and General-Purpose OS:


| Feature | Real-Time OS (RTOS) | General-Purpose OS (GPOS) |
|---|---|---|
| Timing Constraints | Strict and deterministic | No strict constraints |
| Priority Scheduling | Essential | Optional |
| Latency | Minimal latency | Higher latency |
| Application Use | Critical systems (medical, aviation) | Personal computers, servers |

Applications of Embedded OS:

●​ Automotive systems
●​ Mobile devices
●​ Smart appliances
●​ Industrial control systems
●​ IoT devices

(Next: Additional Topics / Recap)


📖 Frequently Asked OS Interview Questions (With Answers)
1️⃣ What is an Operating System?

An operating system is system software that acts as an intermediary between the user and
computer hardware. It provides services like process management, memory management,
file systems, security, and device management.

2️⃣ What is the difference between Process and Thread?


| Aspect | Process | Thread |
|---|---|---|
| Definition | Independent program in execution | Lightweight process within a process |
| Memory | Has its own memory space | Shares memory with other threads |
| Communication | Uses Inter-Process Communication (IPC) | Shares data easily with other threads |
| Overhead | Higher | Lower |

3️⃣ What is a Deadlock?

A deadlock is a situation where a group of processes are blocked because each process is
holding a resource and waiting to acquire a resource held by another process, forming a
circular wait.

Necessary conditions for Deadlock:

●​ Mutual Exclusion
●​ Hold and Wait
●​ No Preemption
●​ Circular Wait

Prevention Methods:

●​ Avoid any one of the four conditions.


●​ Use Banker's Algorithm for avoidance.
4️⃣ What is a System Call?

A system call is a programmatic interface between a running program and the operating
system, enabling programs to request services like file access, process management, and
device management.

5️⃣ Explain Virtual Memory.

Virtual memory is a memory management technique where secondary storage (disk) is used
to extend physical memory. Processes can run without being fully loaded into RAM.

Key Components:

●​ Paging
●​ Demand Paging
●​ Page Replacement Algorithms (FIFO, LRU, Optimal)

6️⃣ What are the differences between Paging and Segmentation?


| Aspect | Paging | Segmentation |
|---|---|---|
| Division Unit | Fixed-size pages | Variable-sized segments |
| Logical Division | No consideration for program structure | Divides based on logical divisions of the program |
| Fragmentation | Internal | External |

7️⃣ What is a Scheduler?

A scheduler determines which process runs next.

Types:

●​ Long-term Scheduler: selects processes from job pool.


●​ Short-term Scheduler: decides which ready process to execute next.
●​ Medium-term Scheduler: temporarily removes processes from memory.

8️⃣ Difference between Preemptive and Non-Preemptive Scheduling.

●​ Preemptive: CPU can be taken away from a running process.


●​ Non-Preemptive: Process runs till it voluntarily releases CPU.
Example:

●​ Preemptive: Round Robin, SRTF


●​ Non-Preemptive: FCFS, SJF

9️⃣ What is a Kernel? Types?

The kernel is the core of an operating system that manages system resources.

Types:

●​ Monolithic Kernel
●​ Microkernel
●​ Hybrid Kernel
●​ Modular Kernel

🔟 What is Thrashing?
Thrashing occurs when the system spends more time swapping pages in and out of memory
than executing processes, drastically reducing performance.

Solution:

●​ Reduce degree of multiprogramming.


●​ Use efficient page replacement algorithms.

1️⃣1️⃣ What is Context Switching?

Context Switching is the process of storing the state of a running process and restoring the
state of the next scheduled process by the CPU.

Key Steps:

●​ Save CPU registers and program counter of the current process.


●​ Load the saved context of the new process.
●​ Resume execution.


1️⃣2️⃣ What is the difference between Internal and External Fragmentation?


| Type | Description |
|---|---|
| Internal Fragmentation | Wasted memory inside allocated memory blocks |
| External Fragmentation | Wasted memory between allocated blocks, making it unusable |

1️⃣3️⃣ What is Demand Paging?

A memory management scheme where pages are loaded into memory only when required
during execution, reducing memory load and improving efficiency.

Advantages:

●​ Reduced memory requirement.​

●​ Faster program start.​

●​ Efficient memory use.​

1️⃣4️⃣ Explain Belady's Anomaly.

In certain page replacement algorithms (like FIFO), increasing the number of page frames
can sometimes increase the number of page faults — this unexpected behavior is called
Belady's Anomaly.

Occurs in:
●​ FIFO​

Does not occur in:

●​ LRU​

●​ Optimal​
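A standard illustrative example (reference string assumed for illustration): for the string 1 2 3 4 1 2 5 1 2 3 4 5 under FIFO, 3 frames give 9 page faults while 4 frames give 10, so adding a frame actually makes things worse.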

1️⃣5️⃣ What is a Race Condition?

A situation where multiple processes access and manipulate shared data concurrently, and
the outcome depends on the non-deterministic order of execution.

Solution:

●​ Use synchronization primitives (mutex, semaphore).​
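A minimal sketch of the shared-counter race from Chapter 3, fixed with a pthread mutex; the function name increment_many() and the iteration count are illustrative.

#include <pthread.h>

static long counter = 0;
static pthread_mutex_t counter_mtx = PTHREAD_MUTEX_INITIALIZER;

/* Without the lock, two threads could read the same old value of
   'counter' and one of the increments would be lost. */
void *increment_many(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_mtx);
        counter++;                        /* critical section */
        pthread_mutex_unlock(&counter_mtx);
    }
    return NULL;
}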

1️⃣6️⃣ Define Critical Section.

A section of code where shared resources are accessed. It must be executed by only one
process at a time to avoid data inconsistency.

Solution:

●​ Mutual Exclusion​

●​ Progress​

●​ Bounded Waiting​

1️⃣7️⃣ What is Swapping?

Swapping is the process of moving entire processes between main memory and disk to
optimize memory utilization and handle multiprogramming.

Used when:

●​ Memory is full​
●​ Higher priority process arrives​

1️⃣8️⃣ Difference between Logical and Physical Address.


| Address Type | Description |
|---|---|
| Logical Address | Address generated by the CPU (virtual address) |
| Physical Address | Actual location in physical memory (RAM) |

1️⃣9️⃣ What is a Zombie Process?

A process that has completed execution but still has an entry in the process table, retaining
its process ID (PID).

Occurs when:

●​ Parent fails to call wait() for terminated child process.​

Solution:

●​ Ensure parent retrieves child status via wait()​

2️⃣0️⃣ What is IPC (Inter-Process Communication)?

A mechanism allowing processes to communicate and synchronize their actions.

Methods:

●​ Shared Memory​

●​ Message Passing​

●​ Pipes​

●​ Semaphores​
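A minimal sketch of IPC via a pipe between a parent and its child; the message text is arbitrary.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[32];

    pipe(fd);                         /* fd[0] is the read end, fd[1] the write end */

    if (fork() == 0) {                /* child: writes a message */
        close(fd[0]);
        write(fd[1], "hello", 6);
        close(fd[1]);
    } else {                          /* parent: reads it */
        close(fd[1]);
        read(fd[0], buf, sizeof buf);
        printf("parent received: %s\n", buf);
        close(fd[0]);
    }
    return 0;
}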

2️⃣1️⃣ What is the difference between Multithreading and Multiprocessing?


| Aspect | Multithreading | Multiprocessing |
|---|---|---|
| Definition | Multiple threads within a single process | Multiple independent processes |
| Memory Sharing | Threads share the same address space | Each process has its own memory |
| Overhead | Lower | Higher |
| Communication | Easier (shared memory) | Complex (IPC mechanisms required) |

2️⃣2️⃣ Semaphore vs Mutex?


| Feature | Semaphore | Mutex |
|---|---|---|
| Type | Signaling mechanism | Locking mechanism |
| Resource Access | Allows multiple processes/threads | Allows only one thread at a time |
| Usage | For synchronizing multiple resources | For mutual exclusion of a critical section |

2️⃣3️⃣ What is a Priority Inversion?

A situation where a higher-priority process waits for a lower-priority process to release a resource, potentially causing unbounded waiting.

Solution: Priority Inheritance Protocol.

2️⃣4️⃣ What’s the difference between User-Level and Kernel-Level Threads?


| Feature | User-Level Threads | Kernel-Level Threads |
|---|---|---|
| Managed By | User-level thread library | Operating system kernel |
| Context Switching | Faster | Slower |
| Visibility to Kernel | Invisible | Fully visible |

2️⃣5️⃣ What is Cooperative vs Preemptive Multitasking?

●​ Cooperative Multitasking: A process voluntarily yields control.


●​ Preemptive Multitasking: OS forcibly suspends a running process to give CPU to
another.

Modern OS: Mostly Preemptive.

2️⃣6️⃣ What happens if a high-priority thread enters an infinite loop?

It will monopolize the CPU, causing lower-priority threads to starve.

Solution:

●​ Use preemptive multitasking.


●​ Implement watchdog mechanisms.

2️⃣7️⃣ Explain Busy Waiting.

A synchronization technique where a process continuously checks for a condition without relinquishing the CPU.

Disadvantage: Wastes CPU cycles.

Alternative: Use blocking mechanisms like semaphores.

2️⃣8️⃣ Difference between Monolithic and Microkernel Architecture?


| Feature | Monolithic Kernel | Microkernel |
|---|---|---|
| Structure | All OS services run in kernel mode | Minimal kernel; services run in user mode |
| Performance | Faster | Slower due to communication overhead, but more secure and modular |
| Stability | Less stable (errors can crash the entire OS) | More stable (fault isolation) |

2️⃣9️⃣ What is the Producer-Consumer Problem?

A classic synchronization problem where the producer generates data and the consumer
processes it.

Solution: Use semaphores or mutexes to ensure mutual exclusion and avoid race
conditions.
3️⃣0️⃣ What is a Daemon Process?

A background process that runs continuously, typically without user interaction, handling
system tasks.

Example:

●​ crond (Linux)
●​ httpd (Web Server)
