Operating System (OS)
Introduction to Operating System (OS)
An Operating System (OS) is system software that acts as an interface between the user and the computer hardware. It manages hardware resources, provides an environment for application programs, and ensures efficient execution of tasks.
Functional Behavior and Responsibilities of OS
The OS has several important functions:
Processor Management (CPU Scheduling):
Allocates CPU to various processes.
Uses scheduling algorithms (e.g., FCFS, Round Robin) to ensure fairness and efficiency.
Memory Management:
Keeps track of each byte in a computer’s memory.
Allocates and deallocates memory space as needed by programs.
Manages virtual memory and paging.
File Management:
Manages files on different storage devices.
Provides access methods, security, and file naming conventions.
I/O System Management:
Controls input/output devices.
Uses device drivers and buffering to improve performance.
Process Management:
Handles creation, scheduling, and termination of processes.
Provides inter-process communication (IPC) and synchronization.
Security & Protection:
Protects data and resources from unauthorized access.
Implements authentication and access control.
Error Detection & Response:
Detects internal and external errors.
Takes corrective actions.
Resource Allocation:
Manages multiple users or programs competing for resources.
Need for Monitor / Command Interpreter
The Command Interpreter (also called Shell) is a part of the OS that:
Accepts user commands (like dir, ls, copy) and interprets them for the OS to execute.
Acts as a monitor program that manages execution of user jobs.
Provides an interface between the user and OS services.
Why it is needed:
To avoid interacting directly with hardware.
To make it easier for users to execute complex operations using simple commands.
To provide automation (through scripts) and batch processing.
Examples:
Windows: Command Prompt / PowerShell
Linux/Unix: Bash shell, Zsh
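The sketch below (an illustrative POSIX example, not part of the original notes) shows the core loop of a command interpreter: read a command, create a child process, replace the child with the requested program, and wait for it to finish. The prompt name mysh and the restriction to single-word commands are assumptions made for brevity.

/* Minimal command-interpreter sketch (POSIX). Reads one command per line,
 * e.g. "ls" or "date", and runs it; "exit" quits. Error handling is minimal. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char line[256];
    while (printf("mysh> "), fgets(line, sizeof line, stdin)) {
        line[strcspn(line, "\n")] = '\0';        /* strip trailing newline */
        if (strcmp(line, "exit") == 0) break;
        if (line[0] == '\0') continue;

        pid_t pid = fork();                      /* create a child process */
        if (pid == 0) {
            execlp(line, line, (char *)NULL);    /* replace child with the command */
            perror("exec failed");               /* reached only if exec fails */
            _exit(1);
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);               /* shell waits for the command */
        } else {
            perror("fork failed");
        }
    }
    return 0;
}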
3. Types of Operating Systems
1. Batch Operating System
Definition: Executes jobs in batches without user interaction. Jobs with similar needs are
grouped and run together.
Key Feature: No direct user interaction during execution.
Examples:
o IBM OS/360
o Early Microsoft MS-DOS
2. Time-Sharing Operating System (Multitasking OS)
Definition: Allows multiple users to share system resources simultaneously. Each user
gets a small time slice of CPU.
Key Feature: Provides interactive use of computer resources.
Examples:
o UNIX
o Multics
o Windows Server (time-sharing mode)
3. Real-Time Operating System (RTOS)
Definition: Responds to inputs or events in a fixed, predictable time frame. Used in
critical applications.
Types:
o Hard RTOS: Strict deadlines (e.g., missile systems).
o Soft RTOS: Deadlines are important but not absolute (e.g., video streaming).
Examples:
o VxWorks
o QNX
o RTLinux
4. Distributed Operating System
Definition: Manages a group of independent computers and makes them appear as a
single system.
Key Feature: Resource sharing across networked systems.
Examples:
o Amoeba
o LOCUS
o Microsoft Windows Server (in distributed environments)
5. Network Operating System (NOS)
Definition: Runs on a server and provides services to manage data, users, groups, and
security over a network.
Key Feature: Supports file sharing, printer sharing, communication among systems.
Examples:
o Microsoft Windows Server
o Novell NetWare
o Linux (with Samba, NFS)
6. Multiprocessing Operating System
Definition: Supports multiple CPUs in a single computer system for parallel processing.
Key Feature: Improves performance by executing multiple processes simultaneously.
Examples:
o UNIX
o Linux
o Windows NT
7. Mobile Operating System
Definition: Specifically designed for smartphones, tablets, and handheld devices.
Key Feature: Lightweight, supports touchscreen, connectivity, and app management.
Examples:
o Android
o iOS
o Windows Mobile
4. System Structure of OS
The OS structure defines how components are organized and interact. Common structures
include:
1. Monolithic System
Definition: The simplest structure where the entire OS runs in a single large kernel.
Features:
o All system services (file system, device drivers, memory management, etc.) run in
kernel mode.
o High performance, but difficult to maintain/debug.
Examples:
o MS-DOS
o Early UNIX
2. Layered Approach
Definition: OS is divided into layers, each built on top of the lower one.
Features:
o Each layer provides services to the upper layer and hides implementation details.
o Easier debugging and design.
Examples:
o THE Operating System (Dijkstra’s layered OS).
Structure Example:
o Layer 0: Hardware
o Layer 1: CPU scheduling, memory management
o Layer 2: Device drivers
o Layer 3: System calls & file system
o Layer 4: User programs
3. Microkernel Architecture
Definition: Only the most essential functions (inter-process communication, scheduling,
low-level memory management) run in the kernel; other services run in user mode.
Advantages:
o Modular, more secure, easier to extend.
Disadvantages:
o More context switching → slower than monolithic.
Examples:
o QNX, Mach, Minix
4. Modular (Hybrid) Kernel
Definition: Combination of monolithic and microkernel. Core services in kernel, while
others are modules that can be dynamically loaded.
Features:
o Flexibility + performance.
Examples:
o Modern Linux
o Windows NT, Windows 10
5. Virtual Machine Structure
Definition: Provides an interface identical to the underlying hardware → multiple operating systems can run simultaneously on the same hardware.
Features:
o Strong isolation, flexibility in experimentation.
Examples:
o VMware, VirtualBox, IBM VM/370
6. Client-Server Model
Definition: System services (like file management, printing) run as user-level processes
(servers), while the kernel only provides communication (IPC).
Example:
o Used in distributed systems and some microkernel designs.
5. Hierarchical and Layered Organization of OS
Hierarchical Organization:
o OS functions arranged in hierarchies—lower levels handle hardware, higher levels
handle user interfaces.
o Each layer provides services to the layer above and uses services of the layer below.
Layered Organization Example:
Layer 0 – Hardware
Layer 1 – CPU Scheduling & Memory
Layer 2 – Device Management
Layer 3 – File System
Layer 4 – User Interface / Command Interpreter
Advantages:
Modularity, Easier to debug, Enhances security and system reliability.
6. I/O Methods and Interrupt Structure
I/O Methods:
1. Programmed I/O:
o CPU issues commands and waits until the I/O operation completes.
o Inefficient as CPU remains idle.
2. Interrupt-Driven I/O:
o CPU issues I/O command and continues other work.
o When I/O finishes, device sends an interrupt signal to CPU.
3. Direct Memory Access (DMA):
o Device transfers data directly to memory without CPU involvement.
o CPU is only interrupted after the entire block is transferred.
Interrupt Structure:
An interrupt is a signal that temporarily halts the CPU’s current execution and transfers control
to the Interrupt Service Routine (ISR).
Types of interrupts:
o Hardware Interrupts: Generated by external devices (e.g., keyboard, printer).
o Software Interrupts: Generated by programs (e.g., system calls).
Interrupts improve system efficiency and responsiveness.
1. Process Definition
A process is a program in execution.
A program is a passive entity (set of instructions).
A process is an active entity with its own program counter, stack, data section, and state.
Each process has:
Program Code (text section)
Program Counter (PC) – points to the next instruction
Stack – for function calls, parameters, return addresses
Heap – for dynamic memory allocation
Data Section – global variables, static variables
👉 Example: When you open a browser, the executable program becomes a running process.
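The following small C program (illustrative only; actual addresses differ from system to system and run to run) prints where typical objects live, matching the sections listed above.

#include <stdio.h>
#include <stdlib.h>

int global_var = 10;            /* data section (initialized globals/statics) */
static int static_var;          /* BSS/data section (zero-initialized static) */

int main(void) {
    int local_var = 5;                            /* stack: locals, parameters, return addresses */
    int *heap_var = malloc(sizeof *heap_var);     /* heap: dynamic memory allocation */

    printf("text  (code)  : %p\n", (void *)main);
    printf("data  (global): %p\n", (void *)&global_var);
    printf("bss   (static): %p\n", (void *)&static_var);
    printf("heap  (malloc): %p\n", (void *)heap_var);
    printf("stack (local) : %p\n", (void *)&local_var);

    free(heap_var);
    return 0;
}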
3. Parallel Processes and Constructs
Parallel processes are multiple processes executing concurrently (possibly on multiple CPUs).
Parallel Constructs:
Fork / Join: Used to create and synchronize multiple processes.
Pipes: For data flow between processes.
Parallel Loops: Allow splitting iterations among processes.
Critical Sections: Sections of code where shared resources are accessed.
Benefits:
Better CPU utilization
Faster execution for independent tasks
Improved throughput
Challenges:
Race conditions, deadlocks, and synchronization issues arise if processes share resources.
4. Process Interaction
Processes interact mainly in two ways:
a. Independent Processes:
Do not affect each other’s execution.
No shared data.
b. Cooperating Processes:
Share data/resources, may affect each other’s execution.
Require Inter-Process Communication (IPC) mechanisms.
IPC Mechanisms:
1. Message Passing:
o Processes exchange messages through system calls.
o Useful in distributed systems.
2. Shared Memory:
o Processes share a memory region for communication.
o Faster but requires synchronization.
3. Synchronization Tools:
o Semaphores, Mutexes, Monitors to prevent race conditions.
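A minimal shared-memory sketch, assuming a POSIX/Linux system: parent and child share one page created with mmap(MAP_SHARED | MAP_ANONYMOUS), and wait() supplies the synchronization that shared memory otherwise requires.

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);   /* shared region */
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                        /* child: writer */
        strcpy(shared, "hello from child");
        return 0;
    }
    wait(NULL);                               /* parent waits, so no race here */
    printf("parent read: %s\n", shared);      /* parent: reader */
    munmap(shared, 4096);
    return 0;
}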
5. Operating System Kernel
The Kernel is the core part of the OS that:
Runs in privileged mode.
Manages process scheduling, memory, I/O, system calls, interrupts, and resources.
Remains loaded in memory at all times.
Types of Kernels:
Monolithic Kernel: All services in one block (e.g., traditional Unix).
Microkernel: Only essential functions; other services run in user mode (e.g., Minix).
Hybrid Kernel: Combines both (e.g., Windows, modern Linux).
6. Data Structures for Processes and Resources
The OS maintains information about processes using Process Control Blocks (PCB) and other structures.
Process Control Block (PCB) contains:
Process ID (PID)
Process state
Program counter
CPU registers
Memory management info (page tables, base-limit registers)
Accounting info (CPU usage, priority)
I/O status (open files, devices)
👉 PCBs are stored in different queues:
Ready Queue: For ready processes
Waiting Queue: For processes waiting for I/O
Job Queue: All processes in the system
Resource Tables:
Keep track of devices, files, and memory blocks allocated to each process.
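A simplified PCB could be modelled as the C structure below. The field names are illustrative, not taken from any real kernel (Linux, for instance, uses a much larger task_struct).

/* Simplified, illustrative PCB layout matching the fields listed above. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* process identifier */
    proc_state_t   state;            /* current scheduling state */
    unsigned long  program_counter;  /* saved program counter */
    unsigned long  registers[16];    /* saved CPU registers */
    void          *page_table;       /* memory-management info */
    int            priority;         /* scheduling / accounting info */
    unsigned long  cpu_time_used;    /* accounting: CPU usage */
    int            open_files[16];   /* I/O status: open file descriptors */
    struct pcb    *next;             /* link field for ready/waiting queues */
} pcb_t;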
7. Context Switching
Context Switching is the mechanism where the CPU switches from one process to another, saving and
restoring the context (state) of each process.
Steps:
1. Save the context (registers, program counter) of the current process in its PCB.
2. Load the context of the next scheduled process from its PCB.
3. Update memory management structures if required.
4. Resume execution of the next process.
👉 Context Switch Time is overhead (no useful work is done), so OS tries to minimize it.
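The user-level sketch below uses the POSIX <ucontext.h> calls getcontext, makecontext, and swapcontext to save and restore an execution context. It only illustrates the save/restore idea behind the steps above; a real kernel context switch also changes address spaces and runs in privileged mode.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;

static void task(void) {
    printf("task: running with its own stack and registers\n");
    swapcontext(&task_ctx, &main_ctx);   /* save task's state, restore main's */
    printf("task: resumed where it left off\n");
}

int main(void) {
    static char stack[64 * 1024];

    getcontext(&task_ctx);               /* initialize the context */
    task_ctx.uc_stack.ss_sp = stack;
    task_ctx.uc_stack.ss_size = sizeof stack;
    task_ctx.uc_link = &main_ctx;        /* where to continue when task() returns */
    makecontext(&task_ctx, task, 0);

    printf("main: switching to task\n");
    swapcontext(&main_ctx, &task_ctx);   /* save main's state, run task */
    printf("main: back, switching again\n");
    swapcontext(&main_ctx, &task_ctx);   /* resume task after its swapcontext */
    printf("main: done\n");
    return 0;
}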
8. Process Control Primitives
These are basic OS operations to manage processes:
Create (fork): Create a new process.
Destroy (exit/kill): Terminate a process.
Suspend: Pause a process temporarily.
Resume: Resume a suspended process.
Block: Move process to waiting state.
Wakeup: Move process from waiting to ready.
Dispatch: Assign CPU to a process.
Example (UNIX/Linux):
fork() → create child process
exec() → replace process memory
wait() → wait for child process
kill() → send signals to terminate
9. Process Scheduling
Process Scheduling is the selection of a process from the ready queue to run next on the CPU.
Scheduling Queues:
Job Queue – All processes in the system.
Ready Queue – Waiting for CPU.
Device Queue – Waiting for I/O devices.
Schedulers:
1. Long-Term Scheduler (Job Scheduler):
o Decides which processes are admitted to the system.
o Controls degree of multiprogramming.
2. Short-Term Scheduler (CPU Scheduler):
o Selects which process from the ready queue gets CPU next.
o Runs frequently.
3. Medium-Term Scheduler:
o Used in swapping systems to suspend/resume processes to optimize memory.
Scheduling Algorithms:
FCFS (First Come First Served) – Simple, but can cause the convoy effect (see the worked example after this list).
SJF (Shortest Job First) – Gives the minimum average waiting time, but burst times must be known or estimated.
Priority Scheduling – Based on priority, can lead to starvation.
Round Robin – Time sharing using time quantum.
Multilevel Queue / Multilevel Feedback Queue – For complex, mixed environments.
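A worked FCFS example in C, assuming three processes that all arrive at time 0 with burst times 24, 3 and 3 (the classic convoy-effect workload):

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* P1, P2, P3 arrive at time 0 in this order */
    int n = 3, clock = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int wait = clock;                /* time spent waiting for the CPU */
        clock += burst[i];
        int tat = clock;                 /* turnaround = completion time (arrival = 0) */
        total_wait += wait;
        total_tat  += tat;
        printf("P%d: waiting = %2d, turnaround = %2d\n", i + 1, wait, tat);
    }
    printf("average waiting = %.2f, average turnaround = %.2f\n",
           total_wait / n, total_tat / n);   /* 17.00 and 27.00 for this workload */
    return 0;
}

Reordering the array so the two short jobs run first (SJF order: 3, 3, 24) drops the average waiting time from 17 to 3, which is exactly the convoy effect FCFS suffers from and why SJF minimizes average waiting time.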
Goals of Scheduling:
CPU Utilization
Throughput
Turnaround Time
Waiting Time
Response Time
Fairness
1. The Determinacy Problem
Definition:
The determinacy problem arises when the outcome of program execution depends on the relative
speed or timing of concurrent processes, i.e., the final result is not predictable and may differ from one
run to another.
This happens mainly when:
Two or more parallel processes access shared data simultaneously.
There is no proper synchronization between them.
Example: Suppose two processes P1 and P2 both increment a shared variable X (initial value 5):
P1: X = X + 1, P2: X = X + 1
If both execute simultaneously:
Expected Result: 7
Actual Possible Results: 6 or 7, depending on how the two read–modify–write sequences interleave.
This is called a Race Condition — a classic example of the determinacy problem.
Causes of Determinacy Problem:
Shared data access without synchronization
Non-atomic operations (e.g., read-modify-write)
Uncontrolled scheduling of concurrent processes
👉 Goal:
Ensure that concurrent execution is deterministic, i.e., gives the same result every time, regardless of
process timing.
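The race above can be reproduced with two POSIX threads (an illustrative program; compile with -pthread). Because the increments are not atomic, updates are lost and the final value usually falls short of the expected total.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared data, no synchronization */

static void *work(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter = counter + 1;           /* non-atomic read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, work, NULL);
    pthread_create(&t2, NULL, work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but lost updates usually make the result smaller,
     * and it changes from run to run: the determinacy problem. */
    printf("counter = %ld\n", counter);
    return 0;
}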
2. Mutual Exclusion
Definition:
Mutual Exclusion ensures that only one process at a time can access a shared resource or critical
section, preventing race conditions.
A critical section is a portion of the program where shared data is accessed or modified.
To implement mutual exclusion, a process must:
1. Request Entry → Check if no other process is in the critical section.
2. Enter Critical Section → Access shared resource.
3. Exit Critical Section → Release resource for others.
4. Remainder Section → Rest of the program.
Requirements of Mutual Exclusion Solutions:
1. Mutual Exclusion: Only one process can enter its critical section at a time.
2. Progress: If no process is in its critical section, others can enter without unnecessary delay.
3. Bounded Waiting: No process should wait indefinitely to enter its critical section.
4. No Assumptions on Speed: Should work regardless of processor speed.
Methods to Achieve Mutual Exclusion:
Software Methods:
o Peterson’s Algorithm
o Dekker’s Algorithm
Hardware Support:
o Test-and-Set Instruction
o Compare-and-Swap
Synchronization Primitives:
o Semaphores, Mutex Locks, Monitors
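As an illustration of the software approach named above, Peterson's algorithm for two processes (numbered 0 and 1) can be written as below. This is the classical textbook formulation; on modern CPUs and compilers it additionally needs memory barriers or atomic operations to be reliable, so treat it as a teaching sketch.

#include <stdbool.h>

volatile bool flag[2] = {false, false};  /* flag[i]: process i wants to enter */
volatile int  turn    = 0;               /* whose turn it is to defer */

void enter_critical_section(int i) {
    int other = 1 - i;
    flag[i] = true;                      /* announce intent */
    turn = other;                        /* give priority to the other process */
    while (flag[other] && turn == other)
        ;                                /* busy-wait while the other is inside */
}

void exit_critical_section(int i) {
    flag[i] = false;                     /* allow the other process to proceed */
}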
3. Semaphores
Definition:
A Semaphore is a synchronization tool used to solve mutual exclusion and process synchronization
problems. It is a variable that is accessed only through two atomic operations:
wait(S) or P(S) → Decrement S; if S < 0, process waits
signal(S) or V(S) → Increment S; if S ≤ 0, wake up one waiting process
Types of Semaphores:
1. Binary Semaphore (Mutex):
o Takes only 0 or 1 values.
o Used for mutual exclusion.
2. Counting Semaphore:
o Can take any non-negative value.
o Used to control access to a resource with multiple instances.
Properties of Semaphores:
Atomic Operations: wait and signal cannot be interrupted.
Efficient: No busy waiting if implemented with a waiting queue.
Powerful: Can solve mutual exclusion, synchronization, and coordination problems.
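Applied to the earlier lost-update example, a binary semaphore restores determinacy. The sketch below uses an unnamed POSIX semaphore (assumed Linux; compile with -pthread).

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static long counter = 0;
static sem_t mutex;                      /* binary semaphore, initial value 1 */

static void *work(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        sem_wait(&mutex);                /* wait(S) / P(S): enter critical section */
        counter = counter + 1;
        sem_post(&mutex);                /* signal(S) / V(S): leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);              /* 0 = shared between threads, value 1 */
    pthread_create(&t1, NULL, work, NULL);
    pthread_create(&t2, NULL, work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* now always 2000000 */
    sem_destroy(&mutex);
    return 0;
}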
4. Process Synchronization
Definition:
Process Synchronization ensures that multiple processes execute in a coordinated manner,
especially when they share resources or need to follow a particular sequence.
Synchronization is needed to:
Prevent race conditions
Maintain data consistency
Ensure correct execution order
a. Synchronization in Critical Sections
When multiple processes access shared data, their critical sections must be executed one at a
time using synchronization tools like:
Locks
Semaphores
Monitors
b. Synchronization for Cooperation
Some processes may need to coordinate their execution order, e.g.:
👉 Producer–Consumer Problem:
Producer adds items to a buffer.
Consumer removes items from the buffer.
Synchronization is needed to ensure:
o Producer doesn’t add when buffer is full.
o Consumer doesn’t remove when buffer is empty.
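A sketch of the standard semaphore solution to this problem (assumed POSIX threads and semaphores, buffer size and item counts chosen for illustration): empty_slots counts free slots, full_slots counts filled slots, and mutex protects the buffer indices.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                               /* buffer capacity */

static int buffer[N];
static int in = 0, out = 0;

static sem_t empty_slots;                 /* counts free slots, starts at N  */
static sem_t full_slots;                  /* counts filled slots, starts at 0 */
static sem_t mutex;                       /* protects buffer indices, starts at 1 */

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty_slots);           /* block if buffer is full */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);            /* signal: one more item available */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);            /* block if buffer is empty */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);           /* signal: one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}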
c. Classical Synchronization Problems:
Producer–Consumer Problem
Readers–Writers Problem
Dining Philosophers Problem
Sleeping Barber Problem
All these problems illustrate determinacy issues, mutual exclusion, and process
synchronization challenges, and are typically solved using semaphores, monitors, or other
synchronization primitives.
1. Conditional Critical Regions (CCR)
Definition: A high-level synchronization construct that allows mutual exclusion +
condition synchronization.
Form:
region v when B do S;
o v → shared variable
o B → condition (boolean expression)
o S → critical section code
Working:
o Process can execute only if condition B is true.
o Otherwise, process waits until condition satisfied.
Use Case: Producer–Consumer problem (producer enters region only if buffer not full).
2. Monitors
Definition: A programming language construct (abstract data type) that manages shared
resources.
Features:
1. Encapsulation of shared variables + procedures.
2. Mutual exclusion: only one process executes inside the monitor at a time.
3. Condition variables used for synchronization:
wait() → suspend process until condition is signaled.
signal() → resume one waiting process.
Advantages over Semaphores:
o Easier to use (less error-prone).
o Provides automatic mutual exclusion.
Example Use: Producer–Consumer with notFull and notEmpty conditions.
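C has no built-in monitor construct, but the same structure can be sketched with a pthread mutex playing the role of the monitor's implicit lock and condition variables notFull / notEmpty. The fragment below (illustrative, not a complete program) mirrors the monitor's deposit/remove procedures.

#include <pthread.h>

#define N 5
static int buffer[N], count = 0, in = 0, out = 0;

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  notFull      = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  notEmpty     = PTHREAD_COND_INITIALIZER;

void deposit(int item) {                             /* monitor procedure */
    pthread_mutex_lock(&monitor_lock);               /* implicit monitor lock */
    while (count == N)
        pthread_cond_wait(&notFull, &monitor_lock);  /* wait(notFull) */
    buffer[in] = item;
    in = (in + 1) % N;
    count++;
    pthread_cond_signal(&notEmpty);                  /* signal(notEmpty) */
    pthread_mutex_unlock(&monitor_lock);
}

int remove_item(void) {                              /* monitor procedure */
    pthread_mutex_lock(&monitor_lock);
    while (count == 0)
        pthread_cond_wait(&notEmpty, &monitor_lock); /* wait(notEmpty) */
    int item = buffer[out];
    out = (out + 1) % N;
    count--;
    pthread_cond_signal(&notFull);                   /* signal(notFull) */
    pthread_mutex_unlock(&monitor_lock);
    return item;
}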
3. Inter-Process Communication (IPC)
Purpose: To allow processes to exchange information and synchronize execution.
Methods:
1. Shared Memory
Fast, efficient.
Needs explicit synchronization.
2. Message Passing
Functions: send(message), receive(message).
No shared memory.
Simpler but slower.
3. Pipes – unidirectional channel between processes.
4. Sockets – communication between processes across a network.
5. Signals – limited form (notifications).
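A small pipe example (POSIX, illustrative): a unidirectional channel carries a message from a child to its parent, which is message passing without any shared memory.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                               /* fd[0] = read end, fd[1] = write end */
    char msg[] = "hello via pipe";
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                       /* child: sender */
        close(fd[0]);
        write(fd[1], msg, strlen(msg) + 1);  /* send(message) */
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                            /* parent: receiver */
    read(fd[0], buf, sizeof buf);            /* receive(message) */
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}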
4. Deadlock Problem
Definition: A state where a set of processes are blocked because each is holding a
resource and waiting for another.
Coffman’s Conditions (all must hold for deadlock):
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
5. Deadlock Solutions
A. Prevention
Break at least one Coffman condition:
Mutual Exclusion → make resource sharable.
Hold and Wait → request all resources at once.
No Preemption → allow preemption.
Circular Wait → impose resource ordering.
B. Avoidance
System makes safe decisions using Banker’s Algorithm.
Processes declare maximum need in advance.
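The core of the Banker's Algorithm is the safety check sketched below: starting from the Available vector, repeatedly find a process whose remaining Need can be satisfied, pretend it runs to completion and releases its allocation, and see whether every process can finish. The Available/Allocation/Need values are an assumed example, not taken from the notes.

#include <stdbool.h>
#include <stdio.h>

#define P 3   /* processes */
#define R 3   /* resource types */

int main(void) {
    int available[R]     = {2, 1, 0};
    int allocation[P][R] = {{1, 1, 2}, {1, 0, 1}, {1, 1, 0}};
    int need[P][R]       = {{1, 0, 0}, {0, 2, 1}, {3, 0, 1}};

    int  work[R];
    bool finished[P] = {false};
    int  safe_sequence[P], count = 0;

    for (int j = 0; j < R; j++) work[j] = available[j];

    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                        /* pretend Pi runs to completion */
                for (int j = 0; j < R; j++)
                    work[j] += allocation[i][j];  /* it releases what it holds */
                finished[i] = true;
                safe_sequence[count++] = i;
                progress = true;
            }
        }
    }

    if (count == P) {
        printf("Safe state, sequence:");
        for (int i = 0; i < P; i++) printf(" P%d", safe_sequence[i]);
        printf("\n");                             /* here: P0 P1 P2 */
    } else {
        printf("Unsafe state: deadlock possible\n");
    }
    return 0;
}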
C. Detection & Recovery
Allow deadlock → detect → recover.
Detection: Wait-for graph or Resource Allocation Graph.
Recovery:
o Terminate processes.
o Preempt resources.
D. Ignorance
Some OSs (e.g., UNIX, Windows) ignore deadlocks, assuming they are rare (the "ostrich" approach).
Memory Management Concepts
1. Memory Management Concepts
Definition:
Memory management is the function of the operating system that handles allocation and
deallocation of memory to processes during execution.
Responsibilities of Memory Management:
1. Allocation/Deallocation: Assign memory blocks to processes and reclaim after
completion.
2. Relocation: Move programs within memory as required.
3. Protection: Prevent one process from interfering with another.
4. Sharing: Allow controlled sharing among processes.
5. Logical vs Physical Addressing: Translate logical addresses (used by programs)
to physical addresses (used by hardware).
6. Support for Multiprogramming: Keep multiple processes in memory
simultaneously.
2. Relocation
Problem: A program may not always be loaded at the same place in memory.
Relocation: Adjusting addresses in a program so that it can execute correctly regardless
of its memory location.
Types of Addresses:
o Logical (Virtual) Address: Generated by CPU.
o Physical Address: Actual location in RAM.
Techniques:
1. Static Relocation:
Done at load time.
Program fixed to one place.
No movement after loading.
2. Dynamic Relocation:
Done at execution time using base & limit registers or MMU (Memory
Management Unit).
Program can be moved in memory during execution.
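The check-and-add step performed by the MMU can be sketched in software: every logical address is compared against the limit register and, if valid, added to the base register. The register values below are assumed for illustration.

#include <stdio.h>
#include <stdlib.h>

#define BASE  40000u   /* where the process was loaded (assumed) */
#define LIMIT 10000u   /* size of the process's logical address space (assumed) */

unsigned translate(unsigned logical) {
    if (logical >= LIMIT) {                   /* protection check */
        fprintf(stderr, "trap: address %u out of bounds\n", logical);
        exit(1);
    }
    return BASE + logical;                    /* dynamic relocation */
}

int main(void) {
    printf("logical 0    -> physical %u\n", translate(0));
    printf("logical 9999 -> physical %u\n", translate(9999));
    translate(12000);                         /* out of range: traps and exits */
    return 0;
}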
3. Linking
Definition: Linking is the process of combining different program modules and
libraries into a single executable.
Types of Linking:
1. Static Linking:
All modules and library routines are combined into one executable before
execution.
Larger executable size.
Faster execution (everything included).
2. Dynamic Linking:
Linking is done during execution (at runtime).
Uses shared libraries (.dll, .so).
Saves memory and allows updates without recompiling.
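Run-time (dynamic) linking can be seen directly with the dlopen/dlsym interface (assumed Linux; link with -ldl). Here the shared math library is loaded and its cos symbol is resolved while the program is already running.

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* load shared library at run time */
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(handle); return 1; }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);                                  /* unload when done */
    return 0;
}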
4. Multiprogramming with Fixed Partitioning
Definition: Memory is divided into fixed-size partitions. Each partition can hold exactly
one process.
Types:
1. Equal-size partitions
All partitions are of the same size.
Simple to manage.
May cause internal fragmentation (unused space within a partition).
2. Unequal-size partitions
Different sized partitions created.
Process assigned to a best-fit partition.
Reduces internal fragmentation, but whole partitions may still remain unused (a form of external fragmentation).
Limitations:
o Number of processes ≤ number of partitions.
o Poor memory utilization.
1. Partitions
Definition: Dividing memory into sections to load multiple processes.
Types:
1. Fixed Partitioning – memory divided into fixed-size blocks (internal/external
fragmentation).
2. Variable Partitioning – partitions created dynamically according to process size
(reduces internal fragmentation).
2. Swapping
Definition: Process of moving processes between main memory and backing store
(disk) to maximize CPU utilization.
Steps:
o When memory is full, inactive process is swapped out to disk.
o When CPU needs it again, process is swapped back into memory.
Overhead: Time taken for swap-in and swap-out.
3. Variable Partitions
Memory divided dynamically → partition sizes depend on process requirements.
Advantages: Better memory utilization than fixed partitioning.
Problem: External fragmentation (free memory exists but not contiguous).
Solution: Compaction (rearranging processes to free large blocks).
4. Overlays
Definition: Technique that allows a program larger than main memory to run by keeping
only required instructions/data in memory at one time.
Used in: Early systems with limited RAM.
Modern replacement: Virtual memory.
5. Virtual Memory
Definition: A technique that gives an illusion of large memory by combining RAM +
disk.
Working: Only part of program loaded into memory; rest kept on disk.
Advantages:
o Allows execution of large programs.
o Multiprogramming increases.
Techniques Used: Paging, Segmentation (or both).
6. Segmentation
Definition: Memory divided into logical segments (code, data, stack).
Each segment has:
o Base address (starting address).
o Limit (length).
Advantages: Matches logical program structure.
Problem: External fragmentation.
7. Paging
Definition: Logical memory is divided into fixed-size pages and physical memory into frames of the same size.
Address split into:
o Page number (index into page table).
o Page offset.
Advantages: Eliminates external fragmentation.
Disadvantage: Page table overhead; internal fragmentation in the last page of each process.
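The address split is plain integer arithmetic, as the sketch below shows. The page-table contents and the 4 KB page size are assumed for illustration.

#include <stdio.h>

#define PAGE_SIZE 4096u

int main(void) {
    unsigned page_table[] = {5, 9, 7, 3};    /* page i -> frame page_table[i] (assumed) */
    unsigned logical = 10000;                /* example logical address */

    unsigned page     = logical / PAGE_SIZE;           /* 10000 / 4096 = 2 */
    unsigned offset   = logical % PAGE_SIZE;           /* 10000 % 4096 = 1808 */
    unsigned frame    = page_table[page];              /* page 2 maps to frame 7 */
    unsigned physical = frame * PAGE_SIZE + offset;    /* 7 * 4096 + 1808 = 30480 */

    printf("logical %u -> page %u, offset %u -> frame %u -> physical %u\n",
           logical, page, offset, frame, physical);
    return 0;
}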
8. Storage Allocation Strategies
Used to allocate free memory blocks:
1. First Fit – Allocate first available block big enough. (Fast, may cause fragmentation).
2. Best Fit – Allocate smallest available block that fits. (Minimizes wasted space, but
slower).
3. Worst Fit – Allocate largest block. (Leaves big leftover blocks, not efficient).
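The three strategies differ only in how they scan the list of free blocks, as the sketch below shows (block sizes and the request size are an assumed example; each function returns the index of the chosen block, or -1 if none fits).

#include <stdio.h>

#define NBLOCKS 5

int first_fit(const int free_block[], int request) {
    for (int i = 0; i < NBLOCKS; i++)
        if (free_block[i] >= request) return i;            /* first block large enough */
    return -1;
}

int best_fit(const int free_block[], int request) {
    int best = -1;
    for (int i = 0; i < NBLOCKS; i++)
        if (free_block[i] >= request &&
            (best == -1 || free_block[i] < free_block[best]))
            best = i;                                       /* smallest block that fits */
    return best;
}

int worst_fit(const int free_block[], int request) {
    int worst = -1;
    for (int i = 0; i < NBLOCKS; i++)
        if (free_block[i] >= request &&
            (worst == -1 || free_block[i] > free_block[worst]))
            worst = i;                                      /* largest block that fits */
    return worst;
}

int main(void) {
    int free_block[NBLOCKS] = {100, 500, 200, 300, 600};    /* sizes in KB (assumed) */
    int request = 212;
    printf("first fit -> block %d\n", first_fit(free_block, request));  /* 1 (500 KB) */
    printf("best fit  -> block %d\n", best_fit(free_block, request));   /* 3 (300 KB) */
    printf("worst fit -> block %d\n", worst_fit(free_block, request));  /* 4 (600 KB) */
    return 0;
}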
9. Load Control
Definition: Mechanism to decide how many processes should be kept in memory.
If too many processes → overhead increases.
If too few → CPU underutilization.
OS maintains balance.
10. Thrashing
Definition: Situation where CPU spends most of its time swapping pages in and out of
memory instead of executing instructions.
Cause: Overcommitment of memory (too many processes with large memory needs).
Effects:
o CPU utilization drops.
o System becomes unresponsive.
Solution:
o Reduce degree of multiprogramming.
o Use working set model.