
Operating Systems Short Notes

An Operating System (OS) is a program that acts as an intermediary between users/applications and computer hardware, managing resources like CPU and memory. It includes key functions such as process management, memory management, and I/O device management, while different types of OS (batch, multitasking, real-time) handle tasks differently. Important concepts include process/thread management, CPU scheduling, inter-process communication, deadlock, and memory management techniques like paging and segmentation.

Uploaded by Dhawal Waghulde
© All Rights Reserved

1. What is an Operating System (OS)

Definition (simple):

An OS is a program (or set of programs) that acts as an *intermediary* between the user (and
applications) and the computer hardware. It manages hardware resources (CPU, memory, disk,
I/O) and provides services to applications.

Key functions (you must remember):

Process management (create, schedule, terminate)

Memory management (allocate, free, virtual memory)

Storage/file system management

I/O device management

User interface (UI/CLI) and application interface

Security and protection

Exam-ready bullets:

OS sits between hardware and user/application

OS manages resources such as CPU, memory, storage, I/O.

Without OS, each application would need to manage hardware directly (which is inefficient and
unsafe).

OS types (batch, multitasking, real-time) differ in how they manage tasks.

2. Process / Thread Management

What is a Process:

A process is an instance of a program in execution. It has attributes: process ID (PID), program counter, registers, memory space, and state.

What is a Thread:

A thread is a smaller unit of execution within a process. Multiple threads can share the same
memory space of the process.

Key points:

Process states: e.g., New → Ready → Running → Waiting (Blocked) → Terminated.

Threads share memory and have lower overhead; processes are heavier.

Context-switching overhead is typically higher for processes than for threads.

Multithreading helps responsiveness and better CPU utilisation in many cases.

Memory aid:

Process = “house”, thread = “room in the house”. Many rooms (threads) share the house
(process).

State diagram: Think of “Ready queue → CPU → I/O wait → back to Ready” like people waiting
in line, doing tasks, waiting for coffee (I/O).
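The state diagram above can be sketched as a tiny transition table (a toy model, not any real OS's scheduler; the state names follow the notes):

```python
TRANSITIONS = {
    "New": ["Ready"],
    "Ready": ["Running"],
    "Running": ["Waiting", "Ready", "Terminated"],  # I/O wait, preempt, or exit
    "Waiting": ["Ready"],        # I/O completes: back to the ready queue
    "Terminated": [],
}

def step(state, target):
    # Reject transitions the state diagram does not allow.
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = "New"
for nxt in ["Ready", "Running", "Waiting", "Ready", "Running", "Terminated"]:
    s = step(s, nxt)
print(s)  # Terminated
```

Note that "New" cannot jump straight to "Running": a process must pass through the ready queue first.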

Exam-ready bullets:

A process has its own memory space; threads share memory space of process.

The PCB holds information like process state, program counter, registers, memory-maps.

Thread context-switch is faster than process context-switch.

Process states: New, Ready, Running, Waiting/Blocked, Terminated.

A common exam question: “In which state is a process that is waiting for I/O?” →
Waiting/Blocked.
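A quick illustration of the first bullet, using Python's threading module: every thread in one process sees the same objects in memory (the variable name here is arbitrary):

```python
import threading

shared = []  # one list in the process's memory, visible to every thread

def worker(n):
    shared.append(n)  # all threads write to the same object

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # [0, 1, 2, 3]: every thread wrote into one shared list
```

Separate processes would each get their own copy of the list; sharing between them needs explicit IPC (see section 4).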

3. CPU Scheduling

What it is:

CPU scheduling is deciding which process/thread gets the CPU next (in systems where
multitasking exists).
Why important:

Good scheduling maximises CPU utilisation and throughput while minimising turnaround and waiting time; it keeps interactive systems responsive and fair.
Key algorithms to remember:

FCFS (First-Come First-Served)

SJF (Shortest Job First) / SRTF (Shortest Remaining Time First)

Priority Scheduling

Round Robin (RR)

Multilevel Queue / Multilevel Feedback Queue (less frequent but good to know)

Key metrics:

Turnaround Time = finish time – arrival time

Waiting Time = turnaround time – burst time

Response Time = time from submission to first response (for interactive systems)

Throughput = number of processes completed per unit time

CPU Utilisation = % of time CPU is busy
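The first two metrics can be checked on a small worked example. Assuming a hypothetical FCFS workload of (name, arrival, burst) triples:

```python
# Hypothetical FCFS workload: (name, arrival time, burst time), sorted by arrival.
procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)]

clock = 0
results = {}
for name, arrival, burst in procs:
    start = max(clock, arrival)          # CPU may still be busy when the job arrives
    finish = start + burst
    turnaround = finish - arrival        # Turnaround = finish - arrival
    waiting = turnaround - burst         # Waiting = turnaround - burst
    results[name] = (turnaround, waiting)
    clock = finish

print(results)  # {'P1': (5, 0), 'P2': (7, 4), 'P3': (7, 6)}
```

P3 is only 1 unit of work but waits 6 units behind the longer jobs, a small taste of the convoy effect mentioned below.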

Memory aid:

Imagine people waiting for food in different queues (jobs).

FCFS = first in first out; SJF = shortest wait gets served first; RR = each gets a fixed time slice
and cycle back if not done.

Turnaround vs Waiting: turnaround is from when I gave my order to when the food was served; waiting is how long I stood in the queue.

Exam-ready bullets:

In FCFS, average waiting time may be large if a long job arrives before short jobs (the convoy effect).

SJF gives the minimum average waiting time (among non-preemptive algorithms) but requires knowing upcoming burst times in advance.

Round Robin is good for time-sharing systems; a time quantum too small → more overhead, too large → behaves like FCFS.

Priority scheduling may lead to starvation (low-priority jobs never run) unless ageing is used.

CPU scheduling aims: maximize CPU utilisation, maximise throughput, minimise turnaround &
waiting time, fairness, predictability.
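The Round Robin quantum trade-off can be made concrete with a toy simulation (a simplified model that only counts how often the CPU moves to another job; it ignores arrival times):

```python
from collections import deque

def round_robin_switches(bursts, quantum):
    # Simplified model: count a context switch whenever the CPU moves on
    # to another waiting job after a quantum expires or a job finishes.
    queue = deque(bursts)
    switches = 0
    while queue:
        burst = queue.popleft()
        if burst > quantum:
            queue.append(burst - quantum)  # unfinished: back of the queue
        if queue:                          # someone else runs next
            switches += 1
    return switches

# Small quantum -> many switches; huge quantum -> FCFS-like behaviour.
print(round_robin_switches([10, 10], quantum=1))
print(round_robin_switches([10, 10], quantum=100))  # 1: each job runs to completion
```

With a quantum larger than every burst, each job runs straight through, exactly the "behaves like FCFS" case above.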

4. Inter-Process Communication (IPC) / Concurrency / Synchronisation

What it is:

IPC: mechanisms for processes (or threads) to communicate or synchronise.

Concurrency: multiple processes/threads executing at overlapping time.

Synchronisation: ensuring correct order and safe access to shared resources.

Key concepts:

Critical section: part of code where shared resource is accessed.

Mutual exclusion: only one process in critical section at a time.

Race condition: when outcome depends on the sequence/timing of processes.

Synchronization tools: locks, semaphores, monitors, condition variables.

Deadlock (which we’ll discuss separately) is a concurrency hazard.

IPC mechanisms: pipes, message queues, shared memory, signals (in OS such as Unix).

Memory aid:

Think of a bathroom key analogy: only one person can hold the key (mutex) and enter the
bathroom (critical section). If two try at same time, race condition.

Semaphore = green-light/red-light controlling entry.
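The green-light/red-light idea in code: a counting semaphore that admits at most two threads at a time (a toy sketch; the short sleep only exists to make overlap likely):

```python
import threading
import time

sem = threading.Semaphore(2)       # green light for at most 2 holders
state_lock = threading.Lock()
inside = 0
peak = 0

def visitor():
    global inside, peak
    with sem:                      # red light while 2 threads are already in
        with state_lock:
            inside += 1
            peak = max(peak, inside)
        time.sleep(0.01)           # hold the resource briefly so threads overlap
        with state_lock:
            inside -= 1

threads = [threading.Thread(target=visitor) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never more than 2: the semaphore capped concurrent entry
```

A binary semaphore (capacity 1) behaves like the bathroom-key mutex in the analogy above.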

Exam-ready bullets:

Mutual exclusion ensures only one process enters its critical section at a time.

Race condition arises when processes access/change shared data and the final result depends on who goes first.

Solutions: Disable interrupts (simple, for OS only), lock or mutex, semaphore (counting or
binary), monitor (higher-level).

IPC mechanisms: shared memory (fastest but complex), message passing (structured but
slower), pipes, sockets (for network).

Concurrency bugs: deadlock, starvation, livelock.
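A minimal mutual-exclusion sketch with a Python lock guarding a shared counter; without the lock, the read-modify-write on `counter` could interleave between threads and lose updates:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # mutual exclusion around the critical section
            counter += 1    # read-modify-write on shared data

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000; unlocked, interleaving could make it smaller
```

The `with lock:` block is the critical section; only one thread at a time can be inside it.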

5. Deadlock

What it is:

Deadlock is a situation where a set of processes are each waiting for an event that can only be
caused by another process in the set. So none can proceed.

Very high chance of appearing.

Key conditions (Coffman conditions):

1. Mutual exclusion (resources cannot be shared)

2. Hold and wait (a process holds a resource and waits for another)

3. No preemption (cannot take resource away)

4. Circular wait (set of processes each waiting for resource held by next in circle)

Ways to handle deadlock:

Prevention: ensure one of the four conditions cannot hold (e.g., no hold‐and‐wait, or
preemption allowed)

Avoidance: use algorithms like Banker's algorithm (in some OS teaching)

Detection & Recovery: let deadlock occur, detect it, then recover (terminate a process or preempt resources).

Memory aid:

Picture four friends with only one chair each: each friend holds a chair and wants another
friend’s chair → circular wait → deadlock.

Conditions = M-H-N-C (Mutual, Hold, No preemption, Circular) → remember “MHNC” or “My Holy No-Chair”.
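Deadlock detection amounts to a cycle search, and can be sketched in a few lines over a wait-for graph (a toy model; an edge from X to Y means "X waits for a resource held by Y"):

```python
def has_cycle(wait_for):
    # DFS over the wait-for graph; reaching a node already on the current
    # path means circular wait, i.e. deadlock.
    visited, on_path = set(), set()

    def dfs(node):
        visited.add(node)
        on_path.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_path or (nxt not in visited and dfs(nxt)):
                return True
        on_path.discard(node)
        return False

    return any(dfs(n) for n in wait_for if n not in visited)

print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True: deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```

This is the single-instance-per-resource case from the notes: a cycle in the graph is exactly a deadlock.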

Exam-ready bullets:

All four Coffman conditions must be present for a deadlock to occur.

Resource-allocation graph: if there is a cycle, and resources are single instance per type →
deadlock.

Prevention method example: impose ordering of resources.

Avoidance example: Banker's algorithm (check “safe state”).

Detection example: OS periodically checks for cycle in resource graph and recovers by
aborting/rolling back.

Recovery example: preemption of resources, process termination.
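Prevention by resource ordering in code: the two threads below request the same two locks in opposite argument order (a classic deadlock recipe), but acquiring them in one fixed global order breaks circular wait. A sketch with hypothetical lock names:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
LOCK_ORDER = [lock_a, lock_b]   # one global total order over all resources

def use_both(first, second):
    # Acquire in the global order regardless of the order requested,
    # so circular wait (Coffman condition 4) can never form.
    lo, hi = sorted([first, second], key=LOCK_ORDER.index)
    with lo, hi:
        pass  # work with both resources held

t1 = threading.Thread(target=use_both, args=(lock_a, lock_b))
t2 = threading.Thread(target=use_both, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print("no deadlock")
```

This is the "impose ordering of resources" prevention method from the bullets above: one of the four conditions is made impossible, so deadlock cannot occur.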

6. Memory Management

What it is:

Memory management is how the OS handles primary memory (RAM): allocating memory to processes, managing free space, handling address translation, fragmentation, virtual memory, paging & segmentation.

Key sub-topics you must master:

Contiguous allocation: simplest memory allocation (fixed partitioning, variable partitioning)

Fragmentation: internal (wasted space within allocated region) vs external (free holes too small)

Paging: divides memory into fixed size pages; avoids external fragmentation; needs page table.

Segmentation: logical segments (code, data, stack) of varied length.


Virtual memory: process sees a large address space; OS brings pages in/out from disk; uses
page faults.

Page replacement algorithms: FIFO, LRU (Least Recently Used), Optimal (theoretical).

Concepts: TLB (Translation Lookaside Buffer), effective access time (EAT), demand paging.

Memory aid:

Think of memory as hotel: rooms (pages) to allocate; fragmentation = lots of small unusable
rooms.

Paging = hotel has fixed‐sized rooms; segmentation = rooms of varied size.

Virtual memory = guest arrives, but room not yet ready (page fault) → must fetch room (page)
from storage.

Replacement algorithms = who to evict when hotel is full (FIFO = oldest guest, LRU = guest who
hasn’t visited recently).
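The eviction idea can be simulated to see Belady's anomaly directly (a toy simulator; the reference string is the classic anomaly example):

```python
def count_faults(refs, frames, policy):
    # Toy simulator: the front of `memory` is the next eviction victim.
    memory = []
    faults = 0
    for page in refs:
        if page in memory:
            if policy == "LRU":      # a hit refreshes recency under LRU
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:
            memory.pop(0)            # evict oldest arrival (FIFO) / least recent (LRU)
        memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # classic anomaly reference string
print(count_faults(refs, 3, "FIFO"), count_faults(refs, 4, "FIFO"))  # 9 10
print(count_faults(refs, 3, "LRU"), count_faults(refs, 4, "LRU"))    # 10 8
```

Note the FIFO line: going from 3 frames to 4 *increases* the faults from 9 to 10 (Belady's anomaly), while LRU improves as expected.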

Exam-ready bullets:

External fragmentation: free memory exists but not contiguous for requested size.

Internal fragmentation: allocated memory larger than requested so some wasted inside.

Paging avoids external fragmentation, but internal fragmentation may still occur in the last page of a process.

Page table holds mapping from logical (virtual) page number to physical frame number.

In virtual memory, a page fault occurs when referenced page not in physical memory → OS
must fetch it and update page table.

Effective access time (EAT) = (1 − p) × memory access time + p × (page fault service time)
where p = page‐fault rate.

FIFO page replacement can suffer Belady’s anomaly (increasing the number of frames increases page faults) in some cases; LRU is a stack algorithm and does not exhibit it.

Segmentation provides the logical view (code, stack, data); paging simplifies physical memory management.

TLB improves speed of address translation (cache for page table entries).
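The EAT formula from the bullets, evaluated with hypothetical numbers (100 ns memory access, 8 ms fault service, one fault per 10,000 references):

```python
# Hypothetical numbers: 100 ns memory access, 8 ms page-fault service,
# page-fault rate p = 0.0001 (one fault per 10,000 references).
mem_ns = 100
fault_service_ns = 8_000_000
p = 0.0001

eat_ns = (1 - p) * mem_ns + p * fault_service_ns
print(round(eat_ns, 2))  # 899.99 ns: even a rare fault makes access ~9x slower
```

This is why keeping the page-fault rate tiny matters so much: the fault-service term dominates even at p = 0.0001.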

7. File System and I/O (Brief)

Key concepts:

File: collection of related data stored as single unit.

Directory structure: single level, two-level, tree structure.

File allocation methods: contiguous, linked, indexed.

I/O devices: block vs character devices, buffering, caching.

Device drivers as part of OS.

Memory aid:

Think of file allocation like parking lots: contiguous = one large lot, linked = chain of spots,
indexed = list of spots via index sheet.

Buffering = waiting area (queue) for I/O.

Exam-ready bullets:

In contiguous allocation, file occupies a set of contiguous blocks → fast sequential access but
suffers external fragmentation.

In linked allocation, each block holds pointer to next → no external fragmentation but random
access is slow.

Indexed allocation uses index block which contains pointers to the actual file blocks → supports
direct access.

I/O buffering improves performance by decoupling processor speed vs device speed.

Device drivers abstract hardware details from OS; OS communicates via driver interface.
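Linked allocation in miniature: each block stores data plus a pointer to the next (a toy disk modelled as a dict; the block numbers are made up):

```python
# Toy disk: block number -> (data, next block); None marks end of file.
disk = {
    7:  ("he", 12),   # the file starts at block 7 (hypothetical numbers)
    12: ("ll", 3),
    3:  ("o!", None),
}

def read_file(start):
    data, block = "", start
    while block is not None:          # follow the pointer chain
        chunk, block = disk[block]
        data += chunk
    return data

print(read_file(7))  # hello!
```

Reaching block k means following k pointers, which is exactly why linked allocation makes random access slow while sequential access stays easy.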

8. Types of Operating Systems (Quick Overview)

Types & definitions:

Batch OS: jobs collected, processed in batch without user interaction (older).

Time-sharing OS (multiprogramming + fast switching): many users interact simultaneously (e.g., Unix).

Multiprogramming OS: keeps CPU busy by having multiple programs loaded and ready to
execute.

Real-time OS: strict timing constraints (hard real-time) – e.g., embedded systems, industrial
controllers.

Distributed OS: OS across multiple machines, resources shared.

Embedded OS (often a subset/real-time) for devices.

Memory aid:

Batch = “batches at bakery, line of jobs, no interaction”.

Time-sharing = “many people share the CPU like many people share a waiter at restaurant”.

Real-time = “react now or system fails” (e.g., anti-lock braking system).

Exam-ready bullets:

Time-sharing OS gives interactive response to multiple users; uses RR scheduling.

Real-time OS must guarantee timely responses; missing deadline may be catastrophic.

Batch OS has little/no user interaction; jobs queued and processed.

Distributed OS provides transparency of distributed resources to users.

Context switch occurs when OS changes CPU from one process to another: involves
saving/restoring registers, PC, etc.

Ready Cheat-Sheet for OS (for last-minute revision)

Here is a compact cheat-sheet you can print and revise just before the exam. Key bullet points to remember:

OS = intermediary between hardware & users/applications.


Functions of OS: process management, memory management, storage/file-system and I/O management, user interface, protection & security.

Process vs Thread: process = independent program execution; thread = lightweight unit within a
process. Threads share process memory.

Process states: New → Ready → Running → Waiting/Blocked → Terminated.

Scheduling metrics: CPU utilisation, throughput, turnaround time, waiting time, response time.

Scheduling algorithms: FCFS, SJF/SRTF, Priority, Round Robin, Multilevel Queue.

Paging vs Segmentation: paging = fixed-size pages/frames; segmentation = logical variable-length segments.

Fragmentation: external (holes between blocks) vs internal (wasted space inside allocated
block).

Virtual memory: use disk to simulate more RAM; page fault occurs when page not in memory.

Page replacement algorithms: FIFO, LRU, Optimal. Belady’s anomaly possible with FIFO.

IPC & Synchronisation: critical section, mutual exclusion, race condition, locks, semaphores,
monitors.

Deadlock conditions: mutual exclusion, hold and wait, no preemption, circular wait.

File allocation: contiguous, linked, indexed.

Types of OS: Batch, Time-sharing, Real-time, Distributed, Embedded.

Context switch = overhead: saving/restoring registers, PC, etc. Threads switch faster than
processes (less overhead).

TLB (Translation Lookaside Buffer) = cache for page table entries → speeds address
translation.

Effective Access Time (EAT) = (1−p)×memory access + p×page fault time (where p = page-fault
rate).
Round Robin: if time quantum is too small → too many context switches; too large → behaves
like FCFS.
