Operating System Interview

This document provides an overview of operating systems (OS), detailing their types, functions, structures, and components. It explains the necessity of an OS for resource management, user interaction, and system protection, and distinguishes between various OS types such as single-process, multitasking, and real-time systems. Additionally, it covers concepts like processes, threads, memory management, and the differences between 32-bit and 64-bit operating systems.

UNIT 1 – Introduction to Operating Systems

1. Software Types
 Application Software → Does a specific task for the user (e.g., MS Word,
Calculator).

 System Software → Controls the computer system and provides a platform to run
application software (e.g., Operating System).

2. What is an Operating System (OS)?


 An OS is software that manages all resources of a computer (hardware + software).

 It provides an environment where users can run programs easily.


 It hides the complexity of hardware and works as a resource manager.

 Acts as a middleman between hardware and software.

3. Why do we need an OS?

1. Without OS:
o Applications become bulky and complex (they must directly handle
hardware).

o One application may exploit resources without control.


o No memory protection (one program can overwrite another).

2. With OS:

o Easy interaction with hardware.

o Safe, efficient, and organized system.

4. What is an OS made of?


 A collection of system software.

5. Functions of an Operating System


 Access to hardware (CPU, memory, devices).

 User interface → Lets user interact with computer (e.g., Windows desktop, Linux
terminal).

 Hardware management → Controls CPU, memory, disk, input/output devices.


 Software execution → Loads and runs programs.

 File management → Organizes, reads, writes, secures data.


 Security & access control → Manages permissions and protects system.

 Resource management (Arbitration) → Manages memory, devices, files, processes, and security.

 Abstraction → Hides complexity of hardware.

 Facilitates application execution with isolation and protection.

6. OS Structure

User

Application Programs

Operating System


Computer Hardware

✅ In short:
The Operating System is the bridge between user and hardware.
It manages resources, ensures protection, hides complexity, and provides a smooth
environment to run applications.

Lec-2: Types of Operating Systems


OS Goals

 ✅ Maximum CPU utilization (CPU should not stay idle)

 ✅ Less process starvation (no process should wait too long)

 ✅ Higher priority job execution (important tasks should run first)


Types of Operating Systems

1. Single Process OS

o Runs only one process at a time.

o Example: MS DOS (1981)


2. Batch Processing OS

o Jobs are collected in batches and executed one after another (no interaction
with user during execution).
o Example: ATLAS (Manchester Univ., 1950s–1960s)

3. Multiprogramming OS
o Multiple programs are kept in memory; CPU switches between them when
one is waiting for I/O.
o Increases CPU utilization.
o Example: THE (Dijkstra, 1960s)

4. Multitasking OS
o Allows multiple tasks to run simultaneously by rapidly switching (time-sharing).

o Example: CTSS (MIT, 1960s)

5. Multiprocessing OS

o Uses multiple CPUs to execute processes in parallel.


o Increases performance and reliability.

o Example: Windows NT

6. Distributed OS
o A group of computers works together and appears as a single system to the
user.

o Example: LOCUS

7. Real-Time OS (RTOS)

o Provides immediate response to inputs.

o Used in critical applications like robotics, defense, and medical systems.

o Example: ATCS

Lec-3: Multi-Tasking vs Multi-Threading

Program
 A program is an executable file with a set of instructions.

 Stored in disk, ready to be executed.


Process

 A program under execution.


 Stored in RAM (primary memory).

Thread

 A single sequence of execution inside a process.


 Independent path of execution (lightweight).

 Used for parallelism → divides a process into smaller tasks.


 Example:

o Multiple tabs in a browser.

o In a text editor → typing, spell-checking, auto-save happening together.

Multi-Tasking
 Running more than one task at the same time.

 Achieved by context switching between processes.


 CPU = 1

 Memory Protection → OS gives separate memory to each process.


 Example: Playing music + Writing in Word + Downloading files.
Multi-Threading

 A process is divided into multiple threads (sub-tasks).


 Each thread has its own execution path, but shares resources of the same process.

 CPU ≥ 1 (works better with multiple CPUs).


 No separate memory protection → Threads of same process share resources.

 Example: In a browser → one process, but threads for loading pages, downloading,
playing videos.
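To make this concrete, here is a minimal multi-threading sketch in C using POSIX threads; the thread names and the work they do are illustrative, not from the notes:

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread is an independent execution path inside one process,
   sharing that process's memory (no separate memory protection). */
void *task(void *arg) {
    printf("thread '%s' running\n", (const char *)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    /* e.g., in a browser: one thread loads the page, another downloads */
    pthread_create(&t1, NULL, task, "load-page");
    pthread_create(&t2, NULL, task, "download");
    pthread_join(t1, NULL);   /* wait for both before the process exits */
    pthread_join(t2, NULL);
    return 0;
}
```

(Compile with gcc file.c -lpthread. Both threads share the process's globals and heap, which is exactly why the synchronization covered in later lectures becomes necessary.)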

Thread Scheduling
 Threads are scheduled based on priority.

 OS gives time slices to each thread for execution.

Context Switching
 Thread Context Switching

o Switch between threads of the same process.


o Faster (no memory switch).

o CPU cache state is preserved.

 Process Context Switching


o Switch between different processes.

o Slower (involves memory switch).


o CPU cache state is flushed.

✅ Key Differences

Feature           | Multi-Tasking                    | Multi-Threading
Meaning           | Multiple processes running       | One process divided into multiple threads
CPU               | 1                                | ≥ 1 (better with more CPUs)
Memory            | Each process has separate memory | Threads share same memory of a process
Context Switching | Between processes                | Between threads of same process
Speed             | Slower (more overhead)           | Faster (less overhead)

In short
 Multi-tasking → Multiple processes running at the same time.

 Multi-threading → Multiple threads inside a single process running.

 Thread switching = fast, Process switching = slow.

Lec-4: Components of OS
1. Kernel

 Core part of the OS.


 Directly interacts with hardware.

 Performs critical tasks.


 First part of OS to load at startup.

 Called the heart of the OS.

2. User Space

 Where application software runs.


 Cannot directly access hardware.

 Interacts with kernel using:


o GUI (Graphical User Interface)

o CLI (Command Line Interface)

Shell

 Works as command interpreter.


 Takes commands from the user and sends them to kernel for execution.

Functions of Kernel

1. Process Management
o Scheduling processes on CPU.
o Creating & deleting processes.

o Suspending & resuming processes.


o Process synchronization & communication.

2. Memory Management
o Allocating & deallocating memory.

o Keeping track of which part of memory is used by which process.

3. File Management

o Creating/deleting files & directories.

o Mapping files into secondary storage.


o Backup & stability.

4. I/O Management
o Controls I/O operations & devices.

o Spooling → Managing jobs of different speeds (e.g., print jobs, mail jobs).
o Buffering → Temporary data storage (e.g., YouTube video buffering).

o Caching → Fast memory storage (e.g., web cache).

Types of Kernels

1. Monolithic Kernel
 All functions in a single large kernel.

 Bulky, requires more memory.


 If one module crashes → whole kernel crashes.

 Fast performance (less communication overhead).

 Examples: Linux, Unix, MS-DOS.

2. Micro Kernel
 Only essential functions in kernel:

o Memory management
o Process management

 Other functions (file mgmt, I/O mgmt) are in user space.


 Smaller, reliable, stable.
 But performance is slower (due to switching between user & kernel mode).

 Examples: L4 Linux, Symbian OS, MINIX.

3. Hybrid Kernel
 Combination of Monolithic + Micro kernel.

 Balances speed, modularity, and reliability.

 IPC (Inter-Process Communication) happens with fewer overheads.

 Examples: MacOS, Windows NT/7/10.

4. Nano/Exo Kernel

 Extremely small kernel.


 Focus on minimal functionality.

 Delegates more responsibility to user-level services.


Communication between User Mode & Kernel Mode
 Done via Inter-Process Communication (IPC).

 Two processes may need to communicate → even though they have separate
memory spaces.
 IPC methods:

o Shared Memory
o Message Passing

✅ In short:

 Kernel = Heart of OS (manages CPU, memory, I/O, files).

 User Space = Where apps run (interacts via GUI/CLI).


 Shell = Command interpreter.

 Kernels:

o Monolithic → Big, fast, but risky.

o Micro → Small, stable, but slower.


o Hybrid → Mix of both.

o Nano/Exo → Minimal design.

 IPC = Communication between processes (via shared memory or messages).

Lec-7: 32-bit vs 64-bit OS


1. 32-bit OS

 Has 32-bit registers.


 Can access 2³² memory addresses = 4 GB max physical memory.
 CPU processes 32 bits of data/information at once.

2. 64-bit OS

 Has 64-bit registers.

 Can access 2⁶⁴ memory addresses = ~17 billion GB memory (theoretical).

 CPU processes 64 bits of data/information at once.
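A quick way to observe the word-size difference from inside a program (a sketch; the output depends on whether the program is compiled as 32-bit or 64-bit):

```c
#include <stdio.h>

int main(void) {
    /* A 32-bit build gives 4-byte pointers; a 64-bit build gives 8-byte
       pointers. Pointer width bounds how much memory one process can address. */
    printf("pointer: %zu bytes\n", sizeof(void *));
    printf("long:    %zu bytes\n", sizeof(long));  /* platform-dependent */
    return 0;
}
```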


Advantages of 64-bit OS over 32-bit OS
1. Addressable Memory

o 32-bit → Limited to 4 GB memory.

o 64-bit → Can access way more memory (virtually unlimited).


2. Resource Usage

o 32-bit OS cannot use more than 4 GB RAM, even if installed.


o 64-bit OS can use excess RAM efficiently.

3. Performance

o All calculations happen in registers.


o Larger registers → handle bigger calculations faster.

o 32-bit CPU: 4 bytes per instruction cycle.


o 64-bit CPU: 8 bytes per instruction cycle.

o (In billions of cycles/sec → huge speed difference).


4. Compatibility

o 64-bit CPU can run both 32-bit and 64-bit OS.


o 32-bit CPU can only run 32-bit OS.
5. Better Graphics Performance

o 64-bit supports 8-byte graphics calculations, so graphics-intensive apps run smoother and faster.

✅ In Short (Easy to Remember):

 32-bit → max 4 GB RAM, slower, less powerful.

 64-bit → supports more RAM, faster, better graphics, backward compatible.

Lec-8: Storage Devices Basics


Types of Memory in a Computer System

1. Register

o Smallest unit of storage.

o Located inside the CPU.


o Stores instructions, addresses, or data temporarily.

o Fastest memory but very limited in size.


2. Cache Memory

o High-speed memory between CPU and RAM.

o Stores frequently used instructions/data for quick access.


o Reduces CPU waiting time.

3. Main Memory (Primary Memory)


o Example: RAM.

o Stores data and programs currently being used.

o Volatile → data is lost when power is off.


4. Secondary Memory

o Example: Hard disks, SSDs, CDs, Magnetic tapes.


o Used for long-term data storage.

o Non-volatile → data is kept even after power off.


o Slower but cheaper than primary memory.

Comparison: Primary vs Secondary Memory


1. Cost

o Primary memory = costly.


o Registers = most expensive (due to design and speed).

o Secondary memory = cheaper.


2. Access Speed

o Primary memory = faster.

o Order of speed: Register > Cache > Main memory > Secondary memory.

3. Storage Size

o Secondary memory = large capacity.


o Primary memory = limited capacity.

4. Volatility
o Primary memory = volatile (data lost when power off).

o Secondary memory = non-volatile (data stays).


✅ In Short (Easy to Remember):
 Registers → Smallest, fastest, costly, inside CPU.

 Cache → Fast helper memory between CPU & RAM.

 Main Memory (RAM) → Temporary, fast, volatile.


 Secondary Memory → Large, permanent, cheap, slower.

Lecture 9. Process Basics (Easy Notes)


1. What is a Program?

 Compiled code, ready to execute.

2. What is a Process?
 A program under execution.

3. How OS Creates a Process


Steps:

1. Load program + static data into memory.


2. Allocate runtime stack (for function calls, local variables).

3. Allocate heap (for dynamic memory).


4. Perform I/O tasks.

5. OS gives control to program’s main() function.
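On Unix-like systems, a user program asks the OS to perform these steps via fork() and exec(); a minimal sketch, with /bin/ls chosen only as an example program to launch:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* create a new process */
    if (pid == 0) {
        /* Child: the OS loads the new program + static data, sets up its
           stack and heap, then hands control to that program's main(). */
        execl("/bin/ls", "ls", (char *)NULL);
        perror("execl");             /* reached only if exec fails */
        exit(1);
    }
    waitpid(pid, NULL, 0);           /* parent waits for the child */
    return 0;
}
```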


4. Architecture of a Process
A process has 4 parts:

 Stack → Local variables, function arguments, return values.

 Heap → Dynamically allocated memory.


 Data → Global + static variables.

 Text → Program code (compiled instructions).
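A small sketch showing where typical C variables land in these four parts (this follows the conventional C process model; the exact layout is platform-dependent):

```c
#include <stdio.h>
#include <stdlib.h>

int global_count = 42;              /* Data: global/static variables */

int main(void) {                    /* Text: the compiled instructions */
    int local = 7;                  /* Stack: locals, args, return values */
    int *dyn = malloc(sizeof *dyn); /* Heap: dynamically allocated memory */
    *dyn = 99;
    printf("data=%d stack=%d heap=%d\n", global_count, local, *dyn);
    free(dyn);
    return 0;
}
```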

5. Attributes of a Process

 Unique ID (every process can be identified).


 Process Table → OS tracks all processes using this table.

o Each entry = PCB (Process Control Block).


 PCB (Process Control Block): Stores details of a process:

o Process ID
o Program Counter (next instruction)

o Process State (Running, Ready, Waiting…)


o Priority
o CPU Registers

o List of open files/devices

6. PCB Structure (Quick View)

Field              | Description
Process ID         | Unique identifier
Program Counter    | Next instruction address
Process State      | Current status
Priority           | Decides CPU time allocation
Registers          | Stores CPU values when swapped
List of open files | Files used by process
List of devices    | Devices allocated

🔑 Note on PCB Registers:


When process pauses, register values are saved in PCB.
When process resumes, values are restored from PCB.

⚡ So simply:
👉 Program = code
👉 Process = running program
👉 PCB = process details storage

LECTURE 10 Process States and Process Queues

1. Process States

When a program runs, it goes through different stages:


 New – The OS is creating the process.

 Ready – The process is in memory and waiting for CPU.

 Running – The CPU is executing the process instructions.


 Waiting – The process is waiting for input/output (I/O).

 Terminated – The process has finished, and its record is removed.


(The diagram shows how processes move between these states.)

2. Process Queues

Processes are kept in different queues depending on their state:


 Job Queue –
o Holds new processes in secondary storage.

o Long-term scheduler (LTS) picks which processes to load into memory.


 Ready Queue –

o Holds processes in main memory waiting for CPU.


o CPU scheduler (short-term scheduler) picks which process gets the CPU
next.
 Waiting Queue –

o Holds processes waiting for I/O.

3. Degree of Multiprogramming

 This is the number of processes in memory at one time.


 LTS controls this number by deciding how many jobs to admit.

4. Dispatcher

 A part of the OS that gives control of the CPU to the process selected by the
scheduler.

Lec-11: Swapping | Context-Switching | Orphan process | Zombie process

1. Swapping

 Meaning: Moving processes between main memory (RAM) and secondary storage
(disk) to manage memory.

 Points:
o Time-sharing systems may have a medium-term scheduler (MTS).

o Processes are removed from memory to reduce multiprogramming.

o Later, the removed process can be brought back and continued from where
it stopped.

o This removal and bringing back is called swapping.


o Swap-in and swap-out is done by MTS.

o Used when memory is overcommitted and needs to be freed.

👉 Example: If RAM is full, one process is moved to disk (swap-out). Later, when needed,
it is brought back (swap-in).

2. Context-Switching

 Meaning: Changing CPU control from one process to another.


 Steps:

o Save the state of the current process.


o Load the state of the new process from PCB (Process Control Block).

 Note:
o It is pure overhead (no useful work is done during switching).

o Time taken depends on machine speed, memory speed, and registers.

👉 Example: Like pausing one video and resuming another.

3. Orphan Process

 Meaning: A process whose parent process has finished/terminated but the child
process is still running.
 Such processes are adopted by init process (first process of OS).

👉 Example: Parent dies, but child continues → OS assigns a new guardian (init).

4. Zombie Process (Defunct Process)

 Meaning: A process that has finished execution but still has an entry in the process
table.
 Details:

o Common for child processes → parent still needs to read child’s exit status.
o Once parent reads exit status using wait(), the process is fully removed →
this is called reaping.
o Until reaping, it remains a zombie process.

👉 Example: Process is dead but name is still in attendance list until teacher marks it out.
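A minimal sketch of a zombie being created and reaped on a Unix-like system; the sleep() exists only so you have time to observe the <defunct> entry with ps:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        exit(0);                  /* child finishes immediately */

    /* The child is now a zombie: finished, but still in the process
       table because the parent has not read its exit status yet. */
    sleep(10);                    /* run `ps -l` now: child shows <defunct> */

    int status;
    waitpid(pid, &status, 0);     /* reaping: the entry is removed */
    printf("child reaped, exit status %d\n", WEXITSTATUS(status));
    return 0;
}
```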

✅ In Short:

 Swapping → Move process between RAM & Disk.


 Context Switching → CPU changes from one process to another.

 Orphan Process → Parent dead, child alive (adopted by init).


 Zombie Process → Process finished, but still listed in process table until parent
acknowledges.

Lec-12: Process Scheduling | FCFS | Convoy Effect


1. Process Scheduling

 It is the basis of multiprogramming OS.


 By switching CPU among processes, OS makes the system more productive.

 Many processes are in memory →

o If one process is waiting, CPU is switched to another process.


o This cycle continues.

👉 Example: Like a teacher asking one student a question, then moving to another
while the first is thinking.

2. CPU Scheduler
 When CPU is idle, OS must pick a process from the ready queue.

 Done by the Short-Term Scheduler (STS).

3. Non-Preemptive Scheduling

 Once CPU is given to a process → it cannot be taken away until the process finishes
or goes to wait.

 Problems:
o Starvation: Long jobs block short jobs.

o Low CPU utilization.

👉 Example: One student speaks very long, others wait silently.

4. Preemptive Scheduling
 CPU can be taken away from a process after a fixed time quantum or when it
changes state.
 Advantages:
o Less starvation.

o High CPU utilization.

👉 Example: Teacher gives each student limited time to answer, then moves to the
next.

5. Goals of CPU Scheduling


 Use CPU as much as possible (max utilization).

 Reduce turnaround time (TAT).


 Reduce waiting time (WT).

 Reduce response time (RT).

 Increase throughput (no. of processes finished per unit time).

6. Important Terms
 Throughput: Processes finished per unit time.

 Arrival Time (AT): Time when process comes into ready queue.

 Burst Time (BT): CPU time needed by process.


 Turnaround Time (TAT): Time from arrival → finish.
TAT = CT – AT

 Waiting Time (WT): Time spent waiting for CPU.
WT = TAT – BT

 Response Time (RT): Time from arrival → first CPU allocation.


 Completion Time (CT): Time when process finishes execution.

7. FCFS (First Come, First Serve)

 Process arriving first in the queue gets CPU first.

 Problem: If one process has long Burst Time, it delays all others.
o This creates the Convoy Effect.

👉 Convoy Effect:

 When many short processes get stuck behind one long process.

 Causes poor resource management and increases waiting time.
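A small worked example with made-up numbers, tying the terms together under FCFS:

Processes: P1 (AT=0, BT=10), P2 (AT=1, BT=2), P3 (AT=2, BT=3).
FCFS runs them in arrival order: P1 from 0–10, P2 from 10–12, P3 from 12–15.

CT: P1 = 10, P2 = 12, P3 = 15
TAT = CT – AT: P1 = 10, P2 = 11, P3 = 13
WT = TAT – BT: P1 = 0, P2 = 9, P3 = 10

The short jobs P2 and P3 spend most of their time waiting behind the long P1, which is exactly the convoy effect described above.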

✅ In Short:

 Non-preemptive: Once started, can’t stop until finished (long jobs block short).
 Preemptive: CPU can be taken away after time quantum (fair chance).

 FCFS: Simple, fair but causes Convoy Effect.

 Goals: Max CPU use, less waiting, fast response, more throughput.
Lec-13: CPU Scheduling | SJF | Priority | RR

1. Shortest Job First (SJF) – Non-Preemptive


 The process with the smallest burst time (BT) is given CPU first.

 Problems:
o Exact BT is not known (must be estimated).

o If first process has a very large BT → others wait (convoy effect).

o Starvation may occur (long jobs may never get CPU if many short jobs keep
coming).

 Formula/Criteria: AT + BT

 Advantage: Minimizes average waiting time.

👉 Example: Small tasks get done quickly, but a big task may wait too long.

2. SJF – Preemptive (Shortest Remaining Time First, SRTF)


 Always choose the process with smallest remaining burst time.

 Advantages:

o Less starvation compared to non-preemptive.


o No convoy effect.

o Average waiting time is even better.

👉 Example: If a new shorter job arrives, CPU switches to it immediately.

3. Priority Scheduling – Non-Preemptive

 Each process is assigned a priority. Higher priority → CPU first.

 If two processes have same priority → handled like FCFS.


 SJF is actually a special case of priority scheduling where priority = 1/BT.

👉 Example: Emergency patient treated before normal patients.

4. Priority Scheduling – Preemptive


 Current running process will be stopped if a higher priority process arrives.

 Problem: Starvation for low priority processes (may never get CPU).
 Solution: Aging → gradually increase the priority of waiting processes.

o Example: Increase priority by 1 every 15 minutes.

👉 Example: Normal patients still get treated after waiting long enough.

5. Round Robin (RR)

 Most popular scheduling method.

 Like FCFS but preemptive.


 Each process gets a fixed time quantum (TQ).

 If BT > TQ: process is paused and placed back in queue.


 If BT ≤ TQ: process finishes.

 Advantages:

o No process waits forever → very low starvation.


o No convoy effect.

o Fair to all processes.


 Problems:
o If TQ is too small → too many context switches (overhead).

o If TQ is too large → behaves like FCFS.

👉 Example: Like giving each student 2 minutes to speak, then moving to the next
student.

✅ In Short:

 SJF Non-preemptive: Smallest BT first, but convoy effect & starvation possible.
 SJF Preemptive (SRTF): Smallest remaining BT first, less starvation.

 Priority Non-preemptive: High priority first, but lower ones may starve.
 Priority Preemptive: Higher priority interrupts running process. Aging solves
starvation.
 RR: Time-sharing, fair to all, popular, depends on TQ.
Lec-14: MLQ (Multi-Level Queue) | MLFQ (Multi-Level Feedback Queue)

1. Multi-Level Queue Scheduling (MLQ)

 Meaning: Ready queue is divided into multiple separate queues based on process
priority.
 Rules:

o Each process is permanently assigned to one queue (cannot move).


o Assignment depends on: process type, priority, memory size, etc.
o Each queue can have its own scheduling algorithm.
👉 Example: System processes → Priority scheduling, Interactive processes
→ RR, Batch processes → FCFS.

Types of Processes:
 System Process (SP): Created by OS, highest priority.

 Interactive Process (IP): Needs user input (foreground).

 Batch Process (BP): Runs silently in background (lowest priority).


Features:

 Scheduling between queues uses Fixed Priority Preemptive Scheduling.


👉 Example: Foreground > Background.

 If an interactive process arrives while a batch process is running → the batch process is preempted.
 Problems:

o Lower-priority processes may starve.


o Convoy effect possible.

2. Multi-Level Feedback Queue Scheduling (MLFQ)


 Meaning: Similar to MLQ, but processes can move between queues.

 Rules:
o Multiple sub-queues exist.
o If a process uses too much CPU time → moved to lower priority queue.

o If a process waits too long in a lower queue → moved to higher priority


queue (prevents starvation).

 Advantages:
o Less starvation compared to MLQ.

o Flexible → can be tuned to meet system needs.


o Good for balancing I/O bound and CPU bound processes.

👉 In short:

 MLQ = Fixed queues, process can’t move (simple but unfair to low-priority).
 MLFQ = Flexible queues, process can move (fairer, less starvation).

✅ In Short (Comparison):

Feature          | MLQ              | MLFQ
Process Movement | No (fixed queue) | Yes (can move up/down)
Starvation       | High             | Less (due to aging)
Flexibility      | Rigid            | Flexible
Use Case         | Simple systems   | Complex systems, multitasking OS


Comparison of CPU Scheduling Algorithms

Algorithm                                 | Design  | Preemption | Convoy Effect | Overhead
FCFS (First Come First Serve)             | Simple  | ❌ No      | ✅ Yes        | ❌ No
SJF (Shortest Job First – Non-Preemptive) | Complex | ❌ No      | ✅ Yes        | ❌ No
PSJF / SRTF (Preemptive SJF)              | Complex | ✅ Yes     | ❌ No         | ✅ Yes
Priority (Non-Preemptive)                 | Complex | ❌ No      | ✅ Yes        | ❌ No
Priority (Preemptive)                     | Complex | ✅ Yes     | ❌ No         | ✅ Yes
RR (Round Robin)                          | Simple  | ✅ Yes     | ❌ No         | ✅ Yes
MLQ (Multi-Level Queue)                   | Complex | ✅ Yes     | ✅ Yes        | ✅ Yes
MLFQ (Multi-Level Feedback Queue)         | Complex | ✅ Yes     | ❌ No         | ✅ Yes

LECTURE 15 Concurrency & Multithreading (Easy Notes)

1. Concurrency
 Running multiple instructions at the same time in the system.
 Happens when multiple threads or processes run in parallel.

2. Thread

 A lightweight process.
 A single path of execution inside a process.

 Allows parallelism by dividing a program into independent tasks.

 Example:
o Browser with multiple tabs.

o Text editor → typing, spell-check, saving (all done by different threads).

3. Thread Scheduling

 OS decides which thread runs based on priority.

 Each thread gets time slices of CPU for execution.

4. Thread Context Switching

 OS saves current thread state → switches to another.


 Faster than process switching because:

o Only registers, program counter, stack are saved.

o Memory space not switched.


 CPU cache state is preserved.

5. How threads get CPU access?


 Each thread has its own program counter.

 OS scheduler selects threads based on scheduling policy.


 OS executes instructions from that thread.

6. Context Switching in Threads


 Like processes, but stored in Thread Control Block (TCB).

 TCB keeps info for resuming threads later.

7. Will a single CPU gain from Multithreading?

 ❌ Not really, because:

o Only one CPU → threads still run one after another (just rapidly switched), so they must still context switch.

o No real performance gain (just overhead).

 On a multi-core CPU, multiple threads can run literally at the same time → faster execution.

8. Benefits of Multithreading
 Responsiveness → Faster response to users.

 Resource Sharing → Efficient use of shared resources.


 Economy → Cheaper than creating new processes (threads share same
memory/resources).

 Scalability → Works great with multiprocessors, better performance.

🔑 Extra Concept: Process Synchronization

 Ensures multiple processes/threads can access shared resources without interfering with each other.

 Example: Shared memory, files, variables → should not conflict.

✅ In short:

 Concurrency = doing multiple tasks “at the same time.”

 Thread = small unit of execution.


 Multithreading = better resource use, responsiveness, and scalability.
LECTURE 16 Critical Section Problem (Easy Notes)

1. Why Synchronization is Needed?


When multiple processes/threads access the same shared data without synchronization, problems occur such as:

 Race conditions
 Inconsistent data (if one process is updating data while another reads the same data at the same time, the reader may see a false value)
 Deadlocks

2. Critical Section (CS)


 A part of the program where processes/threads access shared resources (variables,
files, memory, etc.).

 Since multiple processes run at the same time, data can be interrupted and
corrupted.

3. Major Thread Scheduling Issue


Race Condition

 Happens when two or more threads/processes access and update shared data at
the same time.

 The result depends on the thread scheduling order (unpredictable).


 Called "racing" because processes race to access data first.

4. Solutions to Race Condition


a. Atomic Operations

 Make the critical section an atomic unit = all-or-nothing.
(Either the operation happens fully and correctly, or it doesn't happen at all.)

 Executed as one uninterruptible operation → no interruption mid-way.


b. Mutual Exclusion with Locks

 Only one thread/process can enter the CS at a time.

c. Semaphores

 Special signaling mechanism used for process synchronization.
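A minimal pthreads sketch of a race condition and its fix by mutual exclusion with a lock; counter++ is the critical section here:

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;                          /* shared data */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);            /* only one thread in the CS */
        counter++;                         /* the critical section */
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Without the lock the final value is unpredictable (a race);
       with it, the result is always 200000. */
    printf("counter = %ld\n", counter);
    return 0;
}
```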


5. Can a Simple Flag Variable Fix It?

 ❌ No, because it cannot fully avoid race conditions.

6. Peterson’s Solution
 Works for 2 processes only.

 Ensures mutual exclusion and prevents race conditions.

7. Mutex/Locks
 Used to implement mutual exclusion.

 Only one thread/process can enter CS at a time.


Disadvantages of Mutex/Locks:

1. Contention → Multiple threads waiting for one lock.

o If the thread holding lock dies → others wait forever.


2. Deadlocks → Two processes waiting for each other forever.

3. Debugging is hard.
4. Starvation → A process or thread waits forever to access a shared resource because other processes keep getting priority over it.

8. Goals of Process Synchronization (3 conditions)

1. Mutual Exclusion → Only one process in CS at a time.


2. Progress → If no process in CS, one waiting must be allowed to enter.

3. Bounded Waiting → A process should not wait forever to enter CS.

✅ In short:

 Critical section = shared resource code.


 Race condition = data conflict due to parallel access.

 Solutions = Atomic ops, Locks, Semaphores, Peterson’s solution.


 Problems with locks = Contention, Deadlock, Debugging, Starvation.
LEC-17: Conditional Variables & Semaphores for Thread Synchronization

1. 🔄 Conditional Variable

a. What is it?

It lets a thread wait (pause) until a specific condition becomes true.

🧾 Example: A delivery boy waits until the package arrives.

b. It always works with a lock.


c. Behavior:

 A thread enters wait state only after locking.


 Then it releases the lock and waits for another thread to give a signal.

 Once got a signal, it reacquires the lock and continues.


d. Why use it?

 It avoids busy waiting (constant checking of a condition).

🧾 Busy waiting is like a person checking the oven every 10 seconds instead of just setting a timer.

e. "Contention is not here" means:


 Since waiting threads release the lock, others can work — no blocking or fighting.
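A minimal sketch of this wait/signal behavior with a POSIX condition variable; the "package arrived" flag mirrors the delivery-boy example above:

```c
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
int package_arrived = 0;               /* the condition waited on */

void *delivery_boy(void *arg) {
    pthread_mutex_lock(&m);            /* enters wait state only after locking */
    while (!package_arrived)           /* loop guards against spurious wakeups */
        pthread_cond_wait(&cv, &m);    /* releases the lock while waiting,
                                          reacquires it when signaled */
    printf("got the package\n");
    pthread_mutex_unlock(&m);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, delivery_boy, NULL);
    pthread_mutex_lock(&m);
    package_arrived = 1;               /* make the condition true */
    pthread_cond_signal(&cv);          /* wake the waiting thread */
    pthread_mutex_unlock(&m);
    pthread_join(t, NULL);
    return 0;
}
```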
 ==================================================================

 ✅ Deadlock (DL) – Easy Explanation

 🔷 1. What is Deadlock?

 In a computer, many programs (called processes) run together and need resources
like CPU, files, printers, etc.

 When two or more processes:


 Are waiting for each other’s resources,

 And none can continue,


👉 this situation is called Deadlock.

 🔷 2. How does Deadlock happen?

 A process asks for a resource (R):

 If it's free → it gets it.


 If it’s already taken, the process goes into waiting.

 If that resource is never released, the process is stuck forever.

 👉 This is a Deadlock.

 🔷 3. What really causes Deadlock?

 If:

 Two or more processes are waiting for resources


 And those resources are held by each other

 Then ➤ no one proceeds ➤ Deadlock.

 🔷 4. Why is Deadlock a problem?

 It freezes the system.


 Programs never finish.

 Resources get wasted.

 New jobs can’t start.

 🔷 5. Examples of Resources

 Resources are things processes need to work:

 Memory (RAM)
 CPU cycles

 Files
 Printers

 Locks
 Sockets

 I/O devices

 🔷 6. One resource can have many parts

 Example: A CPU can have 2 cores ➤ 2 processes can use it at the same time.


 🔷 7. How a process uses a resource?

 Request → Ask for the resource.


 Use → Work with it.

 Release → Free it after use.

 👉 Like borrowing and returning a book from the library.

 🔷 8. Visual Example of Deadlock

 Process T1 has R1, wants R2


Process T2 has R2, wants R1
Both wait ➤ Deadlock 🔄

 ⚠️ 9. Deadlock Conditions (All 4 Must Happen)

 Condition        | Meaning                                      | Example
 Mutual Exclusion | Only 1 process can use a resource at a time  | 1 person can use the bathroom at a time
 Hold & Wait      | Holding one resource and waiting for another | You hold a pen, wait for a notebook
 No Preemption    | Can't forcibly take a resource               | Can't snatch notebook from your friend
 Circular Wait    | Processes wait in a circle                   | A waits for B, B for C, C for A

 🛠️ 10. How to Handle Deadlocks

 ✅ a. Prevention / Avoidance

 Use smart rules so deadlock never happens.

 ✅ b. Detection & Recovery

 Let deadlock happen ➤ Detect it ➤ Fix it.

 ✅ c. Ignorance

 Just ignore deadlocks 😅 (used in Linux sometimes)


👉 Called Ostrich Algorithm — "bury head in sand".

 🔐 11. How to Ensure Deadlock Never Happens


 Use:

 Deadlock Prevention

 Deadlock Avoidance

 ✅ 12. Deadlock Prevention (Break One of the 4 Conditions)

 🧾 a. Mutual Exclusion

 Use locks only when needed.

 Share resources where possible.

 E.g., Read-only files can be read by many at the same time.

 🔸 But: Some resources can’t be shared (like printers), so this condition can’t
always be broken.

 🔴 Hold & Wait Condition

 It means:

 A process is holding one resource,

 And waiting for another one.

 Real-life example:
You have a pen (holding), and you're waiting for a notebook to write.
You’re not giving up the pen while waiting — that’s Hold & Wait.

 ✅ How to Avoid Hold & Wait

 🔸 Option 1: Ask for everything at once before starting

 📖 What it means (in systems):

 A process must request all resources it will need at the beginning, before starting.

 🔍 Real-life example:

 You want to write something, so you ask for both pen and notebook together.

 If both are available, you take them and start writing.


 If even one is missing, you don’t start — you wait without taking anything.

 ✔️ This avoids Hold & Wait because you're not holding one item and waiting for
another.

 🔸 Option 2: Release everything before asking new ones

 📖 What it means (in systems):

 If a process is already using a resource and now needs another, it must:


 First give up what it has,

 Then ask again for all the needed resources.

 🔍 Real-life example:

 You have a pen but now need a notebook.

 You return the pen first,

 Then ask for pen + notebook together again.

 ✔️ This avoids Hold & Wait because you never hold one thing while waiting for
another.

 🧾 c. No Preemption

 -> If a process can’t get everything it needs, it must release everything it’s
holding, and try again later.

 Let system take back resources when needed:

 If a process can’t get something, take away what it already has.

 Restart it later when all needed items are available.


 Example:
You want a pen + paper. If no paper, return pen too, and try again later.

 🧾 d. Circular Wait

 Break the waiting loop:


 Set a rule for request order (e.g., always ask for pen before notebook).

 So no cycle can form.


 Example:
P1 and P2 both need R1 and R2.
Tell them:

 Always ask for R1 first, then R2.


This avoids looping.
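A minimal sketch of this resource-ordering rule in C, with two mutexes r1 and r2 standing in for R1 and R2:

```c
#include <pthread.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource R1 */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource R2 */

/* Rule: every thread acquires R1 before R2. Since no thread ever
   holds R2 while waiting for R1, no circular wait can form. */
void *p1(void *arg) {
    pthread_mutex_lock(&r1);
    pthread_mutex_lock(&r2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *p2(void *arg) {
    pthread_mutex_lock(&r1);   /* same order as p1, NOT r2-then-r1 */
    pthread_mutex_lock(&r2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, p1, NULL);
    pthread_create(&b, NULL, p2, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```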

📝 Deadlock – Short Notes for Quick Revision

🔐 What is Deadlock?

A situation where two or more processes are stuck forever waiting for each other’s
resources.

🧾 Real-Life Analogy

 You have a pen.

 Your friend has a notebook.


 You want their notebook, they want your pen.

 Both are waiting ➝ Deadlock.

🔄 Deadlock Necessary Conditions (All 4 must be true)

Condition        | Meaning                        | Real-Life Example
Mutual Exclusion | Only one can use at a time     | One person in bathroom
Hold & Wait      | Hold one, wait for another     | Holding pen, waiting for notebook
No Preemption    | Can't forcefully take resource | Can't snatch notebook from someone
Circular Wait    | Circular waiting chain         | A waits for B, B for C, C for A

🛠️ How to Handle Deadlocks


Method               | Idea
Prevention           | Break any of the 4 conditions
Avoidance            | Use safe state logic (Banker's Algo)
Detection & Recovery | Let DL happen, then detect & fix
Ignore               | Assume it won't happen (Ostrich 🐦)

✅ Deadlock Prevention Strategies

Condition Broken | Strategy
Mutual Exclusion | Allow shared access where possible (read-only)
Hold & Wait      | Request all at once, or release before asking
No Preemption    | Take back held resources if needed
Circular Wait    | Use resource ordering (R1 before R2)

=======================================================================

a. What is a Semaphore?
A semaphore is a method used to manage access to shared resources (like printers,
databases, files) in multithreaded or multiprocess programs.

b. It is a number (like a counter):

It keeps track of how many resources are available.

c. Multiple threads can run at the same time if resources are available.

d. Semaphore helps share limited resources:


 For example, if you have 3 printers, semaphore value = 3.
 It lets 3 threads print at the same time.

 A mutex, in contrast, allows only 1 thread at a time.

e. Binary Semaphore (like a switch):


 Can be 0 or 1.

 It’s like a lock (either free or taken).


 Also called mutex.

f. Counting Semaphore:
 The number can be more than 1.

 It controls access to multiple instances of a resource.


 Used when multiple identical resources are shared.

✅ Example:
If 5 threads want to access a pool of 3 databases, the counting semaphore starts at 3.
Only 3 can access it at once.

g. To Avoid Busy Waiting:


 Modify wait() and signal():

o If resource is not available, the thread goes to wait state instead of checking again
and again.

h. Wake-up Mechanism:

 If one thread is waiting, and another does signal(), the waiting thread is moved to ready
state.

🧾 Like waking a sleeping person once food is ready instead of them asking every minute

✅ Summary Table:

Term               | Easy Meaning
Semaphore          | Counter for available resources
Binary Semaphore   | 0 or 1 (lock/unlock)
Counting Semaphore | More than 1 (multiple users allowed)
wait()             | Decrease semaphore, block if not possible
signal()           | Increase semaphore, wake up blocked process
Busy Waiting       | Continuously checking (bad; avoided with semaphores)

Lecture 24 Memory Management Techniques – Contiguous Memory Allocation

1. Why Memory Management Is Needed

 In multiprogramming, many processes share the main memory.


 The CPU picks from the Ready Queue, so we need to keep multiple processes in
memory to improve performance.

 Hence, we must manage memory properly to keep multiple processes running


efficiently.

2. Logical Address vs Physical Address

Property       | Logical Address (Virtual) | Physical Address
Generated by   | CPU                       | Memory Management Unit (MMU)
Access by user | Yes                       | No (indirect only)
Location       | Not real (virtual)        | Real physical location in RAM
Address Space  | 0 to max                  | R to (R + max), where R = base address

Example:
 CPU generates a Logical Address = 366

 Relocation Register contains Base Address = 14000


 Physical Address = 366 + 14000 = 14366

3. Address Translation Using MMU


 The MMU adds the base address (from relocation register) to the logical address to
get the physical address.

 Limit register checks that the logical address is valid.

 If the logical address is larger than the limit → Error (trap).
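What the MMU does here can be modeled in a few lines of C for illustration (the register values are taken from the examples in this lecture):

```c
#include <stdio.h>

/* Illustrative software model of the relocation + limit registers. */
unsigned relocation = 14000;   /* base address of the process in RAM */
unsigned limit      = 74600;   /* largest logical address allowed */

int translate(unsigned logical, unsigned *physical) {
    if (logical >= limit)      /* address outside the process's space */
        return -1;             /* hardware would raise a trap to the OS */
    *physical = relocation + logical;
    return 0;
}

int main(void) {
    unsigned phys;
    if (translate(366, &phys) == 0)
        printf("logical 366 -> physical %u\n", phys);   /* 14366 */
    if (translate(99999, &phys) != 0)
        printf("logical 99999 -> trap (protection violation)\n");
    return 0;
}
```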

4. How OS Provides Isolation and Protection

 OS uses Virtual Address Space (VAS) to isolate process memory.


(Aside: Virtual Address Space (VAS)

 Each process is given its own private address space.

 The OS, with the help of the Memory Management Unit (MMU) and page tables, maps these virtual addresses to physical memory.

 Even if two processes use the same virtual address (say 0x1000), the OS ensures they are mapped to different physical locations.

 This provides isolation, since one process cannot access another's memory directly.)

 Relocation Register = Base Address (e.g., 10000)


 Limit Register = Max address allowed (e.g., 74600)

 MMU dynamically maps the logical address to physical using relocation register.

 If any program tries to access other memory, the OS will block it with a trap.

5. Allocation Methods
 Contiguous Allocation: Process gets one continuous block of memory.

 Non-contiguous Allocation: Process memory can be scattered in parts. Ex : Paging

6. Contiguous Memory Allocation


a. Fixed Partitioning

 Memory is divided into fixed-size blocks (equal or unequal).

Example:

 Partition 1: 1MB

 Partition 2: 1MB
 Partition 3: 3MB

 Partition 4: 1MB

 Processes:
o P1 (1MB)

o P2 (1MB)

o P3 (3MB)

o P4 (1MB)
o P5 (4MB) can't be loaded → no partition big enough.
Limitations:

1. Internal Fragmentation:
o If a 1MB partition holds a 0.7MB process → 0.3MB is wasted.

2. External Fragmentation:

o Free space exists, but not in a single block.

3. No Flexibility in Process Size:


o Process can’t be larger than the largest partition.

4. Low Multi-programming:
o Fixed number of processes only.

b. Dynamic Partitioning
 Partitions are created at runtime based on process size.

Example:
 P1 (5MB), P2 (2MB), P3 (3MB), P4 (4MB)

 Space left = 8MB (2MB + 3MB + 3MB)

 P5 (8MB) still can’t be loaded due to non-contiguous space.


Advantages Over Fixed Partitioning:

1. No internal fragmentation
2. No limit on process size

3. Better multi-programming
Limitation:

 External Fragmentation still happens.

✅ Summary Table

Technique            | Internal Fragmentation | External Fragmentation | Process Size Limit | Multi-programming
Fixed Partitioning   | Yes                    | Yes                    | Yes                | Low
Dynamic Partitioning | No                     | Yes                    | No                 | Better

LECTURE 25

1. What is Defragmentation/Compaction?

 Problem: When memory is split into small non-contiguous free blocks, we get
external fragmentation.

 Solution: Compaction (Defragmentation)


o Brings all free space together into one big block.

o Moves all allocated memory blocks to one side of memory.


o Helps fit larger processes into memory.

🧾 Think of it like this:


Imagine books (processes) on a bookshelf, but with empty spaces (free memory)
scattered in between. Compaction moves all books to one side, making one big empty
space.

✔️ Advantages:

 Removes external fragmentation.

 Allows larger processes to be loaded.

❌ Disadvantage:

 Slows down the system as moving data around takes time.

2. How is Free Space Represented in OS?

 Free memory blocks are kept in a Free List.

 This Free List is stored using a Linked List structure.

3. How to Allocate Memory from Free Space?

The OS uses allocation algorithms to find the best hole (free space) to place a new
process of size n.
a. First Fit

 Allocates the first free space that is big enough.


 Fast, easy to implement.

Example:
Free holes: [5MB, 10MB, 20MB]
Process size: 9MB
→ Allocated in 10MB (first fit)

b. Next Fit

 Like First Fit, but starts the search from where the last process was allocated.

Example:
If last process was put in 5MB block, it starts search from 10MB next time.
c. Best Fit
 Allocates the smallest hole(space) that is big enough.

 Less internal fragmentation.

 More external fragmentation may occur due to leftover small holes.


Example:
Free holes: [5MB, 10MB, 12MB]
Process size: 9MB
→ Allocated in 10MB (smallest that fits)

d. Worst Fit

 Allocates the largest available hole.

 Leaves bigger leftover space for future processes.

 Slow because it searches entire list.

Example:
Free holes: [5MB, 10MB, 20MB]
Process size: 9MB
→ Allocated in 20MB (biggest hole)
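A minimal C sketch of First Fit over the hole list used in these examples (an array stands in for the OS's linked free list):

```c
#include <stdio.h>

/* Return the index of the first hole big enough, or -1 if none fits. */
int first_fit(int holes[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= request)
            return i;
    return -1;
}

int main(void) {
    int holes[] = {5, 10, 20};            /* free holes in MB */
    int i = first_fit(holes, 3, 9);       /* 9MB process */
    if (i >= 0) {
        printf("allocated in %dMB hole\n", holes[i]);  /* 10MB */
        holes[i] -= 9;                    /* leftover becomes a smaller hole */
    }
    return 0;
}
```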

✅ Summary Table

Algorithm | How It Works                 | Fast? | Fragmentation Type           | Example (Holes = [5, 10, 20], Process = 9)
First Fit | First hole that fits         | ✔️    | May have internal frag.      | Allocates 10MB
Next Fit  | Starts search from last hole | ✔️    | Similar to First Fit         | Allocates 10MB
Best Fit  | Smallest hole that fits      | ❌    | Less internal, more external | Allocates 10MB
Worst Fit | Largest hole that fits       | ❌    | Less external, more internal | Allocates 20MB
LECTURE 26. Paging | Non-Contiguous Memory Allocation

1. Problem with Dynamic Partitioning


 Main disadvantage: External fragmentation (free spaces are scattered, can’t be
used together).

 Can fix it with compaction (moving data together), but it’s slow and adds overhead.
 We need a better and flexible method to load processes into memory → Paging.

2. Idea Behind Paging


 Example: If memory has two free blocks of 1KB each, and a process needs 2KB in one
piece, it won’t fit → external fragmentation.

 Solution: Divide memory into small equal-size blocks.

o Physical memory → divided into frames.

o Logical memory → divided into pages.

o Frame size = Page size (e.g., 4KB).

3. What is Paging?

 Definition: "Paging is a memory management technique where the memory provided to a process can be non-contiguous."

 Advantages:
o No external fragmentation.
o No compaction needed.

 Process:

1. Physical memory → divided into equal, fixed-size frames.

2. Logical memory → divided into pages of the same size.

3. Page Table → maps which page is in which frame.

 Stores the base address of each page in physical memory.

4. Logical address = page number (p) + page offset (d).

 p → used to find the base address from the Page Table.

 d → added to the base address to get the physical address.
(Aside: The processor keeps the page table's starting address in a special register (for example, the CR3 register on x86 CPUs) so it can be located quickly.

👉 How is the speed problem solved?

 If the MMU had to look up the page table in RAM for every memory access, each access would take two memory accesses → very slow.

 So the CPU contains a small cache called the TLB (Translation Lookaside Buffer).

 The TLB stores recently used page table entries, so the CPU does not have to go to RAM again and again.

Easy Summary 🚀

 Page Table → lives in RAM.

 Its address is kept in a CPU register.

 For fast lookup → the TLB is inside the CPU.)

4. Page Table Details


 Stored in main memory.
 Page Table Base Register (PTBR) → points to the start of the page table for the
current process.
 Changing page tables during context switching → only PTBR changes.

5. How Paging Avoids External Fragmentation


 Pages of a process can be stored anywhere in free frames of physical memory.

 No need for one large continuous block.

6. Why Paging is Slow & How to Make it Fast

 Problem: Every memory access requires going to the page table in main memory →
slow.
When the CPU wants some data, it first has to check the page table to find where that data
is stored in physical memory.
But the page table is kept in main memory (RAM), so:

1. Step 1: Go to RAM to get the frame number from the page table.
2. Step 2: Go to RAM again to get the actual data.

This means two memory accesses instead of one, which makes it slower.

 Solution: Use Translation Look-aside Buffer (TLB) → a small, fast cache for page
table entries.

7. Translation Look-aside Buffer (TLB)


 What it is: Hardware cache that stores recent page number → frame number
mappings.

 How it works:
1. CPU generates a logical address → split into p (page number) and d (offset).

2. TLB checked first:


 TLB hit → frame found quickly → physical address generated.
 TLB miss → go to page table in main memory, then update TLB.

 ASID (Address Space Identifier):


o Each TLB entry has ASID to identify which process it belongs to.

o Allows TLB to store entries for multiple processes.


o If ASID doesn’t match → TLB miss.
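A minimal sketch of the hit/miss flow, modelling the TLB as a small dictionary (no eviction or ASID handling, which a real TLB would have):

```python
# Minimal sketch of the TLB-hit / TLB-miss flow described above.
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}   # full table, lives "in RAM"
tlb = {}                          # small cache of page -> frame entries

def lookup(page):
    if page in tlb:
        return tlb[page], "TLB hit"    # fast path: no page-table access
    frame = page_table[page]           # slow path: extra memory access
    tlb[page] = frame                  # update the TLB for next time
    return frame, "TLB miss"

print(lookup(1))   # first access: TLB miss
print(lookup(1))   # repeated access: TLB hit
```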

✅ Summary:
Paging solves external fragmentation by splitting memory into fixed-size blocks
(pages/frames) and mapping them using a page table. TLB speeds it up by caching page
table entries.

LECTURE 27 Virtual Memory, Demand Paging & Page Faults (Easy Notes)
1. What is Virtual Memory?
 Virtual memory lets the computer run programs that don’t fully fit into physical
RAM.

 It does this by using part of the hard disk (swap space) as if it were extra RAM.
 Example: If you have 4 GB RAM but need 8 GB, the extra space comes from the
disk.

2. Advantages of Virtual Memory


 Programs can be bigger than actual RAM.
 Benefits:

1. Not limited by physical RAM size.


2. More programs can run at once → better CPU usage and throughput.

3. Both the system and user benefit from flexibility.

3. Demand Paging

 A way of virtual memory management.


 Only load pages (parts of the program) into RAM when needed.

 Pages that are not currently needed stay in secondary memory (disk).
 If a page is needed but not in RAM → Page Fault occurs.

 The OS decides which old page to replace using page replacement algorithms.

 Uses Lazy Swapper → loads a page only when it’s needed, not before.

4. Pager vs Swapper
 Pager: Handles individual pages of a process.

 Swapper: Works with entire processes (less efficient in virtual memory context).

5. How Demand Paging Works

a. Pager guesses which pages will be used.


b. Loads only those, not the whole process → saves memory.
c. OS reduces swap time and memory use.
d. Uses valid–invalid bit in the page table:

 1 (valid): Page is in RAM and can be used.

 0 (invalid): Page is on disk or invalid.


6. Page Table & Example
 Shows where each page is: in RAM or on disk.

 If a program never tries to use an invalid page, it runs fine without loading it.

7. What Happens on a Page Fault

 If a program tries to use a page not in RAM:


1. OS checks if access is valid.

2. If valid but not in RAM → bring it from disk.

3. Find an empty frame in RAM.


4. Read the page from disk into that frame.

5. Update the page table (mark it valid).


6. Resume the program.

8. Steps in Handling a Page Fault (Diagram)

1. CPU requests a page.


2. Page table says "not in memory" → trap to OS.
3. OS finds the page on disk.

4. Load it into RAM.


5. Update page table.

6. Restart the instruction.
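A minimal sketch of these steps, with a made-up page table that stores a (frame, valid-bit) pair per page:

```python
# Minimal sketch of the page-fault flow above (all values are illustrative).
page_table = {0: (5, True), 1: (None, False)}   # page -> (frame, valid bit)
free_frames = [3, 8]

def access(page):
    frame, valid = page_table[page]
    if not valid:                             # trap to the OS: page fault
        frame = free_frames.pop()             # step 3: find a free frame
        print(f"page fault: loading page {page} from disk into frame {frame}")
        page_table[page] = (frame, True)      # step 5: mark the entry valid
    return frame                              # step 6: instruction restarts

access(1)   # page fault on first use
access(1)   # now a normal hit
```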

9. Pure Demand Paging

 Start running a process with zero pages in RAM.

 Pages are only loaded when needed.

 Never load a page until required.

10. Locality of Reference


 Programs usually use the same set of pages repeatedly for some time → OS uses
this to improve performance.
11. Advantages of Virtual Memory
 More programs can run at the same time (higher multiprogramming).

 Large apps can run on small physical memory.

12. Disadvantages

 Slower performance because swapping takes time.


 Thrashing: Too much swapping, CPU spends more time moving pages than running
programs.

LECTURE – 29

Page Replacement Algorithms

1. What happens when a Page Fault occurs?

 Page Fault: When a program tries to access a page (part of memory) that is not
currently in RAM.
 The OS (Operating System) must bring that page from disk (swap space) into RAM.

 If RAM is full, OS replaces one page in RAM with the new one.

2. Why do we replace pages?

 Sometimes, RAM is full and all frames are busy.


 OS needs to free one frame by removing an existing page and loading the required
one.

3. What is a Page Replacement Algorithm?


 It’s a method to decide which page to remove from RAM when we need space for a
new page.

 Goal: Reduce the number of page faults.

Types of Page Replacement Algorithms


A. FIFO (First In, First Out)

 Replace the oldest loaded page.


 Easy to implement.

 Problem: Performance may be bad.


o Might remove a page that’s still being used.

o Might keep an unused page for a long time.

 Belady’s Anomaly: In FIFO, sometimes increasing the number of frames increases page faults (the opposite of what we expect).

B. Optimal Page Replacement


 Replace the page that will not be used for the longest time in the future.

 Lowest page faults possible.

 Problem: OS must know future page requests (not possible in real life).

 Mostly used for theoretical comparison, not practical.

C. LRU (Least Recently Used)

 Replace the page that has not been used for the longest time in the past.
 Assumption: If it hasn’t been used recently, it’s less likely to be used soon.

 Implementation methods:

1. Counters: Keep a time-stamp for each page; remove the oldest one.

2. Stack: Keep most recently used pages on top, least used at bottom.
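A minimal sketch comparing FIFO and LRU fault counts on one reference string (the string and frame count are illustrative):

```python
# Minimal sketch: counting page faults under FIFO and LRU.
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, q, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.remove(q.popleft())   # evict the oldest loaded page
            mem.add(p); q.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)            # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)   # evict the least recently used
            mem[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 9 and 10 faults
```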

D. Counting-based Replacement
 Keep a count of references for each page.

1. LFU (Least Frequently Used)

o Replace page with lowest reference count.


o Assumes less-used pages are less important.

2. MFU (Most Frequently Used)


o Replace page with highest reference count.

o Idea: If a page was used a lot, maybe it’s already done being used.
 Note: LFU and MFU are not commonly used in practice.
✅ Summary Table

| Algorithm | How it Works | Pros | Cons |
|-----------|--------------|------|------|
| FIFO | Remove the oldest page | Easy | May remove important pages; Belady’s anomaly |
| Optimal | Remove the page with the farthest future use | Minimum page faults | Needs future knowledge |
| LRU | Remove the least recently used page | Good approximation of Optimal | Needs tracking |
| LFU | Remove the least frequently used page | Good if usage is steady | May remove a page about to be used |
| MFU | Remove the most frequently used page | – | Not always logical; rarely used in practice |

LECTURE 30 Thrashing – Easy Notes

1. What is Thrashing?

 Thrashing means the CPU spends more time swapping pages out and in (handling page faults) than doing actual work (executing instructions).

 When a process doesn’t have enough frames in memory for the pages it needs, it
will page fault again and again.

 Since all pages are important and in active use, replacing any page means you’ll
soon need it again → more page faults.

 This creates a loop: replace → need again → replace again, wasting time.

2. Definition

 Thrashing = High paging activity (lots of page faults) causing the system to spend
more time handling page faults than running actual processes.

3. How to Identify Thrashing


 CPU utilization drops even though the degree of multiprogramming (number of
processes) increases.

 Graph:
o At first, more processes = better CPU usage.

o After a point → too many page faults → CPU usage drops sharply.
4. Techniques to Handle Thrashing
A. Working Set Model

 Based on Locality of Reference:

o A process works on a set of pages (locality) for some time.


o If we give enough frames for that locality, page faults stay low.

o When the process moves to a new locality, page faults happen again.
 If frames < locality size → Thrashing will occur.

B. Page Fault Frequency (PFF) Control


 Thrashing = high page fault rate.

 Control page fault rate:


1. Set an upper limit and lower limit for page fault rate.

2. If page fault rate is too high → give process more frames.


3. If page fault rate is too low → take some frames away (free for other
processes).

4. Keep page fault rate in the middle range to avoid thrashing.
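A minimal sketch of this control loop; the thresholds and the tiny `Process` class are assumptions for illustration:

```python
# Minimal sketch of Page Fault Frequency (PFF) control.
class Process:
    def __init__(self, frames):
        self.frames = frames
        self.faults = 0
        self.accesses = 0

    def fault_rate(self):
        return self.faults / max(self.accesses, 1)

UPPER, LOWER = 0.10, 0.02    # acceptable page-fault-rate band (tunable)

def adjust_frames(p):
    rate = p.fault_rate()
    if rate > UPPER:
        p.frames += 1        # too many faults: give the process more frames
    elif rate < LOWER and p.frames > 1:
        p.frames -= 1        # too few: free a frame for other processes

p = Process(frames=4)
p.accesses, p.faults = 100, 15   # 15% fault rate -> above the upper limit
adjust_frames(p)
print(p.frames)                  # 5
```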

✅ In short:
Thrashing happens when processes don’t get enough frames and spend more time
swapping pages than running.
Prevent it by giving enough frames for current locality or controlling page fault rate.

========================================================================

What is a process and process table?

👉 A program is just a set of instructions (like a .exe or .java file) stored on disk. It is not doing
anything until it's run.

👉 When you run the program, the operating system creates a process, which is a running
version of that program. That running version is called an instance.
The Process Table is a data structure used by the operating system (OS) to keep track of all
active (running, ready, waiting) processes in the system.
This information is stored in a structure often called the Process Control Block (PCB).

| Field | Description |
|-------|-------------|
| Process ID (PID) | Unique identifier for the process |
| Process State | Running, Ready, Waiting (Blocked), etc. |
| Program Counter | Address of the next instruction to execute |
| CPU Registers | Values of all CPU registers for context switching |
| Memory Info | Base and limit registers, memory maps, page tables |
| Scheduling Info | Priority, scheduling queue pointers, etc. |
| I/O Status | List of I/O devices assigned or in use |
| Accounting Info | CPU time used, time limits, start time, etc. |

1. What are the different states of a process?

🔄 1. New

 The process is being created.

 It has not yet started execution.

🧾 2. Ready

 The process is ready to run and waiting in the ready queue.


 It is just waiting for the CPU to become available.

🔵 3. Running

 The CPU is currently executing the process.


 Only one process per core can be in this state at a time.
🔴 4. Waiting (or Blocked)

 The process is waiting for some event to occur (e.g., I/O completion, resource
availability).
 It cannot continue until the event happens.

✅ 5. Terminated (or Exit)

 The process has finished execution or has been killed.


 It is removed from the process table after cleanup.

2. What is a Thread?
A thread is the smallest unit of execution within a process.

🔍 In Simple Words:
 A process is like a container.

 A thread is like a worker inside the container doing the actual job.

✅ 4. Differences Between Process and Thread


A process is a program in execution, and a thread is the smallest segment of instructions (a segment of a process) that can be scheduled independently by a scheduler.

Here’s a clear and simple comparison between a process and a thread:

| Aspect | Process | Thread |
|--------|---------|--------|
| Definition | A program in execution | A unit of execution inside a process |
| Memory | Has its own memory space | Shares memory with other threads in the same process |
| Communication | Complex between processes (uses IPC – Inter-Process Communication) | Easier and faster (shared memory) |
| Creation Overhead | Heavyweight – takes more time and resources to create | Lightweight – faster to create and manage |
| Crash Impact | If a process crashes, it doesn’t affect other processes | If one thread crashes, it can crash the entire process |
| CPU Scheduling | Handled independently by the OS | May be scheduled with other threads of the same process |
| Examples | Chrome, Word, VLC – each runs as a separate process | Multiple tabs in Chrome may be threads of the same process |

🧾 Real-life Analogy:

 Process = A house with separate rooms and resources

 Thread = People (workers) inside that house sharing the same kitchen, bathroom, etc.

3. What are the benefits of multithreaded programming?

Multithreaded programming allows a single program (process) to have multiple threads running
concurrently, performing different tasks. This brings many advantages:

🚀 1. Improved Performance (Responsiveness)

 Multiple threads can run simultaneously, especially on multi-core processors.

 Example: In a game, one thread handles input, another updates graphics, another plays
sound — all at the same time.

⚡ 2. Faster Execution (Parallelism)


 Threads can perform different tasks in parallel, reducing overall execution time.

 Useful in CPU-intensive and I/O-bound operations.

🧾♂️ 3. Increased Responsiveness in User Interfaces

 In GUI applications, one thread can handle the user interface, while another handles
background work (like file downloads), so the UI doesn’t freeze.
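A minimal sketch of this UI/background split using Python's standard `threading` module (the "download" is just a simulated delay):

```python
# Minimal sketch: background work on a thread while the main thread stays free.
import threading, time

def background_download():
    time.sleep(1)                        # simulate slow I/O (a file download)
    print("download finished")

t = threading.Thread(target=background_download)
t.start()                                # runs concurrently...
print("UI still responsive")             # ...while the main thread continues
t.join()
```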

✅ 6. What is Thrashing?
Thrashing happens when the system is overloaded with processes and there isn’t enough RAM to hold their data, so it keeps swapping pages out to disk and loading them back into RAM.

📉 Result of Thrashing:
 Very poor performance

 High CPU usage but little useful work

 Applications freeze or slow down badly

🧾 How to Reduce or Prevent Thrashing?


 Use fewer programs at once.

 Increase physical RAM.


 Use better page replacement algorithms.

 Apply load control (reduce the degree of multiprogramming).

✅ 7. What is a Buffer?

A buffer is a temporary storage area used to hold data while it is being transferred between two
places that operate at different speeds or at different times.

🔍 In Simple Words:

A buffer helps manage the difference in speed between two devices or operations (like CPU and
disk, or input and processing).

📦 Common Use of Buffer:

 When data is being transferred, the buffer holds it temporarily so the sender and receiver
don’t have to work at the same speed.

🎧 example. Audio Playback


 Use: Buffers hold a few seconds of audio data before playing

Example -> Keyboard Input (Command Line / Text Editor)

 Use: When you type on a keyboard, the characters are first stored in a keyboard buffer.
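A minimal sketch of a buffer between a fast producer and a slower consumer, using Python's thread-safe `queue.Queue` (sizes and delays are illustrative):

```python
# Minimal sketch: a buffer decouples a fast sender from a slow receiver.
import queue, threading, time

buf = queue.Queue(maxsize=8)      # the buffer: temporary storage in between

def producer():
    for i in range(5):
        buf.put(i)                # fast sender deposits data and moves on

def consumer():
    for _ in range(5):
        item = buf.get()          # slow receiver drains at its own pace
        time.sleep(0.1)
        print("consumed", item)

threading.Thread(target=producer).start()
consumer()
```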

✅ 8. What is Virtual Memory?


Virtual Memory is a memory management technique that gives the illusion of a large main
memory (RAM) to processes — even if the
actual physical RAM is limited.

🔍 In Simple Words:

Virtual memory allows your computer to run more programs than the physical RAM can hold, by
using a portion of the hard disk as temporary RAM.

📌 How it Works :

 The operating system divides memory into pages.


 If RAM is full, inactive pages are moved to disk (this part of the disk is called the swap space
or page file).
 When those pages are needed again, they are swapped back into RAM.

🧾 Why is it Useful?

| Advantage | Description |
|-----------|-------------|
| 📈 Larger address space | Programs can use more memory than physically available |
| 🔁 Multitasking | Enables more processes to run simultaneously |
| 🧾 Isolation | Each process thinks it has its own private memory space |
| 💾 Efficient memory use | Keeps only the needed parts of programs in RAM |

✅ 11. What is a Kernel ?


The kernel is the core (central part) of an operating system.
It is responsible for managing the system's hardware and resources and acts as a bridge between
software and hardware.

🔍 In Simple Words:
The kernel is like the brain of the operating system — it directly communicates with the CPU,
memory, and devices, and allows programs to use hardware safely and efficiently.

📌 Key Functions of the Kernel:

| Function | Description |
|----------|-------------|
| 🧾 Process Management | Creates, schedules, and terminates processes |
| 💾 Memory Management | Allocates and frees memory; handles virtual memory |
| 💽 Device Management | Manages I/O devices using device drivers |
| 📂 File System Management | Controls file access, creation, and storage |
| 🔐 System Security & Protection | Ensures safe access to resources (prevents one program from affecting another) |

🧾 Types of Kernels:

| Type | Description |
|------|-------------|
| Monolithic Kernel | All OS services run in one large block (e.g., Linux) |
| Microkernel | Only essential services in the kernel; the rest run in user space (e.g., Minix, QNX) |
| Hybrid Kernel | Combination of both (e.g., Windows, macOS) |

💡 Real-Life Analogy:
Think of the kernel as the engine of a car:

 You (user) press the accelerator (application request).

 The engine (kernel) translates that into power (hardware instructions).

So, how does it work?

🧾 Think of it like this:

| Component | Role |
|-----------|------|
| Operating System (OS) | The full system that includes user interfaces, system libraries, tools, and the kernel |
| Kernel | The core of the OS — handles the actual work like memory allocation, CPU scheduling, I/O, etc. |
| User Applications | Programs that request services from the OS (like opening a file or using memory) |

12. What are the different scheduling algorithms?


1. First-Come, First-Served (FCFS) Scheduling

2. Shortest-Job-Next (SJN) Scheduling

3. Priority Scheduling
4. Shortest Remaining Time

5. Round Robin(RR) Scheduling

6. Multiple-Level Queues Scheduling

13. Describe the objective of multi-programming?

Multiprogramming increases CPU utilization by keeping multiple jobs (code and data) in memory, so that the CPU always has a job to execute while another job is busy with I/O.

SINGLE CPU

- Context switching for processes.

- Switch happens when current process goes to wait state.


- CPU idle time reduced.

17. Briefly explain FCFS?


FCFS stands for First Come First served. In the FCFS scheduling algorithm, the job that arrived first in
the ready queue is allocated to the CPU and then the job that came second and so on.

FCFS is a non-preemptive scheduling algorithm: a process holds the CPU until it either terminates or performs I/O. Thus, if a long job is assigned to the CPU first, many shorter jobs after it will have to wait.
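A minimal sketch of FCFS waiting times (burst values are made up; all jobs are assumed to arrive at time 0), showing how one long job delays the short ones behind it:

```python
# Minimal sketch: FCFS waiting times for jobs taken in arrival order.
bursts = [24, 3, 3]          # CPU bursts in ms, in arrival order

waiting, elapsed = [], 0
for b in bursts:
    waiting.append(elapsed)  # each job waits for everything before it
    elapsed += b

print(waiting)                       # [0, 24, 27]
print(sum(waiting) / len(waiting))   # average waiting time = 17.0
```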

🔐 20. What is the Banker's Algorithm?

The Banker's Algorithm is a way for the operating system to avoid deadlock while giving out
resources (like memory, CPU time, etc.) to different processes.

🛠️ How does it work (in simple steps)?


1. Each process tells the system how much maximum resource it may need.

2. When a process requests some resources:


o The system pretends to give the resources.

o Then it checks: Can all processes still finish?

3. If yes, the request is granted.

4. If no, the system waits to avoid deadlock.
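A minimal sketch of the safety check for a single resource type (the numbers are illustrative, not from the notes):

```python
# Minimal sketch of the Banker's safety check: grant a request only if,
# after pretending to grant it, every process can still finish.
def is_safe(available, max_need, allocated):
    need = [m - a for m, a in zip(max_need, allocated)]
    finished = [False] * len(max_need)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            if not done and need[i] <= available:
                available += allocated[i]   # process i finishes and releases
                finished[i] = True
                progressed = True
        if all(finished):
            return True                     # a safe completion order exists
        if not progressed:
            return False                    # someone can never finish: unsafe

# 3 processes and 3 free units of one resource type.
print(is_safe(available=3, max_need=[7, 5, 3], allocated=[2, 1, 2]))  # True
```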

21. State the main difference between logical and physical address space?
| Parameter | Logical Address | Physical Address |
|-----------|-----------------|------------------|
| Basic | Generated by the CPU | Located in a memory unit |
| Address Space | The set of all logical addresses generated by the CPU in reference to a program | The set of all physical addresses mapped to the corresponding logical addresses |
| Visibility | Users can view the logical address of a program | Users can never view the physical address of the program |
| Generation | Generated by the CPU | Computed by the MMU |
| Access | The user can use the logical address to access the physical address | The user can access physical addresses only indirectly, never directly |

🧾 24. What is Fragmentation?


Fragmentation happens when memory is broken into small, scattered pieces that are too tiny for
use — even though total free memory is enough.

🧾 Why does it happen?


When programs (processes) come and go from memory:

 They leave gaps behind.


 These gaps are too small to fit new programs.

 So, memory looks full to the system — but actually, it’s just badly organized.

📦 Types of Fragmentation:
1. External Fragmentation
Free memory is spread out in many small chunks, not in one big block.
➤ Example: 10 MB free but split into 2 MB + 3 MB + 5 MB → can’t fit a 6 MB program.
2. Internal Fragmentation
Memory is given in fixed-sized blocks, and some part of each block goes unused.
➤ Example: You get a 4 KB block but only need 3 KB → 1 KB is wasted inside.

📄25. What is Paging?


Paging is a memory management technique that helps the operating system store and access data
efficiently — without needing a large, continuous block of memory.

🧾 Why is it needed?
When a process needs to be loaded into memory:

 It's hard to find one big empty space.

 Paging solves this by breaking the process into small fixed-size pieces and putting them
wherever space is available.
🧾 How does it work?

1. Memory is divided into equal-sized blocks:

o In main memory: blocks are called frames.

o In process/secondary memory: blocks are called pages.

2. When a program runs:

o It’s broken into pages.

o These pages are loaded into available frames (anywhere in memory — not
necessarily together).

📦 Example:
 Suppose memory has 4 KB-sized frames.

 A process of 10 KB is split into 3 pages (4 KB + 4 KB + 2 KB).


 These pages can be loaded into any 3 free frames in RAM, even if they’re not next to each
other.

✅ Advantages:

 No need for continuous memory.

 Reduces wasted space.


 Solves the external fragmentation problem.

26. How does swapping result in better memory management?

What is Swapping?

Swapping means moving a process out of main memory (RAM) to secondary storage (like hard
disk), and later bringing it back when needed.

🧾 Why is Swapping Used?


 RAM is limited.

 If many processes are running, there may not be enough space.

 So, the OS temporarily moves inactive or waiting processes to the disk to free up RAM for
other active processes.

🚀 How does it help memory management?

1. ✅ Frees up RAM:
Removes inactive or blocked processes from RAM.
2. ✅ Increases CPU usage:
CPU doesn’t sit idle — always has something to run.

3. ✅ Allows multitasking:
More processes can "run" than fit in memory at once — some are active, some are
swapped out.

4. ✅ Efficient use of memory:


Only processes that need to run right now are kept in RAM.

29. When does thrashing occur?

Thrashing occurs when the operating system spends most of its time swapping pages in and out of
memory too often, instead of doing real work.

❓ When does it occur?


Thrashing occurs when:

1. Too many processes are loaded into memory.

2. Each process keeps requesting pages that are not in memory (page faults).

3. The system keeps swapping pages between RAM and disk again and again.

4. As a result, CPU spends more time swapping than running actual programs.

30. What is the best page size when designing an operating system?

📄 What is Page Size?

Page size is the size of each block of memory used in paging — a memory management technique.
Example: 4 KB, 8 KB, etc.

❓ Is there one best page size?


No, there is no single "best" page size for all systems.

⚖️ Why? It depends on factors like:

1. 🧾 Page Table Size


o Smaller pages → More pages → Bigger page table

o Larger pages → Fewer pages → Smaller page table

2. 🔄 Paging Overhead
o Too small pages mean more frequent paging, which can slow things down.

3. 🗑️ Internal Fragmentation
o Large pages waste memory if the process doesn’t use the full page.

4. ⚡ Disk I/O and Access Speed


o Larger pages transfer more data at once — faster, but maybe wasteful.

✅ Typical Sizes in Real Systems:

 Common page sizes: 4 KB, 8 KB, 16 KB, etc.

 Modern systems often use multiple page sizes (e.g., 4 KB for regular use, 2 MB for large
memory operations)

31. What is multitasking?


Multitasking is a logical extension of a multiprogramming system that supports
multiple programs to run concurrently.

In multitasking, more than one task is executed at the same time. In this technique,
the multiple tasks, also known as processes, share common processing resources
such as a CPU.

32. What is caching?


Caching is a technique that stores frequently accessed data in a smaller, faster memory called cache memory, which is closer to the CPU than main memory (RAM). The cache holds copies of data and instructions that the processor uses often; this reduces the average time needed to access data from main memory.

Modern CPUs typically have multiple levels of caches (L1, L2, and L3), and they may include separate
caches for instructions and data. By using caching, the overall system performance improves
significantly, as it minimizes delays caused by slower memory access.
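The same idea appears in software; a minimal sketch using Python's built-in `functools.lru_cache` (the slow lookup is just a stand-in for an expensive access):

```python
# Minimal sketch: caching frequently used results in faster storage.
from functools import lru_cache

@lru_cache(maxsize=128)
def slow_lookup(key):
    print("miss: fetching", key, "from slow storage")
    return key * 2                 # stand-in for an expensive computation

slow_lookup(21)   # miss: computed and stored in the cache
slow_lookup(21)   # hit: answered from the cache, no recomputation
```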

34. What is the functionality of an Assembler?

The Assembler is used to translate the program written in Assembly language into machine code.

What is machine code?

Machine code is the lowest-level programming language that a computer's CPU can directly
understand and execute.

It consists of binary instructions (0s and 1s) specific to the architecture of the processor.

35. What are interrupts?

Interrupts are signals sent by hardware or software to the processor, indicating that immediate action is required. When an interrupt occurs, the processor temporarily halts the current execution, saves its state, and transfers control to a special routine called the Interrupt Service Routine (ISR) to handle the event.

36. What is GUI?

GUI stands for Graphical User Interface. It is a type of user interface that allows users to interact with electronic devices (computers, smartphones, or any digital system) using graphical elements such as icons, buttons, menus, and windows, instead of relying only on text-based commands.

38. What is a pipe and when is it used?


A Pipe is a technique used for inter-process communication. A pipe is a mechanism by which the
output of one process is directed into the input of another process. Thus it provides a one-way flow
of data between two related processes.

Pipes are commonly used in Unix/Linux operating systems, especially in command-line


environments, to connect related processes so they can work together efficiently.
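A minimal sketch of a pipe between a parent and a child process, using `os.pipe()` and `os.fork()` (Unix-only; error handling omitted):

```python
# Minimal sketch: one-way data flow from a child process to its parent.
import os

r, w = os.pipe()                  # one-way channel: write end -> read end
pid = os.fork()
if pid == 0:                      # child: the producer
    os.close(r)
    os.write(w, b"hello from child")
    os._exit(0)
else:                             # parent: the consumer
    os.close(w)
    print(os.read(r, 1024))       # b'hello from child'
    os.waitpid(pid, 0)
```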
