CO 4: Analyze performance of memory allocation based on space complexity and page replacement
policies based on time complexity.
OS Module 4-Memory Management
Content:
4.1 Memory Management Requirements, Memory Partitioning: Fixed Partitioning, Dynamic
Partitioning, Memory Allocation Strategies: Best-Fit, First-Fit, Worst-Fit, Paging and
Segmentation, TLB
4.2 Virtual Memory: Demand Paging, Page Replacement Strategies: FIFO, Optimal, LRU,
Thrashing
4.1 Memory Management in OS:
Memory:
• Memory can be defined as a collection of data in a specific format. It is used to store instructions
and process data.
• The memory comprises a large array or group of words or bytes, each with its own location.
The primary purpose of a computer system is to execute programs.
• These programs, along with the information they access, should be in the main memory during
execution. The CPU fetches instructions from memory according to the value of the program
counter.
Main Memory:
• Main Memory is a large array of words or bytes, ranging in size from hundreds of thousands to
billions of bytes.
• Main memory is a repository of rapidly available information shared by the CPU and I/O
devices. Main memory is the place where programs and information are kept when the
processor is effectively utilizing them.
• Main memory is associated with the processor, so moving instructions and information into and
out of the processor is extremely fast.
• Main memory is also known as RAM (Random Access Memory). This memory is volatile.
RAM loses its data when a power interruption occurs.
Logical and Physical Address:
Logical Address Space: An address generated by the CPU is known as a “Logical Address”. It is also
known as a Virtual address. The logical address space is the set of all logical addresses generated for a
process, which effectively corresponds to the size of the process. A logical address can change at run time.
Physical Address Space: An address seen by the memory unit (i.e. the one loaded into the memory
address register of the memory) is commonly known as a “Physical Address”. A Physical address is
also known as a Real address. The set of all physical addresses corresponding to the logical addresses
is known as the Physical address space. The run-time mapping from virtual to physical addresses is done
by a hardware device called the Memory Management Unit (MMU). A physical address always remains
constant.
Memory Management:
Memory management is the functionality of an operating system which handles or manages primary
memory and moves processes back and forth between main memory and disk during execution.
Memory management keeps track of each and every memory location, regardless of whether it is allocated
to some process or free. It determines how much memory is to be allocated to each process and
decides which process gets memory at what time. It tracks whenever some memory gets freed or
unallocated and correspondingly updates the status.
4.1.1 Memory Management Requirements:
Memory management is a crucial function of an operating system (OS), ensuring efficient use of
memory resources while maintaining system stability and security. The following are the key memory
management requirements in detail:
1. Relocation
• When a program is executed, it is not always loaded into the same physical memory location.
• In a multitasking environment, processes are frequently swapped in and out of memory, and
their locations may change.
• Programs use logical addresses, which the OS translates to physical addresses at runtime.
How does the OS handle relocation?
• The OS uses base and limit registers to map logical addresses to physical addresses.
• The Memory Management Unit (MMU) performs dynamic address translation.
• Relocation allows processes to execute regardless of where they are loaded in memory.
Example:
• A process may be initially loaded at address 1000 but later moved to 5000. The OS must ensure
that all addresses used by the program adjust accordingly.
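The base-and-limit translation described above can be sketched in a few lines of Python. The base value (5000) and limit (2000) below are assumed purely for illustration; real hardware performs this check in the MMU on every access.

```python
# Assumed register values: process image loaded at physical address 5000,
# process size 2000 bytes.
BASE = 5000
LIMIT = 2000

def to_physical(logical):
    """Translate a logical address to a physical one, enforcing bounds."""
    if logical < 0 or logical >= LIMIT:
        # Out-of-range access: the hardware would raise a protection fault.
        raise MemoryError("protection violation: address out of bounds")
    return BASE + logical

print(to_physical(100))   # 5100: logical 100 maps to base + 100
```

If the process is later relocated, only BASE needs to change; the program's logical addresses stay the same, which is exactly what makes relocation transparent.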
2. Protection
• Prevents a process from accidentally or intentionally accessing another process’s memory.
• Protects system memory (OS kernel) from user processes.
• Prevents unauthorized modification of critical data.
How does the OS ensure protection?
• Base and Limit Registers: Define the memory range a process can access.
• Access Control: Read, write, and execute permissions are enforced.
• Paging and Segmentation: Ensure that processes cannot access unauthorized memory areas.
• Memory Isolation: Prevents security vulnerabilities like buffer overflow attacks.
Example:
• If Process A tries to access Process B’s memory without permission, the OS will generate a
segmentation fault and terminate the process.
3. Sharing
• Some programs and data (e.g., shared libraries) need to be accessed by multiple processes
simultaneously.
• Reduces redundancy by allowing common code to be used by multiple applications.
• Enables Inter-Process Communication (IPC).
How does the OS manage sharing?
• Shared Memory Segments: A section of memory accessible to multiple processes.
• Paging: Allows shared pages to be mapped into multiple processes’ address spaces.
• Access Control Lists (ACLs): Define which processes can access shared memory.
Example:
• If multiple programs use the same print driver, the OS loads it into a shared memory space
instead of loading multiple copies.
4. Logical Organization
• Programs are structured in modules (e.g., code, data, stack, heap).
• The OS must manage these modules separately to allow for better memory utilization.
• Different modules may have different access rights (e.g., code is executable but not writable,
data is writable).
How does the OS handle logical organization?
• Segmentation: Divides memory into segments based on program structure.
• Paging: Divides memory into fixed-size pages, which the OS maps to frames in physical
memory.
Example:
• A compiler program has separate memory segments for:
o Code (executable instructions)
o Symbol Table (stores variable names)
o Stack (stores function calls)
o Heap (dynamic memory allocation)
5. Physical Organization
• Memory is divided into main memory (RAM) and secondary storage (HDD/SSD).
• RAM is limited, and data must be efficiently moved between memory and storage.
• Programs must be stored and retrieved without significant delays.
How does the OS manage physical organization?
• Swapping: Moves inactive processes from RAM to disk to free up memory.
• Virtual Memory: Uses disk storage as an extension of RAM.
• Memory Hierarchy Management: Balances speed and storage capacity between registers,
cache, RAM, and disk.
Example:
• If RAM is full, the OS may move an inactive process to disk (swap space) and bring it back
when needed.
6. Allocation and Deallocation
• The OS must allocate memory when a process starts and free it when it terminates.
• Poor allocation strategies can lead to memory fragmentation and inefficient use.
How does the OS allocate memory?
• Contiguous Allocation: Assigns a single block of memory to a process.
• Paging: Divides memory into fixed-size pages.
• Segmentation: Allocates memory in variable-sized segments.
How does the OS deallocate memory?
• Frees memory when a process exits.
• Reclaims unused memory to prevent memory leaks.
Example:
• A web browser requests 200MB of memory when opened and releases it when closed.
7. Fragmentation Management
• External Fragmentation: Free memory exists, but is scattered in small chunks, preventing
large allocations.
• Internal Fragmentation: Allocated blocks have unused space, wasting memory.
How does the OS reduce fragmentation?
• Compaction: Rearranges memory to create larger contiguous blocks.
• Paging: Eliminates external fragmentation by using fixed-size pages.
• Buddy System: Splits memory into power-of-two sized blocks.
Example:
• If a 100MB process needs memory but only 10MB chunks are available, external
fragmentation prevents allocation.
8. Virtual Memory Management
• Allows a system to run programs larger than physical RAM.
• Enables multitasking by keeping only active parts of programs in RAM.
• Reduces memory pressure by using disk space as extended RAM.
How does virtual memory work?
• Paging: Breaks memory into pages that are swapped in and out of RAM as needed.
• Page Replacement Algorithms:
o Least Recently Used (LRU): Removes the least-used page.
o First-In-First-Out (FIFO): Removes the oldest page.
• Demand Paging: Loads pages into memory only when required.
Example:
• A 4GB RAM system running an 8GB application uses virtual memory to handle the extra load.
9. Multitasking Support
• Modern operating systems run multiple processes simultaneously.
• The OS must allocate and deallocate memory dynamically.
• Efficient multitasking improves system performance.
How does the OS support multitasking?
• Swapping: Moves inactive processes to disk to free RAM.
• Memory Partitioning:
o Fixed Partitions: Allocates fixed memory blocks to processes.
o Dynamic Partitions: Adjusts memory allocation based on demand.
• Scheduling Algorithms: Determines which process gets CPU and memory resources.
Example:
• Running a web browser, music player, and text editor simultaneously requires efficient memory
management.
4.1.2 Memory Partitioning: Fixed Partitioning, Dynamic Partitioning:
Memory partitioning is the process of dividing the main memory (RAM) into sections to manage and
allocate space for multiple processes. This helps in efficient memory utilization and ensures multiple
programs can run simultaneously.
There are two primary types of memory partitioning:
1. Fixed Partitioning (Static Partitioning)
2. Dynamic Partitioning (Variable Partitioning)
1. Fixed Partitioning (Static Partitioning)
• Fixed partitioning divides memory into a predefined number of fixed-size partitions.
• Each partition can hold only one process at a time.
Working of Fixed Partitioning:
1. The OS divides memory into fixed partitions before execution.
2. When a process arrives, it is assigned to a partition based on its size.
3. If a process is larger than every partition, it cannot be allocated (even if enough total free space
exists across partitions).
4. If a process is smaller than its partition, the unused space within the partition leads to internal fragmentation.
Advantages:
• Simple to implement – Easy to allocate and manage memory.
• Fast allocation – OS does not need to search for space dynamically.
• Supports multiprogramming – Multiple processes can run simultaneously.
Disadvantages:
• Internal fragmentation – Unused memory inside large partitions is wasted.
• Inefficient memory use – A small process may occupy a large partition.
• Fixed number of partitions – Limits the number of concurrent processes.
Example of Fixed Partitioning:
Partition Size (MB) Process Assigned Process Size (MB) Wasted Space (MB)
P1 100 Process A 60 40
P2 200 Process B 150 50
P3 300 Process C 250 50
• A process larger than the largest partition can never be loaded, even if enough total free memory
is available across partitions.
• Internal fragmentation occurs due to unused space in each partition.
2. Dynamic Partitioning (Variable Partitioning)
• Dynamic partitioning allocates memory based on the exact size of the process, creating
variable-sized partitions.
• Memory is allocated dynamically at runtime.
Working of Dynamic Partitioning:
1. When a process arrives, it gets exactly the amount of memory it needs.
2. Once the process terminates, the memory is freed and made available for new processes.
3. Over time, external fragmentation may occur as free memory gets scattered in small non-
contiguous blocks.
Advantages:
• Efficient memory utilization – No internal fragmentation.
• More processes can be accommodated – Partitions are created dynamically.
• No fixed limits – Number of partitions is not predetermined.
Disadvantages:
• External fragmentation – Free memory is scattered, making large allocations difficult.
• Slower allocation – The OS must search for suitable free space.
• Compaction needed – The OS may need to rearrange memory to reduce fragmentation.
Example of Dynamic Partitioning:
Process Size (MB) Memory Block Allocated (MB) Remaining Free Space (MB)
Process A 120 120 0
Process B 250 250 0
Process C 180 180 0
• No internal fragmentation, as each process gets the exact space it needs.
• However, when processes terminate, memory holes can form, leading to external
fragmentation.
4.1.3 Memory Allocation Strategies:
Memory allocation is the process of assigning memory blocks to processes based on certain strategies
to optimize performance and reduce fragmentation. There are different techniques used to allocate
memory efficiently, including:
4.1.3.1. Best-Fit Allocation
4.1.3.2. First-Fit Allocation
4.1.3.3. Worst-Fit Allocation
4.1.3.4. Paging
4.1.3.5. Segmentation
4.1.3.6. TLB
4.1.3.1. Best-Fit Allocation
• The OS searches for the smallest free memory block that can accommodate the process.
• Reduces internal fragmentation by using memory efficiently.
Working of Best Fit Allocation:
1. The OS scans all available free memory blocks.
2. Finds the smallest block that is large enough for the process.
3. Allocates the memory and leaves the remaining space as a free block.
Advantages:
• Reduces wasted memory (internal fragmentation).
• Efficient memory utilization for small processes.
Disadvantages:
• Slower allocation – Requires searching through all free blocks.
• Leads to external fragmentation – Small leftover memory blocks remain unused.
Example of Best-Fit Allocation:
Given memory blocks:
Block Number Block Size (MB)
1 100
2 500
3 200
4 300
5 600
Processes to be allocated:
Process Size (MB)
P1 250
P2 120
P3 400
Best-Fit Allocation Result:
Process Size (MB) Allocated Block Remaining Space in Block
P1 250 Block 4 (300MB) 50MB
P2 120 Block 3 (200MB) 80MB
P3 400 Block 2 (500MB) 100MB
• Each process is placed in the smallest free block large enough to hold it.
• Small leftover fragments (50MB, 80MB, 100MB) remain unused, which leads to external fragmentation.
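A minimal Python sketch of the best-fit search, assuming (as in the table above) that each block holds at most one process. The block and process sizes are taken from the example.

```python
def best_fit(blocks, used, request):
    """Return the index of the smallest unused block that fits, or None."""
    best = None
    for i, size in enumerate(blocks):
        if i not in used and size >= request:
            if best is None or size < blocks[best]:
                best = i
    return best

blocks = [100, 500, 200, 300, 600]   # block sizes in MB
used = set()                          # indices of blocks already allocated
for proc in [250, 120, 400]:          # P1, P2, P3
    i = best_fit(blocks, used, proc)
    if i is None:
        print(f"{proc}MB -> not allocated")
    else:
        used.add(i)
        print(f"{proc}MB -> Block {i + 1} ({blocks[i]}MB), {blocks[i] - proc}MB left")
```

Note that the search scans every free block before choosing, which is why best fit is slower than first fit.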
4.1.3.2. First-Fit Allocation:
• Allocates the first available free block that is large enough for the process.
• Stops searching once a suitable block is found.
Working of First-Fit Allocation:
1. The OS scans memory blocks from the beginning.
2. Allocates the first block large enough to fit the process.
3. Leaves the remaining memory as a free block.
Advantages:
• Fast allocation – Stops searching once a suitable block is found.
• Works well when large processes arrive frequently.
Disadvantages:
• Can lead to external fragmentation.
• May waste larger blocks when smaller ones could be used.
Example of First-Fit Allocation:
Memory blocks available:
Block Number Block Size (MB)
1 100
2 500
3 200
4 300
5 600
Processes to be allocated:
Process Size (MB)
P1 250
P2 120
P3 400
First-Fit Allocation Result:
Process Size (MB) Allocated Block Remaining Space in Block
P1 250 Block 2 (500MB) 250MB
P2 120 Block 3 (200MB) 80MB
P3 400 Block 5 (600MB) 200MB
• Stops searching after finding the first suitable block.
• Can lead to external fragmentation when smaller blocks remain unused.
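The first-fit search stops at the first block that is big enough, as sketched below in Python (same one-process-per-block assumption and example sizes as the table above).

```python
def first_fit(blocks, used, request):
    """Return the index of the first unused block large enough, or None."""
    for i, size in enumerate(blocks):
        if i not in used and size >= request:
            return i               # stop at the first suitable block
    return None

blocks = [100, 500, 200, 300, 600]   # block sizes in MB
used = set()
for proc in [250, 120, 400]:          # P1, P2, P3
    i = first_fit(blocks, used, proc)
    if i is None:
        print(f"{proc}MB -> not allocated")
    else:
        used.add(i)
        print(f"{proc}MB -> Block {i + 1} ({blocks[i]}MB), {blocks[i] - proc}MB unused")
```

Because the scan terminates early, first fit is the fastest of the three strategies, at the cost of sometimes consuming a large block for a small request.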
4.1.3.3. Worst-Fit Allocation
• Allocates the largest available free block to the process.
• Leaves the biggest remaining space for future processes.
Working of Worst-Fit Allocation:
1. The OS searches for the largest free memory block.
2. Allocates memory to the process and leaves the rest as a free block.
Advantages:
• Helps reduce external fragmentation by keeping fewer large free blocks.
Disadvantages:
• Leads to internal fragmentation (unused space inside large blocks).
• Inefficient for small processes.
Example of Worst-Fit Allocation:
Process Size (MB) Allocated Block Remaining Space in Block
P1 250 Block 5 (600MB) 350MB
P2 120 Block 2 (500MB) 380MB
P3 400 Not allocated (largest remaining block, 300MB, is too small)
• The largest block is always chosen, so the big blocks are consumed first; a later large request
(P3) can then fail even though best fit and first fit could satisfy it, and the leftover space inside
the chosen blocks is wasted.
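Worst fit differs from best fit only in the comparison direction, as this Python sketch shows (same one-process-per-block assumption and example sizes as above). With these sizes, the 400MB request fails because the two biggest blocks were already taken.

```python
def worst_fit(blocks, used, request):
    """Return the index of the largest unused block that fits, or None."""
    best = None
    for i, size in enumerate(blocks):
        if i not in used and size >= request:
            if best is None or size > blocks[best]:   # '>' instead of '<'
                best = i
    return best

blocks = [100, 500, 200, 300, 600]   # block sizes in MB
used = set()
for proc in [250, 120, 400]:          # P1, P2, P3
    i = worst_fit(blocks, used, proc)
    if i is None:
        print(f"{proc}MB -> not allocated")
    else:
        used.add(i)
        print(f"{proc}MB -> Block {i + 1} ({blocks[i]}MB), {blocks[i] - proc}MB unused")
```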
4.1.3.4. Paging:
• Paging is a fixed-size memory management technique that divides the process and physical
memory into equal-sized blocks called pages and frames.
• Pages → Fixed-sized blocks in the logical memory (process memory).
• Frames → Fixed-sized blocks in the physical memory (RAM).
• A page table is used to keep track of the mapping between pages and frames.
Working of Paging:
1. The logical memory (process memory) is divided into fixed-size pages.
2. The physical memory (RAM) is divided into fixed-size frames of the same size as pages.
3. When a process is loaded into memory, its pages are placed into available frames in RAM.
4. A page table is maintained to keep track of which page is stored in which frame.
5. The MMU (Memory Management Unit) translates logical addresses (page numbers) to
physical addresses (frame numbers).
Example of Paging:
Given:
• Page size = 4KB
• Process size = 10KB
• Available frames in RAM = 4, 7, 2
Memory Mapping:
Page Number Logical Address Range Mapped Frame Number Frame Base Address
Page 0 0–4KB Frame 4 4 × 4KB = 16KB
Page 1 4–8KB Frame 7 7 × 4KB = 28KB
Page 2 8–10KB Frame 2 2 × 4KB = 8KB
• Logical Address = (Page Number, Offset)
• Physical Address = (Frame Number × Page Size) + Offset
• The 10KB process needs 3 pages; the last page occupies only 2KB of its 4KB frame, so 2KB is
lost to internal fragmentation.
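The translation above can be sketched in Python. The page table below is the one from the example (page 0 → frame 4, page 1 → frame 7, page 2 → frame 2), with the page size taken as 4KB = 4096 bytes.

```python
PAGE_SIZE = 4 * 1024                 # 4KB pages, as in the example
page_table = {0: 4, 1: 7, 2: 2}      # page number -> frame number

def translate(logical_address):
    """Split a logical address into (page, offset) and map it to a frame."""
    page = logical_address // PAGE_SIZE      # page number
    offset = logical_address % PAGE_SIZE     # offset within the page
    frame = page_table[page]                 # page-table lookup (done by the MMU)
    return frame * PAGE_SIZE + offset        # frame base + offset

print(translate(5000))   # page 1, offset 904 -> 7 * 4096 + 904 = 29576
```

The division and modulus here are what real hardware does with simple bit operations, since page sizes are powers of two.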
Advantages of Paging:
• No External Fragmentation – Since memory is allocated in fixed blocks, there are no unused
gaps in memory.
• Efficient Memory Utilization – Processes can be loaded anywhere in available frames,
allowing flexibility.
• Faster Allocation and Access – The OS can allocate memory quickly using the page table.
Disadvantages of Paging:
• Internal Fragmentation – If a process does not completely fill the last page, some memory is
wasted.
• Page Table Overhead – Each process needs a page table, which consumes additional memory.
• Page Table Lookups Increase Access Time – Address translation requires extra processing,
which can slow down memory access.
4.1.3.5. Segmentation:
• Segmentation is a variable-sized memory management technique that divides a process into
logical segments (e.g., Code, Data, Stack, Heap).
• Each segment has a base address and a limit (size).
• A segment table stores segment information and maps logical addresses to physical addresses.
Working of Segmentation:
1. The OS divides a process into segments (e.g., Code, Stack, Data).
2. Each segment is placed in memory, but segments can be of different sizes.
3. A segment table is used to keep track of segment locations.
4. Logical addresses consist of Segment Number + Offset.
5. The MMU (Memory Management Unit) translates logical addresses to physical addresses
using the segment table.
Example of Segmentation:
Given:
• A process has the following segments:
Segment Name Size (KB) Base Address (Starting Memory Location)
Code 4KB 2000
Data 3KB 6000
Stack 2KB 9000
Logical to Physical Address Translation:
• Logical Address = (Segment Number, Offset)
• Physical Address = Base Address + Offset
Example:
• Address (1, 500) means: Segment 1 (Data Segment) + Offset 500
• Physical Address = 6000 + 500 = 6500
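A small Python sketch of the segment-table lookup, using the example's segments. The limits are taken as decimal byte counts (e.g. Code 4KB → 4000) to match the decimal base addresses above; real hardware also performs the limit check on every access.

```python
# Segment table from the example: segment number -> (base, limit)
segment_table = {
    0: {"base": 2000, "limit": 4000},   # Code, 4KB
    1: {"base": 6000, "limit": 3000},   # Data, 3KB
    2: {"base": 9000, "limit": 2000},   # Stack, 2KB
}

def translate(segment, offset):
    """Map a (segment, offset) logical address to a physical address."""
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        # Access beyond the segment: hardware raises a segmentation fault.
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return entry["base"] + offset

print(translate(1, 500))   # Data segment: 6000 + 500 = 6500
```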
Advantages of Segmentation:
• No Internal Fragmentation – Memory is allocated as needed, avoiding wasted space inside
segments.
• Logical Structure Mapping – It aligns with the structure of programs, making memory
management more intuitive.
• Better User Control – Programs can access segments independently, which improves
efficiency.
Disadvantages of Segmentation:
• External Fragmentation – Free memory is scattered across different segments, making
allocation difficult.
• Complex Memory Management – The OS must maintain a segment table and handle dynamic
allocations.
• Compaction Required – To reduce fragmentation, the OS may need to shift segments, which
is time-consuming.
Consider requests from processes arriving in the given order: 300K, 25K, 125K, and 50K. Let there be two
blocks of memory available: one of size 150K followed by one of size 350K.
Which of the following partition allocation schemes can satisfy the above requests?
A) Best fit but not first fit.
B) First fit but not best fit.
C) Both First fit & Best fit.
D) Neither first fit nor best fit.
Solution:
Let us try all options.
Best Fit:
300K is allocated from the 350K block; 50K is left in the block.
25K is allocated from the remaining 50K; 25K is left in the block.
125K is allocated from the 150K block; 25K is left in this block as well.
50K cannot be allocated even though 25K + 25K of space is free, because the free space is not
contiguous.
First Fit:
The 300K request is allocated from the 350K block; 50K is left over.
25K is allocated from the 150K block; 125K is left over.
Then 125K and 50K are allocated to the remaining leftover partitions.
So first fit can satisfy all the requests.
Therefore, option B is the correct choice.
4.1.3.6. Translation Lookaside Buffer (TLB):
The Translation Lookaside Buffer (TLB) is a specialized cache used in a computer’s memory
management unit (MMU) to speed up virtual-to-physical address translation.
When a CPU accesses memory, it uses virtual addresses, which must be translated into physical
addresses using the page table. However, looking up the page table in RAM is slow. To reduce this
latency, the TLB stores recently used page table entries, allowing quicker address translation.
Working of TLB:
1. CPU generates a virtual address.
2. TLB checks if the page number is stored (TLB hit/miss).
o TLB Hit: The corresponding physical frame number is retrieved quickly.
o TLB Miss: The OS must access the page table in RAM, update the TLB, and retry the
access.
3. Physical address is used to access memory.
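The hit/miss flow above can be modelled with a tiny Python sketch. The page table and TLB capacity here are assumed for illustration; the eviction policy is a simple FIFO, whereas real TLBs use hardware-specific policies.

```python
class TLB:
    """Tiny TLB model: a small cache of page -> frame mappings."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}                      # page number -> frame number

    def lookup(self, page):
        return self.entries.get(page)          # None means a TLB miss

    def update(self, page, frame):
        if len(self.entries) >= self.capacity:
            oldest = next(iter(self.entries))  # evict the oldest entry (FIFO)
            del self.entries[oldest]
        self.entries[page] = frame

page_table = {0: 4, 1: 7, 2: 2}                # full page table, kept "in RAM"
tlb = TLB(capacity=2)

def translate(page):
    frame = tlb.lookup(page)
    if frame is None:            # TLB miss: walk the page table, cache the result
        frame = page_table[page]
        tlb.update(page, frame)
    return frame

print(translate(1))   # 7 (miss: page table walked, entry cached)
print(translate(1))   # 7 (TLB hit: no page table access)
```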
TLB Characteristics:
• Small but fast: Contains only a limited number of entries.
• Fully associative or set associative: Unlike traditional caches, the TLB often allows any
virtual page number to be stored in any slot.
• Flush on context switch: Some OSs flush the TLB on a process switch to avoid conflicts
(unless they support address space identifiers (ASIDs)).
Performance Impact:
• A high TLB hit rate improves performance significantly.
• A TLB miss incurs additional overhead due to page table lookup.
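The cost of a miss can be quantified with the standard effective-access-time formula; the timings and hit ratio below are assumed example values, not measured figures.

```python
# Assumed timings: TLB lookup 20 ns, memory access 100 ns, TLB hit ratio 90%.
t_tlb, t_mem, hit_ratio = 20, 100, 0.90

# Hit:  one TLB lookup + one memory access.
# Miss: TLB lookup + one page-table access in RAM + the actual memory access.
eat = hit_ratio * (t_tlb + t_mem) + (1 - hit_ratio) * (t_tlb + 2 * t_mem)
print(eat)   # ~130 ns
```

Even a 10% miss rate adds noticeably to the average access time, which is why a high TLB hit rate matters so much.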
TLB vs Page Table:
Feature TLB Page Table
Location MMU (Hardware) Main Memory (RAM)
Speed Very Fast Slower (Memory Access)
Size Small (Limited Entries) Large
Usage Caches recent address mappings Stores complete mappings
4.2 Virtual Memory:
Virtual Memory is a memory management technique that provides an illusion of a larger main
memory (RAM) than physically available. It enables a system to run large applications and multitask
efficiently by using a combination of RAM and disk space (swap space or page file).
Need for Virtual Memory:
• Process Isolation: Each process gets its own virtual address space, preventing interference.
• Efficient Multitasking: Multiple programs can run simultaneously, even if they exceed
available RAM.
• Large Program Execution: Applications larger than RAM can run without modification.
• Memory Protection: Prevents a process from accessing memory allocated to another process.
• Simplifies Memory Allocation: The OS can allocate memory dynamically without worrying
about fragmentation.
Working of Virtual Memory:
1. Address Translation (Virtual to Physical Address Mapping)
Each process gets a virtual address space, which is mapped to physical memory (RAM) through a
structure called the page table.
• Virtual Address: Used by programs.
• Physical Address: The actual memory location in RAM.
• MMU (Memory Management Unit): A hardware component that performs the address
translation.
When a process accesses memory:
1. The CPU generates a virtual address.
2. The MMU checks the page table for the corresponding physical address.
3. If the page is in RAM (page hit), it is accessed immediately.
4. If the page is not in RAM (page fault), the OS loads it from disk (swap space).
Advantages of Virtual Memory:
• Enables large applications to run
• Efficient memory usage
• Provides memory protection
• Allows multitasking
Disadvantages of Virtual Memory:
• Page faults slow performance
• High disk usage (wears out SSDs)
• Thrashing (too much swapping = poor performance)
4.2.1 Demand Paging:
Demand paging is a memory management technique used in operating systems (OS) to load pages into
memory only when they are needed. Instead of loading the entire process into memory at once, the OS
brings in pages dynamically as required, reducing memory usage and improving system efficiency.
Demand Paging Working:
1. Lazy Loading: When a process starts, only essential pages (like the first instruction) are loaded;
the rest are kept in secondary storage (disk).
2. Page Fault Handling:
o If a process requests a page that is not in RAM, a page fault occurs.
o The OS suspends the process and fetches the required page from disk.
o The page is loaded into an available frame, and the page table is updated.
o The process resumes execution.
Components of Demand Paging
• Page Table: Keeps track of which pages are in memory.
• Valid-Invalid Bit: Each page has a bit indicating whether it is in RAM (valid) or not (invalid).
• Page Fault Handler: The part of the OS that handles page faults by fetching missing pages
from the disk.
• Secondary Storage (Swap Space): Stores pages that are not currently in RAM.
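The valid-invalid bit and the fault handler can be sketched together in Python. The page table, frame numbers, and the "free frame" below are all hypothetical; a real OS would read the page from swap space into the frame.

```python
# Toy page table: the valid bit says whether the page is currently in RAM.
page_table = {
    0: {"frame": 4, "valid": True},       # already resident
    1: {"frame": None, "valid": False},   # on disk
}
free_frames = [9]        # frames available for page-ins (assumed)
page_faults = 0

def access(page):
    """Return the frame for a page, servicing a page fault if needed."""
    global page_faults
    entry = page_table[page]
    if not entry["valid"]:                 # page fault: page not in RAM
        page_faults += 1
        entry["frame"] = free_frames.pop() # "load from disk" into a free frame
        entry["valid"] = True              # page table updated, process resumes
    return entry["frame"]

print(access(0))   # 4  (valid bit set: a page hit)
print(access(1))   # 9  (page fault: loaded into frame 9)
print(access(1))   # 9  (now resident: a hit)
```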
Advantages of Demand Paging
• Reduces Memory Waste: Loads only necessary pages.
• Allows Large Programs to Run: A program larger than available RAM can execute efficiently.
• Better Multiprogramming: More processes can reside in memory, improving CPU utilization.
• Faster Startup Time: No need to load the entire process before execution.
Disadvantages of Demand Paging
• Page Fault Overhead: Frequent page faults can slow down the system.
• Thrashing: If too many page faults occur, the system spends more time swapping pages than
executing processes.
• Complex Implementation: Managing page tables and handling faults increases OS
complexity.
4.2.2 Page Replacement Strategies
When a process executes, it requires certain pages to be in main memory (RAM). However, due to
limited memory, not all pages of a process can be kept in RAM at all times. Page Replacement
Strategies are used to decide which page to remove when a new page needs to be loaded into memory.
Goals of Page Replacement Algorithms
• Minimize page faults: Fewer page faults mean better performance.
• Efficient memory utilization: Make the best use of available RAM.
• Avoid thrashing: Ensure that too many pages are not swapped in and out unnecessarily.
Belady’s Anomaly:
Belady’s Anomaly is a counterintuitive phenomenon in page replacement algorithms where increasing
the number of memory frames increases the number of page faults instead of reducing them. This
occurs in certain non-optimal page replacement algorithms, such as First-In-First-Out (FIFO).
Why Does Belady’s Anomaly Occur?
• Some non-optimal page replacement algorithms (like FIFO) may remove frequently used
pages when frames increase.
• This happens because FIFO does not consider how recently or frequently a page has been
used.
• When a process gets more frames, the replacement order may lead to pages being swapped out
more frequently, causing more faults.
Algorithms Affected by Belady’s Anomaly
• First-In-First-Out (FIFO): Affected because it blindly removes the oldest page.
• Random Page Replacement: Can be affected in some cases.
Algorithms That Do NOT Suffer from Belady’s Anomaly
• Least Recently Used (LRU): Replaces least recently used pages, avoiding unnecessary
removals.
• Optimal Page Replacement: Always removes the page that won’t be used for the longest time
(ideal case).
How to Prevent Belady’s Anomaly?
• Use LRU or Optimal Page Replacement instead of FIFO.
• Implement Working Set Model, which ensures only active pages stay in memory.
• Dynamically adjust frame allocation based on actual page faults.
1. FIFO (First-In-First-Out) Page Replacement
• The oldest page in memory (loaded first) is removed when a new page needs to be loaded.
• Uses a queue structure (first page inserted is the first to be removed).
Implementation Steps
1. Maintain a queue of pages in memory.
2. When a new page is needed and memory is full:
o Remove the oldest page (front of the queue).
o Insert the new page at the end of the queue.
Example
Consider a page reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
• Assume 3 frames (memory slots).
Step Reference Pages in Memory Page Fault?
1 1 1 - - Yes
2 2 1 2 - Yes
3 3 1 2 3 Yes
4 4 2 3 4 Yes
5 1 3 4 1 Yes
6 2 4 1 2 Yes
7 5 1 2 5 Yes
8 1 1 2 5 No
9 2 1 2 5 No
10 3 2 5 3 Yes
11 4 5 3 4 Yes
12 5 5 3 4 No
Total Page Faults: 9
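The trace above can be reproduced with a short Python simulation using a queue, exactly as the implementation steps describe:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement for a reference string."""
    memory = deque()              # oldest page at the left
    faults = 0
    for page in refs:
        if page not in memory:    # page fault
            faults += 1
            if len(memory) == frames:
                memory.popleft()  # evict the oldest page
            memory.append(page)   # insert the new page at the end
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9
```

This reference string also demonstrates Belady's Anomaly: with 4 frames instead of 3, FIFO produces 10 faults rather than fewer.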
Advantages:
• Simple to implement (easy queue-based approach).
• Fair – replaces the oldest page (no need to track usage).
Disadvantages:
• Belady’s Anomaly: increasing the number of frames can increase page faults.
• Not optimal: may remove a frequently used page.
2. Optimal Page Replacement
• Replace the page that will not be used for the longest time in the future.
• Theoretical best-case algorithm, used for comparison.
Implementation Steps
1. Scan the page reference string.
2. Identify which page will not be used for the longest time.
3. Replace that page.
Example (Same Reference String, 3 Frames)
Step Reference Pages in Memory Page Fault?
1 1 1 - - Yes
2 2 1 2 - Yes
3 3 1 2 3 Yes
4 4 1 2 4 Yes (3 replaced: next used farthest in the future)
5 1 1 2 4 No
6 2 1 2 4 No
7 5 1 2 5 Yes (4 replaced)
8 1 1 2 5 No
9 2 1 2 5 No
10 3 3 2 5 Yes (1 replaced: never used again)
11 4 4 2 5 Yes (3 replaced: never used again)
12 5 4 2 5 No
Total Page Faults: 7
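Although Optimal is not implementable in a real OS, it is easy to simulate offline since the whole reference string is known. A Python sketch:

```python
def optimal_faults(refs, frames):
    """Count page faults under Optimal replacement (needs the full future)."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue                      # page hit
        faults += 1
        if len(memory) < frames:
            memory.append(page)           # free frame available
        else:
            # Evict the resident page whose next use is farthest away;
            # pages never referenced again count as infinitely far.
            def next_use(p):
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")
            victim = max(memory, key=next_use)
            memory[memory.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 3))   # 7
```

This is the lower bound that FIFO (9 faults) and LRU are compared against.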
Advantages:
• Least page faults (theoretically optimal).
• Best memory utilization.
Disadvantages:
• Not feasible in a real OS (requires future knowledge).
• Used for analysis, not practical implementation.
3. LRU (Least Recently Used) Page Replacement
• Replace the page that has not been used for the longest time.
• Assumes that recently used pages are more likely to be used again soon.
Implementation Steps
1. Maintain a record of page usage (e.g., timestamps, stack).
2. When a page needs to be replaced:
o Identify the least recently used page.
o Remove it and insert the new page.
Example (Same Reference String, 3 Frames)
Step Reference Pages in Memory Page Fault?
1 1 1 - - Yes
2 2 1 2 - Yes
3 3 1 2 3 Yes
4 4 2 3 4 Yes (1 replaced)
5 1 3 4 1 Yes (2 replaced)
6 2 4 1 2 Yes (3 replaced)
7 5 1 2 5 Yes (4 replaced)
8 1 1 2 5 No
9 2 1 2 5 No
10 3 1 2 3 Yes (5 replaced)
11 4 2 3 4 Yes (1 replaced)
12 5 3 4 5 Yes (2 replaced)
Total Page Faults: 10
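LRU can be simulated in Python by keeping the pages ordered by recency of use (most recently used at the end of the list); a real implementation would use hardware counters or a stack, as noted above.

```python
def lru_faults(refs, frames):
    """Count page faults under LRU replacement for a reference string."""
    memory = []                   # least recently used at index 0
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)   # hit: refresh its recency below
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)     # evict the least recently used page
        memory.append(page)       # page is now the most recently used
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # 10
```

On this string LRU (10 faults) sits between FIFO (9) and Optimal (7); on workloads with strong locality LRU typically beats FIFO, and unlike FIFO it never exhibits Belady's Anomaly.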
Advantages:
• Good performance – better than FIFO.
• Does not suffer from Belady’s Anomaly.
Disadvantages:
• Expensive to implement (requires history tracking).
• More complex than FIFO.
4. Thrashing
• Occurs when a system spends more time swapping pages in and out of memory than
executing processes.
• Leads to low CPU utilization and system slowdowns.
Causes of Thrashing
1. Too many processes running simultaneously (not enough frames).
2. Insufficient RAM (high page fault rate).
3. Bad page replacement policy (frequently removing necessary pages).
Symptoms of Thrashing
• High page fault rate.
• Slow program execution.
• High disk activity (frequent swapping).
How to Prevent Thrashing?
• Working Set Model: keep the most actively used pages in memory.
• Page Fault Frequency (PFF) Control: adjust allocated frames dynamically.
• Reduce Degree of Multiprogramming: limit the number of active processes.
Comparison of Page Replacement Algorithms
Algorithm          Belady’s Anomaly?   Efficiency   Complexity                    Practical Usage
FIFO               Yes                 Poor         Low (Simple)                  No
Optimal            No                  Best         High (Future Knowledge)       No (used for analysis)
LRU                No                  Good         Moderate (History Tracking)   Yes (used in real OS)
Thrashing Control  -                   -            High                          Required in OS