Mod-03
Memory Management Unit-
The Memory Management Unit (MMU) is a hardware component that is
responsible for translating logical addresses (generated by programs) into physical
addresses in the system’s RAM. It ensures efficient and secure memory usage by
managing the virtual memory system.
Functions of the MMU-
a. Address Translation: It translates logical addresses (used by the program) to
physical addresses (in RAM) by adding the base address (from the Base
Register) to the logical address.
b. Memory Protection: It ensures that a process can only access its own
allocated memory, preventing one program from accessing the memory of
another program or the OS.
c. Memory Management Techniques:
Paging: Divides memory into fixed-size blocks (pages) and maps
virtual pages to physical pages.
Segmentation: Divides memory into segments such as code, data, and
stack for better organization and access control.
Working of MMU-
The MMU translates logical addresses generated by the program into physical
addresses by adding the value from the Base Register. It ensures the address is
within the allocated range using the Limit Register. If the translation is not in the TLB,
it checks the page table, and then the CPU accesses the physical memory location.
Swapping (with diagram)- Swapping is a memory management technique used by the operating system to manage processes when the physical memory (RAM) is full. It involves transferring processes or parts of processes between the main memory and secondary storage (disk) to free up space in RAM for other processes.
Working:
Swap-out: When the system runs out of
memory, an inactive process or part of it is
moved (swapped out) from RAM to disk (swap space).
Swap-in: When the swapped-out process is needed again, it is loaded back into RAM
(swapped in) from the disk.
Advantages:
i. Efficient Memory Use: Allows the system to run more processes than the available physical memory by swapping inactive processes to disk.
ii. Multitasking Support: Enables the operating system to support multitasking, allowing multiple programs to run concurrently.
iii. Prevents System Crashes: By swapping out processes, it prevents memory exhaustion and system crashes due to lack of available RAM.
Disadvantages:
i. Performance Overhead: Swapping incurs time overhead as accessing data from the disk is slower than from RAM, leading to potential system slowdowns (thrashing).
ii. Disk Space Requirement: Requires significant disk space for the swap file or swap partition. Improper management can lead to storage issues.
iii. Increased I/O Operations: Frequent swapping leads to high disk I/O operations, affecting system performance.
External Fragmentation- External fragmentation occurs when free memory is
divided into small scattered blocks over time, and although there is enough total
free memory, it is not contiguous, making it impossible to allocate to a new process.
Explanation-
As processes are loaded and removed from memory, free spaces get created
between allocated memory blocks. These small non-contiguous free spaces are not
usable for large processes even though their total size may be sufficient. Memory
gets fragmented externally (outside allocated areas).
Example- Suppose a system has 3 free memory blocks of 50 KB, 30 KB, and 20 KB. A
process requiring 80 KB cannot be loaded even though the total free memory (100
KB) is enough, because it is not in a single continuous block.
Causes:
i. Dynamic memory allocation without compacting.
ii. Processes of different sizes frequently coming and going.
Effects:
i. Inefficient use of memory.
ii. Difficulty in allocating large processes.
Solutions:
i. Compaction: Rearranging memory contents to combine all free spaces into one large block.
ii. Paging and Segmentation: These memory management techniques help avoid external fragmentation.
Internal Fragmentation- Internal fragmentation occurs when memory is divided into
fixed-sized blocks and a process does not use the entire block, leaving unused
(wasted) space inside the allocated memory block.
Explanation- When memory is allocated in fixed-sized partitions, a process may not
perfectly fit into the assigned partition. The unused space within that partition is
called internal fragmentation because it is internal (inside) the allocated memory but
remains wasted. This leftover space cannot be assigned to other processes, leading
to inefficient memory utilization.
Example- Suppose a memory block of 100 KB is allocated to a process that only
needs 70 KB. The remaining 30 KB is unused and causes internal fragmentation.
Causes:
i. Fixed-size memory allocation.
ii. Processes with memory requirements smaller than the partition size.
Effects:
i. Wastage of memory space.
ii. Reduced overall system performance.
Solutions:
i. Use dynamic memory allocation (variable-size partitions) instead of fixed-size partitions.
ii. Implement paging or segmentation techniques to minimize fragmentation.
How external fragmentation is removed? –
External fragmentation is removed by a method called compaction. In compaction,
the operating system shifts all the occupied memory blocks together to create a
single large contiguous block of free memory. This rearrangement helps eliminate
the small scattered free spaces and makes it possible to allocate larger processes
without any issue. However, compaction requires time and system resources, as it
involves moving processes in memory, which may cause delays. Apart from
compaction, techniques like paging and segmentation are also used to minimize
external fragmentation by allowing non-contiguous memory allocation, thus
improving memory utilization.
Valid & Invalid Bit- The valid/invalid bit is used in the page table to indicate whether
a page is part of a process’s logical address space.
If a page is assigned to a frame, it is marked with a valid bit, meaning the page is part
of the process’s logical address space and can be accessed safely. If a page is not
assigned to any frame, it is marked with an invalid bit, meaning it does not belong to
the process and accessing it will cause an error.
Paging- Paging is a memory management technique where the process is divided
into fixed-size blocks called pages, and the main memory is also divided into blocks
of the same size called frames. When a process needs to run, its pages are loaded
into any available frames in memory. This means the process doesn't need to be in
one continuous block of memory.
Advantage of Paging-
o No External Fragmentation – Memory is used more efficiently because small
gaps between blocks are avoided.
o Better Use of Memory – Pages can be placed anywhere in memory, not
necessarily in a continuous block.
o Easy Swapping – Pages can be easily moved between RAM and disk as
needed.
Disadvantage of Paging-
o Internal Fragmentation – Some space inside pages may go unused and get
wasted.
o Page Table Overhead – Every process needs a page table, which uses extra
memory.
o Slower Access Time – Accessing data takes slightly more time due to the
extra step of page table lookup.
Demand Paging vs Pure Demand Paging
Demand paging/Lazy Swapping is a memory management scheme where a page of
a process is brought into memory only when it is needed (i.e., when a page fault
occurs). Initially, the entire process isn't loaded into memory; only the required
pages are loaded on demand.
Pure demand paging is a more specific form of demand paging where no pages are
loaded into memory when the process starts. All the pages are loaded only when
they are accessed for the first time (on-demand). This is a strict version of demand
paging.
Advantage of Demand Paging-
o Efficient Memory Use – Only the pages that are needed are loaded into
memory, which saves memory space and reduces wastage.
o Faster Process Start – The process can start running without needing the
entire process to be loaded into memory, improving startup time.
o Less I/O Overhead – Since not all pages are loaded initially, the system does
less input/output work, leading to efficient use of resources.
o Supports Large Processes – Large processes can be run even on systems with
limited memory, as only parts of the process are loaded at any given time.
Disadvantage of Demand Paging-
o Page Fault Overhead – Every time a page is accessed that isn't in memory, a
page fault occurs, which leads to an overhead in terms of time and
performance.
o Slower Execution – When pages are not in memory, accessing them can
cause delays as the system needs to fetch them from secondary storage
(disk).
o Complex Memory Management – The system needs to manage page tables
and handle page faults efficiently, making memory management more
complex.
o Thrashing – If there are too many page faults (because memory is not large
enough), the system may end up spending more time swapping pages in and
out of memory than executing the program. This is called thrashing.
Locality of Reference- Locality of Reference refers to the tendency of a program to
access the same set of memory locations repeatedly within a short period of time.
This concept is important for memory management and plays a major role in
optimizing system performance.
Locality can be divided into two types:
i. Temporal Locality: Definition: If a particular memory location is accessed, it is
likely to be accessed again in the near future. Example: A variable or data that
is used multiple times in a loop.
ii. Spatial Locality: Definition: If a memory location is accessed, nearby memory
locations are likely to be accessed soon after. Example: Array elements or
contiguous blocks of data being accessed one after the other.
Page Fault- A page fault occurs when a program tries to access a page in virtual
memory that is not currently loaded into main memory (RAM). In other words, the
required page is not found in the page table and needs to be fetched from secondary
storage (like a disk or SSD).
When a page fault happens:
o The Operating System (OS) suspends the process that caused the page fault.
o It then loads the required page from disk (or secondary memory) into main
memory.
o Once the page is loaded, the OS updates the page table and resumes the
process.
Steps to Handle a Page Fault
o Program Needs a Page: The program tries to access a page (piece of data) from memory.
o Check if the Page is in Memory: The computer checks if the page is already in RAM (memory).
o Page Not Found: If the page is not in memory, a page fault happens. This means the required data is not in RAM.
o OS Takes Control: The Operating System (OS) steps in and pauses the program.
o Is the Page Access Valid?: The OS checks if the program is asking for valid data. If not, it stops the program. If it’s valid, the OS moves on to fix the problem.
o Find Space in RAM: The OS looks for free space in RAM to load the page.
o Swap Out (If Needed): If RAM is full, the OS will remove an old page and save it
to the disk to make room for the new page.
o Load the Page from Disk: The OS loads the required page from the disk into
memory (RAM).
o Update the Page Table: The OS updates a page table to show that the page is
now in memory.
o Resume the Program: The OS gives control back to the program and lets it
continue.
Page Hit-
o A Page Hit occurs when the program requests a page from memory, and that
page is already present in RAM (main memory). In other words, the data the
program needs are already in memory, so the OS doesn't need to load it from
the disk.
o Example: If a program asks for a specific page and it is already in RAM, it's a
Page Hit.
o Good Scenario: Fast, as the program gets the data directly from RAM.
Page miss-
o A Page Miss happens when the program requests a page that is not in RAM
(it's missing from memory). When this happens, the Operating System (OS)
needs to bring the required page from secondary storage (disk) into RAM.
o Example: If a program asks for a page, but the page is not in memory (it's stored
on disk), it's a Page Miss.
o Bad Scenario: Slower, because it takes time to read the page from disk.
Thrashing- Thrashing happens when the operating system spends most of its time
swapping data between RAM and disk (or secondary storage) instead of executing
the program’s instructions. This typically occurs when there is not enough memory
(RAM) available for the running programs, leading to excessive page faults and page
swaps. (graph)
Thrashing happens due to-
i. Insufficient RAM: When there is not enough physical memory (RAM) to hold all
the pages required by the running processes, the OS begins to swap pages
frequently between RAM and the disk.
ii. Excessive Page Faults: When a program constantly needs pages that are not in
memory (i.e., page faults), the OS keeps bringing pages from the disk, leading
to continuous swapping.
Effects of Thrashing-
i. Decreased System Performance: The system becomes extremely slow
because most of the CPU time is spent managing the swapping of pages
instead of executing actual program instructions
ii. High Disk Activity: Disk activity increases significantly as pages are swapped
between RAM and secondary storage. This results in high disk usage and
slow I/O operations.
iii. Program Delays: Programs take much longer to execute due to the delay
caused by constant swapping
Prevent Thrashing by-
i. Increase Physical Memory (RAM): Adding more RAM to the system can
reduce thrashing by providing enough memory for the programs to run
smoothly.
ii. Use Efficient Page Replacement Algorithms: Algorithms like LRU (Least
Recently Used) or Optimal Page Replacement can minimize page faults and
reduce the need for excessive swapping
iii. Limit the Number of Running Processes: Reducing the number of
concurrent programs running can help ensure that each program has
enough memory available to run without causing thrashing.
Page Table- A page table is a data structure used by the operating system to map
virtual addresses (used by programs) to physical addresses (in RAM).
It Is Needed because:
o When paging is used in memory management, each process uses virtual
memory.
o The page table tells where each virtual page is stored in physical memory
(which frame).
Each entry in the page table contains:
i. Frame number – where the page is stored in RAM
ii. Present/Absent bit – whether the page is in memory or not
iii. Protection bits – read/write/execute permissions
Advantages:
i. Helps in efficient memory management
ii. Supports virtual memory and paging
iii. Provides protection between processes
Disadvantages:
i. Takes extra space in memory
ii. Slows down performance due to lookups
Compaction/ Memory Compaction - Compaction is a memory management
technique used to overcome the problem of external fragmentation. It involves
shifting processes in memory to make all free memory blocks contiguous (together
in one place), so large memory blocks can be allocated to new processes.
Example-
Before compaction: | Process A | free | Process B | free | Process C | free |
After compaction: | Process A | Process B | Process C | free (one large contiguous block) |
Need for Compaction:
o During memory allocation and deallocation, small non-contiguous free spaces
are created.
o These scattered holes make it difficult to allocate large processes even if total
free memory is enough.
o Compaction shifts processes towards one side to combine free spaces.
Advantages:
i. Solves the problem of external fragmentation.
ii. Makes larger memory blocks available.
iii. Improves overall memory utilization.
Disadvantages:
i. Time-consuming, as data needs to be moved.
ii. Requires CPU overhead.
iii. May need to update addresses of processes (relocation).
Address Binding- Address Binding is the process of associating program instructions
and data to physical memory addresses. It converts logical addresses (used by
programs) into physical addresses (used by the hardware).
When a program is written, it does not know where it will be placed in memory. So,
the addresses it uses are logical or virtual. These must be bound or mapped to actual
physical memory addresses.
Types of Addresses:
o Logical Address: Also called virtual address, Generated by the CPU during
program execution
o Physical Address: Actual location in main memory (RAM), Used by the
hardware to access data
Binding is Needed:
o To make programs portable (can run anywhere in memory)
o To manage memory efficiently in multi-tasking systems
o To allow features like paging and segmentation
Example: A program may refer to a variable at address 100, but the OS loads it at
physical address 1000. The conversion from 100 (logical) to 1000 (physical) is done
through address binding.
Address Binding can happen at different stages:
i. Compile Time
ii. Load Time
iii. Execution Time
3 Stages of Address Binding-
i. Compile Time-
Binding is done during compilation.
The physical memory address is decided at compile time.
Used when we know in advance where the program will be placed in
memory.
If the starting location changes, the program must be recompiled.
Example- If the compiler knows the program starts at address 1000, it
will generate physical addresses directly.
ii. Load Time-
Binding is done at the time of loading the program into memory.
Physical addresses are assigned by the loader.
The program can be relocated to different memory locations.
Example- The OS decides where to load the program and updates the
addresses accordingly during loading.
iii. Execution Time-
Binding is done at runtime (during execution).
Logical addresses are converted to physical addresses dynamically
using MMU (Memory Management Unit).
Allows the process to move in memory while running.
Example- In execution-time binding, the conversion happens while
the program is running, allowing the program to move in memory
without needing to recompile or reload.
This is how dynamic memory allocation works in modern systems
using paging or segmentation.
Dynamic Loading- Dynamic Loading is a technique where programs load modules
(or libraries) into memory only when they are needed during execution, instead of
loading all the modules at the start.
Example- For example, a word processor might not load the spell checker module
until the user selects the "Check Spelling" option.
How It Works:
In dynamic loading, the program calls a specific function or module only when
needed. These modules are loaded into memory dynamically at runtime. If a
module is never called, it will never be loaded into memory, saving resources.
Advantages:
i. Reduces memory usage: Only the necessary code is loaded into memory.
ii. Faster startup time: The program starts quicker because only essential modules
are loaded initially.
iii. Flexible: New modules can be added or updated without modifying the
program.
Dynamic Linking- Dynamic Linking is the process where a program links to libraries
or modules at runtime, rather than at compile time.
Example: A web browser might use a graphics library to display images. The library
is linked dynamically when the program runs.
How It Works:
In dynamic linking, the executable contains references to external functions or
libraries (e.g., .dll files in Windows or .so files in Linux). These libraries are linked
when the program starts running or when a function from that library is called. The
linking process happens at execution time.
Advantages:
i. Reduces program size: The program does not need to include the code from
the library, just a reference to it.
ii. Easier updates: Libraries can be updated or replaced without recompiling the
entire program.
Virtual Memory- Virtual Memory is a memory management technique that creates an illusion for users that they have a large amount of physical memory, even if the system's actual RAM is much smaller. It uses secondary storage (like hard disks or SSDs) to store data that doesn’t fit in the RAM.
How It Works:
Virtual memory divides memory into pages
(for paging systems) or segments (for segmentation). When RAM becomes full, the
OS transfers some data from RAM to the disk, known as paging or swapping. The
CPU uses virtual addresses which are translated to physical addresses using a
Memory Management Unit (MMU).
Example:
Imagine your computer has 4GB of RAM, but you want to run a program that
requires 8GB of memory. Virtual memory allows the program to use additional
memory from the hard drive without crashing.
Benefits of Having Virtual Memory-
i. Allows Larger Programs: Virtual memory allows programs to use more memory
than what is physically available in RAM. This is especially important for large
applications like video editors or complex simulations.
ii. Better Multitasking: Virtual memory allows multiple programs to run at the same
time, even if the total memory required exceeds the physical RAM. It ensures that
each program has its own space and doesn't interfere with others.
iii. Isolation and Protection: Virtual memory provides memory isolation between
different programs, preventing one program from corrupting another's memory
space. It also helps in security by isolating the operating system's memory from
user programs.
iv. Simplifies Memory Management: Programs don’t need to worry about the
physical memory size or how memory is allocated. The operating system handles
memory allocation using virtual addresses. This makes memory management
easier and more efficient.
v. Efficient Use of RAM: Virtual memory ensures efficient utilization of RAM by
keeping only the most frequently used data in physical memory and moving other
data to disk. It reduces wasted space and maximizes the use of available
resources.
TLB- The Translation Lookaside Buffer (TLB) is a small, high-speed memory cache
that stores recent virtual-to-physical address translations. It helps speed up the
process of translating virtual addresses to physical addresses during memory
accesses.
How It Works:
o The TLB holds a limited number of entries (typically 64 or 128), each containing a virtual address and its corresponding physical address.
o When the CPU needs to access memory, it first checks the TLB for the virtual address.
o If the virtual address is found in the TLB (called a TLB hit), the translation is done quickly.
o If the address is not found in the TLB (called a TLB miss), the system will look up the page table to find the correct physical address and update the TLB with the new translation.
TLB Hit: If the requested virtual address is found in the TLB, the physical address is
quickly retrieved.
TLB Miss: If the virtual address is not in the TLB, the system checks the page table
and updates the TLB with the new translation.
Logical Address Space vs Physical Address Space
Logical Address Space
o It is the range/set of all possible logical addresses a program can use.
o Example: If a program is allowed to use addresses from 0 to 1023, then:
Logical Address Space = 0 to 1023
o It defines the total memory a program can reference.
Physical Address Space
o It is the range/set of all possible physical addresses in the system’s RAM.
o It represents the entire memory capacity of the hardware.
o Example:
If your RAM is 4GB, the physical address space might be from 0 to
4,294,967,295.
Effective Memory Access Time- Effective Memory Access Time is the average time
the system takes to access memory, considering the possibility of page faults.
It combines:
o Normal memory access time, and
o The extra time taken when a page fault occurs.
Formula:
EMAT=(1−p) × ma + p × pf_time
Where: ma = Memory access time (e.g., 100 ns); p = Page fault rate (between 0 and
1); pf_time = Time to service a page fault (can be in ms)
Example- Let’s say:
Memory access time = 100 ns
Page fault rate = 0.001
Page fault service time = 10 ms ( = 10,000,000 ns)
EMAT = (1 − 0.001) × 100 + 0.001 × 10,000,000 = 99.9 + 10,000 = 10,099.9 ns
So, the average time becomes much slower because of even a few page faults.
Segmentation- Segmentation is a memory management scheme where the logical
address space of a process is divided into variable-sized segments. Each segment
represents a logical unit of the program, such as functions, arrays, stacks, and data.
Unlike paging, where memory is divided into fixed-sized blocks (pages),
segmentation divides memory based on the program's logical structure, resulting in
variable-sized segments.
Key Features:
i. Each segment is identified by a segment number and has a base address
(starting point) and a length.
ii. Segments can be of different sizes, depending on the program's needs.
iii. The segment table is used to map the logical segments to physical memory
addresses.
iv. The logical address consists of two parts:
- Segment Number: Identifies which segment is being referenced.
- Offset: Specifies the location within the segment.
Advantages:
i. Flexibility: Segmentation allows segments to grow or shrink independently,
providing efficient memory allocation.
ii. Logical organization: The program's memory is divided based on logical
structure, making the program easier to understand and manage.
iii. Protection: Segments can be isolated, making it possible to protect one
segment from being accessed by others (e.g., preventing code from
modifying data).
Disadvantages:
i. External Fragmentation: Since segments are of different sizes, free memory
can become fragmented, leading to inefficient use of memory.
ii. Complexity: Segmentation can be harder to manage compared to paging
because it requires keeping track of varying segment sizes.
Segmentation Fault- A Segmentation Fault (often called a segfault) is a specific kind
of runtime error that occurs when a program attempts to access a memory location
that it is not permitted to access. The operating system (OS) detects this violation
and typically terminates the program to prevent further damage, such as corrupting
the memory.
It is caused by:
i. Out-of-bounds array access: Attempting to access an element outside the
allocated memory for an array.
ii. Dereferencing a NULL pointer: Accessing memory through a pointer that
hasn’t been assigned a valid address.
iii. Accessing freed memory: Using a pointer after the memory it points to has
been deallocated (dangling pointer).
iv. Writing to read-only memory: Trying to modify data in a segment that is
marked as read-only.
Example-
int arr[5];
arr[10] = 100; // Segmentation Fault (trying to access memory beyond the
array size)
Belady’s Anomaly- Belady's Anomaly refers to a situation in page replacement algorithms where increasing the number of page frames results in an increase in the number of page faults rather than a decrease. This is counterintuitive because we typically expect that having more memory (page frames) would reduce the number of page faults. It was discovered by László Bélády in 1969 and mainly occurs in the FIFO (First-In-First-Out) page replacement algorithm.
Example of Page Replacement Algorithm-
i. FIFO (First-In-First-Out)
ii. LRU (Least Recently Used)
iii. Optimal Page Replacement
iv. MRU (Most Recently Used)
v. LFU (Least Frequently Used)
vi. MFU (Most Frequently Used)
vii. Random Page Replacement
viii. Clock (Second Chance) Algorithm
FIFO Algorithm-
FIFO is a page replacement algorithm in which the oldest loaded page in memory is
replaced first when a new page needs to be loaded and there is no free space.
How it Works:
i. Pages are maintained in a queue.
ii. The first page added is the first to be removed, regardless of how frequently
or recently it was used.
Example:
Page reference string: 1, 2, 3, 4, 1, 2
Frames: 3
→ A page fault occurs when the referenced page is not in memory.
→ FIFO replaces the oldest page first.

Reference: 1  2  3  4  1  2
Frame 0:   1  1  1  4  4  4
Frame 1:   -  2  2  2  1  1
Frame 2:   -  -  3  3  3  2

Page hits = 0
Page faults = total references − page hits = 6 − 0 = 6
LRU Algorithm- LRU is a page replacement algorithm that replaces the page that has
not been used for the longest time when a new page needs to be loaded.
How it Works:
a. It keeps track of the recent usage of pages.
b. The least recently used page is removed first.
Example-
Page Reference String: 1, 2, 3, 4, 1, 2
Frames: 3
For this string LRU behaves exactly like FIFO: every reference is a page fault (6 faults, 0 hits), the same result as FIFO on this particular string.
NRU Algorithm- NRU is a page replacement algorithm that uses the "referenced"
and "modified" bits of a page to decide which page to replace. It prefers to remove a
page that is not recently used and not modified.
Working:
i. Pages are divided into 4 classes based on the Reference (R) and Modify (M) bits:
(0, 0): Not Referenced, Not Modified ← best to remove
(0, 1): Not Referenced, Modified
(1, 0): Referenced, Not Modified
(1, 1): Referenced, Modified ← avoid removing
ii. The OS resets R bits at regular intervals.
Advantage: Efficient decision-making based on actual usage and changes.
Disadvantage: Needs hardware support (R & M bits) and bit checking.
MRU Algorithm- MRU is a page replacement algorithm that removes the most
recently used page assuming it won’t be used again soon (opposite of LRU).
Working:
i. Keeps track of the most recently accessed page.
ii. When replacement is needed, removes the page accessed most recently.
Advantage: Better in specific workloads (e.g., sequential access).
Disadvantage: Generally, not ideal for general-purpose use; can lead to more page
faults.
Optimal Algorithm- The Optimal page replacement algorithm replaces the page that
will not be used for the longest time in the future.
It gives the minimum number of page faults but is not practically possible (used for
theoretical comparison).
How it Works:
i. Look ahead in the page reference string.
ii. Replace the page whose next use is farthest in the future.
Example:
Page Reference String: 1, 2, 3, 4, 1, 2
Frames: 3
The first three references fault and fill the frames. At reference 4, page 3 is evicted because it is never used again, so the remaining references to 1 and 2 are hits: 4 page faults, 2 hits.
Paging vs Segmentation-
o Memory Division: Paging divides memory into fixed-size pages; segmentation divides it into variable-size segments.
o Size: All pages are of equal size; each segment can be of a different size.
o Address: Paging uses page number + offset; segmentation uses segment number + offset.
o Purpose: Paging avoids external fragmentation; segmentation represents logical divisions like code, stack, and data.
o Fragmentation: Paging may cause internal fragmentation; segmentation may cause external fragmentation.
o User View: The user does not see paging; the user can see segments (like modules/functions).
o Example: Under paging, the OS loads the process in fixed-size chunks; under segmentation, it loads the code segment, stack segment, etc., separately.
Page Replacement Algorithm- A Page Replacement Algorithm is used by the
operating system when a page fault occurs and there is no free frame available in
memory. It decides which page to remove to load the required one.
It is Needed because:
i. When memory is full and a new page needs to be brought in.
ii. Helps in efficient memory management and reduces page faults.
First Fit Algorithm- The First Fit algorithm allocates the first available block of
memory that is large enough to accommodate the process. It searches from the
beginning of the memory blocks and assigns the first suitable one.
How it Works:
i. Scans memory from the beginning.
ii. Allocates the first free block that is large enough.
iii. If no space is found, moves to the next block.
Advantage: Simple and fast in finding memory for processes.
Disadvantage: May lead to fragmentation (unused gaps between allocated blocks).
Best Fit Algorithm- The Best Fit algorithm allocates the smallest available block that
is large enough to accommodate the process. It tries to minimize wasted space by
using the best possible fit.
How it Works:
i. Scans through all memory blocks to find the smallest block that fits.
ii. Allocates the process to the best fitting block.
Advantage: Minimizes wasted space inside allocated blocks.
Disadvantage: Requires searching the entire memory space, so it is slower than First
Fit. Can create many small leftover fragments.
Worst Fit Algorithm- The Worst Fit algorithm allocates the largest available block to
a process. It attempts to leave the biggest possible leftover space for future
allocations.
How it Works:
i. Scans all memory blocks and selects the largest block that fits the process.
ii. Allocates the process to the largest block, leaving the maximum space
behind.
Advantage: Reduces the chance of small fragmentation since large blocks are
allocated first.
Disadvantage: May cause inefficient memory usage and result in larger gaps being
left in memory.
Dirty Bit- A Dirty Bit is a flag (bit) used in paging systems to show whether a page in
memory has been modified since it was last loaded from the disk.
How It Works: When a page is loaded from disk, the dirty bit is set to 0. If the
process writes/updates data on that page, the dirty bit is set to 1.
When the page is removed from memory:
o If dirty bit = 1, it must be saved back to disk (because it has changes).
o If dirty bit = 0, it can be discarded, no need to save (no changes made).
Purpose: Saves time and I/O operations by avoiding unnecessary writes to disk.
Helps in efficient page replacement.
Example:
Suppose a program loads a page and updates some data. The dirty bit becomes 1.
So, when that page is replaced, the OS writes the updated page back to disk.
Memory Protection- Memory protection is a feature provided by the Operating
System to prevent one process from accessing the memory of another process.
Purpose:
o To ensure data security and process isolation
o To prevent accidental or malicious access
How it works:
o OS uses base and limit registers, page tables, or segmentation to define valid
memory areas for each process.
o If a process tries to access memory outside its boundary → trap or error
occurs.
Memory Sharing - Memory sharing allows multiple processes to access the same
memory segment to share data.
Purpose:
o Enables inter-process communication (IPC)
o Saves memory by avoiding duplication of data
Example:
o Multiple processes using the same code segment (like shared libraries)
o Shared memory IPC for fast communication between processes
Address Translation- Address Translation is the process of converting a logical
address (generated by a program) into a physical address (used by the hardware).
Needed:
o Programs use logical addresses for flexibility.
o Hardware accesses memory using physical addresses.
o The OS uses Memory Management Unit (MMU) to perform this translation.
How It Works:
o Logical Address → Generated by the CPU.
o MMU (Memory Management Unit) translates it using: Paging, Segmentation,
Base & limit registers
o Physical Address → Final address in RAM.
Example:
If a logical address is 100, and the base address is 1000,
then physical address = 1000 + 100 = 1100
Contiguous Memory Management- Contiguous memory management is a technique
where each process is stored in a single, continuous block of memory.
Key Points:
o Memory is divided into fixed or variable-sized partitions.
o Each process is loaded into one contiguous block.
o It’s easy to implement and fast in accessing data.
Types:
o Fixed Partitioning – Memory is divided into equal parts.
o Variable Partitioning – Partitions vary based on process size.
Advantages:
o Simple and fast.
o Easy to implement.
Disadvantages:
o Internal Fragmentation (unused space inside partitions).
o External Fragmentation (free memory scattered in small pieces).
o Difficult to resize process memory.
Techniques of Memory Management- Memory management techniques are the
methods used by the operating system to manage and allocate memory to different
processes.
Main Techniques:
Contiguous Allocation
o Each process gets a single block of memory.
o Simple but causes fragmentation.
Paging
o Memory and processes are divided into fixed-size blocks.
o Removes external fragmentation.
Segmentation
o Divides processes based on logical sections (like code, data).
o More flexible than paging.
Virtual Memory
o Allows execution of processes not fully in RAM.
o Uses the hard disk as an extension of memory.
Purpose:
o To efficiently utilize memory.
o To protect and isolate memory between processes.
o To allow multiprogramming (multiple processes running at once).