
Operating System Virtual Memory

Virtual memory is a memory management technique that allows operating systems to use disk storage to compensate for physical memory shortages, enabling efficient multitasking and memory protection. Key mechanisms include swapping, demand paging, and various page replacement algorithms like FIFO, LRU, and LFU, each with its own advantages and limitations. Understanding issues like thrashing and alternatives such as demand segmentation is essential for effective operating system design.


Introduction to Operating System

Virtual Memory
Virtual Memory is a fundamental concept in operating systems that revolutionizes
how software interacts with hardware resources. This presentation will guide you
through the core principles, mechanisms, and algorithms that make virtual
memory an indispensable part of modern computing.
What is Virtual Memory?
Virtual memory is a memory management technique that allows the operating
system (OS) to compensate for physical memory shortages by temporarily
transferring data from random access memory (RAM) to disk storage. This process
gives programs the illusion of having a continuous and larger memory space than
physically available.

• It uses both RAM and disk storage (swap space) to run larger or multiple
programs simultaneously.

• Enables efficient multitasking and better CPU utilization, as programs don't need
to be entirely loaded into physical memory.

• Provides memory protection, isolating processes from each other's memory
spaces.
Swapping

Concept

Swapping is a core mechanism in virtual memory, involving the temporary
transfer of an entire process from main memory (RAM) to secondary storage
(hard disk). This frees up RAM for other processes.

Process

When memory is scarce, the OS selects a less active process, copies its entire
memory space to the swap disk, and marks its RAM as available. When the
swapped-out process is needed again, it's swapped back into RAM.

Overhead

While crucial for memory management, swapping can be slow due to disk I/O
operations. Excessive swapping, known as "thrashing," severely degrades system
performance.
Demand Paging
Demand paging is an optimization of swapping where pages of a
program are only loaded into physical memory when they are actually
needed (on demand), rather than loading the entire program at once.
This significantly reduces the amount of physical memory required for a
process.

• Lazy Loading: Only the necessary pages are brought into memory,
improving startup times and reducing memory footprint.

• Efficient Resource Use: Physical memory is conserved, allowing more
processes to reside in RAM concurrently.

• Foundation for Virtual Memory: Demand paging is the primary technique used
to implement virtual memory in modern operating systems.
Page Fault

What is it?

A page fault occurs when a program tries to access a memory page that is part
of its logical address space but is not currently loaded into physical RAM.

Detection

The Memory Management Unit (MMU) detects the page fault by checking the
valid-invalid bit in the page table entry. If the bit indicates "invalid"
(page not in RAM), a trap is generated.

Handling

The OS handles the page fault by locating the desired page on disk, finding a
free frame in RAM, loading the page, updating the page table, and restarting
the interrupted instruction.

Impact

Page faults are normal in virtual memory systems but incur performance
overhead due to disk I/O. Frequent page faults can lead to thrashing.
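The handling steps above can be sketched in Python. This is an illustrative model only: a real kernel manipulates hardware page tables and frame lists, and the names here (`page_table`, `free_frames`, `backing_store`) are hypothetical structures, not any real OS API.

```python
def handle_page_fault(page, page_table, free_frames, physical_memory, backing_store):
    """Model of the OS steps after the MMU traps on an invalid page."""
    # 1. Locate the desired page in the backing store (disk).
    data = backing_store[page]
    # 2. Find a free frame; a real OS would run page replacement if none exist.
    if not free_frames:
        raise MemoryError("no free frame: page replacement needed")
    frame = free_frames.pop()
    # 3. Load the page into the frame (a disk I/O operation in a real system).
    physical_memory[frame] = data
    # 4. Update the page table entry and set its valid bit.
    page_table[page] = {"frame": frame, "valid": True}
    # 5. The CPU then restarts the interrupted instruction.
    return frame
```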
Page Replacement Algorithms
When a page fault occurs and all physical memory frames are occupied, the operating system must decide which page to remove from RAM to
make room for the new page. This decision is made by page replacement algorithms.

FIFO Algorithm

The simplest page replacement algorithm, FIFO (First-In, First-Out), removes
the page that has been in memory for the longest time. It's easy to implement
but doesn't always perform well because the oldest page might still be
actively used.

Optimal Page Replacement Algorithm

The optimal algorithm replaces the page that will not be used for the longest
period of time in the future. This algorithm yields the lowest possible page
fault rate but is impossible to implement in practice because it requires
foreknowledge of future page accesses. It serves as a benchmark for other
algorithms.
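The two algorithms can be compared by simulation. The Python sketch below (an illustration added for this write-up, not part of the original slides) counts page faults for each on a classic textbook reference string with 3 frames.

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory = deque()              # oldest resident page sits at the left
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()  # evict the page resident the longest
            memory.append(page)
    return faults

def optimal_faults(refs, frames):
    """Count page faults under the (clairvoyant) optimal algorithm."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            # Evict the page whose next use lies farthest in the future
            # (or that is never referenced again).
            def next_use(p):
                return refs.index(p, i + 1) if p in refs[i + 1:] else float("inf")
            memory.remove(max(memory, key=next_use))
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))     # 15 faults
print(optimal_faults(refs, 3))  # 9 faults, the minimum achievable here
```

The gap between 15 and 9 faults on the same workload is why the optimal algorithm serves as a benchmark: no practical algorithm can beat it, only approach it.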
Least Recently Used (LRU) Algorithm
The Least Recently Used (LRU) algorithm attempts to approximate the
optimal algorithm by replacing the page that has not been used for the
longest period of time. This is based on the principle of locality,
assuming that pages recently used are likely to be used again soon.

• Implementation: LRU can be implemented using a counter or a stack. With a
counter, each page entry has a time-of-use field, which is updated whenever
the page is referenced. The page with the smallest time value is replaced.
With a stack, whenever a page is referenced, it is moved to the top of the
stack. The page at the bottom is replaced.

• Performance: Generally performs well, significantly better than FIFO, and
is widely used in practice.

• Overhead: Requires significant overhead to track page usage, which can be
expensive in terms of hardware support or software complexity.
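The stack variant described above maps naturally onto Python's OrderedDict. This sketch (an illustration, not from the slides) keeps the most recently used page at the end and evicts from the front:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    memory = OrderedDict()              # insertion order acts as the recency stack
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)    # referenced page moves to the "top"
            continue
        faults += 1
        if len(memory) == frames:
            memory.popitem(last=False)  # evict the least recently used page
        memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12 faults: between FIFO (15) and optimal (9)
```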
Least Frequently Used (LFU) Algorithm

Concept

The Least Frequently Used (LFU) algorithm replaces the page that has the
smallest access count. This means the page that has been referenced the fewest
times will be evicted.

Mechanism

Each page in memory has a counter associated with it, which is incremented
every time the page is accessed. When a page needs to be replaced, the
operating system selects the page with the lowest counter value.

Limitations

LFU suffers from the problem that a page heavily used in the early stages of
a process might become unused later but still remain in memory due to its
high count. This requires additional mechanisms (like periodic resetting of
counts) to address.
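A minimal counter-based sketch of LFU (illustrative only; the slides do not specify how ties between equal counts are broken, and counts are never reset here, which exhibits exactly the limitation noted above):

```python
from collections import Counter

def lfu_faults(refs, frames):
    """Count page faults under LFU replacement (counts are never reset)."""
    memory = set()
    counts = Counter()   # per-page access count, kept even after eviction
    faults = 0
    for page in refs:
        counts[page] += 1
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            # Evict the resident page with the smallest access count.
            memory.discard(min(memory, key=lambda p: counts[p]))
        memory.add(page)
    return faults

print(lfu_faults([1, 2, 3, 1, 1, 4], 3))  # 4 faults: one each for 1, 2, 3, 4
```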
Most Frequently Used (MFU) Algorithm
The Most Frequently Used (MFU) algorithm is the inverse of LFU. It replaces the
page that has been used most often. The rationale behind MFU is that a page that
has been used heavily in the past may no longer be in active use, and thus, its
removal might free up memory for pages that are more likely to be needed.

• Mechanism: Similar to LFU, MFU also relies on an access count for each page.
However, instead of evicting the page with the lowest count, it evicts the page
with the highest count.

• Practicality: MFU is rarely used in practice because it often performs
poorly. Pages that are frequently used are usually those that are currently
active and needed by the running process. Removing them would likely lead to
immediate page faults and performance degradation.

• Contrast with LRU: While LRU focuses on temporal locality (recency), MFU
focuses on frequency, often leading to less optimal decisions for typical program
behaviors.
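A sketch of MFU for contrast (again illustrative, using hypothetical names): the example reference string shows the failure mode described above, where the hot page is evicted and immediately faults back in.

```python
from collections import Counter

def mfu_faults(refs, frames):
    """Count page faults under MFU replacement."""
    memory = set()
    counts = Counter()   # per-page access count
    faults = 0
    for page in refs:
        counts[page] += 1
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            # Evict the resident page with the LARGEST access count.
            memory.discard(max(memory, key=lambda p: counts[p]))
        memory.add(page)
    return faults

# Page 1 is hot, yet MFU evicts it when 3 arrives, so re-referencing 1 faults.
print(mfu_faults([1, 1, 2, 3, 1], 2))  # 5 references, 4 faults
```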
Thrashing & Demand Segmentation
Understanding the pitfalls of virtual memory, like thrashing, and exploring alternative management techniques, such as demand segmentation,
is crucial for designing robust operating systems.

Thrashing

Thrashing occurs when the system spends more time paging (swapping pages
between main memory and disk) than executing application instructions. This
happens when a process does not have enough physical memory to hold its
"working set" of pages, leading to a high page fault rate and severe
performance degradation.

Demand Segmentation

Demand segmentation is an extension of demand paging, where programs are
divided into logical segments rather than fixed-size pages. Segments are
loaded into memory only when referenced. This offers more flexibility in
managing memory based on program structure but introduces complexity in
segment allocation and protection.
