Chapter 10: Virtual Memory
Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne ©2018
Outline
• Background
• Demand Paging
• Page Replacement
Objectives
• Define virtual memory and describe its benefits.
• Illustrate how pages are loaded into memory using demand
paging.
• Apply the FIFO, optimal, and LRU page-replacement algorithms.
Background
In Chapter 9, we discussed various memory-management strategies
used in computer systems.
All these strategies have the same goal: to keep many processes in
memory simultaneously to allow multiprogramming.
However, they require that an entire process be in memory before it
can execute.
In practice, the entire program is rarely used all at once, and its parts may not be needed at the same time:
• Error-handling code, unusual routines, large data structures
Virtual memory
Virtual memory is a technique that allows the execution of processes that
are not completely in memory. It provides a separation of user logical
memory from physical memory.
Advantages:
• Only part of the program needs to be in memory for execution
• Programs can be larger than physical memory (i.e., Logical address space
can therefore be much larger than physical address space)
• Allows processes to share files and libraries, and to implement shared
memory. (i.e., it allows address spaces to be shared by several processes)
• Allows for more efficient process creation
• More programs can run concurrently, increasing CPU utilization and throughput
• Less I/O needed to load or swap processes
Disadvantages:
• Virtual memory is not easy to implement, and may decrease performance if it is
used carelessly.
Virtual memory (Cont.)
Virtual address space – logical view of how process is stored in
memory. Usually starts at address 0 and has contiguous addresses
until end of space
• Meanwhile, physical memory is organized into page frames
• MMU must map logical to physical
Virtual memory can be implemented via:
• Demand paging
• Demand segmentation
• Backing store is typically part of a hard disk that is used
by a paging or swapping system to store information not
currently in main memory.
• Backing store is slower and cheaper than main memory.
Fig 10.1 Virtual Memory That is Larger Than Physical Memory
Figure 10.2 Virtual address space of a process in memory.
The logical address space is usually designed so that the stack starts at the
maximum logical address and grows “down,” while the heap grows “up”
• The unused address space between the two is a hole
No physical memory needed until heap or stack
grows to a given new page
Virtual address spaces that include holes are known as sparse
address spaces.
Using a sparse address space is beneficial because the holes
can be filled as the stack or heap segments grow or if we wish
to dynamically link libraries (or other shared objects) during
program execution.
Figure 10.3 Shared library using virtual memory.
Virtual memory allows files and memory to be shared by two or more processes through page
sharing. This leads to the following benefits:
• System libraries can be shared via mapping into a virtual address space
• Shared memory can be implemented by mapping pages read-write into a virtual address space
• Pages can be shared during fork(), thus speeding up process creation
Demand Paging
Instead of bringing the entire process into memory at load time, we can load
a page into memory only when it is needed. This technique is known as
demand paging and is commonly used in virtual memory systems.
With demand-paged virtual memory, pages are loaded only when they are demanded
during program execution. Pages that are never accessed are thus never loaded
into physical memory. It is similar to a paging system with swapping (see figure).
• Less I/O needed, no unnecessary I/O
• Less memory needed
• Faster response
• More users
Lazy swapper – never swaps a page into memory unless page will be
needed
• Swapper that deals with pages is a pager
Demand Paging
Demand Paging
As a result, while a process is executing, some pages will be in memory, and
some will be in secondary storage.
Thus, we need new MMU functionality to implement demand paging
This is done with some form of hardware support to distinguish between the
two. The valid–invalid bit scheme can be used for this purpose.
when the bit is set to “valid,” the associated page is both legal and in
memory (memory resident).
If the bit is set to “invalid,” the page either is not valid (that is, not in
the logical address space of the process) or is valid but is currently in
secondary storage (not memory resident)
But what happens if the process tries to access a
page that was not brought into memory?
Access to a page marked invalid causes a page fault.
Figure 10.4 Page Table When Some Pages are not in Main Memory
Valid-Invalid Bit
With each page table entry a valid–invalid bit is associated
(v -> in memory, i -> not in memory)
Initially valid–invalid bit is set to i on all entries
During MMU address translation, if the valid–invalid bit in a page table entry is i -> page
fault.
Example of a page table snapshot:
Steps in Handling Page Fault
The procedure for handling this page fault is straightforward:
1. A reference to a page whose bit is set to invalid traps to the
operating system (page fault)
2. The operating system looks at another table (an internal table usually kept with
the process control block) to decide:
• Invalid reference -> abort the process
• Valid reference, but the page is just not in memory -> page it in
3. Find free frame (by taking one from the free-frame list, for example).
4. Read the desired page into the newly allocated frame.
5. Reset tables to indicate the page is now in memory, i.e., set the valid bit = v
6. Restart the instruction that caused the page fault
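The steps above can be sketched as a tiny simulation. This is a hedged illustration, not how a real OS (or MMU, which does the translation in hardware) is written; all names are hypothetical, and a free frame is assumed to be available, so no replacement is needed yet:

```python
class InvalidReference(Exception):
    """Reference outside the process's logical address space."""

def access(page, page_table, valid_pages, free_frames, frames, backing_store):
    """Return the frame holding `page`, faulting it in on demand."""
    if page in page_table:                       # bit == 'v': memory resident
        return page_table[page]
    # Page fault: trap to the operating system (step 1).
    if page not in valid_pages:                  # step 2: invalid -> abort
        raise InvalidReference(page)
    frame = free_frames.pop()                    # step 3: take a free frame
    frames[frame] = backing_store[page]          # step 4: read the page in
    page_table[page] = frame                     # step 5: set the valid bit
    return frame                                 # step 6: restart instruction

table, frames = {}, {}                           # all bits 'i' initially
frame = access(5, table, valid_pages={5}, free_frames=[0, 1],
               frames=frames, backing_store={5: "page-5 data"})
print(table)                                     # {5: 1}
```

A second `access(5, ...)` would now hit in `table` and return immediately, which is exactly the valid-bit fast path.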
Steps in Handling a Page Fault (Cont.)
Aspects of Demand Paging
Extreme case – start process with no pages in memory
• OS sets instruction pointer to first instruction of process, non-memory-resident ->
page fault
• The same happens for every other page of the process on its first access
• This is called Pure demand paging
In fact, a given instruction could access multiple pages (one page for the instruction
and many for data) and may thus cause multiple page faults per instruction
• Consider fetch and decode of instruction which adds 2 numbers from memory and
stores result back to memory
• Programs tend to have locality of reference, which results in reasonable
performance for demand paging
Hardware support needed for demand paging:
• Page table with valid / invalid bit
• Secondary memory (swap device with swap space)
• Instruction restart
Free-Frame List
When a page fault occurs, the operating system must bring the desired page from
secondary storage into main memory.
Most operating systems maintain a free-frame list -- a pool of free frames for
satisfying such requests.
Operating systems typically allocate free frames using a technique known as
zero-fill-on-demand -- the contents of the frames are zeroed out before being allocated.
When a system starts up, all available memory is placed on the free-frame list.
• As free frames are requested (for example, through demand paging), the size of the
free-frame list shrinks.
• At some point, the list either falls to zero or falls below a certain threshold, at which
point it must be re-populated.
We cover strategies for both of these situations in the Page Replacement section
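A minimal sketch of a free-frame list with zero-fill-on-demand (all names and the threshold value are illustrative; `replenish` is deliberately left as a stub, since re-population strategies belong to page replacement):

```python
PAGE_SIZE = 4          # tiny frames, for illustration only
THRESHOLD = 2          # re-populate when the list falls below this

class FreeFrameList:
    def __init__(self, num_frames):
        # At system startup, all available frames go on the free list.
        self.free = list(range(num_frames))
        self.memory = {f: [None] * PAGE_SIZE for f in range(num_frames)}

    def allocate(self):
        """Hand out one frame, zeroing its contents on demand."""
        if len(self.free) <= THRESHOLD:
            self.replenish()                       # e.g., run page replacement
        frame = self.free.pop()                    # list shrinks per request
        self.memory[frame] = [0] * PAGE_SIZE       # zero-fill-on-demand
        return frame

    def replenish(self):
        pass   # stub: strategies are covered under page replacement

ffl = FreeFrameList(num_frames=8)
f = ffl.allocate()
print(ffl.memory[f])    # [0, 0, 0, 0]
```

Zeroing at allocation time (rather than at free time) keeps the security guarantee while deferring the work until a frame is actually handed out.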
What Happens if There is no Free Frame?
Over-allocation of memory is shown in Fig 10.9 on the next slide.
• While a process is executing, a page fault occurs.
• The operating system determines where the desired page is residing on
secondary storage but then finds that there are no free frames on the free-frame
list; all memory is in use.
• The fact that there are no free frames is depicted by a question mark.
Fig 10.9 Need For Page Replacement
What Happens if There is no Free Frame?
Performance – we want an algorithm that results in the minimum number of
page faults
Algorithm :
1. Terminate? Not best choice as users should not be aware that their
processes are running on a paged system
2. Swap out process? No longer used by most operating systems due to
the overhead of copying entire processes between memory and swap
space.
3. Replace the page? The most common solution.
Page replacement – find some page in memory that is not really in use, and page it out
The same page may be brought into memory several times
Page Replacement
Prevent over-allocation of memory by modifying page-fault service routine to
include page replacement
Use modify (dirty) bit to reduce overhead of page transfers – only modified
pages are written to disk
Page replacement is basic to demand paging. Page replacement completes
separation between logical memory and physical memory; and hence large
virtual memory can be provided on a smaller physical memory
Basic Page Replacement
The modified page-fault routine with page replacement: (Figure 10.10)
1. Find the location of the desired page on disk
2. Find a free frame:
a. If there is a free frame, use it
b. If there is no free frame, use a page replacement algorithm to select a victim frame
c. Write the victim frame to disk (secondary storage)
3. Bring the desired page into the (newly) free frame; update the page and frame
tables
4. Continue the process by restarting the instruction that caused the trap.
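The routine above can be sketched as follows. This is an illustrative sketch, not a real kernel path: victim selection is abstracted behind a caller-supplied `select_victim`, and the modify (dirty) bit mentioned earlier is used to skip unnecessary write-backs; all names are hypothetical:

```python
def handle_fault(page, page_table, free_frames, dirty, backing_store,
                 select_victim):
    """Sketch of the page-fault routine with page replacement."""
    # Step 1: the desired page's location on disk is backing_store[page].
    # Step 2: find a free frame, replacing a victim if none is free.
    if free_frames:
        frame = free_frames.pop()
    else:
        victim = select_victim(page_table)       # replacement algorithm
        frame = page_table.pop(victim)           # victim is unmapped
        if dirty.get(victim):                    # page-out only if modified
            backing_store[victim] = f"contents of frame {frame}"
            dirty[victim] = False
    # Step 3: bring the desired page into the frame; update the tables.
    page_table[page] = frame
    # Step 4: restart the faulting instruction (implicit on return).
    return frame

table = {7: 0, 1: 1}                 # both frames in use: replacement needed
frame = handle_fault(4, table, free_frames=[], dirty={7: True},
                     backing_store={4: "page-4 data"},
                     select_victim=lambda pt: next(iter(pt)))  # evicts page 7
print(table)                          # {1: 1, 4: 0}
```

Because page 7's dirty bit is set, it is written back before its frame is reused; a clean victim would have been dropped without any page-out transfer.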
Figure 10.10 Page Replacement
Basic Page Replacement
Note that potentially two page transfers are now required per page fault (one
for the page-out and one for the page-in), increasing the effective access time.
• We can reduce this overhead by using a modify bit (or dirty bit), which
indicates if the page was modified.
• When we select a page for replacement, we examine its modify bit:
1. If the bit is set, we know that the page has been modified since it was
read in from secondary storage -> we must write the page back to storage.
2. If the modify bit is not set, the page has not been modified since it was
read into memory -> no need to write it back: the copy on storage is
already valid (this also applies to read-only pages).
Page and Frame Replacement Algorithms
We must solve two major problems to implement demand paging:
if we have multiple processes in memory, we must decide how many
frames to allocate to each process;
and when page replacement is required, we must select the frames
that are to be replaced
by developing the following algorithms:
• Frame-allocation algorithm determines
How many frames to give each process
Which frames to replace
• Page-replacement algorithm
Want lowest page-fault rate on both first access and re-access
There are many different page-replacement algorithms. Every operating
system probably has its own replacement scheme.
Page and Frame Replacement Algorithms
We evaluate an algorithm by running it on a particular string of memory references
and computing the number of page faults on that string
String is just page numbers, not full addresses
Repeated access to a page that is already in memory does not cause a page fault
Results depend on number of frames available
In all our examples, the reference string of referenced page numbers is
7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
Graph of Page Faults Versus the Number of Frames
1. First-In-First-Out (FIFO) Algorithm
The simplest page-replacement algorithm is a first-in, first-out (FIFO)
algorithm.
A FIFO replacement algorithm associates with each page the time
when that page was brought into memory. (Notice that it is not strictly
necessary to record the time when a page is brought in)
When a page must be replaced, the oldest page is chosen.
We can create a FIFO queue to hold all pages in memory.
We replace the page at the head of the queue.
When a page is brought into memory, we insert it at the tail of the
queue.
How to track ages of pages?
Just use a FIFO queue
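The queue described above can be sketched as a small fault counter (an illustrative simulation; `fifo_faults` is a made-up name, not a library function):

```python
from collections import deque

def fifo_faults(references, num_frames):
    """Count page faults under FIFO replacement."""
    queue = deque()            # head = oldest page, tail = newest
    resident = set()
    faults = 0
    for page in references:
        if page in resident:
            continue           # hit: note FIFO order is NOT updated
        faults += 1
        if len(queue) == num_frames:
            resident.discard(queue.popleft())   # evict the oldest page
        queue.append(page)                      # insert at the tail
        resident.add(page)
    return faults

ref = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(ref, 3))     # 15 page faults
```

The `resident` set only speeds up the hit test; the queue alone defines the eviction order.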
FIFO Algorithm Example 1
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
3 frames (3 pages can be in memory at a time per process)
15 page faults
The FIFO page-replacement algorithm is easy to understand and program.
However, its performance is not always good; it can vary with the reference string:
consider the following example 1,2,3,4,1,2,5,1,2,3,4,5
Adding more frames can cause more page faults!
Belady’s Anomaly
FIFO Algorithm Example 2
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
3 frames (3 pages can be in memory at a time per process)
__ page faults
4 frames
__ page faults
FIFO Algorithm Example 2
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
3 frames (3 pages can be in memory at a time per process)
1 1 1 4 4 4 5 5 5 5 5 5   (9 page faults)
2 2 2 1 1 1 1 1 3 3 3
3 3 3 2 2 2 2 2 4 4
4 frames
1 1 1 1 1 1 5 5 5 5 4 4
2 2 2 2 2 2 1 1 1 1 5
10 page faults
3 3 3 3 3 3 2 2 2 2
4 4 4 4 4 4 3 3 3
FIFO Illustrating Belady’s Anomaly
Figure below shows the curve of page faults for this reference string versus the number
of available frames.
Notice that the number of faults for 4 frames (10) is greater than the number of faults for
3 frames (9)!
We would expect that giving more memory to a process would improve its performance,
but this is not always true.
This most unexpected result is known as Belady’s anomaly:
for some page-replacement algorithms, the page-fault rate may increase as the
number of allocated frames increases.
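Belady's anomaly can be verified directly with a small FIFO simulator (an illustrative sketch with made-up names):

```python
from collections import deque

def fifo_faults(references, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()
    faults = 0
    for page in references:
        if page in frames:
            continue                   # hit
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()           # evict the oldest page
        frames.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3))   # 9 faults
print(fifo_faults(ref, 4))   # 10 faults: more frames, more faults!
```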
2. Optimal Algorithm
9 is optimal for the previous example. How do we know this? (Fig 10.14)
• We can’t read the future, but such an algorithm does exist:
Replace the page that will not be used for the longest period of
time.
(+) It has the lowest page-fault rate of all algorithms and never suffers
from Belady’s anomaly.
(-) This algorithm is difficult to implement, because it requires future knowledge of
the reference string.
It is used for measuring how well your algorithm performs (for comparison studies)
9 page faults
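Since OPT is used for comparison studies, it can be simulated offline, where the full reference string is known in advance (an illustrative sketch; `opt_faults` is a made-up name):

```python
def opt_faults(references, num_frames):
    """Count page faults under optimal (OPT) replacement."""
    frames = set()
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue                   # hit
        faults += 1
        if len(frames) == num_frames:
            # Evict the page whose next use is farthest in the future
            # (or that is never used again).
            def next_use(p):
                future = references[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

ref = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(ref, 3))    # 9 page faults
```

The scan of the remaining string inside `next_use` is exactly the "future knowledge" that makes OPT infeasible online, and makes this an O(n²) simulation.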
3. Least Recently Used (LRU) Algorithm
If the optimal algorithm is not feasible, perhaps an approximation of the optimal
algorithm is possible.
The key distinction between the FIFO and OPT algorithms (other than looking
backward versus forward in time) is that:
o The FIFO algorithm uses the time when a page was brought into memory,
whereas the OPT algorithm uses the time when a page is to be used.
If we use the recent past as an approximation of the near future, then we have a
new approach: the least recently used (LRU) algorithm:
Use past knowledge rather than future
Associate time of last use with each page
replace the page that has not been used for the longest period of time
Least Recently Used (LRU) Algorithm
12 page faults – better than FIFO (15 faults) but worse than OPT (9 faults)
(+) Generally good algorithm and frequently used.
(+) Like optimal replacement, LRU replacement does not suffer from Belady’s
anomaly
But how to implement?
• (-) An LRU page-replacement algorithm may require substantial hardware
assistance. The problem is to determine an order for the frames defined by
the time of last use. Two implementations are feasible:
LRU Algorithm (Cont.)
1. Counter implementation
Every page entry has a counter; the CPU maintains a logical clock that is
incremented on every memory reference
Whenever a page is referenced through its entry, the clock value is copied into that counter
When a page needs to be replaced, look at the counters to find the smallest (oldest) value
(-) Requires a search through the page table
2. Stack implementation
• Keep a stack of page numbers in a doubly linked list form
• If a page is referenced, it is removed from the stack and put on the top
This requires 6 pointers to be changed at worst
• So, the most recently used page is always at the top of the stack, and the least recently
used page is always at the bottom
• (-) Each update is more expensive,
• (+) but there is no search for a replacement
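Python's OrderedDict, a hash map over a doubly linked list, gives a compact way to sketch the stack implementation (illustrative; `lru_faults` is a made-up name):

```python
from collections import OrderedDict

def lru_faults(references, num_frames):
    """Count page faults under LRU, using an ordered map as the 'stack'."""
    stack = OrderedDict()      # last key = most recently used (top of stack)
    faults = 0
    for page in references:
        if page in stack:
            stack.move_to_end(page)        # referenced: move to the top
            continue
        faults += 1
        if len(stack) == num_frames:
            stack.popitem(last=False)      # evict LRU page (bottom of stack)
        stack[page] = True
    return faults

ref = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(ref, 3))      # 12 page faults
```

As the slide notes, every reference pays a small update cost (`move_to_end` relinks the node), but the victim is found with no search: it is always at the bottom.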
End of Chapter 10