Virtual Memory
•The technique that allows the execution of processes that are not completely in memory.
•Programs can be larger than the available physical memory.
•Abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory.
•Frees programmers from the concerns of memory-storage limitations.
Virtual Memory …
•The ability to execute a program that is only partially in memory yields the following benefits:
•A program would no longer be constrained by the amount of physical memory available. Users would be able to write programs for an extremely large virtual address space.
Virtual Memory …
•As each user program would take less physical memory, more programs could be run simultaneously, with a corresponding increase in CPU utilization and throughput but with no increase in response time or turnaround time.
•Less I/O would be required to load or swap each user program into memory, so user programs may run faster.
Virtual Memory …
[Figure: Pages 0 through v of a process's virtual memory are mapped, via a memory map, onto frames of physical memory.]
Virtual Memory …
•Involves the separation of logical memory as perceived by
users from physical memory.
•This separation allows an extremely large virtual memory to be provided for programmers even when only a smaller physical memory is available, as the previous figure illustrates.
Virtual Memory …
•Makes the task of programming easier as the programmer
is no longer required to worry about the amount of
available physical memory.
•The virtual address space of a process refers to the logical (or virtual) view of how the process is stored in memory.
•The view is that a process begins at a certain logical address, say address 0, and occupies contiguous memory, as illustrated below.
Virtual Memory …
[Figure: Virtual address space of a process. The code, data and heap sections start at address 0, the stack starts at Max and grows downward, and the heap grows upward toward the stack.]
Virtual Memory …
•Physical memory is organized into page frames, and the page frames assigned to a process need not be contiguous. The MMU maps logical pages to physical page frames in memory.
•In the previous illustration, we may observe that the heap grows upward in memory as it is used for dynamic memory allocation, whereas the stack grows downward in memory through successive function calls.
Virtual Memory …
•The large blank space, or hole, between the stack and the heap is part of the virtual address space but requires actual physical memory only if the heap or stack grows into it (the sketch below illustrates this).
•A virtual address space that contains such holes is termed a sparse address space.
•Virtual memory also allows files and memory to be shared by two or more processes through page sharing.
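•A minimal sketch of the sparse-address-space idea, assuming a 64-bit system: the Python snippet below reserves a large region of anonymous virtual memory and touches only a small part of it. On a typical demand-paged operating system only the touched part ends up backed by physical frames, but the exact lazy-allocation behaviour is OS dependent; this is an illustration, not a guarantee.

import mmap

# Reserve 1 GiB of anonymous virtual memory. On a typical demand-paged
# system no physical frames are allocated yet; the region is simply part
# of the (sparse) virtual address space.
region = mmap.mmap(-1, 1 << 30)

# Touch one byte per page in the first 16 MiB only. Roughly that much
# physical memory is now actually backed by frames; the untouched
# remainder of the region still occupies no physical memory.
for offset in range(0, 16 * 1024 * 1024, mmap.PAGESIZE):
    region[offset] = 1

region.close()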
Virtual Memory …
[Figure: Shared library using virtual memory. Two processes' virtual address spaces (code, data, heap, stack) both map the same shared library pages.]
Demand Paging
•During CPU scheduling we looked at improving CPU utilization; this in turn calls for better utilization of memory.
•Memory is central to the operation of a computer system.
•Memory is an array of bytes, each with its own address.
•The CPU fetches instructions from memory according to the value of the program counter.
Demand Paging …
[Figure: Programs A and B are swapped out and in, page by page, between secondary memory and main memory, which is divided into page frames 0 to 23.]
Demand Paging …
•When a process is swapped in, its pages are not all brought into memory at once. Instead, they are brought in only when the process needs them, i.e. on demand, hence the name demand paging.
•The component that does this is termed a lazy swapper, although a pager is a more accurate term, since it deals with individual pages rather than with an entire process.
Demand Paging …
•The basic idea behind demand paging is that when a process is swapped in, the pager only loads into memory those pages that it expects the process to need.
•Pages that are not loaded into memory are marked as invalid in the page table, using the invalid bit. (The rest of the page table entry may either be blank or contain information about where to find the swapped-out page on the hard drive.)
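•As a concrete, hypothetical model of such a page-table entry, the Python sketch below keeps a valid flag together with either a frame number or the location of the page on the backing store. The field names (valid, frame, disk_slot) are illustrative assumptions, not those of any real operating system.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PageTableEntry:
    valid: bool = False              # invalid => page is not resident in memory
    frame: Optional[int] = None      # physical frame number, meaningful only when valid
    disk_slot: Optional[int] = None  # where the page lives on the backing store

# One entry per virtual page of an 8-page process.
page_table = [PageTableEntry() for _ in range(8)]

# Page 2 is resident in frame 6; page 3 exists only on the backing store.
page_table[2] = PageTableEntry(valid=True, frame=6)
page_table[3] = PageTableEntry(valid=False, disk_slot=17)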
Demand Paging …
•If the process only ever accesses pages that are loaded in memory (memory-resident pages), then it runs exactly as if all of its pages had been loaded into memory.
Demand Paging …
[Figure: Page table when some pages are not in main memory. Logical memory holds pages 0 to 7 containing A to H. Pages A, C and F are resident in physical memory (frames 4, 6 and 9) and are marked valid (V) in the page table; pages B, D, E, G and H are not resident and are marked invalid (I).]
Page Fault
•If a page is needed that was not originally loaded, then a page fault trap is generated, which must be handled in a series of steps:
•The memory address requested is first checked, to make sure it was a valid memory request.
•If the reference was invalid, the process is terminated. Otherwise, the page must be paged in.
•A free frame is located, possibly from a free-frame list.
Page Fault …
•A disk operation is scheduled to bring in the necessary
page from disk. ( This will usually block the process on an
I/O wait, allowing some other process to use the CPU in
the meantime. )
•When the I/O operation is complete, the process's page
table is updated with the new frame number, and the
invalid bit is changed to indicate that this is now a valid
page reference.
Page Fault …
•The instruction that caused the page fault must now be restarted from the beginning (the whole sequence is sketched in the code below).
•In an extreme case, NO pages are swapped in for a process until they are requested by page faults. This is known as pure demand paging.
•In theory each instruction could generate multiple page faults, but in practice this is very rare, due to locality of reference.
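•Putting the steps above together, here is a toy, self-contained Python sketch of the page-fault sequence. Every name in it (PTE, free_frames, backing_store and so on) is an illustrative stand-in; a real handler runs inside the kernel and interacts with the MMU and the disk driver.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PTE:                                # simplified page-table entry
    valid: bool = False
    frame: Optional[int] = None
    disk_slot: Optional[int] = None

free_frames = [3, 7, 9]                   # assumed free-frame list
backing_store = {17: b"page-3-data"}      # disk_slot -> page contents
memory = {}                               # frame number -> page contents

def handle_page_fault(page_table, page):
    entry = page_table[page]
    # 1. Check that the reference is to a legal page; otherwise terminate.
    if entry.disk_slot is None:
        raise MemoryError("invalid reference: terminate the process")
    # 2. Find a free frame (page replacement would run if none were free).
    frame = free_frames.pop()
    # 3. Schedule a disk read to bring the page in (the process blocks on I/O).
    memory[frame] = backing_store[entry.disk_slot]
    # 4. Update the page table with the new frame and mark the entry valid.
    entry.frame, entry.valid = frame, True
    # 5. The instruction that faulted is now restarted.

# Usage: page 3 is on the backing store but not resident; fault it in.
page_table = [PTE() for _ in range(8)]
page_table[3] = PTE(disk_slot=17)
handle_page_fault(page_table, 3)
assert page_table[3].valid and memory[page_table[3].frame] == b"page-3-data"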
Page Fault …
•The hardware necessary to support virtual memory is the
same as for paging and swapping:
•A page table and secondary memory.
•A crucial part of the process is that the instruction must be
restarted from scratch once the desired page has been
made available in memory.
•For most simple instructions this is not a major difficulty.
Page Fault …
•There are, however, some architectures that allow a single instruction to modify a fairly large block of data ( which may span a page boundary ), and if some of the data has already been modified when the page fault occurs, this could cause problems.
•One solution is to access both ends of the block before executing the instruction, guaranteeing that the necessary pages get paged in before the instruction begins.
Page Fault …
[Figure: Steps in handling a page fault. (1) The process references location M and (2) traps to the operating system; (3) the missing page is located on the backing store and (4) brought into a free frame of physical memory; (5) the page table is reset; (6) the instruction is restarted.]
Performance of Demand Paging
•Obviously there is some slowdown and performance hit whenever a page fault occurs and the system has to fetch the missing page from secondary storage, but just how big a hit is it exactly?
•There are many steps that occur when servicing a page fault, and some of the steps are optional or variable.
Performance of Demand Paging …
•Suppose that a normal memory access requires 200 nanoseconds and that servicing a page fault takes 8 milliseconds, i.e. 8,000,000 nanoseconds, which is 40,000 times a normal memory access.
•With a page-fault rate of p ( on a scale from 0 to 1 ), the effective access time is:
•effective access time = (1 - p) * 200 + p * 8,000,000 = 200 + 7,999,800 * p nanoseconds
Performance of Demand Paging …
•which clearly depends heavily on p! Even if only one access in 1,000 causes a page fault, the effective access time rises from 200 nanoseconds to about 8.2 microseconds, a slowdown by a factor of roughly 40. In order to keep the slowdown below 10%, the page-fault rate must be less than 0.0000025, i.e. less than one fault in 399,990 accesses.
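•The figures above are easy to re-derive; the short Python calculation below uses only the numbers quoted in the text (200 ns per memory access, 8 ms per page-fault service) and is purely illustrative.

MEMORY_NS = 200            # normal memory access time
FAULT_NS = 8_000_000       # page-fault service time (8 ms)

def effective_access_time(p):
    # EAT = (1 - p) * memory access time + p * page-fault service time
    return (1 - p) * MEMORY_NS + p * FAULT_NS

print(effective_access_time(1 / 1000))   # about 8199.8 ns, i.e. roughly 8.2 microseconds

# For at most a 10% slowdown we need EAT < 220 ns:
#   200 + 7,999,800 * p < 220   =>   p < 20 / 7,999,800
print(20 / 7_999_800)                    # about 2.5e-06, i.e. one fault in roughly 399,990 accesses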
Performance of Demand Paging …
•A subtlety is that swap space is faster to access than the
regular file system, because it does not have to go through
the whole directory structure. For this reason some
systems will transfer an entire process from the file system
to swap space before starting up the process, so that
future paging all occurs from the ( relatively ) faster swap
space.
Performance of Demand Paging …
•Some systems use demand paging directly from the file system for binary code ( which never changes and hence does not have to be stored on a page-out operation ), and reserve swap space for data segments that must be stored.
•This approach is used by both Solaris and BSD Unix.
Page Replacement
•When a page fault occurs and no frame is free, the operating system must select a victim page in memory, write it back to the backing store if necessary, and reuse its frame for the required page.
•The policy used to choose the victim is the page-replacement algorithm; the goal is to keep the number of page faults as low as possible.
Page Replacement Algorithm
•FIFO
•LRU
•Optimal (OPT)
Page Replacement Algorithm: FIFO
•The oldest page in main memory is the one selected for replacement.
•Easy to implement: keep a list, add new pages at the head, and replace pages from the tail (the oldest), as the sketch below illustrates.
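•A minimal FIFO page-replacement simulation in Python, counting page faults for a given reference string and number of frames. The function name and the reference string (a common textbook example) are illustrative assumptions.

from collections import deque

def fifo_faults(references, frame_count):
    # Count page faults under FIFO replacement.
    frames = deque()            # oldest page at the left
    resident = set()
    faults = 0
    for page in references:
        if page in resident:
            continue            # hit: no fault, FIFO order is unchanged
        faults += 1
        if len(frames) == frame_count:
            victim = frames.popleft()   # evict the oldest page
            resident.remove(victim)
        frames.append(page)
        resident.add(page)
    return faults

# With 3 frames this reference string causes 15 faults under FIFO.
print(fifo_faults([7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1], 3))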
Page Replacement Algorithm: LRU
•The page that has not been used for the longest time in main memory is the one selected for replacement.
•Can be implemented by keeping the pages in a list ordered by recency of use: on each reference the page is moved to the front, and the page at the back (the least recently used) is replaced, as the sketch below illustrates.
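•The same kind of illustrative sketch for LRU: pages are kept ordered by recency of use, and the least recently used page is evicted on a fault.

from collections import OrderedDict

def lru_faults(references, frame_count):
    # Count page faults under LRU replacement.
    frames = OrderedDict()      # least recently used page at the left
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
            continue
        faults += 1
        if len(frames) == frame_count:
            frames.popitem(last=False)      # evict the least recently used
        frames[page] = True
    return faults

# Same reference string as before; with 3 frames LRU causes 12 faults
# (compared with 15 under FIFO).
print(lru_faults([7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1], 3))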
Page Replacement Algorithm: Optimal (OPT)
•An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms.
•Such an algorithm exists and has been called OPT or MIN.
•Replace the page that will not be used for the longest period of time.
•This requires knowing the time at which each page will next be used, so OPT cannot be implemented in practice and serves mainly as a benchmark for other algorithms (see the sketch below).
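•For comparison, an illustrative sketch of the optimal (OPT/MIN) policy. Because it looks ahead in the full reference string, it can only be simulated after the fact, which is why it serves as a benchmark rather than a practical algorithm.

def opt_faults(references, frame_count):
    # Count page faults under the optimal (OPT/MIN) policy.
    frames = set()
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue
        faults += 1
        if len(frames) == frame_count:
            # Evict the resident page whose next use is farthest in the
            # future (or that is never used again).
            def next_use(q):
                future = references[i + 1:]
                return future.index(q) if q in future else len(future)
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

# Same reference string as before; with 3 frames OPT causes only 9 faults.
print(opt_faults([7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1], 3))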
Thrashing
•Thrashing is the condition in which the system spends more time paging (servicing page faults) than executing useful work.