
CHAPTER 4 (continued): VIRTUAL MEMORY
Motivations for Virtual Memory
• Use Physical DRAM as a Cache for the Disk
• Address space of a process can exceed physical memory size
• Sum of address spaces of multiple processes can exceed physical memory

• Simplify Memory Management


• Multiple processes resident in main memory.
• Each process with its own address space
• Only “active” code and data is actually in memory
• Allocate more memory to process as needed.

• Provide Protection
• One process can't interfere with another because they operate in different address spaces.
• User process cannot access privileged information
• different sections of address spaces have different permissions.
 Virtual memory is stored in a hard-disk image. Physical memory holds a small number of virtual pages in physical page frames.
 The page table defines the mapping between virtual and physical memory.
Motivation #1: DRAM a “Cache” for Disk

• Full address space is quite large:


• 32-bit addresses: ~4,000,000,000 (4 billion) bytes
• 64-bit addresses: ~16,000,000,000,000,000,000 (16 quintillion) bytes

• Disk storage is ~300X cheaper than DRAM storage


• To access large amounts of data in a cost-effective manner, the
bulk of the data must be stored on disk

• SRAM: 4 MB ≈ $500
• DRAM: 1 GB ≈ $200
• Disk: 80 GB ≈ $110


Levels in Memory Hierarchy
[Figure: memory hierarchy — CPU registers, cache, main memory, disk; the cache bridges registers and memory, and virtual memory bridges memory and disk]

          Register   Cache        Memory     Disk
size:     32 B       32 KB–4 MB   1024 MB    100 GB
speed:    1 ns       2 ns         30 ns      8 ms
$/MB:     —          $125/MB      $0.20/MB   $0.001/MB

larger, slower, cheaper →
DRAM vs. SRAM as a “Cache”

• DRAM vs. disk is more extreme than SRAM vs. DRAM


• Access latencies:
• DRAM ~10X slower than SRAM
• Disk ~100,000X slower than DRAM
• Importance of exploiting spatial locality:
• First byte is ~100,000X slower than successive bytes on disk



A System with Virtual Memory
• Examples: workstations, servers, modern PCs, etc.

[Figure: the CPU issues virtual addresses 0..N–1; an OS-managed page table maps each virtual page either to a physical address 0..P–1 in memory or to a location on disk]
 Address Translation: Hardware converts virtual
addresses to physical addresses via OS-managed
lookup table (page table)
Page Faults (like “Cache Misses”)
• What if an object is on disk rather than in memory?
• Page table entry indicates virtual address not in memory
• OS exception handler invoked to move data from disk into memory
• current process suspends, others can resume
• OS has full control over placement, etc.

[Figure: before the fault, the page table entry for the referenced virtual address points to disk; after the fault, the OS has copied the page into memory and updated the entry to point to a physical page]
Solution: Separate Virt. Addr. Spaces
• Virtual and physical address spaces divided into equal-sized
blocks
• blocks are called “pages” (both virtual and physical)

• Each process has its own virtual address space


• operating system controls how virtual pages are assigned to physical memory
[Figure: each process has its own virtual address space 0..N–1; address translation maps virtual pages (VP 1, VP 2, …) of Process 1 and Process 2 to distinct physical pages (e.g., PP 2, PP 7, PP 10) in the single physical address space 0..M–1 (DRAM)]
Motivation #3: Protection
• Page table entry contains access rights information
• hardware enforces this protection (trap into OS if violation occurs)
Page tables with access-rights fields:

Process i:        Read?   Write?   Physical Addr
      VP 0:       Yes     No       PP 9
      VP 1:       Yes     Yes      PP 4
      VP 2:       No      No       XXXXXXX

Process j:        Read?   Write?   Physical Addr
      VP 0:       Yes     Yes      PP 6
      VP 1:       Yes     No       PP 9
      VP 2:       No      No       XXXXXXX
VM Address Translation
• Virtual Address Space
• V = {0, 1, …, N–1}
• Physical Address Space
• P = {0, 1, …, M–1}
• M < N
VM Address Translation
• Parameters
• P = 2^p = page size (bytes)
• N = 2^n = virtual address limit
• M = 2^m = physical address limit

[Figure: virtual address = virtual page number (bits n–1..p) followed by page offset (bits p–1..0); address translation replaces the virtual page number with a physical page number, giving physical address = physical page number (bits m–1..p) followed by page offset (bits p–1..0)]

Page offset bits don’t change as a result of translation
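The field split above is easy to express in code. A minimal sketch (the function names are illustrative, not from the text):

```python
def split_va(va: int, p: int) -> tuple[int, int]:
    """Split a virtual address into (virtual page number, page offset)."""
    return va >> p, va & ((1 << p) - 1)

def make_pa(ppn: int, offset: int, p: int) -> int:
    """Concatenate a physical page number with the unchanged page offset."""
    return (ppn << p) | offset

# With a 4 KB page (p = 12): VA 0x12345 has VPN 0x12 and offset 0x345
vpn, off = split_va(0x12345, 12)
```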


Page Tables
[Figure: a memory-resident page table indexed by virtual page number. Each entry holds a valid bit plus a physical page number or disk address: valid = 1 entries point to pages in physical memory; valid = 0 entries point to pages in disk storage (a swap file or a regular file-system file)]
Address Translation via Page Table
[Figure: the page table base register locates the page table; the virtual page number (VPN, bits n–1..p) acts as the table index. The selected entry holds valid and access bits and a physical page number (PPN). If valid = 0, the page is not in memory (page fault); otherwise the PPN (bits m–1..p) is concatenated with the unchanged page offset (bits p–1..0) to form the physical address]
Page Table Operation

• Computing Physical Address


• Page Table Entry (PTE) provides information about the page
• if (valid bit = 1) then the page is in memory.
• Use physical page number (PPN) to construct address
• if (valid bit = 0) then the page is on disk
• Page fault
• Checking Protection
• Access rights field indicates allowable access
• e.g., read-only, read-write, execute-only
• typically support multiple protection modes (e.g., kernel
vs. user)
• Protection violation fault if user doesn’t have
necessary permission
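The decision procedure above can be sketched as follows; the PTE layout and names here are illustrative assumptions, not a real MMU implementation:

```python
class PageFault(Exception):
    """Raised when valid = 0: the page is on disk and the OS must load it."""

class ProtectionFault(Exception):
    """Raised when the access rights forbid the requested access."""

def translate(page_table, va, p, write=False):
    vpn, offset = va >> p, va & ((1 << p) - 1)
    pte = page_table[vpn]              # one PTE per virtual page
    if not pte["valid"]:
        raise PageFault(vpn)           # valid bit = 0: page fault
    if write and not pte["write"]:
        raise ProtectionFault(vpn)     # access-rights check
    return (pte["ppn"] << p) | offset  # valid bit = 1: build PA from PPN

# A read of VPN 0 (a read-only page mapped to PP 9), 64-byte pages (p = 6):
pt = {0: {"valid": True, "write": False, "ppn": 9}}
pa = translate(pt, 0x05, p=6)
```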
Integrating VM and Cache
[Figure: the CPU sends a virtual address (VA) through address translation to obtain a physical address (PA), which indexes the cache; on a cache miss, main memory is accessed and the data returns to the CPU]

• Most Caches “Physically Addressed”


• Accessed by physical addresses
• Allows multiple processes to have blocks in cache at same time
• Allows multiple processes to share pages
• Cache doesn’t need to be concerned with protection issues
• Access rights checked as part of address translation

• Perform Address Translation Before Cache Lookup


• But this could involve a memory access itself (of the PTE)
• Of course, page table entries can also become cached
Speeding up Translation with a TLB
• “Translation Lookaside Buffer” (TLB)
• Small hardware cache in MMU
• Maps virtual page numbers to physical page numbers
• Contains complete page table entries for a small number of pages

[Figure: the CPU sends the VA to the TLB; on a TLB hit, the PA goes directly to the cache; on a TLB miss, full address translation is performed first. Cache hits return data; cache misses go to main memory]
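The TLB-first flow can be sketched like this; modeling the TLB as a plain dictionary ignores set associativity and capacity, which is a deliberate simplification:

```python
def tlb_translate(tlb, page_table, va, p):
    vpn, offset = va >> p, va & ((1 << p) - 1)
    if vpn in tlb:                 # TLB hit: no page-table access needed
        ppn = tlb[vpn]
    else:                          # TLB miss: fall back to full translation
        ppn = page_table[vpn]
        tlb[vpn] = ppn             # cache the mapping for next time
    return (ppn << p) | offset

tlb, pt = {}, {3: 7}               # VPN 3 maps to PPN 7
pa = tlb_translate(tlb, pt, (3 << 6) | 0x1, p=6)   # miss, then cached
```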
Address Translation with a TLB
[Figure: the virtual page number is split into a TLB tag and a TLB index; on a TLB hit, the matching TLB entry supplies the physical page number, which is concatenated with the page offset to form the physical address. The physical address is then split into a cache tag, index, and byte offset to access the cache and return the data]
Multi-Level Page Tables
• Given:
• 4 KB (2^12) page size
• 32-bit address space
• 4-byte PTE
• Problem:
• a single-level page table would need 2^20 × 4 bytes = 4 MB!
• Common solution: multi-level page tables
• e.g., 2-level table (P6)
• Level 1 table: 1024 entries, each of which points to a Level 2 page table
• Level 2 table: 1024 entries, each of which points to a page
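The 2-level split described above (10 + 10 + 12 bits of a 32-bit address) can be sketched as:

```python
def split_2level(va: int) -> tuple[int, int, int]:
    """32-bit VA -> (level-1 index, level-2 index, page offset)."""
    return (va >> 22) & 0x3FF, (va >> 12) & 0x3FF, va & 0xFFF

# 1024 entries per table (10 index bits each), 4 KB pages (12 offset bits)
l1, l2, off = split_2level(0xDEADBEEF)
```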
Simple Memory System Example
• Addressing
• 14-bit virtual addresses
• 12-bit physical addresses
• Page size = 64 bytes

Virtual address:  bits 13..6 = VPN (virtual page number), bits 5..0 = VPO (virtual page offset)
Physical address: bits 11..6 = PPN (physical page number), bits 5..0 = PPO (physical page offset)
Simple Memory System Page Table

• Only the first 16 entries are shown

VPN PPN Valid VPN PPN Valid


00 28 1 08 13 1
01 – 0 09 17 1
02 33 1 0A 09 1
03 02 1 0B – 0
04 – 0 0C – 0
05 16 1 0D 2D 1
06 – 0 0E 11 1
07 – 0 0F 0D 1
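Using the table above and the section's parameters (64-byte pages, so a 6-bit offset), a translation can be checked mechanically; this sketch models an invalid entry as None:

```python
# Valid entries from the page table above (VPN -> PPN, both in hex);
# missing keys model valid = 0.
PT = {0x00: 0x28, 0x02: 0x33, 0x03: 0x02, 0x05: 0x16, 0x08: 0x13,
      0x09: 0x17, 0x0A: 0x09, 0x0D: 0x2D, 0x0E: 0x11, 0x0F: 0x0D}

def to_physical(va: int):
    vpn, vpo = va >> 6, va & 0x3F      # 64-byte pages: 6 offset bits
    ppn = PT.get(vpn)
    return None if ppn is None else (ppn << 6) | vpo   # None = page fault
```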
Simple Memory System TLB

• TLB
• 16 entries
• 4-way set associative

Virtual address fields: bits 13..8 = TLBT (TLB tag), bits 7..6 = TLBI (TLB index); bits 13..6 = VPN, bits 5..0 = VPO

Set Tag PPN Valid Tag PPN Valid Tag PPN Valid Tag PPN Valid
0 03 – 0 09 0D 1 00 – 0 07 02 1
1 03 2D 1 02 – 0 04 – 0 0A – 0
2 02 – 0 08 – 0 06 – 0 03 – 0
3 07 – 0 03 0D 1 0A 34 1 02 – 0
Simple Memory System Cache
• Cache
• 16 lines
• 4-byte line size
• Direct mapped

Physical address fields: bits 11..6 = CT (cache tag), bits 5..2 = CI (cache index), bits 1..0 = CO (cache offset)
Idx Tag Valid B0 B1 B2 B3 Idx Tag Valid B0 B1 B2 B3


0 19 1 99 11 23 11 8 24 1 3A 00 51 89
1 15 0 – – – – 9 2D 0 – – – –
2 1B 1 00 02 04 08 A 2D 1 93 15 DA 3B
3 36 0 – – – – B 0B 0 – – – –
4 32 1 43 6D 8F 09 C 12 0 – – – –
5 0D 1 36 72 F0 1D D 16 1 04 96 34 15
6 31 0 – – – – E 13 1 83 77 1B D3
7 16 1 11 C2 DF 03 F 14 0 – – – –
Address Translation Example #1
• Virtual Address 03D4

VPN ___  TLBI ___  TLBT ____  TLB Hit? __  Page Fault? __  PPN: ____

• Physical Address

Offset ___  CI ___  CT ____  Hit? __  Byte: ____
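The blanks above can be checked mechanically. The following sketch slices 03D4 into the fields defined earlier; the TLB set and cache line contents come from the tables on the preceding slides:

```python
va = 0x03D4
vpn  = va >> 6             # VPN  = 0x0F
tlbi = vpn & 0x3           # TLBI = 0x3  (2 index bits: 4 sets)
tlbt = vpn >> 2            # TLBT = 0x03 (6 tag bits)
# TLB set 3 holds a valid entry with tag 03, PPN 0D -> TLB hit, no page fault
ppn = 0x0D
pa  = (ppn << 6) | (va & 0x3F)   # physical address 0x354
co = pa & 0x3              # CO = 0x0
ci = (pa >> 2) & 0xF       # CI = 0x5
ct = pa >> 6               # CT = 0x0D
# Cache line 5 has tag 0D, valid, bytes 36 72 F0 1D -> hit, byte B0 = 0x36
```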
Address Translation Example #2
• Virtual Address 0B8F

VPN ___  TLBI ___  TLBT ____  TLB Hit? __  Page Fault? __  PPN: ____

• Physical Address

Offset ___  CI ___  CT ____  Hit? __  Byte: ____
Address Translation Example #3
• Virtual Address 0040

VPN ___  TLBI ___  TLBT ____  TLB Hit? __  Page Fault? __  PPN: ____

• Physical Address

Offset ___  CI ___  CT ____  Hit? __  Byte: ____
Exercise
Consider a virtual memory system with the following properties:
• 40-bit virtual addresses
• 16-KB pages
• 36-bit physical byte addresses
- What is the total size of the page table, assuming that the valid, protection, dirty, and use bits take a total of 4 bits and that all the virtual pages are in use?
- Assume that the virtual memory system is implemented with a 2-way set-associative TLB with a total of 256 TLB entries. Describe the virtual-to-physical mapping, specifying the width of all fields in the mapping process.
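As a hint for the field widths, the arithmetic can be checked mechanically; this sketch only computes sizes and widths under the stated parameters and is not a substitute for describing the mapping itself:

```python
import math

va_bits, pa_bits = 40, 36
page = 16 * 1024
offset_bits = int(math.log2(page))         # 14-bit page offset
vpn_bits = va_bits - offset_bits           # 26-bit virtual page number
ppn_bits = pa_bits - offset_bits           # 22-bit physical page number
pte_bits = ppn_bits + 4                    # 26 bits: PPN + valid/prot/dirty/use
pte_bytes = 4                              # rounded up to a whole word
table_bytes = (1 << vpn_bits) * pte_bytes  # 2^26 entries * 4 B = 256 MB

# 2-way set-associative TLB with 256 entries total:
sets = 256 // 2                            # 128 sets
tlb_index_bits = int(math.log2(sets))      # 7-bit TLB index
tlb_tag_bits = vpn_bits - tlb_index_bits   # 19-bit TLB tag
```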
