Chap 9 Virtual Memory

Chapter 9 discusses virtual memory, highlighting its benefits such as allowing programs to run with less physical memory and improving CPU utilization. Key concepts include demand paging, page replacement, and copy-on-write, which enhance process management and memory efficiency. The chapter also covers the mechanics of handling page faults and the performance implications of demand paging strategies.


Chapter 9: Virtual Memory

 Background
 Demand Paging
 Copy-on-Write
 Page Replacement
 Allocation of Frames
 Thrashing
 Memory-Mapped Files
 Allocating Kernel Memory
 Other Considerations
 Operating-System Examples

Objectives

 To describe the benefits of a virtual memory system
 To explain the concepts of demand paging, page-replacement algorithms, and allocation of page frames
 To discuss the principle of the working-set model
 To examine the relationship between shared memory and memory-mapped files
 To explore how kernel memory is managed

Background

 Code needs to be in memory to execute, but the entire program is rarely used
  Error code, unusual routines, large data structures
  Entire program code not needed at the same time
 Consider the ability to execute a partially-loaded program
  Program no longer constrained by limits of physical memory
  Each program takes less memory while running -> more programs run at the same time
   Increased CPU utilization and throughput with no increase in response time or turnaround time
  Less I/O needed to load or swap programs into memory -> each user program runs faster
Background (Cont.)

 Virtual memory – separation of user logical memory from physical memory
  Only part of the program needs to be in memory for execution
  Logical address space can therefore be much larger than physical address space
  Allows address spaces to be shared by several processes
  Allows for more efficient process creation
  More programs running concurrently
  Less I/O needed to load or swap processes

Background (Cont.)

 Virtual address space – logical view of how a process is stored in memory
  Usually starts at address 0, contiguous addresses until end of space
  Meanwhile, physical memory is organized in page frames
  MMU must map logical to physical
 Virtual memory can be implemented via:
  Demand paging
  Demand segmentation

Virtual Memory That is Larger Than Physical Memory

Virtual-address Space

 Usually design logical address space for stack to start at max logical address and grow "down" while heap grows "up"
  Maximizes address space use
  Unused address space between the two is a hole
   No physical memory needed until heap or stack grows to a given new page
 Enables sparse address spaces with holes left for growth, dynamically linked libraries, etc.
 System libraries shared via mapping into virtual address space
 Shared memory by mapping pages read-write into virtual address space
 Pages can be shared during fork(), speeding process creation
Shared Library Using Virtual Memory

Demand Paging

 Could bring entire process into memory at load time
 Or bring a page into memory only when it is needed
  Less I/O needed, no unnecessary I/O
  Less memory needed
  Faster response
  More users
 Similar to a paging system with swapping
 Page is needed ⇒ reference to it
  invalid reference ⇒ abort
  not-in-memory ⇒ bring to memory
 Lazy swapper – never swaps a page into memory unless the page will be needed
  Swapper that deals with pages is a pager

Basic Concepts

 With swapping, the pager guesses which pages will be used before swapping out again
 Instead, the pager brings only those pages into memory
 How to determine that set of pages?
  Need new MMU functionality to implement demand paging
 If pages needed are already memory resident
  No difference from non-demand paging
 If page needed and not memory resident
  Need to detect and load the page into memory from storage
   Without changing program behavior
   Without programmer needing to change code

Valid-Invalid Bit

 With each page-table entry a valid–invalid bit is associated
 (v → in-memory – memory resident, i → not-in-memory)
 Initially, the valid–invalid bit is set to i on all entries
 Example of a page table snapshot:
 During MMU address translation, if the valid–invalid bit in the page-table entry is i ⇒ page fault
Page Table When Some Pages Are Not in Main Memory

Page Fault

 If there is a reference to a page, the first reference to that page will trap to the operating system: page fault
 1. Operating system looks at another table to decide:
   Invalid reference ⇒ abort
   Just not in memory
 2. Find free frame
 3. Swap page into frame via scheduled disk operation
 4. Reset tables to indicate page now in memory; set validation bit = v
 5. Restart the instruction that caused the page fault
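
To make the five steps concrete, here is a minimal user-space simulation; the structures and names (pte, handle_fault, FRAMES) are illustrative sketches, not a real kernel interface.

    /* A minimal simulation of the page-fault steps above;
     * all names and structures are illustrative. */
    #include <stdio.h>
    #include <stdbool.h>

    #define PAGES 4
    #define FRAMES 2

    struct pte { int frame; bool valid; };
    static struct pte table[PAGES];
    static bool frame_used[FRAMES];

    static int find_free_frame(void) {               /* step 2 */
        for (int f = 0; f < FRAMES; f++)
            if (!frame_used[f]) { frame_used[f] = true; return f; }
        return -1;                                   /* would need replacement */
    }

    static bool handle_fault(int page) {
        if (page < 0 || page >= PAGES) return false; /* step 1: invalid -> abort */
        int f = find_free_frame();
        if (f < 0) return false;                     /* no frame: see page replacement */
        /* step 3 would schedule the disk read here (simulated) */
        table[page] = (struct pte){ .frame = f, .valid = true }; /* step 4: bit = v */
        return true;                                 /* step 5: restart instruction */
    }

    int main(void) {
        printf("fault on page 1: %s\n", handle_fault(1) ? "resolved" : "abort");
        printf("page 1 valid=%d frame=%d\n", table[1].valid, table[1].frame);
        return 0;
    }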

Steps in Handling a Page Fault

Aspects of Demand Paging

 Extreme case – start process with no pages in memory
  OS sets instruction pointer to first instruction of process, non-memory-resident -> page fault
  And the same for every other page on its first access
  Pure demand paging
 Actually, a given instruction could access multiple pages -> multiple page faults
  Consider fetch and decode of an instruction which adds 2 numbers from memory and stores the result back to memory
  Pain decreased because of locality of reference
 Hardware support needed for demand paging
  Page table with valid / invalid bit
  Secondary memory (swap device with swap space)
  Instruction restart
Instruction Restart

 Consider an instruction that could access several different locations
  block move
  auto increment/decrement location
  Restart the whole operation?
   What if source and destination overlap?

Performance of Demand Paging

 Stages in demand paging (worst case):
 1. Trap to the operating system
 2. Save the user registers and process state
 3. Determine that the interrupt was a page fault
 4. Check that the page reference was legal and determine the location of the page on the disk
 5. Issue a read from the disk to a free frame:
  1. Wait in a queue for this device until the read request is serviced
  2. Wait for the device seek and/or latency time
  3. Begin the transfer of the page to a free frame
 6. While waiting, allocate the CPU to some other user
 7. Receive an interrupt from the disk I/O subsystem (I/O completed)
 8. Save the registers and process state for the other user
 9. Determine that the interrupt was from the disk
 10. Correct the page table and other tables to show page is now in memory
 11. Wait for the CPU to be allocated to this process again
 12. Restore the user registers, process state, and new page table, and then resume the interrupted instruction

Performance of Demand Paging (Cont.)

 Three major activities
  Service the interrupt – careful coding means just several hundred instructions needed
  Read the page – lots of time
  Restart the process – again just a small amount of time
 Page fault rate 0 ≤ p ≤ 1
  if p = 0, no page faults
  if p = 1, every reference is a fault
 Effective Access Time (EAT)
  EAT = (1 – p) x memory access
      + p x (page fault overhead + swap page out + swap page in)

Demand Paging Example

 Memory access time = 200 nanoseconds
 Average page-fault service time = 8 milliseconds
 EAT = (1 – p) x 200 + p x (8 milliseconds)
     = (1 – p) x 200 + p x 8,000,000
     = 200 + p x 7,999,800
 If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds.
  This is a slowdown by a factor of 40!!
 If we want performance degradation < 10 percent:
  220 > 200 + 7,999,800 x p
  20 > 7,999,800 x p
  p < .0000025
  < one page fault in every 400,000 memory accesses
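
A small sketch that reproduces the slide's EAT arithmetic for p = 1/1000 (times in nanoseconds); the "factor of 40" on the slide is this ratio, rounded.

    /* Effective-access-time arithmetic from the example above. */
    #include <stdio.h>

    int main(void)
    {
        double mem = 200.0;          /* memory access time (ns) */
        double service = 8000000.0;  /* 8 ms page-fault service time (ns) */
        double p = 1.0 / 1000.0;     /* one fault per 1,000 accesses */

        double eat = (1 - p) * mem + p * service;
        printf("EAT = %.1f ns (%.1f microseconds)\n", eat, eat / 1000.0);
        printf("slowdown = %.1fx\n", eat / mem);     /* about 40x */
        return 0;
    }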
Demand Paging Optimizations

 Swap space I/O faster than file system I/O even if on the same device
  Swap allocated in larger chunks; less management needed than for a file system
 Copy entire process image to swap space at process load time
  Then page in and out of swap space
  Used in older BSD Unix
 Demand page in from program binary on disk, but discard rather than paging out when freeing frame
  Used in Solaris and current BSD
  Still need to write to swap space
   Pages not associated with a file (like stack and heap) – anonymous memory
   Pages modified in memory but not yet written back to the file system
 Mobile systems
  Typically don't support swapping
  Instead, demand page from file system and reclaim read-only pages (such as code)

Copy-on-Write

 Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory
  If either process modifies a shared page, only then is the page copied
 COW allows more efficient process creation as only modified pages are copied
 In general, free pages are allocated from a pool of zero-fill-on-demand pages
  Pool should always have free frames for fast demand page execution
   Don't want to have to free a frame, on top of the other processing, on a page fault
  Why zero-out a page before allocating it?
 vfork() variation on the fork() system call has parent suspend and child using the copy-on-write address space of the parent
  Designed to have child call exec()
  Very efficient

Before Process 1 Modifies Page C

After Process 1 Modifies Page C
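
The COW behavior is visible from user space with standard POSIX calls: after fork(), parent and child logically share data pages until one of them writes, at which point each sees its own copy. A minimal demonstration:

    /* fork() copy-on-write demo: the child's write triggers a private
     * copy, so the parent's value is unaffected. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int value = 42;    /* lives in a data page shared (COW) after fork() */

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            value = 99;                           /* write triggers the copy */
            printf("child sees %d\n", value);
            return 0;
        }
        wait(NULL);
        printf("parent still sees %d\n", value);  /* prints 42 */
        return 0;
    }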
What Happens if There is no Free Frame?

 Used up by process pages
 Also in demand from the kernel, I/O buffers, etc.
 How much to allocate to each?
 Page replacement – find some page in memory, but not really in use, and page it out
  Algorithm – terminate? swap out? replace the page?
  Performance – want an algorithm which will result in the minimum number of page faults
 Same page may be brought into memory several times

Page Replacement

 Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement
 Use modify (dirty) bit to reduce overhead of page transfers – only modified pages are written to disk
 Page replacement completes the separation between logical memory and physical memory – large virtual memory can be provided on a smaller physical memory

Need For Page Replacement

Basic Page Replacement

 1. Find the location of the desired page on disk
 2. Find a free frame:
  - If there is a free frame, use it
  - If there is no free frame, use a page replacement algorithm to select a victim frame
  - Write victim frame to disk if dirty
 3. Bring the desired page into the (newly) free frame; update the page and frame tables
 4. Continue the process by restarting the instruction that caused the trap
 Note: now potentially 2 page transfers per page fault – increasing EAT
Page Replacement

Page and Frame Replacement Algorithms

 Frame-allocation algorithm determines
  How many frames to give each process
  Which frames to replace
 Page-replacement algorithm
  Want lowest page-fault rate on both first access and re-access
 Evaluate an algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string
  String is just page numbers, not full addresses
  Repeated access to the same page does not cause a page fault
  Results depend on number of frames available
 In all our examples, the reference string of referenced page numbers is
  7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1

Graph of Page Faults Versus The Number of Frames

First-In-First-Out (FIFO) Algorithm

 Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
 3 frames (3 pages can be in memory at a time per process)
  15 page faults
 Can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5
  Adding more frames can cause more page faults!
   Belady's Anomaly
 How to track ages of pages?
  Just use a FIFO queue
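
A short simulation (illustrative, not from the text) that counts FIFO faults for the chapter's reference string with 3 frames; it prints the 15 faults claimed above.

    /* FIFO page-replacement simulation on the chapter's string. */
    #include <stdio.h>

    int main(void)
    {
        int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
        int n = sizeof refs / sizeof refs[0];
        int frames[3] = {-1, -1, -1};
        int next = 0, faults = 0;          /* next = oldest (FIFO victim) slot */

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < 3; f++)
                if (frames[f] == refs[i]) hit = 1;
            if (!hit) {
                frames[next] = refs[i];    /* replace the oldest page */
                next = (next + 1) % 3;
                faults++;
            }
        }
        printf("FIFO page faults: %d\n", faults);  /* prints 15 */
        return 0;
    }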
FIFO Illustrating Belady's Anomaly

Optimal Algorithm

 Replace the page that will not be used for the longest period of time
  9 faults is optimal for the example
 How do you know this?
  Can't read the future
 Used for measuring how well your algorithm performs

Least Recently Used (LRU) Algorithm

 Use past knowledge rather than future
 Replace the page that has not been used for the longest amount of time
 Associate time of last use with each page
 12 faults – better than FIFO but worse than OPT
 Generally good algorithm and frequently used
 But how to implement?

LRU Algorithm (Cont.)

 Counter implementation
  Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter
  When a page needs to be changed, look at the counters to find the smallest value
   Search through table needed
 Stack implementation
  Keep a stack of page numbers in a doubly linked form:
  Page referenced:
   move it to the top
   requires 6 pointers to be changed
  But each update more expensive
  No search for replacement
 LRU and OPT are cases of stack algorithms that don't have Belady's Anomaly
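
A sketch of the counter implementation described above: stamp each resident page with a logical clock on every reference and evict the smallest stamp. On the chapter's string with 3 frames it prints the 12 faults cited on the slide.

    /* Counter-based LRU simulation (illustrative). */
    #include <stdio.h>

    int main(void)
    {
        int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
        int n = sizeof refs / sizeof refs[0];
        int page[3] = {-1, -1, -1}, stamp[3] = {0, 0, 0};
        int clock = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            clock++;
            int slot = -1;
            for (int f = 0; f < 3; f++)
                if (page[f] == refs[i]) slot = f;   /* hit */
            if (slot < 0) {                         /* fault: find LRU slot */
                slot = 0;
                for (int f = 1; f < 3; f++)
                    if (stamp[f] < stamp[slot]) slot = f;
                page[slot] = refs[i];
                faults++;
            }
            stamp[slot] = clock;                    /* time of last use */
        }
        printf("LRU page faults: %d\n", faults);    /* prints 12 */
        return 0;
    }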
Use Of A Stack to Record Most Recent Page References

LRU Approximation Algorithms

 LRU needs special hardware and is still slow
 Reference bit
  With each page associate a bit, initially = 0
  When page is referenced, bit set to 1
  Replace any page with reference bit = 0 (if one exists)
   We do not know the order, however
 Second-chance algorithm
  Generally FIFO, plus hardware-provided reference bit
  Clock replacement
  If page to be replaced has
   Reference bit = 0 -> replace it
   reference bit = 1 then:
    – set reference bit 0, leave page in memory
    – replace next page, subject to same rules

Second-Chance (clock) Page-Replacement Algorithm

Enhanced Second-Chance Algorithm

 Improve the algorithm by using the reference bit and modify bit (if available) in concert
 Take ordered pair (reference, modify)
 1. (0, 0) neither recently used nor modified – best page to replace
 2. (0, 1) not recently used but modified – not quite as good, must write out before replacement
 3. (1, 0) recently used but clean – probably will be used again soon
 4. (1, 1) recently used and modified – probably will be used again soon and need to write out before replacement
 When page replacement is called for, use the clock scheme but with the four classes; replace a page in the lowest non-empty class
  Might need to search circular queue several times
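
A minimal sketch of the basic second-chance (clock) scan: the hand sweeps the frames, clearing reference bits until it finds one already 0. Names (refbit, pick_victim) are illustrative.

    /* Second-chance (clock) victim selection sketch. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NFRAMES 4

    static bool refbit[NFRAMES];
    static int hand = 0;                     /* clock hand */

    static int pick_victim(void)
    {
        for (;;) {
            if (!refbit[hand]) {             /* bit 0: this frame is the victim */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            refbit[hand] = false;            /* bit 1: clear it, second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void)
    {
        refbit[0] = refbit[1] = true;        /* frames 0 and 1 recently used */
        printf("victim = %d\n", pick_victim());  /* prints 2 */
        return 0;
    }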
Counting Algorithms

 Keep a counter of the number of references that have been made to each page
  Not common
 Least Frequently Used (LFU) Algorithm: replaces the page with the smallest count
 Most Frequently Used (MFU) Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used

Page-Buffering Algorithms

 Keep a pool of free frames, always
  Then a frame is available when needed, not found at fault time
  Read page into free frame and select victim to evict and add to free pool
  When convenient, evict victim
 Possibly, keep a list of modified pages
  When backing store otherwise idle, write pages there and set to non-dirty
 Possibly, keep free frame contents intact and note what is in them
  If referenced again before reused, no need to load contents again from disk
  Generally useful to reduce penalty if wrong victim frame selected

Applications and Page Replacement

 All of these algorithms have the OS guessing about future page access
 Some applications have better knowledge – i.e. databases
 Memory-intensive applications can cause double buffering
  OS keeps copy of page in memory as I/O buffer
  Application keeps page in memory for its own work
 Operating system can give direct access to the disk, getting out of the way of the applications
  Raw disk mode
  Bypasses buffering, locking, etc.

Allocation of Frames

 Each process needs a minimum number of frames
 Example: IBM 370 – 6 pages to handle SS MOVE instruction:
  instruction is 6 bytes, might span 2 pages
  2 pages to handle from
  2 pages to handle to
 Maximum of course is total frames in the system
 Two major allocation schemes
  fixed allocation
  priority allocation
 Many variations
Fixed Allocation

 Equal allocation – For example, if there are 100 frames (after allocating frames for the OS) and 5 processes, give each process 20 frames
  Keep some as a free frame buffer pool
 Proportional allocation – Allocate according to the size of the process
  Dynamic as degree of multiprogramming and process sizes change

  si = size of process pi
  S = Σ si
  m = total number of frames
  ai = allocation for pi = (si / S) × m

  Example: m = 62, s1 = 10, s2 = 127
   a1 = (10/137) × 62 ≈ 4
   a2 = (127/137) × 62 ≈ 57

Priority Allocation

 Use a proportional allocation scheme using priorities rather than size
 If process Pi generates a page fault,
  select for replacement one of its frames
  select for replacement a frame from a process with lower priority number
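
The proportional-allocation formula reduces to a one-line integer computation; this sketch reproduces the slide's numbers (m = 62, s1 = 10, s2 = 127).

    /* Proportional frame allocation: a_i = (s_i / S) * m. */
    #include <stdio.h>

    int main(void)
    {
        int m = 62, s[] = {10, 127};
        int S = s[0] + s[1];                            /* 137 */
        for (int i = 0; i < 2; i++)
            printf("a%d = %d\n", i + 1, s[i] * m / S);  /* prints 4 and 57 */
        return 0;
    }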

Global vs. Local Allocation

 Global replacement – process selects a replacement frame from the set of all frames; one process can take a frame from another
  But then process execution time can vary greatly
  But greater throughput, so more common
 Local replacement – each process selects from only its own set of allocated frames
  More consistent per-process performance
  But possibly underutilized memory

Non-Uniform Memory Access

 So far, all memory accessed equally
 Many systems are NUMA – speed of access to memory varies
  Consider system boards containing CPUs and memory, interconnected over a system bus
 Optimal performance comes from allocating memory "close to" the CPU on which the thread is scheduled
  And modifying the scheduler to schedule the thread on the same system board when possible
 Solved by Solaris by creating lgroups
  Structure to track CPU / memory low-latency groups
  Used by the scheduler and pager
  When possible, schedule all threads of a process and allocate all memory for that process within the lgroup
Thrashing

 If a process does not have "enough" pages, the page-fault rate is very high
  Page fault to get page
  Replace existing frame
  But quickly need replaced frame back
  This leads to:
   Low CPU utilization
   Operating system thinking that it needs to increase the degree of multiprogramming
   Another process added to the system
 Thrashing ≡ a process is busy swapping pages in and out

Thrashing (Cont.)

Demand Paging and Thrashing

 Why does demand paging work?
  Locality model
  Process migrates from one locality to another
  Localities may overlap
 Why does thrashing occur?
  size of locality > total memory size
 Limit effects by using local or priority page replacement

Locality In A Memory-Reference Pattern

 [Figure: memory address (page numbers) vs. execution time, showing localities]
Working-Set Model

 Δ ≡ working-set window ≡ a fixed number of page references
  Example: 10,000 instructions
 WSSi (working set of process Pi) = total number of pages referenced in the most recent Δ (varies in time)
  if Δ too small, will not encompass entire locality
  if Δ too large, will encompass several localities
  if Δ = ∞ ⇒ will encompass entire program
 D = Σ WSSi ≡ total demand frames
  Approximation of locality
 if D > m ⇒ Thrashing
 Policy: if D > m, then suspend or swap out one of the processes

Keeping Track of the Working Set

 Approximate with interval timer + a reference bit
 Example: Δ = 10,000
  Timer interrupts after every 5000 time units
  Keep in memory 2 bits for each page
  Whenever the timer interrupts, copy the reference bits and set their values to 0
  If one of the bits in memory = 1 ⇒ page in working set
 Why is this not completely accurate?
 Improvement = 10 bits and interrupt every 1000 time units
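
A sketch of the interval-timer approximation, assuming the OS shifts the hardware reference bit into a small per-page history on each tick; field names (ref, history) are illustrative.

    /* Working-set approximation via reference-bit history. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NPAGES 4

    struct page { bool ref; uint8_t history; };  /* 2 (or 10) history bits */
    static struct page pages[NPAGES];

    static void timer_tick(void)
    {
        for (int i = 0; i < NPAGES; i++) {
            /* shift ref bit into history, then clear it */
            pages[i].history = (uint8_t)((pages[i].history << 1) | pages[i].ref);
            pages[i].ref = false;
        }
    }

    static bool in_working_set(int i)
    {
        return pages[i].history != 0;            /* referenced recently */
    }

    int main(void)
    {
        pages[2].ref = true;                     /* page 2 touched this interval */
        timer_tick();
        printf("page 2 in WS: %d, page 0 in WS: %d\n",
               in_working_set(2), in_working_set(0));  /* 1 and 0 */
        return 0;
    }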

Page-Fault Frequency

 More direct approach than WSS
 Establish an "acceptable" page-fault frequency (PFF) rate and use local replacement policy
  If actual rate too low, process loses frame
  If actual rate too high, process gains frame

Working Sets and Page Fault Rates

 Direct relationship between the working set of a process and its page-fault rate
 Working set changes over time
  Peaks and valleys over time
Memory-Mapped Files

 Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory
 A file is initially read using demand paging
  A page-sized portion of the file is read from the file system into a physical page
  Subsequent reads/writes to/from the file are treated as ordinary memory accesses
 Simplifies and speeds file access by driving file I/O through memory rather than read() and write() system calls
 Also allows several processes to map the same file, allowing the pages in memory to be shared
 But when does written data make it to disk?
  Periodically and / or at file close() time
  For example, when the pager scans for dirty pages

Memory-Mapped File Technique for all I/O

 Some OSes use memory-mapped files for standard I/O
 Process can explicitly request memory mapping a file via the mmap() system call
  Now file mapped into process address space
 For standard I/O (open(), read(), write(), close()), mmap anyway
  But map file into kernel address space
  Process still does read() and write()
   Copies data to and from kernel space and user space
  Uses efficient memory management subsystem
   Avoids needing separate subsystem
 COW can be used for read/write non-shared pages
 Memory-mapped files can be used for shared memory (although again via separate system calls)
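
A minimal POSIX example of the explicit mmap() request described above: map a file and read it as ordinary memory. Error handling is trimmed, and "/etc/hostname" is just an example path; any readable file works.

    /* Memory-mapped file I/O with POSIX mmap(). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/etc/hostname", O_RDONLY);  /* example file */
        struct stat st;
        fstat(fd, &st);
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p != MAP_FAILED) {
            fwrite(p, 1, st.st_size, stdout);      /* plain memory access */
            munmap(p, st.st_size);
        }
        close(fd);
        return 0;
    }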

Memory Mapped Files

Shared Memory via Memory-Mapped I/O
Shared Memory in Windows API

 First create a file mapping for the file to be mapped
  Then establish a view of the mapped file in the process's virtual address space
 Consider producer / consumer
  Producer creates a shared-memory object using memory-mapping features
  Open file via CreateFile(), returning a HANDLE
  Create mapping via CreateFileMapping(), creating a named shared-memory object
  Create view via MapViewOfFile()
 Sample code in textbook

Allocating Kernel Memory

 Treated differently from user memory
 Often allocated from a free-memory pool
  Kernel requests memory for structures of varying sizes
  Some kernel memory needs to be contiguous
   I.e. for device I/O

Buddy System

 Allocates memory from a fixed-size segment consisting of physically contiguous pages
 Memory allocated using a power-of-2 allocator
  Satisfies requests in units sized as a power of 2
  Request rounded up to next highest power of 2
  When a smaller allocation is needed than is available, the current chunk is split into two buddies of the next-lower power of 2
   Continue until an appropriately sized chunk is available
 For example, assume a 256 KB chunk is available and the kernel requests 21 KB
  Split into AL and AR of 128 KB each
   One further divided into BL and BR of 64 KB
    One further into CL and CR of 32 KB each – one used to satisfy the request
 Advantage – quickly coalesce unused chunks into a larger chunk
 Disadvantage – fragmentation

Buddy System Allocator
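
A toy illustration of buddy sizing (not kernel code): round the request up to the next power of two, then show the splits from the 256 KB chunk down to the 32 KB piece that satisfies the 21 KB request.

    /* Buddy-system sizing and splitting for the example above. */
    #include <stdio.h>

    static unsigned next_pow2(unsigned x)
    {
        unsigned p = 1;
        while (p < x) p <<= 1;
        return p;
    }

    int main(void)
    {
        unsigned req = 21 * 1024;
        unsigned need = next_pow2(req);                 /* 32 KB */
        printf("request %u bytes -> chunk %u bytes\n", req, need);
        for (unsigned c = 256 * 1024; c > need; c >>= 1)
            printf("split %u KB into two %u KB buddies\n",
                   c / 1024, c / 2048);                 /* 256->128->64->32 */
        return 0;
    }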
Slab Allocator

 Alternate strategy
 Slab is one or more physically contiguous pages
 Cache consists of one or more slabs
 Single cache for each unique kernel data structure
  Each cache filled with objects – instantiations of the data structure
 When cache created, filled with objects marked as free
 When structures stored, objects marked as used
 If slab is full of used objects, next object allocated from an empty slab
  If no empty slabs, new slab allocated
 Benefits include no fragmentation, fast memory request satisfaction

Slab Allocation

Slab Allocator in Linux

 For example, the process descriptor is of type struct task_struct
  Approx 1.7 KB of memory
 New task -> allocate new struct from cache
  Will use an existing free struct task_struct
 Slab can be in three possible states
  1. Full – all used
  2. Empty – all free
  3. Partial – mix of free and used
 Upon request, slab allocator
  1. Uses free struct in partial slab
  2. If none, takes one from an empty slab
  3. If no empty slab, creates a new empty slab

Slab Allocator in Linux (Cont.)

 Slab started in Solaris, now widespread for both kernel-mode and user memory in various OSes
 Linux 2.2 had SLAB; Linux now has both SLOB and SLUB allocators
  SLOB for systems with limited memory
   Simple List of Blocks – maintains 3 list objects for small, medium, large objects
  SLUB is performance-optimized SLAB: removes per-CPU queues, metadata stored in page structure
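
For flavor, here is a hedged sketch of the Linux in-kernel slab interface (kmem_cache_create / kmem_cache_alloc); exact signatures vary across kernel versions, this is kernel-module code (not user-space runnable), and "my_obj" / "my_cache" are illustrative names.

    /* Slab-cache usage sketch for kernel code; version-dependent API. */
    #include <linux/slab.h>

    struct my_obj { int id; char name[32]; };   /* example kernel structure */

    static struct kmem_cache *my_cache;

    static int setup(void)
    {
        /* one cache per unique structure type, filled with free objects */
        my_cache = kmem_cache_create("my_obj", sizeof(struct my_obj),
                                     0, 0, NULL);
        if (!my_cache)
            return -ENOMEM;

        struct my_obj *o = kmem_cache_alloc(my_cache, GFP_KERNEL);
        if (o)
            kmem_cache_free(my_cache, o);       /* object marked free again */
        kmem_cache_destroy(my_cache);
        return 0;
    }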
Other Considerations – Prepaging

 Prepaging
  To reduce the large number of page faults that occur at process startup
  Prepage all or some of the pages a process will need, before they are referenced
  But if prepaged pages are unused, I/O and memory were wasted
  Assume s pages are prepaged and a fraction α of those pages is used
   Is the cost of the s × α saved page faults > or < the cost of prepaging the s × (1 – α) unnecessary pages?
   α near zero ⇒ prepaging loses

Other Issues – Page Size

 Sometimes OS designers have a choice
  Especially if running on a custom-built CPU
 Page size selection must take into consideration:
  Fragmentation
  Page table size
  Resolution
  I/O overhead
  Number of page faults
  Locality
  TLB size and effectiveness
 Always a power of 2, usually in the range 2^12 (4,096 bytes) to 2^22 (4,194,304 bytes)
 On average, growing over time

Other Issues – TLB Reach

 TLB Reach – the amount of memory accessible from the TLB
  TLB Reach = (TLB Size) × (Page Size)
 Ideally, the working set of each process is stored in the TLB
  Otherwise there is a high degree of page faults
 Increase the Page Size
  This may lead to an increase in fragmentation as not all applications require a large page size
 Provide Multiple Page Sizes
  This allows applications that require larger page sizes the opportunity to use them without an increase in fragmentation

Other Issues – Program Structure

 Program structure
  int data[128][128];
  Each row is stored in one page
  Program 1
       for (j = 0; j < 128; j++)
           for (i = 0; i < 128; i++)
               data[i][j] = 0;
   128 x 128 = 16,384 page faults
  Program 2
       for (i = 0; i < 128; i++)
           for (j = 0; j < 128; j++)
               data[i][j] = 0;
   128 page faults
Other Issues – I/O Interlock

 I/O Interlock – pages must sometimes be locked into memory
 Consider I/O – pages that are used for copying a file from a device must be locked from being selected for eviction by a page replacement algorithm
 Pinning of pages locks them into memory

Operating System Examples

 Windows
 Solaris

Windows

 Uses demand paging with clustering; clustering brings in pages surrounding the faulting page
 Processes are assigned a working set minimum and working set maximum
  Working set minimum is the minimum number of pages the process is guaranteed to have in memory
  A process may be assigned as many pages up to its working set maximum
 When the amount of free memory in the system falls below a threshold, automatic working set trimming is performed to restore the amount of free memory
  Working set trimming removes pages from processes that have pages in excess of their working set minimum

Solaris

 Maintains a list of free pages to assign to faulting processes
  Lotsfree – threshold parameter (amount of free memory) to begin paging
  Desfree – threshold parameter to increase paging
  Minfree – threshold parameter to begin swapping
 Paging is performed by the pageout process
  Pageout scans pages using a modified clock algorithm
  Scanrate is the rate at which pages are scanned; this ranges from slowscan to fastscan
  Pageout is called more frequently depending upon the amount of free memory available
 Priority paging gives priority to process code pages
Solaris 2 Page Scanner

End of Chapter 9

