MEMORY MANAGEMENT
12/27/2016 prepared by: Roshan Nandan 1
BASIC MEMORY MANAGEMENT
• Memory management systems can be divided into two classes: those
that move processes back and forth between main memory and disk
during execution (swapping and paging), and those that do not.
• Mono-programming without Swapping or Paging:
• The simplest possible memory management scheme is to run just one
program at a time, sharing the memory between that program and the
operating system.
• Three variations on this theme are shown in Fig:
• The operating system may be at the bottom of memory in RAM (Random
Access Memory), or it may be in ROM (Read-only Memory) at the top of
memory, or the device drivers may be at the top of memory in a ROM and
the rest of the system in RAM down below.
• The OS reads the program in from disk, and it is executed.
• Only one program is in main memory at a time, and it can use the whole of
available memory.
• To protect one process from another, their address spaces must be separated
by the memory management scheme. An active process should never be able
to access, erroneously or maliciously, the contents of another process's
address space.
• Here, protection of the OS program from user code is a must; otherwise
the system will crash. This protection is provided by a special
hardware mechanism, such as a dedicated register called a Fence
Register.
• A fixed fence address is difficult to keep if the OS is upgraded, so it is
replaced by the reloadable Fence Register.
• As shown in the figure above, the OS usually resides in the low memory
area.
• The Fence Register is set to the highest address occupied by OS code.
• Every memory address generated by the user program to access a
memory location is compared with the fence register's contents. If the
generated address is below the fence, the access is trapped and denied.
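The fence check described above amounts to a single comparison in hardware; a minimal Python sketch (the fence value here is an assumed example, not from the slides):

```python
FENCE = 0x4000  # assumed example: highest address occupied by OS code

def check_access(addr):
    """Compare a user-generated address with the fence register's contents."""
    if addr <= FENCE:  # at or below the fence -> inside the OS region
        raise MemoryError("trap: access below fence denied")
    return addr        # legal address; forward it to memory

print(hex(check_access(0x5000)))  # 0x5000 passes the check
```

An address such as 0x1000 would instead raise the trap, modeling the hardware denying the access.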
Basic Hardware:
• Program must be brought (from disk) into memory and placed within
a process for it to be run
• Main memory and registers are the only storage the CPU can access directly
• Register access in one CPU clock (or less)
• Main memory can take many cycles
• Cache sits between main memory and CPU registers
• Protection of memory required to ensure correct operation
• A pair of base and limit registers define the logical (virtual) address
space
Example from the figure: base = 300040 and limit = 120900, so the process may legally access addresses 300040 through 420939 (420940 - 300040 = 120900, the logical address space).
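Using base = 300040 and limit = 120900 from the figure, the hardware's legality check is a single range comparison; a minimal sketch:

```python
BASE, LIMIT = 300040, 120900  # values from the figure

def legal(addr):
    """The CPU hardware admits an address iff BASE <= addr < BASE + LIMIT."""
    return BASE <= addr < BASE + LIMIT

print(legal(300040), legal(420939))  # first and last legal addresses
print(legal(420940), legal(299999))  # one past the end; below the base
```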
Address Binding:
• Process of mapping logical address to real physical address in the memory is called
address binding.
• Input queue = the collection of processes on the disk that are waiting to be brought into
memory for execution
• Processes can normally reside in any part of the physical memory
• Addresses in the source program are generally symbolic (‘count’)
• A compiler binds these symbolic addresses to re-locatable addresses (’14 bytes from the
beginning of this module’)
• The linkage editor / loader binds these re-locatable addresses to absolute addresses
(‘74014’)
• Address binding of instructions and data to memory addresses can happen at three
different stages
• Compile time:
• When it is known at compile time where the process will reside, compile
time binding is used to generate the absolute code.
• Must recompile code if starting location changes
• Load time:
• When it is not known at compile time where the process will reside in
memory, then the compiler generates re-locatable code
• Execution time:
• If the process can be moved during its execution from one memory
segment to another, then binding must be delayed to be done at run time
• Need hardware support for address maps (e.g., base and limit registers)
• Steps a user program needs to go through (some optional) before being
executed
Logical Versus Physical Address Space:
• Logical address = one generated by the CPU
• Physical address = one seen by the memory unit, and loaded into the
memory-address register of the memory
• The compile-time and load-time address-binding methods generate
identical logical & physical addresses
• The execution-time address-binding scheme results in differing
logical (= ‘virtual’) & physical addresses
• Logical(virtual)-address space = the set of all logical addresses
generated by a program
• Physical-address space = the set of all physical addresses
corresponding to these logical addresses
• Memory-management unit (MMU) = a hardware device that does
the run-time mapping from virtual to physical addresses
• The MMU:
• Hardware device that maps virtual to physical address
• In MMU scheme, the value in the relocation (base) register is added to every
address generated by a user process at the time it is sent to memory
• The user program deals with logical addresses; it never sees the real physical
addresses
• Dynamic relocation using a relocation register
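Dynamic relocation with a relocation register can be sketched as follows; the relocation value 14000 and limit 3000 are assumed example numbers:

```python
class MMU:
    """Sketch of an MMU with a relocation (base) register and a limit register."""
    def __init__(self, relocation, limit):
        self.relocation = relocation  # added to every logical address
        self.limit = limit            # size of the logical address space

    def translate(self, logical):
        if not 0 <= logical < self.limit:
            raise MemoryError("addressing error: trap to the OS")
        return logical + self.relocation  # physical address sent to memory

mmu = MMU(relocation=14000, limit=3000)
print(mmu.translate(346))  # 14000 + 346 = 14346
```

The user program only ever manipulates the logical address 346; the physical address 14346 is produced by the hardware and never seen by the program.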
Dynamic Loading:
• As we have seen so far, the entire program and all data of a process must be in
physical memory for the process to execute. The size of a process is thus limited
to the size of physical memory. To obtain better memory-space utilization, we can
use dynamic loading
• With dynamic loading, a routine is not loaded until it is called
• All routines are kept on disk in a re-locatable load format
• The main program is loaded into memory and is executed
• When a routine needs to call another one, the calling routine first checks to see
whether the other routine has been loaded
• If not, the re-locatable linking loader is called to load the desired routine into
memory
• Then, control is passed to the newly loaded routine
Advantage of dynamic loading:
• An unused routine is never loaded
• Dynamic loading doesn't require special support from the OS
• The user must design his programs to take advantage of this
• However, OS's may help the programmer by providing library routines to
implement dynamic loading
Memory Allocation Technique
Memory allocation
1. contiguous 2. Non- contiguous
a. Fixed partition a. Paging
allocation
b. Variable partition b. Segmentation
allocation
Multiprogramming with Fixed Partitions
• Mono-programming is sometimes used on small computers with
simple operating systems
• On timesharing systems, multiple processes are kept in memory at
once, so that when one process is blocked waiting for I/O
to finish, another can use the CPU. Thus multiprogramming
increases CPU utilization.
• The easiest way to achieve multiprogramming is simply to divide
memory up into n (possibly unequal) partitions.
• The disadvantage of sorting incoming jobs into separate queues becomes apparent
when the queue for a large partition is empty but the queue for a small partition is full,
as is the case for partitions 1 and 3 in Fig. 4-2(a)
• To avoid this, a single queue can be maintained instead.
• Whenever a partition becomes free, the job closest to the front of the queue that fits in
it is loaded into the empty partition and run.
• Memory is divided into several fixed-size partitions, where each partition
accommodates only one program at a time.
• The number of programs residing in memory (i.e., the degree of multiprogramming) is
bounded by the number of partitions.
• When a program terminates, its partition is freed for another program waiting in the queue.
• Once the partitions are defined, the OS keeps track of the status (allocated/free) of the
memory partitions through a data structure called the partition
description table (PDT).
• To load a new process, the OS searches the PDT for a free memory partition; if the
search succeeds, the entry is marked "ALLOCATED". When the process terminates or
is swapped out, the entry is updated to "FREE".
• Most common strategies to allocate free partitions to the new process :
• First Fit
• Best Fit
• Worst Fit
• Next Fit
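Three of these strategies can be sketched as functions over a list of free partitions; the hole list below is an assumed example:

```python
def first_fit(holes, size):
    """Index of the first hole large enough, else None. holes: [(start, length)]."""
    for i, (_, length) in enumerate(holes):
        if length >= size:
            return i
    return None

def best_fit(holes, size):
    """Smallest hole that still fits (least leftover space)."""
    fits = [(length, i) for i, (_, length) in enumerate(holes) if length >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Largest hole that fits (leaves the biggest leftover)."""
    fits = [(length, i) for i, (_, length) in enumerate(holes) if length >= size]
    return max(fits)[1] if fits else None

holes = [(0, 100), (300, 500), (900, 200)]  # assumed free partitions
print(first_fit(holes, 150))  # 1 (first hole that is big enough)
print(best_fit(holes, 150))   # 2 (the 200-unit hole wastes the least)
print(worst_fit(holes, 150))  # 1 (the 500-unit hole is the largest)
```

Next Fit works like First Fit, but resumes the scan from where the previous search stopped instead of always starting at the beginning.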
Multiprogramming with Variable Partitions
• It is a solution to the wastage of memory in fixed partitions
• It allows jobs to occupy as much space as they need
• A process can grow if there is an adjacent hole.
• Otherwise, the growing process is moved to a hole large enough for
it, or one or more processes are swapped out to make room.
• It still has some degree of waste:
• When a process finishes and leaves a hole, the hole may not be large
enough to place a new job.
• Thus, in variable-partition multiprogramming, waste does occur.
• The following two activities can reduce the wastage of
memory:
• Coalescing
• Compaction
• Coalescing: The process of merging two adjacent holes to form a
single larger hole is called coalescing.
• Compaction:
• Even when holes are coalesced, no individual hole may be large
enough to hold the job, although the sum of the holes is larger than the
storage required by the process.
• It is possible to combine all the holes into one big one by moving all the
processes downward as far as possible; this technique is called
memory compaction.
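Coalescing reduces to merging entries whose start plus length touches the next hole's start; a sketch over an assumed hole list:

```python
def coalesce(holes):
    """Merge adjacent holes; holes is a list of (start, length) pairs."""
    merged = []
    for start, length in sorted(holes):
        if merged and merged[-1][0] + merged[-1][1] == start:  # touching?
            merged[-1] = (merged[-1][0], merged[-1][1] + length)
        else:
            merged.append((start, length))
    return merged

print(coalesce([(0, 100), (100, 50), (400, 20)]))  # [(0, 150), (400, 20)]
```

Compaction goes further: it relocates the processes themselves so that all free space ends up in one block, which coalescing alone cannot achieve.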
Swapping
• Swapping is a mechanism in which a process can be swapped
temporarily out of main memory to a backing store, and then brought
back into memory for continued execution
• This process is done if there is not enough main memory to hold all
currently active processes.
• The backing store is usually a hard disk drive or other secondary
storage that is fast to access and large enough to accommodate copies
of all memory images for all users.
• The total transfer time is directly proportional to the amount of memory
swapped. Let us assume that the user process is of size 100 KB and the
backing store is a standard hard disk with a transfer rate of 1 MB per
second. The actual transfer of the 100 KB process to or from memory will take:
• 100 KB / 1000 KB per second
= 1/10 second
= 100 milliseconds
Memory Management with Bit Maps
• When memory is assigned dynamically, the operating system must manage it. In general terms,
there are two ways to keep track of memory usage: bit maps and linked lists
• With bit maps, memory is divided up into allocation units (possibly from a few words to several KB)
• Corresponding to each allocation unit is a bit in the bit map (0 indicates free, 1 indicates occupied)
• The smaller the allocation unit, the larger the bit map.
• Disadvantage: searching the bitmap for a run of free units of a given length (say, k)
is a slow operation.
Memory Management with Linked Lists
• Each entry in the list specifies a hole (H) or a process (P), the address at
which it starts, its length, and a pointer to the next entry.
• In the diagram below:
• (a) Process X is replaced by a hole H
• (b) and (c) Two entries are coalesced
• (d) Three entries are merged into one
Non-contiguous storage Allocation
• It was decided to have non-contiguous physical address space of a
process because:
• To permit sharing of code and data among processes.
• To resolve the problem of external fragmentation
• To enhance the degree of multiprogramming
• To support the virtual memory concept
• However it involves complex implementation and an additional costs
in terms of memory and processing
• Three main methods :
• Paging
• Segmentation
• Segmentation with paging
Paging
• Paging is a memory management technique in which the memory is
divided into fixed size pages.
• Paging is used for faster access to data.
• It is a logical concept
• Permits the physical-address space of a process to be noncontiguous
• Traditionally: support for paging has been handled by hardware
• Recent designs: the hardware & OS are closely integrated
Basic Method :
• Physical memory is broken into fixed-sized blocks called frames
• Logical memory is broken into same-sized blocks called pages
• When a process is to be executed, its pages are loaded into any available
memory frames from the backing store
• The backing store has blocks the same size as the memory frames
• Every address generated by the CPU is divided into 2 parts:
• a page number (to index the page table)
• a page offset
• The page table contains the base address of each page in memory
• This base address is combined with the page offset to define the physical
memory address that is sent to the memory unit
• The page size, defined by the hardware, is usually a power of 2
• Paging schemes have no external, but some internal fragmentation
• Small page sizes mean less internal fragmentation
• However, there is less overhead involved as page size increases
• Also, disk I/O is more efficient with larger data transfers
• When a process arrives in the system to be executed, its size, expressed in pages,
is examined
• (Noncontiguous) frames are allocated to each page of the process
• The frame table contains entries for each physical page frame, indicating which
are allocated to which pages of which processes
Address Translation Scheme:
• Address generated by CPU is divided into
• Page number (p) – used as an index into a page table which contains base
address of each page in physical memory
• Page offset (d) – combined with base address to define the physical memory
address that is sent to the memory unit
• For a logical address space of size 2^m and a page size of 2^n, the high-order
m − n bits of a logical address give the page number and the low-order n bits
give the page offset
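The bit-field split, and the page-table lookup that follows it, can be sketched as below; m = 16, n = 10, and the one-entry page table are assumed example values:

```python
M, N = 16, 10  # assumed: 2^16-byte logical space, 2^10-byte pages

def split(logical):
    """Split a logical address into (page number p, offset d)."""
    p = logical >> N               # high-order m - n bits
    d = logical & ((1 << N) - 1)   # low-order n bits
    return p, d

page_table = {10: 3}  # assumed: page 10 resides in frame 3

def translate(logical):
    p, d = split(logical)
    return (page_table[p] << N) | d  # frame base combined with the offset

print(split(0x2ABC))      # (10, 700)
print(translate(0x2ABC))  # 3*1024 + 700 = 3772
```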
• An illustration of the hardware that supports paging:
Mapping:
Logical and physical addresses: example
Page translation
Protection:
• Memory protection is achieved by protection bits for each frame
• Normally, these bits are kept in the page table
• One bit can define a page to be read-write or read-only
• Every reference to memory goes through the page table to find the correct frame number, so the
protection bits can be checked
• A valid-invalid bit is usually attached to each page table entry
• ‘Valid’: the page is in the process’ logical-address space
• ‘Invalid’: the page is not in the process’ logical-address space
• Illegal addresses are trapped by using the valid-invalid bit
• Valid (v) or Invalid (i) Bit in a Page Table
• Many processes use only a small fraction of the address space available to them, so it’s wasteful
to create a page table with entries for every page in the address range
• A page-table length register (PTLR) can indicate the size of the page table
Paging hardware
• In the simplest case, the page table is implemented as a set of dedicated registers
• The CPU dispatcher reloads these registers just like the others
• Instructions to load / modify the page-table registers are privileged, so that only the OS can change the
memory map
• Disadvantage: works only if the page table is reasonably small
• The page table is kept in memory, and a page-table base register (PTBR) points to the page table
• Changing page tables requires changing only this one register, substantially reducing context-switch time
• Disadvantage: two memory accesses are needed to access one byte
• Use a small, fast-lookup hardware cache: the translation look-aside buffer (TLB)
• The TLB is associative, high-speed memory
• Each entry in the TLB consists of a key and a value
• When the associative memory is presented with an item, it is compared with all keys simultaneously
• If the item is found, the corresponding value field is returned
• The search is fast, but the hardware is expensive
• The TLB is used with page tables in the following way:
• When a logical address is generated by the CPU, its page number is presented to the TLB
• If the page number is found, its frame number is immediately available and is used to access memory
• If the page number is not in the TLB, a memory reference to the page table must be made
• The obtained frame number can be used to access memory
• If the TLB is full of entries, the OS must replace one
• Some TLBs have wired down entries that can't be removed
• Some TLBs store address-space identifiers (ASIDs) in each entry; an ASID
uniquely identifies each process and provides address-space protection for
that process
• Paging Hardware with TLB:
• The percentage of times that a particular page number is found in the TLB is called the
hit ratio
• Effective access time:
• Associative Lookup = ε time unit
• Assume memory cycle time is 1 microsecond
• Hit ratio (α) – percentage of times that a page number is found in the associative registers;
the ratio is related to the number of associative registers
• Effective Access Time (EAT):
• EAT = (1 + ε)α + (2 + ε)(1 − α)
•       = 2 + ε − α
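The EAT formula is a hit/miss weighted average; a sketch with assumed timing values:

```python
def eat(epsilon, alpha, mem=1.0):
    """Effective access time with a TLB (all times in microseconds).
    epsilon: TLB lookup, alpha: hit ratio, mem: memory cycle time."""
    hit = (mem + epsilon) * alpha             # hit: TLB lookup + one memory access
    miss = (2 * mem + epsilon) * (1 - alpha)  # miss: TLB + page table + access
    return hit + miss

print(eat(epsilon=0.02, alpha=0.80))  # 0.8*1.02 + 0.2*2.02 = 1.22
```

With mem = 1, this reduces to the closed form 2 + ε − α given above.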
SEGMENTATION
• Segmentation is a technique to break memory into logical pieces
where each piece represents a group of related information.
• For example, data segments or code segment for each process, data
segment for operating system and so on.
• Segmentation can be implemented with or without paging.
• Memory-management scheme that supports the users’ view of
memory
• A program is a collection of segments. A segment is a logical unit
such as: main program, procedure, function, method, object, local
variables, global variables, common block, stack, symbol table, arrays
Basic Method
• Segmentation is a memory-management scheme that supports this
user view of memory
• A logical address space is a collection of segments
• The user's view of a Program:
• The user specifies each address by: a segment name and an offset
• (Segments are implemented with numbers rather than names)
• Logical view of segmentation:
• Segmentation is one of the most common ways to achieve memory
protection.
• With paging, internal fragmentation takes place and the user's view
of memory is lost
• The user will view the memory as a combination of segments
• In this type, memory addresses used are not contiguous
• Each memory segment is associated with a specific length and a set of
permissions.
• When a process tries to access the memory it is first checked to see
whether it has the required permission to access the particular
memory segment and whether it is within the length specified by
that particular memory segment.
Logical Addressing in segmentation
Logical to Physical Address Translation in
Segmentation:
Segmentation Hardware:
• Segment table - maps two-dimensional user-defined addresses into
one-dimensional physical addresses
• base - starting physical address of the segment
• limit - length of segment
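The base/limit lookup can be sketched directly; the segment-table values below are an assumed example:

```python
# segment table indexed by segment number: (base, limit); assumed values
seg_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def seg_translate(s, d):
    """Map (segment number s, offset d) to a physical address, or trap."""
    base, limit = seg_table[s]
    if d >= limit:                 # offset must fall within the segment
        raise MemoryError("trap: offset beyond end of segment")
    return base + d

print(seg_translate(2, 53))  # 4300 + 53 = 4353
```

A reference such as offset 500 in segment 1 (limit 400) would trap, which is exactly the protection check described below.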
Example of segmentation:
Protection and Sharing:
• Segmentation lends itself to the implementation of protection and
sharing policies
• Each entry has a base address and length so inadvertent memory
access can be controlled
• Sharing can be achieved by having multiple processes reference the same segment
• Two processes that need to share access to a single segment would
reference it with the same base address and limit in their segment tables.
Disadvantages :
• External fragmentation.
• Costly memory management algorithm
• Unequal size of segments is not good in the case of swapping.
Segmentation with Paging
• In a combined paging/segmentation system, a user's address space is
broken up into a number of segments. Each segment is broken up into a
number of fixed-size pages, which are equal in length to a main memory frame.
• Segmentation is visible to the programmer
• Paging is transparent to the programmer
IMPLEMENTING SEGMENTATION
A logical address is divided into three fields:
• Segment number: used to index the segment table, whose entry gives the
starting address of the page table for that segment.
• Page number: used to index that page table to obtain the corresponding
frame number.
• Offset: used to locate the word within the frame.
ADDRESSES IN A SEGMENTED PAGING
SYSTEM:
• The segment number indexes into the segment table which yields the
base address of the page table for that segment.
• Check the remainder of the address (page number and offset) against
the limit of the segment.
• Use the page number to index the page table. The entry is the
frame. (The rest of this is just like paging.)
• Concatenate the frame and the offset to get the physical address.
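Those steps can be sketched end to end; the page size, segment limits, and page tables below are assumed example values:

```python
PAGE = 256  # assumed page size in bytes

# per-segment (limit, page_table); frame numbers are assumed
segments = [
    (1024, [5, 9, 2, 7]),  # segment 0: four pages
    (512,  [3, 1]),        # segment 1: two pages
]

def sp_translate(s, addr):
    """Segment -> page table -> frame, then concatenate the offset."""
    limit, page_table = segments[s]
    if addr >= limit:                  # check against the segment limit
        raise MemoryError("trap: address beyond segment limit")
    page, offset = divmod(addr, PAGE)  # split the remainder of the address
    frame = page_table[page]           # index the segment's page table
    return frame * PAGE + offset       # physical address

print(sp_translate(0, 300))  # page 1, offset 44 -> frame 9 -> 2348
```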
ADDRESS TRANSLATION:
Page Replacement
• As we increase our degree of multiprogramming, we are over-allocating
memory:
• While a process is executing, a page fault occurs
• The hardware traps to the OS, which checks its internal tables to see that
this page fault is a genuine one
• The OS determines where the desired page is residing on disk, but then
finds no free frames on the free- frame list
• The OS then could:
• Terminate the user process (Not a good idea)
• Swap out a process, freeing all its frames, and reducing the level of
multiprogramming
• Perform page replacement
• The need for page replacement arises:
• Prevent over-allocation of memory by modifying page-fault service
routine to include page replacement
• Use modify (dirty) bit to reduce overhead of page transfers -only
modified pages are written to disk
• Page replacement completes separation between logical memory
and physical memory -large virtual memory can be provided on a
smaller physical memory
Basic Page Replacement :
• Basic page replacement approach:
• If no frame is free, we find one that is not being used and free it
• Page replacement takes the following steps:
• Find the location of the desired page on the disk
• Find a free frame:
• If there is a free frame, use it, else
• Select a victim frame with a page-replacement algorithm
• Write the victim page to the disk and change the page & frame tables
accordingly
• Read the desired page into the (newly) free frame and change the
page & frame tables
• Restart the user process
• Note: if no frames are free, two page transfers (one out & one in) are required, which
doubles the page-fault service time and increases the effective access time accordingly
• We can reduce this overhead by using a modify / dirty bit:
• When a page is modified, its modify bit is set
• If the bit is set, the page must be written to disk
• If the bit is not set, you don't need to write it to disk since it is already there, which reduces I/O
time
• We must solve two major problems to implement demand paging:
• Develop a frame-allocation algorithm
• If we have multiple processes in memory, we must decide how many frames to allocate to each process
• Develop a page-replacement algorithm
• When page replacement is required, we must select the frames that are to be replaced
• When selecting a particular algorithm, we want the one with the lowest page-fault rate
• To evaluate an algorithm, run it on a reference string (a string of memory references)
and compute the number of page faults
• In all our examples, the reference string is
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
• You can generate reference strings or trace a given system and record the address of
each memory reference
• FIFO Page Replacement :
• The simplest page-replacement algorithm is the first-in, first-out
(FIFO) algorithm
• A FIFO replacement algorithm associates with each page the time
when that page was brought into memory
• When a page must be replaced, the oldest page is chosen
• Notice it is not strictly necessary to record the time when a page is
brought in
• We can create a FIFO queue to hold all pages in memory
• We replace the page at the head of the queue
• When a page is brought into memory, we insert it at the tail of the
queue
• Easy to understand and implement
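The FIFO queue described above can be simulated directly; run on the deck's reference string it also exhibits Belady's anomaly (more frames, more faults):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                        # hit: nothing to do
        faults += 1
        if len(frames) == nframes:
            frames.remove(queue.popleft())  # evict the page at the head (oldest)
        frames.add(page)
        queue.append(page)                  # newly loaded page goes to the tail
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 (Belady's anomaly)
```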
• Second-Chance (clock) Page-Replacement Algorithm:
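A minimal sketch of the clock variant: pages sit in a circular list, and the hand clears reference bits until it finds a page whose bit is already 0:

```python
def second_chance(refs, nframes):
    """Second-chance (clock): FIFO, but a set reference bit buys another pass."""
    frames = []          # circular list of [page, reference_bit]
    hand, faults = 0, 0
    for page in refs:
        entry = next((f for f in frames if f[0] == page), None)
        if entry:
            entry[1] = 1                # hit: set the reference bit
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append([page, 1])    # free frame available
            continue
        while frames[hand][1]:          # referenced pages get a second chance
            frames[hand][1] = 0
            hand = (hand + 1) % nframes
        frames[hand] = [page, 1]        # replace the victim
        hand = (hand + 1) % nframes
    return faults

print(second_chance([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9
```

On this particular string second chance happens to match FIFO's fault count; its benefit shows up when recently referenced pages would otherwise be evicted purely by arrival order.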