
MEMORY MANAGEMENT

UNIT : 2
Computer Memory
 Computer memory is any physical device capable of
storing information temporarily or permanently.
What is called Memory Management?
 Memory management is the functionality of an
operating system which handles or manages primary
memory and moves processes back and forth between
main memory and disk during execution.
 Memory management keeps track of each and every
memory location, regardless of whether it is allocated to
some process or free.
 It checks how much memory is to be allocated to
processes.
 It decides which process will get memory at what time.
 It tracks when memory is freed or unallocated and
updates the status accordingly.
Main Memory
Memory Management Requirements
 1. Relocation:
 Programmer does not know where the program will be
placed in memory when it is executed
 While the program is executing, it may be swapped to
disk and returned to main memory at a different
location (relocated)
 2. Protection:
 Processes should not be able to reference memory
locations in another process without permission.
 The memory protection requirement must be satisfied by the
processor (hardware) rather than the operating system
(software).
 The operating system cannot anticipate all of the memory
references a program will make.
 3. Sharing:
 Allow several processes to access the same portion of
memory
 It is better to allow each process to access the same copy of
the program rather than have its own separate copy.
 4. Logical Organization:
 Programs are written in modules
 Modules can be written and compiled independently
 Different degrees of protection given to modules (read-
only, execute-only) – Share modules among processes.
 5. Physical Organization:
 A two-level organization is used here, i.e. main memory
and secondary memory.
 The user/programmer cannot know how a program or
process will be split across the levels.
 Programmer does not know how much space will be
available
Swapping
 A process must be in memory to be executed.
 A process, however, can be swapped temporarily out of
memory to a backing store and then brought back into
memory for continued execution.
 For example, assume a multiprogramming environment.
 The memory manager will start to swap out the process
that just finished and to swap another process into the
memory space that has been freed.
Swapping
 A variant of this swapping policy is used for priority-
based scheduling algorithms.
 If a higher-priority process arrives and wants service, the
memory manager can swap out the lower-priority
process and then load and execute the higher-priority
process.
 When the higher-priority process finishes, the lower-
priority process can be swapped back in and continued.
 This variant of swapping is sometimes called roll-out and
roll-in.
Memory Allocation:
 One of the simplest methods for allocating memory is to
divide memory into several fixed-sized partitions.
 Each partition may contain exactly one process. Thus, the
degree of multiprogramming is bound by the number of
partitions.
 In this multiple-partition method, when a partition is free,
a process is selected from the input queue and is loaded
into the free partition.
 When the process terminates, the partition becomes
available for another process.
Contiguous memory allocation
 In general, the memory blocks available comprise a set
of holes of various sizes scattered throughout memory.
 When a process arrives and needs memory, the system
searches the set for a hole that is large enough for this
process.
 If the hole is too large, it is split into two parts.
 One part is allocated to the arriving process; the other is
returned to the set of holes.
 When a process terminates, it releases its block of
memory, which is then placed back in the set of holes.
Contiguous memory allocation Techniques
 First Fit
 Best Fit
 Worst Fit
First Fit
 Allocate the first hole that is big enough.
 Searching can start either at the beginning of the set of
holes or at the location where the previous first-fit search
ended.
 We can stop searching as soon as we find a free hole
that is large enough.
First Fit
 In this method, the first job (process) claims the first
available memory partition with space greater than or equal to
its size.
 The operating system does not search for the most
appropriate partition; it simply allocates the
job (process) to the nearest memory partition
available with sufficient size.
 Job (process) size <= memory space
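The first-fit rule above can be sketched as a small simulation. The hole sizes are made-up examples and `first_fit` is an illustrative helper, not a real OS API:

```python
def first_fit(holes, job_size):
    """Return the index of the first hole large enough for the job, else None."""
    for i, hole in enumerate(holes):
        if hole >= job_size:
            return i
    return None

# Hypothetical free partitions (in KB) and an arriving 120 KB job.
holes = [100, 500, 200, 300]
print(first_fit(holes, 120))  # 1 -> the 500 KB hole is the first that fits
```

Note that the 500 KB hole is taken even though the 200 KB hole would fit better, which is exactly the waste discussed below.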
Example
First Fit Advantages/Disadvantages
 Advantages of First-Fit Memory Allocation:
It is fast in processing. Because the allocator takes the first
sufficiently large partition, little searching is needed.
 Disadvantages of First-Fit Memory Allocation:
It can waste a lot of memory. The allocator ignores how large
the chosen partition is compared to the job, so much of the
partition may go unused; as a result, other jobs may not get
space in memory and must wait for a job to complete.
 Internal fragmentation (wasted memory)
Best Fit
 Allocate the smallest hole that is big enough.
 We must search the entire list, unless the list is ordered
by size.
 This strategy produces the smallest leftover hole.
Best Fit
 In this method, the operating system first searches
the whole memory according to the size of the
given job (process) and allocates it to the closest-
fitting free partition, making efficient use of memory.
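Best fit can be sketched the same way (the hole sizes are made-up examples):

```python
def best_fit(holes, job_size):
    """Return the index of the smallest hole that still fits the job, else None."""
    best = None
    for i, hole in enumerate(holes):
        if hole >= job_size and (best is None or hole < holes[best]):
            best = i
    return best

holes = [100, 500, 200, 300]
print(best_fit(holes, 120))  # 2 -> the 200 KB hole is the closest fit
```

Unlike first fit, the whole list must be scanned before choosing, which is the slowness noted below.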
Best Fit Example
Best Fit-Advantages/Disadvantages
 Advantages of Best-Fit Allocation :
Memory efficient. The operating system allocates the job the
smallest possible partition, so memory management is very
efficient and little memory is wasted.
 Disadvantages of Best-Fit Allocation :
It is a slow process. Checking the whole memory for each job
makes allocation very slow.
Worst Fit
 Allocate the largest hole.
 Again, we must search the entire list, unless it is sorted by
size.
 This strategy produces the largest leftover hole, which
may be more useful than the smaller leftover hole from a
best-fit approach.
Worst Fit
 In this allocation technique, the allocator traverses the
whole memory, always searches for the largest
hole/partition, and then places the process in
that hole/partition. It is slow because the entire
memory must be traversed to find the largest hole.
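Worst fit, sketched under the same assumptions (hypothetical hole sizes, illustrative helper):

```python
def worst_fit(holes, job_size):
    """Return the index of the largest hole, provided it fits the job, else None."""
    worst = max(range(len(holes)), key=lambda i: holes[i], default=None)
    if worst is not None and holes[worst] >= job_size:
        return worst
    return None

holes = [100, 500, 200, 300]
print(worst_fit(holes, 120))  # 1 -> the 500 KB hole is the largest
```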
Worst Fit-Example
Worst Fit Disadvantages
 Disadvantages of Worst-Fit Allocation :
It is a slow process: all partitions in memory must be
traversed and the largest one selected, which is
time-consuming.
Memory De-allocation
 Memory deallocation is the process of freeing up
memory that is no longer needed by a program.
This is important because it allows the memory to be
used by other programs.
Paging
 Paging is a non-contiguous memory allocation
technique.
 Paging avoids external fragmentation and the need
for compaction.
 The basic method for implementing paging involves
breaking physical memory into fixed-sized blocks
called frames and breaking logical memory into
blocks of the same size called pages.
 When a process is to be executed, its pages are
loaded into any available memory frames.
Paging
 Logical Address: A logical address, also known as
a virtual address, is an address generated by the
CPU during program execution.
 Every address generated by the CPU is divided into
two parts: a page number (p) and a page offset (d).
 Page number: The page number is used as an
index into a Page Table.
 Page offset: The displacement of the word within the
page; the number of offset bits is determined by the page size.
Paging
 Physical Address: A physical address is the actual
address in main memory where data is stored.
 Physical address is divided into two parts:
a Frame number (F) and a Frame offset (d).
 Frame number: The index of the frame in main memory.
 Frame offset: The displacement of the word within the
frame; the number of offset bits is determined by the frame size.
Paging
 Page Table: page table is a data structure used by
the operating system to keep track of the mapping
between logical addresses used by a process and
the corresponding physical addresses in the system’s
memory.
Page number (index)   Frame number
0                     4
1                     3
2                     8
3                     9
Paging
 Frame Table:
The frame table is a data structure that keeps track of
which physical frames are allocated and which are free.

Frame number   Status
1              Busy
2              Free
3              Free
4              Busy
Paging
 Logical to Physical address translation
The translation from logical to physical addresses
is performed by the memory management unit (MMU),
a hardware component managed by the operating system.
The MMU uses a page table to translate logical
addresses into physical addresses. The page table maps
each logical page number to a physical frame number.
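This translation can be mimicked in a few lines. The 4 KB page size is an assumed example value, and the page table is the hypothetical one from the earlier table:

```python
PAGE_SIZE = 4096  # assumed 4 KB pages (frame size is the same)

# Hypothetical page table: page number -> frame number
page_table = {0: 4, 1: 3, 2: 8, 3: 9}

def translate(logical_addr):
    """Split a logical address into (page, offset) and map it to a physical address."""
    page = logical_addr // PAGE_SIZE      # page number p
    offset = logical_addr % PAGE_SIZE     # page offset d
    frame = page_table[page]              # KeyError on an unmapped page (a fault)
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))  # page 1, offset 0xABC -> frame 3 -> 0x3abc
```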
Paging
Free frames: (a) before allocation, (b) after allocation
Paging
Advantages
 Allocates non-contiguous memory space to the process.
 Swapping becomes very easy when page size and frame size are equal.
 Paging is helpful when we have limited main memory.
 No external fragmentation.
Paging
Limitations
 Paging leads to internal fragmentation: the last page of a
process rarely fills its frame completely.
 Page tables consume additional space in the system.
 To access a page in a frame (main memory), the system must
first access the page table, which increases the effective
memory access time.
 For each process, the system must maintain a separate
page table.
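The internal-fragmentation limitation can be quantified with simple arithmetic; the 4 KB page size and 10,000-byte process below are assumed example values:

```python
import math

PAGE_SIZE = 4096        # assumed page/frame size: 4 KB
process_size = 10_000   # hypothetical process size in bytes

pages_needed = math.ceil(process_size / PAGE_SIZE)       # 3 pages
internal_frag = pages_needed * PAGE_SIZE - process_size  # unused space in last frame
print(pages_needed, internal_frag)  # 3 2288
```

The 2288 wasted bytes sit inside the last allocated frame, so no other process can use them.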
Virtual Memory
 A computer can address more memory than the amount physically
installed on the system.
 This extra memory is actually called virtual memory and it is a
section of a hard disk that's set up to emulate the computer's RAM.
 Virtual memory enables data that is in RAM and not currently
being used to be transferred to the hard disk. This frees up room in
RAM for other programs and data. When the data on the hard
disk is needed again, any other unused data is transferred to the
hard disk before the original data is transferred back to RAM.
 Virtual memory is implemented using Demand Paging
Demand Paging
 The basic paging technique loads the entire program into
physical memory at program execution time.
 However, a problem with this approach is that we may
not initially need the entire program in memory.
 Loading the entire program into memory results in
loading the executable code for all options, regardless
of whether an option is ultimately selected by the user
or not.
Demand Paging
 An alternative strategy is to load pages only as they
are needed. This technique is known as demand paging.
 With demand-paging, pages are only loaded when
they are demanded during program execution; pages
that are never accessed are thus never loaded into
physical memory.
 Pager is responsible for swapping the pages between
main memory and secondary memory (disk) in demand
paging. A pager is concerned with the individual pages
of a process.
Demand Paging
Example: Program A: 4 pages; Program B: 4 pages.
Valid–invalid scheme for demand paging:
 Valid: the page is in main memory.
 Invalid: the page is on the hard disk.
Segmentation
 Memory segmentation is the division of a computer’s
primary memory into segments or sections.
 Segmentation is a memory-management scheme
that supports user view of memory.
 A logical address space is a collection of segments.
 Each segment has a name and a length.
Segmentation
 Consider how you think of a program when you are
writing it.
 You think of it as a main program with a set of methods,
procedures, or functions.
 It may also include various data structures: objects,
arrays, stacks, variables, and so on.
 Each of these modules or data elements is referred to by
name.
 You talk about "the stack," "the math library," "the main
program," without caring what addresses in memory
these elements occupy.
Segmentation
 Users prefer to view memory as a collection of variable-
sized segments, with no necessary ordering among
segments.
 Each of these segments is of variable length; the length
is defined by the purpose of the segment in the
program.
 The user therefore specifies each address by two
quantities: a segment name and an offset.
Segmentation
 For simplicity of implementation, segments are numbered
and are referred to by a segment number, rather than
by a segment name.
 Thus, a logical address consists of a two-tuple:
<segment-number, offset>
 Normally, the user program is compiled, and the
compiler automatically constructs segments reflecting the
input program.
Segmentation
 The user refers to the objects in a program through the
segment table.
 Each entry in the segment table has a segment base and
a segment limit.
 The segment base contains the starting physical address
where the segment resides in memory, and the segment
limit specifies the length of the segment.
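A base/limit lookup can be sketched as follows; the segment-table entries are hypothetical example values:

```python
# Hypothetical segment table: segment number -> (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """Check the offset against the segment limit, then add the base."""
    base, limit = segment_table[segment]
    if offset >= limit:
        # An out-of-range offset is trapped, like a segmentation fault.
        raise MemoryError("offset beyond segment limit")
    return base + offset

print(translate(2, 53))  # segment 2 starts at 4300 -> physical address 4353
```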
Segmentation

Advantages:
 No internal fragmentation.
 The segment table consumes less space than a page table in paging.
 As a complete module is loaded all at once, segmentation improves CPU
utilization.
 Segmentation matches the user’s view of memory: users can divide
their programs into modules, each a logically separate unit of code.
 The user specifies the segment size, whereas in paging the hardware
determines the page size.
Segmentation
Disadvantages:
 As processes are loaded and removed from the memory,
the free memory space is broken into little pieces, causing
external fragmentation.
 Segmentation can be more complex to implement and
manage than paging.
Cache Memory
 Cache memory is a special, very high-speed memory.
 The cache is a smaller, faster memory that stores copies of
data that are frequently used.
 Its most important use is to reduce the average time to access
data from the main memory.
 Cache memory is an extremely fast memory type that acts as a
buffer between RAM and the CPU.
 Cache memory is used to speed up memory access and keep pace
with a high-speed CPU.
Cache Memory-Advantages
 Cache Memory is faster in comparison to main memory and
secondary memory.
 Programs stored by Cache Memory can be executed in less time.
 The data access time of Cache Memory is less than that of the main
memory.
 Cache memory stores data and instructions that are regularly used
by the CPU, thereby increasing CPU performance.
Cache Memory-Disadvantages
 Cache Memory is costlier than primary memory and secondary
memory.
 Data is stored in cache memory only temporarily.
 Whenever the system is turned off, the data and instructions stored
in cache memory are lost.
Page Replacement policies
 A page replacement algorithm is needed to decide
which page needs to be replaced when a new
page comes in.
 Page Fault: A page fault happens when a running
program accesses a memory page that is mapped
into the virtual address space but not loaded in
physical memory.
FIFO-First in First out
 This is the simplest page replacement algorithm. The
operating system keeps all pages in memory in a queue,
with the oldest page at the front. When a page needs to
be replaced, the page at the front of the queue is
removed.
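The FIFO policy can be simulated directly with a queue (a minimal sketch; the reference string and three-frame memory mirror the example below):

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()  # front = oldest page
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()   # evict the oldest page
            frames.append(page)
    return faults

print(fifo_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2], 3))  # 6
```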
FIFO Example
 Reference string: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2 (three frames)
 Total page faults = 6
LRU-Least Recently Used
 As the name suggests, this algorithm is based on the
strategy that whenever a page fault occurs, the
least recently used page will be replaced with a
new page. So, the page not utilized for the longest
time in the memory (compared to all other pages)
gets replaced.
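LRU can be simulated by keeping the resident pages ordered from least to most recently used (a minimal sketch using the same three-frame example):

```python
def lru_faults(refs, n_frames):
    """Count page faults under LRU replacement."""
    frames = []  # ordered least -> most recently used
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)    # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)      # evict the least recently used page
        frames.append(page)        # page is now the most recently used
    return faults

print(lru_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2], 3))  # 6
```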
LRU Example
 Reference string: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2 (three frames)
 Total page faults = 6

Optimal Page replacement
 In this algorithm, the page that will not be used for the
longest duration of time in the future is replaced.
 It replaces the page that is not in demand in the
future.
 It is a theoretical concept and is not possible to implement.
 It is impossible to have perfect knowledge of future events.
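Although unimplementable in a real OS, optimal replacement can be simulated offline when the whole reference string is known in advance (a minimal sketch using the same three-frame example):

```python
def optimal_faults(refs, n_frames):
    """Count page faults under optimal (farthest-future-use) replacement."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
        else:
            # Evict the page whose next use is farthest away (or never comes).
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(page)
    return faults

print(optimal_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2], 3))  # 5
```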
Optimal Page Replacement Example
 Reference string: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2 (three frames)
 Total page faults = 5


Belady’s Anomaly.
 Belady’s anomaly occurs with the FIFO page replacement
algorithm.
 With FIFO, the number of page faults may increase when
the number of frames is increased.
Belady’s Anomaly Example
 Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
 FIFO with three frames: total page faults = 9
 FIFO with four frames: total page faults = 10
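The anomaly (9 faults with three frames, 10 with four) can be reproduced with a FIFO simulation:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()   # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -> adding a frame increases the faults
```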
