
UNIT 2: MEMORY MANAGEMENT IN OPERATING SYSTEM

Memory is an important part of the computer, used to store data. Its management is
critical to the computer system because the amount of main memory available in a computer
system is very limited. At any time, many processes compete for it. Moreover, to increase
performance, several processes are executed simultaneously. For this, we must keep several
processes in the main memory, so it is even more important to manage memory effectively.

Role of Memory management


Following are the important roles of memory management in a computer system:
o The memory manager keeps track of the status of every memory location, whether it is free or
allocated. It also provides abstractions over primary memory so that software perceives a larger
memory than is physically allocated to it.
o The memory manager permits computers with a small amount of main memory to execute programs
larger than the available main memory. It does this by moving information back and
forth between primary memory and secondary memory using the concept of swapping.
o The memory manager is responsible for protecting the memory allocated to each process from
being corrupted by another process. If this is not ensured, then the system may exhibit
unpredictable behavior.
o Memory managers should enable sharing of memory space between processes. Thus, two
programs can reside at the same memory location although at different times.
Memory Management Techniques:
The memory management techniques can be classified into following main categories:
o Contiguous memory management schemes
o Non-Contiguous memory management schemes
Contiguous memory management schemes:
In a Contiguous memory management scheme, each program occupies a single contiguous block
of storage locations, i.e., a set of memory locations with consecutive addresses.
Single contiguous memory management schemes:
The Single contiguous memory management scheme is the simplest memory management scheme
used in the earliest generation of computer systems. In this scheme, the main memory is divided
into two contiguous areas or partitions. The operating systems reside permanently in one partition,
generally at the lower memory, and the user process is loaded into the other partition.
Advantages of Single contiguous memory management schemes:
o Simple to implement.
o Easy to manage and design.
o In a single contiguous memory management scheme, once a process is loaded, it is given the full
processor time, and no other process will interrupt it.
Disadvantages of Single contiguous memory management schemes:
o Wastage of memory space due to unused memory as the process is unlikely to use all the available
memory space.
o The CPU remains idle, waiting for the disk to load the binary image into the main memory.
o A program cannot be executed if it is too large to fit in the available main memory space.
o It does not support multiprogramming, i.e., it cannot handle multiple programs simultaneously.
Multiple Partitioning:
The single Contiguous memory management scheme is inefficient as it limits computers to
execute only one program at a time resulting in wastage in memory space and CPU time. The
problem of inefficient CPU use can be overcome using multiprogramming that allows more than
one program to run concurrently. To switch between two processes, the operating system needs to
load both processes into the main memory. The operating system needs to divide the available
main memory into multiple parts to load multiple processes into the main memory. Thus multiple
processes can reside in the main memory simultaneously.
The multiple partitioning schemes can be of two types:
o Fixed Partitioning
o Dynamic Partitioning
Fixed Partitioning
The main memory is divided into several fixed-sized partitions in a fixed partition memory
management scheme or static partitioning. These partitions can be of the same size or different
sizes. Each partition can hold a single process. The number of partitions determines the degree of
multiprogramming, i.e., the maximum number of processes in memory. These partitions are made
at the time of system generation and remain fixed after that.
Advantages of Fixed Partitioning memory management schemes:
o Simple to implement.
o Easy to manage and design.
Disadvantages of Fixed Partitioning memory management schemes:
o This scheme suffers from internal fragmentation.
o The number of partitions is specified at the time of system generation.
Dynamic Partitioning
The dynamic partitioning was designed to overcome the problems of a fixed partitioning scheme.
In a dynamic partitioning scheme, each process occupies only as much memory as it requires
when loaded for processing. Requested processes are allocated memory until the entire physical
memory is exhausted or the remaining space is insufficient to hold the requesting process. In this
scheme the partitions used are of variable size, and the number of partitions is not defined at the
system generation time.
Advantages of Dynamic Partitioning memory management schemes:
o Simple to implement.
o Easy to manage and design.
Disadvantages of Dynamic Partitioning memory management schemes:

o This scheme suffers from external fragmentation.
o Allocation and deallocation of variable-sized partitions makes memory management more
complex, and compaction may be needed to reclaim scattered free space.
Non-Contiguous memory management schemes:
In a Non-Contiguous memory management scheme, the program is divided into different blocks
and loaded at different portions of the memory that need not necessarily be adjacent to one
another. This scheme can be classified depending upon the size of blocks and whether the blocks
reside in the main memory or not.
Paging
Paging is a technique that eliminates the requirement of contiguous allocation of main memory.
In this, the main memory is divided into fixed-size blocks of physical memory called frames. The
size of a frame is kept the same as that of a page to maximize main memory utilization and avoid
external fragmentation.
Advantages of paging:
o Pages reduce external fragmentation.
o Simple to implement.
o Memory efficient.
o Due to the equal size of frames, swapping becomes very easy.
o It allows faster access to data.
Segmentation
Segmentation is a technique that eliminates the requirements of contiguous allocation of main
memory. In this, the main memory is divided into variable-size blocks of physical memory called
segments. It is based on the way the programmer follows to structure their programs. With
segmented memory allocation, each job is divided into several segments of different sizes, one for
each module. Functions, subroutines, stack, array, etc., are examples of such modules.
----------------------------------------------------------------------------------------------------------------------

Page Replacement Algorithms in Operating Systems


In an operating system that uses paging for memory management, a page replacement
algorithm is needed to decide which page should be replaced when a new page comes
in. Page replacement becomes necessary when a page fault occurs and no free page frames
are in memory. In this section, we will discuss different types of page replacement
algorithms.
Page Replacement Algorithms
Page replacement algorithms are techniques used in operating systems to manage
memory efficiently when the physical memory is full. When a new page needs to be loaded
into physical memory, and there is no free space, these algorithms determine which existing
page to replace.
If no page frame is free, the virtual memory manager performs a page replacement
operation to replace one of the pages existing in memory with the page whose reference
caused the page fault. It is performed as follows: The virtual memory manager uses a page
replacement algorithm to select one of the pages currently in memory for replacement,
accesses the page table entry of the selected page to mark it as “not present” in memory, and
initiates a page-out operation for it if the modified bit of its page table entry indicates that it
is a dirty page.
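To make this sequence concrete, here is a minimal Python sketch of the replacement step. It is
illustrative only: the PageTableEntry class and the page_in/page_out helpers are hypothetical
stand-ins for a real page-table format and the real disk I/O of an operating system.

    def page_out(page, frame):
        # hypothetical stand-in for writing a dirty page back to swap space
        print(f"page-out: writing page {page} from frame {frame} to swap space")

    def page_in(page, frame):
        # hypothetical stand-in for reading the faulting page from disk
        print(f"page-in: reading page {page} from disk into frame {frame}")

    class PageTableEntry:
        def __init__(self, frame=None, present=False, dirty=False):
            self.frame = frame        # frame number while the page is resident
            self.present = present    # "present" bit: is the page in memory?
            self.dirty = dirty        # "modified" bit: written to since it was loaded?

    def replace(page_table, victim_page, new_page):
        victim = page_table[victim_page]
        victim.present = False                     # mark the victim "not present"
        if victim.dirty:
            page_out(victim_page, victim.frame)    # page-out only if the victim is dirty
        freed_frame, victim.frame = victim.frame, None
        page_in(new_page, freed_frame)             # bring in the faulting page
        page_table[new_page] = PageTableEntry(frame=freed_frame, present=True)

    page_table = {5: PageTableEntry(frame=2, present=True, dirty=True), 9: PageTableEntry()}
    replace(page_table, victim_page=5, new_page=9)   # page 5 is written back; page 9 now uses frame 2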
Common Page Replacement Techniques
 First In First Out (FIFO)
 Optimal Page replacement
 Least Recently Used (LRU)
 Most Recently Used (MRU)
First In First Out (FIFO)
This is the simplest page replacement algorithm. In this algorithm, the operating system
keeps track of all pages in the memory in a queue; the oldest page is at the front of the
queue. When a page needs to be replaced, the page at the front of the queue is selected for
removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the
number of page faults using the FIFO Page Replacement Algorithm.

FIFO – Page Replacement


Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3
Page Faults.
When 3 comes, it is already in memory so —> 0 Page Faults. Then 5 comes; it is not available
in memory, so it replaces the oldest page slot, i.e. 1 —> 1 Page Fault. 6 comes; it is also not
available in memory, so it replaces the oldest page slot, i.e. 3 —> 1 Page Fault. Finally, when 3
comes it is not available, so it replaces 0 —> 1 Page Fault. Total page faults = 6.
Implementation of FIFO Page Replacement Algorithm
 Program for Page Replacement Algorithm (FIFO)
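The program referred to above is not reproduced in these notes, but a small Python sketch of the
FIFO policy (our own illustration, written only to check the count from Example 1) could look
like this:

    from collections import deque

    def fifo_page_faults(reference_string, num_frames):
        frames = deque()              # the oldest page sits at the left end
        faults = 0
        for page in reference_string:
            if page in frames:
                continue              # hit: nothing to replace
            faults += 1               # miss: page fault
            if len(frames) == num_frames:
                frames.popleft()      # evict the page that entered memory first
            frames.append(page)
        return faults

    print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))   # prints 6, as in Example 1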
Optimal Page Replacement
In this algorithm, the page replaced is the one that will not be used for the longest duration of
time in the future.
Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find the number of page faults using the Optimal Page Replacement Algorithm.

Optimal Page Replacement


Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there so —> 0 Page Fault. When 3 comes it takes the place of 7 because 7 is not
used for the longest duration of time in the future —> 1 Page Fault. 0 is already there so —> 0
Page Fault. 4 takes the place of 1 —> 1 Page Fault.
Now for the further page reference string —> 0 Page fault because they are already
available in the memory. Optimal page replacement is perfect, but not possible in practice as
the operating system cannot know future requests. The use of Optimal Page replacement is
to set up a benchmark so that other replacement algorithms can be analyzed against it.
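Because Optimal replacement needs the future references, it is usually only simulated over a known
reference string. A short Python sketch (our own illustration, reusing the string from the example
above) is given below:

    def optimal_page_faults(reference_string, num_frames):
        frames = []
        faults = 0
        for i, page in enumerate(reference_string):
            if page in frames:
                continue              # hit
            faults += 1
            if len(frames) < num_frames:
                frames.append(page)   # a free frame is still available
                continue
            # Victim = the resident page whose next use lies farthest in the future
            # (pages never used again are preferred).
            future = reference_string[i + 1:]
            def next_use(p):
                return future.index(p) if p in future else float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
        return faults

    print(optimal_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))   # prints 6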
Least Recently Used
In this algorithm, the page that has been least recently used is replaced.
Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find the number of page faults using the LRU Page Replacement Algorithm.
Least Recently Used – Page Replacement
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there so —> 0 Page Fault. When 3 comes it takes the place of 7 because 7 is least
recently used —> 1 Page Fault.
0 is already in memory so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
Now for the further page reference string —> 0 Page Faults because the pages are already available
in the memory.
Implementation of LRU Page Replacement Algorithm
 Program for Least Recently Used (LRU) Page Replacement algorithm
Most Recently Used (MRU)
In this algorithm, the page that has been most recently used is replaced.
Example 4: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find the number of page faults using the MRU Page Replacement Algorithm.
Most Recently Used – Page Replacement
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there so —> 0 Page Fault.
When 3 comes it takes the place of 0 because 0 is most recently used —> 1 Page Fault.
When 0 comes it takes the place of 3 —> 1 Page Fault.
When 4 comes it takes the place of 0 —> 1 Page Fault.
2 is already in memory so —> 0 Page Fault.
When 3 comes it takes the place of 2 —> 1 Page Fault.
When 0 comes it takes the place of 3 —> 1 Page Fault.
When 3 comes it takes the place of 0 —> 1 Page Fault.
When 2 comes it takes the place of 3 —> 1 Page Fault.
When 3 comes it takes the place of 2 —> 1 Page Fault. Total page faults = 12.
Belady’s Anomaly in Page Replacement Algorithms
Belady’s Anomaly is a phenomenon in operating systems where increasing the number of
page frames in memory leads to an increase in the number of page faults for certain page
replacement algorithms. Normally, as more page frames are available, the operating system
has more flexibility to keep the necessary pages in memory, which should reduce the
number of page faults.
This phenomenon is commonly experienced in the following page replacement algorithms:
 First in first out (FIFO)
 Second chance algorithm
 Random page replacement algorithm
What is a Page Fault?
A page fault is a type of interrupt or exception that occurs in a computer's operating system when
a program attempts to access a page of memory that is not currently loaded into physical RAM
(Random Access Memory). Instead, the page is stored on disk in a storage space called the page
file or swap space.
Reason for Belady’s Anomaly
Belady’s Anomaly happens because algorithms like FIFO replace pages based on their arrival
order, without considering how often or soon they will be used again. Adding more frames can
change the replacement order, causing frequently used pages to be evicted earlier and increasing
page faults. This occurs due to the lack of awareness of future page references and the non-
optimal nature of such algorithms.
Algorithms like Optimal and LRU (Least Recently Used) avoid Belady’s Anomaly because:
 They consider the future or past usage of pages.
 They use a stack-based structure, ensuring that pages in smaller frame setups will also
be present in larger setups, preventing anomalies.
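A small experiment makes the anomaly visible. The Python sketch below (our own illustration)
counts FIFO page faults for the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, which
produces 9 faults with 3 frames but 10 faults with 4 frames:

    from collections import deque

    def fifo_faults(reference_string, num_frames):
        frames, faults = deque(), 0
        for page in reference_string:
            if page in frames:
                continue
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()          # FIFO always evicts the oldest resident page
            frames.append(page)
        return faults

    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(fifo_faults(refs, 3))   # 9 page faults
    print(fifo_faults(refs, 4))   # 10 page faults: more frames, yet more faults

For comparison, LRU on the same string gives 10 faults with 3 frames and 8 faults with 4 frames;
as a stack algorithm, its fault count never increases when frames are added.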

Features of Belady’s Anomaly


 Page fault rate: Page fault rate is the number of page faults that occur during the
execution of a process. Belady’s Anomaly occurs when the page fault rate increases as
the number of page frames allocated to a process increases.
 Page replacement algorithm: Belady’s Anomaly is specific to some page replacement
algorithms, including the First-In-First-Out (FIFO) algorithm and the Second-Chance
algorithm.
 System workload: Belady’s Anomaly can occur when the system workload changes.
Specifically, it can happen when the number of page references in the workload
increases.
 Page frame allocation: Belady’s Anomaly can occur when the page frames allocated
to a process are increased, but the total number of page frames in the system remains
constant. This is because increasing the number of page frames allocated to a process
reduces the number of page faults initially, but when the workload increases, the
increased number of page frames can cause the process to evict pages from its
working set more frequently, leading to more page faults.
 Impact on performance: Belady’s Anomaly can significantly impact system
performance, as it can result in a higher number of page faults and slower overall
system performance. It can also make it challenging to choose an optimal number of
page frames for a process.
Advantages
 Better insight into algorithm behavior: Belady’s Anomaly can provide insight into
how a page replacement algorithm works and how it can behave in different
scenarios. This can be helpful in designing and optimizing algorithms for specific use
cases.
 Improved algorithm performance: In some cases, increasing the number of frames
allocated to a process can actually improve algorithm performance, even if it results in
more page faults. This is because having more frames can reduce the frequency of
page replacement, which can improve overall performance.
Disadvantages
 Poor predictability: Belady’s Anomaly can make it difficult to predict how an
algorithm will perform with different configurations of frames and pages, which can
lead to unpredictable performance and system instability.
 Increased overhead: In some cases, increasing the number of frames allocated to a
process can result in increased overhead and resource usage, which can negatively
impact system performance.
 Unintuitive behavior: Belady’s Anomaly can result in unintuitive behavior, where
increasing the number of frames allocated to a process results in more page faults,
which can be confusing for users and system administrators.
 Difficulty in optimization: Belady’s Anomaly can make it difficult to optimize page
replacement algorithms for specific use cases, as the behavior of the algorithm can be
unpredictable and inconsistent.
Program for Least Recently Used (LRU) Page Replacement algorithm
In operating systems that use paging for memory management, a page replacement
algorithm is needed to decide which page should be replaced when a new page
comes in. Whenever a new page is referred to and is not present in memory, a page fault
occurs and the operating system replaces one of the existing pages with the newly needed
page. Different page replacement algorithms suggest different ways to decide which
page to replace. The target for all algorithms is to reduce the number of page faults.
The Least Recently Used (LRU) algorithm is a greedy algorithm in which the page to be
replaced is the one that was least recently used. The idea is based on locality of reference:
the least recently used page is not likely to be used again in the near future.
Let us say the page reference string is 7 0 1 2 0 3 0 4 2 3 0 3 2. Initially we have 4 page
slots empty.
Initially, all slots are empty, so when 7, 0, 1, 2 are referenced they are allocated to the empty
slots —> 4 Page Faults.
0 is already there so —> 0 Page Fault.
When 3 comes it takes the place of 7 because 7 is least recently used —> 1 Page
Fault.
0 is already in memory so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
Now for the further page reference string —> 0 Page Faults because the pages are already
available in the memory.
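The referenced program is not included in these notes; the following Python sketch (our own
illustration) counts LRU page faults for the reference string above:

    def lru_page_faults(reference_string, num_frames):
        frames = []               # kept in order from least to most recently used
        faults = 0
        for page in reference_string:
            if page in frames:
                frames.remove(page)       # refresh recency on a hit
                frames.append(page)
                continue
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)             # evict the least recently used page
            frames.append(page)
        return faults

    print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))   # prints 6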

Topic: Address binding in Operating System


Address binding refers to the mapping of computer instructions and data to physical memory
locations. Both logical and physical addresses are used in computer memory. Address binding
assigns a physical memory region to a logical pointer by mapping a logical address (also known as
a virtual address) to a physical address. It is a component of computer memory management that
the OS performs on behalf of applications that require memory access.
Types of Address Binding in Operating System
There are mainly three types of an address binding in the OS. These are as follows:
1. Compile Time Address Binding
2. Load Time Address Binding
3. Execution Time or Dynamic Address Binding
Compile Time Address Binding
It is the first type of address binding, in which the compiler is responsible for performing the
binding. If the location where the process will reside in memory is known at compile time, the
compiler can generate absolute code, and the binding assigns an address to the beginning of the
memory segment that stores the object code. Because this binding is fixed at compile time, it can
only be changed by recompiling the program.

Load Time Address Binding


It is another type of address binding. It is done after the program is loaded into memory, and it
is performed by the operating system's memory manager, i.e., the loader. If memory locations
were fixed at compile time, a program in its compiled state could never be moved from one
machine or memory region to another, because the addresses embedded in the executable might
already be in use by another program on the new system. With load-time binding, the logical
addresses of the program are therefore not bound to physical addresses until the program is
loaded into memory.
Execution Time or Dynamic Address Binding
Execution-time (or dynamic) address binding is the most commonly used form of binding. Here
the binding of logical addresses to physical addresses is delayed until run time, while the program
is executing. When a variable or instruction is encountered during execution, the required memory
space is located at that moment, which requires hardware support such as the MMU. The memory
assigned to a variable is held until the program sequence finishes or a specific instruction within
the program releases the memory address connected to the variable.
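For execution-time binding, the translation can be pictured with a single relocation (base) register
and a limit register. The sketch below is only a toy model of this idea; the class name and the
register values are assumptions, not a real MMU interface.

    class SimpleMMU:
        def __init__(self, base, limit):
            self.base = base      # physical address at which the process was loaded
            self.limit = limit    # size of the process's logical address space

        def translate(self, logical_address):
            # every memory access is checked and relocated at run time
            if not 0 <= logical_address < self.limit:
                raise MemoryError("addressing error: trap to the operating system")
            return self.base + logical_address

    mmu = SimpleMMU(base=14000, limit=3000)
    print(mmu.translate(346))     # 14346; if the OS later moves the process,
                                  # only the base register changes, not the program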
----------------------------------------------------------------------------------------------------------------------
Topic : Memory Sharing And Protection
Memory protection is a crucial component of operating systems that prevents one process's
memory from being accessed or corrupted by another process. Memory protection is vital in
contemporary operating systems because it enables various programs to run in parallel without
tampering with each other's memory space.
Advantages
Applying memory protection in a system offers multiple benefits.
Listed below are a few of the primary ones −
 Improved Stability − Memory protection prevents one program from accessing another process's
memory area, which can enhance system stability and prevent the loss of vital information.
 Increased Security − Memory protection helps to prevent the unauthorized access of private
information, as the OS will interrupt and terminate any application attempting to access
unauthorized RAM, preventing security breaches.
 Better Resource Management − Memory protection allows multiple processes to run
concurrently without affecting each other's memory space, improving the overall efficiency of the
system's resource management.
 More Efficient Memory Usage − Virtual-memory-based protection schemes can optimize the use of
memory while decreasing the amount of RAM necessary for the system, allowing multiple
programs to share the same physical storage space.
 Facilitates Multitasking − Memory protection enables multiple processes to run simultaneously,
allowing for multitasking and running multiple programs at the same time.
Disadvantages
Alongside these benefits, memory protection also has some downsides, which are considered
below −
 Overhead − Guarding memory requires additional software and hardware resources, which can
lead to higher costs and reduced system efficiency.
 Complexity − Memory protection adds complexity to the operating system, making development,
testing, and maintenance more difficult.
 Memory Fragmentation − Virtual memory can cause memory fragmentation, where physical
memory is broken into small, non-contiguous blocks.
 Limitation − Memory protection is not foolproof and can be circumvented in certain situations.
For example, a malicious user might exploit vulnerabilities in the OS to gain access to another
process's memory area.
 Compatibility Issues − Some older software programs may be incompatible with memory
protection features, limiting the operating system's ability to protect memory from unauthorized
access.
----------------------------------------------------------------------------------------------------------------------
Topic : Paging And Segmentation
Paging is a storage mechanism used in operating systems to retrieve processes from secondary
storage into the main memory in the form of pages.
Translation of Logical Address into Physical Address
The CPU always generates a logical address, but a physical address is needed to access the main
memory. This mapping is done by the MMU (Memory Management Unit) with the help of the
page table. Let us first understand some of the basic terms, then we will see how this translation
is done.
Logical Address: The logical address consists of two parts page number and page offset.
1. Page Number: It tells the exact page of the process which the CPU wants to access.
2. Page Offset: It tells the exact word on that page which the CPU wants to read.
Physical Address: The physical address consists of two parts frame number and page offset.
1. Frame Number: It tells the exact frame where the page is stored in physical memory.
2. Page Offset: It tells the exact word on that page which the CPU wants to read. It requires no
translation, as the page size is the same as the frame size, so the position of the word within the
page does not change.
Physical Address = (Frame Number × Frame Size) + Page Offset
 Page table: A page table contains the frame number corresponding to each page number of a
specific process. So, each process will have its own page table. A register called the Page Table
Base Register (PTBR) holds the base address of the page table.
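Putting these terms together, the translation can be sketched in a few lines of Python. The page
size, the page-table contents and the example address below are assumptions chosen only to
illustrate the calculation:

    PAGE_SIZE = 1024                       # assumed page/frame size in bytes

    page_table = {0: 5, 1: 2, 2: 7, 3: 0}  # page number -> frame number (one process)

    def translate(logical_address):
        page_number = logical_address // PAGE_SIZE    # which page of the process
        page_offset = logical_address % PAGE_SIZE     # which word within that page
        frame_number = page_table[page_number]        # page-table lookup (done by the MMU)
        return frame_number * PAGE_SIZE + page_offset

    print(translate(2100))   # page 2, offset 52 -> frame 7 -> physical address 7220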
How paging works

 Processes are divided into pages


 Main memory is divided into frames
 Pages are loaded into frames when needed
 Pages are stored in secondary storage when not needed
 A page fault occurs when a page is needed but isn't in memory
 The operating system loads the required page from disk into memory

Benefits of paging

 Efficient memory allocation


 Access control
 Translation between logical and physical memory spaces
 Simple and inexpensive memory allocation
 Pages are easy to share
 No external fragmentation
 More efficient swapping

Advantages of Paging

 Eliminates Fragmentation: Since pages can be scattered across physical memory, paging
eliminates the problem of external fragmentation.
 Efficient Memory Use: It allows for more efficient use of memory because processes do not
need to be contiguous in memory.
 Swapping: It makes swapping of processes between main memory and disk more efficient.

Disadvantages of Paging

 Overhead: Managing the page table adds overhead.


 Internal Fragmentation: If a process doesn't fully use a page, there may be wasted space in
the last page.

Segmentation
In Operating Systems, Segmentation is a memory management technique in which the memory is divided
into variable-size parts. Each part is known as a segment, which can be allocated to a process.
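A segment-table lookup can be sketched in the same way as the page-table lookup shown earlier.
The segment numbers, base addresses and limits below are made-up values used only to illustrate
the base-plus-offset calculation and the limit check:

    segment_table = {
        0: (1400, 1000),   # segment 0: base 1400, limit 1000 (e.g. the main code)
        1: (6300,  400),   # segment 1: e.g. a subroutine
        2: (4300,  400),   # segment 2: e.g. the stack
    }

    def translate(segment_number, offset):
        base, limit = segment_table[segment_number]
        if offset >= limit:                     # offset outside the segment: trap to the OS
            raise MemoryError("segmentation fault")
        return base + offset

    print(translate(2, 53))   # physical address 4353 within the stack segment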
Advantages of Segmentation

1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.

Disadvantages

1. It can have external fragmentation.


2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory management algorithms.

Virtual memory

Virtual memory is a storage scheme that provides the user an illusion of having a very big main
memory. This is done by treating a part of secondary memory as the main memory.
Key Concepts:

1. Virtual Address Space:


o Each process in a computer system is given the illusion of having its own continuous,
private memory address space, called the virtual address space.
o This allows programs to assume they have access to a large and contiguous block of
memory, even though the physical memory may be fragmented or insufficient.

2. Page and Page Tables:


o The virtual memory is divided into blocks of memory called pages (typically 4KB in
size), and the physical memory is divided into corresponding blocks called page
frames.
o The operating system keeps a page table, which maps virtual addresses to physical
addresses. When a program accesses a memory address, the page table is consulted
to translate the virtual address to a physical one.

3. Swap Space (Disk-based Memory):


o When the physical RAM is full, the OS can move some pages of memory to a disk-
based storage area called swap space (or page file in some systems).
o This process is called paging or swapping.
o When a page is swapped out of RAM to disk and needs to be accessed again, it’s
swapped back into memory.

4. Demand Paging:
o When a program starts, not all of its code or data is loaded into memory. Instead,
only the parts that are needed are loaded, which helps save memory. Additional
parts of the program (pages) are loaded as needed.
o This is known as demand paging (a small sketch of the idea follows this list).
5. Thrashing:
o If a system spends too much time swapping pages in and out of memory (because
there’s not enough RAM), it can lead to a performance problem called thrashing,
where the system becomes very slow.
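To make demand paging (concept 4 above) concrete, the toy sketch below loads a page only the
first time it is touched. The load_from_disk helper is a hypothetical stand-in for the real page-in
I/O, and no replacement is modelled here:

    resident_pages = {}                 # page number -> contents currently in RAM

    def load_from_disk(page):
        # hypothetical stand-in for reading a page from swap space / the executable file
        print(f"page fault: loading page {page} on demand")
        return f"<contents of page {page}>"

    def access(page):
        if page not in resident_pages:          # first touch: page fault
            resident_pages[page] = load_from_disk(page)
        return resident_pages[page]             # later touches hit in memory

    access(0)    # fault: loaded on demand
    access(0)    # hit: already resident
    access(3)    # fault: only pages that are actually used are ever loaded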

Benefits of Virtual Memory:

 Isolation and Protection: Each process is isolated from others, preventing one process from
directly accessing the memory of another process. This improves security and stability.
 Increased Addressable Memory: Virtual memory allows programs to access more memory
than what’s physically available, making it easier to handle larger datasets or more complex
programs.
 Efficiency: It helps in utilizing the system’s memory more efficiently by loading only the
necessary parts of programs into physical memory.
