Memory Management Techniques

The document discusses memory management techniques including single and multiple partition allocation, dynamic storage-allocation strategies like first fit and best fit, swapping, memory compaction, paging, shared pages, page tables, and translation look-aside buffers. Paging divides physical memory into fixed-sized blocks called frames and logical memory into blocks of the same size called pages, with page tables mapping logical to physical addresses.


Single Partition Allocation:

• In single partition allocation, physical memory is divided into fixed-sized regions.
• This division keeps address spaces distinct, preventing one user from interfering with another user or with the system.
• The number of partitions determines the degree of multiprogramming.
• When a process is scheduled, it is assigned a partition.
• Each partition is protected by upper and lower bounds, or by base and limit values set in hardware registers.
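
The bounds check can be pictured with a small sketch. The following is a minimal model of base/limit protection, assuming a simple struct for the partition registers; the type and field names are illustrative, not a real hardware interface:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical model of base/limit protection for one partition. Every
 * address a process issues is checked against the partition bounds
 * before it is allowed to reach memory. */
typedef struct {
    uint32_t base;   /* start of the partition in physical memory */
    uint32_t limit;  /* size of the partition in bytes */
} partition_t;

/* Returns true if the logical address falls inside the partition;
 * the physical address is base + logical. */
bool translate(partition_t p, uint32_t logical, uint32_t *physical) {
    if (logical >= p.limit)
        return false;          /* out of bounds: hardware would trap */
    *physical = p.base + logical;
    return true;
}
```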
Multiple-Partition Allocation:
• In multiple-partition allocation, the degree of multiprogramming is limited by the number of partitions.
• Variable partition sizes are used for efficiency, tailored to each process's needs.
• Scattered throughout memory are holes: blocks of available memory of various sizes.
• When a process arrives, it is allocated memory from a hole large enough to accommodate it.
• When a process exits, its partition is freed, and adjacent free partitions are merged.
Dynamic Storage-Allocation Problem
Memory allocation is the process by which computer programs are assigned memory or
space. There are three primary allocation strategies:
1. First Fit: In the First Fit method, the partition allocated is the first sufficient block from
the top of the main memory. It scans memory from the beginning and selects the first
available block that is large enough. Thus, it allocates the first hole that meets the size
requirement.
2. Best Fit: The Best Fit strategy assigns the process to the partition that is the smallest
sufficient among the free available partitions. It searches the entire list of holes to find the
smallest hole whose size is greater than or equal to the size of the process. Unless the
holes are ordered by size, this approach consumes CPU time.
3. Worst Fit: In contrast to Best Fit, the Worst Fit algorithm allocates the process to the
partition that is the largest sufficient among the freely available partitions in the main
memory. It also requires searching the entire list of holes to find the largest hole and
allocate it to the process.
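
The three strategies differ only in how they scan the hole list. Below is a minimal sketch in C, assuming a singly linked list of free holes; the `hole_t` type and function names are illustrative:

```c
#include <stddef.h>

/* Hypothetical free-hole list; names and layout are illustrative. */
typedef struct hole {
    size_t start, size;
    struct hole *next;
} hole_t;

/* First fit: return the first hole large enough for `request`. */
hole_t *first_fit(hole_t *h, size_t request) {
    for (; h != NULL; h = h->next)
        if (h->size >= request)
            return h;
    return NULL;
}

/* Best fit: scan the entire list for the smallest sufficient hole. */
hole_t *best_fit(hole_t *h, size_t request) {
    hole_t *best = NULL;
    for (; h != NULL; h = h->next)
        if (h->size >= request && (best == NULL || h->size < best->size))
            best = h;
    return best;
}

/* Worst fit: scan the entire list for the largest sufficient hole. */
hole_t *worst_fit(hole_t *h, size_t request) {
    hole_t *worst = NULL;
    for (; h != NULL; h = h->next)
        if (h->size >= request && (worst == NULL || h->size > worst->size))
            worst = h;
    return worst;
}
```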
To address memory fragmentation issues, consider the following:
• External Fragmentation: This occurs when free memory is broken into many small pieces. To mitigate it, allocate memory in a way that avoids creating small, unusable holes.
• Minimum Allocation Size: Enforce an appropriate minimum size for memory allocations. Rounding requests up to this minimum avoids leaving behind slivers of free memory too small to be useful, which reduces external fragmentation.

• Internal Fragmentation: Avoid internal fragmentation by ensuring that the allocated memory is fully utilized by the requesting program. For instance, if memory is handed out in fixed increments (such as powers of 2), ensure that the program uses all of it effectively.
Swapping in Computer Systems
Swapping is a mechanism that allows a process to be temporarily moved out of the main
memory (RAM) and into secondary storage (usually disk). This frees up memory space for
other processes. Here are the key points about swapping:
1. Purpose: Swapping ensures efficient memory utilization by temporarily relocating
processes when memory becomes scarce.
2. Process Swap: When a process is swapped out, it is moved from memory to a backing
store (disk). Later, it can be brought back into memory for continued execution.
3. Memory Compaction: Swapping also serves as a technique for memory compaction,
helping to manage memory fragmentation.
4. Backing Store: The backing store is a fast disk with enough capacity to store copies of
memory images for all users. It must provide direct access to these memory images.
5. Roll Out and Roll In: This variant of swapping is used in priority-based scheduling
algorithms. Lower-priority processes are swapped out to make way for higher-priority
processes.
6. Transfer Time: The major part of swap time is the transfer time. The total transfer time is
directly proportional to the amount of memory swapped.
7. Address Binding: Whether the swapped-out process needs to be swapped back in to the
same physical addresses depends on the address binding method used.
8. Example Scenario:
o A 100 MB process is swapped to a hard disk with a transfer rate of 50 MB/sec.
o Swap-out time is 100 MB / 50 MB/sec = 2000 ms.
o Add the swap-in time for a same-sized process.
o The total swapping component of the context-switch time is therefore 4000 ms (4 seconds).
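
The arithmetic above can be checked with a few lines of C; the variable names are illustrative and the numbers match the scenario:

```c
#include <stdio.h>

/* Transfer time = memory swapped / transfer rate. */
int main(void) {
    double size_mb = 100.0;       /* process size */
    double rate_mb_per_s = 50.0;  /* disk transfer rate */
    double swap_out_ms = size_mb / rate_mb_per_s * 1000.0;  /* 2000 ms */
    double total_ms = 2.0 * swap_out_ms;  /* swap out + swap in = 4000 ms */
    printf("swap out: %.0f ms, total: %.0f ms\n", swap_out_ms, total_ms);
    return 0;
}
```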

Memory Compaction

When a program is moved out of memory or terminates, it leaves behind a hole (a contiguous
unused area) in the main memory. Later, when a new process needs to be loaded, it may be
allocated one of these available holes. However, there can be situations where the main
memory has numerous small holes, none of which is large enough to accommodate a new
process. This fragmentation leads to inefficiency.

To address this issue, compaction becomes essential. Here are the key points about memory
compaction:

1. Purpose: Compaction relocates existing programs into contiguous regions, creating a sufficiently large free area for allocating new processes.
2. Reducing External Fragmentation: By shuffling memory contents and placing all free memory together in one large block, compaction reduces external fragmentation.
3. Dynamic Relocation: Compaction is possible only if relocation is dynamic, i.e., performed at execution time.
4. I/O Considerations:
o Latch Job in Memory: While a process is involved in I/O operations, it must remain in memory, since moving it would leave the pending I/O pointing at the wrong location.
o I/O into OS Buffers: Alternatively, perform I/O only into OS buffers, so that the process itself can still be moved or swapped while its I/O is pending.
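
As a rough illustration of what compaction does, here is a sketch that slides allocated blocks toward address 0 in a simulated memory, assuming an allocation table sorted by base address; the types and names are illustrative:

```c
#include <string.h>
#include <stddef.h>

/* Minimal compaction sketch: slide every allocated block toward address 0
 * so that all free space coalesces into one hole at the top. Assumes
 * relocation is dynamic, i.e., each block's base can be updated at run
 * time, and that blocks[] is sorted by base address. */
typedef struct {
    size_t base, size;   /* current location and length of the block */
} block_t;

size_t compact(unsigned char *mem, block_t *blocks, size_t nblocks) {
    size_t next_free = 0;
    for (size_t i = 0; i < nblocks; i++) {
        if (blocks[i].base != next_free) {
            memmove(mem + next_free, mem + blocks[i].base, blocks[i].size);
            blocks[i].base = next_free;     /* dynamic relocation */
        }
        next_free += blocks[i].size;
    }
    return next_free;  /* start of the single large free area */
}
```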

Paging
It is a memory management technique that allows a program's memory to be allocated from
non-contiguous physical locations, thus avoiding issues with fragmentation and the need for
compaction.
In paging:
• The logical address space of a process can be non-contiguous: the program's memory is allocated from physical memory wherever available and whenever needed.
• Physical memory is divided into fixed-sized blocks called frames, typically with sizes ranging from 512 bytes to 8192 bytes.
• Similarly, logical memory is divided into blocks of the same size called pages.
• The system keeps track of all free frames in physical memory.
• To execute a program consisting of n pages, the system needs to find n free frames and load the program into them.
• A page table is set up to translate logical addresses to physical addresses.
• Paging introduces internal fragmentation.

Addressing in paging involves:

• Page number (p): Used as an index into a page table which contains the base address of
each page in physical memory.
• Page offset (d): Combined with the base address to define the physical memory address
that is sent to the memory unit.

For a logical address space of size 2^m and a page size of 2^n bytes, the high-order (m - n) bits of a logical address give the page number and the low-order n bits give the page offset. The address translation scheme is thus:

Logical Address = Page number | Page offset

Physical Address = Frame number | Page offset

(where | denotes concatenation of the bit fields, not arithmetic addition).
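
A minimal sketch of this translation, assuming a single-level page table stored as a plain array and a 4 KB page size; real hardware also checks validity and protection bits:

```c
#include <stdint.h>

#define PAGE_BITS 12                       /* 4 KB pages, for example */
#define PAGE_SIZE (1u << PAGE_BITS)

/* Split the logical address into page number and offset, look up the
 * frame, and concatenate frame number with the unchanged offset. */
uint32_t paging_translate(const uint32_t *page_table, uint32_t logical) {
    uint32_t p = logical >> PAGE_BITS;           /* page number  */
    uint32_t d = logical & (PAGE_SIZE - 1);      /* page offset  */
    uint32_t frame = page_table[p];              /* table lookup */
    return (frame << PAGE_BITS) | d;             /* frame | offset */
}
```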
Shared Pages
Shared pages are pages that occupy a single physical frame in memory but are referenced from multiple logical address spaces. This is particularly valuable for common code segments, which must be write-protected (reentrant) so that writable data is not mixed in with the code.
Shared pages are also very useful for read/write communication between processes (shared memory).

Implementation of Page Table

1. Page Table Overview:
• The page table contains the base address of each page in physical memory.
• It is stored in main memory.
• Key registers: the Page-Table Base Register (PTBR) points to the page table, and the Page-Table Length Register (PTLR) indicates its size.
2. Memory Access Challenge:
• Every data/instruction access involves two memory accesses: one for the page table and
one for the actual data/instruction.
3. Solution: Translation Look-aside Buffers (TLBs):
• TLBs are fast-lookup hardware caches that mitigate the two-memory-access problem (a toy lookup sketch follows this list).
• They significantly reduce the time taken to access a user memory location.
• TLBs are typically small, ranging from 64 to 1,024 entries.
• On a TLB miss, the value is loaded into the TLB for faster access next time.
• Replacement policies must be considered for managing TLB entries.
• Some TLB entries can be wired down for permanent fast access.
4. TLB Functionality:
• TLBs store recently accessed page table entries, enabling quick retrieval.
• Some TLBs use Address-Space Identifiers (ASIDs) to uniquely identify each process,
enhancing address-space protection.
5. Memory Protection:
• Protection mechanisms are implemented by associating protection bits with each frame.
• These bits determine whether read-only or read-write access is allowed, ensuring data
integrity and security.
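
To make the TLB idea concrete, here is a toy lookup routine, assuming a small array of entries tagged with an ASID; the linear search stands in for the parallel associative search that real TLB hardware performs, and all names are illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64

typedef struct {
    bool     valid;
    uint16_t asid;   /* address-space identifier of the owning process */
    uint32_t page;   /* virtual page number */
    uint32_t frame;  /* cached frame number */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Returns true on a TLB hit; on a miss the caller must walk the page
 * table in main memory and then install the entry in the TLB. */
bool tlb_lookup(uint16_t asid, uint32_t page, uint32_t *frame) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].asid == asid && tlb[i].page == page) {
            *frame = tlb[i].frame;
            return true;   /* hit: no extra memory access needed */
        }
    }
    return false;          /* miss: two memory accesses this time */
}
```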
Structure of Page Tables
1. Size of Page Tables:
❖ In a 32-bit address space with a 4 KB (2^12) page size, there are 2^20 (1M) page-table entries.
❖ At 4 bytes per entry, the page table alone occupies 4 MB.
❖ For a 64-bit address space, the page table can become far larger still.
2. Challenges:
❖ Non-contiguous Allocation: The page table itself may not be allocated contiguously in physical memory.
❖ Size Constraints: The page table may not fit entirely into physical memory, given its size in a 64-bit address space.
3. Solutions:

a. Hierarchical Paging:
- Hierarchical paging is also known as multilevel paging.
- In this type of paging, the logical address space is broken up into multiple levels of page tables.
- It is one of the simplest techniques; two-level and three-level page tables are commonly used.
- Advantages:
o Allows efficient memory management by loading only the required page tables into memory.
o Provides flexibility in managing large address spaces.
b. Hashed Page Tables:
- Uses a hash function to map virtual page numbers to page-table entries.
- Each slot in the table holds a chain of elements that hash to the same location (see the lookup sketch after this list).
- Each element mainly consists of:
o The virtual page number.
o The value of the mapped page frame.
o A pointer to the next element in the linked list.
- Advantages:
o Distributes page-table entries across the memory space, keeping lookups fast.
o Useful for handling large address spaces with unpredictable access patterns.
c. Inverted Page Tables:
- The inverted page table combines a page table and a frame table into a single data structure.
- There is one entry for each real page (frame) of memory.
- Each entry consists of the virtual address of the page stored in that real memory location, together with information about the process that owns the page.
- This technique decreases the memory needed to store the page tables, but it increases the time needed to search the table whenever a page reference occurs.
- Advantages:
o Efficiently handles sparse address spaces by avoiding the need to allocate space for unused pages.
o Suitable for systems with limited memory resources.
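
A minimal sketch of the hashed page table lookup described in (b), using chaining for collisions; the bucket count and hash function are illustrative choices:

```c
#include <stdint.h>
#include <stddef.h>

#define BUCKETS 1024

/* Each bucket holds a chain of (virtual page, frame) pairs that hash
 * to the same slot. */
typedef struct hpt_entry {
    uint64_t vpn;              /* virtual page number */
    uint64_t frame;            /* mapped page frame   */
    struct hpt_entry *next;    /* next element in the chain */
} hpt_entry_t;

static hpt_entry_t *table[BUCKETS];

/* Walk the chain for this bucket; returns the frame or -1 if unmapped. */
int64_t hpt_lookup(uint64_t vpn) {
    for (hpt_entry_t *e = table[vpn % BUCKETS]; e != NULL; e = e->next)
        if (e->vpn == vpn)
            return (int64_t)e->frame;
    return -1;   /* no mapping present: page fault */
}
```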
Hierarchical Page Table Example
1. Two-Level Page Table: In two-level paging, the page table itself is paged, so we have two page tables: an inner page table and an outer page table. Consider a system with a 32-bit logical address space and a page size of 4 KB (2^12 bytes). The logical address is divided into a page number of 20 bits and a page offset of 12 bits.
Because the page table is itself paged, the 20-bit page number is further divided into a 10-bit outer index (P1) and a 10-bit inner index (P2).

Thus, the logical address has the layout [P1 (10 bits) | P2 (10 bits) | offset (12 bits)].

P1 is an index into the outer page table, and P2 is the displacement within the page of the inner page table.

Because address translation works from the outer page table inward, this is known as a forward-mapped page table.

[Figure: address-translation scheme of a two-level page table.]
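
A sketch of the forward-mapped translation for this example, assuming both levels are plain in-memory arrays and omitting the validity checks a real system performs:

```c
#include <stdint.h>

/* Two-level (forward-mapped) translation for the 32-bit example above:
 * 10-bit P1, 10-bit P2, 12-bit offset. The outer table holds pointers
 * to inner page tables. */
uint32_t translate2(uint32_t **outer, uint32_t logical) {
    uint32_t p1 = logical >> 22;              /* index into outer table */
    uint32_t p2 = (logical >> 12) & 0x3FF;    /* index into inner table */
    uint32_t d  = logical & 0xFFF;            /* offset within the page */
    uint32_t frame = outer[p1][p2];
    return (frame << 12) | d;
}
```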
[Figure: a hashed page table.]

[Figure: address-translation scheme of an inverted page table.]

Why Use a Hash Table?

Consider the scenario where we need to store records of 10,000 students, each identified by a 5-digit ID, in a container.
1. Linked List: Implementing this with a linked list would take O(n) time per operation. Each insertion or retrieval would require traversing the list linearly, which is inefficient for large datasets.
2. Height-Balanced Tree: Alternatively, a height-balanced tree provides O(log n) access time. While this improves on the linked list, it still involves traversing the tree, which may not be optimal for performance-critical applications.
3. Array: An array of size 100,000 (one slot per possible ID) offers O(1) access time. However, this approach wastes significant space, especially when the number of records is much smaller than the array size.
Is there a more efficient way to achieve constant-time access without excessive space wastage?
In such cases, a hash table becomes a compelling solution. A hash table allows constant-time access on average, making it ideal for performance-critical scenarios. By hashing each student ID into an index within a suitably sized array, we can achieve O(1) average access time without wasting excessive memory.
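
A minimal sketch of such a table for the student-ID scenario, using chaining for collisions; the slot count and hash function are illustrative choices:

```c
#include <stdlib.h>

#define SLOTS 10007   /* a prime near 10,000 keeps the chains short */

typedef struct record {
    int id;
    struct record *next;
} record_t;

static record_t *slots[SLOTS];

static unsigned hash(int id) { return (unsigned)id % SLOTS; }

/* Insert a student ID at the head of its chain: O(1) on average. */
void insert(int id) {
    record_t *r = malloc(sizeof *r);
    r->id = id;
    r->next = slots[hash(id)];
    slots[hash(id)] = r;
}

/* Look up an ID by walking its (short) chain: O(1) on average. */
record_t *lookup(int id) {
    for (record_t *r = slots[hash(id)]; r != NULL; r = r->next)
        if (r->id == id)
            return r;
    return NULL;
}
```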
What are the Types of Hashing?
1. Static Hashing:
- In static hashing, a fixed number of locations is allocated, based on the hash function's mapping of search-key values.
- Collisions are resolved by adding overflow entries, typically managed through techniques like chaining or open addressing.
2. Dynamic Hashing:
- Dynamic hashing allows the hash table to resize dynamically to accommodate more items as needed.
- As the table grows, the associated hash function may need to be adjusted to maintain efficiency.
- Collisions are addressed by growing the hash table dynamically, which involves rehashing existing keys to new locations within a larger table.

Virtual Memory
Virtual memory enables the execution of programs whose instructions may not entirely fit into system memory.
It achieves this by bringing instructions in as needed during program execution.
Implementation involves using secondary storage, such as a hard disk, to supplement main memory.
Data is transferred between secondary and main storage on demand, as required by the executing program.
Modified data is written back to secondary storage according to a predetermined algorithm, ensuring efficient management of memory resources.
Demand Paging
o Demand paging is a technique used in virtual memory systems where pages of programs
are not immediately copied from disk to main memory.
o Instead, pages are brought into main memory only when they are needed for execution.
o When the CPU accesses a virtual address (logical address), the corresponding page
number is looked up in the page table.
o If the page table indicates that the required page is not currently in main memory, then
a page fault occurs, and the page must be fetched from disk into main memory.
Pure Demand Paging:
➢ Pure demand paging is a specific form of demand paging where no pages are initially
loaded into memory.
➢ This means that even the very first instruction executed by the CPU causes a page fault
because none of the required pages are in main memory.

➢ Pure demand paging can significantly impact the performance of a computer system as
it generally increases the effective access time of memory.
➢ Since each page access incurs the overhead of fetching from disk, frequent page faults
can lead to slower program execution and decreased overall system efficiency.
Page Fault Handling:
A page fault occurs when the CPU attempts to access a page that is not currently in main
memory.
When a page fault happens, the operating system needs to handle it by bringing the requested
page into main memory.
There are two scenarios for handling page faults:
➢ Swapping-in from Disk: If the requested page is valid but not in main memory, it is
fetched (swapped-in) from disk into main memory to satisfy the CPU's demand.
➢ Trap Generation: If the requested page is not within the logical address space of the
process, a trap is generated and handed over to the operating system for appropriate
action.
Determining Validity of the Requested Page:
To determine whether the reference to the requested page is within the logical address space
of the process, an internal table may be consulted.
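A sketch of this check, assuming a valid bit in each page-table entry; `in_logical_address_space`, `swap_in_from_disk`, and `trap_to_os` are hypothetical helpers standing in for the internal table, the disk I/O path, and the trap mechanism:

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     valid;   /* is the page currently resident in main memory? */
    uint32_t frame;   /* frame number, meaningful only when valid */
} pte_t;

/* Hypothetical OS helpers (declarations only, for the sketch). */
extern bool in_logical_address_space(uint32_t page);
extern uint32_t swap_in_from_disk(uint32_t page);
extern void trap_to_os(void);   /* does not return: process is aborted */

uint32_t access_page(pte_t *pt, uint32_t page) {
    if (!in_logical_address_space(page))
        trap_to_os();                      /* reference outside the process */
    if (!pt[page].valid) {                 /* page fault: page not resident */
        pt[page].frame = swap_in_from_disk(page);  /* swap in from disk */
        pt[page].valid = true;
    }
    return pt[page].frame;
}
```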
Page Replacement Algorithms
Once main memory fills up, a page must be swapped out to make room for any page to be swapped in. This is known as page replacement.
Several page replacement algorithms exist for this purpose, as follows:
➢ First In First Out (FIFO) algorithm

o https://www.youtube.com/watch?v=16kaPQtYo28
➢ Optimal Page algorithm

o https://www.youtube.com/watch?v=jeJIKKQcqpU

➢ Least Recently Used (LRU) algorithm

The LRU algorithm does not suffer from Belady's anomaly; it is a stack algorithm. In stack algorithms, the set of pages in memory with n frames is always a subset of the set of pages in memory with n + 1 frames. (A small LRU simulation follows this list.)
o https://www.youtube.com/watch?v=u23ROrlSK_g
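
A small LRU simulation, assuming three frames and per-frame timestamps to track recency; the reference string is an arbitrary example:

```c
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4};   /* reference string */
    int n = (int)(sizeof ref / sizeof ref[0]);
    int page[FRAMES], last_used[FRAMES], faults = 0;
    for (int i = 0; i < FRAMES; i++) { page[i] = -1; last_used[i] = -1; }

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < FRAMES; i++)
            if (page[i] == ref[t]) hit = i;
        if (hit >= 0) {
            last_used[hit] = t;              /* refresh recency on a hit */
        } else {
            faults++;                        /* page fault */
            int victim = 0;                  /* pick least recently used */
            for (int i = 1; i < FRAMES; i++)
                if (last_used[i] < last_used[victim]) victim = i;
            page[victim] = ref[t];
            last_used[victim] = t;
        }
    }
    printf("page faults: %d\n", faults);     /* 7 for this string */
    return 0;
}
```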
Advantages of Paging
o Paging simplifies memory management: programs need not worry about physical memory addresses.
o It allows efficient memory usage.
o It eliminates external fragmentation.
o It provides memory protection by preventing unauthorized access.
o The mapping between virtual and physical addresses is simple.
Disadvantages of Paging
o Since pages are of fixed size, internal fragmentation is possible.
o Page-table overhead arises in systems with large memories.
o Page replacement adds complexity.
o Paging adds an extra layer of memory access (the page-table lookup).
o When too many processes compete for frames at the same time, excessive page faults lead to thrashing.
Segmentation
Segmentation is a memory management technique that divides each job into segments of different sizes, one for each module.
• Each segment contains related functions.
• Segments are loaded into non-contiguous memory, though each individual segment occupies a contiguous block of available memory.
• Segmentation resembles paging, but segments vary in length while pages are of fixed size.
• Typical segments include global variables, the procedure call stack, function code, local variables, and large data structures.
• The operating system maintains a segment map table for each process.
• It also keeps a list of free memory blocks along with segment numbers, sizes, and corresponding memory locations in main memory.
• For each segment, the table stores the starting address and length.
Segmentation Architecture
- Logical Address Structure: Consists of a two-tuple: <segment-number, offset>.
- Segment Table: Maps two-dimensional logical addresses into one-dimensional physical addresses. Each entry contains:
o Base: Starting physical address where the segment resides in memory.
o Limit: Length of the segment.
- Segment-Table Base Register (STBR): Points to the segment table's location in memory.
- Segment-Table Length Register (STLR): Indicates the number of segments used by a program.
- A segment number s is legal only if s < STLR.
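
A minimal sketch of the legality and limit checks described above; the struct layout and function names are illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t base;    /* starting physical address of the segment */
    uint32_t limit;   /* length of the segment */
} seg_entry_t;

/* Check the segment number against STLR and the offset against the
 * segment limit; on success the physical address is base + offset. */
bool seg_translate(const seg_entry_t *seg_table, uint32_t stlr,
                   uint32_t s, uint32_t offset, uint32_t *physical) {
    if (s >= stlr)                     /* illegal segment number: trap */
        return false;
    if (offset >= seg_table[s].limit)  /* offset past segment end: trap */
        return false;
    *physical = seg_table[s].base + offset;
    return true;
}
```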
Segmentation vs Paging
• A page is a fixed-size unit invisible to the programmer; a segment is a variable-size unit visible to the programmer.
• Paging eliminates external fragmentation but can cause internal fragmentation; segmentation avoids internal fragmentation but can cause external fragmentation.
• Paging divides memory mechanically into equal blocks; segmentation divides a program along its logical boundaries (code, stack, data).

Combined Segmentation & Paging

In a combined paging/segmentation system, a user's address space is broken up into a number of segments. Each segment is in turn broken up into a number of fixed-size pages, each equal in length to a main-memory frame.
The virtual address consists of:
• a segment number: used to index the segment table whose entry gives the starting
address of the page table for that segment.
• a page number: used to index that page table to obtain the corresponding frame
number.
• an offset: used to locate the word within the frame.
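
A sketch of this two-step lookup, assuming each segment-table entry holds a pointer to that segment's page table and omitting the limit and validity checks a real system performs:

```c
#include <stdint.h>

#define PAGE_BITS 12
#define PAGE_MASK ((1u << PAGE_BITS) - 1)

/* Each segment-table entry gives the starting address of that
 * segment's page table. */
typedef struct {
    uint32_t *page_table;
} seg_desc_t;

/* Segment number -> page table; page number -> frame; offset locates
 * the word within the frame. */
uint32_t sp_translate(const seg_desc_t *seg_table,
                      uint32_t s, uint32_t p, uint32_t offset) {
    uint32_t frame = seg_table[s].page_table[p];   /* page-table lookup */
    return (frame << PAGE_BITS) | (offset & PAGE_MASK);
}
```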
