Memory Management
Anjali Jindia 3
Logical and Physical Address Space:
• Logical Address space: An address generated by the CPU is known as a
"Logical Address"; it is also known as a Virtual Address. The set of all
logical addresses generated by a program is its logical address space,
and its size can vary with the size of the process.
• Physical Address space: An address seen by the memory unit (i.e. the
one loaded into the memory-address register of the memory) is commonly
known as a "Physical Address"; it is also known as a Real Address. The
set of all physical addresses corresponding to the logical addresses is
the physical address space. The run-time mapping from virtual to
physical addresses is performed by a hardware device, the Memory
Management Unit (MMU); once a process is placed, the physical address of
a location remains constant.
Static and Dynamic Loading:
Loading a process into main memory is done by a loader. There are two
different types of loading:
• Static loading: the entire program is loaded at a fixed address before
execution begins. It requires more memory space, since everything must be
resident at once.
• Dynamic loading: without it, the entire program and all data of a
process must be in physical memory for the process to execute, so the
size of a process is limited to the size of physical memory. To obtain
better memory utilization, dynamic loading is used: a routine is not
loaded until it is called. All routines reside on disk in a relocatable
load format. One advantage of dynamic loading is that an unused routine
is never loaded; it is especially useful when large amounts of code are
needed to handle infrequently occurring cases.
Memory Allocation to a Process
To be executed, a process must first be placed in memory. Assigning
space to a process in memory is called memory allocation, which is one
aspect of the more general concept of binding.
• Suppose a program defines an entity with a set of attributes. A
variable of this entity will have values for those attributes, and
memory must be allotted to hold them. The act of assigning a memory
address to an attribute of the variable is called memory allocation.
The act of specifying/binding values to the attributes of the variable
is called binding. This binding must be performed before the variable is
used during execution of the program.
There are two types of memory allocation, corresponding to two methods
of binding: static and dynamic.
• Static Memory Allocation
• Static memory allocation is performed before the program runs: the
compiler compiles the program and generates object files, the linker
merges these object files into a single executable file, and the loader
loads this executable into main memory for execution. In static memory
allocation, the size of the data required by the process must be known
before execution of the process begins.
• If the data sizes are not known before execution, they have to be
guessed. If the guessed size is larger than required, memory is wasted;
if it is smaller, the process cannot execute correctly.
• Static memory allocation needs no allocation operations during
execution of the process, since all of them are completed before
execution starts. This leads to faster execution of a process.
• Static memory allocation is therefore more efficient than dynamic
memory allocation.
• Dynamic Memory Allocation
• Dynamic memory allocation is performed while the program is executing.
Memory is allocated to the entities of the program when they are first
used at run time.
• Because the actual size of the data required is known at run time,
exactly the needed memory space is allocated, reducing memory wastage.
• Dynamic memory allocation gives the execution of the program
flexibility, since the amount of memory required can be decided while
the program runs.
• If the program is large, dynamic memory allocation can be performed
only for the parts of the program currently in use. This reduces memory
wastage and improves system performance.
• Allocating memory dynamically creates overhead for the system. Some
allocation operations are performed repeatedly during program execution,
creating further overhead and leading to slower execution of the program.
• Dynamic memory allocation does not require special support from the
operating system; it is the responsibility of the programmer to design
the program to take advantage of the dynamic memory allocation method.
• Thus dynamic memory allocation is flexible but slower than static
memory allocation.
• Single-partition allocation
In this type of allocation, a relocation-register scheme is used to protect
user processes from each other, and from changing operating-system code and
data. The relocation register contains the value of the smallest physical
address, while the limit register contains the range of logical addresses.
Each logical address must be less than the value in the limit register; the
MMU adds the relocation register to the logical address to form the
physical address.
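The relocation-register scheme above can be sketched in a few lines. This is an illustrative model, not an OS implementation; the register values are made-up examples:

```python
# Relocation-register scheme: every logical address is checked against the
# limit register, then offset by the relocation register (values assumed).
RELOCATION_REGISTER = 14000   # smallest physical address of the partition
LIMIT_REGISTER = 3000         # range of legal logical addresses

def translate(logical_address):
    """Map a CPU-generated logical address to a physical address."""
    if logical_address >= LIMIT_REGISTER:
        raise MemoryError("trap: logical address outside partition")
    return RELOCATION_REGISTER + logical_address

print(translate(346))   # 14346
```

Any address at or beyond the limit register traps to the operating system instead of touching another process's memory.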
• Multiple-partition allocation
In this type of allocation, main memory is divided into a number of fixed-sized
partitions where each partition should contain only one process. When a
partition is free, a process is selected from the input queue and is loaded into
the free partition. When the process terminates, the partition becomes available
for another process.
Stacks
A stack is a special area of computer’s memory which stores temporary
variables created by a function. In stack, variables are declared, stored and
initialized during runtime. It is a temporary storage memory. When the
computing task is complete, the memory of the variable will be automatically
erased. The stack section mostly contains methods, local variable, and
reference variables.
Heaps
The heap is a region of memory used by programming languages to store
global variables. By default, all global variables are stored in heap
memory space. It supports dynamic memory allocation.
The heap is not managed automatically and is not as tightly managed by
the CPU; it is more like a free-floating region of memory.
• Heap should be used when we need to allocate a large block of memory.
For example, you want to create a large size array or big structure to keep
that variable around for a long time then you should allocate it on the heap.
• However, if we are working with relatively small variables that are
required only while the function using them is active, we should use the
stack, which is faster and easier.
KEY DIFFERENCE
• Stack is a linear data structure whereas Heap is a hierarchical data
structure.
• Stack memory is allocated in a contiguous block whereas Heap
memory is allocated in any random order.
• Stack variables do not need to be de-allocated explicitly, whereas in
the Heap de-allocation is needed.
• Stack memory will never become fragmented whereas Heap memory
can become fragmented as blocks of memory are first allocated and
then freed.
• Stack accesses local variables only while Heap allows you to access
variables globally.
• Stack variables can’t be resized whereas Heap variables can be resized.
• Stack allocation and deallocation are done by compiler instructions,
whereas Heap allocation and deallocation are done by the programmer.
Memory Allocation Model
Main memory usually has two partitions −
• Low Memory − Operating system resides in this memory.
• High Memory − User processes are held in high memory.
The operating system uses the following memory allocation mechanisms.
Swapping:
A process must be in main memory to be executed. Swapping is the act of
temporarily moving a process out of main memory into secondary memory
(the backing store) and later bringing it back; main memory is fast
compared to secondary memory. Swapping allows more processes to be run
than could fit into memory at one time.
Swapping is also known as roll out, roll in: if a higher-priority
process arrives and wants service, the memory manager can swap out a
lower-priority process and then load and execute the higher-priority
process. When the higher-priority work finishes, the lower-priority
process is swapped back into memory and continues execution.
Fixed-size Partition Scheme
• This technique is also known as Static partitioning. In this scheme, the
system divides the memory into fixed-size partitions. The size of each
partition is fixed as indicated by the name of the technique and it
cannot be changed.
• In this partition scheme, each partition may contain exactly one
process. This limits the degree of multiprogramming, because the number
of partitions determines the maximum number of resident processes.
• Whenever any process terminates then the partition becomes available
for another process.
Advantages
• This scheme is simple and is easy to implement
• It supports multiprogramming as multiple processes can be stored
inside the main memory.
• Management is easy using this scheme
Disadvantages of Fixed-size Partition Scheme
• Internal Fragmentation
If the size of the process is smaller than the size of the partition,
part of the partition is wasted and remains unused. This wastage inside
an allocated partition is termed internal fragmentation.
• Limitation on the size of the process
If in case, size of a process is more than that of a maximum-sized partition
then that process cannot be loaded into the memory. Due to this, a condition is
imposed on the size of the process and it is that the size of the process cannot
be larger than the size of the largest partition.
• External Fragmentation
It is another drawback of the fixed-size partition scheme as total unused space
by various partitions cannot be used in order to load the processes even though
there is the availability of space but it is not in the contiguous fashion.
• Degree of multiprogramming is less
In this partition scheme the size of a partition cannot change to match
the size of a process, so the degree of multiprogramming is low and fixed.
Variable-size Partition Scheme
• This scheme is also known as Dynamic partitioning and came into
existence to overcome the drawback of internal fragmentation that is
caused by Static partitioning. In this partitioning scheme, allocation is
done dynamically.
• The size of the partition is not declared initially. Whenever any process
arrives, a partition of size equal to the size of the process is created and
then allocated to the process. Thus the size of each partition is equal to the
size of the process.
• As partition size varies according to the need of the process so in this
partition scheme there is no internal fragmentation.
Let us understand this with the help of an example. In the diagram above,
process P1 (3 MB) and process P3 (8 MB) have completed their execution,
leaving two holes of 3 MB and 8 MB. Suppose a process P4 of size 10 MB
arrives. The empty space cannot be allocated to it, because spanning is
not allowed in contiguous allocation: a process must occupy a single
contiguous region of main memory in order to execute. This results in
external fragmentation.
• Difficult Implementation
The implementation of this partition scheme is more difficult than the
Fixed Partitioning scheme, as it involves allocating memory at run time
rather than during system configuration. The OS keeps track of all the
partitions, but here allocation and deallocation are done very
frequently and partition sizes change each time, so it is difficult for
the operating system to manage everything.
• A modern computer system has its memory space divided into blocks of
varying sizes. The operating system assigns these memory blocks to the
processes demanding memory, depending on the amount of memory requested
and the empty slots present in main memory.
• Assigning memory blocks to processes is challenging, as primary memory
must be divided among the operating system, user processes, and
operating-system processes. The system therefore uses different
algorithms to allocate memory from the main memory segment. These
algorithms, also known as memory partitioning algorithms, are broadly
categorized as follows:
• First Fit
The first fit approach allocates the first free partition (hole) large
enough to accommodate the process. The search finishes as soon as the
first suitable free partition is found.
Advantage: the fastest algorithm, because it searches as little as
possible.
Disadvantage: the unused memory areas left after allocation are wasted
if they are too small, so requests with larger memory requirements may
not be satisfiable.
• Best Fit
Best fit allocates the smallest free partition that meets the requirement
of the requesting process. The algorithm searches the entire list of free
partitions and chooses the smallest hole that is adequate, i.e. the hole
closest to the actual size needed.
Advantage: memory utilization is much better than first fit, as it
allocates the smallest adequate free partition.
Disadvantage: it is slower, and it tends to fill memory with tiny,
useless holes.
• Worst Fit
The worst fit approach allocates the largest available free partition, so
that the leftover portion will be big enough to be useful. It is the
reverse of best fit.
Advantage: reduces the rate at which small, unusable gaps are produced.
Disadvantage: if a process requiring a large amount of memory arrives at
a later stage, it cannot be accommodated, as the largest hole has
already been split and occupied.
Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and
600 KB (in order), how would the first-fit, best-fit, and worst-fit
algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in
order)? Which algorithm makes the most efficient use of memory?
ANSWER:
First-fit: 212K is put in the 500K partition; 417K is put in the 600K
partition; 112K is put in the 288K partition (new partition
288K = 500K - 212K); 426K must wait.
Best-fit: 212K is put in the 300K partition; 417K is put in the 500K
partition; 112K is put in the 200K partition; 426K is put in the 600K
partition.
Worst-fit: 212K is put in the 600K partition; 417K is put in the 500K
partition; 112K is put in the 388K partition (new partition
388K = 600K - 212K); 426K must wait.
Best-fit makes the most efficient use of memory here, since it is the
only algorithm that places all four processes.
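The exercise above can be checked with a small simulation. This is a sketch of the three placement policies on a list of hole sizes, not an OS allocator; holes keep their memory order so first fit behaves correctly:

```python
# Simulate first-fit, best-fit, and worst-fit over a list of free hole
# sizes. Placing a process shrinks the chosen hole; order is preserved.
def allocate(holes, processes, choose):
    holes = holes[:]                     # free hole sizes, in memory order
    placement = []
    for p in processes:
        candidates = [i for i, h in enumerate(holes) if h >= p]
        if not candidates:
            placement.append(None)       # process must wait
            continue
        i = choose(candidates, holes)
        placement.append(holes[i])       # size of the hole it was put in
        holes[i] -= p                    # leftover becomes a smaller hole
    return placement

holes = [100, 500, 200, 300, 600]
procs = [212, 417, 112, 426]

first = allocate(holes, procs, lambda c, h: c[0])
best  = allocate(holes, procs, lambda c, h: min(c, key=lambda i: h[i]))
worst = allocate(holes, procs, lambda c, h: max(c, key=lambda i: h[i]))

print(first)   # [500, 600, 288, None]
print(best)    # [300, 500, 200, 600]
print(worst)   # [600, 500, 388, None]
```

Only best fit places all four processes, confirming the written answer.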
FRAGMENTATION
In contiguous memory allocation, space is allocated to processes as they
come into RAM. RAM is divided either on the basis of fixed partitioning
(the partition sizes are fixed before processes are loaded into RAM) or
dynamic partitioning (the size of the partition is decided at run time
according to the size of the process). As processes are loaded into and
removed from memory, the free space gets broken into small pieces that
cannot be allocated to incoming processes. This problem is called
fragmentation.
Fragmentation is an unwanted problem where the memory blocks cannot
be allocated to the processes due to their small size and the blocks remain
unused.
There are two types of fragmentation:
• Internal Fragmentation
• External Fragmentation
• Internal Fragmentation
In this fragmentation, a process is allocated a memory block larger than
the size of the process. Some part of the block is left unused, and this
causes internal fragmentation.
• External Fragmentation
In this fragmentation, although the total free space available is enough
for a process, the process still cannot be placed in memory because that
space is not contiguous. This is called external fragmentation.
• How to remove external fragmentation?
The problem occurs because memory is allocated contiguously to
processes. If this condition is removed, external fragmentation can be
reduced. This is what is done in paging and segmentation
(non-contiguous memory allocation techniques), where memory is
allocated to processes non-contiguously.
Another way to remove external fragmentation is compaction. When
dynamic partitioning is used for memory allocation then external
fragmentation can be reduced by merging all the free memory together in
one large block. This technique is also called defragmentation. This
larger block of memory is then used for allocating space according to the
needs of the new processes.
Both internal and external fragmentation are natural processes that
leave memory empty or wasted. Although neither problem can be solved
completely, each can be reduced to some extent with the solutions
mentioned above.
COMPACTION
Definition
Compaction is the process of moving all used partitions to one end of
memory, creating one big free area at the other end for new processes.
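The definition above can be sketched as a tiny simulation. The layout below is a made-up example; a real memory manager would also update each process's base register, as discussed later:

```python
# Minimal compaction sketch: used blocks (process, size) slide to the low
# end of memory, and all free space coalesces into one hole at the end.
def compact(memory_size, blocks):
    """blocks: list of (process_name_or_None, size) in memory order;
    None marks a free hole."""
    used = [b for b in blocks if b[0] is not None]
    free = memory_size - sum(size for _, size in used)
    return used + [(None, free)]         # one large hole at the high end

layout = [("P1", 300), (None, 200), ("P2", 400), (None, 100)]
print(compact(1000, layout))   # [('P1', 300), ('P2', 400), (None, 300)]
```

The two scattered holes (200 and 100) merge into a single 300-unit hole that a new process can use.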
(Figure: fragmented memory after compaction)
Compaction method problems
Although compaction solves the problem of external fragmentation, it has
some disadvantages.
• Compaction is only possible when the system supports dynamic
relocation. When dynamic relocation is supported, the compaction process
relocates the program and its data and changes the base register to
reflect the new base address. This is how relocation takes place. If the
program does not support dynamic relocation, compaction cannot be
applied.
• The compaction process is time consuming and very expensive. Even the
simplest compaction algorithm must move all processes towards one end of
memory and all free space towards the other direction to produce one
large hole of available memory. These steps are costly and require a lot
of CPU time, so compaction decreases the efficiency of the computer
system.
• The compaction method can solve external fragmentation, but its
disadvantages make it a less effective solution. The other way to solve
external fragmentation is non-contiguous memory allocation, in which
memory is divided into smaller chunks distributed all over main memory.
Non-contiguous memory allocation deals effectively with external
fragmentation.
PAGING
Paging is a memory-management scheme that permits the physical address
space of a process to be noncontiguous. Paging avoids the considerable problem
of fitting memory chunks of varying sizes onto the backing store; most memory-
management schemes used before the introduction of paging suffered from this
problem. The problem arises because, when some code fragments or data
residing in main memory need to be swapped out, space must be found on the
backing store.
Because of its advantages over earlier methods, paging in its various
forms is commonly used in most operating systems. Traditionally, support
for paging has been handled by hardware.
The mapping from virtual to physical addresses is done by the memory
management unit (MMU), which is a hardware device; this mapping is known
as the paging technique.
• The physical address space is conceptually divided into a number of
fixed-size blocks, called frames.
• The logical address space is also split into fixed-size blocks,
called pages.
• Page Size = Frame Size
The basic method for implementing paging involves breaking physical
memory into fixed-sized blocks called frames and breaking logical
memory into blocks of the same size called pages. When a process is to be
executed, its pages are loaded into any available memory frames from the
backing store. The backing store is divided into fixed-sized blocks that are
of the same size as the memory frames. The hardware support for paging is
illustrated in previous figure.
Every address generated by the CPU is divided into two parts: a page
number (p) and a page offset (d). The page number is used as an index into
a page table. The page table contains the base address of each page in
physical memory. This base address is combined with the page offset to
define the physical memory address that is sent to the memory unit. The
page size (like the frame size) is defined by the hardware. It is typically a
power of 2, varying between 512 bytes and 16 MB per page, depending on
the computer architecture.
Using a page size of 4 bytes and a physical memory of 32 bytes
(8 frames), we show how the user's view of memory can be mapped into
physical memory. Logical address 0 is page 0, offset 0. Indexing into
the page table, we find that page 0 is in frame 5. Thus, logical
address 0 maps to physical address 20 (= (5 × 4) + 0). Logical address 3
(page 0, offset 3) maps to physical address 23 (= (5 × 4) + 3). Logical
address 4 is page 1, offset 0; according to the page table, page 1 is
mapped to frame 6. Thus, logical address 4 maps to physical address 24
(= (6 × 4) + 0). Logical address 13 (page 3, offset 1, with page 3 in
frame 2) maps to physical address 9 (= (2 × 4) + 1).
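The translation in this example can be written out directly. The frames for pages 0 and 1 come from the text; the frames assumed here for pages 2 and 3 are chosen to reproduce the mapping of logical address 13:

```python
# Paging translation for the 4-byte-page example: page table maps
# page number -> frame number (entries for pages 2-3 are assumed).
PAGE_SIZE = 4
page_table = [5, 6, 1, 2]

def to_physical(logical):
    p, d = divmod(logical, PAGE_SIZE)    # page number, page offset
    return page_table[p] * PAGE_SIZE + d

print(to_physical(0))    # 20
print(to_physical(3))    # 23
print(to_physical(4))    # 24
print(to_physical(13))   # 9
```

Because the page size is a power of 2, `divmod` here corresponds to the hardware simply splitting the address bits into page number and offset.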
The difference between the user's view of memory and the actual physical
memory is reconciled by the address-translation hardware: the logical
addresses are translated into physical addresses. This mapping is hidden
from the user and is controlled by the operating system. Note that a
user process is by definition unable to access memory it does not own:
it has no way of addressing memory outside its page table, and the table
includes only the pages that the process owns. Since the operating
system manages physical memory, it must be aware of the allocation
details of physical memory: which frames are allocated, which frames are
available, how many total frames there are, and so on.
This information is generally kept in a data structure called a frame table.
The frame table has one entry for each physical page frame, indicating
whether the latter is free or allocated and, if it is allocated, to which page
of which process or processes.
Hardware Support
Each operating system has its own methods for storing page tables. Most
allocate a page table for each process. A pointer to the page table is stored
with the other register values (like the instruction counter) in the process
control block. The hardware implementation of the page table can be done
in several ways. In the simplest case, the page table is implemented as a set
of dedicated registers. These registers should be built with very high-speed
logic to make the paging-address translation efficient.
Most contemporary computers, however, allow the page table to be very
large (for example, 1 million entries). For these machines, the use of
fast registers to implement the page table is not feasible. Rather, the
page table is kept in main memory, and a page-table base register (PTBR)
points to the page table. To access location i, we must first index into
the page table, using the value in the PTBR offset by the page number.
With this scheme, two memory accesses are needed to access a byte (one
for the page-table entry, one for the byte itself). Thus, memory access
is slowed by a factor of 2, a delay that would be intolerable under most
circumstances.
The standard solution to this problem is to use a special, small,
fast-lookup hardware cache, called a translation look-aside buffer
(TLB). The TLB is associative, high-speed memory. Each entry in the TLB
consists of two parts: a key (or tag) and a value. When a logical
address is generated by the CPU, its page number is presented to the
TLB. If the page number is found, its frame number is immediately
available and is used to access memory.
If the page number is not in the TLB (a TLB miss), a memory reference to
the page table must be made; we then add the page number and frame
number to the TLB, so that they will be found quickly on the next
reference. If the TLB is already full of entries, the operating system
must select one for replacement. Replacement policies range from least
recently used (LRU) to random. Furthermore, TLBs allow some entries to
be wired down, meaning that they cannot be removed from the TLB;
typically, TLB entries for kernel code are wired down. Some TLBs store
address-space identifiers (ASIDs) in each TLB entry. An ASID uniquely
identifies each process and is used to provide address-space protection
for that process.
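The cost of TLB misses can be quantified with the standard effective-access-time calculation. The formula follows from the text (a hit costs one memory access, a miss costs two); the timings and hit ratio below are assumed examples, not figures from the text:

```python
# Effective memory-access time (EMAT) with a TLB: a hit needs one memory
# access, a miss needs an extra access for the page-table entry.
def effective_access_time(hit_ratio, tlb_time, mem_time):
    hit  = tlb_time + mem_time           # frame number found in the TLB
    miss = tlb_time + 2 * mem_time       # extra access for the page table
    return hit_ratio * hit + (1 - hit_ratio) * miss

# Assumed: 20 ns TLB lookup, 100 ns memory access, 80% hit ratio.
print(effective_access_time(0.80, 20, 100))   # 140.0 (ns)
```

With an 80% hit ratio the slowdown is 40% over a bare 100 ns access, far better than the 2x penalty of always walking the in-memory page table.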
Protection
Memory protection in a paged environment is accomplished by protection
bits associated with each frame. Normally, these bits are kept in the page
table. One bit can define a page to be read-write or read-only.
At the same time that the physical address is being computed, the
protection bits can be checked to verify that no writes are being made to a
read-only page. An attempt to write to a read-only page causes a hardware
trap to the operating system (or memory-protection violation).
We can easily expand this approach to provide a finer level of protection.
We can create hardware to provide read-only, read-write, or execute-only
protection; or, by providing separate protection bits for each kind of
access, we can allow any combination of these accesses. Illegal attempts
will be trapped to the operating system. One additional bit is generally
attached to each entry in the page table: a valid-invalid bit. When this bit is
set to "valid," the associated page is in the process's logical address space
and is thus a legal (or valid) page.
When the bit is set to "invalid", the page is not in the process's
logical address space, and illegal addresses are trapped by use of the
valid-invalid bit. The operating system sets this bit for each page to
allow or disallow access to the page.
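The protection checks described above can be sketched as part of the translation step. The entry layout and field names here are assumptions for illustration; real hardware packs these bits into the page-table entry:

```python
# Page-table entries carrying a valid-invalid bit and a read-only bit
# alongside the frame number (field names and values are assumed).
PAGE_SIZE = 1024

page_table = {
    0: {"frame": 3, "valid": True,  "read_only": False},
    1: {"frame": 7, "valid": True,  "read_only": True},
    2: {"frame": 0, "valid": False, "read_only": False},
}

def access(logical, write=False):
    p, d = divmod(logical, PAGE_SIZE)
    entry = page_table.get(p)
    if entry is None or not entry["valid"]:
        raise MemoryError("trap: invalid page")           # illegal address
    if write and entry["read_only"]:
        raise PermissionError("trap: write to read-only page")
    return entry["frame"] * PAGE_SIZE + d

print(access(5))   # 3077  (frame 3 * 1024 + offset 5)
```

Both checks happen while the physical address is being computed, so protection adds no extra memory accesses.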
Shared Pages
If the code is reentrant code (or pure code), it can be shared. Reentrant
code is non-self-modifying code; it never changes during execution. Thus, two or
more processes can execute the same code at the same time. Each process has
its own copy of registers and data storage to hold the data for the process’s
execution. The data for different processes will be different.
Consider, for example, a text editor shared by several users: only one
copy of the editor needs to be kept in physical memory. Each user's page
table maps onto the same physical copy of the editor, but data pages are
mapped onto different frames. Other heavily used programs, such as
compilers, window systems, run-time libraries, and database systems, can
also be shared. To be sharable, the code must be reentrant.
Shared memory can also be described as a method of inter process
communication. Some OS’s implement shared memory using shared pages.
Advantages and Disadvantages of Paging
• Advantages: paging eliminates external fragmentation, makes allocation
simple (any free frame can hold any page), and supports swapping and
sharing of pages.
• Disadvantages: the last page of a process may suffer internal
fragmentation, the page table itself consumes memory, and address
translation requires extra memory accesses (mitigated by the TLB).
SEGMENTATION
Need of Segmentation
One of the important issues of memory management in the operating system
is the separation between the user's view of memory and the actual
physical memory. Paging provides such a separation: the user's view is
mapped onto physical storage, and this mapping allows logical memory to
be distinguished from physical memory.
It is possible that the operating system divides a single function
across different pages, and those pages may or may not be loaded into
memory at the same time; moreover, paging does not take the user's view
of the process into account. This decreases the system's efficiency.
Segmentation is a better technique in this respect, because it divides
the process into segments that match the user's view.
(Figure: user's view of a program, i.e. the logical address space)
Basic Method
A computer system that is using segmentation has a logical address space that
can be viewed as multiple segments. And the size of the segment is variable i.e.
it may grow or shrink. During the execution each segment has a name and
length. And the address mainly specifies both, name of the segment and the
displacement within the segment.
Therefore the user specifies each address with the help of two quantities:
segment name and offset.
For simplified Implementation segments are numbered; thus referred to as
segment number rather than segment name.
Thus the logical address consists of two tuples:
<segment-number, offset>
where,
Segment Number (s): specifies which segment of the process is being
referenced.
Offset (d): specifies the displacement within that segment; it must be
smaller than the segment's length.
Segmentation Architecture
Segment Table
A Table that is used to store the information of all segments of the process is
commonly known as Segment Table. Generally, there is no simple relationship
between logical addresses and physical addresses in this scheme.
The mapping of a two-dimensional Logical address into a one-dimensional
Physical address is done using the segment table.
This table is mainly stored as a separate segment in the main memory.
The base address of the segment table is kept in a register commonly
known as the Segment Table Base Register (STBR).
In the segment table each entry has :
• Segment Base/base address: The segment base mainly contains the starting
physical address where the segments reside in the memory.
• Segment Limit: The segment limit is mainly used to specify the length of
the segment.
Segment Table Base Register(STBR) The STBR register is used to point to the
segment table's location in the memory.
Segment Table Length Register (STLR): This register indicates the number
of segments used by a program. The segment number s is legal
if s < STLR.
Segmentation Hardware
The logical address generated by the CPU consists of two parts:
• Segment Number (s): used as an index into the segment table.
• Offset (d): must lie between 0 and the segment limit; if the offset
exceeds the segment limit, a trap is generated.
Thus, for a correct offset: physical address = segment base + offset,
and the segment table is basically an array of base-limit register
pairs.
Advantages of Segmentation
• In the Segmentation technique, the segment table is mainly used to keep the
record of segments. Also, the segment table occupies less space as compared
to the paging table.
• There is no Internal Fragmentation.
• Segmentation generally allows us to divide the program into modules that
provide better visualization.
• Segments are of variable size.
Disadvantages of Segmentation
• Maintaining a segment table for each process leads to overhead
• This technique is expensive.
• The time taken in order to fetch the instruction increases as two memory accesses
are required.
• Segments are of unequal size in segmentation and thus are not suitable for
swapping.
• This technique leads to external fragmentation: as processes are
loaded into and removed from main memory, the free space gets broken
down into smaller pieces, resulting in a lot of wasted memory.
Example of Segmentation
Given below is the example of the segmentation, There are five segments
numbered from 0 to 4. These segments will be stored in Physical memory
as shown. There is a separate entry for each segment in the segment table
which contains the beginning entry address of the segment in the physical
memory( denoted as the base) and also contains the length of the
segment(denoted as limit).
Segment 2 is 400 bytes long and begins at location 4300. Thus a
reference to byte 53 of segment 2 is mapped onto location
4300 + 53 = 4353. A reference to segment 3, byte 852, is mapped to
3200 (the base of segment 3) + 852 = 4052.
A reference to byte 1222 of segment 0 would result in a trap to the OS,
as the length of this segment is only 1000 bytes.
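The lookups in this example can be replayed against a segment table. The base and limit of segment 2, the base of segment 3, and the limit of segment 0 come from the text; the remaining values (segment 0's base, segment 3's limit) are assumed for illustration:

```python
# Segment-table translation: each entry holds a base and a limit; an
# offset at or beyond the limit traps to the OS.
segment_table = {
    0: {"base": 1400, "limit": 1000},   # base assumed
    2: {"base": 4300, "limit": 400},
    3: {"base": 3200, "limit": 1100},   # limit assumed (> 852)
}

def translate(s, d):
    entry = segment_table[s]
    if d >= entry["limit"]:
        raise MemoryError("trap: offset beyond segment limit")
    return entry["base"] + d

print(translate(2, 53))    # 4353
print(translate(3, 852))   # 4052
```

A call like `translate(0, 1222)` raises the trap, matching the last sentence of the example.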
Paged Segmentation and Segmented Paging
A big challenge with single-level paging is that if the logical address space is large, the page table may occupy a lot of space in main memory. Since each process has its own page table, a lot of memory is consumed when single-level paging is used.
Segmented Paging
A solution to this problem is to use segmentation along with paging to reduce the size of the page tables. Traditionally, a program is divided into four segments: the code segment, data segment, stack segment, and heap segment.
The size of the page table can be reduced by creating a separate page table for each segment. To accomplish this, hardware support is required. The address generated by the CPU is now partitioned into a segment number, page number, and offset.
The memory management unit (MMU) uses the segment table, which contains the address (base) and limit of each page table. The page table in turn points to the page frames of the segment in main memory.
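A minimal sketch of this two-level lookup, with an assumed page size and hypothetical frame numbers:

```python
PAGE_SIZE = 1024  # assumed page size in bytes

# One page table per segment, indexed by segment number.
# Frame numbers here are hypothetical examples.
PAGE_TABLES = {0: [5, 9, 2], 1: [7, 1]}

def translate(segment: int, page: int, offset: int) -> int:
    table = PAGE_TABLES[segment]  # segment-table lookup yields the page table
    if page >= len(table) or not 0 <= offset < PAGE_SIZE:
        raise MemoryError("trap: address outside segment")
    # Page-table lookup yields the frame; frame number * page size + offset
    return table[page] * PAGE_SIZE + offset

print(translate(0, 1, 100))  # frame 9 -> 9 * 1024 + 100 = 9316
```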
Advantages of Segmented Paging
• The page table size is reduced, as pages are present only for the data of segments, reducing the memory requirements.
• Gives a programmer's view along with the advantages of paging.
• Reduces external fragmentation in comparison with segmentation.
• Since the entire segment need not be swapped out, swapping out into virtual memory becomes easier.
Paged Segmentation
• In segmented paging, not every process has the same number of segments, and the segment tables can be large, which causes external fragmentation due to the varying segment table sizes. To solve this problem, paged segmentation pages the segment table itself. The logical address generated by the CPU now consists of page no. #1, segment no., page no. #2, and offset.
• Even with segmented paging, the page tables can contain many invalid pages. In that case, instead of combining multi-level paging with segmented paging, the problem of a large page table can be solved by applying multi-level paging directly.
Advantages of Paged Segmentation
• No external fragmentation
• Reduced memory requirements, as the number of pages is limited by the segment size.
• Page table size is smaller, just as in segmented paging.
• Similar to segmented paging, the entire segment need not be swapped out.
• Increased flexibility in memory allocation: each segment can have a different size, and pages are allocated per segment.
• Improved protection and security: Paged Segmentation provides better protection
and security by isolating each segment and its pages, preventing a single segment
from affecting the entire process’s memory.
• Increased program structure: Paged Segmentation provides a natural program structure, with each segment representing a different logical part of a program.
• Improved error detection and recovery: Paged Segmentation enables the
detection of memory errors and the recovery of individual segments, rather than
the entire process’s memory.
• Reduced overhead in memory management: Paged Segmentation reduces the
overhead in memory management by eliminating the need to maintain a single,
large page table for the entire process’s memory.
• Improved memory utilization: Paged Segmentation can improve memory
utilization by reducing fragmentation and allowing for the allocation of larger
blocks of contiguous memory to each segment.
Disadvantages of Paged Segmentation
• Internal fragmentation remains a problem.
• The hardware is more complex than for segmented paging.
• The extra level of paging at the first stage adds to the delay in memory access.
• Increased complexity in memory management: Paged Segmentation introduces
additional complexity in the memory management process, as it requires the
maintenance of multiple page tables for each segment, rather than a single page
table for the entire process’s memory.
• Increased overhead in memory access: Paged Segmentation introduces additional
overhead in memory access, as it requires multiple lookups in multiple page
tables to access a single memory location.
• Reduced performance: Paged Segmentation can result in reduced performance, as
the additional overhead in memory management and access can slow down the
overall process.
• Increased storage overhead: Paged Segmentation requires additional storage
overhead, as it requires additional data structures to store the multiple page tables
for each segment.
• Increased code size: Paged Segmentation can result in increased code size, as the
additional code required to manage the multiple page tables can take up valuable
memory space.
• Reduced address space: Paged Segmentation can result in a reduced address
space, as some of the available memory must be reserved for the storage of the
multiple page tables.
Virtual Memory in Operating System
When no frames are free
When no frame is free, a victim page must first be written out to disk before the desired page can be read in, which doubles the page-fault service time. To reduce this overhead we use the modify bit, or dirty bit.
i) When this scheme is used, each page or frame has a modify bit
associated with it in the hardware.
ii) Whenever any word or byte in the page has to be written into then the
modify bit for a page is set by hardware. It indicates that the page has
been modified.
iii) If the bit is set, the page has been modified since it was read in from the disk.
If the bit is not set, the page has not been modified since it was read into
memory.
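The write-back decision can be sketched as follows (the `Frame` structure and the `disk_writes` counter are illustrative, not a real OS API):

```python
from dataclasses import dataclass

@dataclass
class Frame:
    page: int
    dirty: bool = False  # modify bit, set by hardware on any write to the page

disk_writes = 0

def evict(frame: Frame) -> None:
    """Free a frame; write it back to disk only if its modify bit is set."""
    global disk_writes
    if frame.dirty:
        disk_writes += 1  # page changed since it was read in: must write back
    frame.page = None
    frame.dirty = False

evict(Frame(page=7, dirty=True))   # modified page: one disk write
evict(Frame(page=3, dirty=False))  # clean page: simply discarded, no write
print(disk_writes)  # 1
```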
• FIFO PAGE REPLACEMENT ALGORITHM
It is a very simple page replacement algorithm, referred to as First In First Out. It replaces the oldest page, i.e., the one that has been present in main memory for the longest time.
• This algorithm is implemented by keeping track of all the pages in a queue.
• As new pages are requested and are swapped in, they are added to the tail
of a queue and the page which is at the head becomes the victim.
• This is not an effective way of page replacement but it can be used for
small systems.
Advantages
• This algorithm is simple and easy to use.
• FIFO does not cause much overhead.
Disadvantages
• This algorithm does not take into account how frequently or how recently a page was used; it simply replaces the oldest page.
• Page faults may increase as the number of page frames increases (Belady's anomaly).
• The performance of this algorithm is the worst of the algorithms discussed here.
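The queue mechanics described above can be sketched as follows (the reference string is an arbitrary example):

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                         # hit: FIFO order is not updated
        faults += 1
        if len(frames) == n_frames:
            frames.discard(order.popleft())  # evict the page at the queue head
        frames.add(page)
        order.append(page)                   # new page joins at the tail
    return faults

print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # 7 page faults
```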
BELADY’S ANOMALY
Generally, on increasing the number of frames to a process’ virtual memory,
its execution becomes faster as fewer page faults occur. Sometimes the reverse
happens, i.e. more page faults occur when more frames are allocated to a
process. This most unexpected result is termed Belady’s Anomaly.
This phenomenon is commonly experienced in the following page
replacement algorithms:
• First in first out (FIFO)
• Second chance algorithm
• Random page replacement algorithm
Case-1: If the system has 3 frames, the given reference string using the FIFO page replacement algorithm yields a total of 9 page faults. The diagram below illustrates the pattern of the page faults occurring in the example.
Case-2: If the system has 4 frames, the given reference string using the FIFO page replacement algorithm yields a total of 10 page faults. The diagram below illustrates the pattern of the page faults occurring in the example.
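The slide's counts can be reproduced with the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (an assumption here, since the original string appears only in the diagram):

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.discard(order.popleft())  # evict the oldest page
            frames.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 page faults
print(fifo_faults(refs, 4))  # 10 page faults: more frames, yet more faults
```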
• LEAST RECENTLY USED PAGE REPLACEMENT ALGORITHM
LRU stands for Least Recently Used. This algorithm helps the operating system keep track of how recently each page in memory has been used.
• The page that has not been used for the longest time in the main memory
will be selected for replacement.
• This algorithm is easy to implement.
• This algorithm makes use of the counter along with every page.
Advantages of LRU
• It is an efficient technique.
• It makes it easy to identify pages that have not been needed for a long time.
• It allows a full analysis of page-reference behavior.
Disadvantages of LRU
• It is expensive and more complex to implement.
• It needs an additional data structure (such as a counter or a stack) to track recency.
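A minimal LRU simulation, keeping resident pages ordered by recency of use (the reference string is an assumed example):

```python
def lru_faults(refs, n_frames):
    """Count page faults under LRU replacement."""
    frames, faults = [], 0  # list ordered from least to most recently used
    for page in refs:
        if page in frames:
            frames.remove(page)  # hit: re-insert at the most-recent position
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)    # evict the least recently used page
        frames.append(page)
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10 page faults
```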
• OPTIMAL PAGE REPLACEMENT ALGORITHM
This algorithm replaces the page that will not be used for the longest time in the future.
• A practical implementation of this algorithm is not possible, because we cannot predict in advance which pages will not be used for the longest time in the future.
• This algorithm yields the smallest number of page faults and is therefore the best algorithm, used as a benchmark for the others.
Advantages
• Lowest page fault rate.
• Never suffers from Belady's anomaly.
• Twice as good as FIFO Page Replacement Algorithm.
Disadvantages
• This algorithm requires future knowledge of the program's behavior.
• Practical implementation is not possible because the operating system cannot track future requests.
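Although OPT cannot be implemented online, it can be simulated offline once the whole reference string is known, which is how it serves as a benchmark. A sketch (reference string assumed):

```python
def optimal_faults(refs, n_frames):
    """Count page faults under OPT: evict the page whose next use is farthest away."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            def next_use(p):
                try:
                    return refs.index(p, i + 1)  # index of p's next reference
                except ValueError:
                    return float("inf")          # never used again: ideal victim
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

print(optimal_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 7 page faults
```

On the same reference string, FIFO with 3 frames incurs 9 faults and LRU incurs 10, so OPT's 7 is indeed the lower bound here.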
• COUNTING-BASED PAGE REPLACEMENT
• Least Frequently Used, LFU: Replace the page with the lowest
reference count. A problem can occur if a page is used frequently
initially and then not used any more, as the reference count remains
high.
• Most Frequently Used, MFU: Replace the page with the highest
reference count. The logic behind this idea is that pages that have
already been referenced a lot have been in the system a long time, and
we are probably done with them, whereas pages referenced only a few
times have only recently been loaded, and we still need them.
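A minimal LFU sketch (the reference string is an assumed example); MFU would simply replace `min` with `max` in the victim selection:

```python
from collections import Counter

def lfu_faults(refs, n_frames):
    """Count page faults under LFU: evict the resident page with the lowest reference count."""
    frames, counts, faults = set(), Counter(), 0
    for page in refs:
        counts[page] += 1  # reference count survives even after eviction
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.discard(min(frames, key=lambda p: counts[p]))
            frames.add(page)
    return faults

print(lfu_faults([1, 1, 2, 3, 1], 2))  # 3 faults: page 2 (count 1) is evicted for 3
```

Note how the surviving count illustrates the problem stated above: a page referenced heavily early on keeps a high count and resists eviction even after it stops being used.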
THRASHING
Swapping a process out means removing all of its pages from memory, or marking them so that they will be removed by the normal page replacement process. Suspending a process ensures that it is not runnable while it is swapped out. At some later time, the system swaps the process back from secondary storage to main memory. When a process spends most of its time swapping pages in and out, the situation is called thrashing.
In the diagram below, as the degree of multiprogramming increases up to a certain point (lambda), CPU utilization is very high and system resources are fully utilized. But if we increase the degree of multiprogramming further, CPU utilization falls drastically: the system spends most of its time on page replacement, and the time taken to complete the execution of each process increases. This situation is called thrashing.
Causes of Thrashing :
• High degree of multiprogramming : If the number of processes keeps on
increasing in the memory then the number of frames allocated to each
process will be decreased. So, fewer frames will be available for each
process. Due to this, a page fault will occur more frequently and more CPU
time will be wasted in just swapping in and out of pages and the utilization
will keep on decreasing.
• Lack of Frames: If a process has fewer frames, fewer of its pages can reside in memory, so more frequent swapping in and out is required. This may lead to thrashing. Hence a sufficient number of frames must be allocated to each process in order to prevent thrashing.
Recovery from Thrashing:
• Do not allow the system to go into thrashing: instruct the long-term scheduler not to bring more processes into memory once the threshold is reached.
• If the system is already thrashing then instruct the mid-term scheduler to
suspend some of the processes so that we can recover the system from
thrashing.