Unit 4 Memory Management Notes

The document covers main memory organization, including address binding, logical vs. physical address space, and memory management techniques like paging and segmentation. It explains concepts such as swapping, contiguous memory allocation, memory fragmentation, and various memory partitioning algorithms (First Fit, Best Fit, Worst Fit, Next Fit). Additionally, it details the paging technique, including the structure of the page table and hardware support for paging.

Main Memory

Topics covered:
1. Organizing memory hardware
   1. Address binding
   2. Logical versus physical address space
   3. Swapping
   4. Contiguous memory allocation
   5. Memory allocation and fragmentation
2. Memory-management techniques
   1. Paging; structure of the page table
   2. Segmentation

What is Address binding?

Address binding is the process of mapping a program's addresses from one address space to another.

Address binding can be done in three different ways:

1. Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
2. Load time: Relocatable code must be generated if the memory location is not known at compile time.
3. Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This needs hardware support for address maps (e.g., base and limit registers).

Differentiate between logical (relative) address and physical address

1. Logical address – generated by the CPU; also referred to as a virtual address.
2. Physical address – the address seen by the memory unit.
* Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
The logical address space is the set of all logical addresses generated by a user program.
The physical address space is the set of all physical addresses corresponding to these logical addresses.
What is the MMU?

 The memory-management unit (MMU) is a hardware device that maps virtual to physical addresses at run time.
 In a simple MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
 The base register is now called the relocation register.
 The user program deals with logical addresses; it never sees the real physical addresses.
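The relocation-register scheme described above can be sketched in a few lines. The register values below are illustrative (taken from the classic textbook example of a relocation register of 14000), not fixed by these notes:

```python
# Sketch of the simple MMU scheme: every logical address generated by the
# process is added to the relocation (base) register, and the limit
# register bounds the legal logical-address range.
# Register values are illustrative, not prescribed by the notes.

RELOCATION_REGISTER = 14000   # smallest physical address of the process
LIMIT_REGISTER = 3000         # size of the logical address space

def mmu_translate(logical_address):
    """Map a logical address to a physical address, trapping on overflow."""
    if not (0 <= logical_address < LIMIT_REGISTER):
        raise MemoryError("addressing error: trap to operating system")
    return RELOCATION_REGISTER + logical_address

print(mmu_translate(346))   # 14346
```

A logical address of 346 thus maps to physical address 14000 + 346 = 14346, while an out-of-range address traps to the operating system.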

What is Swapping?
Swapping is a mechanism in which a process can be swapped temporarily out of main memory (moved) to secondary storage (disk), making that memory available to other processes.

Backing store – a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.

Roll out, roll in – a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.

The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped (the size of the process). The system maintains a ready queue of ready-to-run processes which have memory images on disk.

Explain in detail Contiguous Memory Allocation

Memory allocation is the process by which computer programs are assigned memory. Main memory is usually divided into two partitions:

1. Low memory – the operating system resides in this part of memory.

2. High memory – user processes are held in high memory.

Contiguous memory allocation is a memory allocation method that allocates a single contiguous section of memory to a process.
Relocation and limit registers are used to protect user processes from each other, and from changing operating-system code and data:
 The base (relocation) register contains the value of the smallest physical address.
 The limit register contains the range of logical addresses – each logical address must be less than the limit register.
 The MMU maps logical addresses dynamically.

Multiple-partition allocation: The degree of multiprogramming is limited by the number of partitions. Variable partition sizes are used for efficiency (sized to a given process's needs).

Hole – a block of available memory; holes of various sizes are scattered throughout memory. When a process arrives, it is allocated memory from a hole large enough to accommodate it. A process exiting frees its partition, and adjacent free partitions are combined. The operating system maintains information about:
a) allocated partitions b) free partitions (holes)

Explain the different ways a request for memory of size n from a process is satisfied from a
list of free holes. Or

Explain the different memory partitioning algorithms.

The system uses different algorithms to allocate memory from the main memory segment. These algorithms, also known as memory partitioning algorithms, are broadly categorized as follows:

 First Fit
 Next Fit
 Best Fit
 Worst Fit
First Fit:
 In this method, the first partition encountered during memory traversal whose size equals or exceeds the memory demanded by the process is allocated to the process. This continues until all processes are allocated free memory blocks or no suitable free memory block is left that can be assigned to a process.
 First fit is a simple and fast algorithm.
Next Fit :

 Next fit is a modified version of ‘first fit’.


 It begins as the first fit to find a free partition but when called next time it starts searching from where
it left off, not from the beginning.

Best Fit:
 In the best fit memory allocation method, the memory is traversed until the smallest suitable empty block is found. Memory wastage is minimal, as the approach allocates the memory block with minimum leftover space.
 It assigns the smallest hole that is big enough.
 This algorithm produces the smallest leftover hole.
 It is a slow process: checking the whole free list for each request makes the operating system slower and takes more time to complete the work.
Worst Fit:
 The worst fit approach is the direct opposite of best fit. In this method, the system searches for the largest empty memory block that can hold the demanded memory. The approach is known as worst fit because it leaves the maximum amount of unused space in the allocated hole.
 It allocates the largest hole.
 This algorithm produces the largest leftover hole, which is more likely to be useful for later requests.
2. Consider a swapping system in which memory consists of the following hole sizes in memory order: 10 KB, 4 KB, 20 KB, 18 KB, 7 KB, 9 KB, 12 KB, and 15 KB. Which hole is taken for successive segment requests of (i) 12 KB (ii) 10 KB (iii) 9 KB under the first fit, best fit, and worst fit approaches?

P1 12 KB
P2 10 KB
P3 9 KB

First Fit Approach:

Process 1 (size 12 KB):
* The process cannot fit in the first hole, as 10 KB (hole size) < 12 KB (process size).
* The process cannot fit in the second hole, as 4 KB < 12 KB.
* The process fits in the third hole, as 20 KB > 12 KB, so it is assigned there.
* This creates a new hole of size 8 KB (20 KB - 12 KB).

Process 2 (size 10 KB):
* The process fits in the first hole, as 10 KB (hole size) = 10 KB (process size), so it is assigned there.

Process 3 (size 9 KB):
* The first hole is occupied.
* The process cannot fit in the second hole (4 KB < 9 KB), nor in the 8 KB left of the third hole.
* The process fits in the fourth hole, as 18 KB > 9 KB, so it is assigned there.
* This creates a new hole of size 9 KB (18 KB - 9 KB).

Allocation of holes according to the first fit approach:

Hole  Total size  Allocated   New hole
1     10 KB       P2 (10 KB)  -
2     4 KB        -           -
3     20 KB       P1 (12 KB)  8 KB
4     18 KB       P3 (9 KB)   9 KB
5     7 KB        -           -
6     9 KB        -           -
7     12 KB       -           -
8     15 KB       -           -

First fit: P1 -> 20 KB, P2 -> 10 KB, P3 -> 18 KB.

Best Fit Approach:

Process 1 (size 12 KB):
* The memory is traversed searching for the hole with minimum wastage.
* The 7th hole (12 KB) is assigned to process 1.
Process 2 (size 10 KB):
* The 1st hole (10 KB) is assigned to process 2.
Process 3 (size 9 KB):
* The 6th hole (9 KB) is assigned to process 3.

Best fit: P1 -> 12 KB, P2 -> 10 KB, P3 -> 9 KB.

Worst Fit Approach:

Process 1 (size 12 KB):
* The memory is traversed searching for the largest hole.
* The 3rd hole (20 KB) is assigned to process 1, creating a new hole of size 8 KB.
Process 2 (size 10 KB):
* The 4th hole (18 KB) is assigned to process 2, creating a new hole of size 8 KB.
Process 3 (size 9 KB):
* The 8th hole (15 KB) is assigned to process 3, creating a new hole of size 6 KB.

Worst fit: P1 -> 20 KB, P2 -> 18 KB, P3 -> 15 KB.

Ans: First fit: (i) 20 KB (ii) 10 KB (iii) 18 KB
Best fit: (i) 12 KB (ii) 10 KB (iii) 9 KB
Worst fit: (i) 20 KB (ii) 18 KB (iii) 15 KB
Next fit: (i) 20 KB (ii) 18 KB (iii) 9 KB
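The exercise above can be checked with a short simulation. This is a sketch under simplified assumptions (each request shrinks the chosen hole in place, no coalescing, and a suitable hole is assumed to exist), which is all this exercise needs:

```python
# Placement algorithms applied to the exercise above.
# 'choose' picks one index from the candidate holes large enough for the request.

def allocate(holes, requests, choose):
    """Return the original size of the hole picked for each request."""
    sizes = list(holes)
    picked = []
    for req in requests:
        candidates = [i for i, s in enumerate(sizes) if s >= req]
        i = choose(candidates, sizes)   # assumes candidates is non-empty
        picked.append(holes[i])
        sizes[i] -= req                 # leftover becomes a smaller hole
    return picked

holes = [10, 4, 20, 18, 7, 9, 12, 15]   # KB, in memory order
requests = [12, 10, 9]                  # P1, P2, P3

first_fit = allocate(holes, requests, lambda c, s: c[0])
best_fit  = allocate(holes, requests, lambda c, s: min(c, key=lambda i: s[i]))
worst_fit = allocate(holes, requests, lambda c, s: max(c, key=lambda i: s[i]))

print(first_fit)   # [20, 10, 18]
print(best_fit)    # [12, 10, 9]
print(worst_fit)   # [20, 18, 15]
```

The three strategies differ only in the `choose` function: first fit takes the first candidate, best fit the smallest, worst fit the largest.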

What is memory fragmentation? Differentiate internal fragmentation and external fragmentation.

Memory fragmentation is memory which is not used and gets wasted. The memory is divided into parts, and when processes occupy memory space they are sometimes unable to use a whole block, leaving holes in memory. This is memory fragmentation. It is of 2 types:
1. External fragmentation
2. Internal fragmentation

Internal fragmentation occurs when we allocate more memory to a program or application than it actually uses; the unused portion inside the allocated block is wasted.

External fragmentation occurs due to allocation and deallocation of memory, which leaves holes – small memory spaces that are not sufficient for other programs to use.
Topics covered:
1. Memory-management techniques
   1. Paging
   2. Structure of the page table
   3. Segmentation

Explain in detail about paging technique.


What Paging is

Paging is a memory management scheme that permits the physical address space of the process to be
noncontiguous.

In paging, main memory is partitioned into equal-size chunks that are relatively small, and each process is divided into small fixed-size chunks of the same size. The chunks of the process, known as pages, can then be assigned to available chunks of memory, known as frames or page frames.

In this scheme there is no external fragmentation, and internal fragmentation consists of at most a fraction of the last page of a process.

Working of Paging technique:

Suppose process A, stored on disk, consists of 4 pages. When it is time to load the process, the OS finds four free frames and loads the four pages of process A into them. Process B, consisting of three pages, and process C, consisting of four pages, are subsequently loaded. Process B is then suspended and swapped out of memory.

Now suppose the OS wants to bring process D into memory, but there are not sufficient unused contiguous frames to hold the process. Under the paging scheme the OS loads the process into non-contiguous frames, i.e., the process is loaded into frames 4, 5, 6, 11 and 12. This is illustrated below.

Assignment of processes to free frames.

Thus in paging the partitions are of smaller size, a program may occupy more than one partition, and these partitions need not be contiguous.

Page Table:
The operating system maintains a page table for each process. The page table shows the frame location for each page of the process. Within the program, each logical address consists of a page number and an offset within the page. With paging, the logical-to-physical address translation is done by the processor hardware.

The diagram below shows the page-to-frame allocation in memory for each process in the diagram above.

Generally, when the CPU issues a request for page 0 of a process, it is a logical address; looking at the page table, the system knows that physically the page is stored in frame 1. The location of frame 1 is the physical address where page 0 is stored.

Paging model of logical and physical memory

How Page table is used to map logical address to physical Address:

Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is
used as an index into a page table. The page table contains the base address of each page in physical memory. This base
address is combined with the page offset to define the physical memory address that is sent to the memory unit. The paging
model of memory is shown below.

Paging Hardware.
Example of Logical to Physical address Translation in Paging scheme

Consider a logical address of n + m bits, where the leftmost n bits are the page number and the
rightmost m bits are the offset. Let n = 6 and m = 10. The following steps are needed for address translation:
Example 1:

Example 2:

Consider the memory in Figure. Using a page size of 4 bytes and a physical memory of 32 bytes (8
pages), we show how the user's view of memory can be mapped into physical memory.
1. Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5.
Thus, logical address 0 maps to physical address 20 (= (5 x 4) + 0).
2. Logical address 3 (page 0, offset 3) maps to physical address 23 (= (5 x 4) + 3).
3. Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6.
4. Thus, logical address 4 maps to physical address 24 (= (6 x 4) + 0).
5. Logical address 13 maps to physical address 9.
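The translation steps above can be written as a short sketch. The page table below (page -> frame: 0 -> 5, 1 -> 6, 2 -> 1, 3 -> 2) is the mapping implied by the example's results, and the 4-byte page size comes from the example, not from real hardware:

```python
# Logical-to-physical translation for the 4-byte-page example above.

PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # page number -> frame number

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE   # p = high-order bits
    offset = logical_address % PAGE_SIZE         # d = low-order bits
    frame = page_table[page_number]
    return frame * PAGE_SIZE + offset

for la in (0, 3, 4, 13):
    print(la, "->", translate(la))   # 0->20, 3->23, 4->24, 13->9
```

Each result matches the worked example: e.g. logical address 13 is page 3, offset 1, and frame 2 gives 2 x 4 + 1 = 9.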

Explain hardware support for paging:

The hardware implementation of the page table can be done in several ways:
1. Dedicated registers:
a. The page table is implemented as a set of dedicated registers. These registers should be built with very high-speed logic to make the paging-address translation efficient.
b. The CPU dispatcher reloads these registers, just as it reloads the other registers.
c. The use of registers for the page table is satisfactory only if the page table is reasonably small.
2. Page-table base register (PTBR):
a. A pointer to the page table is stored in this register.
b. Changing page tables requires changing only this one register, substantially reducing context-switch time.
c. The problem with this approach is the time required to access a user memory location: two memory accesses are needed, one for the page-table entry and one for the data.
3. Translation look-aside buffer (TLB):
a. The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a key (or tag) and a value.
b. When the associative memory is presented with an item, the item is compared with all keys simultaneously. If the item is found, the corresponding value field is returned. The search is fast; the hardware, however, is expensive.
c. If the page number is not in the TLB (known as a TLB miss), a memory reference to the page table must be made. When the frame number is obtained, we can use it to access memory.
d. Some TLBs allow entries to be wired down, meaning that they cannot be removed from the TLB.
e. The percentage of times that a particular page number is found in the TLB is called the hit ratio.

Explain how memory protection is provided in a paged environment

Memory protection in a paged environment is accomplished by protection bits associated with each frame. Normally, these bits are kept in the page table.
One bit can define a page to be read-write or read-only. Every reference to memory goes through the page table to find the correct frame number. At the same time that the physical address is being computed, the protection bits can be checked to verify that no writes are being made to a read-only page. An attempt to write to a read-only page causes a hardware trap to the operating system.
One additional bit is generally attached to each entry in the page table: a valid-invalid bit. When this bit is set to "valid," the associated page is in the process's logical address space and is thus a legal (or valid) page. When the bit is set to "invalid," the page is not in the process's logical address space. Illegal addresses are trapped by use of the valid-invalid bit. The concept is illustrated in the figure below.

Explain the common techniques for structuring the page table.


Most common techniques for structuring the page table are:
1. Hierarchical Paging
2. Hashed Page Tables
3. Inverted Page Tables
Hierarchical Paging
Most modern computer systems support a large logical address space (2^32 to 2^64). In such an environment, the page table itself becomes very large. In hierarchical paging the page table is not stored contiguously in main memory; instead, the page table is divided into smaller pieces. It is also known as multilevel paging.
A simple technique in hierarchical paging is a two-level or three-level page table.

A two-level page-table scheme:

One way is to use a two-level paging algorithm, in which the page table itself is also paged.

Consider a 32-bit logical address space with a page size of 4 KB. A logical address is divided into a page number consisting of 20 bits and a page offset consisting of 12 bits. Because we page the page table, the page number is further divided into a 10-bit page number and a 10-bit page offset. Thus, a logical address is as follows:
where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. The address-translation method for this architecture is shown below. This scheme is also known as a forward-mapped page table.
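The field split described above (p1 = 10 bits, p2 = 10 bits, offset d = 12 bits) can be sketched with shifts and masks; the sample address is constructed for illustration:

```python
# Split a 32-bit logical address into the two-level paging fields above.

def split_two_level(addr):
    d  = addr & 0xFFF            # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into the inner page table
    p1 = (addr >> 22) & 0x3FF    # high 10 bits: index into the outer page table
    return p1, p2, d

# e.g. an address whose fields are p1=1, p2=2, d=3:
addr = (1 << 22) | (2 << 12) | 3
print(split_two_level(addr))   # (1, 2, 3)
```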

A three-level page-table scheme:

Suppose that the outer page table is made up of standard-size pages (2^10 entries, or 2^12 bytes); a 64-bit address space is still daunting, so the outer page table itself can be paged again, giving a three-level scheme.

Hashed Page Tables:

In this scheme, the virtual page number is hashed into a hash table. Each entry in the hash table contains a linked list of elements that hash to the same location. Each element consists of three fields:
(1) the virtual page number,
(2) the value of the mapped page frame, and
(3) a pointer to the next element in the linked list.
The algorithm works as follows:
1. The virtual page number in the virtual address is hashed into the hash table.
2. The virtual page number is compared with field 1 in the first element in the linked list.
3. If there is a match, the corresponding page frame (field 2) is used to form the desired physical address.
4. If there is no match, subsequent entries in the linked list are searched for a matching virtual page number. This scheme is shown below.
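A minimal sketch of the lookup just described, with each bucket holding a chain of (virtual page number, frame) pairs; the table size and contents below are made up for illustration:

```python
# Hashed page table: buckets of (virtual page number, frame) pairs.

TABLE_SIZE = 8
hash_table = [[] for _ in range(TABLE_SIZE)]

def insert(vpn, frame):
    hash_table[vpn % TABLE_SIZE].append((vpn, frame))

def lookup(vpn):
    for entry_vpn, frame in hash_table[vpn % TABLE_SIZE]:  # walk the chain
        if entry_vpn == vpn:          # step 2: compare field 1
            return frame              # step 3: use field 2 (the frame)
    return None                       # step 4 exhausted: page-fault path

insert(5, 42)      # 5 and 13 collide (both hash to bucket 5)
insert(13, 7)
print(lookup(13))  # 7
print(lookup(6))   # None
```

Python lists stand in for the linked lists of field 3; the chain walk is the same.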

Inverted Page Tables

An inverted page table has one entry for each real page (or frame) of memory. Thus, only one page table is in the system.
Inverted page tables often require that an address-space identifier be stored in each entry of the page table, since the table usually contains several different address spaces mapping physical memory. Storing the address-space identifier ensures that a logical page for a particular process is mapped to the corresponding physical page frame.
1. Each virtual address in the system consists of a triple <process-id, page-number, offset>.
2. Each inverted page-table entry is a pair <process-id, page-number>, where the process-id assumes the role of the address-space identifier.
3. When a memory reference occurs, part of the virtual address, consisting of <process-id, page-number>, is presented to the memory subsystem. The inverted page table is then searched for a match. If a match is found – say, at entry i – then the physical address <i, offset> is generated. If no match is found, then an illegal address access has been attempted.

Although this scheme decreases the amount of memory needed to store each page table, it increases the amount of time needed to search the table when a page reference occurs.

Inverted Page Table

Segmentation:
In segmentation, a user program and its associated data are divided into a number of segments. Segments can be of different sizes. As in paging, a logical address using segmentation consists of two parts: a segment number and an offset.

Because of its unequal-size partitions, segmentation is similar to dynamic partitioning. The difference compared to dynamic partitioning is that with segmentation a program can occupy more than one partition, and these partitions need not be contiguous.

Segmentation eliminates internal fragmentation; it suffers from external fragmentation, but much less than dynamic partitioning does.

Segmentation implementation consists of:

 Logical address consists of a two-tuple:
 <segment-number, offset>
 Segment table – maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
o base – contains the starting physical address where the segment resides in memory
o limit – specifies the length of the segment
 Segment-table base register (STBR) points to the segment table's location in memory
Segmentation Hardware

A logical address consists of two parts: a segment number, s, and an offset into that segment, d. The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit. If it is not, we trap to the operating system (logical addressing attempt beyond end of segment). When an offset is legal, it is added to the segment base to produce the address in physical memory of the desired byte. The segment table is thus essentially an array of base-limit register pairs.
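The base/limit check can be sketched directly. The segment-table values below are illustrative (segment 2 with base 4300 and limit 400 is the textbook's classic example, where byte 53 of segment 2 maps to 4353):

```python
# Segmentation hardware: index the segment table, check the limit, add the base.

segment_table = {0: (1400, 1000), 2: (4300, 400), 3: (3200, 1100)}  # s -> (base, limit)

def translate(s, d):
    base, limit = segment_table[s]
    if not (0 <= d < limit):
        raise MemoryError("trap: logical addressing beyond end of segment")
    return base + d

print(translate(2, 53))   # 4353
```

An offset of 400 or more in segment 2 would trap, since the limit is 400.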

Compare paging and segmentation.

Basis for comparison | Paging | Segmentation
Basic | A page is a fixed-size block. | A segment is of variable size.
Fragmentation | Paging may lead to internal fragmentation. | Segmentation may lead to external fragmentation.
Address | The user-specified address is divided by the CPU into a page number and an offset. | The user specifies each address by two quantities: a segment number and the offset (within the segment limit).
Size | The hardware decides the page size. | The segment size is specified by the user.
Table | Paging involves a page table that contains the base address of each page. | Segmentation involves a segment table that contains the segment base and limit (segment length).
Compare paging with segmentation with respect to the amount of memory required by
the address translation structures in order to convert virtual addresses to physical
addresses.
Answer: Paging requires more memory overhead to maintain the translation structures. Segmentation
requires just two registers per segment: one to maintain the base of the segment and the other to maintain the
extent of the segment. Paging on the other hand requires one entry per page, and this entry provides the
physical address in which the page is located.

Virtual Memory - Part 1


Topics:
 Demand Paging

What is virtual memory? : It is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory.

The size of virtual storage is limited by the addressing scheme of the computer and by the amount of secondary memory available.
Why should we go for virtual memory? : Virtual memory leads to improved system utilization because:
1. More processes may be maintained in memory:
Because we only load some of the pieces of each particular process, there is room for more processes.
2. A process may be larger than main memory:
The size of the program written by the programmer is not limited by the size of main memory, since main memory is extended by virtual memory on the hard disk; parts of the program are stored in virtual memory.
What is Demand Paging? : In the demand paging technique we load pages only as they are needed. It is commonly used in virtual memory systems.

 With demand-paged virtual memory, pages are only loaded when they are demanded during
program execution; pages that are never accessed are thus never loaded into physical memory.
 A demand-paging system is similar to a paging system with swapping, where processes reside in
secondary memory (usually a disk).When we want to execute a process, we swap it into memory.
 Pager is used for swapping required pages from virtual memory to main memory.
Working of the Demand Paging Technique: The steps involved in demand paging are shown below.

Access to a page marked invalid causes a page-fault trap. The paging hardware, in translating the address through the page table, will notice that the invalid bit is set, causing a trap to the operating system. This trap is the result of the operating system's failure to bring the desired page into memory.

The procedure for handling this page fault is given below:


1. We check an internal table (usually kept with the process control block) for this process to
determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid, but we have not yet
brought in that page, we now page it in.
3. We find a free frame (by taking one from the free-frame list, for example).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and the
page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap. The process can now access the
page as though it had always been in memory.

What is Pure Demand Paging?

 In pure demand paging the OS never brings a page into memory until it is required.
 With this technique we can start executing a process with no pages in memory. When the operating system sets the instruction pointer to the first instruction of the process, which is on a non-memory-resident page, the process immediately faults for the page. After this page is brought into memory the process continues to execute, faulting as necessary until every page that it needs is in memory. At that point, it can execute with no more faults.

What hardware support is required for demand paging? : The hardware to support demand paging is the same as the hardware for paging and swapping:
 Page table. This table has the ability to mark an entry invalid through a valid-invalid bit or a special value of the protection bits.
 Secondary memory. This memory holds those pages that are not present in main memory. The secondary memory is usually a high-speed disk. It is known as the swap device, and the section of disk used for this purpose is known as swap space.
What is the effective access time for a demand-paged memory?

Let the memory-access time, denoted ma, range from 10 to 200 nanoseconds. As long as we have no page faults, the effective access time is equal to the memory-access time.
If, however, a page fault occurs, we must first read the relevant page from disk and then access the desired word.
Let p be the probability of a page fault (0 <= p <= 1). We would expect p to be close to zero – that is, we would expect to have only a few page faults. The effective access time is then

effective access time = (1 - p) x ma + p x page fault time
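The formula can be evaluated numerically. The values below (ma = 200 ns, page-fault time = 8 ms, p = 0.001) are sample numbers chosen for illustration, not constants from the notes:

```python
# Effective access time for demand paging:
# EAT = (1 - p) * ma + p * page_fault_time

def effective_access_time(p, ma_ns, fault_ns):
    return (1 - p) * ma_ns + p * fault_ns

ma = 200                 # memory access time, ns
fault = 8_000_000        # page-fault service time: 8 ms expressed in ns
print(effective_access_time(0.001, ma, fault))   # 8199.8 ns
```

Even a fault probability of 1 in 1000 inflates the 200 ns access time to about 8.2 microseconds, which is why p must be kept very small.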

Explain how a page fault is handled by the OS: A page fault causes the following sequence to occur:
Step 1: Service the page-fault interrupt

1. Trap to the operating system.

2. Save the user registers and process state.

3. Determine that the interrupt was a page fault.

4. Check that the page reference was legal and determine the location of the page on the disk.

Step 2: Read in the page

Issue a read from the disk to a free frame:

a. Wait in a queue for this device until the read request is serviced.

b. Wait for the device seek and/or latency time.

c. Begin the transfer of the page to a free frame.

Give a detailed description of the hardware implementation of a page table with a translation look-aside buffer.

Explain how protection can be ensured using paging.

What is the need for mapping between logical addresses and physical addresses? [2]
What is thrashing? [3]
What do you mean by virtual memory?

Page Replacement Algorithms in Operating Systems

In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page to replace when a new page comes in.

Page Fault – A page fault happens when a running program accesses a memory page that is mapped into the
virtual address space, but not loaded in physical memory.

Since actual physical memory is much smaller than virtual memory, page faults happen. In case of page
fault, Operating System might have to replace one of the existing pages with the newly needed page.
Different page replacement algorithms suggest different ways to decide which page to replace. The target for
all algorithms is to reduce the number of page faults.

FIFO Page Replacement Algorithm


It is a very simple way of Page replacement and is referred to as First in First Out. This algorithm mainly
replaces the oldest page that has been present in the main memory for the longest time.

 This algorithm is implemented by keeping the track of all the pages in the queue.

 As new pages are requested and are swapped in, they are added to the tail of a queue and the page
which is at the head becomes the victim.
 This is not an effective way of page replacement but it can be used for small systems.

Advantages

 This algorithm is simple and easy to use.

 FIFO incurs little overhead.

Disadvantages

 This algorithm does not make use of how frequently or recently a page was used; it just replaces the oldest page.
 Page faults may even increase as the number of page frames increases (Belady's anomaly).
 The performance of this algorithm is often poor.

2. LRU Page Replacement Algorithm in OS

This algorithm stands for "Least Recently Used". It helps the operating system find pages that have not been used over a recent window of time.

 The page that has not been used for the longest time in main memory is selected for replacement.
 This algorithm is relatively easy to implement.
 One implementation keeps a counter (time-of-use field) with each page.

Advantages of LRU

 It is an efficient technique.
 With this algorithm, it becomes easy to identify pages that have not been needed for a long time.
 It does not suffer from Belady's anomaly.

Disadvantages of LRU

 It is expensive and has more complexity.

 There is a need for an additional data structure.

3. Optimal Page Replacement Algorithm

This algorithm replaces the page that will not be used for the longest time in the future. A practical implementation of this algorithm is not possible.

 Practical implementation is not possible because we cannot predict in advance which pages will not be used for the longest time in the future.
 This algorithm leads to the fewest page faults and is thus the best-known algorithm.

Also, this algorithm can be used to measure the performance of other algorithms.

Advantages of OPT
 This algorithm is simple to describe.
 It provides excellent efficiency with the fewest page faults.
 It serves as a benchmark against which other algorithms are measured.

Disadvantages of OPR

 In this algorithm future awareness of the program is needed.


 Practical Implementation is not possible because the operating system is unable to track the future
request

Page Replacement Algorithms with exaample :

 First In First Out (FIFO) –


This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track
of all pages in the memory in a queue, the oldest page is in the front of the queue. When a page needs
to be replaced page in the front of the queue is selected for removal.

Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page
faults.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page
Faults.
when 3 comes, it is already in memory so —> 0 Page Faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page, i.e. 1 —> 1 Page
Fault.
6 comes; it is also not available in memory, so it replaces the oldest page, i.e. 3 —> 1 Page Fault.
Finally, when 3 comes again it is not available, so it replaces 0 —> 1 Page Fault.

Total number of page faults = 6.

Belady’s anomaly – Belady’s anomaly proves that it is possible to have more page faults when
increasing the number of page frames while using the First in First Out (FIFO) page replacement
algorithm. For example, if we consider reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we
get 9 total page faults, but if we increase slots to 4, we get 10 page faults.
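The FIFO walkthrough and Belady's anomaly can both be checked with a short simulation (a sketch; the helper name `fifo_faults` is illustrative). Running it on the anomaly reference string reproduces 9 faults with 3 frames but 10 with 4.

```python
from collections import deque

def fifo_faults(reference, n_frames):
    """Count page faults under FIFO page replacement."""
    frames = deque()              # front of the deque holds the oldest page
    faults = 0
    for page in reference:
        if page in frames:
            continue              # hit: FIFO order is not updated
        faults += 1
        if len(frames) == n_frames:
            frames.popleft()      # evict the oldest page
        frames.append(page)
    return faults

ref = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(ref, 3))   # 9
print(fifo_faults(ref, 4))   # 10 -- more frames, yet more faults
```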

 Optimal Page replacement –


In this algorithm, pages are replaced which would not be used for the longest duration of time in the
future.
Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the
number of page faults.

Initially all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults
0 is already there so —> 0 Page fault.
when 3 comes it takes the place of 7 because 7 is not used for the longest duration of time in the
future —> 1 Page fault.
0 is already there so —> 0 Page fault.
4 takes the place of 1 —> 1 Page Fault.

Now for the further page reference string —> 0 Page fault because they are already available in the
memory.

Total number of page fault = 6.

Optimal page replacement is perfect, but not possible in practice as the operating system cannot
know future requests. The use of Optimal Page replacement is to set up a benchmark so that other
replacement algorithms can be analyzed against it.
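The Example-2 result can be verified with a simulation of the Optimal policy (a sketch; the name `optimal_faults` is illustrative). On a fault with full frames, it evicts the page whose next use lies farthest in the future, or that is never used again.

```python
def optimal_faults(reference, n_frames):
    """Count page faults under Optimal (OPT) replacement."""
    frames = []
    faults = 0
    for i, page in enumerate(reference):
        if page in frames:
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
            continue
        future = reference[i + 1:]
        # victim: the resident page used farthest ahead (never used -> infinity)
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else float('inf'))
        frames[frames.index(victim)] = page
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(optimal_faults(ref, 4))   # 6
```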

Least Recently Used –


In this algorithm page will be replaced which is least recently used.

Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the
number of page faults.

Initially all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults
0 is already there so —> 0 Page fault.
when 3 comes it takes the place of 7 because 7 is least recently used —> 1 Page fault.
0 is already in memory so —> 0 Page fault.
4 takes the place of 1 —> 1 Page Fault.
Now for the further page reference string —> 0 Page fault because they are already available in the
memory.
Total number of page fault = 6.
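A compact way to simulate LRU is with an `OrderedDict`, which keeps pages in recency order (a sketch; the helper name `lru_faults` is illustrative). It reproduces the 6 faults found above.

```python
from collections import OrderedDict

def lru_faults(reference, n_frames):
    """Count page faults under LRU replacement.

    The OrderedDict keeps pages from least to most recently used,
    so the front entry is always the eviction victim.
    """
    frames = OrderedDict()
    faults = 0
    for page in reference:
        if page in frames:
            frames.move_to_end(page)     # refresh recency on a hit
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.popitem(last=False)   # evict the least recently used page
        frames[page] = None
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(ref, 4))   # 6
```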

Allocation of frames in Operating System


Virtual memory, an important aspect of operating systems, is implemented using demand paging. Demand
paging requires both a page-replacement algorithm and a frame-allocation algorithm.
Frame-allocation algorithms are needed when there are multiple processes; they decide how many frames to
allocate to each process.

There are various constraints to the strategies for the allocation of frames:

 You cannot allocate more than the total number of available frames.
 At least a minimum number of frames should be allocated to each process. This constraint is
supported by two reasons. First, when fewer frames are allocated, the page-fault rate increases,
degrading the performance of the process. Second,
there should be enough frames to hold all the different pages that any single instruction can
reference.

Frame allocation algorithms –


The two algorithms commonly used to allocate frames to a process are:

1. Equal allocation: In a system with x frames and y processes, each process gets equal number of
frames, i.e. x/y. For instance, if the system has 48 frames and 9 processes, each process will get 5
frames. The three frames which are not allocated to any process can be used as a free-frame buffer
pool.
o Disadvantage: In systems with processes of varying sizes, it does not make much sense to
give each process equal frames. Allocation of a large number of frames to a small process
will eventually lead to the wastage of a large number of allocated unused frames.
2. Proportional allocation: Frames are allocated to each process according to the process size.
For a process pi of size si, the number of allocated frames is ai = (si/S)*m, where S is the sum of the
sizes of all the processes and m is the number of frames in the system. For instance, in a system with
62 frames, if there is a process of 10KB and another process of 127KB, then the first process will be
allocated (10/137)*62 = 4 frames and the other process will get (127/137)*62 = 57 frames.
o Advantage: All the processes share the available frames according to their needs, rather than
equally.
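Both strategies can be sketched in a few lines (the helper names are my own). The numbers match the examples above: 48 frames over 9 processes gives 5 each with 3 left over, and splitting 62 frames between 10KB and 127KB processes gives 4 and 57 frames.

```python
def equal_allocation(m_frames, n_processes):
    """Each process gets m/n frames; the remainder forms a free-frame pool."""
    per_process = m_frames // n_processes
    free_pool = m_frames - per_process * n_processes
    return per_process, free_pool

def proportional_allocation(sizes, m_frames):
    """a_i = (s_i / S) * m, truncated to whole frames, with S = sum of sizes."""
    total = sum(sizes)
    return [int(s / total * m_frames) for s in sizes]

print(equal_allocation(48, 9))                 # (5, 3)
print(proportional_allocation([10, 127], 62))  # [4, 57]
```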

Global vs Local Allocation –


The number of frames allocated to a process can also dynamically change depending on whether you have
used global replacement or local replacement for replacing pages in case of a page fault.

1. Local replacement: When a process needs a page which is not in the memory, it can bring in the
new page and allocate it a frame from its own set of allocated frames only.
o Advantage: The pages in memory for a particular process and the page fault ratio is affected
by the paging behavior of only that process.
o Disadvantage: A low priority process may hinder a high priority process by not making its
frames available to the high priority process.
2. Global replacement: When a process needs a page which is not in the memory, it can bring in the
new page and allocate it a frame from the set of all frames, even if that frame is currently allocated to
some other process; that is, one process can take a frame from another.
o Advantage: Does not hinder the performance of processes and hence results in greater
system throughput.
o Disadvantage: The page-fault ratio of a process cannot be controlled solely by the process
itself. The pages in memory for a process depend on the paging behavior of other processes
as well.
Thrashing in Operating System
If page faults and swapping happen very frequently, the operating system has
to spend more time swapping pages than doing useful work. This state is termed thrashing. Because of
thrashing, CPU utilization is reduced.

Let's understand by an example, if any process does not have the number of frames that it needs to support
pages in active use then it will quickly page fault. And at this point, the process must replace some pages. As
all the pages of the process are actively in use, it must replace a page that will be needed again right away.
Consequently, the process will quickly fault again, and again, and again, replacing pages that it must bring
back in immediately. This high paging activity by a process is called thrashing.

During thrashing, the CPU spends less time on actual productive work and more time swapping.

Figure: Thrashing

Causes of Thrashing
Thrashing affects the performance of execution in the Operating system. Also, thrashing results in severe
performance problems in the Operating system.

When CPU utilization is low, the process-scheduling mechanism tries to load many processes
into memory at the same time, increasing the degree of multiprogramming. In this
situation, there are more processes in memory than the available number of frames can support, so
only a limited number of frames can be allocated to each process.

Whenever a high-priority process arrives in memory and no frame is freely available at
that time, a frame occupied by another process is moved to secondary
storage, and the freed frame is allocated to the higher-priority process.

We can also say that as soon as memory fills up, processes start spending a lot of time waiting for required
pages to be swapped in. CPU utilization again becomes low because most processes are
waiting for pages.

Thus a high degree of multiprogramming and lack of frames are two main causes of thrashing in the
Operating system.

Effect of Thrashing
At the time, when thrashing starts then the operating system tries to apply either the Global page
replacement Algorithm or the Local page replacement algorithm.
Global Page Replacement
Global page replacement can take a frame from any process, so whenever thrashing is found it simply brings in more
pages. As a result, no process gets enough frames, and the thrashing increases
more and more. Thus the global page-replacement algorithm is not suitable when thrashing occurs.
Local Page Replacement
Unlike global page replacement, local page replacement selects only pages that belong to the faulting
process. Because of this, there is a chance of reducing the thrashing. However, local page replacement has
its own disadvantages, so it is simply an alternative to global
page replacement, not a complete solution.

Techniques used to handle the thrashing


As noted above, local page replacement is better than global page replacement in this respect, but
it has many disadvantages too, so it is not sufficient by itself. The following
techniques are therefore used:

Working-Set Model
This model is based on the assumption of locality. It uses a parameter Δ to define
the working-set window. The main idea is to examine the most recent Δ page references. Locality says
that a recently used page is likely to be used again, and that the pages near it will also be
used.

1. Working Set
The set of pages in the most recent Δ page references is known as the working set. If a page is in active
use, then it will be in the working set. If the page is no longer being used, it will drop out of the
working set Δ time units after its last reference.

The working set gives an approximation of the locality of the program.

The accuracy of the working set depends on the value chosen for Δ.

This working-set model avoids thrashing while keeping the degree of multiprogramming as high as possible.
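Computing the working set at time t amounts to collecting the distinct pages in the last Δ references (a sketch; the function name and reference string are illustrative):

```python
def working_set(reference, t, delta):
    """Distinct pages in the window of the last `delta` references,
    ending at (and including) time t."""
    start = max(0, t - delta + 1)
    return set(reference[start:t + 1])

# With Δ = 4, only pages 3 and 2 appear in the last four references.
ref = [1, 2, 1, 3, 4, 4, 3, 3, 2, 2]
print(working_set(ref, 9, 4))   # {2, 3}
```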

2. Page Fault Frequency

Figure: Page fault frequency

The working-set model is successful, and knowledge of the working set can be useful for prepaging, but it is a clumsy
approach to avoiding thrashing. Another, more direct technique used to avoid thrashing is Page
Fault Frequency (PFF).

The main problem is how to prevent thrashing. Since thrashing means a high page-fault rate, we want to
control the page-fault rate directly.

When the page-fault rate is too high, we know that the process needs more frames. Conversely, if the page-fault
rate is too low, then the process may have too many frames.

We can establish upper and lower bounds on the desired page-fault rate. If the actual page-fault rate exceeds the
upper limit, we allocate the process another frame. If the page-fault rate falls below the
lower limit, we remove a frame from the process.
Thus with this, we can directly measure and control the page fault rate in order to prevent thrashing.
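The PFF control loop can be sketched as a simple threshold check (the bounds 0.10 and 0.02 are illustrative values, not taken from the text):

```python
def adjust_frames(fault_rate, frames, upper=0.10, lower=0.02):
    """Page-Fault Frequency control: grow the allocation above the upper
    bound, shrink it below the lower bound, otherwise leave it alone."""
    if fault_rate > upper:
        return frames + 1    # process is thrashing: give it another frame
    if fault_rate < lower and frames > 1:
        return frames - 1    # process has frames to spare: reclaim one
    return frames

print(adjust_frames(0.15, 8))   # 9  (too many faults -> add a frame)
print(adjust_frames(0.01, 8))   # 7  (very few faults -> remove a frame)
print(adjust_frames(0.05, 8))   # 8  (within bounds -> unchanged)
```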
