Operating System 3rd Unit Notes

This document discusses memory management in computer systems, detailing concepts such as main memory, logical vs physical address space, and various memory allocation techniques including contiguous and non-contiguous allocation. It also covers advanced topics like paging, segmentation, and virtual memory, explaining their advantages and disadvantages. The document emphasizes the importance of efficient memory management for optimal CPU utilization and process execution.


Unit 2

Memory Management
 Memory consists of a large array of words or bytes, each with its own
address. The CPU fetches instructions from memory according to the value of
the program counter. These instructions may cause additional loading from and
storing to specific memory addresses.
 The memory unit sees only a stream of memory addresses; it does not know
how they are generated.
 A program must be brought into memory and placed within a process for it to
be run.
 Input queue – the collection of processes on the disk that are waiting to be
brought into memory for execution.

 The term Memory can be defined as a collection of data in a specific format.


It is used to store instructions and processed data. The memory comprises a large
array or group of words or bytes, each with its own location. The primary motive
of a computer system is to execute programs. These programs, along with the
information they access, should be in the main memory during execution. The
CPU fetches instructions from memory according to the value of the program
counter.

 To achieve a degree of multiprogramming and proper utilization of


memory, memory management is important. Many memory management
methods exist, reflecting various approaches, and the effectiveness of each
algorithm depends on the situation.

 What is Main Memory:

 The main memory is central to the operation of a modern computer. Main


Memory is a large array of words or bytes, ranging in size from hundreds of
thousands to billions. Main memory is a repository of rapidly available information
shared by the CPU and I/O devices. Main memory is the place where programs and
information are kept when the processor is effectively utilizing them. Main
memory is associated with the processor, so moving instructions and information
into and out of the processor is extremely fast. Main memory is also known as
RAM (Random Access Memory). This memory is volatile: RAM loses its
data when a power interruption occurs.

 What is Memory Management :

 In a multiprogramming computer, the operating system resides in a part of


memory and the rest is used by multiple processes. The task of subdividing the
memory among different processes is called memory management. Memory
management is a method in the operating system to manage operations between
main memory and disk during process execution. The main aim of memory
management is to achieve efficient utilization of memory.

 Why Memory Management is required:

 To allocate and de-allocate memory before and after process execution.

 To keep track of the memory space used by processes.

 To minimize fragmentation issues.

 To ensure proper utilization of main memory.

 To maintain data integrity while a process executes.

Logical Versus Physical Address Space


 The concept of a logical address space that is bound to a separate physical
address space is central to proper memory management.
o Logical address – address generated by the CPU; also referred to as
virtual address.
o Physical address – address seen by the memory unit.
 The set of all logical addresses generated by a program is a logical address
space; the set of all physical addresses corresponding to these logical addresses is
a physical address space.
 Logical and physical addresses are the same in compile-time and load-time
address-binding schemes; logical (virtual) and physical addresses differ in
execution-time address-binding scheme.
 The run-time mapping from virtual to physical addresses is done by a hardware
device called the memory management unit (MMU).

 This mapping requires hardware support. The base register is now called a
relocation register. The value in the relocation register is added to every address
generated by a user process at the time it is sent to memory.
 The user program never sees the real physical addresses. The program can
create a pointer to location 346, store it in memory, manipulate it and compare it to
other addresses. The user program deals with logical addresses. The memory
mapping hardware converts logical addresses into physical addresses. The final
location of a referenced memory address is not determined until the reference is
made.
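The relocation-register scheme described above can be sketched in Python; the register values below are illustrative, not from the text:

```python
# Sketch of relocation-register address translation. The MMU adds the
# relocation (base) register to every logical address the CPU generates,
# producing the physical address sent to the memory unit. A limit register
# guards against references outside the process's space.

def translate(logical_address, relocation_register, limit_register):
    """Map a logical address to a physical address, trapping on overflow."""
    if logical_address >= limit_register:
        raise MemoryError("trap: logical address beyond limit")
    return logical_address + relocation_register

# A process loaded at physical address 14000: the pointer to logical
# location 346 mentioned above maps to physical address 14346.
print(translate(346, 14000, 3000))  # 14346
```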

Swapping
 A process can be swapped temporarily out of memory to a backing store, and
then brought back into memory for continued execution. For example, assume a
multiprogramming environment with a round robin CPU scheduling algorithm.
When a quantum expires, the memory manager will start to swap out the process
that just finished, and swap another process into the memory space that has been
freed. In the meantime, the CPU scheduler will allocate a time slice to some other
process in memory. When each process finishes its quantum, it will be swapped
with another process. Ideally, the memory manager can swap processes fast enough
that some processes will be in memory, ready to execute, when the CPU scheduler
wants to reschedule the CPU. The quantum must also be sufficiently large that
reasonable amounts of computing are done between swaps.
 Roll out, roll in – a swapping variant used for priority-based scheduling
algorithms. If a higher-priority process arrives and wants service, the memory
manager can swap out the lower-priority process so that it can load and execute the
higher-priority process; when the higher-priority process finishes, the lower-priority
process can be swapped back in and continued. This variant is sometimes called
roll out, roll in. Normally a process that is swapped out will be swapped
back into the same memory space that it occupied previously. This restriction is
dictated by the method of address binding: if binding is done at assembly or load
time, the process cannot be moved to a different location. If execution-time
binding is being used, then a process can be swapped into a different memory space,
because the physical addresses are computed during execution time.
 Modified versions of swapping are found on many systems (i.e., UNIX, Linux,
and Windows).

Contiguous Memory Allocation :


The main memory must accommodate both the operating system and the various user
processes. Therefore, the allocation of memory becomes an important task in the
operating system. The memory is usually divided into two partitions: one for the
resident operating system and one for the user processes. We normally need several
user processes to reside in memory simultaneously. Therefore, we need to consider
how to allocate available memory to the processes that are in the input queue
waiting to be brought into memory. In contiguous memory allocation, each process is
contained in a single contiguous section of memory.
Advantages
1. It is simple to implement.
2. We get excellent read performance.
3. It supports random access into files.

Disadvantages
1. The disk will become fragmented.
2. It may be difficult to have a file grow.

Memory allocation:
To gain proper memory utilization, memory must be allocated in an efficient
manner. One of the simplest methods for allocating memory is to divide memory
into several fixed-sized partitions, where each partition contains exactly one process.
Thus, the degree of multiprogramming is bound by the number of partitions.
Multiple partition allocation: In this method, a process is selected from the input
queue and loaded into a free partition. When the process terminates, the partition
becomes available for other processes.

Variable partition allocation: In this method, the operating system maintains a table
that indicates which parts of memory are available and which are occupied by
processes. Initially, all memory is available for user processes and is considered
one large block of available memory, known as a "hole".
When a process arrives and needs memory, we search for a hole that is large
enough to store this process. If a suitable hole is found, we allocate only as much
memory as is needed, keeping the rest available to satisfy future requests. While
allocating memory, the dynamic storage-allocation problem arises: how to
satisfy a request of size n from a list of free holes.
There are some solutions to this problem:

o First-fit: Allocate the first hole that is big enough.

o Best-fit: Allocate the smallest hole that is big enough; must search the entire
list, unless it is ordered by size.

o Worst-fit: Allocate the largest hole; must also search the entire list.
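The three placement strategies can be sketched as follows; the hole list and request size are illustrative:

```python
# Choosing a hole for a request of size n under the three placement
# strategies described above. Holes are (start_address, size) pairs.

def first_fit(holes, n):
    """Return the start of the first hole big enough, or None."""
    for start, size in holes:
        if size >= n:
            return start
    return None

def best_fit(holes, n):
    """Return the start of the smallest hole big enough, or None."""
    fitting = [h for h in holes if h[1] >= n]
    return min(fitting, key=lambda h: h[1])[0] if fitting else None

def worst_fit(holes, n):
    """Return the start of the largest hole, if any fits, else None."""
    fitting = [h for h in holes if h[1] >= n]
    return max(fitting, key=lambda h: h[1])[0] if fitting else None

holes = [(0, 100), (200, 500), (800, 200), (1200, 300)]
print(first_fit(holes, 150))  # 200 (first hole big enough)
print(best_fit(holes, 150))   # 800 (smallest hole big enough: 200 units)
print(worst_fit(holes, 150))  # 200 (largest hole: 500 units)
```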
Fragmentation
 External Fragmentation – total memory space exists to satisfy a request, but
it is not contiguous.
 Internal Fragmentation – allocated memory may be slightly larger than
requested memory; this size difference is memory internal to a partition, but not
being used.
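A small worked example of both kinds of fragmentation (all sizes illustrative):

```python
# Internal fragmentation: a process of 18,462 bytes placed in a fixed
# 20,000-byte partition leaves unused memory inside the partition.
partition_size = 20000
process_size = 18462
internal_frag = partition_size - process_size
print(internal_frag)  # 1538 bytes wasted inside the partition

# External fragmentation: enough total free memory exists (300 + 260 =
# 560 KB) to satisfy a 500 KB request, but no single contiguous hole
# is large enough.
holes_kb = [300, 260]
request_kb = 500
print(sum(holes_kb) >= request_kb)             # True: total space exists
print(any(h >= request_kb for h in holes_kb))  # False: not contiguous
```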
Non-Contiguous Allocation:-
Linked list allocation solves the problems of contiguous allocation. In linked list
allocation, each file is considered as a linked list of disk blocks. The disk
blocks allocated to a particular file need not be contiguous on the disk. Each
disk block allocated to a file contains a pointer that points to the next disk block
allocated to the same file.

Advantages
 There is no external fragmentation with linked allocation.
 Any free block can be utilized in order to satisfy the file block requests.
 File can continue to grow as long as the free blocks are available.
 Directory entry will only contain the starting block address.
Disadvantages
 Slower access: reading the i-th block requires traversing the pointers of all blocks before it.
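A minimal sketch of the pointer-chasing that linked allocation implies; the block numbers are illustrative:

```python
# Linked-list (non-contiguous) allocation: each disk block stores a pointer
# to the next block of the same file, so reaching block i means following
# i pointers from the starting block.

# next_block[b] is the block that follows b in the file (None = end of file)
next_block = {9: 16, 16: 1, 1: 10, 10: None}
start_block = 9  # the only thing the directory entry needs to store

def file_blocks(start):
    """Traverse the chain from the directory's starting block."""
    blocks, b = [], start
    while b is not None:
        blocks.append(b)
        b = next_block[b]
    return blocks

print(file_blocks(start_block))  # [9, 16, 1, 10]
```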
Paging

In Operating Systems, Paging is a storage mechanism used to retrieve processes from


the secondary storage into the main memory in the form of pages.

The main idea behind paging is to divide each process into pages. The
main memory will likewise be divided into frames.

One page of the process is to be stored in one of the frames of the memory. The pages
can be stored at the different locations of the memory but the priority is always to find
the contiguous frames or holes.
Pages of the process are brought into the main memory only when they are required;
otherwise they reside in the secondary memory.

Different operating systems define different frame sizes, but all frames must
be of equal size. Since pages are mapped to frames in paging, the
page size needs to be the same as the frame size.

Example
Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main
memory will be divided into a collection of 16 frames of 1 KB each.

There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process
is divided into pages of 1 KB each so that one page can be stored in one frame.

Initially, all the frames are empty, so the pages of the processes will be stored in
a contiguous way.

Frames, pages and the mapping between the two is shown in the image below.
Let us consider that P2 and P4 are moved to a waiting state after some time. Now 8
frames become empty, and other pages can be loaded in their place.
The process P5, of size 8 KB (8 pages), is waiting inside the ready queue.

We now have 8 noncontiguous frames available in memory, and
paging provides the flexibility of storing a process at different places. Therefore,
we can load the pages of process P5 in the place of P2 and P4.
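The page-number/offset split used in this example can be sketched in Python; the page-table contents below are illustrative, not taken from the figures:

```python
# Paging address translation for the example above: 1 KB pages, so a
# logical address splits into a page number (high bits) and an offset
# (low 10 bits). The page table maps page numbers to frame numbers.

PAGE_SIZE = 1024  # 1 KB pages/frames, as in the example

def translate(logical_address, page_table):
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset

# Hypothetical page table for a 4-page process whose pages landed in
# the noncontiguous frames 3, 7, 2 and 9.
page_table = {0: 3, 1: 7, 2: 2, 3: 9}
print(translate(50, page_table))    # page 0, offset 50 -> frame 3 -> 3122
print(translate(1100, page_table))  # page 1, offset 76 -> frame 7 -> 7244
```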
Segmentation
In Operating Systems, Segmentation is a memory management technique in which the
memory is divided into the variable size parts. Each part is known as a segment which
can be allocated to a process.

The details about each segment are stored in a table called a segment table. Segment
table is stored in one (or many) of the segments.

Segment table contains mainly two information about segment:

1. Base: It is the base address of the segment


2. Limit: It is the length of the segment.

Why Segmentation is required?


Till now, we were using paging as our main memory management technique. Paging
is closer to the operating system than to the user. It divides all the processes
into pages regardless of the fact that a process can have related
parts or functions which need to be loaded in the same page.

The operating system doesn't care about the user's view of the process. It may divide the
same function into different pages, and those pages may or may not be loaded at the
same time into memory. This decreases the efficiency of the system.

It is better to have segmentation, which divides the process into segments. Each
segment contains the same type of functions; for example, the main function can be
included in one segment and the library functions can be included in another
segment.

Translation of logical address into physical address by segment table

The CPU generates a logical address which contains two parts:

1. Segment Number
2. Offset

For Example:

Suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for
the segment offset. Then the maximum segment size is 4096 and the maximum number
of segments that can be referred to is 16.
When a program is loaded into memory, the segmentation system tries to locate space
that is large enough to hold the first segment of the process; space information is
obtained from the free list maintained by the memory manager. Then it tries to locate
space for the other segments. Once adequate space is located for all the segments, it loads
them into their respective areas.

The operating system also generates a segment map table for each program.

With the help of segment map tables and hardware assistance, the operating system
can easily translate a logical address into physical address on execution of a program.

The Segment number is mapped to the segment table. The limit of the respective
segment is compared with the offset. If the offset is less than the limit then the address
is valid otherwise it throws an error as the address is invalid.

In the case of valid addresses, the base address of the segment is added to the offset
to get the physical address of the actual word in the main memory.

The above figure shows how address translation is done in case of segmentation.
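The check-limit-then-add-base procedure can be sketched as follows, using the 16-bit split from the example above; the segment-table contents are illustrative:

```python
# Segment-table address translation: 4-bit segment number, 12-bit offset.
# The offset is compared with the segment's limit; if valid, it is added
# to the segment's base to form the physical address.

OFFSET_BITS = 12

def translate(logical_address, segment_table):
    segment = logical_address >> OFFSET_BITS
    offset = logical_address & ((1 << OFFSET_BITS) - 1)
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

# segment -> (base, limit); values chosen for illustration
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}
print(translate((2 << 12) | 53, segment_table))  # segment 2, offset 53 -> 4353
```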
Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.

Disadvantages
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory management algorithms.

Segmentation with Paging


Pure segmentation is not very popular and is not used in many operating
systems. However, segmentation can be combined with paging to get the best
features of both techniques.

In segmented paging, the process is divided into variable-size segments, which
are further divided into fixed-size pages.

1. Pages are smaller than segments.


2. Each Segment has a page table which means every program has multiple page tables.
3. The logical address is represented as Segment Number (base address), Page number
and page offset.

Segment Number → It points to the appropriate Segment Number.

Page Number → It Points to the exact page within the segment

Page Offset → Used as an offset within the page frame

Each page table contains information about every page of the segment, and
the segment table contains information about every segment. Each segment table
entry points to a page table, and every page table entry is mapped to one of the
pages within the segment.
Translation of logical address to physical address
The CPU generates a logical address which is divided into two parts: segment number
and segment offset. The segment offset must be less than the segment limit. The offset
is further divided into a page number and a page offset. To locate the exact entry
in the page table, the page number is added to the page table base.

The actual frame number with the page offset is mapped to the main memory to get
the desired word in the page of the certain segment of the process.
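The two-level lookup can be sketched as follows; all table contents are illustrative:

```python
# Segmented-paging translation: the segment number selects that segment's
# page table, the page number indexes into it, and the page offset is
# appended to the frame number to form the physical address.

PAGE_SIZE = 1024

def translate(segment, page, offset, segment_table):
    page_table = segment_table[segment]  # each segment has its own page table
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

segment_table = {
    0: {0: 5, 1: 9},  # segment 0's page table: pages 0, 1 -> frames 5, 9
    1: {0: 2},        # segment 1's page table: page 0 -> frame 2
}
print(translate(0, 1, 100, segment_table))  # frame 9 -> 9*1024 + 100 = 9316
```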
Advantages of Segmented Paging
1. It reduces memory usage.
2. Page table size is limited by the segment size.
3. Segment table has only one entry corresponding to one actual segment.
4. External Fragmentation is not there.
5. It simplifies memory allocation.

Disadvantages of Segmented Paging


1. Internal Fragmentation will be there.
2. The complexity level will be much higher as compare to paging.
3. Page Tables need to be contiguously stored in the memory.
Virtual Memory
Virtual Memory is a storage scheme that provides user an illusion of having a very big
main memory. This is done by treating a part of secondary memory as the main
memory.

In this scheme, the user can load processes bigger than the available main
memory, under the illusion that enough memory is available to load the process.

Instead of loading one big process in the main memory, the Operating System loads
the different parts of more than one process in the main memory.

By doing this, the degree of multiprogramming will be increased and therefore, the
CPU utilization will also be increased.

How Virtual Memory Works?


In the modern world, virtual memory has become quite common. In this
scheme, whenever some pages need to be loaded into the main memory for
execution and memory is not available for that many pages, then
instead of stopping the pages from entering the main memory, the OS searches for
the RAM areas that are least recently used or not referenced, and
copies them into the secondary memory to make space for the new pages in the main
memory.

Since all of this happens automatically, it makes the computer feel
like it has unlimited RAM.

Demand Paging
Demand Paging is a popular method of virtual memory management. In demand
paging, the pages of a process which are least used, get stored in the secondary
memory.

A page is copied to the main memory when its demand is made or page fault occurs.
There are various page replacement algorithms which are used to determine the pages
which will be replaced. We will discuss each one of them later in detail.

Snapshot of a virtual memory management system


Let us assume 2 processes, P1 and P2, each containing 4 pages. Each page size is 1 KB.
The main memory contains 8 frames of 1 KB each. The OS resides in the first two
partitions. In the third partition, the 1st page of P1 is stored, and the other frames are
also shown filled with the different pages of processes in the main memory.

The page tables of both the processes are 1 KB each and therefore each can fit in
one frame. The page tables of both the processes contain various information
that is also shown in the image.

The CPU contains a register which holds the base address of the page table: 5 in
the case of P1 and 7 in the case of P2. This page table base address is added to
the page number of the logical address to access the actual
corresponding entry.

Advantages of Virtual Memory


1. The degree of multiprogramming will be increased.
2. Users can run large applications with less physical RAM.
3. There is no need to buy more RAM.

Disadvantages of Virtual Memory


1. The system becomes slower since swapping takes time.
2. It takes more time to switch between applications.
3. The user will have less hard disk space available for use.
Demand Paging
According to the concept of virtual memory, in order to execute a process, only a
part of the process needs to be present in the main memory, which means that only a
few pages will be present in the main memory at any time.

However, deciding which pages need to be kept in the main memory and which need
to be kept in the secondary memory is difficult, because we cannot say in
advance that a process will require a particular page at a particular time.

Therefore, to overcome this problem, a concept called demand paging is
introduced. It suggests keeping all pages in the secondary memory until
they are required. In other words, do not load any page into the main memory
until it is required.

Whenever any page is referred to for the first time, that page will be found in the
secondary memory and must be brought in.

After that, it may or may not be present in the main memory depending upon the page
replacement algorithm which will be covered later in this tutorial.

What is a Page Fault?


If the referred page is not present in the main memory, then there will be a miss;
this is called a page miss or page fault.

The CPU has to access the missed page from the secondary memory. If the number of
page faults is very high, then the effective access time of the system will become
very high.

What is Thrashing?
If the number of page faults is equal to the number of referred pages, or the page
faults are so high that the CPU remains busy just reading pages from the secondary
memory, then the effective access time approaches the time taken by the CPU to read
one word from the secondary memory, which is very high. This situation is called
thrashing.

If the page fault rate is PF (as a fraction), the time taken in getting a page from
the secondary memory and restarting the process is S (service time), and the memory
access time is ma, then the effective access time can be given as:

EAT = PF × S + (1 − PF) × ma
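A worked instance of the formula, with illustrative timings:

```python
# Effective access time: EAT = PF * S + (1 - PF) * ma, where PF is the
# page-fault rate as a fraction, S the fault service time, ma the memory
# access time (all times in the same unit, here nanoseconds).

def effective_access_time(pf, service_time_ns, memory_access_ns):
    """pf is the page-fault rate as a fraction (e.g. 0.001 = 0.1%)."""
    return pf * service_time_ns + (1 - pf) * memory_access_ns

# Memory access 200 ns, fault service time 8 ms (8,000,000 ns): even a
# 0.1% fault rate slows the effective access time to about 8200 ns.
print(effective_access_time(0.001, 8_000_000, 200))  # 8199.8
```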


Page Replacement Algorithms
The page replacement algorithm decides which memory page is to be replaced. The
process of replacement is sometimes called swap out or write to disk. Page
replacement is done when the requested page is not found in the main memory (page
fault).

There are two main aspects of virtual memory, Frame allocation and Page
Replacement. It is very important to have the optimal frame allocation and page
replacement algorithm. Frame allocation is all about how many frames are to be
allocated to the process while the page replacement is all about determining the page
number which needs to be replaced in order to make space for the requested page.

What If the algorithm is not optimal?


1. If the number of frames allocated to a process is not sufficient,
then there can be a problem of thrashing. Due to the lack of frames, most of the pages
will not be resident in the main memory, and therefore more page faults will occur.

However, if the OS allocates more frames to the process than needed, then there can
be internal fragmentation.

2. If the page replacement algorithm is not optimal, then there will also be the problem
of thrashing. If the pages that are replaced are referred to again in the near future,
then there will be more swap-ins and swap-outs, and the OS has to perform more
replacements than usual, which degrades performance.

Therefore, the task of an optimal page replacement algorithm is to choose the page
which can limit the thrashing.
Types of Page Replacement Algorithms
There are various page replacement algorithms. Each algorithm has a different method
by which the pages can be replaced.

1. Optimal page replacement algorithm → this algorithm replaces the page which will
not be referred to for the longest time in the future. Although it cannot be implemented
in practice, it is used as a benchmark: other algorithms are compared to it in terms of
optimality.
2. Least recently used (LRU) page replacement algorithm → this algorithm replaces the
page which has not been referred to for the longest time. It mirrors the
optimal algorithm, except that we look at the past instead of the future.
3. FIFO → in this algorithm, a queue is maintained. The page which was assigned a
frame first will be replaced first. In other words, the page at the front of the
queue (the oldest) is replaced on every page fault.

Numerical on Optimal, LRU and FIFO


Q. Consider a reference string: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2. The
number of frames in the memory is 3. Find out the number
of page faults with respect to:
1. Optimal Page Replacement Algorithm
2. FIFO Page Replacement Algorithm
3. LRU Page Replacement Algorithm

Optimal Page Replacement Algorithm

Number of Page Faults in Optimal Page Replacement Algorithm = 5


LRU Page Replacement Algorithm

Number of Page Faults in LRU = 6

FIFO Page Replacement Algorithm

Number of Page Faults in FIFO = 6
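These counts can be reproduced with a small simulation of the three algorithms on the reference string above (3 frames):

```python
# Page-fault counts for FIFO, LRU and Optimal on a reference string.

def fifo(refs, frames):
    memory, queue, faults = set(), [], 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                victim = queue.pop(0)  # oldest page leaves first
                memory.remove(victim)
            memory.add(p)
            queue.append(p)
    return faults

def lru(refs, frames):
    memory, faults = [], 0  # list ordered by recency, most recent last
    for p in refs:
        if p in memory:
            memory.remove(p)
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)  # least recently used leaves
        memory.append(p)
    return faults

def optimal(refs, frames):
    memory, faults = [], 0
    for i, p in enumerate(refs):
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                future = refs[i + 1:]
                # evict the page not needed for the longest time (or never)
                victim = max(memory, key=lambda q: future.index(q)
                             if q in future else len(future) + 1)
                memory.remove(victim)
            memory.append(p)
    return faults

refs = [4, 7, 6, 1, 7, 6, 1, 2, 7, 2]
print(optimal(refs, 3), lru(refs, 3), fifo(refs, 3))  # 5 6 6
```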
