Operating System 3rd Unit Notes
Memory Management
Memory consists of a large array of words or bytes, each with its own
address. The CPU fetches instructions from memory according to the value of
the program counter. These instructions may cause additional loading from and
storing to specific memory addresses.
The memory unit sees only a stream of memory addresses; it does not know how
they are generated.
A program must be brought into memory and placed within a process for it to
be run.
Input queue – collection of processes on the disk that are waiting to be
brought into memory for execution.
This method requires hardware support slightly different from the basic hardware
configuration. The base register is now called a relocation register. The value in the
relocation register is added to every address generated by a user process at the time
it is sent to memory.
The user program never sees the real physical addresses. The program can
create a pointer to location 346, store it in memory, manipulate it and compare it to
other addresses. The user program deals with logical addresses. The memory
mapping hardware converts logical addresses into physical addresses. The final
location of a referenced memory address is not determined until the reference is
made.
Swapping
A process can be swapped temporarily out of memory to a backing store, and
then brought back into memory for continued execution. For example, assume a
multiprogramming environment with a round robin CPU scheduling algorithm.
When a quantum expires, the memory manager will start to swap out the process
that just finished, and to swap in another process to the memory space that has been
freed. In the meantime, the CPU scheduler will allocate a time slice to some other
process in memory. When each process finishes its quantum, it will be swapped
with another process. Ideally, the memory manager can swap processes fast enough
that some processes will be in memory, ready to execute, when the CPU scheduler
wants to reschedule the CPU. The quantum must also be sufficiently large that
reasonable amounts of computing are done between swaps.
Roll out, roll in – a swapping variant used for priority-based scheduling
algorithms. If a higher-priority process arrives and wants service, the memory
manager can swap out a lower-priority process so that the higher-priority process
can be loaded and executed. When the higher-priority process finishes, the
lower-priority process can be swapped back in and continued. This variant is
sometimes called roll out, roll in. Normally, a process that is swapped out will be
swapped back into the same memory space that it occupied previously. This restriction
is dictated by the method of address binding: if binding is done at assembly or load
time, the process cannot be moved to a different location. If execution-time binding
is being used, then a process can be swapped into a different memory space,
because the physical addresses are computed during execution time.
Modified versions of swapping are found on many systems (i.e., UNIX, Linux,
and Windows).
Disadvantages
1. The disk will become fragmented.
2. It may be difficult to have a file grow.
Memory allocation:
To gain proper memory utilization, memory must be allocated in an efficient
manner. One of the simplest methods for allocating memory is to divide memory
into several fixed-sized partitions, where each partition contains exactly one
process. Thus, the degree of multiprogramming is bound by the number of partitions.
Fixed partition allocation: In this method, a process is selected from the input
queue and loaded into a free partition. When the process terminates, the partition
becomes available for other processes.
Variable partition allocation: In this method, the operating system maintains a table
that indicates which parts of memory are available and which are occupied by
processes. Initially, all memory is available for user processes and is considered
one large block of available memory, known as a "hole". When a process arrives and
needs memory, we search for a hole that is large enough to store the process. If one
is found, we allocate only as much memory as is needed, keeping the rest available
to satisfy future requests. Allocating memory this way raises the dynamic storage
allocation problem, which concerns how to satisfy a request of size n from a list
of free holes.
There are some solutions to this problem:
First fit: allocate the first hole that is big enough. Searching can start at the
beginning of the list of holes or where the previous first-fit search ended.
Best fit: allocate the smallest hole that is big enough. The entire list must be
searched, unless it is ordered by size.
Worst fit: allocate the largest hole. Again, the entire list must be searched,
unless it is sorted by size.
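The three fit strategies can be sketched as follows; this is a minimal illustration over a plain list of free-hole sizes, with hypothetical sizes and no hole splitting or coalescing.

```python
# Sketch of first/best/worst fit over a free-hole list (hole sizes in KB are
# illustrative; a real allocator would also split and merge holes).
def first_fit(holes, request):
    """Return the index of the first hole large enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole that still fits, or None."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Return the index of the largest hole, if it fits, or None."""
    size, i = max((size, i) for i, size in enumerate(holes))
    return i if size >= request else None

holes = [100, 500, 200, 300, 600]     # free hole sizes in KB
print(first_fit(holes, 212))          # -> 1 (the 500 KB hole)
print(best_fit(holes, 212))           # -> 3 (the 300 KB hole)
print(worst_fit(holes, 212))          # -> 4 (the 600 KB hole)
```

Note how the same 212 KB request lands in a different hole under each strategy.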
Advantages
There is no external fragmentation with linked allocation.
Any free block can be utilized in order to satisfy the file block requests.
File can continue to grow as long as the free blocks are available.
Directory entry will only contain the starting block address.
Disadvantages
Larger file access time.
Paging
The main idea behind paging is to divide each process into pages. The main
memory is likewise divided into frames.
One page of the process is stored in one of the frames of memory. The pages can be
stored at different locations in memory, although the priority is always to find
contiguous frames or holes.
Pages of a process are brought into main memory only when they are required;
otherwise they reside in secondary memory.
Different operating systems define different frame sizes, but the size of every frame
must be equal. Since pages are mapped to frames in paging, the page size must be
the same as the frame size.
Example
Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main
memory will be divided into a collection of 16 frames of 1 KB each.
There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process
is divided into pages of 1 KB each, so that one page can be stored in one frame.
Initially, all the frames are empty, so the pages of the processes will be stored
contiguously.
Frames, pages and the mapping between the two are shown in the image below.
Let us consider that P2 and P4 are moved to the waiting state after some time. Now 8
frames become empty, and other pages can be loaded in that empty space.
The process P5 of size 8 KB (8 pages) is waiting inside the ready queue.
We have 8 noncontiguous frames available in memory, and paging provides the
flexibility of storing a process at different places. Therefore, we can load the
pages of process P5 in place of P2 and P4.
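Paging address translation for the 1 KB page size above can be sketched as a page-number lookup plus an offset; the page table contents below are illustrative assumptions, not the mapping from the example.

```python
# Minimal sketch of paging address translation (1 KB pages, as in the example).
PAGE_SIZE = 1024  # bytes per page/frame

# Hypothetical page table for a 4 KB process: page number -> frame number.
page_table = {0: 3, 1: 7, 2: 0, 3: 12}

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE       # which page the address falls in
    offset = logical_addr % PAGE_SIZE      # position within that page
    frame = page_table[page]               # raises KeyError on an invalid page
    return frame * PAGE_SIZE + offset      # physical address

# Logical address 2100 lies in page 2 (offset 52), which maps to frame 0 here.
print(translate(2100))  # -> 52
```

The offset is copied through unchanged; only the page number is remapped to a frame number.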
Segmentation
In Operating Systems, Segmentation is a memory management technique in which the
memory is divided into the variable size parts. Each part is known as a segment which
can be allocated to a process.
The details about each segment are stored in a table called a segment table. Segment
table is stored in one (or many) of the segments.
Paging does not care about the user's view of the process: it may divide the same
function into different pages, and those pages may or may not be loaded into memory
at the same time. This decreases the efficiency of the system.
It is better to have segmentation which divides the process into the segments. Each
segment contains the same type of functions such as the main function can be
included in one segment and the library functions can be included in the other
segment.
The CPU generates a logical address which contains two parts:
1. Segment Number
2. Offset
For Example:
Suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for
the segment offset. Then the maximum segment size is 4096 bytes (2^12) and the
maximum number of segments that can be referred to is 16 (2^4).
When a program is loaded into memory, the segmentation system tries to locate space
that is large enough to hold the first segment of the process; space information is
obtained from the free list maintained by the memory manager. Then it tries to locate
space for the other segments. Once adequate space is located for all the segments, it
loads them into their respective areas.
The operating system also generates a segment map table for each program.
With the help of segment map tables and hardware assistance, the operating system
can easily translate a logical address into physical address on execution of a program.
The segment number is used as an index into the segment table. The limit of the
respective segment is compared with the offset. If the offset is less than the limit,
the address is valid; otherwise, an error is raised because the address is invalid.
In the case of valid addresses, the base address of the segment is added to the offset
to get the physical address of the actual word in the main memory.
The above figure shows how address translation is done in case of segmentation.
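The limit check and base addition just described can be sketched as follows; the (base, limit) pairs in the segment table are illustrative assumptions.

```python
# Sketch of segmentation address translation: check offset against the limit,
# then add the base. Segment table contents are illustrative.
segment_table = [
    (1400, 1000),  # segment 0: base 1400, limit 1000
    (6300,  400),  # segment 1
    (4300, 1100),  # segment 2
]

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                    # offset must be less than the limit
        raise ValueError("invalid address: offset beyond segment limit")
    return base + offset                   # valid: physical = base + offset

print(translate(2, 53))   # -> 4353 (4300 + 53)
```

An offset of, say, 1200 into segment 2 would exceed the limit of 1100 and raise the error instead of producing a physical address.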
Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.
Disadvantages
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory management algorithms.
Segmented Paging
In Segmented Paging, the main memory is divided into variable size segments which
are further divided into fixed size pages.
Each page table contains information about every page of its segment, while the
segment table contains information about every segment. Each segment table entry
points to a page table, and every page table entry is mapped to one of the pages
within the segment.
Translation of logical address to physical address
The CPU generates a logical address which is divided into two parts: segment number
and segment offset. The segment offset must be less than the segment limit. The
offset is further divided into a page number and a page offset. To find the exact
entry in the page table, the page number is added to the page table base address.
The actual frame number combined with the page offset is then used to access the
desired word in the page of the given segment of the process.
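The two-level lookup described above can be sketched as follows; the per-segment page tables, limits, and page size are all illustrative assumptions.

```python
# Sketch of segmented-paging translation: the segment number selects a page
# table, the page number indexes it, and the frame number is combined with
# the page offset. All table contents below are illustrative.
PAGE_SIZE = 1024

# segment number -> that segment's page table (page number -> frame number)
segment_page_tables = {
    0: {0: 5, 1: 9},
    1: {0: 2, 1: 11, 2: 7},
}
segment_limits = {0: 2 * PAGE_SIZE, 1: 3 * PAGE_SIZE}  # segment sizes in bytes

def translate(segment, seg_offset):
    if seg_offset >= segment_limits[segment]:     # offset must be within the segment
        raise ValueError("offset exceeds segment limit")
    page = seg_offset // PAGE_SIZE                # split offset into page number...
    page_offset = seg_offset % PAGE_SIZE          # ...and page offset
    frame = segment_page_tables[segment][page]    # page table lookup
    return frame * PAGE_SIZE + page_offset

# Segment 1, offset 1500 -> page 1, page offset 476 -> frame 11.
print(translate(1, 1500))  # -> 11740
```

Compared with pure segmentation, the segment's memory no longer needs to be contiguous; only the pages within it are fixed-size.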
Advantages of Segmented Paging
1. It reduces memory usage.
2. Page table size is limited by the segment size.
3. Segment table has only one entry corresponding to one actual segment.
4. External Fragmentation is not there.
5. It simplifies memory allocation.
Virtual Memory
In this scheme, the user can load processes bigger than the available main memory,
under the illusion that enough memory is available to load the whole process.
Instead of loading one big process in main memory, the operating system loads
different parts of more than one process in main memory.
By doing this, the degree of multiprogramming is increased and therefore CPU
utilization is also increased.
Since all of this happens automatically, it makes the computer feel as if it has
unlimited RAM.
Demand Paging
Demand Paging is a popular method of virtual memory management. In demand
paging, the pages of a process that are least used are kept in secondary
memory.
A page is copied to main memory when it is demanded, i.e., when a page fault occurs.
There are various page replacement algorithms which are used to determine the pages
which will be replaced. We will discuss each one of them later in detail.
The page tables of both processes are 1 KB in size each, and therefore each can fit
in one frame. The page tables of both processes contain various information that is
also shown in the image.
The CPU contains a register holding the base address of the page table, which is 5
in the case of P1 and 7 in the case of P2. This page table base address is added to
the page number of the logical address when accessing the actual corresponding
entry.
However, deciding which pages need to be kept in main memory and which in secondary
memory is difficult, because we cannot say in advance that a process will require a
particular page at a particular time.
Whenever a page is referred to for the first time, it will be found only in
secondary memory. After that, it may or may not be present in main memory,
depending on the page replacement algorithm, which will be covered later in this
tutorial.
The CPU has to access the missed page from secondary memory. If the number of page
faults is very high, the effective access time of the system becomes very high.
What is Thrashing?
If the number of page faults is equal to the number of referred pages, or the number
of page faults is so high that the CPU remains busy just reading pages from
secondary memory, then the effective access time approaches the time taken to read
one word from secondary memory, which is very high. This situation is called
thrashing.
If the page fault rate is p, the time taken to get a page from secondary memory and
restart the instruction is S (the service time), and the memory access time is ma,
then the effective access time can be given as:
Effective Access Time = (1 − p) × ma + p × S
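A quick numerical check of the effective access time, using illustrative figures (a 200 ns memory access and an 8 ms page-fault service time, both assumptions):

```python
# Effective access time: (1 - p) * ma + p * S, with p the page-fault rate.
def effective_access_time(p, ma, S):
    """p in [0, 1]; ma and S must be in the same time unit."""
    return (1 - p) * ma + p * S

ma = 200            # memory access time in nanoseconds (illustrative)
S  = 8_000_000      # page-fault service time: 8 ms in nanoseconds (illustrative)

# Even one fault per 1000 accesses dominates the effective access time.
print(effective_access_time(0.001, ma, S))
```

With these numbers the result is about 8200 ns, roughly 40 times the raw memory access time, which shows why a rising fault rate quickly leads to thrashing.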
There are two main aspects of virtual memory, Frame allocation and Page
Replacement. It is very important to have the optimal frame allocation and page
replacement algorithm. Frame allocation is all about how many frames are to be
allocated to the process while the page replacement is all about determining the page
number which needs to be replaced in order to make space for the requested page.
1. If the OS allocates more frames to a process than it needs, there can be internal
fragmentation.
2. If the page replacement algorithm is not optimal, there will also be the problem
of thrashing. If the pages that are replaced are referred to again in the near
future, there will be more swap-ins and swap-outs, and the OS has to perform more
replacements than usual, which degrades performance.
Therefore, the task of an optimal page replacement algorithm is to choose the page
which can limit the thrashing.
Types of Page Replacement Algorithms
There are various page replacement algorithms. Each algorithm has a different method
by which the pages can be replaced.
1. Optimal page replacement algorithm → this algorithm replaces the page which will
not be referred to for the longest time in the future. Although it is not practically
implementable, it can be used as a benchmark: other algorithms are compared to it in
terms of optimality.
2. Least recently used (LRU) page replacement algorithm → this algorithm replaces
the page which has not been referred to for the longest time. It is the opposite of
the optimal page replacement algorithm: we look at the past instead of the future.
3. FIFO → in this algorithm, a queue is maintained. The page which was assigned a
frame first will be replaced first. In other words, the page at the front of the
queue (the oldest page) is replaced on every page fault.
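The FIFO algorithm above can be sketched as a fault counter over a reference string; the reference string and frame count below are illustrative assumptions.

```python
from collections import deque

# Sketch of FIFO page replacement: the page resident longest is evicted.
def fifo_faults(refs, n_frames):
    frames = set()       # pages currently in memory
    queue = deque()      # arrival order; oldest page at the front
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:          # memory full: evict the oldest
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]   # illustrative reference string
print(fifo_faults(refs, 3))  # -> 10
```

An LRU variant would differ only in that a hit moves the page to the back of the queue, so eviction targets the least recently *used* page rather than the oldest arrival.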