ABES Engineering College, Ghaziabad
B. Tech Even Semester Sessional Test-3
Printed Pages:02
Session: 2023-24
Semester: 4
Course Code: BCS401 Roll No.:
Course Name: Operating System Time: 1 Hr 30 Min
Maximum Marks: 40
Instructions:
1. Attempt all sections.
2. If any data is missing, assume suitable values.
Solution
Q1 (a).
What is Thrashing and its cause? What steps are taken by the
system to eliminate this problem?
(2+3)
Solution:
Thrashing:
Thrashing can occur when there are too many processes running on a system and not enough
physical memory to accommodate them all. As a result, the operating system must constantly
swap pages between physical memory and the backing store. This leads to a significant
decrease in system performance, as the CPU spends more time swapping pages than actually
executing code.
Causes of Thrashing:
The main causes of thrashing in an operating system are:
1. High degree of multiprogramming: too many processes compete for the same set of frames.
2. Lack of frames: each process receives fewer frames than its working set needs.
3. Page replacement policy: a poor policy evicts pages that are about to be reused.
4. Insufficient physical memory: total RAM is too small for the combined working sets.
5. Inefficient memory management: frames are allocated without regard to process locality.
6. Poorly designed applications: programs with poor locality of reference touch many pages.
Steps to Eliminate Thrashing:
There are a number of ways to eliminate thrashing, including:
1. Increase the amount of physical memory
2. Reduce the degree of multiprogramming
3. Use an effective page replacement policy
4. Optimize applications
5. Monitor the system's resource usage
6. Use a system monitoring tool
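One concrete version of steps 2 and 5 is page-fault-frequency control: keep each process's fault rate inside an acceptable window by adjusting its frame allocation, and suspend a process when no free frames remain. The sketch below is illustrative; the thresholds and frame counts are assumptions, not values from any particular system.

```python
# Page-fault-frequency (PFF) sketch: keep a process's fault rate inside a window
# by growing or shrinking its frame allocation; suspend it if memory runs out.
# UPPER/LOWER thresholds and frame counts are illustrative assumptions.

UPPER, LOWER = 10.0, 2.0     # acceptable faults/sec window
free_frames = 0              # frames currently unallocated

def adjust(fault_rate, frames):
    """Return (new frame count, action) for one process."""
    global free_frames
    if fault_rate > UPPER:                 # faulting too often: needs more frames
        if free_frames > 0:
            free_frames -= 1
            return frames + 1, "grow"
        return frames, "suspend"           # no memory left: swap the process out
    if fault_rate < LOWER and frames > 1:  # faulting rarely: can give a frame back
        free_frames += 1
        return frames - 1, "shrink"
    return frames, "ok"
```

When every process faults heavily and no frames are free, the policy suspends a process, which is exactly "reduce the degree of multiprogramming" from the list above.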
Q1(b)
Write the difference between Internal and External
Fragmentation. Also, differentiate between Paging and
Segmentation. (2.5+2.5)
Solution:
Internal vs. External Fragmentation:
1. Internal fragmentation occurs when memory is allocated in fixed-size blocks and a
process is given more memory than it needs; the unused space inside the allocated block
is wasted. External fragmentation occurs with variable-size allocation when the total
free memory is sufficient but is split into non-contiguous holes, none large enough for
a request.
2. Internal fragmentation arises in fixed-size partitioning and paging; external
fragmentation arises in variable-size partitioning and segmentation.
3. Internal fragmentation is reduced by choosing a smaller block/frame size; external
fragmentation is removed by compaction, paging or segmentation.
Paging vs. Segmentation:
1. Paging divides a process into fixed-size pages; segmentation divides it into
variable-size segments.
2. Paging is invisible to the programmer; segmentation reflects the programmer's logical
view of the program (code, data, stack).
3. Paging may cause internal fragmentation; segmentation may cause external fragmentation.
4. Paging uses a page table (page number + offset); segmentation uses a segment table
(segment number + offset, with a limit check on the offset).
Q2(a)
Explain the Page Table entries with a suitable diagram. (5)
Solution:
A Page Table is a data structure used by the operating system to keep track of the mapping
between virtual addresses used by a process and the corresponding physical addresses in the
system’s memory.
A Page Table Entry (PTE) is an entry in the Page Table that stores information about a
particular page of memory. Each PTE contains information such as the physical address of the
page in memory, whether the page is present in memory or not, whether it is writable or not,
and other access permissions.
Information Stored in Page Table Entry
1. Frame Number – It gives the frame number in which the page you are looking for is
present. The number of bits required depends on the number of frames: number of frames =
size of physical memory / frame size, and bits for the frame number = log2(number of
frames). The frame field is also known as the address translation bits.
2. Present/Absent Bit: The present/absent bit says whether the page you are looking for
is present or absent in memory. It is set to 0 if the corresponding page is not in
memory; a reference to such a page causes a Page Fault, which the operating system uses
to support virtual memory. This bit is also known as the valid/invalid bit.
3. Protection Bit: The protection bit says what kind of protection you want on that page.
So, these bits are for the protection of the page frame (read, write, etc).
4. Referenced Bit: Referenced bit will say whether this page has been referred to in the
last clock cycle or not. It is set to 1 by hardware when the page is accessed.
5. Caching Enabled/Disabled: Sometimes we need fresh data rather than a stale cached
copy. Suppose the user is typing input at the keyboard and the program must act on
exactly what was typed; that information goes into main memory, so main memory holds the
latest data while a cached copy of the page could still show old data. For such pages
(for example, memory-mapped device registers), caching must be disabled so that the CPU
always sees the most recent value. This bit therefore enables or disables caching of the
page.
6. Modified Bit: The modified bit says whether the page has been written to since it was
loaded. If a page has been modified, then when it is replaced its contents must be
written back to the disk; if it is unmodified, the copy on disk is still valid and the
write-back can be skipped. The bit is set to 1 by hardware on any write access to the
page. This bit is also called the Dirty bit.
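The fields above can be illustrated by packing and unpacking a page table entry as a bit field. The layout below (a 20-bit frame number, as for 4 GB of physical memory with 4 KB frames, plus one bit per flag) is an assumption for illustration, not any real architecture's format:

```python
# Sketch of a 32-bit PTE: [ frame number | cache-disable | R | M | protection | P ]
# Field positions and widths are illustrative assumptions.

FRAME_BITS = 20   # e.g. 4 GB physical memory / 4 KB frames = 2^20 frames -> 20 bits

def make_pte(frame, present, protection, modified=0, referenced=0, cache_disabled=0):
    """Pack the PTE fields described above into a single integer."""
    assert frame < (1 << FRAME_BITS)
    pte = frame << 12                 # frame number in the high bits
    pte |= (cache_disabled & 1) << 5  # caching enabled/disabled bit
    pte |= (referenced & 1) << 4      # referenced bit
    pte |= (modified & 1) << 3        # modified (dirty) bit
    pte |= (protection & 0b11) << 1   # protection bits (0b01 = read, 0b11 = read/write)
    pte |= present & 1                # present/absent (valid/invalid) bit
    return pte

def pte_present(pte):
    return pte & 1

def pte_frame(pte):
    return pte >> 12

# A present, writable page mapped to frame 42:
entry = make_pte(frame=42, present=1, protection=0b11)
```

Hardware performs the equivalent unpacking on every memory reference: extract the present bit, trap to the OS if it is 0, otherwise combine the frame number with the page offset.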
Q2(b)
Write short note on:
i. Demand Paging
ii. Bare Machine
iii. Resident Monitor
iv. Translation lookaside buffer (1+1+1+2)
Solution:
Demand Paging:
Demand Paging is a technique in which a page is usually brought into the main memory only
when it is needed or demanded by the CPU. Initially, only those pages are loaded that are
required by the process immediately. Those pages that are never accessed are thus never loaded
into the physical memory.
Bare Machine:
A bare machine is hardware that executes programs on the processor directly, without an
operating system. Normally we assume that no process can run without an operating system,
but on a bare machine it can. Before operating systems were developed, instructions were
executed directly on the hardware with no intervening software. The main drawback was
that a bare machine accepts instructions only in machine language, so only people with
sufficient knowledge of the computer field could operate it. After the development of
operating systems, the bare machine came to be regarded as inefficient.
Resident Monitor:
The resident monitor works like a primitive operating system that controls execution and
performs all the necessary functions. It also acts as a job sequencer: it sequences the
jobs and sends them to the processor. After scheduling, the resident monitor loads the
programs one by one into main memory according to their sequence. Its most important
property is that there is no idle gap between the end of one program's execution and the
start of the next, so processing is faster.
The Resident monitors are divided into 4 parts as:
1. Control Language Interpreter
2. Loader
3. Device Driver
4. Interrupt Processing
Translation Lookaside Buffer (TLB):
The Translation Lookaside Buffer (TLB) in an operating system is a hardware component that
serves as a cache for the virtual-to-physical address translation process. It stores a subset of
recently used virtual memory page mappings, making it faster to translate virtual addresses
generated by programs into corresponding physical addresses in main memory (RAM). When
a program accesses memory, the TLB is consulted, and if it contains the necessary translation,
this results in a TLB hit, which significantly speeds up memory access.
Q3(a)
Consider the given references to the following pages by a program:
1,2,3,4,2,1,6,5,2,1,2,3,6,7,2,3,1,6,2,3
How many page faults will occur if the program has three-page
frames available to it?
i. FIFO replacement
ii. LRU replacement
iii. Optimal replacement (3+3+4)
Solution:
Reference string: 1,2,3,4,2,1,6,5,2,1,2,3,6,7,2,3,1,6,2,3 with 3 page frames. Tracing
each reference against the frame contents:
i. FIFO replacement: only the references to page 2 at positions 5 and 11 are hits; every
other reference faults. Page faults = 18.
ii. LRU replacement: the hits fall at the same two positions, so page faults = 18.
iii. Optimal replacement: evicting the page whose next use lies farthest in the future
yields hits on 9 of the 20 references. Page faults = 11.
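The page-fault counts for the three policies can be computed with a short simulation of the given reference string:

```python
def fifo_faults(refs, nframes):
    frames, faults = [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict the oldest-loaded page
            frames.append(p)
    return faults

def lru_faults(refs, nframes):
    frames, faults = [], 0             # kept in LRU -> MRU order
    for p in refs:
        if p in frames:
            frames.remove(p)           # refresh: move to most-recently-used end
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict the least recently used page
        frames.append(p)
    return faults

def optimal_faults(refs, nframes):
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                # Evict the page whose next use is farthest in the future
                # (pages never used again are evicted first).
                future = refs[i + 1:]
                victim = max(frames, key=lambda q: future.index(q)
                             if q in future else len(future))
                frames.remove(victim)
            frames.append(p)
    return faults

refs = [1,2,3,4,2,1,6,5,2,1,2,3,6,7,2,3,1,6,2,3]
```

Running all three with 3 frames reproduces the counts 18, 18 and 11.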
Q3(b)
Given memory partitions of 100 KB, 500 KB, 200 KB, 300 KB and 600
KB (in order), how would each of the first fit, best fit and worst
fit algorithms place processes of 212 KB, 417 KB, 112 KB and
426 KB (in order)? Which algorithm makes the most
efficient use of memory? Explain with all required steps, including
a diagram. (3+3+3+1)
Solution:
First Fit:
In the first fit, a partition is allocated which is first sufficient from the top of Main Memory.
212 KB is put in a 500 K partition. 212 KB => 500 KB partition, leaves a 288 KB
partition
417 KB is put in a 600 K partition. 417 KB => 600 KB partition, leaves a 183 KB
partition
112 KB is put in a 288 K partition (new partition 288 KB = 500 KB - 212 KB). 112
KB => 288 KB partition, leaves a 176 KB partition
426 KB must wait: no remaining partition is large enough to hold it.
Best-fit:
Allocate the process to the smallest sufficient partition among the free available
partitions.
212 KB is put in a 300 KB partition. 212 KB => 300 KB, leaving a 88 KB partition
417 KB is put in a 500 KB partition. 417 KB => 500 KB, leaving a 83 KB partition
112 KB is put in a 200 KB partition. 112 KB => 200 KB, leaving a 88 KB partition
426 KB is put in a 600 KB partition. 426 KB => 600 KB, leaving a 174 KB partition
Worst-fit:
Allocate the process to the largest sufficient partition among the free available
partitions in main memory.
212 KB is put in a 600 KB partition. 212 KB => 600 KB, leaving a 388 KB partition
417 KB is put in a 500 KB partition. 417 KB => 500 KB, leaving a 83 KB partition
112 KB is put in a 388 KB partition. 112 KB => 388 KB, leaving a 276 KB partition
426 KB must wait: no remaining partition is large enough to hold it.
In this problem, the Best-Fit algorithm performed best among the three, because it was
the only algorithm that satisfied all four memory requests.
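The three placements above can be verified with a short simulation (partition sizes and request order taken from the question; each result lists the partition index chosen for each request, with None meaning the request must wait):

```python
def place(parts, reqs, choose):
    """Allocate each request into the partition picked by `choose`, or None."""
    parts = parts[:]                          # remaining free space per partition
    placed = []
    for r in reqs:
        fits = [i for i, free in enumerate(parts) if free >= r]
        if fits:
            i = choose(fits, parts)
            parts[i] -= r
            placed.append(i)
        else:
            placed.append(None)               # request must wait
    return placed

def first_fit(parts, reqs):
    return place(parts, reqs, lambda fits, p: fits[0])            # first from the top

def best_fit(parts, reqs):
    return place(parts, reqs, lambda fits, p: min(fits, key=lambda i: p[i]))

def worst_fit(parts, reqs):
    return place(parts, reqs, lambda fits, p: max(fits, key=lambda i: p[i]))

partitions = [100, 500, 200, 300, 600]
requests = [212, 417, 112, 426]
```

The simulation reproduces the hand traces: first fit and worst fit each leave the 426 KB process waiting, while best fit places all four.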
Q4(a)
Suppose a disk has 201 cylinders, numbered from 0 to 200. At
some time the disk arm is at cylinder 100, and there is a queue
of disk access requests for cylinders 30, 85, 90, 100, 105, 110, 135
and 145. If Shortest-Seek Time First (SSTF) is being used for
scheduling the disk access.
i. Find the total number of head movements, showing them
with a suitable diagram.
ii. After servicing how many requests is the request for
cylinder 90 serviced?
[GATE 2014] (4+1)
Solution:
SSTF always services the pending request closest to the current head position. Starting
at cylinder 100 (the request for cylinder 100 is serviced immediately, at distance 0):
100 -> 100 (0) -> 105 (5) -> 110 (5) -> 90 (20) -> 85 (5) -> 135 (50) -> 145 (10) -> 30 (115)
i. Total head movement = 0 + 5 + 5 + 20 + 5 + 50 + 10 + 115 = 210 cylinders.
ii. The request for cylinder 90 is serviced after servicing 3 requests (100, 105 and 110).
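The SSTF service order and total head movement can be checked with a short simulation:

```python
def sstf(head, requests):
    """Service pending requests nearest-first; return (order, total movement)."""
    pending = list(requests)
    order, total = [], 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))  # shortest seek first
        total += abs(nxt - head)
        head = nxt
        order.append(nxt)
        pending.remove(nxt)
    return order, total

order, moves = sstf(100, [30, 85, 90, 100, 105, 110, 135, 145])
```

Note how the far-away request for cylinder 30 is serviced last; this starvation of distant requests is the classic weakness of SSTF.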
Q4(b)
Consider a disk queue with requests for I/O to blocks on
cylinders 47,38,121,191,87,11,92,10. The C-LOOK scheduling
algorithm is used. The head is initially at cylinder number 63
moving towards larger cylinder numbers on its servicing pass.
The cylinders are numbered from 0 to 199. The total head
movement (in number of cylinders) incurred while servicing
these requests is? Explain with a diagram.
[GATE-2016] (5)
Solution:
Under C-LOOK the head services requests in the direction of movement up to the largest
pending request, then jumps to the smallest pending request and continues in the same
direction. Starting at 63, moving towards larger cylinder numbers:
63 -> 87 -> 92 -> 121 -> 191 (movement 191 - 63 = 128)
191 -> 10 (jump, movement 181)
10 -> 11 -> 38 -> 47 (movement 47 - 10 = 37)
Total head movement = 128 + 181 + 37 = 346 cylinders.
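The C-LOOK service order and total head movement can be checked with a short simulation (here the jump from the largest to the smallest request is counted as head movement, as the standard answer to this question assumes):

```python
def clook(head, requests):
    """C-LOOK: sweep towards larger cylinders, then jump to the smallest request."""
    up = sorted(c for c in requests if c >= head)    # serviced on the upward pass
    down = sorted(c for c in requests if c < head)   # serviced after the jump
    order = up + down
    total, pos = 0, head
    for c in order:
        total += abs(c - pos)                        # the jump cost is included here
        pos = c
    return order, total

order, moves = clook(63, [47, 38, 121, 191, 87, 11, 92, 10])
```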
Q5(a)
Describe the architecture of a hard disk in detail with a suitable
diagram. (5)
Solution:
As we know, a process needs two types of time: CPU time and I/O time. For I/O, it
requests the operating system to access the disk.
However, the operating system must be fair enough to satisfy each request and, at the
same time, must maintain the efficiency and speed of process execution. The technique the
operating system uses to determine which request is satisfied next is called disk
scheduling.
The disk is divided into tracks, and each track is further divided into sectors. The
point to be noted here is that the outer tracks are physically bigger than the inner
tracks, yet they contain the same number of sectors and have equal storage capacity. This
is because the storage density is high in the sectors of the inner tracks, whereas the
bits are sparsely arranged in the sectors of the outer tracks. Some space in every sector
is used for formatting, so the actual capacity of a sector is less than its stated
capacity.
The read-write (R/W) head moves over the rotating hard disk. It is this R/W head that
performs all the read and write operations on the disk, and hence its position is a major
concern. To perform a read or write operation on a memory location, we must place the R/W
head over that position.
Q5(b).
Short Note on:
i. Seek Time
ii. Rotational Latency
iii. Data Transfer Time
iv. Access Time
v. Total Capacity of Disk (1+1+1+1+1)
Solution:
i. Seek Time: The time taken to position the disk arm over the specified track where the
read/write request will be satisfied.
ii. Rotational Latency: The time taken by the desired sector to rotate into position
under the R/W head.
iii. Data Transfer Time: The time taken to actually transfer the data.
iv. Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer Time.
(Related: Disk Response Time is the average time each request spends waiting for its I/O
operation.)
v. Total Capacity of Disk: Disk capacity = number of surfaces x tracks per surface x
sectors per track x bytes per sector.
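As a worked example of the capacity and access-time formulas (the disk parameters below are assumptions chosen for illustration, not a real drive's specification):

```python
# Hypothetical disk parameters, assumed for illustration only.
surfaces = 4
tracks_per_surface = 1000
sectors_per_track = 500
bytes_per_sector = 512

# Capacity = surfaces x tracks/surface x sectors/track x bytes/sector.
capacity = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
# 4 * 1000 * 500 * 512 bytes = 1,024,000,000 bytes (~1 GB)

# Access time = seek time + rotational latency + transfer time (values in ms, assumed).
seek_ms, rotational_ms, transfer_ms = 8.0, 4.0, 0.5
access_ms = seek_ms + rotational_ms + transfer_ms
```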
Q6(a)
Explain the term RAID and its characteristics. Also, explain
various RAID levels with their advantages and disadvantages.
(2+2+4+2)
Solution:
RAID:
RAID is a technique that makes use of a combination of multiple disks instead of using a single
disk for increased performance, data redundancy, or both.
RAID is transparent to the host system: the array appears as a single big disk presenting
itself as a linear array of blocks. This allows older storage to be replaced by RAID
without making many changes to existing software.
Key Evaluation Points/Characteristics for a RAID System:
Reliability: How many disk faults can the system tolerate?
Availability: What fraction of the total session time is a system in uptime mode, i.e. how
available is the system for actual use?
Performance: How good is the response time? How high is the throughput (rate of
processing work)? Note that performance involves many parameters, not just these two.
Capacity: Given a set of N disks each with B blocks, how much useful capacity is available
to the user?
1. RAID-0 (Striping)
Blocks are striped across the disks. Instead of placing just one block on a disk at a
time, we can place two (or more) consecutive blocks on a disk before moving on to the
next one.
Advantages
1. It is easy to implement.
2. It utilizes the storage capacity in a better way.
Disadvantages
1. A single drive loss can result in the complete failure of the system.
2. Not a good choice for a critical system.
2. RAID-1 (Mirroring)
More than one copy of each block is stored in a separate disk. Thus, every block has two (or
more) copies, lying on different disks.
RAID 0 cannot tolerate any disk failure, but RAID 1 provides reliability through
redundancy.
Advantages
1. It provides complete redundancy.
2. It can increase data security and speed.
Disadvantages
1. It is highly expensive.
2. Storage capacity is less.
3. RAID-3 (Byte-Level Striping with Dedicated Parity)
It consists of byte-level striping with a dedicated parity drive.
At this level, parity information is computed across the data disks and written to a
dedicated parity drive.
Whenever a drive fails, the parity drive allows the lost data to be reconstructed.
For example, if Disk 3 contains the parity bytes for Disk 0, Disk 1, and Disk 2, then
data lost from any one of those disks can be reconstructed using Disk 3 and the
surviving disks.
Advantages
1. Data can be transferred in bulk.
2. Data can be accessed in parallel.
Disadvantages
1. It requires an additional drive for parity.
2. In the case of small-size files, it performs slowly.
4. RAID-4 (Block-Level Striping with Dedicated Parity)
Instead of duplicating data, this level adopts a parity-based approach.
One disk is dedicated to parity: for each stripe, the parity block is the XOR of the
data blocks in that stripe.
Suppose a data block C3 is lost due to a disk failure. We can recompute the bits stored
in C3 by XOR-ing the corresponding bits of all the other data blocks with the parity
block. This allows us to recover the lost data.
Advantages
1. It can reconstruct the data if at most one disk is lost.
Disadvantages
1. It cannot reconstruct the data when more than one disk is lost.
5. RAID-5 (Block-Level Striping with Distributed Parity)
This is a slight modification of the RAID-4 system where the only difference is that the parity
rotates among the drives.
The parity block "rotates" among the drives from stripe to stripe.
This was introduced to make the random write performance better.
Advantages
1. Data can be reconstructed using parity bits.
2. It makes the performance better.
Disadvantages
1. Its technology is complex and extra space is required.
2. If two disks fail at the same time, the data is lost forever.
6. RAID-6 (Block-Level Striping with Two Parity Blocks)
Raid-6 helps when there is more than one disk failure. A pair of independent parities are
generated and stored on multiple disks at this level. Ideally, you need four disk drives for this
level.
There are also hybrid RAIDs, which make use of more than one RAID level nested one after
the other, to fulfill specific requirements.
Advantages
1. Very high data Accessibility.
2. Fast read data transactions.
Disadvantages
1. Due to double parity, it has slow write data transactions.
2. Extra space is required.
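The parity reconstruction used by RAID-4 and RAID-5 can be sketched with XOR; the disk contents below are arbitrary example bytes, not data from any real array:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR corresponding bytes of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, byts) for byts in zip(*blocks))

# One stripe across three data disks (example bytes) plus a dedicated parity block.
d0, d1, d2 = b"\x0f\x10", b"\xf0\x01", b"\x33\x22"
parity = xor_blocks([d0, d1, d2])          # parity = d0 ^ d1 ^ d2

# Disk 1 fails: rebuild its block by XOR-ing the survivors with the parity.
rebuilt_d1 = xor_blocks([d0, d2, parity])
```

Because XOR is its own inverse, XOR-ing all data blocks together with the parity always yields zero, which is exactly why any single lost block can be recovered from the rest.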
Q6(b)
Explain the concept of file system management. Also, explain
various file allocation and file access mechanisms in details.
(4+3+3)
Solution:
1. File concept:
A file is a collection of related information that is stored on secondary storage. Information
stored in files must be persistent i.e. not affected by power failures & system reboots. Files
represent both programs as well as data.
Part of the OS dealing with the files is known as file system. The File system takes care of
the following issues
o File Structure
o Recovering Free space
o disk space assignment to the files
o tracking data location
The important file concepts include:
1. File attributes:
Identifier: It is a unique number that identifies the file within the file system.
Type: This information is needed for those systems that support different types of files.
Location: It is a pointer to a device and to the location of the file on that device.
Size: It is the current size of the file in bytes, words or blocks.
Protection: It is the access control information that determines who can read, write and
execute the file.
2. File operations: The operating system can provide system calls to create, read, write,
reposition, delete and truncate files.
3. File types: The file name is split into 2 parts, name and extension, usually separated
by a period. The user and the OS can know the type of the file from the extension itself.
Listed below are some file types along with their extensions:
File Type        Extension
Executable       exe, bin, com
Object           obj, o
Text             txt, doc
4. File structure: Files can be structured in several ways. Three common possibilities are:
Byte sequence:
Record sequence:
Tree:
Access methods:
Access methods are basically divided into 2 types:
Sequential access: It is the simplest access method. Information in the file is processed
in order, one record after another. A process can read all the data in a file in order,
starting from the beginning, but cannot skip ahead and read from an arbitrary location.
Sequential files can be rewound. This method was convenient when the storage medium was
magnetic tape rather than disk.
Direct access: A file is made up of fixed-length logical records that allow programs to
read and write records rapidly in no particular order. This method can be used when disks
are used for storing files, and it is used in many applications, e.g. database systems.
Example: If an airline customer wants to reserve a seat on a particular flight, the
reservation program must be able to access the record for that flight directly, without
reading the records before it. In a direct access file there is no restriction on the
order of reading or writing: for example, we can read block 14, then read block 50, and
then write block 7. Direct access files are very useful for immediate access to large
amounts of information.
File Allocation Methods:
There are various methods which can be used to allocate disk space to the files. Selection of
an appropriate allocation method will significantly affect the performance and efficiency of
the system. Allocation method provides a way in which the disk will be utilized and the files
will be accessed.
We have mainly discussed 3 methods of allocating disk space which are widely used.
1. Contiguous allocation:
If the blocks are allocated to the file in such a way that all the logical blocks of the file get the
contiguous physical block in the hard disk then such allocation scheme is known as
contiguous allocation.
In the image shown below, there are three files in the directory. The starting block and the
length of each file are mentioned in the table. We can check in the table that the contiguous
blocks are assigned to each file as per its need
2. Linked List Allocation
Linked list allocation solves the problems of contiguous allocation. In linked list
allocation, each file is treated as a linked list of disk blocks, and the disk blocks
allocated to a particular file need not be contiguous on the disk. Each disk block
allocated to a file contains a pointer to the next disk block of the same file.
a. Linked allocation solves all problems of contiguous allocation.
b. In linked allocation, each file is linked list of disk blocks, which are scattered throughout
the disk.
c. The directory contains a pointer to the first and last blocks of the file.
d. Each block contains a pointer to the next block.
e. These pointers are not accessible to the user. To create a new file, we simply create a new
entry in the directory.
f. For writing to the file, a free block is found by the free space management system and this
new block is written to & linked to the end of the file.
g. To read a file, we read blocks by following the pointers from block to block.
h. There is no external fragmentation with linked allocation & any free block can be used to
satisfy a request.
i. Also there is no need to declare the size of a file when that file is created. A file can
continue to grow as long as there are free blocks.
3. Indexed Allocation
a. Indexed allocation solves the problem of linked allocation by bringing all the pointers
together to one location known as the index block.
b. Each file has its own index block which is an array of disk block addresses. The ith entry
in the index block points to the ith block of the file.
c. The directory contains the address of the index block. To read the ith block, we use
the pointer in the ith index block entry and read the desired block.
d. To write into the ith block, a free block is obtained from the free space manager and its
address is put in the ith index block entry.
e. Indexed allocation supports direct access without suffering external fragmentation.
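The difference between linked and indexed allocation can be sketched as follows; the "disk" dictionaries and block numbers are illustrative assumptions:

```python
# Illustrative disks. Linked allocation: block -> (data, pointer to next block or None).
# Indexed allocation: all pointers gathered into one index block.

linked_disk = {9: ("A", 16), 16: ("B", 1), 1: ("C", None)}   # file = blocks 9 -> 16 -> 1

def read_linked(disk, start):
    """Sequential access: follow the per-block pointers from first block to last."""
    data, block = [], start
    while block is not None:
        contents, nxt = disk[block]
        data.append(contents)
        block = nxt
    return "".join(data)

index_block = [9, 16, 1]                 # all block addresses in one place
indexed_disk = {9: "A", 16: "B", 1: "C"}

def read_indexed(disk, index, i):
    """Direct access: the ith entry of the index block points to the ith file block."""
    return disk[index[i]]
```

Reading the 3rd block of the linked file requires following two pointers, while the indexed file reaches it in one lookup, which is why indexed allocation supports direct access.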