OSC08
2. What are the different attributes of a File?
• Name
• Type
• Location
• Size
• Protection (permissions)
• Time of creation/modification/access
3. What are the various File Operations?
• Creating
• Opening
• Reading
• Writing
• Closing
• Deleting
• Renaming
4. What is the information associated with an Open File?
• File descriptor
• Current position pointer
• Access mode (read/write)
• File attributes
5. What are the different types of File Access Methods?
• Sequential access
• Direct access
• Indexed access
7. What is Sequential Access Method?
A: Sequential Access Method reads data in order from the beginning to the end of the
file, requiring no random access.
8. Mention Advantages and Disadvantages of Sequential Access Method.
A: Advantages:
• Simplicity in implementation.
• Efficient for reading large data sets.
Disadvantages:
• Reaching a record in the middle requires reading all the records before it.
• Inefficient when data must be accessed in random order.
9. What is Direct Access Method?
A: Direct Access Method allows data retrieval at any point in the file without reading
sequentially.
10. Mention Advantages of Direct Access Method.
A: Advantages include:
• Any record can be accessed immediately, without reading the records before it.
• Well suited to databases and large files where specific records must be retrieved quickly.
11. What is Indexed Access Method?
A: Indexed Access Method uses an index to quickly locate records, improving search
efficiency.
12. What is Single Level Directory? Mention its Advantages and Disadvantages.
A: A Single Level Directory has all files stored in one directory, making it simple but
less organized.
Advantages:
• Easy to implement and manage.
Disadvantages:
• Naming conflicts when many files or multiple users share the directory.
• No grouping of related files; searching becomes slow as the number of files grows.
13. What is Indexed Sequential Access Method?
A: The Indexed Sequential Access Method combines both sequential and direct
access, using an index for faster searches while still allowing sequential reads.
14. What is Two-Level Directory?
A: In a two-level directory, each user has a separate user file directory (UFD), and all UFDs
are listed under a single master file directory (MFD). This removes naming conflicts between
users.
17. What is General Graph Directory Structure? Mention its Advantages and
Disadvantages.
A: General graph structure allows more complex relationships between directories and
files. Advantages: Flexibility. Disadvantages: Difficult to manage, risk of circular
references.
18. What is an Inode?
A: An inode is a data structure that stores information about a file, such as its size, location,
and access permissions.
19. What is a Directory?
A: A directory is a container that organizes files and other directories in a computer system.
20. What are the operations that can be performed on a Directory?
A: Operations include: create, delete, list, rename, and search files and subdirectories.
21. What are the most common schemes for defining the Logical Structure of a
Directory?
A: Common schemes for defining the logical structure of a directory include single-level,
two-level, tree-structured, and graph-based structures.
22. What is a Path Name?
A: A Path Name specifies the location of a file or directory within a file system hierarchy.
23. If the average page-fault service time is 25 ms and the memory access time is
100 ns, calculate the effective access time.
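A: The effective access time (EAT) depends on the page-fault rate p, which is not stated in the
question as given. Using the standard formula EAT = (1 - p)*(memory access time) + p*(page-fault
service time), and assuming for illustration a page-fault rate of p = 0.001:
EAT = 0.999*100 ns + 0.001*25,000,000 ns ≈ 25,100 ns ≈ 25.1 microseconds.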
A: Seek Time is the time taken for the read/write head to move to the correct track on
a disk, while Latency Time is the time taken for the desired sector to rotate under the
head after reaching the track.
27. What are the Allocation Methods of a Disk Space?
A: Allocation methods of disk space include contiguous allocation, linked allocation, and
indexed allocation.
29. What are the disadvantages of Contiguous Allocation?
A: Drawbacks include fragmentation issues and difficulty in managing free space as files
grow or shrink.
30. What are the advantages of Linked Allocation?
A: Advantages of Linked Allocation include no external fragmentation and ease in growing
files, since any free block anywhere on the disk can be used.
31. What are the disadvantages of Linked Allocation?
A: Disadvantages involve slower access times since each block must be followed via
pointers.
32. What are the operations on Directory?
A: Operations on a directory involve creating, deleting, renaming, and searching for files
within it.
33. What is Backup and Restore?
A: Backup and Restore involves creating copies of data to prevent loss and to enable
recovery from failures or corruption.
34. What is Recovery?
A: Recovery is the process of restoring data after a failure or crash, ensuring system
integrity and continuity.
Long Answer Type Questions (4 Marks & 8 Marks)
1. Explain File Access Methods.
A: File Access Methods define how an operating system retrieves data from files kept on
storage. There are four main types:
1. Sequential Access: Data is accessed in a specific order, one piece at a time, starting
from the beginning. The Sequential Access Method is the simplest way to process files,
where data is accessed one record at a time, in order. It uses two main operations: read
(which moves the pointer to the next record) and write (which adds new data at the end of
the file).
Advantages:
• Easy to implement.
• Fast access to the next record.
Disadvantages:
• Slow for random access: to reach a particular record, all preceding records must be read.
• Inserting or updating records in the middle of the file is difficult.
2. Direct Access: Data can be accessed directly at any point in the file, without following a
specific order. The Direct Access Method, also called relative or random access, allows records to
be accessed in any order, without following a sequence. This is like accessing blocks of
data directly from a disk using their unique identifiers, such as block numbers. For
example, you can access block 40 first, then block 10, and so on. This method is often
used in databases to quickly retrieve specific data, saving time when dealing with large
volumes of information.
In this method, operations like reading or writing are based on specific block numbers,
rather than following a sequence. It can also use index or hash functions for faster
searches.
Advantages:
• Records can be read or written in any order, with no need to process preceding records.
• Fast retrieval of specific records, which suits databases and other large files.
3. Indexed Access: A list (index) is used to quickly find and access data in the file. The
Indexed Access Method is an improved version of direct access, using an index to quickly
find records. Instead of searching through the entire file, the system browses the index
(like a table of contents) to find the record's location using pointers or addresses.
For example, in a bookstore database, each record (like a book's ISBN and price) is stored
in blocks. The index helps quickly find the block that contains a specific book by
performing a binary search.
Drawback: This method becomes inefficient with very large databases because the index
itself can become too large to manage effectively.
4. Indexed Sequential Access: Combines the sequential and indexed methods, giving fast
access to data while still supporting processing in sequence. The Indexed Sequential Access Method (ISAM)
improves on indexed access by using two levels of indexes: a master index and a
secondary index. The master index points to the secondary index, and the secondary
index points to the actual data blocks.
In ISAM, to find a data item, two binary searches are performed: one on the master index
and another on the secondary index. This method allows for faster access by using two
direct access reads.
Example: IBM's ISAM uses this approach, making it more efficient than a single-level index
for large data sets.
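As an illustration, here is a minimal Python sketch contrasting sequential access with direct
(seek-based) access; the file name records.dat and the 512-byte block size are example
assumptions, not values from the question:

```python
# Illustrative sketch: sequential vs. direct (random) access on a byte-oriented file.
# BLOCK_SIZE and FILE_NAME are assumptions made for this example only.

BLOCK_SIZE = 512
FILE_NAME = "records.dat"

# Create a small demo file of 10 fixed-size "blocks".
with open(FILE_NAME, "wb") as f:
    for block_no in range(10):
        f.write(bytes([block_no]) * BLOCK_SIZE)

# Sequential access: read the blocks one after another, in order.
with open(FILE_NAME, "rb") as f:
    while True:
        block = f.read(BLOCK_SIZE)   # the read pointer advances automatically
        if not block:
            break

# Direct access: jump straight to any block using its block number.
def read_block(f, block_no):
    f.seek(block_no * BLOCK_SIZE)    # reposition without reading what lies in between
    return f.read(BLOCK_SIZE)

with open(FILE_NAME, "rb") as f:
    later_block = read_block(f, 7)   # e.g. read block 7 first...
    early_block = read_block(f, 2)   # ...then block 2, in any order
```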
2. Explain File Attributes and File Structure.
A: (i) File Attributes describe key details about a file and can vary across operating systems.
Common attributes include:
• Name
• Type
• Location
• Size
• Protection (permissions)
• Time of creation/modification/access
(ii) File Structure refers to how data is organized within a file, and it depends on the file
type:
• A text file is a sequence of characters organized into lines.
• A source file is a sequence of functions and procedures.
• An object file is a sequence of bytes organized into blocks understood by the linker.
• An executable file is a series of code sections that the loader can bring into memory and run.
Operating systems like Unix or MS-DOS support a limited number of file structures.
However, supporting multiple file structures increases the size and complexity of the
operating system. If a new application needs a structure not supported by the OS, it can
cause significant problems.
3. Explain File Operations.
A: File Operations are actions performed on files by users using commands provided by the
operating system. Common file operations include:
1. Create: Used to create a new file in the file system, which allocates space and adds
the file to the directory.
2. Write: Adds data to a file, increasing its size and updating the file pointer to the end
of the written data.
3. Read: Retrieves data from a file, updating the file's read pointer to the next location.
4. Reposition (Seek): Moves the file pointer to a specific location without reading or
writing data.
5. Delete: Removes a file from the directory and frees up its storage space.
6. Truncate: Clears the content of a file but keeps its attributes and structure.
7. Close: Ends file processing, saves changes, and releases resources.
8. Append: Adds data to the end of the file.
9. Rename: Changes the name of an existing file.
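As an illustration, the sketch below maps these operations onto Python's standard file API;
the file names used are example assumptions only:

```python
# Illustrative sketch: the common file operations expressed with Python's file API.
import os

path = "demo.txt"

f = open(path, "w")          # create / open for writing
f.write("hello, file\n")     # write: data is added and the pointer advances
f.close()                    # close: flush changes and release resources

f = open(path, "r+")         # open for reading and writing
data = f.read(5)             # read: returns data and advances the read pointer
f.seek(0)                    # reposition (seek): move the pointer without I/O
f.truncate(0)                # truncate: clear contents, keep the file itself
f.close()

with open(path, "a") as f:   # append: writes always go to the end of the file
    f.write("appended line\n")

os.rename(path, "demo_renamed.txt")   # rename
os.remove("demo_renamed.txt")         # delete: remove the entry and free its space
```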
4. Explain Internal File Structure.
A: Locating a specific offset within a file can be complex due to the difference between logical
records and physical disk blocks. Disk systems use fixed-size blocks (e.g., 512 bytes), while
logical records may vary in size. To handle this, logical records are packed into physical blocks.
For example, UNIX treats files as streams of bytes, where each byte has an offset. The operating
system packs and unpacks bytes into disk blocks as needed.
This packing system means all I/O operations are done in terms of blocks, not individual bytes.
As a result, when a file doesn’t perfectly fit into a block, the unused space in the last block is
wasted, leading to internal fragmentation. Larger block sizes lead to more wasted space,
causing greater fragmentation.
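A small illustrative calculation of how many blocks a file needs and how much space is lost to
internal fragmentation; the block size and file size below are assumed example values:

```python
# Illustrative sketch: packing a file into fixed-size blocks and measuring the
# internal fragmentation in the last block.
import math

BLOCK_SIZE = 512          # bytes per physical block (example value)
file_size = 1300          # logical file size in bytes (example value)

blocks_needed = math.ceil(file_size / BLOCK_SIZE)
wasted = blocks_needed * BLOCK_SIZE - file_size   # unused space in the last block

print(f"{blocks_needed} blocks allocated, {wasted} bytes of internal fragmentation")
# -> 3 blocks allocated, 236 bytes of internal fragmentation
```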
5. Explain the Logical Structures of a Directory.
A: A directory is a container that holds files and folders, organizing them in a hierarchical
structure. A hard disk can be divided into partitions (also called volumes), each acting like
a separate mini disk. Each partition must have at least one directory to store and manage
files, and each file in a directory has an entry that holds information about the file.
There are several logical structures of a directory, as given below.
(i) Single-Level Directory: A single-level directory is a simple structure where all files are
stored in one directory, making it easy to manage. However, it has a major limitation: when
there are many files or multiple users, each file must have a unique name. If two users
name their files the same (e.g., “test”), it causes a conflict.
Advantages:
• Easy to implement.
• Faster search for smaller files.
• Simple operations like file creation, searching, deletion, and updating.
(ii) Two-Level Directory: In a two-level directory, each user has a separate user file directory
(UFD), and all UFDs are listed under a single master file directory (MFD). This removes naming
conflicts between users, since the same file name may appear in different user directories.
(iii) Tree-Structured Directory: A two-level directory can be extended into a tree with
multiple levels, allowing users to create subdirectories and organize their files more
effectively. A tree structure is the most common directory structure. The tree has a root
directory, and every file in the system has a unique path.
Disadvantages:
• Not all files fit into the hierarchical model; some may need to be in multiple
directories.
• File sharing is difficult.
• Inefficient, as accessing a file may require navigating through multiple directories.
(iv) Acyclic Graph Directory: An acyclic graph directory allows sharing of files and
subdirectories, meaning the same file or subdirectory can appear in multiple directories
without creating cycles. It's useful when multiple people, like programmers working on a
joint project, need access to the same files. These shared files are not copies—any
changes made in one subdirectory will reflect in all others that share it.
(v) General Graph Directory Structure: In a general graph directory structure, cycles are
allowed, meaning directories can have multiple parent directories.
Advantages:
• Very flexible: files and directories can be shared freely across the structure.
Disadvantages:
• Harder to manage; cycles make traversal and deletion difficult, and garbage collection may be
needed to reclaim space that is no longer referenced.
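As an illustration, a tree-structured directory and absolute path-name resolution can be
sketched in Python with nested dictionaries; all names below are invented examples:

```python
# Illustrative sketch: resolving an absolute path name in a tree-structured
# directory, modelled as nested dictionaries (file contents are strings).

root = {
    "home": {
        "alice": {"test": "alice's file"},
        "bob":   {"test": "bob's file"},      # same name, no conflict: different directories
    },
    "etc": {"hosts": "127.0.0.1 localhost"},
}

def resolve(path):
    """Follow each component of an absolute path starting from the root directory."""
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]        # raises KeyError if a component does not exist
    return node

print(resolve("/home/alice/test"))   # -> alice's file
print(resolve("/home/bob/test"))     # -> bob's file
```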
6. Explain the Allocation Methods of Disk Space.
A: There are several methods for allocating disk space, and choosing the right one affects
system performance and efficiency. The main file allocation methods are:
• Contiguous Allocation
• Linked List Allocation
• Indexed Allocation
These methods help in quick file access and efficient use of disk space. Other methods
like FAT, Extents, and Multilevel Indexed Allocation also exist, but the three mentioned are
most commonly used.
(i) Contiguous allocation is a method where files are stored in consecutive blocks on the
disk. Each file is assigned a starting block and a length, and all its blocks are placed next to
each other in a continuous space.
Advantages:
• Simple to implement: only the starting block and the length need to be stored.
• Supports both sequential and direct access with minimal disk-head movement.
Disadvantages:
• Causes external fragmentation as files are created and deleted.
• Files are hard to grow, because the space following a file may already be in use and the final
size must be known in advance.
(ii) Linked Allocation: Each file is a linked list of disk blocks that may be scattered anywhere
on the disk. The directory entry stores pointers to the first and last blocks, and each block
holds a pointer to the next block of the file.
(iii) Indexed Allocation: In indexed allocation, each file has an index block that stores the
addresses of the file's data blocks. The directory entry contains the address of the file's
index block.
Advantages of Indexed Allocation:
• Supports direct access to any block of the file.
• No external fragmentation: any free block on the disk can satisfy a request for more space.
To handle large files with more pointers than a single index block can hold, there are three
methods:
i. Linked Scheme: Multiple index blocks are linked together, where each block points
to the next.
ii. Multilevel Index: A first-level index block points to second-level index blocks,
which in turn point to the actual file data. This can extend to multiple levels
depending on the file size.
iii. Combined Scheme: Uses an inode, which contains file information (like name and
size) and pointers to file data blocks. The inode stores direct pointers to file blocks,
and indirect pointers (single, double, or triple) point to additional blocks if needed.
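A small illustrative calculation of the maximum file size reachable through an inode with direct,
single-indirect and double-indirect pointers; all sizes below are assumed example values:

```python
# Illustrative sketch: maximum file size for an inode with direct, single-indirect
# and double-indirect pointers (block size, pointer size and pointer count are examples).

BLOCK_SIZE   = 4096                           # bytes per data block
POINTER_SIZE = 4                              # bytes per block pointer
PTRS_PER_BLOCK = BLOCK_SIZE // POINTER_SIZE   # pointers an index block can hold

direct_ptrs = 12                              # direct pointers stored in the inode itself

direct_bytes          = direct_ptrs * BLOCK_SIZE
single_indirect_bytes = PTRS_PER_BLOCK * BLOCK_SIZE
double_indirect_bytes = PTRS_PER_BLOCK ** 2 * BLOCK_SIZE

max_file_size = direct_bytes + single_indirect_bytes + double_indirect_bytes
print(f"Maximum file size: {max_file_size / 2**20:.1f} MiB")   # -> about 4100.0 MiB
```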
7. Explain Free Space Management and techniques to implement a free space list.
A: A file management system in an operating system tracks and manages free disk space. It
uses a free space list to keep a record of available blocks.
When a file is created, the system checks the free space list to allocate space. When a file is
deleted, the space is freed and added back to the list.
(i) Bitmap: Uses a series of bits to represent free and used blocks. A bit vector (or bitmap)
is a common way to track free space on a disk. It consists of a series of bits, where each bit
represents a disk block. A bit value of '1' means the block is free, and '0' means the block is
occupied. Initially, all blocks are empty, so all bits are set to '1'. For example, in a disk with
16 blocks, free blocks are represented by '1' and occupied blocks by '0'.
A free block is one that does not currently hold any data. The number of the first free block can
be found with the formula: Block number = (number of bits per word) * (number of 0-valued words)
+ offset of the first 1 bit. A "non-zero word" is a word in which not all bits are 0, i.e., it
contains at least one free block. For example, with 8-bit words, if the first word is 00111100 it
is already a non-zero word, so the number of 0-valued words is 0 and the offset of the first 1 bit
is 3; hence the first free block number is 8 * 0 + 3 = 3 (a small sketch of this search appears
after the disadvantages below).
Advantages:
• Simple to implement.
• Efficient for finding free space on the disk.
Disadvantages:
• The bit vector must be kept in main memory to be efficient, which costs extra space on large
disks.
• Scanning the vector for free blocks can be slow when the disk is large.
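An illustrative Python sketch of the first-free-block search described above; the bitmap contents
and the 8-bit word size are example assumptions, and blocks are numbered from 1 as in the formula:

```python
# Illustrative sketch: find the first free block in a bit vector (1 = free, 0 = allocated).
# The word values and the 8-bit word size are example assumptions.

BITS_PER_WORD = 8
words = [0b00000000, 0b00111100]   # word 0 has no free blocks; word 1 is the first non-zero word

def first_free_block(words):
    for word_index, word in enumerate(words):
        if word == 0:
            continue                                 # skip 0-valued words (no free blocks)
        bits = format(word, f"0{BITS_PER_WORD}b")
        offset = bits.index("1") + 1                 # offset of the first 1 bit, counted from 1
        # Block number = (bits per word) * (number of 0-valued words) + offset of first 1 bit
        return BITS_PER_WORD * word_index + offset
    return None                                      # no free block on the disk

print(first_free_block(words))   # -> 8 * 1 + 3 = 11
```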
(ii) Grouping: Groups free blocks together and tracks them. In this modified free-list
technique, instead of storing the address of each free block, we store the address of a
group of n free blocks. The first n-1 blocks are free, and the last block contains the
address of the next n free blocks. This makes it quicker to find multiple free blocks, but we
only store the address of the first free block.
(iii) Linked List: Uses a list where each free block points to the next free block. In the linked
list technique for free space management, a list of free blocks is maintained. A head
pointer points to the first free block, and each block contains a pointer to the next free
block. While this method helps track free space, it is not efficient for quick searching, as
you must read each block to traverse the list, which takes time. Therefore, traversing the
free list is not done frequently.
Advantages:
• Simple allocation: allocate the first free block and move the head pointer to the
next.
Disadvantages:
• Slow search: each block must be read from the disk, which is slower than memory.
• Not efficient for fast access.
(iv) Counting: Counts consecutive free blocks and records the number of them. The
counting approach for free space management stores the address of the first free block
and the count of consecutive free blocks. Instead of listing each free block, we record a
block’s address and how many free blocks follow it. This method is similar to block
allocation and can be stored in a B-tree for efficient operations like lookup, deletion, and
insertion, making it faster than using a linked list.
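An illustrative sketch of the counting approach: turning a free-block map into
(first free block, count) pairs; the bitmap below is an example assumption:

```python
# Illustrative sketch of the counting technique: store (first free block, run length)
# pairs instead of every free block individually (1 = free, 0 = allocated).

free_map = [1, 1, 0, 0, 1, 1, 1, 0, 1]   # blocks 0..8 (example values)

def counting_list(free_map):
    runs = []
    i = 0
    while i < len(free_map):
        if free_map[i] == 1:
            start = i
            while i < len(free_map) and free_map[i] == 1:
                i += 1
            runs.append((start, i - start))   # (address of first free block, count)
        else:
            i += 1
    return runs

print(counting_list(free_map))   # -> [(0, 2), (4, 3), (8, 1)]
```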
8. Explain how directories are organized and implemented.
A: File systems use directories or folders to organize and manage files. In many systems,
directories are also treated as files. This section explains how directories are structured,
their features, and the operations you can do with them.
Simple Directories: A directory usually has entries for each file. One way is to store the file
name, attributes, and disk addresses in each entry. Another way is to store just the file
name in the entry, with a pointer to another structure that holds the attributes and
addresses. Both methods are commonly used.
When a file is opened, the operating system looks up the file name in the directory,
retrieves its attributes and disk addresses, and stores them in memory. After that, all file
operations use the information in memory. The number of directories depends on the
system. The simplest system is a single directory for all files, common in early personal
computers with one user.
In a multi-user system, having one directory for all users can cause problems, like file
name conflicts. For example, if two users name their files the same, one file can overwrite
the other. To avoid this, each user is given their own private directory, so file names don’t
clash. In this setup, when a user tries to open a file, the system knows which directory to
search based on the user’s login. This requires a login procedure, and typically, users can
only access their own files.
Hierarchical Directory Systems: A two-level directory system solves file name conflicts
but doesn't allow users to organize many files efficiently. To fix this, a hierarchical (tree)
directory system is used, where users can create multiple subdirectories for better
organization. This is commonly seen in modern PC and server file systems. For example,
early digital cameras used a single directory to store images, but later models added
multiple directories to help organize files, even though most users don't need or
understand this feature.
9. Explain Reliability and Protection in file systems.
A: When storing information on a computer, two main concerns are reliability (protecting
data from physical damage) and protection (preventing unauthorized access).
Reliability: To ensure data safety, files are often backed up regularly (e.g., daily or weekly)
to prevent loss due to hardware issues, power failures, or human errors like accidental
deletion. Systems can also experience damage from hardware failures, power surges, or
even vandalism. For this reason, many systems automatically create backup copies of
files on external storage devices like tapes.
Protection: Access to files is controlled by restricting the types of operations that can be
performed on them, such as read, write, execute, append, delete, and list.
Higher-level operations like copying or renaming files may also be controlled but are
typically implemented using lower-level operations (like reading or writing).
For example, a small system with a few users might not need complex protection, but a
large corporate system would require stricter controls to safeguard sensitive data.
Access Control ensures that only authorized users can access or modify files. The most
common method is using an Access Control List (ACL), which lists users and their
allowed actions (read, write, execute) on a file. When a user requests access, the system
checks the ACL to grant or deny permission.
However, ACLs can become lengthy and complex, especially in large systems. To simplify
this, users are grouped into three categories:
• Owner: the user who created the file.
• Group: a set of users who share the file and need similar access.
• Others (universe): all remaining users in the system.
Systems like Solaris combine these categories with ACLs for easier management while
still allowing fine-grained access control when needed.
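As an illustration, a permission check in this owner/group/others style can be sketched as
follows; the user names, group names and permissions below are invented examples:

```python
# Illustrative sketch: checking an access request against owner/group/others
# permissions, in the spirit of UNIX-style mode bits (all values are examples).

file_acl = {
    "owner": "alice",
    "group": "staff",
    "perms": {"owner": {"r", "w"}, "group": {"r"}, "others": set()},
}

def allowed(user, user_groups, operation, acl):
    if user == acl["owner"]:
        category = "owner"
    elif acl["group"] in user_groups:
        category = "group"
    else:
        category = "others"
    return operation in acl["perms"][category]

print(allowed("alice", {"staff"}, "w", file_acl))   # True: the owner may write
print(allowed("bob",   {"staff"}, "w", file_acl))   # False: the group may only read
```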
10. Explain Recovery in file systems.
A: To ensure data consistency and prevent loss in case of system failure, files and
directories are stored both in main memory and on disk. However, if a system crashes,
updates in memory (such as directory changes) may not be written to disk, leading to
potential data inconsistencies. To address this, two key techniques are used:
(i) Consistency Checking: When a crash occurs, data in cache and buffers can be lost,
making the file system inconsistent. To fix this, a consistency checker program runs after
rebooting. It compares the directory structure in memory with the actual data blocks on
disk and repairs any inconsistencies. For example, with linked allocation, if a file’s blocks
are linked together, the checker can reconstruct the file by following these links.
(ii) Backup and Restore: To protect against data loss due to disk failure, backup programs
are used to copy data to another storage device (like a floppy, tape, or another disk). If a
failure occurs, the system can restore the lost data from the backup. To avoid unnecessary
copying, the backup system can track the last backup time of a file and only copy files that
have changed since the last backup.
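An illustrative sketch of such an incremental backup, copying only the files modified since the
previous backup; the directory paths and the timestamp are example assumptions:

```python
# Illustrative sketch: incremental backup based on file modification times.
import os
import shutil

SOURCE_DIR = "data"                  # directory to back up (example path)
BACKUP_DIR = "backup"                # destination for the copies (example path)
last_backup_time = 1_700_000_000.0   # time of the previous backup, seconds since the epoch (example)

if os.path.isdir(SOURCE_DIR):
    os.makedirs(BACKUP_DIR, exist_ok=True)
    for name in os.listdir(SOURCE_DIR):
        src = os.path.join(SOURCE_DIR, name)
        # Copy only regular files whose last-modification time is newer than the last backup.
        if os.path.isfile(src) and os.path.getmtime(src) > last_backup_time:
            shutil.copy2(src, os.path.join(BACKUP_DIR, name))
```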