Question Bank Program Course OS
Integrated Courses
Question Type   Unit I         Unit II        Unit III       Unit IV        Unit V
2 marks         10 Questions   10 Questions   10 Questions   10 Questions   10 Questions
3 marks         10 Questions   10 Questions   10 Questions   10 Questions   10 Questions
5 marks         10 Questions   10 Questions   10 Questions   10 Questions   10 Questions
Total           30             30             30             30             30
Note:
5-mark and 3-mark questions must be only at the K2 or K3 level as per the CO, and questions must be distributed equally between the K2 and K3 levels.
A 5-mark question may contain subdivisions, but provide the evaluation scheme.
2-mark questions must be at the K1 and/or K2 level, with questions distributed equally between the K1 and K2 levels.
Please provide only standard questions; kindly avoid direct questions.
Use Times New Roman font, 12 pt.
Follow the Revised Bloom's Taxonomy action verbs only.
Sketches, drawings, and figures should be clearly and neatly presented.
For problem-based courses, provide 70% problem-solving questions and 30% theory questions.
The answer key with evaluation scheme must be provided along with the question bank, as a separate file.
Bloom’s Taxonomical Terms and Definitions
S.No.   Bloom's Taxonomical Level   Definitions   Possible Words in Questions
Unit – I
Unit Contents: Structure and Overview of Operating Systems
Operating system overview: Objectives – Functions – Computer System Organization – Operating System Operations – System Calls – System Programs – Operating-System Structure: Traditional UNIX system structure – The Mac OS X structure – Architecture of Google's Android.
An operating system acts as an intermediary between a user of a computer and the computer hardware. It controls and coordinates the use of the hardware among the various application programs for the various users.
3. Differentiate between interrupt and trap. (2 marks, CO1, K2)
An interrupt is a hardware-generated signal that changes the flow of control within the system. A trap is a software-generated interrupt. An interrupt can be used to signal the completion of I/O so that the CPU does not have to spend cycles polling the device. A trap can be used to catch arithmetic errors or to call system routines.
4. State the operations carried out in the operating system. (2 marks, CO1, K1)
The operations of the operating system are process management, memory management, device management, and file management.
Process management: assigning the processor to one process at a time.
Memory management: moving processes from disk to primary memory for execution.
Device management: accessing all the I/O devices through device drivers, which are device-specific code.
File management: mapping files onto physical devices.
5. Show the organization of a computer system. (2 marks, CO1, K1)
[Diagram: computer-system organization]
11. Define system programs. (2 marks, CO1, K1)
System programs, also known as system utilities, provide a convenient environment for program development and execution. Categories include:
File management
Status information
File modification
Programming-language support
Program loading and execution
Communications
Background services
12. Draw the traditional UNIX system structure. (2 marks, CO1, K2)
[Diagram: traditional UNIX system structure]
Evaluation Scheme:
Explanation – 1mark
Five Activities – 2mark
Evaluation Scheme:
Explanation – 3mark
5. Discuss the drawbacks of a multiprocessor system. (3 marks, CO1, K2)
A simple single-processor system is much cheaper to buy than a multiprocessor system. The multiple processors in a multiprocessor system share peripherals, memory, etc., so it is much more complicated to schedule processes and allocate resources to them.
Evaluation Scheme:
Explanation – 3mark
6. Explain the main advantage of the layered approach to system design. What are the disadvantages of using the layered approach? (3 marks, CO1, K2)
As in all cases of modular design, designing an operating system in a modular way has several advantages. The system is easier to debug and modify because changes affect only limited sections of the system rather than touching all sections of the operating system. Information is kept only where it is needed and is accessible only within a defined and restricted area, so any bugs affecting that data must be limited to a specific module or layer.
Evaluation Scheme:
Advantages – 2mark
Disadvantage – 1mark
Evaluation Scheme:
Explanation DMA – 1.5mark
Explanation Cache – 1.5mark
Evaluation Scheme:
Explanation – 3mark
9. Draw the architecture of the Mac OS X structure. (3 marks, CO1, K2)
Evaluation Scheme:
Diagram – 3mark
3 CO1 K2
Evaluation Scheme:
Diagram – 3mark
A Modern Computer System
Each device controller is in charge of a specific type of device
(for example, disk drives, audio devices, and video displays).
The CPU and the device controllers can execute concurrently,
competing for memory cycles.
To ensure orderly access to the shared memory, a memory
controller is provided whose function is to synchronize access to
the memory.
For a computer to start running (for instance, when it is powered
up or rebooted), it needs an initial program to run. This initial
program, or bootstrap program, is stored in read-only memory
(ROM) or electrically erasable programmable read-only memory
(EEPROM), known by the general term firmware, within the
computer hardware.
It initializes all aspects of the system, from CPU registers to
device controllers to memory contents.
The bootstrap program must know how to load the operating
system and to start executing that system. To accomplish this
goal, the bootstrap program must locate and load into memory
the operating system kernel. The operating system then starts
executing the first process, such as "init," and waits for some
event to occur.
Interrupts are an important part of a computer architecture. The
occurrence of an event is usually signaled by an interrupt from
either the hardware or the software. Hardware may trigger an
interrupt at any time by sending a signal to the CPU, usually by
way of the system bus. Software may trigger an interrupt by
executing a special operation called a system call (also called a
kernel call).
Storage Structure
Computer programs must be in main memory (also called
random-access memory or RAM) to be executed. Main memory
is the only large storage area (millions to billions of bytes) that
the processor can access directly.
RAM is commonly implemented in a semiconductor technology
called dynamic random-access memory (DRAM), which forms
the array of memory words that makes up main memory.
Storage Memory Hierarchy
The main requirement for secondary storage is that it be able to
hold large quantities of data permanently. The most common
secondary-storage device is a magnetic disk, which provides
storage for both programs and data. Most programs (web
browsers, compilers, word processors, spreadsheets, and so on)
are stored on a disk until they are loaded into memory.
Volatile storage (RAM) loses its contents when the power to the
device is removed. In the absence of expensive battery and
generator backup systems, data must be written to nonvolatile
storage (Secondary Memory) for safekeeping.
Cache memory is a small, high-speed temporary storage between
the CPU and RAM that holds frequently used data to reduce
data-access latency.
Input/Output (I/O) Structure
A large portion of operating system code is dedicated to
managing I/O, because of its importance to the reliability and
performance of a system and because of the varying nature of the
devices.
A general-purpose computer system consists of CPUs and
multiple device controllers that are connected through a common
bus. Each device controller is in charge of a specific type of
device.
Working of Modern Computer System
Depending on the controller, there may be more than one
attached device. For instance, seven or more devices can be
attached to the small computer-systems interface (SCSI)
controller.
A device controller maintains some local buffer storage and a
set of special-purpose registers. The device controller is
responsible for moving the data between the peripheral devices
that it controls and its local buffer storage.
Operating systems have a device driver for each device
controller. This device driver understands the device controller
and presents a uniform interface to the device to the rest of the
operating system.
This form of interrupt-driven I/O is fine for moving small
amounts of data but can produce high overhead when used for
bulk data movement such as disk I/O. To solve this problem,
direct memory access (DMA) is used.
After setting up buffers, pointers, and counters for the I/O device,
the device controller transfers an entire block of data directly
between its own buffer storage and memory, with no intervention
by the CPU.
While the device controller is performing these operations, the CPU is
available to accomplish other work.
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
2. Discuss the different types of operating-system structures. (5 marks, CO1, K2)
The different types of operating-system structures are:
1. Monolithic structure: a single, self-contained kernel.
2. Microkernel structure: a small kernel that provides basic services.
3. Layered structure: a hierarchical structure with each layer providing a specific set of services.
4. Hybrid structure: a combination of different structures.
Evaluation Scheme:
Explanation – 2mark
Types – 3mark
3. Discuss the different types of operating-system services. 5 CO1 K2
The different types of operating-system services are:
1. Process management services: creating, scheduling, and terminating
processes.
2. Memory management services: allocating and deallocating memory.
3. File management services: creating, deleting, and manipulating files.
4. I/O management services: performing I/O operations.
5. Security services: providing access control, authentication, and
encryption.
Evaluation Scheme:
Explanation – 2mark
Types – 3mark
5 CO1 K2
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
5. Briefly Explain about Operating System Operations 5 CO1 K2
Modern operating systems are interrupt driven. Events are almost
always signaled by the occurrence of an interrupt or a trap. A trap (or an
exception) is a software-generated interrupt caused either by an error (for
example, division by zero or invalid memory access) or by a specific
request from a user program that an operating-system service be
performed.
Even simple programs make heavy use of the OS, issuing
thousands of system calls per second. The system-call interface
intercepts function calls in the API and invokes the necessary
system calls within the operating system.
System Programs
1. Also called system utilities.
2. Provide a convenient environment for program development and execution.
3. Some are simply user interfaces to system calls; others bundle useful system calls.
[Layer diagram: Application Programs → System Programs → Operating System → Hardware]
Types of System Calls
System calls can be grouped roughly into six major categories:
process control, file manipulation, device manipulation,
information maintenance, communications, and protection
- Process control
◦ end, abort
◦ load, execute
◦ create process, terminate process
◦ get process attributes, set process attributes
◦ wait for time
◦ wait event, signal event
◦ allocate and free memory
- File management
◦ create file, delete file
◦ open, close
◦ read, write, reposition
◦ get file attributes, set file attributes
- Device management
◦ request device, release device
◦ read, write, reposition
◦ get device attributes, set device attributes
◦ logically attach or detach devices
- Information maintenance
◦ get time or date, set time or date
◦ get system data, set system data
◦ get process, file, or device attributes
◦ set process, file, or device attributes
- Communications
◦ create, delete communication connection
◦ send, receive messages
◦ transfer status information
◦ attach or detach remote devices
- Protection
◦ Control access to resources
◦ Get and set Permissions
◦ Allow and deny user access
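Many of these categories can be exercised from Python, whose standard `os` module wraps the corresponding POSIX system calls. A minimal sketch (the file path is illustrative):

```python
import os
import tempfile

# File management: create, write, read, and delete a file through
# the os module's thin wrappers over POSIX system calls.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")   # illustrative path
fd = os.open(path, os.O_CREAT | os.O_WRONLY)          # create file
os.write(fd, b"hello")                                # write
os.close(fd)                                          # close

fd = os.open(path, os.O_RDONLY)                       # open
data = os.read(fd, 100)                               # read
os.close(fd)
os.unlink(path)                                       # delete file

# Information maintenance: get process attributes.
pid = os.getpid()
print(data, pid > 0)  # b'hello' True
```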
• Status Information
– Maintains and provides information on system date, time,
disk/memory space, logging, performance, debugging, registry
• File Modification
– Text editors, create, modify and search file contents
• Programming-Language Support
– Software such as compilers, assemblers, debuggers, and
interpreters for common programming languages (C, C++, Java)
• Program Loading and Execution
– Absolute/relocatable loaders, linkage editors
• Communication
– These programs provide Virtual connections, remote
login, file transfer, web browsing, email communication
• Background Services
– Constantly running programs, also called services, subsystems, or daemons
Evaluation Scheme:
Explanation – 3mark
Types – 2mark
7. Explain in detail about system programs. (5 marks, CO1, K2)
System programs, also known as system utilities, provide a convenient environment for program development and execution. Categories include:
File management
Status information
File modification
Programming-language support
Program loading and execution
Communications
Background services
Evaluation Scheme:
Definition – 2mark
Explanation – 3mark
8. Discuss in detail about the operating system structure. 5 CO1 K2
A system as large and complex as a modern operating system must be
engineered carefully if it is to function properly and be modified easily. A
common approach is to partition the task into small components, or
modules, rather than have one monolithic system. Each of these modules
should be a well-defined portion of the system, with carefully defined
inputs, outputs,and functions. We have already discussed briefly in
Chapter 1 the common components of operating systems. In this section,
we discuss how these components are interconnected and melded into a
kernel.
1. Simple Structure
Operating systems started as small, simple, limited systems.
The interfaces and levels of functionality are not well separated.
Application programs are able to access the basic I/O routines.
Such freedom leaves systems (e.g., MS-DOS) vulnerable to errant
(or malicious) programs, causing entire system crashes when user
programs fail.
Advantage: simple design and easy construction.
Disadvantage: difficult to debug; prone to frequent system failure.
Examples: MS-DOS, UNIX.
2. Layered Approach
With proper hardware support, operating systems can be broken
into pieces that are smaller and more appropriate
The operating system can then retain much greater control over
the computer and over the applications that make use of that
computer.
The operating system is broken into a number of layers (levels).
The bottom layer (layer 0) is the hardware; the highest (layer N)
is the user interface. The middle layers mostly comprise the OS
services and application programs. This layering structure is
depicted in Figure 2.13.
Advantage
Simplicity of construction and debugging
The first layer can be debugged without any concern for the rest
of the system
A layer does not need to know how these operations are
implemented; it needs to know only what these operations do.
Hence, each layer hides the existence of certain data structures,
operations, and hardware-level layers.
Disadvantages
1. Careful planning of the design is necessary.
2. Less efficient than other types: each layer adds overhead to a
system call, so the net result is a system call that takes longer
than one on a non-layered system.
Examples: VAX/VMS, Multics.
3. Microkernels
Researchers at Carnegie Mellon University developed an
operating system called Mach that modularized the kernel using
the microkernel approach.
Only essential Operating system functions like thread
management, address space management, inter-process
communication are built in kernel
All nonessential components from the kernel are removed and
implemented as system and user-level programs.
The main function of the microkernel is to provide
communication between the client program and the various
services that are also running in user space.
Communication is provided through message passing.
Advantage
It makes extending the operating system easier.
All new services are added to user space and consequently do not
require modification of the kernel.
It is easier to port from one hardware design to another.
It provides more security and reliability, since most services run
as user processes rather than kernel processes.
If a service fails, the rest of the operating system remains
untouched.
Disadvantage
The performance of microkernels can suffer due to increased
system-function overhead.
Example: the Mac OS X kernel (also known as Darwin) is based
on the Mach microkernel.
4. Modules Approach
The best current methodology for operating-system design
involves using loadable kernel modules
The kernel has a set of core components and links in additional
services via modules, either at boot time or during run time. This
type of design is common in modern implementations of UNIX,
such as Solaris, Linux, and Mac OS X, as well as Windows.
The idea of the design is for the kernel to provide core services
while other services are implemented dynamically, as the kernel
is running
Linking services dynamically is preferable to adding new
features directly to the kernel, which would require recompiling
the kernel every time a change was made.
Advantages
Has defined, protected interfaces;
More flexible than a layered system
More efficient, because modules do not need to invoke message
passing in order to communicate.
Example: Solaris
Evaluation Scheme:
Explanation – 3mark
Components – 1mark
Diagram – 1mark
The monolithic structure was difficult to implement and maintain. It had
a distinct performance advantage, however: there is very little overhead
in the system-call interface or in communication within the kernel. We
still see evidence of this simple, monolithic structure in the UNIX,
Linux, and Windows operating systems.
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
10. State the operating system structure. Discuss the operating system operations in detail. Justify why the lack of hardware-supported dual mode can cause a serious shortcoming in an operating system. (5 marks, CO1, K2)
An operating system is a construct that allows the user application
programs to interact with the system hardware. Since the operating
system is such a complex structure, it should be created with utmost care
so it can be used and modified easily. An easy way to do this is to create
the operating system in parts. Each of these parts should be well defined
with clear inputs, outputs and functions.
Simple Structure
There are many operating systems that have a rather simple structure.
These started as small systems and rapidly expanded far beyond their
original scope. A common example of this is MS-DOS. It was designed
for a small, niche group of people; there was no indication that it would
become so popular.
An image to illustrate the structure of MS-DOS is as follows:
It is better that operating systems have a modular structure, unlike MS-
DOS. That would lead to greater control over the computer system and its
various applications. The modular structure would also allow the
programmers to hide information as required and implement internal
routines as they see fit without changing the outer specifications.
Layered Structure
One way to achieve modularity in the operating system is the layered
approach. In this, the bottom layer is the hardware and the topmost layer
is the user interface.
An image demonstrating the layered approach is as follows:
As seen from the image, each upper layer is built on the bottom layer. All
the layers hide some structures, operations etc from their upper layers.
One problem with the layered structure is that each layer needs to be
carefully defined. This is necessary because the upper layers can only use
the functionalities of the layers below them.
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
Unit – II
Unit Contents: Process Management
Processes: Process Concept – Threads – Process Scheduling – Operations on Processes – Interprocess Communication – Communication in Client–Server Systems – Pipes (RPC, Pipes) – Process Synchronization: The Critical-Section Problem – Semaphores – Mutex Locks – Classic Problems of Synchronization – Monitors. Case Study: Windows Threads and Linux Threads
1. Define the term dispatch latency. (2 marks, CO2, K1)
Dispatch latency is the amount of time it takes for a system to respond to a request for a process to begin operation.
2. List out the requirements that a solution to the critical-section problem must satisfy. (2 marks, CO2, K1)
The three requirements are:
• Mutual exclusion
• Progress
• Bounded waiting
3. What are two differences between user-level threads and kernel-level threads? Under what circumstances is one type better than the other? (2 marks, CO2, K1)
a. User-level threads are unknown to the kernel, whereas the kernel is aware of kernel threads.
b. On systems using either M:1 or M:N mapping, user threads are scheduled by the thread library, while the kernel schedules kernel threads.
c. Kernel threads need not be associated with a process, whereas every user thread belongs to a process. Kernel threads are generally more expensive to maintain than user threads because they must be represented with a kernel data structure.
4. Name some classic problems of synchronization. (2 marks, CO2, K1)
The Bounded-Buffer Problem
The Readers–Writers Problem
The Dining-Philosophers Problem
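The first of these, the bounded-buffer problem, can be sketched with two counting semaphores and a mutex; this Python sketch is illustrative and not part of the required answer:

```python
import threading
from collections import deque

# Bounded-buffer (producer-consumer) with two counting semaphores
# and a mutex; names are illustrative.
CAPACITY = 3
buffer = deque()
empty = threading.Semaphore(CAPACITY)   # counts free slots
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Lock()                # guards the buffer itself
consumed = []

def producer():
    for item in range(5):
        empty.acquire()                 # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()                  # signal a filled slot

def consumer():
    for _ in range(5):
        full.acquire()                  # wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                 # signal a free slot

threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

With one producer and one consumer the FIFO buffer preserves order, so the consumer always sees the items in production order.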
5. Define entry section and exit section. (2 marks, CO2, K1)
The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section is followed by an exit section. The remaining code is the remainder section.
6. Define process. (2 marks, CO2, K1)
A process can be thought of as a program in execution. A process needs certain resources, such as CPU time, memory, files, and I/O devices, to accomplish its task.
7. List the various process states in process management. (2 marks, CO2, K1)
New – the process is being created.
Running – instructions are being executed.
Waiting – the process is waiting for some event to occur.
Ready – the process is waiting to be assigned a processor.
Terminated – the process has finished execution.
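These state transitions can be captured as a small lookup table; the event names (`admit`, `dispatch`, `timeout`, `wait_event`, `event_done`, `exit`) are illustrative labels rather than terminology from the text:

```python
# The five-state process model as a transition table; the event
# names are illustrative labels, not terms from the text.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "timeout"): "ready",
    ("running", "wait_event"): "waiting",
    ("waiting", "event_done"): "ready",
    ("running", "exit"): "terminated",
}

def step(state, event):
    """Return the next state for (state, event)."""
    return TRANSITIONS[(state, event)]

state = "new"
for event in ("admit", "dispatch", "wait_event",
              "event_done", "dispatch", "exit"):
    state = step(state, event)
print(state)  # terminated
```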
8. Define process control block. List out the data fields associated with a PCB. (2 marks, CO2, K1)
Each process is represented in the operating system by a process control block (PCB), also called a task control block. Its fields include:
Process state
Process number
Program counter
CPU registers
Memory limits
List of open files
CPU scheduling information
Memory management information
Accounting information
I/O status information
9. List out the benefits and challenges of thread handling. (2 marks, CO2, K1)
Improved throughput.
Simultaneous and fully symmetric use of multiple processors for computation and I/O.
Superior application responsiveness.
Improved server responsiveness.
Minimized system resource usage.
Program structure simplification.
Better communication.
10. Define context switching. (2 marks, CO2, K1)
Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch.
11. Define monitors. (2 marks, CO2, K1)
A monitor is a high-level synchronization construct. A monitor type is an ADT that presents a set of programmer-defined operations that are provided with mutual exclusion within the monitor.
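Python's `threading.Condition` approximates a monitor: its lock provides the mutual exclusion, and its condition variable supports programmer-defined wait/notify operations. A minimal sketch (the `BoundedCounter` class is hypothetical):

```python
import threading

# A monitor-style bounded counter: the Condition's lock provides
# mutual exclusion, and wait()/notify_all() let threads sleep and
# wake inside the monitor. The BoundedCounter class is illustrative.
class BoundedCounter:
    def __init__(self, limit):
        self.limit = limit
        self.value = 0
        self.cond = threading.Condition()   # lock + condition variable

    def increment(self):
        with self.cond:                     # enter the monitor
            while self.value >= self.limit:
                self.cond.wait()            # wait inside the monitor
            self.value += 1
            self.cond.notify_all()          # wake waiting threads

    def decrement(self):
        with self.cond:
            while self.value <= 0:
                self.cond.wait()
            self.value -= 1
            self.cond.notify_all()

c = BoundedCounter(limit=1)
c.increment()
print(c.value)  # 1
```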
12. Differentiate a thread from a process. (2 marks, CO2, K2)
Threads:
Share memory by default
Share file descriptors
Share file-system context
Share signal handling
Processes:
Do not share memory by default
Most file descriptors are not shared
Do not share file-system context
Do not share signal handling
13. Define mutual exclusion. (2 marks, CO2, K1)
Mutual exclusion is the requirement that no two processes or threads be in their critical sections at the same time; i.e., if process Pi is executing in its critical section, then no other process can be executing in its critical section.
14. Define semaphore. Mention its importance in the operating system. (2 marks, CO2, K1)
A semaphore S is a synchronization tool: an integer value that, apart from initialization, is accessed only through two standard atomic operations, wait and signal.
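The wait and signal operations map onto `acquire` and `release` of Python's `threading.Semaphore`; a minimal sketch showing a binary semaphore protecting a shared counter (the names are illustrative):

```python
import threading

# A binary semaphore implementing wait (acquire) and signal
# (release) around a shared counter; names are illustrative.
sem = threading.Semaphore(1)    # binary semaphore guarding `count`
count = 0

def worker():
    global count
    for _ in range(10000):
        sem.acquire()           # wait (P)
        count += 1              # critical section
        sem.release()           # signal (V)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(count)  # 40000: no updates were lost
```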
Evaluation Scheme:
Explanation – 2mark
Solutions – 1mark
Evaluation Scheme:
Explanation – 2mark
Diagram – 1mark
3 CO2 K2
Evaluation Scheme:
Explanation – 2mark
Diagram – 1mark
Process number
Program counter
CPU registers
Memory limits
List of open files 3 CO2 K2
CPU scheduling information
Memory management information
Evaluation Scheme:
Explanation – 2mark
Diagram – 1mark
7. Discuss the reasons for termination of a child process by its parent process. (3 marks, CO2, K2)
The child has exceeded its usage of some of the allocated resources.
The task assigned to the child is no longer required.
The parent is exiting, and the operating system does not allow a child to continue if its parent terminates (cascading termination).
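The "task no longer required" case can be sketched with the standard `subprocess` module, where a parent terminates a child it no longer needs (illustrative only; on POSIX the nonzero return code reflects the signal that killed the child):

```python
import subprocess
import sys

# The parent spawns a child that would run for a long time, then
# terminates it because its task is no longer required.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"])
child.terminate()           # ask the OS to kill the child (SIGTERM)
returncode = child.wait()   # reap the child; nonzero means abnormal exit
print(returncode)
```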
Evaluation Scheme:
Explanation – 3mark
8. Discuss the features of a monitor. (3 marks, CO2, K2)
A monitor is a synchronization mechanism in operating systems that manages access to shared resources. The key features are:
1. Encapsulation of shared resources (1 mark)
2. Automatic mutual exclusion (1 mark)
3. Condition variables (1 mark)
Evaluation Scheme:
Explanation of features – 2 marks
Definition – 1 mark
Evaluation Scheme:
Explanation – 3mark
Evaluation Scheme:
Explanation – 3 mark
memory space. It allows for high-speed communication, as
processes can directly read from and write to the shared memory.
4. Semaphores: A synchronization tool used to manage access to
shared resources. Semaphores prevent race conditions by
ensuring that only one process can access a shared resource at a
time.
5. Sockets: Facilitate communication between processes on
different machines or the same machine over a network. Sockets
support both connection-oriented communication (TCP) and
connectionless communication (UDP).
IPC plays a vital role in modern operating systems, enabling processes to
share data, synchronize their actions, and perform complex tasks in a
coordinated manner. It is used extensively in client-server systems,
parallel processing, and distributed computing.
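Pipe-based IPC between a parent and a child process can be sketched with the POSIX `fork` and `pipe` calls exposed by Python's `os` module (POSIX-only, illustrative):

```python
import os

# IPC through a pipe: the parent sends a message, the child
# receives it and reports success via its exit status (POSIX-only).
r, w = os.pipe()
pid = os.fork()
if pid == 0:                        # child process
    os.close(w)                     # child only reads
    msg = os.read(r, 100)           # receive through the pipe
    os.close(r)
    os._exit(0 if msg == b"hello" else 1)
else:                               # parent process
    os.close(r)                     # parent only writes
    os.write(w, b"hello")           # send through the pipe
    os.close(w)
    _, status = os.waitpid(pid, 0)  # wait for the child to exit
    ok = os.waitstatus_to_exitcode(status) == 0
    print(ok)  # True
```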
Diagram (2 marks)
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
that writers have exclusive access while allowing concurrent
reading.
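The first readers–writers pattern (writers get exclusive access while readers may overlap) can be sketched with a semaphore and a reader count; the names below are illustrative:

```python
import threading

# First readers-writers solution: a writer takes `resource`
# exclusively; the first reader locks writers out and the last
# reader lets them back in. Names are illustrative.
resource = threading.Semaphore(1)   # held by a writer or by the readers
mutex = threading.Lock()            # protects read_count
read_count = 0
value = 0                           # the shared data
seen = []                           # what the readers observed

def writer():
    global value
    resource.acquire()              # exclusive access
    value += 1
    resource.release()

def reader():
    global read_count
    with mutex:
        read_count += 1
        if read_count == 1:
            resource.acquire()      # first reader blocks writers
    seen.append(value)              # reading may overlap with other readers
    with mutex:
        read_count -= 1
        if read_count == 0:
            resource.release()      # last reader readmits writers

w = threading.Thread(target=writer)
w.start(); w.join()                 # writer runs first, sets value to 1
readers = [threading.Thread(target=reader) for _ in range(3)]
for t in readers: t.start()
for t in readers: t.join()
print(seen)  # [1, 1, 1]
```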
Evaluation Scheme:
Explanation – 5mark
o Page tables: In case of paging, this keeps track of the
process’s pages.
o Segment tables: In case of segmentation, this keeps
track of the segments of the process.
6. Accounting Information:
o Information such as the amount of CPU time used, user
time, system time, and other statistics related to process
resource usage.
7. I/O Status Information:
o Information about I/O devices allocated to the process
and the status of any I/O operations (e.g., files the
process has open, devices it is using, etc.).
8. Scheduling Information:
o Information such as the priority of the process, pointers
to scheduling queues (e.g., ready queue, waiting queue),
and any other process-related data used by the scheduler
to make decisions about when to run the process.
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
4. Illustrate the three requirements for the critical-section problem and provide solutions for the critical-section problem. (5 marks, CO2, K3)
Critical Section Problem
The critical section problem arises in a multiprocessor or
multithreaded environment where multiple processes or threads share
common resources, such as memory or files. The critical section is the
part of the program where a process accesses or modifies shared
resources. The goal is to ensure that when one process is executing in its
critical section, no other process can enter its own critical section to avoid
race conditions and data inconsistencies.
Three Requirements for Solving the Critical Section Problem (4 marks)
To solve the critical section problem, three essential requirements must
be met:
1. Mutual Exclusion:
o Requirement: Only one process or thread should be
allowed to execute in the critical section at any given
time. If one process is in its critical section, all other
processes must be excluded from entering their critical
sections until the first one exits.
o Explanation: This prevents race conditions where
multiple processes simultaneously access shared
resources and lead to incorrect results.
2. Progress:
o Requirement: If no process is currently in the critical
section and one or more processes are waiting to enter,
then one of the waiting processes must be allowed to
enter the critical section in a finite amount of time.
o Explanation: The solution should not cause processes to
wait indefinitely (no deadlock). As soon as the critical
section is available, one of the waiting processes should
be granted access.
3. Bounded Waiting:
o Requirement: There must be a limit on the number of
times a process can be bypassed by other processes
before it is allowed to enter the critical section.
o Explanation: This prevents starvation, where some
processes could be perpetually blocked from entering
their critical section because other processes keep
entering.
Solutions to the Critical Section Problem
Various solutions exist to satisfy these three conditions. Here are a few
classic ones:
1. Peterson's Algorithm (for Two Processes):
o Description: A simple and effective software-based
solution for two processes. Each process has a flag to
indicate if it wants to enter the critical section, and there
is a shared variable to ensure mutual exclusion.
o How it works:
Both processes set their flags and use a shared
variable turn to decide who gets to enter the
critical section.
Mutual exclusion is ensured because at most one
process can set its flag and be allowed into the
critical section based on the value of turn.
o Meets Requirements:
Mutual Exclusion: Only one process can enter
the critical section at a time.
Progress: If no process is in the critical section,
one of the waiting processes is allowed to enter.
Bounded Waiting: A process can wait at most
one turn before entering.
2. Semaphore Solution:
o Description: Semaphores (a type of synchronization
primitive) can be used to solve the critical section
problem by controlling access to the critical section.
o How it works:
A semaphore is initialized to 1, representing the
availability of the critical section.
A process uses the P (wait) operation to
decrement the semaphore before entering the
critical section and the V (signal) operation to
increment it after exiting the critical section.
If the semaphore value is 0, processes must wait
until it is incremented (indicating that the critical
section is available).
o Meets Requirements:
Mutual Exclusion: Only one process can
decrement the semaphore and enter the critical
section at a time.
Progress: If no process is in the critical section,
one waiting process will be allowed to enter.
Bounded Waiting: Semaphores guarantee that
processes will not wait forever to enter the
critical section.
3. Monitors:
o Description: A higher-level synchronization mechanism
that provides an abstraction to manage mutual exclusion
and condition synchronization. Monitors encapsulate
shared resources and provide procedures to access them.
o How it works:
A monitor contains a lock and condition
variables to allow only one process to access the
shared resource at a time. If another process
needs to wait, it can do so using wait() and
signal() operations on condition variables.
o Meets Requirements:
Mutual Exclusion: Only one process can
execute within a monitor at a time.
Progress: If no process is in the critical section,
one of the waiting processes is allowed to enter.
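Peterson's algorithm as described can be sketched directly; this Python version relies on CPython's GIL for the sequentially consistent memory the algorithm assumes (illustrative, not production code):

```python
import sys
import threading

# Peterson's algorithm for two threads, sketched in Python.
# CPython's GIL gives the sequentially consistent memory this
# algorithm assumes; on real hardware it would need memory fences.
sys.setswitchinterval(0.0005)  # switch threads often so spins stay short

flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # whose turn it is to wait
count = 0               # shared counter guarded by the algorithm

def worker(i):
    global turn, count
    other = 1 - i
    for _ in range(500):
        flag[i] = True
        turn = other                        # yield priority to the other
        while flag[other] and turn == other:
            pass                            # busy-wait (entry section)
        count += 1                          # critical section
        flag[i] = False                     # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)  # 1000: no increments were lost
```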
Evaluation Scheme:
Definition – 1mark
Explanation – 4mark
5. Illustrate in detail semaphores and their usage for solving synchronization problems. (5 marks, CO2, K3)
Semaphores and Their Usage in Solving Synchronization Problems
A semaphore is a synchronization primitive used in operating systems
and concurrent programming to control access to shared resources and to
coordinate the execution of processes or threads. Semaphores help solve
synchronization problems by ensuring that processes or threads access
shared resources in an orderly and controlled manner, avoiding race
conditions, deadlocks, and other concurrency issues.
Definition of Semaphore
A semaphore is an integer variable that is accessed through two atomic
operations:
P (Proberen): Also called the wait or decrement operation, this
is used by a process to request access to a resource. If the
semaphore value is greater than 0, it is decremented, and the
process proceeds. If the semaphore value is 0, the process is
blocked and placed in a waiting queue until the semaphore
becomes positive again.
V (Verhogen): Also called the signal or increment operation,
this is used to release a resource. The semaphore value is
incremented, and if any process is waiting in the queue, it may be
unblocked and allowed to proceed.
Semaphores can be classified into two types:
1. Counting Semaphore: This type allows arbitrary values and is
typically used to manage a pool of resources (e.g., multiple
printers, database connections). The value of a counting
semaphore indicates how many resources are available.
2. Binary Semaphore (Mutex): A binary semaphore can only have
two values (0 or 1). It is used to implement mutual exclusion
(mutex), ensuring that only one process or thread can access the
critical section at any time.
Usage of Semaphores for Synchronization
Semaphores are used to solve a variety of synchronization problems in
concurrent systems. Below are some key applications:
1. Mutual Exclusion (Mutex)
Problem: When multiple processes or threads need to access
shared resources (like variables or files), mutual exclusion
ensures that only one process or thread can access a resource at a
time to prevent race conditions.
Solution with Semaphore:
o A binary semaphore is initialized to 1, indicating that the
resource is available.
o Before entering the critical section, a process performs
the P (wait) operation to check the semaphore.
o If the semaphore value is 1, it is decremented to 0, and
the process enters the critical section.
o After exiting the critical section, the process performs the
V (signal) operation, incrementing the semaphore to 1,
indicating that the resource is now available for other
processes.
This guarantees that only one process can access the critical section at a
time, thereby preventing race conditions.
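A minimal sketch of this scheme, assuming Python's threading.Semaphore as the binary semaphore (the worker function and iteration counts are illustrative):

```python
import threading

counter = 0                          # shared data
mutex = threading.Semaphore(1)       # binary semaphore, initialized to 1

def worker():
    global counter
    for _ in range(100_000):
        mutex.acquire()              # P (wait): enter the critical section
        counter += 1                 # critical section
        mutex.release()              # V (signal): leave the critical section

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # -> 200000 (no lost updates)
```

Without the semaphore, the two read-increment-write sequences could interleave and the final count would usually be less than 200000.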
2. Producer-Consumer Problem
Problem: The producer-consumer problem involves two
processes (a producer and a consumer) that share a fixed-size
buffer. The producer generates data and places it into the buffer,
while the consumer consumes the data. The challenge is to ensure
that the producer does not produce data when the buffer is full,
and the consumer does not try to consume from an empty buffer.
Solution with Semaphore:
o Two semaphores are used:
1. Empty Semaphore: Initialized to the size of the
buffer, it tracks the number of empty slots in the
buffer.
2. Full Semaphore: Initialized to 0, it tracks the
number of items in the buffer.
o The producer performs the P (wait) operation on the
empty semaphore before adding data to the buffer (if
there is space).
o The consumer performs the P (wait) operation on the full
semaphore before consuming data from the buffer (if
there is data).
o After adding or consuming data, the producer and
consumer each perform the V (signal) operation on the
corresponding semaphores to update the state of the
buffer.
This ensures that the producer and consumer are synchronized and the
buffer does not overflow or underflow.
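The two-semaphore scheme can be sketched as follows (a Python illustration with a single producer and consumer; the buffer size and item count are arbitrary assumptions):

```python
import threading
from collections import deque

BUFFER_SIZE = 4
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)   # counts empty slots
full = threading.Semaphore(0)              # counts filled slots
mutex = threading.Semaphore(1)             # protects the buffer itself
consumed = []

def producer(n):
    for item in range(n):
        empty.acquire()                    # P(empty): wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()                     # V(full): one more item available

def consumer(n):
    for _ in range(n):
        full.acquire()                     # P(full): wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                    # V(empty): one more free slot

N = 20
p = threading.Thread(target=producer, args=(N,))
c = threading.Thread(target=consumer, args=(N,))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(N)))   # -> True
```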
3. Readers-Writers Problem
Problem: The readers-writers problem involves processes that
read from and write to a shared resource. Readers can access the
resource simultaneously, but writers require exclusive access.
The challenge is to allow concurrent reading but ensure that
writers have exclusive access when they need it.
Solution with Semaphore:
o A mutex semaphore is used to ensure mutual exclusion
for the writer.
o A read-count semaphore is used to keep track of the
number of readers currently accessing the resource.
o When a reader enters, it increments the read-count and
checks if it is the first reader (to prevent writers from
accessing the resource). If it is the first reader, the mutex
semaphore is used to block writers.
o When a reader leaves, it decrements the read-count, and
if it is the last reader, the mutex semaphore allows
waiting writers to enter.
o Writers perform the P (wait) operation on the mutex to
ensure exclusive access, preventing both other writers
and readers from accessing the resource during writing.
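A first-readers-preference sketch of this solution in Python (the thread counts and increment totals are illustrative assumptions):

```python
import threading

shared_value = 0
read_count = 0
rc_lock = threading.Semaphore(1)     # protects read_count
wrt = threading.Semaphore(1)         # writers need exclusive access

def reader(results):
    global read_count
    with rc_lock:
        read_count += 1
        if read_count == 1:          # first reader locks out writers
            wrt.acquire()
    results.append(shared_value)     # reading may overlap with other readers
    with rc_lock:
        read_count -= 1
        if read_count == 0:          # last reader lets writers back in
            wrt.release()

def writer():
    global shared_value
    for _ in range(1000):
        with wrt:                    # exclusive access while writing
            shared_value += 1

results = []
threads = [threading.Thread(target=writer) for _ in range(2)]
threads += [threading.Thread(target=reader, args=(results,)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_value)   # -> 2000
```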
4. Deadlock Avoidance
Problem: Deadlock occurs when two or more processes are each
waiting for the other to release resources, causing them to be
stuck in an infinite wait cycle.
Solution with Semaphore:
o Semaphores can be used with timeouts or resource
ordering to avoid deadlocks.
o For instance, by defining an ordering of resource
requests (e.g., always requesting resource A before
resource B), deadlock can be prevented because
processes will not hold one resource while waiting for
another in a circular wait pattern.
o Additionally, the bounded waiting property of
semaphores ensures that no process waits indefinitely,
thus preventing starvation and reducing the chances of
deadlock.
Evaluation Scheme:
Explanation – 3mark
Usage – 2mark
and locks.
3. The key advantage of using monitors for process synchronization
is that they provide a simple, high-level abstraction that can be
used to implement complex concurrent systems. Monitors also
ensure that synchronization is encapsulated within the module,
making it easier to reason about the correctness of the system.
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
7. Determine the role of pipes in inter-process communication. How are 5 CO2 K3
named pipes different from unnamed pipes?
Role of Pipes in Inter-Process Communication (IPC)
Pipes are one of the simplest and most widely used mechanisms for
Inter-Process Communication (IPC) in operating systems. They
provide a way for processes to communicate with each other by sending
and receiving data. A pipe allows one process to write data to the pipe,
while another process can read from the pipe, enabling communication
between the processes. This form of communication is especially useful
in scenarios where data needs to be passed between a producer and a
consumer process, or in a client-server architecture.
Pipes can be used for communication between related processes (e.g.,
parent-child processes) or even between unrelated processes, depending
on how they are configured.
Characteristics of Pipes:
1. Unidirectional Communication: Pipes are typically
unidirectional, meaning data flows in one direction — from the
writing process (producer) to the reading process (consumer).
However, two pipes can be used for full-duplex communication
(i.e., two-way communication).
2. Buffered Communication: When data is written to the pipe, it
may be buffered by the operating system until the receiving
process reads it. This buffer allows the writer and reader to
operate asynchronously.
3. Simplicity and Speed: Pipes are efficient for passing small
amounts of data between processes, and the kernel optimizes the
communication, making it faster than many higher-level IPC
methods.
4. Blocking Behavior: If a process tries to read from an empty pipe
or write to a full pipe, the operating system can block the process
until there is data to read or space to write, preventing busy-
waiting.
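A minimal unnamed-pipe sketch in Python (kept in a single process for brevity; in practice the two ends are typically inherited by a parent and a child process). A named pipe (FIFO) would instead be created with os.mkfifo(path), giving it a filesystem name that unrelated processes can open:

```python
import os

r, w = os.pipe()                 # unnamed pipe: r = read end, w = write end

os.write(w, b"hello through the pipe")   # the "producer" side writes...
os.close(w)                              # closing the write end signals EOF

data = os.read(r, 1024)                  # ...the "consumer" side reads
os.close(r)
print(data.decode())   # -> hello through the pipe
```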
Evaluation Scheme:
Explanation – 3mark
Limitation – 2mark
9. Explain in detail about semaphores, their usage, the implementation 5 CO2 K2
used to avoid busy waiting, and binary semaphores.
A semaphore is a synchronization primitive used to manage concurrent
access to shared resources in a multi-process or multi-threaded
environment. It is an integer variable used for controlling access to
resources such that only a certain number of processes or threads can
access the resource at a time. Semaphores are widely used in operating
systems for process synchronization to avoid issues like race conditions
and deadlocks.
Types of Semaphores (3 marks)
1. Counting Semaphore:
o A counting semaphore allows an arbitrary number of
processes to access a resource concurrently. The value of
the semaphore represents the number of available
resources.
o For example, if there are 5 identical printers, the
semaphore value is initialized to 5. When a process
requests access to a printer, the semaphore is
decremented. When the process is done, the semaphore is
incremented, signaling that the resource is now available.
2. Binary Semaphore (Mutex):
o A binary semaphore is a special case of counting
semaphores, where the value is restricted to either 0 or 1.
This kind of semaphore is used for mutual exclusion
(mutex).
o A binary semaphore is typically used to ensure that only
one process can access a critical section of code at any
given time.
Usage of Semaphores
Semaphores are typically used for the following purposes in a multi-
processing or multi-threading environment:
1. Mutual Exclusion:
o Semaphores can be used to ensure that only one process
or thread can access a critical section of code at any
given time. This prevents race conditions when multiple
processes try to modify shared data concurrently.
2. Synchronization:
o Semaphores help synchronize processes or threads that
need to wait for certain conditions to be met. For
example, a process might need to wait for another
process to produce some data before it can consume it
(producer-consumer problem).
3. Avoiding Deadlocks:
o Semaphores can help in preventing deadlocks by using
timeout mechanisms or limiting the number of resources
available to processes.
4. Resource Allocation:
o Semaphores are used to manage access to a limited
number of resources. For instance, in a situation where
multiple processes need access to a set of printers or
database connections, a counting semaphore can track
the number of available resources.
Evaluation Scheme:
Explanation – 3mark
Types – 2mark
10. Define race condition. Explain how a race condition can be avoided 5 CO2 K2
in critical section problem. Formulate a solution to the dining
philosopher problem so that no race condition arises.
Race Condition
A race condition occurs when multiple processes or threads access
shared resources concurrently and at least one of them modifies the
resource, leading to unpredictable or incorrect behavior. This typically
happens in systems where multiple threads or processes execute in
parallel and their operations are not properly synchronized. The outcome
depends on the relative timing or interleaving of the processes, hence the
term "race."
Race conditions are problematic because the final state of the shared
resource may depend on the sequence of execution, which can lead to
inconsistent results. It often leads to bugs that are difficult to detect and
reproduce since the result can vary each time the program is run.
For example, consider two processes that increment the same global
variable. If both processes read the variable, increment it, and then write
it back simultaneously, the result will not be correct because the value is
being overwritten. This is a classic case of a race condition.
philosophers to only pick up the forks when both are available,
starvation is avoided, and each philosopher can eventually eat.
semaphore mutex = 1;              // guards fork pickup
semaphore fork[5] = {1, 1, 1, 1, 1};

// Philosopher function
void philosopher(int i) {
    while (true) {
        think();                      // Philosopher is thinking
        wait(mutex);                  // only one philosopher picks up forks at a time
        wait(fork[i]);                // pick up left fork
        wait(fork[(i + 1) % 5]);      // pick up right fork
        signal(mutex);
        eat();                        // Philosopher is eating
        signal(fork[i]);              // put down left fork
        signal(fork[(i + 1) % 5]);    // put down right fork
    }
}

// Main function
int main() {
    // Create philosopher threads
    for (int i = 0; i < 5; i++) {
        create_thread(philosopher, i);
    }
}
Explanation of the Algorithm:
1. Mutex: The mutex ensures that only one philosopher can pick up
the forks at a time. The mutual exclusion ensures that if one
philosopher is picking up forks, no other philosopher can
simultaneously check the availability of forks.
2. Fork Semaphores: Each fork has a semaphore. If a philosopher
wants to pick up a fork, they must wait until the fork's semaphore
is available (i.e., set to 1). After using the fork, the philosopher
releases it by signaling the semaphore (setting it back to 1).
3. Philosopher Behavior: The philosopher picks up both forks and
only starts eating if both forks are available. After eating, they
put down the forks so others can use them.
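The scheme above can also be written out as a runnable sketch (a Python rendering of the pseudocode; the number of philosophers and meals per philosopher are illustrative assumptions):

```python
import threading

N, MEALS = 5, 3
forks = [threading.Semaphore(1) for _ in range(N)]
mutex = threading.Semaphore(1)       # one philosopher picks up forks at a time
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    for _ in range(MEALS):
        # think()
        with mutex:                  # serializes pickup, preventing circular wait
            forks[left].acquire()    # pick up left fork
            forks[right].acquire()   # pick up right fork
        meals[i] += 1                # eat()
        forks[left].release()        # put down both forks
        forks[right].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # -> [3, 3, 3, 3, 3]: every philosopher eventually eats
```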
Unit – III
Unit Contents: CPU Scheduling and Deadlock Management
CPU Scheduling: Scheduling Criteria - Scheduling Algorithms. Deadlocks: Deadlock
Characterization - Methods for Handling Deadlocks - Deadlock Prevention - Deadlock
Avoidance - Deadlock Detection - Recovery from Deadlock. Case Study: Real Time CPU
scheduling
complete from when it's submitted.
7. What is the objective of the Shortest Remaining Time First (SRTF)
scheduling algorithm? 2 CO3 K1
To minimize average waiting time by selecting the process with the
shortest remaining time for execution.
8. List out the criteria for CPU Scheduling.
CPU utilization, Throughput, waiting time, Turn Around Time and 2 CO3 K1
Response time.
9. Define deadlock in an operating system.
A deadlock occurs when a set of processes is in a state where each 2 CO3 K1
process is waiting for a resource held by another process, leading to
indefinite blocking.
10. List the four necessary conditions for deadlock. 2 CO3 K1
Mutual Exclusion, Hold and Wait, No Preemption, and Circular Wait.
11. What is the difference between deadlock prevention and deadlock
avoidance?
Deadlock prevention ensures at least one of the necessary conditions for 2 CO3 K1
deadlock is never satisfied, while deadlock avoidance dynamically
checks the resource-allocation state to avoid unsafe states.
12. What is a safe state in deadlock management?
A state is safe if the system can allocate resources to all processes in 2 CO3 K1
some order without leading to a deadlock.
13. Name two methods of handling deadlocks. 2 CO3 K1
Deadlock prevention and deadlock detection.
14. What is the purpose of the Banker's Algorithm?
The Banker's Algorithm is used to avoid deadlocks by checking resource 2 CO3 K1
allocation requests against system safety.
15. What is deadlock recovery?
Deadlock recovery is the process of regaining normal system operation 2 CO3 K1
after a deadlock, often by terminating processes or preempting resources.
Evaluation Scheme:
Explanation I/O bound – 1.5mark
Explanation CPU bound – 1.5mark
Evaluation Scheme:
Explanation – 2mark
Strategy – 1mark
Evaluation Scheme:
Explanation – 3mark
Evaluation Scheme:
Explanation – 3mark
Evaluation Scheme:
Explanation – 2mark
Types – 1mark
6. Can a system detect that some of its threads are starving? If you 3 CO3 K2
answer "yes," explain how it can. If you answer "no," explain how
the system can deal with the starvation problem.
Yes, a system can detect thread starvation.
1. Detection of Starvation: The system can monitor the waiting time of
each thread in the resource queue. If a thread's waiting time exceeds a
predefined threshold while other threads are continuously granted
resources, the system can infer that the thread is starving.
2. How Detection Works: Implementing a timestamp or counter for each
thread’s request can help track how long it has been waiting. If the
thread's wait time becomes unusually long compared to others, it
indicates starvation.
3. Dealing with Starvation: To address starvation, the system can
implement priority adjustments, such as boosting the priority of a
starving thread or using aging techniques to ensure it eventually gets the
required resources.
Evaluation Scheme:
Explanation – 3mark
Deadlock vs. Starvation 3 CO3 K2
Deadlock: a situation where no process can proceed with execution.
Starvation: a situation where a low-priority process is kept blocked by
high-priority processes.
Deadlock is infinite waiting; starvation is a long wait, but not infinite.
Every deadlock is also a starvation, but every starvation need not be a
deadlock.
Evaluation Scheme:
Explanation Deadlock – 1.5mark
Explanation Starvation – 1.5mark
Evaluation Scheme:
Explanation – 3mark
Evaluation Scheme:
Explanation – 3mark
Evaluation Scheme:
Explanation – 3mark
*Q. Nos. 1-5 are from the first half of the unit syllabus and Q. Nos. 6-10
from the second half.
Evaluation Scheme:
Explanation – 3mark
Examples – 2mark
2. Consider the following processes with length of CPU burst time in
milliseconds.
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
All processes arrived in order P1, P2, P3, P4 and P5 at time zero.
1) Draw Gantt charts illustrating execution of these processes for SJF,
Non-Preemptive priority (smaller priority number implies a higher
priority) & Round Robin (Quantum=1).
2) Calculate turnaround time for each process for scheduling
algorithms mentioned in part (1)
3) Calculate waiting time for each scheduling algorithms listed in
part (1)
b) Priority - Non-Preemptive
Smaller priority number indicates a higher priority. At time t=0, P1, P2,
P3, P4 and P5 are available, but as per the priority the execution order
changes to P2 -> P5 -> P1 -> P3 -> P4.
Gantt Chart:
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
Process   Burst time   Priority   WT = ST - AT   TAT = WT + BT
P1        10           3          6              16
P2        1            1          0              1
P3        2            3          16             18
P4        1            4          18             19
P5        5            2          1              6
Average                           41/5 = 8.2 ms  60/5 = 12.0 ms
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
What are the various criteria for a good process scheduling
3. algorithm? Explain any two Preemptive scheduling algorithms in 5 CO3 K2
brief.
Criteria for a Good Process Scheduling Algorithm:
1. CPU Utilization: Maximize the CPU's usage, ensuring it remains
as busy as possible.
2. Throughput: Increase the number of processes completed per unit
of time.
3. Turnaround Time: Minimize the time taken from process
submission to its completion.
4. Waiting Time: Reduce the time processes spend waiting in the
ready queue.
5. Response Time: Minimize the time from submission of a process
to the first response it produces.
6. Fairness: Ensure all processes get equitable CPU time, avoiding
starvation.
7. Scalability: Adapt well to varying loads and process numbers.
Preemptive Scheduling Algorithms:
A) Shortest Remaining Time First (SRTF):
An extension of Shortest Job First (SJF), where the CPU is
preempted if a new process arrives with a burst time smaller than the
remaining burst time of the currently running process.
Advantages: Optimizes turnaround and waiting times for shorter
processes.
Disadvantages: May lead to process starvation for long-running jobs.
B) Round Robin (RR):
Each process is assigned a fixed time quantum. If the process
doesn't finish within the quantum, it is preempted and placed back in the
ready queue.
Advantages: Fair allocation of CPU time, ensures no process is starved.
Disadvantages: Performance depends on the choice of time quantum;
too small causes overhead, too large resembles First-Come-First-Served
(FCFS).
Evaluation Scheme:
Explanation – 3mark
Advantages and Disadvantages – 2mark
4. Consider the following processes with length of CPU burst time in
milliseconds.
Process Arrival Time Burst Time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
1) Draw Gantt charts illustrating execution of these processes for
FCFS, SJF Non-Preemptive & Round Robin (Quantum=2 ms).
2) Calculate turnaround time for each process for scheduling
algorithms mentioned in part (1)
3) Calculate waiting time for each scheduling algorithms listed in
part (1)
a) First Come First Served (FCFS) 5 CO3 K2
FCFS executes the processes in the order of their arrival. At time t=0, P4
alone, followed by P2 at t=1, P5 at t=2, P1 at t=3 and P3 at t=4, as per
this the execution order changes to P4->P2->P5->P1->P3.
Gantt Chart:
| P4 | P2 | P5 | P1 | P3 |
0    6    10   13   14   16
Process   Arrival time   Burst time   WT = ST - AT   TAT = WT + BT
P1        3              1            10             11
P2        1              4            5              9
P3        4              2            10             12
P4        0              6            0              6
P5        2              3            8              11
Average                               33/5 = 6.6 ms  49/5 = 9.8 ms
Among the given scheduling algorithms, SJF yields the lowest average
waiting time and average turnaround time, so it is the best choice for
the given data.
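The FCFS figures above can be checked with a short calculation (a sketch; the function and variable names are assumptions for illustration):

```python
def fcfs(processes):
    """processes: (name, arrival, burst) tuples; returns {name: (WT, TAT)}."""
    order = sorted(processes, key=lambda p: p[1])    # serve in arrival order
    time, stats = 0, {}
    for name, arrival, burst in order:
        start = max(time, arrival)
        waiting = start - arrival                    # WT = start time - arrival time
        stats[name] = (waiting, waiting + burst)     # TAT = WT + burst time
        time = start + burst
    return stats

procs = [("P1", 3, 1), ("P2", 1, 4), ("P3", 4, 2), ("P4", 0, 6), ("P5", 2, 3)]
stats = fcfs(procs)
avg_wt = sum(wt for wt, _ in stats.values()) / len(stats)
avg_tat = sum(tat for _, tat in stats.values()) / len(stats)
print(avg_wt, avg_tat)   # -> 6.6 9.8
```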
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
5. Infer the following set of processes, with the length of the CPU burst 5 CO3 K2/K3
and arrival time given in milliseconds.
Process Burst time (B.T) Arrival time(A.T)
P1 8 0.00
P2 4 1.001
P3 9 2.001
P4 5 3.001
P5 3 4.001
Draw four Gantt charts that illustrate the execution of these processes
using the following scheduling algorithms: FCFS, SJF and RR
(quantum=2ms) scheduling. Calculate waiting time and turnaround time
of each process for each of the scheduling algorithms and find the
average waiting time and average turnaround time.
FCFS:
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
6. Explain the Banker's Algorithm implementation with an example. 5 CO3 K2
The Banker's Algorithm is used to avoid deadlocks by ensuring that
resource allocation never moves the system into an unsafe state. Before
granting a request, it checks whether the resources can be safely allocated.
Example (five processes P0-P4, three resource types A, B, C):
Available: [3, 3, 2]
Allocation (A): P0 [0, 1, 0], P1 [2, 0, 0], P2 [3, 0, 2], P3 [2, 1, 1], P4 [0, 0, 2]
Max (M): P0 [7, 5, 3], P1 [3, 2, 2], P2 [9, 0, 2], P3 [2, 2, 2], P4 [4, 3, 3]
Need (N = M - A): P0 [7, 4, 3], P1 [1, 2, 2], P2 [6, 0, 0], P3 [0, 1, 1], P4 [4, 3, 1]
The safety algorithm repeatedly checks whether the available resources can
satisfy some process's Need; that process then finishes and releases its
allocation. Here the processes can complete in the sequence P1, P3, P4,
P2, P0, so the state is safe.
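The safety check itself is short enough to sketch (a Python illustration run on the classic five-process data set; the function and variable names are assumptions):

```python
def is_safe(available, allocation, need):
    """Safety algorithm: repeatedly find a process whose Need fits in Work,
    pretend it finishes, and add its Allocation back to Work."""
    work, n = available[:], len(allocation)
    finished, sequence = [False] * n, []
    while len(sequence) < n:
        for i in range(n):
            if not finished[i] and all(x <= w for x, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                break
        else:
            return False, sequence       # no runnable process left: unsafe
    return True, sequence

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(maximum, allocation)]

safe, seq = is_safe(available, allocation, need)
print(safe, seq)   # -> True [1, 3, 0, 2, 4]
```

Any safe sequence is acceptable; this greedy scan happens to find P1, P3, P0, P2, P4.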
Evaluation Scheme:
Explanation – 3mark
Example – 2mark
7. Consider the following snapshot of a system: 5 CO3 K2
Tasks   Allocation   Max       Available
        A B C D      A B C D   A B C D
T0      0 0 1 2      0 0 1 2   1 5 2 0
T1      1 0 0 0      1 7 5 0
T2      1 3 5 4      2 3 5 6
T3      0 6 3 2      0 6 5 2
T4      0 0 1 4      0 6 5 6
Explain the following questions using the banker’s algorithm:
a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If a request from thread T1 arrives for (0,4,2,0), can the request be
granted immediately?
a. Need Matrix
Need = Max – Allocation
Tasks A B C D
T0 0 0 0 0
T1 0 7 5 0
T2 1 0 0 2
T3 0 0 2 0
T4 0 6 4 2
b. Is the system in a safe state?
Yes. Starting with Available = (1, 5, 2, 0), the tasks can finish in the
sequence T0, T2, T3, T4, T1: each task's Need can be met in turn, and the
Available vector grows as each finishing task releases its allocation, so
the system is in a safe state.
c. If a request from thread T1 arrives for (0,4,2,0), can the
request be granted immediately?
Step 1: Request <= Need? (Resource-Request Algorithm)
(0, 4, 2, 0) <= (0, 7, 5, 0) - Yes
Step 2: Request <= Available?
(0, 4, 2, 0) <= (1, 5, 2, 0) - Yes
Step 3: Pretend the allocation is made:
Available = Available - Request = (1, 5, 2, 0) - (0, 4, 2, 0) = (1, 1, 0, 0)
T1: Allocation = Allocation + Request = (1, 0, 0, 0) + (0, 4, 2, 0) = (1, 4, 2, 0)
T1: Need = Need - Request = (0, 7, 5, 0) - (0, 4, 2, 0) = (0, 3, 3, 0)
Running the safety algorithm on the resulting state (Available =
(1, 1, 0, 0)) still yields a safe sequence (T0, T2, T3, T4, T1), so the
transaction is completed and T1 is immediately granted its additional
resources. Had the new state been unsafe, T1 would have to wait and the
old resource-allocation state would be restored.
Evaluation Scheme:
Explanation – 3mark
Example – 2mark
8. Suppose that a system is in an unsafe state. Illustrate that it is 5 CO3 K2
possible for the threads to complete their execution without entering
a deadlocked state.
If a system is in an unsafe state, it is not necessarily deadlocked; it
only means that careful resource allocation is required to avoid deadlock.
1. Unsafe State Definition: An unsafe state indicates that not all
resource-allocation sequences are guaranteed to let every thread complete,
but it does not imply that deadlock has already occurred.
2. Thread Execution without Deadlock: By carefully selecting the order in
which threads are granted resources, it may still be possible for all
threads to complete their execution. For example, if resources are
allocated to a thread that can immediately finish, it releases its held
resources, which then become available to other threads, potentially
allowing them to finish as well.
3. Example Sequence: Suppose a system has three threads (T1, T2, T3) and
limited resources. Even in an unsafe state, it might still be possible to
grant resources to T1 so it can finish and release its resources, and then
safely allocate resources to T2 and T3, ensuring all threads complete
without deadlock.
This demonstrates that while unsafe states are risky, deadlock can be
avoided with appropriate scheduling and resource management.
Evaluation Scheme:
Definition -1mark
Explanation – 4mark
Evaluation Scheme:
Algorithm -2mark
Explanation – 3mark
10. Consider the following system snapshot using the data structures of 5 CO3 K2
the banker's algorithm, with resources A, B, C and D and processes P0
to P4.
        Max        Allocation   Need      Available
        A B C D    A B C D      A B C D   A B C D
P0      6 0 1 2    4 0 0 1                3 2 1 1
P1      1 7 5 0    1 1 0 0
P2      2 3 5 6    1 2 5 4
P3      1 6 5 3    0 6 3 3
P4      1 6 5 6    0 2 1 2
Using banker’s algorithm, Answer the following questions:
i) Explain How many resources of type A, B, C and D are
available in total?
ii) What are the contents of the need matrix?
iii) Is the system in a safe state? If yes, what is the sequence of
process execution?
i) Total resources of each type = sum of the Allocation column + Available:
A = (4+1+1+0+0) + 3 = 9, B = (0+1+2+6+2) + 2 = 13,
C = (0+0+5+3+1) + 1 = 10, D = (1+0+4+3+2) + 1 = 11.
ii) Need = Max - Allocation:
P0 (2, 0, 1, 1), P1 (0, 6, 5, 0), P2 (1, 1, 0, 2), P3 (1, 0, 2, 0),
P4 (1, 4, 4, 4).
iii) Is the system in a safe state? Why?
The system is in a safe state as the processes can be finished in the
sequence
P0, P2, P4, P1 and P3.
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
Unit – IV
Unit Contents: Memory Management
Main Memory: Swapping - Contiguous Memory Allocation, Segmentation, Paging - Structure of the Page
Table - Virtual Memory: Demand Paging - Page Replacement - Allocation of Frames – Thrashing. Case
study: Virtual machine
2. What are the advantage and disadvantage of Contiguous and Non- 2 CO4 K1
Contiguous memory allocation?
Aspect       Contiguous Allocation      Non-Contiguous Allocation
Overhead     No address-translation     Requires address-mapping
             overhead                   (translation) overhead
Complexity   Simple to implement        Complex; requires additional
                                        data structures
3. What is Address Binding? How are instructions and data bound to 2 CO4 K1
memory?
Address binding is the mapping of a program's instructions and data to
memory addresses. It can occur at three stages:
1. Compile-Time Binding:
o The program's memory addresses are determined at
compile time.
o If the starting memory location is known at compile time,
the compiler generates absolute addresses for instructions
and data.
2. Load-Time Binding:
o The program is compiled into relocatable code, where
addresses are not fixed.
o The operating system determines the actual memory
addresses during program loading.
o Advantage: Allows the program to be loaded into
different memory locations at different times.
3. Execution-Time Binding:
o The binding of addresses occurs during the program's
execution.
o Advantage: Provides maximum flexibility since the
program can be moved within memory while running.
1. Symbolic Addresses:
o Human-readable names for variables and instructions,
used during the source code phase (e.g., int a;).
2. Logical (Virtual) Addresses:
o Addresses generated by the CPU during program
execution.
o These addresses refer to a virtual memory space and are
translated into physical addresses.
3. Physical Addresses:
o The actual addresses in physical memory (RAM), produced
by the memory-management unit (MMU) after translating
logical addresses.
7. Why is a valid/invalid bit used in a page table entry? 2 CO4 K1
The valid/invalid bit indicates whether the corresponding page is present
in physical memory (valid) or not (invalid), helping manage page faults.
8. What is the role of a Translation Lookaside Buffer (TLB) in paging? 2 CO4 K1
The TLB is a small, fast hardware cache of recently used page-table
entries. On a TLB hit, the frame number is obtained without a memory
access to the page table, which speeds up logical-to-physical address
translation.
9. What is the role of the base and limit registers in segmentation? 2 CO4 K1
The base register holds the starting address of a segment in physical
memory, and the limit register specifies the size of the segment. These
ensure a program accesses only its allocated memory segment.
13. What information is stored in a page table entry (PTE)? 2 CO4 K1
A PTE typically stores:
1. Frame number (physical address mapping).
2. Flags such as the valid/invalid bit, read/write permissions, and other
control bits.
Segment Table Base Register
The STBR register points to the location of the segment table in
memory.
Segment Table Limit Register
This register indicates the number of segments used by a program. The
segment number s is legal if s<STLR.
Evaluation Scheme:
Explanation of Segment Table – 1mark
Explanation of Segment Table Base Register – 1mark
Explanation of Segment Table Limit Register – 1mark
3. Illustrate segmentation hardware with appropriate diagram. 3 CO4 K2
Diagram – 1mark
4. Explain the concept of paging in memory management and its main 3 CO4 K2
advantage.
Paging is a memory management technique where the physical memory
is divided into fixed-size blocks called frames, and the logical memory is
divided into blocks of the same size called pages. The operating system
maintains a mapping between logical pages and physical frames using a
page table.
Main Advantage:
Paging eliminates external fragmentation, as any free frame can be
allocated to a process, regardless of its location in physical memory.
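The page-to-frame mapping can be sketched in a few lines (the page size and page-table contents below are invented for illustration):

```python
PAGE_SIZE = 4096                     # assume 4 KB pages
page_table = {0: 5, 1: 9, 2: 3}     # page number -> frame number (illustrative)

def translate(logical_address):
    page = logical_address // PAGE_SIZE     # high-order bits: page number
    offset = logical_address % PAGE_SIZE    # low-order bits: offset in page
    frame = page_table[page]                # page-table lookup
    return frame * PAGE_SIZE + offset       # physical address

print(translate(2 * PAGE_SIZE + 100))   # page 2 -> frame 3: 3*4096+100 = 12388
```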
Evaluation Scheme:
Explanation – 2mark
Advantage – 1mark
5. Summarize the key components of a page table, and what is their 3 CO4 K2
purpose?
Key components of a page table include:
1. Page Number: Identifies the page in the logical address space.
2. Frame Number: Specifies the corresponding frame in the
physical memory where the page is stored.
3. Control Bits:
o Present/Absent Bit: Indicates if the page is in memory
or needs to be fetched from disk.
o Read/Write Bit: Specifies whether the page is read-only
or writable.
o Dirty Bit: Indicates if the page has been modified.
Purpose:
The page table is used to translate logical addresses into physical
addresses and manage memory protection and permissions.
Evaluation Scheme:
Explanation – 2mark
Purpose – 1mark
6. Discuss a multilevel page table, and why is it used? 3 CO4 K2
A multilevel page table is a hierarchical structure that breaks down a
single large page table into smaller tables to reduce memory overhead.
Why it is used:
Reduces memory usage by only allocating page tables for parts
of the address space that are actively used.
Helps manage very large address spaces efficiently by avoiding
the need to store an enormous single-level page table in memory.
For example, in a two-level page table, the first-level table points to
second-level tables, and each second-level table maps pages to frames.
Evaluation Scheme:
Explanation – 2mark
Use – 1mark
Evaluation Scheme:
Explanation – 1mark
Example – 2mark
8. Differentiate Page Fault and Page hit. 3 CO4 K2
A page hit is said to be an event, where the page is found in the main
memory when trying to access it.
A page fault is a critical event in computer systems in which a program
tries to access a page that is not currently present in physical memory
(main memory).
Evaluation Scheme:
Explanation Page Fault– 1.5mark
Explanation Page hit– 1.5mark
9. Summarize Demand Paging. 3 CO4 K2
Demand paging is a technique used in virtual memory systems where
pages enter main memory only when requested or needed by the CPU. In
demand paging, the operating system loads only the necessary pages of a
program into memory at runtime, instead of loading the entire program
into memory at the start.
Evaluation Scheme:
Explanation – 3mark
more time swapping these pages. This state in the operating system is
known as thrashing. Because of thrashing, the CPU utilization is going to
be reduced or negligible.
Evaluation Scheme:
Explanation – 2mark
Diagram – 1mark
Given:
Each page requires one page table entry (PTE), and each PTE is 4 bytes.
The total size of the page table is:
Final Answer:
Evaluation Scheme:
Explanation – 2mark
Example – 3mark
Steps in Segmentation:
1. Logical Address Generation:
o The CPU generates a logical address consisting of a
segment number (s) and an offset (d).
2. Address Translation:
o The segment number (s) is used to find the
corresponding segment descriptor in the segment table.
o The descriptor contains:
Base address: Starting physical address of the
segment.
Limit: Maximum size of the segment.
o The offset (d) is added to the base address to calculate
the physical address.
3. Memory Access:
o The system checks if the offset is within the segment's
limit.
o If valid, access is allowed; otherwise, a segmentation
fault occurs.
Key Components:
1. Segment Table:
o Stores descriptors for all segments.
Advantages:
Disadvantages:
Evaluation Scheme:
Explanation – 2mark
Role of Key Components – 2mark
Advantages and Disadvantages – 1mark
3. Consider a system where both paging and segmentation are used. 5 CO4 K2
The system uses a segment table for logical division and a page table
for each segment. If segment 1 has 3 pages and segment 2 has 2
pages, how will the system manage memory access for a logical
address in segment 1 with a page number of 2? Explain it.
The segment table maps the segment to a starting physical frame.
For segment 1, the page table will contain entries for the 3 pages.
For a logical address in segment 1 with a page number of 2, the
corresponding physical frame will be found in the page table of
segment 1.
The physical address is calculated by combining the frame
number from the page table with the offset in the page.
Evaluation Scheme:
Explanation – 3mark
Example – 2mark
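The lookup described in the answer can be sketched as follows (the frame numbers and the 1 KB page size are assumed for illustration; only the segment/page counts come from the question):

```python
PAGE_SIZE = 1024  # assumed page size for the example

# Per-segment page tables: page number -> physical frame number.
# Frame numbers here are invented for illustration.
page_tables = {
    1: {0: 8, 1: 3, 2: 11},  # segment 1 has 3 pages
    2: {0: 5, 1: 9},         # segment 2 has 2 pages
}

def translate(segment, page, offset):
    """Segment -> its page table -> frame, then frame * PAGE_SIZE + offset."""
    frame = page_tables[segment][page]
    return frame * PAGE_SIZE + offset

# Logical address in segment 1 with page number 2: frame 11 holds the page.
print(translate(1, 2, 100))  # 11 * 1024 + 100 = 11364
```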
4. A system has 4 physical frames and uses a Least Recently Used 5 CO4 K2
(LRU) page replacement algorithm. The reference string is:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2. Compute the content of the frames
after each reference and calculate the number of page faults.
Given:
Physical frames = 4.
Page replacement algorithm = Least Recently Used (LRU).
Reference string = 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2.
Process:
We maintain a frame table and use the LRU strategy. For each reference:
replace the least recently used page.
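The strategy above can be simulated directly; for the given reference string and 4 frames, this sketch (not required by the answer scheme) counts 6 page faults:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Simulate LRU page replacement; return (faults, frame snapshots)."""
    frames = OrderedDict()              # insertion order tracks recency
    faults = 0
    history = []
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # hit: mark most recently used
        else:
            faults += 1                     # page fault
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
        history.append(list(frames))        # frame content after this reference
    return faults, history

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
faults, history = lru_faults(refs, 4)
print(faults)  # 6 page faults for this reference string
```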
memory gets scattered into small blocks.
2. Limited Flexibility:
o Processes cannot grow beyond their allocated memory
unless adjacent space is free.
3. Inefficient Memory Usage:
o Memory may remain underutilized due to fragmentation
or mismatched partition sizes.
Evaluation Scheme:
Explanation of Concept – 3mark
Example of Technique – 2mark
6. Given five memory partitions of 100Kb, 500Kb, 200Kb, 300Kb, 5 CO4 K2
600Kb (in order), how would the first-fit, best-fit, and worst-fit
algorithms place processes of 212 Kb, 417 Kb, 112 Kb, and 426 Kb
(in order)? Which algorithm makes the most efficient use of
memory? Explain it.
Evaluation Scheme:
Explanation – 2.5mark
Example – 2.5mark
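The three placement strategies can be worked through with a short simulation (each fixed partition holds at most one process, consistent with the discussion of fixed partitioning that follows):

```python
def allocate(partitions, processes, strategy):
    """Place each process into a free partition using the given strategy.
    Returns a list of partition indices (None = process must wait)."""
    used = [False] * len(partitions)
    placement = []
    for p in processes:
        candidates = [i for i in range(len(partitions))
                      if not used[i] and partitions[i] >= p]
        if not candidates:
            placement.append(None)   # no free partition is large enough
            continue
        if strategy == "first":
            i = candidates[0]                                   # first that fits
        elif strategy == "best":
            i = min(candidates, key=lambda i: partitions[i])    # smallest that fits
        else:  # "worst"
            i = max(candidates, key=lambda i: partitions[i])    # largest that fits
        used[i] = True
        placement.append(i)
    return placement

parts = [100, 500, 200, 300, 600]      # partition sizes in KB
procs = [212, 417, 112, 426]           # process sizes in KB
for s in ("first", "best", "worst"):
    print(s, allocate(parts, procs, s))
# best-fit places all four processes; first-fit and worst-fit leave 426 KB waiting
```

Best-fit is the most efficient here: it is the only strategy that accommodates all four processes.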
• It occurs when the space is left inside the partition after allocating
the partition to a process.
• This space is called internally fragmented space.
• This space cannot be allocated to any other process.
• This is because Fixed (static) partitioning allows only one
process to be stored in each partition.
• Internal fragmentation occurs only in Fixed (static) partitioning.
Internal fragmentation happens when the memory is split into
fixed-sized blocks.
Whenever a process requests memory, a fixed-sized block is
allotted to the process.
In case the memory allotted to the process is somewhat larger
than the memory requested, the difference between the allotted
and requested memory is the internal fragmentation.
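The arithmetic is simple to illustrate (the 4 KB block size and 3000-byte request are assumed example values):

```python
# Internal fragmentation = allocated block size - requested size.
# The 4 KB block and the request size are assumed example values.
BLOCK = 4096
request = 3000
allocated = BLOCK                       # the whole fixed-size block is allotted
internal_fragmentation = allocated - request
print(internal_fragmentation)  # 1096 bytes wasted inside the block
```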
into memory at the start. A page fault occurs when the program needs
to access a page that is not currently in memory.
The operating system then loads the required pages from the disk into
memory and updates the page tables accordingly. This process is
transparent to the running program, which continues to run as if the
page had always been in memory.
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
9. Compare the number of hits, page faults, and hit rate for FIFO 5 CO4 K2
and Optimal page replacement algorithm using 4 frames for the
given page reference string: 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6.
Evaluation Scheme:
Explanation – 3mark
Example – 2mark
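The comparison can be checked with a short simulation (not part of the answer scheme); for this reference string with 4 frames, FIFO incurs 14 faults (6 hits) and Optimal incurs 8 faults (12 hits):

```python
def fifo_faults(refs, nframes):
    """FIFO replacement: evict the page that entered memory first."""
    frames, queue, faults = set(), [], 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.pop(0))  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

def optimal_faults(refs, nframes):
    """Optimal replacement: evict the page used farthest in the future."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                def next_use(p):
                    rest = refs[i + 1:]           # future references only
                    return rest.index(p) if p in rest else float("inf")
                frames.discard(max(frames, key=next_use))
            frames.add(page)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
print(fifo_faults(refs, 4), optimal_faults(refs, 4))  # faults: 14 8
print(20 - fifo_faults(refs, 4), 20 - optimal_faults(refs, 4))  # hits: 6 12
```

Hit rates follow directly: 6/20 = 30% for FIFO versus 12/20 = 60% for Optimal.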
10. Compare the number of hits, page faults, and hit rate for FIFO 5 CO4 K2
and LRU replacement algorithm using 4 frames for the given page
reference string:
1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6.
Evaluation Scheme:
Explanation – 3mark
Example – 2mark
Unit – V
Unit Contents: Storage Structure & File Systems
Mass Storage Structure: Disk Structure - Disk Scheduling - Disk Management-Structure - File-System
Interface: File Concepts -Directory Structure - File Sharing – Protection- File Allocation Methods-NFS.
Case study: Recovery in Windows.
8. Differentiate between sequential and direct file access. 2 CO5 K2
Sequential access reads data in order, while direct access allows
accessing data at any position.
9. How does a file differ from a database? 2 CO5 K1
A file is an unstructured or minimally structured collection of data, while
a database organizes data systematically for efficient querying and
management.
10. What is the purpose of a file extension? 2 CO5 K1
File extensions indicate the type of file, helping the operating system
determine which application should open it.
11. How does a hierarchical directory structure improve file 2 CO5 K1
organization?
A hierarchical structure organizes files into directories and subdirectories,
making it easier to locate and manage them.
12. List the different directory structures. 2 CO5 K1
Single-level directory, two-level directory, and tree-structured
directory.
13. How does file sharing work in a peer-to-peer network? 2 CO5 K1
In a peer-to-peer network, files are directly shared between devices
without a central server.
14. How do access control lists (ACLs) enhance file protection? 2 CO5 K1
ACLs specify which users or groups can access a file and define their
permissions, ensuring controlled access.
15. What challenge does file sharing address? 2 CO5 K1
File sharing addresses the need for collaborative access to files among
users or devices without duplicating data.
Evaluation Scheme:
Explanation – 1mark
Types – 2mark
2. Summarize disk Management. 3 CO5 K2
Evaluation Scheme:
Explanation – 3mark
3. Discuss file and directory. 3 CO5 K2/K3
File:
A file is a collection of data or information stored on a computer's
storage device. It can be of various types such as text files, executable
files, or image files. Files are identified by their name and extension
(e.g., .txt, .jpg). They are used to store and organize data in a
structured manner.
Directory:
A directory (also known as a folder) is a container used to organize
files and other directories in a hierarchical structure. It helps to group
related files together, making it easier to manage and locate data.
Directories can contain both files and subdirectories.
Evaluation Scheme:
Explanation of File – 1.5mark
Explanation of Directory – 1.5mark
4. Explain various layers of a file system. 3 CO5 K2
3. Physical Layer:
This layer is responsible for the actual storage of data on
physical media like hard drives or SSDs. It handles the
allocation of space on the storage device and manages low-level
tasks such as data block allocation and disk scheduling.
Evaluation Scheme:
Explanation – 2mark
Various Layers – 1mark
5. Summarize the attributes of a file. 3 CO5 K2
A file has several attributes that describe its characteristics and help
manage it within the file system:
1. File Name:
The name used to identify the file. It usually consists of a base
name and an extension (e.g., document.txt).
2. File Type:
Specifies the format or type of the file (e.g., text file, image file,
executable). This helps the system understand how to interpret
the file.
3. File Size:
Indicates the total size of the file, typically measured in bytes. It
reflects how much space the file occupies on the storage medium.
5. File Permissions:
Defines the read, write, and execute permissions for different
users or groups, controlling access to the file.
Evaluation Scheme:
Explanation – 3mark
6. Explain the concept of File Sharing. 3 CO5 K2
Evaluation Scheme:
Explanation – 3mark
7. Interpret Access control list in detail. 3 CO5 K2
Structure of ACL:
An ACL consists of a list of entries, where each entry defines a specific
user or group and the permissions granted (read, write, execute, delete,
etc.). For example, an ACL for a file might specify that User A has read
and write permissions, while User B only has read access.
Types of Permissions:
ACLs allow for detailed permission settings, such as:
Read: Permission to view the content.
Write: Permission to modify the content.
Execute: Permission to run or execute a file or program.
Access Control:
ACLs are used to restrict or grant access to resources based on user
identity or group membership, enhancing security by preventing
unauthorized access.
Evaluation Scheme:
Explanation – 3mark
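The ACL example in the answer (User A with read and write, User B with read only) can be sketched as a simple lookup (user names and the permission sets are the illustrative values from the text):

```python
# Minimal sketch of an ACL: each entry maps a user to granted permissions.
# Entries mirror the example in the answer above.
acl = {
    "userA": {"read", "write"},
    "userB": {"read"},
}

def check_access(user, permission):
    """Allow the operation only if the ACL grants it; deny otherwise."""
    return permission in acl.get(user, set())

print(check_access("userA", "write"))  # True
print(check_access("userB", "write"))  # False: userB has read-only access
```

Users absent from the list are denied by default, which is the usual ACL behavior.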
8. Summarize Contiguous Memory Allocation. 3 CO5 K2
Contiguous Memory Allocation is a memory management technique in
which each process is allocated a single contiguous block of memory.
The operating system keeps track of the start address and size of the
memory block for each process.
1. Simple Allocation:
In contiguous allocation, processes are stored in adjacent
memory locations, meaning the operating system assigns a
single continuous chunk of memory to each process.
2. Advantages:
o Easy to implement due to its simplicity.
o Efficient access because the memory for each process is
contiguous, leading to faster read/write operations.
3. Disadvantages:
o Internal fragmentation may occur if the process doesn't
completely fill the allocated space.
o External fragmentation arises over time as processes
are loaded and removed, leaving gaps in memory.
Evaluation Scheme:
Explanation – 2mark
Advantages and Disadvantages – 1mark
9. Explain Solid State Disk. 3 CO5 K2
1. Storage Technology:
Unlike traditional hard drives (HDDs) that use magnetic platters,
SSDs store data on memory chips (typically NAND flash
memory), which enables faster data read/write speeds and
lower latency.
2. Performance Benefits:
SSDs provide significantly faster data access and boot times
compared to HDDs. This results in improved overall system
performance, especially in disk-intensive tasks such as database
operations or file transfers.
Evaluation Scheme:
Explanation – 3mark
10. What are the objectives of file management systems? Explain the file 3 CO5 K2
system architecture.
The primary objectives of a file management system are to ensure
efficient storage, organization, and retrieval of files, providing users
with an effective way to store and access data. The key objectives
include:
1. File Organization:
Organize files in a structured manner for easy access and
retrieval, ensuring a logical and hierarchical arrangement of
files and directories.
2. Efficient Storage Management:
Manage disk space by allocating and deallocating space
efficiently, ensuring minimal wastage of storage.
3. Security and Access Control:
Provide mechanisms to control access to files, such as setting
permissions for different users and protecting data from
unauthorized access.
drives or SSDs), managing disk blocks and their allocation.
Evaluation Scheme:
Explanation of File Management– 1.5mark
Explanation of File Structure -1.5mark
1. Explain the key components of disk structure and their roles. 5 CO5 K2
Tracks: Circular paths on the disk surface where data is stored.
Sectors: Subdivisions of tracks that represent the smallest storage
unit.
Cylinders: Group of tracks at the same position on all platters,
enabling simultaneous access.
Platters: Magnetic disks that store data in multiple layers.
Read/Write Head: Mechanism for accessing data stored on
platters.
These components work together to provide efficient storage and
retrieval of data on the disk.
Evaluation Scheme:
Explanation – 3mark
Roles – 2mark
2. Illustrate any two disk scheduling algorithms. 5 CO5 K3
Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
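Two common algorithms, FCFS and SSTF, can be illustrated with a small simulation (the request queue and the starting head position at cylinder 53 are an assumed example, not given in the question):

```python
def fcfs(head, queue):
    """FCFS: service requests in arrival order; return total head movement."""
    total, pos = 0, head
    for track in queue:
        total += abs(track - pos)   # seek distance to the next request
        pos = track
    return total

def sstf(head, queue):
    """SSTF: always service the pending request closest to the head."""
    pending, total, pos = list(queue), 0, head
    while pending:
        track = min(pending, key=lambda t: abs(t - pos))  # shortest seek next
        total += abs(track - pos)
        pos = track
        pending.remove(track)
    return total

# Assumed example: head at cylinder 53 with this request queue.
queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(53, queue))  # 640 cylinders of total head movement
print(sstf(53, queue))  # 236 cylinders of total head movement
```

SSTF reduces total head movement substantially here, though it can starve far-away requests.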
3. Discuss the file concepts in operating systems and describe the key 5 CO5 K2
attributes associated with files.
Files are containers for data storage, identified by attributes:
Evaluation Scheme:
Explanation – 5mark
4. Explain the different directory structures in operating systems and 5 CO5 K2
their advantages and disadvantages.
i. Single-Level Directory:
Evaluation Scheme:
Explanation – 4mark
Advantages and Disadvantages – 1mark
5. What are the challenges of file sharing in multi-user systems? Discuss 5 CO5 K2
mechanisms to ensure secure file sharing.
Challenges:
Mechanisms:
Encryption: Secures shared files.
Evaluation Scheme:
Explanation of Challenges– 2mark
Explanation of Mechanisms– 3mark
6. Explain file protection mechanisms and their importance in 5 CO5 K2
operating systems.
Evaluation Scheme:
Explanation – 3mark
Importance – 2mark
7. Discuss the various file allocation methods (contiguous, linked, 5 CO5 K2
indexed) with their pros and cons.
Evaluation Scheme:
Explanation of Contiguous– 2mark
Explanation of Linked– 2mark
Explanation of Indexed– 1mark
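One of the three methods, linked allocation, can be sketched briefly (the block numbers and contents below are invented for illustration):

```python
# Sketch of linked file allocation: each disk block stores its data plus
# a pointer to the file's next block. Block numbers are invented.
# disk[b] = (data, next_block); next_block of -1 marks end of file.
disk = {
    9:  ("blk0", 16),
    16: ("blk1", 1),
    1:  ("blk2", -1),
}

def read_file(start_block):
    """Follow the chain of block pointers from the directory entry."""
    blocks, b = [], start_block
    while b != -1:
        data, b = disk[b]
        blocks.append(data)
    return blocks

print(read_file(9))  # ['blk0', 'blk1', 'blk2']
```

The directory entry needs only the starting block; the chain itself scatters freely on disk, which avoids external fragmentation but makes direct access slow.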
8. Explain the working of the Network File System (NFS) and its 5 CO5 K2
significance in distributed systems.
Working:
Significance:
Evaluation Scheme:
Explanation of Working of the NFS– 2mark
Explanation of Significance– 3mark
9. Discuss the challenges and limitations of implementing NFS in 5 CO5 K2
distributed systems.
Evaluation Scheme:
Explanation of Challenges– 2mark
Explanation of Limitations– 3mark
10. Explain the system recovery options in Windows, including Startup 5 CO5 K2
Repair, System Restore, and Safe Mode, and their importance in
maintaining system reliability.
System Restore: Reverts the system to a previous state, undoing
problematic changes.
Safe Mode: Boots with minimal drivers for troubleshooting.
System Image Recovery: Restores the system from a backup image.
Command Prompt: For advanced manual recovery.
Importance:
Reduces downtime.
Prevents data loss.
Maintains system stability after failures.
These options make Windows recovery user-friendly and reliable.
Evaluation Scheme:
Explanation of Startup Repair– 2mark
Explanation of System Restore– 2mark
Explanation of Safe Mode– 1mark