
Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology


(Deemed to be University Estd. u/s 3 of UGC Act, 1956)
School of Computing
VTR UGE 2021- (CBCS)
B.Tech. – Computer Science and Engineering
Question Bank – Integrated Courses
Academic Year: 2024 - 25
Course Category : Program Core
Course Code / Title : 10211CS103 / Operating Systems
Semester : Winter 2024-25
Achievable Course Outcomes
CO1 : Describe the operating system structures, operations and system calls (K2)
CO2 : Demonstrate process management concepts and process synchronization methods for real-time problems (K3)
CO3 : Illustrate CPU scheduling algorithms and deadlock handling methods for the given situation (K2)
CO4 : Explain the concepts of various memory management techniques (K2)
CO5 : Discuss the concepts of disk management and file system interface (K2)

Integrated Courses

Question Type   Unit I        Unit II       Unit III      Unit IV       Unit V
2 marks 10 Questions 10 Questions 10 Questions 10 Questions 10 Questions
3 marks 10 Questions 10 Questions 10 Questions 10 Questions 10 Questions
5 marks 10 Questions 10 Questions 10 Questions 10 Questions 10 Questions
Total 30 30 30 30 30

Note:
• 5 marks and 3 marks questions need to be only at K2 or K3 level, as per the CO, and questions must be distributed equally between the K2 and K3 levels.
• A 5 marks question can also contain subdivisions, but provide the evaluation scheme.
• 2 marks questions must be at K1 and/or K2 level, with equal distribution of questions between the K1 and K2 levels.
• Please provide only standard questions; kindly avoid direct questions.
• Use Times New Roman font, 12 pt.
• Follow the Revised Bloom's Taxonomy action verbs only.
• Sketches, drawings, and figures should be clearly and neatly presented.
• For problem-based courses, provide 70% problem-based questions and 30% theory questions.
• An answer key with evaluation scheme needs to be provided along with the question bank, as a separate file.

REFERENCES

Bloom's Taxonomical Terms and Definitions

S.No. | Bloom's Taxonomical Level | Definition | Possible Words in Questions

1 | K1: Remembering | Recall facts, basic concepts, or answers. | Cite, Define, Describe, Identify, Label, List, Match, Name, Outline, Quote, Recall, Report, Reproduce, Retrieve, Show, State, Tabulate, Tell, Choose, Find, How, Omit, Relate, Select, Spell, What, When, Where, Which, Who, Why.

2 | K2: Understanding | Demonstrate comprehension by organizing, comparing, translating, or summarizing. | Abstract, Arrange, Articulate, Associate, Categorize, Clarify, Classify, Compare, Compute, Conclude, Contrast, Defend, Diagram, Differentiate, Discuss, Distinguish, Estimate, Exemplify, Explain, Extend, Extrapolate, Generalize, Give Examples Of, Illustrate, Infer, Interpolate, Interpret, Match, Outline, Paraphrase, Predict, Rearrange, Reorder, Rephrase, Represent, Restate, Summarize, Transform, Translate, Demonstrate.

3 | K3: Applying | Use information in new situations or solve problems. | Apply, Calculate, Carry Out, Classify, Complete, Compute, Demonstrate, Dramatize, Employ, Examine, Execute, Experiment, Generalize, Illustrate, Implement, Infer, Interpret, Manipulate, Modify, Operate, Organize, Outline, Predict, Solve, Transfer, Translate, Use, Build, Choose, Construct, Develop, Experiment with, Make use of, Model, Plan, Select, Utilize.

Unit – I
Unit Contents: Structure and Overview of Operating Systems
Operating system overview: Objectives - Functions - Computer System Organization - Operating System Operations - System Calls - System Programs - Operating-System Structure: Traditional UNIX system structure - The Mac OS X structure - Architecture of Google's Android.

Two Marks Questions. (Marks | Course Outcome | Level)
*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos. 6-10 from the second half portion of the syllabus

1. Define Operating Systems. 2 CO1 K1
An operating system is a program that manages a computer's hardware. It also provides a basis for application programs and acts as an intermediary between the computer user and the computer hardware.
2. State the objectives of operating system. 2 CO1 K1
An operating system is a program that manages the computer hardware. It acts as an intermediary between a user of a computer and the computer hardware. It controls and coordinates the use of the hardware among the various application programs for the various users.
3. Differentiate between interrupt and trap. 2 CO1 K2
An interrupt is a hardware-generated signal that changes the flow within the system. A trap is a software-generated interrupt. An interrupt can be used to signal the completion of I/O so that the CPU doesn't have to spend cycles polling the device. A trap can be used to catch arithmetic errors or to call system routines.
4. State the operations carried out in the operating system. 2 CO1 K1
The operations of the operating system are process management, memory management, device management, and file management.
Process management: assigning the processor to a process at a time.
Memory management: moving processes from disk to primary memory for execution.
Device management: accessing all the I/O devices using device drivers, which are device-specific code.
File management: mapping files onto physical devices.
5. Show the organization of a computer system. 2 CO1 K1
[Diagram: computer system organization]

6. Define Batch Systems. 2 CO1 K1
Operators batched together jobs with similar needs and ran them through the computer as a group. The operators would sort programs into batches with similar requirements and, as the system became available, run each batch.
7. Describe why APIs are used rather than direct system calls. 2 CO1 K1
System calls are much slower than API (library) calls, since each system call requires a context switch into the OS, which then serves the call. Most details of the OS interface are hidden from the programmer by the API and managed by the run-time support library (a set of functions built into libraries included with the compiler).
8. Define System call and System programs. 2 CO1 K1
A system call is a routine that allows a user application to request actions that require special privileges.
System programs are computer software that is part of a computer operating system or other control program.
9. What is Dual-Mode and Multimode Operation? 2 CO1 K1
There are two separate modes of operation: user mode and kernel mode (also called supervisor mode, system mode, or privileged mode). A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1).
10. List the types of system call. 2 CO1 K1
System calls can be grouped roughly into six major categories:
• Process control
• File manipulation
• Device manipulation
• Information maintenance
• Communications
• Protection

11. Define System programs. 2 CO1 K1
System programs, also known as system utilities, provide a convenient environment for program development and execution:
• File management
• Status information
• File modification
• Programming-language support
• Program loading and execution
• Communications
• Background services
12. Draw the traditional UNIX system structure. 2 CO1 K2
[Diagram: traditional UNIX system structure]

13. What is a kernel in an operating system? 2 CO1 K1
A kernel is the core part of an operating system that manages hardware resources.
14. List out the operating system structure. 2 CO1 K1
• Multiprogramming
• Job pool
• Time sharing
• Interactive computer system
• Response time
• Process
• Job scheduling
• CPU scheduling
• Swapping
• Physical and logical memory
15. What is the main difficulty that a programmer must overcome in writing an operating system for a real-time environment? 2 CO1 K1
The main difficulty is keeping the operating system within the fixed time constraints of a real-time system. If the system does not complete a task in a certain time frame, it may cause a breakdown of the entire system it is running. Therefore, when writing an operating system for a real-time system, the writer must be sure that the scheduling schemes don't allow response time to exceed the time constraint.

Three Marks Questions. (Marks | Course Outcome | Level)
*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos. 6-10 from the second half portion of the syllabus
1. Summarize the five major activities of an operating system with regard to process management. 3 CO1 K2
The five major activities are:
a. The creation and deletion of both user and system processes
b. The suspension and resumption of processes
c. The provision of mechanisms for process synchronization
d. The provision of mechanisms for process communication
e. The provision of mechanisms for deadlock handling

Evaluation Scheme:
Explanation – 1mark
Five activities – 2mark

2. Identify five services provided by an operating system. Explain how each provides convenience to the users. Explain also in which cases it would be impossible for user-level programs to provide these services. 3 CO1 K2
• Program execution: loads and runs programs; user-level code cannot allocate the CPU to itself.
• I/O operations: performs I/O on behalf of programs; users cannot control I/O devices directly, for protection reasons.
• File-system manipulation: creates, reads, writes, and deletes files with access control.
• Communications: exchanges information between processes via shared memory or message passing.
• Error detection: detects and handles errors in the CPU, memory, I/O devices, and user programs.

Evaluation Scheme:
Explanation – 3mark

3. Discuss the three major activities of an operating system with regard to secondary-storage management. 3 CO1 K2
The three major activities are:
• Free-space management
• Storage allocation
• Disk scheduling

Evaluation Scheme:
Explanation – 1mark
Three activities – 2mark

4. Interpret the purpose of the command interpreter. Why is it usually separate from the kernel? 3 CO1 K2
It reads commands from the user or from a file of commands and executes them, usually by turning them into one or more system calls. It is usually not part of the kernel since the command interpreter is subject to changes.

Evaluation Scheme:
Explanation – 3mark
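
For illustration (not part of the prescribed answer), the core loop of a command interpreter can be sketched in C: read a command, fork a child, exec the requested program, and wait for it to finish. The single-word command with no arguments is a simplifying assumption.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Minimal shell loop: shows how a command interpreter turns user
     * commands into system calls (fork, exec, wait). */
    int main(void) {
        char line[256];
        for (;;) {
            printf("sh> ");
            if (fgets(line, sizeof line, stdin) == NULL)
                break;                        /* EOF: leave the shell */
            line[strcspn(line, "\n")] = '\0';
            if (line[0] == '\0')
                continue;                     /* empty command line   */
            pid_t pid = fork();               /* create child process */
            if (pid == 0) {
                /* Child: replace its image with the requested program. */
                execlp(line, line, (char *)NULL);
                perror("exec failed");        /* reached only on error */
                exit(1);
            } else if (pid > 0) {
                waitpid(pid, NULL, 0);        /* parent waits for child */
            } else {
                perror("fork failed");
            }
        }
        return 0;
    }

Because the interpreter is just an ordinary user program built on system calls, it can be replaced or modified without touching the kernel, which is exactly why it is kept separate.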
5. Discuss the drawbacks of a Multiprocessor System. 3 CO1 K2
It is much cheaper to buy a simple single-processor system than a multiprocessor system. There are multiple processors in a multiprocessor system that share peripherals, memory, etc., so it is much more complicated to schedule processes and allocate resources to processes.

Evaluation Scheme:
Explanation – 3mark
6. Explain the main advantage of the layered approach to system design. What are the disadvantages of using the layered approach? 3 CO1 K2
As in all cases of modular design, designing an operating system in a modular way has several advantages. The system is easier to debug and modify because changes affect only limited sections of the system rather than touching all sections of the operating system. Information is kept only where it is needed and is accessible only within a defined and restricted area, so any bugs affecting that data must be limited to a specific module or layer.
The disadvantages are that the layers must be carefully planned and defined, and that each layer adds overhead to a system call, making the system less efficient than a non-layered one.
Evaluation Scheme:
Advantages – 2mark
Disadvantage – 1mark

7. Compare and contrast DMA and cache memory. 3 CO1 K2
DMA (Direct Memory Access): Direct memory access (DMA) is a feature of computer systems that allows certain hardware subsystems to access main memory (random-access memory) independent of the central processing unit (CPU).
Cache Memory: A cache is a smaller, faster memory, closer to a processor core, which stores copies of the data from frequently used main memory locations. Both DMA and cache are thus used for increasing the speed of memory access.

Evaluation Scheme:
Explanation of DMA – 1.5mark
Explanation of Cache – 1.5mark

8. In some computer systems, privileged mode of operation is not available in hardware. Can we find a secure operating system for these computer systems? Explain. 3 CO1 K2
An operating system for a machine of this type would need to remain in control at all times. This could be accomplished by two methods:
a. Software interpretation of all user programs. This software interpreter would provide, in software, what the hardware does not provide.
b. Requiring that all programs be written in a high-level language so that all object code is compiler-produced. The compiler would generate the protection checks that the hardware is missing.

Evaluation Scheme:
Explanation – 3mark
9. Draw the architecture of the Mac OS X structure. 3 CO1 K2
[Diagram: the Mac OS X structure]
Evaluation Scheme:
Diagram – 3mark

10. Draw the architecture of Google's Android. 3 CO1 K2
[Diagram: architecture of Google's Android]
Evaluation Scheme:
Diagram – 3mark

Five Marks Questions. (Marks | Course Outcome | Level)
*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos. 6-10 from the second half portion of the syllabus
1. Explain in detail about Computer System Organization 5 CO1 K2
Computer-System Operation
A modern general-purpose computer system consists of one or
more CPUs and a number of device controllers connected through a
common bus that provides access to shared memory.

[Diagram: A Modern Computer System]
• Each device controller is in charge of a specific type of device (for example, disk drives, audio devices, and video displays).
• The CPU and the device controllers can execute concurrently, competing for memory cycles.
• To ensure orderly access to the shared memory, a memory controller is provided whose function is to synchronize access to the memory.
• For a computer to start running, for instance when it is powered up or rebooted, it needs to have an initial program to run. This initial program, or bootstrap program, is stored in read-only memory (ROM) or electrically erasable programmable read-only memory (EEPROM), known by the general term firmware, within the computer hardware.
• It initializes all aspects of the system, from CPU registers to device controllers to memory contents.
• The bootstrap program must know how to load the operating system and to start executing that system. To accomplish this goal, the bootstrap program must locate and load into memory the operating-system kernel. The operating system then starts executing the first process, such as "init," and waits for some event to occur.
• Interrupts are an important part of a computer architecture. The occurrence of an event is usually signaled by an interrupt from either the hardware or the software. Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the system bus. Software may trigger an interrupt by executing a special operation called a system call (also called a kernel call).
Storage Structure
• Computer programs must be in main memory (also called random-access memory or RAM) to be executed. Main memory is the only large storage area (millions to billions of bytes) that the processor can access directly.
• RAM is commonly implemented in a semiconductor technology called dynamic random-access memory (DRAM), which forms an array of memory words.
[Diagram: Storage Memory Hierarchy]
• The main requirement for secondary storage is that it be able to hold large quantities of data permanently. The most common secondary-storage device is a magnetic disk, which provides storage for both programs and data. Most programs (web browsers, compilers, word processors, spreadsheets, and so on) are stored on a disk until they are loaded into memory.
• Volatile storage (RAM) loses its contents when the power to the device is removed. In the absence of expensive battery and generator backup systems, data must be written to nonvolatile storage (secondary memory) for safekeeping.
• Cache memory is a small, temporary, high-speed storage between the CPU and RAM that stores frequently used data to reduce the latency of data access.
Input/Output (I/O) Structure
• A large portion of operating-system code is dedicated to managing I/O, because of its importance to the reliability and performance of a system and because of the varying nature of the devices.
• A general-purpose computer system consists of CPUs and multiple device controllers that are connected through a common bus. Each device controller is in charge of a specific type of device.
[Diagram: Working of a Modern Computer System]
• Depending on the controller, there may be more than one attached device. For instance, seven or more devices can be attached to the small computer-systems interface (SCSI) controller.
• A device controller maintains some local buffer storage and a set of special-purpose registers. The device controller is responsible for moving the data between the peripheral devices that it controls and its local buffer storage.
• Operating systems have a device driver for each device controller. This device driver understands the device controller and presents a uniform interface to the device to the rest of the operating system.
• This form of interrupt-driven I/O is fine for moving small amounts of data but can produce high overhead when used for bulk data movement such as disk I/O. To solve this problem, direct memory access (DMA) is used.
• After setting up buffers, pointers, and counters for the I/O device, the device controller transfers an entire block of data directly to or from its own buffer storage to memory, with no intervention by the CPU.
While the device controller is performing these operations, the CPU is available to accomplish other work.

Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
2. Discuss the different types of operating-system structures. 5 CO1 K2
The different types of operating-system structures are:
1. Monolithic structure: a single, self-contained kernel.
2. Microkernel structure: a small kernel that provides basic services.
3. Layered structure: a hierarchical structure with each layer providing a specific set of services.
4. Hybrid structure: a combination of different structures.

Evaluation Scheme:
Explanation – 2mark
Types – 3mark
3. Discuss the different types of operating-system services. 5 CO1 K2
The different types of operating-system services are:

1. Process management services: creating, scheduling, and terminating
processes.
2. Memory management services: allocating and deallocating memory.
3. File management services: creating, deleting, and manipulating files.
4. I/O management services: performing I/O operations.
5. Security services: providing access control, authentication, and
encryption.

Evaluation Scheme:
Explanation – 2mark
Types – 3mark

4. Discuss in detail about the abstract view of the components of a computer system with a neat diagram. 5 CO1 K2
[Diagram: abstract view of the components of a computer system]

Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
5. Briefly Explain about Operating System Operations 5 CO1 K2
Modern operating systems are interrupt driven. Events are almost
always signaled by the occurrence of an interrupt or a trap. A trap (or an
exception) is a software-generated interrupt caused either by an error (for
example, division by zero or invalid memory access) or by a specific
request from a user program that an operating-system service be
performed.

The interrupt-driven nature of an operating system defines that system's general structure. For each type of interrupt, separate segments of code in the operating system determine what action should be taken. An interrupt service routine is provided that is responsible for dealing with the interrupt.
A properly designed operating system must ensure that an
incorrect (or malicious) program cannot cause other programs to execute
incorrectly.
(i) Dual-Mode Operation
• To ensure proper operation, we must protect the operating system and all other programs and their data from any malfunctioning program. We need two separate modes of operation:
  o user mode
  o kernel mode (supervisor mode / system mode / privileged mode)
• A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1).
• With the mode bit, we are able to distinguish between an execution that is done on behalf of the operating system and one that is done on behalf of the user.
• At system boot time, the hardware starts in kernel mode. The operating system is then loaded, and starts user processes in user mode. Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel mode (that is, changes the state of the mode bit to 0).
• Thus, whenever the operating system gains control of the computer, it is in kernel mode. The system always switches to user mode (by setting the mode bit to 1) before passing control to a user program.
• The dual mode of operation provides us with the means for protecting the operating system from errant users, and errant users from one another.
• This protection is achieved by designating some of the machine instructions that may cause harm as privileged instructions.

[Diagram: Transition from user to kernel mode]
(ii) Timer
• A timer can be set to interrupt the computer after a specified period. The period may be fixed (for example, 1/60 second) or variable (for example, from 1 millisecond to 1 second). A variable timer is generally implemented by a fixed-rate clock and a counter.
• The operating system sets the counter. Every time the clock ticks, the counter is decremented. When the counter reaches 0, an interrupt occurs.
• The timer prevents a user program from getting stuck in an infinite loop, or from never calling system services and never returning control to the operating system.
• Before turning over control to the user, the operating system ensures that the timer is set to interrupt. If the timer interrupts, control transfers automatically to the operating system, which may treat the interrupt as a fatal error (and terminate the program) or may give the program more time.
• Instructions that modify the content of the timer are privileged.
Thus, we can use the timer to prevent a user program from running too long.
Evaluation Scheme:
Explanation – 3mark
Operations – 2mark
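
As an illustrative sketch (not part of the prescribed answer), a user-level analogue of the operating system's timer can be built with the POSIX alarm() call: SIGALRM plays the role of the timer interrupt that regains control from a runaway loop.

    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* SIGALRM "interrupts" the runaway loop after 2 seconds, and the
     * handler regains control, mimicking the OS timer interrupt. */
    static void on_timer(int sig) {
        (void)sig;
        write(STDOUT_FILENO, "timer interrupt: stopping program\n", 34);
        _exit(1);                    /* terminate the runaway "program" */
    }

    int main(void) {
        signal(SIGALRM, on_timer);   /* install the interrupt handler */
        alarm(2);                    /* set the timer for 2 seconds   */
        for (;;) { /* infinite loop that never yields control */ }
    }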
6. Explain in detail about System Calls with neat sketch. 5 CO1 K2
System calls provide an interface to the services made available by an
operating system. These calls are generally available as routines written
in C and C++, although certain low-level tasks (for example, tasks where
hardware must be accessed directly) may have to be written using
assembly-language instructions.

Even simple programs make heavy use of the operating system, issuing thousands of system calls per second. The system-call interface intercepts function calls in the API and invokes the necessary system calls within the operating system.

System Programs
1. Also called system utilities.
2. Provide a convenient environment for program development and execution.
3. Some are user interfaces to system calls or bundles of useful system calls.
[Diagram: layered view - Application Programs / System Programs / Operating System / Hardware]
Types of System Calls
System calls can be grouped roughly into six major categories:
process control, file manipulation, device manipulation,
information maintenance, communications, and protection
- Process control
◦ end, abort
◦ load, execute
◦ create process, terminate process
◦ get process attributes, set process attributes
◦ wait for time
◦ wait event, signal event
◦ allocate and free memory
- File management
◦ create file, delete file
◦ open, close
◦ read, write, reposition
◦ get file attributes, set file attributes
- Device management
◦ request device, release device
◦ read, write, reposition
◦ get device attributes, set device attributes
◦ logically attach or detach devices
- Information maintenance
◦ get time or date, set time or date
◦ get system data, set system data
◦ get process, file, or device attributes
◦ set process, file, or device attributes
- Communications
◦ create, delete communication connection
◦ send, receive messages
◦ transfer status information
◦ attach or detach remote devices
- Protection
◦ Control access to resources
◦ Get and set Permissions
◦ Allow and deny user access

System programs can be divided into these categories:
• File Management: programs to create, delete, print, copy, list, and manipulate files and directories.
• Status Information: maintains and provides information on system date, time, disk/memory space, logging, performance, debugging, and the registry.
• File Modification: text editors to create, modify, and search file contents.
• Programming-Language Support: software such as compilers, assemblers, debuggers, and interpreters for common programming languages (C, C++, Java).
• Program Loading and Execution: absolute/relocatable loaders and linkage editors.
• Communication: these programs provide virtual connections, remote login, file transfer, web browsing, and email communication.
• Background Services: constantly running programs (services/subsystems/daemons).

Evaluation Scheme:
Explanation – 3mark
Types – 2mark
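
As a hedged illustration in the spirit of the classic file-copy example (not part of the prescribed answer), the program below invokes the file-management system calls open, read, write, and close through their C wrappers. Error handling is deliberately minimal.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Copy one file to another using POSIX file-management system
     * calls directly: open, read, write, close. */
    int main(int argc, char *argv[]) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <source> <dest>\n", argv[0]);
            return 1;
        }
        int in = open(argv[1], O_RDONLY);                    /* system call */
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) { perror("open"); return 1; }

        char buf[4096];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)          /* system call */
            if (write(out, buf, (size_t)n) != n) {           /* system call */
                perror("write");
                return 1;
            }
        close(in);                                           /* system call */
        close(out);
        return 0;
    }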
7. Explain in detail about System Programs. 5 CO1 K2
System programs, also known as system utilities, provide a convenient environment for program development and execution.
• File management
• Status information
• File modification
• Programming-language support
• Program loading and execution
• Communications
• Background services

Evaluation Scheme:
Definition – 2mark
Explanation – 3mark
8. Discuss in detail about the operating system structure. 5 CO1 K2
A system as large and complex as a modern operating system must be
engineered carefully if it is to function properly and be modified easily. A
common approach is to partition the task into small components, or
modules,rather than have one monolithic system. Each of these modules
should be a well-defined portion of the system, with carefully defined
inputs, outputs,and functions. We have already discussed briefly in
Chapter 1 the common components of operating systems. In this section,
we discuss how these components are interconnected and melded into a
kernel.
1. Simple Structure
• Operating systems started as small, simple, and limited systems.
• The interfaces and levels of functionality are not well separated. Application programs are able to access the basic I/O routines.
• Such freedom leaves systems (e.g., MS-DOS) vulnerable to errant (or malicious) programs, causing entire system crashes when user programs fail.
• Advantage: simple design and easy construction.
• Disadvantage: difficult to debug, prone to frequent system failure.
• Examples: MS-DOS, UNIX operating system.
2. Layered Approach
• With proper hardware support, operating systems can be broken into pieces that are smaller and more appropriate.
• The operating system can then retain much greater control over the computer and over the applications that make use of that computer.
• The operating system is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface. The middle layers comprise the operating-system services. This layering structure is depicted in Figure 2.13.

Advantages
• Simplicity of construction and debugging.
• The first layer can be debugged without any concern for the rest of the system.
• A layer does not need to know how these operations are implemented; it needs to know only what these operations do. Hence, each layer hides the existence of certain data structures, operations, and hardware from higher-level layers.
Disadvantages
1. Careful planning of the design is necessary.
2. Less efficient than other types: each layer adds overhead to the system call. The net result is a system call that takes longer than one on a non-layered system.
Examples: VAX/VMS, Multics

3. Microkernels
• Researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach.
• Only essential operating-system functions, such as thread management, address-space management, and inter-process communication, are built into the kernel.
• All nonessential components are removed from the kernel and implemented as system and user-level programs.
• The main function of the microkernel is to provide communication between the client program and the various services that are also running in user space.
• Communication is provided through message passing.
Advantages
• It makes extending the operating system easier.
• All new services are added to user space and consequently do not require modification of the kernel.
• It is easier to port from one hardware design to another.
• It provides more security and reliability, since most services run as user processes rather than kernel processes.
• If a service fails, the rest of the operating system remains untouched.
Disadvantage
The performance of microkernels can suffer due to increased system-function overhead.
Example: the Mac OS X kernel (also known as Darwin) is based on the Mach microkernel.
4. Modules Approach
• The best current methodology for operating-system design involves using loadable kernel modules.
• The kernel has a set of core components and links in additional services via modules, either at boot time or during run time. This type of design is common in modern implementations of UNIX, such as Solaris, Linux, and Mac OS X, as well as Windows.
• The idea of the design is for the kernel to provide core services while other services are implemented dynamically, as the kernel is running.
• Linking services dynamically is preferable to adding new features directly to the kernel, which would require recompiling the kernel every time a change was made.
Advantages
• Has defined, protected interfaces.
• More flexible than a layered system.
• More efficient, because modules do not need to invoke message passing in order to communicate.
Example: Solaris

5. Hybrid Systems Approach
• Combine different structures, resulting in hybrid systems that address performance, security, and usability issues.
• Three hybrid systems: the Apple Mac OS X operating system and the two most prominent mobile operating systems, iOS and Android.
Android
The Android operating system was designed by the Open Handset Alliance (led primarily by Google) and was developed for Android smartphones and tablet computers. Android runs on a variety of mobile platforms and is open-sourced. Android is similar to iOS in that it is a layered stack of software.

Evaluation Scheme:
Explanation – 3mark
Components – 1mark
Diagram – 1mark

9. Summarize the Traditional UNIX system structure. 5 CO1 K2


UNIX initially was limited by hardware functionality. It consists of two separable parts: the kernel and the system programs. The kernel is further separated into a series of interfaces and device drivers, which have been added and expanded over the years as UNIX has evolved. We can view the traditional UNIX operating system as being layered to some extent. Everything below the system-call interface and above the physical hardware is the kernel. The kernel provides the file system, CPU scheduling, memory management, and other operating-system functions through system calls. Taken in sum, that is an enormous amount of functionality to be combined into one level. This monolithic structure was difficult to implement and maintain. It had a distinct performance advantage, however: there is very little overhead in the system-call interface or in communication within the kernel. We still see evidence of this simple, monolithic structure in the UNIX, Linux, and Windows operating systems.
[Diagram: traditional UNIX system structure]

Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark

10. State the operating system structure. Discuss the operating system 5 CO1 K2
operations in detail. Justify the reason why the lack of a hardware
supported dual mode can cause serious shortcoming in an operating
system?
An operating system is a construct that allows the user application
programs to interact with the system hardware. Since the operating
system is such a complex structure, it should be created with utmost care
so it can be used and modified easily. An easy way to do this is to create
the operating system in parts. Each of these parts should be well defined
with clear inputs, outputs and functions.

Simple Structure
There are many operating systems that have a rather simple structure. These started as small systems and rapidly expanded much further than their original scope. A common example of this is MS-DOS. It was designed simply for a niche group of people. There was no indication that it would become so popular.
An image to illustrate the structure of MS-DOS is as follows:
[Diagram: MS-DOS layer structure]
It is better that operating systems have a modular structure, unlike MS-
DOS. That would lead to greater control over the computer system and its
various applications. The modular structure would also allow the
programmers to hide information as required and implement internal
routines as they see fit without changing the outer specifications.

Layered Structure
One way to achieve modularity in the operating system is the layered
approach. In this, the bottom layer is the hardware and the topmost layer
is the user interface.
An image demonstrating the layered approach is as follows:
[Diagram: layered operating-system structure]
As seen from the image, each upper layer is built on the one below it. All the layers hide some structures, operations, etc. from their upper layers. One problem with the layered structure is that each layer needs to be carefully defined. This is necessary because the upper layers can only use the functionalities of the layers below them.
Without hardware-supported dual mode, the operating system cannot distinguish privileged execution from user execution: a user program could execute privileged instructions directly, access or overwrite operating-system memory and other processes' data, and monopolize the CPU or I/O devices. Protection would then have to be enforced entirely in software, which is a serious shortcoming.

Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark

Unit – II
Unit Contents: Process Management
Processes: Process Concept - Threads - Process Scheduling - Operations on Processes - Interprocess Communication - Communication in Client-Server Systems - Pipes (RPC, Pipes) - Process Synchronization: The Critical-Section Problem - Semaphores - Mutex Locks - Classic Problems of Synchronization - Monitors. Case Study: Windows Threads and Linux Threads

Two Marks Questions. (Marks | Course Outcome | Level)
*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos. 6-10 from the second half portion of the syllabus
1. Define the term Dispatch Latency. 2 CO2 K1
The term dispatch latency describes the amount of time it takes for a system to respond to a request for a process to begin operation.
2. List out the requirements that a solution to the critical-section problem must satisfy. 2 CO2 K1
The three requirements are:
• Mutual exclusion
• Progress
• Bounded waiting
3. What are two differences between user-level threads and kernel-level threads? Under what circumstances is one type better than the other? 2 CO2 K1
a. User-level threads are unknown by the kernel, whereas the kernel is aware of kernel threads.
b. On systems using either M:1 or M:N mapping, user threads are scheduled by the thread library, and the kernel schedules kernel threads.
c. Kernel threads need not be associated with a process, whereas every user thread belongs to a process. Kernel threads are generally more expensive to maintain than user threads, as they must be represented with a kernel data structure.
4. Name some classic problems of synchronization. 2 CO2 K1
The Bounded-Buffer Problem
The Readers-Writers Problem
The Dining-Philosophers Problem
5. Define entry section and exit section. 2 CO2 K1
The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section is followed by an exit section. The remaining code is the remainder section.
6. Define Process. 2 CO2 K1
A process can be thought of as a program in execution. A process will need certain resources, such as CPU time, memory, files, and I/O devices, to accomplish its task.
7. List the various process states in process management. 2 CO2 K1
New - the process is being created.
Running - instructions are being executed.
Waiting - the process is waiting for some event to occur.
Ready - the process is waiting to be assigned a processor.
Terminated - the process has finished execution.
8. Define process control block. List out the data fields associated with a PCB. 2 CO2 K1
Each process is represented in the operating system by a process control block (PCB), also called a task control block. Its fields include:
Process state
Process number
Program counter
CPU registers
Memory limits
List of open files
CPU scheduling information
Memory management information
Accounting information
I/O status information
9. List out the benefits and challenges of thread handling. 2 CO2 K1
• Improved throughput.
• Simultaneous and fully symmetric use of multiple processors for computation and I/O.
• Superior application responsiveness.
• Improved server responsiveness.
• Minimized system resource usage.
• Program structure simplification.
• Better communication.
10. Define context switching. 2 CO2 K1
Switching the CPU to another process requires saving the state of the old process and loading the saved state for the new process. This task is known as a context switch.
11. Define monitors. 2 CO2 K1
A monitor is a high-level synchronization construct. A monitor type is an ADT that presents a set of programmer-defined operations that are provided with mutual exclusion within the monitor.
12. Differentiate a Thread from a Process. 2 CO2 K2
Threads:
• Will by default share memory
• Will share file descriptors
• Will share file system context
• Will share signal handling
Processes:
• Will by default not share memory
• Most file descriptors not shared
• Don't share file system context
• Don't share signal handling
13. Define mutual exclusion. 2 CO2 K1
Mutual exclusion refers to the requirement of ensuring that no two processes or threads are in their critical sections at the same time; i.e., if process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
14. Define semaphore. Mention its importance in operating systems. 2 CO2 K1
A semaphore 'S' is a synchronization tool: an integer value that, apart from initialization, is accessed only through two standard atomic operations, wait and signal. Semaphores can be used to deal with the n-process critical-section problem and to solve various synchronization problems.
15. How may mutual exclusion be violated if the wait and signal operations are not executed atomically? 2 CO2 K1
A wait operation atomically decrements the value associated with a semaphore. If two wait operations are executed on a semaphore when its value is 1, and the two operations are not performed atomically, then it is possible that both operations proceed to decrement the semaphore value, thereby violating mutual exclusion.

Three Marks Questions. (Marks | Course Outcome | Level)
*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos. 6-10 from the second half portion of the syllabus
1. Summarize the critical section problem and its solution. 3 CO2 K2
Various solutions exist to satisfy the three requirements (mutual exclusion, progress, bounded waiting). Here are a few classic ones:
1. Peterson's Algorithm (for two processes) (1 mark)
   o Description
   o How it works
   o Meets requirements: mutual exclusion, progress, bounded waiting
2. Semaphore Solution (1 mark)
   o Description
   o How it works
   o Meets requirements: mutual exclusion, progress, bounded waiting
3. Monitors (1 mark)
   o Description
   o How it works
   o Meets requirements: mutual exclusion, progress

Evaluation Scheme:
Explanation – 2mark
Solutions – 1mark

2. Summarize the Process Control Block with a neat sketch. 3 CO2 K2
[Diagram: Process Control Block]
Evaluation Scheme:
Explanation – 2mark
Diagram – 1mark

3. Illustrate the different states of a process and name them. 3 CO2 K2
The states are: new, ready, running, waiting, and terminated.
[Diagram: process state transitions]
Evaluation Scheme:
Explanation – 2mark
Diagram – 1mark

4. Explain the role of a counting semaphore in resource management. 3 CO2 K2
A counting semaphore tracks the number of available resources, allowing multiple processes to access shared resources up to a specified limit.

Evaluation Scheme:
Explanation – 3mark

5. Summarize the life cycle of a process with a neat diagram. 3 CO2 K2
A process moves through the following states during its lifetime:
New - the process is being created.
Ready - the process is waiting to be assigned a processor.
Running - instructions are being executed.
Waiting - the process is waiting for some event to occur.
Terminated - the process has finished execution.
[Diagram: process life cycle]

Evaluation Scheme:
Explanation – 2mark
Diagram – 1mark

6. Explain the models of IPC. 3 CO2 K2
Shared memory: cooperating processes establish a region of memory that they share and exchange data by reading from and writing to it.
Message passing: processes communicate by sending and receiving messages through the operating system, without sharing an address space.

Evaluation Scheme:
Explanation – 3mark

7. Discuss the reasons for termination of a child process by its parent process. 3 CO2 K2
• The child has exceeded its usage of some of the resources allocated to it.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent terminates (cascading termination).

Evaluation Scheme:
Explanation – 3mark
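
A minimal sketch of the second reason above (the task is no longer required): the parent terminates an idle child with kill() and reaps it with waitpid(). The one-second sleep stands in for the parent's decision and is an arbitrary choice.

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {            /* child: "works" forever          */
            for (;;)
                pause();
        }
        sleep(1);                  /* parent decides task is obsolete */
        kill(pid, SIGTERM);        /* ask the OS to terminate child   */
        waitpid(pid, NULL, 0);     /* reap the child's exit status    */
        printf("child %d terminated by parent\n", (int)pid);
        return 0;
    }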

8. Discuss the features of Monitor. 3 CO2 K2
A monitor is a synchronization mechanism in operating systems that manages access to shared resources. The key features are:
1. Encapsulation of shared resources (1 mark)
2. Automatic mutual exclusion (1 mark)
3. Condition variables (1 mark)

Evaluation Scheme:
Explanation of features – 2mark
Definition – 1mark

9. Explain how bounded waiting prevents starvation of processes. 3 CO2 K2
Bounded waiting ensures that a process will get a chance to enter its critical section within a finite time after requesting access. It limits the number of times other processes can enter the critical section before the waiting process gets its turn. By guaranteeing that no process waits indefinitely, bounded waiting prevents starvation, ensuring fair access to shared resources for all processes. This helps avoid situations where higher-priority processes continuously block lower-priority ones.

Evaluation Scheme:
Explanation – 3mark

10. Summarize shared memory in an operating system and explain how it is used for inter-process communication. 3 CO2 K2
Shared memory is a common memory segment that allows multiple processes to share data without the need for continuous system calls. It is one of the fastest methods of inter-process communication (IPC), as processes can directly access the shared memory to read and write data. This method is efficient in real-time systems where quick data sharing between processes is required.

Evaluation Scheme:
Explanation – 3 mark
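
A minimal sketch of shared-memory IPC, assuming a Linux/BSD system (MAP_ANONYMOUS is an extension, not plain POSIX): parent and child share one mapped page, and wait() provides crude synchronization.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Parent and child communicate through a shared mapping; no
     * system call is needed for each individual read or write. */
    int main(void) {
        char *shm = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shm == MAP_FAILED) { perror("mmap"); return 1; }

        if (fork() == 0) {                 /* child: the writer */
            strcpy(shm, "hello from child via shared memory");
            return 0;
        }
        wait(NULL);                        /* crude synchronization */
        printf("parent read: %s\n", shm);  /* parent: the reader    */
        munmap(shm, 4096);
        return 0;
    }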

Five Marks Questions. (Marks | Course Outcome | Level)
*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos. 6-10 from the second half portion of the syllabus
1. Explain briefly about Inter Process Communication. 5 CO2 K2
Inter-Process Communication (IPC) refers to the mechanisms that
allow processes to communicate and share data with each other, either
within the same machine or over a network. IPC is essential in multi-
process systems where different processes need to cooperate, exchange
data, or synchronize their execution.
Key IPC Mechanisms: (2 marks)
1. Pipes: A unidirectional communication channel that allows one
process to send data to another. It is commonly used for
communication between related processes on the same system.
2. Message Queues: Allow processes to send and receive messages
in an organized queue, typically in a first-in, first-out (FIFO)
order. They support both synchronous and asynchronous
communication.
3. Shared Memory: Enables processes to access a common memory space. It allows for high-speed communication, as
processes can directly read from and write to the shared memory.
4. Semaphores: A synchronization tool used to manage access to
shared resources. Semaphores prevent race conditions by
ensuring that only one process can access a shared resource at a
time.
5. Sockets: Facilitate communication between processes on
different machines or the same machine over a network. Sockets
support both connection-oriented communication (TCP) and
connectionless communication (UDP).
IPC plays a vital role in modern operating systems, enabling processes to
share data, synchronize their actions, and perform complex tasks in a
coordinated manner. It is used extensively in client-server systems,
parallel processing, and distributed computing.
[Diagram: IPC mechanisms] (2 marks)

Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
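
A minimal sketch of the first mechanism above: an ordinary pipe between related processes, where the child writes one message and the parent reads it.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                   /* fd[0]: read end, fd[1]: write end */
        if (pipe(fd) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {           /* child: producer */
            close(fd[0]);
            const char *msg = "hello through the pipe";
            write(fd[1], msg, strlen(msg) + 1);
            close(fd[1]);
            return 0;
        }
        close(fd[1]);                /* parent: consumer */
        char buf[64];
        read(fd[0], buf, sizeof buf);
        printf("parent received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }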

2. Explain the process synchronization with (i) Producer-Consumer 5 CO2 K2


problem (ii) Reader-Writer problem.
The classic problems of synchronization involve scenarios where
multiple processes or threads need to cooperate and access shared
resources in a way that prevents conflicts and ensures correct operation.
These problems are designed to illustrate common challenges in
concurrent programming. Here are the key classic synchronization
problems:
1. The Producer-Consumer Problem (2.5 marks):
   • Problem: A producer generates data and places it in a buffer, while a consumer takes data from the buffer. The challenge is to synchronize the two processes so that the producer doesn't add data when the buffer is full, and the consumer doesn't try to remove data when the buffer is empty.
   • Solution: Use semaphores or condition variables to manage access to the buffer and synchronize the producer and consumer.
2. The Readers-Writers Problem (2.5 marks):
   • Problem: A shared resource is being read and written by multiple processes. Readers can access the resource simultaneously, but if a writer is accessing it, no other process (reader or writer) should access it. The challenge is to ensure that multiple readers can access the resource concurrently while maintaining exclusive access for writers.
   • Solution: Implement two types of semaphores: one for managing reader access and another for managing writer access, ensuring that writers have exclusive access while allowing concurrent reading.

Evaluation Scheme:
Explanation – 5mark

3. Illustrate the process control block with a neat sketch. 5 CO2 K3

Process Control Block (PCB)
A Process Control Block (PCB) is a data structure used by the operating system to store information about a process. It is a crucial element in
process management, as it allows the operating system to keep track of
all the details related to a running process, such as its state, program
counter, CPU registers, memory management, and I/O status.
When a process is created, the operating system allocates a PCB for it.
The PCB is used throughout the lifetime of the process to store
information, and it is updated as the process transitions between different
states (e.g., running, waiting, ready, terminated).
Components of a Process Control Block
A typical Process Control Block (PCB) includes the following
components:
1. Process Identification:
o Process ID (PID): A unique identifier assigned to each
process.
o Parent Process ID (PPID): The PID of the process's
parent (useful for tracking process hierarchy).
2. Process State:
o The current state of the process (e.g., new, ready,
running, waiting, terminated).
3. Program Counter (PC):
o Contains the address of the next instruction to be
executed by the process. It helps in resuming the
execution of the process when it is scheduled to run.
4. CPU Registers:
o The contents of various CPU registers (e.g.,
accumulator, base register, stack pointer) are saved
here. This allows the process to continue from where it
left off after being interrupted or suspended.
5. Memory Management Information:
o Base and limit registers: These help define the process's
address space in memory.

o Page tables: In case of paging, this keeps track of the
process’s pages.
o Segment tables: In case of segmentation, this keeps
track of the segments of the process.
6. Accounting Information:
o Information such as the amount of CPU time used, user
time, system time, and other statistics related to process
resource usage.
7. I/O Status Information:
o Information about I/O devices allocated to the process
and the status of any I/O operations (e.g., files the
process has open, devices it is using, etc.).
8. Scheduling Information:
o Information such as the priority of the process, pointers
to scheduling queues (e.g., ready queue, waiting queue),
and any other process-related data used by the scheduler
to make decisions about when to run the process.

Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
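
For illustration only, the PCB fields described above can be pictured as a C structure. This is a teaching sketch, not any real kernel's layout; real PCBs (for example, Linux's task_struct) are far more elaborate.

    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* Simplified PCB mirroring the fields listed above. */
    struct pcb {
        int             pid;              /* process identification   */
        int             ppid;             /* parent process id        */
        enum proc_state state;            /* process state            */
        uintptr_t       program_counter;  /* next instruction address */
        uintptr_t       registers[16];    /* saved CPU registers      */
        uintptr_t       mem_base, mem_limit; /* memory management     */
        unsigned long   cpu_time_used;    /* accounting information   */
        int             open_files[16];   /* I/O status information   */
        int             priority;         /* scheduling information   */
        struct pcb     *next;             /* link in ready/wait queue */
    };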
4. Illustrate the three requirements for critical section problem and 5 CO2 K3
provide the solutions for the critical section problem.
Critical Section Problem
The critical section problem arises in a multiprocessor or
multithreaded environment where multiple processes or threads share
common resources, such as memory or files. The critical section is the
part of the program where a process accesses or modifies shared
resources. The goal is to ensure that when one process is executing in its
critical section, no other process can enter its own critical section to avoid
race conditions and data inconsistencies.
Three Requirements for Solving the Critical Section Problem (4 marks)
To solve the critical section problem, three essential requirements must
be met:
1. Mutual Exclusion:
o Requirement: Only one process or thread should be
allowed to execute in the critical section at any given
time. If one process is in its critical section, all other
processes must be excluded from entering their critical
sections until the first one exits.
o Explanation: This prevents race conditions where
multiple processes simultaneously access shared
resources and lead to incorrect results.
2. Progress:
o Requirement: If no process is currently in the critical
section and one or more processes are waiting to enter,
then one of the waiting processes must be allowed to
enter the critical section in a finite amount of time.
o Explanation: The solution should not cause processes to
wait indefinitely (no deadlock). As soon as the critical
section is available, one of the waiting processes should
be granted access.
3. Bounded Waiting:
o Requirement: There must be a limit on the number of
times a process can be bypassed by other processes before it is allowed to enter the critical section.
o Explanation: This prevents starvation, where some
processes could be perpetually blocked from entering
their critical section because other processes keep
entering.
Solutions to the Critical Section Problem
Various solutions exist to satisfy these three conditions. Here are a few
classic ones:
1. Peterson's Algorithm (for Two Processes):
   o Description: A simple and effective software-based solution for two processes. Each process has a flag to indicate if it wants to enter the critical section, and there is a shared variable to ensure mutual exclusion.
   o How it works:
     - Both processes set their flags and use a shared variable turn to decide who gets to enter the critical section.
     - Mutual exclusion is ensured because at most one process can set its flag and be allowed into the critical section based on the value of turn.
   o Meets Requirements:
     - Mutual Exclusion: Only one process can enter the critical section at a time.
     - Progress: If no process is in the critical section, one of the waiting processes is allowed to enter.
     - Bounded Waiting: A process can wait at most one turn before entering.
2. Semaphore Solution:
   o Description: Semaphores (a type of synchronization primitive) can be used to solve the critical section problem by controlling access to the critical section.
   o How it works:
     - A semaphore is initialized to 1, representing the availability of the critical section.
     - A process uses the P (wait) operation to decrement the semaphore before entering the critical section and the V (signal) operation to increment it after exiting the critical section.
     - If the semaphore value is 0, processes must wait until it is incremented (indicating that the critical section is available).
   o Meets Requirements:
     - Mutual Exclusion: Only one process can decrement the semaphore and enter the critical section at a time.
     - Progress: If no process is in the critical section, one waiting process will be allowed to enter.
     - Bounded Waiting: Semaphores guarantee that processes will not wait forever to enter the critical section.
3. Monitors:
   o Description: A higher-level synchronization mechanism that provides an abstraction to manage mutual exclusion and condition synchronization. Monitors encapsulate shared resources and provide procedures to access them.
   o How it works:
     - A monitor contains a lock and condition variables to allow only one process to access the shared resource at a time. If another process needs to wait, it can do so using wait() and signal() operations on condition variables.
   o Meets Requirements:
     - Mutual Exclusion: Only one process can execute within a monitor at a time.
     - Progress: If no process is in the critical section, one of the waiting processes is allowed to enter.

Evaluation Scheme:
Definition – 1mark
Explanation – 4mark
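
A sketch of Peterson's algorithm for two threads, written with C11 sequentially consistent atomics (without them, modern hardware may reorder the flag and turn accesses and break the algorithm). It is shown for teaching purposes; production code would use locks.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_bool flag[2];          /* flag[i]: thread i wants in */
    static atomic_int  turn;             /* whose turn it is to yield  */
    static int shared_counter = 0;       /* the protected resource     */

    static void *worker(void *arg) {
        int me = *(int *)arg, other = 1 - me;
        for (int i = 0; i < 100000; i++) {
            atomic_store(&flag[me], true);   /* I want to enter          */
            atomic_store(&turn, other);      /* but you may go first     */
            while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
                ;                            /* entry section: busy-wait */
            shared_counter++;                /* critical section         */
            atomic_store(&flag[me], false);  /* exit section             */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, worker, &id0);
        pthread_create(&t1, NULL, worker, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %d (expected 200000)\n", shared_counter);
        return 0;
    }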

5. Illustrate in detail about semaphore and its usage for solving 5 CO2 K3
synchronization problem.
Semaphores and Their Usage in Solving Synchronization Problems
A semaphore is a synchronization primitive used in operating systems
and concurrent programming to control access to shared resources and to
coordinate the execution of processes or threads. Semaphores help solve
synchronization problems by ensuring that processes or threads access
shared resources in an orderly and controlled manner, avoiding race
conditions, deadlocks, and other concurrency issues.
Definition of Semaphore
A semaphore is an integer variable that is accessed through two atomic operations:
• P (Proberen): Also called the wait or decrement operation, this is used by a process to request access to a resource. If the semaphore value is greater than 0, it is decremented, and the process proceeds. If the semaphore value is 0, the process is blocked and placed in a waiting queue until the semaphore becomes positive again.
• V (Verhogen): Also called the signal or increment operation, this is used to release a resource. The semaphore value is incremented, and if any process is waiting in the queue, it may be unblocked and allowed to proceed.
Semaphores can be classified into two types:
1. Counting Semaphore: This type allows arbitrary values and is
typically used to manage a pool of resources (e.g., multiple
printers, database connections). The value of a counting
semaphore indicates how many resources are available.
2. Binary Semaphore (Mutex): A binary semaphore can only have
two values (0 or 1). It is used to implement mutual exclusion
(mutex), ensuring that only one process or thread can access the
critical section at any time.
Usage of Semaphores for Synchronization
Semaphores are used to solve a variety of synchronization problems in
concurrent systems. Below are some key applications:
1. Mutual Exclusion (Mutex)
• Problem: When multiple processes or threads need to access shared resources (like variables or files), mutual exclusion ensures that only one process or thread can access a resource at a time to prevent race conditions.
• Solution with Semaphore:
  o A binary semaphore is initialized to 1, indicating that the resource is available.
  o Before entering the critical section, a process performs the P (wait) operation to check the semaphore.
  o If the semaphore value is 1, it is decremented to 0, and the process enters the critical section.
  o After exiting the critical section, the process performs the V (signal) operation, incrementing the semaphore to 1, indicating that the resource is now available for other processes.
This guarantees that only one process can access the critical section at a time, thereby preventing race conditions.
2. Producer-Consumer Problem
 Problem: The producer-consumer problem involves two
processes (a producer and a consumer) that share a fixed-size
buffer. The producer generates data and places it into the buffer,
while the consumer consumes the data. The challenge is to ensure
that the producer does not produce data when the buffer is full,
and the consumer does not try to consume from an empty buffer.
 Solution with Semaphore:
o Two semaphores are used:
1. Empty Semaphore: Initialized to the size of the
buffer, it tracks the number of empty slots in the
buffer.
2. Full Semaphore: Initialized to 0, it tracks the
number of items in the buffer.
o The producer performs the P (wait) operation on the
empty semaphore before adding data to the buffer (if
there is space).
o The consumer performs the P (wait) operation on the full
semaphore before consuming data from the buffer (if
there is data).
o After adding or consuming data, the producer and
consumer each perform the V (signal) operation on the
corresponding semaphores to update the state of the
buffer.
This ensures that the producer and consumer are synchronized and the
buffer does not overflow or underflow.
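A sketch of this scheme in pseudocode follows; produce_item, insert,
remove_item and consume are hypothetical helpers, and a third semaphore
(mutex) is added here to protect the buffer itself, beyond the two
semaphores described above:

    semaphore empty = N;            /* N = buffer size: free slots */
    semaphore full  = 0;            /* filled slots */
    semaphore mutex = 1;            /* guards the buffer data structure */

    void producer(void) {
        while (true) {
            item = produce_item();
            P(&empty);              /* wait for a free slot */
            P(&mutex);
            insert(item);           /* place item in the buffer */
            V(&mutex);
            V(&full);               /* one more item is available */
        }
    }

    void consumer(void) {
        while (true) {
            P(&full);               /* wait for an item */
            P(&mutex);
            item = remove_item();   /* take item from the buffer */
            V(&mutex);
            V(&empty);              /* one more slot is free */
            consume(item);
        }
    }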
3. Readers-Writers Problem
 Problem: The readers-writers problem involves processes that
read from and write to a shared resource. Readers can access the
resource simultaneously, but writers require exclusive access.
The challenge is to allow concurrent reading but ensure that
writers have exclusive access when they need it.
 Solution with Semaphore:
o A mutex semaphore is used to ensure mutual exclusion
for the writer.
o A read-count semaphore is used to keep track of the
number of readers currently accessing the resource.
o When a reader enters, it increments the read-count and
checks if it is the first reader (to prevent writers from
accessing the resource). If it is the first reader, the mutex
semaphore is used to block writers.
o When a reader leaves, it decrements the read-count, and
if it is the last reader, the mutex semaphore allows
waiting writers to enter.
o Writers perform the P (wait) operation on the mutex to
ensure exclusive access, preventing both other writers
and readers from accessing the resource during writing.
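A pseudocode sketch of this first-readers-writers scheme is given below;
here read_count is an ordinary integer guarded by the mutex semaphore, and
wrt is the writer-exclusion semaphore described above:

    semaphore mutex = 1;            /* protects read_count */
    semaphore wrt   = 1;            /* writer exclusion */
    int read_count  = 0;

    void reader(void) {
        P(&mutex);
        read_count++;
        if (read_count == 1) P(&wrt);   /* first reader locks out writers */
        V(&mutex);

        /* ... read the shared resource ... */

        P(&mutex);
        read_count--;
        if (read_count == 0) V(&wrt);   /* last reader admits writers */
        V(&mutex);
    }

    void writer(void) {
        P(&wrt);                        /* exclusive access */
        /* ... write the shared resource ... */
        V(&wrt);
    }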
4. Deadlock Avoidance
 Problem: Deadlock occurs when two or more processes are each
waiting for the other to release resources, causing them to be
stuck in an infinite wait cycle.
 Solution with Semaphore:
o Semaphores can be used with timeouts or resource
ordering to avoid deadlocks.
o For instance, by defining an ordering of resource
requests (e.g., always requesting resource A before
resource B), deadlock can be prevented because
processes will not hold one resource while waiting for
another in a circular wait pattern.
o Additionally, the bounded waiting property of
semaphores ensures that no process waits indefinitely,
thus preventing starvation and reducing the chances of
deadlock.
Evaluation Scheme:
Explanation – 3 marks
Usage – 2 marks
6. Explain in detail about monitors. 5 CO2 K2
Monitors are a higher-level synchronization construct that simplifies
process synchronization by providing a high-level abstraction for data
access and synchronization. Monitors are implemented as programming
language constructs, typically in object-oriented languages, and provide
mutual exclusion, condition variables, and data encapsulation in a single
construct.
1. A monitor is essentially a module that encapsulates a shared
resource and provides access to that resource through a set of
procedures. The procedures provided by a monitor ensure that
only one process can access the shared resource at any given
time, and that processes waiting for the resource are suspended
until it becomes available.
2. Monitors are used to simplify the implementation of concurrent
programs by providing a higher-level abstraction that hides the
details of synchronization. Monitors provide a structured way of
sharing data and synchronization information, and eliminate the
need for complex synchronization primitives such as semaphores
and locks.
3. The key advantage of using monitors for process synchronization
is that they provide a simple, high-level abstraction that can be
used to implement complex concurrent systems. Monitors also
ensure that synchronization is encapsulated within the module,
making it easier to reason about the correctness of the system.
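As an illustrative sketch, a bounded-buffer monitor can be written in
monitor-style pseudocode; BoundedBuffer, not_full and not_empty are
hypothetical names, and wait/signal operate on condition variables:

    monitor BoundedBuffer {
        int buffer[N];                     /* N is the buffer capacity */
        int count = 0;
        condition not_full, not_empty;

        procedure insert(item) {
            if (count == N)
                wait(not_full);            /* suspend until a slot frees up */
            /* add item to buffer; count++ */
            signal(not_empty);
        }

        procedure remove() {
            if (count == 0)
                wait(not_empty);           /* suspend until data arrives */
            /* take item from buffer; count-- */
            signal(not_full);
            return item;
        }
    }

Because only one process is ever active inside the monitor, the buffer
updates need no explicit locks.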
Evaluation Scheme:
Explanation – 3 marks
Diagram – 2 marks
7. Determine the role of pipes in inter-process communication. How are 5 CO2 K3
named pipes different from unnamed pipes?
Role of Pipes in Inter-Process Communication (IPC)
Pipes are one of the simplest and most widely used mechanisms for
Inter-Process Communication (IPC) in operating systems. They
provide a way for processes to communicate with each other by sending
and receiving data. A pipe allows one process to write data to the pipe,
while another process can read from the pipe, enabling communication
between the processes. This form of communication is especially useful
in scenarios where data needs to be passed between a producer and a
consumer process, or in a client-server architecture.
Pipes can be used for communication between related processes (e.g.,
parent-child processes) or even between unrelated processes, depending
on how they are configured.
Characteristics of Pipes:
1. Unidirectional Communication: Pipes are typically
unidirectional, meaning data flows in one direction — from the
writing process (producer) to the reading process (consumer).
However, two pipes can be used for full-duplex communication
(i.e., two-way communication).
2. Buffered Communication: When data is written to the pipe, it
may be buffered by the operating system until the receiving
process reads it. This buffer allows the writer and reader to
operate asynchronously.
3. Simplicity and Speed: Pipes are efficient for passing small
amounts of data between processes, and the kernel optimizes the
communication, making it faster than many higher-level IPC
methods.
4. Blocking Behavior: If a process tries to read from an empty pipe
or write to a full pipe, the operating system can block the process
until there is data to read or space to write, preventing busy-
waiting.
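A minimal parent-child sketch using the Unix pipe() and fork() system
calls illustrates these characteristics (error handling omitted):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        char buf[32];

        pipe(fd);                        /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {               /* child acts as the consumer */
            close(fd[1]);                /* close unused write end */
            read(fd[0], buf, sizeof(buf));
            printf("child read: %s\n", buf);
        } else {                         /* parent acts as the producer */
            close(fd[0]);                /* close unused read end */
            write(fd[1], "hello", 6);    /* includes the terminating '\0' */
        }
        return 0;
    }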
Types of Pipes: Named Pipes vs. Unnamed Pipes (3 marks)
Unnamed Pipes:
 Definition: Unnamed pipes are used for communication between
related processes, typically between a parent and a child
process. These pipes are created using system calls (e.g., pipe()
in Unix/Linux) and do not have a name or filename associated
with them.
 Characteristics:
1. Temporary: Unnamed pipes exist as long as the
processes using them exist. Once the processes terminate,
the pipe is closed, and the communication channel is
destroyed.
2. Limitations: Unnamed pipes can only be used by
processes that have a direct relationship, such as a parent
and a child. They cannot be used by unrelated processes.
3. Communication: Data flows from one process to
another in a single direction (producer to consumer).
Named Pipes (FIFOs):
 Definition: Named pipes have a name in the filesystem (created
with mkfifo in Unix/Linux), so they can be opened by any process
that knows the name, including unrelated processes.
 Characteristics:
1. Persistent: A named pipe exists as a special file until it
is explicitly deleted, independent of the processes using it.
2. Unrelated Processes: Because the pipe is addressed by
name, client and server processes with no parent-child
relationship can communicate through it.
Evaluation Scheme:
Role – 1 mark
Explanation – 2 marks
Types – 2 marks
8. Discuss the concept of a client-server architecture using pipes for 5 CO2 K2
communication.
Client-Server Architecture Using Pipes for Communication (1 mark)
The client-server architecture is a popular model for structuring
applications where one process (the client) requests services, and another
process (the server) provides them. This architecture is used extensively
in networked systems, database systems, and many other distributed
applications.
In the context of inter-process communication (IPC), pipes provide a
simple and efficient mechanism for communication between client and
server processes. Pipes are especially useful for local communication
between processes running on the same machine, and they help facilitate
data exchange between the client and server without the need for more
complex communication mechanisms like sockets.
Basic Concept of Client-Server Architecture (2 marks)
 Client: A process that requests a service or resource from another
process (the server).
 Server: A process that provides a service or resource to the
client. The server waits for requests from clients, processes them,
and sends back responses.
In a client-server architecture using pipes:
 The server typically creates a pipe (or a set of pipes) for
communication.
 The client writes its request to the pipe, and the server reads
from the pipe to get the request.
 The server then processes the request, generates a response, and
writes it back to the pipe, allowing the client to read the response.
How Client-Server Communication Works Using Pipes
In this architecture, there are two main types of pipes that can be used:
1. Unnamed Pipes (for communication between related processes,
like parent-child).
2. Named Pipes (FIFOs) (for communication between unrelated
processes).
How the Client and Server Communicate Using Pipes:
1. Named Pipe Creation: The server creates a named pipe
(/tmp/my_fifo), which is a special file in the filesystem that can
be accessed by both the client and server.
2. Client Writes to Pipe: The client opens the named pipe in write
mode, sends its request (e.g., a string of characters or a data
structure), and then closes the pipe.
3. Server Reads from Pipe: The server opens the same named pipe
in read mode, reads the client’s request, processes it, and then
writes the response back to the pipe.
4. Client Reads Response: After the server writes the response to
the pipe, the client reopens the pipe (this time in read mode),
reads the server’s response, and then closes the pipe.
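A minimal sketch of the server side of these steps, using the named pipe
/tmp/my_fifo from the text (error handling omitted):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        char request[128];

        mkfifo("/tmp/my_fifo", 0666);             /* step 1: create the FIFO */

        int fd = open("/tmp/my_fifo", O_RDONLY);  /* step 3: read the request */
        read(fd, request, sizeof(request));
        close(fd);

        fd = open("/tmp/my_fifo", O_WRONLY);      /* write the response back */
        write(fd, "response", 9);
        close(fd);
        return 0;
    }

The client mirrors these calls: it opens the FIFO for writing, sends its
request, then reopens it for reading to collect the response.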
Advantages of Using Pipes in Client-Server Communication (2 marks)
1. Simplicity: Pipes provide a simple mechanism for data transfer
between processes. The client writes to the pipe, and the server
reads from it, making the communication process easy to
implement.
2. Efficient Communication: Data is transferred directly through
the operating system’s kernel, which optimizes communication,
especially for small to moderate-sized data.
3. No Need for Complex Setup: Unlike network-based
communication (e.g., using sockets), pipes do not require
network setup or handling of complex protocols. Named pipes
are simply created as special files in the filesystem, making them
easy to use for IPC.
4. Data Synchronization: Since pipes are generally blocking, the
communication between client and server is naturally
synchronized — the server will wait for the client’s request
before responding.
Limitations of Using Pipes in Client-Server Architecture:
1. Limited to Local Communication: Pipes work only for
processes that reside on the same machine. They cannot be used
for communication between processes on different machines (for
that, a different communication mechanism like sockets is
required).
2. Blocking Nature: By default, pipes block the writer if the pipe’s
buffer is full or the reader is not available, and they block the
reader if there is no data. While this is useful in synchronizing
processes, it can lead to inefficiencies in certain scenarios (e.g., if
the server is not responding as expected).
3. Limited Buffer Size: Pipes typically have a small buffer size. If
the data to be written exceeds the buffer size, the writer may be
blocked until there is space to write, which could lead to delays if
not managed correctly.
Evaluation Scheme:
Explanation – 3 marks
Limitation – 2 marks
9. Explain in detail about semaphores, their usage, their implementation    5 CO2 K2
to avoid busy waiting, and binary semaphores.
A semaphore is a synchronization primitive used to manage concurrent
access to shared resources in a multi-process or multi-threaded
environment. It is an integer variable used for controlling access to
resources such that only a certain number of processes or threads can
access the resource at a time. Semaphores are widely used in operating
systems for process synchronization to avoid issues like race conditions
and deadlocks.
Types of Semaphores (3 marks)
1. Counting Semaphore:
o A counting semaphore allows an arbitrary number of
processes to access a resource concurrently. The value of
the semaphore represents the number of available
resources.
o For example, if there are 5 identical printers, the
semaphore value is initialized to 5. When a process
requests access to a printer, the semaphore is
decremented. When the process is done, the semaphore is
incremented, signaling that the resource is now available.
2. Binary Semaphore (Mutex):
o A binary semaphore is a special case of counting
semaphores, where the value is restricted to either 0 or 1.
This kind of semaphore is used for mutual exclusion
(mutex).
o A binary semaphore is typically used to ensure that only
one process can access a critical section of code at any
given time.
Usage of Semaphores
Semaphores are typically used for the following purposes in a multi-
processing or multi-threading environment:
1. Mutual Exclusion:
o Semaphores can be used to ensure that only one process
or thread can access a critical section of code at any
given time. This prevents race conditions when multiple
processes try to modify shared data concurrently.
2. Synchronization:
o Semaphores help synchronize processes or threads that
need to wait for certain conditions to be met. For
example, a process might need to wait for another
process to produce some data before it can consume it
(producer-consumer problem).
3. Avoiding Deadlocks:
o Semaphores can help in preventing deadlocks by using
timeout mechanisms or limiting the number of resources
available to processes.
4. Resource Allocation:
o Semaphores are used to manage access to a limited
number of resources. For instance, in a situation where
multiple processes need access to a set of printers or
database connections, a counting semaphore can track
the number of available resources.
Implementation of Semaphores (Avoiding Busy Waiting)
To avoid busy waiting (where a process continually checks if a resource
is available and wastes CPU time), semaphores rely on blocking rather
than actively checking. Here's how semaphores are implemented to avoid
busy waiting:
1. Blocking Mechanism:
o When a process performs a P operation (wait), if the
semaphore value is 0 (i.e., no resources are available),
the process is put into a blocked state (waiting) and is
added to the semaphore's waiting queue.
o The process does not consume CPU cycles while
waiting. Instead, it remains in a blocked state until
another process performs a V operation (signal),
incrementing the semaphore and allowing the waiting
process to be unblocked.
2. Kernel Support:
o The operating system kernel manages the semaphore and
the process queues. When a process performs the P
operation and the semaphore is 0, the kernel places the
process in the waiting queue. When the semaphore is
incremented by another process, the kernel removes the
first process from the queue and schedules it for
execution.
3. No Busy Waiting:
o By using blocking and queueing, semaphores prevent
busy waiting. The process only resumes execution when
it is explicitly signaled (by the V operation), and the CPU
is free to execute other tasks in the meantime.
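This blocking behaviour is often shown with the classic textbook sketch
below; block() and wakeup() stand for the kernel primitives described
above and are assumptions here, not a specific OS API:

    typedef struct {
        int value;
        struct process *list;       /* queue of processes blocked on S */
    } semaphore;

    void wait(semaphore *S) {       /* P operation */
        S->value--;
        if (S->value < 0) {
            /* add the calling process to S->list */
            block();                /* suspend without consuming CPU */
        }
    }

    void signal(semaphore *S) {     /* V operation */
        S->value++;
        if (S->value <= 0) {
            /* remove a process P from S->list */
            wakeup(P);              /* move P back to the ready queue */
        }
    }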
Evaluation Scheme:
Explanation – 3 marks
Types – 2 marks
10. Define race condition. Explain how a race condition can be avoided      5 CO2 K2
in the critical section problem. Formulate a solution to the dining
philosophers problem so that no race condition arises.
Race Condition
A race condition occurs when multiple processes or threads access
shared resources concurrently and at least one of them modifies the
resource, leading to unpredictable or incorrect behavior. This typically
happens in systems where multiple threads or processes execute in
parallel and their operations are not properly synchronized. The outcome
depends on the relative timing or interleaving of the processes, hence the
term "race."
Race conditions are problematic because the final state of the shared
resource may depend on the sequence of execution, which can lead to
inconsistent results. It often leads to bugs that are difficult to detect and
reproduce since the result can vary each time the program is run.
For example, consider two processes that increment the same global
variable. If both processes read the variable, increment it, and then write
it back simultaneously, the result will not be correct because the value is
being overwritten. This is a classic case of a race condition.
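The sketch below reproduces that situation with POSIX threads; the final
count is frequently less than 200000 because the two increments interleave:

    #include <pthread.h>
    #include <stdio.h>

    int counter = 0;                     /* shared global variable */

    void *increment(void *arg) {
        for (int i = 0; i < 100000; i++)
            counter++;                   /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);   /* often < 200000: race condition */
        return 0;
    }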
How Race Conditions Can Be Avoided in the Critical Section Problem
The Critical Section Problem refers to the challenge of ensuring that
multiple processes or threads that share resources (e.g., variables,
memory) do not interfere with each other in a way that causes
inconsistent results. To prevent race conditions in the critical section, we
need to ensure that:
1. Mutual Exclusion: Only one process/thread can execute in its
critical section at a time. This prevents multiple processes from
simultaneously accessing shared resources.
2. Progress: If no process is in its critical section, and multiple
processes want to enter their critical section, one of them must be
allowed to enter.
3. Bounded Waiting: No process should have to wait indefinitely
to enter the critical section. Each process must eventually be
allowed to enter.
To implement these properties, synchronization mechanisms such as
locks, semaphores, mutexes, and monitors are typically used. These
mechanisms help avoid race conditions by enforcing mutual exclusion
and ensuring proper scheduling and synchronization.
Solution to Avoid Race Conditions: Using Semaphores in the Dining
Philosophers Problem
The Dining Philosophers Problem is a classic synchronization problem
that demonstrates issues related to race conditions, deadlock, and
resource sharing. The problem is typically described as follows:
 Five philosophers sit at a table, each with a bowl of spaghetti
and a fork between each pair of adjacent philosophers.
 A philosopher can only eat if they have both the left and right
forks.
 Philosophers alternate between thinking and eating.
 The problem is to design a solution that avoids race conditions
(i.e., prevents two philosophers from eating simultaneously with
the same fork), deadlock (i.e., no philosopher waits indefinitely),
and starvation (i.e., no philosopher is left forever without eating).
Solution Using Semaphores:
One possible solution to the Dining Philosophers Problem that avoids
race conditions, deadlock, and starvation uses semaphores and the
concept of mutexes. A semaphore is a synchronization primitive that can
control access to shared resources.
Here’s the solution that solves the problem:
 Use a semaphore (mutex) to ensure mutual exclusion when
accessing shared resources.
 Use a semaphore for each fork to ensure that no two philosophers
can use the same fork simultaneously.
Key Components:
 mutex: Ensures that no two philosophers can access the shared
resources (forks) simultaneously.
 semaphores for forks: A binary semaphore for each fork, which
is initially set to 1 (indicating that the fork is available).
Philosophers can pick up a fork only if the corresponding
semaphore is available.
Algorithm:
1. Initialization:
o Set up semaphore fork[5] for each fork, initialized to 1
(indicating the forks are available).
o Set up a mutex to ensure mutual exclusion when
checking and picking up forks.
2. Philosopher Behavior (for each philosopher):
o Thinking: A philosopher spends some time thinking and
does not need any resources.
o Picking Up Forks:
1. The philosopher locks the mutex to ensure
mutual exclusion.
2. The philosopher picks up the left fork and right
fork.
3. Release the mutex after picking up both forks to
allow others to continue thinking.
o Eating: Once the philosopher has both forks, they eat for
a while.
o Putting Down Forks:
1. The philosopher releases the left and right forks
(semaphores for both forks are incremented).
2. If any philosopher is waiting for the forks, they
can now proceed.
3. Deadlock Prevention: This solution ensures that the
philosophers do not deadlock by preventing all of them from
picking up one fork and waiting for the other. Philosophers only
pick up both forks when they are available, thus preventing a
circular wait.
4. Starvation Prevention: By ensuring that the mutex allows
philosophers to only pick up the forks when both are available,
starvation is avoided, and each philosopher can eventually eat.
Dining Philosophers with Semaphores: Pseudocode

// Global variables
semaphore mutex = 1;                  // Mutex for mutual exclusion
semaphore fork[5] = {1, 1, 1, 1, 1};  // One binary semaphore per fork

// Philosopher function
void philosopher(int i) {
    while (true) {
        think();                      // Philosopher is thinking

        wait(mutex);                  // Enter critical section
        wait(fork[i]);                // Pick up left fork
        wait(fork[(i + 1) % 5]);      // Pick up right fork
        signal(mutex);                // Exit critical section

        eat();                        // Philosopher is eating

        signal(fork[i]);              // Put down left fork
        signal(fork[(i + 1) % 5]);    // Put down right fork
    }
}

// Main function
int main() {
    // Create one thread per philosopher
    for (int i = 0; i < 5; i++) {
        create_thread(philosopher, i);
    }
}
Explanation of the Algorithm:
1. Mutex: The mutex ensures that only one philosopher can pick up
the forks at a time. The mutual exclusion ensures that if one
philosopher is picking up forks, no other philosopher can
simultaneously check the availability of forks.
2. Fork Semaphores: Each fork has a semaphore. If a philosopher
wants to pick up a fork, they must wait until the fork's semaphore
is available (i.e., set to 1). After using the fork, the philosopher
releases it by signaling the semaphore (setting it back to 1).
3. Philosopher Behavior: The philosopher picks up both forks and
only starts eating if both forks are available. After eating, they
put down the forks so others can use them.
Avoiding Race Conditions:
By using semaphores to manage access to the forks and the mutex to
prevent concurrent access to critical sections, this solution ensures that:
 Only one philosopher can pick up the forks at a time.
 No philosopher will starve because every philosopher is
guaranteed to eventually get both forks.
 Deadlock is avoided because a philosopher can only pick up both
forks if both are available.
Conclusion:
A race condition arises when multiple processes access shared resources
without proper synchronization, leading to inconsistent or unpredictable
behavior. In the Dining Philosophers Problem, race conditions can be
avoided by using synchronization mechanisms like semaphores and
mutexes to control access to shared resources (the forks). The solution
described here ensures that philosophers can eat without causing race
conditions, deadlocks, or starvation.
Evaluation Scheme:
Explanation – 4 marks
Algorithm – 1 mark
Unit – III
Unit Contents: CPU Scheduling and Deadlock Management
CPU Scheduling: Scheduling Criteria - Scheduling Algorithms. Deadlocks: Deadlock
Characterization - Methods for Handling Deadlocks - Deadlock Prevention - Deadlock
Avoidance - Deadlock Detection - Recovery from Deadlock. Case Study: Real Time CPU
scheduling

Two Marks Questions.
*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos. 6-10
from the second half portion of the syllabus            Marks  Course Outcome  Level
1. What is CPU Scheduling?                                                  2 CO3 K1
CPU scheduling is the process of determining which process in the ready
queue is to be allocated the CPU for execution.
2. Name two types of CPU scheduling. 2 CO3 K1
Preemptive scheduling and Non-preemptive scheduling.
3. What is the difference between Preemptive and Non-Preemptive             2 CO3 K1
scheduling?
Preemptive Scheduling                      Non-Preemptive Scheduling
CPU is allocated to a process              CPU is allocated to a process
for a specific time slice.                 until it terminates.
A process can be interrupted               A process cannot be interrupted
while it is under execution.               while it is under execution.
Waiting and response time is less.         Waiting and response time is high.
Example: Round Robin                       Example: FCFS
4. List any four CPU scheduling algorithms.
First-Come First-Served (FCFS) Scheduling Algorithm
Shortest Job First (SJF) Scheduling Algorithm 2 CO3 K1
Priority Scheduling Algorithm
Round Robin (RR) Scheduling Algorithm
5. What are the disadvantages of FCFS scheduling algorithm?
It suffers from the "convoy effect," where short processes get stuck 2 CO3 K1
waiting for long processes to complete.
6. Define waiting time and turnaround time in CPU scheduling. 2 CO3 K1
Waiting Time is the time a process spends waiting in a queue to get the
CPU, while Turn Around Time is the total time it takes a process to
complete from when it's submitted.
7. What is the objective of the Shortest Remaining Time First (SRTF)
scheduling algorithm? 2 CO3 K1
To minimize average waiting time by selecting the process with the
shortest remaining time for execution.
8. List out the criteria for CPU Scheduling.
CPU utilization, Throughput, waiting time, Turn Around Time and 2 CO3 K1
Response time.
9. Define deadlock in an operating system.
A deadlock occurs when a set of processes is in a state where each 2 CO3 K1
process is waiting for a resource held by another process, leading to
indefinite blocking.
10. List the four necessary conditions for deadlock. 2 CO3 K1
Mutual Exclusion, Hold and Wait, No Preemption, and Circular Wait.
11. What is the difference between deadlock prevention and deadlock
avoidance?
Deadlock prevention ensures at least one of the necessary conditions for 2 CO3 K1
deadlock is never satisfied, while deadlock avoidance dynamically
checks the resource-allocation state to avoid unsafe states.
12. What is a safe state in deadlock management?
A state is safe if the system can allocate resources to all processes in 2 CO3 K1
some order without leading to a deadlock.
13. Name two methods of handling deadlocks. 2 CO3 K1
Deadlock prevention and deadlock detection.
14. What is the purpose of the Banker's Algorithm?
The Banker's Algorithm is used to avoid deadlocks by checking resource 2 CO3 K1
allocation requests against system safety.
15. What is deadlock recovery?
Deadlock recovery is the process of regaining normal system operation 2 CO3 K1
after a deadlock, often by terminating processes or preempting resources.

Three Marks Questions.
*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos. 6-10
from the second half portion of the syllabus            Marks  Course Outcome  Level

1. Outline the two types of programs: a. I/O-bound and b. CPU-bound 3 CO3 K2
Which is more likely to have voluntary context switches, and which is
more likely to have nonvoluntary context switches? Explain your
answer.
1. I/O-bound programs: More likely to have voluntary context switches.
These programs frequently perform I/O operations and spend a lot of
time waiting for input/output devices. When an I/O-bound program issues
an I/O request, it voluntarily relinquishes the CPU, resulting in a
voluntary context switch.
2. CPU-bound programs: More likely to have nonvoluntary context
switches. These programs perform intensive computations and tend to use
the CPU for longer durations without frequent I/O operations. They are
often pre-empted by the operating system's scheduler to ensure fair CPU
sharing, resulting in nonvoluntary context switches.
Voluntary context switches occur when a thread explicitly yields the CPU
(e.g., during I/O), while nonvoluntary switches happen when the
scheduler forces a thread to relinquish the CPU (e.g., due to time slicing).
Evaluation Scheme:
Explanation of I/O-bound – 1.5 marks
Explanation of CPU-bound – 1.5 marks
2. Interpret a system implementing multilevel queue scheduling.             3 CO3 K2
What strategy can a computer user employ to maximize the amount
of CPU time allocated to the user’s process?
1. Understand the Queue Structure:
Multilevel queue scheduling assigns processes to different queues based
on their priority or type (e.g., foreground vs. background, interactive vs.
batch). Higher-priority queues usually get more CPU time.
2. Design Interactive or I/O-bound Processes:
Many multilevel queue systems prioritize interactive or I/O-bound
processes over CPU-bound ones. A user can design their process to
perform frequent I/O operations or include periodic yielding to appear as
an interactive process.
3. Manipulate Process Priority (if possible):
If the system allows user control over priority (e.g., using the `nice`
command in Unix-based systems), the user can increase their process's
priority to place it in a higher-priority queue.
4. Divide Work into Smaller Processes:
If the system allocates time slices or resources more favourably to
smaller or shorter processes, splitting a large process into smaller chunks
might help maximize CPU time for each individual process.
By tailoring the process to fit the characteristics of higher-priority
queues, the user can increase the CPU time allocated to their process.
Evaluation Scheme:
Explanation – 2 marks
Strategy – 1 mark

3. Summarize which of the following scheduling algorithms could result in   3 CO3 K2
starvation:
a. First-come, first-served
b. Shortest Job first
c. Round Robin
d. Priority
1. First-come, first-served (FCFS) - Does not result in starvation.
In FCFS, processes are executed in the order they arrive. Every process
eventually gets the CPU, even if some have to wait for a long time.
2. Shortest Job First (SJF) - Can result in starvation.
SJF prioritizes shorter processes. If shorter processes keep arriving,
longer processes may get indefinitely delayed, leading to starvation.
3. Round Robin (RR) - Does not result in starvation.
In RR, every process gets a fair time slice in a cyclic order. No process is
indefinitely delayed, so starvation does not occur.
4. Priority Scheduling - Can result in starvation.
Lower-priority processes may be indefinitely delayed if higher-priority
processes keep arriving, leading to starvation.
Shortest Job First and Priority Scheduling can result in starvation.
Evaluation Scheme:
Explanation – 3 marks
4. Explain how the following scheduling algorithms discriminate             3 CO3 K2
either in favour of or against short processes:
a. FCFS
b. RR
c. Multilevel feedback queues
a. FCFS - Discrimination Against Short Processes:
FCFS executes processes in the order of their arrival, without considering
their length. Short processes may have to wait behind longer processes,
resulting in the convoy effect, where a short process is delayed
unnecessarily because it arrived after a long process.
b. Round Robin - Neutral/Partial Favor for Short Processes:
RR assigns a fixed time slice (quantum) to each process in cyclic order.
Short processes can complete quickly if their execution fits within a time
slice or a few iterations. However, if the time quantum is too small, short
processes might face higher overhead due to frequent context switching.
c. Multilevel Feedback Queues - Strong Favor for Short Processes:
MLFQ favors short processes by prioritizing processes with shorter CPU
bursts. Processes that use less CPU time are promoted to higher-priority
queues, where they are scheduled first. Processes that use more CPU time
are demoted to lower-priority queues, giving preference to shorter
processes in higher-priority queues.
Evaluation Scheme:
Explanation – 3 marks
5. Explain about Real-Time Scheduling.                                      3 CO3 K2
Real-time scheduling refers to scheduling algorithms designed for systems
where tasks must
meet strict timing constraints. It ensures that critical tasks are completed
within their deadlines.
1. Types:
i. Hard Real-Time: Missing a deadline is catastrophic (e.g., medical
devices, avionics).
ii. Soft Real-Time: Missing a deadline is undesirable but not critical (e.g.,
video streaming).
2. Key Features:
- Priority-based scheduling, where higher priority is given to tasks with
stricter deadlines.
- Algorithms include Rate-Monotonic Scheduling (RMS) for static
priorities and Earliest Deadline First (EDF) for dynamic priorities.
3. Challenges:
- Ensuring predictability and meeting deadlines under varying workloads.
- Balancing resource utilization while avoiding task starvation or
overload.
Evaluation Scheme:
Explanation – 2 marks
Types – 1 mark
6. Can a system detect that some of its threads are starving? If you 3 CO3 K2
answer “yes,” Explain how it can. If you answer “no,” explain how
the system can deal with the starvation problem.
Yes, a system can detect thread starvation.
1. Detection of Starvation: The system can monitor the waiting time of
each thread in the resource queue. If a thread's waiting time exceeds a
predefined threshold while other threads are continuously granted
resources, the system can infer that the thread is starving.
2. How Detection Works: Implementing a timestamp or counter for each
thread’s request can help track how long it has been waiting. If the
thread's wait time becomes unusually long compared to others, it
indicates starvation.
3. Dealing with Starvation: To address starvation, the system can
implement priority adjustments, such as boosting the priority of a
starving thread or using aging techniques to ensure it eventually gets the
required resources.
Evaluation Scheme:
Explanation – 3 marks
7. Differentiate between starvation and deadlock.                           3 CO3 K2

Deadlock                                  Starvation
A situation where no process              A situation where a low-priority
proceeds with execution.                  process gets blocked by
                                          high-priority processes.
It is infinite waiting.                   It is a long wait, but not infinite.
Every deadlock is always                  Every starvation need not be a
starvation.                               deadlock.
Evaluation Scheme:
Explanation of Deadlock – 1.5 marks
Explanation of Starvation – 1.5 marks

8. Discuss the ways to recover from deadlock.                               3 CO3 K2
a. Process Termination – Abort all deadlocked processes, or abort one
process at a time until the deadlock cycle is eliminated.
b. Resource Preemption – Successively preempt some resources from
processes and give them to other processes until the deadlock cycle is
broken.
Evaluation Scheme:
Explanation – 3 marks
9. Summarize how resource trajectories can be helpful in avoiding           3 CO3 K2
deadlock.
Resource trajectories represent the sequence of resource allocations and
releases in a graphical manner. By visualizing the paths of resource usage
by different processes, they help in identifying potential conflicts or
unsafe states that may lead to deadlock. By carefully planning and
ensuring that the trajectories do not intersect in a way that causes circular
wait or hold-and-wait conditions, deadlocks can be avoided.
Evaluation Scheme:
Explanation – 3 marks
10. Discuss RAG (Resource Allocation Graph) with respect to deadlock.       3 CO3 K2
A Resource Allocation Graph is a directed graph used to represent
processes and resource allocations in a system and to detect potential
deadlocks. Request edges point from a process to a resource, and
assignment edges point from a resource to a process; a cycle in the graph
indicates a potential deadlock (a certain deadlock when each resource has
only a single instance).
Evaluation Scheme:
Explanation – 3 marks
Five Marks Questions.
*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos. 6-10
from the second half portion of the syllabus            Marks  Course Outcome  Level

1. Explain the concept of multilevel queue scheduling.                      5 CO3 K2
Multilevel queue scheduling divides the ready queue into multiple queues
based on process priority or type (e.g., system processes, interactive
processes). Each queue can have its own scheduling algorithm. Processes
do not move between queues.
Example:
Queue 1: System processes (FCFS)
Queue 2: Interactive processes (RR)
Queue 3: Batch processes (SJF)
Evaluation Scheme:
Explanation – 3 marks
Examples – 2 marks
2. Consider the following processes with the length of the CPU burst time   5 CO3 K2
in milliseconds.
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
All processes arrived in order P1, P2, P3, P4 and P5 at time zero.
1) Draw Gant charts illustrating execution of these processes for SJF,
Non-Preemptive priority (smaller priority number implies a higher
priority) & Round Robin (Quantum=1).
2) Calculate turnaround time for each process for scheduling
algorithms mentioned in part (1)
3) Calculate waiting time for each scheduling algorithms listed in
part (1)
a) Shortest Job First (SJF)
SJF executes the process with the smallest burst time first. At time t=0,
P1, P2, P3, P4 and P5 are all available, so by shortest burst time the
execution order becomes P2 -> P4 -> P3 -> P5 -> P1.
Gantt Chart:
| P2 | P4 | P3 | P5 | P1 |
0    1    2    4    9    19

Process   Burst time   Priority   WT = STE-AT     TAT = WT + BT
P1        10           3          9               19
P2        1            1          0               1
P3        2            3          2               4
P4        1            4          1               2
P5        5            2          4               9
Average                           16/5 = 3.2 ms   35/5 = 7.0 ms

b) Priority - Non-Preemptive
Smaller priority number indicates a higher priority. At time t=0, P1, P2,
P3, P4 and P5 are available, but as per the priority the execution order
changes to P2->P5->P1->P3->P4.
Gantt Chart:
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

Process   Burst time   Priority   WT = STE-AT     TAT = WT + BT
P1        10           3          6               16
P2        1            1          0               1
P3        2            3          16              18
P4        1            4          18              19
P5        5            2          1               6
Average                           41/5 = 8.2 ms   60/5 = 12.0 ms

c) Round Robin – Preemptive (Quantum = 1ms)


Each process gets a time slice (1 ms), and execution cycles through
processes until they finish. At time t=0, P1, P2, P3, P4 and P5 are
available, so as per the order received the execution order goes as P1-
>P2->P3->P4->P5.
Gantt Chart:
| P1 | P2 | P3 | P4 | P5 | P1 | P3 | P5 | P1 | P5 | P1 | P5 | P1 | P5 | P1 |
0    1    2    3    4    5    6    7    8    9    10   11   12   13   14   19

Process   Burst time   Priority   WT = TAT - BT   TAT = WT + BT
P1        10           3          9               19
P2        1            1          1               2
P3        2            3          5               7
P4        1            4          3               4
P5        5            2          9               14
Average                           27/5 = 5.4 ms   46/5 = 9.2 ms

Among the given scheduling algorithms, SJF has the lowest average
waiting time and average turnaround time. Hence it is the best choice for
the given data.

Evaluation Scheme:
Explanation – 3 marks
Diagram – 2 marks
3. What are the various criteria for a good process scheduling              5 CO3 K2
algorithm? Explain any two preemptive scheduling algorithms in brief.
Criteria for a Good Process Scheduling Algorithm:
1. CPU Utilization: Maximize the CPU's usage, ensuring it remains
as busy as possible.
2. Throughput: Increase the number of processes completed per unit
of time.
3. Turnaround Time: Minimize the time taken from process
submission to its completion.
4. Waiting Time: Reduce the time processes spend waiting in the
ready queue.
5. Response Time: Minimize the time from submission of a process
to the first response it produces.
6. Fairness: Ensure all processes get equitable CPU time, avoiding
starvation.
7. Scalability: Adapt well to varying loads and process numbers.
Preemptive Scheduling Algorithms:
A) Shortest Remaining Time First (SRTF):
An extension of Shortest Job First (SJF), where the CPU is
preempted if a new process arrives with a burst time smaller than the
remaining burst time of the currently running process.
Advantages: Optimizes turnaround and waiting times for shorter
processes.
Disadvantages: May lead to process starvation for long-running jobs.
B) Round Robin (RR):
Each process is assigned a fixed time quantum. If the process
doesn't finish within the quantum, it is preempted and placed back in the
ready queue.
Advantages: Fair allocation of CPU time, ensures no process is starved.
Disadvantages: Performance depends on the choice of time quantum;
too small causes overhead, too large resembles First-Come-First-Served
(FCFS).
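A minimal Round Robin sketch in C is shown below; it assumes all
processes arrive at t=0 and uses illustrative burst times, printing each
process's completion time:

    #include <stdio.h>

    int main(void) {
        int burst[] = {10, 1, 2, 1, 5};       /* remaining burst times */
        int n = 5, quantum = 1, time = 0, done = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0) continue;  /* process already finished */
                int slice = burst[i] < quantum ? burst[i] : quantum;
                time += slice;                /* run one quantum (or less) */
                burst[i] -= slice;
                if (burst[i] == 0) {
                    done++;
                    printf("P%d completes at t=%d\n", i + 1, time);
                }
            }
        }
        return 0;
    }

With simultaneous arrivals, cycling through the array in fixed order
matches the rotation of the Round Robin ready queue.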
Evaluation Scheme:
Explanation – 3 marks
Advantages and Disadvantages – 2 marks
4. Consider the following processes with the length of the CPU burst time   5 CO3 K2
in milliseconds.
Process Arrival Time Burst Time
P1 3 1
P2 1 4
P3 4 2
P4 0 6
P5 2 3
1) Draw Gant charts illustrating execution of these processes for
FCFS, SJF Non-Preemptive & Round Robin (Quantum=2 ms).
2) Calculate turnaround time for each process for scheduling
algorithms mentioned in part (1)
3) Calculate waiting time for each scheduling algorithms listed in
part (1)
a) First Come First Served (FCFS)
FCFS executes the processes in the order of their arrival. At time t=0, P4
alone, followed by P2 at t=1, P5 at t=2, P1 at t=3 and P3 at t=4, as per
this the execution order changes to P4->P2->P5->P1->P3.
Gantt Chart:
| P4 | P2 | P5 | P1 | P3 |
0    6    10   13   14   16

Process   Arrival time   Burst time   WT = STE-AT     TAT = WT + BT
P1        3              1            10              11
P2        1              4            5               9
P3        4              2            10              12
P4        0              6            0               6
P5        2              3            8               11
Average                               33/5 = 6.6 ms   49/5 = 9.8 ms

b) Shortest Job First (SJF Non-Preemptive)
SJF executes the process with the smallest burst time first, subject to
arrival times. At time t=0, only P4 has arrived, followed by P2 at t=1, P5
at t=2, P1 at t=3 and P3 at t=4. As per the arrival times and shortest
burst times, the execution order is P4 -> P1 -> P3 -> P5 -> P2.
Gantt Chart:
| P4 | P1 | P3 | P5 | P2 |
0    6    7    9    12   16

Process   Arrival Time   Burst Time   WT = STE-AT     TAT = WT + BT
P1        3              1            3               4
P2        1              4            11              15
P3        4              2            3               5
P4        0              6            0               6
P5        2              3            7               10
Average                               24/5 = 4.8 ms   40/5 = 8.0 ms

c) Round Robin (Quantum = 2 ms)
Each process gets a time slice (2 ms), and execution cycles through the
ready queue until all processes finish. P4 arrives at t=0, P2 at t=1, P5
at t=2, P1 at t=3 and P3 at t=4; the execution order follows accordingly.
Gantt Chart:
| P4 | P2 | P5 | P4 | P1 | P3 | P2 | P5 | P4 |
0    2    4    6    8    9    11   13   14   16

Process   Arrival Time   Burst Time   WT = TAT - BT   TAT = WT + BT
P1        3              1            5               6
P2        1              4            8               12
P3        4              2            5               7
P4        0              6            10              16
P5        2              3            9               12
Average                               37/5 = 7.4 ms   53/5 = 10.6 ms
Among the given scheduling algorithms, SJF has the lowest average
waiting time and average turnaround time. Hence it is the best choice for
the given data.

Evaluation Scheme:
Explanation – 3 marks
Diagram – 2 marks
5. Infer the following set of processes, with the length of the CPU burst   5 CO3 K2/K3
and arrival time given in milliseconds.
Process Burst time (B.T) Arrival time(A.T)
P1 8 0.00
P2 4 1.001
P3 9 2.001
P4 5 3.001
P5 3 4.001
Draw four Gantt charts that illustrate the execution of these processes
using the following scheduling algorithms: FCFS, SJF and RR
(quantum=2ms) scheduling. Calculate waiting time and turnaround time
of each process for each of the scheduling algorithms and find the
average waiting time and average turnaround time.
FCFS:
Average Waiting Time = 11.3992
Average Turnaround Time = 17.199
SJF:
Average Waiting Time = 6.999
Average Turnaround Time = 12.799
Priority:
Average Turnaround Time = 17.192
Average Waiting Time = 11.392
Round Robin:
Average Turnaround Time = 20.392
Average Waiting Time = 14.592
Evaluation Scheme:
Explanation – 3 marks
Diagram – 2 marks
6. Explain the Banker's Algorithm implementation with an example.           5 CO3 K2
The Banker's Algorithm is used to avoid deadlocks by ensuring that
resource allocation does not lead to an unsafe state. It checks if the
requested resources can be safely allocated.
Example:
Available: [3, 3, 2]
Allocation (A): [0, 1, 0], [2, 0, 0], [3, 0, 2]
Max (M): [7, 5, 3], [3, 2, 2], [9, 0, 2]
Need (N = M - A): [7, 4, 3], [1, 2, 2], [6, 0, 0]
The algorithm checks if available resources satisfy any process's
needs and processes can be completed in sequence.
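A compact sketch of the safety check is given below; the matrices here are
small illustrative values (not the snapshot above), chosen so that the safe
sequence P0, P1, P2 exists:

    #include <stdbool.h>
    #include <stdio.h>

    #define N 3   /* processes */
    #define M 3   /* resource types */

    int main(void) {
        int work[M]     = {3, 3, 2};                          /* = Available */
        int need[N][M]  = {{1, 2, 2}, {0, 1, 1}, {2, 2, 0}};  /* illustrative */
        int alloc[N][M] = {{0, 1, 0}, {2, 0, 0}, {3, 0, 2}};
        bool finished[N] = {false, false, false};

        for (int count = 0; count < N; ) {
            bool progress = false;
            for (int i = 0; i < N; i++) {
                if (finished[i]) continue;
                bool ok = true;
                for (int j = 0; j < M; j++)
                    if (need[i][j] > work[j]) ok = false;
                if (ok) {                     /* Pi can finish and release */
                    for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                    finished[i] = true;
                    printf("P%d can complete\n", i);
                    progress = true;
                    count++;
                }
            }
            if (!progress) { printf("unsafe state\n"); return 1; }
        }
        printf("system is in a safe state\n");
        return 0;
    }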
Evaluation Scheme:
Explanation – 3 marks
Example – 2 marks
7. Consider the following snapshot of a system:                             5 CO3 K2
Tasks    Allocation    Max         Available
         A B C D       A B C D     A B C D
T0       0 0 1 2       0 0 1 2     1 5 2 0
T1       1 0 0 0       1 7 5 0
T2       1 3 5 4       2 3 5 6
T3       0 6 3 2       0 6 5 2
T4       0 0 1 4       0 6 5 6
Explain the following questions using the banker’s algorithm:
a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If a request from thread T1 arrives for (0,4,2,0), can the request be
granted immediately?
a. Need Matrix
Need = Max – Allocation
Tasks A B C D
T0 0 0 0 0
T1 0 7 5 0
T2 1 0 0 2
T3 0 0 2 0
T4 0 6 4 2

b. Is the System Safe (Safety Algorithm)?
Work = Available
Work = 1 5 2 0
T0: If Need <= Work ?
0 0 0 0 <= 1 5 2 0  Yes
Work = Work + Allocation
     = 1 5 2 0 + 0 0 1 2
Work = 1 5 3 2
T0 completed and added into Safe Sequence <T0>
T1: If Need <= Work ?
0 7 5 0 <= 1 5 3 2  No
T1 cannot complete its work, so it will wait. Safe Sequence
remains the same <T0>
T2: If Need <= Work ?
1 0 0 2 <= 1 5 3 2  Yes
Work = Work + Allocation
     = 1 5 3 2 + 1 3 5 4
Work = 2 8 8 6
T2 completes its work, and is added into Safe Sequence <T0, T2>
T3: If Need <= Work ?
0 0 2 0 <= 2 8 8 6  Yes
Work = Work + Allocation
     = 2 8 8 6 + 0 6 3 2
Work = 2 14 11 8
T3 completes its work, and is added into Safe Sequence <T0, T2, T3>
T4: If Need <= Work ?
0 6 4 2 <= 2 14 11 8  Yes
Work = Work + Allocation
     = 2 14 11 8 + 0 0 1 4
Work = 2 14 12 12
T4 completes its work, and is added into Safe Sequence <T0, T2, T3, T4>
T1: If Need <= Work ?
0 7 5 0 <= 2 14 12 12  Yes
Work = Work + Allocation
     = 2 14 12 12 + 1 0 0 0
Work = 3 14 12 12
T1 completes its work, and is added into Safe Sequence <T0, T2, T3, T4, T1>
All tasks completed and a safe sequence established. Therefore there is no
chance of deadlock.
c. If a request from thread T1 arrives for (0,4,2,0), can the
request be granted immediately?
Step 1: Request <= Need ? (Resource-Request Algorithm)
0 4 2 0 <= 0 7 5 0  Yes
Step 2: Request <= Available ?
0 4 2 0 <= 1 5 2 0  Yes
Step 3:
Available = Available – Request
          = 1 5 2 0 – 0 4 2 0
          = 1 1 0 0
T1: Allocation = Allocation + Request
              = 1 0 0 0 + 0 4 2 0
              = 1 4 2 0
T1: Need = Need – Request
         = 0 7 5 0 – 0 4 2 0
         = 0 3 3 0
If the resulting allocation state is safe, the transaction is
completed and task T1 is allocated with its additional resources.
However, if the new state is unsafe, then T1 must wait and the
old resource allocation state is restored.
Evaluation Scheme:
Explanation – 3 marks
Example – 2 marks
8. Suppose that a system is in an unsafe state. Illustrate that it is       5 CO3 K2
possible for the threads to complete their execution without entering
a deadlocked state.
If a system is in an unsafe state, it doesn't necessarily mean it is
deadlocked; it just means that careful resource allocation is required to
avoid deadlock. Here's a concise explanation:
i.Unsafe State Definition: An unsafe state indicates that not all resource
allocation sequences guarantee the completion of all threads, but it does
not imply deadlock has occurred yet.
ii.Thread Execution without Deadlock: By carefully selecting the order
in which threads are granted resources, it may still be possible for all
threads to complete their execution. For example, if resources are
allocated to a thread that can immediately finish and release its held
resources, these resources become available for other threads, potentially 5 CO3 K2
allowing them to finish as well.
iii. Example Sequence: Suppose a system has 3 threads (T1, T2, T3) and
limited resources. If the system is unsafe, it might still be possible to
grant resources to T1 so it can finish, release resources, and then proceed
to safely allocate resources to T2 and T3, ensuring all threads complete
without deadlock.
This demonstrates that while unsafe states are risky, deadlock can be
avoided with appropriate scheduling and resource management.

Evaluation Scheme:
Definition – 1 mark
Explanation – 4 marks
9. Explain deadlock avoidance and Banker's algorithm in detail.             5 CO3 K2
Deadlock avoidance requires that the system has some additional a priori
information available.
• Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need.
• The deadlock-avoidance algorithm dynamically examines the resource-
allocation state to ensure that there can never be a circular-wait condition.
• Resource-allocation state is defined by the number of available and
allocated resources, and the maximum demands of the processes.
Safe State
• When a process requests an available resource, system must decide if
immediate allocation leaves the system in a safe state.
• System is in safe state if there exists a sequence of ALL the processes is
the systems such that for each Pi , the resources that Pi can still request
can be satisfied by currently available resources + resources held by all
the Pj , with j < i.
Avoidance algorithms
• Single instance of a resource type. Use a resource-allocation graph
• Multiple instances of a resource type. Use the banker’s algorithm
Banker’s Algorithm
• Multiple instances.
• Each process must a priori claim maximum use.
• When a process requests a resource it may have to wait.
• When a process gets all its resources it must return them in a finite
amount of time.
• Let n = number of processes, and m = number of resources types.
• Available: Vector of length m. If available [j] = k, there are k instances
of resource type Rj available.
• Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most
k instances of resource type Rj .
• Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently
allocated k instances of Rj.
• Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances
of Rj to complete its task.
Evaluation Scheme:
Algorithm – 2 marks
Explanation – 3 marks
10. Consider the following system snapshot using data structures in the     5 CO3 K2
banker’s algorithm, with resources A, B, C and D and processes P0 to P4.
        Max        Allocation   Need      Available
        A B C D    A B C D      A B C D   A B C D
P0      6 0 1 2    4 0 0 1                3 2 1 1
P1      1 7 5 0    1 1 0 0
P2      2 3 5 6    1 2 5 4
P3      1 6 5 3    0 6 3 3
P4      1 6 5 6    0 2 1 2
Using the banker’s algorithm, answer the following questions:
i) Explain how many resources of type A, B, C and D are
available in total?
ii) What are the contents of the need matrix?
iii) Is the system in a safe state? If yes, what is the sequence of
process execution?
i) How many resources of type A, B, C and D are there in total?
A – 9; B – 13; C – 10; D – 11
ii) What are the contents of the need matrix?
Need [i, j] = Max [i, j] – Allocation [i, j]
So, the content of the Need matrix is:
        A B C D
P0      2 0 1 1
P1      0 6 5 0
P2      1 1 0 2
P3      1 0 2 0
P4      1 4 4 4
iii) Is the system in a safe state? Why?
The system is in a safe state as the processes can be finished in the
sequence
P0, P2, P4, P1 and P3.
Evaluation Scheme:
Explanation – 3 marks
Diagram – 2 marks

Unit – IV
Unit Contents: Memory Management
Main Memory: Swapping - Contiguous Memory Allocation, Segmentation, Paging - Structure of the Page
Table - Virtual Memory: Demand Paging - Page Replacement - Allocation of Frames – Thrashing. Case
study: Virtual machine

Two Marks Questions.
*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos. 6-10
from the second half portion of the syllabus            Marks  Course Outcome  Level

1. Define Swapping. What is its purpose? 2 CO4 K1
Swapping is a memory management technique in which a process is
moved temporarily from the main memory (RAM) to a secondary storage
(such as a hard disk or SSD) and later brought back to the main memory
for continued execution. This is done to free up space in the main
memory for other processes.
Purpose of Swapping:
Efficient Use of Main Memory:
Swapping allows the operating system to execute more processes than the
available physical memory by keeping only the currently active processes
in RAM.

2. What are the advantages and disadvantages of Contiguous and Non-         2 CO4 K1
Contiguous memory allocation?

Aspect          Contiguous Allocation            Non-Contiguous Allocation
Memory          Prone to fragmentation,          More efficient, reduces
Utilization     inefficient                      fragmentation
Flexibility     Rigid, process size must fit     Flexible, allows dynamic
                a single block                   allocation
Performance     Faster access due to linear      Slower due to address
                mapping                          translation overhead
Complexity      Simple to implement              Complex, requires additional
                                                 data structures
3. What is Address Binding? How are instructions and data bound             2 CO4 K1
to memory?
Address Binding:
• Addresses may be represented in different ways during the
execution of a program.
• Addresses in the source program are generally symbolic (such as
count).
• A compiler will typically bind these symbolic addresses to
relocatable addresses (such as “14 bytes from the beginning of
this module”).
• Each binding is a mapping of addresses from one address space to
another.
Types of Address Binding:
1. Compile-Time Binding:
o The program's memory addresses are determined at
compile time.
o If the starting memory location is known at compile time,
the compiler generates absolute addresses for instructions
and data.
2. Load-Time Binding:
o The program is compiled into relocatable code, where
addresses are not fixed.
o The operating system determines the actual memory
addresses during program loading.
o Advantage: Allows the program to be loaded into
different memory locations at different times.
3. Execution-Time Binding:
o The binding of addresses occurs during the program's
execution.
o Advantage: Provides maximum flexibility since the
program can be moved within memory while running.
Binding of Instructions and Data to Memory
The binding process involves three key types of addresses:
1. Symbolic Addresses:
o Human-readable names for variables and instructions,
used during the source code phase (e.g., int a;).
2. Logical (Virtual) Addresses:
o Addresses generated by the CPU during program
execution.
o These addresses refer to a virtual memory space and are
translated into physical addresses.
3. Physical Addresses:
o Actual locations in the physical memory (RAM) where
instructions and data reside.
4. Define Memory Management Unit (MMU).                                     2 CO4 K1
The Memory Management Unit (MMU) is a hardware component
within a computer's processor that handles all memory-related operations.
It is responsible for translating logical (or virtual) addresses generated by
the CPU into physical addresses in the main memory (RAM).
Components of MMU:
1. Translation Lookaside Buffer (TLB):
o A small, fast cache within the MMU that stores recent
address translations to speed up the process.
2. Page Table Base Register (PTBR):
o Holds the starting address of the page table in memory,
used for address translation in systems with paging.
3. Control Logic:
o Performs the actual translation and ensures compliance
with access permissions.
5. Differentiate the Fixed and Variable Partitioning. 2 CO4 K2
In fixed partitioning, main memory is divided into a set number of
partitions of fixed size at system start-up; each process must fit into one
partition, which can cause internal fragmentation. In variable partitioning,
partitions are created dynamically to match each process's size, which
avoids internal fragmentation but leads to external fragmentation.
6. Differentiate the Internal and External Fragmentation 2 CO4 K2
Internal fragmentation is the unused space inside an allocated block when
a process is given more memory than it needs (typical of fixed-size
allocation). External fragmentation is free memory scattered between
allocated blocks, so that total free space is sufficient but not contiguous
(typical of variable-size allocation).
7. Why is a valid/invalid bit used in a page table entry? 2 CO4 K1
The valid/invalid bit indicates whether the corresponding page is present
in physical memory (valid) or not (invalid), helping manage page faults.
8. What is the role of a Translation Lookaside Buffer (TLB) in paging? 2 CO4 K1
The Translation Lookaside Buffer (TLB) is a hardware cache that
stores a subset of page table entries. It helps improve the performance of
virtual-to-physical address translation by:
1. Reducing Access Time: The TLB avoids frequent access to the
main memory for page table lookups, significantly speeding up
address translation.
2. Caching Frequently Used Entries: It stores mappings of the most
frequently accessed virtual pages to physical frames, minimizing delays
caused by page table accesses.
9. What is the role of the base and limit registers in segmentation? 2 CO4 K1
The base register holds the starting address of a segment in physical
memory, and the limit register specifies the size of the segment. These
ensure a program accesses only its allocated memory segment.

10. How does segmentation help in modular program design? 2 CO4 K1
Segmentation divides memory into logical sections (e.g., code, data,
stack), enabling modular programming and efficient memory allocation
for each module.

11. What is the purpose of a page table in paging? 2 CO4 K1
A page table maps logical page numbers to physical frame numbers,
allowing the system to translate virtual memory addresses to physical
addresses efficiently.

12. Explain the difference between a page and a frame 2 CO4 K2
A page is a fixed-size block of logical memory, while a frame is
a fixed-size block of physical memory. Pages are mapped to
frames during address translation.
13. What information is stored in a page table entry (PTE)? 2 CO4 K1
A PTE typically stores:
1. Frame number (physical address mapping).
2. Flags like valid/invalid bit, read/write permissions, and other control bits.

14. Why is a valid/invalid bit used in a page table entry? 2 CO4 K1
The valid/invalid bit indicates whether the corresponding page is present
in physical memory (valid) or not (invalid), helping manage page faults.

15. Define Virtual memory. 2 CO4 K1
Virtual Memory is a storage allocation scheme in which secondary
memory can be addressed as though it were part of the main memory.
The addresses a program may use to reference memory are distinguished
from the addresses the memory system uses to identify physical storage
sites and program-generated addresses are translated automatically to the
corresponding machine addresses.

Three Marks Questions.
*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos. 6-10
from the second half portion of the syllabus            Marks  Course Outcome  Level

1. Differentiate Physical addressing and logical addressing. 3 CO4 K2
A logical address is generated by the CPU while a program is running.
The logical address is a virtual address as it does not exist physically,
therefore, it is also known as a Virtual Address. The physical address
describes the precise position of necessary data in a memory. Before they
are used, the MMU must map the logical address to the physical address.
Evaluation Scheme:
Explanation of Physical Address – 1.5 marks
Explanation of Logical Address – 1.5 marks
2. Discuss the following. 3 CO4 K2
a) Segment Table
b) Segment Table Base Register
c) Segment Table Limit Register
Segment Table
It maps a two-dimensional logical address into a one-dimensional
physical address. Each table entry has:
Base Address: It contains the starting physical address where the
segment resides in memory.
Segment Limit: Also known as segment offset. It specifies the length of
the segment.

Segment Table Base Register
The STBR register is used to point the segment table's location in the
memory.
Segment Table Limit Register
This register indicates the number of segments used by a program. The
segment number s is legal if s<STLR.

Evaluation Scheme:
Explanation of Segment Table – 1mark
Explanation of Segment Table Base Register – 1mark
Explanation of Segment Table Limit Register – 1mark
3. Illustrate segmentation hardware with appropriate diagram. 3 CO4 K2

The logical address generated by the CPU consists of two parts:


 Segment Number(s): It is used as an index into the
segment table.
 Offset(d): It must lie between '0' and 'segment limit'. If the
offset exceeds the segment limit, a trap is generated.
Thus, physical address = segment base + valid offset, and the
segment table is basically an array of base-limit register pairs.
Evaluation Scheme:
Explanation – 2mark

Diagram – 1mark
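Illustration (Python) – a minimal sketch of the hardware check described
above; the segment table values are assumed for the example, not taken
from the syllabus:

# Each segment table entry is a (base, limit) pair (assumed values).
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset < 0 or offset >= limit:   # offset must lie within the limit
        raise MemoryError("trap: segment limit exceeded")
    return base + offset                # physical address = base + offset

print(translate(2, 53))                 # 4353: valid access into segment 2
# translate(1, 500) would trap: offset 500 exceeds the 400-byte limit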
4. Explain the concept of paging in memory management and its main 3 CO4 K2
advantage.
Paging is a memory management technique where the physical memory
is divided into fixed-size blocks called frames, and the logical memory is
divided into blocks of the same size called pages. The operating system
maintains a mapping between logical pages and physical frames using a
page table.
Main Advantage:
Paging eliminates external fragmentation, as any free frame can be
allocated to a process, regardless of its location in physical memory.
Evaluation Scheme:
Explanation – 2mark
Advantage – 1mark
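Illustration (Python) – a small sketch of the page-to-frame translation
described above; the page size and page table contents are assumed for
the example:

PAGE_SIZE = 4096                     # 4 KB pages
page_table = {0: 5, 1: 9, 2: 1}      # page number -> frame number (assumed)

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # high-order bits: page number
    offset = logical_address % PAGE_SIZE   # low-order bits: offset
    frame = page_table[page]               # page table lookup
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))        # page 1 maps to frame 9 -> 0x9abc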

5. Summarize the key components of a page table and their purpose. 3 CO4 K2
Key components of a page table include:
1. Page Number: Identifies the page in the logical address space.
2. Frame Number: Specifies the corresponding frame in the
physical memory where the page is stored.
3. Control Bits:
o Present/Absent Bit: Indicates if the page is in memory
or needs to be fetched from disk.
o Read/Write Bit: Specifies whether the page is read-only
or writable.
o Dirty Bit: Indicates if the page has been modified.

Purpose:
The page table is used to translate logical addresses into physical
addresses and manage memory protection and permissions.
Evaluation Scheme:
Explanation – 2mark
Purpose – 1mark
6. Discuss what a multilevel page table is and why it is used. 3 CO4 K2
A multilevel page table is a hierarchical structure that breaks down a
single large page table into smaller tables to reduce memory overhead.
Why it is used:
 Reduces memory usage by only allocating page tables for parts
of the address space that are actively used.
 Helps manage very large address spaces efficiently by avoiding
the need to store an enormous single-level page table in memory.
For example, in a two-level page table, the first-level table points to
second-level tables, and each second-level table maps pages to frames.

Evaluation Scheme:
Explanation – 2mark
Use – 1mark
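Illustration (Python) – a sketch of the two-level lookup, assuming a
32-bit address split 10/10/12 (outer index / inner index / offset); only
the touched part of the address space has tables allocated, which is the
point of the scheme:

outer_table = {0: {3: 42}}            # outer 0 -> inner table; page 3 -> frame 42

def translate(va):
    outer = (va >> 22) & 0x3FF        # top 10 bits index the outer table
    inner = (va >> 12) & 0x3FF        # middle 10 bits index the inner table
    offset = va & 0xFFF               # low 12 bits: offset within the page
    frame = outer_table[outer][inner] # two memory references instead of one
    return (frame << 12) | offset

print(hex(translate(0x3ABC)))         # outer 0, inner 3 -> frame 42: 0x2aabc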

7. Consider the following page reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 3 CO4 K2
1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6. How many page faults would occur for
the following replacement algorithms, assuming one, two, three,
four, five, six, or seven frames? Remember all frames are initially
empty, so your first unique pages will all cost one fault each.
Explain it.
Number of frames | LRU | FIFO | Optimal
       1         | 20  |  20  |   20
       2         | 18  |  18  |   15
       3         | 15  |  16  |   11
       4         | 10  |  14  |    8
       5         |  8  |  10  |    7
       6         |  7  |  10  |    7
       7         |  7  |   7  |    7

Evaluation Scheme:
Explanation – 1mark
Example – 2mark
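The table above can be reproduced with the following short simulation
(an illustrative Python sketch, not part of the prescribed answer; the
reference string is the one given in the question):

def count_faults(refs, frames, policy):
    """Count page faults for FIFO, LRU or OPT replacement."""
    memory, faults = [], 0            # resident pages; fault counter
    for i, page in enumerate(refs):
        if page in memory:
            if policy == "LRU":       # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:     # frames full: evict a victim
            if policy == "OPT":       # evict page used farthest in future
                future = refs[i + 1:]
                victim = max(memory, key=lambda p:
                             future.index(p) if p in future else len(future) + 1)
                memory.remove(victim)
            else:                     # FIFO and LRU both evict the front
                memory.pop(0)         # (list order encodes arrival/recency)
        memory.append(page)
    return faults

refs = [1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6]
for n in range(1, 8):                 # reproduces the table above
    print(n, [count_faults(refs, n, p) for p in ("LRU", "FIFO", "OPT")])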
8. Differentiate Page Fault and Page hit. 3 CO4 K2
A page hit is an event in which the page being accessed is found in main
memory.
A page fault is a critical event in computer systems in which a program
tries to access a page that is not currently available in physical memory
(main memory).
Evaluation Scheme:
Explanation Page Fault– 1.5mark
Explanation Page hit– 1.5mark
9. Summarize Demand Paging. 3 CO4 K2
Demand paging is a technique used in virtual memory systems where
pages enter main memory only when requested or needed by the CPU. In
demand paging, the operating system loads only the necessary pages of a
program into memory at runtime, instead of loading the entire program
into memory at the start.
Evaluation Scheme:
Explanation – 3mark

10. Discuss in detail about Thrashing. 3 CO4 K2


Thrashing occurs when page faults and swapping happen very frequently,
so the operating system spends more time swapping pages than executing
processes. This state of the operating system is known as thrashing.
Because of thrashing, CPU utilization is drastically reduced and may
become negligible.

Evaluation Scheme:
Explanation – 2mark
Diagram – 1mark

Five Marks Questions


*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos.
6-10 from the second half portion of the syllabus. (Columns: Marks |
Course Outcome | Level)
1. A system has a 32-bit logical address space and a page size of 4 KB. 5 CO4 K2
Calculate the total size of the page table if each page table entry
(PTE) requires 4 bytes.

Given:

1. Logical address space = 2^32 bytes
2. Page size = 4 KB = 2^12 bytes
3. Each Page Table Entry (PTE) = 4 bytes

Step 1: Calculate the number of pages in the logical address space.

Number of pages = Logical address space / Page size = 2^32 / 2^12 = 2^20 pages

Step 2: Calculate the size of the page table.

Each page requires one page table entry (PTE), and each PTE is 4 bytes.
The total size of the page table is:

Page table size = Number of pages × Size of each PTE
Page table size = 2^20 × 4 bytes = 2^22 bytes = 4 MB

Final Answer:

The total size of the page table is 4 MB.

Evaluation Scheme:
Explanation – 2mark

Example – 3mark

2. Explain how segmentation works in memory management, including 5 CO4 K2
the role of the segment table and the base and limit registers.

Steps in Segmentation:

1. Logical Address Formation:


A logical address in segmentation consists of:

o Segment number (s): Identifies the specific segment.

o Offset (d): Specifies the location within that segment.

2. Address Translation:
o The segment number (s) is used to find the
corresponding segment descriptor in the segment table.
o The descriptor contains:
 Base address: Starting physical address of the
segment.
 Limit: Maximum size of the segment.
o The offset (d) is added to the base address to calculate
the physical address.
3. Memory Access:
o The system checks if the offset is within the segment’s
limit.
o If valid, access is allowed; otherwise, a segmentation
fault occurs.

Role of Key Components:

1. Segment Table:
o Stores descriptors for all segments.

o Each entry contains:


 Base address: Physical memory location of the
segment.
 Limit: Size of the segment to ensure valid
access.
2. Base Register:
o Holds the starting physical address of the segment.
3. Limit Register:
o Stores the length of the segment to check for boundary
violations.

Advantages:

 Simplifies memory management by dividing programs into
logical units.
 Provides memory protection by isolating segments.
 Supports modular programming.

Disadvantages:

 Can cause external fragmentation.

Evaluation Scheme:
Explanation – 2mark
Role of Key Components – 2mark
Advantages and Disadvantages – 1mark
3. Consider a system where both paging and segmentation are used. 5 CO4 K2
The system uses a segment table for logical division and a page table
for each segment. If segment 1 has 3 pages and segment 2 has 2
pages, how will the system manage memory access for a logical
address in segment 1 with a page number of 2? Explain it.
 The segment table maps the segment to a starting physical frame.
 For segment 1, the page table will contain entries for the 3 pages.
For a logical address in segment 1 with a page number of 2, the
corresponding physical frame will be found in the page table of
segment 1.
 The physical address is calculated by combining the frame
number from the page table with the offset in the page.

Evaluation Scheme:
Explanation – 3mark
Example – 2mark
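Illustration (Python) – a sketch of the combined lookup (page size and
table contents are assumed for the example): the segment table selects a
per-segment page table, which maps the page number within that segment
to a frame.

PAGE_SIZE = 1024
segment_page_tables = {
    1: [7, 3, 9],      # segment 1 has 3 pages -> frames 7, 3, 9 (assumed)
    2: [5, 2],         # segment 2 has 2 pages -> frames 5, 2 (assumed)
}

def translate(segment, page, offset):
    page_table = segment_page_tables[segment]  # segment table lookup
    if page >= len(page_table) or offset >= PAGE_SIZE:
        raise MemoryError("trap: invalid page or offset")
    return page_table[page] * PAGE_SIZE + offset

# Logical address in segment 1 with page number 2: frame 9 holds the page.
print(translate(1, 2, 100))    # 9 * 1024 + 100 = 9316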

4. A system has 4 physical frames and uses a Least Recently Used 5 CO4 K2
(LRU) page replacement algorithm. The reference string is:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2. Compute the content of the frames
after each reference and calculate the number of page faults.

Given:

 Physical frames = 4.
 Page replacement algorithm = Least Recently Used (LRU).
 Reference string = 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2.

Process:

We maintain a frame table and use the LRU strategy. For each reference:

 If the page is already in the frame (hit), no page fault occurs.


 If the page is not in the frame (miss), a page fault occurs, and we
replace the least recently used page.

Frame contents after each reference (evicted page in parentheses):
7 → [7] fault; 0 → [7,0] fault; 1 → [7,0,1] fault; 2 → [7,0,1,2] fault;
0 → hit; 3 → [0,1,2,3] fault (7); 0 → hit; 4 → [0,2,3,4] fault (1);
2 → hit; 3 → hit; 0 → hit; 3 → hit; 2 → hit.

Total Page Faults:

There are 7 page faults.


Evaluation Scheme:
Explanation – 2mark
Example – 3mark

5. Explain the concept and techniques of Contiguous Memory 5 CO4 K2


Allocation.
Contiguous memory allocation is a memory management technique in
which each process is assigned a single, continuous block of memory.
Both the instructions and data of the process reside in a single contiguous
section of the main memory, ensuring simplicity in memory management
and address translation.
This technique is commonly used in simpler memory management
systems but can suffer from fragmentation and limited flexibility as
processes are loaded and unloaded.
Techniques of Contiguous Memory Allocation
There are two primary approaches to contiguous memory allocation:
1. Fixed Partitioning
 The main memory is divided into fixed-sized partitions (blocks).
 Each partition can hold exactly one process.
 The partition size is predetermined and cannot be changed at
runtime.
Advantages:
 Simple to implement.
 Easy to manage due to fixed sizes.
Disadvantages:
 Internal Fragmentation: If a process doesn’t use the entire
partition, the remaining memory within the partition is wasted.
 Inefficient for processes of varying sizes.
2. Dynamic Partitioning
 The main memory is divided dynamically, with partitions sized
to match the specific memory needs of each process.
 A process is allocated exactly as much memory as it requires.
Advantages:
 No internal fragmentation, as memory allocation is tailored to the
process size.
 Efficient use of memory compared to fixed partitioning.
Disadvantages:
 External Fragmentation: Over time, free memory is broken
into small, non-contiguous blocks, making it hard to allocate
memory for larger processes.
 Requires additional management to track free memory blocks.
Allocation Strategies in Contiguous Memory Allocation
When allocating memory to processes in contiguous allocation, the
operating system can use the following strategies:
1. First Fit
 Allocates the first available memory block that is large enough
for the process.
 Simple and fast, but may lead to fragmentation near the
beginning of memory.
2. Best Fit
 Allocates the smallest memory block that fits the process.
 Reduces wasted space but can increase external fragmentation
due to leftover small blocks.
3. Worst Fit
 Allocates the largest memory block available.
 Leaves larger free blocks for future allocations but can also lead
to inefficient utilization of memory.
Advantages of Contiguous Memory Allocation
1. Simple Implementation:
o Easy to manage as each process occupies a single,
contiguous block.
2. Fast Address Translation:
o Logical to physical address translation is straightforward
due to linear mapping.
3. Low Overhead:
o Requires minimal additional data structures for memory
management.
Disadvantages of Contiguous Memory Allocation
1. Fragmentation:
o Internal Fragmentation: In fixed partitioning, unused
space within a partition is wasted.
o External Fragmentation: In dynamic partitioning, free
memory gets scattered into small blocks.
2. Limited Flexibility:
o Processes cannot grow beyond their allocated memory
unless adjacent space is free.
3. Inefficient Memory Usage:
o Memory may remain underutilized due to fragmentation
or mismatched partition sizes.

Evaluation Scheme:
Explanation of Concept – 3mark
Example of Technique – 2mark
6. Given five memory partitions of 100Kb, 500Kb, 200Kb, 300Kb, 5 CO4 K2
600Kb (in order), how would the first-fit, best-fit, and worst-fit
algorithms place processes of 212 Kb, 417 Kb, 112 Kb, and 426 Kb
(in order)? Which algorithm makes the most efficient use of
memory? Explain it.
First-fit: 212 KB is placed in the 500 KB partition (leaving a 288 KB
hole); 417 KB in the 600 KB partition (leaving 183 KB); 112 KB in the
288 KB hole left from the 500 KB partition (leaving 176 KB); 426 KB
must wait, since no remaining hole is large enough.
Best-fit: 212 KB in the 300 KB partition; 417 KB in the 500 KB
partition; 112 KB in the 200 KB partition; 426 KB in the 600 KB
partition.
Worst-fit: 212 KB in the 600 KB partition (leaving 388 KB); 417 KB in
the 500 KB partition (leaving 83 KB); 112 KB in the 388 KB hole
(leaving 276 KB); 426 KB must wait.
Best-fit makes the most efficient use of memory in this case, since it is
the only algorithm that places all four processes.

Evaluation Scheme:
Explanation – 2.5mark
Example – 2.5mark
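Illustration (Python) – a sketch of the three strategies over the
partition list above, treating partitions as holes that shrink as they
are allocated (as in the worked answer):

def pick_hole(holes, request, strategy):
    """Return the index of the chosen hole, or None if nothing fits."""
    fits = [i for i, h in enumerate(holes) if h >= request]
    if not fits:
        return None
    if strategy == "first":
        return fits[0]                               # first hole that fits
    if strategy == "best":
        return min(fits, key=lambda i: holes[i])     # smallest adequate hole
    return max(fits, key=lambda i: holes[i])         # worst fit: largest hole

for strategy in ("first", "best", "worst"):
    holes = [100, 500, 200, 300, 600]                # free partitions, in KB
    print(strategy, "fit:")
    for proc in (212, 417, 112, 426):
        i = pick_hole(holes, proc, strategy)
        if i is None:
            print(" ", proc, "KB must wait")
        else:
            print(" ", proc, "KB -> hole of", holes[i], "KB")
            holes[i] -= proc                         # shrink the chosen hole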

7. Define Fragmentation. Explain its types. 5 CO4 K2


• Fragmentation is an unwanted problem where the memory blocks
cannot be allocated to the processes due to their small size and
the blocks remain unused.
• When processes are loaded into and removed from memory, they
create free spaces or holes in memory; these small blocks cannot
be allocated to new upcoming processes, resulting in inefficient
use of memory.
• Basically, there are two types of fragmentation:
▪ Internal Fragmentation
▪ External Fragmentation
Internal Fragmentation

• It occurs when the space is left inside the partition after allocating
the partition to a process.
• This space is called as internally fragmented space.

• This space cannot be allocated to any other process.
• This is because only Fixed (static) partitioning allows to store
only one process in each partition.
• Internal Fragmentation occurs only in Fixed (static) partitioning.
 Internal fragmentation happens when memory is split into
fixed-sized blocks.
 Whenever a process requests memory, a fixed-sized block is
allotted to the process.
 If the memory allotted to the process is somewhat larger than the
memory requested, the difference between the allotted and
requested memory is the internal fragmentation.

External Fragmentation
External fragmentation occurs in memory management when free
memory is scattered into non-contiguous blocks across the main memory,
making it impossible to allocate memory to a process despite having
enough total free space. This fragmentation happens because the free
memory is not in a single contiguous block large enough to meet the
needs of a process.
Example of External Fragmentation
Consider a memory space of 1000 KB with the following scenario:
1. Processes are allocated:
o Process P1: 200 KB
o Process P2: 300 KB
o Process P3: 400 KB
2. Process P2 is terminated, freeing 300 KB in memory.
A new process P4 requiring 350 KB arrives, but it cannot be allocated,
even though the total free space (300 KB + 100 KB) is sufficient. This is
because the 350 KB requirement cannot be met with non-contiguous
blocks.
Evaluation Scheme:
Explanation – 3mark
Types – 2mark

8. Explain the process of Demand Paging. 5 CO4 K2


Demand paging is a technique used in virtual memory systems where
pages enter main memory only when requested or needed by the CPU. In
demand paging, the operating system loads only the necessary pages of a
program into memory at runtime, instead of loading the entire program
into memory at the start. A page fault occurs when the program needs
to access a page that is not currently in memory.
The operating system then loads the required pages from the disk into
memory and updates the page tables accordingly. This process is
transparent to the running program and it continues to run as if the page
had always been in memory.

Process of Demand Paging.


1. Program Execution: Upon launching a program, the operating
system allocates a certain amount of memory to the program and
establishes a process for it.
2. Creating Page Tables: To keep track of which program pages
are currently in memory and which are on disk, the operating
system makes page tables for each process.
3. Handling Page Fault: When a program tries to access a page
that isn’t in memory at the moment, a page fault happens. In
order to determine whether the necessary page is on disk, the
operating system pauses the application and consults the page
tables.
4. Page Fetch: The operating system loads the necessary page into
memory by retrieving it from the disk if it is there. The page's
new location in memory is then reflected in the page table.
5. Resuming the Program: The operating system resumes the
program from where it left off once the necessary pages are
loaded into memory.
6. Page Replacement: If there is not enough free memory to hold
all the pages a program needs, the operating system may need to
replace one or more pages currently in memory with pages from
the disk. The page replacement algorithm used by the operating
system determines which pages are selected for replacement.
7. Page Cleanup: When a process terminates, the operating system
frees the memory allocated to the process and cleans up the
corresponding entries in the page tables.

Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
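Illustration (Python) – a toy demand pager (the backing-store contents
are assumed): pages start on "disk" and are brought into memory only on
first access, which triggers a page fault.

DISK = {0: "code", 1: "data", 2: "stack"}   # backing store (assumed)
memory = {}                                 # resident pages
faults = 0

def access(page):
    """Return the page's contents, loading it on demand if absent."""
    global faults
    if page not in memory:                  # page fault
        faults += 1
        memory[page] = DISK[page]           # fetch from disk, update tables
    return memory[page]

for p in (0, 1, 0, 2, 1):
    access(p)
print("page faults:", faults)               # 3: only first touches fault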
9. Compare the no. of hits, page faults and hit rate for FIFO 5 CO4 K2
and Optimal page replacement algorithms using 4 frames for the
given page reference string: 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6.
FIFO (4 frames): 14 page faults, 6 hits, hit rate = 6/20 = 30%.
Optimal (4 frames): 8 page faults, 12 hits, hit rate = 12/20 = 60%.
Optimal performs better because it replaces the page whose next use
lies farthest in the future.

Evaluation Scheme:
Explanation – 3mark
Example – 2mark

10. Compare the no. of hits, page faults and hit rate for FIFO 5 CO4 K2
and LRU replacement algorithms using 4 frames for the given page
reference string:
1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6.
FIFO (4 frames): 14 page faults, 6 hits, hit rate = 6/20 = 30%.
LRU (4 frames): 10 page faults, 10 hits, hit rate = 10/20 = 50%.
LRU performs better here because it exploits the locality of reference
in the string.
Evaluation Scheme:
Explanation – 3mark
Example – 2mark

Unit – V
Unit Contents: Storage Structure & File Systems
Mass Storage Structure: Disk Structure - Disk Scheduling - Disk Management-Structure - File-System
Interface: File Concepts -Directory Structure - File Sharing – Protection- File Allocation Methods-NFS.
Case study: Recovery in Windows.

Two Marks Questions


*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos.
6-10 from the second half portion of the syllabus. (Columns: Marks |
Course Outcome | Level)

1. Describe the tracks and sectors in a disk. 2 CO5 K1


Tracks are concentric circles on the disk surface where data is stored,
while sectors are subdivisions of tracks, used to organize and access data
efficiently.
2. Why is rotational latency significant in disk performance? 2 CO5 K1
Rotational latency is the delay caused by waiting for the desired sector to
rotate under the read/write head. It affects the time required to access
data.
3. Differentiate between FCFS and SSTF disk scheduling algorithms. 2 CO5 K2
FCFS processes requests in the order they arrive, while SSTF selects the
request closest to the current head position, reducing seek time.
4. How does disk scheduling improve performance? 2 CO5 K1
Disk scheduling minimizes the time spent moving the read/write head,
leading to faster I/O operations.
5. List the various disk scheduling algorithms. 2 CO5 K1
The various disk scheduling algorithms are FCFS, SSTF, SCAN, C-
SCAN, and C-LOOK.
6. How does logical formatting differ from physical formatting? 2 CO5 K1
Logical formatting involves creating a file system on the disk, whereas
physical formatting organizes the raw structure of sectors and tracks.
7. Why is partitioning important in disk management? 2 CO5 K1
Partitioning divides a disk into segments, allowing multiple operating
systems or file systems to coexist and optimizing storage use.

8. Differentiate between sequential and direct file access. 2 CO5 K2
Sequential access reads data in order, while direct access allows
accessing data at any position.
9. How does a file differ from a database? 2 CO5 K1
A file is an unstructured or minimally structured collection of data, while
a database organizes data systematically for efficient querying and
management.
10. What is the purpose of a file extension? 2 CO5 K1
File extensions indicate the type of file, helping the operating system
determine which application should open it.
11. How does a hierarchical directory structure improve file 2 CO5 K1
organization?
A hierarchical structure organizes files into directories and subdirectories,
making it easier to locate and manage them.
12. List the different directory structures. 2 CO5 K1
Single-level directory, two-level directory, and tree-structured
directory.
13. How does file sharing work in a peer-to-peer network? 2 CO5 K1
In a peer-to-peer network, files are directly shared between devices
without a central server.
14. How do access control lists (ACLs) enhance file protection? 2 CO5 K1
ACLs specify which users or groups can access a file and define their
permissions, ensuring controlled access.
15. What challenge does file sharing address? 2 CO5 K1
File sharing addresses the need for collaborative access to files among
users or devices without duplicating data.

Three Marks Questions.


*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos.
6-10 from the second half portion of the syllabus. (Columns: Marks |
Course Outcome | Level)

1. Discuss different types of storage. 3 CO5 K2

Storage in a computer system can be classified into the following types:

1. Primary Storage (Main Memory):


o Includes RAM and cache memory.

o Fast and directly accessible by the CPU but volatile (data
is lost when power is off).
2. Secondary Storage:
o Includes hard drives (HDDs), solid-state drives
(SSDs), and optical disks.
o Non-volatile and used for long-term data storage.
3. Tertiary Storage:
o Includes magnetic tapes and removable media.
o Used for backup and archival purposes.

Evaluation Scheme:

Explanation – 1mark
Types – 2mark
2. Summarize Disk Management. 3 CO5 K2

Disk Management refers to the process of organizing and maintaining
storage devices (such as hard drives or SSDs) in a computer system. It
involves tasks like:

1. Partitioning: Dividing the disk into multiple logical units
(partitions) to manage data more efficiently.
2. File System Management: Implementing a file system (e.g.,
NTFS, FAT) to organize and store files in an accessible way.
3. Disk Allocation and Scheduling: Managing how files are stored
and retrieved on the disk, including scheduling read/write
requests to optimize performance.

Evaluation Scheme:
Explanation – 3mark
3. Discuss File and Directory. 3 CO5 K2/K3

File:
A file is a collection of data or information stored on a computer's
storage device. It can be of various types such as text files, executable
files, or image files. Files are identified by their name and extension
(e.g., .txt, .jpg). They are used to store and organize data in a
structured manner.

Directory:
A directory (also known as a folder) is a container used to organize
files and other directories in a hierarchical structure. It helps to group
related files together, making it easier to manage and locate data.
Directories can contain both files and subdirectories.

Evaluation Scheme:
Explanation of File – 1.5mark
Explanation of Directory – 1.5mark
4. Explain various layers of a file system. 3 CO5 K2

A file system typically consists of several layers, each responsible for
different aspects of file management:

1. File System Layer:


This layer manages how files are stored, organized, and accessed.
It defines the structure (e.g., directories, file types) and
implements file operations such as creation, deletion, reading,
and writing.
2. Logical Layer:
The logical layer abstracts the physical storage and provides a
logical view of the file system to users and applications. It deals
with file metadata such as file name, size, location, and
attributes, without exposing the underlying hardware details.

3. Physical Layer:
This layer is responsible for the actual storage of data on
physical media like hard drives or SSDs. It handles the

allocation of space on the storage device and manages low-level
tasks such as data block allocation and disk scheduling.

Evaluation Scheme:
Explanation – 2mark
Various Layers – 1mark
5. Summarize the attributes of a file. 3 CO5 K2

A file has several attributes that describe its characteristics and help
manage it within the file system:

1. File Name:
The name used to identify the file. It usually consists of a base
name and an extension (e.g., document.txt).
2. File Type:
Specifies the format or type of the file (e.g., text file, image file,
executable). This helps the system understand how to interpret
the file.

3. File Size:
Indicates the total size of the file, typically measured in bytes. It
reflects how much space the file occupies on the storage medium.

4. Date and Time Stamps:


Includes attributes like creation time, last access time, and last
modification time, which track the file's history.

5. File Permissions:
Defines the read, write, and execute permissions for different
users or groups, controlling access to the file.

Evaluation Scheme:
Explanation – 3mark
6. Explain the concept of File Sharing. 3 CO5 K2

File sharing refers to the practice of allowing multiple users or systems
to access and modify the same file over a network. This concept enables
collaboration and efficient data management by allowing users to work
on the same file simultaneously or sequentially.

1. Shared File Access:


Multiple users can access the file stored on a server or networked
computer. The system manages concurrent access to ensure data
consistency.
2. Permission Control:
File sharing includes setting permissions (read, write, execute) to
control who can access or modify the file, ensuring security and
privacy.

3. Protocols for Sharing:


Various protocols, such as Network File System (NFS) or
Server Message Block (SMB), are used for secure file sharing
across different platforms.

Evaluation Scheme:

Explanation – 3mark
7. Interpret Access control list in detail. 3 CO5 K2

An Access Control List (ACL) is a security feature used to define
permissions for accessing a resource, such as a file, directory, or network
resource. It specifies which users or groups are allowed to perform
specific actions on the resource. ACLs are commonly used in file
systems, network devices, and operating systems to manage access
control.

Structure of ACL:
An ACL consists of a list of entries, where each entry defines a specific
user or group and the permissions granted (read, write, execute, delete,
etc.). For example, an ACL for a file might specify that User A has read
and write permissions, while User B only has read access.
Types of Permissions:
ACLs allow for detailed permission settings, such as:
Read: Permission to view the content.
Write: Permission to modify the content.
Execute: Permission to run or execute a file or program.
Access Control:
ACLs are used to restrict or grant access to resources based on user
identity or group membership, enhancing security by preventing
unauthorized access.

Evaluation Scheme:
Explanation – 3mark
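Illustration (Python) – a minimal ACL check (the entries are assumed for
the example): each entry maps a user to the set of operations permitted
on the resource.

acl = {"userA": {"read", "write"}, "userB": {"read"}}   # assumed entries

def is_allowed(user, operation):
    """Grant access only if the user's ACL entry lists the operation."""
    return operation in acl.get(user, set())            # default: deny

print(is_allowed("userA", "write"))   # True
print(is_allowed("userB", "write"))   # False: userB is read-only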
8. Summarize Contiguous Memory Allocation. 3 CO5 K2
Contiguous Memory Allocation is a memory management technique in
which each process is allocated a single contiguous block of memory.
The operating system keeps track of the start address and size of the
memory block for each process.
1. Simple Allocation:
In contiguous allocation, processes are stored in adjacent
memory locations, meaning the operating system assigns a
continuous chunk of memory to each process without
fragmentation.
2. Advantages:
o Easy to implement due to its simplicity.
o Efficient access because the memory for each process is
contiguous, leading to faster read/write operations.
3. Disadvantages:
o Internal fragmentation may occur if the process doesn't
completely fill the allocated space.
o External fragmentation arises over time as processes
are loaded and removed, leaving gaps in memory.

Evaluation Scheme:
Explanation – 2mark
Advantages and Disadvantages – 1mark
9. Explain Solid State Disk. 3 CO5 K2

A Solid State Disk (SSD) is a type of non-volatile storage device that
uses flash memory instead of traditional spinning disks (like hard drives)
to store data. SSDs are commonly used in modern operating systems for
faster data access and improved performance.

1. Storage Technology:
Unlike traditional hard drives (HDDs) that use magnetic platters,
SSDs store data on memory chips (typically NAND flash
memory), which enables faster data read/write speeds and
lower latency.
2. Performance Benefits:
SSDs provide significantly faster data access and boot times
compared to HDDs. This results in improved overall system
performance, especially in disk-intensive tasks such as database
operations or file transfers.

3. Durability and Power Efficiency:


SSDs are more durable since they have no moving parts, making
them less prone to mechanical failure. They are also more
power-efficient, which is particularly beneficial for laptops and
portable devices.

Evaluation Scheme:
Explanation – 3mark
10. What are the objectives of file management systems? Explain the file 3 CO5 K2
system architecture.
The primary objectives of a file management system are to ensure
efficient storage, organization, and retrieval of files, providing users
with an effective way to store and access data. The key objectives
include:
1. File Organization:
Organize files in a structured manner for easy access and
retrieval, ensuring a logical and hierarchical arrangement of
files and directories.
2. Efficient Storage Management:
Manage disk space by allocating and deallocating space
efficiently, ensuring minimal wastage of storage.
3. Security and Access Control:
Provide mechanisms to control access to files, such as setting
permissions for different users and protecting data from
unauthorized access.

The file system architecture typically consists of several layers that


interact to manage files efficiently:
1. File System Layer:
Manages the organization, naming, and metadata of files,
including file allocation and access.
2. Logical Layer:
Provides an abstract view of files and directories, ensuring that
users can access files without dealing with physical storage
details.
3. Physical Layer:
Handles the actual storage of files on physical devices (like hard

drives or SSDs), managing disk blocks and their allocation.

Evaluation Scheme:
Explanation of File Management– 1.5mark
Explanation of File Structure -1.5mark

Five Marks Questions


*Q.Nos. 1-5 from the first half portion of the unit syllabus and Q.Nos.
6-10 from the second half portion of the syllabus. (Columns: Marks |
Course Outcome | Level)

1. Explain the key components of disk structure and their roles. 5 CO5 K2
 Tracks: Circular paths on the disk surface where data is stored.
 Sectors: Subdivisions of tracks that represent the smallest storage
unit.
 Cylinders: Group of tracks at the same position on all platters,
enabling simultaneous access.
 Platters: Magnetic disks that store data in multiple layers.
 Read/Write Head: Mechanism for accessing data stored on
platters.
These components work together to provide efficient storage and
retrieval of data on the disk.
Evaluation Scheme:
Explanation – 3mark
Roles – 2mark
2. Illustrate any two disk scheduling algorithms. 5 CO5 K3

The main disk scheduling algorithms are FCFS, SSTF, SCAN, C-SCAN,
and C-LOOK. Any two may be illustrated, for example:
FCFS (First-Come, First-Served): Services requests in the order they
arrive. It is simple and fair, but the head may travel long distances
between far-apart requests, giving a poor average seek time.
SSTF (Shortest Seek Time First): Services the pending request closest
to the current head position, reducing total seek time, though distant
requests can suffer starvation.

Evaluation Scheme:
Explanation – 3mark
Diagram – 2mark
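Illustration (Python) – a sketch comparing total head movement under
FCFS and SSTF; the request queue and initial head position are assumed
example values:

def fcfs(requests, head):
    """Head movement when requests are serviced in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf(requests, head):
    """Head movement when the closest pending request is serviced next."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # assumed request queue
print("FCFS:", fcfs(queue, 53))               # 640 cylinders of movement
print("SSTF:", sstf(queue, 53))               # 236 cylinders of movement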
3. Discuss the file concepts in operating systems and describe the key 5 CO5 K2
attributes associated with files.
Files are containers for data storage, identified by attributes:

 Name: Human-readable identifier.


 Type: Specifies file format (e.g., .txt, .jpg).

 Location: Physical address on the disk.

 Size: Indicates data volume in bytes.

 Protection: Access permissions (read, write, execute).

 Ownership: User or group controlling the file.

 Timestamps: Track creation, modification, and last access times.

These attributes help the OS manage, organize, and protect data
effectively.

Evaluation Scheme:
Explanation – 5mark
4. Explain the different directory structures in operating systems and 5 CO5 K2
their advantages and disadvantages.

i. Single-Level Directory:

o All files in one directory.

o Pros: Simple. Cons: Difficult with many files.

ii. Two-Level Directory:

o Separate directories for each user.

o Pros: Organized, avoids name conflicts.

o Cons: Limited sharing.

iii. Tree-Structured Directory:

o Hierarchical structure with subdirectories.

o Pros: Scalable, organized. Cons: Complex navigation.

iv. Acyclic Graph Directory:

o Allows shared directories.

o Pros: Efficient for shared data. Cons: Risk of dangling
pointers.

Directory structures provide a balance between simplicity and
scalability based on system needs.

Evaluation Scheme:
Explanation – 4mark
Advantages and Disadvantages – 1mark
5. What are the challenges of file sharing in multi-user systems? Discuss 5 CO5 K2
mechanisms to ensure secure file sharing.
Challenges:

 Concurrency: Simultaneous access can cause data corruption.


 Security: Unauthorized access risks confidentiality.

 Synchronization: Ensuring data consistency during concurrent
access.

Mechanisms:

 Access Control Lists (ACLs): Define permissions for
users/groups.
 File Locking: Prevents simultaneous writes.

 Encryption: Secures shared files.

 Versioning: Maintains data integrity by keeping multiple
versions.

Secure file sharing ensures efficient collaboration without compromising
data integrity or confidentiality.

Evaluation Scheme:
Explanation of Challenges– 2mark
Explanation of Mechanisms– 3mark
6. Explain file protection mechanisms and their importance in 5 CO5 K2
operating systems.

File protection mechanisms prevent unauthorized access.

 Access Control: Permissions like read, write, execute for
users or groups.
 Authentication: Verifies user identity via passwords or
biometrics.

 Encryption: Secures file content by converting it into an
unreadable format.

 Auditing: Tracks file access and modifications for
accountability.

Protection mechanisms ensure data security, maintain privacy, and
prevent misuse in multi-user environments.

Evaluation Scheme:
Explanation – 3mark
Importance – 2mark
7. Discuss the various file allocation methods (contiguous, linked, 5 CO5 K2
indexed) with their pros and cons.

 Contiguous Allocation: Files occupy consecutive blocks. Pros:
Fast access. Cons: Fragmentation, resizing issues.
 Linked Allocation: Blocks linked via pointers. Pros: No
fragmentation. Cons: Slow access, pointer overhead.
 Indexed Allocation: Index block stores pointers to file blocks.
Pros: Fast random access. Cons: Overhead for index block.

Allocation methods balance efficiency, simplicity, and scalability for
different file system needs.

Evaluation Scheme:
Explanation of Contiguous– 2mark
Explanation of Linked– 2mark
Explanation of Indexed– 1mark
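Illustration (Python) – a sketch contrasting linked and indexed
allocation (the block numbers are assumed): in linked allocation each
block stores a pointer to the next, while in indexed allocation one
index block lists every data block directly.

next_block = {9: 16, 16: 1, 1: 10, 10: None}  # linked: block -> next block

def linked_blocks(start):
    """Follow the chain from the first block -- O(n) even for random access."""
    block, order = start, []
    while block is not None:
        order.append(block)
        block = next_block[block]
    return order

index_block = [9, 16, 1, 10]                  # indexed: direct block table

print(linked_blocks(9))    # [9, 16, 1, 10]
print(index_block[2])      # 1 -- the 3rd block is reached in O(1)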
8. Explain the working of the Network File System (NFS) and its 5 CO5 K2
significance in distributed systems.

NFS allows file access over a network as if they were local.

Working:

The client sends a request to access a file on the server. The server
processes the request and sends data via Remote Procedure Calls
(RPC). Caching is used for faster access.

Significance:

 Simplifies file sharing in distributed systems.


 Enhances resource utilization.

 Supports platform independence.

 NFS ensures seamless integration of storage resources
across networked systems.

Evaluation Scheme:
Explanation of Working of the NFS– 2mark
Explanation of Significance– 3mark
9. Discuss the challenges and limitations of implementing NFS in 5 CO5 K2
distributed systems.

The implementation of Network File System (NFS) faces several
challenges and limitations:
Performance:
Increased latency due to network dependencies.
High load on the server during concurrent access from multiple clients.
Security:
Vulnerabilities in network communication can lead to data breaches.
Requires robust authentication and encryption mechanisms.
Scalability:
Performance degradation with an increasing number of clients.
Caching Issues:
Cache consistency is difficult to maintain when files are modified by
multiple users simultaneously.
Fault Tolerance:
NFS relies heavily on the server; if the server fails, clients lose access to
files.
Platform Dependence:
Compatibility issues may arise with different operating systems or file
system structures.
Addressing these limitations involves optimizing network protocols,
employing robust security practices, and implementing distributed NFS
setups for better fault tolerance and performance.

Evaluation Scheme:
Explanation of Challenges– 2mark
Explanation of Limitations– 3mark
10. Explain the system recovery options in Windows, including Startup 5 CO5 K2
Repair, System Restore, and Safe Mode, and their importance in
maintaining system reliability.

Windows recovery options include:


Startup Repair: Fixes boot issues like a corrupted bootloader.

System Restore: Reverts the system to a previous state, undoing
problematic changes.
Safe Mode: Boots with minimal drivers for troubleshooting.
System Image Recovery: Restores the system from a backup image.
Command Prompt: For advanced manual recovery.
Importance:
Reduces downtime.
Prevents data loss.
Maintains system stability after failures.
These options make Windows recovery user-friendly and reliable

Evaluation Scheme:
Explanation of Startup Repair– 2mark
Explanation of System Restore– 2mark
Explanation of Safe Mode– 1mark

Course Coordinator Dean -SoC


[Name and Signature] [Dr.S.P.Chokkalingam]

