Preparatory Important Questions
(Answers will be longer, with sufficient explanation for 2 marks, covering at least a substantial portion
of a page.)
o Error Detection: Monitors hardware and software for errors, ensuring system
stability and recovery options.
Process:
o A process requires system resources such as CPU time, memory, and input/output
devices.
Program:
Types of Schedulers:
o Long-term Scheduler: Determines which jobs are to be admitted into the system for
processing. It controls the degree of multiprogramming.
o Short-term Scheduler (CPU Scheduler): Selects which process from the ready queue
will execute on the CPU next. It operates frequently and is highly time-sensitive.
Seek Time:
o The time required for the disk arm to move the read/write head to the appropriate
track where the desired data is stored.
Transfer Time:
o The time taken to transfer data from the disk surface to the computer’s memory
after the correct track is located.
Rotational Time:
o The time required for the disk to rotate to the specific sector where the required
data is located. It depends on the disk's rotational speed, measured in RPM
(revolutions per minute).
Together, these times contribute to the total access time for retrieving data from a disk.
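The relationship above can be checked with a short calculation. This is a sketch with assumed figures (a 9 ms seek, a 7200 RPM disk, and a 0.5 ms transfer time are illustrative values, not from the notes):

```python
def avg_rotational_latency_ms(rpm):
    # One full revolution takes 60000/rpm ms; on average the disk
    # must rotate half a revolution to reach the desired sector.
    return (60_000 / rpm) / 2

def access_time_ms(seek_ms, rpm, transfer_ms):
    # Total access time = seek time + rotational latency + transfer time.
    return seek_ms + avg_rotational_latency_ms(rpm) + transfer_ms

# Assumed figures: 9 ms seek, 7200 RPM disk, 0.5 ms transfer.
print(round(avg_rotational_latency_ms(7200), 2))   # 4.17
print(round(access_time_ms(9, 7200, 0.5), 2))      # 13.67
```

A 7200 RPM disk therefore spends about 4.17 ms on average just rotating to the right sector, which is why rotational speed matters for access time.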
o The BIOS (Basic Input/Output System) performs a Power-On Self-Test (POST) to check
hardware integrity.
o The bootloader program, such as GRUB, loads the operating system kernel into RAM.
o The kernel initializes system components, processes, and services required for
operation.
Types of Booting:
o Cold Boot: Starting the system from a completely powered-off state.
o Warm Boot: Restarting the system without turning off the power.
o Process:
A process is an executing instance of a program. It represents a running program
with its own allocated resources, such as CPU time, memory, and I/O access. Each
process has a unique Process ID (PID) and can be in various states (Ready, Running,
Waiting, etc.).
o Program:
A program is a static collection of instructions stored on disk, ready to be executed. It
becomes a process when loaded into memory and executed.
7. What is polling?
Polling is a technique where the CPU continuously checks the status of a device to determine
whether it is ready for a data transfer or requires attention.
Explanation:
o The CPU repeatedly interrogates the device, checking if it is idle, busy, or ready to
transfer data.
o Polling consumes CPU cycles and can be inefficient since the CPU remains occupied
while waiting for the device.
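The busy-waiting behaviour described above can be sketched with a simulated device (the `Device` class and its `status()` method are illustrative, not a real driver API):

```python
class Device:
    """Simulated device that only becomes ready after several status checks."""
    def __init__(self, ready_after):
        self.checks = 0
        self.ready_after = ready_after

    def status(self):
        self.checks += 1
        return "ready" if self.checks >= self.ready_after else "busy"

def poll(device):
    # Busy-wait: the CPU repeatedly interrogates the device, wasting
    # cycles until it finally reports ready.
    wasted_cycles = 0
    while device.status() != "ready":
        wasted_cycles += 1
    return wasted_cycles

dev = Device(ready_after=5)
print(poll(dev))  # 4 wasted cycles before the fifth check succeeds
```

Every iteration of the `while` loop is a CPU cycle spent doing no useful work, which is exactly the inefficiency that interrupt-driven I/O avoids.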
o Save the current process's state (Program Counter, CPU registers, etc.) into the
Process Control Block (PCB).
o Load the next process's state from its PCB.
Overhead: Context switching introduces overhead as the CPU spends time switching between
processes rather than performing actual execution.
o Memory Management: Allocates and deallocates memory, tracks free space, and
ensures efficient memory usage.
o File System Management: Organizes files, directories, and storage devices for easy
access.
o Security: Protects data and resources from unauthorized access and potential
threats.
Features:
The OS loads the next job into memory once the previous job completes.
Advantages:
Disadvantages:
Hold and Wait: A process holds one resource while waiting for another.
Circular Wait: A circular chain of processes exists where each process waits for a resource
held by the next process.
Explanation:
When a process requests memory, the OS allocates a fixed-sized block. If the process's
memory requirement is smaller than the block size, the remaining space is wasted.
Example: If a process needs 6KB of memory and the OS allocates an 8KB block, the unused 2KB is
internal fragmentation.
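The 6KB/8KB example above generalizes to any request and block size; a minimal sketch:

```python
def internal_fragmentation(request_kb, block_kb):
    # The OS allocates whole fixed-size blocks; the unused tail of the
    # last block is internal fragmentation.
    blocks_needed = -(-request_kb // block_kb)  # ceiling division
    return blocks_needed * block_kb - request_kb

print(internal_fragmentation(6, 8))   # 2 KB wasted, as in the example
print(internal_fragmentation(10, 8))  # 6 KB wasted (two 8 KB blocks)
```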
PART B
System Calls: System calls provide the interface between user programs and the operating system.
They allow a program to request services from the OS, such as reading or writing files, creating
processes, allocating memory, or communicating with hardware. Without system calls, programs
would be unable to interact with the underlying hardware in a secure and controlled manner.
1. Process Control:
o These system calls manage the execution of processes in the system. They include
the creation, scheduling, termination, and synchronization of processes.
o Examples include:
wait(): Suspends the execution of the current process until one of its child
processes finishes.
2. File Management:
o These system calls handle operations on files such as opening, reading, writing, and
closing files.
o Examples include:
3. Device Management:
o These system calls manage hardware devices. They allow the OS to control
input/output devices, handle data transmission, and ensure that programs can
interact with devices like printers or disks.
o Examples include:
4. Information Maintenance:
o These system calls allow processes to retrieve or update system information, such as
process IDs, system time, or the status of a device.
o Examples include:
5. Communication (Inter-Process Communication):
o These system calls provide mechanisms for processes to communicate with each
other. IPC allows processes to share data, synchronize actions, and exchange
messages.
o Examples include:
Schedulers in Operating Systems: Schedulers are responsible for determining which processes run
on the CPU and when. They play a vital role in process management, ensuring fair resource allocation
and efficient execution. There are three types of schedulers in an operating system:
1. Long-Term Scheduler (Job Scheduler):
o Function: Decides which processes are admitted into the system for execution. It
controls the degree of multiprogramming and manages the ready queue. The long-
term scheduler selects processes from the job pool (or job queue) and loads them
into memory for execution.
2. Short-Term Scheduler (CPU Scheduler):
o Function: Selects which process from the ready queue will execute next. The CPU
scheduler assigns CPU time to the processes, often based on a scheduling algorithm
(e.g., FCFS, SJF, Round Robin).
3. Medium-Term Scheduler:
o Function: Manages the movement of processes between the main memory and
secondary storage (usually swapping). When memory is full, the medium-term
scheduler selects processes to swap out and make space for others.
Job Queue (Long-Term Scheduler): Contains all processes that are waiting to be admitted
into memory.
Ready Queue (Short-Term Scheduler): Holds processes that are loaded into memory and
ready to execute.
Swap Queue (Medium-Term Scheduler): Stores processes that have been swapped out of
memory due to space limitations.
You can search for a "scheduler block diagram" on Google for visualization.
15. Explain the working of SJF and Priority Scheduling with an example each.
Definition: SJF is a non-preemptive scheduling algorithm that selects the process with the
shortest burst time (execution time) to execute next.
Working:
o The scheduler looks at all the processes in the ready queue and selects the one with
the shortest execution time (burst time).
o Advantage: It minimizes the average waiting time and turnaround time for
processes.
Example of SJF Scheduling: Consider three processes with the following burst times:
P1: 6 ms
P2: 8 ms
P3: 7 ms
Execution Order:
1. P1 (6 ms)
2. P3 (7 ms)
3. P2 (8 ms)
Here, the process with the shortest burst time is selected first.
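The SJF example above can be reproduced with a short simulation, assuming (as in the example) that all processes arrive at the same time:

```python
def sjf_order(burst):
    # Non-preemptive SJF: with all processes available at once,
    # simply run them in increasing burst-time order.
    return sorted(burst, key=burst.get)

def avg_waiting_time(burst, order):
    # Each process waits for the total burst time of those before it.
    waited, elapsed = 0, 0
    for p in order:
        waited += elapsed
        elapsed += burst[p]
    return waited / len(order)

bursts = {"P1": 6, "P2": 8, "P3": 7}
order = sjf_order(bursts)
print(order)                             # ['P1', 'P3', 'P2']
print(avg_waiting_time(bursts, order))   # (0 + 6 + 13) / 3
```

Running the same bursts in FCFS order (P1, P2, P3) gives a higher average wait, which illustrates why SJF minimizes average waiting time.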
Priority Scheduling:
Definition: Priority Scheduling assigns each process a priority, and the process with the
highest priority is executed first. This can be preemptive or non-preemptive.
Working:
Example of Priority Scheduling: Consider three processes with the following priorities (lower
number means higher priority):
P1: Priority 2
P2: Priority 1
P3: Priority 3
Execution Order:
1. P2 (priority 1)
2. P1 (priority 2)
3. P3 (priority 3)
This scheduling ensures that higher-priority processes (those with lower priority numbers) are executed first.
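With the lower-number-means-higher-priority convention used in the example, non-preemptive priority scheduling reduces to a sort on the priority value; a minimal sketch:

```python
def priority_order(priority):
    # Non-preemptive priority scheduling: lower number = higher priority,
    # so sorting ascending by priority gives the execution order.
    return sorted(priority, key=priority.get)

prios = {"P1": 2, "P2": 1, "P3": 3}
print(priority_order(prios))  # ['P2', 'P1', 'P3']
```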
Directory Structures: Directory structures manage the organization of files in the file system. They
allow users and programs to easily locate files and directories. Below are three common directory
structures:
1. Single-Level Directory:
o Structure: In this structure, all files are stored in one directory. There is no hierarchy
of subdirectories.
o Disadvantages: Difficulty in organizing files when there are too many, leading to
confusion.
2. Two-Level Directory:
o Structure: This structure separates files into two levels: the root directory and
individual user directories. Each user has their own directory, containing their files.
3. Tree-Structured (Hierarchical) Directory:
o Structure: This is the most commonly used structure, where directories can contain
subdirectories, and those subdirectories can further contain files or other
directories. It resembles a tree structure with a root and branches.
Definition: FIFO is a simple page replacement algorithm where the page that has been in
memory the longest is replaced when a new page needs to be loaded into memory.
Working:
o The pages in memory are placed in a queue. When a page needs to be replaced, the
one at the front of the queue (the oldest) is removed.
Definition: LRU replaces the page that has not been used for the longest period of time.
Working:
o Pages are ordered based on when they were last accessed. When a new page is
needed, the page that has been least recently used is replaced.
o Advantage: More efficient than FIFO in most cases, as it better reflects actual
program usage.
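Both algorithms above can be compared on the same reference string. This sketch uses an assumed reference string chosen so the two policies differ; only the fault counts are computed:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in mem:
            faults += 1
            if len(mem) == frames:
                mem.remove(queue.popleft())  # evict the oldest-loaded page
            mem.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)            # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)      # evict least recently used
            mem[page] = True
    return faults

refs = [1, 2, 3, 1, 4, 1, 5]  # assumed reference string, 3 frames
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 6 5
```

Here LRU keeps the frequently reused page 1 in memory while FIFO evicts it simply because it was loaded first, which is the "better reflects actual program usage" point above.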
System Calls (Continued): System calls are fundamental to the interaction between user programs
and the operating system. They form the bridge through which applications communicate with the
kernel, which controls the hardware and provides services to user programs.
User-Level vs Kernel-Level:
o User-level functions are typically part of an application, but when they need to
interact with the kernel (e.g., to access files, use network sockets, or create
processes), they issue a system call.
o Kernel-level functions are part of the operating system and perform the core tasks
like managing memory, scheduling processes, and handling input/output.
Types of System Calls: System calls can be classified into five major types:
1. Process Control:
2. File Management:
o These system calls manage the file system, like opening, closing, reading, or writing
to files. Example: open(), read(), write(), close().
3. Device Management:
o These system calls allow processes to interact with hardware devices. Example:
ioctl(), read(), write().
4. Information Maintenance:
o These system calls gather or set system or process information. Example: getpid(),
gettimeofday().
5. Communication (Inter-Process Communication):
o These are crucial for processes that need to communicate with one another, such as
message passing or shared memory. Example: msgsnd(), msgrcv(), semop().
System Call Interface: When a user program makes a system call, it involves a context switch to
kernel mode. The system call is identified by a number, which the kernel looks up to determine the
action to take. After the kernel completes the system call, it returns control to the user program.
Schedulers in Operating Systems: Schedulers play a critical role in process management by deciding
which process should be executed next on the CPU. There are three major types of schedulers:
1. Long-Term Scheduler:
o Function: It is responsible for deciding which processes are admitted into the system
and brought into the ready queue. It regulates the degree of multiprogramming,
controlling how many processes are in memory.
o Example: If there are too many processes in memory, the long-term scheduler might
delay the admission of new processes.
2. Short-Term Scheduler (CPU Scheduler):
o Function: This scheduler selects processes from the ready queue to execute on the
CPU. It uses scheduling algorithms such as FCFS, SJF, and Round Robin.
o Example: The short-term scheduler decides which process should run next, ensuring
fairness and efficient use of the CPU.
3. Medium-Term Scheduler:
o Function: Manages the swapping of processes between main memory and secondary
storage.
o Example: If there is limited RAM, the scheduler may swap out processes that are not
currently executing to disk, freeing memory for others.
20. Differentiate between processes and threads.
Processes vs Threads:
Process:
o A process is a program in execution, with its own memory space, resources, and a
unique Process ID (PID).
o Overhead: Processes have higher overhead due to their need for separate memory
and resources.
o Isolation: Processes are isolated from each other, meaning that if one process
crashes, it doesn’t affect others.
Thread:
o A thread is a smaller unit of a process that can run independently. Threads within the
same process share the same memory space, file descriptors, and other resources,
but have their own program counter, stack, and local variables.
o Overhead: Threads have less overhead because they share resources with other
threads of the same process.
o Communication: Threads can easily communicate with each other because they
share the same memory space.
Key Differences:
Processes are heavier, with their own resources and memory space, while threads are lighter
and share resources within the process.
Threads are faster to create and terminate because they share the process’s resources.
Definition: FCFS is a non-preemptive scheduling algorithm where the process that arrives
first is executed first. Once a process starts executing, it runs to completion.
Working:
o The CPU scheduler selects the process in the order of their arrival in the ready
queue.
o Disadvantage: It can lead to the "convoy effect," where shorter processes are
delayed by longer ones, resulting in high average waiting times.
Example: Consider the following three processes with their arrival times and burst times:
Here, even though P3 has the shortest burst time, it has to wait for P1 and P2 to finish first.
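The convoy effect can be reproduced with a short FCFS simulation. The arrival and burst times below are hypothetical (the original table is not shown here), chosen so that a short P3 arrives behind two long jobs:

```python
def fcfs_waiting_times(procs):
    # procs: list of (name, arrival, burst), served strictly in arrival order.
    clock, waits = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        clock = max(clock, arrival)    # CPU may sit idle until the job arrives
        waits[name] = clock - arrival  # time spent in the ready queue
        clock += burst                 # non-preemptive: run to completion
    return waits

# Hypothetical values: short P3 waits behind the long jobs P1 and P2.
procs = [("P1", 0, 10), ("P2", 1, 8), ("P3", 2, 2)]
print(fcfs_waiting_times(procs))  # {'P1': 0, 'P2': 9, 'P3': 16}
```

P3 needs only 2 ms of CPU time yet waits 16 ms, which is the convoy effect in action.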
22. What are the various objectives and functions of the operating system?
Objectives:
1. Resource Management: The OS manages the hardware resources of a system, including the
CPU, memory, and I/O devices, to optimize their usage.
2. Process Management: It ensures that multiple processes can run concurrently without
interfering with each other.
3. File Management: The OS handles the creation, deletion, and access of files, ensuring that
users can interact with files in a logical and consistent manner.
4. Security and Protection: The OS protects data and system resources from unauthorized
access while ensuring safe execution of processes.
5. User Interface: The OS provides the user with an interface (e.g., command-line or graphical)
to interact with the system.
Functions:
1. Process Scheduling: The OS schedules processes for execution based on various scheduling
algorithms.
2. Memory Management: It keeps track of memory allocation and deallocation, and optimizes
memory usage.
3. Input/Output Management: The OS manages I/O devices, ensuring smooth and efficient
communication between hardware and software.
4. File System Management: The OS organizes, stores, and retrieves data on storage devices.
5. Error Detection and Handling: The OS detects and responds to errors, such as hardware
failures or invalid memory access.
6. Security: The OS ensures secure access to resources through authentication and access
control mechanisms.
23. Explain about first fit, best fit, worst fit, and next fit algorithms.
Memory Allocation Algorithms:
1. First Fit:
o Function: Allocates the first available block of memory large enough to satisfy the
request.
o Disadvantage: Can lead to fragmentation because small unused gaps are left in the
memory.
2. Best Fit:
o Function: Allocates the smallest available block that is large enough to satisfy the
request.
o Disadvantage: Can lead to inefficient use of memory, as many small gaps remain.
3. Worst Fit:
o Function: Allocates the largest available block of memory to the request.
o Disadvantage: Can lead to inefficient memory usage if large gaps are wasted.
4. Next Fit:
o Function: Similar to first fit, but it starts searching for a block from the point where
the last allocation was made.
o Advantage: Less overhead compared to first fit since it doesn’t always start from the
beginning of the memory.
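The four placement strategies above can be contrasted on one list of free holes. The hole sizes and request below are assumed for illustration:

```python
def first_fit(holes, req):
    # Index of the first hole large enough, scanning from the start.
    for i, h in enumerate(holes):
        if h >= req:
            return i
    return None

def best_fit(holes, req):
    # Smallest hole that still fits the request.
    fits = [(h, i) for i, h in enumerate(holes) if h >= req]
    return min(fits)[1] if fits else None

def worst_fit(holes, req):
    # Largest hole, leaving the biggest leftover gap.
    fits = [(h, i) for i, h in enumerate(holes) if h >= req]
    return max(fits)[1] if fits else None

def next_fit(holes, req, start):
    # Like first fit, but resume scanning where the last allocation ended.
    n = len(holes)
    for k in range(n):
        i = (start + k) % n
        if holes[i] >= req:
            return i
    return None

holes = [100, 500, 200, 300, 600]        # assumed free-block sizes (KB)
print(first_fit(holes, 212))             # 1 (the 500 KB hole)
print(best_fit(holes, 212))              # 3 (the 300 KB hole)
print(worst_fit(holes, 212))             # 4 (the 600 KB hole)
print(next_fit(holes, 212, start=2))     # 3 (scan resumes at index 2)
```

For the same 212 KB request, each strategy picks a different hole, which is why their fragmentation behaviour differs.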
Definition: In an indexed file, an index is used to store pointers to the actual records. It
allows fast access to data by using the index instead of scanning the entire file.
Advantages:
Definition: This combines both indexed and sequential access methods. Records are stored
sequentially, and an index provides access to records in non-sequential order.
Structure:
Advantages:
Disadvantages:
PART C
1. Message Passing:
o Direct Message Passing: The sender specifies the recipient of the message.
o Indirect Message Passing: Messages are sent to a mailbox or message queue, from
where the recipient retrieves them.
o Disadvantages: Can lead to delays, and message delivery is not always guaranteed.
2. Shared Memory:
o A region of memory is shared between processes, and they can read and write data
to this shared memory.
o Mechanism: One process writes to the memory, and the other reads from it.
IPC Techniques:
1. Pipes:
o Unnamed Pipes: Allow one-way communication between related processes (e.g.,
parent and child).
o Named Pipes (FIFOs): Can be used between any processes, regardless of whether
they are related.
2. Message Queues:
o Advantage: Messages are stored in queues, and processes do not need to wait for
each other.
3. Semaphores:
o Binary Semaphore: Works like a lock; it only takes two values (0 or 1).
o Counting Semaphore: Holds a non-negative integer count, allowing a fixed number
of processes to access a resource at once.
4. Shared Memory:
Advantages of IPC:
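The pipe mechanism listed above can be sketched with the OS-level pipe wrapper in Python's os module. For brevity both ends live in one process here; normally the write end belongs to one process and the read end to another (e.g. after a fork()):

```python
import os

# Create an unnamed pipe: a unidirectional byte channel with a
# read end (r) and a write end (w).
r, w = os.pipe()

os.write(w, b"hello via pipe")   # sender side
os.close(w)                      # closing the write end signals EOF

msg = os.read(r, 1024).decode()  # receiver side
os.close(r)
print(msg)                       # hello via pipe
```

The kernel buffers the bytes between the two ends, so the reader and writer never share memory directly, which is the defining property of message-style IPC.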
26. Write in detail about any four different types of operating systems.
Definition: A batch operating system executes a series of jobs without manual intervention,
collecting them into batches and processing them sequentially.
Characteristics:
o Jobs are processed in batches, and the operating system doesn’t allow interaction
with the user during execution.
Advantages:
Disadvantages:
Definition: A time-sharing system allows multiple users to interact with the computer
simultaneously. Each user gets a small time slice of the CPU's time.
Characteristics:
Advantages:
Disadvantages:
Real-Time Operating System:
Characteristics:
o Systems are designed to perform specific tasks with high reliability and predictability.
Advantages:
o Predictable behavior is crucial for tasks like flight control systems and medical
devices.
Disadvantages:
Network Operating System:
Characteristics:
o Provides networking capabilities like file sharing, print services, and communication
protocols.
Advantages:
Disadvantages:
SSTF (Shortest Seek Time First):
Working:
o The disk arm moves to the nearest request in the queue, reducing the average seek
time.
Example:
o Suppose a disk head is at position 50 and there are requests at 30, 60, and 90. SSTF
will first serve the request at position 60, as it is closest to 50, then serve the request
at 30, and finally at 90.
Advantages:
Disadvantages:
o Can lead to starvation, where requests far from the head may not be serviced for a
long time.
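The greedy choice described above can be simulated directly; this sketch reproduces the head-at-50 example, breaking distance ties toward the lower track number:

```python
def sstf_order(head, requests):
    # Repeatedly service the pending request closest to the current
    # head position (ties broken toward the lower track number).
    pending, order = sorted(requests), []
    while pending:
        nxt = min(pending, key=lambda t: (abs(t - head), t))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

print(sstf_order(50, [30, 60, 90]))  # [60, 30, 90], as in the example
```

Because the head always jumps to the nearest request, a request far from the current cluster can be postponed indefinitely, which is the starvation risk noted above.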
Definition: SCAN is a disk scheduling algorithm where the disk arm moves in one direction,
serving requests along the way, and when it reaches the end, it reverses direction and serves
requests in the opposite direction.
Working:
o The disk arm moves from one end of the disk to the other, servicing requests as it
goes, and then reverses direction to serve the remaining requests.
Example:
o Consider requests at 30, 50, 60, and 90. If the disk head starts at position 50 and
moves towards the right, it will serve the request at 50 immediately, then 60, then
90, and finally reverse direction and serve 30.
Advantages:
Disadvantages:
o Can result in longer wait times for requests near the end of the disk.
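The elevator-like sweep above amounts to partitioning the requests around the head position; a minimal sketch:

```python
def scan_order(head, requests, direction="right"):
    # Split pending requests around the head, sweep in the current
    # direction to the end, then reverse and serve the rest.
    lower = sorted(r for r in requests if r < head)
    upper = sorted(r for r in requests if r >= head)
    if direction == "right":
        return upper + lower[::-1]
    return lower[::-1] + upper

print(scan_order(50, [30, 50, 60, 90]))  # [50, 60, 90, 30]
```

The request at 30 is served last even though it is close to the starting position, which is the longer-wait-near-the-reversal drawback noted above.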
1. Single-Level Directory:
Definition: In a single-level directory system, all files are stored in one directory.
Structure:
o All files are stored in a single directory, without subdirectories.
Advantages:
Disadvantages:
2. Two-Level Directory:
Definition: A two-level directory structure separates files into two categories: one for users
and one for the system.
Structure:
o The first level is the root directory, and the second level contains user directories.
Advantages:
o Prevents conflicts between files with the same name by keeping them in separate
directories.
Disadvantages:
3. Hierarchical Directory:
Definition: A hierarchical directory system organizes files into a tree structure, where
directories can contain subdirectories.
Structure:
o The root directory is the top-level directory, and beneath it, there are multiple
subdirectories and files.
Advantages:
Disadvantages:
Magnetic Disk:
It is a secondary storage device consisting of one or more rigid plates (platters) coated with a
magnetic material.
Structure Components:
1. Platters:
o The disk consists of platters, which are the flat, circular surfaces where data is stored.
Each platter has two surfaces, and data is written on both surfaces.
2. Tracks:
o The surface of each platter is divided into concentric circles called tracks. Each track
is further subdivided into sectors, which are the smallest units of data storage.
3. Heads:
o Each platter surface has a read/write head that moves across the surface to read or
write data. The heads float above the platter on a thin layer of air.
4. Spindle:
o The spindle holds the platters and rotates them at a constant speed, typically 5400
RPM or 7200 RPM.
5. Cylinder:
o A cylinder is a set of tracks located at the same position on all platters. The
read/write heads move together, accessing the same track on each platter
simultaneously.
Working:
The disk spins, and the read/write head accesses the appropriate track and sector to read or
write data. Data is written magnetically in binary format.
Advantages:
Disadvantages:
Critical Section: The critical section is a part of the program where shared resources are accessed,
and it must be executed by only one process at a time to avoid race conditions.
Important Feature:
Mutual Exclusion: The main feature of the critical section is ensuring that only one process
can execute the critical section at any given time. This prevents conflicts when multiple
processes access shared resources concurrently.
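Mutual exclusion as described above can be demonstrated with a lock guarding a shared counter. This sketch uses threads rather than full processes, since threads sharing one address space make the race visible most directly:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # enter the critical section: one thread at a time
            counter += 1  # shared-resource update cannot be interleaved now

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no lost updates, thanks to mutual exclusion
```

Without the lock, two threads could read the same old value of counter and both write back the same incremented value, silently losing an update; the lock makes that interleaving impossible.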
The Dining Philosophers' Problem is a classic synchronization problem that illustrates the difficulties
of process synchronization when multiple processes need to share resources. It involves a number of
philosophers sitting around a dining table, where each philosopher thinks and occasionally eats. To
eat, a philosopher needs two forks, one to their left and one to their right.
Problem Setup:
They can only eat if they have both forks (one from their left and one from their right).
Philosophers pick up the forks one at a time, but if every philosopher picks up their left
fork at the same time, the system can become deadlocked.
Solution Approaches:
1. Mutex Locks: Use locks to ensure that only one philosopher can pick up the forks at any
given time.
2. Resource Hierarchy: Assign a hierarchy to the forks (e.g., odd-numbered philosophers pick
up the left fork first, and even-numbered philosophers pick up the right fork first) to prevent
circular waiting.
Goal:
To avoid deadlock (where all philosophers hold one fork and wait forever for the other) and
ensure that all philosophers can eventually eat.
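The resource-hierarchy approach above can be sketched with lock objects as forks. Each philosopher always acquires the lower-numbered fork first, which breaks the circular-wait condition (the round count of 100 is an arbitrary choice for the demonstration):

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]  # one fork between each pair
meals = [0] * N

def philosopher(i, rounds):
    # Resource ordering: always acquire the lower-numbered fork first,
    # so no cycle of philosophers each holding one fork can form.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1  # eating: both forks are held

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)  # [100, 100, 100, 100, 100]: everyone ate, no deadlock
```

If every philosopher instead grabbed their left fork first, all five could hold one fork and wait forever for the second; the global ordering guarantees at least one philosopher can always acquire both.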