Operating System Assignment
UNIT-1
1. Describe the evolution of an operating system from simple batch processing to
today’s operating system with their advantages & disadvantages.
The evolution of operating systems (OS) from simple batch processing systems to
modern, complex systems is a fascinating journey that reflects advances in technology,
user needs, and computational complexity. Here’s a broad overview of how operating
systems have evolved, along with the pros and cons at each stage.
1. Batch Processing Systems (1950s-1960s)
Batch processing was among the earliest OS types. In this system, jobs (programs) were
submitted together in a "batch" and processed sequentially without user interaction.
• Advantages:
o Efficient use of CPU time: By queuing jobs, the system minimized idle
CPU time.
o Simple design: Easy to implement, as jobs were processed sequentially
without the need for multitasking.
• Disadvantages:
o No interactivity: Users had to wait until the entire batch completed
before they could get results.
o Error handling was challenging: If an error occurred, the entire batch
would halt, often wasting time and resources.
2. Time-Sharing Systems (1960s-1970s)
Time-sharing allowed multiple users to access the computer simultaneously. It
introduced the concept of dividing CPU time into "slices" for each user or process.
• Advantages:
o Improved interactivity: Users could now interact with the system in real
time.
o Efficient resource usage: Sharing CPU time allowed multiple users to
benefit from a single machine.
• Disadvantages:
o Complexity increased: The OS had to manage resources among users,
making it more complex.
o Security concerns: With multiple users, protecting each user's data
became more challenging.
3. Multiprogramming Systems (1970s)
Multiprogramming allowed multiple programs to run concurrently by keeping several
programs in memory. This improved CPU utilization as the OS could switch to another
program if the current one was waiting for I/O operations.
• Advantages:
o Maximized CPU utilization: The CPU could switch to other tasks during
I/O waits, reducing idle time.
o Increased throughput: Allowed for faster job completion by running
multiple jobs simultaneously.
• Disadvantages:
o Complex memory management: The OS had to allocate memory
efficiently to each program.
o Potential for resource contention: Multiple programs competing for
resources could lead to performance degradation.
4. Distributed Systems (1980s)
Distributed systems involve multiple computers (nodes) connected over a network,
working collaboratively as a single system.
• Advantages:
o Improved performance and redundancy: By distributing tasks, the
system could achieve higher performance and resilience.
o Scalability: Additional nodes could be added easily to improve capacity.
• Disadvantages:
o Complex coordination: Managing tasks across multiple computers
added complexity.
o Security risks: Communication across networks increased vulnerability
to attacks.
5. Real-Time Systems (1980s-1990s)
Real-time systems are designed for applications that require immediate processing and
response, such as embedded systems in medical or industrial equipment.
• Advantages:
o Quick response times: Essential for time-sensitive applications.
o Predictable and reliable performance: Ensures that critical tasks are
completed within set time limits.
• Disadvantages:
o Specialized use: Limited to applications with strict timing constraints.
o High costs: Real-time OS often require specialized hardware and
software, increasing development costs.
6. Networked Operating Systems (1990s)
Networked OSs allow computers to connect and communicate over a network. This
evolution was influenced by the growth of the internet and local area networks (LANs).
• Advantages:
o Resource sharing: Users on different systems could share files, printers,
and other resources.
o Remote access and collaboration: Enabled users to access data and
applications remotely.
• Disadvantages:
o Network dependency: The system’s functionality depended on the
reliability of the network.
o Security challenges: Increased connectivity required advanced security
measures to protect shared resources.
7. Modern Operating Systems (2000s-Present)
Modern OSs, such as Windows, macOS, and Linux, combine features from previous
generations and offer advanced capabilities like virtualization, cloud integration, and
security enhancements.
• Advantages:
o Enhanced user experience: Features like GUI, multitasking, and mobile
compatibility make systems intuitive and accessible.
o High security and robustness: Built-in firewalls, encryption, and regular
updates enhance security.
o Support for modern technologies: Compatible with cloud services,
artificial intelligence, and virtual machines.
• Disadvantages:
o Complexity and cost: Developing and maintaining modern OSs is
resource-intensive.
o Resource consumption: Feature-rich systems require powerful
hardware, limiting usage on older or less capable devices.
2. Describe system calls & their types.
A system call is a mechanism that allows a program to request a service from the
operating system's kernel. System calls provide a controlled way for programs to
interact with the hardware and perform essential operations such as file handling,
process control, and memory management. Since direct access to hardware and
critical resources could lead to security and stability risks, system calls act as a safe
interface for executing privileged instructions.
System calls can be broadly categorized into the following types:
1. Process Control
These system calls manage processes, allowing the creation, execution, and
termination of processes. They are crucial for handling the life cycle of processes and
managing their resources.
• Examples:
o fork(): Creates a new process, a child process, as a copy of the calling
process.
o exec(): Replaces the current process image with a new process image.
o exit(): Terminates a process.
o wait(): Makes a parent process wait for its child process to finish
execution.
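A minimal sketch of how these calls fit together on a POSIX system (illustrative only; "ls" is just an example program, and error handling is omitted):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                          // create a child process
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  // child: replace its image with "ls"
        exit(1);                                 // reached only if exec fails
    } else if (pid > 0) {
        int status;
        wait(&status);                           // parent waits for the child to finish
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}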
2. File Management
File management system calls handle the creation, deletion, reading, and writing of
files. They allow programs to store and retrieve data on disk, manage file attributes, and
control access permissions.
• Examples:
o open(): Opens a file.
o close(): Closes an opened file.
o read(): Reads data from a file.
o write(): Writes data to a file.
o chmod(): Changes file permissions.
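A short illustration of these file-management calls (POSIX flavour; "example.txt" is a hypothetical file name and error handling is minimal):
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void) {
    char buf[64];
    int fd = open("example.txt", O_CREAT | O_RDWR, 0644);  // open (or create) the file
    if (fd < 0)
        return 1;
    write(fd, "hello\n", 6);        // write data to the file
    lseek(fd, 0, SEEK_SET);         // move back to the beginning
    read(fd, buf, sizeof(buf));     // read the data back
    close(fd);                      // close the file
    chmod("example.txt", 0444);     // change its permissions to read-only
    return 0;
}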
3. Device Management
Device management system calls manage the input and output devices attached to the
system, such as printers, disks, and network cards. These calls allow programs to
interact with hardware devices in a uniform way.
• Examples:
o ioctl(): A versatile system call used to configure device-specific
properties.
o read() and write(): Used to transfer data to or from devices.
o open() and close(): Used to gain access to or release a device.
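A small example of ioctl() on a Linux/Unix terminal device; TIOCGWINSZ asks the terminal driver for its window size (one common request out of many, and requests are device-specific):
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>

int main(void) {
    struct winsize ws;
    // device-specific request: ask the terminal for its current size
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0)
        printf("terminal is %d rows x %d columns\n", ws.ws_row, ws.ws_col);
    return 0;
}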
4. Information Maintenance
Information maintenance system calls provide details about system status and
configuration. They can retrieve system information or set parameters that control
system behavior.
• Examples:
o getpid(): Retrieves the process ID of the current process.
o alarm(): Sets a timer that sends a signal when it expires.
o time(): Retrieves the current time.
o uname(): Provides system information, such as the OS name and version.
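A brief illustration of the information-maintenance calls listed above (POSIX; the output format is arbitrary):
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/utsname.h>

int main(void) {
    struct utsname u;
    printf("pid  : %d\n", (int)getpid());        // process ID of the current process
    printf("time : %ld\n", (long)time(NULL));    // current time in seconds since the epoch
    if (uname(&u) == 0)
        printf("uname: %s %s\n", u.sysname, u.release);  // OS name and version
    return 0;
}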
5. Communication (Interprocess Communication - IPC)
Communication system calls facilitate the exchange of data and synchronization
between multiple processes. These are essential for enabling parallel processing and
distributed systems.
• Examples:
o pipe(): Creates a unidirectional data channel for communication between
processes.
o shmget(): Allocates a shared memory segment.
o mmap(): Maps files or devices into memory for efficient data access.
o msgsnd() and msgrcv(): Send and receive messages in a message queue.
o semget(): Creates or accesses a semaphore for process synchronization.
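A small example of pipe() used between a parent and a child process (POSIX; error handling omitted for brevity):
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[32] = {0};
    pipe(fd);                          // fd[0] is the read end, fd[1] is the write end
    if (fork() == 0) {
        close(fd[0]);                  // child writes, so close the unused read end
        write(fd[1], "hello parent", 13);   // 13 bytes includes the terminating '\0'
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                      // parent reads, so close the unused write end
    read(fd[0], buf, sizeof(buf) - 1);
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);                        // reap the child
    return 0;
}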
3. How many types of operating system are there? Explain each of them.
Operating systems can be classified into several types based on their capabilities,
design, and purpose. Here’s an overview of the major types of operating systems:
1. Batch Operating System
In batch operating systems, jobs are processed in batches without user interaction.
Users submit jobs (programs and data) to operators, who then feed these jobs to the
system in batches. The OS processes each batch sequentially and outputs the results
once the batch is complete.
• Examples: Early IBM systems.
• Use Cases: Large-scale repetitive tasks, such as payroll processing.
• Advantages: Efficient CPU utilization by minimizing idle time.
• Disadvantages: No interactivity; users must wait for the entire batch to
complete before receiving output.
2. Time-Sharing Operating System
Time-sharing OSs allow multiple users to access a computer simultaneously by dividing
CPU time into small time slots. Each active user or process gets a time slot to execute,
creating the impression of concurrent execution.
• Examples: Unix, Multics.
• Use Cases: Multi-user environments like university systems or corporate
networks.
• Advantages: Interactive and responsive, allows for resource sharing.
• Disadvantages: Higher complexity, security challenges due to multiple users.
3. Distributed Operating System
Distributed OSs manage a group of independent, networked computers that work
together as a single system. These OSs coordinate resources across multiple machines
to perform tasks collaboratively, sharing load and increasing reliability.
• Examples: Plan 9, Amoeba.
• Use Cases: Distributed applications like scientific simulations, cloud
computing, and web applications.
• Advantages: High performance and reliability, scalability, fault tolerance.
• Disadvantages: Complexity in managing and coordinating resources across
multiple machines.
4. Network Operating System
Network OSs allow computers connected within a network to share resources, such as
files and printers, and enable communication between devices. Each system typically
has its own OS and is aware of other networked systems.
• Examples: Novell NetWare, Microsoft Windows Server.
• Use Cases: Corporate networks, file sharing, and printer sharing.
• Advantages: Centralized data management, improved security, and remote
access.
• Disadvantages: Limited support for multitasking on individual machines,
dependency on network reliability.
5. Real-Time Operating System (RTOS)
Real-time OSs are designed to process data and produce output within strict time
constraints. They are often embedded within devices that require timely responses,
such as medical equipment or industrial machinery. RTOS can be divided into Hard
RTOS (strict deadlines) and Soft RTOS (flexible deadlines).
• Examples: VxWorks, FreeRTOS.
• Use Cases: Embedded systems, medical devices, automotive systems.
• Advantages: Predictable and reliable performance, crucial for time-sensitive
applications.
• Disadvantages: Limited flexibility, typically specialized for particular hardware.
6. Multi-Tasking and Multi-User Operating System
A multi-tasking OS allows multiple processes to run simultaneously on a single CPU by
quickly switching between tasks. A multi-user OS extends this capability by allowing
multiple users to use the system at once.
• Examples: Linux, Windows, macOS.
• Use Cases: Personal computers, workstations, servers.
• Advantages: Efficient resource management, supports multiple users and
applications.
• Disadvantages: Increased resource consumption, complex security and access
control requirements.
7. Mobile Operating System
Mobile OSs are designed for handheld devices like smartphones and tablets. They are
optimized for touch input, connectivity, and efficient power usage.
• Examples: Android, iOS.
• Use Cases: Smartphones, tablets, wearable devices.
• Advantages: Lightweight, optimized for battery life and limited hardware.
• Disadvantages: Limited processing power, restricted multitasking capabilities
compared to desktop OSs.
8. Embedded Operating System
Embedded OSs are specialized systems designed for embedded hardware with specific
functions. These OSs have a small footprint and are optimized for the device they
control, such as household appliances, automotive systems, or industrial machines.
• Examples: Embedded Linux, QNX.
• Use Cases: Consumer electronics, medical equipment, industrial machines.
• Advantages: Highly efficient, real-time processing capabilities.
• Disadvantages: Limited flexibility and upgradability, often tightly bound to the
device’s hardware.
4. What is an operating system? Discuss the difficulties involved in writing an
operating system for a real-time environment. Give examples
An operating system (OS) is system software that manages computer hardware and
software resources and provides common services for computer programs. It serves as
an intermediary between users and the hardware, enabling users and applications to
interact with the system efficiently and safely. The OS performs essential tasks such as
process management, memory management, file handling, device control, and
security.
Real-Time Operating System (RTOS)
A real-time operating system is a specialized OS designed to serve real-time
applications that require precise, timely responses. In a real-time environment, the OS
must meet strict deadlines, often reacting to inputs within milliseconds or
microseconds to ensure system stability and predictability. Real-time systems are
typically classified into:
Hard Real-Time: Missing a deadline could lead to catastrophic consequences (e.g.,
medical systems, automotive braking systems).
Soft Real-Time: Missing a deadline degrades performance but does not result in
complete failure (e.g., multimedia streaming).
Difficulties in Writing an Operating System for a Real-Time Environment
Writing an OS for real-time systems presents unique challenges:
1. Meeting Timing Constraints
• Problem: Real-time systems must execute tasks within strict deadlines. In hard
real-time systems, even a minor delay could lead to system failure or endanger
lives.
• Challenge: Ensuring the OS scheduler consistently meets deadlines across
tasks of varying priority and computational requirements.
• Example: In an automotive airbag system, the OS must trigger deployment
within milliseconds of collision detection, without any delay.
2. Resource Management and Priority Scheduling
• Problem: Limited resources (CPU, memory, I/O) must be allocated in a way that
high-priority tasks always receive the resources they need to meet deadlines.
• Challenge: Designing a scheduler that can prioritize tasks effectively while
preventing issues like priority inversion, where a high-priority task is blocked by
lower-priority tasks.
• Example: In a robotics control system, sensor data processing must always
receive higher priority over less critical tasks, such as logging data to storage.
3. Interrupt Handling
• Problem: Real-time systems rely heavily on interrupt handling to respond quickly
to hardware events. An interrupt is a signal that requires immediate attention,
and it must be processed quickly.
• Challenge: Writing an OS that minimizes interrupt latency (the time taken to
respond to an interrupt) while avoiding interrupt overload, which can degrade
system performance.
• Example: In a pacemaker, an interrupt might signal an irregular heartbeat,
requiring the OS to immediately adjust the pacing rate to stabilize the heart.
4. Predictability and Determinism
• Problem: Real-time systems need deterministic behavior, meaning that their
actions are predictable and repeatable under all conditions.
• Challenge: Achieving deterministic responses can be difficult, especially in
complex systems with many dependencies. Factors like caching, memory
management, and task switching can introduce unpredictability.
• Example: In industrial machinery, the OS must control each movement of a
robotic arm with precise timing to avoid accidents and ensure consistent
operation.
5. Concurrency and Synchronization
• Problem: Real-time systems often perform multiple tasks concurrently,
requiring precise synchronization to avoid conflicts and ensure that tasks do not
interfere with each other.
• Challenge: Managing concurrency without causing delays or deadlocks,
particularly in systems where multiple tasks access shared resources.
• Example: In a flight control system, concurrent tasks might include monitoring
sensor data, controlling flight surfaces, and handling communications. Improper
synchronization could lead to conflicting commands.
6. Memory Constraints and Management
• Problem: Many real-time systems operate on hardware with limited memory,
making memory management crucial.
• Challenge: Allocating memory in a way that minimizes delays, such as avoiding
paging (which is often non-deterministic) while ensuring memory availability for
high-priority tasks.
• Example: Embedded systems in medical devices often have limited RAM and
must run continuously without memory leaks or crashes.
7. Testing and Verification
• Problem: Real-time OSs must be rigorously tested and verified, as failures can
have serious consequences.
• Challenge: Exhaustive testing is necessary to verify that timing constraints and
reliability requirements are met. However, real-time testing is complex and time-
consuming due to the need to simulate various conditions.
• Example: In autonomous vehicles, each OS response to sensor input must be
tested under different driving conditions to ensure safe operation.
Examples of Real-Time Operating Systems
1. VxWorks: Widely used in embedded systems for aerospace, automotive, and
industrial applications. For example, it’s used in Mars rovers to ensure timely
responses to control commands.
2. FreeRTOS: Open-source real-time OS for embedded systems. It is used in IoT
devices where real-time data processing is essential.
3. QNX: A real-time OS used in automotive systems and medical devices where
reliability and deterministic performance are crucial.
5. Explain the following terms:
a. Multiprogramming
b. Multitasking
c. Time sharing
d. Multiprocessing
a. Multiprogramming
Multiprogramming is the technique of running multiple programs (or processes) on a
single processor system in such a way that the CPU is kept busy by switching between
programs. The operating system loads multiple programs into memory at once, and
when one program is waiting for I/O operations (like reading from disk), the CPU is given
to another program to make use of idle CPU time.
• How it works: In a multiprogramming system, when a program needs to perform
I/O (e.g., reading from a disk), the OS temporarily suspends its execution and
switches to another program that is ready to use the CPU. This increases the
overall CPU utilization.
• Benefits:
o Maximizes CPU usage by reducing idle time.
o More efficient resource utilization, especially when programs spend
significant time waiting for I/O operations.
b. Multitasking
Multitasking is a broader concept that refers to the ability of an operating system to
manage the execution of multiple tasks (or processes) seemingly simultaneously, by
rapidly switching between them. This includes both cooperative multitasking (where
the OS depends on processes to give up control) and preemptive multitasking (where
the OS takes control and switches processes on its own).
• How it works: In multitasking, the CPU executes multiple tasks by quickly
switching from one task to another. This gives the illusion that the tasks are
running at the same time. In modern systems, the OS preempts tasks to ensure
fair CPU time allocation.
• Benefits:
o Increases system responsiveness, especially in multi-user environments.
o Allows a user to run multiple applications at once (e.g., a web browser,
text editor, and email client).
c. Time-sharing
Time-sharing is an extension of multiprogramming, where the CPU time is divided into
small time slices and shared between multiple users or tasks. Each task (or user) is
allocated a small time slice in which it can execute, and the OS switches between tasks
quickly, providing an interactive environment.
• How it works: The OS rapidly switches between different tasks (or users) by
allocating them fixed time slices, typically in the order of milliseconds. This
allows multiple users to interact with the system at the same time, with each
user receiving a fair share of CPU time.
• Benefits:
o Provides interactive and responsive user experiences, allowing users to
work on their tasks without noticeable delays.
o Supports multiple users simultaneously, which is especially important for
multi-user systems like mainframes.
d. Multiprocessing
Multiprocessing refers to the use of two or more processors (CPUs) in a computer
system to run multiple tasks concurrently. In a multiprocessor system, each processor
can handle a separate task, leading to true parallelism and improved system
performance, particularly for computationally intensive tasks.
• How it works: The OS distributes tasks across multiple processors, with each
processor executing a part of the work simultaneously. There are two main types
of multiprocessing:
o Symmetric Multiprocessing (SMP): All processors have equal access to
memory and perform tasks independently.
o Asymmetric Multiprocessing (AMP): One processor acts as the master
and controls the others (slave processors), which execute specific tasks
under the master’s direction.
• Benefits:
o True parallel processing: Multiple processors work on different tasks
simultaneously, leading to faster computation.
o Improved performance and reliability: If one processor fails, the others
can continue to operate, making the system more fault-tolerant.
6. Suppose you are designing a real-time operating system. In what ways will your
design differ from the design of a multiuser operating system? Explain.
Designing a real-time operating system (RTOS) is fundamentally different from designing
a multiuser operating system (MUOS), as the primary focus, constraints, and
optimization goals vary significantly.
1. Timing and Responsiveness
• RTOS: The primary requirement for an RTOS is predictability and timely response
to events. The system must guarantee that tasks complete within defined time
constraints. RTOS scheduling focuses on meeting deadlines and ensuring
deterministic behavior, meaning the OS can predictably respond to inputs within
a known time frame.
• MUOS: In contrast, a multiuser OS prioritizes resource allocation and
responsiveness across multiple users rather than strict timing requirements.
Responsiveness is important, but timing constraints are flexible, and delays are
usually acceptable as long as they don't impact user experience.
2. Scheduling
• RTOS: Scheduling is deadline-driven, often using priority-based preemptive
scheduling to ensure that high-priority tasks execute immediately. Examples of
scheduling methods used in RTOS include Rate-Monotonic Scheduling (RMS)
and Earliest Deadline First (EDF), designed to meet stringent timing constraints.
• MUOS: A multiuser OS typically uses fairness-based scheduling to ensure that
all users get a share of CPU time. Popular algorithms include Round-Robin or
Multilevel Queue Scheduling. While priority-based scheduling can be used, the
focus is on providing fair resource access rather than meeting specific deadlines.
3. Concurrency and Synchronization
• RTOS: Concurrency management in an RTOS requires highly efficient
mechanisms to prevent priority inversion (where a lower-priority task blocks a
higher-priority task). Protocols like priority inheritance are used to handle such
scenarios to prevent timing issues in critical tasks.
• MUOS: In a multiuser OS, concurrency is also important, but the focus is on
resource sharing across processes and users. Locking and synchronization
mechanisms like semaphores, mutexes, and critical sections are used to
prevent conflicts, but delays are more tolerable, and priority inversion is less of
an issue.
4. Interrupt Handling
• RTOS: An RTOS must minimize interrupt latency (the time it takes to respond to
an interrupt) and interrupt jitter (variability in interrupt handling time) to ensure
predictable timing. Interrupt handlers are designed to be lightweight and
efficient to avoid delaying critical tasks.
• MUOS: In a multiuser OS, interrupt handling is still important, but latency
requirements are less strict. Interrupts may not require immediate handling, as
the focus is on ensuring that each user’s tasks are fairly managed rather than
responding within strict time bounds.
5. Memory Management
• RTOS: Memory management in an RTOS is designed for deterministic behavior.
Dynamic memory allocation is minimized, as it can lead to fragmentation and
unpredictable delays. Instead, static memory allocation is preferred, and
features like paging are often avoided to eliminate delays caused by page faults.
• MUOS: Multiuser systems prioritize memory sharing and efficiency, using
dynamic memory allocation techniques, paging, and virtual memory to
accommodate multiple users and applications simultaneously. This allows
efficient resource usage but can introduce latency that an RTOS cannot afford.
6. Reliability and Fault Tolerance
• RTOS: Real-time systems are often used in critical environments (e.g., medical
devices, industrial controls) where faults can have severe consequences.
Therefore, they require high reliability and fault tolerance, with mechanisms like
watchdog timers to reset tasks that fail to complete within a specific time.
• MUOS: A multiuser OS emphasizes security and stability for multiuser
environments, but fault tolerance may be less strict compared to an RTOS.
Crashes or delays in a single user’s process are usually not catastrophic, as
other users can continue working, and the OS can restart faulty processes as
needed.
7. Resource Management
• RTOS: In an RTOS, resource management must be tightly controlled to ensure
that critical tasks have immediate access to necessary resources without
delays. Resources are allocated based on priority rather than user fairness, and
high-priority tasks are protected from resource contention.
• MUOS: A multiuser OS uses a resource management approach focused on fair
distribution across all users. Resources like CPU time, memory, and I/O are
shared and allocated with policies to ensure that no single user monopolizes
resources.
8. System Size and Overhead
• RTOS: RTOSs are typically lightweight and stripped down to minimize processing
overhead and meet strict timing constraints. They include only essential features
to ensure low latency and predictability, making them suitable for embedded
systems with limited resources.
• MUOS: Multiuser OSs are larger and more feature-rich, supporting complex user
interfaces, multitasking, and user management features. They can handle higher
overheads, which helps provide a richer user experience but reduces the
predictability required in real-time environments.
7. What are real time operating systems? How are they developed & implemented?
Illustrate some applications where they can be used.
A real-time operating system (RTOS) is an operating system designed to manage
hardware resources and execute tasks within strict timing constraints. Unlike general-
purpose operating systems, an RTOS must respond to inputs and complete tasks within
a specific time frame to ensure predictable, deterministic behavior. This is essential for
applications where timing precision and reliability are critical, such as medical devices,
industrial automation, and aerospace systems.
Development and Implementation of Real-Time Operating Systems
1. Defining Requirements:
o Performance requirements: These define the timing constraints, such as
deadlines, latency, and response times.
o Resource constraints: Memory and processing capabilities are often
limited, especially in embedded systems.
o Reliability: Many RTOSs operate in critical systems where faults are
unacceptable, requiring high levels of fault tolerance.
2. Selecting the Kernel Type:
o Hard Real-Time: Guarantees strict timing; missing a deadline could lead
to severe consequences.
o Soft Real-Time: Timing constraints are less strict, where delays degrade
performance but are not catastrophic.
o Hybrid Real-Time: Combines elements of both hard and soft real-time,
balancing stringent timing with some flexibility.
3. Scheduler Design:
o Fixed-Priority Scheduling: Pre-assigns priorities to tasks based on
urgency and executes them accordingly (e.g., Rate Monotonic
Scheduling).
o Dynamic-Priority Scheduling: Adjusts priorities based on deadlines,
such as in Earliest Deadline First scheduling.
4. Interrupt Management:
o Minimizing interrupt latency is crucial, as delays can affect the
timeliness of critical tasks.
o Interrupts are often prioritized, and tasks are preempted to handle
interrupts that require immediate attention.
5. Memory Management:
o RTOSs typically avoid features like paging and virtual memory to prevent
unpredictable delays.
o Static memory allocation is common to ensure predictable timing, with
minimal dynamic memory allocation to reduce fragmentation.
6. Testing and Verification:
o Extensive testing is required to verify that all timing constraints are met
under various conditions.
o Real-time systems are often tested using worst-case execution times
(WCET) analysis to ensure that tasks can always complete within their
deadlines.
7. Implementation:
o Coding: Once the design is complete, the RTOS is typically coded in C or
C++ for efficient control over hardware.
o Integration with Hardware: RTOSs are often implemented on embedded
hardware, where they are fine-tuned to the specifics of the processor and
peripherals.
o Optimization and Validation: The system is optimized for low overhead
and validated to ensure compliance with timing requirements.
Applications of Real-Time Operating Systems
RTOSs are used across various fields where time-critical responses are necessary.
Some of the major applications include:
1. Industrial Automation:
o Used to control machinery and robotics on production lines, where
precise timing ensures safe and efficient operation.
o Example: A robotic arm in an assembly line that needs to synchronize
with conveyor belts and other equipment.
2. Medical Devices:
o In medical systems, accurate timing can be a matter of life and death, as
devices must respond to patient data immediately.
o Example: Heart pacemakers or ventilators that need to respond in real-
time to patient conditions.
3. Automotive Systems:
o Many advanced automotive features, such as anti-lock braking systems
(ABS), airbag deployment, and engine control, rely on RTOSs to ensure
driver and passenger safety.
o Example: An airbag control system, which must detect a collision and
deploy the airbag within milliseconds.
4. Telecommunications:
o Used in network routers, switches, and base stations to handle large
volumes of data and ensure low-latency packet transmission.
o Example: A 5G base station that manages and prioritizes data packets for
real-time communication.
5. Aerospace and Defense:
o Many flight control systems, missile guidance systems, and other defense
applications use RTOSs to ensure accurate and predictable responses.
o Example: Autopilot systems in aircraft, which need to make rapid
adjustments to maintain a stable flight path.
6. Consumer Electronics:
o Many embedded consumer devices, such as washing machines,
microwave ovens, and gaming consoles, use RTOSs to ensure responsive
user interactions.
o Example: In a washing machine, the RTOS controls cycles, water levels,
and temperature settings in response to sensor input.
7. Internet of Things (IoT):
o IoT devices often use RTOSs for sensor data processing, control, and
communication, where immediate data processing is crucial.
o Example: Smart home security cameras that process motion detection
and stream video in real-time.
UNIT-2
1. Write a comparison between a process and a thread.
2. Explain the differences in the degree to which the following scheduling
algorithms discriminate in favour of short processes:
a. FCFS b. Round robin c. Multilevel feedback queues
a. First-Come, First-Served (FCFS)
• Description: FCFS schedules processes in the order of their arrival, giving the
CPU to the process that arrives first until it finishes.
• Discrimination: Low
o FCFS does not discriminate in favor of short processes. It operates on a
simple queue basis, where each process waits for its turn regardless of its
duration.
o Issue: This can lead to the "convoy effect", where short processes get
stuck behind long processes, resulting in high waiting times for short
processes.
o Effect: FCFS performs poorly for short processes because they may wait
unnecessarily long, especially if a long process precedes them.
b. Round Robin (RR)
• Description: Round Robin assigns each process a fixed time slice (quantum) and
cycles through processes in a circular fashion, returning unfinished processes to
the back of the queue after each time slice.
• Discrimination: Moderate
o RR moderately favors short processes, as they will complete more quickly
due to the time-slice approach. If a process can finish within one or two
time slices, it will complete quickly relative to longer processes.
o Effect of Time Slice: A smaller time slice increases RR’s discrimination in
favor of short processes, as shorter jobs can complete within fewer
rounds. However, a larger time slice can reduce its effectiveness,
resembling FCFS.
o Effect: RR prevents any single process from monopolizing the CPU,
reducing waiting times for short processes compared to FCFS, though still
not highly optimized for short processes.
c. Multilevel Feedback Queues (MLFQ)
• Description: MLFQ has multiple priority levels, with processes moving between
queues based on their behaviour and execution time. Shorter, interactive
processes often get higher priority and are placed in queues with smaller time
slices, while longer processes are pushed to lower-priority queues.
• Discrimination: High
o MLFQ is highly favourable for short processes, giving them priority and
moving them to higher-priority queues. Short processes usually complete
quickly in high-priority queues with shorter time slices.
o Feedback Mechanism: This algorithm adjusts priorities dynamically,
moving long processes to lower-priority queues, effectively discriminating
against long-running processes and preventing them from delaying short
processes.
o Effect: MLFQ provides the best discrimination in favour of short
processes, as it optimizes response times for processes that need shorter
bursts and penalizes longer processes with lower priorities.
3. Explain scheduling queues with queuing diagram representation of process
scheduling.
In process scheduling, scheduling queues help manage the different stages and
statuses of processes as they move through the system. Processes go through various
queues, each representing a different state, during their lifecycle.
Different scheduling queues are:
1. Job Queue
o This queue contains all the processes in the system. When a process
enters the system (e.g., when a user runs a program), it is initially placed
in the job queue.
2. Ready Queue
o The ready queue holds all processes that are loaded in memory and
ready to run but are waiting for the CPU. The scheduler selects processes
from this queue to assign to the CPU.
o Processes move to the ready queue when they are ready to execute but
need to wait for CPU time.
3. Device (I/O) Queue
o When a process requires an I/O operation (such as reading from a disk or
accessing a printer), it is moved to an I/O queue specific to that device.
o There can be multiple device queues, one for each I/O device. Processes
wait here until the necessary device becomes available, after which they
can proceed.
4. Waiting Queue (Blocked Queue)
o The waiting queue is where processes wait for a specific event (other
than I/O), like a resource becoming available or a signal from another
process.
o Processes stay in the waiting queue until their event occurs, after which
they are moved back to the ready queue.
5. Exit Queue
o When a process has completed its execution, it is moved to the exit
queue, where it waits for final cleanup before being removed from the
system.
4. Suppose that the processes given below arrive for execution at the times indicated:
Process Burst time
P1 8
P2 4
P3 1
Calculate average turnaround time, average waiting time & throughput:
a. FCFS b. SRTF c. Non-pre-emptive SJF
5. What are the different states in which a process can be at a particular time?
What factors can lead to transitions between process states?
A process in an operating system can exist in several states during its lifecycle. The
states represent the status of the process at any given time.
Main Process States
1. New: The process is being created.
2. Ready: The process is prepared to run and is waiting for CPU time.
3. Running: The process is currently being executed by the CPU.
4. Waiting (or Blocked): The process is waiting for some event (e.g., I/O operation
completion or resource availability).
5. Terminated (or Exit): The process has completed execution and is ready to be
removed from the system.
The transitions between process states are triggered by various factors, primarily
driven by system events, process requirements, or the operating system’s
scheduling policies.
1. New → Ready (Admitted)
• Factor: Process Creation
• When a new process is created and admitted into the system (e.g., when a user
starts a new program), it moves from the New state to the Ready state. This
admission depends on system resources and load.
2. Ready → Running (Dispatch)
• Factor: CPU Scheduling
• When the CPU scheduler selects a process from the Ready queue, it is
dispatched and moves from the Ready state to the Running state. This
dispatching is based on the scheduling algorithm (e.g., FCFS, Round Robin, etc.).
3. Running → Waiting (I/O or Event Wait)
• Factor: I/O or Resource Wait
• If a process in the Running state requires an I/O operation (e.g., reading data
from a disk) or needs to wait for a specific event (like user input or resource
availability), it is moved to the Waiting state. The process will stay in this state
until the event or I/O operation completes.
4. Running → Ready (Preemption)
• Factor: Time Slice Expiration or Higher Priority Process Arrival
• In preemptive scheduling, the CPU may switch a process from the Running state
to the Ready state if its time slice (allocated CPU time) expires or if a higher-
priority process arrives in the ready queue. This allows the CPU to serve all
processes fairly or to give priority to more urgent tasks.
5. Waiting → Ready (Event Completion)
• Factor: I/O Completion or Event Occurrence
• When a process in the Waiting state completes its I/O operation or the event it
was waiting for occurs, it is moved back to the Ready state. This transition allows
the process to compete for CPU time again.
6. Running → Terminated (Process Completion or Exit)
• Factor: Process Completion or Forced Termination
• When a process finishes its execution (e.g., it has completed all instructions), it
moves from the Running state to the Terminated state. Alternatively, a process
may be terminated by the operating system due to errors, resource unavailability,
or manual user intervention (e.g., killing a process).
6. Differentiate between user level threads and kernel level threads.
7. Explain the RR CPU scheduling algorithm with an example.
Round Robin (RR) is a preemptive CPU scheduling algorithm built around a fixed Time Quantum (TQ), also called a time slice, which the system defines in advance. Time sharing is the main emphasis of the algorithm, and each step is carried out cyclically.
The algorithm works as follows:
1. Processes that are ready to execute enter the ready queue in arrival order.
2. The process at the head of the ready queue is given the CPU for at most one time quantum.
3. If the process still needs CPU time when its quantum expires, it is preempted and added back to the tail of the ready queue with its remaining burst time.
4. If the process completes within its quantum, it leaves the system and is not placed back in the ready queue.
5. The ready queue never holds duplicate entries: a process appears in it only once until it is dispatched again, which avoids redundant copies of the same process.
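A small illustrative example (the processes, burst times, and time quantum below are assumed for demonstration; they are not given in the question). Let three processes arrive at time 0 with burst times P1 = 5, P2 = 3, P3 = 2, and let the time quantum be 2.
Execution order: P1 (0-2), P2 (2-4), P3 (4-6), P1 (6-8), P2 (8-9), P1 (9-10).
Completion times: P1 = 10, P2 = 9, P3 = 6.
Turnaround time (completion - arrival): P1 = 10, P2 = 9, P3 = 6, giving an average of 25/3 ≈ 8.33.
Waiting time (turnaround - burst): P1 = 5, P2 = 6, P3 = 4, giving an average of 15/3 = 5.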
UNIT-3
1 Describe the banker’s algorithm for safe allocation. Consider a system with three
processes and three resource types, and at time t0 the following snapshot of the
system has been taken:
Process Allocated Maximum Available
R2 R3 R1 R2 R3 R1 R2 R3
P1 23 368 7 7 10
P2 03 433
P3 24 344
a. Is the current allocation system a safe state?
b. Would the following requests be granted in the current safe state?
1) Process P2 requests (1, 0)
2) Process P1 requests (1, 0)
The Banker's Algorithm is a resource allocation and deadlock avoidance algorithm
used to determine whether a system is in a safe state or not. The main idea is that the
system must ensure that it can always fulfill the needs of all processes without causing
a deadlock. The system is in a safe state if there exists a sequence of processes such
that each process can finish by obtaining the remaining resources it needs.
Allocated Resources: Resources currently assigned to each process.
Maximum Resources: The maximum resources each process may need to complete.
Available Resources: The resources available in the system that are not currently
allocated.
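A sketch of the safety check at the heart of the Banker's Algorithm, written in plain C (P, R, and the matrices passed in are generic placeholders, not the values from the question above):
#define P 3   // number of processes
#define R 3   // number of resource types

// need[i][j] = max[i][j] - alloc[i][j] for every process i and resource type j.
// Returns 1 if a safe sequence exists, 0 if the state is unsafe.
int is_safe(int alloc[P][R], int need[P][R], int avail[R]) {
    int work[R], finish[P] = {0}, done = 0;
    for (int j = 0; j < R; j++)
        work[j] = avail[j];                      // work starts as the available vector
    while (done < P) {
        int progressed = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            int can_run = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = 0; break; }
            if (can_run) {                       // process i can obtain everything it still needs
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];      // when it finishes, it releases its allocation
                finish[i] = 1;
                done++;
                progressed = 1;
            }
        }
        if (!progressed) return 0;               // no remaining process can proceed: unsafe state
    }
    return 1;                                    // all processes can finish: safe state
}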
2 Explain bounded buffer problem of synchronization.
The Bounded Buffer Problem is a classic synchronization problem that involves a
producer and a consumer interacting with a shared fixed-size buffer. This is often
referred to as the Producer-Consumer Problem. The buffer has a limited capacity
(hence "bounded"), and both the producer and consumer must synchronize their
actions to ensure that the buffer is accessed correctly and efficiently.
Problem Definition:
• Producer: The producer generates data and places it into the buffer.
• Consumer: The consumer consumes data from the buffer.
• Buffer: The buffer is a shared resource that has a fixed size and stores data
temporarily before the consumer consumes it.
The challenge is to ensure proper synchronization between the producer and consumer
so that:
1. The producer does not add data to a full buffer.
2. The consumer does not try to consume data from an empty buffer.
3. Both the producer and consumer can operate concurrently without interference
or deadlocks.
Conditions for Synchronization:
• Full Condition: If the buffer is full, the producer must wait until there is space
available.
• Empty Condition: If the buffer is empty, the consumer must wait until there is
data available to consume.
• Mutual Exclusion: The producer and consumer should not simultaneously
access the buffer. This can lead to race conditions and incorrect behavior.
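A classic semaphore-based sketch of the solution, written in the same pseudocode style as the other synchronization procedures in this unit (N is the buffer capacity; produce_item, insert, remove_item, and consume are placeholder helpers):
semaphore mutex = 1;   // mutual exclusion on the buffer
semaphore empty = N;   // counts the empty slots
semaphore full  = 0;   // counts the filled slots

procedure producer() {
    while (true) {
        item = produce_item();
        wait(empty);           // block if the buffer is full
        wait(mutex);           // enter the critical section
        insert(item);          // place the item in the buffer
        signal(mutex);         // leave the critical section
        signal(full);          // one more filled slot
    }
}

procedure consumer() {
    while (true) {
        wait(full);            // block if the buffer is empty
        wait(mutex);           // enter the critical section
        item = remove_item();  // take an item out of the buffer
        signal(mutex);         // leave the critical section
        signal(empty);         // one more empty slot
        consume(item);
    }
}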
3 What are semaphores? Define binary semaphores. How mutual exclusion is
implemented using semaphores?
A semaphore is a synchronization primitive used to manage access to shared
resources in concurrent programming. It is an integer variable used to control access
based on certain conditions, typically used to signal and synchronize processes or
threads.
Semaphores are typically used to:
• Coordinate the access of multiple processes or threads to shared resources.
• Prevent race conditions and ensure safe execution in concurrent environments.
• Signal between processes or threads to indicate events such as the availability
of a resource.
Binary Semaphore:
A binary semaphore is a semaphore that has only two possible values: 0 and 1. It is
used to implement mutual exclusion (mutex), ensuring that only one process can
access a critical section at a time.
• Binary semaphore operations:
o wait(S): If the semaphore value is 1, it is set to 0, and the process enters
the critical section. If the value is 0, the process is blocked and waits.
o signal(S): If the value is 0, it is set to 1, and the process releases the
critical section, possibly unblocking a waiting process.
Mutual Exclusion refers to the requirement that only one process or thread should be
allowed to access a critical section of code or resource at any given time. Binary
semaphores are a perfect fit for this purpose. Here's how mutual exclusion is
implemented using binary semaphores:
1. Initialization:
o A binary semaphore (let's call it mutex) is initialized to 1. This indicates
that the critical section is initially available for access.
2. Critical Section Access:
o When a process wants to enter the critical section, it calls wait(mutex). If
the semaphore value is 1, it is changed to 0, and the process enters the
critical section.
o If the semaphore value is already 0 (indicating that another process is
inside the critical section), the process is blocked until the semaphore
value becomes 1.
3. Critical Section Exit:
o When a process finishes executing in the critical section, it calls
signal(mutex), which sets the semaphore value to 1. This indicates that
the critical section is now available, and if there are any waiting
processes, one of them will be unblocked to enter the critical section.
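In the same pseudocode style, the whole pattern can be written compactly (the critical-section body is a placeholder):
semaphore mutex = 1;       // 1 means the critical section is free

procedure process() {
    wait(mutex);           // entry section: blocks if another process holds the semaphore
    // critical section: access the shared resource
    signal(mutex);         // exit section: releases the semaphore, possibly unblocking a waiter
}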
4 Explain the deadlock. What are the conditions for deadlock not to occur?
Deadlock is a situation in a multi-process or multi-threaded system where two or more
processes (or threads) become stuck in a state where each is waiting for another to
release a resource. As a result, none of the processes can proceed, and the system as a
whole is in a state of deadlock.
In simple terms, deadlock occurs when a set of processes are blocked forever because
they are waiting for resources that other processes in the set hold, and these resources
can never be released.
Conditions for Deadlock:
For a deadlock to occur, all of the following four necessary conditions must be true
simultaneously. These are known as the Coffman conditions:
1. Mutual Exclusion:
o At least one resource must be held in a non-shareable mode, meaning
only one process can access it at a time. If any process holds a resource
exclusively, no other process can use it.
2. Hold and Wait:
o A process must be holding at least one resource and waiting for additional
resources that are currently being held by other processes. This condition
allows processes to hold resources while waiting for others, contributing
to the possibility of deadlock.
3. No Preemption:
o Resources cannot be forcibly taken from a process holding them; they
must be released voluntarily. In other words, once a process acquires a
resource, it cannot be preempted (i.e., taken away by the system) until it
finishes its task and releases the resource.
4. Circular Wait:
o A set of processes must exist such that each process is waiting for a
resource held by the next process in the set, forming a closed chain. For
example, Process A waits for Process B’s resource, Process B waits for
Process C’s resource, and Process C waits for Process A’s resource
5 How is inter-process communication achieved in O.S.?
Inter-Process Communication (IPC) refers to the mechanisms that allow processes to
communicate and synchronize with each other. In an operating system, processes are
typically isolated from each other to prevent interference, but they often need to
exchange data or synchronize their actions to complete tasks. IPC provides the
necessary tools for processes to communicate and share information.
IPC can be achieved by two methods:
i) Shared Memory Method: Communication between processes using shared memory requires the processes to share some variable, and it depends entirely on how the programmer implements it. One way of communicating through shared memory can be imagined like this:
• Suppose process1 and process2 are executing simultaneously and share some resources or use information from each other. Process1 generates information about certain computations or resources being used and keeps it as a record in shared memory.
• When process2 needs to use the shared information, it checks the record stored in shared memory, takes note of the information generated by process1, and acts accordingly.
Processes can thus use shared memory both for extracting information recorded by another process and for delivering specific information to another process.
ii) Message Passing Method: In this method, processes communicate with each other without using any kind of shared memory. If two processes p1 and p2 want to communicate with each other, they proceed as follows:
• Establish a communication link (if a link already exists, there is no need to establish it again).
• Start exchanging messages using basic primitives. At least two primitives are needed:
send(message, destination) or send(message)
receive(message, host) or receive(message)
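The shared-memory method above can be sketched with the System V calls mentioned in Unit 1 (shmget/shmat); the key 1234 and the message string are arbitrary illustrative values:
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void) {
    // create (or look up) a 1 KB shared memory segment identified by key 1234
    int shmid = shmget(1234, 1024, IPC_CREAT | 0666);
    char *mem = (char *)shmat(shmid, NULL, 0);    // attach the segment to this process
    strcpy(mem, "record produced by process1");   // process1 leaves a record in shared memory
    // a second process calling shmget(1234, ...) and shmat() can now read this record
    shmdt(mem);                                   // detach when finished
    return 0;
}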
6 With simple data structures/procedures, explain the following problems of
synchronisation:
a. Readers-writer’s problem
b. Dining philosopher’s problem
The Readers-Writers Problem is a classic synchronization problem where there is a
shared resource (e.g., a database) that multiple processes (readers and writers) need to
access concurrently. The challenge lies in ensuring that readers and writers can access
the resource safely without causing inconsistencies or data corruption.
Problem Explanation:
• Readers: Can read the shared resource concurrently without causing any issues.
• Writers: Modify the shared resource, so no other process (reader or writer)
should be accessing it while a writer is working on it.
The main issues are:
• Readers can access the resource concurrently.
• A writer needs exclusive access to the resource.
• Writers should not be starved (i.e., they should eventually get a chance to write).
• Readers should not be starved (i.e., they should not be blocked indefinitely).
Simple procedures for reader and writer:
semaphore mutex = 1;   // mutual exclusion while updating readcount
semaphore wrt = 1;     // exclusive access to the shared resource for writers
int readcount = 0;     // number of readers currently accessing the resource

procedure reader() {
    wait(mutex);                 // enter the critical section to update readcount
    readcount = readcount + 1;
    if (readcount == 1)
        wait(wrt);               // the first reader locks out writers
    signal(mutex);               // exit the critical section

    // read the shared resource

    wait(mutex);                 // enter the critical section to update readcount
    readcount = readcount - 1;
    if (readcount == 0)
        signal(wrt);             // the last reader releases writers
    signal(mutex);               // exit the critical section
}

procedure writer() {
    wait(wrt);                   // get exclusive access to the resource
    // write to the shared resource
    signal(wrt);                 // release exclusive access
}
The Dining Philosophers Problem is a synchronization problem where five
philosophers are sitting at a round table, thinking and occasionally eating. There is a
fork between each pair of adjacent philosophers. Each philosopher needs two forks to
eat, but there is only one fork between any two philosophers, creating potential
conflicts when two philosophers try to pick up the same fork at the same time.
The challenge is to design a solution that ensures:
1. No deadlock (i.e., no philosopher should be stuck forever waiting for a fork).
2. No starvation (i.e., each philosopher should eventually get a chance to eat).
3. Concurrency (i.e., as many philosophers as possible should be eating at the
same time, without interference).
Simple Procedure for Dining Philosophers Problem:
semaphore fork[5] = {1, 1, 1, 1, 1};   // 5 forks, each initialized to 1 (available)

procedure philosopher(i) {
    while (true) {
        think();                       // philosopher is thinking
        wait(fork[i]);                 // pick up the left fork
        wait(fork[(i + 1) % 5]);       // pick up the right fork
        eat();                         // philosopher is eating
        signal(fork[i]);               // put down the left fork
        signal(fork[(i + 1) % 5]);     // put down the right fork
    }
}
Note that this simple version can still deadlock if all five philosophers pick up their left forks at the same time; common refinements are to let one philosopher pick up the right fork first, or to allow at most four philosophers to reach for forks simultaneously.
7 Explain the critical section with an example.
The Critical Section is a part of a program where shared resources are accessed and
modified. When multiple processes or threads are running concurrently, they might
need to access the same shared resources, such as memory, files, or devices. The
critical section problem refers to the issue of ensuring that when one process is
executing within its critical section, no other process is allowed to enter its own critical
section to access the shared resource simultaneously.
The critical section problem arises from the need for mutual exclusion—ensuring that
only one process or thread can execute in the critical section at a time to avoid data
corruption or inconsistencies.
Conditions for Solving the Critical Section Problem
For a solution to the critical section problem, the following conditions must be satisfied:
1. Mutual Exclusion: Only one process or thread can be in its critical section at any
given time.
2. Progress: If no process is in the critical section and some processes want to
enter their critical section, the selection of the process that will enter the critical
section must be made without indefinite delay.
3. Bounded Waiting: A process that wants to enter its critical section should not be
forced to wait indefinitely. There must be a limit on the number of times other
processes are allowed to enter their critical sections before the waiting process
can enter its own.
Example of Critical Section
Consider a scenario with two processes (P1 and P2) that both want to increment a shared
variable, counter. This variable is initialized to 0. If both processes try to increment the
variable at the same time, they could read the same value and both increment it to 1, even
though it should have been incremented to 2.
Solution Using Mutual Exclusion (Critical Section Management):
We can use a mutex or semaphore to ensure mutual exclusion. This prevents multiple
processes from entering the critical section at the same time.
int counter = 0;        // shared resource (critical section)
semaphore mutex = 1;    // binary semaphore initialized to 1 (available)

void process1() {
    wait(mutex);              // lock the mutex to enter the critical section
    counter = counter + 1;    // critical section: modifying the shared resource
    signal(mutex);            // release the mutex after leaving the critical section
}

void process2() {
    wait(mutex);              // lock the mutex to enter the critical section
    counter = counter + 1;    // critical section: modifying the shared resource
    signal(mutex);            // release the mutex after leaving the critical section
}
UNIT-4
1.Explain the difference between the following
a. MVT & MFT b. Internal & external fragmentation c. Logical & physical address
2 Consider the following page reference string- 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3,
2, 1, 2, 3, 6 How many page faults will occur for LRU, FIFO & optimal page
replacement algorithms, assuming 4 available frames (initially all empty)?
To calculate the number of page faults for the three page replacement algorithms (LRU,
FIFO, and Optimal) given the page reference string:
Page Reference String:
1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
Available Frames: 4 (initially all empty)
1. FIFO (First In, First Out) Page Replacement Algorithm:
FIFO replaces the page that has been in memory the longest when a page fault occurs.
1. 1 → Page fault (1 in frames)
2. 2 → Page fault (1, 2 in frames)
3. 3 → Page fault (1, 2, 3 in frames)
4. 4 → Page fault (1, 2, 3, 4 in frames)
5. 2 → No page fault (2 is already in frames)
6. 1 → No page fault (1 is already in frames)
7. 5 → Page fault (replace 1, the oldest page) → (2, 3, 4, 5 in frames)
8. 6 → Page fault (replace 2) → (3, 4, 5, 6 in frames)
9. 2 → Page fault (replace 3) → (4, 5, 6, 2 in frames)
10. 1 → Page fault (replace 4) → (5, 6, 2, 1 in frames)
11. 2 → No page fault (2 is already in frames)
12. 3 → Page fault (replace 5) → (6, 2, 1, 3 in frames)
13. 7 → Page fault (replace 6) → (2, 1, 3, 7 in frames)
14. 6 → Page fault (replace 2) → (1, 3, 7, 6 in frames)
15. 3 → No page fault (3 is already in frames)
16. 2 → Page fault (replace 1) → (3, 7, 6, 2 in frames)
17. 1 → Page fault (replace 3) → (7, 6, 2, 1 in frames)
18. 2 → No page fault (2 is already in frames)
19. 3 → Page fault (replace 7) → (6, 2, 1, 3 in frames)
20. 6 → No page fault (6 is already in frames)
Total page faults for FIFO = 14
2. LRU (Least Recently Used) Page Replacement Algorithm:
LRU replaces the page that has not been used for the longest time.
1. 1 → Page fault (1 in frames)
2. 2 → Page fault (1, 2 in frames)
3. 3 → Page fault (1, 2, 3 in frames)
4. 4 → Page fault (1, 2, 3, 4 in frames)
5. 2 → No page fault (2 is already in frames)
6. 1 → No page fault (1 is already in frames)
7. 5 → Page fault (replace 3, the least recently used page) → (1, 2, 4, 5 in frames)
8. 6 → Page fault (replace 4) → (1, 2, 5, 6 in frames)
9. 2 → No page fault (2 is already in frames)
10. 1 → No page fault (1 is already in frames)
11. 2 → No page fault (2 is already in frames)
12. 3 → Page fault (replace 5) → (1, 2, 6, 3 in frames)
13. 7 → Page fault (replace 6) → (1, 2, 3, 7 in frames)
14. 6 → Page fault (replace 1) → (2, 3, 7, 6 in frames)
15. 3 → No page fault (3 is already in frames)
16. 2 → No page fault (2 is already in frames)
17. 1 → Page fault (replace 7) → (2, 3, 6, 1 in frames)
18. 2 → No page fault (2 is already in frames)
19. 3 → No page fault (3 is already in frames)
20. 6 → No page fault (6 is already in frames)
Total page faults for LRU = 10
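The LRU count can be verified the same way. In this illustrative sketch each frame records the time of its last reference, and on a fault the frame with the oldest time stamp is evicted; it prints 10 for the string above.

#include <stdio.h>

int main(void) {
    int ref[] = {1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frames[4] = {-1, -1, -1, -1};   /* -1 means the frame is empty */
    int last_used[4] = {0, 0, 0, 0};    /* time of last reference per frame */
    int faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1, victim = 0;
        for (int j = 0; j < 4; j++) {
            if (frames[j] == ref[t]) hit = j;
            if (frames[j] == -1) { victim = j; break; }        /* use an empty frame first */
            if (last_used[j] < last_used[victim]) victim = j;  /* otherwise the oldest use */
        }
        if (hit >= 0) {
            last_used[hit] = t + 1;     /* refresh recency on a hit */
        } else {
            frames[victim] = ref[t];    /* page fault: evict the LRU page */
            last_used[victim] = t + 1;
            faults++;
        }
    }
    printf("LRU page faults = %d\n", faults);   /* prints 10 */
    return 0;
}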
3. Optimal Page Replacement Algorithm:
The Optimal algorithm replaces the page that will not be used for the longest time in the
future.
1. 1 → Page fault (1 in frames)
2. 2 → Page fault (1, 2 in frames)
3. 3 → Page fault (1, 2, 3 in frames)
4. 4 → Page fault (1, 2, 3, 4 in frames)
5. 2 → No page fault (2 is already in frames)
6. 1 → No page fault (1 is already in frames)
7. 5 → Page fault (replace 4, which is never used again) → (1, 2, 3, 5 in frames)
8. 6 → Page fault (replace 5, which is never used again) → (1, 2, 3, 6 in frames)
9. 2 → No page fault (2 is already in frames)
10. 1 → No page fault (1 is already in frames)
11. 2 → No page fault (2 is already in frames)
12. 3 → No page fault (3 is already in frames)
13. 7 → Page fault (replace 1, whose next use is farthest in the future) → (7, 2, 3, 6 in frames)
14. 6 → No page fault (6 is already in frames)
15. 3 → No page fault (3 is already in frames)
16. 2 → No page fault (2 is already in frames)
17. 1 → Page fault (replace 7, which is never used again) → (1, 2, 3, 6 in frames)
18. 2 → No page fault (2 is already in frames)
19. 3 → No page fault (3 is already in frames)
20. 6 → No page fault (6 is already in frames)
Total page faults for Optimal = 8
Summary: FIFO = 14 page faults, LRU = 10 page faults, Optimal = 8 page faults.
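The Optimal count of 8 can be checked with a sketch that looks ahead in the reference string: on a fault it evicts the resident page whose next use is farthest away, or one that is never used again. Again, this is only an illustrative program, not part of the original answer.

#include <stdio.h>

int main(void) {
    int ref[] = {1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frames[4] = {-1, -1, -1, -1};
    int faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0, empty = -1;
        for (int j = 0; j < 4; j++) {
            if (frames[j] == ref[i]) hit = 1;
            if (frames[j] == -1 && empty == -1) empty = j;
        }
        if (hit) continue;
        faults++;
        if (empty != -1) { frames[empty] = ref[i]; continue; }

        /* choose the victim whose next reference is farthest in the future */
        int victim = 0, farthest = -1;
        for (int j = 0; j < 4; j++) {
            int next_use = n;                 /* n means "never used again" */
            for (int k = i + 1; k < n; k++)
                if (ref[k] == frames[j]) { next_use = k; break; }
            if (next_use > farthest) { farthest = next_use; victim = j; }
        }
        frames[victim] = ref[i];
    }
    printf("Optimal page faults = %d\n", faults);   /* prints 8 */
    return 0;
}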
3. Assume memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB. How would each of the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB? Which algorithm makes the most efficient use of memory?
Memory Partitions:
• 100 KB
• 500 KB
• 200 KB
• 300 KB
• 600 KB
Processes to Place:
• 212 KB
• 417 KB
• 112 KB
• 426 KB
1. First Fit Algorithm:
The First Fit algorithm places the process in the first partition that is large enough to
hold it.
Placement steps:
• 212 KB: First partition that fits is 500 KB. So, 212 KB is placed in 500 KB partition.
(288 KB remaining)
• 417 KB: First partition that fits is 600 KB. So, 417 KB is placed in the 600 KB
partition. (183 KB remaining)
• 112 KB: First partition that fits is 200 KB. So, 112 KB is placed in 200 KB partition.
(88 KB remaining)
• 426 KB: The remaining free partitions are 100 KB and 300 KB, and neither is large enough. So this process cannot be placed (it must wait).
2. Best Fit Algorithm:
The Best Fit algorithm places the process in the smallest available partition that can
accommodate it, minimizing leftover space.
Placement steps:
• 212 KB: The best fit is the 300 KB partition (remaining 88 KB).
• 417 KB: The best fit is the 500 KB partition (remaining 83 KB).
• 112 KB: The best fit is the 200 KB partition (remaining 88 KB).
• 426 KB: The best fit is the 600 KB partition (remaining 174 KB).
3. Worst Fit Algorithm:
The Worst Fit algorithm places the process in the largest available partition to
maximize the remaining free space.
Placement steps:
• 212 KB: The worst fit is the 600 KB partition (remaining 388 KB).
• 417 KB: The worst fit is the 500 KB partition (remaining 83 KB).
• 112 KB: The worst fit is the 300 KB partition (remaining 188 KB).
• 426 KB: The remaining free partitions are 100 KB and 200 KB, and neither is large enough. So this process cannot be placed (it must wait).
Conclusion: Best Fit makes the most efficient use of memory in this example, since it is the only algorithm that places all four processes; First Fit and Worst Fit each leave the 426 KB process waiting.
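For illustration, the placements above can be reproduced with the following C sketch, which makes the same assumption as this answer (fixed partitions, at most one process per partition). The names used here are invented for the example.

#include <stdio.h>

#define NP 5   /* number of partitions */
#define NR 4   /* number of process requests */

static void place(int strategy) {   /* 0 = first fit, 1 = best fit, 2 = worst fit */
    int part[NP] = {100, 500, 200, 300, 600};
    int used[NP] = {0};
    int req[NR]  = {212, 417, 112, 426};
    const char *name[] = {"First Fit", "Best Fit", "Worst Fit"};

    printf("%s:\n", name[strategy]);
    for (int i = 0; i < NR; i++) {
        int chosen = -1;
        for (int j = 0; j < NP; j++) {
            if (used[j] || part[j] < req[i]) continue;          /* occupied or too small */
            if (chosen == -1) { chosen = j; if (strategy == 0) break; }
            else if (strategy == 1 && part[j] < part[chosen]) chosen = j;  /* smallest fit */
            else if (strategy == 2 && part[j] > part[chosen]) chosen = j;  /* largest fit */
        }
        if (chosen == -1)
            printf("  %d KB: must wait (no free partition is large enough)\n", req[i]);
        else {
            used[chosen] = 1;
            printf("  %d KB -> %d KB partition (%d KB left over)\n",
                   req[i], part[chosen], part[chosen] - req[i]);
        }
    }
}

int main(void) {
    for (int s = 0; s < 3; s++)
        place(s);
    return 0;
}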
4. Describe various page replacement algorithms along with their merits & demerits.
Page replacement algorithms are used in virtual memory systems to decide which page
should be replaced when a page fault occurs and no free frames are available to load a
new page. Here’s a description of various page replacement algorithms, along with their
merits and demerits:
1. FIFO (First-In, First-Out)
Description:
FIFO is one of the simplest page replacement algorithms. It replaces the oldest page
(i.e., the one that has been in memory the longest) when a page fault occurs.
Merits:
• Simple to implement and understand.
• Requires minimal bookkeeping: only the order of pages needs to be tracked.
Demerits:
• Belady's Anomaly: FIFO can cause more page faults when the number of frames
increases, which is counterintuitive.
• Poor performance: The oldest page may not be the least useful page, leading to
inefficient memory usage.
• Does not consider the usage frequency or recency of pages.
2. Optimal Page Replacement (OPT)
Description:
The Optimal algorithm replaces the page that will not be used for the longest time in the
future. It provides the best possible performance but is impractical in real systems
because future page accesses cannot be predicted.
Merits:
• Provides the minimum number of page faults for a given reference string, as it
uses optimal knowledge of future references.
• Ideal for theoretical analysis and comparison of algorithms.
Demerits:
• Impossible to implement in real systems, as it requires future knowledge of page
references.
• Not practical for general use in real-time systems.
3. LRU (Least Recently Used)
Description:
LRU replaces the page that has not been used for the longest time. It assumes that
pages used recently are likely to be used again soon.
Merits:
• Efficient for real-world workloads: Tends to perform well in practice since it
closely approximates the Optimal algorithm.
• Simple to implement using stacks or counters to track the most recently used
pages.
Demerits:
• Overhead: Requires additional overhead to keep track of page access history,
which can be expensive in terms of time and space.
• Not optimal in all cases: Can still cause poor performance in certain access
patterns (e.g., cyclic patterns).
4. LFU (Least Frequently Used)
Description:
LFU replaces the page that has the least number of accesses over time. Pages with the
least frequency of access are considered less important and are replaced first.
Merits:
• Good for workloads with long-term patterns: Effective if pages with less
frequent access are rarely needed.
• Can be used to track long-term access behavior.
Demerits:
• Complex implementation: Requires maintaining a counter for the frequency of
each page, leading to significant overhead.
• Not effective for short-term patterns: It may keep pages in memory that are no
longer useful, if they were accessed frequently at some point in time.
5. Clock (Second Chance)
Description:
The Clock algorithm is a practical approximation of LRU. Pages are kept in a circular list with a "hand" pointer, and each page has a reference bit that is set to 1 whenever the page is accessed. On a page fault, the hand advances around the list: a page whose reference bit is 1 is given a "second chance" (its bit is cleared and the hand moves on), while the first page found with a reference bit of 0 is replaced.
Merits:
• Less overhead than LRU: It approximates LRU behavior with lower overhead.
• Simpler to implement than LRU, using a circular queue.
Demerits:
• The performance is not as good as LRU in some cases because it only
approximates LRU behavior.
• Reference bit management: It needs an additional reference bit for each page,
which adds some complexity.
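A compact sketch of the Clock mechanism is shown below; it is illustrative only, and the reference string and frame count are arbitrary examples. Each frame carries a reference bit, and the hand clears set bits (giving those pages a second chance) before choosing a victim whose bit is 0.

#include <stdio.h>

#define NFRAMES 3

int main(void) {
    int ref[] = {2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2};   /* example reference string */
    int n = sizeof(ref) / sizeof(ref[0]);
    int frames[NFRAMES] = {-1, -1, -1};   /* -1 = empty frame */
    int rbit[NFRAMES] = {0};              /* reference bit per frame */
    int hand = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = -1;
        for (int j = 0; j < NFRAMES; j++)
            if (frames[j] == ref[i]) { hit = j; break; }

        if (hit >= 0) {
            rbit[hit] = 1;                /* on a hit, set the reference bit */
            continue;
        }
        faults++;
        while (rbit[hand]) {              /* referenced pages get a second chance */
            rbit[hand] = 0;               /* clear the bit and advance the hand */
            hand = (hand + 1) % NFRAMES;
        }
        frames[hand] = ref[i];            /* replace the page whose bit was 0 */
        rbit[hand] = 1;                   /* newly loaded page counts as referenced */
        hand = (hand + 1) % NFRAMES;
    }
    printf("Clock page faults = %d\n", faults);
    return 0;
}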
6. Optimal (MIN) Algorithm
Description:
MIN is simply another name for the Optimal algorithm described above: on a fault it evicts the page whose next reference lies farthest in the future (or that will never be referenced again). Because future references are not known in practice, it is used only as a theoretical baseline against which other algorithms are compared.
Merits:
• Optimal performance: Gives the minimum number of page faults.
Demerits:
• Infeasible in practice: Requires knowledge of future references, making it
impractical for real-world systems.
7. FIFO with Aging
Description:
This variation of FIFO keeps, in addition to the arrival order, some reference (age) information for each page, and replaces the oldest page that has not been referenced recently rather than simply the oldest page.
Merits:
• More efficient than simple FIFO in some cases, as it tries to replace pages that
haven’t been used recently.
Demerits:
• More complex than simple FIFO and introduces additional bookkeeping
overhead.
• Still may suffer from poor performance in cases where FIFO has poor locality of
reference.
8. NRU (Not Recently Used)
Description:
NRU is a simpler version of LRU that uses two bits (referenced and modified) for each
page. Pages are replaced based on their reference status and whether they’ve been
modified. It is an approximation of LRU and less complex than implementing full LRU.
Merits:
• Simple to implement: Requires just two bits per page for reference and
modification status.
• Approximates LRU with less overhead.
Demerits:
• Coarse approximation: It doesn't fully capture recency of access and might
result in inefficient page replacement.
• Can be more prone to page faults in some access patterns compared to more
refined algorithms like LRU.
9. Second-Chance (Clock with Aging)
Description:
This is an improvement over the Clock algorithm, where pages get a second chance to
stay in memory. Each page gets a reference bit that is periodically reset, allowing the
system to keep track of recently used pages without using too much space or memory.
Merits:
• Less overhead: It provides a good balance between efficiency and complexity.
• Can be used in systems with constrained memory.
Demerits:
• Still has limitations compared to more advanced algorithms like LRU.
• Does not perfectly approximate LRU in all cases.
5. What is the effect of increasing the page size on the page fault rate?
The page size in a virtual memory system is an important factor that affects the page
fault rate. The page fault rate is the frequency at which the operating system has to load
pages into memory from secondary storage (like a hard disk or SSD) due to a page not
being in memory. The effect of increasing the page size on the page fault rate depends
on several factors, including the locality of reference (how frequently a program
accesses certain data) and the overall memory usage.
Here’s how increasing the page size affects the page fault rate:
1. Effect on Internal Fragmentation:
• Internal fragmentation occurs when the last portion of a page is unused after it
has been allocated to a process, wasting memory.
• Increasing the page size generally increases internal fragmentation because
larger pages mean more unused space in the last page when the process size
doesn't fit perfectly into the page boundaries.
• This increased fragmentation can lead to inefficient use of memory and may
indirectly cause an increase in the overall page fault rate, especially if the system
runs out of physical memory due to inefficient space usage.
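• For example, a 10 KB process with 4 KB pages needs 3 pages (12 KB) and wastes 2 KB in its last page, while with 16 KB pages it needs only 1 page and wastes 6 KB; on average roughly half a page per process is lost, so the waste grows with the page size.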
2. Effect on Page Fault Rate:
• Large pages mean that each page fault involves bringing in more data. If a
process accesses a large amount of data that fits within a page, it will result in
fewer page faults because more data is being fetched at once, reducing the
number of times a page needs to be loaded.
• However, larger pages also increase the amount of irrelevant data that is brought
into memory if only a small portion of the page is used. This leads to wasting
memory space and may cause higher page fault rates when there are multiple
processes competing for memory.
• Smaller pages bring in less data per page fault, but the system may experience
more frequent page faults because less data is fetched with each fault, which
could reduce the overall efficiency.
3. Effect on Locality of Reference:
• If the program exhibits good locality of reference, meaning it tends to access
data that is close to recently accessed data, larger pages might decrease the
page fault rate because more relevant data is brought in with each page fault.
• On the other hand, if the program has poor locality of reference (i.e., it frequently
accesses distant areas of memory), larger pages can increase the page fault rate
because more irrelevant data is loaded into memory, leading to higher overhead
and more frequent page swaps.
4. Effect on Translation Lookaside Buffer (TLB) Efficiency:
• The TLB is a small, fast cache that stores recently used page table entries. With a larger page size, each TLB entry maps a larger region of memory, so the TLB covers more of the address space (greater "TLB reach") and TLB misses tend to become less frequent.
• This benefit mainly reduces address-translation overhead rather than the page fault rate itself; however, if larger pages waste physical memory through internal fragmentation, fewer useful pages fit in RAM and the page fault rate can still increase indirectly.
5. Effect on Disk I/O:
• Larger pages reduce the number of I/O operations needed when a page fault
occurs because each page fault brings in more data. This reduces the overall
disk I/O time associated with loading data into memory.
• However, if page faults remain frequent because of the program's access pattern, or because larger pages waste memory and leave fewer frames available, the system can still spend significant time on disk I/O, even though each individual fault transfers more data in a single operation.
6. What is meant by a page fault? Will an increase in the degree of multiprogramming decrease page faults? Justify your answer.
A page fault occurs when a process attempts to access a page (part of its address
space) that is not currently loaded in physical memory (RAM). This happens when the
virtual memory system needs to load the requested page from secondary storage
(such as a hard disk or SSD) into physical memory. A page fault triggers an interrupt (a trap to the operating system), and the operating system handles it by finding a free frame in memory (or replacing an existing page using a page replacement algorithm), loading the required page from secondary storage into that frame, and then restarting the faulting instruction.
Degree of multiprogramming refers to the number of processes or tasks running
concurrently in the system. An increase in the degree of multiprogramming means that
more processes are loaded into memory and are executing simultaneously. The impact
of increasing the degree of multiprogramming on the page fault rate can be complex
and depends on various factors such as memory size, process behavior, and system
configuration.
The impact of an increased degree of multiprogramming on page faults is as follows:
1. Increase in Page Faults (Typically):
• More processes competing for memory: As the number of running processes
increases, the amount of available memory per process decreases. With more
processes in memory, there is less free memory available for each individual
process. This increases the chances that a process will not have the required
pages in memory, leading to a higher rate of page faults.
• Higher contention for physical memory: With more processes running, more
pages need to be loaded into physical memory. If the system’s memory capacity
remains the same, there will be more swapping of pages between disk and RAM,
causing an increase in page fault rate due to more frequent page replacements.
• Reduced locality of reference: When the number of processes increases, each
individual process might have fewer pages loaded in memory, causing it to
access more pages that are not currently in memory. This leads to an increase in
page faults. Moreover, as the number of processes increases, the locality of
reference (the tendency of processes to access a small set of pages) decreases,
which can further increase the page fault rate.
2. Possible Decrease in Page Faults (In Specific Cases):
• Improved memory utilization: In some cases, the system may benefit from
having more processes, especially if the processes have different memory
access patterns. If one process is not accessing memory frequently (e.g., CPU-
bound), the other processes may still be able to use the memory, effectively
improving overall memory utilization. In these cases, page faults might be less
frequent for some processes, but this effect is limited.
• Better utilization of memory: with more processes, some pages may remain resident because other processes are touching different pages, occasionally improving overall utilization. However, this scenario is uncommon and highly dependent on process characteristics.
Conclusion: In general, increasing the degree of multiprogramming does not decrease page faults. Once the combined working sets of the running processes exceed the available physical memory, the page fault rate rises sharply and the system can enter thrashing, where processes spend more time waiting for pages to be swapped in than doing useful work.
7. Describe the first-fit, best-fit, and worst-fit strategies of memory allocation. Which one of them suffers from the problem of external fragmentation, and why?
Memory Allocation Strategies
In memory management, when a process requests memory, the operating system
needs to allocate a block of memory from the available free space. Different strategies
are used to allocate memory, each with its own advantages and disadvantages. The
most common strategies are First-Fit, Best-Fit, and Worst-Fit.
1. First Fit: The process is allocated the first free partition (hole), scanning from the beginning of main memory, that is large enough to hold it. In other words, memory is scanned from the top and the first sufficiently large hole is allocated.
2. Best Fit: The process is allocated the smallest free partition that is large enough to hold it. The entire list of holes is searched to find the smallest hole whose size is greater than or equal to the size of the process.
3. Worst Fit: The process is allocated the largest free partition available in main memory. It is the opposite of the best-fit algorithm: the entire list of holes is searched to find the largest hole, which is then allocated to the process.
External fragmentation occurs when there is enough total free memory to satisfy a
process’s request, but the free memory is scattered in small, non-contiguous blocks.
This means that even though there may be free memory in the system, the operating
system cannot allocate a large enough block because the free space is fragmented into
smaller, disconnected chunks.
Which Strategy Suffers from External Fragmentation?
All three strategies — First-Fit, Best-Fit, and Worst-Fit — suffer from external
fragmentation, but to different extents and in different ways:
• First-Fit: Can cause external fragmentation because it might leave small,
scattered gaps between allocated blocks. These gaps may not be large enough
to accommodate new processes, even though the total free memory in the
system is sufficient.
• Best-Fit: This strategy can cause external fragmentation more severely because it
tries to use the smallest block possible, which often leaves many small unusable
blocks. Over time, these small fragments can accumulate and lead to inefficient use
of memory.
• Worst-Fit: Worst-Fit deliberately leaves the largest possible leftover hole so that it remains useful for later requests, but in doing so it quickly breaks up the big free blocks. Over time memory tends to be reduced to medium-sized holes, and a later large request may find no single hole big enough even though the total free memory would be sufficient.