
Operating system

Lectures
DRY principle → do not repeat yourself

✅ 2. Multiprogramming Operating System


📌 Concept:
Allows multiple programs to reside in memory simultaneously.

The CPU switches between them to optimize utilization.

⚙️ How it works:
When one program is waiting (e.g., for I/O), CPU switches to another.

Efficient use of CPU time and main memory.

✅ 3. Multitasking Operating System


📌 Concept:
Allows multiple tasks to run concurrently on a single CPU.

Each task gets a small slice of CPU time using time-sharing.

⚙️ How it works:
OS rapidly switches between tasks, giving the illusion of parallel execution.

User can, for example, type in a document while music plays.

✅ 4. Multiprocessing Operating System


📌 Concept:
Supports more than one processor (CPU) to execute multiple processes in
parallel.

⚙️ How it works:
Tasks are divided among processors.

Can either be symmetric (SMP) or asymmetric (AMP) multiprocessing.

During a thread switch, the cache is preserved because the next thread may need to use it; the process is the same anyway, so the cache should be kept.

Components of User Space

User space

No hardware access; it is a layer that provides convenience to the user via the GUI and the CLI.

What is a software interrupt?

In a microkernel, performance is lower because the system has to keep switching between user space and kernel space, i.e., overhead.

Interview questions:

How does communication happen between user mode and kernel mode?

IPC: inter-process communication

Shared memory between UM (user mode) and KM (kernel mode)

Message passing

How OS creates a PROCESS


📌 1. Request Initiation
A system call like fork() (UNIX/Linux) or CreateProcess() (Windows) is invoked.

This can happen when:

A user launches a program

A process spawns a child process

📌 2. Allocate PCB (Process Control Block)


OS creates a PCB, a data structure that stores:

Process ID

Registers

Program Counter

State

Memory limits

I/O information

📌 3. Assign Address Space


The OS allocates memory:

Copies program code and data into RAM

Sets up the stack, heap, and text segments

📌 4. Load Program into Memory


OS loads the executable file into the allocated memory space.

📌 5. Initialize CPU Registers


Program Counter is set to the entry point (start of program)

Other registers (stack pointer, etc.) are initialized

📌 6. Setup I/O and Resources


File descriptors, open files, and I/O buffers are initialized

📌 7. Add to Scheduler
Process state is set to "Ready"

It is added to the ready queue so the CPU can schedule it

📌 8. Context Switching
When the process is chosen by the CPU scheduler, it begins execution

OS performs context switch to transfer control

Why storage always in 32-64-128 ??


binary system

Standard che

Memory architecture

Addressing also easy easy

program to process ni journey → OS

5 steps

Process creation ( fork() )

Process identification ( getpid() )

Waiting for child process ( wait() )

Process termination
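A minimal C++ sketch of this journey on UNIX/Linux, using the fork(), getpid(), and wait() calls listed above (the printed messages are illustrative):

#include <sys/types.h>
#include <sys/wait.h>   // wait()
#include <unistd.h>     // fork(), getpid()
#include <cstdio>

int main() {
    pid_t pid = fork();                           // 1. process creation
    if (pid == 0) {
        // child process
        printf("child pid = %d\n", getpid());     // 2. process identification
        return 0;                                 // 4. process termination (child exits)
    }
    // parent process
    wait(nullptr);                                // 3. waiting for the child process
    printf("parent pid = %d, child %d finished\n", getpid(), pid);
    return 0;
}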

How does the OS differentiate processes?

The PCB contains all the information regarding a process; it is a kind of data structure.

Context switching is done by the kernel.

Producer Consumer

Most of the game in an OS revolves around reducing wasted CPU cycles, so it is safe to say that this work improves CPU utilization.

Study the reader-writer and producer-consumer problems properly, including the code and the functionality of semaphores, mutexes, and condition variables.

Lec 23 is still pending; it covers LeetCode questions, so understand it properly.

💡
SELF STUDY

🔒 What is a Mutex?
Mutex stands for Mutual Exclusion.
It is a lock that allows only one thread to enter a critical section at a time.

✅ Example Use Case:


You want to ensure only one thread modifies a shared variable at any moment.

🔧 Working:
lock() – A thread locks the mutex before entering the critical section.

unlock() – It unlocks the mutex after it's done.

If another thread tries to lock() it while it's already locked — it will wait (block).
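A minimal C++ sketch of the use case above, assuming a shared counter protected by std::mutex (the variable names are illustrative):

#include <mutex>
#include <thread>
#include <cstdio>

std::mutex m;
int shared_counter = 0;       // the shared variable we want to protect

void worker() {
    for (int i = 0; i < 100000; ++i) {
        m.lock();             // enter the critical section
        ++shared_counter;     // only one thread modifies it at a time
        m.unlock();           // leave the critical section
    }
}

int main() {
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    printf("counter = %d\n", shared_counter);   // always 200000
}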

🚦 What is a Semaphore?
A semaphore is a signaling mechanism. It can be binary (like a mutex) or
counting (allowing more than one thread).

✅ Example Use Case:
You have 5 identical printers, and multiple threads want to print. You want to
allow only 5 threads to access them simultaneously.

🔧 Working:
Has an integer counter.

wait() (also called P() or down or acquire() ) — Decrements the counter. If counter
is 0, it blocks.

signal() (also called V() or up or release ) — Increments the counter and wakes
waiting threads.

wait, P and acquire are used alternatively similarly signal, V and release.

🔁 Code Logic:

#include <semaphore>   // C++20

std::counting_semaphore<5> sem(5); // allows 5 threads

void print_job() {
    sem.acquire();   // wait
    // access printer
    sem.release();   // signal
}

✅ What is a Condition Variable?


A condition variable is a synchronization tool used to wait for some condition to
become true.
It’s used with a mutex, and it allows one or more threads to:

wait (sleep/block) until another thread notifies them that something has
changed.

🔁 Real-Life Analogy
Imagine a classroom:

Students (threads) are waiting for the teacher (condition) to say: “The exam is
over!”

Until then, they sit quietly (blocked).

When the teacher says “exam is over” (notify), they all leave (continue
execution).

🔧 Why & When is it Used?


Sometimes, threads need to wait for a specific condition to be true (e.g., data is
ready, queue is not empty, etc.).
Rather than continuously checking (which wastes CPU), they wait efficiently
using condition variables.
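A minimal C++ sketch of waiting efficiently for a condition (here a hypothetical data_ready flag), using std::condition_variable together with a mutex as described above:

#include <condition_variable>
#include <mutex>
#include <thread>
#include <cstdio>

std::mutex m;
std::condition_variable cv;
bool data_ready = false;        // the condition being waited on

void consumer() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return data_ready; });  // sleeps; no busy-waiting
    printf("data is ready, continuing\n");
}

void producer() {
    {
        std::lock_guard<std::mutex> lk(m);
        data_ready = true;                   // change the condition
    }
    cv.notify_one();                         // wake the waiting thread
}

int main() {
    std::thread t1(consumer), t2(producer);
    t1.join();
    t2.join();
}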

What is the difference between primary memory and main memory?

✅ Benefits of ASID:

✔️ Memory protection: prevents one process from using another's TLB entry.

✔️ Fast context switching: no need to flush the TLB during every switch.

✔️ Efficient multi-process support: the TLB can store entries from multiple processes safely.

🔁 Without ASID:
OS would have to flush (clear) the entire TLB during every context switch →
slow and inefficient.

🧠 With ASID:
TLB can keep entries for multiple processes.

Only use entries that match the current ASID.

✅ TLB (Translation Lookaside Buffer) in Operating System:
TLB (Translation Lookaside Buffer) is a small, fast, hardware cache in the MMU
(Memory Management Unit) that stores recent translations of virtual addresses
to physical addresses.

🧠 Why TLB?
Accessing the page table in RAM for every memory reference is slow.
So, TLB helps by storing recently used virtual-to-physical address mappings for
fast retrieval.

📌 How it works:
1. CPU generates a virtual address.

2. TLB is checked:

✅ Hit: Physical address is returned instantly.


❌ Miss: Page table is accessed → mapping is added to TLB.
🔄 TLB acts like a cache for the page table.
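A toy C++ model of this hit/miss flow (not real MMU code; the TLB is modelled as a map from virtual page number to frame, and page_table_lookup() is an assumed stand-in for the slow page-table walk):

#include <unordered_map>
#include <cstdint>

std::unordered_map<uint64_t, uint64_t> tlb;   // VPN -> frame number

// Stand-in for the slow path: walking the page table in RAM.
uint64_t page_table_lookup(uint64_t vpn) { return vpn + 100; }

uint64_t translate(uint64_t vaddr) {
    const uint64_t page_size = 4096;
    uint64_t vpn = vaddr / page_size;
    uint64_t offset = vaddr % page_size;

    auto it = tlb.find(vpn);
    if (it == tlb.end()) {                      // ❌ TLB miss
        uint64_t frame = page_table_lookup(vpn);
        it = tlb.emplace(vpn, frame).first;     // add mapping to the TLB
    }
    // ✅ TLB hit (or freshly cached): physical address returned instantly
    return it->second * page_size + offset;
}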
📦 What is Segmentation in OS?
Segmentation is a memory management technique where the logical address
space of a process is divided into variable-sized segments, based on the logical
divisions of a program (like functions, data, stack, etc.)

🧠 Why Use Segmentation?


Unlike paging (which divides memory into fixed-size pages), segmentation
divides memory into parts that make logical sense to the programmer.

For example:

Code segment

Data segment

Stack segment

Heap segment

This helps in:

Better organization

Protection

Flexibility

🧠 What is a Livelock?
A livelock is a situation in concurrent programming where two or more processes
(or threads) are not blocked, but they keep reacting to each other in a way that
prevents progress.
So:

The processes are not dead (like in deadlock).

They are still running and actively doing something.

But they keep interfering with each other, so no useful work is completed.

🔁 Difference from Deadlock

State: in a deadlock, processes are blocked; in a livelock, processes are running.

Progress: no progress in either case.

Activity: deadlocked processes are inactive; livelocked processes are active but useless.

Looks like: a deadlock looks frozen; a livelock looks like a busy loop.

📘 Real-Life Analogy
Imagine two people trying to pass each other in a hallway:

Both move to the left → block.

Then both move to the right → block again.

They keep moving, trying to be polite, but never pass each other.

That’s a livelock.

@2nd Time

Types of Threads

User-level and kernel-level threads

🖨️ What is Spooling? (remember the printer example)

Spooling (Simultaneous Peripheral Operations OnLine) is a technique where I/O operations (like printing or disk writes) are not performed directly but are first written to a secondary storage area or buffer (like a disk), and then processed sequentially later.

📌 Example:
When you print a file, the data is not sent directly to the printer.

It is first saved in a spool (queue/file) on the disk.

The printer then picks up jobs one by one and prints them.

✅ Benefits:
Allows multiple processes to submit print jobs at the same time.

Avoids making a process wait for slow I/O devices.

Handles device sharing efficiently.

🧺 What is Buffering?
Buffering is the technique of using a temporary memory area (buffer) to hold
data while it is being transferred between two devices/processes with different
speeds.

📌 Example:
While watching a video online, data is downloaded in the buffer before being
played.

It allows smooth playback, even if the network speed varies.

✅ Benefits:
Compensates for speed mismatch between producer and consumer.

Reduces waiting time for processes.

Allows overlapping of computation and I/O.

🧠 Key Differences:

Purpose: spooling queues jobs for slow devices (like a printer); buffering temporarily holds data during transfer.

Storage used: spooling uses disk (usually); buffering uses main memory (RAM).

Works with: spooling handles entire jobs; buffering handles data chunks/streams.

Device sharing: spooling supports multiple jobs from multiple users; buffering typically handles one job at a time.

Example: print spooling vs. video streaming or keyboard input buffering.

📝 Summary:
Spooling is like a to-do list for devices: jobs are stored and processed one by
one.

Buffering is like a holding area to smooth out the flow between fast and slow
components.

🏗️ Memory Hierarchy (Top to Bottom in Speed)


CPU Registers (fastest)

L1/L2/L3 Cache (hardware cache) // L1 is fastest among all

RAM (Main Memory)

Disk Cache / Swap (virtual memory)

Disk / SSD

Nano Kernel vs. Exo Kernel

Goal: a nano kernel minimizes the kernel to bare essentials; an exo kernel exposes hardware to applications efficiently.

Services: a nano kernel provides only basic scheduling & interrupts; an exo kernel provides no abstractions, just secure multiplexing.

Abstractions: provided by user-level systems (nano); left to the application to define (exo).

Performance: moderate, modular (nano); very high, no forced abstraction (exo).

Usage: embedded, RTOS base (nano); research and custom high-performance systems (exo).

🧠 Summary Diagram:
Power On

CPU starts & looks for BIOS/UEFI

BIOS/UEFI runs POST (hardware test)

BIOS/UEFI looks for bootloader in MBR (BIOS) or EFI partition (UEFI)

Bootloader found and executed (GRUB, Bootmgr, boot.efi)

Bootloader loads OS kernel into RAM

OS takes over — system is ready

The details are in the GPT answer; look them up there.

🧩 1. Process Concurrency
🔹 Definition:
Concurrency refers to the execution of multiple processes or threads at the
same time (or seemingly at the same time, via interleaving on a single CPU).

Even if the system has only one CPU, it rapidly switches between processes,
giving the illusion that they are running in parallel.

🧠 Goal:
Improve CPU utilization

Enable parallelism where possible

Make systems responsive and efficient

🛠️ Example:
A music app playing songs while you browse files — both processes are
concurrent.

🧩 2. Synchronization
🔹 Definition:
Synchronization ensures that multiple concurrent processes/threads don’t
interfere with each other, especially when accessing shared resources (e.g.,
variables, memory, files).
Without synchronization, you may face issues like:

Race conditions

Inconsistent data

Deadlocks or data corruption

🧠 Goal:
Ensure data consistency

Coordinate execution order

Manage critical sections

🔐 Tools Used:
Mutexes

Semaphores

Monitors

Condition variables
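As a small illustration of these tools working together, here is a hedged C++ sketch of a bounded buffer (producer-consumer) protected by a mutex and two condition variables; the buffer capacity and item values are illustrative:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <cstdio>

std::mutex m;
std::condition_variable not_full, not_empty;
std::queue<int> buffer;
const size_t CAPACITY = 5;            // illustrative buffer size

void produce(int item) {
    std::unique_lock<std::mutex> lk(m);
    not_full.wait(lk, [] { return buffer.size() < CAPACITY; });
    buffer.push(item);                // critical section: the shared queue
    not_empty.notify_one();
}

int consume() {
    std::unique_lock<std::mutex> lk(m);
    not_empty.wait(lk, [] { return !buffer.empty(); });
    int item = buffer.front();
    buffer.pop();
    not_full.notify_one();
    return item;
}

int main() {
    std::thread p([] { for (int i = 0; i < 10; ++i) produce(i); });
    std::thread c([] { for (int i = 0; i < 10; ++i) printf("%d\n", consume()); });
    p.join();
    c.join();
}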

Some modifications in wait and signal, based on page 23, points g and h.
Suppose we have a semaphore S=0 and two processes:

1. P1 does wait(S) — it sees S=0 , so it:

Can’t proceed.

Goes to Waiting state via block() .

Waits in S’s queue.

2. P2 later does signal(S) :

It increases S to 1.

Sees that someone is waiting, so it:

Calls wakeup() , moves P1 from Waiting → Ready queue.

Now the scheduler can choose to run P1 .
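A user-space C++ sketch of this blocking behaviour, where the kernel's block()/wakeup() are approximated with a condition variable (a minimal illustration, not actual kernel code):

#include <condition_variable>
#include <mutex>

class Semaphore {
    std::mutex m;
    std::condition_variable cv;
    int value;
public:
    explicit Semaphore(int initial) : value(initial) {}

    void wait() {                                    // P() / down / acquire
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return value > 0; });   // block() while S == 0
        --value;
    }

    void signal() {                                  // V() / up / release
        std::lock_guard<std::mutex> lk(m);
        ++value;                                     // increase S
        cv.notify_one();                             // wakeup(): Waiting -> Ready
    }
};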

Give some benefits of multithreaded programming?


🚀 Improved performance through concurrent or parallel execution
💾 Shared memory between threads enables faster data access and
communication

🖥️ Better CPU utilization, especially during I/O operations


🎯 Responsiveness in interactive applications (e.g., UI stays smooth)
📈 Scalability on multi-core systems
🧩 Simplified code structure by separating tasks into threads
🔄 Faster context switching compared to full processes
⚙️ Efficient resource usage, since threads share process-level resources
What are RAID levels?

✅ Banker’s Algorithm (Deadlock Avoidance Algorithm)


Banker’s Algorithm is a deadlock avoidance algorithm used in operating systems.
It ensures that a system only enters safe states where deadlocks cannot occur.

23. What are overlays?

The concept of overlays is that a running process will not use the complete program at the same time; it uses only some part of it. The overlay concept says: load only the part you currently require, and once that part is done, unload it (pull it back) and load the new part you require and run it. Formally, “The process of transferring a block of program code or other data into internal memory, replacing what is already stored”.

✅ Fragmentation in Operating Systems


Fragmentation is a condition in memory management where free memory is
broken into small, non-contiguous blocks, making it difficult to allocate large
continuous chunks to processes — even though total free space is enough.

Reader and writer problem code:

semaphore mutex = 1;      // For readCount access
semaphore writeLock = 1;  // For writer access
int readCount = 0;        // Number of active readers

// For the reader
Reader() {
    wait(mutex);              // Lock readCount
    readCount++;
    if (readCount == 1) {
        wait(writeLock);      // First reader locks out writers
    }
    signal(mutex);            // Unlock readCount

    // --- Reading section ---
    // Read the shared data

    wait(mutex);              // Lock readCount
    readCount--;
    if (readCount == 0) {
        signal(writeLock);    // Last reader unlocks writers
    }
    signal(mutex);            // Unlock readCount
}

// For the writer
Writer() {
    wait(writeLock);          // Lock out readers and other writers

    // --- Writing section ---
    // Write to the shared data

    signal(writeLock);        // Release the lock
}

28. What is the Direct Access Method?


The direct access method is based on a disk model of a file, where the file is viewed as a numbered sequence of blocks or records. It allows arbitrary blocks to be read or written. Direct access is advantageous when accessing large amounts of information. Direct memory access (DMA) is a method that allows an input/output (I/O) device to send or receive data directly to or from the main memory, bypassing the CPU to speed up memory operations. The process is managed by a chip known as a DMA controller.

GPT ans:
DMA (Direct Memory Access) is a technique that allows hardware devices (like
disk drives, network cards, sound cards, etc.) to transfer data directly to/from
memory without CPU involvement.
Normally, the CPU handles all data transfers between devices and memory. But
this can slow down the system. DMA solves this by offloading data transfer tasks
from the CPU.

How the processor affects performance:

Clock speed: faster instruction processing.

Number of cores: better multitasking and parallel execution.

Cache size: faster access to data, reduces wait time.

Architecture: efficient processing and support for modern features.

Thread support: smoother multitasking.

Heat management: sustained performance under load without thermal throttling.

What is multitasking?
Multitasking is the ability of an operating system to execute multiple tasks
(processes or programs) simultaneously. The CPU switches between tasks so
quickly that it gives the illusion that all tasks are running at the same time — even
on a single-core processor.

⚙️ How It Works:
OS uses a scheduler to switch between tasks.

Context switching saves and restores the state of processes.

Time slicing gives each task a small slice of CPU time to run.

34. What is the functionality of an Assembler?


The Assembler is used to translate the program written in Assembly language into
machine code. The source program is an input of an assembler that contains
assembly language instructions. The output generated by the assembler is the
object code or machine code understandable by the computer.

42. What are the different IPC mechanisms?

These are the methods of IPC:

Pipes (same process origin): This allows a flow of data in one direction only, analogous to a simplex system (like a keyboard). Output data is usually buffered until the receiving process reads it; the communicating processes must have a common origin (see the sketch after this list).

Named pipes (different processes): This is a pipe with a specific name; it can be used by processes that don't share a common process origin, e.g., a FIFO, where the pipe is first given a name and data is then written to it.

Message queuing: This allows messages to be passed between processes using either a single queue or several message queues. It is managed by the system kernel; the messages are coordinated using an API.

Semaphores: These are used to solve synchronization problems and avoid race conditions. They are integer values that are greater than or equal to 0.

Shared memory: This allows the interchange of data through a defined area of memory. Semaphore values have to be obtained before data in shared memory can be accessed.

Sockets: This method is mostly used to communicate over a network between a client and a server. It allows for a standard connection which is computer- and OS-independent.
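A hypothetical minimal POSIX demo of the pipe mechanism from the first item above (parent writes, child reads; the message text is illustrative):

#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                      // child: reader
        close(fd[1]);                    // close the unused write end
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
    } else {                             // parent: writer
        close(fd[0]);                    // close the unused read end
        const char *msg = "hello via pipe";
        write(fd[1], msg, strlen(msg));  // one-directional flow
        close(fd[1]);
        wait(nullptr);                   // reap the child
    }
    return 0;
}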

53. What is Cycle Stealing?
Cycle stealing is a method of accessing computer memory (RAM) or bus without
interfering with the CPU. It is similar to direct memory access (DMA) for allowing
I/O controllers to read or write RAM without CPU intervention.

59. What is a critical section?

A critical section is a group of instructions/statements or a region of code that needs to be executed atomically, such as code accessing a resource (file, input or output port, global data, etc.). When more than one process accesses the same code segment, that segment is known as the critical section. The critical section contains shared variables or resources which need to be synchronized to maintain the consistency of data variables.

Are threads independent or not?

user level threads vs kernel level threads

✅ Advantages of Multithreading:

🔄 Concurrent Execution: Multiple threads run in parallel, improving performance and responsiveness.

⚙️ Efficient CPU Utilization: Threads make better use of the CPU, especially during I/O waits.

🧠 Faster Context Switching: Switching between threads is faster than between processes because threads share the same memory.

💾 Resource Sharing: Threads within the same process share code, data, and resources, reducing overhead.

🖥️ Improved Application Responsiveness: In GUI-based applications, background tasks (like loading or downloading) can run without freezing the UI.

📈 Scalability: Programs can scale with multi-core processors by running threads on separate cores.

🧩 Simpler Program Design: Complex tasks can be broken into smaller, manageable threads (e.g., one thread for input, one for processing, one for output).

❌ Drawbacks of Semaphores:

🧠 Risk of Deadlock: Incorrect use (e.g., forgetting to release a semaphore) can cause deadlocks, where processes wait forever for each other.

🔁 Busy Waiting (in spinlocks): Some types of semaphores (like spinlocks) use busy waiting, which wastes CPU cycles.

❗ Priority Inversion: A lower-priority thread holding a semaphore can block a higher-priority thread, leading to unintended delays.

🔁 Starvation: If not managed carefully, some threads might never get access to the critical section (e.g., if others keep acquiring the semaphore).

66. Define the term bounded waiting.

A system is said to satisfy the bounded waiting condition if a process that wants to enter its critical section is guaranteed to enter it within some finite time.

71. What are the necessary conditions which can lead to a deadlock in a system?

Mutual Exclusion: There is a resource that cannot be shared.

Hold and Wait: A process is holding at least one resource and waiting for another resource, which is held by some other process.

No Preemption: The operating system is not allowed to take a resource back from a process until the process gives it back.

Circular Wait: A set of processes are waiting for each other in circular form.

72. What are the issues related to concurrency?

Non-atomic: Operations that are non-atomic but interruptible by multiple processes can cause problems.

Race conditions: A race condition occurs if the outcome depends on which of several processes gets to a point first.

Blocking: Processes can block waiting for resources. A process could be blocked for a long period of time waiting for input from a terminal. If the process is required to periodically update some data, this would be very undesirable.

Starvation: It occurs when a process does not obtain the service it needs to progress.

Deadlock: It occurs when two processes are blocked and hence neither can proceed to execute.

Precedence graphs?
→ shows the order of execution of processes.

74. Explain the resource allocation graph?

The resource allocation graph (RAG) shows us the state of the system in terms of processes and resources. One of the advantages of having such a diagram is that sometimes it is possible to see a deadlock directly by using the RAG.

76. What is the goal and functionality of memory management?


The goal and functionality of memory management are as follows;

Relocation

Protection

Sharing

Logical organization

Physical organization

✅ What is Address Binding in Operating Systems?


Address Binding is the process of mapping logical (virtual) addresses to physical
addresses in memory.

📌 Why Address Binding?


Programs are written using logical addresses (like variable names or abstract
memory locations).
These must be bound to physical addresses (actual memory locations in RAM)
before or during execution.

🧠 Types of Address Binding:


1. Compile-Time Binding

Binding happens during compilation.

Physical address must be known in advance.

Used in embedded or simple systems (no relocation).

2. Load-Time Binding

Binding happens when the program is loaded into memory.

Allows loading the program into any location in memory.

Common in general-purpose OS.

3. Execution-Time Binding

Binding happens while the program is running.

Requires hardware support (like MMU).

Allows for dynamic relocation (used in modern OS with virtual memory).
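A toy C++ model of execution-time binding with MMU support, where a base (relocation) register is added to each logical address and checked against a limit register (the register values in the usage note are illustrative assumptions):

#include <cstdint>
#include <stdexcept>

struct Mmu {
    uint32_t base;    // where the process was loaded in physical memory
    uint32_t limit;   // size of the process's logical address space

    uint32_t translate(uint32_t logical) const {
        if (logical >= limit)                          // protection check
            throw std::out_of_range("addressing error: trap to OS");
        return base + logical;  // binding happens at each access (dynamic relocation)
    }
};

// Usage: Mmu mmu{0x40000, 0x10000}; mmu.translate(0x1234) yields 0x41234.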

4. What is the RAID structure in OS? What are the different levels of RAID configuration?

RAID (Redundant Array of Independent Disks) is a method used to store data on multiple hard disks; it is therefore considered a data storage virtualization technology that combines multiple hard disks. It balances data protection, system performance, storage space, etc. It is used to improve the overall performance and reliability of data storage. It also increases the storage capacity of the system, and its main purpose is to achieve data redundancy to reduce data loss.

8. What is a bootstrap program in OS?

It is a program that initializes the OS during startup, i.e., the first code that is executed whenever a computer system starts up. The OS is loaded through a bootstrapping process or program, commonly known as booting. Overall, the OS depends entirely on the bootstrap program to perform and work correctly. It is fully stored in boot blocks at a fixed location on the disk. It also locates the kernel and loads it into main memory, after which the program starts its execution.

Stack: It is used for local variables, function arguments, and return values.

Heap: It is used for dynamic memory allocation.

Data: It stores global and static variables.

Code or text: It comprises compiled program code.

Transitions from user space to kernel space are done by software interrupts.

System Calls are the only way through which a process can go into kernel mode
from user mode.

The bootloader is a small program that has the large task of booting the rest of the operating system (it boots the kernel, then user space). Windows uses a bootloader named Windows Boot Manager (Bootmgr.exe), most Linux systems use GRUB, and Macs use something called boot.efi.

How does the OS create a process? Converting a program into a process.

STEPS:

a. Load the program & static data into memory.
b. Allocate the runtime stack.
c. Allocate heap memory.
d. Set up I/O tasks.
e. The OS hands off control to the main() function.

Registers in the PCB: the PCB is a data structure. When a process is running and its time slice expires, the current values of the process-specific registers are stored in the PCB and the process is swapped out. When the process is scheduled to run again, the register values are read from the PCB and written to the CPU registers. This is the main purpose of the registers in the PCB.

The LTS (long-term scheduler) controls the degree of multi-programming.

Dispatcher: The module of the OS that gives control of the CPU to the process selected by the STS (short-term scheduler).

Swapping of processes is done by the MTS (medium-term scheduler).

Convoy effect: if one process has a long burst time (BT), it will have a major effect on the average waiting time (WT) of the other processes; this is called the convoy effect. It is mainly seen in the FCFS algorithm.
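A quick worked example with illustrative numbers: under FCFS, if burst times 20, 1, and 1 arrive in that order, the waiting times are 0, 20, and 21, so the average WT is (0 + 20 + 21) / 3 ≈ 13.7; if the long job ran last instead, the average WT would be (0 + 1 + 2) / 3 = 1.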

In priority scheduling, aging is used → in the pre-emptive case.

Multilevel Queue
The ready queue is divided into multiple queues depending upon priority:

1. System processes

2. Interactive processes

3. Batch processes

Multilevel feedback queue
Allows processes to move between the sub-queues.

If a process uses too much CPU time, it is moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues.

Peterson’s solution can be used to avoid race conditions, but it holds good for only 2 processes/threads (a sketch follows below).
Contention: once one thread has acquired the lock, the other threads will be busy waiting. What if the thread that acquired the lock dies? Then all the other threads will be waiting forever.
Contention is not present with condition variables because there is no busy waiting there.
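A minimal C++ sketch of Peterson's solution for two threads (ids 0 and 1); std::atomic is used here so the compiler and CPU do not reorder the flag/turn accesses that the classic textbook version takes for granted:

#include <atomic>

std::atomic<bool> flag[2] = {false, false}; // flag[i]: thread i wants to enter
std::atomic<int>  turn{0};                  // whose turn it is to yield

void lock(int id) {
    int other = 1 - id;
    flag[id] = true;     // I want to enter the critical section
    turn = other;        // but politely let the other thread go first
    while (flag[other] && turn == other) {
        // busy-wait: this is where contention happens
    }
}

void unlock(int id) {
    flag[id] = false;    // leave the critical section
}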

Deadlock prevention: go through it from the PDF, it is important.

Banker's Algorithm
When a process requests a set of resources, the system must determine whether allocating these resources will leave the system in a safe state. If yes, then the resources may be allocated to the process. If not, then the process must wait until other processes release enough resources.
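A hedged C++ sketch of the safety check at the heart of this algorithm (the matrix layout follows the standard textbook formulation; Need = Max - Allocation is assumed to be precomputed):

#include <vector>
using std::vector;

// Returns true if a safe sequence exists, given the current Available
// vector and the per-process Allocation and Need matrices.
bool isSafe(vector<int> available,
            const vector<vector<int>> &allocation,
            const vector<vector<int>> &need) {
    int n = allocation.size();          // number of processes
    int m = available.size();           // number of resource types
    vector<bool> finished(n, false);

    for (int done = 0; done < n; ) {
        bool progressed = false;
        for (int p = 0; p < n; ++p) {
            if (finished[p]) continue;
            bool canRun = true;
            for (int r = 0; r < m; ++r)
                if (need[p][r] > available[r]) { canRun = false; break; }
            if (canRun) {               // pretend p runs to completion
                for (int r = 0; r < m; ++r)
                    available[r] += allocation[p][r];  // p releases its resources
                finished[p] = true;
                ++done;
                progressed = true;
            }
        }
        if (!progressed) return false;  // no process can finish -> unsafe
    }
    return true;                        // a safe sequence exists
}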

Free List

First fit, next fit, best fit, worst fit

Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous.
In segmentation, a process is divided into variable-sized segments based on the user's view.
Paging is closer to the operating system rather than to the user.

Demand paging: when a page fault occurs, page replacement algorithms handle it.
A lazy swapper loads a page only when it is needed.
Since we are viewing a process as a sequence of pages, rather than one large contiguous address space, the term "swapper" is technically incorrect. A swapper manipulates entire processes, whereas a pager is concerned with individual pages of a process.

If a page fault occurs, the OS raises a trap to manage the page access from swap space, and after the page is brought back into memory, the process continues its execution.

Advantages of Virtual Memory

a. The degree of multi-programming is increased.
b. Users can run large apps with less real physical memory.

12. Disadvantages of Virtual Memory

a. The system can become slower, as swapping takes time.
b. Thrashing may occur.

Belady’s anomaly:

In the case of the LRU and optimal page replacement algorithms, the number of page faults is reduced if we increase the number of frames. However, Belady found that, in the FIFO page replacement algorithm, the number of page faults can increase as the number of frames increases (demonstrated in the sketch below).
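A small C++ demo of the anomaly using FIFO replacement and the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5: with 3 frames it produces 9 faults, but with 4 frames it produces 10.

#include <deque>
#include <unordered_set>
#include <vector>
#include <cstdio>

// Counts page faults under FIFO replacement for a given number of frames.
int fifoFaults(const std::vector<int> &refs, size_t frames) {
    std::deque<int> fifo;                 // arrival order of resident pages
    std::unordered_set<int> resident;     // fast membership check
    int faults = 0;
    for (int page : refs) {
        if (resident.count(page)) continue;   // hit
        ++faults;                             // miss
        if (fifo.size() == frames) {          // evict the oldest page
            resident.erase(fifo.front());
            fifo.pop_front();
        }
        fifo.push_back(page);
        resident.insert(page);
    }
    return faults;
}

int main() {
    std::vector<int> refs = {1,2,3,4,1,2,5,1,2,3,4,5};
    printf("3 frames: %d faults\n", fifoFaults(refs, 3)); // 9
    printf("4 frames: %d faults\n", fifoFaults(refs, 4)); // 10
}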
