Fcfs Algorithm

The document outlines various CPU scheduling algorithms including First-Come, First-Served, Shortest Job First, Priority Scheduling, Round Robin, and Multilevel Feedback Queue Scheduling, detailing their processes, calculations for turnaround time (TAT), waiting time, and Gantt charts. Additionally, it discusses synchronization issues in concurrent programming, including semaphores and classical synchronization problems like the Producer-Consumer, Dining Philosophers, Readers-Writers, and Sleeping Barber problems. Finally, it covers deadlock avoidance techniques such as the Banker's Algorithm and resource request handling.

1. First-Come, First-Served (FCFS) Scheduling

Mode: Non-Preemptive / Criterion: Arrival time
1. Calculate the total TAT & average TAT of each process.
2. Calculate the total waiting time & average waiting time of each process.
EX:
PROCESS   Arrival Time   Burst Time   Compl. Time   TAT (CT-AT)   WT (TAT-BT)
P1        0              5            5             5             0
P2        1              3            8             7             4
P3        2              8            16            14            6
P4        3              6            22            19            13
First we draw the Gantt chart for the processes:

| P1 | P2 | P3 | P4 |
0    5    8    16   22

Total TAT = 45
Average TAT = 45/4 = 11.25
Total Waiting Time = 23
Average Waiting Time = 23/4 = 5.75
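The same table can be checked with a short program. Below is a minimal sketch in C, assuming the four processes above arrive already sorted by arrival time; it computes completion time, TAT and waiting time for non-preemptive FCFS and prints the averages (11.25 and 5.75).

#include <stdio.h>

int main(void)
{
    int at[] = {0, 1, 2, 3};          /* arrival times  */
    int bt[] = {5, 3, 8, 6};          /* burst times    */
    int n = 4;
    int time = 0, total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {     /* processes already sorted by arrival */
        if (time < at[i]) time = at[i];
        time += bt[i];                /* completion time of P(i+1) */
        int tat = time - at[i];       /* TAT = CT - AT */
        int wt  = tat - bt[i];        /* WT  = TAT - BT */
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, time, tat, wt);
        total_tat += tat;
        total_wt  += wt;
    }
    printf("Avg TAT = %.2f, Avg WT = %.2f\n",
           (double)total_tat / n, (double)total_wt / n);
    return 0;
}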

2. Shortest-Job-First (SJF) Scheduling

Mode: Non-Preemptive / Criterion: Burst time
EX:
PROCESS   Arrival Time   Burst Time   Compl. Time   TAT (CT-AT)   WT (TAT-BT)
P1        0              8            8             8             0
P2        1              4            12            11            7
P3        2              9            32            30            21
P4        3              5            17            14            9
P5        4              6            23            19            13
First we draw the Gantt chart for the processes:

| P1 | P2 | P4 | P5 | P3 |
0    8    12   17   23   32

Total TAT = 82
Average TAT = 82/5 = 16.4
Total Waiting Time = 50
Average Waiting Time = 50/5 = 10

3. Priority Scheduling Algorithm

Mode: Non-Preemptive / Criterion: Priority (lower number = higher priority)
EX:
Priority   PROCESS   Arrival Time   Burst Time   Compl. Time   TAT (CT-AT)   WT (TAT-BT)
5          P1        0              9            9             9             0
3          P2        1              4            25            24            20
1          P3        2              5            14            12            7
2          P4        3              7            21            18            11
4          P5        4              3            28            24            21
First we draw the Gantt chart for the processes:

| P1 | P3 | P4 | P2 | P5 |
0    9    14   21   25   28

Total TAT = 87
Average TAT = 87/5 = 17.4
Total Waiting Time = 59
Average Waiting Time = 59/5 = 11.8

4. Round Robin Scheduling

Mode: Preemptive / Criteria: Time Quantum + Arrival time
EX: TQ = 2 ms
PROCESS   Arrival Time   Burst Time (remaining)   Compl. Time   TAT (CT-AT)   WT (TAT-BT)
P1        0              5 > 3 > 1                12            12            7
P2        1              4 > 2 > 0                11            10            6
P3        2              2 > 0                    6             4             2
P4        4              1 > 0                    9             5             4

Ready Queue: P1, P2, P3, P1, P4, P2, P1

Gantt chart:
| P1 | P2 | P3 | P1 | P4 | P2 | P1 |
0    2    4    6    8    9    11   12

Total TAT = 31
Average TAT = 31/4 = 7.75
Total Waiting Time = 19
Average Waiting Time = 19/4 = 4.75

5. Multilevel Queue Scheduling

Highest priority:  System Processes                          (5)
                   Foreground (Interactive Processes) - RR   (2)
                   Background (Batch Processes)       - FCFS (3)
Lowest priority:   Student Processes                         (5)

Scheduling among the queues:
1. Fixed-Priority Preemptive Scheduling
2. Time Slice between Queues
*** Starvation is possible ***
6. Multilevel Feedback Queue Scheduling

1. Ready Queue Q0
2. Ready Queue Q1
3. Ready Queue Q2

CASE 1: In Ready Queue Q0
   QT is 3 m.sec
   If the process needs more than 8 m.sec, it is moved down to Q1
CASE 2: In Ready Queue Q1
   QT is 10 m.sec
   If the process needs more than 16 m.sec, it is moved down to Q2
CASE 3: In Ready Queue Q2 (served FCFS)
*** No Starvation ***
Peterson’s Solution

Two processes: P0, P1

int turn;
boolean flag[2];

Structure of Process Pi (the other process is Pj, j = 1 - i):
while (true)
{
    flag[i] = true;
    turn = j;
    while (flag[j] == true && turn == j);   // busy wait
    // Critical Section
    flag[i] = false;
    // Remainder Section
}

P0:
flag[0] = true;
turn = 1;
while (flag[1] == true && turn == 1);   // busy wait
// Critical Section
flag[0] = false;

P1:
flag[1] = true;
turn = 0;
while (flag[0] == true && turn == 0);   // busy wait
// Critical Section
flag[1] = false;
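Below is a minimal runnable sketch of the same idea for two POSIX threads. It is an illustration, not the textbook's code: a hypothetical shared counter stands in for the critical-section work, and C11 sequentially consistent atomics replace the plain flag/turn variables so the busy-wait also behaves correctly on real hardware.

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2];
atomic_int  turn;
long counter = 0;                      /* shared data protected by the lock */

void lock(int i)                       /* i is 0 or 1 */
{
    int j = 1 - i;
    atomic_store(&flag[i], true);      /* I want to enter */
    atomic_store(&turn, j);            /* but let the other go first */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                              /* busy wait */
}

void unlock(int i)
{
    atomic_store(&flag[i], false);
}

void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        lock(i);
        counter++;                     /* critical section */
        unlock(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld\n", counter); /* expect 200000 */
    return 0;
}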

Semaphores

Two procedures are used to implement semaphores:
1. wait()
2. signal()
These are atomic procedures.

1. wait():
   1. If the semaphore value is >= 1, the process can enter the CS.
   2. The semaphore value is decremented.
   3. If the semaphore value is 0, some process is already in the CS
      and no other process is allowed into the CS.
wait(s)
{
    while (s <= 0);   // busy wait, do nothing
    s--;
}

2. signal():
   1. After the process completes the CS, the semaphore value is incremented.
signal(s)
{
    s++;
}
Types of Semaphores:
1. Binary Semaphore
2. Counting Semaphore
// P1, P2 processes; s = semaphore
int mutex = 1;          // binary semaphore
do
{
    wait(mutex);        // mutex = 1 -> enter, mutex becomes 0
    // critical section
    signal(mutex);      // mutex becomes 1 again
    // remainder section
} while (1);

CASE 1: wait()
wait(s)
{
    while (s <= 0);     // busy wait
    s--;
}
CASE 2: signal()
signal(s)
{
    s++;                // s goes back from 0 to 1
}
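For comparison, the following minimal sketch expresses the same binary-semaphore pattern with POSIX semaphores, where sem_wait() and sem_post() play the roles of wait() and signal(); the shared counter is an assumed stand-in for the critical section.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;                 /* binary semaphore, initialised to 1 */
int shared = 0;

void *proc(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);    /* wait(mutex)  */
        shared++;            /* critical section */
        sem_post(&mutex);    /* signal(mutex) */
    }
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    sem_init(&mutex, 0, 1);  /* initial value 1 -> binary semaphore */
    pthread_create(&p1, NULL, proc, NULL);
    pthread_create(&p2, NULL, proc, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&mutex);
    return 0;
}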
Classical Problems of Synchronization with
Semaphore Solution

1. Bounded-buffer (or Producer-Consumer) Problem,


2. Dining-Philosophers Problem,
3. Readers and Writers Problem,
4. Sleeping Barber Problem

1. Bounded-buffer (or Producer-Consumer) Problem


1. Producer: produces an item and places it in a buffer (memory).
2. Consumer: consumes the item that the producer placed in the buffer (memory).
Conditions at the Producer side:
** Before placing an item into the buffer, the producer has to check that the buffer is not full.
** The count will be incremented.
Conditions at the Consumer side:
** Before consuming an item from the buffer, the consumer has to check that the buffer is not empty.
** The count will be decremented.
Let us assume:
N buffers, each buffer holds one item
Semaphore mutex = 1    (mutual exclusion on the buffer)
Semaphore full  = 0    (counts filled slots)
Semaphore empty = N    (counts empty slots)
Structure of Producer:

while (true)
{
    // produce an item in nextp
    wait(empty);        // empty starts at n
    wait(mutex);
    // Critical Section: add nextp to the buffer
    signal(mutex);
    signal(full);
}

Structure of Consumer process:

while (true)
{
    wait(full);
    wait(mutex);
    // Critical Section: remove an item from the buffer into nextc
    signal(mutex);
    signal(empty);
    // consume the item in nextc
}
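A minimal runnable sketch of the bounded-buffer idea is shown below, using POSIX semaphores for full/empty and a pthread mutex for mutual exclusion; the buffer size N = 5 and the integer items are assumptions made only for illustration.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5
int buffer[N];
int in = 0, out = 0;

sem_t empty_slots;           /* counts empty slots, starts at N */
sem_t full_slots;            /* counts filled slots, starts at 0 */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg)
{
    for (int item = 1; item <= 20; item++) {
        sem_wait(&empty_slots);          /* wait(empty) */
        pthread_mutex_lock(&mutex);      /* wait(mutex) */
        buffer[in] = item;               /* add item to buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);    /* signal(mutex) */
        sem_post(&full_slots);           /* signal(full) */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);           /* wait(full) */
        pthread_mutex_lock(&mutex);      /* wait(mutex) */
        int item = buffer[out];          /* remove item from buffer */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);    /* signal(mutex) */
        sem_post(&empty_slots);          /* signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}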
2. Dining-Philosophers Problem

[Figure: five philosophers P0-P4 seated around a bowl of rice, with chopsticks C0-C4 placed between neighbouring philosophers]

5 Philosophers: P0, P1, P2, P3, P4
A bowl of rice, and they have 5 plates.
They have 5 chopsticks: C0, C1, C2, C3, C4.
Each philosopher does 2 activities:
1. Eat
2. Think
If a philosopher is not eating, he is thinking.
If a philosopher is not thinking, he is eating.

Chopstick state (one semaphore per chopstick):
   C[0]  C[1]  C[2]  C[3]  C[4]
    1     1     1     1     1      (initially all free)

1 - chopstick is free
0 - chopstick is already allotted to someone
(a value drops to 0 while a philosopher, e.g. P0 or P1, holds that chopstick,
and returns to 1 when it is released)

do
{
    wait(chopstick[i]);              // semaphore array: chopstick[5]
    wait(chopstick[(i + 1) % 5]);
    // ----------
    // Eat
    // ----------
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // ----------
    // Think
    // ----------
} while (1);
3. Readers and Writers Problem

A file is shared between reader and writer processes.

A reader process can only read the file.
A writer process can both read and write the file.
Problem:
Allow multiple readers to read the file simultaneously,
i.e. R-R is possible.
When a reader process is accessing the file, a writer process is not allowed
to access the file, i.e. R-W is not possible.
When a writer process is accessing the file, no other process is allowed
to access the file, i.e. W-R and W-W are not possible.

Shared data
Shared file
integer readcount=0;
semaphore mutex=1;
semaphore wrt=1;

Structure of Writer process:

while (true)
{
    wait(wrt);                  // ENTRY SECTION
    // writing is performed     // CRITICAL SECTION
    signal(wrt);                // EXIT SECTION
}

Structure of Reader process:

while (true)
{
    wait(mutex);
    readcount++;
    if (readcount == 1)         // ENTRY SECTION
        wait(wrt);              // first reader locks out writers
    signal(mutex);

    // reading is performed     // CRITICAL SECTION

    wait(mutex);
    readcount--;
    if (readcount == 0)         // EXIT SECTION
        signal(wrt);            // last reader lets writers in
    signal(mutex);
}
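The following minimal sketch expresses the same readers-writers logic with POSIX semaphores; the shared integer data is an assumed stand-in for the shared file, and only a few threads are created for illustration.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex_sem;             /* protects readcount */
sem_t wrt;                   /* held by writers (and the first/last reader) */
int readcount = 0;
int data = 0;                /* the "shared file" */

void *writer(void *arg)
{
    sem_wait(&wrt);          /* ENTRY SECTION */
    data++;                  /* writing is performed (CRITICAL SECTION) */
    sem_post(&wrt);          /* EXIT SECTION */
    return NULL;
}

void *reader(void *arg)
{
    sem_wait(&mutex_sem);
    if (++readcount == 1)
        sem_wait(&wrt);      /* first reader blocks writers */
    sem_post(&mutex_sem);

    printf("read %d\n", data);   /* reading is performed */

    sem_wait(&mutex_sem);
    if (--readcount == 0)
        sem_post(&wrt);      /* last reader releases writers */
    sem_post(&mutex_sem);
    return NULL;
}

int main(void)
{
    pthread_t r1, r2, w;
    sem_init(&mutex_sem, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&w,  NULL, writer, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(r1, NULL); pthread_join(w, NULL); pthread_join(r2, NULL);
    return 0;
}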

4. Sleeping Barber Problem

Problem Statement:
Conditions:
1 waiting room with N chairs
1 barber room with 1 barber chair
Barber & Customer:
1. If there are no customers, the barber goes to sleep.
2. When a customer arrives, he has to wake up the barber.
3. If there are many customers and the barber is cutting a customer's hair,
then the remaining customers either wait if there are empty chairs in the
waiting room, or they leave if no chairs are empty.
Sleeping barber solution:

Variable Declaration: shared data

Semaphore customers = 0;       // number of waiting customers
Semaphore barber = 0;          // 0 means the barber is busy or asleep
int freeSeats = N;             // number of free waiting-room chairs
Semaphore accessSeats = 1;     // mutex protecting freeSeats
Barber:
while (true)
{
    wait(customers);           // sleep until a customer arrives
    wait(accessSeats);
    freeSeats++;               // a chair gets free
    signal(barber);            // the barber is ready for a hair cut
    signal(accessSeats);       // release the lock on the chairs
    // CRITICAL SECTION: barber is cutting the hair
}
Customer:
while (1)
{
    wait(accessSeats);
    if (freeSeats > 0)
    {
        freeSeats--;           // sit down on a chair
        signal(customers);     // notify the barber
        signal(accessSeats);   // release the lock
        wait(barber);          // wait in the waiting room if the barber is busy
        // critical section: customer is having a hair cut
    }
    else
    {
        signal(accessSeats);   // release the lock; the customer leaves
    }
}
Monitors
Syntax of Monitors:
monitor <monitor_name>
{
    // shared variable declarations
    procedure P1(....)
    {
        .......
    }
    procedure P2(....)
    {
        .......
    }
    procedure initialization(....)
    {
        // initialization code
    }
}
Semantic view of a monitor:
[Figure: entry queue -> shared data, operations, initialization code;
only one process is active inside the monitor at a time]

Monitor with Condition Variables:
[Figure: entry queue -> shared data x, y, operations, initialization code,
plus a waiting queue for each condition variable]

x.wait()   - the calling process is suspended on condition x
x.signal() - resumes one process suspended on x (if any)
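Since C has no monitor construct, the sketch below approximates one with a pthread mutex (the entry queue) and a condition variable; the bounded counter is an assumed example of the monitor's shared data, and pthread_cond_wait/pthread_cond_signal play the roles of x.wait() and x.signal().

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;        /* only one thread inside the monitor at a time */
    pthread_cond_t  not_full;    /* condition variable x */
    int count;                   /* shared data */
    int limit;
} counter_monitor;

void monitor_init(counter_monitor *m, int limit)   /* initialization code */
{
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->not_full, NULL);
    m->count = 0;
    m->limit = limit;
}

void monitor_increment(counter_monitor *m)         /* procedure P1 */
{
    pthread_mutex_lock(&m->lock);
    while (m->count == m->limit)
        pthread_cond_wait(&m->not_full, &m->lock);  /* x.wait() */
    m->count++;
    pthread_mutex_unlock(&m->lock);
}

void monitor_decrement(counter_monitor *m)         /* procedure P2 */
{
    pthread_mutex_lock(&m->lock);
    m->count--;
    pthread_cond_signal(&m->not_full);              /* x.signal() */
    pthread_mutex_unlock(&m->lock);
}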

Deadlock Avoidance
Banker's Algorithm
1. Safety Algorithm
Total resources:  A B C = 10 5 7

        Allocation    Max        Available    Need (Max - Allocation)
        A  B  C       A  B  C    A  B  C      A  B  C
P0      0  1  0       7  5  3    3  3  2      7  4  3
P1      2  0  0       3  2  2                 1  2  2
P2      3  0  2       9  0  2                 6  0  0
P3      2  1  1       2  2  2                 0  1  1
P4      0  0  2       4  3  3                 4  3  1

1. WORK=AVAILABLE
FINISH[I] =FALSE FOR I=0…N-1
2. FIND AN I SUCH THAT
A) FINISH[I] =FALSE
B) NEED <= WORK
3. WORK=WORK + ALLOCATION
FINISH[I] =TRUE
GOTO STEP 2
4. IF FINISH[I] =TRUE FOR ALL I, THEN SYSTEM IS IN SAFE
STATE
WORK = AVAILABLE, SO WORK = 3 3 2

FINISH:     P0     P1     P2     P3     P4
initially:  FALSE  FALSE  FALSE  FALSE  FALSE
finally:    TRUE   TRUE   TRUE   TRUE   TRUE

P0:
NEED<=WORK
743<=332, FALSE
SO P0 MUST WAIT
P1:
NEED<=WORK
122<=332, TRUE
WORK=WORK+ALLOCATION
=332+200
=532
P2:
NEED<=WORK
600<=532, FALSE
P2 MUST WAIT
P3:
NEED<=WORK
011<=532, TRUE
WORK=WORK+ALLOCATION
=532+211
=743

P4:
NEED<=WORK
431<=743, TRUE
WORK=WORK+ALLOCATION
=743+002
=745
P0:
NEED<=WORK
743<=745, TRUE
WORK=WORK+ALLOCATION
=745+010
= 755
P2:
NEED<=WORK
600<=755, TRUE
WORK=WORK+ALLOCATION
=755+302
=10 5 7
Now FINISH[i] = TRUE for all i, so the system is in a SAFE state.
Safe sequence: <P1, P3, P4, P0, P2>
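The safety check above can be expressed as a short program. The sketch below hard-codes the Allocation, Need and Available values from the table and reports whether a safe ordering exists.

#include <stdio.h>
#include <stdbool.h>

#define P 5
#define R 3

int allocation[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
int need[P][R]       = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
int available[R]     = {3,3,2};

bool is_safe(void)
{
    int work[R];
    bool finish[P] = {false};
    for (int r = 0; r < R; r++) work[r] = available[r];   /* WORK = AVAILABLE */

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[i][r] > work[r]) { can_run = false; break; }
            if (can_run) {                       /* NEED <= WORK */
                for (int r = 0; r < R; r++)
                    work[r] += allocation[i][r]; /* WORK = WORK + ALLOCATION */
                finish[i] = true;
                printf("P%d finishes, work = %d %d %d\n", i, work[0], work[1], work[2]);
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;           /* no runnable process left */
    }
    return true;                                 /* FINISH[i] true for all i */
}

int main(void)
{
    printf(is_safe() ? "System is in a SAFE state\n" : "System is NOT safe\n");
    return 0;
}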

2. Resource Request Algorithm

Total resources:  A B C = 10 5 7

        Allocation    Max        Available    Need
        A  B  C       A  B  C    A  B  C      A  B  C
P0      0  1  0       7  5  3    3  3  2      7  4  3
P1      2  0  0       3  2  2                 1  2  2
P2      3  0  2       9  0  2                 6  0  0
P3      2  1  1       2  2  2                 0  1  1
P4      0  0  2       4  3  3                 4  3  1

(After P1's request (1,0,2) is granted, the struck-through updates shown for P1
in the original table apply: Allocation of P1 becomes 3 0 2, Available becomes
2 3 0, and Need of P1 becomes 0 2 0.)

1. IF REQUEST <= NEED, GO TO STEP 2

2. IF REQUEST <= AVAILABLE, GO TO STEP 3
3. ALLOCATION = ALLOCATION + REQUEST
   AVAILABLE = AVAILABLE - REQUEST
   NEED = NEED - REQUEST
EXAMPLE:
Let P1 request for (1,0,2)
WORK = AVAILABLE = 2 3 0

FINISH:     P0     P1     P2     P3     P4
initially:  FALSE  FALSE  FALSE  FALSE  FALSE
finally:    TRUE   TRUE   TRUE   TRUE   TRUE
P0:
NEED<=WORK
743<=230, FALSE
SO P0 MUST WAIT
P1:
NEED<=WORK
020<=230, TRUE
WORK=WORK+ALLOCATION
=230+302
=532
P2:
NEED<=WORK
600<=532, FALSE
P2 MUST WAIT
P3:
NEED<=WORK
011<=532, TRUE
WORK=WORK+ALLOCATION
=532+211
=743
P4:
NEED<=WORK
431<=743, TRUE
WORK=WORK+ALLOCATION
=743+002
=745

P0:
NEED<=WORK
743<=745, TRUE
WORK=WORK+ALLOCATION
=745+010
=755
P2:
NEED<=WORK
600<=755, TRUE
WORK=WORK+ALLOCATION
=755+302
=10 5 7
FINISH[i] = TRUE for all i, so the system remains in a safe state and
P1's request (1,0,2) can be granted immediately.
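The resource-request step itself can be sketched as follows; it assumes the allocation, need, available arrays, the is_safe() function and the P/R constants from the safety-algorithm sketch above, tentatively grants the request, and rolls it back if the resulting state is not safe.

#include <stdbool.h>

bool request_resources(int i, int request[R])
{
    for (int r = 0; r < R; r++)
        if (request[r] > need[i][r]) return false;       /* step 1: error */
    for (int r = 0; r < R; r++)
        if (request[r] > available[r]) return false;      /* step 2: must wait */

    for (int r = 0; r < R; r++) {                         /* step 3: pretend to allocate */
        available[r]     -= request[r];
        allocation[i][r] += request[r];
        need[i][r]       -= request[r];
    }
    if (is_safe()) return true;                           /* keep the allocation */

    for (int r = 0; r < R; r++) {                         /* unsafe: roll back */
        available[r]     += request[r];
        allocation[i][r] -= request[r];
        need[i][r]       += request[r];
    }
    return false;
}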
DEADLOCK DETECTION
1. Single Instance of Each Resource Type
Resource-Allocation Graph:

[Figure: resource-allocation graph with processes P1-P5 and single-instance
resources R1-R5, showing request and assignment edges]

Corresponding wait-for graph:

[Figure: wait-for graph over P1-P5, obtained by collapsing the resource
nodes; a cycle in this graph indicates a deadlock]

2. Multiple Instance of Resource Type


Total resources:  A B C = 7 2 6

        Allocation    Request    Available
        A  B  C       A  B  C    A  B  C
P0      0  1  0       0  0  0    0  0  0
P1      2  0  0       2  0  2
P2      3  0  3       0  0  0
P3      2  1  1       1  0  0
P4      0  0  2       0  0  2

1. WORK = AVAILABLE
   FINISH[i] = FALSE if ALLOCATION[i] != 0
   FINISH[i] = TRUE  if ALLOCATION[i] = 0
2. FIND AN i SUCH THAT
   A) FINISH[i] = FALSE
   B) REQUEST[i] <= WORK
3. WORK = WORK + ALLOCATION[i]
   FINISH[i] = TRUE
   GO TO STEP 2
4. IF FINISH[i] = FALSE FOR ANY i = 0,1,...,n-1, THEN THE SYSTEM IS IN
   DEADLOCK; OTHERWISE IT IS IN A SAFE STATE
EXAMPLE:
Work=000
FINISH:     P0     P1     P2     P3     P4
initially:  FALSE  FALSE  FALSE  FALSE  FALSE
finally:    TRUE   TRUE   TRUE   TRUE   TRUE

P0:
REQ<=WORK
000<=000, TRUE
WORK=WORK+ALLOCATION
= 000+010
=010
FINISH[0]=TRUE
P1:
REQ<=WORK
202<=010, FALSE
SO, P1 MUST WAIT

P2:
REQ<=WORK
000<=010, TRUE
WORK=WORK+ALLOCATION
= 010+303
=313
FINISH [2]=TRUE
P3:
REQ<=WORK
100<=313, TRUE
WORK=WORK+ALLOCATION
= 313+211
=524
FINISH [3]=TRUE
P4:
REQ<=WORK
002<=524, TRUE
WORK=WORK+ALLOCATION
= 524+002
=526
FINISH [4]=TRUE
P1:
REQ<=WORK
202<=526, TRUE
WORK=WORK+ALLOCATION
= 526+200
=726
FINISH[1]=TRUE
FINISH[i] = TRUE for all i, so NO DEADLOCK IS DETECTED
(final WORK = 7 2 6)
DEADLOCK RECOVERY

1. Process Termination
   a. Abort all deadlocked processes
   b. Abort one process at a time until the deadlock cycle is eliminated
2. Resource Preemption
   a. Selecting a victim
   b. Rollback
   c. Starvation

Contiguous memory allocation


1. Multiprogramming with fixed size (static) partitions (MFT)

Main Memory (fixed, unequal partitions, with unused space shown):
   O/S
   Partition 1:  5 MB   (2 MB left unused)
   Partition 2: 10 MB   (8 MB left unused)
   Partition 3:  3 MB   (2 MB left unused)
   Partition 4: 25 MB   (5 MB left unused)
   Partition 5: 20 MB   (5 MB left unused)
   Partition 6:  7 MB   (1 MB left unused)

Main Memory (fixed, equal partitions):
   O/S
   Partition 1: 5 MB
   Partition 2: 5 MB
   Partition 3: 5 MB
   Partition 4: 5 MB
Let us assume
P1 = 2 MB, P2 = 2 MB, P3 = 1 MB, P4 = 20 MB, P5 = 15 MB, P6 = 6 MB.
Free space = 2 + 8 + 2 + 5 + 5 + 1 = 23 MB
Now assume P7 arrives and needs 20 MB. The OS cannot allocate it even though
23 MB is free in total, because this is contiguous allocation and no single
free region is large enough.
Advantages:
1. Easy to implement
2. Little operating-system overhead
Disadvantages:
1. Internal fragmentation
2. External fragmentation
3. Limit on process size
4. Limitation on the degree of multiprogramming
Three methods can be used to choose a partition/hole for a process
(a sketch of all three follows this list):
1. First Fit
2. Best Fit
3. Worst Fit
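As a sketch of these three placement strategies, the following program picks a hole for a request from an assumed list of free-hole sizes; the hole sizes and the 4 MB request are made-up values used only for illustration.

#include <stdio.h>

#define HOLES 5

/* each function returns the index of the chosen hole, or -1 if none fits */
int first_fit(int hole[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (hole[i] >= request) return i;       /* first hole big enough */
    return -1;
}

int best_fit(int hole[], int n, int request)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (hole[i] >= request && (best == -1 || hole[i] < hole[best]))
            best = i;                           /* smallest hole that fits */
    return best;
}

int worst_fit(int hole[], int n, int request)
{
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (hole[i] >= request && (worst == -1 || hole[i] > hole[worst]))
            worst = i;                          /* largest hole that fits */
    return worst;
}

int main(void)
{
    int hole[HOLES] = {2, 8, 2, 5, 1};          /* free space in MB */
    int request = 4;                            /* process size in MB */
    printf("first fit -> hole %d\n", first_fit(hole, HOLES, request));
    printf("best  fit -> hole %d\n", best_fit(hole, HOLES, request));
    printf("worst fit -> hole %d\n", worst_fit(hole, HOLES, request));
    return 0;
}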

2. Multiprogramming with variable size (dynamic) partitions (MVT)

Let us assume P1 = 5 MB, P2 = 10 MB, P3 = 7 MB, P4 = 20 MB

Main Memory:
   O/S
   P1  5 MB
   P2 10 MB
   P3  7 MB
   P4 20 MB
Advantages:
1. No Internal Fragmentation
2. No restriction on the Degree of Multiprogramming
3. No Limitation on the Size of the Process
Disadvantages:
1. Difficult Implementation
2. External Fragmentation
Compaction:

Before compaction:            After compaction:
   O/S                           O/S
   5 MB (free)                   P2 10 MB
   P2 10 MB                      P4 20 MB
   7 MB (free)                   P5 12 MB (now fits in the combined free space)
   P4 20 MB

Non-contiguous memory allocation


1. Paging

Paging Model of Logical & Physical Memory

Process (Logical Memory):
   Page 0
   Page 1
   Page 2
   Page 3

Page Table:
   page 0 -> frame 1
   page 1 -> frame 4
   page 2 -> frame 3
   page 3 -> frame 7

Main Memory (Physical Memory):
   frame 0:
   frame 1: Page 0
   frame 2:
   frame 3: Page 2
   frame 4: Page 1
   frame 5:
   frame 6:
   frame 7: Page 3

Paging Hardware
[Figure: a logical address (p, d) is translated through the page table;
p indexes the table to obtain frame f, giving physical address (f, d)]
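Address translation with the page table above can be sketched as follows; the 1 KB page size is an assumption made only to give concrete numbers.

#include <stdio.h>

#define PAGE_SIZE 1024

int page_table[4] = {1, 4, 3, 7};               /* page number -> frame number */

int translate(int logical_address)
{
    int page   = logical_address / PAGE_SIZE;   /* p */
    int offset = logical_address % PAGE_SIZE;   /* d */
    int frame  = page_table[page];              /* f from the page table */
    return frame * PAGE_SIZE + offset;          /* physical address */
}

int main(void)
{
    int la = 2 * PAGE_SIZE + 100;               /* offset 100 in page 2 */
    printf("logical %d -> physical %d\n", la, translate(la));
    return 0;                                   /* page 2 -> frame 3 -> 3172 */
}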
2. Segmentation

Segmentation example:
Process:
1. Deposit –seg0
2. Withdraw- seg1
3. Cancel- seg2
Physical memory layout (addresses):
     0 - 100 : O/S
   100 - 200 : Seg0
   200 - 350 : Seg1   (base = 200, limit = 150)
   ...
   750 -     : Seg2
Case 1: offset is 50, seg1 limit is 150
   50 < 150, so the access is valid (physical address = base + offset)
Case 2: offset is 200, seg1 limit is 150
   200 > 150, so a trap (addressing error) is generated
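The limit check can be sketched as below; the base/limit pairs mirror the layout above, and the limit assumed for Seg2 is only for illustration.

#include <stdio.h>

typedef struct { int base; int limit; } segment;

segment seg_table[3] = {
    {100, 100},      /* seg0: Deposit  */
    {200, 150},      /* seg1: Withdraw */
    {750, 200},      /* seg2: Cancel (limit assumed for illustration) */
};

int translate(int s, int offset)
{
    if (offset >= seg_table[s].limit) {
        printf("trap: offset %d exceeds limit %d\n", offset, seg_table[s].limit);
        return -1;
    }
    return seg_table[s].base + offset;   /* physical address */
}

int main(void)
{
    printf("seg1, offset  50 -> %d\n", translate(1, 50));   /* 250  */
    printf("seg1, offset 200 -> %d\n", translate(1, 200));  /* trap */
    return 0;
}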

Virtual Memory implemented using Demand Paging

Swapping between main memory and secondary memory

Page table when some pages are not in main memory:

Process (Logical Memory):        Page Table (frame, valid/invalid bit):
   0: A                             0 -> frame 2, V
   1: B                             1 ->        , I
   2: C                             2 -> frame 5, V
   3: D                             3 ->        , I

Main Memory (Physical Memory):   Secondary Memory:
   frame 2: A                       A  B  C  D
   frame 5: C

Steps in Handling Page Fault


1. Page Fault Trap: The CPU detects a page fault while trying to
access a memory page that isn't currently in RAM. This triggers
a page fault exception, which transfers control to the operating
system.
2. Page Table Lookup: The operating system looks up the page
table to determine the location of the required page in
secondary storage, such as a disk.
3, 4. Fetch Page from Secondary Storage: The OS initiates a
process to fetch the required page from secondary storage into
an available page frame in RAM. This involves reading the page
data from disk into a free page frame.
5. Update Page Table: Once the page is successfully brought
into RAM, the operating system updates the page table to
reflect the new location of the page in physical memory. This
includes updating the page table entry with the physical
address of the page frame where the page now resides.
6. Resume Process Execution: Finally, the operating system
restarts the interrupted memory access instruction that caused
the page fault. With the required page now available in RAM, the
process can proceed as expected, accessing the data it needs
from memory.

Page Replacement
Page Replacement Algorithms
There are three types of Page Replacement Algorithms. They
are:

1. Optimal Page Replacement Algorithm


2. First In First Out Page Replacement Algorithm
3. Least Recently Used (LRU) Page Replacement Algorithm

Allocation of Frames
The OS can allocate frames in 5 ways.
1. Equal allocation

P1 requires 10 frames
P2 requires 30 frames
Main memory = 40 frames
But the OS allocates equally: P1 -> 20 frames, P2 -> 20 frames
Disadvantage: process needs and priorities are not taken into account

2. Proportional allocation

P1 -> 10/(10+30) * 40 = 10/40 * 40
   P1 = 10 frames
P2 -> 30/(10+30) * 40 = 30/40 * 40
   P2 = 30 frames

3. Priority Allocation
4. Global Replacement Allocation
5. Local Replacement Allocation
Thrashing
Definition:
A process spends more time on paging (handling page faults) than on
executing; this is called thrashing.
Causes of thrashing:
High degree of multiprogramming.
Lack of frames.
Page replacement policy.
1. When CPU usage is low, the process-scheduling mechanism tries to load
multiple processes into memory at the same time, increasing the degree of
multiprogramming.
2. In this case, the number of processes in memory exceeds the number of
frames available in memory.
3. If a high-priority process arrives in memory and no frame is vacant at
that moment, another process occupying a frame is moved to secondary
storage, and the freed frame is allotted to the higher-priority process.
4. When memory is full, the system begins to take a long time to swap in
the required pages. Because most of the processes are waiting for pages,
CPU utilization drops again. Now thrashing occurs.

Techniques to handle thrashing:
1. Working Set Model
   Based on the principle of locality of reference; the working-set window
   can be too small (Min), too large (Max), or unbounded (Infinity).
2. Page Fault Frequency

UNIT-6

What is a File System?


A file system is a method an operating system uses to store,
organize, and manage files and directories on a storage device.
Some common types of file systems include:

FAT (File Allocation Table): An older file system used by older


versions of Windows and other operating systems.
NTFS (New Technology File System): A modern file system
used by Windows. It supports features such as file and folder
permissions, compression, and encryption.
ext (Extended File System): A file system commonly used on
Linux and Unix-based operating systems.
HFS (Hierarchical File System): A file system used by macOS.
APFS (Apple File System): A new file system introduced by
Apple for their Macs and iOS devices.
A file is a collection of related information that is recorded on
secondary storage. Or file is a collection of logically related
entities. From the user’s perspective, a file is the smallest
allotment of logical secondary storage.

File Access Methods in Operating System


When a file is used, its information is read into computer memory, and there
are several ways to access this information.
There are three ways to access a file in a computer system:
1. Sequential Access
2. Direct Access
3. Index Sequential Method
1. Sequential Access:
It is the simplest access method. Information in the file is processed in
order, one record after the other.
a. Data is accessed one record right after another, in order.
b. When we use the read command, the file pointer moves ahead by one record.
c. When we use the write command, memory is allocated and the pointer moves
to the end of the file.
Advantages:
1. It is simple to implement this file access mechanism.
2. It uses lexicographic order to quickly access the next entry.
3. It is suitable for applications that require access to all
records in a file, in a specific order.
2. Direct Access:
Another method is the direct access method, also known as the relative
access method. It uses fixed-length logical records that allow the program
to read and write records rapidly, in no particular order. Direct access is
based on the disk model of a file, since a disk allows random access to any
file block. For direct access, the file is viewed as a numbered sequence of
blocks or records. Thus, we may read block 14, then block 59, and then write
block 17. There is no restriction on the order of reading and writing for a
direct access file.
Advantages:
The files can be immediately accessed decreasing the average
access time.
In the direct access method, in order to access a block, there is
no need of traversing all the blocks present before it.
3. Index sequential method:
It is the other method of accessing a file that is built on the top
of the sequential access method. These methods construct an
index for the file. The index, like an index in the back of a book,
contains the pointer to the various blocks. To find a record in
the file, we first search the index, and then by the help of
pointer we access the file directly.
File System Mounting

Mounting is a process in which the operating system adds the


directories and files from a storage device to the user’s
computer file system.
The file system is attached to an empty directory, by adding so
the system user can access the data that is available inside the
storage device through the system file manager.
Storage systems can be internal hard disks, external hard disks, USB flash
drives, SSD cards, memory cards, network-attached storage devices, CDs and
DVDs, remote file systems, or anything else.
Terminologies used in File System Mounting:
File System: It is the method used by the operating system to
manage data storage in a storage device. So, a user can access
and organize the directories and files in an efficient manner.
Device name: It is a name/identifier given to a storage partition, for
example, "D:" in Windows.
Mount point: It is an empty directory in which we are adding the
file system during the process of mounting.
1. Linux/Unix-based OS
We want to mount /dev/sdb1 onto an existing directory /mnt/mydisk:

sudo mount /dev/sdb1 /mnt/mydisk
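When the device is no longer needed, it can be detached from the same mount
point; assuming the mount command above, the matching command is:

sudo umount /mnt/mydisk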


Structures of Directory
Definition:
A directory is a container that is used to contain folders and
files. It organizes files and folders in a hierarchical manner.

There are 4 types of directories:


1. Single-level directory
2. Two-level directory
3. Tree Structure/ Hierarchical Structure
4. Acyclic Graph Structure

1. Single-level directory:

2. Two-level directory:
3. Tree Structure/ Hierarchical Structure:

4. Acyclic Graph Structure:


Directory Implementation in Operating System

Directory implementation in the operating system can be done


using Singly Linked List and Hash table.
Directory Implementation using Singly Linked List:
The implementation of directories using a singly linked list is
easy to program but is time-consuming to execute. Here we
implement a directory by using a linear list of filenames with
pointers to the data blocks.
1. To create a new file the entire list has to be checked such
that the new directory does not exist previously.
2. The new directory then can be added to the end of the list or
at the beginning of the list.
3. In order to delete a file, we first search the directory with the
name of the file to be deleted. After searching we can delete
that file by releasing the space allocated to it.
4. To reuse the directory entry we can mark that entry as
unused or we can append it to the list of free directories.
5. To delete a file linked list is the best choice as it takes less
time.

Directory Implementation using Hash Table:


An alternative data structure that can be used for directory
implementation is a hash table. It overcomes the major
drawbacks of directory implementation using a linked list. In
this method, we use a hash table along with the linked list. Here
the linked list stores the directory entries, but a hash data
structure is used in combination with the linked list.
In the hash table for each pair in the directory key-value pair is
generated. The hash function on the file name determines the
key and this key points to the corresponding file stored in the
directory. This method efficiently decreases the directory
search time as the entire list will not be searched on every
operation. Using the keys the hash table entries are checked
and when the file is found it is fetched.

Disadvantage:
The major drawback of using a hash table is that it generally has a fixed
size, and the hash function depends on that size.

File Allocation Methods


The allocation methods define how the files are stored in the
disk blocks. There are three main disk space or file allocation
methods.
1. Contiguous Allocation
2. Linked Allocation
3. Indexed Allocation
1. Contiguous Allocation:
In this scheme, each file occupies a contiguous set of blocks
on the disk. For example, if a file requires n blocks and is given
a block b as the starting location, then the blocks assigned to
the file will be: b, b+1, b+2,……b+n-1. This means that given the
starting block address and the length of the file (in terms of
blocks required), we can determine the blocks occupied by the
file.
The directory entry for a file with contiguous allocation contains
1. Address of starting block
2. Length of the allocated portion.

The file ‘mail’ in the following figure starts from the block 19
with length = 6 blocks. Therefore, it occupies 19, 20, 21, 22, 23,
24 blocks.
Advantages:
1. Both the Sequential and Direct Accesses are supported by
this. For direct access, the address of the kth block of the file
which starts at block b can easily be obtained as (b+k).
2. This is extremely fast since the number of seeks are minimal
because of contiguous allocation of file blocks.
Disadvantages:
1. This method suffers from both internal and external
fragmentation. This makes it inefficient in terms of memory
utilization.
2. Increasing file size is difficult because it depends on the
availability of contiguous memory at a particular instance.
2. Linked Allocation:
In this scheme, each file is a linked list of disk blocks which
need not be contiguous. The disk blocks can be scattered
anywhere on the disk.
The directory entry contains a pointer to the starting and the
ending file block. Each block contains a pointer to the next
block occupied by the file.
The file ‘jeep’ in following image shows how the blocks are
randomly distributed. The last block (25) contains -1 indicating
a null pointer and does not point to any other block.

Advantages:
1. This is very flexible in terms of file size. File size can be
increased easily since the system does not have to look for a
contiguous chunk of memory.
2. This method does not suffer from external fragmentation.
This makes it relatively better in terms of memory utilization.
Disadvantages:
1. Because the file blocks are distributed randomly on the disk,
a large number of seeks are needed to access every block
individually. This makes linked allocation slower.
3. Indexed Allocation:
In this scheme, a special block known as the Index block
contains the pointers to all the blocks occupied by a file. Each
file has its own index block. The ith entry in the index block
contains the disk address of the ith file block. The directory
entry contains the address of the index block as shown in the
image:
Advantages:
1. This supports direct access to the blocks occupied by the file
and therefore provides fast access to the file blocks.
2. It overcomes the problem of external fragmentation.
Disadvantages:
1. The pointer overhead for indexed allocation is greater than
linked allocation.

Mass Storage Structure


1. Magnetic Disk (Non-Volatile Memory)
   a. Platters - coated surfaces accessed by the R/W head
      (performance measures: transfer rate, seek time, rotational latency)
   b. Tracks
   c. Sectors
   d. Data
   Supports random access
2. Magnetic Tape
   Supports sequential access

DISK STRUCTURE
What is Hard Disk?
Hard Disk is a secondary storage device. It is a type of electromechanical
device. A hard disk stores and retrieves digital data using magnetic
storage. The disk consists of rigid, rapidly rotating platters coated with
magnetic material.
A hard disk pack consists of more than one disk (platter).
How Data is Stored in a Hard Disk?
● Data is stored in Disk in form of tracks and sectors.
● Disk surface is divided in to tracks.
● Each track is further divided into sectors.
● Sector is the addressable unit in disk.
Basic picture for disk storage concept is given below

Disk Structure in OS
Disk structure is as shown in the following figure.
Step-wise description of the disk structure is given below:
● Disk surface divided into tracks
● A read/write head positioned just above the disk surface
● Information stored by magnetic recording on the track under the
read/write head
● Fixed head disk
● Moving head disk
● Designed for large amounts of storage
● Primary design considerations: cost, size, and speed
Hardware for a disk system
● Disk drive, device motor, read/write head, associated logic
Disk controller
● Determines the logical interaction with the computer
● Can service more than one drive (overlapped seeks)
Cylinder
● The same-numbered tracks on all the disk surfaces
● Each track contains between 8 and 32 sectors
Sector
● Smallest unit of information that can be read from / written into the disk
● Ranges from 32 bytes to 4096 bytes

DISK MANAGEMENT
Disk management of the operating system includes:
1. Disk Format
- Low level (or) Physical level
- High level (or) Logical level
2. Booting from disk
- Bootstrap Loaded Program – ROM
- Tiny Bootstrap Program
3. Bad block recovery
The low-level format or physical format:
Divides the disk into sectors before storing data, so that the disk
controller can read and write them. Each sector typically holds a header,
a data area (usually 512 bytes), and a trailer with error-correction code
(ECC). Before the operating system can use a disk to hold files, it records
its own data structures on the disk.

This is done in two stages:

1. Partition the disk into one or more groups of cylinders. Each group is
treated as a logical disk.

2. Logical format, or "create file system". The OS stores the initial
file-system data structures on the disk; these record free space and
allocated space.

For efficiency, most file systems group blocks into clusters.

Disk I/O runs in blocks; file I/O runs in clusters.

For example, sector sizes can be 256, 512, and 1,024 bytes. If a disk is
formatted with a larger sector size, fewer sectors fit on each track.

As a result, fewer headers and trailers are written on each track and more
space is available for user data. Some operating systems can handle only a
sector size of 512 bytes.

The operating system keeps its own data structures on the disk before it
uses the disk to store files. It does this with the following two steps:

1. It partitions the disk into one or more groups of cylinders.


Each partition is treated by OS as a separate disk.

2. Logical formatting: That means creation of file system.

In order to increase the efficiency, file system groups blocks in


chunks called as clusters.

Some operating systems give special programs the ability to


use a disk partition as a large sequential array of logical blocks,
without any file-system data structures. This array is
sometimes called the raw disk, and I/O to this array is called as
raw I/O.

Boot block:
When the computer is turned on or restarted, the program stored in the
initial bootstrap ROM finds the location of the OS kernel on the disk,
loads the kernel into memory, and starts the OS.
To change the bootstrap code, you would need to change the ROM hardware
chip. Instead, only a small bootstrap loader program is stored in ROM.
The full bootstrap code is stored in the “boot block” of the disk.
A disk with a boot partition is called a boot disk or system disk.
The bootstrap program is required for a computer to initiate the
booting after it is powered up or rebooted.
It initializes all components of the system, from CPU registers
to device controllers and the contents of main memory, and
then starts the operating system.
The bootstrap program then locates the OS kernel on disk, loads that kernel
into memory, and jumps to an initial address to start the operating-system
execution.
The Read Only Memory (ROM) does not require initialization
and is at a fixed location that the processor can begin
executing when powered up or reset. Therefore bootstrap is
stored in ROM.
Because of read only feature of ROM; it cannot be infected by a
computer virus. The difficulty is that modification of this
bootstrap code requires changing the ROM hardware chips.
Therefore, most systems store a small bootstrap loader
program in the boot ROM which invokes and bring full
bootstrap program from disk into main memory.
The modified version of full bootstrap program can be simply
written onto the disk.
The fixed storage location of full bootstrap program is in the
“boot blocks”.
A disk that has a boot partition is called a boot disk or system
disk.
Bad Blocks:
Disks are error-prone because their moving parts have small tolerances.
Most disks even come from the factory with bad blocks, which are handled in
a variety of ways.
The controller maintains a list of bad blocks.
The controller can be instructed to replace each bad sector logically with
one of the spare sectors. This scheme is known as sector sparing or
forwarding.
A soft error triggers the data recovery process.
However, unrecoverable hard errors may result in data loss and
require manual intervention.
Failure of the disk can be:
1. Complete, means there is no way other than replacing the
disk. Back up of content must be taken on new disk.

2. One or more sectors become faulty.

3. Bad blocks exist from manufacturing. Depending on the disk and controller
in use, these blocks are handled in different ways.

Disk management in operating systems involves organizing


and maintaining the data on a storage device, such as a hard
disk drive or solid-state drive. The main goal of disk
management is to efficiently utilize the available storage space
and ensure data integrity and security.

Page Replacement Algorithms:


1. First In First Out (FIFO):
This is the simplest page replacement algorithm. In this
algorithm, the operating system keeps track of all pages in the
memory in a queue, the oldest page is in the front of the queue.
When a page needs to be replaced page in the front of the
queue is selected for removal.

Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with
3 page frames. Find the number of page faults.

Initially, all slots are empty, so when 1, 3, 0 come they are allocated to
the empty slots —> 3 page faults.
When 3 comes, it is already in memory, so —> 0 page faults.
Then 5 comes; it is not in memory, so it replaces the oldest page, i.e. 1
—> 1 page fault. 6 comes; it is also not in memory, so it replaces the
oldest page, i.e. 3 —> 1 page fault. Finally, when 3 comes it is not in
memory, so it replaces 0 —> 1 page fault.
Total = 6 page faults.
Belady’s anomaly proves that it is possible to have more page
faults when increasing the number of page frames while using
the First in First Out (FIFO) page replacement algorithm. For
example, if we consider reference strings 3, 2, 1, 0, 3, 2, 4, 3, 2, 1,
0, 4, and 3 slots, we get 9 total page faults, but if we increase
slots to 4, we get 10-page faults.
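The FIFO example above can be reproduced with a short program; the sketch below keeps the three frames in a circular queue and counts the faults (expected output: 6).

#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int ref[] = {1, 3, 0, 3, 5, 6, 3};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frame[FRAMES];
    int next = 0;        /* index of the oldest frame (front of the FIFO queue) */
    int loaded = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < loaded; f++)
            if (frame[f] == ref[i]) { hit = 1; break; }
        if (!hit) {
            if (loaded < FRAMES) {
                frame[loaded++] = ref[i];      /* fill an empty slot */
            } else {
                frame[next] = ref[i];          /* replace the oldest page */
                next = (next + 1) % FRAMES;
            }
            faults++;
        }
    }
    printf("page faults = %d\n", faults);      /* expect 6 */
    return 0;
}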

2. Optimal Page replacement:


In this algorithm, pages are replaced which would not be used
for the longest duration of time in the future.

Example-2: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3,


0, 3, 2, 3 with 4 page frame. Find number of page fault.

Initially, all slots are empty, so when 7 0 1 2 are allocated to the


empty slots —> 4 Page faults
0 is already there, so —> 0 page faults. When 3 comes it takes the place of
7 because 7 is not used for the longest duration of time in the future —> 1
page fault. 0 is already there, so —> 0 page faults. 4 takes the place of 1
—> 1 page fault.

For the rest of the page reference string —> 0 page faults, because the
pages are already available in memory. Total = 6 page faults.
Optimal page replacement is perfect, but not possible in
practice as the operating system cannot know future requests.
The use of Optimal Page replacement is to set up a benchmark
so that other replacement algorithms can be analyzed against it.

3. Least Recently Used:


In this algorithm, page will be replaced which is least recently
used.
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0,
4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find number of page faults.

Initially, all slots are empty, so when 7 0 1 2 are allocated to the


empty slots —> 4 Page faults
0 is already there, so —> 0 page faults. When 3 comes it takes the place of
7 because 7 is the least recently used —> 1 page fault.
0 is already in memory, so —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the page reference string —> 0 page faults, because the
pages are already available in memory. Total = 6 page faults.
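Likewise, the LRU example can be checked with the following sketch, which tracks the time of last use for each frame and evicts the least recently used page (expected output: 6 page faults).

#include <stdio.h>

#define FRAMES 4

int main(void)
{
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frame[FRAMES], last_used[FRAMES];
    int loaded = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int f = 0; f < loaded; f++)
            if (frame[f] == ref[t]) { hit = f; break; }
        if (hit >= 0) {
            last_used[hit] = t;                 /* refresh recency on a hit */
            continue;
        }
        faults++;
        if (loaded < FRAMES) {
            frame[loaded] = ref[t];             /* fill an empty slot */
            last_used[loaded] = t;
            loaded++;
        } else {
            int lru = 0;                        /* find least recently used */
            for (int f = 1; f < FRAMES; f++)
                if (last_used[f] < last_used[lru]) lru = f;
            frame[lru] = ref[t];                /* replace it */
            last_used[lru] = t;
        }
    }
    printf("page faults = %d\n", faults);       /* expect 6 */
    return 0;
}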
