The Launcher Academy

Office NO- 17/18, 2nd Floor, City Center Opposite – Gossner College, Club Road,
Ranchi, Contact No. 8877155769, 7903154392.

Process Management
Process:
A program in execution is called a process. When the user starts a program/application stored in secondary memory, that program has to be executed by the CPU, so it must first be loaded into main memory by the operating system. Once the requested program is loaded into main memory, it is called a process. That is, a process is a program in execution.
A program becomes a process when an executable file (a list of instructions stored on disk) is loaded into memory.
There are two common ways to load an executable file:
1. By double-clicking the icon that represents the executable file.
2. By typing the name of the executable file on the command line.

Process state
When a process executes, it changes its state. Each process may be in one of the following states.
1. New state
2. Ready state
3. Running state
4. Blocked state/ Waiting state
5. Terminated state.

When a job is admitted to the system, a corresponding process is created and inserted at the back of the ready list. It gradually moves to the head of the list. When it reaches the head and the CPU is available, the process is given to the CPU by the dispatcher; this transition is called the ready-to-running transition.

To prevent any process from monopolizing the system, knowingly or unknowingly, the OS sets a hardware clock that allows the process to run for a specific interval of time called a quantum.
If the process does not release the CPU voluntarily before the interval expires, the clock generates an interrupt, allowing the operating system to regain control. The OS sends this process back to the ready list and the next process gets the CPU.
If the running process initiates an I/O operation before its quantum expires, it releases the CPU and blocks itself.
If the process completes within its quantum, it is terminated.

States of the Process


New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event (such as I/O) to occur.
Ready: The process is waiting to be assigned to a processor. Ready processes are waiting for the processor to be allocated to them by the operating system so that they can run.
Terminated: The process has finished its execution or has been terminated by the operating system.

PCB (Process Control Block)

The operating system groups all the information it needs about a particular process into a data structure called a Process Control Block (PCB), also called a task control block. It serves as the storage for all information about a process. When a process is created, the operating system creates a corresponding PCB, and when the process terminates, its PCB is released. A PCB is implemented as a record containing many pieces of information associated with a specific process, including:

➢ Process number: Each process is identified by its process number, called process ID.

➢ Process state: Each process may be in any of these states: new, ready, running, waiting/blocked, and
terminated.

➢ Program counter: It indicates the address of the next instruction to be executed for this process.

➢ Memory management information: This includes the page table, memory limits, or segment table,
depending on the memory scheme used by the operating system.

➢ Registers: These include the accumulator, general-purpose registers, index registers, etc. Whenever the
processor switches from one process to another, the current status of the old process, including its
register contents and program counter, is saved in its PCB so that the process can continue correctly
afterwards.

➢ Accounting information: It includes actual CPU time used in executing a process.

➢ I/O status information: This information includes the list of I/O devices allocated to the process, a list
of open files, and so on.

➢ Processor scheduling details: This includes the priority of the process and pointers to the scheduling queues.

➢ A pointer to the process's parent (that is, the process that created this process)

➢ A pointer to the process's child (a process created by this process)
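
To make the PCB concrete, here is a minimal sketch of such a record as a Python data class. The field names and types are illustrative choices, not any real operating system's layout.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    pid: int                                          # process number (process ID)
    state: str = "new"                                # new, ready, running, waiting, terminated
    program_counter: int = 0                          # address of the next instruction
    registers: dict = field(default_factory=dict)     # saved register contents
    memory_info: dict = field(default_factory=dict)   # page table, memory limits, segment table
    cpu_time_used: float = 0.0                        # accounting information
    open_files: list = field(default_factory=list)    # I/O status information
    priority: int = 0                                 # processor scheduling details
    parent_pid: Optional[int] = None                  # pointer to the parent process
    child_pids: list = field(default_factory=list)    # pointers to child processes
```

On a context switch, the kernel would copy the CPU's register contents and program counter into the outgoing process's PCB and later reload them from the incoming process's PCB.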

Process Memory Space


When a program is loaded into memory, it becomes a process and is allocated space in main memory, known as the process memory space. This space can be divided into four sections: stack, heap, data, and code/text segments.

1. Stack segment: The process stack contains temporary data such as method/function parameters, return addresses, and local variables.
2. Heap segment: This is memory dynamically allocated to a process during its run time, for example memory reserved in C through the malloc() and calloc() functions (and released through free()), and in C++ through the new/delete operators.
3. Data segment: This section contains the global and static variables.
4. Code/text segment: This section contains the executable image of the process.

Schedulers

Schedulers are special system software that handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types:
1. Long-term scheduler (job scheduler)
2. Short-term scheduler (CPU scheduler)
3. Medium-term scheduler

1. Long-term scheduler (job scheduler): Selects processes from the job pool and loads them into memory for execution. It is generally present in batch operating systems. Its primary objective is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs.

2. Short-term scheduler (CPU scheduler): Selects a process from the processes in memory that are ready to execute and allocates the CPU to it. Short-term schedulers, also known as dispatchers, decide which process to execute next. Because they run far more frequently, they must be faster than long-term and medium-term schedulers.

3. Medium-term scheduler: Selects a process, removes it from main memory, and reintroduces it after some time so that it continues from where it left off. This scheme is called swapping: the process is swapped out and later swapped in. It is also known as the process-swapping scheduler.

Context Switch
A context switch is the mechanism of storing and restoring the state (context) of the CPU in the Process Control Block so that process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to another, the state of the currently running process is stored in its process control block. After this, the state of the process to run next is loaded from its own PCB and used to set the program counter, registers, etc. At that point, the second process can start executing.

CPU Scheduling/ Process Scheduling

A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling
algorithms.

There are two types of scheduling disciplines:

1. Pre-emptive: A scheduling discipline is pre-emptive if the CPU can be taken away from a process during its execution.

2. Non-pre-emptive: A scheduling discipline is non-pre-emptive if, once the CPU has been given to a process, it cannot be taken away from that process. In non-pre-emptive systems, short jobs are made to wait by longer jobs, but the treatment of all jobs is fair.

Scheduling criteria

There are several CPU scheduling algorithms. To select the algorithm best suited to a particular situation, we must know the various criteria by which different CPU scheduling algorithms can be compared.

1. CPU utilization: We should keep the CPU as busy as possible.

CPU utilization = (Processor busy time) / (Processor busy time + Processor idle time)

2. Throughput: The number of processes completed per unit time is called throughput. It should be kept as high as possible.

Throughput = (No. of processes completed) / (Time unit)



3. Turnaround time: The interval from the time of submission of a process to the time of its completion is called turnaround time.

Turnaround time = Waiting time + Burst time

or

Turnaround time = T1 (time of completion) - T2 (time of arrival/submission)

4. Waiting time: Waiting time is the sum of the periods a process spends in the ready queue.

Waiting time = Turnaround time - Burst time

5. Response time: The time from the submission of a request until the first response is produced is called response time.

Response time = t(first response) - t(submission of request)
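
As a quick worked example of these formulas (with hypothetical numbers), consider a process submitted at time 0 with a 24 ms CPU burst that completes at time 30:

```python
# All times in milliseconds (hypothetical values).
arrival, burst, completion = 0, 24, 30

turnaround = completion - arrival   # T1 - T2 = 30 ms
waiting = turnaround - burst        # 30 - 24 = 6 ms

print(turnaround, waiting)          # 30 6
```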

Scheduling algorithms
1. FCFS: The simplest CPU scheduling algorithm is first-come, first-served (FCFS) scheduling: processes are dispatched according to their arrival time in the ready queue. Once a process has the CPU, it runs to completion. FCFS is a non-pre-emptive discipline. It is good scheduling for some cases but not always, because a shorter job has to wait for the completion of longer jobs. It is rarely used on its own, but it is embedded within other schemes.

Burst time (CPU time): How long a process occupies the CPU to complete its task is called its CPU time or burst time.
Consider a set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process   Burst time
P1        24
P2        3
P3        3

Calculate the average waiting time and average turnaround time. If the processes arrive in the order P2, P3, P1, the average waiting time and turnaround time are minimized, as the sketch below confirms.
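
A minimal sketch (not production scheduler code) that computes these averages, assuming all processes arrive at time 0 as stated above:

```python
# FCFS: processes run to completion in arrival order (all arrivals at time 0).
def fcfs(bursts):
    clock = 0
    waiting, turnaround = [], []
    for burst in bursts:
        waiting.append(clock)        # time spent in the ready queue so far
        clock += burst               # the process runs to completion
        turnaround.append(clock)     # completion time (arrival is 0)
    n = len(bursts)
    return sum(waiting) / n, sum(turnaround) / n

print(fcfs([24, 3, 3]))   # order P1, P2, P3 -> (17.0, 27.0)
print(fcfs([3, 3, 24]))   # order P2, P3, P1 -> (3.0, 13.0)
```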

2. SJF: SJF is a non-pre-emptive scheduling algorithm. The process with the minimum run time to completion is given the CPU first, followed by the next shortest process. SJF reduces the average waiting time compared with FCFS. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.

Consider the following set of processes, all arriving at time 0:

Process   Burst time
P1        6
P2        8
P3        7
P4        3

SJF runs the processes in the order P4, P1, P3, P2, giving an average waiting time of 7 ms. Compare this with FCFS, which takes more time: 10.25 ms.
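
A minimal SJF sketch under the same assumption (all processes arrive at time 0): sort the bursts, then run them FCFS-style.

```python
# SJF with all arrivals at time 0: shortest burst first.
def sjf(bursts):
    clock, total_wait = 0, 0
    for burst in sorted(bursts):     # shortest job next
        total_wait += clock          # this process waited until now
        clock += burst
    return total_wait / len(bursts)

print(sjf([6, 8, 7, 3]))   # P1..P4 above -> 7.0 ms average waiting time
```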

3. SRTN/SRTF (Shortest Remaining Time Next/First): Shortest remaining time first is the pre-emptive version of SJF. If a new process arrives at the ready queue while a previous process is still executing, the CPU burst time of the newly arrived process may be shorter than what is left of the currently executing process. An SRTF algorithm will then pre-empt the currently executing process in favour of the new one.

Ex:
Process   Arrival time   Burst time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

Ans. Order: P1, P2, P4, P1, P3

The advantage is that an arriving small process will run immediately; the problem is that a longer job may wait even longer, and the scheduler must keep track of the remaining execution time of each process.

Exercise:

Process   Arrival time   Burst time
P1        0              15
P2        2              11
P3        5              19
P4        6              6
P5        7              2

The simulation sketch below reproduces the order of the worked example and can be used to check this exercise.
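
Below is a hedged simulation sketch of SRTF that advances one millisecond at a time. It assumes the CPU is never idle (true for both tables above).

```python
# SRTF: at each millisecond, run the arrived process with the least remaining time.
def srtf(procs):
    # procs: list of (name, arrival time, burst time); assumes no idle gaps.
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: at for name, at, _ in procs}
    clock, order, last = 0, [], None
    while any(remaining.values()):
        ready = [p for p in remaining if remaining[p] > 0 and arrival[p] <= clock]
        current = min(ready, key=lambda p: remaining[p])   # least remaining time
        if current != last:          # record each switch to a different process
            order.append(current)
            last = current
        remaining[current] -= 1      # run for one millisecond
        clock += 1
    return order

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]))
# ['P1', 'P2', 'P4', 'P1', 'P3']
```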

Round Robin

This algorithm is specially designed for time-sharing systems. It is similar to FCFS, but pre-emption is added to switch between processes. In this scheduling, a small unit of time called the time quantum or time slice is defined, generally ranging from 10 to 100 milliseconds. The CPU scheduler picks each process from the ready queue and allocates the CPU to it for an interval of up to one time quantum.
If the process completes within one time quantum, it terminates; otherwise, when the time slice expires, the timer interrupts and the process is sent back to the ready queue for its next turn.

Ex (time quantum = 4 milliseconds):

Process   Burst time
P1        24
P2        3
P3        3
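
A minimal round-robin sketch for this example, assuming all three processes arrive at time 0:

```python
from collections import deque

# Round robin: each process gets up to one time quantum per turn.
def round_robin(bursts, quantum=4):
    queue = deque(bursts.items())    # (name, remaining burst) in arrival order
    clock, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))   # back of the ready queue
        else:
            completion[name] = clock                # finished within its quantum
    return completion

print(round_robin({"P1": 24, "P2": 3, "P3": 3}))
# {'P2': 7, 'P3': 10, 'P1': 30}
```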

Priority scheduling

In this scheme, a priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. Priority can be indicated by some fixed range of numbers, such as 0 to 7 or 0 to 4095. Some systems use low numbers to represent low priority; others use low numbers for the highest priority.
This scheduling may be pre-emptive or non-pre-emptive. The major problem with priority scheduling is indefinite blocking, or starvation, and the solution is aging.

Process   Burst time   Priority

P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
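
A sketch for this table, assuming (as in the classic textbook version of this example) that all processes arrive at time 0, scheduling is non-pre-emptive, and a lower number means higher priority:

```python
# Non-preemptive priority scheduling; all arrivals at time 0.
def priority_schedule(procs):
    # procs: list of (name, burst time, priority); lower number = higher priority.
    order = sorted(procs, key=lambda p: p[2])
    clock, total_wait = 0, 0
    for _, burst, _ in order:
        total_wait += clock          # waiting time of this process
        clock += burst
    return [name for name, _, _ in order], total_wait / len(procs)

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
print(priority_schedule(procs))
# (['P2', 'P5', 'P1', 'P3', 'P4'], 8.2)   # average waiting time 8.2 ms
```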

Multilevel queue scheduling: A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. Processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm. For example, separate queues might be used for foreground and background processes: the foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm.

In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority
preemptive scheduling. For example, the foreground queue may have absolute priority over the background
queue.

Consider an example of a multilevel queue scheduling algorithm with five queues, listed below in order of
priority:

1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, could
run unless the queues for system processes, interactive processes, and interactive editing processes were all
empty. If an interactive editing process entered the ready queue while a batch process was running, the batch
process would be preempted.
Another possibility is to time-slice among the queues. Here, each queue gets a certain portion of the CPU time,
which it can then schedule among its various processes. For instance, in the foreground-background queue
example, the foreground queue can be given 80 percent of the CPU time for RR scheduling
among its processes, whereas the background queue receives 20 percent of the CPU to give to its processes on
an FCFS basis.

Multilevel feedback queue scheduling

The multilevel feedback queue scheduling algorithm allows a process to move between queues. It uses many ready queues and associates a different priority with each queue.

The algorithm chooses the process with the highest priority from the occupied queues and runs that process either pre-emptively or non-pre-emptively. If the process uses too much CPU time, it is moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. Note that this form of aging prevents starvation.

• A process entering the ready queue is placed in queue 0.
• If it does not finish within 8 milliseconds, it is moved to the tail of queue 1.
• If it still does not complete there (within a further 16 milliseconds in the classic version of this example), it is preempted and placed into queue 2.
• Processes in queue 2 run on an FCFS basis, and only when queue 0 and queue 1 are empty.
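
A hedged sketch of this three-queue scheme. The 8 ms and 16 ms quanta follow the description above; modelling queue 2's FCFS as an unbounded quantum and omitting aging (promotion back up) are simplifying assumptions.

```python
from collections import deque

QUANTA = [8, 16, float("inf")]   # queue 2 runs FCFS: never preempted by a quantum

def mlfq(bursts):
    queues = [deque(bursts.items()), deque(), deque()]       # new jobs enter queue 0
    clock, completion = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        run = min(QUANTA[level], remaining)
        clock += run
        if remaining > run:                    # used its full quantum: demote
            queues[min(level + 1, 2)].append((name, remaining - run))
        else:
            completion[name] = clock
    return completion

print(mlfq({"A": 30, "B": 6}))   # {'B': 14, 'A': 36}: A is demoted twice
```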

Inter-process Communication (IPC)
Processes executing concurrently in the operating system may be either independent processes or cooperating
processes. A process is independent if it cannot affect or be affected by the other processes executing in the
system. Any process that does not share data with any other process is independent.

A process is cooperating if it can affect or be affected by the other processes executing in the system. Any
process that shares data with other processes is a cooperating process.

There are several reasons for providing an environment that allows process cooperation:

1. Information sharing: Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment that allows concurrent access to such information.
2. Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which will execute in parallel with the others in a multiprocessor environment.
3. Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
4. Convenience: Even an individual user may work on many tasks at the same time. For instance, a user may be editing, printing, and compiling in parallel.
Cooperating processes require an inter-process communication (IPC) mechanism that will allow them
to exchange data and information.
There are two fundamental models of inter-process communication:
(1) Shared memory
(2) Message passing.

1. Shared memory: Inter-process communication using shared memory requires the communicating processes to establish a region of shared memory. Typically, a shared-memory region resides in the address space of the process creating the shared-memory segment; other processes that wish to communicate using this shared-memory segment must attach it to their address space. Normally, the operating system tries to prevent one process from accessing another process's memory. Shared memory requires that two or more processes agree to remove this restriction. They can then exchange information by reading and writing data in the shared areas. The form of the data and the location are determined by these processes and are not under the operating system's control. The processes are also responsible for ensuring that they are not writing to the same location simultaneously. Consider the producer-consumer problem, a common paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process. For example, a compiler may produce assembly code, which is consumed by an assembler; the assembler, in turn, may produce object modules, which are consumed by the loader. A small sketch follows.
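
As an illustration, here is a minimal sketch of shared-memory IPC using Python's multiprocessing.shared_memory module (Python 3.8+). A child process attaches the segment by name and writes into it, and the parent reads the same bytes back:

```python
from multiprocessing import Process, shared_memory

def producer(segment_name):
    shm = shared_memory.SharedMemory(name=segment_name)  # attach the existing segment
    shm.buf[:5] = b"hello"                               # write into the shared area
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=producer, args=(shm.name,))
    p.start()
    p.join()
    print(bytes(shm.buf[:5]))    # b'hello' -- read what the producer wrote
    shm.close()
    shm.unlink()                 # release the segment
```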

2. Message passing: Message passing provides a mechanism that allows processes to communicate and synchronize their actions without sharing the same address space. It is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. For example, a chat program used on the World Wide Web could be designed so that chat participants communicate with one another by exchanging messages. A message-passing facility provides at least two operations: send(message) and receive(message). A minimal sketch follows.
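
A minimal message-passing sketch using Python's multiprocessing.Queue: the two processes share no address space and communicate only through the queue's put (send) and get (receive) operations.

```python
from multiprocessing import Process, Queue

def child(q):
    q.put("hello from the child")    # send(message)

if __name__ == "__main__":
    q = Queue()
    p = Process(target=child, args=(q,))
    p.start()
    print(q.get())                   # receive(message): blocks until a message arrives
    p.join()
```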

Multithreading

A process can be divided into a number of lightweight processes called threads. This division is made so that each part of the process (each thread) executes independently of the other parts: if one part of the process has to wait for some event to occur (such as an I/O event), the CPU does not wait but instead starts executing other parts. Because of this, the CPU never remains idle and the performance of the system increases. This technique is popularly known as multithreading. Applications built using the multithreading technique are called multithreaded applications. In a multithreaded application, different parts of the application are executed by different threads, and even if one of the threads is blocked during execution, the entire application does not halt. For example, a multithreaded web browser could still allow user interaction in one thread while an image is being loaded in another thread. On a multiprocessor architecture, different threads can be allocated to different processors and executed in parallel.
Another example: when a user opens a file in a word processor and types text (one thread), the spelling of the text is checked automatically (another thread). A small threading sketch follows.
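
A minimal sketch of this idea using Python's threading module. The function names are illustrative: one thread blocks on simulated I/O while the other keeps responding.

```python
import threading
import time

def load_image():
    time.sleep(1)                    # stand-in for a blocking I/O wait
    print("image loaded")

def handle_user():
    for _ in range(3):
        print("still responsive to the user")
        time.sleep(0.3)

loader = threading.Thread(target=load_image)
ui = threading.Thread(target=handle_user)
loader.start()
ui.start()                           # both threads run within one process
loader.join()
ui.join()
```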

Advantages of multithreading
1. Responsiveness:- Multithreading an interactive application may allow a program to continue running
even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to
the user.
2. Resource sharing:- By default, threads share the memory and the resources of the process to which
they belong. The benefit of sharing code and data is that it allows an application to have several
different threads of activity within the same address space.
3. Economy:- Allocating memory and resources for process creation is costly. Because threads share
resources of the process to which they belong, it is more economical to create and context-switch
threads.
4. Utilization of multiprocessor architectures:- The benefits of multithreading can be greatly increased in a
multiprocessor architecture, where threads may be running in parallel on different processors. A single
threaded process can only run on one CPU, no matter how many are available. Multithreading on a
multi-CPU machine increases concurrency.

Difference between Process and Thread

Process                                                  Thread
1. Processes cannot share the same memory.               1. Threads can share the same memory.
2. A process takes more time to create.                  2. A thread takes less time to create.
3. Process switching needs interaction with the OS.      3. Thread switching does not need to interact with the OS.
4. Each process operates independently of the others.    4. One thread can read, write, or change another thread's data.

Multithreading Models
The support for threads may be provided either at the user level, for user threads, or by the kernel, for kernel
threads. User threads are supported above the kernel and are managed without kernel support, whereas kernel
threads are supported and managed directly by the operating system. Virtually all modern operating systems—
including Windows, Linux, Mac OS X, Solaris, and UNIX—support kernel threads. Ultimately, there must
exist a relationship between user threads and kernel threads.

There are three common ways of establishing this relationship:


1. Many-to-One Model:- The many-to-one model maps many user-level threads to one kernel thread.
Thread management is done by the thread library in user space, so it is efficient; but the entire process
will block if a thread makes a blocking system call. Also, because only one thread can access the kernel
at a time, multiple threads are unable to run in parallel on multiprocessors.

2. One-to-One Model:- The one-to-one model maps each user thread to a kernel thread. It provides more
concurrency than the many-to-one model by allowing another thread to run when a thread makes a
blocking system call; it also allows multiple threads to run in parallel on multiprocessors. The only
drawback to this model is that creating a user thread requires creating the corresponding kernel thread.

Because the overhead of creating kernel threads can burden the performance of an application, most
implementations of this model restrict the number of threads supported by the system. Linux, along
with the family of Windows operating systems—including Windows 95, 98, NT, 2000, and XP—
implement the one-to-one model.

3. Many-to-Many Model:- The many-to-many model multiplexes many user-level threads to a smaller or
equal number of kernel threads. The number of kernel threads may be specific to either a particular
application or a particular machine. This model combines the advantages of the two models above.
