OS Unit1

The document provides a comprehensive overview of operating systems, including their definition, history, types, and functions. It discusses various aspects of operating systems such as process management, CPU scheduling, and resource allocation, highlighting the importance of OS in managing hardware and software resources. Additionally, it outlines the advantages and disadvantages of different types of operating systems, including batch, multiprogramming, distributed, network, time-sharing, and real-time OS.

OPERATING SYSTEM

Unit-1: Introduction to Operating System: Definition, History and Examples of Operating
System; Computer System Organization; Types of Operating Systems; Functions of Operating
System; System Calls; Operating System Structure.

Process Management: Process Concept - Process Definition, Process State, Process Control
Block, Threads; Process Scheduling - Multiprogramming, Scheduling Queues, CPU Scheduling,
Context Switch; Operations on Processes - Creation and Termination of Processes; Interprocess
Communication (IPC) - Definition and Need for Interprocess Communication; IPC
Implementation Methods - Shared Memory and Message Passing.

CPU Scheduling: Basic Concepts; Scheduling Criteria; Scheduling Algorithms; Multiprocessor
Scheduling; Thread Scheduling; Real-Time CPU Scheduling.

1.1 DEFINITION OF OS

• An operating system is a program that controls the execution of application programs and
acts as an interface between the user of a computer and the computer hardware. It is the
one program running at all times on the computer (usually called the kernel), with all else
being application programs.
• An operating system is concerned with the allocation of resources and services, such as
memory, processors, devices and information. The operating system correspondingly includes
programs to manage these resources, such as a traffic controller, a scheduler, a memory
management module, I/O programs, and a file system.

1.2 Computer System Organization

A computer system can be divided into four components:

1. Hardware – provides basic computing resources: CPU, memory, I/O devices.
2. Operating system – controls and coordinates the use of hardware among various
applications and users.
3. Application programs – define the ways in which the system resources are used to solve the
computing problems of the users: word processors, compilers, web browsers, database
systems, video games.
4. Users – people, machines, other computers.

Every computer must have an operating system to run other programs. The operating system
coordinates the use of the hardware among the various system programs and application
programs for various users. It simply provides an environment within which other programs
can do useful work.

Saloni K, Sapient College


In other words, the operating system is a set of special programs that run on a computer
system that allows it to work properly. It performs basic tasks such as recognizing input from
the keyboard, keeping track of files and directories on the disk, sending output to the display
screen, and controlling peripheral devices.

An OS is designed to serve these basic purposes:

1. It controls the allocation and use of the computing system's resources among the
various users and tasks.
2. It provides an interface between the computer hardware and the programmer that
simplifies and makes feasible the coding and debugging of application programs.
3. It provides the facilities to create and modify programs and data files using an editor.
4. It provides access to the compiler for translating the user program from high-level
language to machine language.
5. It provides a loader program to move the compiled program code to the
computer's memory for execution.
6. It provides routines that handle the details of I/O programming.



1.3 History of Operating Systems
The operating system has evolved over the years. Its history is commonly divided into
four generations.

First Generation (1945-1955): In this generation there were no operating systems, so
instructions were given directly to the computer hardware. Every program had to include all
the code needed to communicate with the connected hardware and the system.

Second Generation (1955-1965): GMOS (the General Motors operating system), developed for
IBM computers, was the first operating system to appear, in the 1950s. Systems of this
generation used batch processing: similar jobs were collected into groups by an operator and
submitted to the machine on punch cards, and the jobs in a batch were then executed one
after another.

Third Generation (1965-1980): The concept of multiprogramming was introduced, in which
multiple tasks could be kept in memory on a single computer so that the CPU stayed busy
while one task waited for I/O. The first version of UNIX was developed at Bell Labs around
1969. With the DEC PDP-1 in 1961, the phenomenal growth of minicomputers began.

Fourth Generation (1980-now): The evolution of personal computers defines the fourth
generation. Microsoft was founded in 1975, and Bill Gates took personal computers to the
next level by launching MS-DOS in 1981; because of its cryptic commands, however, it was
difficult for ordinary users to get hold of. In this generation, users were also introduced to the
Graphical User Interface (GUI). Today, Windows is among the most popular operating systems
and has evolved through Windows 95, Windows 98, Windows XP and Windows 10.



1.4 Types of Operating Systems

There are several different types of operating systems. In this section, we will discuss
the advantages and disadvantages of each type.

• Batch OS
• Multiprogramming OS
• Distributed OS
• Network OS
• Time-Sharing / Multitasking OS
• Real-Time OS

1. Batch OS
Batch OS was the first operating system, used for second-generation computers. This OS does
not interact with the computer directly. Instead, an operator takes up similar jobs, groups
them together into a batch, and these batches are then executed one by one on a first-come,
first-served basis.
Advantages of Batch OS
• The execution time for a batch of similar jobs is easy to estimate.
• Multiple users can share batch systems.
• Managing large amounts of work becomes easy in batch systems.
• The idle time for a single batch is very low.
Disadvantages of Batch OS
• It is hard to debug batch systems.
• If a job fails, the other jobs have to wait for an unknown time until the
issue is resolved.
• Batch systems are sometimes costly.
Examples of Batch OS: payroll systems, bank statement generation, data entry, etc.

2. Multiprogramming OS: An operating system that is capable of running multiple programs on a
single processor is known as a multiprogramming operating system. If a program has to wait for an
I/O transfer in a multiprogramming operating system, other programs utilize the CPU and other
resources. One of the major aims of multiprogramming is to manage the various resources of the
entire system. UNIX, Linux, and Solaris are common examples.

Advantages

• Faster response time: Multiprogramming systems have less waiting time and faster
response times.
• Efficient resource use: Multiprogramming systems allow multiple processes to run at the
same time, which makes better use of system resources.
• Improved throughput: By overlapping one program's CPU bursts with other programs' I/O
waits, a multiprogramming system keeps several applications progressing at once instead of
leaving the CPU idle.

Disadvantages

• Increased complexity: Multiprogramming systems can be more complex.


• Risk of deadlocks: Multiprogramming systems can have deadlocks.
• Overhead: Multiprogramming systems can have increased overhead.



3. Distributed OS: In a distributed OS, various computers are connected through a single
communication channel. These independent computers have their own memory units and CPUs and are
known as loosely coupled systems. The system's processors can be of different sizes and can perform
different functions. The major benefit of such an operating system is that a user can access files
that are not present on his own system but on another connected system. In addition, remote access is
available to the systems connected to this network.

Advantages of Distributed OS
• Failure of one system will not affect the other systems because all the computers are
independent of each other.
• The load on the host system is reduced.
• The size of the network is easily scalable, as many computers can be added to the network.
• As the workload and resources are shared, calculations are performed at
higher speed.

Disadvantages of Distributed OS
• The setup cost is high.
• Software used for such systems is highly complex.
• Failure of the main network will lead to the failure of the whole system.
Examples of Distributed OS: LOCUS, etc.
4. Time-Sharing / Multitasking OS

The multitasking OS is also known as a time-sharing operating system, because each task is given
some time so that all the tasks work efficiently. This system provides access to a large
number of users, and each user gets a share of CPU time as if they were the only user of the
system. The tasks performed are given by a single user or by different users. The time allotted
to execute one task is called a quantum; as soon as the time to execute one task is up, the
system switches over to another task.

Advantages of Multitasking OS
• Each task gets equal time for execution.
• The idle time of the CPU is the lowest.
• There are very few chances of duplication of software.
Disadvantages of Multitasking OS
• Processes with higher priority cannot be executed first, as equal
priority is given to each process or task.
• Each user's data needs to be protected from unauthorized access.
• Sometimes there are data communication problems.
Examples of Multitasking OS: UNIX, Linux, etc.
5. Network OS

Network operating systems are the systems that run on a server and manage all the
networking functions. They allow sharing of various files, applications, printers, security, and
other networking functions over a small network of computers like LAN or any other
private network. In the network OS, all the users are aware of the configurations of every
other user within the network, which is why network operating systems are also known as
tightly coupled systems.



Advantages of Network OS
• New technologies and hardware can easily be used to upgrade the systems.
• Security of the system is managed centrally on the servers.
• Servers can be accessed remotely from different locations and systems.
• The centralized servers are stable.
Disadvantages of Network OS
• Server costs are high.
• Regular updates and maintenance are required.
• Users are dependent on the central location for most operations.
Examples of Network OS: Microsoft Windows Server 2008, Linux, etc.

6. Real-Time OS
Real-Time operating systems serve real-time systems. These operating systems are useful when
many events occur in a short time or within certain deadlines, such as real-time simulations.
Types of the real-time OS are:

Hard real-time OS: The hard real-time OS is the operating system for mainly the applications in
which the slightest delay is also unacceptable. The time constraints of such applications are very strict.
Such systems are built for life-saving equipment like parachutes and airbags, which immediately
need to be in action if an accident happens.

Soft real-time OS: The soft real-time OS is the operating system for applications where time
constraint is not very strict. In a soft real-time system, an important task is prioritized over less
important tasks, and this priority remains active until the completion of the task.
Furthermore, a time limit is always set for a specific job, enabling short time delays for future
tasks, which is acceptable. For Example, virtual reality, reservation systems, etc.

Advantages of Real-Time OS
• It provides more output from all the resources, as there is maximum
utilization of the system.
• It provides the best management of memory allocation.
• These systems are designed to be nearly error-free.
• These operating systems focus on currently running applications rather than
those in the queue.
• Shifting from one task to another takes very little time.
Disadvantages of Real-Time OS
• System resources are extremely expensive.
• The algorithms used are very complex.
• Only a limited number of tasks can run at a single time.
• In such systems thread priorities cannot be changed freely, as these systems
cannot switch tasks easily.
Examples of Real-Time OS: medical imaging systems, robots, etc.



1.5 Functions of an Operating System

Operating System is used as a communication channel between the Computer hardware and the user. It
works as an intermediate between System Hardware and End-User. Operating System handles the following
responsibilities:

• Resource allocation and management: An operating system manages hardware resources


such as CPU, memory, and disk space, and assigns these resources to running applications
based on their priority.
• Memory management: An operating system manages memory usage, including virtual
memory and memory allocation. It also ensures that memory is shared efficiently among
running programs.
• Device management: An operating system manages input and output devices such as
printers, scanners, and keyboards. It ensures that these devices are compatible with the
system and can be used by applications.
• User interface management: An operating system provides a graphical user interface (GUI)
that allows users to interact with the computer. It manages windows, menus, and other
graphical elements.
• Security management: An operating system manages security features such as user
authentication, firewalls, and antivirus software. It also ensures that applications and data
are protected from unauthorized access.

1.6 OPERATING SYSTEM SERVICES:

The operating system provides the programming environment in which a programmer works on a
computer system. The user program requests various resources through the operating system. The
operating system gives several services to utility programmers and users. Applications access these
services through application programming interfaces or system calls. By invoking those interfaces, the
application can request a service from the operating system, pass parameters, and acquire the operation
outcomes.

From a user's perspective, an operating system (OS) provides services like file management, program
execution, managing input/output devices, providing a user interface, and ensuring security, while from a
system perspective, an OS manages resources like memory, CPU, and peripherals by allocating them to
different processes, handling interrupts, and performing scheduling to optimize system performance;
essentially acting as a mediator between the hardware and the user applications.



Key services provided by an OS from different viewpoints:

User Perspective:
File Management: Creating, deleting, renaming, and organizing files and folders.
Program Execution: Launching and running applications.
Input/Output Control: Managing interactions with devices like keyboard, mouse, printer, and monitor.
User Interface: Providing a visual interface for interacting with the system
Security: Protecting against unauthorized access and malicious software.

System Perspective:
Memory Management: Allocating and deallocating memory to running processes.
CPU Scheduling: Deciding which process gets to use the CPU and when.
Process Management: Creating, managing, and terminating processes.
Device Management: Controlling and coordinating various hardware devices.
Error Handling: Detecting and responding to system errors.
Resource Allocation: Distributing system resources like CPU, memory, and disk space efficiently among
processes.
Some of the services are explained in detail here as

Program execution: To execute a program, several tasks need to be performed. Both the instructions
and data must be loaded into main memory. In addition, input-output devices and files should be
initialized, and other resources must be prepared. The operating system handles all of these tasks;
the user no longer has to worry about memory allocation, multitasking, or anything similar.

Control of input/output devices: There are numerous types of I/O devices in a computer system,
and each I/O device requires its own specific set of instructions to operate. The operating system
hides these details by presenting a uniform interface, so that programmers can
access such devices easily.

Program Creation :
The operating system offers structures and tools, including editors and debuggers, to help the
programmer create, modify, and debug programs.

Error Detection and Response: An error in one component may cause malfunctioning of the entire
system. Errors include hardware and software errors such as device failure, memory errors, division
by zero, and attempts to access forbidden memory locations. To handle them, the operating system
continuously monitors the system to detect errors and takes suitable action with the least impact
on running applications.
While working with computers, errors may occur quite often. Errors may occur in the:
• Input/ Output devices: For example, connection failure in the network, lack of paper in the printer,
etc.
• User program: For example: attempt to access illegal memory locations, divide by zero, use too much
CPU time, etc.
• Memory hardware: For example, Memory error, the memory becomes full, etc.
To handle these errors and other types of possible errors, the operating system takes appropriate action and
generates messages to ensure correct and consistent computing.

Accounting: An operating system collects usage statistics for various resources and tracks
performance parameters such as response time in order to improve performance. These records are
useful for future upgrades and for tuning the system to enhance overall performance.

Security and Protection : An operating system provides security services by controlling access to system
resources, implementing user authentication mechanisms like passwords, managing memory access,
enforcing access control policies, and protecting against unauthorized access to data, essentially
safeguarding the system from various threats and vulnerabilities by limiting actions to authorized users .

File management: Computers keep data and information on secondary storage devices like magnetic
tapes, magnetic disks, optical disks, etc. Each storage medium has its own characteristics, such as
speed, capacity, data transfer rate, and data access method. For file management, the operating
system must know the types of different files and the characteristics of different storage devices.
It also has to provide sharing and protection mechanisms for files.

Communication : The operating system manages the exchange of data and programs among different
computers connected over a network. This communication is accomplished using message passing and
shared memory.

1.7 Components of Operating System

Now to perform the functions mentioned above, the operating system has two
components:

• Shell
• Kernel

Shell provides a way to communicate with the OS by either taking the input from the user or the
shell script. A shell script is a sequence of system commands that are stored in a file.
Shell handles user interactions. It is the outermost layer of the OS and manages the interaction between user
and operating system by:
• Prompting the user to give input
• Interpreting the input for the operating system
• Handling the output from the operating system.

The kernel is the core component of an operating system. All other
components of the OS rely on the kernel to supply them with essential services. It serves as the
primary interface between the OS and the hardware and aids in the control of devices, networking,
file systems, and process and memory management.

Functions of kernel
• The kernel is the core component of an operating system and acts as the interface between
applications and the data processing done at the hardware level.

• When an OS is loaded into memory, the kernel is loaded first and remains in memory until the
OS is shut down. After that, the kernel provides and manages the computer resources and
allows other programs to run and use these resources.

• The kernel also sets up the memory address space for applications, loads the files with
application code into memory, and sets up the execution stack for programs.



• The kernel is responsible for performing the following tasks:
Input-Output management.
Memory Management
Process Management for application execution.
Device Management
System calls control

1.8 SYSTEM CALLS

The interface between a process and an operating system is provided by system calls. In
general, system calls are available as assembly language instructions. They are also included in
the manuals used by the assembly level programmers. System calls are usually made when a
process in user mode requires access to a resource. Then it requests the kernel to provide the
resource via a system call.

A figure representing the execution of a system call is given as follows:

As can be seen from this diagram, the processes execute normally in the user mode until a
system call interrupts this. Then the system call is executed on a priority basis in the kernel
mode. After the execution of the system call, the control returns to the user mode and
execution of user processes can be resumed.

In general, system calls are required in the following situations:

• Creation or deletion of files. Reading from and writing to files also require
system calls.
• Creation and management of new processes.
• Network connections also require system calls. This includes sending and receiving packets.
• Access to hardware devices such as a printer or scanner requires a system call.

Types of System Calls : There are mainly five types of system calls. These are explained in detail as follows:

1. Process Control: These system calls deal with processes such as process creation, process
termination etc.
2. File Management: These system calls are responsible for file manipulation such as creating a file,
reading a file, writing into a file etc.
3. Device Management: These system calls are responsible for device manipulation such as reading
from device buffers, writing into device buffers etc.
4. Information Maintenance: These system calls handle information and its transfer between the
operating system and the user program.
5. Communication: These system calls are useful for interprocess communication. They also deal with
creating and deleting a communication connection.

2. PROCESS MANAGEMENT

Process : A process is defined as an entity which represents the basic unit of work to be implemented in the
system. A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.

To put it in simple terms, we write our computer programs in a text file and when we execute this program,
it becomes a process which performs all the tasks mentioned in the program. A process is an 'active' entity as
opposed to the program which is a 'passive' entity.

Components of a Process in OS: When a program is loaded into the memory and it becomes a process, it can
be divided into four sections : stack, heap, text and data.

The components of a process are:


Program code/Text: The instructions that the process will execute.
Data: The data that the process will use during its execution.
Stack: A data structure that is used to store temporary data, such as function parameters and return
addresses.
Heap: A data structure that is used to store dynamically allocated memory.



Process vs Program

• A process is an instance of a computer program that is being executed. A program is a
collection of instructions that performs a specific task when executed by the computer.
• A process has a shorter lifetime. A program has a longer lifetime.
• A process requires resources such as memory, CPU and input-output devices. A program is
stored on the hard disk and does not require any resources.
• A process is a dynamic instance of code and data. A program has static code and static data.
• Basically, a process is the running instance of the code, while the program is the
executable code.

2.1 Process Life Cycle

When a process executes, it passes through different states. A process transitions between these states
as it progresses through its lifecycle. By transitioning through these states, the operating system ensures
that processes are executed smoothly, resources are allocated effectively, and the overall performance
of the computer is optimized.
In general, a process can have one of the following five states at a time.

New: When a process is first created and has not yet been scheduled to run. It is the program that is present
in secondary memory that will be picked up by the OS to create the process.

Ready : A process enters the ‘ready’ state when it is loaded into the main memory and is waiting to be
assigned to a processor for execution. It is ready to run but is waiting for CPU time.
Running: When a process is currently being executed by the CPU. The process is chosen from the ready
queue by the OS for execution and the instructions within the process are executed by any one of the
available processors.
Waiting (or Blocked): When a process is waiting for an external event to occur, like input from a device,
before it can continue execution.



Terminated (or Completed): A process in this state has finished its execution or has been stopped by the
user for some reason. At this point, it is released by the operating system and removed from memory.
** Suspended: In some cases an additional state, "Suspended", is used to temporarily remove
a process from active memory to free up resources.

2.2 Process Control Block (PCB)

To identify processes, the operating system assigns a process identification number (PID) to each
process. As the operating system supports multiprogramming, it needs to keep track of all the
processes. For this task, the process control block (PCB) is used to track each process's execution
status. Each PCB contains information about the process state, program counter, stack pointer,
status of opened files, scheduling parameters, etc.
All this information is required and must be saved when the process is switched from one state to another.
When the process makes a transition from one state to another, the operating system must update
information in the process’s PCB.
The Process Control Block is a data structure that contains information related to a process; it is
also known as a task control block. There is a Process Control Block for each process,
enclosing all the information about that process. The Process Table is an array of PCBs, which
logically contains a PCB for each of the current processes in the system.

Structure of the Process Control Block : PCB keeps track of many important pieces of information needed to
manage processes efficiently. The diagram helps explain some of these key data items.

• Pointer: It is a stack pointer that is required to be saved when the process is switched from one state
to another to retain the current position of the process.

• Process state: It stores the respective state of the process.

• Process number: Every process is assigned a unique id known as process ID or PID which stores the
process identifier.

• Program counter: Program Counter stores the counter, which contains the address of the next
instruction that is to be executed for the process.
• Registers: When a process is running and its time slice expires, the current values of the
process-specific CPU registers are stored in the PCB and the process is swapped out. When
the process is scheduled to run again, the register values are read from the PCB and
written back to the CPU registers. This is the main purpose of the register fields in the PCB.
• Memory limits: This field contains the information about memory management system used by the
operating system. This may include page tables, segment tables, etc.

• List of Open files: This information includes the list of files opened for a process.

Functions of the Process Control Block

The PCB helps the OS manage process execution and resource allocation:


State Management: The PCB consistently tracks and updates the current state of a process (e.g.,
running, ready, or blocked).

Resource Allocation: Using details stored in the PCB, the system efficiently assigns and monitors
resources such as memory and I/O devices for specific processes.

Execution Oversight: The PCB houses the program counter and CPU registers, ensuring processes
execute correctly by pointing to the next instruction and preserving intermediate data.

Protection and Security: By tracking allocated resources and memory boundaries in the PCB, the OS
prevents processes from interfering with each other or accessing unauthorized resources.

Context Information: During context switches, the PCB stores the current context of a process,
ensuring seamless transitions and allowing processes to resume where they left off.

2.3 Process Scheduling

Definition : The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process on the
basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory
at a time, and the loaded processes share the CPU using time-sharing.

Categories of Scheduling

There are two categories of scheduling:

1. Non-preemptive: Here the CPU cannot be taken from a process until the process completes
execution. Switching occurs only when the running process terminates or moves
to a waiting state.
2. Preemptive: Here the OS allocates the CPU to a process for a fixed amount of time. During
execution, the process may be switched from the running state to the ready state, or from the
waiting state to the ready state. This switching occurs because the CPU may give priority to
other processes and replace the currently running process with a higher-priority one.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS
maintains a separate queue for each of the process states and PCBs of all processes in the same
execution state are placed in the same queue.

Saloni K, Sapient College


When the state of a process is changed, its PCB is unlinked from its current queue and moved to
its new state queue.

The Operating System maintains the following important process scheduling queues:

• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.
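A minimal, hypothetical model of these queues (PCBs as plain dicts, queue names assumed for illustration) shows how a state change unlinks a PCB from one queue and links it onto another:

```python
from collections import deque

# Hypothetical model of OS scheduling queues: PCBs are plain dicts, and a
# state change moves a PCB from its current queue to its new state queue.
ready_queue = deque()     # processes in main memory, ready to execute
device_queue = deque()    # processes blocked on an unavailable I/O device

def admit(pcb):
    """A new process is always put in the ready queue."""
    pcb["state"] = "ready"
    ready_queue.append(pcb)

def block_for_io(pcb):
    """Unlink the PCB from the ready queue and move it to the device queue."""
    ready_queue.remove(pcb)
    pcb["state"] = "waiting"
    device_queue.append(pcb)

p1 = {"pid": 1, "state": "new"}
admit(p1)
block_for_io(p1)
print(p1["state"], len(ready_queue), len(device_queue))  # waiting 0 1
```

A real kernel uses linked lists of PCB structures rather than Python deques, but the unlink/relink pattern is the same.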

Schedulers: Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types:

• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler

Long Term Scheduler


It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the
system for processing. It selects processes from the queue and loads them into memory for execution,
where they become candidates for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. When a process changes the state from
new to ready, then long-term scheduler is used.

Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with
a chosen set of criteria. It carries out the transition of a process from the ready state to the running state:
the CPU scheduler selects a process from among the processes that are ready to execute and allocates the
CPU to it. Short-term schedulers, also known as dispatchers, decide which process to execute next.
Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler


Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the
degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out
processes. A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from memory and
make space for other processes, the suspended process is moved to secondary storage. This is
called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to
improve the process mix.
CONTEXT SWITCHING

Context switching is the mechanism of storing and restoring the state or context of a CPU in the Process
Control Block so that process execution can be resumed from the same point at a later time.
Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to execute another, the state
from the current running process is stored into the process control block. After this, the state for
the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that
point, the second process can start executing.
In the following example, process P1 is initially running on the CPU, executing its task.
At the same time, another process, P2, is in its ready state. If an interrupt or error
occurs, or if the process needs I/O, P1 switches from the running state to the waiting
state.

Before P1's state changes, context switching saves the context of P1 (its registers
along with the program counter) into PCB1. It then loads the saved state of P2 from
PCB2, and P2 moves from the ready state to the running state.

Context switches are computationally intensive, since register and memory state must be
saved and restored. To reduce context-switching time, some hardware systems
employ two or more sets of processor registers. When a process is switched out, the
following information is stored for later use:

• Program Counter
• Scheduling information
• Base and limit register value
• Currently used register
• Changed State
• I/O State information and
• Accounting information
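The save-and-restore cycle above can be sketched as a toy model (the `cpu` and PCB field names are assumed for illustration, not a real kernel API):

```python
# Toy context-switch sketch: the running process's program counter and
# registers are saved into its PCB, and the next process's saved values are
# loaded into the CPU so it resumes exactly where it left off.
cpu = {"pc": 0, "regs": [0, 0]}

def context_switch(old_pcb, new_pcb):
    # Save the state of the currently running process into its PCB.
    old_pcb["pc"], old_pcb["regs"] = cpu["pc"], cpu["regs"][:]
    # Restore the next process's saved state from its own PCB.
    cpu["pc"], cpu["regs"] = new_pcb["pc"], new_pcb["regs"][:]

pcb1 = {"pc": 0, "regs": [0, 0]}      # P1, currently running
pcb2 = {"pc": 500, "regs": [7, 9]}    # P2, ready, with previously saved context
cpu["pc"], cpu["regs"] = 120, [3, 4]  # P1 has executed up to PC 120
context_switch(pcb1, pcb2)            # scheduler switches from P1 to P2
print(cpu["pc"], pcb1["pc"])          # 500 120
```

After the switch, PCB1 holds P1's full context, so a later switch back to P1 would resume it at PC 120 with registers [3, 4].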



Operations on Process
A process is a program in execution that passes through a number of states in its lifetime. In each of these
states, a process undergoes certain operations that enable it to execute to completion.
Some of these operations are creation, termination, scheduling, and communication.

Process Creation: A parent process creates child processes, which, in turn, create other
processes, forming a tree of processes. A process is generally identified and managed via a
process identifier (pid). Execution can proceed in two ways: the parent and children execute
concurrently, or the parent waits until its children terminate. Resource sharing can be
arranged in several ways:
• Parent and children share all resources
• Children share a subset of the parent's resources
• Parent and child share no resources
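The "parent waits until children terminate" style can be sketched with Python's `multiprocessing` module (an illustration of the concept, not how the OS itself is implemented):

```python
import multiprocessing as mp
import os

# Sketch of process creation: the parent creates a child (each process has
# its own pid) and waits until the child terminates before continuing.
def child_task():
    print("child pid:", os.getpid())

if __name__ == "__main__":
    child = mp.Process(target=child_task)
    child.start()                              # parent creates the child
    child.join()                               # parent waits for termination
    print("child exit code:", child.exitcode)  # 0 on normal termination
```

On Unix systems the underlying mechanism is the `fork()` system call, which `multiprocessing` wraps.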

Process Termination: A process terminates when it executes its last statement, and the
operating system then deletes it and deallocates its resources. A parent may terminate the
execution of a child process (abort) when:
• The child has exceeded its allocated resources
• The task assigned to the child is no longer required
• The parent itself is exiting; some operating systems do not allow a child to
continue if its parent terminates.

2.4 Inter Process Communication (IPC)

Definition : "Inter-process communication is used for exchanging useful information between


numerous threads in one or more processes (or programs)."

In general, Inter Process Communication is a type of mechanism usually provided by the


operating system (or OS). The main aim or goal of this mechanism is to provide communication
in between several processes. In short, intercommunication allows a process letting another
process know that some event has occurred.

A process can be of two types:

• Independent process.
• Co-operating process.

An independent process is not affected by the execution of other processes, while a co-
operating process can be affected by other executing processes. One might think that
processes running independently will execute most efficiently, but in reality there are
many situations where co-operation can be exploited to increase computational
speed, convenience, and modularity. Inter-process communication (IPC) is a mechanism
that allows processes to communicate with each other and synchronize their actions. The
communication between these processes can be seen as a method of co-operation between
them. Processes can communicate with each other in one of two ways:



1. Shared Memory
2. Message passing

1. Shared Memory Model:


In this IPC model, a shared memory region is established which is used by the processes for
data communication. This memory region is present in the address space of the process which
creates the shared memory segment. The processes that want to communicate with this
process should attach this memory segment into their address space.
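A minimal sketch of the shared-memory model using Python's `multiprocessing` (here the "segment" is a single shared integer; a real application would use a larger region):

```python
import multiprocessing as mp

# Shared-memory IPC sketch: the parent creates a shared segment and the
# child attaches to it and writes a value the parent can read directly.
def writer(shared):
    shared.value = 42          # child writes into the shared region

if __name__ == "__main__":
    shared = mp.Value("i", 0)  # shared memory holding one C int
    p = mp.Process(target=writer, args=(shared,))
    p.start()
    p.join()
    print(shared.value)        # 42 -- the child's write is visible here
```

Because both processes touch the same memory, access must normally be synchronized (e.g. with a lock) when writes can overlap.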

2. Message Passing Model:


In this model, the processes communicate with each other by exchanging messages. For
this purpose, a communication link must exist between the processes and it must
facilitate at least two operations send (message) and receive (message). The size of
messages may be variable or fixed.
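The message-passing model can be sketched the same way: the two processes share no data and communicate only through send (`put`) and receive (`get`) on an OS-provided queue:

```python
import multiprocessing as mp

# Message-passing IPC sketch: a communication link (queue) supports the two
# required operations, send(message) and receive(message).
def producer(q):
    q.put("event occurred")    # send(message)

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=producer, args=(q,))
    p.start()
    msg = q.get()              # receive(message): blocks until a message arrives
    p.join()
    print(msg)                 # event occurred
```

Note the trade-off this illustrates: message passing needs no shared region or explicit locking, at the cost of copying each message between address spaces.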

Advantages of IPC:
1. Enables processes to communicate with each other and share resources, leading to
increased efficiency and flexibility.
2. Facilitates coordination between multiple processes, leading to better overall
system performance.
3. Allows for the creation of distributed systems that can span multiple
computers or networks.
4. Can be used to implement various synchronization and communication protocols,
such as semaphores, pipes, and sockets.
Disadvantages of IPC:
1. Increases system complexity, making it harder to design, implement, and debug.
2. Can introduce security vulnerabilities, as processes may be able to access or
modify data belonging to other processes.
3. Requires careful management of system resources, such as memory and CPU time,
to ensure that IPC operations do not degrade overall system performance.
4. Can lead to data inconsistencies if multiple processes try to access or modify the
same data at the same time.

Overall, the advantages of IPC outweigh the disadvantages, as it is a necessary
mechanism for modern operating systems and enables processes to work together
and share resources in a flexible and efficient manner. However, care must be
taken to design and implement IPC systems carefully, in order to avoid potential
security vulnerabilities and performance issues.



3.0 CPU SCHEDULING

CPU scheduling is the process of determining which process will own the CPU for execution while
another process is on hold. The main task of CPU scheduling is to make sure that whenever
the CPU becomes idle, the OS selects one of the processes available in the ready queue
for execution. The selection is carried out by the CPU scheduler, which selects one of
the processes in memory that are ready for execution.

Here are the reasons for using a scheduling algorithm:

• The CPU uses scheduling to improve its efficiency.


• It helps you to allocate resources among competing processes.
• The maximum utilization of the CPU can be obtained with multiprogramming.
• The processes to be executed are kept in the ready queue.

Types of CPU Scheduling

Here are two kinds of Scheduling methods:

Preemptive Scheduling :
In Preemptive Scheduling, the tasks are mostly assigned with their priorities. Sometimes it is
important to run a task with a higher priority before another lower priority task, even if the
lower priority task is still running. The lower priority task holds for some time and resumes
when the higher priority task finishes its execution.

Non-Preemptive Scheduling
In this type of scheduling method, the CPU has been allocated to a specific process. The process
that keeps the CPU busy will release the CPU either by switching context or terminating. It is
the only method that can be used for various hardware platforms. That’s because it doesn’t
need special hardware (for example, a timer) like preemptive scheduling.



3.1 Criteria for CPU scheduling : The criteria the CPU takes into consideration while "scheduling" these
processes are - CPU utilization, throughput, turnaround time, waiting time, and response time.

1. CPU utilization: The operating system needs to keep the CPU as busy as
possible. Utilization can range from 0 to 100 percent. For an RTOS, however,
it may range from 40 percent for a low-level system to 90 percent for a
high-level system.

2. Throughput: The number of processes that finish their execution per unit time is
known as throughput. When the CPU is busy executing processes, work is being
done, and the work completed per unit time is the throughput.

3. Waiting time: The amount of time a specific process spends waiting in the
ready queue.

4. Response time: The amount of time from when a request is submitted until the
first response is produced.

5. Turnaround time: The amount of time taken to execute a specific process. It is
the total of the time spent waiting to get into memory, waiting in the ready
queue, and executing on the CPU; the period between process submission and
completion is the turnaround time.

A good CPU scheduling algorithm should ensure that each process gets a fair share of the CPU time,
while also maximizing overall system throughput and minimizing response and waiting times.
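The criteria above are related by two simple formulas, turnaround time = completion − arrival and waiting time = turnaround − CPU burst, shown here on an illustrative schedule (the numbers are made up for the example, not from the text):

```python
# Worked example relating the scheduling criteria: turnaround and waiting
# time computed from each process's arrival, burst, and completion times.
procs = [
    # (name, arrival, burst, completion) under some fixed schedule
    ("P1", 0, 5, 5),
    ("P2", 1, 3, 8),
    ("P3", 2, 1, 9),
]

metrics = {}
for name, arrival, burst, completion in procs:
    turnaround = completion - arrival    # submission to completion
    waiting = turnaround - burst         # time spent in the ready queue
    metrics[name] = (turnaround, waiting)

print(metrics)  # {'P1': (5, 0), 'P2': (7, 4), 'P3': (7, 6)}
```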

3.2 Types of CPU scheduling Algorithm

There are mainly six types of process scheduling algorithms


1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling



First Come First Serve

First Come First Serve is the full form of FCFS. It is the simplest CPU scheduling
algorithm. In this type of algorithm, the process that requests the CPU first gets the CPU
first. This scheduling method is managed with a FIFO queue.

As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of
the queue. When the CPU becomes free, it is assigned to the process at the head of
the queue.

Characteristics of the FCFS method:

• It is a non-preemptive scheduling algorithm.
• Jobs are always executed on a first-come, first-served basis.
• It is easy to implement and use.
• However, this method is poor in performance, and the average wait time is quite
high.
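Why the average wait can be high is easy to see in a small sketch: with all processes arriving at time 0, each one waits for the sum of the bursts ahead of it, so one long job delays everything behind it (burst times are an illustrative example):

```python
# FCFS sketch: processes run in FIFO order; each waits for the total burst
# time of everything ahead of it (all processes arrive at time 0 here).
bursts = [24, 3, 3]

time, waits = 0, []
for burst in bursts:
    waits.append(time)     # waits until all earlier arrivals finish
    time += burst

avg_wait = sum(waits) / len(waits)
print(waits, avg_wait)     # [0, 24, 27] 17.0
```

Had the two short jobs arrived first, the waits would be [0, 3, 6] and the average only 3.0, which is the "convoy effect" FCFS suffers from.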

Shortest Remaining Time

The full form of SRT is Shortest Remaining Time. It is also known as preemptive SJF
scheduling. In this method, the CPU is allocated to the process that is closest to completion;
if a new process arrives in the ready state with a shorter remaining time, it preempts
the currently running process.

Characteristics of the SRT scheduling method:

• This method is mostly applied in batch environments where short jobs need to
be given preference.
• It is not ideal in a shared system where the required CPU time is unknown.
• Associated with each process is the length of its next CPU burst; the operating
system uses these lengths to schedule the process with the shortest remaining
time.

Priority Based Scheduling


Priority scheduling is a method of scheduling processes based on priority: the
scheduler selects tasks to run according to their priority.

Processes with higher priority are carried out first, whereas jobs with equal priorities
are carried out on a round-robin or FCFS basis. Priority can be decided based on memory
requirements, time requirements, etc.

Round-Robin Scheduling

Round robin is one of the oldest and simplest scheduling algorithms. Its name comes
from the round-robin principle, where each person gets an equal share of something in turn.
It is mostly used in multitasking systems, and it provides starvation-free execution
of processes.

Characteristics of Round-Robin scheduling:

• Round robin is a hybrid, clock-driven model.
• The time slice should be the minimum amount assigned to a specific task to be
processed; however, it may vary for different processes.
• It responds to events within a bounded time, which makes it suitable for
time-sharing and soft real-time use.
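The quantum-based preemption can be sketched in a few lines: each process runs for at most one time quantum, and if it still has work left it is re-queued at the tail, which is why no process starves (burst times are an illustrative example):

```python
from collections import deque

# Round-robin sketch: run each process for at most one quantum; a process
# with work remaining is preempted and re-queued at the tail of the queue.
def round_robin(bursts, quantum):
    queue = deque(enumerate(bursts))   # (pid, remaining burst time)
    order = []
    while queue:
        pid, remaining = queue.popleft()
        order.append(pid)
        if remaining > quantum:
            queue.append((pid, remaining - quantum))  # preempted, re-queued
    return order

print(round_robin([5, 3, 1], quantum=2))  # [0, 1, 2, 0, 1, 0]
```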

Shortest Job First

SJF (Shortest Job First) is a scheduling algorithm in which the process with the
shortest execution time is selected for execution next. This scheduling method can be
preemptive or non-preemptive. It significantly reduces the average waiting time for other
processes awaiting execution.

Characteristics of SJF scheduling:

• Associated with each job is the length of time it needs to complete.
• When the CPU is available, the process or job with the shortest
completion time is executed first.
• It is commonly implemented with a non-preemptive policy.
• This method is useful for batch-type processing, where waiting for jobs to
complete is not critical.
• It improves job throughput by running shorter jobs first, which mostly
have a shorter turnaround time.
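Non-preemptive SJF can be sketched by sorting jobs by burst length; with every job available at time 0, running the shortest burst first minimizes the average waiting time (burst times are an illustrative example):

```python
# Non-preemptive SJF sketch: jobs run in order of increasing burst length,
# and each job's waiting time is the total burst time of the jobs before it.
def sjf_waits(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    time, waits = 0, [0] * len(bursts)
    for i in order:
        waits[i] = time      # this job waits while shorter jobs run first
        time += bursts[i]
    return waits

waits = sjf_waits([6, 8, 7, 3])
print(waits, sum(waits) / len(waits))  # [3, 16, 9, 0] 7.0
```

Compare with FCFS on the same bursts, which would give waits [0, 6, 14, 21] and an average of 10.25; this is the sense in which SJF "significantly reduces the average waiting time".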

Multiple-Level Queues Scheduling


This algorithm separates the ready queue into various separate queues. In this method,
processes are assigned to a queue based on a specific property of the process, like the
process priority, size of the memory, etc.

However, this is not an independent scheduling OS algorithm as it needs to use other types
of algorithms in order to schedule the jobs.

Characteristics of Multiple-Level Queues scheduling:

• Multiple queues are maintained for processes with common characteristics.
• Every queue may have its own scheduling algorithm.
• Priorities are assigned to each queue.

3.4 Multi Processor Scheduling

A multi-processor is a system that has more than one processor but shares the same memory,
bus, and input/output devices. In multi-processor scheduling, more than one
processor (CPU) shares the load so that the execution of processes is handled smoothly.
The CPUs may be of the same kind (homogeneous) or different kinds (heterogeneous).

The scheduling process of a multi-processor is more complex than that of a single processor



system because of the following reasons.

1. Load balancing is a problem since more than one processor is present.
2. Processes executing simultaneously may require access to shared data.
3. Processor/cache affinity should be considered in scheduling.

Load balancing keeps the workload evenly distributed across all processors in an SMP
system, so that one processor doesn't sit idle while another is overloaded. It can be
done in two ways:

1) Push Migration: A task regularly checks whether there is an imbalance of load among
the processors and then shifts/distributes the load accordingly.
2) Pull Migration: An idle processor pulls a waiting task from an
overloaded/busy processor.

Processor Affinity : A process has an affinity for the processor on which it is currently
executing. This is because it fills the processor's cache with the data it most recently
accessed. As a result, the process frequently finds the answers to its subsequent memory
requests in the cache memory.

There are two types of processor affinity.


• Soft affinity: The system tries to keep a process running on the same
processor but does not guarantee it.
• Hard affinity: The system allows a process to specify the subset of processors on
which it may run, i.e., the process can run only on those processors. Systems such
as Linux implement soft affinity, but they also provide system calls such as
sched_setaffinity() to support hard affinity.
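On Linux, these affinity calls are exposed through Python's `os` module; the sketch below reads the CPU set a process may use and then pins the process to a single CPU (the functions exist only on Linux, hence the guard):

```python
import os

# Hard-affinity sketch (Linux-only): query which CPUs this process may run
# on, then restrict it to a single CPU via sched_setaffinity.
if hasattr(os, "sched_getaffinity"):
    allowed = os.sched_getaffinity(0)        # CPUs this process may run on
    print("may run on CPUs:", allowed)
    os.sched_setaffinity(0, {min(allowed)})  # pin to one CPU (hard affinity)
    print("now pinned to:", os.sched_getaffinity(0))
```

The first argument, 0, means "the calling process"; the second is the set of allowed CPU numbers.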

Approaches to multi-processor scheduling

Symmetric Multiprocessing: In symmetric multi-processor scheduling, the processors are self-


scheduling. The scheduler for each processor checks the ready queue and selects a process to
execute. Each of the processors works on the same copy of the operating system and
communicates with each other. If one of the processors goes down, the rest of the system keeps
working.

Asymmetric Multiprocessing (AMP) is a multiprocessor system in which the CPUs do not have
equal roles: each processor is assigned specific tasks, with one acting as the primary
(master) processor, unlike symmetric multiprocessing, where tasks are distributed equally.

Key Differences Between Symmetric and Asymmetric Multiprocessing

1. The most distinguishable point between symmetric and asymmetric


multiprocessing is that the tasks in OS are handled only by the master processor in
Asymmetric Multiprocessing. On the other hand, all the processors in symmetric
multiprocessing run the tasks in OS.

2. In symmetric multiprocessing, each processor may have its own private queue of
ready processes, or they can take processes from a common ready queue. But, in
asymmetric multiprocessing, master processor assigns processes to the slave
processors.
3. All the processors in symmetric multiprocessing have the same
architecture, but the processors in an asymmetric
multiprocessor may differ in architecture.

4. Processors in symmetric multiprocessing communicate with each other by the


shared memory. However, the processors in Asymmetric Multiprocessing need
not to communicate with each other as they are controlled by the master
processor.
5. If the master processor fails, a slave processor is promoted to master processor to
continue execution. If a processor in symmetric multiprocessing fails, the
computing capacity of the system is reduced.

6. An asymmetric multiprocessor is simple, as only the master processor accesses the
data structures, whereas a symmetric multiprocessor is complex, as all the
processors need to work in synchronisation.

REAL TIME CPU SCHEDULING

A real-time system is one whose correctness depends on timing as well as
functionality. For the more traditional scheduling algorithms discussed earlier, the
metrics of interest were turnaround time (or throughput), fairness, and mean response
time. Real-time systems have very different requirements, characterized by different
metrics:
• Timeliness: how closely the system meets its timing requirements.
• Predictability: how much deviation there is in delivered timeliness.
• Feasibility: whether or not it is possible to meet the requirements for a
particular task set.

It sounds like real-time scheduling is more critical and difficult than traditional time-
sharing, and in many ways it is. But real-time systems may have a few characteristics
that make scheduling easier:

• We may know how long each task will take to run. This enables much more intelligent
scheduling.
• Starvation (of low-priority tasks) may be acceptable. The space shuttle absolutely must
sense attitude and acceleration and adjust spoiler positions once per millisecond, but
it probably doesn't matter whether the navigational display is updated once per
millisecond or once every ten seconds; telemetry transmission is probably somewhere
in between. Understanding the relative criticality of each task gives us the freedom to
intelligently shed less critical work in times of high demand.
• The work-load may be relatively fixed. Normally high utilization implies long
queuing delays, as burst traffic creates long lines. But if the incoming traffic rate is
relatively constant, it is possible to simultaneously achieve high utilization and good
response time.

Real Time Scheduling is of 2 types

• Hard real-time: There are strong requirements that specified tasks be run at specified
intervals (or within a specified response time). Failure to meet this
requirement (perhaps by as little as a fraction of a microsecond) may result in system
failure.
• Soft real-time: We may want to provide very good (e.g. microsecond) response
times, but the only consequences of missing a deadline are degraded performance or
recoverable failures.

Dynamic Scheduling Algorithms: Dynamic schedulers make decisions during the runtime
of the system. This not only allows a more flexible system design, but also incurs
calculation overhead at runtime. A dynamic scheduler decides which task to execute
based on the importance of the task, called its priority, and a task's priority may
change during runtime. Two dynamic scheduling algorithms are EDF and LST:

• Earliest Deadline First (EDF): EDF assigns priorities to tasks according to their
absolute deadlines: the task whose deadline is closest gets the highest priority.
Priorities are assigned and changed dynamically. EDF is very efficient compared
with other scheduling algorithms in real-time systems.
• The Least Slack Time (LST) scheduling algorithm is a real-time scheduling algorithm
that prioritizes tasks based on the amount of time remaining before a task's deadline.
The LST algorithm's basic idea is to schedule the task with the least slack time first
because it has the least amount of time before its deadline.
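The two selection rules above can be sketched directly (the task set is illustrative): EDF picks the ready task with the earliest absolute deadline, while LST picks the task with the least slack, computed as deadline − current time − remaining execution time:

```python
# Dynamic real-time selection sketches: EDF and LST picking the next task.
def edf_pick(tasks):
    return min(tasks, key=lambda t: t["deadline"])["name"]

def lst_pick(tasks, now):
    # slack = time to deadline minus work still required
    return min(tasks, key=lambda t: t["deadline"] - now - t["remaining"])["name"]

tasks = [
    {"name": "T1", "deadline": 50, "remaining": 10},
    {"name": "T2", "deadline": 30, "remaining": 25},
    {"name": "T3", "deadline": 40, "remaining": 5},
]
print(edf_pick(tasks))         # T2 (deadline 30 is the closest)
print(lst_pick(tasks, now=0))  # T2 (slack 30 - 0 - 25 = 5, the least)
```

Here both policies happen to agree; they can differ when a far-deadline task still has a large amount of work remaining.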

Static Scheduling Algorithms: A static scheduler can calculate the order of execution
before runtime. The static scheduler also decides the sequence of tasks based on
priority, but the priority value does not change during runtime. Examples of static
scheduling algorithms are RM and SJF.

• Rate Monotonic (RM): Rate monotonic is a static scheduling algorithm that gives
maximum priority to the process with the smallest period (i.e., the highest rate). The
rate of a process is known in advance in an RTOS and is defined by how often the task
recurs in a given duration. The algorithm runs when the current process completes or a
new process arrives.
• Shortest Job First (SJF): The shortest job first algorithm is a static scheduling
algorithm that gives maximum priority to the process with the smallest
execution time. The execution time of a process is known in advance in an RTOS and
is defined as the CPU time the process needs to complete its task.
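The rate-monotonic priority rule can be sketched on an illustrative task set; the check against the classical Liu-Layland utilization bound n*(2^(1/n) - 1) is a standard sufficient (not necessary) schedulability test, a known result not stated in the text above:

```python
# Rate-monotonic sketch: the task with the smallest period gets the highest
# priority; the Liu-Layland bound gives a sufficient schedulability test.
tasks = [("T1", 4, 1), ("T2", 5, 1), ("T3", 10, 2)]  # (name, period, exec time)

by_priority = [t[0] for t in sorted(tasks, key=lambda t: t[1])]
utilization = sum(c / p for _, p, c in tasks)  # 0.25 + 0.2 + 0.2 = 0.65
n = len(tasks)
bound = n * (2 ** (1 / n) - 1)                 # about 0.7798 for n = 3
print(by_priority)             # ['T1', 'T2', 'T3']
print(utilization <= bound)    # True -> schedulable under RM
```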



Important Questions from Unit-1

Q1. Explain Organization of computer System ( Component Diagram)

Q2. What are the Services offered by OS

Q3. Write about the History and Types of O.S.

Q4. Draw a diagram and explain the structure of an OS (shell, kernel).

Q5. What is a Process and explain its states .

Q6. What is Context Switching? Why is it done?

Q7. What are Scheduling Queues.

Q8. What is CPU scheduling? Explain types and different algorithms available.

Q9. What is IPC and what are its implementation methods.

Q10. What are different CPU scheduling criteria.

