OS Unit1
Process Management: Process Concept- Process Definition, Process State, Process Control
Block, Threads; Process scheduling. Multiprogramming, Scheduling Queues, CPU Scheduling,
Context Switch; Operations on Processes- Creation and Termination of Processes; Inter process
communication (IPC)- Definition and Need for Inter process Communication; IPC
Implementation Methods- Shared Memory and Message Passing.
1.1 DEFINITION OF OS
• An Operating system is a program that controls the execution of application programs and
acts as an interface between the user of a computer and the computer hardware. It is the
one program running at all times on the computer (usually called the kernel), with all else
being application programs.
• An Operating system is concerned with the allocation of resources and services, such as
memory, processors, devices and information. The Operating System correspondingly includes
programs to manage these resources, such as a traffic controller, a scheduler, memory
management module, I/O programs, and a file system.
Every computer must have an operating system to run other programs. The operating system
coordinates the use of the hardware among the various system programs and application
programs for various users. It simply provides an environment within which other programs
can do useful work.
1. It controls the allocation and use of the computing system's resources among the
various users and tasks.
2. It provides an interface between the computer hardware and the programmer that
simplifies and makes feasible the coding and debugging of application programs.
3. Provides the facilities to create and modify programs and data files using an editor.
4. Provides access to the compiler for translating user programs from high-level
language to machine language.
5. Provides a loader program to move the compiled program code into the
computer's memory for execution.
6. Provides routines that handle the details of I/O programming.
Generations of Operating Systems
First Generation (1945-1955): In this generation there were no operating systems; instructions
were given directly to the computer system, and every program had to include all the code
needed to communicate with the connected hardware.
Second Generation (1955-1965): GM-NAA I/O, developed by General Motors for the IBM 704, was
the first operating system and appeared in the mid-1950s. Systems of this generation used
batch processing, where similar jobs were collected into groups and submitted to the operating
system on punch cards, to be executed one after another on the machine.
Third Generation (1965-1980): Integrated circuits made computers smaller and cheaper, and
multiprogramming was introduced, allowing several jobs to reside in main memory at once.
Toward the end of this period, in 1969, development of the UNIX operating system began at
Bell Labs.
Fourth Generation (1980-present): The evolution of personal computers marks the fourth
generation. Microsoft was founded in 1975, and Bill Gates took personal computers to the next
level by launching MS-DOS in 1981, though its cryptic commands made it hard for ordinary
users to master. In this generation users were also introduced to the Graphical User
Interface (GUI). Today, Windows is the most popular operating system, having evolved through
Windows 95, Windows 98, Windows XP, and Windows 10.
There are several different types of operating systems. In this section, we discuss these
types along with their advantages and disadvantages.
• Batch OS
• Multi-Programming OS
• Distributed OS
• Time-Sharing / Multitasking OS
• Network OS
• Real-Time OS
1. Batch OS
Batch OS is the first operating system for second-generation computers. This OS does not
directly interact with the computer. Instead, an operator takes up similar jobs and groups
them together into a batch, and then these batches are executed one by one on the
first-come, first-served principle.
Advantages of Batch OS
• Grouping similar jobs together reduces the overall time taken to execute them.
• Multiple users can share batch systems.
• Managing large amounts of work becomes easy in batch systems.
• The idle time for a single batch is very low.
Disadvantages of Batch OS
• It is hard to debug batch systems.
• If a job fails, then the other jobs have to wait for an unknown time till the
issue is resolved.
• Batch systems are sometimes costly.
Examples of Batch OS: payroll system, bank statements, data entry, etc.
2. Multi-Programming OS
A multiprogramming OS keeps several jobs in main memory at the same time; whenever the
running job must wait (for example, for I/O), the CPU switches to another job, so it is never
left idle unnecessarily.
Advantages of Multi-Programming OS
• Faster response time: Multiprogramming systems have less waiting time and faster
response times.
• Efficient resource use: Multiprogramming systems allow multiple processes to run at the
same time, which makes better use of system resources.
• Improved reliability: By distributing tasks across multiple processors, multiprogramming
systems can handle multiple applications at once. This reduces performance issues due to
errors in individual components.
Disadvantages of Multi-Programming OS
• CPU scheduling and memory management become more complex, as several jobs compete for the
same resources.
• There is no direct interaction between the user and the computer while jobs run.
3. Distributed OS
A distributed OS runs on a collection of independent, networked computers and presents them
to users as a single coherent system.
Advantages of Distributed OS
• Failure of one system will not affect the other systems because all the computers are
independent of each other.
• The load on the host system is reduced.
• The size of the network is easily scalable, as many computers can be added to the network.
• As the workload and resources are shared, computations are performed at a higher speed.
Disadvantages of Distributed OS
• The setup cost is high.
• Software used for such systems is highly complex.
• Failure of the main network will lead to the failure of the whole system.
Examples of Distributed OS: LOCUS, etc.
4. Time Sharing/ Multitasking OS
The multitasking OS is also known as the time-sharing operating system, as each task is given
some CPU time so that all tasks run smoothly. This system provides access to a large number
of users, and each user gets a share of CPU time as if they had the machine to themselves.
The tasks may come from a single user or from different users. The time allotted to execute
one task is called a quantum; as soon as that time expires, the system switches to another task.
Advantages of Multitasking OS
• Each task gets an equal share of time for execution.
• The idle time of the CPU is minimal.
• There is little chance of software duplication.
Disadvantages of Multitasking OS
• Processes with higher priority cannot be executed first, as equal priority is given to
every process or task.
• Each user's data must be protected from unauthorized access.
• Data communication problems can sometimes occur.
Examples of Multitasking OS: UNIX, Linux etc.
5. Network OS
Network operating systems are the systems that run on a server and manage all the
networking functions. They allow sharing of various files, applications, printers, security, and
other networking functions over a small network of computers like LAN or any other
private network. In the network OS, all the users are aware of the configurations of every
other user within the network, which is why network operating systems are also known as
tightly coupled systems.
6. Real-Time OS
Real-Time operating systems serve real-time systems. These operating systems are useful when
many events occur in a short time or within certain deadlines, such as real-time simulations.
Types of the real-time OS are:
Hard real-time OS: A hard real-time OS is intended mainly for applications in which even the
slightest delay is unacceptable; the time constraints of such applications are very strict.
Such systems are built for life-saving equipment like parachutes and airbags, which must act
immediately if an accident happens.
Soft real-time OS: A soft real-time OS is meant for applications where the time constraints
are less strict. In a soft real-time system, an important task is prioritized over less
important tasks, and it retains that priority until it completes. Furthermore, a time limit
is always set for a specific job, so short delays in later tasks are acceptable. Examples
include virtual reality and reservation systems.
Advantages of Real-Time OS
• It extracts more output from all resources, since system utilization is kept at a maximum.
• It provides the best management of memory allocation.
• These systems are designed to be practically error-free.
• These operating systems focus on currently running applications rather than those waiting
in the queue.
• Shifting from one task to another takes very little time.
Disadvantages of Real-Time OS
• System resources are extremely expensive.
• The algorithms used are very complex.
• Only a limited number of tasks can run at a time.
• Thread priority cannot be set freely, as these systems cannot switch tasks easily.
Examples of Real-Time OS: Medical imaging systems, robots, etc.
An operating system serves as a communication channel between the computer hardware and the
user, working as an intermediary between system hardware and the end user. The services it
provides are described below.
The operating system provides the programming environment in which a programmer works on a
computer system. The user program requests various resources through the operating system. The
operating system provides several services to programmers and users. Applications access these
services through application programming interfaces (APIs) or system calls. By invoking those
interfaces, an application can request a service from the operating system, pass parameters,
and receive the results of the operation.
From a user's perspective, an operating system (OS) provides services such as file management,
program execution, input/output device management, a user interface, and security. From a
system perspective, an OS manages resources such as memory, the CPU, and peripherals by
allocating them to different processes, handling interrupts, and performing scheduling to
optimize system performance. In essence, it acts as a mediator between the hardware and the
user's applications.
User Perspective:
File Management: Creating, deleting, renaming, and organizing files and folders.
Program Execution: Launching and running applications.
Input/Output Control: Managing interactions with devices like keyboard, mouse, printer, and monitor.
User Interface: Providing a visual interface for interacting with the system.
Security: Protecting against unauthorized access and malicious software.
System Perspective:
Memory Management: Allocating and deallocating memory to running processes.
CPU Scheduling: Deciding which process gets to use the CPU and when.
Process Management: Creating, managing, and terminating processes.
Device Management: Controlling and coordinating various hardware devices.
Error Handling: Detecting and responding to system errors.
Resource Allocation: Distributing system resources like CPU, memory, and disk space efficiently among
processes.
Some of these services are explained in more detail below.
Program execution: To execute a program, several tasks must be performed. Both the instructions
and the data must be loaded into main memory; in addition, input/output devices and files must
be initialized, and other resources prepared. The operating system handles all of these tasks,
so the user no longer needs to worry about memory allocation, multitasking, or similar details.
Control of input/output devices: There are numerous types of I/O devices in a computer system,
and each device requires its own specific set of instructions for operation. The operating
system hides these details by presenting a uniform interface, so programmers can access such
devices conveniently.
Program creation: The operating system offers structures and tools, including editors and
debuggers, to help the programmer create, modify, and debug programs.
Error detection and response: An error in one device may cause the whole system to malfunction.
Errors include hardware and software faults such as device failure, memory errors, division by
zero, and attempts to access forbidden memory locations. To handle them, the operating system
continuously monitors the system for errors and takes suitable action with the least possible
impact on running applications.
While working with computers, errors may occur quite often. Errors may occur in the:
• Input/ Output devices: For example, connection failure in the network, lack of paper in the printer,
etc.
• User program: For example: attempt to access illegal memory locations, divide by zero, use too much
CPU time, etc.
• Memory hardware: For example, Memory error, the memory becomes full, etc.
To handle these errors and other types of possible errors, the operating system takes appropriate action and
generates messages to ensure correct and consistent computing.
Accounting: An operating system collects usage statistics for various resources and tracks
performance parameters such as response time. These records are useful for future upgrades and
for tuning the system to improve overall performance.
Security and protection: An operating system provides security services by controlling access
to system resources, implementing user authentication mechanisms such as passwords, managing
memory access, enforcing access-control policies, and protecting data against unauthorized
access. In short, it safeguards the system from threats and vulnerabilities by limiting
actions to authorized users.
File management : Computers keep data and information on secondary storage devices like magnetic tape,
magnetic disk, optical disk, etc. Each storage medium has its own characteristics, such as
speed, capacity, data transfer rate, and data access method. For file management, the
operating system must know the types of different files and the characteristics of different
storage devices. It must also provide sharing and protection mechanisms for files.
Communication : The operating system manages the exchange of data and programs among different
computers connected over a network. This communication is accomplished using message passing and
shared memory.
Now to perform the functions mentioned above, the operating system has two
components:
• Shell
• Kernel
Shell provides a way to communicate with the OS by either taking the input from the user or the
shell script. A shell script is a sequence of system commands that are stored in a file.
Shell handles user interactions. It is the outermost layer of the OS and manages the interaction between user
and operating system by:
• Prompting the user to give input
• Interpreting the input for the operating system
• Handling the output from the operating system.
The kernel is the core component of a computer's operating system (OS). All other components
of the OS rely on the kernel to supply them with essential services. It serves as the primary
interface between the OS and the hardware and aids in the control of devices, networking, file
systems, and process and memory management.
Functions of kernel
• The kernel is the core component of an operating system and acts as an interface between
applications and the data processing done at the hardware level.
• When an OS is loaded into memory, the kernel is loaded first and remains in memory until the
OS is shut down. After that, the kernel provides and manages the computer resources and
allows other programs to run and use these resources.
• The kernel also sets up the memory address space for applications, loads the files with
application code into memory, and sets up the execution stack for programs.
The interface between a process and an operating system is provided by system calls. In
general, system calls are available as assembly language instructions. They are also included in
the manuals used by the assembly level programmers. System calls are usually made when a
process in user mode requires access to a resource. Then it requests the kernel to provide the
resource via a system call.
A process executes normally in user mode until a system call interrupts it. The system call is
then executed on a priority basis in kernel mode. After the system call completes, control
returns to user mode and execution of the user process resumes.
System calls are required in the following situations:
• Creating or deleting files requires a system call, as does reading from and writing to files.
• Creation and management of new processes.
• Network connections also require system calls, including sending and receiving packets.
• Access to hardware devices such as a printer or scanner requires a system call.
Types of System Calls : There are mainly five types of system calls. These are explained in detail as follows:
1. Process Control: These system calls deal with processes such as process creation, process
termination etc.
2. File Management: These system calls are responsible for file manipulation such as creating a file,
reading a file, writing into a file etc.
3. Device Management: These system calls are responsible for device manipulation such as reading
from device buffers, writing into device buffers etc.
4. Information Maintenance: These system calls handle information and its transfer between the
operating system and the user program, such as getting the time, date, or process attributes.
5. Communication: These system calls are used for inter-process communication, for example
creating and deleting communication connections and sending and receiving messages (a small
example follows this list).
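To make this concrete, here is a minimal C sketch (assuming a POSIX system such as Linux) that
invokes two system calls through their standard library wrappers: write() as a file-management/
I/O call and getpid() as an information-maintenance call.

#include <stdio.h>
#include <unistd.h>   /* write() and getpid() are thin wrappers over system calls */

int main(void) {
    /* File-management / I/O system call: write bytes to standard output (fd 1). */
    const char msg[] = "hello from a system call\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);

    /* Information-maintenance system call: ask the kernel for this process's PID. */
    printf("my PID is %d\n", (int)getpid());
    return 0;
}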
2. PROCESS MANAGEMENT
Process : A process is defined as an entity which represents the basic unit of work to be implemented in the
system. A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.
To put it in simple terms, we write our computer programs in a text file and when we execute this program,
it becomes a process which performs all the tasks mentioned in the program. A process is an 'active' entity as
opposed to the program which is a 'passive' entity.
Components of a Process in OS: When a program is loaded into the memory and it becomes a process, it can
be divided into four sections : stack, heap, text and data.
Process vs. Program
• A process requires resources such as memory, CPU, and input-output devices, whereas a
program is stored on the hard disk and does not require any resources.
• A process is a dynamic instance of code and data, whereas a program has static code and
static data.
• A process is the running instance of the code, whereas a program is the executable code
itself.
When a process executes, it passes through different states. A process transitions between these states
as it progresses through its lifecycle. By transitioning through these states, the operating system ensures
that processes are executed smoothly, resources are allocated effectively, and the overall performance
of the computer is optimized.
In general, a process can have one of the following five states at a time.
New: When a process is first created and has not yet been scheduled to run. It is the program that is present
in secondary memory that will be picked up by the OS to create the process.
Ready : A process enters the ‘ready’ state when it is loaded into the main memory and is waiting to be
assigned to a processor for execution. It is ready to run but is waiting for CPU time.
Running: When a process is currently being executed by the CPU. The process is chosen from the ready
queue by the OS for execution and the instructions within the process are executed by any one of the
available processors.
Waiting (or Blocked): When a process is waiting for an external event to occur, such as input
from a device, before it can continue execution.
Terminated: When a process has finished execution (or has been aborted), it enters the
terminated state, and the operating system reclaims its resources.
To identify processes, the operating system assigns a process identification number (PID) to
each process. Because the operating system supports multiprogramming, it must keep track of
all processes. For this task, the process control block (PCB) is used to track each process's
execution status. Each PCB contains information about the process state, program counter,
stack pointer, status of opened files, scheduling information, and so on.
All this information is required and must be saved when the process is switched from one state to another.
When the process makes a transition from one state to another, the operating system must update
information in the process’s PCB.
A Process Control Block (PCB) is a data structure that contains information related to a
process; it is also known as a task control block. There is a PCB for each process, enclosing
all the information about that process. The process table is an array of PCBs, logically
containing a PCB for every current process in the system.
Structure of the Process Control Block: The PCB keeps track of many important pieces of
information needed to manage processes efficiently. The key data items include the following.
• Pointer: It is a stack pointer that is required to be saved when the process is switched from one state
to another to retain the current position of the process.
• Process number: Every process is assigned a unique id known as process ID or PID which stores the
process identifier.
• Program counter: Program Counter stores the counter, which contains the address of the next
instruction that is to be executed for the process.
• Registers: When a process is running and its time slice expires, the current values of its
process-specific registers are stored in the PCB and the process is swapped out. When the
process is next scheduled to run, those register values are read from the PCB and written
back to the CPU registers. This is the main purpose of the registers field in the PCB.
• Memory limits: This field contains the information about memory management system used by the
operating system. This may include page tables, segment tables, etc.
• List of open files: This information includes the list of files opened for the process. (A
C-structure sketch of a PCB follows below.)
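As a study aid, the fields above can be pictured as a C structure. This is only a hypothetical
sketch; the field names, types, and sizes are illustrative assumptions, and a real kernel's
PCB (for example, Linux's struct task_struct) carries far more fields.

#include <stdint.h>

#define MAX_OPEN_FILES 16              /* illustrative limit, not from a real kernel */

/* Process states, matching the five-state model described earlier. */
enum proc_state { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

/* Hypothetical process control block. */
struct pcb {
    int             pid;                        /* unique process identifier       */
    enum proc_state state;                      /* current process state           */
    uint64_t        program_counter;            /* address of the next instruction */
    uint64_t        stack_pointer;              /* saved stack pointer             */
    uint64_t        registers[16];              /* saved general-purpose registers */
    int             priority;                   /* scheduling information          */
    void           *page_table;                 /* memory-management information   */
    int             open_files[MAX_OPEN_FILES]; /* descriptors of open files       */
    struct pcb     *next;                       /* link for a scheduling queue     */
};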
Resource Allocation: Using details stored in the PCB in OS, the system efficiently assigns and monitors
resources like memory and I/O devices to specific processes.
Execution Oversight: PCB in OS houses the program counter and CPU registers, ensuring processes execute
correctly by pointing to the next instruction and maintaining intermediate data.
Protection and Security: By tracking allocated resources and memory boundaries in the PCB in OS, it
prevents processes from interfering with each other or accessing unauthorized resources.
Context Information: During context switches, the PCB in OS stores the current context of a process,
ensuring seamless transitions and allowing processes to resume where they left off.
Process Scheduling
Definition: Process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process on the basis of a
particular strategy.
Categories of Scheduling
1. Non-pre-emptive: Here the resource can’t be taken from a process until the process completes
execution. The switching of resources occurs when the running process terminates and moves
to a waiting state.
2. Pre-emptive: Here the OS allocates resources to a process for a fixed amount of time. The
process may switch from the running state to the ready state, or from the waiting state to
the ready state. This switching occurs because the CPU may give priority to other processes:
a running process can be replaced by a process of higher priority.
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS
maintains a separate queue for each of the process states and PCBs of all processes in the same
execution state are placed in the same queue.
The Operating System maintains the following important process scheduling queues
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.
Schedulers :Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types −
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
The long-term scheduler (job scheduler) selects which programs are admitted to the system for
processing. Its primary objective is to provide a balanced mix of jobs, such as I/O-bound and
processor-bound jobs, and it controls the degree of multiprogramming. It is used when a process
changes state from new to ready. The short-term scheduler (CPU scheduler) selects which ready,
in-memory process executes next on the CPU, and runs far more frequently. The medium-term
scheduler handles swapping: it removes processes from memory to reduce the degree of
multiprogramming and later reintroduces them.
Context switching is the mechanism to store and restore the state or context of a CPU in Process
Control block so that a process execution can be resumed from the same point at a later time.
Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the state
of the currently running process is stored in its process control block. The state of the
process to run next is then loaded from its own PCB and used to set the program counter,
registers, and so on, at which point the second process can start executing.
In the following example, process P1 is initially running on the CPU while another process,
P2, is in the ready state. If an interrupt or error occurs, or if P1 needs I/O, P1 switches
from the running state to the waiting state. Before this state change, context switching saves
the context of P1, its registers and program counter, into PCB1. The OS then loads the state
of P2 from PCB2 and moves P2 from the ready state to the running state.
Context switches are computationally intensive, since register and memory state must be saved
and restored. To reduce context-switching time, some hardware systems employ two or more sets
of processor registers. When a process is switched out, the following information is stored
for later use (a user-space sketch follows this list):
• Program Counter
• Scheduling information
• Base and limit register value
• Currently used register
• Changed State
• I/O State information and
• Accounting information
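A real context switch happens inside the kernel, but the same save-and-restore idea can be
demonstrated in user space with the POSIX ucontext API. The sketch below is only an
illustration of the mechanism, not how a kernel is actually written.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx_main, ctx_task;    /* these play the role of two PCBs */

static void task(void) {
    printf("task: running after the context switch\n");
    /* Returning resumes ctx_main via uc_link below. */
}

int main(void) {
    static char stack[16384];            /* private stack for the second context */
    getcontext(&ctx_task);               /* initialize from the current context  */
    ctx_task.uc_stack.ss_sp = stack;
    ctx_task.uc_stack.ss_size = sizeof stack;
    ctx_task.uc_link = &ctx_main;        /* where to resume when task() returns  */
    makecontext(&ctx_task, task, 0);

    printf("main: saving my state and switching to task\n");
    swapcontext(&ctx_main, &ctx_task);   /* save registers/PC, load task's state */
    printf("main: restored exactly where I left off\n");
    return 0;
}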
Process Creation: A parent process creates child processes, which in turn may create other
processes, forming a tree of processes. Each process is identified and managed via a process
identifier (PID). Execution can proceed in two ways: the parent and children execute
concurrently, or the parent waits until its children terminate. Resource sharing can be
arranged in several ways:
• Parent and children share all resources.
• Children share a subset of the parent's resources.
• Parent and children share no resources.
Process Termination: A process terminates when it executes its last statement, and the
operating system then deletes it and deallocates its resources. A parent may also terminate
the execution of a child process (abort), as illustrated in the sketch after this list, when:
• The child has exceeded its allocated resources.
• The task assigned to the child is no longer required.
• The parent itself is exiting; some operating systems do not allow a child to continue if
its parent terminates.
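The following minimal C sketch (POSIX assumed) illustrates both operations: fork() creates a
child process, the child terminates with exit(), and the parent waits for the child to finish.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* create a child process             */
    if (pid < 0) {                    /* fork failed: no child was created  */
        perror("fork");
        exit(1);
    } else if (pid == 0) {            /* child executes this branch         */
        printf("child: pid=%d, parent=%d\n", (int)getpid(), (int)getppid());
        exit(0);                      /* child terminates; OS reclaims it   */
    } else {                          /* parent executes this branch        */
        wait(NULL);                   /* parent waits until the child exits */
        printf("parent: child %d has terminated\n", (int)pid);
    }
    return 0;
}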
On the basis of interaction, processes are of two types:
• Independent processes
• Co-operating processes
An independent process is not affected by the execution of other processes, while a
co-operating process can be affected by other executing processes. One might think that
processes running independently execute most efficiently, but in reality there are many
situations where co-operation can be exploited to increase computational speed, convenience,
and modularity. Inter-process communication (IPC) is a mechanism that allows processes to
communicate with each other and synchronize their actions; the communication between these
processes can be seen as a method of co-operation between them. Processes can communicate with
each other in one of two ways: shared memory or message passing (a message-passing sketch
using a pipe follows below).
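As a concrete taste of message passing, here is a minimal POSIX sketch in which a parent and
child communicate through a pipe; the kernel carries the message between the two address
spaces.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }  /* fd[0]=read, fd[1]=write */

    if (fork() == 0) {                        /* child: the sender                */
        close(fd[0]);                         /* the sender does not read         */
        write(fd[1], "hello", 6);             /* send the message (with its '\0') */
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                             /* parent: the receiver             */
    read(fd[0], buf, sizeof buf);             /* blocks until data arrives        */
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);                               /* reap the child                   */
    return 0;
}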
Advantages of IPC:
1. Enables processes to communicate with each other and share resources, leading to
increased efficiency and flexibility.
2. Facilitates coordination between multiple processes, leading to better overall
system performance.
3. Allows for the creation of distributed systems that can span multiple
computers or networks.
4. Can be used to implement various synchronization and communication protocols,
such as semaphores, pipes, and sockets.
Disadvantages of IPC:
1. Increases system complexity, making it harder to design, implement, and debug.
2. Can introduce security vulnerabilities, as processes may be able to access or
modify data belonging to other processes.
3. Requires careful management of system resources, such as memory and CPU time,
to ensure that IPC operations do not degrade overall system performance.
4. Can lead to data inconsistencies if multiple processes try to access or modify the
same data at the same time.
Overall, the advantages of IPC outweigh the disadvantages: it is a necessary mechanism for
modern operating systems, enabling processes to work together and share resources in a
flexible and efficient manner. However, IPC systems must be designed and implemented
carefully to avoid potential security vulnerabilities and performance issues. A shared-memory
sketch follows below.
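For comparison, here is a minimal shared-memory sketch (assuming Linux, where MAP_ANONYMOUS is
available): parent and child map the same page, so data written by one is directly visible to
the other, with wait() serving as a crude form of synchronization.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Map one page that both parent and child will share. */
    char *shm = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shm == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                      /* child: writes into the region   */
        strcpy(shm, "data written by the child");
        return 0;
    }
    wait(NULL);                             /* wait, then read the same memory */
    printf("parent read: %s\n", shm);
    munmap(shm, 4096);
    return 0;
}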
3. CPU SCHEDULING
CPU scheduling is the process of determining which process will own the CPU for execution
while another process is on hold. The main task of CPU scheduling is to make sure that
whenever the CPU is idle, the OS selects one of the processes available in the ready queue for
execution. The selection is carried out by the CPU scheduler, which picks one of the processes
in memory that are ready to execute.
Preemptive Scheduling :
In Preemptive Scheduling, the tasks are mostly assigned with their priorities. Sometimes it is
important to run a task with a higher priority before another lower priority task, even if the
lower priority task is still running. The lower priority task holds for some time and resumes
when the higher priority task finishes its execution.
Non-Preemptive Scheduling
In this type of scheduling method, the CPU has been allocated to a specific process. The process
that keeps the CPU busy will release the CPU either by switching context or terminating. It is
the only method that can be used for various hardware platforms. That’s because it doesn’t
need special hardware (for example, a timer) like preemptive scheduling.
CPU Scheduling Criteria
1. CPU utilization: The operating system needs to keep the CPU as busy as possible.
Utilization can range from 0 to 100 percent; for an RTOS it may range from 40 percent
for a low-level system to 90 percent for a high-level system.
2. Throughput: The number of processes that finish their execution per unit time is
known as throughput. When the CPU is busy executing processes, work is being done,
and the work completed per unit time is the throughput.
3. Waiting time: The amount of time a process spends waiting in the ready queue.
4. Response time: The amount of time from when a request is submitted until the first
response is produced.
A good CPU scheduling algorithm should ensure that each process gets a fair share of the CPU time,
while also maximizing overall system throughput and minimizing response and waiting times.
First Come First Serve (FCFS) is the simplest CPU scheduling algorithm. In this algorithm, the
process that requests the CPU first is allocated the CPU first. This scheduling method can be
managed with a FIFO queue: as a process enters the ready queue, its PCB is linked to the tail
of the queue, and when the CPU becomes free it is assigned to the process at the head of the
queue. A small worked example follows.
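For instance (burst times assumed for illustration), suppose processes P1, P2, and P3 arrive
in that order with CPU bursts of 24, 3, and 3 ms. Under FCFS, P1 waits 0 ms, P2 waits 24 ms,
and P3 waits 27 ms, so the average waiting time is (0 + 24 + 27) / 3 = 17 ms. Had the arrival
order been P2, P3, P1, the waiting times would be 0, 3, and 6 ms, an average of only 3 ms,
which shows how sensitive FCFS is to arrival order.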
The full form of SRT is Shortest Remaining Time. It is also known as preemptive SJF
scheduling. In this method, the CPU is allocated to the process that is closest to completion;
if a newly arriving process has an even shorter remaining time, it preempts the currently
running process.
• This method is mostly applied in batch environments where short jobs are required
to be given preference.
• This is not an ideal method to implement in a shared system where the required CPU
time is unknown.
• Each process is associated with the length of its next CPU burst, and the operating
system uses these lengths to schedule the process with the shortest time first.
Priority scheduling assigns a priority to each process. Processes with higher priority are
carried out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS
basis. Priority can be decided based on memory requirements, time requirements, and so on. A
small worked example follows.
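For instance (burst times and priorities assumed for illustration, with a smaller number
meaning higher priority), let P1(10 ms, priority 3), P2(1 ms, priority 1), P3(2 ms,
priority 4), P4(1 ms, priority 5), and P5(5 ms, priority 2) all arrive at time 0. The
execution order is P2, P5, P1, P3, P4, giving waiting times of 6, 0, 16, 18, and 1 ms
respectively, so the average waiting time is (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms.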
Round-Robin Scheduling
Round robin is one of the oldest and simplest scheduling algorithms. The name comes from the
round-robin principle, where each person gets an equal share of something in turn. It is
widely used for scheduling in multitasking systems: each process is given the CPU for a fixed
time slice (quantum) and is then moved to the back of the ready queue, so the CPU cycles
through the processes in order. A worked example follows.
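For instance (values assumed for illustration), with a quantum of 4 ms and processes P1, P2,
P3 having bursts of 24, 3, and 3 ms, the CPU runs P1 for 4 ms, P2 for 3 ms, P3 for 3 ms, and
then P1 for its remaining 20 ms. P1 waits 6 ms (it resumes at time 10), P2 waits 4 ms, and P3
waits 7 ms, for an average waiting time of (6 + 4 + 7) / 3 ≈ 5.67 ms.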
SJF (Shortest Job First) is a scheduling algorithm in which the process with the shortest
execution time is selected for execution next. This scheduling method can be preemptive or
non-preemptive, and it significantly reduces the average waiting time of processes awaiting
execution. However, it is not an independent scheduling algorithm in practice, as it needs
other techniques to estimate the length of each job. A worked example follows.
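For instance (burst times assumed for illustration), take non-preemptive SJF with processes
P1, P2, P3, P4 having bursts of 6, 8, 7, and 3 ms, all arriving at time 0. The execution order
is P4, P1, P3, P2, giving waiting times of 3, 16, 9, and 0 ms, so the average waiting time is
(3 + 16 + 9 + 0) / 4 = 7 ms, versus 10.25 ms if the same processes were scheduled FCFS in the
order P1, P2, P3, P4.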
A multiprocessor is a system that has more than one processor but shares the same memory, bus,
and input/output devices. In multiprocessor scheduling, multiple processors (CPUs) share the
load so that processes execute smoothly. The CPUs may be of the same kind (homogeneous) or of
different kinds (heterogeneous). The scheduling process on a multiprocessor is more complex
than on a single processor because:
• Load balancing is a problem, since more than one processor is present.
• Processes executing simultaneously may require access to shared data.
• Processor/cache affinity should be considered in scheduling.
Load Balancing keeps the workload evenly distributed across all processors in an SMP system,
so that no processor sits idle while another is overloaded. It can be done in two ways: with
push migration, a specific task periodically checks the load on each processor and pushes
processes from overloaded processors to idle or less busy ones; with pull migration, an idle
processor pulls a waiting task from a busy processor.
Processor Affinity : A process has an affinity for the processor on which it is currently
executing. This is because it fills the processor's cache with the data it most recently
accessed. As a result, the process frequently finds the answers to its subsequent memory
requests in the cache memory.
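As a Linux-specific sketch (sched_setaffinity is not portable POSIX), a process can be pinned
to a single CPU so that the scheduler preserves its cache affinity:

#define _GNU_SOURCE                  /* expose cpu_set_t and sched_setaffinity */
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);                  /* start with an empty CPU set */
    CPU_SET(0, &set);                /* allow only CPU 0            */

    /* A pid of 0 means "the calling process". */
    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pid %d is now pinned to CPU 0\n", (int)getpid());
    return 0;
}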
1. Asymmetric Multiprocessing (AMP) is a multiprocessor system in which the CPUs do not have
equal roles: specific tasks are assigned to specific processors, with one acting as the
primary (master) processor, unlike symmetric multiprocessing (SMP), where tasks are
distributed equally.
2. In symmetric multiprocessing, each processor may have its own private queue of ready
processes, or all processors may take processes from a common ready queue. In asymmetric
multiprocessing, the master processor assigns processes to the slave processors.
3. All the processors in symmetric multiprocessing have the same architecture, but the
processors in an asymmetric multiprocessor may differ in structure.
Real-time scheduling sounds more critical and difficult than traditional time-sharing, and in
many ways it is. But real-time systems may have a few characteristics that make scheduling
easier:
• We may know how long each task will take to run. This enables much more intelligent
scheduling.
• Starvation (of low-priority tasks) may be acceptable. The space shuttle absolutely must
sense attitude and acceleration and adjust spoiler positions once per millisecond, but it
probably doesn't matter whether the navigational display is updated once per millisecond or
once every ten seconds; telemetry transmission is probably somewhere in between.
Understanding the relative criticality of each task gives us the freedom to intelligently
shed less critical work in times of high demand.
• The work-load may be relatively fixed. Normally high utilization implies long
queuing delays, as burst traffic creates long lines. But if the incoming traffic rate is
relatively constant, it is possible to simultaneously achieve high utilization and good
response time.
• Hard real-time - There are strong requirements that specified tasks be run at specified
intervals (or within a specified response time). Failure to meet this requirement
constitutes a failure of the system.
Dynamic Scheduling Algorithms: Dynamic schedulers make decisions during the runtime of the
system. This allows a more flexible system to be designed, but it also incurs computational
overhead at runtime. A dynamic scheduler decides which task to execute based on the importance
of the task, called its priority, and task priorities may change during runtime. Two dynamic
scheduling algorithms are EDF and LST:
• Earliest Deadline First (EDF): EDF assigns priorities to tasks according to their absolute
deadlines: the task whose deadline is closest gets the highest priority. Priorities are
assigned and changed dynamically. EDF is very efficient compared with other scheduling
algorithms for real-time systems. A quick schedulability check follows this list.
• The Least Slack Time (LST) scheduling algorithm is a real-time scheduling algorithm
that prioritizes tasks based on the amount of time remaining before a task's deadline.
The LST algorithm's basic idea is to schedule the task with the least slack time first
because it has the least amount of time before its deadline.
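As a quick schedulability check (task parameters assumed for illustration): for periodic
tasks, EDF can schedule a task set whenever the total utilization U = Σ Ci/Ti ≤ 1, where Ci is
the execution time and Ti the period of task i. For example, a task with C1 = 1, T1 = 4 and
another with C2 = 2, T2 = 6 give U = 1/4 + 2/6 ≈ 0.58 ≤ 1, so EDF can meet every deadline.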
Static Scheduling Algorithms: A static scheduler can calculate the order of execution before
runtime. It also decides the sequence of tasks based on priority, but the priority values do
not change during runtime. Examples of static scheduling algorithms are RM and SJF.
• Rate Monotonic (RM): Rate monotonic is a static scheduling algorithm that gives maximum
priority to the process with the smallest period, i.e., the highest rate. The rate of a
process is known in advance in an RTOS and is defined by how often the task recurs in a
given duration. The algorithm runs when the current process completes or a new process
arrives. A schedulability check for RM follows after this list.
• Shortest Job First (SJF): The shortest job first algorithm is a static scheduling
algorithm that gives maximum priority to the process with the smallest execution time. In
an RTOS, the execution time of a process, that is, the CPU time it needs to complete its
task, is known in advance.
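For rate monotonic, a commonly used sufficient test (the Liu and Layland bound) is
U = Σ Ci/Ti ≤ n(2^(1/n) − 1) for n tasks. Taking the same two illustrative tasks as in the EDF
example above (C1 = 1, T1 = 4; C2 = 2, T2 = 6), U ≈ 0.58 ≤ 2(2^(1/2) − 1) ≈ 0.83, so RM can
schedule them, with the period-4 task receiving the higher priority.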
Q8. What is CPU scheduling? Explain its types and the different scheduling algorithms available.