PROCESS MANAGEMENT
What's a Process?
A process is a program that's currently running. Think of a program as the blueprint for a building—
it's just a set of instructions on a piece of paper. A process is the actual building project, with all the
workers, materials, and equipment. A program is a static, passive thing, but a process is an active,
living entity. When you click on an application like a video game or a spreadsheet, you're starting a
new process.
What Makes Up a Process?
When a program becomes a process, the computer loads it into its memory and organizes it into a
few distinct sections, each with a specific job. It's like a construction site with different areas for
different tasks; a short code sketch after the list below shows where each kind of data ends up.
Text Section: This is the program's actual code—the architectural plan itself. It's read-only, so
the process can't accidentally change its own instructions while it's running.
Data Section: This is where the process keeps all its global and static variables. Think of this
as the main warehouse on the construction site, where all the shared materials are stored
for the entire project.
Stack: This is a temporary workspace that's constantly being used and reused. It's where the
process stores information for function calls, like local variables and return addresses. It's like
a foreman's clipboard, with temporary notes for a single, specific task.
Heap: This is a flexible area for dynamic memory. The process can get more memory from
here as it needs it and give it back when it's done. This is like a flexible storage yard where
the project manager can order more materials for unexpected jobs.
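To make this concrete, here is a minimal C sketch (the variable names are made up purely for
illustration) showing where each kind of data typically ends up: globals in the data section, locals
on the stack, malloc'd memory on the heap, and the compiled functions themselves in the text section.

#include <stdio.h>
#include <stdlib.h>

int shared_counter = 0;              /* global variable: lives in the data section        */
static const char banner[] = "hi";   /* read-only data, kept alongside the text section   */

void do_task(void) {
    int local_note = 42;                          /* local variable: lives on the stack   */
    int *extra = malloc(100 * sizeof(int));       /* dynamic memory: allocated from heap  */
    if (extra != NULL) {
        extra[0] = local_note + shared_counter;
        printf("%s %d\n", banner, extra[0]);
        free(extra);                              /* heap memory is handed back when done */
    }
}   /* when do_task returns, its stack frame (local_note, extra) is discarded */

int main(void) {
    do_task();       /* the machine code of main and do_task is the text section */
    return 0;
}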
How the OS Manages a Process
To keep track of all the different processes and make sure they don't get in each other's way, the
operating system gives each one a unique ID card, called the Process Control Block (PCB). The PCB
contains all the crucial information about the process, from its identity to its current state.
Think of the PCB as a project file for the construction manager (the operating system). It includes
things like:
Process ID (PID): A unique number for each process, like a project number.
Process State: What the process is currently doing (e.g., waiting for input, running on the
CPU, finished).
CPU Information: Data that helps the OS figure out when the process should get to use the
CPU, like its priority.
Memory Info: The exact location of all the process's different sections in memory.
By using the PCB, the operating system can seamlessly switch between multiple processes, which is
what allows your computer to multitask. When one process needs to stop for a moment, the OS
saves its complete state in the PCB. When it's time to start again, it uses the PCB to restore
everything exactly where it was. It's like a manager putting one project on hold and picking up
another, knowing they can return to the first one later without losing their place.
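Here is a deliberately simplified C sketch of what "saving its complete state in the PCB" might look
like. Every name in it (pcb_t, cpu_context_t, context_switch) is hypothetical; real kernels do this
step in architecture-specific assembly, but the idea is the same: copy the CPU's state into the
outgoing process's PCB, then load the incoming process's saved state back onto the CPU.

/* Simplified, illustrative context switch: all names here are hypothetical. */
typedef struct {
    unsigned long program_counter;   /* where the process left off in its code */
    unsigned long registers[16];     /* general-purpose register contents      */
    unsigned long stack_pointer;     /* top of the process's stack             */
} cpu_context_t;

typedef struct {
    int pid;                         /* process ID                             */
    int state;                       /* ready, running, blocked, ...           */
    cpu_context_t context;           /* saved CPU state, restored on resume    */
} pcb_t;

/* Switching from 'prev' to 'next' means: save prev's CPU state into its PCB,
 * then load next's saved state back onto the CPU.                            */
void context_switch(pcb_t *prev, pcb_t *next, cpu_context_t *cpu) {
    prev->context = *cpu;            /* save the outgoing process's state      */
    *cpu = next->context;            /* restore the incoming process's state   */
}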
PROCESS STATES
What are Process States?
In an operating system, a process (a program that's running) doesn't just run continuously until it's
done. Instead, it goes through different phases or states that describe what it's doing at any given
moment. These states help the operating system manage multiple programs at once and ensure
everything runs smoothly.
Think of it like a patient in a hospital. The patient (the process) moves through different parts of the
hospital (the states) depending on what they need.
The Five-State Model: The Core of Process Management
The most common model for understanding a process's journey has five key states.
1. New: This is the initial state. The process is being created but hasn't been loaded into the
computer's main memory yet. It's like a new patient arriving at the hospital; they have their
paperwork filled out, but they haven't been admitted yet.
2. Ready: The process has been created and is now waiting for its turn on the CPU (the main
doctor). It's ready to execute as soon as the operating system gives it a chance. All processes
in this state are kept in a queue, waiting for their turn.
3. Running: This is the active state. The process is currently executing on the CPU. At any given
moment, only one process can be in the running state per CPU core. This is like a patient who
is currently being treated by the doctor.
4. Blocked/Waiting: The process has to pause its execution because it needs something before
it can continue. This is usually waiting for an event to complete, such as reading data from a
hard drive or waiting for a user to type something. The process moves out of the running
state and waits. In our patient analogy, this is a patient waiting for lab results before the doctor can
continue treatment.
5. Terminated: The process has finished its job, or it has been stopped. All of its resources, like
memory, are released, and the process is no longer active. The patient has been discharged
from the hospital.
A process can move between the Ready, Running, and Blocked states multiple times during its life,
but it enters the New and Terminated states only once.
How Processes Move Between States
The movement between these states is managed by different types of schedulers in the operating
system.
Ready to Running: A short-term scheduler (also called the CPU scheduler) chooses a process
from the ready queue and gives it to the CPU.
Running to Ready: This can happen in a few ways. The operating system might preempt the
process, meaning it forcefully stops it to let a higher-priority process run. This is what
enables multitasking. Or, the process might be moved back because its allocated time slice has
expired.
Running to Blocked: A running process voluntarily gives up the CPU because it needs to wait
for a resource, like an I/O operation to finish.
Blocked to Ready: When the event the process was waiting for is completed, the operating
system moves it back to the ready queue.
Any State to Terminated: A process will be terminated when it completes its task or if it's
explicitly killed by the user or the operating system. (A small code sketch of these states
and transitions follows this list.)
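A tiny C sketch of the five states and the transitions above might look like the following. The enum
values and helper functions are illustrative only, not any real operating system's API.

/* Minimal sketch of the five-state model; names are made up for illustration. */
typedef enum { STATE_NEW, STATE_READY, STATE_RUNNING,
               STATE_BLOCKED, STATE_TERMINATED } proc_state_t;

typedef struct {
    int pid;
    proc_state_t state;
} process_t;

/* Running -> Blocked: the process must wait for an event (e.g. disk I/O). */
void block_for_io(process_t *p)   { p->state = STATE_BLOCKED; }

/* Blocked -> Ready: the awaited event has completed. */
void io_completed(process_t *p)   { p->state = STATE_READY; }

/* Ready -> Running: the short-term scheduler dispatches the process. */
void dispatch(process_t *p)       { p->state = STATE_RUNNING; }

/* Running -> Ready: the process is preempted or its time slice expires. */
void preempt(process_t *p)        { p->state = STATE_READY; }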
The Seven-State Model: Handling Memory Issues
Sometimes, a computer's main memory gets full. To handle this, the operating system can add two
more states to the model, creating the seven-state model. These new states involve swapping,
where a process is temporarily moved from main memory to secondary storage (like a hard drive) to
free up space. This is managed by a medium-term scheduler.
Suspended Ready: A process that was in the Ready state is moved out of main memory. It's
still ready to run, but it can't be scheduled until it is "swapped back in."
Suspended Blocked: A process that was in the Blocked state is moved out of main memory.
It's still waiting for its event, but it's not occupying valuable main memory space. Once its
event is complete, it moves to the Suspended Ready state.
This added complexity helps the operating system manage resources more efficiently on systems
with limited memory.
The Three Schedulers
Imagine you're managing a busy restaurant with a limited number of tables (CPU cores). You need
different managers for different tasks.
1. Long-Term Scheduler (The Host):
o What it does: This scheduler decides which new customers (processes) are allowed
into the restaurant. It looks at all the people waiting outside and decides which ones
to let in to wait for a table.
o Why it's important: It controls the "degree of multiprogramming," which is the
number of processes held in main memory and competing for the CPU. If it lets too many people in, the
restaurant gets too crowded, and nobody gets served quickly. If it lets too few in,
tables sit empty. The long-term scheduler ensures there's always a good number of
processes ready to run without overwhelming the system.
o How often it runs: Not very often, since a new process isn't created every second.
2. Short-Term Scheduler (The Waiter):
o What it does: This scheduler is the busiest. It's like the waiter who looks at all the
customers sitting at the bar (the ready queue) and decides which one to take to the
next available table (the CPU).
o Why it's important: It's responsible for making sure the CPU is always busy. It
chooses the next process to execute from the ready queue and hands it over to the
CPU.
o How often it runs: Very frequently, typically every few milliseconds, whenever the
running process's time slice expires or it blocks. It's also called the CPU scheduler for this reason.
3. Medium-Term Scheduler (The Parking Attendant):
o What it does: This scheduler manages processes that are temporarily removed from
main memory. If the restaurant gets too full, the manager might ask a few people to
wait outside to make room. This process is called swapping.
o Why it's important: It helps manage memory. If the system's main memory is low,
this scheduler moves a process from memory to the hard drive, freeing up space.
When the process is needed again, it's swapped back into memory. This reduces the
degree of multiprogramming to prevent the system from slowing down.
o How often it runs: Infrequently, only when the memory gets tight.
Multitasking: Preemption vs. Non-Preemption
Multitasking is what allows your computer to run multiple programs at the same time. There are two
ways the operating system can manage this.
Preemption (Time-Sharing):
o This is when the operating system forcefully removes a process from the CPU. Each
process gets a small, fixed amount of time to run. When its time is up, it's sent back
to the ready queue, and another process gets its turn. This makes it look like all
programs are running at the same time, which is how modern operating systems like
Windows and macOS work. (A small sketch of this time-slicing appears after this list.)
Non-Preemption:
o In this method, a process keeps the CPU until it's finished or until it voluntarily gives
up the CPU (for example, if it's waiting for user input). The operating system cannot
interrupt it. This is simpler to manage, but if one process gets stuck in a loop, it can
lock up the entire system.
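The following toy C program simulates preemptive, round-robin time-sharing. Each "process" is just a
counter of remaining work, and the quantum of 3 ticks is an arbitrary choice made for illustration; it
is a sketch of the idea, not how a real scheduler is written.

#include <stdio.h>

#define QUANTUM 3   /* each process may run at most this many ticks before being preempted */

int main(void) {
    int remaining[3] = {5, 2, 7};           /* ticks of work left for processes A, B, C */
    const char *name[3] = {"A", "B", "C"};
    int left = 3;                           /* processes not yet finished */

    while (left > 0) {
        for (int i = 0; i < 3; i++) {
            if (remaining[i] == 0) continue;
            /* run process i for at most one quantum, then preempt it */
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            remaining[i] -= slice;
            printf("process %s ran %d tick(s), %d left\n", name[i], slice, remaining[i]);
            if (remaining[i] == 0) left--;
        }
    }
    return 0;
}

Because every process is forced to give up the CPU after its quantum, no single process can hog the
machine, which is exactly the guarantee preemption provides.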
Key Operations on a Process
The operating system performs several key operations to manage a process's life.
Creation: When you start a program, the operating system creates a process, sets it up in
memory, and puts it in the ready queue. (A short example after this list shows creation
and IPC on a POSIX system.)
Context Switching: This is the act of the operating system stopping one process and starting
another. It involves saving the state of the current process (like where it was in its code) and
loading the state of the new one. This is what makes multitasking feel so smooth.
Blocking: A process stops running and moves to the blocked state when it's waiting for an
event to happen.
Resumption: When the event a blocked process was waiting for finally happens, the OS
moves it back to the ready queue.
Termination: When a process completes its task or is killed by the user, the operating system
terminates it and cleans up all its resources.
Inter-Process Communication (IPC): This is how processes can talk to each other and share
data, even though they are in separate memory spaces.
Process Synchronization: This involves using special tools to make sure that when multiple
processes try to access a shared resource, only one can do so at a time. This prevents chaos
and data corruption.
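Assuming a POSIX system (Linux, macOS, and similar), the short C program below demonstrates three of
the operations above: creation with fork(), inter-process communication through a pipe, and
termination, with the parent waiting for its child to finish.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) return 1;       /* create a one-way channel for IPC        */

    pid_t pid = fork();                  /* creation: clone the current process     */
    if (pid == -1) return 1;

    if (pid == 0) {                      /* child process                           */
        close(fds[0]);                   /* child only writes                       */
        const char *msg = "hello from the child";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        return 0;                        /* termination: child exits, OS reclaims it */
    }

    /* parent process */
    close(fds[1]);                       /* parent only reads                       */
    char buf[64];
    ssize_t n = read(fds[0], buf, sizeof buf);   /* blocks until data arrives       */
    if (n > 0) printf("parent received: %s\n", buf);
    close(fds[0]);
    waitpid(pid, NULL, 0);               /* wait for the child to terminate         */
    return 0;
}

If you compile and run this on a Linux or macOS machine, the parent prints the message it received
through the pipe and exits only after the child has terminated.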
The Managers of a Computer
Imagine a big, busy factory. To keep everything running smoothly, the factory has different managers
with different jobs. An operating system works the same way, using schedulers to manage all the
programs running on your computer.
The Three Schedulers
1. Long-Term Scheduler (The Hiring Manager):
o What it does: This scheduler decides which new projects (programs) are allowed
into the factory. It's like a hiring manager looking at job applications and deciding
who to hire.
o Why it's important: It controls how many projects are active at the same time. If it
hires too many, the factory gets crowded and slows down. If it hires too few, some
machines sit idle. Its goal is to keep a good balance.
o How often it works: Not very often, since new programs aren't started all the time.
2. Short-Term Scheduler (The Foreman):
o What it does: This is the busiest manager. It looks at all the workers (processes) who
are ready to work and assigns the next job to a machine (the CPU).
o Why it's important: It makes sure the CPU is always busy. It's a key part of
multitasking, which makes your computer feel fast and responsive.
o How often it works: Very often, typically every few milliseconds. That's why
it's also called the CPU scheduler.
3. Medium-Term Scheduler (The Storage Manager):
o What it does: This manager handles a problem when the factory gets too full. If
there isn't enough space for everyone to work, this scheduler temporarily moves
some projects (processes) from the main work area (memory) to a storage room (the
hard drive). This process is called swapping.
o Why it's important: It frees up space in memory so the computer doesn't crash or
slow down. When the project is needed again, it's brought back from storage.
o How often it works: Only when the main work area (memory) gets too crowded.
How Computers Multitask
Multitasking is the ability of your computer to run multiple programs at the same time, like playing
music while you're writing a document. There are two main ways this is done:
Preemption (Forceful Multitasking):
o This is when the computer's foreman (the short-term scheduler) gives each worker a
short, fixed amount of time to do their job. When that time is up, the foreman takes
the job away and gives it to another worker. This happens so fast that it looks like all
the workers are doing their jobs at the same time. This is how modern computers
work.
Non-Preemption (Voluntary Multitasking):
o In this method, a worker keeps their job until they're completely finished or they
decide they need a break. The foreman can't take the job away from them. This is
simpler, but if one worker gets stuck, it can stop the entire factory from working.
In a nutshell, preemption is the more reliable and more common method of multitasking because it ensures
that no single program can hog the CPU.
THE PROCESS CONTROL BLOCK AND PROCESS TABLE
In an operating system, the Process Control Block (PCB) and the Process Table are crucial data
structures that work together to manage all running processes. The PCB is a comprehensive record of
a single process's state, while the Process Table is a master list that holds all the PCBs for every active
process.
Process Control Block (PCB)
The Process Control Block (PCB) is a data structure that the operating system uses to store all the
information needed to manage a specific process. It's like a passport or an ID card for a process,
containing everything the OS needs to know to control its execution.
Key attributes stored in a PCB include:
Process State: The current status of the process (e.g., new, ready, running, waiting, or
terminated). This is a critical field that tells the OS what the process is currently doing.
Process ID (PID): A unique numerical identifier assigned to the process. The OS uses this ID
to identify and track the process.
Program Counter (PC): This register holds the memory address of the next instruction to be
executed. When a process is switched out, the value of the program counter is saved in the
PCB so that the process can resume from the exact point where it left off.
CPU Registers: A set of general-purpose registers that hold temporary data for the process.
When a process is swapped out, the values of these registers are saved in its PCB to be
restored when the process is scheduled to run again. This is a fundamental part of context
switching.
Memory Management Information: This includes details about the memory allocated to the
process, such as the base and limit registers or pointers to page tables or segment tables
that manage the process's virtual memory.
Open Files and I/O Information: A list of all files the process has open and information about
any I/O devices it is using.
Scheduling Information: Data that helps the OS decide which process to run next, such as
the process's priority, its CPU burst time, and its arrival time.
Accounting Information: Tracks the amount of CPU time the process has used, the time it
was last active, and other resource usage statistics.
The PCB is stored in a special, protected area of memory that is only accessible to the operating
system kernel.
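As a rough picture, a PCB could be sketched in C as a struct whose fields mirror the list above. All
of the names and sizes here are made up for illustration; a real kernel's equivalent (for example,
Linux's struct task_struct) is far larger and more complex.

/* Illustrative PCB layout mirroring the attributes described above.
 * All names are hypothetical, not any real kernel's definitions.      */
typedef struct pcb {
    int            pid;                /* Process ID                            */
    int            state;              /* new, ready, running, waiting, or
                                          terminated                            */
    unsigned long  program_counter;    /* next instruction to execute           */
    unsigned long  registers[16];      /* saved general-purpose registers       */

    void          *page_table;         /* memory-management information         */
    unsigned long  mem_base, mem_limit;

    int            open_files[16];     /* descriptors for open files/devices    */

    int            priority;           /* scheduling information                */
    unsigned long  cpu_time_used;      /* accounting information                */
    struct pcb    *next;               /* link used by ready/blocked queues     */
} pcb_t;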
Process Table
The Process Table is a data structure that acts as a directory for all active processes in the system. It
is essentially an array or a list of all the Process Control Blocks.
The main role of the Process Table is to provide a quick and efficient way for the operating system to
look up a process's PCB using its Process ID (PID). When a process needs to be managed, the OS uses
the PID to find the corresponding entry in the Process Table, which in turn points to the process's
PCB.
While some simple information like the process state or PID might be stored directly in the Process
Table for faster access, the majority of the detailed information is stored in the individual PCBs. The
Process Table's primary job is to serve as the master index that connects a process's unique ID to its
full, detailed record (the PCB).
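A minimal C sketch of this idea: the Process Table as a fixed-size array of PCB pointers indexed by
PID, with a lookup function. The size limit and the stripped-down pcb_t below are assumptions made
only for illustration.

#include <stddef.h>

#define MAX_PROCESSES 1024              /* arbitrary limit, chosen for the sketch */

typedef struct {
    int pid;
    int state;
    /* ... the rest of the PCB fields would go here ... */
} pcb_t;

static pcb_t *process_table[MAX_PROCESSES];   /* the master index of all PCBs */

/* Look up a process's full record (its PCB) from its PID. */
pcb_t *find_pcb(int pid) {
    if (pid < 0 || pid >= MAX_PROCESSES) return NULL;
    return process_table[pid];          /* NULL if no such process exists */
}

Real operating systems often use a hash table or another indexing structure rather than a plain array,
but the job is the same: map a PID to its PCB quickly.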