Operating Processes & CPU Scheduling
Processes don't just exist; they transition through various states during their lifecycle:
• New: The process is being created.
• Ready: The process is loaded into memory and waiting to be assigned to the CPU.
• Running: The process's instructions are being executed on the CPU.
• Waiting (Blocked): The process is waiting for some event to occur (e.g., I/O completion or a signal).
• Terminated: The process has finished execution and its resources are being reclaimed.
These states and their transitions form the fundamental lifecycle of any task executed by your computer, illustrating how an operating system
constantly manages the flow of work.
Process Control Block (PCB): The Process Identity Card
Every process in an operating system is represented by a Process Control Block (PCB). Think of the PCB as the process's identity card or
passport—a data structure containing vital information about the process. This block is crucial for the operating system to manage multiple
processes concurrently and ensure smooth operation.
• Process State: The current state (New, Ready, Running, Waiting, Terminated).
• CPU Registers: All CPU register values, saved when the process is interrupted.
• I/O Status Information: List of I/O devices allocated to the process, list of open files.
• Accounting Information: CPU usage time, time limits, job or process numbers.
The PCB is absolutely essential for context switching, the mechanism by which the CPU switches from one process to another. When a process
is interrupted, its current state is saved in its PCB, and the state of the next process to be run is loaded from its PCB. This allows for seamless
multitasking.
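As a rough sketch, a PCB can be modeled as a plain record and a context switch as "save the old context, load the new one." The field names and register set below are illustrative, not an actual kernel layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy Process Control Block; fields mirror the list above."""
    pid: int
    state: str = "New"                       # New, Ready, Running, Waiting, Terminated
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)
    cpu_time_used: int = 0                   # accounting information

def context_switch(current: PCB, nxt: PCB, cpu_registers: dict) -> dict:
    """Save the running process's context into its PCB, load the next one's."""
    current.registers = dict(cpu_registers)  # save old context into its PCB
    current.state = "Ready"
    nxt.state = "Running"
    return dict(nxt.registers)               # restore new context onto the CPU

p1 = PCB(pid=1, registers={"pc": 100, "acc": 7})
p2 = PCB(pid=2, registers={"pc": 200, "acc": 0})
cpu = context_switch(p1, p2, {"pc": 142, "acc": 9})
print(p1.registers)  # p1's saved context: {'pc': 142, 'acc': 9}
print(cpu)           # p2's restored context: {'pc': 200, 'acc': 0}
```

Note that both the save and the restore go through the PCBs, which is exactly why the PCB must outlive any single run of the process.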
Process Scheduling & Schedulers
Process scheduling is the activity of determining which process runs at what time on the CPU. It is a critical component of multitasking
operating systems, aiming to maximize CPU utilization while providing acceptable response times for users.
Operating systems employ different types of schedulers, each responsible for a specific aspect of process management:
• Long-Term Scheduler (Job Scheduler): Selects processes from the job queue and loads them into memory for execution. It controls the
degree of multiprogramming (the number of processes in memory).
• Medium-Term Scheduler: Primarily used in time-sharing systems, it swaps processes in and out of memory to reduce the degree of
multiprogramming or improve the mix of processes running. This is often part of the swapping process.
• Short-Term Scheduler (CPU Scheduler): The most frequently invoked scheduler, it selects a process from the ready queue and allocates the CPU to it. It runs whenever the executing process blocks for I/O, is preempted by an interrupt, or terminates.
These schedulers work in concert to manage the flow of processes through the system, ensuring efficient and fair allocation of CPU resources.
Goals of CPU Scheduling
• Maximize CPU Utilization: Keep the CPU as busy as possible, preventing idle time.
• Maximize Throughput: Maximize the number of processes completed per unit of time.
• Minimize Waiting Time: Reduce the amount of time processes spend in the ready queue.
• Minimize Turnaround Time: Minimize the total time taken to execute a process, from submission to completion.
• Minimize Response Time: For interactive systems, minimize the time it takes from submission of a request until the first response is
produced.
• Ensure Fairness: Give each process a fair share of CPU time, preventing starvation.
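The timing criteria above follow directly from a process's timeline. A small worked example (the times are made up for illustration):

```python
# Timeline for one process, all times in the same unit:
arrival, first_run, completion, burst = 0, 4, 10, 5

turnaround = completion - arrival    # total time from submission to completion
waiting = turnaround - burst         # time spent sitting in the ready queue
response = first_run - arrival       # delay until the first response is produced

print(turnaround, waiting, response)  # 10 5 4
```

Schedulers cannot optimize all three at once for every process; lowering one process's waiting time usually raises another's.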
Without effective CPU scheduling, systems would be sluggish, unresponsive, and incapable of handling the complex demands of modern
applications. It is the core mechanism that allows your computer to run a web browser, a word processor, and music playback all at once,
seamlessly.
Key CPU Scheduling Algorithms Overview
Various algorithms are employed to manage CPU allocation, each with its own strengths and weaknesses. The choice of algorithm can significantly impact system performance and responsiveness.
• First-Come, First-Served (FCFS): Processes are executed in the order they arrive, using a simple queue. Non-preemptive. Can lead to long waiting times for short processes if a long process arrives first (the convoy effect); fair but inefficient for interactive systems.
• Shortest Job First (SJF): The CPU is assigned to the process with the smallest next CPU burst. Non-preemptive. Optimal for minimizing average waiting time, but requires predicting future CPU burst times, which is difficult in practice.
• Shortest Remaining Time First (SRTF): The preemptive version of SJF: if a new process arrives with a shorter remaining burst time than the currently executing process, the CPU is preempted. Provides better response for short jobs, at the cost of higher overhead from frequent context switching; also requires burst-time prediction.
• Round Robin (RR): Each process gets a small unit of CPU time (the time quantum). When the quantum expires, the process is preempted and added to the end of the ready queue. Preemptive; fair and well suited to time-sharing systems. The quantum size is critical: too large and RR degenerates into FCFS, too small and switching overhead dominates.
• Priority Scheduling: Processes are executed based on assigned priorities; the highest-priority process runs first. Can be preemptive or non-preemptive. Can suffer from starvation (low-priority processes might never execute); aging (gradually increasing the priority of waiting processes) can mitigate this.
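The convoy effect can be made concrete with a minimal sketch (burst times are made up, and all jobs are assumed to arrive at time 0) comparing average waiting times under FCFS and SJF:

```python
def fcfs_waiting(bursts):
    """Average waiting time when jobs run in the given order (all arrive at t=0)."""
    elapsed, total = 0, 0
    for b in bursts:
        total += elapsed   # this job waited for everything before it
        elapsed += b
    return total / len(bursts)

def sjf_waiting(bursts):
    """SJF is simply FCFS over the bursts sorted shortest-first."""
    return fcfs_waiting(sorted(bursts))

bursts = [24, 3, 3]          # one long job arrives first: the convoy effect
print(fcfs_waiting(bursts))  # 17.0 -> short jobs stuck behind the long one
print(sjf_waiting(bursts))   # 3.0  -> optimal average waiting time
```

The gap (17.0 vs 3.0 time units) is entirely due to ordering; the total work is identical.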
Each algorithm makes different trade-offs between fairness, throughput, and response time, chosen based on the specific needs of the operating system and its applications.
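Round Robin lends itself to the same kind of toy simulation (illustrative burst lengths, all arrivals at t=0), showing how the quantum forces preemption and cycles jobs through the ready queue:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return (pid, finish_time) pairs in completion order; all jobs arrive at t=0."""
    queue = deque(enumerate(bursts))        # (pid, remaining burst)
    t, done = 0, []
    while queue:
        pid, rem = queue.popleft()
        run = min(quantum, rem)             # run for at most one quantum
        t += run
        if rem > run:
            queue.append((pid, rem - run))  # preempted: back of the ready queue
        else:
            done.append((pid, t))           # finished within this quantum
    return done

print(round_robin([5, 3, 1], quantum=2))    # [(2, 5), (1, 8), (0, 9)]
```

Note how the shortest job (pid 2) finishes first even though it arrived last in the queue order, which is why RR gives good response times for short interactive tasks.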
Multiple Processor Scheduling
The advent of multi-core processors and parallel computing has introduced new complexities and opportunities for CPU scheduling. Instead of managing a
single CPU, operating systems now need to efficiently distribute tasks across multiple processing units.
Two concerns dominate here: load balancing (spreading work evenly across cores so none sits idle) and processor affinity (keeping a process on the same core so its cached data stays warm). Modern operating systems dynamically adapt their scheduling strategies to these complexities, balancing the need for load distribution against the benefits of processor affinity to maximize overall system efficiency.
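One simple way to sketch load balancing is greedy least-loaded assignment. The task costs below are made up, and a real kernel scheduler would also weigh affinity, priorities, and migration cost:

```python
def assign_least_loaded(tasks, n_cpus):
    """Greedy load balancing: each task goes to the currently least-loaded CPU."""
    loads = [0] * n_cpus
    placement = []
    for cost in tasks:
        cpu = loads.index(min(loads))  # pick the least-loaded core
        loads[cpu] += cost
        placement.append(cpu)
    return placement, loads

placement, loads = assign_least_loaded([4, 2, 1, 3, 2], n_cpus=2)
print(placement, loads)  # [0, 1, 1, 1, 0] [6, 6]
```

Even this toy version shows the tension with affinity: a task is placed wherever load is lowest, not where its data might already be cached.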
Scheduling Criteria & Challenges
Effective CPU scheduling is a delicate balancing act, with operating systems striving to optimize several performance criteria while overcoming inherent challenges.
Challenges in Scheduling:
• Starvation: A low-priority process might never get CPU time if higher-priority processes continuously arrive. This is a common issue with pure priority-based scheduling.
• Context Switch Overhead: Switching between processes involves saving the state of the old process and loading the state of the new one, incurring a performance cost. Frequent context
switching can degrade performance.
• Predicting Burst Times: Algorithms like SJF require knowledge of future CPU burst times, which is impossible to know accurately. Heuristics are often used, but they are not perfect.
• Fairness vs. Efficiency: Achieving perfect fairness (every process gets equal CPU time) can sometimes conflict with maximizing efficiency (e.g., prioritizing short jobs).
Common Solutions:
• Aging: Gradually increasing the priority of processes that have been waiting in the system for a long time, preventing starvation.
• Multilevel Queue Scheduling: Dividing the ready queue into several separate queues, often with different scheduling algorithms for each, based on process characteristics (e.g., interactive
vs. batch).
• Feedback Queues: Allowing processes to move between queues based on their CPU burst behavior, adapting to their needs.
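Aging can be sketched in a few lines. In this toy model (priority values and the boost amount are arbitrary), a new high-priority job arrives every round, yet the low-priority job still runs once its aged priority overtakes them:

```python
def pick_with_aging(waiting, age_boost=3):
    """One scheduling round: run the highest-priority process, then age the rest.
    `waiting` maps pid -> priority (a higher number means higher priority)."""
    chosen = max(waiting, key=waiting.get)
    del waiting[chosen]
    for pid in waiting:
        waiting[pid] += age_boost   # aging: everyone still waiting gains priority
    return chosen

waiting = {"low": 1}
order = []
for i in range(6):
    waiting[f"hi{i}"] = 5           # a fresh priority-5 job arrives every round
    order.append(pick_with_aging(waiting))
print(order.index("low"))           # 2 -> "low" runs in the third round
```

With `age_boost=0` the same loop starves "low" indefinitely, since a priority-5 job is always available.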
Summary: The Heartbeat of Operating Systems
Process management and CPU scheduling are not just abstract concepts; they are the very heartbeat of modern operating systems, enabling
multitasking, responsiveness, and efficient resource utilization.
We've explored:
• The process lifecycle and the PCB that makes context switching possible.
• The long-term, medium-term, and short-term schedulers and how they cooperate.
• The core scheduling algorithms (FCFS, SJF, SRTF, Round Robin, and Priority) and their trade-offs.
• Scheduling across multiple processors, and challenges such as starvation, context-switch overhead, and burst-time prediction.
By understanding these fundamental principles, you gain insight into how your computer seamlessly juggles countless tasks, from streaming
video to running complex applications. Effective scheduling is paramount to system performance, user experience, and the overall stability of
computing environments.
"The art of scheduling is the art of compromise."
We encourage you to explore these algorithms further, perhaps even by simulating them or examining open-source operating system code, to truly
appreciate their impact on the digital world around us.