
Introduction to Operating Processes & CPU Scheduling


CPU scheduling is how modern operating systems decide which task runs on the CPU and when, allocating this crucial resource efficiently. Understanding these principles is
essential for anyone interested in the inner workings of computing systems, from
basic functionality to advanced performance optimization.
Understanding the Process Concept
At its core, an operating system manages processes. A process is essentially a program in execution—a dynamic entity that requires system
resources like CPU time, memory, files, and I/O devices to complete its task. Each process operates independently, equipped with its own
memory space and a unique identifier.

Processes don't just exist; they transition through various states during their lifecycle:

• New: The process is being created.

• Ready: The process is waiting to be assigned to a processor.

• Running: Instructions are being executed by the CPU.

• Waiting (Blocked): The process is waiting for some event to occur (e.g., I/O completion or a signal).

• Terminated: The process has finished execution.

These states and their transitions form the fundamental lifecycle of any task executed by your computer, illustrating how an operating system
constantly manages the flow of work.
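
To make the lifecycle concrete, here is a minimal Python sketch of the five-state model; the state names and the transition table mirror the list above, while the function name is purely illustrative:

from enum import Enum, auto

class ProcessState(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Legal transitions in the classic five-state model.
TRANSITIONS = {
    ProcessState.NEW: {ProcessState.READY},            # admitted
    ProcessState.READY: {ProcessState.RUNNING},        # dispatched
    ProcessState.RUNNING: {ProcessState.READY,         # preempted
                           ProcessState.WAITING,       # blocks on I/O
                           ProcessState.TERMINATED},   # exits
    ProcessState.WAITING: {ProcessState.READY},        # I/O completes
    ProcessState.TERMINATED: set(),
}

def transition(current, target):
    """Return the new state, or raise if the move is not allowed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target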
Process Control Block (PCB): The Process Identity Card
Every process in an operating system is represented by a Process Control Block (PCB). Think of the PCB as the process's identity card or
passport—a data structure containing vital information about the process. This block is crucial for the operating system to manage multiple
processes concurrently and ensure smooth operation.

The PCB stores a wealth of information, including:

• Process ID (PID): A unique identifier for the process.

• Process State: The current state (New, Ready, Running, Waiting, Terminated).

• CPU Registers: All CPU register values, saved when the process is interrupted.

• Memory Management Information: Pointers to page tables or segment tables.

• I/O Status Information: List of I/O devices allocated to the process, list of open files.

• Accounting Information: CPU usage time, time limits, job or process numbers.

The PCB is absolutely essential for context switching, the mechanism by which the CPU switches from one process to another. When a process
is interrupted, its current state is saved in its PCB, and the state of the next process to be run is loaded from its PCB. This allows for seamless
multitasking.
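
As an illustration of the fields listed above, here is a simplified PCB and context switch in Python. The field names and the dictionary standing in for CPU registers are deliberate simplifications, since a real OS does this in kernel code:

from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                          # Process ID
    state: str = "New"                                # current process state
    registers: dict = field(default_factory=dict)     # saved CPU registers
    page_table: object = None                         # memory-management info
    open_files: list = field(default_factory=list)    # I/O status info
    cpu_time_used: float = 0.0                        # accounting info

def context_switch(cpu_registers: dict, old: PCB, new: PCB) -> None:
    # Save the state of the interrupted process into its PCB...
    old.registers = dict(cpu_registers)
    old.state = "Ready"
    # ...then load the state of the next process from its PCB.
    cpu_registers.clear()
    cpu_registers.update(new.registers)
    new.state = "Running"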
Process Scheduling & Schedulers
Process scheduling is the activity of determining which process runs at what time on the CPU. It is a critical component of multitasking
operating systems, aiming to maximize CPU utilization while providing acceptable response times for users.

Operating systems employ different types of schedulers, each responsible for a specific aspect of process management:

• Long-Term Scheduler (Job Scheduler): Selects processes from the job queue and loads them into memory for execution. It controls the
degree of multiprogramming (the number of processes in memory).

• Medium-Term Scheduler: Primarily used in time-sharing systems, it swaps processes in and out of memory to reduce the degree of
multiprogramming or improve the mix of processes running. This is often part of the swapping process.

• Short-Term Scheduler (CPU Scheduler): The most frequent scheduler, it selects a process from the ready queue and allocates the CPU to it.
It makes decisions frequently, whenever a process switches states.

These schedulers work in concert to manage the flow of processes through the system, ensuring efficient and fair allocation of CPU resources.
Threads & Interprocess Communication (IPC)

Threads: Lightweight Parallelism

Threads are often called "lightweight processes" because they share the same code section, data section, and other OS resources (like open files and signals) with other threads belonging to the same process. However, each thread has its own program counter, register set, and stack.

This shared-memory model allows for efficient communication and parallelism within a single application, making threads ideal for tasks that can be broken down into smaller, concurrently executable units.

Interprocess Communication (IPC)

While processes are isolated for security and stability, they often need to communicate and synchronize with each other. Interprocess Communication (IPC) mechanisms facilitate this interaction. Common IPC methods include:

• Pipes: Simple, unidirectional communication channels.

• Shared Memory: Processes map a region of memory into their address space for direct, fast communication.

• Message Queues: Processes exchange messages through a system-managed queue.

• Sockets: Used for network communication between processes on the same or different machines.

Real-World Application

Consider a web server. When it receives multiple client requests, it doesn't create a new heavyweight process for each one. Instead, it typically spawns multiple threads within a single process to handle these requests concurrently. These threads can easily share data (like cached web pages) and resources, significantly improving performance and responsiveness. IPC mechanisms further allow this web server process to communicate with other services, such as a database server.
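
To mirror the web-server example, here is a small Python sketch in which several threads of one process share a cache, using a lock for synchronization; the request handler and its stand-in page content are hypothetical:

import threading

cache = {}                       # data shared by all threads in the process
cache_lock = threading.Lock()

def handle_request(url):
    """Serve a request, reusing the shared cache when possible."""
    with cache_lock:                             # synchronize shared access
        if url in cache:
            return cache[url]
    page = f"<html>content of {url}</html>"      # stand-in for real work
    with cache_lock:
        cache[url] = page
    return page

threads = [threading.Thread(target=handle_request, args=(f"/page{i}",))
           for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()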
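
And a minimal pipe-based IPC sketch between two separate processes, using Python's standard multiprocessing module; the worker function and its message are illustrative:

from multiprocessing import Process, Pipe

def worker(conn):
    conn.send("result from child")     # write into the pipe
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()     # two connected endpoints
    p = Process(target=worker, args=(child_end,))
    p.start()
    print(parent_end.recv())           # read the child's message
    p.join()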
CPU Scheduling: Why It Matters
CPU scheduling is fundamental to the very concept of a modern operating system. While a single CPU can only execute one instruction at a
time, scheduling makes it seem as though multiple programs are running simultaneously, enabling multitasking and enhanced user experience.

The primary goals of CPU scheduling are:

• Maximize CPU Utilization: Keep the CPU as busy as possible, preventing idle time.

• Maximize Throughput: Maximize the number of processes completed per unit of time.

• Minimize Waiting Time: Reduce the amount of time processes spend in the ready queue.

• Minimize Turnaround Time: Minimize the total time taken to execute a process, from submission to completion.

• Minimize Response Time: For interactive systems, minimize the time it takes from submission of a request until the first response is
produced.

• Ensure Fairness: Give each process a fair share of CPU time, preventing starvation.

Without effective CPU scheduling, systems would be sluggish, unresponsive, and incapable of handling the complex demands of modern
applications. It is the core mechanism that allows your computer to run a web browser, a word processor, and music playback all at once,
seamlessly.
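
These goals can be quantified with two standard definitions: turnaround time = completion time - arrival time, and waiting time = turnaround time - CPU burst time. A tiny Python helper makes the arithmetic explicit; the example numbers are made up:

def metrics(arrival, burst, completion):
    """Turnaround = completion - arrival; waiting = turnaround - burst."""
    turnaround = completion - arrival
    waiting = turnaround - burst
    return turnaround, waiting

# A process arrives at t=0, needs 3 units of CPU, and finishes at t=9,
# so it spent 6 units waiting in the ready queue.
print(metrics(arrival=0, burst=3, completion=9))   # (9, 6)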
Key CPU Scheduling Algorithms Overview
Various algorithms are employed to manage CPU allocation, each with its own strengths and weaknesses. The choice of algorithm can significantly impact system performance and responsiveness.

First-Come, First-Served (FCFS)
Description: Processes are executed in the order they arrive, using a simple queue.
Characteristics: Non-preemptive. Can lead to long waiting times for short processes if a long process arrives first (the convoy effect). Fair but inefficient for interactive systems.

Shortest Job First (SJF)
Description: The CPU is assigned to the process with the smallest next CPU burst.
Characteristics: Optimal for minimizing average waiting time. Non-preemptive. Requires predicting future CPU burst times, which is difficult in practice.

Shortest Remaining Time First (SRTF)
Description: The preemptive version of SJF. If a new process arrives with a shorter remaining burst time than the currently executing process, the CPU is preempted.
Characteristics: Preemptive. Provides better response for short jobs. Higher overhead due to frequent context switching. Also requires burst-time prediction.

Round Robin (RR)
Description: Each process gets a small unit of CPU time (a time quantum). When the quantum expires, the process is preempted and added to the end of the ready queue.
Characteristics: Preemptive. Fair and good for time-sharing systems. The time quantum size is critical: too large and RR degenerates into FCFS, too small and context-switch overhead increases.

Priority Scheduling
Description: Processes are executed based on assigned priorities; the highest-priority process runs first.
Characteristics: Can be preemptive or non-preemptive. Can suffer from starvation (low-priority processes might never execute). Aging (gradually increasing the priority of waiting processes) can mitigate starvation.

Each algorithm makes different trade-offs between fairness, throughput, and response time, chosen based on the specific needs of the operating system and its applications.
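
As a concrete illustration of one of these trade-offs, here is a minimal Round Robin simulation in Python. It assumes all processes arrive at time 0 and ignores context-switch cost, so it is a sketch rather than a faithful scheduler:

from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR over (pid, burst) pairs that all arrive at t=0.
    Returns the completion time of each pid."""
    queue = deque(bursts)
    clock, done = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)       # run for at most one quantum
        clock += run
        remaining -= run
        if remaining > 0:
            queue.append((pid, remaining))  # preempted: back of the queue
        else:
            done[pid] = clock               # finished
    return done

print(round_robin([("A", 5), ("B", 3), ("C", 8)], quantum=2))
# {'B': 9, 'A': 12, 'C': 16}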
Multiple Processor Scheduling
The advent of multi-core processors and parallel computing has introduced new complexities and opportunities for CPU scheduling. Instead of managing a
single CPU, operating systems now need to efficiently distribute tasks across multiple processing units.

Challenges of Multi-CPU Systems

Scheduling in a multi-processor environment involves deciding not only when a process runs, but also which CPU it runs on. Key challenges include:

• Maintaining load balance across all processors.

• Minimizing cache invalidation and data coherency issues.

• Ensuring synchronization among processes accessing shared resources.

Load Balancing

Distributing processes evenly among processors is crucial for optimal performance. Techniques include:

• Push Migration: A specific task periodically checks the load on each processor and moves processes from overloaded to idle or less busy processors.

• Pull Migration: An idle processor pulls a waiting task from a busy processor.

Processor Affinity

When a process has been running on a specific processor, its data may reside in that processor's cache. Processor affinity attempts to keep a process running on the same processor, or at least on the same physical chip, to take advantage of cached data and reduce cache misses. This can be soft affinity (the OS tries to keep the process there) or hard affinity (the process is explicitly bound to a specific processor).

Modern operating systems dynamically adapt their scheduling strategies to these complexities, balancing the need for load distribution with the benefits of
processor affinity to maximize overall system efficiency.
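
On Linux, hard affinity is even exposed to user programs; for example, Python's os module wraps the sched_setaffinity system call. This sketch assumes a Linux system, where these functions are available:

import os

pid = 0                               # 0 means "the calling process"
print(os.sched_getaffinity(pid))      # e.g. {0, 1, 2, 3}
os.sched_setaffinity(pid, {0})        # bind this process to CPU 0
print(os.sched_getaffinity(pid))      # {0}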
Scheduling Criteria & Challenges
Effective CPU scheduling is a delicate balancing act, with operating systems striving to optimize several performance criteria while overcoming inherent challenges.

Key Scheduling Criteria:

• CPU Utilization: Keeping the CPU busy as much as possible.


• Throughput: The number of processes completed per unit of time.
• Turnaround Time: Total time from process submission to completion.
• Waiting Time: Total time a process spends in the ready queue.
• Response Time: Time from request submission to the first response (for interactive systems).

Challenges in Scheduling:

• Starvation: A low-priority process might never get CPU time if higher-priority processes continuously arrive. This is a common issue with pure priority-based scheduling.
• Context Switch Overhead: Switching between processes involves saving the state of the old process and loading the state of the new one, incurring a performance cost. Frequent context
switching can degrade performance.

• Predicting Burst Times: Algorithms like SJF require knowledge of future CPU burst times, which is impossible to know accurately. Heuristics are often used, but they are not perfect (a common one is sketched after this list).
• Fairness vs. Efficiency: Achieving perfect fairness (every process gets equal CPU time) can sometimes conflict with maximizing efficiency (e.g., prioritizing short jobs).
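
A common burst-prediction heuristic is exponential averaging, which estimates the next burst as tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the most recent measured burst and tau(n) is the previous estimate. A minimal sketch, with made-up burst values:

def predict_next_burst(prev_estimate, actual_burst, alpha=0.5):
    """Exponential averaging: tau(n+1) = alpha*t(n) + (1-alpha)*tau(n).
    alpha weighs the latest measurement against the running estimate."""
    return alpha * actual_burst + (1 - alpha) * prev_estimate

estimate = 10.0
for observed in [6, 4, 6, 4]:          # measured CPU bursts
    estimate = predict_next_burst(estimate, observed)
    print(round(estimate, 2))           # 8.0, 6.0, 6.0, 5.0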

Common Solutions:

• Aging: Gradually increasing the priority of processes that have been waiting in the system for a long time, preventing starvation (a minimal sketch follows this list).
• Multilevel Queue Scheduling: Dividing the ready queue into several separate queues, often with different scheduling algorithms for each, based on process characteristics (e.g., interactive
vs. batch).

• Feedback Queues: Allowing processes to move between queues based on their CPU burst behavior, adapting to their needs.
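
Here is a minimal sketch of aging, assuming lower numbers mean higher priority and an illustrative boost rate of one priority level per 10 time units waited:

def pick_with_aging(ready, clock, boost_every=10):
    """Pick the best process, where effective priority improves
    (numerically decreases) the longer a process has waited.
    ready: list of (base_priority, arrival_time, pid)."""
    def effective(base, arrived):
        waited = clock - arrived
        return base - waited // boost_every   # aging lowers the number
    return min(ready, key=lambda p: effective(p[0], p[1]))

ready = [(1, 95, "high-prio newcomer"), (5, 0, "long-waiting job")]
print(pick_with_aging(ready, clock=100))      # the long-waiting job wins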
Summary: The Heartbeat of Operating Systems

Process management and CPU scheduling are not just abstract concepts; they are the very heartbeat of modern operating systems, enabling
multitasking, responsiveness, and efficient resource utilization.

We've explored:

• The life cycle of a process and its distinct states.


• The vital role of the Process Control Block (PCB) in maintaining process identity and facilitating context switching.
• The different schedulers (long-term, medium-term, short-term) that coordinate process flow.
• The distinction between processes and threads, and the importance of Interprocess Communication (IPC).
• The core goals of CPU scheduling and a variety of scheduling algorithms like FCFS, SJF, SRTF, Round Robin, and Priority scheduling.
• The complexities and solutions for Multiple Processor Scheduling.
• The critical scheduling criteria and the ongoing challenges faced by OS designers.

By understanding these fundamental principles, you gain insight into how your computer seamlessly juggles countless tasks, from streaming
video to running complex applications. Effective scheduling is paramount to system performance, user experience, and the overall stability of
computing environments.
"The art of scheduling is the art of compromise."

We encourage you to explore these algorithms further, perhaps even by simulating them or examining open-source operating system code, to truly
appreciate their impact on the digital world around us.
