OS Module 3

The document discusses process and concurrency management in modern operating systems, detailing the definitions, attributes, and lifecycle of processes and threads. It explains the complexities of managing multiple processes, including CPU scheduling, deadlock handling, and inter-process communication, as well as the benefits of multithreading. Additionally, it highlights issues related to concurrency, such as race conditions and deadlocks, emphasizing the importance of synchronization in ensuring efficient and secure execution.

Process Management and Concurrency Management

Modern operating systems (OS) are designed to handle multiple tasks simultaneously. This is
made possible through process management and concurrency management, which ensure
smooth execution of multiple applications and internal OS tasks. Two key concepts in this area
are processes and threads.

Process in Operating System

A process is a program in execution. For example, when we write a program in C or C++ and
compile it, the compiler creates binary code. The original code and binary code are both
programs. When we actually run the binary code, it becomes a process.

A process is an 'active' entity, in contrast to a program, which is considered a 'passive' entity.


A single program can create many processes when run multiple times; for example, when we open a .exe or binary file multiple times, multiple instances begin (multiple processes are created).
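As a minimal sketch of this (assuming a POSIX system, where the fork() system call duplicates the calling process), one program image can become two processes:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* duplicate the calling process */
    if (pid == 0)
        printf("child  PID=%d\n", (int)getpid());        /* new process */
    else if (pid > 0)
        printf("parent PID=%d created child %d\n", (int)getpid(), (int)pid);
    else
        perror("fork");                  /* process creation failed */
    return 0;
}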
What Does a Process Look Like in Memory?
A process in memory is divided into several distinct sections, each serving a different purpose. Here's how a process typically looks in memory:
Text Section: The text or code segment contains the executable instructions. It is typically a read-only section.
Stack: The stack contains temporary data, such as function parameters, return addresses, and local variables.
Data Section: Contains the global variables.
Heap Section: Memory dynamically allocated to the process during its run time.
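A short illustrative C program (the variable names are ours, purely for illustration) showing where each kind of variable lives in the sections described above:

#include <stdlib.h>

int counter = 42;                         /* data section: global variable */

int square(int x) {                       /* text section: executable code */
    int result = x * x;                   /* stack: local variable         */
    return result;                        /* stack also holds the return address */
}

int main(void) {
    int *buf = malloc(100 * sizeof(int)); /* heap: run-time allocation */
    buf[0] = square(counter);
    free(buf);
    return 0;
}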
Attributes of a Process
A process has several important attributes that help the operating system manage and control it.
These attributes are stored in a structure called the Process Control Block (PCB) (sometimes
called a task control block). The PCB keeps all the key information about the process, including:
Process ID (PID): A unique number assigned to each process so the operating system can
identify it.
Process State: This shows the current status of the process, like whether it is running, waiting,
or ready to execute.
Priority and other CPU Scheduling Information: Data that helps the operating system decide
which process should run next, like priority levels and pointers to scheduling queues.
I/O Information: Information about input/output devices the process is using.
File Descriptors: Information about open files and network connections.
Accounting Information: Tracks how long the process has run, the amount of CPU time used,
and other resource usage data.
Memory Management Information: Details about the memory space allocated to the process,
including where it is loaded in memory and the structure of its memory layout (stack, heap, etc.).
These attributes in the PCB help the operating system control, schedule, and manage each
process effectively.
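A simplified, hypothetical PCB can be sketched as a C structure; real kernels (for example, Linux's task_struct) hold far more fields, and every name below is illustrative:

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int           pid;           /* Process ID (PID)                      */
    proc_state_t  state;         /* Process state                         */
    int           priority;      /* CPU scheduling information            */
    void         *saved_regs;    /* CPU context saved on a context switch */
    int           open_fds[16];  /* file descriptors / I/O information    */
    unsigned long cpu_time_used; /* accounting information                */
    void         *page_table;    /* memory-management information         */
};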

Introduction of Process Management

Process management for a single-tasking or batch-processing system is easy, as only one process is active at a time. With multiple processes (multiprogramming or multitasking) being active, process management becomes complex because the CPU needs to be efficiently utilized by multiple processes. Multiple active processes may share resources like memory and may communicate with each other. This further complicates things, as the operating system has to perform process synchronization.

Please remember that the main advantages of multiprogramming are system responsiveness and better CPU utilization. We can run multiple processes in an interleaved manner on a single CPU. For example, when the current process is busy with I/O, we assign the CPU to some other process.

CPU-Bound vs I/O-Bound Processes


A CPU-bound process requires more CPU time or spends more time in the running state. An I/O-
bound process requires more I/O time and less CPU time. An I/O-bound process spends more
time in the waiting state.
Process scheduling is an integral part of process management in an operating system. It refers to the mechanism the operating system uses to determine which process to run next. The goal of process scheduling is to improve overall system performance by maximizing CPU utilization and throughput and minimizing turnaround and response times.

Process Management Tasks


Process management is a key part of operating systems with multiprogramming or multitasking.

Process Creation and Termination : Process creation involves creating a Process ID, setting up
Process Control Block, etc. A process can be terminated either by the operating system or by the
parent process. Process termination involves clearing all resources allocated to it.
CPU Scheduling : In a multiprogramming system, multiple processes need to get the CPU. It is
the job of Operating System to ensure smooth and efficient execution of multiple processes.
Deadlock Handling : Making sure that the system does not reach a state where two or more
processes cannot proceed due to cyclic dependency on each other.
Inter-Process Communication : Operating System provides facilities such as shared memory and message passing for cooperating processes to communicate (a pipe-based sketch follows this list).
Process Synchronization : Process Synchronization is the coordination of execution of multiple
processes in a multiprogramming system to ensure that they access shared resources (like
memory) in a controlled and predictable manner.
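As a minimal sketch of the message-passing facility mentioned above (assuming a POSIX system; pipe() and fork() are standard calls, the message itself is illustrative):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    char buf[32];

    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                   /* child: writes the message  */
        close(fds[0]);
        write(fds[1], "hello", 6);       /* 6 bytes, including the NUL */
        close(fds[1]);
    } else {                             /* parent: reads the message  */
        close(fds[1]);
        read(fds[0], buf, sizeof buf);
        printf("parent received: %s\n", buf);
        close(fds[0]);
    }
    return 0;
}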
Process states
There are five states that a process may be in, namely:

New: When a process is created for performing a particular task, it is in a New state.

Running: When the instructions of a process are being executed, the process is Running.

Waiting: When a process is waiting for an event to occur, such as receiving a data packet,
waiting for the user's input, or writing to secondary memory, the process is Waiting.

Ready: When a process has been successfully created but hasn't yet been assigned a processor to
begin/resume its execution, it is in the Ready state.

Terminated: When a process has finished its execution, it is in the Terminated state.
Process life cycle

The diagram below illustrates how the states above interact during a process's evolution from its creation to its termination.
Explanation
The following list corresponds to the arrows in the diagram above:

New→Ready: Once a process is created, it moves into the Ready state and is admitted into the
queue of other Ready processes waiting to begin their execution.

Ready→Running: Once the scheduler assigns a process in the Ready queue to a processor, it
begins its execution and moves into the Running state.

Running→Waiting: When a process is Running and executes an instruction that requires it to wait for an event (e.g., waiting for the user's input) to occur, the process is moved to the Waiting state to free up the processor for another process's execution.

Running→Ready: When a process is Running and an interrupt (e.g., the arrival of a higher
priority process) occurs, it is moved to the Ready state to allow the higher priority process to
begin execution.

Waiting→Ready: Once the event a process was waiting for is completed, the process is moved
back into the Ready state.

Running→Terminated: Once a process completes its execution successfully, it moves into the Terminated state.
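The arrows above can be summarized as a small transition function. This is an illustrative sketch (the state and event names are ours), not how a real kernel encodes its states:

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } state_t;
typedef enum { ADMIT, DISPATCH, IO_WAIT, INTERRUPT, IO_DONE, EXIT } event_t;

/* Returns the next state for a legal transition, or -1 for an illegal one. */
int next_state(state_t s, event_t e) {
    if (s == NEW     && e == ADMIT)     return READY;      /* New→Ready          */
    if (s == READY   && e == DISPATCH)  return RUNNING;    /* Ready→Running      */
    if (s == RUNNING && e == IO_WAIT)   return WAITING;    /* Running→Waiting    */
    if (s == RUNNING && e == INTERRUPT) return READY;      /* Running→Ready      */
    if (s == WAITING && e == IO_DONE)   return READY;      /* Waiting→Ready      */
    if (s == RUNNING && e == EXIT)      return TERMINATED; /* Running→Terminated */
    return -1;
}

For example, next_state(RUNNING, INTERRUPT) yields READY, matching the Running→Ready arrow.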
Thread in Operating System
A thread is a single sequence stream within a process. Threads are also called lightweight
processes as they possess some of the properties of processes. Each thread belongs to exactly one
process.
• In an operating system that supports multithreading, a process can consist of many threads. But threads can run in parallel only if there is more than one CPU; otherwise, the threads have to context-switch on the single CPU.
• All threads belonging to the same process share - code section, data section, and OS
resources (e.g. open files and signals)
• But each thread has its own (thread control block) - thread ID, program counter, register
set, and a stack
• Any process in an operating system that supports threads can execute multiple threads; that is, a single process can have multiple threads, as the sketch below shows.
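A minimal sketch of this, assuming POSIX threads (compile with -pthread): two threads share the process's global data, while each gets its own stack.

#include <pthread.h>
#include <stdio.h>

int shared = 0;                          /* data section: visible to both threads */

void *worker(void *arg) {
    int local = *(int *)arg;             /* lives on this thread's own stack */
    shared += local;                     /* unsynchronized update; see the race-condition section */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);     /* 3 here, but unsynchronized updates are not safe in general */
    return 0;
}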
Why Do We Need Thread?
• Threads run concurrently, which improves application performance. Each such
thread has its own CPU state and stack, but they share the address space of the process
and the environment. For example, when we work on Microsoft Word or Google Docs,
we notice that while we are typing, multiple things happen together (formatting is
applied, page is changed and auto save happens).
• Threads can share common data so they do not need to use inter-process communication.
Like the processes, threads also have states like ready, executing, blocked, etc.
• Priority can be assigned to the threads just like the process, and the highest priority thread
is scheduled first.
• Each thread has its own Thread Control Block (TCB). Like the process, a context switch occurs for the thread, and register contents are saved in the TCB. As threads share the same
address space and resources, synchronization is also required for the various activities of
the thread.
Components of Threads
These are the basic components of a thread:

Stack Space: Stores local variables, function calls, and return addresses specific to the thread.
Register Set: Holds temporary data and intermediate results for the thread's execution.
Program Counter: Tracks the current instruction being executed by the thread.
Types of Thread in Operating System
Threads are of two types. These are described below.
• User Level Thread
• Kernel Level Thread
1. User-Level Thread
A user-level thread is a type of thread that is not created using system calls. The kernel plays no part in the management of user-level threads, and they can be easily implemented by the user. Because the kernel sees only the enclosing process, it treats a multithreaded process as a single-threaded entity. Let's look at the advantages and disadvantages of user-level threads.

2. Kernel-Level Threads
A kernel-level thread is a type of thread that the operating system recognizes and manages directly. The kernel maintains its own thread table to keep track of all threads in the system, and the operating system kernel handles their creation, scheduling, and management. Kernel-level threads have a somewhat longer context-switching time.

Difference Between Process and Thread


The primary difference is that threads within the same process run in a shared memory space,
while processes run in separate memory spaces. Threads are not independent of one another like
processes are, and as a result, threads share with other threads their code section, data section,
and OS resources (like open files and signals). But, like a process, a thread has its own program
counter (PC), register set, and stack space.
What is Multi-Threading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads.
MS Word uses multiple threads: one thread to format the text, another thread to process inputs,
etc.

Multithreading is a technique used in operating systems to improve the performance and responsiveness of computer systems. Multithreading allows multiple threads (i.e., lightweight processes) to share the resources of a single process, such as the CPU, memory, and I/O devices.

Multithreading can be done without OS support, as seen in Java's early "green threads" model, where the Java Virtual Machine (JVM) provided its own thread management. Such threads, also called user-level threads, are managed independently of the underlying operating system.
The application itself manages the creation, scheduling, and execution of threads without relying on
the operating system's kernel. The application contains a threading library that handles thread
creation, scheduling, and context switching. The operating system is unaware of User-Level
threads and treats the entire process as a single-threaded entity.

Benefits of Thread in Operating System


Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately while the other threads continue.
Faster context switch: Context switch time between threads is lower compared to the process
context switch. Process context switching requires more overhead from the CPU.
Effective utilization of multiprocessor system: If we have multiple threads in a single process,
then we can schedule multiple threads on multiple processors. This will make process execution
faster.
Resource sharing: Resources like code, data, and files can be shared among all threads within a
process. Note: Stacks and registers can't be shared among the threads. Each thread has its own
stack and registers.
Communication: Communication between multiple threads is easier, as the threads share a common address space, while processes must follow specific communication techniques to communicate with one another.
Enhanced throughput of the system: If a process is divided into multiple threads, and each
thread function is considered as one job, then the number of jobs completed per unit of time is
increased, thus increasing the throughput of the system.
Context Switch
• Interrupts cause the operating system to change a CPU from its current task and
to run a kernel routine.
• happen frequently on general-purpose systems.
• When an interrupt occurs, the system needs to save the current context of the
process currently running on the CPU ,it can restore that context when its
processing is done.
• The context is represented in the PCB of the process; it includes the value of the
CPU registers, the process state and memory-management information
• Switching the CPU to another process requires performing a state save of the
current process and a state restore of a different process. This task is known as a
context switch
• When a context switch occurs, the kernel saves the context of the old process in
its PCB and loads the saved context of the new process scheduled to run
• Context-switch time is pure overhead , system does no useful work while
switching
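Context switches can be observed (though not performed) from user space. A hedged sketch using getrusage(), assuming a Linux or BSD system where the ru_nvcsw (voluntary) and ru_nivcsw (involuntary) counters are maintained:

#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void) {
    struct rusage ru;
    sleep(1);                            /* blocking typically causes a voluntary switch */
    getrusage(RUSAGE_SELF, &ru);
    printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
    printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
    return 0;
}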
CPU Switch From Process to Process

Concurrency in Operating System


Concurrency in operating systems refers to the capability of an OS to handle more than one task
or process at the same time, thereby enhancing efficiency and responsiveness. It may be
supported by multithreading or multiprocessing, whereby more than one process or thread is executed simultaneously or in an interleaved fashion.

Thus, more than one program may run simultaneously on shared resources of the system, such as
CPU, memory, and so on. This helps optimize performance and reduce idle times while
improving the responsiveness of applications, generally in multitasking contexts. Good concurrency handling is crucial for avoiding deadlocks and race conditions and for the uninterrupted execution of tasks. It involves techniques such as coordinating the execution of processes, memory allocation, and scheduling to maximize throughput.

What is Concurrency in OS?


Concurrency in an operating system refers to the ability to execute multiple processes or threads
simultaneously, improving resource utilization and system efficiency. It allows several tasks to
be in progress at the same time, either by running on separate processors or through context
switching on a single processor. Concurrency is essential in modern OS design to handle
multitasking, increase system responsiveness, and optimize performance for users and
applications.
There are several motivations for allowing concurrent execution:

Physical resource sharing: Multiuser environments, since hardware resources are limited
Logical resource sharing: Shared files (the same piece of information)
Computation speedup: Parallel execution
Modularity: Dividing system functions into separate processes
Principles of Concurrency
Both interleaved and overlapped processes can be viewed as examples of concurrent processing, and both present the same problems.
The relative speed of execution cannot be predicted. It depends on the following:

The activities of other processes


The way the operating system handles interrupts
The scheduling policies of the operating system
Problems in Concurrency
Sharing global resources: Sharing of global resources safely is difficult. If two processes both
make use of a global variable and both perform read and write on that variable, then the order in
which various read and write are executed is critical.
Optimal allocation of resources: It is difficult for the operating system to manage the allocation
of resources optimally.
Locating programming errors: It is very difficult to locate a programming error because results are usually not reproducible.
Locking the channel: It may be inefficient for the operating system to simply lock the channel and prevent its use by other processes.
Issues of Concurrency
Non-atomic: Operations that are non-atomic but interruptible by multiple processes can cause
problems.
Race conditions: A race condition occurs if the outcome depends on which of several processes reaches a point first.
Blocking: Processes can block waiting for resources. A process could be blocked for a long period of time waiting for input from a terminal. If the process is required to periodically update some data, this would be very undesirable.
Starvation: Starvation occurs when a process is perpetually denied the service it needs to make progress.
Deadlock: Deadlock occurs when two processes are blocked and hence neither can proceed to
execute.

Race Condition
A race condition occurs when multiple threads read and write the same variable, i.e., they have access to some shared data and try to change it at the same time. In such a scenario, the threads are “racing” each other to access/change the data. This is a major security vulnerability.
What is Race Condition?
A race condition is a situation that may occur inside a critical section. This happens when the
result of multiple thread execution in a critical section differs according to the order in which the
threads execute. Race conditions in critical sections can be avoided if the critical section is
treated as an atomic instruction. Also, proper thread synchronization using locks or atomic
variables can prevent race conditions.
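A classic demonstration, assuming POSIX threads: two threads increment one shared counter without synchronization. The read-modify-write is not atomic, so lost updates usually leave the result below the expected 2000000.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                        /* shared data */

void *incr(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, incr, NULL);
    pthread_create(&t2, NULL, incr, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* usually less than 2000000 */
    return 0;
}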

This is a major security vulnerability [CWE-362]; by manipulating the timing of actions, an attacker can make anomalous results appear. This vulnerability arises during a TOCTOU (time-of-check, time-of-use) window.

Real-Time Examples of Race Conditions


Example 1 - Consider an ATM Withdrawal
Imagine Ram and his friend Sham both have access to the same bank account. They both try to withdraw Rs. 500 at the same time from different ATMs. The system checks the balance and sees there’s enough money for both withdrawals. Without proper synchronization, the system might allow both transactions to go through, even if the balance is only enough for one, leaving the account overdrawn.
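The ATM scenario can be sketched in C (assuming POSIX threads; the account, the amounts, and the artificial delay are all illustrative): both threads pass the balance check before either subtracts, so the account can go negative.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int balance = 500;                       /* shared account: Rs. 500 */

void *withdraw(void *arg) {
    int amount = *(int *)arg;
    if (balance >= amount) {             /* time of check          */
        usleep(1000);                    /* widens the race window */
        balance -= amount;               /* time of use            */
    }
    return NULL;
}

int main(void) {
    pthread_t ram, sham;
    int amt = 500;
    pthread_create(&ram,  NULL, withdraw, &amt);
    pthread_create(&sham, NULL, withdraw, &amt);
    pthread_join(ram, NULL);
    pthread_join(sham, NULL);
    printf("final balance = %d\n", balance); /* often -500: overdrawn */
    return 0;
}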

Example 2 - Consider a Printer Queue


Imagine two people sending print jobs at the same time. If the printer isn’t managed properly, the
print jobs could get mixed up, with pages from one person’s document being printed in the
middle of another’s.

Key Terms in a Race Condition


Critical Section: A part of the code where shared resources are accessed. It is critical because if multiple processes enter this section at the same time, data corruption and errors can result.
Synchronization: The process of controlling how and when multiple processes or threads access shared resources, ensuring that only one can enter the critical section at a time.
Mutual Exclusion (Mutex): A mutex is like a lock that ensures only one process can access a resource at a time. If a process holds the lock, others must wait their turn, preventing race conditions (see the sketch after this list).
Deadlock: A situation where two or more processes are stuck waiting for each other's resources, bringing them all to a standstill.
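A minimal sketch of the mutex fix for the ATM race above, assuming POSIX threads: the check and the update become one indivisible critical section.

#include <pthread.h>

int balance = 500;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *withdraw(void *arg) {
    int amount = *(int *)arg;
    pthread_mutex_lock(&lock);           /* enter the critical section */
    if (balance >= amount)               /* check and update now run   */
        balance -= amount;               /* as one indivisible step    */
    pthread_mutex_unlock(&lock);         /* leave the critical section */
    return NULL;
}

With the lock held across both the check and the update, at most one of the two withdrawals can succeed.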
What is a Race Condition Vulnerability?
A race condition vulnerability is a situation where two or more processes or threads in a system simultaneously access shared resources. If the processes lack coordination, this leads to unexpected behaviour or security issues.

Let's understand this concept with a real-life scenario. Imagine two people, A and B, trying to write in the same notebook at the same time without agreeing on who writes first; their writings get mixed up, leading to confusion.

Now let's look at the same concept in a computer system. Imagine a system where one process's duty is to read a file and another process's duty is to write to it. If both try to access the same file at the same time without proper control, the result is incorrect or incomplete data, leading to errors.

Common Vulnerabilities Leading to Race Conditions


In systems where multiple processes access the shared memory, failure to control how memory is
accessed can lead to conflicting operations, resulting in incorrect data being read or written to the
system.
If multiple processes access the same file at the same time without a proper access mechanism, the file's content can become inconsistent.
In some scenarios, certain operations need to happen in a specific order. If the sequence is not followed and multiple processes run out of order, race conditions can occur, leading to errors or vulnerabilities.
By identifying these common vulnerabilities, developers can greatly reduce or eliminate race conditions through proper locking mechanisms, careful sequencing, and strong synchronization practices.

How to Detect Race Conditions?


Review the Code: Careful inspection of the code can help identify areas where shared resources are accessed without proper locking or synchronization.
Static Analysis Tools : Specialized tools analyze the code to automatically detect potential race
conditions by identifying unsafe access to shared resources.
Testing with Multiple Threads/Processes: Simulate scenarios with many threads or processes
running simultaneously. If unexpected behaviors like data inconsistencies or crashes occur, a race
condition might be present.
Logging and Monitoring: Adding logs to track resource access can reveal out-of-order operations, signaling a race condition.
How to Prevent Race Condition Attacks?
Use Locks: Implement locks (like mutexes) to ensure that only one process or thread can access
a resource at a time, preventing conflicting operations.
Proper Synchronization: Ensure processes or threads work in a coordinated sequence when
accessing shared data. Techniques like semaphores help achieve this.
Avoid Time-of-Check to Time-of-Use (TOCTOU) Vulnerabilities: Reduce the gap between checking a condition (like permissions) and acting on it, minimizing opportunities for an attacker to change the state in between (a sketch follows this list).
Priority Management: Prioritize certain processes or threads so they get controlled access to
critical resources, preventing uncoordinated access.
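A hedged sketch of closing a TOCTOU window when creating a file, assuming a POSIX system (the filename is illustrative): instead of checking for existence with access() and opening afterwards (two steps an attacker can race), open with O_CREAT | O_EXCL so the check and the use happen in a single system call.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Racy pattern to avoid: if (access("data.tmp", F_OK) != 0) then open() later. */

    /* Atomic pattern: creation fails if the file already exists. */
    int fd = open("data.tmp", O_CREAT | O_EXCL | O_WRONLY, 0600);
    if (fd == -1) {
        perror("open");                  /* file already existed, or another error */
        return 1;
    }
    write(fd, "ok\n", 3);
    close(fd);
    return 0;
}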
