8 - Intro To Threads and Concurrency

The document provides an introduction to threads and concurrency in operating systems, explaining that threads are lightweight processes that can run independently within a single process, sharing the same memory image. It discusses the benefits of multi-threading, including improved performance through concurrency and parallelism, as well as the challenges such as race conditions and the need for mutual exclusion using locks and semaphores. Additionally, it highlights the importance of managing shared data access to prevent issues like deadlocks.


CS F372: Operating Systems

Introduction to Threads and Concurrency


Instructor-in-Charge: Arnab K. Paul
BITS Pilani, K K Birla Goa Campus
Processes and Threads
● So far, we have studied single-threaded programs

● Recap: process execution
  ○ CPU executes instruction by instruction, trapping to the OS as needed
  ○ PC points to the next instruction to run
  ○ SP points to the current top of the stack
  ○ Other registers hold the rest of the process context

● A program can also have multiple threads of execution

● What is a thread?
What are Threads?
● Threads = lightweight processes

● Why? A process may want to run multiple copies of itself
  ○ If one copy blocks on a blocking system call, another copy can still run
  ○ Multiple copies can run in parallel on multiple CPU cores to speed up work

● Why not have multiple child processes running the same program?
  ○ Too much memory consumed by identical memory images
  ○ IPC is needed to share information across processes

● A process can create multiple threads (default: a single thread)
  ○ Multiple threads share the same memory image of the process, saving memory
  ○ Threads run independently on the same code (if one blocks, another can still run)
  ○ Threads can run in parallel on multiple cores at the same time
  ○ Threads can share data more easily
Multi-threaded Process
● A thread is like another copy of the process that executes independently of the parent

● Threads share the same code, global/static data, and heap

● Each thread has a separate stack for independent function calls

● Each thread has a separate PC, so each may be running different code

● Each thread has a separate CPU context during execution
Multi-threaded Process

[Figure: address space of a multi-threaded process — shared code, data, and heap, with a separate stack per thread]

Concurrency vs Parallelism

● Concurrency: running multiple threads/processes at the same time, even on a single CPU core, by interleaving their executions

● Parallelism: running multiple threads/processes in parallel on different CPU cores

● With multiple threads, a process can get better performance on multicore systems via parallelism
  ○ Example: matrix multiplication can be easily parallelized, with different threads operating on different parts of the matrix

● Even without parallelism (single core), concurrency among threads ensures effective use of the CPU when one of the threads blocks
  ○ Example: if one thread of a server blocks on an I/O read, another can still run

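As a concrete illustration of the matrix-multiplication example above, here is a minimal sketch (not taken from the course material; N, NUM_THREADS, and multiply_rows are illustrative names): each thread computes a disjoint block of rows of the result, so the threads need no locking.

/* Hedged sketch: parallelising matrix multiplication by giving each thread
 * a contiguous block of rows of the result matrix C = A * B. */
#include <pthread.h>
#include <stdio.h>

#define N 4
#define NUM_THREADS 2

static double A[N][N], B[N][N], C[N][N];

struct range { int start_row, end_row; };     /* rows [start_row, end_row) */

static void *multiply_rows(void *arg) {
    struct range *r = arg;
    for (int i = r->start_row; i < r->end_row; i++)
        for (int j = 0; j < N; j++) {
            C[i][j] = 0.0;
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { A[i][j] = 1.0; B[i][j] = 2.0; }

    pthread_t tid[NUM_THREADS];
    struct range ranges[NUM_THREADS];
    int rows_per_thread = N / NUM_THREADS;

    for (int t = 0; t < NUM_THREADS; t++) {
        ranges[t].start_row = t * rows_per_thread;
        ranges[t].end_row   = (t == NUM_THREADS - 1) ? N : (t + 1) * rows_per_thread;
        pthread_create(&tid[t], NULL, multiply_rows, &ranges[t]);
    }
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(tid[t], NULL);

    printf("C[0][0] = %.1f\n", C[0][0]);      /* 8.0 for these inputs */
    return 0;
}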


POSIX Threads (man pthreads)

● In Linux, the POSIX threads (pthreads) library allows creation of multiple threads within a process

● Each thread is given a start function where its execution begins
  ○ Threads execute independently of the parent after creation
  ○ The parent can wait for threads to finish (optional)

● Several such threading libraries exist in different programming languages



Thread Creation and Wait

● Compile with the -lpthread flag

● Two possible outputs (the order of A and B depends on thread scheduling):

  Output           Output
  main: begin      main: begin
  B          OR    A
  A                B
  main: end        main: end
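
The slide's program itself is not reproduced here; a minimal sketch along the same lines (assuming the usual structure: main creates two threads that print "A" and "B", then joins both) could look like this. Compile with gcc prog.c -lpthread.

/* Hedged reconstruction of the thread-creation-and-wait example. */
#include <pthread.h>
#include <stdio.h>

static void *mythread(void *arg) {
    printf("%s\n", (char *)arg);      /* each thread prints its argument */
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    printf("main: begin\n");

    /* Each new thread begins execution in its start function, mythread(). */
    pthread_create(&p1, NULL, mythread, "A");
    pthread_create(&p2, NULL, mythread, "B");

    /* Parent waits (optionally) for both threads to finish. */
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);

    printf("main: end\n");
    return 0;
}

Because the scheduler may run the two threads in either order, both outputs shown above are possible.
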
Threads with Shared Data

● Shared global counter


● Two threads update same
counter 1 lac times
● What is the expected output?
9 CS F372: Operating Systems BITS Pilani, K K Birla Goa Campus
Threads with Shared Data

● Shared global counter


● Two threads update same counter 1 lac times
● What is the expected output?

Why is the answer not 2 * 100000?

10 CS F372: Operating Systems BITS Pilani, K K Birla Goa Campus
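
A hedged reconstruction of the shared-counter program (the slide's exact code is not shown; worker is an assumed name): two threads increment an unprotected global counter 100000 times each.

/* Hedged reconstruction: two threads race on an unprotected global counter. */
#include <pthread.h>
#include <stdio.h>

static int counter = 0;               /* shared global data */

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter = counter + 1;        /* NOT atomic: load, increment, store */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* expected 200000, usually less */
    return 0;
}

Run repeatedly, the printed value is usually smaller than 200000 because concurrent, non-atomic increments overwrite each other.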


Shared Data Access
● counter = counter + 1 is compiled into multiple instructions
  ○ Load the counter variable from memory into a register
  ○ Increment the register
  ○ Store the register back into the memory location of the counter variable

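For illustration only (the exact instructions depend on the compiler, flags, and architecture; the comment below shows typical unoptimised x86-64 output), the single C statement expands to a load / increment / store sequence:

/* Illustrative only: a typical unoptimised x86-64 translation of the increment. */
int counter = 0;

void increment(void) {
    counter = counter + 1;
    /* Roughly compiles to:
     *   mov  eax, DWORD PTR [counter]   ; load counter into a register
     *   add  eax, 1                     ; increment the register
     *   mov  DWORD PTR [counter], eax   ; store the register back to memory
     * A context switch between any two of these steps can lose an update. */
}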


Shared Data Concurrent Access
● When two threads run counter = counter + 1 concurrently for the first iteration:
  ○ Counter is 0 initially
  ○ T1 loads counter into a register and increments the register
  ○ Context switch: T1's register (value 1) is saved
  ○ T2 runs and loads counter value 0 from memory
  ○ T2 increments its register and stores 1 to memory
  ○ T1 resumes and stores its register value back to counter
  ○ The counter value is rewritten to 1 again
  ○ The final counter value is 1, but the expected value is 2

● We need the power of atomicity



Mutual Exclusion
● Incorrect execution of code due to concurrency is called a race condition
  ○ Due to unfortunate timing of context switches, the atomicity of a data update is violated
  ○ Race conditions happen when there is concurrent execution on shared data

● We require mutual exclusion on some parts of user or OS code
  ○ Concurrent execution of these parts by multiple threads/processes should not be permitted
  ○ Parts of a program that must be executed with mutual exclusion for correct operation are called critical sections

● How do we access critical sections with mutual exclusion? Using locks
Using Locks
● Locks are special variables that provide mutual exclusion
  ○ Provided by threading libraries
  ○ One can call lock/acquire and unlock/release functions on a lock

● When a thread T1 acquires a lock, another thread T2 cannot acquire the same lock
  ○ Execution of T2 stops at the lock statement
  ○ T2 can proceed only after T1 releases the lock

● Acquire lock → critical section → release lock ensures mutual exclusion in the critical section

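A minimal sketch, assuming the pthread mutex API (pthread_mutex_lock/pthread_mutex_unlock) as the lock, showing the earlier shared-counter example with the increment placed inside a critical section:

/* Hedged sketch: the counter race fixed with a pthread mutex. */
#include <pthread.h>
#include <stdio.h>

static int counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* acquire: other threads block here */
        counter = counter + 1;         /* critical section */
        pthread_mutex_unlock(&lock);   /* release: one waiting thread may proceed */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* now reliably 200000 */
    return 0;
}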


Implementing Locks Using Semaphores
● Semaphore: a synchronization primitive
  ○ A variable with an underlying counter
  ○ The semaphore counter can be initialized to a suitable value

● Two functions operate on a semaphore variable
  ○ Down/wait decrements the counter by one and blocks the calling thread if the resulting value is negative
  ○ Up/post increments the counter by one and wakes up any one thread that is blocked on the semaphore

● The counter value cannot be accessed in any other way
  ○ For example, one cannot check whether the counter is positive and only then call down



Implementing Locks Using Semaphores
● Consider a semaphore variable "sem" initialized to the value 1

● Multiple threads in a program can use the semaphore for mutual exclusion
  ○ down(sem)         // counter becomes 0; any further downs will wait here
  ○ critical section  // accessed with mutual exclusion
  ○ up(sem)           // counter incremented back to 1; a waiting thread is woken up

● Semaphores used as locks in this way are called binary semaphores
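
A minimal sketch using the POSIX semaphore API (sem_init / sem_wait / sem_post, corresponding to the down/up operations above) as a binary semaphore guarding the shared counter:

/* Hedged sketch: a semaphore initialized to 1 used as a lock (binary semaphore). */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static int counter = 0;
static sem_t sem;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);                /* down: counter 1 -> 0, others block */
        counter = counter + 1;         /* critical section */
        sem_post(&sem);                /* up: counter 0 -> 1, wake one waiter */
    }
    return NULL;
}

int main(void) {
    sem_init(&sem, 0, 1);              /* shared among threads, initial value 1 */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter); /* 200000 with mutual exclusion */
    sem_destroy(&sem);
    return 0;
}
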
Watch out for deadlocks
● Deadlock: threads are stuck in the blocked state without making progress
  ○ Example:
    ■ a thread (say Thread 1) is holding a lock (L1) and waiting for another one (L2); unfortunately,
    ■ the thread (Thread 2) that holds lock L2 is waiting for L1 to be released

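A hedged sketch of the scenario above: two pthread mutexes (named L1 and L2 here) acquired in opposite orders by two threads. The sleep calls only widen the timing window; they are not required for the deadlock.

/* Hedged sketch: lock-ordering deadlock. Each thread holds one lock and
 * waits forever for the lock held by the other. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t L1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t L2 = PTHREAD_MUTEX_INITIALIZER;

static void *thread1(void *arg) {
    pthread_mutex_lock(&L1);           /* holds L1 ... */
    sleep(1);
    pthread_mutex_lock(&L2);           /* ... and waits for L2 */
    pthread_mutex_unlock(&L2);         /* never reached once deadlocked */
    pthread_mutex_unlock(&L1);
    return NULL;
}

static void *thread2(void *arg) {
    pthread_mutex_lock(&L2);           /* holds L2 ... */
    sleep(1);
    pthread_mutex_lock(&L1);           /* ... and waits for L1 */
    pthread_mutex_unlock(&L1);
    pthread_mutex_unlock(&L2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);            /* once deadlocked, this never returns */
    pthread_join(t2, NULL);
    printf("done (no deadlock this time)\n");
    return 0;
}

Making both threads acquire the locks in the same global order (always L1 before L2) avoids this deadlock.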
