Chapter 4: Threads
Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Overview
Multicore Programming
Multithreading Models
Thread Libraries
Implicit Threading
Threading Issues
Operating System Examples
Objectives
To introduce the notion of a thread—a fundamental unit of CPU utilization that forms the basis of
multithreaded computer systems
To discuss the APIs for the Pthreads, Windows, and Java thread libraries
To explore several strategies that provide implicit threading
To examine issues related to multithreaded programming
To cover operating system support for threads in Windows and Linux
Single and Multithreaded Processes
A thread is a basic unit of CPU utilization
It comprises a thread ID, a program counter, a register set, and a stack.
It shares with other threads belonging to the same process its code
section, data section, and other operating-system resources, such as
open files and signals.
Motivation
A web server accepts client requests for web pages, images, sound, and
so forth. A busy web server may have several (perhaps thousands of)
clients concurrently accessing it.
If the web server ran as a traditional single-threaded process, it would be
able to service only one client at a time, and a client might have to wait a
very long time for its request to be serviced.
Multithreaded Server Architecture
When a server receives a message, it services the message using a separate thread. This allows the server to
service multiple requests concurrently, as sketched below.
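A minimal Java sketch of this architecture, assuming a simple TCP server; the port number, class name, and handler logic are illustrative only:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Each accepted client connection is handed to a separate thread, so the
// server can immediately return to listening for new requests.
public class ThreadedServer {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(8080);   // illustrative port
        while (true) {
            final Socket client = server.accept();      // wait for the next request
            new Thread(new Runnable() {
                public void run() {
                    handle(client);                      // service it in its own thread
                }
            }).start();
        }
    }

    static void handle(Socket client) {
        // ... read the request and write a response (omitted) ...
        try { client.close(); } catch (IOException e) { e.printStackTrace(); }
    }
}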
Benefits
Responsiveness – may allow continued execution if part of process is
blocked, especially important for user interfaces
Resource Sharing – threads share resources of process, easier than
shared memory or message passing
Economy – cheaper than process creation, thread switching lower
overhead than context switching
Scalability – process can take advantage of multiprocessor architectures
Multicore Programming
A system is parallel if it can perform more than one task
simultaneously.
In contrast, a concurrent system supports more than one task by
allowing all the tasks to make progress.
Thus, it is possible to have concurrency without parallelism
Concurrency vs. Parallelism
Concurrent execution on a single-core system: the tasks are interleaved over time on a single core.
Parallelism on a multi-core system: the tasks run simultaneously on separate cores.
Multicore Programming Challenges
Designers of operating systems must write scheduling algorithms that use multiple processing
cores to allow parallel execution.
For application programmers, the challenge is to modify existing
programs as well as design new programs that are multithreaded
Multicore Programming Challenges
In general, five areas present challenges: identifying tasks, balance, data splitting, data
dependency, and testing and debugging.
Types of Parallelism
Types of Parallelism: Data parallelism
Data parallelism focuses on distributing subsets of the same data
across multiple computing cores and performing the same operation on
each core. For example, summing the contents of an array of size N.
On a single-core system, one thread would simply sum the elements
[0] . . . [N − 1].
On a dual-core system, however:
Thread A, running on core 0, could sum the elements [0] . . . [N/2 − 1]
Thread B, running on core 1, could sum the elements [N/2] . . . [N − 1]
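A minimal Java sketch of this idea; the SumTask class and array contents are illustrative, not from the text:

// Data parallelism: two threads perform the same operation (summing)
// on different halves of the same array.
public class ParallelSum {
    static class SumTask implements Runnable {
        int[] a; int from, to; long result;
        SumTask(int[] a, int from, int to) { this.a = a; this.from = from; this.to = to; }
        public void run() {
            for (int i = from; i < to; i++) result += a[i];
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int[] a = new int[1000];
        for (int i = 0; i < a.length; i++) a[i] = i;

        SumTask lower = new SumTask(a, 0, a.length / 2);        // elements [0 .. N/2 - 1]
        SumTask upper = new SumTask(a, a.length / 2, a.length); // elements [N/2 .. N - 1]
        Thread tA = new Thread(lower);
        Thread tB = new Thread(upper);
        tA.start(); tB.start();
        tA.join();  tB.join();    // wait for both partial sums

        System.out.println("Total = " + (lower.result + upper.result));
    }
}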
Types of Parallelism: Task parallelism
Task parallelism involves distributing not data but tasks (threads)
across multiple computing cores.
Each thread is performing a unique operation. Different threads may be
operating on the same data, or they may be operating on different data.
An example of task parallelism might involve two threads, each
performing a unique statistical operation on the array of elements.
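A minimal Java sketch of task parallelism, assuming the two statistical operations are a sum and a maximum; the names and data are illustrative:

// Task parallelism: two threads perform *different* operations
// (here, sum and maximum) on the same array.
public class TaskParallel {
    public static void main(String[] args) throws InterruptedException {
        final int[] a = {7, 2, 9, 4, 1, 8};
        final long[] sum = new long[1];
        final int[] max = { Integer.MIN_VALUE };

        Thread sumThread = new Thread(new Runnable() {
            public void run() {
                for (int v : a) sum[0] += v;                 // one statistical operation
            }
        });
        Thread maxThread = new Thread(new Runnable() {
            public void run() {
                for (int v : a) if (v > max[0]) max[0] = v;  // a different operation
            }
        });

        sumThread.start(); maxThread.start();
        sumThread.join();  maxThread.join();
        System.out.println("sum = " + sum[0] + ", max = " + max[0]);
    }
}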
User Threads and Kernel Threads
User threads - management done by user-level threads library
Three primary thread libraries:
POSIX Pthreads
Windows threads
Java threads
User Threads and Kernel Threads
Kernel threads - Supported by the Kernel
Examples – virtually all general purpose operating systems, including:
Windows
Solaris
Linux
Tru64 UNIX
Mac OS X
Multithreading Models
1. Many-to-One
2. One-to-One
3. Many-to-Many
Many-to-One
Many user-level threads mapped to single
kernel thread.
One thread blocking causes all to block
Multiple threads cannot run in parallel on a
multicore system because only one may be in the
kernel at a time
Few systems currently use this model
Examples:
Solaris Green Threads
GNU Portable Threads
One-to-One
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel thread
More concurrency than many-to-one
Number of threads per process sometimes restricted due to overhead
Examples are Windows, Linux, Solaris 9 and later
Many-to-Many Model
Allows many user level threads to be mapped to many kernel threads
Allows the operating system to create a sufficient number of kernel
threads
Solaris prior to version 9 and Windows with the ThreadFiber package
Two-level Model
Similar to M:M, except that it allows a user thread to be bound to a
kernel thread
Example is Solaris 8 and earlier
Thread Libraries
Thread library provides programmer with API for creating and managing
threads
Two primary ways of implementing
Library entirely in user space
Kernel-level library supported by the OS
Three main thread libraries are in use today
1. POSIX Pthreads
2. Windows
3. Java (we will focus on Java threads in the rest of these slides)
Java Threads
All Java programs comprise at least a single thread of control—even a
simple Java program consisting of only a main() method runs as a
single thread
Java threads are managed by the JVM
Java threads are available on any system that provides a JVM
including Windows, Linux, and Mac OS X.
The Java thread API is available for Android applications as well.
Java Threads
Java threads may be created by:
a) Extending the Thread class
b) Implementing the Runnable interface
The more commonly used technique is to define a class that implements the
Runnable interface; both approaches are sketched below.
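A minimal sketch of approach (a), extending the Thread class; the Worker class name is illustrative:

// Approach (a): derive from Thread and override run()
class Worker extends Thread {
    public void run() {
        System.out.println("I am a worker thread");
    }
}

// Usage: new Worker().start();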
Java Threads
When a class implements Runnable, it must define a run() method.
The code implementing the run() method is what runs as a separate
thread.
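A minimal sketch of approach (b), defining a class that implements Runnable; the Task class name is illustrative:

// Approach (b): implement Runnable; the code in run() executes as a separate thread
class Task implements Runnable {
    public void run() {
        System.out.println("I am a worker thread");
    }
}

// Usage: new Thread(new Task()).start();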
Java Multithreaded Program
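A sketch of a complete multithreaded Java program in this style, assuming a worker that sums the integers from 0 to an upper bound; the Summation and Driver class names are illustrative:

// The parent thread creates a worker thread, starts it, and waits
// for it to finish with join() before using the result.
class Summation implements Runnable {
    private final int upper;
    private long sum;

    Summation(int upper) { this.upper = upper; }

    public void run() {
        for (int i = 0; i <= upper; i++)
            sum += i;
    }

    long getSum() { return sum; }
}

public class Driver {
    public static void main(String[] args) throws InterruptedException {
        Summation task = new Summation(100);
        Thread worker = new Thread(task);
        worker.start();          // begin execution of run() in a new thread
        worker.join();           // wait for the worker to terminate
        System.out.println("sum = " + task.getSum());
    }
}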
Implicit Threading
With the continued growth of multicore processing, applications
containing hundreds—or even thousands—of threads are looming on
the horizon.
Designing such applications is not a trivial undertaking: as the number of threads
increases, ensuring program correctness becomes more difficult with explicit
threads.
A better way to design multithreaded applications is to transfer
the creation and management of threading from application developers
to compilers and run-time libraries.
Implicit Threading
Implicit threading means that the creation and management of threads is
done by compilers and run-time libraries rather than by programmers.
Examples of implicit threading:
Thread Pools
OpenMP
Grand Central Dispatch
Intel Threading Building Blocks (TBB)
Thread Pools
The idea of a thread pool is to create a number of threads at process startup and
place them into a pool, where they sit and wait for work.
When a server receives a request, it awakens a thread from this pool—if
one is available—and passes it the request for service.
Once the thread completes its service, it returns to the pool and awaits
more work
If the pool contains no available thread, the server waits until one
becomes free.
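A sketch of this pattern using Java's java.util.concurrent thread-pool API; the pool size and task body are illustrative:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDemo {
    public static void main(String[] args) {
        // Create a pool of 4 worker threads at startup.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Each submitted task is run by an available thread from the pool;
        // when the task finishes, that thread returns to the pool for more work.
        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.submit(new Runnable() {
                public void run() {
                    System.out.println("Servicing request " + id);
                }
            });
        }

        pool.shutdown();  // accept no new tasks; let queued tasks run to completion
    }
}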
Thread Pools
Advantages:
1. Slightly faster to service a request with an existing thread than create a
new thread
2. Allows the number of threads in the application(s) to be bound to the size
of the pool. Important on systems that cannot support a large number of
concurrent threads.
3. Separating the task to be performed from the mechanics of creating the task allows
different strategies for running the task
Windows API supports thread pools
OpenMP
OpenMP is a set of compiler directives as well as an API for programs
written in C, C++, or FORTRAN that provides support for parallel
programming in shared-memory environments.
OpenMP identifies parallel regions as blocks of code that may run in
parallel
Application developers insert compiler directives into their code at
parallel regions, and these directives instruct the OpenMP run-time
library to execute the region in parallel
OpenMP
#pragma omp parallel
creates as many threads as there are processing cores.

#pragma omp parallel for
for (i = 0; i < N; i++)
    c[i] = a[i] + b[i];
runs the iterations of the for loop in parallel, dividing them among the threads.
Other Approaches
Other notable approaches include:
The java.util.concurrent package, which supports implicit thread creation and
management.
Intel's Threading Building Blocks (TBB) and several products from Microsoft.
Grand Central Dispatch (GCD), a technology for Apple's Mac OS X and iOS operating
systems, which allows application developers to identify sections of code to run in
parallel.
Homework
Write answers to only the even-numbered exercise
questions from 4.1 to 4.16.
Only handwritten, neat and clean homework will be
accepted.
Page size and page quality throughout the course
should remain the same.
Throughout this course, homework should be
submitted to the CR in the next class, after the class
attendance.
End of Chapter 4