
●​ 4.1 Overview​

○​ Thread Definition​

○​ Single-threaded vs. Multithreaded Process​

●​ 4.1.1 Motivation​

○​ Multithreaded Applications​

○​ Multicore Systems​

○​ Web Servers​

●​ 4.1.2 Benefits​

○​ Responsiveness​

○​ Resource Sharing​

○​ Economy​

○​ Scalability​

●​ 4.2 Multicore Programming​

○​ Multicore Systems​

○​ Concurrency vs. Parallelism​

●​ 4.2.1 Programming Challenges​

○​ Identifying Tasks​

○​ Balance​

○​ Data Splitting​

○​ Data Dependency​

○​ Testing and Debugging​

●​ 4.2.2 Types of Parallelism​

○​ Data Parallelism​

○​ Task Parallelism​

●​ Summary (Multicore Programming)​

●​ 4.3 Multithreading Models​


○​ Overview​

●​ 4.3.1 Many-to-One Model​

○​ Description​

○​ Advantages​

○​ Disadvantages​

●​ 4.3.2 One-to-One Model​

○​ Description​

○​ Advantages​

○​ Disadvantages​

●​ 4.3.3 Many-to-Many Model​

○​ Description​

○​ Advantages​

○​ Disadvantages​

○​ Variation (Two-level Model)​

●​ Summary (Multithreading Models)​

●​ 4.4 Thread Libraries​

○​ Overview​

●​ 4.4.1 Pthreads​

○​ Example (C Program using Pthreads)​

○​ Key Functions​

○​ Attributes​

●​ 4.4.2 Windows Threads​

○​ Example (C Program using Windows API)​

○​ Key Functions​

●​ 4.4.3 Java Threads​

○​ Thread Creation​

○​ Synchronization​
●​ 4.4.3.1 Java Executor Framework​

○​ Example (Java Executor Framework)​

○​ ExecutorService​

○​ Callable and Future​

●​ Summary (Thread Libraries)​

●​ 4.5 Implicit Threading​

○​ Overview​

●​ 4.5.1 Thread Pools​

○​ Benefits of Thread Pools​

○​ Windows API Example​

○​ Java Thread Pools​

●​ 4.5.2 Fork-Join​

○​ Fork-Join in Java​

●​ 4.5.3 OpenMP​

○​ Example​

●​ 4.5.4 Grand Central Dispatch (GCD)​

○​ GCD Features​

○​ Example (Swift)​

●​ 4.5.5 Intel Threading Building Blocks (TBB)​

○​ Example (Parallel For Loop)​

●​ Summary (Implicit Threading)​

●​ 4.6 Threading Issues​

○​ 4.6.1 The fork() and exec() System Calls​

○​ 4.6.2 Signal Handling​

○​ 4.6.3 Thread Cancellation​

○​ 4.6.4 Thread-Local Storage (TLS)​

○​ 4.6.5 Scheduler Activations​


●​ Summary (Threading Issues)​

●​ 4.7 Operating-System Examples​

○​ 4.7.1 Windows Threads​

○​ 4.7.2 Linux Threads​

●​ Summary (Operating-System Examples)​

●​ 4.8 Summary​

Threads & Concurrency - Chapter Overview

4.1 Overview
●​ Thread Definition: A thread is a unit of CPU utilization consisting of a thread ID, program counter (PC),
register set, and a stack. It shares resources like code, data, and open files with other threads in the same
process.​

●​ Single-threaded vs. Multithreaded Process: A single-threaded process has one thread, while a
multithreaded process has multiple threads, enabling it to perform more tasks simultaneously.​

4.1.1 Motivation
●​ Multithreaded Applications: Most modern applications use multiple threads to perform tasks like:​

○​ Creating photo thumbnails.​

○​ A web browser running tasks like displaying images and fetching data in parallel.​

○​ A word processor using threads for UI, spelling checks, and background operations.​

●​ Multicore Systems: Multithreading helps leverage multicore systems by splitting tasks across multiple
CPUs.​

●​ Web Servers: Instead of creating new processes for each request, a multithreaded server creates threads
to handle multiple requests simultaneously.​

4.1.2 Benefits
1.​ Responsiveness: Multithreading allows applications to remain responsive, even when performing lengthy
operations in the background (e.g., responsive UIs).​

2.​ Resource Sharing: Threads within a process share the same memory and resources, making
communication easier and more efficient.​

3.​ Economy: Threads are cheaper to create and switch between than processes, saving memory and time.​
4.​ Scalability: Multithreading allows applications to run parallel tasks on multiple processors, improving
performance on multicore systems.​

4.2 Multicore Programming


Multicore Systems

●​ Multicore Systems: These systems have multiple processing cores on a single chip. Each core appears as
a separate CPU to the operating system, making them ideal for multithreaded programming, which
maximizes the use of these cores.​

●​ Concurrency vs. Parallelism:​

○​ Concurrency: Multiple tasks make progress over time but not simultaneously (e.g., on a
single-core system).​

○​ Parallelism: Tasks are performed simultaneously on multiple cores (e.g., multicore system).​

4.2.1 Programming Challenges


Designing programs for multicore systems introduces five major challenges:

1.​ Identifying Tasks: Find independent tasks in the program that can be divided and executed in parallel.​

2.​ Balance: Ensure tasks perform equal work to avoid inefficient use of resources.​

3.​ Data Splitting: Data must also be divided across cores to ensure efficient parallel processing.​

4.​ Data Dependency: Tasks that depend on shared data must be synchronized to avoid conflicts.​

5.​ Testing and Debugging: Parallel programs are harder to test due to numerous possible execution paths.​

These challenges make it necessary for software development to adapt to parallel programming and multicore
architectures.

4.2.2 Types of Parallelism


1.​ Data Parallelism: Involves dividing data into subsets and performing the same operation across multiple
cores. For example, summing parts of an array across different threads on different cores.​

2.​ Task Parallelism: Involves dividing tasks (not data) across multiple cores. Each thread performs a unique
operation, and the tasks may operate on the same or different data.​

While data and task parallelism are different, they can be combined to create a hybrid approach.
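As a concrete illustration of data parallelism (a sketch, not from the chapter; class and method names are invented), the array-sum example above can be written in Java by giving each of two threads its own half of the array:

```java
// Data parallelism: the same operation (summing) applied to disjoint halves of the data.
public class DataParallelSum {
    static int[] data = {1, 2, 3, 4, 5, 6, 7, 8};
    static long[] partial = new long[2]; // one slot per thread

    static long sum(int from, int to) {
        long s = 0;
        for (int i = from; i < to; i++)
            s += data[i];
        return s;
    }

    public static void main(String[] args) throws InterruptedException {
        int mid = data.length / 2;
        Thread t0 = new Thread(() -> partial[0] = sum(0, mid));          // first half
        Thread t1 = new Thread(() -> partial[1] = sum(mid, data.length)); // second half
        t0.start(); t1.start();
        t0.join();  t1.join(); // wait for both partial sums
        System.out.println("total = " + (partial[0] + partial[1])); // total = 36
    }
}
```

On a multicore system the two threads can run on different cores; task parallelism would instead give each thread a different operation.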

Summary
Multicore programming takes advantage of multiple processing cores to perform tasks concurrently or in parallel. The
shift from single-core to multicore systems presents programming challenges such as identifying tasks, balancing
workload, and managing data dependencies. Two main types of parallelism—data and task parallelism—help
distribute workload efficiently across cores. Despite the benefits, programming for multicore systems remains
complex, requiring new approaches to software design and debugging.

4.3 Multithreading Models


Overview

Threads can be supported in two main ways: user threads (managed at the user level without kernel support) and
kernel threads (managed directly by the operating system). The relationship between user threads and kernel
threads can follow different models: many-to-one, one-to-one, and many-to-many.

4.3.1 Many-to-One Model


●​ Description: In this model, many user threads map to a single kernel thread. Thread management is handled by the thread library in user space.​

●​ Advantages:​

○​ Efficient thread management because it's done in user space.​

●​ Disadvantages:​

○​ If one thread blocks (e.g., making a system call), the entire process blocks.​

○​ Cannot utilize multiple cores in a multicore system, as only one thread can access the kernel at a
time.​

○​ Rarely used today due to these limitations.​

4.3.2 One-to-One Model


●​ Description: Each user thread is mapped to a separate kernel thread.​

●​ Advantages:​

○​ Better concurrency as other threads can continue when one makes a blocking system call.​

○​ Supports parallelism on multicore systems, with multiple threads running in parallel.​

●​ Disadvantages:​

○​ Creating a user thread requires creating a corresponding kernel thread.​

○​ Too many threads may degrade system performance due to overhead in kernel thread
management.​

●​ Example: Used by operating systems like Linux and Windows.​


4.3.3 Many-to-Many Model
●​ Description: Many user threads are multiplexed to a smaller or equal number of kernel threads.​

●​ Advantages:​

○​ Can create many user threads without the limitations of the many-to-one model.​

○​ Supports parallelism on multiple cores and can schedule different kernel threads for execution
when one thread makes a blocking system call.​

●​ Disadvantages:​

○​ Difficult to implement in practice.​

○​ Number of kernel threads is often limited by the number of cores available.​

●​ Variation: The two-level model is a variation where user threads can be bound to specific kernel threads,
enhancing flexibility.​

Summary

●​ The many-to-one model is rarely used today due to its limitations with blocking and multicore support.​

●​ The one-to-one model provides better concurrency and parallelism but can be limited by the overhead of
creating too many kernel threads.​

●​ The many-to-many model is the most flexible, allowing many user threads to be managed by a smaller
number of kernel threads and running in parallel on multiple cores, though it is complex to implement.​

●​ Most modern systems prefer the one-to-one model for simplicity and performance, though some
concurrency libraries still utilize the many-to-many model for specialized tasks.​

4.4 Thread Libraries


Overview

Thread libraries provide an API for creating and managing threads. There are two primary ways to implement these
libraries:

1.​ User-level thread libraries: Managed entirely in user space, with no kernel support.​

2.​ Kernel-level thread libraries: Supported and managed by the operating system, typically requiring system
calls.​

The most commonly used thread libraries today include:

●​ POSIX Pthreads: A thread API standardized by POSIX.​


●​ Windows Threads: A kernel-level thread library for Windows.​

●​ Java Threads: The Java thread API, which relies on the underlying OS thread library.​

In Pthreads and Windows, global data is easily shared among threads. Java has no equivalent notion of global data, so access to shared data must be explicitly arranged between threads.
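A minimal sketch (illustrative, not from the chapter) of what "explicitly arranged" sharing looks like in Java: a single thread-safe object is passed to both threads rather than accessed as a global:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Two threads share one counter object explicitly; AtomicInteger makes the updates safe.
public class SharedCounter {
    static int run() throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(); // the explicitly shared data
        Runnable work = () -> {
            for (int i = 0; i < 1000; i++)
                counter.incrementAndGet();
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join();  b.join(); // both threads finished; all updates visible
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("count = " + run()); // count = 2000
    }
}
```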

4.4.1 Pthreads
Pthreads is a POSIX standard that provides a set of functions for thread creation and synchronization. Pthreads can
be implemented at either the user level or kernel level.

Example (C Program using Pthreads):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum; /* shared data between threads */

void *runner(void *param); /* thread function */

int main(int argc, char *argv[]) {
    pthread_t tid;       /* thread identifier */
    pthread_attr_t attr; /* thread attributes */

    if (argc != 2) {
        fprintf(stderr, "usage: a.out <integer value>\n");
        return 1;
    }

    pthread_attr_init(&attr); /* set default attributes */

    /* Create the thread */
    pthread_create(&tid, &attr, runner, argv[1]);

    /* Wait for the thread to exit */
    pthread_join(tid, NULL);

    printf("sum = %d\n", sum);
    return 0;
}

/* The thread executes in this function */
void *runner(void *param) {
    int i, upper = atoi(param);
    sum = 0;
    for (i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}

●​ Key Functions: pthread_create(), pthread_join(), pthread_exit().​

●​ Attributes: Threads can be configured using attributes like stack size and scheduling policy.​

4.4.2 Windows Threads


Windows also provides a thread library for creating and managing threads. Threads in Windows are created with the
CreateThread() function and are synchronized using WaitForSingleObject() or
WaitForMultipleObjects().

Example (C Program using Windows API):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

DWORD Sum; /* shared data */

/* The thread runs in this function */
DWORD WINAPI Summation(LPVOID Param) {
    DWORD Upper = *(DWORD *)Param;
    for (DWORD i = 1; i <= Upper; i++)
        Sum += i;
    return 0;
}

int main(int argc, char *argv[]) {
    DWORD ThreadId;
    HANDLE ThreadHandle;
    DWORD Param;

    if (argc != 2) {
        fprintf(stderr, "usage: a.out <integer value>\n");
        return 1;
    }
    Param = atoi(argv[1]);

    /* Create the thread */
    ThreadHandle = CreateThread(NULL, 0, Summation, &Param, 0, &ThreadId);

    if (ThreadHandle != NULL) {
        /* Wait for the thread to finish */
        WaitForSingleObject(ThreadHandle, INFINITE);

        /* Close the thread handle */
        CloseHandle(ThreadHandle);
        printf("sum = %lu\n", Sum);
    }
    return 0;
}

●​ Key Functions: CreateThread(), WaitForSingleObject(), CloseHandle().​

4.4.3 Java Threads


In Java, threads are created either by extending the Thread class or implementing the Runnable interface. Java
provides a simple API for thread management and synchronization.

Example (Java program using Runnable):

class Task implements Runnable {
    public void run() {
        System.out.println("I am a thread.");
    }
}

public class Driver {
    public static void main(String[] args) {
        Thread worker = new Thread(new Task());
        worker.start();
    }
}

●​ Thread Creation: Create a new thread with new Thread(new Task()), and call start() to execute.​

●​ Synchronization: Java threads use join() to wait for threads to complete.​
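A minimal sketch of join() (illustrative, not from the chapter): after join() returns, the parent is guaranteed the worker has terminated and can safely read anything the worker wrote.

```java
public class JoinDemo {
    static boolean workerFinished = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> workerFinished = true);
        worker.start();
        worker.join(); // block until the worker terminates
        // join() establishes a happens-before edge, so this always prints true
        System.out.println("worker finished: " + workerFinished);
    }
}
```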

4.4.3.1 Java Executor Framework


The Executor Framework introduced in Java 1.5 abstracts thread creation and execution. It uses the Executor
interface to manage thread execution.

Example (Java Executor Framework):

import java.util.concurrent.*;

class Summation implements Callable<Integer> {
    private int upper;

    public Summation(int upper) {
        this.upper = upper;
    }

    public Integer call() {
        int sum = 0;
        for (int i = 1; i <= upper; i++)
            sum += i;
        return sum;
    }
}

public class Driver {
    public static void main(String[] args) {
        int upper = Integer.parseInt(args[0]);
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> result = pool.submit(new Summation(upper));
        try {
            System.out.println("sum = " + result.get());
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
        pool.shutdown();
    }
}

●​ ExecutorService: Manages thread pool and tasks, using submit() for task execution.​

●​ Callable and Future: Callable allows tasks to return results, and Future.get() retrieves the result once
it's available.​
Summary

Thread libraries provide essential tools for managing and synchronizing threads. Pthreads and Windows Threads
allow direct control over thread creation and management, while Java simplifies this with its built-in thread model and
the Executor Framework. Java's Callable and Future classes extend the flexibility of thread management,
especially for concurrent tasks that return results. These thread libraries provide the foundation for creating
multithreaded programs across different operating systems and environments.

4.5 Implicit Threading


Overview

Implicit threading simplifies the creation and management of threads by transferring the responsibility to compilers or
runtime libraries. Instead of directly managing threads, developers focus on identifying parallel tasks—functions that
can run concurrently. The library or framework then manages thread creation and execution, often using a
many-to-many model. This approach alleviates the complexity of managing hundreds or thousands of threads,
particularly in multicore systems.

4.5.1 Thread Pools


A thread pool is a collection of pre-created threads that can be used to execute tasks. Instead of creating new
threads for each task, the server submits tasks to the pool, where an idle thread executes the task. When the task is
complete, the thread returns to the pool.

Benefits of Thread Pools:

1.​ Faster Execution: Reusing existing threads is faster than creating new ones.​

2.​ Thread Limitation: Limits the number of concurrent threads, preventing resource exhaustion.​

3.​ Task Scheduling: Tasks can be scheduled for delayed or periodic execution.​

Windows API Example:

●​ QueueUserWorkItem() is used to add tasks to the thread pool, where the function is executed by a thread
from the pool.​

Java Thread Pools:

●​ ExecutorService manages thread pools in Java, created through factory methods such as newSingleThreadExecutor(), newFixedThreadPool(), and newCachedThreadPool(). Tasks are submitted using the execute() method.​

Example:​

ExecutorService pool = Executors.newCachedThreadPool();
pool.execute(new Task());
pool.shutdown();

4.5.2 Fork-Join
The fork-join model involves breaking tasks into smaller subtasks (forking), executing them in parallel, and then
combining the results (joining). This model is synchronous, where the parent thread waits for all children to complete
before continuing.

Fork-Join in Java:

●​ Introduced in Java 1.7, the ForkJoinPool handles divide-and-conquer tasks, such as sorting algorithms.​

●​ Tasks are split recursively, and when the problem size is small enough, the task is solved directly.​

●​ Work stealing: Idle threads can steal tasks from other threads to balance the workload.​

Example:

ForkJoinPool pool = new ForkJoinPool();
int sum = pool.invoke(new SumTask(0, SIZE - 1, array));
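The call above references a SumTask class that is not shown. A sketch of what it might look like (the class name matches the call; the threshold value and field names are assumptions), using Java's RecursiveTask to express the recursive divide-and-conquer:

```java
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer summation for use with ForkJoinPool.invoke(new SumTask(0, SIZE - 1, array)).
class SumTask extends RecursiveTask<Integer> {
    static final int THRESHOLD = 1_000; // below this size, solve directly
    private final int begin, end;
    private final int[] array;

    SumTask(int begin, int end, int[] array) {
        this.begin = begin;
        this.end = end;
        this.array = array;
    }

    @Override
    protected Integer compute() {
        if (end - begin < THRESHOLD) { // base case: sum the range directly
            int sum = 0;
            for (int i = begin; i <= end; i++)
                sum += array[i];
            return sum;
        }
        int mid = (begin + end) / 2;   // fork two subtasks on the halves
        SumTask left = new SumTask(begin, mid, array);
        SumTask right = new SumTask(mid + 1, end, array);
        left.fork();                          // run left half asynchronously
        return right.compute() + left.join(); // join combines the results
    }
}
```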

4.5.3 OpenMP
OpenMP is a set of compiler directives for parallel programming in C, C++, and Fortran, primarily for shared-memory
systems. Developers insert directives into the code to specify parallel regions, and OpenMP takes care of creating
and managing threads.

Example:

#pragma omp parallel
{
    printf("I am a parallel region.");
}

●​ Parallelizing loops: OpenMP can parallelize for loops automatically, dividing iterations among available
threads.​

4.5.4 Grand Central Dispatch (GCD)


Grand Central Dispatch (GCD) is an Apple technology for managing parallel tasks in macOS and iOS. It uses
dispatch queues to manage tasks, which can be either serial (one task at a time) or concurrent (multiple tasks at
once). GCD dynamically manages a thread pool and can scale based on demand.

GCD Features:

1.​ Serial Queues: Execute tasks one at a time, ensuring sequential order.​
2.​ Concurrent Queues: Execute multiple tasks in parallel.​

3.​ Quality of Service (QoS): Prioritizes tasks into categories like user-interactive, utility, and background
tasks.​

Example (Swift):

let queue = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0)
dispatch_async(queue) { print("I am a closure.") }

4.5.5 Intel Threading Building Blocks (TBB)


Intel TBB is a C++ library for parallel programming that abstracts thread management. It supports parallel loops, task
scheduling, and load balancing, and is cache-aware for efficient memory access.

Example (Parallel For Loop):

parallel_for(0, n, [&](size_t i) { apply(v[i]); });

TBB divides the iteration space into chunks and assigns them to threads, similar to the fork-join model. The library
manages tasks and ensures optimal performance.

Summary

Implicit threading frameworks like thread pools, fork-join, OpenMP, GCD, and Intel TBB abstract away thread
creation and management, allowing developers to focus on identifying parallel tasks. These models help optimize
task execution, reduce thread overhead, and balance workload across multiple cores. By using implicit threading,
developers can create efficient parallel applications without dealing directly with the complexities of thread
management.

4.6 Threading Issues


4.6.1 The fork() and exec() System Calls

●​ fork() in Multithreading: In multithreaded programs, the behavior of fork() can vary:​

1.​ Duplicate all threads: Some systems provide a version of fork() that duplicates all threads.​

2.​ Duplicate only the calling thread: This is a version where only the thread calling fork() is
duplicated.​

●​ exec(): If exec() is called immediately after fork(), the program specified in the call to exec() replaces the entire process, including all of its threads.​

When to use which version of fork():


●​ If exec() is called immediately, duplicating only the calling thread is sufficient.​

●​ If exec() is not called, it is better to duplicate all threads to maintain the multithreaded state.​

4.6.2 Signal Handling


●​ Signals: Used to notify a process that an event has occurred. They can be synchronous (e.g., an illegal memory access by the running thread) or asynchronous (e.g., a terminal interrupt such as Ctrl-C).​

●​ Handling Signals in Multithreaded Programs:​

1.​ Deliver to the specific thread causing the issue (for synchronous signals).​

2.​ Deliver to all threads (for certain asynchronous signals).​

3.​ Deliver to specific threads based on configuration.​

4.​ Assign one thread to handle all signals.​

●​ UNIX Example: kill(pid, signal) sends a signal to a process, while pthread_kill(tid, signal) delivers a signal to a specific thread in Pthreads.​

●​ Windows: Emulates signals using Asynchronous Procedure Calls (APCs), which are delivered to specific
threads.​

4.6.3 Thread Cancellation


●​ Thread Cancellation: Used to terminate a thread before it finishes execution. Can be done in two ways:​

○​ Asynchronous Cancellation: A thread is immediately terminated.​

○​ Deferred Cancellation: The thread checks periodically for a cancellation request and terminates at
a safe point.​

●​ Challenges with Asynchronous Cancellation:​

○​ Resources might not be cleaned up properly.​

○​ It can be problematic if the thread is in the middle of updating shared data.​

●​ Deferred Cancellation: Preferred as it ensures threads can terminate safely at appropriate points.​

○​ Pthreads Example: pthread_cancel(tid) requests cancellation; the target thread acts on the request when it reaches a safe cancellation point.​

●​ Java Example: Uses Thread.interrupt() to set an interrupt flag for the target thread, which can check
using isInterrupted().​
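The deferred-cancellation pattern described above can be sketched in Java as follows (illustrative, not from the chapter): the worker polls the interrupt flag at safe points and exits cleanly when cancellation is requested.

```java
public class CancelDemo {
    static volatile boolean exitedSafely = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // ... perform one unit of work, checking the flag at safe points ...
            }
            exitedSafely = true; // cleanup of resources would happen here
            System.out.println("worker exiting at a safe point");
        });
        worker.start();
        worker.interrupt(); // request cancellation (compare pthread_cancel(tid))
        worker.join();
    }
}
```

Because the target decides when to act on the request, resources can be released properly, which is why deferred cancellation is preferred over asynchronous cancellation.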
4.6.4 Thread-Local Storage (TLS)
●​ TLS: Allows each thread to have its own copy of certain data, preventing shared access issues.​

○​ TLS data is similar to static data, except that each thread has its own private copy.​

○​ Example: In Java, ThreadLocal<T> provides thread-specific storage. In Pthreads, pthread_key_t identifies thread-specific data.​

○​ Use Case: For example, each transaction in a system may require its own transaction ID stored in
TLS.​

4.6.5 Scheduler Activations


●​ Scheduler Activations: Used in many-to-many and two-level models for communication between user-level
thread libraries and the kernel.​

○​ Lightweight Process (LWP): A virtual processor for user threads, which is mapped to a kernel
thread.​

○​ Upcalls: Used to inform the user-level thread library when a thread is about to block, allowing it to
manage available threads and efficiently schedule tasks.​

Summary

●​ fork() and exec() in multithreaded programs behave differently, with the need to duplicate either all
threads or only the calling thread, depending on the situation.​

●​ Signal handling in multithreaded programs requires careful delivery decisions, as signals can be sent to
specific threads, all threads, or one designated thread.​

●​ Thread cancellation can be done asynchronously (immediate termination) or deferred (safe, periodic
checks).​

●​ Thread-local storage (TLS) ensures that each thread has its own copy of certain data to avoid conflicts,
particularly in systems like Java, Pthreads, and C#.​

●​ Scheduler activations provide coordination between the kernel and user-level threads to ensure efficient
use of resources and smooth execution, especially in complex threading models.​

4.7 Operating-System Examples

4.7.1 Windows Threads


●​ Thread Management in Windows: Each Windows process may have one or more threads. The Windows
API creates and manages threads using a one-to-one mapping (each user-level thread maps to a kernel
thread).​
Thread Components:

●​ Thread ID: A unique identifier for the thread.​

●​ Register Set: Represents the processor's status.​

●​ Program Counter: Points to the thread's next instruction.​

●​ User and Kernel Stacks: Used when the thread is running in user mode (user stack) or kernel mode (kernel
stack).​

●​ Private Storage Area: Used by runtime libraries and dynamic link libraries (DLLs).​

Primary Data Structures:

1.​ ETHREAD (Executive Thread Block): Holds pointers to the process and the starting address of the thread's
routine. It also contains a pointer to the KTHREAD.​

2.​ KTHREAD (Kernel Thread Block): Contains kernel-specific scheduling and synchronization information,
including the kernel stack.​

3.​ TEB (Thread Environment Block): A user-space data structure holding the thread identifier, user stack, and
thread-local storage when the thread is in user mode.​

Thread Structure:

●​ The ETHREAD and KTHREAD exist in kernel space, and only the kernel can access them. The TEB exists
in user space and is accessed when the thread is in user mode.​

Diagram of a Windows Thread:

●​ ETHREAD and KTHREAD in kernel space, while the TEB resides in user space, managing the thread's
execution environment.​

4.7.2 Linux Threads


●​ Threads in Linux: Linux does not distinguish between processes and threads, using the term task for both.
The clone() system call is used to create tasks, allowing varying levels of resource sharing between parent
and child tasks.​

Flags for clone():

●​ CLONE_FS: Share filesystem information.​

●​ CLONE_VM: Share the same memory space.​

●​ CLONE_SIGHAND: Share signal handlers.​


●​ CLONE_FILES: Share the set of open files.​

When clone() is called with certain flags, the parent and child tasks share various resources, making it equivalent to
thread creation. If no flags are set, the child task is like a new process created by fork().

Linux Task Management:

●​ Each task in Linux has a unique kernel data structure: struct task_struct. This structure holds pointers to
other data structures (e.g., memory, open files, signals).​

clone() and Task Creation:

●​ When fork() is used, a new task is created along with copies of the parent process's data structures.
However, clone() allows selective sharing of data structures, depending on the flags passed.​

Task and Container Flexibility:

●​ The clone() system call can be extended to create Linux containers, which isolate tasks (Linux systems)
under a single kernel, providing virtualization. Containers are discussed further in Chapter 18.​

Summary

●​ Windows uses a one-to-one thread mapping where each user-level thread maps directly to a kernel
thread, with specific thread data structures like ETHREAD, KTHREAD, and TEB for managing execution in
user and kernel space.​

●​ Linux treats processes and threads as tasks, using clone() to create tasks with varying degrees of
resource sharing between parent and child tasks. The task_struct structure is used to manage these tasks,
and clone() can also be used to create containers for virtualization.​

4.8 Summary
●​ Thread Definition: A thread is a basic unit of CPU utilization. Threads within the same process share
resources like code and data.​

●​ Benefits of Multithreading:​

○​ Responsiveness: Allows continued operation even if part of the program is blocked or performing
lengthy operations.​

○​ Resource Sharing: Threads share resources, making it easier to communicate and share data.​

○​ Economy: Threads are more lightweight compared to processes, reducing the overhead of
creation and switching.​

○​ Scalability: Multithreaded applications can leverage multiple processors for parallel execution.​
●​ Concurrency vs. Parallelism:​

○​ Concurrency: Multiple threads making progress over time (possible on single CPU systems).​

○​ Parallelism: Multiple threads making progress simultaneously (requires a multicore system).​

●​ Challenges in Multithreading:​

○​ Dividing work and data across threads.​

○​ Handling data dependencies between threads.​

○​ Difficulty in testing and debugging multithreaded programs.​

●​ Parallelism Types:​

○​ Data Parallelism: Distributes data subsets across cores, performing the same operation on each
core.​

○​ Task Parallelism: Distributes tasks (different operations) across cores.​

●​ Thread Models:​

○​ Many-to-One: Multiple user threads mapped to a single kernel thread.​

○​ One-to-One: Each user thread maps to a single kernel thread.​

○​ Many-to-Many: Multiple user threads mapped to a smaller or equal number of kernel threads.​

●​ Thread Libraries:​

○​ Windows, Pthreads (for POSIX systems like UNIX, Linux, macOS), and Java provide APIs for
thread creation and management.​

●​ Implicit Threading: Developers identify tasks instead of threads, and the system (or libraries) handles
thread creation and management. Examples include thread pools, fork-join, and Grand Central Dispatch.​

●​ Thread Cancellation:​

○​ Asynchronous Cancellation: Terminates a thread immediately, potentially in an unsafe state.​

○​ Deferred Cancellation: Allows threads to check periodically and terminate safely.​

●​ Linux Task Management:​

○​ Unlike other OSs, Linux doesn't distinguish between processes and threads, using the term task.
The clone() system call can create tasks that behave either like processes or threads based on
resource sharing settings.​

This chapter highlights how multithreading provides performance improvements through parallel execution, how
various models and libraries manage thread behavior, and the issues that arise in creating multithreaded applications.
