Concurrency vs Parallelism Notes
• Concurrency allows a program to efficiently manage multiple tasks by rapidly switching between them, even on a single CPU core, creating an illusion of simultaneous progress.
• This switching, called context switching, involves the CPU changing focus from
one task to another.
• Example: Like a chef working on multiple dishes by preparing one then switching
to another.
• Excessive context switching adds overhead and can reduce performance.
• Parallelism means executing multiple tasks truly simultaneously using multiple
CPU cores.
• Example: Two chefs working simultaneously on different tasks, speeding up completion.
• Concurrency is ideal for tasks involving waiting (like I/O) because it enables other
tasks to proceed during waits.
• Parallelism is suited for heavy computations split across cores, significantly boosting speed (a short C# sketch contrasting the two approaches appears after this list).
• Practical examples:
• Web applications use concurrency to smoothly handle user inputs,
database queries, and background tasks, improving responsiveness.
• Machine learning leverages parallelism by distributing training across
multiple cores or machines to reduce compute time.
• Video rendering and scientific simulations use parallelism to process
frames or model phenomena across many cores.
• Big data frameworks such as Hadoop and Spark apply parallelism for efficient large-dataset processing.
• Relationship between concurrency and parallelism:
• Concurrency is about managing multiple tasks to keep programs
responsive, especially for I/O-bound operations.
• Parallelism focuses on improving performance by executing computation-
heavy tasks simultaneously.
• Concurrency can enable parallelism by breaking programs into smaller
independent tasks that can be distributed across multiple cores for
parallel execution.
• Programming languages with strong concurrency primitives simplify writing concurrent programs that can be efficiently parallelized.
• Understanding and leveraging concurrency and parallelism helps design more efficient systems and better-performing applications.
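A minimal C# sketch of the distinction above: the async method interleaves I/O-bound downloads on a few threads (concurrency), while Parallel.For spreads CPU-bound work across cores (parallelism). The URLs and work sizes are placeholders, not part of the original notes.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ConcurrencyVsParallelism
{
    // Concurrency: while each HTTP response is awaited, the thread is free
    // to make progress on the other downloads (I/O-bound work).
    static async Task DownloadAllAsync(string[] urls)
    {
        using var client = new HttpClient();
        Task<string>[] downloads = Array.ConvertAll(urls, u => client.GetStringAsync(u));
        string[] bodies = await Task.WhenAll(downloads);
        Console.WriteLine($"Downloaded {bodies.Length} pages concurrently.");
    }

    // Parallelism: the loop body runs simultaneously on multiple CPU cores
    // (CPU-bound work).
    static void CrunchNumbers(long[] results)
    {
        Parallel.For(0, results.Length, i =>
        {
            long sum = 0;
            for (long n = 0; n < 1_000_000; n++) sum += n * n % 97;
            results[i] = sum;
        });
    }

    static async Task Main()
    {
        await DownloadAllAsync(new[] { "https://example.com", "https://example.org" }); // placeholder URLs
        var results = new long[8];
        CrunchNumbers(results);
        Console.WriteLine($"Computed {results.Length} chunks in parallel.");
    }
}
```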
Multithreading vs Asynchronous
• People often confuse multithreading and asynchronous programming, but they are different concepts with different objectives.
Real-world analogy: Chef handling orders in a restaurant
• Chef receives two orders simultaneously: coffee and boiled eggs.
• For asynchronous programming:
• Chef starts boiling milk for coffee.
• While milk is boiling, chef doesn't sit idle; starts boiling eggs.
• Chef sets timers to get alerts when each task finishes.
• Chef can also perform other tasks like cleaning while waiting.
• Once milk is ready, chef finishes coffee preparation and delivers it.
• Once eggs are ready, chef finishes eggs and delivers them.
• Chef is never idle; tasks are managed asynchronously.
• For multithreading:
• Chef hires two additional chefs: one for coffee, one for eggs.
• Each chef independently completes and delivers their task.
• Primary chef manages resource allocation and hires/fires helpers as
needed.
• Creates challenges for scaling (cannot infinitely hire chefs).
• This represents multithreaded programming where multiple threads work
in parallel.
Programming examples: Handling client requests on a server
• Asynchronous programming:
• Server receives request #1, sends query to database.
• While waiting for DB response, server processes request #2 and #3.
• When DB response for request #1 arrives, server processes it and sends
response.
• Server never blocks or waits for one task before picking up others.
• Focuses on efficient task management (a sketch contrasting both styles follows this section).
• Multithreading:
• Server creates a new thread for each client request.
• Each thread processes the request and waits (blocks) for DB results.
• More requests lead to more threads.
• Large number of threads causes resource exhaustion and crashes.
• Thread pool is used to limit number of active threads and reuse them.
• Focuses on worker (thread) management.
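A rough C# sketch of the two server styles above. The database call is simulated with a delay, and names like HandleRequestAsync are illustrative only, not a real framework API.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class RequestHandlingStyles
{
    // Asynchronous style: the thread is released while the "database" call
    // is pending, so the same thread can pick up other requests.
    static async Task HandleRequestAsync(int requestId)
    {
        Console.WriteLine($"Request {requestId}: query sent");
        await Task.Delay(500);               // simulated DB round trip
        Console.WriteLine($"Request {requestId}: response ready");
    }

    // Thread-per-request style: each request gets its own thread, which
    // blocks while waiting for the "database" result.
    static void HandleRequestBlocking(int requestId)
    {
        Console.WriteLine($"Request {requestId}: query sent (thread {Environment.CurrentManagedThreadId})");
        Thread.Sleep(500);                   // the thread is blocked for the whole wait
        Console.WriteLine($"Request {requestId}: response ready");
    }

    static async Task Main()
    {
        // Async: three requests overlap on a handful of pool threads.
        await Task.WhenAll(HandleRequestAsync(1), HandleRequestAsync(2), HandleRequestAsync(3));

        // Multithreaded: one dedicated thread per request.
        var threads = new Thread[3];
        for (int i = 0; i < threads.Length; i++)
        {
            int id = i + 1;
            threads[i] = new Thread(() => HandleRequestBlocking(id));
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();
    }
}
```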
Key difference summarized
• Multithreading is about managing workers (threads)—how many threads,
delegating work, starting and stopping threads.
• Asynchronous programming is about managing tasks—not waiting for any task to
finish before starting another, regardless of threading model.
• Asynchronous programming can be single-threaded or multi-threaded; it focuses on avoiding blocking and efficient task scheduling.
• Multithreading involves concurrency by multiple threads; asynchronous
programming involves non-blocking task execution.
Conclusion
• Multithreading means using multiple threads to run things in parallel.
• Asynchronous programming means not blocking the flow waiting for tasks to
complete.
• Both improve responsiveness and resource utilization but operate on different principles.
• In server environments, asynchronous programming improves scalability without
multiplying threads.
• Thread pool helps in multithreading by limiting and reusing threads to avoid
resource exhaustion.
Concurrency and multithreading differ in system design primarily in their scope and implementation approach:
• Concurrency is a broad concept that refers to a system's ability to handle
multiple tasks at overlapping times. It manages the execution of many tasks that
may or may not run simultaneously (e.g., through time-slicing or interleaving on a
single CPU core). Concurrency is about task management—allowing multiple
tasks to make progress without necessarily executing at the exact same instant.
It is key for improving resource utilization and responsiveness, especially in I/O-
bound and interactive systems.
• Multithreading is a specific technique to achieve concurrency within a single
process by dividing the program into multiple threads of execution that share the
same memory. Each thread runs independently, and on multicore processors,
threads can run in parallel. Multithreading focuses on lightweight execution units
(threads) within a process to handle multiple tasks simultaneously or by
interleaving, offering efficient resource sharing and faster task handling.
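A small sketch of multithreading within a single process, assuming a shared counter as the example of shared memory: four threads update the same field, and Interlocked keeps the update safe.

```csharp
using System;
using System.Threading;

class SharedCounter
{
    static int _hits; // memory shared by all threads in the process

    static void Main()
    {
        var workers = new Thread[4];
        for (int i = 0; i < workers.Length; i++)
        {
            workers[i] = new Thread(() =>
            {
                for (int n = 0; n < 100_000; n++)
                    Interlocked.Increment(ref _hits); // thread-safe update of shared state
            });
            workers[i].Start();
        }
        foreach (var w in workers) w.Join();

        Console.WriteLine($"Total hits: {_hits}"); // 400000: all threads wrote to the same memory
    }
}
```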
Notes on Thread, ThreadPool, Task, and async/await in .NET
Thread
• A thread in .NET represents a lightweight, independent path of execution within a
process.
• Threads enable concurrent execution and parallelism in applications.
• Creating a thread reserves roughly 1 MB of stack memory by default.
• Threads have methods like Start and Join and properties like Name (Abort also exists but is obsolete and unsupported on modern .NET).
• Threads have their own call stack.
• Threads are suitable for long-running tasks where precise control is needed.
• Threads can be named for debugging purposes.
• Creating threads manually has overhead and requires careful synchronization.
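A short sketch of the Thread API described above: a dedicated, named thread is started for a long-running job and joined when done. The file name and the work inside ProcessLargeFile are hypothetical.

```csharp
using System;
using System.Threading;

class DedicatedThreadDemo
{
    static void Main()
    {
        // A dedicated thread for a long-running job, with its own call stack.
        var worker = new Thread(ProcessLargeFile)
        {
            Name = "FileProcessor",      // named for easier debugging
            IsBackground = false         // foreground: keeps the process alive until done
        };

        worker.Start("data.csv");        // hypothetical input file
        Console.WriteLine("Main thread continues while the worker runs...");
        worker.Join();                   // wait for the worker to finish
        Console.WriteLine("Worker finished.");
    }

    static void ProcessLargeFile(object path)
    {
        Console.WriteLine($"[{Thread.CurrentThread.Name}] processing {path}");
        Thread.Sleep(2000);              // stand-in for long-running work
    }
}
```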
ThreadPool
• ThreadPool manages a pool of worker threads efficiently.
• Threads are reused from the pool to avoid overhead of creating/destroying
threads.
• ThreadPool threads are limited in number to prevent system overload.
• Best suited for short-lived, IO-bound or background operations.
• Using ThreadPool provides fire-and-forget behavior with no return values or built-
in error handling.
• No support for cancellation or fine-grained control like thread priorities.
• Threads in ThreadPool do not support dependency injection or scoped lifetime
services.
• ThreadPool is managed by the .NET runtime automatically.
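A minimal fire-and-forget ThreadPool sketch: short work items are queued onto reused pool threads, with no return value or built-in error propagation. The CountdownEvent is only there so the console app waits for the queued work before exiting.

```csharp
using System;
using System.Threading;

class ThreadPoolDemo
{
    static void Main()
    {
        using var done = new CountdownEvent(3);

        for (int i = 1; i <= 3; i++)
        {
            int jobId = i;
            // Fire-and-forget: the work item borrows a pool thread, runs, and returns nothing.
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Console.WriteLine($"Job {jobId} on pool thread {Environment.CurrentManagedThreadId}");
                Thread.Sleep(200);      // short, background-style work
                done.Signal();
            });
        }

        done.Wait();                    // keep the demo alive until the queued work finishes
        Console.WriteLine("All queued work finished.");
    }
}
```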
Task
• Task is a higher-level abstraction over threads and thread pools introduced in
.NET 4.0.
• Provides better control, such as return values, exception handling, cancellation
tokens, and continuation.
• Internally uses threads from the ThreadPool.
• Simplifies parallel and asynchronous programming.
• Supports async and await keywords for better readability and maintainability.
• Suitable for CPU-bound parallelism or complex asynchronous workflows.
• Improves scalability by efficiently reusing threads.
• Tasks can run multiple operations in parallel, with constructs such as Task.WhenAll.
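A sketch of Task usage under the points above: Task.Run returns values from pool threads, a CancellationToken allows cooperative cancellation, exceptions surface through await, and Task.WhenAll lets two computations run in parallel. The prime-counting workload is just a stand-in for CPU-bound work.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class TaskDemo
{
    static int CountPrimes(int limit, CancellationToken ct)
    {
        int count = 0;
        for (int n = 2; n < limit; n++)
        {
            ct.ThrowIfCancellationRequested();   // cooperative cancellation
            bool prime = true;
            for (int d = 2; d * d <= n; d++)
                if (n % d == 0) { prime = false; break; }
            if (prime) count++;
        }
        return count;
    }

    static async Task Main()
    {
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));

        // Tasks return values and run on thread-pool threads.
        Task<int> small = Task.Run(() => CountPrimes(100_000, cts.Token), cts.Token);
        Task<int> large = Task.Run(() => CountPrimes(200_000, cts.Token), cts.Token);

        try
        {
            int[] results = await Task.WhenAll(small, large);   // both run in parallel
            Console.WriteLine($"Primes: {results[0]} and {results[1]}");
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine("Work was cancelled before it finished.");
        }
    }
}
```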
async and await
• Used mainly for IO-bound asynchronous programming to avoid blocking threads.
• async keyword marks a method as asynchronous.
• await keyword suspends method execution until the awaited task completes,
freeing the thread to do other work.
• Improves responsiveness, especially in UI applications by freeing the UI thread
during waits.
• Works well with Task-returning methods.
• Internally, async-await uses a state machine to manage method resumption
after await.
• Ideal for operations like HTTP calls, file IO, or database queries where waiting
occurs.
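A small async/await sketch for IO-bound work: the method awaits an HTTP call and a file write, so no thread is blocked while the IO is in flight. The URL and file name are placeholders.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class AsyncAwaitDemo
{
    static readonly HttpClient Client = new HttpClient();

    // async marks the method; each await frees the current thread while the IO is pending.
    static async Task SavePageAsync(string url, string path)
    {
        string html = await Client.GetStringAsync(url);   // non-blocking HTTP call
        await File.WriteAllTextAsync(path, html);         // non-blocking file write
        Console.WriteLine($"Saved {html.Length} characters to {path}");
    }

    static async Task Main()
    {
        // Placeholder URL and file name.
        await SavePageAsync("https://example.com", "page.html");
    }
}
```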
Recommendations on Usage
• Use Thread for long-running, dedicated operations needing precise control.
• Use ThreadPool for quick, short, fire-and-forget background tasks where no
return is needed.
• Use Task for parallelism with better exception handling, cancellation, and return
values.
• Use async-await for IO-bound operations to keep the application responsive and efficiently use threads.
GENERIC COLLECTION
• Definition of Generics: Generics allow classes and methods to work with any data type while remaining type safe, instead of being tied to a single fixed type.
• Problem Without Generics:
• Example: A method that compares two integers only works for integers,
not strings or other types.
• Workarounds like using the object type as a parameter cause boxing/unboxing for value types, which reduces performance.
• Another workaround is method overloading for each type, increasing code
size and complexity.
• How Generics Solve It:
• A generic method uses a type parameter (e.g., T) within angle
brackets <T>.
• This type parameter can represent any type passed during method call,
making the method type independent.
• Example: a generic method REqual<T>(T t1, T t2) can compare integers, strings, or any other type with the same implementation (see the sketch after this list).
• Generic Classes:
• Instead of making individual methods generic, the whole class can be
generic by specifying the type parameter at the class level.
• This generic type applies to all methods within the class that use the type
parameter.
• This approach is better when multiple methods need to be generic in a
class, maintaining type safety and reducing repetition.
• Benefits:
• Avoids boxing/unboxing, improving performance.
• Code is reusable and cleaner.
• Type safety is maintained at compile time, reducing runtime errors.
• Usage Context:
• Used extensively in real applications to create flexible and optimized code components that operate on any data type efficiently.
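A sketch of the ideas above, keeping the notes' REqual<T> name for the comparison method and adding a small hypothetical generic class (Box<T>). EqualityComparer<T>.Default is one reasonable way to compare values without boxing.

```csharp
using System;
using System.Collections.Generic;

static class Compare
{
    // Generic method from the notes: works for int, string, or any other type.
    public static bool REqual<T>(T t1, T t2)
    {
        // EqualityComparer<T> avoids boxing for value types.
        return EqualityComparer<T>.Default.Equals(t1, t2);
    }
}

// Generic class: the type parameter is declared once and applies to all members.
class Box<T>
{
    private readonly T _value;
    public Box(T value) => _value = value;
    public T Get() => _value;
    public bool Holds(T other) => Compare.REqual(_value, other);
}

class GenericsDemo
{
    static void Main()
    {
        Console.WriteLine(Compare.REqual(10, 10));          // True, no boxing
        Console.WriteLine(Compare.REqual("abc", "abd"));    // False

        var box = new Box<string>("hello");
        Console.WriteLine(box.Holds("hello"));              // True
    }
}
```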