
Threads

In the previous chapter it was assumed that a process consisted of a single thread. Nowadays, most operating systems allow a process to have multiple threads of control. In this chapter we will see what a thread is, its advantages, and the different implementation models.

What is a thread?

A thread is a basic unit of CPU utilization. It has a thread ID, its own program counter, a set of registers, and a stack, and it is represented in the operating system by a structure called the TCB (thread control block).
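
The exact contents of a TCB are OS-specific; as a rough, hypothetical sketch in C, the fields mentioned above might be grouped like this (all names are made up for illustration; real kernels keep many more fields):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch of a thread control block (TCB). */
typedef enum { THREAD_RUNNING, THREAD_READY, THREAD_BLOCKED } thread_state_t;

typedef struct tcb {
    int             tid;             /* thread ID                            */
    thread_state_t  state;           /* running / ready / blocked            */
    void           *program_counter; /* saved instruction pointer            */
    uint64_t        registers[16];   /* saved general-purpose registers      */
    void           *stack_base;      /* base of this thread's private stack  */
    size_t          stack_size;
    struct tcb     *next;            /* link for the scheduler's queues      */
} tcb_t;
```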

A thread shares the code section, the data section, and other resources with the other threads of the same process. If a process has multiple threads, it can perform more than one task at a time (truly simultaneously when more than one CPU is available).

Let’s see an example to clarify the concept:


A web server accepts requests from clients asking for web pages. If the server has several clients and operates with a single thread of execution, it can only serve one client at a time, and the time a client has to wait to be served can be very long.

One possible solution would be for the server to accept one request at a time and, whenever another request arrives, create another process to serve it. But creating a process takes time and uses many resources, so if every process is going to perform the same tasks, why not use threads?

It is generally more efficient to use a single process with multiple threads: one thread listens for requests and, when a request arrives, instead of creating another process, another thread is created to handle it.
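
As a rough illustration of this thread-per-request idea, here is a minimal sketch using POSIX threads (compile with -pthread). The functions accept_request() and handle_request() are hypothetical stubs standing in for real networking code:

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Stubs standing in for the server's real networking code (hypothetical). */
static int accept_request(void) {           /* would block on accept(2)       */
    static int next_client = 0;
    sleep(1);                               /* pretend we waited for a client */
    return next_client++;
}

static void handle_request(int client) {    /* would build and send the page  */
    printf("thread %lu serving client %d\n",
           (unsigned long)pthread_self(), client);
}

/* Each request gets its own short-lived thread instead of a new process. */
static void *worker(void *arg) {
    int client = *(int *)arg;
    free(arg);
    handle_request(client);
    return NULL;
}

int main(void) {
    for (int i = 0; i < 5; i++) {           /* a real server would loop forever */
        int *client = malloc(sizeof *client);
        if (!client) break;
        *client = accept_request();         /* the main thread listens          */

        pthread_t tid;
        if (pthread_create(&tid, NULL, worker, client) == 0)
            pthread_detach(tid);            /* let the worker clean up itself   */
        else
            free(client);
    }
    sleep(2);                               /* crude wait so workers can finish */
    return 0;
}
```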

Advantages of using threads


· Responsiveness: response time improves, since the program can keep running even if part of it is blocked.
· Resource sharing: threads share the memory and resources of the process to which they belong, so it is possible to have several threads of execution within the same address space.
· Economy: it is cheaper to create, context-switch, and manage threads than processes.
· Use of multiple CPUs: threads of the same process can run on different CPUs at the same time. A single-threaded process runs on a single CPU, regardless of how many are available.

User-level and kernel-level threads


So far we have talked about threads in a generic sense, but in practice threads can be implemented at the user level or at the kernel level.
User-level threads: they are implemented by a library. These threads are managed without support from the OS, which recognizes only a single thread of execution.
Kernel-level threads: the OS creates, schedules, and manages the threads, and recognizes as many threads as have been created.
User-level threads have the advantage that their context switch is simpler than a context switch between kernel threads. Furthermore, they can be implemented even if the OS does not support threads at the kernel level. Another advantage is the ability to use a scheduling policy different from the OS's.
Kernel-level threads have the great advantage of making better use of multiprocessor architectures, which gives better response times, since if one thread blocks, the others can continue executing.

How are kernel-level threads related to user-level threads?


There are three ways to establish this relationship:

Mx1 model (many-to-one)


This model assigns multiple user threads to a single kernel thread. It corresponds to threads implemented at the user level, since the system recognizes only one thread of control for the process.
The drawback is that if one thread blocks, the entire process blocks. Also, since only one thread can access the kernel at a time, multiple threads cannot run in parallel on multiple CPUs.

1x1 model (one-to-one)


This model assigns each user thread to a kernel thread. It provides greater concurrency than the previous model, allowing another thread to run if one blocks. Its drawback is that every time a thread is created at the user level, a kernel-level thread must also be created, and the number of kernel-level threads is restricted on most systems.
MxN model (many-to-many)
This model multiplexes many user threads onto a smaller or equal number of kernel threads. Each process is assigned a set of kernel threads, regardless of the number of user threads that have been created.
It has none of the drawbacks of the two previous models, as it combines the best of each: the user can create as many threads as needed, and the kernel threads can execute in parallel. Likewise, when a thread blocks, the kernel can schedule another thread for execution.
The user-level scheduler assigns user threads to kernel threads, and the kernel-level scheduler assigns kernel threads to processors.
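
POSIX exposes this mapping through the thread contention-scope attribute: PTHREAD_SCOPE_SYSTEM corresponds to a 1x1 mapping (the thread competes against all threads in the system), while PTHREAD_SCOPE_PROCESS means user-level scheduling onto the process's kernel threads. The small sketch below merely queries and tries to change the scope; note that many systems, for example Linux's NPTL, which is strictly one-to-one, support only PTHREAD_SCOPE_SYSTEM:

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    pthread_attr_t attr;
    int scope, err;

    pthread_attr_init(&attr);

    /* Report the default contention scope of new threads. */
    pthread_attr_getscope(&attr, &scope);
    printf("default scope: %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "SYSTEM (1x1)" : "PROCESS (Mx1/MxN)");

    /* Ask for user-level (process) scope; implementations that only provide
     * kernel-level 1x1 threads will refuse with an error. */
    err = pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS);
    if (err != 0)
        printf("PTHREAD_SCOPE_PROCESS not supported here: %s\n", strerror(err));

    pthread_attr_destroy(&attr);
    return 0;
}
```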

Threads are a relatively recent concept in operating systems. In this context, a process is called a heavyweight process, while a thread is called a lightweight process.

The term thread refers to a thread of execution.


The term multithreading refers to the ability of an OS to maintain multiple threads of execution within the same process.
In an operating system with single-threaded processes (a single thread of execution per process), where the concept of a thread does not exist, the representation of a process includes its PCB, a process address space, a process stack, and a kernel stack.

In an OS with multithreaded processes, there is still a single PCB and a single address space associated with the process; however, there are now separate stacks and a separate control block for each thread.

Structure of Threads
A thread (lightweight process) is a basic unit of CPU utilization, and it consists of a program counter, a set of registers, and a stack space.

Threads within the same application share:


The code section.
The data section.
The resources of the OS (open files and signals).
A traditional or heavyweight process is equivalent to a task with a single thread.

Threads allow the concurrent execution of several instruction sequences, associated with different functions within the same process, sharing the same address space and the same kernel data structures.

Thread States
The main states of a thread are: running, ready, and blocked. There are four basic operations related to thread state changes:

Creation: in general, when a new process is created, a thread is also created for that process. That thread can later create new threads, giving each one an instruction pointer and some arguments; the new thread is placed in the ready queue.

Blocking: when a thread must wait for an event, it blocks and its registers are saved. The processor then proceeds to execute another ready thread.
Unblocking: when the event that blocked a thread occurs, the thread moves to the ready queue.

Termination: when a thread finishes, its context and its stacks are released.
An important point is the possibility that blocking one thread blocks the entire process, that is, that the blocking of one thread blocks all the threads that make up the process, even when the process itself is ready.
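
As a concrete sketch of creation and termination using POSIX threads (blocking and unblocking happen implicitly here, inside sleep() and pthread_join()):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Argument handed to the new thread at creation time. */
struct job { int id; int seconds; };

static void *run(void *arg) {
    struct job *j = arg;
    printf("thread %d: created and running\n", j->id);
    sleep(j->seconds);              /* blocked while waiting for the timer  */
    printf("thread %d: unblocked, terminating\n", j->id);
    return NULL;                    /* termination: resources are released
                                       once the thread has been joined      */
}

int main(void) {
    pthread_t t[2];
    struct job jobs[2] = { {1, 1}, {2, 2} };

    /* Creation: the initial thread creates new threads, handing each one
     * a start routine (its "instruction pointer") and an argument.         */
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, run, &jobs[i]);

    /* The initial thread blocks in pthread_join until each child
     * terminates; meanwhile the ready threads keep running.                */
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);

    puts("all threads terminated");
    return 0;
}
```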

Shared and unshared resources


Threads allow the concurrent execution of several instruction sequences associated with different functions within the same process, sharing the same address space and the same kernel data structures. The lists below summarize what is and is not shared; a small sketch after them illustrates the difference.

Shared resources among threads:


Code (instructions).
Global variables.
Open files and devices.
Resources not shared among threads:
Program counter (each thread can execute a different section of code).
CPU registers.
Stack for the local variables of the procedures invoked after the thread is created.
State: different threads can be running, ready, or blocked waiting for an event.
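
A small sketch of the difference: both threads below update the same global counter (shared data section), while each keeps its own local variable on its own stack. A mutex is used because shared data needs synchronization:

```c
#include <pthread.h>
#include <stdio.h>

/* Shared between threads: globals (data section), and the code itself. */
static int shared_counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *work(void *arg) {
    /* NOT shared: each thread gets its own copy of 'local' on its own stack. */
    int local = 0;

    for (int i = 0; i < 100000; i++) {
        local++;
        pthread_mutex_lock(&lock);     /* shared data needs synchronization */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    printf("thread %s: local = %d\n", (const char *)arg, local);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, work, "A");
    pthread_create(&b, NULL, work, "B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    /* 200000: both threads incremented the same global variable. */
    printf("shared_counter = %d\n", shared_counter);
    return 0;
}
```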

PROCESSES vs THREADS
Similarities: Threads operate, in many ways, just like processes.
They can be in one of several states: ready, blocked, running, or terminated.
They also share the CPU.
There is only one active (running) thread at any given moment.
A thread within a process executes sequentially.
Each thread has its own stack and program counter.
They can create their own child threads.
Differences: Threads, unlike processes, are not independent of each other. Since all threads can access every address of the task, a thread can read or write the stack of any other thread. Although it may seem otherwise, protection is not necessary, because a task with multiple threads is designed by a single user.

Advantages of threads over processes.


It takes much less time to create a new thread in an existing process than to create
a new process.

It takes much less time to terminate a thread than a process.


It takes significantly less time to switch between threads of the same process than between
processes.

Threads speed up communication: since the threads of a process share memory and resources, they can communicate with each other without invoking the OS kernel. (A rough sketch below illustrates the first of these points, the creation-cost difference.)
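
The actual numbers depend heavily on the OS and hardware, but a crude way to see the creation-cost difference is to time a number of pthread_create/pthread_join pairs against the same number of fork/wait pairs (a rough micro-benchmark only, compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define N 200

static void *noop_thread(void *arg) { (void)arg; return NULL; }

static double elapsed_ms(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void) {
    struct timespec t0, t1;

    /* Time N thread create/join pairs. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        pthread_t t;
        pthread_create(&t, NULL, noop_thread, NULL);
        pthread_join(t, NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("%d threads:   %.2f ms\n", N, elapsed_ms(t0, t1));

    /* Time N process fork/wait pairs. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        pid_t pid = fork();
        if (pid == 0) _exit(0);            /* child does nothing and exits */
        waitpid(pid, NULL, 0);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("%d processes: %.2f ms\n", N, elapsed_ms(t0, t1));
    return 0;
}
```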

Examples of the use of threads:


Interactive and background work: in a spreadsheet program, one thread could be reading the user's input while another executes the commands and updates the spreadsheet.

Asynchronous processing: to protect against power failures, a thread could be made responsible for saving the buffer of a word processor to disk once per minute.

Modular structuring of programs: programs that carry out a variety of activities can be designed and implemented using threads.
