
Unit-2

Chapter-1
Multithreaded Programming.
• A thread is a single sequential flow of activities being executed in a process; it is also known as a thread of execution or a thread of control.
• There can be multiple threads in a single process.
• A thread has three components, namely a program counter, a register set, and stack space.
• A thread is also termed a lightweight process, since threads share resources and are faster to create and switch than processes. Threads are not independent of one another the way processes are: they share the code section, data section, OS resources, etc. Like a process, however, each thread has its own program counter (PC), register set, and stack space.
• Context switching is faster between threads.
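As a minimal sketch (in Python, purely for illustration, since the notes are language-independent), here are two threads within one process sharing the data section while each keeps its own stack and program counter; the counter and lock names are invented for the example:

```python
import threading

counter = 0                     # shared data section: visible to both threads
lock = threading.Lock()

def work():
    global counter
    for _ in range(1000):
        with lock:              # threads share data, so updates must be guarded
            counter += 1

# two flows of control inside one process, sharing code and data
threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # each thread ran with its own stack and PC
```

After both threads finish, `counter` holds 2000, showing that both flows of control updated the same shared variable.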

Advantages of Threading

• Threads improve the overall performance of a program.
• Threads increase the responsiveness of a program.
• Context switching time between threads is shorter than between processes.
• Threads share the same memory and resources within a process.
• Communication between threads is faster.
• Threads provide concurrency within a process.


Types of Thread

1. User Level Thread:

User-level threads are implemented and managed in user space, and the kernel is not aware of them. They are implemented in a user-level library and are not created using system calls; the kernel schedules the containing process as if it were a single-threaded process.

Examples: POSIX threads (Pthreads), Java threads.

• User-level threads are implemented using user-level libraries, and the OS does not recognize these threads.
• User-level threads are faster to create and manage than kernel-level threads.
• User threads are supported above the kernel, without kernel support. These are the threads that application programmers put into their programs.
• Context switching between user-level threads is faster.
• If one user-level thread performs a blocking operation, the entire process gets blocked.

Advantages:

• Simple representation, since a thread has only a program counter, register set, and stack space.
• Simple to create, since no kernel intervention is needed.
• Thread switching is fast, since no OS calls need to be made.
• User-level threads can be used on operating systems that do not support threads at the kernel level.
• Context switch time is shorter than for kernel-level threads.
2. Kernel level Thread:

Kernel-level threads are implemented and managed by the OS; the kernel knows about and manages the threads. Instead of a thread table in each process, the kernel itself maintains a master thread table that keeps track of all the threads in the system. The kernel also maintains a thread control block for each thread and a process control block for each process.

The kernel offers system calls to create and manage threads from user space. The implementation of kernel threads is more difficult than that of user threads, and context switch time is longer. However, if a kernel thread performs a blocking operation, the execution of other threads can continue.

• Kernel-level threads are implemented using system calls and are recognized by the OS.
• Kernel-level threads are slower to create and manage than user-level threads.
• Context switching in a kernel-level thread is slower.
• Even if one kernel-level thread performs a blocking operation, it does not block the other threads of the process. Eg: Windows, Solaris.

Advantages of Kernel-level threads

1. The kernel-level thread is fully aware of all threads.


2. If one thread in a process is blocked, the kernel can schedule another thread of the same process.

Multithreading:

Multithreading is similar to multitasking, but it enables the processing of multiple threads at one time rather than multiple processes. Since threads are smaller, more basic units of execution than processes, multithreading occurs within processes.

Multithreading divides a process into a number of smaller tasks, each of which is represented by a thread; in this sense a thread is a lightweight process. The execution of a number of such threads within a single process at the same time is called multithreading.
Example of multithreading

Multiple threads run behind the scenes in most of the applications you use regularly. At any given time you may have numerous browser tabs open, each displaying a different type of content; many threads of execution are used to display animations, load content, play video, and so on.

A word processor is another instance of a multithreaded program. Multiple threads are used to display the content, asynchronously check its spelling and grammar, and generate a PDF version of the document while you type. These tasks all happen simultaneously, carried out internally by independent threads.

Advantages of Multithreading:

1. Responsiveness – Multithreading in an interactive application may allow a program to continue running even if part of it is blocked or performing a lengthy operation, thereby increasing responsiveness to the user.

In a non-multithreaded environment, a server listens on a port for a request; when a request arrives, it processes the request and only then resumes listening for the next one. The time taken to process each request makes other users wait unnecessarily. A better approach is to pass each request to a worker thread and continue listening on the port.
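The listener/worker idea can be sketched with a queue of requests handed to a worker thread (a Python illustration; the queue, worker, and request names are all hypothetical):

```python
import queue
import threading

requests = queue.Queue()        # listener puts requests here and is free again
handled = []                    # results collected by the worker

def worker():
    while True:
        req = requests.get()    # blocks until a request arrives
        if req is None:         # sentinel: shut the worker down
            break
        handled.append(req.upper())   # stand-in for real request processing
        requests.task_done()

t = threading.Thread(target=worker)
t.start()

# The "listener" only enqueues requests; it never waits on the processing.
for req in ["ping", "fetch"]:
    requests.put(req)

requests.join()                 # wait until the worker has processed everything
requests.put(None)              # ask the worker to exit
t.join()
```

Because a single worker drains a FIFO queue, the requests are processed in arrival order while the enqueuing side stays responsive.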

2. Resource Sharing – Processes may share resources only through techniques such as:

• Message Passing

• Shared Memory

Threads, by contrast, share the memory and resources of the process to which they belong by default. The benefit of sharing code and data is that it allows an application to have several threads of activity within the same address space.

3. Economy – Allocating memory and resources for process creation is costly in terms of time and space. Since threads share the memory of the process they belong to, it is more economical to create and context-switch threads. In general, far more time is consumed in creating and managing processes than threads.

4. Scalability – The benefits of multithreading increase greatly on a multiprocessor architecture, where threads may run in parallel on multiple processors. A single-threaded process can run on only one processor, regardless of how many are available, and cannot be divided into smaller tasks for different processors to perform. Multithreading on a multi-CPU machine therefore increases parallelism.

5. Utilization of multiprocessor architecture – The advantages of multithreading are considerably amplified on a multiprocessor architecture, where every thread can execute in parallel on a distinct processor.

A single-threaded task can run on only one CPU, no matter how many are available. On a multi-CPU machine, multithreading enhances concurrency. On a single-processor architecture, the CPU switches among threads so quickly that it creates the illusion of parallelism, though only one thread is running at any particular time.

Multithreading Models:



Multithreading models in an operating system describe the ways of mapping user threads to kernel threads. User threads are supported above the kernel and are managed without kernel support, whereas kernel threads are supported and managed directly by the operating system. Ultimately, a relationship must exist between user threads and kernel threads.

There are three common ways to establish such a relationship:

1) Many to One model

2) One to One model

3) Many to many model


1) Many to one model:

The many-to-one model maps many user-level threads to one kernel thread. Thread management is done by the thread library in user space, so it is efficient. However, the entire process will block if a thread makes a blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multicore systems.

In the above figure, the many-to-one model associates all user-level threads with a single kernel-level thread.

2. One to One model

The one-to-one model maps each user-level thread to a single kernel-level thread. This type of relationship facilitates the running of multiple threads in parallel. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call.

It also allows multiple threads to run in parallel on multiprocessors. The drawback of this model is that creating a user thread requires creating a corresponding kernel thread. In the above figure, the one-to-one model associates each user-level thread with a single kernel-level thread.

3. Many to many model:

The many-to-many model multiplexes many user-level threads onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine. In this model there are several user-level threads and several kernel-level threads, and the number of kernel threads created depends on the application. The developer can create any number of threads at both levels, but the numbers need not be the same. If any thread makes a blocking system call, the kernel can schedule another thread for execution. Although this model allows the creation of multiple kernel threads, true parallelism is still limited on a single processor, because the kernel can schedule only one thread at a time there.
In the above figure, the many-to-many model associates several user-level threads with an equal or smaller number of kernel-level threads.

Threading issues:

There are a number of issues that arise with threading. Some of them are
mentioned below:

1) fork() and exec() system calls:

A process can use the fork() system call to create a new process. The calling process is called the parent process, and the new process is called the child process; the child is an exact memory image (duplicate) of the parent process.

During a fork() call the issue that arises is whether the whole process should be
duplicated or just the thread which made the fork() call should be duplicated.

Some UNIX systems therefore provide two versions of fork(): one duplicates all the threads of the parent process in the child, and the other duplicates only the thread that invoked fork(). Which version should be used depends entirely on the application.

The exec() system call, when invoked, replaces the entire process, including all its threads, with the program specified as its parameter. Typically, the exec() system call is lined up after the fork() system call.

1) If the program calls exec() after fork(), then only the calling thread needs to be copied.
2) Duplicating the other threads is unnecessary, because the program given as the parameter to exec() will replace the entire process anyway.
3) If the new process does not call exec() after fork(), then all the threads should be duplicated.
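A hedged sketch of the fork()-then-exec() pattern, using Python's os module as a thin wrapper over the POSIX calls (the spawn helper is an invented name for illustration):

```python
import os

def spawn(argv):
    """Fork, then exec a new program in the child: a minimal sketch."""
    pid = os.fork()                  # duplicate the calling process
    if pid == 0:
        # Child: its whole image (all threads) is replaced by the new program,
        # which is why only the calling thread needed to be copied.
        os.execvp(argv[0], argv)
    # Parent: fork() returned the child's pid; wait for the child to finish.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

# The child becomes /bin/echo and prints, then the parent reaps it.
spawn(["echo", "hello from the child"])
```

`spawn` returns the child's exit status, so `spawn(["true"])` yields 0 while `spawn(["false"])` yields a non-zero value on a POSIX system.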

2) Thread cancellation:

The termination of a thread before it has completed, i.e. prematurely aborting an active thread during its run, is called thread cancellation, and the thread to be terminated is termed the target thread.

Suppose a multithreaded program has several threads concurrently scanning a database for some information. Once one of the threads returns with the necessary result, the other threads can be cancelled.

The target thread is the thread that we want to cancel. Thread cancellation can be done in two ways:

a) Asynchronous cancellation: one thread immediately terminates the target thread.

b) Deferred cancellation: the target thread periodically checks whether it should be terminated, allowing it an opportunity to terminate itself in an orderly fashion.

The issue with cancellation arises when resources have been allocated to the target thread, or when the target thread is cancelled in the middle of updating data it shares with other threads. Asynchronous cancellation is problematic precisely because a thread may cancel its target regardless of whether the target currently owns any resources.

With deferred cancellation, on the other hand, the target thread first receives the cancellation request and then checks a flag to decide whether it should cancel itself now or later. The points at which a thread can check this flag and be cancelled safely are called cancellation points.
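Deferred cancellation can be sketched with a shared flag that the target thread polls at its cancellation points (a Python illustration; the flag and worker names are assumptions):

```python
import threading
import time

stop_requested = threading.Event()       # the cancellation "flag"

def worker():
    while not stop_requested.is_set():   # cancellation point: check the flag
        time.sleep(0.01)                 # ... one small unit of work ...
    # Flag seen: the target thread cleans up and exits in an orderly fashion.

t = threading.Thread(target=worker)
t.start()
stop_requested.set()                     # request deferred cancellation
t.join()                                 # target terminates itself at its next check
```

The requesting thread never kills the target directly; it only sets the flag and waits, so the target is never interrupted mid-update.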

3) Signal handling: In UNIX systems, a signal is used to notify a process that a particular event has occurred. Signals are software interrupts sent to a program to indicate that an important event has occurred. Signal handling is a way of dealing with interrupts, exceptions, and signals sent to a process by the operating system or by another process. Based on the source of the signal, signals can be categorized as:

a) Asynchronous signal: a signal generated outside the process that receives it.

b) Synchronous signal: a signal generated and delivered within the same process.

A signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signaled. All signals, whether synchronous or asynchronous, follow the same pattern:

1. A signal is generated by the occurrence of a particular event.
2. The signal is delivered to a process.
3. Once delivered, the signal must be handled.

A signal may be handled by one of two possible handlers:

1. A default signal handler.
2. A user-defined signal handler.

Every signal has a default signal handler that the kernel runs when handling that signal. This default action can be overridden by a user-defined signal handler.
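Installing a user-defined handler in place of the default one can be sketched as follows (a Python illustration using SIGUSR1; the handler name is invented):

```python
import os
import signal

received = []

def on_sigusr1(signum, frame):
    # User-defined handler: overrides the default action for SIGUSR1.
    received.append(signum)

signal.signal(signal.SIGUSR1, on_sigusr1)   # install the handler
os.kill(os.getpid(), signal.SIGUSR1)        # deliver the signal to ourselves
```

After delivery, the handler has run and `received` records the signal number instead of the default action taking effect.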

4. Thread-specific data / thread-local storage:

Threads belonging to a process share the data of the process; this data sharing is one of the benefits of multithreaded programming. However, in some circumstances each thread might need its own copy of certain data. We call such data thread-local storage (TLS). While thread-specific data avoids data interference between threads, it also hinders direct communication between threads through shared data.

For example, in a transaction-processing system we might service each transaction in a separate thread, and each transaction might be assigned a unique identifier. To associate each thread with its unique identifier, we could use thread-local storage.
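The transaction-identifier example can be sketched with Python's threading.local(), which gives each thread its own copy of the attributes stored on it (the barrier only keeps all threads alive at once for the demonstration):

```python
import threading

tls = threading.local()          # attributes on this object are per-thread
barrier = threading.Barrier(3)   # hold all three threads alive simultaneously
ids = {}

def handle_transaction(txn_id):
    tls.txn_id = txn_id          # this thread's private copy of the identifier
    barrier.wait()               # all threads now hold their TLS at once
    ids[threading.get_ident()] = tls.txn_id

threads = [threading.Thread(target=handle_transaction, args=(i,))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each of the three threads records its own identifier under its own thread id, so `ids` ends up with three distinct entries rather than one shared value.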
5. Thread Pool

The idea of a thread pool is to create a number of threads at process start-up and place them in a pool, where they sit and wait for work. A thread pool is a software design pattern used to manage and reuse a fixed number of threads efficiently in multithreaded applications: it provides a pool of worker threads ready to execute tasks concurrently, rather than creating and destroying a thread for each task. The pool is managed by a thread-pool manager, which assigns tasks to available threads.

Once a task is completed, the thread returns to the pool and becomes available for further tasks. This mechanism reduces the overhead of creating and destroying threads, resulting in improved performance and reduced resource consumption.
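A pool of four reusable workers servicing eight tasks can be sketched with Python's concurrent.futures (the task function is a stand-in for real work):

```python
from concurrent.futures import ThreadPoolExecutor

def task(n):
    return n * n   # stand-in for real work

# Four worker threads are created once and reused across all eight tasks,
# instead of creating and destroying a thread per task.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, range(8)))
```

`pool.map` preserves input order, so `results` is `[0, 1, 4, 9, 16, 25, 36, 49]` even though the tasks ran concurrently on different workers.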

Difference between user-level threads and kernel-level threads

1. Implemented by: User threads are implemented by users; kernel threads are implemented by the operating system (OS).
2. Recognition: The operating system does not recognize user-level threads; kernel threads are recognized by the operating system.
3. Implementation: Implementation of user threads is easy; implementation of kernel-level threads is complicated.
4. Context switch time: Context switch time is less for user-level threads and more for kernel-level threads.
5. Hardware support: Context switching of user-level threads requires no hardware support; kernel-level threads need hardware support.
6. Blocking operation: If one user-level thread performs a blocking operation, the entire process is blocked; if one kernel thread performs a blocking operation, another thread can continue execution.
7. Multithreading: Multithreaded applications built on user-level threads cannot take advantage of multiprocessing; kernels themselves can be multithreaded.
8. Creation and management: User-level threads can be created and managed more quickly; kernel-level threads take more time to create and manage.
9. Operating system: Any operating system can support user-level threads; kernel-level threads are operating-system specific.
10. Thread management: With user-level threads, the thread library contains the code for thread creation, message passing, scheduling, data transfer, and destruction. With kernel-level threads, the application code contains no thread-management code; there is merely an API to kernel mode. The Windows operating system makes use of this approach.
