Operating Systems: Internals and Design Principles: Threads


Operating Systems: Internals and Design Principles
Seventh Edition
By William Stallings

Chapter 4: Threads
Processes and Threads
Traditional processes have two characteristics:
Resource Ownership
 The process includes a virtual address space to hold the process image
 The OS provides protection to prevent unwanted interference between processes with respect to resources

Scheduling/Execution
 The process follows an execution path that may be interleaved with other processes
 A process has an execution state (Running, Ready, etc.) and a dispatching priority, and is scheduled and dispatched by the OS
 Traditional processes are sequential; i.e., only one execution path
Processes and Threads
 Multithreading - The ability of an OS to
support multiple, concurrent paths of
execution within a single process
 The unit of resource ownership is referred to
as a process or task
 The unit of dispatching is referred to as a thread or lightweight process
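As a concrete illustration (not from the slides), here is a minimal sketch of multiple concurrent paths of execution within a single process, assuming a UNIX-like system with POSIX threads available; the thread function and its arguments are invented for the example.

```c
/* Minimal POSIX threads sketch: one process, two concurrent
 * execution paths sharing the process's resources.
 * Compile with:  cc demo.c -lpthread                          */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    /* Each thread follows its own execution path. */
    printf("thread %ld running\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, (void *)1L);   /* spawn */
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);                          /* wait for each thread to finish */
    pthread_join(t2, NULL);
    return 0;
}
```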
Single Threaded Approaches
 A single execution path per process, in which the concept of a thread is not recognized, is referred to as a single-threaded approach
 MS-DOS and some versions of UNIX supported only this type of process.
Multithreaded Approaches
 The right half of Figure 4.1 depicts multithreaded approaches
 A Java run-time environment is a system of one process with multiple threads
 Windows and some UNIX variants support multiple multithreaded processes.
Processes
 In a multithreaded environment the process is the unit
that owns resources and the unit of protection.
 i.e., the OS provides protection at the process level
 Processes have:
  a virtual address space that holds the process image
  protected access to processors, other processes, files, and I/O resources
One or More Threads in a Process
Each thread has:
• an execution state (Running, Ready, etc.)
• a saved thread context when not running (stored in a thread control block, TCB)
• an execution stack
• some per-thread static storage for local variables
• access to the shared memory and resources of its process (all threads of a process share these)
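A small sketch of the last two points, assuming POSIX threads on a UNIX-like system (the variable names are invented): each thread gets its own copy of a local variable on its private stack, while data in the process image is visible to every thread.

```c
/* Per-thread stack vs. shared process memory.  The string `greeting`
 * lives in the process image and is visible to every thread; `local`
 * lives on each thread's own execution stack.                        */
#include <pthread.h>
#include <stdio.h>

static const char *greeting = "shared by all threads";  /* process-wide */

static void *worker(void *arg)
{
    int local = *(int *)arg;       /* copied onto this thread's stack */
    printf("thread %d: local=%d, sees \"%s\"\n", local, local, greeting);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int ids[2] = { 1, 2 };

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```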
Threads vs. Processes
Benefits of Threads
 Takes less time to create a new thread than a process
 Takes less time to terminate a thread than a process
 Switching between two threads takes less time than switching between processes
 Threads enhance efficiency in communication between programs
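A rough way to see the creation-cost difference for yourself, assuming a Linux/POSIX system: the sketch below times the creation of many short-lived threads versus many short-lived processes. The numbers depend heavily on the system; this is an illustration, not a benchmark.

```c
/* Rough timing sketch: thread creation vs. process creation. */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define N 1000

static void *noop(void *arg) { return arg; }

static double elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void)
{
    struct timespec t0, t1;
    pthread_t tid;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        pthread_create(&tid, NULL, noop, NULL);
        pthread_join(tid, NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("%d threads:   %.1f ms\n", N, elapsed_ms(t0, t1));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        pid_t pid = fork();
        if (pid == 0) _exit(0);           /* child does nothing */
        waitpid(pid, NULL, 0);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("%d processes: %.1f ms\n", N, elapsed_ms(t0, t1));
    return 0;
}
```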
Thread Use in a
Single-User System
 Foreground and background work
 Asynchronous processing
 Speed of execution
 Modular program structure
Threads
 In an OS that supports threads, scheduling and dispatching are done on a thread basis
 Most of the state information dealing with execution is maintained in thread-level data structures
 However, suspending a process involves suspending all threads of the process
 Similarly, termination of a process terminates all threads within the process
Thread Execution States
 The key states for a thread are:
  Running
  Ready
  Blocked
 Thread operations associated with a change in thread state are:
  Spawn (create)
  Block
  Unblock
  Finish
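One plausible mapping of these four operations onto the POSIX threads API (an illustration, not the only mapping; the pipe-based unblocking is invented for the example):

```c
/* Spawn   -> pthread_create
 * Block   -> the thread issues a blocking read() on an empty pipe
 * Unblock -> another thread write()s to the pipe
 * Finish  -> the thread returns and pthread_join observes it       */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int pipefd[2];

static void *worker(void *arg)
{
    char c;
    (void)arg;
    read(pipefd[0], &c, 1);        /* Block: Running -> Blocked */
    printf("unblocked, finishing\n");
    return NULL;                   /* Finish */
}

int main(void)
{
    pthread_t t;

    pipe(pipefd);
    pthread_create(&t, NULL, worker, NULL);   /* Spawn: thread becomes Ready */
    sleep(1);
    write(pipefd[1], "x", 1);                 /* Unblock: Blocked -> Ready */
    pthread_join(t, NULL);                    /* wait for Finish */
    return 0;
}
```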
Thread Execution
• A key issue with threads is whether or not they can be scheduled independently of the process to which they belong.
• In other words, is it possible to block one thread in a process without blocking the entire process?
• If not, then much of the flexibility of threads is lost.
RPC Using Single Thread
RPC Using One
Thread per Server
Multithreading on a Uniprocessor
Thread Synchronization
 It is necessary to synchronize the activities of the various threads
  all threads of a process share the same address space and other resources
  any alteration of a resource by one thread affects the other threads in the same process
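A minimal synchronization sketch, assuming POSIX threads (the counter and function names are invented): two threads update a variable that lives in the shared address space, and a mutex serializes the updates so one thread's alteration does not corrupt the other's.

```c
/* Without the mutex, the concurrent increments would race and the
 * final total would usually be wrong.                              */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *add(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* serialize the update */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, add, NULL);
    pthread_create(&t2, NULL, add, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```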
Types of Threads
 User-Level Threads (ULTs)
 Kernel-Level Threads (KLTs)

NOTE: we are talking about threads for user processes. Both ULTs and KLTs execute in user mode. The OS may also have threads of its own, but that is not what we are discussing here.
User-Level Threads (ULTs)
 Thread management is done by the application
 The kernel is not aware of the existence of threads
 Not the kind we've discussed so far.
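A toy illustration of the idea (not a real threads library): the ucontext API, which is obsolescent in POSIX but still available on Linux/glibc, lets an application save and switch execution contexts itself. The application decides when control moves between contexts; the kernel sees only one schedulable entity.

```c
/* Toy user-level "thread" switch: the application does the dispatching. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, ult_ctx;
static char ult_stack[64 * 1024];              /* the ULT's private stack */

static void ult_body(void)
{
    printf("user-level thread: running\n");
    swapcontext(&ult_ctx, &main_ctx);          /* yield back to main */
    printf("user-level thread: resumed, done\n");
}

int main(void)
{
    getcontext(&ult_ctx);
    ult_ctx.uc_stack.ss_sp   = ult_stack;
    ult_ctx.uc_stack.ss_size = sizeof ult_stack;
    ult_ctx.uc_link          = &main_ctx;      /* return here when body ends */
    makecontext(&ult_ctx, ult_body, 0);

    printf("main: switching to ULT\n");
    swapcontext(&main_ctx, &ult_ctx);          /* application-level dispatch */
    printf("main: back, switching again\n");
    swapcontext(&main_ctx, &ult_ctx);
    printf("main: ULT finished\n");
    return 0;
}
```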
Relationships Between ULT
States and Process States
Possible transitions from Figure 4.6a:
 4.6a → 4.6b
 4.6a → 4.6c
 4.6a → 4.6d
Figure 4.6 Examples of the Relationships between User-Level Thread States and Process States
Advantages of ULTs
 Thread switching does not require kernel mode privileges (no mode switches)
 Scheduling can be application specific
 ULTs can run on any OS
Disadvantages of ULTs
 In a typical OS many system calls are blocking
  as a result, when a ULT executes a system call, not only is that thread blocked, but all of the threads within the process are blocked
 In a pure ULT strategy, a multithreaded application cannot take advantage of multiprocessing
Overcoming ULT
Disadvantages
 Jacketing: converts a blocking system call into a non-blocking system call (see the sketch below)
 Writing the application as multiple processes rather than multiple threads
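One way a jacket routine could look, assuming a UNIX-like system; `jacket_read` is a hypothetical name invented for this sketch, not part of any real library. Instead of letting the thread issue a read() that might block the whole process, the jacket first checks (via select() with a zero timeout) whether data is ready; if not, it returns so the user-level threads library can dispatch another ULT and retry later.

```c
#include <sys/select.h>
#include <unistd.h>
#include <errno.h>

/* Hypothetical jacket around read(): returns bytes read, or -1 with
 * errno == EWOULDBLOCK if the call would have blocked (caller should
 * yield to another user-level thread and retry later).               */
ssize_t jacket_read(int fd, void *buf, size_t len)
{
    fd_set readable;
    struct timeval poll_now = { 0, 0 };        /* do not wait at all */

    FD_ZERO(&readable);
    FD_SET(fd, &readable);
    if (select(fd + 1, &readable, NULL, NULL, &poll_now) <= 0) {
        errno = EWOULDBLOCK;                   /* nothing ready yet */
        return -1;
    }
    return read(fd, buf, len);                 /* data is ready; read it */
}
```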
Kernel-Level Threads (KLTs)
 Thread management is
done by the kernel
(could call them KMT)
 no thread management
is done by the
application
 Windows is an example
of this approach
Advantages of KLTs
 The kernel can simultaneously schedule multiple
threads from the same process on multiple
processors
 If one thread in a process is blocked, the kernel can
schedule another thread of the same process
Disadvantage of KLTs
 The transfer of control from one thread to another
within the same process requires a mode switch to
the kernel
Combined Approaches
 Thread creation is done in the
user space
 The bulk of scheduling and synchronization of threads is done by the application
 Solaris is an example
Relationship Between
Threads and Processes
Table 4.2 Relationship between Threads and Processes
Multiple Cores &
Multithreading
• Multithreading and multicore chips have the
potential to improve performance of
applications that have large amounts of
parallelism
• Gaming, simulations, etc. are examples
• Performance doesn’t necessarily scale
linearly with the number of cores …
Amdahl’s Law
• Speedup depends on the amount of code that must be executed sequentially
• Formula:

  Speedup = (time to execute on a single processor) / (time to execute on N parallel processors)
          = 1 / ((1 - f) + f/N)

  where f is the fraction of the code that is parallelizable
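As a worked example (numbers chosen only for illustration): if f = 0.9 (90% of the code can run in parallel) and N = 8 processors, then Speedup = 1 / (0.1 + 0.9/8) = 1 / 0.2125 ≈ 4.7, well short of the ideal 8x.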
Performance Effect
of Multiple Cores
Figure 4.7 (a) and Figure 4.7 (b)
Database Workloads on
Multiple-Processor Hardware
Figure 4.8 Scaling of Database Workloads on Multiple Processor Hardware
Applications That Benefit
[MCDO06]
 Multithreaded native applications
  characterized by having a small number of highly threaded processes
 Multiprocess applications
  characterized by the presence of many single-threaded processes
 Java applications
 Multi-instance applications
  multiple instances of the application run in parallel
Summary
 User-level threads
 created and managed by a threads library that runs in the user space of a process
 a mode switch is not required to switch from one thread to another
 only a single user-level thread within a process can execute at a time
 if one thread blocks, the entire process is blocked

 Kernel-level threads
 threads within a process that are maintained by the kernel
 a mode switch is required to switch from one thread to another
 multiple threads within the same process can execute in parallel on a multiprocessor
 blocking of a thread does not block the entire process
 Process: related to resource ownership
 Thread: related to program execution
