Week 4 - Threads

THREADS

TOPICS

Overview
Multicore Programming
Multithreading Models
Thread Libraries
Implicit Threading
Threading Issues
Operating System Examples
THREADS
[Figure: a multithreaded process – code, data, and files are shared by all threads; each thread has its own registers and stack]
• Most modern applications are multithreaded
• Threads run within an application
• Multiple tasks within the application can be implemented by separate threads:
• Update display
• Fetch data
• Spell checking
• Answer a network request
• Process creation is heavy-weight while thread creation is light-weight
• Threads can simplify code and increase efficiency
• Kernels are generally multithreaded
BENEFITS
• Responsiveness – may allow continued execution if part of a process is blocked; especially important for user interfaces
• Resource Sharing – threads share the resources of their process, easier than shared memory or message passing
• Economy – cheaper than process creation; thread switching has lower overhead than context switching
• Scalability – a process can take advantage of multiprocessor architectures
MULTICORE PROGRAMMING
• Parallelism implies a system can perform more than one task
simultaneously
• Concurrency supports more than one task making progress
• Single processor / core, scheduler providing concurrency

[Figure: on a single-core system, threads T1–T4 are interleaved over time by the scheduler; on a two-core system, threads run simultaneously on core 1 and core 2]
MULTICORE PROGRAMMING
• Multicore or multiprocessor systems put pressure
on programmers, challenges include:
• Dividing activities
• Balance
• Data splitting
• Data dependency
• Testing and debugging
USER THREADS AND KERNEL THREADS
• User threads – management done by a user-level threads library
• Three primary thread libraries:
• POSIX Pthreads
• Windows threads
• Java threads
• Kernel threads – supported by the kernel
• Examples – virtually all general-purpose operating systems, including:
• Windows
• Solaris
• Linux
• Tru64 UNIX
• Mac OS X
MULTITHREADING MODELS
• Many-to-One
• One-to-One
• Many-to-Many
MANY-TO-ONE
• Many user-level threads mapped to single kernel thread
• One thread blocking causes all to block
• Multiple threads may not run in parallel on a multicore
system because only one may be in the kernel at a time
• Few systems currently use this model
• Examples:
• Solaris Green Threads
• GNU Portable Threads
ONE-TO-ONE
• Each user-level thread maps to kernel
thread
• Creating a user-level thread creates a
kernel thread
• More concurrency than many-to-one
• Number of threads per process
sometimes restricted due to overhead
• Examples
• Windows
• Linux
• Solaris 9 and later
MANY-TO-MANY MODEL
• Allows many user level threads
to be mapped to many kernel
threads
• Allows the operating system to
create a sufficient number of
kernel threads
• Solaris prior to version 9
• Windows with the ThreadFiber
package
TWO-LEVEL MODEL
• Similar to M:M, except that it
allows a user thread to be
bound to a kernel thread
• Examples
• IRIX
• HP-UX
• Tru64 UNIX
• Solaris 8 and earlier
THREAD LIBRARIES
• Thread library provides programmer with API for creating and
managing threads
• Two primary ways of implementing
• Library entirely in user space
• Kernel-level library supported by the OS
Pthreads
• May be provided either as user-level or kernel-level
• A POSIX standard (IEEE 1003.1c) API for thread creation and
synchronization
• Specification, not implementation
• API specifies behavior of the thread library; implementation is up to
the developers of the library
• Common in UNIX operating systems (Solaris, Linux, Mac OS X)
PTHREADS EXAMPLE
Pthreads Code for Joining 10 Threads

Operating System Concepts – 9th Edition, Silberschatz, Galvin and Gagne ©2013


Windows Multithreaded C Program
JAVA THREADS
• Java threads are managed by the JVM
• Typically implemented using the threads model provided by underlying OS
• Java threads may be created by:

• Extending Thread class


• Implementing the Runnable interface
JAVA MULTITHREADED PROGRAM
IMPLICIT THREADING
• This is a strategy where the application developers transfer
the creation and management of threads to compilers and
run-time libraries
• Three methods explored
• Thread Pools
• OpenMP
• Grand Central Dispatch
• Other methods include Intel Threading Building Blocks (TBB) and Java's
java.util.concurrent package
THREAD POOLS
• Create a number of threads in a pool where they await work
• Advantages:
• Usually slightly faster to service a request with an existing thread than create
a new thread
• Allows the number of threads in the application(s) to be bound to the size of
the pool
• Separating the task to be performed from the mechanics of creating the task
allows different strategies for running the task
• e.g., tasks could be scheduled to run periodically
• Windows API supports thread pools:
OpenMP
• Set of compiler directives and an API for C, C++, FORTRAN
• Provides support for parallel programming in shared-memory environments
• Identifies parallel regions – blocks of code that can run in parallel
#pragma omp parallel
creates as many threads as there are cores.

#pragma omp parallel for
for (i = 0; i < N; i++) {
    c[i] = a[i] + b[i];
}
runs the for loop in parallel.
Grand Central Dispatch
• Apple technology for Mac OS X and iOS operating systems
• Extensions to C, C++ languages, API, and run-time library
• Allows identification of parallel sections
• Manages most of the details of threading
• A block is enclosed in “^{ }” – ^{ printf("I am a block"); }
• Blocks placed in dispatch queue
• Assigned to available thread in thread pool when removed from queue
Grand Central Dispatch
• Two types of dispatch queues:
• serial – blocks removed in FIFO order, queue is per process, called main
queue
• Programmers can create additional serial queues within program
• concurrent – removed in FIFO order but several may be removed at a time
• Three system wide queues with priorities low, default, high
THREADING ISSUES
• Semantics of fork() and exec() system calls
• Signal handling
• Synchronous and asynchronous
• Thread cancellation of target thread
• Asynchronous or deferred
• Thread-local storage
• Scheduler Activations
Semantics of fork() and exec()
• Does fork() duplicate only the calling thread or all
threads?
• Some UNIXes have two versions of fork
• exec() usually works as normal – replace the
running process including all threads
SIGNAL HANDLING
• Signals are used in UNIX systems to notify a process that a particular
event has occurred.
• A signal handler is used to process signals
• Signal is generated by particular event
• Signal is delivered to a process
• Signal is handled by one of two signal handlers:
• default
• user-defined
• Every signal has default handler that kernel runs when handling signal
• User-defined signal handler can override default
• For single-threaded programs, the signal is delivered to the process
SIGNAL HANDLING (CONT.)
• Where should a signal be delivered for multi-
threaded?
• Deliver the signal to the thread to which the signal applies
• Deliver the signal to every thread in the process
• Deliver the signal to certain threads in the process
• Assign a specific thread to receive all signals for the
process
THREAD CANCELLATION
• Terminating a thread before it has finished
• Thread to be canceled is target thread
• Two general approaches:
• Asynchronous cancellation terminates the target thread immediately
• Deferred cancellation allows the target thread to periodically check if it
should be cancelled
• Pthread code to create and cancel a thread:
Thread Cancellation (Cont.)
• Invoking thread cancellation requests cancellation, but actual cancellation depends on
thread state

• If thread has cancellation disabled, cancellation remains pending until thread enables it
• Default type is deferred
• Cancellation only occurs when thread reaches cancellation point
• e.g., pthread_testcancel()
• Then cleanup handler is invoked
• On Linux systems, thread cancellation is handled through signals
Thread-Local Storage
• Thread-local storage (TLS) allows each thread to have its own copy of
data
• Useful when you do not have control over the thread creation process
(e.g., when using a thread pool)
• Different from local variables
• Local variables visible only during single function invocation
• TLS visible across function invocations
• Similar to static data
• TLS is unique to each thread
Scheduler Activations
• Both M:M and Two-level models require communication to
maintain the appropriate number of kernel threads allocated to
the application
• Typically use an intermediate data structure between user and
kernel threads – lightweight process (LWP)
• Appears to be a virtual processor on which process can schedule user
thread to run
• Each LWP attached to kernel thread
• How many LWPs to create?
• Scheduler activations provide upcalls - a communication
mechanism from the kernel to the upcall handler in the thread
library
• This communication allows an application to maintain the correct
number of kernel threads
OPERATING SYSTEM
EXAMPLES
Windows Threads
Linux Threads
Windows Threads
• Windows implements the Windows API – primary API for Win 98, Win NT,
Win 2000, Win XP, and Win 7
• Implements the one-to-one mapping, kernel-level
• Each thread contains
• A thread id
• Register set representing state of processor
• Separate user and kernel stacks for when thread runs in user mode or kernel mode
• Private data storage area used by run-time libraries and dynamic link libraries (DLLs)
• The register set, stacks, and private storage area are known as the context
of the thread
Windows Threads (Cont.)
• The primary data structures of a thread include:
• ETHREAD (executive thread block) – includes pointer to process to which
thread belongs and to KTHREAD, in kernel space
• KTHREAD (kernel thread block) – scheduling and synchronization info, kernel-
mode stack, pointer to TEB, in kernel space
• TEB (thread environment block) – thread id, user-mode stack, thread-local
storage, in user space
Windows Threads Data Structures
LINUX THREADS
• Linux refers to them as tasks rather than threads
• Thread creation is done through clone() system call
• clone() allows a child task to share the address space of the parent task
(process)
• Flags control sharing behavior (e.g., CLONE_VM shares the address space, CLONE_FS file-system information, CLONE_FILES the set of open files, CLONE_SIGHAND signal handlers)

• struct task_struct points to process data structures (shared or unique)
