
Unit I- Processes and Threads

Operating System:

A program that acts as an intermediary between a user of a computer and the computer hardware

Operating system goals:

 Execute user programs and make solving user problems easier


 Make the computer system convenient to use
 Use the computer hardware in an efficient manner

A computer system can be divided roughly into four components:

 Hardware – provides basic computing resources


-CPU, memory, I/O devices

 Operating system
-Controls and coordinates use of hardware among various applications and users

 Application programs – define the ways in which the system resources are used to solve
the computing problems of the users
-Word processors, compilers, web browsers, database systems, video games

 Users
-People, machines, other computers


Operating System can be explored from two view points:

 User View
 System View

System Goals:
Convenience and Efficiency

 Mainframe System

 Batch System

(Figure: memory layout for a simple batch system, showing the operating system and the user program area)

 Multiprogrammed System

(Figure: memory layout for a multiprogrammed system, showing the operating system and Jobs 1-4 resident in memory)

 Time Sharing System


Time sharing is a logical extension of multiprogramming.
 The CPU is multiplexed among several jobs that are kept in memory and on disk (the
CPU is allocated to a job only if the job is in memory).
 Jobs are swapped in and out of memory to the disk.
 Online communication between the user and the system is provided; when the operating
system finishes the execution of one command, it seeks the next “control statement” from
the user’s keyboard.
 An online system must be available for users to access data and code


 Desktop System
 Personal computers – computer system dedicated to a single user.
 I/O devices – keyboards, mice, display screens, small printers.
 User convenience and responsiveness.
 Can adopt technology developed for larger operating systems; often individuals have sole
use of the computer and do not need advanced CPU utilization or protection features.
 May run several different types of operating systems (Windows, MacOS, UNIX, Linux)

 Multiprocessor System
Three Advantages
o Increased throughput
o Economy of scale
o Increased reliability

 Distributed System:

 Distribute the computation among several physical processors.


 Loosely coupled system – each processor has its own local memory; processors
communicate with one another through various communications lines, such as high speed
buses or telephone lines.
 Advantages of distributed systems.
1. Resource sharing
2. Computation speed up – load sharing
3. Reliability
4. Communications
 Requires networking infrastructure.
Local area networks (LAN) or Wide area networks (WAN)
May be either client-server or peer-to-peer systems.

Computer System Organization


 One or more CPUs, device controllers connect through common bus
providing access to shared memory
 Concurrent execution of CPUs and devices competing for memory cycles

Computer-System Operation

 I/O devices and the CPU can execute concurrently


 Each device controller is in charge of a particular device type


 Each device controller has a local buffer


 CPU moves data from/to main memory to/from local buffers
 I/O is from the device to local buffer of controller
 Device controller informs CPU that it has finished its operation by causing an interrupt

Common Functions of Interrupts

 Interrupt transfers control to the interrupt service routine generally, through the interrupt
vector, which contains the addresses of all the service routines
 Interrupt architecture must save the address of the interrupted instruction
 Incoming interrupts are disabled while another interrupt is being processed to prevent a
lost interrupt
 A trap is a software-generated interrupt caused either by an error or a user request
Interrupt Handling

 The operating system preserves the state of the CPU by storing registers and the program
counter
 Determines which type of interrupt has occurred:
 polling
 vectored interrupt system
 Separate segments of code determine what action should be taken for each type of
interrupt
I/O Structure

 Synchronous I/O: after I/O starts, control returns to the user program only upon I/O completion
 Wait instruction idles the CPU until the next interrupt
 Wait loop (contention for memory access)
 At most one I/O request is outstanding at a time, no simultaneous I/O processing
 Asynchronous I/O: after I/O starts, control returns to the user program without waiting for I/O completion
 System call – request to the operating system to allow user to wait for I/O completion
 Device-status table contains entry for each I/O device indicating its type, address, and
state
 Operating system indexes into I/O device table to determine device status and to modify
table entry to include interrupt

Process Management
 A process is a program in execution. It is a unit of work within the system. Program is a
passive entity, process is an active entity.
 Process needs resources to accomplish its task


 CPU, memory, I/O, files


 Initialization data
 Process termination requires reclaim of any reusable resources
 Single-threaded process has one program counter specifying location of next
instruction to execute
 Process executes instructions sequentially, one at a time, until completion
 Multi-threaded process has one program counter per thread
 Typically system has many processes, some user, some operating system running
concurrently on one or more CPUs
 Concurrency by multiplexing the CPUs among the processes / threads

Operating System Structure:


Process Management Activities
 The operating system is responsible for the following activities in connection with
process management:
 Creating and deleting both user and system processes
 Suspending and resuming processes
 Providing mechanisms for process synchronization
 Providing mechanisms for process communication
 Providing mechanisms for deadlock handling

Memory Management
 All data in memory before and after processing
 All instructions in memory in order to execute
 Memory management determines what is in memory when
 Optimizing CPU utilization and computer response to users
 Memory management activities
 Keeping track of which parts of memory are currently being used and by whom
 Deciding which processes (or parts thereof) and data to move into and out of memory
 Allocating and deallocating memory space as needed

Operating System Services


 One set of operating-system services provides functions that are helpful to the user:
 User interface - Almost all operating systems have a user interface (UI)
 Varies between Command-Line (CLI), Graphics User Interface (GUI), Batch
 Program execution - The system must be able to load a program into memory and to run
that program, end execution, either normally or abnormally (indicating error)


 I/O operations - A running program may require I/O, which may involve a file or an I/O
device
 File-system manipulation - The file system is of particular interest. Obviously, programs
need to read and write files and directories, create and delete them, search them, and list
file information; permission management is also needed.

 Communications – Processes may exchange information, on the same computer or
between computers over a network
Communications may be via shared memory or through message passing (packets moved
by the OS)

 Error detection – OS needs to be constantly aware of possible errors


o May occur in the CPU and memory hardware, in I/O devices, in user program

o For each type of error, OS should take the appropriate action to ensure correct and
consistent computing

o Debugging facilities can greatly enhance the user’s and programmer’s abilities to
efficiently use the system

 Another set of OS functions exists for ensuring the efficient operation of the system itself
via resource sharing
 Resource allocation - When multiple users or multiple jobs are running concurrently,
resources must be allocated to each of them
 Many types of resources - Some (such as CPU cycles, main memory, and file storage)
may have special allocation code, others (such as I/O devices) may have general request
and release code
 Accounting - To keep track of which users use how much and what kinds of computer
resources
 Protection and security - The owners of information stored in a multiuser or networked
computer system may want to control use of that information, concurrent processes
should not interfere with each other
 Protection involves ensuring that all access to system resources is controlled
 Security of the system from outsiders requires user authentication, extends to defending
external I/O devices from invalid access attempts
 If a system is to be protected and secure, precautions must be instituted throughout it. A
chain is only as strong as its weakest link.

System Calls

 Programming interface to the services provided by the OS


 Typically written in a high-level language (C or C++)


 Mostly accessed by programs via a high-level Application Program Interface (API) rather
than direct system call use
 Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based
systems (including virtually all versions of UNIX, Linux, and Mac OS X), and Java API
for the Java virtual machine (JVM)
 Why use APIs rather than system calls? (Note that the system-call names used throughout
this text are generic)

A description of the parameters passed to ReadFile()

 HANDLE file—the file to be read


 LPVOID buffer—a buffer where the data will be read into and written from
 DWORD bytesToRead—the number of bytes to be read into the buffer
 LPDWORD bytesRead—the number of bytes read during the last read
 LPOVERLAPPED ovl—indicates if overlapped I/O is being used
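
As a rough illustration (not part of the original notes), the call below supplies those parameters in practice; the file name "input.txt" and the buffer size are placeholders, and the code builds only against the Win32 API:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* HANDLE file -- the file to be read */
    HANDLE file = CreateFileA("input.txt", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "could not open file\n");
        return 1;
    }

    char buffer[128];        /* LPVOID buffer -- where the data is read into */
    DWORD bytesRead = 0;     /* LPDWORD bytesRead -- filled in by the call */

    /* handle, buffer, bytes to read, bytes read, overlapped I/O (not used) */
    if (ReadFile(file, buffer, sizeof(buffer) - 1, &bytesRead, NULL)) {
        buffer[bytesRead] = '\0';
        printf("%s", buffer);
    }

    CloseHandle(file);
    return 0;
}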

System Call Implementation

 Typically, a number associated with each system call


 System-call interface maintains a table indexed according to these numbers
 The system-call interface invokes the intended system call in the OS kernel and returns
the status of the system call and any return values
 The caller need know nothing about how the system call is implemented
 Just needs to obey the API and understand what the OS will do as a result of the call
 Most details of the OS interface are hidden from the programmer by the API
Managed by a run-time support library (set of functions built into libraries included with
the compiler)

System Call Parameter Passing

 Often, more information is required than simply identity of desired system call
 Exact type and amount of information vary according to OS and call
 Three general methods used to pass parameters to the OS
 Simplest: pass the parameters in registers
 In some cases, may be more parameters than registers
 Parameters stored in a block, or table, in memory, and address of block passed as a
parameter in a register
This approach taken by Linux and Solaris

 Parameters placed, or pushed, onto the stack by the program and popped off the stack by
the operating system
 Block and stack methods do not limit the number or length of parameters being passed
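
A minimal sketch (not from the original notes, assuming a POSIX system) of how parameters reach the OS: the program hands them to the open() and read() API calls, and the C library wrapper places them in registers or a memory block according to the platform's convention before trapping into the kernel; the path /etc/hostname is only an illustration:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    int fd = open("/etc/hostname", O_RDONLY);    /* parameters: path, flags */
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* parameters: fd, buffer, count */
    if (n < 0) { perror("read"); close(fd); return 1; }

    buf[n] = '\0';
    printf("%s", buf);
    close(fd);
    return 0;
}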


Types of System Calls

 Process control
 File management
 Device management
 Information maintenance
 Communications

System Programs

o System programs provide a convenient environment for program development
and execution. They can be divided into:
o File manipulation
o Status information
o File modification
o Programming language support
o Program loading and execution
o Communications
o Application programs
o Most users’ view of the operating system is defined by system programs, not the
actual system calls

 Provide a convenient environment for program development and execution


o Some of them are simply user interfaces to system calls; others are considerably
more complex
 File management - Create, delete, copy, rename, print, dump, list, and generally
manipulate files and directories
 Status information
o Some ask the system for info - date, time, amount of available memory, disk
space, number of users
o Others provide detailed performance, logging, and debugging information
o Typically, these programs format and print the output to the terminal or other
output devices
o Some systems implement a registry - used to store and retrieve configuration
information
 File modification
o Text editors to create and modify files
o Special commands to search contents of files or perform transformations of the
text
 Programming-language support - Compilers, assemblers, debuggers and interpreters
sometimes provided
 Program loading and execution- Absolute loaders, relocatable loaders, linkage editors,
and overlay-loaders, debugging systems for higher-level and machine language
 Communications - Provide the mechanism for creating virtual connections among
processes, users, and computer systems
o Allow users to send messages to one another’s screens, browse web pages, send
electronic-mail messages, log in remotely, transfer files from one machine to
another

Operating System Design and Implementation


 Design and Implementation of OS not “solvable”, but some approaches have proven
successful
 Internal structure of different Operating Systems can vary widely
 Start by defining goals and specifications
 Affected by choice of hardware, type of system
 User goals and System goals
o User goals – operating system should be convenient to use, easy to learn, reliable,
safe, and fast
o System goals – operating system should be easy to design, implement, and
maintain, as well as flexible, reliable, error-free, and efficient
 Important principle: separate policy from mechanism
o Policy: What will be done?
o Mechanism: How will it be done?
 Mechanisms determine how to do something, policies decide what will be done
o The separation of policy from mechanism is a very important principle, it allows
maximum flexibility if policy decisions are to be changed later


SYSTEM Structure

Simple Structure
 MS-DOS – written to provide the most functionality in the least space
o Not divided into modules
o Although MS-DOS has some structure, its interfaces and levels of functionality
are not well separated

Layered Approach

 The operating system is divided into a number of layers (levels), each built on top of
lower layers. The bottom layer (layer 0), is the hardware; the highest (layer N) is the user
interface.

 With modularity, layers are selected such that each uses functions (operations) and
services of only lower-level layers


Virtual Machines

 A virtual machine takes the layered approach to its logical conclusion. It treats hardware
and the operating system kernel as though they were all hardware

 A virtual machine provides an interface identical to the underlying bare hardware

 The operating system creates the illusion of multiple processes, each executing on its own
processor with its own (virtual) memory

 The resources of the physical computer are shared to create the virtual machines

o CPU scheduling can create the appearance that users have their own processor

o Spooling and a file system can provide virtual card readers and virtual line
printers

o A normal user time-sharing terminal serves as the virtual machine operator’s console

 The virtual-machine concept provides complete protection of system resources since each
virtual machine is isolated from all other virtual machines. This isolation, however,
permits no direct sharing of resources.

 A virtual-machine system is a perfect vehicle for operating-systems research and
development. System development is done on the virtual machine, instead of on a
physical machine, and so does not disrupt normal system operation.

 The virtual machine concept is difficult to implement due to the effort required to provide
an exact duplicate of the underlying machine


Process Concept

 An operating system executes a variety of programs:

o Batch system – jobs

o Time-shared systems – user programs or tasks

 Textbook uses the terms job and process almost interchangeably

 Process – a program in execution; process execution must progress in sequential fashion

 A process includes:

o program counter

o stack

o data section

Process State

 As a process executes, it changes state

o new: The process is being created

o running: Instructions are being executed

o waiting: The process is waiting for some event to occur

o ready: The process is waiting to be assigned to a processor

o terminated: The process has finished execution


Process Control Block (PCB)

CPU Switch From Process to Process


Process Scheduling Queues


 Job queue – set of all processes in the system
 Ready queue – set of all processes residing in main memory, ready and waiting to
execute
 Device queues – set of processes waiting for an I/O device
 Processes migrate among the various queues

Ready Queue And Various I/O Device Queues


Representation of Process Scheduling

Schedulers

 Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue

 Short-term scheduler (or CPU scheduler) – selects which process should be executed
next and allocates the CPU

 Short-term scheduler is invoked very frequently (milliseconds), so it must be fast

 Long-term scheduler is invoked very infrequently (seconds, minutes), so it may be slow

 The long-term scheduler controls the degree of multiprogramming

 Processes can be described as either:

 I/O-bound process – spends more time doing I/O than computations, many short
CPU bursts


 CPU-bound process – spends more time doing computations; few very long CPU
bursts

Addition of Medium Term Scheduling

The medium-term scheduler can swap a process out of memory and later swap it back in,
reducing the degree of multiprogramming when needed.

Context Switch

 When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process

 Context-switch time is overhead; the system does no useful work while switching

 Time dependent on hardware support

Operations on Processes:

Process Creation

 Parent process creates children processes, which, in turn, create other processes, forming
a tree of processes

 Resource sharing

o Parent and children share all resources

o Children share subset of parent’s resources

o Parent and child share no resources

 Execution


o Parent and children execute concurrently

o Parent waits until children terminate

 Address space

o Child duplicate of parent

o Child has a program loaded into it

 UNIX examples

o fork system call creates new process

o exec system call used after a fork to replace the process’ memory
space with a new program

C Program Forking Separate Process

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
    pid_t pid;

    /* fork another process */
    pid = fork();

    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) { /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else { /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }

    return 0;
}

Process Termination

 Process executes last statement and asks the operating system to delete it
(exit)

o Output data from child to parent (via wait)

o Process’ resources are deallocated by operating system

 Parent may terminate execution of children processes (abort)

o Child has exceeded allocated resources

o Task assigned to child is no longer required

o If parent is exiting

 Some operating systems do not allow a child to continue if its parent terminates

 All children terminated - cascading termination

Cooperating Processes

 Independent process cannot affect or be affected by the execution of another process

 Cooperating process can affect or be affected by the execution of another process

 Advantages of process cooperation

o Information sharing

o Computation speed-up

o Modularity

o Convenience

Producer-Consumer Problem

 Paradigm for cooperating processes: a producer process produces information that is
consumed by a consumer process

o unbounded-buffer places no practical limit on the size of the buffer

o bounded-buffer assumes that there is a fixed buffer size

Bounded-Buffer – Shared-Memory Solution

 Shared data

#define BUFFER_SIZE 10

typedef struct {

...

} item;

item buffer[BUFFER_SIZE];


int in = 0;

int out = 0;

 Solution is correct, but can only use BUFFER_SIZE-1 elements

Bounded-Buffer – Producer and Consumer Code

The code for the producer and consumer processes follows. The producer process
has a local variable nextProduced in which the new item to be produced is stored:

while (true) {
    /* produce an item in nextProduced */

    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */

    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

The consumer process has a local variable nextConsumed in which the item to be
consumed is stored:

while (true) {
    while (in == out)
        ; /* do nothing -- nothing to consume */

    /* remove an item from the buffer */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;

    /* consume the item in nextConsumed */
}


Interprocess Communication (IPC)


 Mechanism for processes to communicate and to synchronize their actions

 Message system – processes communicate with each other without resorting to shared
variables

 IPC facility provides two operations:

 send(message) – message size fixed or variable

 receive(message)

 If P and Q wish to communicate, they need to:

 establish a communication link between them

 exchange messages via send/receive

 Implementation of communication link

 physical (e.g., shared memory, hardware bus)

 logical (e.g., logical properties)
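
Since these notes end with a Linux IPC case study, here is a minimal sketch (not from the original notes) of message passing between a parent and a child using an ordinary Linux pipe as the communication link; the message text is arbitrary:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                     /* fd[0] = read end, fd[1] = write end */
    char buf[64] = "";

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {             /* child: the sender */
        close(fd[0]);
        const char *msg = "hello from the child";
        write(fd[1], msg, strlen(msg) + 1);       /* send(message) */
        close(fd[1]);
        return 0;
    }

    close(fd[1]);                  /* parent: the receiver */
    read(fd[0], buf, sizeof(buf)); /* receive(message) -- blocks until data arrives */
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}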

IPC-Naming
Processes that want to communicate must have a way to refer to each other. They can use either
direct or indirect communication.

Direct Communication


 Processes must name each other explicitly:

 send (P, message) – send a message to process P

 receive(Q, message) – receive a message from process Q

 Properties of communication link

 Links are established automatically

 A link is associated with exactly one pair of communicating processes

 Between each pair there exists exactly one link

 The link may be unidirectional, but is usually bi-directional

Indirect Communication

 Messages are directed to and received from mailboxes (also referred to as ports)

 Each mailbox has a unique id

 Processes can communicate only if they share a mailbox

 Properties of communication link

 Link established only if processes share a common mailbox

 A link may be associated with many processes

 Each pair of processes may share several communication links

 Link may be unidirectional or bi-directional

 Operations

 create a new mailbox

 send and receive messages through mailbox

 destroy a mailbox

 Primitives are defined as:

 send(A, message) – send a message to mailbox A

 receive(A, message) – receive a message from mailbox A

 Mailbox sharing

 P1, P2, and P3 share mailbox A

 P1, sends; P2 and P3 receive

 Who gets the message?

 Solutions

 Allow a link to be associated with at most two processes

 Allow only one process at a time to execute a receive operation

 Allow the system to select arbitrarily the receiver. Sender is notified who the receiver
was.
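
As an illustration of indirect communication (a sketch, not from the original notes), a POSIX message queue can play the role of mailbox A; the queue name /demo_mailbox, the message sizes, and the parent/child roles are assumptions, and on Linux the program is linked with -lrt:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* create the mailbox with room for 10 messages of up to 128 bytes */
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t mq = mq_open("/demo_mailbox", O_CREAT | O_RDWR, 0644, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    if (fork() == 0) {                          /* child: the sender */
        const char *msg = "hello via mailbox";
        mq_send(mq, msg, strlen(msg) + 1, 0);   /* send(A, message) */
        return 0;
    }

    char buf[128];                              /* parent: the receiver */
    ssize_t n = mq_receive(mq, buf, sizeof(buf), NULL);  /* receive(A, message) */
    if (n >= 0)
        printf("received: %s\n", buf);

    wait(NULL);
    mq_close(mq);
    mq_unlink("/demo_mailbox");                 /* destroy the mailbox */
    return 0;
}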

IPC-Synchronization
 Message passing may be either blocking or non-blocking

 Blocking is considered synchronous

 Blocking send has the sender block until the message is received

 Blocking receive has the receiver block until a message is available

 Non-blocking is considered asynchronous

 Non-blocking send has the sender send the message and continue

 Non-blocking receive has the receiver receive a valid message or null

IPC-Buffering
 Queue of messages attached to the link; implemented in one of three ways

 Zero capacity – 0 messages. Sender must wait for receiver (rendezvous)

 Bounded capacity – finite length of n messages. Sender must wait if link full

 Unbounded capacity – infinite length. Sender never waits

Communication in client-server systems


 Sockets

 Remote Procedure Calls



 Remote Method Invocation (Java)

Sockets
 A socket is defined as an endpoint for communication

 Concatenation of IP address and port

 The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8

 Communication takes place between a pair of sockets
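
As a rough sketch (not part of the original notes), the client side of such a socket pair might look as follows; the loopback address 127.0.0.1 and port 1625 are purely illustrative, and a matching server must already be listening there:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);   /* endpoint for communication */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(1625);                       /* port */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);   /* IP address */

    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");
        close(sock);
        return 1;
    }

    const char *msg = "hello over the socket";
    write(sock, msg, strlen(msg));   /* data flows between the socket pair */
    close(sock);
    return 0;
}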

Socket Communication

Remote Procedure Calls


 Remote procedure call (RPC) abstracts procedure calls between processes on networked
systems.

 Stubs – client-side proxy for the actual procedure on the server.

 The client-side stub locates the server and marshalls the parameters.


 The server-side stub receives this message, unpacks the marshalled parameters, and
performs the procedure on the server.

Execution of RPC

Remote Method Invocation


 Remote Method Invocation (RMI) is a Java mechanism similar to RPCs.

 RMI allows a Java program on one machine to invoke a method on a remote object.


Marshalling Parameters

Case study: IPC in Linux.

Examples of IPC Systems - POSIX


 POSIX Shared Memory
 Process first creates shared memory segment
 segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);

 Process wanting access to that shared memory must attach to it
 shared_memory = (char *) shmat(segment_id, NULL, 0);
 Now the process could write to the shared memory
 sprintf(shared_memory, "Writing to shared memory");
 When done a process can detach the shared memory from its address space
 shmdt(shared_memory);
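
The fragments above can be assembled into one small program; a minimal, self-contained sketch follows (assuming a UNIX/Linux system; the 4096-byte segment size and the added shmctl() cleanup call are illustrative):

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/stat.h>

int main(void)
{
    const int size = 4096;

    /* create the shared memory segment */
    int segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);
    if (segment_id == -1) { perror("shmget"); return 1; }

    /* attach the segment to this process's address space */
    char *shared_memory = (char *) shmat(segment_id, NULL, 0);
    if (shared_memory == (char *)-1) { perror("shmat"); return 1; }

    /* write to and read back from the shared memory */
    sprintf(shared_memory, "Writing to shared memory");
    printf("%s\n", shared_memory);

    /* detach and remove the segment */
    shmdt(shared_memory);
    shmctl(segment_id, IPC_RMID, NULL);
    return 0;
}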
Examples of IPC Systems - Mach
 Mach communication is message based
 Even system calls are messages
 Each task gets two mailboxes at creation- Kernel and Notify
 Only three system calls needed for message transfer
 msg_send(), msg_receive(), msg_rpc()
 Mailboxes needed for communication, created via
 port_allocate()

Examples of IPC Systems – Windows XP


 Message-passing centric via local procedure call (LPC) facility
 Only works between processes on the same system
 Uses ports (like mailboxes) to establish and maintain communication channels
 Communication works as follows:
 The client opens a handle to the subsystem’s connection port object

 The client sends a connection request

 The server creates two private communication ports and returns the handle to one of
them to the client

 The client and server use the corresponding port handle to send messages or callbacks
and to listen for replies

Threads
 To introduce the notion of a thread — a fundamental unit of CPU utilization that forms
the basis of multithreaded computer systems
 To discuss the APIs for the Pthreads, Win32, and Java thread libraries
 To examine issues related to multithreaded programming

Overview

 A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization;
it comprises a thread ID, a program counter, a register set, and a stack.


 A traditional (or heavyweight) process has single thread of control. If the process has
multiple threads of control, it can do more than one task at a time.

Motivation
 Many software packages that run on modern desktop PCs are multithreaded.

 Single and Multithreaded Processes

Benefits:
 Responsiveness

 Resource Sharing

 Economy

 Utilization of multiprocessor architectures.

User and Kernel Threads:

User Threads
 Thread management done by user-level threads library

 Three primary thread libraries:

 POSIX Pthreads


 Win32 threads

 Java threads

Kernel Threads

 Supported by the Kernel

 Examples

 Windows XP/2000

 Solaris

 Linux

 Tru64 UNIX

 Mac OS X

Multithreading Models

 Many-to-One

 One-to-One

 Many-to-Many

Many-to-One

 Many user-level threads mapped to single kernel thread

 Examples:

 Solaris Green Threads

 GNU Portable Threads


One-to-One

 Each user-level thread maps to kernel thread

 Examples

 Windows NT/XP/2000

 Linux

 Solaris 9 and later

Many-to-Many Model

 Allows many user level threads to be mapped to many kernel threads



 Allows the operating system to create a sufficient number of kernel threads

 Solaris prior to version 9

 Windows NT/2000 with the ThreadFiber package

Two-level Model

Similar to M:M, except that it allows a user thread to be bound to a kernel thread

Examples

 IRIX
 HP-UX
 Tru64 UNIX
 Solaris 8 and earlier


Threading Issues
 Semantics of fork() and exec() system calls

 Thread cancellation

 Signal handling

 Thread pools

 Thread specific data

 Scheduler activations

Semantics of fork() and exec()

 Usage of the two versions of fork() depends upon the application.

 If exec is called immediately after forking, then duplicating all threads is unnecessary, as
the program specified in the parameters to exec will replace the process.

 Does fork() duplicate only the calling thread or all threads?



Thread Cancellation

 Terminating a thread before it has finished

 Two general approaches:

 Asynchronous cancellation terminates the target thread immediately

 Deferred cancellation allows the target thread to periodically check if it should be
cancelled

Signal Handling

 Signals are used in UNIX systems to notify a process that a particular event has occurred

 A signal handler is used to process signals

1. Signal is generated by particular event

2. Signal is delivered to a process

3. Signal is handled

 Options:

1. Deliver the signal to the thread to which the signal applies

2. Deliver the signal to every thread in the process

3. Deliver the signal to certain threads in the process

4. Assign a specific thread to receive all signals for the process
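
A minimal sketch (not from the original notes) of UNIX signal handling: the process installs a handler for SIGINT and then waits for the signal to be generated, delivered, and handled; the handler and variable names are illustrative:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

/* the signal handler: only async-signal-safe work is done here */
static void handle_sigint(int signo)
{
    (void)signo;
    got_signal = 1;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handle_sigint;     /* this function handles the signal */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);      /* deliver SIGINT to this process's handler */

    printf("press Ctrl-C to generate SIGINT...\n");
    while (!got_signal)
        pause();                       /* wait until a signal is delivered */

    printf("SIGINT handled\n");
    return 0;
}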

Thread Pools

 Create a number of threads in a pool where they await work

 Advantages:

o Usually slightly faster to service a request with an existing thread than create a
new thread

o Allows the number of threads in the application(s) to be bound to the size of the
pool

Thread Specific Data

 Allows each thread to have its own copy of data



 Useful when you do not have control over the thread creation process (i.e., when using a
thread pool)

Scheduler Activations

 Both M:M and Two-level models require communication to maintain the appropriate
number of kernel threads allocated to the application

 Scheduler activations provide upcalls - a communication mechanism from the kernel to
the thread library

 This communication allows an application to maintain the correct number of kernel threads

Case Study: Pthreads library


Pthreads

 A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization

 API specifies behavior of the thread library; implementation is up to developers of the
library

 Common in UNIX operating systems (Solaris, Linux, Mac OS X)
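
A minimal sketch (not from the original notes) of Pthreads usage: the main thread creates one worker thread that sums the integers 1..N and then joins it; the sum example and names are illustrative, and the program is linked with -lpthread:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static long sum;                 /* data shared with the worker thread */

/* the thread's start routine */
static void *runner(void *param)
{
    long upper = atol(param);
    sum = 0;
    for (long i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(NULL);
}

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <integer>\n", argv[0]);
        return 1;
    }

    pthread_t tid;               /* thread identifier */
    pthread_attr_t attr;         /* thread attributes */

    pthread_attr_init(&attr);                     /* default attributes */
    pthread_create(&tid, &attr, runner, argv[1]); /* create the thread */
    pthread_join(tid, NULL);                      /* wait for it to finish */

    printf("sum = %ld\n", sum);
    return 0;
}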

Windows XP Threads

 Implements the one-to-one mapping

 Each thread contains

o A thread id

o Register set

o Separate user and kernel stacks

o Private data storage area

 The register set, stacks, and private storage area are known as the context of the threads

 The primary data structures of a thread include:

o ETHREAD (executive thread block)

o KTHREAD (kernel thread block)


o TEB (thread environment block)

Linux Threads

 Linux refers to them as tasks rather than threads

 Thread creation is done through clone() system call

 clone() allows a child task to share the address space of the parent task (process)
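
A hedged sketch (not from the original notes) of clone() with the glibc wrapper: the flag combination shown makes the child a thread-like task sharing the parent's address space, and the 1 MB stack size is an arbitrary choice:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared_value = 0;     /* visible to the new task because of CLONE_VM */

/* the function the new task runs */
static int worker(void *arg)
{
    (void)arg;
    shared_value = 42;
    const char *msg = "worker task ran\n";
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}

int main(void)
{
    const size_t STACK_SIZE = 1024 * 1024;
    char *stack = malloc(STACK_SIZE);
    if (stack == NULL) { perror("malloc"); return 1; }

    /* CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND: share the address
     * space, filesystem info, open files, and signal handlers with the parent;
     * SIGCHLD lets the parent wait for the task with waitpid(). */
    pid_t tid = clone(worker, stack + STACK_SIZE,
                      CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
                      NULL);
    if (tid == -1) { perror("clone"); free(stack); return 1; }

    waitpid(tid, NULL, 0);
    printf("parent sees shared_value = %d\n", shared_value);
    free(stack);
    return 0;
}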

Java Threads

 Java threads are managed by the JVM

 Java threads may be created by:

 Extending Thread class

 Implementing the Runnable interface
