2. Process Management (1)
CHAPTER 2
PROCESS MANAGEMENT
OVERVIEW
Process Concept
Process Scheduling
Interprocess Communication
Process Synchronization
PROCESS CONCEPT
PROGRAM VS PROCESS
PROCESS STATE
DIAGRAM OF PROCESS STATE
PROCESS CONTROL BLOCK (PCB)
CPU SWITCH FROM PROCESS TO PROCESS
READY QUEUE AND VARIOUS I/O DEVICE QUEUES
PROCESS SCHEDULING QUEUES
A new process is initially put in the ready queue. It waits there until it
is selected for execution. Once the process is allocated the CPU and
is executing, one of several events could occur:
The process could issue an I/O request and then be placed in an I/O queue.
The process could create a new child process and wait for the child’s
termination.
The process could be removed forcibly from the CPU, as a result of an interrupt,
and be put back in the ready queue.
REPRESENTATION OF PROCESS SCHEDULING
SCHEDULERS
ADDITION OF MEDIUM TERM SCHEDULING
CONTEXT SWITCH
PROCESS CREATION
PROCESS CREATION (CONT.)
Address space:
The child is a duplicate of the parent, or
the child has a new program loaded into it.
UNIX examples:
fork system call creates a new process.
execve system call, used after a fork, replaces the process's memory space with a new program.
A TREE OF PROCESSES ON A TYPICAL UNIX SYSTEM
PROCESS CREATION (CONT.)
UNISTD C LIBRARY
getpid()
getppid()
fork()
wait()
UNIX PROCESS CREATION
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    pid_t pid;

    pid = fork();            /* fork a child process */
    if (pid < 0) {           /* error occurred */
        fprintf(stderr, "Fork Failed\n");
        return 1;
    }
    else if (pid == 0) {     /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else {                   /* parent process */
        wait(NULL);          /* parent waits for the child to complete */
        printf("Child Complete\n");
    }
    return 0;
}
PROCESS CREATION USING THE FORK() SYSTEM CALL
PROCESS TERMINATION
OUTLINE
Scheduling Algorithms
Interprocess Communication
Threads
Process Synchronization
Deadlocks
SCHEDULING ALGORITHMS
Whenever the CPU becomes idle, the operating system must select
one of the processes in the ready queue to be executed.
The selection process is carried out by the short-term scheduler, or CPU scheduler.
PREEMPTIVE VS NON-PREEMPTIVE SCHEDULING
Non-preemptive:
The running process keeps the CPU until it voluntarily gives it up:
the process exits, or
it switches to the blocked state.
Preemptive:
The running process can be interrupted and forced to release the CPU.
SCHEDULING CRITERIA
CPU utilization: in a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily loaded system).
Throughput: the number of processes that are completed per time unit.
Turnaround time: the interval from the time of submission of a process to the time of completion = waiting to get into memory + waiting in the ready queue + executing on the CPU + doing I/O.
Waiting time: the sum of the periods spent waiting in the ready queue.
Response time: the time from the submission of a request until the first response is produced.
SCHEDULING ALGORITHMS (CONT)
4. Priority Scheduling:
Priorities are generally indicated by some fixed range of numbers, such as 0 to 7
or 0 to 4,095.
Some systems use low numbers to represent low priority; others use low
numbers for high priority
Here, we assume that low numbers represent high priority.
SCHEDULING ALGORITHMS (CONT)
4. Priority Scheduling:
Priorities can be defined either internally or externally.
Internally defined priorities use some measurable quantity or quantities to compute the priority
of a process.
time limits
memory requirements
the number of open files
ratio of average I/O burst to average CPU burst
A major problem with priority scheduling algorithms is indefinite blocking, or
starvation.
A solution to the problem of indefinite blockage of low-priority processes is aging.
For example, if priorities range from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1 every 15 minutes.
Priority scheduling can be either preemptive (4a) or nonpreemptive (4b).
SCHEDULING ALGORITHMS (CONT)
5. Round-Robin Scheduling
Preemptive scheduling
The round-robin (RR) scheduling algorithm is designed especially for timesharing
systems.
A small unit of time, called a time quantum or time slice, is defined. A time
quantum is generally from 10 to 100 milliseconds in length.
The ready queue is treated as a circular queue.
SCHEDULING ALGORITHMS (CONT)
5. Round-Robin Scheduling
In the RR scheduling algorithm, no process is allocated the CPU for more than 1
time quantum in a row (unless it is the only runnable process)
The performance of the RR algorithm depends heavily on the size of the time
quantum.
At one extreme, if the time quantum is extremely large, the RR policy is the same as the FCFS policy.
If the time quantum is extremely small, the RR approach can result in a large number of context switches.
COOPERATING PROCESSES
THREADS
A thread (or lightweight process) is a basic unit of CPU utilization; it consists of:
program counter
register set
stack space
A thread shares with its peer threads its:
code section
data section
operating-system resources
collectively known as a task.
A traditional or heavyweight process is equal to a task with one thread.
PROCESS VS THREAD
MULTITHREADING MODELS
Many-to-one model
One-to-one model
Many-to-many model
THREAD LIBRARIES
PTHREADS
#include <pthread.h>
#include <stdio.h>
int sum; /* this data is shared by the thread(s) */
void *runner(void *param); /* threads call this function */
The technique for creating threads with the Windows thread library is similar to the Pthreads technique in several ways.
Threads are created in the Windows API using the CreateThread() function and, just as in Pthreads, a set of attributes for the thread is passed to this function.
Once the summation thread is created, the parent must wait for it to complete before outputting the value of Sum, as that value is set by the summation thread.
The Windows API uses the WaitForSingleObject() function to wait for the summation thread.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

DWORD Sum; /* data is shared by the thread(s) */

/* The thread runs in this separate function. */
DWORD WINAPI Summation(LPVOID Param)
{
    DWORD Upper = *(DWORD *)Param;
    for (DWORD i = 0; i <= Upper; i++)
        Sum += i;
    return 0;
}

int main(int argc, char *argv[])
{
    DWORD ThreadId;
    HANDLE ThreadHandle;
    DWORD Param;

    if (argc != 2) {
        fprintf(stderr, "An integer parameter is required\n");
        return -1;
    }
    if (atoi(argv[1]) < 0) {
        fprintf(stderr, "An integer >= 0 is required\n");
        return -1;
    }
    Param = (DWORD)atoi(argv[1]);

    ThreadHandle = CreateThread( /* create the thread */
        NULL,       /* default security attributes */
        0,          /* default stack size */
        Summation,  /* thread function */
        &Param,     /* parameter to thread function */
        0,          /* default creation flags */
        &ThreadId); /* returns the thread identifier */

    if (ThreadHandle != NULL) {
        WaitForSingleObject(ThreadHandle, INFINITE); /* wait for the thread to finish */
        CloseHandle(ThreadHandle);                   /* close the thread handle */
        printf("sum = %lu\n", Sum);
    }
    return 0;
}
JAVA THREADS
class Summation implements Runnable {
    private int upper;

    public Summation(int upper) {
        this.upper = upper;
    }

    public void run() {
        for (int i = 0; i <= upper; i++)
            a.sumObject += i;
    }
}

public class a {
    static int sumObject = 0;

    public static void main(String[] args) {
        int upper = Integer.parseInt(args[0]);
        Thread thread = new Thread(new Summation(upper));
        thread.start();
        try {
            thread.join();
            System.out.println("The sum of " + upper + " is " + a.sumObject);
        } catch (InterruptedException ie) { }
    }
}
IMPLICIT THREADING
THREAD POOLS
The general idea behind a thread pool is to create a number of threads at process startup and place them into a pool, where they sit and wait for work.
Java:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// e.g. a pool of 2-4 threads with a bounded work queue of 10 tasks
// (the pool sizes here are illustrative)
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 4, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(10));
OPENMP
OpenMP is a set of compiler directives as well as an API for programs written in C, C++,
or FORTRAN that provides support for parallel programming in shared-memory environments
OpenMP identifies parallel regions as blocks of code that may run in parallel
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    /* sequential code */
    #pragma omp parallel
    {
        printf("I am a parallel region.\n");
    }
    /* sequential code */
    return 0;
}

/* divide the loop iterations among the threads
   (i, N, and the arrays a, b, c are assumed declared and initialized) */
#pragma omp parallel for
for (i = 0; i < N; i++) {
    c[i] = a[i] + b[i];
}
MULTIPLE THREADS WITHIN A TASK
THREADS SUPPORT IN SOLARIS 2
Solaris 2 is a version of UNIX with support for threads at the kernel and user levels, symmetric multiprocessing, and real-time scheduling.
LWP – an intermediate level between user-level threads and kernel-level threads.
Resource needs of thread types:
Kernel thread: small data structure and a stack; thread switching does not require changing memory access information – relatively fast.
LWP: PCB with register data, accounting and memory information; switching between LWPs is relatively slow.
User-level thread: only needs a stack and program counter; no kernel involvement means fast switching.
The kernel only sees the LWPs that support user-level threads.
SOLARIS 2 THREADS
INTERPROCESS COMMUNICATION (IPC)
IMPLEMENTATION QUESTIONS
DIRECT COMMUNICATION
INDIRECT COMMUNICATION
Messages are sent to and received from mailboxes (also referred to as ports).
Each mailbox has a unique id.
Processes can communicate only if they share a mailbox.
Properties of communication link
Link established only if processes share a common mailbox
A link may be associated with many processes.
Each pair of processes may share several communication links.
Link may be unidirectional or bi-directional.
Operations
create a new mailbox
send and receive messages through mailbox
destroy a mailbox
INDIRECT COMMUNICATION (CONTINUED)
Mailbox sharing
P1, P2, and P3 share mailbox A.
P1 sends; P2 and P3 receive.
Who gets the message?
Solutions
Allow a link to be associated with at most two processes.
Allow only one process at a time to execute a receive operation.
Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was.
BUFFERING
EXCEPTION CONDITIONS – ERROR RECOVERY
Process terminates
Lost messages
Scrambled Messages
PROCESS SYNCHRONIZATION
BOUNDED-BUFFER
Consumer process:
repeat
    while counter = 0 do no-op;
    nextc := buffer[out];
    out := (out + 1) mod n;
    counter := counter – 1;
    …
    consume the item in nextc
    …
until false;
The statements
    counter := counter + 1;
    counter := counter – 1;
must be executed atomically.
THE CRITICAL-SECTION PROBLEM
SOLUTION TO CRITICAL-SECTION PROBLEM
repeat
    entry section
        critical section
    exit section
        remainder section
until false;
Processes may share some common variables to synchronize their actions.
ALGORITHM 1
Shared variables:
    var turn: (0..1);
    initially turn = 0
    turn = i ⇒ Pi can enter its critical section
Process Pi:
repeat
    while turn ≠ i do no-op;
        critical section
    turn := j;
        remainder section
until false;
Satisfies mutual exclusion, but not progress.
ALGORITHM 2
Shared variables:
    var flag: array [0..1] of boolean;
    initially flag[0] = flag[1] = false
    flag[i] = true ⇒ Pi ready to enter its critical section
Process Pi:
repeat
    flag[i] := true;
    while flag[j] do no-op;
        critical section
    flag[i] := false;
        remainder section
until false;
Satisfies mutual exclusion, but not the progress requirement.
ALGORITHM 3
repeat
    flag[i] := true;
    turn := j;
    while (flag[j] and turn = j) do no-op;
        critical section
    flag[i] := false;
        remainder section
until false;
Meets all three requirements; solves the critical-section problem for two processes.
BAKERY ALGORITHM
Before entering its critical section, each process receives a number; the holder of the smallest number enters the critical section.
If processes Pi and Pj receive the same number, then if i < j, Pi is served first; otherwise Pj is served first.
The numbering scheme always generates numbers in nondecreasing order; e.g., 1,2,3,3,3,3,4,5...
BAKERY ALGORITHM (CONT.)
repeat
    choosing[i] := true;
    number[i] := max(number[0], number[1], …, number[n – 1]) + 1;
    choosing[i] := false;
    for j := 0 to n – 1 do begin
        while choosing[j] do no-op;
        while number[j] ≠ 0 and (number[j], j) < (number[i], i) do no-op;
    end;
        critical section
    number[i] := 0;
        remainder section
until false;
SYNCHRONIZATION HARDWARE
MUTUAL EXCLUSION WITH TEST-AND-SET
repeat
while Test-and-Set (lock) do no-op;
critical section
lock := false;
remainder section
until false;
SEMAPHORE
Semaphore S – an integer variable that can only be accessed via two atomic operations:
wait (S): while S ≤ 0 do no-op;
          S := S – 1;
signal (S): S := S + 1;
EXAMPLE: CRITICAL SECTION OF N PROCESSES
Shared variables
var mutex : semaphore
initially mutex = 1
Process Pi
repeat
wait(mutex);
critical section
signal(mutex);
remainder section
until false;
SEMAPHORE IMPLEMENTATION
DEADLOCK AND STARVATION
Deadlock – two or more processes are waiting indefinitely for an event that can be
caused by only one of the waiting processes.
Let S and Q be two semaphores initialized to 1:
    P0:           P1:
    wait(S);      wait(Q);
    wait(Q);      wait(S);
    …             …
    signal(S);    signal(Q);
    signal(Q);    signal(S);
Starvation – indefinite blocking. A process may never be removed from the
semaphore queue in which it is suspended.
TWO TYPES OF SEMAPHORES
IMPLEMENTING S AS A BINARY SEMAPHORE
Data structures:
var S1: binary-semaphore;
S2: binary-semaphore;
S3: binary-semaphore;
C: integer;
Initialization:
S1 = S3 = 1
S2 = 0
C = initial value of semaphore S
IMPLEMENTING S (CONT.)
wait operation:
    wait(S3);
    wait(S1);
    C := C – 1;
    if C < 0
        then begin
            signal(S1);
            wait(S2);
        end
        else signal(S1);
    signal(S3);
signal operation:
    wait(S1);
    C := C + 1;
    if C ≤ 0 then signal(S2);
    signal(S1);