OS Lecture 4-5 - 240419 - 133541


PROCESS MANAGEMENT AND THREAD

A process is basically a program in execution. The execution of a process must progress in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in
the system.
To put it in simple terms, we write our computer programs in a text file and when we execute
this program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into memory and becomes a process, it can be divided into
four sections ─ stack, heap, text and data ─ as described in the table below.

S.N. Component & Description

1 Stack
The process stack contains temporary data such as method/function parameters,
return addresses and local variables.

2 Heap
This is memory dynamically allocated to the process during its run time.

3 Text
This section contains the compiled program code. The current activity is
represented by the value of the program counter and the contents of the
processor's registers.

4 Data
This section contains the global and static variables.

Program
A program is a piece of code which may be a single line or millions of lines. A computer
program is usually written by a computer programmer in a programming language.
For example, here is a simple program written in the C programming language:

#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}
A computer program is a collection of instructions that performs a specific task when executed
by a computer. When we compare a program with a process, we can conclude that a process
is a dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is known as an algorithm. A
collection of computer programs, libraries and related data is referred to as software.

Process Life Cycle


When a process executes, it passes through different states. These stages may differ in
different operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time.

S.N. State & Description

1 Start
This is the initial state when a process is first started/created.

2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting
to have the processor allocated to them by the operating system so that they can
run. A process may come into this state after the Start state, or while running,
if it is interrupted by the scheduler so that the CPU can be assigned to another process.

3 Running
Once the process has been assigned to a processor by the OS scheduler, the
process state is set to running and the processor executes its instructions.

4 Waiting
Process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.

5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system,
it is moved to the terminated state where it waits to be removed from main memory.

Process Life Cycle

The CPU Thread


A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, a set
of registers, and a thread ID. Traditional (heavyweight) processes have a single thread of
control: there is one program counter, and one sequence of instructions that can be carried
out at any given time.

In contrast, multi-threaded applications have multiple threads within a single process,
each having its own program counter, stack and set of registers, but sharing common code,
data, and certain structures such as open files.

Types of Threads
Threads, like processes, run within the operating system. There are two types of threads: user
threads (which run in user applications) and kernel threads (which are managed by the OS).

Process Control Block (PCB)
A Process Control Block is a data structure maintained by the Operating System for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information
needed to keep track of a process as listed below in the table −

S.N. Information & Description

1 Process State
The current state of the process, i.e., whether it is ready, running, waiting, and
so on.

2 Process privileges
This is required to allow/disallow access to system resources.

3 Process ID
Unique identification for each process in the operating system.

Pointer
A pointer to the parent process.

5 Program Counter
Program Counter is a pointer to the address of the next instruction to be executed
for this process.

6 CPU registers
The contents of the various CPU registers, which must be saved when the process
leaves the running state and restored when it resumes execution.

7 CPU Scheduling Information


Process priority and other scheduling information which is required to schedule
the process.

8 Memory management information


This includes page-table information, memory limits and segment tables, depending
on the memory-management scheme used by the operating system.

9 Accounting information
This includes the amount of CPU time used for process execution, time limits,
execution ID, etc.

10 IO status information
This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on Operating System and may contain
different information in different operating systems.

A simplified diagram of a PCB would show these fields stored together as a single record per process.

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.

CPU/PROCESS SCHEDULING

Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory at a
time, and the loaded processes share the CPU using time multiplexing.
Categories of Scheduling
There are two categories of scheduling:
1. Non-preemptive: here the resource (typically the CPU) cannot be taken away from a
process until the process completes its execution. A switch occurs only when the
running process terminates or moves to a waiting state.
2. Preemptive: here the OS allocates the resource to a process for a fixed amount of
time. A process may be switched from the running state to the ready state, or from
the waiting state to the ready state. This happens, for example, when a higher-priority
process becomes ready and replaces the currently running process.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS
maintains a separate queue for each of the process states and PCBs of all processes in the
same execution state are placed in the same queue. When the state of a process is changed,
its PCB is unlinked from its current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.).
The OS scheduler determines how to move processes between the ready queue and the run
queue, which can have only one entry per processor core on the system.

Two-State Process Model


This refers to running and non-running states which are described below −
S.N. State & Description

1 Running
When a new process is created, it enters the system in the running state.

2 Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute.
Each entry in the queue is a pointer to a particular process, and the queue is typically
implemented as a linked list. The dispatcher works as follows: when the running process
is interrupted, it is transferred to the waiting queue; if it has completed or aborted,
it is discarded. In either case, the dispatcher then selects a process from the queue to
execute.

Schedulers
Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run. Schedulers are of three types −
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler


It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads them
into memory for execution, making them available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing
operating systems, for example, often have no long-term scheduler. The long-term
scheduler is invoked when a process changes state from new to ready.

Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with a chosen set of criteria. It carries out the transition from the ready state
to the running state: the CPU scheduler selects one process from among those that are ready
to execute and allocates the CPU to it.

The short-term scheduler decides which process to execute next; the dispatcher is the module
that then gives that process control of the CPU. Short-term schedulers run far more frequently
than long-term schedulers and must therefore be faster.

Medium Term Scheduler

Medium-term scheduling is a part of swapping: it removes processes from memory and thereby
reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling
swapped-out processes.
A running process may become suspended if it makes an I/O request; a suspended process
cannot make any progress towards completion. In this situation, to remove the process from
memory and make space for other processes, the suspended process is moved to secondary
storage. This is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.

Comparison among Schedulers

S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
2 | Its speed is less than that of the short-term scheduler. | It is the fastest of the three. | Its speed lies between those of the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides less control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects from among the processes that are ready to execute. | It can re-introduce a process into memory so that its execution can be continued.

CPU SCHEDULING

1. Introduction
CPU scheduling is a crucial feature of operating systems that governs how processor time is
shared among the numerous tasks running on a computer. It is essential for ensuring efficiency
and fairness in executing processes, and it ensures that the system can fulfill its users'
performance and responsiveness requirements.

2. Key Concepts in CPU Scheduling


2.1. Arrival Time
In CPU Scheduling, the arrival time refers to the moment in time when a process enters
the ready queue and is awaiting execution by the CPU. In other words, it is the point at
which a process becomes eligible for scheduling.
Many CPU scheduling algorithms consider arrival time when selecting the next process for
execution. A scheduler, for example, may favor processes with earlier arrival times over
those with later ones to reduce the time processes spend waiting in the ready queue. This
helps ensure that processes are executed efficiently.

2.2. Burst Time


Burst time, also referred to as "execution time", is the amount of CPU time the process
requires to complete its execution. It is the processing time a process needs to carry out
a specific task or unit of work. Factors such as the task's complexity, the code's efficiency,
and the system's resources determine a process's burst time.
Burst time is also an essential factor in CPU scheduling. A scheduler, for example, may
favor processes with shorter burst times over those with longer ones. This reduces the
average time processes spend waiting and helps ensure that the system makes optimal use
of the processor's resources.

2.3. Completion Time


Completion time is the time at which a process finishes execution and is no longer being
processed by the CPU. For a given process, it is the sum of its arrival, waiting, and burst times.
Completion time is an essential metric in CPU scheduling, as it can help determine the
efficiency of the scheduling algorithm. It is also helpful in determining the waiting time of a
process.
For example, a scheduling algorithm that consistently results in shorter completion times for
processes is considered more efficient than one that consistently results in longer completion
times.

2.4. Turnaround Time


The time elapsed between the arrival of a process and its completion is known as
turnaround time; that is, the total time a process takes to complete its execution and
leave the system.
Turnaround Time = Completion Time – Arrival Time

A scheduling algorithm that regularly produces shorter turnaround times for processes is
considered more efficient than one with longer turnaround times.

2.5. Waiting Time


This is the time a process spends in the ready queue before it begins executing. It helps
assess how efficient the scheduling algorithm is: a scheduling method that consistently
results in shorter wait times is considered more efficient than one that regularly results
in longer wait times.
Waiting Time = Turnaround Time – Burst Time
Furthermore, waiting time helps measure the efficiency of a scheduling algorithm and aids
in determining a system's perceived responsiveness to user requests. A long wait time
contributes to a negative user experience, because users may perceive the system as slow
to respond to their requests.

2.6. Response Time


Response time is the amount of time it takes for the CPU to respond to a request made by a
process. It is the duration between the arrival of a process and the first time it runs. It is
an essential parameter in CPU scheduling since it may assist in determining a system’s
perceived responsiveness to user requests.
Response Time = Time of First Execution – Arrival Time
The number of processes waiting in the ready queue, the priority of the processes, and the
features of the scheduling algorithm are all variables that might impact response time. For
example, a scheduling algorithm that prioritizes processes with shorter burst times may result
in quicker response times for those processes.

3. Example for Illustration


To further illustrate these concepts and how they are calculated, let's consider an example
with four processes, shown in the table below with their arrival and burst times. Using the
non-preemptive shortest-job-first (SJF) algorithm, we can trace how the processes are completed:

Process Arrival Time Burst Time


P1 3 3
P2 6 3
P3 0 4
P4 2 5

At time=0: P3 arrives and starts execution without waiting; P3 is therefore the first
process attended to.

At time=2: P4 arrives while P3 continues executing, so P4 waits in the queue.

At time=3: P1 arrives, and P3 continues executing.

At time=4: P3 completes execution. The burst times of P4 and P1 are compared, and since
P1's is shorter, P1 starts executing.

At this point, we can calculate the Turnaround, Wait, and Response Time for P3:
Completion Time (P3) = 4
Turnaround Time (P3) = 4 - 0 = 4
Wait time (P3) = 4 - 4 = 0
Response Time (P3) = 0 - 0 = 0
At time=6: P2 arrives, and P1 is still executing.

At time=7: P1 completes execution. The burst times of P4 and P2 are compared, and since
P2's is shorter, P2 starts executing.

Now, we can make calculations for P1:


Completion Time (P1) = 7
Turnaround Time (P1) = 7 - 3 = 4
Wait time (P1) = 4 - 3 = 1
Response Time (P1) = 4 - 3 = 1
At time=10: P2 completes execution, and only P4 remains in the wait queue. Hence, P4 starts
executing.

At this point, we can make calculations for P2:


Completion Time (P2) = 10
Turnaround Time (P2) = 10 - 6 = 4
Wait Time (P2) = 4 - 3 = 1
Response Time (P2) = 7 - 6 = 1
At time=15: P4 completes execution. The full schedule as a Gantt chart:

| P3 | P1 | P2 | P4 |
0    4    7    10   15

We can now make calculations for P4:


Completion Time (P4) = 15
Turnaround Time (P4) = 15 - 2 = 13
Wait Time (P4) = 13 - 5 = 8
Response Time (P4) = 10 - 2 = 8
