Unit 2 - Operating System
Subject Name: Operating System
Subject Code: IT-501
Semester: 5th
Waiting time
The sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.
Load average
It is the average number of processes residing in the ready queue waiting for their turn to get into the
CPU.
Response time
Amount of time it takes from when a request was submitted until the first response is produced.
Remember, it is the time till the first response and not the completion of process execution (final
response).
Scheduling algorithm:
A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling
algorithms. There are six popular process-scheduling algorithms, which we are going to discuss in this
chapter:
First-Come, First-Served (FCFS) Scheduling
Shortest-Job-Next (SJN) Scheduling
Priority Scheduling
Shortest Remaining Time
Round Robin(RR) Scheduling
Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that
once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas
preemptive scheduling is based on priority: a scheduler may preempt a low-priority running process at any
time when a high-priority process enters the ready state.
First Come First Serve (FCFS)
Jobs are executed on a first-come, first-served basis.
It is a non-preemptive scheduling algorithm.
Easy to understand and implement.
Its implementation is based on a FIFO queue.
Poor in performance as average wait time is high.
Process | Wait Time : Service Time - Arrival Time
P0 | 0 - 0 = 0
P1 | 5 - 1 = 4
P2 | 8 - 2 = 6
P3 | 16 - 3 = 13
Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
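The wait times above can be checked with a short program. The following is a minimal C sketch; the arrival and burst times (P0: 0/5, P1: 1/3, P2: 2/8, P3: 3/6) are assumptions reconstructed from the worked numbers, since the original process table is not preserved here.

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3};      /* assumed arrival times */
    int burst[]   = {5, 3, 8, 6};      /* assumed burst times */
    int n = 4, clock = 0;
    double total = 0;

    for (int i = 0; i < n; i++) {      /* serve strictly in arrival order */
        if (clock < arrival[i])
            clock = arrival[i];        /* CPU idles until the process arrives */
        int wait = clock - arrival[i]; /* wait = service start - arrival */
        printf("P%d wait = %d\n", i, wait);
        total += wait;
        clock += burst[i];             /* run the process to completion */
    }
    printf("Average wait = %.2f\n", total / n);  /* prints 5.75 */
    return 0;
}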
Shortest Job First (SJF)
This is also known as shortest job next (SJN).
This is a non-preemptive scheduling algorithm.
Best approach to minimize waiting time.
Easy to implement in Batch systems where required CPU time is known in advance.
Impossible to implement in interactive systems where required CPU time is not known
The processor should know in advance how much time the process will take.
Process | Wait Time : Service Time - Arrival Time
P0 | 3 - 0 = 3
P1 | 0 - 0 = 0
P2 | 16 - 2 = 14
P3 | 8 - 3 = 5
Average Wait Time: (3 + 0 + 14 + 5) / 4 = 5.50
Priority Based Scheduling
Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in
batch systems. Each process is assigned a priority, and the process with the highest priority is executed first.
Process | Wait Time : Service Time - Arrival Time
P0 | 9 - 0 = 9
P1 | 6 - 1 = 5
P2 | 14 - 2 = 12
P3 | 0 - 0 = 0
Average Wait Time: (9 + 5 + 12 + 0) / 4 = 6.5
Shortest Remaining Time
Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
The processor is allocated to the job closest to completion, but it can be preempted by a newer ready
job with a shorter time to completion.
It is impossible to implement in interactive systems where the required CPU time is not known.
It is often used in batch environments where short jobs need to be given preference.
Round Robin Scheduling
Round Robin is a preemptive process-scheduling algorithm.
Each process is provided a fixed time to execute, called a quantum.
Once a process has executed for the given time period, it is preempted and another process executes for its
time period.
Context switching is used to save the states of preempted processes.
Process | Wait Time : Service Time - Arrival Time
P0 | (0 - 0) + (12 - 3) = 9
P1 | (3 - 1) = 2
P2 | (6 - 2) + (14 - 9) + (20 - 17) = 12
P3 | (9 - 3) + (17 - 12) = 11
Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
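The round robin numbers can be checked the same way. The sketch below assumes a quantum of 3 and the same arrival/burst times as in the FCFS example (both assumptions, reconstructed from the worked numbers); it reproduces the wait times 9, 2, 12 and 11.

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {5, 3, 8, 6};
    int remaining[4];
    int n = 4, quantum = 3, clock = 0;
    int queue[64], head = 0, tail = 0;   /* simple FIFO ready queue */
    int next_arrival = 1;                /* P0 is enqueued at time 0 below */
    double total = 0;

    for (int i = 0; i < n; i++) remaining[i] = burst[i];
    queue[tail++] = 0;

    while (head < tail) {                /* assumes work is always available */
        int p = queue[head++];
        int slice = remaining[p] < quantum ? remaining[p] : quantum;
        clock += slice;
        remaining[p] -= slice;
        /* processes that arrived during this slice join before the preempted one */
        while (next_arrival < n && arrival[next_arrival] <= clock)
            queue[tail++] = next_arrival++;
        if (remaining[p] > 0) {
            queue[tail++] = p;           /* preempted: back of the queue */
        } else {
            int wait = clock - arrival[p] - burst[p];  /* turnaround - burst */
            printf("P%d wait = %d\n", p, wait);
            total += wait;
        }
    }
    printf("Average wait = %.2f\n", total / n);  /* prints 8.50 */
    return 0;
}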
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm. They make use of other existing
algorithms to group and schedule jobs with common characteristics.
Multiple queues are maintained for processes with common characteristics
Each queue can have its own scheduling algorithms.
Priorities are assigned to each queue
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue. The
Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU based on the
algorithm assigned to that queue.
Let us consider an example of a multilevel queue-scheduling algorithm with five queues:
1. System Processes
2. Interactive Processes
3. Interactive Editing Processes
4. Batch Processes
5. Student Processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example,
could run unless the queues for system processes, interactive processes, and interactive editing processes
were all empty. If an interactive editing process entered the ready queue while a batch process was running,
the batch process would be preempted.
Process Concept
Process: A process is a program in execution. The execution of a process must progress in a sequential
fashion. A process is defined as an entity which represents the basic unit of work implemented in the system.
To put it in simple terms, we write our computer programs in a text file, and when we execute this program
it becomes a process, which performs all the tasks mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be divided into four sections:
stack, heap, text and data.
When a process executes, it passes through different states. In general, a process can have one of the
following five states at a time:
1 Start
This is the initial state when a process is first started/created
2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the
processor allocated to them by the operating system so that they can run.
A process may come into this state after the Start state, or while running, if it is interrupted by the
scheduler to assign the CPU to some other process.
3 Running
Once the process has been assigned to a processor by the OS scheduler, the process state is
set to running and the processor executes its instructions
4 Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for user
input, or waiting for a file to become available
5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the operating system for every process. A PCB
keeps all the information needed to keep track of a process, including the following:
1 Process State
The current state of the process, i.e., whether it is ready, running, waiting, etc.
2 Process privileges
This is required to allow/disallow access to system resources.
3 Process ID
Unique identification for each of the process in the operating system
4 Pointer
A pointer to parent process
5 Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for this process
6 CPU registers
Various CPU registers in which process-specific data must be stored when the process is in the running state.
7 CPU Scheduling Information
Process priority and other scheduling information required to schedule the process.
8 Memory management information
Information about the page table, memory limits and segment table, depending on the memory scheme
used by the operating system.
9 Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID, etc.
10 IO status information
This includes a list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on the operating system and may contain different
information in different operating systems.
The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
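As a rough illustration of the fields listed above, a PCB can be pictured as a C structure. This is only a sketch: the field names and sizes are hypothetical, and real kernels keep far more (and different) information.

#include <sys/types.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    enum proc_state state;        /* 1. process state */
    int             privileges;   /* 2. allowed access to system resources */
    pid_t           pid;          /* 3. unique process ID */
    struct pcb     *parent;       /* 4. pointer to parent process */
    unsigned long   pc;           /* 5. program counter: next instruction */
    unsigned long   regs[16];     /* 6. saved CPU registers */
    int             priority;     /* 7. CPU scheduling information */
    void           *page_table;   /* 8. memory management information */
    unsigned long   cpu_time;     /* 9. accounting information */
    int             open_fds[16]; /* 10. I/O status information */
};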
Schedulers
Schedulers are special system software, which handles process scheduling in various ways. Their main task is
to select jobs submitted into the system and to decide which process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the
system for processing. It selects processes from the queue and loads them into memory for execution,
making them available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is
stable, then the average rate of process creation must be equal to the average departure rate of processes
leaving the system.
On some systems, the long-term scheduler may not be available or may be minimal. Time-sharing operating
systems have no long-term scheduler. The long-term scheduler is used when a process changes state from
new to ready.
Short Term Scheduler
Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries
out the change of a process from the ready state to the running state. The CPU scheduler selects a process
from among the processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute next.
Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the
degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out
processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any
progress towards completion. In this condition, to remove the process from memory and make space for
other processes, the suspended process is moved to secondary storage. This is called swapping, and the
process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Comparison among Schedulers
S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler | It is a CPU scheduler | It is a process-swapping scheduler
2 | Speed is lesser than the short-term scheduler | Speed is the fastest among the three | Speed is in between the short-term and long-term schedulers
Context Switch
Context switches are computationally intensive, since register and memory state must be saved and
restored; to reduce this overhead, some hardware systems employ two or more sets of processor registers.
When the process is switched, the following information is stored for later use:
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
Operations on Processes:
There are many operations that can be performed on processes. Some of these are process creation, process
preemption, process blocking, and process termination. These are given in detail as follows:
Process Creation
Processes need to be created in the system for different operations. This can be done by the following
events:
User request for process creation
System initialization
Execution of a process creation system call by a running process
Batch job initialization
A process may be created by another process using fork(). The creating process is called the parent process
and the created process is the child process. A child process can have only one parent but a parent process
may have many children. Both the parent and child processes have the same memory image, open files, and
environment strings. However, they have distinct address spaces.
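In place of the missing diagram, here is a minimal C sketch of process creation with fork(); the parent waits for its child and collects the child's exit status.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* create a child process */

    if (pid < 0) {                 /* fork failed */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {         /* child: same memory image, distinct address space */
        printf("child: pid=%d, parent=%d\n", getpid(), getppid());
        exit(0);                   /* child's status is reported to the parent */
    } else {                       /* parent: pid holds the child's ID */
        int status;
        waitpid(pid, &status, 0);  /* collect the child's exit status */
        printf("parent: child %d exited with status %d\n",
               pid, WEXITSTATUS(status));
    }
    return 0;
}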
Process Preemption
An interrupt mechanism is used in preemption that suspends the process executing currently and the next
process to execute is determined by the short-term scheduler. Preemption makes sure that all processes get
some CPU time for execution.
Process Blocking
A process is blocked if it is waiting for some event to occur. This event may be I/O, since I/O operations are
handled by devices and do not require the processor. After the event completes, the process goes back to
the ready state.
Process Termination
After the process has completed the execution of its last instruction, it is terminated. The resources held by a
process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer relevant. The child process
sends its status information to the parent process before it terminates. Also, when a parent process is
terminated, its child processes are terminated as well, since the child processes cannot run if the parent
processes are terminated.
Thread
A thread is a flow of execution through the process code, with its own program counter that keeps track of
which instruction to execute next, system registers which hold its current working variables, and a stack,
which contains the execution history.
A thread shares with its peer threads information such as the code segment, the data segment and open
files. When one thread alters a code-segment memory item, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve application performance
through parallelism. Threads represent a software approach to improving the performance of an operating
system by reducing overhead; in other respects a thread is equivalent to a classical process.
Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread
represents a separate flow of control. Threads have been successfully used in implementing network servers
and web servers. They also provide a suitable foundation for parallel execution of applications on shared-
memory multiprocessors.
Difference between Process and Thread
S.N. | Process | Thread
1 | Process is heavyweight or resource intensive. | Thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.
Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Efficient communication
It is more economical to create and context switch threads.
Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
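As a concrete illustration, the sketch below creates two POSIX threads that run in the same address space. POSIX threads are used here only as one common example of a threading API (compile with -pthread).

#include <stdio.h>
#include <pthread.h>

static void *worker(void *arg) {
    const char *name = arg;       /* each thread gets its own argument */
    printf("%s running in the shared address space\n", name);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);       /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}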
Types of Thread
Threads are implemented in the following two ways:
User Level Threads − User managed threads.
Kernel Level Threads − Operating System managed threads acting on kernel, an operating system core.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The thread library
contains code for creating and destroying threads, for passing message and data between threads, for
scheduling thread execution and for saving and restoring thread contexts. The application starts with a
single thread.
Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages
In a typical operating system, most system calls are blocking.
Multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, the Kernel does thread management; there is no thread-management code in the application
area. Kernel threads are supported directly by the operating system.
Any application can be programmed to be multithreaded, and all of the threads within an application are
supported within a single process.
The Kernel maintains context information for the process as a whole and for individual threads within the
process. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads
are generally slower to create and manage than user threads.
Advantages
Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and Kernel-level thread facility; Solaris is a
good example of this combined approach. In a combined system, multiple threads within the same
application can run in parallel on multiple processors, and a blocking system call need not block the entire
process. Multithreading models are of three types:
Many-to-many relationship
Many to one relationship
One to one relationship
Many-to-Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller number of
kernel threads.
In this model, developers can create as many user threads as necessary, and the corresponding kernel
threads can run in parallel on a multiprocessor machine. This model provides the best accuracy on
concurrency, and when a thread performs a blocking system call, the kernel can schedule another thread for
execution.
Difference between User-Level and Kernel-Level Threads
S.N. | User-Level Threads | Kernel-Level Threads
1 | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2 | Implementation is by a thread library at the user level. | The operating system supports creation of Kernel threads.
3 | User-level threads are generic and can run on any operating system. | Kernel-level threads are specific to the operating system.
On the basis of synchronization, processes are categorized as one of the following two types:
Independent Process: Execution of one process does not affect the execution of other processes.
Cooperative Process: Execution of one process affects the execution of other processes.
Cooperating Processes
• Once we have multiple processes or threads, it is likely that two or more of them will want to
communicate with each other
• Process cooperation (i.e., interprocess communication) deals with three main issues
Passing information between processes/threads
Making sure that processes/threads do not interfere with each other
Ensuring proper sequencing of dependent operations
• These issues apply to both processes and threads
Initially we concentrate on shared memory mechanisms
Message Passing
The message size can be fixed or variable. If it is fixed, implementation is easy for the OS designer but
complicated for the programmer; if it is variable, it is easy for the programmer but complicated for the
OS designer. A standard message has two parts: a header and a body. The header is used to store the
message type, destination id, source id, message length and control information. The control information
covers things like what to do if the process runs out of buffer space, a sequence number, and a priority.
Generally, messages are sent in FIFO style.
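As an illustration, the header and body described above might be laid out as the C structures below. All field names and the fixed 256-byte body are illustrative assumptions, not any particular kernel's message format.

#include <stddef.h>

struct msg_header {
    int      type;       /* message type */
    int      src_id;     /* source process id */
    int      dst_id;     /* destination process id */
    size_t   length;     /* length of the message body */
    unsigned seq;        /* control information: sequence number */
    int      priority;   /* control information: priority */
};

struct message {
    struct msg_header header;
    char              body[256];  /* fixed-size body: easy for the OS designer */
};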
Process Synchronization
Process Synchronization means sharing system resources by processes in such a way that concurrent access
to shared data is handled correctly, minimizing the chance of inconsistent data. Maintaining data consistency
demands mechanisms to ensure synchronized execution of cooperating processes.
Process synchronization was introduced to handle problems that arise when multiple processes execute
concurrently.
Precedence Graph
A precedence graph is a directed acyclic graph in which each edge represents an execution-order constraint
and each node represents an individual statement of the program code.
Critical Section Problem
A solution to the critical-section problem must satisfy the following three conditions:
1. Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its critical section at a given point
of time.
2. Progress
If no process is in its critical section, and if one or more threads want to execute their critical section
then any one of these threads must be allowed to get into its critical section.
3. Bounded Waiting
After a process makes a request to enter its critical section, there is a limit on how many other
processes can enter their critical sections before this process's request is granted. After the limit is
reached, the system must grant the process permission to enter its critical section.
Synchronization Hardware
Many systems provide hardware support for critical-section code. The critical-section problem could be
solved easily in a single-processor environment if we could prevent interrupts from occurring while a shared
variable or resource is being modified.
In this manner, we could be sure that the current sequence of instructions would be allowed to execute in
order without pre-emption. Unfortunately, this solution is not feasible in a multiprocessor environment.
Disabling interrupt on a multiprocessor environment can be time consuming as the message is passed to all
the processors.
This message transmission lag delays entry of threads into their critical sections, and system efficiency
decreases.
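On multiprocessors, hardware support usually takes the form of an atomic instruction such as test-and-set instead of disabling interrupts. Below is a minimal spinlock sketch using C11's atomic_flag, whose test-and-set the hardware executes atomically:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                        /* spin until the flag was previously clear */
}

void release(void) {
    atomic_flag_clear(&lock);    /* leave the critical section */
}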
Mutex Locks
As the synchronization hardware solution is not easy to implement for everyone, a strict software approach
called Mutex Locks was introduced. In this approach, in the entry section of code, a LOCK is acquired over the
critical resources modified and used inside critical section, and in the exit section that LOCK is released.
As the resource is locked while a process executes its critical section, no other process can access it.
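A minimal sketch of this entry-section/exit-section pattern, using the POSIX mutex API as one readily available implementation; shared_counter stands in for any critical resource:

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;            /* the critical resource */

void *increment(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m);        /* entry section: acquire the LOCK */
    shared_counter++;              /* critical section */
    pthread_mutex_unlock(&m);      /* exit section: release the LOCK */
    return NULL;
}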
Semaphores
In 1965, Dijkstra proposed a new and very significant technique for managing concurrent processes, using
the value of a simple integer variable to synchronize the progress of interacting processes. This integer
variable is called a semaphore. It is basically a synchronizing tool and is accessed only through two standard
atomic operations, wait and signal, designated by P() and V() respectively.
The classical definitions of wait and signal are:
Wait: decrements the value of its argument S as soon as doing so would leave S non-negative; otherwise the caller waits.
Signal: increments the value of its argument S as an indivisible operation.
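In pseudocode, the two operations look as follows. This busy-waiting version is conceptual only: a real implementation must make each operation atomic (this is exactly what P() and V() guarantee), for example using the hardware support described earlier.

typedef struct { int value; } semaphore;

void wait(semaphore *s) {          /* P() */
    while (s->value <= 0)
        ;                          /* busy-wait until the decrement keeps value >= 0 */
    s->value--;                    /* NOTE: test-and-decrement must be atomic */
}

void signal(semaphore *s) {        /* V() */
    s->value++;                    /* must also be atomic */
}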
Properties of Semaphores
1. Simple
2. Works with many processes
3. Can have many different critical sections with different semaphores
4. Each critical section has unique access semaphores
5. Can permit multiple processes into the critical section at once, if desirable.
Types of Semaphores
Semaphores are mainly of two types:
1. Binary Semaphore
It is a special form of semaphore used for implementing mutual exclusion; hence it is often called a
mutex. A binary semaphore is initialized to 1 and takes only the values 0 and 1 during execution of a
program.
2. Counting Semaphores
These are used to implement bounded concurrency.
Limitations of Semaphores
1. Priority Inversion is a big limitation of semaphores.
2. Their use is not enforced, but is by convention only.
3. With improper use, a process may block indefinitely. Such a situation is called deadlock. We will be
studying deadlocks in detail in coming lessons.
Producer-Consumer Problem
The producer-consumer problem can be solved using semaphores or monitors. Here we see a solution using
semaphores. There are three semaphores: full, empty, and mutex. ‘Full’ counts the number of full slots,
‘empty’ counts the number of empty slots, and ‘mutex’ enforces the mutual exclusion condition.
Shared Data:
Semaphore mutex = 1 ; /* for mutual exclusion */
Semaphore full = 0 ; /* counts number of full slots, initially none */
Semaphore empty = n ; /* counts number of empty slots, initially all */
Mutual exclusion is enforced by the ‘mutex’ semaphore. Synchronization is enforced using the ‘full’ and
‘empty’ semaphores, which avoid race conditions and deadlock in the system.
If we did not use these three semaphores in the producer-consumer problem, there would be race
conditions, deadlock and loss of data.
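A sketch of the producer and consumer loops built on the three semaphores above. wait()/signal() are the semaphore operations, and produce_item(), insert_item(), remove_item() and consume_item() are assumed buffer helpers that are not defined in the original.

void producer(void) {
    while (1) {
        int item = produce_item();  /* create something to store */
        wait(&empty);               /* wait for an empty slot */
        wait(&mutex);               /* enter critical section */
        insert_item(item);          /* add item to the shared buffer */
        signal(&mutex);             /* leave critical section */
        signal(&full);              /* one more full slot */
    }
}

void consumer(void) {
    while (1) {
        wait(&full);                /* wait for a full slot */
        wait(&mutex);
        int item = remove_item();   /* take item from the shared buffer */
        signal(&mutex);
        signal(&empty);             /* one more empty slot */
        consume_item(item);
    }
}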
Readers-Writers Problem
Shared Data:
Semaphore mutex = 1 ;
Semaphore wsem = 1 ;
int readcount = 0 ;
Reader Process:
Reader ()
{
    while (true)
    {
        Wait (mutex) ;
        readcount ++ ;
        if (readcount == 1)
            Wait (wsem) ;       /* first reader locks out writers */
        Signal (mutex) ;
        // Critical Section () ;
        Wait (mutex) ;
        readcount -- ;
        if (readcount == 0)
            Signal (wsem) ;     /* last reader lets writers in */
        Signal (mutex) ;
    }
}
Writer Process:
Writer ()
{
    while (true)
    {
        Wait (wsem) ;
        // Critical Section () ;
        Signal (wsem) ;
    }
}
The semaphore ‘mutex’ ensures mutual exclusion over readcount, and readcount together with wsem
ensures that readers and writers do not conflict in the critical section. Together these avoid race conditions,
loss of data and synchronization problems.
Dining Philosophers Problem
Semaphore mutex = 1 ;
Semaphore S[n] ; /* one per philosopher, all 0 initially */
int state[n] ; /* per-philosopher state: thinking, hungry or eating */
Philosopher (int process)
{
    while (true)
    {
        Think () ;
        Get_forks (process) ;
        Eat () ;
        Put_forks (process) ;
    }
}
Test (int i)
{
    if ((state[i] == hungry) && (state[left(i)] != eating) && (state[right(i)] != eating))
    {
        state[i] = eating ;
        V (S[i]) ;
    }
}
Get_forks (int i)
{
    P (mutex) ;
    state[i] = hungry ;
    Test (i) ;
    V (mutex) ;
    P (S[i]) ;    /* block here until a neighbour's Test() grants the forks */
}
Put_forks (int i)
{
    P (mutex) ;
    state[i] = thinking ;
    Test (left(i)) ;
    Test (right(i)) ;
    V (mutex) ;
}