OS Unit - I
CHAPTER 1
BASIC CONCEPTS
INTRODUCTION
The operating system is a program that acts as an intermediary between
hardware and software. An operating system (OS) exploits the hardware resources
of one or more processors to provide a set of services to system users. It serves as
an interface between programs and the various hardware and software components
of the computer. The operating system is designed to control all of the computer's
resources and activities. It is a fully integrated collection of specialized programs
that manage all of the computer's functions.
• Input/Output: This layer manages all physical interactions with other devices,
including the keyboard, printer, display, and disk drive. The I/O layer receives a
request whenever a higher layer needs access to a device.
• File Management System: Also known as the "file system," it is in charge of
organizing and overseeing data storage on long-term storage devices such as
hard disks and floppy disk drives.
• User Interface: This is where human-machine interaction takes place. There are
two types of user interface: the icon-based Graphical User Interface (GUI), used
in Windows and Apple macOS, and the text-based Command Line Interface
(CLI), used in MS-DOS and Linux.
• Layers are independent of each other. This means that a change to one layer
should not affect the other layers.
The figure below illustrates the layered structure of an OS:
Advantages
• It allows the system to run multiple programs simultaneously.
• Beneficial for tasks that need to use all of the processor’s resources, such as
games, scientific calculations, and financial simulations.
Disadvantages: They require additional hardware, such as processors and
memory, making a system more expensive.
Advantages
• Allows companies to scale their computing resources to handle increased
demand without having to buy new hardware.
• Client-server systems can be quickly reconfigured to meet the changing
needs of an organization.
• More reliable and easier to maintain than dedicated server systems.
• Lower operating cost.
Disadvantages
• These OS need more sophisticated management and networking
technologies, longer startup times, and increased vulnerability to attack.
• Less secure than dedicated server systems.
• More challenging to scale than dedicated server systems.
Batch OS: In a batch operating system, each user prepares their job on a
standalone device, such as punch cards, and submits it to a computer operator.
The operator takes jobs that share similar needs and requirements, groups them
into batches, and distributes the batches for processing, which simplifies
computing and speeds up responses. These types of operating systems are rarely
used nowadays.
Advantages
• The overall time the system takes to execute all the programs is reduced.
• These operating systems can be shared between multiple users.
• Suitable for small-scale businesses.
• They can also work in offline mode.
• Jobs can be scheduled for times when the computer would otherwise be idle.
Disadvantages
• Sometimes, manual interventions are required between two batches.
• The CPU utilization is low because the time taken in loading and unloading
batches is very high compared to execution time.
• Sometimes, jobs enter into an infinite loop due to some mistake.
• Meanwhile, if one job takes too much time, other jobs must wait.
Thus, the diverse types of operating systems (OS) serve as the backbone of
computer technology, each catering to different requirements and user
preferences.
OS helps in:
• Creating a file: The operating system provides a graphical user interface or
command-line interface that allows users to create new files. In a graphical
user interface, you can right-click on a folder or the desktop, select "New",
and choose the type of file you want to create, such as a text file or a
Microsoft Word document. Alternatively, you can use a command-line
interface and type commands to create files.
• Editing a file: Once a file has been created, you can edit it using various tools
provided by the operating system, such as a word processor or other
applications.
• Updating a file: The operating system provides the facility to edit the file and
also tracks changes made to the file and updates the file metadata accordingly.
• Deleting a file: The operating system allows you to delete the file you no longer
need. The OS moves the file to the recycle bin or trash folder, which can be
restored if necessary, or permanently deletes it from the storage device.
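The file operations above can be driven programmatically through the interfaces the OS exposes. This is a minimal Python sketch (the filename `notes.txt` and the temporary directory are invented for the example); note that `unlink()` asks the OS to delete the file immediately, bypassing any recycle bin.

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as d:
    f = Path(d) / "notes.txt"

    f.write_text("first line\n")                    # create the file and write to it
    f.write_text(f.read_text() + "second line\n")   # edit/update its contents

    print(f.read_text().splitlines())               # ['first line', 'second line']

    f.unlink()                                      # delete the file permanently
    print(f.exists())                               # False
```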
• Allocates resources such that the system doesn’t run out of resources.
• Offering mechanisms for process synchronization.
• Helps in inter-process communication.
CHARACTERISTICS OF MODERN OS
An operating system is a fundamental piece of software that manages
computer hardware resources and provides services for running applications.
Modern operating systems have evolved significantly over the years and have a
plethora of features that enhance their functionality, security, and usability.
The characteristics of modern operating systems that make them reliable,
efficient, and user-friendly are given below.
Object-Oriented Design: An object-oriented operating system (OOOS) is an
operating system that is designed and built using the principles of object-oriented
programming (OOP).
In an OOOS, the operating system is viewed as a collection of objects, each
representing a different aspect of the system. For example, there may be objects for
processes, files, devices, and users. These objects encapsulate their respective data
and behaviour and interact with each other through well-defined interfaces.
Multitasking and Multithreading: Multitasking is the ability of an operating
system to run multiple programs or processes at the same time, allowing users to
switch between them seamlessly.
In a multitasking environment, the operating system allocates CPU time to
each program or process in small time slices, allowing each program to run for a
short period before switching to the next program. This gives the illusion of
multiple programs running simultaneously, even though only one program is
running at any given moment.
Multithreading, on the other hand, is the ability of a program to perform
multiple tasks or subtasks simultaneously within a single process. In a
multithreaded program, each thread can execute a separate set of instructions
simultaneously, allowing the program to perform multiple tasks concurrently. This
can improve program performance and responsiveness, particularly for applications
that require heavy processing or input/output operations.
The combination of multitasking and multithreading allows modern
operating systems to efficiently manage system resources and run multiple
programs or processes simultaneously. This enables users to perform multiple
tasks at once and allows for better utilization of system resources such as CPU
time and memory.
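The multithreading described above can be sketched with Python's `threading` module (this example is illustrative and not from the original text; the function name `worker` and the workload are invented). Four threads run the same function concurrently within one process and share the `results` list.

```python
import threading

lock = threading.Lock()
results = []

def worker(name, n):
    # Each thread executes this subtask concurrently within the same process.
    total = sum(range(n))
    with lock:                 # threads share `results`, so guard the append
        results.append((name, total))

threads = [threading.Thread(target=worker, args=(f"t{i}", 1000)) for i in range(4)]
for t in threads:
    t.start()                  # all four subtasks now run concurrently
for t in threads:
    t.join()                   # wait for every thread to finish

print(len(results))            # 4
```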
Symmetric Multiprocessing: Symmetric multiprocessing (SMP) is a type of
multiprocessing architecture in which two or more identical processors are
connected to a single shared main memory and run a single operating system. In
an SMP system, each processor can access any memory area and perform any
system task, such as running applications or managing input/output operations.
SMP systems are commonly used in servers, high-performance computing
clusters, and desktop computers with multiple processors. In these systems, the
workload can be divided among the available processors, allowing for improved
performance and faster execution of tasks.
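The workload division that SMP enables can be imitated with Python's `multiprocessing` module, which starts separate worker processes that the OS scheduler may place on different CPUs. This is only a sketch; the function `square` and the worker count are chosen for illustration.

```python
import os
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    # Each worker is a separate process; on an SMP machine the OS may run
    # them on different processors, dividing the workload among the CPUs.
    with Pool(processes=min(4, os.cpu_count() or 1)) as pool:
        print(pool.map(square, range(8)))   # [0, 1, 4, 9, 16, 25, 36, 49]
```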
MICROKERNEL ARCHITECTURE
A microkernel is a type of operating system kernel that is designed to provide
only the most basic services required for an operating system to function, such as
memory management and process scheduling. Other services, such as device
drivers and file systems, are implemented as user-level processes that
communicate with the microkernel via message passing (IPC i.e. inter-process
communication). This design allows the operating system to be more modular and
flexible than traditional monolithic kernels, which implement all operating system
services in kernel space. Microkernel examples: MINIX (mini-UNIX), QNX, etc.
Salient Features
1. Modularity: Microkernels are designed with a modular structure, separating
the core functionalities into small, independent components that are easy to
add, remove, or update without affecting the entire system.
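The message-passing IPC that a microkernel relies on can be sketched with Python's `multiprocessing` pipes (an illustration only; the hypothetical `file_server` stands in for a user-level service such as a file system):

```python
from multiprocessing import Process, Pipe

def file_server(conn):
    # A user-space "server" process: it receives request messages and
    # replies by message passing, never by direct kernel-space calls.
    request = conn.recv()
    if request[0] == "read":
        conn.send(("ok", f"contents of {request[1]}"))
    conn.close()

if __name__ == "__main__":
    client_end, server_end = Pipe()
    p = Process(target=file_server, args=(server_end,))
    p.start()
    client_end.send(("read", "readme.txt"))    # client sends a request message
    print(client_end.recv())                   # ('ok', 'contents of readme.txt')
    p.join()
```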
PROCESS MANAGEMENT
A process is a program in execution. For example, when we write a program
in C or C++ and compile it, the compiler creates binary code. The original code and
binary code are both programs. When we run the binary code, it becomes a
process. A process is an 'active' entity, whereas a program is a 'passive' entity. A
single program can create many processes when run multiple times; for example,
when we open a .exe or binary file multiple times, multiple instances begin
(multiple processes are created).
• Process State: A process can be in one of several states, such as ready,
running, waiting, and suspended.
• Process Control Block: The PCB is a data structure that contains information
related to a process. These blocks are stored in the process table.
• Resources: Processes request various types of resources such as files,
input/output devices, and network connections. The OS manages the allocation
of these resources.
• Priority: Each process has a scheduling priority. Higher-priority processes are
given preferential treatment and they receive more CPU time compared to lower-
priority processes.
• Execution Context: Each process has its execution context, which includes the
address of the next instruction to be executed, stack pointer, and register
values.
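The attributes listed above can be gathered into a toy Process Control Block. This Python sketch is a simplified model (real PCBs hold many more fields, such as open-file lists and memory-management information; the field names here are chosen for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Simplified subset of what an OS stores per process.
    pid: int
    state: str = "new"            # e.g. new / ready / running / waiting
    priority: int = 0
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)

# PCBs are stored in the process table, keyed by process ID.
process_table = {p.pid: p for p in (PCB(1, priority=5), PCB(2, priority=1))}
process_table[1].state = "ready"
print(process_table[1].state, process_table[1].priority)   # ready 5
```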
PROCESS STATES
These are the states in which a process might go during its execution. Let us
know more about them:
• Suspended: This state indicates that the process is ready but has not yet
been placed in the ready queue for execution.
OPERATIONS ON PROCESS
In an operating system, processes represent the execution of individual tasks
or programs. Process operations involve the creation, scheduling, execution, and
termination of processes. The OS allocates necessary resources, such as CPU time,
memory, and I/O devices, to ensure the seamless execution of processes. Process
operations in OS encompass vital aspects of process lifecycle management,
optimizing resource allocation, and facilitating concurrent and responsive
computing environments.
5. Initialization: Any initializations required for the process are performed at this
stage. This might involve initializing variables, setting default values, and
preparing the process for execution.
6. Process State: After the necessary setup, the new process is typically in a
"ready" or "waiting" state, indicating that it is prepared for execution but hasn't
started running yet.
1. I/O Operations: When a process requests data from an I/O device (such as
reading data from a disk or receiving input from a keyboard), it may be blocked
until the requested data is ready.
2. Synchronization: Processes often wait for synchronization primitives like
semaphores or mutexes to achieve mutual exclusion or coordinate their
activities.
3. Inter-Process Communication: Processes waiting for messages or data from
other processes through mechanisms like message queues or pipes may enter a
blocked state.
4. Resource Allocation: Processes requesting system resources, such as memory
or network connections, may be blocked until the resources are allocated.
the currently running process is pre-empted, and the OS selects the next
process to run.
3. Interrupt-Driven Preemption: Hardware or software interrupts can trigger
pre-emption. For example, an interrupt generated by a hardware device or a
system call request from a process may cause the OS to pre-empt the current
process and handle the interrupt.
4. Fairness and Responsiveness: Preemption ensures that no process is unfairly
blocked from accessing CPU time. It guarantees that even low-priority
processes get a chance to execute, preventing starvation.
5. Real-Time Systems: Preemption is crucial in real-time systems, where tasks
have strict timing requirements. If a higher-priority real-time task becomes
ready to run, the OS must pre-empt lower-priority tasks to ensure timely
execution.
CONCURRENT PROCESS
Concurrency in operating systems refers to the ability of an OS to manage
and execute multiple tasks or processes simultaneously. It allows multiple tasks to
make progress during overlapping time periods.
• Race Conditions: They occur when multiple threads or processes access shared
resources simultaneously without proper synchronization. In the absence of
synchronization mechanisms, race conditions can lead to unpredictable
behaviour and data corruption. This can result in data inconsistencies,
application crashes, or even security vulnerabilities if sensitive data is involved.
• Deadlocks: A deadlock arises when two or more processes or threads become
unable to progress as they are mutually waiting for resources that are currently
held by each other. This situation can bring the entire system to a standstill,
causing disruptions and frustration for users.
• Priority Inversion: Priority inversion occurs when a lower-priority task
temporarily holds a resource that a higher-priority task needs. This can lead to
delays in the execution of high-priority tasks, reducing system efficiency and
responsiveness.
• Resource Starvation: Resource starvation occurs when some processes are
unable to obtain the resources they need, leading to poor performance and
responsiveness for those processes. This can happen if the OS does not manage
resource allocation effectively or if certain processes monopolize resources.
PROCESS THREADS
In computers, a single process might have multiple functionalities running in
parallel, where each functionality can be considered a thread. Each thread has
its own set of registers and stack space. There can be multiple threads in a single
process, having the same or different functionality. Threads in operating systems
are also termed lightweight processes.
Thread is a sequential flow of tasks within a process. Threads in an
operating system can be of the same or different types. Threads are used to
increase the performance of the applications.
Each thread has its own program counter, stack, and set of registers.
However, the threads of a single process might share the same code and
data/file. Threads are also termed lightweight processes as they share common
resources.
E.g.: While playing a movie on a device the audio and video are controlled by
different threads in the background.
Types of Thread
User Level Thread: User-level threads are implemented and managed by the
user and the kernel is not aware of it. User-level threads are implemented using
user-level libraries and the OS does not recognize these threads. User-level threads
are faster to create and manage compared to kernel-level threads. If one user-level
thread performs a blocking operation then the entire process gets blocked.
E.g.: POSIX threads, Java threads, etc.
A user-level thread is a type of thread that is not created using system calls;
the kernel plays no part in managing user-level threads, and they can be easily
implemented by the user. When all of a process's user-level threads are mapped
onto a single kernel-level entity, the kernel manages them as one process. Let's
look at the advantages and disadvantages of user-level threads.
Advantages of User-Level Threads
• Implementation of the User-Level Thread is easier than Kernel Level Thread.
• Context Switch Time is less in User Level Thread.
• User-Level Thread is more efficient than Kernel-Level Thread.
• Because of the presence of only Program Counter, Register Set, and Stack
Space, it has a simple representation.
Disadvantages of User-Level Threads
• There is a lack of coordination between Thread and Kernel.
• In case of a page fault, the whole process can be blocked.
The above diagram shows the functioning of user-level threads in user space
and kernel-level threads in kernel space.
A kernel-level thread is a thread that the operating system recognizes and
manages directly. The kernel maintains a thread table that it uses to keep track of
all threads in the system. Kernel-level threads have somewhat longer context-
switching times, since the kernel is involved in their management.
Advantages of Threading
• Threads improve the overall performance of a program.
• Threads increase the responsiveness of the program
• Context Switching time in threads is faster.
• Threads share the same memory and resources within a process.
• Communication is faster in threads.
• Threads provide concurrency within a process.
• Enhanced throughput of the system.
• Since different threads can run parallelly, threading enables the utilization
of the multiprocessor architecture to a greater extent and increases
efficiency.
1. Creation: The first stage in the lifecycle of a thread is its creation. In most
programming languages and environments, threads are created by
instantiating a thread object or invoking a thread creation function. During
creation, you specify the code or function that the thread will execute.
2. Ready/Runnable: After a thread is created, it enters the "ready" or
"runnable" state. In this state, the thread is ready to run, but the operating
system scheduler has not yet selected it to execute on the CPU. Threads in the
ready state are typically waiting for the scheduler to allocate CPU time to them.
3. Running: When the scheduler selects a thread from the pool of ready threads
and allocates CPU time to it, the thread enters the "running" state. In this
state, the thread's code is being executed on the CPU. A running thread will
continue to execute until it either voluntarily yields the CPU (e.g., through sleep
or wait operations) or is pre-empted by a higher-priority thread.
4. Blocked/Waiting: Threads can enter the "blocked" or "waiting" state when
they are waiting for some event to occur, such as I/O operations,
synchronization primitives (e.g., locks or semaphores), or signals from other
threads. When a thread is blocked, it is not eligible to run until the event it is
waiting for occurs.
5. Termination: Threads can terminate either voluntarily or involuntarily.
Voluntary termination occurs when a thread completes its execution or
explicitly calls a termination function. Involuntary termination can happen due
to errors (e.g., segmentation faults) or signals received from the operating
system.
6. Dead: Once a thread has terminated, it enters the "dead" state. In this state,
the thread's resources (such as memory and handles) are deallocated, and it no
longer exists as an active entity in the system. Dead threads cannot be
restarted or resumed.
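Some of the lifecycle stages above can be observed directly from Python's `threading` API (a sketch; the sleeping `task` merely stands in for a thread doing blocking work):

```python
import threading
import time

def task():
    time.sleep(0.1)        # a blocked/waiting thread (sleeping)

t = threading.Thread(target=task)
print(t.is_alive())        # False -- created, but not yet started
t.start()
print(t.is_alive())        # True  -- ready, running, or blocked
t.join()                   # wait for the thread to terminate
print(t.is_alive())        # False -- dead; a thread cannot be restarted
```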
MULTITHREADING
Multithreading means dividing a process (a program that is being executed)
into multiple independent units of execution known as threads; a process can thus
be viewed as a collection of threads. A thread is a small, lightweight unit of
execution that resides inside a process.
For example, client 1, client 2, and client 3 in the above example are
accessing the web server without having to wait for other tasks to be completed.
The threads here are divided into user-level and kernel-level threads. A
user-level thread is handled independently, without any support from the kernel,
whereas kernel-level threads are managed directly by the operating system.
Examples of Multithreading Operating Systems: Multithreading is widely
used by applications. Some of the applications are processing transactions
like online bank transfers, recharge, etc.
For instance, in a banking system, many users perform day-to-day
activities using bank servers, such as transfers, payments, deposits, opening a new
account, etc. All these activities are performed instantly, without one user having
to wait for another to finish.
In this, all the activities get executed simultaneously as and when they arise.
This is where multithreading comes into the picture, wherein several threads
perform different activities without interfering with others.
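The server scenario above can be sketched with a thread pool (illustrative only; `handle_request` and the client IDs are invented). Each client request is served on its own thread, so no client waits for another to finish.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(client_id):
    # Hypothetical request handler: in a real server this might perform a
    # transfer, payment, or deposit on behalf of the client.
    return f"client {client_id}: done"

# Three worker threads serve the three clients concurrently.
with ThreadPoolExecutor(max_workers=3) as pool:
    replies = list(pool.map(handle_request, [1, 2, 3]))

print(replies)   # ['client 1: done', 'client 2: done', 'client 3: done']
```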
Advantages:
• Multithreading allows the CPU to execute multiple tasks simultaneously, which
can boost performance.
• Multithreading reduces the amount of time that is spent waiting for a task to
finish.
• Multithreading can help to improve the scalability of a program.
• Interactive applications may allow a program to continue running even if part of
it is blocked or is performing a lengthy operation, thereby increasing
responsiveness.
Disadvantages of multi-threading
• Multithreading can be complex and challenging to implement.
• Multithreading can increase the complexity of a program.
• Multithreading can be error-prone.
• Programmers must carefully design their code to utilize multithreading
capabilities without introducing unwanted delays or fragmentation into their
programs’ execution.
Process Vs. Thread: Process simply means any program in execution while
the thread is a segment of a process. The main differences between process and
thread are mentioned below:
• Processes use more resources and hence are termed heavyweight processes;
threads share resources and hence are termed lightweight processes.
• Creation and termination times of processes are slower; for threads they are
faster.
• Processes have their own code and data/files; threads share code and
data/files within a process.
• Communication between processes is slower; communication between
threads is faster.
• Context switching in processes is slower; in threads it is faster.
• Processes are independent of each other; threads are interdependent (i.e.
they can read, write, or change another thread's data).
• E.g.: opening two different browsers creates two processes; opening two tabs
in the same browser creates two threads.
MICROKERNELS
A kernel is the most important part of an operating system. It manages the
resources of the system and also acts as an interface between hardware and the
computer application. A microkernel is one classification of kernel: rather than
managing all system resources itself, it provides only a minimal core of services.
In a microkernel, user services and kernel services are implemented in separate
address spaces.
In the user address space, all the user services are kept while in the kernel
address space, all the kernel services are available. Thus, by doing this, the size of
the kernel as well as the size of an operating system is reduced.
Communication between client programs or applications and the services
running in the user address space is established by message passing, which
somewhat reduces the speed of the microkernel. It provides only minimal
memory-management and process-management services.
Since user services and kernel services are isolated from each other, the
operating system remains unaffected if any user service fails, because such a
failure does not affect the kernel services. The system can be extended by adding
any new service to user space, with no modification needed in kernel space. It is
secure, portable, and reliable.
Scheduling of CPU: This refers to deciding which process will be executed next.
All processes reside in a queue and are executed one at a time. Every process has
a priority level, and the process with the highest priority is executed first.
Scheduling helps maximize CPU utilization by using resources efficiently, and it
minimizes waiting time, response time, and turnaround time.
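A priority-ordered ready queue of this kind can be sketched with Python's `heapq` (illustrative; the process names and priorities are invented). `heapq` is a min-heap, so priorities are negated to pop the highest-priority process first.

```python
import heapq

# Build the ready queue: (negated priority, process id).
ready_queue = []
for pid, priority in [("P1", 2), ("P2", 5), ("P3", 1)]:
    heapq.heappush(ready_queue, (-priority, pid))

# Dispatch processes one at a time, highest priority first.
order = [heapq.heappop(ready_queue)[1] for _ in range(len(ready_queue))]
print(order)   # ['P2', 'P1', 'P3']
```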
Memory management: This is the process of allocating memory space to
processes. If a process is bigger than the actual physical memory, virtual memory
is created: the process is divided into portions, which are stored and brought into
memory as needed. All processes wait in memory before CPU execution.
• Zircon
CPU SCHEDULING
CPU scheduling is a process that allows one process to use the CPU while
the execution of another process is on hold (in a waiting state) due to the
unavailability of any resource like I/O etc, thereby making full use of the CPU. CPU
scheduling aims to make the system efficient, fast, and fair.
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed. The selection process is carried out by
the short-term scheduler (or CPU scheduler). The scheduler selects from among the
processes in memory that are ready to execute and allocates the CPU to one of
them. There are essentially four conditions under which CPU scheduling decisions
are taken:
1. If a process is making the switch between the running state to the waiting state
(could be for an I/O request, or invocation of wait () for terminating one of its
child processes)
2. If a process is making the switch from the running state to the ready state (on
the occurrence of an interrupt, for example)
3. If a process is making the switch between waiting and ready state (e.g. when
its I/O request completes)
4. If a process terminates upon completion of execution.
So, in the case of conditions 1 and 4, the CPU does not have a choice of
scheduling, if a process exists in the ready queue the CPU's response to this would
be to select it for execution. In cases 2 and 3, the CPU has a choice of selecting a
particular process for executing next. There are mainly two types of CPU
scheduling:
For Example: In the image above, we can see that all the processes were executed
in the order in which they appeared, and none of the processes were interrupted by
another, making this a non-preemptive, FCFS (First Come, First Served) CPU
scheduling algorithm. P2 was the first process to arrive (arrived at time = 0) and
was hence executed first. Let's ignore the third column for a moment, we'll get to
that soon. Process P3 arrived next (at time = 1) and was executed after the previous
process - P2 was done executing, and so on.
Some examples of non-preemptive scheduling algorithms are - Shortest Job
First (SJF, non-preemptive), and Priority scheduling (non-preemptive).
SCHEDULERS
A scheduler is a software that helps schedule the processes in an operating
system. It helps to keep all computer resources busy and allows multiple users to
share system resources effectively. Let’s go through different schedulers in an
operating system.
1. Long-term schedulers: The processes that are created are in the NEW state.
The programs are admitted to the RAM for execution. So, before execution, the
processes are put in the READY queue. So, do they get into the ready queue
themselves? Here comes the role of long-term schedulers (LTS). It is also called
a job scheduler. These schedulers select processes from secondary memory and
put them into the ready queue. LTS runs less frequently. The main aim of LTS
is to maintain the degree of multiprogramming. Multiprogramming means
executing multiple programs by a single processor. But not all processes
simultaneously. It means if one process is not executing for some reason, then
another process will get a chance to be executed. An optimal degree of
multiprogramming means:
The average arrival rate of processes entering the system = the average
departure rate of processes completing execution and leaving the system.
SCHEDULING METHODOLOGY
In different environments, different scheduling methodologies are needed.
This situation arises because different application areas (and different kinds of
operating systems) have different goals. In other words, what the scheduler should
optimize for is not the same in all systems. Three environments are
1. Batch.
2. Interactive.
3. Real time.
Batch systems are still in widespread use in the business world for doing
payroll, inventory, accounts receivable, accounts payable, interest calculation (at
banks), claims processing (at insurance companies), and other periodic tasks. In
batch systems, there are no users impatiently waiting at their terminals for a quick
response to a short request. Consequently, non-preemptive algorithms, or
preemptive algorithms with long periods for each process, are often acceptable.
This approach reduces process switches and thus improves performance. The
batch algorithms are fairly general and often applicable to other situations as well,
which makes them worth studying, even for people not involved in corporate
mainframe computing.
In an environment with interactive users, preemption is essential to keep
one process from hogging the CPU and denying service to the others. Even if no
process intentionally ran forever, one process might shut out all the others
indefinitely due to a program bug. Preemption is needed to prevent this behaviour.
Servers also fall into this category, since they normally serve multiple (remote)
users, all of whom are in a big hurry.
In systems with real-time constraints, preemption is, oddly enough,
sometimes not needed because the processes know that they may not run for long
periods and usually do their work and block quickly. The difference with interactive
systems is that real-time systems run only programs that are intended to further
the application at hand. Interactive systems are general purpose and may run
arbitrary programs that are not cooperative or even malicious.
FCFS
The full form of FCFS Scheduling is First Come First Serve Scheduling.
FCFS Scheduling algorithm automatically executes the queued processes
and requests in the order of their arrival. It allocates the job that first arrived in the
queue to the CPU, then allocates the second one, and so on. FCFS is the simplest
and easiest CPU scheduling algorithm, managed with a FIFO queue. FIFO stands
for First In First Out. The FCFS scheduling algorithm places the arriving
processes/jobs at the very end of the queue. So, the processes that request the
CPU first get the allocation from the CPU first. As any process enters the FIFO
queue, its Process Control Block (PCB) gets linked with the queue’s tail. As the CPU
becomes free, the process at the head of the queue gets assigned to it. If the CPU
starts working on a long job, the many shorter jobs behind it have to wait. The
FCFS scheduling algorithm is used in most batch operating systems.
Examples of FCFS scheduling: Buying a movie ticket at the counter. This
algorithm serves people in the queue order. The first person in line buys a ticket,
then the next person, and so on. This process will continue until the last person in
line has purchased a ticket. This method mimics the CPU process.
Advantages of FCFS:
• Simplicity. Orders are fulfilled in order, simplifying scheduling and processing.
Orders are simply performed in chronological sequence.
• User friendly. Order scheduling and code writing are straightforward for team
members. Easy scheduling saves time and labour. It’s a foolproof technique that
also reduces errors.
• Easy to implement. FCFS's simplicity makes it easier to integrate into existing
systems. FCFS order scheduling can be deployed quickly and inexpensively into
any scheduling system your company uses. FCFS can be used soon after its
implementation.
Limitation of FCFS:
• Long waiting time. FCFS is non-preemptive, so orders are processed strictly in
  sequence: an order can start processing only once the previous order has been
  completed. A process allocated the CPU never releases it until it finishes. If
  the first order has a long burst time, all orders behind it must wait for
  fulfilment, regardless of their own burst times.
• Lower device usage. Simple FCFS is inefficient. Longer wait periods accompany
this. If the CPU is busy processing a long order, all other orders lie idle, causing
a backup. FCFS is particularly wasteful because the CPU can only process one
order at a time.
• CPU over I/O. FCFS favours CPU-bound work over I/O: the algorithm is more CPU-
  friendly than I/O-friendly, which may discourage users with I/O-heavy workloads.
Example:
Process   Burst Time
P1        24
P2        3
P3        3
If the processes arrive in the order P1, P2, and P3, and are served in FCFS
order, we get the result shown in the following Gantt chart:
P1 P2 P3
0 24 27 30
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process
P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 +
27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, and P1,
however, the results will be as shown in the following Gantt chart:
P2 P3 P1
0 3 6 30
The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This
reduction is substantial. Thus, the average waiting time under an FCFS policy is
generally not minimal and may vary substantially if the processes' CPU burst times
vary greatly.
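The waiting-time arithmetic above can be sketched in a few lines of Python (a minimal illustration, not from the text; all processes are assumed to arrive at time 0):

```python
def fcfs_waiting_times(bursts):
    """Return waiting times for processes served strictly in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # each process waits for every earlier burst
        elapsed += burst
    return waits

# Order P1, P2, P3 from the example: bursts 24, 3, 3
waits = fcfs_waiting_times([24, 3, 3])
print(waits, sum(waits) / len(waits))   # [0, 24, 27] 17.0
```

Running it with the order P2, P3, P1 instead gives waiting times [0, 3, 6] and the 3-millisecond average shown in the second Gantt chart.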
SJF
Shortest Job First (SJF) algorithm is also known as Shortest Job Next
(SJN) or Shortest Process Next (SPN). It is a CPU processes scheduling algorithm
that sorts and executes the process with the smallest execution time first, and then
the subsequent processes with the increased execution time. Both preemptive and
non-preemptive scheduling strategies are possible in the SJF scheduling algorithm.
In SJF, there is a significant amount of reduction in the average waiting time for
other processes that are waiting to be executed.
However, it can be quite challenging to estimate the burst time required for a
process, making it difficult to apply this technique to the operating system
scheduling process.
The burst time for a process can only be approximated or predicted. Our
approximations must be accurate to get the most out of the SJF algorithm.
Numerous methods can be used to predict a process's CPU burst time.
There are two types of SJF methods:
• Non-Preemptive SJF
• Preemptive SJF
In non-preemptive scheduling, once the CPU cycle is allocated to the
process, the process holds it till it reaches a waiting state or is terminated.
In Preemptive SJF Scheduling, jobs are put into the ready queue as they
come. A process with the shortest burst time begins execution. If a process with
even a shorter burst time arrives, the current process is removed or preempted
from execution, and the shorter job is allocated a CPU cycle.
Advantages of SJF: These are some of the Advantages of the SJF algorithm:
• Shortest Job First (SJF) has a shorter average waiting time as compared to the
First Come First Serve (FCFS) algorithm.
• SJF can be applied to long-term scheduling.
• SJF is ideal for jobs that run in batches and whose run times are known.
• SJF is probably the best concerning the average turnaround time of a process.
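Non-preemptive SJF is easy to sketch when every process arrives at time 0: sort by burst time and accumulate. The burst times below are hypothetical, chosen only for illustration:

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF, all processes arriving at time 0.
    bursts: {name: burst_time}; returns {name: waiting_time}."""
    waits, elapsed = {}, 0
    # run jobs in order of increasing burst time
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = elapsed
        elapsed += burst
    return waits

waits = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits)   # P4 runs first (burst 3), then P1, P3, and finally P2
```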
RR
The Round-robin scheduling algorithm is a preemptive variant of first-come,
first-served CPU scheduling in which each process in the ready state gets the CPU
for a fixed time cyclically (turn by turn). It is one of the oldest and simplest
scheduling algorithms and is mainly used for multitasking.
The round-robin scheduling algorithm is one of the CPU scheduling
algorithms in which every process gets a fixed amount of time to execute.
In this algorithm, every process gets executed cyclically. This means that
processes that have their burst time remaining after the expiration of the time
quantum are sent back to the ready state and wait for their next turn to complete
the execution until it terminates. This processing is done in FIFO order, suggesting
that processes are executed on a first-come, first-served basis.
1. All processes that are ready to execute are placed in the ready queue, and a
fixed time quantum is chosen.
2. At first, the burst time of every process is compared to the time quantum of the
CPU.
3. If the burst time of the process is less than or equal to the time quantum in the
round-robin scheduling algorithm, the process is executed to its burst time.
4. If the burst time of the process is greater than the time quantum, the process is
executed up to the time quantum (TQ).
5. When the time quantum expires, it checks if the process is executed completely
or not.
6. On completion, the process terminates. Otherwise, it goes back again to
the ready state.
Advantages
1. This round-robin algorithm offers starvation-free execution of processes.
2. Each process gets equal priority and fair allocation of CPU.
3. Round Robin scheduling algorithm enables the Context switching method to
save the states of preempted processes.
4. It is easily implementable on the system because round-robin scheduling in
OS doesn’t depend upon burst time.
Disadvantages
1. The average waiting and response times can be high, especially when burst times
are long relative to the time quantum.
2. Lower time quantum results in higher context switching.
3. We cannot set any special priority for the processes.
Example: Consider the following set of processes that arrive at time 0, with the
length of the CPU burst given in milliseconds:
Process   Burst Time
P1        24
P2        3
P3        3
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
If the time quantum (here 4 milliseconds) expires before a process's CPU burst
completes, that process is pre-empted and is put back in the ready queue. The RR
scheduling algorithm is thus pre-emptive.
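The steps above can be sketched as a small simulation (illustrative only; all processes are assumed to arrive at time 0, as in the example):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin sketch: returns completion time per process."""
    ready = deque(bursts.items())          # FIFO ready queue
    time, done = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)      # run for one quantum at most
        time += run
        if remaining > run:
            ready.append((name, remaining - run))  # pre-empted: back of queue
        else:
            done[name] = time              # finished within this slice
    return done

done = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(done)   # {'P2': 7, 'P3': 10, 'P1': 30}, matching the Gantt chart
```

Subtracting each burst time from its completion time gives waiting times of 6, 4, and 7 ms for P1, P2, and P3, an average of about 5.67 ms.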
PRIORITY SCHEDULING
Priority scheduling in OS is the scheduling algorithm that schedules
processes according to the priority assigned to each process. Higher-priority
processes are executed before lower-priority processes.
In priority scheduling in OS, processes are executed based on their priority.
The jobs/processes with higher priority are executed first. Naturally, you might
want to know how the priority of processes is decided. Priority of processes depends
on some factors such as:
• Time limit
• Memory requirements of the process
• Ratio of average I/O to average CPU burst time
There can be more factors based on which the priority of a process/job is
determined. This priority is assigned to the processes by the scheduler.
These priorities of processes are represented as simple integers in a fixed range
such as 0 to 7, or maybe 0 to 4095. These numbers depend on the type of system.
Advantages:
• High-priority processes do not have to wait for their chance to be executed due
to the current running process.
• We can define the relative importance/priority of processes.
• It is useful for applications in which the time and resource requirements of
  processes fluctuate.
Disadvantages:
• Since high-priority processes are always executed first, low-priority processes
  can suffer starvation: the phenomenon in which a process gets indefinitely
  postponed because the resources it requires are never allocated to it, as other
  processes are always executed before it.
• If the system crashes, the low-priority processes that are still waiting in RAM
  are lost before they ever run.
Example: As an example, consider the following set of processes, assumed to have
arrived at time 0, in the order P1, P2, • •, P5, with the length of the CPU burst given
in milliseconds:
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
Using priority scheduling, we would schedule these processes according to
the following Gantt chart:
P2 P5 P1 P3 P4
0 1 6 16 18 19
The average waiting time is 8.2 milliseconds.
Priorities can be defined either internally or externally. Internally defined
priorities use some measurable quantity or quantities to compute the priority of a
process. For example, time limits, memory requirements, the number of open files,
and the ratio of average I/O burst to average CPU burst have been used in
computing priorities. External priorities are set by criteria outside the operating
system, such as the importance of the process, the type and amount of funds being
paid for computer use, the department sponsoring the work, and other, often
political, factors.
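A non-preemptive priority scheduler over the example's table can be sketched as follows (illustrative; lower number means higher priority, and all processes arrive at time 0):

```python
def priority_schedule(procs):
    """procs: {name: (burst, priority)}; returns {name: waiting_time}."""
    waits, elapsed = {}, 0
    # dispatch in order of priority number (lower = more urgent)
    for name, (burst, _prio) in sorted(procs.items(), key=lambda kv: kv[1][1]):
        waits[name] = elapsed
        elapsed += burst
    return waits

waits = priority_schedule({"P1": (10, 3), "P2": (1, 1), "P3": (2, 4),
                           "P4": (1, 5), "P5": (5, 2)})
print(sum(waits.values()) / len(waits))   # 8.2, as computed above
```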
SRTF
The full form of SRTF is Shortest Remaining Time First; it is the pre-emptive
version of the SJF algorithm. After all the processes are available in the ready
queue, no further pre-emption is done and the algorithm works the same as SJF
scheduling. When a process is removed from execution and the next process is
scheduled, the context of the removed process is saved in its Process Control
Block (PCB). The PCB is accessed on the next execution of this process.
Advantages of SRTF: The main advantage of the SRTF algorithm is that it
processes jobs faster than the SJF algorithm, provided its overhead cost of
context switching is not counted.
Disadvantages of SRTF: In SRTF, context switching happens far more often than in
SJN, consuming more of the CPU's valuable time. This consumed time adds to the
overall processing time, which diminishes the algorithm's fast-processing
advantage.
Example
At the 0th unit of the CPU, there is only one process which is P1, so P1 gets
executed for the 1-time unit. At the 1st unit of the CPU, Process P2 arrives. Now,
the P1 needs 6 more units to be executed, and the P2 needs only 3 units. So, P2 is
executed first by pre-empting P1. At the 3rd unit of time, the process P3 arrives,
and the burst time of P3 is 4 units which is more than the completion time of P2
which is 1 unit, so P2 continues its execution. Now after the completion of P2, the
burst time of P3 is 4 units which means it needs only 4 units for completion while
P1 needs 6 units for completion. So, this algorithm picks P3 above P1 due to the
reason that the completion time of P3 is less than that of P1. P3 gets completed at
time unit 8, there are no new processes arrived. So again, P1 is sent for execution,
and it gets completed at the 14th unit.
The arrival time and burst time for the three processes P1, P2, and P3 used in
this walkthrough are:
Process   Arrival Time   Burst Time
P1        0              7
P2        1              3
P3        3              4
Let us calculate the turnaround time, completion time, and waiting time.
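The walkthrough above can be reproduced with a unit-time SRTF simulation (a sketch, not a production scheduler; arrival and burst values are taken from the walkthrough):

```python
def srtf(procs):
    """Pre-emptive SJF (SRTF). procs: {name: (arrival, burst)}.
    Simulates one time unit at a time; returns completion times."""
    arrival = {n: a for n, (a, _b) in procs.items()}
    remaining = {n: b for n, (_a, b) in procs.items()}
    time, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                      # CPU idles until the next arrival
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])  # shortest remaining
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            done[current] = time
    return done

print(srtf({"P1": (0, 7), "P2": (1, 3), "P3": (3, 4)}))
# {'P2': 4, 'P3': 8, 'P1': 14}
```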
FRAGMENTATION
Fragmentation is an unwanted issue that occurs in an operating system when
processes are repeatedly loaded into and unloaded from memory, causing the free
memory space to become broken into small, scattered pieces. As a result, a process
may not be assignable to any memory block because each free block is too small.
P1 10 units 15 units
P2 20 units 25 units
P3 30 units 35 units
External fragmentation occurs when memory blocks, once allocated, are freed up,
leading to 'holes' of unused memory spread across the system.
The issue is that these 'holes' or blocks may not be large enough to satisfy
subsequent allocation requests, despite collectively having sufficient space.
Consequently, the system is unable to use this memory effectively, leading to
wasted resources and decreased efficiency. Consider this simple representation of
external fragmentation:
Block 1   Used
Block 2   Free
Block 3   Used
Block 4   Free
Block 5   Used
Here, although there is free memory (Blocks 2 and 4), it is not contiguous,
resulting in external fragmentation.
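The failure mode can be demonstrated with a tiny first-fit allocator (illustrative; the block sizes are hypothetical):

```python
def first_fit(free_blocks, request):
    """Return the index of the first free block that can hold `request`
    units, or None if no single block is large enough."""
    for i, size in enumerate(free_blocks):
        if size >= request:
            return i
    return None

free = [50, 80]                 # two non-contiguous holes: 130 units in total
print(first_fit(free, 100))     # None: enough memory overall, but no
                                # single hole can satisfy the request
```

This is external fragmentation in miniature: 130 units are free, yet a 100-unit request fails because the free space is not contiguous.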
REVIEW QUESTIONS
1. What is an Operating System? Explain the different functions of the operating
system.
2. Explain the following terms: (1) Process (2) Creation and termination
operation of process.
3. What is multithreading? Explain with a suitable example.
4. What is scheduling? Explain the SJF (Shortest-Job-First) algorithm.
5. What is a process thread? Explain.
6. What is a thread? Explain the concept of multithreading with a suitable
example.
7. Explain schedulers and types of schedulers in detail.
8. Describe the Round Robin CPU scheduling algorithm.
9. What is Micro Kernel? Explain its architecture and benefits.
10. Explain the structure of the Operating System.