OS Unit - I

Operating System

CHAPTER 1
BASIC CONCEPTS

INTRODUCTION
The operating system is a program that acts as an intermediary between
hardware and software. An operating system (OS) exploits the hardware resources
of one or more processors to provide a set of services to system users, and it
serves as an interface between application programs and the various computer
hardware and software components. The operating system is designed to control
all of the computer's resources and activities; it is an integrated collection of
specialized programs that manage all of the computer's functions.

Main Layers in an Operating System: The software that acts as an interface
between various computer parts is referred to as layers in an operating system.
A clear benefit of layering is evident in an operating system: each layer may be
designed independently and interacted with as needed. The following five-layer
model is often used in an Operating System:
• Kernel: It links a computer's hardware with application software. As a result, it
controls how applications in the RAM access memory. Additionally, it decides
when each program will execute and allots processing time and memory to each
application.
• Memory Management: It is in charge of sharing the computer's physical
memory between processes and of managing programs that need more memory
than is physically available.


• Input/Output: This layer manages all physical interactions with other devices,
including the keyboard, printer, display, and disc drive. The I/O layer receives a
request if a higher layer needs access to a device.
• File Management System: It also goes by the name "file system." It is in
charge of planning and overseeing the data storage on long-term storage devices
such as hard drives and floppy disk drives.
• User Interface: It is referred to as the area where human and machine
interaction takes place. There are two different types of user interfaces: the icon-
based Graphical User Interface (GUI), which is used in Windows and Apple Mac
OS, and the text-based Command Line Interface (CLI), which is used in MS-DOS
and LINUX.

Different Types of Operating Systems


• Microsoft Windows: An operating system developed by Microsoft, available in
32-bit and 64-bit variants. It offers a Graphical User Interface (GUI), virtual
memory management, multitasking features, and compatibility with a wide range
of peripheral devices.
• UNIX: UNIX is a capable and widely used multi-user, multitasking operating
system. It is a group of programs that serve as the user's interface with
the computer. Dennis Ritchie later contributed to Ken Thompson's original UNIX
code. Unix systems are built around a core kernel that manages the system and
the other processes. The following are the major features of UNIX:
• A hierarchy of files
• Independence from devices
• Multi-tasking
• Multi-user functionality
• Tools and utilities for creating tools
• Portability
• Integrated Networking
• Linux: One of the most widely used variations of Unix OS is the LINUX
Operating System. It is an open-source operating system that is freely available
online and whose source code is editable by anybody who uses it. Its
functionality list resembles Unix in many ways.

STRUCTURE OF THE OPERATING SYSTEM


The operating system structure refers to how the various components of an
operating system are organized and interconnected. There are several different
approaches to operating system structure, each with its advantages and
disadvantages.
An operating system has a complex structure, so we need a well-defined
structure to assist us in applying it to our unique requirements. Just as we break
down a big problem into smaller, easier-to-solve subproblems, designing an
operating system in parts is a simpler approach. Each part is an
Operating System component. The approach of interconnecting and integrating
multiple operating system components into the kernel can be described as an
operating system structure. As mentioned below, various sorts of structures are
used to implement operating systems.
1. Simple Structure
2. Monolithic Structure
3. Layered Approach Structure
4. Micro-kernel Structure

Simple Structure: It is the simplest Operating System structure and is not
well defined; it can only be used for small and limited systems. In this structure,
the interfaces and levels of functionality are not well separated; hence application
programs can access I/O routines directly, which can cause unauthorized access
to I/O routines. This structure is implemented in the MS-DOS operating system:
the MS-DOS operating system is made up of various layers, each with its own set
of functions. These layers are the application programs, system programs,
MS-DOS device drivers, and ROM BIOS device drivers.

Advantages of Simple Structure


• It is easy to develop because of the limited number of interfaces and layers.
• Offers good performance due to fewer layers between hardware and
applications.
• Minimal overhead, suitable for resource-constrained environments.
Disadvantages of Simple Structure
• If one user program fails, the entire operating system crashes.
• Limited functionality.
• Abstraction or data hiding is not present as layers are connected and
communicate.
• Layers can access the processes going in the Operating System, which can
lead to data modification and can cause the Operating System to crash.

Monolithic Structure: In a monolithic operating system, the kernel acts as a
manager, handling everything: file management, memory management, device
management, and the operational processes of the Operating System.
The kernel is the heart of a computer operating system (OS). Kernel delivers
basic services to all other elements of the System. It serves as the primary interface
between the Operating System and the hardware.
In monolithic systems, the kernel can directly access all the resources of the
operating system, including the physical hardware (e.g., keyboard, mouse). The
monolithic kernel is another name for the monolithic operating system. Batch
processing and time-sharing maximize the usability of a processor through
multiprogramming. The monolithic kernel runs directly on top of the hardware
and controls all hardware components. This is an older design that was used, for
example, in banks to accomplish activities such as batch processing and
time-sharing, which enables many people at various terminals to access the
Operating System.

Advantages of Monolithic structure:


• It is simple to design and implement because all operations are managed
by kernel only, and layering is not needed.
• As services such as memory management, file management, and process
scheduling are implemented in the same address space, execution of the
monolithic kernel is relatively fast compared to other structures: services can
call each other directly within one address space, with no message-passing
overhead.
• Simple design and implementation.

Disadvantages of Monolithic structure:


• If any service in the monolithic kernel fails, the entire system fails, because
the services share one address space and affect each other.
• Lack of modularity makes maintenance and extensions difficult.
• It is not flexible; to introduce a new service, the whole kernel typically has to
be modified and rebuilt.

Micro-kernel Structure: The micro-kernel structure designs the Operating
System by removing all non-essential components from the kernel. These
non-essential components are implemented as system and user-level programs,
and the small kernel that remains is called the micro-kernel. Each of these
user-level servers is built independently and is isolated from the others. This
makes the system more secure and reliable: if any one server fails, the rest of
the operating system remains untouched and works fine.

Advantages of Micro-kernel structure:


• It allows the operating system to be portable between platforms.
• Enhanced system stability and security.
• As each service runs in isolation, a failure is contained, keeping the system
safe and trustworthy.
• Because the micro-kernel itself is smaller, it can be tested more thoroughly.
• If any user-level component fails, the remaining operating system is
unaffected and continues to function normally.
Disadvantages of Micro-kernel structure:
• Increased inter-module communication reduces system performance.
• System is complex to construct.
• Complexity in managing user-space components.

Layered Structure: In this type of structure, the OS is divided into layers or
levels. The hardware is on the bottom layer (layer 0), while the user interface is on
the top layer (layer N). These layers are arranged in a hierarchical way in which the
top-level layers use the functionalities of their lower-level levels. In this approach,
the functionalities of each layer are isolated, and abstraction is also available. In a
layered structure, debugging is easier because of the hierarchical model: the
lower-level layers are debugged first, and then each upper layer is checked in
turn. Since all the lower layers have already been verified, only the current layer
needs to be examined.
The following are some of the key characteristics of a layered operating
system structure:
• Each layer is responsible for a specific set of tasks. This makes it easier to
understand, develop, and maintain the operating system.
• Layers are typically arranged in a hierarchy. This means that each layer can only
use the services provided by the layers below it.


• Layers are independent of each other. This means that a change to one layer
should not affect the other layers.

Advantages of Layered Structure


• A layered structure is highly modular, meaning that each layer is
responsible for a specific set of tasks. This makes it easier to understand,
develop, and maintain the operating system.
• Each layer has its functionalities, so work tasks are isolated, and
abstraction is present up to some level.
• Debugging is easier as lower layers are debugged, and then upper layers are
checked.
Disadvantages of Layered Structure
• In Layered Structure, layering causes degradation in performance.
• It takes careful planning to construct the layers since higher layers only
utilize the functions of lower layers.
• There can be some performance overhead associated with the
communication between layers. This is because each layer must pass data to
the layer above it.

TYPES OF OPERATING SYSTEM


The following are the different types of operating systems:

Multiprocessor OS: A multiprocessor operating system is an operating system
that uses multiple processors to improve performance. This operating system is
commonly found on computers with more than one CPU. Multiprocessor systems
improve system performance by allowing the execution of tasks on multiple
processors simultaneously, which reduces the overall time it takes to complete
specific tasks.


Advantages
• It allows the system to run multiple programs simultaneously.
• Beneficial for tasks that need to use all of the processor’s resources, such as
games, scientific calculations, and financial simulations.
Disadvantages: They require additional hardware, such as processors and
memory, making a system more expensive.

Multi-programming OS: An operating system that can run multiple processes on
a single processor is called a multiprogramming operating system. Programs that
want to execute are kept in the ready queue and are assigned to the CPU one by
one. If one process becomes blocked, another process from the ready queue is
assigned to the CPU. The aim of this is optimal resource utilization and higher
CPU utilization. For example, several processes may reside in RAM (main
memory): some are waiting for the CPU, and when process 2 (which was
previously executing) turns to I/O operations, the CPU shifts to executing
process 1.

Distributed OS: A distributed operating system is an operating system that is
designed to operate on a network of computers. Distributed systems are usually
used to distribute software applications and data. Distributed systems are also
used to manage the resources of multiple computers. Users could be at different
sites. Multiple computers are connected via a single communication channel. Every
system has its processor and memory. Resources like disk, computer, CPU,
network interface, nodes, etc., are shared among different computers at different
locations. It increases data availability in the entire system.
Advantages
• It is more reliable as a failure of one system will not impact the other
computers or the overall system.
• All computers work independently.
• Resources are shared, so there is less cost overall.
• The system works at a higher speed as resources are shared
• The host system has less load.
• Computers can be easily added to the system.
Disadvantages
• Costly setup.
• If the server fails, then the whole system will fail.


• Complex software is used for such a system

Multitasking OS: Multi-tasking operating systems are designed to enable
multiple applications to run simultaneously. Multi-tasking operating systems allow
multiple users to work on the same document or application simultaneously.
For example, a user running antivirus software, searching the internet, and
playing a song simultaneously. Then the user is using a multitasking OS.

Time-sharing OS: A time-sharing operating system provides a shared system to
multiple users who are logged in simultaneously. It allows those users to access
the same resources, such as files and applications, while they are logged in. This
type of operating system is most commonly used in businesses, especially those
that involve many simultaneous users. Time-sharing operating systems enable
several users to get their jobs done on one system at the same time; the idea was
a major advancement in computing and remains in wide use.

Client/Server Network OS: Client/server network operating systems are those
networks that contain two types of nodes: servers and clients. The servers
host the applications or services for users while clients use these applications. In a
client/server system, both the server and client computers must have certain
software installed to connect securely over a network connection.
Client-server networks are a type of computer network in which two or more
computer systems are linked through a telecommunications network. Clients are
the computers that use the network to access services provided by the server.
Servers are the computers that provide the services to the network. Client/server
networks are commonly used in business and government applications.


Advantages
• Allows companies to scale their computing resources to handle increased
demand without having to buy new hardware.
• Client-server systems can be quickly reconfigured to meet the changing
needs of an organization.
• They are also more reliable and easier to maintain than dedicated server
systems.
• Lower operating cost.
Disadvantages
• These OSs need more sophisticated management and networking
technologies, have longer startup times, and are more vulnerable to attack.
• Less secure than dedicated server systems.
• More challenging to scale than dedicated server systems.

Batch OS: In a batch operating system, each user prepares their work offline on a
standalone device, such as punch cards, and submits it to a computer operator.
The operator takes jobs with similar needs and requirements and groups them
into batches, which are split and distributed across the available systems to
facilitate computing and faster responses. These types of operating systems are
rarely used nowadays.


Advantages
• The overall time the system takes to execute all the programs will be
reduced.
• Less time to execute all programs.
• These operating systems are shared between multiple users.
• Suitable for small-scale businesses.
• It can work in offline mode also.
• Jobs can be given specific times, so the computer can process them when it
is otherwise idle.
Disadvantages
• Sometimes, manual interventions are required between two batches.
• The CPU utilization is low because the time taken in loading and unloading
batches is very high compared to execution time.
• Sometimes, jobs enter into an infinite loop due to some mistake.
• Meanwhile, if one job takes too much time, other jobs must wait.
Thus, the diverse types of operating systems (OS) serve as the backbone of
computer technology, each catering to different requirements and user
preferences.

OPERATING SYSTEM FUNCTIONS


The operating system (OS) is a crucial component of the computing
environment, serving as an intermediary between users and computer hardware.
Let's examine the functions of the OS in detail below.
File Management
Device Management
Process Management
Memory Management
Job Accounting

File Management: An operating system's (OS) primary function is to manage
files and folders. Operating systems are responsible for managing the files on a
computer. This includes creating, opening, closing, and deleting files. The operating
system is also responsible for organizing the files on the disk.
The OS also handles file permissions, which dictate what actions a user can
take on a particular file or folder. For example, you may be able to read a file but
not edit or delete it. This prevents unauthorized users from accessing or tampering
with your files.
Tasks of Operating System
• Keeps track of the location and status of files.
• Allocates and deallocates resources.
• Decides which resource to assign to which file.


OS helps in:
• Creating a file: The operating system provides a graphical user interface or
command-line interface that allows users to create new files. In a graphical
user interface, you can right-click on a folder or the desktop, select "New",
and choose the type of file you want to create, such as a text file or a Microsoft
Word document. Alternatively, you can use a command-line interface and
type commands to create files.
• Editing a file: Once a file has been created, you can use various tools provided
by the operating system, such as a word processor or other applications, to edit it.
• Updating a file: The operating system provides the facility to edit the file and
also tracks changes made to the file and updates the file metadata accordingly.
• Deleting a file: The operating system allows you to delete the file you no longer
need. The OS moves the file to the recycle bin or trash folder, which can be
restored if necessary, or permanently deletes it from the storage device.
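To make this concrete, here is a minimal C sketch (assuming a POSIX-style
environment; the filename notes.txt is hypothetical) in which a program asks the
OS to create, write to, and then delete a file:

    #include <stdio.h>

    int main(void) {
        /* Create a file and write to it; the OS allocates the
           directory entry and disk blocks on our behalf. */
        FILE *f = fopen("notes.txt", "w");   /* hypothetical filename */
        if (f == NULL) { perror("fopen"); return 1; }
        fprintf(f, "hello, file system\n");
        fclose(f);

        /* Delete the file; the OS reclaims its storage. */
        if (remove("notes.txt") != 0) { perror("remove"); return 1; }
        return 0;
    }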

Device Management: Operating systems provide essential functions for
managing devices connected to a computer. These functions include allocating
memory, processing input and output requests, and managing storage devices.
These devices could be a keyboard, mouse, printer, or any other device you may
have connected.
An operating system will provide options for managing each device's
behaviour. For example, you can set up your keyboard to type in a specific
language or make the mouse move only one screen at a time. An operating system
can also install software and updates for your devices and manage their security
settings.
The operating system does the following tasks:
• Allocating and deallocating devices to different processes.
• Keeps records of all the devices attached to the computer.
• Decide which device to be allocated to which process and for how much
time.

Process Management: The operating system is responsible for managing the
processes on your computer. This includes starting and stopping programs,
allocating resources, and managing memory usage. The operating system ensures
that the programs running on your computer are compatible. It's also
responsible for enforcing program security, which helps to keep your computer safe
from potential attacks.
How do operating systems manage all processes? Each process is given a certain
amount of time to execute, called a quantum. Once a process has used its
quantum, the operating system interrupts it, providing another process with a
turn. This ensures that each process gets a fair share of the CPU time.
The operating system manages processes by doing the following tasks:
• Allocating and deallocating the resources.


• Allocates resources such that the system doesn’t run out of resources.
• Offering mechanisms for process synchronization.
• Helps in process communication(inter-communication).

Memory Management: One of the most critical functions of an operating
system is memory management. This is the process of keeping track of all the
different applications and processes running on your computer and all the data
they’re using.
This is especially important on computers with limited memory, as it
ensures that no application or process takes up too much space and slows down
the computer. The operating system can move data around and delete files to free
up space.
Operating systems perform the following tasks:
• Allocating/deallocating memory to store programs.
• Deciding the amount of memory that should be allocated to the program.
• Memory distribution while multiprocessing.
• Update the status in case memory is freed
• Keeps record of how much memory is used and how much is unused.
When a computer starts, the operating system loads itself into memory and
manages all the other running programs. It checks how much memory is used and
how much is available and makes sure that executing programs do not interfere
with each other.
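As a small illustration, the following C sketch (a minimal example, assuming a
standard C runtime) requests memory from the system, uses it, and frees it so the
memory manager can reuse the space:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Ask the memory manager for a block of 100 integers. */
        int *data = malloc(100 * sizeof *data);
        if (data == NULL) { fprintf(stderr, "allocation failed\n"); return 1; }
        for (int i = 0; i < 100; i++) data[i] = i;
        printf("data[99] = %d\n", data[99]);
        free(data);   /* return the block so it can be reused */
        return 0;
    }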

Job Accounting: An operating system's (OS) job accounting feature is a
powerful tool for tracking how your computer's resources are being used. This
information can help you pinpoint and troubleshoot any performance issues and
identify unauthorized software installations.
Operating systems track which users and processes use how many
resources. This information can be used for various purposes, including monitoring
system usage, billing users for their resource use, and providing system
administrators with information about which users and processes are causing
problems.
The operating system does the following tasks:
• Keeps record of all the activities taking place on the system.
• Keeps record of information regarding resources, memory, errors, etc.
• Responsible for Program swapping (in and out) in memory
• Keeps track of memory usage and accordingly assigns memory
• Opening and closing and writing to peripheral devices.
• Creating a file system for organizing files and directories.
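As an illustration, on POSIX systems a process can read some of its own
accounting information with the getrusage() call (a minimal sketch; note that
field units vary by platform, e.g., ru_maxrss is kilobytes on Linux but bytes on
macOS):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        /* Burn a little CPU time so there is something to account for. */
        volatile unsigned long x = 0;
        for (unsigned long i = 0; i < 50000000UL; i++) x += i;

        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) == 0) {
            printf("user CPU time:   %ld.%06ld s\n",
                   (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
            printf("system CPU time: %ld.%06ld s\n",
                   (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
            printf("max resident set: %ld\n", ru.ru_maxrss);
        }
        return 0;
    }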


CHARACTERISTICS OF MODERN OS
An operating system is a fundamental piece of software that manages
computer hardware resources and provides services for running applications.
Modern operating systems have evolved significantly over the years and have a
plethora of features that enhance their functionality, security, and usability.
The characteristics of modern operating systems that make them reliable,
efficient, and user-friendly are given below.
Object-Oriented Design: An object-oriented operating system (OOOS) is an
operating system that is designed and built using the principles of object-oriented
programming (OOP).
In an OOOS, the operating system is viewed as a collection of objects, each
representing a different aspect of the system. For example, there may be objects for
processes, files, devices, and users. These objects encapsulate their respective data
and behaviour and interact with each other through well-defined interfaces.
Multitasking and Multithreading: Multitasking is the ability of an operating
system to run multiple programs or processes at the same time, allowing users to
switch between them seamlessly.
In a multitasking environment, the operating system allocates CPU time to
each program or process in small time slices, allowing each program to run for a
short period before switching to the next program. This gives the illusion of
multiple programs running simultaneously, even though only one program is
running at any given moment.
Multithreading, on the other hand, is the ability of a program to perform
multiple tasks or subtasks simultaneously within a single process. In a
multithreaded program, each thread can execute a separate set of instructions
simultaneously, allowing the program to perform multiple tasks concurrently. This
can improve program performance and responsiveness, particularly for applications
that require heavy processing or input/output operations.
The combination of multitasking and multithreading allows modern
operating systems to efficiently manage system resources and run multiple
programs or processes simultaneously. This enables users to perform multiple
tasks at once and allows for better utilization of system resources such as CPU
time and memory.
Symmetric Multiprocessing: Symmetric multiprocessing (SMP) is a type of
multiprocessing architecture in which two or more identical processors are
connected to a single shared main memory and run a single operating system. In
an SMP system, each processor can access any memory area and perform any
system task, such as running applications or managing input/output operations.
SMP systems are commonly used in servers, high-performance computing
clusters, and desktop computers with multiple processors. In these systems, the
workload can be divided among the available processors, allowing for improved
performance and faster execution of tasks.


Distributed Operating System: A distributed operating system (DOS) is an
operating system that runs on multiple independent computers and coordinates
their activities as a single system.
Unlike traditional operating systems, which are designed to run on a single
computer, a DOS is designed to support distributed computing, where multiple
computers work together to achieve a common goal.
DOS is typically used in environments where a large number of computers
need to work together to perform complex tasks. Examples include scientific
simulations, weather forecasting, and large-scale data processing. In a DOS, the
different computers that make up the system can communicate with each other
and share resources such as memory, storage, and processing power.
Traditional Unix System: The rapid growth of Unix is due to many factors,
including its portability to a wide range of machines, its adaptability and
simplicity, the wide range of tasks it can perform, its multi-user and
multitasking nature, and its suitability for
networking.
Data security: Modern operating systems use security measures like encryption,
file permissions, and user authentication to protect the system and user data. They
also receive regular security updates and patches to address vulnerabilities.
Error handling: Modern operating systems check for errors in hardware,
software, and data. They display error messages and may even suggest solutions to
problems.
Multitasking: Modern operating systems can handle multiple applications or
processes running at the same time.
File management: Modern operating systems can create new files, place them in
specific locations, and help users find and share files.
Resource management: Modern operating systems manage a computer's
resources.

MICROKERNEL ARCHITECTURE
A microkernel is a type of operating system kernel that is designed to provide
only the most basic services required for an operating system to function, such as
memory management and process scheduling. Other services, such as device
drivers and file systems, are implemented as user-level processes that
communicate with the microkernel via message passing (IPC i.e. inter-process
communication). This design allows the operating system to be more modular and
flexible than traditional monolithic kernels, which implement all operating system
services in kernel space. Microkernel examples: MINIX (mini-UNIX), QNX, etc.
Salient Features
1. Modularity: Microkernels are designed with a modular structure, separating
the core functionalities into small, independent components, which are easier to
add, remove, or update without affecting the entire system.


2. Kernel Minimalism: By keeping the kernel minimal, the trusted computing
base (TCB) is reduced, enhancing security and reliability.
3. Inter-Process Communication (IPC): Microkernels heavily rely on IPC
mechanisms, such as message passing, for communication between user-space
servers and the microkernel.
4. Scalability: Microkernels can be more scalable than monolithic kernels,
allowing for easier adaptation to different system requirements without
sacrificing performance.

The minimum functionalities included in the microkernel are:


• Memory management mechanisms like address spaces are included in the
microkernel. This also contains memory protection features.
• Processor scheduling mechanisms are also necessary in the microkernel.
This contains process and thread schedulers.
• Interprocess communication is important as it is needed to manage the
servers that run their own address spaces.
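To illustrate the message-passing idea, the following C sketch uses an ordinary
POSIX pipe as a stand-in for microkernel IPC (an analogy only, not a real
microkernel API): a "client" process sends a request message, and a "server"
process blocks until it receives it:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                    /* child plays the "server" */
            close(fd[1]);
            char msg[64];
            ssize_t n = read(fd[0], msg, sizeof msg - 1); /* wait for a request */
            if (n > 0) { msg[n] = '\0'; printf("server got: %s\n", msg); }
            close(fd[0]);
            return 0;
        }
        /* parent plays the "client", sending a request message */
        close(fd[0]);
        const char *req = "open file X";    /* hypothetical request */
        write(fd[1], req, strlen(req));
        close(fd[1]);
        wait(NULL);
        return 0;
    }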

Performance of a Microkernel System: Providing services in a microkernel


system is much more expensive than in a normal monolithic system. The service is
obtained by sending an interprocess communication message to the server and
getting one in return. This means a context switch or a function call if the drivers
are implemented as processes or procedures respectively.
So, performance can suffer in microkernel systems and may cause problems.
However, this issue is much reduced in modern microkernels such as the L4
family.

PROCESS MANAGEMENT
A process is a program in execution. For example, when we write a program
in C or C++ and compile it, the compiler creates binary code. The original code and
binary code are both programs. When we run the binary code, it becomes a
process. A process is an ‘active’ entity instead of a program, which is considered a
‘passive’ entity. A single program can create many processes when run multiple
times; for example, when we open a .exe or binary file multiple times, multiple
instances begin (multiple processes are created).


Process management is a key part of an operating system. It controls how
processes are carried out and how your computer runs by handling the
active processes. This includes stopping processes, setting which processes should
get more attention, and many more. You can manage processes on your computer
too.
The OS is responsible for managing the start, stop, and scheduling of
processes, which are programs running on the system. The operating system uses
several methods to prevent deadlocks, facilitate inter-process communication, and
synchronize processes. Efficient resource allocation, conflict-free process execution,
and optimal system performance are all guaranteed by competent process
management. This essential component of an operating system enables the
execution of numerous applications at once, enhancing system utilization and
responsiveness.

Process Attributes/Characteristics: Let us see the attributes of a process at a
glance.
• Process ID: a unique identifier assigned by the operating system to each
process.
• Process State: there are a few possible states a process goes through during
execution.
• CPU registers: stores the details of the process when it is swapped in and out of
the CPU, just like the program counter.
• I/O status information: shows information like the device to which a process is
allotted and details of open files.
• CPU scheduling information: processes are scheduled and executed based on
priority.

• Accounting information: information such as the amount of CPU time used,
time limits, and job or process numbers.
• Memory management information: information about the value of base
registers and limit registers, segment tables, and pages.

Characteristics of a Process: A process has the following characteristics:


• Process State: A process can be in several states; some of them are ready,
waiting, and running.
• Process Control Block: The PCB is a data structure that contains information
related to a process. These blocks are stored in the process table.
• Resources: Processes request various types of resources such as files,
input/output devices, and network connections. The OS manages the allocation
of these resources.
• Priority: Each process has a scheduling priority. Higher-priority processes are
given preferential treatment and they receive more CPU time compared to lower-
priority processes.
• Execution Context: Each process has its execution context, which includes the
address of the next instruction to be executed, stack pointer, and register
values.
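As an illustrative sketch only (no real kernel lays its PCB out exactly like this;
Linux's counterpart is the much larger task_struct), the attributes above could
be collected in a C structure such as:

    #include <sys/types.h>

    /* Possible scheduling states, mirroring the states described below. */
    enum proc_state { NEW, READY, RUNNING, WAITING, BLOCKED, TERMINATED };

    /* A simplified process control block. */
    struct pcb {
        pid_t           pid;             /* unique process ID             */
        enum proc_state state;           /* current process state         */
        unsigned long   program_counter; /* saved PC for context switch   */
        unsigned long   registers[16];   /* saved CPU register values     */
        int             priority;        /* CPU scheduling information    */
        unsigned long   cpu_time_used;   /* accounting information        */
        void           *page_table;      /* memory-management information */
        int             open_files[16];  /* I/O status: open descriptors  */
    };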

PROCESS STATES
These are the states in which a process might go during its execution. Let us
know more about them:

• New: a new process is created when a certain program is called up from
secondary memory and loaded into RAM.
• Ready: a process is said to be in a ready state if it is already loaded in the
primary memory/RAM for its execution.
• Running: here, the process is already executing.
• Paused (Waiting): at this stage, the process is waiting for CPU
or resource allocation.
• Blocked: at this stage, the process is waiting for some I/O operation to get
completed.
• Terminated: at this stage, the process has finished execution or has been
terminated by the OS.


• Suspended: this state shows that the process is ready but it has not been
placed in the ready queue for execution.

OPERATIONS ON PROCESS
In an operating system, processes represent the execution of individual tasks
or programs. Process operations involve the creation, scheduling, execution, and
termination of processes. The OS allocates necessary resources, such as CPU time,
memory, and I/O devices, to ensure the seamless execution of processes. Process
operations in OS encompass vital aspects of process lifecycle management,
optimizing resource allocation, and facilitating concurrent and responsive
computing environments.

Process Operations: Process operations in an operating system involve several
key steps that manage the lifecycle of processes. The operations on process in OS
ensure efficient utilization of system resources, multitasking, and a responsive
computing environment. The primary process operations in OS include:
1. Process Creation
2. Process Scheduling
3. Context Switching
4. Process Execution
5. Inter-Process Communication (IPC)
6. Process Termination
7. Process Synchronization
8. Process State Management
9. Process Priority Management
10. Process Accounting and Monitoring

Creating: Process creation is a fundamental operation within an operating
system that involves the creation and initialization of a new process. The process
operation in OS is crucial for enabling multitasking, resource allocation, and
concurrent execution of tasks. The process creation operation in OS typically
follows a series of steps which are as follows:
1. Request: The process of creation begins with a request from a user or a system
component, such as an application or the operating system itself, to start a new
process.
2. Allocating Resources: The operating system allocates necessary resources for
the new process, including memory space, a unique process identifier (PID), a
process control block (PCB), and other essential data structures.
3. Loading Program Code: The program code and data associated with the
process are loaded into the allocated memory space.
4. Setting Up Execution Environment: The OS sets up the initial execution
environment for the process.


5. Initialization: Any initializations required for the process are performed at this
stage. This might involve initializing variables, setting default values, and
preparing the process for execution.
6. Process State: After the necessary setup, the new process is typically in a
"ready" or "waiting" state, indicating that it is prepared for execution but hasn't
started running yet.
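On POSIX systems, these steps are visible through the fork() and exec() system
calls, as in the following minimal C sketch (the ls command is just an example
program to load):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();          /* steps 1-2: the kernel allocates a new
                                        PID, PCB, and address space */
        if (pid < 0) { perror("fork"); return 1; }

        if (pid == 0) {
            /* child: steps 3-5 -- load and initialize a new program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");        /* reached only if exec fails */
            return 1;
        }
        waitpid(pid, NULL, 0);       /* parent waits for the child */
        return 0;
    }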

Dispatching/Scheduling: It is a crucial operation within an operating system
that involves the selection of the next process to execute on the central
processing unit (CPU). This operation is a key component of process management
and is essential for efficient multitasking and resource allocation. The dispatching
operation encompasses the following key steps:
1. Process Selection: The dispatching operation selects a process from the pool
of ready-to-execute processes. The selection criteria may include factors such
as process priority, execution history, and the scheduling algorithm employed
by the OS.
2. Context Switching: Before executing the selected process, the operating
system performs a context switch. This involves saving the state of the
currently running process, including the program counter, CPU registers, and
other relevant information, into the process control block (PCB).
3. Loading New Process: Once the context switch is complete, the OS loads the
saved state of the selected process from its PCB. This includes restoring the
program counter and other CPU registers to the values they had when the
process was last pre-empted or voluntarily yielded to the CPU.
4. Execution: The CPU begins executing the instructions of the selected process.
The process advances through its program logic, utilizing system resources
such as memory, I/O devices, and external data.
5. Timer Interrupts and Pre-emption: During process execution, timer
interrupts are set at regular intervals. When a timer interrupt occurs, the
currently running process is pre-empted, and the CPU returns control to the
operating system.
6. Scheduling Algorithms: The dispatching operation relies on scheduling
algorithms that determine the order and duration of process execution.
7. Resource Allocation: The dispatching operation is responsible for allocating
CPU time to processes based on the scheduling algorithm and their priority.
This ensures that high-priority or time-sensitive tasks receive appropriate
attention.
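As an illustrative sketch of these steps, the following self-contained C program
simulates a round-robin dispatcher with a fixed quantum over three hypothetical
processes (the burst times are invented for the example):

    #include <stdio.h>

    #define NPROC   3
    #define QUANTUM 2    /* time units per turn */

    int main(void) {
        int remaining[NPROC] = {5, 3, 8};   /* hypothetical CPU bursts */
        int done = 0, clock = 0;

        while (done < NPROC) {
            for (int p = 0; p < NPROC; p++) {
                if (remaining[p] == 0) continue;       /* already finished */
                int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
                /* "context switch": the dispatcher gives the CPU to p */
                printf("t=%2d: run P%d for %d unit(s)\n", clock, p, slice);
                clock += slice;
                remaining[p] -= slice;
                if (remaining[p] == 0) {
                    printf("t=%2d: P%d terminates\n", clock, p);
                    done++;
                }
            }
        }
        return 0;
    }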

Blocking: In an operating system, a blocking operation refers to a situation
where a process is temporarily suspended or "blocked" from executing
further instructions until a specific event or condition occurs. This event typically
involves waiting for a particular resource or condition to become available before
the process can proceed. Blocking operations are common in scenarios where
processes need to interact with external resources, such as input/output (I/O)
devices, files, or other processes.


When a process initiates a blocking operation, it enters a state known as
"blocked" or "waiting." The operating system removes the process
from the CPU's execution queue and places it in a waiting queue associated with
the resource it is waiting for. The process remains in this state until the resource
becomes available or the condition is satisfied.
Blocking operations are crucial for efficient resource management and
coordination among processes. They prevent processes from monopolizing system
resources while waiting for external events, enabling the operating system to
schedule other processes for execution. Common examples of blocking operations
include:

1. I/O Operations: When a process requests data from an I/O device (such as
reading data from a disk or receiving input from a keyboard), it may be blocked
until the requested data is ready.
2. Synchronization: Processes often wait for synchronization primitives like
semaphores or mutexes to achieve mutual exclusion or coordinate their
activities.
3. Inter-Process Communication: Processes waiting for messages or data from
other processes through mechanisms like message queues or pipes may enter a
blocked state.
4. Resource Allocation: Processes requesting system resources, such as memory
or network connections, may be blocked until the resources are allocated.
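A minimal C sketch of the first case: read() on standard input blocks the calling
process, and the kernel keeps it in a waiting queue until keyboard input arrives
(POSIX assumed):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[128];
        printf("type a line: ");
        fflush(stdout);
        /* The process blocks here until input is available. */
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("unblocked, got: %s", buf);
        }
        return 0;
    }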

Pre-emption: Pre-emption in an operating system refers to the act of temporarily
interrupting the execution of a currently running process to allocate
the CPU to another process. This interruption is typically triggered by a higher-
priority process becoming available for execution or by the expiration of a time slice
assigned to the currently running process in a time-sharing environment. Key
aspects of pre-emption include:
1. Priority-Based Preemption: Processes with higher priorities are given
preference in execution. When a higher-priority process becomes available, the
OS may pre-empt the currently running process to allow the higher-priority
process to execute.
2. Time Sharing: In a time-sharing or multitasking environment, processes are
allocated small time slices (quantum) of CPU time. When a time slice expires,

Page 20 of 159
Operating System

the currently running process is pre-empted, and the OS selects the next
process to run.
3. Interrupt-Driven Preemption: Hardware or software interrupts can trigger
pre-emption. For example, an interrupt generated by a hardware device or a
system call request from a process may cause the OS to pre-empt the current
process and handle the interrupt.
4. Fairness and Responsiveness: Preemption ensures that no process is unfairly
blocked from accessing CPU time. It guarantees that even low-priority
processes get a chance to execute, preventing starvation.
5. Real-Time Systems: Preemption is crucial in real-time systems, where tasks
have strict timing requirements. If a higher-priority real-time task becomes
ready to run, the OS must pre-empt lower-priority tasks to ensure timely
execution.

Termination of Process: Termination of a process in an operating system
refers to the orderly and controlled cessation of a running
process's execution. Process termination occurs when a process has completed its
intended task, when it is no longer needed, or when an error or exception occurs.
This operation involves several steps to ensure proper cleanup and resource
reclamation:
1. Exit Status: When a process terminates, it typically returns an exit status or
code that indicates the outcome of its execution. This status provides
information about whether the process was completed successfully or
encountered an error.
2. Resource Deallocation: The OS releases the resources allocated to the process,
including memory, file handles, open sockets, and other system resources. This
prevents resource leaks and ensures efficient utilization of system components.
3. File Cleanup: If the process has opened files or created temporary files, the OS
ensures that these files are properly closed and removed, preventing data
corruption and freeing up storage space.
4. Parent Process Notification: In most cases, the parent process (the process
that created the terminating process) needs to be informed of the termination
and the exit status.
5. Process Control Block Update: The OS updates the process control block (PCB)
of the terminated process, marking it as "terminated" and removing it from the
list of active processes.
6. Reclamation of System Resources: The OS updates its data structures and
internal tables to reflect the availability of system resources that were used by
the terminated process.
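Steps 1 and 4 can be seen in a minimal POSIX C sketch: the child terminates with
an exit status, and the parent is notified and retrieves that status via waitpid()
(the value 42 is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            exit(42);                 /* child terminates with exit status 42 */
        }
        int status;
        waitpid(pid, &status, 0);     /* parent is notified of the termination */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }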

CONCURRENT PROCESS
Concurrency in operating systems refers to the ability of an OS to manage
and execute multiple tasks or processes simultaneously. It allows multiple tasks
to overlap in execution, giving the appearance of parallelism even on single-core
processors. Concurrency is achieved through various techniques such as
multitasking, multithreading, and multiprocessing.
Multitasking involves the execution of multiple tasks by rapidly switching between
them. Each task gets a time slot, and the OS switches between them so quickly
that it seems as if they are running simultaneously.
Multithreading takes advantage of modern processors with multiple cores. It
allows different threads of a process to run on separate cores, enabling true
parallelism within a single process.
Multiprocessing goes a step further by distributing multiple processes across
multiple physical processors or cores, achieving parallel execution at a higher level.

Why Allow Concurrent Execution? The need for concurrent execution arises
from the desire to utilize computer resources efficiently. Here are some key
reasons why concurrent execution is essential:
• Resource Utilization: Concurrency ensures that the CPU, memory, and other
resources are used optimally. Without concurrency, a CPU might remain idle
while waiting for I/O operations to complete, leading to inefficient resource
utilization.
• Responsiveness: Concurrent systems are more responsive. Users can interact
with multiple applications simultaneously, and the OS can switch between them
quickly, providing a smoother user experience.
• Throughput: Concurrency increases the overall throughput of the system.
Multiple tasks can progress simultaneously, allowing more work to be done in a
given time frame.
• Real-Time Processing: Certain applications, such as multimedia playback and
gaming, require real-time processing. Concurrency ensures that these
applications can run without interruptions, delivering a seamless experience.

Principles of Concurrency in Operating Systems: To effectively implement
concurrency, OS designers adhere to several key principles:
• Process Isolation: Each process should have its own memory space and
resources to prevent interference between processes. This isolation is critical to
maintain system stability.
• Synchronization: Concurrency introduces the possibility of data races and
conflicts. Synchronization mechanisms like locks, semaphores, and mutexes are
used to coordinate access to shared resources and ensure data consistency.
• Deadlock Avoidance: OSs implement algorithms to detect and avoid deadlock
situations where processes are stuck waiting for resources indefinitely.
Deadlocks can halt the entire system.
• Fairness: The OS should allocate CPU time fairly among processes to prevent
any single process from monopolizing system resources.

Problems in Concurrency: While concurrency offers numerous benefits, it also
introduces a range of challenges and problems:


• Race Conditions: These occur when multiple threads or processes access shared
resources simultaneously without proper synchronization. In the absence of
synchronization mechanisms, race conditions can lead to unpredictable
behaviour and data corruption: data inconsistencies, application crashes, or
even security vulnerabilities if sensitive data is involved (see the sketch after
this list).
• Deadlocks: A deadlock arises when two or more processes or threads become
unable to progress as they are mutually waiting for resources that are currently
held by each other. This situation can bring the entire system to a standstill,
causing disruptions and frustration for users.
• Priority Inversion: Priority inversion occurs when a lower-priority task
temporarily holds a resource that a higher-priority task needs. This can lead to
delays in the execution of high-priority tasks, reducing system efficiency and
responsiveness.
• Resource Starvation: Resource starvation occurs when some processes are
unable to obtain the resources they need, leading to poor performance and
responsiveness for those processes. This can happen if the OS does not manage
resource allocation effectively or if certain processes monopolize resources.
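The following C sketch (using POSIX threads; compile with -pthread) shows the
race-condition problem and its mutex fix: two threads increment a shared counter,
and without the lock the final value would be unpredictable:

    #include <stdio.h>
    #include <pthread.h>

    long counter = 0;                        /* shared resource */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);       /* without this lock, the two
                                                threads race on counter */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }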

Advantages of Concurrency: Concurrency in operating systems offers several
distinct advantages:
• Improved Performance: Concurrency significantly enhances system
performance by effectively utilizing available resources. With multiple tasks
running concurrently, the CPU, memory, and I/O devices are continuously
engaged, reducing idle time and maximizing overall throughput.
• Responsiveness: Concurrency ensures that users enjoy fast response times,
even when juggling multiple applications. The ability of the operating system to
swiftly switch between tasks gives the impression of seamless multitasking and
enhances the user experience.
• Scalability: Concurrency allows systems to scale horizontally by adding more
processors or cores, making it suitable for both single-core and multi-core
environments.
• Fault Tolerance: Concurrency contributes to fault tolerance, a critical aspect of
system reliability. In multiprocessor systems, if one processor encounters a
failure, the remaining processors can continue processing tasks. This
redundancy minimizes downtime and ensures uninterrupted system operation.

Limitations of Concurrency: Despite its advantages, concurrency has its
limitations:
• Complexity: Debugging and testing concurrent code is often more challenging
than sequential code. The potential for hard-to-reproduce bugs necessitates
careful design and thorough testing.
• Overhead: Synchronization mechanisms introduce overhead, which can slow
down the execution of individual tasks, especially in scenarios where
synchronization is excessive.
• Race Conditions: Dealing with race conditions requires careful consideration
during the design and rigorous testing to prevent data corruption and erratic
behaviour.


• Resource Management: Balancing resource usage to prevent both resource
starvation and excessive contention is a critical task. Careful resource
management is vital to maintain system stability.

PROCESS THREADS
In computers, a single process might have multiple functionalities running in
parallel, where each functionality can be considered a thread. Each thread has
its own set of registers and stack space. There can be multiple threads in a single
process having the same or different functionality. Threads in operating systems
are also termed lightweight processes.
Thread is a sequential flow of tasks within a process. Threads in an
operating system can be of the same or different types. Threads are used to
increase the performance of the applications.
Each thread has its own program counter, stack, and set of registers.
However, the threads of a single process might share the same code and
data/file. Threads are also termed lightweight processes as they share common
resources.
E.g.: While playing a movie on a device the audio and video are controlled by
different threads in the background.

Components of Thread: A thread has the following three components:


1. Program Counter
2. Register Set
3. Stack space
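As a minimal sketch using POSIX threads (compile with -pthread; the function
names echo the movie-player example above and are purely illustrative), two
threads run inside one process, each with its own stack and registers, while
sharing the process's data:

    #include <stdio.h>
    #include <pthread.h>

    int shared = 100;    /* data shared by all threads of the process */

    void *play_audio(void *arg) {
        (void)arg;
        printf("audio thread: decoding sound (sees shared=%d)\n", shared);
        return NULL;
    }

    void *play_video(void *arg) {
        (void)arg;
        printf("video thread: drawing frames (sees shared=%d)\n", shared);
        return NULL;
    }

    int main(void) {
        pthread_t audio, video;
        pthread_create(&audio, NULL, play_audio, NULL);
        pthread_create(&video, NULL, play_video, NULL);
        pthread_join(audio, NULL);
        pthread_join(video, NULL);
        return 0;
    }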

Types of Thread
User Level Thread: User-level threads are implemented and managed by the
user, and the kernel is not aware of them. User-level threads are implemented using
user-level libraries and the OS does not recognize these threads. User-level threads
are faster to create and manage compared to kernel-level threads. If one
user-level thread performs a blocking operation, then the entire process gets
blocked.
E.g.: POSIX threads, Java threads, etc.
User Level Thread is a type of thread that is not created using system calls.
The kernel has no work in the management of user-level threads. User-level
threads can be easily implemented by the user. To the kernel, a process that uses
only user-level threads appears as a single-threaded process. Let's look at the
advantages and disadvantages of user-level threads.
Advantages of User-Level Threads
• Implementation of the User-Level Thread is easier than Kernel Level Thread.
• Context Switch Time is less in User Level Thread.
• User-Level Thread is more efficient than Kernel-Level Thread.
• Because of the presence of only Program Counter, Register Set, and Stack
Space, it has a simple representation.
Disadvantages of User-Level Threads
• There is a lack of coordination between Thread and Kernel.
• In case of a page fault, the whole process can be blocked.

Kernel-level Thread: Kernel-level threads are implemented and managed by the
OS. Kernel-level threads are implemented using system calls and are recognized
by the OS. Kernel-level threads are slower to create and manage compared to
user-level threads, and context switching in a kernel-level thread is slower.
However, even if one kernel-level thread performs a blocking operation, it does
not affect other threads. E.g.: Windows, Solaris.

A kernel-level thread is a thread that the operating system recognizes and
manages directly; user-level threads run in user space, while kernel-level threads
run in kernel space. The kernel maintains a thread table to keep track of all
threads in the system, and the operating system kernel helps in managing them.
Kernel-level threads have somewhat longer context-switching times.


Advantages of Kernel-Level Threads


• It has up-to-date information on all threads.
• Applications that block frequently are better handled by kernel-level threads.
• Whenever any process requires more processing time, the kernel can allocate
more time to its threads.
Disadvantages of Kernel-Level threads
• Kernel-Level Thread is slower than User-Level Thread.
• Implementation of this type of thread is a little more complex than a user-level
thread.

Advantages of Threading
• Threads improve the overall performance of a program.
• Threads increase the responsiveness of the program
• Context Switching time in threads is faster.
• Threads share the same memory and resources within a process.
• Communication is faster in threads.
• Threads provide concurrency within a process.
• Enhanced throughput of the system.
• Since different threads can run parallelly, threading enables the utilization
of the multiprocessor architecture to a greater extent and increases
efficiency.

Life Cycle of Thread

1. Creation: The first stage in the lifecycle of a thread is its creation. In most
programming languages and environments, threads are created by
instantiating a thread object or invoking a thread creation function. During
creation, you specify the code or function that the thread will execute.
2. Ready/Runnable: After a thread is created, it enters the "ready" or
"runnable" state. In this state, the thread is ready to run, but the operating
system scheduler has not yet selected it to execute on the CPU. Threads in the
ready state are typically waiting for the scheduler to allocate CPU time to them.


3. Running: When the scheduler selects a thread from the pool of ready threads
and allocates CPU time to it, the thread enters the "running" state. In this
state, the thread's code is being executed on the CPU. A running thread will
continue to execute until it either voluntarily yields the CPU (e.g., through sleep
or wait operations) or is pre-empted by a higher-priority thread.
4. Blocked/Waiting: Threads can enter the "blocked" or "waiting" state when
they are waiting for some event to occur, such as I/O operations,
synchronization primitives (e.g., locks or semaphores), or signals from other
threads. When a thread is blocked, it is not eligible to run until the event it is
waiting for occurs.
5. Termination: Threads can terminate either voluntarily or involuntarily.
Voluntary termination occurs when a thread completes its execution or
explicitly calls a termination function. Involuntary termination can happen due
to errors (e.g., segmentation faults) or signals received from the operating
system.
6. Dead: Once a thread has terminated, it enters the "dead" state. In this state,
the thread's resources (such as memory and handles) are deallocated, and it no
longer exists as an active entity in the system. Dead threads cannot be
restarted or resumed.
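
The lifecycle stages above can be observed in a small sketch (again assuming
POSIX pthreads; the state names in the comments map onto the stages just listed):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    void *task(void *arg) {
        (void)arg;
        printf("running\n");   /* Running: the thread has been given the CPU */
        sleep(1);              /* Blocked/Waiting: sleeping, not runnable */
        printf("done\n");
        return NULL;           /* Voluntary termination */
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, task, NULL); /* Creation -> Ready/Runnable */
        pthread_join(t, NULL);                /* after join returns, the thread is dead */
        return 0;
    }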

MULTITHREADING
Multithreading combines the notions of process and thread. A process is a
program that is being executed; a process can be divided into independent units
of execution called threads, so a process can be viewed as a collection of
threads. A thread is a small, lightweight unit of execution residing inside a
process.

Multithreading divides the tasks of an application into separate individual
threads; the same task can be performed by several threads, or, put differently,
more than one thread is used to perform the tasks of the application.
It divides the code into small sets of lightweight tasks and puts less load
on the CPU and memory.


For example, client 1, client 2, and client 3 can access a multithreaded web
server simultaneously, without having to wait for other clients’ tasks to be
completed.
The threads involved are divided into user-level and kernel-level threads.
User-level threads are handled independently, without any support from the
kernel; kernel-level threads, on the other hand, are managed directly by the
operating system.
Examples of Multithreading Operating Systems: Multithreading is widely
used by applications. Some of the applications are processing transactions
like online bank transfers, recharge, etc.
For instance, in a banking system, many users perform day-to-day
activities using bank servers, like transfers, payments, deposits, opening a new
account, etc. All these activities are performed instantly without having to wait for
another user to finish.
In this, all the activities get executed simultaneously as and when they arise.
This is where multithreading comes into the picture, wherein several threads
perform different activities without interfering with others.
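
A hedged sketch of this idea in C (a hypothetical example assuming POSIX
pthreads; a mutex protects the shared balance so that the concurrent
"transactions" do not interfere with one another):

    #include <pthread.h>
    #include <stdio.h>

    static long balance = 1000;                  /* shared account balance */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *deposit(void *arg) {
        long amount = (long)arg;                 /* amount passed through the arg pointer */
        pthread_mutex_lock(&lock);               /* serialize access to shared data */
        balance += amount;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t[3];
        for (long i = 0; i < 3; i++)             /* three "customers" act concurrently */
            pthread_create(&t[i], NULL, deposit, (void *)(100 * (i + 1)));
        for (int i = 0; i < 3; i++)
            pthread_join(t[i], NULL);
        printf("final balance = %ld\n", balance); /* 1000 + 100 + 200 + 300 = 1600 */
        return 0;
    }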

Advantages:
• Multithreading allows the CPU to execute multiple tasks simultaneously, which
can boost performance.
• Multithreading reduces the amount of time that is spent waiting for a task to
finish.
• Multithreading can help to improve the scalability of a program.
• Interactive applications may allow a program to continue running even if part of
it is blocked or is performing a lengthy operation, thereby increasing
responsiveness.

Disadvantages of multi-threading
• Multithreading can be complex and challenging to implement.
• Multithreading can increase the complexity of a program.
• Multithreading can be error-prone.
• Programmers must carefully design their code to utilize multithreading
capabilities without introducing unwanted delays or fragmentation into their
programs’ execution.

Process Vs. Thread: Process simply means any program in execution while
the thread is a segment of a process. The main differences between process and
thread are mentioned below:
Process                                      Thread
Processes use more resources and hence       Threads share resources and hence they
they are termed heavyweight processes.       are termed lightweight processes.
Creation and termination times of            Creation and termination times of
processes are slower.                        threads are faster.
Processes have their own code and            Threads share code and data/files
data/files.                                  within a process.
Communication between processes is           Communication between threads is
slower.                                      faster.
Context switching in processes is slower.    Context switching in threads is faster.
Processes are independent of each other.     Threads are interdependent (they can
                                             read, write, or change another
                                             thread’s data).
E.g.: opening two different browsers.        E.g.: opening two tabs in the same
                                             browser.
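
The sharing difference in the table can be demonstrated with a small POSIX
sketch: a thread increments a global counter that main can see, while a forked
child only increments its own private copy (illustrative, assuming a Unix-like
system):

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int counter = 0;          /* shared by threads, duplicated by fork() */

    void *increment(void *arg) { (void)arg; counter++; return NULL; }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, increment, NULL);
        pthread_join(t, NULL);
        printf("after thread: %d\n", counter);  /* 1 -- threads share memory */

        if (fork() == 0) {           /* child process gets a private copy */
            counter++;
            _exit(0);
        }
        wait(NULL);
        printf("after child: %d\n", counter);   /* still 1 in the parent */
        return 0;
    }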

MICROKERNELS
A kernel is the most important part of an operating system. It manages the
resources of the system and also acts as an interface between hardware and the
computer application. A microkernel is one type of kernel. It manages all system
resources.
Microkernel in an operating system is one of the kernel’s classifications. In
a microkernel, user services and kernel services are implemented in separate
address spaces: all the user services are kept in the user address space, and all
the kernel services in the kernel address space. By doing this, the size of the
kernel, and hence the size of the operating system, is reduced.
Communication between client programs or applications and the services
running in the user address space is established through message passing, which
reduces the execution speed of the microkernel. The kernel itself provides only
minimal services, such as memory management and process scheduling.
Since user services and kernel services are isolated from each other, the
operating system remains unaffected if a user service fails, because the failure
does not affect the kernel services. The system can also be extended easily: any
new service is added to user space, with no modification needed in kernel space.
A microkernel is therefore secure, portable, and reliable.

Architecture of Microkernel OS: A kernel is the core part of an operating
system, which means that all the important services are handled by it. Because of
this, in the microkernel architecture, only the important services reside within the
kernel and the rest of the services reside within the system’s application programs.
The important services for which the microkernel is responsible are:
• Inter-process communication
• Scheduling of CPU
• Memory management


Inter-process communication: It refers to how processes interact with each other.
A process may have many threads, and threads of any process can interact with
each other through kernel space; messages are sent and received between threads
using ports.
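
The flavour of such message-based communication can be imitated in user space
with ordinary pipes (an illustrative analogy only, not a real microkernel API):
a "client" process sends a request message and a "server" process replies.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int req[2], rep[2];
        pipe(req);                          /* request channel (client -> server) */
        pipe(rep);                          /* reply channel   (server -> client) */
        if (fork() == 0) {                  /* "server" process */
            char buf[32];
            read(req[0], buf, sizeof buf);  /* block until a request message arrives */
            write(rep[1], "done", 5);       /* send the reply message */
            _exit(0);
        }
        write(req[1], "read file", 10);     /* "client" sends a request */
        char answer[8];
        read(rep[0], answer, sizeof answer);
        printf("reply: %s\n", answer);
        wait(NULL);
        return 0;
    }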

Scheduling of CPU: It refers to deciding which process will be executed next. All
the processes reside in a queue and are executed one at a time. Every process has
a priority level, and the process with the highest priority is executed first.
Scheduling helps utilize the CPU to the maximum by using resources efficiently,
and it minimizes waiting, response, and turnaround times.
Memory management: It is the process of allocating space in memory for
processes. Virtual memory is also created if the process has a size bigger than that
of the actual memory by which the memory is divided into portions and stored. All
the processes wait in memory before CPU execution.

Components of Microkernel Operating System: A microkernel contains
only the basic functions of the system. A component is included in the microkernel
if and only if putting it outside would disrupt the operation of the system;
non-essential components run in user mode. The following are some of the
functionalities of the components of the microkernel:
• Process and thread schedulers are included; the processor’s scheduling
  algorithm is also part of the microkernel.
• Address space and other memory management services are incorporated into
  the microkernel, along with their protection.
• Inter-process communication is used for managing the servers that execute in
  their own address spaces.

Example of Microkernel Operating System
Here are a few examples of microkernel operating systems:
• HelenOS
• Minix
• Horizon
• The L4 microkernel family
• Zircon

Advantages of Microkernel Operating System
• Microkernels are more secure, since only the few parts that could change the
  functionality of the system run inside the kernel.
• Microkernels are modular: the various modules can be swapped, modified, and
  reloaded without affecting the kernel.
• Better performance, since the architecture is compact and isolated.
• It is scalable, so more systems can be introduced without disturbing each other.
• New features can be added without recompiling the kernel.
• The interface of the microkernel helps enforce the modular structure.

Disadvantages of Microkernel Operating System
• A context switch or function call is needed when implementing drivers as
  procedures.
• Providing services is more costly in microkernel systems than in traditional
  monolithic systems.
• Performance may suffer because of the overhead of message passing between
  user space and kernel space.

CPU SCHEDULING
CPU scheduling is a process that allows one process to use the CPU while
the execution of another process is on hold (in a waiting state) due to the
unavailability of any resource like I/O etc, thereby making full use of the CPU. CPU
scheduling aims to make the system efficient, fast, and fair.
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed. The selection process is carried out by
the short-term scheduler (or CPU scheduler). The scheduler selects from among the
processes in memory that are ready to execute and allocates the CPU to one of
them. There are essentially four conditions under which CPU scheduling decisions
are taken:
1. If a process is making the switch from the running state to the waiting state
   (e.g., for an I/O request, or invocation of wait() for terminating one of its
   child processes)
2. If a process is making the switch from the running state to the ready state (on
the occurrence of an interrupt, for example)
3. If a process is making the switch between waiting and ready state (e.g. when
its I/O request completes)
4. If a process terminates upon completion of execution.
So, in the case of conditions 1 and 4, the CPU does not have a choice of
scheduling, if a process exists in the ready queue the CPU's response to this would
be to select it for execution. In cases 2 and 3, the CPU has a choice of selecting a
particular process for executing next. There are mainly two types of CPU
scheduling:


Non-Preemptive Scheduling: In the case of non-preemptive scheduling, new
processes are executed only after the current process has completed its execution.
The process holds the resources of the CPU (CPU time) till its state changes to
terminated or is pushed to the process waiting state. If a process is currently being
executed by the CPU, it is not interrupted till it is completed.
Once the process has completed its execution, the processor picks the next
process from the ready queue (the queue in which all processes that are ready for
execution are stored).

For example: suppose processes P2, P3, and P1 arrive at times 0, 1, and 2
respectively and are executed in the order in which they appear, with none of the
processes interrupted by another. This makes the schedule a non-preemptive, FCFS
(First Come, First Served) CPU schedule: P2 was the first process to arrive
(at time = 0) and was hence executed first; process P3 arrived next (at time = 1)
and was executed after P2 finished executing, and so on.
Some examples of non-preemptive scheduling algorithms are - Shortest Job
First (SJF, non-preemptive), and Priority scheduling (non-preemptive).

Preemptive Scheduling: Preemptive scheduling takes into consideration the
fact that some processes could have a higher priority and hence must be executed
before processes that have a lower priority.
In preemptive scheduling, the CPU resource is allocated to a process for only
a limited period and then those resources are taken back and assigned to another
process (the next in execution). If the process has yet to complete its execution, it is
placed back in the ready state, where it will remain till it gets a chance to execute
once again.
Looking again at the conditions under which CPU scheduling decisions are
taken, we can see that there isn’t a choice when it comes to conditions 1 and 4:
if we have a process in the ready queue, we must select it for execution.


However, we do have a choice in conditions 2 and 3. If we opt to perform
scheduling only when a process terminates (condition 4) or when the current
process is waiting for I/O (condition 1), then we can say that our scheduling is
non-preemptive; if we make scheduling decisions in the other conditions as well,
our scheduling process is preemptive.

SCHEDULERS
A scheduler is a software component that helps schedule the processes in an operating
system. It helps to keep all computer resources busy and allows multiple users to
share system resources effectively. Let’s go through different schedulers in an
operating system.

1. Long-term schedulers: The processes that are created are in the NEW state.
The programs are admitted to the RAM for execution. So, before execution, the
processes are put in the READY queue. So, do they get into the ready queue
themselves? Here comes the role of long-term schedulers (LTS). It is also called
a job scheduler. These schedulers select processes from secondary memory and
put them into the ready queue. LTS runs less frequently. The main aim of LTS
is to maintain the degree of multiprogramming. Multiprogramming means
executing multiple programs by a single processor. But not all processes
simultaneously. It means if one process is not executing for some reason, then
another process will get a chance to get executed. An optimal degree of
multiprogramming means that the average arrival rate of processes equals the
average departure rate of processes that finish execution and leave the system.

2. Short-term schedulers: It is also called a CPU scheduler. When the
   processes are in the ready queue, they are prepared to get executed. So the
short-term schedulers select one process from the ready queue, put it in the
running queue, and allocate a processor (CPU) to it. They are also known as the
dispatcher who decides which process will be executed next. They are faster
than long-term schedulers. The performance of the system depends on the
choice of Short-term schedulers. If it selects the processes having high burst
time, then, in that case, other processes in the waiting queue will keep on
waiting in the ready queue. This situation is called starvation.


3. Medium-term schedulers: When the process is assigned the CPU and
   program execution starts, the execution is sometimes suspended. The
reason could be an I/O request or some high-priority process. In this case,
suspended processes cannot make any progress towards completion. So the
process has to be removed from the memory and make space for other
processes. The suspended process is moved back to the secondary storage. For
example, suppose process 1 was executing, but it got suspended for some
reason, so process 1 is swapped out, and process 2 is swapped in. This means
swapping is taking place here. For doing swapping, we have a medium-term
scheduler.

SCHEDULING METHODOLOGY
In different environments, different scheduling methodologies are needed.
This situation arises because different application areas (and different kinds of
operating systems) have different goals. In other words, what the scheduler should
optimize for is not the same in all systems. Three environments are
1. Batch.
2. Interactive.
3. Real time.
Batch systems are still in widespread use in the business world for doing
payroll, inventory, accounts receivable, accounts payable, interest calculation (at
banks), claims processing (at insurance companies), and other periodic tasks. In
batch systems, there are no users impatiently waiting at their terminals for a quick
response to a short request. Consequently, non-preemptive algorithms, or
preemptive algorithms with long periods for each process, are often acceptable.
This approach reduces process switches and thus improves performance. The
batch algorithms are fairly general and often applicable to other situations as well,
which makes them worth studying, even for people not involved in corporate
mainframe computing.
In an environment with interactive users, preemption is essential to keep
one process from hogging the CPU and denying service to the others. Even if no
process intentionally ran forever, one process might shut out all the others
indefinitely due to a program bug. Preemption is needed to prevent this behaviour.
Servers also fall into this category, since they normally serve multiple (remote)
users, all of whom are in a big hurry.
In systems with real-time constraints, preemption is, oddly enough,
sometimes not needed because the processes know that they may not run for long
periods and usually do their work and block quickly. The difference with interactive
systems is that real-time systems run only programs that are intended to further
the application at hand. Interactive systems are general purpose and may run
arbitrary programs that are not cooperative or even malicious.

CPU SCHEDULING ALGORITHMS


There are different processes, and every process wants to get executed, but
all cannot be executed simultaneously; that is why we need scheduling. Since it is
CPU time that is being scheduled among processes, we call it CPU scheduling. A CPU Scheduling
Algorithm is an essential part of any operating system. Various algorithms can be
used, each with advantages and disadvantages.
The CPU scheduling algorithm is used to schedule process execution by
determining which process should be removed from execution and which process
should be executed. In the end, the main goal is to engage the CPU all the time,
which means the CPU should not be idle.
Types of Scheduling Algorithms
• Preemptive - These algorithms are based on process priority. If a low-priority
  process is executing and a higher-priority process enters, the low-priority
  process is preempted (stops running) and the high-priority process is executed
  first.
• Non-preemptive - In these algorithms, once a process is assigned the CPU, it
  executes entirely, releasing the CPU only on termination or when it switches to
  waiting. There is no preemption based on priority.

Important CPU Scheduling Terminologies: Let’s now discuss some
important terminologies that are relevant to CPU scheduling.
1. Arrival time: Arrival time (AT) is the time at which a process arrives at
the ready queue.
2. Burst Time: Burst time (BT) is the time required by the CPU to complete the
   execution of a process, i.e., the amount of CPU time a process needs. It is
   also sometimes called the execution time or running time.
3. Completion Time: As the name suggests, completion time is the time when a
process completes its execution. It is not to be confused with burst time.
4. Turn-Around Time: Also written as TAT, turn-around time is simply the
difference between completion time and arrival time (Completion time - arrival
time).
5. Waiting Time: A process's Waiting time (WT) is the difference between
turnaround time and burst time (TAT - BT), i.e., the amount of time a process
waits to get CPU resources in the ready queue.
6. Response Time: The response time (RT) of a process is the time after which
any process gets CPU resources allocated after entering the ready queue.
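
These definitions translate directly into arithmetic. A small sketch with
hypothetical values (arrival at 2, burst 5, completion at 12, first CPU
allocation at 7):

    #include <stdio.h>

    int main(void) {
        int at = 2, bt = 5, ct = 12, first_run = 7; /* hypothetical values */
        int tat = ct - at;        /* Turn-Around Time = Completion - Arrival */
        int wt  = tat - bt;       /* Waiting Time = TAT - Burst */
        int rt  = first_run - at; /* Response Time = first allocation - Arrival */
        printf("TAT=%d WT=%d RT=%d\n", tat, wt, rt); /* TAT=10 WT=5 RT=5 */
        return 0;
    }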


FCFS
The full form of FCFS Scheduling is First Come First Serve Scheduling.
FCFS Scheduling algorithm automatically executes the queued processes
and requests in the order of their arrival. It allocates the job that first arrived in the
queue to the CPU, then allocates the second one, and so on. FCFS is the simplest
and easiest CPU scheduling algorithm, managed with a FIFO queue. FIFO stands
for First In First Out. The FCFS scheduling algorithm places the arriving
processes/jobs at the very end of the queue. So, the processes that request the
CPU first get the allocation from the CPU first. As any process enters the FIFO
queue, its Process Control Block (PCB) gets linked with the queue’s tail. As the CPU
becomes free, the process at the very beginning of the queue gets assigned to it. If
the CPU starts working on a longer job, many shorter ones have to wait behind it.
The FCFS scheduling algorithm is used in most batch operating systems.
Examples of FCFS scheduling: Buying a movie ticket at the counter. This
algorithm serves people in the queue order. The first person in line buys a ticket,
then the next person, and so on. This process will continue until the last person in
line has purchased a ticket. This method mimics the CPU process.

Advantages of FCFS:
• Simplicity. Orders are fulfilled in order, simplifying scheduling and processing.
Orders are simply performed in chronological sequence.
• User friendly. Order scheduling and code writing are straightforward for team
members. Easy scheduling saves time and labour. It’s a foolproof technique that
also reduces errors.
• Easy to implement. FCFS's simplicity makes it easier to integrate into existing
systems. FCFS order scheduling can be deployed quickly and inexpensively into
any scheduling system your company uses. FCFS can be used soon after its
implementation.

Limitation of FCFS:
• Long waiting time. FCFS processes orders in order since it is non-preemptive.
This means a business order can start processing once the previous order has
been completed. A CPU-allocated process will never release it until it finishes. If
the initial order has a long burst time, orders following it must wait for
fulfilment, regardless of their burst times.
• Lower device usage. Simple FCFS is inefficient. Longer wait periods accompany
this. If the CPU is busy processing a long order, all other orders lie idle, causing
a backup. FCFS is particularly wasteful because the CPU can only process one
order at a time.
• CPU over I/O. FCFS emphasizes CPU over I/O. The algorithm is more CPU-
friendly than I/O-friendly. This may dissuade I/O system users.
Example:
Process Burst Time
P1 24
P2 3
P3 3
If the processes arrive in the order P1, P2, and P3, and are served in FCFS
order, we get the result shown in the following Gantt chart:

P1 P2 P3
0 24 27 30
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process
P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 +
27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, and P1,
however, the results will be as shown in the following Gantt chart:

P2 P3 P1
0 3 6 30
The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This
reduction is substantial. Thus, the average waiting time under an FCFS policy is
generally not minimal and may vary substantially if the processes' CPU burst times
vary greatly.
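
The waiting-time arithmetic of this example can be reproduced with a short
sketch (assuming, as above, that all three processes arrive at time 0 in the
order P1, P2, P3):

    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};            /* P1, P2, P3 in arrival order */
        int n = 3, clock = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += clock;             /* each process waits for all jobs before it */
            clock += burst[i];
        }
        printf("average waiting time = %.2f ms\n", (double)total_wait / n); /* 17.00 */
        return 0;
    }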

SJF
Shortest Job First (SJF) algorithm is also known as Shortest Job Next
(SJN) or Shortest Process Next (SPN). It is a CPU processes scheduling algorithm
that sorts and executes the process with the smallest execution time first, and then
the subsequent processes with the increased execution time. Both preemptive and
non-preemptive scheduling strategies are possible in the SJF scheduling algorithm.
In SJF, there is a significant amount of reduction in the average waiting time for
other processes that are waiting to be executed.
However, it can be quite challenging to estimate the burst time required for a
process, making it difficult to apply this technique to the operating system
scheduling process.


The burst time for a process can only be approximated or predicted. Our
approximations must be correct to get the most out of the SJF algorithm.
Numerous methods can be used to predict a process's CPU burst time.
There are two types of SJF methods:
• Non-Preemptive SJF
• Preemptive SJF
In non-preemptive scheduling, once the CPU cycle is allocated to the
process, the process holds it till it reaches a waiting state or is terminated.
In Preemptive SJF Scheduling, jobs are put into the ready queue as they
come. A process with the shortest burst time begins execution. If a process with
even a shorter burst time arrives, the current process is removed or preempted
from execution, and the shorter job is allocated a CPU cycle.

Advantages of SJF: These are some of the Advantages of the SJF algorithm:
• Shortest Job First (SJF) has a shorter average waiting time as compared to the
First Come First Serve (FCFS) algorithm.
• SJF can be applied to long-term scheduling.
• SJF is ideal for jobs that run in batches and whose run times are known.
• SJF is probably the best concerning the average turnaround time of a process.

Disadvantages of SJF: These are some of the disadvantages of the SJF
algorithm:
• The Shortest Job First algorithm may result in a starvation problem with
extremely long turnaround times.
• In SJF, job burst time must be predetermined, although it might be difficult to
predict it.
• As we are unable to estimate the duration of the upcoming CPU process burst
time, we cannot utilize SJF for short-term CPU scheduling.
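
A minimal sketch of non-preemptive SJF, assuming all jobs arrive at time 0 with
known (hypothetical) burst times, so scheduling reduces to sorting by burst time:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int burst[] = {6, 8, 7, 3};              /* hypothetical burst times */
        int n = 4, clock = 0, total_wait = 0;
        qsort(burst, n, sizeof burst[0], cmp);   /* shortest job first */
        for (int i = 0; i < n; i++) {
            total_wait += clock;
            clock += burst[i];
        }
        printf("average waiting time = %.2f\n",
               (double)total_wait / n);          /* (0+3+9+16)/4 = 7.00 */
        return 0;
    }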

RR
The Round-robin scheduling algorithm is a kind of preemptive first-come,
first-served CPU Scheduling algorithm in which each process in the ready state
gets the CPU for a fixed time cyclically (turn by turn). It is the oldest scheduling
algorithm and is mainly used for multitasking.
The round-robin scheduling algorithm is one of the CPU scheduling
algorithms in which every process gets a fixed amount of time to execute.
In this algorithm, every process gets executed cyclically. This means that
processes that have their burst time remaining after the expiration of the time
quantum are sent back to the ready state and wait for their next turn to complete
the execution until it terminates. This processing is done in FIFO order, suggesting
that processes are executed on a first-come, first-served basis.

Working of Round Robin Algorithm:
1. All the processes are added to the ready queue.
2. At first, the burst time of every process is compared to the time quantum of the
CPU.
3. If the burst time of the process is less than or equal to the time quantum in the
round-robin scheduling algorithm, the process is executed to its burst time.
4. If the burst time of the process is greater than the time quantum, the process is
executed up to the time quantum (TQ).
5. When the time quantum expires, it checks if the process is executed completely
or not.
6. On completion, the process terminates. Otherwise, it goes back again to
the ready state.

Advantages
1. This round-robin algorithm offers starvation-free execution of processes.
2. Each process gets equal priority and fair allocation of CPU.
3. Round Robin scheduling algorithm enables the Context switching method to
save the states of preempted processes.
4. It is easily implementable on the system because round-robin scheduling in
OS doesn’t depend upon burst time.

Disadvantages
1. The waiting and response times are higher due to the short time slot.
2. Lower time quantum results in higher context switching.
3. We cannot set any special priority for the processes.

Example: Consider the following set of processes that arrive at time 0, with the
length of the CPU burst given in milliseconds:

Process Burst Time

P1 24

P2 3

P3 3

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4
milliseconds. Since it requires another 20 milliseconds, it is preempted after the
first time quantum, and the CPU is given to the next process in the queue, process
P2. Since process P2 does not need 4 milliseconds, it quits before its time quantum
expires. The CPU is then given to the next process, process P3. Once each process
has received 1 time quantum, the CPU is returned to process P1 for an additional
time quantum. The resulting RR schedule is

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

The average waiting time is 17/3 = 5.66 milliseconds. In the RR scheduling
algorithm, no process is allocated the CPU for more than 1 time quantum in a row
(unless it is the only runnable process). If a process's CPU burst exceeds 1 time
quantum, that process is pre-empted and is put back in the ready queue. The RR
scheduling algorithm is thus pre-emptive.
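
The schedule above can be reproduced with a small simulation (a sketch that
assumes all processes arrive at time 0, so a simple array sweep matches the FIFO
ready-queue order):

    #include <stdio.h>

    int main(void) {
        int remaining[] = {24, 3, 3};            /* P1, P2, P3 */
        int wait[3] = {0}, n = 3, quantum = 4, done = 0;
        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                for (int j = 0; j < n; j++)      /* everyone else still pending waits */
                    if (j != i && remaining[j] > 0) wait[j] += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0) done++;
            }
        }
        printf("waits: P1=%d P2=%d P3=%d avg=%.2f\n",
               wait[0], wait[1], wait[2],
               (wait[0] + wait[1] + wait[2]) / 3.0);  /* 6, 4, 7 -> 5.67 */
        return 0;
    }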

PRIORITY SCHEDULING
Priority scheduling in OS is the scheduling algorithm that schedules
processes according to the priority assigned to each process. Higher-priority
processes are executed before lower-priority processes.
In priority scheduling in OS, processes are executed based on their priority.
The jobs/processes with higher priority are executed first. Naturally, you might
want to know how the priority of processes is decided. Priority of processes depends
on some factors such as:
• Time limit
• Memory requirements of the process
• Ratio of average I/O to average CPU burst time
There can be more factors based on which the priority of a process/job is
determined. This priority is assigned to the processes by the scheduler.
These priorities of processes are represented as simple integers in a fixed range
such as 0 to 7, or maybe 0 to 4095. These numbers depend on the type of system.

Types of Priority Scheduling: There are two types of priority scheduling
algorithms in OS:
Non-Preemptive Scheduling: In this type of scheduling, if during the execution
of a process, another process with a higher priority arrives for execution, even then
the currently executing process will not be disturbed. The newly arrived high-
priority process will be put in next for execution since it has higher priority than
the processes that are waiting for execution. All the other processes will remain in
the waiting queue to be processed. Once the execution of the current process is
done, the high-priority process will be given the CPU for execution.
Preemptive Scheduling: Preemptive scheduling, as opposed to non-preemptive
scheduling, will preempt (stop and save the state of) the currently running
process if a higher-priority process arrives for execution, execute the
higher-priority process first, and then resume executing the previous process.

Advantages:
• High-priority processes do not have to wait for their chance to be executed
  because of a currently running lower-priority process.
• We can define the relative importance/priority of processes.
• It is useful for applications in which the requirements of time and resources
  fluctuate.

Disadvantages:
• Since high-priority processes are always executed first, processes that have a
  low priority can suffer starvation. Starvation is the phenomenon in which a
  process gets indefinitely postponed because the resources it requires are never
  allocated to it, since other processes are executed before it.
• If the system eventually crashes, all of the low-priority processes that are
  still waiting will be lost, since they are held in RAM.
Example: Consider the following set of processes, assumed to have arrived at
time 0 in the order P1, P2, ..., P5, with the length of the CPU burst given
in milliseconds:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Using priority scheduling, we would schedule these processes according to
the following Gantt chart:

P2 P5 P1 P3 P4
0 1 6 16 18 19
The average waiting time is 8.2 milliseconds.
Priorities can be defined either internally or externally. Internally defined
priorities use some measurable quantity or quantities to compute the priority of a
process. For example, time limits, memory requirements, the number of open files,
and the ratio of average I/O burst to average CPU burst have been used in
computing priorities. External priorities are set by criteria outside the operating
system, such as the importance of the process, the type and amount of funds being
paid for computer use, the department sponsoring the work, and other, often
political, factors.
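
The Gantt chart above follows from a simple selection loop. A sketch of
non-preemptive priority scheduling for this example (all arrivals at time 0;
a lower number means a higher priority):

    #include <stdio.h>

    int main(void) {
        int burst[]    = {10, 1, 2, 1, 5};   /* P1..P5 */
        int priority[] = { 3, 1, 4, 5, 2};
        int n = 5, clock = 0, total_wait = 0, used[5] = {0};
        for (int k = 0; k < n; k++) {
            int best = -1;                   /* pick the highest-priority remaining job */
            for (int i = 0; i < n; i++)
                if (!used[i] && (best < 0 || priority[i] < priority[best]))
                    best = i;
            used[best] = 1;
            total_wait += clock;             /* waiting time = start time here */
            clock += burst[best];
        }
        printf("average waiting time = %.1f ms\n",
               (double)total_wait / n);      /* 8.2 */
        return 0;
    }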

SRTF SCHEDULING ALGORITHM


The Pre-emptive version of Shortest Job First (SJF) scheduling is known as
Shortest Remaining Time First (SRTF). With the SRTF algorithm's help, the process
with the smallest amount of time remaining until completion is selected first to
execute. So basically, in SRTF, the processes are scheduled according to the
shortest remaining time.
However, the SRTF algorithm involves more overhead than Shortest Job
First (SJF) scheduling, because in SRTF the OS is frequently required to monitor
the CPU time of the jobs in the READY queue and to perform context switching.
In the SRTF scheduling algorithm, the execution of any process can be
stopped after a certain amount of time. On arrival of every process, the short-term
scheduler schedules those processes from the list of available processes & running
processes that have the least remaining burst time.


Once all the processes are available in the ready queue, no further preemption is
done and the algorithm works the same as SJF scheduling. The context of a process
is saved in its Process Control Block when the process is removed from execution
and the next process is scheduled; the PCB is accessed again on the next execution
of that process.
Advantages of SRTF: The main advantage of the SRTF algorithm is that it makes
the processing of jobs faster than the SJF algorithm, provided its overhead is not
counted.
Disadvantages of SRTF: In SRTF, context switching is done many more times than
in SJF, consuming more of the CPU's valuable time. This consumed time adds to the
processing time and diminishes the algorithm's fast-processing advantage.
Example: Consider three processes with arrival times and burst times as follows:
P1 (arrival 0, burst 7), P2 (arrival 1, burst 3), and P3 (arrival 3, burst 4).
At the 0th time unit, there is only one process, P1, so P1 gets executed for 1
time unit. At the 1st unit, process P2 arrives. Now P1 needs 6 more units to
complete, while P2 needs only 3 units, so P2 is executed first by preempting P1.
At the 3rd unit, process P3 arrives; its burst time of 4 units is more than the
remaining time of P2 (1 unit), so P2 continues its execution and completes at time
4. Now P3 needs 4 units for completion while P1 still needs 6 units, so the
algorithm picks P3 over P1 because its remaining time is less. P3 completes at
time unit 8; since no new processes have arrived, P1 is then sent for execution
and completes at the 14th unit.
From these arrival, burst, and completion times, the turnaround time and the
waiting time of each process can be calculated.
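
A unit-by-unit simulation reproduces this walkthrough (a sketch using the
arrival and burst times stated above):

    #include <stdio.h>

    int main(void) {
        int arrival[]   = {0, 1, 3};         /* P1, P2, P3 */
        int remaining[] = {7, 3, 4};
        int completion[3] = {0}, n = 3, done = 0;
        for (int t = 0; done < n; t++) {
            int pick = -1;                   /* arrived process with least remaining time */
            for (int i = 0; i < n; i++)
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (pick < 0 || remaining[i] < remaining[pick]))
                    pick = i;
            if (pick < 0) continue;          /* no process has arrived yet: CPU idle */
            if (--remaining[pick] == 0) {
                completion[pick] = t + 1;
                done++;
            }
        }
        for (int i = 0; i < n; i++)          /* prints 14, 4, 8 as in the walkthrough */
            printf("P%d completes at %d\n", i + 1, completion[i]);
        return 0;
    }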

FRAGMENTATION
Fragmentation is an unwanted problem that occurs in an operating system when
processes are loaded into and unloaded from memory, causing the free memory
space to become broken into small pieces. As a result, processes cannot be
assigned to memory blocks because the remaining free blocks are too small.


In operating systems, fragmentation is a phenomenon that impacts storage
space efficiency, impeding both capacity and performance. Fragmentation usually
arises when blocks of storage space are scattered, leading to potential wastage.

Causes of Fragmentation: At its core, fragmentation is caused by the
dynamic allocation and deallocation of memory during a program's execution.
External fragmentation occurs when a process is removed from memory, leaving a
'hole' of unused memory. If the hole is too small or poorly located to accommodate
subsequent processes, fragmentation occurs.
Internal fragmentation, on the other hand, results from allocating memory in
fixed block sizes. If a process does not fully utilize its allocated block, the remaining
memory is wasted. This is essentially space reserved for a specific process but left
unused, creating inefficiency within the system.

Types of Fragmentation: Fragmentation is a condition in which memory is
allocated but not used efficiently. Below are the most common types of
fragmentation:

Internal Fragmentation: Internal fragmentation occurs when the memory blocks
allocated to processes exceed what the processes initially requested. The leftover
space within a block, which remains unused, gives rise to internal fragmentation.
Internal fragmentation is a classic case of over-allocation, where system
resources are wasted within allocated blocks. While attempts to prevent under-
allocation and the associated performance issues are well-intentioned,
overcompensation can lead to its own set of problems. Consider this simple
representation of internal fragmentation:

Process Memory Requested Memory Allocated

P1 10 units 15 units

P2 20 units 25 units

P3 30 units 35 units

Here, each process is allocated more memory than it requested, leading to
wasted space and internal fragmentation.
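
For the table above, the wasted memory can be totalled directly (a small sketch;
internal fragmentation is simply allocated minus requested, summed over the
processes):

    #include <stdio.h>

    int main(void) {
        int requested[] = {10, 20, 30};      /* P1, P2, P3 */
        int allocated[] = {15, 25, 35};
        int n = 3, wasted = 0;
        for (int i = 0; i < n; i++)
            wasted += allocated[i] - requested[i]; /* unused space inside each block */
        printf("internal fragmentation = %d units\n", wasted); /* 15 units */
        return 0;
    }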

External Fragmentation: External fragmentation arises when free memory
blocks in a system become separated and non-contiguous. This typically happens
when memory blocks, once allocated, are freed up, leading to 'holes' of unused
memory spread across the system.

The issue is that these 'holes' or blocks may not be large enough to satisfy
subsequent allocation requests, despite collectively having sufficient space.
Consequently, the system is unable to use this memory effectively, leading to
wasted resources and decreased efficiency. Consider this simple representation of
external fragmentation:

Memory Blocks State

Block 1 Used

Block 2 Free

Block 3 Used

Block 4 Free

Block 5 Used

Here, although there is free memory (Blocks 2 and 4), it is not contiguous,
resulting in external fragmentation.

REVIEW QUESTIONS
1. What is an Operating System? Explain the different functions of the operating
system.
2. Explain the following terms: (1) Process (2) Creation and termination
operation of process.
3. What is multithreading? Explain with a suitable example.
4. What is scheduling? Explain the SJF (Shortest-Job-First) algorithm.
5. What is a process thread? Explain.
6. What is a thread? Explain the concept of multithreading with a suitable
example.
7. Explain schedulers and types of schedulers in detail.
8. Describe the Round Robin CPU scheduling algorithm.
9. What is Micro Kernel? Explain its architecture and benefits.
10. Explain the structure of the Operating System.


11. What is Process? Explain different process states.
12. Explain: (i) Concurrent Process (ii) Multithreading.
13. Explain the FCFS (First Come First Served) CPU Scheduling algorithm with an
example.
14. What are the differences between process and threads? Explain process states
along with a diagram.
15. What are Threads? Explain its Life Cycle.
16. Explain FCFS, SRTF, and Round Robin CPU scheduling algorithms with proper
examples.
17. Explain Internal and External Fragmentation.
18. Explain: (i) User level thread (ii) Kernel level thread
19. Explain the SRTF CPU scheduling algorithm with an example.
20. List and explain the characteristics of modern operating systems.
21. Draw and explain the life cycle of the thread
22. What is microkernel? Explain its architecture and benefits.
23. Write short notes on (i) Multiprogramming and (ii) Time-sharing system.
