OPERATING SYSTEM UNIT -1 NOTES


OPERATING SYSTEM

UNIT - I Introduction: Views - Types of System - OS Structure - Operations - Services -
Interface - System Calls - System Structure - System Design and Implementation. Process
Management: Process - Process Scheduling - Inter-process Communication. CPU Scheduling:
CPU Schedulers - Scheduling Criteria - Scheduling Algorithms.

Introduction:
 An operating system acts as an intermediary between the user of a computer and
computer hardware. The purpose of an operating system is to provide an environment in
which a user can execute programs conveniently and efficiently.
 An operating system is software that manages computer hardware. The hardware must
provide appropriate mechanisms to ensure the correct operation of the computer system
and to prevent user programs from interfering with the proper operation of the system.
 A more common definition is that the operating system is the one program running at all
times on the computer (usually called the kernel), with all else being application
programs.

Views of Operating System:


 An operating system is a framework that enables user application programs to interact
with system hardware.
 The operating system does not perform any functions on its own, but it provides an
atmosphere in which various apps and programs can do useful work.
Viewpoints of Operating System:
 The operating system may be observed from the viewpoint of the user or the system. It is
known as the user view and the system view. There are mainly two types of views of the
operating system.
 These are as follows:
1. User View
2. System View
1. User View
 The user view depends on the system interface that is used by the users. Some systems
are designed for a single user to monopolize the resources in order to maximize that
user's work. In these cases, the OS is designed primarily for ease of use, with some
attention paid to performance and little to resource utilization.
 The user viewpoint focuses on how the user interacts with the operating system through
the usage of various application programs. In contrast, the system viewpoint focuses on
how the hardware interacts with the operating system to complete various tasks.
1. Single User View Point
Most computer users operate their systems through a monitor, keyboard, mouse, printer,
and other accessories. In some cases, the system is designed to maximize the output of a
single user. As a result, more attention is paid to ease of use, and resource allocation
is less important. These systems are designed for a single-user experience, where
performance is not as much of a focus as it is in multi-user systems.
2. Multiple User View Point
Another example of the user view is one mainframe computer with many users, each on
their own terminal, interacting with the mainframe and, through it, with each other.
In such circumstances, CPU time and memory must be allocated effectively to give
every user a good experience. The client-server architecture is another good example,
where many clients interact through a remote server, and the same constraints on
effective use of server resources arise.
3. Handheld User View Point
The touch-screen era has produced powerful handheld devices. Users interact with them
via touch and wireless interfaces to perform numerous operations, but these interfaces
are not as capable as a full computer interface, which limits their usefulness. Even so,
a handheld operating system is a good example of a device designed around the user's
point of view.

4. Embedded System User View Point
Some systems, like embedded systems, have little or no user point of view. The remote
control used to turn a TV on or off is part of an embedded system in which one
electronic device communicates with another program; the user viewpoint is limited to
the narrow ways the user can engage with the device.

2. System View
 The OS may also be viewed as just a resource allocator. A computer system comprises
various sources, such as hardware and software, which must be managed effectively. The
operating system manages the resources, decides between competing demands, controls
the program execution, etc. According to this point of view, the operating system's
purpose is to maximize performance.
 The operating system is responsible for managing hardware resources and allocating
them to programs and users to ensure maximum performance.
1. Resource Allocation
The hardware contains several resources such as registers, caches, RAM, ROM, CPUs, and
I/O devices. The operating system allocates these resources when application programs
demand them, and only the operating system can do so. It uses several tactics and
strategies to maximize processing and memory space, including paging, virtual memory,
caching, and so on.
2. Control Program
The control program controls how input and output devices (hardware) interact with the
operating system. A user may request an action that can only be done with an I/O device;
in this case, the operating system must also properly communicate with, control, detect,
and handle such devices.
Types of Operating Systems (OS)
An operating system is a well-organized collection of programs that manages the computer
hardware. It is a type of system software that is responsible for the smooth functioning of the
computer system.

1. Batch Operating System

Batch processing was very popular in early systems. In this technique, similar types of jobs
were batched together and executed as a group. People used to have a single computer, which
was called a mainframe.

In a batch operating system, access is given to more than one person; they submit their
respective jobs to the system for execution.

The system puts all of the jobs in a queue on a first-come, first-served basis and then
executes them one by one. The users collect their respective output once all the jobs have
been executed.
The purpose of this operating system was mainly to transfer control from one job to another
as soon as a job was completed. It contained a small set of programs called the resident
monitor that always resided in one part of main memory; the remaining part was used for
servicing jobs.

Advantages of Batch OS
o The use of a resident monitor improves computer efficiency as it eliminates CPU idle
time between two jobs.

Disadvantages of Batch OS

1. Starvation

Batch processing suffers from starvation.

For Example:

There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very
high, then the other four jobs will never be executed, or they will have to wait for a very long
time. Hence the other processes get starved.
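The starvation effect above can be sketched numerically. The burst times below are hypothetical, chosen so that J1 dominates the queue; under first-come-first-served, every later job waits for the whole of J1:

```python
def fcfs_waiting_times(bursts):
    """Under FCFS, each job waits for the total burst time of all earlier jobs."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Hypothetical batch: J1 is very long, J2-J5 are short.
bursts = [100, 2, 2, 2, 2]
print(fcfs_waiting_times(bursts))   # [0, 100, 102, 104, 106]
```

Jobs J2 through J5 need only 2 units each, yet every one of them waits at least 100 units behind J1, which is the starvation the example describes.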

2. Not Interactive

Batch processing is not suitable for jobs that depend on the user's input. If a job
requires the input of two numbers from the console, it will never get them in a
batch-processing scenario, since the user is not present at the time of execution.

2. Multiprogramming Operating System

Multiprogramming is an extension to batch processing where the CPU is always kept busy. Each
process needs two types of system time: CPU time and IO time.
In a multiprogramming environment, when a process performs its I/O, the CPU can start the
execution of other processes. Therefore, multiprogramming improves the efficiency of the
system.

Advantages of Multiprogramming OS
o Throughput of the system increases, as the CPU always has one program to execute.
o Response time can also be reduced.

Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various systems resources
are used efficiently, but they do not provide any user interaction with the computer
system.

3. Multiprocessing Operating System

In multiprocessing, parallel computing is achieved. More than one processor is present in
the system, so more than one process can be executed at the same time, which increases the
throughput of the system.
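As a sketch of how work can be spread across processors, Python's multiprocessing module starts several worker processes; the square function here is just a stand-in for any CPU-bound job:

```python
from multiprocessing import Pool

def square(n):
    """Stand-in for any CPU-bound job."""
    return n * n

if __name__ == "__main__":
    # Up to four worker processes run the jobs in parallel.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)   # [0, 1, 4, 9, 16, 25, 36, 49]
```

With several processors available, the eight jobs are divided among the workers instead of running one after another on a single CPU.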

Advantages of Multiprocessing operating system:

o Increased reliability: Processing tasks can be distributed among several processors.
This increases reliability: if one processor fails, the task can be given to another
processor for completion.
o Increased throughput: With several processors, more work can be done in less time.

Disadvantages of Multiprocessing operating System

o A multiprocessing operating system is more complex and sophisticated, as it takes care
of multiple CPUs simultaneously.

4. Multitasking Operating System


The multitasking operating system is a logical extension of a multiprogramming system that
enables multiple programs to run simultaneously. It allows a user to perform more than one
computer task at the same time.

Advantages of Multitasking operating system


o This operating system is more suited to supporting multiple users simultaneously.
o The multitasking operating systems have well-defined memory management.

Disadvantages of Multitasking operating system


o Multiple processes are kept busy at the same time to complete tasks in a multitasking
environment, so the CPU generates more heat.

5. Real Time Operating System

In real-time systems, each job carries a deadline within which it is supposed to be
completed; otherwise, a huge loss occurs, or, even if the result is produced, it is
completely useless.

Real-time systems are used in cases such as military applications: if you want to launch a
missile, the missile must be launched with a certain precision.
Advantages of Real-time operating system:
o It is easy to lay out, develop, and execute real-time applications under a real-time
operating system.
o A real-time operating system achieves maximum utilization of devices and system resources.

Disadvantages of Real-time operating system:


o Real-time operating systems are very costly to develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.

6. Time-Sharing Operating System

In the Time Sharing operating system, computer resources are allocated in a time-dependent
fashion to several programs simultaneously. Thus it helps provide a large number of users
with direct access to the main computer.

It is a logical extension of multiprogramming. In time-sharing, the CPU is switched among
multiple programs given by different users on a scheduled basis.
A time-sharing operating system allows many users to be served simultaneously, so sophisticated
CPU scheduling schemes and Input/output management are required.

Time-sharing operating systems are very difficult and expensive to build.

Advantages of Time Sharing Operating System


o The time-sharing operating system provides effective utilization and sharing of resources.
o This system reduces CPU idle and response time.

Disadvantages of Time Sharing Operating System


o It requires very high data transmission rates in comparison to other methods.
o Security and integrity of user programs loaded in memory and data need to be maintained
as many users access the system at the same time.

Operating System Structure

o An operating system is a design that enables user application programs to communicate
with the hardware of the machine.
o The operating system should be built with the utmost care because it is such a
complicated structure; it should be simple to use and modify.
1. Simple Structure
2. Monolithic Structure
3. Layered Approach Structure
4. Micro-Kernel Structure
5. Exo-Kernel Structure
6. Virtual Machines

Simple Structure

o It is the most straightforward operating system structure, but it lacks definition and is
only appropriate for use with small and restricted systems. Since the interfaces and
levels of functionality in this structure are not well separated, application programs are
able to access basic I/O routines, which may result in unauthorized access to I/O procedures.
This organizational structure is used by the MS-DOS operating system:

o There are four layers that make up the MS-DOS operating system, and each has its own
set of features.
o These layers include ROM BIOS device drivers, MS-DOS device drivers, application
programs, and system programs.
o When a user program fails, the operating system as a whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs and I/O procedures
are visible to end users, giving them the potential for unwanted access.

The following figure illustrates layering in simple structure:

Advantages of Simple Structure:

o Because there are only a few interfaces and levels, it is simple to develop.
o Because there are fewer layers between the hardware and the applications, it offers
superior performance.

Disadvantages of Simple Structure:

o The entire operating system breaks if just one user program malfunctions.
o Since the layers are interconnected, and in communication with one another, there is no
abstraction or data hiding.
o The operating system's operations are accessible to layers, which can result in data
tampering and system failure.
MONOLITHIC STRUCTURE

The monolithic operating system controls all aspects of the operating system's operation,
including file management, memory management, device management, and operational
operations.

The core of an operating system is called the kernel. The kernel provides all other system
components with fundamental services and is the main interface between the operating system
and the hardware. Because a monolithic operating system is built as a single piece of
software, the kernel can directly access all of the system's resources.

The following diagram represents the monolithic structure:

Advantages of Monolithic Structure:

o Because layering is unnecessary and the kernel alone is responsible for managing all
operations, it is easy to design and implement.
o Because functions like memory management, file management, and process scheduling are
implemented in the same address space, the monolithic kernel runs rather quickly compared
to other structures. Utilizing the same address space speeds up and reduces the time
required for address allocation for new processes.

Disadvantages of Monolithic Structure:

o The monolithic kernel's services are interconnected in address space and have an impact
on one another, so if any of them malfunctions, the entire system does as well.
o It is not adaptable. Therefore, launching a new service is difficult.
LAYERED STRUCTURE

The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest
layer) is the hardware, and the highest layer is the user interface; each layer uses only
the functions and services of the layers below it.
The image below shows how OS is organized into layers:

Advantages of Layered Structure:

o Work duties are separated since each layer has its own functionality, and there is some
amount of abstraction.
o Debugging is simpler because the lower layers are examined first, followed by the top
layers.

Disadvantages of Layered Structure:

o Performance is compromised in layered structures due to layering.


o Construction of the layers requires careful design because upper layers only make use of
lower layers' capabilities.

MICRO-KERNEL STRUCTURE

The operating system is created using a micro-kernel framework that strips the kernel of any
unnecessary parts.

Each Micro-Kernel is created separately and is kept apart from the others. As a result, the system
is now more trustworthy and secure. If one Micro-Kernel malfunctions, the remaining operating
system is unaffected and continues to function normally.
The image below shows Micro-Kernel Operating System Structure:

Advantages of Micro-Kernel Structure:

o It enables portability of the operating system across platforms.


o Due to the isolation of each Micro-Kernel, it is reliable and secure.
o The reduced size of Micro-Kernels allows for successful testing.
o The remaining operating system remains unaffected and keeps running properly even if a
component or Micro-Kernel fails.

Disadvantages of Micro-Kernel Structure:

o The performance of the system is decreased by increased inter-module communication.


o The construction of a system is complicated.

EXOKERNEL

An operating system called Exokernel was created at MIT with the goal of offering application-
level management of hardware resources. The exokernel architecture's goal is to enable
application-specific customization by separating resource management from protection.
Exokernel size tends to be minimal due to its limited operability.

Exokernel operating systems have a number of features, including:

o Enhanced application control support.


o Splits management and security apart.
o A secure transfer of abstractions is made to an unreliable library operating system.
o Brings up a low-level interface.
o Library operating systems provide compatibility and portability.

Advantages of Exokernel Structure:

o Application performance is enhanced by it.


o Accurate resource allocation and revocation enable more effective utilisation of hardware
resources.
o New operating systems can be tested and developed more easily.
o Every user-space program is permitted to utilise its own customised memory
management.

Disadvantages of Exokernel Structure:

o A decline in consistency
o Exokernel interfaces have a complex architecture.

VIRTUAL MACHINES (VMs)

The hardware of our personal computer, including the CPU, disc drives, RAM, and NIC
(Network Interface Card), is abstracted by a virtual machine into a variety of various execution
contexts based on our needs, giving us the impression that each execution environment is a
separate computer. VirtualBox is an example of such software.

Using CPU scheduling and virtual memory techniques, an operating system allows us to execute
multiple processes simultaneously while giving the impression that each one is using a separate
processor and virtual memory.

Advantages of Virtual Machines:

o Due to total isolation between each virtual machine and every other virtual machine,
there are no issues with security.
o Simple availability, accessibility, and recovery convenience.

Disadvantages of Virtual Machines:

o Depending on the workload, operating numerous virtual machines simultaneously on a


host computer may have an adverse effect on one of them.
o When it comes to hardware access, virtual computers are less effective than physical
ones.
Operating System Services:
 The operating system is a set of special programs that run on a computer system that
allows it to work properly. It controls input-output devices, execution of programs,
managing files, etc.

Services of Operating System


1. Program execution
2. Input/Output Operations
3. Communication between Process
4. File Management
5. Memory Management
6. Process Management
7. Security and Privacy
8. Resource Management
9. User Interface
10. Networking
11. Error handling
12. Time Management
Program Execution
It is the Operating System that manages how a program is going to be executed. It loads the
program into the memory after which it is executed. The order in which they are executed
depends on the CPU Scheduling Algorithms. A few are FCFS, SJF, etc.
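As a sketch of what the scheduling algorithm changes, the average waiting time can be computed for FCFS versus (non-preemptive) SJF ordering; the burst times below are hypothetical:

```python
def avg_waiting_time(bursts):
    """Average waiting time when jobs run in the given order."""
    total_wait, elapsed = 0, 0
    for burst in bursts:
        total_wait += elapsed   # this job waited for all earlier jobs
        elapsed += burst
    return total_wait / len(bursts)

bursts = [24, 3, 3]                          # hypothetical CPU bursts
print(avg_waiting_time(bursts))              # FCFS (arrival order): 17.0
print(avg_waiting_time(sorted(bursts)))      # SJF (shortest first):  3.0
```

Simply running the short jobs first drops the average wait from 17 to 3 time units, which is why the choice of scheduling algorithm matters.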
Input Output Operations
Operating System manages the input-output operations and establishes communication between
the user and device drivers. Device drivers are software that is associated with hardware that is
being managed by the OS so that the sync between the devices works properly. It also provides
access to input-output devices to a program when needed.
Communication between Processes
The Operating system manages the communication between processes. Communication between
processes includes data transfer among them. If the processes are not on the same computer but
connected through a computer network, then also their communication is managed by the
Operating System itself.
File Management
The operating system helps in managing files as well. If a program needs access to a file,
it is the operating system that grants access; permissions include read-only, read-write,
etc. It also provides a platform for the user to create and delete files. The Operating
System is responsible for making decisions regarding the storage of all types of data or
files (e.g., on a floppy disk, hard disk, or pen drive) and decides how the data should be
manipulated and stored.
Memory Management
Let’s understand memory management by the OS in a simple way.
The OS first checks whether the upcoming program fulfils all the requirements to get memory
space. If so, it checks how much memory space will be sufficient for the program and then
loads the program into memory at a certain location. Thus, it prevents programs from using
unnecessary memory.
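The admission check described above can be sketched as a first-fit search over free memory blocks; the block sizes here are hypothetical:

```python
def first_fit(free_blocks, needed):
    """Return the index of the first free block large enough, or None."""
    for i, size in enumerate(free_blocks):
        if size >= needed:
            return i
    return None     # no block is big enough: the program cannot be loaded yet

free_blocks = [50, 200, 70, 115]    # hypothetical free-partition sizes in KB
print(first_fit(free_blocks, 100))  # 1: the 200 KB block is the first that fits
print(first_fit(free_blocks, 500))  # None: the request is rejected
```

Real allocators are more elaborate (best-fit, paging, virtual memory), but the decision shown here — check the requirement, then pick a location — is the same.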
Process Management
Let’s understand process management in a unique way. Imagine our kitchen stove as the CPU,
where all cooking (execution) really happens, and the chef as the OS, who uses the kitchen
stove (CPU) to cook different dishes (programs). The chef (OS) has to cook different dishes
(programs), so he ensures that no particular dish (program) takes an unnecessarily long
time and that all dishes (programs) get a chance to be cooked (executed). The chef (OS)
basically schedules time for all dishes (programs) so that the kitchen (the whole system)
runs smoothly, and thus cooks (executes) all the different dishes (programs) efficiently.
Security and Privacy
 Security: The OS keeps our computer safe from unauthorized users by adding a security
layer to it. Security is a layer of protection that shields the computer from bad actors
like viruses and hackers. The OS provides defenses like firewalls and anti-virus software
and ensures the safety of the computer and personal information.
 Privacy: The OS gives us the facility to keep our essential information hidden, like
having a lock on our door where only we can enter. Basically, it respects our secrets and
provides the facility to keep them safe.
Resource Management
System resources are shared between various processes. It is the Operating system that manages
resource sharing. It also manages the CPU time among processes using CPU Scheduling
Algorithms. It also helps in the memory management of the system. It also controls input-output
devices. The OS also ensures the proper use of all the resources available by deciding which
resource to be used by whom.
User Interface
User interface is essential, and all operating systems provide it. Users interact with the
operating system through either a command-line interface (CLI) or a graphical user
interface (GUI). The command interpreter executes the next user-specified command.
A GUI offers the user a mouse-based window and menu system as an interface.
Networking
This service enables communication between devices on a network, such as connecting to the
internet, sending and receiving data packets, and managing network connections.
Error Handling
The Operating System also handles errors occurring in the CPU, in input-output devices,
etc. It ensures that errors do not occur frequently, fixes them, and prevents processes
from reaching a deadlock. It also looks for any type of error or bug that can occur while
performing any task. A well-secured OS can additionally act as a countermeasure against
breaches of the computer system from external sources and handle them.
Time Management
Imagine a traffic light as the OS, which indicates to all the cars (programs) whether they
should stop (red => waiting queue), get ready (yellow => ready queue), or move (green =>
under execution). The light (control) changes after a certain interval of time on each side
of the road (computer system) so that the cars (programs) from all sides move smoothly
without congestion.
What is a System Call?

A system call is an interface between a program running in user space and the operating
system (OS). Application programs use system calls to request services and functionalities from
the OS's kernel. This mechanism allows the program to call for a service, like reading from a file,
without accessing system resources directly.

When a program invokes a system call, the execution context switches from user to kernel mode,
allowing the system to access hardware and perform the required operations safely. After the
operation is completed, the control returns to user mode, and the program continues its
execution.

This layered approach, facilitated by system calls:

 Ensures that hardware resources are isolated from user space processes.
 Prevents direct access to the kernel or hardware memory.
 Allows application code to run across different hardware architectures.

System calls serve several important functions, which include:

 User-Kernel Boundary. System calls serve as the authorized gateway for user programs
when requesting services from the kernel. They ensure that user programs cannot
arbitrarily access kernel functions or critical system resources.
 Resource Management. User programs can request and manage vital resources like
CPU time, memory, and file storage via system calls. The OS oversees the process and
guarantees that it is completed in an organized manner.
 Streamlined Development. System calls abstract the complexities of hardware. This
allows developers to perform operations like reading and writing to a file or managing
network data without needing to write hardware-specific code.
 Security and Access Control. System calls implement checks to ensure that requests
made by user programs are valid and that the programs have the necessary permissions to
perform the requested operations.
 Inter-Process Communication (IPC). System calls provide the mechanisms for
processes to communicate with each other. They offer features like pipes, message
queues, and shared memory to facilitate this inter-process communication.
 Network Operations. System calls provide the framework for network communications
between programs. Developers can devote their attention to building their application's
logic instead of focusing on low-level network programming.
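One of the IPC features mentioned above, the pipe, can be sketched with the POSIX calls exposed through Python's os module (POSIX systems only; the message is arbitrary):

```python
import os

read_fd, write_fd = os.pipe()            # pipe() system call: two connected fds
pid = os.fork()                          # fork() system call: create a child
if pid == 0:
    # Child process: read the parent's message from the pipe.
    os.close(write_fd)
    data = os.read(read_fd, 64)          # read() blocks until data arrives
    os.close(read_fd)
    os._exit(0 if data == b"hello" else 1)
else:
    # Parent process: send a message, then wait for the child.
    os.close(read_fd)
    os.write(write_fd, b"hello")         # write() system call
    os.close(write_fd)
    _, status = os.waitpid(pid, 0)       # wait() system call
    print(os.WEXITSTATUS(status))        # 0 means the child saw the message
```

Every step here — creating the pipe, forking, reading, writing, waiting — is a system call crossing the user/kernel boundary.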

How System Calls Work:

1. System Call Request. The application requests a system call by invoking its
corresponding function. For instance, the program might use the read() function to read
data from a file.
2. Context Switch to Kernel Space. A software interrupt or special instruction is used to
trigger a context switch and transition from user mode to kernel mode.
3. System Call Identified. The system uses an index to identify the system call and
dispatch to the corresponding kernel function.
4. Kernel Function Executed. The kernel function corresponding to the system call is
executed, for example, reading data from a file.
5. System Prepares Return Values. After the kernel function completes its operation, any
return values or results are prepared for the user application.
6. Context Switch to User Space. The execution context is switched back from kernel mode
to user mode.
7. Resume Application. The application resumes its execution from where it left off, now
with the results or effects of the system call.
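The steps above can be traced with the thin wrappers in Python's os module, where each call below performs one user-to-kernel round trip; the file name is hypothetical:

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")  # hypothetical file

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open() system call
os.write(fd, b"hello, kernel")                             # write() system call
os.close(fd)                                               # close() system call

fd = os.open(path, os.O_RDONLY)    # open() again, read-only this time
data = os.read(fd, 64)             # read(): kernel copies file data to user space
os.close(fd)
os.remove(path)                    # unlink() system call

print(data)                        # b'hello, kernel'
```

Each line triggers the full cycle described above: request, switch to kernel mode, kernel work, return values, switch back, resume.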

Types of System Calls

The following list categorizes system calls based on their functionalities:

1. Process Control
 Create new processes or terminate existing ones.
 Load and execute programs within a process's space.
 Schedule processes and set execution attributes, such as priority.
 Wait for a process to complete or signal upon its completion.
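The process-control calls listed above (create, execute, wait) can be sketched on a POSIX system; the child simply runs a one-line program as a stand-in for any executable:

```python
import os
import sys

pid = os.fork()                          # create a new child process
if pid == 0:
    # Child: replace this process image with another program (exec family).
    os.execv(sys.executable, [sys.executable, "-c", "print('child done')"])
else:
    _, status = os.waitpid(pid, 0)       # parent waits for the child to finish
    print("child exit code:", os.WEXITSTATUS(status))
```

fork() creates the process, execv() loads and executes a program within its space, and waitpid() lets the parent wait for its completion — the first, second, and fourth items on the list.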
2. File Management
 Reading from or writing to files.
 Opening and closing files.
 Deleting or modifying file attributes.
 Moving or renaming files.

3. Device Management
System calls facilitate device management by:
 Requesting device access and releasing it after use.
 Setting device attributes or parameters.
 Reading from or writing to devices.
 Mapping logical device names to physical devices.

4. Information Maintenance
 Retrieve or modify various system attributes.
 Set the system date and time.
 Query system performance metrics.

5. Communication
 Sending or receiving messages between processes.
 Synchronizing actions between user processes.
 Establishing shared memory regions for inter-process communication.
 Networking via sockets.
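Socket-based communication from the list above can be sketched with a connected socket pair (POSIX systems); the message is arbitrary:

```python
import socket

# socketpair() returns two already-connected sockets, like a bidirectional pipe.
a, b = socket.socketpair()
a.sendall(b"ping")        # send on one endpoint
msg = b.recv(16)          # receive on the other
a.close()
b.close()
print(msg)                # b'ping'
```

The same send/receive pattern applies when the two endpoints are on different machines connected over a network, which is what the networking system calls exist for.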

6. Security and Access Control


 Determining which processes or users get access to specific resources and who
can read, write, and execute resources.
 Facilitating user authentication procedures.

Example:

POSIX (UNIX)   Description                       Windows          Description
open()         Open a file (or device).          CreateFile()     Open or create a file or device.
close()        Close an open file (or device).   CloseHandle()    Close an open object handle.
read()         Read from a file (or device).     ReadFile()       Read data from a file or input device.
write()        Write to a file (or device).      WriteFile()      Write data to a file or output device.
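The four POSIX calls in the table combine into a minimal file-copy routine; this is a sketch without error handling, using the os-module wrappers for the same calls:

```python
import os

def copy_file(src, dst, bufsize=4096):
    """Copy src to dst using only open(), read(), write(), and close()."""
    in_fd = os.open(src, os.O_RDONLY)                            # open()
    out_fd = os.open(dst, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
    while True:
        chunk = os.read(in_fd, bufsize)                          # read()
        if not chunk:                                            # empty read = EOF
            break
        os.write(out_fd, chunk)                                  # write()
    os.close(in_fd)                                              # close()
    os.close(out_fd)
```

On Windows the same loop would use CreateFile(), ReadFile(), WriteFile(), and CloseHandle() from the right-hand column of the table.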

System Design and Implementation:

An operating system needs careful design and implementation because, without them, no
system can work properly. For any development, a proper design and implementation are
necessary so that the system works well and failures can be debugged easily. Design and
implementation are therefore a necessary part of an operating system.

There are different types of techniques to design and implement the operating system.

 Design goals
 Mechanism
 Implementation
Design goals

Concurrent Systems

Operating systems must handle multiple devices as well as multiple users concurrently. This
is a must for modern multi-core architectures. These requirements make the design of the
operating system complex and very difficult.

Security and Privacy

Operating systems must provide security and privacy to a system. It is important to prevent
malicious users from accessing the system and to prevent theft of user programs and data.

Resource Sharing

The operating system ensures that the resources of the system are shared fairly among
multiple user processes. This becomes more complex when multiple users use the same device.

Changes in Hardware and Software

Operating systems must be flexible in order to accommodate changes to the hardware and
software of the system, so that they do not become obsolete. This is necessary because it
is costly to change the operating system every time the software or hardware changes.

Portable Operating Systems

Operating system that is able to work with different hardware and systems is called a portable
operating system and it is a very important design goal.
Backward Compatibility

An upgrade to the current operating system should not hinder its compatibility with the
machine; i.e., if the previous version of the operating system is compatible with the
system, then the newer or upgraded version should also be compatible. This is called
backward compatibility.

Mechanism

In an operating system, a particular mechanism is followed, and this mechanism is
responsible for every task performed by the operating system.

When a task is performed, the operating system follows a particular mechanism for input,
storage, processing, and output, and through this process memory is assigned to the
different tasks performed by the computer.

An operating system provides services to users and programs, such as I/O operations,
program execution, file system manipulation, resource allocation, and protection.

Program Execution

The OS handles many activities, from user programs to system programs such as the printer
spooler, name servers, and file servers. Each of these activities is encapsulated as a process. A
process includes the complete execution context.

Operating System Activities

The operating system activities are as follows −

 Loads a program into memory
 Executes the program
 Handles the program's execution
 Provides a mechanism for process synchronization
 Provides a mechanism for process communication.
Implementation

Once the operating system is designed, it must be implemented, because it is a collection of
many programs written by many people over a long period of time. Implementation is a crucial
stage for an operating system.

An operating system needs to be implemented so that new tasks can be performed and new
application software can be installed and run smoothly on the computer.

Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process based on a particular
strategy.
Process scheduling is an essential part of a Multiprogramming operating system. Such
operating systems allow more than one process to be loaded into the executable memory at a
time and the loaded process shares the CPU using time multiplexing.

Process scheduler

Categories of Scheduling
Scheduling falls into one of two categories:
 Non-Preemptive: In this case, resources cannot be taken away from a process before it has
finished running; the CPU is released only when the running process terminates or switches
to the waiting state.
 Preemptive: In this case, the OS assigns the CPU to a process for a limited period. A process
can switch from the running state to the ready state, or from the waiting state to the ready
state, because the OS may give another process priority and substitute the currently active
process with the higher-priority one.
Types of Process Schedulers
There are three types of process schedulers:
1. Long Term or Job Scheduler
It brings the new process to the ‘Ready State’. It controls the Degree of Multi-programming,
i.e., the number of processes present in the ready state at any point in time. It is important that
the long-term scheduler makes a careful selection of both I/O-bound and CPU-bound processes.
I/O-bound tasks spend much of their time on input and output operations, while CPU-bound
processes spend their time computing on the CPU. The job scheduler increases efficiency by
maintaining a balance between the two. Long-term schedulers operate at a high level and are
typically used in batch-processing systems.
2. Short-Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling it onto the
running state. Note: the short-term scheduler only selects the process; it does not itself load the
process onto the CPU. This is where all the scheduling algorithms are applied. The CPU
scheduler is responsible for ensuring that processes with long burst times do not cause starvation.

Short Term Scheduler

The dispatcher is responsible for loading the process selected by the short-term scheduler onto
the CPU (ready to running state). Context switching is done by the dispatcher only. A
dispatcher does the following:
 Switching context.
 Switching to user mode.
 Jumping to the proper location in the newly loaded program.
3. Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping (moving
processes from main memory to disk and vice versa). Swapping may be necessary to improve
the process mix or because a change in memory requirements has overcommitted available
memory, requiring memory to be freed up. It is helpful in maintaining a perfect balance
between the I/O bound and the CPU bound. It reduces the degree of multiprogramming.
Medium Term Scheduler

Inter Process Communication (IPC)


A process can be of two types:
 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes, while a co-
operating process can be affected by other executing processes. Although one might think that
processes running independently execute very efficiently, in reality there are many situations
where their co-operative nature can be exploited to increase computational speed, convenience,
and modularity. Inter-process communication (IPC) is a mechanism that allows processes to
communicate with each other and synchronize their actions. The communication between these
processes can be seen as a method of co-operation between them. Processes can communicate
with each other through both:

1. Shared Memory
2. Message passing

An operating system can implement both methods of communication. First, we will discuss the
shared memory methods of communication and then message passing. Communication
between processes using shared memory requires processes to share some variable, and it
completely depends on how the programmer will implement it.

Shared Memory Method


Ex: Producer-Consumer problem
There are two processes: a Producer and a Consumer. The producer produces items and the
consumer consumes them. The two processes share a common space or memory location known
as a buffer, where each item produced by the producer is stored and from which the consumer
takes items as needed.
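The bounded-buffer logic behind this can be sketched as follows. This is a hedged single-process sketch: the names produce, consume, and BUFFER_SIZE are illustrative, and a real implementation would place the buffer in OS-provided shared memory and block the processes instead of returning early.

```python
# Single-process sketch of the bounded-buffer (Producer-Consumer) logic.
BUFFER_SIZE = 4
buffer = [None] * BUFFER_SIZE
in_idx = 0    # next free slot (written by the producer)
out_idx = 0   # next item to consume (read by the consumer)
count = 0     # items currently in the buffer

def produce(item):
    global in_idx, count
    if count == BUFFER_SIZE:
        return False                       # buffer full: a real producer would wait
    buffer[in_idx] = item
    in_idx = (in_idx + 1) % BUFFER_SIZE    # circular buffer wrap-around
    count += 1
    return True

def consume():
    global out_idx, count
    if count == 0:
        return None                        # buffer empty: a real consumer would wait
    item = buffer[out_idx]
    out_idx = (out_idx + 1) % BUFFER_SIZE
    count -= 1
    return item

produced = [produce(i) for i in range(5)]  # fifth call fails: buffer is full
consumed = [consume() for _ in range(4)]   # items come out in FIFO order
```

Note how the buffer state itself is the only thing the two roles share; that is exactly what the shared memory region provides between real processes.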

Messaging Passing Method


Now, we will start our discussion of the communication between processes via message
passing. In this method, processes communicate with each other without using any kind of
shared memory. If two processes p1 and p2 want to communicate with each other, they
proceed as follows:

 Establish a communication link (if a link already exists, no need to establish it again.)
 Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
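A minimal sketch of these two primitives, assuming an in-memory FIFO mailbox; in a real OS they would be system calls backed by a kernel-managed message queue.

```python
import queue

# The mailbox and the primitive names are illustrative stand-ins
# for kernel-provided send/receive system calls.
_mailbox = queue.Queue()

def send(message):
    _mailbox.put(message)      # here the send never blocks

def receive():
    return _mailbox.get()      # blocks until a message is available

send("hello")
send("world")
first = receive()              # messages are delivered in FIFO order
```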

Message Passing through Communication Link.


Direct and Indirect Communication link

A link has some capacity that determines the number of messages that can reside in it
temporarily; for this, every link has an associated queue, which can be of zero capacity,
bounded capacity, or unbounded capacity. With zero capacity, the sender waits until the
receiver informs it that the message has been received.
Direct communication links are implemented when the processes use a specific process
identifier for the communication, but it is hard to identify the sender ahead of time.

For example, consider a print server.


Indirect communication is done via a shared mailbox (port), which consists of a queue
of messages. The sender places a message in the mailbox and the receiver picks it up.

Message Passing through Exchanging the Messages.


Synchronous and Asynchronous Message Passing:
A process that is blocked is one that is waiting for some event, such as a resource becoming
available or the completion of an I/O operation. IPC is possible between processes on the same
computer as well as between processes running on different computers, i.e. in a
networked/distributed system.

There are basically three preferred combinations:


 Blocking send and blocking receive
 Non-blocking send and Non-blocking receive
 Non-blocking send and Blocking receive (Mostly used)

In Direct message passing: The process which wants to communicate must explicitly
name the recipient or sender of the communication.
e.g. send(p1, message) means send the message to p1.

Examples of IPC systems


POSIX : uses the shared memory method.
Mach : uses message passing.
Windows XP : uses message passing via local procedure calls (LPC).
Communication in client/server Architecture:

There are various mechanisms:


Pipes
Sockets
Remote Procedure Calls (RPCs)

CPU Scheduling in Operating Systems


 Scheduling of processes/work is done to finish the work on time.
 CPU scheduling allows one process to use the CPU while another process is delayed (on
standby) because some resource, such as I/O, is unavailable, thus making full use of the
CPU.
 The purpose of CPU Scheduling is to make the system more efficient, faster, and fairer.
 CPU scheduling is a key part of how an operating system works. It decides which task
the CPU should work on at any given time.
 The process memory is divided into four sections for efficient operation:
 The text section is composed of the compiled program code, which is read in from fixed
storage when the program is launched.
 The data section is made up of global and static variables, allocated and initialized before
main begins executing.
 The heap is used for flexible, or dynamic, memory allocation and is managed by calls to
new, delete, malloc, free, etc.
 The stack is used for local variables. Space on the stack is reserved for a local variable
when it is declared.
Process Scheduling

 Process scheduling is the process manager's activity of removing an active process from
the CPU and selecting another process based on a specific strategy.
 Process scheduling is an integral part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into usable memory at a
time, and the loaded processes share the CPU using time multiplexing.
There are three types of process schedulers:
 Long term or Job Scheduler
 Short term or CPU Scheduler
 Medium-term Scheduler

Terminologies Used in CPU Scheduling


 Arrival Time: Time at which the process arrives in the ready queue.
 Completion Time: Time at which process completes its execution.
 Burst Time: Time required by a process for CPU execution.
 Turn Around Time: Time Difference between completion time and arrival time.
o Turn Around Time = Completion Time – Arrival Time
 Waiting Time(W.T): Time Difference between turn around time and burst time.
o Waiting Time = Turn Around Time – Burst Time
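The two formulas can be checked with a small worked example (the times below are illustrative):

```python
# Applying the formulas above to one example process.
arrival_time = 2       # process enters the ready queue at t = 2
burst_time = 6         # needs 6 units of CPU time
completion_time = 12   # finishes at t = 12

turnaround_time = completion_time - arrival_time   # 12 - 2 = 10
waiting_time = turnaround_time - burst_time        # 10 - 6 = 4
```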

There are mainly two types of scheduling methods:


 Preemptive Scheduling: Preemptive scheduling is used when a process switches from
running state to ready state or from the waiting state to the ready state.
 Non-Preemptive Scheduling: Non-Preemptive scheduling is used when a process
terminates, or when a process switches from running state to waiting state.
1. First Come First Serve
FCFS is considered the simplest of all operating system scheduling algorithms. The first come,
first serve scheduling algorithm states that the process that requests the CPU first is allocated
the CPU first; it is implemented using a FIFO queue.
Characteristics of FCFS
 FCFS is a non-preemptive CPU scheduling algorithm.
 Tasks are always executed on a First-come, First-serve concept.
 FCFS is easy to implement and use.
 This algorithm is not much efficient in performance, and the wait time is quite high.
Advantages of FCFS
 Easy to implement
 First come, first serve method
Disadvantages of FCFS
 FCFS suffers from Convoy effect.
 The average waiting time is much higher than the other algorithms.
 Because FCFS is so simple, it is not very efficient.
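A minimal FCFS sketch, assuming each process is given as an illustrative (pid, arrival, burst) tuple; the turnaround and waiting times come from the formulas defined earlier.

```python
def fcfs(processes):
    """processes: list of (pid, arrival_time, burst_time) tuples.
    Returns (pid, start, completion, turnaround, waiting) per process."""
    schedule = []
    clock = 0
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival)        # CPU may sit idle until the arrival
        start = clock
        clock += burst                     # runs to completion (non-preemptive)
        turnaround = clock - arrival       # completion time - arrival time
        waiting = turnaround - burst
        schedule.append((pid, start, clock, turnaround, waiting))
    return schedule

result = fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)])
```

The short process P3 waits behind both longer processes, which is the convoy effect in miniature.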
2. Shortest Job First(SJF)
Shortest job first (SJF) is a scheduling discipline that selects the waiting process with the
smallest execution time to execute next. This scheduling method may or may not be
preemptive. It significantly reduces the average waiting time for the other processes waiting to
be executed.
Characteristics of SJF
 Shortest Job first has the advantage of having a minimum average waiting time among
all operating system scheduling algorithms.
 Each process is associated with the unit of time it requires to complete.
 It may cause starvation if shorter processes keep coming. This problem can be solved using
the concept of ageing.
Advantages of SJF
 As SJF reduces the average waiting time thus, it is better than the first come first serve
scheduling algorithm.
 SJF is generally used for long term scheduling
Disadvantages of SJF
 One of the demerit SJF has is starvation.
 Many times it becomes complicated to predict the length of the upcoming CPU request
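A minimal sketch of non-preemptive SJF, with illustrative (pid, arrival, burst) tuples; note that P3, the shortest ready job, overtakes P2.

```python
def sjf(processes):
    """Non-preemptive SJF. processes: (pid, arrival, burst) tuples.
    Returns (pid, turnaround_time) in execution order."""
    remaining = sorted(processes, key=lambda p: p[1])
    clock, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                       # CPU idle until the next arrival
            clock = min(p[1] for p in remaining)
            continue
        pid, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        remaining.remove((pid, arrival, burst))
        clock += burst                      # runs to completion
        order.append((pid, clock - arrival))
    return order

order = sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)])
```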
3. Longest Job First(LJF)
The Longest Job First (LJF) scheduling discipline is just the opposite of shortest job first (SJF):
as the name suggests, this algorithm processes first the process with the largest burst time.
Longest Job First is non-preemptive in nature.
Characteristics of LJF
 Among all the processes waiting in a waiting queue, CPU is always assigned to the process
having largest burst time.
 If two processes have the same burst time then the tie is broken using FCFS i.e. the process
that arrived first is processed first.
 LJF itself is non-preemptive; its preemptive counterpart is Longest Remaining Time First
(LRTF).
Advantages of LJF
 No other task can be scheduled until the longest job or process executes completely.
 All the jobs or processes finish at approximately the same time.
Disadvantages of LJF
 Generally, the LJF algorithm gives a very high average waiting time and average turn-
around time for a given set of processes.
 This may lead to convoy effect.
4. Priority Scheduling
Preemptive Priority CPU Scheduling Algorithm is a pre-emptive method of CPU
scheduling algorithm that works based on the priority of a process.
Characteristics of Priority Scheduling
 Schedules tasks based on priority.
 When higher-priority work arrives while a task with lower priority is executing, the higher-
priority process takes the place of the lower-priority process, and the latter is suspended
until the execution is complete.
 The lower the number assigned, the higher the priority level of a process.
Advantages of Priority Scheduling
 The average waiting time is less than FCFS
 Less complex
Disadvantages of Priority Scheduling
 The most common demerit of the preemptive priority CPU scheduling algorithm is the
starvation problem: a low-priority process may have to wait a very long time before it is
scheduled onto the CPU.
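The selection rule can be sketched in a few lines (process names and priority numbers are illustrative; a smaller number means a higher priority):

```python
# Sketch of the priority selection rule applied at a scheduling point.
def pick(ready):
    """ready: list of (pid, priority) pairs; lower number = higher priority."""
    return min(ready, key=lambda p: p[1])

running = pick([("P1", 3), ("P2", 1), ("P3", 2)])   # P2 is selected
```

In the preemptive variant this rule is re-applied whenever a new process arrives, so a newly arrived higher-priority process immediately displaces the running one.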
5. Round Robin
Round Robin is a CPU scheduling algorithm in which each process is cyclically assigned a fixed
time slot. It is the preemptive version of the first come, first serve CPU scheduling algorithm
and generally focuses on the time-sharing technique.
Characteristics of Round robin
 It’s simple, easy to use, and starvation-free as all processes get the balanced CPU
allocation.
 One of the most widely used methods in CPU scheduling, as it forms the core of time-
sharing systems.
 It is considered preemptive as the processes are given to the CPU for a very limited time.
Advantages of Round robin
 Round robin seems to be fair as every process gets an equal share of CPU.
 The newly created process is added to the end of the ready queue.
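A minimal Round Robin sketch, assuming all processes arrive at time 0 and are given as illustrative (pid, burst) pairs:

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: (pid, burst) pairs, all arriving at time 0.
    Returns each process's completion time."""
    ready = deque(processes)
    clock, completion = 0, {}
    while ready:
        pid, burst = ready.popleft()
        run = min(quantum, burst)              # run for at most one time slot
        clock += run
        if burst > run:
            ready.append((pid, burst - run))   # preempted: back of the queue
        else:
            completion[pid] = clock
    return completion

done = round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2)
```

With a quantum of 2, the short process P3 finishes after only one round, while P1 and P2 keep cycling through the queue.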
6. Shortest Remaining Time First(SRTF)
Shortest remaining time first is the preemptive version of the Shortest job first which we
have discussed earlier where the processor is allocated to the job closest to completion. In
SRTF the process with the smallest amount of time remaining until completion is selected to
execute.
Characteristics of SRTF
 The SRTF algorithm makes the processing of jobs faster than the SJF algorithm, provided
its overhead is not counted.
 The context switch is done a lot more times in SRTF than in SJF and consumes the CPU’s
valuable time for processing. This adds up to its processing time and diminishes its
advantage of fast processing.
Advantages of SRTF
 In SRTF the short processes are handled very fast.
 The system also requires very little overhead since it only makes a decision when a process
completes or a new process is added.

Disadvantages of SRTF
 Like the shortest job first, it also has the potential for process starvation.
 Long processes may be held off indefinitely if short processes are continually added.
7. Longest Remaining Time First(LRTF)
The longest remaining time first is a preemptive version of the longest job first scheduling
algorithm. This scheduling algorithm is used by the operating system to program incoming
processes for use in a systematic way. This algorithm schedules those processes first which
have the longest processing time remaining for completion.
Characteristics of LRTF
 Among all the processes waiting in the ready queue, the CPU is always assigned to the
process with the longest remaining time.
 If two processes have the same burst time then the tie is broken using FCFS i.e. the process
that arrived first is processed first.
 LRTF is preemptive by definition; LJF is its non-preemptive counterpart.
 No other process can execute until the longest task executes completely.
 All the jobs or processes finish at approximately the same time.
Advantages of LRTF
 Maximizes Throughput for Long Processes.
 Reduces Context Switching.
 Simplicity in Implementation.
Disadvantages of LRTF
 This algorithm gives a very high average waiting time and average turn-around time for a
given set of processes.
 This may lead to a convoy effect.
8. Highest Response Ratio Next(HRRN)
Highest Response Ratio Next is a non-preemptive CPU Scheduling algorithm and it is
considered as one of the most optimal scheduling algorithms. The name itself states that we
need to find the response ratio of all available processes and select the one with the highest
Response Ratio. A process once selected will run till completion.
Characteristics of HRRN
 The criteria for HRRN is Response Ratio, and the mode is Non-Preemptive.
 HRRN is considered as the modification of Shortest Job First to reduce the problem
of starvation.
Response Ratio = (W + S)/S
Here, W is the waiting time of the process so far and S is the Burst time of the process.
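The response ratio formula can be applied directly when picking the next process (the names and times below are illustrative):

```python
def next_process(ready, clock):
    """Pick the ready process with the highest response ratio (W + S) / S.
    ready: (pid, arrival, burst) tuples; clock: current time."""
    def ratio(proc):
        pid, arrival, burst = proc
        waiting = clock - arrival          # W: time spent waiting so far
        return (waiting + burst) / burst   # (W + S) / S
    return max(ready, key=ratio)

# At time 10: P1 has ratio (10 + 5) / 5 = 3.0, P2 has (4 + 1) / 1 = 5.0.
chosen = next_process([("P1", 0, 5), ("P2", 6, 1)], clock=10)
```

Because waiting time W grows while burst time S stays fixed, long-waiting jobs eventually win even against shorter jobs, which is how HRRN avoids SJF's starvation.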
Advantages of HRRN
 HRRN Scheduling algorithm generally gives better performance than the shortest job
first Scheduling.
 There is a reduction in waiting time for longer jobs and also it encourages shorter jobs.
Disadvantages of HRRN
 A faithful implementation of HRRN scheduling is difficult, since the burst time of every job
cannot be known in advance.
 In this scheduling, there may occur an overload on the CPU.
9. Multiple Queue Scheduling
Processes in the ready queue can be divided into different classes where each class has its own
scheduling needs. For example, a common division is a foreground (interactive) process and
a background (batch) process. These two classes have different scheduling needs. For this
kind of situation Multilevel Queue Scheduling is used.

The process classes handled by multilevel queue scheduling are as follows:


 System Processes: Processes that the operating system itself must run are generally termed
system processes.
 Interactive Processes: An interactive process is one that interacts with the user and
therefore needs quick response times.
 Batch Processes: Batch processing is generally a technique in the Operating system that
collects the programs and data together in the form of a batch before the processing starts.
Advantages of Multilevel Queue Scheduling
 The main merit of the multilevel queue is that it has a low scheduling overhead.
Disadvantages of Multilevel Queue Scheduling
 Starvation problem
 It is inflexible in nature
10. Multilevel Feedback Queue Scheduling
Multilevel Feedback Queue (MLFQ) CPU scheduling is like multilevel queue scheduling, but
here processes can move between the queues. This makes it much more efficient than
multilevel queue scheduling.
Characteristics of Multilevel Feedback Queue Scheduling
 In a plain multilevel queue scheduling algorithm, processes are permanently assigned to a
queue on entry to the system and are not allowed to move between queues.
 Because processes are permanently assigned to a queue, that setup has the advantage of low
scheduling overhead,
 but the disadvantage of being inflexible; MLFQ removes this restriction by letting
processes migrate between queues.
Advantages of Multilevel feedback Queue Scheduling
 It is more flexible
 It allows different processes to move between different queues
Disadvantages of Multilevel Feedback Queue Scheduling
 It also produces CPU overheads
 It is the most complex algorithm.
