OPERATING SYSTEM UNIT -1 NOTES
Introduction:
An operating system acts as an intermediary between the user of a computer and
computer hardware. The purpose of an operating system is to provide an environment in
which a user can execute programs conveniently and efficiently.
An operating system is software that manages computer hardware. The hardware must
provide appropriate mechanisms to ensure the correct operation of the computer system
and to prevent user programs from interfering with the proper operation of the system.
A more common definition is that the operating system is the one program running at all
times on the computer (usually called the kernel), with all else being application
programs.
2. System View
The OS may also be viewed as just a resource allocator. A computer system comprises
various resources, such as hardware and software, which must be managed effectively. The
operating system manages the resources, decides between competing demands, controls
the program execution, etc. According to this point of view, the operating system's
purpose is to maximize performance.
The operating system is responsible for managing hardware resources and allocating
them to programs and users to ensure maximum performance.
1. Resource Allocation
The hardware contains several resources like registers, caches, RAM, ROM, CPUs, I/O
interaction, etc. These are all resources that the operating system needs when an
application program demands them. Only the operating system can allocate resources,
and it has used several tactics and strategies to maximize its processing and memory
space. The operating system uses a variety of strategies to get the most out of the
hardware resources, including paging, virtual memory, caching, and so on.
2. Control Program
The control program controls how input and output devices (hardware) interact with the
operating system. The user may request an action that can only be done with I/O devices;
in this case, the operating system must also have proper communication, control, detect,
and handle such devices.
Types of Operating Systems (OS)
An operating system is a well-organized collection of programs that manages the computer
hardware. It is a type of system software that is responsible for the smooth functioning of the
computer system.
Batch Operating System
Batch processing was very popular in early systems. In this technique, similar types of jobs were
batched together and executed one after another. People typically shared a single computer,
which was called a mainframe.
In a batch operating system, access is given to more than one person; users submit their
respective jobs to the system for execution.
The system puts all of the jobs in a queue on a first-come, first-served basis and then executes
the jobs one by one. Users collect their respective output once all the jobs have been executed.
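The first-come, first-served batch behaviour described above can be sketched as a simple queue simulation; the job names and burst times below are invented for illustration:

```python
from collections import deque

# Hypothetical batch of jobs: (name, burst_time) pairs, in submission order.
jobs = deque([("J1", 5), ("J2", 3), ("J3", 8)])

clock = 0
finish_time = {}
while jobs:
    name, burst = jobs.popleft()   # first come, first served
    clock += burst                 # each job runs to completion before the next starts
    finish_time[name] = clock

print(finish_time)                 # jobs complete strictly in submission order
```

Note how a long first job delays everything behind it, which is exactly the starvation problem discussed below.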
The purpose of this operating system was mainly to transfer control from one job to another as
soon as the previous job was completed. It contained a small set of programs called the resident
monitor that always resided in one part of main memory; the remaining part was used for
servicing jobs.
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency, as it eliminates CPU idle time
between two jobs.
Disadvantages of Batch OS
1. Starvation
For Example:
There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very
high, then the other four jobs will never be executed, or they will have to wait for a very long
time. Hence the other processes get starved.
2. Not Interactive
Batch processing is not suitable for jobs that depend on the user's input. If a job requires the
input of two numbers from the console, it will never get them in the batch processing
scenario, since the user is not present at the time of execution.
Multiprogramming Operating System
Multiprogramming is an extension of batch processing in which the CPU is always kept busy.
Each process needs two types of system time: CPU time and I/O time.
In a multiprogramming environment, when a process performs its I/O, the CPU can start the
execution of other processes. Therefore, multiprogramming improves the efficiency of the
system.
Advantages of Multiprogramming OS
o Throughput is increased, as the CPU always has some program to execute.
o Response time can also be reduced.
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various systems resources
are used efficiently, but they do not provide any user interaction with the computer
system.
Multiprocessing Operating System
In multiprocessing, parallel computing is achieved. More than one processor is present in the
system, and they can execute more than one process at the same time, which increases the
throughput of the system.
Real-Time Operating System
In real-time systems, each job carries a certain deadline within which it is supposed to be
completed; otherwise there will be a huge loss, or even if the result is produced, it will be
completely useless.
Real-time systems are applied, for example, in military settings: if a missile is to be launched,
it must be launched with a certain precision and within a strict deadline.
Advantages of Real-time operating system:
o Easy to layout, develop and execute real-time applications under the real-time operating
system.
o In a real-time operating system, devices and the system are utilized to the maximum.
Time-Sharing Operating System
In a time-sharing operating system, computer resources are allocated in a time-dependent
fashion to several programs simultaneously. Thus it helps to provide a large number of users
with direct access to the main computer.
Simple Structure
o It is the most straightforward operating system structure, but it lacks definition and is
only appropriate for use with small and restricted systems. Since the interfaces and
levels of functionality in this structure are not well separated, application programs are
able to access basic I/O routines directly, which may result in unauthorized access to I/O
procedures.
This organizational structure is used by the MS-DOS operating system:
o There are four layers that make up the MS-DOS operating system, and each has its own
set of features.
o These layers include ROM BIOS device drivers, MS-DOS device drivers, application
programs, and system programs.
o When a user program fails, the operating system as a whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs and I/O procedures
are visible to end users, giving them the potential for unwanted access.
o The entire operating system breaks if just one user program malfunctions.
o Since the layers are interconnected, and in communication with one another, there is no
abstraction or data hiding.
o The operating system's operations are accessible to layers, which can result in data
tampering and system failure.
MONOLITHIC STRUCTURE
The monolithic operating system controls all aspects of the operating system's operation,
including file management, memory management, device management, and operational
operations.
The kernel is the core of a computer's operating system. The kernel provides fundamental
services to all other system components, and it is the main interface between the operating
system and the hardware. In a monolithic structure, the operating system runs as a single
program in kernel mode, so the kernel can directly access all of the system's resources.
o Because there is no layering and the kernel alone is responsible for managing all
operations, it is easy to design and implement.
o Because functions like memory management, file management, process
scheduling, etc., are implemented in the same address space, the monolithic kernel runs
rather quickly compared to other structures. Utilizing the same address space also
reduces the time required for address allocation for new processes.
o The monolithic kernel's services are interconnected in address space and have an impact
on one another, so if any of them malfunctions, the entire system does as well.
o It is not adaptable. Therefore, launching a new service is difficult.
LAYERED STRUCTURE
The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest
layer) is the hardware, and the highest layer is the user interface.
o Work duties are separated since each layer has its own functionality, and there is some
amount of abstraction.
o Debugging is simpler because the lower layers are examined first, followed by the top
layers.
MICRO-KERNEL STRUCTURE
The operating system is created using a micro-kernel framework that strips the kernel of any
unnecessary parts.
Each Micro-Kernel is created separately and is kept apart from the others. As a result, the system
is now more trustworthy and secure. If one Micro-Kernel malfunctions, the remaining operating
system is unaffected and continues to function normally.
EXOKERNEL
An operating system called Exokernel was created at MIT with the goal of offering application-
level management of hardware resources. The exokernel architecture's goal is to enable
application-specific customization by separating resource management from protection.
Exokernel size tends to be minimal due to its limited operability.
Disadvantages of Exokernel
o A decline in consistency.
o Exokernel interfaces have a complex architecture.
VIRTUAL MACHINE
A virtual machine abstracts the hardware of our personal computer, including the CPU, disk
drives, RAM, and NIC (Network Interface Card), into several different execution contexts
based on our needs, giving us the impression that each execution environment is a separate
computer. VirtualBox is an example of this.
Using CPU scheduling and virtual memory techniques, an operating system allows us to execute
multiple processes simultaneously while giving the impression that each one is using a separate
processor and virtual memory.
o Due to total isolation between each virtual machine and every other virtual machine,
there are no issues with security.
o Simple availability, accessibility, and recovery convenience.
SYSTEM CALLS
A system call is an interface between a program running in user space and the operating
system (OS). Application programs use system calls to request services and functionalities from
the OS's kernel. This mechanism allows the program to call for a service, like reading from a file,
without accessing system resources directly.
When a program invokes a system call, the execution context switches from user to kernel mode,
allowing the system to access hardware and perform the required operations safely. After the
operation is completed, the control returns to user mode, and the program continues its
execution.
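The user-to-kernel mode switch described above is hidden behind thin library wrappers. A minimal sketch in Python using the os module, whose open/read/write/close functions map directly onto the corresponding kernel system calls (open(2), read(2), write(2), close(2) on POSIX); the filename demo.txt is invented for the example:

```python
import os

# os.open / os.write / os.read are thin wrappers over the open(2), write(2),
# and read(2) system calls; each call traps into the kernel and then returns.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
os.write(fd, b"hello kernel")      # user mode -> kernel mode -> user mode
os.close(fd)

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 100)            # reads back at most 100 bytes
os.close(fd)
os.remove("demo.txt")              # clean up the demo file
print(data)
```

The program never touches the disk hardware itself; it only asks the kernel to do so on its behalf.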
Ensures that hardware resources are isolated from user space processes.
Prevents direct access to the kernel or hardware memory.
Allows application code to run across different hardware architectures.
User-Kernel Boundary. System calls serve as the authorized gateway for user programs
when requesting services from the kernel. They ensure that user programs cannot
arbitrarily access kernel functions or critical system resources.
Resource Management. User programs can request and manage vital resources like
CPU time, memory, and file storage via system calls. The OS oversees the process and
guarantees that it is completed in an organized manner.
Streamlined Development. System calls abstract the complexities of hardware. This
allows developers to perform operations like reading and writing to a file or managing
network data without needing to write hardware-specific code.
Security and Access Control. System calls implement checks to ensure that requests
made by user programs are valid and that the programs have the necessary permissions to
perform the requested operations.
Inter-Process Communication (IPC). System calls provide the mechanisms for
processes to communicate with each other. They offer features like pipes, message
queues, and shared memory to facilitate this inter-process communication.
Network Operations. System calls provide the framework for network communications
between programs. Developers can devote their attention to building their application's
logic instead of focusing on low-level network programming.
When a program makes a system call, the following steps occur:
1. System Call Invoked. The program calls a library wrapper, which places the system
call number and its arguments where the kernel expects them.
2. Trap to Kernel Mode. A trap instruction switches execution from user mode to
kernel mode.
3. System Call Identified. The system uses an index into the system call table to identify
the call and dispatch the corresponding kernel function.
4. Operation Performed. The kernel function carries out the requested operation.
5. Return to User Mode. Control and any return value pass back to the user program,
which resumes execution.
1. Process Control
Create new processes or terminate existing ones.
Load and execute programs within a process's space.
Schedule processes and set execution attributes, such as priority.
Wait for a process to complete or signal upon its completion.
2. File Management
Create, delete, open, and close files.
Read from and write to files.
Get and set file attributes.
3. Device Management
Request and release devices.
Read from and write to devices.
Get and set device attributes.
4. Information Maintenance
Retrieve or modify various system attributes.
Set the system date and time.
Query system performance metrics.
5. Communication
Create and delete communication connections.
Send and receive messages.
Transfer status information.
Example (equivalent UNIX and Windows system calls):
o UNIX open() opens a file (or device); the Windows equivalent is CreateFile(), which
opens or creates a file or device.
o UNIX read() reads from a file (or device); the Windows equivalent is ReadFile(), which
reads data from a file or input device.
Operating System Design and Implementation
An operating system must be properly designed and implemented, because without proper
design and implementation no system can work correctly. For every aspect of development, a
proper design and implementation are necessary so that the system works well and can be
easily debugged if any failure occurs.
Design and implementation are therefore a necessary part of an operating system.
There are different types of techniques to design and implement the operating system.
Design goals
Mechanism
Implementation
Design goals
Concurrent Systems
Operating systems must handle multiple devices as well as multiple users concurrently. This
is a must for modern multi-core architectures. These requirements make the design of the
operating system complex and very difficult.
Security
Operating systems must provide security and privacy to a system. It is important to prevent
malicious users from accessing your system and to prevent the stealing of user programs.
Resource Sharing
The operating system ensures that the resources of the system are shared in a correct fashion
between multiple user processes. This becomes more complex when multiple users use the
same device.
Flexibility
Operating systems must be flexible in order to accommodate any change to the hardware and
software of the system, so that they do not become obsolete. This is necessary because it is
costly to change the operating system again and again on every change to the software or
hardware.
Portability
An operating system that is able to work with different hardware and systems is called a
portable operating system, and portability is a very important design goal.
Backward Compatibility
Any upgrade to the current operating system should not hinder its compatibility with the
machine; i.e., if the previous version of the operating system is compatible with the system,
then the newer or upgraded version should also be compatible with the system. This is called
backward compatibility.
Mechanism
When a task is performed in the operating system, a particular mechanism is followed for
input, storage, processing, and output; using this mechanism, memory can be assigned to the
different tasks performed by the computer.
An operating system provides services to users and programs, such as I/O operations,
program execution, file system manipulation, resource allocation, and protection.
Program Execution
OS handles many activities from user programs to system programs like printer spooler, name
servers, file servers etc. Each of these activities is encapsulated as a process. A process includes
the complete execution context.
Once the operating system is designed, it must be implemented; it is a collection of many
programs written by many people over a long period of time.
Implementation is one of the most important processes for an operating system. An operating
system needs to be implemented because only then can new tasks be performed and new
application software be installed, allowing the computer to run smoothly.
Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process based on a particular
strategy.
Process scheduling is an essential part of a multiprogramming operating system. Such
operating systems allow more than one process to be loaded into executable memory at a
time, and the loaded processes share the CPU using time multiplexing.
Categories of Scheduling
Scheduling falls into one of two categories:
Non-Preemptive: In this case, a process's resources cannot be taken away before the process
has finished running. Only when a running process finishes and transitions to the waiting
state are its resources switched to another process.
Preemptive: In this case, the OS assigns resources to a process for a predetermined period.
The process switches from the running state to the ready state, or from the waiting state to
the ready state, during resource allocation. This switching happens because the CPU may
give other processes priority and replace the currently active process with a higher-priority
one.
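As a sketch of the preemptive case, the round-robin strategy below runs each process for a fixed time quantum and then preempts it to the back of the ready queue; the process names, burst times, and quantum are invented for illustration:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the order in which processes get the CPU under round robin."""
    queue = deque(bursts.items())        # ready queue of (name, remaining_time)
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)               # process gets the CPU for one quantum
        if remaining > quantum:          # quantum expires: preempt, re-queue
            queue.append((name, remaining - quantum))
        # otherwise the process finishes within its quantum and leaves

    return order

print(round_robin({"P1": 4, "P2": 2}, quantum=2))
```

In a non-preemptive scheme, by contrast, P1 would keep the CPU for all 4 units before P2 ran at all.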
Types of Process Schedulers
There are three types of process schedulers:
1. Long Term or Job Scheduler
It brings the new process to the ‘Ready State’. It controls the Degree of Multi-programming,
i.e., the number of processes present in the ready state at any point in time. It is important that
the long-term scheduler makes a careful selection of both I/O-bound and CPU-bound
processes. I/O-bound tasks are those that spend much of their time on input and output
operations, while CPU-bound processes are those that spend their time on the CPU. The job
scheduler increases efficiency by maintaining a balance between the two. Long-term
schedulers operate at a high level and are typically used in batch-processing systems.
2. Short-Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling it into the
running state. Note: the short-term scheduler only selects the process to schedule; it does not
load the process into the running state. This is where all the scheduling algorithms are used.
The CPU scheduler is responsible for ensuring that no starvation occurs due to processes
with high burst times.
The dispatcher is responsible for loading the process selected by the short-term scheduler
onto the CPU (ready to running state); context switching is done by the dispatcher only. A
dispatcher does the following:
Switching context.
Switching to user mode.
Jumping to the proper location in the newly loaded program.
3. Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping (moving
processes from main memory to disk and vice versa). Swapping may be necessary to improve
the process mix or because a change in memory requirements has overcommitted available
memory, requiring memory to be freed up. It is helpful in maintaining a perfect balance
between the I/O bound and the CPU bound. It reduces the degree of multiprogramming.
Inter-Process Communication (IPC)
There are two methods of communication between processes:
1. Shared Memory
2. Message Passing
An operating system can implement both methods of communication. First, we will discuss the
shared memory methods of communication and then message passing. Communication
between processes using shared memory requires processes to share some variable, and it
completely depends on how the programmer will implement it.
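A minimal shared-memory sketch, assuming Python's multiprocessing module: the two processes share a single integer, and (as the text says) how the shared variable is used is entirely the programmer's choice. The value 42 and the function name are invented:

```python
from multiprocessing import Process, Value

def producer(shared):
    """Child process: write into memory shared with the parent."""
    with shared.get_lock():     # synchronize access to the shared variable
        shared.value = 42

if __name__ == "__main__":
    shared = Value("i", 0)      # 'i' = C int, initially 0, placed in shared memory
    p = Process(target=producer, args=(shared,))
    p.start()
    p.join()                    # wait for the child to finish writing
    print(shared.value)         # the parent sees the child's write
```

The lock guards against two processes updating the variable at the same time, a concern that message passing avoids by design.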
Establish a communication link (if a link already exists, no need to establish it again.)
Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
A link has some capacity that determines the number of messages that can temporarily
reside in it; for this, every link has a queue associated with it, which can be of zero capacity,
bounded capacity, or unbounded capacity. With zero capacity, the sender waits until the
receiver informs the sender that it has received the message.
Direct Communication links are implemented when the processes use a specific process
identifier for the communication, but it is hard to identify the sender ahead of time.
In Direct message passing: The process which wants to communicate must explicitly
name the recipient or sender of the communication.
e.g. send(p1, message) means send the message to p1.
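The send/receive primitives above can be sketched in Python, with a multiprocessing.Queue standing in for the communication link and its associated message queue; the process roles and message text are invented:

```python
from multiprocessing import Process, Queue

def sender(q):
    """Child process: send(message) is realized as a put on the link's queue."""
    q.put("hello P1")

if __name__ == "__main__":
    link = Queue()              # the link, with its own internal message queue
    p = Process(target=sender, args=(link,))
    p.start()
    msg = link.get()            # receive(message): blocks until a message arrives
    p.join()
    print(msg)
```

Unlike the shared-memory example, no variable is shared here; the kernel transfers the message between the two address spaces.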
Disadvantages of SRTF
Like the shortest job first, it also has the potential for process starvation.
Long processes may be held off indefinitely if short processes are continually added.
7. Longest Remaining Time First (LRTF)
The longest remaining time first is the preemptive version of the longest job first scheduling
algorithm. The operating system uses this scheduling algorithm to schedule incoming
processes in a systematic way. This algorithm schedules first those processes which have the
longest processing time remaining for completion.
Characteristics of LRTF
Among all the processes waiting in a waiting queue, the CPU is always assigned to the
process having the largest burst time.
If two processes have the same burst time then the tie is broken using FCFS i.e. the process
that arrived first is processed first.
LRTF is the preemptive variant; its non-preemptive counterpart is Longest Job First (LJF).
No other process can execute until the longest task executes completely.
All the jobs or processes finish at the same time approximately.
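The characteristics above can be sketched as a small simulation: at each time unit the CPU goes to the process with the largest remaining burst, with ties broken in arrival (FCFS) order. All processes are assumed to arrive at t=0, and the names and burst times are invented:

```python
def lrtf_order(bursts):
    """Return the per-time-unit CPU allocation under LRTF scheduling."""
    remaining = dict(bursts)      # dict insertion order doubles as arrival order
    timeline = []
    while any(remaining.values()):
        # max() keeps the first (earliest-arrived) process on a tie -> FCFS tie-break
        name = max(remaining, key=remaining.get)
        remaining[name] -= 1      # run it for one time unit, then re-evaluate
        timeline.append(name)
    return timeline

print(lrtf_order({"P1": 2, "P2": 3}))
```

Notice how the processes alternate once their remaining times equalize, so they all finish at approximately the same time, as the characteristics list states.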
Advantages of LRTF
Maximizes Throughput for Long Processes.
Reduces Context Switching.
Simplicity in Implementation.
Disadvantages of LRTF
This algorithm gives a very high average waiting time and average turn-around time for a
given set of processes.
This may lead to a convoy effect.
8. Highest Response Ratio Next (HRRN)
Highest Response Ratio Next is a non-preemptive CPU Scheduling algorithm and it is
considered as one of the most optimal scheduling algorithms. The name itself states that we
need to find the response ratio of all available processes and select the one with the highest
Response Ratio. A process once selected will run till completion.
Characteristics of HRRN
The criteria for HRRN is Response Ratio, and the mode is Non-Preemptive.
HRRN is considered as the modification of Shortest Job First to reduce the problem
of starvation.
Response Ratio = (W + S)/S
Here, W is the waiting time of the process so far and S is the Burst time of the process.
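The response ratio formula can be applied directly to pick the next process. A small sketch, with invented process names, waiting times, and burst times:

```python
def pick_hrrn(ready):
    """ready: list of (name, waiting_time, burst_time); return the next process."""
    def ratio(p):
        _, w, s = p
        return (w + s) / s        # Response Ratio = (W + S) / S
    return max(ready, key=ratio)[0]

ready = [("P1", 9, 3), ("P2", 2, 1), ("P3", 6, 6)]
# ratios: P1 = (9+3)/3 = 4.0, P2 = (2+1)/1 = 3.0, P3 = (6+6)/6 = 2.0
print(pick_hrrn(ready))           # P1 has the highest response ratio
```

Because W grows while a process waits, every process's ratio keeps rising, which is how HRRN reduces the starvation problem of shortest job first.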
Advantages of HRRN
HRRN Scheduling algorithm generally gives better performance than the shortest job
first Scheduling.
There is a reduction in waiting time for longer jobs, and it also favors shorter jobs.
Disadvantages of HRRN
In practice, exact HRRN scheduling is not possible to implement, as the burst time of every
job cannot be known in advance.
In this scheduling, there may be an overload on the CPU.
9. Multiple Queue Scheduling
Processes in the ready queue can be divided into different classes, where each class has its
own scheduling needs. For example, a common division is between foreground (interactive)
processes and background (batch) processes. These two classes have different scheduling
needs. For this kind of situation, Multilevel Queue Scheduling is used.
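The foreground/background division can be sketched with two queues and a fixed priority between them; the process names are invented, and a real system would also apply a scheduling algorithm within each queue (e.g. round robin for foreground, FCFS for background):

```python
from collections import deque

# Two ready queues with fixed priority: the background (batch) queue only
# runs when the foreground (interactive) queue is empty.
foreground = deque(["vim", "shell"])
background = deque(["backup", "report"])

executed = []
while foreground or background:
    if foreground:                       # foreground has absolute priority
        executed.append(foreground.popleft())
    else:
        executed.append(background.popleft())

print(executed)
```

With fixed priority like this, background jobs can starve if interactive work never stops arriving, which is why variants such as multilevel feedback queues exist.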