
Operating System

💡 By Ritik Parmar

An Operating System performs all the basic tasks such as managing files, processes, and memory. The operating system thus acts as the manager of all the resources, i.e., a resource manager, and serves as an interface between the user and the machine. It is one of the most essential pieces of software on any device: it sits between the application programs and the hardware.
There are several types of Operating Systems, many of which are described below. Let's have a look at them.

Types of Operating Systems


There are several types of Operating Systems which are mentioned below.

Batch Operating System

Multi-Programming System

Multi-Processing System

Multi-Tasking Operating System

Time-Sharing Operating System

Distributed Operating System

Network Operating System

Real-Time Operating System

1. Batch Operating System


This type of operating system does not interact with the computer directly. There is an operator who takes jobs having similar requirements and groups them into batches. It is the responsibility of the operator to sort jobs with similar needs.

[Figure: Batch Operating System]

Advantages of Batch Operating System

Although it is normally very difficult to guess the time required for a job to complete, the processors of a batch system know how long a job will take while it is in the queue.

Multiple users can share the batch systems.

The idle time for the batch system is very low.

It is easy to manage large work repeatedly in batch systems.

Disadvantages of Batch Operating System

The computer operators must be well acquainted with batch systems.

Batch systems are hard to debug.

It is sometimes costly.

The other jobs will have to wait for an unknown time if any job fails.

Examples of Batch Operating Systems: Payroll Systems, Bank Statements, etc.

2. Multi-Programming Operating System


In a multiprogramming operating system, more than one program is present in main memory at a time, and any one of them can be in execution. This is done primarily for better utilization of resources.

[Figure: Multiprogramming]
Advantages of Multi-Programming Operating System

Multi Programming increases the Throughput of the System.

It helps in reducing the response time.

Disadvantages of Multi-Programming Operating System

There is no facility for user interaction with the system while programs are running.

3. Multi-Processing Operating System


Multi-Processing Operating System is a type of operating system in which more than one CPU is used for executing processes. It improves the throughput of the system.

[Figure: Multiprocessing]
Advantages of Multi-Processing Operating System

It increases the throughput of the system.

Since it has several processors, if one processor fails, the system can proceed with another processor.

Disadvantages of Multi-Processing Operating System

Due to the multiple CPUs, it is more complex and somewhat harder to understand.

4. Multi-Tasking Operating System


A Multitasking Operating System is simply a multiprogramming operating system with the added facility of a round-robin scheduling algorithm. It can run multiple programs simultaneously.

There are two types of Multi-Tasking Systems which are listed below.

Preemptive Multi-Tasking

Cooperative Multi-Tasking

[Figure: Multitasking]

Advantages of Multi-Tasking Operating System

Multiple Programs can be executed simultaneously in Multi-Tasking Operating System.

It comes with proper memory management.

Disadvantages of Multi-Tasking Operating System

The system can overheat when multiple heavy programs run at once.

5. Time-Sharing Operating Systems


Each task is given some time to execute so that all the tasks work smoothly. Each user gets a share of CPU time, as they all use a single system. These systems are also known as multitasking systems. The tasks can come from a single user or from different users. The time that each task gets to execute is called a quantum. After this time interval is over, the OS switches to the next task.

[Figure: Time-Sharing OS]
Advantages of Time-Sharing OS

Each task gets an equal opportunity.

Fewer chances of duplication of software.

CPU idle time can be reduced.

Resource Sharing: Time-sharing systems allow multiple users to share hardware resources such as the CPU,
memory, and peripherals, reducing the cost of hardware and increasing efficiency.

Improved Productivity: Time-sharing allows users to work concurrently, thereby reducing the waiting time for their
turn to use the computer. This increased productivity translates to more work getting done in less time.

Improved User Experience: Time-sharing provides an interactive environment that allows users to communicate
with the computer in real time, providing a better user experience than batch processing.

Disadvantages of Time-Sharing OS

Reliability problem.

One must take care of the security and integrity of user programs and data.

Data communication problem.

High Overhead: Time-sharing systems have a higher overhead than other operating systems due to the need for
scheduling, context switching, and other overheads that come with supporting multiple users.

Complexity: Time-sharing systems are complex and require advanced software to manage multiple users
simultaneously. This complexity increases the chance of bugs and errors.

Security Risks: With multiple users sharing resources, the risk of security breaches increases. Time-sharing
systems require careful management of user access, authentication, and authorization to ensure the security of
data and software.

Examples of Time-Sharing OS with explanation

IBM VM/CMS: IBM VM/CMS is a time-sharing operating system that was first introduced in 1972. It is still in use today, providing a virtual machine environment that allows multiple users to run their own instances of operating systems and applications.

TSO (Time Sharing Option): TSO is a time-sharing operating system that was first introduced in the 1960s by
IBM for the IBM System/360 mainframe computer. It allowed multiple users to access the same computer
simultaneously, running their own applications.

Windows Terminal Services: Windows Terminal Services is a time-sharing operating system that allows multiple
users to access a Windows server remotely. Users can run their own applications and access shared resources,
such as printers and network storage, in real-time.

6. Distributed Operating System


These types of operating systems are a recent advancement in the world of computer technology and are being widely accepted all over the world, at a great pace. Various autonomous, interconnected computers communicate with each other using a shared communication network. Independent systems possess their own memory unit and CPU; these are referred to as loosely coupled systems or distributed systems. These systems' processors differ in size and function. The major benefit of working with this type of operating system is that a user can access files or software that are not actually present on his own system but on some other system connected within the network, i.e., remote access is enabled among the devices connected in that network.

[Figure: Distributed OS]
Advantages of Distributed Operating System

Failure of one will not affect the other network communication, as all systems are independent of each other.

Electronic mail increases the data exchange speed.

Since resources are being shared, computation is highly fast and durable.

Load on host computer reduces.

These systems are easily scalable as many systems can be easily added to the network.

Delay in data processing reduces.

Disadvantages of Distributed Operating System

Failure of the main network will stop the entire communication.

The languages used to establish distributed systems are not yet well defined.

These types of systems are not readily available, as they are very expensive. Moreover, the underlying software is highly complex and not yet well understood.

Examples of Distributed Operating Systems are LOCUS, etc.

7. Network Operating System
These systems run on a server and provide the capability to manage data, users, groups, security, applications, and
other networking functions. These types of operating systems allow shared access to files, printers, security,
applications, and other networking functions over a small private network. One more important aspect of network operating systems is that all users are well aware of the underlying configuration, of all other users within the network, and of their individual connections, which is why these computers are popularly known as tightly coupled systems.

[Figure: Network Operating System]


Advantages of Network Operating System

Highly stable centralized servers.

Security concerns are handled through servers.

New technologies and hardware upgrades are easily integrated into the system.

Server access is possible remotely from different locations and types of systems.

Disadvantages of Network Operating System

Servers are costly.

Users have to depend on a central location for most operations.

Maintenance and updates are required regularly.

Examples of Network Operating Systems are Microsoft Windows Server 2003, Microsoft Windows Server 2008,
UNIX, Linux, Mac OS X, Novell NetWare, BSD, etc.

8. Real-Time Operating System
These types of OSs serve real-time systems. The time interval required to process and respond to inputs is very small.
This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict like missile systems, air traffic
control systems, robots, etc.
Types of Real-Time Operating Systems

Hard Real-Time Systems: Hard real-time OSs are meant for applications where time constraints are very strict and even the shortest possible delay is unacceptable. These systems are built for life-saving functions, such as automatic parachutes or airbags, which must be readily available in case of an accident. Virtual memory is rarely found in these systems.

Soft Real-Time Systems: These OSs are for applications where time-constraint is less strict.

For more, refer to the Difference Between Hard Real-Time OS and Soft Real-Time OS.

[Figure: Real-Time Operating System]


Advantages of RTOS

Maximum Consumption: Maximum utilization of devices and systems, thus more output from all the resources.

Task Shifting: The time needed to shift between tasks in these systems is very short. For example, older systems take about 10 microseconds to shift from one task to another, while the latest systems take about 3 microseconds.

Focus on Application: More focus is placed on running applications and less importance is given to applications waiting in the queue.

Real-time operating systems in embedded systems: Since the size of the programs is small, an RTOS can also be used in embedded systems, such as in transport and other applications.

Error Free: These types of systems are designed to be error-free.

Memory Allocation: Memory allocation is best managed in these types of systems.

Disadvantages of RTOS

Limited Tasks: Very few tasks run at the same time, and the system concentrates on only a few applications in order to avoid errors.

Uses heavy system resources: These systems consume a lot of system resources, which is expensive as well.

Complex Algorithms: The algorithms are very complex and difficult for the designer to write.

Device drivers and interrupt signals: It needs specific device drivers and interrupt signals so that it can respond to interrupts as early as possible.

Thread Priority: Setting thread priorities is tricky, as these systems rarely switch tasks.

Examples of Real-Time Operating Systems are Scientific experiments, medical imaging systems, industrial control
systems, weapon systems, robots, air traffic control systems, etc.

What is a System Call?


A system call is a method by which a computer program requests a service from the kernel of the operating system on which it is running; in other words, it is a program's way of interacting with the operating system's kernel.
The Application Program Interface (API) connects the operating system's functions to user programs. It acts as a
link between the operating system and a process, allowing user-level programs to request operating system services.
The kernel system can only be accessed using system calls. System calls are required for any programs that use
resources.

How are system calls made?


When a computer program needs to access the operating system's kernel, it makes a system call. The system call
uses an API to expose the operating system's services to user programs. It is the only method to access the kernel
system. All programs or processes that require resources for execution must use system calls, as they serve as an
interface between the operating system and user programs.
Below are some examples of how a system call varies from a user function.

1. A system call function may create and use kernel processes to execute the asynchronous processing.

2. A system call has greater authority than a standard subroutine. A system call with kernel-mode privilege executes
in the kernel protection domain.

3. System calls are not permitted to use shared libraries or any symbols that are not present in the kernel protection
domain.

4. The code and data for system calls are stored in global kernel memory.

Why do you need system calls in Operating System?


There are various situations where system calls are required in the operating system. Some of these situations are as follows:

1. A system call is required when a file system wants to create or delete a file.

2. Network connections require system calls for sending and receiving data packets.

3. If you want to read or write a file, you need system calls.

4. If you want to access hardware devices, such as a printer or scanner, you need a system call.

5. System calls are used to create and manage new processes.
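As a small illustration (a hedged sketch using standard POSIX calls, not tied to any particular example in this text), the C program below makes three system calls: write() to output bytes, fork() to create a new process, and wait() to wait for the child:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    /* write() asks the kernel to send bytes to file descriptor 1 (stdout). */
    write(1, "hello via a system call\n", 24);

    /* fork() asks the kernel to create a new process. */
    pid_t pid = fork();
    if (pid == 0) {
        /* getpid() asks the kernel for this process's ID. */
        printf("child process, pid = %d\n", getpid());
        _exit(0);
    }
    wait(NULL);   /* wait() blocks until the child terminates */
    return 0;
}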

THREAD:
A thread is a single sequence stream within a process. Threads are also called lightweight processes, as they possess some of the properties of processes. Each thread belongs to exactly one process. In an operating system that supports multithreading, a process can consist of many threads.
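A minimal sketch using POSIX threads (the thread labels "A" and "B" are made up for illustration) shows two threads executing within one process:

#include <stdio.h>
#include <pthread.h>

/* Each thread runs this function; the argument is a label for the thread. */
void *run(void *arg)
{
    printf("thread %s running in the same process\n", (char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, run, "A");
    pthread_create(&t2, NULL, run, "B");
    pthread_join(t1, NULL);   /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}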

UNIT 2

Context Switching in OS (Operating System)


Context switching is a technique used by the operating system to switch the CPU from one process to another. When a switch is performed, the system saves the status of the old running process (its registers and other context) and assigns the CPU to a new process to execute its tasks. While the new process runs, the previous process waits in a ready queue. Execution of the old process later resumes from the point at which it was stopped. Context switching is a defining characteristic of a multitasking operating system, in which multiple processes share the same CPU to perform multiple tasks without the need for additional processors in the system.

PCB
When a process is created, the operating system creates a data structure to store the information of that process. This is known as the Process Control Block (PCB).

Process Control block (PCB) is a data structure that stores information of a process.

PCBs are stored in specially reserved memory for the operating system known as kernel space.

Note: The Random Access Memory (RAM) can be logically divided into two distinct regions, namely the kernel space and the user space. Kernel space is the core of the operating system; it normally has full access to all memory and machine hardware, and it can't be accessed directly by the user.

PCB is unique for every process which consists of various attributes such as process ID, priority, registers, program
counters, process states, list of open files, etc.
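Conceptually, a PCB can be sketched as a C structure; the field names and sizes below are hypothetical (real kernels, e.g. Linux's task_struct, are far larger):

/* Illustrative Process Control Block; field names are hypothetical. */
struct pcb {
    int            pid;             /* process ID                      */
    int            priority;        /* scheduling priority             */
    enum { NEW, READY, RUNNING, WAITING, TERMINATED } state;
    unsigned long  program_counter; /* saved PC for context switching  */
    unsigned long  registers[16];   /* saved general-purpose registers */
    int            open_files[16];  /* list of open file descriptors   */
    struct pcb    *next;            /* link in the ready queue         */
};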

Type of Process Schedulers


A scheduler is a type of system software that allows you to handle process scheduling.
There are mainly three types of Process Schedulers:

1. Long Term Scheduler

2. Short Term Scheduler

3. Medium Term Scheduler

Long Term Scheduler


The long-term scheduler is also known as the job scheduler. This scheduler selects processes from the job queue and loads them into memory for execution. It also regulates the degree of multiprogramming.
The main goal of this type of scheduler is to offer a balanced mix of jobs (processor-bound and I/O-bound) that allows multiprogramming to be managed.

Medium Term Scheduler
Medium-term scheduling is an important part of swapping: it handles the swapped-out processes. A running process can become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion, so in order to remove it from memory and make space for other processes, the suspended process is moved to secondary storage.

Short Term Scheduler


The short-term scheduler is also known as the CPU scheduler. The main goal of this scheduler is to boost system performance according to a chosen set of criteria. It selects one process from the group of processes that are ready to execute and allocates the CPU to it. The dispatcher then gives control of the CPU to the process selected by the short-term scheduler.

What are the different terminologies to take care of in any CPU Scheduling algorithm?

Arrival Time: Time at which the process arrives in the ready queue.

Completion Time: Time at which process completes its execution.

Burst Time: Time required by a process for CPU execution.

Turn Around Time: Time Difference between completion time and arrival time.

Turn Around Time = Completion Time  –  Arrival Time

Waiting Time(W.T): Time Difference between turn around time and burst time.

Waiting Time = Turn Around Time  –  Burst Time
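For example, if a process arrives at time 0, has a burst time of 5 units, and completes at time 9, then Turn Around Time = 9 - 0 = 9 and Waiting Time = 9 - 5 = 4.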

Things to take care of while designing a CPU Scheduling algorithm

Different CPU scheduling algorithms have different structures, and the choice of a particular algorithm depends on a variety of factors. Many criteria have been suggested for comparing CPU scheduling algorithms.
The criteria include the following:

CPU utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies from 40 to 90 percent depending on the system load.

Throughput: The number of processes completed per unit time is called throughput. Throughput may vary depending on the length or duration of the processes.

Turnaround Time: For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as turnaround time. Turnaround time includes the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.

Waiting Time: The scheduling algorithm does not affect the time required to complete a process once it has started executing. It only affects the waiting time of the process, i.e., the time the process spends waiting in the ready queue.

Response Time: In an interactive system, turnaround time is not the best criterion. A process may produce some output fairly early and continue computing new results while previous results are being shown to the user. Another measure, therefore, is the time from the submission of a request until the first response is produced. This measure is called response time.

What are the different types of CPU Scheduling Algorithms?


There are mainly two types of scheduling methods:

Preemptive Scheduling: Preemptive scheduling is used when a process switches from running state to ready
state or from the waiting state to the ready state.

Non-Preemptive Scheduling: Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state.

Different types of CPU Scheduling Algorithms


Let us now learn about these CPU scheduling algorithms in operating systems one by one:

1. First Come First Serve:


FCFS is considered the simplest of all operating system scheduling algorithms. The first come, first served scheduling algorithm states that the process that requests the CPU first is allocated the CPU first; it is implemented using a FIFO queue.

Characteristics of FCFS:

FCFS is inherently a non-preemptive CPU scheduling algorithm.

Tasks are always executed on a First-come, First-serve concept.

FCFS is easy to implement and use.

This algorithm is not much efficient in performance, and the wait time is quite high.

Advantages of FCFS:

Easy to implement

First come, first serve method

Disadvantages of FCFS:

FCFS suffers from Convoy effect.

The average waiting time is much higher than the other algorithms.

FCFS is very simple and easy to implement, and hence not very efficient.

To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on First come,
First serve Scheduling.
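As a hedged sketch (the burst times are made-up figures, and all processes are assumed to arrive at time 0), the C program below computes the average waiting and turnaround times under FCFS:

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};               /* hypothetical CPU burst times */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        /* Under FCFS, each process waits for all earlier bursts to finish. */
        total_wait += wait;
        total_tat  += wait + burst[i];      /* turnaround = waiting + burst */
        wait       += burst[i];
    }
    printf("average waiting time    = %.2f\n", (double)total_wait / n);
    printf("average turnaround time = %.2f\n", (double)total_tat / n);
    return 0;
}

With these bursts the program prints an average waiting time of 17.00 and an average turnaround time of 27.00, showing how one long job at the front of the queue inflates everyone's wait (the convoy effect).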

2. Shortest Job First(SJF):


Shortest job first (SJF) is a scheduling discipline that selects the waiting process with the smallest execution time to execute next. This scheduling method may or may not be preemptive. It significantly reduces the average waiting time for other processes waiting to be executed.

Characteristics of SJF:

Shortest Job first has the advantage of having a minimum average waiting time among all operating system
scheduling algorithms.

Each task is associated with the unit of time it needs to complete.

It may cause starvation if shorter processes keep coming. This problem can be solved using the concept of
ageing.

Advantages of Shortest Job first:

As SJF reduces the average waiting time, it is better than the first come, first served scheduling algorithm.

SJF is generally used for long term scheduling

Disadvantages of SJF:

One of the demerits of SJF is starvation.

Many times it becomes complicated to predict the length of the upcoming CPU request

To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Shortest Job
First.
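For example, if four processes arrive together with burst times 6, 8, 7, and 3, SJF runs them in the order 3, 6, 7, 8, giving waiting times 0, 3, 9, and 16 and an average waiting time of (0 + 3 + 9 + 16) / 4 = 7, whereas FCFS in the order 6, 8, 7, 3 would give an average waiting time of (0 + 6 + 14 + 21) / 4 = 10.25.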

3. Longest Job First(LJF):


The Longest Job First (LJF) scheduling algorithm is just the opposite of shortest job first (SJF): as the name suggests, the process with the largest burst time is processed first. Longest Job First is non-preemptive in nature.

Characteristics of LJF:

Among all the processes waiting in a waiting queue, CPU is always assigned to the process having largest burst
time.

If two processes have the same burst time then the tie is broken using FCFS i.e. the process that arrived first is
processed first.

LJF scheduling has both preemptive (LRTF) and non-preemptive variants.

Advantages of LJF:

No other task can be scheduled until the longest job or process executes completely.

All the jobs or processes finish at the same time approximately.

Disadvantages of LJF:

Generally, the LJF algorithm gives a very high average waiting time and average turn-around time for a given
set of processes.

This may lead to convoy effect.

To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the Longest job
first scheduling.

4. Priority Scheduling:
Preemptive Priority CPU Scheduling is a preemptive method of CPU scheduling that works based on the priority of a process. In this algorithm, the scheduler assigns each process a priority, and the most important (highest-priority) process is run first. In the case of a conflict, that is, when more than one process has the same priority, the algorithm works on the basis of FCFS (First Come First Serve).
Characteristics of Priority Scheduling:

Schedules tasks based on priority.

When a higher-priority task arrives while a lower-priority task is executing, the higher-priority task takes its place, and the lower-priority task is suspended until the higher-priority one completes.

The lower the number assigned, the higher the priority level of a process.

Advantages of Priority Scheduling:

The average waiting time is less than FCFS

Less complex

Disadvantages of Priority Scheduling:

One of the most common demerits of the preemptive priority CPU scheduling algorithm is the starvation problem, in which a low-priority process has to wait a long time before it gets scheduled onto the CPU.

To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Priority
Preemptive Scheduling algorithm.

5. Round robin:
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a fixed time slot. It is
the preemptive version of First come First Serve CPU Scheduling algorithm. Round Robin CPU Algorithm
generally focuses on Time Sharing technique.

Characteristics of Round robin:

It’s simple, easy to use, and starvation-free, as all processes get a balanced CPU allocation.

It is one of the most widely used methods in CPU scheduling.

It is considered preemptive as the processes are given to the CPU for a very limited time.

Advantages of Round robin:

Round robin seems to be fair as every process gets an equal share of CPU.

The newly created process is added to the end of the ready queue.

To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the Round
robin Scheduling algorithm.
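For example, with three processes having burst times 24, 3, and 3 and a time quantum of 4, the schedule is P1 (0–4), P2 (4–7), P3 (7–10), and then P1 runs its remaining 20 units; the waiting times are 6, 4, and 7, for an average of 17 / 3 ≈ 5.67.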

6. Shortest Remaining Time First:


Shortest remaining time first is the preemptive version of the Shortest job first which we have discussed earlier
where the processor is allocated to the job closest to completion. In SRTF the process with the smallest amount of
time remaining until completion is selected to execute.
Characteristics of Shortest remaining time first:

The SRTF algorithm makes the processing of jobs faster than the SJF algorithm, provided its overhead is not counted.

The context switch is done a lot more times in SRTF than in SJF and consumes the CPU’s valuable time for
processing. This adds up to its processing time and diminishes its advantage of fast processing.

Advantages of SRTF:

In SRTF the short processes are handled very fast.

The system also requires very little overhead since it only makes a decision when a process completes or a new
process is added.

Disadvantages of SRTF:

Like the shortest job first, it also has the potential for process starvation.

Long processes may be held off indefinitely if short processes are continually added.

To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the shortest
remaining time first.

7. Longest Remaining Time First:


The longest remaining time first is a preemptive version of the longest job first scheduling algorithm. This scheduling
algorithm is used by the operating system to program incoming processes for use in a systematic way. This algorithm
schedules those processes first which have the longest processing time remaining for completion.

Characteristics of longest remaining time first:

Among all the processes waiting in a waiting queue, the CPU is always assigned to the process having the largest
burst time.

If two processes have the same burst time then the tie is broken using FCFS i.e. the process that arrived first is
processed first.

LJF CPU Scheduling can be of both preemptive and non-preemptive types.

Advantages of LRTF:

No other process can execute until the longest task executes completely.

All the jobs or processes finish at the same time approximately.

Disadvantages of LRTF:

This algorithm gives a very high average waiting time and average turn-around time for a given set of
processes.

This may lead to a convoy effect.

To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the longest
remaining time first.

8. Highest Response Ratio Next:


Highest Response Ratio Next is a non-preemptive CPU Scheduling algorithm and it is considered as one of the most
optimal scheduling algorithms. The name itself states that we need to find the response ratio of all available processes
and select the one with the highest Response Ratio. A process once selected will run till completion.
Characteristics of Highest Response Ratio Next:

The criteria for HRRN is Response Ratio, and the mode is Non-Preemptive.

HRRN is considered as the modification of Shortest Job First to reduce the problem of starvation.

In comparison with SJF, during the HRRN scheduling algorithm, the CPU is allotted to the next process which has
the highest response ratio and not to the process having less burst time.

Response Ratio = (W + S) / S, where W is the waiting time of the process so far and S is the burst time of the process.
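For example, if a process has waited W = 10 units and needs a burst of S = 5 units, its response ratio is (10 + 5) / 5 = 3, while a newly arrived job always starts at ratio (0 + S) / S = 1; since the ratio grows as a job waits, long-waiting jobs eventually win the CPU, which is how HRRN reduces starvation.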

Advantages of HRRN:

HRRN Scheduling algorithm generally gives better performance than the shortest job first Scheduling.

There is a reduction in waiting time for longer jobs and also it encourages shorter jobs.

Disadvantages of HRRN:

HRRN scheduling is not practical to implement, because the burst time of every job cannot be known in advance.

This scheduling may impose an overhead on the CPU.

To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Highest
Response Ratio Next.

9. Multiple Queue Scheduling:


Processes in the ready queue can be divided into different classes where each class has its own scheduling needs.
For example, a common division is a foreground (interactive) process and a background (batch) process. These
two classes have different scheduling needs. For this kind of situation Multilevel Queue Scheduling is used.

The process classes can be described as follows:

System Processes: The operating system itself has processes to run, generally termed system processes.

Interactive Processes: An interactive process is a type of process that involves frequent interaction with the user.

Batch Processes: Batch processing is generally a technique in the Operating system that collects the programs
and data together in the form of a batch before the processing starts.

Advantages of multilevel queue scheduling:

The main merit of the multilevel queue is that it has a low scheduling overhead.

Disadvantages of multilevel queue scheduling:

Starvation problem

It is inflexible in nature

To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Multilevel
Queue Scheduling.

10. Multilevel Feedback Queue Scheduling:


Multilevel Feedback Queue (MLFQ) CPU scheduling is like multilevel queue scheduling, but here processes can move between the queues, making it much more efficient than multilevel queue scheduling.
Characteristics of Multilevel Feedback Queue Scheduling:

In a plain multilevel queue scheduling algorithm, processes are permanently assigned to a queue on entry to the system and are not allowed to move between queues.

As the processes are permanently assigned to a queue, that setup has the advantage of low scheduling overhead, but the disadvantage of being inflexible.

Advantages of Multilevel feedback queue scheduling:

It is more flexible

It allows different processes to move between different queues

Disadvantages of Multilevel feedback queue scheduling:

It also produces CPU overheads

It is the most complex algorithm.

To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Multilevel
Feedback Queue Scheduling.

Comparison between various CPU Scheduling algorithms


Here is a brief comparison between different CPU scheduling algorithms:

Each algorithm is summarized as: allocation; complexity; average waiting time (AWT); preemption; starvation; performance.

FCFS: the CPU is allocated according to the arrival time of the processes; simple and easy to implement; AWT large; no preemption; no starvation; slow performance.

SJF: based on the lowest CPU burst time (BT); more complex than FCFS; AWT smaller than FCFS; no preemption; starvation possible; minimum average waiting time.

LJFS: based on the highest CPU burst time (BT); more complex than FCFS; AWT depends on measures such as arrival time and process size; no preemption; starvation possible; big turnaround time.

LRTF: same as LJFS but preemptive; more complex than FCFS; AWT depends on measures such as arrival time and process size; preemptive; starvation possible; preference is given to longer jobs.

SRTF: same as SJF (lowest CPU burst time) but preemptive; more complex than FCFS; AWT depends on measures such as arrival time and process size; preemptive; starvation possible; preference is given to short jobs.

RR: processes run in arrival order with a fixed time quantum (TQ); complexity depends on the TQ size; AWT large compared to SJF and priority scheduling; preemptive; no starvation; each process gets a fairly fixed time.

Priority preemptive: according to priority, the higher-priority task executes first; less complex; AWT smaller than FCFS; preemptive; starvation possible; good performance but contains a starvation problem.

Priority non-preemptive: according to priority, while monitoring newly incoming higher-priority jobs; less complex than priority preemptive; AWT smaller than FCFS; no preemption; starvation possible; most beneficial with batch systems.

MLQ: according to the queue in which the process resides (higher-priority queue first); more complex than the priority scheduling algorithms; AWT smaller than FCFS; no preemption; starvation possible; good performance but contains a starvation problem.

MFLQ: according to the higher-priority queue in which the process resides; the most complex, though the complexity depends on the TQ size; AWT smaller than all scheduling types in many cases; no preemption; no starvation; good performance.

UNIT 3 :
DEADLOCK

A deadlock is a situation where a set of processes are blocked because each process is holding a resource and
waiting for another resource acquired by some other process.
Consider an example when two trains are coming toward each other on the same track and there is only one track,
none of the trains can move once they are in front of each other. A similar situation occurs in operating systems when
there are two or more processes that hold some resources and wait for resources held by others. For example, in the diagram below, Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, and Process 2 is waiting for Resource 1.

Examples Of Deadlock

1. The system has 2 tape drives. P1 and P2 each hold one tape drive and each needs another one.

2. Semaphores A and B are initialized to 1. P0 and P1 can enter deadlock as follows:

P0 executes wait(A) and is preempted.

P1 executes wait(B).

Now, when P0 executes wait(B) and P1 executes wait(A), both block, and P0 and P1 are in deadlock.

P0 P1

wait(A); wait(B)

wait(B); wait(A)

3. Assume 200 KB of space is available for allocation, and the following sequence of events occurs.

P0 P1

Request 80KB; Request 70KB;

Request 60KB; Request 80KB;

Deadlock occurs if both processes progress to their second request.
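The same deadlock can be reproduced with POSIX mutexes standing in for the semaphores A and B; in this hedged sketch the two threads lock the mutexes in opposite orders, so each can end up holding one lock while waiting forever for the other (the sleep() calls merely make that interleaving likely):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *p0(void *arg)
{
    pthread_mutex_lock(&A);   /* P0: wait(A) */
    sleep(1);                 /* give P1 time to grab B */
    pthread_mutex_lock(&B);   /* P0: wait(B) -- blocks while P1 holds B */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *p1(void *arg)
{
    pthread_mutex_lock(&B);   /* P1: wait(B) */
    sleep(1);                 /* give P0 time to grab A */
    pthread_mutex_lock(&A);   /* P1: wait(A) -- deadlock */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);   /* never returns once both threads block */
    pthread_join(t1, NULL);
    return 0;
}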

Deadlock can arise if the following four conditions hold simultaneously (Necessary Conditions)

Mutual Exclusion: Two or more resources are non-shareable (only one process can use a resource at a time).
Hold and Wait: A process is holding at least one resource and waiting for additional resources.

No Preemption: A resource cannot be taken from a process unless the process releases the resource.
Circular Wait: A set of processes wait for each other in circular form.

Methods for handling deadlock

There are three ways to handle deadlock:
1) Deadlock prevention or avoidance:

Prevention:
The idea is to not let the system enter a deadlock state. The system makes sure that the four necessary conditions mentioned above do not arise. These techniques are very costly, so we use them in cases where our priority is making the system deadlock-free.
Prevention is done by negating one of the above-mentioned necessary conditions for deadlock. It can be done in four different ways:

1. Eliminate mutual exclusion

2. Solve hold and wait

3. Allow preemption

4. Solve circular wait

Avoidance:

Avoidance is forward-looking: to use this strategy, we must assume that all information about the resources a process will need is known before the process executes. We use the Banker’s algorithm (a gift from Dijkstra) to avoid deadlock.
In prevention and avoidance, we get the correctness of data but performance decreases.

2) Deadlock detection and recovery: If deadlock prevention or avoidance is not applied in the software, then we can handle deadlock by detection and recovery, which consists of two phases:

1. In the first phase, we examine the state of the processes and check whether there is a deadlock in the system.

2. If a deadlock is found in the first phase, we apply an algorithm to recover from it.

In Deadlock detection and recovery, we get the correctness of data but performance decreases.
3) Deadlock ignorance: If deadlock is very rare, then let it happen and reboot the system. This is the approach that both Windows and UNIX take; the ostrich algorithm is used for deadlock ignorance.

With deadlock ignorance, performance is better than with the above two methods, but correctness of data is not guaranteed.

Safe State:
A safe state can be defined as a state in which there is no deadlock. A state is safe if:

Whenever a process needs an unavailable resource, it can wait until that resource is released by a process to which it has already been allocated; if no such sequence exists, the state is unsafe.

Following that sequence, all the requested resources can eventually be allocated to every process.

RACE CONDITION

A race condition occurs when two or more threads can access shared data and they try to change it at the same time.
Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the
threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread
scheduling algorithm, i.e. both threads are "racing" to access/change the data.

CRITICAL SECTION PROBLEM
A critical section is the part of a program that accesses shared resources. The resource may be any resource in the computer, such as a memory location, a data structure, the CPU, or any I/O device.

The critical section cannot be executed by more than one process at the same time; the operating system faces difficulty in allowing and disallowing processes to enter the critical section.

The critical section problem is about designing a set of protocols that ensure a race condition among processes never arises. In order to synchronize cooperating processes, our main task is to solve the critical section problem: we need to provide a solution that satisfies the following conditions.


Requirements of Synchronization mechanisms


Primary
1. Mutual Exclusion

Our solution must provide mutual exclusion. By mutual exclusion, we mean that if one process is executing inside the critical section, then no other process may enter the critical section.

2. Progress

Progress means that if one process does not need to execute in the critical section, it should not stop other processes from getting into the critical section.

Secondary
1. Bounded Waiting

We should be able to bound the waiting time for every process to get into the critical section; a process must not wait endlessly to enter the critical section.

2. Architectural Neutrality

Our mechanism must be architecturally neutral: if our solution works on one architecture, it should run on other architectures as well.

SEMAPHORE:
A semaphore can be described as an object that consists of a counter, a waiting list of processes, and Signal and Wait functions. The most basic use of a semaphore is to initialize it to 1. When a thread wants to enter a critical section, it calls down (wait) and enters the section. When another thread tries to do the same, the operating system puts it to sleep, because the value of the semaphore is already zero due to the previous call to down. When the first thread is finished with the critical section, it calls up (signal), which wakes up the other thread waiting to enter.
Logically, a semaphore S is an integer variable that, apart from initialization, can only be accessed through two atomic operations:

Wait(S) or P : If the semaphore value is greater than 0, decrement the value. Otherwise, wait until the value is
greater than 0 and then decrement it.

Signal(S) or V: Increment the value of the semaphore.
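A minimal sketch using POSIX semaphores, where sem_wait() plays the role of Wait/P/down and sem_post() the role of Signal/V/up (the shared counter and iteration count are assumptions chosen for illustration):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t mutex;     /* binary semaphore guarding the counter */
int counter = 0; /* shared data (the critical section)    */

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);   /* P / down: enter critical section */
        counter++;
        sem_post(&mutex);   /* V / up: leave critical section   */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1); /* initialize the semaphore to 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);  /* always 200000 with the semaphore */
    return 0;
}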

Advantages :

Semaphores are simple to implement and machine independent

Correctness is easy to determine

Semaphores can be used to acquire many resources simultaneously

Can have many different critical sections with different semaphores

No waste of resources due to busy waiting

Drawbacks :

Access to semaphores can come from anywhere in a program

There is no linguistic connection between the semaphore and the data to which the semaphore controls access

Improper use of semaphores can lead to deadlock or starvation

There is no control or guarantee of proper usage

Classical problems of Synchronization with Semaphore Solution
In this article, we will see a number of classical problems of synchronization as examples of a large class of concurrency-control problems. In our solutions to the problems, we use semaphores for synchronization, since that is the traditional way to present such solutions. However, actual implementations of these solutions could use mutex locks in place of binary semaphores.
These problems are used for testing nearly every newly proposed synchronization scheme. The following problems of
synchronization are considered as classical problems:

1. Bounded-buffer (or Producer-Consumer) Problem

2. Dining-Philosophers Problem

3. Readers and Writers Problem

4. Sleeping Barber Problem

These are summarized below; for detailed explanations, you can view the linked articles for each.

Bounded-buffer (or Producer-Consumer) Problem: The bounded buffer problem is also called the producer-consumer problem. The solution is to create two counting semaphores, “full” and “empty”, to keep track of the current number of full and empty buffer slots respectively. Producers produce items and consumers consume them, each using one buffer slot at a time. (A semaphore-based sketch appears after this list.)

Dining-Philosophers Problem: The dining philosophers problem has K philosophers seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its adjacent philosophers, but not both. This problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.

Readers and Writers Problem: Suppose that a database is to be shared among several concurrent processes.
Some of these processes may want only to read the database, whereas others may want to update (that is, to read
and write) the database. We distinguish between these two types of processes by referring to the former as
readers and to the latter as writers. In OS terms, this situation is called the readers-writers problem. The problem parameters are:

One set of data is shared among a number of processes. 

Once a writer is ready, it performs its write. Only one writer may write at a time. 

If a process is writing, no other process can read it. 

If at least one reader is reading, no other process can write. 

Readers may not write and only read. 

Sleeping Barber Problem: A barber shop has one barber, one barber chair, and N chairs to wait in. When there are no customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in. While the barber is cutting hair, new customers take the empty waiting seats, or leave if there is no vacancy.
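As a hedged sketch of the bounded-buffer solution mentioned above (buffer size, item values, and iteration counts are assumptions), the counting semaphores "empty" and "full" track free and filled slots while a binary semaphore guards the buffer itself:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define SIZE 8

int buffer[SIZE];
int in = 0, out = 0;      /* next slot to fill / to empty            */
sem_t empty, full, mutex; /* empty slots, full slots, buffer lock    */

void *producer(void *arg)
{
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);   /* wait for a free slot        */
        sem_wait(&mutex);   /* lock the buffer             */
        buffer[in] = item;
        in = (in + 1) % SIZE;
        sem_post(&mutex);
        sem_post(&full);    /* announce one more full slot */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 32; i++) {
        sem_wait(&full);    /* wait for an item            */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % SIZE;
        sem_post(&mutex);
        sem_post(&empty);   /* announce one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, SIZE);  /* all slots start empty */
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}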

PETERSON SOLUTION
Peterson's solution ensures mutual exclusion. It is implemented in user mode and requires no hardware support, so it can be implemented on any platform. Peterson's solution uses two variables: an interested flag for each process and a turn variable.

We will first see the Peterson solution algorithm and then see how any two processes get mutual exclusion using it.

#define N 2
#define TRUE 1
#define FALSE 0

int interested[N] = {FALSE, FALSE}; /* interested[i]: process i wants the critical section */
int turn;                           /* which process most recently offered the turn        */

void entry_section(int process)
{
    int other = 1 - process;        /* index of the other process */
    interested[process] = TRUE;     /* declare interest           */
    turn = process;                 /* offer the turn             */
    while (interested[other] == TRUE && turn == process)
        ;                           /* busy-wait                  */
}

void exit_section(int process)
{
    interested[process] = FALSE;    /* leave the critical section */
}

Explanation
There are two processes: the first has process number 0 and the second has process number 1.

So, if process 1 calls entry_section, then other = 1 - process = 1 - 1 = 0.

If process 0 calls it, then other = 1 - process = 1 - 0 = 1.

Since a process calls entry_section because it wants to execute the critical section, it sets interested[process] = TRUE.

So, if process 1 calls the entry section, then interested[1] = TRUE.

If process 0 calls the entry section, then interested[0] = TRUE.

After declaring that it is interested, the process sets the turn: if process 1 calls, then turn = 1.
Then, while (interested[other] == TRUE && turn == process); is executed.

In this line, the process checks whether the other process is interested. If interested[other] == TRUE, the other process may be executing (or about to enter) the critical section, so the calling process spins in the loop.

It keeps looping until the other process is no longer interested (or until the other process changes the turn variable). When the other process leaves its critical section, it sets its interested flag to FALSE; the loop condition then becomes false, and the waiting process enters the critical section. In this way, only one process may be inside the critical section at a time, so mutual exclusion is guaranteed in Peterson's solution. While exiting the critical section, the process sets its interested flag back to FALSE.

UNIT 4
Memory Management:

What is Memory Management?
In a multiprogramming computer, the Operating System resides in a part of memory and the rest is used by multiple
processes. The task of subdividing the memory among different processes is called Memory Management. Memory
management is a method in the operating system to manage operations between main memory and disk during
process execution. The main aim of memory management is to achieve efficient utilization of memory.

Why Memory Management is Required?


Allocate and de-allocate memory before and after process execution.

To keep track of used memory space by processes.

To minimize fragmentation issues.

To make proper utilization of main memory.

To maintain data integrity while a process executes.

Now we are discussing the concept of Logical Address Space and Physical Address Space

Logical and Physical Address Space


Logical Address Space: An address generated by the CPU is known as a “Logical Address”. It is also known as
a Virtual address. Logical address space can be defined as the size of the process. A logical address can be
changed.

Physical Address Space: An address seen by the memory unit (i.e the one loaded into the memory address
register of the memory) is commonly known as a “Physical Address”. A Physical address is also known as a Real
address. The set of all physical addresses corresponding to these logical addresses is known as Physical address
space. A physical address is computed by MMU. The run-time mapping from virtual to physical addresses is done
by a hardware device Memory Management Unit(MMU). The physical address always remains constant.

Static and Dynamic Loading


Loading a process into the main memory is done by a loader. There are two different types of loading :

Static Loading: Static Loading is basically loading the entire program into a fixed address. It requires more
memory space.

Dynamic Loading: Normally, the entire program and all data of a process must be in physical memory for the process to execute, so the size of a process is limited to the size of physical memory. To gain better memory utilization, dynamic loading is used: a routine is not loaded until it is called. All routines reside on disk in a relocatable load format. One advantage of dynamic loading is that a routine that is never used is never loaded; this is helpful when large amounts of code are needed only occasionally.

Static and Dynamic Linking

To perform a linking task a linker is used. A linker is a program that takes one or more object files generated by a
compiler and combines them into a single executable file.

Static Linking: In static linking, the linker combines all necessary program modules into a single executable
program. So there is no runtime dependency. Some operating systems support only static linking, in which system
language libraries are treated like any other object module.

Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, “Stub” is
included for each appropriate library routine reference. A stub is a small piece of code. When the stub is executed,
it checks whether the needed routine is already in memory or not. If not available then the program loads the
routine into memory.

Swapping
To be executed, a process must reside in main memory. Swapping means temporarily moving a process from main memory to secondary memory (main memory being fast compared to secondary memory). Swapping allows more processes to be run and fit into memory at one time. The main cost of swapping is transfer time, and the total time is directly proportional to the amount of memory swapped. Swapping is also known as roll-out, roll-in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority process. After the higher-priority work finishes, the lower-priority process is swapped back into memory and continues its execution.

[Figure: Swapping in memory management]

Memory Management with Monoprogramming (Without Swapping)


This is the simplest memory management approach. The memory is divided into two sections:

One part for the operating system

The second part for the user program

[Figure: Fence register separating the operating system from the user program]

In this approach, the operating system keeps track of the first and last location available for the allocation of the
user program

The operating system is loaded either at the bottom or at the top of memory

Interrupt vectors are often located in low memory; therefore it makes sense to load the operating system in low memory

Sharing of data and code does not make much sense in a single process environment

The Operating system can be protected from user programs with the help of a fence register.

Advantages of Memory Management


It is a simple management approach

Disadvantages of Memory Management


It does not support multiprogramming

Memory is wasted

Multiprogramming with Fixed Partitions (Without Swapping)


A memory partitioning scheme with a fixed number of partitions was introduced to support multiprogramming. This scheme is based on contiguous allocation.

Each partition is a block of contiguous memory

Memory is partitioned into a fixed number of partitions

Each partition is of fixed size

Example: As shown in the figure, memory is partitioned into 5 regions: one region is reserved for the operating system and the remaining four partitions are for user programs.
[Figure: Fixed Size Partitioning (operating system, p1, p2, p3, p4)]

Partition Table
Once partitions are defined, the operating system keeps track of the status of the memory partitions through a data structure called a partition table.

Starting Address of Partition | Size of Partition | Status
0 K   | 200 K | allocated
200 K | 100 K | free
300 K | 150 K | free
450 K | 250 K | allocated

Sample Partition Table

Logical vs Physical Address


An address generated by the CPU is commonly referred to as a logical address; the address seen by the memory unit is known as the physical address. A logical address can be mapped to a physical address by hardware with the help of a base register; this is known as dynamic relocation of a memory reference.
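For example, if the base (relocation) register holds 14000 and the CPU generates logical address 346, the MMU produces physical address 14000 + 346 = 14346.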

Contiguous Memory Allocation
Main memory must accommodate both the operating system and the various user processes; therefore, the allocation of memory becomes an important task in the operating system. Memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We normally need several user processes to reside in memory simultaneously, so we need to consider how to allocate available memory to the processes that are waiting in the input queue to be brought into memory. In contiguous memory allocation, each process is contained in a single contiguous segment of memory.

[Figure: Contiguous Memory Allocation]

Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient manner. One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, where each partition contains exactly one process. The degree of multiprogramming is thus determined by the number of partitions.

Multiple partition allocation: In this method, a process is selected from the input queue and loaded into the free
partition. When the process terminates, the partition becomes available for other processes.

Fixed partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory, known as a “hole”. When a process arrives and needs memory, we search for a hole that is large enough to store the process. If the requirement is fulfilled, we allocate memory to the process, keeping the rest available to satisfy future requests. While allocating memory, the dynamic storage allocation problem arises, which concerns how to satisfy a request of size n from a list of free holes. There are several solutions to this problem:

First Fit
In First Fit, the first available free hole that fulfils the requirement of the process is allocated.

[Figure: First Fit]

Here, in this diagram, the 40 KB memory block is the first available free hole that can store process A (size 25 KB), because the first two blocks do not have sufficient memory space.

Best Fit
In the Best Fit, allocate the smallest hole that is big enough to process requirements. For this, we search the entire
list, unless the list is ordered by size.

[Figure: Best Fit]

Here, in this example, we first traverse the complete list and find that the last hole, 25 KB, is the best suitable hole for process A (size 25 KB). With this method, memory utilization is maximum compared to other memory allocation techniques.

Worst Fit
In Worst Fit, we allocate the largest available hole to the process. This method produces the largest leftover hole.

[Figure: Worst Fit]
Here, in this example, process A (size 25 KB) is allocated to the largest available memory block, which is 60 KB. Inefficient memory utilization is a major issue with worst fit.
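As a hedged sketch (the hole sizes and the 25 KB request mirror the figures above but are otherwise arbitrary), a first-fit allocator simply scans the list of holes and takes the first one that is large enough:

#include <stdio.h>

/* Return the index of the first hole that fits, or -1 if none does. */
int first_fit(int holes[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (holes[i] >= request)
            return i;
    return -1;
}

int main(void)
{
    int holes[] = {10, 20, 40, 60};  /* free hole sizes in KB */
    int i = first_fit(holes, 4, 25);
    if (i >= 0) {
        printf("allocated from hole %d (%d KB)\n", i, holes[i]);
        holes[i] -= 25;              /* leftover becomes a smaller hole */
    }
    return 0;
}

Best fit would instead scan the whole list keeping the smallest adequate hole, and worst fit the largest; only the selection rule inside the loop changes.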

Fragmentation
Fragmentation occurs as processes are loaded into and removed from memory: each removal leaves behind a small free hole. These holes often cannot be assigned to new processes, either because they are not combined or because they do not meet the memory requirement of any process. To maintain a good degree of multiprogramming, we must reduce this waste of memory. Operating systems experience two types of fragmentation:

1. Internal fragmentation: Internal fragmentation occurs when a memory block allocated to a process is larger than the requested size, leaving some unused space inside the block. Example: suppose fixed partitioning is used and memory has free blocks of 3 MB, 6 MB, and 7 MB. A new process p4 of size 2 MB arrives and demands a block of memory. It gets the 3 MB block, but 1 MB of that block is wasted and cannot be allocated to any other process. This is called internal fragmentation.

2. External fragmentation: In external fragmentation, enough free memory exists in total, but we cannot assign it to a process because the free blocks are not contiguous. Example (continuing the above): three processes p1, p2, and p3 arrive with sizes 2 MB, 4 MB, and 7 MB, and are allocated the 3 MB, 6 MB, and 7 MB blocks respectively. After the allocations, p1 leaves 1 MB unused and p2 leaves 2 MB unused. If a new process p4 now demands a 3 MB block of memory, that much free memory is available in total, but we cannot assign it because the free space is not contiguous. This is called external fragmentation.

Both the first-fit and best-fit strategies suffer from external fragmentation. Compaction is one way to overcome it: all free memory is combined into one large block, which can then be used by other processes effectively.
Another possible solution to external fragmentation is to allow the logical address space of a process to be noncontiguous, permitting the process to be allocated physical memory wherever it is available.

Paging
Paging is a memory management scheme that eliminates the need for a contiguous allocation of physical memory.
This scheme permits the physical address space of a process to be non-contiguous.

Logical Address or Virtual Address (represented in bits): An address generated by the CPU

Logical Address Space or Virtual Address Space (represented in words or bytes): The set of all logical
addresses generated by a program

Physical Address (represented in bits): An address actually available on a memory unit

Physical Address Space (represented in words or bytes): The set of all physical addresses corresponding to
the logical addresses

Example:

If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)

If Logical Address Space = 128 M words = 2^7 × 2^20 words, then Logical Address = log2(2^27) = 27 bits

If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)

If Physical Address Space = 16 M words = 2^4 × 2^20 words, then Physical Address = log2(2^24) = 24 bits

The mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device; this mapping is known as the paging technique.

The Physical Address Space is conceptually divided into several fixed-size blocks, called frames.

The Logical Address Space is also split into fixed-size blocks, called pages.

Page Size = Frame Size

Let us consider an example:

Physical Address = 12 bits, then Physical Address Space = 4 K words

Logical Address = 13 bits, then Logical Address Space = 8 K words

Page size = frame size = 1 K words (assumption)
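
From these assumptions it follows that the page offset d = log2(1 K) = 10 bits, so the page number p = 13 − 10 = 3 bits (2^3 = 8 pages) and the frame number f = 12 − 10 = 2 bits (2^2 = 4 frames).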

Paging

The address generated by the CPU is divided into:

Page Number (p): the number of bits required to represent a page of the Logical Address Space, i.e., the page number.

Page Offset (d): the number of bits required to represent a particular word within a page (the page size of the Logical Address Space), i.e., the page offset.

Physical Address is divided into:

Frame Number (f): the number of bits required to represent a frame of the Physical Address Space, i.e., the frame number.

Frame Offset (d): the number of bits required to represent a particular word within a frame (the frame size of the Physical Address Space), i.e., the frame offset.
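
Putting the pieces together, here is a small Python sketch (illustrative only; the page-table contents are assumed values) of how a logical address is split into (p, d) and translated into a physical address, using the 1 K-word pages from the example above:

```python
PAGE_SIZE = 1024          # 1 K words, matching the example above
OFFSET_BITS = 10          # log2(PAGE_SIZE)

# Hypothetical page table: page number -> frame number (values assumed)
page_table = {0: 2, 1: 0, 2: 3, 3: 1}

def translate(logical):
    p = logical >> OFFSET_BITS         # page number (upper bits)
    d = logical & (PAGE_SIZE - 1)      # page offset (lower 10 bits)
    f = page_table[p]                  # frame number from the page table
    return (f << OFFSET_BITS) | d      # physical address = frame bits | offset

print(hex(translate(0x0C05)))          # page 3, offset 5 -> frame 1 -> 0x405
```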

The page table can be implemented in hardware using a set of dedicated registers, but registers are satisfactory only if the page table is small. If the page table contains a large number of entries, it is kept in main memory and we use a TLB (Translation Look-aside Buffer), a special, small, fast look-up hardware cache.

The TLB is an associative, high-speed memory.

Each entry in TLB consists of two parts: a tag and a value.

When this memory is searched, an item is compared with all tags simultaneously. If the item is found, the corresponding value is returned.

Page Map Table

Let the main memory access time be m. If the page table is kept in main memory, each data reference requires two memory accesses:

Effective access time = m (to access the page table) + m (to access the word itself) = 2m
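
As a follow-up (the standard textbook formula, not stated above): if a TLB with access time c and hit ratio h is added, only TLB misses pay for the extra page-table access:

Effective access time = h × (c + m) + (1 − h) × (c + 2m)

For example, with assumed values m = 100 ns, c = 20 ns, and h = 0.9: EAT = 0.9 × 120 + 0.1 × 220 = 130 ns.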

What is address binding in the operating system?


Address binding refers to the mapping of computer instructions and data to physical memory locations. Both logical and physical addresses are used in computer memory. Address binding assigns a physical memory region to a program by mapping its logical (virtual) addresses to physical addresses. It is a part of memory management that the OS performs on behalf of applications that require memory access.

Types of Address Binding in Operating System


There are mainly three types of address binding in the OS. These are as follows:

1. Compile Time Address Binding

2. Load Time Address Binding

3. Execution Time or Dynamic Address Binding

Compile Time Address Binding


This is the first type of address binding, in which the compiler is responsible for the binding. If it is known at compile time where the program will reside in memory, the compiler can fix addresses into the object code starting at that location. This binding is long-term and may only be changed by recompiling the program.

Load Time Address Binding


This type of binding is performed after the program is loaded into memory, by the operating system's memory manager (the loader). If memory addresses were fixed when the program was compiled, the compiled program could never be moved to another machine or memory region, because the addresses embedded in the executable might already be in use on the new system. With load-time binding, the program's logical addresses are therefore not bound to physical addresses until the program is loaded into memory.

Execution Time or Dynamic Address Binding


Execution-time (dynamic) address binding is the most common form of binding for scripts that are not compiled ahead of time, because binding is applied to variables as the program runs. When a variable is encountered while the script's instructions are being processed, the program requests memory space for that variable. The space remains assigned to the variable until the program finishes, or until a specific instruction in the script releases the memory address bound to the variable.

virtual memory

Virtual Memory:
Virtual memory is a memory management technique used by operating systems to provide an illusion of a larger
memory space than is physically available. It allows programs to use more memory than what is physically present in
the system. Virtual memory works by dividing the logical or virtual address space of a process into smaller units called
pages. These pages are then mapped to physical memory or storage.

Virtual Address Space:


Virtual address space is the range of addresses that a process can use to access memory. Each process has its own
virtual address space, which provides isolation and protection between processes. The virtual address space is
typically divided into multiple segments, such as the code segment, data segment, stack segment, and heap segment.

Paging Scheme:
Paging is a memory management scheme that divides the virtual address space and physical memory into fixed-size
blocks called pages and frames, respectively. The virtual memory is organized into a page table, which keeps track of
the mapping between virtual pages and physical frames. When a process accesses a virtual address, the page table is
consulted to determine the corresponding physical address.

Pure Segmentation:
Pure segmentation is a memory management scheme where the virtual address space is divided into variable-sized
segments. Each segment corresponds to a logical unit, such as code, data, stack, or heap. Segmentation provides
flexibility in managing memory, but it can lead to external fragmentation, where free memory is scattered in small
chunks, making it difficult to allocate contiguous memory blocks.

Segmentation with Paging Scheme (Hardware Support and Implementation Details):


Segmentation with paging scheme combines the benefits of both segmentation and paging. The virtual address space
is divided into segments, and each segment is further divided into fixed-size pages. The segment table contains the
base address and size of each segment, while the page table maps virtual pages to physical frames.

To implement segmentation with paging scheme, the hardware requires support for both segment registers and page
tables. The segment registers store the base address and size of each segment, and they are used to calculate the
starting point of a segment. The page table, similar to the paging scheme, maps virtual pages to physical frames.
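
As a sketch of this two-step lookup (all table contents below are assumed values, not from the source), the segment table bounds-checks the page number and supplies the segment's page table, which in turn supplies the frame:

```python
PAGE_SIZE = 256   # assumed page size

# Hypothetical segment table: segment number -> (page table, limit in pages)
segment_table = {
    0: ({0: 5, 1: 9}, 2),   # e.g. a code segment spanning 2 pages
    1: ({0: 7}, 1),         # e.g. a data segment spanning 1 page
}

def translate(segment, page, offset):
    page_table, limit = segment_table[segment]
    if page >= limit:                   # bounds check against the segment size
        raise MemoryError("segment overflow trap")
    frame = page_table[page]            # paging step within the segment
    return frame * PAGE_SIZE + offset

print(translate(0, 1, 17))              # frame 9 -> 9 * 256 + 17 = 2321
```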

Memory Fragmentation:
Memory fragmentation refers to the phenomenon where free memory is divided into small, non-contiguous blocks due
to allocation and deallocation of memory. There are two types of memory fragmentation:

1. External Fragmentation: It occurs when free memory is available in small non-contiguous chunks, making it
challenging to allocate large contiguous blocks of memory.

2. Internal Fragmentation: It occurs when allocated memory blocks are larger than what is required by the process.
This results in wasted memory within each block.

Demand Paging:
Demand paging is a technique used in virtual memory systems to bring pages into physical memory only when they
are required by a process. Instead of loading the entire program into memory at once, only the necessary pages are
loaded on-demand. This reduces the amount of memory required and allows for more efficient memory utilization.

Working Set Model:


The working set model is a concept used in demand paging systems to determine the set of pages actively used by a
process at any given time. The working set represents the minimum set of pages that should be present in memory to
avoid excessive page faults. By keeping the working set in memory, the system can optimize performance by reducing
the number of page faults.

Page Fault Frequency:
Page fault frequency refers to the rate at which page faults occur in a system. A page fault happens when a process
accesses a page that is not present in physical memory and needs to be fetched from secondary storage. High page
fault frequency indicates that the system is experiencing a high demand for memory, potentially causing performance
issues.

Thrashing:
Thrashing is a phenomenon in virtual memory systems where the system spends a significant amount of time
swapping pages between physical memory and disk, resulting in very low CPU utilization and poor performance. It
occurs when the system is constantly paging in and paging out pages at a high rate, often due to excessive demand
for memory or poor page replacement algorithms.

Page Replacement Algorithms:


Page replacement algorithms are used in virtual memory systems to decide which pages to evict from physical
memory when it is full and a new page needs to be loaded. Some commonly used page replacement algorithms are:

1. Optimal: The optimal algorithm selects the page that will not be used for the longest duration in the future.
However, it requires knowledge of future memory references, which is generally not feasible in real systems. It
serves as an upper bound for evaluating other algorithms.

2. FIFO (First-In-First-Out): The FIFO algorithm replaces the page that has been in memory the longest. It is easy to
implement but suffers from the "Belady's Anomaly," where increasing the number of page frames can lead to an
increase in the number of page faults.

3. LRU (Least Recently Used): The LRU algorithm replaces the page that has not been used for the longest duration.
It requires maintaining a timestamp or access counter for each page, making it more complex to implement than
FIFO. LRU performs better than FIFO and avoids the Belady's Anomaly.
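
To compare these policies concretely, here is a small Python sketch (illustrative; the reference string and frame count are assumptions) that counts page faults for FIFO and LRU:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.remove(queue.popleft())  # evict the oldest-loaded page
            frames.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, n_frames):
    frames, faults = OrderedDict(), 0           # keys ordered by recency of use
    for p in refs:
        if p in frames:
            frames.move_to_end(p)               # hit: mark as most recently used
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)      # evict the least recently used
            frames[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]  # assumed reference string
print("FIFO faults:", fifo_faults(refs, 3))     # -> 10
print("LRU  faults:", lru_faults(refs, 3))      # -> 9
```

With three frames, LRU incurs fewer faults (9) than FIFO (10) on this string, consistent with the comparison above.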

TLB (Translation Lookaside Buffer):


The Translation Lookaside Buffer (TLB) is a cache-like hardware component used in memory management to speed
up the translation of virtual addresses to physical addresses. It is a small, fast memory that stores recently used page
table entries, eliminating the need to access the main page table for every memory access. TLB provides faster
address translation and reduces the overhead of memory accesses, improving system performance.

UNIT - 5
file management

File Management:
File management involves organizing and controlling files in a computer system. It includes concepts, operations, and
structures for creating, accessing, and managing files stored on various storage devices.

Concepts:

File: A named collection of related information or data stored on secondary storage.

File System: A method or structure used to store and organize files on a storage device.

File Attributes: Metadata associated with a file, such as name, size, type, creation/modification dates, and
permissions.

File Operations: Actions performed on files, such as create, read, write, delete, open, close, and modify.

File Types: Different types of files, such as text files, binary files, executable files, and directories.

Naming:
Files are typically named using a combination of characters, numbers, and symbols. File naming conventions may vary
across different operating systems and environments. They often impose restrictions on allowed characters, length,
and case sensitivity.

Attributes:
File attributes are metadata associated with files to store additional information about them. Common attributes include the file name, size, type, creation/modification dates, ownership, and permissions.

Operations:

Create: Creating a new file.

Read: Reading the content of a file.

Write: Modifying or appending data to a file.

Delete: Removing a file from the file system.

Open: Opening a file for subsequent operations.

Close: Closing an opened file.

Modify: Changing file attributes or content.

File Organization & Access Methods:

Sequential Access: Reading or writing files in a sequential manner from the beginning to the end. It is suitable for
processing files in a linear order but inefficient for random access.

Direct Access: Allows reading or writing a file directly by specifying the desired location using an identifier such as a record or block number. It enables faster access but requires a file system that supports direct access (see the sketch after this list).

Index Sequential Access: Combines sequential and direct access methods. Files are divided into fixed-length
blocks, and an index is maintained for each block, enabling direct access to blocks while preserving sequential
access within blocks.
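
To contrast sequential and direct access in code, here is a small Python sketch (the file name and block size are assumptions):

```python
BLOCK = 4096  # assumed block size in bytes

with open("records.dat", "rb") as f:   # assumed example file
    block0 = f.read(BLOCK)             # sequential access: block 0,
    block1 = f.read(BLOCK)             # then block 1, and so on

    f.seek(7 * BLOCK)                  # direct access: jump straight to block 7
    block7 = f.read(BLOCK)
```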

Memory Mapped Files:


Memory mapped files provide a mechanism to map files directly into the virtual memory of a process. It allows
accessing files as if they were regular memory, enabling efficient read and write operations. Changes made to the
memory-mapped file are automatically synchronized with the underlying file on disk.
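
A brief sketch using Python's standard mmap module (the file name is an assumption; the file must already exist and be non-empty):

```python
import mmap

with open("data.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # length 0 maps the whole file
        first = mm[0]                      # read a byte as if from memory
        mm[0] = (first + 1) % 256          # write through the mapping
        mm.flush()                         # sync the change back to disk
```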

Directory Structures:

One Level: A flat directory structure where all files are stored in a single directory without any subdirectories.

Two Level: Files are organized into multiple directories, allowing a two-level hierarchy. Each file is associated with
a directory.

Hierarchical/Tree: Files are organized into a hierarchical or tree-like structure, with a single root directory and multiple subdirectories (see the traversal sketch after this list).

Acyclic Graph: Files are organized in a directed acyclic graph (DAG) structure, where directories and files are
represented as nodes, and links represent relationships.

General Graph: Files are organized in a general graph structure, allowing cyclic relationships between directories
and files.
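
For instance, a hierarchical (tree) structure can be traversed from its root with Python's standard os.walk (the directory name is an assumption):

```python
import os

# Walk a hypothetical directory tree rooted at "project/" and print every file path.
for dirpath, dirnames, filenames in os.walk("project"):
    for name in filenames:
        print(os.path.join(dirpath, name))   # full path from the root to the file
```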

File System Mounting:


File system mounting is the process of making a file system accessible by associating it with a mount point in the
directory hierarchy. When a file system is mounted, its root directory becomes accessible through the mount point,
allowing access to files and directories within the file system.

File Sharing:
File sharing involves providing simultaneous access to files by multiple users or processes. It can be achieved through
various methods such as file locking, access control mechanisms, and network file sharing protocols like NFS
(Network File System) and SMB (Server Message Block).

Path Name:
A path name is a string that specifies the location of a file or directory within a file system hierarchy. It typically consists
of a sequence of directory names separated by a delimiter, such as slashes ("/") in Unix-like systems and backslashes
("\") in Windows.

Directory Operations:
Directory operations include creating, deleting, renaming, and moving directories. They also involve listing the contents
of directories, changing the current working directory, and navigating through the directory hierarchy.

Overview of File Systems in Linux & Windows:
Linux:
Linux supports various file systems, including ext4 (the default for most Linux distributions), ext3, XFS, Btrfs, and
more. The file system is typically mounted to the directory hierarchy starting from the root ("/"). Linux file systems
support features like permissions, symbolic links, file attributes, and journaling for data consistency.

Windows:
Windows supports different file systems, including NTFS (New Technology File System), FAT32 (File Allocation Table),
and exFAT (Extended File Allocation Table). NTFS is the default file system for modern Windows versions. It provides
features like file and folder permissions, encryption, compression, and support for large file sizes and volumes.

Both Linux and Windows offer directory structures, file attributes, file operations, and file sharing capabilities. However,
the specific features, naming conventions, and behavior may vary between the two operating systems.

Learn more about advanced memory management techniques such as non-contiguous memory allocation and
virtual memory

Familiarize yourself with common page replacement algorithms and their strengths and weaknesses

Understand the role of TLB in memory management and how it speeds up address translation
