Operating System Notion
💡 By Ritik parmar
An Operating System performs all the basic tasks like managing files, processes, and memory. The operating system thus acts as the manager of all resources, i.e., the resource manager, and becomes an interface between the user and the machine. It is one of the most essential pieces of software present in a device.
An Operating System is software that works as an interface between the system programs and the hardware.
There are several types of Operating Systems, many of which are described below. Let’s have a look at them.
Batch Operating System
Multi-Programming System
Multi-Processing System
Multi-Tasking System
Time-Sharing Operating System
Distributed Operating System
Network Operating System
Real-Time Operating System
Batch Operating System
Disadvantages of Batch Operating System
It is very difficult to guess or know the time required for any job to complete; the processor of a batch system only learns how long a job will take once it is in the queue.
It is sometimes costly.
The other jobs will have to wait for an unknown time if any job fails.
Multi-Programming
Disadvantages of Multi-Programming Operating System
There is no facility for user interaction with system resources while the system runs.
Multiprocessing
Advantages of Multi-Processing Operating System
As it has several processors, if one processor fails, the system can proceed with another processor.
Disadvantages of Multi-Processing Operating System
Due to the multiple CPUs, it can be more complex and somewhat difficult to understand.
Multitasking
There are two types of Multi-Tasking Systems, which are listed below.
Preemptive Multi-Tasking
Cooperative Multi-Tasking
Time-Sharing OS
Advantages of Time-Sharing OS
Resource Sharing: Time-sharing systems allow multiple users to share hardware resources such as the CPU,
memory, and peripherals, reducing the cost of hardware and increasing efficiency.
Improved Productivity: Time-sharing allows users to work concurrently, thereby reducing the waiting time for their
turn to use the computer. This increased productivity translates to more work getting done in less time.
Improved User Experience: Time-sharing provides an interactive environment that allows users to communicate
with the computer in real time, providing a better user experience than batch processing.
Disadvantages of Time-Sharing OS
Reliability problems.
One must take care of the security and integrity of user programs and data.
High Overhead: Time-sharing systems have a higher overhead than other operating systems due to the need for
scheduling, context switching, and other overheads that come with supporting multiple users.
Complexity: Time-sharing systems are complex and require advanced software to manage multiple users
simultaneously. This complexity increases the chance of bugs and errors.
Security Risks: With multiple users sharing resources, the risk of security breaches increases. Time-sharing
systems require careful management of user access, authentication, and authorization to ensure the security of
data and software.
IBM VM/CMS: IBM VM/CMS is a time-sharing operating system that was first introduced in 1972. It is still in use today, providing a virtual machine environment that allows multiple users to run their own instances of operating systems and applications.
TSO (Time Sharing Option): TSO is a time-sharing operating system that was first introduced in the 1960s by
IBM for the IBM System/360 mainframe computer. It allowed multiple users to access the same computer
simultaneously, running their own applications.
Windows Terminal Services: Windows Terminal Services is a time-sharing operating system that allows multiple
users to access a Windows server remotely. Users can run their own applications and access shared resources,
such as printers and network storage, in real-time.
Distributed OS
Advantages of Distributed Operating System
Failure of one will not affect the other network communication, as all systems are independent of each other.
Since resources are being shared, computation is highly fast and durable.
These systems are easily scalable as many systems can be easily added to the network.
Disadvantages of Distributed Operating System
These types of systems are not readily available, as they are very expensive. Moreover, the underlying software is highly complex and not yet well understood.
7. Network Operating System
These systems run on a server and provide the capability to manage data, users, groups, security, applications, and
other networking functions. These types of operating systems allow shared access to files, printers, security,
applications, and other networking functions over a small private network. One more important aspect of Network Operating Systems is that all users are well aware of the underlying configuration, of all other users within the network, and of their individual connections; that is why these computers are popularly known as tightly coupled systems.
Advantages of Network Operating System
New technologies and hardware up-gradation are easily integrated into the system.
Server access is possible remotely from different locations and types of systems.
Examples of Network Operating Systems are Microsoft Windows Server 2003, Microsoft Windows Server 2008,
UNIX, Linux, Mac OS X, Novell NetWare, BSD, etc.
8. Real-Time Operating System
These types of OSs serve real-time systems. The time interval required to process and respond to inputs is very small.
This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict like missile systems, air traffic
control systems, robots, etc.
Types of Real-Time Operating Systems
Hard Real-Time Systems: Hard Real-Time OSs are meant for applications where time constraints are very strict
and even the shortest possible delay is not acceptable. These systems are built for saving life like automatic
parachutes or airbags which are required to be readily available in case of an accident. Virtual memory is rarely
found in these systems.
Soft Real-Time Systems: These OSs are for applications where time-constraint is less strict.
For more, refer to the Difference Between Hard Real-Time OS and Soft Real-Time OS.
Advantages of RTOS
Maximum Consumption: Maximum utilization of devices and systems, thus more output from all the resources.
Task Shifting: The time assigned for shifting tasks in these systems is very small. For example, older systems take about 10 microseconds to shift from one task to another, while the latest systems take about 3 microseconds.
Focus on Application: The focus is on running applications, with less importance given to applications waiting in the queue.
Real-time operating system in the embedded system: Since the size of programs is small, RTOS can also be
used in embedded systems like in transport and others.
Disadvantages of RTOS
Limited Tasks: Very few tasks can run at the same time, and concentration is kept on a few applications in order to avoid errors.
Heavy use of system resources: An RTOS consumes a lot of system resources, which are expensive as well.
Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
Device drivers and interrupt signals: An RTOS needs specific device drivers and interrupt signals so that it can respond to interrupts as quickly as possible.
Thread Priority: Setting thread priorities is difficult, as these systems switch tasks very rarely.
Examples of real-time applications are scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.
SYSTEM CALL:
A system call is the programmatic way in which a program requests a service from the operating system's kernel. Some properties of system calls:
1. A system call function may create and use kernel processes to execute the asynchronous processing.
2. A system call has greater authority than a standard subroutine. A system call with kernel-mode privilege executes
in the kernel protection domain.
3. System calls are not permitted to use shared libraries or any symbols that are not present in the kernel protection
domain.
4. The code and data for system calls are stored in global kernel memory.
System calls are required in situations such as the following:
1. Network connections require system calls for sending and receiving data packets.
2. If you want to read or write a file, you need system calls.
3. If you want to access hardware devices, such as a printer or scanner, you need a system call.
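To make this concrete, here is a minimal sketch in C, assuming a POSIX system such as Linux and a hypothetical file named example.txt: it invokes the open, read, write, and close system calls directly to copy a file to standard output.

```c
#include <fcntl.h>    /* open */
#include <unistd.h>   /* read, write, close */

int main(void)
{
    /* open() traps into the kernel, which opens the file on our behalf */
    int fd = open("example.txt", O_RDONLY);
    if (fd < 0)
        return 1;     /* the kernel refused the request */

    char buf[4096];
    ssize_t n;
    /* each read()/write() is a separate system call into the kernel */
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);

    close(fd);        /* release the kernel resources held for this file */
    return 0;
}
```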
THREAD:
A thread is a single sequence stream within a process. Threads are also called lightweight processes, as they possess some of the properties of processes. Each thread belongs to exactly one process. In an operating system that supports multithreading, a process can consist of many threads.
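As a minimal illustration (a sketch assuming a POSIX system with pthreads, compiled with -pthread), the following C program creates two threads inside one process; both share the process's address space.

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function; all threads of the process share its memory. */
void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d running inside the same process\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[2];
    int ids[2] = {0, 1};

    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, &ids[i]);  /* spawn a thread */

    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);                      /* wait for it to finish */

    return 0;
}
```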
UNIT 2
PCB
When a process is created, the operating system creates a data structure to store information about that process. This data structure is known as the Process Control Block (PCB).
PCBs are stored in specially reserved memory for the operating system known as kernel space.
Note: Random Access Memory (RAM) can be logically divided into two distinct regions, namely kernel space and user space. Kernel space is the core of the operating system; it normally has full access to all memory and machine hardware, and it cannot be accessed by user programs.
The PCB is unique for every process and consists of various attributes such as the process ID, priority, registers, program counter, process state, list of open files, etc.
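Real kernels define this structure in their own way (Linux, for example, uses task_struct), so the C sketch below is only illustrative; every field name is hypothetical, chosen to mirror the attributes listed above.

```c
/* Illustrative PCB sketch; real kernels differ and these field names are made up. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int            pid;             /* process ID */
    proc_state_t   state;           /* current process state */
    int            priority;        /* scheduling priority */
    unsigned long  program_counter; /* saved program counter */
    unsigned long  registers[16];   /* saved general-purpose registers */
    void          *page_table;      /* memory-management information */
    int            open_files[32];  /* list of open file descriptors */
    struct pcb    *next;            /* link used by scheduler queues */
};
```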
Medium Term Scheduler
Medium-term scheduling is an important part of swapping: it handles the swapped-out processes. A running process can become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion, so in order to remove it from memory and make space for other processes, the suspended process is moved to secondary storage.
Turn Around Time: Time Difference between completion time and arrival time.
Waiting Time(W.T): Time Difference between turn around time and burst time.
CPU utilization: The main purpose of any CPU algorithm is to keep the CPU as busy as possible. Theoretically,
CPU usage can range from 0 to 100 but in a real-time system, it varies from 40 to 90 percent depending on the
system load.
Throughput: Throughput is the number of processes executed and completed per unit of time. It may vary depending on the length or duration of the processes.
Turnaround Time: For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the total of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
Waiting Time: The scheduling algorithm does not affect the time required to complete a process once it has started executing. It only affects the waiting time of a process, i.e., the time the process spends waiting in the ready queue.
Response Time: In an interactive system, turnaround time is not the best criterion. A process may produce some output early and continue computing new results while previous results are being shown to the user. Another measure, therefore, is the time from the submission of a request until the first response is produced. This measure is called response time.
Preemptive Scheduling: Preemptive scheduling is used when a process switches from running state to ready
state or from the waiting state to the ready state.
Characteristics of FCFS:
This algorithm is not very efficient in performance, and the wait time is quite high.
Advantages of FCFS:
Easy to implement
Disadvantages of FCFS:
The average waiting time is much higher than in the other algorithms.
FCFS is very simple and easy to implement, and hence not very efficient.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on First come,
First serve Scheduling.
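As a rough sketch of the idea (with made-up arrival and burst times, and assuming the processes are already sorted by arrival time), the following C program runs processes in arrival order and computes the turnaround time (completion minus arrival) and waiting time (turnaround minus burst) defined earlier.

```c
#include <stdio.h>

/* FCFS scheduling: processes run in arrival order; sample values are made up. */
int main(void)
{
    int arrival[] = {0, 1, 2};   /* arrival times, sorted */
    int burst[]   = {5, 3, 8};   /* CPU burst times */
    int n = 3, time = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i])       /* CPU idles until the process arrives */
            time = arrival[i];
        time += burst[i];            /* completion time of process i */
        int tat = time - arrival[i]; /* turnaround = completion - arrival */
        int wt  = tat - burst[i];    /* waiting = turnaround - burst */
        printf("P%d: completion=%d turnaround=%d waiting=%d\n",
               i, time, tat, wt);
    }
    return 0;
}
```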
Characteristics of SJF:
Shortest Job first has the advantage of having a minimum average waiting time among all operating system
scheduling algorithms.
It may cause starvation of longer processes if shorter processes keep arriving. This problem can be solved using the concept of ageing.
As SJF reduces the average waiting time thus, it is better than the first come first serve scheduling algorithm.
Disadvantages of SJF:
Many times it becomes complicated to predict the length of the upcoming CPU request.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Shortest Job
First.
Characteristics of LJF:
Among all the processes waiting in a waiting queue, CPU is always assigned to the process having largest burst
time.
If two processes have the same burst time then the tie is broken using FCFS i.e. the process that arrived first is
processed first.
Advantages of LJF:
No other task can be scheduled until the longest job or process has executed completely.
Disadvantages of LJF:
Generally, the LJF algorithm gives a very high average waiting time and average turn-around time for a given
set of processes.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the Longest job
first scheduling.
4. Priority Scheduling:
Preemptive Priority CPU Scheduling Algorithm is a pre-emptive CPU scheduling method that works based on the priority of a process. In this algorithm, the scheduler treats each process's priority as its importance, meaning that the most important process must be executed first. In the case of a conflict, that is, where there is more than one process with equal priority, the algorithm falls back on the FCFS (First Come First Serve) order.
Characteristics of Priority Scheduling:
When higher-priority work arrives while a task with lower priority is executing, the higher-priority work takes the place of the lower-priority one, which is paused until the higher-priority execution completes.
Less complex
One of the most common demerits of the preemptive priority CPU scheduling algorithm is the starvation problem: a process may have to wait a very long time before it gets scheduled onto the CPU.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Priority
Preemptive Scheduling algorithm.
5. Round robin:
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a fixed time slot. It is
the preemptive version of First come First Serve CPU Scheduling algorithm. Round Robin CPU Algorithm
generally focuses on Time Sharing technique.
It’s simple, easy to use, and starvation-free, as all processes get a balanced CPU allocation.
It is considered preemptive as the processes are given to the CPU for a very limited time.
Round robin seems to be fair as every process gets an equal share of CPU.
The newly created process is added to the end of the ready queue.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the Round
robin Scheduling algorithm.
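A minimal sketch of the idea in C, assuming all processes arrive at time 0 and using made-up burst times and a quantum of 2: the loop cycles through the ready processes and gives each at most one quantum per pass.

```c
#include <stdio.h>

/* Round Robin with a fixed time quantum; all processes assumed to arrive at 0. */
int main(void)
{
    int burst[] = {5, 3, 8};          /* remaining burst times (sample values) */
    int n = 3, quantum = 2, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0)
                continue;             /* process already finished */
            int slice = burst[i] < quantum ? burst[i] : quantum;
            time += slice;            /* run the process for one time slot */
            burst[i] -= slice;
            if (burst[i] == 0) {
                done++;
                printf("P%d completes at time %d\n", i, time);
            }
        }
    }
    return 0;
}
```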
The SRTF algorithm makes the processing of jobs faster than the SJF algorithm, provided its overhead is not counted.
The context switch is done a lot more times in SRTF than in SJF and consumes the CPU’s valuable time for
processing. This adds up to its processing time and diminishes its advantage of fast processing.
Advantages of SRTF:
The system also requires very little overhead since it only makes a decision when a process completes or a new
process is added.
Disadvantages of SRTF:
Like the shortest job first, it also has the potential for process starvation.
Long processes may be held off indefinitely if short processes are continually added.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the shortest
remaining time first.
Characteristics of LRTF:
Among all the processes waiting in the waiting queue, the CPU is always assigned to the process having the largest remaining burst time, preemptively.
If two processes have the same burst time then the tie is broken using FCFS i.e. the process that arrived first is
processed first.
Advantages of LRTF:
No other process can execute until the longest task executes completely.
Disadvantages of LRTF:
This algorithm gives a very high average waiting time and average turn-around time for a given set of
processes.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the longest
remaining time first.
HRRN is considered a modification of Shortest Job First that reduces the problem of starvation.
In comparison with SJF, in HRRN scheduling the CPU is allotted to the process with the highest response ratio, not simply to the process with the shortest burst time.
Response Ratio = (W + S) / S
Here, W is the waiting time of the process so far and S is the burst time of the process.
Advantages of HRRN:
HRRN Scheduling algorithm generally gives better performance than the shortest job first Scheduling.
There is a reduction in waiting time for longer jobs and also it encourages shorter jobs.
Disadvantages of HRRN:
An exact implementation of HRRN scheduling is not possible in practice, as it is not possible to know the burst time of every job in advance.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Highest
Response Ratio Next.
System Processes: The CPU itself has its own processes to run, generally termed system processes.
Interactive Processes: An interactive process is a type of process that requires user interaction.
Batch Processes: Batch processing is a technique in which the operating system collects programs and data together in the form of a batch before processing starts.
The main merit of the multilevel queue is that it has low scheduling overhead. Its demerits are the starvation problem and its inflexible nature.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Multilevel
Queue Scheduling.
In a plain multilevel queue, processes are permanently assigned to a queue, which gives the advantage of low scheduling overhead but makes the setup inflexible; the multilevel feedback queue instead allows processes to move between queues, which makes it more flexible.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Multilevel
Feedback Queue Scheduling.
| Algorithm | Allocation is | Complexity | Average waiting time (AWT) | Preemption | Starvation | Performance |
| --- | --- | --- | --- | --- | --- | --- |
| FCFS | According to the arrival time of the processes, the CPU is allocated. | Simple and easy to implement | Large | No | No | Slow performance |
| SJF | Based on the lowest CPU burst time (BT). | More complex than FCFS | Smaller than FCFS | No | Yes | Minimum average waiting time |
| LJFS | Based on the highest CPU burst time (BT). | More complex than FCFS | Depends on some measures, e.g., arrival time, process size, etc. | No | Yes | Big turn-around time |
| LRTF | Same as LJFS; the allocation of the CPU is based on the highest CPU burst time (BT), but it is preemptive. | More complex than FCFS | Depends on some measures, e.g., arrival time, process size, etc. | Yes | Yes | Preference is given to the longer jobs |
| SRTF | Same as SJF; the allocation of the CPU is based on the lowest CPU burst time (BT), but it is preemptive. | More complex than FCFS | Depends on some measures, e.g., arrival time, process size, etc. | Yes | Yes | Preference is given to the short jobs |
| RR | According to the order in which the processes arrive, with a fixed time quantum (TQ). | The complexity depends on the time quantum size | Large as compared to SJF and priority scheduling | Yes | No | Each process is given a fairly fixed time |
| Priority pre-emptive | According to the priority; the higher-priority task executes first. | Less complex | Smaller than FCFS | Yes | Yes | Good performance, but contains a starvation problem |
| Priority non-preemptive | According to the priority, while monitoring new incoming higher-priority jobs. | Less complex than priority preemptive | Smaller than FCFS | No | Yes | Most beneficial with batch systems |
UNIT 3 :
DEADLOCK
A deadlock is a situation where a set of processes are blocked because each process is holding a resource and
waiting for another resource acquired by some other process.
Consider an example when two trains are coming toward each other on the same track and there is only one track,
none of the trains can move once they are in front of each other. A similar situation occurs in operating systems when
there are two or more processes that hold some resources and wait for resources held by other(s). For example, in the below diagram, Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, and
process 2 is waiting for resource 1.
Examples Of Deadlock
1. The system has 2 tape drives. P1 and P2 each hold one tape drive and each needs the other one.
2. Semaphores A and B, each initialized to 1: P0 executes wait(A) and is then preempted; P1 executes wait(B). Each process now blocks on its second wait, because the other process holds the semaphore it needs:
P0          P1
wait(A);    wait(B);
wait(B);    wait(A);
3. Assume 200K bytes of space are available for allocation, and a sequence of events occurs in which processes P0 and P1 each obtain part of the memory and then each requests more than the amount that remains free. Both block, each waiting for the other to release memory.
Deadlock can arise if the following four conditions hold simultaneously (Necessary Conditions)
Mutual Exclusion: Two or more resources are non-shareable (Only one process can use at a time)
Hold and Wait: A process is holding at least one resource and waiting for additional resources held by other processes.
No Preemption: A resource cannot be taken from a process unless the process releases the resource.
Circular Wait: A set of processes waiting for each other in circular form.
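To see all four conditions arise in practice, here is a small C sketch (assuming POSIX threads; compile with -pthread) in which two threads each hold one lock and wait for the other's lock, producing a circular wait; the program never terminates. Acquiring the locks in one fixed global order in both threads would break the circular wait and prevent the deadlock.

```c
#include <pthread.h>
#include <unistd.h>

/* Two locks standing in for Resource 1 and Resource 2. */
pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

void *process1(void *arg)
{
    pthread_mutex_lock(&r1);   /* hold Resource 1 */
    sleep(1);                  /* give process2 time to grab Resource 2 */
    pthread_mutex_lock(&r2);   /* wait for Resource 2 -> blocks forever */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *process2(void *arg)
{
    pthread_mutex_lock(&r2);   /* hold Resource 2 */
    sleep(1);
    pthread_mutex_lock(&r1);   /* wait for Resource 1 -> blocks forever */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process1, NULL);
    pthread_create(&t2, NULL, process2, NULL);
    pthread_join(t1, NULL);    /* never returns: the threads are deadlocked */
    pthread_join(t2, NULL);
    return 0;
}
```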
There are three ways to handle deadlock
1) Deadlock prevention or avoidance:
Prevention:
The idea is to not let the system enter a deadlocked state. The system makes sure that the four conditions mentioned above never arise together. These techniques are very costly, so we use them in cases where our priority is making the system deadlock-free.
One can zoom into each category individually: prevention is done by negating one of the above-mentioned necessary conditions for deadlock, so prevention can be done in four different ways, one per condition.
Avoidance:
Avoidance is kind of futuristic: by using the strategy of “Avoidance”, we have to make an assumption. We need to ensure that all information about the resources a process will need is known to us before the execution of the process. We use the Banker’s algorithm (which is, in turn, a gift from Dijkstra) to avoid deadlock.
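As an illustration, here is a compact C sketch of the safety check at the heart of the Banker's algorithm (not the full request-granting procedure); the Available, Max, and Allocation matrices are made-up example data.

```c
#include <stdio.h>

#define P 3  /* number of processes */
#define R 2  /* number of resource types */

/* Safety algorithm from the Banker's algorithm, run on illustrative data. */
int main(void)
{
    int available[R] = {3, 2};
    int max[P][R]    = {{7, 4}, {3, 2}, {5, 3}};
    int alloc[P][R]  = {{0, 1}, {2, 0}, {2, 1}};
    int need[P][R], finish[P] = {0};

    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];  /* Need = Max - Allocation */

    int progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i])
                continue;
            int ok = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > available[j])
                    ok = 0;                        /* cannot satisfy it now */
            if (ok) {
                for (int j = 0; j < R; j++)
                    available[j] += alloc[i][j];   /* it runs, then releases all */
                finish[i] = 1;
                progress = 1;
                printf("P%d can finish\n", i);
            }
        }
    }

    for (int i = 0; i < P; i++)
        if (!finish[i]) {
            printf("unsafe state\n");
            return 1;
        }
    printf("safe state\n");
    return 0;
}
```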
In prevention and avoidance, we get the correctness of data but performance decreases.
2) Deadlock detection and recovery: If deadlock prevention or avoidance is not applied to the software, then we can handle deadlocks by detection and recovery, which consists of two phases:
1. In the first phase, we examine the state of the processes and check whether there is a deadlock in the system.
2. If a deadlock is found in the first phase, we apply an algorithm to recover from it.
In deadlock detection and recovery, we get correctness of data but performance decreases.
3) Deadlock ignorance: If deadlocks are very rare, then let them happen and reboot the system. This is the approach that both Windows and UNIX take; the "ostrich algorithm" is the name for deadlock ignorance.
With deadlock ignorance, performance is better than with the above two methods, but correctness of data is not guaranteed.
Safe State:
A safe state can be defined as a state in which there is no deadlock. It is achievable if:
If a process needs an unavailable resource, it may wait until the same resource is released by a process to which it has already been allocated. If such a sequence of allocations does not exist, the state is unsafe.
RACE CONDITION
A race condition occurs when two or more threads can access shared data and they try to change it at the same time.
Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the
threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread
scheduling algorithm, i.e. both threads are "racing" to access/change the data.
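A classic demonstration in C (assuming POSIX threads; compile with -pthread): two threads increment a shared counter without any synchronization, so increments are lost and the final value is usually below the expected 2000000.

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;   /* shared data, updated without any lock */

void *increment(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;  /* read-modify-write: not atomic, so updates can be lost */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but the race usually produces a smaller value. */
    printf("counter = %ld\n", counter);
    return 0;
}
```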
CRITICAL SECTION PROBLEM
A critical section is the part of a program which tries to access shared resources. The resource may be any resource in a computer, such as a memory location, a data structure, the CPU, or any I/O device.
The critical section cannot be executed by more than one process at the same time; the operating system faces difficulty in allowing and disallowing processes to enter the critical section.
The critical section problem is to design a set of protocols which ensure that a race condition among the processes never arises.
In order to synchronize the cooperative processes, our main task is to solve the critical section problem. We need to
provide a solution in such a way that the following conditions can be satisfied.
Primary
1. Mutual Exclusion
Our solution must provide mutual exclusion: if one process is executing inside its critical section, then no other process may enter the critical section.
2. Progress
Progress means that a process that does not need to execute in the critical section should not stop other processes from getting into the critical section.
Secondary
1. Bounded Waiting
We should be able to predict the waiting time for every process to get into the critical section. A process must not wait endlessly to get into the critical section.
2. Architectural Neutrality
Our mechanism must be architecturally neutral. It means that if our solution works on one architecture, then it should also run on other architectures as well.
SEMAPHORE:
A semaphore can be described as an object that consists of a counter, a waiting list of processes, and Signal and Wait functions. The most basic use of a semaphore is to initialize it to 1. When a thread wants to enter a critical section, it calls down (wait) and enters the section. When another thread tries to do the same thing, the operating system puts it to sleep, because the value of the semaphore is already zero due to the previous call to down. When the first thread is finished with the critical section, it calls up (signal), which wakes up the other thread that is waiting to enter.
Logically, a semaphore S is an integer variable that, apart from initialization, can only be accessed through two atomic operations:
Wait(S) or P: If the semaphore value is greater than 0, decrement the value. Otherwise, wait until the value is greater than 0 and then decrement it.
Signal(S) or V: Increment the semaphore value and, if any processes are sleeping on the semaphore, wake one of them up.
Advantages :
Semaphores are simple to implement and machine independent
Drawbacks :
There is no linguistic connection between the semaphore and the data to which the semaphore controls access.
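As a minimal sketch, assuming a Linux system with POSIX unnamed semaphores (compile with -pthread), the following C program initializes a binary semaphore to 1 and uses sem_wait (the down/P operation) and sem_post (the up/V operation) around a critical section.

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;  /* binary semaphore guarding the critical section */

void *task(void *arg)
{
    sem_wait(&mutex);          /* P / wait: decrement or block at zero */
    printf("thread %ld in critical section\n", (long)arg);
    sem_post(&mutex);          /* V / signal: increment, wake a waiter */
    return NULL;
}

int main(void)
{
    sem_init(&mutex, 0, 1);    /* initialize the counter to 1 */

    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, (void *)1L);
    pthread_create(&t2, NULL, task, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    sem_destroy(&mutex);
    return 0;
}
```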
In this article, we will see number of classical problems of synchronization as examples of a large class of
concurrency-control problems. In our solutions to the problems, we use semaphores for synchronization, since that is
the traditional way to present such solutions. However, actual implementations of these solutions could
use mutex locks in place of binary semaphores.
These problems are used for testing nearly every newly proposed synchronization scheme. The following problems of synchronization are considered classical problems. They are summarized here; for a detailed explanation, you can view the linked articles for each.
Bounded-Buffer (or Producer-Consumer) Problem: The bounded buffer problem is also called the producer-consumer problem. The solution to this problem is to create two counting semaphores, "full" and "empty", to keep track of the current number of full and empty buffers respectively. Producers produce items and consumers consume them, but both use one of the buffer slots each time (a semaphore-based sketch appears after this list).
Dining-Philosophers Problem: The dining philosophers problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by only one of its two adjacent philosophers at a time. This problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.
Readers and Writers Problem: Suppose that a database is to be shared among several concurrent processes.
Some of these processes may want only to read the database, whereas others may want to update (that is, to read
and write) the database. We distinguish between these two types of processes by referring to the former as
readers and to the latter as writers. In OS terms, this situation is called the readers-writers problem. Problem parameters:
Once a writer is ready, it performs its write. Only one writer may write at a time.
Sleeping Barber Problem: A barber shop has one barber, one barber chair, and N chairs for waiting. When there are no customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in. While the barber is cutting hair, new customers take the empty waiting seats, or leave if there is no vacancy.
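Here is the promised sketch of the bounded-buffer solution in C, assuming Linux with POSIX semaphores and threads (compile with -pthread); the buffer size and item count are made up. The counting semaphores empty and full track free and used slots, while a binary semaphore protects the buffer indices.

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define SIZE 5

int buffer[SIZE];
int in = 0, out = 0;

sem_t empty;   /* counts empty slots, starts at SIZE */
sem_t full;    /* counts full slots, starts at 0 */
sem_t mutex;   /* protects the buffer indices */

void *producer(void *arg)
{
    for (int item = 0; item < 10; item++) {
        sem_wait(&empty);             /* wait for an empty slot */
        sem_wait(&mutex);
        buffer[in] = item;            /* put the item into the buffer */
        in = (in + 1) % SIZE;
        sem_post(&mutex);
        sem_post(&full);              /* one more full slot */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);              /* wait for a full slot */
        sem_wait(&mutex);
        int item = buffer[out];       /* take the item out */
        out = (out + 1) % SIZE;
        sem_post(&mutex);
        sem_post(&empty);             /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty, 0, SIZE);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```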
PETERSON SOLUTION
Peterson's solution ensures mutual exclusion. It is implemented in user mode and requires no hardware support, so it can be implemented on any platform. Peterson's solution uses two variables: an interested array and a turn variable.
We will first see the Peterson solution algorithm and then see how any two processes P and Q achieve mutual exclusion using it.
#define N 2
#define TRUE 1
#define FALSE 0
int interested[N] = {FALSE, FALSE};
int turn;
void entry_section(int process)
{
    int other = 1 - process;      /* index of the other process */
    interested[process] = TRUE;   /* declare that we want to enter */
    turn = process;               /* offer to wait if there is contention */
    /* busy-wait while the other process is interested and it is our turn to wait */
    while (interested[other] == TRUE && turn == process)
        ;
}
void exit_section(int process)
{
    interested[process] = FALSE;  /* leave the critical section */
}
Explanation
There will be two processes; the process number of the first process is 0 and the process number of the second process is 1.
When a process calls entry_section, it means that the process wants to execute its critical section, so it sets interested[process] = TRUE.
After declaring that it is interested, the process sets the turn to itself; so if process 1 called entry_section, then turn = 1.
Then the line while (interested[other] == TRUE && turn == process); is executed.
In this line, the process checks whether the other process is interested. If interested[other] == TRUE, the other process may be executing the critical section, so the calling process spins in this loop until the other process is no longer interested or the turn changes.
When the other process leaves its critical section, interested[other] becomes FALSE and the waiting process enters the critical section. In this way, only one process may be inside the critical section at a time, so mutual exclusion is guaranteed in Peterson's solution. While exiting the critical section, a process sets its interested flag back to FALSE.
UNIT 4
Memory Management:
What is Memory Management?
In a multiprogramming computer, the Operating System resides in a part of memory and the rest is used by multiple
processes. The task of subdividing the memory among different processes is called Memory Management. Memory
management is a method in the operating system to manage operations between main memory and disk during
process execution. The main aim of memory management is to achieve efficient utilization of memory.
Now we discuss the concepts of Logical Address Space and Physical Address Space.
Logical Address Space: An address generated by the CPU is known as a "Logical Address" (or virtual address). The set of all logical addresses generated by a program is known as the logical address space.
Physical Address Space: An address seen by the memory unit (i.e., the one loaded into the memory address register of the memory) is commonly known as a "Physical Address". A physical address is also known as a real address. The set of all physical addresses corresponding to these logical addresses is known as the physical address space. A physical address is computed by the MMU: the run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU). The physical address always remains constant.
Static Loading: Static Loading is basically loading the entire program into a fixed address. It requires more
memory space.
Dynamic Loading: The entire program and all data of a process must be in physical memory for the process to
execute. So, the size of a process is limited to the size of physical memory. To gain proper memory utilization,
dynamic loading is used. In dynamic loading, a routine is not loaded until it is called. All routines are residing on
disk in a relocatable load format. One of the advantages of dynamic loading is that a routine that is never used is never loaded. Dynamic loading is especially useful when a large amount of code is needed to handle infrequently occurring cases.
To perform a linking task a linker is used. A linker is a program that takes one or more object files generated by a
compiler and combines them into a single executable file.
Static Linking: In static linking, the linker combines all necessary program modules into a single executable
program. So there is no runtime dependency. Some operating systems support only static linking, in which system
language libraries are treated like any other object module.
Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, “Stub” is
included for each appropriate library routine reference. A stub is a small piece of code. When the stub is executed,
it checks whether the needed routine is already in memory or not. If not available then the program loads the
routine into memory.
Swapping
A process must reside in memory to be executed. Swapping is the act of temporarily moving a process out of main memory (which is fast compared to secondary storage) into secondary storage, so that more processes can fit into memory at one time. The main cost of swapping is transfer time, and the total time is directly proportional to the amount of memory swapped. Swapping is also known as roll-out, roll-in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority process. After the higher-priority work finishes, the lower-priority process is swapped back into memory and continues execution.
Fence Register
| operating system | user program |
In this approach, the operating system keeps track of the first and last locations available for allocation to the user program.
Interrupt vectors are often loaded in low memory; therefore, it makes sense to load the operating system in low memory.
Sharing of data and code does not make much sense in a single-process environment.
The operating system can be protected from user programs with the help of a fence register.
Memory is wasted.
Example: As shown in the figure, memory is partitioned into 5 regions; one region is reserved for the operating system and the remaining four partitions are for user programs.
| operating system | p1 | p2 | p3 | p4 |
Partition Table
Once partitions are defined, the operating system keeps track of the status of the memory partitions through a data structure called a partition table:
| Starting address of partition | Size of partition | Status |
| --- | --- | --- |
| 0k | 200k | allocated |
| 300k | 150k | free |
| 450k | 250k | allocated |
Contiguous Memory Allocation
The main memory should accommodate both the operating system and the various client processes. Therefore, the allocation of memory becomes an important task in the operating system. The memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We normally need several user processes to reside in memory simultaneously. Therefore, we need to consider how to allocate available memory to the processes that are in the input queue waiting to be brought into memory. In contiguous memory allocation, each process is contained in a single contiguous segment of memory.
Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient manner. One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, where each partition contains exactly one process. The degree of multiprogramming is thus bounded by the number of partitions.
Fixed partition allocation: In this method, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for other processes.
Variable (multiple) partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory, known as a "hole". When a process arrives and needs memory, we search for a hole that is large enough to store this process. If the requirement is fulfilled, we allocate only as much memory as needed, keeping the rest available to satisfy future requests. While allocating memory, the dynamic storage allocation problem arises, which concerns how to satisfy a request of size n from a list of free holes. There are some solutions to this problem:
First Fit
In First Fit, the first free hole that is large enough for the process is allocated to it.
First Fit
Here, in this diagram, the 40 KB memory block is the first available free hole that can store process A (size 25 KB), because the first two blocks did not have sufficient memory space.
Best Fit
In Best Fit, we allocate the smallest hole that is big enough for the process's requirements. For this, we search the entire list, unless the list is ordered by size.
Best Fit
Here, in this example, we first traverse the complete list and find that the last hole, of 25 KB, is the best suitable hole for process A (size 25 KB). With this method, memory utilization is maximum as compared to other memory allocation techniques.
Worst Fit
In Worst Fit, we allocate the largest available hole to the process. This method produces the largest leftover hole.
Worst Fit
Here, in this example, process A (size 25 KB) is allocated to the largest available memory block, which is 60 KB. Inefficient memory utilization is a major issue with worst fit.
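The three strategies differ only in which qualifying hole they pick, as the following C sketch shows; the hole sizes and the 25 KB request mirror the examples above and are otherwise made up.

```c
#include <stdio.h>

/* Pick a hole for a request using first, best, or worst fit. */
int pick_hole(int holes[], int n, int request, char strategy)
{
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (holes[i] < request)
            continue;                          /* hole too small */
        if (strategy == 'f')                   /* first fit: take it immediately */
            return i;
        if (chosen == -1 ||
            (strategy == 'b' && holes[i] < holes[chosen]) ||  /* best: smallest */
            (strategy == 'w' && holes[i] > holes[chosen]))    /* worst: largest */
            chosen = i;
    }
    return chosen;  /* -1 means no hole fits */
}

int main(void)
{
    int holes[] = {40, 10, 60, 25};  /* free hole sizes in KB (illustrative) */
    int n = 4, request = 25;

    printf("first fit -> hole %d\n", pick_hole(holes, n, request, 'f'));
    printf("best fit  -> hole %d\n", pick_hole(holes, n, request, 'b'));
    printf("worst fit -> hole %d\n", pick_hole(holes, n, request, 'w'));
    return 0;
}
```

Running this yields the 40 KB hole for first fit, the 25 KB hole for best fit, and the 60 KB hole for worst fit, matching the three diagrams described above.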
Fragmentation
Fragmentation occurs when processes are loaded into and removed from memory, leaving behind small free holes. These holes cannot be assigned to new processes, because they are not combined and individually do not fulfill the memory requirements of a process. To maintain a high degree of multiprogramming, we must reduce this waste of memory, i.e., the fragmentation problem. Operating systems distinguish two types of fragmentation:
1. Internal fragmentation: Internal fragmentation occurs when memory blocks are allocated to the process more
than their requested size. Due to this, some unused space is left over, creating an internal fragmentation problem. Example: Suppose fixed partitioning is used for memory allocation, and there are different sizes of
blocks 3MB, 6MB, and 7MB space in memory. Now a new process p4 of size 2MB comes and demands a block of
memory. It gets a memory block of 3MB but 1MB block of memory is a waste, and it can not be allocated to other
processes too. This is called internal fragmentation.
2. External fragmentation: In External Fragmentation, we have a free memory block, but we can not assign it to a
process because blocks are not contiguous. Example: Suppose (consider the above example) three processes
p1, p2, and p3 come with sizes 2MB, 4MB, and 7MB respectively. Now they get memory blocks of size 3MB, 6MB,
and 7MB allocated respectively. After allocation, the processes p1 and p2 leave 1MB and 2MB of unused free space respectively.
Suppose a new process p4 comes and demands a 3MB block of memory, which is available, but we can not
assign it because free memory space is not contiguous. This is called external fragmentation.
Both the first-fit and best-fit strategies for memory allocation are affected by external fragmentation. To overcome the external fragmentation problem, compaction is used: all free memory space is combined into one large block, which can then be used by other processes effectively.
Another possible solution to the external fragmentation is to allow the logical address space of the processes to be
noncontiguous, thus permitting a process to be allocated physical memory wherever the latter is available.
Paging
Paging is a memory management scheme that eliminates the need for a contiguous allocation of physical memory.
This scheme permits the physical address space of a process to be non-contiguous.
Logical Address or Virtual Address (represented in bits): An address generated by the CPU
Logical Address Space or Virtual Address Space (represented in words or bytes): The set of all logical
addresses generated by a program
Physical Address Space (represented in words or bytes): The set of all physical addresses corresponding to
the logical addresses
Example:
If Logical Address Space = 128 M words = 2^7 × 2^20 words, then Logical Address = log2(2^27) = 27 bits.
If Physical Address Space = 16 M words = 2^4 × 2^20 words, then Physical Address = log2(2^24) = 24 bits.
The mapping from virtual to physical address is done by the memory management unit (MMU) which is a hardware
device and this mapping is known as the paging technique.
The Physical Address Space is conceptually divided into several fixed-size blocks, called frames.
The Logical Address Space is also split into fixed-size blocks, called pages.
Paging
Page Number(p): Number of bits required to represent the pages in Logical Address Space or Page number
Page Offset(d): Number of bits required to represent a particular word in a page or page size of Logical Address
Space or word number of a page or page offset.
Frame Number (f): Number of bits required to represent a frame of Physical Address Space, or the frame number.
Frame Offset(d): Number of bits required to represent a particular word in a frame or frame size of Physical
Address Space or word number of a frame or frame offset.
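For a quick illustration in C: with a 4 KB page size, the page number is the logical address divided by the page size and the offset is the remainder; the frame number used below is a made-up stand-in for a real page-table lookup.

```c
#include <stdio.h>

/* Split a logical address into page number and offset for a 4 KB page size. */
int main(void)
{
    unsigned long logical = 0x12345;      /* illustrative logical address */
    unsigned long page_size = 4096;       /* 4 KB pages => 12 offset bits */

    unsigned long page   = logical / page_size;   /* page number p */
    unsigned long offset = logical % page_size;   /* page offset d */

    /* A (hypothetical) page table maps page p to frame f; the physical
       address is then f * page_size + offset. */
    unsigned long frame = 7;              /* pretend the page table says frame 7 */
    unsigned long physical = frame * page_size + offset;

    printf("page=%lu offset=%lu physical=0x%lx\n", page, offset, physical);
    return 0;
}
```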
The hardware implementation of the page table can be done by using dedicated registers. But the usage of registers for the page table is satisfactory only if the page table is small. If the page table contains a large number of entries, then we can use a TLB (Translation Look-aside Buffer), a special, small, fast-lookup hardware cache.
When this associative memory is used, an item is compared with all tags simultaneously. If the item is found, then the corresponding value is returned.
Page Map Table
Address Binding
1. Compile-Time Address Binding: if the process's location in memory is known at compile time, absolute code can be generated.
2. Load-Time Address Binding: if the location is not known at compile time, relocatable code is generated and binding is delayed until load time.
3. Execution-Time Address Binding: if the process can be moved in memory during execution, binding is delayed until run time; this requires hardware support such as the MMU.
Virtual Memory:
Virtual memory is a memory management technique used by operating systems to provide an illusion of a larger
memory space than is physically available. It allows programs to use more memory than what is physically present in
the system. Virtual memory works by dividing the logical or virtual address space of a process into smaller units called
pages. These pages are then mapped to physical memory or storage.
Paging Scheme:
Paging is a memory management scheme that divides the virtual address space and physical memory into fixed-size
blocks called pages and frames, respectively. The virtual memory is organized into a page table, which keeps track of
the mapping between virtual pages and physical frames. When a process accesses a virtual address, the page table is
consulted to determine the corresponding physical address.
Pure Segmentation:
Pure segmentation is a memory management scheme where the virtual address space is divided into variable-sized
segments. Each segment corresponds to a logical unit, such as code, data, stack, or heap. Segmentation provides
flexibility in managing memory, but it can lead to external fragmentation, where free memory is scattered in small
chunks, making it difficult to allocate contiguous memory blocks.
Segmentation with Paging:
To implement segmentation combined with a paging scheme, the hardware requires support for both segment registers and page
tables. The segment registers store the base address and size of each segment, and they are used to calculate the
starting point of a segment. The page table, similar to the paging scheme, maps virtual pages to physical frames.
Memory Fragmentation:
Memory fragmentation refers to the phenomenon where free memory is divided into small, non-contiguous blocks due
to allocation and deallocation of memory. There are two types of memory fragmentation:
1. External Fragmentation: It occurs when free memory is available in small non-contiguous chunks, making it
challenging to allocate large contiguous blocks of memory.
2. Internal Fragmentation: It occurs when allocated memory blocks are larger than what is required by the process.
This results in wasted memory within each block.
Demand Paging:
Demand paging is a technique used in virtual memory systems to bring pages into physical memory only when they
are required by a process. Instead of loading the entire program into memory at once, only the necessary pages are
loaded on-demand. This reduces the amount of memory required and allows for more efficient memory utilization.
Page Replacement Algorithms:
1. Optimal: The optimal algorithm selects the page that will not be used for the longest duration in the future.
However, it requires knowledge of future memory references, which is generally not feasible in real systems. It
serves as an upper bound for evaluating other algorithms.
2. FIFO (First-In-First-Out): The FIFO algorithm replaces the page that has been in memory the longest. It is easy to
implement but suffers from the "Belady's Anomaly," where increasing the number of page frames can lead to an
increase in the number of page faults.
3. LRU (Least Recently Used): The LRU algorithm replaces the page that has not been used for the longest duration.
It requires maintaining a timestamp or access counter for each page, making it more complex to implement than
FIFO. LRU performs better than FIFO and avoids the Belady's Anomaly.
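As a small sketch, the following C program simulates FIFO replacement with 3 frames on a made-up reference string, counting page faults; replacing the "evict the oldest" rule with "evict the least recently used" would turn it into LRU.

```c
#include <stdio.h>

#define FRAMES 3

/* FIFO page replacement on a sample reference string. */
int main(void)
{
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};
    int n = 8, frames[FRAMES] = {-1, -1, -1};
    int next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[i])
                hit = 1;                 /* page already in memory */
        if (!hit) {
            frames[next] = refs[i];      /* evict the oldest page (FIFO) */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);
    return 0;
}
```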
UNIT - 5
File Management:
File management involves organizing and controlling files in a computer system. It includes concepts, operations, and
structures for creating, accessing, and managing files stored on various storage devices.
Concepts:
File System: A method or structure used to store and organize files on a storage device.
File Attributes: Metadata associated with a file, such as name, size, type, creation/modification dates, and
permissions.
File Operations: Actions performed on files, such as create, read, write, delete, open, close, and modify.
File Types: Different types of files, such as text files, binary files, executable files, and directories.
Naming:
Files are typically named using a combination of characters, numbers, and symbols. File naming conventions may vary
across different operating systems and environments. They often impose restrictions on allowed characters, length,
and case sensitivity.
Attributes:
File attributes are metadata associated with files to store additional information about them. Common attributes include
file name, size, type, creation/modification dates, ownership, and permissions.
Operations:
Open: Opening a file for subsequent operations; the other operations (create, read, write, delete, close, modify) are listed above.
Access Methods:
Sequential Access: Reading or writing files in a sequential manner from the beginning to the end. It is suitable for
processing files in a linear order but inefficient for random access.
Direct Access: Allows reading or writing files directly by specifying the file's location using a unique identifier, such
as a file handle or block number. It enables faster access but requires a file system that supports direct access.
Index Sequential Access: Combines sequential and direct access methods. Files are divided into fixed-length
blocks, and an index is maintained for each block, enabling direct access to blocks while preserving sequential
access within blocks.
Directory Structures:
One Level: A flat directory structure where all files are stored in a single directory without any subdirectories.
Two Level: Files are organized into multiple directories, allowing a two-level hierarchy. Each file is associated with
a directory.
Hierarchical/Tree: Files are organized into a hierarchical or tree-like structure, with a single root directory and
multiple subdirectories.
Acyclic Graph: Files are organized in a directed acyclic graph (DAG) structure, where directories and files are
represented as nodes, and links represent relationships.
General Graph: Files are organized in a general graph structure, allowing cyclic relationships between directories
and files.
File Sharing:
File sharing involves providing simultaneous access to files by multiple users or processes. It can be achieved through
various methods such as file locking, access control mechanisms, and network file sharing protocols like NFS
(Network File System) and SMB (Server Message Block).
Path Name:
A path name is a string that specifies the location of a file or directory within a file system hierarchy. It typically consists
of a sequence of directory names separated by a delimiter, such as slashes ("/") in Unix-like systems and backslashes
("\") in Windows.
Directory Operations:
Directory operations include creating, deleting, renaming, and moving directories. They also involve listing the contents
of directories, changing the current working directory, and navigating through the directory hierarchy.
Overview of File Systems in Linux & Windows:
Linux:
Linux supports various file systems, including ext4 (the default for most Linux distributions), ext3, XFS, Btrfs, and
more. The file system is typically mounted to the directory hierarchy starting from the root ("/"). Linux file systems
support features like permissions, symbolic links, file attributes, and journaling for data consistency.
Windows:
Windows supports different file systems, including NTFS (New Technology File System), FAT32 (File Allocation Table),
and exFAT (Extended File Allocation Table). NTFS is the default file system for modern Windows versions. It provides
features like file and folder permissions, encryption, compression, and support for large file sizes and volumes.
Both Linux and Windows offer directory structures, file attributes, file operations, and file sharing capabilities. However,
the specific features, naming conventions, and behavior may vary between the two operating systems.
Learn more about advanced memory management techniques such as non-contiguous memory allocation and
virtual memory
Familiarize yourself with common page replacement algorithms and their strengths and weaknesses
Understand the role of TLB in memory management and how it speeds up address translation