Complete study material with sample exercise questions
Detailed Syllabus
Pre-requisite: Basic knowledge of Data Structures and Computer Organization.
COURSE OUTCOMES:
CO 1: Students will be able to understand the different services provided by an Operating System
and the different scheduling algorithms used at different levels.
CO 2: Students will learn synchronization techniques and methods for avoiding
deadlock.
CO 3: Students will acquire knowledge of different memory management techniques such as
paging, segmentation, and demand paging.
CO 4: Students will have a comprehensive understanding of I/O hardware and software principles,
secondary-storage structures, file management, and disk management.
TEXT BOOK:
1. Operating System Concepts Essentials, 9th Edition by Abraham Silberschatz, Peter Galvin,
Greg Gagne, Wiley Asia Student Edition.
2. Operating Systems: Internals and Design Principles, 5th Edition, William Stallings, Prentice
Hall of India.
REFERENCE BOOKS:
1. Operating System Concepts, Ekta Walia, Khanna Publishing House (AICTE Recommended
Textbook – 2018).
2. Operating System: A Design-oriented Approach, 1st Edition by Charles Crowley, Irwin
Publishing.
ONLINE RESOURCES:
1. https://online.stanford.edu/courses/cs111-operating-systems-principles
2. https://onlinecourses.nptel.ac.in/noc20_cs04/preview
3. https://www.coursera.org/specializations/codio-introduction-operating-systems
4. https://www.coursera.org/learn/akamai-operating-systems#modules
MODULE 1
Batch Operating System
This type of operating system does not interact with the computer directly. There is an operator
who takes jobs with similar requirements and groups them into batches; it is the operator's
responsibility to sort the jobs with similar needs.
Time-Sharing Operating System
Each task is given some time to execute so that all tasks run smoothly. Each user gets a share
of CPU time, as they all use a single system. These systems are also known as multitasking
systems. The tasks can come from a single user or from different users. The amount of time each
task gets to execute is called the quantum. After this time interval is over, the OS switches to
the next task.
Distributed Operating System
These types of operating systems are a comparatively recent advancement in the world of
computer technology and are being widely accepted all over the world, and at a great pace.
Various autonomous interconnected computers communicate with each other over a shared
communication network. Independent systems possess their own memory unit and CPU, and are
referred to as loosely coupled systems or distributed systems. The processors in these systems
may differ in size and function. The major benefit of working with these types of operating
systems is that a user can access files or software that are not actually present on his own
system but on some other system connected within the network, i.e., remote access is enabled
among the devices connected to that network.
Network Operating System
These systems run on a server and provide the capability to manage data, users, groups,
security, applications, and other networking functions. They allow shared access to files,
printers, security, applications, and other networking functions over a small private network.
One more important aspect of network operating systems is that all users are well aware of the
underlying configuration and of the individual connections of all other users within the
network; that is why these computers are popularly known as tightly coupled systems.
5|Page Study Material
Paper Name: Operating Systems Paper Code: PCCCS503
Real-Time Operating System
These types of OS serve real-time systems, in which the time interval required to process and
respond to inputs is very small. This time interval is called the response time.
Real-time systems are used when the timing requirements are very strict, as in missile
systems, air traffic control systems, robots, etc.
Hard Real-Time Systems: These OS are meant for applications where the time constraints are
very strict and even the shortest possible delay is not acceptable. Such systems are built for
life-saving purposes, like automatic parachutes or air bags, which must be readily available in
case of an accident. Virtual memory is almost never found in these systems.
Soft Real-Time Systems: These OS are for applications where the time constraints are less strict.
Definition of Process:
A process is a program in execution. A process is more than the program code; it also includes
the current activity, the stack, the data section, and the heap.
Components of a Process
Stack
The process Stack contains the temporary data such as method/function parameters, return
address and local variables.
Heap
This is dynamically allocated memory to a process during its run time.
Text
This includes the current activity represented by the value of Program Counter and the
contents of the processor's registers.
Data
This section contains the global and static variables.
States of a Process
Start
This is the initial state when a process is first started/created.
Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the
processor allocated to them by the operating system so that they can run. A process may come
into this state after the Start state, or while running, if it is interrupted by the scheduler
to assign the CPU to some other process.
Running
Once the process has been assigned to a processor by the OS scheduler, the process state is
set to running and the processor executes its instructions.
Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for user
input, or waiting for a file to become available.
Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory.
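The state transitions described above can be sketched as a small state machine. The state names follow the text; the code itself is an illustrative sketch, not any real kernel's implementation.

```python
from enum import Enum

class State(Enum):
    START = "start"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal transitions, as described in the text above.
TRANSITIONS = {
    State.START: {State.READY},
    State.READY: {State.RUNNING},
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY},
    State.TERMINATED: set(),
}

def move(current, target):
    """Return the new state, or raise if the transition is illegal."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target

s = State.START
s = move(s, State.READY)     # admitted by the OS
s = move(s, State.RUNNING)   # dispatched by the scheduler
s = move(s, State.WAITING)   # blocks on I/O
s = move(s, State.READY)     # I/O completes
```

Note that a waiting process cannot move directly back to running; it must first re-enter the ready queue.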
A Process Control Block (PCB) is a data structure maintained by the Operating System for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information
needed to keep track of a process, such as its state, program counter, CPU registers, CPU
scheduling information, memory-management information, accounting information, and I/O status
information.
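A minimal sketch of a PCB as a plain data structure. The field names follow the standard textbook list; they are illustrative and not tied to any real kernel's structures.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                      # integer process ID
    state: str = "new"            # process state (new/ready/running/waiting/terminated)
    program_counter: int = 0      # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0             # CPU scheduling information
    base: int = 0                 # memory-management: base register
    limit: int = 0                # memory-management: limit register
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: int = 0        # accounting information

pcb = PCB(pid=42, state="ready", priority=1)
```

On a context switch, the OS saves the running process's program counter and registers into its PCB and restores them from the PCB of the process being dispatched.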
Process Scheduling:
The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Scheduling Queues
Job Queue- This queue keeps all the processes in the system.
Ready Queue- This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.
Device Queue- The processes which are blocked due to unavailability of an I/O device
constitute this queue.
Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run.
Short-Term Scheduler- It is also called as CPU scheduler. CPU scheduler selects a process
among the processes that are ready to execute and allocates CPU to one of them.
Context Switching
A context switch is the mechanism to store and restore the state or context of a process in
Process Control block so that a process execution can be resumed from the same point at a later
time.
First Come First Serve (FCFS) Scheduling: In this, the process that comes first is executed
first, and the next process starts only after the previous one has fully executed.
Waiting time of each process (time service begins minus arrival time):
Process Waiting Time
P0 0 - 0 = 0
P1 5 - 1 = 4
P2 8 - 2 = 6
P3 16 - 3 = 13
Turnaround time of each process (completion time minus arrival time):
Process Turnaround Time
P0 5 - 0 = 5
P1 8 - 1 = 7
P2 16 - 2 = 14
P3 22 - 3 = 19
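The FCFS numbers above can be reproduced programmatically. A short sketch, assuming burst times of 5, 3, 8 and 6 units, inferred from the completion times in the tables:

```python
def fcfs(procs):
    """procs: list of (name, arrival, burst), already in arrival order.
    Returns (name, waiting_time, turnaround_time) for each process."""
    time, rows = 0, []
    for name, arrival, burst in procs:
        start = max(time, arrival)       # CPU may idle until the process arrives
        finish = start + burst
        rows.append((name, start - arrival, finish - arrival))
        time = finish
    return rows

rows = fcfs([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)])
# rows -> [('P0', 0, 5), ('P1', 4, 7), ('P2', 6, 14), ('P3', 13, 19)]
```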
Round Robin Scheduling: Once a process has executed for a given time period (the time
quantum), it is preempted and another process executes for a given time period.
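The round-robin behaviour described above can be sketched in a few lines. The three-process workload and the quantum of 2 below are made up for illustration, and all processes are assumed to arrive at time 0.

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, burst); all assumed to arrive at time 0.
    Returns {name: completion_time}."""
    queue = deque(procs)
    time, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)     # run for at most one quantum
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back of the queue
        else:
            done[name] = time
    return done

done = round_robin([("A", 3), ("B", 5), ("C", 2)], quantum=2)
# done -> {'C': 6, 'A': 7, 'B': 10}
```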
Shortest Job First Scheduling: Shortest job first (SJF), or shortest job next, is a scheduling
policy that selects the waiting process with the smallest execution time to execute next. SJF
is a non-preemptive algorithm.
Example
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Gantt Chart
P1 P2 P4 P3
0 8 12 17 26
Process Waiting Time Turnaround Time
P1 0 - 0 = 0 8 - 0 = 8
P2 8 - 1 = 7 12 - 1 = 11
P3 17 - 2 = 15 26 - 2 = 24
P4 12 - 3 = 9 17 - 3 = 14
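The SJF example above can be checked with a short simulation; the process data is taken directly from the table.

```python
def sjf(procs):
    """Non-preemptive SJF. procs: list of (name, arrival, burst).
    Returns (name, completion_time) in the order processes run."""
    pending = sorted(procs, key=lambda p: p[1])   # sort by arrival time
    time, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                             # CPU idle until next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])      # shortest burst among ready
        pending.remove(job)
        time += job[2]
        order.append((job[0], time))
    return order

order = sjf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
# Gantt order P1, P2, P4, P3 with completions 8, 12, 17, 26
```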
Shortest Remaining Time First Scheduling: This algorithm is the preemptive version of SJF
scheduling. In SRTF, the execution of a process can be stopped after a certain amount of time.
At the arrival of every process, the short-term scheduler schedules, from among the available
processes and the running process, the one with the least remaining burst time.
Example
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Gantt Chart
P1 P2 P4 P1 P3
0 1 5 10 17 26
Process Waiting Time Turnaround Time
P1 0 + (10 - 1) = 9 17 - 0 = 17
P2 1 - 1 = 0 5 - 1 = 4
P3 17 - 2 = 15 26 - 2 = 24
P4 5 - 3 = 2 10 - 3 = 7
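A unit-step simulation of SRTF reproduces the Gantt chart above; at every time unit it re-selects the ready process with the least remaining burst.

```python
def srtf(procs):
    """Preemptive SJF (shortest remaining time first), one time unit per step.
    procs: list of (name, arrival, burst). Returns {name: completion_time}."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    time, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        n = min(ready, key=lambda x: remaining[x])  # least remaining burst
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            done[n] = time
    return done

done = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
# completions: P2 at 5, P4 at 10, P1 at 17, P3 at 26 — matching the Gantt chart
```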
Longest Remaining Time First Scheduling: This is a pre-emptive version of the Longest Job First
(LJF) scheduling algorithm. In this scheduling algorithm, we find the process with the maximum
remaining time and execute it. We check for the maximum remaining time again after some
interval, to see whether another process with a larger burst time has arrived by then.
Priority Scheduling: Each process is assigned a priority. The process with the highest priority
is executed first, and so on. Processes with the same priority are executed on a first come,
first served basis. Priority can be decided based on memory requirements, time requirements, or
any other resource requirement. In the examples below, a lower priority number indicates a
higher priority.
Example
Process Arrival Time Burst Time Priority
P1 0 11 2
P2 5 28 0
P3 12 2 3
P4 2 10 1
P5 9 16 4
Gantt Chart
P1 P2 P4 P3 P5
0 11 39 49 51 67
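The non-preemptive schedule above can be reproduced with a short simulation, assuming (consistently with the Gantt chart) that a lower priority number means a higher priority.

```python
def priority_np(procs):
    """Non-preemptive priority scheduling; lower number = higher priority.
    procs: list of (name, arrival, burst, priority).
    Returns (name, completion_time) in execution order."""
    pending = list(procs)
    time, gantt = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                      # CPU idle until next arrival
            time = min(p[1] for p in pending)
            continue
        p = min(ready, key=lambda x: x[3]) # highest priority among ready
        pending.remove(p)
        time += p[2]
        gantt.append((p[0], time))
    return gantt

gantt = priority_np([("P1", 0, 11, 2), ("P2", 5, 28, 0), ("P3", 12, 2, 3),
                     ("P4", 2, 10, 1), ("P5", 9, 16, 4)])
# order P1, P2, P4, P3, P5 completing at 11, 39, 49, 51, 67
```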
Example (pre-emptive priority scheduling)
Process Arrival Time Burst Time Priority
P1 0 11 2
P2 5 28 0
P3 12 2 3
P4 2 10 1
P5 9 16 4
Gantt Chart
P1 P4 P2 P4 P1 P3 P5
0 2 5 33 40 49 51 67
Thread: A thread is a path of execution within a process. A process can contain multiple
threads.
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads.
MS Word uses multiple threads: one thread to format the text, another thread to process inputs,
etc.
Process vs Thread?
The primary difference is that threads within the same process run in a shared memory space,
while processes run in separate memory spaces.
Threads are not independent of one another like processes are, and as a result threads share
with other threads their code section, data section, and OS resources (like open files and
signals). But, like process, a thread has its own program counter (PC), register set, and stack
space.
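The shared-memory point can be demonstrated in a few lines with Python's threading module; the worker function and the shared list below are illustrative.

```python
import threading

shared = []          # lives in the process's data section, visible to all threads

def worker(tag):
    for i in range(3):
        shared.append((tag, i))   # both threads write to the very same object

t1 = threading.Thread(target=worker, args=("A",))
t2 = threading.Thread(target=worker, args=("B",))
t1.start(); t2.start()
t1.join(); t2.join()
# shared now holds 6 entries produced by two threads of one process
```

Two separate processes, by contrast, would each get their own copy of `shared`, and exchanging the data would require explicit inter-process communication.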
1. Responsiveness: If a process is divided into multiple threads, then when one thread
completes its execution, its output can be returned immediately.
2. Faster context switch: Context-switch time between threads is lower than between
processes; a process context switch requires more overhead from the CPU.
4. Resource sharing: Resources like code, data, and files can be shared among all threads
within a process.
Note: stack and registers can’t be shared among the threads. Each thread has its own
stack and registers.
6. Enhanced throughput of the system: If a process is divided into multiple threads, and
each thread's function is considered one job, then the number of jobs completed per unit of
time increases, thus increasing the throughput of the system.
Consider the following events and answer the questions that follow. Assume there
are 5 processes, all either in the ready or running states initially. Assume the
processes are using a single processor.
• At time 5: P1 executes a command to read from disk 3.
• At time 15: P3’s time slice ends.
• At time 18: P4 executes a command to write to disk 3.
• At time 20: P2 executes a command to read from disk 2.
• At time 24: P3 executes a command to join with P5.
• At time 33: An interrupt occurs indicating that P2’s read is complete.
• At time 36: An interrupt occurs indicating that P1’s read is complete.
• At time 38: P5 terminates.
• At time 48: An interrupt occurs indicating that P4’s write is complete.
For times 22, 37 and 47, identify which state each process is in. If a process is
waiting, indicate what it is waiting for.
4. Consider the 3 processes, P1, P2 and P3, shown in the table.
Process Arrival time Time Units Required
P1 0 5
P2 1 7
P3 3 4
What will be the completion order of the 3 processes under the policies FCFS and
RR2 (round robin scheduling with CPU quantum of 2 time units)?
5. Three processes A, B and C each execute a loop of 100 iterations. In each
iteration of the loop, a process performs a single computation that requires tc CPU
milliseconds and then initiates a single I/O operation that lasts for tio
milliseconds.
milliseconds. It is assumed that the computer where the processes execute has
sufficient number of I/O devices and the OS of the computer assigns different I/O
devices to each process. Also, the scheduling overhead of the OS is negligible.
The processes have the following characteristics:
Process id tc tio
A 100 ms 500 ms
B 350 ms 500 ms
C 200 ms 500 ms
The processes A, B, and C are started at times 0, 5 and 10 milliseconds
respectively, in a pure time sharing system (round robin scheduling) that uses a
time slice of 50 milliseconds. What is the time in milliseconds at which process C
would complete its first I/O operation?
6. Consider three CPU-intensive processes, which require 10, 20 and 30 time units
and arrive at times 0, 2 and 6, respectively. How many context switches are
needed if the operating system implements a shortest remaining time first
scheduling algorithm? Do not count the context switches at time zero and at the
end.
7. Consider three processes, all arriving at time zero, with total execution times of 10,
20 and 30 units, respectively. Each process spends the first 20% of execution time
doing I/O, the next 70% of time doing computation, and the last 10% of time
doing I/O again. The operating system uses a shortest remaining compute time
first scheduling algorithm and schedules a new process either when the running
process gets blocked on I/O or when the running process finishes its compute
burst. Assume that all I/O operations can be overlapped as much as possible. For
what percentage of time does the CPU remain idle?
8. Consider the following table of arrival time and burst time for three processes P0,
P1 and P2.
Process Arrival time Burst Time
P0 0 ms 9 ms
P1 1 ms 4 ms
P2 2 ms 9 ms
The pre-emptive shortest job first scheduling algorithm is used. Scheduling is
carried out only at arrival or completion of processes. What is the average waiting
time for the three processes?
9. Assume every process requires 3 seconds of service time in a system with a single
processor. If new processes are arriving at the rate of 10 processes per minute,
then estimate the fraction of time CPU is busy in system?
10. Under what circumstances does a multithreaded solution using multiple kernel
threads provide better performance than a single-threaded solution on a
single-processor system?
Describe the actions taken by a thread library to context switch between
user- level threads.
MODULE 2
On the basis of synchronization, processes are categorized as one of the following two types:
Independent Process : Execution of one process does not affect the execution of other
processes.
Cooperative Process : Execution of one process affects the execution of other
processes.
The process synchronization problem arises with cooperative processes, because resources
are shared among cooperative processes.
Any solution to the critical section problem must satisfy three requirements:
Mutual Exclusion : If a process is executing in its critical section, then no other process
is allowed to execute in the critical section.
Progress : If no process is executing in its critical section and some processes wish to
enter, then only processes that are not in their remainder section may participate in
deciding which process enters next, and this decision cannot be postponed indefinitely.
Bounded Waiting : A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.
Peterson’s Solution
Peterson’s Solution is a classical software based solution to the critical section problem.
In Peterson’s solution, we have two shared variables:
boolean flag[i] : Initialized to FALSE; initially no process is interested in entering the
critical section.
int turn : Indicates whose turn it is to enter the critical section.
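A sketch of Peterson's solution using the two shared variables above. Real hardware needs memory barriers for this to be correct; this Python version leans on CPython's interpreter lock for visibility and is for illustration only.

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to defer
counter = 0             # the shared resource protected by the critical section

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(200):
        flag[i] = True
        turn = other                          # politely let the other go first
        while flag[other] and turn == other:
            pass                              # busy-wait (entry section)
        counter += 1                          # critical section
        flag[i] = False                       # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
# counter == 400: the increments never interleave inside the critical section
```

The `turn = other` assignment is what gives Peterson's solution both mutual exclusion and bounded waiting: a process volunteers to wait, so neither thread can starve the other.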
Semaphores
A semaphore is an integer variable that can be accessed only through two atomic operations,
wait() and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores
Binary Semaphores: They can take only the values 0 and 1. They are also known as mutex locks,
as they can provide mutual exclusion. All processes share the same mutex semaphore, which is
initialized to 1. A process waits until the semaphore's value becomes 1, then sets it to 0 and
enters its critical section. When it completes its critical section, it resets the value of
the mutex semaphore to 1, and some other process can enter its critical section.
Counting Semaphores: They can take any non-negative value and are not restricted to a certain
domain. They can be used to control access to a resource that has a limit on the number of
simultaneous accesses. The semaphore is initialized to the number of instances of the
resource. Whenever a process wants to use the resource, it checks whether the number of
remaining instances is more than zero, i.e., whether an instance is available. If so, the
process can enter its critical section, decreasing the value of the counting semaphore by 1.
When the process has finished with the instance of the resource, it leaves the critical
section, adding 1 to the number of available instances of the resource.
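A counting semaphore in action, sketched with Python's `threading.Semaphore`. The worker function and the counts below are illustrative; the semaphore is initialized to 3 resource instances, so at most 3 of the 5 workers can hold the resource at once.

```python
import threading

pool = threading.Semaphore(3)   # 3 instances of the resource
in_use = 0
peak = 0
lock = threading.Lock()         # protects the two bookkeeping counters

def worker():
    global in_use, peak
    pool.acquire()              # wait(): decrement; block when the count is 0
    with lock:
        in_use += 1
        peak = max(peak, in_use)
    with lock:
        in_use -= 1
    pool.release()              # signal(): increment; wake one waiter

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
# peak never exceeds 3: the semaphore capped the concurrent holders
```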
Deadlock
Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
Consider an example where two trains are coming toward each other on the same track, and
there is only one track: neither train can move once they are in front of each other. A
similar situation occurs in operating systems when two or more processes hold some
resources and wait for resources held by others. For example, in the diagram below,
Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2,
and Process 2 is waiting for Resource 1.
DEADLOCK PREVENTION:
Eliminate No Preemption
Preempt resources from a process when those resources are required by another, higher-priority
process.
Deadlock Avoidance
Deadlock avoidance can be done with the Banker's Algorithm.
Banker’s Algorithm
The Banker's Algorithm is a resource-allocation and deadlock-avoidance algorithm that tests
every request made by processes for resources. It checks for a safe state: if the system
remains in a safe state after granting a request, the request is allowed; if granting it
would leave the system with no safe state, the request is denied.
Inputs to Banker’s Algorithm
1. Max need of resources by each process.
2. Currently allocated resources by each process.
3. Max free available resources in the system.
A request will be granted only under the conditions below.
1. The request made by the process is less than or equal to the max need of that process.
2. The request made by the process is less than or equal to the freely available resources in
the system.
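The safety check at the heart of the Banker's Algorithm can be sketched as follows. The resource numbers below are the classic textbook instance (five processes, three resource types), used here only for illustration.

```python
def is_safe(available, max_need, allocation):
    """Return True if the system is in a safe state.
    available: free instances of each resource type.
    max_need / allocation: one row per process."""
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, allocation)]
    work = list(available)
    finished = [False] * len(max_need)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            # a process can finish if its remaining need fits within `work`
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # release
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)   # safe iff every process could finish

safe = is_safe(available=[3, 3, 2],
               max_need=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
               allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]])
# safe is True: a safe sequence such as P1, P3, P4, P0, P2 exists
```

To decide on an actual request, the algorithm tentatively grants it, runs this safety check on the resulting state, and rolls the grant back if the state is unsafe.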
6. Consider the method used by processes P1 and P2 for accessing their critical sections,
shown below. The initial values of the shared Boolean variables S1 and S2 are randomly
selected.
P1()
{
    while (S1 == S2);
    /* critical section */
    S1 = S2;
}

P2()
{
    while (S1 != S2);
    /* critical section */
    S1 = not (S2);
}
Which of the following is true? Give explanation.
a. Mutual exclusion + Progress
b. Mutual exclusion only
c. Progress only
d. None
MODULE 3
Memory Management
In a multiprogramming computer, the Operating System resides in a part of memory, and the
rest is used by multiple processes. The task of subdividing the memory among different
processes is called Memory Management. Memory management is a method in the operating
system to manage operations between main memory and disk during process execution. The
main aim of memory management is to achieve efficient utilization of memory.
Why Memory Management is Required?
To allocate and de-allocate memory before and after process execution.
To keep track of the memory space used by processes.
To minimize fragmentation issues.
To ensure proper utilization of main memory.
To maintain data integrity while a process executes.
Logical and Physical Address Space
Logical Address Space: An address generated by the CPU is known as a “Logical Address”.
It is also known as a Virtual address. Logical address space can be defined as the size of the
process. A logical address can be changed.
Physical Address Space: An address seen by the memory unit (i.e the one loaded into the
memory address register of the memory) is commonly known as a “Physical Address”. A
Physical address is also known as a Real address. The set of all physical addresses
corresponding to these logical addresses is known as Physical address space. A physical
address is computed by MMU. The run-time mapping from virtual to physical addresses is
done by a hardware device Memory Management Unit(MMU). The physical address always
remains constant.
Static and Dynamic Loading
Loading a process into the main memory is done by a loader. There are two different types of
loading :
Static Loading: Static Loading is basically loading the entire program into a fixed address. It
requires more memory space.
Dynamic Loading: Without dynamic loading, the entire program and all data of a process must
be in physical memory for the process to execute, so the size of a process is limited to the
size of physical memory. To gain proper memory utilization, dynamic loading is used: a
routine is not loaded until it is called. All routines reside on disk in a relocatable load
format. One advantage of dynamic loading is that a routine that is never used is never
loaded. This is useful when large amounts of code are needed only to handle infrequently
occurring cases.
Static and Dynamic Linking
To perform a linking task a linker is used. A linker is a program that takes one or more object
files generated by a compiler and combines them into a single executable file.
Static Linking: In static linking, the linker combines all necessary program modules into a
single executable program. So there is no runtime dependency. Some operating systems
support only static linking, in which system language libraries are treated like any other
object module.
Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading. In
dynamic linking, “Stub” is included for each appropriate library routine reference. A stub is a
small piece of code. When the stub is executed, it checks whether the needed routine is
already in memory or not. If not available then the program loads the routine into memory.
Swapping
When a process is executed, it must reside in main memory. Swapping is the act of temporarily
moving a process out of main memory into secondary memory, main memory being fast compared to
secondary memory. Swapping allows more processes to be run and fit into memory at one time.
The main cost of swapping is the transfer time, and the total transfer time is directly
proportional to the amount of memory swapped. Swapping is also known as roll out, roll in,
because if a higher-priority process arrives and wants service, the memory manager can swap
out a lower-priority process and then load and execute the higher-priority process. After the
higher-priority work finishes, the lower-priority process is swapped back into memory and
continues its execution.
Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient manner. One of
the simplest methods for allocating memory is to divide memory into several fixed-sized
partitions, where each partition contains exactly one process. Thus, the degree of
multiprogramming is bounded by the number of partitions.
Fixed partition allocation: In this method, a process is selected from the input queue and
loaded into a free partition. When the process terminates, the partition becomes available
for other processes.
Variable (dynamic) partition allocation: In this method, the operating system maintains a
table that indicates which parts of memory are available and which are occupied by
processes. Initially, all memory is available for user processes and is considered one large
block of available memory, known as a "hole". When a process arrives and needs memory, we
search for a hole that is large enough to store the process. If one is found, memory is
allocated to the process, and the rest is kept available to satisfy future requests. While
allocating memory, a dynamic storage-allocation problem arises: how to satisfy a request of
size n from a list of free holes. Common solutions to this problem are:
First fit: allocate the first hole that is big enough.
Best fit: allocate the smallest hole that is big enough.
Worst fit: allocate the largest hole.
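The common hole-selection strategies (first fit, best fit, worst fit) can be sketched over a list of free hole sizes. The hole sizes and the request size below are made up for illustration; each function returns the index of the chosen hole, or None when nothing fits.

```python
def first_fit(holes, n):
    """First hole large enough for a request of size n."""
    return next((i for i, h in enumerate(holes) if h >= n), None)

def best_fit(holes, n):
    """Smallest hole that still fits, to minimize leftover space."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Largest hole, leaving the biggest usable remainder."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
# For a request of size 212:
#   first fit -> index 1 (500), best fit -> index 3 (300), worst fit -> index 4 (600)
```

First fit is usually fastest in practice; best fit tends to leave many tiny unusable holes, which is one source of external fragmentation.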
Fragmentation
Fragmentation occurs when processes are loaded into and removed from memory after execution,
creating small free holes. These holes cannot be assigned to new processes because they are
not combined or do not fulfill the memory requirement of a process. To achieve a good degree
of multiprogramming, we must reduce this waste of memory, i.e., the fragmentation problem.
Operating systems face two types of fragmentation:
Internal fragmentation: Internal fragmentation occurs when a memory block allocated to a
process is larger than its requested size. The unused space left over creates the internal
fragmentation problem. Example: suppose fixed partitioning is used and memory has blocks of
sizes 3 MB, 6 MB, and 7 MB. A new process p4 of size 2 MB arrives and demands a block of
memory. It gets a 3 MB block, but 1 MB of that block is wasted and cannot be allocated to
other processes. This is called internal fragmentation.
3. i. Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0
for a memory with three frames, and calculate the number of page faults and page hits
using the FIFO (First In First Out) page replacement algorithm.
5. Consider a single-level paging scheme with a TLB. Assume no page fault occurs. It takes
20 ns to search the TLB and 100 ns to access physical memory. If the TLB hit ratio is 80%,
the effective memory access time is _______ ns.
6. Consider three CPU-intensive processes, which require 10, 20 and 30 time units
and arrive at times 0, 2 and 6, respectively. How many context switches are
needed if the operating system implements a shortest remaining time first
scheduling algorithm? Do not count the context switches at time zero and at the
end.
7. Consider three processes, all arriving at time zero, with total execution times of 10,
20 and 30 units, respectively. Each process spends the first 20% of execution time
doing I/O, the next 70% of time doing computation, and the last 10% of time
doing I/O again. The operating system uses a shortest remaining compute time
first scheduling algorithm and schedules a new process either when the running
process gets blocked on I/O or when the running process finishes its compute
burst. Assume that all I/O operations can be overlapped as much as possible. For
what percentage of time does the CPU remain idle?
8. Consider the following table of arrival time and burst time for three processes P0,
P1 and P2.
Process Arrival time Burst Time
P0 0 ms 9 ms
P1 1 ms 4 ms
P2 2 ms 9 ms
The pre-emptive shortest job first scheduling algorithm is used. Scheduling is
carried out only at arrival or completion of processes. What is the average waiting
time for the three processes?
9. Assume every process requires 3 seconds of service time in a system with a single
processor. If new processes are arriving at the rate of 10 processes per minute,
then estimate the fraction of time CPU is busy in system?
10. Consider a logical address space of eight pages of 1024 words each, mapped onto
a physical memory of 32 frames.
MODULE 4
Disk Scheduling: It is done by operating systems to schedule the I/O requests arriving for the
disk. Disk scheduling is also known as I/O scheduling.
Multiple I/O requests may arrive from different processes, but only one I/O request can
be served at a time by the disk controller. Thus, other I/O requests need to wait in a
waiting queue and need to be scheduled.
Two or more requests may be far from each other, which can result in greater disk-arm
movement.
Hard drives are one of the slowest parts of a computer system and thus need to be
accessed in an efficient manner.
Secondary storage devices are those whose memory is non-volatile, meaning the stored data
remains intact even if the system is turned off. Here are a few things worth noting about
secondary storage.
Secondary storage is less expensive than primary memory such as RAM.
The speed of secondary storage is also lower than that of primary storage.
Hence, the data which is less frequently accessed is kept in the secondary storage.
A few examples are magnetic disks, magnetic tapes, removable thumb drives etc.
In modern computers, most of the secondary storage is in the form of magnetic disks. Hence,
knowing the structure of a magnetic disk is necessary to understand how the data in the disk
is accessed by the computer.
A magnetic disk contains several platters. Each platter is divided into circular shaped tracks.
The length of the tracks near the centre is less than the length of the tracks farther from the
centre. Each track is further divided into sectors, as shown in the figure.
Tracks of the same distance from centre form a cylinder. A read-write head is used to read
data from a sector of the magnetic disk.
Transfer rate: This is the rate at which the data moves from disk to the computer.
Random access time: It is the sum of the seek time and the rotational latency.
Seek time is the time taken by the arm to move to the required track. Rotational latency is the time taken for the required sector of the track to rotate under the read-write head.
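As a worked illustration of these terms, the sketch below computes the average rotational latency for an assumed 7200 RPM spindle speed and adds a hypothetical 5 ms average seek time (both values are assumptions for illustration, not taken from the text):

```python
# Average rotational latency is half the time of one full platter rotation.
rpm = 7200                      # assumed spindle speed
rotation_ms = 60_000 / rpm      # one full rotation, in milliseconds
avg_rotational_latency_ms = rotation_ms / 2

seek_ms = 5.0                   # hypothetical average seek time
# Random access time = seek time + rotational latency
random_access_ms = seek_ms + avg_rotational_latency_ms

print(round(avg_rotational_latency_ms, 2))  # 4.17
print(round(random_access_ms, 2))           # 9.17
```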
Even though the disk is physically arranged as sectors and tracks, the data is logically arranged and addressed as an array of fixed-size blocks. The size of a block can be 512 or 1024 bytes. Each logical block is mapped sequentially to a sector on the disk; in this way, each sector in the disk will have a logical address.
There are several disk scheduling algorithms. We will discuss each one of them.
FCFS is the simplest of all Disk Scheduling Algorithms. In FCFS, the requests are addressed
in the order they arrive in the disk queue. Let us understand this with the help of an example.
Example:
Suppose the order of the arrived requests is (82, 170, 43, 140, 24, 16, 190) and the current position of the read-write head is 50.
So, total overhead movement (total distance covered by the disk arm) =
(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16) = 642
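The arithmetic above can be checked with a short sketch; the request queue (82, 170, 43, 140, 24, 16, 190) and initial head position 50 are those of the example:

```python
def fcfs_total_movement(requests, head):
    """Total disk-arm movement when requests are served in arrival order."""
    total = 0
    for track in requests:
        total += abs(track - head)  # move the head to the next request
        head = track
    return total

queue = [82, 170, 43, 140, 24, 16, 190]
print(fcfs_total_movement(queue, 50))  # 642
```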
Advantages of FCFS
Every request gets a fair chance; there is no indefinite postponement
Disadvantages of FCFS
It does not try to optimize seek time, so the total seek movement may be large
In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed first.
So, the seek time of every request is calculated in advance in the queue and then they are
scheduled according to their calculated seek time. As a result, the request near the disk arm
will get executed first. SSTF is certainly an improvement over FCFS as it decreases the
average response time and increases the throughput of the system. Let us understand this with
the help of an example.
Example:
So, total overhead movement (total distance covered by the disk arm) =
(50-43)+(43-24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170) = 208
Advantages of SSTF
The average response time decreases and throughput increases
Disadvantages of SSTF
Can cause starvation of a request that has a higher seek time than incoming requests
High variance of response time, as SSTF favors only some requests
SCAN
In the SCAN algorithm the disk arm moves in a particular direction and services the requests coming in its path; after reaching the end of the disk, it reverses its direction and again services the requests arriving in its path. Because this algorithm works like an elevator, it is also known as the elevator algorithm. As a result, requests in the mid-range are serviced more often, while those arriving just behind the disk arm have to wait.
Example:
SCAN Algorithm
Therefore, the total overhead movement (total distance covered by the disk arm) is
calculated as
Advantages of SCAN
High throughput
Disadvantages of SCAN
Long waiting time for requests for locations just visited by the disk arm
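Since the example's figure and totals are not reproduced above, the sketch below simulates SCAN on the same request queue under two stated assumptions: tracks are numbered 0 to 199, and the arm initially moves toward higher track numbers, travelling all the way to the last track before reversing:

```python
def scan_total_movement(requests, head, disk_end=199):
    """SCAN (elevator): sweep toward the end of the disk, then reverse.
    Assumes the arm moves toward higher tracks first and reaches the
    last track (disk_end) before turning around."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    # Visit the upward requests, touch the end of the disk, then sweep back.
    path = up + ([disk_end] if down else []) + down
    total, position = 0, head
    for track in path:
        total += abs(track - position)
        position = track
    return total

queue = [82, 170, 43, 140, 24, 16, 190]
print(scan_total_movement(queue, 50))  # 332
```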
C-SCAN
In the SCAN algorithm, after reversing its direction the disk arm re-scans the path it has just scanned. It may therefore happen that many requests are waiting at the other end of the disk while the area just scanned has zero or only a few pending requests.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of reversing its direction, goes to the other end of the disk and starts servicing the requests from there. The disk arm thus moves in a circular fashion; since the algorithm is otherwise similar to SCAN, it is known as C-SCAN (Circular SCAN).
Example:
Circular SCAN
So, the total overhead movement (total distance covered by the disk arm) is calculated as:
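As with SCAN, the figure's totals are not reproduced above, so the following sketch simulates C-SCAN on the same queue under stated assumptions: tracks 0 to 199, the arm moving toward higher tracks, and the wrap-around jump from the last track back to track 0 counted as head movement (some textbooks exclude the jump):

```python
def cscan_total_movement(requests, head, disk_end=199):
    """C-SCAN: sweep toward the end of the disk, jump back to track 0,
    then sweep upward again. The wrap-around jump is counted as movement
    here (an assumption; conventions differ)."""
    up = sorted(r for r in requests if r >= head)
    low = sorted(r for r in requests if r < head)
    # Upward requests, then end of disk, wrap to 0, then the remaining ones.
    path = up + ([disk_end, 0] if low else []) + low
    total, position = 0, head
    for track in path:
        total += abs(track - position)
        position = track
    return total

queue = [82, 170, 43, 140, 24, 16, 190]
print(cscan_total_movement(queue, 50))  # 391
```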
LOOK
The LOOK algorithm is similar to the SCAN disk scheduling algorithm, except that the disk arm, instead of going all the way to the end of the disk, goes only as far as the last request to be serviced in front of the head and then reverses its direction from there. Thus it avoids the extra delay caused by unnecessary traversal to the end of the disk.
Example:
LOOK Algorithm
So, the total overhead movement (total distance covered by the disk arm) is calculated as:
C-LOOK
Just as LOOK is similar to SCAN, C-LOOK is similar to the C-SCAN disk scheduling algorithm. In C-LOOK, the disk arm, instead of going to the end of the disk, goes only as far as the last request to be serviced in front of the head, and from there jumps to the last request at the other end. Thus, it also avoids the extra delay caused by unnecessary traversal to the end of the disk.
Example:
C-LOOK
So, the total overhead movement (total distance covered by the disk arm) is calculated as
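The LOOK and C-LOOK variants can be simulated together on the same request queue. The assumptions are as before: the arm starts moving toward higher tracks, and the C-LOOK jump between the extreme requests is counted as head movement (conventions differ):

```python
def look_total_movement(requests, head):
    """LOOK: like SCAN, but reverse at the last pending request
    instead of travelling to the physical end of the disk."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    total, position = 0, head
    for track in up + down:
        total += abs(track - position)
        position = track
    return total

def clook_total_movement(requests, head):
    """C-LOOK: like C-SCAN, but jump from the last request at one end
    directly to the first request at the other end (jump counted here)."""
    up = sorted(r for r in requests if r >= head)
    low = sorted(r for r in requests if r < head)
    total, position = 0, head
    for track in up + low:
        total += abs(track - position)
        position = track
    return total

queue = [82, 170, 43, 140, 24, 16, 190]
print(look_total_movement(queue, 50))   # 314
print(clook_total_movement(queue, 50))  # 341
```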
A computer file is a medium for saving and managing data in a computer system. The data stored in the computer system is entirely digital, and various types of files help us store it.
1. FAT (File Allocation Table): An older file system used by older versions of Windows
and other operating systems.
2. NTFS (New Technology File System): A modern file system used by Windows. It
supports features such as file and folder permissions, compression, and encryption.
3. ext (Extended File System): A file system commonly used on Linux and Unix-based
operating systems.
4. HFS (Hierarchical File System): A file system used by macOS.
5. APFS (Apple File System): A new file system introduced by Apple for their Macs and
iOS devices.
A file is a collection of related information that is recorded on secondary storage, or a collection of logically related entities. From the user’s perspective, a file is the smallest allotment of logical secondary storage.
The name of a file is divided into two parts, separated by a period:
name
extension
Issues Handled By File System
We’ve seen a variety of data structures in which a file could be kept. The file system’s job is to keep the files organized in the best way possible.
Free space is created on the hard drive whenever a file is deleted from it, and many of these spaces may need to be recovered so they can be reallocated to other files. Choosing where to store the files on the hard disk is the main issue: a file may or may not fit in a single block, and it may be kept in non-contiguous blocks of the disk, so we must keep track of all the blocks where the parts of a file are located.
Files Attributes And Their Operations
Typical file attributes include the file name and its author; typical file operations include append and close.
Common file types:
File Type | Usual Extensions | Function
Object | obj, o | Compiled, machine language, not linked
Batch | bat, sh | Commands to the command interpreter
Library | lib, a, so, dll | Libraries of routines for programmers
Two-Level Directory
In this approach a separate directory is maintained for each user.
Path name: With two levels, every file has a path name that is used to locate it.
Different users can now have files with the same name.
Searching is efficient in this method.
Tree-Structured Directory
The directory is maintained in the form of a tree. Searching is efficient and there is also grouping capability. Each file has an absolute or relative path name.
table in addition to a file allocation table. The following are the approaches used for free
space management.
1. Bit Tables: This method uses a vector containing one bit for each block on the disk. A 0 entry corresponds to a free block and a 1 to a block in use.
For example: 00011010111100110001
In this vector every bit corresponds to a particular block: 0 means that block is free and 1 means it is already occupied. A bit table has the advantage that it is relatively easy to find one free block or a contiguous group of free blocks. Thus, a bit table works well with any of the file allocation methods. Another advantage is that it is as small as possible.
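Searching a bit table for a contiguous run of free blocks can be sketched as follows, using the example vector from the text:

```python
def find_free_run(bitmap, count):
    """Return the index of the first run of `count` contiguous free blocks
    in a bit table ('0' = free, '1' = in use), or -1 if no such run exists."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == '0':
            if run_len == 0:
                run_start = i        # a new run of free blocks begins here
            run_len += 1
            if run_len == count:
                return run_start
        else:
            run_len = 0              # an occupied block breaks the run
    return -1

bitmap = "00011010111100110001"      # example vector from the text
print(find_free_run(bitmap, 3))      # 0   (blocks 0-2 are free)
print(find_free_run(bitmap, 5))      # -1  (no run of 5 free blocks)
```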
2. Free Block List: In this method, each block is assigned a number sequentially and the
list of the numbers of all free blocks is maintained in a reserved block of the disk.
3. Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W head is on track 50. What will be the additional distance (in terms of number of tracks) traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm is used compared to the SCAN (elevator) algorithm (assuming that the SCAN algorithm moves towards 100 when it starts execution)?
4. Evaluate the efficiency and effectiveness of different file organization methods (e.g., sequential, indexed, hashed) in managing large volumes of data.
5. Synthesize a comprehensive file management strategy for a multinational corporation with geographically dispersed offices, considering factors such as security, accessibility, and scalability.
6. Assess the trade-offs between different disk scheduling algorithms (e.g., FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK) in terms of throughput, response time, and fairness, and recommend the most suitable algorithm for specific system configurations.
1. Three processes arrive at time zero with CPU bursts of 16, 20 and 10 milliseconds. If the scheduler has prior knowledge of the lengths of the CPU bursts, what will be the minimum achievable average waiting time for these three processes under a non-preemptive scheduler (rounded to the nearest integer)?
2. An operating system needs to manage a set of tasks with mixed priority levels. There
are three types of tasks:
Apply the priority-based preemptive scheduling algorithm to these tasks and determine
the execution order. Calculate the average waiting time and turnaround time for the system.
4. If the CPU scheduling policy is preemptive SJF, calculate the average waiting time and average turnaround time for the following system.
Process Id Arrival time Burst time
P1 1 3
P2 2 5
P3 4 2
P4 0 4
P5 2 2
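One way to check an answer to a question like this is a unit-time simulation. The sketch below applies preemptive SJF (shortest remaining time first) to the table above, breaking ties by earlier arrival time (an assumption, since the question does not specify a tie-breaking rule):

```python
def srtf(processes):
    """processes: {pid: (arrival, burst)}.
    Returns (average waiting time, average turnaround time).
    Simulates one time unit at a time; ties broken by earlier arrival."""
    remaining = {pid: burst for pid, (arrival, burst) in processes.items()}
    completion, t = {}, 0
    while remaining:
        ready = [pid for pid in remaining if processes[pid][0] <= t]
        if not ready:
            t += 1                    # CPU idle until the next arrival
            continue
        pid = min(ready, key=lambda p: (remaining[p], processes[p][0]))
        remaining[pid] -= 1           # run the chosen process for one unit
        t += 1
        if remaining[pid] == 0:
            completion[pid] = t
            del remaining[pid]
    tat = {pid: completion[pid] - processes[pid][0] for pid in processes}
    wt = {pid: tat[pid] - processes[pid][1] for pid in processes}
    n = len(processes)
    return sum(wt.values()) / n, sum(tat.values()) / n

procs = {'P1': (1, 3), 'P2': (2, 5), 'P3': (4, 2), 'P4': (0, 4), 'P5': (2, 2)}
avg_wt, avg_tat = srtf(procs)
print(avg_wt, avg_tat)  # 4.0 7.2
```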
5. Consider the following table of arrival time and burst time for four processes P1, P2,
P3 and P4.
Process Id Arrival time Burst time
P1 1 2
P2 2 4
P3 3 6
P4 4 8
Calculate the average waiting time and average turnaround time if the system follows the preemptive Longest Remaining Time First CPU scheduling algorithm.
Simulate the execution of these tasks using a multilevel queue scheduling algorithm with
the given time quantum for each queue. Provide a Gantt chart of the execution process, and
calculate the total CPU idle time.
Draw the Resource Allocation Graph (RAG) for this system. Based on the graph,
determine if there is a deadlock. If deadlock exists, identify the processes involved.
2. A system with three processes (P1, P2, P3) and three resources (R1, R2, R3) is in a
potential deadlock situation. The current resource allocation and request state is as
follows:
(a) Draw the current Resource Allocation Graph (RAG) for this system.
(b) Modify the graph by introducing one possible change to the allocation or request
of resources such that the system avoids deadlock. Explain your reasoning behind the
modification.
3. A system consists of four processes (P1, P2, P3, P4) and three resources (R1, R2, R3),
each having only one instance. The following allocation and request table is given:
The system has detected a deadlock involving some or all of these processes.
(a) Analyze the system's state by identifying which processes are in a deadlock.
(b) Propose a suitable deadlock recovery strategy (e.g., process termination or
resource preemption).
4. A system has detected a deadlock involving three processes (P1, P2, P3) and two
resources (R1, R2). The following information is available:
P1 holds R1 and is requesting R2.
P2 holds R2 and is requesting R1.
P3 is waiting for either R1 or R2 to be released.
The system administrator needs to terminate one process to resolve the deadlock.
(a) Analyze which process should be terminated to resolve the deadlock. Consider the
resources each process holds and requests.
(b) Explain the reasoning behind your decision.
5. Consider a system with five processes (P1, P2, P3, P4, P5) and three resource types
(R1, R2, R3). The following tables show the allocation, maximum, and available
resources:
Available resources: R1 = 1, R2 = 1, R3 = 2
Apply the Banker’s Algorithm to determine whether the system is in a safe state.
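The Banker's safety algorithm itself can be sketched as below. Since the question's allocation and maximum tables are not reproduced here, the matrices in the example are hypothetical; only the Available vector (1, 1, 2) comes from the question:

```python
def is_safe(available, allocation, maximum):
    """Banker's safety algorithm: repeatedly find a process whose remaining
    need fits within Work, pretend it finishes, and release its allocation.
    Returns (safe?, safe sequence found so far)."""
    need = {p: [m - a for m, a in zip(maximum[p], allocation[p])]
            for p in allocation}
    work = list(available)
    finished, sequence = set(), []
    while len(finished) < len(allocation):
        for p in allocation:
            if p not in finished and all(n <= w for n, w in zip(need[p], work)):
                work = [w + a for w, a in zip(work, allocation[p])]
                finished.add(p)
                sequence.append(p)
                break
        else:
            return False, sequence   # no process can proceed: unsafe state
    return True, sequence

available = [1, 1, 2]                # from the question
# Hypothetical allocation and maximum matrices (NOT from the question):
allocation = {'P1': [1, 0, 0], 'P2': [0, 1, 0], 'P3': [1, 0, 1],
              'P4': [0, 0, 1], 'P5': [0, 1, 0]}
maximum    = {'P1': [2, 1, 1], 'P2': [1, 2, 1], 'P3': [1, 1, 2],
              'P4': [1, 0, 2], 'P5': [0, 2, 1]}
safe, seq = is_safe(available, allocation, maximum)
print(safe)  # True: this hypothetical state is safe
```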
6. Define deadlock in the context of operating systems. Describe the four necessary
conditions for a deadlock to occur, and explain how deadlock detection differs from
deadlock prevention.
1. Consider a system with 1000 KB of total memory. The following processes arrive and
request memory in the given order:
3. Consider a system that uses paging for memory management. The page size is 4 KB.
A process has the following logical address:
(a) Calculate the page number and offset from the given logical address.
(b) Use the page table to determine the corresponding physical address.
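The page-number/offset split for 4 KB pages can be sketched as follows. The logical address and the page-to-frame mapping below are hypothetical, since the question's concrete values are not reproduced here:

```python
PAGE_SIZE = 4096  # 4 KB, as stated in the question

def split_logical_address(addr):
    """Split a logical address into (page number, offset) for 4 KB pages."""
    return addr // PAGE_SIZE, addr % PAGE_SIZE

logical = 14975              # hypothetical logical address
page, offset = split_logical_address(logical)
print(page, offset)          # 3 2687

page_table = {3: 8}          # assume page 3 resides in frame 8
physical = page_table[page] * PAGE_SIZE + offset
print(physical)              # 35455
```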
The process generates the following logical address for a memory access:
Translate the given logical address into a physical address using the segment table.
5. A process has been assigned a relocation register value of 5000 and a limit register
value of 3000. The process attempts to access the following logical addresses:
(a) For each logical address, determine the physical address by applying the
relocation register.
(b) Check if each access is valid by comparing the logical address with the limit
register. If the access is invalid, explain why.
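The relocation-and-limit check described in this question can be sketched directly from its given register values (base 5000, limit 3000); the logical addresses tried below are hypothetical, since the question's address list is not reproduced here:

```python
BASE, LIMIT = 5000, 3000  # relocation and limit register values (from the question)

def translate(logical):
    """A logical address is valid only if it is below the limit register;
    the physical address is then logical + relocation register."""
    if 0 <= logical < LIMIT:
        return BASE + logical
    return None  # invalid access: the address exceeds the limit register

# Hypothetical logical addresses to try:
for addr in (1200, 2999, 3400):
    print(addr, translate(addr))
# 1200 -> 6200, 2999 -> 7999, 3400 -> None (3400 >= limit of 3000)
```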
Compare the performance, data redundancy, and fault tolerance of each RAID level.
Provide specific advantages and disadvantages for each configuration in the context
of high availability and data recovery.
(a) Analyze the causes of fragmentation (both internal and external) in disk
management systems.
(b) Evaluate the impact of fragmentation on disk performance and suggest strategies
to minimize fragmentation, including defragmentation techniques.
3. A hard disk drive has 4 platters, with each platter containing 1000 tracks. Each track
has 50 sectors, and each sector can store 512 bytes of data.
(a) Calculate the total storage capacity of the disk.
(b) If a file requires 2048 bytes of storage, how many sectors and tracks will it occupy?
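One possible reading of part (a) can be checked with simple arithmetic, under the assumption of one recording surface per platter (double the result if each platter has two usable surfaces, which the question leaves unspecified):

```python
platters, tracks, sectors, sector_bytes = 4, 1000, 50, 512

# Total capacity, assuming one recording surface per platter.
capacity = platters * tracks * sectors * sector_bytes
print(capacity)  # 102400000 bytes, about 97.7 MB

# A 2048-byte file occupies ceil(2048 / 512) sectors.
file_bytes = 2048
sectors_needed = -(-file_bytes // sector_bytes)  # ceiling division
print(sectors_needed)  # 4 sectors, well within a single 50-sector track
```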
4. Describe the physical structure of a hard disk drive (HDD). Include details on
components such as platters, tracks, sectors, and read/write heads. How do these
components work together to store and retrieve data?
1. Explain the concept of demand paging in virtual memory systems. How does it differ
from pre-paging? Discuss the benefits and drawbacks of demand paging in terms of
memory utilization and system performance.
3. Compare and contrast two common page replacement algorithms: Least Recently
Used (LRU), and Optimal Page Replacement.
Consider a reference string of page requests: [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] with 3
page frames. Calculate the number of page faults for each algorithm.
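The fault counts for the reference string above can be checked with short simulations of both policies; the sketch below implements LRU and Optimal for 3 frames:

```python
def lru_faults(refs, frames):
    """Count page faults under LRU (least recently used at the front)."""
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            memory.remove(page)      # hit: refresh this page's recency
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)        # evict the least recently used page
        memory.append(page)
    return faults

def optimal_faults(refs, frames):
    """Count page faults under Optimal (evict the page used farthest ahead,
    or one that is never used again)."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
        else:
            future = refs[i + 1:]
            victim = max(memory, key=lambda p:
                         future.index(p) if p in future else float('inf'))
            memory[memory.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))      # 10
print(optimal_faults(refs, 3))  # 7
```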
4. Discuss how the size of page frames affects the performance of a virtual memory
system.
Analyze the trade-offs involved in choosing larger versus smaller page sizes,
particularly regarding page fault rates and internal fragmentation.