Device Management


 Principle of I/O Hardware


Computers operate a great many kinds of devices. General types include storage devices (disks,
tapes), transmission devices (network cards, modems), and human-interface devices (screen,
keyboard, mouse).

A device communicates with a computer system by sending signals over a cable or even through the
air. The device communicates with the machine via a connection point termed a port (for example, a
serial port). If one or more devices use a common set of wires, the connection is called a bus.

When device A has a cable that plugs into device B, and device B has a cable that plugs into device
C, and device C plugs into a port on the computer, this arrangement is called a daisy chain. It usually
operates as a bus.

A controller is a collection of electronics that can operate a port, a bus, or a device. A serial-port
controller is an example of a simple device controller. It is a single chip in the computer that controls the
signals on the wires of a serial port.

An I/O port typically consists of four registers, called the status, control, data-in, and data-out registers.
The status register contains bits that can be read by the host. These bits indicate states such as whether the
current command has completed, whether a byte is available to be read from the data-in register, and
whether there has been a device error. The control register can be written by the host to start a command or
to change the mode of a device. For instance, a certain bit in the control register of a serial port chooses
between full-duplex and half-duplex communication, another enables parity checking, a third bit sets the
word length to 7 or 8 bits, and other bits select one of the speeds supported by the serial port.

The data-in register is read by the host to get input, and the data-out register is written by the host to
send output. The data registers are typically 1 to 4 bytes. Some controllers have FIFO chips that can hold
several bytes of input or output data to expand the capacity of the controller beyond the size of the data
register. A FIFO chip can hold a small burst of data until the device or host is able to receive those data.

POLLING

The complete protocol for interaction between the host and a controller can be intricate, but the basic
handshaking notion is simple. The controller indicates its state through the busy bit in the status register.
(Recall that to set a bit means to write a 1 into the bit, and to clear a bit means to write a 0 into it.)

The controller sets the busy bit when it is busy working, and clears the busy bit when it is ready to
accept the next command. The host signals its wishes via the command-ready bit in the command register.
The host sets the command-ready bit when a command is available for the controller to execute.

For this example, the host writes output through a port, coordinating with the controller by handshaking as
follows.

1. The host repeatedly reads the busy bit until that bit becomes clear.
2. The host sets the write bit in the command register and writes a byte into the data-out register.
3. The host sets the command-ready bit.
4. When the controller notices that the command-ready bit is set, it sets the busy bit.
5. The controller reads the command register and sees the write command.
6. It reads the data-out register to get the byte, and does the I/O to the device.
7. The controller clears the command-ready bit, clears the error bit in the status register to indicate that the
device I/O succeeded, and clears the busy bit to indicate that it is finished.

The host is busy-waiting or polling: It is in a loop, reading the status register over and over until the busy bit
becomes clear. If the controller and device are fast, this method is a reasonable one. But if the wait may be
long, the host should probably switch to another task.
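The handshaking steps above can be sketched in Python. This is a toy model, not real driver code: the `Controller` class and its register fields are invented stand-ins for memory-mapped hardware registers, and `step()` plays the role of the controller hardware running in parallel.

```python
# Bit masks for the status/command registers (invented for this sketch).
BUSY, COMMAND_READY, WRITE, ERROR = 0x1, 0x2, 0x4, 0x8

class Controller:
    """Toy device controller mimicking the status/command/data-out registers."""
    def __init__(self):
        self.status = 0
        self.command = 0
        self.data_out = 0
        self.device_output = []    # bytes "sent" to the device

    def step(self):
        # Controller side: notice command-ready, set busy, do the I/O,
        # then clear command-ready, error, and busy (steps 4-7).
        if self.command & COMMAND_READY:
            self.status |= BUSY
            if self.command & WRITE:
                self.device_output.append(self.data_out)
            self.command &= ~COMMAND_READY
            self.status &= ~(ERROR | BUSY)

def write_byte(ctrl, byte):
    while ctrl.status & BUSY:        # 1. poll until the busy bit is clear
        ctrl.step()
    ctrl.command |= WRITE            # 2. set write bit, place the byte
    ctrl.data_out = byte
    ctrl.command |= COMMAND_READY    # 3. set command-ready
    ctrl.step()                      # let the controller service the command

ctrl = Controller()
for b in b"hi":
    write_byte(ctrl, b)
print(ctrl.device_output)            # byte values received by the device
```

The busy-wait in `write_byte` is exactly the polling loop described above; in real hardware the loop would spin on an actual status register rather than call a method.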

Input/output Devices
Basically input/output devices can be divided into two categories:

1) Block devicesA block device is one that stores information in fixed size block, each one with its own
address. The range of block size lies between 5k bytes to 32768 bytes. The essential properties of a
block device are that it is possible to read or write each block independently of all the other one. For
example, disk is the most common block devices.

2) Character devices A character device delivers or accepts block structure. It is not addressable and
does not have any seek operation. Printers, network interfaces, mice, rats are not disk like can be seen
as character devices.
Input/output devices cover a huge range in executions which puts considerable pressure on the
software to perform well over many orders of magnitude in data rates.

External devices that engage in I/O with a computer system can be divided into three categories.
1. Human readable
2. Machine readable
3. Communication

1. Human Readable is suitable for communicating with the computer user. Examples are printers, video
display terminals, keyboard etc.
2. Machine Readable is suitable for communicating with electronic equipment. Examples are disk and tape
drives, sensors, controllers and actuators.
3. Communication is suitable for communicating with remote devices. Examples are digital line drivers and
modems.

 Differences between I/O Devices


1). Data rate: There may be differences of several orders of magnitude between the data transfer rates.
2). Application: Different devices have different uses in the system.
3). Complexity of control: A disk requires a complex control interface, whereas a printer requires only a simple one.
4). Unit of transfer: Data may be transferred as a stream of bytes or characters or in larger blocks.
5). Data representation: Different data encoding schemes are used for different devices.
6). Error Conditions: The nature of errors differs widely from one device to another.

Organization of I/O Function

There are three techniques for performing the I/O function.


1) Programmed I/O: the processor issues an I/O command, on behalf of a process, to an I/O module;
that process then busy-waits for the operation to be completed before proceeding.
2) Interrupt-driven I/O: the processor issues an I/O command on behalf of a process, continues to
execute subsequent instructions, and is interrupted by the I/O module when the latter has completed its
work. The subsequent instructions may be in the same process.

3) Direct Memory Access (DMA): a special control unit may be provided to allow transfer of a
block of data directly between an external device and main memory, without continuous intervention by
the processor. This approach is called Direct Memory Access (DMA).

DMA can be used with either polling or interrupt software. DMA is particularly useful on devices like disks,
where many bytes of information can be transferred in a single I/O operation. When used in conjunction with
an interrupt, the CPU is notified only after the entire block of data has been transferred. For each byte or
word transferred, the DMA controller provides the memory address and all the bus signals that control the data transfer.

Interaction with a device controller is managed through a device driver.

Device drivers are part of the operating system, but not necessarily part of the OS kernel. The operating
system provides a simplified view of the device to user applications (e.g., character devices vs. block devices
in UNIX). In some operating systems (e.g., Linux), devices are also accessible through the /dev file system.

In some cases, the operating system buffers data that are transferred between a device and a user space
program (disk cache, network buffer). This usually increases performance, but not always.
Input/output buffering
Suppose a user process wants to read blocks of data from a tape one at a time, with each block having a length of 512
bytes. The data are to be read into a data area within the address space of the user process, at virtual locations
1000 to 1511. The simplest way would be to execute an I/O command to the tape unit and then wait for the data to
become available. The waiting could be either busy waiting or, more practically, process suspension on an
interrupt.
There are two problems with this approach. (1) The program is hung up waiting for the relatively slow I/O to
complete. (2) This approach to I/O interferes with swapping decisions by the operating system: virtual locations
1000 to 1511 must remain in main memory during the course of the block transfer, otherwise some of the data
may be lost.
To avoid these overheads and inefficiencies, it is sometimes convenient to perform input transfers in advance of
requests being made and to perform output transfers some time after the request is made. This technique is
known as buffering.

Types of buffering
There are three types of buffering which are given below:
1) Single buffering
2) Double buffering
3) Circular buffering

(1) Single buffering:


[Figure: single buffering. The device performs I/O into an operating-system buffer; the block is then moved into the user process's space.]
The simplest type of support that the OS can provide is single buffering. When a user process issues an I/O request, the OS
assigns a buffer in the system portion of main memory to the operation.
For block-oriented devices, single buffering works as follows: input transfers are made to the system buffer. When
the transfer is complete, the process moves the block into user space and immediately requests another block. This is called
reading ahead, or anticipated input.
For stream-oriented input/output, the single-buffering scheme can be used in a line-at-a-time fashion or a byte-at-a-time
fashion. Line-at-a-time operation is appropriate for scroll-mode terminals. With this form of terminal, user input is one line
at a time, with a carriage return signaling the end of a line, and output to the terminal is similarly one line at a time. A line
printer is another example of such a device.
(2) Double buffering:

[Figure: double buffering. The device fills one system buffer while the OS moves data from the other into the user process's space.]

An improvement over single buffering can be had by assigning two system buffers to the operation. A process now
transfers data to (or from) one buffer while the operating system empties (or fills) the other. This technique is known as
double buffering or buffer swapping.

(3) Circular buffering:


[Figure: circular buffering. The device fills a pool of system buffers in rotation while the user process drains them.]

A double-buffer scheme should smooth out the flow of data between an I/O device and a process. If the performance of a
particular process is the focus of our concern, then we would like the I/O operation to be able to keep up with the
process. Double buffering may be inadequate if the process performs rapid bursts of I/O. In this case, the problem can often
be alleviated by using more than two buffers.

When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer, with each individual
buffer being one unit in the circular buffer. This is simply the bounded-buffer producer/consumer model.
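The bounded-buffer idea behind circular buffering can be sketched in Python. This is a simplified model: a real kernel implementation would block the producer and consumer on synchronization primitives rather than return flags.

```python
from collections import deque

class CircularBuffer:
    """Bounded buffer: a producer (the device) fills slots while a
    consumer (the user process) drains them in FIFO order."""
    def __init__(self, nbufs):
        self.nbufs = nbufs
        self.slots = deque()

    def put(self, block):          # called by the I/O side
        if len(self.slots) == self.nbufs:
            return False           # all buffers full: producer must wait
        self.slots.append(block)
        return True

    def get(self):                 # called by the user process
        return self.slots.popleft() if self.slots else None

buf = CircularBuffer(3)
for blk in ["b0", "b1", "b2", "b3"]:
    ok = buf.put(blk)
print(ok)            # False: the fourth block found no free buffer
print(buf.get())     # "b0": blocks are consumed in FIFO order
```

With three or more buffers the device can stay ahead of a bursty consumer, which is exactly the advantage circular buffering offers over double buffering.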

Device Controllers
A computer system contains a multitude of I/O devices and their respective controllers:
● Network card
● Graphics adapter
● Disk controller
● DVD-ROM controller
● Serial port
● USB
● Sound card

Principle of I/O Software


 Interrupts
 The CPU hardware has a wire called the interrupt request line that the CPU senses after executing every
instruction. When the CPU detects that a controller has asserted a signal on the interrupt request line, the
CPU saves a small amount of state, such as the current value of the instruction pointer, and jumps to the
interrupt-handler routine at a fixed address in memory.
 The interrupt handler determines the cause of the interrupt, performs the necessary processing, and
executes a return-from-interrupt instruction to return the CPU to the execution state prior to the interrupt. We
say that the device controller raises an interrupt by asserting a signal on the interrupt request line, the CPU
catches the interrupt and dispatches to the interrupt handler, and the handler clears the interrupt by servicing
the device. Figure 12.3 summarizes the interrupt-driven I/O cycle.
 This basic interrupt mechanism enables the CPU to respond to an asynchronous event, such as a device
controller becoming ready for service. In a modern operating system, we need more sophisticated interrupt-
handling features.
 First, we need the ability to defer interrupt handling during critical processing. Second, we need an efficient
way to dispatch to the proper interrupt handler for a device, without first polling all the devices to see which
one raised the interrupt. Third, we need multilevel interrupts, so that the operating system can distinguish
between high- and low-priority interrupts and can respond with the appropriate degree of urgency.
 In modern computer hardware, these three features are provided by the CPU and by the interrupt-
controller hardware.
 CPUs have two interrupt request lines. One is the nonmaskable interrupt, which is reserved for events
such as unrecoverable memory errors. The second interrupt line is maskable: it can be turned off by the
CPU before the execution of critical instruction sequences that must not be interrupted. The maskable
interrupt is used by device controllers to request service.
 The interrupting device supplies an address, which is an offset in a table called the interrupt vector. This
vector contains the memory addresses of specialized interrupt handlers. The purpose of a vectored interrupt
mechanism is to reduce the need for a single interrupt handler to search all possible sources of interrupts to
determine which one needs service.
 The interrupt mechanism also implements a system of interrupt priority levels. This mechanism enables the
CPU to defer the handling of low-priority interrupts without masking off all interrupts, and makes it possible
for a high-priority interrupt to preempt the execution of a low-priority interrupt.

 Application I/O Interface


 Structuring techniques and interfaces for the operating system enable I/O devices to be treated in a
standard, uniform way. For instance, an application can open a file on a disk without knowing what kind
of disk it is, and new disks and other devices can be added to a computer without the operating system
being disrupted.
 The actual differences are encapsulated in kernel modules called device drivers that internally are custom-
tailored to each device but that export one of the standard interfaces.
 The purpose of the device-driver layer is to hide the differences among device controllers from the I/O
subsystem of the kernel, much as the I/O system calls hide hardware differences from applications.

Character-stream or block. A character-stream device transfers bytes one by one, whereas a block device
transfers a block of bytes as a unit.
Sequential or random-access. A sequential device transfers data in a fixed order that is determined by the
device, whereas the user of a random-access device can instruct the device to seek to any of the available
data storage locations.
Synchronous or asynchronous. A synchronous device is one that performs data transfers with predictable
response times. An asynchronous device exhibits irregular or unpredictable response times.
Sharable or dedicated. A sharable device can be used concurrently by several processes or threads; a
dedicated device cannot.
Speed of operation. Device speeds range from a few bytes per second to a few gigabytes per second.
Read-write, read-only, or write-only. Some devices perform both input and output, but others support only
one data direction. For the purpose of application access, many of these differences are hidden by the
operating system, and the devices are grouped into a few conventional types.
 Operating systems also provide special system calls to access a few additional devices, such as a time-of-
day clock and a timer. Because the performance and addressing characteristics of network I/O differ
significantly from those of disk I/O, most operating systems provide a network I/O interface that is different
from the read-write-seek interface used for disks.

 Clocks and Timers


 Most computers have hardware clocks and timers that provide three basic functions:
1. Give the current time
2. Give the elapsed time
3. Set a timer to trigger operation X at time T
 These functions are used heavily by the operating system, and also by time sensitive applications. The
hardware to measure elapsed time and to trigger operations is called a programmable interval timer.

 Blocking and Non-Blocking Input/output

When an application issues a blocking system call, the execution of the application is suspended. The
application is moved from the operating system's run queue to a waiting queue. After the system call
completes, the application is moved back to the run queue, where it is eligible to resume execution, at which
time it will receive the values returned by the system call.

A non-blocking call does not halt the execution of the application for an extended time. Instead, it returns
quickly, with a return value that indicates how many bytes were transferred. An example of non-blocking
behavior is the select() system call for network sockets.

An alternative to the non-blocking system call is the asynchronous system call. An asynchronous call returns
immediately, without waiting for the I/O to complete. The difference between a non-blocking and an asynchronous
system call is that a non-blocking read() returns immediately with whatever data are available: the full number
of bytes requested, fewer, or none at all. An asynchronous read() call requests a transfer that will be
performed in its entirety but that will complete at some future time.
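The contrast can be illustrated in Python, using a socket pair as a stand-in for a network connection: select() reports readiness without blocking, so the recv() that follows a positive report is known not to block.

```python
import select
import socket

# Two connected sockets; `a` is "our" end, `b` plays the remote peer.
a, b = socket.socketpair()
a.setblocking(False)

# Poll with a zero timeout: no data has been sent yet, so select()
# returns immediately with nothing ready. A blocking read() here
# would have suspended the process instead.
ready, _, _ = select.select([a], [], [], 0)
print(bool(ready))        # False

b.send(b"ping")
# Wait (up to 1 s) until the socket is readable, then read.
ready, _, _ = select.select([a], [], [], 1.0)
if ready:
    data = a.recv(4096)   # will not block: readiness was reported
print(data)

a.close(); b.close()
```

This is the non-blocking/polling style; an asynchronous call would instead hand the kernel a request and be notified later, after the full transfer completes.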

 Kernel Input/output sub-systems


 Kernels provide many services related to I/O. The services that we describe are I/O scheduling, buffering,
caching, spooling, device reservation, and error handling.
 Scheduling
 To schedule a set of I/O requests means to determine a good order in which to execute them. The order in
which applications issue system calls rarely is the best choice. Scheduling can improve overall system
performance, can share device access fairly among processes, and can reduce the average waiting time for
I/O to complete. Operating-system developers implement scheduling by maintaining a queue of requests for
each device. When an application issues a blocking I/O system call, the request is placed on the queue for
that device.
 The I/O scheduler rearranges the order of the queue to improve the overall system efficiency and the
average response time experienced by applications.

 Buffering
 A buffer is a memory area that stores data while they are transferred between two devices or between a
device and an application. Buffering is done for three reasons.
 One reason is to cope with a speed mismatch between the producer and consumer of a data stream: for
example, data arriving from a slow device can accumulate in a second buffer while the first buffer is written
to disk. A second use of buffering is to adapt between devices that have different data-transfer sizes.
 A third use of buffering is to support copy semantics for application I/O.

 Caching
 A cache is a region of fast memory that holds copies of data. Access to the cached copy is more efficient
than access to the original.
 Caching and buffering are two distinct functions, but sometimes a region of memory can be used for both
purposes.

 Spooling and Device Reservation


 A spool is a buffer that holds output for a device, such as a printer, that cannot accept interleaved data
streams. The spooling system copies the queued spool files to the printer one at a time.
 In some operating systems, spooling is managed by a system daemon process. In other operating
systems, it is handled by an in-kernel thread.

 Error Handling
An operating system that uses protected memory can guard against many kinds of hardware and application
errors.

DEVICE DRIVERS
 In computing, a device driver or software driver is a computer program allowing higher-level computer
programs to interact with a hardware device.
 A driver typically communicates with the device through the computer bus or communications subsystem
to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues
commands to the device.
 Once the device sends data back to the driver, the driver may invoke routines in the original calling
program. Drivers are hardware-dependent and operating-system-specific.
 They usually provide the interrupt handling required for any necessary asynchronous time-dependent
hardware interface.

Disk Structure

Disks provide secondary storage for computer systems. In the early days magnetic tape was used as secondary
storage, but it was very slow. Nowadays disks are used as secondary storage, and in comparison to magnetic
tape they are very fast. Tapes are now used mainly as backup for storing information and for transferring
information from one system to another.

Disk drives are addressed as large one-dimensional arrays of logical blocks, where each block is the smallest
unit of transfer. The size of each block is usually 512 bytes.
The logical blocks are mapped onto the sectors of the disk. Sector 0 is the first sector of the first track on the
outermost cylinder. The mapping then proceeds through that track, then through the rest of the tracks in that
cylinder, and then through the cylinders from outermost to innermost.

Disk parameters:
The various disk parameters are as follows:
 When the disk drive is operating, the disk is rotating at constant speed. To read or write, the head must be
positioned at the desired track and at the beginning of the desired sector on that track.
 Track selection involves moving the head in a movable-head system or electronically selecting one head
on a fixed-head system. On a movable-head system, the time it takes to position the head at the track is
known as seek time.
 Once the track is selected, the disk controller waits until the appropriate sector rotates to line up with
the head. The time it takes for the beginning of the sector to reach the head is known as the rotational delay, or
rotational latency. The sum of the seek time, if any, and the rotational delay equals the access time, which is
the time it takes to get into position to read or write.
 Once the head is in position, the read or write operation is performed as the sector moves under the
head; this is the data-transfer portion of the operation; the time required for the transfer is the transfer time.
 Seek Time: Seek time is the time required to move the disk arm to the required track. It turns out that this
is a difficult quantity to pin down. The seek time consists of two key components: the initial startup time and
the time taken to traverse the tracks that have to be crossed once the access arm is up to speed. It can be
approximated as
Ts = m x n + s
where n is the number of tracks traversed, m is a constant that depends on the disk drive, and s is the startup time.
 Rotational Delay: Disks, other than floppy disks, rotate at speeds ranging from 3,600 rpm up to, as of this
writing, 15,000 rpm; at this latter speed, there is one revolution per 4 ms. Thus, on average, the rotational
delay will be 2 ms. Floppy disks typically rotate at between 300 and 600 rpm. Thus the average delay will be
between 100 and 50 ms.
 Transfer Time: The transfer time to or from the disk depends on the rotation speed of the disk in the
following fashion:

T = b/(rN)
where
T = transfer time
b = number of bytes to be transferred
N = number of bytes on a track
r = rotation speed, in revolutions per second
Thus the total average access time can be expressed as
Ta = Ts + 1/(2r) + b/(rN)
where Ts is the average seek time, 1/(2r) is the average rotational delay, and b/(rN) is the transfer time.
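As a sketch, the formulas above can be evaluated for an assumed drive; the parameter values below are illustrative, not taken from the text: 4 ms average seek, 7200 rpm, and 320 sectors of 512 bytes per track.

```python
# Worked example of the access-time formulas (assumed drive parameters).
Ts = 0.004                 # average seek time, seconds (assumed)
r = 7200 / 60              # rotation speed: 7200 rpm = 120 rev/s
N = 320 * 512              # bytes per track: 320 sectors x 512 bytes (assumed)
b = 512                    # bytes to transfer: one sector

rotational_delay = 1 / (2 * r)     # average wait: half a revolution
T = b / (r * N)                    # transfer time, T = b/(rN)
Ta = Ts + rotational_delay + T     # total average access time

print(round(Ta * 1000, 3), "ms")   # about 8.193 ms
```

Note how the seek and rotational terms dominate: the transfer of a single sector takes only about 26 microseconds, while positioning takes over 8 milliseconds.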

Disk Scheduling / Disk Scheduling Algorithms


Most jobs depend upon the disk for loading and for input/output files. Every disk drive has a queue of pending
requests. Whenever a process needs input/output to or from the disk, it issues a system call to the operating
system. If the desired disk drive and controller are available, the request can be serviced immediately. However,
while the drive or controller is serving one request, any additional requests, normally from other processes, will
need to be queued.
For a multiprogramming system with many processes, the disk queue may often be non-empty. Thus when a
request is complete we must pick a new request from the queue and service it. A disk service requires that the
head be moved to the desired track, then a wait for latency, and finally the transfer of data. It is important that
disk service be as fast as possible. The operating system can improve on the average disk service time by
scheduling the requests for disk access.

There are many Disk Scheduling Algorithms but before discussing them let’s have a quick look at some
of the important terms:
 Seek Time: Seek time is the time taken to move the disk arm to the specified track where the data
is to be read or written. The disk scheduling algorithm that gives the minimum average seek time is
better.
 Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to rotate
into a position where it can be accessed by the read/write heads. The disk scheduling algorithm that
gives the minimum rotational latency is better.
 Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of
the disk and the number of bytes to be transferred.
 Disk Access Time:

Disk Access Time = Seek Time + Rotational Latency + Transfer Time

 Disk Response Time: Response time is the average time spent by a request waiting to
perform its I/O operation. Average response time is the response time averaged over all
requests. Variance of response time is a measure of how individual requests are serviced with respect
to the average response time. The disk scheduling algorithm that gives the minimum variance of
response time is better.

The list of the various disk scheduling algorithms is given below. Each algorithm carries its own advantages and
disadvantages, and the limitations of each algorithm led to the evolution of the next.

o FCFS scheduling algorithm


o SSTF (shortest seek time first) algorithm
o SCAN scheduling
o C-SCAN scheduling
o LOOK Scheduling
o C-LOOK scheduling

1. FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are
addressed in the order they arrive in the disk queue.
Advantages:
 Every request gets a fair chance
 No indefinite postponement
Disadvantages:
 Does not try to optimize seek time
 May not provide the best possible service
Example:

 Consider the following disk request sequence for a disk with 100 tracks: 45, 21, 67, 90, 4, 50, 89, 52,
61, 87, 25

 Head pointer starting at 50. (In FCFS the direction of head movement is irrelevant; requests are served
in arrival order.) Find the number of head movements in cylinders using FCFS scheduling.

Solution:

Number of cylinders moved by the head

= (50-45)+(45-21)+(67-21)+(90-67)+(90-4)+(50-4)+(89-50)+(89-52)+(61-52)+(87-61)+(87-25)
= 5 + 24 + 46 + 23 + 86 + 46 + 39 + 37 + 9 + 26 + 62

= 403
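The FCFS total can be checked with a short simulation that simply walks the queue in arrival order, summing the absolute head movement at each step:

```python
def fcfs(head, requests):
    """Total head movement (in cylinders) when requests are served
    strictly in the order they arrive."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

print(fcfs(50, [45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25]))
```

The same function also reproduces the FCFS practice problem later in this section (head at 53, requests 98, 183, 41, 122, 14, 124, 65, 67, total 632).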

2. SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are executed first.
So, the seek time of every request is calculated in advance in the queue and then they are
scheduled according to their calculated seek time. As a result, the request near the disk arm will
get executed first. SSTF is certainly an improvement over FCFS as it decreases the average
response time and increases the throughput of the system.
Advantages:
 Average Response Time decreases
 Throughput increases
Disadvantages:
 Overhead to calculate seek time in advance
 Can cause Starvation for a request if it has higher seek time as compared to incoming requests
 High variance of response time as SSTF favours only some requests

Example
Consider the following disk request sequence for a disk with 100 tracks

45, 21, 67, 90, 4, 89, 52, 61, 87, 25

Head pointer starting at 50. Find the number of head movements in cylinders using SSTF scheduling.

Solution:
Service order (always the closest pending request next): 52, 45, 61, 67, 87, 89, 90, 25, 21, 4.
Number of cylinders = 2 + 7 + 16 + 6 + 20 + 2 + 1 + 65 + 4 + 17 = 140
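A small simulation of the strict nearest-request-first rule confirms the total. Note that the very first move is to cylinder 52, which at distance 2 is the request closest to the starting position 50:

```python
def sstf(head, requests):
    """Total head movement when the pending request closest to the
    current head position is always served next."""
    pending, total = list(requests), 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))  # closest request
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

print(sstf(50, [45, 21, 67, 90, 4, 89, 52, 61, 87, 25]))
```

The greedy choice keeps the arm near dense clusters of requests, which is why a far-away request (here, cylinders 25, 21, and 4) can be starved for a long time.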

3. SCAN: In the SCAN algorithm the disk arm moves in a particular direction and services the
requests coming in its path; after reaching the end of the disk, it reverses its direction and again
services the requests arriving in its path. The algorithm works like an elevator and hence is also
known as the elevator algorithm. As a result, requests at the mid-range are serviced more often,
and those arriving just behind the disk arm have to wait.
Advantages:
 High throughput
 Low variance of response time
 Low average response time
Disadvantages:
 Long waiting time for requests for locations just visited by the disk arm

Example

Consider the following disk request sequence for a disk with 200 tracks

98, 137, 122, 183, 14, 133, 65, 78

Head pointer starting at 54 and moving in left direction. Find the number of head movements in cylinders
using SCAN scheduling.
Number of Cylinders = 40 + 14 + 65 + 13 + 20 + 24 + 11 + 4 + 46 = 237
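The SCAN total can be sketched as follows: with the arm moving left, the head sweeps down to the low edge of the disk (track 0), then reverses and sweeps up to the highest pending request, so only the two sweep distances need to be added.

```python
def scan(head, requests, low=0, high=199):
    """Total SCAN head movement with the arm moving left (toward `low`)
    first: sweep down to the low edge, then reverse and sweep up to the
    farthest pending request above the starting position."""
    above = [r for r in requests if r > head]
    down = head - low                       # down to the low edge
    up = (max(above) - low) if above else 0  # back up to the highest request
    return down + up

print(scan(54, [98, 137, 122, 183, 14, 133, 65, 78]))  # 237
```

Requests below the head (here, 14) are serviced on the downward sweep at no extra cost, which is why only the edge and the highest request determine the total.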

4. C-SCAN: In the SCAN algorithm, the disk arm rescans the path it has already scanned after
reversing its direction. So it may be possible that too many requests are waiting at the other end,
or that there are zero or few pending requests in the area just scanned.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of reversing its
direction, goes to the other end of the disk and starts servicing the requests from there. The disk arm thus
moves in a circular fashion; the algorithm is otherwise similar to SCAN, hence the name C-SCAN
(Circular SCAN).
Advantages:
 Provides more uniform wait time compared to SCAN

Consider the following disk request sequence for a disk with 200 tracks

98, 137, 122, 183, 14, 133, 65, 78

Head pointer starting at 54 and moving in left direction. Find the number of head movements in cylinders
using C-SCAN scheduling.
No. of cylinders crossed = 40 + 14 + 199 + 16 + 46 + 4 + 11 + 24 + 20 + 13 = 387
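A sketch of the C-SCAN computation, with the head moving left first: the arm continues to the low edge, jumps across to the high edge, and then sweeps down to the last pending request on the far side.

```python
def c_scan(head, requests, low=0, high=199):
    """Total C-SCAN head movement with the arm moving left first:
    sweep down to the low edge, jump to the high edge, then sweep down
    to the lowest pending request above the starting position."""
    above = [r for r in requests if r > head]
    total = head - low                 # down to the low edge (services 14)
    if above:
        total += high - low            # jump across the full disk
        total += high - min(above)     # sweep down through 183 ... 65
    return total

print(c_scan(54, [98, 137, 122, 183, 14, 133, 65, 78]))  # 387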

LOOK: It is similar to the SCAN disk scheduling algorithm except that the disk arm, instead of going to the
end of the disk, goes only as far as the last request to be serviced in front of the head and then reverses its
direction from there. Thus it prevents the extra delay caused by unnecessary traversal to the end
of the disk.

Example
Consider the following disk request sequence for a disk with 200 tracks

98, 137, 122, 183, 14, 133, 65, 78

Head pointer starting at 54 and moving in left direction. Find the number of head movements in cylinders
using LOOK scheduling.
Number of cylinders crossed = 40 + 51 + 13 + 20 + 24 + 11 + 4 + 46 = 209
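A sketch of the LOOK computation: the head reverses at the last pending request (cylinder 14) rather than at the disk edge, so only the lowest and highest requests matter.

```python
def look(head, requests):
    """Total LOOK head movement with the arm moving left first:
    go only as far as the lowest request, then reverse and sweep up
    to the highest request."""
    lo, hi = min(requests), max(requests)
    total = head - lo if lo < head else 0   # down to the lowest request
    if hi > head:
        total += hi - min(lo, head)         # reverse and climb to the highest
    return total

print(look(54, [98, 137, 122, 183, 14, 133, 65, 78]))  # 209
```

Comparing with SCAN's 237 for the same requests shows the saving: LOOK skips the 14 cylinders from request 14 down to track 0 and the 14 cylinders back.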

C Look Scheduling
The C-LOOK algorithm is similar to the C-SCAN algorithm to some extent. In this algorithm, the arm of the disk moves
outwards servicing requests until it reaches the highest request cylinder; then it jumps to the lowest request
cylinder without servicing any request, and then it again starts moving outwards servicing the remaining requests.

It differs from the C-SCAN algorithm in that C-SCAN forces the disk arm to move all the way to the last
cylinder regardless of whether any request is to be serviced on that cylinder or not.

Example
Consider the following disk request sequence for a disk with 200 tracks

98, 137, 122, 183, 14, 133, 65, 78

Head pointer starting at 54 and moving toward higher-numbered cylinders. Find the number of head
movements in cylinders using C-LOOK scheduling.
Number of cylinders crossed = 11 + 13 + 20 + 24 + 11 + 4 + 46 + 169 = 298
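A sketch of the C-LOOK computation matching the arithmetic above: the arm sweeps toward higher-numbered cylinders up to the highest request, then jumps back to the lowest pending request and continues upward through any remaining requests.

```python
def c_look(head, requests):
    """Total C-LOOK head movement with the arm sweeping toward higher
    cylinders first, then jumping back to the lowest pending request."""
    above = sorted(r for r in requests if r >= head)
    below = sorted(r for r in requests if r < head)
    total = (above[-1] - head) if above else 0   # up to the highest request
    if below:
        start = above[-1] if above else head
        total += start - below[0]                # jump back to the lowest
        total += below[-1] - below[0]            # sweep up through the rest
    return total

print(c_look(54, [98, 137, 122, 183, 14, 133, 65, 78]))  # 298
```

Here the jump is only 169 cylinders (from 183 back to 14) instead of C-SCAN's fixed 199, which is exactly the saving C-LOOK provides.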

Each algorithm is unique in its own way; overall performance depends on the number and type of
requests.
Note: Average rotational latency is generally taken as half the time of one full rotation.

Numerical on SSTF and SCAN


Question:

Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given: 45, 20, 90,
10, 50, 60, 80 and 70. Assume that the initial position of the R/W head is on track 50. The additional
distance that will be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm is used
compared to the SCAN (Elevator) algorithm (assuming that SCAN algorithm moves towards 100 when it
starts execution) is _________ tracks

(A) 5
(B) 9
(C) 10
(D) 11
Using SSTF Algorithm
Number of tracks = 100.

Initial Position of R/W head is 50.

The requests are: 45, 20, 90, 10, 50, 60, 80 and 70

Number of crossed cylinders = 5 + 15 + 10 + 10 + 10 + 70 + 10 = 130

Using SCAN Algorithm


Number of cylinders crossed = 0 + 10 + 10 + 10 + 10 + 10 + 55 + 25 + 10 = 140

Therefore the answer is (C): the two totals differ by 140 – 130 = 10 tracks.
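The two totals can be checked with a short sketch (function names are mine). Note that even if SCAN is taken to reverse at the last request (90) instead of travelling to track 100, its total becomes 120 and the difference is still 10 tracks.

```python
def sstf_total(requests, head):
    # Greedy: always service the pending request nearest the head.
    pending, total = list(requests), 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

def scan_total(requests, head, edge=100):
    # SCAN moving toward `edge` (the question says the head moves toward
    # 100), then reversing down to the lowest pending request.
    below = [r for r in requests if r < head]
    return (edge - head) + ((edge - min(below)) if below else 0)

reqs = [45, 20, 90, 10, 50, 60, 80, 70]
print(sstf_total(reqs, 50), scan_total(reqs, 50))  # 130 140
```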

Numerical on Disk Scheduling Algorithms


Q. Consider a disk with 200 tracks and the queue has random requests from
different processes in the order:
55, 58, 39, 18, 90, 160, 150, 38, 184

Initially the arm is at cylinder 100. Find the average seek length using the FIFO, SSTF, SCAN and C-SCAN
algorithms.

Solution: Assuming the head sweeps towards larger cylinder numbers, SCAN travels to cylinder 199 before
reversing, and C-SCAN's wrap to cylinder 0 is counted (the conventions used in the worked examples below),
the totals are FIFO = 498, SSTF = 248, SCAN = 280 and C-SCAN = 388 cylinders, giving average seek lengths
of about 55.3, 27.6, 31.1 and 43.1 cylinders respectively.
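The four totals can be computed with one small simulator. This is an illustrative sketch (function names are mine); since the question does not state a direction, it assumes the head moves towards larger cylinders, SCAN sweeps to the edge cylinder 199, and C-SCAN's wrap jump is counted, matching the conventions in the worked examples below.

```python
def fcfs(reqs, head):
    # Service requests strictly in arrival order.
    total = 0
    for r in reqs:
        total += abs(r - head)
        head = r
    return total

def sstf(reqs, head):
    # Greedy: always service the pending request nearest the head.
    pending, total = list(reqs), 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

def scan(reqs, head, max_cyl=199):
    # Sweep up to the edge cylinder, then back down to the lowest request.
    below = [r for r in reqs if r < head]
    return (max_cyl - head) + ((max_cyl - min(below)) if below else 0)

def cscan(reqs, head, max_cyl=199):
    # Sweep up to the edge, wrap to 0 (the jump is counted), then climb
    # up to the highest request that was below the head.
    below = [r for r in reqs if r < head]
    return (max_cyl - head) + max_cyl + (max(below) if below else 0)

queue = [55, 58, 39, 18, 90, 160, 150, 38, 184]
for name, fn in [("FIFO", fcfs), ("SSTF", sstf), ("SCAN", scan), ("C-SCAN", cscan)]:
    total = fn(queue, 100)
    print(f"{name:6s} total = {total:3d}, average seek = {total / len(queue):.1f}")
```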
PRACTICE PROBLEM BASED ON FCFS DISK SCHEDULING ALGORITHM-
 

Problem-
 
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The
FCFS scheduling algorithm is used. The head is initially at cylinder number 53. The cylinders are numbered
from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests is
_______.
 

Solution-
 
 
Total head movements incurred while servicing these requests
= (98 – 53) + (183 – 98) + (183 – 41) + (122 – 41) + (122 – 14) + (124 – 14) + (124 – 65) + (67 – 65)
= 45 + 85 + 142 + 81 + 108 + 110 + 59 + 2
=632
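The FCFS arithmetic can be checked in a couple of lines, since the service order is simply the arrival order:

```python
head, requests = 53, [98, 183, 41, 122, 14, 124, 65, 67]
# Distance between each consecutive pair of positions, starting at the head.
moves = [abs(b - a) for a, b in zip([head] + requests, requests)]
print(moves)       # [45, 85, 142, 81, 108, 110, 59, 2]
print(sum(moves))  # 632
```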

PRACTICE PROBLEMS BASED ON SSTF DISK SCHEDULING ALGORITHM-


 

Problem-01:
 
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The
SSTF scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.
 

Solution-
 
 
Total head movements incurred while servicing these requests
= (65 – 53) + (67 – 65) + (67 – 41) + (41 – 14) + (98 – 14) + (122 – 98) + (124 – 122) + (183 – 124)
= 12 + 2 + 26 + 27 + 84 + 24 + 2 + 59
= 236

Problem-02:
 
Consider a disk system with 100 cylinders. The requests to access the cylinders occur in following
sequence-
4, 34, 10, 7, 19, 73, 2, 15, 6, 20
Assuming that the head is currently at cylinder 50, what is the time taken to satisfy all requests if it takes 1
ms to move from one cylinder to adjacent one and shortest seek time first policy is used?
1. 95 ms
2. 119 ms
3. 233 ms
4. 276 ms
 

Solution-
 
 
Total head movements incurred while servicing these requests
= (50 – 34) + (34 – 20) + (20 – 19) + (19 – 15) + (15 – 10) + (10 – 7) + (7 – 6) + (6 – 4) + (4 – 2) + (73 – 2)
= 16 + 14 + 1 + 4 + 5 + 3 + 1 + 2 + 2 + 71
= 119
 
Time taken for one head movement = 1 msec. So,
Time taken for 119 head movements
= 119 x 1 msec
= 119 msec
 Thus option B is correct.
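Since each cylinder crossing costs 1 ms, the total service time equals the SSTF seek distance. A quick check (the function name is mine):

```python
def sstf_service_time(requests, head, ms_per_cylinder=1):
    # Greedy nearest-request-first order; total seek distance scaled by
    # the per-cylinder movement time.
    pending, total = list(requests), 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total * ms_per_cylinder

print(sstf_service_time([4, 34, 10, 7, 19, 73, 2, 15, 6, 20], 50))  # 119
```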

PRACTICE PROBLEM BASED ON SCAN DISK SCHEDULING ALGORITHM-


 
Problem-
 
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The
SCAN scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.
 

Solution-
 

 
Total head movements incurred while servicing these requests
= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 – 183) + (199 – 41) + (41
– 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 158 + 27
= 331

Alternatively,
Total head movements incurred while servicing these requests
= (199 – 53) + (199 – 14)
= 146 + 185
= 331

PRACTICE PROBLEM BASED ON C-SCAN DISK SCHEDULING ALGORITHM-


 

Problem-
 
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The C-
SCAN scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.
 

Solution-
 

 
Total head movements incurred while servicing these requests
= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 – 183) + (199 – 0) + (14 –
0) + (41 – 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 199 + 14 + 27
= 386
 
Alternatively,
Total head movements incurred while servicing these requests
= (199 – 53) + (199 – 0) + (41 – 0)
= 146 + 199 + 41
= 386

PRACTICE PROBLEM BASED ON LOOK DISK SCHEDULING ALGORITHM-


 

Problem-
 
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The
LOOK scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.
 

Solution-
 
 
Total head movements incurred while servicing these requests
= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (183 – 41) + (41 – 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 142 + 27
= 299
 
Alternatively,
Total head movements incurred while servicing these requests
= (183 – 53) + (183 – 14)
= 130 + 169
= 299

PRACTICE PROBLEMS BASED ON C-LOOK DISK SCHEDULING ALGORITHM-


 

Problem-01:
 
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The C-
LOOK scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.
 

Solution-
 

 
Total head movements incurred while servicing these requests
= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (183 – 14) + (41 – 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 169 + 27
= 326
 
Alternatively,
Total head movements incurred while servicing these requests
= (183 – 53) + (183 – 14) + (41 – 14)
= 130 + 169 + 27
= 326
 

Problem-02:
 
Consider a disk queue with requests for I/O to blocks on cylinders 47, 38, 121, 191, 87, 11, 92, 10. The C-
LOOK scheduling algorithm is used. The head is initially at cylinder number 63 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.
 

Solution-
 

 
Total head movements incurred while servicing these requests
= (87 – 63) + (92 – 87) + (121 – 92) + (191 – 121) + (191 – 10) + (11 – 10) + (38 – 11) + (47 – 38)
= 24 + 5 + 29 + 70 + 181 + 1 + 27 + 9
= 346
 
Alternatively,
Total head movements incurred while servicing these requests
= (191 – 63) + (191 – 10) + (47 – 10)
= 128 + 181 + 37
= 346
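The C-LOOK problems above can both be verified by a small scheduler that also reports the order of service. This is a sketch under the same conventions as the solutions (head moving toward higher cylinders, wrap jump counted); the function name is mine.

```python
def clook_schedule(requests, head):
    # Requests at or above the head are serviced in ascending order; the
    # arm then jumps to the lowest remaining request and services the rest
    # in ascending order (the jump distance is included in the total).
    upper = sorted(r for r in requests if r >= head)
    lower = sorted(r for r in requests if r < head)
    order, total, pos = upper + lower, 0, head
    for r in order:
        total += abs(r - pos)
        pos = r
    return order, total

order, total = clook_schedule([47, 38, 121, 191, 87, 11, 92, 10], 63)
print(order)  # [87, 92, 121, 191, 10, 11, 38, 47]
print(total)  # 346
```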

DISK MANAGEMENT
The operating system is responsible for disk management. The following activities are discussed below.

 Disk Formatting
Disk formatting is of two types.
a) Physical formatting or low level formatting.
b) Logical Formatting

Physical Formatting
 A disk must be formatted before storing data.
 The disk must be divided into sectors that the disk controller can read and write.
 Low-level formatting fills the disk with a special data structure for each sector.
 The data structure consists of three fields: header, data area and trailer.
 The header and trailer contain information used by the disk controller.
 The sector number and an Error-Correcting Code (ECC) are contained in the header and trailer.
 When data are written to a sector, the ECC is updated with a value computed from the data.
 When data are read from a sector, the ECC is recalculated and compared to detect errors.
 Low-level formatting is done at the factory.

Logical Formatting
 After the disk is partitioned, logical formatting is used.
 The operating system stores the initial file-system data structures onto the disk.

 Boot Block
 When a computer system is powered up or rebooted, a program in read-only memory executes.
 A diagnostic check is done first.
 The stage-0 boot program is then executed.
 This boot program reads the first sector from the boot device, which contains a stage-1 boot program.
 The boot sector may not contain a full boot program.
 On a PC booting from a hard disk, the boot sector also contains a partition table.
 The code in the boot ROM instructs the disk controller to read the boot blocks into memory and then starts
executing that code.
 The full bootstrap program is more sophisticated than the bootstrap loader in the boot ROM.

SWAP SPACE MANAGEMENT


Swap-space management is a low-level task of the operating system. The main goal of the design and
implementation of swap space is to provide the best throughput for the virtual memory system.

 Swap-Space Use
The operating system needs to release sufficient main memory to bring in a process that is ready to execute.
The operating system uses swap space in various ways. Paging systems may simply store pages that have
been pushed out of main memory. Unix allows the use of multiple swap spaces, which are usually put on
separate disks so that the load placed on the I/O system by paging and swapping can be spread over the
system's I/O devices.

 Swap Space Location


 Swap space can reside in two places:
1. Separate disk partition
2. Normal file System

 If the swap space is simply a large file within the file system, normal file-system routines can be used to
create it, name it and allocate its space. This is easy to implement but inefficient: external fragmentation
can greatly increase swapping times. Caching is used to improve performance; blocks of information are
cached in physical memory, and special tools can be used to allocate physically contiguous blocks for the
swap file.
 Swap space can instead be created in a separate disk partition. No file system or directory structure is placed on
this space; a separate swap-space storage manager is used to allocate and deallocate the blocks. This
manager uses algorithms optimized for speed rather than storage efficiency. Internal fragmentation may
increase. Some operating systems are flexible and can swap both in raw partitions and in file-system space.

STABLE STORAGE, IMPLEMENTATION

 The write-ahead log requires the availability of stable storage.
 By definition, information residing in stable storage is never lost.
 To implement such storage, we need to replicate the required information on multiple storage devices
(usually disks) with independent failure modes.
 We also need to coordinate the writing of updates in a way that guarantees that a failure during an update
will not leave all the copies in a damaged state and that, when we are recovering from failure, we can force
all copies to a consistent and correct value, even if another failure occurs during the recovery.
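The two requirements above, replicated copies on devices with independent failure modes plus a recovery rule, can be illustrated with a toy sketch. All names here are illustrative and the files stand in for independent disks; real implementations work at the level of disk blocks with controller-verified writes.

```python
import hashlib
import os
import tempfile

# Two copies that would, on real hardware, live on disks with
# independent failure modes.
DIR = tempfile.mkdtemp()
COPIES = [os.path.join(DIR, "copy_a.bin"), os.path.join(DIR, "copy_b.bin")]

def stable_write(data: bytes):
    # Update the copies one at a time: a crash can leave at most the copy
    # currently being written in a damaged state, never both.
    for path in COPIES:
        with open(path, "wb") as f:
            f.write(hashlib.sha256(data).digest() + data)  # checksum + payload
            f.flush()
            os.fsync(f.fileno())

def stable_read() -> bytes:
    # Recovery rule: return the first copy whose checksum verifies; a
    # damaged or missing copy is skipped.
    for path in COPIES:
        try:
            raw = open(path, "rb").read()
        except OSError:
            continue
        digest, data = raw[:32], raw[32:]
        if hashlib.sha256(data).digest() == digest:
            return data
    raise IOError("all copies damaged")

stable_write(b"log record 42")
print(stable_read())  # b'log record 42'
```

Even if one copy is later corrupted, `stable_read` still recovers the value from the surviving copy, which is the essential property stable storage must provide.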

DISK RELIABILITY

 Good performance means high speed; another important aspect of performance is reliability.
 A fixed disk drive is likely to be more reliable than a removable disk or tape drive.
 An optical cartridge is likely to be more reliable than a magnetic disk or tape.
 A head crash in a fixed hard disk generally destroys the data, whereas the failure of a tape drive or optical
disk drive often leaves the data cartridge unharmed.

 STORAGE AREA NETWORK


A storage area network (SAN) is a high-speed, special-purpose network (or subnetwork) that interconnects
different kinds of data storage devices with associated data servers on behalf of a larger network of users. SANs
support disk mirroring, backup and restore, archival and retrieval of archived data, data migration from one
storage device to another and the sharing of data among different servers in a network. SANs can incorporate
subnetworks with network-attached storage (NAS) systems. The fundamental feature of SANs is universal data
storage connectivity. Universal connectivity enables a host of important benefits.

Storage area network advantages:


(a) High reliability of access to the data located on external storage systems.
(b) Independence of SAN topology from data storage systems and servers.
(c) Centralized data storage (reliability, security).
(d) High performance and low latency.
(e) Scalability and flexibility of storage area network logical structure.
(f) Opportunity of organizing remote backup data storage systems and remote system of backup and restore.

Some Numericals Based on Disk Tracks

The data transfer time to or from the disk depends on the rotation speed of the disk in the following fashion:

T = b / (r × N)

where T = transfer time
b = number of bytes to be transferred
N = number of bytes on a track
r = rotation speed (revolutions per second)

The total average access time can be expressed as

Ta = Ts + 1/(2r) + b/(r × N)

where Ts = average seek time.
Example 1: A certain moving-arm disk storage device has the following characteristics:
No. of tracks per surface = 404
Track storage capacity = 13030 bytes
Disk speed = 3600 rpm
Average seek time = 30 msec
Find the average latency, the disk storage capacity and the data rate.

Solution: Disk speed = 3600 rpm, i.e. r = 3600/60 = 60 revolutions per second.

Time required for one rotation = 1/60 sec
Average latency = (1/2) × (1/60) = 1/120 sec ≈ 8.33 msec
Disk storage capacity = 404 × 13030 = 5,264,120 bytes
Data rate = 60 × 13030 = 781,800 bytes/sec
Total time to transfer one full track of data
= Ts + 1/(2r) + b/(r × N)
= 30 × 10^-3 + 1/120 + 13030/(60 × 13030)
= 0.055 sec
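The quantities in Example 1 can be checked numerically; the variable names below are mine, and the transfer term is b/(r × N) with b equal to one full track.

```python
rpm = 3600
r = rpm / 60.0                       # revolutions per second = 60
tracks, track_bytes = 404, 13030
seek = 30e-3                         # average seek time in seconds

latency = 1 / (2 * r)                # half a revolution = 1/120 s
capacity = tracks * track_bytes      # bytes
data_rate = r * track_bytes          # bytes per second
transfer = track_bytes / (r * track_bytes)   # one full track: b/(r*N) = 1/60 s

print(round(latency, 6))                     # 0.008333
print(capacity, int(data_rate))              # 5264120 781800
print(round(seek + latency + transfer, 3))   # 0.055
```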

Example 2: If the overhead of formatting a disk is 96 bytes for each 4000-byte sector, compute the unformatted
capacity of the disk for the following parameters:

Number of surfaces = 8
Outer diameter of the disk = 12 cm
Inner diameter of the disk = 4 cm
Inter-track space = 0.1 mm
Number of sectors per track = 20

If the disk above rotates at 3600 rpm, determine the effective data transfer rate, which is defined as the
number of bytes transferred per second between disk and memory.

Solution:
For computing the unformatted capacity:
Number of tracks = (outer diameter – inner diameter) / inter-track space
= (12 – 4) cm / 0.01 cm
= 800

Unformatted capacity = Number of tracks × Number of sectors per track × Number of bytes per sector
= 800 × 20 × (4000 + 96)
= 65,536,000 bytes

For determining the effective data transfer rate:

Number of bytes transferred per second
= number of cylinders read per second × number of tracks per cylinder × number of sectors per track
× number of bytes per sector
= (3600/60) × 8 × 20 × 4000
= 38,400,000 bytes/sec
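Both results of Example 2 can be checked in a few lines. The variable names are mine, and the track count follows the solution's own convention of dividing the diameter difference by the 0.1 mm inter-track space.

```python
outer_cm, inner_cm, gap_cm = 12, 4, 0.01       # inter-track space = 0.1 mm
tracks = round((outer_cm - inner_cm) / gap_cm) # 800, as in the solution
sectors, sector_bytes, overhead = 20, 4000, 96
surfaces, rpm = 8, 3600

unformatted = tracks * sectors * (sector_bytes + overhead)
rate = (rpm / 60) * surfaces * sectors * sector_bytes  # formatted bytes/sec

print(unformatted)  # 65536000
print(int(rate))    # 38400000
```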

Example 3: Calculate the total time required to read 35 sectors on a two-sided floppy disk. Assume
that each track has 8 sectors and the track-to-track step time is 8 milliseconds. The first sector to be
read is sector 3 on track 10. Assume that the diskette is soft-sectored and the controller has a 1-sector
buffer. The diskette spins at 300 rpm and, initially, the head is on track 10.

Solution:

Time to read one sector = 60 / (8 × 300) sec = 25 milliseconds

To read 35 sectors starting from sector 3 of track 10, we read track 10, side 0 (sectors 3–8, i.e. 6 sectors,
after waiting 2 sector times while sectors 1 and 2 pass under the head), track 10, side 1 (8 sectors), track 11,
side 0 (8 sectors), track 11, side 1 (8 sectors) and track 12, side 0 (5 sectors).

Two track-to-track steps are required (track 10 → 11 and track 11 → 12).

Time = 25 × (2 + 6 + 8 + 8 + 8 + 5) + 2 × 8
= 25 × 37 + 16
= 941 milliseconds
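The 941 ms figure can be reproduced numerically. The sector breakdown below is a reconstruction: it charges 2 sector times of rotational wait before sector 3 arrives under the head, which is the assumption consistent with the stated answer.

```python
rpm, sectors_per_track, step_ms = 300, 8, 8
sector_ms = 60_000 / (rpm * sectors_per_track)   # 25 ms to pass one sector

reads = [6, 8, 8, 8, 5]   # track 10 s0 (sectors 3-8), 10 s1, 11 s0, 11 s1, 12 s0
rotational_wait = 2       # sector times while sectors 1-2 pass under the head
track_steps = 2           # track 10 -> 11 and track 11 -> 12

total = sector_ms * (rotational_wait + sum(reads)) + step_ms * track_steps
print(int(total))  # 941
```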
