Device Management
A device communicates with a computer system by sending signals over a cable or even through the
air. The device communicates with the machine via a connection point termed a port (for example, a
serial port). If one or more devices use a common set of wires, the connection is called a bus.
When device A has a cable that plugs into device B, and device B has a cable that plugs into device
C, and device C plugs into a port on the computer, this arrangement is called a daisy chain. It usually
operates as a bus.
A controller is a collection of electronics that can operate a port, a bus, or a device. A serial-port
controller is an example of a simple device controller. It is a single chip in the computer that controls the
signals on the wires of a serial port.
An I/O port typically consists of four registers, called the status, control, data-in, and data-out registers.
The status register contains bits that can be read by the host. These bits indicate states such as whether the
current command has completed, whether a byte is available to be read from the data-in register, and
whether there has been a device error. The control register can be written by the host to start a command or
to change the mode of a device. For instance, a certain bit in the control register of a serial port chooses
between full-duplex and half-duplex communication, another enables parity checking, a third bit sets the
word length to 7 or 8 bits, and other bits select one of the speeds supported by the serial port.
The data-in register is read by the host to get input, and the data-out register is written by the host to
send output. The data registers are typically 1 to 4 bytes in size. Some controllers have FIFO chips that can hold
several bytes of input or output data to expand the capacity of the controller beyond the size of the data
register. A FIFO chip can hold a small burst of data until the device or host is able to receive those data.
POLLING
The complete protocol for interaction between the host and a controller can be intricate, but the basic
handshaking notion is simple. The controller indicates its state through the busy bit in the status register.
(Recall that to set a bit means to write a 1 into the bit, and to clear a bit means to write a 0 into it.)
The controller sets the busy bit when it is busy working, and clears the busy bit when it is ready to
accept the next command. The host signals its wishes via the command-ready bit in the command register.
The host sets the command-ready bit when a command is available for the controller to execute.
For this example, the host writes output through a port, coordinating with the controller by handshaking as
follows.
1. The host repeatedly reads the busy bit until that bit becomes clear.
2. The host sets the write bit in the command register and writes a byte into the data-out register.
3. The host sets the command-ready bit.
4. When the controller notices that the command-ready bit is set, it sets the busy bit.
5. The controller reads the command register and sees the write command.
6. It reads the data-out register to get the byte, and does the I/O to the device.
7. The controller clears the command-ready bit, clears the error bit in the status register to indicate that the
device I/O succeeded, and clears the busy bit to indicate that it is finished.
The host is busy-waiting or polling: It is in a loop, reading the status register over and over until the busy bit
becomes clear. If the controller and device are fast, this method is a reasonable one. But if the wait may be
long, the host should probably switch to another task.
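A minimal host-side sketch of this handshake is shown below; the struct and the bit positions are the same illustrative assumptions as in the earlier register sketch, not a real controller's interface.

    /* Host-side sketch of the polling handshake described above. A real
     * controller would be reached through memory-mapped or port-mapped
     * addresses defined by the hardware. */
    #include <stdint.h>

    #define STATUS_BUSY        0x01
    #define CTRL_COMMAND_READY 0x01
    #define CTRL_WRITE         0x02

    struct io_port {
        volatile uint8_t status;
        volatile uint8_t control;
        volatile uint8_t data_in;
        volatile uint8_t data_out;
    };

    /* Write one byte to the device, busy-waiting on the status register. */
    void polled_write(struct io_port *port, uint8_t byte)
    {
        /* Step 1: poll the busy bit until the controller is free. */
        while (port->status & STATUS_BUSY)
            ;                                   /* busy-wait (polling) */

        /* Step 2: set the write bit and place the byte in data-out. */
        port->control |= CTRL_WRITE;
        port->data_out = byte;

        /* Step 3: set command-ready; the controller then performs steps 4-7. */
        port->control |= CTRL_COMMAND_READY;
    }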
Input/output Devices
Basically, input/output devices can be divided into two categories:
1) Block devices: A block device stores information in fixed-size blocks, each one with its own
address. Block sizes commonly range from 512 bytes to 32,768 bytes. The essential property of a
block device is that it is possible to read or write each block independently of all the others.
Disks are the most common block devices.
2) Character devices: A character device delivers or accepts a stream of characters, without regard to any
block structure. It is not addressable and does not have a seek operation. Printers, network interfaces,
mice, and most other devices that are not disk-like can be seen as character devices.
Input/output devices cover a huge range of speeds, which puts considerable pressure on the
software to perform well over many orders of magnitude in data rates.
External devices that engage in I/O with a computer system can be divided into three categories.
1. Human readable
2. Machine readable
3. Communication
1. Human Readable is suitable for communicating with the computer user. Examples are printers, video
display terminals, keyboards, etc.
2. Machine Readable is suitable for communicating with electronic equipment. Examples are disk and tape
drives, sensors, controllers and actuators.
3. Communication is suitable for communicating with remote devices. Examples are digital line drivers and
modems.
Direct Memory Access (DMA): A special control unit may be provided to allow transfer of a
block of data directly between an external device and main memory, without continuous intervention by
the processor. This approach is called Direct Memory Access (DMA).
DMA can be used with either polling or interrupt software. DMA is particularly useful on devices like disks,
where many bytes of information can be transferred in single I/O operations. When used in conjunction with
an interrupt, the CPU is notified only after the entire block of data has been transferred. For each byte or
word transferred, the DMA controller provides the memory address and the bus signals that control the data transfer.
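The sketch below shows how a driver might program a hypothetical DMA controller; the register names and the start/done bits are assumptions made only for illustration.

    /* Sketch of programming a hypothetical DMA controller. The register
     * names and command/status bits are invented for this example. */
    #include <stdint.h>
    #include <stddef.h>

    struct dma_controller {
        volatile uint32_t mem_addr;   /* main-memory address of the buffer    */
        volatile uint32_t count;      /* number of bytes to transfer          */
        volatile uint32_t command;    /* bit 0 = start, bit 1 = direction     */
        volatile uint32_t status;     /* bit 0 = done                         */
    };

    #define DMA_CMD_START 0x1
    #define DMA_CMD_READ  0x2   /* device -> memory */
    #define DMA_STAT_DONE 0x1

    /* Start a block transfer and poll for completion; with interrupts, the
     * CPU would instead be notified once, after the whole block has moved. */
    int dma_read_block(struct dma_controller *dma, uint32_t phys_addr, size_t len)
    {
        dma->mem_addr = phys_addr;               /* where the block should land */
        dma->count    = (uint32_t)len;           /* how many bytes to move      */
        dma->command  = DMA_CMD_START | DMA_CMD_READ;

        while (!(dma->status & DMA_STAT_DONE))   /* the CPU does no per-byte
                                                    work; it only waits for the
                                                    whole block to finish       */
            ;
        return 0;
    }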
Device drivers are part of the operating system, but not necessarily part of the OS kernel. The operating
system provides a simplified view of the device to user applications (e.g., character devices vs. block devices
in UNIX). In some operating systems (e.g., Linux), devices are also accessible through the /dev file system.
In some cases, the operating system buffers data that are transferred between a device and a user space
program (disk cache, network buffer). This usually increases performance, but not always.
Input/output buffering
Suppose a user process wants to read blocks of data from a tape one at a time, with each block having a length of 512
bytes. The data are to be read into a data area within the address space of the user process, at virtual locations
1000 to 1511. The simplest way would be to execute an I/O command to the tape unit and then wait for the data to
become available. The waiting could be either busy waiting or, more practically, process suspension on an
interrupt.
There are two problems with this approach. (1) The program is hung up waiting for the relatively slow I/O to
complete. (2) This approach to I/O interferes with swapping decisions by the operating system: virtual locations
1000 to 1511 must remain in main memory during the course of the block transfer, otherwise some of the data
may be lost.
To avoid these overheads and inefficiencies, it is sometimes convenient to perform input transfers in advance of
requests being made and to perform output transfers some time after the request is made. This technique is
known as buffering.
Types of buffering
There are three types of buffering which are given below:
1) Single buffering
2) Double buffering
3) Circular buffering
(1) Single buffering:
The simplest type of support that the OS can provide is single buffering. When a user process issues an I/O request, the OS
assigns a buffer in the system portion of main memory to the operation.
For block-oriented devices, single buffering works as follows: input transfers are made to the system buffer. When
the transfer is complete, the process moves the block into user space and immediately requests another block. This is called
reading ahead, or anticipated input.
For stream-oriented input/output, the single-buffering scheme can be used in a line-at-a-time fashion or a byte-at-a-time
fashion. Line-at-a-time operation is appropriate for scroll-mode terminals. With this form of terminal, user input is one line
at a time, with a carriage return signaling the end of a line, and output to the terminal is similarly one line at a time. A line
printer is another example of such a device.
(2) Double buffering:
An improvement over single buffering can be had by assigning two system buffers to the operation. A process now
transfers data to (or from) one buffer while the operating system empties (or fills) the other. This technique is known as
double buffering or buffer swapping.
A double-buffer scheme should smooth out the flow of data between an I/O device and a process. If the performance of a
particular process is the focus of our concern, then we would like the I/O operation to be able to keep up with the
process. Double buffering may be inadequate if the process performs rapid bursts of I/O. In this case, the problem can often
be alleviated by using more than two buffers.
(3) Circular buffering:
When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer, with each individual
buffer being one unit in the circular buffer. This is simply the bounded-buffer producer/consumer model.
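The sketch below shows a minimal circular buffer of this kind; the slot count and the 512-byte block size are arbitrary choices for illustration, and with only two slots the same structure reduces to double buffering.

    /* Sketch of a circular (bounded) buffer used for I/O buffering. */
    #include <stdio.h>
    #include <string.h>

    #define NBUF 4                      /* number of buffer slots (assumed)     */

    struct circular_buffer {
        char slot[NBUF][512];           /* each slot holds one 512-byte block   */
        int  in;                        /* next slot the producer (device) fills */
        int  out;                       /* next slot the consumer (process) empties */
        int  count;                     /* number of filled slots               */
    };

    /* Producer side: 0 if the block was stored, -1 if every slot is full. */
    int buf_put(struct circular_buffer *b, const char block[512])
    {
        if (b->count == NBUF) return -1;            /* producer must wait       */
        memcpy(b->slot[b->in], block, 512);
        b->in = (b->in + 1) % NBUF;                 /* advance circularly       */
        b->count++;
        return 0;
    }

    /* Consumer side: 0 if a block was removed, -1 if the buffer is empty. */
    int buf_get(struct circular_buffer *b, char block[512])
    {
        if (b->count == 0) return -1;               /* consumer must wait       */
        memcpy(block, b->slot[b->out], 512);
        b->out = (b->out + 1) % NBUF;
        b->count--;
        return 0;
    }

    int main(void)
    {
        struct circular_buffer b = { .in = 0, .out = 0, .count = 0 };
        char block[512] = "example block";
        char out[512];

        buf_put(&b, block);
        if (buf_get(&b, out) == 0)
            printf("retrieved: %s\n", out);
        return 0;
    }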
Device Controllers
A computer system contains a multitude of I/O devices and their respective controllers:
● Network card
● Graphics adapter
● Disk controller
● DVD-ROM controller
● Serial port
● USB
● Sound card
Character-stream or block. A character-stream device transfers bytes one by one, whereas a block device
transfers a block of bytes as a unit.
Sequential or random-access. A sequential device transfers data in a fixed order that is determined by the
device, whereas the user of a random-access device can instruct the device to seek to any of the available
data storage locations.
Synchronous or asynchronous. A synchronous device is one that performs data transfers with predictable
response times. An asynchronous device exhibits irregular or unpredictable response times.
Sharable or dedicated. A sharable device can be used concurrently by several processes or threads; a
dedicated device cannot.
Speed of operation. Device speeds range from a few bytes per second to a few gigabytes per second.
Read-write, read only, or write only. Some devices perform both input and output, but others support only
one data direction.
For the purpose of application access, many of these differences are hidden by the operating
system, and the devices are grouped into a few conventional types.
Operating systems also provide special system calls to access a few additional devices, such as a time-of-
day clock and a timer. The performance and addressing characteristics of network I/O differ significantly from
those of disk I/O, so most operating systems provide a network I/O interface that is different from the read-
write-seek interface used for disks.
When an application issues a blocking system call, the execution of the application is suspended. The
application is moved from the operating system’s run queue to a wait queue. After the system call
completes, the application is moved back to the run queue, where it is eligible to resume execution, at which
time it will receive the values returned by the system call.
A non-blocking call does not halt the execution of the application for an extended time. Instead, it returns
quickly, with a return value that indicates how many bytes were transferred. An example of non-blocking
behavior is the select() system call for network sockets.
An alternative to the non-blocking system call is the asynchronous system call. An asynchronous call returns
immediately, without waiting for the I/O to complete. The difference between a non-blocking and an asynchronous
system call is that a non-blocking read() returns immediately with whatever data are available: the full number
of bytes requested, fewer, or none at all. An asynchronous read() call requests a transfer that will be
performed in its entirety but that will complete at some future time.
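For example, a program can use select() to wait, with a timeout, until data are available and then issue a read() that will not block; in this sketch standard input stands in for a network socket.

    /* Sketch: waiting for input with select() before issuing a read(). */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/select.h>

    int main(void)
    {
        fd_set readfds;
        struct timeval timeout = { .tv_sec = 5, .tv_usec = 0 };  /* wait up to 5 s */

        FD_ZERO(&readfds);
        FD_SET(STDIN_FILENO, &readfds);          /* watch fd 0 for readable data */

        int ready = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &timeout);
        if (ready > 0 && FD_ISSET(STDIN_FILENO, &readfds)) {
            char buf[128];
            ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));  /* data is ready,
                                                                  so this returns
                                                                  without blocking */
            printf("read %zd bytes\n", n);
        } else if (ready == 0) {
            printf("no data within the timeout\n");
        } else {
            perror("select");
        }
        return 0;
    }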
Buffering
A buffer is a memory area that stores data while they are transferred between two devices or between a
device and an application. Buffering is done for three reasons.
One reason is to cope with a speed mismatch between the producer and consumer of a data stream; for
example, data arriving from a slow device can accumulate in a second buffer while the first buffer is written
to disk. A second use of buffering is to adapt between devices that have different data transfer sizes.
A third use of buffering is to support copy semantics for application I/O.
Caching
A cache is a region of fast memory that holds copies of data. Access to the cached copy is more efficient
than access to the original.
Caching and buffering are two distinct functions, but sometimes a region of memory can be used for both
purposes.
Error Handling
An operating system that uses protected memory can guard against many kinds of hardware and application
errors.
DEVICE DRIVERS
In computing, a device driver or software driver is a computer program allowing higher-level computer
programs to interact with a hardware device.
A driver typically communicates with the device through the computer bus or communications subsystem
to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues
commands to the device.
Once the device sends data back to the driver, the driver may invoke routines in the original calling
program. Drivers are hardware-dependent and operating-system-specific.
They usually provide the interrupt handling required for any necessary asynchronous time-dependent
hardware interface.
Disk Structure
Disks provide secondary storage for computer systems. In the early days magnetic tape was used as secondary
storage, but it was very slow. Nowadays disks are used as secondary storage, and in comparison to magnetic
tape they are very fast. Tapes are now used mainly as backup, for storing information and for transferring
information from one system to another.
Disk drives are addressed as large one-dimensional arrays of logical blocks, where each block is the smallest unit
of transfer. The size of each block is usually 512 bytes.
The logical blocks are mapped onto the sectors of the disk. Sector 0 is the first sector of the first track on the
outermost cylinder. The mapping proceeds in order through that track, then through the rest of the tracks in
that cylinder, and then through the rest of the cylinders from outermost to innermost.
Disk parameters:
The important disk timing parameters are as follows:
When the disk drive is operating, the disk is rotating at constant speed. To read or write, the head must be
positioned at the desired track and at the beginning of the desired sector on that track.
Track selection involves moving the head in a movable-head system or electronically selecting one head
on a fixed-head system. On a movable-head system, the time it takes to position the head at the track is
known as seek time.
Once the track is selected, the disk controller waits until the appropriate sector rotates to line up with
the head. The time it takes for the beginning of the sector to reach the head is known as rotational delay, or
rotational latency. The sum of the seek time, if any, and the rotational delay equals the access time, which is
the time it takes to get into position to read or write.
Once the head is in position, the read or write operation is then performed as the sector moves under the
head; this is the data transfer portion of the operation; the time required for the transfer is the transfer time.
Seek Time Seek time is the time required to move the disk arm to the required track. It turns out that this
is a difficult quantity to pin down. The seek time consists of two key components: the initial startup time and
the time taken to traverse the tracks that have to be crossed once the access arm is up to speed.
Ts = m x n + s
where Ts is the estimated seek time, n is the number of tracks traversed, m is a constant that depends on the
disk drive, and s is the startup time.
Rotational Delay Disks, other than floppy disks, rotate at speeds ranging from 3600 rpm up to, as of this
writing, 15,000 rpm; at this latter speed, there is one revolution per 4ms. Thus, on the average, the rotational
delay will be 2ms. Floppy disks typically rotate at between 300 and 600 rpm. Thus the average delay will be
between 100 and 50ms.
Transfer Time The transfer time to or from the disk depends on the rotation speed of the disk in the
following fashion:
T = b / (rN)
where
T = transfer time
b = number of bytes to be transferred
N = number of bytes on a track
r = rotation speed, in revolutions per second
Thus the total average access time can be expressed as
Ta = Ts + 1/(2r) + b/(rN)
where Ts is the average seek time, 1/(2r) is the average rotational delay, and b/(rN) is the transfer time.
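The short program below evaluates these formulas for a hypothetical disk; the seek time, rotation speed, track size, and block size are assumed values chosen only to illustrate the calculation.

    /* Sketch: average access time Ta = Ts + 1/(2r) + b/(rN) for a
     * hypothetical disk (all parameter values below are assumptions). */
    #include <stdio.h>

    int main(void)
    {
        double Ts = 0.004;         /* average seek time: 4 ms (assumed)          */
        double r  = 15000.0 / 60;  /* rotation speed: 15,000 rpm = 250 rev/s     */
        double N  = 512.0 * 500;   /* bytes per track: 500 sectors of 512 bytes  */
        double b  = 4096.0;        /* bytes to transfer: one 4 KB block          */

        double rotational = 1.0 / (2.0 * r);   /* average rotational delay       */
        double transfer   = b / (r * N);       /* transfer time for b bytes      */
        double Ta = Ts + rotational + transfer;

        printf("rotational delay = %.3f ms\n", rotational * 1000);  /* 2.000 ms */
        printf("transfer time    = %.3f ms\n", transfer * 1000);    /* 0.064 ms */
        printf("access time Ta   = %.3f ms\n", Ta * 1000);          /* 6.064 ms */
        return 0;
    }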
There are many Disk Scheduling Algorithms but before discussing them let’s have a quick look at some
of the important terms:
Seek Time: Seek time is the time taken to move the disk arm to the specified track where the data are
to be read or written. A disk scheduling algorithm that gives a smaller average seek time is better.
Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to rotate
into a position where it comes under the read/write head. A disk scheduling algorithm that
gives a smaller rotational latency is better.
Transfer Time: Transfer time is the time to transfer the data. It depends on the rotational speed of
the disk and the number of bytes to be transferred.
Disk Access Time: Disk access time = seek time + rotational latency + transfer time.
Disk Response Time: Response time is the time a request spends waiting to perform its I/O
operation. Average response time is the mean of the response times of all requests, and variance of
response time is a measure of how individual requests are serviced with respect to the average
response time. A disk scheduling algorithm that gives a smaller variance of response time is better.
The various disk scheduling algorithms are listed below. Each algorithm has its own advantages and
disadvantages, and the limitations of each algorithm lead to the evolution of the next.
1. FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are
addressed in the order they arrive in the disk queue.
Advantages:
Every request gets a fair chance
No indefinite postponement
Disadvantages:
Does not try to optimize seek time
May not provide the best possible service
Example:
Consider the following disk request sequence for a disk with 100 tracks: 45, 21, 67, 90, 4, 50, 89, 52,
61, 87, 25.
The head pointer starts at 50 (the direction of movement does not matter for FCFS). Find the number of head
movements in cylinders using FCFS scheduling.
Solution:
= (50-45)+(45-21)+(67-21)+(90-67)+(90-4)+(50-4)+(89-50)+(89-52)+(61-52)+(87-61)+(87-25)
= 5 + 24 + 46 + 23 + 86 + 46 + 39 + 37 + 9 + 26 + 62
= 403
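The calculation can be checked with a few lines of C; the sketch below replays the same request sequence under FCFS and prints the 403-cylinder total.

    /* Sketch: total head movement under FCFS for the example above
     * (head at 50; requests 45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25). */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int requests[] = {45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25};
        int n = sizeof(requests) / sizeof(requests[0]);
        int head = 50;
        int total = 0;

        for (int i = 0; i < n; i++) {
            total += abs(requests[i] - head);  /* distance to the next request */
            head = requests[i];                /* head is now at that cylinder */
        }
        printf("Total head movement (FCFS) = %d cylinders\n", total);  /* 403 */
        return 0;
    }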
2. SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are executed first.
So, the seek time of every request is calculated in advance in the queue and then they are
scheduled according to their calculated seek time. As a result, the request near the disk arm will
get executed first. SSTF is certainly an improvement over FCFS as it decreases the average
response time and increases the throughput of the system.
Advantages:
Average Response Time decreases
Throughput increases
Disadvantages:
Overhead to calculate seek time in advance
Can cause Starvation for a request if it has higher seek time as compared to incoming requests
High variance of response time as SSTF favours only some requests
Example
Consider the following disk request sequence for a disk with 100 tracks
Head pointer starting at 50. Find the number of head movements in cylinders using SSTF scheduling.
Solution:
Number of cylinders = 5 + 7 + 9 + 6 + 20 + 2 + 1 + 65 + 4 + 17 = 136
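A sketch of the SSTF selection loop, in C, is shown below. It uses the request set from the practice problems later in this section (cylinders 98, 183, 41, 122, 14, 124, 65, 67, with the head initially at 53); ties in seek distance are broken in favour of the request listed later, which for this set reproduces the worked total of 236 cylinders.

    /* Sketch: SSTF head movement for the practice-problem request set. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int req[] = {98, 183, 41, 122, 14, 124, 65, 67};
        int n = sizeof(req) / sizeof(req[0]);
        int served[8] = {0};
        int head = 53, total = 0;

        for (int done = 0; done < n; done++) {
            int best = -1;
            /* pick the closest pending request; ties go to the later request */
            for (int i = 0; i < n; i++)
                if (!served[i] &&
                    (best < 0 || abs(req[i] - head) <= abs(req[best] - head)))
                    best = i;
            total += abs(req[best] - head);
            head = req[best];
            served[best] = 1;
        }
        printf("Total head movement (SSTF) = %d cylinders\n", total);  /* 236 */
        return 0;
    }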
3. SCAN: In the SCAN algorithm the disk arm moves in a particular direction and services the
requests coming in its path; after reaching the end of the disk, it reverses its direction and again
services the requests arriving in its path. The algorithm works like an elevator and is hence also
known as the elevator algorithm. As a result, the requests in the mid-range are serviced more often, and
those arriving just behind the disk arm have to wait.
Advantages:
High throughput
Low variance of response time
Low average response time
Disadvantages:
Long waiting time for requests for locations just visited by disk arm
It works the way an elevator works: the elevator moves in one direction all the way to the last floor in that
direction and then turns back.
Example
Consider the following disk request sequence for a disk with 100 tracks
Head pointer starting at 54 and moving in left direction. Find the number of head movements in cylinders
using SCAN scheduling.
Number of Cylinders = 40 + 14 + 65 + 13 + 20 + 24 + 11 + 4 + 46 = 237
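The SCAN sweep can be sketched in C as below, again using the request set from the practice problems later in this section (head at 53 moving toward larger cylinder numbers, cylinders numbered 0 to 199); it reproduces the 331-cylinder total computed there.

    /* Sketch: SCAN head movement for the practice-problem request set. */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

    int main(void)
    {
        int req[] = {98, 183, 41, 122, 14, 124, 65, 67};
        int n = sizeof(req) / sizeof(req[0]);
        int start = 53, head = start, last_cyl = 199, total = 0;

        qsort(req, n, sizeof(int), cmp);          /* 14 41 65 67 98 122 124 183 */

        /* Sweep toward larger cylinders, servicing every request on the way. */
        for (int i = 0; i < n; i++)
            if (req[i] >= start) { total += req[i] - head; head = req[i]; }

        /* SCAN continues to the last cylinder before reversing. */
        total += last_cyl - head;
        head = last_cyl;

        /* Reverse direction and service the remaining (lower) requests. */
        for (int i = n - 1; i >= 0; i--)
            if (req[i] < start) { total += head - req[i]; head = req[i]; }

        printf("Total head movement (SCAN) = %d cylinders\n", total);  /* 331 */
        return 0;
    }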
4. CSCAN: In SCAN algorithm, the disk arm again scans the path that has been scanned, after
reversing its direction. So, it may be possible that too many requests are waiting at the other end or
there may be zero or few requests pending at the scanned area.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of reversing its direction,
goes to the other end of the disk and starts servicing the requests from there. So the disk arm moves in
a circular fashion; the algorithm is otherwise similar to SCAN and is hence known as C-SCAN
(Circular SCAN).
Advantages:
Provides more uniform wait time compared to SCAN
Consider the following disk request sequence for a disk with 100 tracks
Head pointer starting at 54 and moving in left direction. Find the number of head movements in cylinders
using C-SCAN scheduling.
No. of cylinders crossed = 40 + 14 + 199 + 16 + 46 + 4 + 11 + 24 + 20 + 13 = 387
LOOK: It is similar to the SCAN disk scheduling algorithm except that the disk arm, instead of going all the
way to the end of the disk, goes only as far as the last request to be serviced in front of the head and then
reverses its direction from there. Thus it avoids the extra delay caused by unnecessary traversal to the end
of the disk.
Example
Consider the following disk request sequence for a disk with 100 tracks
Head pointer starting at 54 and moving in left direction. Find the number of head movements in cylinders
using LOOK scheduling.
Number of cylinders crossed = 40 + 51 + 13 + 20 + 24 + 11 + 4 + 46 = 209
C Look Scheduling
The C-LOOK algorithm is similar to the C-SCAN algorithm to some extent. In this algorithm, the arm of the disk
moves in one direction servicing requests until it reaches the last request in that direction; it then jumps to the
farthest request at the other end without servicing any request on the way, and again starts moving in the
original direction, servicing the remaining requests.
It differs from C-SCAN in that C-SCAN forces the disk arm to move all the way to the last cylinder of the disk
regardless of whether any request is to be serviced on that cylinder or not.
Example
Consider the following disk request sequence for a disk with 100 tracks
Head pointer starting at 54 and moving in left direction. Find the number of head movements in cylinders
using C LOOK scheduling.
Number of cylinders crossed = 11 + 13 + 20 + 24 + 11 + 4 + 46 + 169 = 298
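A C-LOOK sketch for the same practice-problem request set (head at 53 moving toward larger cylinder numbers) is shown below; it reproduces the 326-cylinder total computed later in this section.

    /* Sketch: C-LOOK head movement for the practice-problem request set. */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

    int main(void)
    {
        int req[] = {98, 183, 41, 122, 14, 124, 65, 67};
        int n = sizeof(req) / sizeof(req[0]);
        int start = 53, head = start, total = 0;

        qsort(req, n, sizeof(int), cmp);          /* 14 41 65 67 98 122 124 183 */

        /* Service requests at or above the head, moving toward larger cylinders. */
        for (int i = 0; i < n; i++)
            if (req[i] >= start) { total += req[i] - head; head = req[i]; }

        /* Jump to the lowest pending request, then sweep upward again;
         * the jump distance counts as head movement here, as in the worked
         * solutions below. */
        for (int i = 0; i < n; i++)
            if (req[i] < start) { total += abs(req[i] - head); head = req[i]; }

        printf("Total head movement (C-LOOK) = %d cylinders\n", total);  /* 326 */
        return 0;
    }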
Each algorithm is unique in its own way. Overall Performance depends on the number and type of
requests.
Note: Average rotational latency is generally taken as half of the time taken by one full rotation.
Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given: 45, 20, 90,
10, 50, 60, 80 and 70. Assume that the initial position of the R/W head is on track 50. The additional
distance that will be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm is used
compared to the SCAN (Elevator) algorithm (assuming that SCAN algorithm moves towards 100 when it
starts execution) is _________ tracks
(A) 5
(B) 9
(C) 10
(D) 11
Solution:
The disk has 100 tracks, the requests are 45, 20, 90, 10, 50, 60, 80 and 70, and the head is initially at track 50.
Using SSTF, the requests are serviced in the order 50, 45, 60, 70, 80, 90, 20, 10, giving a total head movement
of 5 + 15 + 10 + 10 + 10 + 70 + 10 = 130 tracks.
Using SCAN moving towards 100, the head services 50, 60, 70, 80 and 90, reverses at 90, and then services
45, 20 and 10, giving a total head movement of (90 - 50) + (90 - 10) = 40 + 80 = 120 tracks.
The additional distance traversed by SSTF compared to SCAN is 130 - 120 = 10 tracks, so the answer is (C).
PRACTICE PROBLEM BASED ON FCFS DISK SCHEDULING ALGORITHM-
Problem-
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The
FCFS scheduling algorithm is used. The head is initially at cylinder number 53. The cylinders are numbered
from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests is
_______.
Solution-
Total head movements incurred while servicing these requests
= (98 – 53) + (183 – 98) + (183 – 41) + (122 – 41) + (122 – 14) + (124 – 14) + (124 – 65) + (67 – 65)
= 45 + 85 + 142 + 81 + 108 + 110 + 59 + 2
=632
Problem-01:
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The
SSTF scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.
Solution-
Total head movements incurred while servicing these requests
= (65 – 53) + (67 – 65) + (67 – 41) + (41 – 14) + (98 – 14) + (122 – 98) + (124 – 122) + (183 – 124)
= 12 + 2 + 26 + 27 + 84 + 24 + 2 + 59
= 236
Problem-02:
Consider a disk system with 100 cylinders. The requests to access the cylinders occur in following
sequence-
4, 34, 10, 7, 19, 73, 2, 15, 6, 20
Assuming that the head is currently at cylinder 50, what is the time taken to satisfy all requests if it takes 1
ms to move from one cylinder to adjacent one and shortest seek time first policy is used?
1. 95 ms
2. 119 ms
3. 233 ms
4. 276 ms
Solution-
Total head movements incurred while servicing these requests
= (50 – 34) + (34 – 20) + (20 – 19) + (19 – 15) + (15 – 10) + (10 – 7) + (7 – 6) + (6 – 4) + (4 – 2) + (73 – 2)
= 16 + 14 + 1 + 4 + 5 + 3 + 1 + 2 + 2 + 71
= 119
Time taken for one head movement = 1 msec. So,
Time taken for 119 head movements
= 119 x 1 msec
= 119 msec
Thus option B is correct.
Problem-
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The
SCAN scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.
Solution-
Total head movements incurred while servicing these requests
= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 – 183) + (199 – 41) + (41 – 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 158 + 27
= 331
Alternatively,
Total head movements incurred while servicing these requests
= (199 – 53) + (199 – 14)
= 146 + 185
=331
Problem-
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The C-
SCAN scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.
Solution-
Total head movements incurred while servicing these requests
= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 – 183) + (199 – 0) + (14 –
0) + (41 – 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 199 + 14 + 27
= 386
Alternatively,
Total head movements incurred while servicing these requests
= (199 – 53) + (199 – 0) + (41 – 0)
= 146 + 199 + 41
= 386
Problem-
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The
LOOK scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.
Solution-
Total head movements incurred while servicing these requests
= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (183 – 41) + (41 – 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 142 + 27
= 299
Alternatively,
Total head movements incurred while servicing these requests
= (183 – 53) + (183 – 14)
= 130 + 169
= 299
Problem-01:
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The C-
LOOK scheduling algorithm is used. The head is initially at cylinder number 53 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.
Solution-
Total head movements incurred while servicing these requests
= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (183 – 14) + (41 – 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 169 + 27
= 326
Alternatively,
Total head movements incurred while servicing these requests
= (183 – 53) + (183 – 14) + (41 – 14)
= 130 + 169 + 27
= 326
Problem-02:
Consider a disk queue with requests for I/O to blocks on cylinders 47, 38, 121, 191, 87, 11, 92, 10. The C-
LOOK scheduling algorithm is used. The head is initially at cylinder number 63 moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement
(in number of cylinders) incurred while servicing these requests is _______.
Solution-
Total head movements incurred while servicing these requests
= (87 – 63) + (92 – 87) + (121 – 92) + (191 – 121) + (191 – 10) + (11 – 10) + (38 – 11) + (47 – 38)
= 24 + 5 + 29 + 70 + 181 + 1 + 27 + 9
= 346
Alternatively,
Total head movements incurred while servicing these requests
= (191 – 63) + (191 – 10) + (47 – 10)
= 128 + 181 + 37
= 346
DISK MANAGEMENT
The operating system is responsible for disk management. Some of its activities are discussed below.
Disk Formatting
Disk formatting is of two types.
a) Physical formatting or low level formatting.
b) Logical Formatting
Physical Formatting
A disk must be formatted before data can be stored on it.
The disk must be divided into sectors that the disk controller can read and write.
Low-level formatting fills the disk with a special data structure for each sector.
The data structure consists of three fields: a header, a data area, and a trailer.
The header and trailer contain information used by the disk controller, such as the sector number and an
error-correcting code (ECC).
When writing data to the sector, the ECC is updated.
When reading data from the sector, the ECC is recalculated and compared to detect errors.
Low-level formatting is done at the factory.
Logical Formatting
After the disk is partitioned, logical formatting is used.
The operating system stores the initial file-system data structures onto the disk.
Boot Block
When a computer system is powered up or rebooted, a program in read-only memory executes.
A diagnostic check is done first.
The stage-0 boot program in ROM is then executed.
This boot program reads the first sector from the boot device; that boot sector contains a stage-1 boot program.
The boot sector may not contain a complete boot program by itself.
On a PC booting from a hard disk, the boot sector also contains a partition table.
The code in the boot ROM instructs the disk controller to read the boot blocks into memory and then starts
executing that code.
The full bootstrap program is more sophisticated than the bootstrap loader in the boot ROM.
Swap-Space Use
The operating system needs to release sufficient main memory to bring in a process that is ready to execute.
The operating system uses swap space in various ways. Paging systems may simply store pages that have
been pushed out of main memory. The UNIX operating system allows the use of multiple swap spaces, which
are usually put on separate disks, so the load placed on the I/O system by paging and swapping can be spread
over the system's I/O devices.
If the swap space is simply a large file within the file system, normal file-system routines can be used to
create it, name it, and allocate its space. This is easy to implement but also inefficient: external fragmentation
can greatly increase swapping times. Caching is used to improve system performance: blocks of
information are cached in physical memory, and special tools are used to allocate physically contiguous
blocks for the swap file.
Swap space can be created in a separate disk partition. No file system or directory structure is placed on
this space. A separate swap space storage manager is used to allocate and deallocate the blocks. This
manager uses algorithms optimized for speed. Internal fragmentation may increase. Some operating
systems are flexible and can swap both in raw partitions and in file system space.
STABLE STORAGE
Some applications, such as the write-ahead log, require the availability of stable storage.
By definition, information residing in stable storage is never lost.
To implement such storage, we need to replicate the required information on multiple storage devices
(usually disks) with independent failure modes.
We also need to coordinate the writing of updates in a way that guarantees that a failure during an update
will not leave all the copies in a damaged state and that, when we are recovering from failure, we can force
all copies to a consistent and correct value, even if another failure occurs during the recovery.
DISK RELIABILITY
Good performance means high speed; another important aspect of performance is reliability.
A fixed disk drive is likely to be more reliable than a removable disk or tape drive.
An optical cartridge is likely to be more reliable than a magnetic disk or tape.
A head crash in a fixed hard disk generally destroys the data, whereas the failure of a tape drive or optical
disk drive often leaves the data cartridge unharmed.
The data transfer time to or from the disk depends on the rotation speed of the disk in the same fashion as before:
T = b / (rN)
Example 2: If the overhead of formatting a disk is 96 bytes for a 4000-byte sector, compute the unformatted
capacity of the disk for the following parameters:
Number of surfaces = 8
Outer diameter of the disk = 12 cm
Inner diameter of the disk = 4 cm
Inter-track spacing = 0.1 mm
Number of sectors per track = 20
If the disk above is rotating at 3600 rpm, determine the effective data transfer rate, which is defined as the
number of bytes transferred per second between the disk and memory.
Solution:
Number of tracks per surface = (outer diameter - inner diameter) / inter-track spacing
= (12 - 4) cm / 0.01 cm
= 800
Unformatted capacity per surface = number of tracks * number of sectors per track * number of bytes per sector
= 800 * 20 * (4000 + 96)
= 65,536,000 bytes
At 3600 rpm the disk makes 60 revolutions per second. In one revolution, one track of data can be read from
each of the 8 surfaces, i.e. 8 * 20 * 4000 = 640,000 bytes of user data, so the effective data transfer rate
= 640,000 * 60
= 38,400,000 bytes/sec.
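The arithmetic can be replayed with a short program; the sketch below simply follows the worked solution's interpretation (track count taken from the diameter difference, and a transfer rate that assumes one track of data is read from each of the 8 surfaces per revolution).

    /* Sketch reproducing the numbers in Example 2 from its stated parameters. */
    #include <stdio.h>

    int main(void)
    {
        double outer_cm = 12.0, inner_cm = 4.0, spacing_cm = 0.01;  /* 0.1 mm  */
        int surfaces = 8, sectors_per_track = 20;
        int data_bytes = 4000, overhead = 96;
        double rpm = 3600.0;

        double tracks = (outer_cm - inner_cm) / spacing_cm;          /* 800    */
        double unformatted = tracks * sectors_per_track * (data_bytes + overhead);
        double rev_per_sec = rpm / 60.0;                             /* 60     */
        double bytes_per_rev = (double)surfaces * sectors_per_track * data_bytes;
        double rate = bytes_per_rev * rev_per_sec;                   /* B/sec  */

        printf("tracks per surface      = %.0f\n", tracks);            /* 800        */
        printf("unformatted capacity    = %.0f bytes\n", unformatted); /* 65536000   */
        printf("effective transfer rate = %.0f bytes/sec\n", rate);    /* 38400000   */
        return 0;
    }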
Example 3: Calculate the total time required to read 35 sectors on a two-sided floppy disk. Assume
that each track has 8 sectors and the track-to-track step time is 8 milliseconds. The first sector to be
read is sector 3 on track 10. Assume that the diskette is soft sectored and the controller has a
1-sector buffer. The diskette spins at 300 rpm and, initially, the head is on track 10.
Solution:
To read 35 sectors starting from sector 3 on track 10, we have to read track 10 side 0 (from sector 3 onwards), track 10
side 1, track 11 side 0, track 11 side 1, and track 12 side 0; the numbers of sectors read from these track sides are
6, 8, 8, 8, and 5 respectively, which totals 35.