
Unit 5

I/O Systems: Overview


One of the important jobs of an Operating System is to manage various I/O devices, including
the mouse, keyboard, touchpad, disk drives, display adapters, USB devices, bit-mapped
screens, LEDs, analog-to-digital converters, on/off switches, network connections, audio I/O,
printers, etc.
An I/O system is required to take an application I/O request, send it to the physical
device, and then deliver whatever response comes back from the device to the
application.
The Operating System dedicates a portion of its code to managing input/output in order to
improve the reliability and performance of the system. A computer system contains one or
more CPUs and multiple device controllers connected to a common bus. Each controller is
managed by a kernel module called a device driver. Device drivers give the rest of the system
a uniform interface to the I/O devices, easing communication and providing access to shared
memory.

I/O Hardware
A computer system operates on multiple devices. An important service provided by an OS is
I/O management. Some common I/O devices are mouse, keyboard, touchpad, USB devices,
Bit-mapped screen, LED, On/off switch, network connections, audio I/O, printers etc.
An I/O system takes an I/O request and sends it to the physical device. It then sends the
response from the device to the application. There are two types of I/O devices:
 Block devices: The driver communicates with the device by exchanging whole blocks
of data (e.g. disk drives).
 Character devices: The driver communicates with the device by sending and receiving
single characters (e.g. keyboards, serial ports).

PROF. BHUSHAN L RATHI (SARASWATI COLLEGE, SHEGAON) 1


Categories of I/O devices
Following are the three categories of I/O devices:
1. Human-readable: These are suitable for communicating with the user. For example,
mouse, printer, keyboard, etc.
2. Machine-readable: These are suitable for communicating with electronic equipment. For
example, disk and tape drives, sensors, etc.
3. Communication: Suitable for communicating with remote devices. For example, digital
line drivers, modems, etc.
There are three ways for a CPU to communicate with an I/O device. These are:
 Special Instruction I/O
This type uses CPU instructions specially tailored for controlling I/O devices. These
instructions help send data to an I/O device and read data from an I/O device.
 Memory-mapped I/O
In this type, memory and I/O devices share the same address space: the device's registers and
buffers are mapped to a range of memory addresses, so the CPU controls the device with the
same instructions it uses to access memory. Every instruction that can access system memory
can therefore also manipulate an I/O device, which makes memory-mapped I/O well suited
to high-speed devices.
 Direct Memory Access (DMA)
Slow I/O devices tend to generate an interrupt to the main CPU after the transfer of each
byte. If fast devices did the same, the CPU would spend most of its time handling these
interrupts. To avoid this situation, a computer uses DMA hardware.
When the CPU grants the I/O module the authority to read/write in the system memory, this
is called direct memory access. The DMA module controls the exchange of data between the
main memory and the I/O device without any interference from the CPU.
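The interrupt-rate difference that motivates DMA can be illustrated with a toy calculation; this is a sketch of the counting argument, not real driver code:

```python
# Toy model: compare the number of CPU interrupts raised by per-byte
# programmed I/O versus DMA, where the DMA controller interrupts the
# CPU only once per completed block transfer.

def interrupts_per_byte(num_bytes: int) -> int:
    """Slow device: one interrupt for every byte transferred."""
    return num_bytes

def interrupts_with_dma(num_bytes: int, block_size: int) -> int:
    """DMA: one interrupt per completed block (ceiling division)."""
    return -(-num_bytes // block_size)

transfer = 4096  # bytes
print(interrupts_per_byte(transfer))        # 4096 interrupts
print(interrupts_with_dma(transfer, 512))   # 8 interrupts
```

For a 4 KB transfer with 512-byte blocks, DMA cuts the interrupt count from 4096 to 8, which is why fast devices rely on it.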

Application I/O Interface


Application I/O Interface represents the structuring techniques and interfaces for the
operating system to enable I/O devices to be treated in a standard, uniform way. The actual
differences lie in kernel-level modules called device drivers, which are custom-tailored to
their devices but present one of the standard interfaces to applications. The purpose
of the device-driver layer is to hide the differences among device controllers from the I/O
subsystem of the kernel, such as the I/O system calls.



Following are the characteristics of I/O interfaces with respect to devices:
• Character-stream / block - A character-stream device transfers bytes one by one,
whereas a block device transfers a complete block of bytes as a unit.
• Sequential / random-access - A sequential device transfers data in a fixed order
determined by the device, whereas a random-access device can be instructed to seek to any
of the available data storage locations.
• Synchronous / asynchronous - A synchronous device performs data transfers with
known response times, whereas an asynchronous device shows irregular or unpredictable
response times.
• Sharable / dedicated - A sharable device can be used concurrently by several processes
or threads, whereas a dedicated device cannot be shared and serves only one process at a time.
• Speed of operation - Device speeds may range from a few bytes per second to a few
gigabytes per second.
• Read-write, read-only, or write-only - Some devices perform both input and output,
while others support only one data direction (read-only or write-only).

Kernel I/O Subsystem


The kernel provides many I/O services, several of which rely on the hardware and
device-driver infrastructure: caching, scheduling, spooling, device reservation, and error
handling.
 Scheduling
The term "scheduling" refers to determining a good order in which to perform a series of I/O
requests. Scheduling can increase the system's overall performance, distribute device access
fairly among all processes, and reduce the average wait, response, and turnaround times for
I/O to complete.
When an application makes a blocking I/O system call, the request is placed in the wait queue
for that device, which is maintained by the operating system.
 Buffering
A buffer is a region of main memory used to temporarily hold data being transferred between
two devices or between a device and an application. Buffering helps cope with speed
mismatches between the two parties and with mismatches in their transfer sizes.



To preserve "copy semantics", data is first copied from application memory into kernel
memory, and the kernel copy is then sent to the device. This prevents the application from
altering the contents of the buffer while the write is in progress.
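Copy semantics can be sketched in a few lines; this is a simplified model, not actual kernel code:

```python
# Simplified model of "copy semantics": the kernel copies the
# application's buffer before the (simulated) device write, so later
# changes by the application do not alter the data being written.

def kernel_write(app_buffer: bytearray) -> bytes:
    kernel_buffer = bytes(app_buffer)  # copy into kernel memory
    return kernel_buffer               # this copy is what reaches the device

data = bytearray(b"hello")
snapshot = kernel_write(data)
data[0] = ord("H")                     # application mutates its buffer afterwards
print(snapshot)                        # b'hello' -- the write is unaffected
```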
 Caching
It involves storing a replica of data in a location that is easier to reach than the original.
When you request a file from a Web page, for example, it is stored on your hard disk in a
cache subdirectory under your browser's directory. When you return to a page you've
recently visited, the browser can retrieve files from the cache rather than the original server,
saving you time and reducing network traffic.
The distinction between a cache and a buffer is that a cache holds a copy of a data item that
also exists elsewhere, whereas a buffer may hold the only existing copy of the item.
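The idea can be sketched with a minimal cache; the URL and the fetch function below are hypothetical stand-ins for a slow network request:

```python
# Minimal cache sketch: keep a copy of fetched items in a dict so
# repeated requests are served locally instead of "from the server".

fetch_count = 0

def fetch_from_server(url: str) -> str:
    global fetch_count
    fetch_count += 1                  # count how often the "server" is hit
    return f"contents of {url}"       # stand-in for a slow network fetch

cache: dict[str, str] = {}

def get(url: str) -> str:
    if url not in cache:
        cache[url] = fetch_from_server(url)  # miss: fetch and keep a copy
    return cache[url]                        # hit: served from the cache

get("http://example.com/a")
get("http://example.com/a")   # second request hits the cache
print(fetch_count)            # 1 -- the server was contacted only once
```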
 Spooling
A spool is a buffer that holds jobs for a device until the device is ready to accept them.
Spooling treats the disk as a large buffer that can hold jobs until the output device is ready to
take them. A spool also retains output for a device that cannot handle interleaved data
streams and can serve only one request at a time.
Spooling allows a user to view the queued data streams and, if desired, delete them before
they are processed. Printing is the classic example: jobs from several processes are spooled
and sent to the printer one at a time.
 Error Handling
Protected memory operating systems can safeguard against a wide range of hardware and
application faults, ensuring that each tiny mechanical glitch does not result in a complete
system failure.
Devices and I/O transfers can fail for various reasons: transient causes, such as a network
becoming overloaded, and permanent ones, such as a disk controller failing.
 I/O Protection
System calls are required for I/O. User programs may, accidentally or deliberately, attempt
to disrupt normal operation by issuing illegal I/O instructions. To prevent this, all privileged
I/O instructions are restricted: user programs must perform I/O through system calls, and
both memory-mapped I/O regions and I/O ports must be protected.



Transforming I/O to Hardware Operations
Users request data using file names, which must ultimately be mapped to specific blocks of
data from a specific device managed by a specific device driver.
DOS uses the colon separator to specify a particular device ( e.g. C:, LPT:, etc. )
UNIX uses a mount table to map filename prefixes ( e.g. /usr ) to specific mounted devices.
Where multiple entries in the mount table match different prefixes of the filename the one
that matches the longest prefix is chosen. ( e.g. /usr/home instead of /usr where both exist
in the mount table and both match the desired file. )
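The longest-prefix rule can be sketched as follows; the mount table entries here are hypothetical:

```python
# Sketch of longest-prefix matching against a UNIX-style mount table:
# the mount point matching the longest prefix of the path wins.

mount_table = {
    "/": "sda1",
    "/usr": "sda2",
    "/usr/home": "sda3",
}

def resolve_device(path: str) -> str:
    # A mount point matches if the path equals it or lies below it.
    candidates = [m for m in mount_table
                  if path == m or path.startswith(m.rstrip("/") + "/")]
    return mount_table[max(candidates, key=len)]  # longest prefix wins

print(resolve_device("/usr/home/alice/notes.txt"))  # sda3, not sda2
print(resolve_device("/etc/passwd"))                # sda1
```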
UNIX uses special device files, usually located in /dev, to represent and access physical
devices directly.
Each device file has a major and minor number associated with it, stored and displayed
where the file size would normally go.
The major number is an index into a table of device drivers, and indicates which device driver
handles this device. ( E.g. the disk drive handler. )
The minor number is a parameter passed to the device driver, and indicates which specific
device is to be accessed, out of the many which may be handled by a particular device driver.
( e.g. a particular disk drive or partition. )
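On Unix-like systems, Python's os module exposes this split directly through os.makedev, os.major, and os.minor; the major number 8 below is only an illustrative value:

```python
import os

# Pack a major/minor pair into a device number, then take it apart.
dev = os.makedev(8, 1)   # e.g. major 8, minor 1 (values chosen for illustration)
print(os.major(dev))     # 8  -> selects the device driver
print(os.minor(dev))     # 1  -> selects the device handled by that driver
```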

Disk Scheduling
A disk scheduling algorithm manages the input and output requests arriving for the disk in
a system. Executing any process requires memory, and accessing data on a hard disk is very
slow, because the hard disk is the slowest part of the computer. Several methods exist to
schedule these requests efficiently.
In our system, multiple requests arrive at the disk simultaneously and form a queue. This
queue increases the waiting time of requests: each request waits until the one currently
being processed completes. Disk scheduling exists to manage this queue and the timing of
these requests.
There are many terms that we need to know for a better understanding of Disk Scheduling.
 Seek Time: Data may be stored on various blocks of the disk. To access data for a
request, the disk arm moves to find the required block. The time taken by the arm in
this search is known as "Seek Time".



 Rotational Latency: The required data block must rotate to the position from which
the read/write head can fetch it. The time taken by this movement is known as
"Rotational Latency". This latency should be as small as possible, so an algorithm
that incurs less rotational delay is considered better.
 Transfer Time: When a request is made, it takes some time to fetch the data and
provide it as output. This time is known as "Transfer Time".
 Disk Access Time: The total time taken by all the above processes. Disk access
time = seek time + rotational latency + transfer time.
 Disk Response Time: The disk processes one request at a time, so other requests
wait in a queue while the ongoing request finishes. The average of this waiting time
is called "Disk Response Time".
 Starvation: Starvation is defined as the situation in which a low-priority job keeps
waiting for a long time to be executed. The system keeps sending high-priority jobs
to the disk scheduler to execute first.
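As a baseline, total head movement under first-come-first-served (FCFS) scheduling can be computed directly; the starting track and request queue below are hypothetical:

```python
# Sketch: total head movement under FCFS scheduling, the baseline that
# smarter disk scheduling algorithms (SSTF, SCAN, ...) try to beat.

def fcfs_head_movement(start: int, requests: list[int]) -> int:
    total, pos = 0, start
    for track in requests:          # serve requests strictly in arrival order
        total += abs(track - pos)   # seek distance for this request
        pos = track
    return total

queue = [98, 183, 37, 122, 14]      # hypothetical track numbers
print(fcfs_head_movement(53, queue))  # 469 tracks traversed in total
```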

Disk Management
Disk Management is an important functionality provided by the Operating System that can
be used to create, delete, and format disk partitions, and much more. It enables users to view
and manage the different disks and to create, delete, and shrink the partitions associated
with the disk drives. Some of its other functions are:
 Disk Management helps to format disk drives.
 Disk Management enables the user to rename a disk.
 Disk Management also enables the user to change the file system of a disk drive.
 Using Disk Management, the user can assign a drive letter to a disk, such as the C:
or D: drive found in the Windows file system.

Swap – Space Management


Swapping is a memory management technique used in multi-programming to increase the
number of processes sharing the CPU. The area on the disk where the swapped-out
processes are stored is called swap space.
Swap-space management is another low-level task of the operating system. Because
swapping uses disk space, it can significantly decrease system performance, so the goal of a
swap-space implementation is to give virtual memory the best possible throughput.



Systems that implement swapping may use swap space to hold the entire process image,
including its code and data segments. Paging systems may simply store pages that have
been pushed out of main memory.

RAID Structure
RAID, or “Redundant Arrays of Independent Disks”, is a technique that uses a combination
of multiple disks instead of a single disk for increased performance, data redundancy, or
both. The term was coined by David Patterson, Garth A. Gibson, and Randy Katz at the
University of California, Berkeley in 1987.
RAID should not be considered a replacement for backing up your data. If critical data is
going onto a RAID array, it should be backed up to another physical drive or logical set of
drives.

The following are terms that are normally used in connection with RAID:

 Striping: data is split between multiple disks.


 Mirroring: data is mirrored between multiple disks.
 Parity: also referred to as a checksum. Parity is a calculated value used to mathematically
rebuild lost data.
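The rebuild property of parity can be sketched with XOR, the calculation used by parity-based RAID levels: XOR-ing the surviving members recovers the lost one.

```python
# Sketch of parity-based rebuild: with XOR parity across the data
# disks, any single lost disk can be reconstructed from the others.

disk1 = 0b10110100              # one byte of data on the first disk
disk2 = 0b01101001              # the corresponding byte on the second disk
parity = disk1 ^ disk2          # stored on the parity disk

# disk1 is lost; rebuild it from the surviving disk and the parity:
rebuilt = disk2 ^ parity
print(rebuilt == disk1)         # True
```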
Different RAID levels exist for different application requirements.

RAID 0 (Striped disks)
Operation: Data is split evenly between two or more disks.
Advantages: Large size and the fastest speed.
Disadvantages: No redundancy.
Recovery: If one or more drives fail, the whole array fails.

RAID 1 (Mirrored disks)
Operation: Two or more drives have identical data on them.
Advantages: A single drive failure will not result in data loss.
Disadvantages: Speed and size are limited by the slowest and smallest disk.
Recovery: Only one surviving drive is needed for recovery.

RAID 3 (Striped set with dedicated parity)
Operation: Data is split evenly between two or more disks, plus a dedicated drive for parity storage.
Advantages: High speeds for sequential read/write operations.
Disadvantages: Poor performance for multiple simultaneous requests.
Recovery: A single failed drive can be rebuilt from the others.

RAID 5 (Striped disks with distributed parity)
Operation: Data is split evenly between three or more disks; parity is distributed across the disks.
Advantages: Large size, fast speed, and redundancy.
Disadvantages: The total array size is reduced by the parity overhead.
Recovery: A single failed drive can be rebuilt from the others.

RAID 10 (1+0; striped set of mirrored subsets)
Operation: Four or more drives are made into two mirrors that are striped.
Advantages: Larger size and higher speed than RAID 1, and more redundancy than RAID 0.
Disadvantages: No parity.
Recovery: Only one drive in each mirrored set can fail.

