Unit 2 - Operating System


Subject Name: Operating System

Subject Code: CS-5002


Semester: 5th
Downloaded from be.rgpvnotes.in

UNIT-II

Operating System - File System

File: A file is a named collection of related information that is recorded on secondary storage
such as magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits,
bytes, lines or records whose meaning is defined by the file's creator and user.

File Structure

A file structure should follow a format that the operating system can understand.

 A file has a certain defined structure according to its type.
 A text file is a sequence of characters organized into lines.
 A source file is a sequence of procedures and functions.
 An object file is a sequence of bytes organized into blocks that are understandable by the
machine.
 When an operating system defines different file structures, it also contains the code to
support these file structures. UNIX and MS-DOS support a minimum number of file structures.

File Type

File type refers to the ability of the operating system to distinguish different types of files, such
as text files, source files and binary files. Many operating systems support many types of files.
Operating systems like MS-DOS and UNIX have the following types of files:

Ordinary files

 These are the files that contain user information.
 These may have text, databases or executable programs.
 The user can apply various operations on such files, like add, modify, delete, or even remove
the entire file.

Directory files

 These files contain lists of file names and other information related to these files.

Special files

 These files are also known as device files.
 These files represent physical devices like disks, terminals, printers, networks, tape drives etc.
 These files are of two types:
 Character special files − data is handled character by character, as in the case of terminals
or printers.
 Block special files − data is handled in blocks, as in the case of disks and tapes.

Page no: 1 Follow us on facebook to get real-time updates from RGPV



File Access Mechanisms

File access mechanism refers to the manner in which the records of a file may be accessed.
There are several ways to access files:

 Sequential access
 Direct/Random access
 Indexed sequential access
Sequential access

Sequential access is that in which the records are accessed in some sequence, i.e., the
information in the file is processed in order, one record after the other. This access method is
the most primitive one. Example: compilers usually access files in this fashion.

Direct/Random access

 Random access file organization provides accessing the records directly.
 Each record has its own address on the file, with the help of which it can be directly
accessed for reading or writing.
 The records need not be in any sequence within the file, and they need not be in adjacent
locations on the storage medium.


Indexed sequential access

 This mechanism is built on top of sequential access.
 An index is created for each file which contains pointers to various blocks.
 The index is searched sequentially and its pointer is used to access the file directly.
Space Allocation

Files are allocated disk space by the operating system. Operating systems deploy the following
three main ways to allocate disk space to files.

 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
Contiguous Allocation

 Each file occupies a contiguous address space on disk.


 Assigned disk address is in linear order.
 Easy to implement.
 External fragmentation is a major issue with this type of allocation technique.
Linked Allocation

 Each file carries a list of links to disk blocks.


 Directory contains link / pointer to first block of a file.


 No external fragmentation
 Effectively used in sequential access file.
 Inefficient in case of direct access file.
Indexed Allocation

 Provides solutions to problems of contiguous and linked allocation.


 An index block is created having all pointers to files.
 Each file has its own index block which stores the addresses of disk space occupied by the
file.
 The directory contains the addresses of the index blocks of files.
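The lookup path described above can be sketched in a few lines of Python. The file name, block numbers and index-block layout below are invented purely for illustration:

```python
# Sketch of indexed allocation. The directory maps a file name to its index
# block, and the index block lists the disk blocks holding the file's data.
directory = {"report.txt": 7}              # file name -> index block number
index_blocks = {7: [9, 16, 1, 10, 25]}     # index block -> data block numbers

def block_for_offset(name, logical_block):
    """Translate a file-relative block number to a disk block via the index block."""
    return index_blocks[directory[name]][logical_block]

print(block_for_offset("report.txt", 0))  # 9: first data block of the file
print(block_for_offset("report.txt", 2))  # 1: third data block
```

Note that two lookups (directory, then index block) stand between a file offset and a disk address, which is the cost indexed allocation pays for supporting direct access without external fragmentation.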

Directory systems
A directory is a location for storing files on your computer. Directories are found in
a hierarchical file system, such as Linux, MS-DOS, OS/2, and Unix.

A directory is a collection of nodes containing information about all files.

A directory system can be classified into single-level and hierarchical directory systems:

Single-level directory system: In this type of directory system, there is a single root directory
which holds all files. It has a simple architecture and there are no subdirectories. An advantage
of a single-level directory system is that it is easy to find a file in the directory. This type of
directory system is used in cameras and phones.

Hierarchical directory system: In a hierarchical directory system, files are grouped together to
form subdirectories. At the top of the hierarchy is the root directory, and below it are
subdirectories which contain files. An advantage of a hierarchical directory system is that users
can be given access to a subdirectory rather than the entire directory. It provides a better
structure to the file system, and managing millions of files is easy with this architecture.
Personal computers use a hierarchical directory system for managing files.


Disk & Drum Scheduling

An I/O request issues a system call to the OS. If the desired disk drive or controller is available,
the request is served immediately. If it is busy, the new request for service will be placed in the
queue of pending requests. When one request is completed, the OS has to choose which
pending request to service next.

FCFS Scheduling

Simplest: perform operations in the order requested. There is no reordering of the work queue
and no starvation (every request is serviced), but it doesn't provide the fastest service.
Example: a disk queue with requests for I/O to blocks on cylinders 23, 89, 132, 42, 187, with the
disk head initially at cylinder 100.

FCFS order: 23, 89, 132, 42, 187


If the
requests for cylinders 23 and 42 could be serviced together, total head movement could be
decreased substantially.
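The head movement in the example can be totalled with a short sketch (only the cylinder numbers matter):

```python
def fcfs_seek_distance(head, requests):
    """Total head movement when requests are served strictly in arrival order."""
    total = 0
    for cylinder in requests:
        total += abs(head - cylinder)
        head = cylinder
    return total

# The example queue 23, 89, 132, 42, 187 with the head initially at cylinder 100:
# 77 + 66 + 43 + 90 + 145 = 421 cylinders of movement in total.
print(fcfs_seek_distance(100, [23, 89, 132, 42, 187]))  # 421
```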

SSTF Scheduling

Like SJF, select the disk I/O request that requires the least movement of the disk arm from its
current position, regardless of direction. This reduces total seek time compared to FCFS.
Disadvantages: starvation is possible (the arm may stay in one area of the disk if it is very busy);
switching directions slows things down; and it is not optimal overall.
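A minimal SSTF simulation makes the comparison with FCFS concrete; on the same example queue it moves the head 273 cylinders instead of FCFS's 421:

```python
def sstf_schedule(head, requests):
    """At each step, serve the pending request closest to the current head position."""
    pending = list(requests)
    order, total = [], 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
        order.append(nearest)
    return order, total

print(sstf_schedule(100, [23, 89, 132, 42, 187]))
# ([89, 132, 187, 42, 23], 273)
```

Note that the far-away request for cylinder 23 is served last; if nearby requests kept arriving, it could starve indefinitely, which is exactly the weakness listed above.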


SCAN

Go from the outside to the inside of the disk servicing requests, and then back from the inside
to the outside servicing requests. Sometimes called the elevator algorithm. It reduces variance
compared to SSTF. If a request arrives in the queue just in front of the head, it is serviced almost
immediately; if it arrives just behind the head, it must wait until the arm reaches the end and
reverses direction.
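The elevator behaviour can be sketched as follows. The cylinder bounds 0-199 are an assumption for illustration, not something the notes specify:

```python
def scan_seek_distance(head, requests, direction="down", low=0, high=199):
    """SCAN (elevator): sweep to one end of the disk servicing requests,
    then reverse. `low`/`high` are assumed disk bounds."""
    below = sorted(c for c in requests if c <= head)
    above = sorted(c for c in requests if c > head)
    if direction == "down":
        order = below[::-1] + above
        # Travel all the way to cylinder `low`, then out to the farthest request.
        total = (head - low) + ((max(above) - low) if above else 0)
    else:
        order = above + below[::-1]
        total = (high - head) + ((high - min(below)) if below else 0)
    return order, total

print(scan_seek_distance(100, [23, 89, 132, 42, 187], "down"))
# ([89, 42, 23, 132, 187], 287)
```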


C-SCAN

Circular SCAN moves inwards servicing requests until it reaches the innermost cylinder; it then
jumps to the outermost cylinder of the disk without servicing any requests. Why C-SCAN? After
a sweep, few requests remain immediately in front of the head, since those cylinders have
recently been serviced. Hence C-SCAN provides a more uniform wait time.
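A sketch of C-SCAN on the running example. Whether the fly-back jump counts as head movement varies between treatments; here it is counted, and the 0-199 bounds are again an assumption:

```python
def cscan_seek_distance(head, requests, low=0, high=199):
    """C-SCAN: sweep inwards to `low` servicing requests, jump to `high`
    without servicing, then sweep inwards again. Bounds are assumptions."""
    below = sorted((c for c in requests if c <= head), reverse=True)
    above = sorted((c for c in requests if c > head), reverse=True)
    order = below + above
    total = (head - low) + (high - low)          # sweep down, then the jump
    if above:
        total += high - min(above)               # finish the second sweep
    return order, total

print(cscan_seek_distance(100, [23, 89, 132, 42, 187]))
# ([89, 42, 23, 187, 132], 366)
```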


LOOK is like SCAN but stops moving inwards (or outwards) when no more requests exist in that
direction.


Compared to SCAN, LOOK saves going from cylinder 23 down to 0 and then back. It is the most
efficient algorithm for this sequence of requests.
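The saving just mentioned can be verified with a LOOK sketch; on the running example LOOK moves 241 cylinders against SCAN's 287, the difference being exactly the 23 to 0 and back round trip:

```python
def look_seek_distance(head, requests, direction="down"):
    """LOOK: like SCAN, but the arm reverses at the last pending request
    in each direction instead of travelling to the edge of the disk."""
    below = sorted((c for c in requests if c <= head), reverse=True)
    above = sorted(c for c in requests if c > head)
    if direction == "down":
        order = below + above
        turn = min(below) if below else head     # the arm reverses here
        total = (head - turn) + ((max(above) - turn) if above else 0)
    else:
        order = above + below
        turn = max(above) if above else head
        total = (turn - head) + ((turn - min(below)) if below else 0)
    return order, total

print(look_seek_distance(100, [23, 89, 132, 42, 187], "down"))
# ([89, 42, 23, 132, 187], 241)
```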

File System

Free space management; protection, organization, sharing & implementation issues.

I/O Devices

Organization, I/O buffering, I/O hardware, kernel I/O subsystem, transforming I/O requests to
hardware operations. Device driver: path management, submodule, procedure, scheduler,
handler, interrupt service routine.

Free Space Management

Since disk space is limited, we need to reuse the space from deleted files for new files, if
possible. (Write-once optical disks allow only one write to any given sector, and thus such reuse
is not physically possible there.) To keep track of free disk space, the system maintains a free-space
list. The free-space list records all free disk blocks, that is, those not allocated to some file or
directory. To create a file, we search the free-space list for the required amount of space and allocate that


space to the new file. This space is then removed from the free-space list. When a file is
deleted, its disk space is added to the free-space list.

Types of free space management:

 Bit vector
 Linked list
 Grouping
 Counting

Bit Vector

Frequently, the free-space list is implemented as a bit map or bit vector. Each block is
represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For
example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and 27 are
free and the rest of the blocks are allocated. The free-space bit map would be
001111001111110001100000011100000 ...
The main advantage of this approach is its relative simplicity and its efficiency in finding the first
free block or n consecutive free blocks on the disk.
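A tiny sketch using the bit map from the example shows how cheap the first-free-block search is:

```python
def first_free_block(bitmap):
    """Index of the first free block (bit == '1'), or -1 if the disk is full."""
    for i, bit in enumerate(bitmap):
        if bit == "1":
            return i
    return -1

def free_blocks(bitmap):
    """All free block numbers encoded in the bit map."""
    return [i for i, bit in enumerate(bitmap) if bit == "1"]

bitmap = "001111001111110001100000011100000"  # the example from the text
print(first_free_block(bitmap))  # 2
print(free_blocks(bitmap))
# [2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27]
```

Real implementations scan a machine word at a time rather than bit by bit, but the idea is the same.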

Linked List

Another approach to free-space management is to link together all the free disk blocks, keeping
a pointer to the first free block in a special location on the disk and caching it in memory. This
first block contains a pointer to the next free disk block, and so on. Recall above example, in
which blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and 27 were free and the rest of the
blocks were allocated. In this situation, we would keep a pointer to block 2 as the first free
block. Block 2 would contain a pointer to block 3, which would point to block 4, which would
point to block 5, which would point to block 8, and so on as shown in figure below. This scheme
is not efficient; to traverse the list, we must read each block, which requires substantial I/O
time. Fortunately, however, traversing the free list is not a frequent action.
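The chain described above can be modelled with a dictionary standing in for the on-disk pointers; each step of the walk corresponds to one disk read, which is what makes traversal slow:

```python
# The free blocks from the running example, each storing the number of the
# next free block (None marks the end of the list).
free_order = [2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27]
next_free = dict(zip(free_order, free_order[1:] + [None]))

def traverse_free_list(head):
    """Walk the free list from the first free block; one hop == one disk read."""
    blocks = []
    while head is not None:
        blocks.append(head)
        head = next_free[head]
    return blocks

print(traverse_free_list(2))
# [2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27]
```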


Grouping

A modification of the free-list approach stores the addresses of n free blocks in the first free
block. The first n-1 of these blocks are actually free. The last block contains the addresses of
another n free blocks, and so on. The addresses of a large number of free blocks can now be
found quickly, unlike the situation when the standard linked-list approach is used.
Counting

Another approach takes advantage of the fact that, generally, several contiguous blocks may be
allocated or freed simultaneously, particularly when space is allocated with the contiguous-
allocation algorithm or through clustering. Thus, rather than keeping a list of n free disk
addresses, we can keep the address of the first free block and the number (n) of free
contiguous blocks that follow the first block. Each entry in the free-space list then consists of a
disk address and a count. Although each entry requires more space than would a simple disk
address, the overall list is shorter, as long as the count is generally greater than 1. Note that this
method of tracking free space is similar to the extent method of allocating blocks. These entries
can be stored in a B-tree, rather than a linked list for efficient lookup, insertion, and deletion.
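The compression the counting scheme achieves is easy to see on the running example: 15 block addresses collapse into 4 (address, count) entries:

```python
def to_extents(free_blocks):
    """Compress a sorted list of free block numbers into (start, count) runs."""
    extents = []
    for b in free_blocks:
        if extents and b == extents[-1][0] + extents[-1][1]:
            start, count = extents[-1]
            extents[-1] = (start, count + 1)     # extend the current run
        else:
            extents.append((b, 1))               # start a new run
    return extents

print(to_extents([2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27]))
# [(2, 4), (8, 6), (17, 2), (25, 3)]
```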

Protection

When information is stored in a computer system, we want to keep it safe from physical
damage (the issue of reliability) and improper access (the issue of protection). Reliability is
generally provided by duplicate copies of files. Many computers have systems programs that
automatically (or through computer-operator intervention) copy disk files to tape at regular
intervals (once per day or week or month) to maintain a copy should a file system be
accidentally destroyed. File systems can be damaged by hardware problems (such as errors in
reading or writing), power surges or failures, head crashes, dirt, temperature extremes, and
vandalism. Files may be deleted accidentally. Bugs in the file-system software can also cause file
contents to be lost. Protection can be provided in many ways. For a small single-user system,
we might provide protection by physically removing the floppy disks and locking them in a desk
drawer or file cabinet. In a multiuser system, however, other mechanisms are needed.

I/O Devices

The control of devices connected to the computer is a major concern of operating-system


designers. Because I/O devices vary so widely in their function and speed (consider a mouse, a
hard disk, and a tape robot), varied methods are needed to control them. These methods form
the I/O subsystem of the kernel, which separates the rest of the kernel from the complexities of
managing I/O devices. I/O-device technology exhibits two
conflicting trends. On the one hand, we see increasing standardization of software and
hardware interfaces. This trend helps us to incorporate improved device generations into
existing computers and operating systems. On the other hand, we see an increasingly broad
variety of I/O devices. Some new devices are so unlike previous devices that it is a challenge to
incorporate them into our computers and operating systems. This challenge is met by a


combination of hardware and software techniques. The basic I/O hardware elements, such as
ports, buses, and device controllers, accommodate a wide variety of I/O devices. To
encapsulate the details and oddities of different devices, the kernel of an operating system is
structured to use device-driver modules. The device drivers present a uniform device access
interface to the I/O subsystem, much as system calls provide a standard interface between the
application and the operating system.

I/O Hardware & I/O Devices Organization

A device communicates with a computer system by sending signals over a cable or even
through the air. The device communicates with the machine via a connection point, or port—
for example, a serial port. If devices share a common set of wires, the connection is called a
bus. A bus is a set of wires and a rigidly defined protocol that specifies a set of messages that
can be sent on the wires. In terms of the electronics, the messages are conveyed by patterns of
electrical voltages applied to the wires with defined timings. When device A has a cable that
plugs into device B, and device B has a cable that plugs into device C, and device C plugs into a
port on the computer, this arrangement is called a daisy chain. A daisy chain usually operates as
a bus. Buses are used widely in computer architecture and vary in their signaling methods,
speed, throughput, and connection methods. A typical PC bus structure appears in Figure 13.1.
In the figure, a PCI bus (the common PC system bus) connects the processor–memory
subsystem to fast devices, and an expansion bus connects relatively slow devices, such as the
keyboard and serial and USB ports. In the upper-right portion of the figure, four disks are
connected together on a Small Computer System Interface (SCSI) bus plugged into a SCSI
controller. Other common buses used to interconnect main parts of a computer include PCI
Express (PCIe), with throughput of up to 16 GB per second, and Hyper Transport, with
throughput of up to 25 GB per second.


Kernel I/O Subsystem

Kernels provide many services related to I/O. Several services (scheduling, buffering, caching,
spooling, device reservation, and error handling) are provided by the kernel's I/O subsystem
and build on the hardware and device-driver infrastructure. The I/O subsystem is also
responsible for protecting itself from errant processes and malicious users.
I/O Scheduler

To schedule a set of I/O requests means to determine a good order in which to execute them.
The order in which applications issue system calls rarely is the best choice. Scheduling can
improve overall system performance, can share device access fairly among processes, and can
reduce the average waiting time for I/O to complete. Here is a simple example to illustrate.
Suppose that a disk arm is near the beginning of a disk and that three applications issue
blocking read calls to that disk. Application 1 requests a block near the end of the disk,
application 2 requests one near the beginning, and application 3 requests one in the middle of
the disk. The operating system can reduce the distance that the disk arm travels by serving the
applications in the order 2, 3, 1. Rearranging the order of service in this way is the essence of
I/O scheduling. Operating-system developers implement scheduling by maintaining a wait
queue of requests for each device. When an application issues a blocking I/O system call, the
request is placed on the queue for that device. The I/O scheduler rearranges the order of the
queue to improve the overall system efficiency and the average response time experienced by
applications. The operating system may also try to be fair, so that no one application receives
especially poor service, or it may give priority service for delay-sensitive requests. For instance,
requests from the virtual memory subsystem may take priority over application requests.
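The 2, 3, 1 reordering from the example can be sketched as a simple distance sort. The block numbers and queue layout here are invented for illustration; real schedulers use the disk-arm algorithms described earlier rather than a one-shot sort:

```python
def reorder_wait_queue(arm_position, queue):
    """Reorder pending requests by distance from the current arm position."""
    return sorted(queue, key=lambda req: abs(req["block"] - arm_position))

# Arm near the beginning of the disk; three blocking reads are pending.
queue = [
    {"app": 1, "block": 950},   # near the end of the disk
    {"app": 2, "block": 10},    # near the beginning
    {"app": 3, "block": 500},   # in the middle
]
print([req["app"] for req in reorder_wait_queue(0, queue)])  # [2, 3, 1]
```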

I/O Buffering

A buffer, of course, is a memory area that stores data being transferred between two devices or
between a device and an application. Buffering is done for three reasons. One reason is to cope
with a speed mismatch between the producer and consumer of a data stream. Suppose, for
example, that a file is being received via modem for storage on the hard disk. The modem is
about a thousand times slower than the hard disk. So a buffer is created in main memory to
accumulate the bytes received from the modem. When an entire buffer of data has arrived, the
buffer can be written to disk in a single operation. Since the disk write is not instantaneous and
the modem still needs a place to store additional incoming data, two buffers are used. After the
modem fills the first buffer, the disk write is requested. The modem then starts to fill the
second buffer while the first buffer is written to disk. By the time the modem has filled the
second buffer, the disk write from the first one should have completed, so the modem can
switch back to the first buffer while the disk writes the second one. This double buffering
decouples the producer of data from the consumer, thus relaxing timing requirements between
them.
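The double-buffering scheme can be sketched as follows. The buffer size and the byte stream are invented, and the "disk write" is just a list append standing in for the real transfer:

```python
BUF_SIZE = 4  # assumed buffer size, for illustration only

def double_buffer(stream):
    """Producer fills one buffer while the consumer 'writes' the other."""
    buffers = ([], [])
    active = 0                 # the buffer the producer is currently filling
    written = []               # stands in for data written to disk
    for byte in stream:
        buffers[active].append(byte)
        if len(buffers[active]) == BUF_SIZE:
            # Buffer full: hand it to the consumer and switch buffers, so
            # the producer never waits for the write to finish.
            written.append(list(buffers[active]))
            buffers[active].clear()
            active = 1 - active
    return written

print(double_buffer("abcdefghij"))
# [['a', 'b', 'c', 'd'], ['e', 'f', 'g', 'h']]  (last two bytes still buffered)
```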


Transforming I/O Requests to Hardware Operation


An I/O operation requires a great many steps that together consume a tremendous number of
CPU cycles.

1. A process issues a blocking read() system call to a file descriptor of a file that has been
opened previously.
2. The system-call code in the kernel checks the parameters for correctness. In the case of
input, if the data are already available in the buffer cache, the data are returned to the
process, and the I/O request is completed.
3. Otherwise, a physical I/O must be performed. The process is removed from the run queue
and is placed on the wait queue for the device, and the I/O request is scheduled. Eventually,
the I/O subsystem sends the request to the device driver. Depending on the operating
system, the request is sent via a subroutine call or an in-kernel message.
4. The device driver allocates kernel buffer space to receive the data and schedules the I/O.
Eventually, the driver sends commands to the device controller by writing into the
device-control registers.
5. The device controller operates the device hardware to perform the data transfer.
6. The driver may poll for status and data, or it may have set up a DMA transfer into kernel
memory. We assume that the transfer is managed by a DMA controller, which generates an
interrupt when the transfer completes.
7. The correct interrupt handler receives the interrupt via the interrupt-vector table, stores
any necessary data, signals the device driver, and returns from the interrupt.
8. The device driver receives the signal, determines which I/O request has completed,
determines the request's status, and signals the kernel I/O subsystem that the request has
been completed.
9. The kernel transfers data or return codes to the address space of the requesting process
and moves the process from the wait queue back to the ready queue.
10. Moving the process to the ready queue unblocks the process. When the scheduler assigns
the process to the CPU, the process resumes execution at the completion of the system call.


Interrupt Service

The basic interrupt mechanism works as follows. The CPU hardware has a wire called the
interrupt-request line that the CPU senses after executing every instruction. When the CPU
detects that a controller has asserted a signal on the interrupt-request line, the CPU performs a
state save and jumps to the interrupt-handler routine at a fixed address in memory. The
interrupt handler determines the cause of the interrupt, performs the necessary processing,
performs a state restore, and executes a return from interrupt instruction to return the CPU to
the execution state prior to the interrupt. We say that the device controller raises an interrupt
by asserting a signal on the interrupt request line, the CPU catches the interrupt and dispatches
it to the interrupt handler, and the handler clears the interrupt by servicing the device. Figure
13.3 summarizes the interrupt-driven I/O cycle. We stress interrupt management in this
chapter because even single-user modern systems manage hundreds of interrupts per second
and servers hundreds of thousands per second. The basic interrupt mechanism just described


enables the CPU to respond to an asynchronous event, as when a device controller becomes
ready for service. In a modern operating system, however, we need more sophisticated
interrupt-handling features.

1. We need the ability to defer interrupt handling during critical processing.
2. We need an efficient way to dispatch to the proper interrupt handler for a device without
first polling all the devices to see which one raised the interrupt.
3. We need multilevel interrupts, so that the operating system can distinguish between
high- and low-priority interrupts and can respond with the appropriate degree of urgency.

In modern computer hardware, these three features are provided by the CPU and by the
interrupt-controller hardware. Most CPUs have two interrupt request lines. One is the
nonmaskable interrupt, which is reserved for events such as unrecoverable memory errors. The
second interrupt line is maskable: it can be turned off by the CPU before the execution of
critical instruction sequences that must not be interrupted. The maskable interrupt is used by
device controllers to request service.

File system in Linux & Windows

Linux

Linux is one of the popular versions of the UNIX operating system. It is open source, as its
source code is freely available, and it is free to use. Linux was designed with UNIX compatibility
in mind; its functionality list is quite similar to that of UNIX.

Components of Linux System

The Linux operating system primarily has three components:

 Kernel − The kernel is the core part of Linux. It is responsible for all major activities of this
operating system. It consists of various modules and it interacts directly with the underlying
hardware. The kernel provides the required abstraction to hide low-level hardware details
from system or application programs.
 System Library − System libraries are special functions or programs using which application
programs or system utilities access the kernel's features. These libraries implement most of
the functionalities of the operating system and do not require kernel module's code access
rights.
 System Utility − System utility programs are responsible for doing specialized,
individual-level tasks.


Kernel Mode vs User Mode

Kernel component code executes in a special privileged mode called kernel mode, with full
access to all resources of the computer. This code represents a single process, executes in a
single address space, and does not require any context switch; hence it is very efficient and
fast. The kernel runs each process, provides system services to processes, and provides
protected access to hardware for processes.

Support code which is not required to run in kernel mode is in the system library. User
programs and other system programs work in user mode, which has no access to system
hardware and kernel code. User programs and utilities use system libraries to access kernel
functions to perform the system's low-level tasks.

Basic Features

Following are some of the important features of the Linux operating system.

 Portable − Portability means software can work on different types of hardware in the same
way. The Linux kernel and application programs support their installation on any kind of
hardware platform.
 Open Source − Linux source code is freely available and it is a community-based
development project. Multiple teams work in collaboration to enhance the capability of the
Linux operating system, and it is continuously evolving.
 Multi-User − Linux is a multiuser system, meaning multiple users can access system
resources like memory, RAM and application programs at the same time.
 Multiprogramming − Linux is a multiprogramming system, meaning multiple applications
can run at the same time.


 Hierarchical File System − Linux provides a standard file structure in which system files and
user files are arranged.
 Shell − Linux provides a special interpreter program which can be used to execute
commands of the operating system. It can be used to do various types of operations, call
application programs, etc.
 Security − Linux provides user security using authentication features like password
protection, controlled access to specific files, and encryption of data.


Architecture

The following illustration shows the architecture of a Linux system.

The architecture of a Linux system consists of the following layers:

 Hardware layer − Hardware consists of all peripheral devices (RAM, HDD, CPU etc.).
 Kernel − The core component of the operating system; it interacts directly with the
hardware and provides low-level services to the upper-layer components.
 Shell − An interface to the kernel, hiding the complexity of the kernel's functions from
users. The shell takes commands from the user and executes the kernel's functions.
 Utilities − Utility programs that provide the user most of the functionalities of an operating
system.

