
OPERATING SYSTEM II

MEMORY MANAGEMENT

Memory management is a form of resource management applied


to computer memory. The essential requirement of memory
management is to provide ways to dynamically allocate portions of
memory to programs at their request, and free it for reuse when no
longer needed. This is critical to any advanced computer system where
more than a single process might be underway at any time.

The tasks of the operating system include making the most efficient use of
the machine's resources, among which is main memory (the immediate
access store). Main memory contains the operating system together with a
number of user programs, and the limited space in main memory is
controlled by the operating system. In a multiprogramming computer, the
operating system resides in part of memory and the rest is used by
multiple processes.
Operating systems must employ techniques to
– Track where and how a program resides in memory
– Convert logical addresses into actual addresses

Logical address - reference to a stored value relative to the program
making the reference

Physical address - actual address in main memory

Memory management requirements

 Relocation - programs are not loaded at a fixed point in memory but
may reside in various areas.
 Protection - each process should be protected against unwanted
interference by other processes either accidentally or intentionally. All
memory references generated by a process must be checked at run
time to ensure that they only reference memory that has been
allocated to that process.
 Sharing – Protection mechanisms must be flexible enough to allow a
number of processes to access the same area of memory. For
example, if a number of processes are executing the same program, it
is advantageous to allow each process to access the same copy of
the program.
 Logical Organisation – Programs are normally organised into
modules, some of which are non-modifiable and some of which
contain data that may be modified. How does the operating system
organise RAM, which is a linear address space, to reflect this logical
structure?
 Physical Organisation – Computer memory is organised into at least
two levels: main memory and secondary memory. The task of
moving information between the two levels of memory should be the
system's responsibility (not the programmer's).

There are methods used to divide up the main store as efficiently as
possible. Among them are paging, partitioning and segmentation.

a. Paging: where the main store is divided into equal-sized chunks or
pages. The programs and other software are similarly divided into
pages of the same size and are allocated to hardware pages when
vacant.
b. Partitioning: This is where the main memory is divided into partitions of
different sizes and the software is fitted into one of the partitions if
sufficient store is available. A program can be relocated within the main
memory by being marked using a special register in the hardware.

c. Segmentation: is where the main memory is organized into segments
which reflect the logical divisions within each program; so that each
segment corresponds to a module of the program.
The memory management module of the Operating System is
responsible for controlling the method of allocating the space and for
overseeing the movement of data in and out of the main store.

MEMORY MANAGEMENT TECHNIQUES

SINGLE CONTIGUOUS ALLOCATION (Single User, Single Process
System)

This is the same as the direct addressing technique. In this technique,
the OS resides in the upper or lower part of the main memory, while the
remaining area is assigned to the single job or program needing it.

In this system – only one user is present and one process is running at
any time. Memory management is easy – memory is divided into two
parts – one for the Operating system and one for the program currently
being executed.

Single Contiguous Memory Management

There are only two programs in memory


- operating system
- application program

This scheme has been employed in single-user, stand-alone systems,
such as the IBM 1130 Disk Monitor System. It is found nowadays in simple
systems like game computers. Early MS-DOS systems made use of this
scheme.

A major advantage of this technique is its simplicity in design and
implementation.

However, two main drawbacks are associated with the technique:

(i) A large portion of the main memory is wasted since only one
program is in the main memory at any one time.
(ii) Most of the other system resources could not be effectively
utilized.

PARTITIONED ALLOCATION

This technique, which is a variation of the base-and-limit addressing
technique, was developed to solve the problems of wasted time and
memory in the single contiguous allocation technique. In this
technique, the main memory is divided into a number of partitions and
each job or task is allocated a separate contiguous section of the main
memory.

The OS and any number of other programs are in memory at one time.
There are two schemes for dividing up memory for programs:

- Fixed partitions - main memory is divided into a fixed number
of partitions into which programs can be loaded

- Dynamic partitions - partitions are created as needed to fit
the programs waiting to be loaded

Base register - register that holds the beginning address of the current
partition (the one that is running)
Bounds register - register that holds the length of the current partition
Partitioning of the memory could be fixed or variable.
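As a sketch of how the base and bounds registers cooperate at run time, the following hypothetical helper (the names and sizes are illustrative, not from any real OS) adds the base register to a logical address after checking it against the bounds register:

```python
def to_physical(logical_addr, base, bounds):
    """Translate a logical address to a physical address using the
    base and bounds registers of the current partition."""
    if logical_addr < 0 or logical_addr >= bounds:
        # The reference falls outside the partition; real hardware
        # would raise an addressing trap at this point.
        raise MemoryError("address outside the current partition")
    return base + logical_addr

# A partition starting at 5000 with length 2000: logical address 100
# maps to physical address 5100.
print(to_physical(100, base=5000, bounds=2000))
```

Because every reference is checked against the bounds register, this same mechanism provides the protection requirement described earlier.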

In the fixed partition scheme, the main memory is divided into a number of
separate fixed areas, each of which can hold one process. The figure below
illustrates this method. In the figure, the main memory, of size 10000k, is
divided into three fixed partitions of size 2000k, 3000k and 4000k
respectively, with the OS occupying the first 1000k portion. The
partitions respectively contain the process p1 of size 1000k, process p2
of size 2500k and process p3 of size 3000k.

[Figure: main memory with the OPERATING SYSTEM in the first 1000KB,
followed by fixed partitions of 2000KB (holding P1, 1000KB), 3000KB
(holding P2, 2500KB) and 4000KB (holding P3, 3000KB).]

This method has been employed in multiprogramming computer systems,
such as IBM's OS/360 MFT (multiprogramming with a fixed number of
tasks).

Advantages

i. The scheme is relatively simple to implement.
ii. It improves utilization of main memory compared with the single
contiguous allocation scheme.
iii. It has little OS overhead.

Disadvantages

i. The fixed partition sizes can prevent a process from being run
due to the unavailability of a partition of adequate size.
ii. The total unused space within the partitions can be large enough
to accommodate another process, though no single partition's
unused space is large enough to do so.
iii. The maximum number of the active processes is fixed.
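A minimal sketch of fixed-partition allocation, using the partition sizes from the figure above, illustrates disadvantages (i) and (iii): a process either fits whole into a free partition or cannot run at all (the function name is our own):

```python
def allocate_fixed(partition_sizes, free, process_size):
    """Place a process into the first free partition large enough to
    hold it; return the partition index, or None if it cannot run."""
    for i, size in enumerate(partition_sizes):
        if free[i] and process_size <= size:
            free[i] = False          # the whole partition is taken
            return i
    return None

partitions = [2000, 3000, 4000]      # KB, fixed at system start-up
free = [True, True, True]
allocate_fixed(partitions, free, 1000)   # P1 takes the 2000 KB partition
allocate_fixed(partitions, free, 2500)   # P2 takes the 3000 KB partition
allocate_fixed(partitions, free, 3000)   # P3 takes the 4000 KB partition
# A fourth process cannot run, even though 2500 KB are unused in total.
print(allocate_fixed(partitions, free, 500))   # prints None
```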

Variable Partition Allocation or Dynamic Allocation

The problems of fixed partitioning led to the development of variable
partitioning. In this technique, the size of each partition is determined at
load time and is only large enough to accommodate the process being
run. Processes are loaded into consecutive areas until the memory is
filled up or the remaining space is not large enough to contain another
process.

This technique has been employed in IBM’s OS/360 MVT.

A major advantage of the partitioned allocation scheme is the possibility of
multiprogramming, which is not permitted in the single contiguous
allocation scheme. Multiprogramming increases throughput and
minimizes CPU idleness.

In any typical multiprogramming partition scheme, CPU time is allotted to
each job on a time-slice priority basis. For a time-slice, the OS causes
the processor to execute a program in its address space (partition) until
one of the following conditions occurs:

i. Program/job is completed
ii. Error is detected
iii. I/O is required
iv. Program/job’s current time-slice expires.

In each case, the processor is then assigned to the job with the next
highest priority.

This technique is usually more expensive than the single contiguous
allocation scheme, for the following reasons:

a. Special protection mechanisms are required to prevent one job from
inadvertently interfering with another one.
b. Additional core storage is required since the OS is now more
complex and of bigger size.

A major problem with the partitioned memory technique, however, is
fragmentation. This refers to the unusable holes or fragments in portions
of the main memory. For example, with respect to the figure above, even if
process p2 completes and there is a process requiring 3500k of
contiguous memory, that program cannot run.

Fragmentation refers to free memory spaces (called holes) scattered all
over the memory that cannot be used because they are not in a
contiguous area of main memory.

Internal and External Fragmentation

The unused, free space (hole) within each partition allocated to a process,
as in the fixed partition memory, is called internal fragmentation.

The holes that are not allocated to any process, usually because they are
not large enough to contain the processes that want to run, are referred
to as external fragmentation.
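Using the figures from the fixed-partition example earlier, internal fragmentation can be totalled as the unused space inside each allocated partition:

```python
partition_sizes = [2000, 3000, 4000]   # KB, the fixed partitions
process_sizes = [1000, 2500, 3000]     # KB, p1..p3 loaded into them

# Internal fragmentation: space wasted inside allocated partitions.
internal = sum(p - q for p, q in zip(partition_sizes, process_sizes))
print(internal)   # 1000 + 500 + 1000 = 2500 KB wasted
```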

The problem of fragmentation in the memory may be solved in several
ways. Two of these ways are relocatable partitions and paging.

(a) Relocatable partition

In this case when there are fragments of free space, the remaining
programs in memory can be compacted by relocating them upwards, so
that all free fragments come together. A relocating register whose
contents are automatically added to every address that is used in
referencing memory is often used.

(b) PAGING TECHNIQUE

This is another solution to the problem of fragmentation in the memory.
The technique solves the fragmentation problem without the need to
rearrange partitions after a job terminates.

There are two types of the paging technique: Direct and Demand-
Paging.

Motivation for Paging Technique

i. Reduction of the fragmentation problem to the barest minimum, if
not its complete elimination.
ii. Provision of unlimited virtual storage, especially in a demand-
paging technique.

iii. Elimination of the necessity of allocating contiguous areas of
memory to a process.
iv. Provision of a high degree of concurrency/multiprogramming, which
also results in improved utilization of the system's resources.

Problems of paging technique

The following are associated with paging technique:

i. The overhead associated with each address transformation is
relatively high.
ii. Extra core or registers are needed for the page map table.
iii. A portion of the last frame may still be wasted since a program
is not usually an exact multiple of a page size.
iv. Code sharing is not supported.
v. The thrashing problem is also possible.

Thrashing

This is a problem in which pages are read into memory, removed and
then read back frequently to the extent that the system throughput is
undermined. This particularly occurs when too many programs are being
time-shared using demand-paging with insufficient main memory.

MAIN MEMORY AND DISK FRAGMENTATION

A major problem with the partitioned memory technique of memory
management is fragmentation. This refers to the unused, tiny pieces of
free areas of memory (called 'holes') scattered all over the memory.
Similar fragmentation also exists with disk management. This is referred
to as disk fragmentation. Disk fragmentation is of two types: file
fragmentation and free space fragmentation.

File fragmentation is the condition in which individual files on a disk
are not contiguous but are broken up in pieces scattered around the
disk. Free space fragmentation, more commonly called disk
fragmentation, is a condition in which the free space on a disk consists of
little bits of free space here and there rather than being together to form
a single large free space.

Fragmentation generally causes performance problems wherever it
exists. Main memory fragmentation causes performance problems in
terms of wasted memory space, a reduction in the number of programs
that can be run concurrently, and a reduction in the optimal utilization
of most of the system resources. Compaction and paging techniques
have been used to solve fragmentation problems in main memory.

File fragmentation causes performance problems when reading files,
while disk fragmentation causes performance problems when creating
files or extending existing files. Solution techniques to get rid of the
fragmentation problems are collectively referred to as defragmentation.

MAIN MEMORY MANAGEMENT POLICIES


The OS is responsible for allocating areas of main memory to programs
and/or processes that want to run. When such an area of memory is
required, the OS will have to find out if there are free areas (partitions or
pages) where the program/process can be placed. If there are some
free areas, it will determine which of the free areas is to be allocated to
the process. If there is no free area, the OS will have to determine
which information/processes in main memory are to be removed into
auxiliary storage so as to give room for the process to be run. These
decisions constitute what are called memory allocation policies.
The policies of managing the main memory fall into three main
categories
i. Replacement policy: This determines what information or
process is to be removed from the main memory.
ii. Fetch policy: This determines when information/process is to be
loaded.
iii. Placement policy: This determines where information/process is
to be placed in main memory.
Replacement and fetch policies are the same for paged and non-paged
systems. Placement policy, on the other hand, differs between paged
and non-paged systems.

REPLACEMENT POLICIES/ALGORITHMS
When a block of information (such as a page in a paged system) must
be fetched into a full main memory, one of those already present must
be replaced. The rule that determines the victim is called a replacement
algorithm.
An ideal replacement algorithm should try to minimize the number of
page transfers. Such an ideal algorithm would be one that replaces the
page which will remain unreferenced for the longest period.
Unfortunately, knowledge of the future pattern of reference is hard to
come by; at best, this can only be inferred from the past behaviour.
The four most common replacement algorithms are:
(i) Optimal (OPT) algorithm
(ii) FIFO (first-in-first-out) algorithm
(iii) LRU (least recently used) algorithm
(iv) Clock algorithm (also called NRU)
a) Optimal (OPT) Algorithm
The OPT algorithm removes the page to be referenced in the most
distant future. Since this requires future knowledge, the OPT algorithm is
not realizable. Its significance is theoretical, as it can serve as a
yardstick for comparison with other algorithms. This algorithm is due to
Belady.
b) First-in- First- out (FIFO) Algorithm
The FIFO algorithm (also called oldest resident) replaces the resident
page that has been in the main memory for the longest time. Whenever
a page is to be removed, the oldest page is identified and removed from
main memory.
In order to implement the FIFO page replacement algorithm, the memory
manager must keep track of the relative order of the loading of pages
into the main memory. One way of accomplishing this is maintaining a
FIFO queue of pages.
For purpose of illustration, assume in a paged system that there are only
3 page frames allocated to a process with 4 distinct pages. Assume
further the following sequence of page reference during execution:
2, 1, 2, 3, 2, 4, 2, 1.

References to pages 2 and 1 will require these pages to be fetched and
placed in the first and second page frames respectively. The 3rd page
reference is to page 2, which is already in memory; no fetch is required.
The 4th reference refers to page 3, which will have to be fetched and
placed in the 3rd page frame.
The 5th page reference (page 2) does not require any fetching since the
page is already in memory. The 6th page reference (page 4), however, will
require the page to be fetched and placed in memory. The oldest resident
page is page 2. This will be removed from main memory to allow page 4 to
be brought into the vacated page frame.
The 7th page reference (page 2) will require page 2 to be brought into
memory again. The page frame occupied by page 1 will now have to be
vacated, since page 1 is the oldest in memory.
The FIFO algorithm has the advantage of being easy to implement. It
suffers, however, from situations in which a page is brought back into
main memory almost immediately after it is replaced. This occurs
because the algorithm does not take into account the pattern of usage
of a given page.
For example, in the illustration above, the 6th page reference caused
page 2 to be replaced, while the 7th page reference made this very page
2 be brought back into main memory again almost immediately after it
was replaced.
The FIFO algorithm tends to throw away frequently used pages because
they naturally tend to stay longer in memory.
Another problem with the FIFO replacement algorithm is Belady's
anomaly: the number of page faults, and hence page transfers, can
increase as the number of page frames available to a process increases.
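The walkthrough above can be checked with a short simulation; this is a sketch, not production code. It replays the reference string 2, 1, 2, 3, 2, 4, 2, 1 against 3 frames and counts the fetches (page faults):

```python
from collections import deque

def fifo_faults(references, frames):
    """Simulate FIFO page replacement and return the number of
    page faults (fetches) incurred."""
    resident = deque()               # oldest resident page on the left
    faults = 0
    for page in references:
        if page in resident:
            continue                 # already in memory: no fetch needed
        faults += 1
        if len(resident) == frames:
            resident.popleft()       # evict the oldest resident page
        resident.append(page)
    return faults

# Six of the eight references cause a fetch, matching the walkthrough.
print(fifo_faults([2, 1, 2, 3, 2, 4, 2, 1], frames=3))   # prints 6
```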
c) Least Recently Used (LRU) Algorithm.
The LRU algorithm replaces the least recently used resident page. The
overhead is that of recording the sequence and frequency of access to
the page.
To illustrate the LRU algorithm, assume 3 page frames and the former
sequence of page references: 2, 1, 2, 3, 2, 4, 2, 1.
The first two page references cause pages 2 and 1 to be brought into
the main memory as a result of page faults. The 3rd page reference is
made to page 2, which is already in memory. The 4th page reference
causes page 3 to be placed in the 3rd page frame.
The 5th page reference (page 2) does not require any fetching. At the 6th
page reference, when page 4 is to be brought in, the most recently used
page is page 2, while the least recently used page is page 1. Thus page
1 is replaced, giving room for page 4.
At the 8th page reference, page 1 will need to be brought back in again.
At this time, the least recently used page frame contains page 3; page 3
will be removed from the main memory to allow page 1 back into
memory.
The LRU algorithm can be implemented by means of a stack structure.
When a resident page is referenced, it is removed from its current stack
position and placed at the top of the stack. When a page removal is
required, the page at the bottom of the stack is removed from memory.
One main advantage of the LRU replacement algorithm is that it does
not suffer from Belady's anomaly. Its problem lies in the complexity
of practically implementing the algorithm.
In general, the LRU algorithm performs better than FIFO. The reason is
that LRU takes into account the pattern of program behaviour by
assuming that the pages used in the most distant past are least likely to
be referenced in the near future. This algorithm is used in
multiprogramming systems; UNIX and Linux also use it when swapping
out a whole process.
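The stack-based description above can be sketched as follows; replaying the same reference string with 3 frames shows LRU taking one fault fewer than FIFO on this sequence:

```python
def lru_faults(references, frames):
    """Simulate LRU replacement using the stack scheme described
    above: top of stack = most recently used, bottom = victim."""
    stack = []                       # index 0 is the least recently used
    faults = 0
    for page in references:
        if page in stack:
            stack.remove(page)       # referenced: move it to the top
        else:
            faults += 1
            if len(stack) == frames:
                stack.pop(0)         # evict the least recently used page
        stack.append(page)
    return faults

print(lru_faults([2, 1, 2, 3, 2, 4, 2, 1], frames=3))   # prints 5
```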
d) The Clock Algorithm
As a result of the poor performance and potentially anomalous behaviour
of FIFO and the implementation complexity of LRU, the Clock algorithm
was developed as a compromise between, or hybrid of, the two. The
Clock algorithm combines the relatively low overhead of FIFO with the
tracking of resident page usage that accounts for the better performance
of LRU. This algorithm is sometimes called not recently used (NRU).
Current versions of UNIX, which use paged virtual memory, employ this
replacement algorithm.
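As a sketch of the compromise the Clock algorithm makes, the following simulation keeps one use bit per frame; the "hand" clears use bits as it sweeps until it finds a frame whose bit is already 0, which becomes the victim:

```python
def clock_faults(references, frames):
    """Simulate the clock (NRU) replacement algorithm and return
    the number of page faults."""
    pages = [None] * frames          # circular list of page frames
    used = [False] * frames          # one use bit per frame
    hand = 0
    faults = 0
    for page in references:
        if page in pages:
            used[pages.index(page)] = True   # record the reference
            continue
        faults += 1
        while used[hand]:            # give a second chance: clear the bit
            used[hand] = False
            hand = (hand + 1) % frames
        pages[hand] = page           # replace the victim page
        used[hand] = True
        hand = (hand + 1) % frames
    return faults

print(clock_faults([2, 1, 2, 3, 2, 4, 2, 1], frames=3))   # prints 6
```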
Simulation studies have shown differences in the performance of these
algorithms, especially in terms of the number of page transfers
required during the running of a number of programs. The FIFO
algorithm performs worse than the other two.
Global and Local Replacement Policies
Local replacement algorithm refers to the replacement of one of the
resident pages belonging to one process. Global replacement algorithm
refers to replacement of a resident page from all the page frames
available, regardless of the owner of the page frame.

FETCH POLICIES
Fetch policies determine when to load a block of information or process
from auxiliary memory into main memory. Two approaches are usually
used: demand and anticipatory. Demand policies fetch required
information/processes when they are needed, while anticipatory policies
fetch them in advance.

MAIN MEMORY PLACEMENT POLICIES


Placement policies differ between paged and non-paged systems. This
is because pages are of the same size, whereas the free areas of
memory in non-paged systems are usually of arbitrary size and may not
necessarily fit the process or block of information needing space.
The placement policy for m pages is simply to use the replacement
policy for freeing m page frames.
The placement policy for a non-paged system has to maintain a list of
the free, unallocated areas of main memory, usually called 'holes'. It has
the task of selecting which hole to use and updating the hole list after
each placement. The following sections describe four placement
algorithms.
a) First-fit
In this algorithm, the first free area of memory large enough to
contain the process is allocated. This algorithm is commonly employed
by the UNIX OS and MS-DOS.
b) Best-fit
This algorithm selects the smallest hole that is large enough to contain
the process. In general, the first-fit algorithm is faster than best-fit, since
the search is terminated upon finding the first hole of the required size;
however, it does not minimize wasted memory. The best-fit algorithm is
slower, but it provides better memory utilization since it has to search
through all the holes to obtain the smallest one that is large enough to
accommodate the process.
c) Next-fit
This is a modification of the first-fit algorithm. In the first-fit algorithm, a
search for appropriate free space always starts at the beginning of the
free hole list, whereas in next-fit, the search for an appropriate free hole
continues from where the last one left off.
The motive behind this algorithm is to reduce the search time by
avoiding examination of the smaller holes that, in the long run, tend to be
created at the beginning of the free list as a result of previous placements.
In general, however, next-fit has not been found to be superior to
first-fit in reducing the amount of wasted memory.

d) Worst-fit
This algorithm allocates the largest free hole, provided the hole exceeds
the size of the process. The idea behind this algorithm is to reduce the
rate of production of very tiny holes, which become unusable in the
long run. Production of such unusable tiny holes is common when the
best-fit algorithm is used for memory allocation. Some simulation studies
have shown that the worst-fit algorithm is not very effective in reducing
wasted memory in the processing of a series of requests.
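The four placement algorithms can be compared side by side with a sketch over a hypothetical hole list (the sizes below are invented for illustration, and the function name is our own):

```python
def place(holes, size, policy="first", start=0):
    """Return the index of the hole chosen for a request of `size`
    under the given placement policy, or None if nothing fits.
    `start` is where a next-fit search resumes."""
    fits = [i for i, h in enumerate(holes) if h >= size]
    if not fits:
        return None                  # no hole is large enough
    if policy == "first":
        return fits[0]               # first hole that fits
    if policy == "best":
        return min(fits, key=lambda i: holes[i])   # tightest fit
    if policy == "worst":
        return max(fits, key=lambda i: holes[i])   # largest hole
    # next-fit: first fitting hole at or after `start`, wrapping round
    later = [i for i in fits if i >= start]
    return later[0] if later else fits[0]

holes = [600, 200, 1000, 300]        # KB of free space, in list order
print(place(holes, 250, "first"))    # 0: the 600 KB hole
print(place(holes, 250, "best"))     # 3: the 300 KB hole
print(place(holes, 250, "worst"))    # 2: the 1000 KB hole
```

A real allocator would also split the chosen hole and update the hole list after each placement, as described above.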

VIRTUAL MEMORY

Virtual memory is a feature of an operating system that enables a
computer to compensate for shortages of physical memory by
transferring pages of data from random access memory to disk storage.

This process is done temporarily and is designed to work as a
combination of RAM and space on the hard disk.

This means that when RAM runs low, virtual memory can move data
from it to a space called a paging file. This process allows for RAM to be
freed up so that a computer can complete the task.

Occasionally a user might be shown a message that says the virtual
memory is running low; this means that either more RAM needs to be
added or the size of the paging file needs to be increased.

Virtual memory can also be described as an imaginary memory area
supported by some operating systems (for example, Windows but not
DOS) in conjunction with the hardware. You can think of virtual memory
as an alternate set of memory addresses. Programs use these virtual
addresses rather than real addresses to store instructions and data.
When the program is actually executed, the virtual addresses are
converted into real memory addresses.

The purpose of virtual memory is to enlarge the address space, the set
of addresses a program can utilize. For example, virtual memory might
contain twice as many addresses as main memory. A program using all
of virtual memory, therefore, would not be able to fit in main memory all
at once. Nevertheless, the computer could execute such a program
by copying into main memory those portions of the program needed at
any given point during execution.

To facilitate copying virtual memory into real memory, the operating
system divides virtual memory into pages, each of which contains a fixed
number of addresses. Each page is stored on disk until it is needed.
When the page is needed, the operating system copies it from disk to
main memory, translating the virtual addresses into real addresses.

The process of translating virtual addresses into real addresses is
called mapping. The copying of virtual pages from disk to main memory
is known as paging or swapping.
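The mapping step can be sketched as splitting a virtual address into a page number and an offset, then substituting the page's frame; the 4096-byte page size and the page-table contents below are illustrative assumptions:

```python
PAGE_SIZE = 4096                     # bytes per page (assumed size)

def map_address(virtual_addr, page_table):
    """Translate a virtual address to a real (physical) address via
    the page table, which maps page numbers to frame numbers."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        # The page is still on disk: a page fault occurs here and the
        # OS would copy the page into main memory before retrying.
        raise LookupError("page fault on page %d" % page)
    return frame * PAGE_SIZE + offset

# Page 1 is resident in frame 2, so virtual address 4100 (page 1,
# offset 4) maps to 2 * 4096 + 4 = 8196.
print(map_address(4100, {0: 5, 1: 2}))   # prints 8196
```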

FILE SYSTEM AND MANAGEMENT
A file system defines how files are named, organised, stored,
and retrieved from a storage device.
Attributes of a File
• Name – the only information kept in human-readable form
• Identifier – unique tag (number) that identifies the file within the file system
• Type – needed for systems that support different file types
• Location – pointer to the file's location on the device
• Size – current file size
• Protection – controls who can do reading, writing, executing
• Time, date, and user identification – data for protection, security, and usage monitoring
Information about files is kept in the directory structure, which is
maintained on the disk.
File management is one of the basic and important features of an
operating system. The operating system is used to manage the files of
the computer system; all files, with their different extensions, are
managed by the operating system. A file is a collection of specific
information stored in the memory of a computer system. File
management is defined as the process of manipulating files in a
computer system; this management includes the processes of creating,
modifying and deleting files.
The following are some of the tasks performed by the file management
function of the operating system of any computer system:
- It helps to create new files in the computer system and place them at
specific locations.
- It helps in easily and quickly locating these files in the computer system.
- It makes the process of sharing files among different users easy and
user friendly.
- It helps to store files in separate folders known as directories. These
directories help users to search for files quickly or to manage files
according to their types or uses.
- It helps the user to modify the data of files or to modify the names of
files in the directories.

The above figure shows the general hierarchy of storage in an
operating system. In this figure the root directory is present at the
highest level of the hierarchical structure. It includes all the
subdirectories in which the files are stored. A subdirectory is a directory
present inside another directory in the file storage system. A directory-
based storage system ensures better organization of files in the memory
of the computer system.
Origin of the term
Before the advent of computers, the term file system was used to
describe a method of storing and retrieving paper documents. By 1961
the term was being applied to computerized filing alongside the original
meaning. By 1964 it was in general use.

The file management function of an operating system (OS) is based
on the following concepts:
File Attributes
These specify the characteristics of files such as type, date of last
modification, size, location on disk, etc. File attributes help the user to
understand the value and location of files. File attributes are one of the
most important features, used to describe all the information regarding a
particular file.
File Operations: It specifies the task that can be performed on a file
such as opening and closing of file.
File Access permission: It specifies the access permissions related to
a file such as read and write.
File Systems: It specifies the logical method of file storage in a
computer system. Some of the commonly used files systems include
FAT and NTFS.
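On a real system, several of the attributes above can be read through the operating system's file interface. A small sketch using Python's standard library `os.stat` call (the dictionary keys are our own naming, not an OS convention):

```python
import os
import stat
import time

def file_attributes(path):
    """Read some of the file attributes discussed above (size, date
    of last modification, protection bits) via os.stat."""
    st = os.stat(path)
    return {
        "size": st.st_size,                      # current size in bytes
        "modified": time.ctime(st.st_mtime),     # last modification time
        "readable": bool(st.st_mode & stat.S_IRUSR),
        "writable": bool(st.st_mode & stat.S_IWUSR),
    }
```

Calling file_attributes on any existing file returns its size, last-modification timestamp and the owner's read/write permission bits.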
The system may contain features like:
Assigning queued document numbers for processing
Owner and process mapping to track various stages of processing
Report generation
Notes
Status
Create, modify, copy, delete and other file operations
File systems and operating systems
Unix and Unix-like operating systems
1. Unix operating systems create a virtual file system, which makes all
the files on all devices appear to exist in a single hierarchy. This
means that, in those systems, there is one root directory, and every file
existing on the system is located under it somewhere. Unix-like systems
can use a RAM disk or a network shared resource as their root directory.
2. Linux
Linux supports numerous file systems, but common choices for the
system disk on a block device include XFS, JFS, and Btrfs. For raw
flash without a flash translation layer (FTL) or Memory Technology
Device (MTD), there are UBIFS, JFFS2 and YAFFS, among others.
SquashFS is a common compressed read-only file system.

3. macOS
macOS uses the Apple File System (APFS), which recently replaced the
file system inherited from classic Mac OS, called HFS Plus (HFS+).
Apple also uses the term "Mac OS Extended" for HFS+. HFS Plus is a
metadata-rich and case-preserving but (usually) case-insensitive file
system. Due to the Unix roots of macOS, Unix permissions were added
to HFS Plus. Later versions of HFS Plus added journaling to prevent
corruption of the file system structure and introduced a number of
optimizations to the allocation algorithms in an attempt to defragment
files automatically without requiring an external defragmenter.
4. Microsoft Windows
Windows makes use of the FAT, NTFS, exFAT, Live File System and
ReFS file systems (the last of these is only supported and usable in
Windows Server and in Windows 8, 8.1 and 10).
Aspects of File Systems
1. Space management
File System Fragmentation occurs when unused space or single files are
not contiguous. As a file system is used, files are created, modified and
deleted. When a file is created the file system allocates space for the
data. Some file systems permit or require specifying an initial space
allocation and subsequent incremental allocations as the file grows. As
files are deleted the space they were allocated eventually is considered
available for use by other files. This creates alternating used and unused
areas of various sizes. This is free space fragmentation. When a file is
created and there is not an area of contiguous space available for its
initial allocation the space must be assigned in fragments. When a file is
modified such that it becomes larger it may exceed the space initially
allocated to it, another allocation must be assigned elsewhere and the
file becomes fragmented.
2. Filenames
A filename (or file name) is used to identify a storage location in the file
system. Most file systems have restrictions on the length of filenames. In
some file systems, filenames are not case sensitive (i.e., the names
MYFILE and myfile refer to the same file in a directory); in others,
filenames are case sensitive (i.e., the names MYFILE, MyFile, and
myfile refer to three separate files in the same directory).
3. Directories

File systems typically have directories (also called folders) which allow
the user to group files into separate collections. This may be
implemented by associating the file name with an index in a table of
contents or an inode in a Unix-like file system. Directory structures may
be flat (i.e. linear), or allow hierarchies where directories may contain
subdirectories. The first file system to support arbitrary hierarchies of
directories was used in the Multics operating system.
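A hierarchical directory tree like the one just described can be modelled as nested mappings, with path resolution walking the tree one component at a time. This is a toy illustration only; real file systems use on-disk tables of contents or inodes, as mentioned above.

```python
# Toy model of a hierarchical directory tree: each directory maps a
# name either to file contents (bytes) or to another directory (dict).

root = {
    "etc": {"hosts": b"127.0.0.1 localhost"},
    "home": {"ada": {"notes.txt": b"hello"}},
}

def resolve(tree, path):
    """Follow a '/'-separated path from `tree` to a file or directory."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]         # KeyError -> "no such file or directory"
    return node

print(resolve(root, "/home/ada/notes.txt"))   # b'hello'
```

A flat (linear) file system would correspond to a single mapping with no nested dictionaries inside it.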
File system as an abstract user interface
In some cases, a file system may not make use of a storage device but
can be used to organize and represent access to any data, whether it is
stored or dynamically generated (e.g. procfs).
Limitations
1. Converting the type of a file system
It may be advantageous or necessary to have files in a file system
other than the one in which they currently exist. Reasons include the
need for space beyond the limits of the current file system, a need to
increase the depth of paths beyond the restrictions of the file system,
performance or reliability considerations, and the need to provide
access to another operating system which does not support the
existing file system.
2. Long file paths and long file names
In hierarchical file systems, files are accessed by means of a path that is
a branching list of directories containing the file. Different file systems
have different limits on the depth of the path. File systems also have a
limit on the length of an individual filename.
Copying files with long names or located in paths of significant depth
from one file system to another may cause undesirable results. This
depends on how the utility doing the copying handles the discrepancy.
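A careful copying utility can check names against the target file system's limits up front instead of failing or truncating mid-copy. The sketch below uses the common Linux values (255-byte names, 4096-byte paths) as an assumption; other file systems have different limits, and `check_path` is an illustrative helper, not a standard function.

```python
# Hedged sketch: validate a path against assumed target-file-system
# limits before copying, so over-long names are reported rather than
# silently truncated. 255/4096 are typical Linux limits, not universal.

MAX_NAME = 255      # assumed per-component limit, in bytes
MAX_PATH = 4096     # assumed whole-path limit, in bytes

def check_path(path):
    problems = []
    if len(path.encode()) > MAX_PATH:
        problems.append("path too long")
    for part in path.strip("/").split("/"):
        if len(part.encode()) > MAX_NAME:
            problems.append(f"name too long: {part[:20]}...")
    return problems

print(check_path("/home/user/" + "x" * 300))   # one over-long component
```

An empty result means the path fits within the assumed limits of the destination file system.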

RESOURCE MANAGEMENT TECHNIQUES


Resource management is the dynamic allocation and de-allocation by an
operating system of processor cores, memory pages, and various types
of bandwidth to computations that compete for those resources. The
objective is to allocate resources so as to optimize responsiveness
subject to the finite resources available. Historically, resource
management solutions have been relatively unsystematic, and the very
assumptions underlying the traditional strategies no longer hold. First,
applications increasingly differ in their ability to exploit resources,
especially processor cores. Second, application responsiveness is
approximately two-valued for quality-of-service (QoS) applications,
depending on whether deadlines are met. Third, power and battery
energy have become constrained. These trends call for more
systematic approaches to the operating system resource management
problem.
Virtual Memory
When physical memory runs short, a technique called virtual memory
is used. Virtual memory transfers data held in primary memory (e.g.
RAM) to disk storage, i.e. the hard drive.

This makes programs run more slowly, because the hard drive is much
slower than primary memory; if performance matters, it is better to
have a large RAM capacity so that virtual memory is needed only when
opening new programs. On the other hand, virtual memory is
economical: it makes it seem as though you have far more RAM than
is physically installed.

When you run many programs at the same time and your RAM cannot
cope, virtual memory plays a big role: information is constantly
swapped between RAM and the hard drive. When this swapping
dominates, it is called thrashing, and it makes the computer run very
slowly.
The information stored on the hard drive is kept in "page files".
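The swapping between RAM and the page file can be sketched with a least-recently-used (LRU) eviction policy, one common choice among several real replacement algorithms. The frame count, page numbers, and class names below are illustrative only:

```python
# Hedged sketch of demand paging with LRU eviction: when RAM frames
# are full, the page untouched for longest is written to the page file.
from collections import OrderedDict

class VirtualMemory:
    def __init__(self, frames):
        self.frames = frames
        self.ram = OrderedDict()     # page -> contents, oldest first
        self.page_file = {}          # pages swapped out to disk
        self.faults = 0

    def access(self, page):
        if page in self.ram:
            self.ram.move_to_end(page)       # mark as recently used
            return "hit"
        self.faults += 1
        if len(self.ram) >= self.frames:     # RAM full: evict LRU page
            victim, data = self.ram.popitem(last=False)
            self.page_file[victim] = data
        # load the page (from the page file if it was swapped out)
        self.ram[page] = self.page_file.pop(page, f"data-{page}")
        return "fault"

vm = VirtualMemory(frames=2)
for p in [1, 2, 1, 3, 2]:
    vm.access(p)
print(vm.faults)   # 4: pages 1, 2, 3 fault once, then 2 faults again
```

Thrashing corresponds to a reference pattern in which almost every `access` is a fault, so the system spends its time moving pages rather than running programs.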

Multitasking

Multitasking operating systems are capable of running more than one
program at a time. Nowadays, operating systems are configured to
execute multiple programs simultaneously. Older operating systems
would switch system resources back and forth between two or more
running processes. In order to multitask, computers time-slice: the
system freezes the executing program briefly and then executes tasks
for another program. Time-slicing is considerably efficient, since the
time lag for the operator is no more than a few seconds. Today, most
desktop, laptop, and netbook operating systems function with some
type of multitasking. In the early stages of multitasking, operating
systems did not truly multitask but did rapid switching between
programs that seemed simultaneous. Modern computers with only one
processor are capable of seeming to perform multiple tasks
simultaneously by time-slicing with improved performance. However,
multiple processors allow many different programs to run at the same
time.
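Time-slicing can be sketched as a round-robin scheduler: each process runs for at most one quantum, then the next runnable process gets the CPU. The process names and burst lengths below are illustrative:

```python
# Minimal sketch of round-robin time-slicing: each process runs for at
# most one quantum, then goes to the back of the ready queue if it
# still has work left.
from collections import deque

def round_robin(bursts, quantum):
    """Return the order in which (name, units run) slices execute."""
    ready = deque(bursts.items())          # (name, remaining time)
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        timeline.append((name, run))       # process runs for one slice
        if remaining - run > 0:            # unfinished: back of the queue
            ready.append((name, remaining - run))
    return timeline

print(round_robin({"editor": 3, "player": 2}, quantum=2))
# [('editor', 2), ('player', 2), ('editor', 1)]
```

With a quantum small enough that no user notices the switches, the two programs appear to run simultaneously on a single processor.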

Paging

Paging is a method of writing data to, and reading it from, secondary
storage for use in primary storage (main memory). Paging plays a role
in memory management for a computer's operating system.
In a memory management system that takes advantage of paging, the
OS reads data from secondary storage in fixed-size blocks called
pages. These individual pages can be moved around more easily
because they need not be contiguous.
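Because pages need not be contiguous, the OS translates each logical address into a physical one through a page table. The sketch below assumes a 256-byte page size and an illustrative table; real systems use larger pages (commonly 4 KiB) and hardware translation:

```python
# Hedged sketch of address translation under paging: a logical address
# splits into (page number, offset), and a page table maps page
# numbers to physical frames. 256-byte pages are illustrative only.

PAGE_SIZE = 256

def translate(page_table, logical):
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]            # a missing entry means page fault
    return frame * PAGE_SIZE + offset

# page 0 lives in frame 5, page 1 in frame 2 -- frames need not be
# adjacent, which is exactly why pages are easy to move around
table = {0: 5, 1: 2}
print(translate(table, 260))   # page 1, offset 4 -> 2*256 + 4 = 516
```

The offset within the page is unchanged by translation; only the page-to-frame mapping differs between logical and physical addresses.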

Interrupt
An interrupt is a signal from a device attached to a computer or from a
program within the computer that causes the operating system to stop
and figure out what to do next. Almost all personal (or larger) computers
today are interrupt-driven - that is, they start down the list of computer
instructions in one program (perhaps an application such as a word
processor) and keep running the instructions until either (A) they can't go
any further or (B) an interrupt signal is sensed. After the interrupt signal
is sensed, the computer either resumes running the program it was
running or begins running another program.

A software interrupt is caused either by an exceptional condition in the
processor itself, or by a special instruction in the instruction set which
causes an interrupt when it is executed.

Hardware interrupts are used by devices to communicate that they
require attention from the operating system.

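The interrupt-driven pattern described above can be illustrated in user space with Unix signals, a close software analogue of hardware interrupts: the main loop keeps running instructions until the signal is sensed, then the handler runs and normal execution resumes. This assumes a Unix-like system where `SIGALRM` is available:

```python
# Sketch of interrupt-driven operation using a Unix alarm signal as a
# stand-in for a hardware timer interrupt (requires a Unix-like OS).
import signal

interrupted = False

def handler(signum, frame):
    """Runs asynchronously, like an interrupt service routine."""
    global interrupted
    interrupted = True

signal.signal(signal.SIGALRM, handler)   # register the "ISR"
signal.alarm(1)                          # "hardware timer" fires in ~1 s

count = 0
while not interrupted:                   # keep running instructions...
    count += 1                           # ...until the interrupt is sensed
print("interrupted after", count, "iterations")
```

After the handler returns, the program continues from where it was interrupted, just as the paragraph above describes for interrupt-driven computers.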
Polling

Polling, or polled operation, in computer science, refers to actively
sampling the status of an external device by a client program as a
synchronous activity. Polling is most often used in terms of input/output
(I/O), and is also referred to as polled I/O or software-driven I/O.
Polling has the disadvantage that if there are too many devices to check,
the time required to poll them can exceed the time available to service
the I/O device.
Polling is often intimately involved with very low-level hardware. For
example, polling a parallel printer port to check whether it is ready for
another character involves examining as little as one bit of a byte. That
bit represents, at the time of reading, whether a single wire in the printer
cable is at low or high voltage. The I/O instruction that reads this byte
directly transfers the voltage state of eight real-world wires to the eight
circuits (flip-flops) that make up one byte of a CPU register.
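The printer-port example can be sketched as follows. The status register is simulated here (on real hardware it would be a port or memory-mapped I/O read), and all names are illustrative:

```python
# Hedged sketch of polled I/O: the driver repeatedly reads a status
# byte and tests a single "ready" bit before writing each character.

READY_BIT = 0x01            # bit 0 of the status byte: device ready

class SimulatedPrinterPort:
    """Toy device that is ready on every other read, like a slow printer."""
    def __init__(self):
        self.reads = 0
        self.printed = []

    def status(self):
        self.reads += 1
        return READY_BIT if self.reads % 2 == 0 else 0x00

    def write(self, ch):
        self.printed.append(ch)

def polled_write(port, text):
    for ch in text:
        while not (port.status() & READY_BIT):   # busy-wait on one bit
            pass                                 # CPU time is wasted here
        port.write(ch)

port = SimulatedPrinterPort()
polled_write(port, "OK")
print("".join(port.printed))   # OK
```

The busy-wait loop is the disadvantage noted above: every status read costs CPU time whether or not the device is ready, which is why interrupt-driven I/O is preferred when many devices must be serviced.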
