OPERATING SYSTEM II
MEMORY MANAGEMENT
Among the tasks of an operating system is making the most efficient use of the machine's resources, of which main memory (the immediate access store) is one. Main memory holds the operating system together with a number of user programs, and the operating system controls this limited space. In a multiprogramming computer, the operating system resides in part of memory and the rest is used by multiple processes.
Operating systems must employ techniques to:
– Track where and how a program resides in memory
– Convert logical addresses into actual (physical) addresses
Logical address – a reference to a stored value relative to the program making the reference.
Memory management techniques
a. Paging: the main store is divided into equal-sized chunks or pages. Programs and other software are similarly divided into pages of the same size and are allocated to hardware pages as those become vacant.
b. Partitioning: the main memory is divided into partitions of different sizes and the software is fitted into one of the partitions if sufficient store is available. Software can be relocated within the main memory, its position being marked using a special register in the hardware.
c. Segmentation: the main memory is organized into segments which reflect the logical divisions within each program, so that each segment corresponds to a module of the program.
The memory management module of the Operating System is
responsible for controlling the method of allocating the space and for
overseeing the movement of data in and out of the main store.
In a single-user, single-process system only one user is present and one process is running at any time. Memory management is easy: memory is divided into two parts, one for the operating system and one for the program currently being executed.
This scheme has been employed in single-user, stand-alone systems such as the IBM 1130 disk monitor system. It is found nowadays in simple systems such as game computers, and early MS-DOS systems also made use of it. Its drawbacks are:
(i) A large portion of the main memory is wasted, since only one program is in main memory at any one time.
(ii) Most of the other system resources cannot be effectively utilized.
PARTITIONED ALLOCATION
- Fixed partitions - main memory is divided into a fixed number of partitions into which programs can be loaded
- Dynamic partitions - partitions are created as needed to fit
the programs waiting to be loaded
Base register - register that holds the beginning address of the current
partition (the one that is running)
Bounds register - register that holds the length of the current partition
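As a minimal sketch (in Python, not part of the original notes), the following shows how the base and bounds registers can be combined to convert a logical address into a physical one while protecting the current partition; the function and parameter names are illustrative only.

def translate(logical_addr, base_reg, bounds_reg):
    """Convert a logical address into a physical one for the running partition."""
    if logical_addr < 0 or logical_addr >= bounds_reg:
        raise MemoryError("address falls outside the current partition")
    return base_reg + logical_addr   # physical address = base + logical offset

# Example: the current partition starts at 3000 KB and is 1000 KB long.
print(translate(250, base_reg=3000, bounds_reg=1000))   # -> 3250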
Partitioning of the memory could be fixed or variable.
[Figure: fixed partitioning of main memory – the operating system occupies one region and the remaining memory is divided into partitions holding processes P1 (1000 KB), P2 (2500 KB) and P3 (3000 KB).]
This method has been employed in multiprogramming computer systems such as IBM's OS/360 MFT (Multiprogramming with a Fixed number of Tasks).
Advantages
Disadvantages
i. The fixed partition sizes can prevent a process from being run due to the unavailability of a partition of adequate size.
ii. The total unused space within the partitions can be large enough to accommodate another process, yet no single partition's unused space is large enough to hold it (internal fragmentation).
iii. The maximum number of active processes is fixed.
Variable Partition Allocation or Dynamic Allocation
Under this scheme partitions are created as needed to fit the jobs being loaded, and a running job gives up the processor when:
i. the program/job is completed;
ii. an error is detected;
iii. I/O is required; or
iv. the program/job's current time-slice expires.
In each case, the processor is then assigned to the job with the next highest priority.
If, for example, process P2 completes and a process requiring 3500 KB of contiguous memory arrives, that program cannot run.
The holes in the partitions that are not allocated to any process, usually because they are not large enough to contain the process that wants to run, are referred to as external fragmentation.
When such fragments of free space exist, the remaining programs in memory can be compacted by relocating them upwards so that all free fragments come together. A relocation register, whose contents are automatically added to every address used in referencing memory, is often used for this.
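A minimal sketch of compaction (assumed, not from the notes): resident programs are slid together so that all free fragments merge into a single hole. The function name and the tuple layout are illustrative.

def compact(allocations):
    """allocations: list of (name, start_kb, size_kb); returns the relocated list."""
    next_free = 0
    relocated = []
    for name, _start, size in sorted(allocations, key=lambda a: a[1]):
        relocated.append((name, next_free, size))  # new base address for this program
        next_free += size                          # programs now sit back to back
    return relocated

print(compact([("P1", 2000, 1000), ("P3", 6500, 3000)]))
# -> [('P1', 0, 1000), ('P3', 1000, 3000)] with one large free hole left over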
There are two types of the paging technique: direct paging and demand paging.
iii. Elimination of the necessity of allocating contiguous areas of memory to a process.
iv. Provision of a high degree of concurrency/multiprogramming, which also results in improved utilization of the system's resources.
Thrashing
This is a problem in which pages are read into memory, removed, and then read back so frequently that system throughput is undermined. It particularly occurs when too many programs are being time-shared using demand paging with insufficient main memory.
Fragmentation generally causes performance problems wherever it exists. Main memory fragmentation wastes memory space, reduces the number of programs that can be run concurrently, and reduces optimal utilization of most of the system resources. Compaction and paging techniques have been used to address fragmentation problems in main memory.
REPLACEMENT POLICIES/ALGORITHMS
When a block of information (such as a page in a paged system) must
be fetched into a full main memory, one of those already present must
be replaced. The rule that determines the victim is called a replacement
algorithm.
An ideal replacement algorithm should try to minimize the number of
page transfers. Such an ideal algorithm would be one that replaces the
page which will remain unreferenced for the longest period.
Unfortunately, knowledge of the future pattern of reference is hard to
come by; at best, this can only be inferred from the past behaviour.
The four most common replacement algorithms are:
(i) Optimal (OPT) Algorithm
(ii) FIFO Algorithm (first-in, first-out)
(iii) LRU Algorithm (least recently used)
(iv) Clock Algorithm (also called NRU)
a) Optimal (OPT) Algorithm
The OPT algorithm removes the page that will be referenced in the most distant future. Since this requires future knowledge, the OPT algorithm is not realizable. Its significance is theoretical, as it serves as a yardstick for comparison with other algorithms. This algorithm is due to Belady.
b) First-in- First- out (FIFO) Algorithm
The FIFO algorithm (also called oldest resident) replaces the resident
page that has been in the main memory for the longest time. Whenever
a page is to be removed, the oldest page is identified and removed from
main memory.
In order to implement the FIFO page replacement algorithm, the memory manager must keep track of the relative order in which pages were loaded into main memory. One way of accomplishing this is to maintain a FIFO queue of pages.
For purposes of illustration, assume in a paged system that there are only 3 page frames allocated to a process with 4 distinct pages. Assume further the following sequence of page references during execution: 2, 1, 2, 3, 2, 4, 2, 1.
References to pages 2 and 1 will require these pages to be fetched and placed in the first and second page frames respectively. The 3rd page reference is to page 2, which is already in memory; no fetch is required. The 4th reference is to page 3, which will have to be fetched and placed in the 3rd page frame.
The 5th page reference (page 2) does not require any fetching since the page is already in memory. The 6th page reference (page 4), however, requires a fetch. The oldest resident page is page 2; it is removed from main memory to allow page 4 to be brought into the vacated page frame.
The 7th page reference (page 2) then requires page 2 to be brought into memory again. This time the page frame occupied by page 1 is vacated, since page 1 is now the oldest page in memory.
The FIFO algorithm has the advantage of being easy to implement. It suffers, however, from poor page choices: a page may be brought back into main memory almost immediately after it is replaced, because the algorithm does not take the pattern of usage of a given page into account.
For example, in the illustration above, the 6th page reference caused page 2 to be replaced, while the 7th page reference brought this very page 2 back into main memory almost immediately after it was replaced.
The FIFO algorithm tends to throw away frequently used pages because
they naturally tend to stay longer in memory.
Another problem with the FIFO replacement algorithm is Belady's anomaly: the number of page faults, and hence page transfers, can actually increase as the number of page frames available to a process increases.
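The following is a minimal sketch (not part of the original notes) of FIFO replacement applied to the reference string used above, assuming 3 page frames; the helper name fifo_faults is illustrative.

from collections import deque

def fifo_faults(references, n_frames=3):
    frames, order, faults = set(), deque(), 0
    for page in references:
        if page in frames:
            continue                       # page already resident: no fault
        faults += 1
        if len(frames) == n_frames:        # memory full: evict the oldest resident page
            frames.remove(order.popleft())
        frames.add(page)
        order.append(page)
    return faults

print(fifo_faults([2, 1, 2, 3, 2, 4, 2, 1]))   # -> 6 page faults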
c) Least Recently Used (LRU) Algorithm.
The LRU algorithm replaces the least recently used resident page. The overhead is that of recording the order in which resident pages are accessed.
To illustrate the LRU algorithm, assume 3 page frames and the former sequence of page references: 2, 1, 2, 3, 2, 4, 2, 1.
The first two page references cause pages 2 and 1 to be brought into main memory as a result of page faults. The 3rd page reference is made to page 2, which is already in memory. The 4th page reference causes page 3 to be placed in the 3rd page frame.
The 5th page reference (page 2) does not require any fetching. At the 6th page reference, when page 4 is to be brought in, the most recently used page is page 2, while the least recently used page is page 1. Thus page 1 is replaced, giving room for page 4. The image of the process in main memory is now pages 2, 3 and 4.
At the 8th page reference, page 1 will need to be brought back in again. At this time, the least recently used page frame contains page 3; page 3 will be removed from main memory to allow page 1 back into memory.
The LRU algorithm can be implemented by means of a stack-like structure. When a resident page is referenced, it is removed from its current stack position and placed at the top of the stack. When a page removal is required, the page at the bottom of the stack (the least recently used) is removed from memory.
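A minimal sketch of this stack-like LRU scheme (assumed, not from the notes), using Python's OrderedDict so that a referenced page moves to the "top" and the page at the "bottom" is the replacement victim:

from collections import OrderedDict

def lru_faults(references, n_frames=3):
    stack, faults = OrderedDict(), 0        # last entry = most recently used
    for page in references:
        if page in stack:
            stack.move_to_end(page)         # hit: move the page to the top of the stack
            continue
        faults += 1
        if len(stack) == n_frames:
            stack.popitem(last=False)       # evict the bottom page (least recently used)
        stack[page] = True
    return faults

print(lru_faults([2, 1, 2, 3, 2, 4, 2, 1]))   # -> 5 page faults (one fewer than FIFO)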
One main advantage of the LRU replacement algorithm is that it does not suffer from Belady's anomaly. Its problem lies in the complexity of implementing the algorithm in practice.
In general the LRU algorithm performs better than FIFO. The reason is that LRU takes the pattern of program behaviour into account by assuming that pages used in the most distant past are least likely to be referenced in the near future. This algorithm is used in multiprogramming systems, and in UNIX and Linux when swapping out a whole process.
d) The Clock Algorithm
As a result of the poor performance and potentially anomalous behaviour of FIFO, and the implementation complexity of LRU, the clock algorithm was developed as a compromise between, or hybrid of, the two. The clock algorithm combines the relatively low overhead of FIFO with the tracking of resident page usage that accounts for the better performance of LRU. This algorithm is sometimes called not recently used (NRU). Current versions of UNIX, which use paged virtual memory, employ this replacement algorithm.
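A minimal sketch of the clock algorithm (assumed, not from the notes): each frame carries a use bit that is set on every reference, and the "hand" sweeps past recently used frames, clearing their bits, until it finds a frame whose bit is already 0.

def clock_faults(references, n_frames=3):
    frames = [None] * n_frames
    use_bit = [0] * n_frames
    hand, faults = 0, 0
    for page in references:
        if page in frames:
            use_bit[frames.index(page)] = 1   # hit: mark the frame as recently used
            continue
        faults += 1
        while use_bit[hand] == 1:             # recently used frames get a second chance
            use_bit[hand] = 0
            hand = (hand + 1) % n_frames
        frames[hand] = page                   # victim found: load the new page here
        use_bit[hand] = 1
        hand = (hand + 1) % n_frames
    return faults

print(clock_faults([2, 1, 2, 3, 2, 4, 2, 1]))   # -> 6 page faults on this short string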
Simulation studies have shown differences in the performance of these three algorithms (FIFO, LRU and clock), especially in terms of the number of page transfers required during the running of a number of programs; the FIFO algorithm performs worse than the other two.
Global and Local Replacement Policies
A local replacement algorithm replaces one of the resident pages belonging to the faulting process itself. A global replacement algorithm may replace a resident page taken from any of the page frames available, regardless of the owner of the page frame.
FETCH POLICIES.
Fetch policies determine when to load a block of information or a process from auxiliary memory into main memory. Two approaches are usually used: demand and anticipatory. Demand policies fetch required information/processes only when they are needed, while anticipatory policies fetch them in advance.
In the first-fit algorithm, the search for a hole of the required size is terminated upon finding the first one that is large enough; however, this does not minimize wasted memory. The best-fit algorithm is slower, but it provides better memory utilization, since it has to search through all the holes to obtain the smallest one that is large enough to accommodate the process.
c) Next-fit
This is a modification of the first-fit algorithm. In the first-fit algorithm, a search for appropriate free space always starts at the beginning of the free-hole list, whereas in next-fit the search for an appropriate free hole continues from where the last one left off.
The motive behind this algorithm is to reduce the search time by avoiding examination of the smaller holes that, in the long run, tend to be created at the beginning of the free list as a result of previous placements. In general, however, next-fit has not been found to be superior to first-fit in reducing the amount of wasted memory.
d) Worst-fit
This algorithm allocates the largest free hole, provided the hole exceeds the size of the process. The idea behind this algorithm is to reduce the rate of production of very tiny holes, which become unusable in the long run. Production of such unusable tiny holes is common when the best-fit algorithm is used for memory allocation. Some simulation studies have shown that the worst-fit algorithm is not very effective in reducing wasted memory when processing a series of requests.
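A minimal sketch (assumed, not from the notes) comparing these placement policies on a list of free-hole sizes; next-fit would behave like first-fit but would remember where the previous search stopped. The function names are illustrative.

def first_fit(holes, size):
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None      # tightest adequate hole

def worst_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None      # largest adequate hole

holes = [300, 600, 350, 200, 750]              # free-hole sizes in KB
print(first_fit(holes, 340), best_fit(holes, 340), worst_fit(holes, 340))
# -> 1 2 4: the first adequate hole, the tightest fit, and the largest hole respectively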
VIRTUAL MEMORY
When RAM runs low, virtual memory can move data from RAM to a space on disk called a paging file. This frees up RAM so that the computer can complete its task.
Occasionally a user might be shown a message that says virtual memory is running low; this means that either more RAM needs to be added or the size of the paging file needs to be increased.
FILE SYSTEM AND MANAGEMENT
A file system defines how files are named, organised, stored, and retrieved from a storage device.
Attributes of a File
• Name – the only information kept in human-readable form
• Identifier – a unique tag (number) that identifies the file within the file system
• Type – needed for systems that support different file types
• Location – pointer to the file's location on the device
• Size – current file size
• Protection – controls who can do reading, writing, executing
• Time, date, and user identification – data for protection, security, and usage monitoring
Information about files is kept in the directory structure, which is maintained on the disk.
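As a minimal sketch (assumed, not from the notes), the attributes listed above could be represented as a single directory-entry record; the class and field names are illustrative only.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class FileEntry:
    name: str           # human-readable file name
    identifier: int     # unique tag within the file system
    file_type: str      # e.g. "regular" or "directory"
    location: int       # pointer (block number) to the file's data on the device
    size: int           # current file size in bytes
    protection: str     # who may read, write or execute, e.g. "rw-r--r--"
    modified: datetime  # time/date information for monitoring and security
    owner: str          # user identification

entry = FileEntry("notes.txt", 4821, "regular", 10532, 2048,
                  "rw-r--r--", datetime(2024, 1, 15, 9, 30), "student")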
File management is one of the basic and important features of an operating system. The operating system is used to manage the files of the computer system; all files, with their different extensions, are managed by the operating system.
A file is a collection of specific information stored in the memory of a computer system. File management is defined as the process of manipulating files in a computer system; it includes the processes of creating, modifying and deleting files.
The following are some of the tasks performed by the file management component of the operating system of any computer system:
• It helps to create new files in the computer system and place them at specific locations.
• It helps in easily and quickly locating these files in the computer system.
• It makes the process of sharing files among different users easy and user friendly.
• It helps to store files in separate folders known as directories. These directories help users to search for files quickly or to manage files according to their types or uses.
• It helps the user to modify the data of files or to modify the names of files in the directories.
In the general hierarchy of storage in an operating system, the root directory is present at the highest level of the hierarchical structure. It includes all the subdirectories in which the files are stored. A subdirectory is a directory present inside another directory in the file storage system. A directory-based storage system ensures better organization of files in the memory of the computer system.
Origin of the term
Before the advent of computers, the term file system was used to
describe a method of storing and retrieving paper documents. By 1961
the term was being applied to computerized filing alongside the original
meaning. By 1964 it was in general use.
3. macOS
macOS uses the Apple File System (APFS), which recently replaced the file system inherited from classic Mac OS, HFS Plus (HFS+). Apple also uses the term "Mac OS Extended" for HFS+. HFS Plus is a metadata-rich and case-preserving, but (usually) case-insensitive, file system. Owing to the Unix roots of macOS, Unix permissions were added to HFS Plus. Later versions of HFS Plus added journaling to prevent corruption of the file system structure and introduced a number of optimizations to the allocation algorithms in an attempt to defragment files automatically without requiring an external defragmenter.
4. Microsoft Windows
Windows makes use of the FAT, NTFS, exFAT, Live File System and ReFS file systems (the last of these is only supported and usable in Windows Server and in Windows 8, 8.1 and 10).
Aspects of File Systems
1. Space management
File system fragmentation occurs when unused space or single files are
not contiguous. As a file system is used, files are created, modified and
deleted. When a file is created the file system allocates space for the
data. Some file systems permit or require specifying an initial space
allocation and subsequent incremental allocations as the file grows. As
files are deleted the space they were allocated eventually is considered
available for use by other files. This creates alternating used and unused
areas of various sizes. This is free space fragmentation. When a file is
created and there is no area of contiguous space available for its initial allocation, the space must be assigned in fragments. When a file is modified such that it becomes larger, it may exceed the space initially allocated to it; another allocation must then be assigned elsewhere and the file becomes fragmented.
2. Filenames
A filename (or file name) is used to identify a storage location in the file system. Most file systems have restrictions on the length of filenames. In some file systems, filenames are not case-sensitive (i.e., the names MYFILE and myfile refer to the same file in a directory); in others, filenames are case-sensitive (i.e., the names MYFILE, MyFile, and myfile refer to three separate files in the same directory).
3. Directories
File systems typically have directories (also called folders) which allow
the user to group files into separate collections. This may be
implemented by associating the file name with an index in a table of
contents or an inode in a Unix-like file system. Directory structures may
be flat (i.e. linear), or allow hierarchies where directories may contain
subdirectories. The first file system to support arbitrary hierarchies of
directories was used in the Multics operating system.
File system as an abstract user interface
In some cases, a file system may not make use of a storage device but
can be used to organize and represent access to any data, whether it is
stored or dynamically generated (e.g. procfs).
Limitations
1. Converting the type of a file system
It may be advantageous or necessary to have files in a different file system than the one in which they currently exist. Reasons include the need for an increase in the space requirements beyond the limits of the current file
system. The depth of path may need to be increased beyond the
restrictions of the file system. There may be performance or reliability
considerations. Providing access to another operating system which
does not support the existing file system is another reason.
2. Long file paths and long file names
In hierarchical file systems, files are accessed by means of a path that is
a branching list of directories containing the file. Different file systems
have different limits on the depth of the path. File systems also have a
limit on the length of an individual filename.
Copying files with long names or located in paths of significant depth
from one file system to another may cause undesirable results. This
depends on how the utility doing the copying handles the discrepancy.
In modern systems, the very assumptions underlying the traditional resource-management strategies fail to hold. First, applications increasingly differ in their ability to exploit resources, especially processor cores. Second, application responsiveness is approximately two-valued for "Quality-of-Service" (QoS) applications, depending on whether deadlines are met. Third, power and battery energy have become constrained. These observations motivate new schemes for addressing the operating system resource management problem.
Virtual Memory
When there is a shortage of physical memory (i.e. memory is very full), a method called virtual memory is used. Virtual memory transfers data that is stored in primary memory (e.g. RAM) to disk storage, i.e. your hard drive.
This process makes programs run more slowly, since the hard drive is slower than primary memory, so if you are looking for performance it is better to have a large RAM capacity so that virtual memory is only used when opening new programs. On the other hand, it is more economical, as it makes it seem as though you have unlimited RAM, though in reality you do not.
When you are running many programs at the same time and your RAM cannot cope, virtual memory plays a big role, constantly swapping information between RAM and the hard drive; when this swapping becomes excessive it is called thrashing and it makes your computer run very slowly.
The information that is stored on the hard drive is kept in "page files".
Multitasking
Multitasking allows a computer to run several programs in a way that seems simultaneous. Modern computers with only one processor are capable of seeming to perform multiple tasks simultaneously by time-slicing, with improved performance. However, multiple processors allow many different programs genuinely to run at the same time.
Paging
Paging is a method of writing data to, and reading it from, secondary storage for use in primary storage/main memory. Paging plays a role in memory management for a computer's operating system.
In a memory management system that takes advantage of paging, the OS reads data from secondary storage in blocks called pages. These individual pages can be more easily moved around because they do not have to be contiguous.
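A minimal sketch (assumed, not from the notes) of how a logical address is interpreted once memory is paged: the address splits into a page number and an offset within that page. The 4 KB page size is an assumption for the example.

PAGE_SIZE = 4096                       # assumed page size of 4 KB

def split_address(logical_addr):
    page_number = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    return page_number, offset

print(split_address(8500))             # -> (2, 308): page 2, byte 308 within that page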
Interrupt
An interrupt is a signal from a device attached to a computer or from a
program within the computer that causes the operating system to stop
and figure out what to do next. Almost all personal (or larger) computers
today are interrupt-driven - that is, they start down the list of computer
instructions in one program (perhaps an application such as a word
processor) and keep running the instructions until either (A) they can't go
any further or (B) an interrupt signal is sensed. After the interrupt signal
is sensed, the computer either resumes running the program it was
running or begins running another program.
Polling
Polling is the process by which the CPU actively checks the status of an external device; it is a synchronous activity. Polling is most often used in terms of input/output (I/O), and is also referred to as polled I/O or software-driven I/O.
Polling has the disadvantage that if there are too many devices to check,
the time required to poll them can exceed the time available to service
the I/O device.
Polling is often intimately involved with very low-level hardware. For
example, polling a parallel printer port to check whether it is ready for
another character involves examining as little as one bit of a byte. That
bit represents, at the time of reading, whether a single wire in the printer
cable is at low or high voltage. The I/O instruction that reads this byte
directly transfers the voltage state of eight real world wires to the eight
circuits (flip flops) that make up one byte of a CPU register.
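A minimal sketch (assumed, not from the notes) of this kind of software-driven polling: the CPU repeatedly reads a status byte and tests a single ready bit before sending each character. The status-reading function is a stand-in for the real I/O instruction, and the bit position is illustrative.

import time

READY_BIT = 0x01                          # assumed position of the "ready" bit

def read_status_register():
    """Stand-in for the I/O instruction that reads the printer's status byte."""
    return 0x01                           # pretend the device always reports ready

def poll_and_send(data):
    for ch in data:
        while not (read_status_register() & READY_BIT):
            time.sleep(0.001)             # busy-wait until the device signals ready
        print(f"sending {ch!r}")          # stand-in for writing to the data port

poll_and_send("OK")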