Os Module4
The figure below shows the various stages of the binding process and the units involved
in each stage:
Rather than loading an entire program into memory at once, dynamic loading
loads each routine only when it is called.
The advantage is that unused routines need not be loaded, reducing total
memory usage and giving faster program startup times.
The disadvantage is the added complexity and overhead of checking to see if a
routine is loaded every time it is called and then loading it up if it is not already
loaded.
With static linking, library modules are fully included in executable modules,
wasting both disk space and main memory, because every program that
uses a certain routine from the library must have its own copy of
that routine linked into its executable code.
With dynamic linking, however, only a stub is linked into the executable module;
the stub contains a reference used to locate the actual library module, which is linked in at run time.
This method saves disk space, because the library routines do not need to be
fully included in the executable modules, only the stubs.
An added benefit of dynamically linked libraries (DLLs, also known as shared
libraries or shared objects on UNIX systems) involves easy upgrades and
updates.
8.2 Swapping
--------------------------------------------------------------------------------------------
• One method of allocating contiguous memory is to divide all available memory into
equal-sized partitions and to assign each process its own partition (known as
MFT, multiprogramming with a fixed number of tasks). This restricts both the number
of simultaneous processes and the maximum size of each process, and is no longer used.
There are many different strategies for finding the "best" allocation of memory to
processes, including the three most commonly discussed:
1. First fit - Search the list of holes until one is found that is big enough to satisfy
the request, and assign a portion of that hole to that process. Whatever fraction of
the hole not needed by the request is left on the free list as a smaller hole.
Subsequent requests may start looking either from the beginning of the list or from
the point at which this search ended.
2. Best fit - Allocate the smallest hole that is big enough to satisfy the request. This
saves large holes for other process requests that may need them later, but the
resulting unused portions of holes may be too small to be of any use, and will
therefore be wasted. Keeping the free list sorted can speed up the process of finding
the right hole.
3. Worst fit - Allocate the largest hole available, thereby increasing the likelihood
that the remaining portion will be usable for satisfying future requests. Simulations
show that either first or best fit are better than worst fit in terms of both time and
storage utilization. First and best fits are about equal in terms of storage utilization,
but first fit is faster.
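The three strategies above can be sketched as a small simulation. The `(start, size)` hole representation below is an assumed illustration, not something from these notes:

```python
# Sketch of first-fit, best-fit, and worst-fit hole selection.
# A "hole" is modeled as a (start, size) tuple; this representation
# is invented for illustration.

def first_fit(holes, request):
    # Take the first hole large enough to satisfy the request.
    for i, (start, size) in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    # Take the smallest hole that is still large enough.
    candidates = [(size, i) for i, (_, size) in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    # Take the largest hole available.
    candidates = [(size, i) for i, (_, size) in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None

def allocate(holes, request, strategy):
    # Carve the request out of the chosen hole; whatever fraction of
    # the hole is not needed stays on the free list as a smaller hole.
    i = strategy(holes, request)
    if i is None:
        return None, holes
    start, size = holes[i]
    remaining = holes[:i] + holes[i + 1:]
    if size > request:
        remaining.insert(i, (start + request, size - request))
    return start, remaining

holes = [(0, 100), (200, 500), (800, 200)]
print(allocate(holes, 150, first_fit)[0])   # 200 (first big-enough hole)
print(allocate(holes, 150, best_fit)[0])    # 800 (smallest adequate hole)
print(allocate(holes, 150, worst_fit)[0])   # 200 (largest hole)
```

Note how best fit picks the 200-unit hole, leaving the large 500-unit hole intact for later requests, exactly as described above.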
--------------------------------------------------------------------------------------------
8.3.3 Fragmentation
----------------------------------------------------------------------------------------
8.4 Paging
Paging hardware
Some TLBs store address-space identifiers, ASIDs, to keep track of which process
"owns" a particular entry in the TLB.
The percentage of time that the desired information is found in the TLB is termed the
hit ratio.
▪ For example, suppose that it takes 100 nanoseconds to access main memory and
only 20 nanoseconds to search the TLB. A TLB hit then takes 120 ns total
(20 ns to find the frame number plus 100 ns to get the data), and a TLB
miss takes 220 ns (20 ns to search the TLB, 100 ns to get the frame number
from the page table, and another 100 ns to get the data). So with an 80% TLB
hit ratio, the average memory-access time would be 0.80 * 120 + 0.20 * 220 =
140 nanoseconds, a 40% slowdown relative to a plain 100 ns memory access.
A 98% hit ratio would yield 0.98 * 120 + 0.02 * 220 = 122 nanoseconds
average access time, a 22% slowdown.
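The arithmetic above can be checked with a few lines, using the same assumed timings (20 ns TLB search, 100 ns memory access):

```python
# Effective access time with a TLB (times in nanoseconds).
TLB_SEARCH = 20
MEMORY_ACCESS = 100

hit_time = TLB_SEARCH + MEMORY_ACCESS        # 120 ns: search TLB, fetch data
miss_time = TLB_SEARCH + 2 * MEMORY_ACCESS   # 220 ns: also walk the page table

def eat(hit_ratio):
    # Weighted average over hits and misses.
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(round(eat(0.80)))   # 140 -> 40% slowdown over a plain 100 ns access
print(round(eat(0.98)))   # 122 -> 22% slowdown
```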
8.4.3 Protection
• The page table can also help protect processes from accessing memory that does not belong to them.
• A valid/invalid bit can be added to each page-table entry. The valid bit ‘v’ shows
that the associated page is in the process's logical-address space and is currently in
physical memory; the invalid bit ‘i’ shows that the page is not valid, i.e. not in the
process's logical-address space or not currently in physical memory.
• Note that the valid/invalid bits described above cannot block all illegal memory accesses,
due to internal fragmentation: many processes do not use all the page-table entries
available to them.
• Addresses in pages 0, 1, 2, 3, 4 and 5 are mapped through the
page table because their entries are valid.
• Pages 6 and 7 are invalid and cannot be mapped. Any attempt to access those pages
sends a trap to the OS.
• Paging systems can make it very easy to share blocks of memory, by simply
having multiple page tables map entries to the same physical frames. This may be done
with either code or data.
• If code is reentrant (read-only), it does not write to or change itself in any way.
More importantly, it can be shared by multiple processes, so long as each process
has its own copy of the data and registers, including the instruction register.
• In the example given below, three different users are running the editor
simultaneously, but its code is loaded into memory (into page frames) only one time.
• Some systems also implement shared memory in this fashion.
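A minimal sketch of how sharing falls out of paging: two per-process page tables simply map their code pages to the same physical frames. All page and frame numbers here are invented for illustration:

```python
# Two processes share reentrant editor code by mapping their code
# pages (0 and 1) to the same physical frames; each keeps a private
# data frame. All numbers are made up for illustration.
page_table_p1 = {0: 3, 1: 4, 2: 6}   # page -> frame; frames 3,4 hold code
page_table_p2 = {0: 3, 1: 4, 2: 7}   # same code frames, private data frame

shared = set(page_table_p1.values()) & set(page_table_p2.values())
print(sorted(shared))   # [3, 4] -- the code frames exist in memory only once
```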
1) Hierarchical Paging
2) Hashed Page-tables
3) Inverted Page-tables
Hashed page-table
8.6 Segmentation
8.6.1 Basic Method
• This is a memory-management scheme that supports the user view of memory.
• A logical-address space is a collection of segments.
• Each segment has a name and a length.
• The addresses specify both
→ segment-name and
→ offset within the segment.
• Normally, the user-program is compiled, and the compiler automatically
constructs segments reflecting the input program.
For ex:
→ The code → Global variables
→ The heap, from which memory is allocated → The stacks used by each
thread
→ The standard C library
9.1 Virtual memory is a technique that allows the execution of processes that
are not completely in memory.
One major advantage of this scheme is that programs can be larger than physical memory.
Instead of swapping in a whole process, the pager brings only the necessary
pages into memory.
Advantages:
Avoids reading into memory-pages that will not be used,
Decreases the swap-time and
Decreases the amount of physical-memory needed.
The valid-invalid bit scheme can be used to distinguish between
→ pages that are in memory and
→ pages that are on the disk.
If the bit is set to valid, the associated page is both legal and in memory.
If the bit is set to invalid, the page either
→ is not valid (i.e. not in the logical-address space of the process) or
→ is valid but is currently on the disk
9.2.2 Performance:
Demand paging can significantly affect the performance of a computer-system.
Let p be the probability of a page-fault (0≤p ≤1).
if p = 0, no page-faults.
if p = 1, every reference is a fault.
effective access time (EAT) = (1 - p) * memory-access time + p * page-fault time
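As a worked instance of the EAT formula (the 200 ns access time and 8 ms fault-service time are assumed example values, not given in these notes):

```python
# Effective access time under demand paging.
MEMORY_ACCESS = 200           # ns, assumed example value
PAGE_FAULT_TIME = 8_000_000   # ns (8 ms), assumed example value

def eat(p):
    # EAT = (1 - p) * memory-access time + p * page-fault time
    return (1 - p) * MEMORY_ACCESS + p * PAGE_FAULT_TIME

print(round(eat(0.0)))        # 200  -- no page-faults at all
print(round(eat(0.001)))      # 8200 -- one fault per 1000 accesses, ~40x slower
```

Even a fault rate of one in a thousand dominates the average, which is why keeping p very small matters so much for demand-paging performance.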
A page-fault causes the following events to occur:
1. Trap to the OS.
2. Save the user registers and process state.
3. Determine that the interrupt was a page-fault.
4. Check that the page reference was legal and determine the location of the
page on the disk.
5. Issue a read from the disk to a free frame:
   a) Wait in a queue for this device until the read request is serviced.
   b) Wait for the device seek time.
   c) Begin the transfer of the page to a free frame.
6. While waiting, allocate the CPU to some other user.
7. Receive an interrupt from the disk I/O subsystem (I/O completed).
8. Save the registers and process state for the other user (if step 6 was
executed).
9. Determine that the interrupt was from the disk.
10. Correct the page table and other tables to show that the desired page is
now in memory.
11. Wait for the CPU to be allocated to this process again.
12. Restore the user registers, process state, and new page table, and then
resume the interrupted instruction.
Canara Engineering College
9.3 Copy-on-Write
This technique allows the parent and child processes initially to share the same
pages.
If either process writes to a shared-page, a copy of the shared-page is created.
For example:
Assume that the child process attempts to modify a page containing portions of
the stack, with the pages set to be copy-on-write.
OS will then create a copy of this page, mapping it to the address space of the
child process.
Child process will then modify its copied page & not the page belonging to the
parent process.
-----------------------------------------------------------------------------------------
In order to make the most use of virtual memory, we load several processes into
memory at the same time. Since we load only the pages that are actually needed
by each process at any given time, there are enough frames to load many more
processes into memory.
If some process suddenly needs more pages and no free frames are
available, there are several possible solutions to consider:
Adjust the memory used by I/O buffering, etc., to free up some frames for user
processes.
Put the process requesting more pages into a wait queue until some free frames
become available.
Swap some process out of memory completely, freeing up its page frames.
Find some page in memory that isn't being used right now, and swap that page
only out to disk, freeing up a frame that can be allocated to the process requesting
it. This is known as page replacement, and is the most common solution. There
are many different algorithms for page replacement.
9.4.1 Basic Page Replacement
Problem: If no frames are free, 2 page transfers (1 out & 1 in) are required. This
situation
→ doubles the page-fault service-time and
→ increases the EAT accordingly.
Solution: Use a modify-bit (or dirty bit).
Each page or frame has a modify-bit associated with it in the hardware.
The modify-bit for a page is set by the hardware whenever any word is written
into the page (indicating that the page has been modified).
Working:
When we select a page for replacement, we examine its modify-bit.
If the modify-bit = 1, the page has been modified, so we must write the page
to the disk.
If the modify-bit = 0, the page has not been modified, so we need not write the
page to the disk; it is already there.
Advantage:
Can reduce the time required to service a page-fault.
We must solve 2 major problems to implement demand paging:
Develop a Frame-allocation algorithm:
If we have multiple processes in memory, we must decide how many
frames to allocate to each process.
Develop a Page-replacement algorithm:
We must select the frames that are to be replaced.
--------------------------------------------------------------------------------------
Page Replacement algorithm
FIFO page replacement
Optimal page replacement
LRU page replacement (Least Recently Used)
LFU page replacement (Least Frequently Used)
The first three references (7, 0, 1) cause page-faults and are brought into the three
empty frames.
The next reference(2) replaces page 7, because page 7 was brought in first.
Since 0 is the next reference and 0 is already in memory, we have no fault for this
reference.
The first reference to 3 results in replacement of page 0, since it is now first in line.
This process continues until the end of the reference string.
There are fifteen faults altogether.
Advantage:
Easy to understand & program.
Disadvantages:
Performance is not always good.
A bad replacement choice increases the page-fault rate and slows process execution.
For some algorithms, the page-fault rate may increase as the number of
allocated frames increases. This is known as Belady's anomaly.
The first three references cause faults that fill the three empty frames.
The reference to page 2 replaces page 7, because page 7 will not be used until
reference 18, whereas page 0 will be used at reference 5 and page 1 at reference 14.
With only nine page-faults, optimal replacement is much better than a FIFO
algorithm, which results in fifteen faults.
Advantage:
Guarantees the lowest possible page-fault rate for a fixed number of frames.
Disadvantage:
Difficult to implement, because it requires future knowledge of the reference
string.
*******************************
The first five faults are the same as those for optimal replacement.
When the reference to page 4 occurs, LRU sees that of the three frames, page 2
was used least recently. Thus, the LRU replaces page 2.
The LRU algorithm produces twelve faults.
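The fault counts quoted above for FIFO (fifteen) and LRU (twelve) can be reproduced with a short simulation. The 20-entry reference string below is the standard textbook one; it is assumed here, since these notes do not reproduce it:

```python
# Simulate FIFO and LRU page replacement with 3 frames over the
# classic reference string from Silberschatz et al. (assumed, not
# given in these notes).
from collections import OrderedDict, deque

REFS = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
FRAMES = 3

def fifo_faults(refs, frames):
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())   # evict the oldest arrival
            memory.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    # OrderedDict plays the role of the LRU "stack": most recently
    # used at the end, least recently used at the front.
    stack, faults = OrderedDict(), 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)               # referenced -> top of stack
        else:
            faults += 1
            if len(stack) == frames:
                stack.popitem(last=False)         # evict least recently used
            stack[page] = True
    return faults

print(fifo_faults(REFS, FRAMES))   # 15
print(lru_faults(REFS, FRAMES))    # 12
```

The `OrderedDict` version mirrors the stack implementation described below: a reference moves a page to the top, and the victim is always taken from the bottom.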
Two methods of implementing LRU:
1. Counters
Each page-table entry is associated with a time-of-use field.
A counter(or logical clock) is added to the CPU.
The clock is incremented for every memory-reference.
Whenever a reference to a page is made, the contents of the clock register
are copied to the time-of-use field in the page-table entry for that page.
We replace the page with the smallest time value.
2. Stack
Keep a stack of page-numbers (Figure 4.11).
Whenever a page is referenced, the page is removed from the stack and put
on the top.
The most recently used page is always at the top of the stack. The least
recently used page is always at the bottom.
The stack is best implemented as a doubly linked list.
Advantage:
Does not suffer from Belady's anomaly.
Disadvantage:
Few computer systems provide sufficient h/w support for true LRU page
replacement.
****************************************
Allocation Algorithms:
Equal Allocation:
One way to split m frames among n processes is to give each process an equal share of m/n frames.
(For example: with 93 frames and 5 processes, each process gets 18
frames. The 3 leftover frames can be used as a free-frame buffer pool.)
Proportional Allocation:
We can allocate available memory to each process according to its size.
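A quick sketch of proportional allocation; the frame count and process sizes below are assumed example values:

```python
# Proportional frame allocation: process i of size s_i gets roughly
# (s_i / S) * m frames out of m total, where S is the sum of all sizes.
# The numbers below are assumed example values.
m = 62                            # total free frames
sizes = {"p1": 10, "p2": 127}     # process sizes in pages

S = sum(sizes.values())
alloc = {p: s * m // S for p, s in sizes.items()}   # integer share per process
print(alloc)   # {'p1': 4, 'p2': 57} -- 1 frame left over
```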
In both schemes, the allocation may vary with the multiprogramming level: if the
multiprogramming level is increased, each process will lose some frames to the
newly started process.
Figure: Thrashing
Methods to avoid thrashing:
Use Local Replacement
If one process starts thrashing, it cannot
→ steal frames from another process and
→ cause the latter to thrash as well.
We must provide a process with as many frames as it needs. This approach defines
the locality model of process execution.
The locality model states that, as a process executes, it moves from locality
to locality.
A locality is a set of pages that are actively used together.
A program may consist of several different localities, which may overlap.
--------------------------------------------------------------------------------------
What is Belady's anomaly? Explain with an example.
Solution:
Belady's Anomaly:
"On increasing the number of page frames, the number of page faults does
not necessarily decrease; it may even increase."
• Example: Consider the following reference string with 3 frames and then 4 frames:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
(i) FIFO with 3 frames (most recently loaded page shown on top):

Reference: 1  2  3  4  1  2  5  1  2  3  4  5
Frame 1:   1  2  3  4  1  2  5  5  5  3  4  4
Frame 2:      1  2  3  4  1  2  2  2  5  3  3
Frame 3:         1  2  3  4  1  1  1  2  5  5
Fault?     √  √  √  √  √  √  √        √  √

No. of page faults = 9
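The anomaly can be demonstrated by running FIFO on the same reference string with 3 and then 4 frames (a small sketch whose fault counting mirrors the table above):

```python
# Belady's anomaly: on this reference string, FIFO produces MORE
# page-faults with 4 frames than with 3.
from collections import deque

REFS = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]

def fifo_faults(refs, frames):
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())   # evict the oldest arrival
            memory.add(page)
            queue.append(page)
    return faults

print(fifo_faults(REFS, 3))   # 9
print(fifo_faults(REFS, 4))   # 10 -- more frames, yet more faults
```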
---------------------------------------------------------------------------------------