Week 14 - Internal OS (Part 2)


KNR2753

Computer System Architecture

Internal OS

PAGING
PAGING divides the LOGICAL MEMORY into small, equal-sized
blocks called PAGES.
It divides PHYSICAL MEMORY into blocks called FRAMES,
which are equal in size to the pages.

Typical size: 2 KB or 4 KB


No correlation exists between the Number of Pages and the
Number of Frames, since:
 No. of Pages depends on the size of the program
 No. of Frames depends on the size of physical memory

Pages and Frames are numbered sequentially from 0


PAGING
Each page of a process is stored in one of the frames of
memory. The pages can be stored at different locations, but
the preference is to find contiguous frames.
PAGING
Consider Main Memory = 16 KB, Frame = 1 KB,
so No. of Frames = 16.

P1 = 4 KB
P2 = 4 KB
P3 = 4 KB
P4 = 4 KB

Each of P1-P4 is divided into 1 KB pages.

* All pages can fit into frames.
PAGING
Consider P5 = 8 KB, waiting in the queue.

If P2 and P4 have completed, P5 can be loaded into the
frames previously used by P2 and P4.

P5 = 8 KB is thus stored in two different places.
PAGING
A LOGICAL ADDRESS consists of:
1. Page Number
2. Offset from the start of the page

An index table called a PAGE TABLE is maintained for each
program as a mapping between pages and their corresponding
frames; it contains an entry for each page in the program.
PAGE TABLE
Mapping from LOGICAL ADDRESS to PHYSICAL ADDRESS:
i. Extract the Page Number from the Logical Address
ii. Look up the Frame Number that corresponds to the Page
Number
iii. Extract the offset from the Logical Address
iv. Combine the Frame Number with the offset to produce the
final Physical Address
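The four steps above can be sketched in Python, assuming 1 KB pages (so the low 10 bits of an address are the offset) and a small hypothetical page table:

```python
# Sketch of the logical-to-physical translation above, assuming 1 KB
# (2^10-byte) pages and a small hypothetical page table.
PAGE_SIZE = 1024        # bytes per page/frame
OFFSET_BITS = 10        # log2(PAGE_SIZE)

page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (example data)

def translate(logical_address):
    page = logical_address >> OFFSET_BITS        # i.  extract page number
    frame = page_table[page]                     # ii. look up frame number
    offset = logical_address & (PAGE_SIZE - 1)   # iii. extract offset
    return (frame << OFFSET_BITS) | offset       # iv. combine frame + offset

# Address 1050 is page 1, offset 26; page 1 maps to frame 2,
# so the physical address is 2 * 1024 + 26 = 2074.
print(translate(1050))  # -> 2074
```

Note that with power-of-two page sizes the translation needs only shifts and masks, which is why hardware can do it on every memory access.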
PAGE TABLE

https://www.youtube.com/watch?v=U0UDwYPBiK8
PAGING
Virtual storage depends on the separation between Logical and
Physical memory. Each process has its own virtual memory
and its own Page Table.
o Programs are written as though each will be loaded
starting from address 0
o Logical-to-Physical mapping resolves the conflicts between
programs using the same virtual addresses
o Physical memory is shared among the different processes
PAGE FAULTS
To execute a program instruction or access data in memory,
TWO related requirements must be met:
1) The instruction or data must be in memory
2) The Page Table for the program must contain a mapping
entry to the physical location containing the referenced
instruction or data

If a Page Table entry does not exist, the fetch-execute cycle for
the instruction cannot complete.
PAGE FAULTS
To handle a Page Fault:
The CPU causes an interrupt, called a PAGE FAULT or PAGE
FAULT TRAP. The operating system memory manager then
handles the Page Fault interrupt and selects a memory frame
in which to place the required Page. If every memory Frame
is already in use, the operating system must select a Page to
be replaced (also known as Page Swapping). When the Page
Swapping is complete, execution of the instruction resumes.

Most systems perform Page Loading from disk to memory
when a Page Fault occurs. This technique is known as Demand
Paging.
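The fault-handling flow above can be sketched as a small simulation. The frame count and reference string are illustrative, and the victim choice is deliberately left arbitrary, since choosing it well is the job of the page replacement algorithms:

```python
# Minimal demand-paging sketch: a page is brought in only when a fault
# occurs. NUM_FRAMES and the reference string are illustrative, and the
# victim choice is left arbitrary (a real system would use a page
# replacement algorithm here).
NUM_FRAMES = 3
resident = set()        # pages currently occupying physical frames
faults = 0

for page in [0, 1, 2, 0, 3]:
    if page not in resident:              # page-fault trap
        faults += 1
        if len(resident) == NUM_FRAMES:   # every frame in use: swap one out
            resident.pop()                # arbitrary victim (page swapping)
        resident.add(page)                # "load" the page from disk
print(faults)  # -> 4
```

Three faults load pages 0, 1, and 2 on demand; the second reference to page 0 is a hit; page 3 then forces a swap, for 4 faults in total.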
PAGE FAULTS
In other cases, some systems attempt to anticipate a program's
page needs before Page Faults can occur. This technique is
called PrePaging. So far, PrePaging algorithms have had limited
success.

Demand Paging allows a large number of programs to be stored
in a small amount of Physical memory: only small parts of each
program are loaded into memory at a time. Page Swapping
handles situations where required pages are not present in
Physical memory, and Logical Address mapping allows the
loading of a program's Pages anywhere in available memory.
Locality
When a new process enters the system, the operating system
must answer the question: how many Pages should be
assigned to the new process?

This is a trade-off: the more pages assigned to the new
process, the less likely a Page Fault will occur. However,
the more Pages assigned, the fewer processes can be
admitted to the system.
Locality
The concept of Locality is typically used to determine the
number of assigned Pages.
 During execution, programs exhibit a tendency to stay
within small areas of memory at any given time
 Although these areas change over time, the property
continues to hold throughout the execution of the program
 Locality is typically ensured since most programs are
written modularly, especially when written in a high-level
programming language
Working Sets
The number of Pages which meets the requirements of
locality is called a Working Set.

The Working Set varies from one program to another, and
in most cases it is possible for the operating system to
statically determine the working set with reasonable success.

Some systems go further and dynamically adjust the working
set for each process to keep Page Faults at their lowest
level. In these cases, the operating system continually
monitors each process, trying to meet its locality needs and
adjusting the working set of a process when its Page Fault
rate gets too high.
Page Replacement Algorithms
Page Replacement is only needed when all Frames are in use.
The goal of any Page Replacement algorithm is to replace a
Page that will be least needed in the near future (i.e. to
minimize the Page Faults that result from Page Replacement).

Some systems select Pages to be replaced from the same
process; this method is called Local Page Replacement.

Other systems select Pages to be replaced from any process;
this method is called Global Page Replacement.
Page Replacement Algorithms
In Global Page Replacement,
 The method provides more flexibility, as the pool of
candidate pages is larger
 However, the disadvantage is that it affects the working
sets of other processes and must be managed carefully

Thus, many algorithms are used for Page Replacement, with
different trade-offs:
 FIFO
 LRU
 LFU
 NUR
Page Replacement Algorithms
First In First Out (FIFO)
The oldest Page remaining in the Page Table is selected for
replacement.
Advantage: simple to implement
Disadvantages:
 Does not take into account the usage history of the
selected page
 The page being replaced may be in current use, and its
replacement will result in an immediate page fault
 In some conditions, FIFO results in more Page Faults even
when the number of assigned frames is increased (known
as Belady's anomaly)
 FIFO is not considered a good Page Replacement algorithm
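A minimal sketch of FIFO replacement, using the classic reference string for which adding a fourth frame makes FIFO fault *more* often, not less:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement (illustrative sketch)."""
    frames = deque()            # oldest resident page sits at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()        # replace the oldest page
            frames.append(page)
    return faults

# Reference string for which FIFO faults more with an extra frame:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # -> 9
print(fifo_faults(refs, 4))  # -> 10
```

The 9-versus-10 result illustrates the slide's point: FIFO ignores usage history, so enlarging the working set can make it perform worse.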
Page Replacement Algorithms
Least Recently Used (LRU)
Replace the Page that has not been used for the longest time.
This algorithm is based on the theory that the longer the time
since a Page was used, the less chance it will be needed again.
Advantage: provides reasonably good results
Disadvantages:
 The Page Table must record each Page's usage time
 Every time a Page is referenced, its usage time must be
reset
 Every Page must be checked to find the longest time since
use when selecting a Page for replacement
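The bookkeeping the disadvantages describe can be seen directly in a sketch: a usage time is recorded per page, reset on every reference, and every resident page is scanned when a victim is needed (the reference string is illustrative):

```python
def lru_faults(refs, num_frames):
    """Count page faults under LRU, tracking a usage time per page."""
    last_used = {}              # page -> time of most recent reference
    resident = set()
    faults = 0
    for t, page in enumerate(refs):
        if page not in resident:
            faults += 1
            if len(resident) == num_frames:
                # scan every resident page for the longest time since use
                victim = min(resident, key=lambda p: last_used[p])
                resident.remove(victim)
            resident.add(page)
        last_used[page] = t     # usage time is reset on every reference
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # -> 10
```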
Page Replacement Algorithms
Least Frequently Used (LFU)
Replace the Page that has been used the least number of
times.
Advantage: an intuitively appealing algorithm
Disadvantages:
 The main flaw of this algorithm is that the page that has
just been brought into memory has been used the least,
which makes it the candidate for replacement.
 In many cases, this Page will soon be referenced, causing
an immediate Page Fault.
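The flaw above shows up even in a tiny simulation. The reference string below is chosen to trigger it: the newly loaded page has the lowest count, is evicted at once, and immediately faults back in:

```python
from collections import defaultdict

def lfu_faults(refs, num_frames):
    """Count page faults under LFU (sketch; ties broken by the min scan)."""
    counts = defaultdict(int)   # page -> number of references so far
    resident = set()
    faults = 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == num_frames:
                victim = min(resident, key=lambda p: counts[p])
                resident.remove(victim)   # evict least frequently used
            resident.add(page)
        counts[page] += 1
    return faults

# Page 9 is newly loaded, so it has the lowest count, is evicted when
# page 3 arrives, then faults right back in -- the flaw described above.
print(lfu_faults([1, 1, 2, 2, 9, 3, 9], 3))  # -> 5
```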
Page Replacement Algorithms
Not Used Recently (NUR)
A simplification of the Least Recently Used algorithm.
2 bits are added to each Page Table entry:
I. The first bit, known as the Reference Bit, is set every
time the page is referenced.
II. The second bit, known as the Dirty Bit, is set only when
data in the Page is changed.

This algorithm searches for the Page to be replaced in the
following order:
Page Replacement Algorithms
1. Find a page with BOTH bits UNSET
 Page has not been used for a while
 Page has not been modified

2. Find a page with DIRTY set and REFERENCE unset
 Page has not been used for a while
 Page has been modified

3. Find a page with DIRTY unset and REFERENCE set
 Page has been used recently
 Page has not been modified

4. Find a page with DIRTY and REFERENCE both set
 Page has been used recently
 Page has been modified recently
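The four-class search above can be sketched directly, assuming each page table entry carries a hypothetical (reference bit, dirty bit) pair:

```python
# NUR victim search, assuming each page table entry carries a
# (reference_bit, dirty_bit) pair. The classes are scanned in the order
# listed above: (0,0), then (0,1), then (1,0), then (1,1).
def nur_victim(pages):
    """pages: dict of page number -> (referenced, dirty); returns a victim."""
    for wanted in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        for page, bits in pages.items():
            if bits == wanted:
                return page
    return None

# No page is in class (0,0); page 12 is unreferenced-but-dirty, so it is
# chosen before the recently referenced pages 10 and 11.
print(nur_victim({10: (1, 1), 11: (1, 0), 12: (0, 1)}))  # -> 12
```

Preferring unreferenced pages approximates LRU; preferring clean pages within a class saves a disk write on eviction.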
Page Replacement Algorithms
Not Used Recently (NUR)
Advantages:
 very good results
 commonly used algorithm
Disadvantages:
 Hardware complexity is needed to implement the extra 2 bits.
Thrashing
Thrashing is a condition which occurs when ALL frames are in
use and Page Replacement keeps selecting a Page that will
immediately be used again, resulting in continuous Page Faults.

Thrashing is most serious when Global Page Replacement is
used. Programs continually steal Pages from each other, no
program is able to execute, and the system as a whole is
brought to a crawl.

With Local Page Replacement, the number of Thrashing
programs is more limited, but Thrashing can still have a major
performance impact on the system.
Secondary Storage Scheduling
On a busy system, it is common to have a number of disk
requests pending at any given time. The simplest method is to
service I/O requests in the order they are received, but that
method lacks efficiency.

The goal of secondary storage scheduling is to process disk
access requests in a way that enhances system performance.

Different scheduling algorithms are in use, with trade-offs,
such as FCFS, SDF, SCAN, and N-Step C-Scan.
Secondary Storage Scheduling
First Come First Serve (FCFS)
Requests are handled in the order they arrive
Advantages:
 very simple to implement
 fair
Disadvantage:
 Inefficient, as it can have long seek times because the disk
head moves all over the disk
Secondary Storage Scheduling
Shortest Distance First (SDF)
Look at all the requests in the queue and select the one
nearest to the current location of the head.
Advantage:
 Minimized seek time
Disadvantages:
 Starvation is possible
 Unfair
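A sketch of the SDF selection rule, with illustrative track numbers chosen to show the starvation risk:

```python
def sdf_order(head, requests):
    """Shortest Distance First: always serve the request nearest the head."""
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda track: abs(track - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest          # the head moves to the serviced track
    return order

# With the head at track 50, nearby requests are served first and the far
# request at track 180 waits until last -- the starvation risk noted above.
print(sdf_order(50, [55, 180, 40, 60]))  # -> [55, 60, 40, 180]
```

If nearby requests keep arriving, the request at track 180 could be postponed indefinitely, which is why SDF is considered unfair.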
Secondary Storage Scheduling
Scan Scheduling
The head continues to scan back and forth across the disk,
processing requests as it goes.
Advantage:
 Fairer than SDF
Disadvantages:
 Blocks near the middle tracks are serviced twice as often
as blocks near the edges
 Still unfair
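One sweep of SCAN can be sketched as follows, assuming the head is currently moving toward higher track numbers (the track numbers are illustrative):

```python
def scan_order(head, requests):
    """SCAN sketch: sweep upward from the head, then sweep back down."""
    up = sorted(t for t in requests if t >= head)    # served on the way up
    down = sorted((t for t in requests if t < head), reverse=True)
    return up + down                                 # served on the way back

print(scan_order(50, [55, 180, 40, 60]))  # -> [55, 60, 180, 40]
```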
Secondary Storage Scheduling
N-Step C-Scan Scheduling
The head cycles in one direction, processing requests as it
goes, then returns to the beginning and starts over. A
separate queue is used to hold requests that arrive while the
head is in the process of scanning.

Advantage:
 Fairer than the normal Scan algorithm
Disadvantage:
 Time is wasted on the return sweep
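The one-directional sweep can be sketched as below; the separate queue for requests arriving mid-sweep (the "N-Step" part) is noted but not modelled, and the track numbers are illustrative:

```python
def c_scan_order(head, requests):
    """C-Scan sketch: sweep upward only, then jump back and sweep up again.
    In N-Step C-Scan, requests arriving mid-sweep would wait in a separate
    queue for the next cycle (not modelled here)."""
    current_sweep = sorted(t for t in requests if t >= head)
    next_sweep = sorted(t for t in requests if t < head)  # after the jump
    return current_sweep + next_sweep

# Unlike SCAN, low-numbered tracks are still served in ascending order,
# so every track sees the head pass in the same direction each cycle.
print(c_scan_order(50, [55, 180, 40, 10, 60]))  # -> [55, 60, 180, 10, 40]
```

Serving every track in the same direction each cycle is what makes C-Scan fairer than plain SCAN, at the cost of the unproductive return jump.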
Thank You
