16.1 Operating System (OS)


Purpose of an operating system

16.1.1 How an operating system can maximise the use of
computer resources
When a computer is first switched on:
● the BIOS (basic input/output system) in ROM starts a BOOTSTRAP PROGRAM
● the BOOTSTRAP PROGRAM loads the OS from the hard disk/SSD into RAM.
Flash memory
Tablets and mobile phones also use RAM, but their boot process is less obvious (it appears instantaneous) because their main internal memory is supplied by FLASH MEMORY.
Flash memory is split into two parts
1. The part where the OS resides; it is read only. This is why the OS can be updated
by the mobile phone/tablet manufacturer but the user cannot interfere with the
software or ‘steal’ memory from this part of memory.
2. The part where the apps (and associated data) are stored.

The user does not have direct access to this part of memory either.
RAM stores the apps currently executing and the data currently in use.
Resource management
The OS’s task is to maximise the utilisation of computer resources:
1. the CPU
2. memory
3. the input/output (I/O) system.
Scheduling for better utilisation of CPU time and resources
The OS manages:
● any I/O operation which has been initiated by the computer user
● any I/O operation which occurs while software is being run and resources, such as
printers or disk drives, are requested.
DMA
The direct memory access (DMA) controller is needed to allow hardware
to access the main memory independently of the CPU.

● The DMA initiates the data transfers.


● The CPU carries out other tasks while this data transfer operation is
taking place.
● Once the data transfer is complete, an interrupt signal is sent to the
CPU from the DMA.
KERNEL
The central component of the OS, responsible for communication between hardware, software and memory.
The OS hides the complexities of hardware from the user. This can be done by:

● using GUI interfaces rather than CLI


● using device drivers (which hide the complexity of hardware
interfaces)
● simplifying the saving and retrieving of data from memory and storage
devices
● carrying out background utilities, such as virus scanning which the user
can ‘leave to its own devices’
16.1.2 Process management
Multitasking allows computers to carry out more than one task at a time.
Low level scheduling
SCHEDULING is used to decide which processes should be carried out.
● Determines CPU allocation for processes in the ready state after OS calls.
● Aims to maximise system throughput, maintain acceptable response times, and ensure system stability.
● Resolves conflicts between processes vying for the same resources.


Process scheduler
A process’s priority is determined by:
● its category (for example, batch, online or real-time processing)
● whether it is CPU-bound or I/O-bound
● its resource requirements
● its turnaround time, waiting time and response time
● whether it can be interrupted while running.
Once a task has been given a priority, the scheduler must still consider:
● the deadline for completing the task
● how much CPU time the task needs
● the task’s memory requirements.
16.1.3 Process states
Process control block (PCB) – data structure which contains all the data
needed for a process to run.

The PCB will store:

● current process state (ready, running or blocked)


● process privileges (such as which resources it is allowed to access)
● register values (PC, MAR, MDR and ACC)
● process priority and any scheduling information
● the amount of CPU time the process will need to complete
● a process ID which allows it to be uniquely identified.
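The PCB fields above can be sketched as a simple data structure (a hypothetical Python illustration of the idea, not any real OS’s PCB layout):

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    pid: int                 # unique process ID
    state: str = "ready"     # ready, running or blocked
    priority: int = 0        # scheduling priority
    privileges: set = field(default_factory=set)   # resources it may access
    registers: dict = field(default_factory=lambda: {"PC": 0, "MAR": 0, "MDR": 0, "ACC": 0})
    cpu_time_needed: int = 0  # CPU time the process will need to complete

# Create a PCB, then record a state change when the process is dispatched
pcb = ProcessControlBlock(pid=42, priority=3)
pcb.state = "running"
print(pcb.pid, pcb.state)
```

When a context switch occurs, the scheduler would copy the CPU’s register values into `registers` before another process takes over.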
Process state conditions
Running, ready, blocked
Round robin process
● Each process has an equal time slice (known as a quantum).
● When a time slice ends, the low level scheduler puts the process back into the
READY QUEUE allowing another process to use CPU time.
● Typical time slices are about 10 to 100 ms long (with a 2.7 GHz clock speed,
100 ms of CPU time is equivalent to 270 million clock cycles, giving a
considerable amount of potential processing time to a process).
● When a time slice ends, the status of each process must be saved so that it can
continue from where it left off when it is allocated its next time slice.
● The contents of the CPU registers (PC, MAR, MDR, ACC) are saved to the process
control block (PCB); each process has its own control block.
● When the next process takes control of the CPU (burst time), its previous state is
reinstated or restored (this is known as context switching).
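The clock-cycle figure in the time-slice bullet can be checked with a quick calculation:

```python
clock_hz = 2.7e9     # 2.7 GHz clock speed
slice_s = 0.100      # 100 ms time slice
cycles = clock_hz * slice_s
print(int(cycles))   # 270000000 clock cycles per time slice
```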
Scheduling routine algorithms
1. first come first served scheduling (FCFS)
2. shortest job first scheduling (SJF)
3. shortest remaining time first scheduling (SRTF)
4. round robin
First come first served scheduling (FCFS)
It is similar to a queue structure and uses the FIFO (first in first out) principle.
Shortest job first scheduling (SJF) and shortest remaining
time first scheduling (SRTF)
● Best approaches to minimize waiting time
● SJF is non-preemptive
● SRTF is preemptive
shortest remaining time first scheduling (SRTF)
the processes are placed in the ready queue as they arrive; but when a process with a shorter burst time
arrives, the existing process is removed (pre-empted) from execution. The shorter process is then executed
first.
Round robin
A fixed time slice (a quantum) is given to each process.

When a process’s time slice ends, it is pre-empted and placed at the back of
the ready queue (or into the blocked queue if it is waiting for I/O); then
another process from the ready queue is executed in its own time slice.

Context switching is used to save the state of the pre-empted processes.

The ready queue gives each process its time slice in the correct order (if a
process completes before the end of its time slice, the next process is
taken from the ready queue early).
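As a sketch of how average waiting times differ between the non-preemptive routines (the burst times are hypothetical, and all processes are assumed to arrive at t = 0):

```python
def avg_waiting_time(burst_times):
    # For a non-preemptive schedule run in the given order, each
    # process waits for the sum of the bursts that run before it.
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

bursts = [24, 3, 3]                       # hypothetical CPU burst times (ms)
fcfs = avg_waiting_time(bursts)           # FCFS: run in arrival order
sjf = avg_waiting_time(sorted(bursts))    # SJF: shortest burst first
print(fcfs, sjf)                          # 17.0 3.0
```

Running the shortest jobs first dramatically lowers the average wait, which is why SJF/SRTF are listed as the best approaches for minimising waiting time.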
Average waiting times for the four scheduling routines
Interrupt handling with OS kernels
Types of interrupts:

● Device interrupt (for example, printer out of paper, device not present,
and so on).
● Exceptions (for example, instruction faults such as division by zero,
unidentified op code, stack fault, and so on).
● Traps/software interrupt (for example, process requesting a resource
such as a disk drive).
When an interrupt is received, the kernel will consult the interrupt dispatch
table (IDT) – this table links a device description with the appropriate
interrupt routine.
Interrupts will be prioritised using interrupt priority levels (IPL), numbered 0
to 31. A process is suspended only if the incoming interrupt’s priority level is
greater than that of the current task. An interrupt with a lower IPL is saved in
an interrupt register and is handled (serviced) once the current IPL falls below
its level.
Examples of IPLs include:
31: power fail interrupt
24: clock interrupt
20-23: I/O devices
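The suspension rule can be sketched as follows (IPL values as listed above; the function name is illustrative):

```python
def should_suspend(current_ipl, incoming_ipl):
    # A running task is suspended only when the incoming interrupt's
    # priority level is strictly greater than the current task's IPL.
    return incoming_ipl > current_ipl

# 31: power fail, 24: clock, 20-23: I/O devices
print(should_suspend(current_ipl=22, incoming_ipl=31))  # True: power fail pre-empts I/O
print(should_suspend(current_ipl=24, incoming_ipl=22))  # False: held until the IPL falls
```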
16.1.4 Memory Management

● Memory management optimizes CPU processes, preventing

fragmentation.

● Allocates memory for processes and deallocates upon completion.


Single (Contiguous) Allocation

● Dedicates all memory to a single application.

● Leads to inefficient use of main memory due to fragmentation.


Paged Memory / Paging
● Memory is split into fixed-size partitions (pages).

● Logical memory is divided into pages; physical memory is divided into frames.

● A process is allocated a number of pages (often slightly more than it needs),
which are loaded into frames during execution.
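The page-to-frame translation can be sketched with a hypothetical 1 KiB page size and a toy page table:

```python
PAGE_SIZE = 1024  # hypothetical 1 KiB pages

# Hypothetical page table: logical page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def to_physical(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]            # page-table lookup
    return frame * PAGE_SIZE + offset   # frame base + same offset within the frame

print(to_physical(1030))  # page 1, offset 6 -> frame 2 -> 2054
```

Because pages and frames are the same fixed size, only the page number changes during translation; the offset within the page is kept as-is.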
Segmentation
Segmented memory
The segment number indexes a segment table holding each segment’s base address:
address in physical memory space = base address of segment + offset value
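The formula can be sketched with a toy segment table (the base addresses and limits are hypothetical):

```python
# Hypothetical segment table: segment number -> (base address, limit)
segment_table = {0: (1000, 400), 1: (6300, 200)}

def segment_to_physical(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                 # offset falls outside the segment
        raise MemoryError("segmentation fault")
    return base + offset                # physical address = base + offset

print(segment_to_physical(1, 53))  # 6300 + 53 = 6353
```

Unlike paging, segments are variable-sized, so each entry also carries a limit that the offset is checked against.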
Summary of the differences between paging and segmentation


16.1.5 Virtual memory
● RAM is the physical memory

● Virtual memory is RAM + swap space on the hard disk (or SSD)
Problem: multiple processes are running and there is no available memory left in RAM.
Solution: separately map each program’s memory space to RAM and utilise the HDD as swap space.
The main benefits of virtual memory are

» programs can be larger than physical memory and can still be executed

» it leads to more efficient multi-programming with less I/O loading and


swapping programs into and out of memory

» there is no need to waste memory with data that is not being used (for
example, during error handling)

» it eliminates external fragmentation/reduces internal fragmentation

» it removes the need to buy and install more expensive RAM memory.
Disk thrashing: more and more data pages are moved in and out of virtual memory, causing more read/write head movements.
Thrash point: more time is spent moving pages than processing; the speed decreases and the process comes to a halt.

Some suggestions to improve:

★ installing more RAM;


★ reducing the number of programs running at a time;
★ reducing the size of the swap file.
Process of accessing data using VM.
➢ The program executes a load with a virtual address (VA).
➢ The computer translates the VA to a physical address (PA) in memory.
➢ If the required page is not in memory, the OS loads it from the HDD.
➢ The computer then reads RAM using the PA and returns the data to the program.
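The four steps above can be sketched as follows (the page size, swap contents and function name are hypothetical):

```python
PAGE_SIZE = 4096

ram = {}                           # frames currently in RAM: page -> data
disk = {0: "a", 1: "b", 2: "c"}    # hypothetical swap space on the HDD/SSD

def load(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)  # translate the VA
    if page not in ram:            # page fault: the page is not in memory
        ram[page] = disk[page]     # OS loads the page from disk
    return ram[page]               # read RAM and return the data

print(load(4097))  # address in page 1 -> page faulted in from disk -> "b"
```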
16.1.6 Page replacement
Description and algorithms of page replacement
Page replacement occurs when a requested page is not in memory (its present flag P = 0) and a free page cannot be used to satisfy the allocation.
A page fault is an interrupt raised by the hardware when a requested page is not in memory; the OS handles it by replacing one of the existing pages with the new page(s).
Page replacement is done by swapping pages from the backing store into main memory.
Page Replacement:
FIFO (First In, First Out)
● Mechanism: FIFO works like a queue. The oldest page in memory is the first to be replaced when a new page needs to
be brought in.

Optimal Page Replacement (OPR)


● Mechanism: OPR looks ahead to see which page will be least used in the future and replaces it.

LRU (Least Recently Used)


● Mechanism: LRU replaces the page that has not been used for the longest time.

Clock Page Replacement / Second-Chance


● Mechanism: It uses a circular queue structure and a pointer to identify and replace pages.
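FIFO and LRU can be sketched as page-fault counters over a hypothetical reference string:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, faults = deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.popleft()            # evict the oldest page
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0       # insertion order tracks recency
    for p in refs:
        if p in mem:
            mem.move_to_end(p)           # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used
        mem[p] = True
    return faults

refs = [1, 2, 3, 1, 4, 2]  # hypothetical page reference string
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 4 5
```

Which algorithm wins depends on the reference string; OPR is the theoretical optimum but needs future knowledge, which is why LRU and second-chance are used in practice.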
16.1.7 Summary of the basic differences between
processor management and memory management
Processor management decides which processes will be executed and in
which order (to maximise resources).

Memory management will decide where in memory data/programs will be


stored and how they will be stored.

Both are essential to the efficient and stable running of any computer system.

You might also like