Operating Systems Summary: Chapters 1 to 6


UNIVERSITY OF SCIENCE AND TECHNOLOGY ADEN

Prepared by: Eng. Mohammed A. Badeh 2024


Operating System
An operating system, commonly abbreviated as OS, is software that manages computer functions.

It acts as an interface between the computer hardware (the processor, memory, hard drives) and the software (programs and applications) run by users.

The OS allocates resources, handles input and output requests, and translates software instructions into instructions the computer's chipset can execute.

Without an OS, a computer cannot load apps, display information, accept input, or function at more than the most basic level.
What is an Operating System?
▪ A program that acts as an intermediary between a user of a computer and the computer
hardware
▪ Operating system goals:
➢ Execute user programs and make solving user problems easier
➢ Make the computer system convenient to use
➢ Use the computer hardware in an efficient manner
▪ “The one program running at all times on the computer” is the kernel, part of the operating
system
▪ Everything else is either
➢ A system program (ships with the operating system, but not part of the kernel), or
➢ An application program (all programs not associated with the operating system)
▪ Today’s OSes for general-purpose and mobile computing also include middleware – a set of software frameworks that provide additional services to application developers, such as databases, multimedia, and graphics
What Do Operating Systems Do?
▪ Depends on the point of view
▪ Users want convenience, ease of use and good performance
• Don’t care about resource utilization
▪ But a shared computer such as a mainframe or minicomputer must keep all users happy
• Operating system is a resource allocator and control program making efficient use of
HW and managing execution of user programs



Types of Operating Systems

Some widely used operating systems are as follows:

1- Batch OSes
2- Multiprogrammed OSes
3- Time-sharing OSes
4- Distributed OSes
5- Network OSes
6- Real-time OSes
7- Multiprocessor OSes



1- Batch Operating System

Examples: payroll systems, bank statement systems.
3- Time-Sharing OSes

Examples: Multics, Unix, etc.


4- Distributed OSes

Examples: Microsoft Windows Server 2003, 2008, 2012; Ubuntu; UNIX; Linux (Apache Server).
5- Network OSes

Examples: Microsoft Windows Server 2003, 2008, 2012, and 2016; UNIX; Linux; Mac OS X, etc.


6- Real-Time OSes
These OSes serve real-time systems, where the time interval required to process and respond to inputs is very small. This interval is called the response time. There are two types of real-time operating systems:

Hard Real-Time Systems:

These OSes are meant for applications where time constraints are very strict and even the shortest possible delay is not acceptable. Such systems are built for life-critical applications like automatic parachutes or airbags, which must be readily available in case of an accident. Virtual memory is rarely found in these systems.

Soft Real-Time Systems:

These OSes are for applications where the time constraints are less strict.


Computing System Environments
Computing takes place in a variety of environments, including:

▪ Traditional
▪ Peer-to-Peer
▪ Client Server
▪ Mobile
▪ Cloud computing
▪ Real-time Embedded



Chapter 2: Operating System Architecture and Services
• Computer System Organizations
• Computer System Structure
• Computer-System Architecture
• Resource Management
• Operating System Services
• System Calls
• Types of System Calls
• System Services
• Common Functions of Interrupts
• Operating-System Operations
• Transition from User to Kernel Mode
• Operating System Structure
Computer System Organizations
▪ Computer-system operation
• One or more CPUs, device controllers connect through common bus providing access
to shared memory
• Concurrent execution of CPUs and devices competing for memory cycles



Computer System Organizations Cont.

▪ I/O devices and the CPU can execute concurrently


▪ Each device controller is in charge of a particular device type
▪ Each device controller has a local buffer
▪ Each device controller type has an operating system device driver to manage it
▪ CPU moves data from/to main memory to/from local buffers
▪ I/O is from the device to local buffer of controller
▪ Device controller informs CPU that it has finished its operation by causing an interrupt



Computer System Structure
▪ Computer system can be divided into four components:
• Hardware – provides basic computing resources
CPU, memory, I/O devices
• Operating system
Controls and coordinates use of hardware among various applications and users
• Application programs – define the ways in which the system resources are used to
solve the computing problems of the users
Word processors, compilers, web browsers, database systems, video games
• Users
People, machines, other computers


Computer-System Architecture
• A computer system can be organized in several different ways, which we can
categorize roughly according to the number of general-purpose processors
used.

• Kinds of computer-system architecture

1. Single-processor Systems
2. Multiprocessor Systems
3. Clustered Systems



Computer-System Architecture
2- Multiprocessor Systems
• Multiprocessor systems are growing in use and importance
• Also known as parallel systems or tightly coupled systems
• Advantages include:
• 1- Increased throughput
• 2- Economy of scale
• 3- Increased reliability – graceful degradation or fault tolerance
• Two types:
• 1- Asymmetric Multiprocessing – each processor is assigned a specific
task.
• 2- Symmetric Multiprocessing – each processor performs all tasks.
Computer-System Architecture
3- Clustered Systems

▪ Like multiprocessor systems, but multiple systems working together
• Usually sharing storage via a storage-area network (SAN)
• Provides a high-availability service which survives failures
Asymmetric clustering
Symmetric clustering



Resource Management
• An operating system is a resource manager. The resources that the operating
system must manage include:

1. Process Management
2. Memory Management
3. File-System Management
4. Mass-Storage Management
5. Cache Management
6. I/O System Management



Operating System Services
▪ Operating systems provide an environment for execution of programs and services to
programs and users
▪ One set of operating-system services provides functions that are helpful to the user:
• User interface – varies between command-line (CLI), graphical user interface (GUI), touch screen, and batch interfaces
• Program execution – load a program into memory and run it; end execution normally or abnormally
• I/O operations – a running program may require I/O from a file or an I/O device
• File-system manipulation – read, write, create, and delete files and directories
• Communications – processes exchange information on the same computer or across a network
• Error detection – the OS must be constantly aware of possible errors and take appropriate action
System Calls
• Programming interface to the services provided by the OS
• Typically written in a high-level language (C or C++)
• Mostly accessed by programs via a high-level Application Programming
Interface (API) rather than direct system call use
• Three most common APIs are
• Win32 API for Windows,
• POSIX API for POSIX-based systems (including virtually all versions
of UNIX, Linux, and Mac OS X),
• and Java API for the Java virtual machine (JVM)
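As a sketch of the "API rather than direct system call" point: Python's `os` module exposes thin wrappers over POSIX system calls, so a program can exercise `open()`, `read()`, `write()`, and `close()` without issuing trap instructions by hand. The file name here is illustrative, not from the slides.

```python
import os
import tempfile

pid = os.getpid()  # wraps the getpid() system call
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")

# File management through the open/write/close system-call wrappers
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"hello via system calls\n")
os.close(fd)

# Read the bytes back through the same low-level interface
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)
os.remove(path)  # wraps unlink()

print(pid, data)
```

Each call maps to one kernel service; the API hides the mode switch and register setup that the actual trap requires.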



Types of System Calls
• System calls can be grouped roughly into six major categories:

• 1- Process control
• 2- File management
• 3- Device management
• 4- Information maintenance
• 5- Communications
• 6- Protection



System Services
• System programs provide a convenient environment for program development
and execution. They can be divided into:
• File manipulation
• Status information sometimes stored in a file
• Programming language support
• Program loading and execution
• Communications
• Background services
• Application programs

Common Functions of Interrupts
• An interrupt transfers control to the interrupt service routine, generally through
the interrupt vector, which contains the addresses of all the service routines
• Interrupt architecture must save the address of the interrupted instruction
• A trap or exception is a software-generated interrupt caused either by an
error or a user request
• An operating system is interrupt driven
• Interrupt driven (hardware and software)
• Hardware interrupt by one of the devices
• Software interrupt (exception or trap):
• Software error (e.g., division by zero)
• Request for operating system service
• Other process problems include infinite loop, processes modifying each
other or the operating system



Operating-System Operations
• Dual-mode operation allows OS to protect itself and other system
components
• User mode and kernel mode
• Mode bit provided by hardware
• Provides ability to distinguish when system is running user code or
kernel code
• Some instructions designated as privileged, only executable in kernel
mode
• System call changes mode to kernel, return from call resets it to user
• Increasingly, CPUs support multi-mode operations
• e.g., a virtual machine manager (VMM) mode for guest VMs
Transition from User to Kernel Mode
• A timer prevents a process from running in an infinite loop or hogging resources
• The timer is set to interrupt the computer after some time period
• Keep a counter that is decremented by the physical clock
• The operating system sets the counter (a privileged instruction)
• When the counter reaches zero, an interrupt is generated
• Set up before scheduling a process, to regain control or terminate a program
that exceeds its allotted time

Operating System Structure
• Most commonly used structures:
➢ Monolithic Structures
➢ Layered Approach
➢ Microkernels
➢ Modules
➢ Hybrid System



Chapter 3: Processes

• Process Concept
• Process Scheduling
• Operations on Processes
• Inter-process Communication



Process Concept
▪ An operating system executes a variety of programs that run as a process.
▪ Process – a program in execution; process execution must progress in
sequential fashion.
▪ Multiple parts
• The program code, also called text section
• Current activity including program counter, processor registers
• Stack containing temporary data
➢ Function parameters, return addresses, local variables
• Data section containing global variables
• Heap containing memory dynamically allocated during run time
Process Concept (Cont.)
• Program is passive entity stored on disk (executable
file); process is active
• Program becomes process when an executable file
is loaded into memory
• Execution of program started via GUI mouse clicks,
command line entry of its name, etc.
• One program can be several processes
• Consider multiple users executing the same
program
Process in Memory
Process State
• As a process executes, it changes state
• New: The process is being created
• Running: Instructions are being executed
• Waiting: The process is waiting for some event to occur
• Ready: The process is waiting to be assigned to a processor
• Terminated: The process has finished execution



Diagram of Process State



Process Control Block (PCB)
A Process Control Block (PCB) is a data structure maintained by the operating system for every
process. Each PCB is identified by an integer process ID (PID) and keeps all the information needed
to keep track of a process. Information associated with each process (the PCB is also called a
task control block):
• Process state – running, waiting, etc.
• Program counter – location of instruction to next execute
• CPU registers – contents of all process-centric registers
• CPU scheduling information- priorities, scheduling queue pointers
• Memory-management information – memory allocated to the process
• Accounting information – CPU used, clock time elapsed since start, time
limits
• I/O status information – I/O devices allocated to process, list of open files
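The field list above can be mirrored in a toy data structure. This is a classroom sketch, not a real kernel's PCB; the field names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy Process Control Block mirroring the fields listed above."""
    pid: int                                  # integer process ID
    state: str = "new"                        # new, ready, running, waiting, terminated
    program_counter: int = 0                  # location of the next instruction
    registers: dict = field(default_factory=dict)   # process-centric register contents
    priority: int = 0                         # CPU-scheduling information
    memory_base: int = 0                      # memory-management information
    memory_limit: int = 0
    cpu_time_used: float = 0.0                # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

p = PCB(pid=42)
p.state = "ready"   # the OS updates the state field as the process moves between queues
print(p.pid, p.state)
```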
Process Scheduling
• Process scheduler selects among available processes for next execution on
CPU core
• Goal -- Maximize CPU use, quickly switch processes onto CPU core
• Maintains scheduling queues of processes
• Ready queue – set of all processes residing in main memory, ready and
waiting to execute
• Wait queues – set of processes waiting for an event (i.e., I/O)
• Processes migrate among the various queues



Representation of Process Scheduling



Schedulers
▪ Short-term scheduler (or CPU scheduler) – selects which process should be executed next and
allocates CPU
• Sometimes the only scheduler in a system
• Short-term scheduler is invoked frequently (milliseconds) ⇒ must be fast
▪ Long-term scheduler (or job scheduler) – selects which processes should be brought into the
ready queue
• Long-term scheduler is invoked infrequently (seconds, minutes) ⇒ may be slow
• The long-term scheduler controls the degree of multiprogramming
▪ Processes can be described as either:
• I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
• CPU-bound process – spends more time doing computations; few very long CPU bursts
▪ Long-term scheduler strives for good process mix
Addition of Medium Term Scheduling
▪ Medium-term scheduler can be added if degree of multiple programming
needs to decrease
• Remove process from memory, store on disk, bring back in from disk to
continue execution: swapping



CPU Switch From Process to Process
▪ A context switch occurs when the CPU switches from one process to another.



Context Switch
▪ When CPU switches to another process, the system must save the state of
the old process and load the saved state for the new process via a context
switch
▪ Context of a process represented in the PCB
▪ Context-switch time is pure overhead; the system does no useful work while
switching
• The more complex the OS and the PCB ➔ the longer the context switch
▪ Time dependent on hardware support
• Some hardware provides multiple sets of registers per CPU ➔ multiple
contexts loaded at once
Process Termination
▪ If no parent waiting (did not invoke wait()) process is a zombie
▪ If parent terminated without invoking wait(), process is an orphan



Inter-process Communication
▪ Processes within a system may be independent or cooperating
▪ Cooperating process can affect or be affected by other processes, including sharing data
▪ Reasons for cooperating processes:
• Information sharing
• Computation speedup
• Modularity
• Convenience
▪ Cooperating processes need interprocess communication (IPC)
▪ Two models of IPC
• Shared memory
• Message passing
Producer-Consumer Problem
▪ Paradigm for cooperating processes:
• producer process produces information that is consumed by a consumer
process
▪ Two variations:
• unbounded-buffer places no practical limit on the size of the buffer:
Producer never waits
Consumer waits if there is no buffer to consume
• bounded-buffer assumes that there is a fixed buffer size
Producer must wait if all buffers are full
Consumer waits if there is no buffer to consume
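The bounded-buffer waiting rules above can be sketched with Python's thread-safe `queue.Queue`, whose `put()` blocks the producer when all slots are full and whose `get()` blocks the consumer when the buffer is empty. The buffer size and item count are arbitrary choices for the demo.

```python
import queue
import threading

buf = queue.Queue(maxsize=3)   # bounded buffer with 3 slots
consumed = []

def producer():
    for i in range(10):
        buf.put(i)             # blocks (producer waits) if all 3 slots are full
    buf.put(None)              # sentinel: nothing more to produce

def consumer():
    while True:
        item = buf.get()       # blocks (consumer waits) if the buffer is empty
        if item is None:
            break
        consumed.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)                # items arrive in production order
```

With an unbounded queue (`maxsize=0`), only the consumer would ever block, matching the unbounded-buffer variation.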
Chapter 4: CPU Scheduling
▪ Basic Concepts
▪ Scheduling Criteria
▪ Scheduling Algorithms



Basic Concepts
1- Arrival Time: The time at which the process enters the ready queue is called the arrival time.

2- Completion Time: The time at which the process enters the completion state (i.e., completes its execution) is called completion time.

3- Turnaround time: The total amount of time spent by the process from its arrival to its completion is called turnaround time.

4- Burst Time: The total amount of time required by the CPU to execute the whole process is called burst time.
This does not include the waiting time.

5- Waiting Time: The total amount of time for which the process waits for the CPU to be assigned is called
waiting time.

6- Response time: The difference between the arrival time and the time at which the process first gets the CPU is
called response time.
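These quantities are linked by simple identities: turnaround = completion − arrival, waiting = turnaround − burst, and response = first CPU allocation − arrival. A one-process check with made-up numbers:

```python
# Hypothetical process (illustrative numbers): arrives at t = 2,
# first gets the CPU at t = 5, completes at t = 14, total burst of 7.
arrival, first_cpu, completion, burst = 2, 5, 14, 7

turnaround = completion - arrival   # 14 - 2 = 12
waiting = turnaround - burst        # 12 - 7 = 5
response = first_cpu - arrival      # 5 - 2 = 3
print(turnaround, waiting, response)
```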



CPU Scheduler
▪ The CPU scheduler selects from among the processes in ready queue, and allocates a CPU
core to one of them
• Queue may be ordered in various ways
▪ CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
▪ For situations 1 and 4, there is no choice in terms of scheduling. A new process (if one
exists in the ready queue) must be selected for execution.
▪ For situations 2 and 3, however, there is a choice.
Preemptive and Non preemptive Scheduling
▪ When scheduling takes place only under circumstances 1 and 4, the
scheduling scheme is non-preemptive; otherwise, it is preemptive.
▪ Under non-preemptive scheduling, once the CPU has been allocated to a
process, the process keeps the CPU until it releases it, either by terminating
or by switching to the waiting state.
▪ Virtually all modern operating systems including Windows, MacOS, Linux,
and UNIX use preemptive scheduling algorithms.



Preemptive Scheduling and Race Conditions
▪ Preemptive scheduling can result in race conditions when data are shared
among several processes.
▪ Consider the case of two processes that share data. While one process is
updating the data, it is preempted so that the second process can run. The
second process then tries to read the data, which are in an inconsistent state.



Dispatcher
▪ Dispatcher module gives control of the CPU to the
process selected by the CPU scheduler; this
involves:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user
program to restart that program
▪ Dispatch latency – time it takes for the dispatcher
to stop one process and start another running



Scheduling Criteria

▪ CPU utilization – keep the CPU as busy as possible
▪ Throughput – number of processes that complete their execution per time unit
▪ Turnaround time – amount of time to execute a particular process, from submission to completion
▪ Waiting time – amount of time a process spends waiting in the ready queue
▪ Response time – amount of time from the submission of a request until the first response is produced



Scheduling Algorithm Optimization Criteria
▪ Max CPU utilization
▪ Max throughput
▪ Min turnaround time
▪ Min waiting time
▪ Min response time



First- Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
▪ Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
P1 P2 P3
0 24 27 30

Waiting time for P1 = 0; P2 = 24; P3 = 27


▪ Average waiting time: (0 + 24 + 27)/3 = 17
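Under FCFS with all processes arriving at time 0, each process simply waits for the total burst time of everything ahead of it, so both orderings above can be checked with a few lines:

```python
def fcfs_waiting_times(bursts):
    """bursts listed in arrival order; waiting time = burst time consumed ahead."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

waits = fcfs_waiting_times([24, 3, 3])    # arrival order P1, P2, P3
waits2 = fcfs_waiting_times([3, 3, 24])   # arrival order P2, P3, P1
print(waits, sum(waits) / 3)              # [0, 24, 27] 17.0
print(waits2, sum(waits2) / 3)            # [0, 3, 6] 3.0
```

The gap between 17 and 3 is the convoy effect in miniature: the long burst at the head of the queue makes everyone else wait.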
FCFS Scheduling (Cont.)
▪ Suppose that the processes arrive in the order:
▪ P2 , P3 , P1
▪ The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30

▪ Waiting time for P1 = 6; P2 = 0; P3 = 3


▪ Average waiting time: (6 + 0 + 3)/3 = 3
▪ Much better than previous case
▪ Convoy effect – occurs when a set of processes need a resource for a short time but one
process holds it for a long time, blocking all the other processes. This causes poor
utilization of the other resources in the system.
Shortest-Job-First (SJF) Scheduling
▪ Associate with each process the length of its next CPU burst
• Use these lengths to schedule the process with the shortest time
▪ SJF is optimal – gives minimum average waiting time for a given set of
processes
• The difficulty is knowing the length of the next CPU request
▪ Preemptive version called shortest-remaining-time-first
▪ How do we determine the length of the next CPU burst?
• Could ask the user
• Estimate
Example of SJF
Process Burst Time
P1 6
P2 8
P3 7
P4 3

▪ SJF scheduling chart

P4 P1 P3 P2
0 3 9 16 24

▪ Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
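With all arrivals at time 0, non-preemptive SJF is just FCFS applied to the processes sorted by burst length, which reproduces the chart above:

```python
bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}

order = sorted(bursts, key=bursts.get)   # shortest burst first
waits, elapsed = {}, 0
for p in order:
    waits[p] = elapsed                   # waits for everything scheduled before it
    elapsed += bursts[p]

print(order, waits, sum(waits.values()) / 4)
```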


Example of Shortest-remaining-time-first
▪ Now we add the concepts of varying arrival times and preemption to the analysis
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
▪ Preemptive SJF Gantt Chart
P1 P2 P4 P1 P3
0 1 5 10 17 26

▪ Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5
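The preemptive schedule above can be reproduced by simulating one time unit at a tick: at each tick, the arrived process with the least remaining burst runs. Waiting time then falls out as finish − arrival − burst.

```python
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}  # (arrival, burst)
remaining = {p: b for p, (a, b) in procs.items()}
finish, t = {}, 0

while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    if not ready:                                   # CPU idle until next arrival
        t += 1
        continue
    p = min(ready, key=lambda q: remaining[q])      # shortest remaining time wins
    remaining[p] -= 1                               # run p for one time unit
    t += 1
    if remaining[p] == 0:
        finish[p] = t
        del remaining[p]

waits = {p: finish[p] - procs[p][0] - procs[p][1] for p in finish}
print(waits, sum(waits.values()) / 4)
```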


Round Robin (RR)
▪ Each process gets a small unit of CPU time (time quantum q), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added to the end
of the ready queue.
▪ If there are n processes in the ready queue and the time quantum is q, then each process
gets 1/n of the CPU time in chunks of at most q time units at once. No process waits
more than (n-1)q time units.
▪ Timer interrupts every quantum to schedule next process
▪ Performance
• q large ⇒ FIFO
• q small ⇒ q must be large with respect to context-switch time, otherwise overhead is too
high



Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
▪ The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

▪ Waiting time for P1 = (10−4) = 6, P2 = 4, P3 = 7


▪ Average waiting time = (6+4+7) / 3 = 17/3 = 5.66
▪ Typically, higher average turnaround than SJF, but better response
▪ q should be large compared to context switch time
• q usually 10 milliseconds to 100 milliseconds,
• Context switch < 10 microseconds
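The quantum-by-quantum schedule above can be checked by modelling the ready queue as a FIFO: a process runs for at most q units, and if it is preempted it rejoins the back of the queue.

```python
from collections import deque

def rr_waiting_times(bursts, q):
    """Round-robin waiting times, all processes arriving at time 0 in dict order."""
    remaining = dict(bursts)
    ready = deque(bursts)              # FIFO ready queue in arrival order
    finish, t = {}, 0
    while ready:
        p = ready.popleft()
        run = min(q, remaining[p])     # run for one quantum or until done
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = t
        else:
            ready.append(p)            # preempted: back of the queue
    return {p: finish[p] - bursts[p] for p in bursts}   # waiting = turnaround - burst

waits = rr_waiting_times({"P1": 24, "P2": 3, "P3": 3}, q=4)
print(waits, sum(waits.values()) / 3)
```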
Time Quantum and Context Switch Time



Priority Scheduling
▪ A priority number (integer) is associated with each process
▪ The CPU is allocated to the process with the highest priority (smallest integer ⇒ highest
priority)
• Preemptive
• Non preemptive
▪ SJF is priority scheduling where priority is the inverse of predicted next CPU burst time
▪ Problem ⇒ Starvation – low-priority processes may never execute
▪ Solution ⇒ Aging – as time progresses, increase the priority of the process



Example of Priority Scheduling
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

▪ Priority scheduling Gantt Chart

▪ Average waiting time = (6+0+16+18+1)/5 = 41/5 = 8.2
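Since all five processes arrive at time 0 and the scheme is non-preemptive, the schedule is FCFS on the processes sorted by priority number (smallest first), which reproduces the 8.2 average:

```python
# (burst, priority) for each process; smallest priority number runs first
procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}

order = sorted(procs, key=lambda p: procs[p][1])   # by priority number
waits, elapsed = {}, 0
for p in order:
    waits[p] = elapsed                             # waits for everything before it
    elapsed += procs[p][0]

print(order, waits, sum(waits.values()) / 5)
```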
Priority Scheduling w/ Round-Robin
Process Burst Time Priority
P1 4 3
P2 5 2
P3 8 2
P4 7 1
P5 3 3
▪ Run the process with the highest priority. Processes with the same priority run round-robin

▪ Gantt Chart with time quantum = 2

▪ Average waiting time = (22+11+12+0+24)/5 = 69/5 = 13.8
EXE.(1)
Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

Process Burst Time Priority


P1 2 2
P2 1 1
P3 8 4
P4 4 2
P5 5 3

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.

a. Draw four Gantt charts that illustrate the execution of these processes using the following scheduling
algorithms: FCFS, SJF, non-preemptive priority (a larger priority number implies a higher priority), and RR
(quantum = 2).

b. What is the waiting time of each process for each of these scheduling algorithms?



EXE.(2)
Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

Process Priority Burst Arrival


P1 8 15 0
P2 3 20 0
P3 4 20 20
P4 4 20 25

The processes are assumed to be submitted in the order P1, P2, P3, P4, with the arrival times shown above.

a. Draw four Gantt charts that illustrate the execution of these processes using the following scheduling
algorithms: FCFS, SRTF, non-preemptive priority (a larger priority number implies a higher priority, quantum =
5), and RR (quantum = 5).

b. What is the waiting time of each process for each of these scheduling algorithms?

