Basic Operating System Concepts: A Review
Main Goals of OS
1. Resource management: disk, CPU cycles, etc., must be managed efficiently to maximize overall system performance
2. Resource abstraction: a software interface simplifies the use of hardware resources
3. Resource virtualization: supports resource sharing by giving each process the appearance of an unshared resource
Fundamental Concepts
System Calls
Execution Modes
Operational concepts that allow the operating system to maintain control and protection. Based on hardware features.
System Call
An entry point to OS code
Allows user programs to request OS services
APIs/library functions usually provide an interface to system calls
e.g., language-level I/O functions map user parameters into system-call format
Thus, the run-time support system of a programming language acts as an interface between the programmer and the OS interface
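As an illustration (using Python's os module; the function name and temp-file detail are my own), the low-level wrappers below map almost one-to-one onto the open/write/read/close system calls, while higher-level library calls such as the built-in open() add buffering and formatting on top:

```python
import os
import tempfile

def roundtrip(data: bytes) -> bytes:
    """Write and read bytes via the thin os-level wrappers that correspond
    almost directly to the open(2)/write(2)/read(2)/close(2) system calls."""
    fd, path = tempfile.mkstemp()          # open(2) under the hood
    try:
        os.write(fd, data)                 # write(2)
        os.close(fd)                       # close(2)
        fd = os.open(path, os.O_RDONLY)    # open(2)
        out = os.read(fd, len(data))       # read(2)
        os.close(fd)
    finally:
        os.unlink(path)                    # unlink(2)
    return out
```

Each of these calls triggers the user-to-kernel mode switch described below; the library layer's job is only to marshal the arguments into system-call form.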
Mode Switching
System calls cross the user/kernel boundary.
A system call initiates a mode switch from user to kernel mode
A special instruction (a software interrupt, or trap) transfers control to a location in the interrupt vector
The OS executes kernel code; a mode switch occurs again when control returns to the user process
Review Topics
Processes & Threads
Scheduling
Synchronization
Memory Management
File and I/O Management
Review of Processes
Processes
process image
states and state transitions
process switch (context switch)
Threads
Concurrency
Process Definition
A process is an instance of a program in execution. It encompasses the static concept of program and the dynamic aspect of execution. As the process runs, its context (state) changes: register contents, memory contents, etc., are modified by execution.
[State-transition diagram: ready, running, blocked]
Other States
New, Exit
Suspended (swapped): suspended blocked, suspended ready
Context Switch
(sometimes called process switch)
The status (context) of one process is saved; the status of the second process is restored. Don't confuse this with a mode switch.
Concurrent Processes
Two processes are concurrent if their executions overlap in time. In a uniprocessor environment, multiprogramming provides concurrency. In a multiprocessor, true parallel execution can occur.
Protection
When multiple processes exist at the same time, and execute concurrently, the OS must protect them from mutual interference. Memory protection (memory isolation) prevents one process from accessing the physical address space of another process. Base/limit registers, virtual memory are techniques to achieve memory protection.
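A minimal sketch of the base/limit check (illustrative only; real hardware performs this test on every memory reference and raises a trap on violation):

```python
def legal_access(addr: int, base: int, limit: int) -> bool:
    """Base/limit memory protection: a reference is legal only if the
    address falls inside the process's region [base, base + limit)."""
    return base <= addr < base + limit
```

With base = 100 and limit = 100, addresses 100 through 199 are legal and anything outside that window would trap to the OS.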
Threads
Several threads can share the address space of a single process, along with resources such as open files. Each thread has its own stack, program counter (PC), and TCB (thread control block).
Each thread executes a separate section of the code and has private data; all threads can access the global data of the process.
Two threads in a single process can share global data automatically, as easily as two functions in a single program can.
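The sharing described above can be sketched with Python's threading module (names are illustrative; the lock is one common way to make the shared update safe): the module-level counter is shared by all threads, while each thread's local variables stay private on its own stack.

```python
import threading

counter = 0                      # global data: shared by all threads
lock = threading.Lock()          # protects the shared counter

def worker(increments):
    global counter
    local = 0                    # private data: lives on this thread's stack
    for _ in range(increments):
        local += 1
        with lock:               # critical section for the shared update
            counter += 1

def run(num_threads=2, increments=10000):
    """Start the threads, wait for them, and return the shared total."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(increments,))
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Without the lock, the two interleaved read-modify-write sequences could lose updates, which previews the synchronization material below.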
Types of Threads
Kernel-level threads are created and managed by the kernel, just as processes are; mode switches are required when KLTs are created, scheduled, switched, etc.
User-level threads are created by libraries that run at user level; thread creation, scheduling, etc., can be done at the cost of a function call.
Review Topics
Processes & Threads
Scheduling
Synchronization
Memory Management
File and I/O Management
Multiprogramming vs Multitasking
Multiprogramming: the technique of running several programs at a time, creating logical parallelism. The OS keeps several jobs in memory simultaneously, selects a job from the job pool, and starts executing it. When that job needs to wait for an I/O operation, the CPU is switched to another job, so the CPU is never idle.
Multitasking: the logical extension of multiprogramming. After a certain amount of time the CPU is switched to another job; the difference is that switching occurs so frequently that users can interact with each program while it is running. This concept is also known as time-sharing. Sometimes the two terms are used interchangeably.
Multiprocessing: involves multiple processors
Scheduling Algorithms:
FCFS (first-come, first-served): nonpreemptive; processes run until they complete or block themselves for an event wait
RR (round robin): preemptive FCFS, based on a time slice
Time slice = length of time a process can run before being preempted; the process returns to the Ready state when preempted
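Both policies are easy to simulate under a simplified, illustrative model (all bursts arrive at time 0, no I/O): RR with a quantum at least as long as the longest burst degenerates into FCFS.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling for CPU bursts that all arrive at
    t = 0. Returns the completion time of each process, by index."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))   # the Ready queue
    done = [0] * len(bursts)
    t = 0
    while ready:
        i = ready.popleft()             # dispatch the next ready process
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)             # preempted: back to Ready state
        else:
            done[i] = t                 # finished
    return done
```

For two 3-unit bursts, a quantum of 1 interleaves them (completions at t = 5 and 6), while a quantum of 10 reproduces FCFS (completions at t = 3 and 6).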
Scheduling Goals
Optimize turnaround time and/or response time
Optimize throughput
Avoid starvation (be fair)
Respect priorities (static or dynamic)
Review Topics
Processes & Threads
Scheduling
Synchronization
Memory Management
File and I/O Management
Interprocess Communication
Processes (or threads) that cooperate to solve problems must exchange information. Two approaches:
Shared memory Message passing (involves copying information from one process address space to another)
Shared memory is more efficient (no copying), but isn't always possible.
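Message passing can be sketched with a kernel pipe (Python's os wrappers; here both ends live in one process for brevity, but the copy-through-the-kernel behavior is the same as between two processes):

```python
import os

def send_receive(msg: bytes) -> bytes:
    """Message passing through a pipe: the data is copied from the sender
    into a kernel buffer, then from the kernel buffer to the receiver --
    two copies, in contrast to shared memory's zero."""
    read_end, write_end = os.pipe()
    os.write(write_end, msg)        # sender: copy into the kernel buffer
    os.close(write_end)
    out = os.read(read_end, len(msg))  # receiver: copy out of the kernel
    os.close(read_end)
    return out
```

The same pipe would work across a fork() boundary, which is the usual inter-process use.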
Process/Thread Synchronization
Concurrent processes are asynchronous: the relative order of events within the two processes cannot be predicted in advance. If processes are related (exchange information in some way) it may be necessary to synchronize their activity at some points.
Instruction Streams
Process A: A1, A2, A3, A4, A5, A6, A7, A8, ..., Am
Process B: B1, B2, B3, B4, B5, ..., Bn
One possible interleaving: A1, A2, B1, B2, B3, A3, A4, B4, B5, ..., Bn, A5, A6, ..., Am
Mutual Exclusion
A critical section is the code that accesses shared data or resources. A solution to the critical section problem must ensure that only one process at a time can execute its critical section (CS). Two separate shared resources can be accessed concurrently.
Synchronization
Processes and threads are responsible for their own synchronization, but programming languages and operating systems may have features to help. Virtually all operating systems provide some form of semaphore, which can be used for mutual exclusion and other forms of synchronization.
Semaphores
Definition: A semaphore is an integer variable (S) which can only be accessed in the following ways:
Initialize(S)
P(S)  // wait(S)
V(S)  // signal(S)
The operating system must ensure that all operations are indivisible, and that no other access to the semaphore variable is allowed.
High-level Algorithms
Assume S is a semaphore.
P(S): if S >= 1 then S = S - 1
      else block the process on S's queue
V(S): if some processes are blocked on the queue for S then unblock one of them
      else S = S + 1
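The P/V algorithm above can be sketched with a mutex and condition variable (a teaching sketch in Python; threading.Semaphore is the production equivalent). This variant always increments in V and lets the awakened waiter re-check and decrement, which is behaviorally equivalent to the pseudocode:

```python
import threading

class Semaphore:
    """Counting semaphore built from a mutex + condition variable.
    Illustrative only; use threading.Semaphore in real code."""
    def __init__(self, value=1):
        self.value = value
        self.cond = threading.Condition()   # mutex + wait queue

    def P(self):                            # wait(S)
        with self.cond:
            while self.value < 1:           # block on S's queue
                self.cond.wait()
            self.value -= 1                 # S = S - 1

    def V(self):                            # signal(S)
        with self.cond:
            self.value += 1                 # S = S + 1 ...
            self.cond.notify()              # ... and wake one waiter, if any
```

Initialized to 1, P and V bracket a critical section and give mutual exclusion; initialized to n, the semaphore admits up to n processes at once.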
Deadlock
A set of processes is deadlocked when each is in the Blocked state because it is waiting for a resource that is allocated to one of the others. Deadlocks can only be resolved by agents outside of the deadlock.
Wait-For Graphs
Simple deadlocks can be modeled by wait-for graphs (WFG), where nodes represent processes and edges represent the waiting relation between processes.
[Figure: wait-for graph with processes P1 and P2]
Causes of Deadlock
Mutual exclusion (exclusive access) Wait while hold (hold and wait) No preemption Circular wait
Exceptions: some transaction systems have roll-back capability, or apply ordering techniques to control the acquisition of locks.
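One standard countermeasure is to break the circular-wait condition by always acquiring locks in a fixed global order. A sketch (Python threading; ordering by object id is an arbitrary but fixed choice, and the demo function is my own):

```python
import threading

def acquire_in_order(lock_a, lock_b):
    """Acquire two locks in a fixed global order (here: by object id),
    so no circular wait can form regardless of argument order."""
    first, second = sorted((lock_a, lock_b), key=id)
    first.acquire()
    second.acquire()
    return first, second

def release_pair(first, second):
    second.release()
    first.release()

def no_deadlock_demo(iterations=500):
    """Two threads grab the same pair of locks in opposite argument order;
    the ordering discipline prevents deadlock. True when both finish."""
    la, lb = threading.Lock(), threading.Lock()
    def work(x, y):
        for _ in range(iterations):
            f, s = acquire_in_order(x, y)
            release_pair(f, s)
    t1 = threading.Thread(target=work, args=(la, lb))
    t2 = threading.Thread(target=work, args=(lb, la))
    t1.start(); t2.start()
    t1.join(timeout=10); t2.join(timeout=10)
    return not (t1.is_alive() or t2.is_alive())
```

If each thread instead acquired the locks in its own argument order, the classic hold-and-wait cycle could occur.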
Review Topics
Processes & Threads
Scheduling
Synchronization
Memory Management
File and I/O Management
Memory Management
Introduction Allocation methods
One user at a time Multiple processes, contiguous allocation Multiple processes, virtual memory
Allocation Methods:
Multiple Processes, Contiguous Allocation
Several processes reside in memory at one time (multiprogramming). The entire process image for each process is stored in a contiguous set of locations. Drawbacks:
Limited number of processes at one time
Fragmentation of memory
[Figure: memory map with unused holes between allocated regions, illustrating fragmentation]
Allocation Methods:
Multiple Processes, Virtual Memory
Method:
allow a program to be loaded non-contiguously
allow a program to execute even if it is not entirely in memory
Virtual Memory
The OS software lets a process execute as if it is loaded into a contiguous set of addresses, when in fact this is not the case. Actually, the process address space is not contiguously stored and parts of it may not even be in memory at all.
Paging - continued
General idea: save space by loading only those pages that a program needs now. Result: more programs can be in memory at any given time. Problems:
How to tell what's needed
How to keep track of where the pages are
How to translate virtual addresses to physical addresses
Virtual Addresses
Addresses in an executable are virtual: assigned without regard to physical addresses. During execution, a virtual address must first be translated to the physical, or real, address. Translation is performed by the MMU, using data from the page table.
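The translation can be sketched as arithmetic on the page number and offset (illustrative page size and table format; a missing entry stands in for a page fault):

```python
PAGE_SIZE = 4096  # assumed page size, for illustration

def translate(vaddr: int, page_table: dict) -> int:
    """MMU-style translation: split the virtual address into a page number
    and an offset, look up the frame, and recombine. page_table maps
    page number -> frame number for resident pages."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        # In hardware this would trap to the OS as a page fault.
        raise LookupError(f"page fault: page {page} not resident")
    return page_table[page] * PAGE_SIZE + offset
```

For example, if page 1 is in frame 7, virtual address 4100 (page 1, offset 4) maps to physical address 7 * 4096 + 4 = 28676.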
OS Responsibilities
Maintain page tables Perform page replacement
Review Topics
Processes & Threads
Scheduling
Synchronization
Memory Management
File and I/O Management
File Systems
Maintaining a shared file system is a major job for the operating system. Single user systems require protection against loss, efficient look-up service, etc. Multiple user systems also need to provide access control.
End of OS Review