Good Material OS
Objectives of Operating Systems
- To hide details of hardware by creating abstraction.
- To allocate resources to processes (manage resources).
Fourth Generation
With the development of LSI (Large Scale Integration) circuits and chips, operating systems entered the personal computer and workstation age. Microprocessor technology evolved to the point that it became possible to build desktop computers as powerful as the mainframes of the 1970s.
System Components
Process Management
A process is a single instance of a program in execution. Many processes can be running the same program.
The five major activities of an operating system with regard to process management are:
- Creation and deletion of user and system processes.
- Suspension and resumption of processes.
- A mechanism for process synchronization.
- A mechanism for process communication.
- A mechanism for deadlock handling.
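As a concrete sketch of process creation and deletion (the first activity above), here is a minimal POSIX example, not from the original notes: fork() creates a child, the child replaces itself with another program via execlp(), and the parent suspends itself with waitpid() until the child terminates. The echoed message is purely illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                  /* create a new (child) process */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {                      /* child: run another program */
            execlp("echo", "echo", "hello from the child", (char *)NULL);
            perror("execlp");                /* reached only if exec fails */
            exit(1);
        }
        waitpid(pid, NULL, 0);               /* parent: wait until the child exits */
        printf("parent: child %d terminated\n", (int)pid);
        return 0;
    }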
Main-Memory Management
Main memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data shared by the CPU and I/O devices. The major activities of an operating system with regard to memory management are:
- Keep track of which parts of memory are currently being used and by whom.
- Decide which processes are to be loaded into memory when memory space becomes available.
- Allocate and deallocate memory space as needed.
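A toy sketch of that bookkeeping, assuming fixed-size frames and a first-fit placement policy (both are illustrative assumptions, not something the notes specify): a frame table records which frames are in use and which process owns them.

    #include <stdio.h>

    #define FRAMES 8

    int owner[FRAMES] = {0};   /* one entry per frame; 0 means free */

    /* first-fit: give the process the first run of free frames */
    int allocate(int pid, int nframes) {
        for (int start = 0; start + nframes <= FRAMES; start++) {
            int ok = 1;
            for (int i = 0; i < nframes; i++)
                if (owner[start + i] != 0) { ok = 0; break; }
            if (ok) {
                for (int i = 0; i < nframes; i++) owner[start + i] = pid;
                return start;
            }
        }
        return -1;  /* not enough contiguous free space */
    }

    void release(int pid) {    /* deallocate everything pid owns */
        for (int i = 0; i < FRAMES; i++)
            if (owner[i] == pid) owner[i] = 0;
    }

    int main(void) {
        allocate(1, 3);                      /* P1 takes frames 0-2 */
        allocate(2, 2);                      /* P2 takes frames 3-4 */
        release(1);                          /* frames 0-2 become free again */
        printf("P3 placed at frame %d\n", allocate(3, 2));   /* prints 0 */
        return 0;
    }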
I/O System Management
One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user.

Secondary-Storage Management
Generally speaking, systems have several levels of storage, including primary storage, secondary storage, and cache storage. Instructions and data must be placed in primary storage or cache to be referenced by a running program.
Networking
A distributed system is a collection of processors that do not share memory, peripheral devices, or a clock. The processors communicate with one another through communication lines, collectively called a network.
Protection System
Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system.
A process, on the other hand, includes:
- The current value of the Program Counter (PC).
- The contents of the processor's registers.
- The values of the variables.
- The process stack (SP), which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables.
- A data section that contains global variables.
(A minimal sketch combining these fields with the process states appears after the state list below.)
Process State
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:
- New: The process is being created.
- Running: A process is said to be running if it has the CPU, that is, it is actually using the CPU at that particular instant.
- Blocked (or waiting): A process is said to be blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. Note that a blocked process is unable to run until some external event occurs.
- Ready: A process is said to be ready if it is waiting to be assigned to a processor.
- Terminated: The process has finished execution.
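A minimal sketch, in C, of how the state set and the per-process data listed above might be recorded in a process control block; the exact field layout is an assumption for illustration, since real PCBs hold much more.

    /* the five states above, plus the per-process data to be saved */
    typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } proc_state;

    typedef struct pcb {
        int          pid;            /* process identifier */
        proc_state   state;          /* one of the five states above */
        void        *pc;             /* saved value of the program counter */
        long         registers[16];  /* saved contents of processor registers */
        void        *stack_ptr;      /* the process stack (SP) */
        void        *data_section;   /* global variables */
        struct pcb  *next;           /* link for the ready or waiting queue */
    } pcb;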
Basic Concepts
The idea of multiprogramming is relatively simple. A process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU would then just sit idle. Scheduling is a fundamental operating-system function. Almost all computer resources are scheduled before use.
Context Switch
To give each process on a multiprogrammed machine a fair share of the CPU, a hardware clock generates interrupts periodically. This allows the operating system to schedule all processes in main memory (using scheduling algorithm) to run on the CPU at equal intervals. Each switch of the CPU from one process to another is called a context switch.
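A context switch can be demonstrated at user level with the POSIX ucontext API (deprecated in newer standards but still widely available); the task function and stack size below are illustrative, and the kernel's real switch additionally saves privileged state.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;

    static void task(void) {
        printf("task: running after the switch\n");
        /* returning resumes main_ctx via uc_link */
    }

    int main(void) {
        static char stack[64 * 1024];        /* stack for the task context */
        getcontext(&task_ctx);               /* initialize the context */
        task_ctx.uc_stack.ss_sp = stack;
        task_ctx.uc_stack.ss_size = sizeof stack;
        task_ctx.uc_link = &main_ctx;        /* where to go when task returns */
        makecontext(&task_ctx, task, 0);
        printf("main: saving state and switching\n");
        swapcontext(&main_ctx, &task_ctx);   /* save main, load task: a context switch */
        printf("main: state restored, back in main\n");
        return 0;
    }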
Preemptive Scheduling
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, an I/O request, or an invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, completion of I/O).
4. When a process terminates.
Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves:
- Switching context.
- Switching to user mode.
- Jumping to the proper location in the user program to restart that program.
Scheduling Criteria
Different CPU scheduling algorithms have different properties and may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms.
Many criteria have been suggested for comparing CPU scheduling algorithms. Criteria that are used include the following:
- CPU utilization
- Throughput
- Turnaround time
- Waiting time
- Response time
First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

With the processes arriving in the order P1, P2, P3, the Gantt chart is: P1 runs 0-24, P2 runs 24-27, P3 runs 27-30. The average waiting time is (0 + 24 + 27) / 3 = 17.
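One way to reproduce those numbers is the short sketch below (the process names and burst times come from the example; the code itself is not from the notes).

    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};            /* P1, P2, P3, in arrival order */
        int n = 3, clock = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {        /* FCFS: run strictly in arrival order */
            printf("P%d runs %d-%d (waited %d)\n",
                   i + 1, clock, clock + burst[i], clock);
            total_wait += clock;             /* each process waits for all before it */
            clock += burst[i];
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);  /* 17.00 */
        return 0;
    }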
Shortest-Job-First Scheduling
Process   Burst Time
P1        6
P2        8
P3        7
P4        3

Using SJF, the Gantt chart is: P4 runs 0-3, P1 runs 3-9, P3 runs 9-16, P2 runs 16-24. The average waiting time is (3 + 16 + 9 + 0) / 4 = 7.
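The same calculation for SJF: with all processes available at time 0, sorting by burst time and then running them in that order gives the schedule above. Again a sketch, not from the notes.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int id, burst; } Proc;

    static int by_burst(const void *a, const void *b) {
        return ((const Proc *)a)->burst - ((const Proc *)b)->burst;
    }

    int main(void) {
        Proc p[] = {{1, 6}, {2, 8}, {3, 7}, {4, 3}};
        int n = 4, clock = 0, total_wait = 0;
        qsort(p, n, sizeof p[0], by_burst);  /* shortest burst first */
        for (int i = 0; i < n; i++) {
            printf("P%d runs %d-%d (waited %d)\n",
                   p[i].id, clock, clock + p[i].burst, clock);
            total_wait += clock;
            clock += p[i].burst;
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);  /* 7.00 */
        return 0;
    }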
Priority Scheduling
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            3
P4        1            4
P5        5            2

With a smaller priority number meaning higher priority, the Gantt chart is: P2 runs 0-1, P5 runs 1-6, P1 runs 6-16, P3 runs 16-18, P4 runs 18-19 (P1 runs before P3 because their priorities tie and P1 arrived first). The average waiting time is (6 + 0 + 16 + 18 + 1) / 5 = 8.2.
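Non-preemptive priority scheduling is the same loop with a different sort key. The tie-break on FCFS order is an assumption consistent with the Gantt chart above; qsort is not stable, so the comparator handles the tie explicitly.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int id, burst, prio; } Proc;

    static int by_prio(const void *a, const void *b) {
        const Proc *x = a, *y = b;
        if (x->prio != y->prio)
            return x->prio - y->prio;    /* smaller number = higher priority */
        return x->id - y->id;            /* break priority ties in FCFS order */
    }

    int main(void) {
        Proc p[] = {{1, 10, 3}, {2, 1, 1}, {3, 2, 3}, {4, 1, 4}, {5, 5, 2}};
        int n = 5, clock = 0, total_wait = 0;
        qsort(p, n, sizeof p[0], by_prio);
        for (int i = 0; i < n; i++) {
            printf("P%d runs %d-%d (waited %d)\n",
                   p[i].id, clock, clock + p[i].burst, clock);
            total_wait += clock;
            clock += p[i].burst;
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);  /* 8.20 */
        return 0;
    }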
Round-Robin Scheduling
Process   Burst Time
P1        24
P2        3
P3        3
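A round-robin simulation over the table above. The notes do not state a time quantum here, so q = 4 (the value used in the standard textbook version of this example) is an assumption; with it, the Gantt chart is P1 0-4, P2 4-7, P3 7-10, then P1 10-30, and the average waiting time is (6 + 4 + 7) / 3 ≈ 5.67.

    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3}, remaining[] = {24, 3, 3}, finish[3] = {0};
        int n = 3, q = 4, clock = 0, left = n;   /* q = 4 is an assumed quantum */
        while (left > 0) {
            for (int i = 0; i < n; i++) {        /* cycle through the processes */
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < q ? remaining[i] : q;
                printf("P%d runs %d-%d\n", i + 1, clock, clock + slice);
                clock += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0) { finish[i] = clock; left--; }
            }
        }
        double total_wait = 0;
        for (int i = 0; i < n; i++)
            total_wait += finish[i] - burst[i];  /* waiting = turnaround - burst */
        printf("average waiting time = %.2f\n", total_wait / n);
        return 0;
    }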
In multilevel queue scheduling, processes are permanently assigned to one queue, based on some property of the process, such as
- Memory size
- Process priority
- Process type
The algorithm chooses a process from the occupied queue that has the highest priority, and runs that process either preemptively or non-preemptively, as sketched below.
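A sketch of that selection rule; the levels, capacities, and pids are made up for illustration.

    #include <stdio.h>

    #define LEVELS 3
    #define QCAP 4

    /* level 0 is the highest priority; each row is a FIFO queue */
    int queue[LEVELS][QCAP] = {
        {0},          /* system processes: currently empty */
        {101, 102},   /* interactive processes */
        {201},        /* batch processes */
    };
    int count[LEVELS] = {0, 2, 1};

    int pick_next(void) {
        for (int lvl = 0; lvl < LEVELS; lvl++)   /* scan from highest priority down */
            if (count[lvl] > 0)
                return queue[lvl][0];            /* head of first occupied queue */
        return -1;                               /* nothing is runnable */
    }

    int main(void) {
        printf("next pid to run: %d\n", pick_next());  /* prints 101 */
        return 0;
    }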
6. Process Synchronization
A cooperating process is one that can affect or be affected by the other processes executing in the system. Cooperating processes may either directly share a logical address space (that is, both code and data), or be allowed to share data only through files. The former case is achieved through the use of lightweight processes, or threads. Concurrent access to shared data may result in data inconsistency. In this lecture, we discuss various mechanisms to ensure the orderly execution of cooperating processes that share a logical address space, so that data consistency is maintained.
Cooperating Processes
The concurrent processes executing in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system. On the other hand, a process is cooperating if it can affect or be affected by the other processes executing in the system.
There are several reasons for providing an environment that allows process cooperation:
Information sharing
Computation speedup
Modularity
Convenience
Race condition: a situation in which several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place.
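The classic demonstration, as a sketch using POSIX threads (compile with -pthread): two threads increment a shared counter with no synchronization. Because counter++ is a read-modify-write sequence, updates get lost, and the final value is usually well below 2,000,000.

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                    /* shared data, unprotected */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                   /* read, increment, write back: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }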
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision of which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
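One classic construction that satisfies all three requirements for two processes is Peterson's solution, sketched below. This is the textbook algorithm: on modern hardware it additionally needs memory fences or atomic operations, which are omitted here for clarity.

    #include <stdbool.h>

    volatile bool flag[2] = {false, false};  /* flag[i]: process i wants to enter */
    volatile int turn = 0;                   /* which process defers to the other */

    void enter_region(int self) {            /* self is 0 or 1 */
        int other = 1 - self;
        flag[self] = true;                   /* announce intent */
        turn = other;                        /* politely yield priority */
        while (flag[other] && turn == other)
            ;                                /* busy-wait until safe to proceed */
    }

    void leave_region(int self) {
        flag[self] = false;                  /* no longer in the critical section */
    }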
DEADLOCKS
A process requests resources; if the resources are not available at that time, the process enters a wait state. It may happen that waiting processes will never again change state, because the resources they have requested are held by other waiting processes. This situation is called a deadlock. In this lecture, we describe methods that an operating system can use to deal with the deadlock problem.
Resources
A process must request a resource before using it, and must release the resource after using it. A process may request as many resources as it requires to carry out its designated task. A process may utilize a resource only in the following sequence:
Request
Use
Release
Deadlock Characterization
In a deadlock, processes never finish executing and system resources are tied up, preventing other jobs from ever starting. Before we discuss the various methods for dealing with the deadlock problem, we shall describe features that characterize deadlocks.
Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
- Mutual exclusion
- Hold and wait
- No preemption
- Circular wait
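All four conditions can be seen in the classic two-lock example below, a sketch using POSIX threads: each thread holds one mutex (mutual exclusion, hold and wait) that cannot be taken away (no preemption) while waiting for the other's mutex (circular wait). With unlucky timing the program hangs forever; acquiring the locks in the same global order in both threads would break the circular wait.

    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

    static void *thread_a(void *arg) {
        (void)arg;
        pthread_mutex_lock(&m1);
        pthread_mutex_lock(&m2);   /* may block forever if B holds m2 */
        pthread_mutex_unlock(&m2);
        pthread_mutex_unlock(&m1);
        return NULL;
    }

    static void *thread_b(void *arg) {
        (void)arg;
        pthread_mutex_lock(&m2);   /* opposite order from thread A */
        pthread_mutex_lock(&m1);   /* may block forever if A holds m1 */
        pthread_mutex_unlock(&m1);
        pthread_mutex_unlock(&m2);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_create(&b, NULL, thread_b, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        puts("finished without deadlocking (this run got lucky)");
        return 0;
    }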
Methods for Handling Deadlocks
Principally, there are three different methods for dealing with the deadlock problem:
- We can use a protocol to ensure that the system will never enter a deadlock state.
- We can allow the system to enter a deadlock state and then recover.
- We can ignore the problem and pretend that deadlocks never occur in the system; this approach is used by most operating systems, including UNIX.
Deadlock Prevention
By ensuring that at least one of the four necessary conditions cannot hold, we can prevent the occurrence of a deadlock:
- Mutual exclusion: not required for sharable resources; must hold for nonsharable resources.
- Hold and wait: must guarantee that whenever a process requests a resource, it does not hold any other resources.
- No preemption: if a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources it currently holds are released.
- Circular wait: impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.
Deadlock Avoidance
Deadlock avoidance requires that the system have some additional a priori information available.
Simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need. The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition. Resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.
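At the heart of the best-known avoidance scheme, the Banker's algorithm, is a safety check: the state is safe if the processes can all finish in some order, each using only the currently available resources plus what earlier finishers release. A sketch follows; the process count, resource count, and matrix values are illustrative assumptions, not from the notes.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define P 3   /* number of processes */
    #define R 2   /* number of resource types */

    bool is_safe(int avail[R], int max[P][R], int alloc[P][R]) {
        int work[R];
        bool finish[P] = {false};
        memcpy(work, avail, sizeof work);
        int done = 0;
        while (done < P) {
            bool progress = false;
            for (int i = 0; i < P; i++) {
                if (finish[i]) continue;
                bool can_run = true;
                for (int j = 0; j < R; j++)        /* need = max - alloc */
                    if (max[i][j] - alloc[i][j] > work[j]) can_run = false;
                if (can_run) {                     /* pretend i runs to completion */
                    for (int j = 0; j < R; j++)
                        work[j] += alloc[i][j];    /* i releases its allocation */
                    finish[i] = true;
                    progress = true;
                    done++;
                }
            }
            if (!progress) return false;           /* no process can finish: unsafe */
        }
        return true;
    }

    int main(void) {
        int avail[R]    = {2, 1};
        int max[P][R]   = {{4, 2}, {1, 1}, {3, 1}};
        int alloc[P][R] = {{2, 0}, {1, 1}, {1, 0}};
        printf("state is %s\n", is_safe(avail, max, alloc) ? "safe" : "unsafe");
        return 0;
    }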
Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment, the system must provide:
- An algorithm that examines the state of the system to determine whether a deadlock has occurred.
- An algorithm to recover from the deadlock.
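One common detection approach for single-instance resources is to build a wait-for graph, with an edge from Pi to Pj when Pi is waiting for a resource that Pj holds; a deadlock exists exactly when the graph has a cycle. A DFS sketch follows, with a made-up example graph.

    #include <stdbool.h>
    #include <stdio.h>

    #define N 4   /* number of processes */

    bool dfs(bool g[N][N], int v, int color[N]) {
        color[v] = 1;                       /* gray: on the current DFS path */
        for (int w = 0; w < N; w++) {
            if (!g[v][w]) continue;
            if (color[w] == 1) return true; /* back edge: a cycle, so deadlock */
            if (color[w] == 0 && dfs(g, w, color)) return true;
        }
        color[v] = 2;                       /* black: fully explored */
        return false;
    }

    int main(void) {
        /* P0 -> P1 -> P2 -> P0 form a cycle; P3 waits for nothing */
        bool g[N][N] = {{false}};
        g[0][1] = g[1][2] = g[2][0] = true;
        int color[N] = {0};
        bool deadlock = false;
        for (int v = 0; v < N; v++)
            if (color[v] == 0 && dfs(g, v, color)) deadlock = true;
        printf("deadlock %s\n", deadlock ? "detected" : "not detected");
        return 0;
    }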
Memory Management
- A program must be brought (from disk) into memory and placed within a process for it to be run.
- Main memory and registers are the only storage the CPU can access directly.
- Register access takes one CPU clock cycle (or less); a main-memory access can take many cycles.
- A cache sits between main memory and the CPU registers.
- Protection of memory is required to ensure correct operation.
Virtual Memory