Course Name: Introduction to Logic Circuit & Digital Design

Course Code: BCAC0003

B.Tech AIML 3rd Semester


Fundamentals of Operating System
Course Code:
Presented by:
Mr. Yadvendra Yadav
Assistant Professor, Dept. of CSE
Sanskriti University, Mathura
An Operating System
• An Operating System (OS) is a collection of programs that provides an
interface between a computer user and the computer hardware.
• An operating system is software that performs all the basic tasks like
file management, memory management, process management,
handling input and output, and controlling peripheral devices such as
disk drives and printers.
• It is a collection of programs that acts as an intermediary between a user of a
computer and the computer hardware.
• It is responsible for the management and coordination of activities and
the sharing of the resources of the computer.
Computer System
Components
• Hardware – provides basic computing resources (CPU, memory, I/O
devices).
• Operating System – controls and coordinates the use of hardware among
application programs.
• Application Programs – solve the computing problems of users (compilers,
database systems, video games, business programs such as banking
software).
• Users – people, machines, other computers.
Four Components of a Computer System
Operating System Views
• Resource allocator: allocates the resources (software and hardware) of the
computer system and manages them efficiently.
• Control program: controls the execution of user programs and the operation of
I/O devices.
• Kernel: the program that executes forever (everything else is an
application with respect to the kernel).
Goals of an Operating System
• Simplify the execution of user programs and make solving user problems
easier.
• Use computer hardware efficiently.
• Allow sharing of hardware and software resources.
• Make application software portable and versatile.
• Provide isolation, security and protection among user programs.
• Improve overall system reliability
Functions/Components of Operating System:

1. Process Management
• A program in its execution state is known as a process.
• A process needs certain resources including CPU time, memory, files and
I/O devices to accomplish its task.
• These resources are either given to the process when it is created or
allocated to it while it is running.
• A program is a passive entity, such as the contents of a file stored on disk,
whereas a process is an active entity.
The operating system is responsible for the following activities in
process management:
• Creating and deleting both user and system processes
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
• Providing mechanisms for deadlock handling.
2. Memory management
Main memory is a collection of quickly accessible data shared by the CPU and I/O
devices.
The central processor reads instructions from main memory (during instruction-
fetch cycle) and both reads and writes data from main memory (during data-
fetch cycle).
The operating system is responsible for the following activities in memory
management:
• Keeping track of which parts of memory are currently being used and by whom
• Deciding which processes and data to move into and out of memory
• Allocating and deallocating memory space as needed.
3. File-System Management
The operating system is responsible for the following activities with file management:
• Creating and deleting files
• Creating and deleting directories to organize files
• Supporting primitives for manipulating files and directories
• Backing up files on stable (nonvolatile) storage media.

4. Secondary-Storage Management
The operating system is responsible for the following activities with disk management:
• Free-space management
• Storage allocation
• Disk scheduling
5. I/O system Management
The operating system is responsible for the following activities with I/O subsystem:
• A memory-management component that includes buffering, caching, and spooling
• A general device-driver interface
• Drivers for specific hardware devices

6. Protection and Security
• Mechanisms ensure that files, memory segments, the CPU, and other resources can
be operated on only by those processes that have been granted proper
authorization from the operating system.
• For example, memory-addressing hardware ensures that a process can execute
only within its own address space.
• Protection is a mechanism for controlling the access of processes or users to the
resources defined by a computer system.
7. Networking
• A distributed system is a collection of physically separated computer systems that are networked
to provide the users with access to the various resources that the system maintains.
• Access to a shared resource increases computation speed, functionality, data availability, and
reliability.
• A Network Operating System (NOS) provides remote access to users.
• It also provides sharing of hardware and software resources between remote machines and the local system.

8. Command Interpreter
• To interface with the operating system, we use a command-line interface or command interpreter
that allows users to directly enter commands to be performed by the operating system.
• The main function of the command interpreter is to get and execute the user-specified
commands. Many of the commands given at this level manipulate files: create, delete, list, print,
copy, execute, and so on. E.g., MS-DOS and UNIX shells.
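As an illustration of what a command interpreter does, here is a minimal sketch in Python (not any real shell): it reads a line, splits it into a command and arguments, and asks the operating system to run it. The prompt string and the "exit" command are arbitrary choices for this sketch.

```python
# Minimal command-interpreter sketch (illustrative only, not a real shell).
# It reads a line, splits it into a command and arguments, and asks the OS
# to run it; typing "exit" leaves the loop.
import shlex
import subprocess

def tiny_shell():
    while True:
        try:
            line = input("myshell> ").strip()
        except EOFError:
            break
        if not line:
            continue
        if line == "exit":
            break
        args = shlex.split(line)        # e.g. "ls -l" -> ["ls", "-l"]
        try:
            subprocess.run(args)        # parent waits until the command finishes
        except FileNotFoundError:
            print(f"{args[0]}: command not found")

if __name__ == "__main__":
    tiny_shell()
```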
Types of Operating System
• Single-user Operating System: This OS provides the environment for a
single user, i.e. only one user can interact with the system at a time. E.g.
MS-DOS, MS Windows XP, ME, 2000, etc.
• Multi-user Operating System: This OS provides the environment for
multiple users, i.e. many users can interact with the system at a time.
These users are remotely connected to the system, taking advantage of the
shared resources of the master system through networking. E.g. UNIX,
LINUX.
Serial Processing Operating System

• Early computers, from the late 1940s to the mid-1950s.
• The programmer interacted directly with the computer hardware.
• These machines are called bare machines, as they don't have an OS.
• Every computer system was programmed in its own machine language.
• Used punch cards, paper tapes, and language translators.

• In a typical sequence, the editor is first called to create the source code of the user
program, then the translator is called to convert the source code into object code, and
finally the loader is called to load the executable program into main memory for
execution.

• If syntax errors are detected, the whole sequence must be restarted from the
beginning.
Batch processing Operating System

• Batch processing is executing a series of non-interactive jobs
all at one time.
• Usually, jobs with similar requests are grouped together during
working hours and then executed during the evening or
whenever the computer is idle.
• Batching similar jobs brought higher utilization of system
resources.
Multiprogramming
• Multiprogramming is a technique to execute a number of
programs simultaneously by a single processor.
• In multiprogramming, a number of processes reside in main
memory at a time.
• The OS picks and begins to execute one of the jobs in
main memory.
• If any I/O wait occurs in a process, the CPU switches from
that job to another job.
• Hence the CPU is not idle at any time.
• The figure depicts the layout of a multiprogramming system: main
memory holds the OS and several jobs (Job 1 to Job 5) at a time, and
the CPU executes them one by one.
• Advantages:
• Efficient memory utilization.
• Throughput increases. (Throughput is a measure of how many units of
information a system can process in a given amount of time.)
• CPU is never idle, so performance increases.
Multiprocessor system: This system has more than one processor, which
share a common bus, clock, peripheral devices, and sometimes memory.
These systems have the following advantages:
• Increased throughput: By increasing the number of processors, we get more
work done in less time.
• Economy of scale: Multiprocessor systems are cost-effective, as several
processors share the same resources of the system (memory, peripherals, etc.).
• Increased reliability: Each processor is fairly allotted a different job, so
failure of one processor will not halt the system; it will only slow down the
performance. For example, if we have ten processors and one fails, then each
of the remaining nine processors shares the work of the failed processor. Thus the
system will be roughly 10% slower rather than failing altogether.
This system can be categorized into:

• i) SMP (Symmetric Multiprocessing): In this system each
processor runs an identical copy of the operating system, and these
processors communicate with each other whenever needed. E.g.
Windows NT, Solaris, Unix/Linux.
• ii) ASMP (Asymmetric Multiprocessing): In asymmetric
multiprocessing each processor runs a specific task. There is a
master processor which controls the whole system, and the other
processors look to the master for instructions. E.g. SunOS version 4.
Time sharing System(Multitasking)

• Time sharing, or multitasking, is a logical extension of multiprogramming.
• Multiple jobs are executed by switching the CPU between them.
• Because the CPU time is shared by different processes, these are called
“time-sharing systems”.
• A time-shared operating system uses CPU scheduling and
multiprogramming to provide each user with a small portion of a time-
shared computer.
• Time slice is defined by the OS, for sharing CPU time between processes.
• Examples: Multics, Unix, etc.,
Real time Operating System

• A real-time system has well-defined, fixed time constraints; processing
must be done within the defined constraints or the system will fail.
• Systems that control scientific experiments, medical imaging systems,
industrial control systems, and certain display systems are real-time
systems.
• They are also applicable to automobile engine fuel-injection systems,
home-appliance controllers, and weapon systems.
• There are two types of real-time systems:
• i) Hard real-time system: This system guarantees that critical tasks are
completed on time. For this, all the delays in the system must be
bounded, from the retrieval of stored data to the time it takes the operating
system to finish any request made to it.

• ii) Soft real-time system: This is a less restrictive type of system, defined as
not hard real-time, simply providing that a critical real-time task will
receive priority over other tasks and will retain that priority until it
completes.
Network Operating System: An OS that includes special functions for
connecting computers and devices into a LAN. Some OSs, such as UNIX and
the Mac OS, have networking functions built in.
Some popular NOSs for DOS and Windows systems include Novell
NetWare, Microsoft LAN Manager, and Windows NT.
• Some characteristics:
• Each computer has its own private OS, instead of running part of a global,
system-wide operating system.
• Each user normally works on his/her own system.
Distributed Operating System :-It hides the existence of multiple
computers from the user i.e. the user doesn’t know that many
computers are being used to process the data.
These computers may be located at many places around the globe. This
OS provides a single-system image to its users. All these
computers work in close coordination with each other.
• In this OS, each processor has its own memory and clock. The processors
communicate with each other through various communication lines,
such as high-speed buses and telephone lines.
Operating-System Services
1) User interface: Almost all operating systems have a user interface
(UI). This interface can take several forms:
• command-line interface (CLI), which uses text commands.
• batch interface, in which commands and directives to control those
commands are entered into files.
• graphical user interface (GUI), in which the interface is a window system with a
pointing device to direct I/O, choose from menus, and make selections,
and a keyboard to enter text.
2) Program execution: The system must be able to load a program into memory and to run
that program. The program must be able to end its execution.

3) I/O operations: A running program may require an I/O operation (such as recording to a
CD or DVD drive or blanking a CRT screen). For efficiency and protection, users usually
cannot control I/O devices directly. Therefore the operating system must provide a
means to do I/O.

4) File-system manipulation: Programs need to read and write files. They also need to
create and delete them. Finally, some programs include permissions management to allow or
deny access to files or directories based on file ownership.
5) Input/Output operations: A program which is currently executing
may require I/O, which may involve a file or another I/O device. For
efficiency and protection, users cannot directly govern the I/O devices,
so the OS provides a means to do I/O, i.e. read or write operations on
any file or device.
6) Communications. There are many circumstances in which one process needs to
exchange information with another process.
Such communication may occur between processes that are executing on the
same computer or between processes that are executing on different computer
systems tied together by a computer network.

7) Error detection. Errors may occur in the CPU and memory hardware (such as a
memory error or a power failure), in I/O devices (a network failure, or lack of
paper in the printer), and in the user program (such as an arithmetic overflow, an
attempt to access an illegal memory location, or too-great use of CPU time).
For each type of error, the operating system should take the appropriate action
to ensure correct and consistent computing.
8. Resource allocation. When there are multiple users or multiple jobs running at the same
time, resources must be allocated to each of them. Many different types of resources are
managed by the operating system through various methods such as CPU scheduling.

9. Accounting. The OS keeps track of which users use how much and what kind of computer
resources. This record keeping may be used for accounting.

10. Protection and security. When several separate processes execute concurrently, it should
not be possible for one process to interfere with the others or with the operating system
itself.
Protection involves ensuring that all access to system resources is controlled. Security of the
system from outsiders is also important; users gain access to system
resources by means of a password.
Interrupts

• An interrupt is a signal from a device attached to a computer, or
from a program within the computer, that causes the main program
that operates the computer to stop and figure out what to do next.
• The occurrence of an event is usually signaled by an interrupt from
either the hardware or the software. Hardware may trigger an
interrupt at any time by sending a signal to the CPU, usually
via the system bus. Software may trigger an interrupt by executing
a special operation called a system call or monitor call.
Interrupts processing
• The basic interrupt mechanism works as follows.
• The CPU hardware has a wire called the interrupt-request line that the CPU senses after
executing every instruction.
• When the CPU detects that a controller has asserted a signal on the interrupt-request
line, the CPU performs a state save and jumps to the interrupt-handler routine at a fixed
address in memory.
• The interrupt handler determines the cause of the interrupt, performs the necessary
processing, performs a state restore, and executes a return-from-interrupt instruction to
return the CPU to the execution state prior to the interrupt.
• We say that the device controller raises an interrupt by asserting a signal on the
interrupt-request line, the CPU catches the interrupt and dispatches it to the interrupt
handler, and the handler clears the interrupt by servicing the device.
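The cycle described above can be pictured with a toy model. This is only a conceptual sketch: the device names, the handler table, and the single "interrupt-request line" list are invented for illustration and do not correspond to real hardware or to any particular kernel.

```python
# Toy model of the interrupt cycle described above (purely illustrative).
# After every "instruction", the CPU samples the interrupt-request line;
# if it is asserted, state is saved, the handler for the pending source runs,
# state is restored, and normal execution resumes.

interrupt_request_line = []          # pending interrupts (source names)

def keyboard_handler():
    print("  [handler] keyboard serviced")

def disk_handler():
    print("  [handler] disk transfer completed")

# Interrupt vector: maps an interrupt source to its handler routine.
interrupt_vector = {"keyboard": keyboard_handler, "disk": disk_handler}

def cpu_run(program):
    for instruction in program:
        print(f"execute: {instruction}")
        while interrupt_request_line:              # sense the request line
            source = interrupt_request_line.pop(0)
            saved_state = instruction              # state save (simplified)
            interrupt_vector[source]()             # dispatch to the handler
            instruction = saved_state              # state restore

# A device controller "raises" an interrupt by asserting the line:
interrupt_request_line.append("disk")
cpu_run(["load", "add", "store"])
```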
• Computers perform operations concurrently
– For example, compiling a program, sending a file to a printer,
rendering a Web page, playing music and receiving e-mail
– Processes enable systems to perform and track simultaneous
activities
– Processes transition between process states
– Operating systems perform operations on processes such as creating,
destroying, suspending, resuming and waking
3.1.1 Definition of Process
• A program in execution
– A process has its own address space consisting of:
• Text region
– Stores the code that the processor executes
• Data region
– Stores variables and dynamically allocated memory
• Stack region
– Stores instructions and local variables for active procedure calls
Process States: Life Cycle of a Process
• Process State
• As a process executes, it changes state. The state of a process is defined in
part by the current activity of that process. Each process may be in one of
the following states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.

Process state transition diagram
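The five states and the transitions shown in the diagram can be captured in a short sketch; the enum and function names below are illustrative only.

```python
# Sketch of the process life cycle: the five states above and the legal
# transitions between them (admit, dispatch, timeout/interrupt, I/O wait,
# I/O completion, exit).
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Allowed transitions, as in the state-transition diagram.
TRANSITIONS = {
    (State.NEW, State.READY),           # admitted
    (State.READY, State.RUNNING),       # dispatched by the scheduler
    (State.RUNNING, State.READY),       # quantum expires / interrupt
    (State.RUNNING, State.WAITING),     # waits for I/O or an event
    (State.WAITING, State.READY),       # I/O or event completes
    (State.RUNNING, State.TERMINATED),  # exits
}

def move(current, nxt):
    if (current, nxt) not in TRANSITIONS:
        raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
    return nxt

s = State.NEW
for nxt in (State.READY, State.RUNNING, State.WAITING, State.READY,
            State.RUNNING, State.TERMINATED):
    s = move(s, nxt)
    print(s.name)
```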
Process Management
• Operating systems provide fundamental services to processes
including:
– Creating processes
– Destroying processes
– Suspending processes
– Resuming processes
– Changing a process’s priority
– Blocking processes
– Waking up processes
– Dispatching processes
– Interprocess communication (IPC)
3.1 Process States and State Transitions
• Process states
– The act of assigning a processor to the first process on the ready list is
called dispatching
– The OS may use an interval timer to allow a process to run for a specific
time interval or quantum
– Cooperative multitasking lets each process run to completion
• State Transitions
– At this point, there are four possible state transitions
• When a process is dispatched, it transitions from ready to running
• When the quantum expires, it transitions from running to ready
• When a process blocks, it transitions from running to blocked
• When the event occurs, it transitions from blocked to ready
Process Control Blocks (PCBs)
• Process Control Block
• Each process is represented in the operating system by a process control block (PCB), also called a
task control block. A PCB contains many pieces of information associated with a specific process,
including these:
• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be executed for this
process.
• CPU registers. The registers vary in number and type, depending on the computer architecture.
They include accumulators, index registers, stack pointers, and general-purpose registers, plus any
condition-code information. Along with the program counter, this state information must
be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
Block diagram of PCB
• CPU-scheduling information. This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include such
information as the value of the base and limit registers, the page tables, or the
segment tables, depending on the memory system used by the operating system.
• Accounting information. This information includes the amount of CPU and real
time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices
allocated to the process, a list of open files, and so on.
• In brief, the PCB simply serves as the repository for any information that may
vary from process to process.
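The fields listed above can be grouped into a simple record. This is a teaching sketch only; the field names are illustrative and a real kernel's PCB layout is far more detailed.

```python
# Illustrative sketch of a process control block as a plain record.
# Field names are chosen for readability; a real kernel's PCB differs.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PCB:
    pid: int
    state: str = "new"                 # new, ready, running, waiting, terminated
    program_counter: int = 0           # address of the next instruction
    cpu_registers: Dict[str, int] = field(default_factory=dict)
    priority: int = 0                  # CPU-scheduling information
    base_register: int = 0             # memory-management information
    limit_register: int = 0
    cpu_time_used: int = 0             # accounting information
    open_files: List[str] = field(default_factory=list)   # I/O status information

# On an interrupt, the kernel saves the CPU context into the running PCB
# so the process can be continued correctly afterwards:
pcb = PCB(pid=42, state="running")
pcb.program_counter = 0x4F0
pcb.cpu_registers = {"acc": 7, "sp": 0x7FFF}
pcb.state = "ready"    # re-queued for later dispatch
```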

Process Scheduling
• The objective of multiprogramming is to have some process
running at all times, to maximize CPU utilization. The objective
of time sharing is to switch the CPU among processes so
frequently that users can interact with each program while it is
running. To meet these objectives, the process scheduler
selects an available process (possibly from a set of several
available processes) for program execution on the CPU.
• As processes enter the system, they are put into a job queue,
which consists of all processes in the system. The processes that
are residing in main memory and are ready and waiting to execute
are kept on a list called the ready queue. A ready-queue header
contains pointers to the first and final PCBs in the list.
• The list of processes waiting for a particular I/O device is called a
device queue.
• A new process is initially put in the ready queue. It waits there until
it is selected for execution, or is dispatched. Once the process is
allocated the CPU and is executing, one of several events could
occur:
• The process could issue an I/O request and then be placed in an
I/O queue.
• The process could create a new subprocess and wait for the
subprocess's termination.
• The process could be removed forcibly from the CPU, as a result
of an interrupt, and be put back in the ready queue.
• A process continues this cycle until it terminates, at which time
it is removed from all queues and has its PCB and resources
deallocated.
Fig: Queuing diagram representation of process scheduling
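The queueing behaviour described above can be mimicked with two FIFO queues. This is a deliberately simplified sketch: each process follows a made-up script of actions ("io" then "exit"), and every I/O completion is assumed to arrive immediately.

```python
# Sketch of the queueing diagram: PCBs migrate between a ready queue and a
# device queue until they terminate.
from collections import deque

ready_queue = deque(["P1", "P2", "P3"])   # new processes start here
device_queue = deque()                    # processes waiting on an I/O device

# Scripted behaviour: what each process does when it next gets the CPU.
script = {"P1": ["io", "exit"], "P2": ["exit"], "P3": ["io", "exit"]}

while ready_queue or device_queue:
    if ready_queue:
        p = ready_queue.popleft()         # dispatch the first ready process
        action = script[p].pop(0)
        if action == "io":
            device_queue.append(p)        # issue an I/O request and wait
            print(f"{p} blocked on I/O")
        else:
            print(f"{p} terminated")
    if device_queue:
        done = device_queue.popleft()     # an I/O completion interrupt arrives
        ready_queue.append(done)          # back to the ready queue
        print(f"{done} I/O complete, ready again")
```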
Schedulers
• A process migrates among the various scheduling queues
throughout its lifetime. The operating system must select, for
scheduling purposes, processes from these queues in some
fashion. The selection process is carried out by the appropriate
scheduler.
• Types of schedulers:
• i) Long-term scheduler: In batch systems, more processes are submitted
than can be executed immediately. These processes are spooled to a mass-
storage device (typically a disk), where they are kept for later execution.
The long-term scheduler, or job scheduler, selects processes from this
pool and loads them into memory for execution. The long-term scheduler
executes much less frequently; minutes may separate the creation of one
new process and the next. The long-term scheduler controls the degree of
multiprogramming (the number of processes in memory). It is important
that the long-term scheduler make a careful selection.
• ii) Short-term scheduler: The short-term scheduler, or CPU scheduler,
selects from among the processes that are ready to execute and allocates
the CPU to one of them. Because of the short time between executions,
the short-term scheduler must be fast.
• iii) Medium-term scheduler: The key idea behind a medium-term
scheduler is that sometimes it can be advantageous to remove
processes from memory and thus reduce the
degree of multiprogramming. Later, the process can be reintroduced
into memory, and its execution can be continued where it left off.
This scheme is called swapping. The process is swapped out, and is
later swapped in, by the medium-term scheduler. Swapping may be
necessary to improve the process mix or because a change in
memory requirements has overcommitted available memory,
requiring memory to be freed up.
Scheduling Algorithms
• A Process Scheduler schedules different processes to be assigned to the
CPU based on particular scheduling algorithms.
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-Next (SJN) Scheduling
• Priority Scheduling
• Shortest Remaining Time
• Round Robin(RR) Scheduling
• Multiple-Level Queues Scheduling
• These algorithms are either non-preemptive or preemptive.
• Non-preemptive algorithms are designed so that once a
process enters the running state, it cannot be preempted until
it completes its allotted time.
• Preemptive scheduling is based on priority: a scheduler
may preempt a low-priority running process at any time when a
high-priority process enters the ready state.
• Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and
arrival time.
Turn Around Time = Completion Time – Arrival Time
• Waiting Time(W.T): Time Difference between turn around time and
burst time.
Waiting Time = Turn Around Time – Burst Time
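The two formulas translate directly into code. The sample arrival, burst, and completion values below are made-up numbers, used only to show the arithmetic.

```python
# Direct translation of the two formulas above.
# The arrival/burst/completion values are made-up sample numbers.
def turnaround_time(completion, arrival):
    return completion - arrival

def waiting_time(turnaround, burst):
    return turnaround - burst

arrival, burst, completion = 2, 6, 15
tat = turnaround_time(completion, arrival)   # 15 - 2 = 13
wt = waiting_time(tat, burst)                # 13 - 6 = 7
print(tat, wt)                               # 13 7
```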
First-Come, First-Served (FCFS) Scheduling
• Jobs are executed on a first-come, first-served basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on a FIFO queue.
• Poor in performance, as the average waiting time is high.
FCFS Non Preemptive Example
Average Waiting Time

Waiting Time = Starting Time - Arrival Time

Waiting time of
P1 = 0
P2 = 5 - 0 = 5 ms
P3 = 29 - 0 = 29 ms
P4 = 45 - 0 = 45 ms
P5 = 55 - 0 = 55 ms
Average Waiting Time = Sum of waiting times of all processes / Total number of
processes
Therefore, average waiting time = (0 + 5 + 29 + 45 + 55) / 5 = 26.8 ms
Average Turnaround Time
• Turnaround Time = Waiting time in the ready queue + executing time +
waiting time in waiting-queue for I/O
Turnaround time of
P1 = 0 + 5 + 0 = 5ms
P2 = 5 + 24 + 0 = 29ms
P3 = 29 + 16 + 0 = 45ms
P4 = 45 + 10 + 0 = 55ms
P5 = 55 + 3 + 0 = 58ms
Total Turnaround Time = (5 + 29 + 45 + 55 + 58)ms = 192ms
Average Turnaround Time = (Total Turnaround Time / Total Number of
Process) = (192 / 5)ms = 38.4ms
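The example can be checked with a short sketch. The burst times (P1 = 5, P2 = 24, P3 = 16, P4 = 10, P5 = 3 ms) are read off the turnaround-time calculation above, and all processes are assumed to arrive at time 0.

```python
# FCFS with all arrivals at time 0: processes simply run in submission order.
# Burst times are taken from the worked example above (P1..P5).
bursts = {"P1": 5, "P2": 24, "P3": 16, "P4": 10, "P5": 3}

clock = 0
waiting, turnaround = {}, {}
for name, burst in bursts.items():        # dict preserves insertion order
    waiting[name] = clock                 # waiting = start - arrival (arrival = 0)
    clock += burst
    turnaround[name] = clock              # turnaround = completion - arrival

print(waiting)                                 # {'P1': 0, 'P2': 5, 'P3': 29, 'P4': 45, 'P5': 55}
print(sum(waiting.values()) / len(bursts))     # 26.8
print(sum(turnaround.values()) / len(bursts))  # 38.4
```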
Shortest Job First (SJF)
• SJF can be implemented as either a non-preemptive or a preemptive scheduling algorithm.
• Preemptive SJF is also known as the SRTF (shortest-remaining-time-first)
algorithm.
• Best approach to minimize waiting time.
• Easy to implement in batch systems where the required CPU time is known in
advance.
• Impossible to implement in interactive systems where the required CPU time
is not known.
• The processor should know in advance how much time the process will take.
Example of Non Preemptive SJF
• Average Waiting Time (arrival time is common to all processes, i.e.
zero):
Waiting Time for
P1 = 3 - 0 = 3ms
P2 = 34 - 0 = 34ms
P3 = 18 - 0 = 18ms
P4 = 8 - 0 = 8ms
P5 = 0ms
Now, Average Waiting Time = (3 + 34 + 18 + 8 + 0) / 5 = 12.6ms
• Average Turnaround Time
• According to the SJF Gantt chart and the turnaround time formulae,
Turnaround Time of
P1 = 3 + 5 = 8ms
P2 = 34 + 24 = 58ms
P3 = 18 + 16 = 34ms
P4 = 8 + 10 = 18ms
P5 = 0 + 3 = 3ms
Therefore, Average Turnaround Time = (8 + 58 + 34 + 18 + 3) / 5 =
24.2ms
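Non-preemptive SJF on the same workload simply sorts the jobs by burst length before running them. A sketch that reproduces the averages above (same bursts as the FCFS example, all arrivals at time 0):

```python
# Non-preemptive SJF with all arrivals at time 0: sort jobs by burst time,
# then run them to completion in that order. Bursts match the FCFS example.
bursts = {"P1": 5, "P2": 24, "P3": 16, "P4": 10, "P5": 3}

clock = 0
waiting, turnaround = {}, {}
for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
    waiting[name] = clock
    clock += burst
    turnaround[name] = clock

print(waiting)                                 # P5: 0, P1: 3, P4: 8, P3: 18, P2: 34
print(sum(waiting.values()) / len(bursts))     # 12.6
print(sum(turnaround.values()) / len(bursts))  # 24.2
```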
Example of Preemptive SJF/SRTF
• Average Waiting Time
• First of all, we have to find the waiting time for each process.
Waiting Time of process
P1 = 0ms
P2 = (3 - 2) + (10 - 4) = 7ms
P3 = (4 - 4) = 0ms
P4 = (15 - 6) = 9ms
P5 = (8 - 8) = 0ms
Therefore, Average Waiting Time = (0 + 7 + 0 + 9 + 0) / 5 = 3.2ms
• Average Turnaround Time
• First of all, we have to find the turnaround time of each process.
Turnaround Time of process
P1 = (0 + 3) = 3ms
P2 = (7 + 6) = 13ms
P3 = (0 + 4) = 4ms
P4 = (9 + 5) = 14ms
P5 = (0 + 2) = 2ms
Therefore, Average Turnaround Time = (3 + 13 + 4 + 14 + 2) / 5 = 7.2ms
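The arrival and burst times implied by the calculation above are P1 (0, 3), P2 (2, 6), P3 (4, 4), P4 (6, 5) and P5 (8, 2). A tick-by-tick sketch of SRTF reproduces the averages; the tie-breaking rule (earlier arrival first) is an assumption.

```python
# Tick-by-tick SRTF (preemptive SJF): at every time unit, run the arrived
# process with the least remaining burst. Arrival/burst pairs are inferred
# from the worked example above.
procs = {"P1": (0, 3), "P2": (2, 6), "P3": (4, 4), "P4": (6, 5), "P5": (8, 2)}

remaining = {p: burst for p, (arr, burst) in procs.items()}
completion, clock = {}, 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= clock]
    if not ready:
        clock += 1
        continue
    # Pick the shortest remaining time; ties go to the earlier arrival.
    p = min(ready, key=lambda q: (remaining[q], procs[q][0]))
    remaining[p] -= 1
    clock += 1
    if remaining[p] == 0:
        completion[p] = clock
        del remaining[p]

turnaround = {p: completion[p] - procs[p][0] for p in procs}
waiting = {p: turnaround[p] - procs[p][1] for p in procs}
print(sum(waiting.values()) / 5)      # 3.2
print(sum(turnaround.values()) / 5)   # 7.2
```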
Priority Scheduling
• Priority scheduling is a non-preemptive algorithm and one of the
most common scheduling algorithms in batch systems.
• Each process is assigned a priority. The process with the highest priority is
executed first, and so on.
• Processes with the same priority are executed on a first-come, first-served
basis.
• Priority can be decided based on memory requirements, time
requirements, or any other resource requirement.
Priority Scheduling Example
• Average Waiting Time
• First of all, we have to find out the waiting time of each process.
Waiting Time of process
P1 = 3ms
P2 = 13ms
P3 = 25ms
P4 = 0ms
P5 = 9ms
Therefore, Average Waiting Time = (3 + 13 + 25 + 0 + 9) / 5 = 10ms
• Average Turnaround Time
• First finding Turnaround Time of each process.
Turnaround Time of process
P1 = (3 + 6) = 9ms
P2 = (13 + 12) = 25ms
P3 = (25 + 1) = 26ms
P4 = (0 + 3) = 3ms
P5 = (9 + 4) = 13ms
Therefore, Average Turnaround Time = (9 + 25 + 26 + 3 + 13) / 5 =
15.2ms
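The priority values themselves are not shown above, so the sketch below assumes a lower number means a higher priority and assigns P4 = 1, P1 = 2, P5 = 3, P2 = 4, P3 = 5; the bursts (6, 12, 1, 3, 4 ms) are read off the turnaround calculation and all arrivals are taken as 0. With those assumptions the averages match.

```python
# Non-preemptive priority scheduling, all arrivals at time 0.
# Bursts come from the turnaround figures above; the priority numbers are an
# assumption (lower value = higher priority) chosen to match the slide's order.
jobs = {   # name: (priority, burst)
    "P1": (2, 6),
    "P2": (4, 12),
    "P3": (5, 1),
    "P4": (1, 3),
    "P5": (3, 4),
}

clock = 0
waiting, turnaround = {}, {}
for name, (prio, burst) in sorted(jobs.items(), key=lambda kv: kv[1][0]):
    waiting[name] = clock
    clock += burst
    turnaround[name] = clock

print(sum(waiting.values()) / len(jobs))      # 10.0
print(sum(turnaround.values()) / len(jobs))   # 15.2
```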
Round Robin (RR)

• Round Robin is a preemptive process-scheduling algorithm.
• Each process is provided a fixed time to execute, called a quantum.
• Once a process has executed for a given time period, it is preempted and
another process executes for a given time period.
• Context switching is used to save the states of preempted processes.
Round Robin (RR) Example
Time quantum is 5
• Average Waiting Time
• For finding Average Waiting Time, we have to find out the waiting time
of each process.

Waiting Time of
P1 = 0 + (15 - 5) + (24 - 20) = 14ms
P2 = 5 + (20 - 10) = 15ms
P3 = 10 + (21 - 15) = 16ms
Therefore, Average Waiting Time = (14 + 15 + 16) / 3 = 15ms
• Average Turnaround Time
• Same concept for finding the Turnaround Time.
Turnaround Time of
P1 = 14 + 30 = 44ms
P2 = 15 + 6 = 21ms
P3 = 16 + 8 = 24ms
Therefore, Average Turnaround Time = (44 + 21 + 24) / 3 = 29.66ms
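The burst times implied by the calculation above are P1 = 30, P2 = 6 and P3 = 8 ms, all arriving at time 0, with a 5 ms quantum. A small sketch of the Round Robin loop reproduces the figures.

```python
# Round Robin with a 5 ms quantum, all arrivals at time 0.
# Burst times (P1 = 30, P2 = 6, P3 = 8) are inferred from the worked example.
from collections import deque

bursts = {"P1": 30, "P2": 6, "P3": 8}
quantum = 5

remaining = dict(bursts)
queue = deque(bursts)          # ready queue in arrival order
clock, completion = 0, {}
while queue:
    p = queue.popleft()
    run = min(quantum, remaining[p])
    clock += run
    remaining[p] -= run
    if remaining[p] == 0:
        completion[p] = clock
    else:
        queue.append(p)        # preempted, goes to the back of the queue

turnaround = {p: completion[p] for p in bursts}           # arrival = 0
waiting = {p: turnaround[p] - bursts[p] for p in bursts}
print(waiting)                            # {'P1': 14, 'P2': 15, 'P3': 16}
print(sum(waiting.values()) / 3)          # 15.0
print(sum(turnaround.values()) / 3)       # 29.666..., the 29.66 ms above
```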
Multilevel Queue
• Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
• Each queue has its own scheduling algorithm
– foreground – RR
– background – FCFS
• Scheduling must be done between the queues
– Fixed priority scheduling; (i.e., serve all from foreground then from
background). Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU time which it can
schedule amongst its processes; i.e., 80% to foreground in RR
– 20% to background in FCFS
Multilevel Queue Scheduling
Multilevel Feedback Queue
• A process can move between the various queues; aging can be
implemented this way
• Multilevel-feedback-queue scheduler defined by the following
parameters:
– number of queues
– scheduling algorithms for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will enter when that
process needs service
Example of Multilevel Feedback Queue
• Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0, which is served in FCFS order. When it gains the CPU, the job
receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved
to the tail of queue Q1.
– At Q1 the job is again served in FCFS order and receives 16 additional milliseconds. If it
still does not complete, it is preempted and moved to queue Q2.
Multilevel Feedback Queues
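A compressed sketch of this three-queue scheme follows. The job names and CPU demands are made-up numbers, and for simplicity the sketch does not model new arrivals during execution or preemption of a lower queue by a higher one.

```python
# Sketch of the three-queue multilevel feedback queue described above:
# Q0 is RR with quantum 8, Q1 is RR with quantum 16, Q2 is FCFS.
from collections import deque

quanta = [8, 16, None]                 # None = run to completion (FCFS)
queues = [deque(), deque(), deque()]
jobs = {"A": 5, "B": 20, "C": 40}      # remaining CPU time per job (made up)

for name in jobs:
    queues[0].append(name)             # every new job enters Q0

clock = 0
while any(queues):
    level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
    job = queues[level].popleft()
    slice_ = jobs[job] if quanta[level] is None else min(quanta[level], jobs[job])
    clock += slice_
    jobs[job] -= slice_
    if jobs[job] == 0:
        print(f"t={clock:3}: {job} finished at level Q{level}")
    else:
        nxt = min(level + 1, 2)
        queues[nxt].append(job)        # did not finish: demoted one level
        print(f"t={clock:3}: {job} demoted to Q{nxt}")
```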
