
International Standard University

Dept. of Computer Science & Engineering (CSE)


Course Title: Operating System
Course No.: CSE 327

Prepared By:
Hasibul Islam Peyal
Lecturer, Dept. of CSE, International Standard University
What is Operating System?
An Operating System (OS) is an interface between a computer
user and the computer hardware. It is software that performs
all the basic tasks such as file management, memory
management, process management, handling input and output,
and controlling peripheral devices such as disk drives and
printers.
Types of Operating System
Batch operating system
• The users of a batch operating system do not interact with the computer directly.
Each user prepares a job on an offline device, such as punched cards, and submits it to
the computer operator.
• To speed up processing, jobs with similar needs are batched together and run as a
group.
• The programmers leave their programs with the operator and the operator then
sorts the programs with similar requirements into batches.
Time-sharing operating systems
• Time-sharing is a technique which enables many people, located at various
terminals, to use a particular computer system at the same time.
• The processor's time, shared among multiple users simultaneously, is termed
time-sharing.
• In time-sharing systems, the objective is to minimize response time.
Distributed operating System
• Distributed systems use multiple central processors to serve multiple real-time applications
and multiple users.
• Data processing jobs are distributed among the processors accordingly.
• The processors communicate with one another through various communication lines (such
as high-speed buses or telephone lines). Such systems are referred to as loosely coupled
systems or distributed systems.

Network operating System


• A Network Operating System runs on a server and provides the server the capability to
manage data, users, groups, security, applications, and other networking functions.
• The primary purpose of the network operating system is to allow shared file and printer
access among multiple computers in a network, typically a local area network (LAN), a
private network or to other networks.
Real Time operating System (RTOS)


• A real-time system is a system in which the time interval required to process
and respond to inputs is very small.
• The response time is therefore much lower than in online processing.
• Real-time systems are used when there are rigid time requirements on the
operation of a processor
• For example, Scientific experiments, medical imaging systems, industrial
control systems, weapon systems, robots, air traffic control systems, etc.
Services provided by an operating system
Program execution
• Loads a program into memory.
• Executes the program.
• Handles program's execution.
• Provides a mechanism for process synchronization.
• Provides a mechanism for process communication.
• Provides a mechanism for deadlock handling.
I/O Operation
• I/O operation means read or write operation with any file or any specific I/O device.
• The operating system provides access to the required I/O device when needed.
File system manipulation
• A program may need to read a file or write a file.
• The operating system gives the program permission to operate on the file.
• Permission varies: read-only, read-write, denied, and so on.
• Operating System provides an interface to the user to create/delete files.
• Operating System provides an interface to the user to create/delete directories.
• Operating System provides an interface to create the backup of file system.
Communication
• Two processes often require data to be transferred between them
• Both the processes can be on one computer or on different computers, but are
connected through a computer network.
• Communication may be implemented by two methods, either by Shared Memory or by
Message Passing.
Error handling
• The OS constantly checks for possible errors.
• The OS takes an appropriate action to ensure correct and consistent computing.
Resource Management
• The OS manages all kinds of resources using schedulers.
• CPU scheduling algorithms are used for better utilization of CPU.
Protection
• The OS ensures that all access to system resources is controlled.
• The OS ensures that external I/O devices are protected from invalid
access attempts.
• The OS provides authentication features for each user by means of
passwords.
Process
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.

Example: We write our computer programs in a text file and when we execute
this program, it becomes a process which performs all the tasks mentioned in
the program.
Program
• A program is a piece of code which may be a single line or millions of
lines. A computer program is usually written by a computer programmer
in a programming language.
• A program is a collection of instructions that performs a specific task when executed
by a computer. When we compare a program with a process, we can
conclude that a process is a dynamic instance of a computer program.
What is Process Life Cycle / Describe Process states
When a process executes, it passes through different states. These stages may differ in different operating
systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time
1. Start
This is the initial state when a process is first started/created.
2. Ready
The process is waiting to be assigned to a processor. Ready processes are waiting for the operating
system to allocate the processor to them so that they can run. A process may enter this state after the
Start state, or after Running if the scheduler interrupts it to assign the CPU to some other process.
3. Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set to
running and the processor executes its instructions.
4. Waiting
The process moves into the waiting state if it needs to wait for a resource, such as user input or
a file to become available.
5. Terminated or Exit
Once the process finishes its execution, or is terminated by the operating system, it is moved to the
terminated state, where it waits to be removed from main memory.
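The five states above can be sketched as a tiny state machine in Python. This is a hypothetical illustration of the model, not any real OS's implementation; the state names and transition table come straight from the list above.

```python
from enum import Enum, auto

class State(Enum):
    START = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Legal transitions in the five-state model.
TRANSITIONS = {
    State.START: {State.READY},                 # admitted by the OS
    State.READY: {State.RUNNING},               # dispatched by the scheduler
    State.RUNNING: {State.READY,                # preempted by the scheduler
                    State.WAITING,              # blocked on I/O or a resource
                    State.TERMINATED},          # finished or killed
    State.WAITING: {State.READY},               # resource became available
    State.TERMINATED: set(),                    # removed from main memory
}

def can_move(src, dst):
    """Return True if the model allows moving from src to dst."""
    return dst in TRANSITIONS[src]
```

Note, for example, that a waiting process cannot go straight back to Running: it must re-enter the ready queue first.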
Process Scheduling
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process
on the basis of a particular strategy.
Categories of Scheduling
There are two categories of scheduling:
• Non-preemptive:
❑ The resource cannot be taken from a process until the process completes
execution.
❑ The switching of resources occurs only when the running process terminates.
• Preemptive:
❑ The OS allocates resources to a process for a fixed amount of time.
❑ Switching occurs when the CPU gives priority to another process, replacing the
running process with a higher-priority one.
Schedulers
Schedulers are special system software which handle process scheduling in
various ways. Their main task is to select the jobs to be submitted into the
system and to decide which process to run. Schedulers are of three types −
Long-Term Scheduler
• It is also called a job scheduler.
• A long-term scheduler determines which programs are admitted to the
system for processing. It selects processes from the queue and loads them
into memory for execution.
• The primary objective of the job scheduler is to provide a balanced mix of
jobs, such as I/O bound and processor bound. The job scheduler increases
efficiency by maintaining a balance between the two.
• Used in batch-processing systems.
Short-Term Scheduler
❑ The short-term scheduler is also known as the CPU scheduler. It selects one of the
processes from the ready queue and dispatches it to the CPU for execution.
❑ Short-term schedulers, also known as dispatchers, decide which
process to execute next.
❑ Short-term schedulers are faster than long-term schedulers.
Medium term scheduler
❑ The medium-term scheduler is used for swapping.
❑ It removes a process from the running state to make room for other
processes. Such processes are the swapped-out processes, and this procedure
is called swapping.
❑ The medium term scheduler is responsible for suspending and resuming the
processes.
Interprocess communication
Interprocess communication is the mechanism provided by
the operating system that allows processes to communicate
with each other.
This communication could involve a process letting another
process know that some event has occurred or the
transferring of data from one process to another.
Processes can communicate with each other through both:
1. Shared Memory
2. Message passing
1. Shared Memory:
It can be referred to as a type of memory that can be used or
accessed by multiple processes simultaneously.
It is primarily used so that the processes can communicate with each
other.
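As a sketch of shared memory in Python, the snippet below places one counter in memory that two processes access simultaneously. This is an illustration, not a prescribed mechanism; it uses `multiprocessing.Value`, and the function names and counts are made up. It assumes a platform where child processes can be created this way at module level (e.g. Linux with the fork start method).

```python
from multiprocessing import Process, Value

def add_100(shared):
    # Each child writes into the same shared-memory counter that the
    # parent created; no copies of the counter are made.
    for _ in range(100):
        with shared.get_lock():   # avoid lost updates between processes
            shared.value += 1

def demo():
    counter = Value("i", 0)       # an integer placed in shared memory
    workers = [Process(target=add_100, args=(counter,)) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter.value

print(demo())  # 200: both processes updated the same memory
```

Without the lock, the two processes could interleave their read-modify-write steps and lose updates, which is exactly the race-condition problem discussed later.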
2. Message Passing:
Multiple processes can read and write data to the message queue without
being connected to each other. Messages are stored in the queue until their
recipient retrieves them. Message queues are quite useful for interprocess
communication and are used by most operating systems.
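The message-queue idea above can be sketched in Python with `multiprocessing.Queue`: the sender enqueues messages and exits; the receiver retrieves them from the queue without ever being connected to the sender directly. The message contents and the `None` sentinel convention are illustrative choices, and the sketch assumes a platform where child processes can be started at module level.

```python
from multiprocessing import Process, Queue

def sender(q):
    # The sender only needs a handle to the queue; it never
    # communicates with the receiver directly.
    for i in range(3):
        q.put(f"msg-{i}")
    q.put(None)   # sentinel: no more messages

def demo():
    q = Queue()
    p = Process(target=sender, args=(q,))
    p.start()
    received = []
    # Messages stay in the queue until the recipient retrieves them.
    while (msg := q.get()) is not None:
        received.append(msg)
    p.join()
    return received

print(demo())  # ['msg-0', 'msg-1', 'msg-2']
```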
Context Switching
❑ Context switching in an operating system involves saving the context or state of a
running process so that it can be restored later, and then loading the context or
state of another process and running it.
❑ Context switching refers to the method used by the system to switch the CPU
from one process to another.
The Need for Context Switching
❑ Context switching makes it easier for the operating system to use the CPU’s
resources to carry out its tasks and store its context while switching between
multiple processes.
❑ Context switching allows a single CPU to handle multiple process requests
concurrently without the need for any additional processors.
Context Changes as a Trigger
The three different categories of context-switching triggers are as follows.
1. Interrupts
2. Multitasking
3. User/Kernel switch
Interrupts: When an interrupt occurs, for example when a disk signals that a
requested read has completed, the CPU switches context to the interrupt handler
and later resumes the interrupted process.
Multitasking: The ability for a process to be switched from the CPU so that another
process can run is known as context switching. When a process is switched, the
previous state is retained so that the process can continue running at the same spot in
the system.
Kernel/User Switch: This trigger is used when the OS needs to switch between
user mode and kernel mode.
Process Control Block
❑ The Process Control Block (PCB) is also known as a Task Control
Block.
❑ It represents a process in the Operating System.
❑ A process control block (PCB) is a data structure used by a
computer to store all information about a process.
State Diagram of Context Switching
When the system switches between two processes, the higher-priority process is selected
from the ready queue using its process control block. The following steps are involved.
❑ The state of the current process must be saved for rescheduling.
❑ The process state contains records, credentials, and operating system-specific
information stored on the PCB or switch.
❑ The PCB can be stored in a single layer in kernel memory or in a custom OS
file.
❑ A handle is added to the PCB so that the system is ready to run the process.
❑ The operating system suspends the execution of the current process and selects
a process from the waiting list by loading its PCB.
❑ Load the PCB’s program counter and continue execution in the selected
process.
❑ Process/thread priority values can affect which process is selected from the
queue, so they can be important.
CPU Scheduling Criteria
❑ Maximum CPU utilization: The main objective of any CPU scheduling algorithm is
to keep the CPU as busy as possible.
❑ Fair allocation of CPU
❑ Maximum throughput: A measure of the work done by the CPU is the number of
processes being executed and completed per unit of time. This is called
throughput.
❑ Minimum turnaround time: The time elapsed from the time of submission of a
process to the time of completion is known as the turnaround time.
❑ Minimum waiting time: The amount of time a process spends waiting in the ready
queue before it gets a chance to execute on the CPU
❑ Minimum response time: Response time is the time at which the CPU is first
allocated to a particular process. In non-preemptive scheduling, waiting time
and response time are generally the same.
CPU Scheduling Algorithms
FCFS is considered the simplest CPU-scheduling algorithm. In the FCFS algorithm, the process that
requests the CPU first is allocated the CPU first. The FCFS algorithm is implemented with a
FIFO (first in, first out) queue. FCFS scheduling is non-preemptive. Non-preemptive means that
once the CPU has been allocated to a process, that process keeps the CPU until it releases
the CPU, either by terminating or by requesting I/O.
• Arrival time (AT) − Arrival time is the time at which the process arrives in ready queue.
• Burst time (BT) or CPU time of the process − Burst time is the unit of time in which a particular
process completes its execution.
• Completion time (CT) − Completion time is the time at which the process has been terminated.
Completion time refers to the actual time it takes for a process or task to finish its execution.
• Turn-around time (TAT) − The total time from arrival time to completion time is known as
turn-around time. Turnaround time takes into account both the execution time and the time
spent waiting in the queue, so it provides a comprehensive measure of the overall
efficiency of a scheduling algorithm or a system's performance: it considers the
entire lifecycle of a process.
Turn-around time (TAT) = Completion time (CT) – Arrival time (AT)
or, TAT = Burst time (BT) + Waiting time (WT)
Waiting time (WT) = Turn-around time (TAT) – Burst time (BT)
Gantt chart − A Gantt chart is a visualization that helps in scheduling and
managing particular tasks in a project. It is used while solving scheduling
problems to show how the processes are allocated to the CPU under
different algorithms.
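The formulas above can be checked with a small FCFS simulation. This is a minimal sketch using the definitions TAT = CT − AT and WT = TAT − BT; the process names and timings are made up for illustration.

```python
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time) tuples.
    Returns (name, CT, TAT, WT) per process in order of service."""
    result = []
    clock = 0
    # FCFS serves processes in order of arrival (a FIFO queue).
    for name, at, bt in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, at)   # CPU may sit idle until the process arrives
        clock += bt              # run to completion (non-preemptive)
        ct = clock               # completion time
        tat = ct - at            # turn-around time = CT - AT
        wt = tat - bt            # waiting time = TAT - BT
        result.append((name, ct, tat, wt))
    return result

jobs = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]
for name, ct, tat, wt in fcfs(jobs):
    print(name, ct, tat, wt)
```

For these jobs the Gantt chart is P1 from 0 to 4, P2 from 4 to 7, P3 from 7 to 8, giving waiting times 0, 3, and 5 respectively.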
FCFS: With arrival time

https://www.youtube.com/watch?v=DoQAzEBcIBI&t=830s
FCFS: Without arrival time

https://www.youtube.com/watch?v=p4D01y18lTc&list=PLgrAmbRAezujiknEO3sqpyCC4K3IgS4KU&index=22
Shortest Job First (SJF): With arrival time

https://www.youtube.com/watch?v=o2jGbOLJFLc&list=PLgrAmbRAezujiknEO3sqpyCC4K3IgS4KU&index=23
Shortest Job First (SJF ): Without arrival time

https://www.youtube.com/watch?v=J17w40jM-hY&list=PLgrAmbRAezujiknEO3sqpyCC4K3IgS4KU&index=24
SJF / SRTF (shortest remaining time first) | Preemptive
Priority Scheduling Algorithm | Non Preemptive | With arrival time
Round Robin with arrival time | Preemptive

Round Robin with arrival time | Preemptive | operating system | Bangla Tutorial (youtube.com)
Race Condition

Process Synchronization | Race Condition | operating system | Bangla Tutorial (youtube.com)


Critical Section Problem

Critical Section Problem | operating system | Bangla Tutorial (youtube.com)


Solving race condition using semaphore

Semaphore | Process Synchronization | Operating system | Bangla Tutorial - YouTube


What is the Producer-Consumer Problem?
The producer-consumer problem is an example of a multi-process synchronization
problem. The problem describes two processes, the producer and the consumer that
share a common fixed-size buffer and use it as a queue.
● The producer’s job is to generate data, put it into the buffer, and start again.
● At the same time, the consumer is consuming the data (i.e., removing it from the
buffer), one piece at a time.
What is the Actual Problem?
● Given the common fixed-size buffer, the task is to make sure that the producer can’t
add data into the buffer when it is full and the consumer can’t remove data from an
empty buffer.
● The producer and the consumer should not be allowed to access the buffer at the
same time.
Solution of Producer-Consumer Problem
The producer either goes to sleep or discards data if the buffer is
full. The next time the consumer removes an item from the buffer, it
notifies the producer, who starts to fill the buffer again. In the same
manner, the consumer can go to sleep if it finds the buffer to be
empty. The next time the producer transfers data into the buffer, it
wakes up the sleeping consumer.
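The sleep/wake behaviour described above can be sketched with semaphores in Python. Two counting semaphores track free and filled slots (the producer blocks when `empty` reaches zero, the consumer when `full` does), and a lock gives mutual exclusion on the buffer itself. The buffer size and item counts are illustrative choices.

```python
import threading
from collections import deque

BUF_SIZE = 4
buffer = deque()
empty = threading.Semaphore(BUF_SIZE)   # free slots; producer sleeps at 0
full = threading.Semaphore(0)           # filled slots; consumer sleeps at 0
mutex = threading.Lock()                # mutual exclusion on the buffer

def producer(items):
    for item in items:
        empty.acquire()        # sleep if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()         # wake a sleeping consumer

def consumer(n, out):
    for _ in range(n):
        full.acquire()         # sleep if the buffer is empty
        with mutex:
            out.append(buffer.popleft())
        empty.release()        # wake a sleeping producer

items = list(range(10))
out = []
t1 = threading.Thread(target=producer, args=(items,))
t2 = threading.Thread(target=consumer, args=(10, out))
t1.start(); t2.start()
t1.join(); t2.join()
print(out)  # [0, 1, 2, ..., 9]: all items arrive, in order
```

The order of operations matters: acquiring `mutex` before `empty`/`full` could deadlock, because a sleeping thread would then hold the lock its partner needs.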
Producer Consumer Problem | Process Synchronization | operating system |
Bangla Tutorial (youtube.com)
Producer Consumer Problem Using Semaphore | Operating System | Bangla
Tutorial - YouTube
User-Level Threads vs. Kernel-Level Threads
Multi-Threading
What is Thread?
● A thread is a flow of execution through the process code, with its
own program counter that keeps track of which instruction to
execute next, system registers which hold its current working
variables, and a stack which contains the execution history.
● A thread shares some information with its peer threads, such as the code
segment, data segment, and open files. When one thread alters a shared
memory item, all other threads see the change.
● A thread is also called a lightweight process. Threads provide a
way to improve application performance through parallelism.
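As a sketch of threads sharing the same data segment, the snippet below has four Python threads update one shared variable; a lock guards each update so no increments are lost. The variable name and the counts are made up for illustration.

```python
import threading

counter = 0                  # shared data: visible to every thread in the process
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # protect the shared variable from races
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: all four threads updated the same memory
```

Contrast this with separate processes, which would each get their own copy of `counter` unless explicit shared memory or message passing were used.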
Multi-Threading (Thread Creation)
Multi-Threading: Advantages
● Parallelism: Enables concurrent execution of tasks for faster
performance.
● Responsiveness: Allows other threads to run when some are
waiting, ensuring system responsiveness.
● Resource Sharing: Threads share the same resources, simplifying
communication and data sharing.
● Resource Utilization: Maximizes CPU and system resource usage
by enabling continuous execution.
● Modularity: Facilitates modular design, with different threads
handling specific tasks.
● Improved Throughput: Enhances overall system throughput by
running multiple threads concurrently.
