Uploaded by Monika Sehgal

CLASS NOTES OPERATING SYSTEM

UNIT –I

Introduction to Operating Systems: Objectives and Characteristics. Classification: Batch,
Multi-programming, Multi-processing, Multi-tasking, Time-sharing, Distributed, Network
and Real-time Operating Systems. System Calls and Services.

Functions and Structures: Operating System Functions: Process management, Memory
management, Secondary storage management, I/O management, File management, Protection
and Security. Structures: Simple Structure, Monolithic Structure, Layered Approach,
Microkernel, Exokernel and Virtual Machines.

UNIT –II
Process Management and Scheduling: Process concept- Process State Model, Process
Control Block and Threads. Process Scheduling- Scheduling Queues, Schedulers and
Context Switch. Operations on Processes, Cooperating processes and Inter-Process
Communication.

Process Scheduling: Scheduling Criteria, Scheduling Algorithms: Single Processor


Scheduling: FCFS, SJF, Round Robin, Multi Feedback Queue. Multiple Processor
Scheduling and Real Time scheduling. Scheduling Algorithm Evaluation.

UNIT – III
Memory Management: Concepts of Memory Management, Logical and Physical address
space, Swapping, Memory allocation: Contiguous and Non-Contiguous. Paging: Hardware
Support. Page Map Table and Protection. Segmentation: Hardware Support and Protection
and Sharing.

Virtual Memory: Need of Virtual Memory, Demand paging, Pure Demand Paging.
Handling page faults, Performance of Demand Paging. Page replacement Algorithms and
Allocation of Frames: Allocation algorithms and Global vs Local Allocation. Thrashing.

UNIT – IV
I/O Management: Basic I/O Devices, Types of I/O Devices: Block and Character Devices.
I/O Software: Device Independent I/O, User Space I/O and Kernel I/O Software. Device
Controllers, Device Drivers and Interrupt Handlers. Communication Approaches to I/O
Devices: Special Instruction I/O, Memory Mapped I/O and Direct Memory Access (DMA).
Secondary Storage Structure: Disk Structure and Disk Scheduling Algorithms.
File System Interface: File Concept: Attributes, Operations and Types. File Access
Methods: Sequential Access, Direct Access and Indexed Sequential. Free Space
Management. Directory Structures: Single Level, Two level and Tree Structured. File
Protection and Sharing.

Course Objectives:
The objective of this course is to enable students to learn important concepts related to
Operating Systems. It will help students enrich their knowledge and understanding of the
major functions performed by an Operating System, with in-depth coverage of process
management, memory management, secondary storage structure, file management and
Input/Output management.

UNIT-1 NOTES:

Introduction to Operating System: An operating system acts as an intermediary between the
user of a computer and the computer hardware. In short, it is an interface between the
computer hardware and the user.

 The purpose of an operating system is to provide an environment in which a user can
execute programs conveniently and efficiently.
 An operating system is software that manages computer hardware and software. The
hardware must provide appropriate mechanisms to ensure the correct operation of the
computer system and to prevent user programs from interfering with the proper operation
of the system.
 The operating system (OS) is a program that runs at all times on a computer. All other
programs, including application programs, run on top of the operating system.
 It assigns resources such as memory, processors and input/output devices to the
different processes that need them. This assignment of resources has to be fair and
secure.
List of Common Operating Systems
There are multiple types of operating systems each having its own unique features:

Windows OS
 Developer : Microsoft
 Key Features : User-friendly interface, software compatibility, hardware support,
Strong gaming support.
 Advantages : Easy to use for most users, Broad support from third-party
applications ,Frequent updates and support.
 Typical Use Cases : Personal computing, Business environment, Gaming.

macOS
 Developer : Apple.
 Key Features : Sleek, intuitive user interface, Strong integration with other Apple
products, Robust security features, High performance and stability.
 Advantages : Optimized for Apple hardware, Seamless experience across Apple
ecosystem, Superior graphics and multimedia capabilities.
 Typical Use Cases : Creative industries (design, video editing, music production),
Personal computing, Professional environments.

Linux
 Developer : Community-driven (various distributions).
 Key Features : Open-source and highly customizable, Robust security and stability,
Lightweight and can run on older hardware, Large selection of distributions (e.g.,
Ubuntu, Fedora, Debian).
 Advantages : Free to use and distribute, Strong community support, Suitable for servers
and development environments.
 Typical Use Cases : Servers and data centers, Development and programming, Personal
computing for tech enthusiasts.

Unix
 Developer: Originally AT&T Bell Labs, various commercial and open-source versions
available
 Key Features: Multiuser and multitasking capabilities, Strong security and stability,
Powerful command-line interface, Portability across different hardware platforms
 Advantages: Reliable and robust performance, Suitable for high-performance
computing and servers, Extensive support for networking
 Typical Use Cases: Servers and workstations, Development environments, Research
and academic settings

Characteristics of Operating Systems


 Device Management: The operating system keeps track of all the devices. So, it is also
called the Input/Output controller that decides which process gets the device, when and
for how much time.
 File Management: It keeps track of information about files, such as their location,
status and usage; it allocates and de-allocates files and decides which process gets
access to a file.
 Job Accounting: It keeps track of time and resources used by various jobs or users.
 Error-detecting Aids: These contain methods that include the production of dumps,
traces, error messages and other debugging and error-detecting methods.
 Memory Management: It is responsible for managing the primary memory of a
computer, including keeping track of which parts are in use and by whom, how much
memory is free or used, and allocating memory to processes as needed.
 Processor Management: It allocates the processor to a process and then de-allocates
the processor when it is no longer required or the job is done.
 Security: It prevents unauthorized access to programs and data using passwords or
some kind of protection technique .

Types of Operating Systems

1. Batch Operating System

A Batch Operating System is designed to handle large groups of similar jobs efficiently. It
does not interact with the computer directly but instead processes jobs that are grouped by
an operator. These jobs are queued and executed one after the other, without user
interaction during the process.
Advantages of Batch Operating System

 Efficient Job Management: Multiple users can efficiently share the system, making it
cost-effective.
 Minimal Idle Time: The system minimizes idle time by processing jobs in a continuous
sequence without human intervention.
 Handling Repetitive Tasks: Ideal for managing large, repetitive tasks, such as payroll
and billing, with minimal effort.
 Improved Throughput: Batch systems can handle high volumes of jobs at once,
improving overall system throughput.

Disadvantages of Batch Operating System

 Inefficient CPU Utilization: When a job is waiting for input/output (I/O), the CPU
remains idle, leading to poor utilization of resources.
 Unpredictable Job Completion: If one job fails, others may be delayed indefinitely,
making job completion time unpredictable.
 Increased Response Time: The time between job submission and output can be high as
all jobs are processed sequentially.
 Lack of Real-Time Feedback: Users cannot interact with the system in real-time,
making it less suitable for interactive tasks.

Examples:
Payroll Systems
Bank Statements

2. Multi-Programming Operating System

In a Multi-Programming Operating System, multiple programs reside in memory at the same
time. The CPU switches between programs, utilizing its resources more effectively and
improving overall system performance.
Advantages of Multi-Programming Operating System

 CPU is better utilized and the overall performance of the system improves.
 It helps in reducing the response time.

3. Multi-Tasking / Time-Sharing Operating System

A multitasking OS is a type of multiprogramming system in which every process runs in a
round-robin manner. Each task is given some time to execute so that all tasks work
smoothly. Each user gets a share of CPU time on a single shared system. The tasks can
come from a single user or from different users. The time each task gets to execute is
called the quantum. After this time interval is over, the OS switches to the next task.
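The quantum mechanism described above can be sketched in a short simulation. This is a simplified model that assumes all tasks are CPU-bound and already queued; the function name and burst values are illustrative, not taken from any real scheduler:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling.

    bursts: dict mapping process name -> remaining CPU burst time.
    Returns the order in which processes finish.
    """
    queue = deque(bursts.items())
    finish_order = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finish_order.append(name)                  # completes within its slice
        else:
            queue.append((name, remaining - quantum))  # preempted, back of the queue
    return finish_order

# Three tasks sharing one CPU with a quantum of 2 time units
print(round_robin({"P1": 5, "P2": 2, "P3": 4}, quantum=2))  # ['P2', 'P3', 'P1']
```

Note how the short task P2 finishes first even though P1 was submitted earlier; this is the fairness property the quantum provides.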

Advantages of Time-Sharing OS

 Each task gets an equal opportunity.
 Fewer chances of duplication of software.
 CPU idle time can be reduced.
 Resource Sharing: Time-sharing systems allow multiple users to share hardware
resources such as the CPU, memory and peripherals, reducing the cost of hardware and
increasing efficiency.
 Improved Productivity: Time-sharing allows users to work concurrently, thereby
reducing the waiting time for their turn to use the computer. This increased productivity
translates to more work getting done in less time.
 Improved User Experience: Time-sharing provides an interactive environment that
allows users to communicate with the computer in real time, providing a better user
experience than batch processing.
Disadvantages of Time-Sharing OS

 Reliability problem.
 One must take care of the security and integrity of user programs and data.
 Data communication problem.
 High Overhead: Time-sharing systems have a higher overhead than other operating
systems due to the need for scheduling, context switching and other overheads that
come with supporting multiple users.
 Complexity: Time-sharing systems are complex and require advanced software to
manage multiple users simultaneously. This complexity increases the chance of bugs
and errors.
 Security Risks: With multiple users sharing resources, the risk of security breaches
increases. Time-sharing systems require careful management of user access,
authentication and authorization to ensure the security of data and software.

Examples:
IBM VM/CMS
TSO (Time Sharing Option)
Windows Terminal Services

4. Multi-Processing Operating System

A Multi-Processing Operating System is a type of operating system in which more than
one CPU is used for the execution of processes. It improves the throughput of the system.
Advantages of a Multi-Processing Operating System

 It increases the throughput of the system, as processes can be executed in parallel.
 As it has several processors, if one processor fails, execution can proceed on another
processor.

5. Distributed Operating System

Distributed operating systems are a recent advancement in the world of computer
technology and are being widely accepted all over the world at a great pace. Various
autonomous interconnected computers communicate with each other using a shared
communication network. Independent systems possess their own memory unit and CPU,
and these processors may differ in size and function. The major benefit of working with
these types of operating systems is that a user can always access files or software that
are not present on his own system but on some other system connected within the network,
i.e., remote access is enabled among the devices connected to that network.

Advantages of Distributed Operating System


 Failure of one will not affect the other network communication, as all systems are
independent of each other.
 Electronic mail increases the data exchange speed.
 Since resources are being shared, computation is highly fast and durable.
 Load on host computer reduces.
 These systems are easily scalable as many systems can be easily added to the network.
 Delay in data processing reduces.

Disadvantages of Distributed Operating System


 Failure of the main network will stop the entire communication.
 The languages and protocols used to establish distributed systems are not yet
well-defined.
 These types of systems are not readily available, as they are very expensive. Moreover,
the underlying software is highly complex and not yet well understood.

6. Network Operating System

These systems run on a server and provide the capability to manage data, users, groups,
security, applications and other networking functions. They allow shared access to files,
printers, security, applications and other networking functions over a small private
network. One more important aspect of Network Operating Systems is that all users are
well aware of the underlying configuration and of all other users within the network,
their connections, etc., which is why these systems are popularly known as loosely
coupled systems.

Advantages of Network Operating System

 Highly stable, centralized servers.
 Security concerns are handled through servers.
 New technologies and hardware upgrades are easily integrated into the system.
 Server access is possible remotely from different locations and types of systems.

Disadvantages of Network Operating System

 Servers are costly.
 The user has to depend on a central location for most operations.
 Maintenance and updates are required regularly.

Examples:
Microsoft Windows Server 2003
Microsoft Windows Server 2008
UNIX, Linux
Mac OS X
Novell NetWare

7. Real-Time Operating System

These types of OS serve real-time systems, in which the time interval required to process
and respond to inputs is very small. This time interval is called the response time.
Real-time systems are used when the timing requirements are very strict, as in missile
systems, air traffic control systems, robots, etc.

Operating System Structures

The structure of an OS depends mainly on how the various standard components of the
operating system are interconnected and merged into the kernel.

Simple Structure:
Simple structure operating systems do not have well-defined structures and are small,
simple and limited. The interfaces and levels of functionality are not well separated. MS-
DOS is an example of such an operating system. In MS-DOS, application programs are able
to access the basic I/O routines. These types of operating systems cause the entire system to
crash if one of the user programs fails.
Layered Structure:
In a layered structure, the OS is broken into a number of layers (levels), which gives
much more control over the system. The bottom layer (layer 0) is the hardware and the
topmost layer (layer N) is the user interface. The layers are designed so that each layer
uses only the functions of the lower-level layers. This simplifies debugging: if the
lower-level layers have already been debugged and an error occurs, the error must be in
the current layer only. The main disadvantage of this structure is that at each layer the
data needs to be modified and passed on, which adds overhead to the system. Moreover,
careful planning of the layers is necessary, as a layer can use only lower-level layers.

UNIX is an example of this structure.


Monolithic Structure
A monolithic structure is a type of operating system architecture where the entire operating
system is implemented as a single large process in kernel mode. Essential operating system
services, such as process management, memory management, file systems and device
drivers, are combined into a single code block.

Micro-Kernel Structure
The micro-kernel structure designs the operating system by removing all non-essential
components from the kernel and implementing them as system and user programs. This
results in a smaller kernel called the micro-kernel. An advantage of this structure is that
new services are added to user space and do not require the kernel to be modified. It is
therefore more secure and reliable: if a service fails, the rest of the operating system
remains untouched. macOS is an example of this type of OS.

Exo-Kernel Structure:
Exokernel is an operating system developed at MIT to provide application-level
management of hardware resources. By separating resource management from protection,
the exokernel architecture aims to enable application-specific customization. Due to its
limited operability, exokernel size typically tends to be minimal. The OS will always have
an impact on the functionality, performance and scope of the apps that are developed on it
because it sits in between the software and the hardware. The exokernel operating system
makes an attempt to address this problem by rejecting the notion that an operating system
must provide abstractions upon which to base applications. The objective is to impose as
few abstractions as possible on developers while still giving them freedom to build their
own.

System Call
A system call is a programmatic way in which a computer program requests a service from
the kernel of the operating system on which it is executed.
System calls:
 provide a way for programs to interact with the operating system;
 provide the services of the operating system to user programs;
 are the only entry points into the kernel and are executed in kernel mode.

 A system call can be written in high-level languages like C, C++ or Pascal, or in
assembly language.
 A system call is initiated by the program executing a specific instruction, which triggers
a switch to kernel mode, allowing the program to request a service from the OS. The OS
then handles the request, performs the necessary operations and returns the result back
to the program.
 Without system calls, each program would need to implement its methods for accessing
hardware and system services, leading to inconsistent and error-prone behavior.
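As a concrete illustration, Python's os module exposes thin wrappers over common POSIX system calls. This is a sketch; the exact trap mechanism by which control enters the kernel is architecture-specific:

```python
import os

# os.getpid() and os.write() wrap the getpid() and write() system calls:
# the library executes a trap instruction, the CPU switches to kernel mode,
# the kernel services the request, and the result is returned to the program.
pid = os.getpid()
os.write(1, f"running as process {pid}\n".encode())  # fd 1 = standard output
```

Every higher-level I/O facility (print, file objects, sockets) ultimately funnels into system calls like these.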

Functions of Operating System

1. Process Management
Process management in an operating system is about managing processes. A process is a
running program; its life cycle runs from the moment the program starts until it finishes.
The operating system makes sure each process:
 gets its turn to use the CPU;
 is synchronized with other processes when needed;
 has access to the resources it needs, like memory, files and input/output devices.

2. Memory Management
Memory management is an essential task of the operating system that handles the storage
and organization of data in both main (primary) memory and secondary storage. The OS
ensures that memory is allocated and deallocated properly to keep programs running
smoothly. It also manages the interaction between volatile main memory and non-volatile
secondary storage.
3. File System Management
File management in the operating system ensures the organized storage, access and control
of files. The OS abstracts the physical storage details to present a logical view of files,
making it easier for users to work with data. It manages how files are stored on different
types of storage devices (like hard drives or SSDs) and ensures smooth access through
directories and permissions.

File System Management includes managing of:

File Attributes
 File Name: Identifies the file with a name and extension (e.g., .txt, .jpg).
 File Type: Defines the format of the file (e.g., text, image, executable).
 Size: The amount of storage the file occupies.
 Permissions: Determines who can read, write, or execute the file.

File Types
 Text Files: Contain human-readable content (e.g., .txt, .md).
 Binary Files: Store data in binary format (e.g., .jpg, .mp3).
 Executable Files: Contain program code (e.g., .exe, .out).
4. Device Management (I/O System)
Device management of an operating system handles the communication between the system
and its hardware devices, like printers, disks or network interfaces. The OS provides device
drivers to control these devices, using techniques like Direct Memory Access (DMA) for
efficient data transfer and strategies like buffering and spooling to ensure smooth operation.

5. Protection and Security


Protection and security mechanisms in an operating system are designed to safeguard
system resources from unauthorized access or misuse. These mechanisms control which
processes or users can access specific resources (such as memory, files, and CPU time) and
ensure that only authorized users can perform specific actions. While protection ensures
proper access control, security focuses on defending the system against external and
internal attacks.

UNIT-2 NOTES:
Process Control Block:
A Process Control Block (PCB) contains information about a process, i.e. its registers,
quantum, priority, etc. The process table is an array of PCBs; logically it contains a
PCB for each of the current processes in the system.

Structure of the Process Control Block

A Process Control Block (PCB) is a data structure used by the operating system to manage
information about a process. The process control block keeps track of many important
pieces of information needed to manage processes efficiently, including the following key
data items.

 Pointer: It is a stack pointer that is required to be saved when the process is switched
from one state to another to retain the current position of the process.
 Process state: It stores the respective state of the process.
 Process number: Every process is assigned a unique id known as process ID or PID
which stores the process identifier.
 Program counter: Program Counter stores the counter, which contains the address of the
next instruction that is to be executed for the process.
 Registers: When a process is running and its time slice expires, the current values of
the process-specific registers are stored in the PCB and the process is swapped out.
When the process is scheduled to run again, the register values are read from the PCB
and written back to the CPU registers. This is the main purpose of the registers field
in the PCB.
 Memory limits: This field contains the information about memory management
system used by the operating system. This may include page tables, segment tables, etc.
 List of Open files: This information includes the list of files opened for a process.
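The fields above can be pictured as a record. The sketch below uses hypothetical field names chosen for readability; a real kernel structure (e.g. Linux's task_struct) is far larger:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Sketch of a Process Control Block (illustrative field names only)."""
    pid: int
    state: str = "new"              # new / ready / running / waiting / terminated
    program_counter: int = 0        # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # saved CPU register values
    memory_limits: tuple = (0, 0)   # e.g. base/limit pair or page-table reference
    open_files: list = field(default_factory=list)  # list of open files

# The process table is logically an array of PCBs, one per current process:
process_table = [PCB(pid=1, state="running"), PCB(pid=2, state="ready")]
print([p.pid for p in process_table])
```

On a context switch, the kernel fills in `registers` and `program_counter` for the outgoing process and restores them for the incoming one.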

Context Switching:
Context Switching in an operating system is a critical function that allows the CPU to
efficiently manage multiple processes. By saving the state of a currently active process and
loading the state of another, the system can handle various tasks simultaneously without
losing progress. This switching mechanism ensures optimal use of the CPU, enhancing the
system's ability to perform multitasking effectively.

Example of Context Switching


Suppose the operating system has N processes, each described by a Process Control Block (PCB).
Each process runs using the CPU to perform its task. While a process is running, other
processes with higher priorities queue up to use the CPU and complete their tasks.
Switching the CPU to another process requires saving the state of the current process and
restoring the state of a different process. This task is known as a context switch. When a
context switch occurs, the kernel saves the context of the old process in its PCB and loads
the saved context of the new process scheduled to run. Context-switch time is pure
overhead because the system does no useful work while switching. The switching speed
varies from machine to machine, depending on factors such as memory speed, the number
of registers that need to be copied, and the existence of special instructions (such as a single
instruction to load or store all registers). A typical context switch takes a few milliseconds.
Context-switch times are highly dependent on hardware support. For example, some
processors (such as the Sun UltraSPARC) provide multiple sets of registers. In this case, a
context switch simply requires changing the pointer to the current register set. However, if
there are more active processes than available register sets, the system resorts to copying
register data to and from memory, as before. Additionally, the more complex the operating
system, the greater the amount of work that must be done during a context switch.

Need of Context Switching


 One process does not directly switch to another within the system. Context switching
makes it easier for the operating system to use the CPU's resources to carry out its tasks
and store its context while switching between multiple processes.
 Context switching enables all processes to share a single CPU to finish their execution
and to store the status of the system's tasks. When a process is reloaded into the
system, its execution resumes at the same point where it was interrupted.
 Context switching allows a single CPU to handle multiple process requests concurrently
without the need for any additional processors.

PROCESS STATE MODEL


A process in an Operating System goes through a series of states during its execution.
The Process State Model is used to represent these states and the transitions between them.

Process States
 New: Process is being created (not yet ready for execution).
 Ready: Process is loaded into main memory and waiting for the CPU.
 Running: Process is currently being executed by the CPU.
 Waiting / Blocked: Process cannot proceed until some event occurs (e.g., I/O completion).
 Terminated: Process has finished execution and is removed from the system.
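The legal moves between these states can be captured in a small transition table. This is a sketch of the five-state model; the helper name is illustrative:

```python
# Allowed transitions in the five-state process model
TRANSITIONS = {
    "new": {"ready"},                               # admitted by long-term scheduler
    "ready": {"running"},                           # dispatched by short-term scheduler
    "running": {"ready", "waiting", "terminated"},  # preempted / I/O wait / exit
    "waiting": {"ready"},                           # awaited event (e.g. I/O) completes
    "terminated": set(),                            # no transitions out
}

def can_transition(src, dst):
    return dst in TRANSITIONS.get(src, set())

print(can_transition("running", "waiting"))  # True: process issues an I/O request
print(can_transition("waiting", "running"))  # False: it must re-enter the ready queue
```

The second check captures an important rule: a waiting process never jumps straight back onto the CPU; it always passes through the ready queue first.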

Operations on Processes
An Operating System can perform various operations on processes during their life cycle.

a) Process Creation

A process can create child processes.

The creating process is called parent, and the new one is child.

Each process is identified by a PID (Process ID).

Reasons for creation:

 New job / program loaded.
 User request to start a program.
 System initialization (boot time).
 Batch job initiation.

System Calls used:

 fork() in UNIX/Linux → creates a copy of the calling process.
 exec() → replaces the process memory with a new program.
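A minimal POSIX-only sketch of these two calls, using Python's os wrappers (os.fork and os.execvp mirror the C interfaces; this will not run on Windows):

```python
import os

pid = os.fork()                      # duplicate the calling process
if pid == 0:
    # Child: same code and data as the parent, but a distinct PID.
    os.execvp("echo", ["echo", "hello from the child"])  # replaces the child's image
else:
    # Parent: wait for the child and collect its exit status.
    _, status = os.waitpid(pid, 0)
    print("child exited with code", os.waitstatus_to_exitcode(status))
```

fork() returns twice: 0 in the child and the child's PID in the parent, which is how the two copies take different branches.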

b) Process Termination

 A process ends when:


o It finishes execution normally.
o It is killed due to an error.
o It is terminated by its parent.
 On termination:
o Resources are freed.
o Child processes may also be terminated (cascading termination).

c) Process Suspension & Resumption

 Suspend → temporarily stop execution (e.g., to free resources or due to priority
scheduling).
 Resume → bring back to the ready state.

d) Process Scheduling

 Decides which process runs next.
 Types:
o Long-term scheduling (New → Ready).
o Short-term scheduling (Ready → Running).
o Medium-term scheduling (Suspend ↔ Ready).

Cooperating Processes
A process is said to be cooperating if it can affect or be affected by other processes during
execution.

Why Cooperation is needed?

 Information sharing → e.g., database systems.
 Computation speed-up → break a task into sub-tasks.
 Modularity → divide a program into modules/processes.
 Convenience → multiple applications working together.

Independent Process vs Cooperating Process:

Feature                     Independent Process    Cooperating Process
Data sharing                None                   Yes
Effect of other processes   None                   Can be affected
Synchronization             Not needed             Needed
Inter-Process Communication (IPC)
IPC allows processes to exchange data and synchronize with each other.

IPC Models

1. Shared Memory Model
o Processes communicate via a shared region of memory.
o Advantages:
 Fast communication (no kernel involvement after setup).
o Disadvantages:
 Needs synchronization (e.g., semaphores, mutexes) to avoid race
conditions.

2. Message Passing Model
o Processes exchange messages via the OS.
o Advantages:
 No shared variables → simpler for distributed systems.
o Disadvantages:
 Slower than shared memory.
o Primitives:
 send(message)
 receive(message)
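The send/receive primitives can be sketched with Python's multiprocessing module, where a Queue carries messages between two processes through the OS. This is an illustrative example; the message text is arbitrary:

```python
from multiprocessing import Process, Queue

def producer(q):
    q.put("request: read block 42")      # send(message)

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    print(q.get())                       # receive(message): blocks until one arrives
    p.join()
```

No memory is shared between the two processes; the kernel moves the message from one address space to the other, which is exactly why this model needs no explicit synchronization but pays a speed cost.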

THREADS
A thread is the smallest unit of CPU execution.
A process can have one or more threads sharing the same code, data, and resources, but
each thread has its own program counter, registers, and stack.

1.1 Benefits of Threads

 Responsiveness → parts of a program continue running even if one thread is blocked.
 Resource Sharing → threads share process memory and resources.
 Economy → creating threads is faster than creating processes.
 Utilization of Multiprocessor Architectures → multiple threads can run in parallel.
Types of Threads

1. User-Level Threads (ULTs)
o Managed in user space without kernel knowledge.
o Switching between threads is fast.
o If one thread blocks, the whole process blocks.
2. Kernel-Level Threads (KLTs)
o Managed directly by OS kernel.
o Kernel schedules threads individually.
o Slightly slower to manage, but blocking of one thread does not affect others.
3. Hybrid Model
o Combines both user and kernel threads for better performance.

Multithreading Models

 Many-to-One → many user threads mapped to one kernel thread.
 One-to-One → each user thread mapped to a kernel thread.
 Many-to-Many → many user threads mapped to many kernel threads.
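The defining property of threads, shared code and data with private stacks and registers, can be demonstrated with Python's threading module (CPython threads are kernel-level threads in a one-to-one mapping; the lock is needed precisely because memory is shared):

```python
import threading

counter = 0                     # shared data: visible to every thread in the process
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:              # synchronize: an increment is not atomic
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                  # 40000: all four threads updated the same variable
```

If four separate processes had each incremented their own counter, the parent would see four independent values; with threads there is one counter because there is one address space.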

PROCESS SCHEDULING
Scheduling is the decision-making process that determines which process runs next on the
CPU.

Scheduling Levels

1. Long-Term Scheduling
o Controls admission of processes into the system (New → Ready).
o Determines the degree of multiprogramming.
2. Medium-Term Scheduling
o Temporarily suspends or resumes processes to balance CPU & I/O usage.
3. Short-Term Scheduling
o Selects process from the ready queue to execute (Ready → Running).
o Happens frequently (milliseconds).

SCHEDULING QUEUES
When a process enters the system, it is placed in a queue.

Types of Scheduling Queues:

1. Job Queue → all processes in the system.
2. Ready Queue → processes in memory, waiting for the CPU.
3. Device Queue → processes waiting for an I/O device.
The three main types of Schedulers are:

1. Long-Term Scheduler (Job Scheduler)

Purpose:

 Controls admission of processes into the system from Job Queue → Ready Queue.
 Decides which jobs to load into main memory for execution.

Key Points:

 Runs infrequently (seconds, minutes).
 Works in batch systems or time-sharing systems.
 Determines the degree of multiprogramming (number of processes in memory).
 Can choose a mix of CPU-bound and I/O-bound processes to keep the system balanced.

Example:
When you submit many batch jobs, the OS doesn’t load them all into RAM immediately.
The long-term scheduler picks a few based on priority, resource availability, etc.

Medium-Term Scheduler

Purpose:

 Temporarily removes processes from memory (suspension) and later resumes them.
 Improves CPU utilization and system performance.

Key Points:

 Runs occasionally (seconds).


 Often used in time-sharing systems.
 Helps when there is memory contention (too many processes in RAM).
 May swap out low-priority or waiting processes to disk → frees RAM for active
processes.
 When resources are free, swapped-out processes are brought back into Ready Queue.

Example:
In a time-sharing system, if many users log in at once, the OS may suspend some
background processes to give CPU to interactive tasks.

Short-Term Scheduler (CPU Scheduler)

Purpose:

 Selects one process from Ready Queue → CPU for execution.


 Runs very frequently (milliseconds).

Key Points:

 Works directly with CPU burst scheduling.


 Must be fast (decision-making time is in microseconds).
 Uses scheduling algorithms like:
o First-Come First-Served (FCFS)
o Shortest Job Next (SJN/SJF)
o Priority Scheduling
o Round Robin (RR)
o Multilevel Queue
o Multilevel Feedback Queue

Example:
When the CPU finishes a time slice in Round Robin scheduling, the short-term scheduler
quickly picks the next process from the ready queue.
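As an illustration, Round Robin can be sketched with a simple queue simulation (the process names, burst times, and quantum below are illustrative, not from the notes):

```python
# Minimal Round Robin sketch: preempted processes rejoin the tail of the queue.
from collections import deque

def round_robin(bursts, quantum):
    """Return the order in which processes complete."""
    queue = deque(bursts.items())          # (name, remaining burst time)
    order = []
    while queue:
        name, remaining = queue.popleft()
        if remaining > quantum:
            queue.append((name, remaining - quantum))  # time slice expired
        else:
            order.append(name)             # finishes within this slice
    return order

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))  # ['P3', 'P2', 'P1']
```

Each process that still has work after its slice rejoins the tail of the ready queue, which is what gives Round Robin its fairness.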

MULTIPLE PROCESSOR SCHEDULING


When a system has more than one CPU, scheduling becomes more complex.

Types of Multiprocessor Scheduling

a) Asymmetric Multiprocessing

 One processor is the master — handles all scheduling and I/O.


 Other processors (slaves) execute only user code.
 Advantage → Simple to implement.
 Disadvantage → Master can become a bottleneck.

b) Symmetric Multiprocessing (SMP)

 Each processor runs its own scheduler.


 Processes are in a common ready queue, or each processor may have its own ready
queue.
 Most modern systems use SMP.

REAL-TIME SCHEDULING
Used in systems where tasks must meet strict deadlines.
Types of Real-Time Systems

 Hard Real-Time Systems


o Missing a deadline = system failure.
o Example: Airbag control, pacemaker.
 Soft Real-Time Systems
o Occasional deadline misses are tolerable, but performance degrades.
o Example: Video streaming, online gaming.

Real-Time Scheduling Algorithms


a) Rate Monotonic Scheduling (RMS)

 Static priority: Shorter period tasks → higher priority.


 Used in periodic task systems.

b) Earliest Deadline First (EDF)

 Dynamic priority: Task with nearest deadline gets CPU first.


 Optimal for single-processor preemptive systems.

c) Least Laxity First (LLF)

 Laxity = (Deadline - Current Time - Remaining Execution Time).


 Task with smallest laxity runs first.
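As a sketch, the EDF policy above can be simulated at unit-time granularity (the task set is hypothetical: each task carries a remaining execution time and a deadline):

```python
# Earliest Deadline First sketch: at each tick, run the task with the
# nearest deadline until all tasks complete.
def edf_order(tasks):
    tasks = {n: list(v) for n, v in tasks.items()}  # name -> [remaining, deadline]
    trace = []
    while tasks:
        name = min(tasks, key=lambda n: tasks[n][1])  # earliest deadline wins
        trace.append(name)
        tasks[name][0] -= 1
        if tasks[name][0] == 0:
            del tasks[name]                           # task finished
    return trace

print(edf_order({"T1": [2, 5], "T2": [1, 3]}))  # ['T2', 'T1', 'T1']
```

T2's deadline (3) is nearer than T1's (5), so T2 runs to completion first even though T1 arrived in the dictionary first.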

SCHEDULING ALGORITHM EVALUATION


When selecting a scheduling algorithm, we measure performance criteria.

CPU Scheduling Evaluation Criteria

Metric | Description | Goal
CPU Utilization | % of time CPU is busy | Maximize
Throughput | No. of processes completed per unit time | Maximize
Turnaround Time | Total time from submission to completion | Minimize
Waiting Time | Time spent in ready queue | Minimize
Response Time | Time from request submission to first response | Minimize
Predictability | Consistent performance over time | High
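A minimal sketch of computing two of these metrics, average waiting time and average turnaround time, for FCFS with all processes arriving at time 0 (the burst times are the classic textbook values, used here for illustration):

```python
# Average waiting and turnaround time under FCFS, arrival time 0 for all.
def fcfs_metrics(bursts):
    start, waiting, turnaround = 0, [], []
    for b in bursts:
        waiting.append(start)           # time spent in the ready queue
        turnaround.append(start + b)    # submission (t=0) to completion
        start += b
    return sum(waiting) / len(bursts), sum(turnaround) / len(bursts)

avg_wait, avg_tat = fcfs_metrics([24, 3, 3])
print(avg_wait, avg_tat)   # 17.0 27.0
```

Note how one long burst at the front (the "convoy effect") inflates the averages; running the short jobs first would give an average waiting time of only 3.0.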

UNIT-3 NOTES:
MEMORY MANAGEMENT
Memory management is a function of the Operating System that handles the allocation and
deallocation of memory space to processes during execution.

1. CONCEPTS OF MEMORY MANAGEMENT

 Goal:
o Keep the CPU busy by ensuring that processes have the memory they need.
o Allocate memory efficiently to maximize performance.
 Functions:

1. Tracking memory usage (which parts are free/occupied).


2. Allocating memory to processes.
3. Deallocating memory when processes finish.
4. Protecting memory so that one process doesn’t access another’s.

2. LOGICAL & PHYSICAL ADDRESS SPACE


2.1 Logical Address

 Generated by CPU during program execution.


 Also called virtual address.
 The process thinks it has its own continuous memory space.

2.2 Physical Address

 Actual location in the main memory (RAM).

Key Point:

 Logical addresses are converted to physical addresses by the Memory Management


Unit (MMU).

Example:

Logical Address (CPU): 1200


Base Address (in RAM): 3000
Physical Address = 1200 + 3000 = 4200
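The translation above can be sketched as a relocation-register MMU with a limit check (the limit value is illustrative):

```python
# MMU sketch: reject logical addresses beyond the limit, otherwise
# add the base (relocation) register to form the physical address.
def translate(logical, base, limit):
    if logical >= limit:
        raise MemoryError("trap: address beyond limit register")
    return base + logical

print(translate(1200, base=3000, limit=2000))   # 4200
```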
3. SWAPPING

 A technique where entire processes are moved between main memory and backing
store (disk) to free up space.
 Used in: Multiprogramming systems.

Steps:

1. Process is swapped out to disk.


2. Later, swapped in to memory for execution.

Advantage:

 More processes can be executed.

Disadvantage:

 Disk I/O is slow → increases overhead.

4. MEMORY ALLOCATION
4.1 Contiguous Allocation

 Each process is allocated a single contiguous block of memory.


 Two types:
1. Fixed Partitioning → memory divided into fixed sizes.
2. Variable Partitioning → partitions created dynamically.

Problems:

 External fragmentation (small free spaces between blocks).


 Internal fragmentation (unused space inside allocated block).
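As a sketch of variable partitioning, a first-fit allocator scans the hole list and carves the request out of the first hole that is large enough (the hole list and sizes below are illustrative, in KB):

```python
# First-fit sketch for variable partitioning.
def first_fit(holes, request):
    """holes: list of (start, size). Returns start address or None."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            holes[i] = (start + request, size - request)  # shrink the hole
            return start
    return None                  # no hole fits: wait, or compact memory

holes = [(0, 100), (300, 500), (1000, 200)]
print(first_fit(holes, 150))     # 300 -- the first hole of size >= 150
```

The leftover piece of each hole (here 350 KB starting at 450) is exactly the external fragmentation the section describes.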

4.2 Non-Contiguous Allocation

 Process’s memory is scattered across physical memory.


 Examples: Paging, Segmentation.
 Eliminates external fragmentation.

5. PAGING
5.1 Concept

 Divides logical memory into fixed-size blocks → pages.


 Divides physical memory into fixed-size blocks → frames.
 Pages are mapped to frames via a Page Table.

Example:

 Page size = Frame size = 1 KB.


 Logical Address 2050:
o Page Number = 2050 ÷ 1024 = 2
o Offset = 2050 % 1024 = 2
o Frame Number (from Page Table) → e.g., 5
o Physical Address = (Frame Number × 1024) + Offset = (5 × 1024) + 2 = 5122
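The same calculation as code (the page-table contents are illustrative):

```python
# Paging translation sketch: split the logical address into page number
# and offset, then look the page up in the page table.
PAGE_SIZE = 1024
page_table = {2: 5}                 # page 2 is loaded in frame 5

def paddr(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

print(paddr(2050))                  # (5 * 1024) + 2 = 5122
```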

6. SEGMENTATION
Segmentation is a memory management scheme that divides a process's memory into
variable-sized segments rather than fixed-size pages.

 Each segment represents a logical unit such as:


o Main program
o Functions / Methods
o Data arrays
o Stack
o Symbol tables

Purpose:

 To reflect user’s logical view of memory instead of just physical chunks.


 To allow protection and sharing at the logical unit level.

Key Concepts

1. Segment
o A contiguous block of memory for a specific purpose.
o Each segment has:
 Name / Segment number (ID)
 Base address (starting location in physical memory)
 Limit (length of the segment in bytes)
2. Segment Table
o Maintained by the OS for each process.
o Maps logical segment numbers to physical addresses.
o Each entry contains:
 Base → Starting physical address
 Limit → Size of segment
o Hardware uses Segment Table Base Register (STBR) to locate the segment
table.
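A minimal sketch of the segment-table lookup, including the limit check that traps out-of-range offsets (the table contents are illustrative):

```python
# Segmentation translation sketch: each segment-table entry is (base, limit);
# an offset at or past the limit causes a protection trap.
segment_table = {0: (1400, 1000),   # e.g. main program
                 1: (6300, 400)}    # e.g. stack

def seg_translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset exceeds segment limit")
    return base + offset

print(seg_translate(1, 100))        # 6300 + 100 = 6400
```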
Need of Virtual Memory

Definition:
Virtual Memory is a technique that gives the user the illusion of a large,
contiguous block of memory, even though the physical RAM may be smaller.

Why we need it:

1. Programs are bigger than RAM → We can run programs that do not completely fit
into main memory at once.
2. Better Multiprogramming → Multiple programs can share memory without
interfering with each other.
3. Efficient use of RAM → Only required parts of the program are loaded into memory.
4. Isolation & Security → Each process works in its own virtual address space, so they
can’t harm each other.
Example:
Suppose you have 4 GB of RAM but you want to run a program that needs 8 GB.
Virtual memory loads only the needed parts of the program into RAM and keeps the rest in
the disk, swapping them when required.

Demand Paging

Definition:
A type of virtual memory management where pages (small fixed-size blocks of a program)
are loaded into RAM only when needed.

How it works:

1. When a process tries to access a page not in RAM → Page Fault occurs.
2. The operating system loads the required page from disk into RAM.
3. Execution resumes from where it was stopped.

Advantages:

 Saves memory space.


 Faster program start (no need to load all pages at once).

Pure Demand Paging

 A special case of demand paging.


 No pages are loaded in RAM at the start of execution.
 All pages are loaded only when they are first accessed.

Drawback:
High page fault rate in the beginning.

Page Replacement Algorithms

When RAM is full, OS must replace an existing page to load a new one.
Some popular algorithms:

(a) FIFO (First-In, First-Out)

 Replace the oldest loaded page.


 Simple but can cause Belady’s Anomaly (more frames may cause more faults).

(b) Optimal Page Replacement

 Replace the page that will not be used for the longest time in future.
 Best in theory but requires future knowledge → used only for comparison.
(c) LRU (Least Recently Used)

 Replace the page that has not been used for the longest time in the past.
 Closer to optimal but requires hardware support (counters or stacks).
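A sketch comparing FIFO and LRU fault counts on the same reference string (3 frames; the reference string is a common textbook-style example):

```python
# Count page faults for FIFO or LRU on a reference string.
def page_faults(refs, frames, policy):
    memory, faults, last_use = [], 0, {}
    for t, page in enumerate(refs):
        last_use[page] = t                   # recency, used by LRU
        if page in memory:
            continue                         # hit: no fault
        faults += 1
        if len(memory) == frames:
            if policy == "FIFO":
                memory.pop(0)                # evict the oldest-loaded page
            else:                            # LRU
                memory.remove(min(memory, key=lambda p: last_use[p]))
        memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(page_faults(refs, 3, "FIFO"), page_faults(refs, 3, "LRU"))  # 9 10
```

On this particular string LRU happens to fault slightly more than FIFO; on most workloads with locality, LRU does better, which is why real systems approximate it.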

Allocation of Frames

How frames are given to processes.

(a) Equal Allocation:

 Divide total frames equally among all processes.

(b) Proportional Allocation:

 Allocate based on the size of the process.
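Proportional allocation can be sketched as frames_i = (size_i / total_size) × m, truncated to an integer (the process sizes and frame count below are illustrative):

```python
# Proportional frame allocation sketch: bigger processes get more frames.
def proportional(sizes, total_frames):
    total = sum(sizes.values())
    return {p: (s * total_frames) // total for p, s in sizes.items()}

print(proportional({"P1": 10, "P2": 127}, total_frames=62))
# {'P1': 4, 'P2': 57}
```

Truncation can leave a frame or two unassigned (here 62 - 4 - 57 = 1); a real allocator would hand the remainder out by some tie-breaking rule.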

Global vs Local Allocation

 Global Allocation: A process can take a frame from another process if needed (can
improve throughput but may cause starvation).
 Local Allocation: A process can replace only its own pages (more stable
performance).

Thrashing

Definition:
When a process spends more time swapping pages in/out than executing actual instructions.

Cause:

 Too many processes + too few frames → High page fault rate.

Solution:

 Reduce the degree of multiprogramming.


 Use working set model to ensure each process has enough frames.

UNIT-4
Basic I/O Devices

I/O (Input/Output) devices are hardware components that allow a computer to communicate
with the outside world.

 Input devices → send data to the computer (keyboard, mouse, scanner, microphone).
 Output devices → send data from the computer to the outside (monitor, printer,
speakers).
 Storage devices → both input and output (hard disk, USB drive).

💡 Example:
When you type on a keyboard (input), the CPU processes the data, and then the monitor
displays the text (output).

2. Types of I/O Devices


(a) Block Devices

 Store data in fixed-size blocks (like a book with fixed-size pages).


 Can read/write entire blocks at a time.
 Examples: Hard disk, SSD, CD-ROM.

(b) Character Devices

 Handle data one character/byte at a time.


 No block structure.
 Examples: Keyboard, mouse, printer.

3. I/O Software

Software that manages communication between the CPU and I/O devices.

(a) Device Independent I/O

 Works for all devices, regardless of type.


 Example: File open, read, write — same functions for hard disk, pen drive, etc.

(b) User Space I/O

 I/O operations performed in user programs without switching to kernel mode (e.g.,
reading a file in a text editor).

(c) Kernel I/O Software

 Handled by the operating system’s kernel.


 Includes scheduling, buffering, and handling hardware details.

4. Device Controllers

 Small processors inside devices that manage their operation.


 Convert CPU commands into device-specific actions.
 Example: A hard disk controller manages reading/writing sectors.

5. Device Drivers

 Software that allows the OS to talk to hardware.


 Acts like a translator between the OS and the device controller.

6. Interrupt Handlers

 Special functions in the OS that deal with interrupts (signals from devices).
 Example: When a key is pressed, the keyboard sends an interrupt, and the handler
processes it.

7. Communication Approaches to I/O Devices


(a) Special Instruction I/O

 CPU uses special instructions to communicate with devices.


 Example: IN and OUT instructions in assembly language.

(b) Memory-Mapped I/O

 I/O devices are assigned memory addresses.


 CPU reads/writes to these addresses like normal memory.

(c) Direct Memory Access (DMA)

 Device transfers data directly to/from memory without CPU involvement.


 Example: When copying a file from a USB to disk, DMA moves the data while CPU
does other work.
8. Secondary Storage Structure
Disk Structure

 Disks have platters, tracks, and sectors.


 Data is stored in cylinders (same track number across all platters).

9. Disk Scheduling Algorithms

Used to decide the order in which disk requests are served.

1. FCFS (First Come, First Serve) → Serve requests in arrival order.


2. SSTF (Shortest Seek Time First) → Serve the request closest to current head
position.
3. SCAN → Head moves in one direction, serving requests until end, then reverses.
4. C-SCAN → Like SCAN but returns directly to start without serving requests on the
way back.
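A sketch comparing total head movement under FCFS and SSTF for one request queue (the cylinder numbers follow a common textbook-style example; the head starts at cylinder 53):

```python
# Total head movement (in cylinders) for FCFS vs SSTF disk scheduling.
def fcfs_seek(start, requests):
    total, pos = 0, start
    for r in requests:               # serve strictly in arrival order
        total += abs(r - pos)
        pos = r
    return total

def sstf_seek(start, requests):
    pending, total, pos = list(requests), 0, start
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))  # shortest seek
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_seek(53, queue), sstf_seek(53, queue))   # 640 236
```

SSTF roughly halves the head movement here, but because it always prefers nearby requests it can starve requests at the far edge of the disk.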

10. File System Interface


File Concept

 A file is a collection of related data stored on secondary storage.

File Attributes:

 Name
 Type (text, binary, executable)
 Size
 Permissions
 Creation/modification date

File Operations:

 Create, read, write, delete, rename, copy.

File Types:

 Text files, binary files, image files, executable files.

11. File Access Methods


(a) Sequential Access

 Data is read/written in order.


 Example: Playing a song (you start from the beginning).
(b) Direct Access

 Jump directly to any part of the file.


 Example: Watching a specific scene in a movie file.

(c) Indexed Sequential

 Uses an index to jump quickly to specific locations.


 Example: Searching for a word in a dictionary app.

12. Free Space Management

 OS must track unused space on disk.


 Methods:
o Bit vector → a bit for each block (0 = free, 1 = used).
o Linked list → list of free blocks.
o Grouping → group free blocks together.
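The bit-vector method can be sketched as a linear scan for the first 0 bit (the bitmap below is illustrative):

```python
# Bit-vector free-space sketch: bit i is 0 when block i is free, 1 when used.
def first_free(bitmap):
    for i, bit in enumerate(bitmap):
        if bit == 0:
            return i                 # first free block number
    return -1                        # disk full: no free block

bitmap = [1, 1, 0, 1, 0, 0, 1]
print(first_free(bitmap))            # 2 -- block 2 is the first free block
```

Real implementations pack the bits into machine words and skip all-ones words whole, so the scan is much faster than this byte-per-block version.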

13. Directory Structures


(a) Single Level

 All files in one directory.


 Simple but not good for many files.

(b) Two Level

 Separate directory for each user.

(c) Tree Structured

 Folders within folders (most common).


 Example: C:/Users/Ritu/Documents/Notes.txt

What is file protection and file sharing? (Short idea)

 File protection = ways the OS ensures that only the right users/programs can read,
write, execute, delete a file.
 File sharing = allowing more than one user or program to use the same file at the
same time safely — either reading, or reading-and-writing, depending on rules.

Both are about safety + correctness + fairness.


2. Basic building blocks
Authentication

 First step: confirm who the user is (password, token, key).


 Without authentication, protection is meaningless.

Authorization

 Decide what an authenticated user is allowed to do (read, write, execute).


 Implemented by permission bits, ACLs, capabilities, policies.
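UNIX-style permission bits can be sketched as three rwx groups (owner, group, other) packed into an octal mode (the mode value below is illustrative):

```python
# Permission-bit check sketch: 9 bits, 3 per class, rwx = 4/2/1.
R, W, X = 4, 2, 1

def allowed(mode, who, op):
    """mode like 0o754; who in {'owner','group','other'}; op is R, W or X."""
    shift = {"owner": 6, "group": 3, "other": 0}[who]
    return bool((mode >> shift) & 0o7 & op)

print(allowed(0o754, "group", R))    # True  -- group has r-x
print(allowed(0o754, "other", W))    # False -- other has r-- only
```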

Auditing / Logging

 Record who accessed what and when (for accountability, debugging, forensics).
