
Operating System Unit-1: Introduction and Overview of OS


Syllabus of Operating System:

Syllabus 1:

1. Introduction and overview of OS (basic introduction or fundamentals)


2. Process management and process synchronization (process concept, CPU scheduling, multithreading,
process synchronization, synchronization mechanism, classical ipc problems, monitors, deadlocks,
deadlock handling strategies)
3. Memory management (abstract view of memory, loading and linking, address binding, memory
management techniques, swapping, partitioning, non-contiguous allocation, virtual memory)
4. Storage management (File system and device management) (physical structure of disk, logical
structure of disk, file system interface, file system implementation, disk scheduling)
5. Security and protection. (not so important)

UNIT 1.1 BASIC INTRODUCTION:

OPERATING SYSTEM
An Operating System (OS) is system software that manages and handles the hardware and software resources
of a computer system. It acts as an intermediary between the users of a computer and the computer
hardware, and is responsible for managing and controlling all activities and the sharing of computer
resources. An operating system is low-level software that provides all the basic functions such as processor
management, memory management, error detection, etc.

The purpose of an operating system is to provide an environment in which a user can execute programs
conveniently and efficiently.

Specifically, when we say that the OS acts as an interface between the user and the hardware, we mean that
the OS provides a means for users and applications to interact with the physical components of the
computer in a manageable and efficient way. Without an OS, we would have to write complex programs
repeatedly just to hand each job to the hardware.

 Software abstracting hardware.
 Interface between the user or application and the hardware.
 Set of utilities to simplify application development/execution.
 Control program (system software).
 Acts like a government, allocating resources among competing requests.
 Resource manager (CPU + memory + I/O + hardware + software).

Services of OS:

1. Program Execution:
a. The operating system provides a convenient environment where users can run their programs.
b. The operating system allocates memory to programs and loads them into appropriate
locations so that they can execute. Users do not have to worry about these tasks.
2. I/O Operations:
a. Executing a program usually requires I/O operations. For example, it may need
to read a file and print the output.
b. For efficiency and protection, users cannot control I/O devices directly.
c. All I/O is performed under the control of the operating system.
3. Communication:
a. The various processes executing on a system may need to communicate in order to exchange
data or information.
b. The operating system provides this communication by using a facility for message passing. In
message passing packets of information are moved between processes by the operating
system.
4. User interface, Manipulation of File System, Resource allocation, System services, etc.
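The message-passing service described above can be sketched with POSIX primitives. Python's os module exposes thin wrappers over the underlying pipe(2), fork(2), read(2), and write(2) system calls; this is a minimal POSIX-only sketch (it will not run on Windows), not how any particular OS implements message passing internally:

```python
import os

# Parent and child communicate through a pipe, a kernel-managed
# channel. The kernel moves the bytes between the two processes.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                       # child process
    os.close(r)
    os.write(w, b"hello from child")
    os._exit(0)
else:                              # parent process
    os.close(w)
    msg = os.read(r, 1024)         # blocks until the child writes
    os.waitpid(pid, 0)             # reap the child
    print(msg.decode())
```

Note that neither process touches the other's memory; all data passes through the kernel, which is exactly the protection the OS communication service provides.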

Goal of OS:

1. Convenience (user-friendly): Ensures the system is user-friendly and easy to use for all users.
2. Efficiency (effective utilization): Maximizes effective utilization of system resources to perform tasks
easily.
3. Scalability (ability to evolve with new features): Provides the ability to evolve and incorporate new
features seamlessly.
4. Robustness (strong enough to bear the errors): Ensures the system is strong enough to handle and
recover from errors.
5. Reliability and stability (tolerance level): Maintains high tolerance levels to ensure consistent and
stable operation.
6. Portability (able to work across similar architectures): Enables the system to work across
similar architectures without significant changes.
7. Resource Management (Manage system resources effectively.): Manages system resources effectively
to optimize performance and utilization.
8. Throughput: An OS should be constructed so that it can deliver maximum throughput (number of tasks
completed per unit time).
9. The primary goal of an OS always changes depending on the needs of the computing domain.

Functions of OS:

1. Process management: Manages the creation, scheduling, and termination of processes to ensure
efficient CPU utilization.
2. Memory management: Handles allocation and deallocation of memory spaces to optimize system
performance and resource use.
3. File management: Manages file operations and permissions on storage devices ensuring organized
data access and security.
4. Device management: Controls and coordinates hardware devices through drivers managing I/O
operations efficiently.
5. Networking: Facilitates communication and data exchange between devices or nodes within a
computer network by managing protocols, routing, and connectivity.
6. Security and protection: Ensures secure network communication and access control.
7. Error handling or detecting aids: Detects and responds to system errors and faults to protect system
resources and data maintaining system stability and reliability.
8. System Performance Monitoring: Monitors and optimizes system performance metrics to improve
overall efficiency and effectiveness.
9. User Interface: Provides CLI and GUI for user interaction with the system
10. Coordination b/w users and other software: Facilitates communication and resource sharing among
multiple users and applications.
11. Job Accounting: It keeps track of time and resources used by various jobs or users.
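As a small illustration of job accounting, most kernels track the CPU time each process consumes; Python's os.times() reads these per-process counters (a minimal sketch — the exact fields available vary by platform):

```python
import os

# The kernel accounts for the CPU time consumed by each process.
# os.times() exposes these counters: time spent running the
# process's own code (user) and time spent in the kernel on its
# behalf (system).
t = os.times()
print(f"user CPU time: {t.user:.2f}s, system CPU time: {t.system:.2f}s")
```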

Characteristics/Properties/Features of OS: These are the inherent qualities or attributes that define an
operating system, such as multitasking, multi-user support, virtual memory, security, responsiveness,
reliability, efficiency, scalability, compatibility with hardware and software, a graphical user interface (GUI),
device driver support, scheduling algorithms, and power management.

TYPES OF OPERATING SYSTEM:

Batch Operating system


A Batch Operating System is a type of operating system where jobs are collected, grouped, and processed in
batches without user interaction during their execution. This type of system was prevalent in the early days of
computing when interactive computing was not feasible due to limited technology and resource constraints.

1950s and 1960s: Batch operating systems were common on the early mainframe computers of this
period. These systems were designed to handle large volumes of data and perform complex calculations with
minimal human intervention. Example systems: IBM 1401, IBM System/360, etc.

Characteristics:
 Job Scheduling: Jobs are submitted to the system and placed in a queue. The system processes these
jobs one at a time or in groups.
 No User Interaction: Once jobs are submitted, there is no interaction between the user and the
program. The output is collected after the job is completed.
 Sequential Execution: Jobs are executed in the order they are received, which could sometimes lead to
long wait times for certain tasks.

How It Worked:
1. Job Collection: Users prepared jobs on punch cards or magnetic tape and submitted them to a
computer operator.
2. Job Queue: The operator would group similar jobs and place them in a queue for processing.
3. Execution: The operating system executed the jobs sequentially. Each job would be loaded into
memory, executed, and then the output would be printed or written to an output device.
4. Output Collection: After execution, the output was collected, and the results were returned to the
users.
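The four steps above can be sketched as a toy simulation (illustrative only; the job names and the run_batch helper are invented for this example — real batch systems processed punch-card jobs, not Python callables):

```python
from collections import deque

# Toy sketch of batch processing: jobs queue up and run strictly in
# arrival order, with no user interaction once submitted.
def run_batch(jobs):
    queue = deque(jobs)                  # job queue built by the operator
    outputs = []
    while queue:
        name, work = queue.popleft()     # load the next job
        outputs.append(f"{name}: {work()}")  # sequential execution
    return outputs                       # output collected at the end

results = run_batch([("job1", lambda: 2 + 2), ("job2", lambda: "done")])
print(results)                           # → ['job1: 4', 'job2: done']
```

The strict FIFO order is also the source of the limitations listed below: a long job at the head of the queue delays everything behind it.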

Benefits:
 Resource Efficiency: By processing jobs in batches, the system aimed to keep the CPU busy and
maximize resource usage.
 Reduced User Overhead: Users could submit their jobs and did not need to interact with the system
until the job was completed, simplifying their workflow.

Limitations:
 Lack of Interactivity: No real-time interaction with the system made it unsuitable for tasks requiring
immediate feedback.
 Idle Time During I/O: The CPU could be idle during input/output operations, leading to inefficiencies.
 Long Turnaround Time: Users had to wait for their jobs to be processed, which could be time-
consuming, especially if the system was handling a large volume of jobs.

Multiprogramming Operating Systems


A Multiprogramming Operating System is designed to increase CPU utilization by organizing jobs (code and
data) so that the CPU always has one to execute. By keeping multiple jobs in memory simultaneously, the
operating system ensures that the CPU can switch to another job if the current one has to wait for I/O
operations.

Why Multiprogramming Came About:

In the early days of computing, batch processing systems often left the CPU idle while waiting for slow I/O
operations to complete. This inefficiency prompted the development of multiprogramming systems to keep
the CPU busy and improve overall system performance. Example: IBM OS/360, UNIX

Multiprogramming became prominent in the 1960s with the advent of more powerful mainframe computers.
Systems like the IBM OS/360 were among the first to implement multiprogramming, significantly improving
efficiency and setting the stage for more advanced operating systems.

How Multiprogramming Works:


1. Job Pool: Multiple jobs are loaded into the system's memory, forming a job pool.
2. Job Scheduling: The operating system schedules jobs based on certain criteria (e.g., priority, resource
requirements).
3. Context Switching: When one job needs to wait (e.g., for I/O operations), the CPU switches to another
job, thereby keeping the CPU active.
4. Concurrency: While it appears that jobs are running simultaneously, the CPU rapidly switches between
them, providing a concurrent processing experience.
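The switching behaviour described above can be sketched with a toy dispatcher (a hypothetical simulation, not real OS code): each job is a generator that yields "IO" when it must wait, and instead of idling, the dispatcher hands the single CPU to the next job in the pool.

```python
# Toy simulation of multiprogramming: each job alternates CPU bursts
# with I/O waits; the dispatcher keeps the CPU busy by switching to
# another job whenever the current one blocks.
def job(name, bursts):
    for n in range(bursts):
        yield f"{name} runs burst {n}"   # CPU burst
        yield "IO"                       # job now blocks on I/O

def dispatch(pool):
    trace = []
    while pool:
        g = pool.pop(0)                  # pick the next job
        try:
            step = next(g)
            if step != "IO":
                trace.append(step)       # CPU did useful work
            pool.append(g)               # context switch: job re-queued
        except StopIteration:
            pass                         # job finished, drop it
    return trace

trace = dispatch([job("A", 2), job("B", 2)])
print(trace)
```

Note how jobs A and B interleave in the trace: while one is "waiting" for I/O, the CPU runs the other, which is the whole point of multiprogramming.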

Benefits of Multiprogramming Operating Systems:


1. Increased CPU Utilization: By keeping multiple jobs in memory, the CPU can switch to another job
when the current one is waiting for I/O, reducing idle time.
2. Improved Throughput: More jobs are completed in a given period, as the system can manage multiple
jobs efficiently.
3. Better Resource Management: Multiprogramming optimizes the use of memory, CPU, and I/O devices,
ensuring balanced resource utilization.
4. Reduced Turnaround Time: Jobs can be processed faster since the CPU does not remain idle, leading
to quicker job completion times.

Limitations of Multiprogramming Operating Systems:


1. Complexity in Scheduling: Efficiently scheduling multiple jobs requires sophisticated algorithms,
increasing the complexity of the operating system.
2. Memory Management: Managing multiple jobs in memory can be challenging, especially with limited
memory resources, requiring advanced memory management techniques.
3. Context Switching Overhead: Frequent context switching between jobs can introduce overhead,
slightly reducing overall system efficiency.
4. Security and Protection: Ensuring that jobs do not interfere with each other requires robust protection
mechanisms, adding to the system's complexity.

Multitasking Operating Systems

It is often also called a time-sharing OS, as the two share the same fundamental concept, though with a
small difference: time-sharing emphasizes serving multiple users, while multitasking emphasizes running
multiple tasks.

A Multitasking Operating System is designed to handle multiple tasks (processes) simultaneously by rapidly
switching between them, giving the illusion that they are running concurrently. This capability is essential for
modern computing environments, enabling users to run multiple applications at the same time.

Multitasking OS refers to systems that can execute multiple tasks or processes at the same time, regardless of
whether the system is single-user or multi-user. The focus is on the ability to run multiple applications
simultaneously. Example system: Windows, macOS and Linux.

Multitasking operating systems became prominent with the rise of personal computing in the 1980s and
1990s. Systems like Windows and macOS brought multitasking capabilities to the masses, while UNIX-based
systems like Linux offered robust multitasking for both personal and enterprise environments.

Why Multitasking Came About:

As computers evolved and user expectations grew, the need for systems that could handle multiple operations
simultaneously became apparent. Users wanted to run several applications concurrently, such as editing
documents, browsing the web, and listening to music, all while the system handled background tasks like
printing or managing network connections. This led to the development of multitasking operating systems.

How Multitasking Works:


1. Time Slicing: The CPU's time is divided into small slices, and each task gets a slice in turn. This rapid
switching happens so quickly that it appears tasks are running simultaneously.
2. Process Scheduling: The operating system uses scheduling algorithms to decide which task to execute
next, based on priority, fairness, and resource needs.
3. Context Switching: The CPU switches from one task to another, saving the state of the current task and
loading the state of the next one, allowing each task to resume where it left off.
4. Concurrency: Although tasks do not run exactly at the same time (unless on a multicore system), the
quick switching creates a concurrent execution environment.
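Time slicing can be sketched as a toy round-robin loop (the task names and quantum are invented for illustration; real schedulers also weigh priority and fairness, as noted above):

```python
from collections import deque

# Toy round-robin time slicing: each task receives a fixed quantum
# of CPU time per turn; unfinished tasks go to the back of the ready
# queue, creating the illusion that all tasks run simultaneously.
def round_robin(tasks, quantum):
    queue = deque(tasks)                  # (name, remaining_time) pairs
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                # context switch to this task
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # not done: re-queue
    return order

order = round_robin([("editor", 3), ("browser", 2), ("music", 1)], quantum=1)
print(order)
```

The resulting dispatch order interleaves all three tasks, which is exactly the "concurrent experience" the prose describes.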
Benefits of Multitasking Operating Systems:
1. Increased Productivity: Users can run multiple applications simultaneously, improving efficiency and
workflow.
2. Responsive Systems: By handling multiple tasks concurrently, systems remain responsive to user
inputs even when performing intensive background operations.
3. Efficient Resource Utilization: Multitasking optimizes the use of CPU, memory, and I/O devices by
keeping them busy with multiple tasks, reducing idle time.
4. Improved User Experience: Users experience seamless interaction with multiple applications, such as
copying data between them, in a way that feels natural and intuitive.

Limitations of Multitasking Operating Systems:


1. Complexity: Implementing effective multitasking requires complex scheduling algorithms and efficient
context switching mechanisms, increasing the system's complexity.
2. Overhead: Frequent context switches can introduce overhead, slightly reducing system performance
due to the time spent saving and loading task states.
3. Resource Contention: Multiple tasks competing for the same resources (e.g., CPU, memory) can lead
to contention and potential performance bottlenecks.
4. Security and Isolation: Ensuring that tasks do not interfere with each other requires robust security
mechanisms to protect memory and resources, adding to system complexity.

Similar Operating Systems:


A Time-Sharing Operating System is designed to allow multiple users to interact with a computer
simultaneously by rapidly switching between them, giving each user a small portion of CPU time. This creates
the illusion that each user has a dedicated machine, even though the computer is shared among many.

Time-sharing OS specifically refers to systems designed to support multiple users interacting with the
computer simultaneously. The primary goal is to minimize response time and provide a seamless user
experience for interactive tasks.

Multiuser Operating System: A Multiuser Operating System allows multiple users to access and use the
computer resources simultaneously. Each user can run their own set of programs and interact with the system
as if they were the only user. Relation to Time-Sharing: Multiuser operating systems are often synonymous
with time-sharing systems. The key idea is that the operating system uses time-sharing techniques to allocate
CPU time and other resources to multiple users, giving each the illusion of having a dedicated machine.

Single-User Operating System: An operating system designed to manage the computer resources for one user
at a time, typically found in personal computers, providing a responsive and efficient environment for
individual use. Relation to Multitasking: Single-user operating systems often incorporate multitasking
capabilities. This means that while only one user interacts with the system, multiple applications or processes
can run simultaneously.

Multiprocessing Operating System: A Multiprocessing Operating System uses more than one processor (CPU)
to execute multiple processes simultaneously. It supports the concurrent execution of processes, enhancing
system performance and reliability. Relation to Multitasking: Multiprocessing is closely related to multitasking.
While multitasking refers to the ability to run multiple tasks or processes concurrently on a single CPU,
multiprocessing extends this by using multiple CPUs to run multiple processes simultaneously, providing true
parallelism.

Real-Time Operating Systems (RTOS):


A Real-Time Operating System (RTOS) is designed to process data as it comes in, typically within a strict time
constraint. This type of operating system is used in environments where timing and predictable responses are
crucial.

Key Characteristics:
1. Deterministic Timing: Guarantees that certain tasks will be completed within a specified time frame.
2. Low Latency: Responds quickly to external events.
3. Reliability: Consistently performs as expected under all conditions.
4. Priority Scheduling: Uses priority levels to ensure that critical tasks are executed first.

Types of Real-Time Operating Systems:

1. Hard Real-Time Systems:


o Definition: Systems where missing a deadline can lead to catastrophic consequences.
o Example Applications: Airbag systems in cars, pacemakers, industrial control systems.
o Characteristics: Strict timing constraints; failure to meet deadlines is unacceptable.

2. Soft Real-Time Systems:


o Definition: Systems where missing a deadline is undesirable but not catastrophic.
o Example Applications: Video streaming, online transaction systems.
o Characteristics: Timing constraints are less strict; occasional deadline misses are tolerable.

Why Real-Time Operating Systems Are Important:


 Critical Applications: Used in critical applications where timely and predictable responses are
essential.
 Safety and Reliability: Ensure safety and reliability in systems that interact with the physical world.
 Efficiency: Optimize system resources to meet stringent timing requirements.

Benefits of RTOS:
1. Predictability: Ensures tasks are completed within a predictable timeframe.
2. High Reliability: Designed for applications that require consistent performance.
3. Efficient Resource Management: Optimizes the use of system resources for time-critical tasks.
4. Priority-Based Scheduling: Prioritizes critical tasks over less important ones.

Limitations of RTOS:
1. Complexity: More complex to design and implement compared to general-purpose operating systems.
2. Resource Constraints: Often need to operate within tight resource constraints.
3. Cost: Typically more expensive to develop and maintain due to their specialized nature.

Examples of Real-Time Operating Systems:


1. VxWorks: Widely used in aerospace, automotive, and industrial applications.
2. RTLinux: Real-time extension of the Linux kernel, used in various embedded systems.
3. FreeRTOS: Open-source RTOS commonly used in microcontroller-based applications.
4. QNX: Used in automotive infotainment systems and industrial automation.

How RTOS Works:


1. Task Scheduling: Tasks are scheduled based on priority and deadlines.
2. Interrupt Handling: Efficient handling of interrupts to ensure timely responses.
3. Resource Allocation: Manages system resources to ensure high-priority tasks get the necessary
resources.
4. Synchronization: Mechanisms to synchronize tasks and manage dependencies.
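Priority-based dispatch (step 1 above) can be sketched with a min-heap ready queue. This is a toy illustration with invented task names; real RTOS schedulers are preemptive and handle deadlines, not just static priorities:

```python
import heapq

# Toy priority dispatcher in the spirit of an RTOS ready queue:
# a min-heap keyed on priority, so the most urgent task (lowest
# number) is always dispatched first, regardless of arrival order.
def schedule(tasks):
    heap = [(priority, name) for name, priority in tasks]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)     # always the highest priority
        order.append(name)
    return order

order = schedule([("logging", 5), ("airbag", 0), ("sensor_poll", 2)])
print(order)                              # airbag runs before everything
```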

Other Operating Systems:


Distributed Operating System: A Distributed Operating System manages a group of independent computers
and makes them appear to the users as a single coherent system. It coordinates the activities of multiple
machines to achieve a common goal.

Clustered Operating System: A Clustered Operating System involves multiple computers connected together
to work as a single system. These systems are designed to improve performance, scalability, and availability.

Embedded Operating System: An Embedded Operating System is designed to operate on small machines like
microcontrollers, typically with limited resources. These systems are optimized for specific tasks and are an
integral part of the hardware they run on.

Summary:
 Distributed OS: Manages a network of independent computers to work as a unified system, focusing
on resource sharing and fault tolerance.
 Clustered OS: Connects multiple computers to work as a single system, emphasizing scalability,
performance, and high availability.
 Embedded OS: Designed for specific hardware with limited resources, optimized for specific tasks,
often with real-time capabilities.

Each type of operating system is tailored to different environments and requirements, providing specialized
solutions for various computing needs.

Summary:

 General-Purpose OS: Desktop OS, Server OS


 Specialized OS: Mobile OS, Embedded OS, Wearable OS, Handheld OS
 Batch and Multiprogramming OS: Batch OS, Multiprogramming OS
 Real-Time OS: Hard RTOS, Soft RTOS
 Multi-User and Multi-Tasking OS: Time-Sharing OS, Multi-User OS, Multitasking OS
 Multi-Processor OS: Designed for systems with multiple CPUs
 Network and Distributed OS: Network OS, Distributed OS
 Clustered OS: For high-performance computing and servers
 Virtualization OS: Hypervisors for managing virtual machines
UNIT 1.2 OPERATING SYSTEM STRUCTURE:

Components of an Operating Systems:


Shell: A shell is a user interface that allows users to interact with the system's core functionalities. It can be
command-line based, enabling users to execute text commands and scripts for file and process management,
or graphical, providing a visual interface with elements like windows, icons, and menus. Essentially, the shell
acts as an intermediary between the user and the operating system, facilitating communication and task
execution. It is the outermost component of OS.

Kernel: The kernel is the core component of an operating system that manages system resources and
facilitates communication between hardware and software. It operates at a fundamental level to control
processes, memory, device drivers, and system calls, ensuring that different software applications can function
efficiently and securely. The kernel acts as the primary interface between a computer's hardware and its
applications, maintaining overall system stability and performance. In other words, the kernel is the primary
interface between the operating system and the hardware.

Functions of Kernel: The kernel handles system calls, I/O management, and the management of applications,
memory, and other resources. It manages core system operations, including process scheduling and execution,
memory allocation and management, hardware communication through device drivers, and the handling of
system calls and security. It ensures efficient resource use and system stability by acting as the bridge
between software applications and hardware components.

The kernel is loaded into memory first when the operating system starts and remains in memory until the
operating system is shut down. It is responsible for various tasks such as disk management, task management,
and memory management.

Kernel is the core part of an operating system that manages system resources. It also acts as a bridge between
the application and hardware of the computer. It is one of the first programs loaded on start-up (after the
Bootloader).

Booting:
What is Booting?

"Booting is the process of starting up a computer and loading the operating system into the system's main
memory or RAM. It involves initializing hardware components, running a series of diagnostic tests, and
launching the operating system so that it becomes ready for use."

What is Cold Booting?

"Cold booting, also known as a hard boot, refers to the process of starting a computer from a completely
powered-off state. This involves turning on the power switch, which initiates a full hardware initialization,
running the POST (Power-On Self Test), and loading the operating system from scratch. Cold booting is typically
done when the computer has been shut down completely."

What is Warm Booting?


"Warm booting, also known as a soft boot, involves restarting a computer that is already powered on. This can
be done by using the operating system's restart function or pressing a specific key combination (such as Ctrl +
Alt + Delete). Warm booting reinitializes the system without cutting off power completely, which typically
results in a faster startup since some components are already active and don't require full reinitialization."

The booting process is the sequence of steps that a computer system goes through when it is powered on or
restarted. It involves several key stages to get the system up and running, starting from the initial hardware
checks to loading the operating system (OS) into memory. Here is an organized and easy-to-understand
breakdown of the booting process:

1. Power Supply Activation: Electricity is sent to components.


2. The Basic Input/Output System (BIOS) or Unified Extensible Firmware Interface (UEFI) initializes
hardware components like the CPU, memory, and storage devices. This stage ensures that all essential
hardware components are properly detected, configured, and made ready for the subsequent stages
of the booting process.
3. POST (Power-On Self-Test) is a diagnostic process where hardware components are tested for proper
functionality to ensure they meet the minimum requirements for booting. It is a critical stage of the
boot process that ensures the integrity and functionality of the computer's hardware components
before proceeding with the booting of the operating system. It helps detect and diagnose hardware
problems early, ensuring a stable and reliable computing environment.
4. Bootstrap Loader: The bootstrap loader is a small program stored in the BIOS/UEFI firmware that
locates and loads the initial bootloader from the storage device.
5. Bootloader Execution: The bootloader, such as GRUB or NTLDR, loads the operating system kernel into
memory and passes control to it.
6. Kernel Initialization: The operating system kernel initializes essential components like the process
scheduler, memory manager, and device drivers.
7. System Initialization: Once the kernel is initialized, the system initialization stage starts system
processes and services. It involves configuring network settings, mounting filesystems, and
launching essential system daemons and services. The primary function of the system initialization
stage is to prepare the operating system environment for user interaction and application execution.
8. User Space Initialization: After system initialization is complete, user-space initialization begins:
user-space processes and services, such as the init process in Unix-based systems, are started.
9. User Login or Desktop Environment Start: Finally, the user is prompted to log in or, in graphical
environments, the desktop environment is started, providing access to user applications and services.

BIOS: A firmware program that performs hardware initialization during the booting process and provides
runtime services for operating systems and programs.

Bootloader or bootstrap: A small program that loads the OS kernel into memory from the storage device. It is
a small program that is responsible for loading the operating system (OS) into memory when a computer is
powered on or restarted. It is a critical component of the booting process, acting as the initial code that runs to
set up the system and load the full OS.

Kernel: The core part of the operating system responsible for managing system resources and hardware-
software communication.

POST: A diagnostic testing sequence run by the BIOS to ensure hardware components are working correctly.
UEFI: It stands for Unified Extensible Firmware Interface. UEFI is a modern replacement for BIOS, offering
enhanced features such as graphical user interfaces, support for larger boot drives, faster boot times,
improved security with Secure Boot, and greater extensibility. These improvements make UEFI a more robust
and future-proof solution for managing system firmware and boot processes in modern computers.

Dual Mode Operations:


Kernel mode and user mode are modes of operation for the CPU (Central Processing Unit) that determine the
level of access to system resources and the types of operations that can be performed. These modes are
fundamental or essential to the design of modern operating systems, providing a mechanism (a way) for
protecting the system from mistakes or malicious code.

Kernel Mode

1. Definition: Kernel mode, also known as supervisor mode or system mode, is a privileged mode of
operation for the CPU. It allows execution of special instructions (privileged instructions)
necessary for the operating system's tasks, such as managing memory protection.

2. Access: In kernel mode, the CPU has unrestricted access to all system resources, including hardware,
memory, and peripherals.

3. Capabilities:

o Can execute any CPU instruction and access any memory address.

o Can perform tasks such as managing hardware devices, executing system calls, and
managing memory.

User Mode

4. Definition: User mode is a restricted mode of operation for the CPU, designed for running user
applications and processes.

5. Access: In user mode, the CPU has limited access to system resources. Applications can only access
their own memory space and must use system calls to request services from the kernel.

6. Capabilities:

o Can execute only a subset of CPU instructions.

o Cannot directly access hardware or critical system resources; must go through the kernel.
Mode Transitions in Kernel and User Mode:
Mode transitions refer to the process of switching the CPU between user mode and kernel mode. These
transitions are crucial for ensuring system security and stability by controlling how and when user
applications can access critical system resources.

How Mode Transitions Work:

1. User Mode to Kernel Mode (System Call):

o Trigger: This transition occurs when a user application needs to request a service from the
operating system that requires higher privileges, such as accessing hardware or performing
file operations.

2. Kernel Mode to User Mode (Return from System Call):

o Trigger: This transition occurs after the kernel has completed a requested operation and
needs to return control to the user application.

Examples of Mode Transitions:

 System Calls: When a user application wants to read a file, it uses a system call to request this
operation from the kernel. The CPU switches to kernel mode to handle the request and then returns
to user mode.

 Interrupts or trap: Hardware devices, such as a keyboard or a network card, may generate interrupts
that require the CPU's attention. When an interrupt occurs, the CPU may switch to kernel mode to
handle the interrupt through an interrupt handler. Interrupts are signals that inform the CPU that an
event needs immediate attention. These can be generated by hardware devices or software
instructions.

 Exceptions and Illegal Operations: If a user program attempts to execute forbidden instructions or If
a user application encounters an error (like dividing by zero), the CPU may switch to kernel mode to
handle the exception and ensure the system remains stable.

Importance of Mode Transitions:

1. Security: Mode transitions help protect the system by ensuring that user applications cannot directly
access critical system resources or execute privileged operations.

2. Stability: By controlling access to system resources, mode transitions help prevent user applications
from causing system crashes or corruption.

3. Resource Management: The operating system can manage hardware and other resources more
effectively by mediating access through controlled transitions between user mode and kernel mode.

System Call Handling Process:


1. User Process Executing:

o Description: The CPU is in user mode, running a user application.


o Mode: User mode.
o Action: The application performs its tasks until it needs a service from the operating system.

2. Calls System Call:

o Description: The user process needs to request a service from the operating system (e.g., file
I/O, memory allocation).
o Action: The application invokes a system call, which is a controlled way to request OS services.
3. Trap to Kernel Mode (mode bit set to 0):

o Description: The system call generates a software interrupt (trap).


o Action: This interrupt causes the CPU to switch from user mode to kernel mode.
o Mode Bit: Set to 0 to indicate kernel mode.
o Control Transfer: The CPU transfers control to the appropriate system call handler in the
kernel.
4. Execute System Call:

o Description: The kernel executes the system call, performing the requested service.
o Action: The kernel performs tasks that may involve accessing hardware, managing memory, or
other privileged operations restricted to kernel mode.
5. Return from System Call (mode bit set to 1):

o Description: After completing the system call, the kernel prepares to return control to the user
process.
o Action: The CPU switches back to user mode.
o Mode Bit: Set to 1 to indicate user mode.
6. User Process Continues Execution:

o Description: The user process resumes execution in user mode with the results of the system
call.
o Action: The application continues its operations, utilizing the results provided by the system
call.

Privileged and Non-Privileged Instructions in Operating Systems:

Privileged instructions (e.g., I/O control, setting the timer, changing the mode bit) can execute only in
kernel mode; non-privileged instructions (e.g., arithmetic and logic operations) can execute in either mode.

Key Differences:
 Access: Privileged instructions have direct access to system resources, non-privileged do not.
 Execution Mode: Privileged instructions run in kernel mode, non-privileged in user mode.
 Permissions: Privileged instructions require special permissions, non-privileged do not.
 Purpose: Privileged for low-level system operations, non-privileged for general computing.
 Risks: Privileged instructions pose higher risks if misused.

System calls:
System calls are the interface between user applications and the operating system kernel, allowing user-level
processes to request services (such as file I/O or process creation) that they cannot perform directly in user mode.
Types of System Calls:
1. Process Control: Manage processes, including creation, termination, and coordination.
o Examples: fork() (creates a new process), exit() (terminates a process), wait() (waits for a
child process to finish execution), etc.
2. File Management: Handle file operations such as creating, deleting, reading, and writing files.
o Examples: open() (opens a file), close() (closes a file), read() (reads data from a file), write()
(writes data to a file), unlink() (deletes a file), etc.
3. Device Management: Manage hardware devices by requesting and releasing device access.
o Examples: ioctl() (performs device-specific operations), read() (reads from a device), write()
(writes to a device), etc.
4. Information Maintenance: Retrieve and set system information.
o Examples: getpid() (gets the process ID), alarm() (sets a timer for the process), sleep()
(suspends process execution for a specified time), etc.
5. Communication: Facilitate inter-process communication (IPC).
o Examples: pipe() (creates a unidirectional data channel), shmget() (allocates a shared memory
segment), mmap() (maps files or devices into memory), msgsnd() (sends a message to a
message queue), msgrcv() (receives a message from a message queue), etc.

System Program:
Definition: System programs are software that provide a convenient environment for program
development and execution. They perform a wide range of functions that facilitate the operation of
the computer system, making it easier for users to interact with the hardware and the operating
system.

Examples of System Programs:

1. File Management: Tools for creating, deleting, copying, and managing files and directories
(e.g., cp, rm, mv commands in Unix/Linux).
2. Text Editors: Programs for creating and editing text files (e.g., vi, nano in Unix/Linux, Notepad
in Windows).
3. Compilers and Interpreters: Software that translates and executes programming languages
(e.g., GCC for C/C++ in Unix/Linux).
4. Communication Programs: Tools for facilitating communication between users and systems
(e.g., ssh, ftp in Unix/Linux).
5. Utility Programs: Software for system maintenance and performance monitoring (e.g., top, ps
in Unix/Linux).
6. Linkers: Programs that combine multiple object files produced by a compiler into a single
executable file. They operate during the build process. Example: ld in Unix/Linux.
7. Loaders: Programs that load executables into memory and prepare them for execution:
relocating addresses, initializing the runtime environment, and transferring control to the
program's entry point. They operate at run time, as part of the operating system's
program-startup machinery (e.g., the loader in Unix/Linux).

How System Programs Differ from System Calls:


1. Level of Abstraction:
o System Calls: Low-level functions that provide an interface between user applications
and the operating system kernel. They are the basic mechanism by which a program
requests a service from the kernel.
o System Programs: High-level programs that provide user-friendly interfaces and
functionalities, often utilizing multiple system calls to perform their operations.
2. Usage:
o System Calls: Directly invoked by user applications or system programs to request
specific services from the operating system, such as file operations, process control, or
communication.
o System Programs: Used by users and administrators to perform various tasks like
editing files, managing files, or monitoring system performance. These programs
internally use system calls to interact with the operating system.
3. Examples:
o System Calls: open(), read(), write(), fork(), exec(), wait().
o System Programs: ls (list directory contents), cat (concatenate and display files), gcc
(GNU Compiler Collection), bash (Unix shell).

Interrupts:
Definition: Interrupts are signals generated by hardware devices or software events that divert the CPU from
its normal execution flow so that an urgent event can be handled.

Functionality: They notify the operating system about important events that require immediate attention,
such as I/O completion, hardware errors, or time-sensitive tasks.

Interrupts are critical mechanisms in operating systems that allow the CPU to respond to urgent events. They
can be classified into four main categories based on their sources and purposes:

1. Hardware Interrupts: Hardware interrupts are generated by hardware devices to signal the CPU that they
need attention. These interrupts are asynchronous, meaning they can occur at any time, independent of the
current CPU operations.

 Examples:
 Timer Interrupts: Generated by a system timer to signal periodic events, such as context
switching in multitasking environments.
 I/O Device Interrupts: Generated by peripherals like keyboards, mice, network cards, and disk
drives to signal the completion of an I/O operation or to indicate an error.
 Power Interrupts: Generated when there are changes in power status, such as battery low
signals or power failure warnings.
2. Software Interrupts: Software interrupts are generated by programs when they require OS services or when
specific conditions within the software are met. These interrupts are synchronous, meaning they occur at
specific points during program execution.

 Examples:
 System Calls: Invoked by user applications to request services from the OS, such as file
operations or process management.
 Exceptions: Triggered by the CPU when it encounters an error during instruction execution,
such as division by zero, invalid memory access, or illegal instructions.
3. Internal Interrupts: Internal interrupts, also known as exceptions or traps, are generated by the CPU itself
when it detects an error or a specific condition during instruction execution. These interrupts are also
synchronous.

 Examples:
 Divide by Zero: Triggered when an arithmetic operation attempts to divide a number by zero.
 Invalid Opcode: Generated when the CPU encounters an invalid or undefined instruction.
 Page Fault: Occurs when a program accesses a section of memory that is not currently
mapped to physical memory.
 Overflow: Triggered when an arithmetic operation produces a result that exceeds the
representational capacity of the data type.
4. External Interrupts: External interrupts are generated by external hardware devices or by other processors
in a multiprocessor system. These interrupts are generally asynchronous.

Examples:
 Interrupt Requests (IRQs): Common in single-processor systems where external devices signal
the need for CPU attention.
 Inter-Processor Interrupts (IPIs): Used in multiprocessor systems where one processor sends
an interrupt to another processor to perform specific tasks or to manage synchronization.
Summary

The four classes of interrupts are distinguished based on their origin and nature of occurrence:

1. Hardware Interrupts: Generated by hardware devices; examples include timer interrupts, I/O device
interrupts, and power interrupts.

2. Software Interrupts: Generated by software; examples include system calls and exceptions.

3. Internal Interrupts: Generated by the CPU due to execution errors; examples include divide by zero,
invalid opcode, page faults, and overflow.

4. External Interrupts: Generated by external sources or other processors; examples include IRQs and
IPIs.

Interrupt Mechanism in Operating Systems


The interrupt mechanism is a crucial feature of modern operating systems and computer architecture. It allows
the CPU to respond to urgent events and execute tasks efficiently. When an interrupt occurs, the CPU
temporarily halts its current execution to address the interrupt, then resumes normal processing.
