Overview of Operating System


UNIT- 1

Fundamentals of Operating System

Introduction to Operating System: An operating system (OS) is the most essential software that runs on a computer. It manages hardware and software resources and provides services for computer programs. Every general-purpose computer must have an OS to run other programs and applications.

• Examples of Operating Systems: Windows, macOS, Linux, Android, iOS.

Functions of an Operating System:

1. Process Management:

o Handles the creation, scheduling, and termination of processes.
o Ensures that processes and applications run smoothly without interfering with each other.
o Provides mechanisms for process synchronization and communication.

2. Memory Management:

o Allocates memory to various programs as needed and manages the sharing of memory among applications.
o Handles the memory hierarchy, including RAM and cache.
o Provides virtual memory to extend the apparent size of physical memory.

3. File System Management:

o Manages files on different storage devices.
o Provides a way to create, delete, read, write, and organize files.
o Implements directories and file systems for data organization.

4. Device Management:

o Manages device communication via their respective drivers.
o Controls and monitors hardware components like hard drives, printers, and display devices.
o Manages I/O operations and ensures efficient data transfer.

5. Security and Access Control:

o Protects data and system resources from unauthorized access.
o Implements user authentication and authorization.
o Provides mechanisms to enforce security policies.

6. User Interface:

o Provides an interface for user interaction with the system.
o Can be command-line based (CLI) or graphical (GUI).

Operating System as a Resource Manager: An OS manages hardware resources (CPU, memory, I/O devices) and software resources (files, processes). It allocates these resources efficiently to ensure maximum performance and fair distribution among users and applications.

Structure of Operating System:

1. Kernel:

o The core component of an OS.
o Manages system resources and communication between hardware and software.
o Handles low-level tasks like process management, memory management, and device management.
o Examples: monolithic kernels (Linux), microkernels (MINIX).

2. Shell:

o An interface that allows users to interact with the kernel.
o Can be command-line based (CLI) like Bash, or graphical (GUI) like GNOME or Windows Explorer.
o Translates user commands into actions performed by the OS.

Views of Operating System:

1. User View:

o Provides an interface for user interaction with the computer.
o Focuses on ease of use and convenience.
o Examples: desktop environments, application interfaces.

2. System View:

o Manages hardware resources and ensures efficient and fair resource allocation.
o Focuses on resource management, performance, and reliability.

Evolution of Operating Systems:

1. Batch Processing Systems:

o Early systems where jobs were processed in batches without user interaction.
o Example: IBM OS/360.

2. Time-Sharing Systems:

o Multiple users can interact with the system simultaneously.
o Example: UNIX.

3. Personal Computing:

o Introduction of user-friendly interfaces for personal computers.
o Examples: Microsoft Windows, macOS.

4. Distributed Systems:

o Systems where resources and processing are distributed across multiple machines.
o Example: Google File System (GFS).

5. Real-Time Systems:

o Systems that require immediate processing and response.
o Examples: industrial control systems, medical devices.

Types of Operating Systems:

1. Batch Operating Systems:

o Execute batches of jobs without user interaction.
o Suitable for large jobs with similar needs.

2. Time-Sharing Systems:

o Multiple users share system resources simultaneously.
o Provide fast response time and efficient resource utilization.

3. Distributed Operating Systems:

o Manage a group of distinct computers and make them appear to be a single computer.
o Enhance performance, reliability, and scalability.

4. Network Operating Systems:

o Provide services to computers connected to a network.
o Examples: Novell NetWare, Windows Server.

5. Real-Time Operating Systems (RTOS):

o Provide immediate processing and response for time-sensitive applications.
o Examples: VxWorks, RTLinux.

Process & Thread Management [CO2]

Program vs. Process:

• Program: A static set of instructions stored on disk (an executable file). It is passive and performs no actions until executed.
• Process: A program in execution, including the program counter, registers, variables, and memory space. It is dynamic and performs the actual tasks.
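
To make the distinction concrete, here is a minimal POSIX sketch (assuming a Unix-like system): the parent process creates a child, and the child turns the ls program on disk into a running process.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();                          /* create a new process */
        if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);  /* child: load the "ls" program */
            perror("execlp");                        /* reached only if exec fails */
            return 1;
        } else if (pid > 0) {
            wait(NULL);                              /* parent: wait for the child to terminate */
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }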

Process Control Block (PCB): A data structure in the OS that contains information
about a specific process, such as:
1. Process State: Current state (new, ready, running, waiting, terminated).
2. Process ID: Unique identifier for the process.
3. Program Counter: Address of the next instruction to be executed.
4. CPU Registers: Contents of all process-specific registers.
5. Memory Management Information: Base and limit registers, page tables.
6. I/O Status Information: List of I/O devices allocated.
7. Accounting Information: CPU used, clock time elapsed, time limits.
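
As an illustration, a PCB might be declared in C as sketched below; the field names and sizes are invented for this example, and a real kernel's equivalent (e.g., Linux's task_struct) holds far more state.

    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

    typedef struct pcb {
        int            pid;              /* 2. unique process ID */
        proc_state     state;            /* 1. current process state */
        unsigned long  program_counter;  /* 3. address of the next instruction */
        unsigned long  registers[16];    /* 4. saved CPU registers */
        unsigned long  base, limit;      /* 5. memory-management information */
        int            open_files[16];   /* 6. I/O status information */
        unsigned long  cpu_time_used;    /* 7. accounting information */
        struct pcb    *next;             /* link for a scheduling queue */
    } pcb;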

State Transition Diagram: Represents the various states a process can be in and the
transitions between these states:

1. New: Process is being created.
2. Ready: Process is waiting to be assigned to a CPU.
3. Running: Process is executing on the CPU.
4. Waiting: Process is waiting for some event to occur (e.g., I/O completion).
5. Terminated: Process has finished execution.

Scheduling Queues: Processes are placed in different queues based on their state:

1. Job Queue: Contains all processes in the system.
2. Ready Queue: Contains processes that are ready to execute.
3. Device Queues: Contain processes waiting for a particular I/O device.

Types of Schedulers:

1. Long-Term Scheduler (Job Scheduler):

o Decides which processes should be brought into the ready queue.
o Controls the degree of multiprogramming (the number of processes in memory).

2. Short-Term Scheduler (CPU Scheduler):

o Decides which process should be executed next.
o Executes frequently and makes quick decisions.

3. Medium-Term Scheduler:

o Swaps processes in and out of memory to manage the degree of multiprogramming.
o Used for swapping and managing suspended processes.

Concept of Thread: A thread is the smallest unit of execution that an OS can schedule. Threads within the same process share the same address space and resources but execute independently.

Benefits of Threads:

1. Responsiveness: Allows a program to continue running even if part of it is blocked.
2. Resource Sharing: Threads within a process share resources like memory and
files.
3. Economy: Creating and managing threads is more efficient than processes.
4. Scalability: Multithreading allows efficient use of multiprocessor systems.
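
A minimal POSIX-threads sketch (compile with -pthread on a Unix-like system; the buffer and function names are ours) showing two threads of one process building a message in a single shared buffer, something separate processes could not do without explicit shared memory:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    char buffer[16];  /* shared by every thread in the process */

    void *fill_front(void *arg) { memcpy(buffer,     "hello, ", 7); return NULL; }
    void *fill_back(void *arg)  { memcpy(buffer + 7, "world\n",  7); return NULL; }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, fill_front, NULL);
        pthread_create(&t2, NULL, fill_back,  NULL);
        pthread_join(t1, NULL);   /* wait for both threads to finish */
        pthread_join(t2, NULL);
        fputs(buffer, stdout);    /* prints "hello, world" built jointly */
        return 0;
    }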

Types of Threads:

1. User Threads: Managed by user-level libraries rather than the OS.

o Advantages: Fast and efficient to create and switch, since no kernel involvement is needed.
o Disadvantages: Lack of kernel support can lead to poor performance on multiprocessor systems; a blocking system call by one thread can block the entire process.

2. Kernel Threads: Managed by the OS kernel.

o Advantages: Better performance on multiprocessor systems, since the kernel can schedule threads on different processors.
o Disadvantages: Slower to create and manage, and more resource-intensive.

Process Synchronization: Ensures that multiple processes or threads can execute concurrently without conflicting:

1. Critical Section: Part of the program where shared resources are accessed.
2. Mutual Exclusion: Ensures that only one process accesses the critical section
at a time.
3. Synchronization Mechanisms:
o Semaphores: Integer variable used for signaling between processes.
o Mutexes: Locks that provide mutual exclusion.
o Monitors: High-level synchronization constructs that provide a
mechanism for threads to safely access shared resources.
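
As a concrete sketch (POSIX threads, compile with -pthread; the shared counter and iteration counts are assumptions), a mutex enforcing mutual exclusion over a critical section:

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;  /* shared resource */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* enter the critical section */
            counter++;                    /* only one thread runs this at a time */
            pthread_mutex_unlock(&lock);  /* leave the critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* 200000 with the lock; unpredictable without it */
        return 0;
    }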

CPU Scheduling [CO3]

Need for CPU Scheduling: To maximize CPU utilization and system throughput, minimize turnaround time, waiting time, and response time, and ensure fairness among processes.

CPU I/O Burst Cycle: Processes alternate between CPU bursts (processing) and I/O
bursts (waiting for I/O operations). CPU scheduling decisions are made based on
these cycles.

Pre-emptive vs. Non-pre-emptive Scheduling:

1. Pre-emptive Scheduling: The OS can interrupt a running process to assign the CPU to another process. Useful for ensuring responsiveness and fairness.
2. Non-pre-emptive Scheduling: Once a process starts executing, it keeps the CPU until it terminates or blocks (e.g., for I/O) before the CPU is assigned to another process. Simpler, but can lead to poor performance in a multitasking environment.

Scheduling Criteria:

1. CPU Utilization: Keep the CPU as busy as possible.
2. Throughput: Number of processes completed per unit time.
3. Turnaround Time: Time taken to execute a process from submission to
completion.
4. Waiting Time: Time a process spends in the ready queue.
5. Response Time: Time from submission of a request until the first response is
produced.
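
For example, a process that arrives at t = 0, first gets the CPU at t = 4, and finishes its single 6 ms burst at t = 10 has response time 4 ms, waiting time 4 ms, and turnaround time 4 + 6 = 10 ms.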

Scheduling Algorithms:

1. First-Come, First-Served (FCFS):

o Processes are scheduled in the order they arrive.
o Simple, but can lead to the convoy effect, where short processes wait behind long ones.

2. Shortest Job First (SJF):

o The process with the shortest execution time is selected next.
o Minimizes average waiting time but requires accurate knowledge of process execution times.
o Can be pre-emptive (Shortest Remaining Time First, SRTF) or non-pre-emptive.

3. Round-Robin (RR):

o Each process gets a small unit of CPU time (a time quantum) in cyclic order; see the simulation sketch after this list.
o Fair and simple, but performance depends on the size of the time quantum.

4. Multilevel Queue Scheduling:

o Processes are divided into different queues based on priority or type (e.g., foreground vs. background).
o Each queue has its own scheduling algorithm.
o Highly flexible, but managing the queues and ensuring fairness adds complexity.
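
As referenced under Round-Robin above, here is a small self-contained C simulation; the burst times and the 3-unit quantum are assumptions chosen for illustration:

    #include <stdio.h>

    int main(void) {
        int burst[] = {10, 4, 7};  /* remaining CPU time per process (assumed) */
        int n = 3, quantum = 3, t = 0, done = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {     /* visit processes in cyclic order */
                if (burst[i] == 0) continue;  /* already finished */
                int slice = burst[i] < quantum ? burst[i] : quantum;
                t += slice;                   /* run the process for one time slice */
                burst[i] -= slice;
                if (burst[i] == 0) {
                    printf("P%d completes at t = %d\n", i, t);
                    done++;
                }
            }
        }
        return 0;
    }

With these numbers the completions are P1 at t = 13, P2 at t = 20, and P0 at t = 21. A larger quantum makes RR behave more like FCFS, while a very small one wastes time on context switches.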

UNIT – 2

Memory Management

Introduction to Memory Management: Memory management is a crucial function of an operating system that involves managing the computer's primary memory. The OS keeps track of each byte in a computer's memory and manages the allocation and deallocation of memory spaces as needed by various programs to optimize overall system performance.

Address Binding: Address binding is the process of mapping logical addresses (generated by a program) to physical addresses (actual locations in memory). This can occur at different stages:

1. Compile Time: If the memory location is known a priori, absolute code can be generated.
2. Load Time: The compiler generates relocatable code, and final binding is done at load time.
3. Execution Time: Binding is delayed until run time, requiring hardware support for address mapping (e.g., a Memory Management Unit).

Relocation: Relocation is the process of adjusting addresses used in the code so that
the program can be loaded anywhere in memory. This involves changing the
addresses used in the code to match the actual physical addresses assigned to the
program during loading.

Loading: Loading is the process of bringing the program into memory from
secondary storage for execution. Depending on the OS, this can involve loading the
entire program at once or loading parts of the program as needed.

Linking: Linking combines multiple object files into a single executable file. There are
two types:

1. Static Linking: All the code needed is combined by the linker at compile time.
2. Dynamic Linking: Code is not included until runtime, reducing the executable
file size and allowing for updates without recompiling.
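
Dynamic linking can be observed directly on a Unix-like system via the POSIX dl API. A minimal sketch (link with -ldl on glibc; the library and symbol names assume a glibc system):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *lib = dlopen("libm.so.6", RTLD_LAZY);  /* load the math library at run time */
        if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        double (*cosine)(double) =
            (double (*)(double))dlsym(lib, "cos");   /* resolve the symbol at run time */
        if (cosine) printf("cos(0) = %f\n", cosine(0.0));

        dlclose(lib);                                /* unload when done */
        return 0;
    }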

Memory Sharing and Protection:

• Memory Sharing: Allows multiple processes to access the same memory space for communication and efficient resource utilization.
• Memory Protection: Ensures that a process cannot access memory that has not been allocated to it, thus preventing interference and data corruption. Techniques include:
o Base and Limit Registers: Define the range of legal addresses a process can access (see the bounds-check sketch after this list).
o Segmentation and Paging: Provide mechanisms to isolate process memory spaces.
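
As referenced above, the following sketch mimics in C the base/limit check that the MMU hardware performs on every memory access; the register values are invented for the example.

    #include <stdio.h>
    #include <stdbool.h>

    /* Legal range is [base, base + limit); an out-of-range access traps to the OS. */
    bool address_ok(unsigned addr, unsigned base, unsigned limit) {
        return addr >= base && addr < base + limit;
    }

    int main(void) {
        unsigned base = 0x4000, limit = 0x1000;  /* process may use 0x4000..0x4FFF (assumed) */
        printf("%d\n", address_ok(0x4A00, base, limit));  /* 1: legal access */
        printf("%d\n", address_ok(0x5200, base, limit));  /* 0: would trap to the OS */
        return 0;
    }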

Paging and Segmentation:

• Paging: A memory management scheme that eliminates the need for contiguous allocation of physical memory, thus avoiding external fragmentation. Memory is divided into fixed-size blocks: pages (logical memory) and frames (physical memory). The OS maintains a page table to map logical pages to physical frames; the translation arithmetic is sketched after this list.

o Advantages: Efficient memory use; no external fragmentation.
o Disadvantages: Requires additional memory for the page tables and adds translation overhead.

• Segmentation: Divides memory into variable-sized segments based on logical divisions such as functions, arrays, and data structures. A segment table stores each segment's base address and limit.

o Advantages: Provides a logical view of the program; protection and sharing are easy to implement.
o Disadvantages: Can suffer from external fragmentation; management is more complex.
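
As referenced under Paging, the sketch below shows the translation arithmetic with an assumed 4 KiB page size and a made-up four-entry page table:

    #include <stdio.h>

    #define PAGE_SIZE 4096u  /* assumed page size */

    int main(void) {
        unsigned frame_of_page[] = {7, 3, 11, 2};  /* page table: page number -> frame number */
        unsigned logical = 0x1A2C;                 /* logical address to translate */

        unsigned page     = logical / PAGE_SIZE;   /* high bits select the page */
        unsigned offset   = logical % PAGE_SIZE;   /* low bits are kept as the offset */
        unsigned physical = frame_of_page[page] * PAGE_SIZE + offset;

        printf("logical 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
               logical, page, offset, physical);   /* page 1 maps to frame 3: 0x3A2C */
        return 0;
    }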

Virtual Memory: Virtual memory is a memory management technique that provides an "illusion" of a very large memory to programs by using disk space to extend the physical memory. It allows for the execution of programs that require more memory than is physically available.

Basic Concepts of Demand Paging: Demand paging is a method of implementing virtual memory where pages of memory are loaded from disk to RAM only when they are needed, rather than loading the entire program into memory at once.

• Page Fault: Occurs when a program tries to access a page that is not currently
in memory, triggering the OS to fetch the page from disk.

Page Replacement Algorithms: When a page fault occurs and there is no free
frame available, the OS must replace one of the pages in memory. Common page
replacement algorithms include:

1. First-In-First-Out (FIFO):

o Replaces the oldest page in memory.
o Simple, but can perform poorly and even exhibit Belady's anomaly, where adding frames increases faults; a simulation appears after this list.

2. Least Recently Used (LRU):

o Replaces the page that has not been used for the longest period.
o Provides good performance but requires hardware support or additional data structures.

3. Optimal Page Replacement:

o Replaces the page that will not be used for the longest period in the future.
o Theoretical best performance, but not implementable in practice since it requires knowledge of future references.

4. Clock (Second Chance):

o A practical approximation of LRU using a circular queue and a reference bit.
o Pages with the reference bit set are given a second chance before being replaced.
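
As referenced under FIFO, here is a small self-contained simulation; the reference string is the classic one used to demonstrate Belady's anomaly (with 4 frames it incurs 10 faults instead of the 9 seen here with 3):

    #include <stdio.h>
    #include <stdbool.h>

    int main(void) {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};  /* page reference string */
        int n = sizeof refs / sizeof refs[0];
        int frames[3] = {-1, -1, -1};  /* -1 marks an empty frame */
        int next = 0, faults = 0;      /* next: index of the oldest (FIFO victim) frame */

        for (int i = 0; i < n; i++) {
            bool hit = false;
            for (int f = 0; f < 3; f++)
                if (frames[f] == refs[i]) { hit = true; break; }
            if (!hit) {
                frames[next] = refs[i];  /* evict the oldest page, load the new one */
                next = (next + 1) % 3;
                faults++;
            }
        }
        printf("page faults: %d\n", faults);  /* 9 faults with 3 frames */
        return 0;
    }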

By using these memory management techniques and algorithms, an operating system can efficiently utilize memory resources, provide multitasking capabilities, and ensure system stability and performance.

UNIT – 3

I/O Device Management

I/O Devices and Controllers:

• I/O Devices: These are peripherals that facilitate the input and output
operations of a computer. They are crucial for user interaction and data
exchange with external devices. Examples include keyboards, mice, monitors,
printers, scanners, and network adapters.
• I/O Controllers: These are hardware components that manage the
communication between the CPU, memory, and I/O devices. They handle data
transfer, error detection and correction, and synchronization of operations. I/O
controllers often include buffer memory to temporarily store data and
manage data flow rates between devices and the CPU.

Device Drivers:

• Device drivers are software components that act as intermediaries between the operating system and hardware devices. They provide a standard interface for the operating system to control the hardware, abstracting the complexities of the hardware implementation.
• Device drivers typically include routines for initializing the device, handling
data transfers (input and output), managing interrupts (events that require
immediate attention), and providing error handling and status reporting
mechanisms.

Disk Storage:

• Disk storage is a non-volatile storage medium used for storing data and
programs. It consists of one or more disks (or platters) coated with a magnetic
material. Data is stored magnetically on the disk's surface.
• Disk storage is organized into tracks (concentric circles on a disk) and sectors
(segments of a track). The operating system uses a file system to manage the
storage and retrieval of data on the disk.

File Management

Basic Concepts:

• A file is a named collection of related information stored on secondary storage. Files can represent documents, programs, databases, or any other type of data.
• File attributes include the file name, file type (e.g., text, binary), file size,
location (path), and access permissions (read, write, execute).

File Operations:

• Create: Allows the creation of a new file. The operating system assigns a
unique file identifier (inode) to the file and initializes its attributes.
• Open: Opens an existing file for reading, writing, or both. The operating
system locates the file on disk and creates a file descriptor to track the file's
status and position.
• Read: Reads data from a file into memory. The operating system manages the
data transfer between the file and the requesting process.
• Write: Writes data from memory to a file. The operating system ensures that
the data is written correctly and updates the file's attributes.
• Close: Closes an open file, freeing up system resources. The operating system
releases the file descriptor associated with the file.
• Delete: Deletes a file from the file system. The operating system removes the
file's entry from the directory and marks the disk space occupied by the file as
available.
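
The POSIX system calls below are one concrete realization of these operations; the filename and contents are assumptions for the example.

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void) {
        int fd = open("notes.txt", O_CREAT | O_RDWR, 0644);  /* create/open the file */
        if (fd < 0) { perror("open"); return 1; }

        write(fd, "hello\n", 6);                  /* write: memory -> file */

        char buf[16];
        lseek(fd, 0, SEEK_SET);                   /* rewind to the start of the file */
        ssize_t got = read(fd, buf, sizeof buf);  /* read: file -> memory */
        if (got > 0) fwrite(buf, 1, (size_t)got, stdout);

        close(fd);                                /* release the file descriptor */
        unlink("notes.txt");                      /* delete the directory entry */
        return 0;
    }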

Access Methods:

• Sequential Access: Data is read or written sequentially from the beginning of
the file to the end. This access method is efficient for processing data in a
linear manner, such as reading log files.
• Direct Access: Also known as random access, allows data to be read or
written at any point in the file. This access method is more flexible but may be
less efficient for large sequential operations.

Directory Structures and Management:

• Directories are used to organize files into a hierarchical structure. Directories can contain both files and other directories, allowing for a nested structure.
• Directory management includes creating directories, renaming directories,
moving directories, deleting directories, and listing directory contents.

Remote File Systems:

• Remote file systems allow a computer to access files stored on another computer over a network. This enables file sharing and collaboration between multiple users and computers.
• Examples of remote file systems include NFS (Network File System) and SMB
(Server Message Block, used by Windows).

File Protection:

• File protection ensures that files can only be accessed or modified by authorized users or processes. This is achieved through access control mechanisms, such as file permissions and file ownership.
• File permissions specify which users or groups can read, write, or execute a
file. File ownership determines which user or group owns the file and has the
right to set permissions.
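
A minimal POSIX sketch of inspecting and tightening permissions (the path is an assumption):

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        struct stat st;
        if (stat("notes.txt", &st) == 0) {  /* fetch the file's attributes */
            printf("owner can read:  %s\n", (st.st_mode & S_IRUSR) ? "yes" : "no");
            printf("group can write: %s\n", (st.st_mode & S_IWGRP) ? "yes" : "no");
        }
        chmod("notes.txt", S_IRUSR | S_IWUSR);  /* tighten to rw------- (owner only) */
        return 0;
    }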

Effective I/O device management and file management are critical for the efficient
and secure operation of an operating system. They ensure that users can interact
with the system and their data effectively while maintaining the integrity and security
of the system.

UNIT – 4

Introduction to Distributed Operating Systems

Characteristics:

• Transparency: Distributed operating systems aim to hide the distribution of resources from users and applications, providing a unified view of the system.
• Scalability: These systems can scale by adding more machines to the network without degrading overall performance.
• Fault Tolerance: Distributed systems are designed to withstand failures in
individual components, ensuring the system remains operational.
• Concurrency: They support concurrent execution of processes, allowing
multiple processes to run simultaneously on different machines.
• Resource Sharing: Distributed systems enable sharing of resources such as
files, printers, and computational resources across the network.
• Heterogeneity: They can support different types of hardware and software
platforms, allowing for a diverse computing environment.

Architecture:

• Distributed operating systems consist of multiple nodes (computers) connected by a network. Each node has its own processor, memory, and I/O devices.
• Nodes communicate with each other through message passing or remote
procedure calls (RPCs), which allow processes running on different nodes to
interact.
• Middleware is used to provide an abstraction layer that hides the details of
communication and resource management from the applications.

Issues:

• Communication: Ensuring efficient and reliable communication between distributed components is a major challenge. Communication protocols and mechanisms must be designed to handle network failures and delays.
• Synchronization: Coordinating the activities of processes running on different
nodes requires synchronization mechanisms such as locks, semaphores, and
barriers.
• Consistency: Ensuring consistency of data across distributed nodes is crucial.
Techniques such as distributed transactions and replication are used to
achieve this.
• Fault Tolerance: Distributed systems must be resilient to failures, which can
occur at any node or in the network. Techniques such as redundancy and
checkpointing are used to recover from failures.

Communication & Synchronization:

• Communication in distributed systems is typically achieved through message passing, where processes send messages to each other over the network.
• Synchronization is necessary to ensure that processes running on different
nodes can coordinate their activities. This is achieved through synchronization
primitives such as locks and barriers.

Introduction to Multiprocessor Operating Systems

Architecture:

• Multiprocessor operating systems (MOS) manage multiple processors in a single system. These processors may share a common memory or have their own memory.
• Symmetric multiprocessing (SMP) systems have multiple processors that share
the same memory and I/O devices. Each processor is capable of executing any
task in the system.
• Asymmetric multiprocessing (AMP) systems have one master processor that
controls the system and one or more slave processors that perform specific
tasks assigned by the master.

Structure:

• The structure of a multiprocessor operating system includes a kernel that manages the processors, memory, and I/O devices.
• The kernel must handle issues such as process scheduling, synchronization,
and communication between processors.
• Shared memory is often used to facilitate communication and data sharing
between processors.

Synchronization & Scheduling:

• Synchronization in multiprocessor systems is critical to ensure that multiple
processors can access shared resources without conflicts.
• Scheduling algorithms must be designed to efficiently utilize the available
processors and balance the workload among them.
• Techniques such as parallel processing and load balancing are used to
maximize the performance of multiprocessor systems.

Introduction to Real-Time Operating Systems

Characteristics:

• Real-time operating systems (RTOS) are designed to meet strict timing requirements for processing and responding to events.
• They are used in applications where timing is critical, such as embedded
systems, control systems, and multimedia applications.
• RTOSs must provide predictable and deterministic behavior, ensuring that
tasks are executed within specified deadlines.

Structure:

• The structure of a real-time operating system includes a kernel that manages tasks, interrupts, and resources.
• Tasks in an RTOS are typically classified as either real-time tasks with strict
timing requirements or non-real-time tasks with more relaxed timing
constraints.
• The kernel must ensure that real-time tasks are scheduled and executed in a
timely manner to meet their deadlines.

Scheduling:

• Scheduling in real-time operating systems is crucial to ensure that tasks are executed within their deadlines.
• RTOSs use scheduling algorithms that prioritize real-time tasks over non-real-
time tasks and ensure that critical tasks are not preempted by less critical
ones.
• Techniques such as priority-based scheduling and rate-monotonic scheduling
are commonly used in real-time operating systems.

Case Study of the Linux Operating System

• Linux is a widely used open-source operating system kernel that forms the
basis of many Linux distributions (distros).
• It was created by Linus Torvalds in 1991 and is released under the GNU
General Public License (GPL).
• Linux is a monolithic kernel, meaning that all kernel services run in a single
address space and have direct access to the kernel's internal data structures.
• Linux supports a wide range of hardware platforms and architectures,
including x86, ARM, and MIPS.
• Linux has a strong emphasis on security, stability, and performance, making it
popular for use in servers, embedded systems, and supercomputers.
