Operating System Exam
Introduction:
Concept of Operating Systems
An Operating System (OS) is system software that manages hardware resources and provides
services for application software. It acts as an intermediary between the hardware and user
applications. The OS ensures efficient resource allocation, file management, process
management, and system security.
Generations of Operating Systems
Operating systems have evolved over time, starting with early batch systems, moving to
multiprogramming, time-sharing systems, personal computers, and modern distributed
systems. Each generation focused on improving efficiency, user interaction, and handling
more complex tasks.
Types of Operating Systems
Operating systems can be classified into types such as Batch OS, Time-Sharing OS, Real-Time
OS, Distributed OS, and Network OS, depending on how they manage tasks, resources, and
user interaction.
OS Services
Operating systems offer several services, including process scheduling, memory
management, file management, I/O operations, and security services like user
authentication and access control, ensuring that resources are efficiently managed.
Processes:
Definition
A process is a program in execution, consisting of the program code, its current activity, and
associated resources like memory and registers.
Process Relationship
Processes can have relationships such as parent-child processes. A parent creates a child
process, which may itself create other processes. These relationships help in resource
allocation and management.
Different states of a Process
A process can exist in several states: New (being created), Ready (waiting for CPU), Running
(being executed), Blocked (waiting for an event), and Terminated (finished execution).
Process State transitions
Processes transition between states based on scheduling decisions, I/O operations, or
system interrupts, moving from New to Ready, Ready to Running, Running to Blocked, and
back to Ready or Terminated.
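The transitions above can be modeled as a small state machine. The following is a minimal illustrative sketch (the state names follow this document; real kernels use more states and different names):

```python
# Legal transitions of the five-state process model described above.
VALID_TRANSITIONS = {
    "New": {"Ready"},                               # admitted by the scheduler
    "Ready": {"Running"},                           # dispatched to the CPU
    "Running": {"Ready", "Blocked", "Terminated"},  # preempted, waits on I/O, or exits
    "Blocked": {"Ready"},                           # the awaited event completes
    "Terminated": set(),                            # final state
}

def transition(state, new_state):
    """Return the new state if the move is legal, else raise ValueError."""
    if new_state not in VALID_TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

Note, for example, that a Blocked process cannot go straight to Running; it must first re-enter the Ready queue.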
Process Control Block (PCB)
The PCB is a data structure that stores important information about a process, such as its
state, program counter, CPU registers, memory management details, and scheduling
information.
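A hypothetical, heavily simplified PCB layout, sketched as a Python dataclass (real kernels, e.g. Linux's `task_struct`, carry far more fields):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                         # process identifier
    state: str = "New"               # current process state
    program_counter: int = 0         # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU register values
    memory_limits: tuple = (0, 0)    # base/limit of the address space
    priority: int = 0                # scheduling information
```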
Context switching
Context switching occurs when the CPU switches from one process to another. The current
state of the process is saved, and the state of the new process is loaded, allowing
multitasking.
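A toy illustration of that save/restore step, assuming the "CPU" is just a dict of register values (a real switch also saves the stack pointer and memory-management state):

```python
def context_switch(cpu, outgoing, incoming):
    """Save the CPU state into the outgoing PCB, restore the incoming one's."""
    outgoing["saved_context"] = dict(cpu)   # save the current process's state
    cpu.clear()
    cpu.update(incoming["saved_context"])   # load the new process's state
```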
Thread:
Definition
A thread is the smallest unit of execution within a process. Multiple threads can exist within
a process, sharing resources like memory but having individual execution paths.
Various states
Threads can exist in states such as New (created but not started), Ready (waiting for CPU),
Running (executing), Blocked (waiting for I/O), and Terminated (finished execution).
Benefits of threads
Threads allow for more efficient multitasking and better resource utilization. They make the
system more responsive and improve performance by allowing concurrent execution within
a process.
Types of threads
Threads can be classified as user-level threads, which are managed by user-level libraries,
and kernel-level threads, which are managed by the OS kernel.
Multithreading
Multithreading allows multiple threads to run concurrently within a process, improving
performance, especially in applications that need parallel execution.
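A small sketch of threads sharing a process's memory, using Python's `threading` module. Four threads increment one shared counter; the lock is needed because `+=` on shared data is not atomic:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times, holding the lock each time."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

# Four threads of one process, all sharing the same counter.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```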
Process Scheduling:
Foundation and Scheduling objectives
Process scheduling is the method by which the OS decides which process gets to use the
CPU and when. The objective is to ensure maximum CPU utilization and fair process
execution.
Types of Schedulers
Schedulers include the Long-term scheduler (decides which processes are admitted into the
ready queue), the Short-term scheduler (decides which process runs next), and the Medium-
term scheduler (manages swapping).
Scheduling criteria
• CPU utilization: Maximizing CPU usage by keeping it busy as much as possible.
• Throughput: The number of processes completed per unit of time.
• Turnaround Time: Time from process submission to completion.
• Waiting Time: Time a process spends waiting in the ready queue.
• Response Time: Time from request submission to the first response.
Pre-emptive and Non-pre-emptive scheduling
• Pre-emptive: The OS can interrupt a running process to assign CPU to another (e.g.,
Round Robin).
• Non-pre-emptive: A running process is allowed to finish before another is assigned
CPU (e.g., First Come First Serve).
Scheduling algorithms:
• FCFS (First Come First Serve): Processes are executed in the order they arrive.
• SJF (Shortest Job First): The process with the shortest burst time is executed first.
• SRTF (Shortest Remaining Time First): A pre-emptive version of SJF, where the
process with the shortest remaining time is executed.
• RR (Round Robin): Processes are executed in a cyclic order with a fixed time
quantum.
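The waiting-time criterion above can be computed for FCFS and non-preemptive SJF with a short sketch (illustrative only; all processes are assumed to arrive at time 0):

```python
def fcfs_waiting_times(bursts):
    """FCFS: each process waits for the sum of all earlier bursts."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

def sjf_waiting_times(bursts):
    """Non-preemptive SJF: always run the shortest remaining job first."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits
```

For bursts [6, 8, 7, 3], FCFS gives waits [0, 6, 14, 21] while SJF gives [3, 16, 9, 0], a lower average; SJF is provably optimal for average waiting time when all jobs arrive together.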
UNIT-3
Memory Management
Basic Concept: Memory management is the process of controlling and coordinating
computer memory, assigning blocks to processes, and optimizing memory usage.
Logical and Physical Address Map: Logical addresses are generated by the CPU, while
physical addresses refer to actual locations in RAM. The mapping from logical to physical
addresses is done by the memory management unit (MMU).
Memory Allocation:
• Contiguous Memory Allocation: Memory is allocated in contiguous blocks.
o Fixed Partition: Fixed-sized blocks allocated for processes.
o Variable Partition: Dynamic partitioning where sizes vary based on process
needs.
o Internal Fragmentation: Wasted space within allocated blocks.
o External Fragmentation: Unused space between blocks.
Compaction: Defragmenting memory by relocating processes to eliminate external
fragmentation.
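A toy free-list sketch of variable-partition allocation, showing how external fragmentation blocks a request even when enough total space is free, and how compaction fixes it (sizes are arbitrary illustrative values):

```python
def first_fit(free_holes, size):
    """Allocate `size` units from the first hole big enough.
    free_holes is a list of (start, length) pairs. Returns start or None."""
    for i, (start, length) in enumerate(free_holes):
        if length >= size:
            if length == size:
                free_holes.pop(i)
            else:
                free_holes[i] = (start + size, length - size)
            return start
    return None

def compact(free_holes, total):
    """Relocate processes so all free space forms one hole at the top of memory."""
    free = sum(length for _, length in free_holes)
    free_holes[:] = [(total - free, free)]
```

With holes of 40 and 50 units, a 80-unit request fails (90 units free, but no single hole is large enough); after compaction it succeeds.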
Paging
Principle of Operation: Paging divides physical memory into fixed-size blocks called frames and divides each process's logical memory into pages of the same size, so a process can be loaded into any available frames, enabling efficient, non-contiguous memory management.
Page Allocation: Pages are allocated non-contiguously in memory, allowing more flexibility
and efficient use of memory.
Hardware Support: The MMU handles address translation, converting logical addresses into
physical addresses using a page table.
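The translation step can be sketched in a few lines: the logical address is split into a page number and an offset, and the page table maps the page number to a frame number (4 KiB pages are an illustrative assumption):

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

def translate(logical_addr, page_table):
    """Map a logical address to a physical one via the page table
    (here simply a list indexed by page number, holding frame numbers)."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]           # raises IndexError for an invalid page
    return frame * PAGE_SIZE + offset  # same offset, different frame
```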
Protection and Sharing: Paging allows multiple processes to share pages while ensuring
protection via access control.
Disadvantages of Paging: Memory overhead for page tables, internal fragmentation in a process's last page, and an extra memory access per address translation (reduced in practice by the TLB).
Virtual Memory
Basics: Virtual memory extends the apparent size of physical memory using disk space,
enabling larger programs to run without requiring more physical RAM.
Hardware and Control Structures: Virtual memory relies on hardware support, including
MMUs and control registers to manage virtual to physical address translation.
Locality of Reference: Memory access patterns where a process tends to use a small set of
memory locations over time.
Page Fault: Occurs when a program accesses a page that is not currently in memory,
triggering a fetch from disk.
Working Set: The set of pages a process has referenced in a recent window of time; keeping the working set resident in memory helps avoid thrashing.
Dirty Page/Dirty Bit: A dirty page is one that has been modified, and the dirty bit indicates
whether a page needs to be written back to disk.
Demand Paging: Pages are loaded into memory only when needed, improving system
efficiency.
Page Replacement Algorithms
• Optimal: Replaces the page that will not be used for the longest period of time.
• FIFO: The oldest page in memory is replaced first.
• LRU: The least recently used page is replaced, ensuring active pages stay in memory.
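FIFO and LRU can be compared by counting page faults on a reference string. The sketch below uses the classic textbook reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement: evict the oldest resident page."""
    resident, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(queue.popleft())
            resident.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU: evict the least recently used page.
    The OrderedDict keeps pages in recency order."""
    resident, faults = OrderedDict(), 0
    for p in refs:
        if p in resident:
            resident.move_to_end(p)          # refresh recency on a hit
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)  # evict least recently used
            resident[p] = True
    return faults
```

On that string with 3 frames, FIFO incurs 15 faults and LRU 12, illustrating why LRU usually tracks program locality better.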
UNIT-4
File Management
Concept of File: A file is a collection of data stored on a storage device, identified by a name.
It can be a document, image, or program, and represents persistent storage.
Access Methods: Files can be accessed sequentially (from beginning to end), randomly
(direct access to specific parts), or by indexed methods (using an index to locate data).
File Types: Different types of files include text files, binary files, executable files, and system
files.
File Operation: Operations include creating, reading, writing, modifying, and deleting files.
Directory Structure: A hierarchy for organizing files in directories and subdirectories for
efficient access.
File System Structure: Defines how files are stored and retrieved, managing file allocation,
metadata, and access permissions.
Allocation Methods:
• Contiguous: Stores files in consecutive blocks for easy access.
• Linked: Stores files in non-contiguous blocks, linked through pointers.
• Indexed: Uses an index table for fast access to file blocks.
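Linked allocation can be illustrated with a toy "disk" where each block stores its data plus a pointer to the file's next block (block numbers and the -1 end marker are illustrative assumptions):

```python
def read_linked_file(disk, start):
    """Follow next-pointers from `start`, collecting each block's data.
    `disk` maps block number -> (payload, next_block); -1 marks end-of-file."""
    data, block = [], start
    while block != -1:
        payload, nxt = disk[block]
        data.append(payload)
        block = nxt
    return data
```

Note the trade-off the section describes: the blocks 9 → 16 → 1 need not be contiguous, but reaching block 1 requires walking the whole chain, so linked allocation suits sequential access, not random access.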
Efficiency and Performance: Affects file retrieval speed and disk space utilization.
Disk Management
Disk Structure: Disks are organized into sectors, tracks, and cylinders.
Disk Scheduling: Algorithms manage the order in which disk requests are served to optimize
performance.
• FCFS: First-Come-First-Served scheduling.
• SSTF: Shortest Seek Time First, which serves the pending request closest to the
current head position, reducing seek time.
• SCAN: Moves the disk arm in one direction, serving requests, then reverses.
• C-SCAN: Similar to SCAN but the arm returns to the start without serving requests.
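FCFS and SSTF can be compared by total head movement. The sketch below uses the common textbook request queue 98, 183, 37, 122, 14, 124, 65, 67 with the head starting at cylinder 53:

```python
def fcfs_seek(start, requests):
    """Total head movement serving requests in arrival order."""
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf_seek(start, requests):
    """Total head movement always serving the closest pending request."""
    pending, total, pos = list(requests), 0, start
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total
```

For that queue, FCFS moves the head 640 cylinders while SSTF moves it only 236, though SSTF can starve requests far from the head.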
Disk Reliability: Refers to the probability of disk failure and mechanisms like
redundancy for data protection.
Disk Formatting: Prepares a disk for use, creating file system structures.
Boot-block: Contains essential information to load the operating system.
Bad Blocks: Physical areas on a disk that can no longer reliably store data.
Case Study on UNIX and WINDOWS Operating System
• UNIX: A multi-user, multitasking operating system known for its stability, security,
and scalability. It uses a command-line interface, supporting a wide range of file
systems and offering features like networking and process control.
• Windows: A widely used graphical user interface (GUI)-based OS that prioritizes ease
of use, offering compatibility with various applications and hardware. Known for its
flexibility in hardware support and user-friendly environment, it includes features like
multitasking and networking.
Case Studies: Comparative Study of WINDOWS, UNIX, & LINUX Systems
• WINDOWS: Best for desktop users, offering a rich GUI, user-friendly environment,
and compatibility with various software applications. Windows supports a broad
hardware ecosystem, making it versatile but potentially less stable under heavy
workloads compared to UNIX/Linux.
• UNIX: Known for its robustness, scalability, and security. It's widely used in servers,
academic, and research environments. UNIX is less user-friendly than Windows but is
powerful for professionals who require flexibility and control over their system.
• LINUX: A UNIX-like open-source operating system, offering stability, security, and
flexibility. It’s widely used in servers, embedded systems, and as a development
platform. Linux has a strong developer community, and its open-source nature offers
significant cost advantages over both Windows and UNIX in many scenarios.