
Operating System Exam


UNIT-1

Introduction:
Concept of Operating Systems
An Operating System (OS) is system software that manages hardware resources and provides
services for application software. It acts as an intermediary between the hardware and user
applications. The OS ensures efficient resource allocation, file management, process
management, and system security.
Generations of Operating Systems
Operating systems have evolved over time, starting with early batch systems, moving to
multiprogramming, time-sharing systems, personal computers, and modern distributed
systems. Each generation focused on improving efficiency, user interaction, and handling
more complex tasks.
Types of Operating Systems
Operating systems can be classified into types such as Batch OS, Time-Sharing OS, Real-Time
OS, Distributed OS, and Network OS, depending on how they manage tasks, resources, and
user interaction.
OS Services
Operating systems offer several services, including process scheduling, memory
management, file management, I/O operations, and security services like user
authentication and access control, ensuring that resources are efficiently managed.

Processes:
Definition
A process is a program in execution, consisting of the program code, its current activity, and
associated resources like memory and registers.
Process Relationship
Processes can have relationships such as parent-child processes. A parent creates a child
process, which may itself create other processes. These relationships help in resource
allocation and management.
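
As a minimal sketch of this parent-child relationship on a UNIX-like system (using the standard fork() and waitpid() calls), the following C fragment has a parent create a child process and then wait for it to terminate:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* create a child process */
        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {
            /* child: runs concurrently with the parent */
            printf("child  pid=%d parent=%d\n", getpid(), getppid());
            exit(0);
        } else {
            /* parent: waits until the child terminates */
            waitpid(pid, NULL, 0);
            printf("parent pid=%d reaped child %d\n", getpid(), pid);
        }
        return 0;
    }

The child inherits a copy of the parent's address space but runs as a separate process with its own PID.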
Different states of a Process
A process can exist in several states: New (being created), Ready (waiting for CPU), Running
(being executed), Blocked (waiting for an event), and Terminated (finished execution).
Process State transitions
Processes transition between states based on scheduling decisions, I/O operations, or
system interrupts, moving from New to Ready, Ready to Running, Running to Blocked, and
back to Ready or Terminated.
Process Control Block (PCB)
The PCB is a data structure that stores important information about a process, such as its
state, program counter, CPU registers, memory management details, and scheduling
information.
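
A minimal, illustrative C sketch of such a structure (the field names are hypothetical; a real kernel's PCB, such as Linux's task_struct, contains far more information):

    /* Illustrative Process Control Block -- field names are hypothetical */
    enum proc_state { NEW, READY, RUNNING, BLOCKED, TERMINATED };

    struct pcb {
        int              pid;             /* process identifier            */
        enum proc_state  state;           /* current process state         */
        unsigned long    program_counter; /* address of next instruction   */
        unsigned long    registers[16];   /* saved CPU register contents   */
        unsigned long    page_table_base; /* memory-management information */
        int              priority;        /* scheduling information        */
        struct pcb      *parent;          /* link to the creating process  */
    };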
Context switching
Context switching occurs when the CPU switches from one process to another. The current
state of the process is saved, and the state of the new process is loaded, allowing
multitasking.

Thread:
Definition
A thread is the smallest unit of execution within a process. Multiple threads can exist within
a process, sharing resources like memory but having individual execution paths.
Various states
Threads can exist in states such as New (created but not started), Ready (waiting for CPU),
Running (executing), Blocked (waiting for I/O), and Terminated (finished execution).
Benefits of threads
Threads allow for more efficient multitasking and better resource utilization. They make the
system more responsive and improve performance by allowing concurrent execution within
a process.
Types of threads
Threads can be classified as user-level threads, which are managed by user-level libraries,
and kernel-level threads, which are managed by the OS kernel.
Multithreading
Multithreading allows multiple threads to run concurrently within a process, improving
performance, especially in applications that need parallel execution.
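
A small sketch using POSIX threads (pthreads): two threads are created inside one process, run concurrently, and are then joined. Both share the process's address space but each has its own stack and execution path:

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread prints its own identifier. */
    void *worker(void *arg) {
        printf("thread %ld running\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);   /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }

Build with the -pthread compiler flag on most UNIX-like systems.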

Process Scheduling:
Foundation and Scheduling objectives
Process scheduling is the method by which the OS decides which process gets to use the
CPU and when. The objective is to ensure maximum CPU utilization and fair process
execution.
Types of Schedulers
Schedulers include the Long-term scheduler (decides which processes are admitted into the
ready queue), the Short-term scheduler (decides which process runs next), and the Medium-
term scheduler (manages swapping).
Scheduling criteria
• CPU utilization: Maximizing CPU usage by keeping it busy as much as possible.
• Throughput: The number of processes completed per unit of time.
• Turnaround Time: Time from process submission to completion.
• Waiting Time: Time a process spends waiting in the ready queue.
• Response Time: Time from request submission to the first response.
Pre-emptive and non-pre-emptive scheduling
• Pre-emptive: The OS can interrupt a running process to assign CPU to another (e.g.,
Round Robin).
• Non-pre-emptive: A running process is allowed to finish before another is assigned
CPU (e.g., First Come First Serve).
Scheduling algorithms:
• FCFS (First Come First Serve): Processes are executed in the order they arrive.
• SJF (Shortest Job First): The process with the shortest burst time is executed first.
• SRTF (Shortest Remaining Time First): A pre-emptive version of SJF, where the
process with the shortest remaining time is executed.
• RR (Round Robin): Processes are executed in a cyclic order with a fixed time
quantum.
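
To tie the algorithms back to the scheduling criteria above, the sketch below computes waiting and turnaround times under FCFS for three processes that all arrive at time 0 (the burst times 24, 3 and 3 ms are made-up values):

    #include <stdio.h>

    int main(void) {
        /* Hypothetical burst times (ms) for P1..P3, all arriving at t=0 */
        int burst[] = {24, 3, 3};
        int n = 3, wait = 0, total_wait = 0, total_tat = 0;

        for (int i = 0; i < n; i++) {
            int tat = wait + burst[i];   /* turnaround = waiting + burst  */
            printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, tat);
            total_wait += wait;
            total_tat  += tat;
            wait += burst[i];            /* next process starts after this one */
        }
        printf("avg waiting=%.2f avg turnaround=%.2f\n",
               (double)total_wait / n, (double)total_tat / n);
        return 0;
    }

The output gives an average waiting time of 17 ms; serving the same bursts in SJF order (3, 3, 24) would reduce it to 3 ms, which is why SJF minimizes average waiting time when burst times are known in advance.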

UNIT-2

1. Inter-process Communication (IPC)
Inter-process communication (IPC) is a mechanism that allows processes to communicate
with each other and synchronize their actions. It enables processes to share data, exchange
information, and coordinate tasks in a multi-processing environment. IPC mechanisms
include message passing, shared memory, semaphores, and more. Proper synchronization is
crucial to prevent issues like race conditions and deadlocks.
2. Critical Section
A critical section is a part of a program that accesses shared resources and needs to be
executed by only one process at a time to avoid data corruption. It is crucial for ensuring the
integrity of data in a multi-process environment. Synchronization mechanisms like
semaphores, locks, and monitors are used to manage critical sections.
3. Race Conditions
Race conditions occur when two or more processes access shared resources concurrently,
and the outcome depends on the timing of their execution. These can lead to unpredictable
behavior, including data corruption. Proper synchronization (mutexes, semaphores) is
required to avoid race conditions by controlling access to shared resources.
4. Mutual Exclusion
Mutual exclusion ensures that only one process can access a critical section at a time,
preventing race conditions. It is a fundamental principle of concurrent programming,
ensuring that shared resources are used exclusively by one process to maintain consistency.
Mechanisms like locks and semaphores enforce mutual exclusion.
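
A minimal sketch of mutual exclusion with a POSIX mutex: the increment of the shared counter is the critical section, and the lock guarantees that only one thread executes it at a time (removing the lock would reintroduce the race condition described above):

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                                  /* shared resource      */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* protects the counter */

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* enter critical section */
            counter++;
            pthread_mutex_unlock(&lock);  /* exit critical section  */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 200000 with the mutex */
        return 0;
    }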
5. The Producer/Consumer Problem
The producer/consumer problem involves two processes: the producer, which generates
data, and the consumer, which consumes it. The challenge is to coordinate the processes to
ensure that the consumer does not consume data before the producer creates it. Shared
buffers and synchronization techniques like semaphores are used to prevent race conditions
and deadlocks.
6. Semaphores
A semaphore is a synchronization primitive used to control access to shared resources in a
concurrent system. It maintains a counter that is incremented and decremented by
processes to signal the availability of resources. Semaphores can be binary (mutex) or
counting, where the latter allows more than one resource to be controlled.
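
A sketch of the producer/consumer problem from section 5 solved with POSIX counting semaphores and a mutex around a bounded buffer (the buffer size and item count are arbitrary):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 4                     /* bounded buffer size (arbitrary)      */
    int buffer[N], in = 0, out = 0;

    sem_t empty, full;              /* counting semaphores: free/used slots */
    pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

    void *producer(void *arg) {
        (void)arg;
        for (int item = 0; item < 10; item++) {
            sem_wait(&empty);                 /* wait for a free slot   */
            pthread_mutex_lock(&mtx);
            buffer[in] = item; in = (in + 1) % N;
            pthread_mutex_unlock(&mtx);
            sem_post(&full);                  /* signal: one more item  */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < 10; i++) {
            sem_wait(&full);                  /* wait for an item       */
            pthread_mutex_lock(&mtx);
            int item = buffer[out]; out = (out + 1) % N;
            pthread_mutex_unlock(&mtx);
            sem_post(&empty);                 /* signal: one more slot  */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty, 0, N);               /* all slots initially free */
        sem_init(&full, 0, 0);                /* no items initially       */
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

Here empty counts free slots and full counts filled slots, so the consumer blocks on an empty buffer and the producer blocks on a full one.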
7. Event Counters
Event counters are an IPC mechanism used to synchronize processes on the occurrence of
specific events. An event counter is a non-decreasing integer that is advanced each time the
associated event occurs; a process can read its current value or wait until it reaches a given
threshold. This lets cooperating processes coordinate their progress without conflicting over
shared resources.
8. Monitors
A monitor is a high-level synchronization construct that combines data and procedures for
mutual exclusion and condition synchronization. Monitors provide a safe and controlled
environment for process communication and resource sharing by allowing only one process
to execute inside the monitor at a time, using condition variables for waiting and signaling.
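
C has no built-in monitor construct, but the same discipline can be approximated with one mutex (serializing entry) and a condition variable (for waiting and signaling). The account example below is hypothetical: a withdrawal waits inside the "monitor" until a deposit makes enough funds available:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Monitor-style shared object: one mutex serializes entry, a condition
     * variable provides the waiting and signaling described above.        */
    static int balance = 0;
    static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;

    void deposit(int amount) {
        pthread_mutex_lock(&m);           /* "enter" the monitor          */
        balance += amount;
        pthread_cond_broadcast(&cv);      /* wake any waiting withdrawers */
        pthread_mutex_unlock(&m);         /* "leave" the monitor          */
    }

    void withdraw(int amount) {
        pthread_mutex_lock(&m);
        while (balance < amount)          /* wait releases the mutex and  */
            pthread_cond_wait(&cv, &m);   /* re-acquires it when signaled */
        balance -= amount;
        pthread_mutex_unlock(&m);
    }

    void *withdrawer(void *arg) { (void)arg; withdraw(100); puts("withdrew 100"); return NULL; }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, withdrawer, NULL);
        sleep(1);          /* let the withdrawer block first (for illustration) */
        deposit(150);      /* signals the waiting withdrawer                    */
        pthread_join(t, NULL);
        return 0;
    }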
9. Message Passing
Message passing is an IPC technique where processes communicate by sending and
receiving messages, typically using a message queue. This approach avoids shared memory
and provides a mechanism for processes to exchange data across different address spaces. It
is often used in distributed systems and environments where processes need to be
decoupled.
10. Classical IPC Problems
• Readers-Writers Problem: Multiple readers can access shared data simultaneously,
but writers must have exclusive access. The challenge is to prevent readers from
reading while a writer is writing, and vice versa.
• Dining Philosophers Problem: Philosophers must share resources (e.g., forks) to eat,
but this problem requires synchronization to avoid deadlock and resource starvation.
11. Deadlocks
A deadlock occurs when a set of processes are blocked because each is holding a resource
and waiting for another, creating a circular wait. Deadlock can halt system processes and
cause significant performance issues.
12. Necessary and Sufficient Conditions for Deadlock
The four conditions necessary for deadlock are:
1. Mutual Exclusion: At least one resource is held in a non-sharable (exclusive) mode.
2. Hold and Wait: Processes hold resources and wait for others.
3. No Preemption: Resources cannot be forcibly taken from a process.
4. Circular Wait: A set of processes exists where each process is waiting for another in
the set.
13. Deadlock Prevention
Deadlock prevention techniques aim to eliminate one or more of the necessary conditions
for deadlock. Examples include imposing a global ordering on resource requests to rule out
circular waits, requiring processes to request all their resources at once (breaking hold and
wait), or allowing resources to be preempted (removing the no-preemption condition).
14. Deadlock Avoidance: Banker’s Algorithm
The Banker’s Algorithm is a deadlock avoidance algorithm that checks resource allocation
and determines whether granting a resource request will leave the system in a safe state. It
works by simulating resource allocation and ensuring no circular wait can occur, thus
avoiding deadlock.
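
A compact sketch of the safety check at the heart of the Banker's Algorithm; the matrices below are illustrative (Need = Max - Allocation), and a real implementation would rerun this check for every resource request before granting it:

    #include <stdbool.h>
    #include <stdio.h>

    #define P 3   /* number of processes (illustrative)      */
    #define R 2   /* number of resource types (illustrative) */

    /* Returns true if the system is in a safe state, i.e. some ordering of
     * the processes lets every one of them finish with the given resources. */
    bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
        int work[R];
        bool finished[P] = {false};
        for (int r = 0; r < R; r++) work[r] = avail[r];

        for (int done = 0; done < P; ) {
            bool progress = false;
            for (int p = 0; p < P; p++) {
                if (finished[p]) continue;
                bool can_run = true;
                for (int r = 0; r < R; r++)
                    if (need[p][r] > work[r]) { can_run = false; break; }
                if (can_run) {               /* pretend p runs to completion  */
                    for (int r = 0; r < R; r++) work[r] += alloc[p][r];
                    finished[p] = true;
                    progress = true;
                    done++;
                }
            }
            if (!progress) return false;     /* nobody can finish: unsafe */
        }
        return true;
    }

    int main(void) {
        int avail[R]    = {3, 3};                      /* made-up example */
        int alloc[P][R] = {{0, 1}, {2, 0}, {3, 0}};
        int need[P][R]  = {{1, 2}, {1, 2}, {2, 2}};
        printf("safe state: %s\n", is_safe(avail, alloc, need) ? "yes" : "no");
        return 0;
    }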
15. Deadlock Detection and Recovery
In deadlock detection, the system checks for cycles in the resource allocation graph to detect
deadlock. Once detected, recovery mechanisms, like process termination or resource
preemption, can be used to break the deadlock and allow system processes to resume
execution.

UNIT-3

Memory Management
Basic Concept: Memory management is the process of controlling and coordinating
computer memory, assigning blocks to processes, and optimizing memory usage.
Logical and Physical Address Map: Logical addresses are generated by the CPU, while
physical addresses refer to actual locations in RAM. The mapping from logical to physical
addresses is done by the memory management unit (MMU).
Memory Allocation:
• Contiguous Memory Allocation: Memory is allocated in contiguous blocks.
o Fixed Partition: Fixed-sized blocks allocated for processes.
o Variable Partition: Dynamic partitioning where sizes vary based on process
needs.
o Internal Fragmentation: Wasted space inside an allocated block (typical of fixed
partitions).
o External Fragmentation: Unused space scattered between allocated blocks (typical
of variable partitions).
Compaction: Defragmenting memory by relocating processes to eliminate external
fragmentation.
Paging
Principle of Operation: Paging divides physical memory into fixed-size blocks called frames
and a process's logical memory into blocks of the same size called pages, enabling
non-contiguous, efficient memory management.
Page Allocation: Pages are allocated non-contiguously in memory, allowing more flexibility
and efficient use of memory.
Hardware Support: The MMU handles address translation, converting logical addresses into
physical addresses using a page table.
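
A small sketch of the translation the MMU performs, assuming 4 KB pages and an illustrative page table (the frame numbers are made-up):

    #include <stdio.h>

    #define PAGE_SIZE 4096u            /* assumed 4 KB pages */

    int main(void) {
        /* Illustrative page table: page number -> frame number */
        unsigned page_table[] = {5, 9, 2, 7};

        unsigned logical  = 6000;                       /* example logical address */
        unsigned page     = logical / PAGE_SIZE;        /* page number = 1         */
        unsigned offset   = logical % PAGE_SIZE;        /* offset = 1904           */
        unsigned frame    = page_table[page];
        unsigned physical = frame * PAGE_SIZE + offset; /* 9*4096 + 1904 = 38768   */

        printf("logical %u -> page %u offset %u -> physical %u\n",
               logical, page, offset, physical);
        return 0;
    }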
Protection and Sharing: Paging allows multiple processes to share pages while ensuring
protection via access control.
Disadvantages of Paging: Increased memory overhead due to page tables, and potential
performance issues due to frequent page swapping.
Virtual Memory
Basics: Virtual memory extends the apparent size of physical memory using disk space,
enabling larger programs to run without requiring more physical RAM.
Hardware and Control Structures: Virtual memory relies on hardware support, including
MMUs and control registers to manage virtual to physical address translation.
Locality of Reference: Memory access patterns where a process tends to use a small set of
memory locations over time.
Page Fault: Occurs when a program accesses a page that is not currently in memory,
triggering a fetch from disk.
Working Set: The set of pages a process has referenced in a recent window of time; keeping
the working set resident in memory helps avoid thrashing.
Dirty Page/Dirty Bit: A dirty page is one that has been modified, and the dirty bit indicates
whether a page needs to be written back to disk.
Demand Paging: Pages are loaded into memory only when needed, improving system
efficiency.
Page Replacement Algorithms
• Optimal: Replaces the page that will not be used for the longest period of time.
• FIFO: The oldest page in memory is replaced first.
• LRU: The least recently used page is replaced, ensuring active pages stay in memory.
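
A minimal FIFO page-replacement sketch that counts page faults for a made-up reference string with 3 frames; changing the victim-selection rule would turn it into LRU or another policy:

    #include <stdio.h>

    #define FRAMES 3

    int main(void) {
        int ref[] = {7, 0, 1, 2, 0, 3, 0, 4};    /* made-up reference string   */
        int n = sizeof ref / sizeof ref[0];
        int frames[FRAMES] = {-1, -1, -1};       /* -1 means the frame is empty */
        int next = 0, faults = 0;                /* next: FIFO victim index     */

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < FRAMES; f++)
                if (frames[f] == ref[i]) { hit = 1; break; }
            if (!hit) {                          /* page fault: evict oldest page */
                frames[next] = ref[i];
                next = (next + 1) % FRAMES;
                faults++;
            }
        }
        printf("page faults: %d\n", faults);
        return 0;
    }

For this string, 7 of the 8 references fault under FIFO with 3 frames.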

UNIT-4
File Management
Concept of File: A file is a collection of data stored on a storage device, identified by a name.
It can be a document, image, or program, and represents persistent storage.
Access Methods: Files can be accessed sequentially (from beginning to end), randomly
(direct access to specific parts), or by indexed methods (using an index to locate data).
File Types: Different types of files include text files, binary files, executable files, and system
files.
File Operation: Operations include creating, reading, writing, modifying, and deleting files.
Directory Structure: A hierarchy for organizing files in directories and subdirectories for
efficient access.
File System Structure: Defines how files are stored and retrieved, managing file allocation,
metadata, and access permissions.
Allocation Methods:
• Contiguous: Stores files in consecutive blocks for easy access.
• Linked: Stores files in non-contiguous blocks, linked through pointers.
• Indexed: Uses an index table for fast access to file blocks.
Efficiency and Performance: The choice of allocation method and directory structure affects
file retrieval speed and disk space utilization.
Disk Management
Disk Structure: Disks are organized into sectors, tracks, and cylinders.
Disk Scheduling: Algorithms manage the order in which disk requests are served to optimize
performance.
• FCFS: First-Come-First-Served scheduling.
• SSTF: Shortest Seek Time First, which reduces the seek time.
• SCAN: Moves the disk arm in one direction, serving requests, then reverses.
• C-SCAN: Similar to SCAN but the arm returns to the start without serving requests.
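
A quick sketch comparing total head movement under FCFS and SSTF for a made-up request queue, with the head initially at cylinder 50:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int req[] = {82, 17, 43, 60, 10};   /* made-up cylinder requests */
        int n = sizeof req / sizeof req[0];

        /* FCFS: serve requests in arrival order */
        int head = 50, fcfs = 0;
        for (int i = 0; i < n; i++) { fcfs += abs(req[i] - head); head = req[i]; }

        /* SSTF: always serve the pending request closest to the head */
        int served[5] = {0};   /* one flag per request */
        head = 50;
        int sstf = 0;
        for (int k = 0; k < n; k++) {
            int best = -1;
            for (int i = 0; i < n; i++)
                if (!served[i] && (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                    best = i;
            sstf += abs(req[best] - head);
            head = req[best];
            served[best] = 1;
        }
        printf("total head movement: FCFS=%d SSTF=%d\n", fcfs, sstf);
        return 0;
    }

For these requests FCFS moves the head 190 cylinders while SSTF moves it 118, illustrating how SSTF reduces seek time.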
Disk Reliability: The likelihood that a disk stores and returns data without failure; redundancy
mechanisms are used to protect data against disk failures.
Disk Formatting: Prepares a disk for use, creating file system structures.
Boot-block: Contains essential information to load the operating system.
Bad Blocks: Physical areas on a disk that can no longer reliably store data.
Case Study on UNIX and WINDOWS Operating System
• UNIX: A multi-user, multitasking operating system known for its stability, security,
and scalability. It uses a command-line interface, supporting a wide range of file
systems and offering features like networking and process control.
• Windows: A widely used graphical user interface (GUI)-based OS that prioritizes ease
of use, offering compatibility with various applications and hardware. Known for its
flexibility in hardware support and user-friendly environment, it includes features like
multitasking and networking.
Case Studies: Comparative Study of WINDOWS, UNIX, & LINUX Systems
• WINDOWS: Best for desktop users, offering a rich GUI, user-friendly environment,
and compatibility with various software applications. Windows supports a broad
hardware ecosystem, making it versatile but potentially less stable under heavy
workloads compared to UNIX/Linux.
• UNIX: Known for its robustness, scalability, and security. It's widely used in servers,
academic, and research environments. UNIX is less user-friendly than Windows but is
powerful for professionals who require flexibility and control over their system.
• LINUX: A UNIX-like open-source operating system, offering stability, security, and
flexibility. It’s widely used in servers, embedded systems, and as a development
platform. Linux has a strong developer community, and its open-source nature offers
significant cost advantages over both Windows and UNIX in many scenarios.
