OSY Notes Vol 1 - Ur Engineering Friend
Definition -:
Dual mode operation is a feature of modern operating systems in which the CPU
runs in one of two modes: user mode and kernel (privileged) mode. This
distinction allows the operating system to protect critical system resources
and maintain security by enforcing access control and preventing unauthorized
access or manipulation.
1. User mode: In this mode, the CPU executes instructions on behalf of user
applications or processes. The user mode provides a restricted environment
where applications can run, but they have limited access to system resources.
User mode applications cannot directly access hardware devices or perform
privileged operations.
2. Kernel mode: In this mode, the CPU executes operating system code with
unrestricted access to hardware and the full instruction set. Privileged
operations, such as direct device access, memory management, and interrupt
handling, are carried out only in kernel mode.
The transition between user mode and kernel mode occurs through system calls
or exceptions. When a user application needs to perform a privileged operation
or access a protected resource, it makes a request to the operating system
through a system call. The system call interrupts the execution of the user mode
code, transfers control to the kernel mode, and executes the requested operation
on behalf of the application. After completing the privileged operation, control
is returned to the user mode, and the application continues execution.
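As an illustration, Python's os module exposes thin wrappers over such system calls. The sketch below (assuming a Unix-like system) shows a write request trapping into the kernel and returning its result to user mode; the pipe is only there to give us a file descriptor we own:

```python
import os

def write_via_syscall(fd: int, data: bytes) -> int:
    # os.write() traps into kernel mode via the write() system call;
    # the kernel performs the privileged I/O and returns the byte count
    # to user mode, where execution resumes.
    return os.write(fd, data)

# A pipe gives us a file descriptor to write to, so the demo is self-contained.
r, w = os.pipe()
n = write_via_syscall(w, b"hello")
print(n, os.read(r, n))  # → 5 b'hello'
os.close(r)
os.close(w)
```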
OS as a Resource Manager
An operating system (OS) acts as a resource manager, responsible for efficiently
allocating and managing the various hardware and software resources of a
computer system. It ensures that these resources are utilized effectively to fulfill
the demands of user applications and provide a seamless computing experience.
Here are some key aspects of how an OS functions as a resource manager: it
schedules the CPU among competing processes, allocates and reclaims main
memory, mediates access to I/O devices, and organizes files on secondary
storage.
By managing these resources, the OS ensures that they are allocated efficiently,
conflicts are resolved, and the overall system operates in a stable and reliable
manner. It serves as an intermediary layer between the hardware and software,
abstracting the complexities of resource management and providing a unified
interface for applications to interact with the system.
Imp Questions -:
4. Limited Error Handling: In batch systems, errors or faults in one job may
impact subsequent jobs in the batch queue. If a job encounters an error or
failure, it may require manual intervention to correct the issue and resume
processing, which can slow down the overall job execution.
5. Dependency on Job Order: The order in which jobs are submitted to the
batch queue may impact overall performance. If high-priority or critical jobs are
placed behind long-running jobs, it can delay their execution and affect the
system's responsiveness.
In a time-shared operating system, the CPU time is divided into small time
intervals called time slices or quanta. Each user or process is allocated a time
slice during which it can execute its tasks. When the time slice expires, the
operating system interrupts the execution and switches to the next user or
process in line.
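The time-slice mechanism can be sketched as a toy simulation (not a real scheduler): each process runs for at most one quantum, and a preempted process rejoins the back of the ready queue:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate time slicing: return the order in which processes finish."""
    ready = deque(enumerate(burst_times))  # (pid, remaining CPU time needed)
    finished = []
    while ready:
        pid, remaining = ready.popleft()   # dispatch the next process
        if remaining > quantum:
            ready.append((pid, remaining - quantum))  # preempted: back of queue
        else:
            finished.append(pid)           # completes within this slice
    return finished

print(round_robin([5, 2, 8], quantum=3))  # → [1, 0, 2]
```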
A time-sharing operating system provides several important benefits for both
users and computer systems, but it also has drawbacks. Here are some key
points about time-sharing operating systems:
- Security and Privacy Concerns: Sharing system resources raises security and
privacy concerns, as one user's actions or programs may potentially affect
others.
There are various types of Distributed Operating systems. Some of them are as
follows
1. Client-Server Systems
2. Peer-to-Peer Systems
Client-Server System
This type of system requires the client to request a resource, after which the
server gives the requested resource. When a client connects to a server, the
server may serve multiple clients at the same time.
The server provides an interface, and the client sends its requests to be
executed as actions. After completing the activity, the server sends back a
response and transfers the result to the client.
It provides a file system interface for clients, allowing them to execute actions
like file creation, updating, deletion, and more.
Peer-to-Peer System
The nodes play an equally important role in this system: the task is evenly
distributed among the nodes, and the nodes can share data and resources as
needed. As with client-server systems, they require a network to connect.
Mobile OS
Android -:
Applications
Application Framework
Android Runtime
Platform Libraries
Linux Kernel
Applications –
Application framework –
Android runtime –
Platform libraries –
The platform libraries include various C/C++ core libraries and Java-based
libraries such as Media, Graphics, Surface Manager, OpenGL, etc., to provide
support for Android development.
The Media library provides support to play and record audio and video in
various formats.
The Surface manager is responsible for managing access to the display subsystem.
SGL and OpenGL are cross-language, cross-platform application programming
interfaces (APIs) used for 2D and 3D computer graphics.
SQLite provides database support and FreeType provides font support.
WebKit: this open-source web browser engine provides all the functionality
to display web content and to simplify page loading.
SSL (Secure Sockets Layer) is a security technology for establishing an
encrypted link between a web server and a web browser.
Linux Kernel –
The Linux kernel is the heart of the Android architecture. It manages all the
available drivers, such as display, camera, Bluetooth, audio, and memory
drivers, which are required during runtime.
The Linux kernel provides an abstraction layer between the device hardware
and the other components of the Android architecture. It is responsible for
the management of memory, power, devices, etc.
Security: The Linux kernel handles the security between the application and
the system.
Memory Management: It handles memory management efficiently, freeing
application developers from low-level memory concerns.
Process Management: It manages processes and allocates resources to them
whenever they are needed.
Network Stack: It effectively handles the network communication.
Driver Model: It ensures that applications work properly on the device;
hardware manufacturers are responsible for building their drivers into the
Linux build.
Imp Questions :
Doubts Column :
2. Services & Components of OS
5. User Interface: The operating system provides a user interface that allows
users to interact with the computer system. This can include command-line
interfaces (CLI), graphical user interfaces (GUI), or a combination of both,
enabling users to execute commands, launch applications, and manage files.
10. System Utilities: Operating systems offer a range of utility programs that
assist in system management and maintenance. These utilities may include disk
management tools, performance monitoring tools, backup and restore utilities,
and system configuration tools.
System Calls
1. File System Operations: System calls such as "open," "read," "write," and
"close" are used for file manipulation. They allow programs to create, open,
read from, write to, and close files.
2. Process Management: System calls like "fork," "exec," "exit," and "wait"
are used for managing processes. They allow programs to create new processes,
replace the current process with a different program, terminate processes, and
wait for process termination.
4. Memory Management: System calls like "brk" and "mmap" are used for
memory management. They allow programs to allocate and deallocate memory
dynamically, map files into memory, and modify memory protection settings.
6. Time and Date Management: System calls like "time," "gettimeofday," and
"sleep" are used to obtain and manipulate system time and dates.
7. Process Control: System calls like "kill" and "signal" are used for process
control. They allow programs to send signals to processes, handle signal events,
and modify signal behavior.
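A minimal sketch of the file-manipulation calls above, using Python's os-module wrappers around open, write, read, and close (the file path is arbitrary; deletion uses the unlink call behind os.remove):

```python
import os, tempfile

def file_demo(path: str) -> bytes:
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # "open" (creates file)
    os.write(fd, b"hello, file")                          # "write"
    os.close(fd)                                          # "close"
    fd = os.open(path, os.O_RDONLY)
    data = os.read(fd, 100)                               # "read"
    os.close(fd)
    os.remove(path)                                       # delete (unlink)
    return data

print(file_demo(tempfile.mktemp()))  # → b'hello, file'
```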
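The fork/exec/wait/exit calls can be combined as below, a sketch assuming a Unix-like system where the `true` program exists; the child replaces its image with the requested program while the parent waits for its exit code:

```python
import os

def spawn(argv: list) -> int:
    """fork a child, exec a program in it, and wait for its exit code."""
    pid = os.fork()                      # "fork": create a new process
    if pid == 0:
        try:
            os.execvp(argv[0], argv)     # "exec": replace the child's image
        finally:
            os._exit(127)                # "exit": exec failed, end the child
    _, status = os.waitpid(pid, 0)       # "wait": block until the child ends
    return os.waitstatus_to_exitcode(status)

print(spawn(["true"]))  # → 0
```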
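For the memory-management calls, Python's mmap module wraps the mmap/munmap system calls; passing -1 as the file descriptor requests an anonymous mapping straight from the kernel rather than one backed by a file:

```python
import mmap

def mmap_demo() -> bytes:
    buf = mmap.mmap(-1, 4096)   # anonymous mapping: one page from the kernel
    buf[:5] = b"hello"          # use the mapping like a mutable byte buffer
    data = bytes(buf[:5])
    buf.close()                 # unmap the region (munmap)
    return data

print(mmap_demo())  # → b'hello'
```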
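The kill/signal pair can be demonstrated by a process sending a signal to itself (a sketch assuming a Unix-like system, since SIGUSR1 is not available everywhere):

```python
import os, signal

caught = []

def handler(signum, frame):
    caught.append(signum)                  # runs when the signal is delivered

signal.signal(signal.SIGUSR1, handler)     # install the handler ("signal")
os.kill(os.getpid(), signal.SIGUSR1)       # send the signal ("kill")
print(caught == [signal.SIGUSR1])  # → True
```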
Imp questions
Process management
Files management
Command Interpreter
System calls
Signals
Network management
Security management
I/O device management
Secondary storage management
Main memory management
Process Management :
A process in memory consists of the following components:
Executable program
Program’s data
Stack and stack pointer
Program counter and other CPU registers
Details of opened files
Files Management :
Files are used for long-term storage. Files are used for both input and output.
Every operating system provides a file management service. This file
management service can also be treated as an abstraction as it hides the
information about the disks from the user. The operating system also provides
a system call for file management. The system call for file management
includes –
File creation
File deletion
Read and Write operations
Command Interpreter :
There are several ways for users to interface with the operating system. One of
the approaches to user interaction with the operating system is through
commands. Command interpreter provides a command-line interface. It
allows the user to enter a command on the command line prompt (cmd).
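The core step of any command interpreter can be sketched as: read a line, split it into a program name and arguments, run the program, and report its exit status. This toy version uses Python's subprocess module in place of raw fork/exec, and assumes the `echo` program exists:

```python
import shlex, subprocess

def interpret(line: str) -> int:
    """One step of a command interpreter: parse the command line
    (respecting quotes), run the program, and report its exit status."""
    argv = shlex.split(line)               # "echo hello" -> ["echo", "hello"]
    return subprocess.run(argv).returncode

print(interpret("echo hello"))  # runs echo, then prints its exit status (0)
```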
System Calls :
Network Management :
Security Management:
The security mechanisms in an operating system ensure that authorized
programs can access resources while unauthorized programs cannot access
restricted resources. Security management means that any attempt by a user
to access or modify files, memory, the CPU, or other hardware resources
must be authorized by the operating system.
I/O Device Management :
The I/O device management component is an I/O manager that hides the
details of hardware devices and manages the main memory for devices using
cache and spooling. This component provides a buffer cache and general
device driver code that allows the system to manage the main memory and the
hardware devices connected to it. It also provides and manages custom drivers
for particular hardware devices.
The purpose of the I/O system is to hide the details of hardware devices from
the application programmer. An I/O device management component allows
highly efficient resource utilization while minimizing errors and making
programming easy on the entire range of devices available in their systems.
Secondary Storage Management :
Broadly, secondary storage is any space where data is stored permanently and
from which the user can retrieve it easily. Your computer’s hard drive is
the primary location for your files and programs. Other spaces, such as CD-
ROM/DVD drives, flash memory cards, and networked devices, also provide
secondary storage for data on the computer. The computer’s main memory
(RAM) is a volatile storage device in which all programs reside, it provides
only temporary storage space for performing tasks. Secondary storage refers to
the media devices other than RAM (e.g. CDs, DVDs, or hard disks) that
provide additional space for permanent storing of data and software programs
which is also called non-volatile storage.
Main memory management :
Task Scheduler -:
Performance Monitor -:
Doubts Column :
3. Process Management
Process -:
Process States -:
A process passes through several stages from its beginning to its end, and
there must be a minimum of five states. Although a process is always in
exactly one of these states during execution, the names of the states are not
standardized across operating systems. Each process goes through several of
these stages throughout its life cycle.
Process States in Operating System
New (Create): In this step, the process is about to be created but not yet
created. It is the program that is present in secondary memory that will be
picked up by OS to create the process.
Ready: New -> Ready to run. After the creation of a process, the process
enters the ready state i.e. the process is loaded into the main memory. The
process here is ready to run and is waiting to get the CPU time for its
execution. Processes that are ready for execution by the CPU are maintained
in a queue called ready queue for ready processes.
Run: The process is chosen from the ready queue by the CPU for execution
and the instructions within the process are executed by any one of the
available CPU cores.
Blocked or Wait: Whenever the process requests access to I/O, needs input
from the user, or needs access to a critical region (whose lock is already
acquired), it enters the blocked or wait state. The process continues to
wait in main memory and does not require the CPU. Once the I/O operation
is completed, the process goes to the ready state.
Terminated or Completed: The process is killed and its PCB is deleted. The
resources allocated to the process are released or deallocated.
Suspend Ready: A process that was initially in the ready state but was
swapped out of main memory (refer to the Virtual Memory topic) and placed
onto external storage by the scheduler is said to be in the suspend ready
state. The process transitions back to the ready state whenever it is
brought into main memory again.
Suspend wait or suspend blocked: Similar to suspend ready, but applies to a
process that was performing an I/O operation when a shortage of main memory
caused it to be moved to secondary memory. When the work is finished, it may
go to the suspend ready state.
The Process Control Block (PCB), also known as the Task Control Block
(TCB), is a data structure used by an operating system to manage and track
information about a specific process. It contains essential details and control
information that the operating system needs to manage the process effectively.
Each process in the system has its own PCB, and the operating system uses the
PCB to perform process management and scheduling tasks.
The Process Control Block is a vital data structure used by the operating system
to manage and control processes efficiently. It allows the operating system to
maintain and retrieve the necessary information for process scheduling, context
switching, resource allocation, and interprocess communication. By maintaining
a PCB for each process, the operating system can effectively manage the
execution and control of multiple processes concurrently.
Here are the key components and information typically found in a Process
Control Block:
2. Process State: Indicates the current state of the process, such as running,
ready, blocked, or terminated. The state is updated as the process moves through
different phases of execution and interacts with the operating system.
3. Program Counter (PC): The Program Counter holds the address of the next
instruction to be executed by the process. It allows the operating system to keep
track of the execution progress of the process.
4. CPU Registers: The PCB contains the values of various CPU registers
associated with the process, such as the accumulator, stack pointer, and index
registers. These registers hold the intermediate results, program variables, and
execution context of the process.
8. Accounting Information: The PCB may include accounting data, such as the
amount of CPU time used by the process, the number of times it has been
executed, or other statistics related to resource utilization. This information
assists in performance analysis, billing, and system monitoring.
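The components above can be sketched as a data structure. This is a simplified, illustrative model only; a real kernel's PCB (for example, Linux's task_struct) holds far more fields:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A simplified sketch of a Process Control Block (illustrative only)."""
    pid: int                                        # process identifier
    state: str = "new"                              # new/ready/running/blocked/terminated
    program_counter: int = 0                        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU register values
    open_files: list = field(default_factory=list)  # descriptors held by the process
    cpu_time_used: float = 0.0                      # accounting information

pcb = PCB(pid=42)
pcb.state = "ready"        # updated as the process moves between states
print(pcb.pid, pcb.state)  # → 42 ready
```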
1. Fairness: The scheduler ensures that each process gets a fair share of CPU
time, preventing any particular process from monopolizing system resources.
Fairness promotes an equitable distribution of CPU resources among processes.
Scheduling Queue
Ready Queue:
1. Purpose: The primary purpose of the ready queue is to hold processes that
are waiting to be scheduled for execution on the CPU. These processes have
met the necessary criteria to run, such as having their required resources
available or completing any required I/O operations.
2. Organization: The ready queue is typically implemented as a queue data
structure, where processes are added to the back of the queue and removed from
the front. This follows the First-Come, First-Served (FCFS) principle, where the
process that arrives first is scheduled first.
4. Process State: Processes in the ready queue are in a ready state, indicating
that they are prepared to execute but are waiting for CPU allocation. Once a
process is selected from the ready queue, it transitions to the running state and
starts executing on the CPU.
Device Queue:
The device queue, also known as the I/O queue or waiting queue, is a data
structure used by the operating system to manage processes that are waiting for
access to I/O devices. It holds processes that are waiting for I/O operations to
complete before they can proceed with their execution.
1. Purpose: The device queue is used to hold processes that are waiting for I/O
operations to be performed on a specific device, such as reading from or writing
to a disk, accessing a printer, or interacting with other peripherals. These
processes are unable to proceed until the requested I/O operation is completed.
2. Organization: Similar to the ready queue, the device queue is typically
implemented as a queue data structure. Processes are added to the end of the
queue when they are waiting for an I/O operation and are removed from the
front when the operation is completed.
4. I/O Scheduling: The device queue, in conjunction with the I/O scheduler,
manages the order in which processes access I/O devices. The scheduler
determines the sequence in which processes are granted access to the device,
aiming to optimize device utilization and minimize waiting times.
Schedulers
Schedulers in an operating system are responsible for making decisions about
process execution, resource allocation, and process management. They
determine which processes should run, in what order, and for how long.
Schedulers play a crucial role in achieving efficient utilization of system
resources, responsiveness, fairness, and meeting performance objectives. Here
are the main types of schedulers found in operating systems:
Context Switch
Context switches are essential for multitasking and providing a responsive and
concurrent environment in an operating system. They allow the CPU to
efficiently allocate its processing power to multiple processes or threads,
enabling concurrent execution, time-sharing, and the illusion of parallelism in a
system.
Inter-process Communication
3. Pipes and FIFOs : Pipes and FIFOs (First-In-First-Out) are forms of inter-
process communication that are typically used for communication between
related processes. A pipe is a unidirectional communication channel that allows
data to flow in one direction. FIFOs, also known as named pipes, are similar to
pipes but can be accessed by unrelated processes. Pipes and FIFOs provide a
simple and straightforward method of IPC, with one process writing data into
the pipe and another process reading it.
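The writer/reader pattern above can be sketched with a pipe shared between a parent and a forked child (assuming a Unix-like system):

```python
import os

def pipe_demo(msg: bytes) -> bytes:
    r, w = os.pipe()              # unidirectional channel: data flows w -> r
    pid = os.fork()
    if pid == 0:                  # child: the writing end
        os.close(r)
        os.write(w, msg)
        os.close(w)
        os._exit(0)
    os.close(w)                   # parent: the reading end
    data = os.read(r, len(msg))
    os.close(r)
    os.waitpid(pid, 0)            # reap the child
    return data

print(pipe_demo(b"hi via pipe"))  # → b'hi via pipe'
```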
Message Passing :
Shared Memory :
Shared memory provides several benefits. It is generally faster than other IPC
methods since it avoids the need for data copying between processes. As
processes can directly access the shared memory, it is suitable for scenarios that
require high-performance data sharing, such as inter-process coordination or
inter-thread communication within a single machine.
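The no-copy property can be sketched with an anonymous shared mapping that a forked child inherits (assuming a Unix-like system, where Python's mmap creates MAP_SHARED mappings by default): the child writes directly into the shared page and the parent sees the update without any data being copied between the processes.

```python
import mmap, os, struct

def shared_counter_demo() -> int:
    shm = mmap.mmap(-1, 8)                 # anonymous mapping, shared on fork
    struct.pack_into("q", shm, 0, 0)       # 64-bit counter starts at 0
    pid = os.fork()                        # child inherits the same mapping
    if pid == 0:
        (val,) = struct.unpack_from("q", shm, 0)
        struct.pack_into("q", shm, 0, val + 1)    # write in place, no copying
        os._exit(0)
    os.waitpid(pid, 0)                     # after the child exits...
    (val,) = struct.unpack_from("q", shm, 0)      # ...the parent sees its write
    shm.close()
    return val

print(shared_counter_demo())  # → 1
```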
Threads
Thread Lifecycle -:
The life cycle of a thread describes the different stages that a thread goes
through during its existence. The thread life cycle typically includes the
following states:
1. New : In the new state, a thread is created but has not yet started executing.
The necessary resources for the thread, such as its stack space and program
counter, have been allocated, but the thread has not been scheduled by the
operating system to run.
2. Runnable : Once the thread is ready to execute, it enters the runnable state.
In this state, the thread is eligible to run and can be scheduled by the operating
system's thread scheduler. However, it does not guarantee immediate execution
as it competes with other runnable threads for CPU time.
3. Running : When a thread is selected by the scheduler to execute on a CPU
core, it enters the running state. In this state, the actual execution of the thread's
instructions takes place. The thread remains in the running state until it
voluntarily yields the CPU or its time slice expires.
4. Blocked : Threads can transition to the blocked (or waiting) state if they
need to wait for a particular event or condition to occur. For example, a thread
might block if it requests I/O operations or synchronization primitives like locks
or semaphores. While blocked, the thread is not eligible for CPU time and
remains in this state until the event or condition it is waiting for is satisfied.
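The state transitions above map directly onto Python's threading API: a Thread object is "new" after construction, becomes runnable on start(), runs its target function, and the caller blocks in join() until the thread terminates.

```python
import threading

results = []

def worker():
    results.append("running")             # executes while in the running state

t = threading.Thread(target=worker)       # new: created, not yet scheduled
t.start()                                 # runnable: eligible for CPU time
t.join()                                  # caller blocks until t terminates
print(results, t.is_alive())  # → ['running'] False
```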
Multi-threading Model
The choice of the multithreading model depends on various factors, such as the
application requirements, performance goals, and the level of control and
concurrency desired. Each model has its advantages and trade-offs in terms of
concurrency, overhead, resource usage, and ease of programming.
ps command -:
`ps -u username`: This option allows you to filter and display processes
owned by a specific user. Replace "username" with the actual username.
`ps -p PID`: Use the `-p` option followed by a process ID (PID) to
retrieve information about a specific process.
wait command -:
kill command -:
1. Define Process.
2. List out any two difference between process and program.
3. Explain PCB.
4. With the help of a neat labelled diagram explain process states.
5. Explain types of schedulers with the help of a diagram.
6. What do you mean by scheduling?
7. What is a thread? Explain the types of threads.
8. Explain multi-threading model.
9. Explain the thread life cycle with the help of a diagram.
10. Explain context switching.
11. Explain Inter-process communication. List out all the techniques used in IPC
and explain any one of them.
12. Write a short note on : ps command , sleep , wait , kill
Chapter 3 Checklist
Doubts Column :