
A computer system consists of one or more CPUs and a number of device controllers connected through a common bus that provides access to shared memory (Figure 1.2).

Each device controller is in charge of a specific type of device. The CPU and the device controllers can execute
concurrently, competing for memory cycles. To ensure orderly access to the shared memory, a memory
controller is provided whose function is to synchronize access to the memory.
Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the system bus.
Software may trigger an interrupt by executing a special operation called a system call (also called a monitor call).

When the CPU is interrupted, it stops what it is doing and immediately transfers execution to a fixed location.
The fixed location usually contains the starting address where the service routine for the interrupt is located.
The interrupt service routine executes; on completion, the CPU resumes the interrupted computation.

A time line of this operation is shown in Figure 1.3.


Interrupts are an important part of a computer architecture. Each computer design has its own interrupt
mechanism, but several functions are common. The interrupt must transfer control to the appropriate interrupt
service routine.

The straightforward method for handling this transfer would be to invoke a generic routine to examine the
interrupt information; the routine, in turn, would call the interrupt-specific handler. Because interrupts must be
handled quickly, however, a table of pointers to interrupt routines can be used instead.
The interrupt routine is then called indirectly through the table, with no intermediate routine needed. Generally, the
table of pointers is stored in low memory (the first 100 or so locations). These locations hold the addresses of
the interrupt service routines for the various devices. This array, or interrupt vector, of addresses is then indexed
by a unique device number, given with the interrupt request, to provide the address of the interrupt service
routine for the interrupting device.
The interrupt architecture must also save the address of the interrupted instruction. After the interrupt is
serviced, the saved return address is loaded into the program counter, and the interrupted computation
resumes as though the interrupt had not occurred.
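
The interrupt vector just described can be pictured as an array of pointers to service routines, indexed by device number. Below is a minimal user-space C sketch of that dispatch idea (handler names and the vector size are made up for illustration; a real vector is set up by the hardware and the kernel, not by an application):

#include <stdio.h>

#define NUM_VECTORS 16                 /* hypothetical number of interrupt lines */

typedef void (*isr_t)(void);           /* an interrupt service routine */

/* Hypothetical device-specific handlers. */
static void keyboard_isr(void) { puts("keyboard interrupt serviced"); }
static void disk_isr(void)     { puts("disk interrupt serviced"); }
static void default_isr(void)  { puts("unexpected interrupt"); }

/* The "interrupt vector": a table of pointers to service routines,
   indexed by the device number supplied with the interrupt request. */
static isr_t interrupt_vector[NUM_VECTORS] = {
    [0] = keyboard_isr,
    [1] = disk_isr,
};

static void dispatch_interrupt(unsigned device)
{
    isr_t handler = (device < NUM_VECTORS && interrupt_vector[device] != NULL)
                        ? interrupt_vector[device]
                        : default_isr;
    handler();                         /* called indirectly through the table */
}

int main(void)
{
    dispatch_interrupt(1);             /* simulate a disk interrupt */
    dispatch_interrupt(7);             /* device with no registered handler */
    return 0;
}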

· Storage Structure

Computer programs must be in main memory (also called random-access memory or RAM) to be executed.
Main memory is the only large storage area (millions to billions of bytes) that the processor can access
directly.
Dynamic random-access memory (DRAM) is implemented in a semiconductor technology that forms an array of memory words.
A typical instruction-execution cycle, as executed on a system with a von Neumann architecture, first fetches
an instruction from memory and stores that instruction in the instruction register.

Ideally, we would want programs and data to reside in main memory permanently, but this is not possible for two reasons:

1. Main memory is usually too small to store all needed programs and data permanently.
2. Main memory is a volatile storage device that loses its contents when power is turned off or otherwise lost.

Most computer systems provide secondary storage as an extension of main memory. The main requirement for secondary storage is that it be able to hold large quantities of data permanently.
Magnetic disk is the most common secondary-storage device which provides storage for both programs and
data. Most programs (web browsers, compilers, word processors, spreadsheets, and so on) are stored on a
disk until they are loaded into memory.

The wide variety of storage systems in a computer system can be organized in a hierarchy (Figure 1.4)
according to speed and cost. The higher levels are expensive, but they are fast.

· I/O Structure

Storage is only one of many types of I/O devices within a computer. A large portion of operating system code is
dedicated to managing I/O, both because of its importance to the reliability and performance of a system and
because of the varying nature of the devices.
A general-purpose computer system consists of CPUs and multiple device controllers that are connected
through a common bus. Each device controller is in charge of a specific type of device. Depending on the
controller, there may be more than one attached device.
A device controller maintains some local buffer storage and a set of special-purpose registers. The device
controller is responsible for moving the data between the peripheral devices that it controls and its local
buffer storage.

Typically, operating systems have a device driver for each device controller. This device driver understands the
device controller and presents a uniform interface to the device to the rest of the operating system. On these
systems, multiple components can talk to other components concurrently, rather than competing for cycles on
a shared bus. Figure 1.5 shows the interplay of all components of a computer system.

1.2 Computer System Architecture

A computer system may be organized in a number of different ways, which we can categorize roughly
according to the number of general-purpose processors used.

1.2.1 Single-Processor Systems


Most systems use a single processor. On a single-processor system, there is one main CPU capable of
executing a general-purpose instruction set, including instructions from user processes. Almost all systems
have other special-purpose processors as well. They may come in the form of device-specific processors,
such as disk, keyboard, and graphics controllers; or, on mainframes, they may come in the form of more
general-purpose processors, such as I/O processors that move data rapidly among the components of the
system.

All of these special-purpose processors run a limited instruction set and do not run user processes.
Sometimes they are managed by the operating system, in that the operating system sends them information
about their next task and monitors their status.

1.2.2 Multiprocessor Systems

Multiprocessor systems (also known as parallel systems or tightly coupled systems) are growing in
importance. Such systems have two or more processors in close communication, sharing the computer bus
and sometimes the clock, memory, and peripheral devices.

Multiprocessor systems have three main advantages:

1. Increased throughput. By increasing the number of processors, we expect to get more work done in less
time. The speed-up ratio with N processors is not N, however; rather, it is less than N. When multiple
processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working
correctly. This overhead, plus contention for shared
resources, lowers the expected gain from additional processors. Similarly, N programmers working closely
together do not produce N times the amount of work a single programmer would produce.

2. Economy of scale. Multiprocessor systems can cost less than equivalent multiple single-processor
systems, because they can share peripherals, mass storage, and power supplies. If several programs operate
on the same set of data, it is cheaper to store those data on one disk and to have all the processors share
them than to have many computers with local
disks and many copies of the data.

3. Increased reliability. If functions can be distributed properly among several processors, then the failure of
one processor will not halt the system, only slow it down. If we have ten processors and one fails, then each
of the remaining nine processors can pick up a share of the work of the failed processor. Thus, the entire
system runs only 10 percent slower, rather than failing altogether.

Graceful degradation is the ability to continue providing service proportional to the level of surviving
hardware.
Some systems go beyond graceful degradation and are called fault tolerant.

The multiple-processor systems in use today are of two types.

· Asymmetric multiprocessing, in which each processor is assigned a specific task.


· Symmetric multiprocessing (SMP), the most common form, in which each processor performs all tasks within the operating system.
1.2.3 Clustered Systems
A clustered system is another type of multiple-CPU system. Clustered systems differ from multiprocessor
systems, however, in that they are composed of two or more individual systems coupled together. The
definition of the term clustered is not concrete; many commercial packages wrestle with what a clustered
system is and why one form is better than another. The generally accepted definition is that clustered
computers share storage and are closely linked via a local-area network (LAN) or a faster interconnect such
as InfiniBand.

Clustering is usually used to provide high-availability service; that is, service will continue even if one or more
systems in the cluster fail. Clustering can be structured asymmetrically or symmetrically. Cluster technology is
changing rapidly. Some cluster products support dozens of systems in a cluster, as well as clustered nodes that
are separated by miles. Many of these improvements are made possible by storage-area networks (SANs),
as described in Section 12.3.3, which allow many systems to attach to a pool of storage.

1.3 Operating-System Operations

Modern operating systems are interrupt driven.


Trap (or an exception) is a software-generated interrupt caused either by an error (for example, division by
zero or invalid memory access) or by a specific request from a user program that an operating-system service
be performed. The interrupt-driven nature of an operating system defines that system's general structure. For
each type of interrupt, separate segments of code in the operating system determine what action should be
taken. An interrupt service routine is provided that is responsible for dealing with the interrupt.
Since the operating system and the users share the hardware and software resources of the computer
system, we need to make sure that an error in a user program can cause problems only for the one program that is running.

1.3.1 Dual-Mode Operation

Two separate modes of operation: user mode and kernel mode (also called supervisor mode, system mode,
or privileged mode). A bit, called the mode bit, is added to the hardware of the computer to indicate the
current mode: kernel (0) or user (1).
The dual mode of operation provides us with the means for protecting the operating system from errant users
—and errant users from one another.
We accomplish this protection by designating some of the machine instructions that may cause harm as
privileged instructions. The hardware allows privileged instructions to be executed only in kernel mode. If an
attempt is made to execute a privileged instruction in user mode, the hardware does not execute the
instruction but rather treats it as illegal and traps it to the operating system.
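
As a small demonstration (a sketch assuming Linux on an x86-64 machine), a user-mode program that attempts the privileged hlt instruction does not halt the machine; the hardware traps to the operating system, which delivers a fatal signal to the offending process:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void trapped(int sig)
{
    (void)sig;
    static const char msg[] = "privileged instruction trapped by the OS\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);   /* async-signal-safe output */
    _exit(0);         /* returning would re-execute the faulting instruction */
}

int main(void)
{
    signal(SIGSEGV, trapped);   /* Linux reports the protection fault as SIGSEGV */
    signal(SIGILL, trapped);    /* tolerate platforms that report SIGILL instead */

    asm volatile("hlt");        /* privileged instruction; illegal in user mode */

    puts("this line is never reached");
    return 0;
}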

1.3.2 Timer

A timer can be set to interrupt the computer after a specified period. The period may be fixed (for example,
1/60 second) or variable (for example, from 1 millisecond to 1 second).
Variable timer is generally implemented by a fixed-rate clock and a counter.
The operating system sets the counter. Every time the clock ticks, the counter is decremented. We can use
the timer to prevent a user program from running too long. A simple technique is to initialize a counter with
the amount of time that a program is allowed to run.
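
On a POSIX system the same idea can be sketched from user space with the interval-timer facility (assumptions: a Linux/Unix system where SIGALRM is available). The program below arms a timer and is terminated when it has run longer than it is allowed to:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

/* Invoked when the timer "interrupt" (SIGALRM) fires. */
static void time_is_up(int sig)
{
    (void)sig;
    printf("\ntime limit exceeded - terminating the program\n");
    exit(1);
}

int main(void)
{
    struct itimerval limit = {0};

    signal(SIGALRM, time_is_up);

    limit.it_value.tv_sec = 2;            /* allow the program 2 seconds */
    setitimer(ITIMER_REAL, &limit, NULL); /* the operating system arms the timer */

    volatile unsigned long n = 0;
    for (;;)                              /* a program that runs "too long" */
        n++;

    return 0;
}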

1.4 Resource Management

Resource management in an operating system is the process of efficiently managing resources such as the CPU, memory, input/output devices, and other hardware among the various programs and processes running on the computer.

Resource management is important because a computer's resources are limited, and multiple processes or users may require access to the same resources (CPU, memory, etc.) at the same time. The operating system has to ensure that all processes get the resources they need to execute, without problems such as deadlocks.

Here are some Terminologies related to the resource management in OS:

Resource Allocation: The process of assigning the available resources to processes in the operating system. This can be done dynamically or statically.
Resource: Resource can be anything that can be assigned dynamically or statically in the operating system.
Example may include CPU time, memory, disk space, and network bandwidth etc.
Resource Management: It refers to how to manage resources efficiently between different processes.
Process: Process refers to any program or application that is being executed in the operating system and has
its own memory space, execution state, and set of system resources.
Scheduling: The process of determining which of several competing processes should be allocated a particular resource at a given time.
Deadlock: A situation in which two or more processes are each waiting for a resource held by another of the waiting processes, so that no resource is ever freed and no process can proceed.
Semaphore: A tool used to prevent race conditions. A semaphore is an integer variable used in a mutually exclusive manner by concurrent cooperating processes to achieve synchronization (see the sketch after this list).
Mutual Exclusion: The technique of preventing multiple processes from accessing the same resource simultaneously.
Memory Management: Memory management is a method used in the operating systems to manage
operations between main memory and disk during process execution.
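
The semaphore idea mentioned above can be illustrated with POSIX unnamed semaphores. This is a minimal sketch (assumes a POSIX system; compile with -pthread): two threads increment a shared counter, and the semaphore enforces mutual exclusion on the critical section.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;        /* binary semaphore used for mutual exclusion */
static long counter = 0;   /* shared resource */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);  /* enter critical section */
        counter++;
        sem_post(&mutex);  /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    sem_init(&mutex, 0, 1);              /* initial value 1, so it acts as a lock */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&mutex);

    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}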

Features or characteristics of the Resource management of operating system:

Resource scheduling: The OS allocates the available resources to processes and decides which process gets access to the CPU, memory, and other resources at any given time.
Resource Monitoring: The operating system monitors which resources are used by which process and takes action if a process holds too many resources at once, which can lead to deadlock.
Resource Protection: The OS protects the system from unauthorized or fake access by the user or any other
process.
Resource Sharing: The operating system permits many processes to share resources such as memory and I/O devices. It ensures that common resources are used in a fair and productive way.
Deadlock prevention: The OS prevents deadlock and ensures that no process holds resources indefinitely, using techniques such as resource preemption.
Resource accounting: The operating system always tracks the use of resources by different processes for
allocation and statistical purposes.
Performance optimization: The OS optimizes the distribution of resources to increase system performance, using techniques such as load balancing and memory management to ensure efficient resource distribution.

Diagrammatic representation of resource management:

1.5 Security and Protection

Protection and security require that computer resources such as the CPU, software, and memory are protected. This extends to the operating system as well as the data in the system. This can be done by ensuring the integrity, confidentiality, and availability of the operating system. The system must be protected against unauthorized access, viruses, worms, etc.

Threats to Protection and Security


A threat is a program that is malicious in nature and leads to harmful effects for the system. Some of the
common threats that occur in a system are −

Virus
Viruses are generally small snippets of code embedded in a system. They are very dangerous and can corrupt
files, destroy data, crash systems etc. They can also spread further by replicating themselves as required.

Trojan Horse
A trojan horse can secretly access the login details of a system. Then a malicious user can use these to enter
the system as a harmless being and wreak havoc.

Trap Door
A trap door is a security breach that may be present in a system without the knowledge of the users. It can be
exploited to harm the data or files in a system by malicious people.

Worm
A worm can destroy a system by using its resources to extreme levels. It can generate multiple copies which
claim all the resources and don't allow any other processes to access them. A worm can shut down a whole
network in this way.

Denial of Service
These attacks prevent legitimate users from accessing a system by flooding it with requests until it can no longer serve other users properly.

Protection and Security Methods


The different methods that may provide protection and security for different computer systems are −

Authentication
This deals with identifying each user in the system and making sure they are who they claim to be. The
operating system makes sure that all the users are authenticated before they access the system. The
different ways to make sure that the users are authentic are:

Username/ Password
Each user has a distinct username and password combination and they need to enter it correctly before they
can access the system.
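
As a rough sketch of how a system can verify a password without storing it in clear text (assumptions: a glibc/Linux system that provides crypt(); link with -lcrypt; the salt string and passwords are made up for illustration):

#define _XOPEN_SOURCE 700
#include <crypt.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simulated enrollment: hash the password once and keep only the hash. */
static char *enroll(const char *password)
{
    char *hash = crypt(password, "$6$examplesalt$");  /* SHA-512 scheme, made-up salt */
    return hash ? strdup(hash) : NULL;                /* crypt() reuses a static buffer */
}

/* Verification: hash the attempt with the stored salt and compare hashes. */
static int check(const char *stored_hash, const char *attempt)
{
    char *h = crypt(attempt, stored_hash);            /* the stored hash carries its salt */
    return h != NULL && strcmp(h, stored_hash) == 0;
}

int main(void)
{
    char *stored = enroll("s3cret");
    if (!stored) return 1;

    printf("wrong password accepted? %s\n", check(stored, "guess")  ? "yes" : "no");
    printf("right password accepted? %s\n", check(stored, "s3cret") ? "yes" : "no");

    free(stored);
    return 0;
}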

User Key/ User Card


The users need to insert a card into the card slot or use their individual key on a keypad to access the system.

User Attribute Identification


Different user attribute identifications that can be used are fingerprints, retina scans, etc. These are unique for
each user and are compared with the existing samples in the database. The user can only access the system
if there is a match.

One Time Password


These passwords provide a lot of security for authentication purposes. A one time password can be
generated exclusively for a login every time a user wants to enter the system. It cannot be used more than
once. The various ways a one time password can be implemented are −

Random Numbers
The system can ask for numbers that correspond to a pre-arranged set of letters. This combination can be changed each time a login is required.

Secret Key
A hardware device can create a secret key related to the user id for login. This key can change each time.
1.6 Distributed Systems

A distributed Operating System refers to a model in which applications run on multiple interconnected
computers, offering enhanced communication and integration capabilities compared to a network operating
system.

In a Distributed Operating System, multiple CPUs are utilized, but for end-users, it appears as a typical
centralized operating system. It enables the sharing of various resources such as CPUs, disks, network
interfaces, nodes, and computers across different sites, thereby expanding the available data within the entire
system.
Communication channels such as high-speed buses or telephone lines connect all the processors, each of which is equipped with its own local memory and communicates with its neighboring processors.

Due to its characteristics, a distributed operating system is classified as a loosely coupled system. It
encompasses multiple computers, nodes, and sites, all interconnected through LAN/WAN lines. The ability of a
Distributed OS to share processing resources and I/O files while providing users with a virtual machine
abstraction is an important feature.

The diagram below illustrates the structure of a distributed operating system:

Types of Distributed Operating System


There are many types of Distributed Operating System, some of them are as follows:

1. Client-Server Systems
In a client-server system within a distributed operating system, clients request services or resources from
servers over a network. Clients initiate communication, send requests, and handle user interfaces, while
servers listen for requests, perform tasks, and manage resources.
This model allows for scalable resource utilization, efficient sharing, modular development, centralized
control, and fault tolerance.
It facilitates collaboration between distributed entities, promoting the development of reliable, scalable, and
interoperable distributed systems.
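
As a concrete illustration of the request/response pattern described above (a minimal sketch, not the API of any particular distributed OS; it assumes POSIX sockets and a hypothetical server already listening on 127.0.0.1 port 9000):

#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Client side of a client-server exchange: connect, send a request,
       read the server's reply. The address, port and protocol are made up. */
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port   = htons(9000);
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");
        return 1;
    }

    const char request[] = "GET /status\n";      /* hypothetical application protocol */
    write(sock, request, sizeof(request) - 1);   /* the client initiates communication */

    char reply[256];
    ssize_t n = read(sock, reply, sizeof(reply) - 1);
    if (n > 0) {
        reply[n] = '\0';
        printf("server replied: %s\n", reply);
    }

    close(sock);
    return 0;
}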

2. Peer-to-Peer(P2P) Systems
In peer-to-peer (P2P) systems, interconnected nodes directly communicate and collaborate without
centralized control. Each node can act as both a client and a server, sharing resources and services with other
nodes. P2P systems enable decentralized resource sharing, self-organization, and fault tolerance.

They support efficient collaboration, scalability, and resilience to failures without relying on central servers. This
model facilitates distributed data sharing, content distribution, and computing tasks, making it suitable for
applications like file sharing, content delivery, and blockchain networks.

3. Middleware
Middleware acts as a bridge between different software applications or components, enabling
communication and interaction across distributed systems. It abstracts complexities of network
communication, providing services like message passing, remote procedure calls (RPC), and object
management.

Middleware facilitates interoperability, scalability, and fault tolerance by decoupling application logic from
underlying infrastructure.
It supports diverse communication protocols and data formats, enabling seamless integration between
heterogeneous systems.
Middleware simplifies distributed system development, promotes modularity, and enhances system flexibility,
enabling efficient resource utilization and improved system reliability.

4. Three-Tier
In a distributed operating system, the three-tier architecture divides tasks into presentation, logic, and data
layers. The presentation tier, comprising client machines or devices, handles user interaction. The logic tier,
distributed across multiple nodes or servers, executes processing logic and coordinates system functions.

The data tier manages storage and retrieval operations, often employing distributed databases or file systems
across multiple nodes.
This modular approach enables scalability, fault tolerance, and efficient resource utilization, making it ideal for
distributed computing environments.

5. N-Tier
In an N-tier architecture, applications are structured into multiple tiers or layers beyond the traditional three-tier
model. Each tier performs specific functions, such as presentation, logic, data processing, and storage, with
the flexibility to add more tiers as needed. In a distributed operating system, this architecture enables complex
applications to be divided into modular components distributed across multiple nodes or servers.

Each tier can scale independently, promoting efficient resource utilization, fault tolerance, and maintainability.
N-tier architectures facilitate distributed computing by allowing components to run on separate nodes or
servers, improving performance and scalability.
This approach is commonly used in large-scale enterprise systems, web applications, and distributed
systems requiring high availability and scalability.

Applications of Distributed Operating System


Distributed operating systems find applications across various domains where distributed computing is
essential. Here are some notable applications:

Cloud Computing Platforms: Distributed operating systems form the backbone of cloud computing
platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These
platforms provide scalable, on-demand computing resources distributed across multiple data centers,
enabling organizations to deploy and manage applications, storage, and services in a distributed
manner.
Internet of Things (IoT): Distributed operating systems play a crucial role in IoT networks, where
numerous interconnected devices collect and exchange data. These operating systems manage
communication, coordination, and data processing tasks across distributed IoT devices, enabling
applications such as smart home automation, industrial monitoring, and environmental sensing.
Distributed Databases: Distributed operating systems are used in distributed database management
systems (DDBMS) to manage and coordinate data storage and processing across multiple nodes or
servers. These systems ensure data consistency, availability, and fault tolerance in distributed
environments, supporting applications such as online transaction processing (OLTP), data warehousing,
and real-time analytics.
Content Delivery Networks (CDNs): CDNs rely on distributed operating systems to deliver web content,
media, and applications to users worldwide. These operating systems manage distributed caching,
content replication, and request routing across a network of edge servers, reducing latency and
improving performance for users accessing web content from diverse geographic locations.
Peer-to-Peer (P2P) Networks: Distributed operating systems are used in peer-to-peer networks to enable
decentralized communication, resource sharing, and collaboration among distributed nodes. These
systems facilitate file sharing, content distribution, and decentralized applications (DApps) by
coordinating interactions between peers without relying on centralized servers.
High-Performance Computing (HPC): Distributed operating systems are employed in HPC clusters and
supercomputers to coordinate parallel processing tasks across multiple nodes or compute units. These
systems support scientific simulations, computational modeling, and data-intensive computations by
distributing workloads and managing communication between nodes efficiently.
Distributed File Systems: Distributed operating systems power distributed file systems like Hadoop
Distributed File System (HDFS), Google File System (GFS), and CephFS. These file systems
enable distributed storage and retrieval of large-scale data sets across clusters of machines,
supporting applications such as big data analytics, data processing, and content storage.

Examples of Distributed Operating System


Below are some Examples of Distributed Operating System.

Solaris: Intended for SUN multiprocessor workstations.
OSF/1: Designed by the Open Software Foundation; it is Unix-compatible.
Micros: The MICROS operating system assigns work to all nodes in the system and also guarantees a balanced data load.
DYNIX: Created for the Symmetry multiprocessor computers.
Locus: Allows local and remote files to be accessed simultaneously, without any location restrictions.
Mach: Supports multitasking and multithreading.

Security in Distributed Operating system


Protection and security are crucial aspects of a Distributed Operating System, especially in organizational
settings. Measures are employed to safeguard the system from potential damage or loss caused by external
sources. Various security measures can be implemented, including authentication methods such as
username/password and user key. One Time Password (OTP) is also commonly utilized in distributed OS
security applications.

Advantages of Distributed Operating System

Below are some Advantages of Distributed Operating System.

It can increase data availability throughout the system by sharing all resources (CPU, disk, network
interface, nodes, computers, and so on) between sites.
Because data can be replicated across sites, the risk of data loss is reduced: if one site fails, users can access the data from another operating site.
It speeds up data transfer from one site to another.
Since it may be accessible from both local and remote sites, it is an open system.
It facilitates a reduction in the time needed to process data.
The majority of distributed systems are composed of multiple nodes that work together to provide fault
tolerance. Even if one machine malfunctions, the system still functions.

Disadvantages of Distributed Operating System

Below are some Disadvantages of Distributed Operating System.

The system must determine which tasks need to be completed, when they need to be completed, and where they need to be completed. The limitations of the scheduler can result in unpredictable runtimes and unused hardware.
Since the nodes and connections in DOS need to be secured, it is challenging to establish sufficient
security.
A database connected to a distributed OS is more complex and harder to maintain than a single-user system.
Compared to other systems, the underlying software is incredibly sophisticated and poorly understood.
Compiling, analyzing, displaying, and keeping track of hardware utilization metrics for large clusters may be
quite challenging.

1.7 Kernel Data Structures

The kernel is the central component of an operating system: it manages the operations of the computer and its hardware, most importantly memory and CPU time. The kernel acts as a bridge between applications and the data processing performed at the hardware level, using inter-process communication and system calls.

The kernel is loaded into memory first when the operating system starts and remains in memory until the operating system is shut down. It is responsible for tasks such as disk management, task management, and memory management.

The kernel has a process table that keeps track of all active processes.
• The process table contains a per-process region table whose entries point to entries in the system-wide region table.
The kernel loads an executable file into memory during the 'exec' system call.
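
A much-simplified C sketch of the data structures just mentioned is shown below (all field names, sizes, and state values are illustrative, not those of any real kernel):

#include <stddef.h>

/* One entry in the system-wide region table, describing a text, data
   or stack region that may be shared by several processes. */
struct region {
    unsigned long phys_base;    /* where the region sits in memory */
    size_t        size;
    int           ref_count;    /* how many processes share this region */
};

/* One entry in a per-process region table: points into the region table
   and records where the region is mapped in this process. */
struct pregion {
    struct region *region;
    unsigned long  virt_base;
};

/* One entry in the kernel's process table. */
struct proc {
    int            pid;
    int            state;       /* e.g. running, ready, sleeping */
    struct pregion regions[8];  /* per-process region table */
    struct proc   *parent;
};

/* The process table itself: one slot per active process. */
static struct proc process_table[256];

int main(void)
{
    process_table[0].pid   = 1;   /* e.g. the first process occupies slot 0 */
    process_table[0].state = 1;
    return 0;
}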

Objectives of Kernel :

To establish communication between user level application and hardware.


To decide state of incoming processes.
To control disk management.
To control memory management.
To control task management.
Types of Kernel :

1. Monolithic Kernel –

It is a type of kernel in which all operating system services operate in kernel space. There are dependencies between system components, and the code base is huge and complex.

Example:

Unix, Linux, Open VMS, XTS-400 etc.

Advantage:
1. Efficiency: Monolithic kernels are generally faster than other types of kernels because operating-system services do not have to pass messages between separate address spaces for every request, which would add overhead.

2. Tight integration: Since all the operating system services are running in kernel space, they can
communicate more efficiently with each other, making it easier to implement complex functionalities and
optimizations.

3. Simplicity: Monolithic kernels are simpler to design, implement, and debug than other types of kernels
because they have a unified structure that makes it easier to manage the code.

4. Lower latency: Monolithic kernels have lower latency than other types of kernels because system calls and
interrupts can be handled directly by the kernel.

Disadvantage:

1. Stability issues: Monolithic kernels can be less stable than other types of kernels because any bug or
security vulnerability in a kernel service can affect the entire system.

2. Security vulnerabilities: Since all the operating system services are running in kernel space, any security
vulnerability in one of the services can compromise the entire system.

3. Maintenance difficulties: Monolithic kernels can be more difficult to maintain than other types of kernels
because any change in one of the services can affect the entire system.

4. Limited modularity: Monolithic kernels are less modular than other types of kernels because all the
operating system services are tightly integrated into the kernel space. This makes it harder to add or remove
functionality without affecting the entire system.

2. Micro Kernel –

It is a type of kernel that takes a minimalist approach: it keeps only core services such as virtual memory and thread scheduling in kernel space, and puts the rest in user space. It is more stable because fewer services run in kernel space.

It is used in small operating systems.

Example :

Mach, L4, AmigaOS, Minix, K42 etc.

Advantages:

1. Reliability: Microkernel architecture is designed to be more reliable than monolithic kernels. Since most of
the operating system services run outside the kernel space, any bug or security vulnerability in a service won’t
affect the entire system.

2. Flexibility: Microkernel architecture is more flexible than monolithic kernels because it allows different
operating system services to be added or removed without affecting the entire system.

3. Modularity: Microkernel architecture is more modular than monolithic kernels because each operating
system service runs independently of the others. This makes it easier to maintain and debug the system.

4. Portability: Microkernel architecture is more portable than monolithic kernels because most of the
operating system services run outside the kernel space. This makes it easier to port the operating system to
different hardware architectures.

Disadvantages:
1. Performance: Microkernel architecture can be slower than monolithic kernels because it requires more
context switches between user space and kernel space.

2. Complexity: Microkernel architecture can be more complex than monolithic kernels because it requires
more communication and synchronization mechanisms between the different operating system services.

3. Development difficulty: Developing operating systems based on microkernel architecture can be more
difficult than developing monolithic kernels because it requires more attention to detail in designing the
communication and synchronization mechanisms between the different services.

4. Higher resource usage: Microkernel architecture can use more system resources, such as memory and
CPU, than monolithic kernels because it requires more communication and synchronization mechanisms
between the different operating system services.

3. Hybrid Kernel –
It is a combination of the monolithic kernel and the microkernel: it has the speed and design of a monolithic kernel and the modularity and stability of a microkernel.

Example :

Windows NT, Netware, BeOS etc.


Advantages:

1. Performance: Hybrid kernels can offer better performance than microkernels because they
reduce the number of context switches required between user space and kernel space.

2. Reliability: Hybrid kernels can offer better reliability than monolithic kernels because they isolate
drivers and other kernel components in separate protection domains.
3. Flexibility: Hybrid kernels can offer better flexibility than monolithic kernels because they allow
different operating system services to be added or removed without affecting the entire system.

4. Compatibility: Hybrid kernels can be more compatible than microkernels because they can support a
wider range of device drivers.

Disadvantages:

1. Complexity: Hybrid kernels can be more complex than monolithic kernels because they include
both monolithic and microkernel components, which can make the design and implementation
more difficult.

2. Security: Hybrid kernels can be less secure than microkernels because they have a larger attack
surface due to the inclusion of monolithic components.

3. Maintenance: Hybrid kernels can be more difficult to maintain than microkernels because they have a
more complex design and implementation.

4. Resource usage: Hybrid kernels can use more system resources than microkernels because they
include both monolithic and microkernel components.

4. Exo Kernel –
It is a type of kernel that follows the end-to-end principle: it provides as few hardware abstractions as possible and allocates physical resources directly to applications.

Example :

Nemesis, ExOS etc.

Advantages:
1. Flexibility: Exokernels offer the highest level of flexibility, allowing developers to customize and
optimize
the operating system for their specific application needs.

2. Performance: Exokernels are designed to provide better performance than traditional kernels
because they eliminate unnecessary abstractions and allow applications to directly access hardware
resources.

3. Security: Exokernels provide better security than traditional kernels because they allow for fine-grained control over the allocation of system resources, such as memory and CPU time.

4. Modularity: Exokernels are highly modular, allowing for the easy addition or removal of operating
system services.

Disadvantages:

1. Complexity: Exokernels can be more complex to develop than traditional kernels because they
require greater attention to detail and careful consideration of system resource allocation.

2. Development Difficulty: Developing applications for exokernels can be more difficult than for
traditional kernels because applications must be written to directly access hardware resources.

3. Limited Support: Exokernels are still an emerging technology and may not have the same level of
support and resources as traditional kernels.

4. Debugging Difficulty: Debugging applications and operating system services on exokernels can
be more difficult than on traditional kernels because of the direct access to hardware resources.

5. Nano Kernel –

It is a type of kernel that offers hardware abstraction but no system services. Since a microkernel also provides few system services, the terms microkernel and nanokernel have become almost analogous.

Example :

EROS etc.

Advantages:

1. Small size: Nanokernels are designed to be extremely small, providing only the most essential
functions needed to run the system. This can make them more efficient and faster than other kernel
types.
2. High modularity: Nanokernels are highly modular, allowing for the easy addition or removal of operating
system services, making them more flexible and customizable than traditional monolithic kernels.

3. Security: Nanokernels provide better security than traditional kernels because they have a smaller attack
surface and a reduced risk of errors or bugs in the code.

4. Portability: Nanokernels are designed to be highly portable, allowing them to run on a wide range
of hardware architectures.

Disadvantages:

1. Limited functionality: Nanokernels provide only the most essential functions, making them unsuitable for
more complex applications that require a broader range of services.

2. Complexity: Because nanokernels provide only essential functionality, they can be more complex to develop
and maintain than other kernel types.

3. Performance: While nanokernels are designed for efficiency, their minimalist approach may not be able to
provide the same level of performance as other kernel types in certain situations.

4. Compatibility: Because of their minimalist design, nanokernels may not be compatible with all hardware
and software configurations, limiting their practical use in certain contexts.

1.8 Operating system services


An operating system is software that acts as an intermediary between the user and the computer hardware. It is a program with whose help we are able to run various applications, and it is the one program that is running all
the time. Every computer must have an operating system to smoothly execute other programs. The OS
coordinates the use of the hardware and application programs for various users. It provides a platform for
other application programs to work. The operating system is a set of special programs that run on a computer
system that allows it to work properly. It controls input-output devices, execution of programs, managing files,
etc.

Services of Operating System

Program execution
Input Output Operations
Communication between Process
File Management
Memory Management
Process Management
Security and Privacy
Resource Management
User Interface
Networking
Error handling
Time Management
Program Execution
It is the Operating System that manages how a program is going to be executed. It loads the program into the
memory after which it is executed. The order in which they are executed depends on the CPU Scheduling
Algorithms; a few are FCFS, SJF, etc. While programs are executing, the Operating System also handles deadlocks, making sure that processes do not block one another indefinitely while competing for resources. The Operating System is responsible for
the smooth execution of both user and system programs. The Operating System utilizes various resources
available for the efficient running of all types of functionalities.
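
As a small illustration of one such policy, the sketch below computes the waiting and turnaround times that FCFS (first-come, first-served) scheduling would give for three made-up CPU bursts; it simulates the policy rather than being kernel code:

#include <stdio.h>

int main(void)
{
    /* Hypothetical CPU burst times (in ms) of processes in arrival order. */
    int burst[] = {24, 3, 3};
    int n = sizeof(burst) / sizeof(burst[0]);

    int waiting = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = waiting + burst[i];          /* completion time of process i */
        printf("P%d: waiting %2d ms, turnaround %2d ms\n", i + 1, waiting, turnaround);
        total_wait += waiting;
        total_turnaround += turnaround;
        waiting += burst[i];                          /* next process starts after this one */
    }

    printf("average waiting time    = %.2f ms\n", (double)total_wait / n);
    printf("average turnaround time = %.2f ms\n", (double)total_turnaround / n);
    return 0;
}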

Input Output Operations


Operating System manages the input-output operations and establishes communication between the user
and device drivers. Device drivers are software associated with the hardware being managed by the OS, ensuring that the devices stay properly synchronized. The OS also provides access to input-output devices to a
program when needed.

Communication between Processes


The Operating system manages the communication between processes. Communication between processes
includes data transfer among them. If the processes are not on the same computer but connected through a
computer network, then also their communication is managed by the Operating System itself.
File Management
The operating system helps in managing files also. If a program needs access to a file, it is the operating
system that grants access. These permissions include read-only, read-write, etc. It also provides a platform for
the user to create, and delete files. The Operating System is responsible for making decisions regarding the
storage of all types of data or files, i.e, floppy disk/hard disk/pen drive, etc. The Operating System decides how
the data should be manipulated and stored.

Memory Management
Let's understand memory management by the OS in a simple way. Imagine a cricket team with a limited number of players. The team manager (OS) decides whether an upcoming player will be in the playing 11, the playing 15, or not in the team at all, based on his performance. In the same way, the OS first checks whether an upcoming program fulfils all the requirements to get memory space; if so, it checks how much memory space is sufficient for the program and then loads the program into memory at a certain location. In this way it prevents programs from using unnecessary memory.

Process Management
Let's understand process management in a unique way. Imagine our kitchen stove as the CPU, where all the cooking (execution) really happens, and the chef as the OS, who uses the kitchen stove (CPU) to cook different dishes (programs). The chef (OS) has to cook different dishes (programs), so he ensures that no particular dish (program) takes an unnecessarily long time and that all dishes (programs) get a chance to be cooked (executed). The chef (OS) schedules time for all the dishes (programs) so that the kitchen (the whole system) runs smoothly, and thus cooks (executes) all the different dishes (programs) efficiently.

Security and Privacy


Security: The OS keeps the computer safe from unauthorized users by adding a security layer to it. Security is essentially a layer of protection that shields the computer from threats such as viruses and hackers. The OS provides defenses such as firewalls and anti-virus software to keep the computer and personal information safe.
Privacy: The OS gives us the ability to keep essential information hidden, like having a lock on a door that only we can open. It respects our secrets and provides a way to keep them safe.

Resource Management
System resources are shared between various processes. It is the Operating system that manages resource
sharing. It also manages the CPU time among processes using CPU Scheduling Algorithms. It also helps in
the memory management of the system. It also controls input-output devices. The OS also ensures the proper use of all the available resources by deciding which resource is to be used by whom.

User Interface
User interface is essential and all operating systems provide it. Users either interface with the operating
system through a command-line interface (CLI) or a graphical user interface (GUI). The command interpreter
executes the next user-specified command.

A GUI offers the user a mouse-based window and menu system as an interface.

Networking
This service enables communication between devices on a network, such as connecting to the internet,
sending and receiving data packets, and managing network connections.

Error Handling
The Operating System also handles the error occurring in the CPU, in Input-Output devices, etc. It also ensures
that an error does not occur frequently and fixes the errors. It also prevents the process from coming to a
deadlock. It also looks for any type of error or bugs that can occur while any task. The well-secured OS
sometimes also acts as a countermeasure for preventing any sort of breach of the Computer System from
any external source and probably handling them.

Time Management
Imagine a traffic light as the OS, which tells all the cars (programs) whether they should stop (red => simple queue), get ready (yellow => ready queue), or move (green => under execution). The light (control) changes after a certain interval of time on each side of the road (computer system) so that the cars (programs) from all sides of the road can move smoothly without congestion.

1.9 System Calls

In computing, a system call is a programmatic way in which a computer program requests a service from the
kernel of the operating system it is executed on. A system call is a way for programs to interact with the
operating system. A computer program makes a system call when it makes a request to the operating
system’s kernel. System call provides the services of the operating system to the user programs via
Application Program Interface(API). It provides an interface between a process and an operating system to
allow user-level processes to request services of the operating system. System calls are the only entry points into the kernel. All programs needing resources must use system calls.

A user program can interact with the operating system using a system call. The program requests a number of services, and each request is carried out by invoking the corresponding system call so that the OS can fulfill it.
A system call can be written in high-level languages like C or Pascal or in assembly language. If a high-level
language is used, system calls may be invoked directly as predefined functions.

A system call is a mechanism used by programs to request services from the operating system (OS). In
simpler terms, it is a way for a program to interact with the underlying system, such as accessing hardware
resources or performing privileged operations.

A system call is initiated by the program executing a specific instruction, which triggers a switch to kernel
mode, allowing the program to request a service from the OS. The OS then handles the request, performs the
necessary operations, and returns the result back to the program.

System calls are essential for the proper functioning of an operating system, as they provide a standardized
way for programs to access system resources. Without system calls, each program would need to implement
its own methods for accessing hardware and system services, leading to inconsistent and error-prone
behavior.

Services Provided by System Calls


Process creation and management
Main memory management
File Access, Directory, and File system management
Device handling(I/O)
Protection
Networking, etc.

1. Process control: end, abort, create, terminate, allocate, and free memory.
2. File management: create, open, close, delete, read files, etc.
3. Device management
4. Information maintenance
5. Communication

Features of System Calls


Interface: System calls provide a well-defined interface between user programs and the operating system.
Programs make requests by calling specific functions, and the operating system responds by executing the
requested service and returning a result.
Protection: System calls are used to access privileged operations that are not available to normal user
programs. The operating system uses this privilege to protect the system from malicious or unauthorized
access.
Kernel Mode: When a system call is made, the program is temporarily switched from user mode to kernel
mode. In kernel mode, the program has access to all system resources, including hardware, memory, and
other processes.
Context Switching: A system call requires a context switch, which involves saving the state of the current
process and switching to the kernel mode to execute the requested service. This can introduce overhead,
which can impact system performance.
Error Handling: System calls can return error codes to indicate problems with the requested service.
Programs must check for these errors and handle them appropriately.
Synchronization: System calls can be used to synchronize access to shared resources, such as files or
network connections. The operating system provides synchronization mechanisms, such as locks or
semaphores, to ensure that multiple programs can access these resources safely.
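
For instance, the flock() system call can serialize access to a shared file; a minimal sketch (assuming a Linux/BSD-style system and a made-up file name) looks like this:

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    int fd = open("shared.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1) { perror("open"); return 1; }

    /* Block until this process holds an exclusive lock on the file. */
    if (flock(fd, LOCK_EX) == -1) { perror("flock"); return 1; }

    /* Critical section: only one locker appends at a time. */
    const char line[] = "one update, written while holding the lock\n";
    write(fd, line, sizeof(line) - 1);

    flock(fd, LOCK_UN);   /* release the lock for other processes */
    close(fd);
    return 0;
}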
System Calls Advantages
Access to hardware resources: System calls allow programs to access hardware resources such as disk
drives, printers, and network devices.
Memory management: System calls provide a way for programs to allocate and deallocate memory, as well
as access memory-mapped hardware devices.
Process management: System calls allow programs to create and terminate processes, as well as manage
inter-process communication.
Security: System calls provide a way for programs to access privileged resources, such as the ability to
modify system settings or perform operations that require administrative permissions.
Standardization: System calls provide a standardized interface for programs to interact with the operating
system, ensuring consistency and compatibility across different hardware platforms and operating system
versions.

How does System Call Work?


Here is a step-by-step explanation of how a system call works:

The user program needs special resources: Sometimes a program needs to do something that cannot be done without the permission of the OS, such as reading from a file, writing to a file, getting information from the hardware, or requesting space in memory.

The program makes a system call request: There are special predefined instructions for making a request to the operating system. These instructions are the system calls. The program uses these system calls in its code whenever needed.

The operating system sees the system call: When the OS sees the system call, it recognizes that the program needs help, so it temporarily stops the program's execution and hands control to a special part of itself called the kernel. The kernel then services the program's request.

The operating system performs the operation: The operating system now performs the operation requested by the program, for example reading content from a file.

The operating system gives control back to the program: After performing the operation, the OS returns control to the program so that it can continue executing.
Examples of a System Call in Windows and Unix
System calls for Windows and Unix come in many different forms. Some common Unix examples are described below:

open(): Accessing a file on a file system is possible with the open() system call. It allocates the resources the file needs and returns a handle (a file descriptor) that the process can use. A file can be opened by one process or by multiple processes simultaneously, depending on the file's structure and the file system.

read(): Data from a file on the file system is retrieved using it. In general, it accepts three arguments:

A file descriptor identifying the file.

A buffer for read data storage.

How many bytes should be read from the file

Before reading, the file to be read could be identified by its file descriptor and opened using the open()
function.

wait(): In some systems, a process might need to hold off until another process has finished running before continuing. When a parent process creates a child process and must wait for it, it suspends its own execution with the wait() system call until the child process is complete. The parent process regains control once the child process has finished running.

write(): It writes data from a user buffer to a device such as a file. This system call is one way for a program to produce output. Generally, there are three arguments:

1. A file descriptor identifying the file.
2. A reference to the buffer where data is stored.
3. The amount of data that will be written from the buffer in bytes.
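
Putting open(), read(), write(), and close() together, a small copy sketch (the file names are made up) looks like this:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int in  = open("input.txt", O_RDONLY);                      /* hypothetical source file */
    int out = open("copy.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in == -1 || out == -1) { perror("open"); return 1; }

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof(buf))) > 0)   /* descriptor, buffer, byte count */
        write(out, buf, (size_t)n);                /* write exactly what was read */

    close(in);
    close(out);
    return 0;
}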

fork(): The fork() system call is used by a process to create a copy of itself, and it is one of the most frequently used ways of creating processes in operating systems. After fork(), the parent and child run concurrently; if the parent calls wait(), its execution is suspended until the child process finishes, at which point the parent regains control.

exit(): A system call called exit() is used to terminate a program. In environments with multiple threads, this
call indicates that the thread execution is finished. After using the exit() system function, the operating system
recovers the resources used by the process.
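
The process-control calls above fit together as in the following sketch: the parent fork()s a child, the child exit()s with a status, and the parent wait()s to collect it (assumes a Unix-like system).

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                  /* create a copy of this process */

    if (pid == 0) {
        /* Child: do some work, then terminate with a status code. */
        printf("child %d running\n", (int)getpid());
        exit(42);
    } else if (pid > 0) {
        /* Parent: suspend until the child finishes, then collect its status. */
        int status;
        wait(&status);
        if (WIFEXITED(status))
            printf("parent: child exited with status %d\n", WEXITSTATUS(status));
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}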

Methods to pass parameters to OS


When a system call occurs, we have to pass parameters to the kernel part of the operating system.

For example look at the given open() system call:

//function call example

#include <fcntl.h>

int open(const char *pathname, int flags, mode_t mode);


Here pathname, flags and mode_t are the parameters.
Note the following:
· We can't pass the parameters directly as in an ordinary function call.
· In kernel mode, a function call is performed differently.
The call does not run in the normal address space that the process created, so we cannot simply place the parameters on top of the user stack: that stack is not directly available to the kernel for processing. Hence other methods must be used to pass the parameters to the kernel of the OS.
This can be done by:
1. Passing parameters in registers
2. Address of the block is passed as a parameter in a register.
3. Parameters are pushed into a stack.
Let us discuss each point in detail:
1. Passing parameters in registers.
· It is the simplest method of the three.
· Here we pass the parameters directly in registers.
· But it is limited when the number of parameters is greater than the number of registers.
Here is the C program code:

// Passing parameters in registers.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h> /* for close() */

int main()
{
const char* pathname = "example.txt";
int flags = O_RDONLY;
mode_t mode = 0644;

int fd = open(pathname, flags, mode);


// in the call to open(), the parameters pathname, flags and mode are passed to the kernel directly (in registers)

if (fd == -1) {
perror("Error opening file");
return 1;
}

// File operations here...

close(fd);
return 0;
}
2. Address of the block is passed as a parameter
· It can be applied when the number of parameters is greater than the number of registers.
· The parameters are stored in a block or table in memory.
· The address of the block is passed in a register as the parameter.
· This approach is commonly used in Linux and Solaris.
Here is the C program code:
Here is the C program code:

//Address of the block is passed as a parameter

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* The parameters are gathered into one block (a struct); conceptually it is
   the address of this block that is handed over for the system call. */
struct open_params {
    const char *pathname;
    int         flags;
    mode_t      mode;
};

static int open_via_block(const struct open_params *p)
{
    /* On a system that really used the block-passing convention, the kernel
       would receive the address p; here the standard open() wrapper is used
       so that the example stays runnable. */
    return open(p->pathname, p->flags, p->mode);
}

int main()
{
    struct open_params params = { "example.txt", O_RDONLY, 0644 };

    /* the address of the block (the struct) is passed as the single parameter */
    int fd = open_via_block(&params);

    if (fd == -1) {
        perror("Error opening file");
        return 1;
    }

    // File operations here...

    close(fd);
    return 0;
}
3. Parameters are pushed onto a stack
· In this method the program pushes the parameters onto a stack and the operating system pops them off.
· The kernel can then easily access the data by retrieving it from the top of the stack.
Here is the C program code

//parameters are pushed into the stack

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main() {
const char *pathname = "example.txt";
int flags = O_RDONLY;
mode_t mode = 0644;

    /* Note: on x86-64 Linux the open system call is number 2 and the kernel
       actually takes its arguments in the rdi, rsi and rdx registers; the
       operand constraints below place the values there. A stack-based
       convention would push the same values instead. */
    long fd;
    asm volatile(
        "syscall"
        : "=a" (fd)              /* return value comes back in rax        */
        : "0" (2L),              /* system-call number for open, in rax   */
          "D" (pathname),        /* first argument in rdi                 */
          "S" ((long)flags),     /* second argument in rsi                */
          "d" ((long)mode)       /* third argument in rdx                 */
        : "rcx", "r11", "memory" /* clobbered by the syscall instruction  */
    );

    if (fd < 0) {                /* a raw syscall returns a negative errno */
        fprintf(stderr, "Error opening file\n");
        return 1;
    }

// File operations here...

close(fd);
return 0;
}

1.10System Services
An operating system is software that acts as an intermediary between the user and the computer hardware. It is a program with the help of which we are able to run various applications. It is the one program that is
running all the time. Every computer must have an operating system to smoothly execute other
programs. The OS coordinates the use of the hardware and application programs for various users. It
provides a platform for other application programs to work. The operating system is a set of special
programs that run on a computer system that allows it to work properly. It controls input-output
devices, execution of programs, managing files, etc.
Services of Operating System
1. Program Execution
2. Input/Output Operations
3. Communication between Processes
4. File Management
5. Memory Management
6. Process Management
7. Security and Privacy
8. Resource Management
9. User Interface
10. Networking
11. Error Handling
12. Time Management
Program Execution
It is the Operating System that manages how a program is executed. It loads the program into memory, after which it is executed. The order in which programs run depends on the CPU scheduling algorithm in use, such as FCFS or SJF. While programs are executing, the Operating System also handles deadlocks, making sure that processes waiting on one another's resources do not block each other indefinitely. The Operating System is responsible for the smooth execution of both user and system programs, and it uses the available resources to run all types of functionality efficiently.
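From a user program's point of view, this loading and execution can be seen with fork() and exec(). The sketch below is a hedged illustration, assuming a POSIX system on which the ls program is installed:

// Sketch: the OS creates a process, then loads and runs a new program in it
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                    // ask the OS for a new process

    if (pid == 0) {
        // child: ask the OS to load and execute another program
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp failed");           // reached only if the load failed
        return 1;
    }

    wait(NULL);                            // parent waits for the child to finish
    printf("Child program has completed\n");
    return 0;
}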
Input Output Operations
The Operating System manages input-output operations and establishes communication between the user and the device drivers. Device drivers are the software associated with each piece of hardware; the OS manages them so that the devices stay properly synchronized with the rest of the system. It also gives a program access to input-output devices when needed.
Communication between Processes
The Operating System manages communication between processes, which includes transferring data among them. Even if the processes are not on the same computer but are connected through a computer network, their communication is still managed by the Operating System.
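One common way the OS mediates this communication on a single machine is a pipe. The sketch below is illustrative and assumes a POSIX system:

// Sketch: two related processes exchange data through an OS-managed pipe
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    pipe(fds);                               // fds[0] = read end, fds[1] = write end

    if (fork() == 0) {
        close(fds[0]);
        const char *msg = "hello from the child";
        write(fds[1], msg, strlen(msg) + 1); // the data transfer is mediated by the kernel
        close(fds[1]);
        return 0;
    }

    close(fds[1]);
    char buf[64];
    read(fds[0], buf, sizeof(buf));
    printf("Parent received: %s\n", buf);
    close(fds[0]);
    wait(NULL);
    return 0;
}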
File Management
The operating system also helps in managing files. If a program needs access to a file, it is the operating system that grants access, with permissions such as read-only or read-write. It also provides the means for the user to create and delete files. The Operating System is responsible for deciding where all types of data or files are stored (e.g., floppy disk, hard disk, pen drive), and it decides how the data should be manipulated and stored.
Memory Management
Let’s understand memory management by the OS in a simple way. Imagine a cricket team with a limited number of players. The team manager (the OS) decides whether an incoming player will be in the playing 11, the playing 15, or left out of the team, based on his performance. In the same way, the OS first checks whether an incoming program meets the requirements to get memory space; if everything is in order, it determines how much memory is sufficient for the program and then loads the program into memory at a certain location. In this way, it prevents programs from using unnecessary memory.
Process Management
Let’s understand process management in a simple way. Imagine our kitchen stove as the CPU, where all the cooking (execution) really happens, and the chef as the OS, who uses the kitchen stove (CPU) to cook different dishes (programs). The chef (OS) has to cook many dishes (programs), so he ensures that no particular dish (program) takes an unnecessarily long time and that all dishes (programs) get a chance to be cooked (executed). The chef (OS) essentially schedules time for all the dishes (programs) so that the kitchen (the whole system) runs smoothly, and thus all the different dishes (programs) are cooked (executed) efficiently.
Security and Privacy
 Security: The OS keeps the computer safe from unauthorized users by adding a security layer to it. Security is essentially a layer of protection that defends the computer against threats such as viruses and hackers. The OS provides defenses like firewalls and anti-virus software and helps keep the computer and personal information safe.
 Privacy: The OS lets us keep essential information hidden, like having a lock on a door that only you can open. It respects our secrets and provides the means to keep them safe.
Resource Management
System resources are shared between various processes. It is the Operating system that manages
resource sharing. It also manages the CPU time among processes using CPU Scheduling Algorithms. It
also helps in the memory management of the system. It also controls input-output devices. The OS also ensures the proper use of all the available resources by deciding which resource is to be used by whom.
User Interface
A user interface is essential, and all operating systems provide one. Users interact with the operating system either through a command-line interface (CLI) or through a graphical user interface (GUI). In a CLI, the command interpreter executes each user-specified command in turn.
A GUI offers the user a mouse-based window and menu system as an interface.
Networking
This service enables communication between devices on a network, such as connecting to the internet,
sending and receiving data packets, and managing network connections.
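As a hedged illustration of asking the OS networking service to send data, the sketch below sends a single UDP datagram; it assumes a POSIX system, and the address 127.0.0.1 and port 9999 are arbitrary examples:

// Sketch: the OS creates a network endpoint and sends one UDP datagram
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);   // ask the OS for a network endpoint
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in dest = {0};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9999);              // example port
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

    const char *msg = "hello over the network";
    sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&dest, sizeof(dest));

    close(s);
    return 0;
}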
Error Handling
The Operating System also handles errors occurring in the CPU, in input-output devices, and elsewhere. It tries to ensure that errors do not occur frequently and fixes them when they do. It also prevents processes from ending up in a deadlock, and it watches for errors or bugs that can occur during any task. A well-secured OS can additionally act as a countermeasure against breaches of the computer system from external sources and handle them.
Time Management
Imagine a traffic light as the OS, which tells all the cars (programs) whether they should stop (red, the waiting queue), get ready (yellow, the ready queue), or move (green, under execution). The light (the control) changes after a certain interval of time on each side of the road (the computer system) so that the cars (programs) from every side of the road move smoothly without a jam.

1.11 Why applications are operating system specific


I'm trying to determine the technical details of why software produced using programming languages for certain operating systems only works with them.

It is my understanding that binaries are specific to certain processors due to the processor specific
machine language they understand and the differing instruction sets between different processors. But
where does the operating system specificity come from? I used to assume it was APIs provided by the OS
but then I saw this diagram in a book:

(Diagram from Operating Systems: Internals and Design Principles, 7th ed., W. Stallings, Pearson, 2012.)
As you can see, APIs are not indicated as a part of the operating system.

If for example I build a simple program in C using the following code:

#include <stdio.h>

int main(void)
{
    printf("Hello World");
    return 0;
}
Is the compiler doing anything OS specific when compiling this?

CPU Security Model


The first program run on most CPU architectures runs inside what is called the inner ring, or ring 0. How a specific CPU architecture implements rings varies, but nearly every modern CPU has at least two modes of operation: a privileged mode that runs 'bare metal' code able to perform any legal operation the CPU supports, and an untrusted mode that runs protected code restricted to a defined, safe set of capabilities. Some CPUs offer far finer granularity; in order to use VMs securely, at least one or two extra rings are needed (often labelled with negative numbers), but this is beyond the scope of this answer.

Where the OS comes in


Early single tasking OSes
In very early DOS and other early single-tasking systems, all code ran in the inner ring. Every program you ran had full power over the whole computer and could do literally anything if it misbehaved, including erasing all your data or, in a few extreme cases, even causing hardware damage, such as setting invalid display modes on very old display screens. Worse, this could be caused by merely buggy code with no malice whatsoever.
This code was in fact largely OS agnostic: as long as you had a loader capable of loading the program into memory (fairly simple for early binary formats) and the code did not rely on any drivers, implementing all hardware access itself, it would run under any OS as long as it was run in ring 0. Note that a very simple OS like this is usually called a monitor if it is used only to run other programs and offers no additional functionality.

Modern multi tasking OSes


More modern operating systems, including UNIX, versions of Windows starting with NT, and various now-obscure OSes, set out to improve on this situation. Users wanted additional features such as multitasking, so they could run more than one application at once, and protection, so that a bug (or malicious code) in one application could no longer cause unlimited damage to the machine and its data.
This was done using the rings mentioned above: the OS took the sole place running in ring 0, and applications ran in the outer, untrusted rings, able to perform only the restricted set of operations that the OS allowed.

However, this increased utility and protection came at a cost. Programs now had to work with the OS to perform tasks they were not allowed to do themselves. They could no longer, for example, take direct control of the hard disk and change arbitrary data; instead they had to ask the OS to perform these tasks for them, so that it could check that they were allowed to perform the operation and were not changing files that did not belong to them, and that the operation was valid and would not leave the hardware in an undefined state.

Each OS settled on a different implementation of these protections, based partly on the architecture the OS was designed for and partly on its design and principles. UNIX, for example, focused on making machines good for multi-user use, while Windows was designed to be simpler and to run on slower hardware with a single user. The way user-space programs talk to the OS is also completely different on x86 than it is on ARM or MIPS, for example, forcing a multi-platform OS to make decisions based on the hardware it targets.

These OS-specific interactions are usually called "system calls". They encompass the entire way a user-space program interacts with the hardware through the OS, and they differ fundamentally from one OS to another, so a program that does its work through system calls has to be OS specific.

The Program Loader


In addition to system calls, each OS provides a different method of loading a program from secondary storage into memory. To be loadable by a specific OS, the program must contain a special header that describes to the OS how it may be loaded and run.
This header used to be simple enough that writing a loader for a different format was almost trivial. However, with modern formats such as ELF, which support advanced features such as dynamic linking and weak declarations, it is now nearly impossible for one OS to load binaries designed for another. This means that even without the system-call incompatibilities, it is immensely difficult even to place a program in RAM in a way in which it can be run.
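To make the idea of a loader-readable header concrete, the sketch below reads the first bytes of an executable and checks for the ELF magic number; it assumes a Linux system and that a file named ./a.out exists in the current directory:

// Sketch: check whether a file starts with the ELF magic number
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("./a.out", "rb");     // hypothetical executable to inspect
    if (!f) {
        perror("fopen");
        return 1;
    }

    unsigned char ident[4] = {0};
    fread(ident, 1, 4, f);
    fclose(f);

    // Every ELF file begins with the bytes 0x7f 'E' 'L' 'F'.
    if (ident[0] == 0x7f && ident[1] == 'E' && ident[2] == 'L' && ident[3] == 'F')
        printf("Looks like an ELF binary\n");
    else
        printf("Not an ELF binary (a different loader format)\n");
    return 0;
}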

Libraries
Programs rarely use system calls directly, however. They almost exclusively gain their functionality through libraries that wrap the system calls in a slightly friendlier format for the programming language. For example, C has the C standard library, provided by glibc under Linux and similar systems and by the Win32 libraries under Windows NT and above; most other programming languages have similar libraries that wrap system functionality in an appropriate way.

These libraries can, to some degree, even overcome the cross-platform issues described above. There is a range of libraries designed to provide a uniform platform to applications while internally managing calls to a wide range of OSes, such as SDL. This means that although programs cannot be binary compatible, programs using these libraries can share common source between platforms, making porting as simple as recompiling.
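A rough sketch of how such a portability layer hides OS-specific calls behind a single function is shown below; the wrapper name my_sleep_ms is purely illustrative and not taken from any real library:

// Sketch: one source file, two OS-specific implementations chosen at compile time
#ifdef _WIN32
#include <windows.h>
static void my_sleep_ms(unsigned ms) { Sleep(ms); }          // Win32 API call
#else
#include <unistd.h>
static void my_sleep_ms(unsigned ms) { usleep(ms * 1000); }  // POSIX call
#endif

#include <stdio.h>

int main(void)
{
    puts("sleeping for half a second...");
    my_sleep_ms(500);   // the caller never sees the OS-specific call
    puts("done");
    return 0;
}

The same source compiles on both platforms, but the resulting binaries are still OS specific, because each one contains different system-library calls.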

Exceptions to the Above


Despite all I have said here, there have been attempts to overcome the limitation of not being able to run programs on more than one operating system. Some good examples are the Wine project, which has successfully emulated the Win32 program loader, binary format, and system libraries, allowing Windows programs to run on various UNIXes; the compatibility layer that allows several BSD UNIX operating systems to run Linux software; and Apple's own shim allowing old MacOS software to run under MacOS X.
However, these projects require enormous levels of manual development effort. Depending on how different the two OSes are, the difficulty ranges from a fairly small shim to near-complete emulation of the other OS, which is often more complex than writing an entire operating system; so this is the exception and not the rule.
As you can see, APIs are not indicated as a part of the operating system.
I think you are reading too much into the diagram. Yes, an OS will specify a binary interface for how
operating system functions are called, and it will also define a file format for executables, but it will also
provide an API, in the sense of providing a catalog of functions that can be called by an application to
invoke OS services.

I think the diagram is just trying to emphasize that operating system functions are usually invoked
through a different mechanism than a simple library call. Most common OSes use processor interrupts to access OS functions. Typical modern operating systems are not going to let a user program
directly access any hardware. If you want to write a character to the console, you are going to have to ask
the OS to do it for you. The system call used to write to the console will vary from OS to OS, so right
there is one example of why software is OS specific.
printf is a function from the C runtime library and, in a typical implementation, is a fairly complex function. If you google you can find the source for several versions online. See this page for a guided tour
of one. Down in the grass though it ends up making one or more system calls, and each of those system
calls is specific to the host operating system.
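To make this concrete, the same text can be written by calling the OS directly instead of going through printf. This sketch assumes a POSIX system such as Linux, where write() on file descriptor 1 ends in the kernel's write system call; on another OS the equivalent request would go through that OS's own system-call interface:

// Sketch: bypass the C library's buffered printf and call the OS directly
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "Hello World\n";

    // On Linux this reaches the write system call; the number and calling
    // convention of that system call differ from one OS to another.
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}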

1.12 Operating system Design and Implementation


The design of an operating system is a broad and complex topic that touches on many aspects of
computer science. This article will cover the design of operating systems in general and then focus on
the implementation aspect.

Design Goals:

Design goals are the objectives of the operating system. They must be met to fulfill design requirements
and they can be used to evaluate the design. These goals may not always be technical, but they often
have a direct impact on how users perceive their experience with an operating system. While designers
need to identify all design goals and prioritize them, they also need to ensure that these goals are compatible with each other, as well as with user expectations and expert advice.
Designers also need to identify all possible ways in which their designs could conflict with other parts of their systems, and then prioritize those potential conflicts based on cost-benefit analysis (CBA).
This process allows for better decision-making about what features make sense for inclusion into final
products versus those which would require extensive rework later down the road. It’s also important to
note that CBA is not just about financial costs; it can also include other factors like user experience,
time to market, and the impact on other systems.
The process of identifying design goals, conflicts, and priorities is often referred to as “goal-driven
design.” The goal of this approach is to ensure that each design decision is made with the best interest
of users and other stakeholders in mind.

Mechanisms and Policies:

An operating system is a set of software components that manage a computer’s resources and provide
overall system management.
Mechanisms and policies are the two main components of an operating system's design. A mechanism determines how something is done, for example the low-level code that performs scheduling, memory management, or interrupt handling; a policy determines what will be done, for example the rules governing resource management, security, and reliability. A well-designed OS should keep mechanisms and policies separate, and provide both for each component, in order for it to be successful at its task:
Mechanisms should ensure that applications have access to the appropriate hardware resources (such as CPU time, memory, and devices). They should also make sure that applications don’t interfere with each other’s use of these resources (for example, through mutual exclusion).
Policies determine how processes will interact with one another when they are running simultaneously on multiple CPUs within a single machine: what processor affinity should apply during multitasking? Should all processes be allowed access simultaneously, or only those belonging to a particular group?
These are just some of the many questions that policies must answer. The OS is responsible for
enforcing these mechanisms and policies, as well as handling exceptions when they occur. The
operating system also provides a number of services to applications, such as file access and networking
capabilities.
The operating system is also responsible for making sure that all of these tasks are done efficiently and
in a timely manner. The OS provides applications with access to the underlying hardware resources and
ensures that they’re properly utilized by the application. It also handles any exceptions that occur during
execution so that they don’t cause the entire system to crash.
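One way to picture the separation of mechanism and policy is to keep the dispatching code (the mechanism) fixed while the choice of the next process (the policy) is a pluggable function. The sketch below is purely illustrative and not taken from any real kernel:

// Sketch: a fixed dispatch mechanism with a swappable scheduling policy
#include <stdio.h>

typedef struct { int pid; int remaining; } Proc;

// Policy: round-robin -- pick the next runnable process in circular order
static int pick_round_robin(Proc *p, int n, int last)
{
    for (int i = 1; i <= n; i++) {
        int idx = (last + i) % n;
        if (p[idx].remaining > 0) return idx;
    }
    return -1;                               // nothing left to run
}

// Mechanism: run whatever the policy selects, one time slice at a time
static void dispatch(Proc *p, int n, int (*policy)(Proc *, int, int))
{
    int last = -1, next;
    while ((next = policy(p, n, last)) != -1) {
        printf("running pid %d\n", p[next].pid);
        p[next].remaining--;                 // simulate one time slice
        last = next;
    }
}

int main(void)
{
    Proc procs[] = { {1, 2}, {2, 1}, {3, 3} };
    dispatch(procs, 3, pick_round_robin);    // a different policy can be swapped in here
    return 0;
}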

Implementation:

Implementation is the process of writing the source code, traditionally in a high-level programming language such as C together with small amounts of assembly, and compiling it into the object code that the machine executes. The purpose of an operating system is to provide services to users while they run applications on their computers.
The main function of an operating system is to control the execution of programs. It also provides
services such as memory management, interrupt handling, and file system access facilities so that
programs can be better utilized by users or other devices attached to the system.
An operating system is a program or software that controls the computer’s hardware and resources. It
acts as an intermediary between applications, users, and the computer’s hardware. It manages the
activities of all programs running on a computer without any user intervention.
The operating system performs many functions such as managing the computer’s memory, enforcing
security policies, and controlling peripheral devices. It also provides a user interface that allows users to
interact with their computers.
In embedded devices the operating system may be stored in ROM or flash memory; on general-purpose computers, firmware stored in ROM loads the operating system from disk when the computer is turned on. The first operating systems were designed to control mainframe computers. They were very
large and complex, consisting of millions of lines of code and requiring several people to develop them.
Today, operating systems are much smaller and easier to use. They have been designed to be modular
so they can be customized by users or developers.
There are many different types of operating systems:
1. Graphical user interfaces (GUIs) like Microsoft Windows and Mac OS.
2. Command line interfaces like Linux or UNIX
3. Real-time operating systems that control industrial and scientific equipment
4. Embedded operating systems are designed to run on a single computer system without needing an
external display or keyboard.
An operating system is a program that controls the execution of computer programs and provides
services to the user.
It is responsible for managing computer hardware resources and providing common services for all
programs running on the computer. An operating system also facilitates user interaction with the
computer.
In addition to these basic functions, an operating system manages resources such as memory,
input/output devices, file systems, and other components of a computer system’s hardware architecture
(hardware). It does not manage application software or its data; this responsibility resides with
individual applications themselves or their respective developers via APIs provided by each
application’s interfaces with their respective environments (e.g., Java VM).
The operating system is the most important component of a computer, as it allows users to interact with
all of the other components. The operating system provides access to hardware resources such as
storage devices and printers, as well as making sure that programs are running correctly and
coordinating their activities.
The design and implementation of an operating system is a complex process that involves many
different disciplines. The goal is to provide users with a reliable, efficient, and convenient computing
environment, so as to make their work more efficient.

1.13 Operating system structure


The operating system can be implemented with the help of various structures. The structure of the OS depends
mainly on how the various standard components of the operating system are interconnected and melded into
the kernel.
An operating system is the design that enables user application programs to communicate with the machine’s hardware. Given its complexity and the need for it to be easy to use and modify, the operating system should be constructed with the utmost care. A straightforward way to do this is to develop the operating system in parts, where each part has its own inputs, outputs, and functionality.
This article discusses a variety of operating system implementation structures, including those listed below, as well as how and why they function. It also defines what an operating system structure is.
Depending on this, we have the following structures in the operating system:
1. Simple/Monolithic Structure
2. Micro-Kernel Structure
3. Hybrid-Kernel Structure
4. Exo-Kernel Structure
5. Layered Structure
6. Modular Structure
7. Virtual Machines
What is a System Structure for an Operating System?
Because operating systems have complex structures, we want a structure that is easy to understand so
that we can adapt an operating system to meet our specific needs. Similar to how we break down larger
problems into smaller, more manageable subproblems, building an operating system in pieces is
simpler. The operating system is a component of every segment. The strategy for integrating different
operating system components within the kernel can be thought of as an operating system structure. As
will be discussed below, various types of structures are used to implement operating systems.
Simple/Monolithic structure
Such operating systems do not have well-defined structures and are small, simple, and limited. The
interfaces and levels of functionality are not well separated. MS-DOS is an example of such an
operating system. In MS-DOS, application programs are able to access the basic I/O routines. These
types of operating systems cause the entire system to crash if one of the user programs fails.

Advantages of Simple/Monolithic Structure


 It delivers better application performance because of the few interfaces between the application
program and the hardware.
 It is easy for kernel developers to develop such an operating system.
Disadvantages of Simple/Monolithic Structure
 The structure is very complicated, as no clear boundaries exist between modules.
 It does not enforce data hiding in the operating system.
Micro-kernel Structure
This structure designs the operating system by removing all non-essential components from the kernel
and implementing them as system and user programs. This results in a smaller kernel called the micro-
kernel. One advantage of this structure is that new services are added in user space and do not require the kernel to be modified. This also makes the system more secure and reliable: if a service fails, the rest of the operating system remains untouched. Mac OS is an example of this type of OS.
Advantages of Micro-kernel Structure
 It makes the operating system portable to various platforms.
 Because microkernels are small, they can be tested more effectively.
Disadvantages of Micro-kernel Structure
 The increased level of inter-module communication degrades system performance.
Hybrid-Kernel Structure
A hybrid-kernel structure is a combination of the monolithic-kernel structure and the micro-kernel structure. It combines properties of both to form a more advanced and practical approach, providing the speed and design of a monolithic kernel together with the modularity and stability of a micro-kernel.
Advantages of Hybrid-Kernel Structure
 It offers good performance because it combines the advantages of both structures.
 It supports a wide range of hardware and applications.
 It provides better isolation and security by adopting the micro-kernel approach.
 It enhances overall system reliability by separating critical functions into the micro-kernel, which eases debugging and maintenance.
Disadvantages of Hybrid-Kernel Structure
 It increases the overall complexity of the system by combining both structures (monolithic and micro), making the system harder to understand.
 The communication layer between the micro-kernel and the other components adds overhead and lowers performance compared with a pure monolithic kernel.
Exo-Kernel Structure
Exokernel is an operating system developed at MIT to provide application-level management of
hardware resources. By separating resource management from protection, the exokernel architecture
aims to enable application-specific customization. Due to its limited operability, exokernel size
typically tends to be minimal.
Because it sits between the software and the hardware, the OS always has an impact on the functionality, performance, and scope of the applications developed on it. The exokernel operating system attempts to address this problem by rejecting the notion that an operating system must provide abstractions upon which to base applications. The objective is to impose as few abstractions as possible on developers while still giving them freedom.
Advantages of Exo-kernel
 Support for improved application control.
 Separates resource management from protection.
 It improves the performance of the application.
 A more efficient use of hardware resources is made possible by accurate resource allocation and
revocation.
 It is simpler to test and create new operating systems.
 Each user-space program is allowed to use a custom memory management system.
Disadvantages of Exo-kernel
 A decline in consistency.
 Exokernel interfaces have a complex architecture.
Layered structure
An OS can be broken into pieces and retain much more control over the system. In this structure, the
OS is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware, and the
topmost layer (layer N) is the user interface. These layers are so designed that each layer uses the
functions of the lower-level layers. This simplifies the debugging process: if the lower-level layers have already been debugged and an error occurs, the error must be in the layer currently being debugged, since the layers beneath it are known to be correct.
The main disadvantage of this structure is that at each layer, the data needs to be modified and passed
on which adds overhead to the system. Moreover, careful planning of the layers is necessary, as a layer
can use only lower-level layers. UNIX is an example of this structure.
Advantages of Layered Structure
 Layering makes it easier to enhance the operating system, as the implementation of a layer can be
changed easily without affecting the other layers.
 It is very easy to perform debugging and system verification.
Disadvantages of Layered structure
 In this structure, the application’s performance is degraded as compared to simple structure.
 It requires careful planning for designing the layers, as the higher layers use the functionalities of
only the lower layers.
Modular Structure
It is considered the best approach for an OS. It involves designing a modular kernel: the kernel has only a set of core components, and other services are added as dynamically loadable modules either at boot time or at run time. It resembles the layered structure in that each kernel module has defined and protected interfaces, but it is more flexible than a layered structure because a module can call any other module. Solaris, for example, is organized in this way.
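For a Linux-style modular kernel, a dynamically loadable module can be as small as the sketch below. This is a hedged illustration: building it requires the kernel headers and a module Makefile, and loading it with insmod requires root privileges (none of which are shown here):

// Sketch of a minimal dynamically loadable kernel module (Linux)
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

static int __init hello_init(void)
{
    printk(KERN_INFO "hello: module loaded\n");   // runs when the module is inserted
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: module unloaded\n"); // runs when the module is removed
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");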

VMs (virtual machines)


Based on our needs, a virtual machine abstracts the hardware of our personal computer, including the CPU, disk drives, RAM, and NIC (Network Interface Card), into a variety of different execution contexts, giving us the impression that each execution environment is a different computer. VirtualBox is one example.
An operating system enables us to run multiple processes concurrently while making it appear as
though each one is using a different processor and virtual memory by using CPU scheduling and virtual
memory techniques.
The fundamental issue with the virtual machine technique is disc systems. Let’s say the physical
machine only has three disc drives, but it needs to host seven virtual machines. The program that creates
virtual machines would need a significant amount of disc space in order to provide virtual memory and
spooling, so it should be clear that it is impossible to assign a disc drive to every virtual machine. The
answer is to make virtual discs available.

1.14 Building and booting an operating system

Building:

How to Build Your Own Operating System from Scratch?


Step 1:
There are three important things to master before beginning operating system development: the basics of computer science, basic programming, and both high-level and low-level programming languages. Assembly (low-level) languages are used to communicate directly with the CPU (Central Processing Unit). Each type of CPU speaks its own machine language, and there is one corresponding assembly language for each type of CPU. x86 is the most commonly used computer architecture, and C is the most commonly used high-level programming language for the development of an operating system.
References:
For low-level languages (assembly language):
 Modern X86 Assembly Language Programming by Daniel Kusswurm.
 Assembly Language Step-by-Step: Programming with Linux by Jeff Duntemann.
For high-level languages (modern languages):
 The C Programming Language by Kernighan and Ritchie.
 C++: The Complete Reference.
 Python Programming: An Introduction to Computer Science.
Step 2:
The next step in the development of an operating system is to work through OS development tutorials.
References:
The following are some useful tutorials for developing an operating system from scratch:
 Operating System Development Series from Broken Thorn Entertainment.
 The Little Book about OS Development by Erik Helin and Adam Renberg.
 The Design of the UNIX Operating System by Maurice Bach.
Together, these provide a complete step-by-step path to developing an operating system from scratch.

Booting:

Booting is the process of starting a computer. It can be initiated by hardware such as a button press or by
a software command. After it is switched on, a CPU has no software in its main memory, so some
processes must load software into memory before execution. This may be done by hardware or firmware
in the CPU or by a separate processor in the computer system.
Restarting a computer also is called rebooting, which can be "hard", e.g., after electrical power to
the CPU is switched from off to on, or "soft", where the power is not cut. On some systems, a soft boot
may optionally clear RAM to zero. Hard and soft booting can be initiated by hardware such as a button
press or a software command. Booting is complete when the operative runtime system, typically the
operating system and some applications, is attained.

The process of returning a computer from a state of sleep does not involve booting; however, restoring it
from a state of hibernation does. Minimally, some embedded systems do not require a noticeable boot
sequence to begin functioning and, when turned on, may run operational programs that are stored in
ROM. All computer systems are state machines and a reboot may be the only method to return to a
designated zero-state from an unintended, locked state.

In addition to loading an operating system or stand-alone utility, the boot process can also load a storage
dump program for diagnosing problems in an operating system.

Sequencing of Booting

Booting is a start-up sequence that starts the operating system of a computer when it is turned on. A boot
sequence is the initial set of operations that the computer performs when it is switched on. Every
computer has a boot sequence.

1. Boot Loader: Computers powered by the central processing unit can only execute code found in the
system's memory. Modern operating systems and application program code and data are stored on
nonvolatile memories. When a computer is first powered on, it must initially rely only on the code and
data stored in nonvolatile portions of the system's memory. At power-on the operating system is not yet loaded, and the computer's hardware on its own cannot perform many complex actions.

The program that starts the chain reaction that ends with the entire operating system being loaded is the
boot loader or bootstrap loader. The boot loader's only job is to load other software for the operating
system to start.

2. Boot Devices: The boot device is the device from which the operating system is loaded. A modern PC
BIOS (Basic Input/Output System) supports booting from various devices. These include the local hard
disk drive, optical drive, floppy drive, a network interface card, and a USB device. The BIOS will allow
the user to configure a boot order. If the boot order is set to:

o CD Drive
o Hard Disk Drive
o Network
The BIOS will try to boot from the CD drive first, and if that fails, then it will try to boot from the hard
disk drive, and if that fails, then it will try to boot from the network, and if that fails, then it won't boot at
all.

3. Boot Sequence: There is a standard boot sequence that all personal computers use. First, the CPU runs an instruction at a fixed memory location mapped to the BIOS. That instruction is a jump that transfers control to the BIOS start-up program. This program runs a power-on self-test (POST) to check that devices the
computer will rely on are functioning properly. Then, the BIOS goes through the configured boot
sequence until it finds a bootable device. Once BIOS has found a bootable device, BIOS loads the
bootsector and transfers execution to the boot sector. If the boot device is a hard drive, it will be a master
boot record (MBR).

The MBR code checks the partition table for an active partition. If one is found, the MBR code loads that
partition's boot sector and executes it. The boot sector is often operating system specific; however, in most operating systems, its main function is to load and execute the operating system kernel, which
continues start-up. Suppose there is no active partition, or the active partition's boot sector is invalid. In
that case, the MBR may load a secondary boot loader which will select a partition and load its boot
sector, which usually loads the corresponding operating system kernel.
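As a hedged illustration of what this stage looks at, the sketch below reads the first sector from a raw disk image file (the name disk.img is an assumption) and checks the 0x55AA boot signature and the active-partition flag in the partition table:

// Sketch: inspect a 512-byte MBR read from a disk image file
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("disk.img", "rb");      // hypothetical raw disk image
    if (!f) { perror("fopen"); return 1; }

    unsigned char mbr[512];
    if (fread(mbr, 1, 512, f) != 512) { fclose(f); return 1; }
    fclose(f);

    // A valid MBR ends with the boot signature 0x55 0xAA.
    if (mbr[510] != 0x55 || mbr[511] != 0xAA) {
        printf("No valid boot signature: not a bootable disk\n");
        return 0;
    }

    // Four 16-byte partition entries start at offset 446; the first byte of
    // an entry is 0x80 if that partition is marked active (bootable).
    for (int i = 0; i < 4; i++) {
        unsigned char *entry = &mbr[446 + i * 16];
        if (entry[0] == 0x80)
            printf("Partition %d is the active partition\n", i + 1);
    }
    return 0;
}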

Types of Booting

There are two types of booting in an operating system.

1. Cold Booting: When the computer is started for the first time, or is switched on from a shut-down state using the power button, the process of starting the computer is called cold booting. During cold booting, the system reads the instructions stored in ROM (the BIOS) and the operating system is loaded automatically into the system. Cold booting takes more time than warm (hot) booting.
2. Warm Booting: Warm or hot booting is the process of restarting a computer system that has stopped responding or has hung, while it is still powered on. It is also referred to as rebooting. There are many reasons for such a state, and the only solution is to reboot the computer. Rebooting may also be required after installing new software or hardware, since the system needs a restart to apply the configuration changes; sometimes the system behaves abnormally or does not respond properly and must be force-restarted. Most commonly the Ctrl+Alt+Del key combination is used to reboot the system; on some systems an external reset button may also be available.

Booting Process in Operating System

When a computer is switched on, whether by hardware such as a button press or by a software command, its central processing unit (CPU) has no software in main memory, so some process must load software into main memory before it can be executed. The six steps below describe the boot process in the operating system:
Step 1: Once the computer system is turned on, the BIOS (Basic Input/Output System) performs a series of activities, or functionality tests, using programs stored in ROM. This is called the POST (Power-On Self-Test), and it checks whether the peripherals in the system are in working order.

Step 2: After the BIOS finishes the pre-boot activities and functionality tests, it reads the boot device sequence from CMOS (Complementary Metal Oxide Semiconductor) memory and looks for the master boot record in the first physical sector of the bootable disk, following the boot device sequence specified in CMOS. For example, if the boot device sequence is:

o Floppy Disk
o Hard Disk
o CDROM


Step 3: After this, the system searches for the master boot record, first on the floppy disk drive. If it is not found there, the hard disk drive is searched, and if the master boot record is not present on the hard disk either, the CD-ROM drive is searched. If the system cannot read the master boot record from any of these sources, the ROM displays "No Boot device found" and the system halts. On finding the master boot record on a particular bootable disk drive, the operating system loader, also called the bootstrap loader, is loaded from the boot sector of that bootable drive into memory. A bootstrap loader is a special program present in the boot sector of a bootable drive.

Step 4: The bootstrap loader first loads the IO.SYS file. After this, the MSDOS.SYS file, the core file of the DOS operating system, is loaded.


Step 5: After this, MSDOS.SYS searches the CONFIG.SYS file for a command interpreter and, when it finds one, loads it into memory. If no command interpreter is specified in the CONFIG.SYS file, the COMMAND.COM file is loaded as the default command interpreter of the DOS operating system.

Step 6: The last file to be loaded and executed is the AUTOEXEC.BAT file, which contains a sequence of DOS commands. After this, the prompt is displayed. The drive letter of the bootable drive shown on screen indicates that the operating system has been loaded successfully from that drive.

What is Dual Booting

When two operating systems are installed on the computer system, it is called dual booting. More than two operating systems can also be installed on such a system. To decide which operating system to boot, a boot loader that understands multiple file systems and multiple operating systems occupies the boot space.
Once loaded, it can boot one of the operating systems available on the disk. The disk can have multiple
partitions, each containing a different type of operating system. When a computer system turns on, a boot
manager program displays a menu, allowing the user to choose the operating system to use.
