Cs OSS 1
Each device controller is in charge of a specific type of device. The CPU and the device controllers can execute
concurrently, competing for memory cycles. To ensure orderly access to the shared memory, a memory
controller is provided whose function is to synchronize access to the memory.
Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the system bus.
Software may trigger an interrupt by executing a special operation called a system call (also called a monitor
call).
When the CPU is interrupted, it stops what it is doing and immediately transfers execution to a fixed location.
The fixed location usually contains the starting address where the service routine for the interrupt is located.
The interrupt service routine executes; on completion, the CPU resumes the interrupted computation.
The straightforward method for handling this transfer would be to invoke a generic routine to examine the
interrupt information; the routine, in turn, would call the interrupt-specific handler. Because interrupts must be
handled quickly, however, a table of pointers to interrupt routines can be used instead. The interrupt routine is
then called indirectly through the table, with no intermediate routine needed. Generally, the table of pointers is
stored in low memory (the first 100 or so locations). These locations hold the addresses of
the interrupt service routines for the various devices. This array, or interrupt vector, of addresses is then indexed
by a unique device number, given with the interrupt request, to provide the address of the interrupt service
routine for the interrupting device.
The interrupt architecture must also save the address of the interrupted instruction. After the interrupt is
serviced, the saved return address is loaded into the program counter, and the interrupted computation
resumes as though the interrupt had not occurred.
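The dispatch mechanism just described can be sketched in Python; the device numbers, handler names, and the address value below are inventions for illustration, not part of any real architecture:

```python
# Sketch of interrupt dispatch through an interrupt vector. Device
# numbers, handlers, and the 0x4000 "address" are illustrative inventions.

def keyboard_isr():
    return "keyboard handled"

def disk_isr():
    return "disk handled"

# Interrupt vector: indexed by the unique device number sent with the request.
interrupt_vector = {0: keyboard_isr, 1: disk_isr}

def cpu_interrupt(device_number, program_counter):
    saved_pc = program_counter                   # save the interrupted address
    result = interrupt_vector[device_number]()   # indirect call through the table
    return result, saved_pc                      # resume at the saved address

result, resume_at = cpu_interrupt(1, program_counter=0x4000)
```

The table lookup replaces the intermediate examination routine: the device number selects the handler directly, and the saved program counter lets the interrupted computation resume afterwards.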
· Storage Structure
Computer programs must be in main memory (also called random-access memory or RAM) to be executed.
Main memory is the only large storage area (millions to billions of bytes) that the processor can access
directly.
Main memory is commonly implemented in a semiconductor technology called dynamic random-access
memory (DRAM), which forms an array of memory words.
A typical instruction-execution cycle, as executed on a system with a von Neumann architecture, first fetches
an instruction from memory and stores that instruction in the instruction register.
Ideally, we would want programs and data to reside in main memory permanently. This is not possible for two reasons:
1. Main memory is usually too small to store all needed programs and data permanently.
2. Main memory is a volatile storage device that loses its contents when power is turned off or otherwise lost.
Thus, most computer systems provide secondary storage as an extension of main memory. The main
requirement for secondary storage is that it be able to hold large quantities of data permanently.
The most common secondary-storage device is the magnetic disk, which provides storage for both programs and
data. Most programs (web browsers, compilers, word processors, spreadsheets, and so on) are stored on a
disk until they are loaded into memory.
The wide variety of storage systems in a computer system can be organized in a hierarchy (Figure 1.4)
according to speed and cost. The higher levels are expensive, but they are fast.
· I/O Structure
Storage is only one of many types of I/O devices within a computer. A large portion of operating system code is
dedicated to managing I/O, both because of its importance to the reliability and performance of a system and
because of the varying nature of the devices.
A general-purpose computer system consists of CPUs and multiple device controllers that are connected
through a common bus. Each device controller is in charge of a specific type of device. Depending on the
controller, there may be more than one attached device.
A device controller maintains some local buffer storage and a set of special-purpose registers. The device
controller is responsible for moving the data between the peripheral devices that it controls and its local
buffer storage.
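This division of labor can be sketched roughly as follows; the class, its method names, and the buffer size are invented for illustration:

```python
# Hypothetical sketch of a device controller's role: move data between
# a peripheral and the controller's small local buffer, from which the
# host later retrieves it. Names and the 512-byte size are inventions.

class DeviceController:
    def __init__(self, buffer_size=512):
        self.local_buffer = bytearray(buffer_size)

    def read_from_device(self, device_data):
        n = min(len(device_data), len(self.local_buffer))
        self.local_buffer[:n] = device_data[:n]   # peripheral -> local buffer
        return n

    def transfer_to_host(self, n):
        return bytes(self.local_buffer[:n])       # local buffer -> host memory

ctrl = DeviceController()
n = ctrl.read_from_device(b"sector-data")
data = ctrl.transfer_to_host(n)
```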
Typically, operating systems have a device driver for each device controller. This device driver understands the
device controller and provides the rest of the operating system with a uniform interface to the device. On these
systems, multiple components can talk to other components concurrently, rather than competing for cycles on
a shared bus. Figure 1.5 shows the interplay of all components of a computer system.
A computer system may be organized in a number of different ways, which we can categorize roughly
according to the number of general-purpose processors used.
A computer system may also contain special-purpose processors, such as disk, keyboard, and graphics
controllers. All of these special-purpose processors run a limited instruction set and do not run user processes.
Sometimes they are managed by the operating system, in that the operating system sends them information
about their next task and monitors their status.
Multiprocessor systems (also known as parallel systems or tightly coupled systems) are growing in
importance. Such systems have two or more processors in close communication, sharing the computer bus
and sometimes the clock, memory, and peripheral devices.
1. Increased throughput. By increasing the number of processors, we expect to get more work done in less
time. The speed-up ratio with N processors is not N, however; rather, it is less than N. When multiple
processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working
correctly. This overhead, plus contention for shared
resources, lowers the expected gain from additional processors. Similarly, N programmers working closely
together do not produce N times the amount of work a single programmer would produce.
2. Economy of scale. Multiprocessor systems can cost less than equivalent multiple single-processor
systems, because they can share peripherals, mass storage, and power supplies. If several programs operate
on the same set of data, it is cheaper to store those data on one disk and to have all the processors share
them than to have many computers with local
disks and many copies of the data.
3. Increased reliability. If functions can be distributed properly among several processors, then the failure of
one processor will not halt the system, only slow it down. If we have ten processors and one fails, then each
of the remaining nine processors can pick up a share of the work of the failed processor. Thus, the entire
system runs only 10 percent slower, rather than failing altogether.
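The less-than-N speed-up from point 1 can be illustrated with a toy model; the fixed per-processor coordination overhead below is an assumption made up for this sketch, not a standard formula:

```python
# Toy model: each extra processor adds a fixed coordination overhead,
# so the speed-up with n processors stays below n.
def speedup(n, overhead_per_proc=0.05):
    return n / (1 + overhead_per_proc * (n - 1))

print(speedup(1))   # 1.0: a single processor has no coordination overhead
print(speedup(4))   # about 3.48, not 4
```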
Graceful degradation is the ability to continue providing service proportional to the level of surviving
hardware.
Some systems go beyond graceful degradation and are called fault tolerant, because they can suffer the
failure of a single component and still continue operation.
Clustering is usually used to provide high-availability service; that is, service will continue even if one or more
systems in the cluster fail. Clustering can be structured asymmetrically or symmetrically. Cluster technology is
changing rapidly. Some cluster products support dozens of systems in a cluster, as well as clustered nodes that
are separated by miles. Many of these improvements are made possible by storage-area networks (SANs),
as described in Section 12.3.3, which allow many systems to attach to a pool of storage.
An operating system needs two separate modes of operation: user mode and kernel mode (also called supervisor mode, system mode,
or privileged mode). A bit, called the mode bit, is added to the hardware of the computer to indicate the
current mode: kernel (0) or user (1).
The dual mode of operation provides us with the means for protecting the operating system from errant users
—and errant users from one another.
We accomplish this protection by designating some of the machine instructions that may cause harm as
privileged instructions. The hardware allows privileged instructions to be executed only in kernel mode. If an
attempt is made to execute a privileged instruction in user mode, the hardware does not execute the
instruction but rather treats it as illegal and traps it to the operating system.
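The mode-bit check can be sketched as follows; the instruction names and the trap mechanism are simplified inventions for illustration:

```python
# Sketch of dual-mode operation: kernel (0) or user (1). A privileged
# instruction attempted in user mode is not executed; it traps to the OS.
KERNEL, USER = 0, 1

class TrapToOS(Exception):
    """Models the hardware trapping an illegal instruction to the OS."""

def execute(instruction, privileged, mode_bit):
    if privileged and mode_bit == USER:
        raise TrapToOS(f"illegal instruction in user mode: {instruction}")
    return f"executed {instruction}"

execute("add", privileged=False, mode_bit=USER)         # ordinary instruction: fine
execute("set_timer", privileged=True, mode_bit=KERNEL)  # allowed in kernel mode
```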
1.3.2 Timer
A timer can be set to interrupt the computer after a specified period. The period may be fixed (for example,
1/60 second) or variable (for example, from 1 millisecond to 1 second).
A variable timer is generally implemented by a fixed-rate clock and a counter.
The operating system sets the counter. Every time the clock ticks, the counter is decremented. We can use
the timer to prevent a user program from running too long. A simple technique is to initialize a counter with
the amount of time that a program is allowed to run.
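A minimal sketch of this scheme, with an invented tick granularity:

```python
# Sketch of a variable timer built from a fixed-rate clock and a counter:
# the OS loads the counter; every clock tick decrements it; when it
# reaches zero, a timer interrupt fires (modeled here as a True result).

class Timer:
    def __init__(self, ticks_allowed):
        self.counter = ticks_allowed   # set by the operating system

    def tick(self):
        self.counter -= 1              # decremented on every clock tick
        return self.counter <= 0       # True -> timer interrupt fires

t = Timer(ticks_allowed=3)
fired = [t.tick() for _ in range(3)]   # [False, False, True]
```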
Resource management in an operating system is the process of efficiently allocating resources such as the CPU,
memory, input/output devices, and other hardware among the various programs and processes running on the
computer.
Resource management is important because a computer's resources are limited, and multiple processes or
users may require access to the same resources, such as the CPU or memory, at the same time. The operating
system has to ensure that all processes get the resources they need to execute, without problems such as
deadlocks.
Resource Allocation: This term refers to the process of assigning available resources to processes in the
operating system. Allocation can be done statically or dynamically.
Resource: A resource is anything that can be assigned to a process, statically or dynamically, in the operating
system. Examples include CPU time, memory, disk space, and network bandwidth.
Resource Management: It refers to managing resources efficiently among different processes.
Process: A process is any program or application that is being executed in the operating system; it has its own
memory space, execution state, and set of system resources.
Scheduling: It is the process of determining which of several competing processes should be allocated a
particular resource at a given time.
Deadlock: A deadlock occurs when two or more processes are each waiting for a resource that is held by
another waiting process, so that no process can proceed and no resource is ever freed.
Semaphore: It is a tool used to prevent race conditions. A semaphore is an integer variable used in a mutually
exclusive manner by concurrent cooperating processes in order to achieve synchronization.
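A small sketch using Python's standard `threading.Semaphore` as a binary semaphore protecting a shared counter; the thread count and iteration count are arbitrary:

```python
# Binary semaphore guarding a critical section: without it, concurrent
# increments of the shared counter could be lost to race conditions.
import threading

counter = 0
mutex = threading.Semaphore(1)      # binary: one thread in the section at a time

def worker():
    global counter
    for _ in range(10000):
        mutex.acquire()             # wait (P): enter the critical section
        counter += 1                # critical section: update shared state
        mutex.release()             # signal (V): leave the critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
# counter == 40000: the semaphore prevented lost updates
```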
Mutual Exclusion: It is the technique of preventing multiple processes from accessing the same resource
simultaneously.
Memory Management: Memory management is the method used by operating systems to manage
operations between main memory and disk during process execution.
Resource scheduling: The OS allocates available resources to processes. It decides the sequence in which
processes get access to the CPU, memory, and other resources at any given time.
Resource Monitoring: The operating system monitors which resources are used by which process and takes
action if any process holds too many resources at once, which could lead to deadlock.
Resource Protection: The OS protects the system from unauthorized access by users or other
processes.
Resource Sharing: The operating system permits many processes to share resources such as memory and I/O
devices. It guarantees that common resources are utilized in a fair and productive way.
Deadlock prevention: The OS prevents deadlock and ensures that no process holds resources
indefinitely. For that it uses techniques such as resource preemption.
Resource accounting: The operating system tracks the use of resources by different processes for
allocation and statistical purposes.
Performance optimization: The OS optimizes resource distribution in order to increase system
performance. Techniques such as load balancing and memory management are used to ensure efficient
resource distribution.
Protection and security require that computer resources such as the CPU, software, and memory are protected.
This extends to the operating system as well as the data in the system. Protection is achieved by ensuring
integrity, confidentiality, and availability in the operating system. The system must be protected against
unauthorized access, viruses, worms, and so on.
Virus
Viruses are generally small snippets of code embedded in a system. They are dangerous and can corrupt
files, destroy data, crash systems, and so on. They can also spread further by replicating themselves.
Trojan Horse
A trojan horse can secretly capture the login details of a system. A malicious user can then use these to enter
the system disguised as a legitimate user and wreak havoc.
Trap Door
A trap door is a security breach that may be present in a system without the knowledge of the users. It can be
exploited to harm the data or files in a system by malicious people.
Worm
A worm can destroy a system by using its resources to extreme levels. It can generate multiple copies which
claim all the resources and don't allow any other processes to access them. A worm can shut down a whole
network in this way.
Denial of Service
These types of attacks prevent legitimate users from accessing a system. The attacker overwhelms the system
with requests so that it cannot serve other users properly.
Authentication
This deals with identifying each user in the system and making sure they are who they claim to be. The
operating system makes sure that all the users are authenticated before they access the system. The
different ways to make sure that the users are authentic are:
Username/ Password
Each user has a distinct username and password combination and they need to enter it correctly before they
can access the system.
Random Numbers
The system can ask for numbers that correspond to alphabets that are pre arranged. This combination can be
changed each time a login is required.
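A hypothetical sketch of such a challenge-response login; the letter-to-number mapping below is an invented shared secret, and the challenge size is arbitrary:

```python
# Challenge-response sketch: system and user share a prearranged
# letter-to-number mapping; the system challenges with random letters,
# and the user must answer with the matching numbers.
import random

prearranged = {"a": 17, "b": 42, "c": 8, "d": 93}   # invented shared secret

def challenge():
    return random.sample(sorted(prearranged), k=2)   # e.g. ["b", "d"]

def verify(letters, answer):
    return answer == [prearranged[l] for l in letters]

letters = challenge()                                # changes every login
assert verify(letters, [prearranged[l] for l in letters])
```

Because the challenged letters change on each login, a captured answer is useless for replaying a later login attempt.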
Secret Key
A hardware device can create a secret key related to the user id for login. This key can change each time.
1.6 Distributed Systems
A distributed Operating System refers to a model in which applications run on multiple interconnected
computers, offering enhanced communication and integration capabilities compared to a network operating
system.
In a Distributed Operating System, multiple CPUs are utilized, but for end-users, it appears as a typical
centralized operating system. It enables the sharing of various resources such as CPUs, disks, network
interfaces, nodes, and computers across different sites, thereby expanding the available data within the entire
system.
Effective communication channels such as high-speed buses and telephone lines connect the processors, each
of which is equipped with its own local memory.
Due to its characteristics, a distributed operating system is classified as a loosely coupled system. It
encompasses multiple computers, nodes, and sites, all interconnected through LAN/WAN lines. The ability of a
Distributed OS to share processing resources and I/O files while providing users with a virtual machine
abstraction is an important feature.
1. Client-Server Systems
In a client-server system within a distributed operating system, clients request services or resources from
servers over a network. Clients initiate communication, send requests, and handle user interfaces, while
servers listen for requests, perform tasks, and manage resources.
This model allows for scalable resource utilization, efficient sharing, modular development, centralized
control, and fault tolerance.
It facilitates collaboration between distributed entities, promoting the development of reliable, scalable, and
interoperable distributed systems.
2. Peer-to-Peer(P2P) Systems
In peer-to-peer (P2P) systems, interconnected nodes directly communicate and collaborate without
centralized control. Each node can act as both a client and a server, sharing resources and services with other
nodes. P2P systems enable decentralized resource sharing, self-organization, and fault tolerance.
They support efficient collaboration, scalability, and resilience to failures without relying on central servers. This
model facilitates distributed data sharing, content distribution, and computing tasks, making it suitable for
applications like file sharing, content delivery, and blockchain networks.
3. Middleware
Middleware acts as a bridge between different software applications or components, enabling
communication and interaction across distributed systems. It abstracts complexities of network
communication, providing services like message passing, remote procedure calls (RPC), and object
management.
Middleware facilitates interoperability, scalability, and fault tolerance by decoupling application logic from
underlying infrastructure.
It supports diverse communication protocols and data formats, enabling seamless integration between
heterogeneous systems.
Middleware simplifies distributed system development, promotes modularity, and enhances system flexibility,
enabling efficient resource utilization and improved system reliability.
4. Three-Tier
In a distributed operating system, the three-tier architecture divides tasks into presentation, logic, and data
layers. The presentation tier, comprising client machines or devices, handles user interaction. The logic tier,
distributed across multiple nodes or servers, executes processing logic and coordinates system functions.
The data tier manages storage and retrieval operations, often employing distributed databases or file systems
across multiple nodes.
This modular approach enables scalability, fault tolerance, and efficient resource utilization, making it ideal for
distributed computing environments.
5. N-Tier
In an N-tier architecture, applications are structured into multiple tiers or layers beyond the traditional three-tier
model. Each tier performs specific functions, such as presentation, logic, data processing, and storage, with
the flexibility to add more tiers as needed. In a distributed operating system, this architecture enables complex
applications to be divided into modular components distributed across multiple nodes or servers.
Each tier can scale independently, promoting efficient resource utilization, fault tolerance, and maintainability.
N-tier architectures facilitate distributed computing by allowing components to run on separate nodes or
servers, improving performance and scalability.
This approach is commonly used in large-scale enterprise systems, web applications, and distributed
systems requiring high availability and scalability.
Cloud Computing Platforms: Distributed operating systems form the backbone of cloud computing
platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These
platforms provide scalable, on-demand computing resources distributed across multiple data centers,
enabling organizations to deploy and manage applications, storage, and services in a distributed
manner.
Internet of Things (IoT): Distributed operating systems play a crucial role in IoT networks, where
numerous interconnected devices collect and exchange data. These operating systems manage
communication, coordination, and data processing tasks across distributed IoT devices, enabling
applications such as smart home automation, industrial monitoring, and environmental sensing.
Distributed Databases: Distributed operating systems are used in distributed database management
systems (DDBMS) to manage and coordinate data storage and processing across multiple nodes or
servers. These systems ensure data consistency, availability, and fault tolerance in distributed
environments, supporting applications such as online transaction processing (OLTP), data warehousing,
and real-time analytics.
Content Delivery Networks (CDNs): CDNs rely on distributed operating systems to deliver web content,
media, and applications to users worldwide. These operating systems manage distributed caching,
content replication, and request routing across a network of edge servers, reducing latency and
improving performance for users accessing web content from diverse geographic locations.
Peer-to-Peer (P2P) Networks: Distributed operating systems are used in peer-to-peer networks to enable
decentralized communication, resource sharing, and collaboration among distributed nodes. These
systems facilitate file sharing, content distribution, and decentralized applications (DApps) by
coordinating interactions between peers without relying on centralized servers.
High-Performance Computing (HPC): Distributed operating systems are employed in HPC clusters and
supercomputers to coordinate parallel processing tasks across multiple nodes or compute units. These
systems support scientific simulations, computational modeling, and data-intensive computations by
distributing workloads and managing communication between nodes efficiently.
Distributed File Systems: Distributed operating systems power distributed file systems like Hadoop
Distributed File System (HDFS), Google File System (GFS), and CephFS. These file systems
enable distributed storage and retrieval of large-scale data sets across clusters of machines,
supporting applications such as big data analytics, data processing, and content storage.
Solaris: It is designed for SUN multiprocessor workstations.
OSF/1: It was designed by the Open Software Foundation and is compatible with Unix.
MICROS: The MICROS operating system assigns work to all nodes in the system and guarantees a balanced
data load.
DYNIX: It was created for the Symmetry multiprocessor computers.
Locus: It allows local and remote files to be accessed simultaneously, without any location restrictions.
Mach: It permits multitasking and multithreading.
It can increase data availability throughout the system by sharing resources (CPU, disk, network
interface, nodes, computers, and so on) between sites.
Because data is replicated across sites, the impact of a site failure is reduced: users can access data
from another operating site if one site fails.
It speeds up data transfer from one site to another.
Since it can be accessed from both local and remote sites, it is an open system.
It helps reduce the time needed to process data.
Most distributed systems are composed of multiple nodes that work together to provide fault
tolerance: even if one machine malfunctions, the system still functions.
The system must determine which tasks need to be completed, when they need to be completed, and
where they need to be completed. The restrictions of a scheduler can result in unpredictable runtimes
and unused hardware.
Since both the nodes and the connections need to be secured, it is challenging to establish adequate
security.
Compared with a database connected to a distributed system, a single-user system is easier to maintain
and less complex.
Compared with other systems, the underlying software is extremely sophisticated and poorly understood.
Compiling, analyzing, displaying, and keeping track of hardware utilization metrics for large clusters can be
quite challenging.
The kernel is the central component of an operating system: it manages the operations of the computer and its
hardware, most notably memory and CPU time. The kernel acts as a bridge between applications and the data
processing performed at the hardware level, using inter-process communication and system calls.
The kernel loads into memory first when an operating system boots and remains in memory until the operating
system is shut down. It is responsible for tasks such as disk management, task management, and memory
management.
The kernel has a process table that keeps track of all active processes.
• The process table contains a per-process region table whose entries point to entries in a region table.
The kernel loads an executable file into memory during the ‘exec’ system call.
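A simplified sketch of these bookkeeping structures; all field names, sizes, and IDs below are inventions for illustration:

```python
# Sketch of a process table whose per-process entries index a shared
# region table, as described above. Everything here is simplified.

region_table = {0: {"type": "text", "size": 4096},
                1: {"type": "data", "size": 8192}}

process_table = {
    101: {"state": "running", "pregion": [0, 1]},  # entries index region_table
    102: {"state": "ready",   "pregion": [0]},     # text region shared with 101
}

def regions_of(pid):
    return [region_table[i] for i in process_table[pid]["pregion"]]

sizes = [r["size"] for r in regions_of(101)]       # [4096, 8192]
```

Indirection through the region table is what lets two processes (101 and 102 above) share a single text region instead of each holding a private copy.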
Types of Kernel:
1. Monolithic Kernel –
It is one of the types of kernel in which all operating system services operate in kernel space. There are
dependencies between system components, and the code base is huge and complex.
Example: Linux, Unix.
Advantage:
1. Efficiency: Monolithic kernels are generally faster than other types of kernels because they don’t have to
switch between user and kernel modes for every system call, which can cause overhead.
2. Tight integration: Since all the operating system services are running in kernel space, they can
communicate more efficiently with each other, making it easier to implement complex functionalities and
optimizations.
3. Simplicity: Monolithic kernels are simpler to design, implement, and debug than other types of kernels
because they have a unified structure that makes it easier to manage the code.
4. Lower latency: Monolithic kernels have lower latency than other types of kernels because system calls and
interrupts can be handled directly by the kernel.
Disadvantage:
1. Stability issues: Monolithic kernels can be less stable than other types of kernels because any bug or
security vulnerability in a kernel service can affect the entire system.
2. Security vulnerabilities: Since all the operating system services are running in kernel space, any security
vulnerability in one of the services can compromise the entire system.
3. Maintenance difficulties: Monolithic kernels can be more difficult to maintain than other types of kernels
because any change in one of the services can affect the entire system.
4. Limited modularity: Monolithic kernels are less modular than other types of kernels because all the
operating system services are tightly integrated into the kernel space. This makes it harder to add or remove
functionality without affecting the entire system.
2. Micro Kernel –
It is a type of kernel that takes a minimalist approach: only essential services such as virtual memory and
thread scheduling run in kernel space, and the rest are put in user space. With fewer services in kernel space,
it is more stable.
Example: QNX, MINIX.
Advantages:
1. Reliability: Microkernel architecture is designed to be more reliable than monolithic kernels. Since most of
the operating system services run outside the kernel space, any bug or security vulnerability in a service won’t
affect the entire system.
2. Flexibility: Microkernel architecture is more flexible than monolithic kernels because it allows different
operating system services to be added or removed without affecting the entire system.
3. Modularity: Microkernel architecture is more modular than monolithic kernels because each operating
system service runs independently of the others. This makes it easier to maintain and debug the system.
4. Portability: Microkernel architecture is more portable than monolithic kernels because most of the
operating system services run outside the kernel space. This makes it easier to port the operating system to
different hardware architectures.
Disadvantages:
1. Performance: Microkernel architecture can be slower than monolithic kernels because it requires more
context switches between user space and kernel space.
2. Complexity: Microkernel architecture can be more complex than monolithic kernels because it requires
more communication and synchronization mechanisms between the different operating system services.
3. Development difficulty: Developing operating systems based on microkernel architecture can be more
difficult than developing monolithic kernels because it requires more attention to detail in designing the
communication and synchronization mechanisms between the different services.
4. Higher resource usage: Microkernel architecture can use more system resources, such as memory and
CPU, than monolithic kernels because it requires more communication and synchronization mechanisms
between the different operating system services.
3. Hybrid Kernel –
It is a combination of the monolithic kernel and the microkernel. It has the speed and design of a monolithic
kernel and the modularity and stability of a microkernel.
Example: Windows NT, macOS.
Advantages:
1. Performance: Hybrid kernels can offer better performance than microkernels because they
reduce the number of context switches required between user space and kernel space.
2. Reliability: Hybrid kernels can offer better reliability than monolithic kernels because they isolate
drivers and other kernel components in separate protection domains.
3. Flexibility: Hybrid kernels can offer better flexibility than monolithic kernels because they allow
different operating system services to be added or removed without affecting the entire system.
4. Compatibility: Hybrid kernels can be more compatible than microkernels because they can support a
wider range of device drivers.
Disadvantages:
1. Complexity: Hybrid kernels can be more complex than monolithic kernels because they include
both monolithic and microkernel components, which can make the design and implementation
more difficult.
2. Security: Hybrid kernels can be less secure than microkernels because they have a larger attack
surface due to the inclusion of monolithic components.
3. Maintenance: Hybrid kernels can be more difficult to maintain than microkernels because they have a
more complex design and implementation.
4. Resource usage: Hybrid kernels can use more system resources than microkernels because they
include both monolithic and microkernel components.
4. Exo Kernel –
It is a type of kernel that follows the end-to-end principle. It provides as few hardware abstractions as
possible and allocates physical resources directly to applications.
Example: ExOS, Nemesis.
Advantages:
1. Flexibility: Exokernels offer the highest level of flexibility, allowing developers to customize and optimize
the operating system for their specific application needs.
2. Performance: Exokernels are designed to provide better performance than traditional kernels
because they eliminate unnecessary abstractions and allow applications to directly access hardware
resources.
3. Security: Exokernels provide better security than traditional kernels because they allow for fine-grained
control over the allocation of system resources, such as memory and CPU time.
4. Modularity: Exokernels are highly modular, allowing for the easy addition or removal of operating
system services.
Disadvantages:
1. Complexity: Exokernels can be more complex to develop than traditional kernels because they
require greater attention to detail and careful consideration of system resource allocation.
2. Development Difficulty: Developing applications for exokernels can be more difficult than for
traditional kernels because applications must be written to directly access hardware resources.
3. Limited Support: Exokernels are still an emerging technology and may not have the same level of
support and resources as traditional kernels.
4. Debugging Difficulty: Debugging applications and operating system services on exokernels can
be more difficult than on traditional kernels because of the direct access to hardware resources.
5. Nano Kernel –
It is a type of kernel that offers hardware abstraction but without system services. Since the micro
kernel also lacks system services, the micro kernel and the nano kernel have become
analogous.
Example: EROS etc.
Advantages:
1. Small size: Nanokernels are designed to be extremely small, providing only the most essential
functions needed to run the system. This can make them more efficient and faster than other kernel
types.
2. High modularity: Nanokernels are highly modular, allowing for the easy addition or removal of operating
system services, making them more flexible and customizable than traditional monolithic kernels.
3. Security: Nanokernels provide better security than traditional kernels because they have a smaller attack
surface and a reduced risk of errors or bugs in the code.
4. Portability: Nanokernels are designed to be highly portable, allowing them to run on a wide range of
hardware architectures.
Disadvantages:
1. Limited functionality: Nanokernels provide only the most essential functions, making them unsuitable for
more complex applications that require a broader range of services.
2. Complexity: Because nanokernels provide only essential functionality, they can be more complex to develop
and maintain than other kernel types.
3. Performance: While nanokernels are designed for efficiency, their minimalist approach may not be able to
provide the same level of performance as other kernel types in certain situations.
4. Compatibility: Because of their minimalist design, nanokernels may not be compatible with all hardware
and software configurations, limiting their practical use in certain contexts.
In computing, a system call is a programmatic way in which a computer program requests a service from the
kernel of the operating system it is executed on. A system call is a way for programs to interact with the
operating system. A computer program makes a system call when it makes a request to the operating
system’s kernel. System call provides the services of the operating system to the user programs via
Application Program Interface(API). It provides an interface between a process and an operating system to
allow user-level processes to request services of the operating system. System calls are the only entry points
into the kernel system. All programs needing resources must use system calls.
A user program can interact with the operating system using a system call. A number of services are requested by the program, and the OS responds by issuing a number of system calls to fulfill the request.
A system call can be written in a high-level language like C or Pascal or in assembly language. If a high-level language is used, system calls are available as predefined functions that the program can invoke directly.
A system call is a mechanism used by programs to request services from the operating system (OS). In
simpler terms, it is a way for a program to interact with the underlying system, such as accessing hardware
resources or performing privileged operations.
A system call is initiated by the program executing a specific instruction, which triggers a switch to kernel
mode, allowing the program to request a service from the OS. The OS then handles the request, performs the
necessary operations, and returns the result back to the program.
System calls are essential for the proper functioning of an operating system, as they provide a standardized
way for programs to access system resources. Without system calls, each program would need to implement
its own methods for accessing hardware and system services, leading to inconsistent and error-prone
behavior.
1. Process control: end, abort, create, terminate, allocate, and free memory.
2. File management: create, open, close, delete, read, and write files, etc.
3. Device management
4. Information maintenance
5. Communication
User needs special resources : Sometimes a program needs to do things that cannot be done without the permission of the OS, such as reading from a file, writing to a file, getting information from the hardware, or requesting space in memory.
Program makes a system call request : There are special predefined instructions for making a request to the operating system. These instructions are what we call a "system call". The program uses these system calls in its code when needed.
Operating system sees the system call : When the OS sees the system call, it recognises that the program needs help at this point, so it temporarily suspends the program's execution and hands control to a special part of itself called the kernel. The kernel then serves the program's request.
Operating system performs the operation : The operating system performs the operation requested by the program, for example reading content from a file.
Operating system gives control back to the program : After performing the requested operation, the OS gives control back to the program so it can continue executing.
Examples of a System Call in Windows and Unix
System calls for Windows and Unix come in many different forms. These are listed in the table below as
follows:
open(): The open() system call makes it possible to access a file on a file system. It allocates the resources the file needs and provides a handle the process can use. A file can be opened by multiple processes simultaneously or by just one process; everything depends on the structure and the file system.
read(): It retrieves data from a file on the file system. In general, it accepts three arguments: a file descriptor, a buffer to store the data, and the number of bytes to read. The file to be read is identified by its file descriptor, obtained earlier by opening the file with the open() function.
wait(): In some systems, a process might need to hold off until another process has finished running before
continuing. When a parent process creates a child process, the execution of the parent process is halted until
the child process is complete. The parent process is stopped using the wait() system call. The parent process
regains control once the child process has finished running.
write(): It writes data from a user buffer to a device such as a file. This system call is one way a program can produce data. Generally, there are three arguments:
1. A file descriptor.
2. A reference to the buffer where the data is stored.
3. The number of bytes to write from the buffer.
fork(): The fork() system call is used by a process to create a copy of itself. It is one of the most frequently used ways of creating processes in operating systems. After fork(), the parent and child run concurrently; the parent can suspend itself with wait() until the child finishes, at which point it regains control.
exit(): A system call called exit() is used to terminate a program. In environments with multiple threads, this
call indicates that the thread execution is finished. After using the exit() system function, the operating system
recovers the resources used by the process.
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
int main()
{
const char* pathname = "example.txt";
int flags = O_RDONLY;
mode_t mode = 0644;
int fd = open(pathname, flags, mode); /* parameters passed directly to the call */
if (fd == -1) {
perror("Error opening file");
return 1;
}
close(fd);
return 0;
}
2. Address of the block is passed as a parameter
This method can be applied when the number of parameters is greater than the number of registers.
The parameters are stored in a block, or table, in memory.
The address of the block is passed in a register as the parameter.
This approach is most commonly used in Linux and Solaris.
Here is the C program code:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
int main() {
const char *pathname = "example.txt";
int flags = O_RDONLY;
mode_t mode = 0644;
/* parameters gathered into a block; its address would be passed to the kernel */
long params[3];
params[0] = (long)pathname;
params[1] = flags;
params[2] = mode;
// system call
int fd = open((const char *)params[0], (int)params[1], (mode_t)params[2]);
if (fd == -1) {
perror("Error opening file");
return 1;
}
close(fd);
return 0;
}
3. Parameters are pushed on a stack
In this method, the parameters are pushed onto the stack by the program and popped off by the operating system, so the kernel can easily retrieve the data from the top of the stack.
Here is the C program code
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
int main() {
const char *pathname = "example.txt";
int flags = O_RDONLY;
mode_t mode = 0644;
int fd;
asm volatile(
"mov %1, %%rdi\n"
"mov %2, %%rsi\n"
"mov %3, %%rdx\n"
"mov $2, %%rax\n"   /* 2 = open on x86-64 Linux; note the kernel here takes arguments in registers */
"syscall"
: "=a" (fd)
: "r" (pathname), "r" ((long)flags), "r" ((long)mode)
: "%rdi", "%rsi", "%rdx", "%rcx", "%r11", "memory"
);
if (fd == -1) {
perror("Error opening file");
return 1;
}
close(fd);
return 0;
}
1.10 System Services
An operating system is software that acts as an intermediary between the user and the computer hardware. It is the program that lets us run various applications, and it is the one program running all the time. Every computer must have an operating system to execute other programs smoothly. The OS coordinates the use of the hardware and application programs for various users and provides a platform for other application programs to work. The operating system is a set of special programs that run on a computer system and allow it to work properly, controlling input-output devices, the execution of programs, the management of files, and so on.
Services of Operating System
1 Program execution
2 Input Output Operations
3 Communication between Process
4 File Management
5 Memory Management
6 Process Management
7 Security and Privacy
8 Resource Management
9 User Interface
10 Networking
11 Error handling
12 Time Management
Program Execution
It is the Operating System that manages how a program is going to be executed. It loads the program
into the memory after which it is executed. The order in which they are executed depends on the CPU
Scheduling Algorithms. A few are FCFS, SJF, etc. When the program is in execution, the Operating
System also handles deadlock i.e. no two processes come for execution at the same time. The Operating
System is responsible for the smooth execution of both user and system programs. The Operating
System utilizes various resources available for the efficient running of all types of functionalities.
Input Output Operations
Operating System manages the input-output operations and establishes communication between the user
and device drivers. Device drivers are software that is associated with hardware that is being managed
by the OS so that the sync between the devices works properly. It also provides access to input-output
devices to a program when needed.
Communication between Processes
The Operating system manages the communication between processes. Communication between
processes includes data transfer among them. If the processes are not on the same computer but
connected through a computer network, then also their communication is managed by the Operating
System itself.
File Management
The operating system helps in managing files also. If a program needs access to a file, it is the operating
system that grants access. These permissions include read-only, read-write, etc. It also provides a
platform for the user to create, and delete files. The Operating System is responsible for making
decisions regarding the storage of all types of data or files, i.e, floppy disk/hard disk/pen drive, etc. The
Operating System decides how the data should be manipulated and stored.
Memory Management
Let’s understand memory management by OS in simple way. Imagine a cricket team with limited
number of player . The team manager (OS) decide whether the upcoming player will be in playing 11
,playing 15 or will not be included in team , based on his performance . In the same way, OS first check
whether the upcoming program fulfil all requirement to get memory space or not ,if all things good, it
checks how much memory space will be sufficient for program and then load the program into memory
at certain location. And thus , it prevents program from using unnecessary memory.
Process Management
Let’s understand the process management in unique way. Imagine, our kitchen stove as the (CPU)
where all cooking(execution) is really happen and chef as the (OS) who uses kitchen-stove(CPU) to
cook different dishes(program). The chef(OS) has to cook different dishes(programs) so he ensure that
any particular dish(program) does not take long time(unnecessary time) and all dishes(programs) gets a
chance to cooked(execution) .The chef(OS) basically scheduled time for all dishes(programs) to run
kitchen(all the system) smoothly and thus cooked(execute) all the different dishes(programs)
efficiently.
Security and Privacy
Security : OS keep our computer safe from an unauthorized user by adding security layer to it.
Basically, Security is nothing but just a layer of protection which protect computer from bad guys
like viruses and hackers. OS provide us defenses like firewalls and anti-virus software and ensure
good safety of computer and personal information.
Privacy : OS give us facility to keep our essential information hidden like having a lock on our door,
where only you can enter and other are not allowed . Basically , it respect our secrets and provide us
facility to keep it safe.
Resource Management
System resources are shared between various processes. It is the Operating system that manages
resource sharing. It also manages the CPU time among processes using CPU Scheduling Algorithms. It
also helps in the memory management of the system. It also controls input-output devices. The OS also
ensures the proper use of all the resources available by deciding which resource to be used by whom.
User Interface
User interface is essential and all operating systems provide it. Users either interface with the operating
system through the command-line interface or graphical user interface or GUI. The command
interpreter executes the next user-specified command.
A GUI offers the user a mouse-based window and menu system as an interface.
Networking
This service enables communication between devices on a network, such as connecting to the internet,
sending and receiving data packets, and managing network connections.
Error Handling
The Operating System also handles errors occurring in the CPU, in input-output devices, and elsewhere. It ensures that errors do not occur frequently and fixes them when they do, and it prevents processes from ending up in a deadlock. It also watches for any errors or bugs that can occur during any task. A well-secured OS sometimes also acts as a countermeasure, preventing breaches of the computer system from external sources and handling them when they occur.
Time Management
Imagine a traffic light as the OS, which tells every car (program) whether it should stop (red: simple queue), get ready (yellow: ready queue), or move (green: under execution). The light (control) changes after a certain interval of time on each side of the road (computer system), so that the cars (programs) from all sides of the road move smoothly without congestion.
It is my understanding that binaries are specific to certain processors due to the processor specific
machine language they understand and the differing instruction sets between different processors. But
where does the operating system specificity come from? I used to assume it was APIs provided by the OS
but then I saw this diagram in a
book:
Operating Systems - Internals and Design Principles 7th ed - W. Stallings (Pearson, 2012)
As you can see, APIs are not indicated as a part of the operating system.
#include <stdio.h>
int main(void)
{
printf("Hello World");
return 0;
}
Is the compiler doing anything OS specific when compiling this?
However, this increased utility and protection came at a cost: programs now had to work with the OS to perform tasks they were not allowed to do themselves. They could no longer, for example, take direct control of the hard disk by accessing its memory and changing arbitrary data; instead they had to ask the OS to perform these tasks for them, so that it could check that they were allowed to perform the operation and were not changing files that did not belong to them. It would also check that the operation was valid and would not leave the hardware in an undefined state.
Each OS decided on a different implementation for these protections, based partly on the architecture the OS was designed for and partly on the design and principles of the OS in question. UNIX, for example, focused on making machines good for multi-user use, while Windows was designed to be simpler and to run on slower hardware with a single user. The way user-space programs talk to the OS is also completely different on x86 than on ARM or MIPS, for example, forcing a multi-platform OS to make decisions based on the hardware it targets.
These OS-specific interactions are usually called "system calls" and encompass how a user-space program interacts with the hardware through the OS. They fundamentally differ from one OS to the next, so a program that does its work through system calls needs to be OS specific.
Libraries
Programs rarely use system calls directly, however; they almost exclusively gain their functionality through libraries that wrap the system calls in a slightly friendlier format for the programming language. For example, C has the C Standard Library, provided by glibc under Linux and similar systems and by the win32 libraries under Windows NT and above. Most other programming languages have similar libraries that wrap system functionality in an appropriate way.
These libraries can, to some degree, even overcome the cross-platform issues described above. There is a range of libraries, such as SDL, designed to provide a uniform platform to applications while internally managing calls to a wide range of OSes. This means that although programs cannot be binary compatible, programs which use these libraries can share common source between platforms, making porting as simple as recompiling.
I think the diagram is just trying to emphasize that operating system functions are usually invoked
through a different mechanism than a simple library call. Most of the common OS use processor
interrupts to access OS functions. Typical modern operating systems are not going to let a user program
directly access any hardware. If you want to write a character to the console, you are going to have to ask
the OS to do it for you. The system call used to write to the console will vary from OS to OS, so right
there is one example of why software is OS specific.
printf is a function from the C run time library and in a typical implementation is a fairly complex
function. If you google you can find the source for several versions online. See this page for a guided tour
of one. Down in the grass though it ends up making one or more system calls, and each of those system
calls is specific to the host operating system.
Design Goals:
Design goals are the objectives of the operating system. They must be met to fulfill design requirements
and they can be used to evaluate the design. These goals may not always be technical, but they often
have a direct impact on how users perceive their experience with an operating system. While designers
need to identify all design goals and prioritize them, they also need to ensure that these goals are compatible with each other as well as with user expectations and expert advice.
Designers also need to identify all possible ways in which their designs could conflict with other parts
of their systems—and then prioritize those potential conflicts based on cost-benefit analysis (CBA).
This process allows for better decision-making about what features make sense for inclusion into final
products versus those which would require extensive rework later down the road. It’s also important to
note that CBA is not just about financial costs; it can also include other factors like user experience,
time to market, and the impact on other systems.
The process of identifying design goals, conflicts, and priorities is often referred to as “goal-driven
design.” The goal of this approach is to ensure that each design decision is made with the best interest
of users and other stakeholders in mind.
An operating system is a set of software components that manage a computer’s resources and provide
overall system management.
Mechanisms and policies are the two main components of an operating system. Mechanisms handle
low-level functions such as scheduling, memory management, and interrupt handling; policies handle
higher-level functions such as resource management, security, and reliability. A well-designed OS
should provide both mechanisms and policies for each component in order for it to be successful at its
task:
Mechanisms should ensure that applications have access to appropriate hardware resources. They should also make sure that applications don’t interfere with each other’s use of these resources (for example, through mutual exclusion).
Policies determine how processes interact with one another when they run simultaneously on multiple CPUs within a single machine: what processor affinity should apply during multitasking? Should all processes be allowed access simultaneously, or only those belonging to group ‘A’?
These are just some of the many questions that policies must answer. The OS is responsible for
enforcing these mechanisms and policies, as well as handling exceptions when they occur. The
operating system also provides a number of services to applications, such as file access and networking
capabilities.
The operating system is also responsible for making sure that all of these tasks are done efficiently and
in a timely manner. The OS provides applications with access to the underlying hardware resources and
ensures that they’re properly utilized by the application. It also handles any exceptions that occur during
execution so that they don’t cause the entire system to crash.
Implementation:
Implementation is the process of writing the source code in a high-level programming language, compiling it into object code, and executing it. The purpose of an operating system is to provide services to users while they run applications on their computers.
The main function of an operating system is to control the execution of programs. It also provides
services such as memory management, interrupt handling, and file system access facilities so that
programs can be better utilized by users or other devices attached to the system.
An operating system is a program or software that controls the computer’s hardware and resources. It
acts as an intermediary between applications, users, and the computer’s hardware. It manages the
activities of all programs running on a computer without any user intervention.
The operating system performs many functions such as managing the computer’s memory, enforcing
security policies, and controlling peripheral devices. It also provides a user interface that allows users to
interact with their computers.
The operating system, or at least the firmware that loads it, is typically stored in ROM or flash memory so it can run when the computer is turned on. The first operating systems were designed to control mainframe computers. They were very large and complex, consisting of millions of lines of code and requiring several people to develop them.
Today, operating systems are much smaller and easier to use. They have been designed to be modular
so they can be customized by users or developers.
There are many different types of operating systems:
1. Graphical user interfaces (GUIs) like Microsoft Windows and Mac OS.
2. Command-line interfaces like Linux or UNIX.
3. Real-time operating systems that control industrial and scientific equipment
4. Embedded operating systems are designed to run on a single computer system without needing an
external display or keyboard.
An operating system is a program that controls the execution of computer programs and provides
services to the user.
It is responsible for managing computer hardware resources and providing common services for all
programs running on the computer. An operating system also facilitates user interaction with the
computer.
In addition to these basic functions, an operating system manages resources such as memory, input/output devices, file systems, and other components of the computer system’s hardware architecture. It does not manage application software or its data; that responsibility resides with the individual applications themselves or their developers, via the APIs through which each application interfaces with its environment (e.g., the Java VM).
The operating system is the most important component of a computer, as it allows users to interact with
all of the other components. The operating system provides access to hardware resources such as
storage devices and printers, as well as making sure that programs are running correctly and
coordinating their activities.
The design and implementation of an operating system is a complex process that involves many
different disciplines. The goal is to provide users with a reliable, efficient, and convenient computing
environment, so as to make their work more efficient.
Building:
Booting:
Booting is the process of starting a computer. It can be initiated by hardware such as a button press or by
a software command. After it is switched on, a CPU has no software in its main memory, so some
processes must load software into memory before execution. This may be done by hardware or firmware
in the CPU or by a separate processor in the computer system.
Restarting a computer also is called rebooting, which can be "hard", e.g., after electrical power to
the CPU is switched from off to on, or "soft", where the power is not cut. On some systems, a soft boot
may optionally clear RAM to zero. Hard and soft booting can be initiated by hardware such as a button
press or a software command. Booting is complete when the operative runtime system, typically the
operating system and some applications, is attained.
The process of returning a computer from a state of sleep does not involve booting; however, restoring it
from a state of hibernation does. Minimally, some embedded systems do not require a noticeable boot
sequence to begin functioning and, when turned on, may run operational programs that are stored in
ROM. All computer systems are state machines and a reboot may be the only method to return to a
designated zero-state from an unintended, locked state.
In addition to loading an operating system or stand-alone utility, the boot process can also load a storage
dump program for diagnosing problems in an operating system.
Sequencing of Booting
Booting is a start-up sequence that starts the operating system of a computer when it is turned on. A boot
sequence is the initial set of operations that the computer performs when it is switched on. Every
computer has a boot sequence.
1. Boot Loader: A computer's central processing unit can only execute code found in the system's memory. Modern operating systems, application program code, and data are stored on nonvolatile memory. When a computer is first powered on, it must initially rely only on the code and data stored in nonvolatile portions of the system's memory. The operating system is not yet loaded at boot time, so the computer's hardware alone cannot perform many complex actions.
The program that starts the chain reaction that ends with the entire operating system being loaded is the
boot loader or bootstrap loader. The boot loader's only job is to load other software for the operating
system to start.
2. Boot Devices: The boot device is the device from which the operating system is loaded. A modern PC
BIOS (Basic Input/Output System) supports booting from various devices. These include the local hard
disk drive, optical drive, floppy drive, a network interface card, and a USB device. The BIOS will allow
the user to configure a boot order. If the boot order is set to:
o CD Drive
o Hard Disk Drive
o Network
The BIOS will try to boot from the CD drive first, and if that fails, then it will try to boot from the hard
disk drive, and if that fails, then it will try to boot from the network, and if that fails, then it won't boot at
all.
3. Boot Sequence: There is a standard boot sequence that all personal computers use. First, the CPU runs
an instruction in memory for the BIOS. That instruction contains a jump instruction that transfers to the
BIOS start-up program. This program runs a power-on self-test (POST) to check that devices the
computer will rely on are functioning properly. Then, the BIOS goes through the configured boot
sequence until it finds a bootable device. Once the BIOS has found a bootable device, it loads the boot sector and transfers execution to it. If the boot device is a hard drive, this will be a master boot record (MBR).
The MBR code checks the partition table for an active partition. If one is found, the MBR code loads that
partition's boot sector and executes it. The boot sector is often operating-system specific; however, in
most operating systems, its main function is to load and execute the operating system kernel, which
continues start-up. If there is no active partition, or the active partition's boot sector is invalid, the
MBR may load a secondary boot loader, which will select a partition and load its boot sector, which
usually loads the corresponding operating system kernel.
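The checks the MBR code performs can be sketched against the standard MBR layout: a four-entry partition table at byte offset 446 (16 bytes per entry, with the first byte of each entry the boot flag, where 0x80 means active) and the 0x55 0xAA signature at offset 510. The function below is a simplified illustration of that scan, not real boot-sector code.

```python
# Illustrative sketch of the active-partition scan an MBR performs, assuming
# the standard layout: partition table at offset 446 (four 16-byte entries,
# first byte of each is the boot flag, 0x80 = active) and the 0x55 0xAA
# signature at offset 510.

def find_active_partition(sector: bytes):
    """Return the index (0-3) of the active partition, or None."""
    if len(sector) != 512 or sector[510:512] != b"\x55\xaa":
        return None                      # not a valid boot sector
    for i in range(4):
        entry = sector[446 + i * 16 : 446 + (i + 1) * 16]
        if entry[0] == 0x80:             # boot-indicator flag set
            return i
    return None                          # no active partition

# Build a fake 512-byte sector with the second partition marked active:
sector = bytearray(512)
sector[510:512] = b"\x55\xaa"            # boot-sector signature
sector[446 + 16] = 0x80                  # entry index 1: active
print(find_active_partition(bytes(sector)))  # 1
```

When this scan returns None, a real MBR would fall back to a secondary boot loader, as described above.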
Types of Booting
1. Cold Booting: When the computer is started for the first time, or is switched on from a shut-down
state by pressing the power button, the process is called cold booting. During a cold boot, the
system reads all the instructions from the ROM (BIOS), and the operating system is automatically
loaded into the system. This type of booting takes more time than a warm (hot) boot.
2. Warm Booting: A warm (or hot) boot occurs when a running computer system stops responding or
hangs and is restarted while still powered on. It is also referred to as rebooting. There are many
reasons for reaching such a state, and the only solution is to reboot the computer. A reboot may
also be required after installing new software or hardware, since the system needs a restart to apply
software or hardware configuration changes; sometimes the system behaves abnormally or does not
respond properly and has to be force-restarted. Most commonly, the Ctrl+Alt+Del key combination
is used to reboot the system; on some systems, an external reset button may also be available.
When a computer is switched on, whether by hardware such as a button press or by a software command,
its central processing unit (CPU) has no software in main memory, so some process must load software
into main memory before it can be executed. The six steps below describe the boot process in the
operating system:
Step 1: Once the computer system is turned on, the BIOS (Basic Input/Output System) performs a series of
activities and functionality tests using programs stored in ROM, called the POST (Power-On Self-Test),
which checks whether the peripherals in the system are in working order.
Step 2: After the BIOS is done with these pre-boot activities and functionality tests, it reads the boot
sequence from CMOS (Complementary Metal-Oxide-Semiconductor) memory and looks for a master boot
record in the first physical sector of the bootable disk, as per the boot device sequence specified in
CMOS. For example, if the boot device sequence is:
o Floppy Disk
o Hard Disk
o CDROM
Step 3: After this, the BIOS searches for the master boot record, first on the floppy disk drive. If it is
not found there, the hard disk drive is searched for the master boot record. If the master boot record is
not present on the hard disk either, the CDROM drive is searched. If the system cannot read the master
boot record from any of these sources, the ROM displays "No Boot device found" and halts the system. On
finding the master boot record on a particular bootable disk drive, the operating system loader, also
called the bootstrap loader, is loaded from the boot sector of that bootable drive into memory. A
bootstrap loader is a special program that is present in the boot sector of a bootable drive.
Step 4: The bootstrap loader first loads the IO.SYS file. After this, the MSDOS.SYS file is loaded, which
is the core file of the DOS operating system.
Step 5: After this, MSDOS.SYS searches the CONFIG.SYS file for a Command Interpreter and, when it finds
one, loads it into memory. If no Command Interpreter is specified in the CONFIG.SYS file,
the COMMAND.COM file is loaded as the default Command Interpreter of the DOS operating system.
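The Step 5 fallback can be sketched as a scan for a SHELL= directive, which is how CONFIG.SYS names a command interpreter, with COMMAND.COM as the default when none is found. The parsing below is deliberately simplified for illustration; real DOS handles more directive syntax than this.

```python
# A minimal sketch of the Step 5 fallback: scan CONFIG.SYS lines for a
# SHELL= directive and fall back to COMMAND.COM when none is present.
# The parsing here is simplified for illustration.

def pick_command_interpreter(config_sys_lines):
    """Return the interpreter named by SHELL=, else the DOS default."""
    for line in config_sys_lines:
        line = line.strip()
        if line.upper().startswith("SHELL="):
            return line[len("SHELL="):]   # use the interpreter named here
    return r"C:\COMMAND.COM"              # default Command Interpreter

print(pick_command_interpreter(["FILES=30", r"SHELL=C:\DOS\COMMAND.COM /P"]))
print(pick_command_interpreter(["FILES=30"]))  # C:\COMMAND.COM (default)
```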
Step 6: The last file to be loaded and executed is the AUTOEXEC.BAT file, which contains a sequence of
DOS commands. After this, the prompt is displayed. We can see the drive letter of the bootable drive
displayed on the computer system, which indicates that the operating system has been successfully loaded
from that drive.
When two operating systems are installed on a computer system, it is called dual booting. Multiple
operating systems can be installed on such a system, but to decide which operating system to boot, a
boot loader that understands multiple file systems and multiple operating systems must occupy the boot
space.
Once loaded, it can boot one of the operating systems available on the disk. The disk can have multiple
partitions, each containing a different type of operating system. When a computer system turns on, a boot
manager program displays a menu, allowing the user to choose the operating system to use.
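The boot manager's job can be sketched as a menu over the operating systems found on the disk, booting whichever entry the user picks. The partition labels and the `choose_os` function are hypothetical; a real boot manager would chain-load the chosen partition's boot sector rather than return a string.

```python
# Hypothetical sketch of a boot manager's menu on a dual-boot disk.
# Labels are illustrative; a real manager chain-loads a boot sector.

def choose_os(partitions, choice):
    """Return the OS label for the user's 1-based menu choice."""
    if not 1 <= choice <= len(partitions):
        raise ValueError("no such menu entry")
    return partitions[choice - 1]

menu = ["Windows", "Linux"]          # one OS per partition
for i, name in enumerate(menu, start=1):
    print(f"{i}. {name}")            # the menu shown at power-on
print("Booting:", choose_os(menu, 2))  # Booting: Linux
```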