Operating System Structure
Monolithic Systems
The components of a monolithic operating system are organized haphazardly: any module can call any other module without restriction. As in other operating systems, applications in a monolithic OS are separated from the operating system itself. That is, the operating system code runs in a privileged processor mode (referred to as kernel mode), with access to system data and to the hardware, while applications run in a non-privileged processor mode (called user mode), with a limited set of interfaces available and with limited access to system data. The monolithic operating system structure, with separate user and kernel processor modes, is shown in Figure 2.1.
When a user-mode program calls a system service, the processor traps the call and switches the calling thread to kernel mode. When the system service completes, the operating system switches the thread back to user mode and allows the caller to continue.
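As an illustration, consider the short C program below. This is a minimal sketch (not part of the original figures): the write library wrapper issues a system call, the processor traps into kernel mode, and control returns to user mode once the kernel has finished the service.

    /* Minimal sketch: write() below enters the kernel via a trap and
       returns to user mode when the system service completes. */
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello from user mode\n";
        /* The calling thread switches to kernel mode here and back on return. */
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        return 0;
    }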
The monolithic structure does not enforce data hiding in the operating system. It delivers better application performance, but extending such a system can be difficult because modifying one procedure can introduce bugs in seemingly unrelated parts of the system.
Layered Systems
The components of a layered operating system are organized into modules that are layered one on top of the other. Each module provides a set of functions that other modules can call. Interface functions at any particular level can invoke services provided by lower layers, but not the other way around. The layered operating system structure, with its hierarchical organization of modules, is shown in Figure 2.2.
One advantage of a layered operating system structure is that each layer of code is given access to only the lower-level interfaces (and data structures) it requires, thus limiting the amount of code that wields unlimited power. That is, in this approach the Nth layer can access services provided by the (N-1)th layer and provide services to the (N+1)th layer. This structure also allows the operating system to be debugged starting at the lowest layer, adding one layer at a time until the whole system works correctly. Layering also makes it easier to enhance the operating system: one entire layer can be replaced without affecting other parts of the system. A layered operating system delivers lower application performance than a monolithic one.
Virtual Machines
A virtual machine takes the layered approach to its logical conclusion. It treats the hardware and the operating system kernel as though they were all hardware.
The operating system creates the illusion of multiple processes, each executing on
its own processor with its own (virtual) memory.
The resources of the physical computer are shared to create the virtual machines.
CPU scheduling can create the appearance that users have their own
processor.
Spooling and a file system can provide virtual card readers and virtual line
printers.
The virtual machine concept is difficult to implement because of the effort required to provide an exact duplicate of the underlying machine.
Fig: System models: non-virtual machine vs. virtual machine.
Microkernel Systems
In this model, shown in the figure, all the kernel does is handle the communication between clients and servers; the main OS functions are provided by a number of separate processes.
The benefits of the microkernel approach include the ease of extending the operating system: new services are added in user space and consequently do not require modification of the kernel. The microkernel also provides more security and reliability, since most services run as user processes rather than as kernel processes. If a service fails, the rest of the operating system remains untouched.
A distributed system can be thought of as an extension of the client-server concept where the servers are remote.
The kernel coordinates the message passing between client applications and application servers. The client-server structure of Windows NT is shown in the accompanying figure.
The Process Model
In this model, all the runnable software on the computer, sometimes including the operating system, is organized into a number of sequential processes, or just processes for short. A process is simply an executing program, including the current values of the program counter, registers, and variables. In multiprogramming, the CPU switches from process to process.
In Fig. (b) we see four processes, each with its own flow of control (i.e., its own logical program counter), and each one running independently of the other ones. Of course, there is only one physical program counter, so when each process runs, its logical program counter is loaded into the real program counter. When it is finished for the time being, the physical program counter is saved in the process's logical program counter in memory.
In Fig. (c) we see that, viewed over a long enough time interval, all the processes have made progress, but at any given instant only one process is actually running.
Process State
A batch system executes jobs, while a time-sharing system executes user programs or tasks. These activities are similar, so we call all of them processes. A program is a passive entity, but a process is an active entity: a process has a program, input, output, and a state (its current activity). As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:
New: The process is being created
Ready: The process is waiting to be assigned to a processor.
Running: Instructions are being executed.
Waiting/blocked: The process is waiting for some event to occur (such as an I/O completion or the reception of a signal).
Terminated: The process has finished execution.
Ready to running:
A ready process moves to the running state when the scheduler dispatches it, that is, when the other processes have used their share of CPU time and it is this process's turn to run.
Running to ready:
A running process is moved back to the ready state when it is preempted, for example when its time slice expires.
Running to waiting/blocked:
A running process goes to the waiting/blocked state when it discovers that it cannot continue. In some systems the process must execute a system call, such as block or pause, to enter the blocked state. In other systems, including UNIX, when a process reads from a pipe or special file (e.g., a terminal) and there is no input available, the process is blocked automatically.
Waiting/blocked to ready:
A waiting process moves to the ready state when the external event for which it was waiting (such as the arrival of some input) happens. If no other process is running at that instant, the ready-to-running transition is triggered and the process starts running. Otherwise it may have to wait in the ready state for a little while until the CPU is available and its turn comes.
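The five states and the transitions just described can be summarized in code. The following C sketch is purely illustrative (the enum and function names are our own, not from any real kernel); it encodes which transitions of the state diagram are legal.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative sketch: the five process states described above. */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* True when 'from' -> 'to' is one of the transitions in the diagram:
       new->ready (admitted), ready->running (dispatched),
       running->ready (preempted), running->waiting (blocked),
       waiting->ready (event occurred), running->terminated (exit). */
    static bool legal_transition(enum proc_state from, enum proc_state to)
    {
        switch (from) {
        case NEW:     return to == READY;
        case READY:   return to == RUNNING;
        case RUNNING: return to == READY || to == WAITING || to == TERMINATED;
        case WAITING: return to == READY;
        default:      return false;
        }
    }

    int main(void)
    {
        printf("running -> waiting: %d\n", legal_transition(RUNNING, WAITING));
        printf("waiting -> running: %d\n", legal_transition(WAITING, RUNNING)); /* 0 */
        return 0;
    }

Note that a waiting process never moves directly to running; it must first re-enter the ready state, as described above.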
Process Creation
The processes in the system can execute concurrently, and they must be created and
deleted dynamically. Thus, the operating system must provide a mechanism (or facility) for
process creation and termination. There are four principal events that cause processes to
be created:
1. System initialization.
2. Execution of a process creation system call by a running process.
3. A user request to create a new process.
4. Initiation of a batch job.
A process may create several new processes, via a create-process system call, during the
course of execution. The creating process is called a parent process, whereas the new
processes are called the children of that process. Each of these new processes may in turn
create other processes, forming a tree of processes (Figure).
Fig: Process creation. (Here a child has only one parent, while a parent may have many children.)
When a process creates a subprocess, that subprocess may be able to obtain its resources directly from the operating system, or it may be constrained to a subset of the resources of the parent process. The parent may have to partition its resources among its children, or it may be able to share some resources (such as memory or files) among several of its children. Restricting a child process to a subset of the parent's resources prevents any process from overloading the system by creating too many subprocesses.
When a process creates a new process, two possibilities exist in terms of execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
There are also two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.
In UNIX, processes are created by the fork system call, which creates an identical copy of the calling process.
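A minimal sketch of fork in C (the printed messages are our own; fork, getpid, and getppid are standard UNIX calls):

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();      /* create an identical copy of this process */

        if (pid < 0) {
            perror("fork");      /* creation failed */
        } else if (pid == 0) {
            /* child: fork() returned 0 */
            printf("child:  pid=%d, parent=%d\n", getpid(), getppid());
        } else {
            /* parent: fork() returned the child's PID */
            printf("parent: pid=%d, child=%d\n", getpid(), pid);
        }
        return 0;
    }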
Process Termination
A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit system call. At that point, the process may
return data (output) to its parent process (via the wait system call). All the resources of the
process-including physical and virtual memory, open files, and 1/0 buffers-are deallocated
by the operating system.
A process can cause the termination of another process via an appropriate system
call (for example, abort). Usually, only the parent of the process that is to be terminated
can invoke such a system call. Otherwise, users could arbitrarily kill each other's jobs. A
parent therefore needs to know the identities of its children. Thus, when one process
creates a new process, the identity of the newly created process is passed to the parent.
A parent may terminate the execution of one of its children for a variety of reasons, such as these:
The child has exceeded its usage of some of the resources that it has
been allocated. This requires the parent to have a mechanism to
inspect the state of its children.
The task assigned to the child is no longer required.
The parent is exiting, and the operating system does not allow a child
to continue if its parent terminates. On such systems, if a process
terminates (either normally or abnormally), then all its children must
also be terminated. This phenomenon, referred to as cascading
termination, is normally initiated by the operating system.
In UNIX we can terminate a process by using the exit system call; its parent process may
wait for the termination of a child process by using the wait system call.
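The exit/wait pairing can be sketched as follows (illustrative; the status value 7 is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {
            exit(7);                 /* child terminates, returning status 7 */
        } else if (pid > 0) {
            int status;
            wait(&status);           /* parent blocks until the child exits */
            if (WIFEXITED(status))
                printf("child exited with status %d\n", WEXITSTATUS(status));
        }
        return 0;
    }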
Implementation of Processes
To implement the process model, the operating system maintains a table, an array of
structures, called the process table or process control block (PCB) or Switch frame. The
activity of a process is controlled by a data structure called Process Control Block(PCB). A
PCB is created every time a program is loaded to be executed. It is also called a task
control block. Each process has its own PCB to represent it in the operating system. The
PCB is central store of information that allows the operating system to locate all key
information about the process. it contain everything about the process that must be saved
when the process is switched from the running state to the ready state so that it can be
restarted later as if it had never been stopped.
Process state: The state may be new, ready, running, waiting, halted, and so on.
Process number: Each process is identified by its process number, called the process identification number (PID).
Program counter: The counter indicates the address of the next instruction to be executed for this process.
CPU registers: These include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
CPU-scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
Accounting information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
I/O status information: This information includes the list of I/O devices allocated to this process, a list of open files, and so on.
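Putting these fields together, a PCB might be declared roughly as follows. This is a simplified sketch with illustrative field names; real kernels keep far more state (Linux's equivalent structure, task_struct, runs to hundreds of fields).

    /* Simplified, illustrative sketch of a process control block. */
    #define MAX_OPEN_FILES 16

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;                        /* process number (PID)       */
        enum proc_state state;                      /* new, ready, running, ...   */
        unsigned long   program_counter;            /* next instruction address   */
        unsigned long   registers[16];              /* saved CPU registers        */
        int             priority;                   /* CPU-scheduling information */
        unsigned long   cpu_time_used;              /* accounting information     */
        int             open_files[MAX_OPEN_FILES]; /* I/O status information     */
        struct pcb     *next;                       /* link for ready/wait queues */
    };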
These PCBs are chained into a number of lists. For example, all processes in the ready state are in the ready queue.
As processes switch in and out of the Running state, their PCBs are saved and reloaded as
shown in this diagram:
Fig: Showing CPU switch from process to process.
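In code, the save-and-reload step of the diagram might look roughly like this. The sketch is illustrative only: on a real CPU the register file is saved and restored by assembly code, which is emulated here with a global array.

    #include <string.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        unsigned long   registers[16];
        enum proc_state state;
    };

    /* Stand-in for the CPU's real register file. */
    static unsigned long cpu_regs[16];

    /* Save the outgoing process's context into its PCB and
       load the incoming process's context from its PCB. */
    void context_switch(struct pcb *from, struct pcb *to)
    {
        memcpy(from->registers, cpu_regs, sizeof cpu_regs);  /* save old context */
        from->state = READY;
        memcpy(cpu_regs, to->registers, sizeof cpu_regs);    /* load new context */
        to->state = RUNNING;
    }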
Threads
Single thread
Has a single thread of control; the process can perform only one task at a time.
Multithread
Has many threads, allowing simultaneous execution of different tasks.
Why Threads?
Following are some reasons why we use threads in designing operating systems.
1. A process with multiple threads makes a great server, for example a printer server.
2. Because threads can share common data, they do not need to use interprocess communication.
3. By their very nature, threads can take advantage of multiprocessors.
Thread:
Usually, a process has only one thread of control, that is, one set of machine instructions executing at a time.
All the threads running within a process share the same address space, file descriptors, and other process-related attributes (each thread, however, has its own stack and registers).
Since the threads of a process share the same memory, synchronizing access to the shared data within the process gains unprecedented importance.
Process:
An executing instance of a program is called a process.
Some operating systems use the term 'task' to refer to a program that is being executed.
A process is always stored in main memory, also termed primary memory or random access memory.
Thread vs. process:
There can be more than one thread in a process; the first thread calls main and uses the process's stack. Threads within a process share code, data, and heap and share I/O, but each thread has its own stack and registers.
A thread cannot live on its own; it must live within a process, and there must be at least one thread in every process.
If a thread dies, its stack is reclaimed by the process; if a process dies, its resources are reclaimed and all of its threads die.
Threads are easier to create than processes, since they do not require a separate address space, whereas each process does.
Modifying a main thread may affect subsequent threads, but changes to a parent process will not necessarily affect its child processes.
User Threads:
User threads are supported above the kernel and are implemented by a thread
library at the user level.
The library (or run-time system) provides support for thread creation, scheduling
and management with no support from the kernel.
When threads are managed in user space, each process needs its own private thread
table to keep track of the threads in that process.
The thread table keeps track only of the per-thread items (program counter, stack pointer, registers, state, etc.).
When a thread does something that may cause it to become blocked locally (e.g. wait
for another thread), it calls a run-time system procedure.
If the thread must be put into blocked state, the procedure performs thread
switching.
User-thread libraries include POSIX Pthreads, Mach C-threads, and Solaris 2 UI-
threads.
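Since POSIX Pthreads is mentioned above, here is a minimal thread-creation sketch using that library (note that on most modern systems Pthreads maps onto kernel threads rather than being purely user level; compile with cc -pthread):

    #include <pthread.h>
    #include <stdio.h>

    /* Function executed by the new thread. */
    static void *worker(void *arg)
    {
        printf("hello from thread %s\n", (const char *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;

        pthread_create(&tid, NULL, worker, "one");  /* create the thread   */
        pthread_join(tid, NULL);                    /* wait for it to exit */
        return 0;
    }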
User-level Threads: Advantages
Since the kernel is not involved, thread switching may be very fast.
Each process may have its own customized thread-scheduling algorithm.
User-level Threads: Disadvantage
The implementation of blocking system calls (where the rest of the processing must wait until the call returns) is highly problematic (e.g., reading from the keyboard). All the threads in the process risk being blocked!
A possible solution, discussed under the user-space implementation below, is a separate system call that tests whether a read would block.
Kernel thread:
Kernel threads are supported directly by the operating system: The kernel performs
thread creation, scheduling, and management in kernel space. Because thread management
is done by the operating system, kernel threads are generally slower to create and manage
than are user threads. However, since the kernel is managing the threads, if a thread
performs a blocking system call, the kernel can schedule another thread in the application
for execution. Also, in a multiprocessor environment, the kernel can schedule threads on
different processors. Most contemporary operating systems, including Windows NT, Windows 2000, Solaris 2, BeOS, and Tru64 UNIX (formerly Digital UNIX), support kernel threads.
The kernel has a thread table that keeps track of all threads in the system.
All calls that might block a thread are implemented as system calls (greater cost).
When a thread blocks, the kernel may choose another thread from the same process,
or a thread from a different process.
Some kernels recycle their threads; new threads use the data-structures of already
completed threads.
Advantages of threads: The benefits of multithreaded programming (responsiveness, resource sharing, economy, and the simplicity of lightweight threads) are discussed in detail under Thread Usage below.
Thread Drawbacks
• Synchronization
E.g., one needs to be very careful to avoid race conditions, deadlocks, and other concurrency problems (see the sketch after this list).
• Lack of independence
The RAM address space is shared, with no memory protection between threads.
The stacks of the threads are intended to occupy separate regions of RAM, but if one thread has a problem (e.g., with pointers or array addressing), it can write over the stack of another thread.
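The race-condition risk mentioned under Synchronization can be demonstrated with a shared counter. In the illustrative Pthreads sketch below, two threads increment the same variable; without the mutex, updates would be lost because the increment is a non-atomic read-modify-write.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;   /* shared by both threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *increment(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* without the lock, the    */
            counter++;                    /* read-modify-write races  */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }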
Multithreading Models
Many systems provide support for both user and kernel threads, resulting in different multithreading models. We look at three common types of threading implementation.
1. Many-to-One Model
2. One-to-one Model
3. Many-to-Many Model
1. Many-to-One Model
The many-to-one model (Figure) maps many user-level threads onto a single kernel thread.
Thread management is handled by the thread library in user space, which is very efficient.
However, if a blocking system call is made, the entire process blocks, even if the other user threads would otherwise be able to continue.
Because a single kernel thread can operate only on a single CPU, the many-to-one model does not allow individual processes to be split across multiple CPUs; only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
Green threads for Solaris and GNU Portable Threads implement the many-to-one
model.
2. One-to-one Model
The one-to-one model creates a separate kernel thread to handle each user thread.
The one-to-one model overcomes the problems listed above involving blocking system calls and the splitting of processes across multiple CPUs: when one thread blocks, another is allowed to run, and threads can run in parallel on multiprocessors.
However, managing the one-to-one model involves more overhead, since creating a user thread requires creating the corresponding kernel thread, which slows the system down.
Most implementations of this model place a limit on how many threads can be created.
Linux and Windows (from 95 through XP) implement the one-to-one model for threads.
3. Many-to-Many Model
The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads, combining the best features of the one-to-one and
many-to-one models.
Users have no restrictions on the number of threads created.
Blocking kernel system calls do not block the entire process.
Processes can be split across multiple processors.
Individual processes may be allocated variable numbers of kernel threads,
depending on the number of CPUs present and other factors.
Thread Usage
1. Responsiveness: Multithreading an interactive application may allow a
program to continue running even if part of it is blocked or is performing
a lengthy operation, thereby increasing responsiveness to the user. For
instance, a multithreaded web browser could still allow user interaction
in one thread while an image is being loaded in another thread.
2. Resource sharing: By default, threads share the memory and the resources
of the process to which they belong. The benefit of code sharing is that it
allows an application to have several different threads of activity all within
the same address space.
3. Economy: Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. Empirically gauging the difference in overhead can be difficult, but in general it is much more time consuming to create and manage processes than threads. In Solaris 2, creating a process is about 30 times slower than creating a thread, and context switching is about five times slower.
4. Lightness: Threads are lightweight processes; they are easier to create and destroy. Decomposing a process into multiple threads that run in quasi-parallel also makes the programming model simpler.
Threads can be implemented in two places:
1. user space
2. kernel space
The first method is to put the threads package entirely in user space; the kernel knows nothing about them. As far as the kernel is concerned, it is managing ordinary, single-threaded processes. The first, and most obvious, advantage is that a user-level threads package can be implemented on an operating system that does not support threads. To implement threads in user space, a thread table is created in user-space memory.
Fig: A user-level threads package.
When threads are managed in user space, each process needs its own private thread table to keep track of the threads in that process. The thread table is managed by the run-time system: when a thread is moved to the ready state or the blocked state, the information needed to restart it is stored in the thread table.
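Such a thread table might be sketched as follows (field names are illustrative, not taken from any real run-time system):

    /* Illustrative sketch of a per-process, user-space thread table. */
    #define MAX_THREADS 32

    enum thread_state { T_READY, T_RUNNING, T_BLOCKED };

    struct thread_entry {
        int               tid;        /* thread identifier          */
        enum thread_state state;      /* ready, running, or blocked */
        unsigned long     pc;         /* saved program counter      */
        unsigned long     sp;         /* saved stack pointer        */
        unsigned long     regs[16];   /* other saved registers      */
    };

    /* One table per process, kept in user-space memory and managed
       by the run-time system; the kernel never sees it. */
    static struct thread_entry thread_table[MAX_THREADS];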
If a thread makes a blocking system call, all threads in the task will stop. This is unacceptable, but unavoidable if blocking system calls are the only option (which is common). It can be solved in a clumsy way if there is a separate system call to test whether a read will block.
Implementation of preemptive scheduling (with signals) is usually inefficient. Non-preemptive scheduling means that a looping thread will stop all other threads in the same task.
Parallel execution of threads in a multiprocessor is not possible.
Programs that use threads usually do many system calls, making the system call
overhead for process switching less important.