Short Type Questions and Answers On: Operating System
Short Type
Questions and Answers
on
OPERATING SYSTEM
Prepared by,
Dr. Subhendu Kumar Rath,
BPUT, Odisha.
OPERATING SYSTEM (SHORT QUESTION AND ANSWERS)
By Dr.S.K.Rath, BPUT
Also called the FIFO anomaly. Usually, on increasing the number of frames allocated
to a process's virtual memory, the process executes faster, because fewer page
faults occur. Sometimes, however, the reverse happens, i.e., the execution time
increases even when more frames are allocated to the process. This is Belady's
Anomaly. It occurs for certain page reference patterns.
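Belady's Anomaly is easy to reproduce with a short FIFO page-replacement simulation. The sketch below (illustrative Python, not part of any particular OS) uses the classic reference string for which adding a fourth frame increases the number of faults:

```python
from collections import deque

def fifo_page_faults(refs, nframes):
    """Count page faults for FIFO replacement with nframes frames."""
    frames = deque()              # oldest resident page sits at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()  # evict the oldest resident page
            frames.append(page)
    return faults

# Classic reference string that exhibits Belady's Anomaly under FIFO:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # 9 faults with 3 frames
print(fifo_page_faults(refs, 4))  # 10 faults with 4 frames: more frames, more faults
```

Note that LRU and other "stack" algorithms do not suffer from this anomaly; it is specific to algorithms such as FIFO.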
A binary semaphore is one, which takes only 0 and 1 as values. They are used to
implement mutual exclusion and synchronize concurrent processes.
4. What is thrashing?
Thrashing is a phenomenon in virtual memory schemes in which the processor
spends most of its time swapping pages rather than executing instructions,
because too few frames are allocated to each process.
Long term scheduler determines which programs are admitted to the system for
processing. It controls the degree of multiprogramming. Once admitted, a job
becomes a process.
Short term scheduler, also known as the dispatcher, executes most frequently, and
makes the finest-grained decision of which process should execute next. This
scheduler is invoked whenever an event occurs. It may lead to interruption of one
process by preemption.
Turnaround time is the interval between the submission of a job and its
completion. Response time is the interval between submission of a request, and the
first response to that request.
User data: Modifiable part of user space. May include program data, user stack
area, and programs that may be modified.
System Stack: Each process has one or more LIFO stacks associated with it. Used
to store parameters and calling addresses for procedure and system calls.
In a cached system, the base addresses of the last few referenced pages are
maintained in a small, fast cache called the TLB (Translation Lookaside Buffer),
which aids in faster lookup. The TLB contains those page-table entries that have
been most recently used. Normally, each virtual memory reference causes two
physical memory accesses: one to fetch the appropriate page-table entry, and one
to fetch the desired data. With a TLB in between, this is reduced to just one
physical memory access in the case of a TLB hit.
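The saving can be quantified with the usual effective-access-time calculation; in the sketch below the 1 ns TLB lookup, 100 ns memory access and 90% hit ratio are hypothetical figures chosen for illustration:

```python
def effective_access_time(hit_ratio, tlb_ns, mem_ns):
    """Weighted average of the TLB-hit and TLB-miss access paths."""
    hit = tlb_ns + mem_ns        # hit: TLB lookup + one memory access for the data
    miss = tlb_ns + 2 * mem_ns   # miss: TLB lookup + page-table fetch + data fetch
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.90, 1, 100))  # ≈ 111 ns, versus 201 ns with no TLB
```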
Resident set is that portion of the process image that is actually in real-memory at a
particular instant. Working set is that subset of resident set that is actually needed
for execution. (Relate this to the variable-window size method for swapping
techniques.)
The set of dispatchable processes is in a safe state if there exists at least one
temporal order in which all processes can be run to completion without resulting in
a deadlock.
If one or a few processes have a high access rate to data on one track of a storage
disk, then they may monopolize the device by repeated requests to that track. This
generally happens with most common device scheduling algorithms (LIFO, SSTF,
C-SCAN, etc.). High-density multisurface disks are more likely to be affected by
this than low-density ones.
The repeated execution of a loop of code while waiting for an event to occur is
called busy-waiting. The CPU is not engaged in any real productive activity during
this period, and the process does not progress toward completion.
In message passing, it is the condition in which both the sender and the receiver
are blocked until the message is delivered.
Trapdoor is a secret undocumented entry point into a program used to grant access
without normal methods of access authentication. A trap is a software interrupt,
usually the result of an error condition.
20. Define latency, transfer and seek time with respect to disk I/O.
Seek time is the time required to move the disk arm to the required track.
Rotational delay or latency is the time it takes for the beginning of the required
sector to reach the head. Sum of seek time (if any) and latency is the access time.
Time taken to actually transfer a span of data is transfer time.
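These quantities combine as access time = seek time + average rotational latency + transfer time. A small Python sketch (the 5 ms seek, 7200 rpm spindle speed and 128-sector track are hypothetical figures for illustration):

```python
def access_time(seek_ms, rpm, sectors_to_read, sectors_per_track):
    """Access time in ms = seek + average rotational latency + transfer."""
    rev_ms = 60_000 / rpm                  # time for one full revolution
    latency = rev_ms / 2                   # on average, wait half a revolution
    transfer = (sectors_to_read / sectors_per_track) * rev_ms
    return seek_ms + latency + transfer

# Hypothetical drive: 5 ms seek, 7200 rpm, reading one full 128-sector track
print(access_time(5, 7200, 128, 128))     # ≈ 17.5 ms
```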
Free memory is maintained in linked lists, one list for each block size; every
such block has a size of 2^k for some k. When memory is required by a process,
the smallest free block that is large enough is chosen and, if it is larger than
needed, repeatedly broken into two halves. Two such halves differ in address
only in their kth bit, and are called buddies. When any
used block is freed, the OS checks to see if its buddy is also free. If so, it is
rejoined, and put into the original free-block linked-list.
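Because buddies differ only in their kth address bit, a block's buddy can be found with a single XOR. A small illustrative Python sketch (addresses taken relative to the start of the managed pool; not a full allocator):

```python
def buddy_of(addr, k):
    """Address of the buddy of the 2**k-byte block at addr.

    addr must be a multiple of 2**k; buddies differ only in bit k.
    """
    return addr ^ (1 << k)

# Splitting a 32-byte block at address 0 into two 16-byte halves
# yields buddies at addresses 0 and 16:
print(buddy_of(0, 4))   # 16
print(buddy_of(16, 4))  # 0 -- buddy_of is its own inverse, which is
                        # exactly what the OS exploits when coalescing
```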
23. How are the wait/signal operations for monitor different from those for
semaphores?
If a process in a monitor signals and no task is waiting on the condition variable,
the signal is lost. This allows easier program design. In semaphores, by contrast,
every operation affects the value of the semaphore, so the wait and signal
operations must be perfectly balanced in the program.
24. In the context of memory management, what are placement and
replacement algorithms?
A placement algorithm determines where in real memory an incoming piece (page,
segment or partition) of a process is to be loaded. A replacement algorithm
determines which piece currently in memory is to be removed to make room for an
incoming piece.
25. In loading programs into memory, what is the difference between load-
time dynamic linking and run-time dynamic linking?
For load-time dynamic linking: Load module to be loaded is read into memory.
Any reference to a target external module causes that module to be loaded and the
references are updated to a relative address from the start base address of the
application module.
With run-time dynamic loading: Some of the linking is postponed until actual
reference during execution. Then the correct module is loaded and linked.
With demand paging, a page is brought into memory only when a location on that
page is actually referenced during execution. With pre-paging, pages other than the
one demanded by a page fault are brought in. The selection of such pages is done
based on common access patterns, especially for secondary memory devices.
Yes.
32. What are the key object oriented concepts used by Windows NT?
When the OS creates a process at the explicit request of another process, this
action is called process spawning.
15 jobs.
37. List out some reasons for process termination.
1. Normal completion
2. Time limit exceeded
3. Memory unavailable
4. Bounds violation
5. Protection error
6. Arithmetic error
7. Time overrun
8. I/O failure
9. Invalid instruction
10. Privileged instruction
11. Data misuse
12. Operator or OS intervention
13. Parent termination
Reasons for process suspension include:
1. swapping
2. interactive user request
3. timing
4. parent process request
Process migration is the transfer of a sufficient amount of the state of a
process from one machine to a target machine, so that the process can execute on
the target machine.
The idle thread is the special thread a dispatcher will execute when no ready
thread is found.
1. Ready
2. Standby
3. Running
4. Waiting
5. Transition
6. Terminated
In Windows NT, executive refers to the operating system code that runs in kernel
mode.
47. What are DDKs? Name an operating system that includes this feature.
DDKs are device driver kits, which are equivalent to SDKs for writing device
drivers. Windows NT includes DDKs.
C2 level security.
Previous Year Question and Answer
Subject- Operating System
Q.1)(a) What is Throughput, Turnaround time, Waiting time and Response
time?
Ans: Throughput is defined as the number of processes that complete their
execution per unit time.
Turnaround time is the amount of time to execute a particular process.
Waiting time is the amount of time a process has been waiting in the ready queue.
Response time is the amount of time it takes from when a request was submitted
until the first response is produced, not the time it takes to output that
response (used for time-sharing environments).
(b) What is Reentrancy?
Ans: A computer program or subroutine is called reentrant if it can be interrupted
in the middle of its execution and then safely called again ("re-entered") before its
previous invocations complete execution. The interruption could be caused by an
internal action such as a jump or call, or by an external action such as a hardware
interrupt or signal. Once the reentered invocation completes, the previous
invocations will resume correct execution.
(c) What is the difference between Hard and Soft real time Systems?
Ans: A hard real-time system guarantees that critical tasks be completed on time.
This goal requires that all delays in the system be bounded, from the retrieval of
stored data to the time that it takes the operating system to finish any request made
of it. Such time constraints dictate the facilities that are available in hard real-time
systems.
A soft real-time system is a less restrictive type of real-time system. Here, a
critical real-time task gets priority over other tasks and retains that priority until it
completes. Soft real-time systems can be mixed with other types of systems.
Because of their weaker guarantees, they are risky to use for industrial control
and robotics.
(d) What resources are used when a thread is created? How do they differ from
those used when a process is created?
Ans: Because a thread is smaller than a process, thread creation typically uses
fewer resources than process creation. Creating a process requires allocating a
process control block (PCB), a rather large data structure. The PCB includes a
memory map, list of open files, and environment variables. Allocating and
managing the memory map is typically the most time-consuming activity. Creating
either a user or kernel thread involves allocating a small data structure to hold a
register set, stack, and priority.
(e) What is a binary semaphore? What is its use?
Ans: A binary semaphore is a semaphore with an integer value that can range only
between 0 and 1. It is used to implement mutual exclusion and to synchronize
cooperating processes. A binary semaphore can be simpler to implement than a
counting semaphore, depending on the underlying hardware architecture.
(f) What is the difference between synchronization and mutual exclusion?
Ans: If one process is executing in its critical section, then no other process
is allowed to enter its critical section. This is called mutual exclusion.
Synchronization refers to one of two distinct but related concepts:
synchronization of processes, and synchronization of data. Process
synchronization refers to the idea that multiple processes are to join up or
handshake at a certain point, in order to reach an agreement or commit to a certain
sequence of action. Data synchronization refers to the idea of keeping multiple
copies of a dataset in coherence with one another, or to maintain data integrity.
Process synchronization primitives are commonly used to implement data
synchronization.
g) List the Coffman’s conditions that lead to a deadlock.
Ans: A deadlock situation can arise if and only if all of the following conditions
hold simultaneously in a system:
1. Mutual exclusion: at least one resource must be held in a non-sharable mode.
2. Hold and wait: a process is holding at least one resource while waiting for
resources held by other processes.
3. No preemption: a resource can be released only voluntarily by the process
holding it.
4. Circular wait: there exists a set of waiting processes {P0, P1, ..., Pn} such
that P0 is waiting for a resource held by P1, P1 for a resource held by P2, ...,
and Pn for a resource held by P0.
These four conditions are known as the Coffman conditions from their first
description in a 1971 article by Edward G. Coffman.
2)(a) Name the three types of schedulers and give functions of each.
Ans: The three types of schedulers are the long-term scheduler (admits jobs to
the system and controls the degree of multiprogramming), the short-term
scheduler or dispatcher (selects which ready process runs on the CPU next), and
the medium-term scheduler (handles swapping processes in and out of memory).
Three common CPU scheduling algorithms used by the short-term scheduler are
FCFS, SJF and Priority scheduling.
First-Come, First-Served Scheduling
Consider the following set of processes that arrive at time 0, with the length
of the CPU-burst time given in milliseconds:
Process  Burst Time
P1       24
P2       3
P3       3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we
get the result shown in the following Gantt chart:
| P1 | P2 | P3 |
0    24   27   30
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process
P2, and 27 milliseconds for process P3. Thus, the average waiting time is
(0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3,
P1, however, the results will be as shown in the following Gantt chart:
| P2 | P3 | P1 |
0    3    6    30
The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds.
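The two FCFS schedules above can be verified with a few lines of Python (an illustrative sketch, assuming all processes arrive at time 0 and are served in list order):

```python
def fcfs_waiting(bursts):
    """Waiting time of each process under FCFS, all arriving at time 0."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier ones finish
        clock += burst
    return waits

w1 = fcfs_waiting([24, 3, 3])   # order P1, P2, P3 -> waits [0, 24, 27]
w2 = fcfs_waiting([3, 3, 24])   # order P2, P3, P1 -> waits [0, 3, 6]
print(sum(w1) / 3)  # 17.0
print(sum(w2) / 3)  # 3.0
```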
In addition, consider FCFS performance in a dynamic situation with one CPU-bound
process and many I/O-bound processes: whenever the I/O-bound processes complete
their I/O, the CPU-bound process will then move back to the ready queue and be
allocated the CPU. Again, all the I/O processes end up waiting in the ready queue
until the CPU-bound process is done. There is a convoy effect, as all the other
processes wait for the one big process to get off the CPU. This effect results in
lower CPU and device utilization than might be possible if the shorter processes
were allowed to go first.
The FCFS scheduling algorithm is non-preemptive. Once the CPU has been
allocated to a process, that process keeps the CPU until it releases the CPU, either
by terminating or by requesting I/O. The FCFS algorithm is particularly
troublesome for time-sharing systems, where each user needs to get a share of the
CPU at regular intervals. It would be disastrous to allow one process to keep the
CPU for an extended period.
Shortest-Job-First Scheduling
Consider the following set of processes, arriving at time 0, with the length of
the CPU-burst time given in milliseconds:
Process  Burst Time
P1       6
P2       8
P3       7
P4       3
Using SJF scheduling, we would schedule these processes according to the
following Gantt chart:
| P4 | P1 | P3 | P2 |
0    3    9    16   24
The average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
Thus, users are motivated to estimate the process time limit accurately, since
a lower value may mean faster response. (Too low a value will cause a time-limit
exceeded error and require resubmission.) SJF scheduling is used frequently in
long-term scheduling.
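The non-preemptive SJF schedule above can be checked numerically with a short Python sketch (assuming all four processes arrive at time 0):

```python
def sjf_waiting(bursts):
    """Non-preemptive SJF waiting times, all processes arriving at time 0."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock          # process i starts once shorter jobs finish
        clock += bursts[i]
    return waits

bursts = [6, 8, 7, 3]             # P1..P4 from the example above
w = sjf_waiting(bursts)           # runs in the order P4, P1, P3, P2
print(w)                          # [3, 16, 9, 0]
print(sum(w) / 4)                 # 7.0
```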
As an example, consider the following four processes, with the length of the
CPU-burst time given in milliseconds:
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5
If the processes arrive at the ready queue at the times shown and need the
indicated burst times, then the resulting preemptive SJF schedule is as depicted in
the following Gantt chart:
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
The average waiting time is ((10 - 1) + 0 + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5
milliseconds.
Priority Scheduling
An SJF algorithm is simply a priority algorithm where the priority (p) is the
inverse of the (predicted) next CPU burst. The larger the CPU burst, the lower the
priority, and vice versa.
Note that we discuss scheduling in terms of high priority and low priority.
Priorities are generally some fixed range of numbers, such as 0 to 7, or 0 to 4,095.
However, there is no general agreement on whether 0 is the highest or lowest
priority. Some systems use low numbers to represent low priority; others use low
numbers for high priority. This difference can lead to confusion. In this text, we
use low numbers to represent high priority. As an example, consider the following
set of processes, assumed to have arrived at time 0, in the order P1, P2, ..., P5,
with the length of the CPU-burst time given in milliseconds:
Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
The average waiting time is 8.2 milliseconds. Priorities can be defined either
internally or externally. Internally defined priorities use some measurable quantity
or quantities to compute the priority of a process.
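The 8.2 ms figure can be verified with a short Python sketch of non-preemptive priority scheduling (all processes assumed to arrive at time 0; low number = high priority, as in the text):

```python
def priority_waiting(bursts, prios):
    """Non-preemptive priority scheduling; lower number means higher priority."""
    order = sorted(range(len(bursts)), key=lambda i: prios[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock          # process i waits for all higher-priority jobs
        clock += bursts[i]
    return waits

bursts = [10, 1, 2, 1, 5]         # P1..P5 from the example above
prios = [3, 1, 4, 5, 2]
w = priority_waiting(bursts, prios)   # runs in the order P2, P5, P1, P3, P4
print(sum(w) / 5)                 # 8.2
```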
(b) Give the queuing diagram representing process scheduling and show the
action point for the different types of CPU schedulers.
Ans: A common representation for a discussion of process scheduling is a queuing
diagram. Each rectangular box represents a queue: the ready queue and a set of
device queues. A new process is initially put in the ready queue, where it waits
until it is dispatched. Once the process is allocated the CPU, it executes until
it terminates, issues an I/O request (and joins a device queue), or is preempted
and returned to the ready queue. The long-term scheduler acts at the point where
jobs are admitted into the ready queue, the short-term scheduler at the dispatch
point from the ready queue to the CPU, and the medium-term scheduler at the
swap-in/swap-out points.
To ensure that these difficulties do not arise, we require that the writers have
exclusive access to the shared object. This synchronization problem is referred to
as the readers-writers problem. Since it was originally stated, it has been used to
test nearly every new synchronization primitive. The readers-writers problem has
several variations, all involving priorities. The simplest one, referred to as the first
readers-writers problem, requires that no reader will be kept waiting unless a writer
has already obtained permission to use the shared object. In other words, no reader
should wait for other readers to finish simply because a writer is waiting. The
second readers-writers problem requires that, once a writer is ready, that writer
performs its write as soon as possible. In other words, if a writer is waiting to
access the object, no new readers may start reading.
A solution to either problem may result in starvation. In the first case,
writers may starve; in the second case, readers may starve. For this reason, other
variants of the problem have been proposed. In this section, we present a solution
to the first readers-writers problem.
The solution uses semaphores mutex and wrt (both initialized to 1) and an
integer readcount (initialized to 0):
semaphore mutex = 1, wrt = 1;
int readcount = 0;

Writer process:
do {
    wait(wrt);
    ...
    /* writing is performed */
    ...
    signal(wrt);
} while (1);

Reader process:
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (1);
Note that, if a writer is in the critical section and n readers are waiting, then
one reader is queued on wrt, and n - 1 readers are queued on mutex. Also observe
that, when a writer executes signal (wrt), we may resume the execution of either
the waiting readers or a single waiting writer.
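The pseudocode above maps almost line-for-line onto Python's threading module. The runnable sketch below (the loop counts are arbitrary illustrative values) lets the writers' exclusive access be checked by counting their updates:

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # held by a writer, or by the group of readers
readcount = 0
shared = 0                       # the shared object being read and written

def writer(n_writes):
    global shared
    for _ in range(n_writes):
        wrt.acquire()            # wait(wrt)
        shared += 1              # writing is performed with exclusive access
        wrt.release()            # signal(wrt)

def reader(n_reads):
    global readcount
    for _ in range(n_reads):
        mutex.acquire()          # wait(mutex)
        readcount += 1
        if readcount == 1:       # first reader locks writers out
            wrt.acquire()
        mutex.release()          # signal(mutex)
        _ = shared               # reading is performed
        mutex.acquire()
        readcount -= 1
        if readcount == 0:       # last reader lets writers back in
            wrt.release()
        mutex.release()

threads = [threading.Thread(target=writer, args=(1000,)) for _ in range(2)]
threads += [threading.Thread(target=reader, args=(1000,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)      # 2000: no writer update was lost
print(readcount)   # 0
```

As the text warns, this first-readers-writers solution lets a steady stream of readers starve the writers.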
The wait and signal semaphore operations can be defined as:
wait(S) {
    while (S <= 0)
        ;   // no-op
    S--;
}

signal(S) {
    S++;
}
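In Python, the same busy-waiting logic can be sketched as a class. This is illustrative only: the test-and-decrement in wait() is not atomic here, whereas a real implementation must execute it atomically:

```python
class BusyWaitSemaphore:
    """Busy-waiting semaphore sketch mirroring the wait/signal logic above."""

    def __init__(self, value):
        self.S = value

    def wait(self):
        while self.S <= 0:
            pass            # busy-wait (no-op loop) until the semaphore is positive
        self.S -= 1         # NOTE: must be atomic with the test in a real kernel

    def signal(self):
        self.S += 1

s = BusyWaitSemaphore(1)
s.wait()            # S goes 1 -> 0; a second wait() would now spin
print(s.S)          # 0
s.signal()          # S goes 0 -> 1, releasing any spinning waiter
print(s.S)          # 1
```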
Consider two processes P0 and P1. For convenience, when presenting Pi, we
use Pj to denote the other process; that is, j == 1 - i.
boolean flag[2];
int turn;
Initially flag[0] = flag[1] = false, and the value of turn is immaterial (but is
either 0 or 1). The structure of process Pi is shown below.
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;   // busy wait
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);
To enter the critical section, process Pi first sets flag[i] to true and then
sets turn to the value j, thereby asserting that if the other process wishes to enter
the critical section it can do so. If both processes try to enter at the same time, turn
will be set to both i and j at roughly the same time. Only one of these assignments
will last; the other will occur, but will be overwritten immediately. The eventual
value of turn decides which of the two processes is allowed to enter its critical
section first.
To prove property 1, we note that each Pi enters its critical section only if
either flag[j] == false or turn == i. Also note that, if both processes could be
executing in their critical sections at the same time, then flag[0] == flag[1] == true.
These two observations imply that P0 and P1 could not have successfully executed
their while statements at about the same time, since the value of turn can be either
0 or 1, but cannot be both. Hence, one of the processes, say Pj, must have
successfully executed the while statement, whereas Pi had to execute at least one
additional statement ("turn == j"). However, since, at that time, flag[j] == true and
turn == j, and this condition will persist as long as Pj is in its critical section,
the result follows: mutual exclusion is preserved.
Thus, since Pi does not change the value of the variable turn while executing
the while statement, Pi will enter the critical section (progress) after at most one
entry by Pj (bounded waiting).
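Peterson's algorithm can be demonstrated in Python, where CPython's interpreter lock gives the sequentially consistent shared-memory behaviour the algorithm assumes (the loop count and switch interval are arbitrary values chosen so the busy-waiting demo finishes quickly):

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-wait demo is fast

flag = [False, False]        # flag[i] is true while Pi wants its critical section
turn = 0
counter = 0                  # shared variable updated non-atomically
N = 2000                     # critical-section entries per process

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True
        turn = j                        # politely defer to the other process
        while flag[j] and turn == j:    # busy-wait while Pj wants in and has priority
            pass
        counter += 1                    # critical section: a non-atomic update
        flag[i] = False                 # exit section

p0 = threading.Thread(target=process, args=(0,))
p1 = threading.Thread(target=process, args=(1,))
p0.start(); p1.start()
p0.join(); p1.join()
print(counter)   # 4000: mutual exclusion preserved every increment
```

Without the entry/exit protocol, some of the `counter += 1` updates could be lost to interleaving; with it, the final count always equals 2 * N.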
----THANK YOU-----