OPERATING-SYSTEMS Important Questions


OPERATING SYSTEMS - Question Bank

Introduction to OS

S. No Question
1 What are the objectives of an operating system?

Ans: An operating system is a program that manages the computer hardware. It acts as an
intermediary between a user of a computer and the computer hardware. It controls and
coordinates the use of the hardware among the various application programs for the various
users.

2 What are the advantages of peer-to-peer systems over client-server systems?


Ans:
• The main advantage of a peer-to-peer network is that it is easier to set up.
• In a peer-to-peer network all nodes act as both server and client, so there is no need for a dedicated server.
• A peer-to-peer network is less expensive.
• Because it is easier to set up and use, less time is spent on configuration and implementation.
• A dedicated server computer is not required: any computer on the network can function as both a network server and a user workstation.

3 What is the purpose of system programs/system calls?


Ans: System programs can be thought of as bundles of useful system calls. They
provide basic functionality to users so that users do not need to write their
own programs to solve common problems.
4 How does an interrupt differ from a trap?
Ans: An interrupt is a hardware-generated signal that changes the flow within the system.
A trap is a software-generated interrupt.
An interrupt can be used to signal the completion of I/O so that the CPU doesn't have to
spend cycles polling the device. A trap can be used to catch arithmetic errors or to call
system routines.

5 What are the disadvantages of multiprocessor systems?

Ans:
• A complex operating system is required.
• A large main memory is required.
• Very expensive.

6 Does time sharing differ from multiprogramming? If so, how?

Ans: The main difference between multiprogramming and time sharing is that
multiprogramming is the effective utilization of CPU time by allowing several programs
to use the CPU at the same time, whereas time sharing is the sharing of a computing facility
by several users who want to use the same facility at the same time.

7 Why do APIs need to be used rather than system calls?


Ans: There are four basic reasons:
1) System calls differ from platform to platform. By using a stable API, it is easier to
migrate your software to different platforms.
2) The operating system may provide newer versions of a system call with enhanced
features. The API implementation will typically also be upgraded to provide this support,
so if you call the API, you'll get it.

3) The API usually provides more useful functionality than the system call directly. If
you make the system call directly, you'll typically have to replicate the pre-call and post-
call code that's already implemented by the API. (For example the 'fork' API includes tons
of code beyond just making the 'fork' system call. So does 'select'.)
4) The API can support multiple versions of the operating system and detect which
version it needs to use at run time. If you call the system call directly, you either need to
replicate this code or you can only support limited versions.
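For illustration, a minimal C sketch (assuming Linux and glibc) contrasting the portable libc API call with the equivalent raw system call; getpid/SYS_getpid are used here only as a convenient example:

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    pid_t via_api = getpid();                    /* stable, portable API wrapper */
    pid_t via_raw = (pid_t)syscall(SYS_getpid);  /* raw, platform-specific system call */
    printf("API: %ld, raw syscall: %ld\n", (long)via_api, (long)via_raw);
    return 0;
}

Both calls return the same PID; the API form is preferred because the raw call number and calling convention vary from platform to platform.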

8 Compare and contrast DMA and cache memory.


Ans: DMA(Direct Memory Access): Direct memory access (DMA) is a feature of
computer systems that allows certain hardware subsystems to access main memory
(Random-access memory), independent of the central processing unit (CPU).
Cache Memory: A cache is a smaller, faster memory, closer to a processor core, which
stores copies of the data from frequently used main memory locations.
So, both DMA and cache are used for increasing the speed of memory access.

9 Distinguish between batch systems and time-sharing systems.

Ans:

Batch System:
• Jobs are kept in order and run one after the other.
• There is no user interaction during processing.

Time-sharing system:
• Each task is given a specific time slice and the operating system switches between the tasks.
• User interaction is involved in the processing.
10 Compare tightly coupled systems and loosely coupled systems?

Loosely coupled systems:
Each processor has its own local memory. Each processor can communicate with all the
other processors through communication lines.

Tightly coupled systems:
Common memory is shared by many processors. There is no need for any special
communication lines.
11 What is real time system?
Ans: A real time system has well defined, fixed time constraints. Processing must be
done within the defined constraints, or the system will fail. It is often used as a control
device in a dedicated application.

12 What are privileged instructions?


Ans: Some of the machine instructions that may cause harm to a system are designated
as privileged instructions. The hardware allows the privileged instructions to be
executed only in monitor mode.

13 What do you mean by system calls?


Ans: System calls provide the interface between a process and the operating system.
When a system call is executed, it is treated by the hardware as a software interrupt.

14 Define: process
Ans: A process is a program in execution. It is an active entity and it includes the
process stack, containing temporary data, and the data section, which contains global variables.

15 What is process control block?


Ans: Each process is represented in the OS by a process control block. It contains many
pieces of information associated with a specific process.

16 What is scheduler?
Ans: A process migrates between the various scheduling queues throughout its lifetime.
The OS must select processes from these queues in some fashion. This selection
process is carried out by a scheduler.
17 What are the uses of job queues, ready queues and device queues?
Ans: As processes enter the system, they are put into a job queue. This queue consists of
all jobs in the system. The processes that are residing in main memory and are ready and
waiting to execute are kept on a list called the ready queue. The list of processes waiting for
a particular I/O device is kept in the device queue.

18 What is meant by context switch?


Ans: Switching the CPU to another process requires saving the state of the old process
and loading the saved state for the new process. This task is known as context switch.

19 Discuss the difference between symmetric and asymmetric multiprocessing


Ans:
Symmetric multiprocessing (SMP), in which each processor runs an identical copy of
the operating system, and these copies communicate with one another as needed.
Asymmetric multiprocessing, in which each processor is assigned a specific task. The
master processor controls the system; the other processors either look to the master for
instructions or have predefined tasks.

20 What is the main advantage of multiprogramming?


Ans: Multiprogramming makes efficient use of the CPU by overlapping the demands
for the CPU and its I/O devices from various users. It attempts to increase CPU
utilization by always having something for the CPU to execute.

21 Discuss the main advantages of layered approach to system design?


Ans: As in all cases of modular design, designing an operating system in a modular
way has several advantages. The system is easier to debug and modify because changes
affect only limited sections of the system rather than touching all sections of the
operating system. Information is kept only where it is needed and is accessible only
within a defined and restricted area, so any bugs affecting that data must be limited to a
specific module or layer.
22 List the advantage of multiprocessor system?
Ans:

• Increased throughput.
• Economy of scale.
• Increased reliability.
23 Define inter process communication.
Ans: Interprocess communication provides a mechanism to allow cooperating
processes to communicate with each other and synchronize their actions without sharing
the same address space. It is provided by a message-passing system.

24 Identify the difference between mainframe and desktop operating system.


Ans: The design goals of operating systems for those machines are quite different. PCs
are inexpensive, so wasted resources like CPU cycles are inconsequential. Resources are
wasted to improve usability and increase software user interface functionality.
Mainframes are the opposite, so resource use is maximized, at the expense of ease of
use.
25 What is bootstrap program?
Ans: A bootstrap is the program that initializes the operating system (OS) during
startup.

26 Illustrate the different interrupt classes.


Ans:

• Hardware interrupts
• Software interrupts

27 Identify what a virtual machine is and what are the advantages of virtual machines.
Ans: A virtual machine is a completely separate individual operating system installation
on your usual operating system. It is implemented by software emulation and hardware
virtualization.

Advantages:

• Multiple OS environments can exist simultaneously on the same machine, isolated from each other.
• A virtual machine can offer an instruction set architecture that differs from that of the real computer.
• Easy maintenance, application provisioning, availability and convenient recovery.

28 Distinguish between hard real time systems and soft real time systems.
Ans:
A Hard Real-Time System guarantees that critical tasks complete on time.
A Soft Real-Time System is one where a critical real-time task gets priority over other tasks
and retains that priority until it completes.

29 Summarize the functions of DMA.
Ans: Direct memory access (DMA) is a method that allows an input/output (I/O)
device to send or receive data directly to or from the main memory, bypassing the
CPU to speed up memory operations. The process is managed by a chip known as a
DMA controller (DMAC).
30 Illustrate the use of fork and exec system calls.
Ans: fork() is the name of the system call that the parent process uses to "divide" itself
("fork") into two identical processes. After calling fork(), the newly created child process
is an exact copy of the parent except for the return value.
When the child process calls exec(), all data in the original program is lost, and it is
replaced with a running copy of the new program. This is known as overlaying.
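A minimal POSIX C sketch of this sequence (error handling mostly omitted; the program run by exec, here ls, is only an example):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                      /* duplicate the calling process */
    if (pid == 0) {
        /* child: overlay its memory image with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                    /* reached only if exec fails */
        _exit(1);
    }
    wait(NULL);                              /* parent: wait for the child to finish */
    return 0;
}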

31 Define: Clustered systems.


Ans: A computer cluster is a set of loosely or tightly connected computers that work
together so that, in many respects, they can be viewed as a single system.

32 Some computer systems do not provide a privileged mode of operation in hardware.
Is it possible to construct a secure operating system for these computer systems?
Ans: An operating system for a machine of this type would need to remain in control
(or monitor mode) at all times. This could be accomplished by two methods:
a. Software interpretation of all user programs (like some BASIC, Java, and LISP
systems, for example). The software interpreter would provide, in software, what the
hardware does not provide.

b. Requiring that all programs be written in high-level languages so that all object code
is compiler-produced. The compiler would generate (either inline or by function calls)
the protection checks that the hardware is missing.

33 Can traps be generated intentionally by a user program? If so, for what purpose?
Ans: A trap is a software-generated interrupt. An interrupt can be used to signal
the completion of an I/O operation to obviate the need for device polling. A trap can be
used to call operating system routines or to catch arithmetic errors.

34 What are the three main purposes of an operating system?

Ans: The three main purposes are:

• To provide an environment for a computer user to execute programs on computer


hardware in a convenient and efficient manner.

• To allocate the separate resources of the computer as needed to solve the problem
given. The allocation process should be as fair and efficient as possible.

• As a control program it serves two major functions: (1) supervision of the


execution of user programs to prevent errors and improper use of the computer, and
(2) management of the operation and control of I/O devices.
35 What is the purpose of system calls?

Ans: System calls allow user-level processes to request services of the operating
system.

36 What are the five major activities of an operating system with regard to process
management?

Ans: The five major activities are:

a. The creation and deletion of both user and system processes

b. The suspension and resumption of processes

c. The provision of mechanisms for process synchronization

d. The provision of mechanisms for process communication

e. The provision of mechanisms for deadlock handling

37 What are the three major activities of an operating system with regard to
memory management?

Ans:The three major activities are:

a. Keep track of which parts of memory are currently being used and by whom.

b. Decide which processes are to be loaded into memory when memory space
becomes available.

c. Allocate and deallocate memory space as needed.

38 What are the three major activities of an operating system with regard to
secondary-storage management?

Ans: The three major activities are:


• Free-space management.

• Storage allocation.

• Disk scheduling

39 What is an Operating system?

Ans: An operating system is a program that manages the computer hardware. It also
provides a basis for application programs and acts as an intermediary between a user
of a computer and the computer hardware. It controls and coordinates the use of the
hardware among the various application programs for the various users.

40 List the services provided by an Operating System?

Ans:

Program execution

I/O Operation

File-System manipulation

Communications

Error detection
41 What is the Kernel?

Ans: A more common definition is that the OS is the one program running at all times
on the computer, usually called the kernel, with all else being application programs.

42 What is meant by Mainframe Systems?

Ans: Mainframe systems were the first computers developed to tackle many
commercial and scientific applications. These systems evolved from batch systems to
multiprogramming systems and finally to time-sharing systems.

43 What is a Multiprocessor System?

Ans: Multiprocessor systems have more than one processor in close communication,
sharing the computer bus, the memory, the clock and peripheral devices.
44 What are the advantages of multiprocessors?

Increased throughput

Economy of scale

Increased reliability

45 What is the use of Fork and Exec System Calls?

Ans: Fork is a system call by which a new process is created. Exec is also a system
call, which is used after a fork by one of the two processes to replace the process
memory space with a new program.

46 What are the five major categories of System Calls?

Ans:

Process control
File management
Device management
Information maintenance
Communications

47 What are the modes of operation in Hardware Protection?

Ans:

User Mode

Monitor Mode
48 What is meant by Batch Systems?

Ans: Operators batched together jobs with similar needs and ran them through the
computer as a group. The operators would sort programs into batches with similar
requirements and, as the system became available, would run each batch.
49 List the privileged instructions.

Ans:
a. Set value of timer.
b. Clear memory.
c. Turn off interrupts.
d. Modify entries in device-status table.
e. Access I/O device.
50 What are the Components of a Computer System?

Ans:

Application programs
System programs
Operating system
Computer hardware

PART B

1. Explain different operating system structures with neat sketch.

2. What are the advantages and disadvantages of using the same system call interface
for both files and devices?

3. Explain the various types of system calls with examples.

4. What are the basic functions of OS and DMA?

5. Explain the concept of multiprocessor and Multicore organization

6. Describe the difference between symmetric and asymmetric multiprocessing.


Discuss the advantages and disadvantages of multiprocessor systems.

7. Discuss in detail about Distributed systems.

8. Demonstrate the three methods for passing parameters to the OS with examples.

9. Explain how protection is provided for the hardware resources by the operating
system

10. List the various services provided by operating systems.

11. Discuss the DMA driven data transfer technique.

12. Discuss about the evolution of virtual machines. Also explain how virtualization
could be implemented in operating systems

13. With neat sketch, discuss about computer system overview.

14. Give reasons why caches are useful. What problems do they solve and cause? If a
cache can be made as large as the device for which it is caching, why not make it that
large and eliminate the device?
15. Discuss the functionality of system boot with respect to an operating system.

16. Discuss the essential properties of the following types of systems:


PROCESS MANAGEMENT

S. No Question
1 Compare and contrast Single-threaded and multi-threaded process.

Ans:

Single-threading is the processing of one command/process at a time, whereas multi-
threading is a widespread programming and execution model that allows multiple
threads to exist within the context of one process. These threads share the process's
resources but are able to execute independently.

2 Priority inversion is a condition that occurs in real-time systems – analyze this statement.

Ans: Priority inversion is a problem that occurs in concurrent processes when low-
priority threads hold shared resources required by some high-priority threads, causing
the high-priority threads to block indefinitely. This problem is magnified when the
concurrent processes are in a real-time system where high-priority threads must be
served on time.
Priority inversion occurs when task interdependency exists among tasks with different
priorities.

3 Distinguish between CPU-bound and I/O-bound processes.


Ans:
A CPU-bound process spends the majority of its time simply using the CPU (doing
calculations).
An I/O-bound process spends the majority of its time in input/output-related
operations.
4 What resources are required to create threads?

Ans: When a thread is created, it does not require any new resources to execute. The
thread shares the resources of the process to which it belongs, and it requires only a
small data structure to hold a register set, stack, and priority.

5 Under what circumstances user level threads are better than the kernel level
threads?
Ans: User-Level threads are managed entirely by the run-time system (user-level
library).The kernel knows nothing about user-level threads and manages them as if they
were single-threaded processes. User-Level threads are small and fast, each thread is
represented by a PC, register, stack, and small thread control block. Creating a new
thread, switching between threads, and synchronizing threads are done via procedure
call. i.e. no kernel involvement. User- Level threads are hundred times faster than
Kernel- Level threads.
User level threads are simple to represent, simple to manage and fast and efficient.

6 What is the meaning of the term busy waiting?

Ans: Busy-waiting, busy-looping or spinning is a technique in which a process


repeatedly checks to see if a condition is true.

7 List out the data fields associated with process control blocks.
Ans: Process ID, pointers, process state, priority, program counter, CPU registers, I/O
information, memory-management information, accounting information, etc.

8 Define the term ‘Dispatch Latency”.

Ans: The term dispatch latency describes the amount of time it takes for a system to
respond to a request for a process to begin operation.

9 What is the concept behind strong semaphore and spinlock?


Ans: Strong semaphores specify the order in which processes are removed from the
queue (FIFO order), which guarantees avoiding starvation.
Spinlock is a lock which causes a thread trying to acquire it to simply wait in a loop
("spin") while repeatedly checking if the lock is available.
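A minimal sketch of a spinlock using C11 atomics (illustrative only, not a production lock; the function names spin_lock/spin_unlock are ours):

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;                               /* busy-wait ("spin") until the flag is cleared */
}

void spin_unlock(void)
{
    atomic_flag_clear(&lock);           /* release the lock */
}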
10 What is a thread?
Ans: A thread, otherwise called a lightweight process (LWP), is a basic unit of CPU
utilization; it comprises a thread ID, a program counter, a register set and a stack. It
shares with other threads belonging to the same process its code section, data section,
and operating system resources such as open files and signals.
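A minimal pthread sketch (assuming POSIX threads, compiled with -pthread) showing two threads that share the process's data section while each has its own stack; the unsynchronized counter is for illustration only:

#include <stdio.h>
#include <pthread.h>

static int shared_counter = 0;              /* lives in the shared data section */

static void *worker(void *arg)
{
    shared_counter++;                       /* shared by all threads (unsynchronized here) */
    printf("thread %ld done\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);
    return 0;
}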

11 What are the benefits of multithreaded programming?


Ans: The benefits of multithreaded programming can be broken down into four
major categories:

• Responsiveness
• Resource sharing
• Economy
• Utilization of multiprocessor architectures

12 Compare user threads and kernel threads.

Ans:
User threads:-
User threads are supported above the kernel and are implemented by a thread library at
the user level. Thread creation and scheduling are done in the user space, without kernel
intervention. Therefore they are fast to create and manage, but a blocking system call
made by any one thread will cause the entire process to block.

Kernel threads:-
Kernel threads are supported directly by the operating system. Thread creation,
scheduling and management are done by the operating system. Therefore they are
slower to create and manage compared to user threads. If a thread performs a
blocking system call, the kernel can schedule another thread in the application for
execution.
13 What is the use of fork and exec system calls?
Ans: Fork is a system call by which a new process is created. Exec is also a system
call, which is used after a fork by one of the two processes to replace the process
memory space with a new program.

14 Distinguish between user-level threads and kernel-level threads? Under what


circumstances is one type better than the other?
Ans:
• User-level threads are unknown by the kernel, whereas the kernel is aware of
kernel threads.
• User threads are scheduled by the thread library and the kernel schedules kernel
threads.
• Kernel threads need not be associated with a process whereas every user thread
belongs to a process.
15 Define thread cancellation and target thread.
Ans:The thread cancellation is the task of terminating a thread before it has completed.
A thread that is to be cancelled is often referred to as the target thread. For example, if
multiple threads are concurrently searching through a database and one thread returns
the result, the remaining threads might be cancelled.
16 What are the different ways in which a thread can be cancelled?
Ans: Cancellation of a target thread may occur in two different scenarios:

Asynchronous cancellation: One thread immediately terminates the target thread.

Deferred cancellation: The target thread can periodically check if it should
terminate, allowing the target thread an opportunity to terminate itself in an orderly
fashion.
17 Define CPU Scheduling.
Ans: CPU scheduling is the process of switching the CPU among various processes.
CPU scheduling is the basis of multi-programmed operating systems. By switching the
CPU among processes, the operating system can make the computer more productive.
18 Distinguish between preemptive and non- preemptive Scheduling.
Ans: Under non-preemptive scheduling once the CPU has been allocated to a process,
the process keeps the CPU until it releases the CPU either by terminating or switching
to the waiting state. Preemptive scheduling can preempt a process which is utilizing the
CPU in between its execution and give the CPU to another process.
19 List the functions of Dispatcher Module.
Ans: The dispatcher is the module that gives control of the CPU to the process selected
by the short-term scheduler. This function involves:

• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program.

20 What are the various scheduling criteria for CPU scheduling?


Ans: The various scheduling criteria are,

• CPU utilization
• Throughput
• Turnaround time
• Waiting time
• Response time

21 What are the requirements that a solution to the critical section problem must
satisfy?
Ans: The three requirements are
• Mutual exclusion
• Progress
• Bounded waiting
22 Define: Critical section problem.
Ans: Consider a system consisting of n processes. Each process has a segment of code
called a critical section, in which the process may be changing common variables,
updating a table, or writing a file. When one process is executing in its critical section, no
other process is allowed to execute in its critical section.
23 How will you calculate turn-around time?
Ans: Turnaround time is the interval from the time of submission to the time of
completion of a process.
It is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.

24 Name two hardware instructions and their definitions which can be used for
implementing mutual exclusion.
Ans:
• TestAndSet

boolean TestAndSet (boolean &target) {
    boolean rv = target;
    target = true;
    return rv;
}

• Swap

void Swap (boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}
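A sketch, in the same pseudocode style, of how TestAndSet can provide mutual exclusion, assuming a shared boolean lock initialized to false:

do {
    while (TestAndSet(&lock))
        ;                    // entry section: spin until the lock is free

    // critical section

    lock = false;            // exit section: release the lock

    // remainder section
} while (true);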
25 What is a semaphore?
Ans: A semaphore S is a synchronization tool: an integer value that, apart from
initialization, is accessed only through two standard atomic operations, wait and
signal. Semaphores can be used to deal with the n-process critical-section problem.
They can also be used to solve various synchronization problems.
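For illustration, a small sketch using POSIX semaphores (sem_wait corresponds to wait and sem_post to signal); the wrapper function names are ours:

#include <semaphore.h>

sem_t s;

void init(void)  { sem_init(&s, 0, 1); }   /* binary semaphore, initial value 1 */
void enter(void) { sem_wait(&s); }         /* wait: decrement, block if the value is 0 */
void leave(void) { sem_post(&s); }         /* signal: increment, wake a waiting process */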

26 Define Deadlock.
Ans: A process requests resources; if the resources are not available at that time, the
process enters a wait state. Waiting processes may never again change state, because
the resources they have requested are held by other waiting processes. This situation is
called a deadlock.
27 List two programming examples of multithreading giving improved performance
over a single-threaded solution.
Ans:
• A Web server that services each request in a separate thread.
• A parallelized application such as matrix multiplication where different parts of
the matrix may be worked on in parallel.
• An interactive GUI program such as a debugger where a thread is used to
monitor user input, another thread represents the running application, and a
third thread monitors performance.

28 What are the conditions under which a deadlock situation may arise?


Ans: A deadlock situation can arise if the following four conditions hold
simultaneously in a system:
• Mutual exclusion
• Hold and wait
• No pre-emption
• Circular wait
29 What are the methods for handling deadlocks?
Ans: The deadlock problem can be dealt with in one of three ways:
a. Use a protocol to prevent or avoid deadlocks, ensuring that the system will never
enter a deadlock state.
b. Allow the system to enter the deadlock state, detect it and then recover.

c. Ignore the problem all together, and pretend that deadlocks never occur in the
system.

30 What is resource-allocation graph?


Ans: Deadlocks can be described more precisely in terms of a directed graph called a
system resource allocation graph. This graph consists of a set of vertices V and a set of
edges E. The set of vertices V is partitioned into two different types of nodes; P the set
consisting of all active processes in the system and R the set consisting of all resource
types in the system.

31 Define busy waiting and Spinlock.


Ans: When a process is in its critical section, any other process that tries to enter its
critical section must loop continuously in the entry code. This is called busy waiting,
and this type of semaphore is also called a spinlock, because the process keeps
spinning while waiting for the lock.

32 What are the benefits of synchronous and asynchronous communication?


(Apr/May 2018)
Ans: A benefit of synchronous communication is that it allows a rendezvous between
the sender and receiver.
An asynchronous operation is non-blocking and only initiates the operation.

33 Can a multithreaded solution using multiple user-level threads achieve better
performance on a multiprocessor system than on a single-processor system? (Nov/Dec 2018)
Ans: A multithreaded system comprising multiple user-level threads cannot make
use of the different processors in a multiprocessor system simultaneously.

34 Define process?
Ans: A process is more than a program code, which is sometimes known as the text
section. It also includes the current activity, as represented by the value of the program
counter and the processor's registers.

35 Describe the actions taken by a kernel to context-switch between kernel-level
threads.

Ans: Context switching between kernel threads typically requires saving the values of
the CPU registers from the thread being switched out and restoring the CPU registers of
the new thread being scheduled.

36 What is meant by the state of the process?

Ans: The state of the process is defined in part by the current activity of that process.
Each process may be in one of the following states.

New: The process is being created.

Running: Instructions are being executed.

Waiting: The process is waiting for some event to occur.

Ready: The process is waiting to be assigned to a processor.

Terminated: The process has finished execution.

37 What does a process control block contain?

Ans: Each process is represented in the operating system by a process control block
(PCB) – also called a task control block. The PCB simply serves as the repository for
any information that may vary from process to process.
38 What are the 3 different types of scheduling queues?

Ans:

Job Queue: As processes enter the system, they are put into the job queue.
Ready Queue: The processes that are residing in main memory and are ready and
waiting to execute are kept in the ready queue.
Device Queue: The list of processes waiting for a particular I/O device is called a device
queue.

39 Define schedulers?

Ans: A process migrates between the various scheduling queues throughout its lifetime.
The operating system must select, for scheduling purposes, processes from these queues
in some fashion. The selection process is carried out by the appropriate scheduler.

40 What are the types of scheduler?

Ans:

The long-term scheduler, or job scheduler, selects processes from the pool and loads them
into memory for execution. The short-term scheduler, or CPU scheduler, selects among
the processes that are ready to execute and allocates the CPU to one of them.

41 Define critical section?

Ans: Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a
segment of code called a critical section, in which the process may be changing common
variables, updating a table, or writing a file. The important feature of this system is that,
when one process is in its critical section, no other process is allowed to execute
in its critical section.

42 Define Starvation in deadlock.

Ans: A problem related to deadlock is indefinite blocking or starvation, a situation
where processes wait indefinitely within a semaphore. Indefinite blocking may occur if
we add and remove processes from the list associated with a semaphore in LIFO order.

43 Name some classic problem of synchronization?

Ans: The Bounded-Buffer Problem

The Readers–Writers Problem

The Dining-Philosophers Problem
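For the bounded-buffer (producer–consumer) problem, a sketch using POSIX semaphores; the buffer size N and the function names are assumptions for illustration:

#include <semaphore.h>

#define N 8                       /* assumed buffer capacity */

int buffer[N];
int in = 0, out = 0;
sem_t empty, full, mutex;

void init(void)
{
    sem_init(&empty, 0, N);       /* N free slots initially */
    sem_init(&full, 0, 0);        /* no items yet */
    sem_init(&mutex, 0, 1);       /* binary semaphore protecting the buffer */
}

void producer(int item)
{
    sem_wait(&empty);             /* wait for a free slot */
    sem_wait(&mutex);
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&mutex);
    sem_post(&full);              /* signal that an item is available */
}

int consumer(void)
{
    sem_wait(&full);              /* wait for an item */
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);             /* signal that a slot is free */
    return item;
}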


44 What is the sequence of operation by which a process utilizes a resource?

Ans: Under the normal mode of operation, a process may utilize a resource only in the
following sequence:

Request: If the request cannot be granted immediately, then the requesting process
must wait until it can acquire the resource.

Use: The process can operate on the resource.

Release: The process releases the resource.

45 Give the condition necessary for a deadlock-situation to arise?

Ans: A deadlock situation can arise if the following four conditions hold simultaneously in
a system:

Mutual exclusion
Hold and wait
No preemption
Circular wait

46 Define 'Safe State'.

Ans: A state is safe if the system can allocate resources to each process in some order and
still avoid deadlock.

47 Define race condition.

Ans: When several processes access and manipulate the same data concurrently, and the
outcome of the execution depends on the particular order in which the access takes place,
this is called a race condition. To avoid race conditions, only one process at a time should
manipulate the shared variable.

48 Define entry section and exit section.

Ans: The critical-section problem is to design a protocol that the processes can use to
cooperate.
Each process must request permission to enter its critical section. The section of the
code implementing this request is the entry section. The critical section is followed by
an exit section. The remaining code is the remainder section.

49 Define busy waiting and spinlock.

Ans: When a process is in its critical section, any other process that tries to enter its
critical section must loop continuously in the entry code. This is called busy waiting,
and this type of semaphore is also called a spinlock, because the process spins while
waiting for the lock.

50 Explain the difference between preemptive and non - preemptive scheduling.

Ans: Preemptive scheduling allows a process to be interrupted in the midst of its


execution, taking the CPU away and allocating it to another process.

Non preemptive scheduling ensures that a process relinquishes control of the CPU only
when it finishes with its current CPU burst.

1 Suppose that the following processes arrive for execution at the times indicated. Each
process will run the listed amount of time. In answering the questions, use non-preemptive
scheduling and base all decisions on the information you have at the time the decision
must be made.
Process Arrival Time Burst Time
P1 0.0 8
P2 0.4 4
P3 1.0 1

a. Find the average turnaround time for these processes with the FCFS scheduling
algorithm?

b. Find the average turnaround time for these processes with the SJF scheduling
algorithm?
c. The SJF algorithm is supposed to improve performance, but notice that we chose to
run process P1 at time 0 because we did not know that two shorter processes would
arrive soon. Find what is the average turnaround time will be if the CPU is left idle for
the first 1 unit and then SJF scheduling is used.

Remember that processes P1 and P2 are waiting during this idle time, so their
waiting time may increase. This algorithm could be known as future-knowledge
scheduling.

2 State critical section problem? Discuss three solutions to solve the critical section
problem.

3 Illustrate an example situation in which ordinary pipes are more suitable than named
pipes, and an example situation in which named pipes are more suitable than ordinary
pipes.

4 Explain: why interrupts are not appropriate for implementing synchronization primitives
in multiprocessor systems.
5 Elaborate the actions taken by the kernel to context-switch between processes.
6 Consider the following resource-allocation policy. Requests and releases for resources
are allowed at any time. If a request for resources cannot be satisfied because the
resources are not available, then we check any processes that are blocked, waiting for
resources. If they have the desired resources, then these resources are taken away from
them and are given to the requesting process. The vector of resources for which the
waiting process is waiting is increased to include the resources that were taken away.

For example, consider a system with three resource types and the vector

Available initialized to (4,2,2). If process P0 asks for (2,2,1), it gets them. If P1 asks for
(1,0,1), it gets them. Then, if P0 asks for (0,0,1), it is blocked (resource not available). If
P2 now asks for (2,0,0), it gets the available one (1,0,0) and one that was allocated to
P0 (since P0 is blocked).

P0‘s Allocation vector goes down to (1,2,1), and its Need vector goes up to (1,0,1).

a. Predict whether deadlock occurs? If so, give an example. If not, which necessary
condition cannot occur?

b. Predict whether indefinite blocking occurs?


7 Explain dining philosopher’s problem.

8 Distinguish among short-term, medium-term and long-term scheduling with suitable
examples.

9 Explain the differences in the degree to which the following scheduling algorithms
discriminate in favor of short processes: RR, Multilevel Feedback Queues

10 Discuss how the following pairs of scheduling criteria conflict in certain settings.


i) CPU utilization and response time ii) Average turnaround time and maximum
waiting time iii) I/O device utilization and CPU utilization.

11 Write about the various CPU scheduling algorithms.

12 Write about critical regions and monitors.

13 Consider the following page reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1,
7, 0, 1. How many page faults would occur for the following replacement algorithms,
assuming three frames that are all initially empty?

14 How can deadlock be detected? Explain.

15 Write notes about multiple-processor scheduling and real-time scheduling.

MEMORY MANAGEMENT

PART A
S. No Question
1 What is the difference between user-level instructions and privileged instructions?
Ans: A non-privileged (i.e. user-level) instruction is an instruction that any application or
user can execute. A privileged instruction, on the other hand, is an instruction that can only be
executed in kernel mode. Instructions are divided in this manner because privileged
instructions could harm the kernel.

2 Define: Belady’s anomaly?


Ans: In computer storage, Bélády's anomaly is the phenomenon in which increasing the
number of page frames results in an increase in the number of page faults for certain memory
access patterns. This phenomenon is commonly experienced when using the first-in first-out
(FIFO) page replacement algorithm.

3 What is the purpose of paging the page table?

Ans: In certain situations the page tables could become large enough that by paging the page
tables, one could simplify the memory allocation problem (by ensuring that everything is
allocated as fixed-size pages as opposed to variable-sized chunks) and also enable the
swapping of portions of page table that are not currently used.

4 Why page sizes are always power of 2?


Ans: Recall that paging is implemented by breaking up an address into a page and offset
number. It is most efficient to break the address into X page bits and Y offset bits, rather than
perform arithmetic on the address to calculate the page number and offset. Because each bit
position represents a power of 2, splitting an address between bits results in a page size that is
a power of 2.

5 List two differences between logical and physical addresses.


Ans:

Logical address:
1. An address generated by the CPU is referred to as a logical address.
2. The set of all logical addresses generated by a program is a logical address space.
3. It represents the user view of memory.
4. It is generated by the user program (CPU).

Physical address:
1. An address seen by the memory unit, that is, the one loaded into the memory address
register of the memory, is referred to as a physical address.
2. The set of all physical addresses corresponding to these logical addresses is a physical
address space.
3. It represents the system view of memory.
4. It is generated by the memory management unit (MMU).

6 Define demand paging in memory management. (Nov/Dec 2015)


Ans: In virtual memory systems, demand paging is a type of swapping in which pages of
data are not copied from disk to RAM until they are needed.

7 What are the steps required to handle a page fault in demand paging? (Nov/Dec 2015)
Ans: Steps in handling a page fault:

1. The operating system looks at another table to decide:

• Invalid reference – abort
• Just not in memory
2. Find a free frame
3. Swap the page into the frame via a scheduled disk operation
4. Reset tables to indicate the page is now in memory; set the valid bit = v
5. Restart the instruction that caused the page fault

8 Tell the significance of LDT and GDT in segmentation.

Ans: The LDT is supposed to contain memory segments which are private to a specific
program, while the GDT is supposed to contain global segments.

In order to reference a segment, a program must use its index inside the GDT or the LDT.
Such an index is called a segment selector or selector in short.

9 What do you mean by thrashing?


Ans: A process that is spending more time paging than executing is said to be
thrashing. In other words, the process doesn't have enough frames to hold all
the pages for its execution, so it swaps pages in and out very frequently to keep
executing.

10 Explain dynamic loading.


Ans: To obtain better memory-space utilization dynamic loading is used. With dynamic
loading, a routine is not loaded until it is called. All routines are kept on disk in a
relocatable load format. The main program is loaded into memory and executed. If the
routine needs another routine, the calling routine checks whether the routine has been
loaded. If not, the relocatable linking loader is called to load the desired program into
memory.
11 Explain dynamic Linking.
Ans: Dynamic linking is similar to dynamic loading, but rather than loading being postponed
until execution time, linking is postponed. This feature is usually used with system
libraries, such as language subroutine libraries. A stub is included in the image for each
library-routine reference. The stub is a small piece of code that indicates how to locate the
appropriate memory-resident library routine, or how to load the library if the routine is not
already present.

12 Define Overlays.
Ans: To enable a process to be larger than the amount of memory allocated to it, overlays
are used. The idea of overlays is to keep in memory only those instructions and data that
are needed at a given time.

When other instructions are needed, they are loaded into space occupied previously by
instructions that are no longer needed.
13 Define swapping.
Ans: A process needs to be in memory to be executed. However a process can be swapped
temporarily out of memory to a backing store and then brought back into memory for
continued execution. This process is called swapping.

14 What is Demand Paging?


Ans: Virtual memory is commonly implemented by demand paging. In demand paging,
the pager brings only those necessary pages into memory instead of swapping in a whole
process. Thus it avoids reading into memory pages that will not be used anyway,
decreasing the swap time and the amount of physical memory needed.

15 What is pure demand paging?


Ans: When starting execution of a process with no pages in memory, the operating system
sets the instruction pointer to the first instruction of the process, which is on a non-memory-
resident page, and the process immediately faults for the page. After this page is
brought into memory, the process continues to execute, faulting as necessary until every
page that it needs is in memory. At that point, it can execute with no more faults. This
scheme is pure demand paging.

16 Outline about virtual memory.


Ans: Virtual memory is a technique that allows the execution of processes that may not
be completely in memory. It is the separation of user logical memory from physical
memory. This separation provides an extremely large virtual memory, when only a
smaller physical memory is available.

17 Define lazy swapper.


Ans: Rather than swapping the entire process into main memory, a lazy swapper is used.
A lazy swapper never swaps a page into memory unless that page will be needed.

18 What are the common strategies to select a free hole from a set of available holes?
Ans: The most common strategies are,
• First fit
• Worst fit
• Best fit

19 Define effective access time.


Ans: Let p be the probability of a page fault. The value of p is expected to be close to 0;
that is, there will be only a few page faults. The effective access time is

Effective access time = (1 - p) * ma + p * page-fault time,

where ma is the memory-access time. For example, with ma = 200 ns, a page-fault service
time of 8 ms and p = 0.001, the effective access time is 0.999 * 200 ns + 0.001 * 8,000,000 ns
≈ 8200 ns (about 8.2 microseconds).
20 What is the basic approach for page replacement?
Ans: If no frame is free, find one that is not currently being used and free it.
A frame can be freed by writing its contents to swap space and changing the page table to
indicate that the page is no longer in memory.

Now the freed frame can be used to hold the page for which the process faulted.

21 Distinguish between page and segment.


Ans: Paging is used to get a large linear address space without having to buy more
physical memory. Segmentation allows programs and data to be broken up into
logically independent
address spaces and to aid sharing and protection.
22 How can the problem of external fragmentation be solved?
Ans: Solutions to external fragmentation:
1) Compaction: shuffling the fragmented memory into one contiguous location.
2) Virtual memory addressing by using paging and segmentation.

23 Formulate how long a paged memory reference takes if a memory reference takes 200
nanoseconds. Assume a paging system with the page table stored in memory.
Ans: 400 nanoseconds: 200 ns to access the page table plus 200 ns to access the word in
memory.

24 Evaluate the maximum number of pages needed if a system supports a 16-bit
address line and a 1K page size.
Ans:
A 16-bit address can address 2^16 bytes in a byte-addressable machine. Since the size of
a page is 1K bytes (2^10), the number of addressable pages is 2^16 / 2^10 = 2^6 = 64 pages.

25 How does the system discover thrashing?


Ans: In a virtual memory system, thrashing is a situation when there is excessive
swapping of pages between memory and the hard disk, causing the application to respond
more slowly. The operating system often warns users of low virtual memory when
thrashing is occurring.
26 What do you mean by compaction? In which situation is it applied?
Ans: Compaction is a process in which the free space is collected into one large memory
chunk to make some space available for processes. In memory management, swapping
creates multiple fragments in the memory because of the processes moving in and out.
Compaction refers to combining all the empty spaces together by moving the processes.

27 Outline about TLB.


Ans: A translation lookaside buffer (TLB) is a memory cache that is used to reduce the
time taken to access a user memory location. It is a part of the chip's memory-management
unit (MMU). The TLB stores the recent translations of virtual memory to physical
memory and can be called an address-translation cache.

28 List the need of an inverted page table.

Ans:
• There will be only one page table in memory, i.e. one entry for each real page of
memory.
• It decreases the memory needed to store the page tables.

29 Define Address binding.


Ans: Address binding is the process of mapping the program's logical or virtual addresses
to corresponding physical or main memory addresses. In other words, a given logical
address is mapped by the MMU (Memory Management Unit) to a physical address.

30 List the steps needed to handle a page fault.


Ans:
1. The memory address requested is first checked, to make sure it was a valid
memory request.
2. If the reference was invalid, the process is terminated. Otherwise, the page must
be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk.
(This will usually block the process on an I/O wait, allowing some other
process to use the CPU in the meantime.)
5. When the I/O operation is complete, the process's page table is updated with the
new frame number, and the invalid bit is changed to indicate that this is now
a valid page reference.
6. The instruction that caused the page fault must now be restarted from the
beginning (as soon as this process gets another turn on the CPU).
31 Define External Fragmentation.
Ans: It is a situation when the total memory available is enough to satisfy a request, but
not in a contiguous manner.

32 What are the counting-based page replacement algorithms?


Ans: These algorithms keep a counter of the number of references that have been made to
each page. Examples: Least Frequently Used (LFU), Most Frequently Used (MFU).

33 Under what circumstances would a user be better off using a time-sharing system,
rather than a PC or single-user workstation.
Ans:
A user is better off under three situations: when it is cheaper, faster, or easier. For
example:

1. When the user is paying for management costs, and the costs are cheaper for a time-
sharing system than for a single-user computer.

2. When running a simulation or calculation that takes too long to run on a single PC or
workstation.

3. When a user is travelling and doesn't have a laptop to carry around, they can connect
remotely to a time-shared system and do their work.

34 How is memory protected in a paged environment?


Protection bits that are associated with each frame accomplish memory protection in a
paged environment. The protection bits can be checked to verify that no writes are being
made to a read-only page.
35 What are the major problems to implement Demand Paging?

Ans:

The two major problems in implementing demand paging are developing the
frame-allocation algorithm and the page-replacement algorithm.

36 What is Internal Fragmentation?

Ans:
When the allocated memory may be slightly larger than the requested memory, the
difference between these two numbers is internal fragmentation.

37 What do you mean by Compaction?


Compaction is a solution to external fragmentation. The memory contents are
shuffled to place all free memory together in one large block. It is possible only if relocation
is dynamic, and it is done at execution time.

38 What are Pages and Frames?

Ans:

Paging is a memory management scheme that permits the physical-address space of a
process to be non-contiguous. In the case of paging, physical memory is broken into fixed-
sized blocks called frames and logical memory is broken into blocks of the same size called
pages.

39 What is the use of Valid-Invalid Bits in Paging?


When the bit is set to valid, this value indicates that the associated page is in the
process's logical address space and is thus a legal page. If the bit is set to invalid, this
value indicates that the page is not in the process's logical address space. Using the valid-
invalid bit traps illegal addresses.
40 What is the basic method of Segmentation?

Segmentation is a memory management scheme that supports the user view of memory. A
logical address space is a collection of segments. The logical address consists of segment
number and offset. If the offset is legal, it is added to the segment base to produce the
address in physical memory of the desired byte.

41 A program containing relocatable code was created, assuming it would be loaded at
address 0. In its code, the program refers to the following addresses:
50, 78, 150, 152, 154. If the program is loaded into memory starting at location 250, how
do those addresses have to be adjusted?

Ans:
All addresses need to be adjusted upward by 250. So the adjusted addresses would be 300,
328, 400, 402, and 404.

42 What is Pure Demand Paging?
Ans:
When starting execution of a process with no pages in memory, the operating system sets
the instruction pointer to the first instruction of the process, which is on a non-memory-
resident page, and the process immediately faults for the page. After this page is brought
into memory, the process continues to execute, faulting as necessary until every page that
it needs is in memory. At that point, it can execute with no more faults. This scheme is
pure demand paging.

43 What is a Reference String?

Ans: An algorithm is evaluated by running it on a particular string of memory references
and computing the number of page faults. The string of memory references is called a
reference string.

44 Define Secondary Memory.


Ans: This memory holds those pages that are not present in main memory. The secondary
memory is usually a high-speed disk. It is known as the swap device, and the section of the
disk used for this purpose is known as swap space.
45 What is the basic approach of Page Replacement?
Ans: If no frame is free, find one that is not currently being used and free it. A frame can
be freed by writing its contents to swap space and changing the page table to indicate that
the page is no longer in memory. Now the freed frame can be used to hold the page for
which the process faulted.

46 What are the various Page Replacement Algorithms used for Page Replacement?

Ans:
FIFO page replacement
Optimal page replacement
LRU page replacement
LRU approximation page replacement
Counting-based page replacement
Page buffering algorithm
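For illustration, a small C sketch that counts page faults under FIFO replacement; the frame count and reference string are assumed example values:

#include <stdio.h>
#include <stdbool.h>

#define NFRAMES 3

int main(void)
{
    int ref[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 };
    int n = sizeof ref / sizeof ref[0];
    int frame[NFRAMES], next = 0, faults = 0;

    for (int i = 0; i < NFRAMES; i++)
        frame[i] = -1;                         /* all frames initially empty */

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < NFRAMES; j++)
            if (frame[j] == ref[i])
                hit = true;
        if (!hit) {                            /* page fault: evict in FIFO order */
            frame[next] = ref[i];
            next = (next + 1) % NFRAMES;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);
    return 0;
}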

47 What do you mean by Best Fit?


Best fit allocates the smallest hole that is big enough. The entire list has to be searched,
unless it is sorted by size. This strategyproduces the smallest leftover hole.

48 What do you mean by First Fit?

Ans: First fit allocates the first hole that is big enough. Searching can either start at the
beginning of the set of holes or where the previous first-fit search ended. Searching can be
stopped as soon as a free hole that is big enough is found.
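A minimal first-fit sketch over a fixed list of free holes (the hole sizes and the request are assumed example values):

#include <stdio.h>

#define NHOLES 4
static int hole_size[NHOLES] = { 100, 500, 200, 300 };   /* free-hole sizes in KB */

/* Return the index of the first hole big enough, or -1 if none fits. */
int first_fit(int request)
{
    for (int i = 0; i < NHOLES; i++)
        if (hole_size[i] >= request)
            return i;
    return -1;
}

int main(void)
{
    printf("212 KB request -> hole %d\n", first_fit(212));  /* picks hole 1 (500 KB) */
    return 0;
}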

49 Name two differences between logical and physical addresses.

Ans:
A logical address does not refer to an actual existing address; rather, it refers to an
abstract address in an abstract address space. Contrast this with a physical address that
refers to an actual physical address in memory. A logical address is generated by the CPU
and is translated into a physical address by the memory management unit (MMU).
Therefore, physical addresses are generated by the MMU.
50 Consider a logical address space of 64 pages of 1024 words each, mapped onto a
physical memory of 32 frames.

a. How many bits are there in the logical address?

b. How many bits are there in the physical address?

Ans:
a. Logical address: 16 bits (64 pages x 1024 words = 2^6 x 2^10 = 2^16 words)

b. Physical address: 15 bits (32 frames x 1024 words = 2^5 x 2^10 = 2^15 words)

PART B
1 Explain about given memory management techniques. (i) Partitioned allocation (ii) Paging
and translation look-aside buffer.

2 Elaborate about the free space management on I/O buffering and blocking.

3 What is copy-on write feature and under what circumstances it is beneficial? What hardware
support is needed to implement this feature?

4 When page faults will occur? Describe the actions taken by operating system during page
fault.
5 Consider the following page reference string:

1, 2, 3, 4, 2, 1, 5,6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6.

Identify the no.of page faults would occur for the following replacement algorithms,
assuming one, two, three, four, five, six, or seven frames? Remember all frames are initially
empty, so your first unique pages will all cost one fault each.

a. LRU replacement

b. FIFO replacement
c. Optimal replacement

6 Explain about the difference between internal fragmentation and external fragmentation.

7 Why are segmentation and paging sometimes combined into one scheme?

8 Explain why sharing a reentrant module is easier when segmentation is used than when pure
paging is used with example.

9 Discuss a situation under which the most frequently used page replacement algorithm
generates fewer page faults than the least frequently used page replacement algorithm. Also
discuss under which circumstances the opposite holds.

10 Compare paging with segmentation in terms of the amount of memory required by the
address translation structures in order to convert virtual addresses to physical addresses.

11 Most systems allow programs to allocate more memory to its address space during execution.
Data allocated in the heap segments of programs is an example of such allocated memory.
What is required to support dynamic memory allocation in the following schemes? (Nov/Dec
2018)
i) Contiguous memory allocation
ii) Pure segmentation
iii) Pure paging
12 Differentiate local and global page replacement algorithm.
13 Explain the basic concepts of segmentation.

14 What is thrashing? Explain the methods to avoid thrashing.

15 What is the maximum file size supported by a file system with 16 direct blocks, single,
double, and triple indirection? The block size is 512 bytes. A disk block number can be
stored in 4 bytes.
