Operating Systems MCQs

If the parent terminated without invoking wait(), the child process is a/an ___

a.
Interrupter
b.
orphan
c.
zombie
d.
synchronizer

A message includes a header that identifies the sending and receiving ___.


A. Programs
B. Processes
C. Application
D. System

Which of the following IPC mechanisms is easier to implement in a distributed system?


a.
shared memory
b.
ordinary pipe
c.
message passing
d.
socket communication
___ is the location of the instruction to execute next.
a.
CPU register
b.
CPU scheduling
c.
Program counter
d.
Process

Independent process ____


a.
cannot affect or be affected by the execution of another process
b.
interrupts the execution of another process
c.
can be executed separately
d.
can affect or be affected by the execution of another process
In RPC, the client-side stub locates the server and marshals the parameters. "Marshals"
means:
a.
send
b.
receive
c.
packs
d.
unpacks
A blocking system call is considered _____
a.
isochronous
b.
synchronous
c.
asynchronous
d.
terminator

Which of the following selects from among the processes in the ready queue and allocates the
CPU to one of them?
a.
context switch
b.
job scheduler
c.
swapping
d.
CPU scheduler
The list of processes waiting to execute on a CPU is called a __.
a.
device queue
b.
ready queue
c.
standby queue
d.
interrupt queue
Message passing ______
a.
requires the kernel only once, when establishing the shared region
b.
requires the kernel multiple times when establishing the shared region
c.
requires only one system call, consuming kernel time
d.
requires many system calls, consuming kernel time

Ordinary pipes differ from named pipes in that ________


a.
ordinary pipes are unidirectional and named pipes are bidirectional
b.
both are unidirectional
c.
both are bidirectional
d.
ordinary pipes are bidirectional and named pipes are unidirectional

Including the initial parent process, how many processes are created by the program shown
in the following Code? …int main(){fork(); fork(); fork(); fork();return 0;}
a.4
b.8
c.16
d.12
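For reference, each successive fork() doubles the number of processes, so n fork() calls in a row yield 2^n processes including the original parent (here 2^4 = 16). A minimal runnable C sketch of the same idea (the printed pid/ppid line is only illustrative, not part of the exam question):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    fork();   /* 2 processes exist after this call */
    fork();   /* 4 processes */
    fork();   /* 8 processes */
    fork();   /* 16 processes */

    /* Each of the 2^4 = 16 processes prints exactly one line. */
    printf("pid=%d ppid=%d\n", (int)getpid(), (int)getppid());

    while (wait(NULL) > 0)
        ;     /* reap any children this process created */
    return 0;
}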
Which of the following system calls is used to have a new program loaded into the new
process’s memory space?
a.
exit()
b.
wait()
c.
exec()
d.
fork()

If process P0 is switched to process P1, state for P0 will be saved into _, and state from __
will be reloaded?
a.
PCB0, PCB1
b.
PCB0, PCB0
c.
PCB1, PCB1
d.
PCB1, PCB0

To which of the following states will a process be switched from the "ready" state?
a. ready
b. waiting
c. terminated
d. running
To which of the following states will a process be switched from the "running" state when an
interrupt occurs?
a.
ready
b.
new
c.
waiting
d.
terminated

To which of the following states will a process be switched from the "running" state when an I/O
event occurs?
A) ready
B) terminated
C) waiting
D) new
A program is an active entity stored on disk ×
In the ready state, the process is waiting to be assigned to a processor /
Background processes are controlled by the user interface ×
It is an application appearing on the display screen of a mobile device
a.
foreground
b.
display
c.
background
d.
main
The solution of Producer – Consumer problem is to use
a.
Message-Passing
b.
Shared memory
c.
Synchronous
d.
Blocking

A producer process must wait if the buffer is empty ×


the number of processes created by the code "int main() { fork(); for(int i=0;i<1;i++) fork(); }" is 4
processes /

Which of the following is accounting information in the PCB?


a.
Scheduling queue pointers
b.
None of the above
c.
Time limits
d.
List of open files

Direct communication is a physical implementation of communication link ×


It is the set of all processes residing in main memory and waiting to execute
a.
Wait queue
b.
Ready queue
c.
Device queue
d.
None of the above

It contains many pieces of information associated with a specific process


a. PCB
b. Stack
c. Thread
d. Process state

which of the following process state will be switched from new state?
a.
waiting
b.
running
c.
terminated
d.
ready

While a process is running and an interrupt occurs, the process will return to the ready state /
A consumer process must wait if the buffer is full ×
In shared memory, the kernel is required many times when establishing the shared region ×
The following C program, main() { int no= 10; fork(); fork (); printf(“the no= %d”, no); }, How
many processes are created by the program?
a.
one process
b.
four processes
c.
two processes
d.
eight processes

Which of the following IPC mechanisms is easier to implement in a distributed system?


a.
socket communication
b.
ordinary pipe
c.
message passing
d.
shared memory

Unbounded-buffer places no practical limit on the size of the buffer /

If process P1 is switched to process P2, state for P1 will be saved into _, and state from __
will be reloaded?
a.PCB2, PCB1
b.PCB1, PCB1
c.PCB2, PCB2
d.PCB1, PCB2
The following C program, main() { int val= 15; fork(); printf(“the val= %d”, val); }, How many
processes are created by the program?
a.
three processes
b.
two processes
c.
four processes
d.
one process
"operating systems"

Table of Content

Chapter 5 ..................................................................................................................2
Chapter 6 ................................................................................................................ 12
Chapter 8 ................................................................................................................ 24
Chapter 5

Multiple Choice
1. Which of the following is true of cooperative scheduling?

A) It requires a timer.

B) A process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting
state.

C) It incurs a cost associated with access to shared data.

D) A process switches from the running state to the ready state when an interrupt occurs.

2. ____ is the number of processes that are completed per time unit.

A) CPU utilization

B) Response time

C) Turnaround time

D) Throughput

3. ____ scheduling is approximated by predicting the next CPU burst with an exponential
average of the measured lengths of previous CPU bursts.

A) Multilevel queue

B) RR

C) FCFS

D) SJF
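For reference, the exponential average used to predict the next CPU burst (standard textbook form, with 0 <= alpha <= 1) is:

    tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)

where t(n) is the length of the most recent CPU burst and tau(n) is the previous prediction.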

4. The ____ scheduling algorithm is designed especially for time-sharing systems.

A) SJF

B) FCFS

C) RR

D) Multilevel queue

5. Which of the following scheduling algorithms must be non-preemptive?

A) SJF

B) RR

C) FCFS

D) priority algorithms
6. Which of the following is true of multilevel queue scheduling?

A) Processes can move between queues.

B) Each queue has its own scheduling algorithm.

C) A queue cannot have absolute priority over lower-priority queues.

D) It is the most general CPU-scheduling algorithm.

7. In Little's formula, λ, represents the ____.

A) average waiting time in the queue

B) average arrival rate for new processes in the queue

C) average queue length

D) average CPU utilization
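For reference, Little's formula relates the average queue length n, the average arrival rate lambda, and the average waiting time W in a steady-state system:

    n = lambda * W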

8. ______ allows a thread to run on only one processor.

A) Processor affinity

B) Processor set

C) NUMA

D) Load balancing

9. __________ involves the decision of which kernel thread to schedule onto which CPU.

A) Process-contention scope

B) System-contention scope

C) Dispatcher

D) Round-robin scheduling

10. With _______ a thread executes on a processor until a long-latency event (i.e. a memory stall)
occurs.

A) coarse-grained multithreading

B) fine-grained multithreading

C) virtualization

D) multicore processors

11. A significant problem with priority scheduling algorithms is _____.

A) complexity

B) starvation

C) determining the length of the next CPU burst

D) determining the length of the time quantum


12. The ______ occurs in first-come-first-served scheduling when a process with a long CPU
burst occupies the CPU.

A) dispatch latency

B) waiting time

C) convoy effect

D) system-contention scope

13. The rate of a periodic task in a hard real-time system is ____, where p is a period and t is the
processing time.

A) 1/p

B) p/t

C) 1/t

D) pt

14. Which of the following is true of the rate-monotonic scheduling algorithm?

A) The task with the shortest period will have the lowest priority.

B) It uses a dynamic priority policy.

C) CPU utilization is bounded when using this algorithm.

D) It is non-preemptive.

15. Which of the following is true of earliest-deadline-first (EDF) scheduling algorithm?

A) When a process becomes runnable, it must announce its deadline requirements to the system.

B) Deadlines are assigned as follows: the earlier the deadline, the lower the priority; the later the
deadline, the higher the priority.

C) Priorities are fixed; that is, they cannot be adjusted when a new process starts running.

D) It assigns priorities statically according to deadline.

16. The two general approaches to load balancing are ____ and _____.

A) soft affinity, hard affinity

B) coarse grained, fine grained

C) soft real-time, hard real-time

D) push migration, pull migration

17. The purpose of multiprogramming is to:

A) Utilize CPU better

B) Make the computer hardware more user friendly

C) Make it easy for the users to run programs

D) Increase the speed of the hardware

E) All of the above


18. The next process to be executed by the CPU is chosen from the ready queue by

A) Race condition

B) Scheduler

C) Synchronization tools

D) Process control block

E) I/O device

19) Kernel means

A) The parts of the OS code concerned with security

B) Architecture dependent parts of the OS code

C) The entire software shipped as OS by the manufacturer

D) Program running at all times on the computer

20) What is context switch?

A) Overhead time to load program from disk to main memory

B) Overhead time to switch CPU from one process to another

C) Overhead time to switch from one I/O device to another

21) The ready queue can be implemented as a _______________.

A) FIFO queue

B) priority queue

C) tree

D) unordered linked list

E) all choices are correct

22) Which of the following circumstances can preemptive scheduling take place?

A) when a process switches from the running state to the waiting state

B) when a process switches from the waiting state to the ready state

C) when a process terminates

D) all choices are incorrect

23) Which of the following scheduling algorithm may suffer from convoy effect?

A) SJF

B) FCFS

C) RR

D) Multilevel queue
24) Which of the following scheduling algorithms gives the minimum average waiting time for a
given set of processes?

A) SJF

B) FCFS

C) RR

D) Multilevel queue

25) Which of the following scheduling algorithms gives the minimum average response time?

A) SJF

B) FCFS

C) RR

D) Multilevel queue

26) If the time quantum gets too large, RR scheduling degenerates to __________?

A) SJF

B) FCFS

C) Shortest-remaining-time-first

D) Multilevel queue

27) There are 5 different processes running on a workstation with burst times (8ms, 7ms, 15ms, 6ms,
10ms). The processes are scheduled with the Round-Robin time-sharing method. Which of the
following quantum times is the best value for small response times?

a. Qt = 8ms

b. Qt = 10ms

c. Qt = 7ms

d. Qt = 9ms
Quiz4 CH5

1) From the statements below, which one of the following cannot be scheduled by the kernel?

a) kernel level thread

b) user level thread

c) process

d) none of the mentioned

2) In priority scheduling algorithm, when a process arrives at the ready queue, its priority is
compared with the priority of ____________

a) all process

b) currently running process

c) parent process

d) init process

3) By using a priority scheduling algorithm, when a process arrives at the ready queue, a
comparison between its priority and the priority of ____.

a) init process

b) currently running process

c) all processes

d) p

4) In multilevel feedback scheduling algorithm ____________

a) a process can move to a different classified ready queue

b) classification of ready queue is permanent

c) processes are not classified into groups

d) none of the mentioned

5)The strategy of making processes that are logically runnable to be temporarily suspended is
called ____.

a) FIFO

b) non-preemptive scheduling

c) preemptive scheduling

d) SJF
6) The interval between the time a process is submitted and the time of completion is called
the________.

a) throughput

b) turnaround time

c) response time

d) waiting time

7) CPU scheduling is the basis of______.

a) multiprocessor systems

b) multiprogramming operating systems

c) larger memory sized systems

d) none of the mentioned

8) In which of the following cases does non-preemptive scheduling occur?

a) When a process switches from the running state to the ready state

b) When a process transitions from the running state to the waiting state

c) When a process switches from the waiting state to the ready state

d) All of the mentioned

9) The real difficulty with SJF in short term scheduling is

a) it is too good an algorithm

b) getting the length of the next CPU request

c) it is too complex to understand

d) none of the mentioned

10) Usually, an I/O-bound program would have ______.

a) a few very short CPU bursts

b) several very short I/O bursts

c) several very short CPU bursts

d) a few very short I/O bursts

11) ___________ is used productively with multiprogramming.

a) time

b) space

c) money

d) all of the mentioned


12) Response time is _______

a) none of the mentioned

b) the cumulative time taken from the time of submission till the first response is produced.

c) the cumulative time taken from the time of submission till the time of completion.

d) the cumulative time taken from the time of submission till the response is output.

13) Which module provides the process selected by the short-term scheduler with control over
the CPU?

a) dispatcher

b) interrupt

c) scheduler

d) none of the mentioned

14) The interval between the time a process is submitted and the time of completion is called the
____________.

a) waiting time

b) turnaround time

c) response time

d) throughput

15) An SJF algorithm is simply a priority algorithm where the priority is _________

a) the predicted next CPU burst

b) the inverse of the predicted next CPU burst

c) the current CPU burst

d) anything the user wants

16) The working mechanism of priority in the SJF algorithm is ____________

a) the predicted next CPU burst

b) the inverse of the predicted next CPU burst

c) the current CPU burst

d) the current I/O burst

17) A solution to the problem / issue of indefinite blockage of low – priority processes is
__________

a) Starvation

b) Wait queue

c) Ready queue

d) Aging
18) The FCFS algorithm is particularly troublesome/problematic for ____________

a) time sharing systems

b) multiprogramming systems

c) multiprocessor systems

d) operating systems

19) Scheduling is done so as to ____________

a) increase the turnaround time

b) decrease the turnaround time

c) keep the turnaround time same

d) there is no relation between scheduling and turnaround time

20) Scheduling is performed in such a way as to ______.

a) keep the turnaround time the same

b) decrease the turnaround time

c) increase the turnaround time

d) there is no relation between turnaround time and scheduling

21) Non-preemptive scheduling happens in the following situations ____________.

A) when a process transitions from the waiting state to the running state

B) all of the mentioned

C) when a process transitions from the running state to the ready state

D) when a process transitions from the running state to the waiting state
True/False
1) In preemptive scheduling, the sections of code affected by interrupts must be guarded from
simultaneous use.

Ans: True

2) In RR scheduling, the time quantum should be small with respect to the context-switch time.

Ans: False

3) The most complex scheduling algorithm is the multilevel feedback-queue algorithm.

Ans: True

4) Load balancing is typically only necessary on systems with a common run queue.

Ans: False

5) SMP systems that use multicore processors typically run faster than SMP systems that place
each processor on separate physical chips.

Ans: True

6) Round-robin (RR) scheduling degenerates to first-come-first-served (FCFS) scheduling if the


time quantum is too long.

Ans: True

7) Load balancing algorithms have no impact on the benefits of processor affinity.

Ans: False

8) A multicore system allows two (or more) threads that are in compute cycles to execute at the
same time.

Ans: True

9) Providing a preemptive, priority-based scheduler guarantees hard real-time functionality.

Ans: False

10) In hard real-time systems, interrupt latency must be bounded.

Ans: True

11) Interrupt latency – time from arrival of interrupt to start of routine that services interrupt.

Ans: True

12) Dispatch latency – the time it takes for the scheduler to take the current process off the CPU and switch to another.

Ans: True

13) Soft affinity – the operating system attempts to keep a thread running on the same
processor, but no guarantees.

Ans: True

14) Hard affinity – allows a process to specify a set of processors it may run on.

Ans: True
Chapter 6

Multiple Choice
1. A race condition ____.

A) results when several threads try to access the same data concurrently

B) results when several threads try to access and modify the same data concurrently

C) will result only if the outcome of execution does not depend on the order in which instructions are
executed

D) None of the above

2. An instruction that executes atomically ____.

A) must consist of only one machine instruction

B) executes as a single, uninterruptible unit

C) cannot be used to solve the critical section problem

D) All of the above

3. A counting semaphore ____.

A) is essentially an integer variable

B) is accessed through only one standard operation

C) can be modified simultaneously by multiple threads

D) cannot be used to control access to a thread's critical sections

4. A mutex lock ____.

A) is exactly like a counting semaphore

B) is essentially a Boolean variable

C) is not guaranteed to be atomic

D) can be used to eliminate busy waiting

5. In Peterson's solution, the ____ variable indicates if a process is ready to enter its critical
section.

A) turn

B) lock

C) flag[i]

D) turn[i]
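For context, a minimal sketch of Peterson's solution in its usual textbook form for two processes i and j (shared variables flag[2] and turn; on modern hardware it additionally needs memory barriers, which are omitted here):

/* shared between the two processes */
int turn;       /* whose turn it is to enter the critical section */
int flag[2];    /* flag[i] == 1 means process i is ready to enter */

/* structure of process i (j denotes the other process) */
while (1) {
    flag[i] = 1;                 /* i announces it wants to enter      */
    turn = j;                    /* give priority to the other process */
    while (flag[j] && turn == j)
        ;                        /* busy-wait while j is ready and it is j's turn */

    /* critical section */

    flag[i] = 0;                 /* exit section                       */

    /* remainder section */
}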
6. The first readers-writers problem ____.

A) requires that, once a writer is ready, that writer performs its write as soon as possible.

B) is not used to test synchronization primitives.

C) requires that no reader will be kept waiting unless a writer has already obtained permission to use the
shared database.

D) requires that no reader will be kept waiting unless a reader has already obtained permission to use
the shared database.

7. ____________ occurs when a higher-priority process needs to access a data structure that is
currently being accessed by a lower-priority process.

A) Priority inversion

B) Deadlock

C) A race conditions

D) A critical section

8. What is the correct order of operations for protecting a critical section using mutex locks?

A) release() followed by acquire()

B) acquire() followed by release()

C) wait() followed by signal()

D) signal() followed by wait()

9. What is the correct order of operations for protecting a critical section using a binary
semaphore?

A) release() followed by acquire()

B) acquire() followed by release()

C) wait() followed by signal()

D) signal() followed by wait()

10. _____ is not a technique for handling critical sections in operating systems.

A) Non-preemptive kernels

B) Preemptive kernels

C) Spinlocks

D) Peterson's solution
11. A solution to the critical section problem does not have to satisfy which of the following
requirements?

A) mutual exclusion

B) progress

C) atomicity

D) bounded waiting

12. A(n) _______ refers to where a process is accessing/updating shared data.

A) critical section

B) entry section

C) mutex

D) test-and-set

13. _____ can be used to prevent busy waiting when implementing a semaphore.

A) Spinlocks

B) Waiting queues

C) Mutex lock

D) Allowing the wait() operation to succeed

14. What is the purpose of the mutex semaphore in the implementation of the bounded-buffer
problem using semaphores?

A) It indicates the number of empty slots in the buffer.

B) It indicates the number of occupied slots in the buffer.

C) It controls access to the shared buffer.

D) It ensures mutual exclusion.
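For context, a minimal sketch of the bounded-buffer solution with semaphores in the usual textbook form: empty counts free slots, full counts occupied slots, and mutex provides mutual exclusion on the buffer itself (buffer manipulation details omitted):

semaphore mutex = 1;            /* mutual exclusion on the buffer */
semaphore empty = BUFFER_SIZE;  /* number of free slots           */
semaphore full  = 0;            /* number of occupied slots       */

/* producer */
while (1) {
    /* produce an item */
    wait(empty);       /* block if there is no free slot */
    wait(mutex);
    /* add the item to the buffer */
    signal(mutex);
    signal(full);      /* one more occupied slot         */
}

/* consumer */
while (1) {
    wait(full);        /* block if there is no occupied slot */
    wait(mutex);
    /* remove an item from the buffer */
    signal(mutex);
    signal(empty);     /* one more free slot                 */
    /* consume the item */
}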

15. How many philosophers may eat simultaneously in the Dining Philosophers problem with 5
philosophers?

A) 1

B) 2

C) 3

D) 5

16. Which of the following statements is true?

A) A counting semaphore can never be used as a binary semaphore.

B) A binary semaphore can never be used as a counting semaphore.

C) Spinlocks can be used to prevent busy waiting in the implementation of semaphore.

D) Counting semaphores can be used to control access to a resource with a finite number of instances.
17. _____ is/are not a technique for managing critical sections in operating systems.

A) Peterson's solution

B) Preemptive kernel

C) non-preemptive kernel

D) Semaphores

18. When using semaphores, a process invokes the wait() operation before accessing its critical
section, followed by the signal() operation upon completion of its critical section. Consider
reversing the order of these two operations—first calling signal(), then calling wait(). What would
be a possible outcome of this?

A) Starvation is possible.

B) Several processes could be active in their critical sections at the same time.

C) Mutual exclusion is still assured.

D) Deadlock is possible.

19. Which of the following statements is true?

A) Operations on atomic integers do not require locking.

B) Operations on atomic integers do require additional locking.

C) Linux only provides the atomic_inc() and atomic_sub() operations.

D) Operations on atomic integers can be interrupted.

20. A(n) ___________ is a sequence of read-write operations that are atomic.

A) atomic integer

B) semaphore

C) memory transaction

D) mutex lock

21. The OpenMP #pragma omp critical directive ___________.

A) behaves much like a mutex lock

B) does not require programmers to identify critical sections

C) does not guarantee prevention of race conditions

D) is similar to functional languages

22. Another problem related to deadlocks is ____________.

A) race conditions

B) critical sections

C) spinlocks

D) indefinite blocking
23. When several processes access the same data concurrently and the outcome of the
execution depends on the particular order in which the access takes place is called ________

a) dynamic condition

b) race condition

c) essential condition

d) critical condition

24. If a process is executing in its critical section, then no other processes can be executing in
their critical section. What is this condition called?

a) mutual exclusion

b) critical exclusion

c) synchronous exclusion

d) asynchronous exclusion

25. When high priority task is indirectly preempted by medium priority task effectively inverting
the relative priority of the two tasks, the scenario is called __________

a) priority inversion

b) priority removal

c) priority exchange

d) priority modification

26. In the bounded buffer problem, there are the empty and full semaphores that:

a) count the number of empty and full buffers

b) count the number of empty and full memory spaces

c) count the number of empty and full queues

d) none of the mentioned

27. The two atomic operations permissible on semaphores are:

a) wait, signal

b) wait, stop

c) wait, hold

d) hold, signal

28. If a process is executing in its critical section, then no other processes can be executing in
their critical section. This condition is called

a) mutual exclusion

b) critical exclusion

c) synchronous exclusion

d) asynchronous exclusion
29. The wait operation of the semaphore basically works on the basic _______ system call.

a) stop()

b) block()

c) hold()

d) wait()

30. An un-interruptible unit is known as:

a) single

b) atomic

c) static

d) none of the mentioned

31. In a uniprocessor system concurrent processes cannot have overlapped

a) Access

b) Termination

c) Completion

d) Execution

32. A critical section is a program segment

a) which should run in a certain specified amount of time

b) which avoids deadlocks

c) where shared resources are accessed

d) which must be enclosed by a pair of semaphore operations, P and V

33. A locking protocol is one that :

a) governs how locks are acquired

b) governs how locks are released

c) governs how locks are acquired and released

d) none of the mentioned

34. A minimum of _____ variable(s) is/are required to be shared between processes to solve the
critical section problem.

a) one

b) two

c) three

d) four
35. The operations that can be invoked on a condition variable are :

a) wait & signal

b) hold & wait

c) signal & hold

d) continue & signal

36. What is the initial value of the semaphore to allow only one of the many processes to enter
their critical section?

a) 8

b) 1

c) 16

d) 0

37. Let P1 and P2 be the two processes and S1 and S2 be the two shared Boolean variables. The
initial values of S1 and S2 are randomly assigned. For accessing the critical sections of P1 and
P2 the methods used by them are given below:

Method used by P1

While (S1 == S2);

Critical section

S1 = S2;

Method used by P2

While (S1 !=S2)

Critical section

S2 = not (S1);

Which statement / s describes that the properties are achieved?

a) Progress but not mutual exclusion

b) Mutual exclusion but not progress

c) Both mutual exclusion and progress

d) Neither mutual exclusion nor progress

38. If the semaphore value is negative:

a) its magnitude is the number of processes waiting on that semaphore

b) it is invalid

c) no operation can be further performed on it until the signal operation is performed on it

d) none of the mentioned


39. ................. is the ability of multiple process to co-ordinate their activities by exchange of
information.

a) Synchronization

b) Mutual Exclusion

c) Dead lock

d) Starvation

40. …………… is a condition in which there is a set of concurrent processes, only one of which is
able to access a given resource or perform a given function at any time.

a) Mutual Exclusion

b) Busy Waiting

c) Deadlock

d) Starvation

41. The following program consists of 3 concurrent processes and 3 binary semaphores. The
semaphores are initialized as S0 = 1, S1 = 0, S2 = 0.

Process P0:
while (true) {
    wait(S0);
    print '0';
    release(S1);
    release(S2);
}

Process P1:
wait(S1);
release(S0);

Process P2:
wait(S2);
release(S0);

How many times will P0 print ‘0’?

a) At least twice

b) Exactly twice

c) Exactly thrice

d) Exactly once

42. A semaphore:

a) is a binary mutex

b) must be accessed from only one process

c) can be accessed from multiple processes

d) none of the mentioned


Quiz 5 CH6

1. The __________ contains an entry (PCB) for each of the existing processes.

a. process Register

b. process Unit

c. program Counter

d. process Table

2. __________ illustrates the degree of multiprogramming.

a. # of processes executed per unit time

b. # of processes in memory

c. # of processes in the I/O queue

d. # of processes in the ready queue

3. In a time-sharing operating system, the process moves from the running state to the
__________ when the time slot given to a process is completed.

a. ready state

b. blocked state

c. suspended state

d. terminated state

4. It is possible to have mutual exclusion via the __________.

a. none of the mentioned

b. both binary semaphores and mutex locks

c. binary semaphores

d. mutex locks

5. It is possible to do process synchronization on __________.

a. both software and hardware level

b. software level

c. hardware level

d. none of the mentioned


6. What process can be influenced by the execution of other processes in the system?

a. Cooperating process

b. Parent process

c. Child process

d. Init process

7. Constraining the child process to a subset of the resources of the parent prevents any
process from __________.

a. system overloading via using a lot of secondary storage

b. system underloading via very less CPU utilization

c. system crashing via utilizing multiple resources

d. system overloading via creating a lot of sub-processes

8. Which of the following would not disrupt the running process?

a. Timer

b. A device

c. Power failure

d. Scheduler process

9. A semaphore is a shared variable of type integer __________.

a. unable to drop more than zero

b. unable to drop below one

c. unable to drop more than one

d. unable to drop below zero

10. What is a scheduler on a short-term basis?

a. The selection of processes that need to be executed next and allocates the CPU

b. None of the mentioned

c. The selection of processes that need to be included in the ready queue

d. The selection of processes that are to be removed from memory by swapping

11. Which of the following is NOT true for Peterson’s solution?

a. The bounded-waiting requirement is met

b. Peterson’s solution works for synchronization among more than two processes

c. The progress requirement is satisfied

d. Mutual exclusion is preserved


12. Assume a process is waiting for some I/O service in a "blocked" state. It goes to the __________ when the service is finished.

a. running state

b. ready state

c. terminated state

d. suspended state

13. What is a scheduler on a medium-term basis?

a. The selection of processes that are to be removed from memory by swapping

b. The selection of processes that need to be included in the ready queue

c. None of the mentioned

d. The selection of processes that need to be executed next and allocates the CPU

14. The number of completed processes per unit time is referred to as __________.

a. capacity

b. output

c. throughput

d. efficiency

15. If the child process finishes execution and the parent is still in execution state, then the child
process is known as __________.

a. dead

b. zombie

c. body

d. orphan

16. What is a scheduler on a long-term basis?

a. The selection of processes that need to be executed next and allocates the CPU

b. None of the mentioned

c. The selection of processes that are to be removed from memory by swapping

d. The selection of processes that need to be included in the ready queue

17. Only one process will run at a time with __________; all other processes wait for the
processor in the meantime. With __________, more than one process can run simultaneously,
each on a different processor.

a. Multiprogramming, Multiprocessing

b. Uniprogramming, Multiprocessing

c. Multiprocessing, Multiprogramming

d. Multiprogramming, Uniprocessing
18. Which of the below is a method for synchronization?

a. Pipe

b. Socket

c. Thread

d. Semaphore

19. A __________ is the only state transition that is started by the user process itself.

A) wakeup

B) block

C) dispatch

D) none of the mentioned

True/False
1. Race conditions are prevented by requiring that critical regions be protected by locks.

Ans: True

2. The value of a counting semaphore can range only between 0 and 1.

Ans: False

3. A deadlock-free solution eliminates the possibility of starvation.

Ans: False

4. Every object in Java has associated with it a single lock.

Ans: True

5. A thread will immediately acquire a dispatcher lock that is in the signaled state.

Ans: True

6. Mutex locks and counting semaphores are essentially the same thing.

Ans: False

7. Mutex locks and binary semaphores are essentially the same thing.

Ans: True

8. A non-preemptive kernel is safe from race conditions on kernel data structures.

Ans: True

9. Linux mostly uses atomic integers to manage race conditions within the kernel.

Ans: False

10. Instructions from different processes can be interleaved when interrupts are allowed.

Ans: True

11) Both the test_and_set() instruction and compare_and_swap() instruction are executed atomically.

Ans: True

12. Busy waiting refers to the phenomenon that while a process is in its critical section, any other process
that tries to enter its critical section must loop continuously in the call to acquire the mutex lock.

Ans: True
Chapter 8

Multiple Choice

1. A deadlocked state occurs whenever ____.

A) a process is waiting for I/O to a device that does not exist

B) the system has no available free resources

C) every process in a set is waiting for an event that can only be caused by another process in the set

D) a process is unable to release its request for a resource after use

2. One necessary condition for deadlock is ____, which states that at least one resource must be
held in a non-sharable mode.

A) hold and wait

B) mutual exclusion

C) circular wait

D) no preemption

3. One necessary condition for deadlock is ______, which states that a process must be holding
one resource and waiting to acquire additional resources.

A) hold and wait

B) mutual exclusion

C) circular wait

D) no preemption

4. One necessary condition for deadlock is ______, which states that a resource can be released
only voluntarily by the process holding the resource.

A) hold and wait

B) mutual exclusion

C) circular wait

D) no preemption

5. One necessary condition for deadlock is ______, which states that there is a chain of waiting
processes whereby P0 is waiting for a resource held by P1, P1 is waiting for a resource held by
P2, and Pn is waiting for a resource held by P0.

A) hold and wait

B) mutual exclusion

C) circular wait

D) no preemption
6. The witness software product is a ____.

A) lock-order verifier that uses mutual-exclusion locks to protect critical sections

B) modeler to develop resource allocation graphs

C) driver that can be used to prevent mutual exclusion for non-sharable resources

D) implementation of the banker's algorithm available for most operating systems

7. In a system resource-allocation graph, ____.

A) a directed edge from a process to a resource is called an assignment edge

B) a directed edge from a resource to a process is called a request edge

C) a directed edge from a process to a resource is called a request edge

D) None of the above

8. A cycle in a resource-allocation graph is ____.

A) a necessary and sufficient condition for deadlock in the case that each resource has more than one
instance

B) a necessary and sufficient condition for a deadlock in the case that each resource has exactly one
instance

C) a sufficient condition for a deadlock in the case that each resource has more than one instance

D) is neither necessary nor sufficient for indicating deadlock in the case that each resource has exactly
one instance

9. To handle deadlocks, operating systems most often _____.

A) pretend that deadlocks never occur

B) use protocols to prevent or avoid deadlocks

C) detect and recover from deadlocks

D) None of the above

10. Which of the following statements is true?

A) A safe state is a deadlocked state.

B) A safe state may lead to a deadlocked state.

C) An unsafe state is necessarily, and by definition, always a deadlocked state.

D) An unsafe state may lead to a deadlocked state.


11. Suppose that there are ten resources available to three processes. At time 0, the following
data is collected. The table indicates the process, the maximum number of resources needed by
the process, and the number of resources currently owned by each process. Which of the
following correctly characterizes this state?

Process Maximum Needs Currently Owned


P0 10 4
P1 3 1
P2 6 4

A) It is safe.

B) It is not safe.

C) The state cannot be determined.

D) It is an impossible state.

12. Suppose that there are 12 resources available to three processes. At time 0, the following
data is collected. The table indicates the process, the maximum number of resources needed by
the process, and the number of resources currently owned by each process. Which of the
following correctly characterizes this state?

Process Maximum Needs Currently Owned


P0 10 4
P1 3 2
P2 7 4

A) It is safe.

B) It is not safe.

C) The state cannot be determined.

D) It is an impossible state.

13. Which of the following data structures in the banker's algorithm is a vector of length m,
where m is the number of resource types?

A) Need

B) Allocation

C) Max

D) Available

14. Assume there are three resources, R1, R2, and R3, that are each assigned unique integer
values 15, 10, and 25, respectively. What is a resource ordering which prevents a circular wait?

A) R1, R2, R3

B) R3, R2, R1

C) R3, R1, R2

D) R2, R1, R3
15. A _____ could be preempted from a process.

A) mutex lock

B) CPU

C) semaphore

D) file lock

16. In a system that uses deadlock detection algorithm,

A) a deadlock is detected as soon as it occurs.

B) a deadlock is detected just before it occurs.

C) a deadlock is detected sometime after it has occurred but not necessarily immediately.

D) a deadlock is detected sometime before it occurs, but not necessarily just before.

17. Suppose that there are 10 resources available to three processes. At time 0, the following
data is collected. The table indicates the process, the maximum number of resources needed by
the process, and the number of resources currently owned by each process.

Process Maximum Needs Currently Owned


P0 10 4
P1 3 1
P2 5 4

At time 1, P1 requests a resource. Which of the following is correct?

A) The request will be granted, since the current state is safe.

B) The request will not be granted, since the current state is not safe.

C) The request will be granted, since the state after granting the request will be safe.

D) The request will not be granted, since the state after granting the request will be unsafe.

18. Suppose that there are two resources types (R1 and R2) with five resources each available to
four processes. At time 0, the following data is collected. The table indicates the process, the
number of each type currently allocated to the processes, and the current request of each
resource type by each process.

Process Allocation Request


R1 R2 R1 R2
P0 2 0 3 2
P1 1 1 1 0
P2 0 1 1 1
P3 1 1 3 2

Which of the following sentences is correct?

A) All four processes are currently deadlocked.

B) The system is not deadlocked.

C) Processes P0 and P3 are deadlocked.

D) Processes P0, P1 and P3 are deadlocked.


19. A system can recover from a deadlock by

A) aborting one process at a time until the deadlock cycle is eliminated.

B) aborting all deadlocked processes.

C) preempting some resources from one or more of the deadlocked threads.

D) All of the above.

20. To recover from a deadlock using resource preemption,

A) the order of resources and processes that need to be preempted must be determined to minimize
cost.

B) if a resource is preempted from a process, the process must be rolled back to some safe state and
restarted from that state.

C) ensure that starvation does not occur from always preempting resources from the same process.

D) All of the above.

21. Which of the following statements is true?

A) A safe state is a deadlocked state.

B) A safe state may lead to a deadlocked state.

C) An unsafe state is necessarily, and by definition, always a deadlocked state.

D) An unsafe state may lead to a deadlocked state.

22. To ensure no preemption, if a process is holding some resources and requests another
resource that cannot be immediately allocated to it ____________

a. then the process waits for the resources to be allocated to it

b. the process keeps sending requests until the resource is allocated to it

c. the process resumes execution without the resource being allocated to it

d. then all resources currently being held are preempted

23. For sharable resources, mutual exclusion ____________

a. is required

b. is not required

c. may be or may not be required

d none of the mentioned

24. For non-sharable resources like a printer, mutual exclusion ____________

a. must exist

b. must not exist

c. may exist

d. none of the mentioned


25. The request and release of resources are ___________

a. command line statements

b. interrupts

c. system calls

d. special programs

26. One way to ensure that the circular wait condition never holds is to ____________

a. imposes a total ordering of all resource types and to determine whether one precedes another in the
ordering

b. to never let a process acquire resources that are held by other processes

c. to let a process wait for only one resource at a time

d. all of the mentioned

27. For a Hold and wait condition to prevail ____________

a. A process must not be holding a resource, but waiting for one to be freed, and then request to
acquire it

b. A process must be holding at least one resource and waiting to acquire additional resources that are
being held by other processes

c. A process must hold at least one resource and not be waiting to acquire additional resources

d. None of the mentioned

28. A system is in the safe state if ____________

a. the system can allocate resources to each process in some order and still avoid a deadlock

b. there exist a safe sequence

c. all of the mentioned

d. none of the mentioned

29. Which one of the following is the deadlock avoidance algorithm?

a. banker’s algorithm

b. round-robin algorithm

c. elevator algorithm

d. karn’s algorithm

30. Which of the following condition is required for a deadlock to be possible?

a. mutual exclusion

b. a process may hold allocated resources while awaiting assignment of other resources

c. no resource can be forcibly removed from a process holding it

d. all of the mentioned


31. If the wait for graph contains a cycle ____________

a. then a deadlock does not exist

b. then a deadlock exists

c. then the system is in a safe state

d. either deadlock exists or system is in a safe state

32. The wait-for graph is a deadlock detection algorithm that is applicable when ____________

a. all resources have a single instance

b. all resources have multiple instances

c. all resources have a single & multiple instances

d. all of the mentioned

33. A system has 3 processes sharing 4 resources. If each process needs a maximum of 2 units
then, deadlock ____________

a. can never occur

b. may occur

c. has to occur

d. none of the mentioned

34. Every time a request for allocation cannot be granted immediately, the detection algorithm is
invoked. This will help identify ____________

a. the set of processes that have been deadlocked

b. the set of processes in the deadlock queue

c. the specific process that caused the deadlock

d. all of the mentioned

35. ‘m’ processes share ‘n’ resources of the same type. The maximum need of each process
doesn’t exceed ‘n’ and the sum of all their maximum needs is always less than m+n. In this
setup, deadlock ____________

a. may occur

b. can never occur

c. has to occur

d. none of the mentioned

36. A deadlock eventually cripples system throughput and will cause the CPU utilization to _____

a. increase

b. drop

c. stay still

d. none of the mentioned


37. If no cycle exists in the resource allocation graph

a. then the system will not be in a safe state

b. then the system will be in a safe state

c. all of the mentioned

d. none of the mentioned

38. The resource allocation graph is not applicable to a resource allocation system

a. with multiple instances of each resource type

b. with a single instance of each resource type

c. single & multiple instances of each resource type

d. none of the mentioned

39. The content of the matrix Need is ____________

a. Allocation – Available

b. Max – Available

c. Max – Allocation

d. Allocation – Max

40. The Banker’s algorithm is _____________ than the resource allocation graph algorithm.

a. less efficient

b. more efficient

c. equal

d. none of the mentioned

41. A system is in a safe state only if there exists a ____________

a. safe allocation

b. safe resource

c. safe sequence

d. all of the mentioned

42. Which one of the following is a visual (mathematical) way to determine the deadlock
occurrence?

a. resource allocation graph

b. starvation graph

c. inversion graph

d. none of the mentioned


43. A deadlock state is an ______.

a. safe state

b. correct state

c. effective state

d. unsafe state

44. For a deadlock to arise, which of the following conditions must be held simultaneously?

a. Mutual exclusion

b. No preemption

c. Hold and wait

d. All of the mentioned

45. what conditions must hold for deadlock to exist?

a) mutual exclusion and no preemption

b) hold-and-wait and circular wait

c) both A and B above

d) none of these

Q) Consider the following snapshot of a system.

Process    Allocation    Max        Available
           A B C D       A B C D    A B C D
P0         0 0 1 2       0 0 1 2    1 5 2 0
P1         1 0 0 0       1 7 5 0
P2         1 3 5 4       2 3 5 6
P3         0 6 3 2       0 6 5 2
P4         0 0 1 4       0 6 5 6

A) What is the matrix NEED?


B) Is the system in a safe state?

Solution:

A) Need (Max - Allocation):

P0: (0, 0, 0, 0)
P1: (0, 7, 5, 0)
P2: (1, 0, 0, 2)
P3: (0, 0, 2, 0)
P4: (0, 6, 4, 2)

B) The system is in a safe state.

<P0, P2, P3, P4, P1> (one option of few)


Quiz 6 CH8

1) The prevention of the circular wait condition is done by _________.

a. using thread

b. a linear ordering of resource types is defined

c. all of the mentioned

d. using pipes

2) Among a set of processes, all unsafe states are ____________.

a. none of the mentioned

b. Deadlocks

c. fatal

d. not deadlocks

3) What is the disadvantage of banker’s algorithm?

a. When available, resources will vanish

b. It is unusual for processes to know in advance how many resources they will need

c. As time passes, the number of processes changes

d. All of the mentioned

4) There are 6 tape drives in a computer system, with ‘n’ processes vying for them. Each process
may require up to 3 tape drives. The maximum value of ‘n’ for which the system is guaranteed to be
deadlock-free is ___.

a. 4

b. 2

c. 1

d. 3

5) Which of the following conditions must be maintained simultaneously for a deadlock to


occur?

a. Hold and wait

b. No preemption

c. Mutual exclusion

d. All of the mentioned


6) In a wait for graph, an edge from process Pi to Pj indicates that ____________.

a. Pi is waiting for Pj to leave the system

b. Pj is waiting for Pi to leave the system

c. Pj is waiting for Pi to release a resource needed by Pj

d. Pi is waiting for Pj to release a resource needed by Pi

7) The detection algorithm must be implemented ______, if deadlocks occur frequently.

a. rarely

b. frequently

c. none of the mentioned

d. rarely & frequently

8) The detection algorithm must be invoked _______, if deadlocks occur frequently

a. rarely

b. frequently

c. rarely & frequently

d. none of the mentioned

9) Prevention of Deadlock is a set of techniques ____________.

a. deciding whether or not to provide the requested resources for a process

b. to ensure that all the conditions required do not apply

c. to ensure that it is difficult to keep at least one of the required conditions

d. to recover from a deadlock

10) Implementing the detection algorithm for each request has a drawback that ______.

a. excessive time consumed in the request for memory to be allocated

b. overhead of the algorithm of detection due to memory consumption

c. substantial overhead in computation time

d. all of the mentioned

11) If a process keeps certain resources and requests another resource that cannot be
automatically assigned to it ____________, to ensure no preemption.

a. the process resumes its execution without allocating the resource to it

b. then all currently owned resources are preempted from

c. then the process is waiting for the resources to be allocated

d. the process keeps sending requests until the resource is assigned to it


12) ________ for Mutual exclusion to exist in the system.

a. In a shareable mode, there must be at least one resource

b. No less than one resource must be held in a non-sharable mode

c. All of the mentioned

d. Rather than a multiprocessor, the processor must be a uniprocessor

13) The downside of a process allocating all its resources before starting its execution is ___.

a. low utilization of the CPU

b. low utilization of the resource

c. none of the mentioned

d. high utilization of the resource

14) For each resource type ____________, the resource allocation graph is not applicable.

a. with single and multiple instances

b. with a single instance

c. with multiple instances

d. none of the mentioned

15) For a deadlock to be possible, which of the following conditions is required?

a. All of the mentioned

b. Mutual exclusion

c. No resource can be extracted by force from a process that retains it

d. A process may keep allocated resources while waiting for other resources to be allocated

16) When to check for deadlock, for an efficient operating system?

a. None of the mentioned

b. At fixed periods of time

c. At fixed time intervals, a resource request is made every time

d. Any time a request for resources is made

17) The number of resources needed by a process ____________.

a. must not exceed the total number of resources available in the system

b. must always be less than the total number of resources available in the system

c. must always be equal to the total number of resources available in the system

d. must exceed the total number of resources available in the system


18) What is a resource that is reusable?

a. Is used at a time by more than one process

b. Is used at a time by a single process and is not exhausted by that use

c. None of the mentioned

d. Is shared between different thread

19) ________ for a hold and wait condition to exist.

a. A process must not keep a resource but wait for one to be released and then request it to be
acquired.

b. None of the mentioned

c. A process must retain at least one resource and wait for additional resources held by other processes
to be acquired.

d. A process must have at least one resource and not wait for additional resources to be acquired.

20) Deadlock avoidance is ____________.

a. the allocation of resources must only be done once

b. a fixed number of resources must be allocated to

c. using the inversion technique

d. abort all deadlocked processes

21) Multithreaded programs are_______.

a. none of the mentioned

b. further prone to deadlocks

c. fewer prone to deadlocks

d. no prone to deadlocks

22) Each request allows the system to consider the _______ in order to determine if the current
request can be met or if it must wait to prevent a potential future deadlock.

a. processes that have been in the system previously

b. existing resources allocated to each process

c. future requests and releases for each process

d. currently available resources


True/False
1. The circular-wait condition for a deadlock implies the hold-and-wait condition.

Ans: True

2. If a resource-allocation graph has a cycle, the system must be in a deadlocked state.

Ans: False

3. Protocols to prevent hold-and-wait conditions typically also prevent starvation.

Ans: False

4. The wait-for graph scheme is not applicable to a resource allocation system with multiple
instances of each resource type.

Ans: True

5. Ordering resources and requiring the resources to be acquired in order prevents the circular
wait from occurring and therefore prevents deadlock from occurring.

Ans: False

6. The banker's algorithm is useful in a system with multiple instances of each resource type.

Ans: True

7. A system in an unsafe state will ultimately deadlock.

Ans: False

8. Deadlock prevention and deadlock avoidance are essentially the same approaches for
handling deadlock.

Ans: False
Al-Azhar University-Gaza
Faculty of Engineering and Information Technology
ITCS3313 – Operating Systems
Fall 2016-2017 Midterm Exam 5th November 2017 (60 minutes)
(A)
Student Number: .........................  Student Name: ...............................  Signature: ................................
- Please attempt to solve the following questions. Cell phones must be turned off.
First Question: Choose the correct answer and put it in the table below

1. What is an operating system?


a) collection of programs that manages hardware resources
b) system service provider to the application programs
c) link to interface the hardware and application programs
d) (a) and (b)
e) (a), (b), and (c)
f) None of these

2. Which one of the following errors will be handled by the operating system?
a) power failure b) lack of paper in printer c) connection failure in the network
d) (a) and (b) e) (a), (b), and (c) f) none of these

3. The state of a process is defined by:


a) the final activity of the process
b) the activity just executed by the process
c) the activity to next be executed by the process
d) the current activity of the process
e) none of these

4. The degree of multi-programming is:


a) the number of processes executed per unit time
b) the number of processes in the ready queue
c) the number of processes in the I/O queue
d) the number of processes in memory
e) none of these

5. Process is
a) a program in execution
b) a job in secondary memory
c) a program in a high-level language kept on disk
d) contents of main memory
e) none of these

6. Which of the following are TRUE for direct communication:


a) A communication link can be associated with N number of process(N = max. number of processes supported
by system)
b) A communication link can be associated with exactly two processes
c) Exactly N/2 links exist between each pair of processes(N = max. number of processes supported by system)
d) Exactly two link exists between each pair of processes
e) None of these

7. In indirect communication between processes P and Q :


a) there is another process R to handle and pass on the messages between P and Q
b) there is another machine between the two processes to help communication
c) there is a mailbox to help communication between P and Q
d) none of the mentioned

8. Which of the following operation is provided by IPC facility


a) write message b) delete message c) send message
d) receive message e) (c) and (d) f) None of these
9. In the non blocking send:

a) the sending process keeps sending until the message is received


b) the sending process sends the message and resumes operation
c) the sending process keeps sending until it receives a message
d) none of these

10. While running DOS on a PC, which command would be used to display the entire diskette?

a) TYPE b) CHKDSK c) DISKCOPY


d) DIR e) None of the these

11. In Bounded Buffer we use in the producer process to check if the buffer is Full
a) while (((in + 1) % BUFFER_SIZE) == out) b) while (in == out) c) while (in != out)
d) while (((in + 1) % BUFFER_SIZE) != out) e) None of these

12. Fork is
a) the dispatching of a task
b) the creation of a new job
c) the creation of a new process
d) increasing the priority of a task
e) none of these

13. What is a long-term scheduler ?


a) It selects which process has to be brought into the ready queue
b) It selects which process has to be executed next and allocates the CPU
c) It selects which process to remove from memory by swapping
d) None of these

14. The systems which allows only one process execution at a time, are called
a) uni-programming systems b) uni-processing systems c) uni-tasking systems
d) none of the mentioned

15. The Zero Capacity queue:


a) is referred to as a message system with buffering b) is referred to as a message system with no buffering
c) is referred to as a link d) none of these

16. A Process Control Block(PCB) does not contain which of the following :

a) Program Counter b) I/O status information c) Heap d) Data


e) Process State f) Stack g) bootstrap program h) None of these

17. The link between two processes P and Q to send and receive messages is called :
a) communication link b) message-passing link c) synchronization link
d) All of these e) None of these

18. Message passing system allows processes to :

a) communicate with one another without resorting to shared data
b) share data
c) communicate with one another by resorting to shared data
d) name the recipient or sender of the message
e) None of these

Answers: 1-E, 2-E, 3-D, 4-D, 5-A, 6-B, 7-C, 8-E, 9-B, 10-D, 11-A, 12-C, 13-A, 14-A, 15-B, 16-G, 17-A, 18-A
Second Question:
a) Put the labels on the process state diagram.

b) In the bounded buffer, you are given the consumer process pseudo-code; write the producer process pseudo-code.
(Consumer pseudo-code figure not reproduced.)
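A hedged sketch of the expected producer pseudo-code, mirroring the usual circular-buffer consumer (in, out, BUFFER_SIZE and buffer[] are the shared variables assumed by question 11 of the first part):

/* shared data */
item buffer[BUFFER_SIZE];
int in = 0;     /* next free slot     */
int out = 0;    /* next occupied slot */

/* producer */
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;       /* buffer is full: wait */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}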

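The producer code requested in part (b) is not reproduced in the source. The following is a minimal C sketch, assuming the usual shared buffer, in, and out variables from the bounded-buffer questions above; the full-buffer test is the condition given in the multiple-choice answer, and the empty-buffer test is its counterpart in the consumer. The main() here only exercises the two routines sequentially; in the exam setting the two loops would run in separate producer and consumer processes (or threads) sharing the buffer.

/* Minimal sketch of the shared-memory bounded buffer; buffer of ints assumed. */
#include <stdio.h>

#define BUFFER_SIZE 5

int buffer[BUFFER_SIZE];
int in = 0;   /* next free slot (written by the producer) */
int out = 0;  /* next full slot (read by the consumer)    */

/* Producer: waits while the buffer is full, then inserts an item. */
void produce(int item)
{
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* buffer is full -- busy-wait (do nothing) */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer: waits while the buffer is empty, then removes an item. */
int consume(void)
{
    int item;
    while (in == out)
        ;   /* buffer is empty -- busy-wait (do nothing) */
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}

int main(void)
{
    /* Sequential demonstration of the logic only. */
    for (int i = 1; i <= 3; i++)
        produce(i * 10);
    for (int i = 0; i < 3; i++)
        printf("consumed %d\n", consume());
    return 0;
}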
c- What is a race condition? Illustrate your answer by example.

A race condition is a situation in which more than one process accesses and manipulates a shared resource concurrently, and the result depends on the particular order of execution.

Example: shared data int count = 5. Process P1 executes count++ while process P2 executes count--.

T1: P1 reads count = 5          P2 reads count = 5
T2: P1 stores count = 6         P2 stores count = 4

The correct final value of count is 5, but depending on which store happens last the result may be 4 or 6 (a race condition).
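A minimal sketch of the count++/count-- race above, written with POSIX threads rather than two separate processes (an assumption made only to keep the example self-contained). Compile with -pthread and without optimization so the individual updates are not collapsed; the printed value usually differs from the expected 5 and varies from run to run.

/* Two threads race on a shared counter; neither update is atomic. */
#include <stdio.h>
#include <pthread.h>

#define ITERATIONS 1000000

int count = 5;                      /* shared data, initially 5 */

void *increment(void *arg)          /* plays the role of process P1: count++ */
{
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++)
        count++;                    /* load, add 1, store -- not atomic */
    return NULL;
}

void *decrement(void *arg)          /* plays the role of process P2: count-- */
{
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++)
        count--;                    /* load, subtract 1, store -- not atomic */
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;

    pthread_create(&p1, NULL, increment, NULL);
    pthread_create(&p2, NULL, decrement, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);

    /* The correct value is 5, but the interleaved updates lose increments
     * and decrements, so the result is usually something else. */
    printf("count = %d (expected 5)\n", count);
    return 0;
}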


Good Luck

Dr. Ahmed Y. Mahmoud


Al-Azhar University-Gaza
Faculty of Engineering and Information Technology
ITCS3313 – Operating Systems
Fall 2016-2017 Midterm Exam 5th November 2017 (60 minutes)
(B)
Student ID: .........................   Student Name: ...............................   Signature: ................................
- Please attempt to solve the following questions. Cell phones must be turned off.
First Question: Choose the correct answer and put it in the table below

1. The Zero Capacity queue:


a) is referred to as a message system with no buffering   b) is referred to as a message system with buffering
c) is referred to as a link   d) none of these

2. A Process Control Block(PCB) does not contain which of the following :

a) Program Counter b) I/O status information c) Heap d) Data


e) Process State f) Stack g) bootstrap program h) None of these

3. The link between two processes P and Q to send and receive messages is called :
a) communication link b) message-passing link c) synchronization link
d) All of these   e) None of these

4. Message passing system allows processes to :


a) communicate with one another without resorting to shared data
b) share data
c) communicate with one another by resorting to shared data
d) name the recipient or sender of the message
e) None of these

5. In a non-blocking send:


a) the sending process sends the message and resumes operation
b) the sending process keeps sending until it receives a message
c) the sending process keeps sending until the message is received
d) none of these

6. In the bounded buffer, we use ____ in the producer process to check whether the buffer is full
a) while (((in + 1) % BUFFER_SIZE) == out) b) while (in == out) c) while (in != out)
d) while (((in + 1) % BUFFER_SIZE) != out) e) None of these

7. Fork is
a) the dispatching of a task   b) the creation of a new process   c) the creation of a new job
d) increasing the priority of a task   e) None of these

8. What is a long-term scheduler ?


a) It selects which process has to be executed next and allocates the CPU
b) It selects which process has to be brought into the ready queue
c) It selects which process to remove from memory by swapping
d) None of these

9. Which of the following are TRUE for direct communication:


a) A communication link can be associated with N number of processes (N = max. number of processes supported by the system)
b) Exactly N/2 links exist between each pair of processes (N = max. number of processes supported by the system)
c) A communication link can be associated with exactly two processes
d) Exactly two links exist between each pair of processes
e) None of these
10. In indirect communication between processes P and Q :
a) there is a mailbox to help communication between P and Q
b) there is another machine between the two processes to help communication
c) there is another process R to handle and pass on the messages between P and Q
d) none of the mentioned

11. Which of the following operations is provided by the IPC facility?


a) write message b) delete message c) send message
d) receive message e) (c) and (d) f) None of these

12. While running DOS on a PC, which command would be used to display the contents of the entire diskette?
a) TYPE b) DIR c) DISKCOPY
d) CHKDSK e) None of the these
13. Which one of the following errors will be handled by the operating system?
a) power failure b) lack of paper in printer c) connection failure in the network
d) (a) and (b) e) (a), (b), and (c) f) none of these

14. The state of a process is defined by:


a) the final activity of the process   b) the activity just executed by the process
c) the activity to next be executed by the process   d) the current activity of the process   e) none of these

15. What is an operating system?


a) collection of programs that manages hardware resources   b) system service provider to the application programs
c) link to interface the hardware and application programs   d) (a) and (b)   e) (a), (b), and (c)   f) None of these

16. The degree of multi-programming is:


a) the number of processes executed per unit time   b) the number of processes in memory
c) the number of processes in the I/O queue   d) the number of processes in the ready queue   e) none of these

17. Process is
a) a program in a high-level language kept on disk   b) a job in secondary memory   c) a program in execution
d) contents of main memory   e) None of these
18. The systems which allow only one process execution at a time are called
a) uni-processing systems   b) uni-programming systems   c) uni-tasking systems
d) none of the mentioned

Answers: 1-A, 2-G, 3-A, 4-A, 5-A, 6-A, 7-B, 8-B, 9-C, 10-A, 11-E, 12-B, 13-E, 14-D, 15-E, 16-B, 17-C, 18-B
Second Question:
a) Put the labels on the process state diagram

b- In the bounded buffer, you are given the consumer process pseudo-code; write the producer process pseudo-code.
Consumer                                Producer

c- What is a race condition? Illustrate your answer by example.

Good Luck

Dr. Ahmed Y. Mahmoud


Chapter: Chapter 1

Multiple Choice

1. In what way is an operating system like a government?


A) It seldom functions correctly.
B) It creates an environment within which other programs can do useful work.
C) It performs most useful functions by itself.
D) It is always concerned primarily with the individual's needs.

2. ____ operating systems are designed primarily to maximize resource utilization.


A) PC
B) Handheld computer
C) Mainframe
D) Network

3. The most common secondary storage device is ____.


A) random access memory
B) solid state disks
C) tape drives
D) magnetic disk

4. Which of the following would lead you to believe that a given system is an SMP-type
system?
A) Each processor is assigned a specific task.
B) There is a boss–worker relationship between the processors.
C) Each processor performs all tasks within the operating system.
D) None of the above

5. A ____ can be used to prevent a user program from never returning control to the
operating system.
A) portal
B) program counter
C) firewall
D) timer

6. Embedded computers typically run on a ____ operating system.


A) real-time
B) Windows XP
C) network
D) clustered
7. Bluetooth and 802.11 devices use wireless technology to communicate over several feet,
in essence creating a ____.
A) local-area network
B) wide-area network
C) small-area network
D) metropolitan-area network

8. A clustered system ____.


A) gathers together multiple CPUs to accomplish computational work
B) is an operating system that provides file sharing across a network
C) is used when rigid time requirements are present
D) can only operate one application at a time

9. Which of the following is a property of peer-to-peer systems?


A) Clients and servers are not distinguished from one another.
B) Separate machines act as either the client or the server but not both.
C) They do not offer any advantages over traditional client-server systems.
D) They suffer from the server acting as the bottleneck in performance.

10. Two important design issues for cache memory are ____.
A) speed and volatility
B) size and replacement policy
C) power consumption and reusability
D) size and access privileges

11. What are some other terms for kernel mode?


A) supervisor mode
B) system mode
C) privileged mode
D) All of the above

12. Which of the following statements concerning open source operating systems is true?
A) Solaris is open source.
B) Source code is freely available.
C) They are always more secure than commercial, closed systems.
D) All open source operating systems share the same set of goals.
13. Which of the following operating systems is not open source?
A) Windows
B) BSD UNIX
C) Linux
D) PCLinuxOS

14. A _____ provides a file-system interface which allows clients to create and modify files.
A) compute-server system
B) file-server system
C) wireless network
D) network computer

15. A ____ is a custom build of the Linux operating system


A) LiveCD
B) installation
C) distribution
D) VMWare Player

16. __________ is a set of software frameworks that provide additional services to


application developers.
A) System programs
B) Virtualization
C) Cloud computing
D) Middleware

17. What statement concerning privileged instructions is considered false?


A) They may cause harm to the system.
B) They can only be executed in kernel mode.
C) They cannot be attempted from user mode.
D) They are used to manage interrupts.

18. Which of the following statements is false?


A) Mobile devices must be concerned with power consumption.
B) Mobile devices can provide features that are unavailable on desktop or laptop computers.
C) The difference in storage capacity between a mobile device and laptop is shrinking.
D) Mobile devices usually have fewer processing cores than a standard desktop computer.
19. A(n) ________ is the unit of work in a system.
A) process
B) operating system
C) timer
D) mode bit

20. The two separate modes of operating in a system are


A) supervisor mode and system mode
B) kernel mode and privileged mode
C) physical mode and logical mode
D) user mode and kernel mode

21. Explain why an operating system can be viewed as a resource allocator.

Ans: A computer system has many resources that may be required to solve a problem: CPU
time, memory space, file-storage space, I/O devices, and so on. The operating system acts as the
manager of these resources. Facing numerous and possibly conflicting requests for resources,
the operating system must decide how to allocate them to specific programs and users so that it
can operate the computer system efficiently and fairly.

22. Explain the purpose of an interrupt vector.

Ans: The interrupt vector is merely a table of pointers to specific interrupt-handling routines.
Because there are a fixed number of interrupts, this table allows for more efficient handling of
the interrupts than with a general-purpose, interrupt-processing routine.

23. What is a bootstrap program, and where is it stored?

Ans: A bootstrap program is the initial program that the computer runs when it is powered up or
rebooted. It initializes all aspects of the system, from CPU registers to device controllers to
memory contents. Typically, it is stored in read-only memory (ROM) or electrically erasable
programmable read-only memory (EEPROM), known by the general term firmware, within the
computer hardware.
24. What role do device controllers and device drivers play in a computer system?

Ans: A general-purpose computer system consists of CPUs and multiple device controllers that
are connected through a common bus. Each device controller is in charge of a specific type of
device. The device controller is responsible for moving the data between the peripheral devices
that it controls and its local buffer storage. Typically, operating systems have a device driver for
each device controller. This device driver understands the device controller and presents a
uniform interface for the device to the rest of the operating system.

25. Why are clustered systems considered to provide high-availability service?

Ans: Clustered systems are considered high-availability in that these types of systems have
redundancies capable of taking over a specific process or task in the case of a failure. The
redundancies are inherent due to the fact that clustered systems are composed of two or more
individual systems coupled together.

26. Describe the differences between physical, virtual, and logical memory.

Ans: Physical memory is the memory available for machines to execute operations (i.e., cache,
random access memory, etc.). Virtual memory is a method through which programs can be
executed that requires space larger than that available in physical memory by using disk memory
as a backing store for main memory. Logical memory is an abstraction of the computer’s
different types of memory that allows programmers and applications a simplified view of
memory and frees them from concern over memory-storage limitations.

27. Describe the operating system's two modes of operation.

Ans: In order to ensure the proper execution of the operating system, most computer systems
provide hardware support to distinguish between user mode and kernel mode. A mode bit is
added to the hardware of the computer to indicate the current mode: kernel (0) or user (1).
When the computer system is executing on behalf of a user application, the system is in user
mode. However, when a user application requests a service from the operating system (via a
system call), it must transition from user to kernel mode to fulfill the request.

28. Explain cache coherency.

Ans: In multiprocessor environments, two copies of the same data may reside in the local cache
of each CPU. Whenever one CPU alters the data, the cache of the other CPU must receive an
updated version of this data. Cache coherency involves ensuring that multiple caches store the
most updated version of the stored data.
29. Why is main memory not suitable for permanent program storage or backup purposes?
Furthermore, what is the main disadvantage to storing information on a magnetic disk
drive as opposed to main memory?

Ans: Main memory is a volatile memory in that any power loss to the system will result in
erasure of the data stored within that memory. While disk drives can store more information
permanently than main memory, disk drives are significantly slower.

30. Describe the compute-server and file-server types of server systems.

Ans: The compute-server system provides an interface to which a client can send a request to
perform an action (for example, read data from a database); in response, the server executes the
action and sends back results to the client. The file-server system provides a file-system interface
where clients can create, update, read, and delete files. An example of such a system is a Web
server that delivers files to clients running Web browsers.

31. Computer systems can be divided into four approximate components. What are they?

Ans: Hardware, operating system, application programs, and users.

32. Distinguish between system and application programs.

Ans: System programs are not part of the kernel, but still are associated with the operating
system. Application programs are not associated with the operation of the system.

33. Describe why direct memory access (DMA) is considered an efficient mechanism for
performing I/O.

Ans: DMA is efficient for moving large amounts of data between I/O devices and main memory.
It is considered efficient because it removes the CPU from being responsible for transferring data.
DMA instructs the device controller to move data between the devices and main memory.

34. Describe why multi-core processing is more efficient than placing each processor on its
own chip.

Ans: A large reason why it is more efficient is that communication between processors on the
same chip is faster than processors on separate chips.
35. Distinguish between uniform memory access (UMA) and non-uniform memory access
(NUMA) systems.

Ans: On UMA systems, accessing RAM takes the same amount of time from any CPU. On
NUMA systems, accessing some parts of memory may take longer than accessing other parts of
memory, thus creating a performance penalty for certain memory accesses.

36. Explain the difference between singly, doubly, and circularly linked lists.

Ans: A singly linked list is one in which each item points to its successor. A doubly linked
list allows an item to point to either its predecessor or its successor. A circularly linked
list is one in which the last element points back to the first.
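A minimal C sketch of the three list shapes described in the answer; the struct and field names are illustrative only.

/* Singly linked: each node points only to its successor. */
struct snode {
    int data;
    struct snode *next;
};

/* Doubly linked: each node points to both its predecessor and its successor. */
struct dnode {
    int data;
    struct dnode *prev;
    struct dnode *next;
};

/* Circularly linked: same layout as a singly linked node, but the last
 * node's next pointer refers back to the first node instead of being NULL. */
struct cnode {
    int data;
    struct cnode *next;   /* for the tail node, next == head */
};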

37. What two operating systems currently dominate mobile computing?

Ans: Apple's iOS and Google's Android

38. Explain the difference between protection and security.

Ans: Protection is concerned with controlling the access of processes or users to the resources of
the computer system. The role of security is to defend the system from internal or external
attacks.

39. Distinguish mobile computing from traditional desktop computing.

Ans: Mobile computing takes place on handheld devices and tablets. Because these devices are
portable and lightweight, they typically do not have the processing power and storage capacity of
desktop systems. However, features such as GPS and accelerometers have allowed mobile
devices to provide functionality that is unavailable to desktop systems.

40. Describe cloud computing.

Ans: Cloud computing is a type of computing that delivers computing, storage, and application
services across a network. Cloud computing often uses virtualization to provide its functionality.
There are many different types of cloud environments, as well as services offered. Cloud
computing may be either public, private, or a hybrid of the two. Additionally, cloud computing
may offer applications, platforms, or system infrastructures.
True/False

41. The operating system kernel consists of all system and application programs in a
computer.

Ans: False

42. Flash memory is slower than DRAM but needs no power to retain its contents.

Ans: True

43. A system call is triggered by hardware.

Ans: False

44. UNIX does not allow users to escalate privileges to gain extra permissions for a
restricted activity.

Ans: False

45. Processors for most mobile devices run at a slower speed than a processor in a desktop
PC.

Ans: True

46. Interrupts may be triggered by either hardware or software

Ans: True

47. A dual-core system requires that each core have its own cache memory.

Ans: False

48. Virtually all modern operating systems provide support for SMP

Ans: True
49. All computer systems have some sort of user interaction.

Ans: False

50. Solid state disks are generally faster than magnetic disks.

Ans: True

51. Solid state disks are considered volatile storage.

Ans: False

52. There is no universally accepted definition of an operating system.

Ans: True
Chapter: Chapter 2

Multiple Choice

1. A _____ is an example of a systems program.


A) command interpreter
B) Web browser
C) text formatter
D) database system

2. If a program terminates abnormally, a dump of memory may be examined by a ____ to


determine the cause of the problem.
A) module
B) debugger
C) shell
D) control card

3. A message-passing model is ____.


A) easier to implement than a shared memory model for intercomputer communication
B) faster than the shared memory model
C) a network protocol, and does not apply to operating systems
D) only useful for small simple operating systems

4. Policy ____.
A) determines how to do something
B) determines what will be done
C) is not likely to change across places
D) is not likely to change over time

5. The major difficulty in designing a layered operating system approach is ____.


A) appropriately defining the various layers
B) making sure that each layer hides certain data structures, hardware, and operations from
higher-level layers
C) debugging a particular layer
D) making sure each layer is easily converted to modules
6. A microkernel is a kernel ____.
A) containing many components that are optimized to reduce resident memory size
B) that is compressed before loading in order to reduce its resident memory size
C) that is compiled to produce the smallest size possible when stored to disk
D) that is stripped of all nonessential components

7. To the SYSGEN program of an operating system, the least useful piece of information
is _____.
A) the CPU being used
B) amount of memory available
C) what applications to install
D) operating-system options such as buffer sizes or CPU scheduling algorithms

8. A boot block ____.


A) typically only knows the location and length of the rest of the bootstrap program
B) typically is sophisticated enough to load the operating system and begin its execution
C) is composed of multiple disk blocks
D) is composed of multiple disk cylinders

9. _____ provide(s) an interface to the services provided by an operating system.


A) Shared memory
B) System calls
C) Simulators
D) Communication

10. _____ is not one of the major categories of system calls.


A) Process control
B) Communications
C) Protection
D) Security

11. _____ allow operating system services to be loaded dynamically.


A) Virtual machines
B) Modules
C) File systems
D) Graphical user interfaces
12. Microkernels use _____ for communication.
A) message passing
B) shared memory
C) system calls
D) virtualization

13. The Windows CreateProcess() system call creates a new process. What is the
equivalent system call in UNIX:
A) NTCreateProcess()
B) process()
C) fork()
D) getpid()

14. The close() system call in UNIX is used to close a file. What is the equivalent system
call in Windows:
A) CloseHandle()
B) close()
C) CloseFile()
D) Exit()

15. The Windows CreateFile() system call is used to create a file. What is the equivalent
system call in UNIX:
A) ioctl()
B) open()
C) fork()
D) createfile()

16. Android runs Java programs _____________


A) in the Dalvik virtual machine.
B) natively.
C) in the Java virtual machine.
D) Android does not run Java programs.

17. ______ is a mobile operating system designed for the iPhone and iPad.
A) Mac OS X
B) Android
C) UNIX
D) iOS
18. The ________ provides a portion of the system call interface for UNIX and Linux.
A) POSIX
B) Java
C) Standard C library
D) Standard API

19. Which of the following statements is incorrect?


A) An operating system provides an environment for the execution of programs.
B) An operating system manages system resources.
C) Operating systems provide both command line as well as graphical user interfaces.
D) Operating systems must provide both protection and security.

20. _____ is/are not a technique for passing parameters from an application to a system
call.
A) Cache memory
B) Registers
C) Stack
D) Special block in memory

21. There are two different ways that commands can be processed by a command
interpreter. One way is to allow the command interpreter to contain the code needed to
execute the command. The other way is to implement the commands through system
programs. Compare and contrast the two approaches.

Ans: In the first approach, upon the user issuing a command, the interpreter jumps to the
appropriate section of code, executes the command, and returns control back to the user. In the
second approach, the interpreter loads the appropriate program into memory along with the
appropriate arguments. The advantage of the first method is speed and overall simplicity. The
disadvantage to this technique is that new commands require rewriting the interpreter program
which, after a number of modifications, may get complicated, messy, or too large. The advantage
to the second method is that new commands can be added without altering the command
interpreter. The disadvantage is reduced speed and the clumsiness of passing parameters from the
interpreter to the system program.
22. Describe the relationship between an API, the system-call interface, and the operating
system.

Ans: The system-call interface of a programming language serves as a link to system calls
made available by the operating system. This interface intercepts function calls in the API and
invokes the necessary system call within the operating system. Thus, most of the details of the
operating-system interface are hidden from the programmer by the API and are managed by the
run-time support library.

23. Describe three general methods used to pass parameters to the operating system
during system calls.

Ans: The simplest approach is to pass the parameters in registers. In some cases, there may be
more parameters than registers. In these cases, the parameters are generally stored in a block, or
table, of memory, and the address of the block is passed as a parameter in a register. Parameters
can also be placed, or pushed, onto the stack by the program and popped off the stack by the
operating system.

24. What are the advantages of using a higher-level language to implement an operating
system?

Ans: The code can be written faster, is more compact, and is easier to understand and debug. In
addition, improvements in compiler technology will improve the generated code for the entire
operating system by simple recompilation. Finally, an operating system is far easier to port — to
move to some other hardware — if it is written in a higher-level language.

25. Describe some requirements, or goals, when designing an operating system.

Ans: Requirements can be divided into user and system goals. Users desire a system that is
convenient to use, easy to learn and to use, reliable, safe, and fast. System goals are defined by
those people who must design, create, maintain, and operate the system: The system should be
easy to design, implement, and maintain; it should be flexible, reliable, error-free, and efficient.

26. What are the advantages and disadvantages of using a microkernel approach?

Ans: One benefit of the microkernel approach is ease of extending the operating system. All
new services are added to user space and consequently do not require modification of the kernel.
The microkernel also provides more security and reliability, since most services are running as
user — rather than kernel — processes. Unfortunately, microkernels can suffer from
performance decreases due to increased system function overhead.
27. Explain why a modular kernel may be the best of the current operating system design
techniques.

Ans: The modular approach combines the benefits of both the layered and microkernel design
techniques. In a modular design, the kernel needs only to have the capability to perform the
required functions and know how to communicate between modules. However, if more
functionality is required in the kernel, then the user can dynamically load modules into the kernel.
The kernel can have sections with well-defined, protected interfaces, a desirable property found
in layered systems. More flexibility can be achieved by allowing the modules to communicate
with one another.

28. Describe how Mac OS X is considered a hybrid system.

Ans: Primarily because the kernel environment is a blend of the Mach microkernel and BSD
UNIX (which is closer to a monolithic kernel.)

29. Describe how Android uses a unique virtual machine for running Java programs.

Ans: The Dalvik virtual machine is designed specifically for Android and has been optimized for
mobile devices with limited memory and CPU processing capabilities.

True/False

30. KDE and GNOME desktops are available under open-source licenses.

Ans: True

31. Many operating systems merge I/O devices and files into a combined file because of the
similarity of system calls for each.

Ans: True

32. An initial bootstrap program is in the form of random-access memory (RAM).

Ans: False

33. System calls can be run in either user mode or kernel mode.

Ans: False
34. Application programmers typically use an API rather than directly invoking system
calls.

Ans: True

35. In general, Windows system calls have longer, more descriptive names and UNIX
system calls use shorter, less descriptive names.

Ans: True

36. Mac OS X is a hybrid system consisting of both the Mach microkernel and BSD UNIX.

Ans: True

37. iOS is open source, Android is closed source.

Ans: False
Chapter: Chapter 3
Multiple Choice

1. The ____ of a process contains temporary data such as function parameters, return
addresses, and local variables.
A) text section
B) data section
C) program counter
D) stack

2. A process control block ____.


A) includes information on the process's state
B) stores the address of the next instruction to be processed by a different process
C) determines which process is to be executed next
D) is an example of a process queue

3. The list of processes waiting for a particular I/O device is called a(n) ____.
A) standby queue
B) device queue
C) ready queue
D) interrupt queue

4. The _____________ refers to the number of processes in memory.


A) process count
B) long-term scheduler
C) degree of multiprogramming
D) CPU scheduler

5. When a child process is created, which of the following is a possibility in terms of the
execution or address space of the child process?
A) The child process runs concurrently with the parent.
B) The child process has a new program loaded into it.
C) The child is a duplicate of the parent.
D) All of the above

6. A _________________ saves the state of the currently running process and restores the
state of the next process to run.
A) save-and-restore
B) state switch
C) context switch
D) none of the above
7. A process may transition to the Ready state by which of the following actions?
A) Completion of an I/O event
B) Awaiting its turn on the CPU
C) Newly-admitted process
D) All of the above

8. In a(n) ____ temporary queue, the sender must always block until the recipient
receives the message.
A) zero capacity
B) variable capacity
C) bounded capacity
D) unbounded capacity

9. A blocking send() and blocking receive() is known as a(n) _________________


A) synchronized message
B) rendezvous
C) blocked message
D) asynchronous message

10 . Which of the following is true in a Mach operating system?


A) All messages have the same priority.
B) Multiple messages from the same sender are guaranteed an absolute ordering.
C) The sending thread must return immediately if a mailbox is full.
D) It is not designed for distributed systems.

11. When communicating with sockets, a client process initiates a request for a connection
and is assigned a port by the host computer. Which of the following would be a valid port
assignment for the host computer?
A) 21
B) 23
C) 80
D) 1625

12. A(n) ______________ allows several unrelated processes to use the pipe for
communication.
A) named pipe
B) anonymous pipe
C) LIFO
D) ordinary pipe
13. Which of the following statements is true?
A) Shared memory is typically faster than message passing.
B) Message passing is typically faster than shared memory.
C) Message passing is most useful for exchanging large amounts of data.
D) Shared memory is far more common in operating systems than message passing.

14. Imagine that a host with IP address 150.55.66.77 wishes to download a file from the web
server at IP address 202.28.15.123. Select a valid socket pair for a connection between this
pair of hosts.
A) 150.55.66.77:80 and 202.28.15.123:80
B) 150.55.66.77:150 and 202.28.15.123:80
C) 150.55.66.77:2000 and 202.28.15.123:80
D) 150.55.66.77:80 and 202.28.15.123:3500

15. Child processes inherit UNIX ordinary pipes from their parent process because:
A) The pipe is part of the code and children inherit code from their parents.
B) A pipe is treated as a file descriptor and child processes inherit open file descriptors from
their parents.
C) The STARTUPINFO structure establishes this sharing.
D) All IPC facilities are shared between the parent and child processes.

16. Which of the following statements is true?


A) Named pipes do not allow bi-directional communication.
B) Only the parent and child processes can use named pipes for communication.
C) Reading and writing to ordinary pipes on both UNIX and Windows systems can be performed
like ordinary file I/O.
D) Named pipes can only be used by communicating processes on the same machine.

17. Which of the following is not a process type in the Chrome browser?
A) Plug-in
B) Renderer
C) Sandbox
D) Browser

18. The ________ application is the application appearing on the display screen of a mobile
device.
A) main
B) background
C) display
D) foreground
19. A process that has terminated, but whose parent has not yet called wait(), is known as a
________ process.
A) zombie
B) orphan
C) terminated
D) init

20. The _______ process is assigned as the parent to orphan processes.


A) zombie
B) init
C) main
D) renderer

Short Answer

21. Name and describe the different states that a process can exist in at any given time.

Ans: The possible states of a process are: new, running, waiting, ready, and terminated. The
process is created while in the new state. In the running or waiting state, the process is executing
or waiting for an event to occur, respectively. The ready state occurs when the process is ready
and waiting to be assigned to a processor and should not be confused with the waiting state
mentioned earlier. After the process is finished executing its code, it enters the termination state.

22. Explain the main differences between a short-term and long-term scheduler.

Ans: The primary distinction between the two schedulers lies in the frequency of execution.
The short-term scheduler is designed to frequently select a new process for the CPU, at least
once every 100 milliseconds. Because of the short time between executions, the short-term
scheduler must be fast. The long-term scheduler executes much less frequently; minutes may
separate the creation of one new process and the next. The long-term scheduler controls the
degree of multiprogramming. Because of the longer interval between executions, the long-term
scheduler can afford to take more time to decide which process should be selected for execution.

23. Explain the difference between an I/O-bound process and a CPU-bound process.

Ans: The differences between the two types of processes stem from the number of I/O requests
that the process generates. An I/O-bound process spends more of its time seeking I/O operations
than doing computational work. The CPU-bound process infrequently requests I/O operations
and spends more of its time performing computational work.
24. Explain the concept of a context switch.

Ans: Whenever the CPU starts executing a new process, the old process's state must be
preserved. The context of a process is represented by its process control block. Switching the
CPU to another process requires performing a state save of the current process and a state restore
of a different process. This task is known as a context switch. When a context switch occurs,
the kernel saves the context of the old process in its PCB and loads the saved context of the new
process scheduled to run.

25. Explain the fundamental differences between the UNIX fork() and Windows
CreateProcess() functions.

Ans: Each function is used to create a child process. However, fork() has no parameters;
CreateProcess() has ten. Furthermore, whereas the child process created with fork() inherits a copy
of the address space of its parent, the CreateProcess() function requires specifying the address
space of the child process.
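A minimal sketch of the fork() half of this comparison, assuming a UNIX/POSIX system: the call takes no parameters, and the child starts with a copy of the parent's address space, so a change made in the child is not visible in the parent.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int value = 5;
    pid_t pid = fork();             /* no parameters; duplicates this process */

    if (pid == 0) {                 /* child: works on its own copy of 'value' */
        value += 10;
        printf("child:  value = %d\n", value);   /* prints 15 */
    } else if (pid > 0) {           /* parent: its copy is unchanged */
        wait(NULL);
        printf("parent: value = %d\n", value);   /* still prints 5 */
    }
    return 0;
}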

26. Name the three types of sockets used in Java and the classes that implement them.

Ans: Connection-oriented (TCP) sockets are implemented with the Socket class.
Connectionless (UDP) sockets use the DatagramSocket class. Finally, the MulticastSocket class is
a subclass of the DatagramSocket class. A multicast socket allows data to be sent to multiple
recipients.

27. What is a loopback and when is it used?

Ans: A loopback is a special IP address: 127.0.0.1. When a computer refers to IP address


127.0.0.1, it is referring to itself. When using sockets for client/server communication, this
mechanism allows a client and server on the same host to communicate using the TCP/IP
protocol.

28. Explain the purpose of external data representation (XDR).

Ans: Data can be represented differently on different machine architectures (e.g., little-endian vs.
big-endian). XDR represents data independently of machine architecture. XDR is used when
transmitting data between different machines using an RPC.
29. Explain the term marshalling.

Ans: Marshalling involves the packaging of parameters into a form that can be transmitted over
the network. When the client invokes a remote procedure, the RPC system calls the appropriate
stub, passing it the parameters provided to the remote procedure. This stub locates the port on
the server and marshals the parameters. If necessary, return values are passed back to the client
using the same technique.

30. Explain the terms "at most once" and "exactly once" and indicate how they relate to
remote procedure calls.

Ans: Because a remote procedure call can fail in any number of ways, it is important to be able
to handle such errors in the messaging system. The term "at most once" refers to ensuring that
the server processes a particular message sent by the client only once and not multiple times.
This is implemented by merely checking the timestamp of the message. The term "exactly once"
refers to making sure that the message is executed on the server once and only once so that there
is a guarantee that the server received and processed the message.

31. Describe two approaches to the binding of client and server ports during RPC calls.

Ans: First, the binding information may be predetermined, in the form of fixed port addresses.
At compile time, an RPC call has a fixed port number associated with it. Second, binding can
be done dynamically by a rendezvous mechanism. Typically, an operating system provides a
rendezvous daemon on a fixed RPC port. A client then sends a message containing the name of
the RPC to the rendezvous daemon requesting the port address of the RPC it needs to execute.
The port number is returned, and the RPC calls can be sent to that port until the process
terminates (or the server crashes).

32. Ordinarily the exec() system call follows the fork(). Explain what would happen if a
programmer were to inadvertently place the call to exec() before the call to fork().

Ans: Because exec() overwrites the process, we would never reach the call to fork() and hence, no
new processes would be created. Rather, the program specified in the parameter to exec() would
be run instead.
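A minimal sketch of the usual fork()-then-exec() ordering discussed above, assuming a UNIX/POSIX system; /bin/ls is just an illustrative program to load into the child. If exec() were called before fork(), this program itself would be replaced and the fork() would never run.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* Child: replace its memory image with a new program. */
        execlp("ls", "ls", (char *)NULL);
        perror("execlp failed");          /* reached only if exec() fails */
        return 1;
    } else if (pid > 0) {
        wait(NULL);                       /* parent waits for the child */
        printf("child finished\n");
    }
    return 0;
}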

33. Explain why Google Chrome uses multiple processes.

Ans: Each website opens up in a separate tab and is represented with a separate renderer process.
If that webpage were to crash, only the process representing that tab would be affected; all
other sites (represented as separate tabs/processes) would be unaffected.
34. Describe how UNIX and Linux manage orphan processes.

Ans: If a parent terminates without first calling wait(), its children are considered orphan
processes. Linux and UNIX assign the init process as the new parent of orphan processes and init
periodically calls wait() which allows any resources allocated to terminated processes to be
reclaimed by the operating system.

True/False

35. All processes in UNIX first transition to a zombie process upon termination.

Ans: True

36. The difference between a program and a process is that a program is an active entity
while a process is a passive entity.

Ans: False

37. The exec() system call creates a new process.

Ans: False

38. All access to POSIX shared memory requires a system call.

Ans: False

39. Local Procedure Calls in Windows XP are similar to Remote Procedure Calls.

Ans: True

40. For a single-processor system, there will never be more than one process in the
Running state.

Ans: True

41. Shared memory is a more appropriate IPC mechanism than message passing for
distributed systems.

Ans: False
42. Ordinary pipes in UNIX require a parent-child relationship between the
communicating processes.

Ans: True
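A minimal sketch of why this is true, assuming a UNIX/POSIX system: the ordinary pipe's two file descriptors exist only inside the creating process, so they can be shared only with a child created by fork(), which inherits them.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    char buf[32];

    if (pipe(fd) == -1)                 /* create the ordinary pipe before fork() */
        return 1;

    if (fork() == 0) {                  /* child: inherits both descriptors, writes */
        close(fd[0]);
        write(fd[1], "hello", 6);
        close(fd[1]);
    } else {                            /* parent: reads what the child wrote */
        close(fd[1]);
        read(fd[0], buf, sizeof(buf));
        printf("parent read: %s\n", buf);
        close(fd[0]);
        wait(NULL);
    }
    return 0;
}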

43. Ordinary pipes in Windows require a parent-child relationship between the


communicating processes.

Ans: True

44. Using a section object to pass messages over a connection port avoids data copying.

Ans: True

45. A socket is identified by an IP address concatenated with a port number.

Ans: True

46. Sockets are considered a high-level communications scheme.

Ans: False

47. The Mach operating system treats system calls with message passing.

Ans: True

48. Named pipes continue to exist in the system after the creating process has terminated.

Ans:True

49. A new browser process is created by the Chrome browser for every new website that is
visited.

Ans: False

50. The iOS mobile operating system only supports a limited form of multitasking.

Ans: True
Chapter: Chapter 4

Multiple Choice

1. ____ is a thread library for Solaris that maps many user-level threads to one kernel
thread.
A) Pthreads
B) Green threads
C) Sthreads
D) Java threads

2. Pthreads refers to ____.


A) the POSIX standard.
B) an implementation for thread behavior.
C) a specification for thread behavior.
D) an API for process creation and synchronization.

3. The ____ multithreading model multiplexes many user-level threads to a smaller or


equal number of kernel threads.
A) many-to-one model
B) one-to-one model
C) many-to-many model
D) many-to-some model

4. Cancellation points are associated with ____ cancellation.


A) asynchronous
B) deferred
C) synchronous
D) non-deferred

5. Which of the following would be an acceptable signal handling scheme for a


multithreaded program?
A) Deliver the signal to the thread to which the signal applies.
B) Deliver the signal to every thread in the process.
C) Deliver the signal to only certain threads in the process.
D) All of the above
6. Signals can be emulated in windows through ____.
A) asynchronous procedure calls
B) local procedure calls
C) remote procedure calls
D) none of the above

7. Thread-local storage is data that ____.


A) is not associated with any process
B) has been modified by the thread, but not yet updated to the parent process
C) is generated by the thread independent of the thread's process
D) is unique to each thread

8. LWP is ____.
A) short for lightweight processor
B) placed between system and kernel threads
C) placed between user and kernel threads
D) common in systems implementing one-to-one multithreading models

9. Windows uses the ____.


A) one-to-one model
B) many-to-one model
C) one-to many-model
D) many-to-many model

10. In multithreaded programs, the kernel informs an application about certain events
using a procedure known as a(n) ____.
A) signal
B) upcall
C) event handler
D) pool

11. _____ is not considered a challenge when designing applications for multicore systems.
A) Deciding which activities can be run in parallel
B) Ensuring there is a sufficient number of cores
C) Determining if data can be separated so that it is accessed on separate cores
D) Identifying data dependencies between tasks.
12. A ____ provides an API for creating and managing threads.
A) set of system calls
B) multicore system
C) thread library
D) multithreading model

13. The _____ model multiplexes many user-level threads to a smaller or equal number of
kernel threads.
A) many-to-many
B) two-level
C) one-to-one
D) many-to-one

14. The _____ model maps many user-level threads to one kernel thread.
A) many-to-many
B) two-level
C) one-to-one
D) many-to-one

15. The _____ model maps each user-level thread to one kernel thread.
A) many-to-many
B) two-level
C) one-to-one
D) many-to-one

16. The _____ model allows a user-level thread to be bound to one kernel thread.
A) many-to-many
B) two-level
C) one-to-one
D) many-to-one

17. The most common technique for writing multithreaded Java programs is _____.
A) extending the Thread class and overriding the run() method
B) implementing the Runnable interface and defining its run() method
C) designing your own Thread class
D) using the CreateThread() function
18. In Pthreads, a parent uses the pthread_join() function to wait for its child thread to
complete. What is the equivalent function in Win32?
A) win32_join()
B) wait()
C) WaitForSingleObject()
D) join()

19. Which of the following statements regarding threads is false?


A) Sharing is automatically provided in Java threads.
B) Both Pthreads and Win32 threads share global data.
C) The start() method actually creates a thread in the Java virtual machine.
D) The Java method join() provides similar functionality as the WaitForSingleObject in Win32.

20. A _____ uses an existing thread — rather than creating a new one — to complete a task.
A) lightweight process
B) thread pool
C) scheduler activation
D) asynchronous procedure call

21. According to Amdahl's Law, what is the speedup gain for an application that is 60%
parallel and we run it on a machine with 4 processing cores?
A) 1.82
B) .7
C) .55
D) 1.43
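A worked form of this calculation, applying Amdahl's Law with a serial fraction S = 0.4 (the application is 60% parallel) and N = 4 cores, written here in LaTeX notation; the result matches choice A.

\text{speedup} \le \frac{1}{S + \frac{1 - S}{N}} = \frac{1}{0.4 + \frac{0.6}{4}} = \frac{1}{0.55} \approx 1.82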

22. _________ involves distributing tasks across multiple computing cores.


A) Concurrency
B) Task parallelism
C) Data parallelism
D) Parallelism

23. ___________ is a formula that identifies potential performance gains from adding
additional computing cores to an application that has a parallel and serial component.
A) Task parallelism
B) Data parallelism
C) Data splitting
D) Amdahl's Law
24. When OpenMP encounters the #pragma omp parallel directive, it
A) constructs a parallel region
B) creates a new thread
C) creates as many threads as there are processing cores
D) parallelizes for loops

25. Grand Central Dispatch handles blocks by


A) placing them on a dispatch queue
B) creating a new thread
C) placing them on a dispatch stack
D) constructing a parallel region

26. Why should a web server not run as a single-threaded process?

Ans: For a web server that runs as a single-threaded process, only one client can be serviced at
a time. This could result in potentially enormous wait times for a busy server.

27. List the four major categories of the benefits of multithreaded programming. Briefly
explain each.

Ans: The benefits of multithreaded programming fall into the categories: responsiveness,
resource sharing, economy, and utilization of multiprocessor architectures. Responsiveness
means that a multithreaded program can allow a program to run even if part of it is blocked.
Resource sharing occurs when an application has several different threads of activity within the
same address space. Threads share the resources of the process to which they belong. As a result,
it is more economical to create new threads than new processes. Finally, a single-threaded
process can only execute on one processor regardless of the number of processors actually
present. Multiple threads can run on multiple processors, thereby increasing efficiency.

28. What are the two different ways in which a thread library could be implemented?

Ans: The first technique of implementing the library involves ensuring that all code and data
structures for the library reside in user space with no kernel support. The other approach is to
implement a kernel-level library supported directly by the operating system so that the code and
data structures exist in kernel space.
29. Describe two techniques for creating Thread objects in Java.

Ans: One approach is to create a new class that is derived from the Thread class and to override
its run() method. An alternative — and more commonly used — technique is to define a class
that implements the Runnable interface. When a class implements Runnable, it must define a run()
method. The code implementing the run() method is what runs as a separate thread.

30. In Java, what two things does calling the start() method for a new Thread object
accomplish?

Ans: Calling the start() method for a new Thread object first allocates memory and initializes a
new thread in the JVM. Next, it calls the run() method, making the thread eligible to be run by the
JVM. Note that the run() method is never called directly. Rather, the start() method is called,
which then calls the run() method.

31. Some UNIX systems have two versions of fork(). Describe the function of each version,
as well as how to decide which version to use.

Ans: One version of fork() duplicates all threads and the other duplicates only the thread that
invoked the fork() system call. Which of the two versions of fork() to use depends on the
application. If exec() is called immediately after forking, then duplicating all threads is
unnecessary, as the program specified in the parameters to exec() will replace the process. If,
however, the separate process does not call exec() after forking, the separate process should
duplicate all threads.

32. How can deferred cancellation ensure that thread termination occurs in an orderly
manner as compared to asynchronous cancellation?

Ans: In asynchronous cancellation, the thread is immediately cancelled in response to a


cancellation request. There is no assurance that it did not quit in the middle of a data update or
other potentially dangerous situation. In deferred cancellation, the thread polls whether or not it
should terminate. This way, the thread can be made to cancel at a convenient time.
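A minimal Pthreads sketch of deferred cancellation, assuming POSIX threads (compile with -pthread): the worker only honours a pending cancellation request when it reaches an explicit cancellation point, so it is never cancelled in the middle of a unit of work.

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

void *worker(void *arg)
{
    int old;
    (void)arg;

    /* Deferred mode is the Pthreads default; set it explicitly for clarity. */
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &old);

    for (;;) {
        /* ... perform one unit of work (safe to interrupt between units) ... */
        pthread_testcancel();   /* cancellation point: exit here if requested */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(tid);        /* request cancellation ...                    */
    pthread_join(tid, NULL);    /* ... and wait until the worker honours it    */
    printf("worker cancelled\n");
    return 0;
}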

33. What is a thread pool and why is it used?

Ans: A thread pool is a collection of threads, created at process startup, that sit and wait for
work to be allocated to them. This allows one to place a bound on the number of concurrent
threads associated with a process and reduce the overhead of creating new threads and destroying
them at termination.
34. What are the general components of a thread in Windows?

Ans: The thread consists of a unique ID, a register set that represents the status of the processor,
a user stack for user mode, a kernel stack for kernel mode, and a private storage area used by
run-time libraries and dynamic link libraries.

35. Describe the difference between the fork() and clone() Linux system calls.

Ans: The fork() system call is used to duplicate a process. The clone() system call behaves
similarly except that, instead of creating a copy of the process, it creates a separate process that
shares the address space of the calling process.

36. Multicore systems present certain challenges for multithreaded programming. Briefly
describe these challenges.

Ans: Multicore systems have placed more pressure on system programmers as well as
application developers to make efficient use of the multiple computing cores. These challenges
include determining how to divide applications into separate tasks that can run in parallel on the
different cores. These tasks must be balanced such that each task is doing an equal amount of
work. Just as tasks must be separated, data must also be divided so that it can be accessed by the
tasks running on separate cores. So that data can safely be accessed, data dependencies must be
identified and where such dependencies exist, data accesses must be synchronized to ensure the
safety of the data. Once all such challenges have been met, there remain considerable challenges
in testing and debugging such applications.

37. Distinguish between parallelism and concurrency.

Ans: A parallel system can perform more than one task simultaneously. A concurrent system
supports more than one task by allowing multiple tasks to make progress.

38. Distinguish between data and task parallelism.

Ans: Data parallelism involves distributing subsets of the same data across multiple computing
cores and performing the same operation on each core. Task parallelism involves distributing
tasks across the different computing cores where each task is performing a unique operation.
39. Describe how OpenMP is a form of implicit threading.

Ans: OpenMP provides a set of compiler directives that allows parallel programming on systems
that support shared memory. Programmers identify regions of code that can run in parallel by
placing them in a block of code that begins with the directive #pragma omp parallel. When the
compiler encounters this parallel directive, it creates as many threads as there are processing
cores in the system.
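A minimal C sketch of the directive described in the answer, assuming a compiler with OpenMP support (e.g. gcc -fopenmp): each thread created for the parallel region executes the block once.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* One thread per processing core is created for this parallel region. */
    #pragma omp parallel
    {
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}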

40. Describe how Grand Central Dispatch is a form of implicit threading.

Ans: Grand Central Dispatch (GCD) is a technology for Mac OS X and iOS systems that is a
combination of extensions to the C language, an API, and a runtime library that allows
developers to construct "blocks" - regions of code that can run in parallel. GCD then manages
the parallel execution of blocks in several dispatch queues.

True/False

41. A traditional (or heavyweight) process has a single thread of control.

Ans: True

42. A thread is composed of a thread ID, program counter, register set, and heap.

Ans: False

43. Virtually all contemporary operating systems support kernel threads.

Ans: True

44. Linux distinguishes between processes and threads.

Ans: False

45. In Java, data shared between threads is simply declared globally.

Ans: False

46. Each thread has its own register set and stack.

Ans: True
47. Deferred cancellation is preferred over asynchronous cancellation.

Ans: True

48. The single benefit of a thread pool is to control the number of threads.

Ans: False

49. It is possible to create a thread library without any kernel-level support.

Ans: True

50. It is possible to have concurrency without parallelism.

Ans: True

51. Amdahl's Law describes performance gains for applications with both a serial and parallel
component.

Ans: True

52. OpenMP only works for C, C++, and Fortran programs.

Ans: True

53. Grand Central Dispatch requires multiple threads.

Ans: False

54. The trend in developing parallel applications is to use implicit threading.

Ans: True

55. Task parallelism distributes threads and data across multiple computing cores.

Ans: False
Chapter: Chapter 5
Multiple Choice

1. Which of the following is true of cooperative scheduling?


A) It requires a timer.
B) A process keeps the CPU until it releases the CPU either by terminating or by switching to
the waiting state.
C) It incurs a cost associated with access to shared data.
D) A process switches from the running state to the ready state when an interrupt occurs.

2. ____ is the number of processes that are completed per time unit.
A) CPU utilization
B) Response time
C) Turnaround time
D) Throughput

3. ____ scheduling is approximated by predicting the next CPU burst with an exponential
average of the measured lengths of previous CPU bursts.
A) Multilevel queue
B) RR
C) FCFS
D) SJF
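The exponential average referred to in this question is commonly written as follows (LaTeX notation), where t_n is the length of the most recent CPU burst, tau_n is the past prediction, tau_{n+1} is the predicted next burst, and 0 <= alpha <= 1 weights the recent measurement against history:

\tau_{n+1} = \alpha\, t_n + (1 - \alpha)\, \tau_n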

4. The ____ scheduling algorithm is designed especially for time-sharing systems.


A) SJF
B) FCFS
C) RR
D) Multilevel queue

5. Which of the following scheduling algorithms must be nonpreemptive?


A) SJF
B) RR
C) FCFS
D) priority algorithms
6. Which of the following is true of multilevel queue scheduling?
A) Processes can move between queues.
B) Each queue has its own scheduling algorithm.
C) A queue cannot have absolute priority over lower-priority queues.
D) It is the most general CPU-scheduling algorithm.

7. The default scheduling class for a process in Solaris is ____.


A) time sharing
B) system
C) interactive
D) real-time

8. Which of the following statements are false with regards to the Linux CFS scheduler?
A) Each task is assigned a proportion of CPU processing time.
B) Lower numeric values indicate higher relative priorities.
C) There is a single, system-wide value of vruntime.
D) The scheduler doesn't directly assign priorities.

9. The Linux CFS scheduler identifies _____________ as the interval of time during which
every runnable task should run at least once.
A) virtual run time
B) targeted latency
C) nice value
D) load balancing

10. In Little's formula, λ, represents the ____.


A) average waiting time in the queue
B) average arrival rate for new processes in the queue
C) average queue length
D) average CPU utilization
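For reference, Little's formula relates the quantities in this question; with n the average queue length, W the average waiting time in the queue, and lambda the average arrival rate of new processes (LaTeX notation):

n = \lambda \times W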

11. In Solaris, what is the time quantum (in milliseconds) of an interactive thread with
priority 35?
A) 25
B) 54
C) 80
D) 35
12. In Solaris, if an interactive thread with priority 15 uses its entire time quantum, what is
its priority recalculated to?
A) 51
B) 5
C) 160
D) It remains at 15

13. In Solaris, if an interactive thread with priority 25 is waiting for I/O, what is its priority
recalculated to when it is eligible to run again?
A) 15
B) 120
C) 52
D) It remains at 25

14. ______ allows a thread to run on only one processor.


A) Processor affinity
B) Processor set
C) NUMA
D) Load balancing

15. What is the numeric priority of a Windows thread in the


NORMAL_PRIORITY_CLASS with HIGHEST relative priority?
A) 24
B) 10
C) 8
D) 13

16. What is the numeric priority of a Windows thread in the HIGH_PRIORITY_CLASS


with ABOVE_NORMAL relative priority?
A) 24
B) 10
C) 8
D) 14
17. What is the numeric priority of a Windows thread in the
BELOW_NORMAL_PRIORITY_CLASS with NORMAL relative priority?
A) 6
B) 7
C) 5
D) 8

18. __________ involves the decision of which kernel thread to schedule onto which CPU.
A) Process-contention scope
B) System-contention scope
C) Dispatcher
D) Round-robin scheduling

19. With _______ a thread executes on a processor until a long-latency event (i.e. a
memory stall) occurs.
A) coarse-grained multithreading
B) fine-grained multithreading
C) virtualization
D) multicore processors

20. A significant problem with priority scheduling algorithms is _____.


A) complexity
B) starvation
C) determining the length of the next CPU burst
D) determining the length of the time quantum

21. The ______ occurs in first-come-first-served scheduling when a process with a long
CPU burst occupies the CPU.
A) dispatch latency
B) waiting time
C) convoy effect
D) system-contention scope

22. The rate of a periodic task in a hard real-time system is ____, where p is a period and t
is the processing time.
A) 1/p
B) p/t
C) 1/t
D) pt
23. Which of the following is true of the rate-monotonic scheduling algorithm?
A) The task with the shortest period will have the lowest priority.
B) It uses a dynamic priority policy.
C) CPU utilization is bounded when using this algorithm.
D) It is non-preemptive.

24. Which of the following is true of earliest-deadline-first (EDF) scheduling algorithm?


A) When a process becomes runnable, it must announce its deadline requirements to the
system.
B) Deadlines are assigned as following: the earlier the deadline, the lower the priority; the later
the deadline, the higher the priority.
C) Priorities are fixed; that is, they cannot be adjusted when a new process starts running.
D) It assigns priorities statically according to deadline.

25. The two general approaches to load balancing are __________ and ____________.
A) soft affinity, hard affinity
B) coarse grained, fine grained
C) soft real-time, hard real-time
D) push migration, pull migration

Essay

26. Distinguish between coarse-grained and fine-grained multithreading.

Ans: There are two approaches to multithread a processor. (1) Coarse-grained multithreading
allows a thread to run on a processor until a long-latency event, such as a memory stall, occurs.
When a long-latency event does occur, the processor switches to another thread. (2)
Fine-grained multithreading switches between threads at a much finer level of granularity, such as
between instructions.

27. Explain the concept of a CPU–I/O burst cycle.

Ans: The lifecycle of a process can be considered to consist of a number of bursts belonging to
two different states. All processes consist of CPU cycles and I/O operations. Therefore, a process
can be modeled as switching between bursts of CPU execution and I/O wait.
28. What role does the dispatcher play in CPU scheduling?

Ans: The dispatcher gives control of the CPU to the process selected by the short-term
scheduler. To perform this task, a context switch, a switch to user mode, and a jump to the proper
location in the user program are all required. The dispatcher should be as fast as possible.
The time lost to the dispatcher is termed dispatch latency.

29. Explain the difference between response time and turnaround time. These times are
both used to measure the effectiveness of scheduling schemes.

Ans: Turnaround time is the sum of the periods that a process spends waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O. Turnaround time
essentially measures the amount of time it takes to execute a process. Response time, on the
other hand, is a measure of the time that elapses between a request and the first response
produced.

30. What effect does the size of the time quantum have on the performance of an RR
algorithm?

Ans: At one extreme, if the time quantum is extremely large, the RR policy is the same as the
FCFS policy. If the time quantum is extremely small, the RR approach is called processor
sharing and creates the appearance that each of n processes has its own processor running at 1/n
the speed of the real processor.

31. Explain the process of starvation and how aging can be used to prevent it.

Ans: Starvation occurs when a process is ready to run but is stuck waiting indefinitely for the
CPU. This can be caused, for example, when higher-priority processes prevent low-priority
processes from ever getting the CPU. Aging involves gradually increasing the priority of a
process so that a process will eventually achieve a high enough priority to execute if it waited for
a long enough period of time.

32. Explain the fundamental difference between asymmetric and symmetric multiprocessing.

Ans: In asymmetric multiprocessing, all scheduling decisions, I/O, and other system activities
are handled by a single processor, whereas in SMP, each processor is self-scheduling.
33. Describe two general approaches to load balancing.

Ans: With push migration, a specific task periodically checks the load on each processor and, if
it finds an imbalance, evenly distributes the load by moving processes from overloaded to
idle or less-busy processors. Pull migration occurs when an idle processor pulls a waiting task
from a busy processor. Push and pull migration are often implemented in parallel on
load-balancing systems.

34. In Windows, how does the dispatcher determine the order of thread execution?

Ans: The dispatcher uses a 32-level priority scheme to determine the execution order. Priorities
are divided into two classes. The variable class contains threads having priorities from 1 to 15,
and the real-time class contains threads having priorities from 16 to 31. The dispatcher uses a
queue for each scheduling priority, and traverses the set of queues from highest to lowest until it
finds a thread that is ready to run. The dispatcher executes an idle thread if no ready thread is
found.

35. What is deterministic modeling and when is it useful in evaluating an algorithm?

Ans: Deterministic modeling takes a particular predetermined workload and defines the
performance of each algorithm for that workload. Deterministic modeling is simple, fast, and
gives exact numbers for comparison of algorithms. However, it requires exact numbers for input,
and its answers apply only in those cases. The main uses of deterministic modeling are
describing scheduling algorithms and providing examples to indicate trends.

36. What are the two types of latency that affect the performance of real-time systems?
Ans: Interrupt latency refers to the period of time from the arrival of an interrupt at the CPU to
the start of the routine that services the interrupt. Dispatch latency refers to the amount of time
required for the scheduling dispatcher to stop one process and start another.

37. What are the advantages of the EDF scheduling algorithm over the rate-monotonic
scheduling algorithm?
Ans: Unlike the rate-monotonic algorithm, EDF scheduling does not require that processes be
periodic, nor must a process require a constant amount of CPU time per burst. The appeal of
EDF scheduling is that it is theoretically optimal: it can schedule processes so that each process
meets its deadline requirements and CPU utilization will be 100 percent.
True/False

38. In preemptive scheduling, the sections of code affected by interrupts must be guarded
from simultaneous use.

Ans: True

39. In RR scheduling, the time quantum should be small with respect to the
context-switch time.

Ans: False

40. The most complex scheduling algorithm is the multilevel feedback-queue algorithm.

Ans: True

41. Load balancing is typically only necessary on systems with a common run queue.

Ans: False

42. Systems using a one-to-one model (such as Windows, Solaris, and Linux) schedule
threads using process-contention scope (PCS).

Ans: False

43. Solaris and Windows assign higher-priority threads/tasks longer time quantums and
lower-priority tasks shorter time quantums.

Ans: False

44. A Solaris interactive thread with priority 15 has a higher relative priority than an
interactive thread with priority 20.

Ans: False

45. A Solaris interactive thread with a time quantum of 80 has a higher priority than an
interactive thread with a time quantum of 120.

Ans: True
46. SMP systems that use multicore processors typically run faster than SMP systems that
place each processor on a separate physical chip.

Ans: True

47. Windows 7 User-mode scheduling (UMS) allows applications to create and manage
threads independently of the kernel.

Ans: True

48. Round-robin (RR) scheduling degenerates to first-come-first-served (FCFS) scheduling
if the time quantum is too long.

Ans: True

49. Load balancing algorithms have no impact on the benefits of processor affinity.

Ans: False

50. A multicore system allows two (or more) threads that are in compute cycles to execute
at the same time.

Ans: True

51. Providing a preemptive, priority-based scheduler guarantees hard real-time functionality.

Ans: False

52. In hard real-time systems, interrupt latency must be bounded.

Ans: True

53. In Pthread real-time scheduling, the SCHED_FIFO class provides time slicing among
threads of equal priority.

Ans: False
54. In the Linux CFS scheduler, the task with smallest value of vruntime is considered to
have the highest priority.

Ans: True

55. The length of a time quantum assigned by the Linux CFS scheduler is dependent upon
the relative priority of a task.

Ans: False

56. The Completely Fair Scheduler (CFS) is the default scheduler for Linux systems.

Ans: True
Chapter: Chapter 6
Multiple Choice

1. A race condition ____.


A) results when several threads try to access the same data concurrently
B) results when several threads try to access and modify the same data concurrently
C) will result only if the outcome of execution does not depend on the order in which
instructions are executed
D) None of the above

2. An instruction that executes atomically ____.


A) must consist of only one machine instruction
B) executes as a single, uninterruptible unit
C) cannot be used to solve the critical section problem
D) All of the above

3. A counting semaphore ____.


A) is essentially an integer variable
B) is accessed through only one standard operation
C) can be modified simultaneously by multiple threads
D) cannot be used to control access to a thread's critical sections

4. A mutex lock ____.


A) is exactly like a counting semaphore
B) is essentially a boolean variable
C) is not guaranteed to be atomic
D) can be used to eliminate busy waiting

5. In Peterson's solution, the ____ variable indicates if a process is ready to enter its
critical section.
A) turn
B) lock
C) flag[i]
D) turn[i]
6. The first readers-writers problem ____.
A) requires that, once a writer is ready, that writer performs its write as soon as possible.
B) is not used to test synchronization primitives.
C) requires that no reader will be kept waiting unless a writer has already obtained permission
to use the shared database.
D) requires that no reader will be kept waiting unless a reader has already obtained permission
to use the shared database.

7. A ___ type presents a set of programmer-defined operations that are provided mutual
exclusion within it.
A) transaction
B) signal
C) binary
D) monitor

8. ____________ occurs when a higher-priority process needs to access a data structure
that is currently being accessed by a lower-priority process.
A) Priority inversion
B) Deadlock
C) A race condition
D) A critical section

9. What is the correct order of operations for protecting a critical section using mutex
locks?
A) release() followed by acquire()
B) acquire() followed by release()
C) wait() followed by signal()
D) signal() followed by wait()

10. What is the correct order of operations for protecting a critical section using a binary
semaphore?
A) release() followed by acquire()
B) acquire() followed by release()
C) wait() followed by signal()
D) signal() followed by wait()
11. _____ is not a technique for handling critical sections in operating systems.
A) Nonpreemptive kernels
B) Preemptive kernels
C) Spinlocks
D) Peterson's solution

12. A solution to the critical section problem does not have to satisfy which of the following
requirements?
A) mutual exclusion
B) progress
C) atomicity
D) bounded waiting

13. A(n) _______ refers to where a process is accessing/updating shared data.


A) critical section
B) entry section
C) mutex
D) test-and-set

14. _____ can be used to prevent busy waiting when implementing a semaphore.
A) Spinlocks
B) Waiting queues
C) Mutex lock
D) Allowing the wait() operation to succeed

15. Assume an adaptive mutex is used for accessing shared data on a Solaris system with
multiprocessing capabilities. Which of the following statements is not true?
A) A waiting thread may spin while waiting for the lock to become available.
B) A waiting thread may sleep while waiting for the lock to become available.
C) The adaptive mutex is only used to protect short segments of code.
D) Condition variables and semaphores are never used in place of an adaptive mutex.

16. What is the purpose of the mutex semaphore in the implementation of the
bounded-buffer problem using semaphores?
A) It indicates the number of empty slots in the buffer.
B) It indicates the number of occupied slots in the buffer.
C) It controls access to the shared buffer.
D) It ensures mutual exclusion.
17. How many philosophers may eat simultaneously in the Dining Philosophers problem
with 5 philosophers?
A) 1
B) 2
C) 3
D) 5

18. Which of the following statements is true?


A) A counting semaphore can never be used as a binary semaphore.
B) A binary semaphore can never be used as a counting semaphore.
C) Spinlocks can be used to prevent busy waiting in the implementation of semaphore.
D) Counting semaphores can be used to control access to a resource with a finite number of
instances.

19. _____ is/are not a technique for managing critical sections in operating systems.
A) Peterson's solution
B) Preemptive kernel
C) Nonpreemptive kernel
D) Semaphores

20. When using semaphores, a process invokes the wait() operation before accessing its
critical section, followed by the signal() operation upon completion of its critical section.
Consider reversing the order of these two operations—first calling signal(), then calling
wait(). What would be a possible outcome of this?
A) Starvation is possible.
B) Several processes could be active in their critical sections at the same time.
C) Mutual exclusion is still assured.
D) Deadlock is possible.

21. Which of the following statements is true?


A) Operations on atomic integers do not require locking.
B) Operations on atomic integers do require additional locking.
C) Linux only provides the atomic_inc() and atomic_sub() operations.
D) Operations on atomic integers can be interrupted.
22. A(n) ___________ is a sequence of read-write operations that are atomic.
A) atomic integer
B) semaphore
C) memory transaction
D) mutex lock

23. The OpenMP #pragma omp critical directive ___________.


A) behaves much like a mutex lock
B) does not require programmers to identify critical sections
C) does not guarantee prevention of race conditions
D) is similar to functional languages

24. Another problem related to deadlocks is ____________.


A) race conditions
B) critical sections
C) spinlocks
D) indefinite blocking

Essay

25. What three conditions must be satisfied in order to solve the critical section problem?

Ans: In a solution to the critical section problem, no other thread may be executing in its critical
section while one thread is executing in its critical section (mutual exclusion). Furthermore, only
those threads that are not executing in their critical sections can participate in the decision on
which process will enter its critical section next (progress). Finally, a bound must exist on the
number of times that other threads are allowed to enter their critical sections after a thread has
made a request to enter its critical section (bounded waiting).

26. Explain two general approaches to handle critical sections in operating systems.

Ans: Critical sections may use preemptive or nonpreemptive kernels. A preemptive kernel
allows a process to be preempted while it is running in kernel mode. A nonpreemptive kernel
does not allow a process running in kernel mode to be preempted; a kernel-mode process will run
until it exits kernel mode, blocks, or voluntarily yields control of the CPU. A nonpreemptive
kernel is essentially free from race conditions on kernel data structures, as only one process is
active in the kernel at a time.
27. Write two short methods that implement the simple semaphore wait() and signal()
operations on global variable S.

Ans:
wait(S) {
    while (S <= 0)
        ;   /* busy wait */
    S--;
}

signal(S) {
    S++;
}

28. Explain the difference between the first readers–writers problem and the second
readers–-writers problem.

Ans: The first readers–writers problem requires that no reader will be kept waiting unless a
writer has already obtained permission to use the shared database; whereas the second
readers–writers problem requires that once a writer is ready, that writer performs its write as
soon as possible.

29. Describe the dining-philosophers problem and how it relates to operating systems.

Ans: The scenario involves five philosophers sitting at a round table with a bowl of food and
five chopsticks. Each chopstick sits between two adjacent philosophers. The philosophers are
allowed to think and eat. Since two chopsticks are required for each philosopher to eat, and only
five chopsticks exist at the table, no two adjacent philosophers may be eating at the same time. A
scheduling problem arises as to who gets to eat at what time. This problem is similar to the
problem of scheduling processes that require a limited number of resources.

30. What is the difference between software transactional memory and hardware
transactional memory?

Ans: Software transactional memory (STM) implements transactional memory entirely in
software; no special hardware is required. STM works by inserting instrumentation code inside
of transaction blocks and typically requires the support of a compiler. Hardware transactional
memory (HTM) implements transactional memory entirely in hardware using cache hierarchies
and cache coherency protocols to resolve conflicts when shared data resides in separate caches.
31. Assume you had a function named update() that updates shared data. Illustrate how a
mutex lock named mutex might be used to prevent a race condition in update().

Ans:
void update()
{
mutex.acquire();

// update shared data

mutex.release();
}

32. Describe the turnstile structure used by Solaris for synchronization.

Ans: Solaris uses turnstiles to order the list of threads waiting to acquire either an adaptive
mutex or a reader–writer lock. The turnstile is a queue structure containing threads blocked on a
lock. Each synchronized object with at least one thread blocked on the object's lock requires a
separate turnstile. However, rather than associating a turnstile with each synchronized object,
Solaris gives each kernel thread its own turnstile.

33. Explain the role of the variable preempt_count in the Linux kernel.

Ans: Each task maintains a value preempt_count which is the number of locks held by each
task. When a lock is acquired, preempt_count is incremented. When a lock is released,
preempt_count is decremented. If the task currently running in the kernel has a value of
preempt_count > 0, the kernel cannot be preempted as the task currently holds a lock.
If the count is zero, the kernel can be preempted.

34. Describe how an adaptive mutex functions.

Ans: An adaptive mutex is used in the Solaris operating system to protect access to shared data.
On a multiprocessor system, an adaptive mutex acts as a standard semaphore implemented as a
spinlock. If the shared data being accessed is already locked and the thread holding that lock is
running on another CPU, the thread spins while waiting for the lock to be released, and the data
to become available. If the thread holding the lock is not in the run state, the waiting thread
sleeps until the lock becomes available. On a single processor system, spinlocks are not used and
the waiting thread always sleeps until the lock becomes available.
35. Describe a scenario when using a reader–writer lock is more appropriate than another
synchronization tool such as a semaphore.

Ans: A tool such as a semaphore only allows one process to access shared data at a time.
Reader–writer locks are useful when it is easy to distinguish if a process is only reading or
reading/writing shared data. If a process is only reading shared data, it can access the shared data
concurrently with other readers. In the case when there are several readers, a reader–writer lock
may be much more efficient.

36. Explain how Linux manages race conditions on single-processor systems such as
embedded devices.

Ans: On multiprocessor machines, Linux uses spin locks to manage race conditions. However, as
spin locks are not appropriate on single processor machines, Linux instead disables kernel
preemption while acquiring a spin lock, and enables it after releasing the spin lock.

True/False

37. Race conditions are prevented by requiring that critical regions be protected by locks.

Ans: True

38. The value of a counting semaphore can range only between 0 and 1.

Ans: False

39. A deadlock-free solution eliminates the possibility of starvation.

Ans: False

40. The local variables of a monitor can be accessed by only the local procedures.

Ans: True

41. Every object in Java has associated with it a single lock.

Ans: True

42. Monitors are a theoretical concept and are not practiced in modern programming
languages

Ans: False
43. A thread will immediately acquire a dispatcher lock that is in the signaled state.

Ans: True

44. Mutex locks and counting semaphores are essentially the same thing.

Ans: False

45. Mutex locks and binary semaphores are essentially the same thing.

Ans: True

46. A nonpreemptive kernel is safe from race conditions on kernel data structures.

Ans: True

47. Linux mostly uses atomic integers to manage race conditions within the kernel.

Ans: False
Chapter: Chapter 7
Multiple Choice

1. A deadlocked state occurs whenever ____.


A) a process is waiting for I/O to a device that does not exist
B) the system has no available free resources
C) every process in a set is waiting for an event that can only be caused by another process in
the set
D) a process is unable to release its request for a resource after use

2. One necessary condition for deadlock is ____, which states that at least one resource
must be held in a nonsharable mode.
A) hold and wait
B) mutual exclusion
C) circular wait
D) no preemption

3. One necessary condition for deadlock is ______, which states that a process must be
holding one resource and waiting to acquire additional resources.
A) hold and wait
B) mutual exclusion
C) circular wait
D) no preemption

4. One necessary condition for deadlock is ______, which states that a resource can be
released only voluntarily by the process holding the resource.
A) hold and wait
B) mutual exclusion
C) circular wait
D) no preemption

5. One necessary condition for deadlock is ______, which states that there is a chain of
waiting processes whereby P0 is waiting for a resource held by P1, P1 is waiting for a
resource held by P2, and Pn is waiting for a resource held by P0.
A) hold and wait
B) mutual exclusion
C) circular wait
D) no preemption
6. The witness software product is a ____.
A) lock-order verifier that uses mutual-exclusion locks to protect critical sections
B) modeler to develop resource allocation graphs
C) driver that can be used to prevent mutual exclusion for nonsharable resources
D) implementation of the banker's algorithm available for most operating systems

7. In a system resource-allocation graph, ____.


A) a directed edge from a process to a resource is called an assignment edge
B) a directed edge from a resource to a process is called a request edge
C) a directed edge from a process to a resource is called a request edge
D) None of the above

8. A cycle in a resource-allocation graph is ____.


A) a necessary and sufficient condition for deadlock in the case that each resource has more
than one instance
B) a necessary and sufficient condition for a deadlock in the case that each resource has exactly
one instance
C) a sufficient condition for a deadlock in the case that each resource has more than once
instance
D) is neither necessary nor sufficient for indicating deadlock in the case that each resource has
exactly one instance

9. To handle deadlocks, operating systems most often _____.


A) pretend that deadlocks never occur
B) use protocols to prevent or avoid deadlocks
C) detect and recover from deadlocks
D) None of the above

10. Which of the following statements is true?


A) A safe state is a deadlocked state.
B) A safe state may lead to a deadlocked state.
C) An unsafe state is necessarily, and by definition, always a deadlocked state.
D) An unsafe state may lead to a deadlocked state.
11. Suppose that there are ten resources available to three processes. At time 0, the
following data is collected. The table indicates the process, the maximum number of
resources needed by the process, and the number of resources currently owned by each
process. Which of the following correctly characterizes this state?

Process    Maximum Needs    Currently Owned
P0         10               4
P1         3                1
P2         6                4
A) It is safe.
B) It is not safe.
C) The state cannot be determined.
D) It is an impossible state.

12. Suppose that there are 12 resources available to three processes. At time 0, the
following data is collected. The table indicates the process, the maximum number of
resources needed by the process, and the number of resources currently owned by each
process. Which of the following correctly characterizes this state?

Process    Maximum Needs    Currently Owned
P0         10               4
P1         3                2
P2         7                4
A) It is safe.
B) It is not safe.
C) The state cannot be determined.
D) It is an impossible state.

13. Which of the following data structures in the banker's algorithm is a vector of length
m, where m is the number of resource types?
A) Need
B) Allocation
C) Max
D) Available

14. Assume there are three resources, R1, R2, and R3, that are each assigned unique
integer values 15, 10, and 25, respectively. What is a resource ordering which prevents a
circular wait?
A) R1, R2, R3
B) R3, R2, R1
C) R3, R1, R2
D) R2, R1, R3
15. A _____ could be preempted from a process.
A) mutex lock
B) CPU
C) semaphore
D) file lock

Essay

16. Explain what has to happen for a set of processes to achieve a deadlocked state.

Ans: For a set of processes to exist in a deadlocked state, every process in the set must be
waiting for an event that can be caused only by another process in the set. Thus, the processes
cannot ever exit this state without manual intervention.

17. Describe the four conditions that must hold simultaneously in a system if a deadlock is
to occur.

Ans: For a set of processes to be deadlocked: at least one resource must remain in a
nonsharable mode, a process must hold at least one resource and be waiting to acquire additional
resources held by other processes, resources in the system cannot be preempted, and a circular
wait has to exist between processes.

18. What are the three general ways that a deadlock can be handled?

Ans: A deadlock can be prevented by using protocols to ensure that a deadlock will never occur.
A system may allow a deadlock to occur, detect it, and recover from it. Lastly, an operating
system may just ignore the problem and pretend that deadlocks can never occur.

19. What is the difference between deadlock prevention and deadlock avoidance?

Ans: Deadlock prevention is a set of methods for ensuring that at least one of the necessary
conditions for deadlock cannot hold. Deadlock avoidance requires that the operating system be
given, in advance, additional information concerning which resources a process will request and
use during its lifetime.
20. Describe two protocols to ensure that the hold-and-wait condition never occurs in a
system.

Ans: One protocol requires each process to request and be allocated all its resources before it
begins execution. We can implement this provision by requiring that system calls requesting
resources for a process precede all other system calls. An alternative protocol allows a process to
request resources only when it has none. A process may request some resources and use them.
Before it can request any additional resources, however, it must release all the resources that it is
currently allocated.

21. What is one way to ensure that a circular-wait condition does not occur?

Ans: One way to ensure that this condition never holds is to impose a total ordering of all
resource types, and to require that each process requests resources in an increasing order of
enumeration. This can be accomplished by assigning each resource type a unique integer number
to determine whether one precedes another in the ordering.

22. What does a claim edge signify in a resource-allocation graph?

Ans: A claim edge indicates that a process may request a resource at some time in the future.
This edge resembles a request edge in direction, but is represented in the graph by a dashed line.

23. Describe a wait-for graph and how it detects deadlock.

Ans: If all resources have only a single instance, then we can define a deadlock-detection
algorithm that uses a variant of the resource-allocation graph, called a wait-for graph. We obtain
this graph from the resource-allocation graph by removing the resource nodes and collapsing the
appropriate edges. To detect deadlocks, the system needs to maintain the wait-for graph and
periodically invoke an algorithm that searches for a cycle in the graph.

24. What factors influence the decision of when to invoke a detection algorithm?

Ans: The first factor is how often a deadlock is likely to occur; if deadlocks occur frequently,
the detection algorithm should be invoked frequently. The second factor is how many processes
will be affected by deadlock when it happens; if the deadlock-detection algorithm is invoked for
every resource request, a considerable overhead in computation time will be incurred.
25. Describe two methods for eliminating processes by aborting a process.

Ans: The first method is to abort all deadlocked processes. Aborting all deadlocked processes
will clearly break the deadlock cycle; however, the deadlocked processes may have computed
for a long time, and the results of these partial computations must be discarded and will
probably have to be recomputed later. The second method is to abort one process at a time until
the deadlock cycle is eliminated. Aborting one process at a time incurs considerable overhead,
since, after each process is aborted, a deadlock-detection algorithm must be invoked to determine
whether any processes are still deadlocked.

26. Name three issues that need to be addressed if a preemption is required to deal with
deadlocks.

Ans: First, the order of resources and processes that need to be preempted must be determined
to minimize cost. Second, if a resource is preempted from a process, the process must be rolled
back to some safe state and restarted from that state. The simplest solution is a total rollback.
Finally, we must ensure that starvation does not occur from always preempting resources from
the same process.

27. Describe how a safe state ensures deadlock will be avoided.

Ans: A safe state ensures that there is a sequence of processes to finish their program execution.
Deadlock is not possible while the system is in a safe state. However, if a system goes from a
safe state to an unsafe state, deadlock is possible. One technique for avoiding deadlock is to
ensure that the system always stays in a safe state. This can be done by only assigning a resource
as long as it maintains the system in a safe state.

True/False

28. The circular-wait condition for a deadlock implies the hold-and-wait condition.

Ans: True

29. If a resource-allocation graph has a cycle, the system must be in a deadlocked state.

Ans: False

30. Protocols to prevent hold-and-wait conditions typically also prevent starvation.

Ans: False
31. The wait-for graph scheme is not applicable to a resource allocation system with
multiple instances of each resource type.

Ans: True

32. Ordering resources and requiring the resources to be acquired in order prevents the
circular wait from occurring and therefore prevents deadlock from occurring.

Ans: False

33. The banker's algorithm is useful in a system with multiple instances of each resource
type.

Ans: True

34. A system in an unsafe state will ultimately deadlock.

Ans: False

35. Deadlock prevention and deadlock avoidance are essentially the same approaches for
handling deadlock.

Ans: False
Chapter: Chapter 8
Multiple Choice

1. Absolute code can be generated for ____.


A) compile-time binding
B) load-time binding
C) execution-time binding
D) interrupt binding

2. _____ is the method of binding instructions and data to memory performed by most
general-purpose operating systems.
A) Interrupt binding
B) Compile time binding
C) Execution time binding
D) Load-time binding

3. An address generated by a CPU is referred to as a ____.


A) physical address
B) logical address
C) post relocation register address
D) Memory-Management Unit (MMU) generated address

4. Suppose a program is operating with execution-time binding and the physical address
generated is 300. The relocation register is set to 100. What is the corresponding logical
address?
A) 199
B) 201
C) 200
D) 300

5. The mapping of a logical address to a physical address is done in hardware by the ________.
A) memory-management-unit (MMU)
B) memory address register
C) relocation register
D) dynamic loading register
6. In a dynamically linked library, ____.
A) loading is postponed until execution time
B) system language libraries are treated like any other object module
C) more disk space is used than in a statically linked library
D) a stub is included in the image for each library-routine reference

7. The _____ binding scheme facilitates swapping.


A) interrupt time
B) load time
C) assembly time
D) execution time

8. The roll out, roll in variant of swapping is used ____.


A) when a backing store is not necessary
B) for the round-robin scheduling algorithm
C) for priority-based scheduling algorithms
D) when the load on the system has temporarily been reduced

9. _____ is the dynamic storage-allocation algorithm which results in the smallest leftover
hole in memory.
A) First fit
B) Best fit
C) Worst fit
D) None of the above

10. _____ is the dynamic storage-allocation algorithm which results in the largest leftover
hole in memory.
A) First fit
B) Best fit
C) Worst fit
D) None of the above

11. Which of the following is true of compaction?


A) It can be done at assembly, load, or execution time.
B) It is used to solve the problem of internal fragmentation.
C) It cannot shuffle memory contents.
D) It is possible only if relocation is dynamic and done at execution time.
12. A(n) ____ page table has one page entry for each real page (or frame) of memory.
A) inverted
B) clustered
C) forward-mapped
D) virtual

13. Consider a logical address with a page size of 8 KB. How many bits must be used to
represent the page offset in the logical address?
A) 10
B) 8
C) 13
D) 12

14. Consider a logical address with 18 bits used to represent an entry in a conventional
page table. How many entries are in the conventional page table?
A) 262144
B) 1024
C) 1048576
D) 18

15. Assume a system has a TLB hit ratio of 90%. It requires 15 nanoseconds to access the
TLB, and 85 nanoseconds to access main memory. What is the effective memory access
time in nanoseconds for this system?
A) 108.5
B) 100
C) 22
D) 176.5

16. Given the logical address 0xAEF9 (in hexadecimal) with a page size of 256 bytes, what
is the page number?
A) 0xAE
B) 0xF9
C) 0xA
D) 0x00F9
17. Given the logical address 0xAEF9 (in hexadecimal) with a page size of 256 bytes, what
is the page offset?
A) 0xAE
B) 0xF9
C) 0xA
D) 0xF900

18. Consider a 32-bit address for a two-level paging system with an 8 KB page size. The
outer page table has 1024 entries. How many bits are used to represent the second-level
page table?
A) 10
B) 8
C) 12
D) 9

19. With segmentation, a logical address consists of _____.


A) segment number and offset
B) segment name and offset
C) segment number and page number
D) segment table and segment number

20. Which of the following data structures is appropriate for placing into its own segment?
A) heap
B) kernel code and data
C) user code and data
D) all of the above

21. Assume the value of the base and limit registers are 1200 and 350 respectively. Which of
the following addresses is legal?
A) 355
B) 1200
C) 1551
D) all of the above
22. A(n) ______ matches the process with each entry in the TLB.
A) address-space identifier
B) process id
C) stack
D) page number

23. Which of the following statements are true with respect to hashed page tables?
A) They only work for sparse address spaces.
B) The virtual address is used to hash into the hash table.
C) A common approach for handling address spaces larger than 32 bits.
D) Hash table collisions do not occur because of the importance of paging.

24. Which of the following statements regarding the ARM architecture is false?
A) There are essentially four different page sizes, ranging from 4-KB to 16-MB in size.
B) There are two different levels of TLB.
C) One or two level paging may be used.
D) The micro TLB must be flushed at each context switch.

25. Which of the following is not a reason explaining why mobile devices generally do not
support swapping?
A) Limited space constraints of flash memory.
B) Small size of mobile applications do not require use of swap space.
C) Limited number of writes of flash memory.
D) Poor throughput between main memory and flash memory.

Essay

26. What is the advantage of using dynamic loading?

Ans: With dynamic loading a program does not have to be stored, in its entirety, in main
memory. This allows the system to obtain better memory-space utilization. This also allows
unused routines to stay out of main memory so that memory can be used more effectively. For
example, code used to handle an obscure error would not always use up main memory.

27. What is the context switch time, associated with swapping, if a disk drive with a
transfer rate of 2 MB/s is used to swap out part of a program that is 200 KB in size?
Assume that no seeks are necessary and that the average latency is 15 ms. The time should
reflect only the amount of time necessary to swap out the process.

Ans: 200 KB / 2048 KB per second ≈ 98 ms; 98 ms + 15 ms ≈ 113 ms


28. When does external fragmentation occur?
Ans: As processes are loaded and removed from memory, the free memory space is broken into
little pieces. External fragmentation exists when there is enough total memory space to satisfy a
request, but the available spaces are not contiguous; storage is fragmented into a large number of
small holes. Both the first-fit and best-fit strategies for memory allocation suffer from external
fragmentation.

29. Distinguish between internal and external fragmentation.


Ans: Fragmentation occurs when memory is allocated and returned to the system. As this occurs,
free memory is broken up into small chunks, often too small to be useful. External fragmentation
occurs when there is sufficient total free memory to satisfy a memory request, yet the memory is
not contiguous, so it cannot be assigned. Some contiguous allocation schemes may assign a
process more memory than it actually requested (i.e. they may assign memory in fixed-block
sizes). Internal fragmentation occurs when a process is assigned more memory than it has
requested and the wasted memory fragment is internal to a process.

30. Explain the basic method for implementing paging.

Ans: Physical memory is broken up into fixed-sized blocks called frames while logical memory
is broken up into equal-sized blocks called pages. Whenever the CPU generates a logical address,
the page number and offset into that page is used, in conjunction with a page table, to map the
request to a location in physical memory.

31. Describe how a translation look-aside buffer (TLB) assists in the translation of a
logical address to a physical address.

Ans: Typically, large page tables are stored in main memory, and a page-table base register
points to the page table. Therefore, two memory accesses are needed to access a byte
(one for the page-table entry, one for the byte), causing memory access to be slowed by a factor
of 2. The standard solution to this problem is to use a TLB, a special, small fast-lookup
hardware cache. The TLB is associative, high speed memory. Each entry consists of a key and
value. An item is compared with all keys simultaneously, and if the item is found, the
corresponding value is returned.

32. How are illegal page addresses recognized and trapped by the operating system?

Ans: Illegal addresses are trapped by the use of a valid-invalid bit, which is generally attached
to each entry in the page table. When this bit is set to "valid," the associated page is in the
process's logical address space and is thus a legal (or valid) page. When the bit is set to "invalid,"
the page is not in the process's logical address space. The operating system sets this bit for each
page to allow or disallow access to the page.
33. Describe the elements of a hashed page table.

Ans: A hashed page table contains hash values which correspond to a virtual page number.
Each entry in the hash table contains a linked list of elements that hash to the same location (to
handle collisions). Each element consists of three fields: (1) the virtual page number, (2) the
value of the mapped page frame, and (3) a pointer to the next element in the linked list.

34. Briefly describe the segmentation memory management scheme. How does it differ
from the paging memory management scheme in terms of the user's view of memory?

Ans: Segmentation views a logical address as a collection of segments. Each segment has a
name and length. The addresses specify both the segment name and the offset within the segment.
The user therefore specifies each address by two quantities: a segment name and an offset. In
contrast, in a paging scheme, the user specifies a single address, which is partitioned by the
hardware into a page number and an offset, all invisible to the programmer.

35. Describe the partitions in a logical-address space of a process in the IA-32 architecture.

Ans: The logical-address space is divided into two partitions. The first partition consists of up
to 8 K segments that are private to that process. The second partition consists of up to 8 K
segments that are shared among all the processes. Information about the first partition is kept in
the local descriptor table (LDT); information about the second partition is kept in the global
descriptor table (GDT).

36. How is a limit register used for protecting main memory?

Ans: When the CPU is executing a process, it generates a logical memory address that is added
to a relocation register in order to arrive at the physical memory address actually used by main
memory. A limit register holds the maximum logical address that the CPU should be able to
access. If any logical address is greater than or equal to the value in the limit register, then the
logical address is a dangerous address and an error results.

37. Using Figure 8.14, describe how a logical address is translated to a physical address.

Ans: A logical address is generated by the CPU. This logical address consists of a page number
and offset. The TLB is first checked to see if the page number is present. If so, a TLB hit, the
corresponding page frame is extracted from the TLB, thus producing the physical address. In the
case of a TLB miss, the page table must be searched according to page number for the
corresponding page frame.
38. Explain why mobile operating systems generally do not support swapping.

Ans: Mobile operating systems typically do not support swapping because file systems are
typically employed using flash memory instead of magnetic hard disks. Flash memory is
typically limited in size as well as having poor throughput between flash and main memory.
Additionally, flash memory can only tolerate a limited number of writes before it becomes less
reliable.

39. Using Figure 8.26, describe how address translation is performed on ARM
architectures.

Ans: ARM supports four different page sizes: 4-KB and 16-KB pages use two-level paging; the
larger 1-MB and 16-MB page sizes use single-level paging. The ARM architecture uses two
levels of TLBs - at one level is the micro TLB which is in fact separate TLBs for data and
instructions. At the inner level is a single main TLB. Address translation begins with first
searching the micro TLB, and in case of a TLB miss, the main TLB is then checked. If the
reference is not in the main TLB, the page table must then be consulted.

True/False
40. A relocation register is used to check for invalid memory addresses generated by a
CPU.

Ans: False

41. Reentrant code cannot be shared.

Ans: False

42. There is a 1:1 correspondence between the number of entries in the TLB and the
number of entries in the page table.

Ans: False

43. Hierarchical page tables are appropriate for 64-bit architectures.

Ans: False

43. The ARM architecture uses both single-level and two-level paging.

Ans: True
44. Fragmentation does not occur in a paging system.

Ans: False

45. Hashed page tables are particularly useful for processes with sparse address spaces.

Ans: True

46. Inverted page tables require each process to have its own page table.

Ans: False

47. Without a mechanism such as an address-space identifier, the TLB must be flushed
during a context switch.

Ans: True

48. A 32-bit logical address with 8 KB page size will have 1,000,000 entries in a
conventional page table.

Ans: False

49. Hashed page tables are commonly used when handling addresses larger than 32 bits.

Ans: True

50. The x86-64 bit architecture only uses 48 of the 64 possible bits for representing virtual
address space.

Ans: True

51. Mobile operating systems typically support swapping.

Ans: False
Chapter: Chapter 9
Multiple Choice

1. Which of the following is a benefit of allowing a program that is only partially in
memory to execute?
A) Programs can be written to use more memory than is available in physical memory.
B) CPU utilization and throughput is increased.
C) Less I/O is needed to load or swap each user program into memory.
D) All of the above

2. In systems that support virtual memory, ____.


A) virtual memory is separated from logical memory.
B) virtual memory is separated from physical memory.
C) physical memory is separated from secondary storage.
D) physical memory is separated from logical memory.

3. The vfork() system call in UNIX ____.


A) allows the child process to use the address space of the parent
B) uses copy-on-write with the fork() call
C) is not intended to be used when the child process calls exec() immediately after creation
D) duplicates all pages that are modified by the child process

4. Suppose we have the following page accesses: 1 2 3 4 2 3 4 1 2 1 1 3 1 4 and that there
are three frames within our system. Using the FIFO replacement algorithm, what is the
number of page faults for the given reference string?
A) 14
B) 8
C) 13
D) 10

5. Suppose we have the following page accesses: 1 2 3 4 2 3 4 1 2 1 1 3 1 4 and that there
are three frames within our system. Using the FIFO replacement algorithm, what will be
the final configuration of the three frames following the execution of the given reference
string?
A) 4, 1, 3
B) 3, 1, 4
C) 4, 2, 3
D) 3, 4, 2
6. Suppose we have the following page accesses: 1 2 3 4 2 3 4 1 2 1 1 3 1 4 and that there
are three frames within our system. Using the LRU replacement algorithm, what is the
number of page faults for the given reference string?

A) 14
B) 13
C) 8
D) 10

7. Given the reference string of page accesses: 1 2 3 4 2 3 4 1 2 1 1 3 1 4 and a system with
three page frames, what is the final configuration of the three frames after the LRU
algorithm is applied?
A) 1, 3, 4
B) 3, 1, 4
C) 4, 1, 2
D) 1, 2, 3

8. Belady's anomaly states that ____.


A) giving more memory to a process will improve its performance
B) as the number of allocated frames increases, the page-fault rate may decrease for all page
replacement algorithms
C) for some page replacement algorithms, the page-fault rate may decrease as the number of
allocated frames increases
D) for some page replacement algorithms, the page-fault rate may increase as the number of
allocated frames increases

9. Optimal page replacement ____.


A) is the page-replacement algorithm most often implemented
B) is used mostly for comparison with other page-replacement schemes
C) can suffer from Belady's anomaly
D) requires that the system keep track of previously used pages

10. In the enhanced second chance algorithm, which of the following ordered pairs
represents a page that would be the best choice for replacement?
A) (0,0)
B) (0,1)
C) (1,0)
D) (1,1)
11. The _____ allocation algorithm allocates available memory to each process according
to its size.
A) equal
B) global
C) proportional
D) slab

12. The ____ is the number of entries in the TLB multiplied by the page size.
A) TLB cache
B) page resolution
C) TLB reach
D) hit ratio

13. ________ allows the parent and child processes to initially share the same pages, but
when either process modifies a page, a copy of the shared page is created.
A) copy-on-write
B) zero-fill-on-demand
C) memory-mapped
D) virtual memory fork

14. _____ is the algorithm implemented on most systems.


A) FIFO
B) Least frequently used
C) Most frequently used
D) LRU

15. _____ occurs when a process spends more time paging than executing.
A) Thrashing
B) Memory-mapping
C) Demand paging
D) Swapping

16. Windows uses a local page replacement policy _____.


A) when a process exceeds its working set minimum
B) when a process exceeds its working set maximum
C) when the system undergoes automatic working set trimming
D) under all circumstances
17. Which of the following statements is false with regard to Solaris memory management?
A) The speed at which pages are examined (the scanrate) is constant.
B) The pageout process only runs if the number of free pages is less than lotsfree.
C) An LRU approximation algorithm is employed.
D) Pages selected for replacement may be reclaimed before being placed on the free list.

18. What size segment will be allocated for a 39 KB request on a system using the Buddy
system for kernel memory allocation?
A) 39 KB
B) 42 KB
C) 64 KB
D) None of the above

19. Which of the following statements is false with regard to allocating kernel memory?
A) Slab allocation does not suffer from fragmentation.
B) Adjacent segments can be combined into one larger segment with the buddy system.
C) Because the kernel requests memory of varying sizes, some of which may be quite small, the
system does not have to be concerned about wasting memory.
D) The slab allocator allows memory requests to be satisfied very quickly.

20. The _____ is an approximation of a program's locality.


A) locality model
B) working set
C) page fault frequency
D) page replacement algorithm

21. ______ allows a portion of a virtual address space to be logically associated with a file.
A) Memory-mapping
B) Shared memory
C) Slab allocation
D) Locality of reference

22. Systems in which memory access times vary significantly are known as __________.
A) memory-mapped I/O
B) demand-paged memory
C) non-uniform memory access
D) copy-on-write memory
23. Which of the following is considered a benefit when using the slab allocator?
A) Memory is allocated using a simple power-of-2 allocator.
B) It allows kernel code and data to be efficiently paged.
C) It allows larger segments to be combined using coalescing.
D) There is no memory fragmentation.

Essay
24. Explain the distinction between a demand-paging system and a paging system with
swapping.

Ans: A demand-paging system is similar to a paging system with swapping where processes
reside in secondary memory. With demand paging, when a process is executed, it is swapped
into memory. Rather than swapping the entire process into memory, however, a lazy swapper is
used. A lazy swapper never swaps a page into memory unless that page will be needed. Thus, a
paging system with swapping manipulates entire processes, whereas a demand pager is
concerned with the individual pages of a process.

25. Explain the sequence of events that happens when a page-fault occurs.

Ans: When the operating system cannot load the desired page into memory, a page-fault occurs.
First, the memory reference is checked for validity. In the case of an invalid request, the program
will be terminated. If the request was valid, a free frame is located. A disk operation is then
scheduled to read the page into the frame just found, update the page table, restart the instruction
that was interrupted because of the page fault, and use the page accordingly.

26. How is the effective access time computed for a demand-paged memory system?

Ans: In order to compute the effective access time, it is necessary to know the average memory
access time of the system, the probability of a page fault, and the time necessary to service a
page fault. The effective access time can then be computed using the formula:
effective access time = (1 – probability of page fault) * memory access time + probability of
page fault * page fault time.

27. How does the second-chance algorithm for page replacement differ from the FIFO
page replacement algorithm?

Ans: The second-chance algorithm is based on the FIFO replacement algorithm and even
degenerates to FIFO in its worst-case scenario. In the second-chance algorithm, a FIFO
replacement is implemented along with a reference bit. If the reference bit is set, then it is
cleared, the page's arrival time is set to the current time, and the program moves along in a
similar fashion through the pages until a page with a cleared reference bit is found and
subsequently replaced.
28. Explain the concept behind prepaging.

Ans: Paging schemes, such as pure demand paging, result in large amounts of initial page faults
as the process is started. Prepaging is an attempt to prevent this high level of initial paging by
bringing into memory, at one time, all of the pages that will be needed by the process.

29. Why doesn't a local replacement algorithm solve the problem of thrashing entirely?

Ans: With local replacement, if one process starts thrashing, it cannot steal frames from another
process and cause the latter to thrash as well. However, if processes are thrashing, they will be in
the queue for the paging device most of the time. The average service time for a page fault will
increase because of the longer average queue for the paging device. Thus, the effective access
time will increase, even for a process that is not thrashing.

30. Explain the difference between programmed I/O (PIO) and interrupt driven I/O.

Ans: To send out a long string of bytes through a memory-mapped serial port, the CPU writes
one data byte to the data register to signal that it is ready for the next byte. If the CPU uses
polling to watch the control bit, constantly looping to see whether the device is ready, this
method of operation is called programmed I/O. If the CPU does not poll the control bit, but
instead receives an interrupt when the device is ready for the next byte, the data transfer is said to
be interrupt driven.

31. What are the benefits of using slab allocation to allocate kernel memory?

Ans: The slab allocator provides two main benefits. First, no memory is wasted due to
fragmentation. When the kernel requests memory for an object, the slab allocator returns the
exact amount of memory required to represent the object. Second, memory requests can be
satisfied quickly. Objects are created in advance and can be quickly allocated. Also, released
objects are returned to the cache and marked as free, thus making them immediately available for
subsequent requests.

32. How are lock bits useful in I/O requests?

Ans: A lock bit is associated with every frame. If a frame is locked, it cannot be selected for
replacement. To write a block on tape, we lock into memory the pages containing the block. The
system then continues as usual with other processes if the I/O request is in a queue for that I/O
device. This avoids the replacement of the pages for other processes and the possible
unavailability of those pages when the I/O request advances to the head of the device queue.
When the I/O is complete, the pages are unlocked.
33. Explain how copy-on-write operates.

Ans: Copy-on-write (COW) initially allows a parent and child process to share the same pages.
As long as either process is only reading—and not modifying—the shared pages, both processes
can share the same pages, thus increasing system efficiency. However, as soon as either process
modifies a shared page, a copy of that shared page is created, thus providing each process with
its own private page. For example, assume an integer X whose value is 5 is in a shared page
marked as COW. The parent process then proceeds to modify X, changing its value to 10. Since
this page is marked as COW, a copy of the page is created for the parent process, which changes
the value of X to 10. The value of X remains at 5 for the child process.
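
A small POSIX C example of the scenario just described; assuming a copy-on-write fork() (as on
Linux and most modern UNIX systems), the child continues to see the original value after the
parent's write.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int x = 5;                 /* sits in a page shared copy-on-write after fork */
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {            /* child */
        sleep(1);              /* crude ordering, for illustration only */
        printf("child sees x = %d\n", x);    /* prints 5 */
        return 0;
    }

    x = 10;                    /* parent's write triggers the private page copy */
    printf("parent sees x = %d\n", x);       /* prints 10 */
    wait(NULL);
    return 0;
}
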

34. Explain the distinction between global allocation versus local allocation.

Ans: When a process incurs a page fault, it must be allocated a new frame for bringing the
faulting page into memory. The two general strategies for allocating a new frame are global and
local allocation policies. In a global allocation scheme, a frame may be allocated from any process
in the system. Thus, if process A incurs a page fault, it may be given a frame belonging to process
B. The frame taken from process B may be selected by any of the page-replacement algorithms,
such as LRU. Alternatively, a local allocation policy dictates that when a process incurs a page
fault, it must select one of its own frames for replacement.

35. Discuss two strategies for increasing TLB reach.

Ans: TLB reach refers to the amount of memory accessible from the TLB and is the page size
multiplied by the number of entries in the TLB. Two possible approaches for increasing TLB
reach are (1) increasing the number of entries in the TLB, and (2) increasing the page size.
Increasing the number of entries in the TLB is a costly strategy as the TLB consists of
associative memory, which is both costly and power hungry. For example, by doubling the
number of entries in the TLB, the TLB reach is doubled. However, increasing the page size (or
providing multiple page sizes) allows system designers to maintain the size of the TLB, and yet
significantly increase the TLB reach. For this reason, recent trends have moved towards
increasing page sizes for increasing TLB reach.
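As a quick worked example with assumed (but typical) numbers: a TLB with 64 entries and 4 KB
pages has a reach of 64 x 4 KB = 256 KB, whereas the same 64 entries with 2 MB pages reach
64 x 2 MB = 128 MB, without making the TLB itself any larger.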

36. What is the benefit of using sparse addresses in virtual memory?

Ans: Virtual address spaces that include holes between the heap and stack are known as sparse
address spaces. Using a sparse address space is beneficial because the holes can be filled as the
stack or heap segments grow, or when we wish to dynamically link libraries (or possibly other
shared objects) during program execution.
37. Explain the usefulness of a modify bit.

Ans: A modify bit is associated with each page frame. If a frame is modified (i.e. written), the
modify bit is then set. The modify bit is useful when a page is selected for replacement. If the bit
is not set (the page was not modified), the page does not need to be written to disk. If the modify
bit is set, the page needs to be written to disk when selected for replacement.
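
A minimal C-style sketch of that write-back decision, using hypothetical helper functions and a
hypothetical frame record; it only illustrates the role of the bit.

void write_frame_to_disk(int frame);        /* hypothetical helpers, not a real API */
void read_page_into_frame(int page, int frame);

struct frame_info {
    int frame;
    int modify_bit;    /* set by hardware whenever the frame is written */
};

void replace(struct frame_info *victim, int new_page)
{
    if (victim->modify_bit)
        write_frame_to_disk(victim->frame); /* dirty: must be written back */
    /* clean frame: the copy on disk is still current, so no write is needed */
    read_page_into_frame(new_page, victim->frame);
    victim->modify_bit = 0;
}
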

True/False

38. In general, virtual memory decreases the degree of multiprogramming in a system.

Ans: False

39. Stack algorithms can never exhibit Belady's anomaly.

Ans: True

40. If the page-fault rate is too high, the process may have too many frames.

Ans: False

41. The buddy system for allocating kernel memory is very likely to cause fragmentation
within the allocated segments.

Ans: True

42. On a system with demand-paging, a process will experience a high page fault rate when
the process begins execution.

Ans: True

43. On systems that provide it, vfork() should always be used instead of fork().

Ans: False

44. Only a fraction of a process's working set needs to be stored in the TLB.

Ans: False
45. Solaris uses both a local and global page replacement policy.

Ans: False

46. Windows uses both a local and global page replacement policy.

Ans: False

47. A page fault must be preceded by a TLB miss.

Ans: True

48. Non-uniform memory access has little effect on the performance of a virtual memory
system.

Ans: False

49. In Linux, a slab may only be either full or empty.

Ans: False
1. In a time-sharing operating system, the process moves from the running
state to the ______ when the time slot given to a process is completed.
a. suspended state
b. ready state
c. blocked state
d. terminated state

2. A ______ is the only state transition that is started by the user process
itself.
a. dispatch
b. none of the mentioned
c. block
d. wakeup

3. The number of completed processes per unit time is referred to as ______.
a. efficiency
b. capacity
c. output
d. throughput

4. If the child process finishes execution and the parent is still in execution
state, then the child process is known as ______.
a. orphan
b. zombie
c. body
d. dead

5. It is possible to achieve mutual exclusion using ______.
a. mutex locks
b. both binary semaphores and mutex locks
c. binary semaphores

6. Which of the following is NOT true for Peterson’s solution?
a. Peterson’s solution works for synchronization among more than two
processes
b. The bounded-waiting requirement is met
c. Mutual exclusion is preserved
d. The progress requirement is satisfied
7. What does a long-term scheduler do?
a. The selection of processes that need to be included in the ready queue
b. The selection of the process that needs to be executed next, allocating the CPU to it
c. The selection of processes to be removed from memory by swapping
d. None of the mentioned

8. Constraining the child process to a subset of the resources of the parent prevents any process
from ______.
a. overloading the system by creating too many sub-processes
b. crashing the system by using multiple resources
c. underloading the system through very low CPU utilization
d. overloading the system by using a lot of secondary storage

9. Which process can be influenced by the execution of other processes in the system?
a. Child process
b. Init process
c. Parent process
d. Cooperating process

10. With ______, only one process runs at a time while all other processes wait for the
processor. With ______, more than one process can run simultaneously, each on a different
processor.
a. Multiprocessing, Multiprogramming
b. Multiprogramming, Multiprocessing
c. Uniprogramming, Multiprocessing
d. Multiprogramming, Uniprocessing

11. What does a medium-term scheduler do?
a. None of the mentioned
b. The selection of processes to be removed from memory by swapping
c. The selection of the process that needs to be executed next, allocating the CPU to it
d. The selection of processes that need to be included in the ready queue
12. Which one of the following does not belong to the process queues?
a. Ready Queue
b. PCB queue
c. Device Queue
d. Job Queue

13. What does a short-term scheduler do?
a. None of the mentioned
b. The selection of processes to be removed from memory by swapping
c. The selection of the process that needs to be executed next, allocating the CPU to it
d. The selection of processes that need to be included in the ready queue

14. Which of the following would not disrupt the running process?
a. Scheduler process
b. A device
c. Timer
d. Power failure

15. A semaphore is a shared integer variable that is ______.
a. unable to drop more than one
b. unable to drop below zero
c. unable to drop more than zero
d. unable to drop below one

16. __________ indicates the degree of multiprogramming.
a. # of processes executed per unit time
b. # of processes in the ready queue
c. # of processes in the I/O queue
d. # of processes in memory

17. Which of the following is a mechanism for synchronization?
a. Pipe
b. Thread
c. Socket
d. Semaphore
18. The ______ contains an entry for the PCB of every existing process.
a. Process register
b. Program counter
c. Process table
d. Process unit

19. It is possible to perform process synchronization at the ______.
a. hardware level
b. both software and hardware level
c. software level
d. none of the mentioned

20. Assume a process is waiting for some I/O service in a "blocked" state. It
goes to the ______ when the service is finished.
a. terminated state
b. ready state
c. running state
d. suspended state
