a. interrupter
b. orphan
c. zombie
d. synchronizer
Which of the following selects from among the processes in the ready queue and allocates the CPU to one of them?
a. context switch
b. job scheduler
c. swapping
d. CPU scheduler
The list of processes waiting to execute on a CPU is called a __.
a. device queue
b. ready queue
c. standby queue
d. interrupt queue
Message passing ______.
a. requires the kernel only once, when establishing the shared region
b. requires the kernel multiple times when establishing the shared region
c. requires only one system call, consuming kernel time
d. requires many system calls, consuming kernel time
Including the initial parent process, how many processes are created by the program shown in the following code? … int main() { fork(); fork(); fork(); fork(); return 0; }
a. 4
b. 8
c. 16
d. 12
Which of the following system calls is used to have a new program loaded into the new
process’s memory space?
a. exit()
b. wait()
c. exec()
d. fork()
If the CPU is switched from process P0 to process P1, the state of P0 is saved into ___, and state is reloaded from ___.
a. PCB0, PCB1
b. PCB0, PCB0
c. PCB1, PCB1
d. PCB1, PCB0
Which of the following states does a process switch to from the "ready" state?
a. ready
b. waiting
c. terminated
d. running
Which of the following states does a process switch to from the "running" state when an interrupt occurs?
a. ready
b. new
c. waiting
d. terminated
Which of the following states does a process switch to from the "running" state when an I/O event occurs?
A) ready
B) terminated
C) waiting
D) new
A program is an active entity stored on disk ×
In the ready state, the process is waiting to be assigned to a processor /
Background processes are controlled by the user interface ×
An application appearing on the display screen of a mobile device is a ____ process.
a. foreground
b. display
c. background
d. main
A solution to the producer-consumer problem is to use ____.
a. Message-Passing
b. Shared memory
c. Synchronous
d. Blocking
Which of the following states does a process switch to from the "new" state?
a. waiting
b. running
c. terminated
d. ready
While a process is running and an interrupt occurs, the process returns to the ready state /
A consumer process must wait if the buffer is full ×
In shared memory, the kernel is required many times when establishing the shared region ×
How many processes are created by the following C program? main() { int no = 10; fork(); fork(); printf("the no = %d", no); }
a. one process
b. four processes
c. two processes
d. eight processes
If the CPU is switched from process P1 to process P2, the state of P1 is saved into ___, and state is reloaded from ___.
a. PCB2, PCB1
b. PCB1, PCB1
c. PCB2, PCB2
d. PCB1, PCB2
How many processes are created by the following C program? main() { int val = 15; fork(); printf("the val = %d", val); }
a. three processes
b. two processes
c. four processes
d. one process
"Operating Systems"
Table of Contents
Chapter 5
Chapter 6
Chapter 8
Chapter 5
Multiple Choice
1. Which of the following is true of cooperative scheduling?
A) It requires a timer.
B) A process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting
state.
D) A process switches from the running state to the ready state when an interrupt occurs.
2. ____ is the number of processes that are completed per time unit.
A) CPU utilization
B) Response time
C) Turnaround time
D) Throughput
3. ____ scheduling is approximated by predicting the next CPU burst with an exponential
average of the measured lengths of previous CPU bursts.
A) Multilevel queue
B) RR
C) FCFS
D) SJF
A) SJF
B) FCFS
C) RR
D) Multilevel queue
A) SJF
B) RR
C) FCFS
D) priority algorithms
6. Which of the following is true of multilevel queue scheduling?
A) Processor affinity
B) Processor set
C) NUMA
D) Load balancing
9. __________ involves the decision of which kernel thread to schedule onto which CPU.
A) Process-contention scope
B) System-contention scope
C) Dispatcher
D) Round-robin scheduling
10. With _______ a thread executes on a processor until a long-latency event (i.e. a memory stall)
occurs.
A) coarse-grained multithreading
B) fine-grained multithreading
C) virtualization
D) multicore processors
A) complexity
B) starvation
A) dispatch latency
B) waiting time
C) convoy effect
D) system-contention scope
13. The rate of a periodic task in a hard real-time system is ____, where p is a period and t is the
processing time.
A) 1/p
B) p/t
C) 1/t
D) pt
A) The task with the shortest period will have the lowest priority.
D) It is non-preemptive.
A) When a process becomes runnable, it must announce its deadline requirements to the system.
B) Deadlines are assigned as following: the earlier the deadline, the lower the priority; the later the
deadline, the higher the priority.
C) Priorities are fixed; that is, they cannot be adjusted when a new process starts running.
16. The two general approaches to load balancing are ____ and _____.
A) Race condition
B) Scheduler
C) Synchronization tools
E) I/O device
A) FIFO queue
B) priority queue
C) tree
22) Under which of the following circumstances can preemptive scheduling take place?
A) when a process switches from the running state to the waiting state
B) when a process switches from the waiting state to the ready state
23) Which of the following scheduling algorithms may suffer from the convoy effect?
A) SJF
B) FCFS
C) RR
D) Multilevel queue
24) Which of the following scheduling algorithms gives the minimum average waiting time for a
given set of processes?
A) SJF
B) FCFS
C) RR
D) Multilevel queue
25) Which of the following scheduling algorithms gives the minimum average response time?
A) SJF
B) FCFS
C) RR
D) Multilevel queue
26) If the time quantum gets too large, RR scheduling degenerates to __________?
A) SJF
B) FCFS
C) Shortest-remaining-time-first
D) Multilevel queue
27) Five processes run on a workstation with burst times (8 ms, 7 ms, 15 ms, 6 ms, 10 ms). The processes are scheduled with the round-robin time-sharing method. Which of the following quantum times is the best value for small response times?
a. Qt = 8ms
b. Qt = 10ms
c. Qt = 7ms
d. Qt = 9ms
Quiz4 CH5
1) From the statements below, which one of the following cannot be scheduled by the kernel?
c) process
2) In priority scheduling algorithm, when a process arrives at the ready queue, its priority is
compared with the priority of ____________
a) all processes
c) parent process
d) init process
3) By using a priority scheduling algorithm, when a process arrives at the ready queue, its priority is compared with the priority of ____.
a) init process
c) all processes
d) p
5) The strategy of temporarily suspending processes that are logically runnable is called ____.
a) FIFO
b) non-preemptive scheduling
c) preemptive scheduling
d) SJF
6) The interval between the time a process is submitted and the time of completion is called
the________.
a) throughput
b) turnaround time
c) response time
d) waiting time
a) multiprocessor systems
a) When a process switches from the running state to the ready state
b) When a process transitions from the running state to the waiting state
c) When a process switches from the waiting state to the ready state
a) time
b) space
c) money
b) the cumulative time taken from the time of submission till the first response is produced.
c) the cumulative time taken from the time of submission till the time of completion.
d) the cumulative time taken from the time of submission till the response is output.
13) Which module provides the process selected by the short-term scheduler with control over
the CPU?
a) dispatcher
b) interrupt
c) scheduler
14) The interval between the time a process is submitted and the time of completion is called the
____________.
a) waiting time
b) turnaround time
c) response time
d) throughput
15) An SJF algorithm is simply a priority algorithm where the priority is _________
17) A solution to the problem of indefinite blockage of low-priority processes is __________
a) Starvation
b) Wait queue
c) Ready queue
d) Aging
18) The FCFS algorithm is particularly troublesome for ____________
b) multiprogramming systems
c) multiprocessor systems
d) operating systems
A) when a process transitions from the waiting state to the running state
C) when a process transitions from the running state to the ready state
D) when a process transitions from the running state to the waiting state
True/False
1) In preemptive scheduling, the sections of code affected by interrupts must be guarded from
simultaneous use.
Ans: True
2) In RR scheduling, the time quantum should be small with respect to the context-switch time.
Ans: False
Ans: True
4) Load balancing is typically only necessary on systems with a common run queue.
Ans: False
5) SMP systems that use multicore processors typically run faster than SMP systems that place each processor on a separate physical chip.
Ans: True
Ans: True
Ans: False
8) A multicore system allows two (or more) threads that are in compute cycles to execute at the
same time.
Ans: True
Ans: False
Ans: True
11) Interrupt latency – the time from the arrival of an interrupt to the start of the routine that services it.
Ans: True
12) Dispatch latency – the time for the scheduler to take the current process off the CPU and switch to another.
Ans: True
13) Soft affinity – the operating system attempts to keep a thread running on the same
processor, but no guarantees.
Ans: True
14) Hard affinity – allows a process to specify a set of processors it may run on.
Ans: True
Chapter 6
Multiple Choice
1. A race condition ____.
A) results when several threads try to access the same data concurrently
B) results when several threads try to access and modify the same data concurrently
C) will result only if the outcome of execution does not depend on the order in which instructions are
executed
5. In Peterson's solution, the ____ variable indicates if a process is ready to enter its critical
section.
A) turn
B) lock
C) flag[i]
D) turn[i]
6. The first readers-writers problem ____.
A) requires that, once a writer is ready, that writer performs its write as soon as possible.
C) requires that no reader will be kept waiting unless a writer has already obtained permission to use the
shared database.
D) requires that no reader will be kept waiting unless a reader has already obtained permission to use
the shared database.
7. ____________ occurs when a higher-priority process needs to access a data structure that is
currently being accessed by a lower-priority process.
A) Priority inversion
B) Deadlock
C) A race condition
D) A critical section
8. What is the correct order of operations for protecting a critical section using mutex locks?
9. What is the correct order of operations for protecting a critical section using a binary
semaphore?
10. _____ is not a technique for handling critical sections in operating systems.
A) Non-preemptive kernels
B) Preemptive kernels
C) Spinlocks
D) Peterson's solution
11. A solution to the critical section problem does not have to satisfy which of the following
requirements?
A) mutual exclusion
B) progress
C) atomicity
D) bounded waiting
A) critical section
B) entry section
C) mutex
D) test-and-set
13. _____ can be used to prevent busy waiting when implementing a semaphore.
A) Spinlocks
B) Waiting queues
C) Mutex lock
14. What is the purpose of the mutex semaphore in the implementation of the bounded-buffer
problem using semaphores?
15. How many philosophers may eat simultaneously in the Dining Philosophers problem with 5
philosophers?
A) 1
B) 2
C) 3
D) 5
D) Counting semaphores can be used to control access to a resource with a finite number of instances.
17. _____ is/are not a technique for managing critical sections in operating systems.
A) Peterson's solution
B) Preemptive kernel
C) non-preemptive kernel
D) Semaphores
18. When using semaphores, a process invokes the wait() operation before accessing its critical
section, followed by the signal() operation upon completion of its critical section. Consider
reversing the order of these two operations—first calling signal(), then calling wait(). What would
be a possible outcome of this?
A) Starvation is possible.
B) Several processes could be active in their critical sections at the same time.
D) Deadlock is possible.
A) atomic integer
B) semaphore
C) memory transaction
D) mutex lock
A) race conditions
B) critical sections
C) spinlocks
D) indefinite blocking
23. When several processes access the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place, this is called ________
a) dynamic condition
b) race condition
c) essential condition
d) critical condition
24. If a process is executing in its critical section, then no other processes can be executing in
their critical section. What is this condition called?
a) mutual exclusion
b) critical exclusion
c) synchronous exclusion
d) asynchronous exclusion
25. When a high-priority task is indirectly preempted by a medium-priority task, effectively inverting the relative priorities of the two tasks, the scenario is called __________
a) priority inversion
b) priority removal
c) priority exchange
d) priority modification
26. In the bounded buffer problem, there are the empty and full semaphores that:
a) wait, signal
b) wait, stop
c) wait, hold
d) hold, signal
28. If a process is executing in its critical section, then no other processes can be executing in
their critical section. This condition is called
a) mutual exclusion
b) critical exclusion
c) synchronous exclusion
d) asynchronous exclusion
29. The wait operation of a semaphore basically works on the _______ system call.
a) stop()
b) block()
c) hold()
d) wait()
a) single
b) atomic
c) static
a) Access
b) Termination
c) Completion
d) Execution
34. A minimum of _____ variable(s) is/are required to be shared between processes to solve the
critical section problem.
a) one
b) two
c) three
d) four
35. The operations that can be invoked on a condition variable are :
36. What is the initial value of the semaphore to allow only one of the many processes to enter
their critical section?
a) 8
b) 1
c) 16
d) 0
37. Let P1 and P2 be the two processes and S1 and S2 be the two shared Boolean variables. The
initial values of S1 and S2 are randomly assigned. For accessing the critical sections of P1 and
P2 the methods used by them are given below:
Method used by P1
Critical section
S1 = S2;
Method used by P2
Critical section
S2 = not (S1);
b) it is invalid
a) Synchronization
b) Mutual Exclusion
c) Dead lock
d) Starvation
40. …………… is a condition in which there is a set of concurrent processes, only one of which is
able to access a given resource or perform a given function at any time.
a) Mutual Exclusion
b) Busy Waiting
c) Deadlock
d) Starvation
41. The following program consists of 3 concurrent processes and 3 binary semaphores. The
semaphores are initialized as S0 = 1, S1 = 0, S2 = 0.
a) At least twice
b) Exactly twice
c) Exactly thrice
d) Exactly once
42. A semaphore:
a) is a binary mutex
a. process Register
b. process Unit
c. program Counter
d. process Table
b. # of processes in memory
3. In a time-sharing operating system, the process moves from the running state to the
__________ when the time slot given to a process is completed.
a. ready state
b. blocked state
c. suspended state
d. terminated state
c. binary semaphores
d. mutex locks
b. software level
c. hardware level
a. Cooperating process
b. Parent process
c. Child process
d. Init process
7. Constraining the child process to a subset of the resources of the parent prevents any
process from __________.
a. Timer
b. A device
c. Power failure
d. Scheduler process
a. The selection of processes that need to be executed next and allocates the CPU
b. Peterson’s solution works for synchronization among more than two processes
a. running state
b. ready state
c. terminated state
d. suspended state
d. The selection of processes that need to be executed next and allocates the CPU
14. The number of completed processes per unit time is referred to as __________.
a. capacity
b. output
c. throughput
d. efficiency
15. If the child process finishes execution and the parent is still in execution state, then the child
process is known as __________.
a. dead
b. zombie
c. body
d. orphan
a. The selection of processes that need to be executed next and allocates the CPU
17. Only one process runs at a time with __________; all other processes wait for the processor in the meantime. With __________, more than one process can run simultaneously, each on a different processor.
a. Multiprogramming, Multiprocessing
b. Uniprogramming, Multiprocessing
c. Multiprocessing, Multiprogramming
d. Multiprogramming, Uniprocessing
18. Which of the below is a method for synchronization?
a. Pipe
b. Socket
c. Thread
d. Semaphore
19. A __________ is the only state transition that is started by the user process itself.
A) wakeup
B) block
C) dispatch
True/False
1. Race conditions are prevented by requiring that critical regions be protected by locks.
Ans: True
Ans: False
Ans: False
Ans: True
5. A thread will immediately acquire a dispatcher lock that is in the signaled state.
Ans: True
6. Mutex locks and counting semaphores are essentially the same thing.
Ans: False
7. Mutex locks and binary semaphores are essentially the same thing.
Ans: True
Ans: True
9. Linux mostly uses atomic integers to manage race conditions within the kernel.
Ans: False
10. Instructions from different processes can be interleaved when interrupts are allowed.
Ans: True
11) Both the test_and_set() instruction and the compare_and_swap() instruction are executed atomically.
Ans: True
12. Busy waiting refers to the phenomenon that while a process is in its critical section, any other process
that tries to enter its critical section must loop continuously in the call to acquire the mutex lock.
Ans: True
Chapter 8
Multiple Choice
C) every process in a set is waiting for an event that can only be caused by another process in the set
2. One necessary condition for deadlock is ____, which states that at least one resource must be
held in a non-sharable mode.
B) mutual exclusion
C) circular wait
D) no preemption
3. One necessary condition for deadlock is ______, which states that a process must be holding
one resource and waiting to acquire additional resources.
B) mutual exclusion
C) circular wait
D) no preemption
4. One necessary condition for deadlock is ______, which states that a resource can be released
only voluntarily by the process holding the resource.
B) mutual exclusion
C) circular wait
D) no preemption
5. One necessary condition for deadlock is ______, which states that there is a chain of waiting
processes whereby P0 is waiting for a resource held by P1, P1 is waiting for a resource held by
P2, and Pn is waiting for a resource held by P0.
B) mutual exclusion
C) circular wait
D) no preemption
6. The witness software product is a ____.
C) driver that can be used to prevent mutual exclusion for non-sharable resources
A) a necessary and sufficient condition for deadlock in the case that each resource has more than one
instance
B) a necessary and sufficient condition for a deadlock in the case that each resource has exactly one
instance
C) a sufficient condition for a deadlock in the case that each resource has more than one instance
D) neither necessary nor sufficient for indicating deadlock in the case that each resource has exactly
A) It is safe.
B) It is not safe.
D) It is an impossible state.
12. Suppose that there are 12 resources available to three processes. At time 0, the following
data is collected. The table indicates the process, the maximum number of resources needed by
the process, and the number of resources currently owned by each process. Which of the
following correctly characterizes this state?
A) It is safe.
B) It is not safe.
D) It is an impossible state.
13. Which of the following data structures in the banker's algorithm is a vector of length m,
where m is the number of resource types?
A) Need
B) Allocation
C) Max
D) Available
14. Assume there are three resources, R1, R2, and R3, that are each assigned unique integer
values 15, 10, and 25, respectively. What is a resource ordering which prevents a circular wait?
A) R1, R2, R3
B) R3, R2, R1
C) R3, R1, R2
D) R2, R1, R3
15. A _____ could be preempted from a process.
A) mutex lock
B) CPU
C) semaphore
D) file lock
C) a deadlock is detected sometime after it has occurred but not necessarily immediately.
D) a deadlock is detected sometime before it occurs, but not necessarily just before.
17. Suppose that there are 10 resources available to three processes. At time 0, the following
data is collected. The table indicates the process, the maximum number of resources needed by
the process, and the number of resources currently owned by each process.
B) The request will not be granted, since the current state is not safe.
C) The request will be granted, since the state after granting the request will be safe.
D) The request will not be granted, since the state after granting the request will be unsafe.
18. Suppose that there are two resources types (R1 and R2) with five resources each available to
four processes. At time 0, the following data is collected. The table indicates the process, the
number of each type currently allocated to the processes, and the current request of each
resource type by each process.
A) the order of resources and processes that need to be preempted must be determined to minimize
cost.
B) if a resource is preempted from a process, the process must be rolled back to some safe state and
restarted from that state.
C) ensure that starvation does not occur from always preempting resources from the same process.
22. To ensure no preemption, if a process is holding some resources and requests another
resource that cannot be immediately allocated to it ____________
a. is required
b. is not required
a. must exist
c. may exist
b. interrupts
c. system calls
d. special programs
26. One way to ensure that the circular wait condition never holds is to ____________
a. impose a total ordering of all resource types and determine whether one precedes another in the ordering
b. never let a process acquire resources that are held by other processes
a. A process must not be holding a resource but waiting for one to be freed, and then request to acquire it
b. A process must be holding at least one resource and waiting to acquire additional resources that are being held by other processes
c. A process must hold at least one resource and not be waiting to acquire additional resources
a. the system can allocate resources to each process in some order and still avoid a deadlock
a. banker’s algorithm
b. round-robin algorithm
c. elevator algorithm
d. karn’s algorithm
a. mutual exclusion
b. a process may hold allocated resources while awaiting assignment of other resources
32. The wait-for graph is a deadlock detection algorithm that is applicable when ____________
33. A system has 3 processes sharing 4 resources. If each process needs a maximum of 2 units, then deadlock ____________
b. may occur
c. has to occur
34. Every time a request for allocation cannot be granted immediately, the detection algorithm is
invoked. This will help identify ____________
35. ‘m’ processes share ‘n’ resources of the same type. The maximum need of each process
doesn’t exceed ‘n’ and the sum of all their maximum needs is always less than m+n. In this
setup, deadlock ____________
a. may occur
c. has to occur
36. A deadlock eventually cripples system throughput and will cause the CPU utilization to _____
a. increase
b. drop
c. stay still
38. The resource allocation graph is not applicable to a resource allocation system ____________
a. Allocation – Available
b. Max – Available
c. Max – Allocation
d. Allocation – Max
40. The Banker’s algorithm is _____________ than the resource allocation graph algorithm.
a. less efficient
b. more efficient
c. equal
a. safe allocation
b. safe resource
c. safe sequence
42. Which one of the following is a visual (mathematical) way to determine the deadlock
occurrence?
b. starvation graph
c. inversion graph
a. safe state
b. correct state
c. effective state
d. unsafe state
44. For a deadlock to arise, which of the following conditions must be held simultaneously?
a. Mutual exclusion
b. No preemption
d) none of these
Solution:
A) Need:
(0,0,0,0)
(0,7,5,0)
(1,0,0,2)
(0,0,2,0)
(0,6,4,2)
a. using thread
d. using pipes
b. Deadlocks
c. fatal
d. not deadlocks
4) A computer system has 6 tape drives, with 'n' processes vying for them. Each process may require 3 tape drives. The maximum value of 'n' for which the system is guaranteed to be deadlock-free is ___.
a. 4
b. 2
c. 1
d. 3
b. No preemption
c. Mutual exclusion
a. rarely
b. frequently
a. rarely
b. frequently
10) Implementing the detection algorithm for each request has a drawback that ______.
11) To ensure no preemption, if a process is holding some resources and requests another resource that cannot be immediately allocated to it ____________.
13) The downside of a process allocating all its resources before starting its execution is ___.
14) For each resource type ____________, the resource allocation graph is not applicable.
b. Mutual exclusion
d. A process may keep allocated resources while waiting for other resources to be allocated
a. the total number of resources available in the system must not exceed
b. the total number of resources available in the system must always be less than
c. the total number of resources available in the system must always be equal to
a. A process must not keep a resource but wait for one to be released, and then request to acquire it.
c. A process must retain at least one resource and wait for additional resources held by other processes
to be acquired.
d. A process must have at least one resource and not wait for additional resources to be acquired.
d. not prone to deadlocks
22) Each request allows the system to consider the _______ in order to determine if the current
request can be met or if it must wait to prevent a potential future deadlock.
Ans: True
Ans: False
Ans: False
4. The wait-for graph scheme is not applicable to a resource allocation system with multiple
instances of each resource type.
Ans: True
5. Ordering resources and requiring the resources to be acquired in order prevents the circular
wait from occurring and therefore prevents deadlock from occurring.
Ans: False
6. The banker's algorithm is useful in a system with multiple instances of each resource type.
Ans: True
Ans: False
8. Deadlock prevention and deadlock avoidance are essentially the same approaches for
handling deadlock.
Ans: False
Al-Azhar University-Gaza
Faculty of Engineering and Information Technology
ITCS3313 – Operating Systems
Fall 2016-2017 Midterm Exam 5th November 2017 (60 minutes)
(A)
Student Name: .................... Student ID: .................... Signature: ....................
- Please attempt to solve the following questions. Cell phones must be turned off.
First Question: Choose the correct answer and put it in the table below
2. Which one of the following errors will be handled by the operating system?
a) power failure b) lack of paper in printer c) connection failure in the network
d) (a) and (b) e) (a), (b), and (c) f) none of these
5. Process is
a) a program in execution   b) a job in secondary memory   c) a program in a high-level language kept on disk
d) contents of main memory   e) None of these
10. While running DOS on a PC, which command would be used to display the entire diskette?
11. In the bounded buffer, to check whether the buffer is full, the producer process uses
a) while (((in + 1) % BUFFER_SIZE) == out)   b) while (in == out)   c) while (in != out)
d) while (((in + 1) % BUFFER_SIZE) != out)   e) None of these
12. Fork is
a) the dispatching of a task   b) the creation of a new job   c) the creation of a new process
d) increasing the priority of a task   e) None of these
14. The systems which allow only one process execution at a time are called
a) uni-programming systems   b) uni-processing systems   c) uni-tasking systems
d) none of the mentioned
16. A Process Control Block (PCB) does not contain which of the following:
17. The link between two processes P and Q to send and receive messages is called:
a) communication link   b) message-passing link   c) synchronization link
d) All of these   e) None of these
a) communicate with one another without resorting to shared data   b) share data
c) communicate with one another by resorting to shared data
d) name the recipient or sender of the message   e) None of these
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
E E D D A B C E B D A C A A B G A A
Second Question:
a) Put the labels on the process state diagram
b- In the bounded buffer, you have been given the consumer process pseudo-code; write the producer process pseudo-code.
Consumer Producer
Process p1 Process p2
Count++ Count--
3. The link between two processes P and Q to send and receive messages is called:
a) communication link   b) message-passing link   c) synchronization link
d) All of these   e) None of these
6. In the bounded buffer, to check whether the buffer is full, the producer process uses
a) while (((in + 1) % BUFFER_SIZE) == out)   b) while (in == out)   c) while (in != out)
d) while (((in + 1) % BUFFER_SIZE) != out)   e) None of these
7. Fork is
a) the dispatching of a task   b) the creation of a new process   c) the creation of a new job
d) increasing the priority of a task   e) None of these
12. While running DOS on a PC, which command would be used to display the entire diskette?
a) TYPE   b) DIR   c) DISKCOPY
d) CHKDSK   e) None of these
13. Which one of the following errors will be handled by the operating system?
a) power failure b) lack of paper in printer c) connection failure in the network
d) (a) and (b) e) (a), (b), and (c) f) none of these
17. Process is
a) a program in a high-level language kept on disk   b) a job in secondary memory   c) a program in execution
d) contents of main memory   e) None of these
18. The systems which allow only one process execution at a time are called
a) uni-processing systems   b) uni-programming systems   c) uni-tasking systems
d) none of the mentioned
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
A G A A A A B B C A E B E D E B C B
Second Question:
a) Put the labels on the process state diagram
b- In the bounded buffer, you have been given the consumer process pseudo-code; write the producer process pseudo-code.
Consumer Producer
Good Luck
Multiple Choice
4. Which of the following would lead you to believe that a given system is an SMP-type
system?
A) Each processor is assigned a specific task.
B) There is a boss–worker relationship between the processors.
C) Each processor performs all tasks within the operating system.
D) None of the above
5. A ____ can be used to prevent a user program from never returning control to the
operating system.
A) portal
B) program counter
C) firewall
D) timer
10. Two important design issues for cache memory are ____.
A) speed and volatility
B) size and replacement policy
C) power consumption and reusability
D) size and access privileges
12. Which of the following statements concerning open source operating systems is true?
A) Solaris is open source.
B) Source code is freely available.
C) They are always more secure than commercial, closed systems.
D) All open source operating systems share the same set of goals.
13. Which of the following operating systems is not open source?
A) Windows
B) BSD UNIX
C) Linux
D) PCLinuxOS
14. A _____ provides a file-system interface which allows clients to create and modify files.
A) compute-server system
B) file-server system
C) wireless network
D) network computer
Ans: A computer system has many resources that may be required to solve a problem: CPU
time, memory space, file-storage space, I/O devices, and so on. The operating system acts as the
manager of these resources. Facing numerous and possibly conflicting requests for resources,
the operating system must decide how to allocate them to specific programs and users so that it
can operate the computer system efficiently and fairly.
Ans: The interrupt vector is merely a table of pointers to specific interrupt-handling routines.
Because there are a fixed number of interrupts, this table allows for more efficient handling of
the interrupts than with a general-purpose, interrupt-processing routine.
Ans: A bootstrap program is the initial program that the computer runs when it is powered up or
rebooted. It initializes all aspects of the system, from CPU registers to device controllers to
memory contents. Typically, it is stored in read-only memory (ROM) or electrically erasable
programmable read-only memory (EEPROM), known by the general term firmware, within the
computer hardware.
24. What role do device controllers and device drivers play in a computer system?
Ans: A general-purpose computer system consists of CPUs and multiple device controllers that
are connected through a common bus. Each device controller is in charge of a specific type of
device. The device controller is responsible for moving the data between the peripheral devices
that it controls and its local buffer storage. Typically, operating systems have a device driver for
each device controller. This device driver understands the device controller and presents a
uniform interface for the device to the rest of the operating system.
Ans: Clustered systems are considered high-availability in that these types of systems have
redundancies capable of taking over a specific process or task in the case of a failure. The
redundancies are inherent due to the fact that clustered systems are composed of two or more
individual systems coupled together.
26. Describe the differences between physical, virtual, and logical memory.
Ans: Physical memory is the memory available for machines to execute operations (i.e., cache,
random access memory, etc.). Virtual memory is a method through which programs can be
executed that requires space larger than that available in physical memory by using disk memory
as a backing store for main memory. Logical memory is an abstraction of the computer’s
different types of memory that allows programmers and applications a simplified view of
memory and frees them from concern over memory-storage limitations.
Ans: In order to ensure the proper execution of the operating system, most computer systems
provide hardware support to distinguish between user mode and kernel mode. A mode bit is
added to the hardware of the computer to indicate the current mode: kernel (0) or user (1).
When the computer system is executing on behalf of a user application, the system is in user
mode. However, when a user application requests a service from the operating system (via a
system call), it must transition from user to kernel mode to fulfill the request.
Ans: In multiprocessor environments, two copies of the same data may reside in the local cache
of each CPU. Whenever one CPU alters the data, the cache of the other CPU must receive an
updated version of this data. Cache coherency involves ensuring that multiple caches store the
most updated version of the stored data.
29. Why is main memory not suitable for permanent program storage or backup purposes?
Furthermore, what is the main disadvantage to storing information on a magnetic disk
drive as opposed to main memory?
Ans: Main memory is a volatile memory in that any power loss to the system will result in
erasure of the data stored within that memory. While disk drives can store more information
permanently than main memory, disk drives are significantly slower.
Ans: The compute-server system provides an interface to which a client can send a request to
perform an action (for example, read data from a database); in response, the server executes the
action and sends back results to the client. The file-server system provides a file-system interface
where clients can create, update, read, and delete files. An example of such a system is a Web
server that delivers files to clients running Web browsers.
31. Computer systems can be divided into four approximate components. What are they?
Ans: The four components are the hardware, the operating system, the application programs, and
the users. System programs are not part of the kernel, but still are associated with the operating
system. Application programs are not associated with the operation of the system.
33. Describe why direct memory access (DMA) is considered an efficient mechanism for
performing I/O.
Ans: DMA is efficient for moving large amounts of data between I/O devices and main memory.
It is considered efficient because it removes the CPU from being responsible for transferring data.
DMA instructs the device controller to move data between the devices and main memory.
34. Describe why multi-core processing is more efficient than placing each processor on its
own chip.
Ans: A large reason why it is more efficient is that communication between processors on the
same chip is faster than processors on separate chips.
35. Distinguish between uniform memory access (UMA) and non-uniform memory access
(NUMA) systems.
Ans: On UMA systems, accessing RAM takes the same amount of time from any CPU. On
NUMA systems, accessing some parts of memory may take longer than accessing other parts of
memory, thus creating a performance penalty for certain memory accesses.
36. Explain the difference between singly, doubly, and circularly linked lists.
Ans: A singly linked list is one where each item points to its successor. A doubly linked
list allows an item to point to its predecessor or successor. A circularly linked
list is one where the last element points back to the first.
Ans: Protection is concerned with controlling the access of processes or users to the resources of
the computer system. The role of security is to defend the system from internal or external
attacks.
Ans: Mobile computing takes place on handheld devices and tablets. Because these devices are
portable and lightweight, they typically do not have the processing power and storage capacity of
desktop systems. However, features such as GPS and accelerometers have allowed mobile
devices to provide functionality that is unavailable to desktop systems.
Ans: Cloud computing is a type of computing that delivers computing, storage, and application
services across a network. Cloud computing often uses virtualization to provide its functionality.
There are many different types of cloud environments, as well as services offered. Cloud
computing may be either public, private, or a hybrid of the two. Additionally, cloud computing
may offer applications, platforms, or system infrastructures.
True/False
41. The operating system kernel consists of all system and application programs in a
computer.
Ans: False
42. Flash memory is slower than DRAM but needs no power to retain its contents.
Ans: True
Ans: False
44. UNIX does not allow users to escalate privileges to gain extra permissions for a
restricted activity.
Ans: False
45. Processors for most mobile devices run at a slower speed than a processor in a desktop
PC.
Ans: True
Ans: True
47. A dual-core system requires that each core have its own cache memory.
Ans: False
48. Virtually all modern operating systems provide support for SMP.
Ans: True
49. All computer systems have some sort of user interaction.
Ans: False
50. Solid state disks are generally faster than magnetic disks.
Ans: True
Ans: False
Ans: True
Chapter: Chapter 2
Multiple Choice
4. Policy ____.
A) determines how to do something
B) determines what will be done
C) is not likely to change across places
D) is not likely to change over time
7. To the SYSGEN program of an operating system, the least useful piece of information
is _____.
A) the CPU being used
B) amount of memory available
C) what applications to install
D) operating-system options such as buffer sizes or CPU scheduling algorithms
13. The Windows CreateProcess() system call creates a new process. What is the
equivalent system call in UNIX:
A) NTCreateProcess()
B) process()
C) fork()
D) getpid()
14. The close() system call in UNIX is used to close a file. What is the equivalent system
call in Windows:
A) CloseHandle()
B) close()
C) CloseFile()
D) Exit()
15. The Windows CreateFile() system call is used to create a file. What is the equivalent
system call in UNIX:
A) ioctl()
B) open()
C) fork()
D) createfile()
17. ______ is a mobile operating system designed for the iPhone and iPad.
A) Mac OS X
B) Android
C) UNIX
D) iOS
18. The ________ provides a portion of the system call interface for UNIX and Linux.
A) POSIX
B) Java
C) Standard C library
D) Standard API
20. _____ is/are not a technique for passing parameters from an application to a system
call.
A) Cache memory
B) Registers
C) Stack
D) Special block in memory
21. There are two different ways that commands can be processed by a command
interpreter. One way is to allow the command interpreter to contain the code needed to
execute the command. The other way is to implement the commands through system
programs. Compare and contrast the two approaches.
Ans: In the first approach, upon the user issuing a command, the interpreter jumps to the
appropriate section of code, executes the command, and returns control back to the user. In the
second approach, the interpreter loads the appropriate program into memory along with the
appropriate arguments. The advantage of the first method is speed and overall simplicity. The
disadvantage to this technique is that new commands require rewriting the interpreter program
which, after a number of modifications, may get complicated, messy, or too large. The advantage
to the second method is that new commands can be added without altering the command
interpreter. The disadvantage is reduced speed and the clumsiness of passing parameters from the
interpreter to the system program.
22. Describe the relationship between an API, the system-call interface, and the operating
system.
Ans: The system-call interface of a programming language serves as a link to system calls
made available by the operating system. This interface intercepts function calls in the API and
invokes the necessary system call within the operating system. Thus, most of the details of the
operating-system interface are hidden from the programmer by the API and are managed by the
run-time support library.
23. Describe three general methods used to pass parameters to the operating system
during system calls.
Ans: The simplest approach is to pass the parameters in registers. In some cases, there may be
more parameters than registers. In these cases, the parameters are generally stored in a block, or
table, of memory, and the address of the block is passed as a parameter in a register. Parameters
can also be placed, or pushed, onto the stack by the program and popped off the stack by the
operating system.
24. What are the advantages of using a higher-level language to implement an operating
system?
Ans: The code can be written faster, is more compact, and is easier to understand and debug. In
addition, improvements in compiler technology will improve the generated code for the entire
operating system by simple recompilation. Finally, an operating system is far easier to port — to
move to some other hardware — if it is written in a higher-level language.
Ans: Requirements can be divided into user and system goals. Users desire a system that is
convenient to use, easy to learn, and to use, reliable, safe, and fast. System goals are defined by
those people who must design, create, maintain, and operate the system: The system should be
easy to design, implement, and maintain; it should be flexible, reliable, error-free, and efficient.
26. What are the advantages and disadvantages of using a microkernel approach?
Ans: One benefit of the microkernel approach is ease of extending the operating system. All
new services are added to user space and consequently do not require modification of the kernel.
The microkernel also provides more security and reliability, since most services are running as
user — rather than kernel — processes. Unfortunately, microkernels can suffer from
performance decreases due to increased system function overhead.
27. Explain why a modular kernel may be the best of the current operating system design
techniques.
Ans: The modular approach combines the benefits of both the layered and microkernel design
techniques. In a modular design, the kernel needs only to have the capability to perform the
required functions and know how to communicate between modules. However, if more
functionality is required in the kernel, then the user can dynamically load modules into the kernel.
The kernel can have sections with well-defined, protected interfaces, a desirable property found
in layered systems. More flexibility can be achieved by allowing the modules to communicate
with one another.
Ans: Primarily because the kernel environment is a blend of the Mach microkernel and BSD
UNIX (which is closer to a monolithic kernel).
29. Describe how Android uses a unique virtual machine for running Java programs.
Ans: The Dalvik virtual machine is designed specifically for Android and has been optimized for
mobile devices with limited memory and CPU processing capabilities.
True/False
30. KDE and GNOME desktops are available under open-source licenses.
Ans: True
31. Many operating systems merge I/O devices and files into a combined file because of the
similarity of system calls for each.
Ans: True
Ans: False
33. System calls can be run in either user mode or kernel mode.
Ans: False
34. Application programmers typically use an API rather than directly invoking system
calls.
Ans: True
35. In general, Windows system calls have longer, more descriptive names and UNIX
system calls use shorter, less descriptive names.
Ans: True
36. Mac OS X is a hybrid system consisting of both the Mach microkernel and BSD UNIX.
Ans: True
Ans: False
Chapter: Chapter 3
Multiple Choice
1. The ____ of a process contains temporary data such as function parameters, return
addresses, and local variables.
A) text section
B) data section
C) program counter
D) stack
3. The list of processes waiting for a particular I/O device is called a(n) ____.
A) standby queue
B) device queue
C) ready queue
D) interrupt queue
5. When a child process is created, which of the following is a possibility in terms of the
execution or address space of the child process?
A) The child process runs concurrently with the parent.
B) The child process has a new program loaded into it.
C) The child is a duplicate of the parent.
D) All of the above
6. A _________________ saves the state of the currently running process and restores the
state of the next process to run.
A) save-and-restore
B) state switch
C) context switch
D) none of the above
7. A process may transition to the Ready state by which of the following actions?
A) Completion of an I/O event
B) Awaiting its turn on the CPU
C) Newly-admitted process
D) All of the above
8. In a(n) ____ temporary queue, the sender must always block until the recipient
receives the message.
A) zero capacity
B) variable capacity
C) bounded capacity
D) unbounded capacity
11. When communicating with sockets, a client process initiates a request for a connection
and is assigned a port by the host computer. Which of the following would be a valid port
assignment for the host computer?
A) 21
B) 23
C) 80
D) 1625
12. A(n) ______________ allows several unrelated processes to use the pipe for
communication.
A) named pipe
B) anonymous pipe
C) LIFO
D) ordinary pipe
13. Which of the following statements is true?
A) Shared memory is typically faster than message passing.
B) Message passing is typically faster than shared memory.
C) Message passing is most useful for exchanging large amounts of data.
D) Shared memory is far more common in operating systems than message passing.
14. Imagine that a host with IP address 150.55.66.77 wishes to download a file from the web
server at IP address 202.28.15.123. Select a valid socket pair for a connection between this
pair of hosts.
A) 150.55.66.77:80 and 202.28.15.123:80
B) 150.55.66.77:150 and 202.28.15.123:80
C) 150.55.66.77:2000 and 202.28.15.123:80
D) 150.55.66.77:80 and 202.28.15.123:3500
15. Child processes inherit UNIX ordinary pipes from their parent process because:
A) The pipe is part of the code and children inherit code from their parents.
B) A pipe is treated as a file descriptor and child processes inherit open file descriptors from
their parents.
C) The STARTUPINFO structure establishes this sharing.
D) All IPC facilities are shared between the parent and child processes.
17. Which of the following is not a process type in the Chrome browser?
A) Plug-in
B) Renderer
C) Sandbox
D) Browser
18. The ________ application is the application appearing on the display screen of a mobile
device.
A) main
B) background
C) display
D) foreground
19. A process that has terminated, but whose parent has not yet called wait(), is known as a
________ process.
A) zombie
B) orphan
C) terminated
D) init
Short Answer
21. Name and describe the different states that a process can exist in at any given time.
Ans: The possible states of a process are: new, running, waiting, ready, and terminated. The
process is created while in the new state. In the running or waiting state, the process is executing
or waiting for an event to occur, respectively. The ready state occurs when the process is ready
and waiting to be assigned to a processor and should not be confused with the waiting state
mentioned earlier. After the process finishes executing its code, it enters the terminated state.
22. Explain the main differences between a short-term and long-term scheduler.
Ans: The primary distinction between the two schedulers lies in the frequency of execution.
The short-term scheduler is designed to frequently select a new process for the CPU, at least
once every 100 milliseconds. Because of the short time between executions, the short-term
scheduler must be fast. The long-term scheduler executes much less frequently; minutes may
separate the creation of one new process and the next. The long-term scheduler controls the
degree of multiprogramming. Because of the longer interval between executions, the long-term
scheduler can afford to take more time to decide which process should be selected for execution.
23. Explain the difference between an I/O-bound process and a CPU-bound process.
Ans: The differences between the two types of processes stem from the number of I/O requests
that the process generates. An I/O-bound process spends more of its time performing I/O operations
than doing computational work. The CPU-bound process infrequently requests I/O operations
and spends more of its time performing computational work.
24. Explain the concept of a context switch.
Ans: Whenever the CPU starts executing a new process, the old process's state must be
preserved. The context of a process is represented by its process control block. Switching the
CPU to another process requires performing a state save of the current process and a state restore
of a different process. This task is known as a context switch. When a context switch occurs,
the kernel saves the context of the old process in its PCB and loads the saved context of the new
process scheduled to run.
25. Explain the fundamental differences between the UNIX fork() and Windows
CreateProcess() functions.
Ans: Each function is used to create a child process. However, fork() has no parameters;
CreateProcess() has ten. Furthermore, whereas the child process created with fork() inherits a copy
of the address space of its parent, the CreateProcess() function requires specifying the address
space of the child process.
26. Name the three types of sockets used in Java and the classes that implement them.
Ans: Connection-oriented (TCP) sockets are implemented with the Socket class.
Connectionless (UDP) sockets use the DatagramSocket class. Finally, the MulticastSocket class is
a subclass of the DatagramSocket class. A multicast socket allows data to be sent to multiple
recipients.
Ans: Data can be represented differently on different machine architectures (e.g., little-endian vs.
big-endian). XDR represents data independently of machine architecture. XDR is used when
transmitting data between different machines using an RPC.
29. Explain the term marshalling.
Ans: Marshalling involves the packaging of parameters into a form that can be transmitted over
the network. When the client invokes a remote procedure, the RPC system calls the appropriate
stub, passing it the parameters provided to the remote procedure. This stub locates the port on
the server and marshals the parameters. If necessary, return values are passed back to the client
using the same technique.
30. Explain the terms "at most once" and "exactly once" and indicate how they relate to
remote procedure calls.
Ans: Because a remote procedure call can fail in any number of ways, it is important to be able
to handle such errors in the messaging system. The term "at most once" refers to ensuring that
the server processes a particular message sent by the client only once and not multiple times.
This is implemented by merely checking the timestamp of the message. The term "exactly once"
refers to making sure that the message is executed on the server once and only once so that there
is a guarantee that the server received and processed the message.
31. Describe two approaches to the binding of client and server ports during RPC calls.
Ans: First, the binding information may be predetermined, in the form of fixed port addresses.
At compile time, an RPC call has a fixed port number associated with it. Second, binding can
be done dynamically by a rendezvous mechanism. Typically, an operating system provides a
rendezvous daemon on a fixed RPC port. A client then sends a message containing the name of
the RPC to the rendezvous daemon requesting the port address of the RPC it needs to execute.
The port number is returned, and the RPC calls can be sent to that port until the process
terminates (or the server crashes).
32. Ordinarily the exec() system call follows the fork(). Explain what would happen if a
programmer were to inadvertently place the call to exec() before the call to fork().
Ans: Because exec() overwrites the process, we would never reach the call to fork() and hence, no
new processes would be created. Rather, the program specified in the parameter to exec() would
be run instead.
Ans: Each website opens in a separate tab and is represented by a separate renderer process.
If that webpage were to crash, only the process representing that tab would be affected; all
other sites (represented as separate tabs/processes) would be unaffected.
34. Describe how UNIX and Linux manage orphan processes.
Ans: If a parent terminates without first calling wait(), its children are considered orphan
processes. Linux and UNIX assign the init process as the new parent of orphan processes and init
periodically calls wait() which allows any resources allocated to terminated processes to be
reclaimed by the operating system.
True/False
35. All processes in UNIX first transition to a zombie process upon termination.
Ans: True
36. The difference between a program and a process is that a program is an active entity
while a process is a passive entity.
Ans: False
Ans: False
Ans: False
39. Local Procedure Calls in Windows XP are similar to Remote Procedure Calls.
Ans: True
40. For a single-processor system, there will never be more than one process in the
Running state.
Ans: True
41. Shared memory is a more appropriate IPC mechanism than message passing for
distributed systems.
Ans: False
42. Ordinary pipes in UNIX require a parent-child relationship between the
communicating processes.
Ans: True
Ans: True
44. Using a section object to pass messages over a connection port avoids data copying.
Ans: True
Ans: True
Ans: False
47. The Mach operating system treats system calls with message passing.
Ans: True
48. Named pipes continue to exist in the system after the creating process has terminated.
Ans: True
49. A new browser process is created by the Chrome browser for every new website that is
visited.
Ans: False
50. The iOS mobile operating system only supports a limited form of multitasking.
Ans: True
Chapter: Chapter 4
Multiple Choice
1. ____ is a thread library for Solaris that maps many user-level threads to one kernel
thread.
A) Pthreads
B) Green threads
C) Sthreads
D) Java threads
8. LWP is ____.
A) short for lightweight processor
B) placed between system and kernel threads
C) placed between user and kernel threads
D) common in systems implementing one-to-one multithreading models
10. In multithreaded programs, the kernel informs an application about certain events
using a procedure known as a(n) ____.
A) signal
B) upcall
C) event handler
D) pool
11. _____ is not considered a challenge when designing applications for multicore systems.
A) Deciding which activities can be run in parallel
B) Ensuring there is a sufficient number of cores
C) Determining if data can be separated so that it is accessed on separate cores
D) Identifying data dependencies between tasks.
12. A ____ provides an API for creating and managing threads.
A) set of system calls
B) multicore system
C) thread library
D) multithreading model
13. The _____ model multiplexes many user-level threads to a smaller or equal number of
kernel threads.
A) many-to-many
B) two-level
C) one-to-one
D) many-to-one
14. The _____ model maps many user-level threads to one kernel thread.
A) many-to-many
B) two-level
C) one-to-one
D) many-to-one
15. The _____ model maps each user-level thread to one kernel thread.
A) many-to-many
B) two-level
C) one-to-one
D) many-to-one
16. The _____ model allows a user-level thread to be bound to one kernel thread.
A) many-to-many
B) two-level
C) one-to-one
D) many-to-one
17. The most common technique for writing multithreaded Java programs is _____.
A) extending the Thread class and overriding the run() method
B) implementing the Runnable interface and defining its run() method
C) designing your own Thread class
D) using the CreateThread() function
18. In Pthreads, a parent uses the pthread_join() function to wait for its child thread to
complete. What is the equivalent function in Win32?
A) win32_join()
B) wait()
C) WaitForSingleObject()
D) join()
20. A _____ uses an existing thread — rather than creating a new one — to complete a task.
A) lightweight process
B) thread pool
C) scheduler activation
D) asynchronous procedure call
21. According to Amdahl's Law, what is the speedup gain for an application that is 60%
parallel and we run it on a machine with 4 processing cores?
A) 1.82
B) .7
C) .55
D) 1.43
23. ___________ is a formula that identifies potential performance gains from adding
additional computing cores to an application that has a parallel and serial component.
A) Task parallelism
B) Data parallelism
C) Data splitting
D) Amdahl's Law
24. When OpenMP encounters the #pragma omp parallel directive, it
A) constructs a parallel region
B) creates a new thread
C) creates as many threads as there are processing cores
D) parallelizes for loops
Ans: For a web server that runs as a single-threaded process, only one client can be serviced at
a time. This could result in potentially enormous wait times for a busy server.
27. List the four major categories of the benefits of multithreaded programming. Briefly
explain each.
Ans: The benefits of multithreaded programming fall into the categories: responsiveness,
resource sharing, economy, and utilization of multiprocessor architectures. Responsiveness
means that a multithreaded program can allow a program to run even if part of it is blocked.
Resource sharing occurs when an application has several different threads of activity within the
same address space. Threads share the resources of the process to which they belong. As a result,
it is more economical to create new threads than new processes. Finally, a single-threaded
process can only execute on one processor regardless of the number of processors actually
present. Multiple threads can run on multiple processors, thereby increasing efficiency.
28. What are the two different ways in which a thread library could be implemented?
Ans: The first technique of implementing the library involves ensuring that all code and data
structures for the library reside in user space with no kernel support. The other approach is to
implement a kernel-level library supported directly by the operating system so that the code and
data structures exist in kernel space.
29. Describe two techniques for creating Thread objects in Java.
Ans: One approach is to create a new class that is derived from the Thread class and to override
its run() method. An alternative — and more commonly used — technique is to define a class
that implements the Runnable interface. When a class implements Runnable, it must define a run()
method. The code implementing the run() method is what runs as a separate thread.
30. In Java, what two things does calling the start() method for a new Thread object
accomplish?
Ans: Calling the start() method for a new Thread object first allocates memory and initializes a
new thread in the JVM. Next, it calls the run() method, making the thread eligible to be run by the
JVM. Note that the run() method is never called directly. Rather, the start() method is called,
which then calls the run() method.
31. Some UNIX systems have two versions of fork(). Describe the function of each version,
as well as how to decide which version to use.
Ans: One version of fork() duplicates all threads and the other duplicates only the thread that
invoked the fork() system call. Which of the two versions of fork() to use depends on the
application. If exec() is called immediately after forking, then duplicating all threads is
unnecessary, as the program specified in the parameters to exec() will replace the process. If,
however, the separate process does not call exec() after forking, the separate process should
duplicate all threads.
32. How can deferred cancellation ensure that thread termination occurs in an orderly
manner as compared to asynchronous cancellation?
Ans: With asynchronous cancellation, the target thread may be terminated immediately, possibly
while it is in the middle of updating data it shares with other threads, leaving that data in an
inconsistent state. With deferred cancellation, the target thread periodically checks whether it
should terminate, so cancellation occurs only at points where the thread can be terminated safely.
Ans: A thread pool is a collection of threads, created at process startup, that sit and wait for
work to be allocated to them. This allows one to place a bound on the number of concurrent
threads associated with a process and reduces the overhead of creating new threads and destroying
them at termination.
34. What are the general components of a thread in Windows?
Ans: The thread consists of a unique ID, a register set that represents the status of the processor,
a user stack for user mode, a kernel stack for kernel mode, and a private storage area used by
run-time libraries and dynamic link libraries.
35. Describe the difference between the fork() and clone() Linux system calls.
Ans: The fork() system call is used to duplicate a process. The clone() system call behaves
similarly except that, instead of creating a copy of the process, it creates a separate process that
shares the address space of the calling process.
36. Multicore systems present certain challenges for multithreaded programming. Briefly
describe these challenges.
Ans: Multicore systems have placed more pressure on system programmers as well as
application developers to make efficient use of the multiple computing cores. These challenges
include determining how to divide applications into separate tasks that can run in parallel on the
different cores. These tasks must be balanced such that each task is doing an equal amount of
work. Just as tasks must be separated, data must also be divided so that it can be accessed by the
tasks running on separate cores. So that data can safely be accessed, data dependencies must be
identified and where such dependencies exist, data accesses must be synchronized to ensure the
safety of the data. Once all such challenges have been met, there remain considerable
challenges in testing and debugging such applications.
37. Distinguish between parallelism and concurrency.
Ans: A parallel system can perform more than one task simultaneously. A concurrent system
supports more than one task by allowing multiple tasks to make progress.
38. Distinguish between data parallelism and task parallelism.
Ans: Data parallelism involves distributing subsets of the same data across multiple computing
cores and performing the same operation on each core. Task parallelism involves distributing
tasks across the different computing cores, where each task performs a unique operation.
39. Describe how OpenMP is a form of implicit threading.
Ans: OpenMP provides a set of compiler directives that allows parallel programming on systems
that support shared memory. Programmers identify regions of code that can run in parallel by
placing them in a block of code that begins with the directive #pragma omp parallel. When the
compiler encounters this parallel directive, it creates as many threads as there are processing
cores in the system.
40. Describe Grand Central Dispatch (GCD).
Ans: Grand Central Dispatch (GCD) is a technology for Mac OS X and iOS systems that is a
combination of extensions to the C language, an API, and a runtime library that allows
developers to construct "blocks", regions of code that can run in parallel. GCD then manages
the parallel execution of blocks in several dispatch queues.
True/False
Ans: True
42. A thread is composed of a thread ID, program counter, register set, and heap.
Ans: False
Ans: True
Ans: False
Ans: False
46. Each thread has its own register set and stack.
Ans: True
47. Deferred cancellation is preferred over asynchronous cancellation.
Ans: True
48. The single benefit of a thread pool is to control the number of threads.
Ans: False
Ans: True
Ans: True
51. Amdahl's Law describes performance gains for applications with both a serial and parallel
component.
Ans: True
Ans: True
Ans: False
Ans: True
55. Task parallelism distributes threads and data across multiple computing cores.
Ans: False
Chapter: Chapter 5
Multiple Choice
2. ____ is the number of processes that are completed per time unit.
A) CPU utilization
B) Response time
C) Turnaround time
D) Throughput
3. ____ scheduling is approximated by predicting the next CPU burst with an exponential
average of the measured lengths of previous CPU bursts.
A) Multilevel queue
B) RR
C) FCFS
D) SJF
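The exponential average behind this SJF approximation is tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, where t_n is the most recently measured burst and tau_n is the previous estimate. A minimal sketch, with a hypothetical burst history and alpha = 0.5:

```python
def next_burst_estimate(prev_estimate, measured_burst, alpha=0.5):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    return alpha * measured_burst + (1 - alpha) * prev_estimate

# Hypothetical measured CPU bursts (ms); initial guess tau_0 = 10.
tau = 10.0
for burst in [6, 4, 6, 4]:
    tau = next_burst_estimate(tau, burst)
print(tau)  # the estimate tracks the recent bursts
```

With alpha = 0.5, recent bursts and past history are weighted equally; alpha closer to 1 makes the estimate track only the latest burst.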
8. Which of the following statements is false with regard to the Linux CFS scheduler?
A) Each task is assigned a proportion of CPU processing time.
B) Lower numeric values indicate higher relative priorities.
C) There is a single, system-wide value of vruntime.
D) The scheduler doesn't directly assign priorities.
9. The Linux CFS scheduler identifies _____________ as the interval of time during which
every runnable task should run at least once.
A) virtual run time
B) targeted latency
C) nice value
D) load balancing
11. In Solaris, what is the time quantum (in milliseconds) of an interactive thread with
priority 35?
A) 25
B) 54
C) 80
D) 35
12. In Solaris, if an interactive thread with priority 15 uses its entire time quantum, what is
its priority recalculated to?
A) 51
B) 5
C) 160
D) It remains at 15
13. In Solaris, if an interactive thread with priority 25 is waiting for I/O, what is its priority
recalculated to when it is eligible to run again?
A) 15
B) 120
C) 52
D) It remains at 25
18. __________ involves the decision of which kernel thread to schedule onto which CPU.
A) Process-contention scope
B) System-contention scope
C) Dispatcher
D) Round-robin scheduling
19. With _______ a thread executes on a processor until a long-latency event (i.e. a
memory stall) occurs.
A) coarse-grained multithreading
B) fine-grained multithreading
C) virtualization
D) multicore processors
21. The ______ occurs in first-come-first-served scheduling when a process with a long
CPU burst occupies the CPU.
A) dispatch latency
B) waiting time
C) convoy effect
D) system-contention scope
22. The rate of a periodic task in a hard real-time system is ____, where p is a period and t
is the processing time.
A) 1/p
B) p/t
C) 1/t
D) pt
23. Which of the following is true of the rate-monotonic scheduling algorithm?
A) The task with the shortest period will have the lowest priority.
B) It uses a dynamic priority policy.
C) CPU utilization is bounded when using this algorithm.
D) It is non-preemptive.
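The bound referenced in choice C is the Liu and Layland result: N periodic tasks are guaranteed schedulable under rate-monotonic scheduling if total CPU utilization does not exceed N(2^(1/N) - 1). A sketch, using a hypothetical two-task set:

```python
def rm_utilization_bound(n):
    """Liu & Layland bound for rate-monotonic scheduling of n periodic tasks."""
    return n * (2 ** (1 / n) - 1)

def schedulable(tasks):
    """tasks: list of (processing_time, period) pairs; a periodic task's
    utilization is t / p.  Returns True if the bound guarantees the set."""
    u = sum(t / p for t, p in tasks)
    return u <= rm_utilization_bound(len(tasks))

# Hypothetical set: t=25,p=50 and t=35,p=80 gives U = 0.9375 > ~0.828,
# so the bound does not guarantee schedulability.
print(schedulable([(25, 50), (35, 80)]))
```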
25. The two general approaches to load balancing are __________ and ____________.
A) soft affinity, hard affinity
B) coarse grained, fine grained
C) soft real-time, hard real-time
D) push migration, pull migration
Essay
26. Distinguish between coarse-grained and fine-grained multithreading.
Ans: There are two approaches to multithread a processor. (1) Coarse-grained multithreading
allows a thread to run on a processor until a long-latency event, such as waiting for memory,
occurs. When a long-latency event does occur, the processor switches to another thread. (2)
Fine-grained multithreading switches between threads at a much finer granularity, such as
between instructions.
27. Explain the concept of a CPU–I/O burst cycle.
Ans: The lifecycle of a process can be considered to consist of a number of bursts belonging to
two different states. All processes consist of CPU cycles and I/O operations. Therefore, a process
can be modeled as switching between bursts of CPU execution and I/O wait.
28. What role does the dispatcher play in CPU scheduling?
Ans: The dispatcher gives control of the CPU to the process selected by the short-term
scheduler. To perform this task, a context switch, a switch to user mode, and a jump to the proper
location in the user program are all required. The dispatch should be made as fast as possible.
The time lost to the dispatcher is termed dispatch latency.
29. Explain the difference between response time and turnaround time. These times are
both used to measure the effectiveness of scheduling schemes.
Ans: Turnaround time is the sum of the periods a process spends waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O. Turnaround time
essentially measures the amount of time it takes to execute a process. Response time, on the
other hand, is a measure of the time that elapses between a request and the first response
produced.
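Both metrics can be read off a Gantt chart. A sketch for FCFS with hypothetical bursts of 24, 3, and 3 ms, all arriving at time 0 (under pure FCFS batch execution, response time coincides with the time a process first gets the CPU, i.e. its waiting time):

```python
def fcfs_times(bursts):
    """Given CPU bursts of processes arriving at time 0 in FCFS order,
    return (waiting, turnaround) time lists."""
    waiting, turnaround, clock = [], [], 0
    for b in bursts:
        waiting.append(clock)      # process starts when all earlier ones finish
        clock += b
        turnaround.append(clock)   # completion time, since arrival is at 0
    return waiting, turnaround

w, t = fcfs_times([24, 3, 3])
print(w, t)  # [0, 24, 27] [24, 27, 30]
```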
30. What effect does the size of the time quantum have on the performance of an RR
algorithm?
Ans: At one extreme, if the time quantum is extremely large, the RR policy is the same as the
FCFS policy. If the time quantum is extremely small, the RR approach is called processor
sharing and creates the appearance that each of n processes has its own processor running at 1/n
the speed of the real processor.
31. Explain the process of starvation and how aging can be used to prevent it.
Ans: Starvation occurs when a process is ready to run but is stuck waiting indefinitely for the
CPU. This can be caused, for example, when higher-priority processes prevent low-priority
processes from ever getting the CPU. Aging involves gradually increasing the priority of a
process so that a process will eventually achieve a high enough priority to execute if it waited for
a long enough period of time.
32. What is the difference between asymmetric multiprocessing and symmetric
multiprocessing (SMP)?
Ans: In asymmetric multiprocessing, all scheduling decisions, I/O, and other system activities
are handled by a single processor, whereas in SMP, each processor is self-scheduling.
33. Describe two general approaches to load balancing.
Ans: With push migration, a specific task periodically checks the load on each processor and,
if it finds an imbalance, evenly distributes the load by moving processes from overloaded to
idle or less-busy processors. Pull migration occurs when an idle processor pulls a waiting task
from a busy processor. Push and pull migration are often implemented in parallel on
load-balancing systems.
34. In Windows, how does the dispatcher determine the order of thread execution?
Ans: The dispatcher uses a 32-level priority scheme to determine the execution order. Priorities
are divided into two classes. The variable class contains threads having priorities from 1 to 15,
and the real-time class contains threads having priorities from 16 to 31. The dispatcher uses a
queue for each scheduling priority, and traverses the set of queues from highest to lowest until it
finds a thread that is ready to run. The dispatcher executes an idle thread if no ready thread is
found.
35. Describe deterministic modeling as a method for evaluating scheduling algorithms.
Ans: Deterministic modeling takes a particular predetermined workload and defines the
performance of each algorithm for that workload. Deterministic modeling is simple, fast, and
gives exact numbers for comparison of algorithms. However, it requires exact numbers for input,
and its answers apply only in those cases. The main uses of deterministic modeling are
describing scheduling algorithms and providing examples to indicate trends.
36. What are the two types of latency that affect the performance of real-time systems?
Ans: Interrupt latency refers to the period of time from the arrival of an interrupt at the CPU to
the start of the routine that services the interrupt. Dispatch latency refers to the amount of time
required for the scheduling dispatcher to stop one process and start another.
37. What are the advantages of the EDF scheduling algorithm over the rate-monotonic
scheduling algorithm?
Ans: Unlike the rate-monotonic algorithm, EDF scheduling does not require that processes be
periodic, nor must a process require a constant amount of CPU time per burst. The appeal of
EDF scheduling is that it is theoretically optimal: it can schedule processes so that each
process meets its deadline requirements and CPU utilization is 100 percent.
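The core of EDF is simply dispatching the ready task with the earliest absolute deadline. A minimal sketch (task names and deadlines are hypothetical):

```python
def edf_pick(ready):
    """ready: list of (name, absolute_deadline) pairs.
    EDF always runs the task whose deadline is earliest."""
    return min(ready, key=lambda task: task[1])[0]

# Hypothetical ready queue: P2 has the earliest deadline, so it runs next.
print(edf_pick([("P1", 50), ("P2", 30), ("P3", 75)]))  # -> P2
```

Because priorities are recomputed from deadlines as tasks arrive, EDF is a dynamic-priority policy, unlike rate-monotonic scheduling.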
True/False
38. In preemptive scheduling, the sections of code affected by interrupts must be guarded
from simultaneous use.
Ans: True
39. In RR scheduling, the time quantum should be small with respect to the
context-switch time.
Ans: False
40. The most complex scheduling algorithm is the multilevel feedback-queue algorithm.
Ans: True
41. Load balancing is typically only necessary on systems with a common run queue.
Ans: False
42. Systems using a one-to-one model (such as Windows, Solaris, and Linux) schedule
threads using process-contention scope (PCS).
Ans: False
43. Solaris and Windows assign higher-priority threads/tasks longer time quantums and
lower-priority tasks shorter time quantums.
Ans: False
44. A Solaris interactive thread with priority 15 has a higher relative priority than an
interactive thread with priority 20.
Ans: False
45. A Solaris interactive thread with a time quantum of 80 has a higher priority than an
interactive thread with a time quantum of 120.
Ans: True
46. SMP systems that use multicore processors typically run faster than SMP systems that
place each processor on separate cores.
Ans: True
47. Windows 7 User-mode scheduling (UMS) allows applications to create and manage
threads independently of the kernel.
Ans: True
Ans: True
49. Load balancing algorithms have no impact on the benefits of processor affinity.
Ans: False
50. A multicore system allows two (or more) threads that are in compute cycles to execute
at the same time.
Ans: True
Ans: False
Ans: True
53. In Pthread real-time scheduling, the SCHED_FIFO class provides time slicing among
threads of equal priority.
Ans: False
54. In the Linux CFS scheduler, the task with smallest value of vruntime is considered to
have the highest priority.
Ans: True
55. The length of a time quantum assigned by the Linux CFS scheduler is dependent upon
the relative priority of a task.
Ans: False
56. The Completely Fair Scheduler (CFS) is the default scheduler for Linux systems.
Ans: True
Chapter: Chapter 6
Multiple Choice
5. In Peterson's solution, the ____ variable indicates if a process is ready to enter its
critical section.
A) turn
B) lock
C) flag[i]
D) turn[i]
6. The first readers-writers problem ____.
A) requires that, once a writer is ready, that writer performs its write as soon as possible.
B) is not used to test synchronization primitives.
C) requires that no reader will be kept waiting unless a writer has already obtained permission
to use the shared database.
D) requires that no reader will be kept waiting unless a reader has already obtained permission
to use the shared database.
7. A ___ type presents a set of programmer-defined operations that are provided mutual
exclusion within it.
A) transaction
B) signal
C) binary
D) monitor
9. What is the correct order of operations for protecting a critical section using mutex
locks?
A) release() followed by acquire()
B) acquire() followed by release()
C) wait() followed by signal()
D) signal() followed by wait()
10. What is the correct order of operations for protecting a critical section using a binary
semaphore?
A) release() followed by acquire()
B) acquire() followed by release()
C) wait() followed by signal()
D) signal() followed by wait()
11. _____ is not a technique for handling critical sections in operating systems.
A) Nonpreemptive kernels
B) Preemptive kernels
C) Spinlocks
D) Peterson's solution
12. A solution to the critical section problem does not have to satisfy which of the following
requirements?
A) mutual exclusion
B) progress
C) atomicity
D) bounded waiting
14. _____ can be used to prevent busy waiting when implementing a semaphore.
A) Spinlocks
B) Waiting queues
C) Mutex lock
D) Allowing the wait() operation to succeed
15. Assume an adaptive mutex is used for accessing shared data on a Solaris system with
multiprocessing capabilities. Which of the following statements is not true?
A) A waiting thread may spin while waiting for the lock to become available.
B) A waiting thread may sleep while waiting for the lock to become available.
C) The adaptive mutex is only used to protect short segments of code.
D) Condition variables and semaphores are never used in place of an adaptive mutex.
16. What is the purpose of the mutex semaphore in the implementation of the
bounded-buffer problem using semaphores?
A) It indicates the number of empty slots in the buffer.
B) It indicates the number of occupied slots in the buffer.
C) It controls access to the shared buffer.
D) It ensures mutual exclusion.
17. How many philosophers may eat simultaneously in the Dining Philosophers problem
with 5 philosophers?
A) 1
B) 2
C) 3
D) 5
19. _____ is/are not a technique for managing critical sections in operating systems.
A) Peterson's solution
B) Preemptive kernel
C) Nonpreemptive kernel
D) Semaphores
20. When using semaphores, a process invokes the wait() operation before accessing its
critical section, followed by the signal() operation upon completion of its critical section.
Consider reversing the order of these two operations—first calling signal(), then calling
wait(). What would be a possible outcome of this?
A) Starvation is possible.
B) Several processes could be active in their critical sections at the same time.
C) Mutual exclusion is still assured.
D) Deadlock is possible.
Essay
25. What three conditions must be satisfied in order to solve the critical section problem?
Ans: In a solution to the critical section problem, no thread may be executing in its critical
section if another thread is currently executing in its critical section (mutual exclusion).
Furthermore, only those threads that are not executing in their critical sections can participate
in the decision on which process will enter its critical section next (progress). Finally, a bound
must exist on the number of times that other threads are allowed to enter their critical sections
after a thread has made a request to enter its critical section (bounded waiting).
26. Explain two general approaches to handle critical sections in operating systems.
Ans: Critical sections may use preemptive or nonpreemptive kernels. A preemptive kernel
allows a process to be preempted while it is running in kernel mode. A nonpreemptive kernel
does not allow a process running in kernel mode to be preempted; a kernel-mode process will run
until it exits kernel mode, blocks, or voluntarily yields control of the CPU. A nonpreemptive
kernel is essentially free from race conditions on kernel data structures, as only one process
is active in the kernel at a time.
27. Write two short methods that implement the simple semaphore wait() and signal()
operations on global variable S.
Ans:
wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}

signal(S) {
    S++;
}
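For comparison, the same wait()/signal() discipline can be expressed with Python's threading.Semaphore, whose acquire() and release() correspond to wait() and signal(). A sketch guarding a shared counter:

```python
import threading

S = threading.Semaphore(1)   # binary semaphore, initialized to 1
counter = 0

def worker():
    global counter
    for _ in range(10000):
        S.acquire()          # wait(S): blocks while the semaphore is 0
        counter += 1         # critical section
        S.release()          # signal(S)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: every increment was mutually exclusive
```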
28. Explain the difference between the first readers–writers problem and the second
readers–writers problem.
Ans: The first readers–writers problem requires that no reader will be kept waiting unless a
writer has already obtained permission to use the shared database; whereas the second
readers–writers problem requires that once a writer is ready, that writer performs its write as
soon as possible.
29. Describe the dining-philosophers problem and how it relates to operating systems.
Ans: The scenario involves five philosophers sitting at a round table with a bowl of food and
five chopsticks. Each chopstick sits between two adjacent philosophers. The philosophers are
allowed to think and eat. Since two chopsticks are required for each philosopher to eat, and only
five chopsticks exist at the table, no two adjacent philosophers may be eating at the same time. A
scheduling problem arises as to who gets to eat at what time. This problem is similar to the
problem of scheduling processes that require a limited number of resources.
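One classic deadlock-free solution (resource ordering, not the only approach) has each philosopher pick up the lower-numbered chopstick first, which breaks the circular-wait condition. A sketch using Python locks as chopsticks:

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=100):
    # Acquire the lower-numbered chopstick first; this global ordering
    # makes a circular wait, and hence deadlock, impossible.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1          # "eat"

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher finishes all rounds
```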
30. What is the difference between software transactional memory and hardware
transactional memory?
Ans: Software transactional memory (STM) implements transactional memory entirely in
software, with no special hardware required; the transaction code is instrumented by a compiler
or library inserting operations before and after shared reads and writes. Hardware transactional
memory (HTM) instead relies on hardware cache hierarchies and cache-coherency protocols to
manage and resolve conflicts over shared data residing in the caches of separate processors, so
no special code instrumentation is needed.
31. Consider the following code protecting a critical section with a mutex lock:
void update()
{
mutex.acquire();
/* modify shared data */
mutex.release();
}
Describe how transactional memory could serve as an alternative.
Ans: The body of update() could be written as an atomic transaction, atomic { /* modify
shared data */ }, leaving the system rather than the developer responsible for guaranteeing
atomicity.
32. Describe how Solaris uses turnstiles.
Ans: Solaris uses turnstiles to order the list of threads waiting to acquire either an adaptive
mutex or a reader–writer lock. The turnstile is a queue structure containing threads blocked on a
lock. Each synchronized object with at least one thread blocked on the object's lock requires a
separate turnstile. However, rather than associating a turnstile with each synchronized object,
Solaris gives each kernel thread its own turnstile.
33. Explain the role of the variable preempt_count in the Linux kernel.
Ans: Each task maintains a value preempt_count which is the number of locks held by each
task. When a lock is acquired, preempt_count is incremented. When a lock is released,
preempt_count is decremented. If the task currently running in the kernel has a value of
preempt_count > 0, the kernel cannot be preempted as the task currently holds a lock.
If the count is zero, the kernel can be preempted.
34. Define the Solaris adaptive mutex and describe its behavior.
Ans: An adaptive mutex is used in the Solaris operating system to protect access to shared data.
On a multiprocessor system, an adaptive mutex acts as a standard semaphore implemented as a
spinlock. If the shared data being accessed is already locked and the thread holding that lock is
running on another CPU, the thread spins while waiting for the lock to be released, and the data
to become available. If the thread holding the lock is not in the run state, the waiting thread
sleeps until the lock becomes available. On a single processor system, spinlocks are not used and
the waiting thread always sleeps until the lock becomes available.
35. Describe a scenario when using a reader–writer lock is more appropriate than another
synchronization tool such as a semaphore.
Ans: A tool such as a semaphore only allows one process to access shared data at a time.
Reader–writer locks are useful when it is easy to distinguish if a process is only reading or
reading/writing shared data. If a process is only reading shared data, it can access the shared data
concurrently with other readers. In the case when there are several readers, a reader–writer lock
may be much more efficient.
36. Explain how Linux manages race conditions on single-processor systems such as
embedded devices.
Ans: On multiprocessor machines, Linux uses spin locks to manage race conditions. However, as
spin locks are not appropriate on single-processor machines, Linux instead disables kernel
preemption while acquiring a spin lock and re-enables it after releasing the spin lock.
True/False
37. Race conditions are prevented by requiring that critical regions be protected by locks.
Ans: True
38. The value of a counting semaphore can range only between 0 and 1.
Ans: False
Ans: False
40. The local variables of a monitor can be accessed by only the local procedures.
Ans: True
Ans: True
42. Monitors are a theoretical concept and are not practiced in modern programming
languages
Ans: False
43. A thread will immediately acquire a dispatcher lock that is in the signaled state.
Ans: True
44. Mutex locks and counting semaphores are essentially the same thing.
Ans: False
45. Mutex locks and binary semaphores are essentially the same thing.
Ans: True
46. A nonpreemptive kernel is safe from race conditions on kernel data structures.
Ans: True
47. Linux mostly uses atomic integers to manage race conditions within the kernel.
Ans: False
Chapter: Chapter 7
Multiple Choice
2. One necessary condition for deadlock is ____, which states that at least one resource
must be held in a nonsharable mode.
A) hold and wait
B) mutual exclusion
C) circular wait
D) no preemption
3. One necessary condition for deadlock is ______, which states that a process must be
holding one resource and waiting to acquire additional resources.
A) hold and wait
B) mutual exclusion
C) circular wait
D) no preemption
4. One necessary condition for deadlock is ______, which states that a resource can be
released only voluntarily by the process holding the resource.
A) hold and wait
B) mutual exclusion
C) circular wait
D) no preemption
5. One necessary condition for deadlock is ______, which states that there is a chain of
waiting processes whereby P0 is waiting for a resource held by P1, P1 is waiting for a
resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a
resource held by P0.
A) hold and wait
B) mutual exclusion
C) circular wait
D) no preemption
6. The witness software product is a ____.
A) lock-order verifier that uses mutual-exclusion locks to protect critical sections
B) modeler to develop resource allocation graphs
C) driver that can be used to prevent mutual exclusion for nonsharable resources
D) implementation of the banker's algorithm available for most operating systems
12. Suppose that there are 12 resources available to three processes. At time 0, the
following data is collected. The table indicates the process, the maximum number of
resources needed by the process, and the number of resources currently owned by each
process. Which of the following correctly characterizes this state?
13. Which of the following data structures in the banker's algorithm is a vector of length
m, where m is the number of resource types?
A) Need
B) Allocation
C) Max
D) Available
14. Assume there are three resources, R1, R2, and R3, that are each assigned unique
integer values 15, 10, and 25, respectively. What is a resource ordering which prevents a
circular wait?
A) R1, R2, R3
B) R3, R2, R1
C) R3, R1, R2
D) R2, R1, R3
15. A _____ could be preempted from a process.
A) mutex lock
B) CPU
C) semaphore
D) file lock
Essay
16. Explain what has to happen for a set of processes to achieve a deadlocked state.
Ans: For a set of processes to exist in a deadlocked state, every process in the set must be
waiting for an event that can be caused only by another process in the set. Thus, the processes
cannot ever exit this state without manual intervention.
17. Describe the four conditions that must hold simultaneously in a system if a deadlock is
to occur.
Ans: For a set of processes to be deadlocked: at least one resource must remain in a
nonsharable mode, a process must hold at least one resource and be waiting to acquire additional
resources held by other processes, resources in the system cannot be preempted, and a circular
wait has to exist between processes.
18. What are the three general ways that a deadlock can be handled?
Ans: A deadlock can be prevented by using protocols to ensure that a deadlock will never occur.
A system may allow a deadlock to occur, detect it, and recover from it. Lastly, an operating
system may just ignore the problem and pretend that deadlocks can never occur.
19. What is the difference between deadlock prevention and deadlock avoidance?
Ans: Deadlock prevention is a set of methods for ensuring that at least one of the necessary
conditions for deadlock cannot hold. Deadlock avoidance requires that the operating system be
given, in advance, additional information concerning which resources a process will request and
use during its lifetime.
20. Describe two protocols to ensure that the hold-and-wait condition never occurs in a
system.
Ans: One protocol requires each process to request and be allocated all its resources before it
begins execution. We can implement this provision by requiring that system calls requesting
resources for a process precede all other system calls. An alternative protocol allows a process to
request resources only when it has none. A process may request some resources and use them.
Before it can request any additional resources, however, it must release all the resources that it is
currently allocated.
21. What is one way to ensure that a circular-wait condition does not occur?
Ans: One way to ensure that this condition never holds is to impose a total ordering of all
resource types, and to require that each process requests resources in an increasing order of
enumeration. This can be accomplished by assigning each resource type a unique integer number
to determine whether one precedes another in the ordering.
22. What is a claim edge in a resource-allocation graph?
Ans: A claim edge indicates that a process may request a resource at some time in the future.
This edge resembles a request edge in direction, but is represented in the graph by a dashed line.
23. Describe a wait-for graph and how it is used to detect deadlock.
Ans: If all resources have only a single instance, then we can define a deadlock-detection
algorithm that uses a variant of the resource-allocation graph, called a wait-for graph. We obtain
this graph from the resource-allocation graph by removing the resource nodes and collapsing the
appropriate edges. To detect deadlocks, the system needs to maintain the wait-for graph and
periodically invoke an algorithm that searches for a cycle in the graph.
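Searching the wait-for graph for a cycle is a depth-first search. A sketch, with hypothetical process names:

```python
def has_deadlock(wait_for):
    """wait_for: dict mapping each process to the list of processes it is
    waiting on.  A cycle in this wait-for graph indicates deadlock."""
    def dfs(node, stack, visited):
        visited.add(node)
        stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in stack:                  # back edge: cycle found
                return True
            if nxt not in visited and dfs(nxt, stack, visited):
                return True
        stack.discard(node)
        return False

    visited = set()
    return any(dfs(p, set(), visited) for p in wait_for if p not in visited)

print(has_deadlock({"P0": ["P1"], "P1": ["P2"], "P2": ["P0"]}))  # cycle: True
print(has_deadlock({"P0": ["P1"], "P1": ["P2"]}))                # no cycle: False
```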
24. What factors influence the decision of when to invoke a detection algorithm?
Ans: The first factor is how often a deadlock is likely to occur; if deadlocks occur frequently,
the detection algorithm should be invoked frequently. The second factor is how many processes
will be affected by deadlock when it happens; if the deadlock-detection algorithm is invoked for
every resource request, a considerable overhead in computation time will be incurred.
25. Describe two methods for eliminating processes by aborting a process.
Ans: The first method is to abort all deadlocked processes. Aborting all deadlocked processes
will clearly break the deadlock cycle; however, the deadlocked processes may have to be
computed for a long time, and results of these partial computations must be discarded and will
probably have to be recomputed later. The second method is to abort one process at a time until
the deadlock cycle is eliminated. Aborting one process at a time incurs considerable overhead,
since, after each process is aborted, a deadlock-detection algorithm must be invoked to determine
whether any processes are still deadlocked.
26. Name three issues that need to be addressed if a preemption is required to deal with
deadlocks.
Ans: First, the order of resources and processes that need to be preempted must be determined
to minimize cost. Second, if a resource is preempted from a process, the process must be rolled
back to some safe state and restarted from that state. The simplest solution is a total rollback.
Finally, we must ensure that starvation does not occur from always preempting resources from
the same process.
27. Describe a safe state.
Ans: A safe state is one in which there exists a sequence in which all processes can finish
their execution.
Deadlock is not possible while the system is in a safe state. However, if a system goes from a
safe state to an unsafe state, deadlock is possible. One technique for avoiding deadlock is to
ensure that the system always stays in a safe state. This can be done by only assigning a resource
as long as it maintains the system in a safe state.
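The safety check below follows the banker's-algorithm idea: repeatedly find a process whose remaining need fits in the available resources, let it finish, and reclaim its allocation. The single-resource snapshot used here is hypothetical:

```python
def is_safe(available, allocation, need):
    """Banker's safety check: True if some ordering lets every process finish.
    available: resource vector; allocation, need: one vector per process."""
    work = list(available)
    finish = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion; reclaim its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
    return all(finish)

# Hypothetical snapshot: one resource type, 3 instances free.
allocation = [[5], [2], [2]]   # currently held by P0, P1, P2
need       = [[5], [2], [7]]   # max demand minus allocation
print(is_safe([3], allocation, need))  # safe: P1, then P0, then P2 can finish
```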
True/False
28. The circular-wait condition for a deadlock implies the hold-and-wait condition.
Ans: True
29. If a resource-allocation graph has a cycle, the system must be in a deadlocked state.
Ans: False
Ans: False
31. The wait-for graph scheme is not applicable to a resource allocation system with
multiple instances of each resource type.
Ans: True
32. Ordering resources and requiring the resources to be acquired in order prevents the
circular wait from occurring and therefore prevents deadlock from occurring.
Ans: False
33. The banker's algorithm is useful in a system with multiple instances of each resource
type.
Ans: True
Ans: False
35. Deadlock prevention and deadlock avoidance are essentially the same approaches for
handling deadlock.
Ans: False
Chapter: Chapter 8
Multiple Choice
2. _____ is the method of binding instructions and data to memory performed by most
general-purpose operating systems.
A) Interrupt binding
B) Compile time binding
C) Execution time binding
D) Load-time binding
4. Suppose a program is operating with execution-time binding and the physical address
generated is 300. The relocation register is set to 100. What is the corresponding logical
address?
A) 199
B) 201
C) 200
D) 300
9. _____ is the dynamic storage-allocation algorithm which results in the smallest leftover
hole in memory.
A) First fit
B) Best fit
C) Worst fit
D) None of the above
10. _____ is the dynamic storage-allocation algorithm which results in the largest leftover
hole in memory.
A) First fit
B) Best fit
C) Worst fit
D) None of the above
13. Consider a logical address with a page size of 8 KB. How many bits must be used to
represent the page offset in the logical address?
A) 10
B) 8
C) 13
D) 12
14. Consider a logical address with 18 bits used to represent an entry in a conventional
page table. How many entries are in the conventional page table?
A) 262144
B) 1024
C) 1048576
D) 18
15. Assume a system has a TLB hit ratio of 90%. It requires 15 nanoseconds to access the
TLB, and 85 nanoseconds to access main memory. What is the effective memory access
time in nanoseconds for this system?
A) 108.5
B) 100
C) 22
D) 176.5
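The effective access time weighs the hit and miss paths by the hit ratio: a hit costs one TLB lookup plus one memory access, while a miss costs the TLB lookup plus two memory accesses (one for the page-table entry, one for the byte). A sketch of the computation:

```python
def effective_access_time(hit_ratio, tlb_ns, mem_ns):
    """EMAT = hit_ratio * (TLB + memory) + miss_ratio * (TLB + 2 * memory)."""
    hit = tlb_ns + mem_ns
    miss = tlb_ns + 2 * mem_ns
    return hit_ratio * hit + (1 - hit_ratio) * miss

# 90% hit ratio, 15 ns TLB, 85 ns memory: 0.9*100 + 0.1*185 = 108.5 ns (choice A)
print(effective_access_time(0.90, 15, 85))
```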
16. Given the logical address 0xAEF9 (in hexadecimal) with a page size of 256 bytes, what
is the page number?
A) 0xAE
B) 0xF9
C) 0xA
D) 0x00F9
17. Given the logical address 0xAEF9 (in hexadecimal) with a page size of 256 bytes, what
is the page offset?
A) 0xAE
B) 0xF9
C) 0xA
D) 0xF900
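With a 256-byte page size, the offset is the low 8 bits of the address and the page number is the remaining high bits. A sketch:

```python
PAGE_SIZE = 256          # 2^8, so the offset occupies the low 8 bits

def split(logical_address):
    """Return (page_number, offset) for the given logical address."""
    return logical_address // PAGE_SIZE, logical_address % PAGE_SIZE

page, offset = split(0xAEF9)
print(hex(page), hex(offset))   # 0xae 0xf9
```

This confirms the answers to the two questions above: page number 0xAE, offset 0xF9.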
18. Consider a 32-bit address for a two-level paging system with an 8 KB page size. The
outer page table has 1024 entries. How many bits are used to represent the second-level
page table?
A) 10
B) 8
C) 12
D) 9
20. Which of the following data structures is appropriate for placing into its own segment?
A) heap
B) kernel code and data
C) user code and data
D) all of the above
21. Assume the value of the base and limit registers are 1200 and 350 respectively. Which of
the following addresses is legal?
A) 355
B) 1200
C) 1551
D) all of the above
22. A(n) ______ matches the process with each entry in the TLB.
A) address-space identifier
B) process id
C) stack
D) page number
23. Which of the following statements are true with respect to hashed page tables?
A) They only work for sparse address spaces.
B) The virtual address is used to hash into the hash table.
C) A common approach for handling address spaces larger than 32 bits.
D) Hash table collisions do not occur because of the importance of paging.
24. Which of the following statements regarding the ARM architecture are false?
A) There are essentially four different page sizes, ranging from 4-KB to 16-MB in size.
B) There are two different levels of TLB.
C) One or two level paging may be used.
D) The micro TLB must be flushed at each context switch.
25. Which of the following is not a reason explaining why mobile devices generally do not
support swapping?
A) Limited space constraints of flash memory.
B) Small size of mobile applications do not require use of swap space.
C) Limited number of writes of flash memory.
D) Poor throughput between main memory and flash memory.
Essay
26. What are the benefits of dynamic loading?
Ans: With dynamic loading a program does not have to be stored, in its entirety, in main
memory. This allows the system to obtain better memory-space utilization. This also allows
unused routines to stay out of main memory so that memory can be used more effectively. For
example, code used to handle an obscure error would not always use up main memory.
27. What is the context switch time, associated with swapping, if a disk drive with a
transfer rate of 2 MB/s is used to swap out part of a program that is 200 KB in size?
Assume that no seeks are necessary and that the average latency is 15 ms. The time should
reflect only the amount of time necessary to swap out the process.
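The arithmetic for question 27 can be sketched as follows, assuming 1 MB = 1024 KB (with 1 MB = 1000 KB the transfer time is exactly 100 ms and the total is 115 ms):

```python
size_kb = 200
rate_kb_per_s = 2 * 1024   # 2 MB/s
latency_ms = 15

transfer_ms = size_kb / rate_kb_per_s * 1000   # ~97.66 ms to move 200 KB
total_ms = latency_ms + transfer_ms            # ~112.66 ms swap-out time
print(total_ms)
```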
28. Explain the basic method for implementing paging.
Ans: Physical memory is broken up into fixed-sized blocks called frames while logical memory
is broken up into equal-sized blocks called pages. Whenever the CPU generates a logical address,
the page number and offset into that page is used, in conjunction with a page table, to map the
request to a location in physical memory.
31. Describe how a translation look-aside buffer (TLB) assists in the translation of a
logical address to a physical address.
Ans: Typically, large page tables are stored in main memory, and a page-table base register
points to the page table. Therefore, two memory accesses are needed to access a byte
(one for the page-table entry, one for the byte), causing memory access to be slowed by a factor
of 2. The standard solution to this problem is to use a TLB, a special, small fast-lookup
hardware cache. The TLB is associative, high speed memory. Each entry consists of a key and
value. An item is compared with all keys simultaneously, and if the item is found, the
corresponding value is returned.
32. How are illegal page addresses recognized and trapped by the operating system?
Ans: Illegal addresses are trapped by the use of a valid-invalid bit, which is generally attached
to each entry in the page table. When this bit is set to "valid," the associated page is in the
process's logical address space and is thus a legal (or valid) page. When the bit is set to "invalid,"
the page is not in the process's logical address space. The operating system sets this bit for each
page to allow or disallow access to the page.
33. Describe the elements of a hashed page table.
Ans: A hashed page table contains hash values which correspond to a virtual page number.
Each entry in the hash table contains a linked list of elements that hash to the same location (to
handle collisions). Each element consists of three fields: (1) the virtual page number, (2) the
value of the mapped page frame, and (3) a pointer to the next element in the linked list.
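The three fields described above can be sketched as a toy hashed page table; here the linked list of colliding elements is modeled with a Python list per bucket rather than an explicit next pointer:

```python
class HashedPageTable:
    """Toy hashed page table: each bucket chains (vpn, frame) pairs."""

    def __init__(self, buckets=16):
        self.table = [[] for _ in range(buckets)]

    def insert(self, vpn, frame):
        # elements that hash to the same bucket are chained together
        self.table[hash(vpn) % len(self.table)].append((vpn, frame))

    def lookup(self, vpn):
        # walk the chain, comparing virtual page numbers
        for v, f in self.table[hash(vpn) % len(self.table)]:
            if v == vpn:
                return f
        return None

hpt = HashedPageTable()
hpt.insert(0xAE, 42)
print(hpt.lookup(0xAE))  # 42
```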
34. Briefly describe the segmentation memory management scheme. How does it differ
from the paging memory management scheme in terms of the user's view of memory?
Ans: Segmentation views a logical address as a collection of segments. Each segment has a
name and length. The addresses specify both the segment name and the offset within the segment.
The user therefore specifies each address by two quantities: a segment name and an offset. In
contrast, in a paging scheme, the user specifies a single address, which is partitioned by the
hardware into a page number and an offset, all invisible to the programmer.
35. Describe the partitions in the logical address space of a process on the IA-32 architecture.
Ans: The logical-address space is divided into two partitions. The first partition consists of up
to 8 K segments that are private to that process. The second partition consists of up to 8 K
segments that are shared among all the processes. Information about the first partition is kept in
the local descriptor table (LDT); information about the second partition is kept in the global
descriptor table (GDT).
36. Describe how the relocation and limit registers are used to protect memory.
Ans: When the CPU is executing a process, it generates a logical memory address that is added
to a relocation register in order to arrive at the physical memory address actually used by main
memory. A limit register holds the maximum logical address that the CPU should be able to
access. If any logical address is greater than or equal to the value in the limit register, then the
logical address is a dangerous address and an error results.
37. Using Figure 8.14, describe how a logical address is translated to a physical address.
Ans: A logical address is generated by the CPU. This logical address consists of a page number
and offset. The TLB is first checked to see if the page number is present. If so, a TLB hit, the
corresponding page frame is extracted from the TLB, thus producing the physical address. In the
case of a TLB miss, the page table must be searched according to page number for the
corresponding page frame.
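The translation path described in the answer to question 37 can be sketched with dictionaries standing in for the TLB and page table (the function name and the 4 KB page size are illustrative assumptions):

```python
def translate(addr, tlb, page_table, page_size=4096):
    page, offset = divmod(addr, page_size)
    if page in tlb:                 # TLB hit: frame comes straight from the TLB
        frame = tlb[page]
    else:                           # TLB miss: walk the page table instead
        frame = page_table[page]
        tlb[page] = frame           # cache the mapping for later references
    return frame * page_size + offset

tlb = {}
page_table = {2: 7}
physical = translate(2 * 4096 + 100, tlb, page_table)
print(physical)  # 7 * 4096 + 100 = 28772
```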
38. Explain why mobile operating systems generally do not support swapping.
Ans: Mobile operating systems typically do not support swapping because file systems are
typically employed using flash memory instead of magnetic hard disks. Flash memory is
typically limited in size as well as having poor throughput between flash and main memory.
Additionally, flash memory can only tolerate a limited number of writes before it becomes less
reliable.
39. Using Figure 8.26, describe how address translation is performed on ARM
architectures.
Ans: ARM supports four different page sizes: the 4-KB and 16-KB page sizes use two-level
paging, while the larger 1-MB and 16-MB page sizes use single-level paging. The ARM
architecture uses two levels of TLBs: at the outer level is the micro TLB, which is in fact
separate TLBs for data and instructions. At the inner level is a single main TLB. Address
translation begins by first searching the micro TLB; in the case of a TLB miss, the main TLB is
then checked. If the reference is not in the main TLB, the page table must then be consulted.
True/False
40. A relocation register is used to check for invalid memory addresses generated by a
CPU.
Ans: False
42. There is a 1:1 correspondence between the number of entries in the TLB and the
number of entries in the page table.
Ans: False
43. The ARM architecture uses both single-level and two-level paging.
Ans: True
44. Fragmentation does not occur in a paging system.
Ans: False
45. Hashed page tables are particularly useful for processes with sparse address spaces.
Ans: True
46. Inverted page tables require each process to have its own page table.
Ans: False
47. Without a mechanism such as an address-space identifier, the TLB must be flushed
during a context switch.
Ans: True
48. A 32-bit logical address with 8 KB page size will have 1,000,000 entries in a
conventional page table.
Ans: False
49. Hashed page tables are commonly used when handling addresses larger than 32 bits.
Ans: True
50. The x86-64 bit architecture only uses 48 of the 64 possible bits for representing virtual
address space.
Ans: True
Chapter: Chapter 9
Multiple Choice
A) 14
B) 13
C) 8
D) 10
10. In the enhanced second chance algorithm, which of the following ordered pairs
represents a page that would be the best choice for replacement?
A) (0,0)
B) (0,1)
C) (1,0)
D) (1,1)
11. The _____ allocation algorithm allocates available memory to each process according
to its size.
A) equal
B) global
C) proportional
D) slab
12. The ____ is the number of entries in the TLB multiplied by the page size.
A) TLB cache
B) page resolution
C) TLB reach
D) hit ratio
13. ________ allows the parent and child processes to initially share the same pages, but
when either process modifies a page, a copy of the shared page is created.
A) copy-on-write
B) zero-fill-on-demand
C) memory-mapped
D) virtual memory fork
15. _____ occurs when a process spends more time paging than executing.
A) Thrashing
B) Memory-mapping
C) Demand paging
D) Swapping
18. What size segment will be allocated for a 39 KB request on a system using the Buddy
system for kernel memory allocation?
A) 39 KB
B) 42 KB
C) 64 KB
D) None of the above
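The buddy system rounds each request up to the next power of two, which is how question 18 resolves; a minimal sketch:

```python
def buddy_alloc_size(request):
    """Round a request up to the next power-of-2 segment size."""
    size = 1
    while size < request:
        size *= 2
    return size

print(buddy_alloc_size(39 * 1024) // 1024)  # 64, so answer C
```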
19. Which of the following statements is false with regard to allocating kernel memory?
A) Slab allocation does not suffer from fragmentation.
B) Adjacent segments can be combined into one larger segment with the buddy system.
C) Because the kernel requests memory of varying sizes, some of which may be quite small, the
system does not have to be concerned about wasting memory.
D) The slab allocator allows memory requests to be satisfied very quickly.
21. ______ allows a portion of a virtual address space to be logically associated with a file.
A) Memory-mapping
B) Shared memory
C) Slab allocation
D) Locality of reference
22. Systems in which memory access times vary significantly are known as __________.
A) memory-mapped I/O
B) demand-paged memory
C) non-uniform memory access
D) copy-on-write memory
23. Which of the following is considered a benefit when using the slab allocator?
A) Memory is allocated using a simple power-of-2 allocator.
B) It allows kernel code and data to be efficiently paged.
C) It allows larger segments to be combined using coalescing.
D) There is no memory fragmentation.
Essay
24. Explain the distinction between a demand-paging system and a paging system with
swapping.
Ans: A demand-paging system is similar to a paging system with swapping where processes
reside in secondary memory. With demand paging, when a process is executed, it is swapped
into memory. Rather than swapping the entire process into memory, however, a lazy swapper is
used. A lazy swapper never swaps a page into memory unless that page will be needed. Thus, a
paging system with swapping manipulates entire processes, whereas a demand pager is
concerned with the individual pages of a process.
25. Explain the sequence of events that happens when a page-fault occurs.
Ans: A page fault occurs when a process references a page that has not been brought into memory.
First, the memory reference is checked for validity. In the case of an invalid request, the program
will be terminated. If the request was valid, a free frame is located. A disk operation is then
scheduled to read the page into the frame just found, update the page table, restart the instruction
that was interrupted because of the page fault, and use the page accordingly.
26. How is the effective access time computed for a demand-paged memory system?
Ans: In order to compute the effective access time, it is necessary to know the average memory
access time of the system, the probability of a page fault, and the time necessary to service a
page fault. The effective access time can then be computed using the formula:
effective access time = (1 – probability of page fault) * memory access time + probability of
page fault * page fault time.
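The formula above translates directly into code; the numbers in the example call are illustrative, not taken from the question:

```python
def effective_access_time(p_fault, mem_ns, fault_service_ns):
    # (1 - p) * memory access time + p * page-fault service time
    return (1 - p_fault) * mem_ns + p_fault * fault_service_ns

# e.g. one fault per 1000 accesses, 100 ns memory access, 8 ms fault service
eat = effective_access_time(0.001, 100, 8_000_000)
print(eat)  # ~8099.9 ns: even a tiny fault rate dominates the access time
```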
27. How does the second-chance algorithm for page replacement differ from the FIFO
page replacement algorithm?
Ans: The second-chance algorithm is based on the FIFO replacement algorithm and even
degenerates to FIFO in its worst-case scenario. In the second-chance algorithm, a FIFO
replacement is implemented along with a reference bit. If the reference bit is set, then it is
cleared, the page's arrival time is set to the current time, and the program moves along in a
similar fashion through the pages until a page with a cleared reference bit is found and
subsequently replaced.
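The victim-selection loop described above can be sketched as follows; the FIFO queue and reference bits are toy stand-ins for the real frame data structures:

```python
from collections import deque

def second_chance(ref_bits, queue):
    """Pick a victim: skip pages whose reference bit is set, clearing it."""
    while True:
        page = queue.popleft()
        if ref_bits.get(page):
            ref_bits[page] = 0   # give the page a second chance
            queue.append(page)   # treat it as if it had just arrived
        else:
            return page          # cleared bit: this page is replaced

q = deque(["A", "B", "C"])
bits = {"A": 1, "B": 0, "C": 1}
victim = second_chance(bits, q)
print(victim)  # "B": A's bit is cleared and A is requeued, then B is chosen
```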
28. Explain the concept behind prepaging.
Ans: Paging schemes, such as pure demand paging, result in large amounts of initial page faults
as the process is started. Prepaging is an attempt to prevent this high level of initial paging by
bringing into memory, at one time, all of the pages that will be needed by the process.
29. Why doesn't a local replacement algorithm solve the problem of thrashing entirely?
Ans: With local replacement, if one process starts thrashing, it cannot steal frames from another
process and cause the latter to thrash as well. However, if processes are thrashing, they will be in
the queue for the paging device most of the time. The average service time for a page fault will
increase because of the longer average queue for the paging device. Thus, the effective access
time will increase, even for a process that is not thrashing.
30. Explain the difference between programmed I/O (PIO) and interrupt driven I/O.
Ans: To send out a long string of bytes through a memory-mapped serial port, the CPU writes
one data byte to the data register to signal that it is ready for the next byte. If the CPU uses
polling to watch the control bit, constantly looping to see whether the device is ready, this
method of operation is called programmed I/O. If the CPU does not poll the control bit, but
instead receives an interrupt when the device is ready for the next byte, the data transfer is said to
be interrupt driven.
31. What are the benefits of using slab allocation to allocate kernel memory?
Ans: The slab allocator provides two main benefits. First, no memory is wasted due to
fragmentation. When the kernel requests memory for an object, the slab allocator returns the
exact amount of memory required to represent the object. Second, memory requests can be
satisfied quickly. Objects are created in advance and can be quickly allocated. Also, released
objects are returned to the cache and marked as free, thus making them immediately available for
subsequent requests.
32. Explain the purpose of a lock bit and how it is used during I/O.
Ans: A lock bit is associated with every frame. If a frame is locked, it cannot be selected for
replacement. To write a block on tape, we lock into memory the pages containing the block. The
system then continues as usual with other processes if the I/O request is in a queue for that I/O
device. This avoids the replacement of the pages for other processes and the possible
unavailability of those pages when the I/O request advances to the head of the device queue.
When the I/O is complete, the pages are unlocked.
33. Explain how copy-on-write operates.
Ans: Copy-on-write (COW) initially allows a parent and child process to share the same pages.
As long as either process is only reading—and not modifying—the shared pages, both processes
can share the same pages, thus increasing system efficiency. However, as soon as either process
modifies a shared page, a copy of that shared page is created, thus providing each process with
its own private page. For example, assume an integer X whose value is 5 is in a shared page
marked as COW. The parent process then proceeds to modify X, changing its value to 10. Since
this page is marked as COW, a copy of the page is created for the parent process, which changes
the value of X to 10. The value of X remains at 5 for the child process.
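The X = 5 example above can be mimicked with a toy model; real COW happens at the hardware page level, so this sketch only illustrates the "copy on first write" behavior:

```python
class COWPage:
    """Toy page: parent and child share the object until one writes."""
    def __init__(self, data):
        self.data = data

parent = {"page0": COWPage([5])}
child = dict(parent)                 # both address spaces map the same page
assert parent["page0"] is child["page0"]

def write(space, name, index, value):
    # copy the page before modifying, so the other process keeps the original
    space[name] = COWPage(list(space[name].data))
    space[name].data[index] = value

write(parent, "page0", 0, 10)        # parent now has a private copy
print(parent["page0"].data, child["page0"].data)  # [10] [5]
```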
34. Explain the distinction between global allocation versus local allocation.
Ans: When a process incurs a page fault, it must be allocated a new frame for bringing the
faulting page into memory. The two general strategies for allocating a new frame are global and
local allocation policies. In a global allocation scheme, a frame is allocated from any process in
the system. Thus, if process A incurs a page fault, it may be allocated a page from process B.
The page that is selected from process B may be based upon any of the page replacement
algorithms such as LRU. Alternatively, a local allocation policy dictates that when a process
incurs a page fault, it must select one of its own pages for replacement when allocating a new
page.
35. What is TLB reach, and how can it be increased?
Ans: TLB reach refers to the amount of memory accessible from the TLB and is the page size
multiplied by the number of entries in the TLB. Two possible approaches for increasing TLB
reach are (1) increasing the number of entries in the TLB, and (2) increasing the page size.
Increasing the number of entries in the TLB is a costly strategy as the TLB consists of
associative memory, which is both costly and power hungry. For example, by doubling the
number of entries in the TLB, the TLB reach is doubled. However, increasing the page size (or
providing multiple page sizes) allows system designers to maintain the size of the TLB, and yet
significantly increase the TLB reach. For this reason, recent trends have moved towards
increasing page sizes for increasing TLB reach.
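The definition above is a one-line product, which makes the two strategies easy to compare; the 64-entry, 4 KB baseline is an illustrative assumption:

```python
def tlb_reach(entries, page_size):
    # memory accessible from the TLB = entries * page size
    return entries * page_size

base = tlb_reach(64, 4096)   # 256 KB baseline
# doubling either factor doubles the reach
print(tlb_reach(128, 4096) == 2 * base, tlb_reach(64, 8192) == 2 * base)
```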
36. What is a sparse address space, and why is it beneficial?
Ans: Virtual address spaces that include holes between the heap and stack are known as sparse
address spaces. Using a sparse address space is beneficial because the holes can be filled as the
stack or heap segments grow, or when we wish to dynamically link libraries (or possibly other
shared objects) during program execution.
37. Explain the usefulness of a modify bit.
Ans: A modify bit is associated with each page frame. If a frame is modified (i.e. written), the
modify bit is then set. The modify bit is useful when a page is selected for replacement. If the bit
is not set (the page was not modified), the page does not need to be written to disk. If the modify
bit is set, the page needs to be written to disk when selected for replacement.
True/False
40. If the page-fault rate is too high, the process may have too many frames.
Ans: False
41. The buddy system for allocating kernel memory is very likely to cause fragmentation
within the allocated segments.
Ans: True
42. On a system with demand-paging, a process will experience a high page fault rate when
the process begins execution.
Ans: True
43. On systems that provide it, vfork() should always be used instead of fork().
Ans: False
44. Only a fraction of a process's working set needs to be stored in the TLB.
Ans: False
45. Solaris uses both a local and global page replacement policy.
Ans: False
46. Windows uses both a local and global page replacement policy.
Ans: False
48. Non-uniform memory access has little effect on the performance of a virtual memory
system.
Ans: False
1. In a time-sharing operating system, the process moves from the running
state to the ______ when the time slot given to a process is completed.
a. suspended state
b. ready state
c. blocked state
d. terminated state
2. A ______ is the only state transition that is started by the user process
itself.
a. dispatch
b. none of the mentioned
c. block
d. wakeup
4. If the child process finishes execution and the parent is still in execution
state, then the child process is known as ______.
a. orphan
b. zombie
c. body
d. dead
10. Only one process runs at a time with ______; all other processes wait for the
processor in the meantime. With ______, more than one process can run
simultaneously, each on a different processor.
a. Multiprocessing, Multiprogramming
b. Multiprogramming, Multiprocessing
c. Uniprogramming, Multiprocessing
d. Multiprogramming, Uniprocessing
14. Which of the following would not disrupt the running process?
a. Scheduler process
b. A device
c. Timer
d. Power failure
20. Assume a process is waiting for some I/O service in a "blocked" state. It
goes to the ______ when the service is finished.
a. terminated state
b. ready state
c. running state
d. suspended state