Operating System Solutions
d. What is deadlock? List and explain conditions that are necessary for a resource deadlock to
occur.
Definition 1 Mark
List 1 Mark
Explanation 3 Marks
A set of processes is deadlocked if each process in the set is waiting for an event that only
another process in the set can cause.
Coffman et al. (1971) showed that four conditions must hold for there to be a
(resource) deadlock:
1. Mutual exclusion condition. Each resource is either currently assigned to exactly one
process or is available.
2. Hold-and-wait condition. Processes currently holding resources that were granted earlier
can request new resources.
3. No-preemption condition. Resources previously granted cannot be forcibly taken away
from a process. They must be explicitly released by the process holding them.
4. Circular wait condition. There must be a circular list of two or more processes, each of
which is waiting for a resource held by the next member of the chain.
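The circular-wait condition can be pictured as a cycle in a "wait-for" graph. The sketch below (process names and the graphs are invented for illustration, assuming each process waits on at most one other) checks whether such a cycle exists:

```python
# Toy wait-for graph: maps each process to the process it is waiting on
# (or None if it is not waiting). A cycle means a circular wait exists.

def has_cycle(wait_for):
    """Return True if the wait-for graph contains a circular wait."""
    for start in wait_for:
        seen = set()
        p = start
        while p is not None and p not in seen:
            seen.add(p)
            p = wait_for.get(p)
        if p is not None:          # walk stopped on a repeated process -> cycle
            return True
    return False

# P1 waits for a resource held by P2, and P2 for one held by P1:
deadlocked = {"P1": "P2", "P2": "P1"}
# Here P2 is not waiting on anyone, so every chain of waits ends:
ok = {"P1": "P2", "P2": None}
```

With all four conditions present, `deadlocked` exhibits the circular wait; breaking any one condition breaks the cycle.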
e. Explain deadlock detection algorithm to detect deadlock when multiple resources of each
type are available.
Explanation 5 Marks
When multiple copies of some of the resources exist, a different approach is needed to detect
deadlocks. We will now present a matrix-based algorithm for detecting deadlock among n
processes, P1 through Pn. Let the number of resource classes be m, with E1 resources of
class 1, E2 resources of class 2, and generally, Ei resources of class i (1 ≤ i ≤ m). E is the
existing resource vector. It gives the total number of instances of each resource in existence.
For example, if class 1 is
tape drives, then E1 = 2 means the system has two tape drives. At any instant, some of the
resources are assigned and are not available. Let A be the available resource vector, with Ai
giving the number of instances of resource i that are currently available (i.e., unassigned). If
both of our two tape drives are assigned, A1 will be 0.
Now we need two arrays, C, the current allocation matrix, and R, the request matrix. The
ith row of C tells how many instances of each resource class Pi currently holds. Thus, Cij is
the number of instances of resource j that are held by process i. Similarly, Rij is the number
of instances of resource j that Pi wants. These four data structures are shown in Fig.
An important invariant holds for these four data structures. In particular, every resource is
either allocated or available. This observation means that

C1j + C2j + ... + Cnj + Aj = Ej (for each resource class j, 1 ≤ j ≤ m)

that is, summing the jth column of the allocation matrix and adding the number of available
instances gives the total number of instances of resource j in existence.
The deadlock detection algorithm is based on comparing vectors. Let us define the relation
A ≤ B on two vectors A and B to mean that each element of A is less than or equal to the
corresponding element of B. Mathematically, A ≤ B holds if and only if Ai ≤ Bi for 1 ≤ i ≤ m.
Each process is initially said to be unmarked. As the algorithm progresses, processes will be
marked, indicating that they are able to complete and are thus not deadlocked. When the
algorithm terminates, any unmarked processes are known to be deadlocked. This algorithm
assumes a worst-case scenario: all processes keep all acquired resources until they exit.
The deadlock detection algorithm can now be given as follows.
1. Look for an unmarked process, Pi, for which the ith row of R is less than or equal to A.
2. If such a process is found, add the ith row of C to A, mark the process, and go back to
step 1.
3. If no such process exists, the algorithm terminates.
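The three steps above can be sketched directly from the A, C, and R structures defined in the text (the numeric values in the usage example are invented for illustration):

```python
# Matrix-based deadlock detection: A = available vector, C = current
# allocation matrix, R = request matrix, as defined in the text.

def detect_deadlock(A, C, R):
    """Return the indices of deadlocked (unmarked) processes."""
    n, m = len(C), len(A)
    A = list(A)                    # work on a copy of the available vector
    marked = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # Step 1: find an unmarked Pi whose request row fits within A.
            if not marked[i] and all(R[i][j] <= A[j] for j in range(m)):
                # Step 2: Pi can finish; reclaim everything it holds.
                for j in range(m):
                    A[j] += C[i][j]
                marked[i] = True
                progress = True    # go back to step 1
    # Step 3: no runnable process remains; the unmarked ones are deadlocked.
    return [i for i in range(n) if not marked[i]]

# Example: three processes, four resource classes.
A = [2, 1, 0, 0]
C = [[0, 0, 1, 0], [2, 0, 0, 1], [0, 1, 2, 0]]
R = [[2, 0, 0, 1], [1, 0, 1, 0], [2, 1, 0, 0]]
result = detect_deadlock(A, C, R)  # [] -> no process is deadlocked here
```

In this example P3 can run first, then P2, then P1, so all processes get marked. If instead no row of R fits within A at some point, the remaining unmarked processes are reported as deadlocked.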
f. Explain how deadlocks can be prevented.
Explanation 5 Marks
If we can ensure that at least one of these four conditions is never satisfied, then deadlocks
will be structurally impossible.
Attacking the Mutual-Exclusion Condition
Attacking the Hold-and-Wait Condition
Attacking the No-Preemption Condition
Attacking the Circular Wait Condition
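One common way to attack the circular-wait condition is to impose a global numbering on all resources and require every process to acquire them in ascending order; a cycle of waits is then impossible. A minimal sketch (the resource ids are invented for the example):

```python
# Circular-wait prevention by global resource ordering: every process
# must request its resources in increasing id order.

def acquire_in_order(needed_ids):
    """Return the order in which a process must acquire its resources."""
    return sorted(needed_ids)

# Both processes need resources 3 and 7. Whatever order they ask in,
# both must take 3 first, so neither can hold 7 while waiting for 3.
p1_order = acquire_in_order([7, 3])   # [3, 7]
p2_order = acquire_in_order([3, 7])   # [3, 7]
```

Because every chain of waits now runs strictly upward through the numbering, no chain can close back on itself.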
The kernel sits directly on the hardware and enables interactions with I/O devices and the
memory management unit and controls CPU access to them. At the lowest level, as shown in
Fig. it contains interrupt handlers, which are the primary way for interacting with devices,
and the low-level dispatching mechanism. This dispatching occurs when an interrupt
happens. The low-level code here stops the running process, saves its state in the kernel
process structures, and starts the appropriate driver. Process dispatching also happens when
the kernel completes some operations and it is time to start up a user process again. The
dispatching code is in assembler and is quite distinct from scheduling. Next, we divide the
various kernel subsystems into three main components.
The I/O component in Fig. contains all kernel pieces responsible for interacting with devices
and performing network and storage I/O operations. At the highest level, the I/O operations
are all integrated under a VFS (Virtual File System) layer. That is, at the top level,
performing a read operation on a file, whether it is in memory or on disk, is the same as
performing a read operation to retrieve a character from a terminal input. At the lowest level,
all I/O operations pass through some
device driver. All Linux drivers are classified as either character-device drivers or block-
device drivers, the main difference being that seeks and random accesses are allowed on
block devices and not on character devices. Technically, network devices are really character
devices, but they are handled somewhat differently, so that it is probably clearer to separate
them, as has been done in the figure.
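The VFS claim above is visible from user space: the same read loop works unchanged whatever kind of object the file descriptor refers to. A minimal sketch (a pipe stands in for "terminal input"; the data is invented):

```python
import os

def read_all(fd, chunk=4096):
    """Drain any file descriptor with one loop, VFS-style."""
    data = b""
    while True:
        part = os.read(fd, chunk)
        if not part:               # EOF looks the same for every device type
            return data
        data += part

# The identical call works on a pipe as on a regular file:
r, w = os.pipe()
os.write(w, b"hello")
os.close(w)
piped = read_all(r)                # b"hello"
os.close(r)
```

Underneath, of course, entirely different drivers service the two cases; the uniformity is exactly what the VFS layer provides.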
Above the device-driver level, the kernel code is different for each device type. Character
devices may be used in two different ways. Some programs, such as visual editors like vi and
emacs, want every keystroke as it is hit. Raw terminal (tty) I/O makes this possible. Other
software, such as the shell, is line oriented, allowing users to edit the whole line before hitting
ENTER to send it to the program. In this case the character stream from the terminal device is
passed through a so-called
line discipline, and appropriate formatting is applied. Networking software is often modular,
with different devices and protocols supported. The layer above the network drivers handles a
kind of routing function, making sure that the right packet goes to the right device or protocol
handler. Most
Linux systems contain the full functionality of a hardware router within the kernel, although
the performance is less than that of a hardware router. Above the router code is the actual
protocol stack, including IP and TCP, but also many additional protocols. Overlaying all the
network is the socket interface, which allows programs to create sockets for particular
networks and protocols, getting back a file descriptor for each socket to use later.
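The last sentence can be demonstrated directly: creating a socket hands the program an object backed by an ordinary file descriptor. A minimal sketch:

```python
import socket

# Create a TCP/IP socket for a particular network and protocol family...
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# ...and get back the file descriptor the program will use later.
fd = s.fileno()
s.close()
```

Because the socket is just a descriptor, the generic descriptor-based calls (read, write, close) apply to it as well as the socket-specific ones.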
On top of the disk drivers is the I/O scheduler, which is responsible for ordering and issuing
disk-operation requests in a way that tries to minimize wasteful disk-head movement or to
meet some other system policy. At the very top of the block-device column are the file
systems. Linux may,
and in fact does, have multiple file systems coexisting concurrently. In order to hide the
gruesome architectural differences of various hardware devices from the file system
implementation, a generic block-device layer provides an abstraction used by all file systems.
To the right in Fig. are the other two key components of the Linux kernel. These are
responsible for the memory and process management tasks. Memory-management tasks
include maintaining the virtual-to-physical-memory mappings, maintaining a cache of
recently accessed pages, implementing a good page-replacement policy, and bringing new
pages of needed code and data into memory on demand. The key responsibility of the
process-management component is the creation
and termination of processes. It also includes the process scheduler, which chooses which
process or, rather, thread to run next. As we shall see in the next section, the Linux kernel
treats both processes and threads simply as executable entities, and will schedule them based
on a global scheduling policy. Finally, code for signal handling also belongs to this
component.
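The process-management responsibilities named above (creating a process, letting it terminate, and reaping its exit status) can be exercised from user space with the POSIX calls Linux exposes. A minimal sketch (POSIX-only; the exit code 7 is an arbitrary example):

```python
import os

def spawn_and_reap():
    """Fork a child, let it terminate, and collect its exit status."""
    pid = os.fork()
    if pid == 0:
        os._exit(7)                # child: terminate immediately with code 7
    _, status = os.waitpid(pid, 0) # parent: reap the terminated child
    return os.WEXITSTATUS(status)
```

The kernel's process-management component handles each step: duplicating the process on fork, tearing it down on exit, and delivering the status to the waiting parent.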
b. Explain Android architecture.
Explanation 5 Marks
Android is built on top of the standard Linux kernel, with only a few significant extensions to
the kernel itself that will be discussed later. Once in user space, however, its implementation
is quite different from a traditional Linux distribution and uses many of the Linux features
you already understand in very different ways. As in a traditional Linux system, Android’s
first user-space process is init, which is the root of all other processes. The daemons
Android’s init process starts
are different, however, focused more on low-level details (managing file systems and
hardware access) rather than higher-level user facilities like scheduling cron jobs. Android
also has an additional layer of processes, those running Dalvik’s Java language environment,
which are responsible for executing all parts of the system implemented in Java.
Figure illustrates the basic process structure of Android. First is the init process, which
spawns a number of low-level daemon processes. One of these is zygote, which is the root of
the higher-level Java language processes. Android’s init does not run a shell in the traditional
way, since a typical
Android device does not have a local console for shell access. Instead, the daemon process
adbd listens for remote connections (such as over USB) that request shell access, forking
shell processes for them as needed. Since most of Android is written in the Java language, the
zygote daemon and
processes it starts are central to the system. The first process zygote always starts is called
system server, which contains all of the core operating system services.
Key parts of this are the power manager, package manager, window manager, and activity
manager.
Other processes will be created from zygote as needed. Some of these are ‘‘persistent’’
processes that are part of the basic operating system, such as the telephony stack in the phone
process, which must remain always running. Additional application processes will be created
and stopped as needed while the system is running.
Applications interact with the operating system through calls to libraries provided by it,
which together compose the Android framework. Some of these libraries can perform their
work within that process, but many will need to perform interprocess communication with
other processes, often services in the system server process.
The package manager provides a framework API for applications to call in their local
process, here the PackageManager class. Internally, this class must get a connection to the
corresponding
service in the system server. To accomplish this, at boot time the system server publishes
each service under a well-defined name in the service manager, a daemon started by init. The
PackageManager in the application process retrieves a connection from the service manager
to its system service using that same name.
Once the PackageManager has connected with its system service, it can make calls on it.
Most application calls to PackageManager are implemented as interprocess communication
using Android’s Binder IPC mechanism, in this case making calls to the
PackageManagerService implementation in the system server. The implementation of
PackageManagerService arbitrates interactions across all client applications and maintains
state that will be needed by multiple applications.
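The publish-and-lookup sequence described above can be modeled as a toy registry (all names and return values below are invented; no real Binder IPC is involved):

```python
# Toy model of Android's service manager: system_server publishes
# services under well-known names; clients retrieve them by name.

class ServiceManager:
    def __init__(self):
        self._services = {}

    def add_service(self, name, service):   # done by system_server at boot
        self._services[name] = service

    def get_service(self, name):            # done by the client's framework code
        return self._services[name]

class PackageManagerService:                # lives in the system server
    def get_package_version(self, pkg):
        return {"com.example.app": 42}.get(pkg)

sm = ServiceManager()
sm.add_service("package", PackageManagerService())

# Client side: the PackageManager facade resolves its service by name
# and then makes calls on it (over Binder IPC in the real system).
pm = sm.get_service("package")
```

In the real system the object handed back is a Binder proxy rather than the service itself, and every method call crosses process boundaries into PackageManagerService.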
c. List and explain file-system system calls in Linux.
List 1 Mark
Explanation 4 Marks
d. Write down I/O and object manager steps for creating/opening a file and getting back a file
handle.
1. When an executive component, such as the I/O manager implementing the native system
call NtCreateFile, calls ObOpenObjectByName in the object manager, it passes a Unicode
path name for the NT namespace, say \??\C:\foo\bar.
2. The object manager searches through directories and symbolic links and ultimately finds
that \??\C: refers to a device object (a type defined by the I/O manager). The device object
is a leaf node in the part of the NT namespace that the object manager manages.
3. The object manager then calls the Parse procedure for this object type, which happens to
be IopParseDevice implemented by the I/O manager. It passes not only a pointer to the device
object it found (for C:), but also the remaining string \foo\bar.
4. The I/O manager will create an IRP (I/O Request Packet), allocate a file object, and send
the request to the stack of I/O devices determined by the device object found by the object
manager.
5. The IRP is passed down the I/O stack until it reaches a device object representing the file-
system instance for C:. At each stage, control is passed to an entry point into the driver object
associated with the device object at that level. The entry point used here is for CREATE
operations, since the request is to create or open a file named \foo\bar on the volume.
6. The device objects encountered as the IRP heads toward the file system represent file-
system filter drivers, which may modify the I/O operation before it reaches the file-system
device object. Typically these intermediate devices represent system extensions like antivirus
filters.
7. The file-system device object has a link to the file-system driver object, say NTFS. So, the
driver object contains the address of the CREATE operation within NTFS.
8. NTFS will fill in the file object and return it to the I/O manager, which returns back up
through all the devices on the stack until IopParseDevice returns to the object manager.
9. The object manager is finished with its namespace lookup. It received back an initialized
object from the Parse routine (which happens to be a file object—not the original device
object it found). So the object manager creates a handle for the file object in the handle table
of the current process, and returns the handle to its caller.
10. The final step is to return back to the user-mode caller, which in this example is the
Win32 API CreateFile, which will return the handle to the application.
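Steps 9 and 10 can be modeled with a toy handle table (the classes and the path value are invented; this is not the real NT code): the object manager stores the object returned by the Parse routine in the per-process handle table and hands the caller the small-integer handle.

```python
# Toy model of step 9: a per-process handle table mapping handles
# (small integers) to kernel objects.

class FileObject:
    def __init__(self, path):
        self.path = path

class Process:
    def __init__(self):
        self.handle_table = []

    def create_handle(self, obj):
        """Store obj in the handle table and return its handle."""
        self.handle_table.append(obj)
        return len(self.handle_table) - 1

proc = Process()
handle = proc.create_handle(FileObject(r"\??\C:\foo\bar"))
```

All later operations on the open file (read, write, close) present this handle, which the kernel resolves back to the file object through the same table.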
_____________________________