OS mod 6(8m)
PROCESS:
A process is an active program, i.e., a program under execution. It is more than the program
code, as it also includes the program counter, process stack, registers, open files, etc.; the
program code alone is only the text section.
A program is not a process by itself: a program is a passive entity, such as a file of instructions
stored on disk, while a process is an active entity containing a program counter, allocated resources, etc.
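To make the distinction concrete, here is a minimal Python sketch (an illustration, not part of the original notes): the same program file, run twice, backs two distinct processes with different PIDs.

```python
import subprocess
import sys

# The same program (passive code) can back two separate processes,
# each an active entity with its own PID, program counter, and stack.
code = "import os; print(os.getpid())"
pid_a = subprocess.run([sys.executable, "-c", code],
                       capture_output=True, text=True).stdout.strip()
pid_b = subprocess.run([sys.executable, "-c", code],
                       capture_output=True, text=True).stdout.strip()
print(pid_a, pid_b)  # two different process IDs for one program
```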
PROCESS STATES:
New - The process has just been created.
Ready - The process is waiting to be assigned the processor by the short-term scheduler.
Running - Instructions of the process are being executed on the CPU.
Waiting - The process is waiting for some event, such as I/O, to occur.
Terminated - The process has finished execution.
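The legal transitions between these states can be sketched as a small table; this is an illustrative model of the classic five-state lifecycle, not code from the notes.

```python
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal transitions in the classic five-state process model.
TRANSITIONS = {
    State.NEW: {State.READY},                 # admitted by the OS
    State.READY: {State.RUNNING},             # dispatched by the scheduler
    State.RUNNING: {State.READY,              # interrupted / time slice over
                    State.WAITING,            # blocks on I/O or an event
                    State.TERMINATED},        # exits
    State.WAITING: {State.READY},             # awaited I/O or event completes
    State.TERMINATED: set(),
}

def can_move(src: State, dst: State) -> bool:
    return dst in TRANSITIONS[src]

print(can_move(State.RUNNING, State.WAITING))  # True: process blocks on I/O
print(can_move(State.WAITING, State.RUNNING))  # False: must pass through ready
```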
THREAD:
A thread is a separate execution path within a program. It is a lightweight unit of execution
that the operating system can schedule and run concurrently with other threads. The operating
system creates and manages threads, and they share the same memory and resources as the process
that created them. This enables multiple threads to cooperate and work efficiently within a
single program.
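The shared-memory property can be seen in a short sketch (illustrative, assuming Python's threading module): two threads of one process append to the same list object.

```python
import threading

# Two threads of one process share the same memory: both append to the
# same list object. A lock keeps the shared structure consistent.
results = []
lock = threading.Lock()

def worker(name: str) -> None:
    for i in range(3):
        with lock:
            results.append((name, i))

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(results))  # 6: both threads wrote into the same list
```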
PROCESS CONTROL BLOCK (PCB):
A process control block (PCB) is associated with each process. It contains important information
about the process, including:
Process State - This specifies the process state i.e. new, ready, running, waiting or terminated.
Program Counter - This contains the address of the next instruction that needs to be executed in the
process.
Registers - This specifies the registers that are used by the process. They may include accumulators,
index registers, stack pointers, general purpose registers etc.
List of open files - These are the different files that are associated with the process.
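The fields above can be mirrored in a simplified data structure; the field names here are illustrative (a real kernel's PCB, e.g. Linux's task_struct, holds far more).

```python
from dataclasses import dataclass, field
from typing import Dict, List

# A simplified PCB mirroring the fields described above.
@dataclass
class PCB:
    pid: int
    state: str = "new"                 # new, ready, running, waiting, terminated
    program_counter: int = 0           # address of the next instruction
    registers: Dict[str, int] = field(default_factory=dict)
    open_files: List[str] = field(default_factory=list)

pcb = PCB(pid=42)
pcb.state = "ready"                    # admitted: new -> ready
pcb.open_files.append("/var/log/app.log")
print(pcb.state, pcb.open_files)
```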
Scheduling queues, also known as process queues or job queues, are data
structures used in operating systems to manage the execution of multiple
processes. These queues help in organizing and prioritizing processes for
the CPU to execute. The main queues are:
1. Ready Queue: This queue holds all the processes that are ready to
be executed by the CPU. Processes in the ready queue are waiting
for their turn to run and are typically organized based on their
priority. Higher priority processes usually get scheduled first.
2. Waiting Queue: This queue holds processes that are waiting for
certain resources or events to occur. For example, a process waiting
for user input or for a file to be loaded will be placed in the waiting
queue until the required resource becomes available.
3. Blocked Queue: This queue holds processes that are blocked or
suspended due to some external conditions. Processes in this queue
cannot proceed until the condition is satisfied. For instance, a
process waiting for I/O operations to complete or waiting for a
semaphore signal will be placed in the blocked queue.
4. Job Queue: This queue contains all the processes residing in the
system, including those that are waiting to be executed, running, or
suspended. It represents the total set of processes in the system.
Taken together, the job queue encompasses all processes in the system.
The ready queue represents processes that are ready for execution, with
higher-priority processes scheduled first. The waiting queue holds
processes waiting for resources, and the blocked queue holds processes
that are temporarily unable to proceed.
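A priority-ordered ready queue can be sketched with a binary heap (an illustrative model; real schedulers use more elaborate structures such as per-priority run queues).

```python
import heapq

# A priority-ordered ready queue: the dispatcher always picks the
# highest-priority ready process (lower number = higher priority here).
ready_queue: list = []

def admit(priority: int, pid: int) -> None:
    heapq.heappush(ready_queue, (priority, pid))

def dispatch() -> int:
    _, pid = heapq.heappop(ready_queue)
    return pid

admit(3, 101)   # low priority
admit(1, 102)   # high priority
admit(2, 103)
order = [dispatch() for _ in range(3)]
print(order)    # [102, 103, 101]: highest priority dispatched first
```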
3. Define IPC and explain the two models of IPC (shared memory and
message passing)
IPC (Inter-Process Communication) is a mechanism that allows cooperating
processes to exchange data and synchronize their actions. In the
shared-memory model, processes establish a common region of memory and
communicate by reading and writing to it; the kernel is involved only in
setting up the region, so exchanges are fast, but the processes must
synchronize access themselves. In the message-passing model, processes
communicate through kernel-mediated send and receive operations (e.g.,
pipes or message queues); this is easier to synchronize and works across
machines, but each message involves the kernel.
Both models have their advantages and are used in different scenarios.
The choice between shared memory and message passing depends on
factors such as the nature of the problem, the amount of data to be
exchanged, the level of synchronization required, and the programming
paradigm being used.
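Both models can be sketched in a few lines; this example assumes a Unix-like OS (it relies on os.fork and os.pipe) and is an illustration, not part of the notes: a pipe carries a message through the kernel, while an anonymous shared mapping lets the child's write appear directly in the parent's memory.

```python
import mmap
import os
import struct

# Message passing: the kernel carries the data through a pipe.
def message_passing_demo() -> bytes:
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                       # child: sender
        os.close(r)
        os.write(w, b"hello")
        os.close(w)
        os._exit(0)
    os.close(w)                        # parent: receiver
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)
    return data

# Shared memory: an anonymous shared mapping survives fork(), so the
# child's write is directly visible to the parent, with no copying.
def shared_memory_demo() -> int:
    buf = mmap.mmap(-1, 8)             # anonymous, shared by default on Unix
    pid = os.fork()
    if pid == 0:                       # child writes into shared region
        struct.pack_into("q", buf, 0, 42)
        os._exit(0)
    os.waitpid(pid, 0)                 # parent reads the child's value
    return struct.unpack_from("q", buf, 0)[0]

print(message_passing_demo(), shared_memory_demo())
```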
Differences (single-threaded vs. multi-threaded process):
CPU Utilization: a single-threaded process utilizes only a single CPU core; a
multi-threaded process can utilize multiple CPU cores.
Complexity: single-threaded code is simpler to design and debug; multi-threaded
code is more complex to design and debug.
6. Explain the critical section problem and the three requirements for a
solution to the critical section problem
The critical section problem is a fundamental challenge in concurrent
programming. It refers to the situation where multiple processes or
threads share a common resource or data, and each process has a section
of code called the "critical section" that accesses or modifies the shared
resource. The critical section problem arises when processes or threads
need to coordinate their access to the critical section to prevent conflicts
and ensure proper execution.
A valid solution to the critical section problem must satisfy three
requirements:
1. Mutual Exclusion: if a process is executing in its critical section,
no other process may execute in its critical section at the same time.
2. Progress: if no process is in its critical section and some processes
wish to enter, only processes not in their remainder sections take part
in deciding which enters next, and this decision cannot be postponed
indefinitely.
3. Bounded Waiting: there is a bound on the number of times other
processes may enter their critical sections after a process has requested
entry and before that request is granted.
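A classic instance is a shared counter: the read-modify-write increment is the critical section, and a lock enforces mutual exclusion. This is an illustrative sketch using Python's threading.Lock, not code from the notes.

```python
import threading

# The increment below is the critical section: a read-modify-write on a
# shared counter. The lock enforces mutual exclusion, so no updates are
# lost even with four threads running concurrently.
counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:            # enter critical section
            counter += 1      # update the shared resource
        # leaving the with-block exits the critical section

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: mutual exclusion prevented lost updates
```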
Preemptive Scheduling vs. Non-Preemptive Scheduling:
CPU Utilization: preemptive scheduling achieves higher CPU utilization due to
more efficient allocation of CPU time; non-preemptive scheduling achieves lower
CPU utilization, as processes may hold the CPU for longer durations.
Real-Time Applications Support: preemptive scheduling offers better support for
real-time applications, as higher-priority tasks can interrupt lower-priority
tasks; non-preemptive scheduling may not provide robust support, as
higher-priority tasks may have to wait for lower-priority tasks to complete.
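The difference can be made concrete with a small simulation on an illustrative workload (the jobs and both scheduler sketches are assumptions, not from the notes): average waiting time under non-preemptive FCFS versus preemptive shortest-remaining-time-first (SRTF).

```python
def fcfs_avg_wait(jobs):
    """Non-preemptive FCFS: run each job to completion in arrival order."""
    t, total = 0, 0
    for arrival, burst in sorted(jobs):
        t = max(t, arrival)        # CPU may idle until the job arrives
        total += t - arrival       # waiting time = start - arrival
        t += burst                 # job holds the CPU to completion
    return total / len(jobs)

def srtf_avg_wait(jobs):
    """Preemptive SRTF: each time unit, run the ready job with the
    shortest remaining burst, preempting whatever was running."""
    n = len(jobs)
    rem = [burst for _, burst in jobs]
    finish = [0] * n
    t = done = 0
    while done < n:
        ready = [i for i in range(n) if jobs[i][0] <= t and rem[i] > 0]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda k: rem[k])   # shortest remaining time
        rem[i] -= 1
        t += 1
        if rem[i] == 0:
            finish[i] = t
            done += 1
    # waiting time = turnaround - burst
    return sum(finish[i] - jobs[i][0] - jobs[i][1] for i in range(n)) / n

jobs = [(0, 7), (2, 4), (4, 1)]    # (arrival time, CPU burst)
print(fcfs_avg_wait(jobs), srtf_avg_wait(jobs))  # 4.0 2.0
```

On this workload, preemption halves the average waiting time: short jobs get the CPU as soon as they arrive instead of waiting behind a long-running one.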