

6.1 A CPU-scheduling algorithm determines an order for the execution of its scheduled processes. Given n processes to be scheduled on one processor, how many different schedules are possible? Give a formula in terms of n.
Answer: There are n factorial (n!) possible schedules. With a single processor, the n tasks must be arranged in a single ordered list, and the number of such orderings is the number of permutations of n items:
n! = n × (n − 1) × (n − 2) × ... × 2 × 1.
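As a quick sanity check, a short Python sketch (the helper name `schedule_count` is ours, purely illustrative) enumerates the orderings and compares the count with n!:

```python
import math
from itertools import permutations

def schedule_count(n):
    """Count every distinct ordering of n processes on a single processor."""
    return sum(1 for _ in permutations(range(n)))

# The counts match n! for each n: 1, 2, 6, 24, 120
print([schedule_count(n) for n in range(1, 6)])
for n in range(1, 6):
    assert schedule_count(n) == math.factorial(n)
```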
6.2 Explain the difference between preemptive and nonpreemptive scheduling.
Answer:
Preemptive scheduling:
1. Tasks are assigned priorities, and it is sometimes necessary to run a higher-priority task before a lower-priority one even if the lower-priority task is still running; the running task is preempted when a newly arrived process has higher priority.
2. CPU utilization is more efficient than under nonpreemptive scheduling, because waiting and response times are lower.
3. Processes are prioritized; the highest-priority process executes first.
4. It is more flexible.
5. Example preemptive algorithms: Shortest Remaining Time First, Round Robin.

Nonpreemptive scheduling:
1. Once the CPU has been allocated to a process, that process keeps the CPU until it terminates or changes state itself, e.g. by blocking for I/O (moving from running to waiting).
2. CPU utilization is less efficient than under preemptive scheduling, because waiting and response times are higher.
3. A new process simply joins the ready queue; once a process enters the running state, it is not removed from the processor until it has completed.
4. It is more rigid.
5. Example nonpreemptive algorithms: FCFS, SJF, nonpreemptive Priority scheduling.

6.3 Suppose that the following processes arrive for execution at the times indicated. Each process will run for the time listed. In answering the questions, use nonpreemptive scheduling, and base all decisions on the information you have at the time the decision must be made.
Process Arrival Time Burst Time
P1 0.0 8
P2 0.4 4
P3 1.0 1
a. What is the average turnaround time for these processes with the FCFS scheduling
algorithm?
b. What is the average turnaround time for these processes with the SJF scheduling algorithm?
c. The SJF algorithm is supposed to improve performance, but notice that we chose to run process P1 at time 0 because we did not know that two shorter processes would arrive soon. Compute what the average turnaround time will be if the CPU is left idle for the first 1 unit and then SJF scheduling is used. Remember that processes P1 and P2 are waiting during this idle time, so their waiting time may increase. This algorithm could be called future-knowledge scheduling.

Answer a) FCFS runs P1, P2, P3: (8 + (12 − 0.4) + (13 − 1)) / 3 = 10.53 ms

Answer b) SJF runs P1, P3, P2: (8 + (9 − 1) + (13 − 0.4)) / 3 = 9.53 ms

Answer c) CPU idle until t = 1, then SJF runs P3, P2, P1: ((2 − 1) + (6 − 0.4) + (14 − 0)) / 3 = 6.87 ms
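These averages can be double-checked with a small nonpreemptive simulation sketch (the `avg_turnaround` helper is our own, not from the text):

```python
# Processes: name -> (arrival time, burst time)
procs = {"P1": (0.0, 8), "P2": (0.4, 4), "P3": (1.0, 1)}

def avg_turnaround(order, start=0.0):
    """Run the processes nonpreemptively in the given order; return the
    average turnaround time (finish time minus arrival time)."""
    t, total = start, 0.0
    for name in order:
        arrival, burst = procs[name]
        t = max(t, arrival) + burst   # finish time of this process
        total += t - arrival          # its turnaround time
    return total / len(order)

print(round(avg_turnaround(["P1", "P2", "P3"]), 2))             # FCFS -> 10.53
print(round(avg_turnaround(["P1", "P3", "P2"]), 2))             # SJF  -> 9.53
print(round(avg_turnaround(["P3", "P2", "P1"], start=1.0), 2))  # idle, then SJF -> 6.87
```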

6.4 What advantage is there in having different time-quantum sizes at different levels of a
multilevel queueing system?
Answer: Having different quantum sizes at different levels of a multilevel queueing system lets the scheduler match the quantum to each class of process:
1. Interactive processes, such as editors, can be placed in a queue with a small time quantum, so they receive frequent, short service and stay responsive.
2. Processes that do not need frequent servicing (e.g., batch processes) can be placed in a queue with a larger quantum; they require fewer context switches to complete, which reduces switching overhead and makes more efficient use of the computer.
6.5 Many CPU-scheduling algorithms are parameterized. For example, the RR algorithm
requires a parameter to indicate the time slice. Multilevel feedback queues require parameters
to define the number of queues, the scheduling algorithm for each queue, the criteria used to
move processes
between queues, and so on. These algorithms are thus really sets of algorithms (for example,
the set of RR algorithms for all time slices, and so on). One set of algorithms may include
another (for example, the FCFS algorithm is the RR algorithm with an infinite time quantum).
What (if any) relation holds between the following pairs of algorithm sets?
a. Priority and SJF
b. Multilevel feedback queues and FCFS
c. Priority and FCFS
d. RR and SJF
Answer :
1. Priority and SJF : The shortest job has the highest priority

a. State the parameters and behavior of priority scheduling


Solution
Parameter: Each job has a priority P
Behavior: Choose the job with the lowest numerical priority
b. State the parameters and behavior of SJF
Solution
Parameter: Each job has a remaining time T
Behavior: Choose the job with the lowest remaining T
c. Can SJF simulate priority scheduling for all possible parameters of priority
scheduling?
Solution
2 possible answers: (1) No - although both schedulers are priority queues, SJF is more
restrictive than a general priority scheduler since a job's time may not correspond to its
priority. (2) Yes - "spoof" the scheduler by passing priority in place of time.
d. Can priority scheduling simulate SJF for all possible parameters of SJF? (How or
why not.)
Solution
Yes - set the priority P to the remaining time T
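Part d can be illustrated with a sketch: a minimal nonpreemptive priority scheduler (our own illustrative code, choosing the lowest numerical priority as in part a) produces the SJF order when each job's priority P is set to its remaining time T:

```python
def priority_schedule(jobs):
    """Nonpreemptive priority scheduling: repeatedly pick the job with
    the lowest numerical priority value until none remain."""
    order, pending = [], dict(jobs)
    while pending:
        name = min(pending, key=pending.get)
        order.append(name)
        del pending[name]
    return order

# Simulate SJF by using each job's remaining time T as its priority P.
remaining_time = {"A": 7, "B": 2, "C": 5}
print(priority_schedule(remaining_time))   # -> ['B', 'C', 'A'], the SJF order
```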

2. Multilevel feedback queues and FCFS: The lowest level of MLFQ is FCFS.

a. State the parameters and behavior of multi-level feedback queues


Solution
Parameters: N (# queues), scheduling algorithm for each queue, function that selects in
which queue to place a job, criteria to interrupt a running job
Behavior: Always schedule the job from the head of the lowest #'d non-empty queue
according to that queue's scheduling algorithm; interrupt a running job when told to by
the interrupt criteria; place newly arrived jobs or interrupted jobs in a queue based on the
selection function
b. State the parameters and behavior of FCFS
Solution
No parameters
Behavior: Place arriving jobs at the back of a FIFO queue; when scheduling a new job,
choose the one from the front of the FIFO queue; never interrupt a running job
c. Can FCFS simulate multi-level feedback for all possible parameters of multi-level
feedback? (How or why not?)
Solution
No. Counterexample: make the top queue of MLF run RR with a short time quantum. If
two jobs arrive and each is longer than the quantum, the MLF scheduler will allow both
jobs to run for at least one quantum before either completes, but FCFS can never interrupt
a running job, so one of the two will complete before the other has a chance to run.
d. Can multi-level feedback scheduling simulate FCFS for all possible parameters of
FCFS? (How or why not?)
Solution
Yes. Make MLF have 1 queue and have that one queue run FCFS. Function to select
which queue - always choose that queue. Policy to interrupt - Never.

3. Priority and FCFS: FCFS gives the highest priority to the job that has been waiting the longest.

a. Can FCFS simulate priority scheduling for all possible parameters of priority
scheduling? (How or why not?)
Solution
No. Counterexample: Suppose job 1 arrives at time 1 with priority 10 and length
1,000,000 and job 2 arrives at time 2 with priority 1 and length 1,000,000. FCFS will run
job 1 to completion, then job 2 to completion. Priority will run job 1 for 1 unit, then job 2
to completion, then job 1 to completion.
b. Can priority scheduling simulate FCFS for all possible parameters of FCFS? (How
or why not?)
Solution
Yes. Set all jobs to equal priority (we assume that the priority queue breaks ties with
FCFS and infinite time quantum)

4. RR and SJF : None.

a. State the parameters and behavior of round robin


Solution
Parameter: Quantum Q
Behavior: Run FIFO queue, but interrupt a running job Q unit after it begins running and
move it to the end of the FIFO queue
b. Can round robin simulate SJF for all possible parameters of SJF? (How or why
not?)
Solution
No. Counterexample: three jobs arrive: job 1 at time 0 is 10 seconds long, job 2 arrives at time 1 and is 100 seconds long, and job 3 arrives at time 2 and is 10 seconds long. SJF must run job 1 to completion, then job 3, and then job 2. But with any quantum, RR must run at least one quantum of job 2 before it runs any of job 3, so RR cannot reproduce the SJF schedule.
c. Can SJF simulate round robin for all possible parameters of round robin? (How or
why not?)
Solution
No. The same counterexample applies in reverse: SJF runs each job to completion and never interleaves jobs, so it cannot produce RR's time-sliced schedule.
6.6 Suppose that a scheduling algorithm (at the level of short-term CPU scheduling) favors
those processes that have used the least processor time in the recent past. Why will this
algorithm favor I/O-bound programs and yet not permanently starve CPU-bound programs?
Answer: I/O-bound programs have short CPU bursts and use little processor time, whereas CPU-bound programs have long CPU bursts. This algorithm therefore favors I/O-bound programs, because their recent CPU usage is low. CPU-bound programs are nevertheless not permanently starved: I/O-bound programs finish their short bursts quickly and block for I/O, at which point the CPU-bound programs get the CPU. (An I/O-bound thread also generally has tighter latency requirements than a CPU-bound thread, which is another reason to favor it.)

6.7 Distinguish between PCS and SCS scheduling.


Answer:

Process Contention Scope (PCS):
1. Scheduling competition is local to the process: the threads competing for the CPU belong to the same process.
2. Threads are scheduled onto available LWPs.
3. PCS is used for scheduling user-level threads.
4. It is a priority-based scheduler; the priorities are set by the programmer and are not adjusted by the thread library.
5. PCS applies to the many-to-many and many-to-one models.

System Contention Scope (SCS):
1. SCS is used by the kernel to decide which kernel-level thread to schedule onto a CPU.
2. Under SCS, all threads in the system compete for the CPU.
3. SCS applies to the one-to-one model.

6.8 Assume that an operating system maps user-level threads to the kernel using the many-to-
many model and that the mapping is done through the use of LWPs. Furthermore, the system
allows program developers to create real-time threads. Is it necessary to bind a real-time
thread to an LWP?
Answer:
Yes. For real-time threads, timing is crucial. If a real-time thread is runnable but not bound to an LWP, it may have to wait to be attached to an LWP before it can run. Worse, if a real-time thread was running on an LWP and then blocked (e.g., for I/O), the LWP may be assigned to another thread while it is blocked; when the real-time thread is ready to run again, it must first wait to be attached to an LWP once more. By binding an LWP to the real-time thread, we ensure the thread can run with minimal delay once it is scheduled.
6.9 The traditional UNIX scheduler enforces an inverse relationship between priority numbers
and priorities: the higher the number, the lower the priority. The scheduler recalculates process
priorities once per second using the following function:
Priority = (recent CPU usage / 2) + base
where base = 60 and recent CPU usage refers to a value indicating how often a process has
used the CPU since priorities were last recalculated. Assume that recent CPU usage is 40 for
process P1, 18 for process P2, and 10 for process P3. What will be the new priorities for these
three processes when priorities are recalculated? Based on this information, does the
traditional UNIX scheduler raise or lower the relative priority of a CPU-bound process?
Answer: Using Priority = (recent CPU usage / 2) + base with base = 60: P1 = 40/2 + 60 = 80; P2 = 18/2 + 60 = 69; P3 = 10/2 + 60 = 65. Since a higher number means a lower priority, the traditional UNIX scheduler lowers the relative priority of a CPU-bound process: the more CPU it has used recently, the larger its priority number becomes.
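The recalculation can be sketched in a few lines of Python (the `unix_priority` helper name is ours):

```python
BASE = 60

def unix_priority(recent_cpu_usage):
    """Traditional UNIX recalculation: higher number = lower priority.
    Integer division mirrors the integer arithmetic of the scheduler."""
    return recent_cpu_usage // 2 + BASE

usage = {"P1": 40, "P2": 18, "P3": 10}
for name, cpu in usage.items():
    print(name, unix_priority(cpu))   # P1 80, P2 69, P3 65
```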
6.10 Why is it important for the scheduler to distinguish I/O-bound programs from CPU-bound
programs?
Answer:
I/O-bound programs perform only a small amount of computation before performing I/O, so they typically do not use their entire CPU quantum. CPU-bound programs, on the other hand, use their entire quantum without any blocking I/O operations. By giving I/O-bound programs higher priority and letting them run ahead of CPU-bound programs, we make better use of system resources: the I/O devices are kept busy while the CPU-bound programs use the processor.

6.11 Discuss how the following pairs of scheduling criteria conflict in certain settings.
a. CPU utilization and response time.
b. Average turnaround time and maximum waiting time.
c. I/O device utilization and CPU utilization.
a: CPU utilization can range from 0 to 100 percent and is improved by reducing overhead and idle time. Minimizing response time, however, requires frequent context switches so that interactive requests are serviced quickly, and that switching overhead lowers CPU utilization.
b: Average turnaround time is minimized by running the shortest tasks first, but such a policy can starve long-running tasks and thereby drive up the maximum waiting time, even though the average waiting time is minimized.
c: I/O device utilization and CPU utilization: CPU utilization is maximized by running long-
running CPU-bound tasks without performing context switches. I/O device utilization is
maximized by scheduling I/O-bound jobs as soon as they become ready to run, thereby incurring
the overheads of context switches.

6.12 One technique for implementing lottery scheduling works by assigning processes lottery
tickets, which are used for allocating CPU time. Whenever a scheduling decision has to be made,
a lottery ticket is chosen at random, and the process holding that ticket gets the CPU.
The BTV operating system implements lottery scheduling by holding a lottery 50 times each
second, with each lottery winner getting 20 milliseconds of CPU time (20 milliseconds × 50 = 1
second). Describe how the BTV scheduler can ensure that higher-priority threads receive more
attention from the CPU than lower-priority threads.
The BTV scheduler can assign more lottery tickets to higher-priority threads. Because the winning ticket is drawn at random, a thread holding more tickets is more likely to win each lottery and therefore receives more of the 20-millisecond quanta. When a quantum expires, the running process is preempted and another ticket is drawn. Thus, by giving higher-priority threads more lottery tickets, we ensure that they receive proportionally more CPU attention than lower-priority threads.
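A minimal sketch of the ticket draw (illustrative code, not the actual BTV implementation) shows why holding more tickets translates into winning more quanta:

```python
import random
from collections import Counter

def lottery_pick(tickets):
    """Draw one winning ticket; a process's win probability is
    proportional to the number of tickets it holds."""
    draw = random.randrange(sum(tickets.values()))
    for proc, n in tickets.items():
        if draw < n:
            return proc
        draw -= n

# A higher-priority thread holding 75 of 100 tickets wins roughly
# three quarters of the 50-per-second lotteries.
tickets = {"high": 75, "low": 25}
wins = Counter(lottery_pick(tickets) for _ in range(50 * 60))
print(wins["high"] > wins["low"])   # True with overwhelming probability
```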
6.13 In Chapter 5, we discussed possible race conditions on various kernel data structures.
Most scheduling algorithms maintain a run queue, which lists processes eligible to run on a
processor. On multicore systems, there are two general options: (1) each processing core has
its own run queue, or (2) a single run queue is shared by all processing cores. What are the
advantages and disadvantages of each of these approaches?

With a per-core run queue, each core schedules from its own queue without locking, so there is no contention on a shared structure and processes keep cache affinity with their core; the disadvantage is that load can become unbalanced, requiring a load-balancing mechanism to migrate processes between queues. With a single shared run queue, the load is balanced automatically since every core picks work from the same queue, but the queue must be protected by a lock, which becomes a contention bottleneck as the number of cores grows, and processes may migrate between cores, losing cache affinity.

6.14 Consider the exponential average formula used to predict the length of the next CPU burst.
What are the implications of assigning the following values to the parameters used by the
algorithm?
a. α = 0 and τ0 = 100 milliseconds
b. α = 0.99 and τ0 = 10 milliseconds
Answer:
a. With α = 0, the formula τ(n+1) = α·t(n) + (1 − α)·τ(n) reduces to τ(n+1) = τ(n): the most recent burst is ignored entirely, and the prediction never changes from its initial value of 100 milliseconds. Only the initial estimate matters.
b. With α = 0.99, the prediction is dominated by the most recent CPU burst; history decays almost immediately, so the initial estimate of 10 milliseconds quickly becomes irrelevant.
6.15 A variation of the round-robin scheduler is the regressive round-robin scheduler. This
scheduler assigns each process a time quantum and a priority. The initial value of a time
quantum is 50 milliseconds. However, every time a process has been allocated the CPU and
uses its entire time quantum (does not block for I/O), 10 milliseconds is added to its time
quantum, and its priority level is boosted. (The time quantum for a process can be increased to
a maximum of 100 milliseconds.) When a process blocks before using its entire time quantum,
its time quantum is reduced by 5 milliseconds, but its priority remains the same. What type of
process (CPU-bound or I/O-bound) does the regressive round-robin scheduler favor? Explain.
Answer:
The regressive round-robin scheduler favors CPU-bound processes. A process that uses its entire quantum is rewarded with a longer quantum and a higher priority, while a process that blocks for I/O before its quantum expires has its quantum reduced. Since CPU-bound processes use their full quantum and I/O-bound processes typically block early, CPU-bound processes accumulate longer quanta and higher priorities over time.

6.16 Consider the following set of processes, with the length of the CPU burst given in
milliseconds:
Process Burst Time Priority
P1 2 2
P2 1 1
P3 8 4
P4 4 2
P5 5 3
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
a. Draw four Gantt charts that illustrate the execution of these processes using the following
scheduling algorithms: FCFS, SJF, non-preemptive priority (a larger priority number implies a
higher priority), and RR (quantum = 2).
b. What is the turnaround time of each process for each of the scheduling algorithms in part a?
c. What is the waiting time of each process for each of these scheduling algorithms?
d. Which of the algorithms results in the minimum average waiting time (over all processes)?
Answer:
a. Gantt charts (ties between P1 and P4, both priority 2, are broken in arrival order):
FCFS: P1 (0-2), P2 (2-3), P3 (3-11), P4 (11-15), P5 (15-20)
SJF: P2 (0-1), P1 (1-3), P4 (3-7), P5 (7-12), P3 (12-20)
Nonpreemptive priority: P3 (0-8), P5 (8-13), P1 (13-15), P4 (15-19), P2 (19-20)
RR (q = 2): P1 (0-2), P2 (2-3), P3 (3-5), P4 (5-7), P5 (7-9), P3 (9-11), P4 (11-13), P5 (13-15), P3 (15-17), P5 (17-18), P3 (18-20)
b. Turnaround times (FCFS, SJF, Priority, RR): P1: 2, 3, 15, 2; P2: 3, 1, 20, 3; P3: 11, 20, 8, 20; P4: 15, 7, 19, 13; P5: 20, 12, 13, 18.
c. Waiting times (turnaround − burst): P1: 0, 1, 13, 0; P2: 2, 0, 19, 2; P3: 3, 12, 0, 12; P4: 11, 3, 15, 9; P5: 15, 7, 8, 13.
d. Average waiting times: FCFS = 31/5 = 6.2, SJF = 23/5 = 4.6, Priority = 55/5 = 11, RR = 36/5 = 7.2. SJF gives the minimum average waiting time.
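For part c, the RR (quantum = 2) waiting times can be computed with a short simulation sketch (our own helper; it assumes all processes arrive at time 0, as the problem states):

```python
from collections import deque

def rr_waiting_times(bursts, quantum=2):
    """Round-robin with all arrivals at t = 0; returns per-process
    waiting time (turnaround minus burst)."""
    remaining = dict(bursts)
    queue = deque(bursts)          # insertion order = arrival order
    t, finish = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        if remaining[p]:
            queue.append(p)        # quantum expired: back of the queue
        else:
            finish[p] = t
    return {p: finish[p] - bursts[p] for p in bursts}

bursts = {"P1": 2, "P2": 1, "P3": 8, "P4": 4, "P5": 5}
print(rr_waiting_times(bursts))   # {'P1': 0, 'P2': 2, 'P3': 12, 'P4': 9, 'P5': 13}
```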
6.17 The following processes are being scheduled using a preemptive, round robin scheduling
algorithm. Each process is assigned a numerical priority, with a higher number indicating a
higher relative priority. In addition to the processes listed below, the system also has an idle
task (which consumes no CPU resources and is identified as Pidle ). This task has priority 0 and
is scheduled whenever the system has no other available processes to run. The length of a time
quantum is 10 units. If a process is preempted by a higher-priority process, the preempted
process is placed at the end of the queue.
Thread Priority Burst Arrival
P1 40 20 0
P2 30 25 25
P3 30 25 30
P4 35 15 60
P5 5 10 100
P6 10 10 105
a. Show the scheduling order of the processes using a Gantt chart.
b. What is the turnaround time for each process?
c. What is the waiting time for each process?
d. What is the CPU utilization rate?

Answer:
a. Gantt chart (higher priority preempts; RR with q = 10 among equal priorities):
P1: 0-20 | idle: 20-25 | P2: 25-35 | P3: 35-45 | P2: 45-55 | P3: 55-60 | P4: 60-75 | P2: 75-80 | P3: 80-90 | idle: 90-100 | P5: 100-105 | P6: 105-115 | P5: 115-120
(P3 is preempted at t = 60 by the higher-priority P4; P5 is preempted at t = 105 by P6.)
b. Turnaround times: P1 = 20, P2 = 80 − 25 = 55, P3 = 90 − 30 = 60, P4 = 75 − 60 = 15, P5 = 120 − 100 = 20, P6 = 115 − 105 = 10.
c. Waiting times (turnaround − burst): P1 = 0, P2 = 30, P3 = 35, P4 = 0, P5 = 10, P6 = 0.
d. CPU utilization: the CPU is busy for 105 of 120 time units (idle 20-25 and 90-100), so 105/120 = 87.5%.
6.18 The nice command is used to set the nice value of a process on Linux, as well as on other
UNIX systems. Explain why some systems may allow any user to assign a process a nice value
>= 0 yet allow only the root user to assign nice values < 0.
Answer:

Nice values range from −20 to +19, and a numerically lower nice value means a higher relative priority. If any user could assign a nice value below zero, that user could raise the priority of their own processes and monopolize the CPU, potentially starving the processes of other users. Assigning a nice value >= 0 only lowers (or leaves unchanged) a process's own priority and harms no one else, so any user may do it; values below zero are therefore reserved for the root user.

6.19 Which of the following scheduling algorithms could result in starvation?


a. First-come, first-served
b. Shortest job first
c. Round robin
d. Priority
Answer:
• Under FCFS, all jobs are executed in arrival order and no job can wait indefinitely, so there is no starvation.
• Under SJF, a long job can wait indefinitely if shorter jobs keep arriving, so starvation is possible.
• Under round robin, all jobs are treated equally and each receives a quantum in turn, so there is no starvation.
• Under priority scheduling, a low-priority job can wait indefinitely if higher-priority jobs keep arriving in the ready queue, so starvation is possible.
In summary, shortest-job-first (SJF) and priority scheduling could result in starvation.
6.20 Consider a variant of the RR scheduling algorithm in which the entries in the ready queue
are pointers to the PCBs.
a. What would be the effect of putting two pointers to the same process in the ready queue?
b. What would be two major advantages and two disadvantages of this scheme?
c. How would you modify the basic RR algorithm to achieve the same effect without the
duplicate pointers?
Answer:

a) Putting two pointers to the same process in the ready queue effectively doubles that process's share of the CPU: it is selected twice per round, which amounts to raising its priority.
b) Two advantages: more important (or longer) jobs can be given more CPU time without changing the basic RR algorithm, and the mechanism is trivial to implement. Two disadvantages: lower-priority or shorter jobs wait longer, and the ready queue grows, taking more space and longer to traverse.
c) Without duplicate pointers, the same effect can be achieved by giving favored processes a larger time quantum (e.g., 2q instead of q).

6.21 Consider a system running ten I/O-bound tasks and one CPU-bound task. Assume that the
I/O-bound tasks issue an I/O operation once for every millisecond of CPU computing and that
each I/O operation takes 10 milliseconds to complete. Also assume that the context-switching
overhead is 0.1 millisecond and that all processes are long-running tasks. Describe the CPU
utilization for a round-robin scheduler when:
a. The time quantum is 1 millisecond
b. The time quantum is 10 milliseconds
Answer:
a) With a 1 ms quantum, every task (whether CPU-bound or I/O-bound) is switched every millisecond, incurring a 0.1 ms context-switch overhead each time. The CPU therefore does 1 ms of useful work for every 1.1 ms of elapsed time, giving a utilization of 1/1.1 ≈ 91%.
b) With a 10 ms quantum, each of the ten I/O-bound tasks runs for only 1 ms before blocking for I/O, while the CPU-bound task runs its full 10 ms quantum; each dispatch costs one 0.1 ms switch. One cycle through all tasks takes 10 × (1 + 0.1) + (10 + 0.1) = 21.1 ms, of which 20 ms is useful work, giving a utilization of 20/21.1 ≈ 94.8%.
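The arithmetic can be double-checked with a tiny sketch (the `utilization` helper is ours, purely illustrative):

```python
SWITCH = 0.1   # context-switch overhead in ms

def utilization(work_ms, elapsed_ms):
    """Percentage of elapsed time spent doing useful work."""
    return 100 * work_ms / elapsed_ms

# a. Quantum 1 ms: every task runs 1 ms, then pays one switch.
print(round(utilization(1, 1 + SWITCH), 1))      # 90.9

# b. Quantum 10 ms: ten I/O-bound tasks run 1 ms each, the CPU-bound
#    task runs its full 10 ms; each dispatch costs one switch.
cycle = 10 * (1 + SWITCH) + (10 + SWITCH)        # 21.1 ms per cycle
print(round(utilization(20, cycle), 1))          # 94.8
```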
6.22 Consider a system implementing multilevel queue scheduling. What strategy can a
computer user employ to maximize the amount of CPU time allocated to the user’s process?
Answer:
The multilevel feedback queue implements a special CPU-scheduling policy: if a process does not use its entire quantum, the OS assumes it is waiting on an I/O device and keeps it in a high-priority queue, so that it can get the CPU quickly the next time it becomes ready; if the process does use its entire quantum, the OS assumes it is compute-bound and moves it to a lower-priority queue. To maximize its CPU time, a user's process should therefore yield the CPU just before its quantum expires (for example, by issuing a trivial I/O request), so that it stays in the high-priority queue while still consuming nearly a full quantum on each turn.
6.23 Consider the following preemptive priority-scheduling algorithm that is based on
dynamically changing priorities. Larger priority numbers imply higher priority. When a
process is waiting for the CPU (in the ready queue, but not running), its priority changes at a
rate A; when it is running, its priority changes at a rate B. All processes are given a priority
of 0 when they enter the ready queue. The parameters A and B can be set to give many
different scheduling algorithms.
a. What is the algorithm that results from B > A > 0?
b. What is the algorithm that results from A < B < 0?
Answer:
a. All processes enter the ready queue with priority 0, and while waiting their priorities increase at the same rate A > 0. Thus the earlier a process enters the ready queue (at time t0), the higher its priority P(t) = A × (t − t0). Once the highest-priority process is running, its priority grows at the faster rate B > A > 0, so no waiting process can overtake it and it runs to completion. This is equivalent to FCFS.
b. All processes enter the ready queue with priority 0, and while waiting their priorities decrease at rate A < 0; thus the earlier a process entered the ready queue, the lower its priority. The running process's priority decreases at the slower rate B (A < B < 0), so no process already in the ready queue can preempt it. Any newly arriving process, however, enters with priority 0, the highest possible value, and therefore preempts the running process. This is a LIFO (last-in, first-out) algorithm.

6.24 Explain the differences in how much the following scheduling algorithms discriminate in
favor of short processes:
a. FCFS
b. RR
c. Multilevel feedback queues
Answer:
a. FCFS—discriminates against short jobs since any short jobs arriving after long jobs will have
a longer waiting time.
b. RR—treats all jobs equally (giving them equal bursts of CPU time) so short jobs will be able
to leave the system faster since they will finish first.
c. Multilevel feedback queues work like the RR algorithm but discriminate favorably toward short jobs: a short job can finish while still in a high-priority queue, whereas a long job is demoted to lower-priority queues with less frequent service.

6.25 Using the Windows scheduling algorithm, determine the numeric priority of each of the
following threads.
a. A thread in the REALTIME PRIORITY CLASS with a relative priority of NORMAL
b. A thread in the ABOVE NORMAL PRIORITY CLASS with a relative priority of HIGHEST
c. A thread in the BELOW NORMAL PRIORITY CLASS with a relative priority of ABOVE NORMAL
Answer:
a. 24 (NORMAL relative priority is the base of the REALTIME class)
b. 12 (ABOVE NORMAL class base 10, plus 2 for HIGHEST)
c. 7 (BELOW NORMAL class base 6, plus 1 for ABOVE NORMAL)

6.26 Assuming that no threads belong to the REALTIME PRIORITY CLASS and that none may be
assigned a TIME CRITICAL priority, what combination of priority class and priority corresponds
to the highest possible relative priority in Windows scheduling?
Answer: The HIGH priority class with HIGHEST relative priority within that class (numeric priority 15).
6.27 Consider the scheduling algorithm in the Solaris operating system for time-sharing
threads.
a. What is the time quantum (in milliseconds) for a thread with priority 15? With priority 40?
b. Assume that a thread with priority 50 has used its entire time quantum without blocking.
What new priority will the scheduler assign this thread?
c. Assume that a thread with priority 20 blocks for I/O before its time quantum has expired.
What new priority will the scheduler assign this thread?
Answer:
a. 160 milliseconds and 40 milliseconds, respectively.
b. 40 (per the dispatch table, a priority-50 thread whose time quantum expires drops to priority 40).
c. 52 (per the dispatch table, a priority-20 thread returning from sleep is boosted to priority 52).
6.28 Assume that two tasks A and B are running on a Linux system. The nice values of A and B are −5 and +5, respectively. Using the CFS scheduler as a guide, describe how the respective
values of runtime vary between the two processes given each of the following scenarios:
• Both A and B are CPU-bound.
• A is I/O-bound, and B is CPU-bound.
• A is CPU-bound, and B is I/O-bound.
Answer:
• Since A has a higher priority than B, vruntime will move more slowly for A than B. If
both A and B are CPU-bound (that is they both use the CPU for as long as it is allocated
to them), vruntime will generally be smaller for A than B, and hence A will have a greater
priority to run over B.
• In this situation, vruntime will be much smaller for A than for B, since (1) vruntime moves more slowly for A than for B because of the priority difference, and (2) A, being I/O-bound, requires less CPU time in the first place.
• This situation is less clear-cut, and it is possible that B ends up running ahead of A: since B is I/O-bound, it uses the processor much less than A, so B's value of vruntime may end up smaller than A's.
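A rough sketch of CFS's accounting makes the first scenario concrete. It uses the approximate factor of 1.25 per nice step described in the kernel's CFS documentation; the helper is ours and simplifies the kernel's integer weight table:

```python
def vruntime_delta(runtime_ms, nice):
    """Approximate CFS accounting: each nice step scales a task's weight
    by about 1.25, so virtual runtime advances as runtime * 1.25**nice."""
    return runtime_ms * 1.25 ** nice

# Both tasks receive the same 100 ms of real CPU time.
a = vruntime_delta(100, -5)   # task A, nice -5
b = vruntime_delta(100, +5)   # task B, nice +5
print(round(a, 1), round(b, 1))   # A's vruntime grows ~9x more slowly than B's
assert a < b                      # so A keeps the smaller vruntime and runs next
```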
6.29 Discuss ways in which the priority inversion problem could be addressed in a real-time
system. Also discuss whether the solutions could be implemented within the context of a
proportional share scheduler.
Answer:
The priority inversion problem can be addressed with priority inheritance: when a lower-priority process holds a resource required by a higher-priority process, it temporarily inherits the higher priority until it finishes with the resource, at which point it reverts to its original priority. This solution can be implemented within a proportional share scheduler by temporarily transferring shares: the higher-priority process lends its shares to the lower-priority process holding the resource for the duration of the inversion.
6.30 Under what circumstances is rate-monotonic scheduling inferior to earliest-deadline-first
scheduling in meeting the deadlines associated with processes?
Answer:
Suppose we have two processes, P1 and P2, with periods p1 = 50 and p2 = 75 and processing times t1 = 25 and t2 = 30. Under rate-monotonic scheduling, P1 (the shorter period) is assigned the higher priority, and the following events occur: P1 runs from t = 0 to t = 25; P2 starts at t = 25 but is preempted at t = 50 when P1's next period begins; P1 runs from t = 50 to t = 75. P2 still has 5 units of work left at its deadline (t = 75), so it misses it. Earliest-deadline-first scheduling instead performs: P1 runs 0-25; P2 runs 25-55 (at t = 50 its deadline, 75, is earlier than P1's next deadline, 100, so it is not preempted); P1 runs 55-80; and so on. This schedule meets all the deadlines, so EDF succeeds where the rate-monotonic scheduler fails. In general, rate-monotonic scheduling is inferior when a task set is feasible (total utilization at most 1) but exceeds the rate-monotonic utilization bound.

6.31 Consider two processes, P1 and P2, where p1 = 50, t1 = 25, p2 = 75, and t2 = 30.
a. Can these two processes be scheduled using rate-monotonic scheduling? Illustrate your
answer using a Gantt chart such as the ones in Figure 6.16–Figure 6.19.
b. Illustrate the scheduling of these two processes using earliest deadline- first (EDF) scheduling.
Answer:
a. Under rate-monotonic scheduling, P1 (the shorter period) is assigned the higher priority:
P1: 0-25 | P2: 25-50 | P1: 50-75 | ...
P2 is preempted at t = 50 with 5 units of work remaining and is not scheduled again before its deadline at t = 75, so these two processes cannot be scheduled using rate-monotonic scheduling. (Total utilization is 25/50 + 30/75 = 0.9, above the two-process rate-monotonic bound of about 0.83.)

b. Under earliest-deadline-first (EDF) scheduling:
P1: 0-25 | P2: 25-55 | P1: 55-80 | P2: 80-110 | P1: 110-135 | ...
Every instance finishes by its deadline, so the processes can be scheduled with EDF.
6.32 Explain why interrupt and dispatch latency times must be bounded in a hard real-time system.
Answer:
In a hard real-time system, processes must be guaranteed to meet their deadlines, so every source of scheduling delay must be bounded and predictable; the standard time-sharing scheduling class is not suitable for such applications. When an interrupt occurs, the OS must save the state of the current process and then determine which type of interrupt was generated before servicing it; the time this takes is the interrupt latency. Dispatch latency is the cost of stopping one process and starting another. Providing real-time tasks with immediate access to the CPU requires that the operating system bound both latencies, including any period during which interrupts are disabled; otherwise no guarantee about meeting deadlines can be made.
