
Commit 12b0487

vingu-linaro authored and Ingo Molnar committed
sched/pelt: Fix update_blocked_averages() for RT and DL classes
update_blocked_averages() is called to periodically decay the stalled load of idle CPUs and to sync all loads before running load balance.

When a cfs rq is idle, it triggers a load balance during pick_next_task_fair() in order to potentially pull tasks and make use of this newly idle CPU. This load balance happens while the prev task from another class has not been put yet and its utilization not yet updated. This may lead to wrongly accounting running time as idle time for the RT or DL classes.

Test that no RT or DL task is running when updating their utilization in update_blocked_averages(). We still update the RT and DL utilization instead of simply skipping them, to make sure that all metrics are synced when used during load balance.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 371bf42 ("sched/rt: Add rt_rq utilization tracking")
Fixes: 3727e0e ("sched/dl: Add dl_rq utilization tracking")
Link: http://lkml.kernel.org/r/1535728975-22799-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
1 parent e5e96fa commit 12b0487

File tree

1 file changed: +10 −4 lines changed


kernel/sched/fair.c

Lines changed: 10 additions & 4 deletions
@@ -7263,6 +7263,7 @@ static void update_blocked_averages(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	struct cfs_rq *cfs_rq, *pos;
+	const struct sched_class *curr_class;
 	struct rq_flags rf;
 	bool done = true;
 
@@ -7299,8 +7300,10 @@ static void update_blocked_averages(int cpu)
 		if (cfs_rq_has_blocked(cfs_rq))
 			done = false;
 	}
-	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
-	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
+
+	curr_class = rq->curr->sched_class;
+	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
 	update_irq_load_avg(rq, 0);
 	/* Don't need periodic decay once load/util_avg are null */
 	if (others_have_blocked(rq))
@@ -7365,13 +7368,16 @@ static inline void update_blocked_averages(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	struct cfs_rq *cfs_rq = &rq->cfs;
+	const struct sched_class *curr_class;
 	struct rq_flags rf;
 
 	rq_lock_irqsave(rq, &rf);
 	update_rq_clock(rq);
 	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
-	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
-	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
+
+	curr_class = rq->curr->sched_class;
+	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
 	update_irq_load_avg(rq, 0);
 #ifdef CONFIG_NO_HZ_COMMON
 	rq->last_blocked_load_update_tick = jiffies;
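For intuition: the third argument to update_rt_rq_load_avg()/update_dl_rq_load_avg() is a "running" flag, so passing 0 while an RT or DL task is actually on the CPU decays that class's utilization as if the CPU were idle for it. The sketch below is a toy geometric-decay model in the spirit of PELT, not kernel code; toy_avg, toy_update and TOY_DECAY are hypothetical names, and only the running-vs-decaying distinction is meant to carry over.

#include <stdio.h>

/*
 * Toy geometric-series average, loosely modeled on PELT's util_avg.
 * All names here are hypothetical illustrations, not kernel APIs.
 */
struct toy_avg {
	double util;			/* utilization estimate, 0.0 .. 1.0 */
};

#define TOY_DECAY 0.97857206		/* ~ PELT's y, where y^32 == 0.5 */

/* Advance the average by @periods, accruing if @running, decaying otherwise. */
static void toy_update(struct toy_avg *sa, int periods, int running)
{
	for (int i = 0; i < periods; i++) {
		sa->util *= TOY_DECAY;			/* old contribution decays */
		if (running)
			sa->util += 1.0 - TOY_DECAY;	/* this period counts as busy */
	}
}

int main(void)
{
	struct toy_avg rt = { .util = 0.5 };

	/*
	 * Bug scenario: an RT task is actually running, but the update is
	 * issued with running == 0, so its time is accounted as idle and
	 * the utilization wrongly decays (0.5 -> ~0.25 after 32 periods).
	 */
	toy_update(&rt, 32, 0);
	printf("running=0 for 32 periods: util = %.3f\n", rt.util);

	/*
	 * With running == 1, as the fix passes when rq->curr belongs to
	 * the RT class, the signal keeps rising (0.5 -> ~0.75 instead).
	 */
	rt.util = 0.5;
	toy_update(&rt, 32, 1);
	printf("running=1 for 32 periods: util = %.3f\n", rt.util);

	return 0;
}

This also illustrates why the fix passes curr_class == &rt_sched_class (or &dl_sched_class) rather than skipping the update entirely: classes that are idle still need their signals decayed so that all metrics are in sync when used during load balance.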
