
Commit 152db03

joelagnel authored and rafaeljw committed
schedutil: Allow cpufreq requests to be made even when kthread kicked
Currently a schedutil cpufreq update request can be dropped if there is already a pending request. The pending request can in turn be delayed by scheduling delays of the irq_work and of the wakeup of the schedutil governor kthread.

A particularly bad scenario: a schedutil request has just been made, for example to reduce the CPU frequency, and a newer request to increase the frequency (even an urgent sched-deadline frequency increase request) is then dropped, even though the rate limits would allow it to be processed. This happens because of the way the work_in_progress flag is used.

This patch improves the situation by allowing new requests to be made even while the old one is still being processed. Note that with this approach, if an irq_work has already been issued, we just update next_freq and do not queue another request, so no extra work is done to make this happen.

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
1 parent 0363997 commit 152db03
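The pattern that results from this change, as visible in the diff below, is: the fast path always records the newest target frequency under update_lock and kicks the kthread only when it is idle, while the kthread snapshots the target and clears work_in_progress under the same lock before doing the slow hardware update. Below is a minimal user-space sketch of that idea, assuming pthreads: the mutex and condition variable stand in for the kernel's raw spinlock, irq_work and kthread, and names such as request_freq() and apply_freq() are invented for the example, not kernel APIs.

/*
 * Minimal user-space sketch (not kernel code) of the "latest value wins,
 * kick the worker only when idle" pattern from this commit.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t update_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t work_cv = PTHREAD_COND_INITIALIZER;
static unsigned int next_freq;
static bool work_in_progress;
static bool stop_worker;

/* Stands in for the slow __cpufreq_driver_target() call. */
static void apply_freq(unsigned int freq)
{
	printf("applying %u kHz\n", freq);
	usleep(10000);		/* pretend the hardware update is slow */
}

/* Fast path: always record the newest target, kick the worker only if idle. */
static void request_freq(unsigned int freq)
{
	pthread_mutex_lock(&update_lock);
	next_freq = freq;		/* a newer request overwrites the old one */
	if (!work_in_progress) {
		work_in_progress = true;
		pthread_cond_signal(&work_cv);
	}
	pthread_mutex_unlock(&update_lock);
}

/*
 * Worker: snapshot the target and clear the busy flag under the lock,
 * then do the slow update outside it.
 */
static void *worker(void *arg)
{
	(void)arg;
	for (;;) {
		unsigned int freq;

		pthread_mutex_lock(&update_lock);
		while (!work_in_progress && !stop_worker)
			pthread_cond_wait(&work_cv, &update_lock);
		if (!work_in_progress && stop_worker) {
			pthread_mutex_unlock(&update_lock);
			break;
		}
		freq = next_freq;		/* latest request wins */
		work_in_progress = false;	/* later requests re-kick us */
		pthread_mutex_unlock(&update_lock);

		apply_freq(freq);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, worker, NULL);

	request_freq(800000);
	request_freq(1200000);	/* may overwrite 800000 before it is applied */
	request_freq(2000000);

	usleep(100000);		/* let the worker drain pending requests */

	pthread_mutex_lock(&update_lock);
	stop_worker = true;
	pthread_cond_signal(&work_cv);
	pthread_mutex_unlock(&update_lock);

	pthread_join(tid, NULL);
	return 0;
}

The ordering mirrors sugov_work() after this patch: because the worker reads the target and clears the busy flag inside the same critical section that requesters use to update it, a request that lands just before the flag is cleared is either picked up by the current pass or re-kicks the worker.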

File tree: 1 file changed, +26 -8 lines


kernel/sched/cpufreq_schedutil.c

Lines changed: 26 additions & 8 deletions
@@ -92,9 +92,6 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 	    !cpufreq_this_cpu_can_update(sg_policy->policy))
 		return false;
 
-	if (sg_policy->work_in_progress)
-		return false;
-
 	if (unlikely(sg_policy->need_freq_update))
 		return true;
 
@@ -121,7 +118,7 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
 
 		policy->cur = next_freq;
 		trace_cpu_frequency(next_freq, smp_processor_id());
-	} else {
+	} else if (!sg_policy->work_in_progress) {
 		sg_policy->work_in_progress = true;
 		irq_work_queue(&sg_policy->irq_work);
 	}
@@ -366,6 +363,13 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 
 	ignore_dl_rate_limit(sg_cpu, sg_policy);
 
+	/*
+	 * For slow-switch systems, single policy requests can't run at the
+	 * moment if update is in progress, unless we acquire update_lock.
+	 */
+	if (sg_policy->work_in_progress)
+		return;
+
 	if (!sugov_should_update_freq(sg_policy, time))
 		return;
 
@@ -440,13 +444,27 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
 static void sugov_work(struct kthread_work *work)
 {
 	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
+	unsigned int freq;
+	unsigned long flags;
+
+	/*
+	 * Hold sg_policy->update_lock shortly to handle the case where:
+	 * incase sg_policy->next_freq is read here, and then updated by
+	 * sugov_update_shared just before work_in_progress is set to false
+	 * here, we may miss queueing the new update.
+	 *
+	 * Note: If a work was queued after the update_lock is released,
+	 * sugov_work will just be called again by kthread_work code; and the
+	 * request will be proceed before the sugov thread sleeps.
+	 */
+	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
+	freq = sg_policy->next_freq;
+	sg_policy->work_in_progress = false;
+	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
 
 	mutex_lock(&sg_policy->work_lock);
-	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
-				CPUFREQ_RELATION_L);
+	__cpufreq_driver_target(sg_policy->policy, freq, CPUFREQ_RELATION_L);
 	mutex_unlock(&sg_policy->work_lock);
-
-	sg_policy->work_in_progress = false;
 }
 
 static void sugov_irq_work(struct irq_work *irq_work)

0 commit comments