
Commit 2fd5907

Authored by paulmck, committed by Ingo Molnar
perf: Disable IRQs across RCU RS CS that acquires scheduler lock
The perf_lock_task_context() function disables preemption across its
RCU read-side critical section because that critical section acquires
a scheduler lock. If there was a preemption during that RCU read-side
critical section, the rcu_read_unlock() could attempt to acquire
scheduler locks, resulting in deadlock.

However, recent optimizations to expedited grace periods mean that IPI
handlers that execute during preemptible RCU read-side critical
sections can now cause the subsequent rcu_read_unlock() to acquire
scheduler locks. Disabling preemption does nothing to prevent these
IPI handlers from executing, so these optimizations introduced a
deadlock. In theory, this deadlock could be avoided by pulling all
wakeups and printk()s out from rnp->lock critical sections, but in
practice this would re-introduce some RCU CPU stall warning bugs.

Given that acquiring scheduler locks entails disabling interrupts,
these deadlocks can be avoided by disabling interrupts (instead of
disabling preemption) across any RCU read-side critical section that
acquires scheduler locks and holds them across the rcu_read_unlock().
This commit therefore makes this change for perf_lock_task_context().

Reported-by: Dave Jones <davej@codemonkey.org.uk>
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Stephane Eranian <eranian@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20151104134838.GR29027@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
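As an illustration of the locking pattern the commit message describes (a
sketch, not the patched function itself), the snippet below takes a raw
spinlock that nests under the scheduler's rq->lock inside an RCU read-side
critical section and holds it across rcu_read_unlock(); example_lock and
example_pattern() are hypothetical names, while the RCU, IRQ, and spinlock
primitives are the real kernel APIs:

#include <linux/irqflags.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

/* Hypothetical lock whose lock class nests under the scheduler's rq->lock. */
static DEFINE_RAW_SPINLOCK(example_lock);

static void example_pattern(void)
{
	unsigned long flags;

	/*
	 * preempt_disable() would keep this critical section from being
	 * preempted, but it cannot stop an expedited-grace-period IPI
	 * from marking it, after which rcu_read_unlock() may need to
	 * acquire scheduler locks -- a deadlock while example_lock is
	 * held. Disabling interrupts keeps that IPI handler out until
	 * rcu_read_unlock() has completed.
	 */
	local_irq_save(flags);
	rcu_read_lock();
	raw_spin_lock(&example_lock);
	/* ... inspect RCU-protected state under the lock ... */
	rcu_read_unlock();	/* example_lock still held: IRQs must stay off */
	raw_spin_unlock(&example_lock);
	local_irq_restore(flags);
}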
1 parent bad9bc2 commit 2fd5907

File tree: 1 file changed (+9 -8 lines)


kernel/events/core.c

@@ -1050,13 +1050,13 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
 	/*
 	 * One of the few rules of preemptible RCU is that one cannot do
 	 * rcu_read_unlock() while holding a scheduler (or nested) lock when
-	 * part of the read side critical section was preemptible -- see
+	 * part of the read side critical section was irqs-enabled -- see
 	 * rcu_read_unlock_special().
 	 *
 	 * Since ctx->lock nests under rq->lock we must ensure the entire read
-	 * side critical section is non-preemptible.
+	 * side critical section has interrupts disabled.
 	 */
-	preempt_disable();
+	local_irq_save(*flags);
 	rcu_read_lock();
 	ctx = rcu_dereference(task->perf_event_ctxp[ctxn]);
 	if (ctx) {
@@ -1070,21 +1070,22 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
 		 * if so. If we locked the right context, then it
 		 * can't get swapped on us any more.
 		 */
-		raw_spin_lock_irqsave(&ctx->lock, *flags);
+		raw_spin_lock(&ctx->lock);
 		if (ctx != rcu_dereference(task->perf_event_ctxp[ctxn])) {
-			raw_spin_unlock_irqrestore(&ctx->lock, *flags);
+			raw_spin_unlock(&ctx->lock);
 			rcu_read_unlock();
-			preempt_enable();
+			local_irq_restore(*flags);
 			goto retry;
 		}
 
 		if (!atomic_inc_not_zero(&ctx->refcount)) {
-			raw_spin_unlock_irqrestore(&ctx->lock, *flags);
+			raw_spin_unlock(&ctx->lock);
 			ctx = NULL;
 		}
 	}
 	rcu_read_unlock();
-	preempt_enable();
+	if (!ctx)
+		local_irq_restore(*flags);
 	return ctx;
 }
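One note on the asymmetric cleanup above: on success the function returns
with ctx->lock held and interrupts still disabled (the saved state lives in
*flags), so only the failure paths restore IRQs locally and the caller drops
both together. Below is a minimal sketch of that caller side, assuming it
lives in the same file as the static perf_lock_task_context(); consume_ctx()
is a hypothetical stand-in for perf's actual caller, find_get_context():

#include <linux/perf_event.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

/* Hypothetical caller mirroring how find_get_context() consumes a
 * context pinned by perf_lock_task_context(). */
static void consume_ctx(struct task_struct *task, int ctxn)
{
	struct perf_event_context *ctx;
	unsigned long flags;

	ctx = perf_lock_task_context(task, ctxn, &flags);
	if (ctx) {
		/* ctx->lock is held and IRQs are off: the context
		 * cannot be swapped out from under us here, and its
		 * refcount is elevated (later dropped via put_ctx()). */
		raw_spin_unlock_irqrestore(&ctx->lock, flags);
	}
	/* On failure, perf_lock_task_context() restored IRQs itself. */
}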