Commit 53bf57f

Peter Zijlstra authored and Ingo Molnar committed
locking/qspinlock: Re-order code
Flip the branch condition after atomic_fetch_or_acquire(_Q_PENDING_VAL) such that we lose the indent. This also results in a more natural code flow, IMO.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: andrea.parri@amarulasolutions.com
Cc: longman@redhat.com
Link: https://lkml.kernel.org/r/20181003130257.156322446@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
1 parent ec57e2f commit 53bf57f
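The reshaping is easier to see in schematic form. Before the patch, the whole pending fast path sat nested inside if (!(val & ~_Q_LOCKED_MASK)) { ... }, with the undo-and-queue path below it; afterwards, the contended case bails out early and the fast path runs at the top indentation level. Condensed from the diff below (a sketch of the control flow, not the complete function):

	/* Before: fast path nested one level deep, undo path at the bottom. */
	if (!(val & ~_Q_LOCKED_MASK)) {
		/* ... wait for the owner, then take the lock ... */
		return;
	}
	if (!(val & _Q_PENDING_MASK))
		clear_pending(lock);
	/* ... fall through to the queueing slow path ... */

	/* After: the contended case exits early; the fast path loses the indent. */
	if (unlikely(val & ~_Q_LOCKED_MASK)) {
		if (!(val & _Q_PENDING_MASK))
			clear_pending(lock);
		goto queue;
	}
	/* ... wait for the owner, then take the lock ... */
	return;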

File tree

1 file changed: +27 -29 lines changed

kernel/locking/qspinlock.c

Lines changed: 27 additions & 29 deletions

@@ -330,39 +330,37 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * 0,0,1 -> 0,1,1 ; pending
 	 */
 	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
-	if (!(val & ~_Q_LOCKED_MASK)) {
-		/*
-		 * We're pending, wait for the owner to go away.
-		 *
-		 * *,1,1 -> *,1,0
-		 *
-		 * this wait loop must be a load-acquire such that we match the
-		 * store-release that clears the locked bit and create lock
-		 * sequentiality; this is because not all
-		 * clear_pending_set_locked() implementations imply full
-		 * barriers.
-		 */
-		if (val & _Q_LOCKED_MASK) {
-			atomic_cond_read_acquire(&lock->val,
-						 !(VAL & _Q_LOCKED_MASK));
-		}
-
-		/*
-		 * take ownership and clear the pending bit.
-		 *
-		 * *,1,0 -> *,0,1
-		 */
-		clear_pending_set_locked(lock);
-		qstat_inc(qstat_lock_pending, true);
-		return;
+	/*
+	 * If we observe any contention; undo and queue.
+	 */
+	if (unlikely(val & ~_Q_LOCKED_MASK)) {
+		if (!(val & _Q_PENDING_MASK))
+			clear_pending(lock);
+		goto queue;
 	}
 
 	/*
-	 * If pending was clear but there are waiters in the queue, then
-	 * we need to undo our setting of pending before we queue ourselves.
+	 * We're pending, wait for the owner to go away.
+	 *
+	 * 0,1,1 -> 0,1,0
+	 *
+	 * this wait loop must be a load-acquire such that we match the
+	 * store-release that clears the locked bit and create lock
+	 * sequentiality; this is because not all
+	 * clear_pending_set_locked() implementations imply full
+	 * barriers.
 	 */
-	if (!(val & _Q_PENDING_MASK))
-		clear_pending(lock);
+	if (val & _Q_LOCKED_MASK)
+		atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK));
+
+	/*
+	 * take ownership and clear the pending bit.
+	 *
+	 * 0,1,0 -> 0,0,1
+	 */
+	clear_pending_set_locked(lock);
+	qstat_inc(qstat_lock_pending, true);
+	return;
 
 	/*
 	 * End of pending bit optimistic spinning and beginning of MCS
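For readers decoding triples such as "0,0,1 -> 0,1,1" in the comments above: they name the (tail, pending, locked) fields of the 32-bit lock word. A simplified sketch of that layout follows, assuming the common NR_CPUS < 16K configuration; the authoritative definitions live in include/asm-generic/qspinlock_types.h, where the width of the pending field depends on NR_CPUS, so treat the constants below as illustrative:

	/*
	 *  31            16 15        8 7         0
	 * +----------------+-----------+-----------+
	 * |      tail      |  pending  |  locked   |
	 * +----------------+-----------+-----------+
	 */
	#define _Q_LOCKED_MASK	0x000000ffU	/* set while an owner holds the lock */
	#define _Q_PENDING_VAL	(1U << 8)	/* claimed by the one spinning waiter */

	/* "0,0,1 -> 0,1,1" then reads: tail stays 0, pending goes 0->1, locked stays 1. */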
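Likewise, the bare VAL in the wait condition is not a stray identifier: atomic_cond_read_acquire() is a macro that rebinds VAL to each freshly loaded value of lock->val, and its acquire semantics are what pair with the store-release that clears the locked bit. Roughly what the generic fallback boils down to; this is a sketch only, since the real macro in include/linux/atomic.h is built on smp_cond_load_acquire() and architectures may override it with dedicated wait instructions:

	/* Spin until cond_expr holds; every load carries acquire ordering. */
	#define atomic_cond_read_acquire(v, cond_expr)			\
	({								\
		int VAL;						\
		for (;;) {						\
			VAL = smp_load_acquire(&(v)->counter);		\
			if (cond_expr)					\
				break;					\
			cpu_relax();					\
		}							\
		VAL;						\
	})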
