
Commit 7675104

Authored by Peter Zijlstra, committed by Ingo Molnar
sched: Implement lockless wake-queues
This is useful for locking primitives that can effect multiple wakeups per operation and want to avoid internal lock contention by delaying the wakeups until we've released the lock-internal locks.

Alternatively, it can be used to avoid issuing multiple wakeups, and thus save a few cycles, in packet processing: queue all target tasks and wake them up once you've processed all packets. That way you avoid waking the target task multiple times if there were multiple packets for the same task.

Properties of a wake_q are:

- Lockless, as the queue head must reside on the stack.
- Being a queue, it maintains the wakeup order passed by the callers. This can be important in scenarios where highly contended locks could otherwise defeat any reliance on lock fairness.
- A queued task cannot be added again until it is woken up.

This patch adds the needed infrastructure into the scheduler code and uses the new wake_q to delay the futex wakeups until after we've released the hash bucket locks.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[tweaks, adjustments, comments, etc.]
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Chris Mason <clm@fb.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: George Spelvin <linux@horizon.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/1430494072-30283-2-git-send-email-dave@stgolabs.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
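To illustrate the intended calling pattern, here is a minimal, hypothetical sketch of how a locking primitive might use the API added by this commit; the lock, waiter list, and field names are invented for illustration and are not part of the patch:

	struct waiter *w;
	WAKE_Q(wake_q);				/* on-stack queue head */

	spin_lock(&primitive->wait_lock);
	list_for_each_entry(w, &primitive->wait_list, list)
		wake_q_add(&wake_q, w->task);	/* queue only; no wakeup yet */
	spin_unlock(&primitive->wait_lock);

	/* All wakeups are issued after the internal lock is released. */
	wake_up_q(&wake_q);

Because wake_q_add() takes a reference on each queued task, the tasks remain valid even if they exit between the unlock and the call to wake_up_q().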
Parent: 7110744 · Commit: 7675104

2 files changed: +92 −0 lines

include/linux/sched.h

Lines changed: 46 additions & 0 deletions
@@ -920,6 +920,50 @@ enum cpu_idle_type {
 #define SCHED_CAPACITY_SHIFT	10
 #define SCHED_CAPACITY_SCALE	(1L << SCHED_CAPACITY_SHIFT)
 
+/*
+ * Wake-queues are lists of tasks with a pending wakeup, whose
+ * callers have already marked the task as woken internally,
+ * and can thus carry on. A common use case is being able to
+ * do the wakeups once the corresponding user lock has been
+ * released.
+ *
+ * We hold a reference to each task in the list across the wakeup,
+ * thus guaranteeing that the memory is still valid by the time
+ * the actual wakeups are performed in wake_up_q().
+ *
+ * One per task suffices, because there's never a need for a task to be
+ * in two wake queues simultaneously; it is forbidden to abandon a task
+ * in a wake queue (a call to wake_up_q() _must_ follow), so if a task is
+ * already in a wake queue, the wakeup will happen soon and the second
+ * waker can just skip it.
+ *
+ * The WAKE_Q macro declares and initializes the list head.
+ * wake_up_q() does NOT reinitialize the list; it's expected to be
+ * called near the end of a function, where the fact that the queue is
+ * not used again will be easy to see by inspection.
+ *
+ * Note that this can cause spurious wakeups. schedule() callers
+ * must ensure the call is done inside a loop, confirming that the
+ * wakeup condition has in fact occurred.
+ */
+struct wake_q_node {
+	struct wake_q_node *next;
+};
+
+struct wake_q_head {
+	struct wake_q_node *first;
+	struct wake_q_node **lastp;
+};
+
+#define WAKE_Q_TAIL ((struct wake_q_node *) 0x01)
+
+#define WAKE_Q(name)					\
+	struct wake_q_head name = { WAKE_Q_TAIL, &name.first }
+
+extern void wake_q_add(struct wake_q_head *head,
+		       struct task_struct *task);
+extern void wake_up_q(struct wake_q_head *head);
+
 /*
  * sched-domains (multiprocessor balancing) declarations:
  */

@@ -1532,6 +1576,8 @@ struct task_struct {
 	/* Protection of the PI data structures: */
 	raw_spinlock_t pi_lock;
 
+	struct wake_q_node wake_q;
+
 #ifdef CONFIG_RT_MUTEXES
 	/* PI waiters blocked on a rt_mutex held by this task */
 	struct rb_root pi_waiters;
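The first/lastp pair in wake_q_head is the classic pointer-to-pointer tail idiom: lastp always points at the slot the next node should be written into, so appends are O(1) and insertion order is preserved, with WAKE_Q_TAIL as the terminator. A standalone userspace sketch of the same idiom (illustrative names, not kernel code):

	#include <stdio.h>

	struct node { struct node *next; int id; };

	#define TAIL ((struct node *) 0x01)	/* sentinel, like WAKE_Q_TAIL */

	struct head { struct node *first; struct node **lastp; };

	static void append(struct head *h, struct node *n)
	{
		n->next = TAIL;		/* new node terminates the list */
		*h->lastp = n;		/* link into the slot the head tracks */
		h->lastp = &n->next;	/* next append fills this node's slot */
	}

	int main(void)
	{
		struct head h = { TAIL, &h.first };	/* mirrors WAKE_Q(name) */
		struct node a = { .id = 1 }, b = { .id = 2 };

		append(&h, &a);
		append(&h, &b);

		for (struct node *n = h.first; n != TAIL; n = n->next)
			printf("%d\n", n->id);	/* prints 1 then 2 */
		return 0;
	}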

kernel/sched/core.c

Lines changed: 46 additions & 0 deletions
@@ -541,6 +541,52 @@ static bool set_nr_if_polling(struct task_struct *p)
 #endif
 #endif
 
+void wake_q_add(struct wake_q_head *head, struct task_struct *task)
+{
+	struct wake_q_node *node = &task->wake_q;
+
+	/*
+	 * Atomically grab the task. If ->wake_q is non-NULL already, it
+	 * means it's already queued (either by us or someone else) and
+	 * will get the wakeup due to that.
+	 *
+	 * This cmpxchg() implies a full barrier, which pairs with the write
+	 * barrier implied by the wakeup in wake_up_q().
+	 */
+	if (cmpxchg(&node->next, NULL, WAKE_Q_TAIL))
+		return;
+
+	get_task_struct(task);
+
+	/*
+	 * The head is context local, there can be no concurrency.
+	 */
+	*head->lastp = node;
+	head->lastp = &node->next;
+}
+
+void wake_up_q(struct wake_q_head *head)
+{
+	struct wake_q_node *node = head->first;
+
+	while (node != WAKE_Q_TAIL) {
+		struct task_struct *task;
+
+		task = container_of(node, struct task_struct, wake_q);
+		BUG_ON(!task);
+		/* task can safely be re-inserted now */
+		node = node->next;
+		task->wake_q.next = NULL;
+
+		/*
+		 * wake_up_process() implies a wmb() to pair with the queueing
+		 * in wake_q_add() so as not to miss wakeups.
+		 */
+		wake_up_process(task);
+		put_task_struct(task);
+	}
+}
+
 /*
  * resched_curr - mark rq's current task 'to be rescheduled now'.
  *
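The cmpxchg() on ->next is what makes wake_q_add() idempotent under concurrency: only the caller that swings the pointer from NULL to WAKE_Q_TAIL gets to link the task, and any later caller sees a non-NULL value and can rely on the already-pending wakeup. A standalone C11 sketch of just that claim step (illustrative names, not kernel code):

	#include <assert.h>
	#include <stdatomic.h>
	#include <stdbool.h>

	struct qnode { _Atomic(struct qnode *) next; };

	#define Q_TAIL ((struct qnode *) 0x01)	/* sentinel, like WAKE_Q_TAIL */

	/* Returns true iff this caller won the race and may link the node. */
	static bool try_claim(struct qnode *node)
	{
		struct qnode *expected = NULL;

		/* Like cmpxchg(&node->next, NULL, WAKE_Q_TAIL): only the
		 * first caller succeeds; later callers find ->next already
		 * non-NULL and simply skip the task. */
		return atomic_compare_exchange_strong(&node->next, &expected, Q_TAIL);
	}

	int main(void)
	{
		struct qnode n = { NULL };

		assert(try_claim(&n));	/* first waker claims the task */
		assert(!try_claim(&n));	/* second waker skips: already queued */
		return 0;
	}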
