
Commit 75df713

Suresh Jayaraman authored and axboe committed
block: document blk-plug
Thus spake Andrew Morton:

"And I have the usual maintainability whine. If someone comes up to vmscan.c
and sees it calling blk_start_plug(), how are they supposed to work out why
that call is there? They go look at the blk_start_plug() definition and it is
undocumented. I think we can do better than this?"

Adapted from the LWN article - http://lwn.net/Articles/438256/ by Jens Axboe
and from an earlier attempt by Shaohua Li to document blk-plug.

[akpm@linux-foundation.org: grammatical and spelling tweaks]
Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de>
Cc: Shaohua Li <shaohua.li@intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@google.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
1 parent 27a84d5 · commit 75df713

2 files changed: +29 −9 lines changed

block/blk-core.c

Lines changed: 14 additions & 0 deletions
@@ -2595,6 +2595,20 @@ EXPORT_SYMBOL(kblockd_schedule_delayed_work);
 
 #define PLUG_MAGIC 0x91827364
 
+/**
+ * blk_start_plug - initialize blk_plug and track it inside the task_struct
+ * @plug: The &struct blk_plug that needs to be initialized
+ *
+ * Description:
+ *   Tracking blk_plug inside the task_struct will help with auto-flushing the
+ *   pending I/O should the task end up blocking between blk_start_plug() and
+ *   blk_finish_plug(). This is important from a performance perspective, but
+ *   also ensures that we don't deadlock. For instance, if the task is blocking
+ *   for a memory allocation, memory reclaim could end up wanting to free a
+ *   page belonging to that request that is currently residing in our private
+ *   plug. By flushing the pending I/O when the process goes to sleep, we avoid
+ *   this kind of deadlock.
+ */
 void blk_start_plug(struct blk_plug *plug)
 {
 	struct task_struct *tsk = current;
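
To make the documented contract concrete, here is a minimal usage sketch in
the style of callers such as vmscan.c. The helper name and the bio batch are
illustrative assumptions, not part of the commit; submit_bio() is shown with
its two-argument signature from this kernel era.

#include <linux/blkdev.h>

/* Hypothetical helper: batch-submit 'nr' prepared bios under one plug. */
static void submit_batch(struct bio **bios, int nr)
{
	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);		/* current->plug now tracks 'plug' */
	for (i = 0; i < nr; i++)
		submit_bio(WRITE, bios[i]);	/* held on the per-task plug list */
	blk_finish_plug(&plug);		/* move the batch to the request_queue */
}

Because the requests sit on a private per-task list until the flush, adjacent
bios can be merged and the request_queue lock is taken once for the batch
rather than once per request.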

include/linux/blkdev.h

Lines changed: 15 additions & 9 deletions
@@ -860,17 +860,23 @@ struct request_queue *blk_alloc_queue_node(gfp_t, int);
 extern void blk_put_queue(struct request_queue *);
 
 /*
- * Note: Code in between changing the blk_plug list/cb_list or element of such
- * lists is preemptable, but such code can't do sleep (or be very careful),
- * otherwise data is corrupted. For details, please check schedule() where
- * blk_schedule_flush_plug() is called.
+ * blk_plug permits building a queue of related requests by holding the I/O
+ * fragments for a short period. This allows merging of sequential requests
+ * into single larger request. As the requests are moved from a per-task list to
+ * the device's request_queue in a batch, this results in improved scalability
+ * as the lock contention for request_queue lock is reduced.
+ *
+ * It is ok not to disable preemption when adding the request to the plug list
+ * or when attempting a merge, because blk_schedule_flush_list() will only flush
+ * the plug list when the task sleeps by itself. For details, please see
+ * schedule() where blk_schedule_flush_plug() is called.
 */
 struct blk_plug {
-	unsigned long magic;
-	struct list_head list;
-	struct list_head cb_list;
-	unsigned int should_sort;
-	unsigned int count;
+	unsigned long magic; /* detect uninitialized use-cases */
+	struct list_head list; /* requests */
+	struct list_head cb_list; /* md requires an unplug callback */
+	unsigned int should_sort; /* list to be sorted before flushing? */
+	unsigned int count; /* number of queued requests */
 };
 #define BLK_MAX_REQUEST_COUNT 16
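
The auto-flush behaviour that makes this preemption rule safe can be shown
with a short sketch. The function and the allocation are assumptions for
illustration only; blk_start_plug(), submit_bio(), and blk_finish_plug() are
the real interfaces this commit documents.

#include <linux/blkdev.h>
#include <linux/slab.h>

/* Hypothetical example: block with an open plug, relying on auto-flush. */
static void plugged_write_then_sleep(struct bio *bio)
{
	struct blk_plug plug;
	void *buf;

	blk_start_plug(&plug);
	submit_bio(WRITE, bio);		/* queued on current->plug->list */

	/*
	 * May sleep. If it does, schedule() sees current->plug and calls
	 * blk_schedule_flush_plug(), so the bio above still reaches the
	 * device. This is the deadlock avoidance described in the new
	 * blk_start_plug() kerneldoc: reclaim triggered by this allocation
	 * may itself be waiting on that pending I/O.
	 */
	buf = kmalloc(4096, GFP_KERNEL);

	blk_finish_plug(&plug);
	kfree(buf);
}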
