Commit b1092c9

Michael Lyle (mlyle) authored and Jens Axboe (axboe) committed
bcache: allow quick writeback when backing idle
If the control system would wait for at least half a second, and there have
been no requests hitting the backing disk for a while, use an alternate mode
where we have at most one contiguous set of writebacks in flight at a time
(but don't otherwise delay). If front-end IO appears, it will still be quick,
as it will only have to contend with one real operation in flight. But
otherwise, we'll be sending data to the backing disk as quickly as it can
accept it (with one op at a time).

Signed-off-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Tang Junhui <tang.junhui@zte.com.cn>
Acked-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
1 parent: 6e6ccc6
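
Before the per-file hunks, here is a minimal user-space sketch of the
mechanism this patch adds. It is a model only: the names maybe_accelerate(),
frontend_io_seen(), and IDLE_PASSES are hypothetical, HZ is hard-coded, and
the real code lives in struct cached_dev and the read_dirty() loop shown
below.

	#include <stdatomic.h>

	#define HZ 1000			/* ticks per second, as in a common kernel config */
	#define IDLE_PASSES 3		/* hypothetical name for the patch's threshold of 3 */

	static atomic_int backing_idle;

	/* Any front-end request touching the backing device resets the counter. */
	static void frontend_io_seen(void)
	{
		atomic_store(&backing_idle, 0);
	}

	/*
	 * Called once per writeback pass with the delay computed by the
	 * control system; returns the delay to actually sleep.  Mirrors the
	 * logic added to read_dirty() below.
	 */
	static unsigned int maybe_accelerate(unsigned int delay)
	{
		if (delay < HZ / 2)
			return delay;	/* control system is already writing back quickly */

		/*
		 * Three consecutive long-delay passes with no front-end IO in
		 * between mean at least 3 * 0.5s = 1.5s of idleness, up to
		 * 7.5s if the delays had grown to 2.5s each.
		 */
		if (atomic_fetch_add(&backing_idle, 1) + 1 >= IDLE_PASSES)
			return 0;	/* backing disk looks idle: launch the next set now */

		return delay;
	}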

File tree

3 files changed: +29 −0 lines changed

drivers/md/bcache/bcache.h

Lines changed: 7 additions & 0 deletions

@@ -320,6 +320,13 @@ struct cached_dev {
 	 */
 	atomic_t		has_dirty;
 
+	/*
+	 * Set to zero by things that touch the backing volume-- except
+	 * writeback.  Incremented by writeback.  Used to determine when to
+	 * accelerate idle writeback.
+	 */
+	atomic_t		backing_idle;
+
 	struct bch_ratelimit	writeback_rate;
 	struct delayed_work	writeback_rate_update;

drivers/md/bcache/request.c

Lines changed: 1 addition & 0 deletions

@@ -996,6 +996,7 @@ static blk_qc_t cached_dev_make_request(struct request_queue *q,
 	struct cached_dev *dc = container_of(d, struct cached_dev, disk);
 	int rw = bio_data_dir(bio);
 
+	atomic_set(&dc->backing_idle, 0);
 	generic_start_io_acct(q, rw, bio_sectors(bio), &d->disk->part0);
 
 	bio_set_dev(bio, dc->bdev);

drivers/md/bcache/writeback.c

Lines changed: 21 additions & 0 deletions

@@ -356,6 +356,27 @@ static void read_dirty(struct cached_dev *dc)
 
 		delay = writeback_delay(dc, size);
 
+		/* If the control system would wait for at least half a
+		 * second, and there's been no reqs hitting the backing disk
+		 * for awhile: use an alternate mode where we have at most
+		 * one contiguous set of writebacks in flight at a time.  If
+		 * someone wants to do IO it will be quick, as it will only
+		 * have to contend with one operation in flight, and we'll
+		 * be round-tripping data to the backing disk as quickly as
+		 * it can accept it.
+		 */
+		if (delay >= HZ / 2) {
+			/* 3 means at least 1.5 seconds, up to 7.5 if we
+			 * have slowed way down.
+			 */
+			if (atomic_inc_return(&dc->backing_idle) >= 3) {
+				/* Wait for current I/Os to finish */
+				closure_sync(&cl);
+				/* And immediately launch a new set. */
+				delay = 0;
+			}
+		}
+
 		while (!kthread_should_stop() && delay) {
 			schedule_timeout_interruptible(delay);
 			delay = writeback_delay(dc, 0);
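
As a quick check of the timing claim in the comment above ("3 means at
least 1.5 seconds, up to 7.5"), here is a toy driver for the hypothetical
maybe_accelerate() sketch from the top of this page (it assumes that sketch
is in scope): the first two passes keep their 0.5-second delay, and the
third and later passes return immediately until front-end IO resets the
counter.

	#include <stdio.h>

	int main(void)
	{
		for (int pass = 1; pass <= 4; pass++)
			printf("pass %d: sleep %u ticks\n",
			       pass, maybe_accelerate(HZ / 2));
		/* pass 1: sleep 500, pass 2: sleep 500,
		 * pass 3: sleep 0,   pass 4: sleep 0 */

		frontend_io_seen();	/* a front-end request starts the count over */
		return 0;
	}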
