
Commit d263ed9

Authored by Jianchao Wang, committed by axboe (Jens Axboe)
blk-mq: count the hctx as active before allocating tag
Currently, we count the hctx as active only after a driver tag has been allocated successfully. If a previously inactive hctx tries to get a tag for the first time, it may fail and have to wait. However, because tags->active_queues is then stale, the other shared-tags users can still occupy all driver tags while someone is waiting for one. Consequently, even when the previously inactive hctx is woken up, it may still fail to get a tag and can be starved.

To fix this, count the hctx as active before trying to allocate a driver tag; then, while it is waiting for a tag, the other shared-tag users will reserve budget for it.

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
1 parent d6c02a9 commit d263ed9

2 files changed, 9 insertions(+), 2 deletions(-)
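The "budget" the commit message refers to comes from the shared-tag fairness check applied on every tag allocation: once an hctx is counted in tags->active_queues, each shared user of the tag set is capped at roughly its fair share of the depth. The sketch below paraphrases that check, hctx_may_queue(), as it looked in kernels of this era (block/blk-mq-tag.h); it is shown for context only and is not part of this commit.

/* Paraphrased sketch of the fairness check; not part of this diff. */
static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
				  struct sbitmap_queue *bt)
{
	unsigned int depth, users;

	/* Only shared tag sets are throttled at all. */
	if (!hctx || !(hctx->flags & BLK_MQ_F_TAG_SHARED))
		return true;

	/* An hctx not yet marked active has claimed no share. */
	if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
		return true;

	users = atomic_read(&hctx->tags->active_queues);
	if (!users)
		return true;

	/* Cap each active user at its share of the depth, minimum 4. */
	depth = max((bt->sb.depth + users - 1) / users, 4U);
	return atomic_read(&hctx->nr_active) < depth;
}

With a 32-tag set and two active users, for example, each hctx is capped at max((32 + 2 - 1) / 2, 4U) = 16 in-flight tags; bumping active_queues before waiting is what shrinks the other users' caps so that the waiter's share actually frees up.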

block/blk-mq-tag.c (3 additions, 0 deletions)

@@ -23,6 +23,9 @@ bool blk_mq_has_free_tags(struct blk_mq_tags *tags)
 
 /*
  * If a previously inactive queue goes active, bump the active user count.
+ * We need to do this before try to allocate driver tag, then even if fail
+ * to get tag when first time, the other shared-tag users could reserve
+ * budget for it.
  */
 bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
 {
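For reference, here is a paraphrased sketch of the helper pair as it looked around this commit (again, not part of the diff). Two properties matter for the hunks below: the call is idempotent, so moving it earlier and reaching it repeatedly is safe, and its return value reports only whether the tag set is shared, independent of any allocation.

/* Paraphrased sketch; the real helpers live in block/blk-mq-tag.h/.c. */
static inline bool blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
{
	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
		return false;

	return __blk_mq_tag_busy(hctx);
}

bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
{
	/* Only the inactive -> active transition bumps the count. */
	if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state) &&
	    !test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
		atomic_inc(&hctx->tags->active_queues);

	return true;
}

This is why blk_mq_get_driver_tag() below can evaluate blk_mq_tag_busy() once into a local `shared` before allocating, and why blk_mq_rq_ctx_init() can switch to testing BLK_MQ_F_TAG_SHARED directly: the busy accounting has already been done earlier in the allocation path.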

block/blk-mq.c (6 additions, 2 deletions)

@@ -285,7 +285,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 		rq->tag = -1;
 		rq->internal_tag = tag;
 	} else {
-		if (blk_mq_tag_busy(data->hctx)) {
+		if (data->hctx->flags & BLK_MQ_F_TAG_SHARED) {
 			rq_flags = RQF_MQ_INFLIGHT;
 			atomic_inc(&data->hctx->nr_active);
 		}
@@ -367,6 +367,8 @@ static struct request *blk_mq_get_request(struct request_queue *q,
 		if (!op_is_flush(op) && e->type->ops.mq.limit_depth &&
 		    !(data->flags & BLK_MQ_REQ_RESERVED))
 			e->type->ops.mq.limit_depth(op, data);
+	} else {
+		blk_mq_tag_busy(data->hctx);
 	}
 
 	tag = blk_mq_get_tag(data);
@@ -971,16 +973,18 @@ bool blk_mq_get_driver_tag(struct request *rq)
 		.hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu),
 		.flags = BLK_MQ_REQ_NOWAIT,
 	};
+	bool shared;
 
 	if (rq->tag != -1)
 		goto done;
 
 	if (blk_mq_tag_is_reserved(data.hctx->sched_tags, rq->internal_tag))
 		data.flags |= BLK_MQ_REQ_RESERVED;
 
+	shared = blk_mq_tag_busy(data.hctx);
 	rq->tag = blk_mq_get_tag(&data);
 	if (rq->tag >= 0) {
-		if (blk_mq_tag_busy(data.hctx)) {
+		if (shared) {
 			rq->rq_flags |= RQF_MQ_INFLIGHT;
 			atomic_inc(&data.hctx->nr_active);
 		}
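A hypothetical two-queue timeline (an illustration, not from the commit) makes the starvation concrete:

/*
 * Shared tag set of depth N; hctx0 long active, hctx1 newly active.
 *
 * Old order (count active only after a successful allocation):
 *   hctx1: blk_mq_get_tag() fails -> sleeps; active_queues still == 1,
 *          because blk_mq_tag_busy() was never reached.
 *   hctx0: with active_queues == 1 its budget is the full depth, so each
 *          freed tag can be re-taken at once; hctx1 can lose the race
 *          after every wakeup, indefinitely.
 *
 * New order (count active before allocating):
 *   hctx1: blk_mq_tag_busy() first -> active_queues == 2 -> then waits.
 *   hctx0: budget drops to ~N/2, so it must leave tags unused and
 *          hctx1's wakeup eventually finds one.
 */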
