Commit cf66f07

gormanm authored and torvalds committed
mm, compaction: do not consider a need to reschedule as contention
Scanning on large machines can take a considerable length of time and eventually needs to be rescheduled. This is treated as an abort event, but that is not appropriate: the attempt is likely to be retried after making numerous checks and taking another cycle through the page allocator. This patch checks the need to reschedule if necessary but continues the scanning.

The main benefit is reduced scanning when compaction is taking a long time or the machine is over-saturated. It also avoids an unnecessary exit of compaction that ends up being retried by the page allocator in the outer loop.

                             5.0.0-rc1              5.0.0-rc1
                      synccached-v3r16        noresched-v3r17
Amean  fault-both-1      0.00 (   0.00%)       0.00 *   0.00%*
Amean  fault-both-3   2958.27 (   0.00%)    2965.68 (  -0.25%)
Amean  fault-both-5   4091.90 (   0.00%)    3995.90 (   2.35%)
Amean  fault-both-7   5803.05 (   0.00%)    5842.12 (  -0.67%)
Amean  fault-both-12  9481.06 (   0.00%)    9550.87 (  -0.74%)
Amean  fault-both-18 14141.51 (   0.00%)   13304.72 (   5.92%)
Amean  fault-both-24 16438.00 (   0.00%)   14618.59 (  11.07%)
Amean  fault-both-30 17531.72 (   0.00%)   16650.96 (   5.02%)
Amean  fault-both-32 17101.96 (   0.00%)   17145.15 (  -0.25%)

Link: http://lkml.kernel.org/r/20190118175136.31341-18-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent cb810ad commit cf66f07

File tree

1 file changed

+4
-19
lines changed


mm/compaction.c

Lines changed: 4 additions & 19 deletions
@@ -404,21 +404,6 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
-/*
- * Aside from avoiding lock contention, compaction also periodically checks
- * need_resched() and records async compaction as contended if necessary.
- */
-static inline void compact_check_resched(struct compact_control *cc)
-{
-	/* async compaction aborts if contended */
-	if (need_resched()) {
-		if (cc->mode == MIGRATE_ASYNC)
-			cc->contended = true;
-
-		cond_resched();
-	}
-}
-
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -447,7 +432,7 @@ static bool compact_unlock_should_abort(spinlock_t *lock,
 		return true;
 	}
 
-	compact_check_resched(cc);
+	cond_resched();
 
 	return false;
 }
@@ -736,7 +721,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		return 0;
 	}
 
-	compact_check_resched(cc);
+	cond_resched();
 
 	if (cc->direct_compaction && (cc->mode == MIGRATE_ASYNC)) {
 		skip_on_failure = true;
@@ -1370,7 +1355,7 @@ static void isolate_freepages(struct compact_control *cc)
 		 * suitable migration targets, so periodically check resched.
 		 */
 		if (!(block_start_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
-			compact_check_resched(cc);
+			cond_resched();
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
@@ -1666,7 +1651,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		 * need to schedule.
 		 */
 		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
-			compact_check_resched(cc);
+			cond_resched();
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
