Commit cb810ad

gormanm authored and torvalds committed
mm, compaction: rework compact_should_abort as compact_check_resched
With incremental changes, compact_should_abort no longer makes any documented sense. Rename to compact_check_resched and update the associated comments. There is no benefit other than reducing redundant code and making the intent slightly clearer. It could potentially be merged with earlier patches but it just makes the review slightly harder.

Link: http://lkml.kernel.org/r/20190118175136.31341-17-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent 8854c55 commit cb810ad

1 file changed (+23, -38)

mm/compaction.c

Lines changed: 23 additions & 38 deletions
@@ -404,6 +404,21 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
+/*
+ * Aside from avoiding lock contention, compaction also periodically checks
+ * need_resched() and records async compaction as contended if necessary.
+ */
+static inline void compact_check_resched(struct compact_control *cc)
+{
+	/* async compaction aborts if contended */
+	if (need_resched()) {
+		if (cc->mode == MIGRATE_ASYNC)
+			cc->contended = true;
+
+		cond_resched();
+	}
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -432,33 +447,7 @@ static bool compact_unlock_should_abort(spinlock_t *lock,
 		return true;
 	}
 
-	if (need_resched()) {
-		if (cc->mode == MIGRATE_ASYNC)
-			cc->contended = true;
-		cond_resched();
-	}
-
-	return false;
-}
-
-/*
- * Aside from avoiding lock contention, compaction also periodically checks
- * need_resched() and either schedules in sync compaction or aborts async
- * compaction. This is similar to what compact_unlock_should_abort() does, but
- * is used where no lock is concerned.
- *
- * Returns false when no scheduling was needed, or sync compaction scheduled.
- * Returns true when async compaction should abort.
- */
-static inline bool compact_should_abort(struct compact_control *cc)
-{
-	/* async compaction aborts if contended */
-	if (need_resched()) {
-		if (cc->mode == MIGRATE_ASYNC)
-			cc->contended = true;
-
-		cond_resched();
-	}
+	compact_check_resched(cc);
 
 	return false;
 }
@@ -747,8 +736,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		return 0;
 	}
 
-	if (compact_should_abort(cc))
-		return 0;
+	compact_check_resched(cc);
 
 	if (cc->direct_compaction && (cc->mode == MIGRATE_ASYNC)) {
 		skip_on_failure = true;
@@ -1379,12 +1367,10 @@ static void isolate_freepages(struct compact_control *cc)
 					isolate_start_pfn = block_start_pfn) {
 		/*
 		 * This can iterate a massively long zone without finding any
-		 * suitable migration targets, so periodically check if we need
-		 * to schedule, or even abort async compaction.
+		 * suitable migration targets, so periodically check resched.
 		 */
-		if (!(block_start_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages))
-						&& compact_should_abort(cc))
-			break;
+		if (!(block_start_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
+			compact_check_resched(cc);
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
@@ -1677,11 +1663,10 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		/*
 		 * This can potentially iterate a massively long zone with
 		 * many pageblocks unsuitable, so periodically check if we
-		 * need to schedule, or even abort async compaction.
+		 * need to schedule.
 		 */
-		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages))
-						&& compact_should_abort(cc))
-			break;
+		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
+			compact_check_resched(cc);
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
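
Why this reads as a pure rename rather than a behavioral change: by this point in the series, compact_should_abort() could only ever return false, so every "if (compact_should_abort(cc))" branch in the callers was dead code. Below is a minimal userspace sketch of the before/after caller shapes. It is not the kernel source: need_resched() and cond_resched() are stubbed so the control flow can be compiled and traced outside the kernel, and the types are reduced to what the sketch needs.

/* model.c - userspace model of the rename; compile with: cc model.c */
#include <stdbool.h>
#include <stdio.h>

enum migrate_mode { MIGRATE_ASYNC, MIGRATE_SYNC_LIGHT, MIGRATE_SYNC };

struct compact_control {
	enum migrate_mode mode;
	bool contended;
};

/* Stubs standing in for the scheduler hooks used by the real code. */
static bool need_resched(void) { return true; }	/* pretend a resched is pending */
static void cond_resched(void) { }		/* would yield the CPU in-kernel */

/*
 * Before this patch: still typed bool, but after the earlier patches in
 * the series it could only ever return false, so the "abort" in the name
 * (and the callers' early-exit branches) no longer meant anything.
 */
static bool compact_should_abort(struct compact_control *cc)
{
	if (need_resched()) {
		if (cc->mode == MIGRATE_ASYNC)
			cc->contended = true;
		cond_resched();
	}
	return false;	/* unconditionally false by this point in the series */
}

/* After this patch: the same logic as a void helper; nothing to branch on. */
static void compact_check_resched(struct compact_control *cc)
{
	if (need_resched()) {
		if (cc->mode == MIGRATE_ASYNC)
			cc->contended = true;
		cond_resched();
	}
}

int main(void)
{
	struct compact_control cc = { .mode = MIGRATE_ASYNC, .contended = false };

	/* Old caller shape: a dead early exit. */
	if (compact_should_abort(&cc))
		return 1;	/* unreachable */

	/* New caller shape: record contention, reschedule if needed, carry on. */
	compact_check_resched(&cc);

	printf("contended = %d\n", cc.contended);	/* prints "contended = 1" */
	return 0;
}

In other words, the scanners never actually took the break/return paths at these checkpoints; contention is only recorded in cc->contended, and the rename makes that explicit.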
