
Commit d097a6f

gormanm authored and torvalds committed
mm, compaction: reduce premature advancement of the migration target scanner
The fast isolation of free pages allows the cached PFN of the free scanner to advance faster than necessary, depending on the contents of the free list. The key is that fast_isolate_freepages() can update zone->compact_cached_free_pfn via isolate_freepages_block(). When the fast search fails, the linear scan can start from a point that has skipped valid migration targets, particularly pageblocks with just low-order free pages. This can cause the migration source/target scanners to meet prematurely, causing a reset.

This patch starts by avoiding an update of the pageblock skip information and cached PFN from isolate_freepages_block() and puts the responsibility for updating that information in the callers. The fast scanner will update the cached PFN if and only if it finds a block that is higher than the existing cached PFN, and sets the skip hint if the pageblock is full or nearly full. The linear scanner will update the skip information and the cached PFN only when a block is completely scanned. The total impact is that the free scanner advances more slowly, as it is primarily driven by the linear scanner instead of the fast search.

                                    5.0.0-rc1              5.0.0-rc1
                              noresched-v3r17         slowfree-v3r17
Amean     fault-both-3      2965.68 (   0.00%)     3036.75 (  -2.40%)
Amean     fault-both-5      3995.90 (   0.00%)     4522.24 * -13.17%*
Amean     fault-both-7      5842.12 (   0.00%)     6365.35 (  -8.96%)
Amean     fault-both-12     9550.87 (   0.00%)    10340.93 (  -8.27%)
Amean     fault-both-18    13304.72 (   0.00%)    14732.46 ( -10.73%)
Amean     fault-both-24    14618.59 (   0.00%)    16288.96 ( -11.43%)
Amean     fault-both-30    16650.96 (   0.00%)    16346.21 (   1.83%)
Amean     fault-both-32    17145.15 (   0.00%)    19317.49 ( -12.67%)

The impact on latency is higher than in the last version, but it appears to be due to a slight increase in the free scan rates, which is a potential side-effect of the patch. However, this is necessary for later patches that are more careful about how pageblocks are treated, as earlier iterations of those patches hit corner cases where the restarts were punishing and very visible.

Link: http://lkml.kernel.org/r/20190118175136.31341-19-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
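A minimal sketch of the caller-driven update scheme described above, outside the kernel: the linear scanner lowers the cached free PFN only after a completely scanned pageblock, while the fast scanner touches it only when its best block is at or above the cache, storing one pageblock below that block. Everything here is a pared-down stand-in rather than kernel code; the structs keep one field each, set_pageblock_skip() is elided, and a pageblock size of 512 pages is assumed.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative constant; the real pageblock size is configuration dependent. */
#define PAGEBLOCK_NR_PAGES 512UL

/* Pared-down stand-ins for the kernel's struct zone / struct compact_control. */
struct zone {
	unsigned long compact_cached_free_pfn;
};

struct compact_control {
	struct zone *zone;
	bool no_set_skip_hint;
};

/*
 * After the patch the callers own the skip/cached-PFN updates; this helper
 * only records where the free scanner should restart (the kernel also calls
 * set_pageblock_skip(), omitted here).
 */
static void update_pageblock_skip(struct compact_control *cc, unsigned long pfn)
{
	if (cc->no_set_skip_hint)
		return;
	if (pfn < cc->zone->compact_cached_free_pfn)
		cc->zone->compact_cached_free_pfn = pfn;
}

/*
 * Linear scanner: update the hint only when the pageblock was scanned to
 * completion, so partially scanned blocks are revisited later.
 */
static void linear_scan_done(struct compact_control *cc,
			     unsigned long scanned_up_to,
			     unsigned long block_start, unsigned long block_end)
{
	if (scanned_up_to == block_end)
		update_pageblock_skip(cc, block_start);
}

/*
 * Fast scanner: touch the cache only when the best block found is at or
 * above it, and cache one pageblock below that block (mirroring the
 * fast_isolate_freepages() hunk in the diff below).
 */
static void fast_scan_done(struct compact_control *cc, unsigned long highest)
{
	if (highest && highest >= cc->zone->compact_cached_free_pfn) {
		highest -= PAGEBLOCK_NR_PAGES;
		cc->zone->compact_cached_free_pfn = highest;
	}
}

int main(void)
{
	struct zone zone = { .compact_cached_free_pfn = 4096 };
	struct compact_control cc = { .zone = &zone, .no_set_skip_hint = false };

	/* A fully scanned block at PFNs [1024, 1536) pulls the cache down. */
	linear_scan_done(&cc, 1536, 1024, 1536);
	printf("after linear scan:   %lu\n", zone.compact_cached_free_pfn);

	/* A fast-search hit below the cache leaves it untouched. */
	fast_scan_done(&cc, 512);
	printf("after low fast hit:  %lu\n", zone.compact_cached_free_pfn);

	/* A hit at or above the cache moves it, minus one pageblock. */
	fast_scan_done(&cc, 2048);
	printf("after high fast hit: %lu\n", zone.compact_cached_free_pfn);
	return 0;
}

Run as-is, the sketch prints 1024, 1024 and 1536: only a fully scanned block or a fast hit at or above the cache moves the restart point, so the free scanner's advancement is driven primarily by the linear scan.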
1 parent cf66f07 commit d097a6f

File tree

1 file changed: +10 lines, -17 lines

mm/compaction.c

Lines changed: 10 additions & 17 deletions
@@ -330,24 +330,18 @@ static void update_cached_migrate(struct compact_control *cc, unsigned long pfn)
  * future. The information is later cleared by __reset_isolation_suitable().
  */
 static void update_pageblock_skip(struct compact_control *cc,
-			struct page *page, unsigned long nr_isolated)
+			struct page *page, unsigned long pfn)
 {
 	struct zone *zone = cc->zone;
-	unsigned long pfn;
 
 	if (cc->no_set_skip_hint)
 		return;
 
 	if (!page)
 		return;
 
-	if (nr_isolated)
-		return;
-
 	set_pageblock_skip(page);
 
-	pfn = page_to_pfn(page);
-
 	/* Update where async and sync compaction should restart */
 	if (pfn < zone->compact_cached_free_pfn)
 		zone->compact_cached_free_pfn = pfn;
@@ -365,7 +359,7 @@ static inline bool pageblock_skip_persistent(struct page *page)
 }
 
 static inline void update_pageblock_skip(struct compact_control *cc,
-			struct page *page, unsigned long nr_isolated)
+			struct page *page, unsigned long pfn)
 {
 }
 
@@ -449,7 +443,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 				bool strict)
 {
 	int nr_scanned = 0, total_isolated = 0;
-	struct page *cursor, *valid_page = NULL;
+	struct page *cursor;
 	unsigned long flags = 0;
 	bool locked = false;
 	unsigned long blockpfn = *start_pfn;
@@ -476,9 +470,6 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 		if (!pfn_valid_within(blockpfn))
 			goto isolate_fail;
 
-		if (!valid_page)
-			valid_page = page;
-
 		/*
 		 * For compound pages such as THP and hugetlbfs, we can save
 		 * potentially a lot of iterations if we skip them at once.
@@ -566,10 +557,6 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	if (strict && blockpfn < end_pfn)
 		total_isolated = 0;
 
-	/* Update the pageblock-skip if the whole pageblock was scanned */
-	if (blockpfn == end_pfn)
-		update_pageblock_skip(cc, valid_page, total_isolated);
-
 	cc->total_free_scanned += nr_scanned;
 	if (total_isolated)
 		count_compact_events(COMPACTISOLATED, total_isolated);
@@ -1293,8 +1280,10 @@ fast_isolate_freepages(struct compact_control *cc)
 		}
 	}
 
-	if (highest && highest > cc->zone->compact_cached_free_pfn)
+	if (highest && highest >= cc->zone->compact_cached_free_pfn) {
+		highest -= pageblock_nr_pages;
 		cc->zone->compact_cached_free_pfn = highest;
+	}
 
 	cc->total_free_scanned += nr_scanned;
 	if (!page)
@@ -1374,6 +1363,10 @@ static void isolate_freepages(struct compact_control *cc)
 		isolate_freepages_block(cc, &isolate_start_pfn, block_end_pfn,
 					freelist, false);
 
+		/* Update the skip hint if the full pageblock was scanned */
+		if (isolate_start_pfn == block_end_pfn)
+			update_pageblock_skip(cc, page, block_start_pfn);
+
 		/* Are enough freepages isolated? */
 		if (cc->nr_freepages >= cc->nr_migratepages) {
 			if (isolate_start_pfn >= block_end_pfn) {
