
Commit 2149cda

JoonsooKim authored and torvalds committed
mm/compaction: enhance compaction finish condition
Compaction has an anti-fragmentation algorithm: if no freepage of the
requested migratetype is found in the buddy lists, compaction only finishes
when a free page of at least pageblock order is available. This mitigates
fragmentation, but it does not consider migratetype and it is too strict
compared to the page allocator's anti-fragmentation algorithm.

Not considering migratetype can cause premature finish of compaction. For
example, if the allocation request is for the unmovable migratetype, a
freepage with CMA migratetype doesn't help that allocation, so compaction
should not be stopped. The current logic, however, regards this situation
as "compaction is no longer needed" and finishes the compaction.

Secondly, the condition is too strict compared to the page allocator's
logic: the page allocator can steal freepages from other migratetypes and
change a pageblock's migratetype under more relaxed conditions. That logic
is designed to prevent fragmentation, and we can use it here. Imposing a
hard constraint only on compaction doesn't help much, since the page
allocator would cause fragmentation again.

To solve these problems, this patch borrows the anti-fragmentation logic
from the page allocator. It reduces premature compaction finish in some
cases and reduces excessive compaction work.

The stress-highalloc test in mmtests with non-movable order-7 allocations
shows a considerable increase in compaction success rate.

Compaction success rate (Compaction success * 100 / Compaction stalls, %)
31.82 : 42.20

I tested it with 5 non-reboot runs of the stress-highalloc benchmark and
found no further degradation of the allocation success rate compared to
before. That roughly means this patch doesn't result in more fragmentation.

Vlastimil suggested the additional idea of testing for fallbacks only once
the migration scanner has scanned a whole pageblock. That looked good for
fragmentation, because the chance of stealing increases when more free
pages are created within a given pageblock. I tested it, but it decreased
the compaction success rate to roughly 38.00. I guess the reason is that
under low memory the watermark check can fail due to not enough order-0
free pages, so sometimes the fallback check is never reached even though
migrate_pfn is aligned to pageblock_nr_pages. I could add code to cope with
this situation, but it would make the code more complicated, so I don't
include his idea in this patch.

[akpm@linux-foundation.org: fix CONFIG_CMA=n build]
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
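In essence, the patch swaps the old "any free page of pageblock order or
higher ends compaction" test in __compact_finished() for migratetype-aware
checks. A condensed paraphrase of the old and new conditions, reduced from
the mm/compaction.c diff below (not a verbatim extract):

	/* Before: stop compacting as soon as any high-order free page
	 * exists, even if its migratetype cannot serve the request. */
	if (order >= pageblock_order && area->nr_free)
		return COMPACT_PARTIAL;

	/* After: stop only if the request can actually be satisfied,
	 * either via the CMA free list (movable requests, CONFIG_CMA
	 * only) or because the page allocator's fallback logic could
	 * steal a suitable free page from another migratetype list. */
	if (migratetype == MIGRATE_MOVABLE &&
	    !list_empty(&area->free_list[MIGRATE_CMA]))
		return COMPACT_PARTIAL;
	if (find_suitable_fallback(area, order, migratetype,
				   true, &can_steal) != -1)
		return COMPACT_PARTIAL;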
1 parent 4eb7dce commit 2149cda

File tree

3 files changed: +29 / -7 lines

mm/compaction.c

Lines changed: 13 additions & 2 deletions
@@ -1174,13 +1174,24 @@ static int __compact_finished(struct zone *zone, struct compact_control *cc,
 	/* Direct compactor: Is a suitable page free? */
 	for (order = cc->order; order < MAX_ORDER; order++) {
 		struct free_area *area = &zone->free_area[order];
+		bool can_steal;
 
 		/* Job done if page is free of the right migratetype */
 		if (!list_empty(&area->free_list[migratetype]))
 			return COMPACT_PARTIAL;
 
-		/* Job done if allocation would set block type */
-		if (order >= pageblock_order && area->nr_free)
+#ifdef CONFIG_CMA
+		/* MIGRATE_MOVABLE can fallback on MIGRATE_CMA */
+		if (migratetype == MIGRATE_MOVABLE &&
+			!list_empty(&area->free_list[MIGRATE_CMA]))
+			return COMPACT_PARTIAL;
+#endif
+		/*
+		 * Job done if allocation would steal freepages from
+		 * other migratetype buddy lists.
+		 */
+		if (find_suitable_fallback(area, order, migratetype,
+						true, &can_steal) != -1)
 			return COMPACT_PARTIAL;
 	}
 
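The #ifdef CONFIG_CMA guard above (the subject of the akpm CONFIG_CMA=n
build fixup mentioned in the changelog) is needed because MIGRATE_CMA only
exists when CONFIG_CMA is enabled. A rough sketch of the migratetype enum
from include/linux/mmzone.h of this era, reconstructed for illustration and
not part of this diff:

	enum {
		MIGRATE_UNMOVABLE,
		MIGRATE_RECLAIMABLE,
		MIGRATE_MOVABLE,
		MIGRATE_PCPTYPES,	/* number of types on the pcp lists */
		MIGRATE_RESERVE = MIGRATE_PCPTYPES,
	#ifdef CONFIG_CMA
		MIGRATE_CMA,		/* absent on CONFIG_CMA=n builds */
	#endif
	#ifdef CONFIG_MEMORY_ISOLATION
		MIGRATE_ISOLATE,
	#endif
		MIGRATE_TYPES
	};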

mm/internal.h

Lines changed: 2 additions & 0 deletions
@@ -200,6 +200,8 @@ isolate_freepages_range(struct compact_control *cc,
 unsigned long
 isolate_migratepages_range(struct compact_control *cc,
 			unsigned long low_pfn, unsigned long end_pfn);
+int find_suitable_fallback(struct free_area *area, unsigned int order,
+			int migratetype, bool only_stealable, bool *can_steal);
 
 #endif

mm/page_alloc.c

Lines changed: 14 additions & 5 deletions
@@ -1194,9 +1194,14 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 		set_pageblock_migratetype(page, start_type);
 }
 
-/* Check whether there is a suitable fallback freepage with requested order. */
-static int find_suitable_fallback(struct free_area *area, unsigned int order,
-			int migratetype, bool *can_steal)
+/*
+ * Check whether there is a suitable fallback freepage with requested order.
+ * If only_stealable is true, this function returns fallback_mt only if
+ * we can steal other freepages all together. This would help to reduce
+ * fragmentation due to mixed migratetype pages in one pageblock.
+ */
+int find_suitable_fallback(struct free_area *area, unsigned int order,
+			int migratetype, bool only_stealable, bool *can_steal)
 {
 	int i;
 	int fallback_mt;
@@ -1216,7 +1221,11 @@ static int find_suitable_fallback(struct free_area *area, unsigned int order,
 		if (can_steal_fallback(order, migratetype))
 			*can_steal = true;
 
-		return fallback_mt;
+		if (!only_stealable)
+			return fallback_mt;
+
+		if (*can_steal)
+			return fallback_mt;
 	}
 
 	return -1;
@@ -1238,7 +1247,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 							--current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, &can_steal);
+				start_migratetype, false, &can_steal);
 		if (fallback_mt == -1)
 			continue;
 
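For context, this is roughly how find_suitable_fallback() and the
can_steal_fallback() heuristic it relies on read once the patch is applied.
Lines outside the hunks above are reconstructed from the page allocator
code of this era and are not part of this diff, so details may differ:

	/*
	 * Heuristic from the parent commit: decides when a fallback
	 * allocation may also steal extra pages (or the whole pageblock).
	 * This is the "more relaxed condition" the changelog refers to.
	 */
	static bool can_steal_fallback(unsigned int order, int start_mt)
	{
		if (order >= pageblock_order)
			return true;

		if (order >= pageblock_order / 2 ||
			start_mt == MIGRATE_RECLAIMABLE ||
			start_mt == MIGRATE_UNMOVABLE ||
			page_group_by_mobility_disabled)
			return true;

		return false;
	}

	int find_suitable_fallback(struct free_area *area, unsigned int order,
				int migratetype, bool only_stealable, bool *can_steal)
	{
		int i;
		int fallback_mt;

		if (area->nr_free == 0)
			return -1;

		*can_steal = false;
		for (i = 0;; i++) {
			fallback_mt = fallbacks[migratetype][i];
			if (fallback_mt == MIGRATE_RESERVE)
				break;

			if (list_empty(&area->free_list[fallback_mt]))
				continue;

			if (can_steal_fallback(order, migratetype))
				*can_steal = true;

			/*
			 * Compaction passes only_stealable == true: report a
			 * fallback only when stealing is allowed, to avoid
			 * mixing migratetypes within a pageblock.
			 */
			if (!only_stealable)
				return fallback_mt;

			if (*can_steal)
				return fallback_mt;
		}

		return -1;
	}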
