Commit 1723058

osalvadorvilardaga authored and torvalds committed
mm, memory_hotplug: don't bail out in do_migrate_range() prematurely
do_migrate_range() takes a memory range and tries to isolate its pages in order to put them on a list; that list is later handed to migrate_pages(), which uses it to know which pages need migrating. Currently, if we fail to isolate a single page, we put all the already-isolated pages back on their LRU and bail out of the function. This is quite suboptimal: it forces us to start over, because scan_movable_pages() will hand us the same range again, and if there is no chance that the offending page can ever be isolated, we will loop there forever. The issue debugged in [1] proved exactly that. While debugging it, we noticed that when do_migrate_range() fails to isolate a single page, it simply discards all the work done so far and bails out, which means scan_movable_pages() will find the same set of pages again.

Instead, we can skip the error, keep isolating as many pages as possible, and then proceed with the call to migrate_pages(). This allows us to do as much work as possible in one pass.

[1] https://lkml.org/lkml/2018/12/6/324

Michal said:

: I still think that this doesn't give us a whole picture. Looping
: forever is a bug. Failing the isolation is quite possible and it
: should be an ephemeral condition (e.g. a race with freeing the page
: or somebody else isolating the page for whatever reason). And here
: comes the disadvantage of the current implementation. We simply
: throw everything on the floor just because of an ephemeral
: condition. The racy page_count check is quite dubious to prevent
: from that.

Link: http://lkml.kernel.org/r/20181211135312.27034-1-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Dan Williams <dan.j.williams@gmail.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent 5eeb633 commit 1723058

File tree

1 file changed: +2 −16 lines changed

mm/memory_hotplug.c

Lines changed: 2 additions & 16 deletions
@@ -1344,7 +1344,6 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 {
 	unsigned long pfn;
 	struct page *page;
-	int not_managed = 0;
 	int ret = 0;
 	LIST_HEAD(source);
 
@@ -1392,7 +1391,6 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		else
 			ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
 		if (!ret) { /* Success */
-			put_page(page);
 			list_add_tail(&page->lru, &source);
 			if (!__PageMovable(page))
 				inc_node_page_state(page, NR_ISOLATED_ANON +
@@ -1401,22 +1399,10 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		} else {
 			pr_warn("failed to isolate pfn %lx\n", pfn);
 			dump_page(page, "isolation failed");
-			put_page(page);
-			/* Because we don't have big zone->lock. we should
-			   check this again here. */
-			if (page_count(page)) {
-				not_managed++;
-				ret = -EBUSY;
-				break;
-			}
 		}
+		put_page(page);
 	}
 	if (!list_empty(&source)) {
-		if (not_managed) {
-			putback_movable_pages(&source);
-			goto out;
-		}
-
 		/* Allocate a new page from the nearest neighbor node */
 		ret = migrate_pages(&source, new_node_page, NULL, 0,
 					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
@@ -1429,7 +1415,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			putback_movable_pages(&source);
 		}
 	}
-out:
+
 	return ret;
 }
