Commit 0b3901b

jankara authored and torvalds committed
mm: migration: factor out code to compute expected number of page references
Patch series "mm: migrate: Fix page migration stalls for blkdev pages".

This patchset deals with page migration stalls that were reported by our
customer due to a block device page that had a buffer head in the bh LRU
cache. The patchset modifies the page migration code so that buffer heads
are completely handled inside buffer_migrate_page(), and then provides a
new migration helper for pages with buffer heads that is safe to use even
for block device pages and that also deals with bh LRUs.

This patch (of 6):

Factor out a function to compute the expected number of page references in
migrate_page_move_mapping(). Note that this moves the hpage_nr_pages() and
page_has_private() checks from under xas_lock_irq(); this is safe because
we hold the page lock.

[jack@suse.cz: fix expected_page_refs()]
Link: http://lkml.kernel.org/r/20181217131710.GB8611@quack2.suse.cz
Link: http://lkml.kernel.org/r/20181211172143.7358-2-jack@suse.cz
Signed-off-by: Jan Kara <jack@suse.cz>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent d9367bd commit 0b3901b

File tree

1 file changed: +17 −10 lines changed


mm/migrate.c

Lines changed: 17 additions & 10 deletions
@@ -424,6 +424,22 @@ static inline bool buffer_migrate_lock_buffers(struct buffer_head *head,
 }
 #endif /* CONFIG_BLOCK */
 
+static int expected_page_refs(struct page *page)
+{
+	int expected_count = 1;
+
+	/*
+	 * Device public or private pages have an extra refcount as they are
+	 * ZONE_DEVICE pages.
+	 */
+	expected_count += is_device_private_page(page);
+	expected_count += is_device_public_page(page);
+	if (page_mapping(page))
+		expected_count += hpage_nr_pages(page) + page_has_private(page);
+
+	return expected_count;
+}
+
 /*
  * Replace the page in the mapping.
  *
@@ -440,14 +456,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	XA_STATE(xas, &mapping->i_pages, page_index(page));
 	struct zone *oldzone, *newzone;
 	int dirty;
-	int expected_count = 1 + extra_count;
-
-	/*
-	 * Device public or private pages have an extra refcount as they are
-	 * ZONE_DEVICE pages.
-	 */
-	expected_count += is_device_private_page(page);
-	expected_count += is_device_public_page(page);
+	int expected_count = expected_page_refs(page) + extra_count;
 
 	if (!mapping) {
 		/* Anonymous page without mapping */
@@ -467,8 +476,6 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	newzone = page_zone(newpage);
 
 	xas_lock_irq(&xas);
-
-	expected_count += hpage_nr_pages(page) + page_has_private(page);
 	if (page_count(page) != expected_count || xas_load(&xas) != page) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
