Commit c2343d2

yhuang-intel authored and torvalds committed
mm/swapfile.c: put_swap_page: share more between huge/normal code path
In this patch, locking-related code is shared between the huge and normal code paths in put_swap_page() to reduce code duplication. The `free_entries == 0` case is merged into the more general `free_entries != SWAPFILE_CLUSTER` case, because the new locking method makes this easy.

The number of added lines is the same as the number of removed lines, but the code size increases when CONFIG_TRANSPARENT_HUGEPAGE=n:

              text    data     bss     dec     hex filename
base:        24123    2004     340   26467    6763 mm/swapfile.o
unified:     24485    2004     340   26829    68cd mm/swapfile.o

Dig one step deeper with `size -A mm/swapfile.o` for the base and unified kernels and compare the results:

-.text           17723   0
+.text           17835   0
-.orc_unwind_ip   1380   0
+.orc_unwind_ip   1480   0
-.orc_unwind      2070   0
+.orc_unwind      2220   0
-Total           26686
+Total           27048

The total difference is the same (362 bytes). The .text segment difference is much smaller: 112 bytes. The rest of the difference comes from the ORC unwinder segments: (1480 + 2220) - (1380 + 2070) = 250. If the frame pointer unwinder is used instead, this costs nothing.

Link: http://lkml.kernel.org/r/20180720071845.17920-9-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent b32d5f3 commit c2343d2

1 file changed: mm/swapfile.c (10 additions & 10 deletions)
@@ -1223,8 +1223,8 @@ void put_swap_page(struct page *page, swp_entry_t entry)
 	if (!si)
 		return;
 
+	ci = lock_cluster_or_swap_info(si, offset);
 	if (size == SWAPFILE_CLUSTER) {
-		ci = lock_cluster(si, offset);
 		VM_BUG_ON(!cluster_is_huge(ci));
 		map = si->swap_map + offset;
 		for (i = 0; i < SWAPFILE_CLUSTER; i++) {
@@ -1233,13 +1233,9 @@ void put_swap_page(struct page *page, swp_entry_t entry)
 			if (val == SWAP_HAS_CACHE)
 				free_entries++;
 		}
-		if (!free_entries) {
-			for (i = 0; i < SWAPFILE_CLUSTER; i++)
-				map[i] &= ~SWAP_HAS_CACHE;
-		}
 		cluster_clear_huge(ci);
-		unlock_cluster(ci);
 		if (free_entries == SWAPFILE_CLUSTER) {
+			unlock_cluster_or_swap_info(si, ci);
 			spin_lock(&si->lock);
 			ci = lock_cluster(si, offset);
 			memset(map, 0, SWAPFILE_CLUSTER);
@@ -1250,12 +1246,16 @@ void put_swap_page(struct page *page, swp_entry_t entry)
 			return;
 		}
 	}
-	if (size == 1 || free_entries) {
-		for (i = 0; i < size; i++, entry.val++) {
-			if (!__swap_entry_free(si, entry, SWAP_HAS_CACHE))
-				free_swap_slot(entry);
+	for (i = 0; i < size; i++, entry.val++) {
+		if (!__swap_entry_free_locked(si, offset + i, SWAP_HAS_CACHE)) {
+			unlock_cluster_or_swap_info(si, ci);
+			free_swap_slot(entry);
+			if (i == size - 1)
+				return;
+			lock_cluster_or_swap_info(si, offset);
 		}
 	}
+	unlock_cluster_or_swap_info(si, ci);
 }
 
 #ifdef CONFIG_THP_SWAP
