
Commit 90deb78

hnaz authored and torvalds committed
mm: memcg: only check swap cache pages for repeated charging
Only anon and shmem pages in the swap cache are attempted to be charged multiple times, from every swap pte fault or from shmem_unuse(). No other pages require checking PageCgroupUsed().

Charging pages in the swap cache is also serialized by the page lock, and since both the try_charge and commit_charge are called under the same page lock section, the PageCgroupUsed() check might as well happen before the counter charging, let alone reclaim.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Wanpeng Li <liwp.linux@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent 0435a2f · commit 90deb78

1 file changed: +12 −5 lines changed


mm/memcontrol.c

Lines changed: 12 additions & 5 deletions
@@ -2538,11 +2538,7 @@ static void __mem_cgroup_commit_charge(struct mem_cgroup *memcg,
 	bool anon;
 
 	lock_page_cgroup(pc);
-	if (unlikely(PageCgroupUsed(pc))) {
-		unlock_page_cgroup(pc);
-		__mem_cgroup_cancel_charge(memcg, nr_pages);
-		return;
-	}
+	VM_BUG_ON(PageCgroupUsed(pc));
 	/*
 	 * we don't need page_cgroup_lock about tail pages, becase they are not
 	 * accessed by any other context at this point.
@@ -2807,8 +2803,19 @@ static int __mem_cgroup_try_charge_swapin(struct mm_struct *mm,
 					  struct mem_cgroup **memcgp)
 {
 	struct mem_cgroup *memcg;
+	struct page_cgroup *pc;
 	int ret;
 
+	pc = lookup_page_cgroup(page);
+	/*
+	 * Every swap fault against a single page tries to charge the
+	 * page, bail as early as possible.  shmem_unuse() encounters
+	 * already charged pages, too.  The USED bit is protected by
+	 * the page lock, which serializes swap cache removal, which
+	 * in turn serializes uncharging.
+	 */
+	if (PageCgroupUsed(pc))
+		return 0;
 	if (!do_swap_account)
 		goto charge_cur_mm;
 	/*
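For context, here is a minimal sketch of how a swap-in caller is expected to drive the try/commit pair under a single page lock section, which is what makes the early PageCgroupUsed() bail-out above safe. The helper names mem_cgroup_try_charge_swapin() and mem_cgroup_commit_charge_swapin() match the mm/memcontrol.c API of this era; the wrapper function and the abbreviated control flow around them are hypothetical, not an excerpt from do_swap_page().

/*
 * Simplified, hypothetical sketch: hold the page lock across both the
 * try_charge and the commit_charge, as the comment in the hunk above
 * relies on.  Pte installation, rmap and error paths are elided.
 */
static int swapin_charge_sketch(struct mm_struct *mm, struct page *page)
{
	struct mem_cgroup *memcg = NULL;
	int ret;

	lock_page(page);	/* serializes swap cache removal and uncharge */

	ret = mem_cgroup_try_charge_swapin(mm, page, GFP_KERNEL, &memcg);
	if (ret)
		goto out;

	/* ... map the page: set_pte_at(), rmap, LRU, etc. ... */

	mem_cgroup_commit_charge_swapin(page, memcg);
out:
	unlock_page(page);
	return ret;
}

In the real fault path the two halves are separated by pte and rmap setup, but the page lock brackets all of it, so a racing fault on the same swap entry simply observes the USED bit in __mem_cgroup_try_charge_swapin() and returns 0 without touching the counters.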
