
Commit 2c91bd4

joelagnel authored and torvalds committed
mm: speed up mremap by 20x on large regions
Android needs to mremap large regions of memory during memory management related operations. The mremap system call can be really slow if THP is not enabled. The bottleneck is move_page_tables, which copies one pte at a time, and can be really slow across a large map. Turning on THP may not be a viable option, and is not for us. This patch speeds up the performance for non-THP systems by copying at the PMD level when possible.

The speedup is an order of magnitude on x86 (~20x). On a 1GB mremap, the mremap completion time drops from 3.4-3.6 milliseconds to 144-160 microseconds.

Before:
Total mremap time for 1GB data: 3521942 nanoseconds.
Total mremap time for 1GB data: 3449229 nanoseconds.
Total mremap time for 1GB data: 3488230 nanoseconds.

After:
Total mremap time for 1GB data: 150279 nanoseconds.
Total mremap time for 1GB data: 144665 nanoseconds.
Total mremap time for 1GB data: 158708 nanoseconds.

If THP is enabled, the optimization is mostly skipped except in certain situations.

[joel@joelfernandes.org: fix 'move_normal_pmd' unused function warning]
Link: http://lkml.kernel.org/r/20181108224457.GB209347@google.com
Link: http://lkml.kernel.org/r/20181108181201.88826-3-joelaf@google.com
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
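
For scale, a 1GB region covers 262,144 4KB PTE entries but only 512 2MB PMD entries, which is where the order-of-magnitude win comes from. The figures above come from timing mremap() on a populated 1GB anonymous mapping; a minimal userspace sketch of that kind of measurement (an illustration only, not the actual test program behind these numbers) could look like:

/*
 * Hypothetical benchmark sketch (not part of this commit): map and fault in
 * 1GB of anonymous memory, then time a single mremap() move of the region.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <time.h>
#include <sys/mman.h>

#define REGION_SIZE	(1UL << 30)	/* 1GB */

int main(void)
{
	struct timespec start, end;
	void *old_map, *new_map;
	uint64_t ns;

	old_map = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (old_map == MAP_FAILED)
		return 1;
	memset(old_map, 1, REGION_SIZE);	/* populate the page tables */

	clock_gettime(CLOCK_MONOTONIC, &start);
	new_map = mremap(old_map, REGION_SIZE, REGION_SIZE, MREMAP_MAYMOVE);
	clock_gettime(CLOCK_MONOTONIC, &end);
	if (new_map == MAP_FAILED)
		return 1;

	ns = (uint64_t)(end.tv_sec - start.tv_sec) * 1000000000ULL +
	     (end.tv_nsec - start.tv_nsec);
	printf("Total mremap time for 1GB data: %llu nanoseconds.\n",
	       (unsigned long long)ns);

	munmap(new_map, REGION_SIZE);
	return 0;
}

Whether a particular run actually hits the PMD-level path also depends on the source and destination addresses being PMD-aligned, which this sketch leaves to chance.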
1 parent 4cf5892 commit 2c91bd4

2 files changed, 69 insertions(+), 0 deletions(-)

arch/Kconfig

Lines changed: 5 additions & 0 deletions
@@ -535,6 +535,11 @@ config HAVE_IRQ_TIME_ACCOUNTING
 	  Archs need to ensure they use a high enough resolution clock to
 	  support irq time accounting and then call enable_sched_clock_irqtime().
 
+config HAVE_MOVE_PMD
+	bool
+	help
+	  Archs that select this are able to move page tables at the PMD level.
+
 config HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	bool
 

mm/mremap.c

Lines changed: 64 additions & 0 deletions
@@ -191,6 +191,52 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 		drop_rmap_locks(vma);
 }
 
+#ifdef CONFIG_HAVE_MOVE_PMD
+static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+		  unsigned long new_addr, unsigned long old_end,
+		  pmd_t *old_pmd, pmd_t *new_pmd)
+{
+	spinlock_t *old_ptl, *new_ptl;
+	struct mm_struct *mm = vma->vm_mm;
+	pmd_t pmd;
+
+	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
+	    || old_end - old_addr < PMD_SIZE)
+		return false;
+
+	/*
+	 * The destination pmd shouldn't be established, free_pgtables()
+	 * should have released it.
+	 */
+	if (WARN_ON(!pmd_none(*new_pmd)))
+		return false;
+
+	/*
+	 * We don't have to worry about the ordering of src and dst
+	 * ptlocks because exclusive mmap_sem prevents deadlock.
+	 */
+	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
+	new_ptl = pmd_lockptr(mm, new_pmd);
+	if (new_ptl != old_ptl)
+		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+
+	/* Clear the pmd */
+	pmd = *old_pmd;
+	pmd_clear(old_pmd);
+
+	VM_BUG_ON(!pmd_none(*new_pmd));
+
+	/* Set the new pmd */
+	set_pmd_at(mm, new_addr, new_pmd, pmd);
+	flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+	if (new_ptl != old_ptl)
+		spin_unlock(new_ptl);
+	spin_unlock(old_ptl);
+
+	return true;
+}
+#endif
+
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len,

@@ -235,7 +281,25 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			split_huge_pmd(vma, old_pmd, old_addr);
 			if (pmd_trans_unstable(old_pmd))
 				continue;
+		} else if (extent == PMD_SIZE) {
+#ifdef CONFIG_HAVE_MOVE_PMD
+			/*
+			 * If the extent is PMD-sized, try to speed the move by
+			 * moving at the PMD level if possible.
+			 */
+			bool moved;
+
+			if (need_rmap_locks)
+				take_rmap_locks(vma);
+			moved = move_normal_pmd(vma, old_addr, new_addr,
+					old_end, old_pmd, new_pmd);
+			if (need_rmap_locks)
+				drop_rmap_locks(vma);
+			if (moved)
+				continue;
+#endif
 		}
+
 		if (pte_alloc(new_vma->vm_mm, new_pmd))
 			break;
 		next = (new_addr + PMD_SIZE) & PMD_MASK;
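
As the guard at the top of move_normal_pmd shows, the fast path is only taken when both the old and the new address are PMD-aligned and at least PMD_SIZE is left to move. From userspace, the source side of that can be arranged by over-allocating and trimming; a hedged sketch follows (mmap_pmd_aligned is a made-up helper for illustration, not part of this commit):

/*
 * Hypothetical helper (not from this commit): return an anonymous mapping of
 * 'size' bytes whose start address is 2MB (PMD) aligned, by over-mapping and
 * trimming the unaligned head and tail.
 */
#include <stdint.h>
#include <sys/mman.h>

#define PMD_ALIGN	(2UL << 20)	/* 2MB, the x86-64 PMD size */

void *mmap_pmd_aligned(size_t size)
{
	char *raw, *aligned;

	raw = mmap(NULL, size + PMD_ALIGN, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED)
		return NULL;

	aligned = (char *)(((uintptr_t)raw + PMD_ALIGN - 1) & ~(PMD_ALIGN - 1));

	/* Trim the unaligned head, if any. */
	if (aligned > raw)
		munmap(raw, aligned - raw);
	/* Trim whatever remains past the aligned region. */
	munmap(aligned + size, raw + size + PMD_ALIGN - (aligned + size));

	return aligned;
}

The destination picked by the kernel for MREMAP_MAYMOVE is not guaranteed to be PMD-aligned; pinning that down as well would take MREMAP_FIXED with an equally aligned target address.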
