Commit 8dcd175
Merge branch 'akpm' (patches from Andrew)
Merge misc updates from Andrew Morton:

 - a few misc things

 - ocfs2 updates

 - most of MM

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (159 commits)
  tools/testing/selftests/proc/proc-self-syscall.c: remove duplicate include
  proc: more robust bulk read test
  proc: test /proc/*/maps, smaps, smaps_rollup, statm
  proc: use seq_puts() everywhere
  proc: read kernel cpu stat pointer once
  proc: remove unused argument in proc_pid_lookup()
  fs/proc/thread_self.c: code cleanup for proc_setup_thread_self()
  fs/proc/self.c: code cleanup for proc_setup_self()
  proc: return exit code 4 for skipped tests
  mm,mremap: bail out earlier in mremap_to under map pressure
  mm/sparse: fix a bad comparison
  mm/memory.c: do_fault: avoid usage of stale vm_area_struct
  writeback: fix inode cgroup switching comment
  mm/huge_memory.c: fix "orig_pud" set but not used
  mm/hotplug: fix an imbalance with DEBUG_PAGEALLOC
  mm/memcontrol.c: fix bad line in comment
  mm/cma.c: cma_declare_contiguous: correct err handling
  mm/page_ext.c: fix an imbalance with kmemleak
  mm/compaction: pass pgdat to too_many_isolated() instead of zone
  mm: remove zone_lru_lock() function, access ->lru_lock directly
  ...
2 parents: afe6fe7 + fff0490

213 files changed: +4918 −2315 lines

Documentation/admin-guide/cgroup-v2.rst
Lines changed: 16 additions & 0 deletions

@@ -1189,6 +1189,10 @@ PAGE_SIZE multiple when read back.
 	  Amount of cached filesystem data that was modified and
 	  is currently being written back to disk
 
+	anon_thp
+	  Amount of memory used in anonymous mappings backed by
+	  transparent hugepages
+
 	inactive_anon, active_anon, inactive_file, active_file, unevictable
 	  Amount of memory, swap-backed and filesystem-backed,
 	  on the internal memory management lists used by the
@@ -1248,6 +1252,18 @@ PAGE_SIZE multiple when read back.
 
 	  Amount of reclaimed lazyfree pages
 
+	thp_fault_alloc
+
+	  Number of transparent hugepages which were allocated to satisfy
+	  a page fault, including COW faults. This counter is not present
+	  when CONFIG_TRANSPARENT_HUGEPAGE is not set.
+
+	thp_collapse_alloc
+
+	  Number of transparent hugepages which were allocated to allow
+	  collapsing an existing range of pages. This counter is not
+	  present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
+
 	memory.swap.current
 	  A read-only single value file which exists on non-root
 	  cgroups.
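The counters documented above land in each cgroup's memory.stat, a flat file of "key value" lines. A minimal sketch in C of how a tool might read them; the cgroup2 mount point and the group name "mygroup" are assumptions, adjust for your hierarchy:

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Assumed path: cgroup2 mounted at /sys/fs/cgroup,
	 * "mygroup" is a placeholder cgroup name. */
	const char *path = "/sys/fs/cgroup/mygroup/memory.stat";
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* memory.stat is a flat list of "key value\n" pairs */
		if (!strncmp(line, "anon_thp ", 9) ||
		    !strncmp(line, "thp_fault_alloc ", 16) ||
		    !strncmp(line, "thp_collapse_alloc ", 19))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}

Per the documentation above, thp_fault_alloc and thp_collapse_alloc only appear when the kernel was built with CONFIG_TRANSPARENT_HUGEPAGE.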

Documentation/admin-guide/mm/pagemap.rst
Lines changed: 6 additions & 3 deletions

@@ -75,9 +75,10 @@ number of times a page is mapped.
     20. NOPAGE
     21. KSM
     22. THP
-    23. BALLOON
+    23. OFFLINE
     24. ZERO_PAGE
     25. IDLE
+    26. PGTABLE
 
 * ``/proc/kpagecgroup``. This file contains a 64-bit inode number of the
   memory cgroup each page is charged to, indexed by PFN. Only available when
@@ -118,8 +119,8 @@ Short descriptions to the page flags
     identical memory pages dynamically shared between one or more processes
 22 - THP
     contiguous pages which construct transparent hugepages
-23 - BALLOON
-    balloon compaction page
+23 - OFFLINE
+    page is logically offline
 24 - ZERO_PAGE
     zero page for pfn_zero or huge_zero page
 25 - IDLE
@@ -128,6 +129,8 @@ Short descriptions to the page flags
     Note that this flag may be stale in case the page was accessed via
     a PTE. To make sure the flag is up-to-date one has to read
     ``/sys/kernel/mm/page_idle/bitmap`` first.
+26 - PGTABLE
+    page is in use as a page table
 
 IO related page flags
 ---------------------
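The renumbered and new bits are visible to userspace through /proc/kpageflags, which exposes one 64-bit flag word per PFN (readable by root). A small illustrative C program, using only the bit numbers from the table above (23 = OFFLINE, 26 = PGTABLE):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define KPF_OFFLINE 23			/* bit numbers per the table above */
#define KPF_PGTABLE 26

int main(int argc, char **argv)
{
	uint64_t pfn, flags;
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pfn>\n", argv[0]);
		return 1;
	}
	pfn = strtoull(argv[1], NULL, 0);

	f = fopen("/proc/kpageflags", "rb");	/* requires root */
	if (!f) {
		perror("/proc/kpageflags");
		return 1;
	}
	/* one 64-bit flag word per PFN; prefer fseeko() if the byte
	 * offset could exceed LONG_MAX on a 32-bit userland */
	if (fseek(f, (long)(pfn * sizeof(flags)), SEEK_SET) != 0 ||
	    fread(&flags, sizeof(flags), 1, f) != 1) {
		perror("read");
		return 1;
	}
	printf("pfn %llu: OFFLINE=%d PGTABLE=%d\n",
	       (unsigned long long)pfn,
	       !!(flags & (1ULL << KPF_OFFLINE)),
	       !!(flags & (1ULL << KPF_PGTABLE)));
	fclose(f);
	return 0;
}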

Documentation/cgroup-v1/memcg_test.txt
Lines changed: 2 additions & 2 deletions

@@ -107,9 +107,9 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
 
 8. LRU
 	Each memcg has its own private LRU. Now, its handling is under global
-	VM's control (means that it's handled under global zone_lru_lock).
+	VM's control (means that it's handled under global pgdat->lru_lock).
 	Almost all routines around memcg's LRU is called by global LRU's
-	list management functions under zone_lru_lock().
+	list management functions under pgdat->lru_lock.
 
 	A special function is mem_cgroup_isolate_pages(). This scans
 	memcg's private LRU and call __isolate_lru_page() to extract a page

Documentation/cgroup-v1/memory.txt
Lines changed: 2 additions & 2 deletions

@@ -267,11 +267,11 @@ When oom event notifier is registered, event will be delivered.
   Other lock order is following:
   PG_locked.
   mm->page_table_lock
-    zone_lru_lock
+    pgdat->lru_lock
   lock_page_cgroup.
   In many cases, just lock_page_cgroup() is called.
   per-zone-per-cgroup LRU (cgroup's private LRU) is just guarded by
-  zone_lru_lock, it has no lock of its own.
+  pgdat->lru_lock, it has no lock of its own.
 
 2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM)

MAINTAINERS
Lines changed: 8 additions & 0 deletions

@@ -9835,6 +9835,14 @@ F: kernel/sched/membarrier.c
 F:	include/uapi/linux/membarrier.h
 F:	arch/powerpc/include/asm/membarrier.h
 
+MEMBLOCK
+M:	Mike Rapoport <rppt@linux.ibm.com>
+L:	linux-mm@kvack.org
+S:	Maintained
+F:	include/linux/memblock.h
+F:	mm/memblock.c
+F:	Documentation/core-api/boot-time-mm.rst
+
 MEMORY MANAGEMENT
 L:	linux-mm@kvack.org
 W:	http://www.linux-mm.org

arch/alpha/include/asm/topology.h
Lines changed: 2 additions & 1 deletion

@@ -4,6 +4,7 @@
 
 #include <linux/smp.h>
 #include <linux/threads.h>
+#include <linux/numa.h>
 #include <asm/machvec.h>
 
 #ifdef CONFIG_NUMA
@@ -29,7 +30,7 @@ static const struct cpumask *cpumask_of_node(int node)
 {
 	int cpu;
 
-	if (node == -1)
+	if (node == NUMA_NO_NODE)
 		return cpu_all_mask;
 
 	cpumask_clear(&node_to_cpumask_map[node]);
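Several patches in this series (here and in the ia64 files below) make the same substitution: comparisons against a bare -1 node id become comparisons against NUMA_NO_NODE from <linux/numa.h>, which expands to (-1). A hypothetical kernel-style helper showing the resulting idiom; choose_node() itself is not from this commit:

#include <linux/numa.h>		/* NUMA_NO_NODE, i.e. (-1) */
#include <linux/topology.h>	/* numa_node_id() */

/* Illustrative only: callers pass NUMA_NO_NODE to mean "no node
 * preference", never a bare -1. */
static int choose_node(int node)
{
	if (node == NUMA_NO_NODE)
		node = numa_node_id();	/* fall back to the local node */
	return node;
}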

arch/arm64/Kconfig
Lines changed: 4 additions & 0 deletions

@@ -1467,6 +1467,10 @@ config SYSVIPC_COMPAT
 	def_bool y
 	depends on COMPAT && SYSVIPC
 
+config ARCH_ENABLE_HUGEPAGE_MIGRATION
+	def_bool y
+	depends on HUGETLB_PAGE && MIGRATION
+
 menu "Power management options"
 
 source "kernel/power/Kconfig"

arch/arm64/include/asm/hugetlb.h
Lines changed: 5 additions & 0 deletions

@@ -20,6 +20,11 @@
 
 #include <asm/page.h>
 
+#ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
+#define arch_hugetlb_migration_supported arch_hugetlb_migration_supported
+extern bool arch_hugetlb_migration_supported(struct hstate *h);
+#endif
+
 #define __HAVE_ARCH_HUGE_PTEP_GET
 static inline pte_t huge_ptep_get(pte_t *ptep)
 {

arch/arm64/include/asm/memory.h
Lines changed: 0 additions & 4 deletions

@@ -80,11 +80,7 @@
  */
 #ifdef CONFIG_KASAN
 #define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
-#ifdef CONFIG_KASAN_EXTRA
-#define KASAN_THREAD_SHIFT	2
-#else
 #define KASAN_THREAD_SHIFT	1
-#endif /* CONFIG_KASAN_EXTRA */
 #else
 #define KASAN_SHADOW_SIZE	(0)
 #define KASAN_THREAD_SHIFT	0

arch/arm64/kernel/machine_kexec.c
Lines changed: 1 addition & 2 deletions

@@ -321,7 +321,7 @@ void crash_post_resume(void)
  * but does not hold any data of loaded kernel image.
  *
  * Note that all the pages in crash dump kernel memory have been initially
- * marked as Reserved in kexec_reserve_crashkres_pages().
+ * marked as Reserved as memory was allocated via memblock_reserve().
  *
  * In hibernation, the pages which are Reserved and yet "nosave" are excluded
  * from the hibernation image. crash_is_nosave() does this check for crash
@@ -361,7 +361,6 @@ void crash_free_reserved_phys_range(unsigned long begin, unsigned long end)
 
 	for (addr = begin; addr < end; addr += PAGE_SIZE) {
 		page = phys_to_page(addr);
-		ClearPageReserved(page);
 		free_reserved_page(page);
 	}
 }

arch/arm64/mm/hugetlbpage.c
Lines changed: 20 additions & 0 deletions

@@ -27,6 +27,26 @@
 #include <asm/tlbflush.h>
 #include <asm/pgalloc.h>
 
+#ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
+bool arch_hugetlb_migration_supported(struct hstate *h)
+{
+	size_t pagesize = huge_page_size(h);
+
+	switch (pagesize) {
+#ifdef CONFIG_ARM64_4K_PAGES
+	case PUD_SIZE:
+#endif
+	case PMD_SIZE:
+	case CONT_PMD_SIZE:
+	case CONT_PTE_SIZE:
+		return true;
+	}
+	pr_warn("%s: unrecognized huge page size 0x%lx\n",
+			__func__, pagesize);
+	return false;
+}
+#endif
+
 int pmd_huge(pmd_t pmd)
 {
 	return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
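The "#define arch_hugetlb_migration_supported arch_hugetlb_migration_supported" line in the header above is the usual kernel trick for letting generic code detect an architecture override. A sketch of the generic fallback side, under the assumption that it follows the common pattern; the real definition lives in include/linux/hugetlb.h and may differ in detail:

/* Sketch of the generic side: the self-referential #define in the
 * arm64 header makes this #ifndef evaluate false, so the arch
 * version above is used instead of this default. */
#ifndef arch_hugetlb_migration_supported
static inline bool arch_hugetlb_migration_supported(struct hstate *h)
{
	return true;	/* default: all huge page sizes migratable */
}
#endif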

arch/arm64/mm/init.c
Lines changed: 0 additions & 27 deletions

@@ -118,35 +118,10 @@ static void __init reserve_crashkernel(void)
 	crashk_res.start = crash_base;
 	crashk_res.end = crash_base + crash_size - 1;
 }
-
-static void __init kexec_reserve_crashkres_pages(void)
-{
-#ifdef CONFIG_HIBERNATION
-	phys_addr_t addr;
-	struct page *page;
-
-	if (!crashk_res.end)
-		return;
-
-	/*
-	 * To reduce the size of hibernation image, all the pages are
-	 * marked as Reserved initially.
-	 */
-	for (addr = crashk_res.start; addr < (crashk_res.end + 1);
-			addr += PAGE_SIZE) {
-		page = phys_to_page(addr);
-		SetPageReserved(page);
-	}
-#endif
-}
 #else
 static void __init reserve_crashkernel(void)
 {
 }
-
-static void __init kexec_reserve_crashkres_pages(void)
-{
-}
 #endif /* CONFIG_KEXEC_CORE */
 
 #ifdef CONFIG_CRASH_DUMP
@@ -586,8 +561,6 @@ void __init mem_init(void)
 	/* this will put all unused low memory onto the freelists */
 	memblock_free_all();
 
-	kexec_reserve_crashkres_pages();
-
 	mem_init_print_info(NULL);
 
 	/*

arch/arm64/mm/numa.c
Lines changed: 1 addition & 1 deletion

@@ -120,7 +120,7 @@ static void __init setup_node_to_cpumask_map(void)
 	}
 
 	/* cpumask_of_node() will now work */
-	pr_debug("Node to cpumask map for %d nodes\n", nr_node_ids);
+	pr_debug("Node to cpumask map for %u nodes\n", nr_node_ids);
 }
 
 /*

arch/ia64/kernel/numa.c
Lines changed: 1 addition & 1 deletion

@@ -74,7 +74,7 @@ void __init build_cpu_to_node_map(void)
 	cpumask_clear(&node_to_cpu_mask[node]);
 
 	for_each_possible_early_cpu(cpu) {
-		node = -1;
+		node = NUMA_NO_NODE;
 		for (i = 0; i < NR_CPUS; ++i)
 			if (cpu_physical_id(cpu) == node_cpuid[i].phys_id) {
 				node = node_cpuid[i].nid;

arch/ia64/kernel/perfmon.c
Lines changed: 4 additions & 55 deletions

@@ -583,17 +583,6 @@ pfm_put_task(struct task_struct *task)
 	if (task != current) put_task_struct(task);
 }
 
-static inline void
-pfm_reserve_page(unsigned long a)
-{
-	SetPageReserved(vmalloc_to_page((void *)a));
-}
-static inline void
-pfm_unreserve_page(unsigned long a)
-{
-	ClearPageReserved(vmalloc_to_page((void*)a));
-}
-
 static inline unsigned long
 pfm_protect_ctx_ctxsw(pfm_context_t *x)
 {
@@ -816,44 +805,6 @@ pfm_reset_msgq(pfm_context_t *ctx)
 	DPRINT(("ctx=%p msgq reset\n", ctx));
 }
 
-static void *
-pfm_rvmalloc(unsigned long size)
-{
-	void *mem;
-	unsigned long addr;
-
-	size = PAGE_ALIGN(size);
-	mem = vzalloc(size);
-	if (mem) {
-		//printk("perfmon: CPU%d pfm_rvmalloc(%ld)=%p\n", smp_processor_id(), size, mem);
-		addr = (unsigned long)mem;
-		while (size > 0) {
-			pfm_reserve_page(addr);
-			addr+=PAGE_SIZE;
-			size-=PAGE_SIZE;
-		}
-	}
-	return mem;
-}
-
-static void
-pfm_rvfree(void *mem, unsigned long size)
-{
-	unsigned long addr;
-
-	if (mem) {
-		DPRINT(("freeing physical buffer @%p size=%lu\n", mem, size));
-		addr = (unsigned long) mem;
-		while ((long) size > 0) {
-			pfm_unreserve_page(addr);
-			addr+=PAGE_SIZE;
-			size-=PAGE_SIZE;
-		}
-		vfree(mem);
-	}
-	return;
-}
-
 static pfm_context_t *
 pfm_context_alloc(int ctx_flags)
 {
@@ -1498,7 +1449,7 @@ pfm_free_smpl_buffer(pfm_context_t *ctx)
 	/*
 	 * free the buffer
 	 */
-	pfm_rvfree(ctx->ctx_smpl_hdr, ctx->ctx_smpl_size);
+	vfree(ctx->ctx_smpl_hdr);
 
 	ctx->ctx_smpl_hdr = NULL;
 	ctx->ctx_smpl_size = 0UL;
@@ -2137,7 +2088,7 @@ pfm_close(struct inode *inode, struct file *filp)
 	 * All memory free operations (especially for vmalloc'ed memory)
 	 * MUST be done with interrupts ENABLED.
 	 */
-	if (smpl_buf_addr) pfm_rvfree(smpl_buf_addr, smpl_buf_size);
+	vfree(smpl_buf_addr);
 
 	/*
 	 * return the memory used by the context
@@ -2266,10 +2217,8 @@ pfm_smpl_buffer_alloc(struct task_struct *task, struct file *filp, pfm_context_t
 
 	/*
 	 * We do the easy to undo allocations first.
-	 *
-	 * pfm_rvmalloc(), clears the buffer, so there is no leak
 	 */
-	smpl_buf = pfm_rvmalloc(size);
+	smpl_buf = vzalloc(size);
 	if (smpl_buf == NULL) {
 		DPRINT(("Can't allocate sampling buffer\n"));
 		return -ENOMEM;
@@ -2346,7 +2295,7 @@ pfm_smpl_buffer_alloc(struct task_struct *task, struct file *filp, pfm_context_t
 error:
 	vm_area_free(vma);
 error_kmem:
-	pfm_rvfree(smpl_buf, size);
+	vfree(smpl_buf);
 
 	return -ENOMEM;
 }
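With pfm_rvmalloc()/pfm_rvfree() and their per-page SetPageReserved()/ClearPageReserved() loops gone, perfmon's sampling-buffer management reduces to the plain vzalloc()/vfree() pairing. A minimal kernel-style sketch of the resulting pattern; the helper names are hypothetical, not from the commit:

#include <linux/vmalloc.h>	/* vzalloc(), vfree() */
#include <linux/mm.h>		/* PAGE_ALIGN() */

static void *smpl_buf_alloc(unsigned long size)
{
	/* vzalloc() returns zeroed, page-aligned memory or NULL, so
	 * no separate clearing or page-reservation pass is needed */
	return vzalloc(PAGE_ALIGN(size));
}

static void smpl_buf_free(void *buf)
{
	vfree(buf);	/* vfree(NULL) is a no-op; no size argument */
}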

arch/ia64/mm/discontig.c
Lines changed: 3 additions & 3 deletions

@@ -227,7 +227,7 @@ void __init setup_per_cpu_areas(void)
 	 * CPUs are put into groups according to node. Walk cpu_map
 	 * and create new groups at node boundaries.
 	 */
-	prev_node = -1;
+	prev_node = NUMA_NO_NODE;
 	ai->nr_groups = 0;
 	for (unit = 0; unit < nr_units; unit++) {
 		cpu = cpu_map[unit];
@@ -435,7 +435,7 @@ static void __init *memory_less_node_alloc(int nid, unsigned long pernodesize)
 {
 	void *ptr = NULL;
 	u8 best = 0xff;
-	int bestnode = -1, node, anynode = 0;
+	int bestnode = NUMA_NO_NODE, node, anynode = 0;
 
 	for_each_online_node(node) {
 		if (node_isset(node, memory_less_mask))
@@ -447,7 +447,7 @@ static void __init *memory_less_node_alloc(int nid, unsigned long pernodesize)
 		anynode = node;
 	}
 
-	if (bestnode == -1)
+	if (bestnode == NUMA_NO_NODE)
 		bestnode = anynode;
 
 	ptr = memblock_alloc_try_nid(pernodesize, PERCPU_PAGE_SIZE,

arch/m68k/mm/memory.c
Lines changed: 1 addition & 1 deletion

@@ -51,7 +51,7 @@ void __init init_pointer_table(unsigned long ptable)
 	pr_debug("init_pointer_table: %lx, %x\n", ptable, PD_MARKBITS(dp));
 
 	/* unreserve the page so it's possible to free that page */
-	PD_PAGE(dp)->flags &= ~(1 << PG_reserved);
+	__ClearPageReserved(PD_PAGE(dp));
 	init_page_count(PD_PAGE(dp));
 
 	return;
