
Commit 6829e27

mszyprow authored and wildea01 committed
arm64: dma-mapping: always clear allocated buffers
Buffers allocated by dma_alloc_coherent() are always zeroed on Alpha, ARM (32-bit), MIPS, PowerPC, x86/x86_64 and probably other architectures. It turned out that some drivers rely on this 'feature'. The allocated buffer might also be exposed to userspace with a dma_mmap() call, so clearing it is desirable from a security point of view, to avoid exposing random memory to userspace.

This patch unifies dma_alloc_coherent() behaviour on the ARM64 architecture with the other implementations by unconditionally zeroing the allocated buffer.

Cc: <stable@vger.kernel.org> # v3.14+
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
1 parent 6544e67 commit 6829e27
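
As an aside (not part of the commit), a minimal sketch of the kind of driver pattern that depends on this implicit zeroing might look like the following. The struct, field, and function names here are purely hypothetical; only dma_alloc_coherent() itself is a real kernel API.

/* Hypothetical descriptor-ring allocation relying on zeroed coherent memory. */
#include <linux/dma-mapping.h>
#include <linux/types.h>

struct my_desc {			/* hypothetical hardware descriptor */
	__le64 addr;
	__le32 flags;			/* device treats non-zero flags as "valid" */
	__le32 len;
};

static int my_ring_alloc(struct device *dev, struct my_desc **ring,
			 dma_addr_t *ring_dma, unsigned int ndesc)
{
	/*
	 * No __GFP_ZERO and no explicit memset(): the driver assumes the
	 * buffer comes back cleared, as it does on x86, 32-bit ARM, MIPS,
	 * PowerPC, etc.  Before this patch, arm64 could hand back stale
	 * (and potentially sensitive) data here.
	 */
	*ring = dma_alloc_coherent(dev, ndesc * sizeof(**ring),
				   ring_dma, GFP_KERNEL);
	if (!*ring)
		return -ENOMEM;

	return 0;
}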

File tree: 1 file changed (+2 −4 lines changed)


arch/arm64/mm/dma-mapping.c

Lines changed: 2 additions & 4 deletions
@@ -67,8 +67,7 @@ static void *__alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags)
 
 		*ret_page = phys_to_page(phys);
 		ptr = (void *)val;
-		if (flags & __GFP_ZERO)
-			memset(ptr, 0, size);
+		memset(ptr, 0, size);
 	}
 
 	return ptr;
@@ -113,8 +112,7 @@ static void *__dma_alloc_coherent(struct device *dev, size_t size,
 
 		*dma_handle = phys_to_dma(dev, page_to_phys(page));
 		addr = page_address(page);
-		if (flags & __GFP_ZERO)
-			memset(addr, 0, size);
+		memset(addr, 0, size);
 		return addr;
 	} else {
 		return swiotlb_alloc_coherent(dev, size, dma_handle, flags);
