
Commit a6f5720

Peter Zijlstra authored and torvalds committed
mm/tlb: Remove tlb_remove_table() non-concurrent condition
Will noted that only checking mm_users is incorrect; we should also
check mm_count in order to cover CPUs that have a lazy reference to
this mm (and could do speculative TLB operations).

If removing this turns out to be a performance issue, we can
re-instate a more complete check, but in tlb_table_flush() eliding the
call_rcu_sched().

Fixes: 2672391 ("mm, powerpc: move the RCU page-table freeing into generic code")
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Rik van Riel <riel@surriel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
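
Not part of this commit: a minimal sketch of what the "more complete check"
mentioned in the changelog could look like, assuming the v4.18-era
mm_struct fields (mm_users and mm_count are both atomic_t there). The
helper name mm_has_no_concurrent_walkers() is hypothetical and only
illustrates the idea of covering lazy-TLB references as well as real users.

	/*
	 * Hypothetical helper, not in the kernel tree: true only when no
	 * other CPU can be walking this mm's page tables, covering both
	 * real users (mm_users) and lazy-TLB references (mm_count).
	 */
	static inline bool mm_has_no_concurrent_walkers(struct mm_struct *mm)
	{
		return atomic_read(&mm->mm_users) < 2 &&
		       atomic_read(&mm->mm_count) < 2;
	}

Per the changelog, such a check would live in tlb_table_flush() rather than
tlb_remove_table(), so that only the call_rcu_sched() is elided while the
batching logic stays intact.
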
1 parent db7ddef commit a6f5720

File tree

1 file changed (+0 −9 lines changed)

mm/memory.c

Lines changed: 0 additions & 9 deletions
@@ -375,15 +375,6 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
 {
 	struct mmu_table_batch **batch = &tlb->batch;
 
-	/*
-	 * When there's less then two users of this mm there cannot be a
-	 * concurrent page-table walk.
-	 */
-	if (atomic_read(&tlb->mm->mm_users) < 2) {
-		__tlb_remove_table(table);
-		return;
-	}
-
 	if (*batch == NULL) {
 		*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
 		if (*batch == NULL) {
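
With the fast path removed, every table freed through tlb_remove_table() is
batched and released via the RCU path. For context, the flush side of
mm/memory.c in the same era looked roughly like the sketch below
(reproduced from memory, so treat it as approximate rather than
authoritative); the call_rcu_sched() here is the call the changelog says a
re-instated check would elide.

	void tlb_table_flush(struct mmu_gather *tlb)
	{
		struct mmu_table_batch **batch = &tlb->batch;

		if (*batch) {
			/* Defer freeing until concurrent (speculative) walkers are done. */
			call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
			*batch = NULL;
		}
	}
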
