Commit 941297f

Eric Dumazet authored and kaber committed
netfilter: nf_conntrack: nf_conntrack_alloc() fixes
When a slab cache uses SLAB_DESTROY_BY_RCU, we must be careful when allocating objects, since slab allocator could give a freed object still used by lockless readers. In particular, nf_conntrack RCU lookups rely on ct->tuplehash[xxx].hnnode.next being always valid (ie containing a valid 'nulls' value, or a valid pointer to next object in hash chain.) kmem_cache_zalloc() setups object with NULL values, but a NULL value is not valid for ct->tuplehash[xxx].hnnode.next. Fix is to call kmem_cache_alloc() and do the zeroing ourself. As spotted by Patrick, we also need to make sure lookup keys are committed to memory before setting refcount to 1, or a lockless reader could get a reference on the old version of the object. Its key re-check could then pass the barrier. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Patrick McHardy <kaber@trash.net>
1 parent aa6a03e commit 941297f

2 files changed (+24, -4 lines)

Documentation/RCU/rculist_nulls.txt

Lines changed: 6 additions & 1 deletion
@@ -83,11 +83,12 @@ not detect it missed following items in original chain.
 obj = kmem_cache_alloc(...);
 lock_chain(); // typically a spin_lock()
 obj->key = key;
-atomic_inc(&obj->refcnt);
 /*
  * we need to make sure obj->key is updated before obj->next
+ * or obj->refcnt
  */
 smp_wmb();
+atomic_set(&obj->refcnt, 1);
 hlist_add_head_rcu(&obj->obj_node, list);
 unlock_chain(); // typically a spin_unlock()
 

@@ -159,6 +160,10 @@ out:
 obj = kmem_cache_alloc(cachep);
 lock_chain(); // typically a spin_lock()
 obj->key = key;
+/*
+ * changes to obj->key must be visible before refcnt one
+ */
+smp_wmb();
 atomic_set(&obj->refcnt, 1);
 /*
  * insert obj in RCU way (readers might be traversing chain)
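Both insertion sequences above presuppose a cache created with SLAB_DESTROY_BY_RCU (later renamed SLAB_TYPESAFE_BY_RCU). A minimal sketch of such a setup, with an illustrative struct obj and cache name:

struct obj {
	struct hlist_nulls_node	obj_node;	/* ->next must stay valid for readers */
	atomic_t		refcnt;		/* 0 while the object is free */
	unsigned int		key;
};

static struct kmem_cache *obj_cachep;

static int __init obj_cache_init(void)
{
	/*
	 * SLAB_DESTROY_BY_RCU defers freeing of the cache's pages, not of
	 * individual objects: kmem_cache_free() may hand an object to the
	 * next kmem_cache_alloc() immediately, while readers still traverse
	 * it.  That is why the sequences above must publish the key before
	 * the refcount.
	 */
	obj_cachep = kmem_cache_create("obj_cache", sizeof(struct obj), 0,
				       SLAB_DESTROY_BY_RCU, NULL);
	return obj_cachep ? 0 : -ENOMEM;
}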

net/netfilter/nf_conntrack_core.c

Lines changed: 18 additions & 3 deletions
@@ -561,23 +561,38 @@ struct nf_conn *nf_conntrack_alloc(struct net *net,
 		}
 	}
 
-	ct = kmem_cache_zalloc(nf_conntrack_cachep, gfp);
+	/*
+	 * Do not use kmem_cache_zalloc(), as this cache uses
+	 * SLAB_DESTROY_BY_RCU.
+	 */
+	ct = kmem_cache_alloc(nf_conntrack_cachep, gfp);
 	if (ct == NULL) {
 		pr_debug("nf_conntrack_alloc: Can't alloc conntrack.\n");
 		atomic_dec(&net->ct.count);
 		return ERR_PTR(-ENOMEM);
 	}
-
+	/*
+	 * Let ct->tuplehash[IP_CT_DIR_ORIGINAL].hnnode.next
+	 * and ct->tuplehash[IP_CT_DIR_REPLY].hnnode.next unchanged.
+	 */
+	memset(&ct->tuplehash[IP_CT_DIR_MAX], 0,
+	       sizeof(*ct) - offsetof(struct nf_conn, tuplehash[IP_CT_DIR_MAX]));
 	spin_lock_init(&ct->lock);
-	atomic_set(&ct->ct_general.use, 1);
 	ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple = *orig;
+	ct->tuplehash[IP_CT_DIR_ORIGINAL].hnnode.pprev = NULL;
 	ct->tuplehash[IP_CT_DIR_REPLY].tuple = *repl;
+	ct->tuplehash[IP_CT_DIR_REPLY].hnnode.pprev = NULL;
 	/* Don't set timer yet: wait for confirmation */
 	setup_timer(&ct->timeout, death_by_timeout, (unsigned long)ct);
 #ifdef CONFIG_NET_NS
 	ct->ct_net = net;
 #endif
 
+	/*
+	 * changes to lookup keys must be done before setting refcnt to 1
+	 */
+	smp_wmb();
+	atomic_set(&ct->ct_general.use, 1);
 	return ct;
 }
 EXPORT_SYMBOL_GPL(nf_conntrack_alloc);
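The memset() in this hunk zeroes only the tail of struct nf_conn, so the hnnode.next words that concurrent readers may still be chasing keep their previous contents. The offsetof() arithmetic stands on its own; a hypothetical userspace demo of the same pattern, with struct demo invented for illustration:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Invented layout: one field readers may still see, then the rest. */
struct demo {
	void	*chain_next;	/* must survive reallocation, never zeroed */
	int	key;		/* first byte that may be cleared */
	int	counters[4];
};

int main(void)
{
	struct demo d;

	d.chain_next = &d;			/* stands in for a live chain pointer */
	d.key = 42;
	memset(d.counters, 0xff, sizeof(d.counters));

	/*
	 * Zero everything from 'key' to the end of the struct, leaving
	 * chain_next untouched -- the same arithmetic as the
	 * nf_conntrack_alloc() memset above.
	 */
	memset(&d.key, 0, sizeof(d) - offsetof(struct demo, key));

	/* Prints: chain_next kept, key=0, counters[0]=0 */
	printf("chain_next=%p key=%d counters[0]=%d\n",
	       d.chain_next, d.key, d.counters[0]);
	return 0;
}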
