Commit 3478588

Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking updates from Ingo Molnar:
 "The biggest part of this tree is the new auto-generated atomics API
  wrappers by Mark Rutland.

  The primary motivation was to allow instrumentation without uglifying
  the primary source code.

  The linecount increase comes from adding the auto-generated files to
  the Git space as well:

    include/asm-generic/atomic-instrumented.h | 1689 ++++++++++++++++--
    include/asm-generic/atomic-long.h         | 1174 ++++++++++---
    include/linux/atomic-fallback.h           | 2295 +++++++++++++++++++++++++
    include/linux/atomic.h                    | 1241 +------------

  I preferred this approach, so that the full call stack of the (already
  complex) locking APIs is still fully visible in 'git grep'. But if this
  is excessive we could certainly hide them.

  There's a separate build-time mechanism to determine whether the
  headers are out of date (they should never be stale if we do our job
  right).

  Anyway, nothing from this should be visible to regular kernel
  developers.

  Other changes:

   - Add support for dynamic keys, which removes a source of false
     positives in the workqueue code, among other things (Bart Van
     Assche)

   - Updates to tools/memory-model (Andrea Parri, Paul E. McKenney)

   - qspinlock, wake_q and lockdep micro-optimizations (Waiman Long)

   - misc other updates and enhancements"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (48 commits)
  locking/lockdep: Shrink struct lock_class_key
  locking/lockdep: Add module_param to enable consistency checks
  lockdep/lib/tests: Test dynamic key registration
  lockdep/lib/tests: Fix run_tests.sh
  kernel/workqueue: Use dynamic lockdep keys for workqueues
  locking/lockdep: Add support for dynamic keys
  locking/lockdep: Verify whether lock objects are small enough to be used as class keys
  locking/lockdep: Check data structure consistency
  locking/lockdep: Reuse lock chains that have been freed
  locking/lockdep: Fix a comment in add_chain_cache()
  locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count()
  locking/lockdep: Reuse list entries that are no longer in use
  locking/lockdep: Free lock classes that are no longer in use
  locking/lockdep: Update two outdated comments
  locking/lockdep: Make it easy to detect whether or not inside a selftest
  locking/lockdep: Split lockdep_free_key_range() and lockdep_reset_lock()
  locking/lockdep: Initialize the locks_before and locks_after lists earlier
  locking/lockdep: Make zap_class() remove all matching lock order entries
  locking/lockdep: Reorder struct lock_class members
  locking/lockdep: Avoid that add_chain_cache() adds an invalid chain to the cache
  ...
2 parents c8f5ed6 + 28d49e2 commit 3478588
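As a side note on the dynamic-keys item in the pull message above, the sketch below shows how a dynamically allocated lock could use the lockdep_register_key()/lockdep_unregister_key() interface added by this series. This is an illustrative sketch only, assuming that interface; struct my_object and the helper names are hypothetical and are not taken from the workqueue patch in this merge.

#include <linux/lockdep.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct my_object {
	spinlock_t lock;
	struct lock_class_key key;	/* key lives in dynamically allocated memory */
};

static struct my_object *my_object_create(void)
{
	struct my_object *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (!obj)
		return NULL;

	/* Tell lockdep about a key that is not in static storage. */
	lockdep_register_key(&obj->key);
	spin_lock_init(&obj->lock);
	lockdep_set_class(&obj->lock, &obj->key);
	return obj;
}

static void my_object_destroy(struct my_object *obj)
{
	/* Release the key (and its lock classes) before freeing the memory. */
	lockdep_unregister_key(&obj->key);
	kfree(obj);
}

Before dynamic keys, a lock_class_key generally had to live in static storage, which is why classes of dynamically created objects (such as workqueues) could be conflated and produce false-positive lockdep reports.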


60 files changed (+7059, -2142 lines)

Documentation/core-api/refcount-vs-atomic.rst (21 additions, 3 deletions)

@@ -54,6 +54,13 @@ must propagate to all other CPUs before the release operation
 (A-cumulative property). This is implemented using
 :c:func:`smp_store_release`.

+An ACQUIRE memory ordering guarantees that all post loads and
+stores (all po-later instructions) on the same CPU are
+completed after the acquire operation. It also guarantees that all
+po-later stores on the same CPU must propagate to all other CPUs
+after the acquire operation executes. This is implemented using
+:c:func:`smp_acquire__after_ctrl_dep`.
+
 A control dependency (on success) for refcounters guarantees that
 if a reference for an object was successfully obtained (reference
 counter increment or addition happened, function returned true),
@@ -119,13 +126,24 @@ Memory ordering guarantees changes:
 result of obtaining pointer to the object!


-case 5) - decrement-based RMW ops that return a value
------------------------------------------------------
+case 5) - generic dec/sub decrement-based RMW ops that return a value
+---------------------------------------------------------------------

 Function changes:

 * :c:func:`atomic_dec_and_test` --> :c:func:`refcount_dec_and_test`
 * :c:func:`atomic_sub_and_test` --> :c:func:`refcount_sub_and_test`
+
+Memory ordering guarantees changes:
+
+ * fully ordered --> RELEASE ordering + ACQUIRE ordering on success
+
+
+case 6) other decrement-based RMW ops that return a value
+---------------------------------------------------------
+
+Function changes:
+
 * no atomic counterpart --> :c:func:`refcount_dec_if_one`
 * ``atomic_add_unless(&var, -1, 1)`` --> ``refcount_dec_not_one(&var)``

@@ -136,7 +154,7 @@ Memory ordering guarantees changes:
 .. note:: :c:func:`atomic_add_unless` only provides full order on success.


-case 6) - lock-based RMW
+case 7) - lock-based RMW
 ------------------------

 Function changes:
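To illustrate the "RELEASE ordering + ACQUIRE ordering on success" guarantee documented in the hunk above, here is a hedged sketch of the usual last-reference-free pattern; struct foo, foo_put() and the payload field are hypothetical names, not taken from the patch.

#include <linux/refcount.h>
#include <linux/slab.h>

struct foo {
	refcount_t ref;
	int payload;			/* data protected by the reference */
};

static void foo_put(struct foo *f)
{
	/*
	 * RELEASE: this CPU's earlier accesses to f->payload are ordered
	 * before the decrement, so they are visible to whichever CPU ends
	 * up doing the free.  ACQUIRE on success (the guarantee documented
	 * above): the CPU that drops the count to zero is ordered after the
	 * other CPUs' released accesses, so kfree() cannot be reordered
	 * before them.
	 */
	if (refcount_dec_and_test(&f->ref))
		kfree(f);
}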

Kbuild (16 additions, 2 deletions)

@@ -6,7 +6,8 @@
 # 2) Generate timeconst.h
 # 3) Generate asm-offsets.h (may need bounds.h and timeconst.h)
 # 4) Check for missing system calls
-# 5) Generate constants.py (may need bounds.h)
+# 5) check atomics headers are up-to-date
+# 6) Generate constants.py (may need bounds.h)

 #####
 # 1) Generate bounds.h
@@ -59,7 +60,20 @@ missing-syscalls: scripts/checksyscalls.sh $(offsets-file) FORCE
 	$(call cmd,syscalls)

 #####
-# 5) Generate constants for Python GDB integration
+# 5) Check atomic headers are up-to-date
+#
+
+always += old-atomics
+targets += old-atomics
+
+quiet_cmd_atomics = CALL $<
+      cmd_atomics = $(CONFIG_SHELL) $<
+
+old-atomics: scripts/atomic/check-atomics.sh FORCE
+	$(call cmd,atomics)
+
+#####
+# 6) Generate constants for Python GDB integration
 #

 extra-$(CONFIG_GDB_SCRIPTS) += build_constants_py

MAINTAINERS (1 addition, 0 deletions)

@@ -2608,6 +2608,7 @@ L: linux-kernel@vger.kernel.org
 S: Maintained
 F: arch/*/include/asm/atomic*.h
 F: include/*/atomic*.h
+F: scripts/atomic/

 ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
 M: Bradley Grove <linuxdrivers@attotech.com>

arch/arm64/include/asm/atomic.h (122 additions, 115 deletions)

@@ -42,124 +42,131 @@

 #define ATOMIC_INIT(i) { (i) }

-#define atomic_read(v) READ_ONCE((v)->counter)
-#define atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
-
-#define atomic_add_return_relaxed atomic_add_return_relaxed
-#define atomic_add_return_acquire atomic_add_return_acquire
-#define atomic_add_return_release atomic_add_return_release
-#define atomic_add_return atomic_add_return
-
-#define atomic_sub_return_relaxed atomic_sub_return_relaxed
-#define atomic_sub_return_acquire atomic_sub_return_acquire
-#define atomic_sub_return_release atomic_sub_return_release
-#define atomic_sub_return atomic_sub_return
-
-#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed
-#define atomic_fetch_add_acquire atomic_fetch_add_acquire
-#define atomic_fetch_add_release atomic_fetch_add_release
-#define atomic_fetch_add atomic_fetch_add
-
-#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed
-#define atomic_fetch_sub_acquire atomic_fetch_sub_acquire
-#define atomic_fetch_sub_release atomic_fetch_sub_release
-#define atomic_fetch_sub atomic_fetch_sub
-
-#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed
-#define atomic_fetch_and_acquire atomic_fetch_and_acquire
-#define atomic_fetch_and_release atomic_fetch_and_release
-#define atomic_fetch_and atomic_fetch_and
-
-#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed
-#define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire
-#define atomic_fetch_andnot_release atomic_fetch_andnot_release
-#define atomic_fetch_andnot atomic_fetch_andnot
-
-#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed
-#define atomic_fetch_or_acquire atomic_fetch_or_acquire
-#define atomic_fetch_or_release atomic_fetch_or_release
-#define atomic_fetch_or atomic_fetch_or
-
-#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed
-#define atomic_fetch_xor_acquire atomic_fetch_xor_acquire
-#define atomic_fetch_xor_release atomic_fetch_xor_release
-#define atomic_fetch_xor atomic_fetch_xor
-
-#define atomic_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new))
-#define atomic_xchg_acquire(v, new) xchg_acquire(&((v)->counter), (new))
-#define atomic_xchg_release(v, new) xchg_release(&((v)->counter), (new))
-#define atomic_xchg(v, new) xchg(&((v)->counter), (new))
-
-#define atomic_cmpxchg_relaxed(v, old, new) \
-	cmpxchg_relaxed(&((v)->counter), (old), (new))
-#define atomic_cmpxchg_acquire(v, old, new) \
-	cmpxchg_acquire(&((v)->counter), (old), (new))
-#define atomic_cmpxchg_release(v, old, new) \
-	cmpxchg_release(&((v)->counter), (old), (new))
-#define atomic_cmpxchg(v, old, new) cmpxchg(&((v)->counter), (old), (new))
-
-#define atomic_andnot atomic_andnot
+#define arch_atomic_read(v) READ_ONCE((v)->counter)
+#define arch_atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
+
+#define arch_atomic_add_return_relaxed arch_atomic_add_return_relaxed
+#define arch_atomic_add_return_acquire arch_atomic_add_return_acquire
+#define arch_atomic_add_return_release arch_atomic_add_return_release
+#define arch_atomic_add_return arch_atomic_add_return
+
+#define arch_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
+#define arch_atomic_sub_return_acquire arch_atomic_sub_return_acquire
+#define arch_atomic_sub_return_release arch_atomic_sub_return_release
+#define arch_atomic_sub_return arch_atomic_sub_return
+
+#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed
+#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
+#define arch_atomic_fetch_add_release arch_atomic_fetch_add_release
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+
+#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed
+#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
+#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub_release
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
+#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
+#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
+#define arch_atomic_fetch_and_release arch_atomic_fetch_and_release
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+
+#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
+#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
+#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
+#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
+
+#define arch_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
+#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
+#define arch_atomic_fetch_or_release arch_atomic_fetch_or_release
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+
+#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
+#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
+#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor_release
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
+#define arch_atomic_xchg_relaxed(v, new) \
+	arch_xchg_relaxed(&((v)->counter), (new))
+#define arch_atomic_xchg_acquire(v, new) \
+	arch_xchg_acquire(&((v)->counter), (new))
+#define arch_atomic_xchg_release(v, new) \
+	arch_xchg_release(&((v)->counter), (new))
+#define arch_atomic_xchg(v, new) \
+	arch_xchg(&((v)->counter), (new))
+
+#define arch_atomic_cmpxchg_relaxed(v, old, new) \
+	arch_cmpxchg_relaxed(&((v)->counter), (old), (new))
+#define arch_atomic_cmpxchg_acquire(v, old, new) \
+	arch_cmpxchg_acquire(&((v)->counter), (old), (new))
+#define arch_atomic_cmpxchg_release(v, old, new) \
+	arch_cmpxchg_release(&((v)->counter), (old), (new))
+#define arch_atomic_cmpxchg(v, old, new) \
+	arch_cmpxchg(&((v)->counter), (old), (new))
+
+#define arch_atomic_andnot arch_atomic_andnot

 /*
- * 64-bit atomic operations.
+ * 64-bit arch_atomic operations.
  */
-#define ATOMIC64_INIT ATOMIC_INIT
-#define atomic64_read atomic_read
-#define atomic64_set atomic_set
-
-#define atomic64_add_return_relaxed atomic64_add_return_relaxed
-#define atomic64_add_return_acquire atomic64_add_return_acquire
-#define atomic64_add_return_release atomic64_add_return_release
-#define atomic64_add_return atomic64_add_return
-
-#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed
-#define atomic64_sub_return_acquire atomic64_sub_return_acquire
-#define atomic64_sub_return_release atomic64_sub_return_release
-#define atomic64_sub_return atomic64_sub_return
-
-#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed
-#define atomic64_fetch_add_acquire atomic64_fetch_add_acquire
-#define atomic64_fetch_add_release atomic64_fetch_add_release
-#define atomic64_fetch_add atomic64_fetch_add
-
-#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed
-#define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire
-#define atomic64_fetch_sub_release atomic64_fetch_sub_release
-#define atomic64_fetch_sub atomic64_fetch_sub
-
-#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed
-#define atomic64_fetch_and_acquire atomic64_fetch_and_acquire
-#define atomic64_fetch_and_release atomic64_fetch_and_release
-#define atomic64_fetch_and atomic64_fetch_and
-
-#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed
-#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire
-#define atomic64_fetch_andnot_release atomic64_fetch_andnot_release
-#define atomic64_fetch_andnot atomic64_fetch_andnot
-
-#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed
-#define atomic64_fetch_or_acquire atomic64_fetch_or_acquire
-#define atomic64_fetch_or_release atomic64_fetch_or_release
-#define atomic64_fetch_or atomic64_fetch_or
-
-#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed
-#define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire
-#define atomic64_fetch_xor_release atomic64_fetch_xor_release
-#define atomic64_fetch_xor atomic64_fetch_xor
-
-#define atomic64_xchg_relaxed atomic_xchg_relaxed
-#define atomic64_xchg_acquire atomic_xchg_acquire
-#define atomic64_xchg_release atomic_xchg_release
-#define atomic64_xchg atomic_xchg
-
-#define atomic64_cmpxchg_relaxed atomic_cmpxchg_relaxed
-#define atomic64_cmpxchg_acquire atomic_cmpxchg_acquire
-#define atomic64_cmpxchg_release atomic_cmpxchg_release
-#define atomic64_cmpxchg atomic_cmpxchg
-
-#define atomic64_andnot atomic64_andnot
-
-#define atomic64_dec_if_positive atomic64_dec_if_positive
+#define ATOMIC64_INIT ATOMIC_INIT
+#define arch_atomic64_read arch_atomic_read
+#define arch_atomic64_set arch_atomic_set
+
+#define arch_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
+#define arch_atomic64_add_return_acquire arch_atomic64_add_return_acquire
+#define arch_atomic64_add_return_release arch_atomic64_add_return_release
+#define arch_atomic64_add_return arch_atomic64_add_return
+
+#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
+#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
+#define arch_atomic64_sub_return_release arch_atomic64_sub_return_release
+#define arch_atomic64_sub_return arch_atomic64_sub_return
+
+#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
+#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
+#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add_release
+#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+
+#define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
+#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
+#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
+#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+
+#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
+#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
+#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and_release
+#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+
+#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
+#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
+#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
+#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot
+
+#define arch_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed
+#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
+#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or_release
+#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+
+#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
+#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
+#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
+#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
+
+#define arch_atomic64_xchg_relaxed arch_atomic_xchg_relaxed
+#define arch_atomic64_xchg_acquire arch_atomic_xchg_acquire
+#define arch_atomic64_xchg_release arch_atomic_xchg_release
+#define arch_atomic64_xchg arch_atomic_xchg
+
+#define arch_atomic64_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
+#define arch_atomic64_cmpxchg_acquire arch_atomic_cmpxchg_acquire
+#define arch_atomic64_cmpxchg_release arch_atomic_cmpxchg_release
+#define arch_atomic64_cmpxchg arch_atomic_cmpxchg
+
+#define arch_atomic64_andnot arch_atomic64_andnot
+
+#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
+
+#include <asm-generic/atomic-instrumented.h>

 #endif
 #endif
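As a rough illustration of why the arch_ prefix matters in the hunk above: the generated <asm-generic/atomic-instrumented.h> pulled in at the end of the header is expected to define the plain atomic_*() API as thin wrappers that run a sanitizer check and then call the arch_atomic_*() primitives. The sketch below shows only that shape; it is not the literal auto-generated code (which is produced by scripts/atomic/ and also covers the 64-bit, fetch and return variants).

#include <linux/kasan-checks.h>

static inline int atomic_read(const atomic_t *v)
{
	kasan_check_read(v, sizeof(*v));	/* instrumentation hook */
	return arch_atomic_read(v);		/* raw arch implementation */
}

static inline void atomic_set(atomic_t *v, int i)
{
	kasan_check_write(v, sizeof(*v));
	arch_atomic_set(v, i);
}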
