Commit c9e2fbd

Ledest authored and H. Peter Anvin committed
x86: Avoid 'constant_test_bit()' misoptimization due to cast to non-volatile
While debugging a bit_spin_lock() hang, the problem was tracked down to a gcc-4.4 misoptimization of the non-inlined constant_test_bit(): the 'const volatile unsigned long *addr' argument is cast to 'unsigned long *', dropping the volatile qualifier, and the generated code takes an unconditional jump to the pause loop (rather than to the test), leading to the hang. Compiling with gcc-4.3, or disabling CONFIG_OPTIMIZE_INLINING, yields an inlined constant_test_bit() with the correct jump, thus working around the kernel bug.

Other arches than asm-x86 may implement this slightly differently; 2.6.29 mitigates the misoptimization by changing the function prototype (commit c4295fb), but fixing the issue itself is better.

Signed-off-by: Alexander Chumachenko <ledest@gmail.com>
Signed-off-by: Michael Shigorin <mike@osdn.org.ua>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
1 parent 7329cf0 commit c9e2fbd

1 file changed: +1 −1


arch/x86/include/asm/bitops.h

Lines changed: 1 addition & 1 deletion
@@ -309,7 +309,7 @@ static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
 static __always_inline int constant_test_bit(unsigned int nr, const volatile unsigned long *addr)
 {
 	return ((1UL << (nr % BITS_PER_LONG)) &
-		(((unsigned long *)addr)[nr / BITS_PER_LONG])) != 0;
+		(addr[nr / BITS_PER_LONG])) != 0;
 }
 
 static inline int variable_test_bit(int nr, volatile const unsigned long *addr)

0 commit comments