
Commit a75a3f6

Merge branch 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 asm changes from Ingo Molnar:
 "The main change in this cycle is another step in the big x86 system call
  interface rework by Andy Lutomirski, which moves most of the low level x86
  entry code from assembly to C, for all syscall entries except native 64-bit
  system calls:

    arch/x86/entry/entry_32.S        | 182 ++++------
    arch/x86/entry/entry_64_compat.S | 547 ++++++++-----------------------
    194 insertions(+), 535 deletions(-)

  ... our hope is that the final remaining step (converting native 64-bit
  system calls) will be less painful than all the previous steps, given that
  most of the legacies and quirks are concentrated around native 32-bit and
  compat environments"

* 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (47 commits)
  x86/entry/32: Fix FS and GS restore in opportunistic SYSEXIT
  x86/entry/32: Fix entry_INT80_32() to expect interrupts to be on
  um/x86: Fix build after x86 syscall changes
  x86/asm: Remove the xyz_cfi macros from dwarf2.h
  selftests/x86: Style fixes for the 'unwind_vdso' test
  x86/entry/64/compat: Document sysenter_fix_flags's reason for existence
  x86/entry: Split and inline syscall_return_slowpath()
  x86/entry: Split and inline prepare_exit_to_usermode()
  x86/entry: Use pt_regs_to_thread_info() in syscall entry tracing
  x86/entry: Hide two syscall entry assertions behind CONFIG_DEBUG_ENTRY
  x86/entry: Micro-optimize compat fast syscall arg fetch
  x86/entry: Force inlining of 32-bit syscall code
  x86/entry: Make irqs_disabled checks in exit code depend on lockdep
  x86/entry: Remove unnecessary IRQ twiddling in fast 32-bit syscalls
  x86/asm: Remove thread_info.sysenter_return
  x86/entry/32: Re-implement SYSENTER using the new C path
  x86/entry/32: Switch INT80 to the new C syscall path
  x86/entry/32: Open-code return tracking from fork and kthreads
  x86/entry/compat: Implement opportunistic SYSRETL for compat syscalls
  x86/vdso/compat: Wire up SYSENTER and SYSCSALL for compat userspace
  ...
2 parents d2bea73 + 3bd2951 commit a75a3f6

39 files changed: +1709 −976 lines
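The pull description above is about moving syscall-entry logic out of assembly and into C. As a rough illustration of that shape, the following self-contained C sketch shows the pattern the changelog implies: an assembly stub saves user registers into a pt_regs-style frame, then a C dispatcher picks a handler from a syscall table and leaves one place for exit work such as tracing and signal delivery. This is not the kernel's actual entry code; every name and type here is an illustrative stand-in.

/* toy_entry.c - hypothetical, heavily simplified sketch of a C syscall dispatcher */
#include <stdio.h>

/* Illustrative stand-in for the saved user-register frame. */
struct pt_regs {
	unsigned long ax;		/* syscall number on entry, return value on exit */
	unsigned long bx, cx, dx;	/* first three arguments, 32-bit style */
};

typedef long (*sys_call_fn)(unsigned long, unsigned long, unsigned long);

/* Toy "kernel" side: a single syscall that ignores its arguments. */
static long toy_sys_getpid(unsigned long a, unsigned long b, unsigned long c)
{
	(void)a; (void)b; (void)c;
	return 1234;
}

static const sys_call_fn toy_syscall_table[] = { toy_sys_getpid };
#define TOY_NR_SYSCALLS \
	(sizeof(toy_syscall_table) / sizeof(toy_syscall_table[0]))

/*
 * What the assembly entry stub would call after building the pt_regs frame:
 * validate the number, dispatch through the table, and store the return
 * value back where the stub will restore user registers from.
 */
static void toy_do_syscall_32(struct pt_regs *regs)
{
	unsigned long nr = regs->ax;

	if (nr < TOY_NR_SYSCALLS)
		regs->ax = (unsigned long)toy_syscall_table[nr](regs->bx, regs->cx, regs->dx);
	else
		regs->ax = (unsigned long)-38;	/* ENOSYS */

	/* Exit work (tracing, signal delivery, return-to-user checks) would live here. */
}

int main(void)
{
	struct pt_regs regs = { .ax = 0, .bx = 0, .cx = 0, .dx = 0 };

	toy_do_syscall_32(&regs);
	printf("toy syscall 0 returned %ld\n", (long)regs.ax);
	return 0;
}

In the real tree this role is played by the C helpers named in the commit list above (prepare_exit_to_usermode(), syscall_return_slowpath(), and friends), which the 32-bit and compat entry stubs now call.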

arch/x86/Kconfig

Lines changed: 49 additions & 0 deletions
@@ -2027,6 +2027,55 @@ config COMPAT_VDSO
 	  If unsure, say N: if you are compiling your own kernel, you
 	  are unlikely to be using a buggy version of glibc.
 
+choice
+	prompt "vsyscall table for legacy applications"
+	depends on X86_64
+	default LEGACY_VSYSCALL_EMULATE
+	help
+	  Legacy user code that does not know how to find the vDSO expects
+	  to be able to issue three syscalls by calling fixed addresses in
+	  kernel space. Since this location is not randomized with ASLR,
+	  it can be used to assist security vulnerability exploitation.
+
+	  This setting can be changed at boot time via the kernel command
+	  line parameter vsyscall=[native|emulate|none].
+
+	  On a system with recent enough glibc (2.14 or newer) and no
+	  static binaries, you can say None without a performance penalty
+	  to improve security.
+
+	  If unsure, select "Emulate".
+
+config LEGACY_VSYSCALL_NATIVE
+	bool "Native"
+	help
+	  Actual executable code is located in the fixed vsyscall
+	  address mapping, implementing time() efficiently. Since
+	  this makes the mapping executable, it can be used during
+	  security vulnerability exploitation (traditionally as
+	  ROP gadgets). This configuration is not recommended.
+
+config LEGACY_VSYSCALL_EMULATE
+	bool "Emulate"
+	help
+	  The kernel traps and emulates calls into the fixed
+	  vsyscall address mapping. This makes the mapping
+	  non-executable, but it still contains known contents,
+	  which could be used in certain rare security vulnerability
+	  exploits. This configuration is recommended when userspace
+	  still uses the vsyscall area.
+
+config LEGACY_VSYSCALL_NONE
+	bool "None"
+	help
+	  There will be no vsyscall mapping at all. This will
+	  eliminate any risk of ASLR bypass due to the vsyscall
+	  fixed address mapping. Attempts to use the vsyscalls
+	  will be reported to dmesg, so that either old or
+	  malicious userspace programs can be identified.
+
+endchoice
+
 config CMDLINE_BOOL
 	bool "Built-in kernel command line"
 	---help---
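For context on the help text above, "calling fixed addresses in kernel space" refers to the legacy vsyscall page, whose three entry points sit at well-known addresses (time() traditionally at 0xffffffffff600400, within the page at 0xffffffffff600000). A minimal user-space sketch of such legacy code, handy for trying out the vsyscall= boot modes, might look like the following; it is illustrative only, x86-64 specific, and will fault when the kernel runs with vsyscall=none.

/* vsyscall_demo.c - illustrative legacy caller of the fixed vsyscall time() slot */
#include <stdio.h>
#include <time.h>

/* Historical fixed vsyscall slot for time(); the page starts at 0xffffffffff600000. */
#define VSYSCALL_TIME_ADDR 0xffffffffff600400UL

typedef time_t (*vsyscall_time_t)(time_t *);

int main(void)
{
	/* Legacy pattern: call a hard-coded kernel address instead of the vDSO. */
	vsyscall_time_t vtime = (vsyscall_time_t)VSYSCALL_TIME_ADDR;
	time_t t = vtime(NULL);

	printf("time() via the fixed vsyscall address: %ld\n", (long)t);
	return 0;
}

Under LEGACY_VSYSCALL_EMULATE such a call is trapped and emulated by the kernel; under LEGACY_VSYSCALL_NONE it is reported and the process faults, which is how old or malicious callers show up in dmesg, as the help text describes.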

arch/x86/Makefile

Lines changed: 8 additions & 2 deletions
@@ -159,15 +159,21 @@ endif
 sp-$(CONFIG_X86_32) := esp
 sp-$(CONFIG_X86_64) := rsp
 
+# do binutils support CFI?
+cfi := $(call as-instr,.cfi_startproc\n.cfi_rel_offset $(sp-y)$(comma)0\n.cfi_endproc,-DCONFIG_AS_CFI=1)
+# is .cfi_signal_frame supported too?
+cfi-sigframe := $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1)
+cfi-sections := $(call as-instr,.cfi_sections .debug_frame,-DCONFIG_AS_CFI_SECTIONS=1)
+
 # does binutils support specific instructions?
 asinstr := $(call as-instr,fxsaveq (%rax),-DCONFIG_AS_FXSAVEQ=1)
 asinstr += $(call as-instr,pshufb %xmm0$(comma)%xmm0,-DCONFIG_AS_SSSE3=1)
 asinstr += $(call as-instr,crc32l %eax$(comma)%eax,-DCONFIG_AS_CRC32=1)
 avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
 avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
 
-KBUILD_AFLAGS += $(asinstr) $(avx_instr) $(avx2_instr)
-KBUILD_CFLAGS += $(asinstr) $(avx_instr) $(avx2_instr)
+KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr)
+KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr)
 
 LDFLAGS := -m elf_$(UTS_MACHINE)
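The Makefile hunk probes the assembler with as-instr and, when a probe succeeds, adds a define such as -DCONFIG_AS_CFI to both KBUILD_AFLAGS and KBUILD_CFLAGS. A typical way such a define is consumed is to compile CFI annotations in or out of hand-written assembly. The sketch below is a hypothetical, self-contained illustration of that pattern; it is not the kernel's dwarf2.h, and my_return_42 is an invented example symbol.

/* cfi_demo.c - illustrative use of a -DCONFIG_AS_CFI style capability define */
#include <stdio.h>

/* Gate CFI annotations on the build-time probe result (illustrative only). */
#ifdef CONFIG_AS_CFI
# define CFI_STARTPROC "\t.cfi_startproc\n"
# define CFI_ENDPROC   "\t.cfi_endproc\n"
#else
# define CFI_STARTPROC
# define CFI_ENDPROC
#endif

/* A tiny x86-64 assembly function, annotated only when the assembler supports CFI. */
__asm__(
	".text\n"
	".globl my_return_42\n"
	"my_return_42:\n"
	CFI_STARTPROC
	"\tmovl $42, %eax\n"
	"\tret\n"
	CFI_ENDPROC
);

int my_return_42(void);	/* invented example symbol, not from the kernel */

int main(void)
{
	printf("my_return_42() = %d\n", my_return_42());
	return 0;
}

Building with cc -DCONFIG_AS_CFI cfi_demo.c emits the .cfi_startproc/.cfi_endproc pair for the unwinder; without the define the same source still assembles on an older binutils that lacks CFI support, which is exactly the situation the as-instr probe distinguishes.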
