Commit 6b83225

paulburton authored and ralfbaechle committed
MIPS: Force CPUs to lose FP context during mode switches
Commit 9791554 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS") added support for the PR_SET_FP_MODE prctl, which allows a userland program to modify its FP mode at runtime. This is most notably required if dynamic linking leads to the FP mode requirement changing at runtime from that indicated in the initial executable's ELF header. In order to avoid overhead in the general FP context restore code, it aimed to have threads in the process become unable to enable the FPU during a mode switch & have the thread calling the prctl syscall wait for all other threads in the process to be context switched at least once. Once that happens we can know that no thread in the process whose mode will be switched has live FP context, and it's safe to perform the mode switch. However in the (rare) case of modeswitches occurring in multithreaded programs this can lead to indeterminate delays for the thread invoking the prctl syscall, and the code monitoring for those context switches was woefully inadequate for all but the simplest cases. Fix this by broadcasting an IPI if other CPUs may have live FP context for an affected thread, with a handler causing those CPUs to relinquish their FPU ownership. Threads will then be allowed to continue running but will stall on the wait_on_atomic_t in enable_restore_fp_context if they attempt to use FP again whilst the mode switch is still in progress. The end result is less fragile poking at scheduler context switch counts & a more expedient completion of the mode switch. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: 9791554 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS") Reviewed-by: Maciej W. 
Rozycki <macro@imgtec.com> Cc: Adam Buchbinder <adam.buchbinder@gmail.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: stable <stable@vger.kernel.org> # v4.0+ Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13145/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
1 parent bd239f1 commit 6b83225

File tree

1 file changed, 17 insertions(+), 23 deletions(-)

arch/mips/kernel/process.c

Lines changed: 17 additions & 23 deletions

@@ -576,11 +576,19 @@ int mips_get_process_fp_mode(struct task_struct *task)
 	return value;
 }
 
+static void prepare_for_fp_mode_switch(void *info)
+{
+	struct mm_struct *mm = info;
+
+	if (current->mm == mm)
+		lose_fpu(1);
+}
+
 int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
 {
 	const unsigned int known_bits = PR_FP_MODE_FR | PR_FP_MODE_FRE;
-	unsigned long switch_count;
 	struct task_struct *t;
+	int max_users;
 
 	/* Check the value is valid */
 	if (value & ~known_bits)
@@ -609,31 +617,17 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
 	smp_mb__after_atomic();
 
 	/*
-	 * If there are multiple online CPUs then wait until all threads whose
-	 * FP mode is about to change have been context switched. This approach
-	 * allows us to only worry about whether an FP mode switch is in
-	 * progress when FP is first used in a tasks time slice. Pretty much all
-	 * of the mode switch overhead can thus be confined to cases where mode
-	 * switches are actually occurring. That is, to here. However for the
-	 * thread performing the mode switch it may take a while...
+	 * If there are multiple online CPUs then force any which are running
+	 * threads in this process to lose their FPU context, which they can't
+	 * regain until fp_mode_switching is cleared later.
 	 */
 	if (num_online_cpus() > 1) {
-		spin_lock_irq(&task->sighand->siglock);
-
-		for_each_thread(task, t) {
-			if (t == current)
-				continue;
-
-			switch_count = t->nvcsw + t->nivcsw;
-
-			do {
-				spin_unlock_irq(&task->sighand->siglock);
-				cond_resched();
-				spin_lock_irq(&task->sighand->siglock);
-			} while ((t->nvcsw + t->nivcsw) == switch_count);
-		}
+		/* No need to send an IPI for the local CPU */
+		max_users = (task->mm == current->mm) ? 1 : 0;
 
-		spin_unlock_irq(&task->sighand->siglock);
+		if (atomic_read(&current->mm->mm_users) > max_users)
+			smp_call_function(prepare_for_fp_mode_switch,
+					  (void *)current->mm, 1);
 	}
 
 	/*
