Commit 92bb1fc
time: Only do nanosecond rounding on GENERIC_TIME_VSYSCALL_OLD systems
We only do rounding to the next nanosecond so we don't see minor 1ns
inconsistencies in the vsyscall implementations. Since we're changing
the vsyscall implementations to avoid this, conditionalize the rounding
only to the GENERIC_TIME_VSYSCALL_OLD architectures.

Cc: Tony Luck <tony.luck@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Turner <pjt@google.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Parent: 576094b

1 file changed, 31 insertions(+), 14 deletions(-)


kernel/time/timekeeping.c

@@ -1062,6 +1062,33 @@ static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
 	return offset;
 }
 
+#ifdef CONFIG_GENERIC_TIME_VSYSCALL_OLD
+static inline void old_vsyscall_fixup(struct timekeeper *tk)
+{
+	s64 remainder;
+
+	/*
+	 * Store only full nanoseconds into xtime_nsec after rounding
+	 * it up and add the remainder to the error difference.
+	 * XXX - This is necessary to avoid small 1ns inconsistencies caused
+	 * by truncating the remainder in vsyscalls. However, it causes
+	 * additional work to be done in timekeeping_adjust(). Once
+	 * the vsyscall implementations are converted to use xtime_nsec
+	 * (shifted nanoseconds), and CONFIG_GENERIC_TIME_VSYSCALL_OLD
+	 * users are removed, this can be killed.
+	 */
+	remainder = tk->xtime_nsec & ((1ULL << tk->shift) - 1);
+	tk->xtime_nsec -= remainder;
+	tk->xtime_nsec += 1ULL << tk->shift;
+	tk->ntp_error += remainder << tk->ntp_error_shift;
+
+}
+#else
+#define old_vsyscall_fixup(tk)
+#endif
+
+
+
 /**
  * update_wall_time - Uses the current clocksource to increment the wall time
  *
@@ -1073,7 +1100,6 @@ static void update_wall_time(void)
 	cycle_t offset;
 	int shift = 0, maxshift;
 	unsigned long flags;
-	s64 remainder;
 
 	write_seqlock_irqsave(&tk->lock, flags);
 
@@ -1115,20 +1141,11 @@ static void update_wall_time(void)
 	/* correct the clock when NTP error is too big */
 	timekeeping_adjust(tk, offset);
 
-
 	/*
-	 * Store only full nanoseconds into xtime_nsec after rounding
-	 * it up and add the remainder to the error difference.
-	 * XXX - This is necessary to avoid small 1ns inconsistencies caused
-	 * by truncating the remainder in vsyscalls. However, it causes
-	 * additional work to be done in timekeeping_adjust(). Once
-	 * the vsyscall implementations are converted to use xtime_nsec
-	 * (shifted nanoseconds), this can be killed.
-	 */
-	remainder = tk->xtime_nsec & ((1ULL << tk->shift) - 1);
-	tk->xtime_nsec -= remainder;
-	tk->xtime_nsec += 1ULL << tk->shift;
-	tk->ntp_error += remainder << tk->ntp_error_shift;
+	 * XXX This can be killed once everyone converts
+	 * to the new update_vsyscall.
+	 */
+	old_vsyscall_fixup(tk);
 
 	/*
 	 * Finally, make sure that after the rounding
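The rounding arithmetic above is compact, so a standalone illustration may help. Below is a minimal user-space C sketch of what old_vsyscall_fixup() does to the shifted-nanosecond value; the fake_timekeeper struct and the shift/error values are hypothetical stand-ins for the kernel's struct timekeeper, and only the three lines of rounding arithmetic mirror the patch.

/*
 * Standalone sketch (not kernel code) of the rounding done by
 * old_vsyscall_fixup(). fake_timekeeper and the values in main()
 * are made up for illustration.
 */
#include <stdio.h>
#include <stdint.h>

struct fake_timekeeper {
	uint64_t xtime_nsec;      /* nanoseconds << shift ("shifted nanoseconds") */
	uint32_t shift;           /* clocksource shift value */
	int64_t  ntp_error;       /* accumulated NTP error */
	uint32_t ntp_error_shift; /* scale between xtime_nsec and ntp_error units */
};

static void fixup(struct fake_timekeeper *tk)
{
	/* The sub-nanosecond fraction held in the low 'shift' bits */
	int64_t remainder = tk->xtime_nsec & ((1ULL << tk->shift) - 1);

	tk->xtime_nsec -= remainder;         /* drop the fraction...        */
	tk->xtime_nsec += 1ULL << tk->shift; /* ...and round up one full ns */
	tk->ntp_error += remainder << tk->ntp_error_shift; /* track the error */
}

int main(void)
{
	struct fake_timekeeper tk = {
		.xtime_nsec = (1000ULL << 8) + 5, /* 1000ns plus 5/256 of a ns */
		.shift = 8,
		.ntp_error = 0,
		.ntp_error_shift = 2,
	};

	fixup(&tk);

	/* Prints: 1001 full ns, remainder 0, ntp_error 20 (= 5 << 2) */
	printf("%llu full ns, remainder %llu, ntp_error %lld\n",
	       (unsigned long long)(tk.xtime_nsec >> tk.shift),
	       (unsigned long long)(tk.xtime_nsec & ((1ULL << tk.shift) - 1)),
	       (long long)tk.ntp_error);
	return 0;
}

With shift = 8, the value (1000 << 8) + 5 (1000ns plus a 5/256ns fraction) is rounded up to 1001 full nanoseconds, and the dropped fraction is charged to ntp_error so that timekeeping_adjust() can correct for it later; that is the "additional work" the comment warns about. On architectures that have converted away from CONFIG_GENERIC_TIME_VSYSCALL_OLD, the #else branch turns the call into a no-op.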
