Commit 814b879

cpuidle: menu: Avoid overflows when computing variance
The variance computation in get_typical_interval() may overflow if the square of the value of diff exceeds the maximum for the int64_t data type, which basically is the case when it is of the order of UINT_MAX.

However, data points so far in the future don't matter for idle state selection anyway, so change the initial threshold value in get_typical_interval() to INT_MAX, which will cause more "outlying" data points to be discarded without affecting the selection result.

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
1 parent ef80068 commit 814b879

drivers/cpuidle/governors/menu.c

Lines changed: 1 addition & 1 deletion
```diff
@@ -186,7 +186,7 @@ static unsigned int get_typical_interval(struct menu_device *data,
 	unsigned int min, max, thresh, avg;
 	uint64_t sum, variance;
 
-	thresh = UINT_MAX; /* Discard outliers above this value */
+	thresh = INT_MAX; /* Discard outliers above this value */
 
 again:
 
```