Description
Bug description:
It's supposed to measure sort speed for different kinds of data, from best to worst case, but currently it reports roughly the same speed for every kind. The reason is that the same list object is sorted a thousand times, so all but the first sort just re-sort the already-sorted list. Those repeat sorts dominate the total time, making the one real sort insignificant.
Oddly it was specifically changed to be this way, here, from
```python
def _prepare_data(self, loops: int) -> list[float]:
    bench = BENCHMARKS[self._name]
    return [
        bench(self._size, self._random)
        for _ in range(loops)
    ]
```
to:
```python
def _prepare_data(self, loops: int) -> list[float]:
    bench = BENCHMARKS[self._name]
    return [bench(self._size, self._random)] * loops
```
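The aliasing problem can be seen in a minimal sketch (with a hypothetical `make_data` standing in for a `BENCHMARKS` entry; names and sizes are illustrative, not from the actual script):

```python
import random
import time

def make_data(size: int, rng: random.Random) -> list[float]:
    # Hypothetical stand-in for one BENCHMARKS entry (the "random" case).
    return [rng.random() for _ in range(size)]

loops, size = 1000, 1000

# Buggy variant: `* loops` repeats a reference to ONE list object.
shared = [make_data(size, random.Random(42))] * loops
print(shared[0] is shared[1])  # True: every entry is the same object
t0 = time.perf_counter()
for lst in shared:
    lst.sort()  # only the first call sorts unsorted data
buggy_time = time.perf_counter() - t0

# Correct variant: a fresh, independent list per iteration.
fresh = [make_data(size, random.Random(42)) for _ in range(loops)]
print(fresh[0] is fresh[1])  # False: distinct objects
t0 = time.perf_counter()
for lst in fresh:
    lst.sort()  # every call sorts genuinely unsorted data
fresh_time = time.perf_counter() - t0

print(f"shared: {buggy_time:.4f}s  fresh: {fresh_time:.4f}s")
```

With the shared list, 999 of the 1000 iterations exercise Timsort's already-sorted fast path, so every benchmark case converges to roughly the same tiny per-sort time regardless of the input distribution.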
There was even a comment showing that all cases, from best to worst, now take about the same tiny time (29.2 us), calling it an optimization (and I think it refers to the above change). So is this intentional/desired? I highly doubt it, but if it's a mistake, it's a really odd one.
The whole update was, by the way, called "Move Lib/test/sortperf.py to Tools/scripts", but in reality the file wasn't just moved, it was completely rewritten. I don't think that was good; in my opinion, moves should just be moves, not hide massive changes.
CPython versions tested on:
CPython main branch
Operating systems tested on:
No response