
Comparing changes

base repository: python/pyperformance
base: 1.10.0
head repository: python/pyperformance
compare: 1.11.0
  • 8 commits
  • 17 files changed
  • 5 contributors

Commits on Nov 2, 2023

  1. f7f3650

Commits on Jan 16, 2024

  1. 9756f98
  2. 32c6bbf

Commits on Jan 27, 2024

  1. dcf71dc

Commits on Feb 2, 2024

  1. Add a feature for using the same number of loops as a previous run (#327)
    
    Motivation:
    
    On the Faster CPython team, we often collect pystats (counters of various interpreter events) by running the benchmark suite. It is very useful to compare the stats between two commits to see how a pull request affects the interpreter. Unfortunately, with pyperformance's default behavior, where the number of loops is automatically calibrated, each benchmark may not run the same number of times from run to run, making the data hard to compare.
    
    This change adds a new argument to the "run" command which uses the same number of loops as a previous run. The number of loops for each benchmark is looked up from the metadata in the .json output of that previous run and passed to the underlying pyperf call via the --loops argument (a rough sketch of the mechanics follows this entry).
    
    Additionally, this modifies one of the benchmarks (sqlglot) to be compatible with that scheme. sqlglot was the only run_benchmark.py script that ran multiple benchmarks in a single call, which made it impossible to set the number of loops independently for each of them. It has been updated to use the pattern from other "suites" of benchmarks (e.g. async_tree), where each benchmark has its own .toml file and is run independently. This should still be backward compatible with older data collected from this benchmark, but "pyperformance run -b sqlglot" will now only run a single benchmark.
    mdboom authored Feb 2, 2024
    79f80a4
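A rough sketch of the loop-reuse mechanics described above. The JSON layout and the helper name here are assumptions for illustration, not the shipped interface; only the --loops hand-off to pyperf comes from the commit message.

    import json

    def loops_from_previous_run(path):
        """Collect per-benchmark loop counts from a previous run's
        pyperf JSON output (assumed layout: one "benchmarks" entry per
        benchmark, with the calibrated loop count in its metadata)."""
        with open(path) as f:
            suite = json.load(f)
        loops = {}
        for bench in suite.get("benchmarks", []):
            meta = bench.get("metadata", {})
            if "name" in meta and "loops" in meta:
                loops[meta["name"]] = meta["loops"]
        return loops

    # Each looked-up count would then be forwarded to pyperf via --loops
    # (e.g. appending "--loops=<n>" to the benchmark's command line), so
    # every benchmark runs exactly as many iterations as in the prior run.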

Commits on Mar 5, 2024

  1. Fix the django_template benchmark (#329)

    The benchmark was broken by the removal of the cgi module in Python 3.13. This change adds the legacy-cgi PyPI library as a dependency as a workaround (sketched after this entry).
    mdboom authored Mar 5, 2024
    1676592
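A minimal sketch of why this workaround helps, assuming (as the package name suggests) that legacy-cgi republishes the removed stdlib module under the same import name:

    # On Python 3.13+, the stdlib cgi module is gone; after installing
    # the legacy-cgi package, the same import resolves again (assumed
    # behavior of that package), so code like django_template's
    # dependencies keeps working unmodified.
    import cgi

    # One of the helpers the stdlib module used to provide:
    print(cgi.parse_header("text/html; charset=utf-8"))
    # -> ('text/html', {'charset': 'utf-8'})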

Commits on Mar 8, 2024

  1. 52a4c58

Commits on Mar 9, 2024

  1. ad7824c