Add microbenchmarking to CI #886
Comments
I suggest separating the benchmark from the normal tests; it's merely an auxiliary run. And it isn't even running the newest branch code for now, which may disturb the judgement of test correctness.
@amos402 not sure what you mean by not running the newest branch code. It does run the newest Python.NET code, and compares it to the baseline.
Oh yeah, it is. That was just a misunderstanding on my part. But I still think it's better to separate them.
@amos402 why though? Having the benchmark run as part of CI (as long as it is quick and reliable) has a major advantage: if your change fails it, you'll see, and will either have a chance to fix the performance regression, or adjust the performance target explicitly in the test code, which would let reviewers immediately notice and weigh the performance impact against the usefulness of the change. P.S. yesterday I saw tests failing in AppVeyor on timeout, but it was a random thing in AppVeyor, unrelated to the perf tests in CI. FYI, at this moment perf tests are only run in one configuration: Python 3.5 x64, because the performance baseline is the Python.NET NuGet package, which only supports that configuration.
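To illustrate the "explicit performance target in the test code" idea, here is a minimal, hypothetical sketch, not the actual Python.NET benchmark suite; the `BASELINE_SECONDS` constant and the choice of `import clr` as the measured operation are assumptions made for the example:

```python
# Hypothetical sketch of a perf test with an explicit, reviewer-visible target.
# Not the actual Python.NET benchmark code.
import time

# Explicit performance target; loosening it means editing this constant,
# so the change shows up in review next to whatever caused the regression.
BASELINE_SECONDS = 0.5


def test_import_clr_is_fast():
    start = time.perf_counter()
    import clr  # noqa: F401  -- the operation being measured (pythonnet)
    elapsed = time.perf_counter() - start
    assert elapsed < BASELINE_SECONDS, (
        f"importing clr took {elapsed:.3f}s, target is {BASELINE_SECONDS:.3f}s"
    )
```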
Jumbling functional tests and performance tests together isn't a good idea. They're two different things. Do a quick search and you'll find more articles like this one: https://www.neotys.com/blog/what-does-test-driven-development-mean-for-performance-testers/
@amos402 the article you linked actually advocates perf tests alongside functional tests 😊
Yeah, they're two things, don't mix them.
@amos402 they are not mixed. They run as two separate steps of the CI. The only problem is that the results reported in the GitHub UI are mixed together, but you can still drill down in AppVeyor. If that is a problem, feel free to separate the pipeline. I will close the issue, as it is implemented now. That should not prevent further discussion here. Or you can open another issue for separating the pipelines and/or an RFC on an alternative way to run perf tests as part of CI.
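If someone does want to split the pipelines, one possible approach (an assumption for illustration, not what the repository currently does) is to tag performance tests with a pytest marker and invoke the two groups as separate CI commands:

```python
# Hypothetical layout: mark perf tests so CI can run them in their own step, e.g.
#   pytest -m "not perf"   # functional step
#   pytest -m perf         # performance step
# The "perf" marker would need to be registered in pytest.ini or setup.cfg.
import time

import pytest


@pytest.mark.perf
def test_static_attribute_access_throughput():
    import clr  # noqa: F401  -- loads the pythonnet import hook
    from System import String  # any cheap .NET member access will do

    start = time.perf_counter()
    for _ in range(10_000):
        _ = String.Empty
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0  # hypothetical target in seconds
```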
Python.NET continuous integration needs microbenchmark tests to track changes in performance.