Add microbenchmarking to CI #886

Closed
lostmsu opened this issue Jun 19, 2019 · 8 comments

@lostmsu
Member

lostmsu commented Jun 19, 2019

Python.NET continuous integration needs microbenchmark tests to track changes in performance.
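For illustration, a minimal sketch of such a microbenchmark, assuming the pythonnet package is installed and using the standard timeit module; the .NET call being measured is only an example:

```python
# Hypothetical microbenchmark sketch: times one hot Python.NET interop path
# so CI can track the number across commits. The measured API is illustrative.
import timeit

def bench_method_call(number=100_000):
    setup = "import clr; from System import Guid"  # pythonnet exposes .NET namespaces to Python
    stmt = "Guid.NewGuid()"                        # a cheap call across the managed boundary
    # best-of-N to reduce noise from a shared CI machine
    return min(timeit.repeat(stmt, setup=setup, repeat=5, number=number)) / number

if __name__ == "__main__":
    print(f"Guid.NewGuid() via Python.NET: {bench_method_call() * 1e6:.3f} us/call")
```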

@amos402
Member

amos402 commented Feb 14, 2020

I suggest separating the benchmark from the normal tests; it is merely an auxiliary run. It also does not even run the newest branch code for now, which may cloud the judgement of test correctness.

@lostmsu
Member Author

lostmsu commented Feb 15, 2020

@amos402 I'm not sure what you mean by not running the newest branch code. It does run the newest Python.NET code and compares it to the baseline.

@amos402
Member

amos402 commented Feb 15, 2020

Oh yeah, it does. That was just a misunderstanding on my part. But I still think it's better to separate them.

@lostmsu
Member Author

lostmsu commented Feb 15, 2020

@amos402 why though? Having the benchmark run as part of CI (as long as it is quick and reliable) has a major advantage: if your change fails it, you'll see it, and you'll either have a chance to fix the performance regression or adjust the performance target explicitly in the test code, which lets reviewers immediately notice and weigh the performance impact against the usefulness of the change.

P.S. Yesterday I saw tests failing in AppVeyor on a timeout, but that was a random AppVeyor issue, unrelated to the perf tests in CI. FYI, at the moment perf tests run in only one configuration (Python 3.5 x64), because the performance baseline is the Python.NET NuGet package, which only has a 2.3.0-py35-dotnet variant.
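To illustrate the "adjust the performance target explicitly in the test code" idea, here is a rough sketch; the names, the measured statement, and the 1.25 ratio are assumptions for illustration, not the project's actual values. The current build is timed and compared against the baseline package, and loosening the limit requires an edit that reviewers will see:

```python
# Hypothetical perf-regression check with an explicit, reviewable target.
# The ratio and the measured statement are assumptions for illustration only.
import timeit

BASELINE_RATIO_LIMIT = 1.25  # current code may be at most 25% slower than the baseline

def measure(stmt, setup="import clr; from System import Guid", number=50_000):
    # best-of-N total time for `number` executions, in seconds
    return min(timeit.repeat(stmt, setup=setup, repeat=5, number=number))

def check_not_regressed(baseline_seconds):
    # baseline_seconds would be produced by running the same statement against
    # the published Python.NET baseline package in a separate environment.
    current = measure("Guid.NewGuid()")
    assert current <= baseline_seconds * BASELINE_RATIO_LIMIT, (
        f"perf regression: {current:.3f}s vs baseline {baseline_seconds:.3f}s"
    )
```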

@lostmsu lostmsu self-assigned this Feb 15, 2020
@amos402
Member

amos402 commented Feb 15, 2020

Jumbling functional tests and performance tests together isn't a good idea; they're two different things. A quick search turns up more articles on this, e.g. https://www.neotys.com/blog/what-does-test-driven-development-mean-for-performance-testers/

@lostmsu
Member Author

lostmsu commented Feb 15, 2020

@amos402 the article you linked actually advocates perf tests alongside functional tests 😊

@amos402
Member

amos402 commented Feb 15, 2020

Yeah, they're two things, don't mix them.

@lostmsu
Member Author

lostmsu commented Feb 15, 2020

@amos402 they are not mixed. They run as two separate steps of the CI. The only problem is that the results report in the GitHub UI is combined, but you can still drill down in AppVeyor. If that is a problem, feel free to separate the pipelines.

I will close the issue, as it is implemented now. That should not prevent further discussion here. Alternatively, you can open another issue for separating the pipelines and/or an RFC on an alternative way to run perf tests as part of CI.
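For reference, the "two separate steps" could look roughly like this in appveyor.yml; the step names and script paths are assumptions, not the project's actual configuration:

```yaml
# Hypothetical layout: functional and perf tests as separate test steps,
# so each gets its own output in AppVeyor instead of one combined run.
test_script:
  - ps: .\run-functional-tests.ps1   # functional/unit tests
  - ps: .\run-perf-tests.ps1         # microbenchmarks vs. the baseline package
```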

@lostmsu lostmsu closed this as completed Feb 15, 2020