BUG, Benchmark: fix passing optimization build options to asv #17736
Conversation
I tested your code with two strategies. To sum it up, the code here works fine on my computer. Thanks @seiko2plus.
"PIP_NO_BUILD_ISOLATION=false python -mpip wheel --no-deps --no-index -w {build_cache_dir} {build_dir}" | ||
// pip ignores '--global-option' when pep517 is enabled, we also enabling pip verbose to | ||
// be reached from asv `--verbose` so we can verify the build options. | ||
"PIP_NO_BUILD_ISOLATION=false python {build_dir}/benchmarks/asv_pip_nopep517.py -v {numpy_global_options} --no-deps --no-index -w {build_cache_dir} {build_dir}" |
This should be broken into two steps: one to check that --no-use-pep517 is supported, and one to build the wheel. It might be easier to avoid the pip build altogether by adding a bdist_wheel build step:
rm -rf dist
python setup.py build {numpy_build_options} bdist_wheel
python -mpip install dist/*.whl
This should be broken into two steps: one to check that --no-use-pep517 is supported, and one to build the wheel
This is already handled through a separate Python script, asv_pip_nopep517.py; we're not free to use POSIX commands since we're also targeting non-Unix platforms, e.g. Windows.
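For anyone following the thread, here is a minimal sketch of how such a wrapper can work: probe whether the installed pip supports --no-use-pep517 and only then add the flag, forwarding whatever arguments asv substituted into the build_command. This is my own illustration of the approach, not necessarily the exact contents of asv_pip_nopep517.py:

# Hypothetical sketch of a pip-wheel wrapper that disables PEP 517 only when
# the installed pip actually supports the --no-use-pep517 flag.
import subprocess
import sys

pip_wheel = [sys.executable, "-m", "pip", "wheel"]

# Probe pip's help text for the flag instead of hard-coding a pip version.
help_text = subprocess.run(
    pip_wheel + ["--help"], capture_output=True, text=True
).stdout
if "--no-use-pep517" in help_text:
    pip_wheel.append("--no-use-pep517")

# Forward the arguments asv substituted into the build command, e.g.
# -v {numpy_global_options} --no-deps --no-index -w {build_cache_dir} {build_dir}
sys.exit(subprocess.run(pip_wheel + sys.argv[1:]).returncode)

Keeping this logic in a Python script rather than in shell syntax is what lets the same build_command run on Windows as well.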
It might be easier to avoid the pip build altogether by adding a bdist_wheel build step
I tried to use bdist_wheel, but ASV wasn't able to fetch the wheel file and cache it.
I also tried to use only the pip command, but for some reason ccache doesn't work properly.
Here are the default values of the build commands:

"install_command": ["python -mpip install {wheel_file}"],
"uninstall_command": ["return-code=any python -mpip uninstall -y {project}"],
"build_command": [
    "python setup.py build",
    "PIP_NO_BUILD_ISOLATION=false python -mpip wheel --no-deps --no-index -w {build_cache_dir} {build_dir}"
]
I think this PR is good to merge and will unlock other performance optimizations.
While I do not debate the value of passing optimization flags to asv, I think this PR is still not ready. There should be a simpler and more direct way to get interaction between asv and code building. Here are some ideas; it may be worth discussing them more widely:
Unfortunately, we first need to take a decision: either accept this pull request as a workaround, or close it and revert #17284.
As a start, we can create an independent script.
Here is the issue: ASV is based on command arguments, which kills the flexibility. EDIT: I mentioned a wrong PR.
I think it's time to drop asv.
@Qiyu8, with all respect, we can't count on another project or community for every single new feature. I see google benchmark as a good alternative; we could build our new tool on top of it. It already provides decent flexibility and has been used by many projects.
I marked this for triage-review so we can get some more opinions.
@mattip, thank you Matti, I really appreciate your help.
The SIMD optimizations are stretching the current workflows.
Maybe we should think a bit about separating the loops out into a separate library that would be easier to test and benchmark, along the lines of issue gh-17698.
In the triage meeting we decided to merge this, but we would like to rethink benchmarking in general. That is a different discussion though, so in it goes.
Thanks @seiko2plus
Infrastructure, SIMD, and GPU optimizations must be part of the core of any modern computing library; branching the core or counting on other projects will badly affect the workflow, especially for big projects. We have something new here, so let's give it a full chance before thinking about splitting it into a separate repo. I think NumPy should gain a foothold in high-performance computing instead of playing the mediator role, in order to maintain its popularity.
We also need a tool for sanity checks of the results. OpenCV has a 2-in-1 tool that does benchmarking and sanity checks at the same time; see HowToUsePerfTests. It is very effective in detecting any deviations.
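To make the 2-in-1 idea concrete, here is a minimal hypothetical sketch (my own example, not OpenCV's API and not existing NumPy benchmark code) of how an asv-style benchmark class could pair a timed benchmark with a tracked sanity metric, so accuracy regressions show up next to timing results:

import numpy as np

class AddBenchWithSanityCheck:
    """Times np.add and also tracks its maximum deviation from a reference."""

    def setup(self):
        self.a = np.linspace(0.0, 1.0, 1_000_000)
        self.b = np.full_like(self.a, 0.5)

    def time_add(self):
        # asv times any method prefixed with `time_`
        np.add(self.a, self.b)

    def track_add_max_abs_error(self):
        # asv records the return value of any method prefixed with `track_`,
        # so a numerical deviation is tracked across commits just like a timing
        reference = (self.a.astype(np.longdouble) + 0.5).astype(np.float64)
        return float(np.max(np.abs(np.add(self.a, self.b) - reference)))

Since asv already tracks both kinds of methods in the same report, this would not require a new engine, only a convention for writing the checks.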
Why do we need a new engine when we already have Python? We just need a better way of using it in our C sources. I think #17952 could be a good solution.
Thank you a lot.
I'm not sure what we should do: create our own minimal tool that serves only our needs, or contribute to ASV. Which do you prefer?
In general, I do not like re-inventing the wheel. If a tool or paradigm can be adapted to our needs, then that is preferable to creating our own, unless our needs are very different.
I disagree. If by splitting out the great work you have done with SIMD we enable others to reuse it, that would be progress.
It seems like right now ASV and pyperf are very similar, but neither serves our needs very well. I think we should explore ways to make ASV better before creating our own tool. Making those tools better would help others who are also struggling with benchmarking.
Well said, I very much agree 👍
@mattip, I will give ASV a try. I guess we have enough of a burden already and no need to open a new front.
Fix passing optimization build options to asv
closes #17716
see #17716 (comment)