yield tests are deprecated in pytest #10728
yield tests are deprecated in pytest and we are getting many warnings in the test logs. I think we should get rid of yield.
See pytest-dev/pytest#1714
ping @jnothman @lesteve

Comments
Also all the assert_equal, assert_true, etc. (e.g. here: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/tests/test_logistic.py#L12-L16) can be removed now that pytest is used for tests. There must be an issue on it somewhere but I'm not able to find it.
Or was it decided that e.g. running nose2pytest (https://github.com/pytest-dev/nose2pytest) on the whole repo was not desirable as it would create conflicts for existing PRs? |
In #8321, @HolgerPeters made some/all of these changes (see #7319 (comment)). We weren't ready for them then. We should take on the yield test changes now, file by file, if possible.
There is, however, an easier way to transition yield tests than that proposed in #8321 (making use of parametrize): simply execute the function instead of yielding it.
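To make that concrete, here is a minimal before/after sketch (hypothetical code, not taken from the scikit-learn test suite):

```py
def check_even(x):
    assert x % 2 == 0

# nose-style yield test: each yielded (func, arg) tuple ran as its own test
def test_even_numbers_yield():
    for x in [2, 4, 6]:
        yield check_even, x

# the simple transition: call the check instead of yielding it
def test_even_numbers():
    for x in [2, 4, 6]:
        check_even(x)
```

The downside, discussed further below, is that the checks no longer appear as separate tests.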
|
Since we usually add rather than modify tests in PRs, I'm not averse to wholesale conversion of assert_true to assert, etc. (By the same logic I should not be averse to wholesale autopep8 of test files.) It will still make conflicts for several PRs, though. But conflict resolution is hardly the development bottleneck here, nor will these be hard conflicts to solve.
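The kind of mechanical rewrite in question would look like this (an illustrative snippet written against the scikit-learn API of that era; sklearn.utils.testing has since been removed):

```py
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris
from sklearn.utils.testing import assert_equal, assert_true

X, y = load_iris(return_X_y=True)

# before: nose/unittest-style helpers
def test_fit_returns_self_old():
    clf = LogisticRegression()
    assert_true(clf.fit(X, y) is clf)
    assert_equal(clf.coef_.shape[0], 3)

# after: bare asserts, letting pytest render the failure message
def test_fit_returns_self():
    clf = LogisticRegression()
    assert clf.fit(X, y) is clf
    assert clf.coef_.shape[0] == 3
```

This is the kind of transformation nose2pytest automates (for nose.tools-style asserts).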
|
Merge conflicts with parametrize-based fixes to yield tests will be hard to resolve... |
Thanks for the additional details @jnothman @lesteve!
Maybe module by module would be easier to follow (apart from the estimator_checks)? Here is a list of all test files (~29) that use yield (this can have some false positives, feel free to edit this comment).

```sh
$ find sklearn/ -iname "*test_*py" -exec grep -l '^[[:space:]]\+yield ' {} +
```

sklearn/ensemble/tests/test_gradient_boosting.py
sklearn/ensemble/tests/test_forest.py
sklearn/linear_model/tests/test_ridge.py
sklearn/manifold/tests/test_t_sne.py
sklearn/metrics/tests/test_common.py
sklearn/metrics/tests/test_ranking.py
sklearn/metrics/tests/test_pairwise.py
sklearn/metrics/tests/test_score_objects.py
sklearn/preprocessing/tests/test_label.py
sklearn/svm/tests/test_bounds.py
sklearn/utils/tests/test_stats.py
sklearn/utils/tests/test_extmath.py
sklearn/gaussian_process/tests/test_kernels.py
sklearn/model_selection/tests/test_validation.py
sklearn/mixture/tests/test_gmm.py
sklearn/neighbors/tests/test_neighbors.py
sklearn/neighbors/tests/test_kd_tree.py
sklearn/neighbors/tests/test_dist_metrics.py
sklearn/neighbors/tests/test_quad_tree.py
sklearn/neighbors/tests/test_ball_tree.py
sklearn/neighbors/tests/test_kde.py
sklearn/decomposition/tests/test_pca.py
sklearn/decomposition/tests/test_online_lda.py
sklearn/tree/tests/test_tree.py
sklearn/datasets/tests/test_svmlight_format.py
sklearn/tests/test_naive_bayes.py
sklearn/tests/test_common.py
sklearn/tests/test_cross_validation.py
sklearn/tests/test_random_projection.py
|
IMO the most painful place to do it is in estimator_checks, because estimator_checks can be used outside of a pytest run, e.g. by downstream packages (look at imblearn for an example) or just by calling check_estimator directly. One promising approach I can remember was #7319 (comment), although I have not looked at it in much detail.
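For context, downstream usage looks roughly like this (a sketch; assuming the API of that era, where check_estimator accepted an estimator class):

```py
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.utils.estimator_checks import check_estimator

class MyClassifier(BaseEstimator, ClassifierMixin):
    def fit(self, X, y):
        ...  # schematic; a real estimator would fit and return self
        return self

    def predict(self, X):
        ...  # schematic

# runs the whole battery of common checks with no pytest involved;
# raises on the first failing check
check_estimator(MyClassifier)
```

Anything that ties the common checks to pytest collection would break this entry point.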
|
I don't think we should do the yield changes and the assert_* changes at the same time. The advantages of changing assert_equal are:
* it sets a good example for new contributions, saving us review time in asking them to change;
* it means that pytest controls the display of failure messages (e.g. handling of long strings in unittest's assert_equal requires modifying the class's maxDiff, while pytest uses its command-line flags etc.).
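A small illustration of the failure-display point (a hypothetical test):

```py
def test_long_string_diff():
    a = "spam " * 200
    b = "spam " * 199 + "eggs "
    # unittest's assertEqual truncates a diff this long unless
    # TestCase.maxDiff is raised; pytest's assertion rewriting reports it
    # directly, with the level of detail controlled by the -v / -vv flags
    assert a == b
```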
|
I'm +1 for first removing |
Just to be explicit, the downside with replacing yield by plain function calls is that the first failing check stops the rest and you lose the per-check test structure. Having said that, outside of estimator_checks/test_common it is probably fine to do the simplest thing, i.e. execute the function instead of yielding it. |
Yes, I think we lose a lot in the informativeness of the test structure in some cases by getting rid of yield in that way. And the tests are then no longer a good example of how to write tests in pytest. I think we need to look at examples to see to what extent parametrize is worth it.
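For comparison, the parametrize idiom under discussion (a generic sketch):

```py
import pytest

@pytest.mark.parametrize("x", [1, 2, 3])
def test_positive(x):
    # each parameter is reported as its own test, e.g. test_positive[2],
    # so one failure doesn't mask the remaining cases
    assert x > 0
```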
|
The verbosity of the current approach with yield is one thing, but it also makes adding new common tests harder than it should be. For instance, in the last sprint (cc @glemaitre), #11000 should have been straightforward (at least on the purely coding side, ignoring 64/32 bit complications for now): add a common test that checks 32/64 bit preservation, then make it pass without breaking the existing ones. In reality, it was somewhat painful.
There are a few other issues where this could be relevant, I think.
Current situation (simplified)
sklearn/tests/test_common.py:

```py
def test_non_meta_estimators():
    for name, Estimator in all_estimators():
        estimator = Estimator()
        for check in _yield_all_checks(name, estimator):
            yield check, name, estimator
```

sklearn/utils/estimator_checks.py:

```py
def _yield_non_meta_checks(name, estimator):
    yield check_a
    if name in SOME_LIST:
        yield check_b
    # etc.

def check_a(name, estimator):
    # check something
    ...

def check_b(name, estimator):
    # check something else that only applies
    # to a few estimators
    ...

def check_estimator(Estimator):
    # similar code to test_non_meta_estimators,
    # but running check(name, estimator) instead of
    # yielding it
    ...
```

Desired behavior
Putting aside the fact that yield will be deprecated by pytest, the above allows the checks to run both under pytest and, via check_estimator, outside of any pytest run;
what it doesn't allow is to select e.g. an individual check for an individual estimator.
Possible approach
The most straightforward thing, I guess, would be to parametrize each check:

```py
import pytest

def selector_check_a(name):
    return True

def selector_check_b(name):
    return name in SOME_LIST

@pytest.mark.parametrize(
    'name, Estimator',
    [(name, Estimator) for name, Estimator in all_estimators()
     if selector_check_a(name)])
def test_non_meta_estimator_check_a(name, Estimator):
    check_a(name, Estimator)

# same for check_b

def check_estimators(Estimator):
    name = Estimator.__name__
    if selector_check_a(name):
        check_a(name, Estimator)
    ...
    # essentially the same thing as in _yield_non_meta_checks but with
    # factorized selectors
```

This is very rough prototyping; it would be good to associate a check and its selector in the same object somehow, and improve other things, but in any case, I think, this should get us the desired behavior from the current setup.
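A minimal sketch of what associating a check and its selector "in the same object" might look like (purely hypothetical, reusing the check_a/check_b/SOME_LIST names from the prototype above):

```py
from collections import namedtuple

# pair each check with the predicate deciding which estimators it applies to
EstimatorCheck = namedtuple("EstimatorCheck", ["check", "applies_to"])

ALL_CHECKS = [
    EstimatorCheck(check_a, lambda name: True),
    EstimatorCheck(check_b, lambda name: name in SOME_LIST),
]

def checks_for(name):
    # a single source of truth usable by both the pytest parametrization
    # and check_estimator
    return [c.check for c in ALL_CHECKS if c.applies_to(name)]
```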
I have not followed in detail the work on estimator tags and its implications here. What do you think? |
My message above fails to account for several constraints, but an improved approach that should work can be found in #11063 |
Closed #10728 via #11074.
|
Great work!
|
As stated in #11063 (comment):
yield tests are still used, so this issue is not really complete? I'm asking from the point of view of a downstream project: I'm testing my classifier with check_estimator. |
This will be included in the not yet released scikit-learn 0.20+; unless you are calling private functions, this should not affect you. For downstream projects: the issue was about having patterns of the form

```py
def test_something():
    for check in checks:
        yield check
```

in scikit-learn tests. These have now been replaced with pytest parametric tests, but it should have no impact on downstream projects. |
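For reference, the replacement pattern is roughly of this shape (a schematic sketch, not the exact scikit-learn code):

```py
import pytest

def check_a():
    assert True  # stand-in for a real check

checks = [check_a]

@pytest.mark.parametrize("check", checks)
def test_something(check):
    # each check is collected as an individual pytest test item
    check()
```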
Yes it works, but if a single test from it fails, the remaining checks are not run. So actually yes, I mangled this issue with my own (stemming from using that private function), sorry for that. I should open another issue about it. |
It's a valid concern; if you could open a separate issue about it, that would be great! Thank you! |