
[MRG+1] make more explicit which checks are run #7317


Merged
merged 2 commits into from
Sep 5, 2016

Conversation

amueller
Member

@amueller amueller commented Aug 31, 2016

This makes it clearer which common tests are actually run.
With this PR, the output of the common tests is something like

sklearn.tests.test_common.check_estimator_sparse_data(SelectFpr) ... ok
sklearn.tests.test_common.check_estimators_pickle(SelectFpr) ... ok
sklearn.tests.test_common.check_transformer_data_not_an_array(SelectFpr) ... ok
sklearn.tests.test_common.check_transformer_general(SelectFpr) ... ok
sklearn.tests.test_common.check_transformers_unfitted(SelectFpr) ... ok
sklearn.tests.test_common.check_fit2d_predict1d(SelectFpr) ... ok
sklearn.tests.test_common.check_fit2d_1sample(SelectFpr) ... ok
sklearn.tests.test_common.check_fit2d_1feature(SelectFpr) ... ok
sklearn.tests.test_common.check_fit1d_1feature(SelectFpr) ... ok
sklearn.tests.test_common.check_fit1d_1sample(SelectFpr) ... ok
sklearn.tests.test_common.check_estimators_dtypes(SelectFwe) ... ok
sklearn.tests.test_common.check_fit_score_takes_y(SelectFwe) ... ok

instead of just the test name over and over (as it is in master):

sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok
sklearn.tests.test_common.test_non_meta_estimators('DBSCAN', <class 'sklearn.cluster.dbscan_.DBSCAN'>) ... ok

I did this so I can find where the deprecation warnings in #7255 come from, but I think it's generally helpful.
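For context, nose prints a `description` attribute on the test callable, if one is present, instead of the callable's default name. A minimal self-contained sketch of the idea (the check function here is a stand-in, not the PR's exact code):

```python
def check_estimators_pickle(name, estimator_cls=None):
    """Stand-in for one of scikit-learn's common estimator checks."""
    pass

def _set_test_name(function, name):
    # nose reads a `description` attribute on the test callable, if
    # present, and uses it as the test's name in verbose output
    function.description = "sklearn.tests.test_common.{0}({1})".format(
        function.__name__, name)

_set_test_name(check_estimators_pickle, "SelectFpr")
print(check_estimators_pickle.description)
# sklearn.tests.test_common.check_estimators_pickle(SelectFpr)
```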

@amueller amueller added this to the 0.18 milestone Aug 31, 2016
@@ -75,6 +75,11 @@
"GradientBoostingClassifier", "GradientBoostingRegressor"]


def _set_test_name(function, name):
function.description = "sklearn.tests.test_common.{0}({1})".format(function.__name__, name)
Member
PEP8!

@jnothman
Member

We have set flake8 onto you and your rampant violations of PEP8, @amueller!

@jnothman
Member

PS: yes, I like this.

@amueller
Member Author

amueller commented Sep 1, 2016

Ugh. So can we configure flake8 for a more reasonable line length? ;)
Also, I'm wondering if we should actually run flake8 on the whole codebase now ("once and for all") if we're gonna be stricter...

@jnothman
Member

jnothman commented Sep 1, 2016

If we merged/closed all our pull requests, then ran PEP8 on the whole codebase, at least we could be confident not to induce too many unnecessary rebases! I think organic improvement will do okay.

@ogrisel
Member

ogrisel commented Sep 1, 2016

I appreciate this PR for giving more informative test names in the nose reporting. But wouldn't it be even better to also keep the test_common function names in the test names? For instance:

sklearn.tests.test_common.test_non_meta_estimators.check_estimator_sparse_data(SelectFpr) ... ok
sklearn.tests.test_common.test_non_meta_estimators.check_estimators_pickle(SelectFpr) ... ok
...

Edit: Actually by reading the code it does not seem that easy to implement this suggestion. If there is no easy way to do it I am fine with keeping @amueller's current _set_test_name implementation.

@jnothman
Member

jnothman commented Sep 1, 2016

The estimator's name is there. The test name is not.

@jnothman
Member

jnothman commented Sep 1, 2016

Actually by reading the code it does not seem that easy to implement this suggestion.

traceback?

@jnothman
Member

jnothman commented Sep 2, 2016

This LGTM as a first step.

@jnothman jnothman changed the title make more explicit which checks are run [MRG+1] make more explicit which checks are run Sep 2, 2016
@ogrisel
Member

ogrisel commented Sep 5, 2016

Alright let's merge this.

@ogrisel ogrisel merged commit f916449 into scikit-learn:master Sep 5, 2016
@amueller
Member Author

amueller commented Sep 6, 2016

thanks for the reviews :)

@jnothman
Member

Somehow this isn't working right for me in CIs. See my list of failed runs at #7411 (where the same tests passed on my own machine). It seems to derive from setting description on the same object repeatedly. One solution is https://github.com/scikit-learn/scikit-learn/pull/7411/files#diff-a95fe0e40350c536a5e303e87ac979c4R78.
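One way to avoid mutating a single shared function object is to give each parametrization its own wrapper, for example via `functools.partial`. This is only a sketch of that idea (the `_named_check` name and the stand-in check are illustrative, not necessarily the actual fix in #7411):

```python
from functools import partial

def check_regressors_train(name, estimator_cls=None):
    """Stand-in for a shared common check."""
    pass

def _named_check(check, name):
    # Wrap the shared check in a fresh partial object so that every
    # yielded test carries its own description; setting `description`
    # directly on `check` would be overwritten on each iteration.
    wrapped = partial(check, name)
    wrapped.description = "sklearn.tests.test_common.{0}({1})".format(
        check.__name__, name)
    return wrapped

a = _named_check(check_regressors_train, "LinearSVR")
b = _named_check(check_regressors_train, "SelectFpr")
print(a.description)  # ...check_regressors_train(LinearSVR)
print(b.description)  # ...check_regressors_train(SelectFpr)
```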

@amueller
Member Author

It says FAIL: sklearn.tests.test_common.check_regressors_train(LinearSVR). What is the problem?

@amueller
Member Author

Ah, I see now. I was only concerned with the passing tests, but it seems it doesn't work for the failing tests. You are right.

@jnothman
Member

Ah, I see what you mean now: it's a problem with how nose evaluates the description for failures, is it?


@amueller
Member Author

I think so.

@amueller amueller deleted the common_test_names branch May 19, 2017 20:23