Build failures for master/0.20 #11878

Closed
amueller opened this issue Aug 21, 2018 · 20 comments
@amueller
Member

This is to track current build failures in case someone wants to help ;)

https://travis-ci.org/MacPython/scikit-learn-wheels

conda-forge/scikit-learn-feedstock#70

Failures here:
https://travis-ci.org/MacPython/scikit-learn-wheels/jobs/404543529
https://travis-ci.org/MacPython/scikit-learn-wheels/jobs/404543530

Related to OneClassSVM and OPTICS.
We should configure pytest to provide nicer tracebacks; this is terrible :-/

@amueller
Member Author

test_classification_report_dictionary_output fails on Python 2.7

@amueller
Member Author

amueller commented Aug 21, 2018

Failure on plat=i686:


    def check_svm_model_equal(dense_svm, sparse_svm, X_train, y_train, X_test):
        dense_svm.fit(X_train.toarray(), y_train)
        if sparse.isspmatrix(X_test):
            X_test_dense = X_test.toarray()
        else:
            X_test_dense = X_test
        sparse_svm.fit(X_train, y_train)
        assert_true(sparse.issparse(sparse_svm.support_vectors_))
        assert_true(sparse.issparse(sparse_svm.dual_coef_))
        assert_array_almost_equal(dense_svm.support_vectors_,
                                  sparse_svm.support_vectors_.toarray())
        assert_array_almost_equal(dense_svm.dual_coef_,
>                                 sparse_svm.dual_coef_.toarray())
E       AssertionError: 
E       Arrays are not almost equal to 6 decimals
E       
E       (mismatch 83.3333333333%)
E        x: array([[ 0.993464,  0.258659,  0.264268,  0.212662,  0.270947,  1.      ]])
E        y: array([[ 0.993326,  0.263301,  0.259931,  0.216799,  0.266643,  1.      ]])

Should we change the tolerance? The mismatch seems too large for that to be the right fix.
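
To make the tolerance question concrete, here is a minimal sketch showing how the `decimal` argument of `numpy.testing.assert_array_almost_equal` changes the outcome for the two arrays in the traceback above (the arrays are copied from the failure output; the specific tolerance values are illustrative, not a proposed fix):

```python
import numpy as np
from numpy.testing import assert_array_almost_equal

# The mismatching dense vs. sparse dual_coef_ values from the i686 failure.
x = np.array([[0.993464, 0.258659, 0.264268, 0.212662, 0.270947, 1.0]])
y = np.array([[0.993326, 0.263301, 0.259931, 0.216799, 0.266643, 1.0]])

# The default decimal=6 fails: most entries differ in the third decimal.
try:
    assert_array_almost_equal(x, y)  # decimal=6
    failed = False
except AssertionError:
    failed = True
print(failed)  # -> True

# A much looser tolerance passes: the check is abs(x - y) < 1.5 * 10**-decimal.
assert_array_almost_equal(x, y, decimal=2)
```

Passing only at `decimal=2` illustrates why bumping the tolerance here feels wrong: the disagreement is in the third decimal place, not in the last few bits.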

@amueller
Member Author

amueller commented Aug 21, 2018

>       assert_array_almost_equal(clust.reachability_, np.array(v))

also fails on i686. Though bumping the tolerance from 1e-6 to 1e-5 or 1e-4 would fix that, I think?

@amueller
Member Author

Is OPTICS supposed to have NaNs in the test?
https://travis-ci.org/MacPython/scikit-learn-wheels/jobs/404543533#L1352

@amueller
Member Author

The classification report test fails because recall differs in the 16th decimal place.

@amueller
Member Author

@NicolasHug you can help here if you like ;)

@amueller
Member Author

We probably need to port AssertDeepAlmostEqual (https://github.com/larsbutler/oq-engine/blob/master/tests/utils/helpers.py#L214) for the classification report test. Or we need to honor the decimal argument even for the returned dictionary (right now it's ignored in that case).
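
A rough sketch of what such a helper could look like (this is not the oq-engine implementation, just an illustrative recursive comparison for the nested dict returned by `classification_report(..., output_dict=True)`; the function name and tolerance rule are assumptions):

```python
import math

def assert_deep_almost_equal(expected, actual, decimal=6):
    """Recursively compare nested dicts/lists/tuples, allowing floats to
    differ by up to 1.5 * 10**-decimal (mirroring the threshold used by
    numpy.testing.assert_array_almost_equal)."""
    if isinstance(expected, dict):
        assert set(expected) == set(actual)
        for key in expected:
            assert_deep_almost_equal(expected[key], actual[key], decimal)
    elif isinstance(expected, (list, tuple)):
        assert len(expected) == len(actual)
        for e, a in zip(expected, actual):
            assert_deep_almost_equal(e, a, decimal)
    elif isinstance(expected, float):
        assert math.isclose(expected, actual, abs_tol=1.5 * 10 ** -decimal)
    else:
        assert expected == actual

# A recall value that differs only in the 16th decimal place passes:
assert_deep_almost_equal({"recall": 0.3333333333333333},
                         {"recall": 0.33333333333333337})
```

This would let the test compare the whole report dict at once instead of asserting exact equality on every float.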

@amueller
Member Author

The failures in test_sparse_oneclasssvm and check_svm_model_equal are probably triggered by the gamma='scale' change. We could use gamma='auto' in these tests, increase the tolerance (though the mismatch is big), skip them on 32 bits - or try to figure out why the libsvm results mismatch on 32 bits lol.
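
The gamma='auto' option would look roughly like this (a minimal sketch of the dense-vs-sparse comparison with gamma pinned explicitly so the test no longer depends on the new gamma='scale' default; the data and tolerance here are illustrative, not the actual scikit-learn test):

```python
import numpy as np
from scipy import sparse
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X = rng.rand(20, 5)
X_sp = sparse.csr_matrix(X)

# Pin gamma so dense and sparse fits use the same kernel width
# regardless of what the default resolves to.
dense = OneClassSVM(gamma='auto').fit(X)
sp = OneClassSVM(gamma='auto').fit(X_sp)

np.testing.assert_array_almost_equal(dense.dual_coef_,
                                     sp.dual_coef_.toarray(),
                                     decimal=4)
```

This sidesteps the question of why gamma='scale' diverges on 32-bit, so it's a workaround rather than a root-cause fix.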

@amueller
Member Author

Going the skip route now; with #11880 we'll be able to see more clearly what's happening, I hope.

@amueller
Member Author

Can someone with Optics knowledge comment on the failure there? https://travis-ci.org/MacPython/scikit-learn-wheels/jobs/404543529#L1208

@jnothman
Member

I hope I have a moment to look at various things today.
@espg and @adrinjalali might be able to comment on optics

@amueller
Member Author

amueller commented Aug 22, 2018

Possibly changing the tolerance is enough as a hotfix for OPTICS; it's a very minor mismatch.
With that and #11879 we're pretty much good to go. I have issues on AppVeyor that I don't understand, though.

@amueller
Member Author

amueller commented Aug 22, 2018

Appveyor fails because I'm not running the tests in the right environment / right folder or something:
https://ci.appveyor.com/project/sklearn-wheels/scikit-learn-wheels

ping @ogrisel @rth, I'm not sure what the issue is :-/

That's the last thing really holding back the RC, I think.

@espg
Contributor

espg commented Aug 22, 2018

It's hard to tell from the output of https://travis-ci.org/MacPython/scikit-learn-wheels/jobs/404543529#L1208; the truncated print shows exact matches for the first and last entries. If it's just a slight mismatch due to rounding differences on 32-bit, bumping the decimal place should be enough to fix it. If there's a difference in ordering, it may still fail even with the decimal increment. I mentioned a possible cause for the latter case in #11857, but I don't have access to a 32-bit instance to check.

@amueller
Member Author

@espg everybody has access to a virtual machine ;)

@rth
Member

rth commented Aug 22, 2018

> but I don't have access to 32-bit instance to check.

> everybody has access to a virtual machine ;)

Alternatively, see docker setup in MacPython/scikit-learn-wheels#7 (comment)

@rth
Member

rth commented Aug 29, 2018

All scikit-learn-wheels failures should have been fixed (or skipped). @jnothman, is there anything remaining before creating the 0.20.X branch and incrementing versions there?

http://scikit-learn.org/stable/developers/maintainer.html#making-a-release

@jnothman
Member

jnothman commented Aug 29, 2018 via email

@rth
Member

rth commented Aug 29, 2018

> Are you intending to do the branching?

If you can, please do :)

> And if my scikit-learn-wheels PR (MacPython/scikit-learn-wheels#7) turns green

Yes, but wouldn't it be better to update the version in the branch and the submodule in the wheels PR to point to 0.20.X first?

@lesteve
Member

lesteve commented Sep 1, 2018

I am assuming that it can be closed since everything seems to be green now. Please reopen with a summary of the remaining problems if I missed something.


5 participants