2 test failures on Debian stable (stretch) amd64 #12548

Closed

yarikoptic opened this issue Nov 8, 2018 · 20 comments

@yarikoptic (Member) commented Nov 8, 2018

Sorry, no show_versions output yet (still to be added to debian/rules; numpy 1.12.1). I just wanted to seek ideas/clues before digging deeper or skipping those tests - everything seems to be OK on more recent systems:

1

____________________________ test_lars_cv_max_iter _____________________________

    @pytest.mark.filterwarnings('ignore::FutureWarning')
    def test_lars_cv_max_iter():
        with warnings.catch_warnings(record=True) as w:
            X = diabetes.data
            y = diabetes.target
            rng = np.random.RandomState(42)
            x = rng.randn(len(y))
            X = np.c_[X, x, x]  # add correlated features
            lars_cv = linear_model.LassoLarsCV(max_iter=5)
            lars_cv.fit(X, y)
>       assert_true(len(w) == 0)

Re assert_true, I submitted #12547 - but does anything here sound familiar?

2

__________________________ test_docstring_parameters ___________________________

    @pytest.mark.filterwarnings('ignore::DeprecationWarning')
    @pytest.mark.skipif(IS_PYPY, reason='test segfaults on PyPy') 
    def test_docstring_parameters(): 
        # Test module docstring formatting
...
        msg = '\n' + '\n'.join(sorted(list(set(incorrect))))
        if len(incorrect) > 0:
>           raise AssertionError("Docstring Error: " + msg)
E           AssertionError: Docstring Error:
E           sklearn.utils.fixes.divide arg mismatch: ['dtype', 'out', 'x1', 'x2']

sklearn/tests/test_docstring_parameters.py:125: AssertionError

so this one is particularly mysterious ;)

show_versions()
ATLAS version 3.10.3 built by buildd on Tue Jan 17 22:59:14 UTC 2017:
   UNAME    : Linux x86-ubc-01 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1 (2016-12-30) x86_64 GNU/Linux
   INSTFLG  : -1 0 -a 1 -l 1
   ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_x86SSE2 -DATL_CPUMHZ=2297 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664
   F2CDEFS  : -DAdd_ -DF77_INTEGER=int -DStringSunStyle
   CACHEEDGE: 1048576
   F77      : /usr/bin/x86_64-linux-gnu-gfortran, version GNU Fortran (Debian 6.3.0-2) 6.3.0 20161229
   F77FLAGS : -fomit-frame-pointer -mfpmath=sse -O2 -msse2 -fPIC -m64
   SMC      : /usr/bin/gcc-6, version gcc-6 (Debian 6.3.0-2) 6.3.0 20161229
   SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -O2 -msse2 -fPIC -m64
   SKC      : /usr/bin/gcc-6, version gcc-6 (Debian 6.3.0-2) 6.3.0 20161229
   SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -O2 -msse2 -fPIC -m64

System
------
    python: 2.7.13 (default, Nov 24 2017, 17:33:09)  [GCC 6.3.0 20170516]
   machine: Linux-4.16.0-0.bpo.2-amd64-x86_64-with-debian-9.5
executable: /usr/bin/python2.7

BLAS
----
    macros: HAVE_CBLAS=None, ATLAS_INFO="\"3.10.3\""
cblas_libs: f77blas, cblas, atlas, f77blas, cblas
  lib_dirs: /usr/lib/atlas-base

Python deps
-----------
    Cython: None
     scipy: 0.18.1
setuptools: 33.1.1
       pip: None
     numpy: 1.12.1
    pandas: 0.22.0
   sklearn: 0.20.0
@amueller (Member) commented Nov 8, 2018

1 could be any sort of warning and/or the warning registry doing something weird.

2 is saying we're not documenting utils.fixes.divide correctly. I didn't think we'd be running these tests on utils.
Also, this shouldn't fire with numpy 1.12.1 but I'm not entirely sure how the docstrings are extracted.
This can definitely be safely ignored either way.

@amueller (Member) commented Nov 8, 2018

We are trying to test the docstrings in utils.fixes, but I suggest we stop doing that: these tests are by definition version-dependent, so running them consistently is painful. The upstream fix for 2 will therefore be to add either this function or the whole module to _DOCSTRING_IGNORES.
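For illustration, that change would look roughly like the sketch below in sklearn/tests/test_docstring_parameters.py (a sketch only: the existing entries in _DOCSTRING_IGNORES, and whether a whole-module entry is matched, may differ):

    # Sketch: skip docstring checks for the version-dependent backports in
    # sklearn.utils.fixes by listing them in the test's ignore list.
    _DOCSTRING_IGNORES = [
        # ... existing entries ...
        'sklearn.utils.fixes',  # hypothetical new entry covering the whole module
    ]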
Maybe @NicolasHug remembers something about this?

@yarikoptic (Member Author)

FWIW, I added the show_versions output (collapsed) to the initial message.

@NicolasHug (Member)

> Maybe @NicolasHug remembers something about this?

I'm not sure what you're referring to? I never really played with doctest much ^^

@jnothman (Member) commented Nov 10, 2018 via email

@amueller (Member)

@NicolasHug I thought maybe you came across something like that while working on #11421.

@amueller (Member)

@yarikoptic What about all the other failures in https://buildd.debian.org/status/package.php?p=scikit-learn ?

amueller added this to the 0.20.1 milestone Nov 12, 2018
jnothman added a commit to jnothman/scikit-learn that referenced this issue Nov 13, 2018
Partial fix for scikit-learn#12548. If code coverage does not drop, then this indicates the code is working.
@jnothman (Member)

The second issue should now be fixed. Regarding the first, looking at the context in which that code was introduced (#9004), it is probably trying to avoid a divide-by-zero error. So rather than catching warnings and asserting that zero were raised, we can just use with np.errstate(divide='raise', invalid='raise') instead...?
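A minimal sketch of that rewrite, reusing the setup from the traceback above (just the idea, not the committed fix):

    import numpy as np
    from sklearn import datasets, linear_model

    def test_lars_cv_max_iter():
        # Let numpy raise FloatingPointError on divide-by-zero or invalid
        # operations instead of asserting that no warnings were recorded.
        diabetes = datasets.load_diabetes()
        X, y = diabetes.data, diabetes.target
        rng = np.random.RandomState(42)
        x = rng.randn(len(y))
        X = np.c_[X, x, x]  # add correlated features, as in the original test
        lars_cv = linear_model.LassoLarsCV(max_iter=5)
        with np.errstate(divide='raise', invalid='raise'):
            lars_cv.fit(X, y)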

@jnothman (Member)

no, maybe I'm confused and it should be checking for no ConvergenceWarning?
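If that is the intent, a narrower check could look like this sketch (hypothetical, same setup as above; only a ConvergenceWarning fails the assertion):

    import warnings

    import numpy as np
    from sklearn import datasets, linear_model
    from sklearn.exceptions import ConvergenceWarning

    diabetes = datasets.load_diabetes()
    X, y = diabetes.data, diabetes.target
    rng = np.random.RandomState(42)
    x = rng.randn(len(y))
    X = np.c_[X, x, x]  # add correlated features, as in the original test

    # Record everything, but only fail on ConvergenceWarning; unrelated
    # platform- or version-specific warnings are ignored.
    with warnings.catch_warnings(record=True) as records:
        warnings.simplefilter("always")
        linear_model.LassoLarsCV(max_iter=5).fit(X, y)
    assert not any(issubclass(r.category, ConvergenceWarning) for r in records)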

@yarikoptic (Member Author) commented Nov 14, 2018

> @yarikoptic What about all the other failures in https://buildd.debian.org/status/package.php?p=scikit-learn ?

"one step at a time" ;) and it is disappointing that i386 fails there too -- it built fine for me locally so I thought it was ok. That ImportError: No module named _check_build keeps haunting us around the corners.
s390 one (I am surprised that only one) is interesting:

_______________________ test_estimate_bandwidth_1sample ________________________

    def test_estimate_bandwidth_1sample():
        # Test estimate_bandwidth when n_samples=1 and quantile<1, so that
        # n_neighbors is set to 1.
        bandwidth = estimate_bandwidth(X, n_samples=1, quantile=0.3)
>       assert_equal(bandwidth, 0.)

sklearn/cluster/tests/test_mean_shift.py:41: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/lib/python3.7/unittest/case.py:839: in assertEqual
    assertion_func(first, second, msg=msg)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <sklearn.utils._unittest_backport.TestCase testMethod=__init__>
first = 2.384185791015625e-07, second = 0.0
msg = '2.384185791015625e-07 != 0.0'

    def _baseAssertEqual(self, first, second, msg=None):
        """The default assertEqual implementation, not type specific."""
        if not first == second:
            standardMsg = '%s != %s' % _common_shorten_repr(first, second)
            msg = self._formatMessage(msg, standardMsg)
>           raise self.failureException(msg)
E           AssertionError: 2.384185791015625e-07 != 0.0

/usr/lib/python3.7/unittest/case.py:832: AssertionError
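For what it's worth, the assertion above is an exact floating-point equality check. A tolerance-based variant along these lines (illustrative only, not the fix tracked for that failure) would absorb this kind of platform-dependent rounding:

    import numpy as np
    from numpy.testing import assert_allclose
    from sklearn.cluster import estimate_bandwidth

    # On s390x the bandwidth comes out as ~2.4e-07 rather than exactly 0.0,
    # so compare against 0.0 with an absolute tolerance instead of ==.
    X = np.random.RandomState(0).randn(20, 2)  # placeholder data; the real test uses its own X
    bandwidth = estimate_bandwidth(X, n_samples=1, quantile=0.3)
    assert_allclose(bandwidth, 0.0, atol=1e-5)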

@cdluminate

The s390x one is a duplicate of the ppc64el failure reported in #10561.

@ogrisel (Member) commented Nov 15, 2018

@jnothman I opened #12597 to get a better idea.

@amueller (Member)

@yarikoptic that failure will be fixed in #12574.
Can you point to an import failure that's not spurious? It looks to me like they were actually failures inside Python 3.7, and I assumed these platforms didn't support Python 3.7.

@amueller (Member)

Oh and "one at a time" might not be best if we're trying to push a bugfix release this week.

@amueller (Member)

Oh, the i386 error is weird, in particular since the tests are passing...

@amueller (Member)

I'm tempted to remove the 0.20.1 tags from all the Debian issues. It seems they are hard to reproduce, and I'm not sure we want to wait for them. I'm totally fine doing a 0.20.2 with just these issues if we figure them out.

@jnothman (Member) commented Nov 20, 2018 via email

@amueller (Member)

retagging

jnothman modified the milestones: 0.21.3, 0.23 Oct 31, 2019
@cmarmo (Contributor) commented Mar 29, 2020

@jnothman, @amueller, I'm wondering if this issue could be closed: the initial problem reported by @yarikoptic was observed with Python 2.7, and 0.23 is no longer compatible with Python 2.
It seems to me that the only Debian issue reported for Python 3 (and already tagged 0.23) is #13052. Am I wrong? Thanks!

@jnothman (Member)

I'm happy to close this and open more focused issues when necessary. Thanks @cmarmo!
