Switch testing to pytest completely #7974

Merged: 14 commits merged into matplotlib:master on Feb 3, 2017

Conversation

@QuLogic (Member) commented Jan 28, 2017

I'm pre-opening this PR to verify that tests continue to build correctly. However, it still depends on #7935 and #7973.

This PR stops installing nose, and uses pytest everywhere. Note that I did not yet delete any redundant builds. I want to make sure everything works with the different methods first and will then see about pruning stuff.

@QuLogic added this to the 2.1 (next point release) milestone on Jan 28, 2017
@tacaswell (Member)

attn @story645 I think I recognize this issue...

_________________________ TestUnitData.test_update_map _________________________
[gw0] linux2 -- Python 2.7.9 /home/travis/build/matplotlib/matplotlib/venv/bin/python
self = <matplotlib.tests.test_category.TestUnitData object at 0x7fffdfdb6910>
    def test_update_map(self):
        data = ['a', 'd']
        oseq = ['a', 'd']
        olocs = [0, 1]
    
        data_update = ['b', 'd', 'e', np.inf]
        useq = ['a', 'd', 'b', 'e', 'inf']
        ulocs = [0, 1, 2, 3, -2]
    
        unitdata = cat.UnitData(data)
        assert unitdata.seq == oseq
        assert unitdata.locs == olocs
    
        unitdata.update(data_update)
>       assert unitdata.seq == useq
E       assert [u'a', u'd', u'b', u'e', u'i'] == ['a', 'd', 'b', 'e', 'inf']
E         At index 4 diff: u'i' != u'inf'
E         Use -v to get the full diff
lib/matplotlib/tests/test_category.py:44: AssertionError

@QuLogic Might want to turn off the fast-fail while we debug this?

@story645 (Member)

What version of NumPy is that test using? That looks like the fun issue where NumPy before about 1.8 truncates inf and nan to the length of the longest string in the array, which in this case is 1.
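
As a side illustration of the truncation story645 describes, here is a minimal sketch outside matplotlib (not the actual cat.UnitData code path; exact dtypes depend on the NumPy version):

import numpy as np

# The 'inf' truncation only happens on old NumPy (roughly < 1.8); newer
# versions pick a string dtype wide enough to hold 'inf'.
data = ['a', 'd', 'b', 'e', np.inf]
arr = np.asarray(data, dtype=str)

print(arr.dtype)   # old NumPy: itemsize taken from the 1-character strings
print(list(arr))   # old NumPy: ['a', 'd', 'b', 'e', 'i'] -- 'inf' truncated to 'i'
                   # new NumPy: ['a', 'd', 'b', 'e', 'inf']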

@QuLogic (Member, Author) commented Jan 29, 2017

It looks like NumPy 1.7.1.

Though I have a bigger question: why are so few tests being run here compared to locally?

@NelleV (Member) commented Jan 29, 2017

Side note: that PR is going to conflict with #7970

@@ -4,10 +4,9 @@
#
Member

Can't we just remove this script?

Member Author

Not against it, but also not going to make that decision unilaterally.

Member

I strongly prefer this script stay and remain functional.

@@ -240,7 +237,7 @@ def _sub_labels(axis, subs=()):
assert label_test == label_expected


@cleanup(style='default')
@pytest.mark.style('default')
Member

Why is the style classic being replaced by default?

Member Author

Both lines say default here.

Member

Indeed, they do… I thought I had seen some classic being converted to default, but I guess we'll see once the tests run on Travis anyway.

Member Author

There are a few that do; see #7973 for an explanation.
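
For readers unfamiliar with the marker in the diff above: a mark such as @pytest.mark.style('default') is only metadata until a fixture reads it. A generic sketch of how such a mark can be consumed (an illustration only, not matplotlib's actual conftest):

import matplotlib.style
import pytest

# Autouse fixture: look for a 'style' mark on the test and run the test
# inside that style context, falling back to a project-wide default.
@pytest.fixture(autouse=True)
def _apply_style(request):
    marker = request.node.get_closest_marker('style')
    style = marker.args[0] if marker is not None else 'classic'
    with matplotlib.style.context(style):
        yield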

.travis.yml Outdated
@@ -155,20 +151,20 @@ script:
rm -rf ~/.cache/matplotlib
fi
export MPL_REPO_DIR=$PWD # needed for pep8-conformance test of the examples
# Workaround for pytest-xdist flaky colletion order
Member

nitpick: There is a typo here: colletion -> collection

@NelleV (Member) commented Jan 29, 2017

@QuLogic We are setting the option to stop testing after the first failure in order to speed up the tests.
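
For context, this is pytest's fail-fast behaviour; a tiny sketch of invoking it from Python (the flag names are standard pytest, the test path is only an example):

import pytest

# Stop the whole run at the first failure, equivalent to `pytest -x` or
# `pytest --maxfail=1` on the command line; drop the flag while debugging
# so that every failure is reported, as suggested above.
pytest.main(["-x", "lib/matplotlib/tests"])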

@QuLogic changed the title from "Switch testing to pytest completely" to "[WIP] Switch testing to pytest completely" on Jan 29, 2017
@QuLogic (Member, Author) commented Jan 29, 2017

I've marked this [WIP] so that people focus on #7973 before this one; plus I have to figure out AppVeyor.

@QuLogic (Member, Author) commented Feb 1, 2017

I believe this should be working now, once CI catches up. It currently includes #8001 and #8002 as well as several bug fixes that were missed due to the CI not running everything:

  • Change stacklevel of warning in Figure.gca because it's triggered in two tests and on Python 2.7 it doesn't get triggered properly in the second test.
  • Skip broken categories tests temporarily (see poor categorical support w/ numpy<1.8 #7988)
  • Fix broken != comparison with SubplotSpec
  • Normalize paths when looking for fonts because Windows requires it.
  • Remove one last xfail that I missed in earlier PRs.

@tacaswell closed this on Feb 1, 2017
@tacaswell reopened this on Feb 1, 2017
@tacaswell (Member)

Power-cycled, as the code being run on Travis (judging from the errors) did not match the code in the PR. I saw something like this on one of my day-job projects yesterday too. I think the webhook is firing before the +refs/PRNUMBER/merge references are updated, or something...

tacaswell and others added 3 commits February 2, 2017 18:39
In Python 3, `__ne__` automatically delegates to `__eq__`; in Python 2 it
does not, and falls back to the default `__ne__` test.  The NumPy
version of `assert_equal` actually checks `if a != b: raise`, whereas the
version of `assert_equal` from `unittest` (which is where `nose` ultimately
pulls from) checks `if not a == b: raise`.
This is a problem on Windows where file paths are case-insensitive.
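
The `__eq__`/`__ne__` asymmetry described in the first commit message above can be sketched as follows (Spec is a stand-in class for illustration, not matplotlib's SubplotSpec):

class Spec(object):
    def __init__(self, num):
        self.num = num

    def __eq__(self, other):
        return self.num == other.num
    # No __ne__ defined: Python 3 derives it from __eq__, Python 2 does not.

a, b = Spec(1), Spec(1)
print(a == b)   # True on both Python 2 and 3
print(a != b)   # False on Python 3; True on Python 2, which falls back to the
                # default identity-based comparison for distinct objects.

# Hence numpy.testing.assert_equal (which does `if a != b: raise`) can fail on
# Python 2 in cases where a unittest-style `if not a == b: raise` passes.
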
@QuLogic changed the title from "[WIP] Switch testing to pytest completely" to "[MRG] Switch testing to pytest completely" on Feb 3, 2017
@QuLogic (Member, Author) commented Feb 3, 2017

We're good to go here now.

.travis.yml Outdated
@@ -225,7 +220,7 @@ after_success:
else
echo "Will only deploy docs build from matplotlib master branch"
fi
if [[ $NOSE_ARGS =~ "--with-coverage" || $USE_PYTEST == true ]]; then
if [[ $USE_PYTEST == true ]]; then
Member

We should do this on all runs now?

@QuLogic (Member, Author) commented Feb 3, 2017

I was leaving that to #8003, but I see now it doesn't change this condition. @dopplershift Should I turn on coverage for everything here or just leave it for you?

Contributor

Go ahead and turn it on--I'll rebase on it.

@NelleV (Member) left a comment

LGTM

@NelleV changed the title from "[MRG] Switch testing to pytest completely" to "[MRG+2] Switch testing to pytest completely" on Feb 3, 2017
@dstansby merged commit a517c60 into matplotlib:master on Feb 3, 2017
@dstansby changed the title from "[MRG+2] Switch testing to pytest completely" to "Switch testing to pytest completely" on Feb 3, 2017
@dstansby (Member) commented Feb 3, 2017

The change to pytest will need documenting, but I've opened up a new issue for that at #8015.

@tacaswell (Member)

1: ======= 6470 passed, 10 skipped, 6 xfailed, 8 xpassed in 644.13 seconds ======== 6502
2: ======= 6477 passed, 12 skipped, 2 xfailed, 3 xpassed in 609.06 seconds ======== 6497
3: ======== 6485 passed, 4 skipped, 2 xfailed, 3 xpassed in 611.43 seconds ======== 6497
4: docs
5: ======== 6485 passed, 4 skipped, 2 xfailed, 3 xpassed in 617.37 seconds ======== 6497
6: ======= 7259 passed, 12 skipped, 2 xfailed, 3 xpassed in 610.41 seconds ======== 7276
7: ====== 5256 passed, 1227 skipped, 10 xfailed, 1 xpassed in 573.52 seconds ====== 6494
8: ======= 6477 passed, 12 skipped, 2 xfailed, 3 xpassed in 579.11 seconds ======== 6494

There seem to be some slight discrepancies in the number of tests each run finds.

@dstansby (Member) commented Feb 3, 2017

Adding up those numbers I get 6494 for everything but (6) for which I get 7276.
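
A quick recomputation of those totals, using the passed/skipped/xfailed/xpassed counts quoted in the run summaries above (run 4 was the docs build):

# Counts copied from the run summaries above.
runs = {
    1: (6470, 10, 6, 8),
    2: (6477, 12, 2, 3),
    3: (6485, 4, 2, 3),
    5: (6485, 4, 2, 3),
    6: (7259, 12, 2, 3),
    7: (5256, 1227, 10, 1),
    8: (6477, 12, 2, 3),
}
for run, counts in sorted(runs.items()):
    print(run, sum(counts))
# Every run sums to 6494 except run 6, which sums to 7276.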

@dstansby (Member) commented Feb 3, 2017

And I'm guessing that the extra checks on (6) are PEP8 checks.

@tacaswell (Member)

Apparently I can not use a calculator 🐑

@dopplershift mentioned this pull request on Feb 3, 2017
@QuLogic deleted the pytest branch on February 4, 2017 at 04:00