
FIX: Clean up in the new quiverkey test; make new figs in scale tests #7726


Merged: 1 commit merged into matplotlib:master from isolate-test-cases on Jan 3, 2017

Conversation

@jkseppan (Member) commented Jan 2, 2017

Merging the new quiverkey test seemed to cause nondeterministic
failures of test_log_scales and test_logit_scales. I couldn't reproduce
them locally, but here's a guess: the new test creates a figure and doesn't
close it, and the scale tests just call plt.subplot, which would usually
create a new figure; but if they get executed right after the new test,
they just add a subplot to the existing figure.

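To make the guessed mechanism concrete, here is a minimal sketch (illustrative only; the function names and bodies are not the actual tests) of how pyplot's global figure registry can let one test leak into the next, and what "make new figs in scale tests" amounts to:

```python
import matplotlib.pyplot as plt


def test_quiverkey_like():
    # The new test creates a figure but never closes it, so pyplot keeps
    # it registered as the current figure after the test finishes.
    fig, ax = plt.subplots()
    # ... draw a quiver and quiverkey, run assertions ...
    # (missing: plt.close(fig) or the @cleanup decorator)


def test_log_scales_like():
    # plt.subplot only creates a new figure when none is open; if the
    # leaked figure from the previous test is still around, this axes is
    # added to it and the image comparison sees stale contents.
    ax = plt.subplot(1, 1, 1)
    ax.set_yscale('log')


def test_log_scales_fixed():
    # The scale-test half of the fix: start from an explicit new figure.
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.set_yscale('log')
```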
@dstansby (Member) commented Jan 2, 2017

Whoops, sorry for missing @cleanup from my test...
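For reference, a minimal sketch of what the missing @cleanup decorator provides, assuming the decorator from matplotlib.testing.decorators that the test suite used at the time; the test name and body below are hypothetical:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.testing.decorators import cleanup


@cleanup
def test_quiverkey_example():
    # Hypothetical test body for illustration only.
    fig, ax = plt.subplots()
    X, Y = np.meshgrid(np.arange(5), np.arange(5))
    q = ax.quiver(X, Y, np.ones_like(X), np.ones_like(Y))
    ax.quiverkey(q, 0.5, 0.95, 2, 'a key')
    # No explicit plt.close() is needed here: @cleanup closes all open
    # figures once the test returns, so no Figure leaks into later tests.
```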

@efiring (Member) commented Jan 2, 2017

Definitely good changes. Any idea why we are still getting the svg image comparison failure in the py.test case? I've been seeing that, or a similar one with svg, quite frequently.

@jkseppan (Member, Author) commented Jan 2, 2017

No idea about that. Is it possible to see the output of the failing test case?

@efiring (Member) commented Jan 2, 2017

I don't think so. Everything is done in a fresh virtualenv. I assume it is discarded as soon as the run is finished, and only the log from the run is provided. @tacaswell, is this correct?

@tacaswell added this to the 2.1 (next point release) milestone on Jan 2, 2017
@tacaswell (Member) commented
Failures that happen on 'safe' branches (that is, not PRs) are uploaded to AWS (but I do not recall which bucket @mdboom set up).

That svg failure has been around for a while. There was another set of svg/pdf failures due to race conditions, but @QuLogic fixed those.

@jkseppan (Member, Author) commented Jan 3, 2017

It seems that the test_scale.py failures happen consistently on Travis, even though I couldn't reproduce them locally. The svg failure seems unrelated to me (at least it's not the same mechanism where a Figure instance is left around from another test).

I restarted the Travis build to see if the svg failure is just random. Since this change appears to help with the test_scale failures, I think it would be a good idea to merge it soon.

@codecov-io commented Jan 3, 2017

Current coverage is 62.12% (diff: 100%)

Merging #7726 into master will increase coverage by 0.01%

@@             master      #7726   diff @@
==========================================
  Files           174        174          
  Lines         56024      56036    +12   
  Methods           0          0          
  Messages          0          0          
  Branches          0          0          
==========================================
+ Hits          34799      34813    +14   
+ Misses        21225      21223     -2   
  Partials          0          0          

Powered by Codecov. Last update c511628...98e6c95

@tacaswell merged commit 20315e1 into matplotlib:master on Jan 3, 2017
@jkseppan deleted the isolate-test-cases branch on January 4, 2017 13:08