Coverage config #8003
Conversation
dopplershift commented Jan 31, 2017
- Cleaning up codecov config to remove commenting
- Remove coveralls
- Trying to set up codecov to report coverage in test code separately
- Also enabled coverage on AppVeyor
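For reference, a codecov.yml that turns off PR commenting looks roughly like this. This is a hedged sketch based on the codecov documentation, not the exact file in this PR:

```yaml
# Sketch only: not the exact config from this PR.
# Disable codecov's PR comments entirely.
comment: false

coverage:
  status:
    project:
      default:
        # Fail the status only if coverage drops by more than 1%.
        threshold: 1%
```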
If this works the first time through, I'll be amazed...
Not liking something about the configuration on Windows.
Adding tests to coverage fixes #6902.
Force-pushed from f52966c to cefda55
LGTM 👍
So if I understand this correctly: codecov should pass if test files are 100% covered. How is this true if we aren't technically fully running on Python 2.7 at the moment? Surely there's a skip for Py 2 in at least one test?
It's not working properly yet. There should be 3 separate statuses.
Not sure why it didn't work, since this was verbatim from my own project with just some paths changed...
The way this works is that codecov merges coverage runs...so if every line is run at least once, you'll get 100% coverage. We won't be able to see if we're not running things on py2. It looks like there is a feature called flags where we could mark each run as python 2 or 3. We could then segment that way, but then if we have version-specific test code, we would never get to 100%...
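The flags feature works roughly like this: each CI upload is tagged with a flag, and statuses can then be computed per flag. A hedged sketch, with flag and status names invented for illustration:

```yaml
# Sketch: per-Python-version statuses via codecov flags.
# Each upload would be tagged, e.g. `codecov -F python2`.
coverage:
  status:
    project:
      python2:
        flags:
          - python2
      python3:
        flags:
          - python3
```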
If it's anything like the PR to stop codecov from commenting, we might need to actually merge it to see any effect.
Oh wait, maybe it's because we've already got a ...
Yeah, the duplication is probably the problem. Let's see how this one goes.
Force-pushed from f0fa141 to 503adfa
Still didn't take. I've rebased on master (squashed some commits) and fixed a couple of minor validation problems with the yaml file I made. If this doesn't seem to take, my only thought is to try merging with master and hope for the best.
Bingo! And somehow we only have 97% coverage of test files... Should we commit (maybe with a different target) and try to fix it elsewhere?
👍
Can we push the test coverage limit down before merging?
We've got 3% of our tests not running on Travis? That's interesting to know.
It's mostly backends (e.g., Qt4/Qt5 can't be run in the same process or installed in the same conda environment), some pickle and animation stuff, and everything else seems small. But when looking at the file on codecov, it says 'File not found in report', so I can't see what the actual problematic lines are. @dopplershift any ideas why it's not coming up properly?
Force-pushed from 503adfa to 127301c
@NelleV Well, 2.2% of the lines of code in our tests are not being executed anyway. @QuLogic I think it's failing when it tries to compare against previous coverage on the matplotlib repo, which was excluding the tests. I've rebased to address the conflicts in ...
Force-pushed from 127301c to 819f78e
Rebased now that #7974 is merged. Hopefully everything will work as expected.
I think we're good now. I did notice at least one coverage change caused by the fact that we have a wait loop (for locks in ...
All AppVeyor builds show "No coverage report found".
This way we can get coverage for platform-dependent lines.
This sets it to report even if CI fails (just to see the numbers). Separate out coverage for lines in tests (which should be 100%, but is currently ~97.7%) versus those in the library itself.
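The split described in this commit message can be expressed as two named statuses with different paths and targets. A sketch only; the paths and status names here are assumptions, not necessarily what the PR's codecov.yml contains:

```yaml
# Sketch: separate statuses for library code vs. test code.
coverage:
  status:
    project:
      library:
        paths:
          - lib/matplotlib/
      tests:
        paths:
          - lib/matplotlib/tests/
        # Test files should themselves be fully executed.
        target: 100%
```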
Force-pushed from 819f78e to f78c205
Coverage was being submitted after we cleaned the repos for building packages, which nuked the coverage results. I've now added the upload to the testing phase. (I'm open to other suggestions for fixing this.) Maybe now everything will work.
OK, it looks to be properly running on AppVeyor now, though only for pytest=True. I thought we were entirely using pytest now? (Irrelevant for this PR either way.)
@@ -140,6 +140,8 @@ test_script:
   - if x%USE_PYTEST% == xno python tests.py %PYTEST_ARGS%
   # Generate a html for visual tests
   - python visual_tests.py
+  - if x%USE_PYTEST% == xyes pip install codecov
+  - if x%USE_PYTEST% == xyes codecov -e PYTHON_VERSION PLATFORM
These lines mean that codecov only happens if `USE_PYTEST` is true, which is only the case for the one build. pytest is still being run in the other builds, just through `tests.py` in the root, which calls `test()` in `matplotlib/__init__.py`. I don't know if running the tests using `tests.py` collects coverage, but if it does, I guess these if statements can just go.
Yeah, these conditions could be dropped.
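If coverage were indeed collected in all builds, the guarded lines in the diff above could simply become unconditional. A sketch of what that appveyor.yml fragment might look like, assuming the non-pytest builds also write coverage data:

```yaml
# Sketch: upload coverage from every build, dropping the USE_PYTEST guards.
test_script:
  # ... existing test commands ...
  - pip install codecov
  - codecov -e PYTHON_VERSION PLATFORM
```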