Include test files in coverage report #6902
Thanks for investigating this. I think the fundamental problem is that we have an opt-in whitelist. I am not convinced that adding the tests to the default coverage is the right solution. To me it would bias the coverage towards an artificially high number which is not really representative of the real coverage of our code.
I am for this too. Currently for pytest I have developed two filters, but I will gladly drop them.
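The filters themselves are not shown in the thread; as a rough illustration only, a pytest collection filter of this kind is usually written as a `conftest.py` hook. `pytest_collection_modifyitems` is a standard pytest hook, while the marker name below is hypothetical:

```python
# conftest.py -- an illustrative sketch only; the actual filters
# referenced in this comment are not shown in the thread.
import pytest

def pytest_collection_modifyitems(config, items):
    # Deselect tests known not to work under pytest by attaching a
    # skip marker during collection. "known_broken" is a hypothetical
    # marker name used purely for illustration.
    skip = pytest.mark.skip(reason="known not to work under pytest yet")
    for item in items:
        if "known_broken" in item.keywords:
            item.add_marker(skip)
```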
Currently the code coverage percentage is already skewed (for example, there are only 3 backends in the coverage report, while the others are not included because they have zero coverage). I do not think anyone judges matplotlib by its coverage number, but test code is code too, and it is better to have an overestimated score than tests which do not work.
I definitely support getting rid of the whitelisting as an option, or at least not making it the default. I think our experience has shown that it is very error-prone. The backends are definitely an issue for testing; that is the main driver behind deprecating the WX non-AGG backend and the GDK backend. We most definitely need more coverage for the Cairo, ps, and pgf backends and the various interactive aspects of the GUI toolkits. That being said, the backends are not excluded from coverage; it just happens that the Python code that implements most backends is a small fraction of the total code. That does not imply we need to run all of the tests with all of the backends to get full coverage, since most of the tests exercise non-backend-specific code.
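For context, the whitelist-versus-blacklist distinction maps onto coverage.py configuration roughly as follows. This is a minimal sketch: `include` and `omit` are real coverage.py `[run]` settings, but the paths are assumptions about the repository layout.

```ini
# .coveragerc -- sketch of the two approaches
[run]
# Opt-in whitelist: only matching files are measured, so a file
# accidentally left off the list silently drops out of the report.
include =
    lib/matplotlib/*

# Opt-out blacklist: measure everything and explicitly exclude paths
# instead (commented out here; normally you would pick one approach).
# omit =
#     lib/matplotlib/tests/*
```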
I cannot say that including tests in the coverage report is definitely a good solution, but until there is another way (or one I do not know about) to prevent issues like #6839 (and the others mentioned above), I am for it.
So the way coverage is currently set up, anything using py.test tanks miserably: https://coveralls.io/jobs/17551007 (eta: it's an issue for #6889)
Fixed by #8003. |
Of course I can run coverage for the tests locally, but I think it should be accessible on Coveralls. You cannot blindly rely on a test report until you are sure that the tests are actually working; otherwise it could give you a false sense of security.
I have faced a problem where I cannot be sure that pytest runs all the tests. Moreover, I have got test failures with pytest on tests that nose does not run (and if so, I do not know why they have not been deleted). Here is the list of failures (output truncated):

```
..FF.F
: RMS 0.014, 0.264, 0.065).F
: RMS 0.261)
```

What benefits can we get by including test files in the coverage report? I collected the dark spots in the test suite:
Important (test not running, uncovered cases):

- … (`TRAVIS` env variable was removed?)

Less important (partial coverage):

- `test_delaunay`

Not important (requires tests for the tests, and other cases):

- …
Someone could say that we need tests for the tests to have full coverage -- I would say no: some special places can be marked with `# pragma: no cover`, but others can be fixed to have full coverage. Currently this can be done with the `nose-cov` plugin, and after migrating to pytest we will have exactly the same coverage reports with `pytest-cov`.