
Tweak coverage #8036

Merged

dstansby merged 10 commits into matplotlib:master from dopplershift:tweak-coverage on Feb 20, 2017

Conversation

dopplershift (Contributor):

Mainly to turn on coverage for all testing on AppVeyor, as well as to tighten our expected coverage threshold for the test lines.

WIP because I want to investigate some of the test files that aren't at 100%.

@dopplershift (Contributor, Author):

Oh goody, we had two tests with the same name, so only the second ran (luckily it now succeeds here, and it's only a smoke test).

Anyone know a good way to check for this within our test suite--besides checking files with seemingly low test coverage %?
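
(A minimal illustration with a hypothetical test name: the second def silently rebinds the module-level name, so only the second definition is collected and run.)

def test_spam():
    assert True  # never runs; shadowed by the redefinition below

def test_spam():  # rebinds test_spam; the test runner only sees this one
    assert True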

@dopplershift (Contributor, Author) commented Feb 7, 2017:

More fun:

  • Completely duplicated test methods in Test_detrend in test_mlab.py
  • In test_pickle.py, recursive_pickle (and hence depth_getter) are completely unused in live code; they're commented-out debugging helpers. Suggestions on where else we could put them? Or can we remove them?

@QuLogic (Member) commented Feb 7, 2017:

For top-level functions, this is relatively easy:

$ grep -IRo '^def[[:space:]]\+[A-Za-z0-9_]\+' | sort | uniq -d
test_axes.py:def test_pcolorargs
test_contour.py:def test_contour_manual_labels

For class methods, maybe a short Python script:

import sys
from collections import Counter

for name in sys.argv[1:]:
    methods = Counter()
    class_name = None
    with open(name, 'r') as f:
        for line in f:
            if line.startswith('class'):
                class_name = line.split()[1].split('(')[0].strip()
            elif line.startswith('def'):
                class_name = None
            elif class_name and line.startswith('    def'):
                method_name = line.split()[1].split('(')[0].strip()
                methods[class_name + '.' + method_name] += 1
    # most_common() lists every method; count > 1 means a duplicate definition
    duplicates = [meth for meth, count in methods.most_common() if count != 1]
    if duplicates:
        print(name, duplicates)
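
Running it over the test files reports the duplicated methods (the script filename here is hypothetical):

$ python find_dup_methods.py lib/matplotlib/tests/test_*.py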
test_mlab.py ['Test_detrend.test_detrend_mean_2D_axism1', 'Test_detrend.test_detrend_mean_2D_none', 'Test_detrend.test_detrend_mean_2D_axis0', 'Test_detrend.test_detrend_str_linear_2d_slope_off_axis0', 'Test_detrend.test_detrend_mean_2D_none_T', 'Test_detrend.test_detrend_mean_2D_axis1', 'Test_detrend.test_detrend_detrend_linear_1d_slope_off_axis1']
test_skew.py ['SkewXTick.tick2On', 'SkewXTick.gridOn', 'SkewXTick.tick1On', 'SkewXTick.label2On', 'SkewXTick.label1On']

@QuLogic (Member) commented Feb 7, 2017:

For top-level classes, too:

$ grep -IRo '^class[[:space:]]\+[A-Za-z0-9_]\+' | sort | uniq -d
test_ticker.py:class TestLogFormatter

@tacaswell added this to the 2.1 (next point release) milestone on Feb 7, 2017.
@tacaswell (Member):

I am 👍 on removing the pickle debugging helpers. I have zero pity for people depending on helper functions from inside our tests (not in matplotlib.testing, in the tests themselves). We have already broken those folks by default by not including the tests anyway.

@@ -158,7 +158,7 @@ def test_contour_manual_labels():

 @image_comparison(baseline_images=['contour_labels_size_color'],
                   extensions=['png'], remove_text=True)
-def test_contour_manual_labels():
+def test_contour_labels_size_color():

Member:

Travis is failing on this image comparison. I'm guessing this just needs an updated image, since it's been a while since this test was ever run.

dopplershift (Contributor, Author):

Updated. Surprisingly, that's the only test that had that problem. The other newly activated tests pass, thankfully.

Member:

How are we handling regenerating images as a policy?

See #7905

I think we'll shortly have to revert this changed image when/if #7970 is merged.

dopplershift (Contributor, Author):

I'm happy to wait on this until the other is decided upon.

Member:

Cool. I think everything else is A+, 👍 so far.

Commits:

  • Trying to ensure we don't backslide.
  • This was causing the first test to not run.
  • Regenerate test images now that this test is actually run.
  • This was an exact duplicate of the code above it.
  • This was deflating our coverage numbers for the tests. If deemed necessary, this code should be resurrected elsewhere (outside tests).
  • Also cleaned up the code a bit.
  • Lack of saving the animation caused it to never run the callbacks.
@dopplershift (Contributor, Author):

It's been a good exercise. All of the test files with significant numbers of uncovered lines turned out to be due either to dead code or to tests that weren't running.

I noticed that a significant number of the remaining uncovered lines are bare pass statements. Thoughts on adding pass to our exclude list in .coveragerc? Or is that cheating?

Commits:

  • Remove unused code and restructure to eliminate uncovered lines.
  • Since we're here making changes anyways.
@dopplershift (Contributor, Author) commented Feb 7, 2017:

Another question: Currently, our matplotlib.testing stands at ~57% coverage because all the nose stuff is disabled. Do we want to exclude the nose testing code from the report?
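
(If we went the exclusion route, a minimal sketch using coverage.py's [run] omit option in .coveragerc; the path glob is an assumption about where the nose helpers live:)

[run]
omit =
    */matplotlib/testing/nose*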

@NelleV (Member) commented Feb 9, 2017:

I think both solutions are fine, with a slight preference for keeping it.
The day we get rid of this code, our coverage will just jump up.

@@ -1464,7 +1488,8 @@ def _as_mpl_axes(self):
     ax_via_gca = plt.gca(projection=prj2)
     assert ax_via_gca is not ax
     assert ax.get_theta_offset() == 0, ax.get_theta_offset()
-    assert ax_via_gca.get_theta_offset() == np.pi, ax_via_gca.get_theta_offset()
+    assert ax_via_gca.get_theta_offset() == np.pi, \
+        ax_via_gca.get_theta_offset()
Member:

do we need the second part of the assert with pytest?

dopplershift (Contributor, Author):

Oh, I guess not.
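
(Context for readers: pytest rewrites bare assert statements so that a failure report already shows both operands, which is exactly what the explicit message was emulating. A minimal sketch, with a stand-in value:)

import math

def test_theta_offset():
    offset = 0.0  # stand-in for ax_via_gca.get_theta_offset()
    # On failure, pytest's assertion rewriting reports something like
    #     assert 0.0 == 3.141592653589793
    # with no explicit message argument needed.
    assert offset == math.pi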

-    assert xax._major_tick_kw['tick2On'] == True
-    assert xax._minor_tick_kw['tick1On'] == False
-    assert xax._minor_tick_kw['tick2On'] == True
+    assert not xax._major_tick_kw['tick1On']
Member:

These are slightly different tests as assert 'aardvark' passes, but assert 'aardvark' == True does not.

Do we care about this change?
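
(A quick illustration of the semantic difference:)

assert 'aardvark'          # passes: a non-empty string is truthy
assert 'aardvark' == True  # raises AssertionError: a str is never equal to True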

@dopplershift (Contributor, Author):

Anyone have opinions on whether we should be ignoring pass lines for test coverage? We currently exclude:

    raise NotImplemented
    def __str__
    def __repr__
    if __name__ == .__main__.:
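
(A sketch of the proposal, assuming the standard coverage.py [report] exclude_lines section, where each entry is a regex matched against source lines; a stricter pattern such as ^\s*pass\s*$ would avoid accidentally matching words like "passed":)

[report]
exclude_lines =
    pass
    raise NotImplemented
    def __str__
    def __repr__
    if __name__ == .__main__.: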

@tacaswell (Member):

I don't think we should skip pass lines. Either we have non-functional code (as in, the body is just pass and it really should be NotImplemented), or the pass lines are in fall-throughs of input sanitation/validation, in which case missing them means there is a class of inputs we are missing in the tests.
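
(A hypothetical illustration of that second case, where the pass sits in a validation fall-through:)

def _check_loc(loc):
    """Hypothetical validator: the pass arm stays uncovered unless the
    tests actually exercise a numeric loc."""
    if isinstance(loc, str):
        raise TypeError('string locations are not supported')
    elif isinstance(loc, (int, float)):
        pass  # valid input; fall through
    else:
        raise TypeError('unsupported loc type: %r' % (loc,))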

@dopplershift (Contributor, Author):

I think a lot of these are stubs in interfaces in the tests. I'm leaning towards turning them into some variety of NotImplemented such that they break should they be used.
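
(For instance, a hypothetical interface stub could go from a silent pass to failing loudly:)

class DummyEvent:
    """Hypothetical stub used only to satisfy an interface in a test."""
    def process(self):
        # was: pass; raising makes any accidental use an immediate, visible error
        raise NotImplementedError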

@dstansby (Member) left a review:

Looks good.

re. the new test image, the image would/should have been regenerated a while ago, so adding it even if it'll be changed soon seems fine to me.

@dstansby merged commit cf0d260 into matplotlib:master on Feb 20, 2017.
@dopplershift deleted the tweak-coverage branch on February 21, 2017.
@QuLogic changed the title from "[WIP] Tweak coverage" to "Tweak coverage" on Feb 26, 2017.