Tweak coverage #8036
Conversation
Oh goody, we had two tests with the same name, so only the second one ran (luckily it now succeeds here, and it's only a smoke test). Anyone know a good way to check for this within our test suite, besides checking files with seemingly low test coverage?
More fun: …
For top-level functions, this is relatively easy:
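For instance (a hedged sketch of mine, not the snippet originally posted here): pyflakes flags redefinitions of top-level names, so running it over the test files catches duplicated top-level test functions.

```python
# dup.py -- running `pyflakes dup.py` reports something like:
#   dup.py:6: redefinition of unused 'test_foo' from line 3
def test_foo():
    pass

def test_foo():  # only this definition survives, so only it runs
    pass
```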
For class methods, maybe a short Python script:

```python
import sys
from collections import Counter

for filename in sys.argv[1:]:
    methods = Counter()
    class_name = None
    with open(filename, 'r') as f:
        for line in f:
            if line.startswith('class'):
                # "class Foo(Base):" or "class Foo:" -> "Foo"
                class_name = line.split()[1].split('(')[0].rstrip(':')
            elif line.startswith('def'):
                class_name = None  # a top-level def means we left the class
            elif class_name and line.startswith('    def'):
                method_name = line.split()[1].split('(')[0]
                methods[class_name + '.' + method_name] += 1
    duplicates = [method for method, count in methods.most_common()
                  if count != 1]
    if duplicates:
        print(filename, duplicates)
```
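Run it as, e.g., `python find_dup_methods.py lib/matplotlib/tests/*.py` (the script name is hypothetical); for each file it prints the `Class.method` names that are defined more than once.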
For top-level classes, too: …
I am 👍 on removing the pickle debugging helpers. I have 0 pity on people depending on helper functions from our tests (not in …).
```diff
@@ -158,7 +158,7 @@ def test_contour_manual_labels():

 @image_comparison(baseline_images=['contour_labels_size_color'],
                   extensions=['png'], remove_text=True)
-def test_contour_manual_labels():
+def test_contour_labels_size_color():
```
Travis is failing on this image comparison. I'm guessing it just needs an updated image, since it's been a while since this test last ran.
Updated. Surprisingly, that's the only test that had that problem. The other newly activated tests pass, thankfully.
I'm happy to wait on this until the other is decided upon.
Cool. I think everything else is A+, 👍 so far.
Trying to ensure we don't backslide.
This was causing the first test to not run.
Regenerate test images now that this test is actually run.
This was an exact duplicate of the code above it.
This was deflating our coverage numbers for the tests. If deemed necessary, this code should be resurrected elsewhere (outside tests). Also cleaned up the code a bit.
Force-pushed from 2655c93 to 3227ed1.
Lack of saving the animation caused it to never run the callbacks.
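As an illustration of that last commit message (a minimal sketch of mine, assuming `FuncAnimation` and an available movie writer; not the actual test code):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, as on CI
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
line, = ax.plot([], [])

def update(frame):
    # Only runs when show() or save() drives the frame loop.
    line.set_data([0, 1], [0, frame])
    return line,

anim = FuncAnimation(fig, update, frames=3)
# Without this save() (or an event loop), update() is never called,
# so its body shows up as uncovered:
anim.save('anim.mp4', writer='ffmpeg')  # assumes ffmpeg is installed
```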
It's been a good exercise. All of the test files with significant uncovered lines turned out to contain either dead code or tests that weren't running. A significant number of the remaining uncovered lines are caused by …
Force-pushed from 9bc2d92 to af56c13.
Remove unused code and restructure to eliminate uncovered lines.
Since we're here making changes anyways.
Another question: currently, our …
I think both solutions are fine, with a slight preference for keeping it.
```diff
@@ -1464,7 +1488,8 @@ def _as_mpl_axes(self):
     ax_via_gca = plt.gca(projection=prj2)
     assert ax_via_gca is not ax
     assert ax.get_theta_offset() == 0, ax.get_theta_offset()
-    assert ax_via_gca.get_theta_offset() == np.pi, ax_via_gca.get_theta_offset()
+    assert ax_via_gca.get_theta_offset() == np.pi, \
+        ax_via_gca.get_theta_offset()
```
do we need the second part of the assert with pytest?
Oh, I guess not.
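For the record: pytest's assertion rewriting already reports both operands when a bare comparison fails, which is why the message argument is redundant. A minimal sketch:

```python
# test_offset.py -- run with `pytest test_offset.py`
import numpy as np

def test_theta_offset():
    offset = 0.0
    # No explicit message needed; on failure pytest prints
    # something like: assert 0.0 == 3.141592653589793
    assert offset == np.pi
```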
```diff
-assert xax._major_tick_kw['tick2On'] == True
-assert xax._minor_tick_kw['tick1On'] == False
-assert xax._minor_tick_kw['tick2On'] == True
+assert not xax._major_tick_kw['tick1On']
```
These are slightly different tests, as `assert 'aardvark'` passes but `assert 'aardvark' == True` does not. Do we care about this change?
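To spell the difference out (a quick demonstration, not code from the PR):

```python
assert 'aardvark'              # passes: any non-empty string is truthy
try:
    assert 'aardvark' == True  # fails: a str never compares equal to True
except AssertionError:
    print('the equality form failed, as expected')
```

Here the `_tick_kw` values are presumably actual booleans, in which case the two spellings behave the same; the distinction only matters for truthy non-boolean values.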
Anyone have opinions on whether we should be ignoring …?
I don't think we should skip …
I think a lot of these are stubs in interfaces in the tests. I'm leaning towards turning them into some variety of …
Looks good.
Re the new test image: it would/should have been regenerated a while ago anyway, so adding it even if it'll be changed again soon seems fine to me.
Mainly to turn on coverage for all testing on AppVeyor, as well as to tighten our expected line-coverage threshold for the tests.
WIP because I want to investigate some of the test files that aren't at 100%.
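(For reference, and as an assumption about tooling rather than a description of this PR's actual change: coverage.py can enforce such a threshold via `coverage report --fail-under=NN` or a `fail_under` setting in `.coveragerc`.)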