MNT CI run all examples if some code is changed #12880


Closed

Conversation

adrinjalali
Member

The takeaway from the discussion in #12797 is that we'd like to try to run all examples (almost) always.

We also know that running all examples is unnecessary if the PR doesn't touch any code (e.g. documentation or CI related PRs). Therefore this PR does a complete doc build only if there's some change in .py, .pyx, or .pxd files.

An alternative is to remove the part in get_build_type which deals with changed files completely, and always return BUILD. But that would have some unnecessary overhead in build time.
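For illustration, the changed-file check described above could be sketched roughly as follows. This is a hypothetical sketch, not the actual CI script: the function names, return values, and commit-range handling are assumptions, and the real build logic differs in its details.

```python
# Hypothetical sketch of a "full build only when code changed" check.
# The names get_changed_files / get_build_type and the BUILD / QUICK BUILD
# return values are illustrative assumptions, not the real CI script.
import subprocess

CODE_EXTENSIONS = (".py", ".pyx", ".pxd")


def get_changed_files(commit_range):
    """Return the list of files touched in the given git commit range."""
    out = subprocess.check_output(
        ["git", "diff", "--name-only", commit_range], text=True
    )
    return [line for line in out.splitlines() if line]


def get_build_type(changed_files):
    """Do a complete doc build (all examples) only if code files changed."""
    if any(f.endswith(CODE_EXTENSIONS) for f in changed_files):
        return "BUILD"  # some .py/.pyx/.pxd file changed: run all examples
    return "QUICK BUILD"  # docs- or CI-only change: skip the full run
```

The alternative mentioned above would amount to making `get_build_type` return `BUILD` unconditionally, trading the file-inspection logic away for extra build time on docs-only PRs.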

@qinhanmin2014
Member

I don't think it's worthwhile. We'll rely on the tests to detect bugs.

@qinhanmin2014
Member

And as you can see, Circle CI is extremely time-consuming these days.

@adrinjalali
Member Author

Well, my understanding from our informal poll there was that @amueller, @ogrisel, @rth and I were in favor of running them, hence this PR.

@rth
Member

rth commented Dec 28, 2018

And as you can see, Circle CI is extremely time-consuming these days.

Well, it's 7 minutes here (+3 min to upload artifacts), unless I'm missing something. AppVeyor takes twice as long. I agree that we rely on tests to actually detect issues, and I value fast CI, but I still wonder if just running all of the examples wouldn't be a simpler solution.

@qinhanmin2014
Member

Well, it's 7 minutes here (+3 min to upload artifacts), unless I'm missing something. AppVeyor takes twice as long. I agree that we rely on tests to actually detect issues, and I value fast CI, but I still wonder if just running all of the examples wouldn't be a simpler solution.

We don't change any code here? Circle CI takes around 40 min in master (and we have 2 jobs).

@qinhanmin2014
Member

Well, my understanding from our informal poll there was that @amueller, @ogrisel, @rth and I were in favor of running them, hence this PR.

I somehow recall that there's consensus to rely on the tests :) Let's vote here. I'll vote -1 (maybe +0 if there are too many +1s)

@adrinjalali
Member Author

I somehow recall that there's consensus to rely on the tests :) Let's vote here. I'll vote -1 (maybe +0 if there are too many +1s)

It's true that we rely on the tests to detect bugs, but we don't have any systematic way of detecting warnings and issues in the examples to let the authors of PRs fix the examples as well. To me if a PR causes an error in an example, that's an issue needing a fix in that same PR. If you have a better solution to achieve that, I'm happy to hear it.

P.S: #12797 (comment)

@rth
Member

rth commented Dec 28, 2018

Aww, right. I agree 40 min is a lot for each commit in a PR. The summary of options was in #12797 (comment)

Honestly, I have not seen too many cases where examples would break on master but not in a PR, and when that happens it usually means the test coverage is not very good.
I'll change my vote and give a +1 to keep things as they are (i.e. run only changed examples on PRs, and all examples on master). I don't think that strictly failing on warnings is worth the effort we spend on it. Working to reduce warnings before a release makes sense, but it's a never-ending effort as soon as one of our dependencies gets a new release.

@rth
Member

rth commented Dec 28, 2018

To me if a PR causes an error in an example, that's an issue needing a fix in that same PR.

Also wouldn't that mean that the PR breaks backward compatibility? I.e. code that worked before now produces an error. This should be very exceptional.

@adrinjalali
Member Author

@rth, well, it all started in my brain while I was working on #12654. There are 38 changed files and 112 changed lines in the examples in that PR alone, to fix some warnings. That doesn't really give me a feeling that those cases are rare.

@rth
Member

rth commented Dec 28, 2018

There are 38 changed files and 112 changed lines in the examples in that PR alone, to fix some warnings

There was a significant effort to reduce warnings (and fail on warnings) in tests this summer, but if I remember correctly we didn't quite get to fixing them in the examples. Looking at the changes in #12654:

  • almost half of it is due to changes in dependencies (matplotlib, scikit-image). If we run all examples in CI and fail on warnings, CI in PRs could start failing after, e.g., a new matplotlib version is released, which is not really desirable (unless we pin versions).
  • the rest is due to ~3-4 PRs that deprecated some parameters over the last 6+ months. Catching those few warnings doesn't justify making contributors wait 40 min in every single PR for a green CI.

Overall, 112 changed lines needed to fix warnings in examples every 6 months or so doesn't sound too bad.

@adrinjalali
Member Author

Ok. I'm [kinda] convinced.

@albertcthomas
Contributor

Just for the record, this was also discussed in issue #10943.
