assert mods.pop(0) == 'tests' errors for multiprocess tests on OSX #3314
Comments
I ran into this recently, too, but couldn't reproduce it after doing a
The Travis builds are clean, though. I suppose it could be something to do with timings, making it somewhat random. But it happens for all the Travis builds.
This is puzzling -- I can't reproduce locally on my Mac box. It might be helpful if we knew what
This happens on Travis if I replace
https://travis-ci.org/jenshnielsen/matplotlib/builds/88694941 fails on Travis due to this, with the only change being the install via a pip build from https://github.com/jenshnielsen/matplotlib/tree/pipinstalltravis
Some more debugging shows that this happens for tests within

For regular Matplotlib tests, mods is something like

Edit: This is after popping the first element; mods is normally something like
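One way to see what the failing check is operating on is to print the split module path for a test function. The helper below is only an illustration based on the assertion in the title (the name show_mods and the details are assumptions, not matplotlib's actual code):

```python
# Illustrative debugging helper: print how a test function's dotted module
# path splits, since the failing assertion pops elements off that list.
def show_mods(func):
    mods = func.__module__.split('.')
    print('__module__ :', func.__module__)
    print('mods       :', mods)
    print('after pop  :', mods[1:])  # "after popping the first element"
```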
The same happens without multiprocessing if you do
It looks like yet another namespace oddity.
@jenshnielsen: Thanks for getting to the bottom of this. I'm now able to reproduce...
mods[0] does not contain the package name when installing via pip and running the multiprocess nose plugin. In the long term, we should get rid of the namespace package.
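For context, the assertion in the title is the kind of check used by matplotlib's image-comparison test support, which derives the baseline-image directory from a test function's dotted module path. The sketch below is an approximation under that assumption (the helper name and details are illustrative, not the exact matplotlib source); it shows why the check breaks when the first path element is not the package name:

```python
import os

def image_directories(func):
    # Sketch only: split the test's dotted module path, e.g.
    # 'matplotlib.tests.test_axes' -> ['matplotlib', 'tests', 'test_axes'].
    mods = func.__module__.split('.')
    mods.pop(0)  # expected to be the package name, e.g. 'matplotlib'
    # The assertion from the report: if the element just popped was not the
    # package name (as happens with the pip-installed namespace-package
    # layout under the multiprocess plugin), the next element is not
    # 'tests' and this fails.
    assert mods.pop(0) == 'tests'
    subdir = os.path.join(*mods)  # e.g. 'test_axes'
    return os.path.join('baseline_images', subdir)
```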
I'm testing OSX builds using Matt Terry's build grid. All the matplotlib builds are giving multiple errors like this:
See: https://s3.amazonaws.com/archive.travis-ci.org/jobs/30794968/log.txt
It's not clear where the failures are coming from, but this is an excerpt from the stdout of the tests.
I can replicate the failures on my laptop in a clean virtual machine, but only when running the tests via the multiprocessing plugin:
Running in single process mode gives no failures. Does anyone have any idea how I could investigate further?
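For reference, nose's multiprocess plugin is enabled with the --processes option, with --process-timeout controlling the per-test timeout in the worker processes. The exact command used here isn't shown, so the following is only an assumed illustration of the two modes being compared:

```python
# Illustrative only: the exact invocation used in this report is not shown;
# the flags and process count below are assumptions.
import nose

# Multiprocess run: nose's multiprocess plugin is enabled by --processes.
nose.run(argv=['nosetests', 'matplotlib.tests',
               '--processes=8', '--process-timeout=300'])

# Single-process run for comparison (this mode is reported to pass cleanly):
#   nose.run(argv=['nosetests', 'matplotlib.tests'])
```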