BUG: Drastic memory usage increase in recent builds #22233
Comments
That's disturbing. Reverting is an option, but it would be nice to figure out why it has that effect.
Tomorrow I'll see if I can replicate on something simpler, like just running numpy.test with memprof.
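A minimal sketch of that idea, assuming "memprof" refers to the memory_profiler package (an assumption on my part):

```python
# Sample RSS while the NumPy test suite runs, using memory_profiler.
from memory_profiler import memory_usage

import numpy as np

# memory_usage runs the callable and returns a list of RSS samples in MiB
samples = memory_usage((np.test, (), {}), interval=1.0)
print(f"peak RSS during numpy.test(): {max(samples):.0f} MiB")
```

Plotting the samples over time would show whether memory climbs steadily (a leak) or spikes in a single test.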
Hmmmpf, the graph does look like we probably introduced a memory leak. It would be helpful to narrow it down a bit; although if that is tricky, we can also run the debugging tools on the NumPy test suite to see if they point to something.
SciPy CI jobs started dying as well, with no indication of what the problem is. Cc @eirrgang as author of the relevant PR.
Oh, I missed that we know which PR is at fault. That should be an easy fix then, no need to revert.
@seberg only if you already understand the problem and it can be fixed within a few hours, because it's going to be a huge waste of time if every downstream CI job testing numpy main is going to fail.
The new path to preserve dtypes provided by creating a view got the reference counting wrong, because it also hit the incref path that was needed for returning the identity. This fixes up numpygh-21995. Closes numpygh-22233.
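A hedged sketch of how a leak like that shows up from Python; `np.asarray` below is only a stand-in for the affected conversion path, which the commit message does not name:

```python
# Detect a reference-count leak on an identity-returning conversion.
# np.asarray is a placeholder; the real trigger was in the
# dtype-preserving view path referenced above.
import sys

import numpy as np

a = np.arange(10)
before = sys.getrefcount(a)
for _ in range(1000):
    np.asarray(a)  # returns `a` itself; the result is discarded immediately
after = sys.getrefcount(a)
# With a stray incref, `after` exceeds `before` by ~1000;
# on a fixed build the two match.
print(before, after)
```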
Broken wheels have been deleted and the memory leak should be fixed; see gh-22236.
Thanks for finding the root cause @larsoner.
Describe the issue:
Recently in MNE-Python our CIs started dying due to memory overconsumption on both of the platforms we test on. On my local Linux machine, running the following (see the code example below) as a test while bisecting with `git bisect --first-parent` takes ~12 GB of max memory on `main` (and also on 45357de), but only a much more reasonable ~2.5 GB of max memory on db550d5.
So I think this is due to #21995.
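A sketch of how that bisection can be automated with `git bisect run`; the test command, threshold, and script name are illustrative placeholders (and rebuilding NumPy at each step is omitted):

```python
#!/usr/bin/env python
# check_mem.py (hypothetical): exit nonzero when the peak RSS of the
# test run exceeds a threshold, so `git bisect run ./check_mem.py`
# can mark commits good or bad automatically.
import resource
import subprocess
import sys

# Placeholder test command; a real script would rebuild NumPy first.
subprocess.run([sys.executable, "-m", "pytest", "some_test_module.py"], check=False)

# ru_maxrss for finished child processes is reported in KiB on Linux
peak_gib = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss / 1024**2
print(f"peak child RSS: {peak_gib:.1f} GiB")
sys.exit(1 if peak_gib > 8 else 0)
```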
MNE does use SciPy, Numba, scikit-learn, dipy, etc., which could be interacting with NumPy in some bad way, too. I could at least rule out `numba`, `dipy`, and `scikit-learn` by uninstalling them and seeing the same overconsumption, though (some tests are just skipped, or our library just uses slower code paths, this way).

I'm happy to work on isolating this further if it's not clear why this might be happening just from having it pinned down to the PR/commit.
Reproduce the code example:
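(The original code example did not survive this copy. A hypothetical stand-in is below; `np.asarray` is only a placeholder for the affected conversion path, and it assumes the leaked reference keeps each temporary array alive.)

```python
# Hypothetical reproducer: repeatedly allocate an array, pass it through
# an identity-preserving conversion, and drop it; on affected builds the
# leaked reference keeps every array alive and RSS climbs steadily.
import resource

import numpy as np

for i in range(500):
    b = np.zeros(1_000_000)  # ~8 MB allocated per iteration
    np.asarray(b)            # placeholder for the leaky conversion path
    del b                    # on broken builds `b` is never freed
    if i % 100 == 0:
        # ru_maxrss is in KiB on Linux (bytes on macOS)
        peak_mib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024
        print(f"iter {i}: peak RSS ~{peak_mib:.0f} MiB")
```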
Error message:
No response
NumPy/Python version information:
See commit numbers above
Context for the issue:
No response