With NumPy 1.20, SymPy generated code cannot be serialized with dill #18547
Comments
Well, I dug a bit. And it gets stuck |
Hmm, I had marked this for triage review, but we don't know this well. Is this severe enough that it is something that we should try to fix in 1.21.2? Do we have any lead on how to fix it easily? |
@pentschev, I am a bit stumped right now. It might just be a dill bug or harmless. But if you got an idea, that would be good as well :). |
I don't have any immediate ideas, but I could see that on my side as well. The |
The use of `lambdify` here seems to be irrelevant. I get the same issue with

```python
import dill
from numpy import real

dill.settings['recurse'] = True

def lfunc(x):
    return real(x)

dill.dumps(lfunc)
```

Setting `recurse=True` is relevant, though, and without it, I get

```
Traceback (most recent call last):
  File "test.py", line 13, in <module>
    dill.dumps(lfunc)
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 273, in dumps
    dump(obj, file, protocol, byref, fmode, recurse, **kwds)#, strictio)
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 267, in dump
    Pickler(file, protocol, **_kwds).dump(obj)
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
    StockPickler.dump(self, obj)
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/pickle.py", line 437, in dump
    self.save(obj)
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 1447, in save_function
    obj.__dict__, fkwdefaults), obj=obj)
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce
    save(args)
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple
    save(element)
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
    StockPickler.save_dict(pickler, obj)
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
    save(v)
  File "/Users/aaronmeurer/anaconda3/lib/python3.7/pickle.py", line 524, in save
    rv = reduce(self.proto)
TypeError: can't pickle PyCapsule objects
```

which doesn't happen for the non-lambdified version (without `recurse=True`, the non-lambdified version pickles without any errors). |
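For context on that final `TypeError`: PyCapsule objects are opaque wrappers around C pointers and are inherently unpicklable, and with `recurse=True` dill walks the function's globals (here the entire numpy namespace) until it hits one. A stdlib-only sketch of the underlying failure, using `datetime`'s C-API capsule as a stand-in for the capsules in numpy's module dict:

```python
import pickle
import datetime

# The datetime module exposes its C API as a PyCapsule object, which makes
# a handy stand-in for the capsules found in numpy's module namespace.
capsule = datetime.datetime_CAPI
print(type(capsule).__name__)  # PyCapsule

# Pickling any PyCapsule fails, because it is just an opaque C pointer.
try:
    pickle.dumps(capsule)
except TypeError as exc:
    print("pickle failed:", exc)
```

The exact error message varies by Python version ("can't pickle PyCapsule objects" vs. "cannot pickle 'PyCapsule' object"), but the failure mode is the same one the traceback above ends in.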
Thanks for chiming in. I always have `recurse=True` set. Since the non-lambdified version works, I thought `lambdify` was involved. |
The |
@mmckerns Any progress? |
@mmckerns What is the current status with dill? I'm going to kick this off to Numpy 1.22.0 for tracking. If there is a fix we can backport to the upcoming 1.21.x branch. |
Bumping the milestone, but if we have to look into breaking the cycle in NumPy, please bump this, because IIRC the cycle seems pretty natural and not that bad. |
I am going to remove the milestone, since it was pushed off before already. There were no more complaints so we are assuming it is not very pressing. Please ping if this is important and needs to be prioritized after all. |
I worked around this by avoiding serializing the generated code, so this issue is no longer pressing. I still hope that @mmckerns can fix this issue, as I ran into "maximum recursion depth reached" elsewhere. But that might not be directly related to NumPy. |
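The comment above doesn't spell out the workaround, but one common approach in this situation is to avoid pickling the function object at all and instead serialize something that rebuilds it. A minimal stdlib-only sketch of that idea (the `src` string and `lfunc` name are hypothetical stand-ins for generator output, not the commenter's actual code):

```python
import pickle

# Instead of pickling the function object (whose globals can drag in the
# whole numpy namespace under recurse=True), pickle its source text and
# rebuild the function on load.
src = "def lfunc(x):\n    return x * 2\n"

payload = pickle.dumps(src)  # plain strings always pickle cleanly

# On the receiving side, re-execute the source to recover the function.
namespace = {}
exec(pickle.loads(payload), namespace)
print(namespace["lfunc"](21))  # 42
```

The usual caveat applies: `exec` on untrusted payloads is unsafe, so this only makes sense when the serialized source comes from your own code generator.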
I haven't checked in on this thread in a while... How recursion is handled has recently changed in
So, it looks like this works. However, it fails for
With regard to the originating code,
I'll check back again after the PyCapsule PR goes in. |
This moves dispatching for `__array_function__` into a C-wrapper. This helps speed for multiple reasons:

* Avoids one additional dispatching function call to C
* Avoids the use of `*args, **kwargs`, which is slower.
* For simple NumPy calls we can stay in the faster "vectorcall" world

This speeds up things generally a little, but can speed things up a lot when keyword arguments are used on lightweight functions, for example::

    np.can_cast(arr, dtype, casting="same_kind")

is more than twice as fast with this. There is one alternative in principle to get the best speed: we could inline the "relevant argument"/dispatcher extraction. That changes behavior in an acceptable but larger way (it passes default arguments). Unless the C-entry point seems unwanted, this should be a decent step in the right direction even if we want to do that eventually.

Closes numpygh-20790
Closes numpygh-18547 (although not quite sure why)
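To make the commit message concrete: the "relevant argument"/dispatcher pattern it refers to can be modeled in pure Python. This is a simplified sketch of the protocol's shape, not NumPy's actual implementation (which lives in C and also passes a `types` argument); `concat`, `_concat_dispatcher`, and `MyArray` are hypothetical names:

```python
import functools

def array_function_dispatch(dispatcher):
    """Simplified model of __array_function__-style dispatch: the
    dispatcher extracts the 'relevant arguments'; if any of them defines
    __array_function__, the call is forwarded to that hook instead of
    the default implementation."""
    def decorator(implementation):
        @functools.wraps(implementation)
        def public_api(*args, **kwargs):
            for arg in dispatcher(*args, **kwargs):
                hook = getattr(type(arg), "__array_function__", None)
                if hook is not None:
                    return hook(arg, public_api, args, kwargs)
            return implementation(*args, **kwargs)
        return public_api
    return decorator

def _concat_dispatcher(arrays, axis=0):
    # Only the array-like arguments are "relevant" for dispatch.
    return arrays

@array_function_dispatch(_concat_dispatcher)
def concat(arrays, axis=0):
    # Default implementation: plain-list concatenation.
    return sum(arrays, [])

class MyArray:
    def __init__(self, data):
        self.data = data
    def __array_function__(self, func, args, kwargs):
        return f"MyArray handled {func.__name__}"

print(concat([[1], [2]]))           # [1, 2]
print(concat([MyArray([1]), [2]]))  # MyArray handled concat
```

The wrapper layer (`public_api`) with its `*args, **kwargs` is exactly the per-call overhead the commit moves into C.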
I had tested this with gh-23020, so it should be fixed. I am not sure why; please do open a new issue if this or a similar problem still shows up. I think it shouldn't be hard to resolve this. |
FYI: A quick test shows after the |
I have been using SymPy to generate NumPy code through `lambdify` and using dill to serialize the code. Since upgrading to NumPy 1.20.1, some generated code cannot be serialized correctly due to a RecursionError. The example code works with NumPy 1.19. I have posted this issue in the SymPy and NumPy Gitter rooms, and I'm posting my bisecting results here.
Reproducing code example:
Prerequisites: SymPy 1.7.1, dill 0.3.3, NumPy 1.20.1
Based on my non-exhaustive testing, the error occurs when the function includes `re` or `im`. Other functions can be successfully serialized.
Error message:
Bisecting Results
4cd6e4b is the first bad commit.
Could anyone help?
NumPy/Python version information:
1.20.1 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 16:12:38)
[Clang 11.0.1 ]