FIX: Remove the deepcopy override from transforms #21597
Conversation
```python
other = copy.deepcopy(super(), memo)
# If `c = a + b; a1 = copy(a)`, then modifications to `a1` do not
# propagate back to `c`, i.e. we need to clear the parents of `a1`.
other._parents = {}
```
I think this is the relevant problem: you don't want changes to the copy to invalidate dependents of the original instance.
I think your previous tests captured that case?
`def test_deepcopy():`

I left all of the previous tests in there. The override of the `__copy__()` method is still required to keep the parent/child references intact.
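To illustrate the point above, here is a minimal stdlib-only sketch (the `Node` class and its attributes are invented for this example, not matplotlib's actual implementation) of why a `__copy__` override that resets `_parents` matters — mutating a shallow copy must not invalidate objects that depend on the original:

```python
import copy

class Node:
    """Toy stand-in for a TransformNode: children record their parents."""
    def __init__(self):
        self._parents = {}   # id(parent) -> parent; the dependents to invalidate

    def __copy__(self):
        other = type(self)()
        other.__dict__.update(self.__dict__)
        # The copy starts with no dependents: changes to it must not
        # invalidate objects that depend on the *original* node.
        other._parents = {}
        return other

a = Node()
c = Node()
a._parents[id(c)] = c        # c depends on a
a1 = copy.copy(a)
print(len(a._parents), len(a1._parents))  # 1 0
```

Without the override, `a1._parents` would be the same dict object as `a._parents`, so the copy would appear to have the original's dependents.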
Oh, I see. This is because the default deepcopy is also going to deepcopy `_parents`, creating new dependents (which don't actually exist anywhere except as `_parents` entries, so they should in fact even get gc'ed out at some point?); in any case, this means that we indeed don't need to manually clear out `_parents` like I previously did.

I haven't thought about whether the intended figure deepcopy makes sense, but at least from the point of view of deepcopying transforms, I'm happy with the change here.
👍 My take on deepcopy is that it should behave the same as a pickle dump followed by a load, which we are already testing, so it should be feasible to achieve. Something like this:

```python
def __deepcopy__(self, memo):
    return pickle.loads(pickle.dumps(self))
```

But we don't even need to do that in this case, because the standard deepcopy (with this update) works on the figure object. That's what I'm trying to get at in the added test: just show that we get new objects with the proper scales set from the base object.
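As a self-contained sketch of that pickle-round-trip idea (the `PickleCopied` class name is made up for illustration; this assumes the object is picklable):

```python
import copy
import pickle

class PickleCopied:
    """Hypothetical class whose deepcopy is defined as a pickle round-trip."""
    def __init__(self, data):
        self.data = data

    def __deepcopy__(self, memo):
        # Serialize and deserialize: yields a fully independent object graph,
        # matching what a pickle dump/load round-trip would produce.
        return pickle.loads(pickle.dumps(self))

obj = PickleCopied([1, 2, 3])
dup = copy.deepcopy(obj)
print(dup is obj, dup.data == obj.data)  # False True
```

One caveat with this approach is that it only works for objects that pickle cleanly, which is exactly what the existing pickle tests exercise.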
Here is a quick test script with interactivity that I was playing with. With this PR, fig3 is separated from the interactivity on fig1 and fig2. fig1 and fig2 have to both be in an "interactive" state for the transforms to be updated, but they are still linked via the copy. If fig1 is interactive but fig2 is not, fig1 moves around. Then, turn on interactivity on fig2 and click in the canvas to force a redraw, and fig2 will jump to the fig1 transforms. (Side note: when doing this I get an error in end_pan when trying …)

```python
import matplotlib.pyplot as plt
import copy
import matplotlib

fig1, ax = plt.subplots()
ax.plot([0, 1], [2, 3])
fig2 = copy.copy(fig1)
fig3 = copy.deepcopy(fig1)
# We expect fig1 and fig2 to have y log-scaled, but fig3
# (the deepcopy) to be unchanged
ax.set_yscale('log')
plt.show()
```
Thinking about this a little bit more, I'm not sure we want to allow a normal shallow copy... It seems like that could cause some unwanted confusion. There is shared state, but it doesn't automatically get propagated like sharex/sharey would. If you zoom/pan with fig1 in interactive mode, but not fig2, and place your mouse in fig2's axes, fig1 will pan around because it is the same axes (see code snippet below), but fig2 doesn't move because interactive mode isn't turned on. But then, when you turn pan on for fig2 and click somewhere, the figure will all of a sudden jump to the new state because a new draw was triggered.

deepcopy, on the other hand, makes a little more sense to me, and I could see the usefulness of it: start everything fresh on a new figure and keep your old copy around to compare against.

```python
# Figures the same, axes the same, canvas the same
# copy: False True False
# deep: False False False
print("copy:", fig1 is fig2, fig1.axes[0] is fig2.axes[0], fig1.canvas is fig2.canvas)
print("deep:", fig1 is fig3, fig1.axes[0] is fig3.axes[0], fig1.canvas is fig3.canvas)
```
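The shallow-vs-deep sharing described above can be mimicked with plain Python classes (`Figure` and `Axes` here are invented toy stand-ins, not matplotlib's actual classes): a shallow copy shares the axes with the original, so state changes leak across, while a deepcopy is fully independent:

```python
import copy

class Axes:
    """Toy stand-in for a matplotlib Axes (invented for this sketch)."""
    def __init__(self):
        self.yscale = "linear"

class Figure:
    """Toy stand-in for a matplotlib Figure (invented for this sketch)."""
    def __init__(self):
        self.axes = [Axes()]

fig1 = Figure()
fig2 = copy.copy(fig1)      # shallow: the axes list (and its Axes) are shared
fig3 = copy.deepcopy(fig1)  # deep: fully independent objects

fig1.axes[0].yscale = "log"
print(fig2.axes[0].yscale, fig3.axes[0].yscale)  # log linear
```

The shallow copy reflects the change to fig1 because both figures hold the very same Axes object, which is the shared-state confusion the comment above describes.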
I guess, given that you have removed the special casing from the transform code, it would be good to test that changing the transform stack on one figure doesn't change it on the other? I get confused by how that all works, but I would expect that zooming on one axes should not change the limits on the other?
Force-pushed from a7db8c2 to cfc1609: "Let Python take care of the deepcopy for us."
Correct: on a deepcopy, everything will be new objects on the new figure and should not influence the original object. I added in a
PR Summary
Currently, a deepcopy of a figure does not work because the transform tree does not get fully copied to the new state. I am not sure we actually need to override `__deepcopy__` on the TransformNode, so just remove that and use the implementation that comes with the class. I might be missing something obvious, but this does appear to work as I'd expect and does not copy any transform state over to the new axes.

Closes #21554
PR Checklist

Tests and Styling
- [ ] Has pytest style unit tests (and pytest passes).
- [ ] Is Flake 8 compliant (install flake8-docstrings and run flake8 --docstring-convention=all).

Documentation
- [ ] New features are documented, with examples if plot related.
- [ ] New features have an entry in doc/users/next_whats_new/ (follow instructions in README.rst there).
- [ ] API changes documented in doc/api/next_api_changes/ (follow instructions in README.rst there).