Allow debug evaling IR logp graphs #7666
Conversation
Force-pushed from 2fed5fd to 0fba15c
@@ -236,13 +236,16 @@ class ValuedRV(Op):
     and breaking the dependency of `b` on `a`. The new nodes isolate the graphs between conditioning points.
     """

+    view_map = {0: [0]}
Needs `view_map` so PyTensor knows not to do inplace operations on this output. The other Op already had the `view_map` defined.
Interesting that you have to use view_map to do that. Somewhat hidden functionality? I thought pytensor only does inplace on things that proactively define a destroy_map.
If I don't say this output returns a view of the input, and the output is not itself a graph output, PyTensor will assume the input can be safely destroyed, as it is just an intermediate result.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff            @@
##             main    #7666    +/-   ##
========================================
  Coverage   92.70%   92.70%
========================================
  Files         107      107
  Lines       18391    18397       +6
========================================
+ Hits        17049    17055       +6
  Misses       1342     1342
lgtm, just some questions
    def make_node(self, rv, value):
        assert isinstance(rv, Variable)
        assert isinstance(value, Variable)
        return Apply(self, [rv, value], [rv.type(name=rv.name)])

    def perform(self, node, inputs, out):
-       raise NotImplementedError("ValuedVar should not be present in the final graph!")
+       warnings.warn("ValuedVar should not be present in the final graph!")
should we be logging instead of warning so people can more easily turn it off?
No, I want this to be loud, because it shouldn't be in the final graph.
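On the warning-vs-logging question, plain `warnings.warn` is loud by default but still goes through the standard filter machinery, so a user who deliberately debug-evals can silence or escalate it. A stdlib-only sketch (the message string mirrors the diff; the surrounding function is hypothetical, not the actual Op):

```python
import warnings


def perform_sketch():
    # Stand-in for the Op's perform method in the diff above (illustrative only).
    warnings.warn("ValuedVar should not be present in the final graph!")


# Loud by default: the warning is emitted every time here.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    perform_sketch()
print(len(caught))  # → 1

# ...but a user can escalate it to a hard error if they prefer:
with warnings.catch_warnings():
    warnings.simplefilter("error")
    try:
        perform_sketch()
    except UserWarning as err:
        print("raised:", err)
```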
Interesting that you have to use view_map to do that. Somewhat hidden functionality? I thought pytensor only does inplace on things that proactively define a destroy_map.
Your messages are coming in a weird place so I can't reply inline. You are reading it a bit backwards. By default an Op must never return a view of an input or mutate an input in place. An Op that does is the special case, and you have to tell PyTensor about it. That's the contract. Otherwise every Op would have to be explicit about `view_map`, and most of the time it would be empty.
A user on discourse recently asked why we can't eval logp graphs during their construction. We have some tagging Ops that raise in their perform method, which prevents this.
https://discourse.pymc.io/t/how-to-implement-bivariate-poisson-logbp/16400/7?u=ricardov94
This PR allows evaluating them, but issues a warning. I've felt the need to get values during interactive debugging many times myself.
📚 Documentation preview 📚: https://pymc--7666.org.readthedocs.build/en/7666/