BUG ensure monotonic property of lerp in numpy.percentile #15098
Conversation
ping @seberg, @eric-wieser, @arthertz, who might be interested in taking a look at this.
Are there other regression tests that would be meaningful?
The test failures do not seem to be associated with the changes in this PR.
Small comments:
Co-Authored-By: Olivier Grisel <olivier.grisel@ensta.org>
Looks good to me. @eric-wieser want to squash merge it when you are happy? Since percentile should normally do most of its work in looking up the indices and not in the calculation, I do not think I would worry about speed.
I think we do not need release notes here.
Tests are passing.
@eric-wieser do you require any further changes?
@seberg ping. This is both approved and "pending review".
numpy/lib/function_base.py (outdated)
x1 = np.moveaxis(x1, axis, 0)
x2 = np.moveaxis(x2, axis, 0)
If you kept these lines, and did them before the computation, then you'd be able to do:
r_above = np.add(x1, diff_x2_x1 * weights_above, out=out)
r_below = np.subtract(x2, diff_x2_x1 * weights_below, out=r_above, where=weights_above < 0.5)
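As a standalone illustration of the out=/where= pattern suggested above (the arrays and the mask here are made up, not the percentile variables): the second ufunc call overwrites the output only where the mask is True and leaves the other positions as produced by the first call.

import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 20.0, 30.0, 40.0])
mask = np.array([False, True, False, True])

r = np.add(a, b)                      # computed everywhere: [11. 22. 33. 44.]
np.subtract(b, a, out=r, where=mask)  # overwrites r only where mask is True
print(r)                              # [11. 18. 33. 36.]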
OK, I mainly wanted to look at the formula, and the logic seems fine to me. The test failures are real, and so is the issue with out being a scalar. I think reorganizing the calculation to make it closer to the original (moving it to where the add was) should probably fix that issue.
# ensure axis with q-th is first
x1 = np.moveaxis(x1, axis, 0)
x2 = np.moveaxis(x2, axis, 0)
Move this before the diff calculation, I think.
You'll need to call moveaxis on the weight too, I think
Wait, do we even need this? We already moved the axis above...
I think perhaps the take should use axis=0.
Ah, axis is already 0 here, so this is a no-op
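For readers without the surrounding diff, a minimal sketch (with a made-up shape) of what np.moveaxis does in this code path: it moves the requested axis to position 0 so that indexing along the first axis walks the values of interest, and moving axis 0 to position 0 is indeed a no-op.

import numpy as np

x = np.arange(24).reshape(2, 3, 4)
print(np.moveaxis(x, 2, 0).shape)  # (4, 2, 3): axis 2 is now first
print(np.moveaxis(x, 0, 0).shape)  # (2, 3, 4): unchanged, a no-op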
r_above = np.add(x1, diff_x2_x1 * weights_above, out=out)
r_below = np.subtract(
    x2, diff_x2_x1 * weights_below, out=r_above,
    where=weights_above < 0.5
)
This is fun, but this way, please just call it r (or something longer) rather than above and below, and add a comment that we first calculate everything for "above" and then replace it with the "below" version.
Are there tests for the combinations of zerod and out=... being given? Because this smells to me like we could get into trouble here.
Probably it is possible to move the if zerod block before these calculations to avoid that problem?
I think the point is we made everything 1d at the top of the function, so the zerod block should remain below - to convert the final result back to 0d.
Perhaps we should push forward on the leavewrapped thing at some point.
But out is never made 1-D, so we need to be careful?
Yeah, this is a good example of the need for leavewrapped.
if out is not None:
    r = add(x1, x2, out=out)
    out[...] = r_above
    r = out
Assignment to out should not be necessary here, the above operation is already in-place (i.e. similar to the old branch, which did not do this).
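A minimal sketch of that point with toy arrays (not the percentile code): when out= is passed, the ufunc writes its result into that array and returns it, so copying the result back into out afterwards is redundant.

import numpy as np

x = np.array([1.0, 2.0, 3.0])
out = np.empty_like(x)
r = np.add(x, 1.0, out=out)  # writes into `out` and returns it
print(r is out)              # True: no extra `out[...] = r` copy is needed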
@seberg is the double-assignment cleanup a blocker?
Not on its own, but I think the above also needs some cleanups, maybe I should just put it on my todo list to do it myself.
Tests are failing.
It would be nice to get this working. @glemaitre, will you be able to resume working on this? There are a few issues still to be resolved.
@glemaitre do you have time/want to pick this up? Otherwise I am happy to drag it over the finish line.
@seberg I am sorry, I don't think I will get any time to finish this up in the next 3 weeks. If you think you can finalize it in the meantime, do not hesitate.
Perhaps we ought to extract a lerp helper for this?
Yeah, we could create a hidden ufunc, should be simple enough; maybe I will look at it at the beginning of the week.
For now I've got a local patch that extracts it to a pure Python function, which builds upon #16274.
Alright, my cleanups are in - there's now a private lerp helper.
Fixed in gh-16273. Thanks @glemaitre, the credit for figuring out how best to fix this goes fully to you!
Closes #14685.
Change the way percentiles are computed (and linearly interpolated) to ensure the monotonicity property.
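A sketch of the idea (function names here are only illustrative, not necessarily the exact merged code): the weighted-sum form x1*(1 - t) + x2*t can come out non-monotone in t under floating-point rounding, while computing x1 + (x2 - x1)*t, and switching to x2 - (x2 - x1)*(1 - t) once t >= 0.5, anchors the result to the nearer data point and is exact at both endpoints.

import numpy as np

def weighted_sum_lerp(x1, x2, t):
    # Old-style form: rounding of the two separate terms can break monotonicity.
    return x1 * (1 - t) + x2 * t

def diff_lerp(x1, x2, t):
    # Single-difference form, interpolating from whichever data point is nearer.
    d = x2 - x1
    return np.where(t < 0.5, x1 + d * t, x2 - d * (1 - t))

x1, x2 = 0.1, 0.3
t = np.linspace(0.0, 1.0, 5)
r = diff_lerp(x1, x2, t)
print(r)                        # stays within [x1, x2]; endpoints are exact
print(np.all(np.diff(r) >= 0))  # True for this example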