
MAINT/BUG: improve gradient dtype handling #9411


Merged — 2 commits merged on Jul 21, 2017

Conversation

eric-wieser
Member

By hardcoding the floating point types, this fails for float16, and presumably for float128 on platforms that support it.

Also, there is no need to cast timedelta to int.
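For reference, a minimal sketch of the problem. The dtype-selection code below is an assumption about the shape of the pre-fix logic, not a verbatim excerpt from the PR: a hardcoded whitelist of dtype character codes misses float16 and float128, whereas a check based on np.issubdtype covers every inexact type.

```python
import numpy as np

f = np.arange(5, dtype=np.float16)

# Assumed pre-fix pattern: a hardcoded whitelist of dtype character
# codes. float16 ('e') and float128 ('g') are not in the list, so they
# silently fall through to float64 ('d').
otype = f.dtype.char
if otype not in ['f', 'd', 'F', 'D', 'm', 'M']:
    otype = 'd'

# The kind of check argued for here: keep any inexact (floating or
# complex) dtype as-is, and only promote genuinely integral inputs.
otype_kept = f.dtype if np.issubdtype(f.dtype, np.inexact) else np.dtype(np.double)

print(otype, otype_kept)  # -> d float16
```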

@charris
Member

charris commented Jul 21, 2017

Let's give it a shot, thanks Eric.

Has there been an issue regarding float16? It would be interesting to know whether that case matters for float16 applications.

@charris added the 09 - Backport-Candidate (PRs tagged should be backported) label on Jul 21, 2017
@eric-wieser
Member Author

eric-wieser commented Jul 22, 2017

Has there been an issue regarding float16?

You mean here on GitHub? I don't think so; it would be hard to spot. Previously the result of gradient(some_float16) would just be promoted to float64, which is higher precision than gradient(some_float32) gives!
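A small demonstration of the inconsistency being described (dtypes as reported in this thread; on a NumPy that includes this PR the float16 case keeps float16):

```python
import numpy as np

x16 = np.linspace(0, 1, 5, dtype=np.float16)
x32 = np.linspace(0, 1, 5, dtype=np.float32)

# Pre-fix behaviour described above: the float16 input is silently
# promoted, so the *lower*-precision input gets the *higher*-precision
# result dtype.
print(np.gradient(x16).dtype)  # float64 before this PR, float16 after
print(np.gradient(x32).dtype)  # float32 either way
```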

@charris
Member

charris commented Jul 22, 2017

The reason I asked is that float16 is not a type commonly used for computation. That has changed somewhat in recent years: it is now used for some types of machine learning where range rather than precision is what matters. That has led to some requests for computation to be done in float16 rather than in higher precision types.

Thinking it over, this change probably needs a compatibility note.

@charris removed the 09 - Backport-Candidate (PRs tagged should be backported) label on Jul 22, 2017
@eric-wieser
Member Author

eric-wieser commented Jul 22, 2017

Note that the actual type used for computation has not changed - the old code was equivalent to doing a .astype(np.float64) right at the end. I suppose that still changes the behaviour of code doing np.gradient(some_f16) + some_other_f16.
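A hedged illustration of that compatibility point: the values come out the same up to rounding, but the result dtype of downstream expressions changes.

```python
import numpy as np

f16 = np.arange(8, dtype=np.float16)

# Before this PR, np.gradient(f16) returned float64, so the sum below
# was carried out (and returned) in float64. With this change the
# gradient stays float16, and so does the sum.
result = np.gradient(f16) + f16
print(result.dtype)  # float64 pre-fix, float16 with this PR
```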
