Voxelwise dvars explodes with voxels with variance ~1e-19 #3487

Closed
effigies opened this issue Jul 7, 2022 · 0 comments · Fixed by #3489

effigies commented Jul 7, 2022

Summary

In compute_dvars we remove zero-variance voxels:

# Robust standard deviation (we are using "lower" interpolation
# because this is what FSL is doing)
func_sd = (
    np.percentile(mfunc, 75, axis=1, interpolation="lower")
    - np.percentile(mfunc, 25, axis=1, interpolation="lower")
) / 1.349

if remove_zerovariance:
    mfunc = mfunc[func_sd != 0, :]
    func_sd = func_sd[func_sd != 0]

However, if func_sd is small but nonzero, that tiny value propagates into diff_sdhat:

# Compute (predicted) standard deviation of temporal difference time series
diff_sdhat = np.squeeze(np.sqrt(((1 - ar1) * 2).tolist())) * func_sd
diff_sd_mean = diff_sdhat.mean()
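
For example (hypothetical numbers), a near-constant voxel with robust SD around 1e-19 and a moderate AR(1) coefficient yields a diff_sdhat of the same tiny order, not zero:

import numpy as np

# Hypothetical values: a near-constant voxel with robust SD ~1e-19
ar1 = np.array([0.3])
func_sd = np.array([1e-19])

diff_sdhat = np.squeeze(np.sqrt(((1 - ar1) * 2).tolist())) * func_sd
print(diff_sdhat)  # ~1.18e-19, far below any sensible noise floor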

This small diff_sdhat can then explode the values in

diff_vx_stdz = np.square(
    func_diff / np.array([diff_sdhat] * func_diff.shape[-1]).T
)

If diff_sdhat is sufficiently small, the squared values overflow float32.
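
As a rough sketch (hypothetical values), squaring the standardized difference for such a voxel exceeds the float32 range (max ~3.4e38):

import numpy as np

# Hypothetical values: a diff_sdhat of ~1e-19 survives the `!= 0` check
func_diff = np.float32(5.0)     # a plausible temporal difference
diff_sdhat = np.float32(1e-19)  # near-zero predicted standard deviation

diff_vx_stdz = np.square(func_diff / diff_sdhat)
print(diff_vx_stdz)  # inf (with an overflow RuntimeWarning): (5 / 1e-19) ** 2 ≈ 2.5e39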

Proposal

We should set a default threshold for "zerovariance". I think 1e-10 is more than small enough. In the example dataset I have, only one additional voxel is caught by this less stringent threshold, and that is enough to avoid the failure.
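
A minimal sketch of the proposed thresholding (the helper name and signature below are illustrative, not nipype's actual API):

import numpy as np

def drop_low_variance_voxels(mfunc, func_sd, variance_tol=1e-10):
    # Compare against a small tolerance instead of exactly zero, so
    # near-constant voxels (e.g. SD ~1e-19) are also removed.
    keep = func_sd > variance_tol
    return mfunc[keep, :], func_sd[keep]

# Example: the second voxel has SD ~1e-19 and would survive a `!= 0` check.
mfunc = np.ones((2, 10))
func_sd = np.array([1.0, 1e-19])
mfunc, func_sd = drop_low_variance_voxels(mfunc, func_sd)
print(func_sd)  # [1.]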

Will propose a fix soon, but wanted to give people a chance to comment.
