Status: Open

Labels:
- module: NaNs and Infs — Problems related to NaN and Inf handling in floating point
- module: edge cases — Adversarial inputs unlikely to occur in practice
- triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Description

🐛 Describe the bug
Inconsistent torch.absolute results on complex128 between CPU and CUDA
Reproduction code:
```python
import torch

max_float64 = 1.7976931348623157e+308
real = torch.ones((3, 3), dtype=torch.float64)
imag = torch.full((3, 3), max_float64, dtype=torch.float64)
input = torch.complex(real, imag)

input_gpu = input.cuda()
try:
    out_gpu = torch.absolute(input=input_gpu)
    print(out_gpu)  # all inf
except Exception as e:
    print(e)

input_cpu = input.cpu()
try:
    out_cpu = torch.absolute(input=input_cpu)
    print(out_cpu)  # all 1.7977e+308
except Exception as e:
    print(e)
```
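A plausible explanation (an assumption on my part, not verified against the actual PyTorch kernels) is that one backend computes the complex magnitude naively as `sqrt(re*re + im*im)`, where the intermediate `im*im` overflows to `inf`, while the other uses an overflow-safe hypot-style formulation that factors out the larger component first. The difference can be sketched in pure Python:

```python
import math

max_float64 = 1.7976931348623157e+308  # largest finite float64

re, im = 1.0, max_float64

# Naive magnitude: im * im overflows to inf, so the result is inf
# even though the true magnitude (~1.7977e+308) is representable.
naive = math.sqrt(re * re + im * im)
print(naive)  # inf

# Overflow-safe formulation: factor out the larger component so the
# intermediate ratio is <= 1 and nothing overflows:
#   |re + im*j| = |im| * sqrt((re/im)**2 + 1)   when |im| >= |re|
scaled = abs(im) * math.sqrt((re / im) ** 2 + 1.0)
print(scaled)  # 1.7976931348623157e+308

# math.hypot uses an overflow-safe algorithm and agrees:
print(math.hypot(re, im))  # 1.7976931348623157e+308
```

The `inf` result matches what the CUDA path returns above, and the scaled result matches the CPU path, which is consistent with the two backends using different magnitude formulas.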
Versions

PyTorch version: 2.7.0+cu126