Inconsistent torch.absolute results on complex128 between CPU and CUDA #158412

@atester353

### 🐛 Describe the bug

Inconsistent torch.absolute results on complex128 between CPU and CUDA
Reproduction code:

```python
import torch

max_float64 = 1.7976931348623157e+308
real = torch.ones((3, 3), dtype=torch.float64)
imag = torch.full((3, 3), max_float64, dtype=torch.float64)
input = torch.complex(real, imag)

input_gpu = input.cuda()
try:
    out_gpu = torch.absolute(input=input_gpu)
    print(out_gpu)  # all inf
except Exception as e:
    print(e)

input_cpu = input.cpu()
try:
    out_cpu = torch.absolute(input=input_cpu)
    print(out_cpu)  # all 1.7977e+308
except Exception as e:
    print(e)
```
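The divergence is consistent with how the complex magnitude |a + bi| = sqrt(a² + b²) can be computed: squaring the components naively overflows float64 when either component is near the maximum finite value, while a scaled (hypot-style) computation factors out the larger component before squaring and stays finite. The snippet below is a plain-Python sketch of the two approaches, using `math.hypot` to stand in for the overflow-safe path; it does not show the actual CPU/CUDA kernel implementations.

```python
import math

max_float64 = 1.7976931348623157e+308
re, im = 1.0, max_float64

# Naive magnitude: im * im overflows float64, so the result is inf.
naive = math.sqrt(re * re + im * im)
print(naive)  # inf

# Scaled magnitude (what math.hypot does): factor out the larger
# component before squaring, so no intermediate value overflows.
scaled = math.hypot(re, im)
print(scaled)  # 1.7976931348623157e+308
```

The CPU output (1.7977e+308) matches the scaled result and the CUDA output (inf) matches the naive one, which suggests the CUDA path squares the components without rescaling.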

### Versions

PyTorch version: 2.7.0+cu126


Labels

    module: NaNs and Infs (Problems related to NaN and Inf handling in floating point)
    module: edge cases (Adversarial inputs unlikely to occur in practice)
    triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
