"nan" behaves differently in CPU and GPU #1810

@ShigekiKarita

Description

Should I report this to the torch repository?

>>> torch.topk(torch.FloatTensor([-1,float("nan"),2, 1]), 1)
(
  2
 [torch.FloatTensor of size 1], 
  2
 [torch.LongTensor of size 1])

>>> torch.topk(torch.cuda.FloatTensor([-1,float("nan"),2]), 1)
(
 nan
 [torch.cuda.FloatTensor of size 1 (GPU 0)], 
  9.2054e+18
 [torch.cuda.LongTensor of size 1 (GPU 0)])

On CPU, topk seems to exclude "nan" (and its index) from the result.
On GPU, topk with "nan" looks like undefined behavior?
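As a stopgap on the user side (not a fix for the underlying behavior), one option is to mask NaNs out before calling topk so both backends see only finite values. A minimal sketch, where topk_ignore_nan is just an illustrative helper name:

import torch

def topk_ignore_nan(x, k):
    # Replace NaN entries with -inf so they always sort last,
    # giving the same finite values/indices on CPU and GPU.
    # x != x is True only for NaN elements.
    x = x.clone()
    x[x != x] = float("-inf")
    return torch.topk(x, k)

values, indices = topk_ignore_nan(torch.FloatTensor([-1, float("nan"), 2, 1]), 1)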

Other examples:

>>> torch.max(torch.cuda.FloatTensor([float("nan"), 1]))
1.0

>>> torch.max(torch.FloatTensor([float("nan"), 1]))
nan

>>> torch.min(torch.cuda.FloatTensor([float("nan"), 1]))
1.0

>>> torch.min(torch.FloatTensor([float("nan"), 1]))
nan
>>> torch.sort(torch.FloatTensor([float("nan"), 1]))
(
 nan
   1
 [torch.FloatTensor of size 2], 
  0
  1
 [torch.LongTensor of size 2])

>>> torch.sort(torch.cuda.FloatTensor([float("nan"), 1]))
(
   1
 nan
 [torch.cuda.FloatTensor of size 2 (GPU 0)], 
  1
  0
 [torch.cuda.LongTensor of size 2 (GPU 0)])
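
Until the backends agree, it may be worth guarding max/min/sort calls with an explicit NaN check. A minimal sketch using the self-inequality trick (has_nan is just an illustrative name):

import torch

def has_nan(x):
    # NaN is the only floating-point value not equal to itself,
    # so (x != x) marks exactly the NaN entries.
    return (x != x).sum() > 0

t = torch.FloatTensor([float("nan"), 1])
if has_nan(t):
    print("tensor contains NaN; max/min/sort results may differ between CPU and GPU")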

cc @VitalyFedyunin @ngimel @heitorschueroff

Labels

module: cpu, module: cuda, module: numerical-stability, module: sorting and selection, triaged
