Conversation

skrah (Contributor) commented Jul 22, 2019

Fixes #20465.

pytorchbot added the module: cuda (Related to torch.cuda, and CUDA support in general), module: nn (Related to torch.nn), and module: operators labels Jul 22, 2019
skrah (Contributor, Author) commented Jul 22, 2019

@pytorchbot retest this please.

michaelklachko commented:

If all inputs are -inf, what would be the correct max_pool index to return?

skrah (Contributor, Author) commented Jul 22, 2019

I extended the existing algorithm, where the first index wins in case of a tie, to infinities:

>>> import torch
>>> from torch.nn import MaxPool2d
>>> m = MaxPool2d(kernel_size=2, return_indices=True)
>>> v = 1.0
>>> t = torch.tensor([[[v, v], [v, v]]])
>>> m(t)
(tensor([[[1.]]]), tensor([[[0]]]))
>>> 
>>> v = float("-inf")
>>> t = torch.tensor([[[v, v], [v, v]]])
>>> m(t)
(tensor([[[-inf]]]), tensor([[[0]]]))
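
For illustration, here is a minimal sketch (not the actual PyTorch kernel, and naive_window_max is a hypothetical helper) of the "first index wins" semantics over a single flattened pooling window, assuming the window is non-empty. Seeding the running maximum from the first element rather than from a finite sentinel keeps the returned index valid even when every input is -inf:

def naive_window_max(window):
    # Seed with the first element, so an all -inf window still yields index 0.
    best_val, best_idx = window[0], 0
    for i, v in enumerate(window[1:], start=1):
        # Strict '>' means the first occurrence of the maximum wins ties.
        if v > best_val:
            best_val, best_idx = v, i
    return best_val, best_idx

>>> naive_window_max([float("-inf")] * 4)
(-inf, 0)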

skrah (Contributor, Author) commented Jul 22, 2019

Compare with the current behavior:

>>> import torch
>>> from torch.nn import MaxPool2d
>>> m = MaxPool2d(kernel_size=2, return_indices=True)
>>> v = float("-inf")
>>> t = torch.tensor([[[v, v], [v, v]]])
>>> m(t)
(tensor([[[-3.4028e+38]]]), tensor([[[-1]]]))
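
As a hypothetical illustration of where those values could come from (a guess, not the actual CUDA kernel): if the running maximum is seeded with -FLT_MAX and the index with -1, and the update uses a strict comparison, then a window containing only -inf never triggers an update, so the sentinel value and the invalid index leak into the output:

FLT_MAX = 3.4028234663852886e+38  # largest finite float32

def old_style_window_max(window):
    # Sentinel initialization: no element of an all -inf window beats -FLT_MAX,
    # so the function returns (-FLT_MAX, -1) unchanged.
    best_val, best_idx = -FLT_MAX, -1
    for i, v in enumerate(window):
        if v > best_val:
            best_val, best_idx = v, i
    return best_val, best_idx

>>> old_style_window_max([float("-inf")] * 4)
(-3.4028234663852886e+38, -1)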

skrah (Contributor, Author) commented Jul 22, 2019

I was more concerned about whether the existing clamping to -3.4028e+38 was deliberate, but even if it was, I cannot explain the index of -1.

skrah (Contributor, Author) commented Jul 22, 2019

@pytorchbot retest this please.

skrah requested a review from ezyang July 22, 2019 17:19
skrah changed the title from "[WIP] max_pool_with_indices: return valid indices if all input elements are -inf" to "max_pool_with_indices: return valid indices if all input elements are -inf" Jul 22, 2019
skrah (Contributor, Author) commented Jul 22, 2019

@ezyang I think this is ready for review. The issue is probably best summarized by the two code examples in this PR.

The open questions are whether returning -1 in the indices was ever valid and whether clamping -inf to -FLOAT_MAX was deliberate.

[The test failures look like known CI issues and GitHub outage fallout.]

skrah (Contributor, Author) commented Jul 23, 2019

@pytorchbot retest this please.

ezyang (Contributor) commented Jul 23, 2019

@pytorchbot rebase this please

ezyang (Contributor) left a comment

Thank you, this is a very nice fix!

ezyang (Contributor) commented Jul 23, 2019

> I was more concerned about whether the existing clamping to -3.4028e+38 was deliberate, but even if it was, I cannot explain the index of -1.

Blame would probably be the only way to find out, but my guess is that the person who wrote this kernel just tried to find "some minimum value" and picked the wrong one.

ezyang (Contributor) commented Jul 23, 2019

Waiting on CI.

ezyang added this to the 1.2 milestone Jul 23, 2019
facebook-github-bot (Contributor) left a comment

@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot (Contributor) commented:

@ezyang merged this pull request in 87d3f66.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 24, 2019
… -inf (#23161)

Summary:
Fixes pytorch/pytorch#20465.
Pull Request resolved: pytorch/pytorch#23161

Differential Revision: D16442672

Pulled By: ezyang

fbshipit-source-id: 8c2ee13acd73954c7307720c01c732f460266a63

Labels

Merged · module: cuda (Related to torch.cuda, and CUDA support in general) · module: nn (Related to torch.nn) · open source

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Second order gradient cuda error

6 participants