
Conversation

@colesbury (Member)

Previously, we didn't cast any 0-dim tensors used in CUDA operations. We
can only avoid the casts for 0-dim CPU tensors used in CUDA operations.

Fixes #11795

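To make the description concrete, here is a rough sketch of the two cases it distinguishes (dtypes and values are illustrative, not taken from the PR's tests; running it requires a CUDA device):

    import torch

    a = torch.ones(4, device='cuda')

    # Case 1: a 0-dim CPU tensor in a CUDA op. The scalar value can be
    # read directly from host memory, so the cast can be avoided.
    s_cpu = torch.tensor(2.0)
    print(a * s_cpu)

    # Case 2: a 0-dim CUDA tensor with a mismatched dtype. It lives in
    # device memory, so it does need a cast before the kernel runs;
    # skipping that cast was the bug fixed here.
    s_gpu = torch.tensor(2.0, device='cuda', dtype=torch.float16)
    print(a * s_gpu)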
@facebook-github-bot (Contributor) left a comment

colesbury has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Review comment on this hunk:

    op.needs_cast = true;
    }
    op.needs_cast = needs_cast(*op.tensor, type);   // decide per-operand whether a cast is required
    if (op.needs_cast && op.tensor->dim() == 0) {   // 0-dim operands that need casting are handled specially

@colesbury (Member, Author)

@pytorchbot retest this please

@facebook-github-bot (Contributor) left a comment

colesbury has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Review comment on this hunk:

    if (!tensor.defined() || dst_type == tensor.type()) {
      return false;                                  // undefined, or already the destination type: no cast
    }
    if (dst_type.backend() == Backend::CUDA &&       // (condition continues in the full diff)
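In Python terms, the predicate under review plausibly behaves like the following paraphrase (hypothetical function and signature, not the actual ATen API):

    import torch

    def needs_cast(tensor, dst_dtype, dst_is_cuda):
        # Hypothetical paraphrase of the C++ check above.
        if tensor is None or (tensor.dtype == dst_dtype and tensor.is_cuda == dst_is_cuda):
            return False  # undefined, or already the destination type: no cast
        if dst_is_cuda and tensor.dim() == 0 and not tensor.is_cuda:
            return False  # 0-dim CPU tensor in a CUDA op: the cast can be avoided
        return True       # everything else, including 0-dim CUDA tensors, must be cast

    # A 0-dim CPU float32 tensor used in a CUDA float32 op needs no cast:
    print(needs_cast(torch.tensor(1.5), torch.float32, dst_is_cuda=True))  # False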

Review comment on this test excerpt:

    x = torch.tensor(1.5, device='cuda', dtype=torch.float16)
    self.assertEqual(x * y, 4.5)  # y is defined earlier in the test, outside this excerpt
    # half * int currently promotes to double
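A self-contained version of that assertion, assuming y is the integer 3 (implied by 1.5 * y == 4.5, though the excerpt omits y's definition; requires a CUDA device):

    import torch

    if torch.cuda.is_available():
        x = torch.tensor(1.5, device='cuda', dtype=torch.float16)
        y = 3  # assumed value; the original test defines y outside this excerpt
        assert (x * y).item() == 4.5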

@facebook-github-bot (Contributor) left a comment

colesbury has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@colesbury (Member, Author)

@pytorchbot retest this please

@colesbury deleted the issue_11795 branch September 21, 2018 21:22
zdevito pushed a commit to zdevito/ATen that referenced this pull request Sep 21, 2018
Summary:
Previously, we didn't cast any 0-dim tensors used in CUDA operations. We
can only avoid the casts for 0-dim CPU tensors used in CUDA operations.

Fixes #11795
Pull Request resolved: pytorch/pytorch#11808

Differential Revision: D9922406

Pulled By: colesbury

fbshipit-source-id: 940b8a8534770aa5cd70d5d09b96be0f0f8146ff
facebook-github-bot pushed a commit that referenced this pull request Sep 24, 2018
Summary:
Changes the result type of operations between half and any integer type to half
(instead of float or double).

This is based on top of #11808. The first new commit is "Make promoteType(half, integer) -> half". I'll rebase on top of master once that PR lands.
Pull Request resolved: #11941

Differential Revision: D10014122

Pulled By: colesbury

fbshipit-source-id: 16a5eb3406a5712069201d872d8736d0599e9411
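A quick illustration of the promotion change this commit describes (post-fix behavior; per the review comment above, half * int previously promoted to double; requires a CUDA device):

    import torch

    if torch.cuda.is_available():
        x = torch.tensor(1.5, device='cuda', dtype=torch.float16)
        y = torch.tensor(3, device='cuda')  # 0-dim integer tensor
        print((x * y).dtype)                # torch.float16, not torch.float64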
@ezyang added the merged label Jun 26, 2019

Linked issue: [TensorIterator] bug when performing inter-scalar ops on the GPU (#11795)