
Conversation

@ngimel (Collaborator) commented Sep 17, 2018

Currently, grad assignment for the half type fails with a misleading RuntimeError, because the sparse-grad check tries to look up a sparse half tensor type that is not enabled:

RuntimeError: torch.cuda.sparse.HalfTensor is not enabled.

// Half-precision sparse tensors are not enabled, so only run the
// sparse-type check for non-half grads; a half grad is never sparse.
bool gradIsSparse = false;
if (grad.type().scalarType() != at::kHalf) {
  auto& sparseType = var.type().toBackend(var.is_cuda() ? Backend::SparseCUDA : Backend::SparseCPU);
  gradIsSparse = grad.type() == sparseType;
}


@ezyang (Contributor) left a comment

If you want me to merge this, I'm OK with that, but I think my preferred solution would be a smidge more robust against an eventual future where we do have half-precision sparse tensors.

Maybe you can construct an at::getVariableTypeOpt call from scratch, and then just test whether the pointer is null or not, as a more robust patch.
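A rough sketch of that alternative is below. The name at::getVariableTypeOpt comes from the comment above, but its exact signature and arguments are assumptions here, so treat this as illustrative rather than as the patch that landed:

// Look up the sparse variable type for this backend/scalar type directly.
// If no such type is registered (e.g. sparse half today), the lookup is
// assumed to return a null pointer, and the grad is simply not sparse.
bool gradIsSparse = false;
auto* sparseType = at::getVariableTypeOpt(
    var.is_cuda() ? Backend::SparseCUDA : Backend::SparseCPU,
    grad.type().scalarType());
if (sparseType != nullptr) {
  gradIsSparse = grad.type() == *sparseType;
}

The appeal of this shape is that it keeps working unchanged if half-precision sparse tensors are enabled later: the lookup would then succeed and the equality check would run as before, with no special case on at::kHalf.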

@ngimel (Collaborator, Author) commented Sep 18, 2018

@ezyang can this go in?

@facebook-github-bot (Contributor) left a comment

soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

