Conversation

gchanan (Contributor) commented Jun 12, 2017

Includes test_torch, test_autograd and docs changes.
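
For context, here is a minimal usage sketch of the dimension-dependent dispatch this PR implements. The branch-to-operation mapping is read off the diff hunks quoted below; the batched (>= 3-D) case is shown as documented torch.matmul behavior rather than something visible in those hunks:

    import torch

    v1 = torch.randn(3)
    v2 = torch.randn(3)
    torch.matmul(v1, v2)              # 1-D x 1-D: dot product (a scalar)

    m = torch.randn(2, 3)
    torch.matmul(m, v1).shape         # 2-D x 1-D: matrix-vector -> (2,)
    torch.matmul(v1, m.t()).shape     # 1-D x 2-D: vector-matrix -> (2,)

    a = torch.randn(2, 3)
    b = torch.randn(3, 4)
    torch.matmul(a, b).shape          # 2-D x 2-D: torch.mm -> (2, 4)

    b1 = torch.randn(10, 2, 3)
    b2 = torch.randn(10, 3, 4)
    torch.matmul(b1, b2).shape        # batched case -> (10, 2, 4)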

Several inline review comments were attached to the diff; all have since been marked as off-topic. The hunks they pointed at, with overlapping context windows consolidated:

From the test changes, checking how a scalar result is packed:

    # if non-Variable torch function returns a scalar, compare to scalar
    if not torch.is_tensor(unpacked_result):
        assert(packed_result.dim() == 1)
        assert(packed_result.nelement() == 1)

From the matmul implementation, the non-tensor argument check:

    try:
        dim_tensor2 = tensor2.dim()
    except AttributeError:  # not a tensor
        return NotImplemented

and the dispatch on dimensionality (the quoted context opens inside the 1-D x 2-D branch):

        if out is None:
            return torch.mm(tensor1.unsqueeze(0), tensor2).squeeze(0)
        else:
            return torch.mm(tensor1.unsqueeze(0), tensor2, out=out).squeeze_(0)
    elif dim_tensor1 == 2 and dim_tensor2 == 2:
        if out is None:
            return torch.mm(tensor1, tensor2)
        else:
            return torch.mm(tensor1, tensor2, out=out)
    elif dim_tensor1 >= 2 and dim_tensor2 >= 2:
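
The 1-D x 2-D branch reuses mm by viewing the vector as a 1 x n matrix, then drops the added leading dimension again (in-place squeeze_ on the out path, so the caller's buffer is returned). A quick sanity check of that equivalence, written against current PyTorch rather than taken from the PR:

    import torch

    v = torch.randn(3)       # shape (3,)
    m = torch.randn(3, 4)    # shape (3, 4)

    # (1, 3) @ (3, 4) -> (1, 4), then squeeze the added dim -> (4,)
    via_mm = torch.mm(v.unsqueeze(0), m).squeeze(0)
    assert torch.allclose(via_mm, torch.matmul(v, m))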

A further review comment (also marked as off-topic) sat on part of a longer implementation comment about the out argument; the enumerated options 1)-3) it refers to fall outside the quoted context:

    # But 1) is inconsistent with other functions (e.g. torch.bmm) that will maintain
    # output non-contiguity if the size is correct (perhaps we should change this globally?)
    # And 3) is a surprising output to accept if we aren't accepting 1).
    # So let's just force accepting contiguous tensors.
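
That comment lands on rejecting non-contiguous out tensors outright. A minimal sketch of what such a guard could look like; the helper name and error message are illustrative assumptions, not the PR's actual code:

    import torch

    def _check_out_contiguous(out):
        # Hypothetical guard: refuse a non-contiguous out tensor up front,
        # rather than silently accepting it or returning a surprising layout.
        if out is not None and not out.is_contiguous():
            raise ValueError("matmul: out tensor must be contiguous")

    out = torch.empty(4, 2).t()   # transposed view: right sizes, not contiguous
    # _check_out_contiguous(out) would raise ValueError here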

gchanan (Contributor, Author) commented Jun 13, 2017

Latest push should have addressed the review comments.

soumith merged commit 4e35652 into pytorch:master on Jun 14, 2017