
Conversation

@hameerabbasi
Collaborator

Fixes #15089

Member

@fmassa left a comment


This looks good to me, thanks a lot!

I have a few minor comments, mostly cosmetic changes.

Also, could you add some tests in pytorch/test/test_torch.py, lines 7055 to 7079 (at f032962)?

def test_abs(self):
    def _test_abs(tensors_dict):
        for category, tensors in tensors_dict.items():
            for data in tensors:
                switch = torch.rand(data.size()).mul(2).floor().mul(2).add(-1).type(data.dtype)
                res = torch.mul(data, switch)
                self.assertTensorsSlowEqual(res.abs(), data, 1e-16)

    max_val = 1000
    _test_abs(self._make_tensors((3, 4), val_range=(0, max_val)))
    _test_abs(self._make_tensors((3, 5, 7), val_range=(0, max_val)))
    _test_abs(self._make_tensors((2, 2, 5, 8, 2, 3), val_range=(0, max_val)))
    _test_abs(self._make_tensors((1000, ), val_range=(0, max_val)))
    _test_abs(self._make_tensors((10, 10, 10), val_range=(0, max_val)))

    # Checking that the right abs function is called for LongTensor
    bignumber = 2 ^ 31 + 1
    res = torch.LongTensor((-bignumber,))
    self.assertGreater(res.abs()[0], 0)

    # One of
    rec = torch.randn(2, 2, 3, 7, 6, 2).type(torch.float64).clamp(0, 1)
    val1 = rec.select(-1, -1).data[0][0][0].sum()
    val2 = rec.select(-1, -1).data.abs()[0][0][0].sum()
    self.assertEqual(val1, val2, 1e-8, 'absolute value')
The cases for uint8 and int8 are not covered by _test_abs, because _make_tensors doesn't take those dtypes into account.
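As background on why these dtypes need dedicated cases: abs on uint8 is the identity (all values are non-negative), while abs on int8 has a two's-complement edge case at -128, whose absolute value is not representable in int8. A minimal pure-Python sketch of those semantics (no torch dependency; `int8_abs` is an illustrative helper, not a PyTorch API):

```python
def int8_abs(x: int) -> int:
    """Absolute value with int8 two's-complement wraparound,
    mimicking what abs on C-style int8_t storage would produce."""
    r = abs(x)
    # Wrap the result into the int8 range [-128, 127], as 8-bit
    # signed storage would.
    return ((r + 128) % 256) - 128

# uint8-style values are already non-negative: abs is the identity.
assert all(int8_abs(v) == v for v in range(0, 128))
# Ordinary negative case: the sign simply flips.
assert int8_abs(-5) == 5
# Edge case: abs(-128) cannot be represented in int8 and wraps to -128.
assert int8_abs(-128) == -128
```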

pytorch/test/test_torch.py, lines 112 to 117 (at f032962):

def _make_tensors(self, shape, val_range=(-100, 100), use_floating=True, use_integral=True):
    float_types = [torch.double,
                   torch.float]
    int_types = [torch.int64,
                 torch.int32,
                 torch.int16]

I think you can just add a few special cases there, instead of modifying _make_tensors.
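Such special cases might look roughly like the following, sketched with plain Python lists instead of tensors so it runs standalone (`abs_elementwise` and the test shape are illustrative assumptions, not the PR's actual code):

```python
def abs_elementwise(values, signed=True):
    """Toy stand-in for Tensor.abs() on 8-bit integer data.

    For unsigned data, abs is the identity; for signed data, it is a
    plain element-wise absolute value.
    """
    return list(values) if not signed else [abs(v) for v in values]

# int8-style case: negative entries flip sign, others are unchanged.
assert abs_elementwise([-3, -1, 0, 1, 3]) == [3, 1, 0, 1, 3]
# uint8-style case: values are already non-negative, so abs is a no-op.
assert abs_elementwise([0, 1, 128, 255], signed=False) == [0, 1, 128, 255]
```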

@hameerabbasi
Collaborator Author

That would've been my next question, where to add tests! Glad to see you already answered that.

Contributor

@facebook-github-bot left a comment


@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Feb 11, 2019
Summary:
Fixes #15089
Pull Request resolved: pytorch/pytorch#16893

Differential Revision: D14020115

Pulled By: ezyang

fbshipit-source-id: 6f3be6ed28d2d37667159be45959d400bc473451


Development

Successfully merging this pull request may close these issues.

Support torch.abs for ByteTensor, CharTensor

4 participants