Add abs for ByteTensor and CharTensor. #16893
Conversation
fmassa left a comment
This looks good to me, thanks a lot!
I have a few minor comments, mostly cosmetic changes.
Also, could you add some tests in
Lines 7055 to 7079 in f032962

```python
def test_abs(self):
    def _test_abs(tensors_dict):
        for category, tensors in tensors_dict.items():
            for data in tensors:
                switch = torch.rand(data.size()).mul(2).floor().mul(2).add(-1).type(data.dtype)
                res = torch.mul(data, switch)
                self.assertTensorsSlowEqual(res.abs(), data, 1e-16)

    max_val = 1000
    _test_abs(self._make_tensors((3, 4), val_range=(0, max_val)))
    _test_abs(self._make_tensors((3, 5, 7), val_range=(0, max_val)))
    _test_abs(self._make_tensors((2, 2, 5, 8, 2, 3), val_range=(0, max_val)))
    _test_abs(self._make_tensors((1000,), val_range=(0, max_val)))
    _test_abs(self._make_tensors((10, 10, 10), val_range=(0, max_val)))

    # Checking that the right abs function is called for LongTensor
    bignumber = 2 ^ 31 + 1
    res = torch.LongTensor((-bignumber,))
    self.assertGreater(res.abs()[0], 0)

    # One of
    rec = torch.randn(2, 2, 3, 7, 6, 2).type(torch.float64).clamp(0, 1)
    val1 = rec.select(-1, -1).data[0][0][0].sum()
    val2 = rec.select(-1, -1).data.abs()[0][0][0].sum()
    self.assertEqual(val1, val2, 1e-8, 'absolute value')
```
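An aside on the quoted test: in Python, `^` is bitwise XOR, not exponentiation, and `+` binds more tightly than `^`, so `2 ^ 31 + 1` parses as `2 ^ (31 + 1)` and evaluates to 34 rather than the `2**31 + 1` the LongTensor check presumably intends. A quick demonstration:

```python
# `^` is bitwise XOR, and `+` has higher precedence than `^`,
# so the expression groups as 2 ^ (31 + 1) = 2 XOR 32.
print(2 ^ 31 + 1)   # 34
print(2 ** 31 + 1)  # 2147483649, a value that actually exceeds int32 range
```

With the XOR version, `bignumber` fits comfortably in 32 bits, so the assertion passes without exercising the 64-bit code path it was written to check.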
The cases for uint8 and int8 are not tested by _test_abs, because _make_tensors doesn't take those types into account:

Lines 112 to 117 in f032962
```python
def _make_tensors(self, shape, val_range=(-100, 100), use_floating=True, use_integral=True):
    float_types = [torch.double,
                   torch.float]
    int_types = [torch.int64,
                 torch.int32,
                 torch.int16]
```
I think you can just add a few special cases there, instead of modifying _make_tensors.
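A minimal sketch of what such special cases could look like, assuming the PR's new integer overloads behave like the floating-point ones (the helper name below is illustrative, not part of the test suite):

```python
import torch

# Hypothetical special-case checks for int8 (CharTensor) and
# uint8 (ByteTensor) abs, kept separate from _make_tensors.
def check_int8_uint8_abs():
    # int8: abs of a negative value flips the sign.
    # -128 is deliberately avoided, since abs(-128) overflows
    # in 8-bit two's complement.
    data = torch.tensor([-127, -1, 0, 1, 127], dtype=torch.int8)
    expected = torch.tensor([127, 1, 0, 1, 127], dtype=torch.int8)
    assert torch.equal(data.abs(), expected)

    # uint8: values are already non-negative, so abs is the identity.
    u = torch.tensor([0, 100, 255], dtype=torch.uint8)
    assert torch.equal(u.abs(), u)

check_int8_uint8_abs()
```

Keeping these as standalone cases avoids widening `int_types` in _make_tensors, which would change the type coverage of every other test that calls it.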
That would've been my next question, where to add tests! Glad to see you already answered that.
facebook-github-bot left a comment
@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Fixes #15089 Pull Request resolved: pytorch/pytorch#16893 Differential Revision: D14020115 Pulled By: ezyang fbshipit-source-id: 6f3be6ed28d2d37667159be45959d400bc473451