conv2d/conv3d for LongTensor #20730
Conversation
Differential Revision: D15423752 Differential Version: 82449506
Differential Revision: D15423753 Differential Version: 82449507
test/test_nn.py (outdated):

    def extra_args(self):
        return self._get_arg('extra_args', False)


class TestLongTensor(TestCase):
Was the comment about depthwise, etc. handled?
Depthwise is supported only for CUDA. I added transpose2d and dilated2d.
Added conv3d, transpose3d, dilated3d
Instead of doing one-off tests like this, how about adding this as an option to the normal NN Module / Criterion tests? There is already some use of this, see check_eval.
@ifedan: Updated
What is the benefit of moving them to CriterionTest?
It will check many more combinations of parameters.
The problem with this approach is that the tensor is initially generated with random float values, so it is impossible to compare it with the long tensor.
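To illustrate the point, random floats almost never survive a round trip through long, while integer-valued doubles do, which is why the tests generate integer values and compare against a double reference. A minimal sketch (illustrative only, not code from this PR):

import torch

x = torch.rand(3, dtype=torch.double)
# .long() truncates the fractional part, so the round trip changes the values.
print(torch.equal(x, x.long().double()))   # False (almost surely)

# Integer-valued doubles round-trip exactly, so a double reference run can be
# compared against a long run of the same module.
x = torch.randint(-1000, 1000, (3,), dtype=torch.double)
print(torch.equal(x, x.long().double()))   # True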
Fixed tests
Differential Revision: D15423752 Differential Version: 83368139
Differential Revision: D15423753 Differential Version: 84238345
Differential Revision: D15423753 Differential Version: 84400111
Differential Revision: D15423753 Differential Version: 84408451
gchanan left a comment:
Are you planning to add CUDA support to any of these?
Differential Revision: D15423753 Differential Version: 85526199
Differential Revision: D15423753 Differential Version: 85526847
@pytorchbot retest this please
Differential Revision: D15423752 Differential Version: 85543831
Differential Revision: D15423753 Differential Version: 85544793
test/test_nn.py (outdated):

    test_params['desc'] = 'with_long_tensor' if desc is None else desc + '_with_long_tensor'


def gen_long_tensor(size):
    return torch.LongTensor(size=size).random_(-1000, 1000).double().cpu()
Can you use torch.randint instead?
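For reference, a sketch of the torch.randint variant being suggested, keeping the same value range and resulting dtype as the snippet above (the helper name is the one under review):

import torch

def gen_long_tensor(size):
    # Integer values in [-1000, 1000), stored as doubles, matching the
    # semantics of LongTensor(...).random_(-1000, 1000).double().
    return torch.randint(-1000, 1000, size, dtype=torch.double)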
Differential Revision: D15423753 Differential Version: 85563780
gchanan left a comment:
Just some small renames and changes.
test/common_nn.py (outdated):

    constructor_args=(10, 8),
    input_size=(4, 10),
-   reference_fn=lambda i, p: torch.mm(i, p[0].t()) + p[1].view(1, -1).expand(4, 8),
+   reference_fn=lambda i, p, m: torch.mm(i, p[0].t()) + p[1].view(1, -1).expand(4, 8),
The standard in Python is to use "_" if you aren't going to use the parameter (throughout this change).
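Applied to the lambda in the diff above, that convention would look like this (illustrative; the third argument is the module, which the reference does not use):

import torch

reference_fn = lambda i, p, _: torch.mm(i, p[0].t()) + p[1].view(1, -1).expand(4, 8)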
Done
test/common_nn.py (outdated):

        self.FIXME_no_cuda_gradgrad_comparison = \
            kwargs.get('FIXME_no_cuda_gradgrad_comparison', False)
        self.precision = kwargs.get('precision', 2e-4)
        self.check_reference_only = kwargs.get('check_reference_only', False)
Just reuse the names in CriterionTest, so this would be check_forward_only.
Done
test/test_nn.py (outdated):

    desc = test_params.get('desc', None)
    test_params['desc'] = 'with_long_tensor' if desc is None else desc + '_with_long_tensor'


def gen_long_tensor(size):
This name is confusing: it implies the function returns a long tensor, but it actually returns a double tensor.
Also, why do you need to specify CPU?
Discussed offline, going to rename to double_equivalent_of_long_tensor.
Done
    test_params['input_fn'] = gen_long_tensor_input(test_params['input_size'])
    test_params['reference_fn'] = reference_fn
    test_params['check_reference_only'] = True
    test_params['test_cuda'] = False
Please add a comment here.
Done
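A possible commented version of the block above; the comment wording is inferred from the surrounding discussion (forward-only support, no CUDA kernels for LongTensor), not the exact text added in the PR, and check_reference_only was later renamed to check_forward_only per the earlier review comment:

    test_params['input_fn'] = gen_long_tensor_input(test_params['input_size'])
    test_params['reference_fn'] = reference_fn
    # Long tensors only get a forward implementation, so only the reference
    # (forward) comparison is run instead of the full gradient checks.
    test_params['check_reference_only'] = True
    # There are no CUDA kernels for LongTensor convolutions, so skip CUDA.
    test_params['test_cuda'] = False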
test/test_nn.py (outdated):

def gen_long_tensor_constructor(constructor):
    def long_tensor_constructor(*args, **kwargs):
        cons = constructor(*args, **kwargs)
        cons._apply(lambda t: Parameter(gen_long_tensor(t.size())) if t.is_floating_point() else t)
I think it's fine to do something like if t.is_floating_point() and isinstance(t, Parameter).
Done
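Roughly, what the wrapper is doing, sketched against the public parameter-iteration API rather than Module._apply (helper and function names follow the diff and the rename discussed above; this is an illustration, not the merged code):

import torch

def double_equivalent_of_long_tensor(size):
    # Integer values stored as doubles; see the torch.randint discussion above.
    return torch.randint(-1000, 1000, size, dtype=torch.double)

def gen_long_tensor_constructor(constructor):
    def long_tensor_constructor(*args, **kwargs):
        module = constructor(*args, **kwargs)
        # Replace only floating-point parameters; buffers and integer tensors
        # are left alone, which is the filtering the review comment asks for.
        for param in module.parameters():
            if param.is_floating_point():
                param.data = double_equivalent_of_long_tensor(param.size())
        return module
    return long_tensor_constructor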
Differential Revision: D15423753 Differential Version: 85666615
Summary:
Pull Request resolved: pytorch/pytorch#20730

Generates forward conv2d function for LongTensor

Differential Revision: D15423753

fbshipit-source-id: 0e770b61257cc4c6559581796bf104ef68155c84
This pull request has been merged in 04fe245.
Stack:
#20730 conv2d/conv3d for LongTensor
Generates forward conv2d function for LongTensor
Differential Revision: D15423753
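What the change enables, roughly: a forward convolution over integer tensors on CPU. A minimal usage sketch, assuming the work is exposed through the standard torch.nn.functional.conv2d entry point (shapes and values are illustrative; gradients and CUDA are not covered by this PR per the discussion above):

import torch
import torch.nn.functional as F

inp = torch.randint(-10, 10, (1, 3, 8, 8), dtype=torch.long)
weight = torch.randint(-10, 10, (4, 3, 3, 3), dtype=torch.long)

out = F.conv2d(inp, weight)        # forward only; runs on CPU
print(out.dtype, out.shape)        # torch.int64 torch.Size([1, 4, 6, 6])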