Conversation

@ifedan (Contributor) commented May 20, 2019

Stack:
    ● #20730 conv2d/conv3d for LongTensor 💛

Generates forward conv2d function for LongTensor

Differential Revision: D15423753

ifedan added 2 commits May 20, 2019 15:39
Differential Revision: D15423752
Differential Version: 82449506
Differential Revision: D15423753
Differential Version: 82449507
@pytorchbot added labels May 20, 2019: module: cpu (CPU specific problem, e.g. perf, algorithm), module: internals (related to internal abstractions in c10 and ATen), module: nn (related to torch.nn), module: operators
@ifedan (Contributor Author) commented May 20, 2019

Comments from #20105 #20370
@gchanan:
It also seems like you don't support a number of cases, e.g. depthwise, transposed, and dilated convolutions. Maybe similar on the AvgPool / SumPool too, but I'm not sure.

test/test_nn.py Outdated
def extra_args(self):
return self._get_arg('extra_args', False)

class TestLongTensor(TestCase):
Contributor:

was the comment about depthwise, etc handled?

Contributor Author:

Depthwise is supported only for CUDA. I added transpose2d and dilated2d.

Contributor Author:

Added conv3d, transpose3d, dilated3d

@gchanan (Contributor) commented Jun 18, 2019:

Instead of doing one-off tests like this, how about adding this as an option to the normal NN Module / Criterion tests? There is already some use of this; see check_eval.

@ifedan: Updated

Contributor Author:

What is the benefit of moving them to CriterionTest?

Contributor:

It will check many more combinations of parameters.

Contributor Author:

The problem with this approach is that the input tensor is initially generated with random float values, so its output can't be compared exactly against the long tensor result.
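The mismatch can be sketched with torch.mm standing in for the convolution reference path (an illustrative example, not code from the PR): values drawn with torch.randn get truncated when cast to long, so the two results disagree, while integer values stored as doubles make both paths exactly comparable.

```python
import torch

torch.manual_seed(0)

# Random float inputs: casting to long truncates the values, so the float
# result and the long result are not comparable in general.
x = torch.randn(4, 5)
w = torch.randn(3, 5)
float_out = torch.mm(x, w.t())
long_out = torch.mm(x.long(), w.long().t())
# float_out != long_out.double() in general

# Integer values stored as doubles: both paths compute the exact same result
# (products stay well below 2**53, so doubles represent them exactly).
xi = torch.randint(-1000, 1000, (4, 5)).double()
wi = torch.randint(-1000, 1000, (3, 5)).double()
assert torch.equal(torch.mm(xi, wi.t()).long(),
                   torch.mm(xi.long(), wi.long().t()))
```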

Contributor Author:

Fixed tests

ifedan added 4 commits May 29, 2019 11:23
Differential Revision: D15423752
Differential Version: 83368139
Differential Revision: D15423753
Differential Version: 84238345
Differential Revision: D15423753
Differential Version: 84400111
Differential Revision: D15423753
Differential Version: 84408451
@ifedan ifedan changed the title conv2d for LongTensor conv2d/conv3d for LongTensor Jun 6, 2019
@gchanan (Contributor) left a comment:

are you planning to add CUDA support to any of these?

test/test_nn.py Outdated
def extra_args(self):
return self._get_arg('extra_args', False)

class TestLongTensor(TestCase):
@gchanan (Contributor) commented Jun 18, 2019:

Instead of doing one-off tests like this, how about adding this as an option to the normal NN Module / Criterion tests? There is already some use of this; see check_eval.

@ifedan: Updated

Differential Revision: D15423753
Differential Version: 85526199
@ifedan ifedan requested a review from gchanan June 25, 2019 02:11
Differential Revision: D15423753
Differential Version: 85526847
@ifedan (Contributor Author) commented Jun 25, 2019

@pytorchbot retest this please

ifedan added 2 commits June 25, 2019 07:55
Differential Revision: D15423752
Differential Version: 85543831
Differential Revision: D15423753
Differential Version: 85544793
test/test_nn.py Outdated
test_params['desc'] = 'with_long_tensor' if desc is None else desc + '_with_long_tensor'

def gen_long_tensor(size):
return torch.LongTensor(size=size).random_(-1000, 1000).double().cpu()
Contributor:

Can you use torch.randint instead?
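A sketch of what the suggested change could look like (the helper name gen_long_tensor and the value range come from the hunk above; dropping the explicit .cpu() call is an assumption, since torch.randint allocates on the CPU by default):

```python
import torch

def gen_long_tensor(size):
    # torch.randint already draws integers in [-1000, 1000), so the explicit
    # LongTensor construction and the .random_() call can be dropped; the
    # result is converted to double so the float test path sees exact values.
    return torch.randint(-1000, 1000, size).double()
```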

Differential Revision: D15423753
Differential Version: 85563780
@ifedan ifedan requested a review from gchanan June 25, 2019 18:51
@gchanan (Contributor) left a comment:

Just some small renames and changes.

constructor_args=(10, 8),
input_size=(4, 10),
reference_fn=lambda i, p: torch.mm(i, p[0].t()) + p[1].view(1, -1).expand(4, 8),
reference_fn=lambda i, p, m: torch.mm(i, p[0].t()) + p[1].view(1, -1).expand(4, 8),
Contributor:

The standard in Python is to use "_" if you aren't going to use the parameter (throughout this change).

Contributor Author:

Done
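For illustration, the updated reference function with the unused module argument named "_" (shapes and the lambda body are taken from the hunk above; here it is exercised directly rather than through the test harness):

```python
import torch

# reference_fn now receives (input, params, module); the module argument is
# unused in this reference computation, so it is named "_" per convention.
reference_fn = lambda i, p, _: torch.mm(i, p[0].t()) + p[1].view(1, -1).expand(4, 8)

i = torch.randn(4, 10)       # input_size=(4, 10) from the test definition
weight = torch.randn(8, 10)  # Linear(10, 8) weight
bias = torch.randn(8)
out = reference_fn(i, (weight, bias), None)
assert out.shape == (4, 8)
```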

self.FIXME_no_cuda_gradgrad_comparison = \
kwargs.get('FIXME_no_cuda_gradgrad_comparison', False)
self.precision = kwargs.get('precision', 2e-4)
self.check_reference_only = kwargs.get('check_reference_only', False)
Contributor:

Just reuse the names in CriterionTest -- so this would be check_forward_only.

Contributor Author:

Done

test/test_nn.py Outdated
desc = test_params.get('desc', None)
test_params['desc'] = 'with_long_tensor' if desc is None else desc + '_with_long_tensor'

def gen_long_tensor(size):
Contributor:

This name is confusing, because it implies the function returns a long tensor when it actually returns a double tensor.

Also, why do you need to specify CPU?

Contributor:

Discussed offline, going to rename to double_equivalent_of_long_tensor.

Contributor Author:

Done

test_params['input_fn'] = gen_long_tensor_input(test_params['input_size'])
test_params['reference_fn'] = reference_fn
test_params['check_reference_only'] = True
test_params['test_cuda'] = False
Contributor:

Please add a comment here.

Contributor Author:

Done

test/test_nn.py Outdated
def gen_long_tensor_constructor(constructor):
def long_tensor_constructor(*args, **kwargs):
cons = constructor(*args, **kwargs)
cons._apply(lambda t: Parameter(gen_long_tensor(t.size())) if t.is_floating_point() else t)
Contributor:

I think it's fine to do `if t.is_floating_point() and isinstance(t, Parameter)` or something.

Contributor Author:

Done
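Taking the thread together, a self-contained sketch of the agreed-upon parameter replacement (it uses a plain loop over named_parameters on a flat module instead of the private Module._apply from the diff, and assumes the double_equivalent_of_long_tensor body discussed earlier):

```python
import torch
from torch.nn import Parameter

def double_equivalent_of_long_tensor(size):
    # integer values in [-1000, 1000) stored as double, so the float path
    # produces exactly the values the long path would
    return torch.randint(-1000, 1000, size).double()

def with_long_friendly_params(module):
    # Replace only floating-point *parameters* (the isinstance(t, Parameter)
    # check), leaving buffers and integer tensors untouched. Works for flat
    # modules such as Linear; nested modules would need dotted-name handling.
    for name, t in list(module.named_parameters()):
        if isinstance(t, Parameter) and t.is_floating_point():
            setattr(module, name, Parameter(double_equivalent_of_long_tensor(t.size())))
    return module

m = with_long_friendly_params(torch.nn.Linear(10, 8).double())
# every parameter is now integer-valued, so long/float outputs stay comparable
assert all(torch.equal(p, p.trunc()) for p in m.parameters())
```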

Differential Revision: D15423753
Differential Version: 85666615
@ifedan ifedan changed the base branch from export-D15423752 to master June 26, 2019 21:45
@ailzhang (Contributor) commented Jun 26, 2019

This broke lint, cc @ifedan. [Update: the xla failure seems to be caused by the master failure, sorry!] Also, please cc me or @dlibenzi or @asuhan on the thread before landing if this breaks an xla test. Thanks a lot!

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jun 26, 2019
Summary:
Pull Request resolved: pytorch/pytorch#20730

Generates forward conv2d function for LongTensor

Differential Revision: D15423753

fbshipit-source-id: 0e770b61257cc4c6559581796bf104ef68155c84
@facebook-github-bot:

This pull request has been merged in 04fe245.

@ezyang ezyang deleted the export-D15423753 branch July 19, 2019 15:49