Conversation


@ifedan ifedan commented May 17, 2019

Added extra tests for std_mean and var_mean over multiple dims.
Refactored the previously added tests based on the comments in PR #18731.
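For context, the behavior these tests cover can be sketched as follows. This is an illustration, not code from this PR; the tensor sizes and reduced dims are made up:

```python
import torch

# Reducing over multiple dims at once should agree with the standalone
# std/mean/var reductions over the same dims.
x = torch.rand(10, 20, 30)

std, mean = torch.std_mean(x, dim=(0, 2))
assert torch.allclose(std, x.std(dim=(0, 2)))
assert torch.allclose(mean, x.mean(dim=(0, 2)))

# var_mean behaves the same way; keepdim preserves the reduced dims as size 1.
var, mean2 = torch.var_mean(x, dim=(0, 2), keepdim=True)
print(std.shape, var.shape)  # torch.Size([20]) torch.Size([1, 20, 1])
```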

@pytorchbot pytorchbot added the module: operators and module: tests labels May 17, 2019

@facebook-github-bot facebook-github-bot left a comment


@ifedan has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ifedan ifedan requested a review from gchanan May 17, 2019 18:07
@gchanan gchanan requested a review from umanwizard May 17, 2019 19:50
@pytorchbot pytorchbot added the module: docs label May 17, 2019

torch/tensor.py Outdated
__abs__ = _C._TensorBase.abs

-def _std_mean(self, dim=None, unbiased=True, keepdim=False):
+def std_mean(self, dim=None, unbiased=True, keepdim=False):
Contributor

I think this needs two changes:

  1. this shouldn't be done in Python; it should be done in native_functions.
  2. the method should be called _std_mean, because it's just there for testing purposes.

Contributor Author

@gchanan Can we just make it a Tensor method, if we want to move them to native_functions?

Contributor

it shouldn't just be a method. I'm suggesting two things:

  1. A function called std_mean
  2. A method called _std_mean, that is just there for testing purposes.

Contributor Author

What will be the benefit of doing this in native_functions?

Contributor

the parsing will be consistent and you can maintain everything in one place.

Contributor Author

I changed the autograd test to avoid failing when a method is implemented only in torch.

torch/tensor.py Outdated
__abs__ = _C._TensorBase.abs

-def __std_mean__(self, dim=None, unbiased=True, keepdim=False):
+def _std_mean(self, dim=None, unbiased=True, keepdim=False):
Contributor

can you just make this a native function?

Contributor Author

I changed the autograd test to avoid failing when a method is implemented only in torch.

@pytorchbot pytorchbot added the module: autograd label May 21, 2019
dims = len(sizes)
for device in torch.testing.get_all_device_types():
    x = torch.rand(sizes, device=device)
    for rng in range(2, dims):
Contributor

What is rng supposed to mean? It doesn't look like a random number generator (the standard definition of rng).

Contributor Author

Renamed

for device in torch.testing.get_all_device_types():
    x = torch.rand(sizes, device=device)
    for rng in range(2, dims):
        dim_list = list(combinations(list(range(dims)), r=rng))
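The loop above enumerates every combination of two or more dims. A self-contained sketch of the same idea (the sizes and the cross-check are made up for illustration; the PR's actual test compares against its own reference implementation):

```python
from itertools import combinations

import torch

sizes = (3, 4, 5)
dims = len(sizes)
x = torch.rand(sizes)

checked = 0
for r in range(2, dims):
    for dim in combinations(range(dims), r):
        std, mean = torch.std_mean(x, dim=dim)
        # Cross-check against the standalone reductions.
        assert torch.allclose(std, x.std(dim=dim))
        assert torch.allclose(mean, x.mean(dim=dim))
        checked += 1
print(checked)  # 3 combinations for 3 dims: (0,1), (0,2), (1,2)
```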
Contributor

how long do these tests take?

Contributor Author

It takes around 2 seconds on my machine.

f_args_variable = (self_variable,) + args_variable
f_args_tensor = (self_tensor,) + args_tensor
# could run the gradchecks again, but skip since we did it for the methods above.
run_gradcheck = exclude_tensor_method(name, test_name) and not is_inplace and name not in EXCLUDE_GRADCHECK
Contributor

the comment above is out of date now. Also, why do you need not is_inplace?

Contributor Author

I can't run gradcheck for in-place functions here, because I would get "a leaf Variable that requires grad has been used in an in-place operation".
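A minimal reproduction of that error, plus the usual workaround (an illustrative sketch, not code from the test suite):

```python
import torch

x = torch.rand(3, requires_grad=True)

raised = False
try:
    x.add_(1.0)  # in-place op directly on a leaf that requires grad
except RuntimeError:
    # RuntimeError: a leaf Variable that requires grad is being used
    # in an in-place operation.
    raised = True

# Workaround: apply the in-place op to a non-leaf clone instead.
y = x.clone()
y.add_(1.0)  # fine: y has a grad_fn, so it is not a leaf
```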

    output_variable = getattr(self_variable, name)(*args_variable, **kwargs_variable)
else:
    self_and_args_variable = (self_variable,) + args_variable
    output_variable = getattr(torch, name)(*self_and_args_variable, **kwargs_variable)
Contributor

I don't understand this -- do we have "inplace" functionals we are trying to test? And why wouldn't they be tested above?

Contributor Author

I can't run gradcheck for in-place functions here, because I would get "a leaf Variable that requires grad has been used in an in-place operation".
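For reference, the gradcheck discussed in these threads compares numerical and analytic gradients via torch.autograd.gradcheck. A minimal standalone sketch for the out-of-place functional (not this harness's code; the input shape and dim are made up):

```python
import torch
from torch.autograd import gradcheck

# gradcheck needs double-precision inputs with requires_grad set.
x = torch.rand(3, 4, dtype=torch.double, requires_grad=True)

# std_mean returns a (std, mean) tuple; gradcheck handles tuple outputs.
ok = gradcheck(lambda t: torch.std_mean(t, dim=1), (x,))
print(ok)  # True (gradcheck raises on failure)
```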

@ifedan ifedan requested a review from gchanan June 13, 2019 17:53

@facebook-github-bot facebook-github-bot left a comment


@ifedan has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@facebook-github-bot

@ifedan merged this pull request in abd6cff.


Labels

Merged · module: autograd · module: docs · module: tests


5 participants