Added some extra tests for std_mean and var_mean for multiple dims. #20650
Conversation
facebook-github-bot
left a comment
@ifedan has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
torch/tensor.py (Outdated)

      __abs__ = _C._TensorBase.abs

  -   def _std_mean(self, dim=None, unbiased=True, keepdim=False):
  +   def std_mean(self, dim=None, unbiased=True, keepdim=False):
I think this needs two changes:
- this shouldn't be done in Python, it should be done in native_functions.
- the method should be called _std_mean, because it's just there for testing purposes.
@gchanan Can we just make it a Tensor method, if we want to move them to native_functions?
it shouldn't just be a method. I'm suggesting two things:
- A function called std_mean.
- A method called _std_mean, which is just there for testing purposes.
What will be the benefit of doing this in native_functions?
The parsing will be consistent and you can maintain everything in one place.
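For context, here is a minimal sketch of what the std_mean functional under discussion computes. The helper name std_mean_reference is hypothetical and this is not the PR's actual native kernel, just a composition of the existing std and mean reductions:

```python
import torch

def std_mean_reference(x, dim=None, unbiased=True, keepdim=False):
    # Hypothetical pure-Python reference: return the standard deviation
    # and the mean of `x`, optionally reduced over the given dimensions.
    if dim is None:
        return x.std(unbiased=unbiased), x.mean()
    return (x.std(dim=dim, unbiased=unbiased, keepdim=keepdim),
            x.mean(dim=dim, keepdim=keepdim))

x = torch.arange(6, dtype=torch.float32).reshape(2, 3)
s, m = std_mean_reference(x, dim=(0, 1))
```

In current PyTorch releases the same pair is available as torch.std_mean, which returns (std, mean) from a single call.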
I changed the autograd test to avoid a failure when a method is implemented only in torch.
torch/tensor.py (Outdated)

      __abs__ = _C._TensorBase.abs

  -   def __std_mean__(self, dim=None, unbiased=True, keepdim=False):
  +   def _std_mean(self, dim=None, unbiased=True, keepdim=False):
can you just make this a native function?
I changed the autograd test to avoid a failure when a method is implemented only in torch.
test/test_torch.py (Outdated)

      dims = len(sizes)
      for device in torch.testing.get_all_device_types():
          x = torch.rand(sizes, device=device)
          for rng in range(2, dims):
what is rng supposed to mean? It doesn't look like a random number generator (the standard meaning of rng).
Renamed
test/test_torch.py (Outdated)

      for device in torch.testing.get_all_device_types():
          x = torch.rand(sizes, device=device)
          for rng in range(2, dims):
              dim_list = list(combinations(list(range(dims)), r=rng))
how long do these tests take?
It takes around 2 seconds on my machine.
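For a sense of the test's size: the combinations loop above enumerates every subset of dimensions of size 2 up to dims - 1 to reduce over. A standalone sketch (the shape here is an assumed example, not taken from the PR):

```python
from itertools import combinations

sizes = (3, 4, 5, 6)   # assumed example shape, not from the PR
dims = len(sizes)

# Enumerate every dim subset of size 2 .. dims-1, as the test loop does.
dim_lists = []
for n in range(2, dims):
    dim_lists.extend(combinations(range(dims), r=n))

# For a 4-d tensor this yields C(4,2) + C(4,3) = 6 + 4 = 10 reductions,
# each of which is then checked on every available device type.
```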
      f_args_variable = (self_variable,) + args_variable
      f_args_tensor = (self_tensor,) + args_tensor
      # could run the gradchecks again, but skip since we did it for the methods above.
      run_gradcheck = exclude_tensor_method(name, test_name) and not is_inplace and name not in EXCLUDE_GRADCHECK
the comment above is out of date now. Also, why do you need not is_inplace?
I can't run gradcheck for in-place functions here, because I would get "a leaf Variable that requires grad has been used in an in-place operation".
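The failure being worked around can be reproduced in a few lines (a sketch; the exact error wording may vary between PyTorch versions):

```python
import torch

leaf = torch.ones(3, requires_grad=True)  # a leaf Variable that requires grad

# In-place ops on such a leaf raise a RuntimeError, which is why gradcheck
# has to skip the in-place variants in this test.
try:
    leaf.add_(1.0)
    raised = False
except RuntimeError:
    raised = True

# Operating in-place on a non-leaf (e.g. a clone) is fine:
leaf.clone().add_(1.0)
```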
          output_variable = getattr(self_variable, name)(*args_variable, **kwargs_variable)
      else:
          self_and_args_variable = (self_variable,) + args_variable
          output_variable = getattr(torch, name)(*self_and_args_variable, **kwargs_variable)
I don't understand this -- do we have "inplace" functionals we are trying to test? And why wouldn't they be tested above?
I can't run gradcheck for in-place functions here, because I would get "a leaf Variable that requires grad has been used in an in-place operation".
facebook-github-bot
left a comment
@ifedan has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Added some extra tests for std_mean and var_mean for multiple dims.
Some refactoring of previously created tests based on PR comments: #18731