[ready] torch.* doc update for Variable/Tensor merge, and other improvements #5443
Conversation
Force-pushed from 8516fd6 to 9be20cc.
The nn.* counterpart of pytorch#5443. Mostly removes the Variable wrapper. Also adds a doc for nn.RReLU. Note that `torch.randn(*, requires_grad=True)` isn't documented until pytorch#5462 is done.
@colesbury could you take a look at this? Most of the changes are examples formatting and …
…printing style 2. Remove functions in torch/functional.py that are already implemented with native_function 3. Add set_default_tensor_type doc
@pytorchbot retest this please
colesbury left a comment:
changes lgtm assuming the tests pass
Summary: ~~This PR fixes #8525 by renaming `split_with_sizes` to `split` so that two `aten::split` ops are generated (previously `aten::split(self, int, int)` and `aten::split_with_sizes(self, int[], int)` were generated).~~ ~~`split_with_sizes` was introduced in PR #5443, but I don't see a reason for it to have a different name than `split` rather than just overloading `split`.~~ This PR fixes #8525 by adding `register_special_ops.cpp` to mirror the Python dispatching from `split` to `split` and `split_with_sizes` in [tensor.py](https://github.com/pytorch/pytorch/blob/master/torch/tensor.py#L279). It also fixes #8520 by adding an `int[]` wherever it sees `torch.Size`. In a follow-up PR this could also be used to fix some of the other `unknown builtin op` test errors.

Pull Request resolved: #11051
Differential Revision: D9582443
Pulled By: driazati
fbshipit-source-id: d27201f85937d72e45e851eaa1460dd3dd1b61a9
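The dispatch described in the summary (an `int` second argument selects equal-size chunks, a sequence of ints selects explicitly sized chunks) can be sketched in plain Python. This is a hypothetical stand-in using lists instead of tensors, not PyTorch's actual implementation; the name `split_list` is made up for illustration:

```python
# Hypothetical sketch of the type-based dispatch between
# aten::split(self, int, int) and aten::split_with_sizes(self, int[], int).
# Lists stand in for tensors; this is not PyTorch's real code.
def split_list(seq, split_size_or_sections):
    if isinstance(split_size_or_sections, int):
        # Equal-size chunks; the last chunk may be smaller.
        n = split_size_or_sections
        return [seq[i:i + n] for i in range(0, len(seq), n)]
    # Explicit chunk lengths, one per output chunk.
    chunks, start = [], 0
    for size in split_size_or_sections:
        chunks.append(seq[start:start + size])
        start += size
    return chunks

print(split_list(list(range(5)), 2))          # equal-size path
print(split_list(list(range(6)), [1, 2, 3]))  # explicit-sizes path
```

The point of the PR is that the Python side keeps a single `split` entry point while this if-statement routes to one of two underlying ops.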
Commits:
1. Update torch.* docs for the Variable/Tensor merge (e.g. `-> float` functions become `-> Tensor`) and the new printing style.
2. Remove functions in `torch/functional.py` that are already implemented with native_function. Move their doc to `_torch_docs.py` and `_tensor_docs.py`.
3. Add `set_default_tensor_type` doc.
4. Fix the `torch.gels` example (Docs Error: Incorrect Output of torch.gels() Example #5431).
5. Fix `torch.split`. The method behaves differently based on the type of the 2nd arg. Split into `split` and `split_with_sizes`; the Python API stays the same with the if-statement in `variable.py`. To keep the `torch.split` functional form, this commit also moves `from .functional import *` to after the ATen bindings in `torch/__init__.py`. Change `_add_docstr` to work on Python functions so the `torch.split` doc can remain in `_torch_docs.py`.
6. Partially addresses #5571.
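The `_add_docstr` change mentioned in the commit list (making it work on plain Python functions, not just ATen-bound builtins) can be illustrated with a minimal stand-in. The helper below is a simplified sketch, not PyTorch's actual `_add_docstr`, which also patches C-level builtins:

```python
# Simplified stand-in for attaching a docstring defined elsewhere
# (e.g. in _torch_docs.py) to a plain Python function.
def _add_docstr(obj, doc):
    # Guard against silently overwriting an existing docstring.
    if obj.__doc__ is not None:
        raise RuntimeError(f"{obj.__name__} already has a docstring")
    obj.__doc__ = doc
    return obj

def split(tensor, split_size_or_sections, dim=0):
    ...  # implementation elided

split = _add_docstr(
    split,
    "split(tensor, split_size_or_sections, dim=0) -> List of Tensors",
)
print(split.__doc__)
```

This keeps the documentation source centralized in one module while the function itself lives wherever the dispatch logic needs it.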
cc @vishwakftw, hope that this doesn't conflict with what you are working on.