
Conversation


@ssnl ssnl commented Feb 27, 2018

Commits:

  1. Doc changes
  • Update docs to reflect changes from the Variable/Tensor merge (`-> float` functions become `-> Tensor`) and the new printing style
  • Remove functions in torch/functional.py that are already implemented as native_functions. Move their docs to _torch_docs.py and _tensor_docs.py.
  • Add set_default_tensor_type doc
  • Generally improve docs, including clarifying torch.gels (Docs Error: Incorrect Output of torch.gels() Example #5431)
  2. Fix torch.split. The method behaves differently depending on the type of its second argument.
  • This commit breaks it into two ATen functions, split and split_with_sizes. The Python API stays the same, with the if-statement in variable.py.
  • To override the torch.split functional form, this commit also moves `from .functional import *` to after the ATen bindings in torch/__init__.py.
  • Finally, this commit also extends _add_docstr to work on Python functions so the torch.split doc can remain in _torch_docs.py.
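The type-based dispatch described in item 2 can be sketched in plain Python. This is a minimal illustration only, with list slicing standing in for tensor narrowing; `_split_equal` and `_split_with_sizes` are hypothetical stand-ins for the two ATen functions:

```python
def _split_equal(seq, split_size):
    # Equal-sized chunks; the last chunk may be smaller.
    return [seq[i:i + split_size] for i in range(0, len(seq), split_size)]

def _split_with_sizes(seq, sizes):
    # Chunks with the exact lengths given in `sizes`.
    out, start = [], 0
    for s in sizes:
        out.append(seq[start:start + s])
        start += s
    return out

def split(seq, split_size_or_sections):
    # Mirrors the if-statement in variable.py: an int means equal chunks,
    # a list of ints means explicit section sizes.
    if isinstance(split_size_or_sections, int):
        return _split_equal(seq, split_size_or_sections)
    return _split_with_sizes(seq, split_size_or_sections)

print(split(list(range(5)), 2))       # → [[0, 1], [2, 3], [4]]
print(split(list(range(5)), [1, 4]))  # → [[0], [1, 2, 3, 4]]
```

Keeping one Python entry point while splitting the backend into two ops is what lets the user-facing API stay unchanged.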

partially addresses #5571

cc @vishwakftw hope that this doesn't conflict with what you are working on.

@ssnl ssnl changed the title Update doc to reflect changes in Variable/Tensor merge, and other improvements [wip] Update doc to reflect changes in Variable/Tensor merge, and other improvements Feb 27, 2018
@ssnl
Copy link
Collaborator Author

ssnl commented Feb 27, 2018

changing torch.split

@ssnl ssnl changed the title [wip] Update doc to reflect changes in Variable/Tensor merge, and other improvements [ready] Update doc to reflect changes in Variable/Tensor merge, and other improvements Feb 28, 2018
@ssnl ssnl changed the title [ready] Update doc to reflect changes in Variable/Tensor merge, and other improvements [ready] torch.* doc update for Variable/Tensor merge, and other improvements Feb 28, 2018

colesbury pushed a commit that referenced this pull request Mar 1, 2018
The nn.* counterpart of #5443. Mostly removed the Variable wrapper. Also added doc for nn.RReLU.

Notice that torch.randn(*, requires_grad=True) isn't documented until #5462 is done.
@ssnl ssnl force-pushed the var_doc branch 3 times, most recently from 8516fd6 to 9be20cc on March 2, 2018 at 05:35
jamesr66a pushed a commit to jamesr66a/pytorch that referenced this pull request Mar 2, 2018


@vishwakftw vishwakftw mentioned this pull request Mar 5, 2018
@ssnl
Copy link
Collaborator Author

ssnl commented Mar 7, 2018

@colesbury could you take a look at this? Most of the changes are example formatting and changing `-> float` to `-> Tensor`.


ssnl added 5 commits March 8, 2018 13:28

1. Update doc to reflect changes in Variable/Tensor merge, and the new printing style
2. Remove functions in torch/functional.py that are already implemented with native_function
3. Add set_default_tensor_type doc
@ssnl
Copy link
Collaborator Author

ssnl commented Mar 8, 2018

@pytorchbot retest this please

@colesbury colesbury left a comment


changes lgtm assuming the tests pass

@soumith soumith merged commit 71d7321 into pytorch:master Mar 9, 2018
@ssnl ssnl deleted the var_doc branch March 9, 2018 08:04
facebook-github-bot pushed a commit that referenced this pull request Sep 7, 2018
Summary:
~~This PR fixes #8525 by renaming `split_with_sizes` to `split` so that 2 `aten::split` ops are
generated (previously `aten::split(self, int, int)` and `aten::split_with_sizes(self, int[], int)` were generated)~~

~~`split_with_sizes` was made in PR #5443, but I don't see a reason for it to have
a different name than `split` rather than just overload `split`.~~

This PR fixes #8525 by adding `register_special_ops.cpp` to mirror the Python dispatch from `split` to `split` and `split_with_sizes` in [tensor.py](https://github.com/pytorch/pytorch/blob/master/torch/tensor.py#L279).

It also fixes #8520 by accepting an `int[]` wherever it sees `torch.Size`.

In a follow-up PR this could also be used to fix some of the other `unknown builtin op` test errors.
Pull Request resolved: #11051

Differential Revision: D9582443

Pulled By: driazati

fbshipit-source-id: d27201f85937d72e45e851eaa1460dd3dd1b61a9
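The `torch.Size`-to-`int[]` acceptance mentioned in the commit above can be illustrated with a small sketch. `coerce_int_list` is a hypothetical helper, not the actual PyTorch code; the key observation is that `torch.Size` subclasses `tuple`, so accepting tuples of ints wherever an `int[]` schema appears covers it:

```python
def coerce_int_list(value):
    # torch.Size subclasses tuple, so a Size argument arrives here as a
    # tuple of ints and can be accepted wherever the schema says int[].
    if isinstance(value, tuple):
        value = list(value)
    if not (isinstance(value, list) and all(isinstance(v, int) for v in value)):
        raise TypeError("expected int[] or torch.Size")
    return value

print(coerce_int_list((2, 3, 4)))  # → [2, 3, 4]
print(coerce_int_list([5]))        # → [5]
```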
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018