
Conversation

@anjali411
Contributor

@anjali411 anjali411 commented Jun 12, 2020

Stack from ghstack:

resolves #36323 by adding `torch.sgn` for complex tensors.
`torch.sgn` returns `x/abs(x)` for `x != 0` and returns `0 + 0j` for `x==0`

also updates the backward definition of `torch.div`, `torch.abs`
Differential Revision: D23460526

[ghstack-poisoned]
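For context, a minimal sketch of the intended semantics using only pre-existing ops; the helper name `sgn_reference` and the sample values are illustrative, not part of the PR:

```python
import torch

# Reference behavior described above: x / abs(x) for x != 0, and 0 + 0j for x == 0.
def sgn_reference(x: torch.Tensor) -> torch.Tensor:
    mag = x.abs()
    return torch.where(mag == 0, torch.zeros_like(x), x / mag)

x = torch.tensor([3 + 4j, 0 + 0j, -1 + 0j])
print(sgn_reference(x))
# roughly: tensor([0.6000+0.8000j, 0.0000+0.0000j, -1.0000+0.0000j])
```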
anjali411 added a commit that referenced this pull request Jun 12, 2020
ghstack-source-id: 367b82c
Pull Request resolved: #39955
@anjali411 anjali411 requested a review from ezyang June 12, 2020 19:19
@anjali411
Contributor Author

anjali411 commented Jun 12, 2020

cc @dylanbespalko for the Vec256 changes

@dr-ci

dr-ci bot commented Jun 12, 2020

💊 CI failures summary and remediations

As of commit 0fed8c8 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@anjali411 anjali411 requested a review from mruberry June 12, 2020 20:16
['sigmoid', torch.sigmoid],
['sigmoid_', torch.sigmoid_],
['sign', torch.sign],
['sgn', torch.sign],
Collaborator

torch.sgn?

Example::
>>> x=torch.randn(4, dtype=torch.cfloat)
Collaborator

The example doesn't really demonstrate the function because of the random values. Maybe pick some angles, like 0, 45, 90 degrees?
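For instance, a hedged sketch of what a fixed-angle example could look like, using inputs at 0, 45, and 90 degrees so the outputs are predictable; the exact values shown are illustrative and assume the `sgn` method added by this PR:

```python
import torch

# Inputs at angles 0, 45, and 90 degrees with non-unit magnitudes,
# so sgn() visibly normalizes each value onto the unit circle.
x = torch.tensor([2 + 0j, 3 + 3j, 0 + 4j])
print(x.sgn())
# roughly: tensor([1.0000+0.0000j, 0.7071+0.7071j, 0.0000+1.0000j])
```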

cos_angle = angle.cos()
sin_angle = angle.sin()
expected = cos_angle + 1j * sin_angle
self.assertEqual(x.sgn(), expected)
Collaborator

@mruberry mruberry Jun 12, 2020

Can this instead assert that x.sgn() has the same angle as x.angle() and that its absolute value is one everywhere?

Contributor Author

yeah sure
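A minimal sketch of what the suggested assertions might look like; plain `assert`s are shown for illustration, where the actual test would use `self.assertEqual` inside the test class:

```python
import torch

x = torch.randn(10, dtype=torch.cfloat)
y = x.sgn()  # assumes the sgn method added by this PR

# sgn should preserve the angle and have unit magnitude wherever x is nonzero.
nonzero = x != 0
assert torch.allclose(y[nonzero].angle(), x[nonzero].angle())
assert torch.allclose(y[nonzero].abs(), torch.ones_like(x[nonzero].abs()))
```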

@mruberry
Collaborator

Added some notes. This also still needs to update torch/_overrides.py, docs/source/tensors.rst, docs/source/torch.rst, docs/source/name_inference.rst, aten/src/ATen/core/aten_interned_strings.h and tools/derivatives.yaml.

Can this also be tested by TestTensorDeviceOps and TestTorchMathOps?

@ezyang
Contributor

ezyang commented Jun 15, 2020

Relevant issues: #36323 #36655

anjali411 added a commit that referenced this pull request Jun 15, 2020
ghstack-source-id: 21a0c14
Pull Request resolved: #39955
Contributor

@ezyang ezyang left a comment

Don't compute the normalized vector using trig functions.
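In other words, a sketch of the suggested change to how the expected value is built; the names mirror the test snippet above, and `x` is assumed to contain nonzero complex samples:

```python
import torch

x = torch.randn(4, dtype=torch.cfloat)  # assumed nonzero samples
# Build the expected value directly as x / |x| instead of reconstructing it
# from angle() with cos() and sin().
expected = x / x.abs()
assert torch.allclose(x.sgn(), expected)  # self.assertEqual(...) in the actual test
```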

resolves #36323 by adding `torch.sgn` for complex tensors.
`torch.sgn` returns `x/abs(x)` for `x != 0` and returns `0 + 0j` for `x==0`

TODO:
1. add tests for backward (waiting on gradcheck PR for complex)

[ghstack-poisoned]
anjali411 added a commit that referenced this pull request Jul 29, 2020
ghstack-source-id: bfce330
Pull Request resolved: #39955
@anjali411 anjali411 requested a review from mruberry July 31, 2020 15:07
@mruberry mruberry self-requested a review July 31, 2020 20:12
Collaborator

@mruberry mruberry left a comment

Minor doc nits but otherwise this looks good to me!

@anjali411 anjali411 requested a review from apaszke as a code owner September 3, 2020 18:07
Collaborator

@albanD albanD left a comment

I think this still needs the gradients for the real case to be tested, even if you postpone the check for the complex version.

@anjali411
Contributor Author

> I think this still needs the gradients for the real case to be tested, even if you postpone the check for the complex version.

I was thinking of waiting to merge this PR until #43208 is merged and then adding tests for sgn.
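A hedged sketch of what a real-dtype gradient check could look like once the tests are added; for real inputs the gradient of sgn is zero almost everywhere, so this is mostly a sanity check, and it assumes the `torch.sgn` entry point added by this PR:

```python
import torch
from torch.autograd import gradcheck

# Explicit double-precision points away from zero (sgn is flat there, so the
# analytical gradient should match the numerical one, both being zero).
x = torch.tensor([-2.0, -0.5, 0.5, 2.0], dtype=torch.double, requires_grad=True)
assert gradcheck(torch.sgn, (x,), eps=1e-6, atol=1e-4)
```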

resolves #36323 by adding `torch.sgn` for complex tensors.
`torch.sgn` returns `x/abs(x)` for `x != 0` and returns `0 + 0j` for `x==0`

This PR doesn't test the correctness of the gradients. It will be done as a part of auditing all the ops in the future, once we decide on the autograd behavior (JAX vs. TF) and add gradcheck.

Differential Revision: [D23460526](https://our.internmc.facebook.com/intern/diff/D23460526)

[ghstack-poisoned]
anjali411 added a commit that referenced this pull request Sep 3, 2020
ghstack-source-id: a3d49b5
Pull Request resolved: #39955
resolves #36323 by adding `torch.sgn` for complex tensors.
`torch.sgn` returns `x/abs(x)` for `x != 0` and returns `0 + 0j` for `x==0`

also updates the backward definition of `torch.div`, `torch.abs`
Differential Revision: [D23460526](https://our.internmc.facebook.com/intern/diff/D23460526)

[ghstack-poisoned]
@codecov

codecov bot commented Sep 21, 2020

Codecov Report

Merging #39955 into gh/anjali411/34/base will decrease coverage by 0.00%.
The diff coverage is 100.00%.


@@                   Coverage Diff                    @@
##           gh/anjali411/34/base   #39955      +/-   ##
========================================================
- Coverage                 67.86%   67.85%   -0.01%     
========================================================
  Files                       384      384              
  Lines                     50026    50029       +3     
========================================================
  Hits                      33948    33948              
- Misses                    16078    16081       +3     
| Impacted Files | Coverage Δ |
|---|---|
| torch/overrides.py | 97.08% <ø> (ø) |
| ...ch/testing/_internal/common_methods_invocations.py | 91.37% <ø> (ø) |
| torch/_tensor_docs.py | 100.00% <100.00%> (ø) |
| torch/_torch_docs.py | 100.00% <100.00%> (ø) |
| torch/utils/_benchmark/utils/common.py | 77.68% <0.00%> (-2.48%) ⬇️ |


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 81bb19c...0fed8c8.

@facebook-github-bot
Contributor

@anjali411 merged this pull request in 58b6ab6.

loadbxh pushed a commit to loadbxh/Torch that referenced this pull request Sep 23, 2020
ghstack-source-id: e4ab67a
Pull Request resolved: pytorch/pytorch#39955
@facebook-github-bot facebook-github-bot deleted the gh/anjali411/34/head branch September 26, 2020 14:14
anjali411 added a commit that referenced this pull request Sep 29, 2020
This PR disables autograd for all C -> C, R -> C functions which are not included in the whitelist `GRADIENT_IMPLEMENTED_FOR_COMPLEX`. In practice, there will be a RuntimeError during forward computation when the outputs are differentiable:
```
>>> x=torch.randn(4, 4, requires_grad=True, dtype=torch.cdouble)
>>> x.pow(3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: pow does not support automatic differentiation for outputs with complex dtype.
```

The implicit assumption here is that all the C -> R functions have correct backward definitions. So before merging this PR, the following functions must be tested and verified to have correct backward definitions:
`torch.abs` (updated in #39955 ), `torch.angle`, `torch.norm`, `torch.irfft`, `torch.istft`.

[ghstack-poisoned]
anjali411 added a commit that referenced this pull request Sep 29, 2020
anjali411 added a commit that referenced this pull request Sep 29, 2020
anjali411 added a commit that referenced this pull request Sep 30, 2020
anjali411 added a commit that referenced this pull request Sep 30, 2020
facebook-github-bot pushed a commit that referenced this pull request Sep 30, 2020
Summary:
Pull Request resolved: #45461


Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D23998156

Pulled By: anjali411

fbshipit-source-id: 370eb07fe56ac84dd8e2233ef7bf3a3eb8aeb179