symeig for variables #2026
Conversation
torch/autograd/_functions/linalg.py (Outdated)

```python
x, w, v, = ctx.saved_variables

# gives an error if I don't do this..
x = x.data
```
torch/autograd/_functions/linalg.py (Outdated)

```python
tri1 = lambda a: torch.triu(a, 1)

def G(n):
    return sum([v[:, m] * grad_v.t()[n].matmul(v[:, m]) / (w[n] - w[m])
```
torch/autograd/_functions/linalg.py (Outdated)

```python
    return sum([v[:, m] * grad_v.t()[n].matmul(v[:, m]) / (w[n] - w[m])
                for m in range(N) if m != n])

g = sum([torch.ger(v[:, n], v[:, n] * grad_w[n] + G(n))
```
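Pulling the two hunks together: the formula under review builds one rank-one contribution to the input gradient per eigenpair. Below is a self-contained sketch of that formula as a standalone function; this is a reconstruction for illustration, not the PR's final code, and the trailing symmetrization is my assumption for a symmetric input (the quoted diff instead works with the triangle via `tri1`/`torch.triu`).

```python
import torch

def symeig_backward(grad_w, grad_v, w, v):
    # w: eigenvalues (N,); v: eigenvectors (N, N), one per column.
    N = w.shape[0]

    def G(n):
        # Off-diagonal coupling: project grad_v's n-th column onto the
        # other eigenvectors, weighted by inverse eigenvalue gaps.
        return sum(v[:, m] * grad_v.t()[n].matmul(v[:, m]) / (w[n] - w[m])
                   for m in range(N) if m != n)

    # One rank-one (outer product) term per eigenpair.
    g = sum(torch.ger(v[:, n], v[:, n] * grad_w[n] + G(n))
            for n in range(N))
    # Symmetrize; an assumption for symmetric input, not part of the diff.
    return (g + g.t()) / 2
```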
fmassa left a comment
Thanks for the PR! Do you think it would be possible to get eig as well?
I made some comments that I think need to be addressed before this PR can be merged. You could get the template for the tests from the gesv or inverse functions (I can give them to you tomorrow if needed, typing from the phone).
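For reference, a gradcheck-based test along the lines fmassa suggests might look roughly like the sketch below. It is modeled on the shape of such tests, not copied from the gesv/inverse templates; names, tolerances, and the use of `torch.linalg.eigh` (today's replacement for `torch.symeig`) are illustrative assumptions.

```python
import torch
from torch.autograd import gradcheck

def test_symeig_backward():
    a = torch.randn(5, 5, dtype=torch.double)
    # Symmetric input; a random matrix has distinct eigenvalues with
    # probability 1, which the gradient formula requires.
    x = (a + a.t()).requires_grad_()
    # gradcheck compares the analytic backward against finite differences.
    assert gradcheck(lambda m: torch.linalg.eigh(m), (x,), eps=1e-6, atol=1e-4)

test_symeig_backward()
```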
```python
@staticmethod
def forward(ctx, input, eigenvectors=False, upper=True):
    ctx.eigenvectors = eigenvectors
```
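For context, here is a hypothetical fleshing-out of the quoted signature under the modern `Function` API. Everything beyond the three quoted lines is an assumption, including the use of `torch.linalg.eigh` and the choice to always compute and save the eigenvectors (the override discussed later in the thread).

```python
import torch
from torch.autograd import Function

class Symeig(Function):
    @staticmethod
    def forward(ctx, input, eigenvectors=False, upper=True):
        ctx.eigenvectors = eigenvectors
        # Always compute the eigenvectors, even if the caller did not ask
        # for them, since the backward formula needs them.
        w, v = torch.linalg.eigh(input, UPLO='U' if upper else 'L')
        ctx.save_for_backward(w, v)
        # Match the symeig contract: zero eigenvectors when not requested.
        return w, v if eigenvectors else torch.zeros_like(v)

    # backward (omitted here) would apply the G(n) / torch.ger formula
    # quoted in the hunks above.
```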
Major questions right now: …

Also, standard …

formatting for pyflakes

Ad 2. You can use …

small typo
removed `.data` call (not needed) and renamed `symeig` return values to `e` and `V` instead of `w` and `x` to be consistent with pytorch documentation.
Re: 1. Ok, great, that's a good solution. I will work on that. Re: 2. Well, I just tested it again - in the codebase, it only works without …
It seems that the only problem is that you're mixing tensors with Variables, so just making sure that you keep everything wrapped would be enough. Add a couple of prints and see why they get mixed up.
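As an aside, the trap apaszke describes survives in modern PyTorch as `.data`, which silently detaches a tensor from the autograd graph. A minimal illustration in today's API (not the Variable-era code under review):

```python
import torch

x = torch.randn(3, 3, requires_grad=True)
y = (x.data @ x.data.t()).sum()  # .data drops gradient tracking
print(y.requires_grad)           # False; y.backward() would raise
z = (x @ x.t()).sum()            # keep everything wrapped and tracked
z.backward()                     # works; x.grad is now populated
```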
Summary: Partially fixes #6890. (The backward pass for non-symmetric eigen-decomposition is not implemented in other packages, e.g. autograd, mxnet, tensorflow, presumably because the eigenvalues can be imaginary in the general case, and AFAIK we cannot support complex numbers.)

This patch adds a backward function for the symmetric eigen-decomposition function `torch.symeig`. The formula used is taken from [here](http://eprints.maths.ox.ac.uk/1079/1/NA-08-01.pdf). Unit tests are added to verify correctness.

There is still one outstanding issue: how to handle the case where `symeig` is called with `eigenvectors=False`. In this case, the eigenvectors are returned as a zero tensor, but the backward computation for the eigenvalues depends on the eigenvectors. There was a previous attempt to implement this in #2026, where apaszke mentioned that the `eigenvectors` argument should be overridden so that they are saved for the backward pass. The forward code is autogenerated, though, and it isn't clear to me how that would be done. I'd appreciate any guidance. For now, there is a unit test that will fail until that issue is resolved.

Pull Request resolved: #8586
Reviewed By: ezyang
Differential Revision: D8872760
Pulled By: SsnL
fbshipit-source-id: 76614495d0f9c118fec163a428f32e5480b4d115
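To make the outstanding issue concrete: for symmetric A = V diag(w) Vᵀ, the differential of an eigenvalue is dwₙ = vₙᵀ dA vₙ, so the eigenvalue backward is grad_A = Σₙ grad_w[n] · vₙvₙᵀ and genuinely needs V. A sketch using the modern `torch.linalg.eigh` in place of `torch.symeig`:

```python
import torch

A = torch.randn(4, 4, dtype=torch.double)
A = (A + A.t()) / 2                # make it symmetric
w, V = torch.linalg.eigh(A)        # stand-in for torch.symeig
grad_w = torch.ones_like(w)        # some upstream gradient on the eigenvalues
# grad_A = sum_n grad_w[n] * outer(V[:, n], V[:, n]); this is all zeros if
# V is the zero tensor returned by eigenvectors=False, hence the failing test.
grad_A = (V * grad_w) @ V.t()
```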
I think this is superseded by #8586; please reopen if I am incorrect.
This implements the symeig function for variables. I grabbed the gradient implementation directly from Theano, and I test it in this gist to show that it gets the same gradient as the Theano version and that it passes gradcheck. I've also used this in actual networks, so I'm fairly confident it's correct. I obviously still need to add proper tests, so any direction on that (and anything else) is much appreciated.