Fix contiguous AD and Autogradzero inconsistency #18633
Conversation
Force-pushed from bb43f7c to 657f44b
facebook-github-bot left a comment:
@wanchaol has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
The test change for AD support has been merged; would you mind rebasing and seeing if all tests pass? Thanks a lot!
test/test_jit.py
Outdated
def foo(x):
    return 3 + x.contiguous()

x = torch.rand(1, dtype=torch.float, requires_grad=True)
x is contiguous here; I think we want to test both the case where x is contiguous and the case where it isn't?
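For illustration, a minimal sketch of how both cases might be covered (the non-contiguous input is produced via a transpose; the names and shapes here are assumptions, not the PR's actual test):

```python
import torch

def foo(x):
    return 3 + x.contiguous()

# Contiguous input.
x_contig = torch.rand(2, 3, dtype=torch.float, requires_grad=True)
foo(x_contig).sum().backward()

# Non-contiguous input: a transposed 2-D view is not contiguous.
base = torch.rand(2, 3, dtype=torch.float, requires_grad=True)
x_noncontig = base.t()
assert not x_noncontig.is_contiguous()
foo(x_noncontig).sum().backward()
```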
ailzhang left a comment:
Thanks!! Please land if CI is green.
torch/csrc/jit/symbolic_script.cpp
Outdated
if self.is_contiguous():
    return grad_output
else:
    return grad_output.clone()
You should just return grad_output. If an op downstream needs it to be contiguous, it will do it on its own.
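A sketch of what the simplified backward could look like (the exact form of entries in symbolic_script.cpp may differ; this only illustrates the pass-through gradient the reviewer suggests):

```python
def contiguous(self):
    def backward(grad_output):
        # Pass the incoming gradient through unchanged; if a downstream op
        # needs a contiguous tensor, it will make it contiguous itself.
        return grad_output
    return self.contiguous(), backward
```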
I guess if we simply return grad_output, it should be fine to make the test simpler.
test/test_jit.py
Outdated
grad = torch.randn(5, 5, dtype=torch.float)
out.backward(grad)
self.assertEqual(x.grad, grad.transpose(1, 0))
Sorry, but what are we testing here? This should be covered as part of autograd tests.
We are testing the AD formula; I believe the autograd tests do not cover the AD formula.
In that case, please add an autograd test! It's a differentiable op just like any other.
Yep, just added them to the autograd tests :)
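For reference, a minimal sketch of an autograd-style test for contiguous using torch.autograd.gradcheck (the PR's actual test is not shown here; the shapes and the lambda-based setup are assumptions):

```python
import torch
from torch.autograd import gradcheck

# Double precision is recommended for gradcheck's numerical estimates.
x = torch.randn(4, 4, dtype=torch.double, requires_grad=True)

# Contiguous input: contiguous() should have an identity gradient.
assert gradcheck(lambda t: t.contiguous(), (x,))

# Non-contiguous input: transpose inside the function so the leaf stays a plain tensor.
assert gradcheck(lambda t: t.t().contiguous(), (x,))
```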
facebook-github-bot left a comment:
@wanchaol has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Fixed the AD formula to only return grad_output, and moved the test coverage into the autograd tests.
Fixes #17962