Add nn::functional::normalize() to C++ Frontend #27280
Conversation
@yf225 : This might be a silly question, but do I need to do anything additional to get the new files included in the build targets? I couldn't find hints from other pull requests unless they were .cpp modules, and trying to build the test binary gives me a not-in-namespace error. Edit: sorry, I missed this; I needed to add my headers to functional.h
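For readers hitting the same build error: a hedged sketch of the wiring being described, assuming the umbrella header torch/csrc/api/include/torch/nn/functional.h aggregates the per-topic functional headers (the exact include line is illustrative, taken from the file names this PR adds):

```cpp
// torch/csrc/api/include/torch/nn/functional.h (sketch, not verbatim)
// The umbrella header pulls in each per-topic functional header, so a
// new header-only file must be listed here before the test binary can
// see the symbols it declares.
#include <torch/nn/functional/normalization.h>
```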
Force-pushed from 9b6d5c1 to 646776b
yf225
left a comment
@gtamba Thanks a lot for the great work! My sincere apologies for the delay in reviewing as I was out of office last week. I left some minor comments in the PR, and we should be able to merge it very soon :D
@pytorchbot rebase this please
@yf225 : It seems that the bot has done a merge instead of a rebase. Should I reset the head to my initial commit before the merge, add my changes, and then rebase to hide the merge commit, or is that all right? (I'm assuming a small PR like this is expected to have one commit.)
@gtamba : It's totally alright, and I think we should work on top of the merge commit. At PR merge time, all commits in this PR will automatically be squashed into one commit, so we don't need to worry about the number of commits we make to this PR :D
@pytorchbot retest this please

@pytorchbot retest this please

@pytorchbot rebase this please
Force-pushed from 4d936a7 to 849fb5e
@gtamba I feel that we might need to add …
```cpp
inline Tensor normalize(
    const Tensor& input,
    const NormalizeOptions& options,
```
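For context on what this signature computes: `F::normalize` performs Lp normalization, `out = v / max(||v||_p, eps)`, so all-zero inputs divide by `eps` rather than by zero. A minimal sketch of that math using plain `std::vector` as a stand-in for `torch::Tensor` (the function name `lp_normalize` is mine, not the frontend's):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of Lp normalization over a flat vector:
//   out = v / max(||v||_p, eps)
// The real F::normalize additionally takes a `dim` to normalize over.
std::vector<double> lp_normalize(const std::vector<double>& v,
                                 double p = 2.0, double eps = 1e-12) {
  double norm = 0.0;
  for (double x : v) norm += std::pow(std::abs(x), p);
  norm = std::pow(norm, 1.0 / p);
  const double denom = std::max(norm, eps);  // eps guards zero vectors
  std::vector<double> out(v.size());
  for (std::size_t i = 0; i < v.size(); ++i) out[i] = v[i] / denom;
  return out;
}
```

L2-normalizing `{3, 4}` (norm 5) yields `{0.6, 0.8}`.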
When we confirm that `options = {}` doesn't cause the problem, I believe we should change it back to `options = {}`, and it would be awesome to have a test for the `F::normalize(input)` (i.e. no options) use case as well :D Thanks so much for your help @gtamba !
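The `options = {}` default being discussed relies on the options type being fully default-constructible. A hedged, self-contained sketch of that pattern; the field names mirror the Python `F.normalize` signature but are assumptions here, not the real frontend API (which uses TORCH_ARG-generated accessors rather than public fields):

```cpp
#include <cstdint>

// Hypothetical stand-in for the options pattern: every field has a
// default, so value-initialization via `{}` is a complete, usable
// configuration.
struct NormalizeOptions {
  double p = 2.0;        // order of the norm
  std::int64_t dim = 1;  // dimension along which to normalize
  double eps = 1e-12;    // lower bound on the denominator
};

// Defaulting the parameter to `{}` is what lets callers write
// F::normalize(input) with no explicit options object.
inline double selected_p(const NormalizeOptions& options = {}) {
  return options.p;
}
```

Calling `selected_p()` with no arguments picks up the defaulted `p = 2.0`, which is exactly the no-options use case the requested test would cover.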
@yf225 : Can I still make these changes? I notice that facebook-bot is trying to land the pull request, and I didn't add the test for no options yet
@gtamba : Yes sorry this is my fault - didn't notice this before trying to land yesterday :( I saw you updated the PR (thanks a lot :D) and I will try to land it again
I noticed that most other functions of the functional module also have a base NN Module, and I couldn't find one that has only options included this way without a .cpp counterpart declaration; I wonder if that could have something to do with it. But yeah, I'm confused by the error: it says it can't resolve the Options struct's internal variables, yet the Options struct is pretty much identical to any other options struct in functional/
@gtamba I think your assessment is correct, and I just pushed a commit to add …
facebook-github-bot
left a comment
@yf225 is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
yf225
left a comment
Thanks so much for the awesome work @gtamba!
facebook-github-bot
left a comment
@yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@gtamba Curious, are you currently working on the Dropout modules? Please let me know if there is anything I can do to make the implementation easier. Thanks a lot for your help!
@yf225 : Yes, I'm currently working on the dropout modules. It took me a while to get started due to other commitments, but I should have an initial PR up soon!
@gtamba Awesome, thanks so much for your help! Please keep me posted on how it goes :D
@yf225 : I'm sorry for the delayed update; some things came up and I could not work on this for a while. I noticed that #28424 has some relevant changes that might overlap with the rest of the dropout modules. I think I can follow the changes to DropoutOptions in that PR? (Or just wait for it to get merged :))
@yf225 : That's totally fine; it makes sense. If you're working on something else at the moment, though, I should have a draft PR soon, and it should wrap up shortly after the weekend (plus code review), but further iterations risk it taking too long. Feel free to hand me anything else. Again, I apologize for the delay.
@gtamba : Thanks so much for your help! Yes, in that case, if we can have it done over the next week or so, that would be really awesome (I can personally help with the iterations by making changes to the PR if necessary). Please keep me posted on how it goes :D Thanks again!
Summary:
Addresses pytorch#27048

PR Summary:
Files Added:
torch/csrc/api/include/torch/nn/options/normalization.h
torch/csrc/api/include/torch/nn/functional/normalization.h

Files Modified:
test/cpp/api/functional.cpp
torch/csrc/api/include/torch/nn/functional.h

@yf225 : I couldn't find a C++ equivalent of gradcheck(); is there such a function, or is it sufficient to call .backward() in the test body? I don't think any solutions are checked for the Python tests.

Pull Request resolved: pytorch#27280
Differential Revision: D17902109
Pulled By: yf225
fbshipit-source-id: 1bce1a88103d0f1848633fec90fde95ea8f3d1ed
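On the gradcheck() question: the thread names no drop-in C++ equivalent, but a hand-rolled finite-difference comparison is one hedged alternative for scalar functions. The helper names below (`numeric_grad`, `grad_matches`) are mine, not part of any frontend:

```cpp
#include <cassert>
#include <cmath>

// Central finite difference, accurate to O(h^2):
//   f'(x) ~= (f(x + h) - f(x - h)) / (2h)
template <typename F>
double numeric_grad(F f, double x, double h = 1e-5) {
  return (f(x + h) - f(x - h)) / (2.0 * h);
}

// gradcheck-style assertion: does the analytic gradient agree with
// the numeric estimate within a tolerance?
template <typename F>
bool grad_matches(F f, double analytic, double x, double tol = 1e-4) {
  return std::abs(numeric_grad(f, x) - analytic) < tol;
}
```

For example, d/dx of x² at x = 3 is 6, so `grad_matches([](double x) { return x * x; }, 6.0, 3.0)` holds; calling `.backward()` alone, by contrast, only exercises the autograd path without checking the values.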