Wrapping namespace Reduction in namespace at (#26606) #27422
Conversation
Branch force-pushed from da351de to 6fb7903.
yf225 left a comment
Thanks so much for the fix @divyanshsinghvi! I left some very minor comments regarding using torch:: in the C++ frontend.
test/test_cpp_api_parity.py
Outdated
For the C++ frontend we encourage using the torch:: namespace instead of at::, and it would be great to change all of the use sites here to torch::.
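For illustration, a minimal, hypothetical use site (not the actual parity-test code, which embeds similar C++ snippets as strings) showing the spelling this review asks for:

```cpp
// Hypothetical fragment, assuming the PyTorch C++ API is available via <torch/torch.h>.
#include <torch/torch.h>
#include <cstdint>

int main() {
  // ATen-style spelling the review asks to move away from in C++ frontend code:
  int64_t before = at::Reduction::Mean;
  // Preferred C++ frontend spelling:
  int64_t after = torch::Reduction::Mean;
  return (before == after) ? 0 : 1;  // both name the same enumerator
}
```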
Ditto in this file regarding changing at:: to torch::.
I may be wrong in my understanding, but I was amazed that the code would work with torch:: instead of at::. So I tried to dig deeper and found the types.h header, which wraps everything in the torch namespace; with the statement using namespace at there, the at:: prefix is no longer needed. Please correct me if my understanding is wrong.
Your understanding is 100% correct. :D For the C++ frontend we only support using the torch:: namespace, as using at:: could get users into a lot of Variable vs. Tensor troubles.
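For reference, a minimal, self-contained sketch (not the actual PyTorch headers) of the mechanism described above: the Reduction namespace is wrapped in at, and a using-directive inside the torch namespace (in the style of torch/types.h) makes the same symbols reachable as torch::Reduction:

```cpp
#include <cstdint>

// Sketch of what this PR does in ATen: wrap namespace Reduction in namespace at.
namespace at {
namespace Reduction {
enum Reduction : int64_t {
  None,  // do not reduce
  Mean,  // mean of losses
  Sum,   // sum of losses
  END
};
} // namespace Reduction
} // namespace at

// Sketch of the torch/types.h-style re-export: everything in at:: becomes
// visible under torch::, so C++ frontend code can write torch::Reduction.
namespace torch {
using namespace at;
} // namespace torch

int main() {
  // Both spellings resolve to the same enumerator after the re-export.
  static_assert(torch::Reduction::Mean == at::Reduction::Mean, "same symbol");
  return 0;
}
```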
@ailzhang The XLA build seems to fail because we are moving Reduction into the at namespace.
NOTE: if #27345 is landed first, we will need to update the new occurrence of Reduction:: it introduces as well.
Branch force-pushed from 6fb7903 to c23d8fd.
@pytorchbot rebase this please
@divyanshsinghvi #27345 was recently landed and it has another occurrence of Reduction::.
Yes, this PR includes changes for this.
@divyanshsinghvi Awesome, thanks a lot! After CI passes I will work with @ailzhang to land this PR and fix the XLA build at the same time. :)
@yf225 I think the two failures are now because of XLA. In my view, these three files would require changes in the XLA repository.
@divyanshsinghvi Yes, I think so - I just opened a PR to the XLA repo: pytorch/xla#1176, and we will merge it immediately after merging this PR to unbreak the XLA build.
facebook-github-bot left a comment
@yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
yf225 left a comment
@divyanshsinghvi Thanks so much for the awesome work!
I will merge this PR after internal CI passes.
facebook-github-bot left a comment
@yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
test/cpp/api/modules.cpp
Outdated
@yf225 Why is it at:: instead of torch::?
@divyanshsinghvi thanks a lot for the catch! I changed it to torch::
1) Wrapped namespace Reduction in namespace at 2) Prefixed at:: wherever Reduction:: is used except the C++ frontend 3) Prefixed torch:: in the C++ frontend
This reverts commit 60435b0.
@divyanshsinghvi It seems that some unrelated commits were accidentally added to this PR branch, so I cleaned them up and rebased the PR on top of the current master.
facebook-github-bot left a comment
@yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Yes, thanks for cleaning them up.
facebook-github-bot left a comment
@yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: 1) Wrapped namespace `Reduction` in namespace `at` 2) Prefixed `at::` wherever `Reduction::` is used
Pull Request resolved: pytorch/pytorch#27422
Differential Revision: D17913759
Pulled By: yf225
fbshipit-source-id: 8f00ca01cad2e7f673d316b128abf59c026e216c