Conversation

@divyanshsinghvi
Contributor

@divyanshsinghvi divyanshsinghvi commented Oct 5, 2019

  1. Wrapped namespace Reduction in namespace at
  2. Prefixed at:: wherever Reduction:: is used

@pytorchbot pytorchbot added oncall: jit Add this issue/PR to JIT oncall triage queue module: cpp Related to C++ API module: cpu CPU specific problem (e.g., perf, algorithm) module: cuda Related to torch.cuda, and CUDA support in general module: internals Related to internal abstractions in c10 and ATen module: operators labels Oct 5, 2019
@divyanshsinghvi divyanshsinghvi changed the title Wrapping namespace Reduction in namespace at(#26606) Wrapping namespace Reduction in namespace at (#26606) Oct 5, 2019
@divyanshsinghvi divyanshsinghvi marked this pull request as ready for review October 5, 2019 14:44
@yf225 yf225 requested a review from li-roy October 7, 2019 04:51
Contributor

@yf225 yf225 left a comment

Thanks so much for the fix @divyanshsinghvi! I left some very minor comments regarding using torch:: in C++ frontend.

Contributor

For C++ frontend we encourage using torch:: namespace instead of at::, and it would be great to change all of the use sites here to torch::.

Contributor

Ditto in this file regarding changing at:: to torch::.

Contributor Author

@divyanshsinghvi divyanshsinghvi Oct 8, 2019

I may be wrong here, but I was surprised that the code works with torch:: instead of at::. Digging deeper, I found the types.h header, which opens the torch namespace and, via a `using namespace at` statement, removes the need for the at:: prefix. Please correct me if my understanding is wrong.

Contributor

@yf225 yf225 Oct 8, 2019

Your understanding is 100% correct. :D For the C++ frontend we only support using torch:: namespace, as using at:: could get users into a lot of Variable vs. Tensor troubles.

@yf225
Contributor

yf225 commented Oct 7, 2019

@ailzhang XLA build seems to fail with

Oct 05 20:02:56 torch_xla/csrc/tensor_ops.cpp:74:23: error: use of undeclared identifier 'Reduction'; did you mean 'at::Reduction'?
Oct 05 20:02:56   return reduction == Reduction::Mean
Oct 05 20:02:56                       ^~~~~~~~~
Oct 05 20:02:56                       at::Reduction
Oct 05 20:02:56 /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/Reduction.h:4:11: note: 'at::Reduction' declared here
Oct 05 20:02:56 namespace Reduction {
Oct 05 20:02:56           ^
Oct 05 20:02:56 torch_xla/csrc/tensor_ops.cpp:107:10: error: use of undeclared identifier 'Reduction'; did you mean 'at::Reduction'?
Oct 05 20:02:56     case Reduction::None:
Oct 05 20:02:56          ^~~~~~~~~
Oct 05 20:02:56          at::Reduction
Oct 05 20:02:56 /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/Reduction.h:4:11: note: 'at::Reduction' declared here
Oct 05 20:02:56 namespace Reduction {
Oct 05 20:02:56           ^
Oct 05 20:02:56 torch_xla/csrc/tensor_ops.cpp:109:10: error: use of undeclared identifier 'Reduction'; did you mean 'at::Reduction'?
Oct 05 20:02:56     case Reduction::Mean:
Oct 05 20:02:56          ^~~~~~~~~
Oct 05 20:02:56          at::Reduction
Oct 05 20:02:56 /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/Reduction.h:4:11: note: 'at::Reduction' declared here
Oct 05 20:02:56 namespace Reduction {
Oct 05 20:02:56           ^
Oct 05 20:02:56 torch_xla/csrc/tensor_ops.cpp:112:10: error: use of undeclared identifier 'Reduction'; did you mean 'at::Reduction'?
Oct 05 20:02:56     case Reduction::Sum:
Oct 05 20:02:56          ^~~~~~~~~
Oct 05 20:02:56          at::Reduction
Oct 05 20:02:56 /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/Reduction.h:4:11: note: 'at::Reduction' declared here
Oct 05 20:02:56 namespace Reduction {
Oct 05 20:02:56           ^
Oct 05 20:02:56 torch_xla/csrc/tensor_ops.cpp:142:10: error: use of undeclared identifier 'Reduction'; did you mean 'at::Reduction'?
Oct 05 20:02:56     case Reduction::None:
Oct 05 20:02:56          ^~~~~~~~~
Oct 05 20:02:56          at::Reduction
Oct 05 20:02:56 /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/Reduction.h:4:11: note: 'at::Reduction' declared here
Oct 05 20:02:56 namespace Reduction {
Oct 05 20:02:56           ^
Oct 05 20:02:56 torch_xla/csrc/tensor_ops.cpp:143:10: error: use of undeclared identifier 'Reduction'; did you mean 'at::Reduction'?
Oct 05 20:02:56     case Reduction::Sum:
Oct 05 20:02:56          ^~~~~~~~~
Oct 05 20:02:56          at::Reduction
Oct 05 20:02:57 /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/Reduction.h:4:11: note: 'at::Reduction' declared here
Oct 05 20:02:57 namespace Reduction {
Oct 05 20:02:57           ^
Oct 05 20:02:57 torch_xla/csrc/tensor_ops.cpp:145:10: error: use of undeclared identifier 'Reduction'; did you mean 'at::Reduction'?
Oct 05 20:02:57     case Reduction::Mean: {
Oct 05 20:02:57          ^~~~~~~~~
Oct 05 20:02:57          at::Reduction
Oct 05 20:02:57 /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/Reduction.h:4:11: note: 'at::Reduction' declared here
Oct 05 20:02:57 namespace Reduction {
Oct 05 20:02:57           ^
Oct 05 20:03:02 7 errors generated.

because we are moving Reduction into the at:: namespace. Do you think we should issue a quick fix to XLA after merging this PR, or should we first land a bridging PR that aliases at::Reduction to Reduction, and remove the alias after XLA is migrated?

@yf225
Contributor

yf225 commented Oct 7, 2019

NOTE: if #27345 is landed first, we will need to change Reduction usage in torch::nn::CosineEmbeddingLoss as well.

@yf225
Contributor

yf225 commented Oct 8, 2019

@pytorchbot rebase this please

@yf225
Contributor

yf225 commented Oct 8, 2019

@divyanshsinghvi #27345 recently landed, and it has another occurrence of Reduction (in torch/csrc/api/include/torch/nn/options/loss.h) that we might need to change as well. Thanks so much for your help!

@divyanshsinghvi
Contributor Author

> @divyanshsinghvi #27345 recently landed, and it has another occurrence of Reduction (in torch/csrc/api/include/torch/nn/options/loss.h) that we might need to change as well. Thanks so much for your help!

Yes, this PR includes changes for this.

@yf225
Contributor

yf225 commented Oct 8, 2019

@divyanshsinghvi Awesome thanks a lot! After CI passes I will work with @ailzhang to land this PR and also fix XLA build at the same time. :)

@divyanshsinghvi
Contributor Author

@yf225 I think the two failures are now caused by XLA. As far as I can tell, these three files in the XLA repository would require changes:
test/cpp/test_aten_xla_tensor.cpp
torch_xla/csrc/aten_xla_type.cpp
torch_xla/csrc/tensor_ops.cpp

@yf225
Contributor

yf225 commented Oct 9, 2019

@divyanshsinghvi Yes I think so - I just opened a PR to the XLA repo: pytorch/xla#1176, and we will merge that immediately after merging this PR to unbreak XLA build.

@divyanshsinghvi divyanshsinghvi requested a review from yf225 October 11, 2019 22:30
Contributor

@facebook-github-bot facebook-github-bot left a comment

@yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Contributor

@yf225 yf225 left a comment

@divyanshsinghvi Thanks so much for the awesome work!

@yf225
Contributor

yf225 commented Oct 14, 2019

I will merge this PR after internal CI passes.

Contributor

@facebook-github-bot facebook-github-bot left a comment

@yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Contributor Author

@divyanshsinghvi divyanshsinghvi Oct 14, 2019

@yf225 why is it at:: instead of torch::?

Contributor

@divyanshsinghvi thanks a lot for the catch! I changed it to torch::.

divyanshsinghvi and others added 6 commits October 14, 2019 21:03
1) Wrapped namespace Reduction in namespace at
2) Prefixed at:: wherever Reduction:: is used except the C++ frontend
3) Prefixed torch:: in the C++ frontend
@yf225
Contributor

yf225 commented Oct 15, 2019

@divyanshsinghvi It seems that some unrelated commits were accidentally added to this PR branch, so I cleaned them up and rebased the PR on top of the current master.

Contributor

@facebook-github-bot facebook-github-bot left a comment

@yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@divyanshsinghvi
Contributor Author

> @divyanshsinghvi It seems that some unrelated commits were accidentally added to this PR branch, so I cleaned them up and rebased the PR on top of the current master.

Yes, thanks for cleaning them up.

Contributor

@facebook-github-bot facebook-github-bot left a comment

@yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Oct 15, 2019
Summary:
1) Wrapped namespace `Reduction` in namespace `at`
2) Prefixed `at::` wherever `Reduction::` is used
Pull Request resolved: pytorch/pytorch#27422

Differential Revision: D17913759

Pulled By: yf225

fbshipit-source-id: 8f00ca01cad2e7f673d316b128abf59c026e216c
@divyanshsinghvi divyanshsinghvi deleted the Reduction branch October 15, 2019 21:08
@facebook-github-bot
Contributor

@yf225 merged this pull request in 3397d41.

thiagocrepaldi pushed a commit to thiagocrepaldi/pytorch that referenced this pull request Feb 4, 2020
…#27422)

Summary:
1) Wrapped namespace `Reduction` in namespace `at`
2) Prefixed `at::` wherever `Reduction::` is used
Pull Request resolved: pytorch#27422

Differential Revision: D17913759

Pulled By: yf225

fbshipit-source-id: 8f00ca01cad2e7f673d316b128abf59c026e216c
