Conversation

@XiaobingSuper (Collaborator) commented Apr 23, 2020

Stack from ghstack:

Differential Revision: D22440968

[ghstack-poisoned]
dr-ci bot commented Apr 23, 2020

💊 CI failures summary and remediations

As of commit 756047b (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@zhangguanheng66 added the module: internals (Related to internal abstractions in c10 and ATen) and triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) labels on Apr 23, 2020
@zhangguanheng66 requested a review from albanD on April 23, 2020 at 22:00
@albanD (Collaborator) left a comment

Looks good, small comments only.

Tensor mkldnn_adaptive_avg_pool2d_backward(
    const Tensor& grad_output,
    const Tensor& input) {
  TORCH_CHECK(input.dim() == 4, "mkldnn_adaptive_avg_pool2d_backward: Expect 2D input");
@albanD (Collaborator) commented:

nit: maybe make the error message slightly more explicit that a 4D Tensor is expected.
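For illustration only, a sketch of a slightly more explicit message along those lines (the exact wording here is an assumption, not what was committed); TORCH_CHECK concatenates its trailing arguments into the error message:

  // Hypothetical wording; only the message text differs from the hunk above.
  TORCH_CHECK(
      input.dim() == 4,
      "mkldnn_adaptive_avg_pool2d_backward: Expected a 4D input tensor, but got a ",
      input.dim(), "D tensor instead");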

@XiaobingSuper (Collaborator, PR author) replied:

@albanD, this file uses AT_ASSERTM in many places. Is there any difference from TORCH_CHECK? Since AT_ASSERTM will be deprecated, maybe we can replace them.

@albanD (Collaborator) replied:

Yes, you can replace all the AT_ASSERT with TORCH_CHECK.
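As a rough, hypothetical sketch of what that mechanical swap looks like (the helper function below is invented for illustration; only the macros come from the codebase):

  #include <ATen/ATen.h>            // at::Tensor
  #include <c10/util/Exception.h>   // TORCH_CHECK

  // Hypothetical helper, only to show the AT_ASSERTM -> TORCH_CHECK replacement.
  void check_is_4d(const at::Tensor& input) {
    // Old style, slated for deprecation:
    //   AT_ASSERTM(input.dim() == 4, "expected a 4D tensor");
    // New style: a boolean condition followed by message fragments that are
    // concatenated into the error raised when the condition is false.
    TORCH_CHECK(input.dim() == 4, "expected a 4D tensor, got ", input.dim(), "D");
  }

Both macros throw on failure; the practical difference here is that AT_ASSERTM is being deprecated in favour of TORCH_CHECK for user-facing argument checks.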

// adjust padding until output sizes agree
bool all_equal = false;
std::vector<int64_t> output_sizes;
while (!all_equal) {
@albanD (Collaborator) commented:

Given that all the ops end up calling into this, how expensive is it? Should there be a warning in the docs that ceil_mode might be inefficient for mkldnn Tensors?
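For a sense of what that loop does, here is a simplified, standalone sketch (the names and the exact size formula are assumptions, not the PR's code): MKL-DNN pooling has no ceil_mode flag, so the ceil-mode output size is reproduced by growing the right/bottom padding until the floor-mode size matches, and each dimension needs fewer increments than one stride, so the loop stays short.

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // Pooling output length along one spatial dimension.
  int64_t pool_out_len(int64_t in, int64_t kernel, int64_t pad_l, int64_t pad_r,
                       int64_t stride, bool ceil) {
    int64_t num = in + pad_l + pad_r - kernel;  // assumed non-negative for valid params
    return (ceil ? (num + stride - 1) / stride : num / stride) + 1;
  }

  // Grow right-side padding until the floor-mode sizes match the ceil-mode targets.
  void adjust_padding_for_ceil(const std::vector<int64_t>& in,
                               const std::vector<int64_t>& kernel,
                               const std::vector<int64_t>& pad_l,
                               std::vector<int64_t>& pad_r,
                               const std::vector<int64_t>& stride) {
    std::vector<int64_t> target(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
      target[i] = pool_out_len(in[i], kernel[i], pad_l[i], pad_r[i], stride[i], /*ceil=*/true);

    bool all_equal = false;
    std::vector<int64_t> output_sizes(in.size());
    while (!all_equal) {  // each pass grows pad_r by at most 1 per dim, so it terminates
      all_equal = true;
      for (std::size_t i = 0; i < in.size(); ++i) {
        output_sizes[i] =
            pool_out_len(in[i], kernel[i], pad_l[i], pad_r[i], stride[i], /*ceil=*/false);
        if (output_sizes[i] < target[i]) {
          ++pad_r[i];  // pad a little more on the right/bottom side
          all_equal = false;
        }
      }
    }
  }

Under these assumptions the adjustment is only a handful of integer operations per dimension per call.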

    bool ceil_mode) {
-  AT_ERROR(
-      "mkldnn_max_pool2d: ATen not compiled with MKLDNN support");
+  TORCH_CHECK(false, "mkldnn_max_pool2d: ATen not compiled with MKLDNN support");
@albanD (Collaborator) commented:

Thanks for taking the time to fix these.

qiuxin2012 pushed a commit to qiuxin2012/pytorch that referenced this pull request Jul 27, 2020
ghstack-source-id: 962b825
Pull Request resolved: pytorch#37146
@facebook-github-bot (Contributor) commented:

Hi @XiaobingSuper!

Thank you for your pull request. We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but we do not have a signature on file.

In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@facebook-github-bot facebook-github-bot deleted the gh/xiaobingsuper/14/head branch January 15, 2021 15:17

Labels

cla signed, module: internals, open source, triaged

6 participants