@XiaobingSuper XiaobingSuper commented May 16, 2019

mkldnn backward ops list:

@pytorchbot pytorchbot added module: autograd Related to torch.autograd, and the autograd engine in general module: mkldnn Related to Intel IDEEP or oneDNN (a.k.a. mkldnn) integration module: operators labels May 16, 2019
@li-roy li-roy added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label May 16, 2019
facebook-github-bot pushed a commit that referenced this pull request Jun 13, 2019
Summary:
### mkldnn backward ops list:
 - [ ] (#20567) Add aten mkldnn conv2d backward operator 💛
 - [ ] (#20570) Add aten mkldnn backward ops: relu, linear and reshape 💛
 - [ ] (#20571) Add aten mkldnn backward ops: max_pool2d, avg_pool2d and adaptive_avg_pool2d 💛
 - [ ] (#20572) Add aten mkldnn batchnorm backward operator 💛
 - [ ] (#20573) Add aten mkldnn zero_ operator 💛
 - [ ] (#20575) Add mkldnn mul operator 💛
Pull Request resolved: #20575

Differential Revision: D15799529

Pulled By: bddppq

fbshipit-source-id: 4887d8ef1a0e316ad9db199b657d9481fc13e486
zdevito pushed a commit to zdevito/ATen that referenced this pull request Jun 13, 2019
@gottbrath gottbrath requested review from bddppq and dzhulgakov June 13, 2019 17:10
facebook-github-bot pushed a commit that referenced this pull request Jun 14, 2019
Summary:
### mkldnn backward ops list:
 - [ ] (#20567) Add aten mkldnn conv2d backward operator 💛
 - [ ] (#20570) Add aten mkldnn backward ops: relu, linear and reshape 💛
 - [ ] (#20571) Add aten mkldnn backward ops: max_pool2d, avg_pool2d and adaptive_avg_pool2d 💛
 - [ ] (#20572) Add aten mkldnn batchnorm backward operator 💛
 - [ ] (#20573) Add aten mkldnn zero_ operator 💛
 - [ ] (#20575) Add mkldnn mul operator 💚
Pull Request resolved: #20573

Differential Revision: D15820477

Pulled By: bddppq

fbshipit-source-id: 35d95f5b4e013c8db1911f52148550a2e40a2e68
zdevito pushed a commit to zdevito/ATen that referenced this pull request Jun 14, 2019
@gottbrath gottbrath requested a review from gchanan June 25, 2019 16:22
@XiaobingSuper XiaobingSuper force-pushed the mkldnn_bn_bwd branch 2 times, most recently from 5da8a09 to 671c018 on July 18, 2019 05:12
@VitalyFedyunin VitalyFedyunin self-requested a review October 21, 2019 16:09
@VitalyFedyunin
Contributor

Can you please rebase on top of master?

@XiaobingSuper XiaobingSuper force-pushed the mkldnn_bn_bwd branch 2 times, most recently from ce582c3 to ded344f on October 25, 2019 04:13
@XiaobingSuper
Collaborator Author

@VitalyFedyunin

@XiaobingSuper
Collaborator Author

@VitalyFedyunin, I changed the code according to your suggestions, thanks!

@XiaobingSuper
Collaborator Author

@VitalyFedyunin, the two failed cases are about the CUDA build and are not caused by these changes; please help take a look, thanks!

@XiaobingSuper XiaobingSuper force-pushed the mkldnn_bn_bwd branch 2 times, most recently from 359fae7 to 24bd4d0 on October 29, 2019 06:47
@XiaobingSuper
Collaborator Author

@VitalyFedyunin


bool train,
double momentum,
double eps) {
TORCH_CHECK(input.dim() == 4 || input.dim() == 5,
Contributor


No test case for 3d

Collaborator Author


3D operators will be enabled in a next step, including conv3d, pool3d, and other mkldnn operators.
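The TORCH_CHECK in the snippet above restricts mkldnn batch norm to 4-D (NCHW) or 5-D (NCDHW) inputs, which is why there is no 3-D test case yet. A plain-Python mirror of that guard (a hypothetical helper for illustration, not code from the PR):

```python
def check_bn_input_dim(ndim: int) -> None:
    # Mirrors the TORCH_CHECK above: mkldnn batch norm currently
    # accepts only 4-D (NCHW) or 5-D (NCDHW) inputs.
    if ndim not in (4, 5):
        raise ValueError(
            f"mkldnn batch norm: expected a 4D or 5D input, got {ndim}D"
        )
```

A 3-D (NCW) input would be rejected until the 3-D operators mentioned above are enabled.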

@VitalyFedyunin
Contributor

Do you have specific subset of operators in mind to add mkldnn backwards support?

@XiaobingSuper
Collaborator Author

For now, we want to enable the conv, pool, batchnorm, linear, relu, and reshape operators, which are enough to run ResNet and ResNeXt models, and then add other operators based on user requests, such as 3D operators for UNet, and layernorm and concat operators for transformer models. Thanks!
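For reference, the batchnorm backward computation this PR adds can be sketched in plain NumPy. This is an illustrative reference for the math only (training mode, per-channel statistics over the batch axis, 2-D input for simplicity), not the mkldnn kernel itself:

```python
import numpy as np

def batchnorm_backward(dy, x, gamma, eps=1e-5):
    # Reference batch-norm backward for x, dy of shape (N, C), gamma of shape (C,).
    # Uses biased variance (ddof=0), matching training-mode batch statistics.
    N = x.shape[0]
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    inv_std = 1.0 / np.sqrt(var + eps)
    xhat = (x - mean) * inv_std          # normalized input
    dbeta = dy.sum(axis=0)               # grad of the shift
    dgamma = (dy * xhat).sum(axis=0)     # grad of the scale
    # Standard closed form for the input gradient:
    dx = (gamma * inv_std / N) * (N * dy - dbeta - xhat * dgamma)
    return dx, dgamma, dbeta
```

The closed form for `dx` folds the gradients through the batch mean and variance into a single expression, which is also how fused kernels typically compute it.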

@XiaobingSuper
Collaborator Author

@VitalyFedyunin, can you help merge this PR? Thanks!
