Add aten mkldnn backward ops: max_pool2d, avg_pool2d and adaptive_av… #20571
Conversation
Summary:

### mkldnn backward ops list:
- [ ] (#20567) Add aten mkldnn conv2d backward operator 💛
- [ ] (#20570) Add aten mkldnn backward ops: relu, linear and reshape 💛
- [ ] (#20571) Add aten mkldnn backward ops: max_pool2d, avg_pool2d and adaptive_avg_pool2d 💛
- [ ] (#20572) Add aten mkldnn batchnorm backward operator 💛
- [ ] (#20573) Add aten mkldnn zero_ operator 💛
- [ ] (#20575) Add mkldnn mul operator 💛

Pull Request resolved: #20575
Differential Revision: D15799529
Pulled By: bddppq
fbshipit-source-id: 4887d8ef1a0e316ad9db199b657d9481fc13e486
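The avg_pool2d backward op in the list above has a simple gradient rule: each output gradient is distributed uniformly over the k×k input window that produced it. A minimal pure-Python sketch of that rule, for intuition only (function name, the square-kernel and no-padding assumptions are mine; the PR's actual implementation dispatches to MKL-DNN primitives in C++):

```python
def avg_pool2d_backward(grad_out, in_h, in_w, k, stride):
    """Sketch: spread each output gradient equally over its k x k window.

    Assumes no padding and windows that fit inside the input.
    """
    out_h, out_w = len(grad_out), len(grad_out[0])
    grad_in = [[0.0] * in_w for _ in range(in_h)]
    for oy in range(out_h):
        for ox in range(out_w):
            g = grad_out[oy][ox] / (k * k)  # each of the k*k inputs gets an equal share
            for dy in range(k):
                for dx in range(k):
                    grad_in[oy * stride + dy][ox * stride + dx] += g
    return grad_in
```

Overlapping windows (stride < k) are handled naturally by the `+=` accumulation.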
Summary:

### mkldnn backward ops list:
- [ ] (#20567) Add aten mkldnn conv2d backward operator 💛
- [ ] (#20570) Add aten mkldnn backward ops: relu, linear and reshape 💛
- [ ] (#20571) Add aten mkldnn backward ops: max_pool2d, avg_pool2d and adaptive_avg_pool2d 💛
- [ ] (#20572) Add aten mkldnn batchnorm backward operator 💛
- [ ] (#20573) Add aten mkldnn zero_ operator 💛
- [ ] (#20575) Add mkldnn mul operator 💚

Pull Request resolved: #20573
Differential Revision: D15820477
Pulled By: bddppq
fbshipit-source-id: 35d95f5b4e013c8db1911f52148550a2e40a2e68
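The max_pool2d backward op from the same list follows a different rule: the whole output gradient is routed to the single input position that won the max in the forward pass. A hedged pure-Python sketch (names and assumptions are mine; real implementations record the argmax indices in the forward pass rather than recomputing them as done here for clarity):

```python
def max_pool2d_backward(x, grad_out, k, stride):
    """Sketch: route each output gradient to the argmax of its k x k window.

    Assumes no padding; ties go to the first (row-major) maximum,
    matching a simple recomputed-argmax convention.
    """
    out_h, out_w = len(grad_out), len(grad_out[0])
    grad_in = [[0.0] * len(x[0]) for _ in range(len(x))]
    for oy in range(out_h):
        for ox in range(out_w):
            by, bx = oy * stride, ox * stride  # current argmax candidate
            for dy in range(k):
                for dx in range(k):
                    iy, ix = oy * stride + dy, ox * stride + dx
                    if x[iy][ix] > x[by][bx]:
                        by, bx = iy, ix
            grad_in[by][bx] += grad_out[oy][ox]  # only the max contributes
    return grad_in
```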
@gottbrath @bddppq @dzhulgakov @wesolwsk @VitalyFedyunin @ezyang This batch of PRs enables MKL-DNN for the training path and is expected to speed up training of models like resnext101 by 2x on CPUs with AVX512 (AVX2 also benefits). With more DL-accelerating instructions such as bfloat16 on the CPU roadmap, we think it is important to accelerate the PyTorch training path as we do for the inference path, so the community gets the benefits.
Hey, @XiaobingSuper - is it possible to rebase these PRs on top of current master and make sure the CI passes?
My research group is really interested in this. We are using PyTorch on CPUs because our research has extreme memory requirements, and TensorFlow has a limited tensor size whereas PyTorch does not.
Hi, our group is interested in this PR to improve CPU training in our large-scale probabilistic programming work.
I'd also like to voice support: we need these optimizations for the PyTorch we run in support of large-scale science at the NERSC supercomputing center. We are currently blocked from using the more recent PyTorch 1.2 and 1.3 for some projects because of this. Thanks!
@dariogarcia, @gbaydin, @wbhimji, thank you for your interest in our work. I will rebase the code after getting the response from the Facebook team.
We at SURF (the national supercomputing facility in the Netherlands) are also really interested in better MKL support in PyTorch, in our case specifically for 3D operations.
VitalyFedyunin
left a comment
Overall looks good; can you please rebase for final tests? Also, what is the plan for renaming functions to onednn?
@VitalyFedyunin, the renaming will be done in the next step, which will upgrade DNNL to v1.4, so I will rebase those PRs that are still using the old name. Thanks!