upgrade mkldnn-bridge #20569
Conversation
The failure is not related to this change.

@yinghai Thanks a lot.

We need to update internal ideep before accepting.

(force-pushed from 7fec349 to be814f0)

rebased to latest code base
facebook-github-bot
left a comment
@VitalyFedyunin has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
We have run into this error when compiling mkl-dnn v0.19 from source:

@bddppq It looks like the MKL version differs. Could you check which MKL version is on your machine?

@gujinghui It's 2017.0.098.

Should be fixed. Please try again. Thanks.

@gujinghui Where did you put the fix? The release tag v0.19 in the mkl-dnn repo is still pointing to the same commit.
cmake/Modules/FindMKLDNN.cmake (outdated)

@bddppq
Disabled this flag for MKLDNN, which caused the build failure. This flag no longer has any impact on PyTorch.
Does disabling MKL have perf implications on MKLDNN?
According to the table below from MKLDNN, there should be no performance change. In any case, we use the jit implementation.
/* USE_MKL  USE_CBLAS  effect
 * -------  ---------  ------
 * yes      yes        use Intel(R) MKL CBLAS
 * yes      no         use jit
 * no       yes        system-dependent CBLAS
 * no       no         use jit
 */
(force-pushed from f47c1ce to a453a3c)

@pytorchbot rebase this please
facebook-github-bot
left a comment
@bddppq has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@gujinghui We have built mkldnn v0.19 from source (with

1. Reduce the overhead of mkldnn-bridge itself
2. Remove redundant code and useless APIs
3. Provide new operators, including int8 inner_product, nD permute/transpose, ele_add/mul, etc.
4. Improve inner_product to support io format weights without implicit reorder
5. Add softmax support

Signed-off-by: Gu, Jinghui <[email protected]>
(force-pushed from d826e46 to bc1d9ee)

@bddppq Regarding the compatibility issue between v0.19 and old MKL, we're working on it now. Thanks a lot.

@gujinghui Thanks!

Yes, the trivial change can be seen in ideep in this PR.
facebook-github-bot
left a comment
@bddppq has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary:
1. Reduce the overhead of mkldnn-bridge itself
2. Remove redundant code and useless APIs
3. Provide new operators, including int8 inner_product, nD permute/transpose, elem_add/mul, etc.
4. Improve inner_product to support io format weights without implicit reorder
5. Add SoftMax support

Pull Request resolved: pytorch/pytorch#20569
Reviewed By: houseroad
Differential Revision: D15558663
Pulled By: bddppq
fbshipit-source-id: 79a63aa139037924e9ffb1069f7e7f1d334efe3a