
Conversation

@jgong5 (Collaborator) commented Aug 2, 2018

Optimize the max_pooling operation for the inference path by passing the "inference" flag to the underlying MKL-DNN primitive, which saves the computation and storage of max indices that are only needed for training. To keep the API compatible, training mode remains the default; inference mode is enabled in the optimizeForIdeep path.
Tests show a speed-up of up to 7X for a single max_pooling operation on BDW (Broadwell).

// Select the MKL-DNN propagation kind: forward_training computes and stores
// max indices (needed only for the backward pass); forward_inference skips them.
// (training_mode_ stands for the operator's "training_mode" flag.)
auto pk = training_mode_ ? mkldnn::prop_kind::forward_training
                         : mkldnn::prop_kind::forward_inference;

ideep::pooling_forward::compute(X, Y_dims, *Y,
    stride_, kernel_, pad_tl(), pad_br(), algo_, pk);
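
For context, here is a minimal sketch of what the optimizeForIdeep rewrite conceptually does for pooling: it marks each MaxPool op as inference-only so the primitive above is created with forward_inference. The "training_mode" argument name follows the description above, but the helper itself and its wiring are illustrative assumptions, not the PR's actual pass.

#include "caffe2/proto/caffe2.pb.h"

// Hypothetical helper: flag every MaxPool op in a NetDef as inference-only.
void MarkPoolingForInference(caffe2::NetDef* net) {
  for (auto& op : *net->mutable_op()) {
    if (op.type() == "MaxPool") {
      auto* arg = op.add_arg();   // argument name assumed from the description
      arg->set_name("training_mode");
      arg->set_i(0);              // 0 = inference: skip max-index computation
    }
  }
}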


@yinghai yinghai added the caffe2 label Aug 2, 2018
@jgong5 (Collaborator, Author) commented Aug 10, 2018

@yinghai OK to merge?

@facebook-github-bot (Contributor) commented

yinghai has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018
…10156)

Summary:
Optimize the max_pooling operation for the inference path by passing the "inference" flag to the underlying MKL-DNN primitive, which saves the computation and storage of max indices that are only needed for training. To keep the API compatible, training mode remains the default; inference mode is enabled in the optimizeForIdeep path.
Tests show a speed-up of up to 7X for a single max_pooling operation on BDW (Broadwell).
Pull Request resolved: pytorch#10156

Differential Revision: D9276755

Pulled By: yinghai

fbshipit-source-id: ad533d53aabb8ccb3b592da984d6269d9b794a8a
