
Conversation

@gchanan (Contributor) commented Jan 24, 2019

Fixes #16018.

Backwards appears to be fine because the derivative is written in terms of mkldnn_convolution itself.
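For illustration, a minimal Python reproduction of the kind of case this targets might look like the sketch below (shapes and tolerance are assumptions, not taken from the PR); it uses float32 CPU tensors so that the MKL-DNN path can be selected:

```python
import torch
import torch.nn.functional as F

# float32 CPU tensors so that the MKL-DNN convolution path can be selected.
x = torch.randn(1, 3, 8, 8, dtype=torch.float32)

# Build a non-contiguous weight by transposing the last two (equal-sized)
# kernel dimensions of a contiguous tensor.
w = torch.randn(4, 3, 3, 3, dtype=torch.float32).transpose(2, 3)
assert not w.is_contiguous()

# The output with the non-contiguous weight should match the output
# obtained after making the weight contiguous explicitly.
out = F.conv2d(x, w)
ref = F.conv2d(x, w.contiguous())
print(torch.allclose(out, ref, atol=1e-6))
```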

@facebook-github-bot (Contributor) left a comment

@gchanan has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@dzhulgakov (Collaborator) commented

I wonder if there's a generic way to test for non-contiguous inputs. Do we do it currently for generically written tests in test_nn.py?

@gchanan (Contributor, Author) commented Jan 24, 2019

@dzhulgakov we only test non-contiguous inputs to NN modules, but that didn't actually matter in this case: there was already an explicit test for non-contiguous convolution weights (although not biases; I added a test for that), but it didn't exercise mkldnn because we only use mkldnn for float32 and we run the autograd tests in float64...

A test revamp would at least check float32 as well (and if it made sense, direct calls to different convolution algorithms).
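A minimal sketch of what such a float32 check could look like (the test name, shapes, and tolerance are assumptions, not the actual test added in this PR):

```python
import torch
import torch.nn.functional as F

def test_conv2d_noncontiguous_weight_and_bias_float32():
    # Use float32 so the MKL-DNN backend is exercised, rather than the
    # float64 path used by the existing autograd tests.
    x = torch.randn(2, 3, 10, 10, dtype=torch.float32)

    # Non-contiguous weight (transposed kernel dims) and bias (strided view).
    w = torch.randn(5, 3, 3, 3, dtype=torch.float32).transpose(2, 3)
    b = torch.randn(10, dtype=torch.float32)[::2]
    assert not w.is_contiguous() and not b.is_contiguous()

    # Results must agree with the same call on contiguous copies.
    out = F.conv2d(x, w, b, padding=1)
    ref = F.conv2d(x, w.contiguous(), b.contiguous(), padding=1)
    assert torch.allclose(out, ref, atol=1e-5)
```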

zdevito pushed a commit to zdevito/ATen that referenced this pull request on Jan 24, 2019:
Summary:
Fixes pytorch/pytorch#16018.

Backwards appears to be fine because the derivative is written in terms of mkldnn_convolution itself.
Pull Request resolved: pytorch/pytorch#16300

Differential Revision: D13797776

Pulled By: gchanan

fbshipit-source-id: 68a990b8a3c186412a99d176931314806c9ed7bf
@ezyang added the merged label on Jun 25, 2019

Successfully merging this pull request may close this issue:

CPU convolution (MKL-DNN) with non-contiguous weights is incorrect
