
Conversation

@VitalyFedyunin
Contributor

Broken case:


```python
import torch

x = torch.randn(192, 16, 50).cuda()
x = x.permute(0, 2, 1).contiguous().permute(0, 2, 1)
m = torch.nn.Conv1d(
    in_channels=16,
    out_channels=32,
    kernel_size=2,
    bias=True,
).cuda()

m(x)
```
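The permute round-trip is what makes this input special: the tensor keeps its shape, but the channel dimension ends up with stride 1, a channels-last-like layout. A minimal CPU-only sketch of the stride change (stride values below are worked out for this exact example, not taken from the PR):

```python
import torch

x = torch.randn(192, 16, 50)  # contiguous: strides (16*50, 50, 1) = (800, 50, 1)
y = x.permute(0, 2, 1).contiguous().permute(0, 2, 1)

# Same shape, but the channel dimension now has stride 1,
# which resembles a channels-last layout for a 3d tensor.
print(x.stride())         # (800, 50, 1)
print(y.stride())         # (800, 1, 16)
print(y.is_contiguous())  # False
```

Note that `.contiguous()` materializes the permuted (192, 50, 16) layout, so permuting back yields a valid view that is no longer contiguous in the original order.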

This reverts commit 8160f39.
@VitalyFedyunin VitalyFedyunin changed the title Revert cudnn chnages #23861 Revert cudnn changes #23861 Nov 6, 2019
@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Nov 6, 2019

@facebook-github-bot facebook-github-bot left a comment


@VitalyFedyunin has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@jjsjann123
Collaborator


The implementation here forgot to accommodate the case where a 1d convolution is converted to a 2d convolution: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/Convolution.cpp#L567-L571
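The linked code unsqueezes the 1d input to a 4d (N, C, 1, L) tensor before dispatching to the 2d path. For the permuted tensor from the repro, the unsqueezed view's strides happen to match the channels-last pattern exactly, which is presumably why the NHWC branch gets taken. A sketch (stride values worked out for this example, not quoted from the PR):

```python
import torch

# Reconstruct the repro layout: shape (192, 16, 50), strides (800, 1, 16)
y = torch.randn(192, 50, 16).permute(0, 2, 1)
y4 = y.unsqueeze(2)  # the (N, C, 1, L) view used by the conv2d path

# For (N, C, H, W) = (192, 16, 1, 50), channels-last strides would be
# (H*W*C, 1, W*C, C) = (800, 1, 800, 16) -- exactly what we get:
print(y4.stride())  # (800, 1, 800, 16)
print(y4.is_contiguous(memory_format=torch.channels_last))  # True
```

So a tensor that the user never asked to be channels-last looks channels-last to the dispatch check once it is viewed as 4d.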

As a result, 1d conv is accidentally routed through NHWC (not the desired behavior); I'll fix that and submit another PR.

Meanwhile, cuDNN is actually supposed to handle this case normally instead of erroring out; I'm working with the team to fix it.
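Until a fix lands, a plausible user-side workaround is to restore a truly contiguous input before the convolution (sketched here on CPU for illustration; the `.contiguous()` call is the point, not part of the PR):

```python
import torch

x = torch.randn(192, 16, 50)
x = x.permute(0, 2, 1).contiguous().permute(0, 2, 1)
m = torch.nn.Conv1d(in_channels=16, out_channels=32, kernel_size=2, bias=True)

# .contiguous() restores standard strides (800, 50, 1),
# so the channels-last-like layout never reaches the conv dispatch.
out = m(x.contiguous())
print(out.shape)  # torch.Size([192, 32, 49]), since L_out = 50 - 2 + 1
```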

@facebook-github-bot
Contributor

@VitalyFedyunin merged this pull request in 9a9bb44.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Nov 7, 2019
Summary:
Broken case:

```python
x = torch.randn(192,16,50).cuda()
x = x.permute(0,2,1).contiguous().permute(0,2,1)
m = torch.nn.Conv1d(
       in_channels=16,
       out_channels=32,
       kernel_size=2,
       bias=True,
  ).cuda()

m(x)
```

This reverts commit 8160f390cf678b3b98e0c1f73bd289ee3c96afcb.
Pull Request resolved: pytorch/pytorch#29329

Differential Revision: D18357674

Pulled By: VitalyFedyunin

fbshipit-source-id: cdd7e77e8dcbfc5f2ab3df54eb53ccfbf703b245
