Conversation

ngimel (Collaborator) commented Apr 28, 2017

No description provided.

  cudaDeviceProp* prop = THCState_getCurrentDeviceProperties(state);
-  return CUDNN_VERSION >= 6000 && prop->major >= 5 && !transposed;
+  return ((CUDNN_VERSION >= 6021) || (CUDNN_VERSION >= 6000 && prop->major >= 5)) && !transposed;
 }
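
Reading the condition itself: cuDNN version 6021 (presumably 6.0.21) or later is allowed on any GPU, earlier 6.x builds require compute capability 5 (Maxwell) or newer, and the transposed dilated path stays disabled either way. A minimal standalone sketch of that gate, with illustrative names rather than the actual PyTorch internals:

#ifndef CUDNN_VERSION
#define CUDNN_VERSION 6021  // stand-in so the sketch compiles without cudnn.h
#endif

// Hypothetical helper mirroring the condition in the diff above: true when
// cuDNN may be used for a dilated convolution on this device.
bool cudnn_supports_dilated(int cc_major, bool transposed) {
  if (transposed)
    return false;                // transposed dilated path is still excluded
  if (CUDNN_VERSION >= 6021)
    return true;                 // newer cuDNN: any compute capability
  return CUDNN_VERSION >= 6000 && cc_major >= 5;  // older 6.x: Maxwell+ only
}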

ngimel (Collaborator, Author) commented Apr 28, 2017

No, it still fails if I remove !transposed (in fact, it then fails for 1d, 2d, and 3d, whereas before it used to fail only for 2d and 3d, because the 1d dilation parameters were not passed). I think there is something wrong with the parameters passed in the transposed dilated cuDNN case.
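
For context on what "dilation parameters were not passed" means at the cuDNN level: dilation only takes effect if it is forwarded into the convolution descriptor. A rough sketch using the real cudnnSetConvolutionNdDescriptor call, with the surrounding descriptor setup omitted:

#include <cudnn.h>

// Sketch: dilation reaches cuDNN only through the convolution descriptor.
// If dilationA is left at {1, 1}, cuDNN silently runs an ordinary
// (undilated) convolution rather than erroring out, which may be why the
// 1d case previously did not fail.
void set_dilated_conv2d(cudnnConvolutionDescriptor_t desc, int dilation) {
  int pad[2]       = {0, 0};
  int stride[2]    = {1, 1};
  int dilationA[2] = {dilation, dilation};
  cudnnSetConvolutionNdDescriptor(desc, /*arrayLength=*/2,
                                  pad, stride, dilationA,
                                  CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT);
}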

soumith merged commit 775481e into pytorch:master on Apr 28, 2017
soumith (Contributor) commented Apr 28, 2017

thanks!

ngimel deleted the kepler_dilated branch on Jun 5, 2017
