🐛 Bug
PyTorch's fallback path for convolutions with half-precision inputs/weights fails. First reported on the forums: https://discuss.pytorch.org/t/cuda-10-1-error-using-transposeconv2d-with-output-padding-1/51414?u=ptrblck
This is likely a regression introduced by #20994 (thanks for tracking this down, @ptrblck). It should be a must-fix for the 1.2 release.
In most cases convolutions go through cuDNN, so this bug stays hidden. However, when a transposed convolution has non-unit strides and output_padding, the PyTorch fallback path is used and it fails: forward is fine, but backward throws a runtime error.
To Reproduce
```python
import torch
import torch.nn as nn

deconv = nn.ConvTranspose2d(in_channels=256, out_channels=128,
                            kernel_size=3, stride=2, padding=1,
                            output_padding=1).cuda().half()
batch = torch.randn(8, 256, 16, 16).cuda().half()
output = deconv(batch)
gO = torch.rand_like(output)
output.backward(gO)
```
fails with
```
Traceback (most recent call last):
  File "convtransposefail_simple.py", line 10, in <module>
    output.backward(gO)
  File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 118, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: _cublasOpFromChar input should be 't', 'n' or 'c' but got `
```
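The message comes from a cuBLAS transpose-flag validation step: the backward pass ends up passing an invalid (likely uninitialized) character where a transpose flag is expected. As a rough pure-Python analogue of that check (function name and mapping are illustrative, not the actual ATen code):

```python
def op_from_char(trans: str) -> str:
    """Illustrative analogue of the _cublasOpFromChar validation:
    only 't' (transpose), 'n' (no transpose), or 'c' (conjugate
    transpose) are valid cuBLAS operation flags."""
    ops = {'t': 'CUBLAS_OP_T', 'n': 'CUBLAS_OP_N', 'c': 'CUBLAS_OP_C'}
    if trans not in ops:
        # Any other character (e.g. garbage from an uninitialized flag)
        # triggers the runtime error seen in the traceback above.
        raise RuntimeError(
            "_cublasOpFromChar input should be 't', 'n' or 'c' "
            "but got `%s`" % trans)
    return ops[trans]
```

This suggests the half-precision fallback is handing cuBLAS a flag that was never set, rather than a genuine shape or dtype mismatch.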
Environment
Fresh master built from source (CUDA 10.1, but that is most likely irrelevant).