CUDNN_STATUS_BAD_PARAM in BatchNorm backward #1866

@alykhantejani

Description

import torch
from torch import nn
import torch.nn.functional as F
from torch.autograd import Variable

bn = nn.BatchNorm1d(10)
bn = bn.cuda()
input = Variable(torch.Tensor(4, 10).cuda())

o = bn(input)
o.sum().backward()

This throws a CUDNN_STATUS_BAD_PARAM on the backward call.

It's fine in v0.12 and at commit 5c74534, but broken at c573d53.

Still trying to track it down, but compiling takes ages.
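While the regression is being bisected, one possible workaround is to disable cuDNN globally so batch norm falls back to PyTorch's native kernels. This is only a sketch of that idea, not a confirmed fix for this issue; `torch.backends.cudnn.enabled` is a real PyTorch flag, and the snippet below uses CPU tensors so it runs anywhere (swap in `.cuda()` to match the original repro on a GPU box):

```python
import torch
from torch import nn

# Hypothetical workaround: turn off cuDNN so the BatchNorm backward
# does not go through the path that raises CUDNN_STATUS_BAD_PARAM.
torch.backends.cudnn.enabled = False

bn = nn.BatchNorm1d(10)
x = torch.randn(4, 10, requires_grad=True)  # add .cuda() to mirror the repro

out = bn(x)
out.sum().backward()

# Gradients flow once backward no longer hits the failing kernel.
print(x.grad.shape)  # torch.Size([4, 10])
```

Disabling cuDNN trades some GPU performance for the fallback implementation, so it is only worth keeping until the offending commit is identified.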
