
Segfault during backward when using PReLU #10723

@tommm994

Description


I'm trying to train a network that uses the PReLU module, but I get a segfault during the backward pass. Here's a piece of code that reproduces the bug:

import torch
import torch.nn as nn

# Target and input tensors, moved to the GPU
gt = torch.autograd.Variable(torch.rand(2, 3, 256, 256).cuda(async=True))
input = torch.autograd.Variable(torch.rand(2, 134, 256, 256).cuda())

lossL1 = torch.nn.L1Loss().cuda()

# PReLU as the first layer, followed by a 1x1 convolution
net = nn.Sequential(nn.PReLU(), nn.Conv2d(134, 3, kernel_size=1, stride=1, bias=False)).cuda()

output = net(input)
loss = lossL1(output, gt)
loss.backward()  # segfaults here

In this example, my network just consists of a PReLU followed by a simple convolution. Note that if I switch the order of the two modules, the segfault doesn't occur, so the bug only appears when the PReLU is the first layer.

Also note that if I don't use the GPU, the segfault doesn't occur either (see the sketch below).
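
For reference, here is a minimal sketch of the two variants that run cleanly for me (swapped module order on GPU, and the original order on CPU); shapes match the repro above.

import torch
import torch.nn as nn

input = torch.autograd.Variable(torch.rand(2, 134, 256, 256).cuda())
gt = torch.autograd.Variable(torch.rand(2, 3, 256, 256).cuda())
criterion = nn.L1Loss().cuda()

# Variant 1: swap the module order (Conv2d first, then PReLU) -- no segfault
net = nn.Sequential(
    nn.Conv2d(134, 3, kernel_size=1, stride=1, bias=False),
    nn.PReLU(),
).cuda()
criterion(net(input), gt).backward()

# Variant 2: keep the original order but stay on the CPU -- no segfault either
net_cpu = nn.Sequential(
    nn.PReLU(),
    nn.Conv2d(134, 3, kernel_size=1, stride=1, bias=False),
)
nn.L1Loss()(net_cpu(input.cpu()), gt.cpu()).backward()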

Note: I tried this with PyTorch versions 0.4.0 and 0.4.1.
