Allow optimizers to skip nn.Parameters that have requires_grad=False #679

@alykhantejani

Description

I am trying to implement a Gaussian blur as a convolution layer in a network, where the weights do not change. I currently have this:

import torch.nn as nn
import torch.nn.functional as F

class GaussianBlur(nn.Module):
    def __init__(self, kernelSize=5, sigma=1):
        super(GaussianBlur, self).__init__()
        # Fixed Gaussian kernel; requires_grad=False marks it as frozen
        self.weight = nn.Parameter(self._calculate_weights(kernelSize, sigma),
                                   requires_grad=False)

    def forward(self, x):
        return F.conv2d(x, self.weight)

This then raises the error optimizing a parameter that doesn't require gradients from the optimizer.

It would be a nice feature to be able to exclude some parameters from the optimizer.
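For now, one workaround is to filter the frozen parameters out manually when constructing the optimizer. A minimal sketch (the model and learning rate are placeholders):

import torch.optim as optim

# Hand the optimizer only the parameters that still require gradients;
# frozen parameters (requires_grad=False) are simply left out.
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = optim.SGD(trainable_params, lr=0.01)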

I also tried changing self.weight to a Variable, but then the weights are not transferred when .cuda() is called, resulting in other errors.
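Another possible workaround (a sketch only; _calculate_weights is the same helper as above) is to register the kernel as a buffer instead of a parameter. Buffers are moved by .cuda()/.cpu() along with the module, but they are not returned by model.parameters(), so the optimizer never sees them:

import torch.nn as nn
import torch.nn.functional as F

class GaussianBlur(nn.Module):
    def __init__(self, kernelSize=5, sigma=1):
        super(GaussianBlur, self).__init__()
        # Buffers follow device moves (.cuda()/.cpu()) but are not parameters,
        # so they are excluded from model.parameters() and the optimizer.
        self.register_buffer('weight', self._calculate_weights(kernelSize, sigma))

    def forward(self, x):
        # The buffer is accessible as an ordinary attribute
        return F.conv2d(x, self.weight)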
