
Inconsistent Documentation about Optimizer.step(closure) #38006


Description

@khdlr

The docstring for the Optimizer class notes that when using the .step function with a closure, this closure should not change the parameter gradients:

.. note::
    Unless otherwise specified, this function should not modify the
    ``.grad`` field of the parameters.

However, every example I've found seems to be doing exactly that, including the example in the torch.optim documentation on the same page:

for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        return loss
    optimizer.step(closure)

Is the note about not changing the gradients in the closure outdated?
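For reference, here is a complete, runnable version of that documented pattern; the toy model, loss function, and synthetic data below are placeholders chosen purely for illustration, and LBFGS is used because it is the optimizer in torch.optim whose step() actually requires a closure. The closure zeroes and then rewrites the .grad fields of the parameters on every call, which seems to be exactly what the note forbids.

import torch

# Toy model, loss, and synthetic data (placeholders for illustration only).
model = torch.nn.Linear(3, 1)
loss_fn = torch.nn.MSELoss()
dataset = [(torch.randn(8, 3), torch.randn(8, 1)) for _ in range(5)]

# LBFGS requires a closure to be passed to step().
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.1)

for input, target in dataset:
    def closure():
        optimizer.zero_grad()           # clears the .grad fields
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()                 # writes new .grad fields
        return loss
    optimizer.step(closure)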

cc @vincentqb

Metadata

Labels

module: docs - Related to our documentation, both in docs/ and docblocks
module: optimizer - Related to torch.optim
triaged - This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
