Labels: module: docs, module: optimizer, triaged
Description
The docstring for the Optimizer class notes that when using the .step function with a closure, this closure should not change the parameter gradients:
pytorch/torch/optim/optimizer.py, lines 174 to 176 (at 4e93844):

```
.. note::
    Unless otherwise specified, this function should not modify the
    ``.grad`` field of the parameters.
```
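Taken at face value, the note suggests a closure that merely re-evaluates the loss and leaves the gradients alone. A minimal sketch of that literal reading (the model, loss function, and data are toy stand-ins, not from the docs):

```python
import torch

model = torch.nn.Linear(4, 1)    # toy stand-ins for illustration only
loss_fn = torch.nn.MSELoss()
inputs, targets = torch.randn(8, 4), torch.randn(8, 1)

def closure():
    # Re-evaluate the loss without writing to any parameter's .grad,
    # which is what the note literally seems to require.
    with torch.no_grad():
        return loss_fn(model(inputs), targets)
```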
However, every example I've found modifies the gradients inside the closure, e.g. this one from the torch.optim documentation on the same page:
```python
for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        return loss
    optimizer.step(closure)
```

Is the note about not changing the gradients in the closure outdated?
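For reference, the closure pattern matters most for optimizers like torch.optim.LBFGS, which may call the closure several times within a single step() call and reads the freshly computed gradients after each call, so a closure that never touches .grad would starve it. A minimal runnable sketch with a toy model and made-up data (names and hyperparameters are illustrative, not from the docs):

```python
import torch

model = torch.nn.Linear(4, 1)    # toy setup for illustration only
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.1)

inputs = torch.randn(16, 4)
targets = torch.randn(16, 1)

def closure():
    optimizer.zero_grad()                    # clear stale gradients
    loss = loss_fn(model(inputs), targets)
    loss.backward()                          # repopulate .grad for LBFGS
    return loss

# LBFGS (default max_iter=20) may invoke closure() repeatedly in one step.
loss = optimizer.step(closure)
print(loss.item())
```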
cc @vincentqb