Description
📚 Documentation
The torch.autograd.enable_grad documentation says:
Enables gradient calculation inside a no_grad context. This has no effect outside of no_grad.
This implies:
torch.set_grad_enabled(False)
with torch.enable_grad():
    # Gradient tracking will NOT be enabled here.
    ...
torch.set_grad_enabled(True)
vs:
with torch.no_grad():
    with torch.enable_grad():
        # Gradient tracking IS enabled here.
        ...
However, the observed behaviour (1.0.1.post2) is:

import torch

x = torch.tensor([.1], requires_grad=True)

with torch.no_grad():
    with torch.enable_grad():
        y = x * 2
        print(y.requires_grad)  # True (as expected)

with torch.set_grad_enabled(False):
    y = x * 2
    print(y.requires_grad)  # False (as expected)

with torch.set_grad_enabled(False):
    with torch.enable_grad():
        y = x * 2
        print(y.requires_grad)  # True, but False expected from the doc quote
Note that the last example is not "inside a no_grad context", yet gradient tracking is still enabled.
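The same thing can be checked without building a tensor at all, via torch.is_grad_enabled(). The following is a minimal sketch (an addition, not part of the original report) using only the public set_grad_enabled / enable_grad / is_grad_enabled APIs:

import torch

# Sketch: enable_grad() flips the thread-local grad mode unconditionally,
# regardless of whether gradients were disabled via no_grad() or
# set_grad_enabled(False).
torch.set_grad_enabled(False)
print(torch.is_grad_enabled())      # False
with torch.enable_grad():
    print(torch.is_grad_enabled())  # True, re-enabled even outside no_grad
print(torch.is_grad_enabled())      # False again; the previous mode is restored
torch.set_grad_enabled(True)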
Other prior art: PyTorch set_grad_enabled(False) vs with no_grad():
I'm assuming the documentation is incorrect, and it should say simply:
Enables gradient calculation.
Would you accept a PR?
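For illustration, a hypothetical sketch of how the proposed wording could read as a docstring (the class skeleton below is illustrative only, not a verbatim patch against the PyTorch source):

# Hypothetical sketch of the proposed docstring change, applied to an
# illustrative class skeleton.
class enable_grad:
    r"""Context manager that enables gradient calculation.

    Enables gradient calculation, regardless of whether it was disabled
    via no_grad or set_grad_enabled(False).
    """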