Labels
actionable, better-engineering (relatively self-contained tasks for better engineering contributors), enhancement (not as big of a feature, but technically not a bug; should be easy to fix), fixathon, module: autograd (related to torch.autograd and the autograd engine in general), triaged (this issue has been looked at by a team member and triaged and prioritized into an appropriate module)
Description
Even though `.data` is not documented, many users still use it, and it leads to many bugs in user code. So we should remove its use completely to prevent this.
To do this, do one of:
1. [preferred] Remove `.data`.
2. [if possible] Otherwise, replace it with either `.detach()` or wrap it in a `torch.no_grad()` block.
3. [otherwise] Leave it as is.
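A typical rewrite under option 2 can be sketched as follows. The snippet is illustrative only (the parameter-update pattern and all names are made up for this example, not taken from the issue):

```python
import torch

# A hypothetical leaf parameter and a dummy loss.
param = torch.ones(3, requires_grad=True)
loss = (param * 2.0).sum()
loss.backward()
lr = 0.1

# Old, discouraged pattern:
#     param.data -= lr * param.grad.data
# Preferred replacement: perform the in-place update under torch.no_grad(),
# so autograd neither records the update nor complains about mutating a leaf.
with torch.no_grad():
    param -= lr * param.grad
```

Without the `torch.no_grad()` block, the in-place subtraction on a leaf tensor that requires grad would raise a RuntimeError.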
For 2), generally use whichever is nicer. `torch.no_grad()` is often clearer, but it adds extra characters and usually goes on a separate line. One place where you may have to use `.detach()` is in the return of a function, i.e.:
```python
return x.detach()
```
returns a tensor without a history, but:
```python
with torch.no_grad():
    return x
```
will return a tensor with history.
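This difference can be checked directly. A minimal sketch (the function and variable names are made up for illustration): `torch.no_grad()` only affects operations *executed inside* the block, so returning an already-built tensor from inside it does not strip its history, while `.detach()` does.

```python
import torch

x = torch.ones(2, requires_grad=True)

def returns_without_history(t):
    y = t * 2
    return y.detach()      # detach() drops the graph: result has no grad_fn

def returns_with_history(t):
    y = t * 2
    with torch.no_grad():  # too late: y was already created outside the block
        return y           # y still carries its grad_fn

a = returns_without_history(x)
b = returns_with_history(x)
```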
For 3): this happens rarely, only if you explicitly don't want to share the version counter (e.g. because autograd would complain about the change but you happen to know it is okay, or because the version counter is being explicitly tested).
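The version-counter distinction mentioned above can be demonstrated with a small sketch (tensors and ops are illustrative, not from the issue): in-place changes through `.data` are invisible to autograd's version counter, so backward silently computes wrong gradients, whereas `.detach()` shares the version counter, so autograd detects the change and raises an error.

```python
import torch

# .data hides the in-place change from autograd:
x = torch.tensor([1.0, 2.0], requires_grad=True)
y = x.sigmoid()        # sigmoid saves its output for the backward pass
y.data.mul_(2.0)       # not tracked: the saved output is corrupted silently
y.sum().backward()     # no error, but the gradients are now wrong

# .detach() shares the version counter, so autograd catches the change:
x2 = torch.tensor([1.0, 2.0], requires_grad=True)
y2 = x2.sigmoid()
y2.detach().mul_(2.0)  # bumps the shared version counter
try:
    y2.sum().backward()
    caught = False
except RuntimeError:   # "... modified by an inplace operation"
    caught = True
```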
The expected steps are:
- benchmarks/
- docs/source/scripts/build_activation_images.py and docs/source/notes/extending.rst
- test/test_numba_integration.py, test/onnx/...
- torch/testing
- torch/utils/tensorboard/